Can Software Keep Up? FortifAI’s Hardware Breakthrough Challenges AI Data Limits
FortifAI Limited has unveiled benchmark results showing its Nol8 AI Data Plane technology outperforming Google's RE2 engine by a factor of more than 200,000 under demanding AI workloads, signalling a breakthrough in data processing scalability.
- Nol8 delivers constant 1,500 MB/s throughput across all complexity tiers
- Outperforms Google RE2 by up to 200,000x under AI-grade workloads
- FPGA hardware acceleration overcomes software scalability limits
- Enterprise-ready benchmarking engine expected by June 2026
- Ongoing tests to quantify infrastructure and cost efficiencies
A New Benchmark in AI Data Processing
FortifAI Limited (ASX:FTI) has announced a striking technological milestone with its Nol8 AI Data Plane technology, which has demonstrated a throughput advantage exceeding 200,000 times that of Google’s RE2 engine under AI-grade workloads. This leap was confirmed through rigorous benchmarking tests designed to simulate real-world enterprise AI data processing challenges.
Google’s RE2 engine, a widely adopted software standard for high-speed data pattern matching, has long been considered the ceiling for software-based scalability. Nol8’s breakthrough comes from leveraging FPGA (Field-Programmable Gate Array) hardware acceleration combined with neural-network-based algorithms, enabling it to maintain a steady 1,500 MB/s throughput regardless of workload complexity or load conditions.
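For context on what RE2 actually does, the snippet below shows the style of pattern matching such engines perform, using RE2's official Python binding (the google-re2 package, which mirrors Python's standard re module). The rule and input here are illustrative inventions, not part of FortifAI's benchmark suite.

```python
import re2  # RE2's official Python binding: pip install google-re2

# An illustrative PII-style classification rule, the kind of pattern a
# data-plane benchmark would exercise (not one of FortifAI's actual rules).
ssn_rule = re2.compile(r"\b\d{3}-\d{2}-\d{4}\b")

print(bool(ssn_rule.search("record: ssn=123-45-6789")))  # True: the rule fires
```

RE2's design guarantees linear-time matching per pattern, which is why it became the software standard; the scalability problem arises when thousands of such rules must each scan high-volume data streams.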
Breaking the Scalability Ceiling
Traditional CPU-based software solutions, including RE2, suffer significant performance degradation as data complexity and volume increase. Nol8’s FPGA architecture processes data in parallel at the hardware level, sidestepping these limitations. The benchmark results show that while RE2’s throughput collapses to near zero under extreme loads and complex rule sets, Nol8’s performance remains constant and predictable.
For example, under the highest complexity tier, representing AI-grade data classification with over 6,000 rules, RE2's throughput drops to a mere 0.007 MB/s while Nol8 sustains 1,500 MB/s: a ratio of roughly 214,000 (1,500 ÷ 0.007), which the company rounds to its headline 200,000x figure. This consistency across all tested tiers highlights Nol8's potential to redefine enterprise AI infrastructure.
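FortifAI has not published its benchmark harness, so the sketch below is only a rough illustration of the style of measurement involved: it times multi-rule regex scanning over a corpus and reports throughput in MB/s, using Python's built-in re engine as a stand-in for RE2. The rules, corpus, and tier sizes are invented for illustration.

```python
import re
import time

def throughput_mb_s(rules, corpus: str) -> float:
    """Scan the corpus once per rule; return corpus MB / elapsed seconds."""
    size_mb = len(corpus.encode()) / 1e6
    start = time.perf_counter()
    for rule in rules:
        rule.search(corpus)  # each rule scans the full corpus independently
    return size_mb / (time.perf_counter() - start)

corpus = "user=alice token=abc123 event=login " * 30_000  # ~1 MB synthetic logs
for n_rules in (10, 100, 1_000):  # illustrative complexity tiers
    rules = [re.compile(rf"token={i}[a-z]{{6}}") for i in range(n_rules)]
    print(f"{n_rules:>5} rules: {throughput_mb_s(rules, corpus):10.2f} MB/s")
```

Because each added rule means another pass over the data, software throughput in this naive setup falls roughly in proportion to the rule count, which is the scaling behaviour the FPGA's hardware-level parallelism is claimed to avoid.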
Implications for the AI Data Plane Market
The global datasphere is expected to balloon from 334 zettabytes in 2025 to over 19,000 zettabytes by 2035, driven largely by unstructured data from autonomous AI systems and large language models. Processing this data in real time requires a new infrastructure layer, the AI Data Plane, that filters, classifies, and routes data before it reaches AI models.
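As a quick sanity check on those projections, the cited figures imply roughly a 57-fold increase over the decade, or about 50% compound annual growth:

```python
# Implied growth from the cited projection: 334 ZB (2025) to 19,000 ZB (2035).
factor = 19_000 / 334            # ~57x total growth over ten years
cagr = factor ** (1 / 10) - 1    # annualised compound growth rate
print(f"{factor:.0f}x overall, ~{cagr:.0%} per year")  # 57x overall, ~50% per year
```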
Nol8’s technology addresses this critical bottleneck, offering enterprises a scalable, energy-efficient alternative to sprawling CPU arrays currently used to manage complex AI workloads. FortifAI’s co-founder and CTO, Alon Rashelbach, emphasised that this is not merely a software upgrade but an architectural breakthrough that could reshape AI infrastructure.
Looking Ahead
FortifAI plans to release an enterprise-ready benchmarking engine by June 2026, with further testing underway to translate throughput gains into tangible reductions in hardware footprint, computational load, and infrastructure dependency. These developments could significantly lower costs and energy consumption for organisations deploying AI at scale.
While the current results focus on throughput, the company’s ongoing work to quantify economic benefits will be closely watched by investors and industry players alike, as the AI sector demands ever more efficient and scalable data processing solutions.
Bottom Line?
FortifAI’s Nol8 technology sets a new standard in AI data processing, but the market will be watching closely to see how these gains translate into real-world cost and efficiency benefits.
Open Questions
- How will FortifAI’s technology integrate with existing AI infrastructure ecosystems?
- What are the expected cost savings and energy efficiencies from deploying Nol8 at scale?
- How might competitors respond to this hardware-accelerated breakthrough in AI data processing?