
FortifAI’s Nol8 FPGA Cuts AI Infrastructure Costs by Replacing 60,000 CPUs

Technology | By Sophie Babbage

FortifAI’s Nol8 FPGA appliance dramatically reduces AI infrastructure costs by replacing up to 60,000 CPUs, slashing annual operational expenses from millions to tens of thousands of dollars.

  • Nol8 FPGA matches 60,000 CPUs under AI-grade workloads
  • Annual hardware costs drop from A$4.5m to under A$50,000
  • Technology targets unstructured AI data processing bottlenecks
  • Raised $15m in a strategic placement to accelerate growth
  • Enterprise use cases include cybersecurity and financial fraud monitoring

Nol8 FPGA Delivers Unprecedented CPU Replacement

FortifAI Limited (ASX:FTI) has unveiled benchmark results showing its Nol8 FPGA appliance can replace the compute capacity of up to 60,000 conventional CPUs under demanding AI-grade workloads. This is not just an incremental efficiency gain but a structural shift in AI infrastructure economics. The testing, based on Google’s RE2 pattern-matching engine, demonstrates Nol8’s ability to handle high-complexity data classification tasks (6,000+ rules) at the 99th percentile load, a scenario typical of real-world enterprise AI environments.
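The workload shape behind that benchmark, continuously applying thousands of classification rules to streaming event data, can be sketched in miniature. The rules and events below are invented for illustration, and Python's standard-library `re` module stands in for Google's RE2 engine used in the actual test:

```python
import re

# Illustrative stand-in for the benchmark's workload shape: the real test
# used Google's RE2 engine with 6,000+ rules; a handful of made-up rules
# and stdlib `re` sketch the same CPU-bound classification loop.
RULES = {
    "ssh_login": re.compile(r"sshd.*Accepted (?:password|publickey)"),
    "http_error": re.compile(r'" (5\d\d) \d+'),
    "card_number": re.compile(r"\b\d{4}(?:[- ]\d{4}){3}\b"),
}

def classify(event: str) -> list[str]:
    """Return the names of every rule the event matches."""
    return [name for name, rx in RULES.items() if rx.search(event)]

events = [
    'Jan 1 sshd[42]: Accepted publickey for root',
    '10.0.0.1 - - "GET /api HTTP/1.1" 503 512',
]
print([classify(e) for e in events])
# → [['ssh_login'], ['http_error']]
```

On CPUs, the per-event cost of this loop grows with the rule count, which is the scalability ceiling Nol8's parallel hardware approach claims to remove.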

Dr. Alon Rashelbach, Nol8’s Co-Founder and CTO, emphasised the breakthrough: "The question is no longer 'how many CPUs do we need?' It is 'why are we still using CPUs at all for this class of workload?'" By freeing CPUs from data pipeline bottlenecks, Nol8 enables GPUs, which run the AI models themselves, to operate at full capacity, unlocking greater returns on existing AI investments.

Massive Cost Savings for AI Infrastructure

To put the scale into perspective, a typical mid-scale enterprise AI deployment running 10,000 CPUs for high-complexity workloads incurs an estimated annual hardware operating cost of A$4.5 million, with total operating expenses (including management) reaching between A$6 million and A$7.5 million. By contrast, a single Nol8 FPGA appliance delivers equivalent compute for under A$50,000 per year, including infrastructure management costs.
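Those quoted figures imply a simple back-of-envelope comparison; the per-CPU cost below is derived from the article's numbers, not stated by FortifAI:

```python
# Back-of-envelope comparison of the quoted figures (all in A$ per year,
# hardware costs only; the A$6m-7.5m all-in range is excluded).
CPUS = 10_000
CPU_HARDWARE_COST = 4_500_000   # quoted annual hardware cost for 10,000 CPUs
NOL8_COST = 50_000              # quoted all-in cost of one Nol8 appliance

per_cpu = CPU_HARDWARE_COST / CPUS          # implied cost per CPU per year
savings_ratio = CPU_HARDWARE_COST / NOL8_COST

print(f"~A${per_cpu:.0f} per CPU per year; Nol8 is ~{savings_ratio:.0f}x cheaper")
# → ~A$450 per CPU per year; Nol8 is ~90x cheaper
```

Against the A$6 million to A$7.5 million all-in operating range, the implied ratio would be larger still, on the order of 120x to 150x.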

This dramatic reduction in cost and footprint could reshape enterprise budgeting for AI infrastructure. The savings stem from Nol8’s hardware-accelerated architecture that processes data in parallel without performance degradation as workload complexity grows. This contrasts sharply with CPU-based software approaches, which face scalability ceilings.

Real-World Applications Driving Demand

The 60,000 CPU equivalence benchmark aligns with workloads in cybersecurity platforms like IBM QRadar and Microsoft Sentinel, where thousands of detection rules are applied continuously to petabytes of event data. Similarly, financial institutions running real-time fraud and compliance monitoring, and telecommunications carriers performing deep packet inspection at scale, operate CPU arrays of comparable size.

FortifAI’s Nol8 AI Data Plane is designed to handle these unstructured AI data flows, which are expected to constitute 90% of future data streams. The technology processes data-in-flight at millisecond latency, avoiding buffering and batching delays, and is backed by five years of academic research from the Technion (Israel Institute of Technology).

These results build on FortifAI’s earlier demonstration that Nol8’s throughput exceeds Google RE2’s by more than 200,000 times, showcasing the technology’s scalability and speed advantages in AI data processing environments (see: Nol8 Crushes Google RE2).

Capital Injection to Accelerate Commercialisation

FortifAI has secured a $15 million strategic placement at $0.715 per share, reflecting strong institutional demand and confidence in Nol8’s market potential. The funds will accelerate technology development, marketing, and business initiatives around Nol8, as well as support existing assets and general working capital.

Non-Executive Chairperson Shannon Robinson highlighted the placement as a validation of Nol8’s world-first technology and the scale of the opportunity ahead. The capital boost comes ahead of FortifAI’s plans to engage enterprise design partners in the coming quarter, aiming to translate these benchmark breakthroughs into commercial deployments (see: FortifAI Raises $15 Million).

The Emerging AI Data Plane Category

FortifAI positions Nol8 as the foundational AI Data Plane layer, a critical infrastructure tier that filters, classifies, and routes unstructured data before it reaches AI inference models. This layer addresses a growing bottleneck in AI systems, particularly as agentic AI and large language models drive exponential growth in data volume from 334 zettabytes in 2025 to an anticipated 19,267 zettabytes by 2035.

By combining neural network algorithms with FPGA hardware acceleration, Nol8 offers enterprises a high-speed, scalable solution that outperforms traditional CPU arrays by orders of magnitude. The company’s ongoing benchmarking and commercialisation efforts will be pivotal in validating and expanding this new technology category.

Bottom Line

FortifAI’s Nol8 FPGA technology challenges entrenched CPU-centric AI infrastructure models, but its commercial adoption and real-world cost savings remain to be proven as enterprise trials ramp up.

Open Questions

  • How quickly will enterprises adopt Nol8 to replace existing CPU arrays?
  • Can FortifAI sustain its technology lead as competitors innovate in AI infrastructure?
  • What are the risks and costs involved in integrating Nol8 into complex AI data pipelines?