Well, hello there Sentinels, today we're going beyond the final test.
We’ve all been there, haven’t we? Or maybe not. You spend six hours carefully routing your logic, hit 'Generate Bitstream', and then wander off for a coffee, or perhaps a full roast dinner, while the tools do their thing. You return, load the image onto the board, and... nothing. Blank screen, static LED, or, if you’re really lucky, that infamous 'blink of death', which is the FPGA’s way of saying your timing is in tatters.
In the world of FPGA (Field-Programmable Gate Array) development, the 'Wait and Pray' approach is a fast track to burnout and a caffeine addiction. Most developers treat verification like a final exam, something you cram for at the end, when it’s far too late to fix your dodgy study habits. FPGAs are now the backbone of safety-critical industries, from avionics and nuclear control to self-driving cars and medical kit. Their flexibility, reconfigurability, and parallelism are all very clever, but they also bring a whole new set of verification headaches. Enter formal verification, the gold standard for making sure your FPGA-based system behaves itself under every possible scenario, not just the ones you thought to test. Below, I’ll give you a whistle-stop tour of the discipline, with a nod to the key academic work and what the practitioners are actually doing.
But before I get too carried away with the introductions, let’s see what the internet has coughed up for us this week.
In a World of AI Agents: Intent > Identity
AI-powered bots aren’t just logging in anymore. They’re mimicking real users, slipping past identity checks, and scaling attacks faster than ever.
Thousands of companies worldwide trust hCaptcha to protect their online services from automated threats while preserving user privacy.
Now is the time to take control of your security.
News from around the web!
Mathematical Rigour in Safety-Critical Systems
These days, digital design is a labyrinth of algorithms, and we’re all relying on hardware that’s both dazzlingly clever and, let’s be honest, a bit fragile. Whether it’s medical devices or the kit keeping national security ticking over, the stakes are sky-high. In places where failure is not an option (think airborne FPGA systems needing DO-254 certification, or the folks at Sandia National Labs who call their creations 'dishwashers of Armageddon'), the old 'best-effort' simulation just doesn’t cut it. Simulation only checks a handful of paths through your logic, but the number of possible routes in modern hardware is enough to make even the solar system blush. So, critical bugs slip through the cracks. Formal verification steps in here, taking us back to first principles and swapping out stimulus-driven checks for good old state-based mathematical proof. The result? 'Correctness by construction', so your blueprint actually makes it into silicon without any nasty surprises.
Switching from the old-school testbench to formal verification means engineers have to swap hats, from crafting clever test inputs to writing mathematical properties. In simulation, you’re basically poking your design with a stick and seeing what happens, but you’re limited by what you can dream up. Formal verification, or 'Property Checking' if you want to sound fancy, uses a Satisfiability (SAT) Solver to trawl through the entire input space at once. Instead of stepping through logic cycle by cycle, it hunts for any possible leap from a valid state to an invalid one. This means it catches whole categories of bugs that manual testing would never spot. The best bit? When it finds a violation, it spits out the shortest possible sequence to show you exactly where things went wrong, much easier to debug than those epic, spaghetti-like traces you get from traditional simulation.
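To make that contrast concrete, here’s a minimal sketch with a hypothetical arbiter exposing req and gnt signals. The fragments below would live in a testbench process and the module’s architecture respectively, so they’re illustrative rather than a complete, compilable design.

-- Simulation style: drive a handful of stimuli and check the one outcome
-- you thought of. Coverage ends where your imagination does.
stim : process
begin
  req <= '0'; wait for 10 ns;
  req <= '1'; wait for 10 ns;
  req <= '0'; wait for 10 ns;
  -- Hand-written check (assumes a combinational grant path).
  assert gnt = '0' report "grant issued with no request" severity error;
  wait;
end process;

-- Formal style: state the intent once and let the solver try every input
-- sequence. On failure it hands back the shortest trace that breaks it.
-- psl f_no_spurious_grant : assert always ((gnt = '1') -> (req = '1'));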
Formal verification really boils down to two main mathematical engines: Bounded Model Checking (BMC) and Induction. Which one you pick decides just how much confidence you can have in your design. BMC is where most people start, it kicks off at system reset and checks what happens over a set number of clock cycles. Brilliant for catching those power-on gremlins or handshake hiccups. But if you want to be absolutely sure, you’ll need to move up to Induction Proofs. Here, you start from any valid state and prove that every possible next state is also valid, so you’re covered for all time. The catch? Induction can throw up 'spurious counter-examples', the solver might start in a state that’s technically valid by your rules, but impossible to reach in real life. So, mastering induction is all about writing a strong enough set of properties to keep the solver on the straight and narrow.
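Here’s what that looks like in miniature, using a made-up wrapping counter in VHDL-2008 with the PSL written as comments. The entity, signal names, and properties are all hypothetical, just to show the shape of a strengthening invariant.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity wrap_counter is
  generic (MAX_COUNT : natural := 9);
  port (
    clk  : in  std_logic;
    rst  : in  std_logic;
    tick : out std_logic);
end entity;

architecture rtl of wrap_counter is
  signal count : unsigned(3 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' or count = MAX_COUNT then
        count <= (others => '0');
      else
        count <= count + 1;
      end if;
    end if;
  end process;

  tick <= '1' when count = MAX_COUNT else '0';

  -- psl default clock is rising_edge(clk);

  -- Toy property of interest: the 4-bit register never saturates at 15.
  -- On its own this can fail induction, because the solver is free to
  -- start the inductive step at the unreachable value 14 and step into 15.
  -- psl f_never_saturates : assert always (count /= 15);

  -- Strengthening invariant that rules those states out: the counter never
  -- exceeds MAX_COUNT, so the spurious starting point disappears.
  -- psl f_count_in_range : assert always (count <= MAX_COUNT);
end architecture;

BMC from reset will happily pass the first property on its own; it’s only when you ask for the unbounded proof that the second one earns its keep.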
To connect the dots between your high-level design dreams and the gritty details the SAT solver cares about, we use the Property Specification Language (PSL). PSL lets you bake your formal intent right into the verification environment, using three main ingredients: assert, assume, and cover. 'Assume' sets the ground rules for your environment, how the inputs are allowed to behave. 'Assert' is your module’s promise about its outputs and internal state. PSL’s temporal logic is razor-sharp, especially when it comes to implication operators. The |-> operator means the postcondition must be true in the same cycle as the precondition, while |=> means it has to be true in the next cycle. For example, in an AXI ready/valid handshake, you might assert that if valid is high and ready is low, the data must stay put in the next cycle. By pulling in internal signals like FIFO counts or protocol flags, PSL makes sure every transition inside your design does exactly what you intended.
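As a sketch of those operators in action, here are the sort of directives you might drop into the architecture of a stream buffer, with made-up s_*/m_* handshake signals and an internal buffer_full flag, and assuming your tool supports the PSL stable() built-in (prev() works similarly).

-- psl default clock is rising_edge(clk);

-- Ground rules for the upstream producer driving our input port:
-- while we stall it, it must hold its beat on the wire.
-- psl f_s_valid_held  : assume always ((s_valid = '1' and s_ready = '0') |=> (s_valid = '1'));
-- psl f_s_data_stable : assume always ((s_valid = '1' and s_ready = '0') |=> stable(s_data));

-- Our module's promise on its output port: a stalled beat keeps its data
-- in the next cycle (non-overlapping implication, checked one cycle later).
-- psl f_m_data_stable : assert always ((m_valid = '1' and m_ready = '0') |=> stable(m_data));

-- Same-cycle (overlapping) implication: we only claim valid when the
-- internal buffer actually holds something.
-- psl f_m_valid_backed : assert always ((m_valid = '1') |-> (buffer_full = '1'));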
Once upon a time, formal verification was the preserve of pricey, proprietary 'black-box' tools. These days, the savvy Senior Systems Architect is more likely to reach for an open-source stack, because who doesn’t like a bit of 'full observability'? In high-security settings, tools like JasperGold are a bit of a gamble, since you can’t peek under the bonnet. With open-source workflows, think GHDL for VHDL, Yosys for synthesis, and SymbiYosys as the front-end, you can audit every line from RTL right down to the SMT models. In practice, it’s all wrangled with a .sby script, where you set up your task (BMC or cover), pick your engine (like smtbmc), and specify your VHDL standard, usually --std=08 for the modern stuff. Just remember, while this setup is brilliant for unit testing modules, the state-space explosion means formal verification gets a bit unwieldy for big integrations. The smart move is to use formal methods for the tricky bits, and stick with traditional simulation for the top-level glue.
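For the avoidance of doubt, a minimal .sby might look something like this. The file name fifo.vhd, top entity fifo, and depths are hypothetical, and the script assumes Yosys has been built with the GHDL plugin.

[tasks]
bmc
prove
cover

[options]
bmc:   mode bmc
prove: mode prove
cover: mode cover
depth 30

[engines]
smtbmc

[script]
# -fpsl lets GHDL pick up PSL written in comments
ghdl --std=08 -fpsl fifo.vhd -e fifo
prep -top fifo

[files]
fifo.vhd

Running sby -f fifo.sby bmc gives you the bounded check, the prove task asks the same engine to close the induction, and cover hunts for witnesses to your cover statements.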
A 'pass' in formal verification is only as good as the properties you’ve written. To guard against missing something, the pros use 'mutation testing', that’s where you sneak bugs into your VHDL on purpose (like commenting out the bit that deasserts a valid signal) just to check the formal tool is paying attention. If it still gives you a gold star, your property set needs work. When a failure does pop up, using clear labels like f_after_reset_valid or f_fifo_ordering_in means you can spot exactly which bit of intent has gone astray. Cover statements are also handy for proving things are actually possible, like making sure a FIFO can go from full to empty. The solver will even give you the shortest possible waveform to hit the cover, which is a great way to poke around your design’s reachable state space and make sure your module does exactly what it says on the tin.
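A cover of that shape might look like the following, again with hypothetical full and empty flags and comment-embedded PSL.

-- psl default clock is rising_edge(clk);

-- Reachability check: there exists a trace in which the FIFO fills up and
-- later drains completely. The solver returns the shortest such trace,
-- which doubles as a handy waveform for exploring the design.
-- psl f_cover_full_to_empty : cover {full = '1'; (empty = '0')[*]; empty = '1'};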
Moving to a 'correct by construction' mindset is really the only sensible way forward for safety-critical systems. By weaving formal verification into the development process, we swap out ad-hoc testing for something much more robust. This approach not only saves money in the long run by catching those 'impossible' bugs early, but it also gives you the mathematical proof you need for the tough certification crowd. As we sharpen our formal intent from the first spec to the final implementation, we edge closer to a world where our hardware isn’t just 'tested', but actually proven. And with the 'dishwashers of Armageddon' running the show, mathematical rigour isn’t just a nice-to-have, it’s absolutely essential.
Summary
This study introduces a customised FPGA hardware architecture that accelerates 4D grid-based Hamilton-Jacobi reachability analysis by up to 142 times relative to standard CPU implementations for autonomous systems. Achieving a deterministic computation frequency of 5Hz, the proposed design enables real-time formal verification and ensures safety for robotic vehicles operating in dynamic environments.
"Our design can overcome the complex data access pattern while taking advantage of the parallel nature of the HJ PDE computation."
Background
Autonomous systems, such as self-driving cars and rescue robots, require guaranteed safety to achieve widespread adoption in human-centric environments. Hamilton-Jacobi (HJ) reachability analysis serves as a robust formal verification method by calculating Backward Reachable Tubes (BRT) to identify unsafe states. However, this method is limited by the "curse of dimensionality," where computational demands increase exponentially with the number of state variables. While multi-core CPUs can manage 3D systems, 4D or 5D systems often require minutes or hours to compute.

The authors contend that customised hardware accelerators can complement existing decomposition methods to enable real-time safety guarantees for higher-dimensional systems. This transition is driven by the end of Moore's law, which limits the ability of general-purpose CPUs to meet the growing performance requirements of safety-critical applications. The study focuses on accelerating the Hamilton-Jacobi Partial Differential Equation (HJ PDE) solver by leveraging the parallelism inherent in FPGA technology. By addressing memory access challenges and employing pipelined processing elements, the proposed design aims to solve the HJ PDE at frequencies appropriate for live deployment.
"This method is very good at handling non-linear system dynamics with disturbances and flexible set representations."
Use-case
The primary use case involves real-time obstacle avoidance for an autonomous RC car operating within a 4m by 6m environment containing dynamic obstacles. The system employs a 4D extended Dubins car model to represent the vehicle's state, including position, speed, and steering angle. As a user attempts to steer the car toward orange cones that serve as obstacles, the FPGA-based verifier computes safe boundaries in real time. When the car nears the boundary of the Backward Reachable Tube (the set of states the system must stay out of in order to avoid the obstacles), optimal control is automatically applied to override the user and prevent a collision.

This technology is tailored for safety-critical systems that require fixed and deterministic computational latency. In contrast to CPU-based implementations, which exhibit variable performance, the FPGA achieves a consistent 0.176-second latency, essential for maintaining a stable 5Hz update frequency. The design accommodates various system dynamics and can be scaled to support different grid configurations or larger environments. This use case demonstrates the shift of formal verification from a pre-computed offline process to an active, online safety monitor for mobile robots.
"We demonstrate obstacle avoidance with a robot car in a changing environment."
Conclusion
The paper concludes that the FPGA design achieves a 16-fold speedup over optimised CPU implementations and is essential for ensuring deterministic safety in autonomous vehicles. Future work includes extending this hardware acceleration approach to systems with dimensions higher than the 4D model evaluated. Furthermore, since the current design does not fully utilise the FPGA's resources, it can be scaled to support larger grid sizes for more complex environments. These advancements aim to further extend the capabilities of real-time formal verification for increasingly sophisticated robotic platforms.
"Our design approach presented here can be applied to system dynamics and potentially higher dimensional systems."
The report can be found here.


