Virtual computer chip tests expose flaws and protect against hackers
Testing new computer chips for security and reliability often takes longer than designing them. A new method that models chips virtually and tests them with tools conventionally used on software, rather than hardware, could slash development time.
Current hardware testing either probes a chip design with random inputs, which can easily miss problems, or formally verifies every possible input and output, which quickly becomes infeasible for all but the simplest designs. Either way, it can take months.
But a single flaw in a piece of hardware can make computers unreliable or vulnerable to hacks and, unlike a software flaw, cannot be fixed with an update after shipping.
A team of researchers at the University of Michigan, Google and Virginia Tech has sped up the testing process by simulating computer chips and analysing them with advanced software testing tools. The chip designs were translated into executable code, with an additional layer of software on top to make them operate as they would in real-world use.
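In practice, a hardware design written in a language such as Verilog can be translated into an ordinary program: tools such as Verilator emit a C++ class whose internal state advances each time a simulated clock is toggled. The following is a minimal sketch of what driving such a model looks like, assuming a hypothetical Vchip class with invented port names; it is not the team's actual model.

```cpp
#include <cstdint>
#include <memory>

// "Vchip" is a hypothetical stand-in for a class generated from a
// chip design; translation tools such as Verilator emit something similar.
struct Vchip {
    uint8_t clk = 0, rst_n = 0;          // clock and active-low reset
    uint8_t in_valid = 0, in_data = 0;   // example input port
    uint8_t out_valid = 0, out_data = 0; // example output port
    void eval() { /* generated logic would update outputs here */ }
};

// Advance the simulated chip by one clock cycle.
void tick(Vchip& dut) {
    dut.clk = 0; dut.eval();
    dut.clk = 1; dut.eval();             // rising edge: sequential logic updates
}

int main() {
    auto dut = std::make_unique<Vchip>();
    dut->rst_n = 0;                      // hold the design in reset
    for (int i = 0; i < 2; ++i) tick(*dut);
    dut->rst_n = 1;                      // release reset, then drive an input
    dut->in_valid = 1;
    dut->in_data = 0x42;
    tick(*dut);
    return dut->out_valid ? 0 : 1;       // inspect the model's response
}
```

Once the design is ordinary software, it can be compiled, instrumented and run with the same tooling used to test any other program.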
Testing chips virtually lets engineers use an approach called fuzzing, which feeds the simulated design a barrage of inputs and watches for unexpected results or crashes that can then be examined and fixed. The inputs that trigger problems are mutated and adapted to probe for similar flaws in the rest of the chip, which covers ground more productively than purely random testing.
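In outline, a fuzzer keeps a corpus of interesting inputs, mutates them, and keeps any mutant that either crashes the model or reaches logic that has not been exercised before. The loop below is an illustrative sketch of that idea, with the simulated chip and the coverage feedback replaced by toy stubs; real fuzzers such as AFL instrument the compiled model to get genuine coverage signals.

```cpp
#include <cstdint>
#include <cstdio>
#include <random>
#include <set>
#include <vector>

using Input = std::vector<uint8_t>;

// Stand-in for running the simulated chip: "crashes" on a magic byte.
bool runOnModel(const Input& in) {
    for (uint8_t b : in)
        if (b == 0xDE) return true;      // pretend a design assertion fired
    return false;
}

// Stand-in for coverage feedback: treats each new first byte as new coverage.
bool sawNewCoverage(const Input& in, std::set<uint8_t>& seen) {
    return !in.empty() && seen.insert(in[0]).second;
}

int main() {
    std::mt19937 rng(0);
    std::vector<Input> corpus = {{0x00, 0x00, 0x00, 0x00}};
    std::set<uint8_t> seen;

    for (int i = 0; i < 100000; ++i) {
        // Pick a known-interesting input and mutate one byte of it.
        Input candidate = corpus[rng() % corpus.size()];
        candidate[rng() % candidate.size()] = uint8_t(rng());

        if (runOnModel(candidate)) {
            std::puts("crash found: saved for engineers to examine");
            break;
        }
        if (sawNewCoverage(candidate, seen))
            corpus.push_back(std::move(candidate)); // probe nearby behaviour
    }
}
```

The coverage feedback is what separates fuzzing from purely random testing: each kept input becomes a stepping stone deeper into the design.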
One problem the team had to overcome is that chips and code work in very different ways. Chips churn through a constant stream of inputs and outputs, carrying out thousands of simple operations to achieve a larger task, whereas software tests often involve submitting a single input, such as a piece of text or a file, and observing the output. The team had to adapt software fuzzers to run over time rather than fire off a single input and wait for a response.
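One way to bridge that gap is to treat the fuzzer's single flat byte buffer as a script of timed operations, decoding a couple of bytes at a time into an action applied on each simulated clock cycle. The sketch below uses libFuzzer's standard entry point; the two-byte opcode encoding and the stub model are invented for illustration rather than taken from the team's harness.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy stubs standing in for a translated hardware model (hypothetical).
static uint8_t g_state = 0;
static void resetModel()          { g_state = 0; }
static void driveInput(uint8_t d) { g_state ^= d; }             // input port
static void tick()                { assert(g_state != 0xFF); }  // a "design assertion"

// libFuzzer's standard entry point: it hands the harness one flat
// byte buffer per run and mutates buffers that find new coverage.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    resetModel();
    // Decode the buffer two bytes at a time: an opcode, then an operand.
    for (size_t i = 0; i + 1 < size; i += 2) {
        switch (data[i] % 3) {
            case 0: driveInput(data[i + 1]); break;  // write a value
            case 1: break;                           // idle this cycle
            case 2: /* read and check an output, omitted */ break;
        }
        tick();   // every operation consumes one simulated clock cycle
    }
    return 0;     // crashes and failed assertions are caught by the fuzzer
}
```

Because a failure now depends on a whole sequence of operations rather than one value, the fuzzer effectively explores the chip's behaviour over time.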
“We had to trick these software fuzzers into interpreting inputs differently than they do in the software space. In a sense, what we were doing is we were taking inputs in a single dimension and extracting multiple dimensions,” says Timothy Trippel at the University of Michigan.
This method reduced the time taken to test a chip design by two orders of magnitude. A chip that would usually take 100 days to test can be analysed in a single day.
The team later took four sections of an OpenTitan chip designed by Google and tested them using their fuzzing approach. Within an hour, it had covered 88 per cent of the lines of code making up the software models of three of the sections, and 65 per cent of the fourth.
Faster hardware testing could shorten development cycles, bringing the next generation of chips to consumers sooner and in a more reliable and secure state, says the team.
Rob Hierons at the University of Sheffield, UK, agrees that testing hardware this way before production would be faster and cheaper, but cautions that no approach can guarantee a design is free of flaws.
“Fuzzing adds some direction to random [testing traditionally used], but in the end it will often be limited,” he says. “There’ll be parts of the space that are incredibly hard to find, and you’d have to be running your fuzzer for a vast amount of time to reach them.”
Reference: arxiv.org/abs/2102.02308