In Lockstep: Kernel and Verification

September 21st, 2017

“And e’er the twain shall meet.”

Say hello to our Formal Methods engineers.

“I need to talk to you about the kernel’s debug API code,” Dr. Rafal Kolanski said, approaching me with a sense of urgency. “We have a deadline and we won’t have the time to verify the debug-API for ARM – and we’ll barely be able to deliver the ARM-hyp proofs on time even if we skip it.”

“How can I help?” I asked Rafal lazily.

“Can you hide the debug-register context switching code in the verified build of the kernel?” Rafal asked, brows furrowed ever so slightly.

I sat up and paid attention. Our proof engineers often request small (or large) post-hoc tweaks to the seL4 kernel codebase in order to make it verifiable. These tweaks often introduce eccentricities into the code – but each has a reason behind it. My investigation was about to begin. I found a way to hide this feature without compromising its functionality – and so the kernel was tweaked; another milestone in the successful collaboration between the kernel and verification teams.

I want to invite our readers to see how we co-ordinate things internally – let me unveil the fabric that entwines our Kernel and Verification engineering teams.

Towards a disciplined approach.

It occurred to me that we could open a window into how the Kernel team and the Verification team here at Trustworthy Systems actually interact. It’s a push-and-pull cycle: we write code and verification takes a look through it; after digesting it carefully, they come back and tell us what modifications would make it easier for them to reason about the code. Usually that’s the end of it, unless the particular changes requested by the Verification team would introduce unacceptable performance overheads. Very seldom does the kernel team push back against verification.

We don’t yet have an SDLC (Software Development LifeCycle: a formal approach to software development), so to speak; we’ve just roughly developed an in-house rhythm between the two teams. Generally, it starts like this: verification sees some code we wrote, and they puzzle over it for a few hours. Then they send a diplomatic mission to query the kernel team about the code. The analogy is surprisingly apt in practice, because it usually looks just like a delegation of dignitaries crossing the floor.

Figure 1: Three members of the verification team asking Anna Lyons, the mind behind the MCS/Realtime (Mixed-Criticality System) kernel extensions, questions about its design.

After a round of verbose inquiry, the verification mission will return to their mothership (usually Gerwin’s office). A short while later, they will have brainstormed a series of “invariants” to begin with. These invariants are properties of the kernel state that must be maintained across kernel invocations for correct execution. They are proven using Hoare logic: each invariant appears as a precondition that holds before a step of execution and as a postcondition that is re-established after it.
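Schematically – this is a notational sketch, not the teams’ actual Isabelle/HOL statements – an invariant I is threaded through each step as both precondition and postcondition:

```latex
% Hoare triple: if P holds before C runs and C terminates, then Q holds after.
\{P\}\; C \;\{Q\}

% A kernel invariant I is assumed before each invocation and must be
% re-established after it, alongside the step's own pre/postconditions:
\{I \land P\}\;\; \mathit{kernel\_invocation} \;\;\{I \land Q\}
```

Because I appears on both sides of every triple, proving each invocation preserves it is enough to show it holds across the kernel’s whole lifetime.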

They will then launch a second diplomatic mission. During this round of talks, they will explain the invariants they have come up with, and ask us if those invariants actually describe the code we’ve written.

Figure 2: Often, as we hammer things out, new voices join the discussion on-the-fly – these discussions are very robust and dynamic.

Finally, over the next few days, we will negotiate over our shared understanding of the code, and then take pointers from verification on ways in which we can make the code more amenable to the application of formal methods. Depending on how complex the code in question is, this process may be repeated over the course of a few weeks.

The end result? A mutual understanding, and later, a proof.

What are the concrete outcomes of the dialogue between the teams?

I asked Rafal to tell me, as a senior proof engineer, what he thought the useful outcomes from the teams’ discussions were. These were his insights:

  • The formal methods team is able to come to us and interface with us when they identify an actual problem in the code. If the process of formally verifying the code didn’t help to identify actual bugs and errors in the code being verified, there wouldn’t be much use for it.
  • The formal methods team is forced to think about the meaning of the code, its side-effects and what the semantic implications of a particular design decision are. For that reason, they often notice subtle ramifications behind the behaviour of the code which the kernel team might not have explicitly considered. In asking us to explain the meaning of a piece of code, they force us to also make such tacit decisions explicit, and improve the robustness of the design (and the implementation!).
  • The result of the interactions between kernel and verification is that the verification team derives the properties that the kernel team intended (or did not intend) to introduce, and they can use these to model the code in the proofs.

As a final comment, making the meaning of the code explicit tends to gradually refine the code itself by making the kernel team look through the code not just from a top down or bottom up functionality perspective, but also as an overall composite system.

What is the state of Proof Engineering?

Drs. Gerwin Klein and June Andronick, our champion verification minds, have an interest in structuring our interactions into some form of SDLC – a formal discipline that can be replicated by other companies scalably to encourage the adoption of formal methods.

Would the discipline that arises from this aspiration be of a “Waterfall” nature? Or an “iterative” nature? Given the fact that verification needs multiple iterations of negotiation with the kernel engineers, a highly responsive, interactive and iterative process seems to recommend itself.

Has the development time for proofs improved over time? Can formal verification be made cost effective?

Yes – we have strong reason to believe so. The gradually increased mutual understanding between the two teams has established a rapport which has gone a long way towards reducing the time it takes to develop proofs. I turned to Joel Beeren, one of our lead Proof Engineers, to get a feel for which factors have enhanced our proof engineers’ productivity:

  • The recent proofs have been mostly building on the already-existing proof-base. So proof engineering does scale and seems to lend itself to modularization and parameterization.
  • There was accrued knowledge in the verification team about the kernel concepts they had to reason about. In contrast, a large amount of the work going on right now internally on verifying concurrency is about going back and forth between the kernel and verification teams to define the problem, because concurrency is a great challenge for formal methods.
  • The kernel team also had a much better understanding of what rules of thumb we could stick to in order to ensure that the code was more easily verifiable.
  • The verification team has developed its own in-house tools and frameworks which have automated a lot of tasks and increased productivity. One such framework is AutoCorres, which automatically abstracts C code into higher-level functional specifications, letting the verification team reason about C code much as they would about a functional language like Haskell.
  • In various ways, the development discipline within the verification team has itself also matured.

We complete each other.

A lot of the story between our teams has been about learning the expectations that we have of each other and what things make our respective jobs easier.

For the kernel development team, writing code that is amenable to verification requires some understanding of what the verification team is likely to be uncomfortable with. The verification team, in turn, has to keep in mind that even if an amendment would make the code easier to reason about and verify, it must not unreasonably compromise the performance of the kernel.

I asked Joel whether, in his experience, the strides we’ve made came more from the verification team coming to understand the kernel team’s needs better, or more from the kernel team coming to understand the verification team’s requirements better.

He feels that the understanding that both of our teams each have of the other is essentially of equal weight. I guess it’s like push and pull.

Stay tuned for more in this series:

Formal methods are the future of high-assurance systems. You can consider this your invitation to see how we accomplish our verification goals, and maybe you can leave a comment or two: do you have any insights on what kind of SDLC could arise that integrates traditional software development and formal methods?

We like answering the tough questions. Come challenge us on our mailing list or get in touch another way. Keep an eye out for more posts in this series, In Lockstep, on this blog, and check out some of our other posts too. G’day!