Galois Internship Round 2: A Second Summer Intern Experience

We interact with networked computers many times every day, often without realizing it. The configurations behind those networks are hard to get right, so how do network operators know whether theirs are? Is the system vulnerable to an attack that could expose our private data? We are accustomed to networks "just working" the way we expect in everyday use, but this is not the same thing as saying that the network behaves as expected for all possible packets. Is testing enough? Is there some vulnerability in our network's policies that would give an attacker a way in? For reliable and secure networking, we want to investigate these questions with full mathematical rigor.

I have been working on a project with Galois on this topic for the last few years. As a full-time PhD student at Cornell, I was excited to return to Galois this summer to continue and extend this work in several exciting directions. Our project, 5STARS, seeks to offer new tools for network administrators to understand the properties of their networks, and to ensure security and availability using formal methods as they modify their policies. In this post, I'll describe some of the work I got to do while at Galois, and the fantastic experience I had during my time in the Portland office.

The Work I Did: Practical Verification of Network Policies

Immediately upon arriving at Galois in May, I dove right into helping the 5STARS team prepare for a cybersecurity exercise. The exercise involved a simulation of a hospital network where a Red Team was trying to exploit security vulnerabilities and take control of devices on the network. This was an engaging problem for me because it was realistic: nearly every small enterprise (hospital or otherwise) has a network vulnerable to the sorts of attacks explored in this exercise—and it makes a compelling case for the broad usefulness of approaches like 5STARS.

In this scenario, the Red Team was trying to infiltrate an anesthesia device on the network and conduct a cyber-physical attack (alter dosage, etc.). Participants in the exercise were invited to show off active defense technologies which could be used to stop the attackers in their tracks during the live event. Galois did not participate in the live exercise as a Red Team or active defender; instead, we presented a demo showing that the 5STARS network analysis tooling could reveal the networking vulnerabilities that were left open to exploitation. Briefly, we wanted to show that if 5STARS were used in advance, the attack vectors used by the Red Team could all be patched and the attack in the exercise would no longer be possible.

This task was directly in the wheelhouse of what the 5STARS technology is meant to do: 5STARS is an analysis engine for proving the existence or nonexistence of possible paths that packets can take. Its strength is its generality: the system uses symbolic techniques to reason about all possible input packets and all possible packet paths that can result. This can provide a much greater assurance of security to operators than a standard suite of tests can hope to provide. While this core analysis software was already built (which I worked on in my previous internship!), we still had to build the simulation of the exercise hospital network and build a new user interface that showed off the power of the tool.
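To give a flavor of the difference between testing individual packets and reasoning symbolically about all of them, here is a minimal sketch in Python. It is not the 5STARS engine (which works over richer packet models); the network topology, port numbers, and the `reachable` function are all hypothetical, invented for illustration. Packets are abstracted down to a single header field (destination port), and a packet *set* is propagated through forwarding rules to a fixed point, so the answer covers every packet at once rather than one test case at a time.

```python
# Hypothetical sketch (NOT the actual 5STARS engine): symbolic reachability
# over a tiny made-up network. A "symbolic packet set" is just a frozenset
# of possible destination ports; real tools use far richer representations.

from typing import Dict, List, Tuple

PacketSet = frozenset

# Per-node forwarding rules: (set of matching dst ports, next hop).
POLICY: Dict[str, List[Tuple[PacketSet, str]]] = {
    "edge": [(frozenset({80, 443}), "core"), (frozenset({22}), "core")],
    "core": [(frozenset({80, 443}), "web"), (frozenset({22}), "anesthesia")],
}

def reachable(src: str, pkts: PacketSet) -> Dict[str, PacketSet]:
    """For every node, compute the set of packets that can reach it."""
    seen: Dict[str, PacketSet] = {src: pkts}
    work = [(src, pkts)]
    while work:
        node, ps = work.pop()
        for match, nxt in POLICY.get(node, []):
            out = ps & match                           # packets this rule forwards
            new = out - seen.get(nxt, frozenset())     # only genuinely new packets
            if new:
                seen[nxt] = seen.get(nxt, frozenset()) | new
                work.append((nxt, seen[nxt]))
    return seen

# Inject ALL packets of interest at the edge: which ones reach the device?
result = reachable("edge", frozenset({22, 80, 443, 8080}))
print(result.get("anesthesia"))   # frozenset({22}): port 22 traffic gets through
```

Because the analysis covers the whole packet set, a vulnerable path (here, port 22 reaching the anesthesia device) is found even if no test case happened to probe it.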

In the weeks immediately leading up to the event, I worked closely with Galois Principal Scientist Cole Schlesinger and several research engineers (Ryan Peroutka and Valentin Robert) to clear both of these hurdles in time. It was a lot of fun, and it was exciting to see the final product: our work showed that all of the security vulnerabilities at the network layer could be detected in advance by 5STARS! In other words, if the hospital had been running our tool already before the Red Team arrived, they could have easily identified and corrected the issues and stopped the attack before it began.

Learning Network Policies from Example Traces

For the rest of the summer, I shifted focus to research on learning models of network components from observations. For background, a 5STARS verification query takes a formal model of a network and compares it against some specification to check for adherence or to identify violations. This method works great when we have a means to build that formal model, for example by analyzing all of the routing configurations (as we did in the challenge exercise). In many instances, however, would-be users of 5STARS might not have access to a complete formal specification of the devices they are using, or they might not be able to analyze their behavior directly.

So our research question is: Can we design an algorithm to “learn” a precise model of a device (or even a whole network) by sending experiment packets and observing how they are modified and forwarded? The idea is that the model we learn could then be used to check security and availability properties by the same core 5STARS tool.
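As a toy illustration of the "learn by probing" idea, here is a hedged Python sketch. It is far simpler than the symbolic-automata learning algorithm in the paper: the network is treated as a black box we can only query, and the `learn_forwarding` helper, the `black_box` stand-in, and the port values are all hypothetical names made up for this example. One experiment packet is sent per header value of interest, and the observed next hops form the learned model.

```python
# Hypothetical sketch of learning a device model by sending experiment
# packets. The real algorithm learns symbolic automata; here we just build
# a concrete forwarding table by querying a black box one probe at a time.

def learn_forwarding(send_probe, header_values):
    """send_probe(dst_port) -> observed next hop for that probe packet."""
    model = {}
    for v in header_values:
        model[v] = send_probe(v)   # one experiment packet per header value
    return model

# Stand-in for the real network: behavior we can only observe, not inspect.
def black_box(dst_port):
    return "anesthesia" if dst_port == 22 else "web"

model = learn_forwarding(black_box, [22, 80, 443])
print(model)   # {22: 'anesthesia', 80: 'web', 443: 'web'}
```

The learned table could then be handed to a verifier in place of a hand-written formal model; the hard part, which the real work addresses, is covering the (enormous) packet space with finitely many well-chosen probes.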

As it happens, I had already been working on this problem from a more theoretical viewpoint before coming to Galois this summer—in fact, I left the internship for a week during June to present a paper [1] on this topic at the PLDI'25 conference in Seoul. Given that the paper was more oriented towards theory, the specific task at hand for the summer was to demonstrate its practicality by applying the techniques to a practical network simulation. The simulation we built for the exercise in June was an ideal target. So, I set about attaching my learner to the simulation we built for the hospital network.

The long-term goal of this work is making 5STARS applicable and useful in cases where a complete model of the network is unavailable or hard to create. If a network administrator can use our system to build a complete and precise description of their network automatically, then it unlocks the ability to use 5STARS to verify its properties—without the need for a formal methods expert at any step.

This effort was a success! With help from the 5STARS team at various times, I was able to build a prototype system that learns a model for the entire hospital network by sending actual packets in a network simulator. The model is not complete in the same sense as the full formal specification, but it is an exciting starting point.

Still, important research questions remain. For example, we assumed we can observe packets at every switch, but this might not be convenient or possible in every situation depending on the hardware involved. It would be interesting to lighten this assumption by coming up with a general way to fully incorporate information from observed packets without requiring that we see any particular packets. That is, can we infer missing information to make the system even more robust? Similarly, we may not be able to send experiment packets from every device, depending on the type of hardware and the configurations involved. In this situation, can we learn in a more "passive" setting? Engineering a tool that addresses these challenges will be an exciting asset for network administrators!

While I return to finish my doctoral work at Cornell, 5STARS continues at Galois! This project is exciting because it is about democratizing verification: network operators who are not experts in formal verification will be able to use our software to prove (or disprove—and help identify how to repair) the security and availability properties of their network policies. This power will make it much easier (and less expensive!) to build safe and secure networks for systems important to us, like our example hospital network.

Footnotes

[1] Mark Moeller, Tiago Ferreira, Thomas Lu, Nate Foster, and Alexandra Silva. 2025. Active Learning of Symbolic NetKAT Automata. Proc. ACM Program. Lang. 9, PLDI, Article 192 (June 2025), 24 pages. doi:10.1145/3729295

Distribution Statement A: Approved for Public Release

This material is based upon work supported by the Naval Surface Warfare Center-Corona Division (NSWC-Corona) under Contract No. N6426725C7016. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSWC-Corona.