The **P**rogramming **LA**nguages, **I**nformation security, and **D**ata privacy (PLAID)
research lab at the University of Vermont (UVM).

##### Faculty #####

- [Chris Skalka](http://www.cs.uvm.edu/~ceskalka/)  
- [Joe Near](http://www.uvm.edu/~jnear/)  
- [Jeremiah Onaolapo](https://www.uvm.edu/~jonaolap/)  
- [Emma Tosch](https://emmatosch.com/)  

##### PhD Students #####

- [Chike Abuah](https://www.uvm.edu/~cabuah/)  
- [John Ring](http://johnhringiv.com/)  
- Krystal Maughan  
- Ryan Estes  
- Mako Bates  
- Tim Stevens  
- Craig Agricola  

##### MS Students #####

- Sam Clark  

##### BS Students #####

- Alex Silence  
- Phillip Nguyen  
- Vanessa White  
- Zachary Ward  

##### Courses #####

- Compilers  
- Computer Networks  
- Computer Security Foundations  
- Data Privacy  
- Programming Languages  
- Secure Distributed Computation  
- Type Theory  

-------------------------------------------------

#### Recent Papers by UVM PLAID ####

- **DDUO: General-Purpose Dynamic Analysis for Differential Privacy.**  
  Chike Abuah, Alex Silence, David Darais, Joseph P. Near.  
  *Computer Security Foundations (CSF), 2021.*

- **PrivFramework: A System for Configurable and Automated Privacy Policy Compliance.**  
  Usmann Khan, Lun Wang, Jithendaraa Subramanian, Joseph P. Near, and Dawn Song.  
  *NeurIPS 2020 Workshop on Dataset Security and Curation, 2020.*

- **Towards Auditability for Fairness in Deep Learning.**  
  Ivoline Ngong, Krystal Maughan, and Joseph P. Near.  
  *NeurIPS 2020 Workshop on Algorithmic Fairness through the Lens of Causality and Interpretability, 2020.*

- **DuetSGX: Differential Privacy with Secure Hardware.**  
  Phillip Nguyen, Alex Silence, David Darais, Joseph P. Near.  
  *Theory and Practice of Differential Privacy (TPDP) workshop, 2020.*

- **Towards a Measure of Individual Fairness for Deep Learning.**  
  Krystal Maughan, Joseph P. Near.  
  *Mechanism Design for Social Good (MD4SG) workshop, 2020.*

- **Types and Abstract Interpretation for Authorization Hook Advice.**  
  Christian Skalka, David Darais, Trent Jaeger, Frank Capobianco.  
  *Computer Security Foundations (CSF). IEEE, 2020.*

- **Abstracting Faceted Execution.**  
  Kris Micinski, David Darais, Thomas Gilray.  
  *Computer Security Foundations (CSF). IEEE, 2020.*

- **CHORUS: a Programming Framework for Building Scalable Differential Privacy Mechanisms.**  
  Noah Johnson, Joseph P. Near, Joseph M. Hellerstein, Dawn Song.  
  *European Symposium on Security and Privacy (EuroS&P). IEEE, 2020.*

- **A Language for Probabilistically Oblivious Computation.**  
  David Darais, Ian Sweet, Chang Liu, Michael Hicks.  
  *Principles of Programming Languages (POPL). ACM, 2020.*  

- **Proof Carrying Network Code.**  
  Christian Skalka, John Ring, David Darais, Minseok Kwon, Sahil Gupta, Kyle
  Diller, Steffen Smolka, Nate Foster.  
  *Computer and Communications Security (CCS). ACM, 2019.*

- **Duet: An Expressive Higher-order Language and Linear Type System for
  Statically Enforcing Differential Privacy.**  
  Joseph P. Near, David Darais, Chike Abuah, Tim Stevens, Pranav Gaddamadugu,
  Lun Wang, Neel Somani, Mu Zhang, Nikhil Sharma, Alex Shan, Dawn Song.  
  *Object-oriented Programming, Systems, Languages, and Applications (OOPSLA). ACM, 2019.*  
  ***«ACM SIGPLAN Distinguished Paper Award»***

- **Data Capsule: A New Paradigm for Automatic Compliance of Data Privacy
  Regulations.**  
  Lun Wang, Joseph P. Near, Neel Somani, Peng Gao, Andrew Low, David Dao, and
  Dawn Song.  
  *Workshop on Polystore Systems (POLY). Springer, 2019.*

- **Towards Practical Differentially Private Convex Optimization.**  
  Roger Iyengar, Joseph P. Near, Dawn Song, Om Thakkar, Abhradeep Thakurta, Lun
  Wang.  
  *IEEE Security and Privacy (IEEE S&P). IEEE, 2019.*

- **Constructive Galois Connections.**  
  David Darais, David Van Horn.  
  *Journal of Functional Programming (JFP). Cambridge University Press, 2019.*

- **Tracking the Provenance of Access Control Decisions.**  
  Frank Capobianco, Christian Skalka, and Trent Jaeger.  
  *Workshop on Theory and Practice of Provenance (TaPP). USENIX Association, 2017.*

- **On Risk in Access Control Enforcement.**  
  Giuseppe Petracca, Frank Capobianco, Christian Skalka, and Trent Jaeger.  
  *Symposium on Access Control Models and Technologies (SACMAT). ACM, 2017.*

- **Life on the Edge: Unraveling Policies into Configurations.**  
  Shrutarshi Basu, Nate Foster, Hossein Hojjat, Paparao Palacharla, Christian
  Skalka, and Xi Wang.  
  *Architectures for Networking and Communications Systems (ANCS). ACM, 2017.*

- **In-Depth Enforcement of Dynamic Integrity Taint Analysis.**  
  Sepehr Amir-Mohammadian and Christian Skalka.  
  *Workshop on Programming Languages and Security (PLAS). ACM, 2016.*

-------------------------------------------------

#### Industry Partnerships ####

###### *Provable Fairness in Deep Learning* ######

Artificial intelligence is a powerful tool, but AI systems often
reflect and magnify society's biases -- for example, by predicting
above-average risk of recidivism for Black defendants. This project
aims to *prove* that a model's predictions are free of this kind of
bias by applying techniques from program analysis directly to the
machine learning model, establishing that its predictions satisfy a
formal fairness guarantee.

*UVM PI: Joe Near. Funded by an Amazon Research Award.*

#### Funded Projects ####

###### *Local Sensitivity for Differentially Private Deep Learning* ######

Differential privacy can provide strong privacy guarantees for deep
learning models, but existing approaches require adding too much noise
to train accurate models. This project aims to use techniques from
program analysis to bound the *local sensitivity* of model updates, in
order to significantly reduce the amount of noise needed during
training and increase the accuracy of the trained models.

*UVM PI: Joe Near. Key Personnel: David Darais. Funded by DARPA (CSL).*
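
As a rough sketch of why local sensitivity helps (illustrative only, and not the project's technique: naive local sensitivity must be smoothed before it yields a differential privacy guarantee), the example below compares the Laplace noise scales implied by global versus local sensitivity for a median query:

```python
import numpy as np

def global_sensitivity_median(lower, upper):
    """Worst case over all datasets: changing one record can move the
    median anywhere in the data range [lower, upper]."""
    return upper - lower

def local_sensitivity_median(data):
    """For this particular dataset: changing one record moves the
    median to at most a neighboring order statistic."""
    x = np.sort(data)
    m = len(x) // 2
    return max(x[m + 1] - x[m], x[m] - x[m - 1])

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=5, size=1001).clip(0, 100)

gs = global_sensitivity_median(0, 100)   # 100, independent of the data
ls = local_sensitivity_median(data)      # tiny for densely clustered data

epsilon = 1.0
print(f"Laplace noise scale with global sensitivity: {gs / epsilon:.3f}")
print(f"Laplace noise scale with local  sensitivity: {ls / epsilon:.5f}")
```

On typical data the local bound is orders of magnitude below the global one, which is the gap that sensitivity analysis of model updates aims to exploit.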

###### *Compilers for Zero-knowledge Proofs* ######

Zero-knowledge proofs are a family of cryptographic techniques for convincing
another party that some secret or evidence is known, without directly revealing
the evidence. This framework has the potential to secure and/or enable
important societal functions, such as auctions, voting, and auditing. However,
the computational and resource costs of realizing the framework have been
prohibitive for non-trivial applications in practice. In this project, we aim
to improve the performance of zero-knowledge instantiations through
innovations in the compiler frameworks used to translate proof statements into
their embeddings in efficient zero-knowledge protocols.

*UVM PI: Joe Near. Key Personnel: David Darais. Funded by DARPA (SIEVE).*
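
As a toy illustration of the kind of proof statement such compilers target, here is a minimal Schnorr-style sigma protocol, a classic zero-knowledge proof of knowledge of a discrete logarithm (the parameters are tiny and insecure, chosen purely for readability; this sketch is not connected to the project's actual toolchain):

```python
import secrets

# Toy Schnorr identification protocol: prove knowledge of x such that
# y = g^x mod p, without revealing x. Real deployments use large groups.
p, q, g = 23, 11, 2          # g generates a subgroup of prime order q in Z_p*

x = secrets.randbelow(q)     # prover's secret
y = pow(g, x, p)             # public key

r = secrets.randbelow(q)     # prover's random nonce
t = pow(g, r, p)             # commitment sent to the verifier
c = secrets.randbelow(q)     # verifier's random challenge
s = (r + c * x) % q          # prover's response (reveals nothing about x alone)

# Verifier accepts iff g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```

The verifier learns that the prover knows `x`, but the transcript `(t, c, s)` can be simulated without `x`, which is the zero-knowledge property.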


###### *Languages for Differential Privacy* ######

Differential privacy is a promising framework for analyzing data while
providing formal privacy guarantees. Today, implementing
differentially private algorithms is an error-prone process: incorrect
algorithms don't crash; instead, they appear to function correctly,
but don't actually protect privacy. In this project, we invent new
programming languages capable of automatically verifying that an
algorithm satisfies differential privacy, enabling non-experts to
quickly develop correct differentially private algorithms.

*UVM PI: Joe Near. Funded by DARPA (Brandeis).*
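
The failure mode described above, code that runs without errors while silently leaking privacy, can be seen in a minimal sketch of the Laplace mechanism (illustrative only; the function names and the specific bug are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=10_000)   # 10,000 binary records

def count_correct(data, epsilon):
    """Laplace mechanism: a counting query has sensitivity 1, so noise
    with scale 1/epsilon yields epsilon-differential privacy."""
    return data.sum() + rng.laplace(scale=1 / epsilon)

def count_buggy(data, epsilon):
    """Subtle bug: scale=epsilon instead of 1/epsilon. For epsilon=0.1
    this adds far too little noise; the answer looks plausible, nothing
    crashes, and privacy is silently lost."""
    return data.sum() + rng.laplace(scale=epsilon)

print(count_correct(data, 0.1))  # noisy count (noise scale 10)
print(count_buggy(data, 0.1))    # nearly exact count: privacy violated
```

Both functions type-check, run, and return plausible counts; only a formal analysis of the noise scale distinguishes them, which is exactly what a privacy-aware type system can check automatically.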

###### *Proof Carrying Network Code (PCNC)* ######

The goal of this collaborative project is to support formally well-defined
security guarantees in software defined networks (SDNs) involving multiple
security domains (federations).

*UVM PI: Chris Skalka. Funded by NSF SaTC award CNS-1718083.*

###### *STRATA* ######

STRATA takes an integrative approach to defense in depth. This
multi-institutional research project seeks to combine authorization,
isolation, information flow, and auditing in a uniform framework. At UVM we
focus on mathematically well-founded approaches to auditing.

*UVM PI: Chris Skalka. Funded by NSF SaTC award CNS-1408801.*