


Modern machine learning methods have helped tackle many problems facing our society. However, despite their ubiquity, relatively little has been done to understand their security. The QUIRK project investigates how standard machine learning models can be attacked or made to leak information, and then develops protections against such attacks.


Jangseung

Project Lead: Shaya Wolf

Jangseung is a preprocessor that limits the effects of poisoning attacks without impeding accuracy. It guards support vector machines (SVMs) against poisoned data by running anomaly detection on the training set before the model is fit. In testing on a series of SVMs, Jangseung protected them from basic poisoning attacks. Beyond defending against adversaries who actively degrade an SVM's accuracy, it also catches accidental false data that would otherwise go unnoticed and unintentionally poison the model. Current research extends this approach to deep neural networks trained on large datasets for multi-class classification.
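The general pattern, filtering suspected poison with an anomaly detector before training, can be sketched as follows. This is an illustrative sketch only, not Jangseung's actual code; the detector choice (`IsolationForest`), the `contamination` rate, and all names are assumptions.

```python
# Hypothetical sketch of anomaly-detection preprocessing before SVM
# training (illustrative; not the Jangseung implementation).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import SVC

def filter_then_fit(X, y, contamination=0.1, random_state=0):
    """Drop points flagged as anomalous, then fit an SVM on the rest."""
    detector = IsolationForest(contamination=contamination,
                               random_state=random_state)
    keep = detector.fit_predict(X) == 1  # 1 = inlier, -1 = outlier
    clf = SVC(kernel="rbf").fit(X[keep], y[keep])
    return clf, keep

# Toy data: two clean clusters plus a handful of injected "poison" points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2)),
               rng.normal(1.5, 3.0, (10, 2))])   # scattered outliers
y = np.array([0] * 50 + [1] * 50 + [0] * 10)
clf, keep = filter_then_fit(X, y, contamination=0.1)
```

Because the filter runs before training, it removes both adversarial and accidental outliers the same way, which matches the dual benefit described above.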


Privacy-Preserving AI

Project Lead: Hui Hu

Privacy has emerged as a major concern in machine learning, as recent studies have revealed numerous privacy threats. The Privacy-Preserving AI project aims to develop efficient algorithms and mechanisms that protect sensitive user attributes, such as gender, age, and health status, during the modeling process. Currently, we are investigating privacy issues in deep learning on graph data, motivated by the fact that Graph Neural Networks are highly vulnerable to attribute inference attacks.
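One standard building block for such protections is differential privacy. The sketch below shows the classic Laplace mechanism; it is a generic illustration, not the project's actual mechanism, and the function name and parameters are assumptions.

```python
# Minimal Laplace-mechanism sketch (illustrative of privacy-preserving
# releases in general, not this project's specific algorithms).
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release value plus Laplace noise of scale sensitivity/epsilon,
    giving epsilon-differential privacy for a query with that sensitivity."""
    if rng is None:
        rng = np.random.default_rng()
    return value + rng.laplace(0.0, sensitivity / epsilon)

# Smaller epsilon -> more noise -> stronger privacy, lower utility.
noisy = laplace_mechanism(5.0, sensitivity=1.0, epsilon=0.5,
                          rng=np.random.default_rng(0))
```

The key design knob is epsilon: it trades the accuracy of the released statistic against how much any one user's attribute can influence the output.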


Side Channel ML Attacks

Project Lead: Hui Hu and Shaya Wolf

This project uses side-channel power attacks to infer private information, covering both model privacy and data privacy. Our previous studies have shown that side-channel power attacks can efficiently recover private details of the modeling process, such as model types, model structures, and model hyperparameters. Currently, we are looking into adversarial sample detection and privacy protection under side-channel power attacks.
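A common primitive behind such inference is template matching: compare an observed power trace against reference traces recorded for known models and pick the closest. The sketch below is a toy illustration of that idea with simulated traces; the labels and signal shapes are invented for the example.

```python
# Toy template-matching sketch for side-channel trace classification
# (illustrative only; real power traces and templates look nothing
# like these synthetic sine waves).
import numpy as np

def template_match(trace, templates):
    """Return the label of the template most correlated with the trace."""
    best_label, best_r = None, -2.0
    for label, tmpl in templates.items():
        r = np.corrcoef(trace, tmpl)[0, 1]  # Pearson correlation
        if r > best_r:
            best_label, best_r = label, r
    return best_label

# Simulated reference traces for two hypothetical model types.
t = np.linspace(0.0, 1.0, 100)
templates = {"model_a": np.sin(2 * np.pi * 3 * t),
             "model_b": np.sin(2 * np.pi * 7 * t)}
rng = np.random.default_rng(0)
observed = templates["model_a"] + 0.1 * rng.normal(size=t.size)
guess = template_match(observed, templates)
```

Even heavy noise leaves enough correlation structure to distinguish the templates, which is why leakage of model type or structure is hard to mask.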

Great Revolt

Project Lead: Andey Tuttle

Great Revolt is a large-scale project investigating the security of machine learning systems and infrastructure. Instead of focusing on the models themselves, it examines the underlying libraries and code that run those models. Beginning in the summer of 2021, the project started evaluating the three major machine learning frameworks for possible attack surfaces, with the goal of examining exploits through those attack vectors in the years to come.


Data Venom

Project Lead: Josh Sloan (graduated)

This research takes an adversarial approach to expose security deficiencies in the poisoning-defense literature. It presents a novel poisoning attack that uses a genetic algorithm's search capabilities to generate substantial volumes of poisoned data in near real-time while evading existing poisoning defenses. The attack framework thus highlights a present danger in machine learning and aims to incentivize further research into defense mechanisms against such attacks. Click here to read the thesis.
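The genetic-algorithm core of such an attack can be sketched in a few lines: maintain a population of candidate poison points, score them with a fitness function, keep the best, and mutate. This is a generic GA skeleton, not the Data Venom implementation; the fitness function here is a placeholder where a real attack would score damage to the victim model and evasion of its defenses.

```python
# Minimal genetic-algorithm sketch for evolving poison candidates
# (illustrative skeleton only, not the Data Venom attack).
import numpy as np

def evolve_poison(fitness, dim=2, pop_size=30, generations=40,
                  mutation_scale=0.2, rng=None):
    """Evolve points that maximize a caller-supplied fitness function."""
    if rng is None:
        rng = np.random.default_rng(0)
    pop = rng.normal(0.0, 1.0, (pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]
        survivors = pop[order[:pop_size // 2]]      # keep the fittest half
        children = survivors + rng.normal(0.0, mutation_scale,
                                          survivors.shape)  # mutate
        pop = np.vstack([survivors, children])
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]

# Placeholder fitness: approach a target point. A real attack would
# instead reward damaging the victim model while staying undetected.
target = np.array([2.0, -1.0])
best = evolve_poison(lambda p: -np.linalg.norm(p - target))
```

Because each generation only needs fitness evaluations and cheap mutations, a GA like this can emit large batches of candidates quickly, which is what makes the near-real-time claim plausible.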

Contact Us

Cybersecurity Education and Research

Computer Science Department

Dept. 3315, 1000 E. University Ave.

Laramie, WY 82071


