Code

In the ASSET research group, we encourage open-source release of software artifacts. As of now, we have released the software artifacts associated with most of our research papers; the exceptions are due to restrictions imposed by the funding agencies supporting the respective projects.


Side Channel Freedom


oo7 (Spectre Checker)

oo7 is a binary analysis framework to detect and patch Spectre vulnerabilities. See the paper oo7: Low-overhead Defense against Spectre Attacks via Program Analysis for more details.
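The Spectre-v1 pattern that such an analysis looks for is a conditional bounds check followed, within the speculation window, by a load from an attacker-controlled address whose result indexes a second load. A toy version of this taint-based pattern match, over a small hypothetical three-address IR (the instruction format is illustrative, not oo7's actual binary representation), might look like:

```python
# Toy Spectre-v1 pattern matcher over a tiny, hypothetical 3-address IR.
# Instructions: ("branch", cond), ("load", dst, addr), ("add", dst, a, b).
# A gadget is flagged when, after a conditional branch, a load uses an
# attacker-tainted address and a later load is indexed by that first
# load's result -- the classic "double load" cache leak.

def find_spectre_v1(instrs, attacker_controlled):
    tainted = set(attacker_controlled)  # attacker-influenced variables
    loaded = set()                      # values derived from a tainted-address load
    speculating = False                 # past a conditional branch?
    gadgets = []
    for i, ins in enumerate(instrs):
        op = ins[0]
        if op == "branch":
            speculating = True          # later code may run speculatively
        elif op == "load":
            _, dst, addr = ins
            if speculating and addr in loaded:
                gadgets.append(i)       # second load leaks the first load's value
            if speculating and addr in tainted:
                loaded.add(dst)
            if addr in tainted:
                tainted.add(dst)
        elif op == "add":
            _, dst, a, b = ins
            if a in tainted or b in tainted:
                tainted.add(dst)        # propagate taint through arithmetic
            if a in loaded or b in loaded:
                loaded.add(dst)
    return gadgets

# Classic gadget shape: if (x < len) y = b[a[x] * 2];
prog = [
    ("branch", "x_lt_len"),
    ("load", "t0", "x"),         # t0 = a[x], x is attacker-controlled
    ("add",  "t1", "t0", "t0"),  # scale the leaked value into an index
    ("load", "t2", "t1"),        # b[t1] -- leaks t0 via the cache
]
print(find_spectre_v1(prog, {"x"}))  # -> [3]
```

A real tool must additionally reason about speculation-window length and indirect control flow; this sketch only shows the core taint propagation.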

KLEESpectre

KLEESpectre is a symbolic execution framework enhanced with speculative semantics and an accurate cache model. See the paper KLEESpectre: Detecting Information Leakage through Speculative Cache Attacks via Symbolic Execution for more details.

GDivAn

GDivAn is a measurement-based execution-time and side-channel analysis tool that uses a systematic combination of symbolic execution (SE) and genetic algorithms (GA). GDivAn takes GPU programs (CUDA) as input. See the paper Genetic Algorithm Based Estimation of Non-Functional Properties for GPGPU Programs for more details.
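The GA side of such a tool evolves program inputs toward the worst-case of a measured cost. The loop below is a minimal sketch of that idea; the cost function is a stand-in for a real execution-time measurement of a GPU kernel, and all names and parameters are illustrative rather than GDivAn's actual configuration:

```python
import random

# Minimal genetic-algorithm loop: evolve byte-string inputs that maximize
# a measured cost. cost() is a placeholder for timing a real kernel run.

def cost(x):
    return sum(x)  # stand-in: pretend larger bytes mean longer execution

def mutate(x, rng):
    y = list(x)
    y[rng.randrange(len(y))] = rng.randrange(256)
    return y

def crossover(a, b, rng):
    cut = rng.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def evolve(pop_size=20, length=8, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(256) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost, reverse=True)
        elite = pop[: pop_size // 2]  # elitism: keep the slowest inputs
        children = [
            mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=cost)

best = evolve()
```

Because the elite half survives each generation, the best cost found is monotonically non-decreasing; the SE component (not shown) steers the search toward unexplored program paths.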

Fuzzing Cache Side Channel

This repository contains our search-based software testing tool to discover cache side-channel leakage. It covers both timing-based and access-based cache attacks, and it is effective both in a controlled environment and on real hardware (PC and Raspberry Pi). See the paper An Exploration of Effective Fuzzing for Side-channel Cache Leakage for more details.
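The property such a fuzzer searches for is an input pair under which the victim's cache footprint depends on a secret, since that dependence is exactly what a timing or access-based attacker observes. The toy direct-mapped cache model below (a deliberate simplification, not the tool's actual cache model) illustrates the leakage condition:

```python
# Toy direct-mapped cache model. A side-channel leak exists when two
# secret values lead the victim to touch different cache sets: an
# attacker who probes the cache afterwards can distinguish the secrets.

LINE = 64   # cache line size in bytes
SETS = 8    # number of cache sets

def footprint(addresses):
    """Set of cache sets touched by a sequence of memory accesses."""
    return {(a // LINE) % SETS for a in addresses}

def victim(secret, table_base=0):
    # The classic leaky pattern: a table lookup indexed by the secret.
    return [table_base + secret * LINE]

leak = footprint(victim(3)) != footprint(victim(5))
print(leak)  # True: different secrets touch different cache sets
```

A search-based tester mutates inputs until it finds such a distinguishing pair; on real hardware the footprint comparison is replaced by timing measurements.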


IoT Wireless Security


ESP8266/ESP32 Wi-Fi Attacks

This repository contains the proof-of-concept code for three Wi-Fi attacks (CVE-2019-12586, CVE-2019-12587, CVE-2019-12588) against the popular ESP8266/ESP32 IoT devices.

SweynTooth

This repository contains the proof-of-concept code for the SweynTooth vulnerabilities in major Bluetooth Low Energy (BLE) system-on-chips (SoCs) from vendors such as Texas Instruments, Cypress, NXP, Microchip, STMicroelectronics, Telink, and Dialog Semiconductor.

Mirai4IIoT

Mirai4IIoT is a repository to reproduce Mirai attacks on Industrial Internet-of-Things (IIoT) systems. The attacks can be reproduced in both stealthy and non-stealthy modes. See the paper An Experimental Analysis of Security Vulnerabilities in Industrial IoT Devices for more details.

Stitcher

Stitcher is a tool designed to classify and correlate evidence from IoT devices. See the paper STITCHER: Correlating Digital Forensic Evidence on Internet-of-Things Devices for more details.


AI Safety and Security


Aequitas

Aequitas is a directed fairness testing framework for machine learning models. See the paper Automated Directed Fairness Testing for more details.
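The notion of unfairness such testing targets is individual discrimination: two inputs that differ only in a protected attribute but receive different predictions. The sketch below searches for such a pair against a deliberately biased stand-in classifier (the model, feature names, and search strategy are illustrative; Aequitas uses a more sophisticated global-then-local directed search):

```python
import itertools

# Find an individual-discrimination instance: an (age, income) pair for
# which flipping only the protected attribute changes the prediction.

def biased_model(age, income, gender):
    # gender (protected) illegitimately shifts the decision threshold
    return 1 if income + (10 if gender == 1 else 0) > 50 else 0

def find_discrimination(model, ages, incomes, genders=(0, 1)):
    for age, income in itertools.product(ages, incomes):
        preds = {model(age, income, g) for g in genders}
        if len(preds) > 1:  # same individual, different outcome
            return (age, income)
    return None

case = find_discrimination(biased_model, range(18, 70), range(0, 100))
print(case)  # a witness of discrimination, e.g. an income near the threshold
```

Exhaustive enumeration works for this two-feature toy; directed fairness testing replaces it with a guided search that scales to high-dimensional inputs.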

Ogma

Ogma provides a systematic test framework for machine-learning systems that accept grammar-based inputs. See the paper Grammar Based Directed Testing of Machine Learning Systems for more details.
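The key idea is that deriving inputs from the system's grammar guarantees every generated test is syntactically valid, so testing effort is spent past the parser. A minimal random-derivation sketch over an illustrative arithmetic grammar (not one of Ogma's subject grammars) looks like:

```python
import random

# Random sentence derivation from a context-free grammar. Non-recursive
# rules are listed first so the generator can force termination at depth.

GRAMMAR = {
    "<expr>": [["<term>"], ["<term>", "+", "<expr>"]],
    "<term>": [["<num>"], ["(", "<expr>", ")"]],
    "<num>":  [["0"], ["1"], ["2"]],
}

def derive(symbol, rng, depth=0):
    if symbol not in GRAMMAR:
        return symbol                      # terminal symbol
    rules = GRAMMAR[symbol]
    # Past a depth budget, always take the first (non-recursive) rule
    # so every derivation is finite.
    rule = rules[0] if depth > 8 else rng.choice(rules)
    return "".join(derive(s, rng, depth + 1) for s in rule)

rng = random.Random(1)
samples = [derive("<expr>", rng) for _ in range(5)]
print(samples)  # e.g. valid strings like "1+2" or "(0)+1"
```

Directed grammar-based testing additionally biases the rule choices to steer generated inputs toward untested behaviors of the system.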

Callisto

Callisto is a novel test generation and data quality assessment framework. Callisto leverages the uncertainty in model predictions to systematically generate new test cases and to evaluate data quality for machine-learning classifiers. See the paper Callisto: Entropy-based Test Generation and Data Quality Assessment for Machine Learning Systems for more details.
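The uncertainty measure at the core of this idea can be sketched as the Shannon entropy of the classifier's predicted class distribution: high-entropy inputs are the ones the model is least sure about and therefore the most valuable new test cases. The probabilities below are illustrative model outputs, not results from a real classifier:

```python
import math

# Rank candidate test inputs by the Shannon entropy of the model's
# predicted class distribution: uncertain inputs first.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

predictions = {
    "img_a": [0.98, 0.01, 0.01],  # confident prediction
    "img_b": [0.40, 0.35, 0.25],  # uncertain -> interesting test case
    "img_c": [0.70, 0.20, 0.10],
}

ranked = sorted(predictions, key=lambda k: entropy(predictions[k]),
                reverse=True)
print(ranked)  # most uncertain input first
```

The same entropy score, aggregated over a dataset, doubles as a rough data-quality signal: regions where the model stays uncertain suggest under-represented or mislabeled training data.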

Aegis

Aegis is a novel tool to detect backdoor attacks in adversarially robust machine-learning models. Aegis leverages intrinsic properties of adversarially robust models to break the stealthy nature of backdoors. See the preprint AEGIS: Exposing Backdoors in Robust Machine Learning Models for more details.

RAIDS

RAIDS is an intrusion detection system for autonomous cars. It extracts and analyzes sensory information (e.g., camera images and distance sensor values) to validate frames transmitted on the in-vehicle network (e.g., CAN bus). See the paper Road Context-aware Intrusion Detection System for Autonomous Cars for more details.


Verification


AIG-AC

AIG-AC is a tool that performs attacker classification on systems represented as And-Inverter Graphs (AIGs) via bounded model checking. See the paper Systematic Classification of Attackers via Bounded Model Checking for more details.

AGRID

AGRID encodes discrete models of robotic systems to support verification of robotic software. It can encode models as either Satisfiability Modulo Theories (SMT) or Bounded Model Checking (BMC) problems.