Awesome Code

In the ASSET research group, we encourage open-source release of software artifacts. To date, we have released the software artifacts associated with most of our research papers; the exceptions are due to restrictions imposed by the funding agencies supporting the respective projects.


(Micro-architectural) Timing Channel


oo7 (Spectre Checker)

oo7 is a binary analysis framework to detect and patch Spectre vulnerabilities. See the paper oo7: Low-overhead Defense against Spectre Attacks via Program Analysis for more details.

KLEESpectre

KLEESpectre is a symbolic execution framework enhanced with speculative semantics and an accurate cache model. See the paper KLEESpectre: Detecting Information Leakage through Speculative Cache Attacks via Symbolic Execution for more details.

GDivAn

GDivAn is a measurement-based execution time and side-channel analysis tool using a systematic combination of symbolic execution (SE) and genetic algorithms (GA). GDivAn takes GPU programs (CUDA) as input. See the paper Genetic Algorithm Based Estimation of Non-Functional Properties for GPGPU Programs for more details.
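
To illustrate the GA half of such a combination, here is a minimal sketch of genetic-algorithm search for a worst-case (longest-running) input. This is not GDivAn's actual algorithm: `measure_cost` stands in for timing a GPU kernel, and all function names and parameters are hypothetical.

```python
import random

def measure_cost(candidate):
    # Stand-in for measuring kernel execution time on an input;
    # a toy cost function keeps the sketch self-contained.
    return sum(candidate)

def mutate(candidate, rate=0.2):
    # Randomly flip bits of some bytes.
    return [b ^ random.randrange(256) if random.random() < rate else b
            for b in candidate]

def crossover(a, b):
    # Single-point crossover of two byte strings.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def ga_worst_case(input_len=8, pop_size=20, generations=50):
    population = [[random.randrange(256) for _ in range(input_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the most expensive half, breed the rest from it.
        population.sort(key=measure_cost, reverse=True)
        elites = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(elites),
                                     random.choice(elites)))
                    for _ in range(pop_size - len(elites))]
        population = elites + children
    return max(population, key=measure_cost)
```

In GDivAn, symbolic execution additionally steers the search toward unexplored program paths rather than relying on random mutation alone.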

Fuzzing Cache Side Channel

This repository contains our search-based software testing tool to discover cache side-channel leakage. It covers both timing and access-based cache attacks and is effective both in a controlled environment and on real hardware (PC and Raspberry Pi). See the paper An Exploration of Effective Fuzzing for Side-channel Cache Leakage for more details.
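
The core test oracle in such fuzzing can be sketched as follows: run the program on different secrets, record the cache lines each run touches, and flag leakage if the footprints are distinguishable. The victim model and all names here are hypothetical stand-ins, not the tool's actual implementation.

```python
import random

def cache_lines_touched(secret, line_size=64):
    # Hypothetical victim: a table lookup indexed by a secret byte,
    # with 4-byte entries starting at a table base of 0.
    addr = secret * 4
    return {addr // line_size}

def leaks(secrets):
    # Leakage oracle: two secrets inducing different cache
    # footprints means an access-based attacker can tell them apart.
    traces = {s: frozenset(cache_lines_touched(s)) for s in secrets}
    return len(set(traces.values())) > 1

# Fuzzing loop: try random secrets and check distinguishability.
found = leaks([random.randrange(256) for _ in range(32)])
```

On real hardware the footprint would be observed via timing (e.g. Prime+Probe) rather than computed, and the search over secrets is guided rather than purely random.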

Chalice

This repository contains our tool to quantify cache side-channel leakage from program executions. It covers both timing and access-based cache attacks. See the paper Quantifying the Information Leakage in Cache Attacks via Symbolic Execution for more details. The code link provided in the paper is no longer valid, as Bitbucket deleted all Mercurial repositories; please use the link provided here if you wish to reproduce the results.
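
As a rough intuition for quantifying leakage (not Chalice's exact algorithm, which works over symbolic execution traces): for a deterministic side channel, the number of bits an attacker can learn is bounded by the log of the number of distinguishable observations.

```python
import math

def leaked_bits(observe, secrets):
    # Channel-capacity-style bound for a deterministic channel:
    # log2 of the number of distinct observations over all secrets.
    # `observe` is a hypothetical stand-in mapping a secret to the
    # attacker-visible observation (e.g. a cache footprint).
    return math.log2(len({observe(s) for s in secrets}))
```

For example, a lookup that reveals only `secret % 4` leaks at most 2 bits of a byte-sized secret.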


(Wireless) Communication Security


VitroBench (Software)

This repository contains the Proof-of-Concept (PoC) code for sniffing, injection/interception, and fuzzing attacks executed on the VitroBench Test Platform. See the paper VitroBench: Manipulating In-vehicle Networks and COTS ECUs on Your Bench and the companion VitroBench Website for more details.

BrakTooth (Proof of Concept)

This repository contains the Proof-of-Concept (PoC) code for the BrakTooth family of vulnerabilities in Bluetooth System-on-Chips (SoCs) from major vendors such as Intel, Qualcomm, Texas Instruments, Cypress, Silicon Labs, Xiaomi, MediaTek and Samsung.

BrakTooth (Sniffing)

This repository contains the BR/EDR active sniffer code developed while discovering the BrakTooth vulnerabilities in major Bluetooth System-on-Chips (SoCs). The sniffer runs on any ESP32 board, making it exceptionally cheap (less than 15 USD).

BrakTooth (ESP32 Firmware Patching)

This repository contains the ESP32 firmware patching framework, which was used to build the real-time fuzzing interface that led to the discovery of the BrakTooth vulnerabilities.

ESP8266/ESP32 Wi-Fi Attacks

This repository contains the proof-of-concept code for three Wi-Fi attacks (CVE-2019-12586, CVE-2019-12587, CVE-2019-12588) against the popular ESP8266/ESP32 IoT devices.

SweynTooth

This repository contains the proof-of-concept code for the SweynTooth family of vulnerabilities in BLE System-on-Chips (SoCs) from major vendors such as Texas Instruments, Cypress, NXP, Microchip, ST Microelectronics, Telink and Dialog Semiconductor.

Mirai4IIoT

Mirai4IIoT is a repository to reproduce Mirai attacks in Industrial Internet-of-Things (IIoT) systems. The attacks can be reproduced in both stealthy and non-stealthy modes. See the paper An Experimental Analysis of Security Vulnerabilities in Industrial IoT Devices for more details.


(IoT) Cybercrime and Forensics


Stitcher

Stitcher is a tool designed to classify and correlate evidence from IoT devices. See the FSIDI paper “STITCHER: Correlating Digital Forensic Evidence on Internet-of-Things Devices” for details.

SCI Threat Models

SCI Threat Models is a set of threat models designed to classify threats and map evidence for cybercrime investigation in smart city infrastructure (SCI). The threat model is technology-agnostic: the SCI modeled here focuses on gathering quality-of-life data indicators as specified in ISO 37101:2016, ISO 37120:2018, ISO 37122:2019 and ISO 37123:2019. See the FSIDI paper “Identifying Threats, Cybercrime and Digital Forensic Opportunities in Smart City Infrastructure via Threat Modeling” for details.


AI/ML Safety, Bias and Security


AequeVox

AequeVox is a fairness formalization and testing framework for speech recognition systems. See the paper in FASE 2022 AequeVox: Automated Fairness Testing of Speech Recognition Systems for more details.

Astraea

Astraea is a grammar-based fairness testing framework for machine learning models (specifically, NLP tasks). See the paper in IEEE TSE 2022 Grammar-based Fairness Testing for more details.

Aequitas

Aequitas is a directed fairness testing framework for machine learning models. See the paper in ASE 2018 Automated Directed Fairness Testing for more details.
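
The individual-fairness check at the heart of directed fairness testing can be sketched as: flip only the protected attribute of an input and see whether the model's prediction changes. The biased `classifier` below and all names are hypothetical stand-ins for illustration, not Aequitas itself.

```python
import random

PROTECTED = 0  # index of the protected attribute in an input vector

def classifier(x):
    # Hypothetical model under test, deliberately biased: it adds a
    # bonus when the protected attribute equals 1.
    return 1 if x[1] + (5 if x[PROTECTED] == 1 else 0) > 10 else 0

def is_discriminatory(x, values=(0, 1)):
    # Individual fairness: changing only the protected attribute
    # must not change the prediction.
    outputs = {classifier(x[:PROTECTED] + [v] + x[PROTECTED + 1:])
               for v in values}
    return len(outputs) > 1

def global_search(trials=200):
    # Global phase: random probing for a discriminatory input.
    for _ in range(trials):
        x = [random.randrange(2), random.randrange(20)]
        if is_discriminatory(x):
            return x
    return None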

Ogma

Ogma provides a systematic test framework for machine-learning systems that accept grammar-based inputs. See the paper in IEEE TSE 2019 Grammar Based Directed Testing of Machine Learning Systems for more details.

Aegis

Aegis provides a systematic framework for detecting backdoored, yet robust machine-learning systems. See the paper in Computers & Security 2023 Towards Backdoor Attacks and Defense in Robust Machine Learning Models for more details.

Neo

Neo provides a systematic model-agnostic defense for Backdoor attacks. See the paper in IEEE Transactions on Reliability 2022 Model Agnostic Defence Against Backdoor Attacks in Machine Learning for more details.

Callisto

Callisto is a novel test generation and data quality assessment framework. Callisto leverages the uncertainty in the prediction to systematically generate new test cases and evaluate data quality for Machine Learning classifiers. See the paper in ICST 2020 Callisto: Entropy based test generation and data quality assessment for Machine Learning Systems for more details.
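
The entropy-based ranking idea can be sketched in a few lines: score each input by the Shannon entropy of the model's predicted class distribution and prioritize the most uncertain ones as seeds for new tests. The `predict_proba` stub below is a hypothetical stand-in, not Callisto's interface.

```python
import math

def entropy(probs):
    # Shannon entropy (in bits) of a predicted class distribution;
    # higher entropy means a less confident prediction.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def rank_by_uncertainty(inputs, predict_proba):
    # Most uncertain predictions first: these are the most promising
    # candidates for new test cases and data-quality inspection.
    return sorted(inputs, key=lambda x: entropy(predict_proba(x)),
                  reverse=True)

def predict_proba(x):
    # Hypothetical classifier returning class probabilities.
    return [0.5, 0.5] if x == "ambiguous" else [0.99, 0.01]

ranked = rank_by_uncertainty(["clear", "ambiguous"], predict_proba)
```

A uniform distribution over two classes yields the maximum entropy of 1 bit, so the "ambiguous" input is ranked first.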

RAIDS

RAIDS is an intrusion detection system for autonomous cars. It extracts and analyzes sensory information (e.g., camera images and distance sensor values) to validate frames transmitted on the in-vehicle network (e.g., CAN bus). See the paper in ICICS 2019 Road Context-aware Intrusion Detection System for Autonomous Cars for more details.
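
The cross-validation idea can be sketched as a plausibility check between a CAN frame and the sensory context; in RAIDS itself this check is learned from camera images and sensor values rather than hand-written, and all names and thresholds below are hypothetical.

```python
def frame_is_plausible(can_speed_kmh, camera_obstacle_m, brake_pressed):
    # Hypothetical rule: a CAN frame reporting high speed while the
    # camera sees a very close obstacle and the brake is not pressed
    # contradicts the road context and is flagged as a possible
    # injected frame.
    if camera_obstacle_m < 5.0 and can_speed_kmh > 50.0 and not brake_pressed:
        return False
    return True

# A benign frame versus a likely spoofed one.
ok = frame_is_plausible(30.0, 40.0, False)
suspicious = not frame_is_plausible(90.0, 2.0, False)
```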


Verification


AIG-AC

AIG-AC is a tool that performs attacker classification on systems represented with And-inverter Graphs (AIGs) via bounded model checking. See the paper Systematic Classification of Attackers via Bounded Model Checking for more details.

AGRID

AGRID encodes discrete models of robotic systems to support the verification of robotic software. It can encode models as either Satisfiability Modulo Theories (SMT) or Bounded Model Checking (BMC) problems.