In the ASSET research group, we encourage open-source release of software artifacts. To date, we have released the software artifacts associated with most of our research papers; the exceptions are due to restrictions imposed by the funding agencies supporting those projects.
oo7 is a binary analysis framework to detect and patch Spectre vulnerabilities. See the paper oo7: Low-overhead Defense against Spectre Attacks via Program Analysis for more details.
KLEESpectre is a symbolic execution framework enhanced with speculative semantics and an accurate cache model. See the paper KLEESpectre: Detecting Information Leakage through Speculative Cache Attacks via Symbolic Execution for more details.
GDivAn is a measurement-based execution-time and side-channel analysis tool that uses a systematic combination of symbolic execution (SE) and a genetic algorithm (GA). The inputs to GDivAn are GPU programs (CUDA). See the paper Genetic Algorithm Based Estimation of Non-Functional Properties for GPGPU Programs for more details.
This repository contains our search-based software testing tool to discover cache side-channel leakage. It covers both timing-based and access-based cache attacks, and it is effective both in a controlled environment and on real hardware (PC and Raspberry Pi). See the paper An Exploration of Effective Fuzzing for Side-channel Cache Leakage for more details.
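As a toy illustration of the access-based leakage such a fuzzer hunts for, consider a table lookup whose accessed cache line depends on a secret value. The sketch below is purely illustrative (not the tool's code, and all names are hypothetical): it fuzzes secret values and flags the victim as leaky when different secrets touch different cache lines.

```python
# Illustrative sketch of access-based cache leakage detection.
# Assumption: an attacker can observe which cache lines the victim
# touches (e.g., via Prime+Probe); here we model that directly.

LINE_SIZE = 64  # bytes per cache line (a typical value)

def leaky_lookup(table_base, secret):
    """Victim with a secret-dependent table access (returns the address)."""
    return table_base + secret  # accessed address depends on the secret

def constant_lookup(table_base, secret):
    """Victim whose memory access does not depend on the secret."""
    return table_base  # always the same address

def touched_line(addr):
    """Cache line index of a byte address."""
    return addr // LINE_SIZE

def leaks_via_cache(victim, secrets, table_base=0x1000):
    """Fuzz-style check: do different secrets touch different lines?"""
    lines = {touched_line(victim(table_base, s)) for s in secrets}
    return len(lines) > 1  # >1 distinct line => observable leakage

print(leaks_via_cache(leaky_lookup, range(256)))     # True
print(leaks_via_cache(constant_lookup, range(256)))  # False
```

A real fuzzer additionally has to search for the *inputs* that expose such secret-dependent accesses and to cope with noisy timing observations, which is where the search-based testing in the tool comes in.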
This repository contains the proof-of-concept code for the SweynTooth vulnerabilities, which affect major BLE System-on-Chips (SoCs) from vendors such as Texas Instruments, Cypress, NXP, Microchip, STMicroelectronics, Telink, and Dialog Semiconductor.
Mirai4IIoT is a repository to reproduce Mirai attacks in Industrial Internet-of-Things (IIoT) systems. The attacks can be reproduced in both stealthy and non-stealthy modes. See the paper An Experimental Analysis of Security Vulnerabilities in Industrial IoT Devices for more details.
Stitcher is a tool designed to classify and correlate evidence from IoT devices. See the paper STITCHER: Correlating Digital Forensic Evidence on Internet-of-Things Devices for more details.
Ogma provides a systematic test framework for machine-learning systems that accept grammar-based inputs. See the paper Grammar Based Directed Testing of Machine Learning Systems for more details.
Callisto is a novel test generation and data-quality assessment framework for machine-learning classifiers. Callisto leverages the uncertainty in a classifier's predictions to systematically generate new test cases and to evaluate data quality. See the paper Callisto: Entropy based test generation and data quality assessment for Machine Learning Systems for more details.
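The uncertainty signal behind such entropy-based test generation can be sketched in a few lines. This is a minimal illustration of Shannon entropy over a classifier's predicted class distribution, not Callisto's own code: inputs on which the model's prediction has high entropy are the uncertain ones worth turning into new test cases.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (in bits) of a predicted class distribution.

    Higher entropy means the classifier is less certain about the
    input; entropy-based test generation treats such inputs as the
    most valuable candidates for new test cases.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident prediction has low entropy...
confident = prediction_entropy([0.97, 0.01, 0.01, 0.01])
# ...while a uniform (maximally uncertain) prediction over 4 classes
# has the maximum entropy of log2(4) = 2 bits.
uncertain = prediction_entropy([0.25, 0.25, 0.25, 0.25])
print(confident, uncertain)
```

Ranking a pool of candidate inputs by this score and mutating the highest-entropy ones is one simple way to operationalize the idea.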
Aegis is a novel tool to detect backdoor attacks in adversarially robust machine-learning models. Aegis leverages intrinsic properties of adversarially robust models to break the stealthy nature of backdoors. See the preprint AEGIS: Exposing Backdoors in Robust Machine Learning Models for more details.
RAIDS is an intrusion detection system for autonomous cars. It extracts and analyzes sensory information (e.g., camera images and distance sensor values) to validate frames transmitted on the in-vehicle network (e.g., CAN bus). See the paper Road Context-aware Intrusion Detection System for Autonomous Cars for more details.
AIG-AC is a tool that performs attacker classification on systems represented as And-Inverter Graphs (AIGs) via bounded model checking. See the paper Systematic Classification of Attackers via Bounded Model Checking for more details.
AGRID encodes discrete models of robotic systems to support verification of robotic software. It can encode models as either Satisfiability Modulo Theories (SMT) or Bounded Model Checking (BMC) problems.