Seminar Schedule
Chronological | By Speaker | ||||||||||||
|
|
Title | Towards Robust and Secure Deep Learning: From Algorithmic Hardening to Hardware-Aware Defense |
Speaker | Ruyi Ding |
Northeastern University | |
Abstract | As artificial intelligence systems become increasingly pervasive, securing them demands a holistic approach that spans learning algorithm to deployment hardware. In this talk, I will present a layered defense framework that encompasses optimization algorithm design, model architecture pruning, and protection mechanisms leveraging hardware-level signals. We first tackle applicability authorization-protecting pre-trained model's IP from unauthorized transfer-by designing EncoderLock, a systematically method that blocks malicious probing. By embedding task-specific authorization into pre-trained encoders, it ensures models restrict illegitimate classification heads while maintaining intact performance on benign ones. However, optimization-centric protections alone are insufficient for locally deployed models. To fortify security at the model architectural level, we introduce Non-Transferable Pruning, which transforms efficiency-driven pruning into a defense mechanism, hardening model's IP. Yet, even robust algorithmic defenses remain vulnerable to high-priority adaptive attacks. To further enhance model robustness, our design incorporates hardware-level defenses. EMShepherd exemplifies hardware-software co-design: by analyzing electromagnetic emissions from DNN accelerators, it detects adversarial inputs in real time-without relying on model internals. This hardware-informed approach complements algorithmic safeguards, creating a unified defense where physical-layer observability reinforces software resilience. By combining these complementary strategies, my work demonstrates that robust AI security relies on a harmonious, multi-layered defense. I conclude by envisioning future AI systems that combine hardware-based defenses with robust algorithms to secure deployment against evolving adversarial threats. |
When | Tuesday, 11 February 2025, 9:30 - 10:30 |
Where | Room 3316E Patrick F. Taylor Hall |
More | Hide Abstracts. Announcement (PDF) |
Title | Securing Computer Systems Using AI Methods and for AI Applications |
Speaker | Mulong Luo |
The University of Texas at Austin | |
Abstract | Securing modern computer systems against an ever-evolving threat landscape is a significant challenge that requires innovative approaches. Recent developments in artificial intelligence (AI), such as large language models (LLMs) and reinforcement learning (RL), have achieved unprecedented success in everyday applications. However, AI serves as a double-edged sword for computer systems security. On one hand, the superhuman capabilities of AI enable the exploration and detection of vulnerabilities without the need for human experts. On the other hand, specialized systems required to implement new AI applications introduce novel security vulnerabilities. In this talk, I will first present my work on applying AI methods to system security. Specifically, I leverage reinforcement learning to explore microarchitecture attacks in modern processors. Additionally, I will discuss the use of multi-agent reinforcement learning to improve the accuracy of detectors against adaptive attackers. Next, I will highlight my research on the security of AI systems, focusing on retrieval-augmented generation (RAG)-based LLMs and autonomous vehicles. For RAG-based LLMs, my ConfusedPilot work demonstrates how an attacker can compromise confidentiality and integrity guarantees by sharing a maliciously crafted document. For autonomous vehicles, I reveal a software-based cache side-channel attack capable of leaking the physical location of a vehicle without detection. Finally, I will outline future directions for building secure systems using AI methods and ensuring the security of AI systems. |
When | Thursday, 13 February 2025, 10:00 - 11:00 |
Where | Room 3316E Patrick F. Taylor Hall |
More | Hide Abstracts. Announcement (PDF) |
Title | Towards Efficient and Robust Deployment of Graph Deep Learning |
Speaker | Yue Dai |
University of Pittsburgh | |
Abstract | Inspired by the success of Graph Neural Networks (GNNs), recent graph deep learning studies have introduced GNN-based models like Graph Matching Networks (GMNs) and Temporal Graph Neural Networks (TGNNs) for diverse tasks in various domains such as social media, chemistry, and cybersecurity. Despite these advances, deploying such models efficiently and robustly in real-world settings remains challenging. Three core issues impede their broader adoption: (1) limited training efficiency, which hinders rapid model development for targeted applications; (2) suboptimal inference latencies, which fail to meet real-world responsiveness needs; and (3) fragile robustness against adversarial attacks, posing serious security and privacy concerns. This talk will present my research on full-stack optimizations for GNN-based models. First, I will introduce Cascade, a dependency-aware TGNN training framework that boosts training parallelism without compromising vital dynamic graph dependencies, resulting in faster training while preserving model accuracy. Second, I will detail CEGMA, a software-hardware co-design accelerator that eliminates redundant computations and data movement in GMNs, and introduce FlexGM, a GPU runtime that adaptively optimizes GMN inferences. Finally, I will present MemFreezing, an adversarial attack that nullifies TGNN dynamics by exploiting node memory mechanisms. Building on these advances, my future work will push the frontier of deep graph learning by optimizing emerging models—including GNN-LLM hybrids, developing robust memory defenses for dynamic graphs, and applying graph-based reinforcement learning to address system design challenges. Through this holistic approach, I aim to enable efficient, scalable, and secure GNN-based solutions across a wide range of real-world applications. |
When | Thursday, 20 February 2025, 10:00 - 11:00 |
Where | Room 3316E Patrick F. Taylor Hall |
More | Hide Abstracts. Announcement (PDF) |