SIGNATURE WORK
CONFERENCE & EXHIBITION 2024

In-depth Research on the MTARD Algorithm — Method to Address the Challenges in Adversarial Training

Name: Xuanang Zhou

Major: Data Science

Class: 2024

About

Xuanang Zhou is a data science student at DKU.

Signature Work Project Overview

With the rapid advancement of artificial intelligence, its applications have had far-reaching implications across domains such as stock prediction and autonomous driving. However, the rapid deployment of AI technology has raised concerns about its safety. Among the many threats, adversarial samples have emerged as a particularly significant one, posing a substantial risk to AI systems. In response, a variety of defense techniques have been developed, with adversarial training at the forefront. Yet adversarial training often sacrifices clean accuracy to achieve robustness, and it typically favors large-capacity models, which complicates real-world deployment.

This project therefore investigates a method that addresses both issues: adversarial distillation, which transfers knowledge from a larger-capacity teacher model to a smaller student model, improving the student's performance while preserving its robustness to adversarial attacks. Specifically, the Multi-Teacher Adversarial Robustness Distillation (MTARD) algorithm is examined to understand how it works. The study also conducts comprehensive experiments comparing MTARD with related adversarial robustness distillation algorithms, and explores adjustments to the algorithm to achieve improved performance.
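To make the multi-teacher idea concrete, the sketch below shows one plausible form of the distillation objective: a clean teacher guides the student on natural inputs, while a robust teacher guides it on adversarial inputs, and the two KL-divergence terms are mixed by a weight. The function and variable names here are illustrative assumptions, not MTARD's actual implementation, and the fixed weight `w_nat` stands in for the dynamic loss balancing the real algorithm performs.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def multi_teacher_distill_loss(student_nat, student_adv,
                               clean_teacher_nat, robust_teacher_adv,
                               w_nat=0.5, temperature=4.0):
    """Illustrative two-teacher distillation loss (hypothetical sketch).

    - student_nat / clean_teacher_nat: logits on a natural example.
    - student_adv / robust_teacher_adv: logits on its adversarial version.
    - w_nat: mixing weight between the clean and robust terms (MTARD
      adjusts this balance dynamically; a constant is used here).
    """
    nat_term = kl_div(softmax(clean_teacher_nat, temperature),
                      softmax(student_nat, temperature))
    adv_term = kl_div(softmax(robust_teacher_adv, temperature),
                      softmax(student_adv, temperature))
    return w_nat * nat_term + (1.0 - w_nat) * adv_term

# Hypothetical logits for one 3-class example and its adversarial variant.
s_nat, s_adv = [2.0, 0.5, -1.0], [1.0, 0.9, -0.5]
t_clean, t_robust = [2.5, 0.3, -1.2], [1.4, 0.6, -0.8]
loss = multi_teacher_distill_loss(s_nat, s_adv, t_clean, t_robust)
```

When the student's softened outputs match both teachers exactly, both KL terms vanish and the loss is zero; otherwise the student is pulled toward the clean teacher's accuracy and the robust teacher's resistance to attack at the same time.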

Signature Work Presentation Video