Important: For the safety and well-being of all conference participants, ACNS 2020 will be virtual. Accordingly, we will run the AIHWS workshop virtually as well. The dates of the workshop will remain the same. We apologize for any inconvenience this may have caused.
Artificial intelligence is progressing ever faster, with new applications and results that would not have been possible only a few years ago. At the same time, hardware security is becoming increasingly important as the number of embedded systems applications keeps growing. The connection between AI and hardware security is becoming more prominent: today, there are numerous applications where AI plays either an offensive or a defensive role in hardware security. AIHWS aims to position itself at the intersection of these topics and provide a space where ideas converge into exciting new approaches to hardware security. The workshop will provide an environment for researchers from academia and industry to discuss findings and ongoing work on all aspects of hardware security and artificial intelligence, including design, attacks, manufacturing, testing, validation, and utilization.
We encourage researchers working on all aspects of AI and hardware security to use AIHWS as an opportunity to share their work and participate in discussions.
Authors are invited to submit their papers using the EasyChair submission system.
Every accepted paper must have at least one author registered for the workshop. All submissions must follow the original LNCS format, with a page limit of 18 pages including references and any appendices. Papers should be submitted electronically in PDF format.
Deadlines extended
Workshop paper submission deadline: July 5, 2020
Workshop paper notification: August 5, 2020
Camera-ready papers for pre-proceedings: August 19, 2020
Workshop date: October 21, 2020
Accepted papers are available in the Applied Cryptography and Network Security Workshops, ACNS 2020 proceedings (LNCS 12418).
Find more information here, or access the online version here.
Nanyang Technological University, Singapore; Tohoku University, Japan
Digital Security Group, Radboud University Nijmegen, The Netherlands; Ikerlan Technology Research Centre, Arrasate-Mondragón, Gipuzkoa, Spain
Delft University of Technology, The Netherlands
CEA, France; LIRMM, France
University of Isfahan, Iran; KU Leuven, imec-COSIC/ESAT, Belgium
University of Electro-Communications, Japan
Delft University of Technology, The Netherlands; Digital Security Group, Radboud University, The Netherlands
Digital Security Group, Radboud University Nijmegen, The Netherlands; Ikerlan Technology Research Centre, Arrasate-Mondragón, Gipuzkoa, Spain
Machine learning has the capability to transform how we interact with our devices. Examples include autonomous driving, voice commands, and personal assistants. However, as machine learning systems grow more mature, so do the attacks against them: attackers can try to steal machine learning models, circumvent their functionality, and sometimes even try to do physical harm to the user. On the other hand, machine learning systems can also help us address security problems such as authentication using facial features or intrusion detection. In this presentation I will discuss security, privacy, and trust issues of machine learning systems and show that many attacks are trivial to mount. I will outline the consequences of such attacks and present some techniques that can harden systems against them.
Simon Friedberger holds a computer science diploma from Karlsruhe Institute of Technology and a PhD from KU Leuven. He has worked on machine learning for video classification and optimizing implementations of post-quantum crypto for performance and side-channel resistance. In his current role as security architect at NXP he works on SoC security and does research into attacks on machine learning systems.
Machine learning on encrypted data is a yet-to-be-addressed challenge. Several recent key advances across different layers of the system, from cryptography and mathematics to logic synthesis and hardware, are paving the way for the practical realization of privacy-preserving computing for certain target applications. This keynote talk highlights the crucial role of hardware and advances in computing architecture in supporting the recent progress in the field. I outline the main technologies and mixed computing models. I particularly center my talk on the recent progress in the synthesis of Garbled Circuits, which provides a leap in the scalable realization of machine learning on encrypted data. I explore how hardware could pave the way for navigating the complex parameter selection and scalable future mixed-protocol solutions. I conclude by briefly discussing the challenges and opportunities moving forward.
Farinaz Koushanfar is a professor and Henry Booker Faculty Scholar in the Electrical and Computer Engineering (ECE) department at University of California San Diego (UCSD), where she is the founding co-director of the UCSD Center for Machine Intelligence, Computing & Security (MICS). Prof. Koushanfar received her Ph.D. in Electrical Engineering and Computer Science as well as her M.A. in Statistics from UC Berkeley. Her research addresses several aspects of efficient computing and embedded systems, with a focus on system and device security, safe AI, privacy preserving computing, as well as real-time/energy-efficient AI under resource constraints, design automation and reconfigurable computing. Professor Koushanfar serves as an associate partner of the Intel Collaborative Research Institute for Secure Computing to aid developing solutions for the next generation of embedded secure devices. She has received a number of awards and honors for her research, mentorship, teaching, and outreach activities including the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, the ACM SIGDA Outstanding New Faculty Award, Cisco IoT Security Grand Challenge Award, Qualcomm Innovation Award(s), MIT Technology Review TR-35 2008 (World’s top 35 innovators under 35), Young Faculty/CAREER Awards from NSF, DARPA, ONR and ARO, as well as a number of Best Paper Awards. Dr. Koushanfar is a fellow of the IEEE, and a fellow of the Kavli Foundation Frontiers of the National Academy of Sciences.
The program starts at 12:30 PM CEST (UTC+2).
| Time (CEST, UTC+2) | Session / Title |
|---|---|
| 12:30 - 12:45 | Welcome note from the organizers |
| 12:45 - 13:45 | Keynote talk 1: Simon Friedberger, NXP |
| 13:50 - 15:30 | Session 1: Practical Implementation Attacks |
| 13:50 - 14:15 | Practical Side-Channel Based Model Extraction Attack on Tree-Based Machine Learning Algorithm |
| 14:15 - 14:40 | Simple Electromagnetic Analysis Against Activation Functions of Deep Neural Networks |
| 14:40 - 15:05 | Evolvable Hardware Architectures on FPGA for Side-channel Security |
| 15:05 - 15:30 | Leakage Assessment through Neural Estimation of the Mutual Information (ACNS workshops best paper award) |
| 15:30 - 16:30 | Break |
| 16:30 - 19:25 | Session 2: Deep Learning-based Side-channel Analysis |
| 16:30 - 16:55 | A Comparison of Weight Initializers in Deep Learning-based Side-channel Analysis |
| 16:55 - 17:20 | Controlling the deep learning-based side-channel analysis: A way to leverage from heuristics |
| 17:30 - 18:30 | Keynote talk 2: Farinaz Koushanfar, UCSD |
| 18:35 - 19:00 | Performance Analysis of Multilayer Perceptron in Profiling Side-channel Analysis |
| 19:00 - 19:25 | The Forgotten Hyperparameter: Introducing Dilated Convolution for Boosting CNN-Based Side-Channel Attacks |
| 19:25 - 19:40 | Farewell and discussion for future editions of AIHWS |
Shivam Bhasin, Nanyang Technological University, Singapore
Carlos Castro, UCM, Spain
Lukasz Chmielewski, Radboud University and Riscure, The Netherlands
Chitchanok Chuengsatiansup, The University of Adelaide, Australia
Joan Daemen, Radboud University, The Netherlands
Fatemeh Ganji, Worcester Polytechnic Institute, United States
Julio Hernandez-Castro, University of Kent, UK
Annelie Heuser, CNRS/IRISA, France
Dirmanto Jap, Nanyang Technological University, Singapore
Alan Jović, University of Zagreb, Croatia
Liran Lerman, Thales, Belgium
Luca Mariot, TU Delft, The Netherlands
Nele Mentens, KU Leuven, Belgium
Kostas Papagiannopoulos, NXP, Germany
Guilherme Perin, TU Delft, The Netherlands
Lex Schoonen, Brightsight, The Netherlands
Shahin Tajik, University of Florida, United States
Vincent Verneuil, NXP, Germany
Nikita Veshchikov, NXP, Belgium
Jason Xue, The University of Adelaide, Australia
Marina Krček, TU Delft, The Netherlands