TY - GEN
T1 - Stealing Neural Network Models through the Scan Chain
T2 - 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021
AU - Potluri, Seetal
AU - Aysu, Aydin
N1 - Publisher Copyright: ©2021 IEEE
PY - 2021
Y1 - 2021
N2 - Stealing trained machine learning (ML) models is a new and growing concern due to the model's development cost. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows, for the first time, a new style of attack on ML models running on embedded devices that abuses the scan-chain infrastructure. We illustrate that coarse-grained scan-chain access to non-linear layer outputs is sufficient to steal ML models. To that end, we propose a novel attack inspired by small-signal analysis that applies small perturbations to the input signals, identifies the quiescent operating points, and selectively activates certain neurons. We then couple this with a linear-constraint-satisfaction-based approach to efficiently extract model parameters such as weights and biases. We conduct our attack on neural network inference topologies defined in earlier works, and we automate our attack. The results show that our attack outperforms mathematical model extraction proposed in CRYPTO 2020, USENIX 2020, and ICML 2020 by an increase in accuracy of 2^20.7×, 2^50.7×, and 2^33.9×, respectively, and a reduction in queries by 2^6.5×, 2^4.6×, and 2^14.2×, respectively.
AB - Stealing trained machine learning (ML) models is a new and growing concern due to the model's development cost. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows, for the first time, a new style of attack on ML models running on embedded devices that abuses the scan-chain infrastructure. We illustrate that coarse-grained scan-chain access to non-linear layer outputs is sufficient to steal ML models. To that end, we propose a novel attack inspired by small-signal analysis that applies small perturbations to the input signals, identifies the quiescent operating points, and selectively activates certain neurons. We then couple this with a linear-constraint-satisfaction-based approach to efficiently extract model parameters such as weights and biases. We conduct our attack on neural network inference topologies defined in earlier works, and we automate our attack. The results show that our attack outperforms mathematical model extraction proposed in CRYPTO 2020, USENIX 2020, and ICML 2020 by an increase in accuracy of 2^20.7×, 2^50.7×, and 2^33.9×, respectively, and a reduction in queries by 2^6.5×, 2^4.6×, and 2^14.2×, respectively.
UR - https://www.scopus.com/pages/publications/85124151499
U2 - 10.1109/ICCAD51958.2021.9643547
DO - 10.1109/ICCAD51958.2021.9643547
M3 - Conference contribution
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - 2021 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 November 2021 through 4 November 2021
ER -