TY - GEN
T1 - Reaching Data Confidentiality and Model Accountability on the CalTrain
AU - Gu, Zhongshu
AU - Jamjoom, Hani
AU - Su, Dong
AU - Huang, Heqing
AU - Zhang, Jialong
AU - Ma, Tengfei
AU - Pendarakis, Dimitrios
AU - Molloy, Ian
N1 - Publisher Copyright: © 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - Distributed collaborative learning (DCL) paradigms enable building joint machine learning models from distrusted multi-party participants. Data confidentiality is guaranteed by retaining private training data on each participant's local infrastructure. However, this approach makes today's DCL designs fundamentally vulnerable to data poisoning and backdoor attacks, and it limits DCL's model accountability, which is key to tracing problematic training data instances back to their responsible contributors. In this paper, we introduce CALTRAIN, a centralized collaborative learning system that simultaneously achieves data confidentiality and model accountability. CALTRAIN enforces isolated computation via secure enclaves on centrally aggregated training data to guarantee data confidentiality. To support building accountable learning models, we securely maintain the links between training instances and their contributors. Our evaluation shows that models generated by CALTRAIN achieve the same prediction accuracy as models trained in non-protected environments. We also demonstrate that when malicious training participants attempt to implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned or mislabeled training data that lead to runtime mispredictions.
AB - Distributed collaborative learning (DCL) paradigms enable building joint machine learning models from distrusted multi-party participants. Data confidentiality is guaranteed by retaining private training data on each participant's local infrastructure. However, this approach makes today's DCL designs fundamentally vulnerable to data poisoning and backdoor attacks, and it limits DCL's model accountability, which is key to tracing problematic training data instances back to their responsible contributors. In this paper, we introduce CALTRAIN, a centralized collaborative learning system that simultaneously achieves data confidentiality and model accountability. CALTRAIN enforces isolated computation via secure enclaves on centrally aggregated training data to guarantee data confidentiality. To support building accountable learning models, we securely maintain the links between training instances and their contributors. Our evaluation shows that models generated by CALTRAIN achieve the same prediction accuracy as models trained in non-protected environments. We also demonstrate that when malicious training participants attempt to implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned or mislabeled training data that lead to runtime mispredictions.
KW - Data Privacy
KW - Learning Systems
KW - Systems Security
UR - https://www.scopus.com/pages/publications/85072116362
U2 - 10.1109/DSN.2019.00044
DO - 10.1109/DSN.2019.00044
M3 - Conference contribution
T3 - Proceedings - 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2019
SP - 336
EP - 348
BT - Proceedings - 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2019
Y2 - 24 June 2019 through 27 June 2019
ER -