TY - GEN
T1 - Feature Quantization Improves GAN Training
AU - Zhao, Yang
AU - Li, Chunyuan
AU - Yu, Ping
AU - Gao, Jianfeng
AU - Chen, Changyou
N1 - Publisher Copyright: © 2020 by the Authors. All rights reserved.
PY - 2020
Y1 - 2020
AB - The instability in GAN training has been a longstanding problem despite remarkable research efforts. We identify that instability issues stem from difficulties of performing feature matching with mini-batch statistics, due to a fragile balance between the fixed target distribution and the progressively generated distribution. In this work, we propose Feature Quantization (FQ) for the discriminator, to embed both true and fake data samples into a shared discrete space. The quantized values of FQ are constructed as an evolving dictionary, which is consistent with the feature statistics of the recent distribution history. Hence, FQ implicitly enables robust feature matching in a compact space. Our method can be easily plugged into existing GAN models, with little computational overhead in training. Extensive experimental results show that the proposed FQ-GAN can improve the FID scores of baseline methods by a large margin on a variety of tasks, including three representative GAN models on 9 benchmarks, achieving new state-of-the-art performance.
UR - https://www.scopus.com/pages/publications/85105249226
M3 - Conference contribution
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 11313
EP - 11323
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daumé III, Hal
A2 - Singh, Aarti
PB - International Machine Learning Society (IMLS)
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -