TY - GEN
T1 - Exploring in Extremely Dark
T2 - 32nd ACM International Conference on Multimedia, MM 2024
AU - Wang, Xicong
AU - Fu, Huiyuan
AU - Wang, Jiaxuan
AU - Wang, Xin
AU - Zhang, Heng
AU - Ma, Huadong
N1 - Publisher Copyright: © 2024 ACM.
PY - 2024/10/28
Y1 - 2024/10/28
N2 - Due to the limitations of sensors, traditional cameras struggle to capture details within extremely dark areas of videos. The absence of such details can significantly impact the effectiveness of low-light video enhancement. In contrast, event cameras offer a visual representation with a higher dynamic range, facilitating the capture of motion information even in exceptionally dark conditions. Motivated by this advantage, we propose the Real-Event Embedded Network for low-light video enhancement. To better utilize events for enhancing extremely dark regions, we propose an Event-Image Fusion module, which can identify these dark regions and enhance them significantly. To ensure the temporal stability of the video and restore details within extremely dark areas, we design an unsupervised temporal consistency loss and a detail contrast loss. Alongside the supervised loss, these loss functions collectively contribute to the semi-supervised training of the network on unpaired real data. Experimental results on synthetic and real data demonstrate the superiority of the proposed method over state-of-the-art methods.
AB - Due to the limitations of sensors, traditional cameras struggle to capture details within extremely dark areas of videos. The absence of such details can significantly impact the effectiveness of low-light video enhancement. In contrast, event cameras offer a visual representation with a higher dynamic range, facilitating the capture of motion information even in exceptionally dark conditions. Motivated by this advantage, we propose the Real-Event Embedded Network for low-light video enhancement. To better utilize events for enhancing extremely dark regions, we propose an Event-Image Fusion module, which can identify these dark regions and enhance them significantly. To ensure the temporal stability of the video and restore details within extremely dark areas, we design an unsupervised temporal consistency loss and a detail contrast loss. Alongside the supervised loss, these loss functions collectively contribute to the semi-supervised training of the network on unpaired real data. Experimental results on synthetic and real data demonstrate the superiority of the proposed method over state-of-the-art methods.
KW - extremely dark
KW - low-light
KW - real event
KW - video enhancement
UR - https://www.scopus.com/pages/publications/85209778940
U2 - 10.1145/3664647.3681370
DO - 10.1145/3664647.3681370
M3 - Conference contribution
T3 - MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
SP - 4805
EP - 4813
BT - MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 28 October 2024 through 1 November 2024
ER -