VOLUME 15, ISSUE 3, MARCH 2026
This work is licensed under a Creative Commons Attribution 4.0 International License.
Temporal Continuity in Low-Light Environments: A Ghosting-Resistant Video Enhancement Framework
K. Mithun Rithick, M. Nowshad, Dr. G. Maria Priscilla
DOI: 10.17148/IJARCCE.2026.15318
Abstract: Low-light video enhancement has made considerable progress through deep learning, yet a persistent and often underappreciated failure mode remains: the introduction of ghosting artifacts across successive frames when scene content or camera motion causes temporal misalignment. This paper presents a Ghosting-Resistant Video Enhancement Framework (GR-VEF) that addresses temporal discontinuity as a first-class concern rather than a post-hoc correction. The proposed architecture couples an illumination-adaptive frequency decomposition module with a motion-aware temporal fusion network, coordinated through what we term a Coherence Gating mechanism. Unlike frame-by-frame enhancement pipelines, GR-VEF explicitly models inter-frame dependencies at multiple temporal scales, penalising enhancement choices that introduce perceptible flickering or double-edge artefacts even when individual frame quality metrics improve. On synthetic low-light sequences derived from the LOL-Video and SMID datasets, and on a purpose-built evaluation corpus of real surveillance footage captured at 0.1–3 lux, GR-VEF achieves a PSNR improvement of 2.1–3.4 dB over the nearest competing method while reducing the Ghosting Artifact Index (GAI) by 38–52 percent. Qualitative inspection confirms substantially smoother temporal transitions, particularly in scenes with fast-moving foreground objects, which historically represent the hardest case for enhancement methods that rely on naively aligned reference frames.
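The headline gains above are reported in PSNR. For readers unfamiliar with the metric, the following is a minimal sketch of the standard frame-wise PSNR computation in Python; the Ghosting Artifact Index (GAI) is the authors' own measure and is not reproduced here.

```python
import numpy as np

def psnr(reference: np.ndarray, enhanced: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped frames.

    PSNR = 10 * log10(peak^2 / MSE), where MSE is the mean squared
    pixel difference. Higher is better; identical frames give +inf.
    """
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: two 4x4 frames whose pixels differ by a constant 10 levels
# give MSE = 100 and PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
a = np.zeros((4, 4))
b = np.full((4, 4), 10.0)
print(round(psnr(a, b), 2))  # → 28.13
```

A per-frame PSNR can be averaged over a clip, but note the abstract's point: per-frame scores can improve even while temporal artefacts such as flicker worsen, which is why a dedicated temporal metric like GAI is used alongside PSNR.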
Keywords: low-light video enhancement, ghosting artefacts, temporal coherence, deep learning, video processing, illumination normalisation, motion-aware fusion.
How to Cite:
[1] K. Mithun Rithick, M. Nowshad, Dr. G. Maria Priscilla, "Temporal Continuity in Low-Light Environments: A Ghosting-Resistant Video Enhancement Framework," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2026.15318
