International Journal of Advanced Research in Computer and Communication Engineering
A monthly peer-reviewed and refereed journal
ISSN Online: 2278-1021 | ISSN Print: 2319-5940 | Since 2012
IJARCCE adheres to the suggestive parameters outlined by the University Grants Commission (UGC) for peer-reviewed journals, upholding high standards of research quality, ethical publishing, and academic excellence.
Volume 15, Issue 3, March 2026

Temporal Continuity in Low-Light Environments: A Ghosting-Resistant Video Enhancement Framework

K. Mithun Rithick, M. Nowshad, Dr. G. Maria Priscilla

DOI: 10.17148/IJARCCE.2026.15318
Abstract: Low-light video enhancement has made considerable progress through deep learning, yet a persistent and often underappreciated failure mode remains: ghosting artefacts across successive frames when scene content or camera motion causes temporal misalignment. This paper presents a Ghosting-Resistant Video Enhancement Framework (GR-VEF) that treats temporal discontinuity as a first-class concern rather than a post-hoc correction. The proposed architecture couples an illumination-adaptive frequency decomposition module with a motion-aware temporal fusion network, coordinated through what we term a Coherence Gating mechanism. Unlike frame-by-frame enhancement pipelines, GR-VEF explicitly models inter-frame dependencies at multiple temporal scales, penalising enhancement choices that introduce perceptible flickering or double-edge artefacts even when individual frame quality metrics improve. On synthetic low-light sequences derived from the LOL-Video and SMID datasets, and on a purpose-built evaluation corpus of real surveillance footage captured at 0.1–3 lux, GR-VEF achieves a PSNR improvement of 2.1–3.4 dB over the nearest competing method while reducing the Ghosting Artifact Index (GAI) by 38–52 percent. Qualitative inspection confirms substantially smoother temporal transitions, particularly in scenes with fast-moving foreground objects, which historically represent the hardest case for enhancement methods that rely on naively aligned reference frames.

Keywords: low-light video enhancement, ghosting artefacts, temporal coherence, deep learning, video processing, illumination normalisation, motion-aware fusion.
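The abstract does not detail how Coherence Gating suppresses temporal fusion where frames disagree. The sketch below is an illustrative assumption, not the authors' implementation: a per-pixel gate derived from the disagreement between the current frame and a motion-compensated previous frame, where high disagreement (motion or misalignment) pushes the output toward the single-frame result so that fusion cannot introduce ghosting. All function names and the Gaussian gating form are hypothetical.

```python
import numpy as np

def coherence_gate(curr: np.ndarray, prev_warped: np.ndarray,
                   tau: float = 0.08) -> np.ndarray:
    """Per-pixel gate in [0, 1] over an (H, W, 3) frame pair in [0, 1].

    Near 1 where the warped previous frame agrees with the current
    frame, near 0 where they disagree, so temporal fusion is suppressed
    exactly where it would cause ghosting. A Gaussian falloff with
    scale tau is an assumed, illustrative choice."""
    diff = np.abs(curr - prev_warped).mean(axis=-1, keepdims=True)
    return np.exp(-(diff / tau) ** 2)

def gated_fusion(curr_enhanced: np.ndarray,
                 prev_enhanced_warped: np.ndarray,
                 curr: np.ndarray,
                 prev_warped: np.ndarray) -> np.ndarray:
    """Blend a temporally fused estimate with the single-frame result,
    weighted per pixel by the coherence gate."""
    g = coherence_gate(curr, prev_warped)
    temporal = 0.5 * (curr_enhanced + prev_enhanced_warped)
    # Where g -> 0 (motion), fall back to the frame-wise enhancement.
    return g * temporal + (1.0 - g) * curr_enhanced
```

In a static region the gate saturates to 1 and the two enhanced frames are averaged, which damps flicker; on a fast-moving foreground object the gate collapses to 0 and the current frame passes through untouched, which is the behaviour the abstract describes for its hardest case.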
πŸ‘ 35 views
This work is licensed under a Creative Commons Attribution 4.0 International License.

How to Cite:

[1] K. Mithun Rithick, M. Nowshad, G. Maria Priscilla, β€œTemporal Continuity in Low-Light Environments: A Ghosting-Resistant Video Enhancement Framework,” International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), vol. 15, no. 3, March 2026. DOI: 10.17148/IJARCCE.2026.15318
