Abstract: This paper proposes an image-to-image interior design generation framework based on latent diffusion modeling. Given a single RGB image of an indoor space, the system formulates interior redesign as a conditional generation problem, where structural geometry is preserved while visual attributes are optimized under style-specific constraints. A Stable Diffusion backbone is employed with controlled text-conditioning and spatially consistent sampling to generate multiple décor configurations. The framework incorporates preprocessing for viewpoint normalization and semantic alignment, enabling robustness across varied room layouts. Quantitative and qualitative evaluations demonstrate improved perceptual quality, structural fidelity, and stylistic consistency compared with conventional template-based visualization methods. The results indicate that diffusion-based generative models provide an effective and scalable solution for automated interior design synthesis with minimal human intervention.

Keywords: Latent Diffusion Models; Image-to-Image Translation; Automated Interior Design; Style-Conditioned Generation; Stable Diffusion.
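
The abstract does not include implementation details; the following is a minimal sketch, assuming the Hugging Face diffusers library and an off-the-shelf Stable Diffusion checkpoint, of how style-conditioned image-to-image redesign of the kind described above could be set up. The model ID, prompt, and strength value are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch: style-conditioned image-to-image generation with Stable Diffusion.
# Assumes the Hugging Face `diffusers` library; the checkpoint, prompt, and
# strength value are illustrative placeholders, not the paper's configuration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed off-the-shelf checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Input: a single RGB photograph of the room to be redesigned.
room = Image.open("living_room.jpg").convert("RGB").resize((768, 512))

# Style-specific text condition; `strength` bounds how far the sampler may drift
# from the input latents (lower values preserve more of the room geometry).
results = pipe(
    prompt="Scandinavian-style living room, light wood furniture, soft natural light",
    image=room,
    strength=0.55,
    guidance_scale=7.5,
    num_images_per_prompt=3,   # several décor configurations for the same layout
).images

for i, img in enumerate(results):
    img.save(f"redesign_{i}.png")
```

In this sketch the strength parameter plays the role of the structural-fidelity constraint mentioned in the abstract: it trades off how much of the source geometry is retained against how strongly the text prompt reshapes the scene.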



How to Cite:

[1] G D Gagandeep, M Sri Charan, H Prajwal, Kusammanavar Basavaraj, "Virtual Interior Design Using Stable Diffusion–Based Generative Models," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2026.15113
