Speaker
Description
In the context of digital transformation across industries such as interior design, real estate, and warehouse management, there is a growing demand for accessible, efficient, and accurate 3D spatial data acquisition. Traditional methods using LiDAR or structured light systems offer high precision but are often cost-prohibitive and technically demanding. This research introduces a novel and scalable system that enables users to generate full 3D reconstructions of indoor environments using only a dual-camera smartphone paired with a centralized image-processing server.
The system comprises two main components: an Android application for panoramic image capture and metadata extraction, and a server that performs image enhancement, monocular depth estimation with a customized GLPDepth model, and point cloud generation via Open3D. Final 3D models are calibrated against user-validated scale references and can be exported to CAD formats or visualized in VR-ready environments. This architecture delivers consistent performance while preserving model fidelity through advanced preprocessing and focus-based depth refinement. Additional features include integration with AI-powered interior design tools and multi-user request management via timestamped asynchronous processing.
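To make the server-side step more concrete, the sketch below shows how a predicted depth map can be back-projected into a colored point cloud with Open3D and then rescaled against a user-validated reference. This is a minimal illustration under stated assumptions, not the project's actual code: the camera intrinsics, file paths, and the `apply_reference_scale` helper are placeholders, and the depth map is assumed to come from the depth model (e.g. the customized GLPDepth network).

```python
# Minimal sketch: predicted depth map -> colored point cloud -> metric rescaling.
# Intrinsics, paths, and helper names are illustrative assumptions.
import numpy as np
import open3d as o3d


def depth_to_point_cloud(color_path: str, depth: np.ndarray,
                         fx: float, fy: float,
                         cx: float, cy: float) -> o3d.geometry.PointCloud:
    """Back-project a per-pixel depth map (in metres) into a colored point cloud."""
    color = o3d.io.read_image(color_path)
    # Store depth in millimetres as a 16-bit image, as Open3D expects.
    depth_img = o3d.geometry.Image((depth * 1000.0).astype(np.uint16))
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth_img, depth_scale=1000.0, depth_trunc=10.0,
        convert_rgb_to_intensity=False)
    h, w = depth.shape
    intrinsic = o3d.camera.PinholeCameraIntrinsic(w, h, fx, fy, cx, cy)
    return o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)


def apply_reference_scale(pcd: o3d.geometry.PointCloud,
                          measured_m: float, reference_m: float) -> o3d.geometry.PointCloud:
    """Rescale the cloud so a user-validated reference (e.g. a door of known
    height) matches its true metric size."""
    pcd.scale(reference_m / measured_m, center=pcd.get_center())
    return pcd
```

In practice the scaled cloud would then be meshed or exported (for example via `o3d.io.write_point_cloud`) before conversion to CAD or VR formats.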
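The multi-user request handling mentioned above could, for instance, be organized as shown in the following sketch: each incoming capture session is tagged with a submission timestamp and processed asynchronously in order. The queue layout, field names, and worker loop are assumptions for illustration, not the system's actual implementation.

```python
# Minimal sketch of timestamped, asynchronous multi-user request handling.
# Class, field, and function names are illustrative placeholders.
import asyncio
import time
from dataclasses import dataclass, field


@dataclass(order=True)
class ReconstructionRequest:
    timestamp: float                                    # submission time, used for ordering
    user_id: str = field(compare=False)
    image_paths: list = field(compare=False, default_factory=list)


async def submit(queue: asyncio.PriorityQueue, user_id: str, image_paths: list) -> None:
    """Tag an incoming capture session with a timestamp and enqueue it."""
    await queue.put(ReconstructionRequest(time.time(), user_id, image_paths))


async def worker(queue: asyncio.PriorityQueue) -> None:
    """Process requests in timestamp order, one reconstruction at a time."""
    while True:
        request = await queue.get()
        # Placeholder for the actual pipeline: enhancement -> depth -> point cloud.
        print(f"Reconstructing for {request.user_id} (submitted {request.timestamp:.0f})")
        queue.task_done()
```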
The proposed solution is fast, cost-effective, and highly adaptable, with potential applications in education, retail, smart homes, and beyond. This project demonstrates how edge-device simplicity and cloud-based intelligence can be leveraged to offer a practical alternative to conventional 3D scanning technologies.