Learning-Based Multimodal Mapping Systems for Real-Time Audio-to-Light Control in Immersive Environments
Design and evaluate a system that learns to map audio features and human affective responses to DMX lighting patterns and visuals, enabling it to generate emotionally intelligent lighting designs in real time for music, performance, or therapeutic settings.
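As a minimal illustration of the audio-feature-to-DMX mapping stage, the sketch below extracts three common audio features (RMS loudness, spectral centroid, high-frequency energy) from one buffer of mono audio and scales each into the 0-255 range of a DMX channel. The channel assignments (intensity, hue, strobe rate), the gain constants, and the function name are all hypothetical; a learned system would replace these hand-written mappings with a trained model.

```python
import numpy as np

def audio_to_dmx(samples: np.ndarray, sample_rate: int = 44100) -> list[int]:
    """Map one buffer of mono audio to three hypothetical DMX channels
    (intensity, hue, strobe rate). Feature choices are illustrative."""
    # Loudness: RMS of the buffer, mapped to dimmer intensity.
    rms = float(np.sqrt(np.mean(samples ** 2)))
    intensity = min(255, int(rms * 4 * 255))  # gain of 4 is arbitrary

    # Timbral brightness: spectral centroid, mapped to a colour channel.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    hue = min(255, int(centroid / (sample_rate / 2) * 255))

    # Percussive energy proxy: high-frequency content, mapped to strobe rate.
    hf = float(np.sum(spectrum[freqs > 4000]) / (np.sum(spectrum) + 1e-9))
    strobe = min(255, int(hf * 255))

    return [intensity, hue, strobe]

# Example: a loud 440 Hz tone drives intensity high, hue low, strobe near zero.
t = np.linspace(0, 0.1, 4410, endpoint=False)
frame = audio_to_dmx(0.5 * np.sin(2 * np.pi * 440 * t))
```

In a real-time pipeline this function would run once per audio buffer (~10 ms here) and its output would be written into a DMX universe frame; the learned-mapping and affect-sensing components described above would sit between the feature extraction and the final channel values.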