LuminAI is an interactive art installation in which participants improvise movement together with an AI dance partner projected onto a screen. The virtual agent segments users' motion into gestures. The agent learns these gestures and then reasons about them using both bottom-up learned knowledge (in the form of unsupervised learning algorithms that cluster similar gestures together) and top-down domain knowledge (in the form of encodings of the Laban Movement Analysis framework). The agent uses this knowledge to choose a relevant response to display. Through this expressive, movement-based interactive experience, the LuminAI research project explores questions in computational creativity, cognitive science, and dance.
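The description above does not specify which unsupervised algorithm the agent uses to group similar gestures. As a minimal illustrative sketch, assuming each segmented gesture has been reduced to a small feature vector (the two features here, and the k-means choice, are hypothetical), bottom-up clustering of gestures might look like:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Cluster gesture feature vectors into k groups (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    # Initialize cluster centers from k randomly chosen gestures.
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each gesture to its nearest cluster center.
        dists = np.linalg.norm(features[:, None] - centers[None, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned gestures.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical per-gesture features, e.g. mean joint velocity and spatial extent.
gestures = np.array([
    [0.10, 0.20], [0.15, 0.25], [0.90, 0.80],
    [0.85, 0.90], [0.12, 0.22], [0.95, 0.85],
])
labels, centers = kmeans(gestures, k=2)
```

In a system like this, the resulting cluster labels would let the agent treat unseen gestures as variations of ones it has already learned, while top-down Laban-style annotations could further constrain which cluster a response is drawn from.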