Kris is a Mentor at the CODAME ART+TECH Festival (2026)
Creative technologist and researcher working at the edge of machine imagination, performance, and sensory design. Based between Cambridge and Atlanta, his practice blends experimental AI architectures with embodied interfaces to interrogate how synthetic systems can dream, feel, and evolve new ontologies. He is a Research Affiliate at MIT’s Spatial Sound Lab and Artist-in-Residence at Georgia State University’s Creative Media Industries Institute. Recent projects include PIP: Purposefully Induced Psychosis, presented at CHI 2025 and the Tools for Thought Workshop; AI JOE, an emergent digital mind with a viseme-driven avatar; and Aletheia, a recursive self-modeling AI that visualizes its shifting semantics and affect. His immersive work has been featured at CCCB Barcelona, Ars Electronica, VRHAM, and Cannes XR, with commissions and showcases across museums and festivals. Pilcher is a winner of MIT StageHack, a three-time MIT Reality Hack awardee, and an Oculus Start and Launchpad developer. He lectures widely on XR, AI art, and posthuman design, and has collaborated with Berkeley SETI, Roscosmos partners, and academic labs focused on virtual humans.



![ART+TECH Festival [2026]](https://substackcdn.com/image/fetch/$s_!xMBg!,w_140,h_140,c_fill,f_auto,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a6d59a4-0891-4053-8aa3-ba2baa30162d_714x391.jpeg)
The PIP project sounds fascinating - love seeing XR used to explore altered states of perception rather than just slapping a headset on existing content. The semantic visualization in Aletheia reminds me of some early attempts at debugging transformer attention patterns, but taking it into immersive space adds a whole different dimension. One challenge I've run into with embodied AI interfaces is that the uncanny valley gets way worse in VR than on flat screens - curious how you're handling that with AI JOE's viseme system. The MIT Spatial Sound collab makes sense for this kind of work; I'm not sure whether binaural rendering helps bridge that gap or just makes it more obvious when something feels off.