Shared Frequencies
Interactive simulation where communicating agents shift their signal frequencies to avoid interference, inspired by how animals adapt their calls within crowded environments.
Created during:

Artist
Ray McClure | Creative Programmer
Project Summary
This project began with a conversation in a crowded room where a friend proposed speaking in a higher pitch to cut through the noise. That moment sparked a line of thinking about how animals already adapt their communication in similar situations. Many species dynamically shift the frequency of their calls depending on environmental noise, competing signals, or changing conditions.
Humans also compete for communication bandwidth, but in very different ways. Large portions of the electromagnetic spectrum are carefully regulated and allocated to specific technologies such as radio, Wi-Fi, and cellular networks like 5G. These frequencies are valuable infrastructure, tightly managed and often owned or licensed by governments and telecommunications companies.
This contrast raised an interesting question: while animals fluidly adapt their signals within shared acoustic environments, human communication channels are often fixed and controlled. As our environments become increasingly saturated with devices, sensors, and intelligent systems, could our own forms of communication begin to adapt more dynamically as well?
Shared Frequencies explores this idea through a simulated communication ecosystem. The prototype is a browser-based WebGL environment where simple agents communicate through call-and-response signals. Visitors can introduce their own voice through a microphone, disrupting the system and forcing the agents to scatter and shift their signals across the frequency spectrum.
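The core behavior described above, agents shifting away from frequencies that are too crowded, can be sketched in plain JavaScript. This is an illustrative reconstruction, not the prototype's actual code; the names (`retune`, `MIN_HZ`, `CLEARANCE`) and numeric values are assumptions chosen for clarity.

```javascript
const MIN_HZ = 200;   // assumed bottom of the shared spectrum
const MAX_HZ = 2000;  // assumed top of the shared spectrum
const CLEARANCE = 50; // how close (in Hz) another signal must be to count as interference

// Given an agent's current frequency and the frequencies of every other
// signal in the environment (including a visitor's voice), return a new
// frequency when interference is detected, otherwise keep the current one.
function retune(freq, others) {
  const crowded = others.some(f => Math.abs(f - freq) < CLEARANCE);
  if (!crowded) return freq;

  // Scan candidate frequencies and pick the one farthest from all other signals.
  let best = freq;
  let bestGap = -Infinity;
  for (let f = MIN_HZ; f <= MAX_HZ; f += 10) {
    const gap = Math.min(...others.map(o => Math.abs(o - f)));
    if (gap > bestGap) {
      bestGap = gap;
      best = f;
    }
  }
  return best;
}
```

Calling `retune` once per frame for each agent produces the scatter-and-migrate behavior: an agent holding 440 Hz next to a 450 Hz neighbor jumps to the emptiest region of the band, while an agent with no nearby signals stays put.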
The piece uses this playful interaction as a speculative lens on how communication might evolve in environments increasingly populated by machines, sensors, and digital companions.
Vision
Shared Frequencies imagines communication as an adaptive ecology rather than a fixed channel. Many forms of intelligence already inhabit the same communication environments. The work brings this shared spectrum into focus and asks how individuals might become more capable participants within it.
Exploration
I am exploring how a simple audiovisual system can make the dynamics of communication and frequency adaptation tangible. Rather than explaining how animals shift their signals to avoid interference, the project allows visitors to experience these behaviors directly as signals collide, scatter, and migrate across the spectrum.
Collaboration
Before the hackathon began, hearing other participants share their ideas and project proposals, many of which engaged with the natural world, meaningfully influenced how this concept took shape. Those conversations helped frame the project as a speculative communication ecology rather than simply a technical experiment.
During the hackathon itself, I also participated as a mentor and kept myself available to support other participants and exchange ideas. While the prototype was developed independently, the broader dialogue and curiosity of the group helped shape the direction of the work.
This project had been sitting in my notebook as a potential installation idea for quite some time. Collaborating with Claude Code made it possible to finally prototype the concept quickly. Claude generated much of the initial WebGL structure under my direction, and I refined the behaviors through prompts and hand editing.
The first working prototype emerged in roughly three hours, followed by several shorter refinement sessions.
White Mirror
Shared Frequencies reflects a hopeful view of technology as something that can expand communication rather than constrain it. Throughout history, technologies have both figuratively and literally amplified the human voice, from radio and telecommunications to the networks that carry our signals today. As AI systems and digital companions begin sharing our environments, new forms of cross-mind communication may emerge, helping us bridge communication barriers and negotiate signal space with many kinds of intelligences.
In that sense, the project imagines a future where communication might extend beyond symbolic language alone. Signals could become more adaptable, more collaborative, and more widely understood, emerging from shared environments rather than fixed vocabularies. Rather than replacing human expression, technology may help amplify and transform it, allowing communication to evolve into something more fluid, ecological, and participatory.
Art and Technology
WebGL and Web Audio allow the project to visualize communication as a living system. Signals appear, collide, migrate, and reorganize across a visualized frequency spectrum, where the vertical axis represents communication frequency. As visitors introduce sound through the microphone, their voice becomes a visible signal within the environment, disrupting the existing patterns and forcing agents to migrate in search of clearer channels.
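In a Web Audio pipeline, the visitor's voice would typically arrive as bin magnitudes from an `AnalyserNode`; placing it on the spectrum axis means finding its dominant frequency. The sketch below shows one way to do that. It is a minimal assumption-laden sketch, not the project's implementation: the sample rate, FFT size, and silence threshold are illustrative, and a real `AnalyserNode` supplies `frequencyBinCount = fftSize / 2` bins spanning 0 to half the sample rate.

```javascript
// Estimate the dominant frequency of a microphone signal from an
// AnalyserNode-style byte spectrum (a Uint8Array of bin magnitudes,
// as filled by analyser.getByteFrequencyData()).
function dominantFrequency(bins, sampleRate = 44100, fftSize = 2048) {
  let peak = 0;
  let peakIndex = -1;
  for (let i = 0; i < bins.length; i++) {
    if (bins[i] > peak) {
      peak = bins[i];
      peakIndex = i;
    }
  }
  if (peakIndex < 0 || peak < 16) return null; // too quiet: nothing to place
  return peakIndex * sampleRate / fftSize;     // bin index → frequency in Hz
}
```

The returned frequency can then be mapped to the vertical axis of the visualization, so the voice appears as one more signal among the agents, and fed into the agents' interference checks so they migrate away from it.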
Depth
The deeper question is whether human communication might also evolve in response to increasingly crowded signal environments. As AI voice systems and digital companions become more common, communication itself may expand beyond the narrow channels we currently use.
Tools & Materials Used
Software / AI / Data
WebGL
JavaScript
Web Audio API
Claude Code
Hardware / Sensors
Microphone
Project Demo / How to Experience It
Open the prototype and allow the simulated inhabitants to communicate for a few seconds. Then speak, hum, or make noise into the microphone. The signals will scatter and retune as agents search for clearer frequencies.
Project link:
https://shared-frequencies-ac63ae0b271a.herokuapp.com/
https://shared-frequencies-ac63ae0b271a.herokuapp.com/index_swarm.html
What’s Next?
The next step is to develop Shared Frequencies into a physical installation, potentially as a large LED wall with spatialized sound where visitors can introduce signals into a living communication ecosystem. I’m interested in refining the behavioral rules and interactions between the agents so their responses to interference feel more nuanced and ecological.
Future iterations may introduce species-level behaviors and grouping where different inhabitants occupy distinct frequency bands, communicate in clusters, or adapt collectively to changing signal environments. These additions could make the simulation feel less like individual sprites reacting to noise and more like a layered acoustic ecology.
I’m also interested in introducing AI into the simulation itself, allowing intelligent agents to participate in the communication system and evolve the rules that govern how signals propagate, compete, and adapt. Rather than fixed behaviors, these agents could learn to negotiate signal space in more complex ways.
Longer term, the project could evolve into a speculative platform for exploring how humans, animals, and intelligent systems might share communication space in increasingly dense signal environments.