headshot of Daniel Flores Garcia ('24)

AILA welcomed Amherst College alumnus Daniel Flores Garcia ('24) back to campus for a richly engaging talk on AI and Music, exploring how technological innovation is reshaping musical tools, creative processes, and cultural representation in music. Flores Garcia, whose work centers on generative systems for rhythm and percussion, opened by tracing a lineage of music-making technologies – from tape recorders and early live-looping practices to digital samplers and the creation of entire genres such as Drum and Bass from a single influential breakbeat.

With the rise of contemporary tools like Suno AI, he noted, anyone can now type a short textual prompt and receive a full audio track in seconds. But text, he emphasized, is a poor medium for describing sound, and most generative audio models inherit biases from their training data. This often results in a narrow range of styles, typically Western pop and EDM, while complex traditions such as Afro-Cuban percussion remain underrepresented.

To address this gap, Flores Garcia introduced Clavenet, a generative MIDI model trained on an augmented dataset of Afro-Cuban rhythms. Clavenet can take a single rhythmic idea and transform it into a fully voiced drum pattern, offering musicians real-time responsiveness and a sense of active musical engagement. “The more engaging an instrument is,” he explained, “the more people are going to use it.” His goal is not to replace drummers but to expand the range of tools available to musicians, much as guitar tube amplifiers continue to coexist with digital circuitry.
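To make that seed-to-pattern idea concrete, here is a minimal, rule-based sketch of the workflow he described: a single clave seed is expanded into a few extra percussion voices and written out as a General MIDI drum track. It assumes the pretty_midi library, and the hand-written voicing rules are hypothetical stand-ins for what a model like Clavenet would learn from data; none of this is Flores Garcia's actual code.

```python
# Illustrative sketch only: a toy, rule-based expansion of a clave seed into
# a multi-voice percussion pattern. This is NOT Clavenet; it just shows the
# kind of input and output described in the talk.
import pretty_midi

STEPS, BPM = 16, 110                       # one bar of sixteenth-note steps
SEC_PER_STEP = 60.0 / BPM / 4              # sixteenth-note duration in seconds

# 3-2 son clave onsets on the 16-step grid: the "single rhythmic idea"
clave = [0, 3, 6, 10, 12]

# Toy voicing rules standing in for the learned model: congas answer on
# even steps the clave leaves open, and a low drum anchors the downbeats.
conga = [s for s in range(STEPS) if s % 2 == 0 and s not in clave]
tumba = [0, 8]

GM_PITCH = {"claves": 75, "open_conga": 63, "low_conga": 64}  # General MIDI percussion keys

pm = pretty_midi.PrettyMIDI(initial_tempo=BPM)
drums = pretty_midi.Instrument(program=0, is_drum=True, name="percussion")

def add_hits(steps, pitch, velocity):
    """Place a short note on the drum track at each grid step."""
    for s in steps:
        t = s * SEC_PER_STEP
        drums.notes.append(
            pretty_midi.Note(velocity=velocity, pitch=pitch, start=t, end=t + 0.1)
        )

add_hits(clave, GM_PITCH["claves"], 110)
add_hits(conga, GM_PITCH["open_conga"], 90)
add_hits(tumba, GM_PITCH["low_conga"], 100)

pm.instruments.append(drums)
pm.write("voiced_clave_pattern.mid")       # one bar: clave seed plus derived voices
```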

One of the most compelling moments of the event came during moderator Ravi Krishnaswami’s question about musical ‘feel.’ Krishnaswami asked whether Clavenet’s dataset might eventually incorporate transcriptions of live recordings in order to capture the subtle timing, looseness, and human expressiveness that define many rhythmic traditions.

Flores Garcia agreed this was essential. He noted that once generative models can represent “the feel of the tradition itself,” they can offer a new lens on musical nuance, not by perfectly imitating human performers, but by revealing patterns and structures deeply embedded within the music. This exchange underscored Clavenet’s dual identity as both an analytical tool and a creative instrument.
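As a rough illustration of what "feel" looks like as data, the sketch below compares a handful of invented onset times, as if transcribed from a live recording, against a quantized sixteenth-note grid and reports each hit's deviation in milliseconds. This is only one simplified way such microtiming could be encoded in a training set, not a description of how Clavenet's dataset is built.

```python
# Illustrative sketch only: "feel" expressed as per-onset deviations (in
# milliseconds) from a quantized grid. The onset times are made up.
BPM = 110
SEC_PER_STEP = 60.0 / BPM / 4                     # sixteenth-note grid spacing

# Hypothetical onset times (seconds) transcribed from a live performance
performed = [0.000, 0.415, 0.808, 1.372, 1.630]

# Snap each onset to its nearest grid step to expose the microtiming
for t in performed:
    step = round(t / SEC_PER_STEP)
    deviation_ms = (t - step * SEC_PER_STEP) * 1000.0
    print(f"step {step:2d}: {deviation_ms:+6.1f} ms off the grid")
```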

The conversation concluded with broader reflections on representation in training data, the risks of encoding cultural traditions inaccurately, and the importance of involving practitioners from those traditions early in the development process. Flores Garcia acknowledged that, although he has studied Afro-Cuban music extensively, true fidelity requires collaboration with musicians rooted in that tradition.

His visit highlighted not only the technical possibilities of generative AI in music but also the cultural responsibilities that come with it. As AI tools become increasingly integrated into creative practice, Flores Garcia’s work offers a thoughtful model for innovation grounded in respect, curiosity, and collaboration.