Computational Music for All: Projects

EarSketch

EarSketch: coding through music

EarSketch is a STEAM (science, technology, engineering, arts, and math) learning intervention that combines a programming environment and API for Python and JavaScript, a digital audio workstation, an audio loop library, and a standards-aligned curriculum to teach introductory computer science with music technology and composition. It seeks to address the imbalance in contemporary society between participation in music-making and music-listening, and a parallel imbalance between computer use and computer programming. It also seeks to engage a diverse population of students in an effort to address long-standing issues with under-representation in both computing and music composition.

EarSketch is free and browser-based. It is widely used in computing and music technology classrooms from elementary school through college, in all 50 U.S. states and in over 100 countries.
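Students build music by placing sounds on a multitrack timeline, programming beats, and applying effects through a small API. The following is a rough sketch of that workflow in JavaScript; the sound constants (EXAMPLE_DRUM_LOOP, EXAMPLE_BASS_LOOP, EXAMPLE_KICK_SOUND) are placeholders rather than actual entries from the EarSketch loop library.

```javascript
// A short EarSketch-style script (JavaScript API). The sound constants
// below are placeholders; in EarSketch, students pick real loop names
// from the built-in sound browser.
init();
setTempo(120);

// Layer two loops on tracks 1 and 2, from measure 1 to measure 9.
fitMedia(EXAMPLE_DRUM_LOOP, 1, 1, 9);
fitMedia(EXAMPLE_BASS_LOOP, 2, 1, 9);

// Program a beat on track 3: "0" triggers the sound, "+" sustains it,
// and "-" is a rest, in sixteenth-note steps.
var beatPattern = "0+0+0-000+0+0-0-";
for (var measure = 1; measure < 9; measure = measure + 1) {
    makeBeat(EXAMPLE_KICK_SOUND, 3, measure, beatPattern);
}

// Turn the bass track down by 6 dB.
setEffect(2, VOLUME, GAIN, -6);

finish();
```

The Python API uses the same function names, so the same musical ideas carry over directly between the two languages.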

Shadows

In Shadows, the pianist reads an open-form score from a laptop screen, choosing his own path through a series of connected musical fragments. At the same time, the laptop listens to the pianist, tracks the decisions he makes about what to play, and constantly updates the score in response. This dialogue between pianist and computer, actuated through a dynamic score, serves to amplify the expressive decisions made by the pianist, to subtly push him in new musical directions, and to create large-scale structural arcs in the music.

Shadows consists of four movements, each of which explores the pianist-computer-score interaction from a different perspective:

I. Traces. The score consists of 12 chords followed by their echoes. The speed at which the pianist moves from chord to chord affects how much of the score is displayed and how much is hidden.

II. Chorale. The pianist plays from a selection of five chords and three embellishment notes. Each time a chord or note is played, its harmonic density and complexity are changed.

III. Perpetual Quiet. The pianist builds arpeggios from a constantly changing set of pitches.

IV. Perpetual Melody. The pianist chooses from a combination of short, rhythmically driven melodic motives and chords. Connections between fragments are added and removed based on how often each fragment is played, as sketched below.
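As a toy illustration of that last idea, and not the actual Antescofo/INScore implementation, the fragment network in a movement like Perpetual Melody can be modeled as a graph whose connections are pruned and extended according to play counts:

```javascript
// Toy model of a dynamic open-form score: fragments are nodes, and the
// connections the pianist may follow are edges. The threshold of 4 plays
// is arbitrary; this only sketches the idea described above.
var fragments = {
    A: { playCount: 0, next: ["B", "C"] },
    B: { playCount: 0, next: ["C"] },
    C: { playCount: 0, next: ["A"] }
};

// Called whenever the score follower hears that a fragment was played.
function fragmentPlayed(name) {
    fragments[name].playCount += 1;

    // Remove connections that lead into heavily played fragments...
    for (var id in fragments) {
        fragments[id].next = fragments[id].next.filter(function (target) {
            return fragments[target].playCount < 4;
        });
    }

    // ...and open a connection from the current fragment toward the
    // least-played fragment, steering the pianist to unexplored material.
    var least = Object.keys(fragments).sort(function (a, b) {
        return fragments[a].playCount - fragments[b].playCount;
    })[0];
    if (least !== name && fragments[name].next.indexOf(least) === -1) {
        fragments[name].next.push(least);
    }
}
```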

Jason Freeman wrote Shadows for pianist Melvin Chen during an artistic research residency at IRCAM in Paris. Many thanks to Arshia Cont and Jean-Louis Giavitto from IRCAM and to Dominique Fober from GRAME for collaborating with Jason to extend their Antescofo and INScore software, respectively, for use in this piece.

DataToMusic

DataToMusic is an API for developing data-agnostic sonification programs

The DataToMusic (DTM) API is a JavaScript library for developing data-agnostic sonification programs and a real-time environment for experimenting with models of musical structure. It enables musicians and researchers to rapidly map data onto musical structures, to explore the boundaries between sonification and musification, and to live code with data.
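The DTM API itself is not reproduced here; purely as an illustration of the data-agnostic mapping idea, the sketch below uses the standard Web Audio API to scale an arbitrary data series onto a pitch range and play it back as a sequence of tones.

```javascript
// Generic sonification sketch using the standard Web Audio API (not the
// DTM API): scale a data series onto a MIDI pitch range and play it back.
function sonify(data, secondsPerPoint) {
    var ctx = new AudioContext();
    var min = Math.min.apply(null, data);
    var max = Math.max.apply(null, data);

    data.forEach(function (value, i) {
        // Normalize the data point, map it to MIDI pitch 48-84,
        // and convert the pitch to a frequency in Hz.
        var normalized = (value - min) / ((max - min) || 1);
        var midi = 48 + normalized * 36;
        var freq = 440 * Math.pow(2, (midi - 69) / 12);

        var osc = ctx.createOscillator();
        var gain = ctx.createGain();
        osc.frequency.value = freq;
        gain.gain.value = 0.2;
        osc.connect(gain);
        gain.connect(ctx.destination);

        var start = ctx.currentTime + i * secondsPerPoint;
        osc.start(start);
        osc.stop(start + secondsPerPoint * 0.9);
    });
}

// Play a short data series as one tone every quarter second.
sonify([3, 7, 2, 9, 4, 8, 1], 0.25);
```

An API like DTM operates at a higher level than this raw parameter mapping, letting the same data be rendered through different musical structure models.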

TuneTable

TuneTable is a responsive tabletop application with a tangible user interface

TuneTable teaches basic computer programming concepts to middle school and high school students (ages 9-16) using physical blocks that work as snippets of code. It incorporates computational elements such as functions, parameters, and nested loops. Users compose short songs by building chains of blocks that represent code. Each block carries a unique pattern on its underside; when a block is placed on the table's acrylic surface, cameras mounted beneath the surface let the software identify it. When the arrangement of blocks is recognized, the application responds with musical and visual feedback.
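As a rough illustration of how a recognized chain of blocks might be interpreted (hypothetical block tokens, not TuneTable's actual code), each block can map to a note or to a control structure such as a loop, with loop blocks repeating the blocks in their body:

```javascript
// Interpret a recognized chain of blocks as a short song. The block
// format here is hypothetical and only illustrates the idea of blocks
// acting as snippets of code, including nested loops.
function interpret(blocks) {
    var notes = [];

    function run(sequence) {
        for (var i = 0; i < sequence.length; i++) {
            var block = sequence[i];
            if (block.type === "note") {
                notes.push(block.pitch);
            } else if (block.type === "loop") {
                // A loop block carries a repeat count and a nested body.
                for (var n = 0; n < block.count; n++) {
                    run(block.body);
                }
            }
        }
    }

    run(blocks);
    return notes;
}

// Example chain: play C4, then repeat (E4, G4) three times.
var chain = [
    { type: "note", pitch: "C4" },
    { type: "loop", count: 3, body: [
        { type: "note", pitch: "E4" },
        { type: "note", pitch: "G4" }
    ] }
];
console.log(interpret(chain)); // ["C4", "E4", "G4", "E4", "G4", "E4", "G4"]
```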
