Patent attributes
A method for generating music is provided, the method comprising: receiving, on a capacitive touch-sensitive interface such as a keyboard, multi-finger gesture inputs having a first component and a second component, wherein the second component has a temporal evolution, such as speed; determining the onset of an audio signal, such as a tone, based on the first component; analyzing the temporal evolution of the second component to determine MIDI or Open Sound Control (OSC) instructions; and modifying the audio signal based on the instructions, in particular by decoupling the temporal relationships among specific gesture inputs (e.g., at key onset, during a note, and upon key release), thereby mapping gesture and motion inputs so as to obtain musical effects previously unachievable with music synthesizers.
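The claimed mapping can be illustrated with a minimal sketch, assuming standard MIDI message semantics: the first gesture component triggers the note onset (note-on/note-off), while the temporal evolution of the second component is emitted as continuous controller (CC) messages while the note sounds, decoupled from press and release. The class and parameter names (`GestureMapper`, `pressure`, `speed`) are hypothetical, not from the claim.

```python
def clamp7(x):
    """Clamp a value into the 7-bit MIDI data range 0..127."""
    return max(0, min(127, int(x)))

class GestureMapper:
    """Hypothetical mapper from a two-component touch gesture to MIDI bytes.

    First component  -> audio-signal onset (note-on) and release (note-off).
    Second component -> temporal evolution (e.g. finger speed), sent as
    continuous-controller (CC) messages while the note is held, so the
    modulation is decoupled in time from key onset and key release.
    """

    def __init__(self, channel=0, cc_number=1):
        self.channel = channel & 0x0F   # MIDI channels 0..15
        self.cc_number = cc_number      # CC 1 = modulation wheel

    def on_touch_down(self, note, pressure):
        # First gesture component determines the onset of the tone.
        return (0x90 | self.channel, clamp7(note), clamp7(pressure * 127))

    def on_touch_move(self, speed):
        # Second component: its temporal evolution is analyzed and
        # re-emitted as a CC message that modifies the sounding note.
        return (0xB0 | self.channel, self.cc_number, clamp7(speed * 127))

    def on_touch_up(self, note):
        # Key release ends the tone independently of any in-note motion.
        return (0x80 | self.channel, clamp7(note), 0)
```

For example, a press at middle C with full pressure yields the note-on bytes `(0x90, 60, 127)`, and subsequent finger motion streams CC messages that a synthesizer can route to any modulation target.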