The engineering challenges of implementing the new generative models our research will develop are so complex that we decided to create a new platform, from the ground up, as open source. The following Motivation sections explain several additional factors that supported this decision. We conclude with a description of the core platform components.
We believe that such a broad vision and set of requirements can only be realized by a collaborative open-source community of coders, contributors, and users.
Musical building blocks with generative potentials
At the core of our platform will be a new model of musical entities and their generative potentials. Current music representations typically model musical events as some combination of audio, extensions of the MIDI format, and higher-level structures modeled on the recording studio. Generative models of musical events will be far more complex and require a high degree of flexibility and interoperability. We will support a much richer variety of features, not only musical ones (pitch, rhythm) but also features relating to generative potentials and to controlling the experience. We will have to deal with the far more complex musical structures that are a natural byproduct of generative potentials and processes. Generative models will also have to operate with additional models, such as context, which is required to modify generative potentials to account for differences in musical style or artist personalization.
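As a rough illustration of this kind of entity model, consider the minimal sketch below. Every name in it (Feature, Context, MusicalEvent, realize, the "intensity" parameter) is hypothetical, invented for this example; it is not a committed API, only one way such entities, generative potentials, and contexts could relate.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A named musical or generative feature, e.g. pitch or rhythm."""
    name: str
    value: float

@dataclass
class Context:
    """Modifies generative potentials, e.g. for style or personalization."""
    style: str
    parameters: dict = field(default_factory=dict)

@dataclass
class MusicalEvent:
    """A musical entity carrying conventional and generative features."""
    features: list               # list of Feature
    generative_potential: dict   # e.g. {"variation": 0.5}

    def realize(self, context: Context) -> "MusicalEvent":
        # Purely illustrative: the context scales the generative potential,
        # leaving the original event untouched.
        scale = context.parameters.get("intensity", 1.0)
        scaled = {k: v * scale for k, v in self.generative_potential.items()}
        return MusicalEvent(self.features, scaled)
```

A style context could then reshape the same event, e.g. `event.realize(Context("baroque", {"intensity": 2.0}))`, without mutating the source material.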
Bridging across Artistic Biases
Every artist has a unique approach to creation, such as notation, graphic sketches, improvisation, algorithms, graphic manipulation, chance, or collaboration. We call each such approach a bias. From an artistic viewpoint all biases are equally valid; from a psychological perspective a bias might be considered a reflection of a personality. While many composers have a single preferred bias, many also find other biases useful. The engineering implication is that for the artist to maintain a uniform flow of musical thinking, the system must support seamless movement between such biases.
State-of-the-art technology is extremely biased in this sense. For example, Pro Tools, Ableton Live, Max, and SuperCollider each support a different bias. We do not want to replace any of these technologies; in fact, we would like to leverage them. But we would like to provide a meta-approach that enables a fluid transition between such biases.
Rapidly Evolving Concepts
New or revised models developed in our research imply new or improved musical concepts for artists to work with. But they must be implemented in our framework before they can be used. Only then can artists test them within an environment they are already familiar with. By doing so they also provide the researchers with data and feedback that are essential to progress. But this cycle can be tricky. First, programming itself may inadvertently introduce side effects that change the intended characteristics of a concept. Second, any change to a system component must also fully integrate with the other components of the overall system.
We expect rapid innovation and therefore need to support rapid invent-implement-consume-feedback cycles (not unlike the requirements of an interactive system that gives a composer rapid, if not real-time, experimentation cycles). All this implies a highly modular approach to the framework's core data structures and algorithms. For example, as we learn and model musical building blocks and their generative forces, we may find that the software implementations of feature types, event representations, and generative forces must also be updated. No existing framework supports this effectively.
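One common way to keep such implementations swappable is a versioned registry, so a revised concept can be dropped in without touching the core or its consumers. The sketch below is only an assumption about how this could look; ConceptRegistry and the "rhythm_model" concept are hypothetical names, not part of any existing design.

```python
# Hypothetical sketch: a registry that lets revised concept implementations
# replace earlier ones without modifying the code that consumes them.

class ConceptRegistry:
    def __init__(self):
        self._impls = {}

    def register(self, concept: str, version: int, impl):
        """Store an implementation under (concept, version)."""
        self._impls[(concept, version)] = impl

    def latest(self, concept: str):
        """Return the highest-versioned implementation of a concept."""
        versions = [v for (c, v) in self._impls if c == concept]
        return self._impls[(concept, max(versions))]

registry = ConceptRegistry()
registry.register("rhythm_model", 1, lambda value: value)       # baseline
registry.register("rhythm_model", 2, lambda value: value * 2)   # revised concept
```

Consumers call `registry.latest("rhythm_model")` and automatically pick up the revised version, which keeps the invent-implement-consume-feedback loop short.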
Supporting a Creative Flow: think music
To increase the likelihood of artists adopting our tools, we follow a guiding mantra: "I can think music, therefore I can create music". In other words, our paradigms provide artists with new concepts through which they can think creatively while working in a symbiotic partnership with technology that produces output close enough to the composer's expectations not to break the creative flow. This, of course, is an ideal that can never be truly achieved. Our goal is therefore not to create a scientifically perfect "I think"/"I create" interface, but merely to provide a good enough starting point that artists will find both useful and inspiring in creative explorations.
We are aiming for a wide range of users in varied domains. In music composition, for example, we expect not only composers with different biases, but also composers with a varying range of expertise and experience. Beyond introducing new generative mechanisms and controls over experience and structure, we plan to expand into other time-based media, utilize new controls or AR/VR technologies, and experiment with new creative flows and user-experiences. We also hope our work will be picked up by other domains, such as therapy, education, or research.
This implies a new kind of back-end modularity, flexibility, and scalability, as well as a new kind of front-end fluidity between different interfaces that can also be highly personalized. For example, we foresee that even users who share the same bias may choose radically different GUIs or gravitate toward very different approaches to sensors, controllers, or audio-visual interconnections. Again, such modularity is not supported by existing frameworks and requires a carefully thought-out architecture of both back-end and front-end components.
Core Platform Components
A common back-end music OS
This could be jargoned a "Linux for music". This layer defines the core data structures and representations of musical entities: events, features (parameters), generative forces, time, synchronization, controls, and collaboration. It also defines the APIs that other components consume and that simplify future extension, and it provides primitive mechanisms supporting the creative workflow, including non-destructive paths of exploration.
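To make "non-destructive paths of exploration" concrete, one possible primitive is a branching snapshot history in which every edit produces a new state and all earlier states remain reachable. The Exploration class below is an illustrative assumption, not the platform's actual mechanism.

```python
# Hypothetical sketch of non-destructive exploration: each edit appends a new
# snapshot, so earlier states are never overwritten and remain reachable.

class Exploration:
    def __init__(self, state):
        self._snapshots = [state]   # snapshot 0 is the initial state

    def edit(self, transform):
        """Apply a transform non-destructively; returns the new snapshot id."""
        new_state = transform(self._snapshots[-1])
        self._snapshots.append(new_state)
        return len(self._snapshots) - 1

    def snapshot(self, snapshot_id):
        """Retrieve any earlier or current state by id."""
        return self._snapshots[snapshot_id]
```

An artist could transpose a phrase, dislike the result, and return to snapshot 0 with nothing lost; richer designs would add branching rather than a single linear history.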
A common front-end experience OS
This component is directly responsible for supporting artists in their creative expression, providing a focused, fluid, and productive creative experience. This includes front-end interfaces, sensors and controllers, audio-visual interconnections, and the mechanisms and behaviors that together support a fluid creative experience. It goes well beyond GUI interchangeability or mashups and is tightly coupled with back-end components. Special attention is paid to the temporal aspects of creation and to how they can be utilized in non-destructive paths of exploration.
A new common music file structure
This could be jargoned a "PDF for music". Current music formats are either proprietary or support a combination of formats that have not been updated for decades, namely MIDI and audio. As we will significantly broaden the definitions and types of media events, time, controls, synchronization, etc., we require a new file format that takes advantage of these features. This format is especially required for the new kind of open collaboration environment we envision (see below). We expect to support data-interchange utilities for most common existing frameworks.
A cloud Engine
This is where users will come to use our software. It will leverage both our unique capabilities and the new features of our music file structure.
Apps
These either provide capabilities that are hard to achieve in a cloud environment or offer a more limited feature set for more specialized applications.
An open collaboration environment
A social environment where users can share their creations, collaborate on new creations, or just consume existing works. As our creative capabilities will be ever-expanding, and as our music file structure is open, we expect that this is where new modes of large-scale social interactions may occur.