Inside Musico: A Q&A with the creators
Posted 8 months ago
We put together a series of questions raised by our users, early adopters and other professionals in the audio-visual and music industries, and discussed them with Musico CEO Lorenzo Brusci and other members of the technical team. The result is a fascinating conversation about the role of music and AI in the upcoming years, across various industries and applications.
What would you say are the essential features of Musico?
Musico is a very agile generative music system, combining machine learning techniques with rules-based music generation, and offering a variety of deployment strategies to match as many input strategies as clients can dream of. This hybrid design space makes us very adaptive and resilient, capable of quick scalability and of integrating new input or usability needs.
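To make the "rules-based" half of such a hybrid system concrete, here is a minimal, purely illustrative sketch: a first-order Markov-style melody generator whose transition rule favours stepwise motion and constrains every note to a chosen scale. The scale, weighting rule, and function names are assumptions for illustration, not Musico's actual engine.

```python
import random

# C major scale as MIDI note numbers (60 = middle C).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def transition_weights(current, n_degrees):
    """A simple musical 'rule': stepwise motion is weighted
    more heavily than large leaps between scale degrees."""
    return [1.0 / (1 + abs(current - j)) for j in range(n_degrees)]

def generate_melody(length, seed=0):
    """Generate a melody of `length` notes, all within the scale."""
    rng = random.Random(seed)  # seeded for reproducibility
    degree = 0
    melody = []
    for _ in range(length):
        weights = transition_weights(degree, len(C_MAJOR))
        degree = rng.choices(range(len(C_MAJOR)), weights=weights)[0]
        melody.append(C_MAJOR[degree])
    return melody

melody = generate_melody(8)
print(melody)
```

In a hybrid design like the one described above, a learned model could replace or modulate the hand-written weighting rule while the scale constraint still guarantees musically valid output.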
What sorts of applications of the technology have you experimented with, and which do you see as most likely to make an impact on the way things are done in music or related industries?
I believe Musico’s most promising applications are in biometric-driven automatic music generation, from the healthcare and well-being sectors to sport, where it ignites an immersive, real-time learning process: sonifying biometric states without taking users out of the actual experience, and boosting their immediate awareness of what they are achieving.
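As a toy illustration of what "sonifying biometric states" can mean in practice, the sketch below maps a heart-rate reading onto musical parameters in real time. The physiological range, parameter names, and mapping choices are illustrative assumptions, not Musico's API.

```python
def map_heart_rate(bpm, rest=60, max_hr=180):
    """Map a heart rate (beats per minute) to musical parameters."""
    # Clamp the reading to the expected physiological range.
    bpm = max(rest, min(max_hr, bpm))
    # Normalise effort to the 0..1 range.
    effort = (bpm - rest) / (max_hr - rest)
    return {
        "tempo_bpm": 80 + effort * 80,  # musical tempo from 80 to 160 BPM
        "intensity": round(effort, 2),  # 0.0 (calm) .. 1.0 (peak effort)
        "mode": "major" if effort > 0.5 else "minor",
    }

print(map_heart_rate(120))
```

A real-time system would feed a stream of such readings into the generative engine, so the listener hears their own effort level reflected in the music without ever looking at a screen.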
Another relevant sector is automatic music for multimedia, instant media and games. Adaptive music and soundscape scenarios allow a more immersive experience and enhance engagement and personalisation, both in non-specialist multimedia production and in the dramaturgical mechanics of gaming.
Any industry interested in a huge number of adaptive music scenarios would benefit from engaging with Musico’s licensing and customisation services, especially in this initial phase of our B2B development roadmap (licensing and custom protocols are now fully implementable).
Currently Musico has free musical apps to experiment with, and a lot of interesting research conducted in the field of musical AI. How long has Musico been around? What have been the main innovations that came out of this journey, and where do you see them leading in the near future?
We started from unintentional, biometrically-driven automatic composition, mainly trying to serve the sport industry; that was about three and a half years ago, with the app Musicfit. The main innovations that came along were related to the integration of machine learning with rules-based generative music techniques. This constant musicological, logical and technological challenge characterises our daily efforts at Musico.
We now believe Musico’s biggest claim and horizon is to offer the world of internet-connected, interacting individuals the capacity to speak and communicate through music, as musicians, meta-musicians, sound designers and DJs do, in the quickest and most natural of ways: immediate automatic generation.
This is achieved by serving industries as varied as sport, well-being, instant media, multimedia, virtual retail, VR, AR and gaming.
And let’s not forget: Musico is able to customise input strategies and the variety of musical outcomes, and this is, and will remain, possible thanks to the involvement of a wide community of human professionals: there is no such thing as self-sufficient AI.
We always underline AI’s capacity to open up new professional opportunities, especially in critical areas of human work. The gigantic leap implied by hyper-connectivity, and the adaptive scenarios it requires, raise questions that cannot have purely human answers; augmented, AI-powered human answers, such as those Musico offers, will work them out, including the last mile, where “contextualising” is, and remains, the finest of the human arts.
Musico’s team is made up of engineers, musicians, educators and other industry professionals. Do you guys use your own technology in your artistic or professional practice? And if so, how?
As a musician, I’m impressed by the deep procedural impact I experienced while being assisted by an AI agent: it’s not only a matter of the “folder-factor”; the acceleration in exploring scenarios and jumping to synthetic behaviours is no doubt of great relevance, promptly sound-designing on top of automatic compositional flows.
What impresses me the most is the capacity of the AI assistant, mainly in the form of a plug-in for DAWs and other audio software, to alter my confidence, my ambitions, my sense of being able to dare where I usually wouldn’t, mainly thanks to the accelerated transition between intention and simulation, or the convergence of simulation and experimentation with consistent compositional outcomes.
The team is multifaceted, and members come from different backgrounds and reside in different countries around Europe. How do you keep things moving as a team? How do you think the new events of Spring 2020 and the global lockdowns may affect your own operation as a company or the delivery of your products and services? What do these strange times represent for an innovative music technology startup?
On one side, we should continue our individual, multi-located research and commitments, because they make Musico special and keep it growing, with many kinds of knowledge and practice constantly contributing to the uniqueness of Musico’s technology and services. Although dedication to Musico grows month by month, it’s also true that the lockdown did not really alter the way Musico uses the internet to stay connected and to exchange work and views.
On the other side, this very moment calls for an unprecedented alliance between the real and the virtual: serving personal, rights-free music to millions of virtual identities would make a big difference when it comes (and it’s coming) to considering music as a natural interactive language, with a strong tendency to be embedded into visuals and habitats, easily becoming a neo-natural utterance.
You can quote a novel, or a magazine, but you still speak for yourself (or you pretend to): and this seems to be one of the main roles AI media assistants will have: offering everyday media-communication tools that allow an advanced multimedia expressivity.
We’ve recently observed big companies like Google putting out suites like Magenta under free or open-source models. How does Musico differ from this technology, and how do you see it gaining its own place in the industry?
Musico is developing adaptive technologies with a clear focus on sophisticated, per-sector, custom-designed generative services. We are not generically driven; we design every single step of our technology and its related services.
This advantage, both technological and scientific, is leading us to think as a company and act as an open, community-building platform: our music comes copyright-free, at every level. We will soon offer a blockchain system to guarantee that our users can also find, through Musico, a way to be dynamically repaid for their engagement, becoming data designers of the very data that is so crucial for the scalability of our technology.
And even more fundamentally: we design specialist datasets that require human composers to join our design team, to create specific high-level formalisations of music genres and to project cross-genre strategies. This is a big difference between us and, for example, Google’s Magenta, which is led by a vague, generic approach to training datasets, reducing music to a robust learning problem and emerging with predictions that have no defined stylistic references, making its current usability very poor.
To what extent do you feel AI can help in the creative industries? What aspects do you think will likely be replaced, assisted or empowered by AI in these fields over the next years, and what, instead, do you feel will always remain the domain of humans?
We are back to my feeling of being empowered by the AI assistant while composing: an amazing feeling of being widely supported in my most daring ambitions, powered in courageous actions. In general, synthetic practices are natural; abstraction and simplification are a “natural” evolution of compositional and pragmatic human skills.
We are not theoretically debating whether a DJ is a musician or a meta-musician… he has an impact on the history of music making and music experiencing; the same can be said for all the music software that has turned music production into a “music information fact” over the last 25 years, creating millions of new jobs and new expressive perspectives.
We believe new professions and new economic scenarios will emerge from AI’s analytic and synthetic capacities, including its constant stylistic media provocations, which boost simulation capabilities and expectations and, surprisingly, indicate areas where more systematisation is needed (for instance, the formalisation of popular music, where more variation and more adaptive, interactive musical space will be provided). More scale-jumps and more courageous music and soundscape scenarios lie ahead.
All this thinking is very common in the arts; the arts finally have the chance, now during the Covid-19 lockdown more than ever, to promote an intensive, experimental, creative digital life, starting from an intensive hybridisation of natural and synthetic tools, if that division still makes any serious sense… Perhaps it is just common sense: inviting everyone to use all possible tools to make humans more resilient and better able to adapt to fast-changing scenarios, both physical and informational, seems more relevant than ever.