The AI & Music S+T+ARTS Festival presents a conference program on artificial intelligence and music, which you can attend live at the Auditori CCCB presented by NTT DATA or via streaming.
Musicians learning Machine Learning, a friendly introduction to AI
What is a neural network? How do you train it? What is the difference between machine learning and deep learning? What is and what isn’t artificial intelligence?
At the ‘Musicians learning machine learning. A friendly introduction to AI’ session, we will discover the basics of AI through musical examples with the participation of scientist and developer Rebecca Fiebrink, artist and researcher Shelly Knotts, researcher and engineer Santiago Pascual, and University of Bologna researcher Stefano Ferretti. At the end of the session we won’t be able to program a robot, but AI will hold fewer mysteries for us.
Make way for the new instruments!
AI will reveal the true nature of digital instruments and will define the music of the future. Electronic instruments appeared in order to “imitate” acoustic instruments, and they found their true personality when they stopped imitating them (think of the electric guitar, the organ, or the drum machine).
Let’s make way for the new musical instruments together with our panelists: Douglas Eck, computer scientist and researcher at Google Magenta; Rob Clouth, musician, visual artist and developer specialising in digital music tools; Agoria, DJ, producer and curious mind; and Koray Tahiroglu, artist and researcher at Aalto University.
AI and future music genres
Technology has shaped musical sounds, structures, genres and styles throughout the centuries, and in 2021 we can be fairly sure that AI will define the history of music for years to come, gathering audiences that identify with this new music made with AI.
Participating in the discussion are: Libby Heaney (artist, coder, quantum physicist), Nabihah Iqbal (musician, musicologist) and Jan St. Werner of the band Mouse on Mars.
Teaching Machines to feel like Humans do?
Music is mathematics and physics, but that is not why we like it. We like music because it makes us feel things, so it’s crucial to explore human perception in order to train machines and to build musical instruments and tools that allow us humans to play expressively.
This is a session to discover how machines are trained to listen, see and feel like humans do.
We will be joined by Antonio Torralba, scientist and head of MIT’s Artificial Intelligence and Decision Making (AI+D) faculty, and by Luc Steels, research professor at the Institut de Biologia Evolutiva (UPF-CSIC) in Barcelona and an AI pioneer in Europe, among others.