DeepMind's AI learns the piano from the masters of the past
Published on Jul 29, 2018
The paper "The challenge of realistic music generation: modelling raw audio at scale" is available here
‘Singing with Machines’ is a spin-off of our Talking With Machines work, which investigates new experiences on ‘smart speaker’ devices such as the Amazon Echo and Google Home.
These experiments focused specifically on how such devices can enable new ways of creating, enjoying, and disseminating music and sound art. Working collaboratively with musicians and sound artists lets us explore the technologies and platforms these devices run on, giving us insights into how they work that can be useful more generally.
Nearly 100 participants from around Europe and the US spent 36 hours in Abbey Road’s legendary Studio One, working on ways to transform the music industry as part of the inaugural Abbey Road RED Hackathon. Microsoft AI specialists helped teams set up, access, and understand APIs, including Facial Recognition, Sentiment Analysis, Speech to Text, and more.
In a world becoming ever more technologically advanced and reliant on computers, scientists and musical theatre writers team up to devise a recipe for success in musical theatre, then task computers with using that knowledge to generate a hit. Will they succeed?
In this video, we discuss the paper “An Anthropomorphic Soft Skeleton Hand Exploiting Conditional Models for Piano Playing” by J. A. E. Hughes, P. Maiolino, and F. Iida (2018).
How can we improve current control techniques while reducing computing power, and how can biology inspire us to develop better robots? We investigate how passive dynamics can be exploited to control complex systems performing complicated tasks, such as a skeletal hand playing several songs on a piano.
Software is revamping robotics hardware, lowering the barrier to entry through cloud, web, and mobile technologies. Custom-built for SXSW, our robot rock band leverages cutting-edge software technologies, such as Maya and KUKA.ready2_animate, to harmoniously fuse music with robotics by converting MIDI files into motion tables, and then into robot programs.
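The MIDI-to-motion-table step above can be sketched as a toy example. Note this is a pure-Python illustration under stated assumptions — the event format, the pitch-to-actuator mapping, and the table layout are invented for clarity and are not KUKA's actual toolchain:

```python
# Toy sketch: convert MIDI-style note events into a timed "motion table"
# of actuator targets. The note-to-actuator mapping and the table format
# are illustrative assumptions, not KUKA's actual pipeline.

def midi_events_to_motion_table(events, ticks_per_second=480):
    """events: list of (tick, note, velocity) tuples, velocity 0 = note off.
    Returns rows of (time_seconds, actuator_id, target, speed)."""
    table = []
    for tick, note, velocity in sorted(events):
        time_s = tick / ticks_per_second       # convert MIDI ticks to seconds
        actuator = note % 12                   # map pitch class to an actuator (assumption)
        target = 1.0 if velocity > 0 else 0.0  # strike vs. release position
        speed = velocity / 127.0               # normalize MIDI velocity to [0, 1]
        table.append((round(time_s, 3), actuator, target, round(speed, 3)))
    return table

# Example: two notes, each struck and then released.
events = [(0, 60, 100), (240, 60, 0), (240, 64, 90), (480, 64, 0)]
for row in midi_events_to_motion_table(events):
    print(row)
```

A real pipeline would also handle tempo changes, overlapping voices, and per-joint kinematic limits; the point here is just the shape of the transformation from note events to timed motion commands.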
At SXSW, we showed how manufacturers can use software, VR, and KUKA Connect to visualize real-time data about what's happening in their facility. KUKA Connect is a cloud-based analytics and intelligence software platform that converts robot data into actionable insights.
This is incredible: a neural network that separates vocals from music, and vice versa. Listen to this example I made from David Bowie's "Changes". A bit robotic, but even the echo is present in the vocal track! Made using the Spleeter library: https://github.com/deezer/spleeter
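Under the hood, systems like Spleeter predict time-frequency soft masks over a spectrogram and apply them to the mixture. The principle can be illustrated with a toy numeric sketch — this is plain Python on made-up magnitudes, not Spleeter's actual code or API:

```python
# Toy illustration of soft-mask source separation, the principle behind
# tools like Spleeter. A model predicts a magnitude for each source per
# time-frequency bin; normalizing gives a mask, and applying the mask to
# the mixture splits it. The numbers below are invented magnitudes.

def soft_mask_separate(mixture, est_vocals, est_accomp, eps=1e-8):
    """Each argument is a flat list of spectrogram magnitudes."""
    vocals, accomp = [], []
    for m, v, a in zip(mixture, est_vocals, est_accomp):
        mask_v = v / (v + a + eps)       # fraction of energy attributed to vocals
        vocals.append(m * mask_v)        # vocal estimate
        accomp.append(m * (1 - mask_v))  # everything else ("vice versa")
    return vocals, accomp

mix = [1.0, 0.5, 2.0]
voc, acc = soft_mask_separate(mix, est_vocals=[0.8, 0.1, 1.0], est_accomp=[0.2, 0.4, 1.0])
# By construction the two estimates sum back to the mixture.
print([round(v + a, 6) for v, a in zip(voc, acc)])
```

This is why the echo survives in the vocal track: the mask keeps whatever energy the model attributes to the voice, reverberation included, rather than synthesizing a clean signal from scratch.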
In this episode of The Pretentious Geek, we discuss applications of machine learning in the music industry, covering projects like AIVA, OpenAI's MuseNet, and Google's Magenta, which are building machines that can compose music and bringing the technology to consumer products.