Machines have been able to create music for years now, but an international study in Japan has made that music more usable by adding what may be the most important component of music: the response of the listener.
As listeners hear music while wearing an EEG headset, their brainwave data is fed to a computer, telling it the emotional state of the listener. With this new data, the computer can generate more music aimed at recreating those emotions in the listener. Realistic uses for this technology include making music for depressed or anxious patients who need motivation to keep moving, or for patients who need to feel motivated to exercise.
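The loop described above (EEG features in, adapted music parameters out) could be sketched roughly as follows. This is a minimal, hypothetical illustration, not the study's actual system: the feature names, the emotion classification rule, and the music parameters are all invented for the example.

```python
# Hypothetical sketch of a closed EEG-to-music feedback loop.
# All function names and thresholds here are illustrative assumptions,
# not a real EEG or music-generation API.

def classify_emotion(alpha_power, beta_power):
    """Map two toy EEG band-power features to a coarse emotional state."""
    # Treating higher relative beta activity as "aroused" is a crude,
    # purely illustrative rule.
    if beta_power > alpha_power:
        return "aroused"
    return "relaxed"

def next_music_params(state):
    """Pick generation parameters intended to nudge the listener's state."""
    if state == "aroused":
        # Calm an aroused listener with a slower tempo.
        return {"tempo_bpm": 70, "mode": "major"}
    # Keep a relaxed listener engaged with a brisker pace.
    return {"tempo_bpm": 110, "mode": "major"}

# One iteration of the loop: EEG features in, music parameters out.
state = classify_emotion(alpha_power=0.4, beta_power=0.9)
params = next_music_params(state)
print(params)  # {'tempo_bpm': 70, 'mode': 'major'}
```

A real system would replace the toy classifier with a model trained on labeled EEG recordings and feed the parameters into an actual music generator, but the overall shape of the loop is the same.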
Recently, Google has also taken up the project of machine-made music, using real human minds to teach computers how to compose. Google calls its machine "AI," and I'm sure they have plenty of ideas for how to use it in the future. I suspect this ground-breaking technology will be part of our musical future. Who knows what it will bring to human/machine culture?