Source: https://github.com/Tsarpf/UE4-Procedural-Audio

Here are two example videos of it running live. At the beginning I just press play in Foobar on Windows and the visualization begins; Spotify, YouTube, or any other audio source on Windows works straight away as well.

https://www.youtube.com/watch?v=Uqnj190stKw

https://www.youtube.com/watch?v=68T9ssyjO6Q

A word of warning though: it's only a proof of concept, so it is not stable! In-editor it generally works well but sometimes crashes, and the program does not release all the memory it should. For a reason I haven't tracked down yet, the standalone version basically doesn't work at all.

Thanks to other open source projects

For the mesh generation part I got a lot of help from SiggiG's procedural UE4 project/tutorial which lives here: https://github.com/SiggiG/ProceduralMeshes
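To give an idea of what that mesh generation builds on, here's a minimal sketch of the general UE4 UProceduralMeshComponent approach: build vertex and triangle arrays and hand them to CreateMeshSection. This is illustrative only, not SiggiG's or this project's actual code, and the quad layout and heights are made up.

```cpp
// Rough sketch (not this repo's code): one quad whose corner heights
// could, for example, come from spectrum values.
#include "ProceduralMeshComponent.h"

void BuildQuad(UProceduralMeshComponent* Mesh, float HeightA, float HeightB)
{
    TArray<FVector> Vertices = {
        FVector(0, 0, HeightA),   FVector(0, 100, HeightB),
        FVector(100, 0, HeightA), FVector(100, 100, HeightB)
    };
    // Two triangles; the winding order decides which side of the face is visible.
    TArray<int32> Triangles = { 0, 1, 2, 2, 1, 3 };

    // Normals, UVs, colors and tangents can be left empty for a quick test.
    Mesh->CreateMeshSection(0, Vertices, Triangles,
        TArray<FVector>(), TArray<FVector2D>(), TArray<FColor>(),
        TArray<FProcMeshTangent>(), /*bCreateCollision=*/false);
}
```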

Some of the code for figuring out frequencies from audio chunks is from eXi's sound visualization plugin, especially the original CalculateFrequencySpectrum function that can be found here, and his use of the KissFFT library.

Because I'm proud that I was able to figure out the frequency calculation part myself as well, I want to add that I have my own implementation of it (built with the help of the "ffft" library), but for this project I replaced it with eXi's solution to rule out bugs in that area.
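For the curious, here's a rough sketch of what a CalculateFrequencySpectrum-style function does in general: run one chunk of samples through an FFT and take the magnitude of each bin. This is illustrative only (assuming plain KissFFT and mono float samples), not eXi's or this project's exact code.

```cpp
// Minimal sketch: magnitude spectrum of one audio chunk with KissFFT.
#include <vector>
#include <cmath>
#include "kiss_fft.h"

std::vector<float> CalculateMagnitudeSpectrum(const std::vector<float>& Samples)
{
    const int N = static_cast<int>(Samples.size());
    kiss_fft_cfg Cfg = kiss_fft_alloc(N, 0 /*forward FFT*/, nullptr, nullptr);

    std::vector<kiss_fft_cpx> In(N), Out(N);
    for (int i = 0; i < N; ++i)
    {
        In[i].r = Samples[i]; // real part: the audio sample
        In[i].i = 0.0f;       // imaginary part: zero for real input
    }

    kiss_fft(Cfg, In.data(), Out.data());
    kiss_fft_free(Cfg);

    // Only the first N/2 bins carry unique information for real input.
    std::vector<float> Magnitudes(N / 2);
    for (int i = 0; i < N / 2; ++i)
    {
        Magnitudes[i] = std::sqrt(Out[i].r * Out[i].r + Out[i].i * Out[i].i);
    }
    return Magnitudes;
}
```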

A very brief and dense overview of how it works

On Windows we're in luck, because the audio capture part is already done; we just direct the captured audio towards UE4 instead of a file. An audio sink receives chunks of audio frames from the capturer, and the listener itself runs in its own thread within the visualizer process. The chunks land in a queue, from which the UE4 main thread dequeues them and calculates a sound spectrum for each chunk. Finally, on each game tick we fetch the list of new spectra and, if there are any, add them to the mesh and move the camera forward to keep up.
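Here's a minimal sketch of that producer/consumer hand-off using UE4's TQueue. The struct, globals, and the two helper functions called at the end are illustrative names I made up for this example, not the exact ones in the repo.

```cpp
#include "CoreMinimal.h"
#include "Containers/Queue.h"

// One chunk of captured audio frames (name is illustrative).
struct FAudioChunk
{
    TArray<float> Samples;
};

// Single-producer / single-consumer queue: the capture thread enqueues,
// the game thread dequeues. Matches the one-listener-thread setup above.
static TQueue<FAudioChunk, EQueueMode::Spsc> GAudioChunkQueue;

// Hypothetical helpers assumed to exist elsewhere in the project.
TArray<float> CalculateSpectrumForChunk(const TArray<float>& Samples);
void AddSpectrumRowToMesh(const TArray<float>& Spectrum);

// Producer side: called by the audio sink on the capture thread.
void OnChunkCaptured(const FAudioChunk& Chunk)
{
    GAudioChunkQueue.Enqueue(Chunk);
}

// Consumer side: called once per game tick on the UE4 main thread.
void DrainChunksOnGameTick()
{
    FAudioChunk Chunk;
    while (GAudioChunkQueue.Dequeue(Chunk))
    {
        // FFT the chunk into a spectrum, then extend the mesh with one new row.
        TArray<float> Spectrum = CalculateSpectrumForChunk(Chunk.Samples);
        AddSpectrumRowToMesh(Spectrum);
    }
}
```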

Feel free to ask me on Twitter or in the comments if something is unclear!

Making it work on platforms other than Windows

On Linux, capturing the audio should be very easy, for example by directing arecord's standard output to the UE4 program's standard input and going forward from there, but I haven't gotten around to trying that yet. On OS X I would start looking for a solution around the "Soundflower" project, but I'm not sure how easy that will be.
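As a rough, untested sketch of the Linux idea: launch the visualizer with arecord piped into it and read raw PCM chunks from standard input. The exact arecord flags, the chunk size, and the function name below are assumptions, not something I've verified.

```cpp
// Sketch only: assumes the process is launched roughly as
//   arecord -f S16_LE -c 1 -r 44100 -t raw | ./Visualizer
// Reads one chunk of 16-bit PCM from stdin and converts it to floats,
// so it can feed the same pipeline as the Windows capture path.
#include <cstdio>
#include <cstdint>
#include <vector>

bool ReadChunkFromStdin(std::vector<float>& OutSamples, size_t FramesPerChunk = 1024)
{
    std::vector<int16_t> Raw(FramesPerChunk);
    const size_t Read = std::fread(Raw.data(), sizeof(int16_t), FramesPerChunk, stdin);

    OutSamples.resize(Read);
    for (size_t i = 0; i < Read; ++i)
    {
        OutSamples[i] = Raw[i] / 32768.0f; // scale 16-bit PCM to [-1, 1)
    }
    return Read > 0; // false once the pipe closes
}
```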

Thanks!

Maybe the proof of concept gives someone an idea for something awesome. Please make a new Audiosurf that takes in live audio and doesn't need to process the whole song from a file first. Or make the sound waves collidable and build some sort of game mechanic out of that?