Poster of the original Quadraphonic Project

Quadraphonic Project Update Plans!

It’s been about two years since I had the chance to make a jamming environment in Max MSP, and I feel it’s about time I created a better, more compatible version for the world! Here are my first plans for a Web Audio reboot of the project, bringing jamming to basically any browser!

As part of my post-grad coursework I’ve been able to look into the Web Audio API, and even threw together an example of it in action on this site in my last blog post. It got me thinking back to the days of connecting wires in Max, and suddenly this project seemed like the only option I could possibly consider taking on.

The plan…

The plan is simple, but the execution will depend wildly on my ability to use code and different software to create something that goes beyond a basic keyboard and into interactive online jam sessions using various instruments. The idea in a nutshell: a user connects to a “practice room” and is presented with an instrument of some sort to play. That user can also hear the other players in the room, and can perform in sync with them using the same counted timing.

 

Initial ideas on making it happen!

Here are the first challenges I’ll have to beat to get the project rolling:

  • Connecting one browser session to another, as you might see in an online game such as agar.io
  • Battling lag
  • Building the instruments for players to use
  • Building a timing system that can be synchronised tightly enough for collaborative music making

 

Making a connection

I hope to make use of an existing technology such as Socket.IO or WebRTC to connect the sessions together. Using these it’s possible to send data of any kind, and it should be the most efficient way to make this work. I had a go at using UDP in Max and Unity previously, so I am hoping that this crucial step is within reach.
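To give a rough sense of what that connection layer might look like, here is a minimal sketch using the Socket.IO client. The server URL, room name, and event names are all placeholders I’ve made up for illustration, not anything that exists yet:

```typescript
import { io } from "socket.io-client";

// Connect to a (hypothetical) server that hosts the practice rooms.
const socket = io("https://my-jam-server.example.com");

// Ask to join a named practice room.
socket.emit("join-room", { room: "practice-room-1", name: "Adam" });

// Listen for note events sent by the other players in the room.
socket.on("note-event", (event: { note: string; beat: number }) => {
  console.log("Another player sent:", event);
  // Later: hand the event to the local synth/scheduler to be played.
});

// Send our own note events to everyone else in the room.
function sendNote(note: string, beat: number) {
  socket.emit("note-event", { note, beat });
}
```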

 

Delay and Timing Systems

Even with the best internet connection, audio or video sometimes just doesn’t want to work, or gets laggy. With this in mind, I’m hoping that small chunks of text will be much faster and more manageable, especially since every device will be doing the hard work in real time according to these packets of information. For instance, instead of Adam playing a G note and sending the audio to Barry, Adam can just send a bit of text saying “I just played a G note on beat 1 of this bar”, and then Barry’s session does the rest, playing a G note automatically at the right time.
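As a rough illustration, the kind of message I have in mind is tiny compared to streaming audio: just enough information for the receiving session to recreate the note itself. The exact field names here are purely hypothetical:

```typescript
// A possible shape for one note event: a handful of bytes of JSON
// instead of a continuous audio stream.
interface NoteEvent {
  player: string; // who played it, e.g. "Adam"
  note: string;   // which note, e.g. "G4"
  beat: number;   // which beat of the bar it landed on, e.g. 1
  bar: number;    // which bar of the session it belongs to
}

// Adam's session would send something like this...
const message: NoteEvent = { player: "Adam", note: "G4", beat: 1, bar: 16 };

// ...and Barry's session, on receiving it, plays the note itself
// rather than decoding any incoming audio.
```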

I’m hoping for each user to send data a lot more frequently than they receive it. This would mean Adam receives a full bar’s worth of instructions for what Barry just played, and it plays back seamlessly (albeit a bar after Barry played it). Hopefully lag then becomes less of an issue, because the system is engineered to always be a bar behind, but it should feel seamless to the end user. I think the timing of the transport would be handled by the host of the room, but that’s a pure guess currently.
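Here is a sketch of how that one-bar delay might be scheduled, assuming a fixed tempo and time signature (both pure guesses at this stage) and with a stand-in function in place of the real instrument. Times are in seconds on the Web Audio clock, i.e. taken from AudioContext.currentTime:

```typescript
// Timing maths, assuming a fixed 120 BPM in 4/4 for now.
const BPM = 120;
const BEATS_PER_BAR = 4;
const secondsPerBeat = 60 / BPM;
const secondsPerBar = secondsPerBeat * BEATS_PER_BAR;

// Stand-in for the real instrument code: just logs for now.
function playNote(note: string, when: number) {
  console.log(`play ${note} at audio-clock time ${when.toFixed(2)}s`);
}

// When a full bar of another player's events arrives, schedule every note
// one whole bar after the local time that bar started, so it all plays back
// together: always a bar late, but in time.
function scheduleReceivedBar(
  events: { note: string; beat: number }[],
  barStartTime: number // noted from AudioContext.currentTime when the bar began
) {
  for (const event of events) {
    const when =
      barStartTime + secondsPerBar + (event.beat - 1) * secondsPerBeat;
    playNote(event.note, when);
  }
}
```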

Instrument building!

This should be the fun part, as I enjoy the sound design side of making software instruments. I’m thinking of starting very small, possibly with just one basic synth, and then adding things like drum machines later on.
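As a starting point, a very small synth in the Web Audio API could be a single oscillator run through a simple gain envelope per note. The waveform, envelope times, and frequencies below are just first guesses for the sake of a sketch:

```typescript
const audioContext = new AudioContext();

// Play a single note: one oscillator through a simple volume envelope.
function playNote(frequency: number, when: number, duration = 0.5) {
  const osc = audioContext.createOscillator();
  const envelope = audioContext.createGain();

  osc.type = "sawtooth"; // arbitrary first choice of waveform
  osc.frequency.value = frequency;

  // Quick attack, then fade out over the note's duration.
  envelope.gain.setValueAtTime(0.0001, when);
  envelope.gain.exponentialRampToValueAtTime(0.3, when + 0.02);
  envelope.gain.exponentialRampToValueAtTime(0.0001, when + duration);

  osc.connect(envelope);
  envelope.connect(audioContext.destination);

  osc.start(when);
  osc.stop(when + duration + 0.1);
}

// e.g. play a G4 (roughly 392 Hz) right now:
playNote(392, audioContext.currentTime);
```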

 

Thank you very much for reading, I hope to have things to show off in the coming weeks! 🙂
