Is BMax fast enough for realtime sound synthesis?

BlitzMax Forums/BlitzMax Programming/Is BMax fast enough for realtime sound synthesis?

CS_TBL(Posted 2007) [#1]
..like FM, with filters, some DSPs like delay/reverb/chorus etc., at least at 44.1MHz *err* kHz ^_^ /16bit/stereo, and then some channels..


errno!(Posted 2007) [#2]
we do real time image processing in our apps, so audio shouldn't be a problem if you have a decent cpu.


plash(Posted 2007) [#3]
you can never be sure until you actually try it :)


ImaginaryHuman(Posted 2007) [#4]
I guess you have to figure out how many Shorts of memory space are required per second. I think you're mistaken, by the way: it's actually kHz, not MHz, which is 44100 samples per second - not a whole lot. So you need to generate 44k samples of data every second, which is less than 1k per frame at 60fps. Then double that for stereo, add some time for applying effects, and it's still very doable. If the old Amiga 500 could do 8 channels of 8-bit mixed audio in OctaMED with a lowly 7MHz 68000 CPU, I think you can do far more with greater ease these days.
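The arithmetic works out like this (a plain Python sketch of the numbers; nothing here is BlitzMax-specific):

```python
# Back-of-envelope throughput for 44.1 kHz / 16-bit / stereo audio.
SAMPLE_RATE = 44100        # samples per second, per channel
CHANNELS = 2               # stereo
BYTES_PER_SAMPLE = 2       # 16-bit

# Raw data rate the synth has to sustain.
bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE   # 176400 bytes/s

# How many samples (per channel) must be generated per frame at 60 fps.
samples_per_frame = SAMPLE_RATE / 60.0                         # 735.0
```

About 172 KB/s of output: trivially small next to what realtime image processing pushes around, which is the point being made above.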


CS_TBL(Posted 2007) [#5]
Err right, kHz .. Freudian slip I guess ^_^

But those A500 trackers were doing that stuff in native ASM, I guess? I actually don't know how VSTis do their native stuff - C++ or ASM. Even so, is C++ without inline asm comparable with BMax's performance?


Dreamora(Posted 2007) [#6]
Yes, more or less, as most of the underlying libraries are GCC (3.3 - 3.4) compiled, unless you are on OSX where it is XCode 2.2.


Canardian(Posted 2007) [#7]
Well, the A500 CPU was 7MHz, but it also had co-processors like Denise, Agnus, Copper, etc...
And 680x0 assembler is almost like writing in Basic; x86 assembler is just hell with all those stupid segment pointers and useless crap.


xlsior(Posted 2007) [#8]
Well, the A500 CPU was 7MHz, but it also had co-processors like Denise, Agnus, Copper, etc...


The Amiga's sound coprocessor, Paula, would only do hardware-level playback of four 8-bit channels. OctaMED faked 8 distinct channels using real-time CPU-level mixing, so the original statement above still stands...

(BTW: the Copper was a sub-component within Agnus, not a separate chip)


CS_TBL(Posted 2007) [#9]
So euhm, I figure one needs some sort of buffer that one refills with new data once it's empty, while the system plays what's currently in the buffer .. ?

How would one do this in BMax - if it's natively possible at all?


SculptureOfSoul(Posted 2007) [#10]
Perhaps you could use CreateStaticAudioSample and pass your buffer as the first parameter, and then create a TSound object from that using LoadSound( myStaticAudioSample ).

However, I'm assuming LoadSound loads the whole audio sample into memory when called, instead of creating a TSound that streams the data in from the audio sample you created - and if that's the case, it won't work at all.

Looking at the code doesn't reveal much info, sadly.


Damien Sturdy(Posted 2007) [#11]
I did realtime synthesis in B3D; I thought you had too. Fair enough, I had to write a WAV for each section of sound and play it back a little laggily, but it worked.


Max should be able to do this easily. I was actually thinking of converting the Max sound system to use realtime synth instead of samples, given how borked Max's default sound system is on my system.


CS_TBL(Posted 2007) [#12]
I did synthesis indeed, but only by writing raw data to a file and loading it back into Sound Forge or so, to check .. :P

Obviously I'm not really interested in this non-realtime method.


Perturbatio(Posted 2007) [#13]
SuperStrict
Graphics 1024,768,0,0

Global Quit:Int = False

Global sample:TAudioSample = TAudioSample.Create(100000, 22050, SF_STEREO16BE)

Print sample.length

Function getSampleLength:Float(sample:TAudioSample)
	Return sample.length / Float(sample.hertz) ' duration in seconds
End Function



Function drawWave(sample:TAudioSample)
	Local x:Int, y:Int
	Local xscale:Double = sample.length/GraphicsWidth()
	For x = 0 Until sample.length Step 2
		y = sample.samples[x] ' samples is a Byte Ptr, so this reads single bytes
		If (y > 0) Then
			SetColor(255,y,y)
			DrawLine((x/xscale)-1, sample.samples[x-1], x/xscale, y)
			'DrawLine((x/xscale)-1, Abs(sample.samples[x-1]-255)+255, x/xscale, Abs(y-255)+255)
		End If
	Next
End Function

For Local m:Int = 0 Until 100000
	sample.samples[m] = Sin(m)*100 ' byte-wise write; Sin() takes degrees in BlitzMax
Next

Local sound:TSound = LoadSound(sample)
Local channel:TChannel = PlaySound(sound)

While Not Quit
	Cls
	drawWave(sample)
	Flip

	If KeyDown(KEY_ESCAPE) Then quit = True
Wend
End


My first attempt at sound synthesis...


CS_TBL(Posted 2007) [#14]
That's predefined synthesis, not realtime. (Actually, it's just a sample player; without filters I highly doubt it can be called 'synthesis' at all.)

I really wonder whether BMax can do realtime synthesis at all - not in terms of performance, but purely in terms of functionality.


grable(Posted 2007) [#15]
What about using something like PortAudio?
The API looks straightforward enough, and it's cross-platform.

EDIT: I was poking around the mods dir and saw axe.portaudio =)
There's even an example that appears to do realtime synthesis.


SculptureOfSoul(Posted 2007) [#16]
I'm not seeing any way to do it - though of course I'm still a relative BMax noob. It doesn't look like there's any direct way to get at the audio driver, though.


ImaginaryHuman(Posted 2007) [#17]
I don't see why Perturbatio's code is not considered relevant. If this is how you have to do it in BlitzMax then what's the issue?

You have to create a dummy sound bank of some kind, then it has to be passed to the sound system which is what LoadSound does, then played.

I think what I would do, though, is create a larger sound and make sure it is looping. Then it's a matter of modifying the part that is not playing (kind of like a double buffer) so that when it loops around it plays the other part.

But I'm not sure if THAT is possible.


CS_TBL(Posted 2007) [#18]
what I had in mind:

loop
    fill buffer1 (generate sound)
    play buffer2 once/no loop

    fill buffer2 (generate sound)
    play buffer1 once/no loop
forever

The thing to watch: is the transition between playing buffer1 and buffer2 seamless and inaudible?
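That loop can be sketched like this (Python, with a stubbed play_once() standing in for a real audio call; a real backend would queue the buffer and play it asynchronously while the other one is refilled):

```python
# Double-buffered generation sketch. generate() and play_once() are
# placeholders, not any real audio API.
BUFFER_SIZE = 1024

def generate(start):
    # Stand-in for a real synth routine: produce the next block of samples.
    return list(range(start, start + BUFFER_SIZE))

played = []

def play_once(buf):
    # Placeholder: a real backend would queue buf and return immediately.
    played.extend(buf)

buffers = [generate(0), None]   # buffer1 pre-filled, buffer2 still empty
pos = BUFFER_SIZE
front = 0                       # index of the buffer about to play

for _ in range(4):              # 'forever' in the pseudocode above
    play_once(buffers[front])         # play one buffer once, no loop...
    back = 1 - front
    buffers[back] = generate(pos)     # ...while the other is (re)filled
    pos += BUFFER_SIZE
    front = back
```

The seamlessness question is then: does the backend start the next buffer the instant the previous one ends? In this sketch the played stream comes out gapless by construction; on real hardware that depends on how the playback calls are scheduled.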


Russell(Posted 2007) [#19]
Ask REDi, the author of MaxMod (a music-module-playing, um, module for BMax), as it (probably) has to create the waveform in realtime from the mod data and have BMax play it somehow.

CS_TBL: Since the playback rate is far slower than the speed at which the buffers can be switched, I don't think the transition would be noticeable. Now, this is assuming that the data can be *written* to the audio 'backbuffer' in a timely manner (complete before the 'frontbuffer' has reached the end). If the buffer(s) are too small, then there may not be enough time, etc. Lots of testing would need to be done on different systems to find a good average buffer size, I guess.

Russell
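The buffer-sizing trade-off Russell describes is easy to put numbers on (plain arithmetic, Python sketch): one buffer's playback time is exactly the deadline the generator has to refill the other buffer in, and also roughly the control latency the user feels.

```python
# Buffer size vs. deadline/latency at 44.1 kHz.
SAMPLE_RATE = 44100

def buffer_ms(frames):
    # Playback time of one buffer of `frames` sample frames: the refill
    # deadline, and roughly the responsiveness of realtime tweaking.
    return 1000.0 * frames / SAMPLE_RATE
```

So a 1024-frame buffer gives about 23 ms to compute the next block (comfortable, slightly laggy controls), while 256 frames gives under 6 ms (snappy, but easy to miss the deadline on a slow machine) - which is why testing across systems, as suggested above, is the sensible way to pick a size.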


Damien Sturdy(Posted 2007) [#20]
CS_TBL, I did it by using the WAV file as the current buffer, then loading and playing it when 'full'. BlitzMax is plenty fast enough, really.
One thing about the buffer-switch approach: it CAN be audible. I wonder how the large projects do it.

Would the code I used in a QBASIC game to directly program the SB card be of use here (to see how it was done)? If so, I can go find the source.


CS_TBL(Posted 2007) [#21]
Cygnus: but that's 'one whole sample', right? What I'm referring to is non-stop processing. For instance, playback of a sawtooth through a lowpass filter, with the filter frequency controlled by the mouse. Or uhm.. 6-op FM synthesis, where one can change operator ratios, levels, envelopes, shapes, scales etc. in real time.

So the whole generate-offline, save-as-WAV, load-and-play routine is completely not what I'm asking about.


ImaginaryHuman(Posted 2007) [#22]
You aren't really going to be able to generate one sample at a time and play it, i.e. make thousands of calls to `play one sample`. Good luck with that. But you can pre-generate some sound and then apply a filter to it. If the buffer is small enough, the user will experience it as responsive.


CS_TBL(Posted 2007) [#23]
Of course the intention was not to generate one sample at a time; rather, I'd fill a buffer of - say - 1024 samples and play it while I continue calculating into the other buffer. The filter was a mere example; what I'm after is advanced FM.
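For what it's worth, per-block FM generation of the kind described here might look roughly like this (a minimal 2-operator Python sketch, not BlitzMax API; a 6-op synth is the same idea with more operators feeding each other):

```python
# One block of 2-operator FM: a carrier sine phase-modulated by one
# modulator operator. ratio and index can be changed between blocks,
# which is exactly the realtime control being asked for.
import math

SAMPLE_RATE = 44100

def fm_block(n, phase, carrier_hz, ratio, index):
    # phase is a running sample counter so consecutive blocks join smoothly.
    out = []
    for i in range(n):
        t = (phase + i) / SAMPLE_RATE
        mod = math.sin(2.0 * math.pi * carrier_hz * ratio * t)
        out.append(math.sin(2.0 * math.pi * carrier_hz * t + index * mod))
    return out, phase + n

# Generate one 1024-sample block of a 220 Hz carrier, 2:1 ratio, depth 3.
block, phase = fm_block(1024, 0, 220.0, 2.0, 3.0)
```

Each buffer refill in the double-buffer loop would call something like fm_block() with whatever ratio/index/envelope values the UI currently holds.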


CS_TBL(Posted 2007) [#24]
'ere be a test.

2 issues:
- Obvious stutter: is this all BMax can do, or am I coding p00?
- What's up with that sample format? I clearly chose a 16-bit signed format, and yet the result seems to be 8-bit unsigned. o_O (move the mouse below 127)




Perturbatio(Posted 2007) [#25]
I've been playing around a bit with your code and it seems to work reasonably well like this:


Part of the stuttering problem appears to be that the buffer size is incorrect. I discovered this with my sound-generation code above: the produced sample played for about half the calculated buffer duration.

I've also added channel switching; the numbers need tweaking, but it's going somewhere...


CS_TBL(Posted 2007) [#26]
Hm.. so far it looks like overkill to me. Does it really have to be this complex? One would expect two channels to be plenty for "double-buffering"; since they're not, I wonder how suitable BMax's sound/audio commands are. They're perfect for games, I guess, but for sound generation like this I think one needs a whole different approach. Playing audio like this is quite a low-level job, and currently we're trying to do it with high-level, game-oriented commands. -I think-

I propose a TAudioStream class: low-level, not bothering the user with details the class could fill in itself (sounds, channels, buffers) - just some methods to put a user array onto a queue, while a playback method plays whatever is in the queue! :-)
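The queue logic of such a class could look something like this (Python sketch; the class and method names are hypothetical, nothing like this exists in BMax):

```python
# Hypothetical push/pull audio stream: the user side pushes generated
# blocks, the playback side pulls fixed-size chunks.
from collections import deque

class TAudioStreamSketch:
    def __init__(self):
        self._queue = deque()

    def push(self, block):
        # User side: hand a freshly generated block to the stream.
        self._queue.append(list(block))

    def pull(self, n):
        # Playback side: take exactly n samples, splitting queued blocks
        # as needed and padding with silence (zeros) on underrun so
        # playback never stalls.
        out = []
        while len(out) < n and self._queue:
            head = self._queue[0]
            take = min(n - len(out), len(head))
            out.extend(head[:take])
            if take == len(head):
                self._queue.popleft()
            else:
                self._queue[0] = head[take:]
        out.extend([0] * (n - len(out)))
        return out

stream = TAudioStreamSketch()
stream.push([1, 2, 3])
stream.push([4, 5])
```

The design point is exactly the one made above: the user only ever sees push(), while channels, buffers and the underrun policy stay inside the class.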


SculptureOfSoul(Posted 2007) [#27]
Instead of a user array on the queue, wouldn't you be better off with a circular/ring buffer?


CS_TBL(Posted 2007) [#28]
I guess circular buffers are practical when playback runs in sync with cycling/generating, e.g. as in delay lines. I assume the circular buffering you refer to also implies per-sample processing..? But what if playback is faster or slower than generation?
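A ring buffer actually handles that mismatch by itself, and it needn't mean per-sample processing - both sides can move whole blocks. A minimal sketch (Python, illustrative only): the writer checks free space before advancing, so generation can run faster or slower than playback without overwriting unplayed data.

```python
# Minimal single-producer ring buffer for audio blocks.
class RingBuffer:
    def __init__(self, size):
        self.buf = [0] * size
        self.read = 0      # playback position
        self.write = 0     # generation position
        self.fill = 0      # samples currently buffered

    def free(self):
        return len(self.buf) - self.fill

    def push(self, samples):
        # Generator side: write as many samples as fit, return how many
        # were accepted (0 means "buffer full, come back later").
        n = min(len(samples), self.free())
        for i in range(n):
            self.buf[(self.write + i) % len(self.buf)] = samples[i]
        self.write = (self.write + n) % len(self.buf)
        self.fill += n
        return n

    def pop(self, n):
        # Playback side: read up to n samples (fewer on underrun).
        n = min(n, self.fill)
        out = [self.buf[(self.read + i) % len(self.buf)] for i in range(n)]
        self.read = (self.read + n) % len(self.buf)
        self.fill -= n
        return out
```

If generation outruns playback, push() simply accepts fewer samples; if playback outruns generation, pop() returns short and the caller pads with silence - the two sides never need to run in lockstep.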


ImaginaryHuman(Posted 2007) [#29]
If you know how fast the sample is playing you can regulate the generation of new samples. Not a big deal.


Damien Sturdy(Posted 2007) [#30]

So, the whole generate non-realtime, save as wave, load & play wav, is completely not what I'm wondering about.



No, I'm not generating the whole sample; I'm using the WAV as a sound buffer. In Max the code could be converted to do things properly. Maybe I'm generating 20ms of audio, saving, playing back. There are snaps, but that's because generating a WAV and playing it back, instead of feeding the data to a sound buffer, is very inefficient.


But if this technique works, Max is going to be capable of sound synthesis - that's pretty much what I was saying ;)

Hey, how does one use a "circular" buffer, eh? That'd be piss-easy to work with :D