Audio

Archives Forums/Linux Discussion/Audio

Brucey(Posted 2014) [#1]
Is there a de facto audio library for Linux or does everyone/distros just pick and choose depending on the weather?


dawlane(Posted 2014) [#2]
distros just pick and choose depending on the weather?
That's about the size of it. Each distribution goes its own way. It's just a popularity contest out there in the Linux world for what becomes the new de facto standard. If you want audio across all distributions, then it's best to go with OpenAL, but even then you still get people that don't want to install extra libraries, no matter what audio system you use.


Derron(Posted 2014) [#3]
If you want to use "default audio" with BlitzMax you should rely on "freeaudio" or "rtAudio" - they add a layer on top of the basic sound engines.

On most Linux distributions it is still ALSA or PulseAudio (years ago it was OSS - Open Sound System - or, as the German ubuntuusers wiki calls it: OSS-Soundsystem [sic!]). If you are on an "audio editor"-specific distro it could be JACK too (a low-latency audio system).

If you use "Alsa" in BlitzMax while your system uses PulseAudio, I recognized that other audio outputting programmes (internet radio, youtube or whatever) stops as soon as you start playing sound. So the "mixing" is borked then. This is why I reordered the engines in rtAudio so PulseAudio is checked first.


bye
Ron


skidracer(Posted 2014) [#4]
I would say the standard is null audio, with no connection to speakers or a headphone line. I'm guessing 99.9% of Linux runs in the server room where it belongs.

Audio drivers on Windows actively adapt to jacks being connected to speakers / headphones, so a Linux issue people have is if they plug their speakers into the blue jack instead of the green one, or fail to attach their headphones to the green out on the front of the case, etc.

In Linux philosophy that means that each app needs UX for both mixer and joystick configuration.

The alternative answer is the driver that runs on both Steam and Ubuntu, which I'm guessing is Pulse.


skidracer(Posted 2014) [#5]
After reading the Wikipedia PulseAudio page, I see JACK is talked of there too.

http://en.wikipedia.org/wiki/PulseAudio

I'm curious now what is recommended on SteamOS.


Brucey(Posted 2014) [#6]
Is it worth applying your PulseAudio patch script to https://github.com/maxmods/pub.mod , if you think PulseAudio is more likely to be available (in general)?


Derron(Posted 2014) [#7]

so a linux issue people have is if they plug their speakers into the blue jack instead of the green one, or fail to have attached their headphones to the green out on the front of the case etc.



Jack retasking has been possible for some years now.

If you do not want to do it the way people do on Linux "where it belongs", aka the command line, you are free to use a tool like hda-jack-retask.

I personally prefer an audio driver not selecting everything for me "automagically" ... I remember ending up with earphones being used as a microphone, etc.


The price of freedom (choice of software) is that you do not have a common minimum. I have been on Linux for some years now -- any regrets vanished over the months.


bye
Ron


skidracer(Posted 2014) [#8]
If SteamOS has pulse support maybe freeaudio just needs to have some port configuration / mixer commands so user configuration is then possible. I would be up for adding them.

I think BlitzMax audio on Linux should ideally have all drivers available for user selection; currently this isn't possible. The candidate drivers would be OpenAL, OSS, ALSA, Pulse and maybe JACK.

There are updates to OSS/ALSA I think in another thread here which I need to track down also.


dawlane(Posted 2014) [#9]
I was thinking of having a go at implementing audio driver selection by adapting this method. But lately I have been having doubts about whether it is still worth the effort to carry on with a product that looks like it may be on its last legs. As you have seen in a number of posts, people are getting a little reluctant to start any new projects, especially as Mark hasn't posted on this site for a while.
As a side note, the 64-bit issue cropped up over on the Monkey site when a problem with iOS popped up.


GaryV(Posted 2014) [#10]
But lately I have been having doubts if it is still worth the effort to carry on with a product that looks like it may be on it's last legs.
Although audio is a major issue with the Linux side of BlitzMax, so are the IDE and MaxGUI. It is pointless to fix one without fixing the others.

Josh has been putting a lot of work (and money) into getting MaxGUI functional on Linux. He can get away without audio since he is making an app.

Seb did a wonderful job getting MaxGUI functional on Windows and OSX, but the Linux side didn't really benefit from his work.

Unless BRL is willing to put some effort into the Linux side of Max, I am not sure it is worth your effort.


Derron(Posted 2014) [#11]
Then use wxMax if you want to use OS-GUI-functionality.

@dawlane and .so-files
This would remove the "all in one file"-aspect which is currently possible (with incbin::bla).

In general you are absolutely right about a "drivers available" thing containing the current engines without much trouble.
Afterwards "just" code a crossplatform-engine-independend-sound-streaming-functionality and most people (the handful left :D) in the forums will be more than glad to have you next to them.

@Mark
Although Mark has not written anything in this forum for years (I know that behaviour myself... people think "he is 'dead'", and if you start posting again you have to answer everything, people stress your nerves etc. - I did this for 5 years or so and now have to rebuild my whole community). BUT: he answered my support question, so the project itself isn't that dead - just on hold or so.
Maybe I should ask him directly why he has not kept being active in the forums here on the BlitzMax side of BRL.


bye
Ron


dawlane(Posted 2014) [#12]
Maybe I should ask him directly why he does not kept being active in the forums here on the BlitzMax-side of BRL.
I can answer this one myself by using one word, a hyphen and a letter..... Monkey-X.
I don't know who still has anything to do directly with BRL, or who has direct access to the primary compiler code base other than Mark. Maybe it could be put to him to open up this code as a closed-source project to a few talented BlitzMax users who can code C/C++ and assembly. Of course there would have to be non-disclosure agreements, someone in overall control etc., and someone who has time and is willing to commit themselves to such a project; and of course a list of people posted on these forums who are involved wouldn't hurt either.

@Derron: The question of using incbin with dynamic/static libraries surely depends on the license of that library? To me that wouldn't be a good solution, as the Free Software Foundation would crucify you if you dared to breach the terms of the GPL/LGPL. The cross-platform, engine-independent sound-streaming functionality sounds like a good solution, but the audio side of Max would need a major rewrite.


GaryV(Posted 2014) [#13]
Then use wxMax if you want to use OS-GUI-functionality.
The reason for buying MaxGUI was that it was said to work on Linux. That said, wxMax is not officially supported, is not without its bugs, and is bloated beyond belief.


skidracer(Posted 2014) [#14]
Pretty please can this discussion remain on topic.


skidracer(Posted 2014) [#15]
SteamOS seems to have had some audio issues - PulseAudio failing to start up - which have only recently been fixed.

It does seem like Pulse is the driver best suited for app development on Linux, but ideally the user needs to be able to switch between Pulse and OpenAL in the app's audio configuration options.


Derron(Posted 2014) [#16]
Never had this problem on my machines.

PulseAudio was a mess some years ago (back when they first introduced it as the "new default") and you still had ALSA installed as the default.

Like I said, for me the most "problematic" thing with sound on BlitzMax+Linux is that dependency check. You cannot fall back that easily to ALSA if PulseAudio is not working/not installed. This may be a non-issue, but it was problematic within my short tests concerning the sound engines.


bye
Ron


dawlane(Posted 2014) [#17]
Hold on, people. PulseAudio is not a driver; it is a sound server that relies on ALSA drivers. So no matter what, ALSA at some level has to be installed.

A BlitzMax application should, when executed, ideally hold an internal list of the sound architectures available on a system, but always default to ALSA. It should then be up to the application's author to make these available to the end user. The easiest way to implement this would be to check which libraries are installed and load/unload them at run-time.

On a side note, you usually get problems if you have JACK or JACK2 installed along with PulseAudio, but JACK2 with D-Bus is written to coexist with PulseAudio. The ALSA documentation mentions writing interrupt-driven routines for systems where a callback-driven mechanism like JACK etc. is installed.


Brucey(Posted 2014) [#18]
So there are two current sets of updates I could be applying? Some stuff for ALSA, and some stuff which makes BlitzMax "prefer" PulseAudio?


dawlane(Posted 2014) [#19]
skid wrote a PulseAudio BlitzMax driver which I incorporated into a patch script, along with the first fix of the ALSA driver, for a clean install of BlitzMax 1.48.

The script's default is to use PulseAudio. You can change this by editing brl.mod/freeaudioaudio/freeaudioaudio.bmx and pub.mod/freeaudio/freeaudio.bmx. You can edit it to include both the ALSA library API and PulseAudio, but the end user would have to have both on their system or the program will fall flat on its face, as function symbols are resolved at compile time. This is where checking, dynamic loading and function-symbol resolving of the libraries at runtime would be a boon. As it stands at the moment, the BlitzMax audio system doesn't do that.

PulseAudio is the default sound server for the main Linux distributions (Ubuntu, openSUSE, Fedora, Linux Mint, Mageia - a Mandriva clone, if I remember right) as it can do a few things that the ALSA library API cannot, but PulseAudio may not be installed as the default sound server in every distribution that's based off of them. The ALSA library should really be used as a last resort, as some people on these here pages said that they were having sound issues when using ALSA.


Brucey(Posted 2014) [#20]
Cool, so I'll integrate those two patches, and then look to convert the code for runtime linking.

And then what? Will everything "just work" or should we include Jack(2) as another option too?


dawlane(Posted 2014) [#21]
In theory it should work, but my internet searches have drawn a blank on dynamic loading of Linux sound systems. It looks like everyone just takes it for granted that you will install the required library packages.

PulseAudio does have a JACK module, which needs a little bit of work for it to do anything. But I cannot see any harm in adding other sound servers to the list. All it requires is the know-how, time and commitment to write the code.


skidracer(Posted 2014) [#22]
My first experiences were before Pulse, back when ALSA simply abstracted the line-out connection of the computer, so if your window manager wanted to go bloop while you wanted to go blip you were shit out of luck.


skidracer(Posted 2014) [#23]
Brucey, there are ALSA patches here.


dawlane(Posted 2014) [#24]
Just found this: A Guide Through The Linux Sound API Jungle, while I was trawling through the internet. The bit titled "You want to know more about the safe ALSA subset?" is worth a read.


Derron(Posted 2014) [#25]

You can edit it to include both the ALSA library API and PulseAudio, but the end user would have to have both on their system or the program will fall flat on it's face as function symbols are resolved at compile time. This is where checking, dynamic loading and function symbol resolving of the libraries at runtime would be a boon. As it stands at the moment the BlitzMax audio system doesn't do that.


I knew that at some point someone would find better words for the problem I tried to explain.


I also did not know that PulseAudio relies on ALSA. When using pure ALSA while my system was using PulseAudio, other programs' audio stopped while my app played sounds. As soon as I quit my app, the sound playback of the other apps continued. This happened on different computers. Since I patched according to the patches in the forum, my BlitzMax mods primarily use PulseAudio.


What about using rtAudio as a layer? Isn't it a layer like the freeaudio mod?

http://www.music.mcgill.ca/~gary/rtaudio/


@Resolving of functions
Isn't it an option to do some "checking" - checking for the OS and the enabled "sound servers", and only using the functions of a given server on success?


bye
Ron


dawlane(Posted 2014) [#26]
I also did not know that PulseAudio relies on ALSA.
In fact the kernel has low-level ALSA functionality as a component. When you write a normal ALSA application you go through the ALSA user-space library.

@Resolving of functions
Isn't it an option to do some "checking" so checking for OS, enabled "sound servers" and only use the functions of that server on success?
It should be possible to retrieve which sound server is running by searching the processes, either using functions in libprocps or parsing /proc directly (I can't remember if parsing /proc can be done by a user program).

And as a quick reminder of dynamic loading at run time:

1) In code you need to declare prototypes/typedefs as function pointers for the dynamic library's exported functions, and include dlfcn.h.
2) You need to declare a handle to hold the address of the dynamic library to load, and a pointer variable of type char for any error messages.
3) Use dlopen (handle = dlopen("lib to load", binding flags)) to load the dynamic library into memory. If the library was not compiled with the -fPIC flag, the link-loader will make a copy of that library in memory for each application that uses it. If it has been compiled with -fPIC, the loader binds the application to the copy already in memory if it has already been loaded.
If dlopen fails, the returned value is NULL. That's where the checking comes in.
4) Upon successful loading of the dynamic library, you will need to create variables of those prototype/typedef function-pointer types before you can start to retrieve the exported dynamic library functions with dlsym, e.g. myfunctionhandle = (prototype)dlsym(library_handle, "function_name");.

What is about using rtAudio as layer? Isn't it a layer like the freeaudio mod?
Good question. But from what bits of information I have seen so far, it still needs to be built against one of the audio APIs. So I would think that you are back to square one, unless it has its own method to load libraries for the other APIs.


Derron(Posted 2014) [#27]
It still needs to built against one of the audio API's. So if I would think that you are back to square one.


Hmm, rtAudio supports JACK ... so this would be a dependency. I think my system does not have JACK installed (aka it is still installable). So this might just be a matter of compiling the code (with the -dev libs). I just had to modify the code of rtAudio to use PulseAudio and check for it first, not after ALSA. ALSA first would lead to the already-mentioned problem of (non-)simultaneous audio playback on my systems.

Sure, I mostly favor rtAudio (if that were a solution) just to make the step to "vanilla BlitzMax" streaming audio less big than it is now.


bye
Ron


dawlane(Posted 2014) [#28]
Well, the license for rtAudio is very accommodating for direct inclusion of the source code, as long as you include the license. But you come back to the library linking problem. You could go for dynamic linking at run time with the repository-supplied package, but what other packages get installed with it, and would the end user accept having to install additional packages just for one type of application?

Edit:
Hmm rtAudio supports JACK ... so this should be a dependency. I think my system does not have JACK installed (aka still installable)
Just checked on my system and in Debian under a VM. libjack-jackd2-0 (this could be JACK2 with D-Bus) is installed as a default. And librtaudio.so.4 only supports ALSA and JACK.

Edit: libjack-jackd2-0 is the multiprocessor version of JACK. Whether D-Bus support is thrown in is anybody's guess.


Derron(Posted 2014) [#29]
Ahh yes... libjack-jackd2-0 is installed here too ... I just checked libjack (libjack0), which wasn't installed (same for libjack-dev).


@librtaudio.so.4
Isn't the maxmod2 rtAudio module already PulseAudio-enabled?
I just had a look at some sources, and most of the things I changed were there to make API selection possible from BlitzMax (RtAudioDriver::SetAPI(int) and others).
I should mention that maxmod+rtAudio does not work/compile flawlessly with MinGW 4.8(.1). Ask Brucey for additional info... he made some "raw" changes to make it compile (WAV only).


What I do not understand 100%: if I compile e.g. rtAudio, I have to have the *-dev libs installed. If I then compile that module into a binary, it should not need the non-dev libs at all - except when I call specific functions from them.
So if my program knows (e.g. via a config file) which functions to use and which to avoid - why should the binary need those library files?
Isn't there a way to make that work (or is that why "dynamic linking" would be needed)?

bye
Ron


Brucey(Posted 2014) [#30]
Okay, here's my first attempt. Can someone look this over and see if there's anything broken?

At the bottom, the OpenALSADevice() function does the load too, returning 0 if it fails.

In freeaudio.bmx, comment out
Import "-lasound"

and add
Import "-ldl"

in the same section.

It seems to work fine for me here (Linux Mint 16, VM).
In theory, the Linux freeaudio could include *all* the drivers, enabling each one that is available. Then it's up to the app to decide which it wants to use?


Derron(Posted 2014) [#31]
Ok, I have a modded freeaudio (using dawlane's patch I think - or my manual adjustments)

so it contains (yours merged):

?Linux
Import "-lpulse-simple"
Import "-ldl"
'Import "-lasound"

Import "alsadevice.cpp"
Import "ossdevice.cpp"
Import "pulseaudiodevice.cpp"
Extern "C"
Function OpenPulseAudioDevice()
Function OpenOSSDevice()
Function OpenALSADevice()
End Extern
?


And in fa_init()
	Select deviceid
		Case 0
			device=OpenPulseAudioDevice()
		Case 1
			device=OpenALSADevice()
		Case 2
			device=OpenOSSDevice()
	EndSelect

So default is pulseaudio.


I now replaced the .cpp file with yours: sound played without disturbing my internet radio.
I then commented out
- Import "-lpulse-simple"
- Import "pulseaudiodevice.cpp"
and replaced the "case 0" with the ALSA one.

Worked again (in both cases I ran samples/hitoro/rockout.bmx).

The only marginal difference was that my PulseAudio mixer displayed it one time as "freeaudio : rockout" and the other time as "[ALSA plugin] freeaudio : rockout".


In both cases it seems it is only visible during "audio playback" - in this case, when shooting. That is not your problem/fault - but the PulseAudio mixer allows adjusting the volume of applications independently of "ingame adjustments".


So conclusion: my little test worked without the flaws I expected. I did not check rtAudio without PulseAudio.
If I use TMaxModRtAudioDriver.Init("LINUX_ALSA") instead of TMaxModRtAudioDriver.Init("LINUX_PULSE")
it crashes:
RtApiAlsa::getDeviceInfo: snd_pcm_open error for device (hw:0,0), Device or resource busy.
terminate called after throwing an instance of 'RtError'
  what():  RtApiAlsa::probeDeviceOpen: pcm device (hw:0,0) won't open for output.
Aborted


But that has nothing to do with the freeaudio ...



@ libasound.so.2
you use a line: if( !_alsa ) _alsa=dlopen( "libasound.so.2",RTLD_NOW );

Should that really be needed?
In all cases there should be a "libasound.so" which symlinks to the most current file. Directly accessing ".so.2" means you want that specific version.

As my /usr/lib/i386-linux-gnu/ only contains libasound.so.2, but I have /usr/lib32/libasound.so, I assume that I just misunderstood some principles.

Or vice versa - you try to load libasound.so first; in my case this is a symlink to the actual /usr/lib/i386-linux-gnu/libasound.so.2.0.0 (which is symlinked to libasound.so.2). If you really need version "2", shouldn't you just access it directly? Feel free to teach me some things concerning my question.


bye
Ron


Brucey(Posted 2014) [#32]
In all cases there should be a "libasound.so" which symlinks to the most current file

Not really. .so without a numbered suffix is *mostly* only available via dev packages, but sometimes, some builds have both.

For example, TAO (an open-source CORBA library) does runtime linking with some of its shared objects, and these particular ones it assumes will be named without the numbered suffix (because it's easier than guessing the correct number, I assume).
But in general, when you are using shared objects, you've compiled against one (a compile-time link) rather than trying to runtime-link to one - and the majority of libraries only ship shared objects with their numbered suffixes.

Or something like that ;-)

<edit>
The .so.2 (numbered suffix) generally refers to a versioned API release. You'll also notice that these are usually also symlinks to another .so with even more numbered suffixes - which is the actual versioned build of that specific main version release.
You are generally safe, I think, to point to the specific number, as that is your fixed ABI. If they released a .so.3, then you may assume that the API has changed in some way - in which case you'd probably need to rewrite some of your code to use it.


dawlane(Posted 2014) [#33]
@Brucey: Haven't tried it yet, but I would suggest some error reporting if there is a problem with dlopen and dlsym. One more thing: it is possible to load a library like libpulse-simple without the PulseAudio server running. When that happens you tend to get an "Assertion 's' failed" error. This is where it may be an idea to look into seeing if a sound server is running on a system before calling any audio code. It should be possible to use libproc/procps to do this.

What I do not understand 100%: If I compile eg. rtAudio - I have to have that *-dev-libs installed. If I now compile that module into a binary It does not need the *-nondev-libs at all - except I call specific functions from it.
So if my program knows (eg. using a config file) what functions to use and which to avoid - why should the binary need that library files.
Isn't there a way to make that work (or is that why "dynamic linking" would be needed)?
Not entirely sure what you're asking here. But...
If I compile eg. rtAudio - I have to have that *-dev-libs installed
If you are compiling a library that requires external dependencies (i.e. other libraries), then you need those external libraries' development packages installed.

If I now compile that module into a binary It does not need the *-nondev-libs at all - except I call specific functions from it.
So if my program knows (eg. using a config file) what functions to use and which to avoid - why should the binary need that library files.
Now this is the bit where I'm having trouble understanding what you're trying to ask. But to give you some understanding of what is involved in running an application, you should have a look at Linkers and Loaders (OK, it's an old article, but I don't think that much has changed) and Executable and Linkable Format. And if possible, read a copy of Linkers & Loaders by John R. Levine. I've got a copy but haven't read it in ages, so without a re-read I could give you duff information.

And as Brucey has already mentioned .so without a number and what the number suffix means etc., I shall fill you in on a few other things.

The default 64bit debian/ubuntu/mint file structure.
The first thing to note is that there are no /usr/lib32 and /lib32 or /libx32 directories (I would have to check on this last one, but I'm sure it is for binaries that are built to take advantage of the extras on a 64-bit CPU). Most if not all of these get created when you install the g++/gcc multilib packages.

To handle multiple architectures, the libraries now get their own directory in /lib and /usr/lib. Each is named CPU_ARCHITECTURE-OS-gnu, e.g. i386-linux-gnu.

So as not to screw up the system for 64-bit development on debian/ubuntu/mint 64-bit, while maintaining a way to build 32-bit binaries, it was necessary to install just the normal development packages. These usually have header files, docs, libraries and an .so file that gets installed to the directory of whichever is the default architecture. On 64-bit debian/ubuntu/mint this is /usr/lib/x86_64-linux-gnu.

Now, if you wanted to build a 32-bit application, you would think to just install the 32-bit version of the development library. That would be a mistake, because installing the 32-bit development package removes the 64-bit version. So the workaround is to add .so links that point to the 32-bit libraries found in the i386-linux-gnu directories. Note that you can place those links in the root i386-linux-gnu directories.

Now, as bmk is hard-coded to search a number of directories, one being /usr/lib32, it makes sense to add .so links there that point to the duplicate 32-bit libraries that get installed in /usr/lib/i386-linux-gnu. This way the linker won't miss them during the link process.

When your application runs on someone else's machine, all they have to do is make sure that they have those base i386 libraries, as mentioned at the bottom of my posts where the install scripts are. In general this isn't a problem on a 32-bit distribution, as most are already installed, but it can be on a 64-bit distribution, as few if any 32-bit libraries get installed by default.

Now if you were using openSUSE/Fedora/Mageia, it's a different story, as you can have both the 32-bit and 64-bit development libraries side by side, with maybe the odd link if there is no 32-bit development package. I don't know offhand how they deal with non-x86 architectures, as I haven't looked into it. But I have hit a snag with a couple of compilers on other distributions that didn't like compiling png.mod due to the MMX assembly in the files.


Derron(Posted 2014) [#34]
Thanks for that really long elaboration on that topic (next time, do not put that much time into a posting for a too-lazy-to-google-it-himself guy :D).

@what I tried to ask:
I thought an application uses external functions in a way that, during compilation, references to external things are kept as just that: references, or empty values. When the binary starts and loads the external .so/.dll, those references are filled in - so calling an external function is "redirected" to that function in the .so/.dll.
If that were the case, you could reference functions in your binary which are not available on the client's system. The "references" would then just stay empty and the load-external-.so/.dll function would return false.
The binary could handle that "not filled/returned false" case by using another route (in this case another sound engine).

Using this, the binary would not need that .so/.dll to run; it would only need it as soon as it wants to use that functionality. So I assumed this would be a nice way to stop forcing people to install certain libs although they won't use that route (sound engine) at all.

Maybe I should rather call them "modules", loaded on demand, instead of parts of the binary externalized to other files.


@libasound.so
If you target libasound.so.2 directly, shouldn't you search for it first instead of libasound.so (as the latter could more easily be a different version)? According to your explanation, libasound.so is the fallback, not vice versa.



bye
Ron


dawlane(Posted 2014) [#35]
Thanks for that really long elaboration at that topic (next time - do not put that much time into a posting for a to-lazy-to-google-it-himself-guy :D).
Well, I was bored. And I was thinking of copying & pasting those externally linked pages in full into that post. But there's only so much you can fit into a reply box. ;-)

I thought an application is using external functions in a way that during compilation references to external things are kept as this: references or empty values. When starting the binary and "load external .so/.dll" that references are filled - so calling an external function is "redirected" to that function in that .so/.dll.
If that would be the case you could reference functions in your binary which are not available on the clients system. The "references" then just keep empty and the load-external-.so/.dll-function returns false.
The binary could handle that "not filled/returned false" with using another route (in this case another sound engine).
That first link to Linkers & Loaders explains the concepts and the process of what happens. As there are three ways to link a library, here's the basic gist of it.

static linking: All the library functions are built directly into the final executable and all symbols are resolved. Static (.a) libraries are just archives of object (.o) files.

dynamic linking at load/run time: Here the compiler has added symbols to the final executable by treating the shared library like an object file, but without the function addresses. The runtime linker needs to resolve these function symbol addresses when it loads the dynamic library, or bind to it if it's already in memory. And if there are symbols in there that it cannot resolve because of a missing shared (.so) library on another computer, then it has a coronary.

dynamic linking during runtime: With this method we tell the compiler to reserve some space in the executable by declaring some pointers that we will fill in later with the addresses of functions in a shared library file we are going to load ourselves through dlopen and dlsym. This gives you the option to pick and choose your poison.

All of these methods require the development libraries - or to be more precise, the header files - for compilation. In the last case, mostly just for the data structures.


Derron(Posted 2014) [#36]
Hmm, I seem to have problems understanding your response as an answer to my question - or asking the question in a way that is understandable.

"I thought" ... that an binary can get executed without an "dynamically linked" external file having to exist on the system. But when trying to run external functions it could crash (avoidable with checking for successful loading of the external file). Using this the modules could support multiple audio engines without the final binary needing all libraries installed.
I still assume that this is possible (else "dynamic linking during runtime" seems not that useful to me).
This is just the originating thought to the problem that up to now a binary needs an installed libpulse to make it runnable albeit you might use ALSA as sound basement.


bye
Ron


Brucey(Posted 2014) [#37]
Yes, of course. You could, in theory, not have any of the audio libraries on your system, and as long as you handled that fact in your app, it would still work.


Derron(Posted 2014) [#38]
So my memory must be cheating on me ... I really thought I had to install the libs to get my app working - although I just checked whether setting that certain engine was possible... Maybe it originates in rtAudio and my brain just mixed in something 3rd-party-style :D.


Sorry for the off-topic trouble I created here... back to business now.


bye
Ron


dawlane(Posted 2014) [#39]
Derron: If you're using an audio library/engine that handles its own loading of different sound systems, then you don't have to worry about installing the other libraries. It's the job of whoever wrote that library to worry about checking what's on someone's system. How they do this could be either using dlopen or creating hooks to running code (have a look at the ALSA plugin source code).

If you are looking for a specific audio system then you use one of the functions within that library to see if it gives you a device back and if it draws a blank, try the next until you hit the jack-pot or run it with no sound.

And on another note, the new BlitzMax 1.50 audio is still broken and you still have to import libld.


Brucey(Posted 2014) [#40]
And on another note the new BlitzMax 1.50 audio is still broken and you still have to import libld.

Well the "release notes" didn't mention anything other than bcc and maxgui :-p


dawlane(Posted 2014) [#41]
@Brucey: The alsa linking code works. But as I'm using a 64-bit distribution where the sound server is PulseAudio, it crashed with a segfault. Then I remembered that I hadn't installed libasound2-plugins:i386 on the system - and now it works. I will run the test application on a few other distributions tomorrow and see what gives.


dawlane(Posted 2014) [#42]
Well the "release notes" didn't mention anything other than bcc and maxgui :-p
It's no bloody good just updating half a system, is it? I blame Derron for not nagging him enough. ;-)

Edit: Oops; it should have been libdl.


Brucey(Posted 2014) [#43]
The alsa linking code works. But as I'm using a 64-bit distribution where the sound server is PulseAudio, it crashed with a segfault.

Well, I think we'll need some better wrapper code to handle fallbacks to other drivers - and then perhaps a very default one which doesn't actually do anything? (you know, an empty driver which lets you call functions but doesn't play any sounds)


dawlane(Posted 2014) [#44]
I think the problem was an ALSA<>Pulse issue, as I got the same thing with skid's original fix for ALSA. It needs to be tested on a system where ALSA is the sole sound implementation.
The error was generated when the ALSA library tried to load the ALSA plugin for PulseAudio.
You could get the same error with JACK as the sound server.
Once I installed the 32-bit ALSA plugins library, it played nicely.


Derron(Posted 2014) [#45]
Explains why it ran "flawlessly" on my system - had that *:i386 installed somehow (cannot remember having done this myself -- Mint 16 64-bit, upgraded from 14).


@Nagging
Maaaan, I just wanted to write something myself. Mark was surprised when I asked whether that "upcoming" patch (asked about some days before) also tackles the "======" problem in comments. I cannot check whether that connects directly to the quote bug or whether it is a more general problem. It seems Mark did not know what I was talking about, which means he did not read the thread I linked for further details.
Maybe you (dawlane) could be the next to start a bug report :D. Just write your bug to the support email; you will get an answer from Simon that he is relaying it to Mark.
Feel free to enjoy it if Mark sends you an email containing the email Simon wrote to him :D.


@brucey and fallbacks
rtAudio already handles this with "RtApiDummy" (a class doing nothing :D). So you are not the first person thinking about that problem.
Such a fallback stub device shouldn't be a big problem for me to code - it should just be a matter of an hour of boredom and cracking your knuckles.

Those "probes" of rtAudio would not be needed if you could just do "bool = OpenEngine(registeredEngineNumber, params)" and loop until bool = true.


bye
Ron


dawlane(Posted 2014) [#46]
Explains why it ran "flawlessly" on my system - had that *:i386 installed somehow (cannot remember having done this myself -- Mint 16 64-bit, upgraded from 14).
I did a clean install. On a 32-bit distribution the asound2 plugins are there.

Maybe you (dawlane) could be the next starting a bug report :D
There is a post by me in the bug reports about the fasm 'Out of Memory Fix' that Mark is aware of. But I haven't seen a really good solution to this 'once in a while' recurring issue. Ideally it should be passed as a user-accessible parameter via bmk.

Back to topic.
So far I've tested the dlopen version under VBox with
Ubuntu 12.04, Linux Mint 13/16, Fedora 19, OpenSUSE 13.1, Mageia 3, and Lubuntu 13.10 (which by default looks like an ALSA/JACK combo).
The only fly in the ointment is Debian 7, as the test application binary is incompatible with its GLIBC (different versions). I will have to rebuild it on Debian and try again.


Brucey(Posted 2014) [#47]
I'll patch my stuff for Pulse next, and then modify it for runtime linking.


Derron(Posted 2014) [#48]
look... magical dust all over my shoulders... there must have been a fairy while I closed my eyes.


bye
Ron


Brucey(Posted 2014) [#49]
Okay, here's PulseAudio using runtime linking:


In freeaudio.bmx, comment out the line:
Import "-lpulse-simple"

and again, add this line :
Import "-ldl"


The rest should be the same as for the patched files.


Jur(Posted 2014) [#50]
Nice! I am now using the updated version of freeaudio.mod from https://github.com/maxmods/pub.mod . I updated brl.freeaudioaudio.mod to have two drivers of the TFreeAudioAudioDriver type for both supported devices (id 0 for "pulse" and 1 for "ALSA"), so I can use the one I want with SetAudioDriver(). Both are working fine on Ubuntu.

I am planning to set the used sound driver for my application in this way:

if SetAudioDriver("pulse")=true --> use that
elseif SetAudioDriver("ALSA")=true --> use that
elseif SetAudioDriver("OpenAL")=true --> use that
else --> sorry, no sound for you

Am I too optimistic in supposing this will work without crashes if the pulse system, ALSA, or something else is missing on a random Linux distribution?


Derron(Posted 2014) [#51]
I think as soon as you try to set "pulse", it tries to load the libs.

So this means you have to change the order to "ALSA" -> "Pulse" -> "OpenAL" (I think ALSA is installed on more distros than Pulse, and if Pulse is installed, most of the time ALSA is there too :p).

A problem could arise from the fact that your app might benefit from Pulse (custom volume control for this specific application, network audio support ...). So you will end up with a user-customizable config file stating which engine to try first (this is how I do it with maxmod2 and rtAudio) - and if nothing is stated, try the one you think most people use first (I think it is PulseAudio).


PS: I also asked skidracer by email whether he could add a data buffer fill function to freeaudio so we could add streamed audio... but have got no reply yet.

bye
Ron


skidracer(Posted 2014) [#52]
Derron, I got your email. I was going to have a look at a solution this weekend.

First, I have located a thread with previous experiments in freeaudio streaming here.


skidracer(Posted 2014) [#53]
Derron, I finally got a freeaudio streaming example working here.


Derron(Posted 2014) [#54]
Thanks for reading my email - and coding something :D

Of course I tested your code - and listened to the finest music.
Now to the "problem" I still have:

Your code does the following:
- create buffer reference
- init sound with referenced buffer
- loop
--- fill that buffer
--- adjust positions

What "I" (and others) need and want - if I have not misunderstood the approaches:
- sound objects with "GetData:byte[](position:int)" functions
- sound drivers (freeaudio?) calling "GetData" if they need new "bytes" to play

This would then allow plugging in multiple GetData types - so to play an OGG file we rely on pub.OggVorbis to provide the byte[] we want. If we want an artificial sound (like your oscillator wave) we use another GetData.

So the main difference is: you push your data to freeaudio - but "I" need "pull data from audio".


Your approach will "work" too: create some kind of "soundmanager" running "Update" for each attached streamedAudio object. You then of course have to run an update of the soundmanager on each loop of your app.
Yours is of course less "intrusive" (fewer changes to the original source code) but adds another wrapper around the sound playback implementation in BlitzMax.

My approach/idea allows for "extending" TSound etc. - so a beginner just needs to import a module/.bmx file and use the specific LoadXYZformatFile(path) command (or even use the automatic dispatching service like redi did in maxmod) without having to fiddle around with "soundmanager.Update()" or similar things.

edit:

within freeaudio.cpp there is
int sound::mix(int *b,int size)

which is the function reading from the buffer. Maybe this direct buffer access should be wrapped in a "GetBuffer" - which could then be used to include the desired functionality (a call to the TSound's implementation of GetData(position, length)).

As there are differing casts to the data block - (short *) and (u8 *) - there should be two getters (I am not sure about this as my knowledge is a bit limited :P).

Maybe we could also use that "sound::peek" function, which is not used in any module/code within brl.mod or pub.mod. I mean: peek moves to a specific position in a memory block - but it could also be extended to move to a specific position in a stream -> make that "extendable" so the implementation in BlitzMax could override the behaviour.

Another option would be to have a function which is called as soon as a "buffer block" reaches its end. So instead of "GetBuffer" we just inform some object that the cursor in the block has reached the end. This "call" could then reset the cursor to the start of the block (loop mode) and/or refill the block with new data (stream).
This would mean modifying this call in freeaudio.cpp:
if (status&LOOPING) pos64-=len64;else return 1;

to add the "inform others" call.

That just made me notice the "STREAMING" flag in the code again ... I still don't know whether it actually does something already.

/edit

Ideas/Meanings/rants ?


PS: don't feel attacked by my post; I am glad you paved _one_ way to do it for me (but, as stated, not general enough to ease use for everyone).

bye
Ron


Brucey(Posted 2014) [#55]
So the main difference is: you push your data to freeaudio - but "I" need "pull data from audio".

I knew you were going to say that :-p

But I think coding it in 'Max is not the answer (re GetData).
All this low-level stuff wants to be wrapped up in some C/C++ somewhere, so that, for example, your PlaySound can either play something from memory or via a stream through an Ogg stream loader.

At the end of the day, you don't (want to?) care about *how* the sound is playing (streamed or otherwise); you just want to Play it.


Derron(Posted 2014) [#56]
I just edited my posting to reflect some additional thoughts.

@brucey
You know me well... too well :D

@wrapping
That is why I suggested having only a simple function in TSound that gets called as soon as the buffer's current position reaches the end of the buffer. The "stub" TSound just does nothing - because freeaudio already moves the "current buffer pointer" back to 0 if the LOOPING bool is true, and without loop mode the sound is "finished" and ready to be released.

The extended TSound (created with a streaming background) then just uses this function to refill the buffer with "new data".

BUT ... if extending freeaudio anyway, you could also rewrite TSound to allow "streaming" per se (independent of the implementation for ogg, mp3 or whatever).


bye
Ron


skidracer(Posted 2014) [#57]
I don't agree that the callback needs to be done by FreeAudio. It does not make sense for the mixer - which is itself typically called from a high-priority realtime-scheduled thread at a high frequency - to be "pulling data".

Processor intensive codecs with disk access delays should fill larger buffers in a less repetitive manner on their own threads or via a polled interface.


Derron(Posted 2014) [#58]
So in other words you would prefer some kind of "SoundManager" managing the "streamed audio objects" (and checking positions, buffers etc.) instead of calling an extendable stub each update cycle.

I am just asking to see if I got it "right".

My concerns were only directed at beginners who just want "streamed audio", while I "of course" am able to write a manager taking care of its children (...or maybe I just tend to think I am able to do it :D). Especially as BlitzMax targets game developers by providing wrappers for many things to ease usage.


bye
Ron


Derron(Posted 2014) [#59]
@skidracer

This afternoon I tried to generalize your approach (making a basic sound type and so on) ... It seemed to work, but randomly crashed (segfaults) or gave a malloc error about linked-list duplicates or so (I do not remember).

I thought it was because of wrong types (int, pointers) when making it SuperStrict, but I cut it down to using a "buffer inside a type instance".
It does not segfault every time ... e.g. I tested it now while posting here ... no problems (all of my test codes, the simple and the more complex ones).
To increase the chance of segfaulting to 100% I would have to add "Flip" right before that "Delay 50". It then segfaults somewhere near the end of the buffer (+- 1k).

I of course added a graphics call to initialize a window (and an abort key) so it is easier to code from an IDE (not compiling from my terminal).






If adding graphics + flip directly to your code (so no type property) it works without trouble. Might it be some kind of garbage collection issue?


bye
Ron


skidracer(Posted 2014) [#60]
Ron, what platform are you on?


Brucey(Posted 2014) [#61]
Linux ;-)


skidracer(Posted 2014) [#62]
Eeek, I'm not sure I'm ready to go back there :)

Here is some code, tested on MacOS, that will make your ears bleed at various MouseY values.




Derron(Posted 2014) [#63]
First of all I have to confess: yes... I played with your mouse-sound for some time :D.

I guess the change from "buffer:Byte Ptr" to "buffer:Byte[]" did the trick ... I just changed line by line until I saw what you had changed there....

This explains why my segfaults happened randomly ... and why my Linux once stopped responding :D. And Mac... hmm, I just use it for compiling the binary for the <= 2 players of our current users on that OS, compared to a minimum of 4 playing on Linux ... so Windows is still 98+%.


Like I said before... I am not the wisest man on earth concerning pointers and other thingies ... to avoid such trouble I like SuperStrict (or other modes that don't allow passing variables with the wrong/inexact type).

Thanks for clarifying the problematic part - and feel free to update your entry in the code archives (Byte Ptr -> Byte[]).


bye
Ron


Derron(Posted 2014) [#64]
New question:

In your example you create a "mono sound" - how do I align the bytes for stereo output?

I tried the most simple way and assumed two channels are just done this way:
buffer[i] = valueA_Channel1
buffer[i+1] = valueA_Channel2

The sound stuttered...


For playing a streamed ogg (ignore the missing filestream offset reset) I
- created a Byte Ptr for the ogg file (ogg = Decode_Ogg(...)); the information was read correctly
- filled the buffer (aka replaced your for loop) with Read_Ogg(ogg, buffer, frags)

The only thing I was able to listen to then was again a stuttering and seemingly "repeated" sound. The buffer contained varying data, so the "reading" was happening too.



So in short: how to generate stereo sound?


bye
Ron


skidracer(Posted 2014) [#65]
Good, glad that is fixed. Sorry about casting the byte array to a pointer and not retaining a reference to the array; my bad. GC 101...

I have added a 16 bit stereo example to the code archive, please feel free to post feedback there.


Derron(Posted 2014) [#66]
Your new code works.
But this isn't compatible with the TAudio and TAudioSample approach, is it?

They (TAudio...) use a "memory blob" (MemAlloc()) to store their data. Also, your sound output has a low volume - compared to the loud cracks/noise I get when trying to wire it up with the ogg_decode data. Might the data be misinterpreted, or is it missing some information (like you only give frequency/tone but not volume)?


EDIT:
Ok, I got some ogg sound playing... but stuttering (sound, silence, sound, silence - each only some milliseconds long). Also the sound seems "slower" (I think the silence is the reason).

EDIT2:

Another Ok: I fiddled together an example - to play that ogg file, I needed to switch back to buffer:Byte Ptr, allocated by MemAlloc().
This then does not work nicely with your sine-wave code (@skidracer).

Also I noticed that a "fa_SetChannelVolume(faChannel, 255)" is needed to get playback at "normal volume".

To run the following code, you need to adjust "uri" to point to a local OGG file.




So in its current state I have trouble playing back random noise or ogg files ... both have those gaps in them ... the more "caching" I do (the bigger the "read blocks"), the longer the sound gets - but the silent gaps increase in length too.
Somehow I think that the "fa_***" things are borked - or expect something I do not provide to them.

bye
Ron


skidracer(Posted 2014) [#67]
I posted a working ogg stream in the code archive. On my first attempt the TStream was being collected, as I forgot to retain a reference, leading to some very odd crashes.

It sure is fun getting back into BlitzMax; I forgot how snappy it is.

I was just reading the webm spec: Vorbis is used for audio and VP8 for video. I am curious how complex a task it would be to get VP8 onto BlitzMax.


Derron(Posted 2014) [#68]
Thanks skidracer,

Now I will have to check what the difference to my code is (I was on my way to rewriting it to load "chunks" instead of fragments, so I have more control over cache states) ... but first I have to make it work.

I was already sad not to see a posting this morning (6 hrs ago)... I thought I'd have to wait like a child before Christmas :p.

Thanks again.

Edit:
The problematic part was the offset and the length of the to-read block: both count 4-byte frames, so they have to be multiplied by 4 to get byte values.

so from
local bufAppend:Byte Ptr = buffer + offset
'try to read the oggfile at the current position
Local err:int = Read_Ogg(ogg, bufAppend, length)


to
local bufAppend:Byte Ptr = buffer + offset*4
'try to read the oggfile at the current position
Local err:int = Read_Ogg(ogg, bufAppend, length*4)



So you can use int[] or "Byte Ptr = MemAlloc()"... I think yours saves you having to free the memory on delete - or is the MemFree just missing?


bye
Ron


Derron(Posted 2014) [#69]
Ok ... I rewrote my code so it utilizes as much existing code as possible (so adjustments to existing code in projects can be minimized); I use TAudioSample/TChannel and TSound.

When exiting the loop (to exit the app) the sound keeps playing for 2-3 secs (seems to be some kind of freeAudio buffer).

This does not happen with your code - because your code does not close free audio.

Put an "fa_Close" after your loop (and make the loop exitable) to see what I mean.

If you use "SetAudioDriver("FreeAudio")" instead of "fa_init(x)" you will get the same effect, as the freeaudio driver runs "fa_close" when exiting.


To avoid the "repeating sound" (the small buffer is repeated over and over), you have to manually stop the channel after the loop:

fa_StopChannel(TFreeAudioChannel(myAudio.currentChannel).fa_channel)
fa_FreeChannel(TFreeAudioChannel(myAudio.currentChannel).fa_channel)
fa_FreeSound(TFreeAudioSound(myAudio.sound).fa_sound)
myAudio = null

The other lines (freechannel, freesound) do not help and do not change the output if the channel isn't stopped.
If you stop the channel with the above code, you still have to wait some seconds until the app really exits (you just no longer hear sound output).

-> Using "SetAudioDriver" (which is the common approach) makes it hang on exit, because "fa_close" is called when an app exit is triggered.



Another mind-bender is the following: if I use "CreateAudioSample" and then utilize that sample's samples pointer as the buffer ... it plays the sound fine - except for the very first milliseconds, so you can hear light cracks/random noise at the beginning. Therefore I had to keep the "buffer" (int array) and use "CreateStaticAudioSample" to refer to that buffer - then the cracking is gone.

All in all I tried to get rid of "fa_***" functionality/dependencies, but "LoadSound(audiosample, flags)" does not route those flags through to the fa_CreateSound() command. So there seems to be no "anonymous way" to tell fa_CreateSound() to use that dynamic flag value of $80000000. Adding that functionality would require modifying the module, and that is what I wanted to minimize too (it should work on "vanilla" BlitzMax - except maybe for those freeaudio patches).

Another problem: a basic TChannel does not provide "Position()", which is needed to see how far the channel has already played (I already changed the calculation because a channel can still "play" while streaming is "paused"). The only other thing I can think of is: check on each update how much time has passed, and if the channel is "playing", add that time to a counter. BUT ... if you pause the channel between two update/poll cycles, you will skip things. So this adds another dependency I cannot get rid of (to make the whole implementation free of dependencies like FreeAudio).


I will test and adjust some things over the next days (so it might replace the rtAudio code in my project) and then post a GitHub link to the complete code (zlib/libpng).


bye
Ron


skidracer(Posted 2014) [#70]
Why would you want to use BRL TAudio types?


Derron(Posted 2014) [#71]
To make them interchangeable.


I think people have code containing "TSound" and "TChannel" (e.g. for crossfading or mixing of streamed and non-streamed sounds - or other possibilities) - to make the underlying code independent of being able to stream things, I wanted to reuse those types. Keeping functionality intact (SetChannelVolume(channel), ...) is also a nice benefit.
This also makes it easier to exchange it for another underlying sound engine (if there is one).


Nonetheless that sound bug is a bit annoying:
open your oggstream sample file and append "fa_Close" to the end. Execute it, and try to exit the app.

Print "freestream is free streaming..."
Graphics 800, 600
While not KeyHit(KEY_ESCAPE)
	Cls
	ogg.Poll
	Print "."	
	Flip
Wend

fa_Close


You will notice some kind of "waiting" until the app really exits.


edit:
int BBCALL fa_Close(){
	if (io) delete io;
	if (audio) audio->close();
	audio=0;
	io=0;
	return 0;
}

Commenting out that "if (audio) audio->close();" makes exiting snappy again. So there must be some action in that close() function.
"audio" is the "audiodevice" ... so it calls "audiodevice->close()". Audiodevice has this defined:

// virtual OS dependent interface
	virtual int reset()=0;
	virtual int close()=0;


So this means close() is defined somewhere else -> in the alsa/pulseaudio/... files. Voila, there is a close function in each of them.

alsadevice.cpp
	int close(){
		int		timeout;
		running=0;
		timeout=5;
		while (timeout-- && playing) sleep(1);
		return 0;
	}


This means: it waits up to 5 seconds for channels to finish playing... ahem ... why should it do that? I want to exit an app - and this piece of code says "no, first wait until all channels have finished playing". Is there a reason to do it this way?


PS: in dsounddevice.cpp there is another comment:
timeout=20;	//100ms timeout
while (timeout-- && playing) {
	Sleep(5);
}

Means: with DirectX, "S"leep is used (the param is milliseconds), so you wait up to 20 * 5 ms = 100 ms (which is acceptable).


pulseaudio.cpp waits 5 seconds too (500 * 10,000 microseconds = 5,000,000 microseconds = 5 seconds)

timeout=500;
while (timeout-- && playing){
	usleep( 10*1000 );
}


Somehow I think someone mixed up the timeout values...


EDIT 2:
To patch things up (to make them behave like DX on Windows and CoreAudio on Mac):

alsadevice.cpp:
	int close(){
		int	timeout;
		running=0;
		timeout=20;
		while (timeout-- && playing) {
			//sleep(1);
			//1 ms = 1000 microseconds  -> 5000 microseconds = 5 ms
			usleep(5*1000);
		}
		return 0;
	}



pulseaudio.cpp
	int close(){
		int timeout;
		running=0;
		timeout=20;
		while (timeout-- && playing){
			usleep(5*1000);
		}
		pa_simple_free(simple);
		return 0;
	}


ossdevice.cpp
	int close(){	
		int	timeout;
		running=0;
		timeout=20;
		while (timeout-- && playing) usleep( 5*1000 );
		::close(fd);
		return 0;
	}



But the question remains the same: why do we wait at all? (In the CoreAudio file there is no while-waiting loop at all.)

PS: that waiting is independent of our streaming experiments; just have a look at how samples/digesteroids exits on Linux (disable full-screen mode to make it more clearly recognizable).

bye
Ron


skidracer(Posted 2014) [#72]
The code you are listing is using the running variable to communicate with the mixer thread.


Derron(Posted 2014) [#73]
While they all wait for "not playing", this does not explain why Win32 waits up to 100 ms, OSX no time at all, and the Linux backends 5 seconds.

In my case - and, as said, WITHOUT the streaming portion (e.g. compile samples/Digesteroids) - this leads to a waiting time of 5 seconds after I quit an app.


bye
Ron


skidracer(Posted 2014) [#74]
Reviewing the pinned ALSA and Pulse code: Pulse looks ok; ALSA looks bugged, as it should mix until running goes to 0 and then set playing to 0, which it doesn't.


Derron(Posted 2014) [#75]
Hmm, maybe my postings got too long so you did not read what I wanted to express:

- DirectSound: waits 100 ms or until nothing is playing any longer
- CoreAudio: does not wait, just calls a function

- ALSA/PulseAudio/OSS: wait 5 seconds or until nothing is playing any longer


In my environment (Linux Mint, PulseAudio) those 5 seconds have to pass until the app exits - so "playing" is not reset correctly.
I assume this is what you identified as the bug (playing is not set to 0).

Even if you fix that mistake, I cannot imagine why a 5-second waiting time (in case something fails - like it does now) is needed while DirectSound is happy with 100 ms.


bye
Ron


Derron(Posted 2014) [#76]
Hmmm, I cannot read your response (did you delete it and the board did not trigger correctly?)

Thread overview


Current Thread



bye
Ron


Brucey(Posted 2014) [#77]
Hmmm I cannot read your response..

It appears to have disappeared - perhaps deleted by skid himself?
It was there earlier, because I read it! :-)
(no, I can't remember all that it said)


Derron(Posted 2014) [#78]
Maybe I'll get a response this time ... the last time I mentioned this odd behaviour (the overview lists a post, the thread view does not) it was also a missing post of skidracer's. Maybe "content deleted" posts are better in this case (unless the forum gets rid of that bug of exposing the former existence of a post).

Just hope it was a kind post not containing swearing about my stubbornness regarding some timeout values :D


bye
Ron


degac(Posted 2014) [#79]
Skid's comment was something about the fact that it is not a mistake but a bug (or something like that).


skidracer(Posted 2014) [#80]
Yes, sorry, I won't do that again. I will look at what is needed to fix the shutdown issues this weekend.


Derron(Posted 2014) [#81]
Thanks in advance.

This is the current code of my AudioStream-code:

(Link is to the explicit commit of that file, so a specific revision of the file)
Github:Dig:base.sfx.soundstream.bmx


The problematic part is: the update of the streams has to happen "threaded". To make it threaded without forcing users to "build threaded", the code should use some C which automates the creation of a pthread etc. So something like:

- add sound: add sound ref to a manager/collection
- threaded update of manager/collection
- remove sound - delete() : remove ref from manager/collection

PS: I know my code surely has some issues. A "crossfading mixer" can be found in the file "base.sfx.soundmanager.bmx" (utilizing 2 channels and adjusting volume according to given commands). As you cannot crossfade the same stream using the same buffer, the crossfader automatically "clones" a stream if both to-play streams are the "same". Ideas on how to circumvent this? I think something like "multiple buffers" per stream object could do it, but that adds overhead.


bye
Ron


skidracer(Posted 2014) [#82]
Back on topic. I think Mark has a point in that other thread: backing OpenAL seems the best solution for Linux.

Brucey, I'm interested how you rate it.


Brucey(Posted 2014) [#83]
Unfortunately I don't have much experience with it, although I gather a good number of games (on Steam, for example) like to use it. Which makes me think it is a fairly stable platform for Linux audio.

irrklang has recently gone 64-bit across platforms which is interesting.
And BASS supports ARM Linux, which is quite exciting :-)

I need to try to find some time to play around with the latest OpenAL-Soft, and see how it goes.

I was playing around with my "ng" brl and pub mods the other day, and have it working so that it "falls back" to ALSA if PulseAudio isn't available.
I wonder if OpenAL-Soft handles that better though. I see it can be built with support for many different backends.


dawlane(Posted 2014) [#84]
Some distributions have problems with OpenAL and PulseAudio installed together on certain hardware configurations. But some say that OpenAL-Soft is faster than the Creative Labs offering on Windows. Rumour has it that Creative dropped OpenAL because they could not successfully use hardware acceleration.


Derron(Posted 2014) [#85]
BTW: Brucey, some posts above yours you can see why the experiments on Linux had a 5-second delay when closing the programme afterwards.

@skidracer
Did you have a chance to fix that delay?


bye
Ron


Derron(Posted 2014) [#86]
Any news on the audio driver (delay)?

And still of interest: a streamed audio type which can be used with TChannel. The audio stream should contain some C code so that it fills the buffer in a thread without forcing the BlitzMax project to be compiled in threaded mode too.


bye
Ron