Could you please describe in more detail what the function is supposed to do? Basically you want two audio streams playing in sync and to cross-fade between them, like in DJ Hero? How do you use this function — several times, or just once to sync the streams?
What I do to make sure several streams play in sync is:
-create a separate mixer that is stopped and attach it to the global mixer
-create the streams and attach them to the separate mixer
-start the separate mixer
This makes sure the streams start at exactly the same time. Otherwise it can happen that the OS makes a context switch between starting the first and the second stream, which can introduce some delay under rare conditions.
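The steps above, sketched against the Allegro 5 audio API (the file names, sample rate, and buffer sizes are placeholders of mine, and error handling is omitted):

```c
#include <allegro5/allegro.h>
#include <allegro5/allegro_audio.h>
#include <allegro5/allegro_acodec.h>

/* Assumes al_install_audio() / al_init_acodec_addon() have already run
 * and a default mixer exists. */
void start_streams_in_sync(void)
{
   /* 1) Create a separate mixer, stop it, attach it to the global mixer. */
   ALLEGRO_MIXER *sub = al_create_mixer(44100,
      ALLEGRO_AUDIO_DEPTH_FLOAT32, ALLEGRO_CHANNEL_CONF_2);
   al_set_mixer_playing(sub, false);
   al_attach_mixer_to_mixer(sub, al_get_default_mixer());

   /* 2) Attach the streams while the sub-mixer is stopped. */
   ALLEGRO_AUDIO_STREAM *a = al_load_audio_stream("a.ogg", 4, 2048);
   ALLEGRO_AUDIO_STREAM *b = al_load_audio_stream("b.ogg", 4, 2048);
   al_attach_audio_stream_to_mixer(a, sub);
   al_attach_audio_stream_to_mixer(b, sub);

   /* 3) Start the sub-mixer: both streams begin on the same sample,
    * with no window for a context switch between the two starts. */
   al_set_mixer_playing(sub, true);
}
```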
I'm not using the self-updating streams, though; I provide the sample data myself. Looping and offsets are handled by my code. If one of the synced streams should start later, I insert silence at the beginning for the duration of the delay.
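The silence-insertion idea is simple to sketch. This is my own illustration, not code from my project: assuming 16-bit mono samples, convert the delay to a sample count and prepend that many zero samples.

```c
#include <stdlib.h>
#include <string.h>

/* Number of zero samples needed for a given start delay
 * (assumed 16-bit mono, so one short per sample). */
size_t delay_to_samples(double delay_secs, unsigned freq)
{
   return (size_t)(delay_secs * freq + 0.5);
}

/* Return a new buffer with `pad` silent samples followed by the
 * original `len` samples. Caller frees the result. */
short *prepend_silence(const short *data, size_t len, size_t pad)
{
   short *out = malloc((pad + len) * sizeof(short));
   if (!out)
      return NULL;
   memset(out, 0, pad * sizeof(short));           /* the silence */
   memcpy(out + pad, data, len * sizeof(short));  /* the real data */
   return out;
}
```

At 44100 Hz, a half-second delay becomes 22050 samples of silence in front of the real data.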
I tried really hard to find a way to use the played-sample count of a stream to calculate the position for the self-updating streams. There are fundamental problems with that: it seems easy at first, but some of the details are hard to solve.
Self-updating streams support loops. Currently the loop borders can change at any time: while the stream is already playing, and, even worse, while a single fragment buffer is being filled. Please take a look at this code from _al_kcm_feed_stream:
/* In case it reaches the end of the stream source, stream feeder will
 * fill the remaining space with silence. If we should loop, rewind the
 * stream and override the silence with the beginning.
 * In extreme cases we need to repeat it multiple times.
 */
while (...) {
   ...
   stream->feeder(stream, fragment + ...);
   ...
}
The mutex is locked and unlocked several times. Especially for small loops, where the while loop may run through several iterations, this allows the loop borders to be changed while a single fragment buffer is being filled.
I see no way to use the played_samples or played_sample_buffers counts to reliably get the correct playback position under these circumstances. At the very least, the loop borders should be kept constant while a fragment buffer is being filled. But even then I would have to store the loop borders for every fragment buffer to map the played_samples count back to the correct playback position. The implementation would be quite messy.
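To illustrate why the borders matter: if the loop borders were guaranteed constant, the mapping from played samples to stream position would be a one-liner (function and parameter names are mine, not Allegro API). The formula only holds for one fixed (loop_start, loop_end) pair, which is exactly why changing borders mid-fragment would force you to keep the historical borders for every fragment buffer.

```c
/* Map a total played-sample count back to a position inside the stream,
 * assuming the loop borders never changed (and loop_end > loop_start).
 * Hypothetical sketch, not part of the Allegro audio addon. */
unsigned long position_from_played(unsigned long played,
                                   unsigned long loop_start,
                                   unsigned long loop_end)
{
   unsigned long loop_len = loop_end - loop_start;

   if (played < loop_end)
      return played;  /* still before the first wrap-around */

   /* After the first pass, every further loop_len samples wrap
    * back to loop_start. */
   return loop_start + (played - loop_start) % loop_len;
}
```

If the borders change at played sample 10000, everything computed from a single (loop_start, loop_end) pair after that point is simply wrong.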
It would be even better to automatically stop and restart the stream whenever the loop borders change. But this changes behavior: code that relied on setting the loop borders while the stream is running would get different results. The implementation would be much simpler, but still quite some work.
You could argue that getting the real playback position isn't required in these extreme cases, or that it would be OK to just reset the reported position when a loop wraps around. But in that case it would also be OK to keep the old implementation of al_get_audio_stream_position_secs, as it does exactly that.
Thinking about the whole issue again, I still see no real benefit in calling the function al_get_audio_stream_position instead of al_get_played_samples and making it work the way you want for self-updating streams. For simple audio streams the name would be wrong, because they don't have the concept of a position. For self-updating streams the function is hard to implement without changing the way these streams behave. It also further increases the interweaving (I'm not sure if this is the right word, hope you understand what I mean) of basic audio streams and the higher-level self-updating streams, which, as we already agreed, is not a good thing. So I'm kindly asking again: wouldn't it be better to just add the function as al_get_audio_stream_played_samples and leave al_get_audio_stream_position_secs as it is?
If you still think al_get_audio_stream_position is the right way to go: What can I change about the way setting loop borders works right now? Can I stop the stream when loop borders are being changed?