Hi,
 

> For the second question, it seems like there might already be an easy way. 
> I was looking over the pyglet.lib module, and I discovered a few things I 
> was unaware of. First of all, it's possible to pass multiple library names 
> to the pyglet.lib.load_library() method. For an example, 
> pyglet/media/drivers/openal/lib_openal.py shows this in action. Basically, 
> we can use kwargs for additional win32 library names. This might make 
> "ffmpeg_libs_win" 
> unnecessary. 
> In addition to that, pyglet already has the 
> pyglet.options['search_local_libs'] option. If you look in the pyglet.lib 
> module, it shows that this option will make pyglet also check for a 
> subdirectory in the project path named "lib". If the ffmpeg libraries are 
> placed there, it should be a solution for bundling ffmpeg libraries with 
> applications. I'm interested if this works as expected on Windows. 
>
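If I understand the mechanism Benjamin describes, the multi-name fallback can be sketched in plain ctypes like this. This is only a sketch of the idea, not pyglet.lib's actual code (which also takes platform-specific kwargs like win32=...); the function name and library names below are illustrative:

```python
import ctypes
import ctypes.util

def load_first_available(*names):
    # Try each candidate name in turn, the way pyglet.lib.load_library
    # does when given several library names; return the first that loads.
    for name in names:
        path = ctypes.util.find_library(name)
        if path:
            return ctypes.CDLL(path)
    raise ImportError('none of %s could be loaded' % (names,))

# An OpenAL-style lookup might try the generic name first, then a
# Windows-specific one:
#   al = load_first_available('openal', 'OpenAL32')
```

With that in place, per-platform FFmpeg DLL names could just be extra arguments instead of a separate "ffmpeg_libs_win" list.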

Thanks for that, Benjamin. I've gone down that road. I had to fix something 
in lib.py on Windows to add the local lib folder to the PATH; see commit 
fcd693 
<https://bitbucket.org/dangillet/pyglet/commits/fcd6936d690a0c483ec151c8e859852d06f9db1d>. 
I've hard-coded the FFmpeg version for Windows. My idea is that when a new 
version comes out, we need to make sure the ctypes wrapping is still valid 
and that there are no API changes that would require a rewrite of 
pyglet.media.
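For reference, the core of that Windows fix amounts to prepending the local lib directory to PATH before the DLLs are loaded. A simplified sketch, not the actual lib.py code (the function name is mine):

```python
import os

def add_local_lib_dir(project_dir):
    # On Windows, dependent DLLs are resolved through the PATH search,
    # so bundled FFmpeg DLLs in <project>/lib are only found if that
    # folder is on PATH before the libraries are loaded.
    lib_dir = os.path.join(project_dir, 'lib')
    os.environ['PATH'] = lib_dir + os.pathsep + os.environ.get('PATH', '')
    return lib_dir
```

Combined with pyglet.options['search_local_libs'], this lets an application ship the FFmpeg binaries in a "lib" subdirectory next to the project.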

We still have a couple of bugs, as you are aware. But going forward I've 
also tried to fix the SourceGroup class. It's basically used for queuing 
sources on the Player. But I must say the original intent is not entirely 
clear to me. It seems the idea was to make a SourceGroup behave like a 
normal Source, except that it provides data from the first source in its 
queue. It could be configured to switch automatically to the next source in 
the queue when the preceding one reached the end of its stream, or to loop 
over a single source. There was a restriction that you could only queue 
Sources with identical audio and video formats. I find that rather 
surprising from the user's point of view: they might not know that two 
media files don't share the same audio format, and all of a sudden those 
files can't be part of the same SourceGroup.

So I retained the class just for the sake of not re-instantiating the audio 
player and re-creating the texture when Sources share the same audio and 
video format. But the user doesn't see or care whether sources are in the 
same SourceGroup or different ones; for them, the Player simply has 
different Sources queued on it. The default Player behaviour is to play the 
next media when the preceding one has finished. The Player has a `loop` 
property, which is False by default. If set to True, the current Source 
will loop until we call the Player's `next_source()` method. This could 
obviously be mapped to a GUI button or a key; the media player example 
could demonstrate that. I just wanted to know if you guys have different 
opinions on the SourceGroup class and its usage. If my approach is 
sufficient, I might even reconsider the limitation about audio and video 
formats. The SourceGroup would be the Player's queue, smart enough to tell 
the Player, when changing Source, whether it should also instantiate a new 
AudioPlayer or create a new Texture.
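To make the intended behaviour concrete, here is a minimal, pyglet-free sketch of that queue logic. The method names mirror the ones above, but this is an illustration of the semantics, not the actual Player code:

```python
class SketchPlayer:
    """Toy model of the queue semantics: play queued sources in order,
    and if `loop` is True, repeat the current source until
    next_source() is called explicitly."""

    def __init__(self):
        self._sources = []
        self.loop = False  # False by default, as described

    def queue(self, source):
        self._sources.append(source)

    @property
    def source(self):
        # The currently playing source, or None when the queue is empty.
        return self._sources[0] if self._sources else None

    def next_source(self):
        # Skip to the next queued source (e.g. from a GUI button or key).
        if self._sources:
            self._sources.pop(0)

    def on_eos(self):
        # End of stream: either loop the current source or advance.
        if not self.loop:
            self.next_source()
```

Whether the sources land in one SourceGroup or several would then be an internal optimization the user never sees.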

On a completely different note, I have written a small media manual, 
intended more for developers than for users. I will also try to write a 
short document briefly explaining the changes I made to the original code. 
But I wanted to share a bit with you already, because I want to know 
whether any of it is a no-go for you.

In the AudioPlayers (DirectSound and OpenAL) there was some threading 
madness going on. There was also threading madness in the Player itself, 
with a worker decoding video frames asynchronously. I have removed all 
that. The code is now single-threaded! And guess what? It goes faster. :) 
More importantly, it is easier to reason about and to maintain. The 
threading wasn't buying anything because of the GIL; maybe when this code 
was written, people didn't know or care about the GIL. Either way, all 
those locks and threads are a thing of the past.

The other change is to the Source.get_audio_data API. Before, it was 
get_audio_data(self, nbytes); now it is 
get_audio_data(self, bytes, compensation_time=0.0). The compensation time 
is about synchronizing the audio with the master clock. For a given audio 
sample rate, a given number of bytes corresponds to a given duration. Say 
you're asking for 1 second of audio data, but the Player has noticed that 
the audio is too far ahead of the master clock, so that 1 second of audio 
data should play in, say, 0.9 seconds. The compensation time will then be 
-0.1, and the Source will shrink the audio samples to make the 1 second of 
data fit in 0.9 seconds of real time.
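The arithmetic is simple. Sketched with hypothetical names (a real Source would get the byte rate from its audio format):

```python
def playback_duration(num_bytes, bytes_per_second, compensation_time=0.0):
    # Nominal duration of the requested bytes at the source's byte rate,
    # adjusted by the Player's compensation. A negative compensation_time
    # means the audio is ahead of the master clock, so the samples must
    # be squeezed into less real time.
    nominal = num_bytes / float(bytes_per_second)
    return nominal + compensation_time

# The example from the text: 1 second's worth of bytes with the audio
# 0.1 s ahead. The source resamples so those samples play in 0.9 s of
# real time, i.e. at 1.0 / 0.9 of the normal rate.
```

So for 16-bit stereo at 44100 Hz (176400 bytes/s), a 176400-byte request with compensation_time=-0.1 should come back shrunk to play in 0.9 seconds.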

I've added this optional kwarg to the other Sources, like the procedural 
sources, but there it doesn't do anything with the parameter; the reason 
being that no audio synchronization is necessary for those Sources. But I 
might be wrong.

That's it for the moment. I will come back later with further updates as 
we consolidate things.

Dan
 
