So I was a little too hasty in my previous 2 AM post of several days ago…I assumed the RIFF header problem was the only problem. Alas, once the encoder accepts the input data, it still doesn’t read the decompressed audio and video streams in parallel. In other words, it doesn’t just grab a few KB of video data, process it, and then grab a few KB of the corresponding audio data. Instead, it reads a ton of video data or a ton of audio data before switching streams. This is why many other sites have pointed out that you can’t feed the decompressed audio and video from mplayer directly to the encoder via two named pipes: the current version of Linux caps these FIFO buffers at 64 KB, and the limit was even smaller in older kernels. So, it seemed time to build in some additional buffering.

I looked through encoder_example.c first — in theory, you could implement it there, by having it read and temporarily store data from one named pipe whenever the other was empty, until data started appearing on it again. Trouble is, my experience with C is limited to one course, and that was in C++, so I’d have to do some self-teaching to implement the kind of data structure and memory allocation that’s necessary.

I spent a while attempting that, then decided I was in a bit over my head. I ran back home to PHP and created an external buffering script with it, mostly to make absolutely sure the approach would work. It reads data from a FIFO that mplayer is writing to, buffers up to 10 MB of it, and writes it out to another FIFO that the encoder is emptying. I didn’t really expect it to operate with any semblance of efficiency, which it definitely does not, but it did succeed in proving to me that implementing a buffer directly within the encoder would really make it all work and thus be worth the effort.

And that’s where I’m at now…I didn’t expect this project to involve any coding outside of PHP…or contributions to other projects…but hey, this is where my SoC journey is heading. Plus, it will be a fun challenge and a good achievement for me to improve the encoder.

I think I should also quickly comment on why I am so studiously ignoring ffmpeg2theora. Basically, because it was built to do something else. Since I’m clearly going to have to make modifications to encoder_example too, that reason may not be so valid, but sticking with mplayer provides a few extra benefits: obviously, people will be able to upload content in a few additional formats thanks to mplayer’s codec packs, and it also provides a tidy way to retain a copy of the decompressed output that the encoder can reuse for, say, producing Ogg Vorbis files at both low and high bitrates, without having to decompress twice.

I’ll still need to work out how to get a single instance of mplayer to decode faster than normal playback speed…I’m not sure how robust the -speed 100 option is, and it also presents more RIFF header problems.