FFmpeg is one of the open-source projects that has fascinated me since its early days. I admire the project for what it has achieved; the quality of this piece of code is just marvelous. I have been in the video-editing business for quite some time, so I may have a good grasp of what a wonderful tool those guys have put together. In particular, the approach of putting all the pieces into one binary has always attracted me - after having wasted countless hours in DLL- and codec-hell.
However, so far I had not had a chance to put ffmpeg to work in a serious (i.e. paid-for) project. Now this chance seems to have arrived, and I am very excited about it. In short, the goal is to use ffmpeg to create movies from a series of images, with no audio at this time. So far the implementation uses DirectShow (yes, I am a die-hard DirectShow fan) - however, DirectShow right out of the box does not offer a lot of video formats. Sure, there are tons of codecs for every format one can think of - but going this route would inevitably mean entering codec-hell once more. Then there is MediaFoundation - imho not really better suited for the task at hand, because it is at least as complex as DirectShow and (again, right out of the box) does not deliver that many new codecs or containers. So, I decided to give ffmpeg a shot.
The first decision is about how to interface with ffmpeg. Three approaches seem feasible:
- First create an AVI with DirectShow, then let ffmpeg convert it.
- Use the ffmpeg API (i.e. link against libavcodec/libavformat and use their API).
- Run ffmpeg as a stand-alone process: transfer the source images into it, let it encode, and have it write the output to a file.
The first approach is for sure the lamest - but also the easiest. In fact, performance is not that much of a concern, so it is not immediately ruled out. The second is probably the most efficient and most solid approach, but it has quite a few drawbacks: besides (potential) legal issues, there is the problem of integrating the libraries into a build environment, worries about the stability of the API (and about problems with ffmpeg updates), and of course the complexity of the API itself and the inevitable learning curve (and some more worries). So, I decided to take the third approach.
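To give an idea of what the third approach looks like in practice: ffmpeg can read a stream of images from a pipe via its image2pipe demuxer, so the driving process only has to spawn ffmpeg and push the encoded frames into it. Here is a minimal Python sketch of that shape - the helper names are mine, and the output options (libx264, yuv420p) are just assumptions for a typical H.264 file, not the project's actual settings:

```python
import subprocess

def build_ffmpeg_cmd(fps, out_path):
    """Assemble an ffmpeg command line that reads PNG frames from a pipe.

    Hypothetical helper: codec and pixel format are placeholder choices
    for a typical H.264 output.
    """
    return [
        "ffmpeg", "-y",
        "-f", "image2pipe",      # input is a stream of images on a pipe
        "-framerate", str(fps),
        "-i", "-",               # "-" means: read the input from stdin
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        out_path,
    ]

def encode_images(png_blobs, fps=25, out_path="out.mp4"):
    """Spawn ffmpeg and push each encoded PNG image through its stdin.

    Requires an ffmpeg binary on the PATH; this only illustrates the
    shape of the stand-alone-process approach.
    """
    proc = subprocess.Popen(build_ffmpeg_cmd(fps, out_path),
                            stdin=subprocess.PIPE)
    for blob in png_blobs:
        proc.stdin.write(blob)
    proc.stdin.close()
    return proc.wait()
```

The nice property of this design is that the encoding machinery stays completely outside the host process: no linking, no API compatibility worries, just a child process and a byte stream.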
I was hoping that someone else had already tried this out, and that I would find a nice library or some code snippets for the task. To my astonishment, a web search did not bring up many hits - at least not what I was hoping for. And the documentation on the ffmpeg site itself was not too enlightening either.
The basic idea is to use a named pipe in order to transport the images over to ffmpeg. I got this part (basically) working after some meandering. More on this in one of the next posts...
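To sketch just the transport part of that idea: a named pipe is created, one side writes the frame bytes into it, and ffmpeg (started with the pipe's path as its input) reads from the other side. The snippet below is a POSIX stand-in, since on Windows the pipe would instead be created with CreateNamedPipe; the ffmpeg reader is replaced by a plain read, because the point here is only the plumbing, and both function names are hypothetical:

```python
import os
import tempfile
import threading

def feed_frames(pipe_path, frames):
    # Writer side: open the pipe and stream each frame's bytes in order.
    # open() blocks until a reader (in the real setup: ffmpeg) connects.
    with open(pipe_path, "wb") as pipe:
        for frame in frames:
            pipe.write(frame)

def transport_via_fifo(frames):
    # Create a POSIX named pipe (on Windows: CreateNamedPipe instead).
    pipe_dir = tempfile.mkdtemp()
    pipe_path = os.path.join(pipe_dir, "frames.fifo")
    os.mkfifo(pipe_path)

    # Feed the frames from a second thread while this thread reads them
    # back - in the real setup the reader would be the ffmpeg process.
    writer = threading.Thread(target=feed_frames, args=(pipe_path, frames))
    writer.start()
    with open(pipe_path, "rb") as pipe:
        received = pipe.read()
    writer.join()
    os.unlink(pipe_path)
    return received
```

Note the blocking semantics: opening a named pipe for writing stalls until the reader end is opened too, which is why the writer runs on its own thread here - the same ordering issue shows up when the reader is a freshly spawned ffmpeg process.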