

This is the documentation for fluent-ffmpeg 2.x. This library abstracts the complex command-line usage of ffmpeg into a fluent, easy-to-use node.js module. In order to be able to use this module, make sure you have ffmpeg installed on your system (including all necessary encoding libraries like libmp3lame or libx264).
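As a quick illustration of the fluent interface, here is a minimal transcoding sketch. The input and output paths are placeholders, and the codec choices assume libx264 and libmp3lame are available in your ffmpeg build.

```js
// Minimal sketch: transcode an AVI file to MP4 using the fluent API.
// Paths are placeholders; adjust the codecs to whatever your ffmpeg build supports.
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('/path/to/input.avi')
  .videoCodec('libx264')
  .audioCodec('libmp3lame')
  .on('start', (commandLine) => {
    console.log('Spawned ffmpeg with: ' + commandLine);
  })
  .on('error', (err) => {
    console.error('Transcoding failed: ' + err.message);
  })
  .on('end', () => {
    console.log('Transcoding finished');
  })
  .save('/path/to/output.mp4');
```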


Basically, when outputting to a stream using .pipe(), you get the same data that would be written to a single output file, except it never ends up being written to the filesystem. The way it is split into chunks depends mostly on the OS data buffering configuration (and possibly also on how ffmpeg writes its output and on the node.js streams implementation), so chunk boundaries will most likely not be related to the actual data contents (i.e. video frames or "valid" chunks of video).

Additionally, I cannot find a lot of information about the image2pipe format, but what I found is that it is only an input format, which expects to find data from multiple image files concatenated in the input stream. Maybe it also works as an output format (again, there's not much info about it and I couldn't find any example of its usage as an output format), but in that case, what's most likely is that it outputs all images concatenated in the output stream. The way that stream is split into chunks by the OS cannot be controlled, so you'll end up having to split it into images yourself (which implies being able to decode the image data to find out where each image starts and ends in the stream).

Anyway, that's not a very good use case for node streams and pipes. Streams are useful when you want to pass data around without having it written to disk and without having to handle a huge buffer, but only when you don't care about where the data is split. If you want to create and manipulate individual frames, you're better off outputting them to actual files (see #360). You can put them in /tmp, as most distributions mount a ramdisk there anyway, but in the end that depends on what you intend to do with those frames.
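To make the file-based approach concrete, here is a minimal sketch that writes individual frames to files in /tmp instead of piping image data through a stream. The input path is a placeholder, the `fps=1` filter is an arbitrary choice (one frame per second), and the `%04d` pattern is just one way to name the output files.

```js
// Minimal sketch: write individual frames to files instead of piping image data.
// The input path and frame rate are placeholders; adjust them to your use case.
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('/path/to/input.mp4')
  // Keep one frame per second; ffmpeg expands the %04d pattern into
  // /tmp/frame-0001.png, /tmp/frame-0002.png, and so on.
  .videoFilters('fps=1')
  .on('error', (err) => {
    console.error('Frame extraction failed: ' + err.message);
  })
  .on('end', () => {
    console.log('Frames written to /tmp');
  })
  .save('/tmp/frame-%04d.png');
```

Each frame ends up as a standalone, decodable image file, so there is no need to locate image boundaries inside a byte stream yourself.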
