Argument | Description |
---|---|
Information | |
-codecs | Display codecs |
-formats | Display formats |
General | |
-f fmt | Force format "fmt" |
-i filename | Set input file name |
-y | Overwrite output file |
-n | Never overwrite output files |
-t secs | Force duration to a specific length (secs or hh:mm:ss[.xxx]) |
-fs limit | Set output file size limit (in bytes) |
-ss secs | Seek to given time position (secs or hh:mm:ss[.xxx]) |
Video | |
-r fps | Set frame rate (default 25) |
-s WxH | Set frame size (default same as source) |
-vf scale=W:H | Rescale video (use -1 to keep the aspect ratio, e.g. "640:-1" means width 640 with the height scaled to match; "640:-2" is the same but rounds to an even pixel count) |
-vf transpose=n | Rotate video (0 = 90° counterclockwise and vertical flip; 1 = 90° clockwise; 2 = 90° counterclockwise; 3 = 90° clockwise and vertical flip) |
-aspect aspect | Set aspect ratio (4:3, 16:9 etc) |
-vn | Disable video |
-pass n | Multipass rendering (1 or 2) |
-c:v | Force video codec (e.g. "libx265", "h264", "copy") |
-crf nn | Constant Rate Factor (0=lossless, 23=default, 17-28=acceptable, every +6 means half the file size roughly) |
Audio | |
-ar freq | Audio frequency (default 44100 Hz) |
-ab bitrate | Audio bitrate in bps (default 64k) |
-ac channels | Audio channels (ac=2 to downmix 5.1 to stereo) |
-an | Disable audio |
-c:a | Force audio codec ("aac", "mp3", "copy") |
-q:a n | VBR quality: mp3=0-9, 0=~240kbit/s, 2=~190kbit/s (standard), 4=160kbit/s (medium) |
Subtitles | |
-scodec | Force subtitle codec ("copy" to copy stream) |
-sn | Disable subtitles |
Misc | |
-map_metadata -1 | Strip ID3 (or any kind of) metadata from input files (-1 is a dummy stream specifier) |
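For example, the seek, duration and rotate flags from the table combine like this (filenames here are just placeholders) to grab a 30-second clip starting one minute in and rotate it 90° clockwise:
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -vf transpose=1 -c:a copy clip.mp4
The audio is stream-copied; the video has to be re-encoded because of the transpose filter.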
Good default converter command:
ffmpeg -y -i infile.mp4 -c:v h264 -c:a aac -ac 2 -sn -crf 23 -r 24 -vf scale=-2:480 \
-pix_fmt yuv420p out/outfile.mp4
Parameter | Explanation |
---|---|
-y | Always overwrite (hence the subfolder in the output path) |
-c:v h264 | Forces the h264 encoder (h265 works too, but it's slooow) |
-c:a aac | Forces the AAC encoder |
-ac 2 | Downmix to two channels if the source is 5.1 |
-sn | Ditch the subtitles, they suck anyway (get .srt subtitles instead) |
-crf 23 | Not bad video compression |
-r 24 | Force 24 fps (source frame rates can sometimes be weird, like >= 60) |
-vf scale=-2:480 | Set the height to 480 pixels, scale the width to an even pixel count |
-pix_fmt yuv420p | Force 8-bit YUV 4:2:0 pixel format (downsample from 10-bit) |
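As a rough sketch of the audio flags from the first table (filenames are arbitrary), ripping just the audio track to VBR MP3 looks like:
ffmpeg -i infile.mp4 -vn -sn -c:a libmp3lame -q:a 2 outfile.mp3
-vn and -sn drop video and subtitles, and -q:a 2 gives roughly 190 kbit/s VBR as noted above.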
Dump information to JSON format:
ffprobe "file.mkv" -v quiet -print_format json \
-show_streams \
-show_format \
-show_programs \
-show_chapters \
-show_private_data \
-show_error
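If only a few fields are needed, -show_entries narrows the output instead of dumping everything (the stream fields picked here are just an example):
ffprobe "file.mkv" -v quiet -print_format json -show_entries stream=index,codec_type,codec_name,width,height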
Select specific tracks:
ffmpeg -i movie1.avi -map 0:0 -map 0:2 movie1.m4v
Select the 0:0 video stream and the 0:2 audio stream (assuming 0:0 is the video and 0:1 and 0:2 are audio streams).
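To check which index is which before mapping (a quick sketch; the compact csv output format is just one choice), list the streams first:
ffprobe -v error -show_entries stream=index,codec_type,codec_name -of csv=p=0 movie1.avi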
Concatenate two videos into one:
ffmpeg -i movie1.avi -i movie2.avi \
-filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" \
-map "[outv]" -map "[outa]" result.m4v
- `[0:v:0][0:a:0][1:v:0][1:a:0]` means "input 0 video stream 0; input 0 audio stream 0; input 1 video stream 0; input 1 audio stream 0"
- `concat=n=2:v=1:a=1` means the concat filter with two input segments, one video stream and one audio stream per segment
- `[outv][outa]` means map to two result streams, outv and outa
- `-map` is used to map those filter output streams to the output file instead of the input streams
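The same pattern scales to more files by adding input pairs and bumping n; a sketch with three hypothetical inputs:
ffmpeg -i movie1.avi -i movie2.avi -i movie3.avi \
-filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[outv][outa]" \
-map "[outv]" -map "[outa]" result.m4v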
Cut part of video out
Generate a video file containing two minutes of video, one 60-second clip from the start and another 60-second clip from 5 minutes into the video.
ffmpeg -i input.mp4 -filter_complex "\
[0:v] trim=0:60, setpts=PTS-STARTPTS [v1]; \
[0:a] atrim=0:60, asetpts=PTS-STARTPTS [a1]; \
[0:v] trim=300:360, setpts=PTS-STARTPTS [v2]; \
[0:a] atrim=300:360, asetpts=PTS-STARTPTS [a2]; \
[v1][a1][v2][a2] concat=n=2:a=1:v=1 [outv][outa]" \
-map "[outv]" -map "[outa]" result.m4v
- The input stream is trimmed, both audio and video, into streams `v1` and `a1` (and likewise `v2` and `a2` for the second clip).
- The setpts/asetpts filters are very important since they reset the timestamps; otherwise there would be tons of dead space in between the clips.
- The two video and audio streams are concatenated into `outv` and `outa`...
- ...which are then mapped to the output streams.
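For a single cut where re-encoding isn't needed, the -ss and -t flags from the general table do the same job with stream copy (a sketch; with -c copy the cut points snap to keyframes, so timing is only approximate):
ffmpeg -ss 300 -i input.mp4 -t 60 -c copy clip.mp4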