FFmpeg: Outputting multiple videos with multiple video/image inputs
Thank you so much for checking out my post. I'm still new to FFmpeg but getting addicted to experimenting with it, and I've run into the following problem:
ffmpeg -f gdigrab -s 1360x768 -framerate 30 -i desktop
-f dshow -i video="video-input-device":audio="audio-input-device"
-i image.png
-filter_complex "[0:v]format=yuv420p,yadif[v];[1:v]scale=256:-1,overlay=10:10[secondvideo];[v][2]overlay=main_w-overlay_w-10/2:main_h-overlay_h-10/2[image];[image][secondvideo]concat=n=2[outer];[outer]split=2[out0][out1]"
-map 1:a -c:a aac -b:a 128k -map "[out0]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output0.mp4
-map 1:a -c:a aac -b:a 128k -map "[out1]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output1.mp4
Expected output: two video files, each containing the audio, the screen recording, and the other video streams placed at different positions in the frame: in my case, the webcam at the top left of the video and an image at the bottom right.
Actual output: the following error:
Stream mapping:
Stream #0:0 (bmp) -> format (graph 0)
Stream #1:0 (rawvideo) -> scale (graph 0)
Stream #2:0 (png) -> overlay:overlay (graph 0)
Stream #2:0 (png) -> overlay:overlay (graph 0)
Stream #1:1 -> #0:0 (pcm_s16le (native) -> aac (native))
split:output0 (graph 0) -> Stream #0:1 (libx264)
Stream #1:1 -> #1:0 (pcm_s16le (native) -> aac (native))
split:output1 (graph 0) -> Stream #1:1 (libx264)
Press [q] to stop, [?] for help
[dshow @ 0000003601a30ec0] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[Parsed_concat_5 @ 000000360bdef840] Input link in1:v0 parameters (size 256x192, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1360x768, SAR 0:1)
[Parsed_concat_5 @ 000000360bdef840] Failed to configure output pad on Parsed_concat_5
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #2:0
[aac @ 0000003601a9ef00] Qavg: 198.729
[aac @ 0000003601a9ef00] 2 frames left in the queue on closing
[aac @ 000000360a253800] Qavg: 198.729
[aac @ 000000360a253800] 2 frames left in the queue on closing
Conversion failed!
I know it's a filter_complex issue, but I don't know where exactly. Any help would be greatly appreciated!
Tags: ffmpeg, webcam
asked Dec 11 '18 at 9:52 by Sano
1 Answer
Use
ffmpeg -f gdigrab -s 1360x768 -framerate 30 -i desktop
-f dshow -i video="video-input-device":audio="audio-input-device"
-i image.png
-filter_complex "[1:v]scale=256:-1[secondvideo];[0:v][secondvideo]overlay=10:10[v1];[v1][2]overlay=main_w-overlay_w-10/2:main_h-overlay_h-10/2,split=2[out0][out1]"
-map 1:a -c:a aac -b:a 128k -map "[out0]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output0.mp4
-map 1:a -c:a aac -b:a 128k -map "[out1]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output1.mp4
There's no need to deinterlace an input from the GDI buffer, and the format filter is not necessary either. The overlays should be applied in succession with labelled input pads.
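To sanity-check this chained-overlay-and-split filtergraph without the Windows capture devices, here is a minimal sketch that substitutes synthetic lavfi test sources for the screen, webcam, and image inputs; the source names, sizes, 5-second limit, and test0/test1 file names are placeholders, and audio is omitted because the test sources are video-only:

ffmpeg -f lavfi -i testsrc=size=1360x768:rate=30 ^
 -f lavfi -i smptebars=size=640x480:rate=30 ^
 -f lavfi -i color=c=red:size=320x240:rate=30 ^
 -filter_complex "[1:v]scale=256:-1[secondvideo];[0:v][secondvideo]overlay=10:10[v1];[v1][2:v]overlay=main_w-overlay_w-10/2:main_h-overlay_h-10/2,split=2[out0][out1]" ^
 -map "[out0]" -c:v libx264 -preset ultrafast -pix_fmt yuv420p -t 5 -f mp4 test0.mp4 ^
 -map "[out1]" -c:v libx264 -preset ultrafast -pix_fmt yuv420p -t 5 -f mp4 test1.mp4

Both files produced by split=2 should show the scaled 256-pixel-wide second source at the top left and the third input near the bottom-right corner (main_w-overlay_w-10/2 evaluates to main_w-overlay_w-5, i.e. a 5-pixel margin).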
answered Dec 11 '18 at 10:20 by Gyan
Thank you again so much for your help, it's crazy how fun working on FFmpeg can get! – Sano, Dec 11 '18 at 11:07
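Separately, the question's log also shows the dshow warning "Thread message queue blocking; consider raising the thread_queue_size option". If that warning persists once the filtergraph is fixed, a larger per-input queue can be requested by putting -thread_queue_size in front of each real-time input; 512 below is just an example value, and the rest of the command matches the answer above:

ffmpeg -f gdigrab -thread_queue_size 512 -s 1360x768 -framerate 30 -i desktop ^
 -f dshow -thread_queue_size 512 -i video="video-input-device":audio="audio-input-device" ^
 -i image.png ^
 -filter_complex "[1:v]scale=256:-1[secondvideo];[0:v][secondvideo]overlay=10:10[v1];[v1][2]overlay=main_w-overlay_w-10/2:main_h-overlay_h-10/2,split=2[out0][out1]" ^
 -map 1:a -c:a aac -b:a 128k -map "[out0]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output0.mp4 ^
 -map 1:a -c:a aac -b:a 128k -map "[out1]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output1.mp4

The actual device names can be discovered beforehand with: ffmpeg -list_devices true -f dshow -i dummy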