With a recent upgrade (apt-get dist-upgrade to 12.04 on my custom-built Ubuntu office machine), I started having serious audio/video sync issues when producing screencasts using recordmydesktop. As you may (or may not) recall, I record lectures on my computer and use, as a sort of visual aid, an on-screen whiteboard, where I type key words or phrases about which I'm speaking: that's what I capture from my screen when recording these screencasts. Well, with the updated recordmydesktop, the text on my on-screen whiteboard would begin to appear several seconds before the audio in which I mentioned the word or phrase.
Oddly, the opposite was happening with the updated ffmpeg when running the screencasting incantation with which I'd earlier experimented. The on-screen text I was typing into the whiteboard was lagging a bit behind the audio.
I tried introducing a number of alternative switches into the commands I was issuing when using both recordmydesktop and ffmpeg. But to no avail: I couldn't get rid of the sync problems with either.
During the course of my searches aimed at resolving these issues, I ran across a crude script on the Ubuntu forums that someone had cobbled together, a script which uses ffmpeg, but which records video and audio separately, joining the two streams together, as a final step, into a single output file. This joining of an audio and a video file is called, in electronic multimedia circles, "muxing," by the way. I decided the script was worth a try.
And, what do you know, after figuring out how to use the script, my tests indicated that it kept audio and video in nearly perfect sync. Thus my newly appeared screencasting issues were resolved.
I hoped to solicit improvements to the script but have so far not managed to find much help. Probably the weirdest thing about this script, which likely demonstrates the inexperience of its creator, is that, once ffmpeg is invoked and the recording begins, you're supposed to enter, into the same terminal where ffmpeg is running, a file name, then hit the "enter" key as the signal to stop the recording and begin the joining of the audio and video files.
All this while you're seeing in the terminal the standard ffmpeg prompt that tells you to hit control-c to stop the recording. Confusing, to say the least--and made even more confounding by the fact that you can't actually see the text you're entering when you go to type the file name.
Despite those shortcomings, since the results produced by this script exceed anything else I've been able to accomplish, I think I'm going to stick with the script for now. I have made a couple of tweaks, mainly so as to make it record what are called "lossless" files--files that are produced with minimal processing (for example no compression), and which are therefore quite large. I have to re-encode my output files before uploading them in any case, so it's best to start with better-quality files.
Without further ado, then, I present the tweaked version of the script I've found:
I should mention that I discovered a slightly less incoherent way of soliciting file-name input. Though it is commented out in the version of the script you see above, I intend to use it until the script can be further improved (to use it, uncomment the line that begins out= and comment out the line that reads read -p; note that you must have zenity installed for this modification to work).
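Since the script itself isn't reproduced in this excerpt, here is a minimal sketch of the record-separately-then-mux approach it takes, as described above. The resolution, frame rate, ALSA device string, and temp-file names are all my own placeholder assumptions, not the original script's:

```shell
#!/bin/bash
# Sketch only -- not the original script. Capture video (x11grab) and
# audio (ALSA) as separate lossless streams, then mux them on demand.
# (</dev/null keeps each backgrounded ffmpeg from grabbing the terminal's stdin.)
ffmpeg -f x11grab -r 30 -s 1024x768 -i :0.0 -vcodec ffv1 /tmp/video.mkv </dev/null &
vidpid=$!
ffmpeg -f alsa -i plughw:1,0 -acodec pcm_s16le /tmp/audio.wav </dev/null &
audpid=$!

# Type an output name and press "enter" to stop recording and start muxing.
read -p "Output file name: " out
# out=$(zenity --entry --text="Output file name")   # zenity alternative

kill $vidpid $audpid   # ffmpeg traps the signal and finalizes its files
sleep 1

# Mux ("join") the two streams into the final file without re-encoding.
ffmpeg -i /tmp/video.mkv -i /tmp/audio.wav -vcodec copy -acodec copy "$out".mkv
```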
In a related item, as I've mentioned earlier, the disadvantage to using ffmpeg for screencasting is that there is no built-in provision for pausing. Well, apparently someone has proposed a kind of workaround for that--see this thread for further details. I've not tried that method and don't really understand how it works, so I cannot attest to its efficacy.
What I have tried is simply stopping, then restarting a new file when a pause is necessary. That's definitely more cumbersome than pausing, and, furthermore, it requires the additional step of somehow joining what could be thought of as separate "vignettes" into a single "episode."
The good news I can report on that front is that I've found another script that was created precisely to join such separate files. I've tried some tests with it and it has worked for me quite well. It's called mmcat and it can be found here.
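mmcat itself isn't reproduced here; for clips that share identical codecs and parameters, newer ffmpeg builds can do a similar lossless join with the concat demuxer. The part*.mkv names below are hypothetical:

```shell
# Build a list of the separate "vignettes" in playback order.
printf "file '%s'\n" part1.mkv part2.mkv part3.mkv > list.txt

# Concatenate without re-encoding; guarded so the sketch is a no-op
# when the clips aren't actually present.
if [ -f part1.mkv ]; then
    ffmpeg -f concat -safe 0 -i list.txt -vcodec copy -acodec copy episode.mkv
fi
```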
I'd like to post more about the plughw switch seen in the above script and which I needed to introduce in order to record through a new USB sound device I've added to my computer. But I don't really understand well what differentiates it from the more standard hw switch. So I won't speak to that matter further in this entry. :)
That about sums things up so far as recent screencasting developments on my front are concerned. Do you have any suggestions for improving the screencasting script I found? If so, please chime in. Any other suggestions for pausing ffmpeg screencast recordings? Please let me/us know.
Good job.
I was using the same method to do perfectly synced screencasting.
I use these lines in 2 different terminals:
ffmpeg -f x11grab -r 20 -s 1920x1080 -i :0+0,0 -vcodec libx264 -threads 4 -preset ultrafast /home/vanacksabbadium/Scrivania/video.mkv
and
ffmpeg -f alsa -i pulse -acodec libmp3lame -ac 2 /home/vanacksabbadium/Scrivania/sound.mp3
Then I merge audio and video with Openshot. The final result is absolutely great, smooth and synced.
In fact, that script just makes things easier...
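For what it's worth, the final merge can also be done with ffmpeg itself rather than Openshot, copying both streams without re-encoding. The file paths here are shortened, hypothetical stand-ins for the ones in the commands above:

```shell
# Mux the separately recorded video and audio into one container, no re-encoding.
ffmpeg -i video.mkv -i sound.mp3 -vcodec copy -acodec copy merged.mkv
```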
You can make it so the output from the first two ffmpeg commands doesn't appear on the screen. Before the & at the end, which puts each process into the background, you would add redirection operators, something like this:
ffmpeg [options] >/dev/null 2>&1 &
This sends all text from standard output and standard error to /dev/null. The prompt from the read command should then be the only thing to appear.
You can also probably improve the zenity alternative by using the --file-selection and --confirm-overwrite options. This will present a dialog that most people would probably find more comfortable for entering file names rather than just the plain --entry dialog.
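A sketch of that suggested invocation (untested here; the dialog title is my own placeholder):

```shell
# Hypothetical replacement for the plain --entry dialog: a file chooser
# that asks before overwriting an existing file.
out=$(zenity --file-selection --confirm-overwrite --title="Choose output file")
```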
I tried a few iterations of your ffmpeg [options] >/dev/null 2>&1 & but the terminal was still showing output from ffmpeg. Not sure whether I was doing something wrong.
The zenity --file-selection and --confirm-overwrite options do not present a blank in which to fill in a file name, but only a window for selecting an already-present file. Additionally, the file-selection window is fairly large and can obtrude onto the section of screen being captured in the video.
So, I've been unable, following these suggestions, to improve on the script.
Instead of pausing in the middle of a recording, just continue recording during any breaks, then edit the dead space out with Avidemux (which is also great at combining separate audio and video tracks together, as its name would suggest).
For simple editing, you may find Avidemux easier to use than OpenShot. In fact, Avidemux is actually a front-end to ffmpeg.
Also, try Kazam; it's an excellent screencast recorder, much more stable than recordmydesktop or xvidcap, and much easier to use than console-based ffmpeg.
I've used avidemux in the past but have not yet tried it for my screencasting projects: it's a lot easier to pause the recording using recordmydesktop, since no final editing step is needed. But now that recordmydesktop has gone haywire (audio/video sync issues I've been unable to resolve), I'm hunting for an alternative and may have to bite the bullet in terms of more manual manipulation of the resulting video. As I mentioned, I've so far been unable to get Kazam running on the computer on which I record lectures. I could try it on another, Arch computer, although Kazam is not yet in the main repositories, so I'd have to install it from AUR.
On slower computers, I've found a useful kludge: record the screen with ffmpeg or (if it works) gtkrecordmydesktop, and record the sound with an inexpensive voice recorder. I press "record" on the voice recorder at the same time that I start ffmpeg. Synchronization hasn't been a problem, probably because the computer only has to record the video, so it's less likely to drop video frames. I use Avidemux to combine the audio and video tracks, but ffmpeg will work as well.
Not a very elegant solution, but it eliminates all the fuss in trying to synchronize audio and video tracks, and it's very reliable. Also, even a cheap voice recorder produces better audio quality than any built-in computer microphone.
As an aside, I recently persuaded the medical school where I work to install Epiphan lecture recorders in each of our larger auditoriums. This device records sound and video from any device that connects to the projector in the lecture hall, and makes no demands whatsoever on the presenter's computer. So, as an extra benefit, these lecture halls have become recording "studios". Several of our lecturers now go into the lecture hall after-hours, press the record button and give their lecture to an empty room, then press the stop button at the end, and the recording is done. Some compression is required (I use Avidemux), but the actual recording process couldn't be easier; they can use their own computer if they like, without the need to install any recording software, or they can use one of the computers in the lecture hall. Sadly, at $2000 per device, this approach is really too expensive for individual use.