(My third attempt to post this with all errors being mine...)
I don't want to get anyone angry at me for posting "stupid questions" so
if the questions are not being posted in the right place, please tell me
where to post.
I am currently using the main Cinelerra (from HeroineVirtual)
distribution on both Gentoo 2005.1 and Fedora Core 3. I've taken a look
at it on and off since the days it was originally just the audio editor
Broadcast. The current 2.0 release of Cinelerra took some getting used
to due to my Mac and Windows exposure to things like Avid, Adobe and
Sonic Foundry (now Sony) software. I've become accustomed to the two
screen editing approach and within the past week have edited 46 projects
with varying degrees of POSITIVE success. Kudos to Alex and the rest of
the crew for making Cinelerra what it is today!
But onto my questions:
1. What is the best (least lossy) output format that I can dump out of
Cinelerra for re-encoding to my final chosen format with ffmpeg? I'm
currently using Quicktime with Two's Complement for audio and H264 for
video with a 2000000 (the default) bitrate. I just don't want to lose
much quality when working with the video I capture in the editing
phase. (I use a Hauppauge PVR-250 to capture MPEG2 at 9600000 bps.) Is
there anything that is the equivalent of uncompressed video that I can
then use with ffmpeg to encode to my final output?
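To put rough numbers on what "uncompressed" would mean for my material, here's the back-of-the-envelope math I did (assuming NTSC 720x480 at 29.97 fps with 4:2:0 chroma -- my guess at what the PVR-250 produces, so adjust for your actual capture settings):

```python
# Back-of-the-envelope storage math for one hour of video.
# Assumes NTSC 720x480 at 29.97 fps, 4:2:0 chroma (my guess at the
# PVR-250's output format -- adjust for your own capture settings).

def gb_per_hour_uncompressed(width=720, height=480, fps=29.97):
    bytes_per_frame = width * height * 1.5   # 4:2:0 = 1.5 bytes per pixel
    return bytes_per_frame * fps * 3600 / 1e9

def gb_per_hour_at_bitrate(bps):
    return bps / 8 * 3600 / 1e9

print(round(gb_per_hour_uncompressed(), 1))        # ~55.9 GB/hour raw
print(round(gb_per_hour_at_bitrate(9600000), 2))   # 4.32 GB/hour (my capture rate)
print(round(gb_per_hour_at_bitrate(2000000), 2))   # 0.9 GB/hour (H264 default)
```

So truly raw video is clearly impractical for me (especially with the NFS 2 gig limit I mention below), which is why I'm asking what the best near-lossless intermediate is instead.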
2. Is there a way to get Cinelerra to render using ffmpeg (so I can
choose from the codecs available to it vs. what's available in Cinelerra)
to save a step in my workflow? I have a lot of video that I want to
put online for members of the family to stream on both Windows and
Macintosh platforms, and I'd prefer not to force them to install anything
beyond the default media players of their chosen OS (many of them are on
dial-up, too). So far none of the output formats that I've been able
to find in Cinelerra are streamable in a way that fits these
requirements. So... I've been using ffmpeg to take care of that part
for me. This isn't a gripe about Cinelerra, just a question as to
whether the render output (uncompressed if possible) can be piped to an
external encoder.
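By "piped to an external encoder" I mean something like the following sketch. The renderer command is hypothetical -- I don't know whether Cinelerra exposes anything like it -- but the ffmpeg side, reading a YUV4MPEG stream from stdin, is a standard ffmpeg feature:

```python
# Sketch of piping a renderer's raw output straight into ffmpeg.
# "render_cmd" is a hypothetical command that writes YUV4MPEG to stdout;
# the ffmpeg invocation itself uses only standard options.
import subprocess

def build_encode_cmd(out_file):
    """ffmpeg command that encodes a YUV4MPEG stream from stdin."""
    return ["ffmpeg", "-f", "yuv4mpegpipe", "-i", "-",
            "-vcodec", "mpeg4", out_file]

def pipe_render(render_cmd, out_file="final.avi"):
    render = subprocess.Popen(render_cmd, stdout=subprocess.PIPE)
    encode = subprocess.Popen(build_encode_cmd(out_file), stdin=render.stdout)
    render.stdout.close()   # so ffmpeg sees EOF when the renderer exits
    return encode.wait()
```

That would let me skip the huge intermediate file entirely.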
3. For some reason I've been having trouble when I read in a video file
(MPEG2 format from my Hauppauge) that contains audio. The audio or the
video will wind up being minutes longer than the opposing track in the
timeline, which results in HUGE sync problems. The videos themselves
play fine all the way through in Xine or MPlayer so the file should be
OK. The workaround for now is that I use ffmpeg to break my video file
up into separate audio and video tracks and read those into Cinelerra.
Then I render out to a .mov file and pull that in for two screen editing
with the Viewer ([ ] v). Or, I just edit to clips from the compositor
and utilize clips for the next step. Then I render my final edit out
and use ffmpeg to encode to my final output format. I've seen this
problem on three of my systems (both Gentoo and Fedora Core) so I've had
to use the ffmpeg split approach on all of them. The first system is a
P4 with Gentoo and 1 gig of RAM. It has a 2.6.14.2 Linux kernel on it
and an external ALSA build since my audio interface is not supported in
the main kernel ALSA. More details can be provided for this if needed.
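For reference, the split step looks roughly like this (filenames are examples; I decode the audio to WAV and copy the MPEG-2 video stream untouched):

```python
# My demux workaround, sketched as the two ffmpeg commands I run.
# "capture.mpg" is a placeholder for the actual PVR-250 recording.
import subprocess

def split_commands(capture, audio_out="audio.wav", video_out="video.m2v"):
    """Return the ffmpeg commands that demux audio and video."""
    audio_cmd = ["ffmpeg", "-i", capture, "-vn",
                 "-acodec", "pcm_s16le", audio_out]  # decode audio to WAV
    video_cmd = ["ffmpeg", "-i", capture, "-an",
                 "-vcodec", "copy", video_out]       # copy MPEG-2 video as-is
    return audio_cmd, video_cmd

def run_split(capture):
    for cmd in split_commands(capture):
        subprocess.run(cmd, check=True)  # needs ffmpeg on $PATH
```

The resulting audio.wav and video.m2v load into Cinelerra with no drift.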
4. The Viewer is great for using the in/out points and splice to get a
nice basic edit into the timeline. But since it doesn't offer any way
of seeing the audio waveform as well, it's hard to try and play tricks
with syncing audio and video manually. For example if I wanted to find
a specific frame and line up an audio effect to coincide with frame
accuracy, how would I achieve this? I haven't found a way to do this in
the timeline either since the audio tracks appear to be locked to the
video track and drag and drop editing is not yet fully supported. I'm
sure I'm not the only person wanting to sync audio events with video in
a frame accurate way so maybe there is a method I haven't yet discovered?
5. I got very excited about the idea of having a renderfarm. So I
installed Cinelerra on four of my machines at home. I started up the
render clients and then set the main system to point at them for the
renderfarm feature. I also made sure that my NFS setup is working
flawlessly (I still have to figure out how to resolve the 2 gig limit in
NFS as a lot of my renders are 15-20 gigs in size with Quicktime). I
started the batch render and watched with amazement as the renders
completed much faster than a single host could manage. Something that
would take four to six hours seemed to complete in only an hour and a
half. I then pulled the .mov001-.mov020 files into Cinelerra and
found that only the clips on the main Cinelerra host (where the GUI was
running) appeared to have rendered properly. The files rendered by the
other hosts were comparable in size but Cinelerra didn't draw the
thumbnails or waveform displays for them. I only saw a few segments
(probably rendered by the localhost) that looked right. Any idea what
might have gone wrong? I'm going to try this again soon with only one
master and one client (the most powerful boxes) and just render shorter
files. A lot of what I'm normally working with is usually 2-4 hours of
video.
Thanks in advance for any pointers and once again thanks to the
Cinelerra folks (on all sides) for making such a great app.
Deck
_______________________________________________
Cinelerra mailing list
[email protected]
https://init.linpro.no/mailman/skolelinux.no/listinfo/cinelerra