I'm having trouble with DTS values in a single dvbsub stream while
receiving LIVE MPEGTS with muxed video, audio, subs, and other data
(like scte35) over multicast UDP. I'm doing transcoding tasks, and then
outputting that to another MPEGTS stream that feeds other services. These
are the problems:
I am not sure the direction from which you approach this is going to
increase the chances this patch has.
Me neither.
But I can barely talk about ffmpeg internals: they're huge, and I only
know the few bits I'm familiar with, having some experience with filters.
So, whatever argument I may
consider a subtitle track
consider 2 video tracks: US at 30000/1001 fps and EU at 25 fps
the 6th frame in the US track is at 0.2002 sec, the 5th in the EU
track at 0.2 sec
if these differ and you want a subtitle to either stop displaying after
the
earlier or begin displaying after the
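The frame arithmetic above can be checked exactly with rational numbers. A quick sketch using Python's fractions module (an illustration of the numbers quoted in the thread, not anything from ffmpeg):

```python
from fractions import Fraction

# Exact presentation time of frame n at a given rate is n / fps.
ntsc = Fraction(30000, 1001)   # "US" rate, ~29.97 fps
pal  = Fraction(25, 1)         # "EU" rate

t_ntsc = 6 / ntsc              # 6th frame: 6006/30000 s
t_pal  = 5 / pal               # 5th frame: 1/5 s

print(float(t_ntsc))           # 0.2002
print(float(t_pal))            # 0.2
print(float(t_ntsc - t_pal))   # 0.0002 s gap between the two tracks
```

The 0.2 ms gap is small in absolute terms, but whether a subtitle boundary snaps to the earlier or the later of the two instants is exactly the ambiguity being discussed.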
The more the focus moves toward "a single frame doesn't matter",
the more that conversation will create the impression that my patchset
lacks precision.
This is a possibility, indeed.
Yet, the less one answers, the more it also looks like the issue is dead
and/or some argument was
>
> Then you have never seen anime translations where signage in the
> videos are translated. If the subtitles are off even by one frame in
> such a case, you will see it, especially when the translated sign is
> moving with the video, and one new subtitle event is generated for
> every video
> This is a very lax attitude towards a serious problem that
> everyone in the fansubbing community deals with. A frame
> of offset is unacceptable for such use-cases, and they have
> to deal with formats which made concessions to timebases
> due to historical blunders.
Please link to some such
>> as well as ATSC subtitles
>
> There are like 2 or 3 characters in each frame. Sometimes
> they are shown as they are coming in, sometimes only
> when a line is completed, sometimes needs to wait
> for subsequent frames before emitting new characters.
> This is really not a high-precision
I also am not accepting a hardcoded timebase of microseconds.
Rounding really matters for subtitles, since presenting them
a frame early or late is unacceptable
That's simply not true.
I neither accept nor deny a hardcoded microsecond timebase;
judging that is beyond my knowledge. What I say is
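For scale, the rounding a microsecond timebase introduces on 90 kHz MPEG-TS timestamps can be measured directly. This is a hypothetical sketch (the `rescale` helper is mine, merely similar in spirit to nearest-rounding rescaling such as av_rescale_q), not ffmpeg code, and it takes no side on whether the error matters:

```python
from fractions import Fraction

TB_90K = Fraction(1, 90000)     # MPEG-TS timebase
TB_US  = Fraction(1, 1000000)   # hardcoded microsecond timebase

def rescale(ticks, src, dst):
    # Nearest-integer rescale (half rounds up); for positive timestamps
    # this matches round-half-away-from-zero behaviour.
    x = ticks * src / dst
    return int(x + Fraction(1, 2))

# pts of the n-th 30000/1001 fps frame in 90 kHz ticks is n * 3003.
for n in range(1000):
    pts = n * 3003
    us = rescale(pts, TB_90K, TB_US)        # error is below 0.5 us
    assert rescale(us, TB_US, TB_90K) == pts  # round trip lands on the same tick
```

The sketch only shows the magnitude of the rounding relative to a ~33 ms frame; whether sub-microsecond drift is acceptable for subtitle timing is exactly what is being debated above.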
>
> As mentioned already, I have an offer to make. It might not be exactly
> what you want, but it's all you can get.
>
> Everybody will need to make up his mind and decide whether the benefits
> will outweigh the drawback from one's own point of view - or not.
>
I don't feel I have a voice
>
> Incidentally, the "fps" filter is probably one of the simplest examples
> of a filter that does not involve subtitles which would benefit from a
> "heartbeat" mechanism.
>
> Currently in order to be able to output a frame, the fps filter needs
> to have 2 frames buffered (or alternately,
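That two-frame buffering can be illustrated with a deliberately naive sketch of constant-frame-rate conversion (an invented helper, not libavfilter's actual fps code): frame i can only be emitted, possibly repeatedly, once frame i+1 has arrived, because only then is it known how long frame i should cover.

```python
from fractions import Fraction

def fps_convert(times, out_fps):
    """times: strictly increasing input presentation times (seconds, exact).
    Returns, for each output frame n at time n/out_fps, the index of the
    input frame shown there (repeat/drop semantics)."""
    out = []
    n = 0
    for i in range(len(times) - 1):
        next_t = times[i + 1]
        # Only now that frame i+1's timestamp is known can frame i be emitted
        while Fraction(n, 1) / out_fps < next_t:
            out.append(i)
            n += 1
    # The last input frame stays buffered: without a successor (or an EOF
    # signal) its display duration is unknown.
    return out

# 25 fps input upconverted to 30000/1001 fps: the first frame is repeated.
src = [Fraction(i, 25) for i in range(6)]
print(fps_convert(src, Fraction(30000, 1001)))   # [0, 0, 1, 2, 3, 4]
```

A "heartbeat" telling the filter that no earlier frame will arrive would let it flush that buffered last frame sooner, which seems to be the point being made.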
the v23_plus set still fails:
./ffmpeg -ss 20 -i dvbsubtest.ts -filter_complex
"[0:v][0:s]overlay[v]" -map '[v]' -map 0:a -acodec copy -vcodec mpeg4
-t 5 -bitexact /tmp/file.avi
Input #0, mpegts, from 'dvbsubtest.ts':
Duration: 00:00:34.64, start: 79677.098467, bitrate: 4842 kb/s
> Far from it. I have pointed the core flaws with the design a few times,
> but having been received with rudeness, I did not bother insisting. This
> patch series is nowhere near anything that can be accepted.
>
This quickly went from exciting to depressing. :(
I guess some recent-times
> yesterday, it happened for the 4th and 5th times that another developer
> called my patchset a “hack”.
Hope it wasn't me. If I did, I'm sorry, I didn't want to imply bad code
or lack of skills, or anything: I was trying to understand if there's some
way to actually unblock your patchset. And
>
> I'm afraid, the only reply that I have to this is:
>
> - Take my patchset
> - Remove subtitle_pts
> - Get everything working
> (all example command lines in filters.texi)
>
> => THEN start talking
>
> The same goes out to everybody else who keeps saying it can be
> removed and that it's an
> One of the important points to understand is that - in case of subtitles,
> the AVFrame IS NOT the subtitle event. The subtitle event is actually
> a different and separate entity. (...)
Wouldn't it qualify then as a different abstraction?
I mean: instead of avframe.subtitle_property,
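The two modelings being contrasted can be sketched abstractly. All names here are invented for illustration (this is not ffmpeg's API): a subtitle payload folded into the frame itself, versus subtitle events as separate entities that frames merely reference.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubtitleEvent:
    start: int       # presentation start, in some agreed timebase
    duration: int    # display duration, same timebase
    text: str

@dataclass
class Frame:
    pts: int
    # Option A: subtitle data as a property of the frame itself
    subtitle_text: str = ""
    # Option B: the frame references independent event objects
    events: List[SubtitleEvent] = field(default_factory=list)

# With option B, a single event naturally outlives many frames,
# matching the point that the event is a separate entity.
ev = SubtitleEvent(start=0, duration=90000, text="Hello")
frames = [Frame(pts=p, events=[ev]) for p in (0, 3003, 6006)]
assert all(f.events[0] is ev for f in frames)
```

This is only meant to make the "different abstraction" question concrete, not to propose a design.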
Hi Daniel,
I don't think that any of that will be necessary. For the generic ocr
filter, this might make sense, because it is meant to work in
many different situations, different text sizes, different (not
necessarily uniform) backgrounds, static or moving, a wide spectrum
of colours, and
Hi there softworkz.
Having worked before with OCR filter output, I'd suggest a
modification for your new filter.
It's not something that should delay the patch, just a nice addendum.
It could be done in another patch, or I could even do it myself in the
future. But I'll leave the comment here
Hi there.
This is my first message to this list, so please excuse me if I
unintentionally break some rule.
I've read the debate between Soft Works and others, and would like to
add something to it.
I don't have as deep a knowledge of the libs as other people here
show. My knowledge comes from