On 14.03.2016 at 16:31, Clément Bœsch wrote:
> On Mon, Mar 14, 2016 at 03:38:52PM +0100, Thilo Borgmann wrote:
>> On 14.03.2016 at 13:56, Clément Bœsch wrote:
>>> On Mon, Mar 14, 2016 at 01:46:52PM +0100, Thilo Borgmann wrote:
>>>> On 14.03.2016 at 11:22, Clément Bœsch wrote:
>>>>> [...]
>>>>>> +./ffmpeg -f lavfi -i nullsrc=s=100x100,coreimage=filter=CIQRCodeGenerator@@inputMessage=https\\\\\://FFmpeg.org/@@inputCorrectionLevel=H -frames:v 1 QRCode.png
>>>>>
>>>>> remove ./
>>>>>
>>>>> also, it's probably better to have 2 filters: one for usage as a source,
>>>>> and another one for filtering (coreimagesrc vs coreimage).
>>>>
>>>> I want to extend it later to allow the user to define an output size,
>>>> avoiding the need to have it defined by the input image size. Then it
>>>> could be used as an image source, too. However, I'll have to think
>>>> about many more cases for that to work well, so I really want to do it
>>>> as further work.
>>>>
>>>> Also, if we split the generator category out into coreimagesrc, the
>>>> following would no longer work without a GPU-host transfer:
>>>>
>>>> coreimage=filter=boxblur@default#qrcode@whatever@somewhere
>>>>
>>>> Instead the user would have to use a complex filter:
>>>>
>>>> coreimagesrc=filter=qrcode@whatever [qr],
>>>> [v:0] coreimage=filter=boxblur@default [blur],
>>>> [blur][qr]overlay=somewhere [output]
>>>>
>>>> Doing it without a transfer would then require allowing coreimage to
>>>> do generators like coreimagesrc - which would void the argument for a
>>>> separate src because of complete feature duplication, don't you think?
>>>>
>>>
>>> If you can hold the picture in a CVPixelBuffer, you could use
>>> AV_PIX_FMT_VIDEOTOOLBOX, which makes it possible to create one single
>>> Apple filter per coreimage instance in the filtergraph.
>>>
>>> It's probably a cleaner design. You'll probably need an
>>> AV_PIX_FMT_VIDEOTOOLBOX to AV_PIX_FMT_RGBA (or whatever) conversion
>>> filter (transparently in vf scale, or maybe simply a coreimage=torgba)
>>
>> The spec says: "A Core Video pixel buffer is an image buffer that holds
>> pixels in main memory." A better choice might be CVImageBuffer.
>>
> 
> You'll need a new pixel format then probably, but this makes me wonder why
> we are casting CVImageBufferRef into CVPixelBufferRef in
> lavc/videotoolbox...
> 
>> Anyway, the object itself should be retainable across filter instances,
>> also as a CIImage object. The filter would then have to know about the
>> complete filter chain to decide whether the image should be kept as a
>> CIImage object (probably in VRAM) for further processing or whether it
>> has to be transferred to the host... And where in the chain is the
>> current instance? Did it correctly retain an image for this instance?
>> What if an error occurs in between? Do we know that?
>>
>> All that is missing is some work to make the preceding nullsrc
>> avoidable. The alternative would bring object conversion (PIX_FMT &
>> toRGB filters?), much more complex code, and filter graph parsing,
>> together with a distinct modularization of almost the same code. I see
>> this as much more error prone and far harder to debug and maintain. Once
>> the user can specify an output size, the existing code would already
>> serve perfectly as an image source.
>>
> 
> The format negotiation is already present in lavfi, which should be
> capable of inserting a scaling (conversion) filter automatically where it
> matters.
> 
> An AV_PIX_FMT_VIDEOTOOLBOX frame is simply a frame with
> data[3]=CVPixelBufferRef, so you can pass them between filter instances.
> They are refcounted and thus properly wrapped in an AVFrame (see
> ff_videotoolbox_buffer_create())

The more I think about it, the more problems I see. Yes, pixel format
converters are auto-inserted, but what about all the other aspects? I
don't have an answer for these without another profound look into what
_might_ be possible. _At least_ the filter-internal code becomes much more
complicated. And what for? To have a coreimagesrc filter chained before
possible other coreimage filters, managing the logic to pass the
coreimagesrc output to coreimage without a transfer, _instead of_ having a
single filter instance with
"coreimage=filter=generator@default@img_size=wxh@out_rect=off_x off_y
width height#somefilter@default".

Clément, I might just not get your point about what would be beneficial
here. If I'm not totally wrong, we get all the features of a coreimagesrc
for free in coreimage, with filter chains that are without doubt GPU-based
and with simple code.

Would you be more pleased if I implemented the extended capabilities for
generator usage, so that a preceding nullsrc could be avoided?
I would much rather implement that than complicate the filter by adding
"non-related" object types and the need to look before and beyond the
filter's in-/output pads...


>>>[...]
>>> #define SafeCFRelease(ptr) do { \
>>>     if (ptr) {                  \
>>>         CFRelease(ptr);         \
>>>         ptr = NULL;             \
>>>     }                           \
>>> } while (0)
>>
>> I don't know why this is beneficial, but I'll change it locally.
>>
> 
> So
>   if (x)
>       SafeCFRelease(a);
>   else
>       SafeCFRelease(b);
> 
> can work.

At least the clang preprocessor keeps the outer {}, so this should work.
I'm not arguing - if experience says compilers act differently here, so
be it. I have already changed it locally...

-Thilo
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
