On Thursday 2025-04-24 19:12:09 +0200, Nicolas George wrote:
> softworkz . (2025-04-22):
[...]
> ffprobe has a concept of sections, and no more. XML does not have a
> concept of sections. JSON does not have a concept of sections. CSV does
> not have a concept of sections. Other parts of FFmpeg
> that could benefit from it do not, or they may have subsections,
> subsubsections, etc. Applications that may use this API even more so.

Elaborating on this. ffprobe/textformat is based on a notion of
hierarchical tree-like data, which maps pretty well onto most data
formats, at the price that some ambiguities need to be resolved.

For example, consider an XML element: it is represented as a node
(aka section), but its attributes might be represented as key-value
fields, as child nodes (subsections) with the key being the name of
the section and the value its datum, or as a list of generic
key-value elements.

This data representation choice is embodied in the definition of the
structure to fill, so there is no way to control serialization
"dynamically". But that was never the purpose of this API: in the
end, what we want is to be able to deserialize specific data, not to
serialize any possible data using a specific container format.

Considering this, there is probably no need to extend the API to
cover each format's full semantics - at least, that is my view.
 
> The proper way to go at it involves two steps. These steps might
> overlap, but not by much. The first one is rather easy but long. The
> second one can be quick but it is much harder.
> 
> 
> The first step is adding each format into libavutil separately, with
> distinct APIs tailored for the specificities of each format. The APIs
> should run parallel whenever possible, i.e. use similar names and
> prototypes for things that make sense in multiple contexts. But other
> parts will be completely unique to certain formats.
> 
> So:
> 
> av_json_enc_…(): adding objects (dictionaries), arrays, strings, numbers,
> booleans, null values; controlling the indentation, the kind of quotes,
> the encoding.
> 
> av_xml_enc_…(): similar, but: no concept of numbers, booleans, null;
> and: control over attributes / nested elements, CDATA sections,
> comments.
> 
> av_csv_enc_…()…
>
> For each API, the parts of ffmpeg that already do the same should be
> converted to use it. That means the ffprobe writers of course, but not
> only. If the XML writing code is not usable by dashenc, movenc,
> smoothstreamingenc, vf_signature, ttmlenc, etc., back to the design
> step.

As I wrote, this was not the purpose of the ffprobe formats in the
first place. MOV/DASH requires a specific use of an XML encoder; in
theory it might be done through the textformat API in its current
form, but it would probably be pretty awkward. We might want to
factor out a few generic utilities (e.g. escaping) to avoid code
duplication, though.

For the filters' output data we might also need to define the output
structure - currently the output is very simple, so simple
definitions should be good enough - and serialization performance is
probably not a real concern.

> Also: these APIs can end up being used by things like the showinfo
> filters, and called once per frame. That means they must be fast, and in
> particular they should not need dynamic allocations as long as the
> objects are small.

This is a good point, but again, for the current usage (ffprobe
formats, filters, etc.) performance is probably not a real concern at
this stage.

It might become one if we want to make this a generic tool for
library users, for purposes outside the scope it was designed for.
But I don't think this should be a real blocker - and we might even
keep the API private, enabling cross-library use inside libav* but
not use by external libraries.
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
