Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-27 Thread Michael Niedermayer
Hi

On Fri, Apr 20, 2018 at 10:35:05AM +0530, ANURAG SINGH IIT BHU wrote:
> Hello Sir,
> 
> I do understand that just 56 lines were inserted, but sir, 621 lines were
> deleted which were of no use for hellosubs, including functions like
> expand_text(), expand_function() and others which are called by the
> drawtext filter but are not needed by the hellosubs filter, as the text and
> the operation to perform are predetermined. So sir, skipping unnecessary
> functions should make the code a bit faster.
> 

> I am really looking forward to building my project proposal of a speech-to-text
> subtitle generation filter under your guidance and contributing to ffmpeg, as
> it will be helpful for a large number of users, and it is an important
> opportunity for me as a student as well. I am dedicated to learning and I am
> confident that with your guidance we can do it.

If you want to work on this outside GSOC, you are certainly welcome to do so
and I would be happy to mentor it

Thanks

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The greatest way to live with honor in this world is to be what we pretend
to be. -- Socrates


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-19 Thread ANURAG SINGH IIT BHU
Hello Sir,

I do understand that just 56 lines were inserted, but sir, 621 lines were
deleted which were of no use for hellosubs, including functions like
expand_text(), expand_function() and others which are called by the
drawtext filter but are not needed by the hellosubs filter, as the text and
the operation to perform are predetermined. So sir, skipping unnecessary
functions should make the code a bit faster.

I am really looking forward to building my project proposal of a speech-to-text
subtitle generation filter under your guidance and contributing to ffmpeg, as
it will be helpful for a large number of users, and it is an important
opportunity for me as a student as well. I am dedicated to learning and I am
confident that with your guidance we can do it.


Thanks and regards,
Anurag Singh

On Thu, Apr 19, 2018 at 10:48 PM, Michael Niedermayer <
mich...@niedermayer.cc> wrote:

> On Wed, Apr 18, 2018 at 02:45:36PM +0530, ANURAG SINGH IIT BHU wrote:
> > Hello Sir,
> >
> > I have implemented the suggested changes, now the filter does not break
> > builds, also now it works for all inputs and
>
> You are still removing the Copyright statements from the filter you copy
>
> --- libavfilter/vf_drawtext.c   2018-04-17 14:20:30.340366781 +0200
> +++ libavfilter/vf_hellosubs.c  2018-04-19 17:51:48.371572589 +0200
> @@ -1,8 +1,4 @@
>  /*
> - * Copyright (c) 2011 Stefano Sabatini
> - * Copyright (c) 2010 S.N. Hemanth Meenakshisundaram
> - * Copyright (c) 2003 Gustavo Sverzut Barbieri 
> - *
>   * This file is part of FFmpeg.
>   *
>   * FFmpeg is free software; you can redistribute it and/or
>
> the newly added code has 960 lines
> only 56 of these lines are not in vf_drawtext.c
>
> diff -wbu libavfilter/vf_drawtext.c libavfilter/vf_hellosubs.c |diffstat
>  vf_hellosubs.c |  677 --
> ---
>  1 file changed, 56 insertions(+), 621 deletions(-)
>
> wc libavfilter/vf_hellosubs.c
>   960   32438 libavfilter/vf_hellosubs.c
>
> From these 56, some are changes of the filter name and the context name
>
> The remaining changes are reviewed below; I attempted to format
> this so the code is readable.
>
>
> - * drawtext filter, based on the original vhook/drawtext.c
> - * filter by Gustavo Sverzut Barbieri
> + * Libfreetype subtitles burning filter.
> + * @see{http://www.matroska.org/technical/specs/subtitles/ssa.html}
>
> The SSA link has nothing to do with the code based on drawtext
>
>
> @@ -207,3 +181,2 @@
> -{"text","set text", OFFSET(text),
>  AV_OPT_TYPE_STRING, {.str=NULL},  CHAR_MIN, CHAR_MAX, FLAGS},
> -{"textfile","set text file",OFFSET(textfile),
>  AV_OPT_TYPE_STRING, {.str=NULL},  CHAR_MIN, CHAR_MAX, FLAGS},
> -{"fontcolor",   "set foreground color", OFFSET(fontcolor.rgba),
>  AV_OPT_TYPE_COLOR,  {.str="black"}, CHAR_MIN, CHAR_MAX, FLAGS},
> +{"text","set text", OFFSET(text),
>  AV_OPT_TYPE_STRING, {.str="Hello world"},  CHAR_MIN, CHAR_MAX, FLAGS},
> +{"fontcolor",   "set foreground color", OFFSET(fontcolor.rgba),
>  AV_OPT_TYPE_COLOR,  {.str="white"}, CHAR_MIN, CHAR_MAX, FLAGS},
> @@ -217,5 +185,3 @@
> -{"fontsize","set font size",OFFSET(fontsize_expr),
> AV_OPT_TYPE_STRING, {.str=NULL},  CHAR_MIN, CHAR_MAX , FLAGS},
> -{"x",   "set x expression", OFFSET(x_expr),
>  AV_OPT_TYPE_STRING, {.str="0"},   CHAR_MIN, CHAR_MAX, FLAGS},
> -{"y",   "set y expression", OFFSET(y_expr),
>  AV_OPT_TYPE_STRING, {.str="0"},   CHAR_MIN, CHAR_MAX, FLAGS},
> -{"shadowx", "set shadow x offset",  OFFSET(shadowx),
> AV_OPT_TYPE_INT,{.i64=0}, INT_MIN,  INT_MAX , FLAGS},
> -{"shadowy", "set shadow y offset",  OFFSET(shadowy),
> AV_OPT_TYPE_INT,{.i64=0}, INT_MIN,  INT_MAX , FLAGS},
> +{"fontsize","set font size",OFFSET(fontsize_expr),
> AV_OPT_TYPE_STRING, {.str="h/20"},  CHAR_MIN, CHAR_MAX , FLAGS},
> +{"x",   "set x expression", OFFSET(x_expr),
>  AV_OPT_TYPE_STRING, {.str="w/2.7"},   CHAR_MIN, CHAR_MAX, FLAGS},
> +{"y",   "set y expression", OFFSET(y_expr),
>  AV_OPT_TYPE_STRING, {.str="h/1.3"},   CHAR_MIN, CHAR_MAX, FLAGS},
>
> ...
>
> +static int generatehellosub(AVFilterContext *ctx, AVBPrint *bp)
>
>  {
>  DrawTextContext *s = ctx->priv;
>  double pts = s->var_values[VAR_T];
>  int64_t ms = llrint(pts * 1000);
>
> +if (ms < 0)
> +   ms = -ms;
> +av_bprintf(bp, "Hello world %d:%02d",(int)(ms / (60 * 1000)),(int)(ms
> / 1000) % 60);
>  return 0;
>  }
>
> The use of float/double can cause rounding differences between platforms
> and is especially unneeded here, as the filter's input is never float/double
>
>  if (s->fix_bounds) {
>
>  /* calculate footprint of text effects */
> -int boxoffset = s->draw_box ? FFMAX(s->boxborderw, 0) : 0;
> -int borderoffset  = s->borderw  ? FFMAX(s->borderw, 0) : 0;
>
> -   

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-19 Thread Michael Niedermayer
On Wed, Apr 18, 2018 at 02:45:36PM +0530, ANURAG SINGH IIT BHU wrote:
> Hello Sir,
> 
> I have implemented the suggested changes, now the filter does not break
> builds, also now it works for all inputs and

You are still removing the Copyright statements from the filter you copy

--- libavfilter/vf_drawtext.c   2018-04-17 14:20:30.340366781 +0200
+++ libavfilter/vf_hellosubs.c  2018-04-19 17:51:48.371572589 +0200
@@ -1,8 +1,4 @@
 /*
- * Copyright (c) 2011 Stefano Sabatini
- * Copyright (c) 2010 S.N. Hemanth Meenakshisundaram
- * Copyright (c) 2003 Gustavo Sverzut Barbieri 
- *
  * This file is part of FFmpeg.
  *
  * FFmpeg is free software; you can redistribute it and/or

the newly added code has 960 lines
only 56 of these lines are not in vf_drawtext.c

diff -wbu libavfilter/vf_drawtext.c libavfilter/vf_hellosubs.c |diffstat 
 vf_hellosubs.c |  677 -
 1 file changed, 56 insertions(+), 621 deletions(-)

wc libavfilter/vf_hellosubs.c
  960   32438 libavfilter/vf_hellosubs.c
  
From these 56, some are changes of the filter name and the context name

The remaining changes are reviewed below; I attempted to format
this so the code is readable.


- * drawtext filter, based on the original vhook/drawtext.c
- * filter by Gustavo Sverzut Barbieri
+ * Libfreetype subtitles burning filter.
+ * @see{http://www.matroska.org/technical/specs/subtitles/ssa.html}

The SSA link has nothing to do with the code based on drawtext


@@ -207,3 +181,2 @@
-{"text","set text", OFFSET(text),   
AV_OPT_TYPE_STRING, {.str=NULL},  CHAR_MIN, CHAR_MAX, FLAGS},
-{"textfile","set text file",OFFSET(textfile),   
AV_OPT_TYPE_STRING, {.str=NULL},  CHAR_MIN, CHAR_MAX, FLAGS},
-{"fontcolor",   "set foreground color", OFFSET(fontcolor.rgba), 
AV_OPT_TYPE_COLOR,  {.str="black"}, CHAR_MIN, CHAR_MAX, FLAGS},
+{"text","set text", OFFSET(text),   
AV_OPT_TYPE_STRING, {.str="Hello world"},  CHAR_MIN, CHAR_MAX, FLAGS},
+{"fontcolor",   "set foreground color", OFFSET(fontcolor.rgba), 
AV_OPT_TYPE_COLOR,  {.str="white"}, CHAR_MIN, CHAR_MAX, FLAGS},
@@ -217,5 +185,3 @@
-{"fontsize","set font size",OFFSET(fontsize_expr),  
AV_OPT_TYPE_STRING, {.str=NULL},  CHAR_MIN, CHAR_MAX , FLAGS},
-{"x",   "set x expression", OFFSET(x_expr), 
AV_OPT_TYPE_STRING, {.str="0"},   CHAR_MIN, CHAR_MAX, FLAGS},
-{"y",   "set y expression", OFFSET(y_expr), 
AV_OPT_TYPE_STRING, {.str="0"},   CHAR_MIN, CHAR_MAX, FLAGS},
-{"shadowx", "set shadow x offset",  OFFSET(shadowx),
AV_OPT_TYPE_INT,{.i64=0}, INT_MIN,  INT_MAX , FLAGS},
-{"shadowy", "set shadow y offset",  OFFSET(shadowy),
AV_OPT_TYPE_INT,{.i64=0}, INT_MIN,  INT_MAX , FLAGS},
+{"fontsize","set font size",OFFSET(fontsize_expr),  
AV_OPT_TYPE_STRING, {.str="h/20"},  CHAR_MIN, CHAR_MAX , FLAGS},
+{"x",   "set x expression", OFFSET(x_expr), 
AV_OPT_TYPE_STRING, {.str="w/2.7"},   CHAR_MIN, CHAR_MAX, FLAGS},
+{"y",   "set y expression", OFFSET(y_expr), 
AV_OPT_TYPE_STRING, {.str="h/1.3"},   CHAR_MIN, CHAR_MAX, FLAGS},

...

+static int generatehellosub(AVFilterContext *ctx, AVBPrint *bp)
 
 {
 DrawTextContext *s = ctx->priv;
 double pts = s->var_values[VAR_T];
 int64_t ms = llrint(pts * 1000);
 
+if (ms < 0)
+   ms = -ms;
+av_bprintf(bp, "Hello world %d:%02d",(int)(ms / (60 * 1000)),(int)(ms / 
1000) % 60);
 return 0;
 }

The use of float/double can cause rounding differences between platforms
and is especially unneeded here, as the filter's input is never float/double
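A minimal integer-only sketch of the alternative (assuming it runs inside
filter_frame() where the input link and the frame are available; the helper
name is illustrative and not part of the patch):

    /* av_rescale_q() is from libavutil/mathematics.h */
    static int64_t frame_time_ms(AVFilterLink *inlink, const AVFrame *frame)
    {
        if (frame->pts == AV_NOPTS_VALUE)
            return 0;
        /* rescale the integer pts from the link time base to milliseconds,
         * no float/double involved */
        return av_rescale_q(frame->pts, inlink->time_base,
                            (AVRational){ 1, 1000 });
    }

The minute/second split can then stay in integer arithmetic as in the patch.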

 if (s->fix_bounds) {
 
 /* calculate footprint of text effects */
-int boxoffset = s->draw_box ? FFMAX(s->boxborderw, 0) : 0;
-int borderoffset  = s->borderw  ? FFMAX(s->borderw, 0) : 0;
 
-int offsetleft = FFMAX3(boxoffset, borderoffset,
-(s->shadowx < 0 ? FFABS(s->shadowx) : 0));
-int offsettop = FFMAX3(boxoffset, borderoffset,
-(s->shadowy < 0 ? FFABS(s->shadowy) : 0));
-
-int offsetright = FFMAX3(boxoffset, borderoffset,
- (s->shadowx > 0 ? s->shadowx : 0));
-int offsetbottom = FFMAX3(boxoffset, borderoffset,
-  (s->shadowy > 0 ? s->shadowy : 0));
 
+int offsetleft = FFMAX3(0,0,0);
+
+int offsettop = FFMAX3(0, 0,0);
 
This can be simplified further
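For instance (a sketch, assuming none of the box/border/shadow options
survive in hellosubs, so every offset is a constant zero):

+int offsetleft   = 0;
+int offsettop    = 0;

or the whole fix_bounds block can be dropped, since offsets of zero change
nothing.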


@@ -1515 +951 @@
-.description   = NULL_IF_CONFIG_SMALL("Draw text on top of video frames 
using libfreetype library."),
+.description   = NULL_IF_CONFIG_SMALL("Writes hello world time on top of 
video frames using libfreetype library."),

 
 

> 
> ffplay -f lavfi -i testsrc -vf 

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-18 Thread ANURAG SINGH IIT BHU
Hello Sir,

I have implemented the suggested changes; now the filter does not break
builds, it works for all inputs, and

ffplay -f lavfi -i testsrc -vf hellosubs

works.

Sir, I think passing the text to the drawtext filter using metadata would not be
an efficient way in terms of time, as the drawtext filter calls a number of
functions which are not needed by the hellosubs filter, functions like
expand_text(), expand_func(), etc.; i.e. the drawtext filter is 1525 lines,
whereas hellosubs is more than a third shorter. And sir, for
realtime subtitles I think we should emphasise reducing the time
taken by the filter to avoid lag.
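For comparison, a drawtext-based equivalent of the hellosubs output would be
roughly the following (an untested sketch relying on drawtext's pts text
expansion; the ':' escaping may need adjusting):

ffplay -f lavfi -i testsrc -vf "drawtext=text='Hello world %{pts\:hms}':fontsize=h/20:x=w/2.7:y=h/1.3"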


Thanks and regards,
Anurag Singh




On Mon, Apr 16, 2018 at 11:57 AM, ANURAG SINGH IIT BHU <
anurag.singh.ph...@iitbhu.ac.in> wrote:

>
>
>
> Hello sir,
>
> Okay I'll implement the suggested changes and make sure that the filter
> does not break build without libfreetype.
>
> Thanks and regards
> Anurag Singh.
>
>
> On Mon, Apr 16, 2018 at 12:49 AM, Michael Niedermayer <
> mich...@niedermayer.cc> wrote:
>
>> On Sun, Apr 15, 2018 at 07:36:09PM +0530, ANURAG SINGH IIT BHU wrote:
>> > Hello Sir,
>> >
>> > I have implemented the advised changes for the hellosubs filter for the
>> > qualification task which writes Hello World time on frames, now the
>> filter
>> > does not use libavformat, and it uses libfreetype to draw over video
>> > frames. I have attached the complete patch.
>> >
>> > libfreetype and libfontconfig should be enabled to run the filter.
>> > (libfontconfig if no font file is provided.)
>> >
>> > Command to run the filter
>> > ffmpeg -i  -vf hellosubs 
>> >
>> > Thanks and regards,
>> > Anurag Singh.
>> >
>> >
>> >
>> >
>> >
>> > ‌
>> >
>> > On Fri, Apr 13, 2018 at 9:39 AM, ANURAG SINGH IIT BHU <
>> > anurag.singh.ph...@iitbhu.ac.in> wrote:
>> >
>> > > Thank you sir, I'll implement the suggested reviews as soon as
>> possible.
>> > >
>> > >
>> > >
>> > >
>> > > ‌
>> > >
>> > > On Fri, Apr 13, 2018 at 4:04 AM, Michael Niedermayer <
>> > > mich...@niedermayer.cc> wrote:
>> > >
>> > >> On Fri, Apr 13, 2018 at 02:13:53AM +0530, ANURAG SINGH IIT BHU wrote:
>> > >> > Hello,
>> > >> > I have implemented the reviews mentioned on previous patch, now
>> there
>> > >> is no
>> > >> > need to provide any subtitle file to the filter, I am attaching the
>> > >> > complete patch of the hellosubs filter.
>> > >> >
>> > >> > Command to run the filter
>> > >> > ffmpeg -i  -vf hellosubs= helloout.mp4
>> > >> >
>> > >> >
>> > >> > Thanks and regards,
>> > >> > Anurag Singh.
>> > >> >
>> > >> >
>> > >> > ‌
>> > >> >
>> > >> > On Tue, Apr 10, 2018 at 4:55 AM, Rostislav Pehlivanov <
>> > >> atomnu...@gmail.com>
>> > >> > wrote:
>> > >> >
>> > >> > > On 9 April 2018 at 19:10, Paul B Mahol  wrote:
>> > >> > >
>> > >> > > > On 4/9/18, Rostislav Pehlivanov  wrote:
>> > >> > > > > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
>> > >> > > > > anurag.singh.ph...@iitbhu.ac.in> wrote:
>> > >> > > > >
>> > >> > > > >> This mail is regarding the qualification task assigned to
>> me for
>> > >> the
>> > >> > > > >> GSOC project
>> > >> > > > >> in FFmpeg for automatic real-time subtitle generation using
>> > >> speech to
>> > >> > > > text
>> > >> > > > >> translation ML model.
>> > >> > > > >>
>> > >> > > > >
>> > >> > > > > i really don't think lavfi is the correct place for such
>> code,
>> > >> nor that
>> > >> > > > the
>> > >> > > > > project's repo should contain such code at all.
>> > >> > > > > This would need to be in another repo and a separate library.
>> > >> > > >
>> > >> > > > Why? Are you against ocr filter too?
>> > >> > > >
>> > >> > >
>> > >> > > The OCR filter uses libtesseract so I'm fine with it. Like I
>> said, as
>> > >> long
>> > >> > > as the actual code to do it is in an external library I don't
>> mind.
>> > >> > > Mozilla recently released Deep Speech (
>> https://github.com/mozilla/
>> > >> > > DeepSpeech)
>> > >> > > which does pretty much exactly speech to text and is considered
>> to
>> > >> have the
>> > >> > > most accurate one out there. Someone just needs to convert the
>> > >> tensorflow
>> > >> > > code to something more usable.
>> > >> > > ___
>> > >> > > ffmpeg-devel mailing list
>> > >> > > ffmpeg-devel@ffmpeg.org
>> > >> > > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>> > >> > >
>> > >>
>> > >> >  Makefile   |1
>> > >> >  allfilters.c   |1
>> > >> >  vf_hellosubs.c |  513 ++
>> > >> +++
>> > >> >  3 files changed, 515 insertions(+)
>> > >> > 2432f100fddb7ec84e771be8282d4b66e3d1f50a
>> > >> 0001-avfilter-add-hellosubs-filter.patch
>> > >> > From ac0e09d431ea68aebfaef6e2ed0b450e76d473d9 Mon Sep 17 00:00:00
>> 2001
>> > >> > From: ddosvulnerability 
>> > >> > Date: Thu, 12 Apr 2018 22:06:43 +0530
>> > >> > Subject: [PATCH] avfilter: add hellosubs filter.
>> > >> 

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-16 Thread ANURAG SINGH IIT BHU
Hello sir,

Okay I'll implement the suggested changes and make sure that the filter
does not break build without libfreetype.

Thanks and regards
Anurag Singh.


On Mon, Apr 16, 2018 at 12:49 AM, Michael Niedermayer <
mich...@niedermayer.cc> wrote:

> On Sun, Apr 15, 2018 at 07:36:09PM +0530, ANURAG SINGH IIT BHU wrote:
> > Hello Sir,
> >
> > I have implemented the advised changes for the hellosubs filter for the
> > qualification task which writes Hello World time on frames, now the
> filter
> > does not use libavformat, and it uses libfreetype to draw over video
> > frames. I have attached the complete patch.
> >
> > libfreetype and libfontconfig should be enabled to run the filter.
> > (libfontconfig if no font file is provided.)
> >
> > Command to run the filter
> > ffmpeg -i  -vf hellosubs 
> >
> > Thanks and regards,
> > Anurag Singh.
> >
> >
> >
> >
> >
> > ‌
> >
> > On Fri, Apr 13, 2018 at 9:39 AM, ANURAG SINGH IIT BHU <
> > anurag.singh.ph...@iitbhu.ac.in> wrote:
> >
> > > Thank you sir, I'll implement the suggested reviews as soon as
> possible.
> > >
> > >
> > >
> > >
> > > ‌
> > >
> > > On Fri, Apr 13, 2018 at 4:04 AM, Michael Niedermayer <
> > > mich...@niedermayer.cc> wrote:
> > >
> > >> On Fri, Apr 13, 2018 at 02:13:53AM +0530, ANURAG SINGH IIT BHU wrote:
> > >> > Hello,
> > >> > I have implemented the reviews mentioned on previous patch, now
> there
> > >> is no
> > >> > need to provide any subtitle file to the filter, I am attaching the
> > >> > complete patch of the hellosubs filter.
> > >> >
> > >> > Command to run the filter
> > >> > ffmpeg -i  -vf hellosubs= helloout.mp4
> > >> >
> > >> >
> > >> > Thanks and regards,
> > >> > Anurag Singh.
> > >> >
> > >> >
> > >> > ‌
> > >> >
> > >> > On Tue, Apr 10, 2018 at 4:55 AM, Rostislav Pehlivanov <
> > >> atomnu...@gmail.com>
> > >> > wrote:
> > >> >
> > >> > > On 9 April 2018 at 19:10, Paul B Mahol  wrote:
> > >> > >
> > >> > > > On 4/9/18, Rostislav Pehlivanov  wrote:
> > >> > > > > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
> > >> > > > > anurag.singh.ph...@iitbhu.ac.in> wrote:
> > >> > > > >
> > >> > > > >> This mail is regarding the qualification task assigned to me
> for
> > >> the
> > >> > > > >> GSOC project
> > >> > > > >> in FFmpeg for automatic real-time subtitle generation using
> > >> speech to
> > >> > > > text
> > >> > > > >> translation ML model.
> > >> > > > >>
> > >> > > > >
> > >> > > > > i really don't think lavfi is the correct place for such code,
> > >> nor that
> > >> > > > the
> > >> > > > > project's repo should contain such code at all.
> > >> > > > > This would need to be in another repo and a separate library.
> > >> > > >
> > >> > > > Why? Are you against ocr filter too?
> > >> > > >
> > >> > >
> > >> > > The OCR filter uses libtesseract so I'm fine with it. Like I said,
> as
> > >> long
> > >> > > as the actual code to do it is in an external library I don't
> mind.
> > >> > > Mozilla recently released Deep Speech (
> https://github.com/mozilla/
> > >> > > DeepSpeech)
> > >> > > which does pretty much exactly speech to text and is considered to
> > >> have the
> > >> > > most accurate one out there. Someone just needs to convert the
> > >> tensorflow
> > >> > > code to something more usable.
> > >> > > ___
> > >> > > ffmpeg-devel mailing list
> > >> > > ffmpeg-devel@ffmpeg.org
> > >> > > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> > >> > >
> > >>
> > >> >  Makefile   |1
> > >> >  allfilters.c   |1
> > >> >  vf_hellosubs.c |  513 ++
> > >> +++
> > >> >  3 files changed, 515 insertions(+)
> > >> > 2432f100fddb7ec84e771be8282d4b66e3d1f50a
> > >> 0001-avfilter-add-hellosubs-filter.patch
> > >> > From ac0e09d431ea68aebfaef6e2ed0b450e76d473d9 Mon Sep 17 00:00:00
> 2001
> > >> > From: ddosvulnerability 
> > >> > Date: Thu, 12 Apr 2018 22:06:43 +0530
> > >> > Subject: [PATCH] avfilter: add hellosubs filter.
> > >> >
> > >> > ---
> > >> >  libavfilter/Makefile   |   1 +
> > >> >  libavfilter/allfilters.c   |   1 +
> > >> >  libavfilter/vf_hellosubs.c | 513 ++
> > >> +++
> > >> >  3 files changed, 515 insertions(+)
> > >> >  create mode 100644 libavfilter/vf_hellosubs.c
> > >> >
> > >> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > >> > index a90ca30..770b1b5 100644
> > >> > --- a/libavfilter/Makefile
> > >> > +++ b/libavfilter/Makefile
> > >> > @@ -331,6 +331,7 @@ OBJS-$(CONFIG_SSIM_FILTER)   +=
> > >> vf_ssim.o framesync.o
> > >> >  OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
> > >> >  OBJS-$(CONFIG_STREAMSELECT_FILTER)   += f_streamselect.o
> > >> framesync.o
> > >> >  OBJS-$(CONFIG_SUBTITLES_FILTER)  += vf_subtitles.o
> > >> > +OBJS-$(CONFIG_HELLOSUBS_FILTER)  += vf_hellosubs.o
> > >> >  

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-15 Thread Michael Niedermayer
On Sun, Apr 15, 2018 at 07:36:09PM +0530, ANURAG SINGH IIT BHU wrote:
> Hello Sir,
> 
> I have implemented the advised changes for the hellosubs filter for the
> qualification task which writes Hello World time on frames, now the filter
> does not use libavformat, and it uses libfreetype to draw over video
> frames. I have attached the complete patch.
> 
> libfreetype and libfontconfig should be enabled to run the filter.
> (libfontconfig if no font file is provided.)
> 
> Command to run the filter
> ffmpeg -i  -vf hellosubs 
> 
> Thanks and regards,
> Anurag Singh.
> 
> 
> 
> 
> 
> ‌
> 
> On Fri, Apr 13, 2018 at 9:39 AM, ANURAG SINGH IIT BHU <
> anurag.singh.ph...@iitbhu.ac.in> wrote:
> 
> > Thank you sir, I'll implement the suggested reviews as soon as possible.
> >
> >
> >
> >
> > ‌
> >
> > On Fri, Apr 13, 2018 at 4:04 AM, Michael Niedermayer <
> > mich...@niedermayer.cc> wrote:
> >
> >> On Fri, Apr 13, 2018 at 02:13:53AM +0530, ANURAG SINGH IIT BHU wrote:
> >> > Hello,
> >> > I have implemented the reviews mentioned on previous patch, now there
> >> is no
> >> > need to provide any subtitle file to the filter, I am attaching the
> >> > complete patch of the hellosubs filter.
> >> >
> >> > Command to run the filter
> >> > ffmpeg -i  -vf hellosubs= helloout.mp4
> >> >
> >> >
> >> > Thanks and regards,
> >> > Anurag Singh.
> >> >
> >> >
> >> > ‌
> >> >
> >> > On Tue, Apr 10, 2018 at 4:55 AM, Rostislav Pehlivanov <
> >> atomnu...@gmail.com>
> >> > wrote:
> >> >
> >> > > On 9 April 2018 at 19:10, Paul B Mahol  wrote:
> >> > >
> >> > > > On 4/9/18, Rostislav Pehlivanov  wrote:
> >> > > > > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
> >> > > > > anurag.singh.ph...@iitbhu.ac.in> wrote:
> >> > > > >
> >> > > > >> This mail is regarding the qualification task assigned to me for
> >> the
> >> > > > >> GSOC project
> >> > > > >> in FFmpeg for automatic real-time subtitle generation using
> >> speech to
> >> > > > text
> >> > > > >> translation ML model.
> >> > > > >>
> >> > > > >
> >> > > > > i really don't think lavfi is the correct place for such code,
> >> nor that
> >> > > > the
> >> > > > > project's repo should contain such code at all.
> >> > > > > This would need to be in another repo and a separate library.
> >> > > >
> >> > > > Why? Are you against ocr filter too?
> >> > > >
> >> > >
> >> > > The OCR filter uses libtesseract so I'm fine with it. Like I said, as
> >> long
> >> > > as the actual code to do it is in an external library I don't mind.
> >> > > Mozilla recently released Deep Speech (https://github.com/mozilla/
> >> > > DeepSpeech)
> >> > > which does pretty much exactly speech to text and is considered to
> >> have the
> >> > > most accurate one out there. Someone just needs to convert the
> >> tensorflow
> >> > > code to something more usable.
> >> > > ___
> >> > > ffmpeg-devel mailing list
> >> > > ffmpeg-devel@ffmpeg.org
> >> > > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> >> > >
> >>
> >> >  Makefile   |1
> >> >  allfilters.c   |1
> >> >  vf_hellosubs.c |  513 ++
> >> +++
> >> >  3 files changed, 515 insertions(+)
> >> > 2432f100fddb7ec84e771be8282d4b66e3d1f50a
> >> 0001-avfilter-add-hellosubs-filter.patch
> >> > From ac0e09d431ea68aebfaef6e2ed0b450e76d473d9 Mon Sep 17 00:00:00 2001
> >> > From: ddosvulnerability 
> >> > Date: Thu, 12 Apr 2018 22:06:43 +0530
> >> > Subject: [PATCH] avfilter: add hellosubs filter.
> >> >
> >> > ---
> >> >  libavfilter/Makefile   |   1 +
> >> >  libavfilter/allfilters.c   |   1 +
> >> >  libavfilter/vf_hellosubs.c | 513 ++
> >> +++
> >> >  3 files changed, 515 insertions(+)
> >> >  create mode 100644 libavfilter/vf_hellosubs.c
> >> >
> >> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> >> > index a90ca30..770b1b5 100644
> >> > --- a/libavfilter/Makefile
> >> > +++ b/libavfilter/Makefile
> >> > @@ -331,6 +331,7 @@ OBJS-$(CONFIG_SSIM_FILTER)   +=
> >> vf_ssim.o framesync.o
> >> >  OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
> >> >  OBJS-$(CONFIG_STREAMSELECT_FILTER)   += f_streamselect.o
> >> framesync.o
> >> >  OBJS-$(CONFIG_SUBTITLES_FILTER)  += vf_subtitles.o
> >> > +OBJS-$(CONFIG_HELLOSUBS_FILTER)  += vf_hellosubs.o
> >> >  OBJS-$(CONFIG_SUPER2XSAI_FILTER) += vf_super2xsai.o
> >> >  OBJS-$(CONFIG_SWAPRECT_FILTER)   += vf_swaprect.o
> >> >  OBJS-$(CONFIG_SWAPUV_FILTER) += vf_swapuv.o
> >> > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> >> > index 6eac828..a008908 100644
> >> > --- a/libavfilter/allfilters.c
> >> > +++ b/libavfilter/allfilters.c
> >> > @@ -322,6 +322,7 @@ extern AVFilter ff_vf_ssim;
> >> >  extern AVFilter ff_vf_stereo3d;
> >> >  extern AVFilter ff_vf_streamselect;
> >> >  

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-15 Thread ANURAG SINGH IIT BHU
Hello Sir,

I have implemented the advised changes for the hellosubs filter for the
qualification task, which writes the Hello World time on frames; now the filter
does not use libavformat, and it uses libfreetype to draw over video
frames. I have attached the complete patch.

libfretype and libfontconfig should be enabled to run the filter.
(libfontconfig if no font file is provided.)

Command to run the filter
ffmpeg -i  -vf hellosubs 

Thanks and regards,
Anurag Singh.






On Fri, Apr 13, 2018 at 9:39 AM, ANURAG SINGH IIT BHU <
anurag.singh.ph...@iitbhu.ac.in> wrote:

> Thank you sir, I'll implement the suggested reviews as soon as possible.
>
>
>
>
> ‌
>
> On Fri, Apr 13, 2018 at 4:04 AM, Michael Niedermayer <
> mich...@niedermayer.cc> wrote:
>
>> On Fri, Apr 13, 2018 at 02:13:53AM +0530, ANURAG SINGH IIT BHU wrote:
>> > Hello,
>> > I have implemented the reviews mentioned on previous patch, now there
>> is no
>> > need to provide any subtitle file to the filter, I am attaching the
>> > complete patch of the hellosubs filter.
>> >
>> > Command to run the filter
>> > ffmpeg -i  -vf hellosubs= helloout.mp4
>> >
>> >
>> > Thanks and regards,
>> > Anurag Singh.
>> >
>> >
>> > ‌
>> >
>> > On Tue, Apr 10, 2018 at 4:55 AM, Rostislav Pehlivanov <
>> atomnu...@gmail.com>
>> > wrote:
>> >
>> > > On 9 April 2018 at 19:10, Paul B Mahol  wrote:
>> > >
>> > > > On 4/9/18, Rostislav Pehlivanov  wrote:
>> > > > > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
>> > > > > anurag.singh.ph...@iitbhu.ac.in> wrote:
>> > > > >
>> > > > >> This mail is regarding the qualification task assigned to me for
>> the
>> > > > >> GSOC project
>> > > > >> in FFmpeg for automatic real-time subtitle generation using
>> speech to
>> > > > text
>> > > > >> translation ML model.
>> > > > >>
>> > > > >
>> > > > > i really don't think lavfi is the correct place for such code,
>> nor that
>> > > > the
>> > > > > project's repo should contain such code at all.
>> > > > > This would need to be in another repo and a separate library.
>> > > >
>> > > > Why? Are you against ocr filter too?
>> > > >
>> > >
>> > > The OCR filter uses libtesseract so I'm fine with it. Like I said, as
>> long
>> > > as the actual code to do it is in an external library I don't mind.
>> > > Mozilla recently released Deep Speech (https://github.com/mozilla/
>> > > DeepSpeech)
>> > > which does pretty much exactly speech to text and is considered to
>> have the
>> > > most accurate one out there. Someone just needs to convert the
>> tensorflow
>> > > code to something more usable.
>> > > ___
>> > > ffmpeg-devel mailing list
>> > > ffmpeg-devel@ffmpeg.org
>> > > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>> > >
>>
>> >  Makefile   |1
>> >  allfilters.c   |1
>> >  vf_hellosubs.c |  513 ++
>> +++
>> >  3 files changed, 515 insertions(+)
>> > 2432f100fddb7ec84e771be8282d4b66e3d1f50a
>> 0001-avfilter-add-hellosubs-filter.patch
>> > From ac0e09d431ea68aebfaef6e2ed0b450e76d473d9 Mon Sep 17 00:00:00 2001
>> > From: ddosvulnerability 
>> > Date: Thu, 12 Apr 2018 22:06:43 +0530
>> > Subject: [PATCH] avfilter: add hellosubs filter.
>> >
>> > ---
>> >  libavfilter/Makefile   |   1 +
>> >  libavfilter/allfilters.c   |   1 +
>> >  libavfilter/vf_hellosubs.c | 513 ++
>> +++
>> >  3 files changed, 515 insertions(+)
>> >  create mode 100644 libavfilter/vf_hellosubs.c
>> >
>> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
>> > index a90ca30..770b1b5 100644
>> > --- a/libavfilter/Makefile
>> > +++ b/libavfilter/Makefile
>> > @@ -331,6 +331,7 @@ OBJS-$(CONFIG_SSIM_FILTER)   +=
>> vf_ssim.o framesync.o
>> >  OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
>> >  OBJS-$(CONFIG_STREAMSELECT_FILTER)   += f_streamselect.o
>> framesync.o
>> >  OBJS-$(CONFIG_SUBTITLES_FILTER)  += vf_subtitles.o
>> > +OBJS-$(CONFIG_HELLOSUBS_FILTER)  += vf_hellosubs.o
>> >  OBJS-$(CONFIG_SUPER2XSAI_FILTER) += vf_super2xsai.o
>> >  OBJS-$(CONFIG_SWAPRECT_FILTER)   += vf_swaprect.o
>> >  OBJS-$(CONFIG_SWAPUV_FILTER) += vf_swapuv.o
>> > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
>> > index 6eac828..a008908 100644
>> > --- a/libavfilter/allfilters.c
>> > +++ b/libavfilter/allfilters.c
>> > @@ -322,6 +322,7 @@ extern AVFilter ff_vf_ssim;
>> >  extern AVFilter ff_vf_stereo3d;
>> >  extern AVFilter ff_vf_streamselect;
>> >  extern AVFilter ff_vf_subtitles;
>> > +extern AVFilter ff_vf_hellosubs;
>> >  extern AVFilter ff_vf_super2xsai;
>> >  extern AVFilter ff_vf_swaprect;
>> >  extern AVFilter ff_vf_swapuv;
>> > diff --git a/libavfilter/vf_hellosubs.c b/libavfilter/vf_hellosubs.c
>> > new file mode 100644
>> > index 000..b994050
>> > --- /dev/null
>> > +++ 

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-12 Thread ANURAG SINGH IIT BHU
Thank you sir, I'll implement the suggested reviews as soon as possible.





On Fri, Apr 13, 2018 at 4:04 AM, Michael Niedermayer  wrote:

> On Fri, Apr 13, 2018 at 02:13:53AM +0530, ANURAG SINGH IIT BHU wrote:
> > Hello,
> > I have implemented the reviews mentioned on previous patch, now there is
> no
> > need to provide any subtitle file to the filter, I am attaching the
> > complete patch of the hellosubs filter.
> >
> > Command to run the filter
> > ffmpeg -i  -vf hellosubs= helloout.mp4
> >
> >
> > Thanks and regards,
> > Anurag Singh.
> >
> >
> > ‌
> >
> > On Tue, Apr 10, 2018 at 4:55 AM, Rostislav Pehlivanov <
> atomnu...@gmail.com>
> > wrote:
> >
> > > On 9 April 2018 at 19:10, Paul B Mahol  wrote:
> > >
> > > > On 4/9/18, Rostislav Pehlivanov  wrote:
> > > > > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
> > > > > anurag.singh.ph...@iitbhu.ac.in> wrote:
> > > > >
> > > > >> This mail is regarding the qualification task assigned to me for
> the
> > > > >> GSOC project
> > > > >> in FFmpeg for automatic real-time subtitle generation using
> speech to
> > > > text
> > > > >> translation ML model.
> > > > >>
> > > > >
> > > > > i really don't think lavfi is the correct place for such code, nor
> that
> > > > the
> > > > > project's repo should contain such code at all.
> > > > > This would need to be in another repo and a separate library.
> > > >
> > > > Why? Are you against ocr filter too?
> > > >
> > >
> > > The OCR filter uses libtesseract so I'm fine with it. Like I said, as
> long
> > > as the actual code to do it is in an external library I don't mind.
> > > Mozilla recently released Deep Speech (https://github.com/mozilla/
> > > DeepSpeech)
> > > which does pretty much exactly speech to text and is considered to
> have the
> > > most accurate one out there. Someone just needs to convert the
> tensorflow
> > > code to something more usable.
> > > ___
> > > ffmpeg-devel mailing list
> > > ffmpeg-devel@ffmpeg.org
> > > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> > >
>
> >  Makefile   |1
> >  allfilters.c   |1
> >  vf_hellosubs.c |  513 ++
> +++
> >  3 files changed, 515 insertions(+)
> > 2432f100fddb7ec84e771be8282d4b66e3d1f50a  0001-avfilter-add-hellosubs-
> filter.patch
> > From ac0e09d431ea68aebfaef6e2ed0b450e76d473d9 Mon Sep 17 00:00:00 2001
> > From: ddosvulnerability 
> > Date: Thu, 12 Apr 2018 22:06:43 +0530
> > Subject: [PATCH] avfilter: add hellosubs filter.
> >
> > ---
> >  libavfilter/Makefile   |   1 +
> >  libavfilter/allfilters.c   |   1 +
> >  libavfilter/vf_hellosubs.c | 513 ++
> +++
> >  3 files changed, 515 insertions(+)
> >  create mode 100644 libavfilter/vf_hellosubs.c
> >
> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > index a90ca30..770b1b5 100644
> > --- a/libavfilter/Makefile
> > +++ b/libavfilter/Makefile
> > @@ -331,6 +331,7 @@ OBJS-$(CONFIG_SSIM_FILTER)   +=
> vf_ssim.o framesync.o
> >  OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
> >  OBJS-$(CONFIG_STREAMSELECT_FILTER)   += f_streamselect.o
> framesync.o
> >  OBJS-$(CONFIG_SUBTITLES_FILTER)  += vf_subtitles.o
> > +OBJS-$(CONFIG_HELLOSUBS_FILTER)  += vf_hellosubs.o
> >  OBJS-$(CONFIG_SUPER2XSAI_FILTER) += vf_super2xsai.o
> >  OBJS-$(CONFIG_SWAPRECT_FILTER)   += vf_swaprect.o
> >  OBJS-$(CONFIG_SWAPUV_FILTER) += vf_swapuv.o
> > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> > index 6eac828..a008908 100644
> > --- a/libavfilter/allfilters.c
> > +++ b/libavfilter/allfilters.c
> > @@ -322,6 +322,7 @@ extern AVFilter ff_vf_ssim;
> >  extern AVFilter ff_vf_stereo3d;
> >  extern AVFilter ff_vf_streamselect;
> >  extern AVFilter ff_vf_subtitles;
> > +extern AVFilter ff_vf_hellosubs;
> >  extern AVFilter ff_vf_super2xsai;
> >  extern AVFilter ff_vf_swaprect;
> >  extern AVFilter ff_vf_swapuv;
> > diff --git a/libavfilter/vf_hellosubs.c b/libavfilter/vf_hellosubs.c
> > new file mode 100644
> > index 000..b994050
> > --- /dev/null
> > +++ b/libavfilter/vf_hellosubs.c
> > @@ -0,0 +1,513 @@
> > +/*
> > + * Copyright (c) 2011 Baptiste Coudurier
> > + * Copyright (c) 2011 Stefano Sabatini
> > + * Copyright (c) 2012 Clément Bœsch
> > + *
> > + * This file is part of FFmpeg.
> > + *
> > + * FFmpeg is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * FFmpeg is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A 

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-12 Thread Michael Niedermayer
On Fri, Apr 13, 2018 at 02:13:53AM +0530, ANURAG SINGH IIT BHU wrote:
> Hello,
> I have implemented the reviews mentioned on previous patch, now there is no
> need to provide any subtitle file to the filter, I am attaching the
> complete patch of the hellosubs filter.
> 
> Command to run the filter
> ffmpeg -i  -vf hellosubs= helloout.mp4
> 
> 
> Thanks and regards,
> Anurag Singh.
> 
> 
> ‌
> 
> On Tue, Apr 10, 2018 at 4:55 AM, Rostislav Pehlivanov 
> wrote:
> 
> > On 9 April 2018 at 19:10, Paul B Mahol  wrote:
> >
> > > On 4/9/18, Rostislav Pehlivanov  wrote:
> > > > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
> > > > anurag.singh.ph...@iitbhu.ac.in> wrote:
> > > >
> > > >> This mail is regarding the qualification task assigned to me for the
> > > >> GSOC project
> > > >> in FFmpeg for automatic real-time subtitle generation using speech to
> > > text
> > > >> translation ML model.
> > > >>
> > > >
> > > > i really don't think lavfi is the correct place for such code, nor that
> > > the
> > > > project's repo should contain such code at all.
> > > > This would need to be in another repo and a separate library.
> > >
> > > Why? Are you against ocr filter too?
> > >
> >
> > The OCR filter uses libtesseract so I'm fine with it. Like I said, as long
> > as the actual code to do it is in an external library I don't mind.
> > Mozilla recently released Deep Speech (https://github.com/mozilla/
> > DeepSpeech)
> > which does pretty much exactly speech to text and is considered to have the
> > most accurate one out there. Someone just needs to convert the tensorflow
> > code to something more usable.
> > ___
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> >

>  Makefile   |1 
>  allfilters.c   |1 
>  vf_hellosubs.c |  513 
> +
>  3 files changed, 515 insertions(+)
> 2432f100fddb7ec84e771be8282d4b66e3d1f50a  
> 0001-avfilter-add-hellosubs-filter.patch
> From ac0e09d431ea68aebfaef6e2ed0b450e76d473d9 Mon Sep 17 00:00:00 2001
> From: ddosvulnerability 
> Date: Thu, 12 Apr 2018 22:06:43 +0530
> Subject: [PATCH] avfilter: add hellosubs filter.
> 
> ---
>  libavfilter/Makefile   |   1 +
>  libavfilter/allfilters.c   |   1 +
>  libavfilter/vf_hellosubs.c | 513 
> +
>  3 files changed, 515 insertions(+)
>  create mode 100644 libavfilter/vf_hellosubs.c
> 
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index a90ca30..770b1b5 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -331,6 +331,7 @@ OBJS-$(CONFIG_SSIM_FILTER)   += vf_ssim.o 
> framesync.o
>  OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
>  OBJS-$(CONFIG_STREAMSELECT_FILTER)   += f_streamselect.o framesync.o
>  OBJS-$(CONFIG_SUBTITLES_FILTER)  += vf_subtitles.o
> +OBJS-$(CONFIG_HELLOSUBS_FILTER)  += vf_hellosubs.o
>  OBJS-$(CONFIG_SUPER2XSAI_FILTER) += vf_super2xsai.o
>  OBJS-$(CONFIG_SWAPRECT_FILTER)   += vf_swaprect.o
>  OBJS-$(CONFIG_SWAPUV_FILTER) += vf_swapuv.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 6eac828..a008908 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -322,6 +322,7 @@ extern AVFilter ff_vf_ssim;
>  extern AVFilter ff_vf_stereo3d;
>  extern AVFilter ff_vf_streamselect;
>  extern AVFilter ff_vf_subtitles;
> +extern AVFilter ff_vf_hellosubs;
>  extern AVFilter ff_vf_super2xsai;
>  extern AVFilter ff_vf_swaprect;
>  extern AVFilter ff_vf_swapuv;
> diff --git a/libavfilter/vf_hellosubs.c b/libavfilter/vf_hellosubs.c
> new file mode 100644
> index 000..b994050
> --- /dev/null
> +++ b/libavfilter/vf_hellosubs.c
> @@ -0,0 +1,513 @@
> +/*
> + * Copyright (c) 2011 Baptiste Coudurier
> + * Copyright (c) 2011 Stefano Sabatini
> + * Copyright (c) 2012 Clément Bœsch
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 
> USA
> + */
> +
> +/**
> + * @file
> + * Libass hellosubs burning filter.
> + *
> + 
> + */

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-12 Thread ANURAG SINGH IIT BHU
Hello,
I have implemented the reviews mentioned on the previous patch; now there is no
need to provide any subtitle file to the filter. I am attaching the
complete patch of the hellosubs filter.

Command to run the filter
ffmpeg -i  -vf hellosubs= helloout.mp4


Thanks and regards,
Anurag Singh.



On Tue, Apr 10, 2018 at 4:55 AM, Rostislav Pehlivanov 
wrote:

> On 9 April 2018 at 19:10, Paul B Mahol  wrote:
>
> > On 4/9/18, Rostislav Pehlivanov  wrote:
> > > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
> > > anurag.singh.ph...@iitbhu.ac.in> wrote:
> > >
> > >> This mail is regarding the qualification task assigned to me for the
> > >> GSOC project
> > >> in FFmpeg for automatic real-time subtitle generation using speech to
> > text
> > >> translation ML model.
> > >>
> > >
> > > i really don't think lavfi is the correct place for such code, nor that
> > the
> > > project's repo should contain such code at all.
> > > This would need to be in another repo and a separate library.
> >
> > Why? Are you against ocr filter too?
> >
>
> The OCR filter uses libtesseract so I'm fine with it. Like I said, as long
> as the actual code to do it is in an external library I don't mind.
> Mozilla recently released Deep Speech (https://github.com/mozilla/
> DeepSpeech)
> which does pretty much exactly speech to text and is considered to have the
> most accurate one out there. Someone just needs to convert the tensorflow
> code to something more usable.
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
From ac0e09d431ea68aebfaef6e2ed0b450e76d473d9 Mon Sep 17 00:00:00 2001
From: ddosvulnerability 
Date: Thu, 12 Apr 2018 22:06:43 +0530
Subject: [PATCH] avfilter: add hellosubs filter.

---
 libavfilter/Makefile   |   1 +
 libavfilter/allfilters.c   |   1 +
 libavfilter/vf_hellosubs.c | 513 +
 3 files changed, 515 insertions(+)
 create mode 100644 libavfilter/vf_hellosubs.c

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index a90ca30..770b1b5 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -331,6 +331,7 @@ OBJS-$(CONFIG_SSIM_FILTER)   += vf_ssim.o framesync.o
 OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
 OBJS-$(CONFIG_STREAMSELECT_FILTER)   += f_streamselect.o framesync.o
 OBJS-$(CONFIG_SUBTITLES_FILTER)  += vf_subtitles.o
+OBJS-$(CONFIG_HELLOSUBS_FILTER)  += vf_hellosubs.o
 OBJS-$(CONFIG_SUPER2XSAI_FILTER) += vf_super2xsai.o
 OBJS-$(CONFIG_SWAPRECT_FILTER)   += vf_swaprect.o
 OBJS-$(CONFIG_SWAPUV_FILTER) += vf_swapuv.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 6eac828..a008908 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -322,6 +322,7 @@ extern AVFilter ff_vf_ssim;
 extern AVFilter ff_vf_stereo3d;
 extern AVFilter ff_vf_streamselect;
 extern AVFilter ff_vf_subtitles;
+extern AVFilter ff_vf_hellosubs;
 extern AVFilter ff_vf_super2xsai;
 extern AVFilter ff_vf_swaprect;
 extern AVFilter ff_vf_swapuv;
diff --git a/libavfilter/vf_hellosubs.c b/libavfilter/vf_hellosubs.c
new file mode 100644
index 000..b994050
--- /dev/null
+++ b/libavfilter/vf_hellosubs.c
@@ -0,0 +1,513 @@
+/*
+ * Copyright (c) 2011 Baptiste Coudurier
+ * Copyright (c) 2011 Stefano Sabatini
+ * Copyright (c) 2012 Clément Bœsch
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * Libass hellosubs burning filter.
+ *
+ 
+ */
+
+#include 
+
+#include "config.h"
+#if CONFIG_SUBTITLES_FILTER
+# include "libavcodec/avcodec.h"
+# include "libavformat/avformat.h"
+#endif
+#include "libavutil/avstring.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+#include "libavutil/parseutils.h"
+#include "drawutils.h"
+#include "avfilter.h"
+#include "internal.h"
+#include "formats.h"
+#include "video.h"
+#include 
+#include 
+#include 
+
+typedef struct AssContext {
+const AVClass *class;
+ASS_Library  *library;
+ASS_Renderer *renderer;
+ASS_Track*track;
+char *filename;
+

Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-09 Thread Rostislav Pehlivanov
On 9 April 2018 at 19:10, Paul B Mahol  wrote:

> On 4/9/18, Rostislav Pehlivanov  wrote:
> > On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
> > anurag.singh.ph...@iitbhu.ac.in> wrote:
> >
> >> This mail is regarding the qualification task assigned to me for the
> >> GSOC project
> >> in FFmpeg for automatic real-time subtitle generation using speech to
> text
> >> translation ML model.
> >>
> >
> > i really don't think lavfi is the correct place for such code, nor that
> the
> > project's repo should contain such code at all.
> > This would need to be in another repo and a separate library.
>
> Why? Are you against ocr filter too?
>

The OCR filter uses libtesseract so I'm fine with it. Like I said, as long
as the actual code to do it is in an external library I don't mind.
Mozilla recently released Deep Speech (https://github.com/mozilla/DeepSpeech),
which does pretty much exactly speech to text and is considered the
most accurate one out there. Someone just needs to convert the tensorflow
code to something more usable.
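The lavfi side could then mirror what the OCR filter does: feed frames to the
external library and attach the recognized text as frame metadata. A rough
sketch with a deliberately hypothetical speech-to-text API (STTContext,
stt_feed() and stt_get_text() are placeholders, not real DeepSpeech calls):

    /* Hypothetical audio filter_frame(): pass samples to an external
     * speech-to-text engine and attach any recognized text as frame
     * metadata, in the same spirit as vf_ocr's "lavfi.ocr.text". */
    static int filter_frame(AVFilterLink *inlink, AVFrame *in)
    {
        AVFilterContext *ctx = inlink->dst;
        STTContext *s = ctx->priv;        /* wraps the external library */
        const char *text;

        stt_feed(s->engine, in->data[0], in->nb_samples);  /* placeholder */
        text = stt_get_text(s->engine);                    /* placeholder */
        if (text && *text)
            av_dict_set(&in->metadata, "lavfi.stt.text", text, 0);

        return ff_filter_frame(ctx->outputs[0], in);
    }

A later drawtext or subtitle step could then pick that metadata up, which is
roughly what passing text via metadata refers to elsewhere in this thread.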
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-09 Thread Paul B Mahol
On 4/9/18, Rostislav Pehlivanov  wrote:
> On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
> anurag.singh.ph...@iitbhu.ac.in> wrote:
>
>> This mail is regarding the qualification task assigned to me for the
>> GSOC project
>> in FFmpeg for automatic real-time subtitle generation using speech to text
>> translation ML model.
>>
>
> i really don't think lavfi is the correct place for such code, nor that the
> project's repo should contain such code at all.
> This would need to be in another repo and a separate library.

Why? Are you against ocr filter too?

This is necessary for an A->S filter, once subtitles are supported by lavfi.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-09 Thread Rostislav Pehlivanov
On 9 April 2018 at 03:59, ANURAG SINGH IIT BHU <
anurag.singh.ph...@iitbhu.ac.in> wrote:

> This mail is regarding the qualification task assigned to me for the
> GSOC project
> in FFmpeg for automatic real-time subtitle generation using speech to text
> translation ML model.
>

I really don't think lavfi is the correct place for such code, nor that the
project's repo should contain such code at all.
This would need to be in another repo and a separate library.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSOC 2018 qualification task.

2018-04-09 Thread Michael Niedermayer
Hi

On Mon, Apr 09, 2018 at 08:29:21AM +0530, ANURAG SINGH IIT BHU wrote:
> This mail is regarding the qualification task assigned to me for the
> GSOC project
> in FFmpeg for automatic real-time subtitle generation using speech to text
> translation ML model. My assigned task by Michael sir was writing an
> FFmpeg libavfilter filter which outputs a "Hello World minute: sec"
> subtitle each second.

Yes
the exact task was to have the filter produce subtitle frames/packets and
then have these pass through the filter chain and into ffmpeg,
so that a subsequent filter could render them into the video or
ffmpeg could store them in a file.
This would have required extending libavfilter to pass subtitles through,
at least enough for this specific use case.

The time for this qualification task was very short as you contacted me rather
late.


> 
> I have built a libavfilter filter named "hellosubs" using the existing
> subtitle filter. The hellosubs filter accepts a video file as input, along with
> a subtitle file in any subtitle format supported by FFmpeg with any
> number of entries (>0), i.e. any random subtitle file, as an
> argument and writes a "Hello World minute: sec" subtitle each second on the
> video.

Yes, I understand that given the limited time that was as much as could be
implemented.
I'll review this patch below.


>  Makefile   |1 
>  allfilters.c   |1 
>  vf_hellosubs.c |  463 
> +
>  3 files changed, 465 insertions(+)
> 73061db543833e745b2accee67d9cca3870c1996  
> 0001-avfilter-add-hellosub-filter.patch
> From 38fcf8c80f71a4186f03f33c9272b707390add67 Mon Sep 17 00:00:00 2001
> From: ddosvulnerability 
> Date: Fri, 6 Apr 2018 11:30:17 +0530
> Subject: [PATCH] avfilter: add hellosub filter.
> 
> ---
>  
>  libavfilter/Makefile   |   1 +
>  libavfilter/allfilters.c   |   1 +
>  libavfilter/vf_hellosubs.c | 463 
> +
>  3 files changed, 465 insertions(+)
>  create mode 100644 libavfilter/vf_hellosubs.c
> 
> 
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index a90ca30..770b1b5 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -331,6 +331,7 @@ OBJS-$(CONFIG_SSIM_FILTER)   += vf_ssim.o 
> framesync.o
>  OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
>  OBJS-$(CONFIG_STREAMSELECT_FILTER)   += f_streamselect.o framesync.o
>  OBJS-$(CONFIG_SUBTITLES_FILTER)  += vf_subtitles.o
> +OBJS-$(CONFIG_HELLOSUBS_FILTER)  += vf_hellosubs.o
>  OBJS-$(CONFIG_SUPER2XSAI_FILTER) += vf_super2xsai.o
>  OBJS-$(CONFIG_SWAPRECT_FILTER)   += vf_swaprect.o
>  OBJS-$(CONFIG_SWAPUV_FILTER) += vf_swapuv.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 6eac828..a008908 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -322,6 +322,7 @@ extern AVFilter ff_vf_ssim;
>  extern AVFilter ff_vf_stereo3d;
>  extern AVFilter ff_vf_streamselect;
>  extern AVFilter ff_vf_subtitles;
> +extern AVFilter ff_vf_hellosubs;
>  extern AVFilter ff_vf_super2xsai;
>  extern AVFilter ff_vf_swaprect;
>  extern AVFilter ff_vf_swapuv;
> diff --git a/libavfilter/vf_hellosubs.c b/libavfilter/vf_hellosubs.c
> new file mode 100644
> index 000..7ba3a0e
> --- /dev/null
> +++ b/libavfilter/vf_hellosubs.c
> @@ -0,0 +1,463 @@
> +/*
> + * Copyright (c) 2011 Baptiste Coudurier
> + * Copyright (c) 2011 Stefano Sabatini
> + * Copyright (c) 2012 Clément Bœsch
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 
> USA
> + */
> +
> +/**
> + * @file
> + * Libass hellosubs burning filter.
> + *

> + * @see{http://www.matroska.org/technical/specs/hellosubs/ssa.html}

this looks like a search-and-replace error; this link does not work anymore


> + */
> +
> +#include 
> +
> +#include "config.h"
> +#if CONFIG_SUBTITLES_FILTER
> +# include "libavcodec/avcodec.h"
> +# include "libavformat/avformat.h"
> +#endif
> +#include "libavutil/avstring.h"
> +#include "libavutil/imgutils.h"
> +#include "libavutil/opt.h"
> +#include "libavutil/parseutils.h"
> +#include "drawutils.h"
> +#include "avfilter.h"
> 

Re: [FFmpeg-devel] GSoC 2018

2018-02-12 Thread Thilo Borgmann
Hi,

> yet again, the registration for Google Summer of Code 2018 has opened.

FFmpeg has just been accepted for Google Summer of Code 2018 - thanks for 
everyone's contribution so far!

If you're a mentor this year, you should have got another mail already - if 
not, please ping me on that. The student application period will begin on the 12th
of March.

Thanks,
Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-26 Thread Thilo Borgmann
Am 26.01.18 um 18:15 schrieb Pedro Arthur:
> 2018-01-26 14:26 GMT-02:00 Rostislav Pehlivanov :
> 
>> I think actually writing and reading papers for the filter is pointless -
>> RAVU already exists in shader form, its well known in the community and
>> we'll essentially get it for free once the Vulkan backend is complete.
>> What the task should be would be to make a CPU-only version of the shaders.
>>
>  I didn't know about these filters, I did a quick search and found a github
> repo "mvp-prefilters" but there isn't any documentation explaining the
> internals of it.
> Can you point me some resources where I can find for example which models
> are they using, motivation and maybe some benchmark with other models?
> My initial idea was that the student would evaluate a few SR models, make a
> balance based on quality/performance and pick the best fit.
> If it's already done there is only the porting task, but I guess maybe
> there is newer and unevaluated models or we could improve something.
> 
> Also the qualification task you just put in is already done - there's
>> already a filter to convolve 2 images.
>>
> Should I revert the wiki? or you have an idea for a qualification task?

No need to revert it. Just update the qual task once a better one is found.

Thanks,
Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-26 Thread Pedro Arthur
2018-01-26 14:26 GMT-02:00 Rostislav Pehlivanov :

> I think actually writing and reading papers for the filter is pointless -
> RAVU already exists in shader form, its well known in the community and
> we'll essentially get it for free once the Vulkan backend is complete.
> What the task should be would be to make a CPU-only version of the shaders.
>
 I didn't know about these filters; I did a quick search and found a GitHub
repo "mvp-prefilters", but there isn't any documentation explaining its
internals.
Can you point me to some resources where I can find, for example, which models
they are using, the motivation, and maybe some benchmarks against other models?
My initial idea was that the student would evaluate a few SR models, strike a
balance between quality and performance, and pick the best fit.
If it's already done there is only the porting task, but I guess maybe
there are newer and unevaluated models, or we could improve something.

Also the qualification task you just put in is already done - there's
> already a filter to convolve 2 images.
>
Should I revert the wiki, or do you have an idea for a qualification task?

> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>

Thanks,
Pedro.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-26 Thread Rostislav Pehlivanov
On 15 January 2018 at 22:08, Pedro Arthur  wrote:

> Added an entry in the ideas page for the super resolution project. I'd like
> to know if any one could be co-mentor with me? just in case my studies
> conflicts with mentoring as gsoc overlaps with half of my phd period.
> Also I need to think about a reasonable qualification task, if anyone have
> any idea let me know.
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>

I think actually writing and reading papers for the filter is pointless -
RAVU already exists in shader form, it's well known in the community, and
we'll essentially get it for free once the Vulkan backend is complete.
The task should instead be to make a CPU-only version of the shaders.
Also, the qualification task you just put in is already done - there's
already a filter to convolve 2 images.
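The existing one is the convolve filter, which takes two video inputs; an
untested sketch of how it is invoked:

ffmpeg -i image.png -i kernel.png -filter_complex "[0:v][1:v]convolve" out.png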
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-23 Thread Thilo Borgmann
Am 24.01.18 um 00:38 schrieb Carl Eugen Hoyos:
> 2018-01-23 18:27 GMT+01:00 Pascal Massimino :
> 
>> Anyone interested in a project  around completing the animated-WebP
>> support in ffmpeg?
> 
> Please consider to add the project and yourself as mentor
> in the wiki:
> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018

+1. Please add it to the wiki as soon as it is well defined (incl backup mentor 
and qual task, please).

Thanks,
Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-23 Thread James Almer
On 1/23/2018 2:27 PM, Pascal Massimino wrote:
> Hi,
> 
> sorry for coming late to the game...
> 
> On Tue, Jan 9, 2018 at 10:27 AM, Thilo Borgmann 
> wrote:
> 
>> Hi folks,
>>
>> yet again, the registration for Google Summer of Code 2018 has opened.
>>
>> Like in the previous years, we've setup an ideas page in our wiki:
>> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018
>>
>> Same procedure as every year - we need to define more interesting tasks
>> for students to apply for and fellow developers to mentor them. So please
>> feel free to suggest a project and volunteer as mentor or backup mentor as
>> you see fit. You are welcome to edit the wiki directly or just post your
>> suggestion here.
>>
> 
> Anyone interested in a project  around completing the animated-WebP support
> in ffmpeg?
> Hooking up libwebpmux / libwebpdemux libraries would be helpful.

The WebPAnimEncoder API from libwebpmux is already supported for encoding
animated WebP (the libwebp_anim encoder). The native muxer writes the
packets untouched in that case instead of writing the ANIM headers itself.
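
For illustration, a libwebp-enabled build should already produce an animated
WebP with something like "ffmpeg -i input.gif -c:v libwebp_anim -loop 0
output.webp" (the file names and options here are only an example).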

> There was a request some times ago:
> https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2017-October/218547.html
> 
> I can help with the libwebp interfacing.
> 
> thx
> skal/

Ideally, a GSoC project would add ANIM support to the native decoder
rather than adding a new decoder based on the WebPAnimDecoder API from
libwebpdemux, but at least one of the two should definitely be
implemented.

Thanks.

> 
> 
>>
>> Please keep in mind that a potential student should be able to finish the
>> project successfully in time.
>>
>> GSoC offers a lot of potential gain for FFmpeg as it brings in new
>> contributors that might become active developers in our community
>> afterwards. So dedicating some of our time to mentor as many projects as we
>> should be in our best interest.
>>
>> The application deadline is January 23th which is exactly two weeks from
>> now. Therefore, let's try to define new tasks soon.
>>
>> Thanks,
>> Thilo
>> ___
>> ffmpeg-devel mailing list
>> ffmpeg-devel@ffmpeg.org
>> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-23 Thread Carl Eugen Hoyos
2018-01-23 18:27 GMT+01:00 Pascal Massimino :

> Anyone interested in a project  around completing the animated-WebP
> support in ffmpeg?

Please consider to add the project and yourself as mentor
in the wiki:
https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018

Thank you, Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-23 Thread Pascal Massimino
Hi,

sorry for coming late to the game...

On Tue, Jan 9, 2018 at 10:27 AM, Thilo Borgmann 
wrote:

> Hi folks,
>
> yet again, the registration for Google Summer of Code 2018 has opened.
>
> Like in the previous years, we've setup an ideas page in our wiki:
> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018
>
> Same procedure as every year - we need to define more interesting tasks
> for students to apply for and fellow developers to mentor them. So please
> feel free to suggest a project and volunteer as mentor or backup mentor as
> you see fit. You are welcome to edit the wiki directly or just post your
> suggestion here.
>

Anyone interested in a project around completing the animated-WebP support
in ffmpeg?
Hooking up the libwebpmux / libwebpdemux libraries would be helpful.
There was a request some time ago:
https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2017-October/218547.html

I can help with the libwebp interfacing.

thx
skal/


>
> Please keep in mind that a potential student should be able to finish the
> project successfully in time.
>
> GSoC offers a lot of potential gain for FFmpeg as it brings in new
> contributors that might become active developers in our community
> afterwards. So dedicating some of our time to mentor as many projects as we
> should be in our best interest.
>
> The application deadline is January 23th which is exactly two weeks from
> now. Therefore, let's try to define new tasks soon.
>
> Thanks,
> Thilo
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-22 Thread Thilo Borgmann
Hi again,

> Like in the previous years, we've setup an ideas page in our wiki:
> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018

> The application deadline is January 23th which is exactly two weeks from now. 
> Therefore, let's try to define new tasks soon.

thank you all for your project ideas and for participating in brainstorming
about them! We now have a list of six project ideas from different areas.

The super resolution filter project still lacks a qualification task and a
backup mentor. So if anyone feels their competence is anywhere near that
area, we'd appreciate a backup mentor for it :)

The motion estimation on the GPU project is also looking for a backup mentor.
I might be able to do that; however, I'd prefer not to back up more projects
and would love someone else to volunteer for that.

Please keep in mind that the deadline for finishing up the ideas page is 
_tomorrow_!

Thanks,
Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-16 Thread Michael Niedermayer
On Mon, Jan 15, 2018 at 08:08:33PM -0200, Pedro Arthur wrote:
> Added an entry in the ideas page for the super resolution project. I'd like
> to know if any one could be co-mentor with me? just in case my studies
> conflicts with mentoring as gsoc overlaps with half of my phd period.

If no one else volunteers, then I can try to help with mentoring when you
are unavailable (assuming nothing unexpected keeps me from having time).


> Also I need to think about a reasonable qualification task, if anyone have
> any idea let me know.
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If you think the mosad wants you dead since a long time then you are either
wrong or dead since a long time.


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-15 Thread Pedro Arthur
Added an entry in the ideas page for the super resolution project. I'd like
to know if anyone could be a co-mentor with me, just in case my studies
conflict with mentoring, as GSoC overlaps with half of my PhD period.
Also, I need to think about a reasonable qualification task; if anyone has
any ideas, let me know.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-15 Thread Rostislav Pehlivanov
On 15 January 2018 at 17:31, Thilo Borgmann  wrote:

> Hi,
>
> > yet again, the registration for Google Summer of Code 2018 has opened.
> >
> > Like in the previous years, we've setup an ideas page in our wiki:
> > https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018
> >
> > Same procedure as every year - we need to define more interesting tasks
> for students to apply for and fellow developers to mentor them. So please
> feel free to suggest a project and volunteer as mentor or backup mentor as
> you see fit. You are welcome to edit the wiki directly or just post your
> suggestion here.
> >
> > Please keep in mind that a potential student should be able to finish
> the project successfully in time.
> >
> > GSoC offers a lot of potential gain for FFmpeg as it brings in new
> contributors that might become active developers in our community
> afterwards. So dedicating some of our time to mentor as many projects as we
> should be in our best interest.
> >
> > The application deadline is January 23th which is exactly two weeks from
> now. Therefore, let's try to define new tasks soon.
>
> just as a reminder, up to now we have 3-5 project ideas. We could make use
> of some more and have one week left to define them.
>
> Thanks,
> Thilo
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>


I'll think of something related to the Vulkan hwcontext/filtering
infrastructure during this time. I'd like a motion search on the GPU but
I'll need to review whatever papers there are on this.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-15 Thread Thilo Borgmann
Hi,

> yet again, the registration for Google Summer of Code 2018 has opened.
> 
> Like in the previous years, we've setup an ideas page in our wiki:
> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018
> 
> Same procedure as every year - we need to define more interesting tasks for 
> students to apply for and fellow developers to mentor them. So please feel 
> free to suggest a project and volunteer as mentor or backup mentor as you see 
> fit. You are welcome to edit the wiki directly or just post your suggestion 
> here.
> 
> Please keep in mind that a potential student should be able to finish the 
> project successfully in time.
> 
> GSoC offers a lot of potential gain for FFmpeg as it brings in new 
> contributors that might become active developers in our community afterwards. 
> So dedicating some of our time to mentor as many projects as we should be in 
> our best interest.
> 
> The application deadline is January 23th which is exactly two weeks from now. 
> Therefore, let's try to define new tasks soon.

just as a reminder, up to now we have 3-5 project ideas. We could make use of 
some more and have one week left to define them.

Thanks,
Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-14 Thread Michael Niedermayer
On Sun, Jan 14, 2018 at 09:28:58PM -0200, Pedro Arthur wrote:
> 2018-01-13 23:32 GMT-02:00 Michael Niedermayer :
> 
> > On Fri, Jan 12, 2018 at 11:56:07AM -0200, Pedro Arthur wrote:
> > > 2018-01-12 0:06 GMT-02:00 Michael Niedermayer :
> > >
> > > > if pedro is up to date on this stuff, then maybe he wants to mentor
> > this
> > > >
> > > > either way, links to relevant research, tests, literature are welcome
> > > >
> > > > I can mentor this.
> > >
> > > One of the first NN based method was [1] which has a very simple network
> > > layout, only 3 convolution layers. More complex methods can be found in
> > > [2], [3], [4].
> >
> > > The important question is where we are going to perfom only inference,
> > > using a pre-trained net or we will also train the net. The first is more
> > > easy to do but we don't exploit the content knowledge we have, the second
> > > is more powerful as it adapts to the content but requires training which
> > > may be  expensive, in this case it would be best to use some library to
> > > perform the training.
> >
> > Iam sure our users would want to train the filter in some cases.
> > use cases for different types of content anime vs movies with actors for
> > example likely benefit from seperate training sets.
> >
> > The training code could be seperate from the filter
> >
> > Also another issue is the space requirements that result out of the
> > training.
> > This was an issue with NNEDI previously IIRC
> >
> >
> > >
> > > There are also method which does not use NN like A+ [5] and ANR.
> >
> > How do these perform in relation to the latest NN based solutions ?
> >
> Comparing psnr the first NN method (SRCNN) achieves the same quality but
> evaluation is faster than A+, or better quality at same speed.
> 
> Newer NN methods ([3], [4]) uses "perceptual loss" functions which degrades
> the psnr but the images are much more sharp and appear to have better
> quality than those that maximize psnr.

It seems PSNR does not work as a way to compare these filters, though it's
possible that with video, instead of still images, there could be
instabilities in the details created by the filters.


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Modern terrorism, a quick summary: Need oil, start war with country that
has oil, kill hundread thousand in war. Let country fall into chaos,
be surprised about raise of fundamantalists. Drop more bombs, kill more
people, be surprised about them taking revenge and drop even more bombs
and strip your own citizens of their rights and freedoms. to be continued


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-14 Thread Pedro Arthur
2018-01-13 23:32 GMT-02:00 Michael Niedermayer :

> On Fri, Jan 12, 2018 at 11:56:07AM -0200, Pedro Arthur wrote:
> > 2018-01-12 0:06 GMT-02:00 Michael Niedermayer :
> >
> > > if pedro is up to date on this stuff, then maybe he wants to mentor
> this
> > >
> > > either way, links to relevant research, tests, literature are welcome
> > >
> > > I can mentor this.
> >
> > One of the first NN based method was [1] which has a very simple network
> > layout, only 3 convolution layers. More complex methods can be found in
> > [2], [3], [4].
>
> > The important question is where we are going to perfom only inference,
> > using a pre-trained net or we will also train the net. The first is more
> > easy to do but we don't exploit the content knowledge we have, the second
> > is more powerful as it adapts to the content but requires training which
> > may be  expensive, in this case it would be best to use some library to
> > perform the training.
>
> Iam sure our users would want to train the filter in some cases.
> use cases for different types of content anime vs movies with actors for
> example likely benefit from seperate training sets.
>
> The training code could be seperate from the filter
>
> Also another issue is the space requirements that result out of the
> training.
> This was an issue with NNEDI previously IIRC
>
>
> >
> > There are also method which does not use NN like A+ [5] and ANR.
>
> How do these perform in relation to the latest NN based solutions ?
>
Comparing PSNR, the first NN method (SRCNN) achieves the same quality as A+
with faster evaluation, or better quality at the same speed.

Newer NN methods ([3], [4]) use "perceptual loss" functions, which degrade
PSNR, but the images are much sharper and appear to have better quality than
those that maximize PSNR.
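
For reference, PSNR here is the usual 10*log10(MAX^2/MSE) measure in dB. A
minimal sketch, assuming numpy purely for illustration:

    import numpy as np

    def psnr(ref, test, peak=255.0):
        # Higher PSNR means numerically closer to the reference; as noted
        # above, that does not necessarily mean sharper-looking output.
        ref = ref.astype(np.float64)
        test = test.astype(np.float64)
        mse = np.mean((ref - test) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)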


> Also i think its a great project, you should definitly mentor this if it
> interrests you
>
>
> >
> > [1] - https://arxiv.org/abs/1501.00092
> > [2] - https://arxiv.org/abs/1609.05158
> > [3] - https://arxiv.org/abs/1603.08155
> > [4] - https://arxiv.org/abs/1609.04802
> > [5] -
> > http://www.vision.ee.ethz.ch/~timofter/publications/Timofte-
> ACCV-2014.pdf
> > ___
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Awnsering whenever a program halts or runs forever is
> On a turing machine, in general impossible (turings halting problem).
> On any real computer, always possible as a real computer has a finite
> number
> of states N, and will either halt in less than N cycles or never halt.
>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-13 Thread Michael Niedermayer
On Fri, Jan 12, 2018 at 11:56:07AM -0200, Pedro Arthur wrote:
> 2018-01-12 0:06 GMT-02:00 Michael Niedermayer :
> 
> > if pedro is up to date on this stuff, then maybe he wants to mentor this
> >
> > either way, links to relevant research, tests, literature are welcome
> >
> > I can mentor this.
> 
> One of the first NN based method was [1] which has a very simple network
> layout, only 3 convolution layers. More complex methods can be found in
> [2], [3], [4].

> The important question is where we are going to perfom only inference,
> using a pre-trained net or we will also train the net. The first is more
> easy to do but we don't exploit the content knowledge we have, the second
> is more powerful as it adapts to the content but requires training which
> may be  expensive, in this case it would be best to use some library to
> perform the training.

I am sure our users would want to train the filter in some cases.
Use cases for different types of content (anime vs. movies with actors, for
example) likely benefit from separate training sets.

The training code could be separate from the filter.

Also, another issue is the space requirements that result from the training.
This was an issue with NNEDI previously, IIRC.


> 
> There are also method which does not use NN like A+ [5] and ANR.

How do these perform in relation to the latest NN-based solutions?

Also, I think it's a great project; you should definitely mentor this if it
interests you.


> 
> [1] - https://arxiv.org/abs/1501.00092
> [2] - https://arxiv.org/abs/1609.05158
> [3] - https://arxiv.org/abs/1603.08155
> [4] - https://arxiv.org/abs/1609.04802
> [5] -
> http://www.vision.ee.ethz.ch/~timofter/publications/Timofte-ACCV-2014.pdf
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Awnsering whenever a program halts or runs forever is
On a turing machine, in general impossible (turings halting problem).
On any real computer, always possible as a real computer has a finite number
of states N, and will either halt in less than N cycles or never halt.


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-12 Thread Pedro Arthur
2018-01-12 0:06 GMT-02:00 Michael Niedermayer :

> if pedro is up to date on this stuff, then maybe he wants to mentor this
>
> either way, links to relevant research, tests, literature are welcome
>
> I can mentor this.

One of the first NN-based methods was [1], which has a very simple network
layout: only 3 convolution layers. More complex methods can be found in
[2], [3], [4].
The important question is whether we are going to perform only inference,
using a pre-trained net, or whether we will also train the net. The first is
easier to do, but we don't exploit the content knowledge we have; the second
is more powerful, as it adapts to the content, but requires training, which
may be expensive. In that case it would be best to use some library to
perform the training.
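
For scale, the whole network in [1] fits in a few lines. A minimal sketch,
assuming PyTorch purely for illustration (the actual filter need not use it):

    import torch.nn as nn

    # SRCNN-style layout from [1]: three convolutions with ReLUs in between.
    # Kernel sizes and channel counts (9-1-5, 64/32) follow the paper and are
    # only illustrative here; the net runs on a bicubically upscaled image.
    srcnn = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch extraction
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 32, kernel_size=1),             # non-linear mapping
        nn.ReLU(inplace=True),
        nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
    )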

There are also methods which do not use NNs, like A+ [5] and ANR.

[1] - https://arxiv.org/abs/1501.00092
[2] - https://arxiv.org/abs/1609.05158
[3] - https://arxiv.org/abs/1603.08155
[4] - https://arxiv.org/abs/1609.04802
[5] -
http://www.vision.ee.ethz.ch/~timofter/publications/Timofte-ACCV-2014.pdf
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-12 Thread Thilo Borgmann
Am 12.01.18 um 03:41 schrieb wm4:
> On Fri, 12 Jan 2018 03:06:29 +0100
> Michael Niedermayer  wrote:
> 
>> On Thu, Jan 11, 2018 at 09:17:06PM +0100, Thilo Borgmann wrote:
>>> Am 11.01.18 um 19:45 schrieb Michael Niedermayer:  
 On Thu, Jan 11, 2018 at 02:43:01PM -0200, Pedro Arthur wrote:  
> Hi,
>
> What about a Super Resolution filter? lately there was much research in
> this area, mainly in Single Image Super Resolution.
> I think it would be an interesting experiment, and maybe we could get
> something useful from it.  

 this sounds very interresting, yes  
>>>
>>> +1. If you would like to mentor such a task please feel free to define a 
>>> task on the wiki page.  
>>
>> I would first have to read up on the subject as iam not up to date on
>> this.
>> [...]

I was actually referring to Pedro with that; mentoring this from scratch
would hardly be feasible.

-Thilo

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread wm4
On Fri, 12 Jan 2018 03:06:29 +0100
Michael Niedermayer  wrote:

> On Thu, Jan 11, 2018 at 09:17:06PM +0100, Thilo Borgmann wrote:
> > Am 11.01.18 um 19:45 schrieb Michael Niedermayer:  
> > > On Thu, Jan 11, 2018 at 02:43:01PM -0200, Pedro Arthur wrote:  
> > >> Hi,
> > >>
> > >> What about a Super Resolution filter? lately there was much research in
> > >> this area, mainly in Single Image Super Resolution.
> > >> I think it would be an interesting experiment, and maybe we could get
> > >> something useful from it.  
> > > 
> > > this sounds very interresting, yes  
> > 
> > +1. If you would like to mentor such a task please feel free to define a 
> > task on the wiki page.  
> 
> I would first have to read up on the subject as iam not up to date on
> this.
> Also it would likely need some neural net. Here the question of
> what lib or a native implementation would arrise.
> 
> if pedro is up to date on this stuff, then maybe he wants to mentor this
> 
> either way, links to relevant research, tests, literature are welcome

The standard is apparently NNEDI, which has been implemented in many
contexts. Maybe it's even possible to make the new opencl program
filter use existing OpenCL implementations of it.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Michael Niedermayer
On Thu, Jan 11, 2018 at 09:17:06PM +0100, Thilo Borgmann wrote:
> Am 11.01.18 um 19:45 schrieb Michael Niedermayer:
> > On Thu, Jan 11, 2018 at 02:43:01PM -0200, Pedro Arthur wrote:
> >> Hi,
> >>
> >> What about a Super Resolution filter? lately there was much research in
> >> this area, mainly in Single Image Super Resolution.
> >> I think it would be an interesting experiment, and maybe we could get
> >> something useful from it.
> > 
> > this sounds very interresting, yes
> 
> +1. If you would like to mentor such a task please feel free to define a task 
> on the wiki page.

I would first have to read up on the subject, as I am not up to date on
this.
Also, it would likely need some neural net. Here the question of which lib
to use, or whether to do a native implementation, would arise.

If Pedro is up to date on this stuff, then maybe he wants to mentor this.

Either way, links to relevant research, tests, and literature are welcome.

thx

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Awnsering whenever a program halts or runs forever is
On a turing machine, in general impossible (turings halting problem).
On any real computer, always possible as a real computer has a finite number
of states N, and will either halt in less than N cycles or never halt.


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Thilo Borgmann
Am 11.01.18 um 19:45 schrieb Michael Niedermayer:
> On Thu, Jan 11, 2018 at 02:43:01PM -0200, Pedro Arthur wrote:
>> Hi,
>>
>> What about a Super Resolution filter? lately there was much research in
>> this area, mainly in Single Image Super Resolution.
>> I think it would be an interesting experiment, and maybe we could get
>> something useful from it.
> 
> this sounds very interresting, yes

+1. If you would like to mentor such a task please feel free to define a task 
on the wiki page.

-Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Michael Niedermayer
On Thu, Jan 11, 2018 at 02:43:01PM -0200, Pedro Arthur wrote:
> Hi,
> 
> What about a Super Resolution filter? lately there was much research in
> this area, mainly in Single Image Super Resolution.
> I think it would be an interesting experiment, and maybe we could get
> something useful from it.

This sounds very interesting, yes.

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

In fact, the RIAA has been known to suggest that students drop out
of college or go to community college in order to be able to afford
settlements. -- The RIAA


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Reto Kromer
Thilo Borgmann wrote:

>Should I?

Yes, please! Best regards, Reto

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Thilo Borgmann
Am 11.01.18 um 15:25 schrieb Ronald S. Bultje:
> Hi,
> 
> On Thu, Jan 11, 2018 at 7:32 AM, Thilo Borgmann 
> wrote:
> 
>> Am 11.01.18 um 08:04 schrieb Lou Logan:
>>> On Wed, Jan 10, 2018, at 7:31 PM, Carl Eugen Hoyos wrote:

 Were the GSoC rules concerning documentation projects changed?
>>>
>>> I don't know. I haven't memorized the rules and haven't looked at them
>> for several years.
>>
>> I can't find anything explicitly forbidding such a project. We might reach
>> out
>> to the mentors mailing list just to be sure if we are up to define such a
>> task.
> 
> 
> 1.26 “Project” means an open source coding project to be worked on by a
> Student as an individual. For the avoidance of doubt, Projects do not
> include projects for documentation only.
> 
> https://summerofcode.withgoogle.com/rules/

I did not catch that on the site itself... Well, I would assume it is meant
to explicitly exclude a project just "to write documentation", and not
necessarily to prevent a more automatically generated one, which would
involve an actual need for coding, I guess.

Well, if we really like that project idea, we can reach out and find out
whether this would be acceptable or not. Should I?

-Thilo

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Pedro Arthur
Hi,

What about a Super Resolution filter? Lately there has been much research in
this area, mainly in Single Image Super Resolution.
I think it would be an interesting experiment, and maybe we could get
something useful out of it.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Michael Niedermayer
On Thu, Jan 11, 2018 at 03:38:23PM +0100, Paul B Mahol wrote:
> On 1/11/18, Michael Niedermayer  wrote:
> > On Wed, Jan 10, 2018 at 04:10:39PM -0900, Lou Logan wrote:
> >> On Wed, Jan 10, 2018, at 3:30 PM, Michael Niedermayer wrote:
> >> >
> >> > Would there be interrest in a project to write a QR code / barcode
> >> > search & decode filter ?
> >>
> >> I've personally never scanned a QR code, but maybe I'm an outlier. Seems
> >> more like an OpenCV project to me.
> >
> > The idea would not be so much to use it to scan a single QR code. I was
> > thinking more of running it over a video or a collection of videos and scan
> > the whole video for QR codes, barcodes from products and so on.
> > The idea is for example that if a random product appears in a video, we may
> > have a clear enough view of its barcode or other easy machine readable
> > identifer
> > and can automatically turn it into metadata. This could be quite fun
> > to index and search a large video collection that way.
> > And it also would raise awareness of what easy parsable information people
> > have
> > in their videos
> >
> > OpenCV is BSD, and for basically volunteer work i want to make sure my work
> > and future iterations stay available to the public. BSD does not ensure
> > this.
> > So i probably will not significantly contribute to BSD licensed code unless
> > its paid work.
> 

> So are GPL and LGPL, so yout want to finally leave?
 
Why this hostility?

Debating licenses is really off topic; I had only mentioned BSD as it was
implied by the question about writing this filter in OpenCV.

But if you want my opinion, I like the (L)GPL*; I also like most other free
software licenses, including BSD, but with a limited budget of time, and
there being more licenses and projects under the sky than I could ever
contribute to, one picks what one likes most :)
I can spend X time working on BSD or X time working on (L)GPL-licensed code.
If all else is equal, I pick (L)GPL here. If not all else is equal, then I
might pick BSD; it sure could be...


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I am the wisest man alive, for I know one thing, and that is that I know
nothing. -- Socrates


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Paul B Mahol
On 1/11/18, Michael Niedermayer  wrote:
> On Wed, Jan 10, 2018 at 04:10:39PM -0900, Lou Logan wrote:
>> On Wed, Jan 10, 2018, at 3:30 PM, Michael Niedermayer wrote:
>> >
>> > Would there be interrest in a project to write a QR code / barcode
>> > search & decode filter ?
>>
>> I've personally never scanned a QR code, but maybe I'm an outlier. Seems
>> more like an OpenCV project to me.
>
> The idea would not be so much to use it to scan a single QR code. I was
> thinking more of running it over a video or a collection of videos and scan
> the whole video for QR codes, barcodes from products and so on.
> The idea is for example that if a random product appears in a video, we may
> have a clear enough view of its barcode or other easy machine readable
> identifer
> and can automatically turn it into metadata. This could be quite fun
> to index and search a large video collection that way.
> And it also would raise awareness of what easy parsable information people
> have
> in their videos
>
> OpenCV is BSD, and for basically volunteer work i want to make sure my work
> and future iterations stay available to the public. BSD does not ensure
> this.
> So i probably will not significantly contribute to BSD licensed code unless
> its paid work.

So are GPL and LGPL, so you want to finally leave?
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Michael Niedermayer
On Wed, Jan 10, 2018 at 04:10:39PM -0900, Lou Logan wrote:
> On Wed, Jan 10, 2018, at 3:30 PM, Michael Niedermayer wrote:
> >
> > Would there be interrest in a project to write a QR code / barcode
> > search & decode filter ?
> 
> I've personally never scanned a QR code, but maybe I'm an outlier. Seems more 
> like an OpenCV project to me.

The idea would not be so much to use it to scan a single QR code. I was
thinking more of running it over a video or a collection of videos and
scanning the whole video for QR codes, barcodes from products and so on.
The idea is, for example, that if a random product appears in a video, we may
have a clear enough view of its barcode or other easily machine-readable
identifier and can automatically turn it into metadata. It could be quite fun
to index and search a large video collection that way.
It would also raise awareness of what easily parsable information people have
in their videos.
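
As a rough sketch of what the filter would do per frame (assuming Pillow and
pyzbar purely to illustrate the idea, not as suggested dependencies):

    from PIL import Image
    from pyzbar.pyzbar import decode

    # Scan one decoded video frame (exported as an image) for QR codes and
    # barcodes, and print what a filter could attach as frame metadata.
    frame = Image.open("frame0001.png")
    for symbol in decode(frame):
        print(symbol.type, symbol.data.decode("utf-8", "replace"))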

OpenCV is BSD, and for basically volunteer work I want to make sure my work
and future iterations stay available to the public. BSD does not ensure this.
So I probably will not significantly contribute to BSD-licensed code unless
it's paid work.


> 
> While we are throwing ideas around how about a documentation project? Instead 
> of manually (not) updating the man files like we do now, automatically create 
> and sync the majority, or at least the available options, from the built-in 
> documentation with the ability to manually add additional information, 
> examples, explanations, etc. I didn't think about how to implement something 
> like this but I think Timothy Gu (CC-ing him) may have had some thoughts if I 
> recall correctly (which I may not).
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Into a blind darkness they enter who follow after the Ignorance,
they as if into a greater darkness enter who devote themselves
to the Knowledge alone. -- Isha Upanishad


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Ronald S. Bultje
Hi,

On Thu, Jan 11, 2018 at 7:32 AM, Thilo Borgmann 
wrote:

> Am 11.01.18 um 08:04 schrieb Lou Logan:
> > On Wed, Jan 10, 2018, at 7:31 PM, Carl Eugen Hoyos wrote:
> >>
> >> Were the GSoC rules concerning documentation projects changed?
> >
> > I don't know. I haven't memorized the rules and haven't looked at them
> for several years.
>
> I can't find anything explicitly forbidding such a project. We might reach
> out
> to the mentors mailing list just to be sure if we are up to define such a
> task.


1.26 “Project” means an open source coding project to be worked on by a
Student as an individual. For the avoidance of doubt, Projects do not
include projects for documentation only.

https://summerofcode.withgoogle.com/rules/

Ronald
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Thilo Borgmann
Am 11.01.18 um 09:45 schrieb Paul B Mahol:
> On 1/11/18, Michael Niedermayer  wrote:
>> On Tue, Jan 09, 2018 at 07:27:28PM +0100, Thilo Borgmann wrote:
>>> Hi folks,
>>>
>>> yet again, the registration for Google Summer of Code 2018 has opened.
>>>
>>> Like in the previous years, we've setup an ideas page in our wiki:
>>> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018
>>
>> Would there be interrest in a project to write a QR code / barcode
>> search & decode filter ?
>>
>> If yes, i may be intgerrested to mentor that.
> 
> IIRC There are libs that do that already, at least I know for QR code encoder.
> which could be made as source video filter.

IIRC the coreimage filter source can already generate QR codes on OSX.
However, I like the idea, and a native implementation or a wrapper for these
libs would be welcome in my eyes.

-Thilo

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Thilo Borgmann
Am 11.01.18 um 08:04 schrieb Lou Logan:
> On Wed, Jan 10, 2018, at 7:31 PM, Carl Eugen Hoyos wrote:
>>
>> Were the GSoC rules concerning documentation projects changed?
> 
> I don't know. I haven't memorized the rules and haven't looked at them for 
> several years.

I can't find anything explicitly forbidding such a project. We might reach
out to the mentors mailing list just to be sure, if we want to define such a
task.

Also, the proposed auto-generation of docs from the code is somewhat more
technical than just writing more documentation...

-Thilo

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-11 Thread Paul B Mahol
On 1/11/18, Michael Niedermayer  wrote:
> On Tue, Jan 09, 2018 at 07:27:28PM +0100, Thilo Borgmann wrote:
>> Hi folks,
>>
>> yet again, the registration for Google Summer of Code 2018 has opened.
>>
>> Like in the previous years, we've setup an ideas page in our wiki:
>> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018
>
> Would there be interrest in a project to write a QR code / barcode
> search & decode filter ?
>
> If yes, i may be intgerrested to mentor that.

IIRC there are libs that do that already; at least I know of a QR code
encoder, which could be made into a source video filter.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-10 Thread Lou Logan
On Wed, Jan 10, 2018, at 7:31 PM, Carl Eugen Hoyos wrote:
>
> Were the GSoC rules concerning documentation projects changed?

I don't know. I haven't memorized the rules and haven't looked at them for 
several years.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-10 Thread Carl Eugen Hoyos
2018-01-11 2:10 GMT+01:00 Lou Logan :
> On Wed, Jan 10, 2018, at 3:30 PM, Michael Niedermayer wrote:
>>
>> Would there be interrest in a project to write a QR code / barcode
>> search & decode filter ?
>
> I've personally never scanned a QR code, but maybe I'm an outlier.
> Seems more like an OpenCV project to me.

Any (audio-, video-, or image-related) project that has a mentor and a
student sounds interesting to me.

> While we are throwing ideas around how about a documentation project?

Were the GSoC rules concerning documentation projects changed?

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-10 Thread Lou Logan
On Wed, Jan 10, 2018, at 3:30 PM, Michael Niedermayer wrote:
>
> Would there be interrest in a project to write a QR code / barcode
> search & decode filter ?

I've personally never scanned a QR code, but maybe I'm an outlier. Seems more 
like an OpenCV project to me.

While we are throwing ideas around, how about a documentation project?
Instead of manually (not) updating the man files like we do now,
automatically create and sync the majority of them, or at least the available
options, from the built-in documentation, with the ability to manually add
additional information, examples, explanations, etc. I haven't thought about
how to implement something like this, but I think Timothy Gu (CC-ing him) may
have had some thoughts, if I recall correctly (which I may not).
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC 2018

2018-01-10 Thread Michael Niedermayer
On Tue, Jan 09, 2018 at 07:27:28PM +0100, Thilo Borgmann wrote:
> Hi folks,
> 
> yet again, the registration for Google Summer of Code 2018 has opened.
> 
> Like in the previous years, we've setup an ideas page in our wiki:
> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2018

Would there be interest in a project to write a QR code / barcode
search & decode filter?

If yes, I may be interested in mentoring that.


[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I am the wisest man alive, for I know one thing, and that is that I know
nothing. -- Socrates


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel