Re: [Wikitech-l] Transcoding Video Contributions in Mediawiki

2009-01-26 Thread Michael Dale
Good points.
I don't think it would be a _bad_ idea to support server-side 
transcoding; it of course gives us more flexibility to keep the original 
file and then target different output formats in the future. It would 
also let us support camera video uploads, etc.

But there are logistical issues. It adds a bit of complexity and cost to 
the server-side setup. Additionally, we are interested in working with 
archive.org, which already offers free transcoding to Ogg from arbitrary 
uploaded formats for freely licensed content. They have 2100+ 
transcode/storage CPU units and petabytes of storage; Commons has on the 
order of 40 TB of storage, and all of Wikimedia's (already busy) servers 
together are around 400 units. It makes sense to encourage long-form 
video contributions to be supported via partnership with archive.org, 
especially once we have them integrated as an archive provider.

Firefogg ideally is not complex for end users. It's a one-click 
extension install; the user does not have to know anything about 
encoding video. We supply the transcode settings via the JavaScript API, 
so the settings are identical to what we would request server-side. 
Using an extension also lets us control the upload system, so we can 
have it upload in 1 MB chunks, for example. That way we can improve 
usability around multi-hundred-megabyte POST uploads by giving progress 
indicators, supporting resumed uploads, etc.
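As a rough sketch of the chunking idea (the helper names and the resume 
protocol here are hypothetical, not Firefogg's actual API):

```python
# Sketch of client-side chunking: split the encoded file into ~1 MB
# pieces so a dropped POST only costs the current chunk, not the whole
# upload. Hypothetical helpers; Firefogg's real API may differ.

CHUNK_SIZE = 1024 * 1024  # 1 MB per POST

def split_into_chunks(data, chunk_size=CHUNK_SIZE):
    """Yield (index, chunk) pairs for sequential upload."""
    for offset in range(0, len(data), chunk_size):
        yield offset // chunk_size, data[offset:offset + chunk_size]

def resume_index(acknowledged, total_chunks):
    """First chunk the server has not acknowledged; resume uploading there."""
    for i in range(total_chunks):
        if i not in acknowledged:
            return i
    return total_chunks  # everything received
```

The point is that after a reset connection, the client re-queries the 
server for which chunks arrived and continues from the first missing one.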

 Would it be worth providing a simple http-upload to a server-side transcoder
 for these relatively small files that are low-quality to begin with?

Yes, I would support that effort. I'm just focused on the Firefogg work 
right now. If you have time to push forward on this, we can try to get 
something set up.

 wouldn't it be more efficient to let
 an infrastructure like the one I created encode _all_ versions used for
 streaming, whether for desktops or mobile devices, from a single
 archival-quality upload? 

Yes, it may be more ideal to just upload the HQ version and have the 
server do the transcode. Your transcode infrastructure could be very 
useful for that. But we will have to see how the logistical issues 
mentioned above play out.


peace,
--michael

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Transcoding Video Contributions in Mediawiki

2009-01-20 Thread Platonides

 Gregory Maxwell wrote:
 This does
 client side transcoding, but as far as the user can tell it's all done
 by the server except no long transmission time for his 14 GB DV
 movie. (although, perhaps a long transcoding time. :) )

Remember to add some message like 'Uploading a low-res version. Keep the
original if you want it full-res for the future.' We don't want anyone
thinking 'I uploaded this 14 GB file; now I can delete it since they keep
a copy' without fully understanding it. Some people have deleted their
photos after uploading them to Commons.


Michael Dale wrote:
 At some talks at a FOMS (Foundations of Open Source Media) meeting we 
 discussed adding support for uploading _while_ transcoding in Firefogg. 
As long as you can transcode faster than you upload...

 We also talked about supporting splitting the encoded file every 
 megabyte or so and re-assembling the pieces on the server.  This way, if 
 your browser's HTTP POST connection gets reset halfway through your 
 upload, it will just resume on the next chunk instead of starting from 
 scratch. (Eventually we could support 
 http://code.google.com/p/gears/wiki/ResumableHttpRequestsProposal )

Resumable uploads are really needed, and by designing not only the server
(as usual) but also the client, we can finally do it.



 If people can operate an FTP client and have the massive bandwidth 
 necessary to upload source material, I highly recommend they upload to 
 archive.org. We will be supporting archive.org as a remote repository, 
 so it will be easy to embed any Ogg piece from there into a Wikipedia 
 article; see:
 http://metavid.org/blog/2008/12/08/archiveorg-ogg-support/
 
 I don't think Wikimedia is targeting (in the immediate future) the 
 multi-petabyte storage and multi-thousand-CPU system necessary to store 
 and transcode original DV and MPEG-2 streams of everything.  I think it 
 makes sense to partner with like-minded organizations for this purpose.
 
 peace,
 --michael

It's sensible, but unless it can be done now, very few people will
bother to upload the file to Commons *and* the full version to
archive.org.
If people could point to an archive.org file instead of uploading, with
proper tutorials, it could become more popular. It doesn't need
transcoding (even though we would be fetching the transcoded file from
archive.org).
We could even store just a reference to archive.org instead of the full
transcoded file, if there are concerns about disk space once people
start auto-linking to archive.org. It wouldn't be too kind to their
servers, but as it's within their mission and they have the resources,
they may be willing to do that.
It would also serve to introduce archive.org to many people who haven't
heard of it yet.








Re: [Wikitech-l] Transcoding Video Contributions in Mediawiki

2009-01-20 Thread Daniel Kinzler
Platonides wrote:
 Remember to add some message like 'Uploading a low-res version. Keep the
 original if you want it full-res for the future.' We don't want anyone
 thinking 'I uploaded this 14GB file. Now I can delete as they keep a
 copy.' without fully understanding it. Some people deleted their photos
 after uploading to commons.

+5 insightful

-- daniel



[Wikitech-l] Transcoding Video Contributions in Mediawiki

2009-01-19 Thread Michael Dale
Mike Baynton asked about some server-side transcoding code he has been 
working on. This seems appropriate for wikitech-l, so I have cc'ed it here.

The current direction is to encourage in-browser client-side 
transcoding. This offloads the costs of server-side transcoding and 
maximizes quality by letting us supply the transcode settings for 
generating Theora files directly from the HD or DV source, instead of 
users uploading an intermediary format at low bandwidth with arbitrary 
encode quality settings.
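For illustration, the kind of server-supplied settings payload this 
implies could look like the following sketch; the keys, values, and 
helper are assumptions, not Firefogg's actual parameter names:

```python
# Hypothetical server-supplied encode profile, handed to the client so
# every contributor produces identical Theora/Vorbis output. Keys and
# values are illustrative only.

WEB_STREAM_PROFILE = {
    "videoCodec": "theora",
    "audioCodec": "vorbis",
    "maxWidth": 400,       # scale the streaming copy down to this width
    "videoBitrate": 544,   # kbit/s
    "audioBitrate": 96,    # kbit/s
}

def profile_as_args(profile):
    """Flatten a profile dict into command-line style --key=value arguments."""
    return ["--%s=%s" % (key, value) for key, value in sorted(profile.items())]
```

Because the same profile is served to every client, the output is 
identical to what a server-side transcoder configured with it would 
produce.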

I see a few potential directions to take with the transcoder work that 
has been done...

1) Firefogg will request two copies: one for archival and one for web 
streaming. Perhaps we will want to create derivatives from the archival 
version (say, for playback on a mobile device), in which case we need to 
re-derive everything from that archival copy.

2) Perhaps the infrastructure you developed may be applicable to 
distributing the tasks of rendering out video sequences. I have been 
working on a web video sequencer/editor that will play back in HTML5 
browsers but will need to be flattened to play in other players and 
browsers, to be put on DVDs, etc. The best way to do this will probably 
be GStreamer's GNonLin library (as used in PiTiVi). More on that 
later...

3) Perhaps we want to support _arbitrary file format_ uploads from 
users not running Firefox in the immediate term? I don't see this as 
hugely important. I think people who use video editing software or who 
regularly deal with digital video assets can likely download/run Firefox 
and handle the one-click install of an extension.

3.5) The special case, of course, is supporting video contributions from 
mobile devices that are limited to hardware-specific video codecs... But 
I think we should wait on this until A) we have the upload API working, 
and B) we have an actual use case; i.e. let's get uploading photos from 
cellphones working first.  This use case will likely include funding, 
given the controlled exposure of applications / services on cell phone 
devices. (Hopefully that controlled-environment issue will change, 
though.)  ...Ultimately we want to push patent-unencumbered formats out 
to the phones too...

peace,
--michael



Re: [Wikitech-l] Transcoding Video Contributions in Mediawiki

2009-01-19 Thread Mike.lifeguard
 The current direction is to encourage in-browser client-side
 transcoding. This offloads the costs of server-side transcoding and
 maximizes quality by letting us supply the transcode settings for
 generating Theora files directly from the HD or DV source, instead of
 users uploading an intermediary format at low bandwidth with arbitrary
 encode quality settings.
Why can we not do server-side transcoding to derive a few files (e.g.
three quality levels, plus an animated GIF thumbnail?) akin to
archive.org? This seems to work nicely for them (and for me, when I
used it). Simply upload your file, and it automatically transcodes into
the appropriate derivative files. This is certainly a lot easier than
asking the user to do it (most have no clue, and even experienced users
are in over their heads). It also ensures that the derived files have a
minimal level of quality (i.e. no transcoding mistakes, which are easy
to make if you don't know what you're doing), saves the user time and
energy, and automates a repetitive task. If we're asking users to
upload several sizes of a video because we can't thumbnail while
streaming it, then instead of making them transcode it several times so
there are a few sizes of the file, WMF servers can do it.
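A minimal sketch of that one-upload, several-derivatives flow; the 
quality profiles and the ffmpeg2theora flags here are assumptions for 
illustration, not archive.org's actual setup:

```python
# Build one transcode command per quality profile from a single source
# upload, archive.org-style. Profiles and flags are illustrative only.

PROFILES = {
    "low":    {"height": 240, "kbps": 256},
    "medium": {"height": 360, "kbps": 512},
    "high":   {"height": 480, "kbps": 1024},
}

def derivative_commands(source):
    """Return a transcode command line for each derivative of `source`."""
    commands = {}
    for name, p in PROFILES.items():
        commands[name] = [
            "ffmpeg2theora", source,
            "--height", str(p["height"]),
            "--videobitrate", str(p["kbps"]),
            "--output", "%s.%s.ogv" % (source, name),
        ]
    return commands
```

A job queue would then run each command against the archival upload, so 
the contributor never touches an encoder.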
Incidentally, archive.org required me to transfer the file via
FTP, which would also be /very/ nice to allow on WMF servers.
Cheers,
Mike

  Mike.lifeguard
  mikelifegu...@fastmail.fm



Re: [Wikitech-l] Transcoding Video Contributions in Mediawiki

2009-01-19 Thread Gregory Maxwell
On Mon, Jan 19, 2009 at 11:05 PM, Mike.lifeguard
mikelifegu...@fastmail.fm wrote:
[snip]
 into the appropriate derivative files. This is certainly a lot
 easier than asking the user to do it (most have no sweet clue,
 and even experienced users are in over their head),

You're missing a major component of this.  The whole thing Mr. Dale is
discussing is the use of the Firefogg Firefox extension.   This does
client-side transcoding, but as far as the user can tell it's all done
by the server, except with no long transmission time for his 14 GB DV
movie. (Although, perhaps a long transcoding time. :) )

There are some other potential benefits to client-side transcoding,
such as being able to use the user's local codecs for decoding.  The
desired end result being: if the user can play it, he can upload it.
(Otherwise you require the server to support decoding every format
ever invented, which may not be realistic.)

[snip]
 and ensures
 that the derived files have a minimal level of quality (ie no
 transcoding mistakes, which is easy to do if you don't know what
 you're doing), saves the user time and energy, and also automates
 a repetitive task. If we're asking users to upload several sizes

Firefogg can handle all this, so long as you make an assumption that
the user isn't totally CPU starved.

I'm not trying to trumpet the extension-based solution here; I'm just
attempting to point out that it was created to address many of these
issues, and it's what is being discussed.

[snip]
 Incidentally, archive.org required me to transfer the file via
 FTP, which would also be /very/ nice to allow on WMF servers.

In your first breath you were speaking of users with no clue being in
over their heads, and now you bring up uploading via FTP.  Think about it.

;)


Re: [Wikitech-l] Transcoding Video Contributions in Mediawiki

2009-01-19 Thread Michael Dale
Gregory Maxwell wrote:
 This does
 client side transcoding, but as far as the user can tell it's all done
 by the server except no long transmission time for his 14 GB DV
 movie. (although, perhaps a long transcoding time. :) )
At some talks at a FOMS (Foundations of Open Source Media) meeting we 
discussed adding support for uploading _while_ transcoding in Firefogg. 
We also talked about supporting splitting the encoded file every 
megabyte or so and re-assembling the pieces on the server.  This way, if 
your browser's HTTP POST connection gets reset halfway through your 
upload, it will just resume on the next chunk instead of starting from 
scratch. (Eventually we could support 
http://code.google.com/p/gears/wiki/ResumableHttpRequestsProposal )
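The server side of that scheme can be sketched as follows; this is 
in-memory and the class is hypothetical, whereas a real implementation 
would persist chunks to disk and tie them to an upload session:

```python
# Sketch of server-side chunk tracking: accept pieces in any order,
# tell the client which chunk to send next, and reassemble at the end.

class ChunkedUpload:
    def __init__(self, total_chunks):
        self.total_chunks = total_chunks
        self.received = {}

    def accept(self, index, data):
        """Store one chunk and return the next index the client should send."""
        self.received[index] = data
        return self.next_expected()

    def next_expected(self):
        """Lowest chunk index not yet received (== total when complete)."""
        for i in range(self.total_chunks):
            if i not in self.received:
                return i
        return self.total_chunks

    def assemble(self):
        """Join all chunks in order once the upload is complete."""
        assert len(self.received) == self.total_chunks, "upload incomplete"
        return b"".join(self.received[i] for i in range(self.total_chunks))
```

After a reset connection, the client asks for `next_expected()` and 
continues from there instead of re-sending the whole file.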

We also discussed adding Dirac support to Firefogg, along with other 
related topics... I will try to do a full Wikimedia-related report back 
from FOMS shortly.

To further respond to Mike.lifeguard's inquiry:

If people can operate an FTP client and have the massive bandwidth 
necessary to upload source material, I highly recommend they upload to 
archive.org. We will be supporting archive.org as a remote repository, 
so it will be easy to embed any Ogg piece from there into a Wikipedia 
article; see:
http://metavid.org/blog/2008/12/08/archiveorg-ogg-support/

I don't think Wikimedia is targeting (in the immediate future) the 
multi-petabyte storage and multi-thousand-CPU system necessary to store 
and transcode original DV and MPEG-2 streams of everything.  I think it 
makes sense to partner with like-minded organizations for this purpose.

peace,
--michael
