I am going to jump in on this thread (a rare thing!)....
Michael Sullivan wrote:
> this undesirable scenario would prob be more likely with the plethora
> of what i call 'orphaned feeds' that some directories store. these
> are typically feeds that services generate....based on tags, user
> uploads, meta-feeds etcetera. they are channels without any true
> parent... that is to say they are not vlog projects managed and created
I am a big fan of the blip.tv general feed, as I get to see a great
cross-section of what's being produced out there in the "participatory
culture". It's not quite as wide a range as, say, YouTube, but it's
still pretty diverse and the quality is quite high. However, if I chose
to auto-download every enclosure in that feed, I would quickly be
overwhelmed and waste lots of bandwidth. I've been thinking a lot about
how you can bridge the gap between the auto-download and no-download
approaches, and basically I think some sort of partial, ahead-of-time
caching could be interesting. I know some commercial streaming video
applications already do this: you figure out which content the viewer
might like to watch, then cache the first minute of it so that playback
starts immediately. Over time, perhaps, the application could build a
model of what to cache, how much, etc. Just ramblings for now, but an
approach that, if implemented right, could enable a great user
experience that is also efficient.
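To make the partial-caching idea a little more concrete, here's a minimal sketch: one helper that asks a server for just the head of a file via a standard HTTP Range header, and a crude stand-in for the "model of what to cache" that spends a cache budget on the most-watched feeds first. Every name and number here is made up for illustration; nothing is taken from a real aggregator.

```python
import urllib.request

def head_request(url, num_bytes):
    """Build an HTTP request for only the first `num_bytes` of a file.
    Servers that support the standard Range header answer 206 Partial
    Content with just that slice, enough for instant playback start."""
    return urllib.request.Request(
        url, headers={"Range": f"bytes=0-{num_bytes - 1}"}
    )

def cache_plan(watch_counts, budget_mb, head_mb=5):
    """Decide which feeds get the first `head_mb` MB of their latest
    enclosure pre-cached: most-watched feeds first, until the cache
    budget runs out. A toy stand-in for a learned caching model."""
    plan = []
    for feed, _ in sorted(watch_counts.items(),
                          key=lambda kv: kv[1], reverse=True):
        if budget_mb < head_mb:
            break
        plan.append(feed)
        budget_mb -= head_mb
    return plan
```

With a 12 MB budget and 5 MB heads, only the two most-watched feeds make the cut; everything else waits until the viewer actually clicks play.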
> by people with an intention, a genre, an actual audience..... produced
> by the content creator(s).... videoblogs ;-) rboom, apperceptions,
> pouringdown, dltq etc. where you have a good idea of the content you
> are going to get and you are subscribed because you generally like it,
> trust it or are at least giving it a chance before you unsubscribe.
So, another idea I've had, and one that works in I/ON (or at least in a
soon-to-be-released version), is that you can subscribe to a bunch of
feeds but only download content that matches certain keywords or other
criteria. That way you can keep an eye on blip, but only download
content regarding "food" or "brooklyn" (two of my favorite topics).
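A bare-bones version of that filtering could look like the sketch below: scan an RSS document and keep only the enclosure URLs whose item titles or descriptions mention a keyword. This is just an illustration of the idea, not I/ON's actual implementation; real criteria could include tags, authors, duration, and so on.

```python
import xml.etree.ElementTree as ET

def matching_enclosures(rss_xml, keywords):
    """Return enclosure URLs from an RSS document whose item title or
    description mentions any of the given keywords (case-insensitive)."""
    keywords = [k.lower() for k in keywords]
    urls = []
    for item in ET.fromstring(rss_xml).iter("item"):
        text = " ".join(
            (item.findtext(tag) or "") for tag in ("title", "description")
        ).lower()
        enclosure = item.find("enclosure")
        if enclosure is not None and any(k in text for k in keywords):
            urls.append(enclosure.get("url"))
    return urls
```

The aggregator would then auto-download only the returned URLs instead of every enclosure in the feed.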
> if people download then filter/discard instead of the opposite.... we
> got a wasted bandwith problem. again, i am not faulting the software.
> just a thought in my head....
Finally, my third thought is that using BitTorrent or a similar
protocol would mean that people could auto-download and then become
nodes themselves, so the bandwidth of the original host wouldn't be
wasted at all. Basically, if we can figure out how to make true p2p
dead-simple for desktop aggregators, then we get the best of both
worlds: quick-start, no-buffering playback, the ability to sync to
mobile players, *and* fewer bandwidth headaches for content
distributors.
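The bandwidth win is easy to see with some back-of-the-envelope arithmetic (this is not a real BitTorrent model, just a rough accounting): if each downloader re-uploads some fraction of a copy to other peers, that supply comes straight off the origin host's bill, which at minimum only has to push one full copy into the swarm.

```python
def origin_upload_mb(file_mb, subscribers, seed_ratio):
    """Rough origin-host upload for one episode. With plain HTTP
    (seed_ratio=0) the host uploads one copy per subscriber; in a
    swarm where each downloader re-uploads `seed_ratio` copies,
    peers absorb that much of the demand. The host always serves
    at least one full copy."""
    total_demand = file_mb * subscribers
    peer_supply = total_demand * seed_ratio
    return max(file_mb, total_demand - peer_supply)
```

For a 100 MB episode with 100 subscribers, plain HTTP costs the host 10,000 MB of upload; if every subscriber seeds back one full copy, the host's share drops to a single 100 MB copy.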
> thoughts on this speculation? i should go fix my leaky faucet now.
>
those are my thoughts. drip drip.
+nathan
Yahoo! Groups Links
<*> To visit your group on the web, go to:
http://groups.yahoo.com/group/videoblogging/