Thank you both for your replies.
I am trying to identify the critical points in a system with shared disk volume 
storage and a Wowza server in front. 
It is very likely that we will never reach 100 viewers - I simply derived this 
figure from projected recordings in 5 rooms, with 20 viewers/room playing back 
at peak times.
We currently have hundreds of hours of recordings made over the last 6 years, 
uploaded 'by hand' to QTSS, and I have never seen more than 25 simultaneous 
viewers (we have about 12,000 students).
However, it worries me that once we record on a more regular basis, the 
number of viewers might go up quickly.

I think we need to find disk storage optimized for I/O, and I agree the 
£50k price tag is over the top.
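For reference, the two back-of-the-envelope estimates from the thread below can be sketched in a few lines of Python. All figures (300 KB/s per stream, 4 KB blocks, 150 IOPS per 7200 rpm disk, four data disks in a 5-disk RAID5, 100 MB/s sequential reads) are taken directly from the messages below; as Hank points out, a real server with large blocks, read-ahead and caching sits somewhere between these two extremes.

```python
# Figures from the thread below; purely illustrative.
STREAM_KBPS = 300        # 1 GB/hour high-resolution recording, ~300 KB/s
BLOCK_KB = 4             # block size assumed in the pessimistic estimate
DISK_IOPS = 150          # typical 7200 rpm disk
RAID5_DATA_DISKS = 4     # five-disk RAID5 -> four data disks

# Pessimistic view: every 4 KB block read is a separate random I/O.
iops_per_stream = STREAM_KBPS // BLOCK_KB                               # 75
raid5_streams = (DISK_IOPS * RAID5_DATA_DISKS) // iops_per_stream       # 8

# Optimistic view: purely sequential reads at 100 MB/s,
# 100 viewers x 1 GB each = 100,000 MB to move within the hour.
transfer_seconds = (100 * 1000) / 100                                   # 1000 s

print(f"IOPS per stream: {iops_per_stream}")
print(f"Streams per 5-disk RAID5: {raid5_streams}")
print(f"Sequential transfer time: {transfer_seconds / 60:.1f} minutes")
```

The gap between 8 streams (random 4 KB I/O) and "done in under 17 minutes" (sequential throughput) is exactly the range this thread is arguing over.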

Leslaw

On May 3, 2012, at 9:59 PM, [email protected] wrote:

> And just to add to what Hank said, with 8 capture agents and a dozen courses
> we get at most 48 viewers at once. We only see that during finals/midterms;
> throughout the term we have ten viewers max, with hot spots.
> 
> And I agree that you shouldn't need a £20-50k price tag, but £100 is likely
> low too, since you probably want a commercial storage SAN/NAS instead of a
> build-it-yourself solution.
> 
> Chris
> 
> Quoting Hank Magnuski <[email protected]>:
> 
>> There are other factors that go into this calculation:
>> 
>> 1. This is not a database problem. These are streaming media files. A good
>> disk layout would use blocks much larger than 4KB.
>> 2. A modern operating system would do both seek optimization and caching. I
>> don't know how a well-written streaming server works, but I would expect
>> Wowza to read ahead 30-60 seconds to pre-buffer the stream flow. That's
>> about 8-16 MB per stream.
>> 3. Your 100 viewers are probably looking at the 10 most recent files, not
>> 100 different files. So there will be a lot of overlap in disk I/O.
>> 4. It's much, much harder to get 100 simultaneous viewers than you think.
>> You need a VERY large potential viewing audience to achieve those numbers.
>> For example, if you have 1000 students doing 2 hours a week of viewing,
>> that averages out to 12 simultaneous streams.
>> 
>> There have been lots of PhD studies on how to optimize a video server. This
>> discussion could go on and on.
>> 
>> Hank
>> 
>> 
>> On Thu, May 3, 2012 at 12:54 PM, Leslaw Zieleznik
>> <[email protected]>wrote:
>> 
>>> 
>>> Good point, certainly true for progressive download.
>>> The worry is in case of streaming when the chunks of data need to be
>>> delivered simultaneously, unless the streaming server can do the trick?
>>> 
>>> Leslaw
>>> 
>>> 
>>> On 3 May 2012, at 20:13, Hank Magnuski wrote:
>>> 
>>> Something doesn't seem right here.
>>> 
>>> 100 viewers x 1 GB each requires a transfer of 100 GB in less than an hour.
>>> 
>>> An ordinary SATA disk can easily do 100 MB/second, so transferring 100 GB
>>> will take 1000 seconds, or about 17 minutes.
>>> 
>>> What's the disk going to do with the rest of the time?
>>> 
>>> I'd say £100 is more like it.
>>> 
>>> Hank
>>> 
>>> On Thu, May 3, 2012 at 11:09 AM, Leslaw Zieleznik <
>>> [email protected]> wrote:
>>> 
>>>> 
>>>> We had a discussion today about shared volume storage that can also
>>>> support streaming.
>>>> The conclusion was that to support 100 streams (1 GB/hour high-resolution
>>>> recordings) played at the same time,
>>>> we need a high-performance RAID disk array which may cost at least £20k,
>>>> or more likely £50k. The calculation is shown below.
>>>> 
>>>> Therefore my question is: is there any escape from purchasing such
>>>> expensive storage?
>>>> 
>>>> Many thanks,
>>>> Leslaw
>>>> 
>>>> And here is the calculation.
>>>> Take the case of video encoded at 1GB/hour, equal to 300KB/s, stored on
>>>> a standard disk array with 4KB blocks. A single "viewer" will require
>>>> the disk to sustain 300/4 = 75 IOPS (I/O operations per second).
>>>> 100 streams served simultaneously will require 100 times as much I/O, i.e.
>>>> 7,500 IOPS. A typical 7200rpm disk can sustain at most 150 IOPS (i.e. two
>>>> streams);
>>>> a typical 5-disk RAID5 array (e.g. five 2TB 7200rpm disks) would support
>>>> perhaps 150*4 = 600
>>>> IOPS i.e. just 8 streams!
>>>> So the solution is a high-performance RAID array.
>>>> 
>>> 
>> 
> 
> 
> 
> _______________________________________________
> Matterhorn-users mailing list
> [email protected]
> http://lists.opencastproject.org/mailman/listinfo/matterhorn-users

======================
Dr Leslaw Zieleznik
OBIS (Oxford Brookes Information Solutions)
Oxford Brookes University
Headington
Oxford OX3 0BP
______________________
[email protected]
Tel:  +44 (0)1865 483973
Fax: +44 (0)1865 483073
======================

