Hank,

You are right!

Julian King from Cambridge University, who has built and runs a large streaming 
service, corrected that (wrong) calculation:

>> It isn't a disk, it is an array, but I assume that is what you meant. However 
>> this is not a useful measurement since most of the reads will be sequential, 
>> and you don't get 4k reads across an array, you get block-size*width reads, 
>> which given 4k blocks and a 7+1 RAID5 setup gets you 7*4k reads, so 300/28 or 
>> the equivalent of around 10 IOPS.  Except, as I said, these are mostly 
>> sequential reads, so you get much higher IOPS than from random reads.  In a 
>> straight line our drives can do 110MB/s (theoretical).  So 27,000 IOPS (using 
>> that measurement).  I'm not suggesting that this is a real measurement in 
>> the real world, but it shows that assuming approx 100 IOPS/drive isn't a 
>> helpful figure.

>> Basically the mistake is assuming that quoted IOPS apply to sequential reads. 
>> They don't.  If they did then all disks would be painfully slow.  They apply 
>> much more closely to random seeks.
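As a quick sanity check on Julian's figures, here is a back-of-envelope sketch (assuming his round numbers: 4KB blocks, a 7+1 RAID5 stripe, and 110MB/s theoretical sequential throughput; these are assumptions from the quote above, not benchmarks):

```python
# Back-of-envelope check of Julian's correction (assumed round numbers).
stream_rate_kb = 300       # 1GB/hour video stream ~= 300KB/s
block_kb = 4               # array block size
data_disks = 7             # 7+1 RAID5: seven data disks per stripe

# A read returns a whole stripe (block size * stripe width), not one 4KB block.
stripe_kb = block_kb * data_disks              # 28KB per read
iops_per_stream = stream_rate_kb / stripe_kb   # 300/28 ~= 10.7 IOPS

# Sequential-throughput view: a drive doing 110MB/s in a straight line
# is the equivalent of thousands of 4KB operations per second.
equiv_4k_iops = 110 * 1000 // block_kb         # 27,500 (Julian rounds to ~27,000)

print(round(iops_per_stream, 1))
print(equiv_4k_iops)
```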

So I don't need as expensive a disk array as was suggested to me: a 12*1TB 
disk array from Dell starts from £5.5k.
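The same conclusion falls out of a simple aggregate-bandwidth check (a sketch using the round figures quoted in this thread, not measurements):

```python
# Aggregate-bandwidth view of the 100-stream case, using the figures
# quoted in this thread (assumed round numbers, not measurements).
streams = 100
stream_rate_kb = 300                         # 1GB/hour ~= 300KB/s per viewer

total_kb_per_s = streams * stream_rate_kb    # 30,000KB/s
total_mb_per_s = total_kb_per_s / 1000       # 30MB/s aggregate

# An ordinary SATA disk sustains ~100MB/s sequentially, so even a single
# disk's sequential throughput exceeds the 30MB/s the streams require.
print(total_mb_per_s)
```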

Leslaw


On 3 May 2012, at 21:43, Hank Magnuski wrote:

> There are other factors that go into this calculation:
> 
> 1. This is not a database problem. These are streaming media files. A good 
> disk layout would use blocks much larger than 4KB.
> 2. A modern operating system would do both seek optimization and caching. I 
> don't know exactly how a well-written streaming server works, but I would expect
> Wowza to read ahead 30-60 seconds to pre-buffer the stream flow. That's about 
> 8-16 MB per stream.
> 3. Your 100 viewers are probably looking at the 10 most recent files, not 100 
> different files. So there will be a lot of overlap in disk I/O.
> 4. It's much, much harder to get 100 simultaneous viewers than you think. You 
> need a VERY large potential viewing audience to achieve those numbers. For 
> example, if you have 1000 students doing 2 hours a week of viewing, that 
> averages out to 12 simultaneous streams.
> 
> There have been lots of PhD studies on how to optimize a video server. This 
> discussion could go on and on.
> 
> Hank
> 
> 
> On Thu, May 3, 2012 at 12:54 PM, Leslaw Zieleznik <[email protected]> 
> wrote:
> 
> Good point, certainly true for progressive download. 
> The worry is the streaming case, where chunks of data need to be delivered 
> to many viewers simultaneously, unless the streaming server can handle that?
> 
> Leslaw
> 
> 
> On 3 May 2012, at 20:13, Hank Magnuski wrote:
> 
>> Something doesn't seem right here.
>> 
>> 100 viewers x 1 GB each requires a transfer of 100 GB in less than an hour.
>> 
>> An ordinary SATA disk can easily do 100 MB/second, so transferring 100 GB 
>> will take 1,000 seconds, or about 17 minutes.
>> 
>> What's the disk going to do with the rest of the time?
>> 
>> I'd say £100 is more like it.
>> 
>> Hank
>> 
>> On Thu, May 3, 2012 at 11:09 AM, Leslaw Zieleznik <[email protected]> 
>> wrote:
>> 
>> We had a discussion today about shared volume storage that can also 
>> support streaming.
>> The conclusion was that to support 100 streams (1GB/hour high-resolution 
>> recordings) played at the same time, we would need a high-performance RAID 
>> disk array, which may cost at least £20k, or more likely £50k. The 
>> calculation is shown below.
>> 
>> Therefore my question is whether there is any escape from purchasing such 
>> expensive storage?
>> 
>> Many thanks,
>> Leslaw
>> 
>> And here is the calculation.
>> Take the case of video encoded at 1GB/hour, equal to 300KB/s, stored on
>> a standard disk array with 4KB blocks. A single "viewer" will require
>> the disk to sustain 300/4 = 75 IOPS (I/O operations per second).
>> 100 streams served simultaneously will require 100 times as much I/O, i.e.
>> 7,500 IOPS. A typical 7200rpm disk can sustain at most 150 IOPS (i.e. two 
>> streams); a typical 5-disk RAID5 array (e.g. five 2TB 7200rpm disks, four of 
>> them holding data) would support perhaps 150*4 = 600 IOPS, i.e. just 8 
>> streams!
>> So the solution is a high-performance RAID array.
> _______________________________________________
> Matterhorn-users mailing list
> [email protected]
> http://lists.opencastproject.org/mailman/listinfo/matterhorn-users



