Re: [vdr] Recommendation for new hd vdr system.

2009-11-19 Thread Lauri Tischler

Artem Makhutov wrote:

Hi,

On Wed, Nov 18, 2009 at 10:40:38AM +0200, Lauri Tischler wrote:

Luca Olivetti wrote:

The Asus P5N7A-VM motherboard seems a good candidate; is its integrated
9300 graphics powerful enough for good deinterlacing?

The P5N7A does not have enough slots :(
Any comments about the M4N78 PRO?


I have an Asus M4N78 Pro. It works pretty nicely and is stable.


Does it also do 1080p and audio over HDMI?

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


Re: [vdr] mdadm software raid5 arrays?

2009-11-19 Thread Pasi Kärkkäinen
On Tue, Nov 17, 2009 at 03:34:59PM +, Steve wrote:
 Alex Betis wrote:
 I don't record much, so I don't worry about speed.
 
 While there's no denying that RAID5 *at best* has a write speed
 equivalent to about 1.3x a single disk, and if you're not careful with
 stride/block settings it can be a lot slower, that's no worse for our
 purposes than, erm, having a single disk in the first place. And reading
 is *always* faster...
 
 Example: I'm not bothered about write speed (only having 3 tuners), so I
 didn't get too carried away setting up my 3-active-disk 3TB RAID5 array,
 accepting all the default values.
 
 Rough speed test:
 #dd if=/dev/zero of=/srv/test/delete.me bs=1M count=1024
 1073741824 bytes (1.1 GB) copied, 13.6778 s, 78.5 MB/s
 

You should use oflag=direct to make it actually write the file to disk..

 #dd if=/srv/test/delete.me of=/dev/null bs=1M count=1024
 1073741824 bytes (1.1 GB) copied, 1.65427 s, 649 MB/s
 

And now the file will most probably come from the Linux kernel cache.
Use iflag=direct to actually read it from the disk.
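
For example (same file and paths as above, purely illustrative):

#dd if=/dev/zero of=/srv/test/delete.me bs=1M count=1024 oflag=direct
#dd if=/srv/test/delete.me of=/dev/null bs=1M count=1024 iflag=direct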

-- Pasi

 Don't know about anyone else's setup, but if I were to record all
 streams from all tuners, there would still be I/O bandwidth left. The
 highest DVB-T channel bandwidth possible appears to be 31.668Mb/s, so
 for my 3 tuners that equates to about 95Mb/s - that's less than 12 MB/s.
 The 78MB/s of my RAID5 doesn't seem to be much of an issue then.
 
 Steve
 
 
 

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


[vdr] Yaepghd record dialog

2009-11-19 Thread Stuart Morris
I have been trying out the yaepghd plugin. It looks very impressive,
but the record dialog appears to be disabled. Does anyone know if this
has been intentionally disabled, or is it only partially implemented?

There has been no development activity on this plugin for about
9 months now.

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


Re: [vdr] mdadm software raid5 arrays?

2009-11-19 Thread Steve

H. Langos wrote:
Depending on the amount of RAM, the cache can screw up your results 
quite badly. For something a little more realistic try: 
  

Good point!


 sync; dd if=/dev/zero of=foo bs=1M count=1024 conv=fsync
  

Interestingly, not much difference:

# sync; dd if=/dev/zero of=/srv/test/delete.me bs=1M count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 14.6112 s, 73.5 MB/s

Steve

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


Re: [vdr] mdadm software raid5 arrays?

2009-11-19 Thread Steve

Pasi Kärkkäinen wrote:

You should use oflag=direct to make it actually write the file to disk..
  
And now most probably the file will come from linux kernel cache. 
Use iflag=direct to read it actually from the disk.
  


However, in the real world data _is_ going to be cached via the kernel 
cache, at least (we hope) a stride's worth minimum. We're talking about 
recording video, aren't we, and that's surely almost always sequentially 
written, not random seeks everywhere?


For completeness, the results are:

#dd if=/dev/zero of=/srv/test/delete.me bs=1M count=1024 oflag=direct
1073741824 bytes (1.1 GB) copied, 25.2477 s, 42.5 MB/s

# dd if=/srv/test/delete.me of=/dev/null bs=1M count=1024 iflag=direct
1073741824 bytes (1.1 GB) copied, 4.92771 s, 218 MB/s

So, still no issue with recording entire transponders; that would use about 
1/4 of the available raw bandwidth even with no buffering.
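
Showing my working, with numbers from earlier in the thread:

3 tuners x 31.668 Mb/s ~= 95 Mb/s ~= 11.9 MB/s to write,
and 11.9 / 42.5 MB/s (the oflag=direct write figure) ~= 28%, i.e. roughly a quarter.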


Interesting stuff, this :)

Steve

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


Re: [vdr] mdadm software raid5 arrays?

2009-11-19 Thread H. Langos
On Thu, Nov 19, 2009 at 01:37:46PM +, Steve wrote:
 Pasi Kärkkäinen wrote:
 You should use oflag=direct to make it actually write the file to disk..
   And now most probably the file will come from linux kernel cache.  
 Use iflag=direct to read it actually from the disk.
   

 However, in the real world data _is_ going to be cached via the kernel  
 cache, at least (we hope) a stride's worth minimum. We're talking about  
 recording video, aren't we, and that's surely almost always sequentially  
 written, not random seeks everywhere?

True. Video is going to be written and read sequentially. However, the
effect of a cache is always a short-term gain. E.g. write caches
mask a slow disk by signaling "ready" to the application while in reality the
kernel is still holding the data in RAM. If you continue to write at a speed
faster than the disk can handle, the cache will fill up, and at some point
your application's write requests will be slowed down to what the
disk can handle.

If, however, your application writes to the same block again before the
cache has been written to disk, then your cache truly has gained you
performance even in the long run, by avoiding writing data that has
already been replaced.


Same thing with read caches. They only help if you are reading the same data
again.

The effect that you _will_ see is that of reading ahead. That helps if
your application reads one block, then another, and the kernel has
already looked ahead and fetched more blocks from the disk than were
originally requested.

This also has the effect of avoiding too many seeks if you are reading from
more than one place on the disk at once... but again, the effect on read
throughput fades away as you read large amounts of data only once.
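
Incidentally, if you want to look at or tune how far the kernel reads ahead
on the array, blockdev is one way (device name just an example):

# blockdev --getra /dev/md0
# blockdev --setra 4096 /dev/md0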

What it boils down to is this:

  Caches improve latency, not throughput.


What read-ahead and write caches will do in this scenario is help you
mask the effects of seeks on your disk, by reading ahead and by aggregating
write requests and sorting them in a way that reduces seek times. In this
regard writing multiple streams is easier than reading. When writing stuff,
you can let your kernel decide to keep some of the data 10 or 15 seconds
in RAM before committing it to disk. However, if you are _reading_ you will
be pretty miffed if your video stalls for 15 seconds because the kernel
found something more interesting to read first :-)
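
If you are curious how long the kernel is allowed to sit on dirty pages (or
want to tune it), the relevant knobs are the vm.dirty_* sysctls - exact
defaults depend on your kernel:

# how often the flusher runs and how old dirty data may get (centiseconds)
sysctl vm.dirty_writeback_centisecs vm.dirty_expire_centisecs
# how much of RAM may be dirty before writers get throttled
sysctl vm.dirty_ratio vm.dirty_background_ratio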

 For completeness, the results are:

 #dd if=/dev/zero of=/srv/test/delete.me bs=1M count=1024 oflag=direct
 1073741824 bytes (1.1 GB) copied, 25.2477 s, 42.5 MB/s

Interesting. The difference between this and the conv=fsync run is that
in the latter the kernel gets to sort all of the write requests more or less
as it wants to. So I guess for recording video, the 73MB/s will be your
bandwidth, while this test here shows the performance that a data-integrity
focused application, e.g. a database, will get from your RAID.

 # dd if=/srv/test/delete.me of=/dev/null bs=1M count=1024 iflag=direct
 1073741824 bytes (1.1 GB) copied, 4.92771 s, 218 MB/s

 So, still no issue with recording entire transponders; that would use about  
 1/4 of the available raw bandwidth even with no buffering.

Well, whether that 1/4 of the bandwidth is used by one client or shared by
multiple clients can make all the difference.

How about running some tests with cstream? I only did a quick apt-cache
search, but it seems like cstream could be used to simulate clients with
various bandwidth needs and to measure the bandwidth that is left.
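
Something along these lines, maybe (untested, flags from a quick look at the
man page, paths just examples):

# simulate one "recording" writing at roughly 1.5 MB/s into the array
cstream -i /dev/zero -o /srv/test/stream1.ts -t 1500000 -n 1000000000 &
# start a few more of these, then see how much write bandwidth is left:
dd if=/dev/zero of=/srv/test/delete.me bs=1M count=1024 oflag=direct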

 Interesting stuff, this :)

Very interesting indeed. Thanks for enriching this discussion with real
data!

cheers
-henrik


___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


Re: [vdr] Recommendation for new hd vdr system.

2009-11-19 Thread Artem Makhutov
Hi,

On Thu, Nov 19, 2009 at 10:59:26AM +0200, Lauri Tischler wrote:
 Artem Makhutov wrote:
 Hi,

 On Wed, Nov 18, 2009 at 10:40:38AM +0200, Lauri Tischler wrote:
 Luca Olivetti wrote:

 The Asus P5N7A-VM motherboard seems a good candidate; is its
 integrated 9300 graphics powerful enough for good deinterlacing?
 The P5N7A does not have enough slots :(
 Any comments about the M4N78 PRO?

 I have an Asus M4N78 Pro. It works pretty nicely and is stable.

 Does it also do 1080p and audio over HDMI?

Yes, both of them work:

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: NVidia [HDA NVidia], device 0: VT1708S Analog [VT1708S Analog]
  Subdevices: 2/2
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
card 0: NVidia [HDA NVidia], device 1: VT1708S Digital [VT1708S Digital]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: NVidia [HDA NVidia], device 3: NVIDIA HDMI [NVIDIA HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
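
Device 3 is the HDMI PCM, so a quick test could be something like this
(device numbers taken from the listing above):

$ speaker-test -D plughw:0,3 -c 2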

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


Re: [vdr] livebuffer patch improvements suggestions.

2009-11-19 Thread Tommi Lundell

Tommi Lundell wrote:

Hello.

1) RAM is cheap nowadays. Can I use RAM to keep the buffer data?
(The system runs from a USB stick, so the only reason to spin up the hard
disk is when I use recordings.)


2) The patch only records the current channel. It would be nice if I could
select from a list which channels have active buffers.
Example: I change channel and notice that a program I want to watch is
already running, so I simply press the jump-backward button and start
watching the program from the beginning.
(2GB can keep about 100 minutes of buffer. If I select 5 different
channels, that gives me a 20-minute buffer on every channel.)




I reply to myself. (Sometimes it would be nice to test things before asking
not-so-smart questions.)

It looks like point 1) is pretty easy to do.
Simply select "use RAM" from the Setup -> Permanent Timeshift menu :P
Another way is to use tmpfs to emulate the hard disk:
# mount -t tmpfs none /var/ramdisk -o size=2G
and run vdr with the switch --buffer=/var/ramdisk
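
If you want the ramdisk to come back after a reboot, an /etc/fstab line like
this should do it (same path and size as above):

tmpfs  /var/ramdisk  tmpfs  size=2G  0  0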

But point 2) is more complicated, and my skills don't go anywhere near what
the implementation needs.

Is it even possible to implement this directly in VDR, or with patches?

Regards: Tommi

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr


Re: [vdr] livebuffer patch improvements suggestions.

2009-11-19 Thread VDR User
On Thu, Nov 19, 2009 at 10:26 AM, Tommi Lundell prel...@kapsi.fi wrote:
 2) The patch only records the current channel. It would be nice if I could
 select from a list which channels have active buffers.
 Example: I change channel and notice that a program I want to watch is
 already running, so I simply press the jump-backward button and start
 watching the program from the beginning.
 (2GB can keep about 100 minutes of buffer. If I select 5 different
 channels, that gives me a 20-minute buffer on every channel.)


 But point 2) is more complicated, and my skills don't go anywhere near what
 the implementation needs.
 Is it even possible to implement this directly in VDR, or with patches?

To buffer any channel, a DVB device must be locked to the same sat
transponder the channel is on. If you want to always buffer 5
channels, and 4 of them are on different transponders, you would need
to have 5 DVB devices installed to still watch live TV: 4 devices dedicated
to those transponders and 1 available to surf any other channel of live TV.

I can't honestly see the benefit of this over just setting timers to
record your favorite shows. It seems like a lot of work & hardware
for something that there are better solutions for, imo. Maybe you
watch an unusually large amount of TV without ever checking the guide
for future shows that might interest you? ;)

___
vdr mailing list
vdr@linuxtv.org
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr