Well, the real reason is that the buffers in the kernel are allocated as
DMA memory. So it's not just a matter of getting hold of a lot of
memory, but of a lot of memory that is contiguous in physical address space.
Or more precisely, a lot of fairly large contiguous buffers.
One could, of
On Sun, Feb 5, 2012 at 10:41 AM, Eli Billauer e...@billauer.co.il wrote:
Besides, I have a faint memory of a limitation on the total RAM allocatable
inside the kernel. Was it 512MB? Has this limitation vanished?
You can reserve more memory for kernel-side processing using
echo size_in_kb
Hi all,
I need a simple command-line program that works as a plain FIFO stream
buffer backed by a huge amount of RAM. Something I can do:
$ fatcat -b 256M /dev/datasource | ./my_shaky_data_sink
The idea is that fatcat reads data whenever available and stores it to
non-swappable RAM. It then pushes
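A minimal userspace sketch of what such a tool could look like, in Python. The `pump` helper and its parameters are mine, purely illustrative; a real fatcat would also pin its buffer into non-swappable RAM with mlock()/mlockall(), which is skipped here since it is Linux-specific and needs privileges:

```python
import threading
from collections import deque

def pump(src, dst, buf_limit=256 * 1024 * 1024, chunk=64 * 1024):
    """Copy src to dst through a large in-memory FIFO.

    A reader thread drains src as fast as it can, so the source only
    blocks when the FIFO itself is full; the main thread writes queued
    chunks to dst at whatever rate the sink accepts.

    A real tool would mlock() this memory to keep it off swap;
    omitted here for portability.
    """
    fifo = deque()
    cond = threading.Condition()
    done = False
    queued = 0

    def reader():
        nonlocal done, queued
        while True:
            data = src.read(chunk)
            with cond:
                if not data:
                    done = True
                    cond.notify()
                    return
                # Throttle the source only when the FIFO is full.
                while queued >= buf_limit:
                    cond.wait()
                fifo.append(data)
                queued += len(data)
                cond.notify()

    t = threading.Thread(target=reader, daemon=True)
    t.start()
    while True:
        with cond:
            while not fifo and not done:
                cond.wait()
            if not fifo and done:
                break
            data = fifo.popleft()
            queued -= len(data)
            cond.notify()
        dst.write(data)
    t.join()
```

Hooking `sys.stdin.buffer` and `sys.stdout.buffer` into `pump()` would turn this into the command-line filter described above.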
On Sat, Feb 4, 2012 at 4:27 PM, Eli Billauer e...@billauer.co.il wrote:
Hi all,
I need a simple command-line program that works as a plain FIFO stream
buffer backed by a huge amount of RAM. Something I can do:
$ fatcat -b 256M /dev/datasource | ./my_shaky_data_sink
The idea is that fatcat reads
Thanks for that one. yum install buffer. How simple.
So I tried it out:
$ dd if=/dev/zero bs=1M count=256 > /dev/null
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.0258608 s, 10.4 GB/s
So the baseline throughput, without any buffer in the pipe, is 10.4 GB/s. Not bad.
$ dd if=/dev/zero bs=1M
On Saturday, 4 February 2012 16:27:27 Eli Billauer wrote:
...
Rationale: The (kernel) device /dev/datasource has limited RAM it can
allocate in kernel space.
...
So if data is loaded into a huge RAM array (what is 256 MB these days?)
I fail to see why the kernel driver would be more