On Thu, May 03, 2007 at 11:43:49AM -0500, [EMAIL PROTECTED] wrote:
I think this may be a premature leap -- it is still undetermined whether we are
running up against an as-yet-unknown bug in the kernel implementation of gzip
used for this compression type. From my understanding the gzip code has been
Adam Leventhal wrote:
On Wed, May 09, 2007 at 11:52:06AM +0100, Darren J Moffat wrote:
Can you give some more info on what these problems are.
I was thinking of this bug:
6460622 zio_nowait() doesn't live up to its name
Which I was surprised to find was fixed by Eric in build 59.
Adam
Hello Ian,
Thursday, May 3, 2007, 10:20:20 PM, you wrote:
IC Roch Bourbonnais wrote:
with recent bits ZFS compression is now handled concurrently with many
CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
IC Would changing
Ian Collins writes:
Roch Bourbonnais wrote:
with recent bits ZFS compression is now handled concurrently with many
CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
Would changing (selecting a smaller)
The reason you are busy computing SHA1 hashes is that you are using
/dev/urandom. The implementation of drv/random uses
SHA1 for mixing,
actually strictly speaking it is the swrand provider that does that part.
Ahh, ok.
So, instead of using dd reading from /dev/urandom all the time,
I've now
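The message is cut off here, but one way to take swrand's SHA1 mixing out of the picture (an assumption about the intent, not necessarily what the poster did) is to draw from /dev/urandom only once, then rewrite copies of that payload; the write load then exercises compression without the random driver:

```shell
# Generate the random payload once; later writes reuse it, so
# /dev/urandom (and its SHA1 mixing) is only touched one time.
# /tank/fs is a hypothetical compressed ZFS dataset.
dd if=/dev/urandom of=/var/tmp/payload bs=128k count=1024 2>/dev/null
for i in 1 2 3 4; do
  dd if=/var/tmp/payload of=/tank/fs/copy.$i bs=128k 2>/dev/null
done
```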
I'm not quite sure what this test should show?
Compressing random data is the perfect way to generate heat.
After all, compression relies on the input entropy being low.
But good random generators are characterized by the opposite: output
entropy being high.
Even a good compressor, if
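The entropy point is easy to demonstrate (a standalone sketch, not from the thread; paths and sizes are arbitrary): gzip a megabyte of random bytes and a megabyte of zeros and compare the output sizes.

```shell
# High-entropy input barely compresses; low-entropy input collapses.
dd if=/dev/urandom of=/tmp/rand.bin bs=1024 count=1024 2>/dev/null
dd if=/dev/zero    of=/tmp/zero.bin bs=1024 count=1024 2>/dev/null

rand_gz=$(gzip -c /tmp/rand.bin | wc -c)
zero_gz=$(gzip -c /tmp/zero.bin | wc -c)

echo "random: $rand_gz bytes compressed"   # slightly larger than 1 MB
echo "zeros:  $zero_gz bytes compressed"   # a few KB at most
```

This is why a dd-from-/dev/urandom workload is a worst case for a compressed filesystem: every block burns the full compression cost for no space savings.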
On 5/3/07, Frank Hofmann [EMAIL PROTECTED] wrote:
I'm not quite sure what this test should show?
I didn't try the test myself... but I think what it shows is a
possible problem: turning on compression can hang a machine.
Rayson
Compressing random data is the perfect way to generate
with recent bits ZFS compression is now handled concurrently with
many CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
So the observed pauses should be consistent with that of a load
generating high system time.
The
[EMAIL PROTECTED] wrote on 05/03/2007 11:35:24 AM:
with recent bits ZFS compression is now handled concurrently with
many CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
So the observed pauses should be consistent
Roch Bourbonnais wrote:
with recent bits ZFS compression is now handled concurrently with many
CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
Would changing (selecting a smaller) filesystem record size have any effect?
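For reference, recordsize is a per-dataset property, so shrinking it changes the unit ZFS compresses (an admin sketch; the dataset name is hypothetical, and the new size only applies to newly written blocks):

```shell
# Hypothetical dataset. recordsize must be a power of two;
# smaller records mean smaller, more numerous compression units.
zfs set recordsize=32K tank/fs
zfs get recordsize,compression tank/fs
```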
So
A couple more questions here.
[mpstat]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
00 0 3109 3616 316 1965 17 48 45 2450 85 0 15
10 0 3127 3797 592 2174 17 63 46 1760 84 0 15