On 2013-01-23 21:22, Wojciech Puchar wrote:
While RAID-Z is already a king of bad performance,
I don't believe RAID-Z is any worse than RAID5. Do you have any actual
measurements to back up your claim?
it is clearly described even in ZFS papers. Both on reads and writes it
gives single
then stored on a different disk. You could think of it as a regular RAID-5
with stripe size of 32768 bytes.
PostgreSQL uses 8192-byte pages that fit evenly into both the ZFS record size
and the column size. Each page access requires only a single disk read. Random I/O
performance here should be 5
Wow! OK. It sounds like you (or someone like you) can answer some of my
burning questions about ZFS.
On Thu, Jan 24, 2013 at 8:12 AM, Adam Nowacki nowa...@platinum.linux.pl wrote:
Let's assume a 5-disk raidz1 vdev with ashift=9 (512-byte sectors).
A worst-case scenario could happen if your
several small files at once, does the transaction use a record, or does
each file need to use a record? Additionally, if small files use
sub-records, when you delete that file, does the sub-record get moved or
just wasted (until the record is completely free)?
writes of small files are always
On 2013-01-24 15:24, Wojciech Puchar wrote:
For me the reliability ZFS offers is far more important than pure
performance.
Except it is on-paper reliability.
This on-paper reliability in practice saved a 20TB pool. See one of my
previous emails. Any other filesystem or hardware/software RAID
On 2013-01-24 15:45, Zaphod Beeblebrox wrote:
Ok... so my question then would be... what of the small files. If I write
several small files at once, does the transaction use a record, or does
each file need to use a record? Additionally, if small files use
sub-records, when you delete that
Ok... here's the existing data:
There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the
average file at 30,127 bytes. But for the full breakdown:
size (bytes) : files
512 : 7758
1024 : 139046
2048 : 1468904
4096 : 325375
8192 : 492399
16384 : 324728
32768 : 263210
65536 : 102407
131072 : 43046
On 01/22/2013 16:03, Ryan Stone wrote:
Offhand, I can't think of why this isn't working. However, there is already a
way to add new DTrace probes to the kernel, and it's quite simple, so you could
try it:
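(Ryan's snippet was dropped from this archive. For reference, FreeBSD's statically-defined tracing support in sys/sdt.h lets a kernel module declare probes with a few macros. The fragment below is only an illustrative sketch, not the original snippet; the provider name `myprov` and function are hypothetical, and the exact macro arity has varied between FreeBSD releases.)

```c
/* Illustrative kernel-module fragment: one SDT probe.
 * Macro arity differs between FreeBSD releases; check sys/sdt.h. */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sdt.h>

SDT_PROVIDER_DEFINE(myprov);                       /* provider: myprov */
SDT_PROBE_DEFINE1(myprov, , myfunc, entry, "int"); /* one int argument */

static void
myfunc(int arg)
{
    /* Fires the probe (cheaply a no-op unless DTrace is attached). */
    SDT_PROBE1(myprov, , myfunc, entry, arg);
}
```

With such a module loaded, `dtrace -l -P myprov` should list the new probe.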
Thank you for this information; this works.
As for my previous approach, there is a bug in gcc
So far I've not lost a single ZFS pool or any data stored.
so far my house wasn't robbed.
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the
average file at 30,127 bytes. But for the full breakdown:
Quite low. What do you store? Here is my real-world production example of
users' mail as well as documents:
/dev/mirror/home1.eli  2788  1545  1243  55%
On Wednesday, January 23, 2013 11:57:33 am Ian Lepore wrote:
On Wed, 2013-01-23 at 08:47 -0800, Matthew Jacob wrote:
On 1/23/2013 7:25 AM, John Baldwin wrote:
On Tuesday, January 22, 2013 5:40:55 pm Sushanth Rai wrote:
Hi,
Does freebsd have some functionality similar to Linux's NMI
On Wednesday, January 23, 2013 4:49:50 pm Mikolaj Golub wrote:
On Wed, Jan 23, 2013 at 11:31:43AM -0500, John Baldwin wrote:
On Wednesday, January 23, 2013 2:25:00 am Mikolaj Golub wrote:
IMHO, after adding procstat_getargv and procstat_getenvv, the usage of
kvm_getargv() and
On Thu, Jan 24, 2013 at 2:26 PM, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:
There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the
average file at 30,127 bytes. But for the full breakdown:
Quite low. What do you store?
Apparently you're not really following
On Jan 24, 2013, at 4:24 PM, Wojciech Puchar woj...@wojtek.tensor.gdynia.pl
wrote:
Except it is on-paper reliability.
This on-paper reliability saved my ass numerous times.
For example, I had one home NAS server machine with a flaky SATA controller
that would not detect one of the four drives