My ARC is ~3GB. I'm doing a test that copies 10GB of data to a volume where the blocks should dedupe 100% with existing data.
First time through, the test runs at <5MB/sec and seems to average a 10-30% ARC *miss* rate, with <400 ARC reads/sec. When things are working at disk bandwidth, I'm getting 3-5% ARC misses and up to 7k ARC reads/sec. If I do a "recv" of a small dataset, then immediately destroy it and replay the same stream, I get "in-core" dedupe performance, and it's truly amazing.

Does anyone know how big the dedupe tables are, and whether they can be given some priority or prefetched into the ARC? I think I have enough RAM to make this work.

mike

On Thu, Dec 17, 2009 at 4:12 PM, Brandon High <bh...@freaks.com> wrote:
> It looks like the kernel is using a lot of memory, which may be part
> of the performance problem. The ARC has shrunk to 1G, and the kernel
> is using up over 5G.
>
> I'm doing a send|receive of 683G of data. I started it last night
> around 1am, and as of right now it's only sent 450GB. That's about
> 8.5MB/sec.
>
> Are there any other stats, or dtrace scripts I can look at to
> determine what's happening?
>
> bh...@basestar:~$ pfexec mdb -k
> Loading modules: [ unix genunix specfs dtrace mac cpu.generic
> cpu_ms.AuthenticAMD.15 uppc pcplusmp rootnex scsi_vhci zfs sata sd
> sockfs ip hook neti sctp arp usba fctl random crypto cpc fcip smbsrv
> nfs lofs ufs logindmux ptm sppp ipc ]
> > ::memstat
> Page Summary                Pages                MB  %Tot
> ------------     ----------------  ----------------  ----
> Kernel                    1405991              5492   67%
> ZFS File Data              223137               871   11%
> Anon                       396743              1549   19%
> Exec and libs                1936                 7    0%
> Page cache                   5221                20    0%
> Free (cachelist)             9181                35    0%
> Free (freelist)             52685               205    3%
>
> Total                     2094894              8183
> Physical                  2094893              8183
>
> bh...@basestar:~$ arcstat.pl 5 3
>     Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
> 16:05:33  204M    6M      3    3M    5    3M    2    3M    1     1G    1G
> 16:05:38   562   101     18    97   17     4   23    97   17     1G    1G
> 16:05:43    1K   709     39    71    6   637   94    79   15     1G    1G
>
> -B
>
> --
> Brandon High : bh...@freaks.com
> Always try to do things in chronological order; it's less confusing that way.
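
For what it's worth, zdb can answer the size question directly. A rough sketch, not gospel: "tank" is a placeholder pool name, the -D output format varies between builds, and ~320 bytes per in-core DDT entry is only a commonly cited approximation:

  # Summarize the dedup table: entry counts, on-disk and in-core entry sizes
  $ pfexec zdb -DD tank

  # Back-of-the-envelope: in-core DDT ~= entries * ~320 bytes.
  # 10GB of 128K blocks = 81,920 entries => ~25MB of table,
  # which should fit comfortably in a ~3GB ARC once it has been read in.

The slow first pass fits that picture: every block written has to look up its checksum in an uncached DDT, so you pay a random read per block until the table's working set is resident, after which you get the "in-core" behavior described above.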
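
On the "any other stats" question, two more things worth watching while the receive runs. A sketch, assuming a recent OpenSolaris build; the kstat names should be stable, but the arc-miss SDT probe is my reading of arc.c, so verify it exists on your kernel before relying on it:

  # Raw ARC counters: hits, misses, current size, and target size (c)
  $ kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses \
        zfs:0:arcstats:size zfs:0:arcstats:c

  # Count ARC misses in 5-second buckets via the sdt provider
  $ pfexec dtrace -n 'sdt:::arc-miss { @misses = count(); }
      tick-5sec { printa(@misses); trunc(@misses); }'

Also note that ::memstat charges ARC metadata (including cached DDT blocks) to "Kernel" rather than "ZFS File Data", so a large Kernel line with dedup enabled isn't necessarily a leak; ::kmastat in mdb -k should show which kmem caches are holding the memory.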