[zfs-discuss] slog/L2ARC on a hard drive and not SSD?
Hi, Out of pure curiosity: what would happen if one tried to use a regular 7200 RPM (or 10K) drive as a slog or L2ARC (or both)? I know these features are designed with SSDs in mind, and I know it's possible to use anything you want as cache. Would ZFS benefit from it? Would performance stay the same, or would it get worse? I'd guess it would slow things down, because ZFS would be reading/writing from a single spindle instead of a multi-disk array, right? I haven't found any articles discussing this, only ones covering SSD-based slogs/caches. Thanks, Hernan -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
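For what it's worth, rotational latency alone hints at why a spinning slog would hurt: every synchronous write has to wait for the platter before it can be acknowledged. A rough back-of-the-envelope sketch (the latency figures are generic assumptions, not measurements of any particular drive):

```python
# Sketch: average rotational latency of a spinning disk is half a
# revolution, which bounds how quickly a single-spindle slog can
# acknowledge synchronous writes (before even counting seek time).

def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half of one full revolution, in ms."""
    return (60.0 / rpm) * 1000 / 2

for rpm in (7200, 10000, 15000):
    print(f"{rpm:>5} RPM: ~{avg_rotational_latency_ms(rpm):.2f} ms per sync write")

# SSDs of the era acknowledged writes in roughly 0.1 ms, i.e. dozens of
# times faster than any of the figures printed above.
```

So a dedicated HDD slog would serialize every sync write behind multi-millisecond mechanical delays, whereas the ZIL spread across the main pool can at least use several spindles.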
Re: [zfs-discuss] Deleting large amounts of files
I have 8GB RAM; arcsz as reported by arcstat.pl is usually 5-7GB. It took about 20-30 minutes to delete the files. Is there a way to see which files have been deduped, so I can copy them again and un-dedupe them? Thanks, Hernan
[zfs-discuss] Deleting large amounts of files
Hello, I think this is the second time this has happened to me. A couple of years ago, I deleted a big (500G) zvol and the machine started to hang some 20 minutes later (out of memory); even rebooting didn't help. But with the great support of Victor Latushkin, who helped me debug the problem over a weekend (aborting the transaction and restarting it, which required some black magic and recompiling ZFS), it worked out.

Now I'm facing a similar problem. I was writing about 20GB (over CIFS) to a filesystem. While that was going on, I deleted some old files, freeing up about 60GB in the process. After Windows was done deleting them (it was instant), I tried to delete another file, which I didn't have permission to. So I SSHed to the machine and removed it manually (pfexec rm file). And that's where the problems started.

First, I noticed the rm wasn't instant. It was taking long (over 5 minutes). I tried Ctrl-C, Ctrl-Z, another SSH session and kill; nothing worked. After a while it died with "Killed". I did a zfs list and noticed the free space hadn't been updated. I tried sync; it also hung. I tried a reboot; it wouldn't go down, I guess because it was waiting for the sync to finish. So I hard-rebooted the machine. When it came back I could access the ZFS pool again. I went to the directory where I had tried to delete the files with rm: the files were still there (they weren't before the reboot). I tried a sync again. Same result (hang). top shows a decreasing amount of free memory, and zpool iostat 5 shows:

rpool       69.4G  79.6G      0      0      0      0
tera        3.12T   513G     63      0   144K      0
----------  -----  -----  -----  -----  -----  -----
rpool       69.4G  79.6G      0      0      0      0
tera        3.12T   513G     63      0   142K      0
----------  -----  -----  -----  -----  -----  -----
rpool       69.4G  79.6G      0      0      0      0
tera        3.12T   513G     62      0   142K      0
----------  -----  -----  -----  -----  -----  -----
rpool       69.4G  79.6G      0      0      0      0
tera        3.12T   513G     64      0   144K      0
----------  -----  -----  -----  -----  -----  -----
rpool       69.4G  79.6G      0      0      0      0
tera        3.12T   513G     65      0   148K      0

Could this be related to the fact that I THINK I enabled deduplication on this pool a while ago (but then disabled it for performance reasons)? What should I do? Do I have to wait for these reads to finish?
Why are those reads so slow anyway? Thanks, Hernan
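One hedged way to read those numbers: if dedup was ever enabled on that data, freeing blocks means updating the dedup table, and a DDT that doesn't fit in RAM turns every free into a random disk read. The steady ~63 reads/s in the iostat output is roughly what one effective spindle of random I/O delivers. A rough estimate of how long that grinds on (the 128K recordsize and the 60GB figure are assumptions taken from the post):

```python
# Sketch: how long freeing ~60GB of dedup-tracked data could take if
# every freed block forces one random DDT read at the observed ~63 reads/s.

RECORDSIZE = 128 * 1024    # assumed default ZFS recordsize (bytes)
FREED_BYTES = 60 * 2**30   # ~60GB freed, per the post
OBSERVED_IOPS = 63         # reads/s seen in `zpool iostat`

blocks_to_free = FREED_BYTES // RECORDSIZE
seconds = blocks_to_free / OBSERVED_IOPS
print(f"{blocks_to_free} blocks / {OBSERVED_IOPS} IOPS ~= {seconds / 3600:.1f} hours")
```

If that estimate is in the right ballpark, the background frees would keep the pool busy for a couple of hours at those read rates, which would be consistent with the deletes surviving the hard reboot and resuming afterwards.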
Re: [zfs-discuss] Scrub extremely slow?
I tested with Bonnie++ and it reports about 200MB/s. The pool version is 22 (SunOS solaris 5.11 snv_134 i86pc i386 i86pc Solaris). I let the scrub run for hours and it was still at around 10MB/s. I also tried to access an iSCSI target on that pool while the scrub was running and it was really, really slow (about 600KB/s!).
Re: [zfs-discuss] Scrub extremely slow?
Too bad then; I can't afford a couple of SSDs for this machine, as it's just a home file server. I'm surprised about the scrub speed, though. This used to be a 4x500GB machine, whose disks I replaced one by one. Resilvering (at about 80% full) took about 6 hours to complete; now the pool is twice the size and the scrub is taking 10x as long. Guess I'll just have to wait for a fix. Thanks!
[zfs-discuss] Scrub extremely slow?
Hello, I'm trying to figure out why I'm getting about 10MB/s scrubs on a pool where I can easily get 100MB/s. It's 4x 1TB SATA2 (nv_sata), raidz, on an Athlon64 with 8GB RAM.

Here's the output while I cat an 8GB file to /dev/null:

r...@solaris:~# zpool iostat 20
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool        123G  26.0G      2      5   206K  38.8K
tera        3.04T   598G     19     43   813K   655K
----------  -----  -----  -----  -----  -----  -----
rpool        123G  26.0G      1      0   199K      0
tera        3.04T   598G    966      0   121M      0
----------  -----  -----  -----  -----  -----  -----
rpool        123G  26.0G      1      8   212K  60.7K
tera        3.04T   598G  1.53K      7   195M  20.9K
----------  -----  -----  -----  -----  -----  -----

and here's what happens while I'm scrubbing the pool:

----------  -----  -----  -----  -----  -----  -----
rpool        123G  26.0G      1      7   106K  78.2K
tera        3.04T   598G     87      8  10.5M  20.7K
----------  -----  -----  -----  -----  -----  -----
rpool        123G  26.0G      0      7  90.3K  81.8K
tera        3.04T   598G     87      7  10.3M  18.1K
----------  -----  -----  -----  -----  -----  -----
rpool        123G  26.0G      1      0   130K      0
tera        3.04T   598G     88      0  10.5M      0
----------  -----  -----  -----  -----  -----  -----

I'd be glad to provide any info you might need. Thanks.
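To put the gap in perspective, here is a rough sketch of expected scrub duration at the two rates (using the ~3.04T allocated figure from the iostat output; this assumes a scrub reads all allocated data once, and that a scrub's seek-heavy I/O on raidz never quite reaches sequential speed anyway):

```python
# Sketch: scrub duration for ~3.04T of allocated data at the observed
# scrub pace vs. the pool's easy sequential pace (figures from the post).

ALLOCATED = 3.04 * 2**40   # ~3.04T allocated on the 'tera' pool
SLOW = 10.5 * 2**20        # ~10.5MB/s observed while scrubbing
FAST = 100 * 2**20         # ~100MB/s the pool manages sequentially

for label, rate in (("at scrub pace", SLOW), ("at sequential pace", FAST)):
    hours = ALLOCATED / rate / 3600
    print(f"{label}: ~{hours:.0f} hours")
```

So the observed rate implies a scrub of three to four days instead of well under a day, which is why an order-of-magnitude slowdown matters here.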
[zfs-discuss] Dedup performance hit
Hello, I tried enabling dedup on a filesystem and moved files into it to take advantage of it. I had about 700GB of files and left the move running for some hours. When I returned, only 70GB had been moved. I checked zpool iostat, and it showed about 8MB/s read/write performance (the old and new ZFS filesystems are in the same pool). Then I disabled dedup for a few seconds and the performance instantly jumped to 80MB/s. It's an Athlon64 X2 machine with 4GB RAM, used only as a file server (4x1TB SATA for ZFS). arcstat.pl shows 2G for arcsz, and top shows 13% CPU during the 8MB/s transfers. Is this normal behavior? Should I always expect such low performance, or is there something wrong with my setup? Thanks in advance, Hernan
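A possible explanation, hedged: the dedup table must be consulted on every write, and a common rule of thumb puts each in-core DDT entry at roughly 320 bytes. A rough estimate of the table size for this workload (the 128K recordsize, mostly-unique data, and the 320-byte figure are assumptions, not measurements):

```python
# Sketch: estimated in-core dedup-table footprint for ~700GB of data,
# assuming 128K records and ~320 bytes per DDT entry (rule of thumb).

DATA_BYTES = 700 * 2**30   # ~700GB being moved into the deduped filesystem
RECORDSIZE = 128 * 1024    # assumed default ZFS recordsize
DDT_ENTRY_BYTES = 320      # rough in-core size per unique block

entries = DATA_BYTES // RECORDSIZE
ddt_bytes = entries * DDT_ENTRY_BYTES
print(f"~{entries} DDT entries -> ~{ddt_bytes / 2**30:.1f} GiB of table")
```

If that estimate is in the right ballpark, the table alone would eat most of a 2G ARC, so DDT lookups spill to disk and every write collapses to random-seek speeds, which fits the 8MB/s vs. 80MB/s observation.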