Re: [zfs-discuss] Basic ZFS Questions + Initial Setup Recommendation

2012-03-21 Thread Jim Klimov
)... //Jim

Re: [zfs-discuss] Basic ZFS Questions + Initial Setup Recommendation

2012-03-22 Thread Jim Klimov
2012-03-21 22:53, Richard Elling wrote: ... This is why a single vdev's random-read performance is equivalent to the random-read performance of a single drive. It is not as bad as that. The actual worst case number for a HDD with zfs_vdev_max_pending of one is: average IOPS * ((D+P) / D)
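For a concrete feel for that formula (the numbers below are purely illustrative, not from the original post): an 8-disk raidz2 vdev has D=6 and P=2, so member drives averaging ~100 random-read IOPS would give roughly:
  # echo "scale=1; 100 * (6+2) / 6" | bc    (about 133 IOPS worst case for the whole vdev)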

Re: [zfs-discuss] Basic ZFS Questions + Initial Setup Recommendation

2012-03-22 Thread Jim Klimov
(not shared with other files' BPs), inflated by ditto copies=2 and raidz/mirror redundancy. Right/wrong? Thanks, //Jim

Re: [zfs-discuss] Good tower server for around 1,250 USD?

2012-03-24 Thread Jim Klimov
in illumos, their known-good HCL might be quite relevant for OpenIndiana users in general, I think :) Good luck, really! //Jim

Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-25 Thread Jim Mauro
, before going down the lock path… This assumes that most or all of the CPU utilization is %sys. If it's %usr, we take a different approach. Thanks /jim On Mar 25, 2012, at 1:29 AM, Aubrey Li wrote: Hi, I'm migrating a webserver (apache+php) from RHEL to Solaris. During the stress testing
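As a hedged illustration of that first check (the intervals and tools are just the usual suspects, not anything from the original post):
  # mpstat 5 2        (per-CPU usr/sys/idle split)
  # prstat -mL 5 2    (per-thread microstates: USR vs SYS vs LCK)
If SYS dominates, lockstat/dtrace on the kernel side makes sense; if USR dominates, profile the application first.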

Re: [zfs-discuss] volblocksize for VMware VMFS-5

2012-03-26 Thread Jim Klimov
heard that VMware has some smallish limit on the number of NFS connections, but 30 should be bearable... HTH, //Jim Klimov

Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-26 Thread Jim Klimov
- if it is normal or not). Also I'm not sure if tmpfs gets the benefits of caching, and ZFS ARC cache can consume lots of RAM and thus push tmpfs out to swap. As a random guess, try pointing PHP tmp directory to /var/tmp (backed by zfs) and see if any behaviors change? Good luck, //Jim
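A minimal sketch of that suggestion, assuming a stock PHP 5.x setup (directive names should be verified against the actual build):
  upload_tmp_dir = /var/tmp
  session.save_path = /var/tmp
in php.ini, and/or exporting TMPDIR=/var/tmp in the web server's startup environment.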

Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-26 Thread Jim Mauro
THE PROBLEM - Linux is 15% sys, 55% usr; Solaris is 30% sys, 70% usr - running the same workload, doing the same amount of work, delivering the same level of performance. Please validate that problem statement. On Mar 25, 2012, at 9:51 PM, Aubrey Li wrote: On Mon, Mar 26, 2012 at 4:18 AM, Jim

Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-26 Thread Jim Klimov
As a random guess, try pointing PHP tmp directory to /var/tmp (backed by zfs) and see if any behaviors change? Good luck, //Jim Thanks for your suggestions. Actually the default PHP tmp directory was /var/tmp, and I changed /var/tmp to /tmp. This reduced zfs root lock contention

Re: [zfs-discuss] kernel panic during zfs import

2012-03-27 Thread Jim Klimov
stacktrace should tell you in which functions you should start looking... Good luck, //Jim
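For example (dump file numbers and paths are the usual defaults, adjust to taste):
  # cd /var/crash/`hostname`
  # mdb unix.0 vmcore.0
  > ::status     (panic string and dump summary)
  > ::msgbuf     (console messages leading up to the panic)
  > ::stack      (the panic thread's stack - the functions to start looking at)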

Re: [zfs-discuss] no valid replicas

2012-04-05 Thread Jim Klimov
the stats took zdb at least 40 minutes. HTH, //Jim

Re: [zfs-discuss] no valid replicas

2012-04-05 Thread Jim Klimov
2012-04-05 16:04, Jim Klimov wrote: 2012-04-04 23:27, Jan-Aage Frydenbø-Bruvoll wrote: Which OS and release? This is OpenIndiana oi_148, ZFS pool version 28. There was a bug in some releases circa 2010 that you might be hitting. It is harmless, but annoying. Ok - what bug is this, how

Re: [zfs-discuss] What's wrong with LSI 3081 (1068) + expander + (bad) SATA disk?

2012-04-07 Thread Jim Klimov
into RAID and can do miracles with SATA disks. Reality has shown to many of us that many SATA implementations existing in the wild should be avoided... so we're back to good vendors' higher-end expensive SATAs, or better yet SAS drives. Not inexpensive anymore again :( Thanks, //Jim

Re: [zfs-discuss] What's wrong with LSI 3081 (1068) + expander + (bad) SATA disk?

2012-04-08 Thread Jim Klimov
on the project roadmap)? Just a thought... //Jim

Re: [zfs-discuss] Drive upgrades

2012-04-17 Thread Jim Klimov
2012-04-17 5:15, Richard Elling wrote: For the archives... Write-back cache enablement is toxic for file systems that do not issue cache flush commands, such as Solaris' UFS. In the early days of ZFS, on Solaris 10 or before ZFS was bootable on OpenSolaris, it was not uncommon to have ZFS and

Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Jim Klimov
2012-04-17 14:47, Matt Keenan wrote: - or is it possible that one of the devices being a USB device is causing the failure? I don't know. Might be, I've got little experience with those besides LiveUSB imagery ;) My reason for splitting the pool was so I could attach the clean USB rpool to
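For reference, the split itself is a one-liner on mirrored pools (pool names here are examples):
  # zpool split rpool rpool2
  # zpool import rpool2     (later, possibly on another machine)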

Re: [zfs-discuss] Aaron Toponce: Install ZFS on Debian GNU/Linux

2012-04-18 Thread Jim Klimov
- impressive and interesting, //Jim

Re: [zfs-discuss] Aaron Toponce: Install ZFS on Debian GNU/Linux

2012-04-18 Thread Jim Klimov
got legally leaked into Linux, and if they were there, then they might be legally included into other ZFS source code projects. I hope this subject is closed for now ;( without personal gripes ;) //Jim

Re: [zfs-discuss] Two disks giving errors in a raidz pool, advice needed

2012-04-23 Thread Jim Klimov
is not there, is it a worthy RFE, maybe for GSoC? //Jim

Re: [zfs-discuss] Two disks giving errors in a raidz pool, advice needed

2012-04-24 Thread Jim Klimov
. If only ZFS could queue scrubbing reads more linearly... ;) //Jim

Re: [zfs-discuss] cluster vs nfs

2012-04-26 Thread Jim Klimov
On 2012-04-26 2:20, Ian Collins wrote: On 04/26/12 09:54 AM, Bob Friesenhahn wrote: On Wed, 25 Apr 2012, Rich Teer wrote: Perhaps I'm being overly simplistic, but in this scenario, what would prevent one from having, on a single file server, /exports/nodes/node[0-15], and then having each node

Re: [zfs-discuss] cluster vs nfs

2012-04-26 Thread Jim Klimov
to access it. Their actual worksets would be stored locally in the cachefs backing stores on each workstation, and not abuse networking traffic and the fileserver until there are some writes to be replicated into central storage. They would have approximately one common share to mount ;) //Jim
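A sketch of such a cachefs-backed mount, on releases that still ship cachefs (server, share and cache paths are examples):
  # cfsadmin -c /var/cachefs/cache0
  # mount -F cachefs -o backfstype=nfs,cachedir=/var/cachefs/cache0 server:/export/common /mnt/common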

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-26 Thread Jim Klimov
operations budget (and further planning, etc.) has only been hit for 1Tb. HTH, //Jim

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-05-02 Thread Jim Klimov
be sparse, compressible, and/or not unique), but in the end that's unpredictable from the start. HTH, //Jim

Re: [zfs-discuss] autoexpand in a physical disk with 2 zpool

2012-05-03 Thread Jim Klimov
into rpool), destroy opt, relabel the Solaris slices with format, expand rpool, create a new opt. You should back it up anyway before such dangerous experiments. But for the sheer excitement of the experiment, you can give the dd-series a try, and tell us how it goes. HTH, //Jim
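For the non-dd route, once the slice has been relabeled larger, the pool can claim the space like this (device name is an example):
  # zpool set autoexpand=on rpool
  # zpool online -e rpool c0t0d0s0    (-e expands the vdev onto the enlarged slice)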

Re: [zfs-discuss] slow zfs send

2012-05-07 Thread Jim Klimov
mean heavy fragmentation and lots of random small IOs... HTH, //Jim

[zfs-discuss] Resilver restarting several times

2012-05-11 Thread Jim Klimov
zfs_resilver_min_time_ms/W0t2 | mdb -kw mdb: failed to dereference symbol: unknown symbol name Thanks for any ideas, //Jim Klimov
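For builds where the symbol does exist, a safer sequence is to read it before writing it (the target value below is only an illustration; the default is reportedly 3000 ms on later builds):
  # echo "zfs_resilver_min_time_ms/D" | mdb -k      (prints the current value, or errors out if absent)
  # echo "zfs_resilver_min_time_ms/W0t1000" | mdb -kw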

Re: [zfs-discuss] Resilver restarting several times

2012-05-11 Thread Jim Klimov
2012-05-11 17:18, Bob Friesenhahn wrote: On Fri, 11 May 2012, Jim Klimov wrote: Hello all, SHORT VERSION: What conditions can cause the reset of the resilvering process? My lost-and-found disk can't get back into the pool because of resilvers restarting... I recall that with sufficiently

Re: [zfs-discuss] Resilver restarting several times

2012-05-11 Thread Jim Klimov
2012-05-11 17:18, Bob Friesenhahn wrote: On Fri, 11 May 2012, Jim Klimov wrote: Hello all, SHORT VERSION: What conditions can cause the reset of the resilvering process? My lost-and-found disk can't get back into the pool because of resilvers restarting... I recall that with sufficiently

Re: [zfs-discuss] Resilver restarting several times

2012-05-11 Thread Jim Klimov
= 58650 [user daemon on thumper] 2012-05-12.02:45:56 [internal snapshot txg:91071280] dataset = 58652 [user jim on thumper] 2012-05-12.02:46:15 [internal snapshot txg:91071283] dataset = 58654 [user daemon on thumper] 2012-05-12.02:53:01 [internal pool scrub done txg:91071298] complete=0 [user
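Those internal events come out of the extended history views, e.g. (pool name is an example):
  # zpool history -il pond | tail -40    (-i adds internal events, -l the long format with user/host)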

Re: [zfs-discuss] Resilver restarting several times

2012-05-11 Thread Jim Klimov
2012-05-12 4:26, Jim Klimov wrote: Wonder if things would get better or worse if I kick one of the drives (i.e. hotspare c5t6d0) out of the equation:
  raidz1      ONLINE  0 0 0
    c0t1d0    ONLINE  0 0 0
    spare     ONLINE  0 0 0
      c1t2d0  ONLINE  0 0 0  6.72G resilvered
      c5t6d0  ONLINE  0 0 0
    c4t3d0    ONLINE  0 0 0
    c6t5d0
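If kicking the spare back out is the chosen experiment, it is a one-liner (pool name is an example):
  # zpool detach pond c5t6d0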

Re: [zfs-discuss] Resilver restarting several times

2012-05-12 Thread Jim Klimov
2012-05-11 14:22, Jim Klimov wrote: What conditions can cause the reset of the resilvering process? My lost-and-found disk can't get back into the pool because of resilvers restarting... FOLLOW-UP AND NEW QUESTIONS Here is a new piece of evidence - I've finally got something out of fmdump
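For anyone repeating this kind of digging, the relevant telemetry commands are roughly:
  # fmdump -eV -t 24h    (raw error reports - transport errors, device retires, etc.)
  # fmadm faulty         (what FMA has actually diagnosed so far)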

Re: [zfs-discuss] Resilver restarting several times

2012-05-12 Thread Jim Klimov
2012-05-12 15:52, Jim Klimov wrote: 2012-05-11 14:22, Jim Klimov wrote: What conditions can cause the reset of the resilvering process? My lost-and-found disk can't get back into the pool because of resilvers restarting... Guess I must assume that the disk is dying indeed, losing connection

Re: [zfs-discuss] Resilver restarting several times

2012-05-12 Thread Jim Klimov
Thanks for staying tuned! ;) 2012-05-12 18:34, Richard Elling wrote: On May 12, 2012, at 4:52 AM, Jim Klimov wrote: 2012-05-11 14:22, Jim Klimov wrote: What conditions can cause the reset of the resilvering process? My lost-and-found disk can't get back into the pool because of resilvers

Re: [zfs-discuss] Resilver restarting several times

2012-05-12 Thread Jim Klimov
2012-05-12 7:01, Jim Klimov wrote: Overall the applied question is whether the disk will make it back into the live pool (ultimately with no continuous resilvering), and how fast that can be done - I don't want to risk the big pool with nonredundant arrays for too long. Here lies another

[zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-15 Thread Jim Klimov
detection on POST (I'll test tonight) or these big disks won't work in X4500, period? [1] http://code.google.com/p/solaris-parted/downloads/detail?name=solaris-parted-0.2.tar.gz&can=2&q= Gotta run now, will ask more in the evening :) Thanks for now, //Jim

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-15 Thread Jim Klimov
, check! ;} 2012-05-15 13:41, Jim Klimov wrote: Hello all, I'd like some practical advice on migration of a Sun Fire X4500 (Thumper) from aging data disks to a set of newer disks. Some questions below are my own, others are passed from the customer and I may consider not all of them sane - but must ask

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
reasoning should apply to other similar methods though, like iSCSI from remote storage, or lofi-devices, or SVM as I thought of (ab)using in this migration. Thanks, //Jim

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
2012-05-16 13:30, Joerg Schilling wrote: Jim Klimov <jimkli...@cos.ru> wrote: We know that large redundancy is highly recommended for big HDDs, so in-place autoexpansion of the raidz1 pool onto 3Tb disks is out of the question. Before I started to use my thumper, I reconfigured it to use

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
for no benefit to the buyer. So this method was ruled out for this situation. Thanks, //Jim

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
be more performant and have more RAM, I expect that this Thumper would be the backup box for a new server, ultimately. Thanks, //Jim

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-16 Thread Jim Klimov
stuff into the new test pools to see if any conflicts arise in snv_117's support of the disk size. Thanks, //Jim

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-17 Thread Jim Klimov
with no deletions so far is oh-so-good! ;) 2012-05-17 1:21, Jim Klimov wrote: 2012-05-15 19:17, casper@oracle.com wrote: Your old release of Solaris (nearly three years old) doesn't support disks over 2TB, I would think. (A 3TB is 3E12, the 2TB limit is 2^41 and the difference is around 800Gb

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-17 Thread Jim Klimov
or later (oi_151a3?) Perhaps, some known pool corruption issues or poor data layouts in older ZFS software releases?.. Thanks, //Jim

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-17 Thread Jim Klimov
2012-05-18 1:39, Jim Klimov wrote: A small follow-up on my tests, just in case readers are interested in some numbers: the UltraStar 3Tb disk got filled up by a semi-random selection of data from our old pool in 24 hours sharp. One more number: the smaller pool completed its scrub in 57

[zfs-discuss] How does resilver/scrub work?

2012-05-17 Thread Jim Klimov
recovery windows when resilvering disks. Q4: I wonder if similar (equivalent) solutions are already in place and did not help much? ;) Thanks, //Jim

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-18 Thread Jim Klimov
on that below :) 2012-05-18 15:30, Daniel Carosone wrote: On Fri, May 18, 2012 at 03:05:09AM +0400, Jim Klimov wrote: While waiting for that resilver to complete last week, I caught myself wondering how the resilvers (are supposed to) work in ZFS? The devil finds work for idle hands

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-18 Thread Jim Klimov
2012-05-18 19:08, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov I'm reading the ZFS on-disk spec, and I get the idea that there's an uberblock pointing to a self-balancing tree (some say b-tree, some say

Re: [zfs-discuss] dataset is busy when doing snapshot

2012-05-20 Thread Jim Klimov
, mount, umount, reenable zoned). Hope this helps, //Jim Klimov
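A hedged reconstruction of that dance (dataset name is an example):
  # zfs set zoned=off pond/zones/web1/data
  # zfs mount pond/zones/web1/data
  # zfs umount pond/zones/web1/data
  # zfs set zoned=on pond/zones/web1/data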

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-20 Thread Jim Klimov
zfs developers to make a POC? ;) Thanks, //Jim Klimov

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-22 Thread Jim Klimov
2012-05-22 7:30, Daniel Carosone wrote: On Mon, May 21, 2012 at 09:18:03PM -0500, Bob Friesenhahn wrote: On Mon, 21 May 2012, Jim Klimov wrote: This is so far a relatively raw idea and I've probably missed something. Do you think it is worth pursuing and asking some zfs developers to make

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-23 Thread Jim Klimov
counts anyway (if no new problems are found). Thanks, //Jim Klimov

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-23 Thread Jim Klimov
this is functionally identical. (At least, it would be - if it were part of a supported procedure as I suggest). Thanks, //Jim Klimov PS: I pondered for a while if I should make up an argument that on a dying disk's mechanics, lots of random IO (resilver) instead of sequential IO (dd) would cause

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-24 Thread Jim Klimov
the incomplete resilver made me a practical experiment of the idea. The failure data does not support your hypothesis. Ok, then my made-up and dismissed argument does not stand ;) Thanks for the discussion, //Jim Klimov

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-24 Thread Jim Klimov
for preventive regular scrubs)... //Jim

Re: [zfs-discuss] MPxIO n00b question

2012-05-25 Thread Jim Klimov
that a pool with an error is exposed to possible fatal errors (due to double-failures with single-protection). //Jim

Re: [zfs-discuss] MPxIO n00b question

2012-05-25 Thread Jim Klimov
2012-05-25 21:45, Sašo Kiselkov wrote: On 05/25/2012 07:35 PM, Jim Klimov wrote: Sorry I can't comment on MPxIO, except that I thought zfs could by itself discern two paths to the same drive, if only to protect against double-importing the disk into a pool. Unfortunately, it isn't the same

Re: [zfs-discuss] How does resilver/scrub work?

2012-05-25 Thread Jim Klimov
2012-05-26 1:07, Richard Elling wrote: On May 25, 2012, at 1:53 PM, zfs user wrote: The man page seems to not mention the critical part of the FMA msg that OP is worried about. OP said that his motivation for clearing the errors and fearing the degraded state was because he feared this:

Re: [zfs-discuss] Remedies for suboptimal mmap performance on zfs

2012-05-28 Thread Jim Klimov
, expiring ARC data pages and actually claiming the RAM for the application... Right? ;) //Jim

Re: [zfs-discuss] Advanced Format HDD's - are we there yet? (or - how to buy a drive that won't be teh sux0rs on zfs)

2012-05-29 Thread Jim Klimov
, when you have much RAM dedicated to caching. Hmmm... did you use dedup in those tests? That is another source of performance degradation on smaller machines (under tens of GBs of RAM). HTH, //Jim
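If dedup was indeed enabled, its table size is easy to check after the fact (pool name is an example):
  # zdb -DD tank    (prints the DDT histogram and the estimated in-core/on-disk size per entry)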

Re: [zfs-discuss] Disk failure chokes all the disks attached to the failing disk HBA

2012-05-30 Thread Jim Klimov
... Now, waiting for experts to chime in on whatever I missed ;) HTH, //Jim Klimov

[zfs-discuss] Terminology question on ZFS COW

2012-06-05 Thread Jim Klimov
more valid (making a copy of old data upon a new write), and if any vendors actually did that procedure outlined above? Thanks, //Jim Klimov

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-06 Thread Jim Mauro
I can't help but be curious about something, which perhaps you verified but did not post. What the data here shows is: - CPU 31 is buried in the kernel (100% sys). - CPU 31 is handling a moderate-to-high rate of xcalls. What the data does not prove empirically is that the 100% sys time of CPU
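One generic way to attribute those xcalls (an illustration, not the poster's exact method):
  # dtrace -n 'sysinfo:::xcalls /cpu == 31/ { @[stack()] = count(); }'    (Ctrl-C after a few seconds under load)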

[zfs-discuss] A disk on Thumper giving random CKSUM error counts

2012-06-10 Thread Jim Klimov
better ideas, perhaps someone had the same experiences? Thanks, //Jim Klimov

Re: [zfs-discuss] Scrub works in parallel?

2012-06-11 Thread Jim Klimov
architectural choices, components and their specs. HTH, //Jim

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Jim Klimov
, presence of pool activity would likely delay the scrub completion time, perhaps even more noticeably. Thanks, //Jim Klimov

Re: [zfs-discuss] Scrub works in parallel?

2012-06-12 Thread Jim Klimov
such apparent bottlenecks. But people who construct their own storage should know of (and try to avoid) such possible problem-makers ;) Thanks, Roch, //Jim Klimov

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Jim Klimov
to affect these CPUs either... I wonder if creating a CPU set not assigned to any active user, and setting that CPU set to process (networking) interrupts, would work or help (see psrset, psradm)? My 2c, //Jim Klimov
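A rough sketch of the psrset/psradm idea (CPU ids are arbitrary examples):
  # psrset -c 30 31    (creates a processor set; unbound processes will stay off these CPUs)
  # psradm -i 28 29    (marks other busy CPUs no-intr, pushing device interrupts elsewhere)
Whether the xcall storm respects such fencing is exactly the open question above.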

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Jim Mauro
So try unbinding the mac threads; it may help you here. How do I do that? All I can find on interrupt fencing and the like is to simply set certain processors to no-intr, which moves all of the interrupts, and it doesn't prevent the xcall storm from choosing to affect these CPUs either… In

Re: [zfs-discuss] (fwd) Re: ZFS NFS service hanging on Sunday

2012-06-14 Thread Jim Klimov
(tens of GBs for moderate-sized pools of tens of TB). Your box seems to have a 12Tb pool with just a little bit used, yet already the shortage of RAM is well seen... Hope this helps (understanding at least), //Jim Klimov
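For the record, the ARC's current appetite is visible with:
  # kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max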

Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Jim Klimov
new rpool (and data pool if you've made one), installgrub onto the second disk - and you're done. HTH, //Jim Klimov
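The installgrub step, for completeness (device path is an example):
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0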

Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Jim Klimov
2012-06-15 17:18, Jim Klimov wrote: 7) If you're on live media, try to rename the new rpool2 to become rpool, i.e.:
  # zpool export rpool2
  # zpool export rpool
  # zpool import -N rpool rpool2
  # zpool export rpool
Ooops, bad typo in third line; should be: # zpool export
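Reading between the truncated lines, the intended rename sequence is presumably along these lines (pool names per the quoted procedure; this is an assumption, not the author's verbatim correction):
  # zpool export rpool2
  # zpool export rpool
  # zpool import -N rpool2 rpool    ('zpool import <pool> <newname>' renames on import)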

Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Jim Klimov
)? Or if the drive lies, saying its sectors are 512b while they physically are 4KB - it is undetectable except by reading vendor specs? Thanks, //Jim
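Some drives and tools do expose the difference, though - e.g. (device/pool names are examples):
  # smartctl -i /dev/rdsk/c0t2d0    (newer smartmontools print logical vs physical sector size, if the drive reports it)
  # zdb -C tank | grep ashift       (ashift=9 means 512-byte allocation, ashift=12 means 4 KB)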

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-16 Thread Jim Klimov
haven't browsed the zfs on-disk spec, it may be also helpful (though outdated in regard to current features): * http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf HTH, //Jim Klimov
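The first practical step for broken labels is usually to see which of the four labels survive on each member device (device path is an example):
  # zdb -l /dev/rdsk/c0t3d0s0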

Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-17 Thread Jim Klimov
- according to datasheets on site. HTH, //Jim Klimov

Re: [zfs-discuss] Restore destroyed snapshot ???

2012-06-18 Thread Jim Klimov
(uberblocks with newer TXG numbers are, AFAIK, explicitly invalidated (zeroed out)). I don't know how/if rollbacks work with read-only imports, or whether they allow inspecting a pool at TXG number N without forfeiting its newer changes. HTH, //Jim Klimov
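The recovery-import options can at least be probed non-destructively (pool name is an example; readonly imports exist only on newer builds):
  # zpool import -F -n tank             (dry run: reports how far back a rewind would go)
  # zpool import -o readonly=on tank    (inspect the data without committing any changes)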

Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-20 Thread Jim Klimov
it manually (if you only actively use this disk for one or more ZFS pools - which play with caching nicely). HTH, //Jim Klimov

Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-20 Thread Jim Klimov
2012-06-21 1:58, Richard Elling wrote: On Jun 20, 2012, at 4:08 AM, Jim Klimov wrote: Also by default if you don't give the whole drive to ZFS, its cache may be disabled upon pool import and you may have to reenable it The behaviour is to attempt to enable the disk's write cache if ZFS has
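The manual route, for disks where ZFS does not own the whole device, goes through format's expert mode (interactive):
  # format -e
  (select the disk, then: cache -> write_cache -> display / enable)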

Re: [zfs-discuss] snapshots slow on sol11?

2012-06-26 Thread Jim Klimov
? Regarding zfs-auto-snapshot, it is possible to install the old scripted package from OpenSolaris onto Solaris 10 at least; I have not had much experience with the newer releases yet (timesliderd), so I can't help better. HTH, //Jim Klimov
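Either implementation keys off the same per-dataset user properties, e.g.:
  # zfs set com.sun:auto-snapshot=true tank/home
  # zfs set com.sun:auto-snapshot:frequent=false tank/home/scratch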

[zfs-discuss] shareiscsi and COMSTAR

2012-06-26 Thread Jim Klimov
some other services and/or files (/etc/iscsi, something else?) Thanks, //Jim Klimov

Re: [zfs-discuss] shareiscsi and COMSTAR

2012-06-26 Thread Jim Klimov
2012-06-27 1:00, Bill Pijewski wrote: On Tue, Jun 26, 2012 at 1:47 PM, Jim Klimov jimkli...@cos.ru wrote: 1) Is COMSTAR still not-integrated with shareiscsi ZFS attributes? Or can the pool use the attribute, and the correct (new COMSTAR) iSCSI target daemon will fire up? I can't speak
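For comparison, the COMSTAR equivalent of shareiscsi=on is a handful of commands (pool/volume names are examples; add-view takes the GUID printed by create-lu):
  # svcadm enable stmf
  # svcadm enable -r svc:/network/iscsi/target:default
  # stmfadm create-lu /dev/zvol/rdsk/tank/vol1
  # stmfadm add-view <lu-guid>
  # itadm create-target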

Re: [zfs-discuss] Benefits of enabling compression in ZFS for the zones

2012-07-10 Thread Jim Klimov
are overqualified for their jobs and have lots of spare cycles - so (de)compression has little impact on real work anyway. Also decompression tends to be faster than compression, because there is little to no analysis to do - only matching compressed tags to a dictionary of original data snippets. HTH, //Jim
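Trying it is cheap and reversible for new writes (dataset name is an example):
  # zfs set compression=lzjb tank/zones
  # zfs get -r compressratio tank/zones    (shows what it actually saves over time)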

Re: [zfs-discuss] Understanding ZFS recovery

2012-07-12 Thread Jim Klimov
the expectations from block-pointers, ZFS will know there are errors or even losses. For example, zpool scrub does just that - so you should run that on your pool, if it is now importable to you. Good luck, HTH, //Jim Klimov
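I.e. (pool name is an example):
  # zpool scrub tank
  # zpool status -v tank    (progress, plus any files with unrecoverable errors)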

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread Jim Klimov
for the second half's rewrite (if that comes soon enough), and may be spooled to disk as a couple of 64K blocks or one 128K block (if both changes come soon after each other - within one TXG). HTH, //Jim Klimov
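Which is why, for steady small random rewrites (databases, VM images), a smaller recordsize is often suggested; it only affects data written after the change (dataset name is an example):
  # zfs set recordsize=8k tank/db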

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-21 Thread Jim Klimov
2012-07-20 5:11, Bob Friesenhahn wrote: On Fri, 20 Jul 2012, Jim Klimov wrote: ZFS data block sizes are fixed size! Only tail blocks are shorter. This is the part I am not sure is either implied by the docs or confirmed by my practice. But maybe I've missed something... This is something

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-21 Thread Jim Klimov
2012-07-22 1:24, Bob Friesenhahn wrote: On Sat, 21 Jul 2012, Jim Klimov wrote: During this quick test I did not manage to craft a test which would inflate a file in the middle without touching its other blocks (other than using a text editor which saves the whole file - so that is irrelevant

Re: [zfs-discuss] Question on 4k sectors

2012-07-23 Thread Jim Klimov
it properly or not) are not all inherently evil - this emulation by itself may be of some concern regarding performance, but not one of reliability. Then again, firmware errors are possible in any part of the stack, of both older and newer models ;) HTH, //Jim

[zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-07-29 Thread Jim Klimov
, //Jim Klimov

[zfs-discuss] ZIL devices and fragmentation

2012-07-29 Thread Jim Klimov
. Is this understanding correct? Does it apply to any generic writes, or only to sync-heavy scenarios like databases or NFS servers? Thanks, //Jim Klimov
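For context, adding such a device and watching it work looks like this (pool/device names are examples):
  # zpool add tank log mirror c2t0d0 c2t1d0
  # zpool iostat -v tank 5    (shows how much traffic the log vdev absorbs)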

Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-29 Thread Jim Klimov
2012-07-29 19:50, Sašo Kiselkov wrote: On 07/29/2012 04:07 PM, Jim Klimov wrote: For several times now I've seen statements on this list implying that a dedicated ZIL/SLOG device catching sync writes for the log, also allows for more streamlined writes to the pool during normal healthy TXG

Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-29 Thread Jim Klimov
2012-07-30 0:40, opensolarisisdeadlongliveopensolaris wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov For several times now I've seen statements on this list implying that a dedicated ZIL/SLOG device catching sync writes

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-01 Thread Jim Klimov
found, for some apparent reason ;) Also, I am not sure whether bumping the copies attribute to, say, 3 increases only the redundancy of userdata, or of regular metadata as well. //Jim
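Note also that copies only applies to blocks written after the property is set (dataset name is an example):
  # zfs set copies=2 tank/precious
  # zfs get copies,used tank/precious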

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-01 Thread Jim Klimov
2012-08-01 16:22, Sašo Kiselkov wrote: On 08/01/2012 12:04 PM, Jim Klimov wrote: Probably DDT is also stored with 2 or 3 copies of each block, since it is metadata. It was not in the last ZFS on-disk spec from 2006 that I found, for some apparent reason ;) The idea of the pun

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-01 Thread Jim Klimov
(and rewrite both its copies now). //Jim

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-01 Thread Jim Klimov
2012-08-01 17:55, Sašo Kiselkov wrote: On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov Availability of the DDT is IMHO crucial to a deduped pool, so I won't

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-01 Thread Jim Klimov
ultimately remove an unreferenced entry, then you benefit on writes as well - you don't take as long to find DDT entries (or determine lack thereof) for the blocks you add or remove. Or did I get your answer wrong? ;) //Jim

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-01 Thread Jim Klimov
, as well. //Jim

Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-01 Thread Jim Klimov
2012-08-01 23:34, opensolarisisdeadlongliveopensolaris wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov Well, there are at least a couple of failure scenarios where copies>1 are good: 1) A single-disk pool, as in a laptop

Re: [zfs-discuss] number of blocks changes

2012-08-03 Thread Jim Klimov
ob varies similarly, for the fun of it? //Jim

[zfs-discuss] Should ZFS clones inherit(clone) current settable attributes of origin datasets, or of their new hierarchical parents

2012-08-08 Thread Jim Klimov
updates, and the new software image is written to disk without compression). I wonder if it is possible to augment zfs clone with an option to replicate origin's changeable attributes (all and/or a list of ones we want), and use this feature in beadm? Thanks, //Jim Klimov
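Until such an option exists, the properties have to be listed explicitly at clone time (dataset names are examples):
  # zfs clone -o compression=on -o atime=off rpool/ROOT/be1@snap rpool/ROOT/be2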
