Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-21 Thread Richard Lowe

I built in the normal fashion, with the CBE compilers
(cc: Sun C 5.9 SunOS_i386 Patch 124868-10 2009/04/30), and 12u1 lint.

I'm not subscribed to zfs-discuss, but have you established whether the
problematic build is DEBUG? (the bits I uploaded were non-DEBUG).

-- Rich

Haudy Kazemi wrote:

 Brent Jones wrote:
  Could it somehow not be compiling 64-bit support?


 I thought about that, but it says when it boots up that it is 64-bit, and
 I'm able to run 64-bit binaries.  I wonder if it's compiling with the wrong
 processor optimizations, though?  Maybe if it is missing some of the newer
 SSEx instructions, the zpool checksum checking is slowed down significantly?
 I don't know how to check for this, though, and it seems strange that it
 would slow things down this much.  I'd expect even a non-SSE binary to be
 able to calculate a few hundred MB of checksums per second on a 2.5+ GHz
 processor.

 Chad
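
 One quick way to check both of those things, as a rough sketch (the module
 path below is just an example for 64-bit x86; adjust for your system):

   # Does the kernel run 64-bit, and which SSE extensions does the CPU report?
   isainfo -kv
   isainfo -v

   # Is the zfs kernel module itself a 64-bit object?
   file /kernel/fs/amd64/zfs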

 Would it be possible to do a closer comparison between Rich Lowe's fast 142
 build and your slow 142 build?  For example run a diff on the source, build
 options, and build scripts.  If the build settings are close enough, a
 comparison of the generated binaries might be a faster way to narrow things
 down (if the optimizations are different then a resultant binary comparison
 probably won't be useful).

 You said previously that:
 The procedure I followed was basically what is outlined here:
 http://insanum.com/blog/2010/06/08/how-to-build-opensolaris

 using the Sun Studio 12 compilers for ON and 12u1 for lint.
   
 Are these the same compiler versions Rich Lowe used?  Maybe there is a
 compiler optimization bug.  Rich Lowe's build readme doesn't tell us which
 compiler he used.
 http://genunix.org/dist/richlowe/README.txt
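
 One way to check, as a sketch: the Studio compilers record their version
 strings in each object's .comment section, so running mcs against Rich
 Lowe's zfs module and against yours should show which cc produced each
 (the path below is just an example):

   # Print the .comment section; the compiler version strings live there,
   # e.g. lines like "cc: Sun C 5.9 SunOS_i386 ..."
   mcs -p /kernel/fs/amd64/zfs | sort -u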

 I suppose the easiest way for me to confirm whether there is a regression, or
 whether my compiling is flawed, is to compile snv_142 with the same procedure
 and see if it performs as well as Rich Lowe's copy or is slow like my other
 builds.

 Chad

 Another older compilation guide:
 http://hub.opensolaris.org/bin/view/Community+Group+tools/building_opensolaris


Re: [zfs-discuss] ZFS snapshot send/recv hangs X4540 servers

2009-06-08 Thread Richard Lowe
Brent Jones br...@servuhome.net writes:


 I haven't figured out a way to identify the problem, still trying to
 find a 100% way to reproduce this problem.
 The more snapshots I send at a given time, the more likely this seems to
 happen, but correlation is not causation.  :)

 I might try to open a support case with Sun (have a support contract),
 but Opensolaris doesn't seem to be well understood by the support
 folks yet, so not sure how far it will get.

 --
 Brent Jones
 br...@servuhome.net


 I can reproduce this 100% by sending about 6 or more snapshots at once.

 Here is some output that JBK helped me put together:

 Here is a pastebin of the mdb findstack output:
 http://pastebin.com/m4751b08c

 Not sure what I'm looking at, but maybe someone at Sun can see what's going on?

I've had similar issues with similar traces.  I think you're waiting on
a transaction that's never going to come.
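
For reference, a dump of every kernel thread's stack, which is roughly what
that pastebin contains, can be gathered with something along these lines
(a sketch; run as root against the live kernel):

   # Walk all kernel threads and print each one's stack with ::findstack
   echo "::walk thread | ::findstack -v" | mdb -k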

I thought at the time that I was hitting:
   CR 6367701 "hang because tx_state_t is inconsistent"

But given the rash of reports here, it seems perhaps this is something
different.

Like you, I hit it when sending snapshots.  In my case it seems to be
specific to incremental streams rather than full streams: I can send
seemingly any number of full streams, but incremental sends via send -i,
or send -R of datasets with multiple snapshots, will get into a state
like the one above.
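
The kind of workload I mean looks roughly like this; the dataset and host
names here are made up, the point is just several incremental sends running
at once:

   # Several incremental sends in parallel; full sends of the same
   # datasets complete without trouble.
   for ds in tank/fs1 tank/fs2 tank/fs3 tank/fs4 tank/fs5 tank/fs6; do
     zfs send -i ${ds}@snap1 ${ds}@snap2 | ssh backuphost zfs recv -d backup &
   done
   wait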

-- Rich


Re: [zfs-discuss] New version of the ZFS test suite released

2007-08-04 Thread Richard Lowe
Pawel Jakub Dawidek [EMAIL PROTECTED] writes:

 On Fri, Aug 03, 2007 at 10:56:53PM -0700, Jim Walker wrote:
 Version 1.8 of the ZFS test suite was released today on opensolaris.org.
 
 The ZFS test suite source tarballs, packages and baseline can be
 downloaded at:
 http://dlc.sun.com/osol/test/downloads/current/
 
 The ZFS test suite source can be browsed at:
 http://src.opensolaris.org/source/xref/test/ontest-stc2/src/suites/zfs/  
 
 More information on the ZFS test suite is at:
 http://opensolaris.org/os/community/zfs/zfstestsuite/
 
 Questions about the ZFS test suite can be sent to zfs-discuss at:
 http://www.opensolaris.org/jive/forum.jspa?forumID=80

 Is it in a Mercurial repository?  I'm not able to download it, but maybe
 I'm using the wrong path:

   % hg clone ssh://[EMAIL PROTECTED]/hg/test/ontest-stc2 test
   remote: Repository 'hg/test/ontest-stc2' inaccessible: No such file or directory.
   abort: no suitable response from remote hg!


It doesn't appear to be, no.  You'd have to pull the tarballs from
dlc.sun.com.
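
Something along these lines should fetch whatever is in that directory
(a sketch; the exact tarball names are whatever the index lists):

   # Mirror the downloads directory; -np keeps wget from walking up to the
   # parent, -nd drops the files into the current directory.
   wget -r -np -nd http://dlc.sun.com/osol/test/downloads/current/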

-- Rich


Re: [zfs-discuss] How to get new ZFS Solaris 10 U3 features going from Solaris 10 U2

2006-12-15 Thread Richard Lowe

Jeff Victor wrote:

 Robert Milkowski wrote:

  Hello Jeff,

  Friday, December 15, 2006, 9:36:48 PM, you wrote:

  JV David Smith wrote:
   We currently have a couple of servers at Solaris 10 U2, and we would like
   to get to Solaris 10 U3 for the new ZFS features.  Can this be accomplished
   via patching, or do you have to do an upgrade from S10U2 to S10U3?  Also,
   what about a system with zones?  What is the best practice for upgrading a
   system with zones?

  JV For zones: use standard upgrade, because it is not yet possible to use
  JV Live Upgrade on a zoned system.  Also, see the Zones FAQ for other
  JV important

  IIRC, upgrade on a system with zones won't work (it was only lately
  integrated into Nevada).

 That is incorrect.  My earlier statement is correct (you can upgrade with
 standard upgrade).  You are thinking of Live Upgrade.  The ability to use
 Live Upgrade on a zoned system was recently integrated.

You can't use standard upgrade (or Live Upgrade?) if the zone root is on
ZFS, which I assume is what Robert was thinking...
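
A quick way to see whether you're in that situation, as a sketch (the zone
name and path are hypothetical):

   # Where does the zone live, and what filesystem type is that path on?
   zonecfg -z myzone info zonepath
   df -n /zones/myzone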


-- Rich



[zfs-discuss] Re: [xen-discuss] dom0 hangs when using an emulated ZFS volume

2006-08-03 Thread Richard Lowe

Patrick Petit wrote:

Hi,

Some additional details.  Irrespective of the SCSI error reported earlier,
I have established that Solaris dom0 hangs anyway when a domU is booted
from a disk image located on an emulated ZFS volume.  Has this also been
observed by other members of the community?  Is there a known explanation
for this problem?  What would be the troubleshooting steps?


The hang I see isn't when booting on a zvol.  I see hangs intermittently
when using a zvol for anything Xen-related.


The first one I saw was while making the proto on a zvol; the second was
while creating a domU on a zvol (not booting it, just the vbdcfg).


I've been utterly unable to get any useful information out of the machine
at that point.  I don't drop to the debugger, I *can't* drop to the
debugger, and the machine doesn't respond in any way (even to 3 C-a's,
though that may be a problem on my end).


As I've said elsewhere, I'm still trying to reproduce this in such a way
that I can get some kind of information about it (and failing).


I'm not sure what you could do to troubleshoot it.
Do you/can you get into kmdb when this happens?  Does sending 3 
Control-a's to the console do anything?
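
For the kmdb part, a rough sketch of getting it loaded ahead of time, in
case it isn't already (whether the console still responds once the hang
hits is another matter, and a Xen dom0 may complicate this):

   # Load kmdb on the running system and drop into it (:c resumes)
   mdb -K

   # Or have it loaded from boot by adding -k to the kernel boot
   # arguments, e.g. on the GRUB kernel line for x86.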


-- Rich


Re: [zfs-discuss] Re: Poor performance on NFS-exported ZFS volumes

2006-07-28 Thread Richard Lowe

Frank Cusack wrote:

Patrick Bachmann wrote:

Hey Bill,

Bill Sommerfeld wrote:

Overly wide raidz groups seem to be an unfenced hole that people new to
ZFS fall into on a regular basis.

The man page warns against this, but that doesn't seem to be sufficient.

Given that ZFS has relatively few such traps, perhaps large raidz groups
ought to be implicitly split up absent a "Yes, I want to be stupid"
flag.


IMHO it is sufficient to just document this best practice.


I disagree.  The documentation has to AT LEAST state that more than 9
disks gives poor performance.  I did read in the docs that raidz should
use 3-9 disks, but it doesn't say WHY, so of course I went ahead and used
12 disks.

When I say I disagree, I mean this has to be documented in the standard
docs (man pages), not some best-practices guide on some wiki.

But really, I disagree that this needs documentation.  So much of ZFS is
meant to be automatic; now we're back to worrying about stripe width?
(Or maybe that's not quite the problem, but it sure is the same type of
manual administration.)  I may have 12 disks, and it simply does not make
sense (for my theoretical configuration) to split them up into two pools.
I would then have to worry about sizing each pool correctly.  ZFS is
supposed to fix that problem.



This may just be a matter of wording, but you wouldn't have to split it 
up into two pools.  You could use two smaller raidz vdevs within the 
same pool.
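
For instance, something like this (device names hypothetical) gives one
pool built from two six-disk raidz vdevs instead of a single twelve-disk
group:

   # One pool, two raidz top-level vdevs; ZFS stripes across the vdevs.
   zpool create tank \
       raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
       raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0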


-- Rich.