Re: [zfs-discuss] grub prompt after aborted ufs to zfs live upgrade

2008-11-14 Thread Robert Buick
Still no luck :-( I installed snv_100 on a new disk, mounted the old disk, and copied the home directories etc., and now at least I have a system that works, if somewhat stunted compared to the old system. It would be good if the old disk could be brought back to its former glory...

Re: [zfs-discuss] zfs boot - U6 kernel patch breaks sparc boot

2008-11-14 Thread Jens Elkner
On Thu, Nov 13, 2008 at 04:54:57PM -0800, Gerry Haskins wrote: Jens, http://www.sun.com/bigadmin/patches/firmware/release_history.jsp on the Big Admin Patching center, http://www.sun.com/bigadmin/patches/ list firmware revisions. Thanks a lot. I dug around there and found that 121683-06

[zfs-discuss] Solaris Compatibility on Foxconn or Gigabyte MB

2008-11-14 Thread John Doe
Hi guys. Read this thread, good info! I'm now considering getting one of the MBs recommended in the Tom's Hardware review, to which a URL was posted earlier. The article is here: http://www.tomshardware.com/reviews/intel-e7200-g31,2039.html I would like to know if any of you can confirm

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Joerg Schilling
Andrew Gabriel [EMAIL PROTECTED] wrote: That is exactly the issue. When the zfs recv data has been written, zfs recv starts reading the network again, but there's only a tiny amount of data buffered in the TCP/IP stack, so it has to wait for the network to heave more data across. In
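The small in-kernel TCP buffer Andrew describes can be enlarged on Solaris 10 with ndd. A sketch, with illustrative sizes only (not tuned recommendations, and root is required):

```shell
# Illustrative only: raise the TCP buffer ceilings on a Solaris 10 host.
ndd -set /dev/tcp tcp_max_buf 4194304      # allow buffers up to 4 MB
ndd -set /dev/tcp tcp_recv_hiwat 1048576   # default receive buffer: 1 MB
ndd -set /dev/tcp tcp_xmit_hiwat 1048576   # default send buffer: 1 MB
```

Even so, as the thread goes on to show, kernel socket buffers top out far below what a multi-second disk-speed buffer needs, which is where a user-space buffer program comes in.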

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Thomas Maier-Komor
Joerg Schilling wrote: Andrew Gabriel [EMAIL PROTECTED] wrote: That is exactly the issue. When the zfs recv data has been written, zfs recv starts reading the network again, but there's only a tiny amount of data buffered in the TCP/IP stack, so it has to wait for the network to heave

Re: [zfs-discuss] Race condition yields to kernel panic (u3, u4) or hanging zfs commands (u5)

2008-11-14 Thread Andreas Koppenhoefer
Could you provide the panic message and stack trace, plus the stack traces of when it's hung? --matt Hello matt, here is info and stack trace of a server running Update 3: $ uname -a SunOS qacpp03 5.10 Generic_127111-05 sun4us sparc FJSV,GPUSC-M $ head -1 /etc/release

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-14 Thread mike
No clue. My friend also upgraded to b101. Said it was working awesome - improved network performance, etc. Then he said after a few days, he's decided to downgrade too - too many other weird side effects. This has a comparison (at the time) as to what the differences are with the different

[zfs-discuss] mbuffer WAS 'zfs recv' is very slow

2008-11-14 Thread Jerry K
Hello Thomas, What is mbuffer? Where might I go to read more about it? Thanks, Jerry Yesterday, I released a new version of mbuffer, which also enlarges the default TCP buffer size. So everybody using mbuffer for network data transfer might want to update. For everybody unfamiliar

Re: [zfs-discuss] mbuffer WAS 'zfs recv' is very slow

2008-11-14 Thread Thomas Maier-Komor
Jerry K wrote: Hello Thomas, What is mbuffer? Where might I go to read more about it? Thanks, Jerry Yesterday, I released a new version of mbuffer, which also enlarges the default TCP buffer size. So everybody using mbuffer for network data transfer might want to update.
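For readers who, like Jerry, haven't met mbuffer: it sits in a pipe and smooths out a bursty producer feeding a bursty consumer. A typical zfs send/recv pairing looks roughly like this; host, pool, and snapshot names are made up, and the flags shown are mbuffer's block-size (-s), memory (-m), listen (-I), and connect (-O) options:

```shell
# Receiving host (start first): listen on TCP port 9090, buffer up to
# 1 GB in RAM, and feed the stream to zfs receive.
mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup

# Sending host: stream the snapshot through a local 1 GB buffer to the
# receiver ("recvhost", "tank/home@today" are hypothetical names).
zfs send tank/home@today | mbuffer -s 128k -m 1G -O recvhost:9090
```

With a buffer on each end, zfs recv can keep writing from local memory while the network refills the far side, instead of stalling every time the kernel's small socket buffer drains.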

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Andrew Gabriel
Joerg Schilling wrote: Andrew Gabriel [EMAIL PROTECTED] wrote: That is exactly the issue. When the zfs recv data has been written, zfs recv starts reading the network again, but there's only a tiny amount of data buffered in the TCP/IP stack, so it has to wait for the network to heave

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Andrew Gabriel
Andrew Gabriel wrote: Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's many orders of magnitude bigger than SO_RCVBUF can go. No -- that's wrong -- should read 250MB buffer! Still some orders of
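The corrected figure is easy to sanity-check. Assuming a gigabit link delivers about 125 MB/s and a single 7200 RPM disk sustains very roughly 55 MB/s of writes (an assumed figure), a 250 MB buffer fills in about 2 seconds at line rate and covers 4-5 seconds of drain at disk rate:

```shell
# Back-of-envelope check of the 250 MB buffer figure.
LINK_MBS=125                 # 1 Gb/s link = 1000/8 MB/s
DISK_MBS=55                  # assumed sustained write rate, one 7200 RPM disk
BUFFER_MB=250

FILL_S=$((BUFFER_MB / LINK_MBS))         # seconds to fill at line rate
SECONDS_COVERED=$((BUFFER_MB / DISK_MBS))  # seconds of writes it absorbs

echo "fills in ~${FILL_S}s, covers ~${SECONDS_COVERED}s of disk writes"
```

That ~4-5 second figure matches the post, and is indeed well beyond anything SO_RCVBUF will accept.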

Re: [zfs-discuss] [fm-discuss] fmd wakeup disks in zpool

2008-11-14 Thread Tarik Soydan - Sun BOS Software
On 11/14/08 04:29, Tobias Exner wrote: Hi experts, I need a little help from your side to understand what's going on. I've got a SUN X4540 Thumper and set up some zpools. Further, I engaged the powerd configuration to stop the disks when they are idle for a specified time. Now I

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Joerg Schilling
Andrew Gabriel [EMAIL PROTECTED] wrote: Andrew Gabriel wrote: Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's many orders of magnitude bigger than SO_RCVBUF can go. No -- that's wrong --

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Brent Jones
On Fri, Nov 14, 2008 at 10:04 AM, Joerg Schilling [EMAIL PROTECTED] wrote: Andrew Gabriel [EMAIL PROTECTED] wrote: Andrew Gabriel wrote: Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's many

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Bob Friesenhahn
On Fri, 14 Nov 2008, Joerg Schilling wrote: On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could set the socket buffer size to 63 kB. 63kB : 1 MB is the same ratio as 256 MB : 4 GB. BTW: a lot of numbers in Solaris did not grow since a long time and thus create problems
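Joerg's proportion checks out: both socket-buffer-to-RAM ratios come to about 6%:

```shell
# Check the claimed proportion: 63 kB of a 1 MB machine vs. 256 MB of 4 GB.
OLD_PCT=$((63 * 100 / 1024))     # 63 kB out of 1024 kB, in percent
NEW_PCT=$((256 * 100 / 4096))    # 256 MB out of 4096 MB, in percent
echo "1986: ~${OLD_PCT}%   today: ~${NEW_PCT}%"
```

The point stands: limits that were a generous fraction of memory two decades ago were never rescaled, so today's defaults are tiny relative to both RAM and link speed.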

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Andrew Gabriel
Joerg Schilling wrote: Andrew Gabriel [EMAIL PROTECTED] wrote: Andrew Gabriel wrote: Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's many orders of magnitude bigger than SO_RCVBUF can go. No --

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Thomas Maier-Komor
----- original message ----- Subject: Re: [zfs-discuss] 'zfs recv' is very slow Sent: Fri, 14 Nov 2008 From: Bob Friesenhahn [EMAIL PROTECTED] On Fri, 14 Nov 2008, Joerg Schilling wrote: On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could set the socket buffer

Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-11-14 Thread Richard Elling
Neil Perrin wrote: I wouldn't expect any improvement using a separate disk slice for the Intent Log unless that disk was much faster and was otherwise largely idle. If it was heavily used then I'd expect quite the performance degradation as the disk head bounces around between slices.
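For concreteness, adding the dedicated log device Neil and Richard are discussing is a one-liner; the device name below is hypothetical, and per the advice above it only pays off if the device is fast and otherwise idle (ideally a write-optimized SSD rather than a slice of a busy disk):

```shell
# Add a dedicated intent-log (slog) device to an existing pool.
# c4t0d0 is a hypothetical device name.
zpool add tank log c4t0d0

# The log device should now appear under its own "logs" section.
zpool status tank
```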

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-11-14 Thread Orvar Korvar
OpenSolaris + ZFS achieves 120 MB/sec read speed with 4 SATA 7200 rpm discs, 440 MB/sec read speed with 7 SATA discs, and 220 MB/sec write speed; 2 GB/sec write speed with 48 discs (on SUN Thumper x4600). I have links to the websites where I've read this. -- This message posted from opensolaris.org

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Andrew Gabriel
[EMAIL PROTECTED] wrote: But zfs could certainly use bigger buffers; just like mbuffer, I also wrote my own pipebuffer which does pretty much the same. You too? (My buffer program which I used to diagnose the problem is attached to the bugid ;-) I know Chris Gerhard wrote one too. Seems like

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Bob Friesenhahn
On Fri, 14 Nov 2008, Joerg Schilling wrote: Disk RPM: 3,600 then vs. 10,000 now (x3). The best rate I did see in 1985 was 800 kB/s (w. linear reads); now I see 120 MB/s, this is more than x100 ;-) Yes. And now that SSDs are

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-14 Thread Joerg Schilling
Bob Friesenhahn [EMAIL PROTECTED] wrote: On Fri, 14 Nov 2008, Joerg Schilling wrote: Disk RPM: 3,600 then vs. 10,000 now (x3). The best rate I did see in 1985 was 800 kB/s (w. linear reads); now I see 120 MB/s, this is more

Re: [zfs-discuss] continuous replication

2008-11-14 Thread David Pacheco
Brent Jones wrote: *snip* a 'zfs send' on the sending host monitors the pool/filesystem for changes, and immediately sends them to the receiving host, which applies the change to the remote pool. This is asynchronous, and isn't really different from running zfs send/recv in a loop. Whether

Re: [zfs-discuss] continuous replication

2008-11-14 Thread Mattias Pantzare
I think you're confusing our clustering feature with the remote replication feature. With active-active clustering, you have two closely linked head nodes serving files from different zpools using JBODs connected to both head nodes. When one fails, the other imports the failed node's pool and

[zfs-discuss] zfs not yet suitable for HA applications?

2008-11-14 Thread alex black
hi All, I realize the subject is a bit incendiary, but we're running into what I view as a design omission with ZFS that is preventing us from building highly available storage infrastructure; I want to bring some attention (again) to this major issue: Currently we have a set of iSCSI

Re: [zfs-discuss] continuous replication

2008-11-14 Thread Adam Leventhal
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote: That is _not_ active-active, that is active-passive. If you have a active-active system I can access the same data via both controllers at the same time. I can't if it works like you just described. You can't call it

Re: [zfs-discuss] zfs boot - U6 kernel patch breaks sparc boot

2008-11-14 Thread Jens Elkner
On Fri, Nov 14, 2008 at 01:07:29PM -0800, Ed Clark wrote: hi, is the system still in the same state initially reported ? Yes. i.e. you have not manually run any commands (e.g. installboot) that would have altered the slice containing the root fs where 137137-09 was applied could you

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-14 Thread Al Hopper
On Fri, Nov 14, 2008 at 10:22 AM, mike [EMAIL PROTECTED] wrote: No clue. My friend also upgraded to b101. Said it was working awesome - improved network performance, etc. Then he said after a few days, he's decided to downgrade too - too many other weird side effects. Any more details

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-14 Thread mike
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper [EMAIL PROTECTED] wrote: No clue. My friend also upgraded to b101. Said it was working awesome - improved network performance, etc. Then he said after a few days, he's decided to downgrade too - too many other weird side effects. Any more details

Re: [zfs-discuss] continuous replication

2008-11-14 Thread Richard Elling
Adam Leventhal wrote: On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote: That is _not_ active-active, that is active-passive. If you have a active-active system I can access the same data via both controllers at the same time. I can't if it works like you just described.

Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-14 Thread Al Hopper
On Fri, Nov 14, 2008 at 4:43 PM, gnomad [EMAIL PROTECTED] wrote: Like many others, I am looking to put together a SOHO NAS based on ZFS/CIFS. The plan is 6 x 1TB drives in RAIDZ2 configuration, driven via mobo with 6 SATA ports. I've read most, if not all, of the threads here, as well as
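The pool gnomad describes, 6 x 1 TB in RAIDZ2 on the six motherboard SATA ports, would be created along these lines (device names are hypothetical):

```shell
# Create a double-parity pool from six whole disks (names are hypothetical).
# RAIDZ2 survives the loss of any two of the six drives.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

zpool status tank   # verify the layout
```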

Re: [zfs-discuss] continuous replication

2008-11-14 Thread Mattias Pantzare
On Sat, Nov 15, 2008 at 00:46, Richard Elling [EMAIL PROTECTED] wrote: Adam Leventhal wrote: On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote: That is _not_ active-active, that is active-passive. If you have a active-active system I can access the same data via both

Re: [zfs-discuss] [pkg-discuss] where to for 'pkg install' issues

2008-11-14 Thread Jordan Brown
Rich Reynolds wrote: BTW: I am loath to call them bugs until I know its not a configuration/pilot error. IMHO, if you can cause the root to become corrupt, it's a bug. Short of mucking around in /dev/kmem or /dev/dsk/*, it just shouldn't be possible to corrupt a file system.

Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-14 Thread Rob
WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2 The P45 based boards are a no-brainer 16G of DDR2-1066 with P45 or 8G of ECC DDR2-800 with 3210 based boards That is the question. Rob ___ zfs-discuss mailing list

Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-14 Thread Ian Collins
[EMAIL PROTECTED] wrote: WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2 The P45 based boards are a no-brainer 16G of DDR2-1066 with P45 or 8G of ECC DDR2-800 with 3210 based boards That is the question. I guess the answer is how valuable is your data? -- Ian.

Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-11-14 Thread Nicholas Lee
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED]wrote: In short, separate logs with rotating rust may reduce sync write latency by perhaps 2-10x on an otherwise busy system. Using write optimized SSDs will reduce sync write latency by perhaps 10x in all cases. This is one of