Re: [zfs-discuss] am I "screwed"?

2008-10-13 Thread dick hoogendijk
James C. McPherson wrote: > Lots of things. We can't channel A nice way for a smiley.. Thanks for the mail so far. I'm in Holland and at work right now (13.45h) but I'll check out your suggestions as soon as I get home. -- Dick Hoogendijk -- PGP/GnuPG key: F86289CE ++ http://nagual.nl/ | Su

Re: [zfs-discuss] zpool import of bootable root pool renders it unbootable

2008-10-13 Thread Robert Milkowski
Hello Jürgen, Monday, October 6, 2008, 6:27:54 PM, you wrote: >> Cannot mount root on /[EMAIL PROTECTED],0/pci103c,[EMAIL PROTECTED],2/[EMAIL >> PROTECTED],0:a fstype zfs JK> Is that physical device path correct for your new system? JK> Or is this the physical device path (stored on-disk in t

Re: [zfs-discuss] am I "screwed"?

2008-10-13 Thread dick hoogendijk
James C. McPherson wrote: > Please add -kv to the end of your kernel$ line in > grub, #GRUB kernel$ add -kv cmdk0 at ata0 target 0 lun 0 cmdk0 is /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED]/cmdk0,0 ### end the machine hangs here > have you tried > mount -F zfs rpool/ROOT/s

[zfs-discuss] Improving zfs send performance

2008-10-13 Thread Carsten Aulbert
Hi all, although I'm running all this in a Sol10u5 X4500, I hope I may ask this question here. If not, please let me know where to head to. We are running several X4500 with only 3 raidz2 zpools since we want quite a bit of storage space[*], but the performance we get when using zfs send is somet

Re: [zfs-discuss] Improving zfs send performance

2008-10-13 Thread Darren J Moffat
Carsten Aulbert wrote: > Hi all, > > although I'm running all this in a Sol10u5 X4500, I hope I may ask this > question here. If not, please let me know where to head to. > > We are running several X4500 with only 3 raidz2 zpools since we want > quite a bit of storage space[*], but the performanc

[zfs-discuss] External storage recovery?

2008-10-13 Thread C. Bergström
I had to hard power reset the laptop... Now I can't import my pool..

zpool status -x
  bucket    UNAVAIL 0 0 0 insufficient replicas
  c6t0d0    UNAVAIL 0 0 0 cannot open

cfgadm
  usb8/4    usb-storage    connected    configured    ok
--

Re: [zfs-discuss] zpool import of bootable root pool renders it unbootable

2008-10-13 Thread Jürgen Keil
> Again, what I'm trying to do is to boot the same OS from a physical > drive - once natively on my notebook, the other time from within > Virtualbox. There are two problems, at least. The first is the bootpath: > in VB the disk is emulated as IDE, while booting natively it is SATA. When I started exp

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-13 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 10/11/2008 09:36:02 PM: > > On Oct 10, 2008, at 7:55 PM 10/10/, David Magda wrote: > > > > > If someone finds themselves in this position, what advice can be > > followed to minimize risks? > > Can you ask for two LUNs on different physical SAN devices and have > an

Re: [zfs-discuss] Improving zfs send performance

2008-10-13 Thread Thomas Maier-Komor
Carsten Aulbert schrieb: > Hi all, > > although I'm running all this in a Sol10u5 X4500, I hope I may ask this > question here. If not, please let me know where to head to. > > We are running several X4500 with only 3 raidz2 zpools since we want > quite a bit of storage space[*], but the performa

[zfs-discuss] scrub restart patch status..

2008-10-13 Thread Wade . Stuart
Any news on if the scrub/resilver/snap reset patch will make it into 10/08 update? Thanks! Wade Stuart we are fallon P: 612.758.2660 C: 612.877.0385 ** Fallon has moved. Effective May 19, 2008 our address is 901 Marquette Ave, Suite 2400, Minneapolis, MN 55402. ___

Re: [zfs-discuss] External storage recovery?

2008-10-13 Thread Victor Latushkin
C. Bergström wrote: > I had to hard power reset the laptop... Now I can't import my pool.. > > zpool status -x > bucket UNAVAIL 0 0 0 insufficient replicas > c6t0d0UNAVAIL 0 0 0 cannot open > > cfgadm > > usb8/4 usb-s
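For a pool on external USB storage that shows UNAVAIL after a hard reset, the usual first steps are to confirm the device is visible again and then retry the import. A rough sketch, using the device and pool names from the post (they will differ on another machine):

```shell
# 1. Confirm the OS can still see the disk at all.
cfgadm -al | grep usb     # attachment point should read "connected configured"
format </dev/null         # c6t0d0 should appear in the disk list

# 2. Ask ZFS which pools it can find on the attached devices
#    (lists importable pools without actually importing anything).
zpool import

# 3. If the pool is listed, try the import; -f is only needed when the
#    pool still looks "in use" by the (crashed) previous boot.
zpool import bucket
zpool import -f bucket
```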

Re: [zfs-discuss] am I "screwed"?

2008-10-13 Thread Lori Alt
It would also be useful to see the output of `zfs list` and `zfs get all rpool/ROOT/snv_99` while booted from the failsafe archive. - lori dick hoogendijk wrote: James C. McPherson wrote: Please add -kv to the end of your kernel$ line in grub, #GRUB kernel$ add -kv cmdk0

Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread Blake Irvin
I'm also very interested in this. I'm having a lot of pain with status requests killing my resilvers. In the example below I was trying to test to see if timf's auto-snapshot service was killing my resilver, only to find that calling zpool status seems to be the issue: [EMAIL PROTECTED] ~]# e

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-13 Thread Mike Gerdts
On Thu, Oct 9, 2008 at 10:33 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote: > On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote: >> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote: >>> Nevada isn't production code. For real ZFS testing, you must use a >>> prod

Re: [zfs-discuss] am I "screwed"?

2008-10-13 Thread dick hoogendijk
Lori Alt wrote: > > It would also be useful to see the output of `zfs list` > and `zfs get all rpool/ROOT/snv_99` while > booted from the failsafe archive.

# zfs list
rpool             69.0G  76.6G    40K  /a/rpool
rpool/ROOT        22.7G           18K  legacy
rpool/ROOT/snv99  22.7G

Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread Richard Elling
Blake Irvin wrote: > I'm also very interested in this. I'm having a lot of pain with status > requests killing my resilvers. In the example below I was trying to test to > see if timf's auto-snapshot service was killing my resilver, only to find > that calling zpool status seems to be the issu

Re: [zfs-discuss] restore from snapshot

2008-10-13 Thread Miles Nordin
> "r" == Ross <[EMAIL PROTECTED]> writes: r> roll back to a previous configuration, while keeping r> the ability to roll forward again if you wanted. I believe that's called a ``clone''. It doesn't make sense to roll back, write, roll forward, unless you will accept that you are b

Re: [zfs-discuss] Improving zfs send performance

2008-10-13 Thread Carsten Aulbert
Hi, Darren J Moffat wrote: > > What are you using to transfer the data over the network ? > Initially just plain ssh, which was way too slow; now we use mbuffer on both ends and transfer the data over a socket via socat - I know that mbuffer already allows this, but in a few tests socat seemed to b
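A sketch of the kind of pipeline Carsten describes, with hypothetical snapshot, pool, host, and port names; mbuffer's `-s` (block size) and `-m` (buffer size) options are assumed here, and socat carries the stream over TCP:

```shell
# Receiver (start first): accept the stream, buffer it, feed zfs receive.
socat -u TCP-LISTEN:9090,reuseaddr STDOUT | mbuffer -s 128k -m 1G \
    | zfs receive tank/backup

# Sender: buffer the send stream, push it over the socket.
zfs send tank/data@snap | mbuffer -s 128k -m 1G \
    | socat -u STDIN TCP:receiver-host:9090
```

As the post notes, mbuffer can also do the network leg itself (its `-I`/`-O` listen/connect options), which removes socat from the pipeline entirely.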

Re: [zfs-discuss] am I "screwed"?

2008-10-13 Thread Lori Alt
dick hoogendijk wrote: Lori Alt wrote: It would also be useful to see the output of `zfs list` and `zfs get all rpool/ROOT/snv_99` while booted from the failsafe archive

# zfs list
rpool             69.0G  76.6G    40K  /a/rpool
rpool/ROOT        22.7G           18K  legacy
rpool

Re: [zfs-discuss] Improving zfs send performance

2008-10-13 Thread Carsten Aulbert
Hi Thomas, Thomas Maier-Komor wrote: > > Carsten, > > the summary looks like you are using mbuffer. Can you elaborate on what > options you are passing to mbuffer? Maybe changing the blocksize to be > consistent with the recordsize of the zpool could improve performance. > Is the buffer running

Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-13 Thread BJ Quinn
Ok so I left the thumb drive to try to backup all weekend. It got *most* of the first snapshot copied over, about 50MB, and that's it. So I tried an external USB hard drive today, and it actually bothered to copy over the snapshots, but it does so very slowly. It copied over the first snapsho

Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread blake . irvin
Correct, that is a workaround. The fact that I use the beta (alpha?) zfs auto-snapshot service means that when the service checks for active scrubs, it kills the resilver. I think I will talk to Tim about modifying his method script to run the scrub check with least privileges (i.e., not as root).
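The change Blake describes might look something like this in the service's method script. The pool name, the unprivileged user, and the exact status text matched are all assumptions for illustration; the premise, per the thread, is that only a privileged `zpool status` disturbs the resilver:

```shell
# Hypothetical scrub check for the auto-snapshot method script: run
# "zpool status" as an unprivileged user ("nobody") instead of root.
if su nobody -c "zpool status tank" | grep -q "in progress"; then
    echo "scrub/resilver in progress; skipping snapshot"
    exit 0
fi
```

The `grep` matches both "scrub in progress" and "resilver in progress" lines in the status output.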

Re: [zfs-discuss] am I "screwed"?

2008-10-13 Thread dick hoogendijk
Lori Alt wrote: > dick hoogendijk wrote: >> Lori Alt wrote: > Since I don't fully understand the problem, I can't > be sure this will work, but I'm pretty sure it won't > hurt: try setting the mountpoint of the dataset to "/": > > zfs set mountpoint=/ rpool/ROOT/snv99 > > Then reboot and see if
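Lori's suggested fix, spelled out as the sequence run from the failsafe environment (the dataset name is the one from this thread; the import step is only needed if the pool is not already imported):

```shell
# From the failsafe boot archive: point the boot dataset's mountpoint
# back at / so the normal boot can mount it, then reboot.
zpool import -f rpool                  # only if rpool is not yet imported
zfs set mountpoint=/ rpool/ROOT/snv99
reboot
```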

[zfs-discuss] zpool CKSUM errors since drive replace

2008-10-13 Thread Matthew Angelo
After performing the following steps in exact order, I am now seeing CKSUM errors in my zpool. I've never seen any checksum errors before in this zpool. 1. Pre-existing running setup (RAIDZ 7D+1P) - 8x 1TB. Solaris 10 Update 3 x86. 2. Disk 6 (c6t2d0) was dying, zpool status showed read errors, and de
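A common way to tell transient from persistent checksum errors after a drive replacement is to clear the counters and re-verify the whole pool with a scrub. The pool name below is assumed, since the post does not give one:

```shell
zpool clear tank        # reset the error counters
zpool scrub tank        # re-read and verify every allocated block
zpool status -v tank    # -v lists any files with permanent errors
```

If the CKSUM counts climb again during the scrub, the errors are live rather than leftovers from the replacement window.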

Re: [zfs-discuss] about variable block size

2008-10-13 Thread Roch Bourbonnais
Files are stored as either a single record (adjusted to the size of the file) or multiple fixed-size records. -r On 25 Aug 08, at 09:21, Robert wrote: > Thanks for your response, from which I have known more details. > However, there is one thing I am still not clear--maybe at first
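The fixed record size Roch refers to is the dataset's recordsize property (128K by default): a file smaller than recordsize is stored as one record sized to the file, and larger files are split into recordsize-sized records. The dataset name here is assumed:

```shell
zfs get recordsize tank/data       # show the current value (default 128K)
zfs set recordsize=32k tank/data   # affects newly written files only
```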