James C. McPherson wrote:
> Lots of things. We can't channel
A nice spot for a smiley..
Thanks for the mail so far. I'm in Holland and at work right now (13.45h)
but I'll check out your suggestions as soon as I get home.
--
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
++ http://nagual.nl/ | Su
Hello Jürgen,
Monday, October 6, 2008, 6:27:54 PM, you wrote:
>> Cannot mount root on /[EMAIL PROTECTED],0/pci103c,[EMAIL PROTECTED],2/[EMAIL
>> PROTECTED],0:a fstype zfs
JK> Is that physical device path correct for your new system?
JK> Or is this the physical device path (stored on-disk in t
James C. McPherson wrote:
> Please add -kv to the end of your kernel$ line in
> grub,
#GRUB kernel$ add -kv
cmdk0 at ata0 target 0 lun 0
cmdk0 is /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED]/cmdk0,0
### end of output - the machine hangs here
> have you tried
> mount -F zfs rpool/ROOT/s
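For reference, a rough sketch of the failsafe recovery sequence (the dataset
name is assumed from the zfs list output later in this thread; untested):

  # boot the failsafe archive, then mount the root dataset by hand
  mkdir -p /a
  mount -F zfs rpool/ROOT/snv99 /a
  # inspect and repair under /a, then
  umount /a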
Hi all,
although I'm running all this in a Sol10u5 X4500, I hope I may ask this
question here. If not, please let me know where to head to.
We are running several X4500 with only 3 raidz2 zpools since we want
quite a bit of storage space[*], but the performance we get when using
zfs send is somet
Carsten Aulbert wrote:
> Hi all,
>
> although I'm running all this in a Sol10u5 X4500, I hope I may ask this
> question here. If not, please let me know where to head to.
>
> We are running several X4500 with only 3 raidz2 zpools since we want
> quite a bit of storage space[*], but the performanc
I had to hard power reset the laptop... Now I can't import my pool..
zpool status -x
bucket      UNAVAIL  0  0  0  insufficient replicas
  c6t0d0    UNAVAIL  0  0  0  cannot open
cfgadm
usb8/4   usb-storage   connected   configured   ok
--
> Again, what I'm trying to do is to boot the same OS from physical
> drive - once natively on my notebook, the other time from within
> VirtualBox. There are two problems, at least. The first is the bootpath,
> as VB emulates the disk as IDE while booting natively it is sata.
When I started exp
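(For what it's worth, one commonly suggested sketch, untested here, is to
keep two grub entries, each forcing its own bootpath property; the device
paths below are invented for illustration:)

  # native (sata) entry in menu.lst
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,bootpath="/pci@0,0/pci1022,7443@8/disk@0,0:a"
  # VirtualBox (emulated IDE) entry
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,bootpath="/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a"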
[EMAIL PROTECTED] wrote on 10/11/2008 09:36:02 PM:
>
> On Oct 10, 2008, at 7:55 PM, David Magda wrote:
>
> >
> > If someone finds themselves in this position, what advice can be
> > followed to minimize risks?
>
> Can you ask for two LUNs on different physical SAN devices and have
> an
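A minimal sketch of that idea (LUN names invented): give zfs one LUN from
each array and let it mirror them, so it can detect and self-heal bad
blocks coming from either side:

  # one LUN from each physical SAN device, mirrored by zfs
  zpool create tank mirror c2t0d0 c3t0d0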
Carsten Aulbert wrote:
> Hi all,
>
> although I'm running all this in a Sol10u5 X4500, I hope I may ask this
> question here. If not, please let me know where to head to.
>
> We are running several X4500 with only 3 raidz2 zpools since we want
> quite a bit of storage space[*], but the performa
Any news on whether the scrub/resilver/snap reset patch will make it into
the 10/08 update?
Thanks!
Wade Stuart
we are fallon
P: 612.758.2660
C: 612.877.0385
** Fallon has moved. Effective May 19, 2008 our address is 901 Marquette
Ave, Suite 2400, Minneapolis, MN 55402.
C. Bergström wrote:
> I had to hard power reset the laptop... Now I can't import my pool..
>
> zpool status -x
> bucket      UNAVAIL  0  0  0  insufficient replicas
>   c6t0d0    UNAVAIL  0  0  0  cannot open
>
>
> cfgadm
>
> usb8/4 usb-s
It would also be useful to see the output of `zfs list`
and `zfs get all rpool/ROOT/snv_99` while
booted from the failsafe archive.
- lori
dick hoogendijk wrote:
> James C. McPherson wrote:
>> Please add -kv to the end of your kernel$ line in
>> grub,
> #GRUB kernel$ add -kv
> cmdk0
I'm also very interested in this. I'm having a lot of pain with status
requests killing my resilvers. In the example below I was trying to test to
see if timf's auto-snapshot service was killing my resilver, only to find that
calling zpool status seems to be the issue:
[EMAIL PROTECTED] ~]# e
On Thu, Oct 9, 2008 at 10:33 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
>> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote:
>>> Nevada isn't production code. For real ZFS testing, you must use a
>>> prod
Lori Alt wrote:
>
> It would also be useful to see the output of `zfs list`
> and `zfs get all rpool/ROOT/snv_99` while
> booted from the failsafe archive.
# zfs list
rpool             69.0G  76.6G    40K  /a/rpool
rpool/ROOT        22.7G           18K  legacy
rpool/ROOT/snv99  22.7G
Blake Irvin wrote:
> I'm also very interested in this. I'm having a lot of pain with status
> requests killing my resilvers. In the example below I was trying to test to
> see if timf's auto-snapshot service was killing my resilver, only to find
> that calling zpool status seems to be the issu
> "r" == Ross <[EMAIL PROTECTED]> writes:
r> roll back to a previous configuration, while keeping
r> the ability to roll forward again if you wanted.
I believe that's called a ``clone''.
It doesn't make sense to roll back, write, roll forward, unless you
will accept that you are b
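For completeness, roughly how that looks with a clone (names invented; a
sketch, not a recipe):

  # snapshot first, then do the new writes against a clone of it
  zfs snapshot tank/fs@before
  zfs clone tank/fs@before tank/fs-try
  # "rolling forward" is just using tank/fs again; if the clone turns out
  # to be the keeper, promote it so it no longer depends on its origin
  zfs promote tank/fs-try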
Hi
Darren J Moffat wrote:
>
> What are you using to transfer the data over the network ?
>
Initially just plain ssh, which was way too slow; now we use mbuffer on
both ends and transfer the data over a socket via socat - I know that
mbuffer can already do this itself, but in a few tests socat seemed to b
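For the archives, the pipeline looks roughly like this (hosts, ports and
dataset names invented; a sketch of the approach, not a drop-in script):

  # receiving side: listen on a socket, buffer, feed zfs receive
  socat -u TCP-LISTEN:9090 STDOUT | mbuffer -s 128k -m 1G | zfs receive tank/copy
  # sending side: zfs send, buffer, push over the socket
  zfs send tank/data@snap | mbuffer -s 128k -m 1G | socat -u STDIN TCP:recvhost:9090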
dick hoogendijk wrote:
> Lori Alt wrote:
>> It would also be useful to see the output of `zfs list`
>> and `zfs get all rpool/ROOT/snv_99` while
>> booted from the failsafe archive
> # zfs list
> rpool             69.0G  76.6G    40K  /a/rpool
> rpool/ROOT        22.7G           18K  legacy
> rpool
Hi Thomas,
Thomas Maier-Komor wrote:
>
> Carsten,
>
> the summary looks like you are using mbuffer. Can you elaborate on what
> options you are passing to mbuffer? Maybe changing the blocksize to be
> consistent with the recordsize of the zpool could improve performance.
> Is the buffer running
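To make the blocksize suggestion concrete (a sketch, assuming the default
128K recordsize; dataset names invented):

  # check the dataset's recordsize, then hand the same value to mbuffer
  zfs get recordsize tank/data
  zfs send tank/data@snap | mbuffer -s 128k -m 2G ...   # rest of pipeline as before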
OK, so I left the thumb drive to try to back up all weekend. It got *most* of
the first snapshot copied over, about 50MB, and that's it. So I tried an
external USB hard drive today, and it actually bothered to copy over the
snapshots, but it does so very slowly. It copied over the first snapsho
Correct, that is a workaround. The fact that I use the beta (alpha?)
zfs auto-snapshot service means that when the service checks for active
scrubs, it kills the resilver.
I think I will talk to Tim about modifying his method script to run
the scrub check with least privileges (i.e., not as root).
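Something like this in the method script might do it (untested, and it
assumes the reset only happens when zpool status runs with enough
privilege to write to the pool; pool name invented):

  # run the scrub/resilver check as an unprivileged user
  pool=tank
  if su nobody -c "/usr/sbin/zpool status $pool" | grep "in progress" >/dev/null
  then
      echo "scrub or resilver active, skipping snapshot"
      exit 0
  fi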
Lori Alt wrote:
> dick hoogendijk wrote:
>> Lori Alt wrote:
> Since I don't fully understand the problem, I can't
> be sure this will work, but I'm pretty sure it won't
> hurt: try setting the mountpoint of the dataset to "/":
>
> zfs set mountpoint=/ rpool/ROOT/snv99
>
> Then reboot and see if
After performing the following steps in this exact order, I am now seeing
CKSUM errors in my zpool. I've never seen any checksum errors in this
zpool before.
1. Pre-existing running setup (RAIDZ 7D+1P) - 8x 1TB, Solaris 10 Update 3
x86.
2. Disk 6 (c6t2d0) was dying; $(zpool status) showed read errors, and de
Files are stored as either a single record (adjusted to the size of the
file) or a number of fixed-size records.
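(Concretely, as a sketch with invented names: recordsize is an upper bound,
so a 4K file occupies one 4K record, while a 1MB file at the default
becomes eight 128K records:)

  zfs get recordsize tank
  zfs set recordsize=64k tank   # per-dataset tunable; affects newly written files only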
-r
On 25 Aug 08 at 09:21, Robert wrote:
> Thanks for your response, from which I have learned more details.
> However, there is one thing I am still not clear--maybe at first