Cindy,
I gave your suggestion a try. I did the zpool clear and then did another zpool
scrub and all is happy now. Thank you for your help.
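For the archives, the sequence was roughly the following (tank is a placeholder for our pool name):
# clear the error counters left over from the lun outage, then re-verify the pool
zpool clear tank
zpool scrub tank
zpool status -x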
David
Cindy,
Thanks for the reply. I'll give that a try and then send an update.
Thanks,
David
I recently had an issue with my LUNs from our storage unit going offline. This
caused the zpool to get numerous errors on the luns. The pool is on-line, and
I did a scrub, but one of the raid sets is
degraded:
raidz2-3 DEGRADED 0 0 0
I was recently running Solaris 10 U9 and I decided that I would like to go
to Solaris 11 Express so I exported my zpool, hoping that I would just do
an import once I had the new system installed with Solaris 11.
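The plan was just the usual export/import dance (tank is again a placeholder):
zpool export tank
# ...install Solaris 11 Express, then on the new boot environment:
zpool import            # with no arguments, lists pools available for import
zpool import tank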
An update:
I had mirrored my boot drive when I installed Solaris 10U9 originally, so I
went ahead and rebooted the system to this disk instead of my Solaris 11
install. After getting the system up, I imported the zpool, and everything
worked normally.
So I guess there is some sort of
Hi, I'm setting up a ZFS environment running on a Sun x4440 + J4400 arrays
(similar to 7410 environment) and I was trying to figure out the best way to
map a disk drive physical location (tray and slot) to the Solaris device
c#t#d#. Do I need to install the CAM software to do this, or is
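One fallback I can think of, assuming the drive serial numbers are printed on the J4400 tray labels, is to match serials by hand:
# print vendor, product, and serial number for every disk the OS sees
iostat -En
# the Serial No fields can then be matched against the labels on each tray/slot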
I was wondering if anyone has any experience with how long a zfs destroy of
about 40 TB should take? So far, it has been about an hour... Is there any
good way to tell if it is working or if it is hung?
Doing a zfs list just hangs. If you do a more specific zfs list, then it is
okay... zfs
A few more details:
The system is a Sun x4600 running Solaris 10 Update 4.
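One thing I can watch in the meantime, on the theory that steady pool I/O means the destroy is still progressing (pool name is a placeholder):
# pool-wide I/O statistics, refreshed every 5 seconds
zpool iostat tank 5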
I would like advice about how to replace a raid 0 lun. The lun is basically a
raid 0 lun which is from a single disk volume group / volume from our Flexline
380 unit. So every disk in the unit is a volume group/volume/lun mapped to the
host. We then let ZFS do the raid.
We have a lun now
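The ZFS side of the replacement looks simple enough; a minimal sketch, with tank and the device names standing in as placeholders:
# swap the failed lun for a newly mapped one; ZFS resilvers onto the new device
zpool replace tank c10t0d0 c10t1d0
# or, if the replacement comes back under the same device name:
# zpool replace tank c10t0d0
It is the array-side procedure on the Flexline that I am less sure about.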
Additional information:
It looks like perhaps the original drive is in use, and the hot spare is assigned but not in use; see the zpool iostat output below:
raidz2    2.76T  4.49T      0      0  29.0K  18.4K
c10t600A0B80001139967CF945E80E95d0 - - 0
Yes! That worked to get the spare back to an available state. Thanks!
So that leaves me with trying to put together a recommended procedure to
replace a failed lun/disk from our Flexline 380. Does anyone have a
configuration in
which they are using a RAID 0 lun, which they need to
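For anyone who finds this thread later: as far as I understand it, an in-use hot spare goes back to the AVAIL state when it is detached from the pool once the data vdev is healthy again, and I believe that is what worked above (pool and device names are placeholders):
# free the hot spare after the resilver has completed
zpool detach tank c10t5d0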
To list your snapshots:
/usr/sbin/zfs list -H -t snapshot -o name
Then you could use that in a for loop:
for i in `/usr/sbin/zfs list -H -t snapshot -o name` ;
do
echo Destroying snapshot: $i
/usr/sbin/zfs destroy $i
done
The above would destroy all your snapshots. You could put a grep on
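For example, to limit it to snapshots whose names match a pattern (the pattern below is only an illustration):
for i in `/usr/sbin/zfs list -H -t snapshot -o name | grep nightly` ;
do
echo Destroying snapshot: $i
/usr/sbin/zfs destroy $i
done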
What are your thoughts or recommendations on having a zpool made up of
raidz groups of different sizes? Are there going to be performance issues?
For example:
pool: testpool1
state: ONLINE
scrub: none requested
config:
NAME         STATE     READ
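To make the question concrete, the layout I am considering would be built with something along these lines (disk names are placeholders):
# one 5-disk raidz group and one 3-disk raidz group in the same pool
zpool create testpool1 \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c2t0d0 c2t1d0 c2t2d0
# -f may be needed if zpool objects to the mismatched vdev widths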
I was in the process of doing a large zfs send | zfs receive when I decided
that I wanted to terminate the zfs send process. I killed it, but the zfs
receive doesn't want to die... In the meantime my zfs list command just hangs.
Here is the tail end of the truss output from a truss zfs
I don't believe LUN expansion is quite yet possible under Solaris 10 (11/06).
I believe this might make it into the next update but I'm not sure on that.
Someone from Sun would need to comment on when this will make it into the
production release of Solaris.
I know this because I was working
Well, the zfs receive process finally died, and now my zfs list works just fine.
If there is a better way to capture what is going on, please let me know and I
can duplicate the hang.
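My guess at a useful capture next time it hangs, in case that is the sort of data that would help (adjust the pgrep pattern as needed):
# identify the stuck receive, then grab its user-level stack and a live syscall trace
pgrep -fl 'zfs receive'
pstack `pgrep -f 'zfs receive'`
truss -p `pgrep -f 'zfs receive'`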
David
I was wondering if anyone had a script to parse the zpool status -v output
into a more machine-readable format?
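The rough direction I had in mind, as an untested sketch, is to pull the config section apart with awk:
# emit the device table from zpool status -v as NAME,STATE,READ,WRITE,CKSUM
zpool status -v | nawk '
/^config:/ { inconfig = 1; next }
/^errors:/ { inconfig = 0 }
inconfig && NF >= 5 { print $1 "," $2 "," $3 "," $4 "," $5 }
'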
Thanks,
David
Thank you to everyone that has replied. It sounds like I have a few options
with regards to upgrading or just waiting and patching the current environment.
David
I have run zpool scrub again, and I now see checksum errors again. Wouldn't
the checksum errors have gotten fixed by the first zpool scrub?
Can anyone recommend actions I should do at this point?
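In the meantime, the only extra data point I know how to gather is which devices and files the errors land on (tank is a placeholder for the pool name):
# show per-device error counters and the list of affected files, if any
zpool status -v tank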
Thanks,
David
I currently have a system which has two ZFS storage pools. One of the pools is
coming from a faulty piece of hardware. I would like to bring up our server
mounting the storage pool that is okay and NOT mounting the one from the
hardware with problems. Is there a simple way to NOT
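The direction I have been considering, in case it is the right one (pool names are placeholders):
# export the pool that lives on the faulty hardware before the reboot...
zpool export badpool
# ...so that only the healthy pool comes up at boot; the exported pool stays
# out of the picture until an explicit "zpool import badpool"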