On Tuesday, 12 February 2008 at 07:22 +0100, Johan Kooijman wrote:
Good morning all,
Hi,
Can anyone confirm that 3ware RAID controllers are indeed not working
under Solaris/OpenSolaris? I can't seem to find them in the HCL.
I can confirm they don't work.
We're now using a 3Ware 9550SX as a S-ATA
Jason J. W. Williams wrote:
X4500 problems seconded. Still having issues with port resets due to
the Marvell driver, though they seem considerably more transient and
less likely to lock up the entire system in the most recent (b72)
OpenSolaris builds.
Build 72 is pretty old. The build
-J
On Feb 12, 2008 9:35 AM, Carson Gaspar [EMAIL PROTECTED] wrote:
I think so. On your backup pool, roll back to the last snapshot that
was successfully received. Then you should be able to send an incremental
between that one and the present.
Jeff
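Jeff's rollback-then-incremental approach might be sketched like this (the pool, dataset, and snapshot names below are hypothetical; substitute your own):

```shell
# On the backup pool, roll back to the last snapshot that was
# received successfully, discarding any partial state after it.
zfs rollback -r backup/data@2008-02-01

# On the source, take a fresh snapshot of the present state.
zfs snapshot tank/data@2008-02-12

# Send only the increment between the last good snapshot and now.
zfs send -i tank/data@2008-02-01 tank/data@2008-02-12 | \
    zfs receive backup/data
```

The incremental stream only applies cleanly if the receiving side is exactly at the named base snapshot, which is why the rollback comes first.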
On Thu, Feb 07, 2008 at 08:38:38AM -0800, Ian wrote:
I keep my system synchronized to a USB disk from time to
[EMAIL PROTECTED] said:
One thought I had was to unconfigure the bad disk with cfgadm. Would that
force the system back into the 'offline' response?
In my experience (X4100 internal drive), that will make ZFS stop trying
to use it. It's also a good idea to do this before you hot-unplug the
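The cfgadm approach mentioned above might look like this (the attachment-point ID and device name are hypothetical; check `cfgadm -al` for the real ones):

```shell
# List attachment points to find the failed disk's Ap_Id.
cfgadm -al

# Unconfigure the bad disk so the OS (and ZFS) stop trying to use it.
cfgadm -c unconfigure sata0/3

# After replacing the drive, reconfigure it and let ZFS resilver.
cfgadm -c configure sata0/3
zpool replace tank c1t3d0
```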
On Feb 1, 2008, at 7:17 AM, Nicolas Dorfsman wrote:
Hi,
I wrote a Hobbit script around the lunmap/hbamap commands to monitor
SAN health.
I'd like to add detail on what is being hosted by those LUNs.
With SVM, metastat -p is helpful.
With ZFS, the zpool status output is awful for scripting.
Is
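For scripting, the -H (no headers) and -o (choose columns) flags give tab-separated output that is much easier to parse than zpool status; a sketch:

```shell
# Tab-separated, header-less pool overview: one pool per line.
zpool list -H -o name,size,capacity,health

# Per-filesystem usage, also tab-separated.
zfs list -H -o name,used,avail,mountpoint

# Quick health check: prints "all pools are healthy",
# or full status for only the unhealthy pools.
zpool status -x
```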
Here's a bit more info. The drive appears to have failed at 22:19 EST
but it wasn't until 1:30 EST the next day that the system finally
decided that it was bad. (Why?) Here's some relevant log stuff (with
lots of repeated 'device not responding' errors removed). I don't know if
it will be
Ok. I think I answered my own question. ZFS _didn't_ realize that the
disk was bad/stale. I power-cycled the failed drive (external) to see if
it would come back up and/or run diagnostics on it. As soon as I did
that, ZFS put the disk ONLINE and started using it again! Observe:
bash-3.00#
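To keep ZFS from silently reusing a suspect drive while you run diagnostics, the disk can be taken offline explicitly (pool and device names hypothetical):

```shell
# Mark the suspect disk offline so ZFS stops issuing I/O to it.
zpool offline tank c1t3d0

# ...power-cycle the enclosure, run diagnostics, etc....

# Bring it back only once you trust it; a resilver catches it up.
zpool online tank c1t3d0
```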
Carson Gaspar wrote:
Tim wrote:
A much cheaper (and probably the BEST-supported) card is the Supermicro
one based on the Marvell chipset. This is the same chipset that is used in
the Thumper X4500, so you know that the folks at Sun are doing their due
diligence to make sure the
Johan Kooijman wrote:
Good morning all,
Can anyone confirm that 3ware RAID controllers are indeed not working
under Solaris/OpenSolaris? I can't seem to find them in the HCL.
We're now using a 3Ware 9550SX as a S-ATA RAID controller. The
original plan was to disable all its RAID functions
Hmm... this won't help you, but I think I'm having similar problems with an
iSCSI target device. If I offline the target, zfs hangs for just over 5
minutes before it realises the device is unavailable, and even then it doesn't
report the problem until I repeat the zpool status command.
What I
Hi List,
I'm wondering if one of you expert DTrace gurus can help me. I want to
write a DTrace script to print out a histogram of how long I/O requests
sit in the service queue. I can output the results with the quantize
method; I'm not sure which provider I should be using for this.
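One common starting point is the io provider, timing each buffer from io:::start to io:::done. Strictly speaking this measures total service time (queue plus device), not queue time alone, so treat it as a sketch:

```shell
dtrace -n '
io:::start
{
        /* Remember when this buf entered the I/O subsystem. */
        start[arg0] = timestamp;
}

io:::done
/start[arg0]/
{
        /* Nanoseconds from start to completion, as a
           power-of-two histogram via quantize(). */
        @time["I/O service time (ns)"] = quantize(timestamp - start[arg0]);
        start[arg0] = 0;
}'
```

Run it for a while, then press Ctrl-C to print the aggregation.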
Tim wrote:
A much cheaper (and probably the BEST-supported) card is the Supermicro
one based on the Marvell chipset. This is the same chipset that is used in
the Thumper X4500, so you know that the folks at Sun are doing their due
diligence to make sure the drivers are solid.
Except the
This is Solaris 10U3 w/127111-05.
It appears that one of the disks in my zpool died yesterday. I got
several SCSI errors finally ending with 'device not responding to
selection'. That seems to be all well and good. ZFS figured it out and
the pool is degraded:
maxwell /var/adm zpool status
Will Murnane wrote:
On Feb 12, 2008 4:45 AM, Lida Horn [EMAIL PROTECTED] wrote:
The latest changes to the sata and marvell88sx modules
have been put back to Solaris Nevada and should be
available in the next build (build 84). Hopefully,
those of you who use it will find the changes
On 2/12/08, Johan Kooijman [EMAIL PROTECTED] wrote:
Good morning all,
Can anyone confirm that 3ware RAID controllers are indeed not working
under Solaris/OpenSolaris? I can't seem to find them in the HCL.
We're now using a 3Ware 9550SX as a S-ATA RAID controller. The
original plan was to
Yesterday, we needed to stop our NFS file server.
After the restart, one ZFS pool was marked as faulted.
We exported the faulted pool,
then tried to import it (even with the -f option), but the import fails.
The message from Solaris is: cannot import; one or more devices currently unavailable.
What can I do?
Is there a
bda wrote:
I haven't noticed this behavior when ZFS has (as recommended) the
full disk.
Good to know, as I intended to use the whole disks anyway.
Thanks,
Tom
This message posted from opensolaris.org
Ralf Ramge wrote:
Quotas are applied to file systems, not pools, and as such are pretty
independent of the pool size. I found it best to give every user
his/her own filesystem and apply individual quotas afterwards.
Does this mean that if I have a pool of 7 TB with one filesystem for
+--
| On 2008-02-12 02:40:33, Thomas Liesner wrote:
|
| Subject: Re: [zfs-discuss] Avoiding performance decrease when pool usage is
| over 80%
|
| Nobody out there who ever had problems with low diskspace?
Only in
Nobody out there who ever had problems with low diskspace?
Regards,
Tom
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Nicolas Szalay wrote:
On Tuesday, 12 February 2008 at 07:22 +0100, Johan Kooijman wrote:
Good morning all,
Hi,
Can anyone confirm that 3ware RAID controllers are indeed not working
under Solaris/OpenSolaris? I can't seem to find them in the HCL.
I can confirm they don't work.
We're now
Thomas Liesner wrote:
Nobody out there who ever had problems with low diskspace?
Okay, I found your original mail :-)
Quotas are applied to file systems, not pools, and as such are pretty
independent of the pool size. I found it best to give every user
his/her own filesystem and applying
Thomas Liesner wrote:
Does this mean that if I have a pool of 7 TB with one filesystem for all
users, with a quota of 6 TB, I'd be all right?
Yep. Although I *really* recommend creating individual file systems,
e.g. if you have 1,000 users on your server, I'd create 1,000 file
systems with a
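Ralf's one-filesystem-per-user scheme might be sketched as follows (pool, path, and user names are hypothetical):

```shell
# One filesystem per user, each with its own quota.
zfs create tank/home/alice
zfs set quota=10G tank/home/alice

# For many users, the same thing in a loop:
for u in alice bob carol; do
        zfs create "tank/home/$u"
        zfs set quota=10G "tank/home/$u"
done
```

Since ZFS filesystems are cheap, per-user filesystems also give you per-user snapshots and properties, not just quotas.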
If you can't use zpool status, you should probably check whether your system
is actually seeing all of the devices needed for this pool...
e.g. with format...
Regards,
Tom