Has anybody here got any thoughts on how to resolve this problem:
http://www.opensolaris.org/jive/thread.jspa?messageID=261204&tstart=0
It sounds like two of us have been affected by this now, and it's a bit of a
nuisance having your entire server hang when a drive is removed; it makes you
worry about
We do not use raidz*.
We do virtually no RAID or striping through the OS.
We have 4 disk RAID1 volumes. RAID1 was created from CAM on 2540.
2540 does not have RAID 1+0 or 0+1.
cheers
tharindu
Brandon High wrote:
On Tue, Jul 22, 2008 at 10:35 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED]
many on HD setup:
Thanks for the replies, but my actual doubt is about the MB.
I would go with the suggestion of different HDs (even if I think that the speed
will be aligned to the slowest of them), and maybe raidz2 (even if I think
raidz is enough for a home server)
bhigh:
It seems that the 780G/SB700
On Thu, Jul 24, 2008 at 1:28 AM, Steve [EMAIL PROTECTED] wrote:
Booting from CF is also interesting, but it seems it's possible to boot from
the raidz and I would go for it!
It's not possible to boot from a raidz volume yet. You can only boot
from a single drive or a mirror.
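For anyone following along, the difference might look like this (pool and device names here are placeholders, not taken from the original posts):

```shell
# A bootable root pool has to be a single disk or a two-way mirror;
# root pools also need to live on labeled slices (e.g. cXtYdZs0):
zpool create rpool mirror c0t0d0s0 c0t1d0s0

# A raidz pool like this one cannot be used as a boot pool yet:
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
```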
-B
--
Brandon High
On Wed, Jul 23, 2008 at 10:02 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED] wrote:
We do not use raidz*. Virtually, no raid or stripe through OS.
So it's ZFS on a single LUN exported from the 2540? Or have you
created a zpool from multiple raid1 LUNs on the 2540?
Have you tried
As I used OpenSolaris for some time I wanted to give SXCE (snv_93) a chance on
my home server. Now I'm wondering what would be the best setup for my disks.
I have two 300GiB PATA disks* in stock, two 160G SATA disks** in use by my old
linux server and - maybe for temporary use - an external 160G
Following the VIA link and googling a bit I found something that seems
interesting:
- MB: http://www.avmagazine.it/forum/showthread.php?s=&threadid=108695
- in the case http://www.chenbro.com/corporatesite/products_detail.php?serno=100
Are they viable??
Hi Jorgen,
This isn't an answer to your problem I'm afraid, but a request for you to do a
test when you get your new x4500.
Could you try pulling a SATA drive to see if the system hangs? I'm finding
Solaris just locks up if I pull a drive connected to the Supermicro
AOC-SAT2-MV8 card, and I
Do you have any recommended parameters I should try?
Ellis, Mike wrote:
Would adding a dedicated ZIL/SLOG (what is the difference between those 2 exactly? Is there one?) help meet your requirement?
The idea would be to use some sort of relatively large SSD drive of some variety to absorb
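As an aside on the terminology question: the ZIL is the intent-log mechanism itself, while a "slog" is a separate device dedicated to holding it. Adding one might be sketched like this (pool and device names are placeholders):

```shell
# Attach a dedicated log device (e.g. an SSD) so synchronous writes
# land on the fast device instead of the main pool disks:
zpool add tank log c2t0d0

# A mirrored log is also possible, for redundancy:
# zpool add tank log mirror c2t0d0 c2t1d0
```

One caveat worth knowing: on builds of this era a log device could not be removed from a pool once added, so test on a scratch pool first.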
We have had a disk fail in the existing x4500 and it sure froze the
whole server. I believe it is an OS problem which (should have) been
fixed in a version newer than the one we have. If you want me to test it on
the new x4500, since it runs Sol10 5/08, I can do so.
Ross wrote:
Hi Jorgen,
This
We had the same problem; at least a good chunk of the ZFS volumes died when the
drive failed. Granted, I don't think the drive actually failed; it was more
likely a driver issue/lockup. A reboot 2 weeks ago brought the machine back up
and the drive hasn't had a problem since. I was behind on two patches that
Yes, I am very happy with the M2A-VM.
Charles
On Wed, Jul 23, 2008 at 5:05 PM, Steve [EMAIL PROTECTED] wrote:
Thank you for all the replies!
(and in the meantime I was just having dinner! :-)
To recap:
tcook:
you are right, in fact I'm thinking of having just 3 or 4 for now, without
I'm upgrading my B92 UFS-boot system to ZFS root using Live Upgrade. It
appears to work fine so far, but I'm wondering why it allocates a ZFS
filesystem for swap when I already have a dedicated swap slice.
Shouldn't it just use any existing swap slice rather than creating a ZFS
one?
--
Alan
Hmmn, that *sounds* as if you are saying you've a very-high-redundancy
RAID1 mirror, 4 disks deep, on an 'enterprise-class tier 2 storage' array
that doesn't support RAID 1+0 or 0+1.
That sounds weird: the 2540 supports RAID levels 0, 1, (1+0), 3 and 5,
and deep mirrors are normally only
Great news, thanks for the update :)
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Never mind -- this problem seems to have been fixed in b94. I saw a bug whose
description looked like it fit (slow clone removal; I didn't write down the
bug number) and gave it a shot. The pool imported and things seem to be back
up and running.
Hi Alan,
ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.
The ZFS boot/install project and information trail starts here:
http://opensolaris.org/os/community/zfs/boot/
Cindy
Alan Burlison wrote:
I'm
On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
We do not use raidz*. Virtually, no raid or stripe through OS.
We have 4 disk RAID1 volumes. RAID1 was created from CAM on 2540.
What ZFS block size are you using?
Are you using synchronous writes for each 700-byte message? 10k
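For small fixed-size messages like these, the usual knob to look at is the recordsize property. A sketch, with hypothetical dataset names:

```shell
# Show the current block size for the dataset
zfs get recordsize tank/feeds

# Drop it closer to the message size; this only affects newly
# written files, not blocks already on disk.
zfs set recordsize=8k tank/feeds
```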
On Thu, Jul 24, 2008 at 10:38:49AM -0400, Charles Menser wrote:
I installed it with snv_86 in IDE controller mode, and have since
upgraded ending up at snv_93.
Do you know what implications there are for using AHCI vs IDE modes?
I had the same question and Neal Pollack [EMAIL PROTECTED] told
I installed it with snv_86 in IDE controller mode, and have since
upgraded ending up at snv_93.
Do you know what implications there are for using AHCI vs IDE modes?
Thanks,
Charles
On Thu, Jul 24, 2008 at 9:26 AM, Florin Iucha [EMAIL PROTECTED] wrote:
On Thu, Jul 24, 2008 at 08:22:16AM -0400,
On Thu, Jul 24, 2008 at 08:22:16AM -0400, Charles Menser wrote:
Yes, I am very happy with the M2A-VM.
You will need at least SNV_93 to use it in AHCI mode.
The northbridge gets quite hot, but that does not seem to be impairing
its performance. I have the M2A-VM with an AMD 64 BE-2400 (45W) and
On Thu, 24 Jul 2008, Brandon High wrote:
Have you tried exporting the individual drives and using zfs to handle
the mirroring? It might have better performance in your situation.
It should indeed have better performance. The single LUN exported
from the 2540 will be treated like a single
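The layout being suggested, i.e. exporting the disks as individual LUNs from the 2540 and letting ZFS do the mirroring, might look like this (LUN names are placeholders):

```shell
# Four single-disk LUNs from the array, mirrored in pairs by ZFS.
# With ZFS handling the redundancy it can detect and self-heal
# checksum errors, which a single array-side RAID1 LUN cannot offer.
zpool create tank mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0
```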
On Wed, Jul 23, 2008 at 11:36 AM, [EMAIL PROTECTED] wrote:
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as well.
Perhaps it also deserves a mention in the FAQ somewhere near
[EMAIL PROTECTED] wrote:
ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.
The ZFS boot/install project and information trail starts here:
http://opensolaris.org/os/community/zfs/boot/
Is this going to
Mike Gerdts wrote:
On Wed, Jul 23, 2008 at 11:36 AM, [EMAIL PROTECTED] wrote:
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as well.
Perhaps it also deserves a mention in the FAQ
On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
Do you have any recommended parameters I should try?
Using an external log is really not needed when using the StorageTek
2540. I doubt that it is useful at all.
Bob
==
Bob Friesenhahn
[EMAIL
Enda O'Connor ( Sun Micro Systems Ireland) wrote:
Mike Gerdts wrote:
On Wed, Jul 23, 2008 at 11:36 AM, [EMAIL PROTECTED] wrote:
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as
Alan,
Just make sure you use dumpadm to point to valid dump device and
this setup should work fine. Please let us know if it doesn't.
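A minimal sketch of that check, assuming the default rpool/dump volume name:

```shell
# Show where crash dumps currently go
dumpadm

# Point the dump device at the dump zvol in the root pool
dumpadm -d /dev/zvol/dsk/rpool/dump
```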
The ZFS strategy behind automatically creating separate swap and
dump devices includes the following:
o Eliminates the need to create separate slices
o Enables
[EMAIL PROTECTED] wrote:
Alan,
Just make sure you use dumpadm to point to valid dump device and
this setup should work fine. Please let us know if it doesn't.
The ZFS strategy behind automatically creating separate swap and
dump devices includes the following:
o Eliminates the need to
Alan Burlison wrote:
[EMAIL PROTECTED] wrote:
ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.
The ZFS boot/install project and information trail starts here:
Lori Alt wrote:
In designing the changes to the install software, we had to
decide whether to be all things to all people or make some
default choices. Being all things to all people makes the
interface a lot more complicated and takes a lot more
engineering effort (we'd still be developing
Aaron Botsis wrote:
Hello, I've hit this same problem.
Hernan/Victor, I sent you an email asking for the description of this
solution. I've also got important data on my array. I went to b93 hoping
there'd be a patch for this.
I caused the problem in a manner identical to Hernan; by
Lori Alt [EMAIL PROTECTED] writes:
use of swap/dump zvols? If your existing swap/dump slice
is contiguous with your root pool, you can grow the root
pool into that space (using format to merge the slices).
A reboot or re-import of the pool will cause it to grow into
the newly-available
I've discovered this as well - b81 to b93 (latest I've tried). I
switched from my on-board SATA controller to AOC-SAT2-MV8 cards because
the MCP55 controller caused random disk hangs. Now the SAT2-MV8 works as
long as the drives are working correctly, but the system can't handle a
drive
Alan Burlison wrote:
Lori Alt wrote:
In designing the changes to the install software, we had to
decide whether to be all things to all people or make some
default choices. Being all things to all people makes the
interface a lot more complicated and takes a lot more
engineering effort
Did you have success?
What version of Solaris? OpenSolaris? etc?
I'd want to use this card with the latest Solaris 10 (update 5?)
The connector on the adapter itself is IPASS and the Supermicro part number
for cables from the adapter to standard SATA drives is CBL-0118L-02 IPASS to 4
SATA
Yeah, I thought of the storage forum today and found somebody else with the
problem, and since my post a couple of people have reported similar issues on
Thumpers.
I guess the storage thread is the best place for this now:
http://www.opensolaris.org/jive/thread.jspa?threadID=42507&tstart=0
Have any of you guys reported this to Sun? A quick search of the bug database
doesn't bring up anything that appears related to sata drives and hanging or
hot swapping.
Rex Kuo wrote:
Dear All:
We are looking for best practices for Solaris as an NFS server sharing a
number of ZFS file systems, with RHEL 5.0 NFS clients mounting from it.
Any S10 NFS-server and RHEL 5.0 NFS-client tuning guides or suggestions are
welcome.
We try to keep
PS: I scaled down to mini-ITX form factor because it seems that the
http://www.chenbro.com/corporatesite/products_detail.php?serno=100 is the
PERFECT case for the job!
On Thu, Jul 24, 2008 at 3:41 AM, Steve [EMAIL PROTECTED] wrote:
Or Atom maybe viable?
The Atom CPU has pretty crappy performance. At 1.6 GHz, performance is
somewhere between a 900 MHz Celeron-M and a 1.13 GHz Pentium 3-M. It's also
single-core. It would probably work, but it could be CPU bound on
Lori Alt wrote:
What if you turned slice 1 into a pool (a new one), migrated your BE
into it,
then grow that pool to soak up the space in the slices that follow it?
You might
still need to save some stuff elsewhere while you're doing the transition.
Doesn't work, because LU wants to
s == Steve [EMAIL PROTECTED] writes:
s About freedom: I for sure would prefer open-source driver
s availability, let's account for it!
There is source for the Intel gigabit cards in the source browser.
There are known issues with the Marvell drivers in X4500s. You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500
You will want to especially pay attention
Ross,
The X4500 uses 6x Marvell 88SX SATA controllers for its internal disks. They
are not Supermicro controllers. The new X4540 uses an LSI chipset instead of
the Marvell chipset.
--Brett
Alan Burlison wrote:
Lori Alt wrote:
What if you turned slice 1 into a pool (a new one), migrated your BE
into it,
then grow that pool to soak up the space in the slices that follow
it? You might
still need to save some stuff elsewhere while you're doing the
transition.
Doesn't
I will look into this. I don't know why it would have failed.
Lori
Rainer Orth wrote:
Lori Alt [EMAIL PROTECTED] writes:
use of swap/dump zvols? If your existing swap/dump slice
is contiguous with your root pool, you can grow the root
pool into that space (using format to merge the
Ross,
The X4500 uses 6x Marvell 88SX SATA controllers for
its internal disks. They are not Supermicro
controllers. The new X4540 uses an LSI chipset
instead of the Marvell chipset.
--Brett
Yup, and the Supermicro card uses the Marvell Hercules-2 88SX6081 (Rev. C0)
SATA Host
Hoping this is not too off topic. Can anyone confirm you can break a
mirrored ZFS root pool once formed? I basically want to clone a boot drive,
take it to another piece of identical hardware, and have two machines (or
more). I am running Indiana b93 on x86 hardware. I have read
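For what it's worth, the obvious command to try is zpool detach, but it comes with a big caveat (device names are placeholders; this is a sketch, not a confirmed procedure):

```shell
# Remove one side of the root mirror. The disk keeps a full copy of
# the data, BUT its labels are marked as detached, so it may not be
# importable as a standalone pool on the second machine; verify on
# scratch hardware before relying on this.
zpool detach rpool c1t1d0s0

# Whichever disk boots the new machine needs boot blocks (x86/GRUB):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
```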
Aaargh! My perfect case isn't working!!
Shouldn't the backplane be just a pass-through? Was something not mounted? Was
the power not enough for all the disks? Can it depend on the disks?
Did you get any replies?
I would also report it to Chenbro tech support directly
Lori Alt wrote:
Sounds like LU needs some of the same swap/dump flexibility
that we just gave initial install. I'll bring this up within the team.
The (partial) workaround I tried was:
1. create a ZFS BE in an existing pool that has enough space
2. lumount the BE, edit the vfstab to use the
Thank you very much Brandon for pointing out the issue with the case!!
(anyway that's really a pity, I hope it will find a solution!...)
About Atom, a person from Sun pointed out that the only good version for ZFS
would be the N200 (64-bit). Anyway I wouldn't make a problem of money
(still ;-), but
I have 4 filesystems in a pool that I want to replicate into another
pool, so I've taken snapshots prior to replication:
pool1/home1 14.3G 143G 14.3G /home1
pool1/[EMAIL PROTECTED] 1.57M - 14.3G -
pool1/home2 4.31G 143G 4.31G /home2
pool1/[EMAIL PROTECTED] 0
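The replication itself is typically zfs send piped into zfs receive. A sketch, noting that the snapshot names in the listing above are mangled, so the ones here are illustrative:

```shell
# Send each snapshot into the target pool, keeping the same fs names
zfs send pool1/home1@snap | zfs receive pool2/home1
zfs send pool1/home2@snap | zfs receive pool2/home2

# Recent builds can also send a whole tree recursively:
# zfs send -R pool1@snap | zfs receive -d pool2
```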
On Thu, Jul 24, 2008 at 1:28 AM, Steve
[EMAIL PROTECTED] wrote:
Booting from CF is also interesting, but it seems it's
possible to boot from the raidz and I would go for
it!
It's not possible to boot from a raidz volume yet.
You can only boot
from a single drive or a mirror.
If I
Yup, and the Supermicro card uses the Marvell
Hercules-2 88SX6081 (Rev. C0) SATA Host Controller,
which is part of the series supported by the same
driver:
http://docs.sun.com/app/docs/doc/816-5177/marvell88sx7d?a=view. I've seen the
Supermicro card mentioned
in connection with the
s And, if better, I'm open also to intel!
With intel you can possibly get onboard AHCI that works, and the intel
gigabit MAC, and 16GB instead of 8GB RAM on a desktop
board. Also the video may be better-supported. But it's, you know,
intel.
Miles, sorry, but probably I'm missing something to
On Sun, Jul 13, 2008 at 3:37 AM, Bryan Wagoner [EMAIL PROTECTED] wrote:
I was a little confused on what to get, so I ended up buying this off the
Provantage website where I'm getting the card. The card was like $123 and
each of these cables was like $22.
CBL-0118L-02 IPASS to 4 SATA
Since we were drowning, we decided to go ahead and reboot with my
guesses, even though I have not heard any expert opinions on the
changes. (Also, 3 mins was way underestimated. It takes 12 minutes to
reboot our x4500.)
The new values are: (original)
set bufhwm_pct=10 (2%)
set
Richard Elling wrote:
There are known issues with the Marvell drivers in X4500s. You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500
You will want
Enda O'Connor ( Sun Micro Systems Ireland) [EMAIL PROTECTED]
writes:
[..]
meant to add that on x86 the following should do the trick (again, I'm open
to correction):
installgrub /boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0
haven't tested the x86 one though.
I used
On Fri, Apr 25, 2008 at 9:22 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello andrew,
Thursday, April 24, 2008, 11:03:48 AM, you wrote:
a What is the reasoning behind ZFS not enabling the write cache for
a the root pool? Is there a way of forcing ZFS to enable the write cache?
The
Lida Horn wrote:
Richard Elling wrote:
There are known issues with the Marvell drivers in X4500s. You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
On Thu, Jul 24, 2008 at 08:38:21PM -0700, Neal Pollack wrote:
As of build 94, it does not automatically bring the disk online.
I replaced a failed disk on an x4500 today running Nevada build 94, and
still
had to manually issue
# cfgadm -c configure sata1/3
# zpool replace tank cxt2d0