[zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-24 Thread Ross
Has anybody here got any thoughts on how to resolve this problem:
http://www.opensolaris.org/jive/thread.jspa?messageID=261204&tstart=0

It sounds like two of us have been affected by this now, and it's a bit of a 
nuisance having your entire server hang when a drive is removed; it makes you worry 
about how Solaris would handle a drive failure.

Has anybody tried pulling a drive on a live Thumper?  Surely they don't hang 
like this.  Although, having said that, I do remember they have a great big 
warning in the manual about using cfgadm to stop the disk before removal, saying:

Caution - You must follow these steps before removing a disk from service.  
Failure to follow the procedure can corrupt your data or render your file 
system inoperable.

Ross
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-24 Thread Tharindu Rukshan Bamunuarachchi




We do not use raidz*.
Virtually no RAID or striping is done through the OS.

We have 4 disk RAID1 volumes. RAID1 was created from CAM on 2540.

2540 does not have RAID 1+0 or 0+1.

cheers
tharindu

Brandon High wrote:

  On Tue, Jul 22, 2008 at 10:35 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED] wrote:
  
  
Dear Mark/All,

Our trading system is writing to local and/or array volume at 10k
messages per second.
Each message is about 700bytes in size.

Before ZFS, we used UFS.
Even with UFS, there was a peak every 5 seconds due to fsflush invocation.

However, each peak is about ~5ms.
Our application cannot recover from such high latency.

  
  
Is the pool using raidz, raidz2, or mirroring? How many drives are you using?

-B

  


***

The information contained in this email including in any attachment is 
confidential and is meant to be read only by the person to whom it is 
addressed. If you are not the intended recipient(s), you are prohibited from 
printing, forwarding, saving or copying this email. If you have received this 
e-mail in error, please immediately notify the sender and delete this e-mail 
and its attachments from your computer.

***
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
To the many who replied about the HD setup:

Thanks for the replies, but my actual question is about the motherboard.
I would go with the suggestion of different HDs (even if I think the speed 
will be limited by the slowest of them), and maybe raidz2 (even if I think 
raidz is enough for a home server)



bhigh:

It seems that the 780G/SB700 and Nvidia 8200 are good choices.

Based on the Tom's Hardware comparison 
(http://www.tomshardware.com/reviews/amd-nvidia-chipset,1972.html) I would 
choose the AMD chipset, but there is very little difference in speed and power 
consumption, so it's better to evaluate compatibility!

Booting from CF is interesting, but it seems it is possible to boot from the 
raidz, and I would go for that!

PS: "The good is the enemy of the best", so what is the best? ;-)

 
 
Miles Nordin:

The VIA stuff is interesting, but I definitely need something proven!...

About compatibility, it seems it will only improve with time, but since the 
only spare hardware I have now is a 386sx with a 20MB HD, I have to buy something new!

About the bugs... that is exactly what I'd like to avoid with your advice!

About freedom: I would certainly prefer open-source driver availability, so let's 
account for it!

For the rest I'm a bit lost again... Let's say that, for many reasons, I would 
like to choose a motherboard with everything needed onboard... So I'm trying to 
understand how to use all the interesting advice in this thread... Is there a 
motherboard that can balance stability (some selected older options) and performance 
(newer options) for the expected life of the machine (the next 3 years)?

About reuse with Linux: for now I'm really interested in a fileserver with 
ZFS, so I would focus on and stick to Solaris compatibility (for different 
reasons I wouldn't choose the Mac or FreeBSD implementations).

And, if it's better, I'm also open to Intel!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Brandon High
On Thu, Jul 24, 2008 at 1:28 AM, Steve [EMAIL PROTECTED] wrote:
 Booting from CF is interesting, but it seems it is possible to boot from the 
 raidz, and I would go for that!

It's not possible to boot from a raidz volume yet. You can only boot
from a single drive or a mirror.
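
For reference, a minimal sketch of the mirrored alternative - attaching a second
disk to an existing root pool (disk names here are made up, and the new disk also
needs boot blocks installed; see the "Cannot attach mirror to SPARC zfs root pool"
thread elsewhere in this digest):

# zpool attach rpool c0t0d0s0 c0t1d0s0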

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-24 Thread Brandon High
On Wed, Jul 23, 2008 at 10:02 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED] wrote:
 We do not use raidz*. Virtually, no raid or stripe through OS.

So it's ZFS on a single LUN exported from the 2540? Or have you
created a zpool from multiple raid1 LUNs on the 2540?

Have you tried exporting the individual drives and using zfs to handle
the mirroring? It might have better performance in your situation.
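
A rough sketch of that layout, assuming each physical drive in the 2540 is
exported as its own LUN (the pool and device names below are placeholders):

# zpool create tradepool mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0 mirror c4t4d0 c4t5d0

ZFS then does the mirroring itself and can spread the synchronous writes across
the mirror pairs.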

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Pool setup suggestions

2008-07-24 Thread Stefan Palm
As I have used OpenSolaris for some time, I wanted to give SXCE (snv_93) a chance on 
my home server. Now I'm wondering what would be the best setup for my disks.

I have two 300GiB PATA disks* in stock, two 160G SATA disks** in use by my old 
linux server and - maybe for temporary use - an external 160G USB disk***.

My first thought was to take one of the 300G disks and create three slices on it:

c0d0s0   32G    for SXCE
c0d0s1    1G    swap space
c0d0s7  245G    for zpool

Then I would clone that setup onto the second 300G disk (using c0d1s0 for luupgrade) 
and create the pool with two of the 160G disks like this:

zpool create data mirror c0d0s7 c0d1s7 mirror c1t0d0 c2t0d0

(c2t0d0 would initially be the external disk, as I cannot remove both 
Samsungs from the Linux box before the new server can take over).

The first problem I see with that setup is that it uses mirror sets, which wastes a lot 
of space; I would prefer a raidz solution. Second, the pool uses slices instead of 
whole disks.
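
For comparison, the raidz form of the same pool would look something like this
(just a sketch reusing the devices above; usable capacity is limited by the
smallest device in the raidz group):

zpool create data raidz c0d0s7 c0d1s7 c1t0d0 c2t0d0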

Any suggestions for an ideal setup?

TIA,
Stefan


* - Maxtor 6L300R0
** - SAMSUNG SP1614C
*** - Hitachi HTS542516K9SA00
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
Following the VIA link and googling a bit I found something that seems 
interesting:
- MB: http://www.avmagazine.it/forum/showthread.php?s=threadid=108695
- in the case http://www.chenbro.com/corporatesite/products_detail.php?serno=100

Are they viable??
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Ross
Hi Jorgen,

This isn't an answer to your problem I'm afraid, but a request for you to do a 
test when you get your new x4500.

Could you try pulling a SATA drive to see if the system hangs?  I'm finding 
Solaris just locks up if I pull a drive connected to the Supermicro 
AOC-SAT2-MV8 card, and I was under the belief that it uses the same chipset as the 
Thumper.

I'm hoping this is just a driver problem, or a problem specific to the 
Supermicro card, but since our loan x4500 went back to Sun I'm unable to test 
this myself, and if the x4500's do lock up I'm a bit concerned about how they 
handle hardware failures.

thanks,

Ross
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-24 Thread Tharindu Rukshan Bamunuarachchi




Do you have any recommended parameters I should try?

Ellis, Mike wrote:

  Would adding a dedicated ZIL/SLOG (what is the difference between those 2 exactly? Is there one?) help meet your requirement?

The idea would be to use some sort of relatively large SSD drive of some variety to absorb the initial write-hit. After hours when things quiet down (or perhaps during "slow periods" in the day) data is transparently destaged into the main disk-pool, providing you with a transparent/rudimentary form of HSM. 

Have a look at Adam Leventhal's blog and ACM article for some interesting perspectives on this stuff... (Specifically the potential "return of the 3600 rpm drive" ;-)

Thanks -- mikee
  



Actually, we do not need this data at the end of the day.

We will write a summary into an Oracle DB.

SSDs are a good option, but the cost is not feasible for some clients.

Is Sun providing SSD arrays?

  

- Original Message -
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: Tharindu Rukshan Bamunuarachchi [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
Sent: Wed Jul 23 11:22:51 2008
Subject: Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:

  
  
10,000 x 700 = 7MB per second ..

We have this rate for the whole day ...

10,000 orders per second is a minimum requirement of modern-day stock exchanges ...

The cache still helps us for ~1 hour, but after that who will help us ...

We are using a 2540 for current testing ...
I have tried the same with a 6140, but no significant improvement ... only one or two hours ...

  
  
Does your application request synchronous file writes or use fsync()? 
While normally fsync() slows performance I think that it will also 
serve to even the write response since ZFS will not be buffering lots 
of unwritten data.  However, there may be buffered writes from other 
applications which get written periodically and which may delay the 
writes from your critical application.  In this case reducing the ARC 
size may help so that the ZFS sync takes less time.

You could also run a script which executes 'sync' every second or two 
in order to convince ZFS to cache less unwritten data. This will cause 
a bit of a performance hit for the whole system though.
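
Such a script is nothing more than a shell one-liner, for example:

while true; do sync; sleep 2; done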
  

This did not work, and I got a much higher peak once in a while.

Other than the array-mounted disks, our applications are writing to local
hard disks (e.g. logs).

AFAIK, "sync" is applicable to all file systems.

  
Your 7MB per second is a very tiny write load, so it is worthwhile 
investigating to see if there are other factors which are causing your 
storage system to not perform correctly.  The 2540 is capable of 
supporting writes at hundreds of MB per second.
  


Yes, the 2540 can go up to 40MB/s or more with more striped hard disks.

But we are struggling with latency, not bandwidth. I/O bandwidth is
superb, but latency is poor.


  
As an example of "another factor", let's say that you used the 2540 to 
create 6 small LUNs and then put them into a ZFS raidz.  However, in 
this case the 2540 allocated all of the LUNs from the same disk (which 
it is happy to do by default) so now that disk is being severely 
thrashed since it is one disk rather than six.
  


I did not use raidz. 

I have manually allocated 4 independent disks per volume. 

I will try to get a few independent disks through a few LUNs.

I would then be able to create a RAIDZ and try it.

  
Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  


***

The information contained in this email including in any attachment is 
confidential and is meant to be read only by the person to whom it is 
addressed. If you are not the intended recipient(s), you are prohibited from 
printing, forwarding, saving or copying this email. If you have received this 
e-mail in error, please immediately notify the sender and delete this e-mail 
and its attachments from your computer.

***
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Jorgen Lundman
We have had a disk fail in the existing x4500 and it sure froze the 
whole server. I believe it is an OS problem which should have been 
fixed in a version newer than what we have. If you want me to test it on the 
new x4500, which runs Sol10 5/08, I can do so.


Ross wrote:
 Hi Jorgen,
 
 This isn't an answer to your problem I'm afraid, but a request for you to do 
 a test when you get your new x4500.
 
 Could you try pulling a SATA drive to see if the system hangs?  I'm finding 
 Solaris just locks up if I pull a drive connected to the Supermicro 
 AOC-SAT2-MV8 card, and I was under the belief that it uses the same chipset as 
 the Thumper.
 
 I'm hoping this is just a driver problem, or a problem specific to the 
 Supermicro card, but since our loan x4500 went back to Sun I'm unable to test 
 this myself, and if the x4500's do lock up I'm a bit concerned about how they 
 handle hardware failures.
 
 thanks,
 
 Ross
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Glaser, David
We had the same problem; at least a good chunk of the zfs volumes died when the 
drive failed. Granted, I don't think the drive actually failed; it was more likely a driver 
issue/lockup. A reboot 2 weeks ago brought the machine back up and the drive 
hasn't had a problem since. I was behind on two patches that were released 
earlier in the month. I put them on the machine (a kernel patch and a marvell 
driver patch) and the problem hasn't arisen again.

Dave


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jorgen Lundman
Sent: Thursday, July 24, 2008 7:40 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] x4500 performance tuning.

We have had a disk fail in the existing x4500 and it sure froze the
whole server. I believe it is an OS problem which should have been
fixed in a version newer than what we have. If you want me to test it on the
new x4500, which runs Sol10 5/08, I can do so.


Ross wrote:
 Hi Jorgen,

 This isn't an answer to your problem I'm afraid, but a request for you to do 
 a test when you get your new x4500.

 Could you try pulling a SATA drive to see if the system hangs?  I'm finding 
 Solaris just locks up if I pull a drive connected to the Supermicro 
 AOC-SAT2-MV8 card, and I was under the belief that it uses the same chipset as 
 the Thumper.

 I'm hoping this is just a driver problem, or a problem specific to the 
 Supermicro card, but since our loan x4500 went back to Sun I'm unable to test 
 this myself, and if the x4500's do lock up I'm a bit concerned about how they 
 handle hardware failures.

 thanks,

 Ross


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Charles Menser
Yes, I am very happy with the M2A-VM.

Charles

On Wed, Jul 23, 2008 at 5:05 PM, Steve [EMAIL PROTECTED] wrote:
 Thank you for all the replies!
 (and in the meantime I was just having dinner! :-)

 To recap:

 tcook:
 you are right; in fact I'm thinking of having just 3 or 4 drives for now, without 
 anything else (no CD/DVD, no video card, nothing other than the MB and drives).
 The case will be the second choice, but I'll try to stick to micro ATX for 
 space reasons

 Charles Menser:
 4 is ok, so is the ASUS M2A-VM good?

 Matt Harrison:
 The post is superb (my compliments to Simon)! And in fact I was already looking at 
 that, but the MB is unfortunately ATX. If it is the only or the recommended 
 choice I would go for it, but I hope there will be a smaller one

 bhigh:
 so the best is 780G?


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Alan Burlison
I'm upgrading my B92 UFS-boot system to ZFS root using Live Upgrade.  It 
appears to work fine so far, but I'm wondering why it allocates a ZFS 
filesystem for swap when I already have a dedicated swap slice. 
Shouldn't it just use any existing swap slice rather than creating a ZFS 
one?

-- 
Alan Burlison
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-24 Thread David Collier-Brown
  Hmmn, that *sounds* as if you are saying you've a very-high-redundancy
RAID1 mirror, 4 disks deep, on an 'enterprise-class tier 2 storage' array
that doesn't support RAID 1+0 or 0+1. 

  That sounds weird: the 2540 supports RAID levels 0, 1, (1+0), 3 and 5,
and deep mirrors are normally only used on really fast equipment in
mission-critical tier 1 storage...

  Are you sure you don't mean you have raid 0 (stripes) 4 disks wide,
each stripe presented as a LUN?

  If you really have 4-deep RAID 1, you have a configuration that will
perform somewhat slower than any single disk, as the array launches
4 writes to 4 drives in parallel, and returns success when they
all complete.

  If you had 4-wide RAID 0, with mirroring done at the host, you would
have a configuration that would (probabilistically) perform better than 
a single drive when writing to each side of the mirror, and the write
would return success when the slowest side of the mirror completed.

 --dave (puzzled!) c-b

Tharindu Rukshan Bamunuarachchi wrote:
 We do not use raidz*. Virtually, no raid or stripe through OS.
 
 We have 4 disk RAID1 volumes.  RAID1 was created from CAM on 2540.
 
 2540 does not have RAID 1+0 or 0+1.
 
 cheers
 tharindu
 
 Brandon High wrote:
 
On Tue, Jul 22, 2008 at 10:35 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED] wrote:
  

Dear Mark/All,

Our trading system is writing to local and/or array volume at 10k
messages per second.
Each message is about 700bytes in size.

Before ZFS, we used UFS.
Even with UFS, there was a peak every 5 seconds due to fsflush invocation.

However, each peak is about ~5ms.
Our application cannot recover from such high latency.



Is the pool using raidz, raidz2, or mirroring? How many drives are you using?

-B

  

 
 
 
 ***
 
 The information contained in this email including in any attachment is 
 confidential and is meant to be read only by the person to whom it is 
 addressed. If you are not the intended recipient(s), you are prohibited from 
 printing, forwarding, saving or copying this email. If you have received this 
 e-mail in error, please immediately notify the sender and delete this e-mail 
 and its attachments from your computer.
 
 ***
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
David Collier-Brown| Always do right. This will gratify
Sun Microsystems, Toronto  | some people and astonish the rest
[EMAIL PROTECTED] |  -- Mark Twain
(905) 943-1983, cell: (647) 833-9377, (800) 555-9786 x56583
bridge: (877) 385-4099 code: 506 9191#
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can anyone help me?

2008-07-24 Thread Ross
Great news, thanks for the update :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can anyone help me?

2008-07-24 Thread Aaron Botsis
Nevermind -- this problem seems to have been fixed in b94. I saw a bug whose 
description looked like it fit (slow clone removal; didn't write down the bug 
number) and gave it a shot. It imported, and things seem like they're back up and 
running.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Cindy . Swearingen
Hi Alan,

ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.

The ZFS boot/install project and information trail starts here:

http://opensolaris.org/os/community/zfs/boot/

Cindy





Alan Burlison wrote:
 I'm upgrading my B92 UFS-boot system to ZFS root using Live Upgrade.  It 
 appears to work fine so far, but I'm wondering why it allocates a ZFS 
 filesystem for swap when I already have a dedicated swap slice. 
 Shouldn't it just use any existing swap slice rather than creating a ZFS 
 one?
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-24 Thread Bob Friesenhahn

On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:


We do not use raidz*. Virtually, no raid or stripe through OS.

We have 4 disk RAID1 volumes.  RAID1 was created from CAM on 2540.


What ZFS block size are you using?

Are you using synchronous writes for each 700byte message?  10k 
synchronous writes per second is pretty high and would depend heavily 
on the 2540's write cache and how the 2540's firmware behaves.


You will find some cache tweaks for the 2540 in my writeup available 
at 
http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf.


Without these tweaks, the 2540 waits for the data to be written to 
disk rather than written to its NVRAM whenever ZFS flushes the write 
cache.
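
For completeness, the host-side tweak that often comes up in this context is to
stop ZFS from issuing cache flushes at all via /etc/system (a sketch only - this
assumes the array cache is battery-backed, and the array-side settings in the
writeup above are the safer fix):

* /etc/system: tell ZFS not to send cache-flush commands (non-volatile cache assumed)
set zfs:zfs_nocacheflush = 1

A reboot is needed for the change to take effect.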


Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Florin Iucha
On Thu, Jul 24, 2008 at 10:38:49AM -0400, Charles Menser wrote:
 I installed it with snv_86 in IDE controller mode, and have since
 upgraded ending up at snv_93.
 
 Do you know what implications there are for using AHCI vs IDE modes?

I had the same question and Neal Pollack [EMAIL PROTECTED] told
me that:

: For a built-in motherboard port, legacy mode is not really seen as
: much slower.  From my understanding, AHCI mainly adds (in theory)  NCQ
: and hotplug capability.  But hotplug means very little for a boot disk,
: and NCQ is a big joke, as I have not yet seen reproducible benchmarks
: that show any real measurable performance gain.  So I personally think
: legacy mode is fine for now.

I STFW for the topic and I got similar comments on hardware review
sites forums.

Best,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


pgpyUMapwTXS0.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Charles Menser
I installed it with snv_86 in IDE controller mode, and have since
upgraded ending up at snv_93.

Do you know what implications there are for using AHCI vs IDE modes?

Thanks,
Charles

On Thu, Jul 24, 2008 at 9:26 AM, Florin Iucha [EMAIL PROTECTED] wrote:
 On Thu, Jul 24, 2008 at 08:22:16AM -0400, Charles Menser wrote:
 Yes, I am very happy with the M2A-VM.

 You will need at least SNV_93 to use it in AHCI mode.

 The northbridge gets quite hot, but that does not seem to be impairing
 its performance.  I have the M2A-VM with an AMD 64 BE-2400 (45W) and
 a Scythe Ninja Mini heat sink and the only fans that I have in the case
 are the two side fans (the case is Antec NSK-2440).  Quiet as a mouse.

 florin

 --
 Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Florin Iucha
On Thu, Jul 24, 2008 at 08:22:16AM -0400, Charles Menser wrote:
 Yes, I am very happy with the M2A-VM.

You will need at least SNV_93 to use it in AHCI mode.

The northbridge gets quite hot, but that does not seem to be impairing
its performance.  I have the M2A-VM with an AMD 64 BE-2400 (45W) and
a Scythe Ninja Mini heat sink and the only fans that I have in the case
are the two side fans (the case is Antec NSK-2440).  Quiet as a mouse.

florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


pgpUoATrg6Ykg.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-24 Thread Bob Friesenhahn
On Thu, 24 Jul 2008, Brandon High wrote:

 Have you tried exporting the individual drives and using zfs to handle
 the mirroring? It might have better performance in your situation.

It should indeed have better performance.  The single LUN exported 
from the 2540 will be treated like a single drive from ZFS's 
perspective.  The data written needs to be serialized in the same way 
that it would be for a drive.  ZFS has no understanding that some 
offsets will access a different drive so it may be that one pair of 
drives is experiencing all of the load.

The most performant configuration would be to export a LUN from each 
of the 2540's 12 drives and create a pool of 6 mirrors.  In this 
situation, ZFS will load share across the 6 mirrors so that each pair 
gets its fair share of the IOPS based on its backlog.
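
An easy way to check that the load really is being spread is to watch per-vdev
activity while the application runs, e.g. (pool name is a placeholder):

# zpool iostat -v tradepool 5

Each mirror pair should show a roughly similar share of the write operations.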

The 2540 cache tweaks will also help tremendously for this sort of 
work load.

Since this is for critical data I would not disable the cache 
mirroring in the 2540's controllers.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Mike Gerdts
On Wed, Jul 23, 2008 at 11:36 AM,  [EMAIL PROTECTED] wrote:
 Rainer,

 Sorry for your trouble.

 I'm updating the installboot example in the ZFS Admin Guide with the
 -F zfs syntax now. We'll fix the installboot man page as well.

Perhaps it also deserves a mention in the FAQ somewhere near
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/#mirrorboot.

5. How do I attach a mirror to an existing ZFS root pool?

Attach the second disk to form a mirror.  In this example, c1t1d0s0 is attached.

# zpool attach rpool c1t0d0s0 c1t1d0s0

Prior to build TBD, bug 6668666 causes the following
platform-dependent steps to also be needed:

On sparc systems:
# installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

On x86 systems:
# ...

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Alan Burlison
[EMAIL PROTECTED] wrote:

 ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
 environment requires separate ZFS volumes for swap and dump devices.
 
 The ZFS boot/install project and information trail starts here:
 
 http://opensolaris.org/os/community/zfs/boot/

Is this going to be supported in a later build?

I got it to use the existing swap slice by manually reconfiguring the 
ZFS-root BE post-install to use the swap slice as swap & dump - the 
resulting BE seems to work just fine, so I'm not sure why LU insists on 
creating ZFS swap & dump.

Basically I want to migrate my root filesystem from UFS to ZFS and leave 
everything else as it is; there doesn't seem to be a way to do this.
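
(For context, the migration step itself is just a lucreate into a ZFS pool - a
rough sketch, with the pool name assumed to be rpool:

# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6

It is only the swap/dump handling around that step which is at issue here.)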

-- 
Alan Burlison
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Mike Gerdts wrote:
 On Wed, Jul 23, 2008 at 11:36 AM,  [EMAIL PROTECTED] wrote:
 Rainer,

 Sorry for your trouble.

 I'm updating the installboot example in the ZFS Admin Guide with the
 -F zfs syntax now. We'll fix the installboot man page as well.
 
 Perhaps it also deserves a mention in the FAQ somewhere near
 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/#mirrorboot.
 
 5. How do I attach a mirror to an existing ZFS root pool?
 
 Attach the second disk to form a mirror.  In this example, c1t1d0s0 is 
 attached.
 
 # zpool attach rpool c1t0d0s0 c1t1d0s0
 
 Prior to build TBD, bug 6668666 causes the following
 platform-dependent steps to also be needed:
 
 On sparc systems:
 # installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

should be uname -m above I think.
and path to be:
# installboot -F zfs /platform/`uname -m`/lib/fs/zfs/bootblk as path for sparc.

others might correct me though

 
 On x86 systems:
 # ...
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-24 Thread Bob Friesenhahn
On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:

 Do you have any recommend parameters should I try ?

Using an external log is really not needed when using the StorageTek 
2540.  I doubt that it is useful at all.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Enda O'Connor ( Sun Micro Systems Ireland) wrote:
 Mike Gerdts wrote:
 On Wed, Jul 23, 2008 at 11:36 AM,  [EMAIL PROTECTED] wrote:
 Rainer,

 Sorry for your trouble.

 I'm updating the installboot example in the ZFS Admin Guide with the
 -F zfs syntax now. We'll fix the installboot man page as well.

 Perhaps it also deserves a mention in the FAQ somewhere near
 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/#mirrorboot.

 5. How do I attach a mirror to an existing ZFS root pool?

 Attach the second disk to form a mirror.  In this example, c1t1d0s0 is 
 attached.

 # zpool attach rpool c1t0d0s0 c1t1d0s0

 Prior to build TBD, bug 6668666 causes the following
 platform-dependent steps to also be needed:

 On sparc systems:
 # installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk 
 /dev/rdsk/c1t1d0s0
 
 should be uname -m above I think.
 and path to be:
 # installboot -F zfs /platform/`uname -m`/lib/fs/zfs/bootblk as path for 
 sparc.
 
 others might correct me though
 

 On x86 systems:
 # ...
meant to add that on x86 the following should do the trick (again I'm open to 
correction):

installgrub /boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0

haven't tested the x86 one though.

Enda

 
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Cindy . Swearingen
Alan,

Just make sure you use dumpadm to point to a valid dump device and
this setup should work fine. Please let us know if it doesn't.
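
For example, to point dumps at a dedicated slice (the device name below is just
a placeholder):

# dumpadm -d /dev/dsk/c0t0d0s1
# dumpadm

The second command simply prints the resulting dump configuration for verification.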

The ZFS strategy behind automatically creating separate swap and
dump devices includes the following:

o Eliminates the need to create separate slices
o Enables underlying ZFS architecture for swap and dump devices
o Enables you to set characteristics like compression on swap
and dump devices, and eventually, encryption

Cindy

Alan Burlison wrote:
 [EMAIL PROTECTED] wrote:
 
 ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
 environment requires separate ZFS volumes for swap and dump devices.

 The ZFS boot/install project and information trail starts here:

 http://opensolaris.org/os/community/zfs/boot/
 
 
 Is this going to be supported in a later build?
 
 I got it to use the existing swap slice by manually reconfiguring the 
 ZFS-root BE post-install to use the swap slice as swap & dump - the 
 resulting BE seems to work just fine, so I'm not sure why LU insists on 
 creating ZFS swap & dump.

 Basically I want to migrate my root filesystem from UFS to ZFS and leave 
 everything else as it is; there doesn't seem to be a way to do this.
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
[EMAIL PROTECTED] wrote:
 Alan,
 
 Just make sure you use dumpadm to point to valid dump device and
 this setup should work fine. Please let us know if it doesn't.
 
 The ZFS strategy behind automatically creating separate swap and
 dump devices including the following:
 
 o Eliminates the need to create separate slices
 o Enables underlying ZFS architecture for swap and dump devices
 o Enables you to set characteristics like compression on swap
 and dump devices, and eventually, encryption
Hi
It also makes resizing easy, e.g.:
zfs set volsize=8G lupool/dump


Enda
 
 Cindy
 
 Alan Burlison wrote:
 [EMAIL PROTECTED] wrote:

 ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
 environment requires separate ZFS volumes for swap and dump devices.

 The ZFS boot/install project and information trail starts here:

 http://opensolaris.org/os/community/zfs/boot/

 Is this going to be supported in a later build?

 I got it to use the existing swap slice by manually reconfiguring the 
 ZFS-root BE post-install to use the swap slice as swap & dump - the 
 resulting BE seems to work just fine, so I'm not sure why LU insists on 
 creating ZFS swap & dump.

 Basically I want to migrate my root filesystem from UFS to ZFS and leave 
 everything else as it is; there doesn't seem to be a way to do this.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Lori Alt



Alan Burlison wrote:


[EMAIL PROTECTED] wrote:

 


ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.

The ZFS boot/install project and information trail starts here:

http://opensolaris.org/os/community/zfs/boot/
   



Is this going to be supported in a later build?

I got it to use the existing swap slice by manually reconfiguring the 
ZFS-root BE post-install to use the swap slice as swap & dump - the 
resulting BE seems to work just fine, so I'm not sure why LU insists on 
creating ZFS swap & dump.


Basically I want to migrate my root filesystem from UFS to ZFS and leave 
everything else as it is; there doesn't seem to be a way to do this.
 



It's hard to know what the right thing to do is from within
the installation software.  Does the user want to preserve
as much of their current environment as possible?  Or does
the user want to move toward the new standard configuration
(which is pretty much zfs-everything)?  Or something in between?

In designing the changes to the install software, we had to
decide whether to be all things to all people or make some
default choices.  Being all things to all people makes the
interface a lot more complicated and takes a lot more
engineering effort (we'd still be developing it and zfs boot
would not be available if we'd taken that path).  We
erred on the make default choices side (although with
some opportunities for customization), and leaned toward
the move the system toward zfs side in our choices for
those defaults.  We leaned a little too far in that direction in
our selection of default choices for swap/dump space in
the interactive install and so we're fixing that.

In this case, LU does move the system toward using
swap and dump zvols within the root pool.  If you really
don't want that, you can still use your existing swap and
dump slice and delete the swap/dump zvol.   I know it's
not ideal because it requires some manual steps, and maybe
you'll have to repeat those manual actions with subsequent
lucreates (or maybe not, I'm actually not sure how that works).
But is there any really good reason NOT to move to the
use of swap/dump zvols?  If your existing swap/dump slice
is contiguous with your root pool, you can grow the root
pool into that space (using format to merge the slices.
A reboot or re-import of the pool will cause it to grow into
the newly-available space).
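
Roughly, that sequence would be (a sketch with made-up device names; the freed
space has to sit after the root pool's slice for the merge to work):

# swap -d /dev/dsk/c0t0d0s1    # stop using the old swap slice
# format                       # fold the freed cylinders into the root pool's slice
# init 6                       # after reboot the pool grows into the new space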

Keep these comments coming!  We've tried to make the
best choices, balancing all the many considerations, but
as in the case of swap, I'm sure we made some choices 
that were wrong or at least non-optimal and we want to
continue to refine how zfs works as a root file system. 


Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Alan Burlison
Lori Alt wrote:

 In designing the changes to the install software, we had to
 decide whether to be all things to all people or make some
 default choices.  Being all things to all people makes the
 interface a lot more complicated and takes a lot more
 engineering effort (we'd still be developing it and zfs boot
 would not be available if we'd taken that path).  We
 erred on the make default choices side (although with
 some opportunities for customization), and leaned toward
 the move the system toward zfs side in our choices for
 those defaults.  We leaned a little too far in that direction in
 our selection of default choices for swap/dump space in
 the interactive install and so we're fixing that.

Great, thanks :-)  Is this just a case of letting people use the -m flag 
with the ZFS filesystem type, and allowing the dump device to be 
specified with -m, or is there more to it than that?

I also notice that there doesn't appear to be any way to specify the 
size of the swap & dump areas when migrating - I thought I saw somewhere 
in the documentation that swap is sized to 1/2 physmem.  That might be 
problematic on machines with large amounts of memory.

 In this case, LU does move the system toward using
 swap and dump zvols within the root pool.  If you really
 don't want that, you can still use your existing swap and
 dump slice and delete the swap/dump zvol.   I know it's
 not ideal because it requires some manual steps, and maybe
 you'll have to repeat those manual actions with subsequent
 lucreates (or maybe not, I'm actually not sure how that works).

Seems to work, at least if you create the new BE in the same pool as the 
source BE.  I can't get it to work if I try to use a different pool though.

 But is there any really good reason NOT to move to the
 use of swap/dump zvols?  If your existing swap/dump slice
 is contiguous with your root pool, you can grow the root
 pool into that space (using format to merge the slices.
 A reboot or re-import of the pool will cause it to grow into
 the newly-available space).

For some reason, when I initially installed I ended up with this:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm    1046 - 2351       10.00GB    (1306/0/0) 20980890
  1       swap    wu       1 - 1045        8.01GB    (1045/0/0) 16787925

So physically swap comes before root, so I can't do the trick you 
suggested.  I also have a second root slice that comes just after the 
first one, for my second LU environment.  Really I want to 
collapse those 3 into 1.  What I'm planning to do is evacuate 
everything else off my first disk onto a USB disk, then re-layout the 
disk.  Everything else on the machine bar the boot slices is already 
ZFS, so I can create a ZFS BE in the pool on my 2nd disk, boot into that 
then re-layout the first disk.

 Keep these comments coming!  We've tried to make the
 best choices, balancing all the many considerations, but
 as in the case of swap, I'm sure we made some choices 
 that were wrong or at least non-optimal and we want to
 continue to refine how zfs works as a root file system.

I'm really liking what I see so far, it's just a question of getting my 
head around the best way of setting things up, and figuring out the 
easiest way of migrating.

-- 
Alan Burlison
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can anyone help me?

2008-07-24 Thread Victor Latushkin
Aaron Botsis wrote:
 Hello, I've hit this same problem. 
 
 Hernan/Victor, I sent you an email asking for the description of this 
 solution. I've also got important data on my array. I went to b93 hoping 
 there'd be a patch for this.
 
 I caused the problem in a manner identical to Hernan; by removing a zvol 
 clone. Exact same symptoms, userspace seems to go away, network stack is 
 still up, no disk activity, system never recovers. 
 
 If anyone has the solution to this, PLEASE help me out. Thanks a million in 
 advance.

Though it is a bit late, I think it may still be useful to describe a 
way out of this (prior to the fix for 6573681).

When a dataset is destroyed, it is first marked inconsistent. If the 
destroy cannot complete for whatever reason, then upon dataset open ZFS 
discovers that it is marked inconsistent and tries to destroy it again 
by calling the appropriate ioctl(). If the destroy succeeds, it pretends 
that the dataset never existed; if it fails, it tries to roll it back to 
the previous state - see lines 410-450 here:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs/common/libzfs_dataset.c
 


But since the ioctl() was unable to complete, there was no easy way out. The idea 
was simple: avoid attempting to destroy it again, and proceed right to 
the rollback part. Since it was a clone, it was definitely possible to roll 
it back. So I simply added a test for an environment variable to the 'if' 
statement on line 441, and that allowed the pool to be imported.

my 2 cents,

Victor


 
 Aaron
 
 Well, finally managed to solve my issue, thanks to
 the invaluable help of Victor Latushkin, who I can't
 thank enough.

 I'll post a more detailed step-by-step record of what
 he and I did (well, all credit to him actually) to
 solve this. Actually, the problem is still there
 (destroying a huge zvol or clone is slow and takes a
 LOT of memory, and will die when it runs out of
 memory), but now I'm able to import my zpool and all
 is there.

 What Victor did was hack ZFS (libzfs) to force a
 rollback to abort the endless destroy, which was
 re-triggered every time the zpool was imported, as it
 was inconsistent. With this custom version of libzfs,
 setting an environment variable makes libzfs
 bypass the destroy and jump to the rollback, undoing
 the last destroy command.

 I'll be posting the long version of the story soon.

 Hernán
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Rainer Orth
Lori Alt [EMAIL PROTECTED] writes:

 use of swap/dump zvols?  If your existing swap/dump slice
 is contiguous with your root pool, you can grow the root
 pool into that space (using format to merge the slices.
 A reboot or re-import of the pool will cause it to grow into
 the newly-available space).

That had been my plan, and that's how I laid out my slices for zpools and
UFS BEs before ZFS boot came along.  Unfortunately, at least once this
resizing exercise went fatally wrong, it seems, but so far nobody has cared to
comment:

http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049180.html

And on SPARC, the hopefully safe method from a failsafe environment is
hampered by

http://mail.opensolaris.org/pipermail/install-discuss/2008-July/006754.html

I think at least the second issue needs to be resolved before ZFS root is
appropriate for general use.

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-24 Thread Dave
I've discovered this as well - b81 to b93 (latest I've tried). I 
switched from my on-board SATA controller to AOC-SAT2-MV8 cards because 
the MCP55 controller caused random disk hangs. Now the SAT2-MV8 works as 
long as the drives are working correctly, but the system can't handle a 
drive failure or disconnect. :(

I don't think there's a bug filed for it. That would probably be the 
first step to getting this resolved (might also post to storage-discuss).

--
Dave

Ross wrote:
 Has anybody here got any thoughts on how to resolve this problem:
 http://www.opensolaris.org/jive/thread.jspa?messageID=261204&tstart=0
 
 It sounds like two of us have been affected by this now, and it's a bit of a 
 nuisance having your entire server hang when a drive is removed; it makes you worry 
 about how Solaris would handle a drive failure.
 
 Has anybody tried pulling a drive on a live Thumper?  Surely they don't hang 
 like this.  Although, having said that, I do remember they have a great big 
 warning in the manual about using cfgadm to stop the disk before removal, 
 saying:
 
 Caution - You must follow these steps before removing a disk from service.  
 Failure to follow the procedure can corrupt your data or render your file 
 system inoperable.
 
 Ross
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Lori Alt



Alan Burlison wrote:


Lori Alt wrote:

 


In designing the changes to the install software, we had to
decide whether to be all things to all people or make some
default choices.  Being all things to all people makes the
interface a lot more complicated and takes a lot more
engineering effort (we'd still be developing it and zfs boot
would not be available if we'd taken that path).  We
erred on the make default choices side (although with
some opportunities for customization), and leaned toward
the move the system toward zfs side in our choices for
those defaults.  We leaned a little too far in that direction in
our selection of default choices for swap/dump space in
the interactive install and so we're fixing that.
   



Great, thanks :-)  Is this just a case of letting people use the -m flag 
with the ZFS filesystem type, and allowing the dump device to be 
specified with -m, or is there more to it than that?


The changes to which I was referring are to the interactive initial 
install program, not LU.

I'm checking into how swap and dump zvols are sized in LU.

The two changes we made to the interactive initial install are:

1) Change the default size of the swap and dump zvols to 1/2 of
  physmem, but no more than 2 GB, and no less than 512 MB.
  Previously, we were allowing swap and dump to go as high
  as 32 GB, but that's just way too much for some systems.  Not
  only were we setting the size too high, we didn't give the user the
  opportunity to change it, so we made this change:
2)  Allow the user to change the swap and dump size to anything
  from 0 (i.e. no swap or dump zvol at all) to the maximum size
  that will fit in the pool.   We don't recommend the smaller sizes
  (especially for dump.  A system that can't take a crash dump is
  a system with a serviceability problem.), but we won't prevent
  users from setting it up that way.
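
If the sizes chosen at install time turn out to be wrong, they can also be
adjusted afterwards - a sketch, assuming the default rpool/swap and rpool/dump
zvol names:

# swap -d /dev/zvol/dsk/rpool/swap    # take the swap zvol out of service
# zfs set volsize=2G rpool/swap       # resize it
# swap -a /dev/zvol/dsk/rpool/swap    # put it back
# zfs set volsize=2G rpool/dump       # resize the dump zvol (re-run dumpadm if needed)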


I also notice that there doesn't appear to be any way to specify the 
 size of the swap & dump areas when migrating - I thought I saw somewhere 
in the documentation that swap is sized to 1/2 physmem.  That might be 
problematic on machines with large amounts of memory.


 


In this case, LU does move the system toward using
swap and dump zvols within the root pool.  If you really
don't want that, you can still use your existing swap and
dump slice and delete the swap/dump zvol.   I know it's
not ideal because it requires some manual steps, and maybe
you'll have to repeat those manual actions with subsequent
lucreates (or maybe not, I'm actually not sure how that works).
   



Seems to work, at least if you create the new BE in the same pool as the 
source BE.  I can't get it to work if I try to use a different pool though.


 


But is there any really good reason NOT to move to the
use of swap/dump zvols?  If your existing swap/dump slice
is contiguous with your root pool, you can grow the root
pool into that space (using format to merge the slices.
A reboot or re-import of the pool will cause it to grow into
the newly-available space).
   



For some reason, when I initially installed I ended up with this:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm    1046 - 2351       10.00GB    (1306/0/0) 20980890
  1       swap    wu       1 - 1045        8.01GB    (1045/0/0) 16787925

So physically swap comes before root, so I can't do the trick you 
suggested.  I also have a second root slice that comes just after the 
first one, for my second current LU environment.  Really I want to 
collapse  those 3 into 1.  What I'm planning to do is evacuate 
everything else off my first disk onto a USB disk, then re-layout the 
disk.  Everything else on the machine bar the boot slices is already 
ZFS, so I can create a ZFS BE in the pool on my 2nd disk, boot into that 
then re-layout the first disk.
 

What if you turned slice 1 into a pool (a new one), migrated your BE into it,
then grew that pool to soak up the space in the slices that follow it?  You might
still need to save some stuff elsewhere while you're doing the transition.

Just a suggestion.  It sounds like you're working out a plan.

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-24 Thread mike
Did you have success?

What version of Solaris? OpenSolaris? etc?

I'd want to use this card with the latest Solaris 10 (update 5?)

The connector on the adapter itself is IPASS, and the Supermicro part number 
for cables from the adapter to standard SATA drives is CBL-0118L-02 (IPASS to 4 
SATA Cable, 23-cm Pb-free) - I confirmed that with the Supermicro guys.

I suppose by now you've been able to confirm if your cables worked as well, I'm 
just hoping maybe you have news as to everything else...
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-24 Thread Ross
Yeah, I thought of the storage forum today and found somebody else with the 
problem, and since my post a couple of people have reported similar issues on 
Thumpers.

I guess the storage thread is the best place for this now:
http://www.opensolaris.org/jive/thread.jspa?threadID=42507&tstart=0
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Ross
Have any of you guys reported this to Sun?  A quick search of the bug database 
doesn't bring up anything that appears related to SATA drives and hanging or 
hot swapping.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for Solaris as an NFS-server sharing a number of ZFS file sys

2008-07-24 Thread Richard Elling
Rex Kuo wrote:
 Dear All :

   We are looking for best practices for Solaris as an NFS server sharing a 
 number of ZFS file systems, with RHEL 5.0 NFS clients mounting the 
 NFS server.

   Any S10 NFS-server and RHEL 5.0 NFS-client tuning guides or suggestions are 
 welcome.
   

We try to keep the best practices up-to-date on the wiki
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
PS: I scaled down to the mini-ITX form factor because it seems that the 
http://www.chenbro.com/corporatesite/products_detail.php?serno=100 is the 
PERFECT case for the job!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Brandon High
On Thu, Jul 24, 2008 at 3:41 AM, Steve [EMAIL PROTECTED] wrote:
 Or Atom maybe viable?

The Atom CPU has pretty crappy performance. At 1.6 GHz, performance is
somewhere between a 900MHz Celeron-M and a 1.13GHz Pentium 3-M. It's also
single-core. It would probably work, but it could be CPU bound on
writes, especially if compression is enabled. If performance is
important, a cheap 2.3GHz dual core AMD and motherboard costs $95 vs.
a 1.6GHz Atom & motherboard for $75.

An embedded system using ZFS and the Atom could easily compete on
price and performance with something like the Infrant ReadyNAS. Being
able to increase the stripe width of raidz would help, too.

 - in the case 
 http://www.chenbro.com/corporatesite/products_detail.php?serno=100

Someone tried to use this case and posted about it. The hotswap
backplane in it didn't work so they had to modify the case to plug the
drives directly into the motherboard.
http://blog.flowbuzz.com/search/label/NAS

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Alan Burlison
Lori Alt wrote:

 What if you turned slice 1 into a pool (a new one), migrated your BE 
 into it,
 then grow that pool to soak up the space in the slices that follow it?  
 You might
 still need to save some stuff elsewhere while you're doing the transition.

Doesn't work, because LU wants to create both swap & dump ZFS 
filesystems in there too; my machine has 16Gb of memory and the slice is 
8Gb, so there isn't enough space & LU throws a cog.  Which is why I 
wanted to get it to use the old swap partition in the first place...

-- 
Alan Burlison
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Miles Nordin
 s == Steve  [EMAIL PROTECTED] writes:

 s About freedom: I for sure would prefere open source drivers
 s availability, let's account for it!

There is source for the Intel gigabit cards in the source browser.

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/e1000g/

There is source for some Broadcom gigabit cards (bge)

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/bge/

but they don't always work well.  There is a closed-source bcme driver
for the same cards downloaded from Broadcom that Benjamin Ellison is
using instead.

I believe this is the source for an nForce ethernet driver (!):

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/nge/

but can't promise this is the driver that actually attaches to your
nForce 570 board.  Also there's this:

  http://bugs.opensolaris.org/view_bug.do?bug_id=6728522

wikipedia says the forcedeth chips are crap, and always were even with
closed-source windows drivers, but they couldn't be worse than
broadcom.

I believe this source goes with the Realtek 8169/8110 gigabit MAC:

  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/rge/rge_main.c

On NewEgg many boards say they have Realtek ethernet.  If it is 8169
or 8110, that is an actual MAC chip.  usually it's a different number,
and they are talking about the PHY chip which doesn't determine the
driver.

This is Theo de Raadt's favorite chip because Realtek is cooperative
with documentation.  However I think I've read on this list that chip
is slow and flakey under Solaris.


If using the Sil3124 with stable solaris, I guess you need a very new
release:

 http://bugs.opensolaris.org/view_bug.do?bug_id=2157034

The other problem is that there are different versions of this chip,
so the lack of bug reports doesn't give you much safety right after a
new chip stepping silently starts oozing into the market, unmarked by
the retailers.

It looks like the SATA drivers that come with source have their source
here:

 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/sata/adapters/

The ATI chipset for AMD/AM2+ is ahci (but does not work well.  you'll
need an add-on card.)  I assume the nForce chipset is nv_sata, which
I'm astonished to find seems to come with source.  And, of course,
there is Sil3124!

The sil3112 driver is somewhere else.  I don't think you should use
that one.  I think you should use a ``SATA framework'' chip.

Marvell/thumper and LSI Logic mpt SATA drivers are closed-source, so
if you want a system where most drivers come with source code you
really need to build your own, not buy one of the Sun systems.  but
there is what looks like a BSD-licensed LSI Logic driver here:

 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/mega_sas/

so, I am not sure what is the open/closed status of the LSI board.  I
was pretty sure the one in the Ultra 25 is the mpt and attaches to the
SCSI/FC stack, not the SATA stack, and was closed-source.  so maybe
this is another case of two drivers for one chip?  or maybe I was
wrong?

I'm not sure the ``SATA framework'' itself is open-source.  I believe
at one time it was not, but I don't know where to find a comprehensive
list of the unfree bits in your OpenSolaris 2008.05 CD.  

I'm hoping if enough people rant about this nonsense, we will shift
the situation.  For now it seems to be in Sun's best interest to be
vague about what's open source and what's not because people see the
name `Open' in OpenSolaris and impatiently assume the whole thing is
open-source like most Linux CD's.  We should have a more defensive
situation where their interest is better-served by being very detailed
and up-front about what's open and what isn't.


I haven't figured out an easy way to tell quickly which drivers are
free and which are not, even with great effort.  Not only is an
overall method missing, but a stumbling method does not work well
because there are many decoy drivers which don't actually attach
except in circumstances other than yours.  I need to find in the
source a few more tables, the PCI ID to kernel module name mapping,
and the kernel module name to build tree mapping.  I don't know if
such files exist, or if the only way it's stored is through execution
of spaghetti Makefiles available through numerous scattered ``gates''.
Of course this won't help root out unfree ``frameworks'' either.
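
(On a running system there is a partial shortcut -- not the source-tree
mapping I'm after, but it at least answers ``which module claims this PCI
ID'': the driver alias table plus prtconf.  A sketch, with an Intel vendor
ID purely as an example:

  # list the PCI compatible/vendor,device strings of attached devices
  prtconf -pv | grep compatible
  # then map an ID to a driver name, e.g. anything from Intel (8086)
  grep pci8086 /etc/driver_aliases

Substitute whatever IDs prtconf actually reports on your board.)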

For non-driver pieces of the OS, this is something the package
management tool can do on Linux and BSD, albeit clumsily---you feed
object filenames to tools like rpm and pkg_info, and they slowly
awkwardly lead you back to the source code.

-8-
zephiris:~$ pkg_info -E `which mutt`
/usr/local/bin/mutt: mutt-1.4.2.3
mutt-1.4.2.3tty-based e-mail client
zephiris:~$ pkg_info -P mutt-1.4.2.3
Information for inst:mutt-1.4.2.3

Pkgpath:
mail/mutt/stable

zephiris:~$ cd /usr/ports/mail/mutt/stable

Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Richard Elling
There are known issues with the Marvell drivers in X4500s.  You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500

You will want to especially pay attention to SunAlert 201289
http://sunsolve.sun.com/search/document.do?assetkey=1-66-201289-1

If you run into these or other problems which are not already described
in the above documents, please log a service call which will get you
into the folks who track the platform problems specifically and know
about patches in the pipeline.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Brett Monroe
Ross,

The X4500 uses 6x Marvell 88SX SATA controllers for its internal disks.  They 
are not Supermicro controllers.  The new X4540 uses an LSI chipset instead of 
the Marvell chipset.

--Brett
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Lori Alt


Alan Burlison wrote:

 Lori Alt wrote:

 What if you turned slice 1 into a pool (a new one), migrated your BE 
 into it,
 then grow that pool to soak up the space in the slices that follow 
 it?  You might
 still need to save some stuff elsewhere while you're doing the 
 transition.


 Doesn't work, because LU wants to create both swap & dump ZFS 
 filesystems in there too; my machine has 16Gb of memory and the slice 
 is 8Gb, so there isn't enough space & LU throws a cog.  Which is why 
 I wanted to get it to use the old swap partition in the first place...


Sounds like LU needs some of the same swap/dump flexibility
that we just gave initial install.  I'll bring this up within the team.

Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Lori Alt

I will look into this.  I don't know why it would have failed.

Lori

Rainer Orth wrote:


Lori Alt [EMAIL PROTECTED] writes:

 


use of swap/dump zvols?  If your existing swap/dump slice
is contiguous with your root pool, you can grow the root
pool into that space (using format to merge the slices.
A reboot or re-import of the pool will cause it to grow into
the newly-available space).
   



That had been my plan, and that's how I laid out my slices for zpools and
UFS BEs before ZFS boot came along.  Unfortunately, at least once this
resizing exercise went wrong, fatally, it seems, but so far nobody cared to
comment:

http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049180.html

And on SPARC, the hopefully safe method from a failsafe environment is
hampered by

http://mail.opensolaris.org/pipermail/install-discuss/2008-July/006754.html

I think at least the second issue needs to be resolved before ZFS root is
appropriate for general use.

Rainer

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Ross
 Ross,
 
 The X4500 uses 6x Marvell 88SX SATA controllers for
 its internal disks.  They are not Supermicro
 controllers.  The new X4540 uses an LSI chipset
 instead of the Marvell chipset.
 
 --Brett

Yup, and the Supermicro card uses the Marvell Hercules-2 88SX6081 (Rev. C0) 
SATA Host Controller, which is part of the series supported by the same 
driver:  http://docs.sun.com/app/docs/doc/816-5177/marvell88sx-7d?a=view.  I've 
seen the Supermicro card mentioned in connection with the Thumpers many times 
on the forums.

And Supermicro have a card using the LSI chipset now too: 
http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

I don't know if that's the same one as in the new Thumper, but posts here have 
implied it is.  Either Sun & Supermicro have very similar taste in controllers, 
or Supermicro are looking at ZFS and actively copying Sun :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Charles Meeks
Hoping this is not too off topic.    Can anyone confirm whether you can break a 
mirrored zfs root pool once it is formed?   I basically want to clone a boot drive, 
take it to another piece of identical hardware, and have two machines (or more).   
I am running Indiana b93 on x86 hardware.   I have read that there are various 
bugs with mirrored zfs root that prevent what I want to do.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-07-24 Thread Steve
Aaargh! My perfect case doesn't work!!

Shouldn't the backplane just be a pass-through? Was something not connected 
properly? Was the power insufficient for all the disks? Could it depend on the 
disks themselves?

Did you get any replies?

I would also report it directly to Chenbro tech support 
(http://www.chenbro.com/corporatesite/service_support.php); this is the best 
case I've seen for a small-form-factor, high-end home fileserver!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Alan Burlison
Lori Alt wrote:

 Sounds like LU needs some of the same swap/dump flexibility
 that we just gave initial install.  I'll bring this up within the team.

The (partial) workaround I tried was:

1. create a ZFS BE in an existing pool that has enough space
2. lumount the BE, edit the vfstab to use the old swap slice
3. activate the new BE & boot it
4. use dumpadm to switch to using swap
5. delete the ZFS swap & dump filesystems created in step 1
6. delete the original UFS BE so we can reuse the slice
7. create a new ZFS pool in the old UFS slice

It then falls apart at the next step:

8. copy the ZFS BE into the new pool made from the UFS BE

because I can't get LU to create the BE in a different ZFS pool.
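
For reference, steps 2-5 above were roughly the following (a sketch only,
with made-up BE/pool/slice names):

# lumount zfsBE /mnt
  (edit /mnt/etc/vfstab to point swap at the old slice, e.g. /dev/dsk/c0t0d0s1)
# luumount zfsBE
# luactivate zfsBE ; init 6
# dumpadm -d swap
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap ; zfs destroy rpool/dump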

-- 
Alan Burlison
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
Thank you very much Brandon for pointing out the issue with the case!!
(anyway that's really a pity, I hope it will find a solution!...)

About Atom, a person from Sun pointed out that the only good version for ZFS 
would be the N200 (64bit). Anyway, money wouldn't be the problem (yet ;-), but 
appropriateness (in the case of Atom, maybe the heat / power consumption). From 
the spec sheet the Intel one seems good to me... I just don't know whether it 
is fully or only partially compatible!

Hopefully this thread will suggest one or more reference motherboards to me and 
to the many others who (I think) are thinking about building a little home ZFS 
NAS server!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Replicating ZFS filesystems with non-standard mount points

2008-07-24 Thread Alan Burlison
I have 4 filesystems in a pool that I want to replicate into another 
pool, so I've taken snapshots prior to replication:

pool1/home1   14.3G   143G  14.3G  /home1
pool1/[EMAIL PROTECTED]  1.57M  -  14.3G  -
pool1/home2   4.31G   143G  4.31G  /home2
pool1/[EMAIL PROTECTED]  0  -  4.31G  -
pool1/home3   1.11G   143G  1.11G  /home3
pool1/[EMAIL PROTECTED]  0  -  1.11G  -
pool1/home4   10.2G   143G  10.2G  /home4
pool1/[EMAIL PROTECTED]44K  -  10.2G  -

# zfs send -vR [EMAIL PROTECTED] | zfs receive -vdF pool3
sending from @ to [EMAIL PROTECTED]
sending from @ to pool1/[EMAIL PROTECTED]
receiving full stream of [EMAIL PROTECTED] into [EMAIL PROTECTED]
received 13.6KB stream in 1 seconds (13.6KB/sec)
receiving full stream of pool1/[EMAIL PROTECTED] into pool3/[EMAIL PROTECTED]
sending from @ to pool1/[EMAIL PROTECTED]
cannot mount '/home4': directory is not empty

So how do I tell zfs receive to create the new filesystems in pool3, but 
not actually try to mount them?
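
(One thing worth trying, if the build is new enough to have it: zfs receive
has grown a -u option that is supposed to leave the received file systems
unmounted, i.e. something like

# zfs send -vR [EMAIL PROTECTED] | zfs receive -vduF pool3

I haven't verified that on this particular build, though.)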

-- 
Alan Burlison
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
 On Thu, Jul 24, 2008 at 1:28 AM, Steve
 [EMAIL PROTECTED] wrote:
  And interesting of booting from CF, but it seems is
 possible to boot from the zraid and I would go for
 it!
 
 It's not possible to boot from a raidz volume yet.
 You can only boot
 from a single drive or a mirror.

If I understood properly, it has been possible since April this year, but yes, 
there are still open issues that are being worked out! :-)
http://opensolaris.org/os/community/zfs/boot/
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Brett Monroe
 Yup, and the Supermicro card uses the Marvell
 Hercules-2 88SX6081 (Rev. C0) SATA Host Controller,
 which is part of the series supported by the same
 driver:
  http://docs.sun.com/app/docs/doc/816-5177/marvell88sx
 7d?a=view.  I've seen the Supermicro card mentioned
 in connection with the Thumpers many times on the
 forums.

Ahh, I was unfamiliar with Supermicro's products...I'll shut up now. :)

--Brett
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Steve
  s And, if better I'm open also to intel!
 intel you can possibly get onboard AHCI that works,
  and the intel
 igabit MAC, and 16GB instead of 8GB RAM on a desktop
 board.  Also the
 video may be better-supported.  but it's, you know,
 intel.

Miles, sorry, but I'm probably missing something needed to properly understand 
your closing comment about Intel!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-24 Thread Brandon High
On Sun, Jul 13, 2008 at 3:37 AM, Bryan Wagoner [EMAIL PROTECTED] wrote:
 I was a little confused on what to get, so I ended up buying this off the 
 Provantage website where I'm getting the card.  The card was like $123 and 
 each of these cables was like $22.

 CBL-0118L-02 IPASS to 4 SATA Octopus Cable

Adaptec also sells a lot of different SAS/SATA cables. The cost is a
little higher but there's a lot of selection.
http://www.adaptec.com/en-US/products/cables/cables/sas/

Any of these cables will work:
I-MSASX4-SAS4X1-FO-0.5M R
internal mini Serial Attached SCSI x4 (SFF-8087) to (4) x1 (SFF-8482)
Serial Attached SCSI (controller based) fan-out cable with removable
power dongles.

I-MSASX4-4SATAX1 0.5M R
internal mini Serial Attached SCSI x4 (SFF-8087) to (4) x1 Serial ATA
(controller based) fan-out cable


ACK-I-mSASx4-4SATAx1-SB-0.5m R
internal Mini Serial Attached SCSI x4 (SFF-8087) to (4) x1 Serial ATA
(controller based) fan-out cable with SFF-8448 sideband signals.

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Jorgen Lundman

Since we were drowning, we decided to go ahead and reboot with my 
guesses, even though I have not heard any expert opinions on the 
changes.  (Also, 3 mins was way underestimated. It takes 12 minutes to 
reboot our x4500.)

The new values are:  (original)

set bufhwm_pct=10(2%)
set maxusers=4096(2048)
set ndquot=5048000   (50480)
set ncsize=1038376   (129797)
set ufs_ninode=1038376   (129797)


It does appear to run better, but it is hard to tell. On 7 out of 10 
tries, statvfs64 takes less than 2 seconds, but I did see it go as high as 14s.

However, 2 hours later the x4500 hung. Pingable, but no console, nor NFS 
response. The LOM was fine, and I performed a remote reboot.

Since then it has stayed up 5 hours.

PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP 

521 daemon   7404K 6896K sleep   60  -20   0:25:03 3.1% nfsd/754
Total: 1 processes, 754 lwps, load averages: 0.82, 0.79, 0.79

CPU states: 90.6% idle,  0.0% user,  9.4% kernel,  0.0% iowait,  0.0% swap
Memory: 16G real, 829M free, 275M swap in use, 16G swap free


  10191915 total name lookups (cache hits 82%)

 maxsize 1038376
 maxsize reached 993770

(I increased it by nearly 10x and the 'maxsize reached' count is still high.)
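
For what it's worth, the way I am confirming that the new values actually
took effect after the reboot (same tunable names as above) is:

# echo 'ncsize/D' | mdb -k
# echo 'ufs_ninode/D' | mdb -k
# kstat -n inode_cache | grep maxsize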


Lund


Jorgen Lundman wrote:
 We are having slow performance with the UFS volumes on the x4500. They
 are slow even on the local server. Which makes me think it is (for once) 
 not NFS related.
 
 
 Current settings:
 
 SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
 
 # cat /etc/release
  Solaris Express Developer Edition 9/07 snv_70b X86
 Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
  Use is subject to license terms.
  Assembled 30 August 2007
 
 NFSD_SERVERS=1024
 LOCKD_SERVERS=128
 
 PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP
 
   12249 daemon   7204K 6748K sleep   60  -20  54:16:26  14% nfsd/731
 
 load averages:  2.22,  2.32,  2.42 12:31:35
 63 processes:  62 sleeping, 1 on cpu
 CPU states: 68.7% idle,  0.0% user, 31.3% kernel,  0.0% iowait,  0.0% swap
 Memory: 16G real, 1366M free, 118M swap in use, 16G swap free
 
 
 /etc/system:
 
 set ndquot=5048000
 
 
 We have a setup like:
 
 /export/zfs1
 /export/zfs2
 /export/zfs3
 /export/zfs4
 /export/zfs5
 /export/zdev/vol1/ufs1
 /export/zdev/vol2/ufs2
 /export/zdev/vol3/ufs3
 
 What is interesting is that if I run df, it will display everything at 
 normal speed, but pause before vol1/ufs1 file system. truss confirms 
 that statvfs64() is slow (5 seconds usually). All other ZFS and UFS 
 filesystems behave normally. vol1/ufs1 is the most heavily used UFS 
 filesystem.
 
 Disk:
 /dev/zvol/dsk/zpool1/ufs1
 991G   224G   758G23%/export/ufs1
 
 Inodes:
 /dev/zvol/dsk/zpool1/ufs1
  37698475 2504405360%   /export/ufs1
 
 
 
 
 Possible problems:
 
 # vmstat -s
 866193018 total name lookups (cache hits 57%)
 
 # kstat -n inode_cache
 module: ufs instance: 0
 name:   inode_cache class:ufs
   maxsize 129797
   maxsize reached 269060
   thread idles319098740
   vget idles  62136
 
 
 This leads me to think we should consider setting;
 
 set ncsize=259594(doubled... are there better values?)
 set ufs_ninode=259594
 
 in /etc/system, and reboot. But it is costly to reboot based only on my
 guess. Do you have any other suggestions to explore? Will this help?
 
 
 Sincerely,
 
 Jorgen Lundman
 
 

-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Lida Horn
Richard Elling wrote:
 There are known issues with the Marvell drivers in X4500s.  You will
 want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
 for the platform.
 http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500

 You will want to especially pay attention to SunAlert 201289
 http://sunsolve.sun.com/search/document.do?assetkey=1-66-201289-1

 If you run into these or other problems which are not already described
 in the above documents, please log a service call which will get you
 into the folks who track the platform problems specifically and know
 about patches in the pipeline.
  -- richard

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   
Although I am not in the SATA group any longer, I have in the past 
tested hot plugging and failures
of SATA disks with x4500s, Marvell plug in cards and SuperMicro plug in 
cards.  It has worked
in the past on all of these platforms.  Having said that there are 
things that you might be hitting
or might try.

1) The default behavior when a disk is removed and then re-inserted is 
to leave the disk unconfigured.
The operator must issue a cfgadm -c configure satax/y to bring 
the newly plugged in disk on-line.
There was some work being done to make this automatic, but I am not 
currently aware of the state of
that work.

2) There were bugs related to disk drive errors that have been addressed 
(several months ago).  If you have old
 code you could be hitting one or more of those issues.

3) I think there was a change in the sata generic module with respect to 
when it declares a failed disk as off-line.
You might want to check if you are hitting a problem with that.

4) There are a significant number of bugs in ZFS that can cause hangs.  
Most have been addressed with recent patches.
Make sure you have all the patches.

If you use the raw disk (i.e. no ZFS involvement), do something like 
dd bs=128k if=/dev/rdsk/cxtyd0p0 of=/dev/null
and then try pulling out the disk.  The dd should return with an I/O 
error virtually immediately.  If it doesn't, then
ZFS is probably not the issue.  You can also issue the cfgadm command 
and see what it lists as the state(s) of the
various disks.
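
For example (illustrative controller/target names only -- substitute your own):

# dd bs=128k if=/dev/rdsk/c2t3d0p0 of=/dev/null &
  ... pull the disk ...
# cfgadm | grep sata
sata1/3::dsk/c2t3d0            disk         connected    configured   ok

A removed or failed disk would normally show something other than
connected/configured/ok in that listing.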

Hope that helps,
Lida Horn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Boyd Adamson
Enda O'Connor ( Sun Micro Systems Ireland) [EMAIL PROTECTED]
writes:
[..]
 meant to add that on x86 the following should do the trick ( again I'm open 
 to correction )

 installgrub /boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0

 haven't tested the x86 one though.

I used

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

(i.e., no /zfsroot)
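
Since the thread is about SPARC: the equivalent there should be installboot
rather than installgrub, presumably something like

installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0

but I have only tested the x86 command above.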

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-07-24 Thread Mike Gerdts
On Fri, Apr 25, 2008 at 9:22 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
 Hello andrew,

 Thursday, April 24, 2008, 11:03:48 AM, you wrote:

 a What is the reasoning behind ZFS not enabling the write cache for
 a the root pool? Is there a way of forcing ZFS to enable the write cache?

 The reason is that EFI labels are not supported for booting.
 So from ZFS perspective you put root pool on a slice on SMI labeled
 disk - the way currently ZFS works it assumes in such a case that
 there could be other slices used by other programs and because you can
 enable/disable write cache per disk and not per slice it's just safer
 to not automatically enable it.

 If you however enable it yourself then it should stay that way (see
 format -e -> cache)

So long as the zpool covers all of the space used for dynamic data that
needs to survive a reboot, it would seem to make a lot of sense to
enable the write cache on such disks.  This assumes that ZFS issues the
cache flush no matter whether it thinks the write cache is enabled or not.
Am I wrong about this somehow?
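
For reference, the manual procedure mentioned above goes roughly like this
(format is interactive; the disk choice is just an example):

# format -e
  (select the disk, e.g. c1t0d0)
format> cache
cache> write_cache
write_cache> enable
write_cache> display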

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Neal Pollack

Lida Horn wrote:

Richard Elling wrote:
  

There are known issues with the Marvell drivers in X4500s.  You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500

You will want to especially pay attention to SunAlert 201289
http://sunsolve.sun.com/search/document.do?assetkey=1-66-201289-1

If you run into these or other problems which are not already described
in the above documents, please log a service call which will get you
into the folks who track the platform problems specifically and know
about patches in the pipeline.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

Although I am not in the SATA group any longer, I have in the past 
tested hot plugging and failures
of SATA disks with x4500s, Marvell plug in cards and SuperMicro plug in 
cards.  It has worked
in the past on all of these platforms.  Having said that there are 
things that you might be hitting

or might try.

1) The default behavior when a disk is removed and then re-inserted is 
to leave the disk unconfigured.
The operator must issue a cfgadm -c configure satax/y to bring 
the newly plugged in disk on-line.
There was some work being done to make this automatic, but I am not 
currently aware of the state of

that work.
  


As of build 94, it does not automatically bring the disk online.
I replaced a failed disk on an x4500 today running Nevada build 94, and 
still

had to manually issue

# cfgadm -c configure sata1/3
# zpool replace tank cxt2d0

then wait 7 hours for resilver.
But the above is correct and expected.  They simply have not automated 
that yet.  Apparently.


Neal


2) There were bugs related to disk drive errors that have been addressed 
(several months ago).  If you have old

 code you could be hitting one or more of those issues.

3) I think there was a change in the sata generic module with respect to 
when it declares a failed disk as off-line.

You might want to check if you are hitting a problem with that.

4) There are a significant number of bugs in ZFS that can cause hangs.  
Most have been addressed with recent patches.

Make sure you have all the patches.

If you use the raw disk (i.e. no ZFS involvement), do something like 
dd bs=128k if=/dev/rdsk/cxtyd0p0 of=/dev/null
and then try pulling out the disk.  The dd should return with an I/O 
error virtually immediately.  If it doesn't, then
ZFS is probably not the issue.  You can also issue the cfgadm command 
and see what it lists as the state(s) of the

various disks.

Hope that helps,
Lida Horn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Eric Schrock
On Thu, Jul 24, 2008 at 08:38:21PM -0700, Neal Pollack wrote:
 
 As of build 94, it does not automatically bring the disk online.
 I replaced a failed disk on an x4500 today running Nevada build 94, and 
 still
 had to manually issue
 
 # cfgadm -c configure sata1/3
 # zpool replace tank cxt2d0
 
 then wait 7 hours for resilver.
 But the above is correct and expected.  They simply have not automated 
 that yet.  Apparently.

You can set 'sata:sata_auto_online=1' in /etc/system to enable this
behavior.  It should be the default, as it is with other drivers (like
mpt), but there has been resistance from the SATA team in the past.
Anyone wanting to do the PSARC legwork could probably get this changed
in nevada.

You can also set 'autoreplace=on' for the ZFS pool, and the replace will
happen automatically upon insertion if you are using whole disks and a
driver with static device paths (such as sata).
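
For example (pool name is illustrative):

# echo 'set sata:sata_auto_online=1' >> /etc/system    (reboot required)
# zpool set autoreplace=on tank
# zpool get autoreplace tank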

- Eric

--
Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss