Re: [zfs-discuss] need hint on pool setup

2012-02-01 Thread Thomas Nau
Bob,


On 01/31/2012 09:54 PM, Bob Friesenhahn wrote:
 On Tue, 31 Jan 2012, Thomas Nau wrote:
 
 Dear all
 We have two JBODs with 20 or 21 drives available per JBOD hooked up
 to a server. We are considering the following setups:

 RAIDZ2 made of 4 drives
 RAIDZ2 made of 6 drives

 The first option wastes more disk space but can survive a JBOD failure
 whereas the second is more space effective but the system goes down when
 a JBOD goes down. Each of the JBOD comes with dual controllers, redundant
 fans and power supplies so do I need to be paranoid and use option #1?
 Of course it also gives us more IOPs but high end logging devices should take
 care of that
 
 I think that the answer depends on the impact on your business if data is
 temporarily not available.  If your business cannot survive data being
 temporarily unavailable (for hours or even a week), then the more
 conservative approach may be warranted.

We are talking about home directories at a university, so some
downtime is ok, but for sure not hours or even days. We do
regular backups plus snapshot send-receive to a remote location.
The main thing I was wondering about is whether it's better to have downtime
if a JBOD fails (rare, I assume) or to keep going with no redundancy left.
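(For reference, the option #1 layout we have in mind would look roughly like
this; device names are made-up placeholders, c1 = first JBOD, c2 = second
JBOD, so each raidz2 loses only two of its disks if a whole JBOD dies:)

    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 \
        raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0 \
        raidz2 c1t4d0 c1t5d0 c2t4d0 c2t5d0
    # ...and so on for the remaining drives; the log devices would be
    # mirrored and added separately with 'zpool add tank log mirror ...'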


 If you have a service contract which assures that a service tech will show up 
 quickly with replacement hardware in hand, then
 this may also influence the decision which should be made.

The replacement hardware is more or less on-site, as we use it for
disaster recovery at the remote location.

 Another consideration is that since these JBODs connect to a server, the data 
 will also be unavailable when the server is down. 
 The server being down may in fact be a more significant factor than a JBOD 
 being down.

I skipped that, sorry. Of course all JBODs are connected through multiple
SAS HBAs to two servers, so a server failure is easy to handle.

Thanks for the thoughts
Thomas



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-01 Thread Jim Klimov

2012-02-01 6:22, Ragnar Sundblad wrote:

That is almost what I do, except that I only have one HBA.
We haven't seen many HBAs fail over the years (none, actually), so we
thought it was overkill to double those too. But maybe we are wrong?


Question: if you use two HBAs on different PCI buses to do MPxIO to the
same JBODs, wouldn't this double your peak performance between the
motherboard and the disks (besides adding resilience to failure of one
of the paths)?

This might be less important with JBODs of HDDs, but
more important with external arrays of SSD disks...
or very many HDDs :)

Thanks in advance for clearing that up for me,
//Jim


Re: [zfs-discuss] need hint on pool setup

2012-02-01 Thread Hung-Sheng Tsao (laoTsao)
my 2c:
1. just do mirrors of 2 devices across the 20 HDDs, with 1 spare (sketch below)
2. raidz2 vdevs of 5 devices each for the 20 HDDs, with one spare
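For #1, something like this (device names hypothetical; c1/c2 = the two JBODs):

    zpool create tank \
        mirror c1t0d0 c2t0d0 \
        mirror c1t1d0 c2t1d0 \
        mirror c1t2d0 c2t2d0 \
        spare c1t9d0
    # ...remaining mirror pairs the same way, one disk from each JBOD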

Sent from my iPad

On Feb 1, 2012, at 3:49, Thomas Nau thomas@uni-ulm.de wrote:

 Hi
 
 On 01/31/2012 10:05 PM, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
 what is your main application for ZFS? e.g. just NFS or iSCSI for home dirs
 or VMs? or Windows clients?
 
 Yes, file service only, using CIFS, NFS, Samba and maybe iSCSI
 
 Is performance important? or is space more important?
 
 a good balance ;)
 
 what is the memory of your server?
 
 96G
 
 do you want to use ZIL or L2ARC?
 
 STEC ZeusRAM as ZIL (mirrored); maybe SSDs as L2ARC
 
 what is your backup  or DR plan?
 
 continuous rolling snapshot plus send/receive to remote site
 TSM backup at least once a week to tape; depends on how much
 time the TSM client needs to walk the filesystems
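 (roughly along these lines, with made-up dataset, snapshot and host names:)

     zfs snapshot -r tank/home@2012-02-01
     zfs send -R -i tank/home@2012-01-31 tank/home@2012-02-01 | \
         ssh dr-host zfs receive -Fdu backup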
 
 You need to answer all these questions first
 
 did so
 
 Thomas
 


Re: [zfs-discuss] very slow write performance on 151a

2012-02-01 Thread milosz
hi guys,

does anyone know if a fix for this (space map thrashing) is in the
works?  I've been running into this on and off on a number of systems
I manage.  Sometimes I can delete snapshots and things go back to
normal; sometimes the only thing that works is enabling
metaslab_debug.  Obviously the latter is only really an option for
systems with a huge amount of RAM.
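(for reference, enabling it looks roughly like this; the variable name is as
of the 151a-era source, so check that it exists on your build first:)

    echo "metaslab_debug/D" | mdb -k     # print the current value, 0 = off
    echo "metaslab_debug/W 1" | mdb -kw  # flip it on the running kernel
    # or persistently across reboots, in /etc/system:
    set zfs:metaslab_debug = 1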

or: am I doing something wrong?

milosz

On Mon, Dec 19, 2011 at 8:02 AM, Jim Klimov jimkli...@cos.ru wrote:
 2011-12-15 22:44, milosz wrote:

 There are a few metaslab-related tunables that can be tweaked as well.
                                        - Bill


 For the sake of completeness, here are the relevant lines
 I have in /etc/system:

 **
 * fix up metaslab min size (recent default ~10Mb seems bad,
 * recommended return to 4Kb, we'll do 4*8K)
 * greatly increases write speed in filled-up pools
 set zfs:metaslab_min_alloc_size = 0x8000
 set zfs:metaslab_smo_bonus_pct = 0xc8
 **

 These values were described in greater detail on the list
 this summer, I think.

 HTH,
 //Jim


[zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Jan Hellevik
I suspect that something is wrong with one of my disks.

This is the output from iostat:

                            extended device statistics              ---- errors ----
    r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
    2.0   18.9   38.1   160.9   0.0   0.1     0.1     3.2   0   6    0    0    0    0  c5d0
    2.7   18.8   59.3   160.9   0.0   0.1     0.2     3.2   0   6    0    0    0    0  c5d1
    0.0   36.8    1.1  3593.7   0.0   0.1     0.0     2.9   0   8    0    0    0    0  c6t66d0
    0.0   38.2    0.0  3693.7   0.0   0.2     0.0     4.6   0  12    0    0    0    0  c6t70d0
    0.0   38.1    0.0  3693.7   0.0   0.1     0.0     2.4   0   5    0    0    0    0  c6t74d0
    0.0   42.0    0.0  4155.4   0.0   0.0     0.0     0.6   0   2    0    0    0    0  c6t76d0
    0.0   36.9    0.0  3593.7   0.0   0.1     0.0     1.4   0   3    0    0    0    0  c6t78d0
    0.0   41.7    0.0  4155.4   0.0   0.0     0.0     1.2   0   4    0    0    0    0  c6t80d0

The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t 
than the other disks in the pool. The output is from a 'zfs receive' after 
about 3 hours. 
The two c5dx disks are the 'rpool' mirror, the others belong to the 'backup' 
pool.

admin@master:~# zpool status
  pool: backup
 state: ONLINE
 scan: scrub repaired 0 in 5h7m with 0 errors on Tue Jan 31 04:55:31 2012
config:

NAME STATE READ WRITE CKSUM
backup   ONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c6t78d0  ONLINE   0 0 0
c6t66d0  ONLINE   0 0 0
  mirror-1   ONLINE   0 0 0
c6t70d0  ONLINE   0 0 0
c6t74d0  ONLINE   0 0 0
  mirror-2   ONLINE   0 0 0
c6t76d0  ONLINE   0 0 0
c6t80d0  ONLINE   0 0 0

errors: No known data errors

admin@master:~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
backup  4.53T  1.37T  3.16T  30%  1.00x  ONLINE  -

admin@master:~# uname -a
SunOS master 5.11 oi_148 i86pc i386 i86pc

Should I be worried? And what other commands can I use to investigate further?



Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Bob Friesenhahn

On Wed, 1 Feb 2012, Jan Hellevik wrote:

The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t
than the other disks in the pool. The output is from a 'zfs receive' after 
about 3 hours.
The two c5dx disks are the 'rpool' mirror, the others belong to the 'backup' 
pool.


Are all of the disks the same make and model?  What type of chassis 
are the disks mounted in?  Is it possible that the environment that 
this disk experiences is somehow different than the others (e.g. due 
to vibration)?



Should I be worried? And what other commands can I use to investigate further?


It is difficult to say if you should be worried.

Be sure to do 'iostat -xe' to see if there are any accumulating errors 
related to the disk.
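For example, something along the lines of:

    iostat -xen 30

where -x gives extended statistics, -e adds the per-device error counters,
and -n uses descriptive device names; watch whether the s/w, h/w, trn and
tot columns grow over time.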


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Jan Hellevik
Hi!

On Feb 1, 2012, at 7:43 PM, Bob Friesenhahn wrote:

 On Wed, 1 Feb 2012, Jan Hellevik wrote:
 The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t
 than the other disks in the pool. The output is from a 'zfs receive' after 
 about 3 hours.
 The two c5dx disks are the 'rpool' mirror, the others belong to the 'backup' 
 pool.
 
 Are all of the disks the same make and model?  What type of chassis are the 
 disks mounted in?  Is it possible that the environment that this disk 
 experiences is somehow different than the others (e.g. due to vibration)?

They are different makes - I try to make pairs of different brands to minimise 
risk.

The disks are in a Rackable Systems enclosure (disk shelf?). 16 disks, all 
SATA. Connected to a SASUC8I controller on the server.

This is a backup server I recently put together to keep backups from my main 
server. I put in the disks from the old 'backup' pool and have started a 2TB 
zfs send/receive from my main server. So far things look ok; it is just the 
somewhat high values on that one disk that worry me a little.

 
 Should I be worried? And what other commands can I use to investigate 
 further?
 
 It is difficult to say if you should be worried.
 
 Be sure to do 'iostat -xe' to see if there are any accumulating errors 
 related to the disk.
 

This is the most current output from iostat. It has been running a zfs receive 
for more than a day. No errors. zpool status also reports no errors.


                            extended device statistics              ---- errors ----
    r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
    8.1   18.7  142.5   180.4   0.0   0.1     0.1     3.2   0   8    0    0    0    0  c5d0
   10.2   18.7  186.3   180.4   0.0   0.1     0.1     3.3   0   9    0    0    0    0  c5d1
    0.0   36.7    0.0  3595.8   0.0   0.1     0.0     3.2   0   9    0    0    0    0  c6t66d0
    0.0   36.0    0.0  3642.2   0.0   0.1     0.0     3.9   0  12    0    0    0    0  c6t70d0
    0.0   36.1    0.0  3642.2   0.0   0.1     0.0     2.9   0   5    0    0    0    0  c6t74d0
    0.0   39.6    0.0  4071.8   0.0   0.0     0.0     0.7   0   2    0    0    0    0  c6t76d0
    0.2    0.0    0.3     0.0   0.0   0.0     0.0     0.0   0   0    0    0    0    0  c6t77d0
    0.2   36.8    0.3  3595.8   0.0   0.1     0.0     1.9   0   4    0    0    0    0  c6t78d0
    0.2    0.0    0.3     0.0   0.0   0.0     0.0     0.0   0   0    0    0    0    0  c6t79d0
    0.2   39.6    0.3  4071.6   0.0   0.1     0.0     1.6   0   5    0    0    0    0  c6t80d0
    0.2    0.0    0.3     0.0   0.0   0.0     0.0     0.0   0   0    0    0    0    0  c6t81d0

admin@master:/export/home/admin$ zpool list 
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
backup  4.53T  2.17T  2.36T  47%  1.00x  ONLINE  -

admin@master:/export/home/admin$ zpool status
  pool: backup
 state: ONLINE
 scan: scrub repaired 0 in 5h7m with 0 errors on Tue Jan 31 04:55:31 2012
config:

NAME STATE READ WRITE CKSUM
backup   ONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c6t78d0  ONLINE   0 0 0
c6t66d0  ONLINE   0 0 0
  mirror-1   ONLINE   0 0 0
c6t70d0  ONLINE   0 0 0
c6t74d0  ONLINE   0 0 0
  mirror-2   ONLINE   0 0 0
c6t76d0  ONLINE   0 0 0
c6t80d0  ONLINE   0 0 0

errors: No known data errors




Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Cindy Swearingen

Hi Jan,

These commands will tell you if FMA faults are logged:

# fmdump
# fmadm faulty

This command will tell you if errors are accumulating on this
disk:

# fmdump -eV | more
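For example, to get a quick sense of which ereport classes are accumulating
(the output will vary):

# fmdump -e | tail -20
# fmdump -e | awk '{ print $NF }' | sort | uniq -c | sort -n

(the last field of the 'fmdump -e' output is the ereport class)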

Thanks,

Cindy

On 02/01/12 11:20, Jan Hellevik wrote:

I suspect that something is wrong with one of my disks.

This is the output from iostat:

                             extended device statistics              ---- errors ----
     r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
     2.0   18.9   38.1   160.9   0.0   0.1     0.1     3.2   0   6    0    0    0    0  c5d0
     2.7   18.8   59.3   160.9   0.0   0.1     0.2     3.2   0   6    0    0    0    0  c5d1
     0.0   36.8    1.1  3593.7   0.0   0.1     0.0     2.9   0   8    0    0    0    0  c6t66d0
     0.0   38.2    0.0  3693.7   0.0   0.2     0.0     4.6   0  12    0    0    0    0  c6t70d0
     0.0   38.1    0.0  3693.7   0.0   0.1     0.0     2.4   0   5    0    0    0    0  c6t74d0
     0.0   42.0    0.0  4155.4   0.0   0.0     0.0     0.6   0   2    0    0    0    0  c6t76d0
     0.0   36.9    0.0  3593.7   0.0   0.1     0.0     1.4   0   3    0    0    0    0  c6t78d0
     0.0   41.7    0.0  4155.4   0.0   0.0     0.0     1.2   0   4    0    0    0    0  c6t80d0

The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t
than the other disks in the pool. The output is from a 'zfs receive' after 
about 3 hours.
The two c5dx disks are the 'rpool' mirror, the others belong to the 'backup' 
pool.

admin@master:~# zpool status
   pool: backup
  state: ONLINE
  scan: scrub repaired 0 in 5h7m with 0 errors on Tue Jan 31 04:55:31 2012
config:

 NAME STATE READ WRITE CKSUM
 backup   ONLINE   0 0 0
   mirror-0   ONLINE   0 0 0
 c6t78d0  ONLINE   0 0 0
 c6t66d0  ONLINE   0 0 0
   mirror-1   ONLINE   0 0 0
 c6t70d0  ONLINE   0 0 0
 c6t74d0  ONLINE   0 0 0
   mirror-2   ONLINE   0 0 0
 c6t76d0  ONLINE   0 0 0
 c6t80d0  ONLINE   0 0 0

errors: No known data errors

admin@master:~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
backup  4.53T  1.37T  3.16T  30%  1.00x  ONLINE  -

admin@master:~# uname -a
SunOS master 5.11 oi_148 i86pc i386 i86pc

Should I be worried? And what other commands can I use to investigate further?



Re: [zfs-discuss] GUI to set ACLs

2012-02-01 Thread Linder, Doug
Achim Wolpers wrote:

 I'm searching for a GUI tool to set ZFS (NFSv4) ACLs. I found some nautilus
 add-ons on the web but they don't seem to work with the nautilus shipped
 with OI. Any solution?

I've been looking for something like this for ages, but as far as I know none 
exists.  It certainly seems like a logical idea.  Then again, Solaris doesn't 
have all that many desktop users so I guess the user base would be limited.  
Maybe they could integrate it with Ops Center somehow.




Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Bob Friesenhahn

On Wed, 1 Feb 2012, Jan Hellevik wrote:


Are all of the disks the same make and model?


They are different makes - I try to make pairs of different brands to minimise 
risk.


Does your pairing maintain the same pattern of disk type across all 
the pairings?


Some modern disks use 4k sectors while others still use 512 bytes.  If 
the slow disk is a 4k sector model but the others are 512 byte models, 
then that would certainly explain a difference.
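One rough check is the sector size the disk reports to the OS, e.g.:

    prtvtoc /dev/rdsk/c6t70d0s0 | grep 'bytes/sector'

keeping in mind that many 4k "Advanced Format" drives still report 512-byte
logical sectors there, so it is not conclusive (and the slice name may need
adjusting for your labels).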


Assuming that a couple of your disks are still unused, you could try 
replacing the suspect drive with an unused drive (via zfs command) to 
see if the slowness goes away. You could also make that vdev a 
triple-mirror since it is very easy to add/remove drives from a mirror 
vdev.  Just make sure that your zfs syntax is correct so that you 
don't accidentally add a single-drive vdev to the pool (oops!). 
These sorts of things can be tested with zfs commands without 
physically moving/removing drives or endangering your data.
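For example (c6tXXd0 standing in for whichever free disk you use; note that
it is 'zpool attach', not 'zpool add', that extends an existing mirror):

    zpool attach backup c6t70d0 c6tXXd0   # grow mirror-1 to a 3-way mirror
    zpool replace backup c6t70d0 c6tXXd0  # or swap the suspect disk out
    zpool detach backup c6tXXd0           # drop the extra side again later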


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Jan Hellevik

On Feb 1, 2012, at 8:07 PM, Bob Friesenhahn wrote:

 On Wed, 1 Feb 2012, Jan Hellevik wrote:
 
 Are all of the disks the same make and model?
 
 They are different makes - I try to make pairs of different brands to 
 minimise risk.
 
 Does your pairing maintain the same pattern of disk type across all the 
 pairings?
 

Not 100% sure I understand what you mean (English is not my first
language).
These are the disks:
mirror-0: wd15ears + hd154ui
mirror-1: wd15ears + hd154ui
mirror-2: wd20ears + hd204ui

Two pairs of 1.5TB and one pair of 2.0TB. I would like to have pairs of the 
same size, but these were the disks I had available, and since it is a backup 
pool I do not think it matters that much. If the flooding hadn't tripled the 
price of disks I would probably buy a few more, but not with the current price 
level. :-(

I am waiting for a replacement 1.5TB disk and will replace the 'bad' one as 
soon as I get it.

 Some modern disks use 4k sectors while others still use 512 bytes.  If the 
 slow disk is a 4k sector model but the others are 512 byte models, then that 
 would certainly explain a difference.
 

AVAILABLE DISK SELECTIONS:
   0. c5d0 ?xH?0?0??? cyl 14590 alt 2 hd 255 sec 63
   1. c5d1 ?xH?0?0??? cyl 14590 alt 2 hd 255 sec 63
   2. c6t66d0 ATA-WDC WD15EARS-00Z-0A80-1.36TB
   3. c6t67d0 ATA-SAMSUNG HD501LJ-0-12-465.76GB
   4. c6t68d0 ATA-WDC WD6400AAKS-2-3B01-596.17GB
   5. c6t69d0 ATA-SAMSUNG HD501LJ-0-12-465.76GB
   6. c6t70d0 ATA-WDC WD15EARS-00Z-0A80-1.36TB
   7. c6t71d0 ATA-SAMSUNG HD501LJ-0-13-465.76GB
   8. c6t72d0 ATA-WDC WD6400AAKS--3B01 cyl 38909 alt 2 hd 255 sec 126
   9. c6t73d0 ATA-SAMSUNG HD501LJ-0-13-465.76GB
  10. c6t74d0 ATA-SAMSUNG HD154UI-1118-1.36TB
  11. c6t75d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB
  12. c6t76d0 ATA-SAMSUNG HD204UI-0001-1.82TB
  13. c6t77d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB
  14. c6t78d0 ATA-SAMSUNG HD154UI-1118-1.36TB
  15. c6t79d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB
  16. c6t80d0 ATA-WDC WD20EARS-00M-AB51-1.82TB
  17. c6t81d0 ATA-SAMSUNG HD501LJ-0-11-465.76GB

mirror-0
   2. c6t66d0 ATA-WDC WD15EARS-00Z-0A80-1.36TB
  14. c6t78d0 ATA-SAMSUNG HD154UI-1118-1.36TB
mirror-1
   6. c6t70d0 ATA-WDC WD15EARS-00Z-0A80-1.36TB
  10. c6t74d0 ATA-SAMSUNG HD154UI-1118-1.36TB
mirror-2
  12. c6t76d0 ATA-SAMSUNG HD204UI-0001-1.82TB
  16. c6t80d0 ATA-WDC WD20EARS-00M-AB51-1.82TB

You can see that mirror-0 and mirror-1 have identical disk pairs.

BTW: Can someone explain why this:
   8. c6t72d0 ATA-WDC WD6400AAKS--3B01 cyl 38909 alt 2 hd 255 sec 126
is not shown the same way as this:
   4. c6t68d0 ATA-WDC WD6400AAKS-2-3B01-596.17GB

Why the cylinder/sector in line 8?

 Assuming that a couple of your disks are still unused, you could try 
 replacing the suspect drive with an unused drive (via zfs command) to see if 
 the slowness goes away. You could also make that vdev a triple-mirror since 
 it is very easy to add/remove drives from a mirror vdev.  Just make sure that 
 your zfs syntax is correct so that you don't accidentally add a single-drive 
 vdev to the pool (oops!). These sorts of things can be tested with zfs 
 commands without physically moving/removing drives or endangering your data.
 

If I had available disks, I would. As of now, they are all busy. :-)

Thanks for the advice!

 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] GUI to set ACLs

2012-02-01 Thread David Magda
On Wed, February 1, 2012 14:03, Linder, Doug wrote:
 Achim Wolpers wrote:

 I'm searching for a GUI tool to set ZFS (NFSv4) ACLs. I found some
 nautilus add-ons on the web but they don't seem to work with the
 nautilus shipped with OI. Any solution?

 I've been looking for something like this for ages, but as far as I know
 none exists.  It certainly seems like a logical idea.  Then again, Solaris
 doesn't have all that many desktop users so I guess the user base would be
 limited.  Maybe they could integrate it with Ops Center somehow.

Well, more and more file systems will be using NFSv4-style ACLs, so it'd
be useful on platforms besides Solaris.

At $WORK we have an Isilon that has these ACLs on OneFS, and we've found
it easier to go in via Windows and CIFS and edit the ACLs than trying to
use the CLI tools for some of the convoluted permissions we have to deal
with (e.g., multiple research groups, with some users getting write access
on top of the base read access that they'd normally have; add inheritance
on top of that for newly created directories and sub-trees, etc.).
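For comparison, on Solaris/ZFS that kind of rule ends up as chmod ACL
entries along these lines (dataset and names made up for illustration):

    chmod A+group:research:read_data/read_xattr/execute:file_inherit/dir_inherit:allow /tank/proj
    chmod A+user:alice:write_data/add_file/add_subdirectory:file_inherit/dir_inherit:allow /tank/proj
    ls -dV /tank/proj

which is exactly the sort of thing that gets tedious without a GUI.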

I think if you can SSH into a server with X11 forwarding enabled and have
the editor run on the system, with the GUI showing up on the admin's
desktop, then it'd be handy. Editing complicated ACLs isn't just for your
desktop.




Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-01 Thread Richard Elling

On Feb 1, 2012, at 4:09 AM, Jim Klimov wrote:

 2012-02-01 6:22, Ragnar Sundblad wrote:
 That is almost what I do, except that I only have one HBA.
 We haven't seen many HBAs fail over the years (none, actually), so we
 thought it was overkill to double those too. But maybe we are wrong?
 
 Question: if you use two HBAs on different PCI buses to do MPxIO to the
 same JBODs, wouldn't this double your peak performance between the
 motherboard and the disks (besides adding resilience to failure of one
 of the paths)?

In general, for HDDs no, for SSDs yes.

 This might be less important with JBODs of HDDs, but
 more important with external arrays of SSD disks...
 or very many HDDs :)

With a fast SSD, you can easily get 700+ MB/sec when using mpxio, even
with a single HBA.
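mpathadm will show whether both paths to a LUN are actually there and in
use, e.g. (the logical-unit name is an example; take it from the list output):

    mpathadm list lu
    mpathadm show lu /dev/rdsk/c0t5000CCA222C46C40d0s2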
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422





Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-01 Thread Richard Elling
Thanks for the info, James!

On Jan 31, 2012, at 6:58 PM, James C. McPherson wrote:

 On  1/02/12 12:40 PM, Ragnar Sundblad wrote:
 ...
 I still don't really get what stmsboot -u actually does (and if - and if
 so how much - this differs between x86 and sparc).
 Would it be impolite to ask you to elaborate on this a little?
 
 Not at all. Here goes.
 
 /usr/sbin/stmsboot -u arms the mpxio-upgrade service so that it
 runs when you reboot.
 
 
 The mpxio-upgrade service
 
 #1 execs /lib/stmsboot_util -u, to do the actual rewriting of vfstab
 #2 execs metadevadm if you have any SVM metadevices
 #3 updates your boot archive
 #4 execs dumpadm to ensure that you have the correct dump device
   listed in /etc/dumpadm.conf
 #5 updates your boot path property on x64, if required.

Most or all of these are UFS-oriented. I've never found a need to run 
stmsboot when using ZFS root, even when changing from non-mpxio 
to mpxio.

Incidentally, the process to change from IDE legacy mode to AHCI for the
boot drive is very similar, but the Oracle docs say you have to reinstall the
OS. Clearly we can do that without reinstalling the OS, as shown in the
ZFS-discuss archives.
 -- richard

 
 
 /lib/stmsboot_util is the binary which does the heavy lifting. Each
 vfstab device element is checked - the cache that was created prior
 to the reboot is used to identify where the new paths are. You can
 see this cache by running strings over /etc/mpxio/devid_path.cache.
 
 
 
 This is all available for your perusal at
 
 http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/stmsboot/
 
 
 cheers,
 James
 --
 Oracle
 http://www.jmcp.homeunix.com/blog

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422


