Re: [zfs-discuss] (Zil on multiple usb keys) Mirroring the pool

2011-07-16 Thread Craig Cory
Tiernan,

Depending on how you created your current pool, you *may* be able to add
the mirroring without rebuilding it. Each disk in the stripe can have a second
disk of equal size attached to it, turning that vdev into a mirrored component.
So if your pool has two 500GB drives, attach another 500GB drive to each,
forming a mirror of each stripe half.

# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          c0t50d1   ONLINE       0     0     0
          c0t50d2   ONLINE       0     0     0

errors: No known data errors

# zpool attach mypool c0t50d1 c0t50d3
# zpool attach mypool c0t50d2 c0t50d4
# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Sat Jul 16 07:38:07 2011
config:

        NAME         STATE     READ WRITE CKSUM
        mypool       ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t50d1  ONLINE       0     0     0
            c0t50d3  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t50d2  ONLINE       0     0     0
            c0t50d4  ONLINE       0     0     0

errors: No known data errors
#

Both single vdevs (c0t50d1 and c0t50d2) are now mirrored.

If you don't have a second disk appropriately sized to match the current pool
members, you can create one or two pools with your two 500GB and two 300GB
disks, depending on your needs.

Either:

# zpool create pool1 mirror 500GB-1 500GB-2 mirror 300GB-1 300GB-2

to make one ~800GB pool. Or

# zpool create pool1 mirror 500GB-1 500GB-2
# zpool create pool2 mirror 300GB-1 300GB-2

to make two pools, one ~500GB and one ~300GB.

As long as the two disks within each mirrored pair match, the pairs themselves
do not all have to be the same size within the pool.
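
For example (a sketch reusing the placeholder device names above), you could
start with the 500GB mirror and later grow the pool by adding the 300GB pair as
a second, smaller mirrored vdev:

# zpool create pool1 mirror 500GB-1 500GB-2
# zpool add pool1 mirror 300GB-1 300GB-2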

Craig









Tiernan OToole wrote:
 Thanks for the info. I need to rebuild my machine and ZFS pool. I'm kind of new
 to this and realized I built it as a stripe, not a mirror... also, I want to
 add extra disks...

 As a follow up question:

 I have 2 500GB internal drives and 2 300GB USB drives. If I were to create
 2 pools, with a 300GB and a 500GB drive in each, and then mirror over them,
 would that work? Is it even possible? Or what would you recommend for that setup?

 Thanks.

 --Tiernan

 On Fri, Jul 15, 2011 at 5:39 PM, Edward Ned Harvey 
 opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Tiernan OToole
 
  This might be a stupid question, but here goes... Would adding, say, 4 4GB or
  8GB USB keys as a ZIL make enough of a difference for writes on an iSCSI
  shared vol?
 
  I am finding reads are not too bad (40ish MB/s over GigE on 2 500GB drives,
  striped) but writes top out at about 10 and drop a lot lower... If I were to
  add a couple of USB keys as a ZIL, would it make a difference?

 Unfortunately, USB keys, even the fastest ones, are slower than physical
 hard drives.  I even went out of my way to buy a super-expensive, super-fast
 USB3 16G fob...  And it's still slower than a super-cheap USB2 SATA hard
 drive.

 There is a way you can evaluate the effect of adding a fast slog device
 without buying one.  (It would have to be a fast device, certainly no USB
 fobs.)  Just temporarily disable your ZIL.  That's the fastest you can
 possibly go.  If it makes a big difference, then getting a fast slog device
 will help you approach that theoretical limit.  If it doesn't make a huge
 difference, then adding slog will not do you any good.

 To disable ZIL, if your pool is sufficiently recent, use the zfs set sync=
 command.  It takes effect immediately.  If you have an older system, you'll
 have to use a different command, and you'll probably have to remount your
 filesystem in order for the change to take effect.
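
 For instance (a sketch, assuming the pool is named tank and your build
 supports the sync property):
 
  # zfs set sync=disabled tank    (run the write test with the ZIL disabled)
  # zfs set sync=standard tank    (restore the default behavior afterwards)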




 --
 Tiernan O'Toole
 blog.lotas-smartman.net
 www.tiernanotoolephotography.com
 www.the-hairy-one.com
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Oracle/Sun Certified System Administrator
 : Oracle/Sun Certified Network Administrator
 : Oracle/Sun Certified Security Administrator
 : Symantec/Veritas Certified Instructor
 : RedHat Certified Systems Administrator

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] craig.c...@exitcertified.com
 [p] 916.669.3970
 [f] 916.669.3977

+-+
 ExitCertified :: Excellence in IT Certified Education

  Certified training with Oracle, Sun Microsystems, Apple, Symantec, IBM,
   Red Hat, MySQL, Hitachi Storage, SpringSource and VMWare.

 1.800.803.EXIT (3948)  |  www.ExitCertified.com

Re: [zfs-discuss] Question: adding a single drive to a mirrored zpool

2011-06-24 Thread Craig Cory
Alex,

alex stun wrote:
 Hello,
 I have a zpool consisting of several mirrored vdevs. I was in the middle of
 adding another mirrored vdev today, but found out one of the new drives is
 bad. I will be receiving the replacement drive in a few days. In the mean
 time, I need the additional storage on my zpool.

 Is the command to add a single drive to a mirrored zpool:
 zpool add -f tank drive1?

 Does the -f command cause any issues?
 I realize that there will be no redundancy on that drive for a few days, and I
 can live with that as long as the rest of my zpool remains intact.

 Thanks
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

This is exactly what you'll need to do. Without the -f, zpool will stop and
warn you that you have a mismatched replication level. So, to get the space:

 zpool add -f poolname single-disk

Then later, when the replacement disk arrives,

 zpool attach poolname single-disk newdisk
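
For example, with hypothetical device names (c1t5d0 as the single disk added
now, c1t6d0 as the replacement that arrives later):

 # zpool add -f tank c1t5d0
   ... a few days later ...
 # zpool attach tank c1t5d0 c1t6d0

After the resilver completes, c1t5d0 and c1t6d0 appear as a mirror vdev in
'zpool status', matching the redundancy of the rest of the pool.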

HTH

Craig


-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Oracle/Sun Certified System Administrator
 : Oracle/Sun Certified Network Administrator
 : Oracle/Sun Certified Security Administrator
 : Symantec/Veritas Certified Instructor
 : RedHat Certified Systems Administrator

+-+
 ExitCertified :: Excellence in IT Certified Education

  Certified training with Oracle, Sun Microsystems, Apple, Symantec, IBM,
   Red Hat, MySQL, Hitachi Storage, SpringSource and VMWare.

 1.800.803.EXIT (3948)  |  www.ExitCertified.com
+-+
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Craig Cory

Paul Kraus wrote:
 On Thu, Jun 23, 2011 at 12:48 AM, Nomen Nescio nob...@dizum.com wrote:

 Hello Bob! Thanks for the reply. I was thinking about going with a 3 way
 mirror and a hot spare. But I don't think I can upgrade to larger drives
 unless I do it all at once, is that correct?

 Why keep one out as a hot spare? If you have another zpool and
 the hot spare will be shared, that makes sense. If the drive is
 powered on and spinning, I don't see any downside to making it a 4-way
 mirror instead of 3-way + HS.

 --

Also, to grow a mirrored pool onto larger disks, you can replace the mirror
members one at a time: replace one disk with a larger one, wait for the
resilver to complete, then replace the other disk and resilver again.
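
A sketch, with hypothetical pool and device names (c1t2d0/c1t3d0 as the
existing mirror, c1t4d0/c1t5d0 as the larger replacements):

 # zpool replace tank c1t2d0 c1t4d0    (wait for the resilver to finish)
 # zpool replace tank c1t3d0 c1t5d0    (wait for the second resilver)

Once both members are the larger size, the extra capacity shows up after
setting autoexpand=on on the pool (on builds that have that property) or after
an export/import.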

Craig


-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Oracle/Sun Certified System Administrator
 : Oracle/Sun Certified Network Administrator
 : Oracle/Sun Certified Security Administrator
 : Symantec/Veritas Certified Instructor
 : RedHat Certified Systems Administrator


+-+
 ExitCertified :: Excellence in IT Certified Education

  Certified training with Oracle, Sun Microsystems, Apple, Symantec, IBM,
   Red Hat, MySQL, Hitachi Storage, SpringSource and VMWare.

 1.800.803.EXIT (3948)  |  www.ExitCertified.com
+-+
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split how it works?

2010-11-15 Thread Craig Cory
From the OpenSolaris ZFS FAQ page:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq

 If you want to use a hardware-level backup or snapshot feature instead of
the ZFS snapshot feature, then you will need to do the following steps:

* zpool export pool-name
* Hardware-level snapshot steps
* zpool import pool-name
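
Concretely, with a placeholder pool name, the sequence is:

 # zpool export mypool
   (take the hardware-level snapshot here)
 # zpool import mypool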





sridhar surampudi wrote:
 Hi Darren,

 Thanks for your info.

 Sorry, the below might be lengthy:

 Yes, I am looking for the actual implementation rather than how to use zpool split.

 My requirement is not at zfs file system level and also not zfs snapshot.


 As I understood it,
 if my zpool, say mypool, is created using: zpool create mypool mirror device1 device2
 then after running: zpool split mypool newpool device2, I can access
 device2 as newpool.

 The same data is available on newpool as on mypool, as long as there are no
 writes/modifications to newpool.

 What i am looking for is,

 if my devices (say the zpool is created with only one device, device1) are from an
 array and I take an array snapshot (zfs/zpool doesn't come into the picture, as I
 take a hardware snapshot), I will get a snapshot device, say device2.

 I am looking for a way to use the snapshot device device2 by recreating the
 zpool and zfs stack under an alternate name.

 zpool split must be making some changes to the metadata of device2 to associate
 it with the new name, i.e. newpool.

 I want to do the same for a snapshot device created using an array/hardware
 snapshot.

 Thanks & Regards,
 sridhar.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] craig.c...@exitcertified.com
 [p] 916.669.3970
 [f] 916.669.3977

+-+
 ExitCertified :: Excellence in IT Certified Education

  Certified training with Oracle, Sun Microsystems, Apple, Symantec, IBM,
   Red Hat, MySQL, Hitachi Storage, SpringSource and VMWare.

 1.800.803.EXIT (3948)  |  www.ExitCertified.com
+-+
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NetApp/Oracle-Sun lawsuit done

2010-09-13 Thread Craig Cory
Run away! Run fast, little NetApp. Don't anger the sleeping giant - Oracle!



David Magda wrote:
 Seems that things have been cleared up:

 NetApp (NASDAQ: NTAP) today announced that both parties have agreed to
 dismiss their pending patent litigation, which began in 2007 between Sun
 Microsystems and NetApp. Oracle and NetApp seek to have the lawsuits
 dismissed without prejudice. The terms of the agreement are confidential.

 http://tinyurl.com/39qkzgz
 http://www.netapp.com/us/company/news/news-rel-20100909-oracle-settlement.html

 A recap of the history at:

 http://www.theregister.co.uk/2010/09/09/oracle_netapp_zfs_dismiss/


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] craig.c...@exitcertified.com
 [p] 916.669.3970
 [f] 916.669.3977

+-+
 ExitCertified :: Excellence in IT Certified Education

  Certified training with Oracle, Sun Microsystems, Apple, Symantec, IBM,
   Red Hat, MySQL, Hitachi Storage, SpringSource and VMWare.

 1.800.803.EXIT (3948)  |  www.ExitCertified.com
+-+
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-18 Thread Craig Cory
When viewing a raidz|raidz1|raidz2 pool, 'zpool list|status' will report the
total device space; i.e. 3 1TB drives in a raidz will show approx. 3TB of space.
'zfs list' will show the available FILESYSTEM space; i.e. for 3 1TB raidz disks,
approx. 2TB of space.
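
As a quick illustration (a sketch with a hypothetical pool of three 1TB disks
in a raidz, not output from the poster's system):

 # zpool list mypool    (SIZE reports roughly the sum of all three disks, ~3TB)
 # zfs list mypool      (AVAIL reports roughly two disks' worth, ~2TB, after parity)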


Logic wrote:
 Ian Collins (i...@ianshome.com) wrote:
 On 07/18/10 11:19 AM, marco wrote:
 *snip*


 Yes, that is correct. zfs list reports usable space, which is 2 out of
 the three drives (parity isn't confined to one device).

 *snip*


 Are you sure?  That result looks odd.  It is what I'd expect to see from
 a stripe, rather than a raidz.

 What does zpool iostat -v pool2 report?

 Hi Ian,

 I'm the friend with the osol release(snv_117) installed.

 The output you asked for is:
 % zpool iostat -v pool2
                capacity     operations    bandwidth
 pool         used  avail   read  write   read  write
 ----------  -----  -----  -----  -----  -----  -----
 pool2       4.26T  1.20T    208     78  22.1M   409K
   raidz1    4.26T  1.20T    208     78  22.1M   409K
     c2d1        -      -     81     37  7.97M   208K
     c1d0        -      -     82     38  7.85M   209K
     c2d0        -      -     79     37  7.79M   209K
 ----------  -----  -----  -----  -----  -----  -----

 It really is a raidz, created a long time ago with build 27a, and I have been
 replacing the disks ever since, by removing one disk at a time and waiting
 for the resilvering to be done.

 greets Leon
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How many TLDs should you have?

2009-12-01 Thread Craig Cory
Hi Chris,

It sounds like there is some confusion about the recommendation for raidz?
vdevs. The recommendation is that each raidz? TLD contain a single-digit number
of disks - so up to 9 disks per TLD. The total number of these single-digit
TLDs in a pool is not practically limited.
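
For example (a sketch with placeholder disk names), each TLD below stays at a
single-digit 7 disks, and more TLDs can be appended later with 'zpool add':

 # zpool create bigpool raidz2 d1 d2 d3 d4 d5 d6 d7 \
                        raidz2 d8 d9 d10 d11 d12 d13 d14 \
                        raidz2 d15 d16 d17 d18 d19 d20 d21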

Craig



Christopher White wrote:
 Cindy --

 Thanks for the link!

 I see in one of the examples that there are 14 TLDs (all mirrored).  Does
 that mean there are no performance issues with having more than 9 TLDs?  In
 the Sun class I attended, the instructor said to not use more than 9 TLDs,
 which seems like it could be very limiting, especially in a SAN setting.
 Like I said, our storage group presents 15G LUNs to use -- so it'd be
 difficult to keep the TLDs under 9 and have a very large filesystem.

 Let me know what you think.  Thanks!

 Chris



 On Tue, Dec 1, 2009 at 10:47 AM, Cindy Swearingen
 cindy.swearin...@sun.comwrote:

 Hi Chris,

 If you have 40 or so disks then you would create 5-6 RAIDZ virtual
 devices of 7-8 disks each, or possibly include two disks for the root
 pool, two disks as spares, and then 36 (4 RAIDZ vdevs of 6 disks) disks
 for a non-root pool.

 This configuration guide hasn't been updated for RAIDZ-3 yet, but you
 will get some ideas about how to configure a redundant configuration
 of many disks, here:

 http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide

 See ZFS Configuration Example (x4500 with raidz2)

 Cindy


 On 12/01/09 09:20, Christopher White wrote:

 All,

 We're going to start testing ZFS and I had a question about Top Level
 Devices (TLDs).  In Sun's class, they specifically said not to use more
 than
 9 TLDs due to performance concerns.  Our storage admins make LUNs roughly
 15G in size -- so how would we make a large pool (1TB) if we're limited to
 only 9 TLDs?

 The Best Practices guide (
 http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pool_Performance_Considerations)
 suggests not using 40+ disks in a RAIDZ TLD:

 Avoid creating a RAIDZ, RAIDZ-2, RAIDZ-3, or a mirrored configuration
 with one logical device of 40+ devices. See the sections below for examples
 of redundant configurations.

 Does that mean we should only have 9 RAIDZ TLDs with 39 LUNs in each
 RAIDZ?

 Or is the 9 TLDs an old recommendation that has since been changed?

 Thanks!

 Chris


 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] craig.c...@exitcertified.com
 [p] 916.669.3970
 [f] 916.669.3977
 [w] WWW.EXITCERTIFIED.COM
+-+
   OTTAWA | SACRAMENTO | MONTREAL | LAS VEGAS | QUEBEC CITY | CALGARY
SAN FRANCISCO | VANCOUVER | REGINA | WINNIPEG | TORONTO

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-19 Thread Craig Cory
In response to:
 I don't see much similarity between mirroring and raidz other than
 that they both support redundancy.

Martin wrote:
 A single parity device against a single data device is, in essence, mirroring.
  For all intents and purposes, raid and mirroring with this configuration are
 one and the same.

I would have to disagree with this. Mirrored data will have multiple copies of
the actual data. Any copy is a valid source for data access. Lose one disk and
the other is a complete original. A RAID 3/4/5/6/z/z2 configuration will
generate a mathematical (parity) value, stored on one of the storage units in
the stripe, that is used to restore a portion of the lost data. A 2-disk raidz
will have 1/2 of each disk's used space holding primary data interlaced with
the other 1/2 holding a parity reflection of the data. Any time we access the
parity representation, some computation will be needed to render the live data.
This would have to add *some* overhead to the I/O.
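
As a quick sketch of the two layouts being contrasted (placeholder device
names):

 # zpool create mpool mirror disk1 disk2    (two full copies of every block)
 # zpool create rpool raidz  disk1 disk2    (data interlaced with parity)

Both survive the loss of one disk; the difference described above is in how the
second copy is represented - a literal copy versus parity that must be
recomputed back into data.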

Craig Cory
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS usable space calculations (Help!)

2009-03-17 Thread Craig Cory
Brent,


Brent Wagner wrote:
 Can someone point me to a document describing how available space in a
 zfs is calculated or review the data below and tell me what I'm
 missing?

 Thanks in advance,
 -Brent
 ===
 I have a home project with 3x250 GB+3x300 GB in raidz, so I expect to
 lose 1x300 GB to parity.

 Total size:1650GB
 Total size using 1024 to measure: ~1534 GB

 Expected raidz zpool size after losing 300 GB to parity: ~1350 GB
 Expected raidz zpool size using 1024 to measure: ~1255.5 GB

 Actual zpool size: 1.36T

 Single zfs on the pool - available size: 1.11T

 I realize zfs is going to have some overhead but 250 GB seems a little
 excessive...right? I thought maybe the zpool was showing all 6 disks and
 the filesystem reflected the remaining space after discounting the
 parity disk but that doesn't add up in a way that makes sense either
 (see above). Can someone help explain these numbers?

 Thanks,
 -Brent



When you say "3x250 GB + 3x300 GB in raidz", do you mean:

1) # zpool create mypool raidz 250gb-1 250gb-2 250gb-3 300gb-1 \
 300gb-2 300gb-3

or

2) # zpool create mypool raidz 250gb-1 250gb-2 250gb-3 \
 raidz 300gb-1 300gb-2 300gb-3

As I understand it, #1 would waste the extra 50GB on each 300GB drive and give
you 1500GB of raw space. 250GB of that (1/6th) would be parity, so 1250GB of
data space.

#2 would make 2 vdevs of 750GB and 900GB, totaling 1650GB of space. Parity would
use 250GB from the 1st vdev and 300GB from the second; so 1100GB of data
space is available.

Either way, when you list raidz* pools with
 # zpool list
you see the total physical space. When you list the filesystems with
 # zfs list
you get the usable filesystem space, which is where the parity is accounted for.

Here's an example with 250MB files and 300MB files:

For #1 scenario:

# zpool create -f mypool1 raidz /250d1 /250d2 /250d3 /300d1 /300d2 /300d3

# zpool list
NAME      SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
mypool1  1.44G   145K   1.44G    0%  ONLINE  -

# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
mypool1   115K  1.16G  40.7K  /mypool1

# zpool status
  pool: mypool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool1     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            /250d1  ONLINE       0     0     0
            /250d2  ONLINE       0     0     0
            /250d3  ONLINE       0     0     0
            /300d1  ONLINE       0     0     0
            /300d2  ONLINE       0     0     0
            /300d3  ONLINE       0     0     0

--
And for #2:
# zpool create -f mypool2 raidz /250d1 /250d2 /250d3 raidz /300d1 /300d2 /300d3

# zpool list
NAME      SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
mypool2  1.58G   157K   1.58G    0%  ONLINE  -

# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
mypool2   101K  1.02G  32.6K  /mypool2

# zpool status
  pool: mypool2
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool2     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            /250d1  ONLINE       0     0     0
            /250d2  ONLINE       0     0     0
            /250d3  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            /300d1  ONLINE       0     0     0
            /300d2  ONLINE       0     0     0
            /300d3  ONLINE       0     0     0

errors: No known data errors
---

Does this describe what you're seeing?


Craig



-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] craig.c...@exitcertified.com
 [p] 916.669.3970
 [f] 916.669.3977
 [w] WWW.EXITCERTIFIED.COM
+-+
   OTTAWA | SACRAMENTO | MONTREAL | LAS VEGAS | QUEBEC CITY | CALGARY
SAN FRANCISCO | VANCOUVER | REGINA | WINNIPEG | TORONTO

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Will there be a GUI for ZFS ?

2007-08-16 Thread Craig Cory
The GUI runs in the Sun Java Web Console. You must be running the web console
server - started with

/usr/sbin/smcwebserver start

Then, access it with

https://hostname:6789/zfs
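
The web console can also be managed through SMF and, if needed, opened up to
remote clients (a sketch; the service and property names below are as I recall
them for Solaris 10 and may vary by release):

# svcadm enable svc:/system/webconsole:console
# svccfg -s svc:/system/webconsole setprop options/tcp_listen=true
# svcadm restart svc:/system/webconsole:console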

Regards



Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] [EMAIL PROTECTED]
 [p] 916.669.3970
 [f] 916.669.3977
 [w] WWW.EXITCERTIFIED.COM
+-+
   OTTAWA | SACRAMENTO | MONTREAL | LAS VEGAS | QUEBEC CITY | CALGARY
SAN FRANCISCO | VANCOUVER | REGINA | WINNIPEG | TORONTO


In response to Rob Windsor, who said:
 Richard Elling wrote:
 Tim Thomas wrote:
   The GUI is already in Solaris 10 Update 3 and later.

 Just point a web browser at  https://yourhost:6789, login as root and
 you will see the launch point for the GUI on the console page..the ZFS
 GUI looks like this..

 Try https://localhost:6789
 Secure by default has the external access denied by default.

 Also, for those who haven't used it yet, the GUI will also show the CLI
 commands it will use.  So you can learn the CLI from the GUI.
   -- richard

 What pkg and svc is this?

 (I don't see it on my 11/06 installs)

 Rob++
 --
 Internet: [EMAIL PROTECTED] __o
 Life: [EMAIL PROTECTED]_`\,_
 (_)/ (_)
 They couldn't hit an elephant at this distance.
-- Major General John Sedgwick
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Multipathing and ZFS

2006-06-21 Thread Craig Cory
I have had a brief introduction to ZFS, and while discussing it with some other
folks the question came up about its use with multipathed storage. What, if any,
configuration or interaction does ZFS have with a multipathed storage setup,
however it may be managed?

thanks!




Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] [EMAIL PROTECTED]
 [p] 916.669.3970
 [f] 916.669.3977
 [w] WWW.EXITCERTIFIED.COM
+-+
   OTTAWA | SACRAMENTO | MONTREAL | LAS VEGAS | QUEBEC CITY | CALGARY
SAN FRANCISCO | VANCOUVER | REGINA | WINNIPEG | TORONTO
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zpool history' proposal

2006-05-03 Thread Craig Cory
I, too, am late to this thread, but I caught something that didn't seem right
to me in this specific example. For the administration of the non-global
zones, Sun Education (for whom I am an instructor) is stressing that the
non-global zones are "software virtualizations" (my quotes) and that the
hardware and infrastructure are managed by the global zone admin. In this case,
the ngz admins would not have access or permission to corrupt their filesystems
at the zpool/zfs level. Unless zfs is to offer a different management model, I
don't suspect we will need to differentiate the (incapacitated) ngz admins from
the gz admins.

Regards,

Craig

On Wed, May 3, 2006 3:05 pm, Eric Schrock said:
 On Wed, May 03, 2006 at 02:47:57PM -0700, eric kustarz wrote:
 Jason Schroeder wrote:

 eric kustarz wrote:
 
 The following case is about to go to PSARC.  Comments are welcome.
 
 eric
 
 To piggyback on earlier comments re: adding hostname and user:
 
 What is the need for zpool history to distinguish zfs commands that
 were executed by privileged users in non-global zones for those
 datasets under ngz ownership?
 
 I personally don't see a need to distinguish between zones.  However,
 with delegated administration, it would be nice to know who did (say)
 destroy that file system - the local root or some remote user.

 Keep in mind that one username (or uid) in a local zone is different
 from the same username in the global zone, since they can be running
 different name services.  In the simplest example, you could have an
 entry that said something like:

 root  zfs destroy tank/foo

 And if you were using datasets delegated to local zones, you wouldn't
 know if that was 'root' in the global zone or 'root' in the local zone.
 If you are going to log a user at all, you _need_ to log the zone name
 as well.  Even without usernames, it would probably be useful to know
 that a particular action was done in a particular zone.
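
 A purely hypothetical format for such an entry might be:
 
 [zone: webzone] root  zfs destroy tank/foo
 
 as opposed to the ambiguous line above.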

 Imagine a service provider with several zones delegated to different
 users, and each user has their own portion of the namespace.  At some
 point, you get a service call from a customer saying "someone deleted my
 filesystems!"  You could look at the zpool history, but without a
 zone name, you wouldn't know if it was your fault (from the global zone) or
 theirs (from the local zone).

 - Eric

 --
 Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss