Re: [zfs-discuss] 350TB+ storage solution

2011-05-18 Thread Chris Mosetick
The drives I just bought were half packed in white foam then wrapped
 in bubble wrap.  Not all edges were protected with more than bubble
 wrap.



Same here for me. I purchased 10 x 2TB Hitachi 7200rpm SATA disks from
Newegg.com in March. The majority of each drive was protected in white foam,
but roughly the last 1/2 inch at each end of every drive was only protected
by bubble wrap. A small batch of three disks I ordered in February (testing
for the larger order) was packed similarly, and I've already had to RMA one
of those drives. Newegg is moving in the right direction, but they still have
a ways to go in the packing department. I still love their prices!

-Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Guide to COMSTAR iSCSI?

2010-12-13 Thread Chris Mosetick
I have found this post from Mike La Spina to be very detailed on this topic,
yet I could not seem to get it to work right on my first hasty attempt a
while back.  Let me know if you have success, or what adjustments you needed
to get it working.

http://blog.laspina.ca/ubiquitous/securing-comstar-and-vmware-iscsi-connections
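
For what it's worth, the usual way to restrict each LU to a single client is
COMSTAR host groups plus per-LU views. A rough, untested sketch of the idea
(the LU GUIDs and initiator IQNs below are placeholders, not real values):

# one host group per client, containing that client's initiator name
stmfadm create-hg client-a
stmfadm add-hg-member -g client-a iqn.1998-01.com.example:client-a
stmfadm create-hg client-b
stmfadm add-hg-member -g client-b iqn.1998-01.com.example:client-b

# drop any existing wide-open view for each LU, then add one view bound to one host group
stmfadm remove-view -a -l 600144F000000000000000000000AA01
stmfadm add-view -h client-a 600144F000000000000000000000AA01
stmfadm add-view -h client-b 600144F000000000000000000000AA02

With host groups in place, each initiator should only see the LUs whose views
reference its own group.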

-Chris

On Sun, Dec 12, 2010 at 12:47 AM, Martin Mundschenk 
m.mundsch...@mundschenk.de wrote:

 Hi!

 I have configured two LUs following this guide:

 http://thegreyblog.blogspot.com/2010/02/setting-up-solaris-comstar-and.html

 Now I want each LU to be available to only one distinct client on the
 network. I could not find an easy guide on how to accomplish this
 anywhere on the internet. Any hints?

 Martin



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Chris Mosetick
A word of caution on the Silicon Image 3124.  I have tested two extremely
cheap cards using the si3124 driver on b134 and OI b147.  One card was PCI,
the other PCI-X.  I found that both are unusable until the driver is updated.
Large-ish file transfers, say over 1GB, would lock up the machine and cause a
kernel panic.  Investigation revealed it was si3124.  The driver is in
serious need of an update, at least in the builds mentioned above.  It's
possible that a firmware update on the card would help, but I never had time
to explore that option.  If a device using the si3124 driver works great for
you in an L2ARC role after extensive testing, then by all means use it; I
just wanted to pass along my experience.
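
If you want to double-check whether a given card is actually bound to this
driver before trusting it, something like the following should show the
binding (just a sketch; run on the box with the card installed):

prtconf -D | grep -i si3124     # shows the device node and the driver bound to it
modinfo | grep -i si3124        # confirms the kernel module is currently loaded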

-Chris

The RevoDrive should not require a custom device driver, as it is based on
 the Silicon Image 3124 PCI-X RAID controller connected to a Pericom PCI-X to
 PCIe bridge chip (PI7C9X130).  The required driver would be si3124(7D);
 I noticed the man page states NCQ is not supported.  I found the following
 link detailing the status:

 http://opensolaris.org/jive/thread.jspa?messageID=466436

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS send/receive and locking

2010-11-03 Thread Chris Mosetick
Sorry I'm not able to provide more insight, but I thought some of the
concepts in this article might help you, as well as Mike's replication
script, which is also available on this page:
http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs

You also might want to look at InfraGeeks' auto-replicate script:

http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html

If the auto-replicate script runs without locking things up, then whatever
you were doing before was probably the problem.  This script works great for
me on b134 hosts.  One thing you can do to get faster transfer speeds is to
enable blowfish-cbc in your /etc/ssh/sshd_config and then modify the script
to use that cipher.
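
As a rough illustration of the cipher change (a sketch only; the snapshot,
pool, and host names are made up):

# in /etc/ssh/sshd_config on the receiving host, allow the cipher
# (keep whatever ciphers you already allow in the list as well)
Ciphers blowfish-cbc,aes128-ctr

# restart sshd, then have the script send over that cipher, roughly:
svcadm restart svc:/network/ssh:default
zfs send -i tank/data@prev tank/data@now | ssh -c blowfish-cbc backuphost zfs receive -F tank/data

On hardware of this era without AES acceleration, blowfish is noticeably
cheaper on the CPU than the default AES cipher, which is where the speedup
typically comes from.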

Cheers,

-Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145

2010-09-29 Thread Chris Mosetick
Hi Cindy,

I did see your first email pointing to that bug
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6538600).
Apologies for not addressing it earlier.  It is my opinion that the behavior
Mike and I (http://illumos.org/issues/217), or anyone else upgrading pools
right now, are seeing is an entirely new and different bug.  The bug you
point to, originally submitted in 2007, says it manifests itself before a
reboot.  You also say that exporting and importing clears the problem.  After
several reboots, zdb still shows the older pool version, which means that
either this is a new bug or the bug you are referencing does not clearly and
accurately describe the problem and is incomplete.

Suppose an export and import can update the pool label config on a large
storage pool; great.  How would someone go about exporting the rpool the
operating system is on?  As far as I know, it's impossible to export the
zpool the operating system is running on.  I don't think it can be done, but
I'm new, so maybe I'm missing something.

One option I have not explored that might work: booting to a live CD that has
the same or higher pool version present, then running zpool import, zpool
import -f rpool, and zpool export rpool, and then rebooting into the
operating system.  Perhaps that would update the label config / zdb output
for rpool, but I think fixing the root problem would be much more beneficial
for everyone in the long run.  Since zdb is a troubleshooting/debugging tool,
I would think it needs to be aware of the correct pool version to work
properly, and so admins know what's really going on with their pools.  The
bottom line here is that if zdb is going to be part of ZFS, it needs to
display what is currently on disk, including the label config.  If I were an
admin thinking about trusting hundreds of GBs of data to ZFS, I would want
the debugger to show me what's really on the disks.
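
For clarity, the untested live-CD sequence I have in mind looks like this:

zpool import           # from the live environment, list importable pools
zpool import -f rpool  # force-import the root pool under the live CD
zpool export rpool     # export it again, hopefully rewriting the label config
# then reboot into the installed system and re-check with zdb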

Additionally, even though zpool get version and zfs get version display the
true and updated versions, I'm not convinced that the problem is in zdb
itself, since the label config is almost certainly written by the zpool
and/or zfs commands.  Somewhere, something that is supposed to happen when
initiating a zpool upgrade is not happening, but since I know virtually
nothing of the internals of ZFS, I do not know where.

Sincerely,

-Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145

2010-09-29 Thread Chris Mosetick
Well, strangely enough, I just logged into an OS b145 machine.  Its rpool is
not mirrored, just a single disk.  I know that zdb reported zpool version 22
after at least the first three reboots following the rpool upgrade, so I
stopped checking.  zdb now reports version 27.  This machine has probably
been rebooted five or six times since the pool version upgrade.  One should
not have to reboot six times!  More mystery to this pool upgrade behavior!

-Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-15 Thread Chris Mosetick
We have two Intel X25-E 32GB SSD drives in one of our servers.  I'm using
one for ZIL and one for L2ARC, and we are having great results so far.
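
For anyone curious, attaching them looks roughly like this (the pool and
device names here are placeholders, not our actual layout):

zpool add tank log c2t0d0      # dedicated log device (ZIL / slog)
zpool add tank cache c2t1d0    # L2ARC cache device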

Cheers,

-Chris

On Wed, Sep 15, 2010 at 9:43 AM, Richard Elling rich...@nexenta.com wrote:

 On Sep 14, 2010, at 6:59 AM, Wolfraider wrote:

  We are looking into the possibility of adding dedicated ZIL and/or L2ARC
 devices to our pool. We are looking into getting 4 x 32GB Intel X25-E
 SSD drives. Would this be a good solution for slow write speeds?

 Maybe, maybe not.  Use zilstat to check whether the ZIL is actually in use
 before spending money or raising expectations.

  We are currently sharing out different slices of the pool to windows
 servers using comstar and fibrechannel. We are currently getting around
 300MB/sec performance with 70-100% disk busy.

 This seems high, is the blocksize/recordsize matched?
  -- richard

 --
 OpenStorage Summit, October 25-27, Palo Alto, CA
 http://nexenta-summit2010.eventbrite.com

 Richard Elling
 rich...@nexenta.com   +1-760-896-4422
 Enterprise class storage for everyone
 www.nexenta.com






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool upgrade and zfs upgrade behavior on b145

2010-09-13 Thread Chris Mosetick
Not sure what the best list to send this to is right now, so I have selected
a few; apologies in advance.

A couple of questions.  First, I have a physical host (call him bob) that was
just installed with b134 a few days ago.  I upgraded to b145 yesterday using
the instructions on the Illumos wiki.  The pool has been upgraded (version
27) and the zfs file systems have been upgraded (version 5).

ch...@bob:~# zpool upgrade rpool
This system is currently running ZFS pool version 27.
Pool 'rpool' is already formatted using the current version.

ch...@bob:~# zfs upgrade rpool
7 file systems upgraded

The file systems have been upgraded according to zfs get version rpool

Looks ok to me.

However, I now get an error when I run zdb -D.  I can't remember exactly
when I turned dedup on, but I moved some data onto rpool, and zpool list
shows a 1.74x ratio.

ch...@bob:~# zdb -D rpool
zdb: can't open 'rpool': No such file or directory

Also, running zdb by itself returns the expected output, but it still says my
rpool is version 22.  Is that expected?

I never ran zdb before the upgrade, since it was a clean install from the
b134 iso straight to b145.  One thing I will mention is that the hostname of
the machine was changed too (using these instructions:
http://wiki.genunix.org/wiki/index.php/Change_hostname_HOWTO).  bob used to
be eric.  I don't know if that matters, but I can't open Users and Groups
from Gnome anymore (unable to su), so something is still not right there.

Moving on, I have another fresh install of b134 from iso inside a VirtualBox
virtual machine, on a totally different physical machine.  This machine is
named weston and was upgraded to b145 using the same Illumos wiki
instructions.  Its name has never changed.  When I run the same zdb -D
command I get the expected output.

ch...@weston:~# zdb -D rpool
DDT-sha256-zap-unique: 11 entries, size 558 on disk, 744 in core
dedup = 1.00, compress = 7.51, copies = 1.00, dedup * compress / copies =
7.51

However, after the zpool and zfs upgrades on both machines, zdb still says
the rpool is version 22.  Is that expected/correct?  I added a new virtual
disk to the vm weston to see what would happen if I made a new pool on the
new disk.

ch...@weston:~# zpool create test c5t1d0

Well, the new test pool shows version 27, but rpool is still listed at 22 by
zdb.  Is this expected/correct behavior?  See the output below for the rpool
and test pool version numbers according to zdb on the host weston.


Can anyone provide any insight into what I'm seeing?  Do I need to delete my
b134 boot environments for rpool to show as version 27 in zdb?  Why does
zdb -D rpool give me "can't open" on the host bob?

Thank you in advance,

-Chris

ch...@weston:~# zdb
rpool:
    version: 22
    name: 'rpool'
    state: 0
    txg: 7254
    pool_guid: 17616386148370290153
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 17616386148370290153
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 14826633751084073618
            path: '/dev/dsk/c5t0d0s0'
            devid: 'id1,s...@sata_vbox_harddiskvbf6ff53d9-49330fdb/a'
            phys_path: '/p...@0,0/pci8086,2...@d/d...@0,0:a'
            whole_disk: 0
            metaslab_array: 23
            metaslab_shift: 28
            ashift: 9
            asize: 32172408832
            is_log: 0
            create_txg: 4
test:
    version: 27
    name: 'test'
    state: 0
    txg: 26
    pool_guid: 13455895622924169480
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 13455895622924169480
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 7436238939623596891
            path: '/dev/dsk/c5t1d0s0'
            devid: 'id1,s...@sata_vbox_harddiskvba371da65-169e72ea/a'
            phys_path: '/p...@0,0/pci8086,2...@d/d...@1,0:a'
            whole_disk: 1
            metaslab_array: 30
            metaslab_shift: 24
            ashift: 9
            asize: 3207856128
            is_log: 0
            create_txg: 4
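
One way to cross-check what is actually written to disk, independent of zdb's
pool-level view, is to read the labels straight off the device (a sketch,
reusing the rpool device path from the output above):

zdb -l /dev/dsk/c5t0d0s0 | grep version   # dump the vdev labels and show their version field
zpool get version rpool                   # compare with what the zpool command reports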
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS online device management

2010-09-13 Thread Chris Mosetick
Can anyone elaborate on the zpool split command?  I have not seen any
examples of it in use and I am very curious about it.  Say I have 12 disks in
a pool named tank: 6 in a RAIDZ2 plus another 6 in a RAIDZ2.  All is well,
and I'm not even close to maximum capacity in the pool.  Say I want to swap
out 6 of the 12 SATA disks for faster SAS disks and make a new 6-disk pool
with just the SAS disks, leaving the existing pool with the SATA disks
intact.

Can I run something like:

zpool split tank dozer c4t8d0 c4t9d0 c4t10d0 c4t11d0 c4t12d0 c4t13d0

zpool export dozer

Now, turn off the server, remove the 6 SATA disks.

Put in the 6 SAS disks.

Power on the server.

echo | format    (to get the disk IDs of the new SAS disks)

zpool create speed raidz disk1 disk2 disk3 disk4 disk5 disk6

Thanks in advance,

-Chris


On Sat, Sep 11, 2010 at 4:37 PM, besson3c j...@netmusician.org wrote:

 Ahhh, I figured you could always do that, I guess I was wrong...
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS online device management

2010-09-13 Thread Chris Mosetick
So are there now any methods to achieve the scenario I described, i.e. to
shrink a pool's size with the existing ZFS tools?  I don't see a definitive
answer listed in the old shrinking thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=8125

Thank you,

-Chris
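
P.S. As Richard points out in the reply quoted below, zpool split only
applies to mirrored pools, so it would not help with the RAIDZ2 layout I
described.  For reference, a minimal sketch of the supported (mirrored) case,
with made-up device names:

zpool create tank mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0
zpool split tank dozer      # detaches one side of each mirror into a new, exported pool "dozer"
zpool import dozer          # import the new pool to start using it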


On Mon, Sep 13, 2010 at 4:55 PM, Richard Elling rich...@nexenta.com wrote:

 On Sep 13, 2010, at 4:40 PM, Chris Mosetick wrote:

  Can anyone elaborate on the zpool split command.  I have not seen any
 examples in use am I am very curious about it.  Say I have 12 disks in a
 pool named tank.  6 in a RAIDZ2 + another 6 in a RAIDZ2. All is well, and
 I'm not even close to maximum capacity in the pool.  Say I want to swap out
 6 of the 12 SATA disks for faster SAS disks, and make a new 6 disk pool with
 just the SAS disks, leaving the existing pool with the SATA disks intact.

 zpool split only works on mirrors.

 For examples, see the section Creating a New Pool By Splitting a Mirrored
 ZFS
 Storage Pool in the ZFS Admin Guide.
  -- richard

 --
 OpenStorage Summit, October 25-27, Palo Alto, CA
 http://nexenta-summit2010.eventbrite.com

 Richard Elling
 rich...@nexenta.com   +1-760-896-4422
 Enterprise class storage for everyone
 www.nexenta.com






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss