Re: [zfs-discuss] Identifying drives (SATA)

2011-02-05 Thread rwalists

On Feb 5, 2011, at 2:43 PM, David Dyer-Bennet wrote:

 Is there a clever way to figure out which drive is which?  And if I have to 
 fall back on removing a drive I think is right, and seeing if that's true, 
 what admin actions will I have to perform to get the pool back to safety?  
 (I've got backups, but it's a pain to restore of course.) (Hmmm; in 
 single-user mode, use dd to read huge chunks of one disk, and see which 
 lights come on?  Do I even need to be in single-user mode to do that?)

Obviously this depends on your lights working to some extent (the right light 
doing something when the right disk is accessed), but I've used:

dd if=/dev/rdsk/c8t3d0s0 of=/dev/null bs=4k count=10

which someone mentioned on this list.  Assuming you can actually read from the 
disk (it isn't completely dead), it should allow you to direct traffic to each 
drive individually.
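
If the suspect set is larger, a rough sketch of the same idea is to walk the drives one at a time; the device names below are placeholders, so substitute your own from 'format' or 'zpool status':

for d in c8t0d0 c8t1d0 c8t2d0 c8t3d0; do
    echo "reading $d - watch for the activity light"
    dd if=/dev/rdsk/${d}s0 of=/dev/null bs=128k count=10000
    sleep 5
done

The pause between reads gives you time to note which light was blinking before the next disk starts.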

Good luck,
Ware


Re: [zfs-discuss] mirrored drive

2010-11-29 Thread rwalists

On Nov 29, 2010, at 8:05 AM, Dick Hoogendijk wrote:

 OK, I've got a problem I can't solve by myself. I've installed Solaris 11 
 using just one drive.
 Now I want to create a mirror by attaching a second one to the rpool.
 However, the first one has NO partition 9, but the second one does. This way 
 the sizes differ if I create a partition 0 (needed because it's a boot 
 disk).
 
 How can I get the second disk to look exactly the same as the first?
 Or can't that be done?

I haven't done this on Solaris 11 Express, but this worked on OpenSolaris 
2009.06:

 prtvtoc /dev/rdsk/c5t0d0s0 | fmthard -s - /dev/rdsk/c5t1d0s0

Where the first disk is the current root and the second one is the new mirror.

This is taken from here:

http://blogs.warwick.ac.uk/chrismay/entry/opensolaris_adventure_part_1
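
For anyone following along, the rest of the procedure on an OpenSolaris-era 
system was roughly the following; the device names are examples, and the 
installgrub step is x86-specific (SPARC used installboot instead):

# copy the label from the current root disk to the new disk
prtvtoc /dev/rdsk/c5t0d0s0 | fmthard -s - /dev/rdsk/c5t1d0s0
# attach the new slice as a mirror of the existing rpool device
zpool attach rpool c5t0d0s0 c5t1d0s0
# install the boot blocks on the new disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0
# watch the resilver complete
zpool status rpool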

Good luck,
Ware


Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-29 Thread rwalists

On Jul 28, 2010, at 3:11 PM, sol wrote:

 A partial workaround was to turn off access time on the share and to mount 
 with 
 noatime,actimeo=60
 
 But that's not perfect, because when left alone the VM got into a stuck 
 state. I've never seen that state before when the VM was hosted on a local 
 disk. Hosting VMs on NFS is not working well so far...

We host a lot of VMs on NFS shares (from a 7000 series) on ESXi with no issues 
other than an occasional Ubuntu machine that would do something similar to what 
you describe.  For us it was this:

http://communities.vmware.com/thread/237699?tstart=30

Once the guest disk timeout is set to 180 seconds, the issue has been 
completely eliminated for us.  The VMware Tools shipped with the current ESXi 
(and, I believe, all 4.x versions) set this timeout properly.
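
If the tools aren't taking care of it, the manual fix inside a Linux guest is 
just the sysfs disk timeout; 'sda' here is a placeholder for whichever disks 
the guest sees:

echo 180 > /sys/block/sda/device/timeout

As I understand it, recent VMware Tools set the same value automatically at 
boot, which is why current versions make the problem go away.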

Also, EMC and NetApp have this description of using NFS shares with VMware:

http://virtualgeek.typepad.com/virtual_geek/2009/06/a-multivendor-post-to-help-our-mutual-nfs-customers-using-vmware.html

which by and large applies to any NFS server, not just their equipment.  We 
found it helpful, though nothing in it is particularly surprising.

Good luck,
Ware


Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread rwalists
On May 24, 2010, at 4:28 AM, Erik Trimble wrote:

 yes, both the X25-M (both G1 and G2) plus the X25-E have a DRAM buffer on the 
 controller, and neither has a supercapacitor (or other battery) to back it 
 up, so there is the potential for data loss (but /not/ data corruption) in a 
 power-loss scenario.
 
 Sadly, we're pretty much at the point where no current retail-available SSD 
 has battery backup for its on-controller DRAM cache (and they /all/ use 
 DRAM caches).

I haven't seen where anyone has tested this, but the MemoRight SSD (sold by 
RocketDisk in the US) seems to claim all the right things:

http://www.rocketdisk.com/vProduct.aspx?ID=1

pdf specs:

http://www.rocketdisk.com/Local/Files/Product-PdfDataSheet-1_MemoRight%20SSD%20GT%20Specification.pdf

They claim to support the cache flush command, and with respect to DRAM cache 
backup they say (p. 14/section 3.9 in that pdf):

 The MemoRight’s NSSD have an on-drive backup power system. It saves energy 
 when the power supply is applied to drive. When power-off occurring, the 
 saved energy will be released to keep the drive working for a while. The 
 saved energy ensures the data in the cache can be flushed to the nonvolatile 
 flash media, which prevents the data loss to happen.
 It will take about 5 seconds to save enough energy for discharge at lease 1 
 second. The write cache will be disabled automatically before the backup 
 power system saved enough energy.

That certainly sounds like an on-board capacitor used to flush the cache, with 
the cache disabled while the capacitor charges.  But I can't find anywhere that 
anyone has tested this with ZFS.

--Ware


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-09 Thread rwalists
On Mar 8, 2010, at 7:55 AM, Erik Trimble wrote:

 Assume your machine has died the True Death, and you are starting with new 
 disks (and, at least a similar hardware setup).
 
 I'm going to assume that you named the original snapshot 
 'rpool/ROOT/whate...@today'
 
 (1)   Boot off the OpenSolaris LiveCD
 
 
...
 
 (10)  Activate the restored BE:
   # beadm activate New
 
 
 You should now be all set.   Note:  I have not /explicitly/ tried the above - 
 I should go do that now to see what happens.  :-)

If anyone is going to implement this, much the same procedure is documented at 
Simon Breden's blog:

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

which walks through the commands for executing the backup and the restore.
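
For a rough idea of what those commands amount to (pool, snapshot and file 
names here are only examples, not the exact commands from either write-up):

# backup: snapshot the root pool recursively and send it to a file
zfs snapshot -r rpool@backup
zfs send -R rpool@backup | gzip > /backup/rpool.backup.zfs.gz

# restore: from the LiveCD, after recreating the pool
# gzcat /backup/rpool.backup.zfs.gz | zfs receive -Fdu rpool

The fiddly parts (boot blocks, the bootfs property, activating the BE) are what 
Erik's steps and the blog post cover.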

--Ware


Re: [zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?

2010-03-07 Thread rwalists
On Mar 8, 2010, at 12:05 AM, Dedhi Sujatmiko wrote:

 2. OpenSolaris (and EON) does not have proper implementation of SMART 
 monitoring. Therefore I cannot get to know the temperature of my hard disks. 
 Since they are DIY storage without chassis environment monitoring, I consider 
 this an important regression
 3. OpenSolaris (and EON) does not have proper serial number display of the 
 Seagate hard disks I am using. If I use the format to read the serial number, 
 I always miss the last character. If I read them using the hd or hdparm 
 utility, I will miss the first character

Both of these can be handled via smartctl 
(http://smartmontools.sourceforge.net/) as described here:

http://breden.org.uk/2008/05/16/home-fileserver-drive-temps/

As for the serial number, smartctl reported it accurately, at least on the 
Western Digital drives I checked.
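
For reference, the invocations involved look roughly like the lines below; the 
device path is only an example, and the -d device-type option you need depends 
on your controller (the blog post above covers the details):

# identity information, including the serial number
smartctl -d sat,12 -i /dev/rdsk/c0t0d0s0
# attribute table, including the drive temperature
smartctl -d sat,12 -A /dev/rdsk/c0t0d0s0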

Good luck,
Ware


[zfs-discuss] suggested ssd for zil

2010-02-28 Thread rwalists
If anyone has specific SSD drives they would recommend for ZIL use, would you 
mind a quick response to the list?  My understanding is I need to look for 
drives that:

1) Respect cache flush commands (this is my real question; the answer isn't 
very obvious in most cases)
2) Are fast on small writes

It seems even the smallest sizes should be sufficient.  This is for a home NAS 
where most of the write activity is iSCSI volumes hosting OS X Time Machine 
backups.  There is also a small amount of MySQL (InnoDB) data shared via NFS.

From what I can gather, workable options would be:

- Stec drives, which are used in the 7000 series and are extremely expensive

- Mtron Pro 7500 16GB SLC, which seem to respect the cache flush but aren't 
particularly fast doing it:
http://opensolaris.org/jive/thread.jspa?messageID=459872&tstart=0

- Intel X25-E with the cache turned off, which seems to behave like the Mtron

- Seagate's marketing page for their new SSD implies it has a capacitor to 
protect data in cache, as I believe the Stec does, but I don't think they are 
available at retail yet:
"Power loss data protection to ensure against data loss upon power failure"
http://www.seagate.com/www/en-us/products/servers/pulsar/pulsar/

And what won't work are:

- Intel X25-M
- Most/all of the consumer drives priced beneath the X25-M

all because they use capacitors to get write speed w/o respecting cache flush 
requests.  Is there anything that is safe to use as a ZIL, faster than the 
Mtron but more appropriate for home than a Stec?  Maybe the answer is to wait 
on Seagate, but I thought someone might have other ideas.
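
(For anyone new to this: whichever device wins, it goes in as a separate log 
vdev, along the lines of the example below, where the pool and device names 
are made up.)

zpool add tank log c9t0d0
zpool status tank    # the device then shows up under a separate 'logs' section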

Thanks,
Ware


Re: [zfs-discuss] suggested ssd for zil

2010-02-28 Thread rwalists

On Feb 28, 2010, at 11:51 PM, rwali...@washdcmail.com wrote:

 And what won't work are:
 
 - Intel X25-M
 - Most/all of the consumer drives priced beneath the X25-M
 
 all because they use capacitors to get write speed w/o respecting cache flush 
 requests. 

Sorry, meant to say they use cache to get write speed w/o respecting cache 
flush requests.

--Ware


Re: [zfs-discuss] suggested ssd for zil

2010-02-28 Thread rwalists

On Mar 1, 2010, at 12:05 AM, Daniel Carosone wrote:

 Is there anything that is safe to use as a ZIL, faster than the
 Mtron but more appropriate for home than a Stec?  
 
 ACARD ANS-9010, as mentioned several times here recently (also sold as
 hyperdrive5) 

You are right.  I saw that in a recent thread.  In my case I don't have a spare 
bay for it.  I'm similarly constrained on some of the PCI solutions that have 
either battery backup or external power.

But this seems like a good solution if someone has the space.

Thanks,
Ware


Re: [zfs-discuss] sharemgr

2009-11-25 Thread rwalists
On Nov 24, 2009, at 3:41 PM, dick hoogendijk wrote:

 I have a solution using 'zfs set sharenfs=rw,nosuid zpool', but I prefer to 
 use the sharemgr command.
 
 Then you prefer wrong. ZFS filesystems are not shared this way.
 Read up on ZFS and NFS.

It can also be done with sharemgr.  Sharing via ZFS creates a sharemgr group 
called 'zfs', but you can also share things directly via the sharemgr commands. 
It is fairly well spelled out in the manpage:

http://docs.sun.com/app/docs/doc/819-2240/sharemgr-1m?a=view

Basically you want to create a group, set the group's properties and add a 
share to the group.
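
A minimal sketch, assuming a share at /tank/export/data and a group name of my 
own invention (the NFS options themselves, the rw/nosuid bits, are set on the 
group with 'sharemgr set -P nfs -p ...' as the manpage describes):

sharemgr create -P nfs mygroup
sharemgr add-share -s /tank/export/data mygroup
sharemgr show -vp mygroup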


--Ware


Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-11 Thread rwalists

On Nov 11, 2009, at 12:01 AM, Tim Cook wrote:


On Tue, Nov 10, 2009 at 5:15 PM, Tim Cook t...@cook.ms wrote:

  One thing I'm noticing is a lot of checksum errors being generated during 
  the resilver.  Is this normal?

 Anyone?  It's up to 7.35M checksum errors and it's rebuilding extremely 
 slowly (as evidenced by the 10 hour time).  The errors are only showing on 
 the replacing-9 line, not the individual drive.


I've only replaced a drive once, but it didn't show any checksum errors during 
the resilver.  This was a 2 TB WD Green drive in a mirror pool that had started 
to show write errors.  It was attached to a SuperMicro AOC-SAT2-MV8.
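
If it were my pool, once the resilver finished I'd at least run the two 
commands below (the pool name is an example) to see whether the errors stick 
to a particular device, and then reset the counters before drawing conclusions:

zpool status -v tank
zpool clear tank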


Good luck,
Ware


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-29 Thread rwalists

On Sep 29, 2009, at 2:41 AM, Eugen Leitl wrote:


 On Mon, Sep 28, 2009 at 06:04:01PM -0400, Thomas Burgess wrote:
 
  personally i like this case:
  http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
  it's got 20 hot swap bays, and it's surprisingly well built.  For the 
  money, it's an amazing deal.
 
 You don't like http://www.supermicro.com/products/nfo/chassis_storage.cfm ?
 
 I must admit I don't have a price list of these.
 
 When running that many hard drives I would insist on redundant power 
 supplies, and server motherboards with ECC memory.  Unless it's for home 
 use, where a downtime of days or weeks is not critical.


I hadn't thought of going that way because I was looking for at least a 
somewhat pre-packaged system, but another poster pointed out how many more 
drives I could get by choosing the case and motherboard separately.  I agree: 
with this much trouble it doesn't make sense to settle for fewer drive slots 
than I can get.

I agree completely on the ECC.  It's for home use, so the power supply issue 
isn't huge (though redundancy would be a plus if possible).  My concern with 
this particular option is noise.  It will be in a closet, but one with louvered 
doors right off a room where people watch TV.  Anything particularly loud would 
be an issue.  The comments on Newegg make this sound pretty loud.  Have you 
tried one outside of a server room environment?


Thanks,
Ware