Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Wilkinson, Alex

On Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote:

On Thu, 30 Apr 2009, Wilkinson, Alex wrote:

 I currently have a single 17TB MetaLUN that I am about to present to an
 OpenSolaris initiator and it will obviously be ZFS. However, I am constantly
 reading that presenting a JBOD and using ZFS to manage the RAID is best
 practice. I'm not really sure why. And isn't that a waste of a high-performing
 RAID array (EMC)?

The JBOD advantage is that ZFS can then schedule I/O for the disks,
and there is less chance of an unrecoverable pool, since ZFS is assured
of laying out redundant data on redundant hardware and ZFS uses more
robust error detection than the firmware on any array.  When using
mirrors there is considerable advantage since writes and reads can be
concurrent.

That said, your EMC hardware likely offers much nicer interfaces for
indicating and replacing bad disk drives.  With the ZFS JBOD approach
you have to back-track from what ZFS tells you (a Solaris device ID)
and figure out which physical drive is not behaving correctly.  EMC
tech support may not be very helpful if ZFS says there is something
wrong but the RAID array says there is not.  Sometimes there is value
in taking advantage of what you paid for.

So, shall I forget ZFS and use UFS?

 -aW

IMPORTANT: This email remains the property of the Australian Defence 
Organisation and is subject to the jurisdiction of section 70 of the CRIMES ACT 
1914.  If you have received this email in error, you are requested to contact 
the sender and delete the email.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Dale Ghent


On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:



   On Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote:


On Thu, 30 Apr 2009, Wilkinson, Alex wrote:


I currently have a single 17TB MetaLUN that I am about to present to an
OpenSolaris initiator and it will obviously be ZFS. However, I am constantly
reading that presenting a JBOD and using ZFS to manage the RAID is best
practice. I'm not really sure why. And isn't that a waste of a high-performing
RAID array (EMC)?


The JBOD advantage is that ZFS can then schedule I/O for the disks,
and there is less chance of an unrecoverable pool, since ZFS is assured
of laying out redundant data on redundant hardware and ZFS uses more
robust error detection than the firmware on any array.  When using
mirrors there is considerable advantage since writes and reads can be
concurrent.

That said, your EMC hardware likely offers much nicer interfaces for
indicating and replacing bad disk drives.  With the ZFS JBOD approach
you have to back-track from what ZFS tells you (a Solaris device ID)
and figure out which physical drive is not behaving correctly.  EMC
tech support may not be very helpful if ZFS says there is something
wrong but the RAID array says there is not.  Sometimes there is value
in taking advantage of what you paid for.


So, shall I forget ZFS and use UFS?


Not at all. Just export lots of LUNs from your EMC to get the IO  
scheduling win, not one giant one, and configure the zpool as a stripe.
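
A minimal sketch of what Dale describes, assuming the array exports four
smaller LUNs that show up on the host as the hypothetical devices c2t0d0
through c2t3d0 (names illustrative only); top-level vdevs listed without a
mirror or raidz keyword form a plain stripe:

# zpool create emcpool c2t0d0 c2t1d0 c2t2d0 c2t3d0
# zpool status emcpool
# zpool iostat -v emcpool 5

zpool iostat -v then shows how the I/O spreads across the four LUNs.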


/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Ian Collins

Dale Ghent wrote:


On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:


So, shall I forget ZFS and use UFS?


Not at all. Just export lots of LUNs from your EMC to get the IO 
scheduling win, not one giant one, and configure the zpool as a stripe.


What, no redundancy?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Data loss bug - sidelined??

2009-05-01 Thread Roch Bourbonnais


On 6 Feb 2009, at 20:54, Ross Smith wrote:


Something to do with cache was my first thought.  It seems to be able
to read and write from the cache quite happily for some time,
regardless of whether the pool is live.

If you're reading or writing large amounts of data, zfs starts
experiencing IO faults and offlines the pool pretty quickly.  If
you're just working with small datasets, or viewing files that you've
recently opened, it seems you can stretch it out for quite a while.

But yes, it seems that it doesn't enter failmode until the cache is
full.  I would expect it to hit this within 5 seconds (since I believe
that is how often the cache should be writing).



Note that on a lightly loaded system, it's more like 30 seconds these days.

-r



On Fri, Feb 6, 2009 at 7:04 PM, Brent Jones br...@servuhome.net wrote:
On Fri, Feb 6, 2009 at 10:50 AM, Ross Smith myxi...@googlemail.com wrote:

I can check on Monday, but the system will probably panic... which
doesn't really help :-)

Am I right in thinking failmode=wait is still the default?  If so,
that should be how it's set as this testing was done on a clean
install of snv_106.  From what I've seen, I don't think this is a
problem with the zfs failmode.  It's more of an issue of what happens
in the period *before* zfs realises there's a problem and applies the
failmode.

This time there was just a window of a couple of minutes while
commands would continue.  In the past I've managed to stretch it out
to hours.

To me the biggest problems are:
- ZFS accepting writes that don't happen (from both before and after
the drive is removed)
- No logging or warning of this in zpool status

I appreciate that if you're using cache, some data loss is pretty much
inevitable when a pool fails, but that should be a few seconds worth
of data at worst, not minutes or hours worth.

Also, if a pool fails completely and there's data in the cache that
hasn't been committed to disk, it would be great if Solaris could
respond by:

- immediately dumping the cache to any (all?) working storage
- prompting the user to fix the pool, or save the cache before
powering down the system

Ross


On Fri, Feb 6, 2009 at 5:49 PM, Richard Elling richard.ell...@gmail.com wrote:

Ross, this is a pretty good description of what I would expect when
failmode=continue. What happens when failmode=panic?
-- richard
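
For anyone repeating Richard's experiment: failmode is a per-pool property
(wait is the default; continue and panic are the alternatives).  A quick
sketch, using Ross's usbtest pool as the example:

# zpool get failmode usbtest
# zpool set failmode=panic usbtest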


Ross wrote:


Ok, it's still happening in snv_106:

I plugged a USB drive into a freshly installed system, and created a
single disk zpool on it:
# zpool create usbtest c1t0d0

I opened the (nautilus?) file manager in gnome, and copied the /etc/X11
folder to it.  I then copied the /etc/apache folder to it, and at 4:05pm,
disconnected the drive.

At this point there are *no* warnings on screen, or any indication that
there is a problem.  To check that the pool was still working, I created
duplicates of the two folders on that drive.  That worked without any
errors, although the drive was physically removed.

4:07pm
I ran zpool status, the pool is actually showing as unavailable, so at
least that has happened faster than my last test.

The folder is still open in gnome, however any attempt to copy files to or
from it just hangs the file transfer operation window.

4:09pm
/usbtest is still visible in gnome
Also, I can still open a console and use the folder:

# cd usbtest
# ls
X11  X11 (copy)  apache  apache (copy)

I also tried:
# mv X11 X11-test

That hung, but I saw the X11 folder disappear from the graphical file
manager, so the system still believes something is working with this pool.


The main GUI is actually a little messed up now.  The gnome file manager
window looking at the /usbtest folder has hung.  Also, right-clicking the
desktop to open a new terminal hangs, leaving the right-click menu on
screen.

The main menu still works though, and I can still open a new terminal.


4:19pm
Commands such as ls are finally hanging on the pool.

At this point I tried to reboot, but it appears that isn't working.  I
used system monitor to kill everything I had running and tried again, but
that didn't help.

I had to physically power off the system to reboot.

After the reboot, as expected, /usbtest still exists (even though the
drive is disconnected).  I removed that folder and connected the drive.


ZFS detects the insertion and automounts the drive, but I find that
although the pool is showing as online and the filesystem shows as mounted
at /usbtest, the /usbtest directory doesn't exist.

I had to export and import the pool to get it available, but as expected,
I've lost data:
# cd usbtest
# ls
X11

even worse, zfs is completely unaware of this:
# zpool status -v usbtest
pool: usbtest
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        usbtest     ONLINE       0     0     0
          c1t0d0    ONLINE       0     0     0

errors: No known data errors


So in 

Re: [zfs-discuss] ? ZFS encryption with root pool

2009-05-01 Thread Darren J Moffat

Ulrich Graef wrote:

Regarding ZFS encryption:

Will it be possible to have an encrypted root pool?


We don't encrypt pools, we encrypt datasets.  This is the same as what 
is done for compression.
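
As a reminder of how the compression analogy works in practice, compression
is already a per-dataset property rather than a pool-wide one; a small
sketch with hypothetical pool and dataset names:

# zfs create tank/docs
# zfs set compression=on tank/docs
# zfs get compression tank/docs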


It will be possible in the initial integration to have encrypted
datasets in the root pool.  However, the bootfs dataset cannot be
encrypted, nor can /var or /usr if you have split those off into separate
datasets.



Integration with TPM?


Eventually, yes.  Initially via PKCS#11, and eventually using it to store
the key for an encrypted bootfs.  However, neither of these is in scope
for the first integration of the ZFS Crypto project.  They haven't been
dropped; they were never planned for the initial integration.


TPM support in OpenSolaris is quite new (it won't be in the 2009.06
release); we don't yet have TPM support in GRUB, and we don't yet have
SPARC TPM support either.  To get an encrypted bootfs dataset we also need
to modify GRUB to provide read-only support for encrypted datasets.
Doing this for SPARC is much harder - and might require OBP updates.


--
Darren J Moffat

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Brian Hechinger
On Fri, May 01, 2009 at 09:52:54AM -0400, Dale Ghent wrote:
 
 EMC. It's where data lives.

I thought it was EMC. It's where data goes to die.  :-D

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Darren J Moffat

Dale Ghent wrote:


On May 1, 2009, at 4:01 AM, Ian Collins wrote:


Dale Ghent wrote:


On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:


So, shall I forget ZFS and use UFS?


Not at all. Just export lots of LUNs from your EMC to get the IO 
scheduling win, not one giant one, and configure the zpool as a stripe.


What, no redundancy?


Leave that up to the array he's getting the LUNs from.

EMC. It's where data lives.


Not if you want ZFS to actually be able to recover from checksum-detected
failures.  ZFS must be in control of the redundancy, i.e. a mirror, raidz
or raidz2.  If ZFS is just given one or more LUNs in a stripe then it is
unlikely to be able to recover from data corruption.  It might be able to
recover metadata, because metadata is always stored with at least copies=2,
but that is best-effort.
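
To make that concrete, a sketch (with hypothetical device and pool names) of
a pool where ZFS owns the redundancy; a block that fails its checksum can
then be repaired from the other half of the mirror, and a scrub reports what
was fixed:

# zpool create emcmirror mirror c2t0d0 c2t1d0
# zpool scrub emcmirror
# zpool status -v emcmirror

On a pool built from a single LUN, setting copies=2 on selected datasets
(e.g. zfs set copies=2 somepool/data, names hypothetical) duplicates user
data blocks as well, which helps against isolated corruption but not
against losing the LUN itself.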


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Richard Elling

Wilkinson, Alex wrote:

So, shall I forget ZFS and use UFS?


I think the writing is on the wall, right next to Romani ite domum :-)
Today, laptops have 500 GByte drives, and desktops have 1.5 TByte drives.
UFS really does not work well with the SMI label and its 1 TByte limitation.
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sub-divided disks and ZFS

2009-05-01 Thread Bob Friesenhahn
This morning as I was reading USENIX conference summaries, which
suggested that maybe SATA/SAS is not an optimum interface for SSDs,
it came to mind that some out-of-the-box thinking is needed for hard
drives as well.  Hard drive storage densities have been increasing
dramatically, so that the latest SATA drives are measured in terabytes,
just as they were measured in gigabytes some years ago.


A problem with huge hard drives is that the resilver times increase
with drive size.  Failure of hard drives with sizes in the terabyte
range leads to a long wait.


Hard drives are comprised of multiple platters, with typically an
independently navigated head on each side.  Due to a mix of hardware
and firmware, these disparate platters and heads are exposed as a
simple logical linear device comprised of blocks.  If one side of a
platter, or a drive head, fails, then the whole drive fails.


My understanding is that most drives stripe logical blocks across the
various platters such that the lower block addresses are on the outer
edge of the disks, to achieve the fastest I/O transfer rate.  This
approach is great for large linear writes, but not so great for random
I/O, for when data becomes spread across the disk, or for when the disk
becomes almost full.


The thought I had this morning is that perhaps the firmware on the
disk drive could be updated to create a logical disk drive appearance
for each drive head.  Any bad-block management (if enabled) would be
done using the same platter side.  With this approach a single
physical drive could appear as two, four, or eight logical drives.


ZFS is really good about scheduling I/O across many drives.  Provided
that care is taken to ensure that redundant data is appropriately
distributed, it seems like subdividing the drives like this would
allow ZFS to offer considerably improved performance, and the resilver
time of a logical drive would be reduced since it is smaller.  If a
drive head fails, then that logical drive could be marked permanently
out of service, but the whole drive would not need to be immediately
consigned to the dumpster.
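
Purely as an illustration of distributing the redundancy appropriately,
suppose two such physical drives each exposed two logical drives, appearing
(hypothetically) as c3t0d0/c3t0d1 and c3t1d0/c3t1d1.  Each mirror pair would
then span the two physical drives rather than two heads of the same drive:

# zpool create tank mirror c3t0d0 c3t1d0 mirror c3t0d1 c3t1d1

A failed head would then degrade one mirror vdev instead of taking out both
halves of a pair.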


Does anyone have thoughts on the viability of this approach?  Can 
existing drives be effectively subdivided like this by simply updating 
drive firmware?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sub-divided disks and ZFS

2009-05-01 Thread Eric D. Mudama

On Fri, May  1 at 11:44, Bob Friesenhahn wrote:
Hard drives are comprised of multiple platters, with typically an  
independently navigated head on each side.


This is a gap in your assumptions I believe.

The headstack is a single physical entity, so all heads move in unison
to the same position on all surfaces at the same time.

Additionally, hard drives typically have a single channel, meaning
only one head can be active at a time.

With the nature of embedded position information on the same surface
that contains user data, they haven't come up with a practical design
for doing multiple concurrent reads from different places.  At least
one vendor (Conner?) tried to do a 2-actuator disk drive, and it was a
mechanical resonance nightmare for the servo systems.


I think that what you're looking for, however, is already happening,
with server farms moving to multiple 2.5" drives from the larger 3.5"
drives.  Even on SATA drives, with NCQ the rotational speed doesn't
matter as much for overall throughput, so there are a growing number
of server applications that will be utilizing traditional laptop
form-factor devices to increase the spindle:capacity ratio without
blowing out their space budget.  SAS and SATA are both shipping
greater and greater volumes of SFF devices.

For the budget-minded, a 2U server with a bunch of mirrored-pair 2.5"
laptop drives is a nice platform, since you can fit 8-12 spindles in
that box.  The storage per unit volume is basically identical, just
that you get 2-4x the spindle count.

--eric

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Miles Nordin
 sl == Scott Lawson scott.law...@manukau.ac.nz writes:
 wa == Wilkinson, Alex alex.wilkin...@dsto.defence.gov.au writes:
 dg == Dale Ghent da...@elemental.org writes:
 djm == Darren J Moffat darr...@opensolaris.org writes:

sl Specifically I am talking of ZFS snapshots, rollbacks,
sl cloning, clone promotion,

[...]

sl Of course to take maximum advantage of ZFS in full, then as
sl everyone has mentioned it is a good idea to let ZFS manage the
sl underlying raw disks if possible.

okay, but these two feature groups are completely orthogonal.  You can
get the ZFS revision tree which helped you so much, and all the other
features you mentioned, with a single-LUN zpool.

wa So, shall I forget ZFS and use UFS ?

Naturally here you will find mostly people who have chosen to use ZFS,
so I think you will have to think on your own rather than taking a
poll of the ZFS list.  

Myself, I use ZFS.  I would probably use it on a single-LUN SAN pool,
but only if I had a backup system onto a second zpool, and iff I could
do a restore/cutover really quickly if the primary zpool became
corrupt.  Some people have zpools that take days to restore, and in
that case I would not do it---I'd want direct-attached storage,
restore-by-cutover, or at the very least zpool-level redundancy.  I'm
using ZFS on a SAN right now, but my SAN is just Linux iSCSI targets,
and it is exporting many JBOD LUN's with zpool-level redundancy so I'm
less at risk for the single-LUN lost pool problems than you'd be with
single-lun EMC.  And I have a full backup onto another zpool, on a
machine capable enough to assume the role of the master, albeit not
automatically.

For a lighter filesystem I'm looking forward to the liberation of QFS,
too.  And in the future I think Solaris plans to offer redundancy
options above the filesystem level, like pNFS and Lustre, which may
end up being the ultimate win because of the way they can move the
storage mesh onto a big network switch, rather than what we have with
ZFS where it's a couple bonded gigabit ethernet cards and a single
PCIe backplane.  Not all of ZFS's features will remain useful in such
a world.

However I don't think there is ANY situation in which you should run
UFS over a zvol (which is one of the things you mentioned).  That's
only interesting for debugging or performance comparison (meaning it
should always perform worse, or else there's a bug).  If you read the
replies you got more carefully you'll find doing that addresses none
of the concerns people raised.

dg Not at all. Just export lots of LUNs from your EMC to get the
dg IO scheduling win, not one giant one, and configure the zpool
dg as a stripe.

I've never heard of using multiple-LUN stripes for storage QoS before.
Have you actually measured some improvement in this configuration over
a single LUN?  If so that's interesting.

But it's important to understand there's no difference between
multiple LUN stripes and a single big LUN w.r.t. reliability, as far
as we know to date.  The advice I've seen here to use multiple LUN's
over SAN vendor storage is, until now, not for QoS but for one of two
reasons:

  * availability.  a zpool mirror of LUNs on physically distant, or at
least separate, storage vendor gear.

  * avoid the lost-zpool problem when there are SAN reboots or storage
fabric disruptions without a host reboot.

   djm Not if you want ZFS to actually be able to recover from
   djm checksum detected failures.

while we agree recovering from checksum failures is an advantage of
zpool-level redundancy, I don't think it dominates the actual
failures observed by people using SAN's.  The lost-my-whole-zpool
failure mode predominates, and in the two or three cases when it was
examined enough to recover the zpool, it didn't look like a checksum
problem.  It looked like either ZFS bugs or lost writes, or one
leading to the other.  And having zpool-level redundancy may happen to
make this failure mode much less common, but it won't eliminate it,
especially since we still haven't tracked down the root cause.

Also we need to point out there *is* an availability advantage to
letting the SAN manage a layer of redundancy, because SAN's are much
better at dealing with failing disks without crashing/slowing down
than ZFS, so far.

I've never heard of anyone actually exporting JBOD from EMC yet.  Is
someone actually doing this?  So far I've heard of people burning huge
$$ of disk by exporting two RAID LUN's from the SAN and then
mirroring them with zpool.

   djm If ZFS is just given 1 or more LUNs in a stripe then it is
   djm unlikely to be able to recover from data corruption, it might
   djm be able to recover metadata because it is always stored with
   djm at least copies=2 but that is best efforts.

okay, fine, nice feature.  But this failure is not actually happening,
based on reports to the list.  It's redundancy in space, while reports
we've seen from SAN's show what's really needed 

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Torrey McMahon

On 5/1/2009 2:01 PM, Miles Nordin wrote:

I've never heard of using multiple-LUN stripes for storage QoS before.
Have you actually measured some improvement in this configuration over
a single LUN?  If so that's interesting.


Because of the way queuing works in the OS and in most array controllers,
you can get better performance in some workloads if you create more LUNs
from the underlying RAID set.
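
A hedged way to see this on Solaris is to watch the per-device queue columns
while running the same workload against a single-LUN pool and a multi-LUN
pool (the interval and pool name below are illustrative):

# iostat -xnz 5
# zpool iostat -v emcpool 5

The wait and actv columns in iostat show how many I/Os are queued and active
per LUN; with several LUNs the outstanding I/Os spread across several device
queues instead of piling up behind one.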

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sub-divided disks and ZFS

2009-05-01 Thread Miles Nordin
 edm == Eric D Mudama edmud...@bounceswoosh.org writes:

  Hard drives are comprised of multiple platters, with typically
  an independently navigated head on each side.

   edm This is a gap in your assumptions I believe.

   edm The headstack is a single physical entity, so all heads move
   edm in unison to the same position on all surfaces at the same
   edm time.

yes but AIUI switching heads requires resettling into the new track.
The cylinders are not really cylindrical, just because of wear or
temperature or whatever, so when switching heads the ``channel'' has
to use data from the head as part of a servo loop to settle on the
other surface's track.

I guess the rules do keep changing though.

   edm I think that what you're looking for, however, is already
   edm happening, with server farms moving to multiple 2.5 drives

yeah but you're reading him wrong.  He is saying a failed drive may
still be useful if you just avoid the one failed head.

The problem currently is the LBA's are laced through each cylinder,
which is worth doing so that things like short-stroking make sense to
reduce head movement.  If you re-swizzled the LBA's so that instead
they filled each side of each platter in turn, like a dual-layer DVD,
it wouldn't change sequential throughput at all, and would have the
benefit that ZFS's existing tendency to put redundant metadata copies
far apart in LBA would end up getting them on different heads, which
actually *is* helpful given known failure modes tend to be head
crashes, the head falling off, and so on.

I think the idea is doomed firstly because these days when a single
head goes bad, the drive firmware, host adapter, driver, and even the
zfs maintenance commands, all the way up the storage stack to the
sysadmin's keyboard, all shit their pants and become useless.  You
have to find the bad drive, remove it, then move on.

Secondly I'm not sure I buy the USENIX claim that you can limp along
less one head.  The last failed drive I took apart was indeed failed
on just one head, but it had scraped all the rust off the platter
(down to glass!  it was really glass!), and the inside of the thing
was filled with microscopic grey facepaint.  It had slathered the air
filtering pillow and coated all kinds of other surfaces.  so...I would
expect the other recording surfaces were not doing too well either,
but I could be wrong.  It does match experience, though, of drives
going from partly-failed to completely-failed in a day or a week.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD? Or Single MetaLUN?)

2009-05-01 Thread Erik Trimble
Has the issue with disappearing single-LUN zpools causing corruption 
been fixed?


I'd have to look up the bug, but I got bitten by this last year about 
this time:


Config:

single LUN export from array to host, attached via FC.

Scenario:

(1) array is turned off while host is alive, but while the zpool is idle (no
writes/reads occurring).

(2) host is shutdown
(3) array is turned on
(4) host is turned on
(5) host reports zpool is corrupted, refuses to import it, kernel 
panics, and goes into a reset loop.

(6) cannot import zpool on another system, zpool completely hosed.
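
For reference, step (6) on the second system usually amounts to something
like the following (pool name hypothetical); -f forces the import even
though the pool still looks in use by the original host:

# zpool import
# zpool import -f sanpool

In the scenario above even this failed, which is what made the pool a total
loss.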


Now, IIRC, the perpetual panic-and-reboot thing got fixed, but not the
underlying cause, which was that zfs expected to be able to periodically
write/read metadata from the zpool, and the disappearance of the single
underlying LUN caused the zpool to be declared corrupted and dead, even
though no data was actually bad.  The bad part of this is that the
scenario is entirely likely to happen if a bad HBA or switch causes the
disappearance of the LUN, rather than the array itself going bad.


I _still_ don't do single-LUN non-redundant zpools because of this. Did 
it get fixed, or is this still an issue?


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sub-divided disks and ZFS

2009-05-01 Thread Bob Friesenhahn

On Fri, 1 May 2009, Eric D. Mudama wrote:


On Fri, May  1 at 11:44, Bob Friesenhahn wrote:
Hard drives are comprised of multiple platters, with typically an 
independently navigated head on each side.


This is a gap in your assumptions I believe.

The headstack is a single physical entity, so all heads move in unison
to the same position on all surfaces at the same time.


Ahhh.  I see.  That would explain why the idea has not been explored 
already. :-)



I think that what you're looking for, however, is already happening,
with server farms moving to multiple 2.5" drives from the larger 3.5"
drives.  Even on SATA drives, with NCQ the rotational speed doesn't


Yes.  I was hoping to hasten things along with a firmware/software 
update rather than forklift replacement.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sub-divided disks and ZFS

2009-05-01 Thread Eric D. Mudama

On Fri, May  1 at 14:19, Miles Nordin wrote:

Secondly I'm not sure I buy the USENIX claim that you can limp along
less one head.  The last failed drive I took apart was indeed failed
on just one head, but it had scraped all the rust off the platter
(down to glass!  it was really glass!), and the inside of the thing
was filled with microscopic grey facepaint.  It had slathered the air
filtering pillow and coated all kinds of other surfaces.  so...I would
expect the other recording surfaces were not doing too well either,
but I could be wrong.  It does match experience, though, of drives
going from partly-failed to completely-failed in a day or a week.


Your point here is 100% accurate.

Any physical damage inside the drive, even if initially constrained to
a single head, quickly becomes a huge problem for everything inside
the drive.

Once you're looking for physically isolated heads and platters, you might
as well just buy multiple smaller drives.


--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS in Solaris 10 update 7

2009-05-01 Thread Ian Collins

Is there a published list of updates to ZFS for Solaris 10 update 7?

I can't find anything specific in the release notes.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in Solaris 10 update 7

2009-05-01 Thread Cindy Swearingen
Hi Ian,

Other than bug fixes, the only notable feature in the Solaris 10 5/09
release is that Solaris Live Upgrade supports additional zone configurations.

You can read about these configurations here:

http://docs.sun.com/app/docs/doc/819-5461/gigek?l=en&a=view

I hope someone else from the team can comment on other improvements
through bug fixes.

Cindy

- Original Message -
From: Ian Collins i...@ianshome.com
Date: Friday, May 1, 2009 4:43 pm
Subject: [zfs-discuss] ZFS in Solaris 10 update 7
To: zfs-discuss@opensolaris.org

 Is there a published list of updates to ZFS for Solaris 10 update 7?
 
 I can't find anything specific in the release notes.
 
 -- 
 Ian.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss