Re: [zfs-discuss] Solid State Drives?

2007-01-12 Thread Darren J Moffat

Neil Perrin wrote:

We are currently working on separate log devices, such as disk and NVRAM.
This should help with both NFS and DB performance.


It also makes things interesting from the zfs-crypto viewpoint. It
means that it would allow a configuration where we don't do encryption
on the ZIL but instead put it on a different type of device. Obviously
not for everyone, and certainly not for the case where you are only using
spinning rust (especially if there is only one spindle), but
interesting anyway.


--
Darren J Moffat


Re: [zfs-discuss] Solid State Drives?

2007-01-11 Thread Richard Elling

Erik Trimble wrote:
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... In a manner similar to how we currently use
extra RAM as a cache for FS reads (and writes, to a certain extent), it
would be really nice to be able to say that an NVRAM/Flash/etc. device
is the system-wide write cache, so that calls to fsync() and the like -
which currently force a flush of the RAM-resident buffers to disk -
would return as complete after the data was written to such an SSD (even
though it might not all be written to a HD yet).
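
Concretely, the sequence I have in mind is the everyday synchronous-write
pattern - a rough sketch, with a made-up file name and sizes:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        char buf[8192];
        int fd = open("/tank/db/redo.log", O_WRONLY | O_CREAT, 0644);

        if (fd == -1) {
                perror("open");
                return (1);
        }
        memset(buf, 'x', sizeof (buf));
        if (write(fd, buf, sizeof (buf)) != (ssize_t)sizeof (buf)) {
                perror("write");
                return (1);
        }
        /* Today this blocks until the data is on the disk itself; with
         * a designated NVRAM/SSD write cache it could return as soon as
         * the data is stable on that device. */
        if (fsync(fd) == -1) {
                perror("fsync");
                return (1);
        }
        (void) close(fd);
        return (0);
}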


Thoughts?  How difficult would this be?  And problems?  (the biggest I
can see is for Flash, which, if it is being constantly written to, will
wear out relatively quickly...)


The product was called Sun PrestoServ.  It was successful for benchmarking
and such, but unsuccessful in the market because:

+ when there is a failure, your data is spread across multiple
  fault domains

+ it is not clusterable, which is often a requirement for data
  centers

+ it used a battery, so you had to deal with physical battery
  replacement and all of the associated battery problems

+ it had yet another device driver, so integration was a pain

Google for it and you'll see all sorts of historical perspective.
 -- richard



Re: [zfs-discuss] Solid State Drives?

2007-01-11 Thread Erik Trimble
On Thu, 2007-01-11 at 10:35 -0800, Richard Elling wrote:
 The product was called Sun PrestoServ.  It was successful for benchmarking
 and such, but unsuccessful in the market because:
 
   + when there is a failure, your data is spread across multiple
 fault domains
 
   + it is not clusterable, which is often a requirement for data
 centers
 
   + it used a battery, so you had to deal with physical battery
 replacement and all of the associated battery problems
 
   + it had yet another device driver, so integration was a pain
 
 Google for it and you'll see all sorts of historical perspective.
   -- richard


Yes, I remember (and used) PrestoServ. Back in the SPARCcenter 1000
days. :-)

And yes, local caching makes the system non-clusterable.  However, all
the other issues are common to a typical HW RAID controller, and many
people use host-based HW controllers just fine and don't find their
problems to be excessive.

And, honestly, I wouldn't think another driver would be needed.
Attaching an SSD or similar usually uses an existing driver (it normally
appears as a SCSI or FC drive to the OS).

-- 
Erik Trimble
Java System Support
Mailstop:  usca14-102
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] Solid State Drives?

2007-01-11 Thread Jason J. W. Williams

Hello all,

Just my two cents on the issue. The Thumper is proving to be a
terrific database server in all aspects except latency. While the
latency is acceptable, being able to add some degree of battery-backed
write cache that ZFS could use would be phenomenal.

Best Regards,
Jason

On 1/11/07, Jonathan Edwards [EMAIL PROTECTED] wrote:


On Jan 11, 2007, at 15:42, Erik Trimble wrote:

 On Thu, 2007-01-11 at 10:35 -0800, Richard Elling wrote:
 The product was called Sun PrestoServ.  It was successful for
 benchmarking and such, but unsuccessful in the market because:

  + when there is a failure, your data is spread across multiple
fault domains

  + it is not clusterable, which is often a requirement for data
centers

  + it used a battery, so you had to deal with physical battery
replacement and all of the associated battery problems

  + it had yet another device driver, so integration was a pain

 Google for it and you'll see all sorts of historical perspective.
   -- richard


 Yes, I remember (and used) PrestoServ. Back in the SPARCcenter 1000
 days. :-)

as do I .. (keep your batteries charged!! and don't panic!)

 And yes, local caching makes the system non-clusterable.

not necessarily .. I like the JavaSpaces approach to coherency, and
companies like GigaSpaces have done some pretty impressive things
with in-memory SBA (space-based architecture) databases and distributed
grid architectures .. intelligent coherency design with a good
distribution balance for local, remote, and redundant can go a long
way in improving your cache numbers.

 However, all
 the other issues are common to a typical HW RAID controller, and many
 people use host-based HW controllers just fine and don't find their
 problems to be excessive.

True given most workloads, but in general it's the coherency issues
that drastically affect throughput on shared controllers, particularly
as you add and distribute the same LUNs or data across different
control processors.  Add too many and your cache hit rates might fall
in the toilet.

.je


Re: [zfs-discuss] Solid State Drives?

2007-01-05 Thread Neil Perrin

I'm currently working on putting the ZFS intent log on separate devices,
which could include separate disks and NVRAM/solid state devices.
This would help any application using fsync/O_DSYNC - in particular
DB and NFS.  From prototyping, considerable performance improvements
have been seen.
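
For example, the pattern that stalls on disk latency today, and would
instead commit through the intent log on the fast device, is roughly
this (a trivial sketch only - the path and helper name are made up):

#include <fcntl.h>
#include <unistd.h>

/* Every write() on an O_DSYNC descriptor must reach stable storage
 * before it returns; with the intent log on an NVRAM/solid state
 * device, that requirement is satisfied by the log device rather
 * than the main pool disks. */
ssize_t
sync_append(const char *path, const void *buf, size_t len)
{
        int fd = open(path, O_WRONLY | O_APPEND | O_CREAT | O_DSYNC, 0644);
        ssize_t n;

        if (fd == -1)
                return (-1);
        n = write(fd, buf, len);        /* returns only after log commit */
        (void) close(fd);
        return (n);
}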

Neil.

Kyle McDonald wrote on 01/05/07 08:10:
I know there's been much discussion on the list lately about getting HW 
arrays to use (or not use) their caches in a way that helps ZFS the most.


Just yesterday I started seeing articles on NAND flash drives, and I
know other solid state drive technologies have been around for a while
and are often used for transaction logs or other ways of
accelerating filesystems.


If these devices become more prevalent and/or cheaper, I'm curious what
ways ZFS could be made to best take advantage of them.


One idea I had was, for each pool, to allow me to designate a mirror or
RAID-Z of these devices just for the transaction logs. Since they're
faster than normal disks, my uneducated guess is that they could boost
performance.


I suppose it doesn't eliminate the problems with the real drive (or
array) caches, though. You still need to know that the data is on the
real drives before you can wipe that transaction from the transaction
log, right?
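
My mental model of that constraint, sketched as code (purely
illustrative - these are made-up structures, not ZFS source):

#include <stdint.h>
#include <stdlib.h>

/* A log record may be discarded only once the transaction group
 * carrying the same data has been committed to the main pool devices. */
struct log_record {
        uint64_t txg;           /* transaction group holding this data */
        void *payload;
};

static void
try_retire(struct log_record *lr, uint64_t last_synced_txg)
{
        if (lr->txg <= last_synced_txg) {
                /* Data is on the real drives; safe to drop the record. */
                free(lr->payload);
                free(lr);
        }
        /* Otherwise keep it - it may be needed for replay after a crash. */
}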


Well... I'd still like to hear the experts' ideas on how this could (or
won't ever?) help ZFS out. Would changes to ZFS be required?


-Kyle




Re: [zfs-discuss] Solid State Drives?

2007-01-05 Thread Neil Perrin



Robert Milkowski wrote on 01/05/07 11:45:

Hello Neil,

Friday, January 5, 2007, 4:36:05 PM, you wrote:

NP I'm currently working on putting the ZFS intent log on separate devices,
NP which could include separate disks and NVRAM/solid state devices.
NP This would help any application using fsync/O_DSYNC - in particular
NP DB and NFS.  From prototyping, considerable performance improvements
NP have been seen.

Can you share any results from prototype testing?


I'd prefer not to just yet, as I don't want to raise expectations unduly.
When testing I was using a simple local benchmark, whereas
I'd prefer to run something more official such as TPC.
I'm also missing a few required features in the prototype which
may affect performance.

Hopefully I can provide some results soon, but even those will
be unofficial.

Neil.



Re: [zfs-discuss] Solid State Drives?

2007-01-05 Thread Jason J. W. Williams

Could this ability (a separate ZIL device), coupled with an SSD, give
something like a Thumper the write-latency benefit of a battery-backed
write cache?

Best Regards,
Jason

On 1/5/07, Neil Perrin [EMAIL PROTECTED] wrote:



Robert Milkowski wrote on 01/05/07 11:45:
 Hello Neil,

 Friday, January 5, 2007, 4:36:05 PM, you wrote:

 NP I'm currently working on putting the ZFS intent log on separate devices,
 NP which could include separate disks and NVRAM/solid state devices.
 NP This would help any application using fsync/O_DSYNC - in particular
 NP DB and NFS.  From prototyping, considerable performance improvements
 NP have been seen.

 Can you share any results from prototype testing?

I'd prefer not to just yet, as I don't want to raise expectations unduly.
When testing I was using a simple local benchmark, whereas
I'd prefer to run something more official such as TPC.
I'm also missing a few required features in the prototype which
may affect performance.

Hopefully I can provide some results soon, but even those will
be unofficial.

Neil.
