On Jan 11, 2007, at 15:42, Erik Trimble wrote:

On Thu, 2007-01-11 at 10:35 -0800, Richard Elling wrote:
The product was called Sun PrestoServe. It was successful for benchmarking
and such, but unsuccessful in the market because:

        + when there is a failure, your data is spread across multiple
          fault domains

        + it is not clusterable, which is often a requirement for data
          centers

        + it used a battery, so you had to deal with physical battery
          replacement and all of the associated battery problems

        + it had yet another device driver, so integration was a pain

Google for it and you'll see all sorts of historical perspective.
  -- richard


Yes, I remember (and used) PrestoServe. Back in the SPARCcenter 1000
days. :-)

as do i .. (keep your batteries charged!! and don't panic!)

And yes, local caching makes the system non-clusterable.

not necessarily .. i like the javaspaces approach to coherency, and companies like gigaspaces have done some pretty impressive things with in-memory SBA (space-based architecture) databases and distributed grid architectures .. an intelligent coherency design with a good balance of local, remote, and redundant copies can go a long way toward improving your cache numbers.
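
to make that concrete, here's a rough sketch against the standard net.jini JavaSpaces interfaces .. the CacheBlock entry, the "lun0"/offset naming, and the way you'd get hold of the space are invented for illustration, not any vendor's actual product API:

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// Sketch: cached blocks live as entries in a shared space, so every node
// reads against the same copy instead of keeping private, possibly stale ones.
public class SpaceCacheSketch {

    // JavaSpaces entries are plain objects with public fields and a public
    // no-arg constructor; null fields act as wildcards when matching.
    public static class CacheBlock implements Entry {
        public String lun;     // invented key: which LUN the block belongs to
        public Long   offset;  // block offset within that LUN
        public byte[] data;    // block contents

        public CacheBlock() {}
        public CacheBlock(String lun, Long offset, byte[] data) {
            this.lun = lun; this.offset = offset; this.data = data;
        }
    }

    // how you obtain the JavaSpace (Jini lookup, vendor URL, ...) is
    // deployment-specific and deliberately left out here
    static void demo(JavaSpace space) throws Exception {
        // publish one copy of the block into the shared space
        space.write(new CacheBlock("lun0", 4096L, new byte[8192]),
                    null, Lease.FOREVER);

        // any node can match it with a template; take() instead of read()
        // removes the entry, which is how a writer invalidates stale readers
        CacheBlock template = new CacheBlock("lun0", 4096L, null);
        CacheBlock hit = (CacheBlock) space.read(template, null, 1000 /* ms */);
        System.out.println(hit != null ? "served from space" : "miss, go to disk");
    }
}

the point being that a read anywhere in the cluster resolves against the same copy, so the local caches stay honest while the data is still distributed.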

However, all the other issues are common to a typical HW RAID controller, and many
people use host-based HW controllers just fine and don't find their
problems to be excessive.

True for most workloads, but in general it's the coherency issues that drastically affect throughput on shared controllers, particularly as you add controllers and distribute the same LUNs or data across different control processors. Add too many and your cache hit rates might fall in the toilet.
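
as a back-of-the-envelope illustration .. a toy model, not any particular controller: uniform random reads, random routing across controllers, per-controller LRU caches, every parameter invented .. splitting the same cache budget across control processors that don't share state drops the hit rate roughly in proportion to the split:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Toy model of the effect above: the same LUN reached through several
// control processors whose caches are not coherent, so each one caches
// blocks independently and a request often lands on a processor that
// has never seen that block.
public class SplitCacheSim {

    // simple LRU of block numbers, capacity measured in blocks
    static LinkedHashMap<Long, Boolean> lru(final int capacity) {
        return new LinkedHashMap<Long, Boolean>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Long, Boolean> eldest) {
                return size() > capacity;
            }
        };
    }

    // hit rate when uniform random reads over 'workingSet' blocks are
    // routed at random across 'controllers' caches of 'perController' blocks
    static double hitRate(int controllers, int perController,
                          long workingSet, int accesses) {
        Random rnd = new Random(42);
        @SuppressWarnings("unchecked")
        LinkedHashMap<Long, Boolean>[] caches = new LinkedHashMap[controllers];
        for (int i = 0; i < controllers; i++) caches[i] = lru(perController);

        int hits = 0;
        for (int i = 0; i < accesses; i++) {
            long block = (long) (rnd.nextDouble() * workingSet);
            int ctrl = rnd.nextInt(controllers);   // no affinity: any path to the LUN
            if (caches[ctrl].get(block) != null) hits++;
            else caches[ctrl].put(block, Boolean.TRUE);
        }
        return hits / (double) accesses;
    }

    public static void main(String[] args) {
        // same total cache (4096 blocks), whole vs. split four ways
        System.out.printf("1 controller,  4096-block cache: %.2f%n",
                hitRate(1, 4096, 8192, 200_000));
        System.out.printf("4 controllers, 1024 blocks each: %.2f%n",
                hitRate(4, 1024, 8192, 200_000));
    }
}

with these numbers the single cache settles around a 50% hit rate while the four-way split lands near 12-13%, since any given request only ever sees a quarter of the total cache.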

.je
