Jim Dunham wrote:
[...]

Jim, first of all: I never said that AVS is a bad product, and I never 
will. I just wonder why you act as if you had been attacked personally.

To be honest, if I were the customer who asked the original question, 
such a reaction wouldn't make me feel any safer.


>> - ZFS is not aware of AVS. On the secondary node, you'll always have to
>> force the `zpool import` due to the unnoticed metadata changes (zpool
>> in use).
> 
> This is not true. If one invokes "zpool export" on the primary node 
> while replication is still active, then a forced "zpool import" is not 
> required. This behavior is the same as with a zpool on dual-ported or 
> SAN storage, and is NOT specific to AVS.

Jim, a graceful shutdown of the primary node may be a valid disaster 
scenario in the laboratory, but it never will be in real life.
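To illustrate the point for readers following along (pool name is a 
placeholder; this is how it plays out on Solaris 10/OpenSolaris as I 
remember it):

```shell
# On the secondary node, after the primary died WITHOUT a clean "zpool export":
zpool import tank
# -> fails with something like: "cannot import 'tank': pool may be in use
#    from other system" -- so in any real failover you end up with:
zpool import -f tank
```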

> 
>> No mechanism to prevent data loss exists, e.g. zpools can be
>> imported when the replicator is *not* in logging mode.
> 
> This behavior is the same as with a zpool on dual-ported or SAN storage, 
> and is NOT specific to AVS.

And what makes you think that I said that AVS is the problem here?

And by the way, the customer doesn't care *why* there's a problem. He 
only wants to know *if* there's a problem.
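A minimal guard one could put into a failover script (pool name is 
hypothetical, and the grep against `sndradm -P` output is a sketch -- 
check sndradm(1M) for the exact status strings):

```shell
# Refuse to import the pool while SNDR is still actively replicating;
# only proceed when the set has been placed into logging mode.
if sndradm -P 2>/dev/null | grep -q "logging"; then
    zpool import -f tank
else
    echo "SNDR set is not in logging mode -- refusing to import tank" >&2
    exit 1
fi
```

Nothing like this ships out of the box; without it, nothing prevents an 
import mid-replication.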

>> - AVS is not ZFS aware.
> 
> AVS is not UFS, QFS, Oracle, Sybase aware either. This makes AVS, and 
> other host based and controller based replication services 
> multi-functional. If you desire ZFS aware functionality, use ZFS send 
> and recv.

Yes, exactly. And that's the problem, since `zfs send` and `zfs receive` 
are not a working solution in a fail-safe two-node environment. Again: 
the customer doesn't care *why* there's a problem. He only wants to know 
*if* there's a problem.
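For comparison, snapshot-based replication with send/recv looks roughly 
like this (dataset and host names are placeholders). The replication 
granularity is the snapshot interval, which is exactly why it is no 
substitute for continuous replication in a fail-safe setup:

```shell
# Initial full copy to the secondary node:
zfs snapshot tank/data@rep1
zfs send tank/data@rep1 | ssh secondary zfs recv -F tank/data

# Periodic incremental updates -- anything written between two
# snapshots is lost if the primary dies in between:
zfs snapshot tank/data@rep2
zfs send -i tank/data@rep1 tank/data@rep2 | ssh secondary zfs recv tank/data
```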

>> For instance, if ZFS resilves a mirrored disk,
>> e.g. after replacing a drive, the complete disk is sent over the network
>> to the secondary node, even though the replicated data on the secondary
>> is intact.
> 
> The complete disk IS NOT sent over the network to the secondary 
> node, only those disk blocks that are re-written by ZFS. 

Yes, you're right. But sadly, in the mentioned scenario of having 
replaced an entire drive, the entire disk *is* rewritten by ZFS.

Again: And what makes you think that I said that AVS is the problem here?
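For the record, the scenario in question (device name is hypothetical):

```shell
# Replacing a failed drive triggers a resilver, i.e. ZFS rewrites every
# allocated block on the new disk -- and the replicator forwards each of
# those writes to the secondary, although the data there is already intact.
zpool replace tank c1t3d0
zpool status tank    # shows the resilver progress
```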

>> - ZFS & AVS & X4500 leads to a bad error handling. The Zpool may not be
>> imported on the secondary node during the replication.
> 
> This behavior is the same as with a zpool on dual-ported or SAN storage, 
> and is NOT specific to AVS.

Again: And what makes you think that I said that AVS is the problem 
here? We are not on avs-discuss, Jim.

> I don't understand the relevance to AVS in the prior three paragraphs?

We are not on avs-discuss, Jim. The customer wanted to know what 
drawbacks exist in his *scenario*. Not AVS.

>> - I gave AVS a set of 6 drives just for the bitmaps (using SVM soft
>> partitions). It wasn't enough; the replication was still very slow,
>> probably because of an insane amount of head movements, and it scales
>> badly. Putting the bitmap of a drive on the drive itself (if I remember
>> correctly, this is recommended in one of the most referenced howto blog
>> articles) is a bad idea. Always use ZFS on whole disks if performance
>> and caching matter to you.
> 
> When you have the time, can you replace the "probably because of ... " 
> with some real performance numbers?

No problem. If you would please arrange a Try&Buy of two X4500 servers 
to be sent to my address, thank you.
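For reference, this is roughly how an SNDR set with separate bitmap 
volumes is enabled (syntax from memory, hosts and devices are 
placeholders -- see sndradm(1M)). The point is that the bitmap devices 
sit on different spindles than the replicated volume:

```shell
# Enable an SNDR set: data volume on c1t0d0, bitmaps on SVM soft
# partitions (d101/d102) that live on *different* physical disks.
sndradm -e thumper1 /dev/rdsk/c1t0d0s0 /dev/md/rdsk/d101 \
        thumper2 /dev/rdsk/c1t0d0s0 /dev/md/rdsk/d102 ip sync
```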


>> - AVS seems to require additional shared storage when building
>> failover clusters with 48 TB of internal storage. That may be hard to
>> explain to the customer. But I'm not 100% sure about this: I just
>> didn't find a way, and I didn't ask on a mailing list for help.
> 
> When you have the time, can you replace the "AVS seems to ... " with 
> some specific references to what you are referring to?

The installation and configuration process, and the location where AVS 
wants to store its shared database. I can tell you the details the next 
time I give it a try. Until then, please read the last sentence you 
quoted once more, thank you.

>> If you want a fail-over solution for important data, use the external
>> JBODs. Use AVS only to mirror complete clusters; don't use it to
>> replicate single boxes with local drives. And, in case OpenSolaris is
>> not an option for you due to your company policies or support contracts,
>> building a real cluster is also A LOT cheaper.
> 
> You are offering up these position statements based on what?

My outline agreements, my support contracts, partner web desk and 
finally my experience with projects in high availability scenarios with 
tens of thousands of servers.


Jim, it's okay. I know that you're a project leader at Sun Microsystems 
and that AVS is your main concern. But if there's one thing I cannot 
stand, it's getting stroppy replies from someone who should know better, 
and who should have realized that he's acting publicly, in front of the 
people who finance his income, instead of trying to start a flame war. 
From now on, I leave the rest to you, because I earn my living with 
products of Sun Microsystems, too, and I don't want to damage either Sun 
or this mailing list.


-- 

Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA

Tel. +49-721-91374-3963
[EMAIL PROTECTED] - http://web.de/

1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe

Amtsgericht Montabaur HRB 6484

Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Thomas 
Gottschlich, Matthias Greve, Robert Hoffmann, Markus Huhn, Oliver Mauss, 
Achim Weiss
Aufsichtsratsvorsitzender: Michael Scheeren
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss