My personal experience with this is that VMFS is kind of sucky. I had my
"allocation table" killed by a lost disk cache (the cache got dropped and never
written out) - EVERY filesystem/VM on that VMFS became completely inaccessible,
and VMware support told me this happens at least a couple of times a day, that
there's no way to back up the allocation table, and that they were working on
one (this was a couple of years ago now, though; no idea whether they ever
released it). I tried quite hard to get some disk tools to find the
filesystems within the big partition, but it didn't work out (I had hoped they
would be laid out sequentially, but it seems not).

So I've moved to NFS (a well-solved DRBD/heartbeat setup), which apparently
even a bunch of NetApp people run. I've had a lot more success with it than
with VMFS over iSCSI, and I feel safer with my well-known ext3 filesystem, but
that might just be me. A potential "benefit" is that on NFS you can
oversubscribe VM space (using sparse files), which you can't on VMFS - just be
careful you don't actually run out.
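
For what it's worth, here's a rough sketch of the kind of export and heartbeat
resource line I mean. The hostname, mount point, DRBD device/resource name and
addresses (storage-a, /srv/vmstore, /dev/drbd0, r0, 192.168.0.x) are made-up
placeholders, not my actual setup:

/etc/exports:
========
# export the DRBD-backed ext3 filesystem to the ESXi hosts; sync so writes hit
# disk before the NFS reply is sent
/srv/vmstore  192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check)

/etc/ha.d/haresources (heartbeat v1 style):
========
# preferred node, floating IP the ESXi hosts mount, the DRBD disk, its mount
# point, then the NFS server init script
storage-a IPaddr::192.168.0.50/24 drbddisk::r0 Filesystem::/dev/drbd0::/srv/vmstore::ext3 nfs-kernel-server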

Regards,
Trent

On 14/03/2010, at 6:54 AM, dbarker wrote:

> 
> I have two ESXi 4u1 servers.
> 
> Each has two large DAS SATA disks: /dev/sdb and /dev/sdc.
> 
> Each has a vm running DRBD 8.3.7 under Debian 5.0.3, serving up the large
> disks as /dev/drbd1 and /dev/drbd2. The disks are thin-provisioned to the
> DRBD virtual machines.
> 
> Each drbd vm runs iscsi-target, serving the drbd block devices as LUN1 and
> LUN2. The Type is set to blockio.
> 
> Each LUN is formatted in VMFS, and used by the various virtual machines. All
> the virtual machines on ESXi0 are on LUN1 (on /dev/sdb), and all the virtual
> machines on ESXi1 are on LUN2 (on /dev/sdc). So, I could easily change from
> Primary/Primary to Primary/Secondary, but I thought that VMFS was a
> clustered file system and would work properly with the Primary/Primary
> setup. It certainly is an elegant and convenient arrangement.
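> 
> In other words, the stack is (names as above; r0/r1 are the DRBD resources
> shown in the config below):
> 
>   ESXi0's VMs -> VMFS on LUN1 -> /dev/drbd1 (r0) -> /dev/sdb, replicated
>                  between Storage00 and Storage01
>   ESXi1's VMs -> VMFS on LUN2 -> /dev/drbd2 (r1) -> /dev/sdc, replicated
>                  between Storage00 and Storage01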
> 
> The reason I ask is that I had some strange errors today, like "Concurrent
> local write detected!", and some SCSI messages in ESXi's /var/log, including
> one of the ESXi hosts hanging, but they've all settled down and/or rolled off
> the log. I probably caused these errors by improperly resizing my disks, but
> I wanted to run this configuration past the group. I'd hate to run this way
> for a few weeks and then discover I'd shot myself in the foot.
> 
> I can easily change one drbd VM to Primary/Secondary on /dev/drbd1 and the
> other to Primary/Secondary on /dev/drbd2. The only issue is the ease of
> migration when failures occur. I like the elegance of the Primary/Primary
> solution on both LUNs but will happily change if that's not the "right" way
> to do this.
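> 
> For what it's worth, I believe the change itself would just be something
> like this on whichever node gives up Primary for a resource (after moving or
> shutting down the VMs on that LUN), e.g. for r0:
> 
>   drbdadm secondary r0    # demote this node; the peer stays Primary
> 
> plus removing "become-primary-on both;" from that resource's startup section.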
> 
> tia, Dan Barker
> 
> drbd conf:
> ========
> global { usage-count yes; }
> 
> common { protocol C;
>        handlers ... all the defaults from global.common.conf in 8.3.7 ... }
> 
>        net { allow-two-primaries; }
> 
>        syncer {
>                csums-alg md5;
>                rate 25M;
>        }
> }
> 
> resource r0 {
>  startup { become-primary-on both; }
>  device    /dev/drbd1;
>  disk      /dev/sdb;
>  meta-disk internal;
>  on Storage00 {
>    address   172.30.0.40:7789;
>  }
>  on Storage01 {
>    address   172.30.0.41:7789;
>  }
> }
> 
> resource r1 {
>  startup { become-primary-on both; }
>  device    /dev/drbd2;
>  disk      /dev/sdc;
>  meta-disk internal;
>  on Storage00 {
>    address   172.30.0.40:7790;
>  }
>  on Storage01 {
>    address   172.30.0.41:7790;
>  }
> }
> 
> ietd.conf:
> =======
> Target iqn.2010-03.com.visioncomm.Storage00:Storage00
> Lun 0 Path=/dev/drbd1,Type=blockio,ScsiSN=SPIDSK-090311-00
> Lun 1 Path=/dev/drbd2,Type=blockio,ScsiSN=SPIDSK-090312-00
> 
> 

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
