VMDK is not supported; see http://kb.vmware.com/kb/2032940.
It can cause a GPFS deadlock due to hung I/O.

Thanks,
Sanjay Gandhi
GPFS FVT
IBM, Beaverton
Phone/FAX: 503-578-4141  T/L 775-4141
[email protected]

----------------------------------------------------------------------

Date: Tue, 25 Oct 2016 19:34:29 +0000
From: "[email protected]" <[email protected]>
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Virtualized Spectrum Scale

This is interesting, since the Spectrum Scale FAQ leads me to believe that I have some options here. Does "GPFS nodes with no direct disk access" still mean RDMs?

[inline image image001.png scrubbed from the archive]

From: Luis Bolinches <[email protected]>
Date: Tuesday, October 25, 2016 at 1:58 PM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Virtualized Spectrum Scale

Hi

You must use RDM; otherwise it is not supported. SCSI commands are the reason. Furthermore, on some versions I managed to crash the ESXi host as well.

--
Cheers

On 25 Oct 2016, at 19.46, "[email protected]" <[email protected]> wrote:

Anyone running Spectrum Scale on virtual machines (Intel)? I'm curious how you manage disks. Do you use RDMs? Does this even make sense to do?

If you have a 2-3 node cluster, how do you share the disks across nodes? Do you have VMs with their own VMDKs (if not RDMs) in each node, or is there some way to share access to the same VMDKs? What are the advantages of doing this other than using existing hardware? It seems to me that for a lab environment or a very small, non-performance-focused implementation this may be a viable option.

Thanks
Mark
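To make the "how do you share the disks" question concrete, here is a minimal sketch (not from this thread) of the usual pattern: present the same LUNs to every VM as physical-compatibility RDMs (for example created with vmkfstools -z, with the virtual SCSI controller configured for bus sharing, details vary by vSphere version), so that every guest sees the same block devices, and then describe those devices to Spectrum Scale as NSDs. The node names (gpfs-vm1..3), device path /dev/sdb, cluster name, and file system name below are illustrative assumptions, not values taken from the thread.

    # Assumption: gpfs-vm1/2/3 all see the same shared RDM as /dev/sdb,
    # and passwordless ssh/scp is already set up between the nodes.

    cat > nodes.list <<EOF
    gpfs-vm1:quorum-manager
    gpfs-vm2:quorum-manager
    gpfs-vm3:quorum
    EOF

    cat > nsd.stanza <<EOF
    %nsd: device=/dev/sdb
      nsd=nsd_shared_1
      usage=dataAndMetadata
      failureGroup=1
      pool=system
    EOF
    # "servers=" is omitted because every node sees the disk directly;
    # add servers=gpfs-vm1,gpfs-vm2 if only some nodes have access.

    mmcrcluster -N nodes.list -r /usr/bin/ssh -R /usr/bin/scp -C vmlab
    mmchlicense server --accept -N gpfs-vm1,gpfs-vm2,gpfs-vm3
    mmstartup -a
    mmcrnsd -F nsd.stanza
    mmcrfs gpfs1 -F nsd.stanza -A yes -T /gpfs1
    mmmount gpfs1 -a

None of this changes the support statement at the top of this message: per the KB article, the disks behind the NSDs should be RDMs, not shared VMDKs.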
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
