Excellent info, Aaron.  Thanks for this.  I may reach out via PM to explore this 
more with you.

From: <[email protected]> on behalf of Aaron S Palazzolo 
<[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Tuesday, October 25, 2016 at 2:59 PM
To: "[email protected]" <[email protected]>
Subject: Re: [gpfsug-discuss] Virtualized Spectrum Scale

Hi Mark,

Great questions on the VM side.  Within our Spectrum Scale test labs at IBM, we 
run on both VMware and KVM.  It sounds as though your questions lean more 
towards the VMware side, so I'll give a few pointers.  For more detailed info, 
feel free to grab me on the side or continue the discussion within the user 
group so that others can learn from this as well.


#1) As Luis suggests, using vmdks will not give you full support of the SCSI 
protocol, specifically persistent reserve.  EIO errors are also not passed 
through correctly in some path-down situations.
- For certain test/dev environments, in which data protection and performance 
are not high priorities, this may be fine.  Some of our test environments do 
run like this.
- Note that vmdks can be shared among virtual Spectrum Scale nodes using the 
multi-writer flag in VMware.  To do this, you'll first need to create your 
vmdks using 'Thick Provision Eager Zeroed'.  You'll then need to configure the 
multi-writer flag for SCSI sharing on each vmdk, for example:

scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
scsi1:2.sharing = "multi-writer"

You'll find this in the Advanced configuration parameters.  Finally, you'll 
need to attach all of these vmdks to a separate SCSI adapter that is configured 
for either virtual or physical bus sharing (a fuller sketch follows below).
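
As a rough sketch only (the second SCSI controller, the pvscsi adapter type, 
and the vmdk filename below are assumptions for illustration, not taken from 
an actual setup), the relevant .vmx entries for one shared vmdk on its own 
sharing-enabled adapter could look like:

# Extra SCSI adapter reserved for the shared vmdks, with bus sharing enabled
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1.sharedBus = "virtual"
# One eager-zeroed thick vmdk attached to that adapter with multi-writer sharing
scsi1:0.present = "TRUE"
scsi1:0.fileName = "shared-nsd-01.vmdk"
scsi1:0.sharing = "multi-writer"

The same pattern repeats for scsi1:1, scsi1:2, and so on, one block per shared 
vmdk.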

Downsides are lack of support, some degradation of performance, lack of 
failover support, and the inability to use VMware snapshots of VMs with SCSI 
sharing/multi-writer enabled.
Upsides are ease of setup, with either virtual or physical sharing, and the 
ability to store all VMs on a single datastore that can itself be snapshotted 
if the underlying physical storage supports this.

In our test labs, we create 4-node to 50-node virtual Spectrum Scale clusters; 
each node is a VM, and some of these VMs have extra vmdks for NSDs while others 
do not.  All VMs belonging to a cluster reside on a single datastore, which 
ends up being a single XIV volume.  We can then snapshot this XIV volume and, 
in essence, snapshot the entire Spectrum Scale cluster back and forth in time.  
I'm sure this is overly complicated for what you want to do, but it may get you 
thinking of use cases.

#2) RDM will give you both performance and the peace of mind that you're using 
a fully supported config.  Officially, I will always recommend RDM for this 
reason.  The downside to RDM is complexity in setup, unless you have a fairly 
static config or can automate the zoning.
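
If you do go the RDM route and want to script it, a minimal sketch from the 
ESXi shell is below; the NAA device ID, datastore, and file names are 
placeholders, and the same mapping can also be created from the vSphere client 
when adding a hard disk:

# Create a physical-compatibility RDM mapping file for the LUN
# (physical mode passes SCSI commands, including persistent reserve, through to the device)
vmkfstools -z /vmfs/devices/disks/naa.60050768XXXXXXXXXXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/scale-node1/nsd01-rdm.vmdk
# Virtual-compatibility mode would use -r instead of -z

The resulting mapping file is then attached to each Spectrum Scale VM, 
typically on a SCSI adapter set to physical bus sharing when the LUN is shared 
across hosts.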

#3) No matter what type of VM infrastructure you use, make sure you investigate 
the memory/CPU requirements.
- Check our FAQ, specifically section 6.1, for tuning info regarding 
vm.min_free_kbytes (a sketch of applying it follows after this list):
https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#gpfsclustersfaqAugust2016-gen4__lintunq
- We have run some of our test clusters with as little as 4GB of memory, but I 
would definitely recommend quite a bit more memory in each VM for production 
use.  If you use additional functions beyond the core file system, pay 
attention to their memory requirements as well.
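
As noted above, a rough sketch of applying vm.min_free_kbytes inside a guest is 
below; the value shown is only a placeholder, so take the actual number from 
the FAQ based on each VM's memory size:

# Persist the setting across reboots, e.g. in /etc/sysctl.d/99-gpfs.conf
vm.min_free_kbytes = 524288

# Apply it immediately without a reboot
sysctl -w vm.min_free_kbytes=524288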

#4) KVM is a viable alternative if needed.



Regards,

Aaron Palazzolo
IBM Spectrum Scale Deployment, Infrastructure, Virtualization
9042 S Rita Road, Tucson AZ 85744
Phone: 520-799-5161, T/L: 321-5161
E-mail: [email protected]


----- Original message -----
From: [email protected]
Sent by: [email protected]
To: [email protected]
Cc:
Subject: gpfsug-discuss Digest, Vol 57, Issue 65
Date: Tue, Oct 25, 2016 12:05 PM

Send gpfsug-discuss mailing list submissions to
[email protected]

To subscribe or unsubscribe via the World Wide Web, visit
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
[email protected]

You can reach the person managing the list at
[email protected]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. October meet the devs report
      (Simon Thompson (Research Computing - IT Services))
   2. Virtualized Spectrum Scale ([email protected])
   3. Re: Virtualized Spectrum Scale (Luis Bolinches)
   4. Re: Virtualized Spectrum Scale (Kevin D Johnson)


----------------------------------------------------------------------

Message: 1
Date: Tue, 25 Oct 2016 15:33:16 +0000
From: "Simon Thompson (Research Computing - IT Services)"
<[email protected]>
To: "[email protected]"
<[email protected]>
Subject: [gpfsug-discuss] October meet the devs report
Message-ID:
<[email protected]>
Content-Type: text/plain; charset="us-ascii"


The October meet-the-devs workshop on cloud was last week.  Thanks to Dean 
Hildebrand and John Lewars for flying in from the US to support us, and to Ulf 
Troppens and Dan Kidger, also from IBM.  And finally, thanks to OCF for buying 
the pizza (one day we'll manage to get the pizza company to deliver on time!).

The event report is now up on the group website at:

http://www.spectrumscale.org/meet-the-devs-cloud-workshop-birmingham-uk/

Simon

------------------------------

Message: 2
Date: Tue, 25 Oct 2016 18:46:26 +0000
From: "[email protected]" <[email protected]>
To: "[email protected]"
<[email protected]>
Subject: [gpfsug-discuss] Virtualized Spectrum Scale
Message-ID: <[email protected]>
Content-Type: text/plain; charset="utf-8"

Anyone running Spectrum Scale on virtual machines (Intel)?  I'm curious how you 
manage disks.  Do you use RDMs?  Does this even make sense to do?  If you have 
a 2-3 node cluster, how do you share the disks across nodes?  Do you have VMs 
with their own VMDKs (if not RDMs) in each node, or is there some way to share 
access to the same VMDKs?  What are the advantages of doing this other than 
making use of existing hardware?  It seems to me that for a lab environment, or 
a very small implementation that is not performance-focused, this may be a 
viable option.

Thanks

Mark

This message (including any attachments) is intended only for the use of the 
individual or entity to which it is addressed and may contain information that 
is non-public, proprietary, privileged, confidential, and exempt from 
disclosure under applicable law. If you are not the intended recipient, you are 
hereby notified that any use, dissemination, distribution, or copying of this 
communication is strictly prohibited. This message may be viewed by parties at 
Sirius Computer Solutions other than those named in the message header. This 
message does not contain an official representation of Sirius Computer 
Solutions. If you have received this communication in error, notify Sirius 
Computer Solutions immediately and (i) destroy this message if a facsimile or 
(ii) delete this message immediately if this is an electronic communication. 
Thank you.

Sirius Computer Solutions <http://www.siriuscom.com>

------------------------------

Message: 3
Date: Tue, 25 Oct 2016 18:58:21 +0000
From: "Luis Bolinches" <[email protected]>
To: "gpfsug main discussion list" <[email protected]>
Subject: Re: [gpfsug-discuss] Virtualized Spectrum Scale
Message-ID:
<ofdede10b2.7153242c-on00258057.0068383a-1477421901...@notes.na.collabserv.com>

Content-Type: text/plain; charset="utf-8"

Hi

You must use RDM; otherwise it is not supported.  SCSI commands are the reason.

Furthermore, on some versions I managed to crash the ESXi host as well.

--
Cheers

> On 25 Oct 2016, at 19.46, "[email protected]" <[email protected]> 
> wrote:
>
> Anyone running Spectrum Scale on virtual machines (Intel)?  I'm curious how 
> you manage disks.  Do you use RDMs?  Does this even make sense to do?  If you 
> have a 2-3 node cluster, how do you share the disks across nodes?  Do you have 
> VMs with their own VMDKs (if not RDMs) in each node, or is there some way to 
> share access to the same VMDKs?  What are the advantages of doing this other 
> than making use of existing hardware?  It seems to me that for a lab 
> environment, or a very small implementation that is not performance-focused, 
> this may be a viable option.
>
> Thanks
>
> Mark

Ellei edell? ole toisin mainittu: / Unless stated otherwise above:
Oy IBM Finland Ab
PL 265, 00101 Helsinki, Finland
Business ID, Y-tunnus: 0195876-3
Registered in Finland


------------------------------

Message: 4
Date: Tue, 25 Oct 2016 19:05:12 +0000
From: "Kevin D Johnson" <[email protected]>
To: [email protected]
Cc: [email protected]
Subject: Re: [gpfsug-discuss] Virtualized Spectrum Scale
Message-ID:
<of058ea313.abd3ea09-on00258057.0068a1be-00258057.0068d...@notes.na.collabserv.com>

Content-Type: text/plain; charset="us-ascii"

An HTML attachment was scrubbed...
URL: 
<http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20161025/a0853f55/attachment.html>

------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


End of gpfsug-discuss Digest, Vol 57, Issue 65
**********************************************




