Q5.2: What are some scaling considerations for the protocols function?
A5.2: Scaling considerations for the protocols function include:
- The number of protocol nodes.
If you are using SMB, alone or in any combination with other protocols, you can configure at most 16 protocol nodes. This is a hard limit: SMB cannot be enabled if the cluster has more protocol nodes. If only NFS and Object are enabled, up to 32 nodes can be configured as protocol nodes.
- The number of client connections.
A maximum of 3,000 SMB connections per protocol node is recommended, with a maximum of 20,000 SMB connections per cluster. A maximum of 4,000 NFS connections per protocol node is recommended, and a maximum of 2,000 Object connections per protocol node. The achievable number of connections depends on the amount of memory configured and on sufficient CPU. We recommend a minimum of 64 GB of memory for Object-only or NFS-only use cases; if multiple protocols are enabled, or if SMB is enabled, we recommend 128 GB of memory on the system. (A sizing sketch follows this list.)
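These limits interact, so a planned deployment is worth checking up front. Below is a minimal Python sketch (hypothetical, not an IBM-supplied tool) that tests a planned node count, per-node memory, and expected connection counts against the figures quoted above. Only the numeric limits come from this answer; the function name and parameters are illustrative.

    # Hypothetical sizing check against the limits quoted in A5.2.
    # The constants are from the answer above; check_plan() itself is
    # illustrative, not an IBM-supplied API.

    MAX_NODES_WITH_SMB = 16       # hard limit when SMB is enabled
    MAX_NODES_NFS_OBJECT = 32     # NFS and/or Object only
    MAX_SMB_PER_NODE = 3_000      # recommended
    MAX_SMB_PER_CLUSTER = 20_000  # recommended
    MAX_NFS_PER_NODE = 4_000      # recommended
    MAX_OBJECT_PER_NODE = 2_000   # recommended

    def check_plan(nodes, mem_gb, protocols, smb=0, nfs=0, obj=0):
        """Return warnings for a planned protocol-node deployment."""
        warnings = []
        has_smb = "SMB" in protocols
        node_cap = MAX_NODES_WITH_SMB if has_smb else MAX_NODES_NFS_OBJECT
        if nodes > node_cap:
            warnings.append(f"{nodes} protocol nodes exceeds the limit of {node_cap}")
        if smb > nodes * MAX_SMB_PER_NODE:
            warnings.append("more than 3,000 SMB connections per node")
        if smb > MAX_SMB_PER_CLUSTER:
            warnings.append("more than 20,000 SMB connections in the cluster")
        if nfs > nodes * MAX_NFS_PER_NODE:
            warnings.append("more than 4,000 NFS connections per node")
        if obj > nodes * MAX_OBJECT_PER_NODE:
            warnings.append("more than 2,000 Object connections per node")
        # 64 GB suffices only for single-protocol NFS or Object; otherwise 128 GB.
        min_mem = 64 if (not has_smb and len(protocols) == 1) else 128
        if mem_gb < min_mem:
            warnings.append(f"{mem_gb} GB/node is below the recommended {min_mem} GB")
        return warnings

    # Example: 10 nodes, 128 GB each, SMB + NFS, 25,000 SMB connections.
    for w in check_plan(10, 128, {"SMB", "NFS"}, smb=25_000, nfs=8_000):
        print("WARNING:", w)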
----- [email protected] wrote: -----
From: [email protected]
Date: 12/10/2020 07:00AM
Subject: [EXTERNAL] gpfsug-discuss Digest, Vol 107, Issue 13
[email protected]
To subscribe or unsubscribe via the World Wide Web, visit
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
[email protected]
You can reach the person managing the list at
[email protected]
When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."
Today's Topics:
1. Protocol limits (leslie elliott)
2. Re: Protocol limits (Jan-Frode Myklebust)
----------------------------------------------------------------------
Message: 1
Date: Thu, 10 Dec 2020 08:45:22 +1000
From: leslie elliott <[email protected]>
Subject: [gpfsug-discuss] Protocol limits
Hi all,

We run a large number of shares from CES servers connected to a single Scale cluster. We understand the current supported limit is 1,000 SMB shares; we run the same number of NFS shares.

We also understand that using an external CES cluster to increase that limit is not supported, based on the documentation. We use the same authentication for all shares, and we have additional use cases for sharing where this pathway would be attractive going forward.

So the question becomes: if we need to run 20,000 SMB and NFS shares off a Scale cluster, is there any hardware design we can use to do this whilst maintaining support?

I have submitted a support request to ask if this can be done, but thought I would ask the collective good in case this has already been solved.

Thanks,
leslie
------------------------------
Message: 2
Date: Thu, 10 Dec 2020 00:21:03 +0100
From: Jan-Frode Myklebust <[email protected]>
Subject: Re: [gpfsug-discuss] Protocol limits
My understanding of these limits is that they exist to keep the configuration files from becoming too large, which makes changing and processing them somewhat slow.

For SMB shares, you might be able to limit the number of configured shares by using wildcards in the config (%U); these wildcarded entries count as one share (see the sketch below). I don't know if similar tricks can be done for NFS.
-jf
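As a concrete sketch of the %U idea: in stock Samba, a single wildcarded stanza along these lines serves a per-user path while counting as one configured share. The share name and path here are made up, and CES manages its SMB configuration through mmsmb rather than a hand-edited smb.conf, so treat this as an illustration of the mechanism, not a supported recipe:

    # One share definition; Samba expands %U to the session username,
    # so this single entry covers every user's directory.
    [userdirs]
        path = /gpfs/fs1/home/%U
        valid users = %U
        read only = no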
ons. 9. des. 2020 kl. 23:45 skrev leslie elliott <
[email protected]>:
>
> hi all
>
> we run a large number of shares from CES servers connected to a single
> scale cluster
> we understand the current supported limit is 1000 SMB shares, we run the
> same number of NFS shares
>
> we also understand that using external CES cluster to increase that limit
> is not supported based on the documentation, we use the same authentication
> for all shares, we do have additional use cases for sharing where this
> pathway would be attractive going forward
>
> so the question becomes if we need to run 20000 SMB and NFS shares off a
> scale cluster is there any hardware design we can use to do this whilst
> maintaining support
>
> I have submitted a support request to ask if this can be done but thought
> I would ask the collective good if this has already been solved
>
> thanks
>
> leslie
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
End of gpfsug-discuss Digest, Vol 107, Issue 13
***********************************************
