Re: [gpfsug-discuss] tscCmdPortRange question

2018-03-06 Thread Olaf Weiser
This parameter is just for administrative commands: "where" to send the output of a command. For those admin ports (so-called ephemeral ports), it depends on how many admin commands (= sessions = sockets) you want to run in parallel. In my experience, 10 ports is more than enough; we use a range of 50000-50010.

To be clear: daemon-to-daemon communication always uses port 1191.

Cheers
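
For illustration, a minimal sketch of setting the range and opening it in a firewall (the exact range and the firewalld commands are assumptions for the example, not from the original post):

   # reserve ~10 ephemeral ports for administrative command traffic
   mmchconfig tscCmdPortRange=50000-50010
   # daemon-to-daemon traffic stays on 1191/tcp
   firewall-cmd --permanent --add-port=1191/tcp
   firewall-cmd --permanent --add-port=50000-50010/tcp
   firewall-cmd --reload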
 "Simon Thompson
(IT Research Support)" To:      
 "gpfsug-discuss@spectrumscale.org"
Date:      
 03/06/2018 06:55 PMSubject:    
   [gpfsug-discuss]
tscCmdPortRange questionSent by:    
   gpfsug-discuss-boun...@spectrumscale.orgWe are looking at setting a value for tscCmdPortRange
so that we can apply firewalls to a small number of GPFS nodes in one of
our clusters. The docs don’t give an indication on the
number of ports that are required to be in the range. Could anyone make
a suggestion on this? It doesn’t appear as a parameter for “mmchconfig
-i”, so I assume that it requires the nodes to be restarted, however I’m
not clear if we could do a rolling restart on this? Thanks Simon___gpfsug-discuss mailing listgpfsug-discuss at spectrumscale.orghttp://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] 100G RoCEE and Spectrum Scale Performance

2018-03-06 Thread Douglas Duckworth
Hi

We are currently running Spectrum Scale over FDR InfiniBand.  We plan on
upgrading to EDR, since I have not really encountered documentation saying
to abandon the lower-latency advantage found in InfiniBand.  Our workloads
generally benefit from lower latency.

It looks like, ignoring GPFS, EDR still has higher throughput and lower
latency when compared to 100G RoCEE.

http://sc16.supercomputing.org/sc-archive/tech_poster/poster_files/post149s2-file3.pdf

However, I wanted to get feedback on how GPFS performs with 100G Ethernet
instead of FDR.

Thanks very much!

Doug

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Physiology and Biophysics
Weill Cornell Medicine
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] wondering about outage free protocols upgrades

2018-03-06 Thread Christof Schmitt
Rolling code upgrade was never supported for SMB, for the reasons mentioned in my other email.
 
The change in 5.0 is to enforce this restriction at the code level: the SMB service will refuse to start on a protocol node if an incompatible version is already running on another node.
Regards,
Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ
christof.schm...@us.ibm.com || +1-520-799-2469 (T/L: 321-2469)
 
 
- Original message -
From: "Sobey, Richard A"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: "gpfsug-discuss@spectrumscale.org", gpfsug main discussion list
Cc:
Subject: Re: [gpfsug-discuss] wondering about outage free protocols upgrades
Date: Tue, Mar 6, 2018 11:49 AM
Thanks for raising this, I was going to ask. The last I heard it was baked into the 5.0 release of Scale but the release notes are eerily quiet on the matter. 
Would be good to get some input from IBM on this. 
Richard 
Get Outlook for Android
From: gpfsug-discuss-boun...@spectrumscale.org on behalf of greg.lehm...@csiro.au
Sent: Friday, March 2, 2018 3:48:44 AM
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] wondering about outage free protocols upgrades
 
Hi All,
   It appears a rolling node by node upgrade of a protocols cluster is not possible. CTDB is the sticking point as it won’t run with 2 different versions at the same time. Are there any plans to address this and make it a real Enterprise product?
 
Cheers,
 
Greg
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] mmfind performance

2018-03-06 Thread Buterbaugh, Kevin L
Hi All,

In the README for the mmfind command it says:

mmfind
  A highly efficient file system traversal tool, designed to serve
   as a drop-in replacement for the 'find' command as used against GPFS FSes.

And:

mmfind is expected to be slower than find on file systems with relatively few
inodes. This is due to the overhead of using mmapplypolicy. However, if you
make use of the -exec flag to carry out a relatively expensive operation on
each file (e.g. compute a checksum), using mmfind should yield a significant
performance improvement, even on a file system with relatively few inodes.

I have a list of just shy of 50 inode numbers for which I need to figure out
the corresponding files, so I decided to give mmfind a try:

+ cd /usr/lpp/mmfs/samples/ilm
+ ./mmfind /gpfs23 -inum 113769917 -o -inum 132539418 -o -inum 135584191 -o 
-inum 136471839 -o -inum 137009371 -o -inum 137314798 -o -inum 137939675 -o 
-inum 137997971 -o -inum 138013736 -o -inum 138029061 -o -inum 138029065 -o 
-inum 138029076 -o -inum 138029086 -o -inum 138029093 -o -inum 138029099 -o 
-inum 138029101 -o -inum 138029102 -o -inum 138029106 -o -inum 138029112 -o 
-inum 138029113 -o -inum 138029114 -o -inum 138029119 -o -inum 138029120 -o 
-inum 138029121 -o -inum 138029130 -o -inum 138029131 -o -inum 138029132 -o 
-inum 138029141 -o -inum 138029146 -o -inum 138029147 -o -inum 138029152 -o 
-inum 138029153 -o -inum 138029154 -o -inum 138029163 -o -inum 138029164 -o 
-inum 138029165 -o -inum 138029174 -o -inum 138029175 -o -inum 138029176 -o 
-inum 138083075 -o -inum 138083148 -o -inum 138083149 -o -inum 138083155 -o 
-inum 138216465 -o -inum 138216483 -o -inum 138216507 -o -inum 138216535 -o 
-inum 138235320 -ls

I kicked that off last Friday and it is _still_ running.  By comparison, I have
a Perl script that I have run in the past that simply traverses the entire
filesystem tree, stats each file, and outputs the results to a log file.  That
script would “only” run ~24 hours.

Clearly mmfind as I invoked it is much slower than the corresponding Perl
script, so what am I doing wrong?  Thanks…
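
For reference, since mmfind drives mmapplypolicy under the covers, a similar inode match can be written directly as a policy LIST rule. A minimal sketch (the rule text is illustrative, not what mmfind actually generates, and the file names are hypothetical):

   /* inum.pol: list files matching the inode numbers (abbreviated) */
   RULE 'byInode' LIST 'matches'
     WHERE INODE = 113769917 OR INODE = 132539418 OR INODE = 138235320

   mmapplypolicy /gpfs23 -P inum.pol -I defer -f /tmp/inum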

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] wondering about outage free protocols upgrades

2018-03-06 Thread Christof Schmitt
Hi,
 
at this point there are no plans to support "node by node" upgrade for SMB.
 
Some background: The technical reason for this restriction is that the records shared between protocol nodes for the SMB service (ctdb and Samba) are not versioned and no mechanism is in place to handle different versions. Changing this would be a large development task that has not been included in any current plans.
 
Note that this only affects the SMB service. The knowledge center outlines a procedure to minimize the outage: get half of the protocol nodes ready with the new Samba version, then take only a brief outage when switching from the "old" to the "new" Samba version: https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.2/com.ibm.spectrum.scale.v4r22.doc/bl1ins_updatingsmb.htm
The toolkit follows the same approach during an upgrade to minimize the outage.
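As a rough sketch of that procedure on a four-node CES cluster (node names are hypothetical; the knowledge center link above is the authoritative reference):

   # suspend and upgrade half of the protocol nodes
   mmces node suspend -N ces1,ces2
   # ... upgrade the SMB packages on ces1 and ces2 ...
   # brief outage: suspend the remaining old-version nodes, resume the upgraded ones
   mmces node suspend -N ces3,ces4
   mmces node resume -N ces1,ces2
   # ... then upgrade ces3/ces4 and resume them as well ...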
 
We know that this is not ideal, but as mentioned above this is limited by the large effort that would be required which has to be weighed against other requirements and priorities.
 
Regards,
Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ
christof.schm...@us.ibm.com || +1-520-799-2469 (T/L: 321-2469)
 
 
- Original message -
From:
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To:
Cc:
Subject: [gpfsug-discuss] wondering about outage free protocols upgrades
Date: Tue, Mar 6, 2018 10:19 AM
Hi All,
   It appears a rolling node by node upgrade of a protocols cluster is not possible. CTDB is the sticking point as it won’t run with 2 different versions at the same time. Are there any plans to address this and make it a real Enterprise product?
 
Cheers,
 
Greg
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] wondering about outage free protocols upgrades

2018-03-06 Thread Sobey, Richard A
Thanks for raising this, I was going to ask. The last I heard it was baked into 
the 5.0 release of Scale but the release notes are eerily quiet on the matter.

Would be good to get some input from IBM on this.

Richard

Get Outlook for Android


From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of greg.lehm...@csiro.au 

Sent: Friday, March 2, 2018 3:48:44 AM
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] wondering about outage free protocols upgrades

Hi All,
   It appears a rolling node by node upgrade of a protocols cluster 
is not possible. CTDB is the sticking point as it won’t run with 2 different 
versions at the same time. Are there any plans to address this and make it a 
real Enterprise product?

Cheers,

Greg
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

2018-03-06 Thread Buterbaugh, Kevin L
Hi Leandro,

I think the silence in response to your question says a lot, don’t you?  :-O

IBM has said (on this list, I believe) that the Meltdown / Spectre patches do 
not impact GPFS functionality.  They’ve been silent as to performance impacts, 
which can and will be taken various ways.

In the absence of information from IBM, the approach we have chosen to take is 
to patch everything except our GPFS servers … only we (the SysAdmins, oh, and 
the NSA, of course!) can log in to them, so we feel that the risk of not 
patching them is minimal.

HTHAL…

Kevin

On Mar 1, 2018, at 9:02 AM, Avila-Diaz, Leandro wrote:

Good morning,

Does anyone know if IBM has an official statement and/or perhaps a FAQ document 
about the Spectre/Meltdown impact on GPFS?
Thank you

From:  on behalf of IBM Spectrum Scale
Reply-To: gpfsug main discussion list
Date: Thursday, January 4, 2018 at 20:36
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

Kevin,

The team is aware of Meltdown and Spectre. Due to the late availability of 
production-ready test patches (they became available today) we started today 
working on evaluating the impact of applying these patches. The focus would be 
both on any potential functional impacts (especially to the kernel modules 
shipped with GPFS) and on the performance degradation which affects user/kernel 
mode transitions. Performance characterization will be complex, as some system 
calls which may get invoked often by the mmfsd daemon will suddenly become 
significantly more expensive because of the kernel changes. Depending on the 
main areas affected, code changes might be possible to alleviate the impact, by 
reducing frequency of certain calls, etc. Any such changes will be deployed 
over time.

At this point, we can't say what impact this will have on stability or
performance on systems running GPFS — until IBM issues an official statement on
this topic. We hope to have some basic answers soon.



Regards, The Spectrum Scale (GPFS) team

--
If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum
at https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and 
you have an IBM software maintenance contract please contact 1-800-237-5511 in 
the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for 
priority messages to the Spectrum Scale (GPFS) team.

"Buterbaugh, Kevin L" ---01/04/2018 01:11:59 PM---Happy New Year 
everyone, I’m sure that everyone is aware of Meltdown and Spectre by now … we, 
like m

From: "Buterbaugh, Kevin L" 
>
To: gpfsug main discussion list 
>
Date: 01/04/2018 01:11 PM
Subject: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org




Happy New Year everyone,

I’m sure that everyone is aware of Meltdown and Spectre by now … we, like many 
other institutions, will be patching for it at the earliest possible 
opportunity.

Our understanding is that the most serious of the negative performance impacts 
of these patches will be for things like I/O (disk / network) … given that, we 
are curious if IBM has any plans for a GPFS update that could help mitigate 
those impacts? Or is there simply nothing that can be done?

If there is a GPFS update planned for this we’d be interested in knowing so 
that we could coordinate the kernel and GPFS upgrades on our cluster.

Thanks…

Kevin

P.S. The “Happy New Year” wasn’t intended as sarcasm … I hope it is a good
year for everyone despite how it’s starting out. :-O

Re: [gpfsug-discuss] RDMA data from Zimon

2018-03-06 Thread Kristy Kallback-Rose
Thanks Eric. No one who is a ZIMon developer has jumped up to contradict this, 
so I’ll go with it :-)

Many thanks. This is helpful to understand where the data is coming from and 
would be a welcome addition to the documentation.

Cheers,
Kristy
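
For illustration, a minimal shell sketch of applying the multiply-by-4 lane adjustment Eric describes below, turning the counter into bytes/sec (the device and port names are hypothetical):

   C=/sys/class/infiniband/mlx5_0/ports/1/counters_ext/port_xmit_data_64
   a=$(cat "$C"); sleep 10; b=$(cat "$C")
   echo "$(( (b - a) * 4 / 10 )) bytes/sec transmitted"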

> On Feb 15, 2018, at 9:08 AM, Eric Agar  wrote:
> 
> Kristy,
> 
> I experimented a bit with this some months ago and looked at the ZIMon source 
> code. I came to the conclusion that ZIMon is reporting values obtained from 
> the IB counters (actually, delta values adjusted for time) and that yes, for 
> port_xmit_data and port_rcv_data, one would need to multiply the values by 4 
> to make sense of them.
> 
> To obtain a port_xmit_data value, the ZIMon sensor first looks for 
> /sys/class/infiniband/<device>/ports/<port>/counters_ext/port_xmit_data_64, 
> and if that is not found then looks for 
> /sys/class/infiniband/<device>/ports/<port>/counters/port_xmit_data. Similarly 
> for other counters/metrics.
> 
> Full disclosure: I am not an IB expert nor a ZIMon developer.
> 
> I hope this helps.
> 
> 
> Eric M. Agar
> a...@us.ibm.com
> 
> 
> 
> From: Kristy Kallback-Rose 
> To: gpfsug main discussion list 
> Date: 02/14/2018 08:47 PM
> Subject: [gpfsug-discuss] RDMA data from Zimon
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> 
> 
> 
> 
> Hi,
> 
> Can one of the IBMers tell me if port_xmit_data and port_rcv_data from Zimon 
> can be interpreted as RDMA Bytes/sec? Ideally, also how this data is being 
> collected? I’m looking here: 
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1hlp_monnetworksmetrics.htm
>  
> 
> 
> But then I also look here: https://community.mellanox.com/docs/DOC-2751 
> 
> 
> and see "Total number of data octets, divided by 4 (lanes), received on all 
> VLs. This is 64 bit counter.” So I wasn’t sure if some multiplication by 4 
> was in order.
> 
> Please advise.
> 
> Cheers,
> Kristy
>  
> 
> 
> 
> 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] tscCmdPortRange question

2018-03-06 Thread Simon Thompson (IT Research Support)
We are looking at setting a value for tscCmdPortRange so that we can apply 
firewalls to a small number of GPFS nodes in one of our clusters.

The docs don’t give an indication on the number of ports that are required to 
be in the range. Could anyone make a suggestion on this?

It doesn’t appear as a parameter for “mmchconfig -i”, so I assume that it 
requires the nodes to be restarted, however I’m not clear if we could do a 
rolling restart on this?

Thanks

Simon
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

2018-03-06 Thread IBM Spectrum Scale
Hi,

The verification/test work is still ongoing. Hopefully GPFS will publish a
statement soon. I think it would be made available through several channels,
such as the FAQ.

Regards, The Spectrum Scale (GPFS) team

--

If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479.


If your query concerns a potential software error in Spectrum Scale (GPFS)
and you have an IBM software maintenance contract please contact
1-800-237-5511 in the United States or your local IBM Service Center in
other countries.

The forum is informally monitored as time permits and should not be used
for priority messages to the Spectrum Scale (GPFS) team.



From:   "Avila-Diaz, Leandro" 
To: gpfsug main discussion list 
Date:   03/01/2018 11:17 PM
Subject:    Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS
Sent by:    gpfsug-discuss-boun...@spectrumscale.org



Good morning,

Does anyone know if IBM has an official statement and/or perhaps a FAQ
document about the Spectre/Meltdown impact on GPFS?
Thank you

From:  on behalf of IBM Spectrum
Scale 
Reply-To: gpfsug main discussion list 
Date: Thursday, January 4, 2018 at 20:36
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS



Kevin,

The team is aware of Meltdown and Spectre. Due to the late availability of
production-ready test patches (they became available today) we started
today working on evaluating the impact of applying these patches. The focus
would be both on any potential functional impacts (especially to the kernel
modules shipped with GPFS) and on the performance degradation which affects
user/kernel mode transitions. Performance characterization will be complex,
as some system calls which may get invoked often by the mmfsd daemon will
suddenly become significantly more expensive because of the kernel changes.
Depending on the main areas affected, code changes might be possible to
alleviate the impact, by reducing frequency of certain calls, etc. Any such
changes will be deployed over time.

At this point, we can't say what impact this will have on stability or
performance on systems running GPFS — until IBM issues an official
statement on this topic. We hope to have some basic answers soon.



Regards, The Spectrum Scale (GPFS) team

--

If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479.

If your query concerns a potential software error in Spectrum Scale (GPFS)
and you have an IBM software maintenance contract please contact
1-800-237-5511 in the United States or your local IBM Service Center in
other countries.

The forum is informally monitored as time permits and should not be used
for priority messages to the Spectrum Scale (GPFS) team.


From: "Buterbaugh, Kevin L" 
To: gpfsug main discussion list 
Date: 01/04/2018 01:11 PM
Subject: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS
Sent by: gpfsug-discuss-boun...@spectrumscale.org




Happy New Year everyone,

I’m sure that everyone is aware of Meltdown and Spectre by now … we, like
many other institutions, will be patching for it at the earliest possible
opportunity.

Our understanding is that the most serious of the negative performance
impacts of these patches will be for things like I/O (disk / network) …
given that, we are curious if IBM has any plans for a GPFS update that
could help mitigate those impacts? Or is there simply nothing that can be
done?

If there is a GPFS update planned for this we’d be interested in knowing so
that we could coordinate the kernel and GPFS upgrades on our cluster.

Thanks…

Kevin

P.S. The “Happy New Year” wasn’t intended as sarcasm … I hope it is a good
year for everyone despite how it’s starting out. :-O

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and
Education
kevin.buterba...@vanderbilt.edu - (615)875-9633



[gpfsug-discuss] Spectrum Scale Support Webinar - File Audit Logging

2018-03-06 Thread Bohai Zhang



You are receiving this message because you are an IBM Spectrum Scale client and in the GPFS User Group.
IBM Spectrum Scale Support Webinar

File Audit Logging



About this Webinar

IBM Spectrum Scale Webinars are hosted by IBM Spectrum Scale Support to
share expertise and knowledge of the Spectrum Scale product, as well as
product updates and best practices based on various use cases. This webinar
will discuss fundamentals of the new File Audit Logging function including
configuration and key best practices that will aid you in successful
deployment and use of File Audit Logging within Spectrum Scale.



Please note that our webinars are free of charge and will be held online
via WebEx.

Agenda:
   · Overview of File Audit Logging
   · Installation and deployment of File Audit Logging
   · Using File Audit Logging
   · Monitoring and troubleshooting File Audit Logging
   · Q&A
NA/EU Session
Date: March 14, 2018
Time: 11 AM – 12PM EDT (4PM GMT)
Registration: https://ibm.biz/BdZsZz
Audience: Spectrum Scale Administrators

AP/JP Session
Date: March 15, 2018
Time: 10AM – 11AM Beijing Time (11AM Tokyo Time)
Registration: https://ibm.biz/BdZsZf
Audience: Spectrum Scale Administrators


If you have any questions, please contact Robert Simon, Jun Hui Bu, Vlad
Spoiala, and Bohai Zhang.




Regards,

IBM Spectrum Scale Support Team





  
Bohai Zhang
Senior Technical Leader, IBM Systems
IBM Spectrum Computing
Tel: 1-905-316-2727
Mobile: 1-416-897-7488
Email: bzh...@ca.ibm.com
3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada

Live Chat @IBMStorageSupt | Mobile Apps
Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC | dWA

We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss