Re: [gpfsug-discuss] struct dirent d_type for fifos on gpfs

2022-01-14 Thread Lyle Gayne
He's already handled this.
 
Lyle
 
- Original message -
From: IBM Spectrum Scale/Poughkeepsie/IBM
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: Steve Xiao/Fishkill/IBM@IBM
Cc: gpfsug-discuss-boun...@spectrumscale.org, "gpfsug main discussion list", Ted Anderson/Almaden/IBM@IBMMail
Subject: [EXTERNAL] Re: [gpfsug-discuss] struct dirent d_type for fifos on gpfs
Date: Fri, Jan 14, 2022 2:12 AM

Hi Steve,

Can you please look into the query below from Hannappel.

Regards,
The Spectrum Scale (GPFS) team

--
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract, please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.

From: "Hannappel, Juergen"
To: "gpfsug main discussion list"
Date: 13-01-2022 11.46 PM
Subject: [EXTERNAL] [gpfsug-discuss] struct dirent d_type for fifos on gpfs
Sent by: gpfsug-discuss-boun...@spectrumscale.org
Hi,

I noticed that when I read directory entries with the usual readdir() function, for FIFOs I get 0 in the d_type field, i.e. DT_UNKNOWN, while if I try the same on a different file system, e.g. ext4, I get the expected DT_FIFO.

Is this a bug or an expected feature?

--
Dr. Jürgen Hannappel  DESY/IT    Tel.: +49 40 8998-4616
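For readers hitting the same thing: d_type is an optional optimization, and a filesystem is allowed to report DT_UNKNOWN, in which case the caller is expected to fall back to lstat()/stat(). A quick shell check of both behaviours, assuming hypothetical mount points /gpfs/fs1 (GPFS) and /tmp (ext4):

# Hypothetical mount points: /gpfs/fs1 on GPFS, /tmp on ext4.
mkfifo /gpfs/fs1/testfifo /tmp/testfifo

# stat() reports the correct file type regardless of what readdir() puts in d_type:
stat --format='%n: %F' /gpfs/fs1/testfifo /tmp/testfifo

# GNU find only uses d_type as an optimization and stats entries reported as
# DT_UNKNOWN, so '-type p' should list the FIFO on both filesystems:
find /gpfs/fs1 /tmp -maxdepth 1 -type p

Applications that read directories themselves should treat DT_UNKNOWN the same way: lstat() the entry and test S_ISFIFO() (or S_ISDIR(), and so on) instead of relying on d_type always being filled in.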
 


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Mismatch between local and Scale directories

2020-06-30 Thread Lyle Gayne
Linda may be able to help.
 
Lyle
 
- Original message -From: "Baptista, Pedro Real" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: "gpfsug-discuss@spectrumscale.org" Cc:Subject: [EXTERNAL] [gpfsug-discuss] Mismatch between local and Scale directoriesDate: Tue, Jun 30, 2020 7:54 AM  
Hi all,
 
I’m finding differences between my local directories (Hadoop cluster) and the GPFS filesystem. I’ve linked both the yarn and mapreduce directories to Scale.
 
For example, in one specific worker node:
 

 
If I list the usercache folder, I see differences.
 
Local:
 

 
GPFS:
 

 
I see that GPFS is working OK on the node:
 

 
However, if I check the node health:
 

 
I’m new to Spectrum Scale and I don’t know what csm_resync_needed and local_fs_filled are.
 
Can anyone give a hand with this?
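As a starting point (an editor's sketch, not part of the original question), mmhealth on the affected worker node reports the state of every monitored component and can describe the events by name:

# Run on the affected worker node.
mmhealth node show -v          # verbose state of every monitored component
mmhealth node eventlog         # when and why events were raised on this node

# mmhealth can also describe a specific event and its suggested user action,
# e.g. the two events mentioned above:
mmhealth event show local_fs_filled
mmhealth event show csm_resync_needed

local_fs_filled generally points at a local OS filesystem on that node (e.g. /, /var or /tmp) crossing a fill threshold rather than at GPFS itself.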
 
Best regards,
 
Pedro Real Baptista
Consulting | Analytics & Cognitive
Deloitte Consultores, S.A.
Av. Eng. Duarte Pacheco, 7, 1070-100 Lisboa, Portugal
M: +351 962369236
pebapti...@deloitte.pt | www.deloitte.pt
 

 
Disclaimer: Deloitte refers to a Deloitte member firm, one of its related entities, or Deloitte Touche Tohmatsu Limited (“DTTL”). Each Deloitte member firm is a separate legal entity and a member of DTTL. DTTL does not provide services to clients. Please see www.deloitte.com/about to learn more.

Privileged/Confidential Information may be contained in this message. If you are not the addressee indicated in this message (or responsible for delivery of the message to such person), you may not copy or deliver this message to anyone. In such case, you should destroy this message and kindly notify the sender by reply email. Please advise immediately if you or your employer do not consent to Internet email for messages of this kind. Opinions, conclusions and other information in this message that do not relate to the official business of my firm shall be understood as neither given nor endorsed by it.
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] AFM Recovery of SW cache does a full scan of home - is this to be expected?

2020-01-08 Thread Lyle Gayne
Venkat,
for awareness and response.
 
Thanks,
Lyle
 
- Original message -From: "Billich Heinrich Rainer (ID SD)" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: [EXTERNAL] [gpfsug-discuss] AFM Recovery of SW cache does a full scan of home - is this to be expected?Date: Wed, Jan 8, 2020 12:04 PM 
Hello,
 
 
still new to AFM, so some basic question on how Recovery works for a SW cache:
 
we have an AFM SW cache in recovery mode – recovery first ran policies on the cache cluster, but now I see a ‘tcpcachescan’ process on cache slowly scanning home via NFS. Single host, single process, no parallelism as far as I can see, but I may be wrong. This scan of home on a cache AFM gateway takes very long while further updates on cache queue up. Home has about 100M files. After 8 hours I see about 70M entries in the file /var/mmfs/afm/…/recovery/homelist, i.e. we get about 2,500 lines/s. (We may have very many changes on cache due to some recursive ACL operations, but I’m not sure.)
 
So I expect that 12 hours will pass building up file lists before recovery starts to update home. I see some risk: in this time new changes pile up on cache. Memory may become an issue? Cache may fill up and we can’t evict?
 
I wonder:
- Is this to be expected and normal behavior? What can be done about it?
- Will every reboot of a gateway node trigger a recovery of all AFM filesets and a full scan of home? This would make normal rolling updates very impractical, or is there some better way?
 
Home is a GPFS cluster, hence we could easily produce the needed file list on home with a policy scan in a few minutes.
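For reference, a file list of that kind can be produced with a LIST policy run; a minimal sketch with made-up paths and list names (with -I defer, mmapplypolicy only writes the candidate list and executes nothing):

# Hypothetical path: /gpfs/homefs/homefset is the home fileset.
cat > /tmp/listall.pol <<'EOF'
RULE EXTERNAL LIST 'homelist' EXEC ''
RULE 'all' LIST 'homelist'
EOF

# Writes the list to /tmp/scan.list.homelist without acting on it.
mmapplypolicy /gpfs/homefs/homefset -P /tmp/listall.pol -I defer -f /tmp/scan

Whether AFM recovery can be fed such an externally produced list is a question for the AFM developers; the sketch only shows how quickly home can be enumerated with the policy engine.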
 
Thank you, I will welcome any clarification, advice or comments.
 
Kind regards,
 
Heiner
 
-- 
===
Heinrich Billich
ETH Zürich
Informatikdienste
Tel.: +41 44 632 72 56
heinrich.bill...@id.ethz.ch

 
 
 
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites.

2019-12-18 Thread Lyle Gayne
Adding Venkat so he can chime in.
 
Lyle
 
- Original message -From: "Andi Nør Christiansen" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites.Date: Wed, Dec 18, 2019 5:24 AM  
Hi,
 
We are currently building a three-site Spectrum Scale solution where data is generated at two different sites (Site A and Site B; Site C is for archiving/backup) and then archived on the third site.
I have not worked with AFM much, so I was wondering if someone knows how to configure AFM so that all data generated in a fileset is automatically copied to an offsite location.
Native GPFS AFM is not an option because of the latency between sites, so AFM over NFS is going to be tunneled between the sites via WAN.
 
As of now we have tried to set up AFM, but it only transfers data from home to cache when a prefetch is manually started or a file is being opened; we need all files from home to go to cache as soon as they are generated, or at least after a little while.
It does not need to be synchronous, it just needs to be automatic.
 
I’m not sure if attachments will be available in this thread but I have attached the concept of our design. 
 
Basically the setup is :
 
Site A:
Owns “fileset A1”, which needs to be replicated to Site B “fileset A2”, and then from Site B to Site C “fileset A3”.
 
Site B:
Owns “fileset B1” which needs to be replicated to Site C “fileset B2”.
 
Site C: 
Holds all data from Site A and B “fileset A3 & B2”.
 
We do not need any sites to have failover functionality only a copy of the data from the two first sites. 
 
If anyone knows how to accomplish this I would be glad to know how! 
 
We have been looking into switching the home and cache sites so that data is generated at the cache sites, which will trigger GPFS to transfer the files to home as soon as possible. But as I have little to no experience with AFM, I don’t know what happens to the cache site over time: does the cache site empty itself after a while, or does data stay there until manually deleted?
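For what it's worth, the usual way to get "generated at cache, pushed to home automatically" is a single-writer (SW) or independent-writer (IW) AFM fileset whose target is the archive site. A rough sketch with entirely made-up filesystem, fileset and export names:

# Hypothetical names: fs1 = filesystem at Site A,
# sitec-nfs:/gpfs/archivefs/siteA = NFS export of the matching fileset at Site C.
mmcrfileset fs1 siteA_data --inode-space new \
    -p afmMode=sw,afmTarget=nfs://sitec-nfs/gpfs/archivefs/siteA
mmlinkfileset fs1 siteA_data -J /gpfs/fs1/siteA_data

# In SW mode the cache side is the writable copy; changes queue on the gateway
# node and are replayed to home asynchronously.  Data stays in the cache until
# it is deleted or explicitly evicted (mmafmctl fs1 evict ...), so the cache
# does not empty itself on its own.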
 
Thanks in advance!
 
Best Regards
Andi Nør Christiansen
IT Solution Specialist
Phone: +45 87 81 37 39
Mobile: +45 23 89 59 75
E-mail: a...@b4restore.com
Web: www.b4restore.com


 
 
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

2019-12-05 Thread Lyle Gayne
your current filesystem. Here's the link with additional information:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata

Different RAID configurations also bring unexpected performance behaviors, unless you are planning to create different pools and use ILM to manage the files in the different pools.

One last thing: it's a good idea to follow the recommended levels for Spectrum Scale:
https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning

Anyway, you are the system administrator; you know better than anyone how complex it is to manage this cluster.




 
Abraços / Regards / Saludos,

Anderson Nobre
Power and Storage Consultant
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
Phone: 55-19-2132-4317
E-mail: ano...@br.ibm.com

- Original message -
From: "Dorigo Alvise (PSI)"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: "gpfsug-discuss@spectrumscale.org"
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
Date: Wed, Dec 4, 2019 06:44
Thank you all for the answers. I'll try to recap my answers to your questions:

1. The purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space). Of course I know well the heterogeneity of this hypothetical system, so the GL2's NSDs would go to a special pool; but in the end I need a unique namespace for files.
2. I do not want to do the opposite (merging GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes.
3. All systems have proper licensing, of course; what does it mean that I could lose IBM support? If the support is for a failing disk drive, I do not think so; if the support is for a "strange" behaviour of GPFS, I can probably understand.
4. NSDs (in the NetApp system) are in their roles: what do you mean exactly? There are RAID sets attached to servers that are actually NSDs together with their attached LUNs.

Alvise

From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Lyle Gayne
Sent: Tuesday, December 3, 2019 8:30:31 PM
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

For:

- A NetApp system with hardware RAID
- SpectrumScale 4.2.3-13 running on top of the NetApp  < --- Are these NSD servers in their GPFS roles (where Scale "runs on top")?
- A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)

What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without losing, of course, the RecoveryGroup configuration, etc.

I'd like to ask the experts
1. whether it is feasible, considering the difference in the GPFS versions and the architecture differences (x86_64 vs. POWER)
2. if yes, whether anyone already did something like this and what is the best strategy suggested
3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure?

Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

2019-12-03 Thread Lyle Gayne
For:
 
- A NetApp system with hardware RAID
- SpectrumScale 4.2.3-13 running on top of the NetApp  < --- Are these NSD servers in their GPFS roles (where Scale "runs on top")?
- A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)

What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without losing, of course, the RecoveryGroup configuration, etc.

I'd like to ask the experts
1. whether it is feasible, considering the difference in the GPFS versions and the architecture differences (x86_64 vs. POWER)
2. if yes, whether anyone already did something like this and what is the best strategy suggested
3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure?
Some observations:
 
 
1) Why do you want to MERGE the GL2 into a single cluster with the rest of the cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes?
 
2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules.
 
3) Mixing x86 and Power, especially as NSD servers, should pose no issues.  Having them serve separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same file system's serving or failover scheme.  Mixing architectures as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details.
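As a quick sanity check on point 2, the relevant version levels can be read from each cluster before deciding (an editor's sketch; fs1 is a placeholder device name):

# Run in each cluster.  The cluster's minReleaseLevel and the file system
# format version both have to be acceptable to the older (4.2.3) side.
mmlsconfig minReleaseLevel
mmlsfs fs1 -V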
 
Lyle
- Original message -From: "Tomer Perry" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR clusterDate: Tue, Dec 3, 2019 10:03 AM Hi,Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well:"If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. "OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster.I would also suspect that the filesystem version on the ESS will be different.Regards,Tomer PerryScalable I/O Development (Spectrum Scale)email: t...@il.ibm.com1 Azrieli Center, Tel Aviv 67021, IsraelGlobal Tel:    +1 720 3422758Israel Tel:      +972 3 9188625Mobile:         +972 52 2554625From:        "Olaf Weiser" To:        gpfsug main discussion list Date:        03/12/2019 16:54Subject:        [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR clusterSent by:        gpfsug-discuss-boun...@spectrumscale.org
Hallo"merging" 2 different GPFS cluster into one .. is not possible ..for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... -  you can't preserve ESS's RG definitions...you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster ..  .. add the nodes to your existing cluster.. and then start configuring the RGsFrom:        "Dorigo Alvise (PSI)" To:        "gpfsug-discuss@spectrumscale.org" Date:        12/03/2019 09:35 AMSubject:        [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR clusterSent by:        gpfsug-discuss-boun...@spectrumscale.org
Hello everyone,

I have:
- A NetApp system with hardware RAID
- SpectrumScale 4.2.3-13 running on top of the NetApp
- A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)

What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without losing, of course, the RecoveryGroup configuration, etc.

I'd like to ask the experts
1. whether it is feasible, considering the difference in the GPFS versions and the architecture differences (x86_64 vs. POWER)
2. if yes, whether anyone already did something like this and what is the best strategy suggested
3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure?

Thank you very much,
Alvise Dorigo
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

2019-12-03 Thread Lyle Gayne
It's also the case that the recommended ESS config is to be its own cluster, remote mounted by any compute nodes which access the storage.
 
Lyle
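For readers weighing the remote-mount route, the setup has roughly the shape sketched below (cluster, node and filesystem names are invented; see the mmauth, mmremotecluster and mmremotefs documentation for the full procedure, including exchanging the keys generated with mmauth genkey):

# On the owning (ESS) cluster: authorize the accessing cluster and grant it
# access to the filesystem (public key files are exchanged out of band first).
mmauth add netapp.example -k /tmp/netapp.example_id_rsa.pub
mmauth grant netapp.example -f fs1

# On the accessing cluster: define the remote cluster and filesystem, then mount.
mmremotecluster add ess.example -n essio1,essio2 -k /tmp/ess.example_id_rsa.pub
mmremotefs add remotefs1 -f fs1 -C ess.example -T /gpfs/remotefs1
mmmount remotefs1 -a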
 
- Original message -From: "Marc A Kaplan" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR clusterDate: Tue, Dec 3, 2019 2:15 PM 
IF you have everything properly licensed and then you reconfigure... It may work okay... But then you may come up short if you ask for IBM support or service...So depending how much support you need or desire... Or take the easier and supported path... And probably accomplish most of what you need -- let each cluster be and remote mount onto clients which could be on any connected cluster.Jonathan Buzzard ---12/03/2019 10:04:46 AM---On 03/12/2019 14:54, Olaf Weiser wrote: > HalloFrom: Jonathan Buzzard To: "gpfsug-discuss@spectrumscale.org" Date: 12/03/2019 10:04 AMSubject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR clusterSent by: gpfsug-discuss-boun...@spectrumscale.org
On 03/12/2019 14:54, Olaf Weiser wrote:
> Hallo
> "merging" 2 different GPFS cluster into one .. is not possible ..
> for sure you can do "nested" mounts .. .but that's most likely not, what
> you want to do ..
>
> if you want to add a GL2 (or any other ESS) ..to an existing (other)
> cluster... -  you can't preserve ESS's RG definitions...
> you need to create the RGs after adding the IO-nodes to the existing
> cluster...
>
> so if you got a new ESS.. (no data on it) .. simply unconfigure cluster
> ..  .. add the nodes to your existing cluster.. and then start
> configuring the RGs
>
I was under the impression (from post by IBM employees on this list) that you are not allowed to mix GNR, ESS, DSS, classical GPFS, DDN GPFS etc. in the same cluster. Not a technical limitation but a licensing one.

JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
 


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] ESS - Considerations when adding NSD space?

2019-10-25 Thread Lyle Gayne
Including Wes and Rezaul for any comment they have.
 
Lyle
 
- Original message -From: "Alexander Wolf" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug-discuss@spectrumscale.orgCc: gpfsug-discuss@spectrumscale.orgSubject: [EXTERNAL] Re: [gpfsug-discuss] ESS - Considerations when adding NSD space?Date: Fri, Oct 25, 2019 1:45 PM 
Bob,
 
from what you describe I would assume that you now have "old" vdisks that span four enclosures and "new" vdisks that span the two new enclosures. So you are already unbalanced at the vdisk level. From a performance point of view I would guess the optimum would be to have the new NSDs be half the size of the old ones. But honestly I do not know how much of a difference it really makes.

Fred is right: if you can, you should always go for a homogeneous setup. On the other hand, if you can't, you can't.
 
 
Mit freundlichen Grüßen / Kind regards

Dr. Alexander Wolf-Reber
Spectrum Scale Release Lead Architect
Department M069 / Spectrum Scale Software Development
+49-160-90540880
a.wolf-re...@de.ibm.com

IBM Deutschland Research & Development GmbH / Chair of the Supervisory Board: Matthias Hartmann / Management: Dirk Wittkopp
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294
 
 
- Original message -From: "Frederick Stock" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug-discuss@spectrumscale.orgCc: gpfsug-discuss@spectrumscale.orgSubject: [EXTERNAL] Re: [gpfsug-discuss] ESS - Considerations when adding NSD space?Date: Thu, Oct 24, 2019 17:55 
Bob, as I understand it, having different-size NSDs is still not considered ideal, even for ESS. I had another customer recently add storage to an ESS system and they were advised to first check the size of their current vdisks and size the new vdisks to be the same.

Fred
__
Fred Stock | IBM Pittsburgh Lab | 720-430-8821 | sto...@us.ibm.com
 
 
- Original message -From: "Oesterlin, Robert" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: [EXTERNAL] [gpfsug-discuss] ESS - Considerations when adding NSD space?Date: Thu, Oct 24, 2019 11:34 AM 
We recently upgraded our GL4 to a GL6 (a trouble-free process, FYI, for those considering it). I now have 615T free (raw) in each of my recovery groups. I’d like to increase the size of one of the file systems (currently at 660T; I’d like to add 100T).
 
My first thought was going to be:
 
mmvdisk vdiskset define --vdisk-set fs1data1 --recovery-group rg_gssio1-hs,rg_gssio2-hs --set-size 50T --code 8+2p --block-size 4m --nsd-usage dataOnly --storage-pool data
mmvdisk vdiskset create --vdisk-set fs1data1
mmvdisk filesystem add --file-system fs1 --vdisk-set fs1data1
 
I know in the past use of mixed size NSDs was frowned upon, not sure on the ESS. 
 
The other approach would be to add two larger NSDs (the current ones are 330T) of 380T, migrate the data to the new ones using mmrestripefs, then delete the old ones. The other benefit of this process would be to have the file system data better balanced across all the storage enclosures.
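A rough sketch of that second approach with mmvdisk (an editor's sketch; the new and old vdisk-set names are made up, and the define parameters are copied from the plan above):

# Define, create and add the larger vdisk set.
mmvdisk vdiskset define --vdisk-set fs1big --recovery-group rg_gssio1-hs,rg_gssio2-hs \
    --set-size 380T --code 8+2p --block-size 4m --nsd-usage dataOnly --storage-pool data
mmvdisk vdiskset create --vdisk-set fs1big
mmvdisk filesystem add --file-system fs1 --vdisk-set fs1big

# Rebalance existing data across all NSDs (long-running and adds I/O load),
# then remove the old vdisk set; the delete step migrates remaining data off
# the old NSDs first.
mmrestripefs fs1 -b
mmvdisk filesystem delete --file-system fs1 --vdisk-set fsdata_old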
 
Any considerations before I do this?  Thoughts?
 
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmbuild problem

2019-05-16 Thread Lyle Gayne
And that assumes that you have license entitlement for that version.
 
Lyle
 
- Original message -From: "Anderson Ferreira Nobre" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug-discuss@spectrumscale.orgCc: gpfsug-discuss@spectrumscale.org, shane.gro...@inl.govSubject: [EXTERNAL] Re: [gpfsug-discuss] mmbuild problemDate: Thu, May 16, 2019 12:39 PM 
You can download from IBM's fixcentral. Assuming that is SLES for x86_64, here's the link:
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=Software%20defined%20storage=ibm/StorageSoftware/IBM+Spectrum+Scale=5.0.3=Linux+64-bit,x86_64=all
 
You will need a registered IBM user ID to be able to download.
 
Abraços / Regards / Saludos,
 
Anderson Nobre
Power and Storage Consultant
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
Phone: 55-19-2132-4317
E-mail: ano...@br.ibm.com
 
 
- Original message -From: "Ben G. Nickell" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc: "Shane T. Grover" Subject: Re: [gpfsug-discuss] [EXTERNAL] Re: mmbuild problemDate: Thu, May 16, 2019 1:20 PM 
Thanks for the quick reply Felipe,  and also for pointing me at the FAQ.I found the same.  The standard version of 5.2.0.3 built fine.  We apparently don't know how to get the advanced version, but I don't we are using that anyway, I imagine we could figure out how to get it if we do need it.  I just sent this a little too soon, sorry for the noise.--Ben Nickell - Idaho National LaboratoryHigh Performance Computing System AdministratorDesk: 208-526-4251        Mobile: 208-317-4259From: gpfsug-discuss-boun...@spectrumscale.org  on behalf of Felipe Knop Sent: Thursday, May 16, 2019 10:12 AMTo: gpfsug main discussion listSubject: [EXTERNAL] Re: [gpfsug-discuss] mmbuild problemBen,According to the FAQ (https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html) SLES 12 SP4 is only supported starting with Scale V5.0.2.3 . 12 SP4          4.12.14-95.3-default    4.12.14-95.3-default    From V4.2.3.13 in the 4.2 releaseFrom V5.0.2.3 or later in the 5.0 release        From V4.2.3.13 in the 4.2 releaseFrom V5.0.2.3 or later in the 5.0 releaseFelipeFelipe Knop k...@us.ibm.comGPFS Development and SecurityIBM SystemsIBM Building 0082455 South Rd, Poughkeepsie, NY 12601(845) 433-9314 T/L 293-9314[Inactive hide details for "Ben G. Nickell" ---05/16/2019 12:02:23 PM---First time poster, hopefully not  a simple RTFM question]"Ben G. Nickell" ---05/16/2019 12:02:23 PM---First time poster, hopefully not a simple RTFM question, I've done some rudimentary googling. I'mFrom: "Ben G. Nickell" To: "gpfsug-discuss@spectrumscale.org" Date: 05/16/2019 12:02 PMSubject: [EXTERNAL] [gpfsug-discuss] mmbuild problemSent by: gpfsug-discuss-boun...@spectrumscale.orgFirst time poster, hopefully not  a simple RTFM question, I've done some rudimentary googling.  I'm not the GPFS guy, butHaving a problem building Spectrum Scale 5.0.2.0 on Suse SLES SP4.I get the following errors.    Any ideas while our GPFS guy tries to get newer software?uname -aLinux hostname  4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux./mmbuildgpl --build-packagemmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019.Verifying Kernel Header... kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13) module include dir = /lib/modules/4.12.14-95.13-default/build/include module build dir   = /lib/modules/4.12.14-95.13-default/build kernel source dir  = /usr/src/linux-4.12.14-95.13/include Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/includeVerifying Compiler... make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ldVerifying rpmbuild...Verifying Additional System Headers... Verifying linux-glibc-devel is installed ...   Command: /bin/rpm -q linux-glibc-devel   The required package linux-glibc-devel is installedmake World ...Verifying that tools to build the portability layer existcpp presentgcc presentg++ presentld presentcd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto 

Re: [gpfsug-discuss] A net new cluster

2019-04-01 Thread Lyle Gayne

Yes, native GPFS access can be used by AFM, but only for shorter distances
(10s of miles, e.g.).  For intercontinental or cross-US distances, the
latency would be too high for that protocol so NFS would be recommended.
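Concretely, the transport is chosen per cache fileset through the afmTarget protocol prefix; a hedged illustration with invented names (the gpfs:// form assumes the home filesystem is already remote-mounted on the cache cluster's gateway nodes):

# NFS transport, the usual choice for high-latency WAN links:
mmcrfileset fs1 cache_nfs --inode-space new \
    -p afmMode=iw,afmTarget=nfs://homeserver/gpfs/homefs/fset1

# Native GPFS transport, for short distances:
mmcrfileset fs1 cache_gpfs --inode-space new \
    -p afmMode=iw,afmTarget=gpfs:///gpfs/homefs/fset2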

Lyle




From:   "Marc A Kaplan" 
To: gpfsug main discussion list 
Date:   03/29/2019 03:05 PM
Subject:Re: [gpfsug-discuss] A net new cluster
Sent by:gpfsug-discuss-boun...@spectrumscale.org



I don't know the particulars of the case in question, nor much about ESS
rules...
But for a vanilla Spectrum Scale cluster -.

1) There is nothing wrong or ill-advised about upgrading software and then
creating a new version 5.x file system... keeping any older file systems in
place.

2) I thought AFM was improved years ago to support GPFS native access --
need not go through NFS stack...?

Whereas you wrote:
 ... nor is it advisable to try to create a new pool or filesystem in same
cluster and then migrate (partially because migrating between filesystems
within a cluster with afm would require going through nfs stack afaik) ...





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss







___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] User Login Active Directory authentication on CES nodes with SMB protocol

2019-01-08 Thread Lyle Gayne

Adding Ingo Meents for response



From:   "Rob Logie" 
To: gpfsug-discuss@spectrumscale.org
Date:   01/08/2019 04:50 PM
Subject:[gpfsug-discuss] User Login Active Directory authentication on
CES nodes with SMB protocol
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi All
Is there a way to enable user-login Active Directory authentication on CES nodes with the SMB protocol that are joined to an AD domain? The AD authentication is working for access to the SMB shares, but not for user login authentication on the CES nodes.
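For what it's worth, the file-access authentication that already works for the shares can be inspected with the read-only command below; login to the protocol nodes' operating system is typically handled by the OS itself (SSSD or winbind plus PAM) rather than by the CES file authentication:

# Shows the AD configuration CES uses for SMB/NFS file access.  It does not
# control who may log in to the CES nodes themselves.
mmuserauth service list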


Thanks !





Regards,
Rob Logie
IT Specialist

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss







___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Problems with GPFS on windows 10, stuck at mount. Never mounts.

2018-11-06 Thread Lyle Gayne

Yes, Henrik.

For information on which OS levels are supported at which Spectrum Scale
release levels, you should always consult our Spectrum Scale FAQ.
This info is in Section 2 or 3 of the FAQ.

Thanks,
Lyle



From:   "Henrik Cednert (Filmlance)" 
To: gpfsug main discussion list 
Date:   11/06/2018 09:00 AM
Subject:Re: [gpfsug-discuss] Problems with GPFS on windows 10, stuck at
mount. Never mounts.
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Thanks to all of you that on list and off list have replied. So it seems
like we need to upgrade to V5 to be able to run Win10 clients. I have
started the discussion with DDN. Not sure what’s included in maintenance
and what’s not.

--
Henrik Cednert  /  + 46 704 71 89 54  /  CTO  /  Filmlance
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯

  On 6 Nov 2018, at 11:08, Daniel Kidger 
  wrote:

  Henrik,

  Note too that Spectrum Scale 4.1.x  being almost 4 years old is close
  to retirement :
  GA. 19-Jun-2015, 215-147
  EOM.  19-Jan-2018, 917-114
  EOS.   30-Apr-2019, 917-114

  ref.
  
https://www-01.ibm.com/software/support/lifecycleapp/PLCDetail.wss?q45=G222117W12805R88
  Daniel

  _
  Daniel Kidger
  IBM Technical Sales Specialist
  Spectrum Scale, Spectrum NAS and IBM Cloud Object Store

  +44-(0)7818 522 266
  daniel.kid...@uk.ibm.com



  - Original message -
  From: Vic Cornell 
  Sent by: gpfsug-discuss-boun...@spectrumscale.org
  To: gpfsug main discussion list 
  Cc:
  Subject: Re: [gpfsug-discuss] Problems with GPFS on windows 10, stuck
  at mount. Never mounts.
  Date: Tue, Nov 6, 2018 9:35 AM

  Hi Cedric,

  Welcome to the mailing list!

  Looking at the Spectrum Scale FAQ:
  
https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#windows

  Windows 10 support doesn't start until Spectrum Scale V5:

  " Starting with V5.0.2, IBM Spectrum Scale additionally supports
  Windows 10 (Pro and Enterprise editions) in both heterogeneous and
  homogeneous clusters. At this time, Secure Boot must be disabled on
  Windows 10 nodes for IBM Spectrum Scale to install and function."

  Looks like you have GPFS 4.1.0.4.

  If you want to upgrade to V5, please contact DDN support.

  I am also not sure if the DDN windows installer supports Win10 yet.

  (I work for DDN)

  Best Regards,

  Vic Cornell

On 5 Nov 2018, at 20:25, Henrik Cednert (Filmlance) <
henrik.cedn...@filmlance.se> wrote:

Hi there

The welcome mail said that a brief introduction might be in
order. So let’s start with that, just jump over this paragraph
if that’s not of interest. So, Henrik is the name. I’m the CTO
at a Swedish film production company with our own internal post
production operation. At the heart of that we have a 1.8PB DDN
MediaScaler with GPFS in a mixed macOS and Windows environment.
The mac clients just use NFS but the Windows clients use the
native GPFS client. We have a couple of successful windows
deployments.

But… Now when we’re replacing one of those windows clients I’m
running into some seriously frustrating ball busting
shenanigans. Basically this client gets stuck on (from
mmfs.log.latest) "PM: mounting /dev/ddnnas0” . Nothing more
happens. Just stuck and it never mounts.

I have reinstalled it multiple times with full clean between,
removed cygwin and the root user account and everything. I have
verified that keyless ssh works between server and all nodes,
including this node. With my limited experience I don’t find
enough to go on in the logs nor windows event viewer. I’m
honestly totally stuck.

I’m using the same version on this clients as on the others.
DDN have their own installer which installs cygwin and the gpfs
packages. Have worked fine on other clients but not on this
sucker. =(

Versions involved:
Windows 10 Enterprise 2016 LTSB
IBM  GPFS Express Edition 4.1.0.4
IBM  GPFS Express Edition License and Prerequisites 4.1
IBM  GPFS GSKit  8.0.0.32

Below is the log from the client. I don’ find much useful at
the server, point me to specific log file if you have a good
idea of where I can find errors of this.

Cheers and many thanks in advance for helping me out here. I’m
all ears.


root@M5-CLIPSTER02 ~
$ cat /var/adm/ras/mmfs.log.latest
Mon, Nov  5, 2018  8:39:18 PM: runmmfs 

Re: [gpfsug-discuss] Problems with GPFS on windows 10, stuck at mount. Never mounts.

2018-11-06 Thread Lyle Gayne

Vipul or Heather should be able to assist.

Thanks,
Lyle



From:   "Henrik Cednert (Filmlance)" 
To: "gpfsug-discuss@spectrumscale.org"

Date:   11/06/2018 07:00 AM
Subject:[gpfsug-discuss] Problems with GPFS on windows 10,  stuck at
mount. Never mounts.
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi there

The welcome mail said that a brief introduction might be in order. So let’s
start with that, just jump over this paragraph if that’s not of interest.
So, Henrik is the name. I’m the CTO at a Swedish film production company
with our own internal post production operation. At the heart of that we
have a 1.8PB DDN MediaScaler with GPFS in a mixed macOS and Windows
environment. The mac clients just use NFS but the Windows clients use the
native GPFS client. We have a couple of successful windows deployments.

But… Now when we’re replacing one of those windows clients I’m running into
some seriously frustrating ball busting shenanigans. Basically this client
gets stuck on (from mmfs.log.latest) "PM: mounting /dev/ddnnas0” . Nothing
more happens. Just stuck and it never mounts.

I have reinstalled it multiple times with full clean between, removed
cygwin and the root user account and everything. I have verified that
keyless ssh works between server and all nodes, including this node. With
my limited experience I don’t find enough to go on in the logs nor windows
event viewer. I’m honestly totally stuck.

I’m using the same version on this clients as on the others. DDN have their
own installer which installs cygwin and the gpfs packages. Have worked fine
on other clients but not on this sucker. =(

Versions involved:
Windows 10 Enterprise 2016 LTSB
IBM  GPFS Express Edition 4.1.0.4
IBM  GPFS Express Edition License and Prerequisites 4.1
IBM  GPFS GSKit  8.0.0.32

Below is the log from the client. I don’ find much useful at the server,
point me to specific log file if you have a good idea of where I can find
errors of this.

Cheers and many thanks in advance for helping me out here. I’m all ears.


root@M5-CLIPSTER02 ~
$ cat /var/adm/ras/mmfs.log.latest
Mon, Nov  5, 2018  8:39:18 PM: runmmfs starting
Removing old /var/adm/ras/mmfs.log.* files:
mmtrace: The tracefmt.exe or tracelog.exe command can not be found.
mmtrace: 6027-1639 Command failed. Examine previous error messages to
determine cause.
Mon Nov 05 20:39:50.045 2018: GPFS: 6027-1550 [W] Error: Unable to
establish a session with an Active Directory server.  ID remapping via
Microsoft Identity Management for Unix will be unavailable.
Mon Nov 05 20:39:50.046 2018: [W] This node does not have a valid extended
license
Mon Nov 05 20:39:50.047 2018: GPFS: 6027-310 [I] mmfsd initializing.
{Version: 4.1.0.4   Built: Oct 28 2014 16:28:19} ...
Mon Nov 05 20:39:50.048 2018: [I] Cleaning old shared memory ...
Mon Nov 05 20:39:50.049 2018: [I] First pass parsing mmfs.cfg ...
Mon Nov 05 20:39:51.001 2018: [I] Enabled automated deadlock detection.
Mon Nov 05 20:39:51.002 2018: [I] Enabled automated deadlock debug data
collection.
Mon Nov 05 20:39:51.003 2018: [I] Initializing the main process ...
Mon Nov 05 20:39:51.004 2018: [I] Second pass parsing mmfs.cfg ...
Mon Nov 05 20:39:51.005 2018: [I] Initializing the page pool ...
Mon Nov 05 20:40:00.003 2018: [I] Initializing the mailbox message
system ...
Mon Nov 05 20:40:00.004 2018: [I] Initializing encryption ...
Mon Nov 05 20:40:00.005 2018: [I] Initializing the thread system ...
Mon Nov 05 20:40:00.006 2018: [I] Creating threads ...
Mon Nov 05 20:40:00.007 2018: [I] Initializing inter-node communication ...
Mon Nov 05 20:40:00.008 2018: [I] Creating the main SDR server object ...
Mon Nov 05 20:40:00.009 2018: [I] Initializing the sdrServ library ...
Mon Nov 05 20:40:00.010 2018: [I] Initializing the ccrServ library ...
Mon Nov 05 20:40:00.011 2018: [I] Initializing the cluster manager ...
Mon Nov 05 20:40:25.016 2018: [I] Initializing the token manager ...
Mon Nov 05 20:40:25.017 2018: [I] Initializing network shared disks ...
Mon Nov 05 20:41:06.001 2018: [I] Start the ccrServ ...
Mon Nov 05 20:41:07.008 2018: GPFS: 6027-1710 [N] Connecting to
192.168.45.200 DDN-0-0-gpfs 
Mon Nov 05 20:41:07.009 2018: GPFS: 6027-1711 [I] Connected to
192.168.45.200 DDN-0-0-gpfs 
Mon Nov 05 20:41:07.010 2018: GPFS: 6027-2750 [I] Node 192.168.45.200
(DDN-0-0-gpfs) is now the Group Leader.
Mon Nov 05 20:41:08.000 2018: GPFS: 6027-300 [N] mmfsd ready
Mon, Nov  5, 2018  8:41:10 PM: mmcommon mmfsup invoked. Parameters:
192.168.45.144 192.168.45.200 all
Mon, Nov  5, 2018  8:41:29 PM: mounting /dev/ddnnas0




--
Henrik Cednert  /  + 46 704 71 89 54  /  CTO  /  Filmlance
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯




Re: [gpfsug-discuss] GPFS support for latest stable release of OpenStack (called Queens https://www.openstack.org/software/queens/)

2018-06-14 Thread Lyle Gayne

Brian is probably best able to answer this question.

Lyle




From:   "Prafulla" 
To: 
Date:   06/14/2018 11:01 AM
Subject:[gpfsug-discuss] GPFS support for latest stable release of
OpenStack (called Queens
https://www.openstack.org/software/queens/)
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hello Guys,

Greetings!

Could you please help me figure out the level of GPFS's support for the latest stable release of OpenStack, which is called Queens?

I have a few queries around that:

1. Does GPFS support the Queens release for the OpenStack Cinder, Glance, Manila and Swift services? If yes, in which GPFS release(s)?
2. If I need to integrate the GPFS GUI with OpenStack Horizon (the dashboard service), can it be done? If yes, how can I do that? Does the GPFS GUI provide any APIs which could be used for this integration?

Guys, I request you kindly help me find answers to these queries. Your help will be much appreciated.

Thanks in advance!

Regards,
pR
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] SOBAR restore

2018-02-21 Thread Lyle Gayne

April Brown should be able to assist.

Lyle




From:   "Juan Ignacio Tellez Vilchis" 
To: gpfsug-discuss@spectrumscale.org
Date:   02/21/2018 04:21 PM
Subject:[gpfsug-discuss] SOBAR restore
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hello,

Is there anybody who has some experience with GPFS filesystem restore using SOBAR?

I already backed the filesystem up using SOBAR, but I am having some trouble with the dsmc restore command.

Any help would be appreciated!
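Not from the original thread, but the rough shape of a SOBAR restore is sketched below as an outline (names and paths are invented; the exact sequence is in the Scale Out Backup and Restore section of the Spectrum Scale documentation and should be followed over this sketch):

# 1. Restore the image files that mmimgbackup sent to the Spectrum Protect
#    server into a work directory (paths are hypothetical):
dsmc restore "/sobar/images/*" /sobar/restore/

# 2. Recreate the file system and its configuration from the file saved by
#    mmbackupconfig (mmrestoreconfig can also be run in query mode first to
#    generate the mmcrfs/NSD details):
mmrestoreconfig fs1 -i /sobar/restore/fs1.config

# 3. Rebuild the namespace from the SOBAR image, then let HSM recall file
#    data on demand (or pre-stage it with dsmrecall):
mmimgrestore fs1 /sobar/restore/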

Juan Ignacio Tellez Vilchis
Storage Consultant
Lab. Services
IBM Systems Hardware






Phone: 52-55-5270-3218 | Mobile: 52-55-10160692
E-mail: juantel...@mx1.ibm.com
Find me on LinkedIn: http://mx.linkedin.com/in/Ignaciotellez1 and within IBM on IBM Connections:
https://w3-connections.ibm.com/profiles/html/profileView.do?key=2ce9da3f-33ae-4262-9e22-50433170ea46
Alfonso Napoles Gandara 3111, Mexico City, DIF 01210, Mexico

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss