Re: [gpfsug-discuss] Using VMs as quorum / admin nodes in a GPFS infiniband cluster

2021-06-10 Thread Luis Bolinches

Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Luis Bolinches

Re: [gpfsug-discuss] Poor client performance with high cpu usage of mmfsd process

2020-11-12 Thread Luis Bolinches

Re: [gpfsug-discuss] Alternative to Scale S3 API.

2020-10-28 Thread Luis Bolinches

Re: [gpfsug-discuss] Alternative to Scale S3 API.

2020-10-28 Thread Luis Bolinches

Re: [gpfsug-discuss] Services on DSS/ESS nodes

2020-10-04 Thread Luis Bolinches
On 3/10/2020 12:19, Luis Bolinches wrote: >> Are you mixing ESS and DSS in the same cluster? Or are you only running DSS? > Only running DSS. We are too far down the rabbit hole to ever switch to ESS now. >> Mixing DSS and ESS in the same cluster is not a supported configuration.

Re: [gpfsug-discuss] Services on DSS/ESS nodes

2020-10-03 Thread Luis Bolinches
Are you mixing ESS and DSS in the same cluster, or are you only running DSS? https://www.ibm.com/support/knowledgecenter/SSYSP8/gnrfaq.html?view=kc#supportqs__building Mixing DSS and ESS in the same cluster is not a supported configuration. You really need to talk with Lenovo, as it is your

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Luis Bolinches

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Luis Bolinches

Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average

2020-06-06 Thread Luis Bolinches

Re: [gpfsug-discuss] Read-only mount option for GPFS version 4.2.3.19

2020-03-04 Thread Luis Bolinches
Hi. From phone, so typos are expected. You may want to look into remote mounts and export the FS only as read-only (ro) to the client cluster. -- Cheers > On 4. Mar 2020, at 12.05, Agostino Funel wrote: > Hi, we have a GPFS cluster version 4.2.3.19. We have seen in the official
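A minimal sketch of that suggestion using remote cluster mounts; the cluster and filesystem names are illustrative, and the usual remote-mount setup (key exchange, mmremotecluster/mmremotefs definitions) is assumed to be in place already:

    # On the cluster that owns the filesystem: grant the client cluster read-only access
    mmauth grant clientcluster.example.com -f fs1 -a ro
    # Verify what each remote cluster is allowed to mount and with which access
    mmauth show all

The client cluster then mounts fs1 through its remote filesystem definition, and writes are refused regardless of the mount options used on the client side.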

Re: [gpfsug-discuss] Spectrum Scale Ganesha NFS multi threaded AFM?

2020-02-21 Thread Luis Bolinches
Hi. While I agree with what was already mentioned here, and it is really spot on, I think Andi missed revealing what the latency between the sites is. Latency is as key as, if not more than, your pipe link speed to throughput results. -- Cheers > On 22. Feb 2020, at 3.08, Andrew Beattie wrote: > Andi,
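A worked example of why (illustrative numbers): a single stream with a 16 MiB effective window over a 50 ms round-trip link cannot exceed roughly 16 MiB / 0.05 s, about 335 MB/s or 2.7 Gbit/s, even if the physical pipe is 10 or 100 Gbit/s. Halving the RTT doubles that ceiling; adding raw link speed does not.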

Re: [gpfsug-discuss] Odd behaviour with regards to reported free space

2020-02-18 Thread Luis Bolinches
Hi. Do you have snapshots?
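A quick way to check that suspicion, assuming a filesystem named fs1:

    # List snapshots and the storage each one still holds
    mmlssnapshot fs1 -d

Blocks still referenced by snapshots count as used space, so free space can look oddly low, or fail to come back after deletes, while snapshots exist.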

Re: [gpfsug-discuss] Compression question

2019-11-28 Thread Luis Bolinches

Re: [gpfsug-discuss] Compression question

2019-11-28 Thread Luis Bolinches

Re: [gpfsug-discuss] Compression question

2019-11-28 Thread Luis Bolinches
Hi. Same principle, COW (copy-on-write): the data blocks do not get modified.

Re: [gpfsug-discuss] Compression question

2019-11-28 Thread Luis Bolinches

Re: [gpfsug-discuss] Filesystem access issues via CES NFS

2019-10-23 Thread Luis Bolinches

[gpfsug-discuss] Spectrum Scale Erasure Code Edition (ECE) RedPaper Draft is public now

2019-10-16 Thread Luis Bolinches
http://www.redbooks.ibm.com/abstracts/redp5557.html?Open

Re: [gpfsug-discuss] Backup question

2019-08-29 Thread Luis Bolinches
You can back up GPFS with basically anything that can read a POSIX filesystem. Or do you mean mmbackup integration? -- Cheers > On 29 Aug 2019, at 17.09, craig.ab...@gmfinancial.com wrote: > Are there any other options to back up GPFS other than Spectrum Protect?
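A minimal sketch of both routes; the server name, node class, and snapshot name are illustrative assumptions:

    # mmbackup driving Spectrum Protect, incremental, spread across a node class
    mmbackup /gpfs/fs1 -t incremental --tsm-servers TSMSERVER1 -N nsdnodes
    # Or any POSIX-aware tool, e.g. rsync reading a global snapshot for a stable view
    rsync -a /gpfs/fs1/.snapshots/nightly/ /backup/target/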

Re: [gpfsug-discuss] Building GPFS filesystem system data pool on shared nothing NVMe drives

2019-07-30 Thread Luis Bolinches
US to follow up, I am pretty sure we can arrange that too.

Re: [gpfsug-discuss] Building GPFS filesystem system data pool on shared nothing NVMe drives

2019-07-29 Thread Luis Bolinches
Hi, from phone so sorry for typos. I really think you should look into Spectrum Scale Erasure Code Edition (ECE) for this. Sure, you could do RAID on each node as you mention here, but that sounds like a lot of wasted storage capacity to me. Not to forget you get other goodies like end-to-
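Rough capacity arithmetic behind that point (illustrative numbers, not a sizing): with RAID-10 inside every node you keep about 50% of raw capacity, while an 8+2p erasure code spread across the nodes keeps about 80%. On six nodes with 100 TB raw each, that is roughly 300 TB usable versus 480 TB usable from the same hardware, before any filesystem overhead.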

Re: [gpfsug-discuss] Steps for gracefully handling bandwidth reduction during network maintenance

2019-06-17 Thread Luis Bolinches
Hi. Writing from phone so excuse the typos. Assuming you have a system pool (metadata) and some other pool(s), you can set limits on the maintenance class, which you have done already, and on the other class, which would affect all the other ops. You can add those per node or node class, which can be matched to what
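A minimal sketch of such a limit; the filesystem name and IOPS figures are illustrative:

    # Throttle maintenance traffic (restripe, rebalance, policy scans) on all pools,
    # leave normal application I/O unlimited
    mmchqos fs1 --enable pool=*,maintenance=500IOPS,other=unlimited
    # Watch the allowances and the actual consumption
    mmlsqos fs1 --seconds 60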

Re: [gpfsug-discuss] Steps for gracefully handling bandwidth reduction during network maintenance

2019-06-17 Thread Luis Bolinches
Hi I would really look into QoS instead. -- Cheers > On 17 Jun 2019, at 19.33, Alex Chekholko wrote: > > Hi Chris, > > I think the next thing to double-check is when the maxMBpS change takes > effect. You may need to restart the nsds. Otherwise I think your plan is > sound. > >
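One hedged way to see whether a maxMBpS change has actually reached a given daemon, without guessing about restarts:

    # mmlsconfig shows the configured value; mmdiag --config shows what the running mmfsd is using
    mmlsconfig maxMBpS
    mmdiag --config | grep -i maxmbps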

Re: [gpfsug-discuss] Can't take snapshots while re-striping

2018-10-18 Thread Luis Bolinches
And use QoS: less aggressive during peaks, more on valleys, if your workload allows it.
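A sketch of time-varying QoS limits driven from cron; the filesystem name and IOPS values are illustrative:

    # Loosen the maintenance allowance overnight, tighten it for the working day
    0 22 * * * /usr/lpp/mmfs/bin/mmchqos fs1 --enable pool=*,maintenance=2000IOPS,other=unlimited
    0 6 * * * /usr/lpp/mmfs/bin/mmchqos fs1 --enable pool=*,maintenance=300IOPS,other=unlimited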

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Luis Bolinches

Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above?

2018-05-10 Thread Luis Bolinches
https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest By reading table 30, none at this point. Thanks.

Re: [gpfsug-discuss] Will GPFS recognize re-sized LUNs?

2018-04-26 Thread Luis Bolinches
Hi. You knew the answer; it is still no. https://www.mail-archive.com/gpfsug-discuss@spectrumscale.org/msg02249.html

Re: [gpfsug-discuss] Preferred NSD

2018-03-14 Thread Luis Bolinches

Re: [gpfsug-discuss] Odd behavior with cat followed by grep.

2018-02-14 Thread Luis Bolinches
Hi. Not going to say much about DDN setups, but the first thing that makes my eyes blurry a bit is minReleaseLevel 4.2.0.1 when you mention your whole cluster is already on 4.2.3.
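A hedged sketch of closing that gap, assuming every node (and any remote cluster that mounts the filesystems) really is on 4.2.3 already; the step is one-way, so check first:

    # Show the current cluster-wide minimum release level
    mmlsconfig minReleaseLevel
    # Raise it to match the installed code level
    mmchconfig release=LATEST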

Re: [gpfsug-discuss] Odd behavior with cat followed by grep.

2018-02-13 Thread Luis Bolinches
Sorry, with cat:
[root@specscale01 IBM_REPO]# cp test a
[root@specscale01 IBM_REPO]# cat a a a a > test && grep ATAG test | wc -l && sleep 4 && grep ATAG test | wc -l
0
0

Re: [gpfsug-discuss] Odd behavior with cat followed by grep.

2018-02-13 Thread Luis Bolinches
of 3 nodes on KVM:
[root@specscale01 IBM_REPO]# echo "a a a a a a a a a a" > test && grep ATAG test | wc -l && sleep 4 && grep ATAG test | wc -l
0
0
[root@specscale01 IBM_REPO]#

Re: [gpfsug-discuss] storage-based replication for Spectrum Scale

2018-01-23 Thread Luis Bolinches
Hi. They are not even open to a routed L3 network? How do they talk between the DCs today, then? I would really go with L3 and AFM here if the sole purpose is to have a DR.

Re: [gpfsug-discuss] Online data migration tool

2017-12-01 Thread Luis Bolinches
For HANA 2.0 only SP1 and SP2 are supported.

Re: [gpfsug-discuss] GPFS for aarch64?

2017-06-08 Thread Luis Bolinches
And Linux on z/VM. If interested, feel free to open an RFE. -- Cheers > On 8 Jun 2017, at 12.46, Andrew Beattie wrote: > Philipp, not to my knowledge. AIX, Linux on x86 (RHEL / SUSE / Ubuntu), Linux on Power (RHEL / SUSE), Windows are the current

Re: [gpfsug-discuss] mmbackup with fileset : scope errors

2017-05-18 Thread Luis Bolinches
fileset is not suitable for fileset level backup. exit 1. Will post the outcome. Jaime. Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>: > Quoting "Luis Bolinches" <luis.bolinc...@fi.ibm.com>: >> Hi, have you tr
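For reference, a hedged sketch of backing up a single independent fileset with mmbackup; the paths and server name are illustrative:

    # Point mmbackup at the fileset junction and limit the scan to its inode space
    mmbackup /gpfs/fs1/myfileset --scope inodespace -t incremental --tsm-servers TSMSERVER1

Dependent filesets share their parent's inode space, which is the usual reason the fileset-level scope check fails.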

Re: [gpfsug-discuss] Used virtualization technologies for GPFS/Spectrum Scale

2017-04-24 Thread Luis Bolinches
out when you go to replace the DIMM? You able to hot-swap the memory without anything losing its mind? (I know this is possible in the Z/series world, but those usually have at least 2-3 more zeros in the price tag).

Re: [gpfsug-discuss] -Y option for many commands, precious few officially!

2017-03-28 Thread Luis Bolinches
Hi. While I understand the frustration over time that could be used otherwise, depending on what you are planning with script wrapping I would recommend you seriously take a look at the REST API.
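A minimal sketch of what that looks like; the host name and credentials are illustrative, and the GUI node is assumed to be serving the management API:

    # Query the filesystems through the Scale management REST API (v2) and get JSON back,
    # instead of scraping -Y output from wrapped commands
    curl -k -u admin:Secret https://gui-node.example.com:443/scalemgmt/v2/filesystems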

Re: [gpfsug-discuss] default inode size

2017-03-15 Thread Luis Bolinches
Hi. My 2 cents. Before even thinking too much down that path I would check the following:
- What is the physical sector size on those SSDs? If they are already 4K you won't "save" anything.
- Do you use small (up to ~3.5 KB) files? If so I would still keep 4K inodes.
- Check if 512 can hold the NSDv2 format; from top
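A quick hedged check of what a filesystem currently uses (fs1 is illustrative); the inode size is fixed at creation time:

    # Show the inode size in bytes for an existing filesystem
    mmlsfs fs1 -i
    # At creation time: mmcrfs ... -i 4096 (or 1024 / 512)

With 4 KiB inodes, files small enough to fit in the inode (very roughly up to ~3.5 KB, depending on extended attributes) need no separate data block.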

Re: [gpfsug-discuss] Tracking deleted files

2017-02-27 Thread Luis Bolinches
I would say the snapshot is not a light process; on heavy I/O and hundreds of nodes it is pretty hard actually. The quiescence needed to take the actual snapshot is not to be taken lightly under such conditions. Furthermore, the deletion of snapshots can be really fun too when it comes to how heavy it

Re: [gpfsug-discuss] Q: backup with dsmc & .snapshots directory

2017-02-27 Thread Luis Bolinches
I usually exclude them. Otherwise you will end up with lots of data on the TSM backend. -- Cheers > On 27 Feb 2017, at 12.23, Hans-Joachim Ehlers wrote: > > Hi, > > short question: if we are using the native TSM dsmc Client, should we exclude > the "./.snapshots/."
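A hedged sketch of the exclude; the path pattern is illustrative and belongs in the dsmc include-exclude (or client options) file:

    # In the include-exclude file:
    #   EXCLUDE.DIR /gpfs/*/.snapshots
    # Then confirm what the client will skip:
    dsmc query inclexcl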

Re: [gpfsug-discuss] translating /dev device into nsd name

2016-12-16 Thread Luis Bolinches
Hi. The ts* fear is a good one; they are internal commands, bla bla bla, you know that. Have you tried mmlsnsd -X?
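For completeness, a short sketch of the two supported views:

    # Map NSD names to the disk device name known to the local node
    mmlsnsd -m
    # Extended information, including device type and NSD volume ID
    mmlsnsd -X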

Re: [gpfsug-discuss] Tiers

2016-12-15 Thread Luis Bolinches
Hi. I see Kevin's setup regularly, with or without tape depending on the religious beliefs of the client.

Re: [gpfsug-discuss] Upgrading kernel on RHEL

2016-11-29 Thread Luis Bolinches
This is the first I've heard of this max_sectors_kb issue, has it already been discussed on the list? Can you point me to any more info?

Re: [gpfsug-discuss] Upgrading kernel on RHEL

2016-11-29 Thread Luis Bolinches

Re: [gpfsug-discuss] Upgrading kernel on RHEL

2016-11-29 Thread Luis Bolinches
My 2 cents, and I am sure different people have different opinions. New kernels might be problematic. I just got my fun with the RHEL 7.3 kernel and max_sectors_kb for new filesystems. It is something that will come to the FAQ soon; it is already in draft, not yet public. I guess whatever you do, get a TEST cluster
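A hedged way to see the limit in question on a given disk (the device name is illustrative):

    # Current cap on a single I/O request, and the hardware maximum, in KiB
    cat /sys/block/sdb/queue/max_sectors_kb
    cat /sys/block/sdb/queue/max_hw_sectors_kb

Requests larger than max_sectors_kb get split by the block layer, which is worth checking when the filesystem block size is larger than the cap.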

Re: [gpfsug-discuss] Blocksize

2016-09-23 Thread Luis Bolinches
> There could be a great reason NOT to use 128K metadata block size, but I don't know what it is. I'd be happy to be corrected about this if it's out of whack. > -- Stephen >> On Sep 22, 2016, at 3:37 PM, Luis Bolinches <luis.bolinc..