On 3/10/2020 12:19, Luis Bolinches wrote:
>> Are you mixing ESS and DSS in the same cluster? Or are you only
>> running DSS?
>
> Only running DSS. We are too far down the rabbit hole to ever switch to
> ESS now.
>
>> Mixing DSS and ESS in the same cluster is not a supp
Are you mixing ESS and DSS in the same cluster? Or are you only running DSS?
https://www.ibm.com/support/knowledgecenter/SSYSP8/gnrfaq.html?view=kc#supportqs__building
Mixing DSS and ESS in the same cluster is not a supported configuration.
You really need to talk with Lenovo, as it is your
Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations / Salutacions
Luis Bolinches
Consultant IT Specialist
IBM Spectrum Scale development
ESS & client adoption teams
Mobile Phone: +358503112585
https://www.youracclaim.com/user/luis-bolinches
Ab IBM Finland Oy
Laajalahdentie 23
00330 Helsinki
Uusimaa - Finland "If you always give you will always have" -- Anonymous
Hi
From my phone, so typos are expected.
You may want to look into remote mounts and export the filesystem only as read-only (ro)
to the client cluster.
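A read-only remote (multi-cluster) mount can be sketched roughly as below; the cluster names, filesystem name, key files and contact nodes are all placeholders, so check the syntax against your release:

```shell
# Owning (storage) cluster: authorize the client cluster, read-only only.
mmauth add client.example.com -k client_id_rsa.pub
mmauth grant client.example.com -f gpfs0 -a ro

# Client cluster: register the owning cluster, then the remote filesystem.
mmremotecluster add storage.example.com -n nsd01,nsd02 -k storage_id_rsa.pub
mmremotefs add rgpfs0 -f gpfs0 -C storage.example.com -T /gpfs/rgpfs0
mmmount rgpfs0 -a
```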
--
Cheers
> On 4. Mar 2020, at 12.05, Agostino Funel wrote:
>
> Hi,
>
> we have a GPFS cluster version 4.2.3.19. We have seen in the official
>
Hi
While I agree with what was already mentioned here, and it is really spot on, I
think Andi did not reveal the latency between the sites. For throughput results,
latency is as key as, if not more key than, your link speed.
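To illustrate, a back-of-the-envelope bandwidth-delay calculation; the figures are made up (a 10 Gbit/s link at 20 ms RTT, and a single stream limited to a 1 MiB window):

```shell
# Bytes that must be in flight to fill a 10 Gbit/s link at 20 ms RTT.
awk 'BEGIN { printf "%.0f\n", 10e9 / 8 * 0.020 }'          # 25000000 bytes
# Throughput a single stream with a fixed 1 MiB window actually allows.
awk 'BEGIN { printf "%.1f\n", 1048576 * 8 / 0.020 / 1e6 }' # 419.4 Mbit/s
```

So on a high-latency link even a "fat" pipe is wasted unless windows (or stream counts) scale with the RTT.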
--
Cheers
> On 22. Feb 2020, at 3.08, Andrew Beattie wrote:
>
> Andi,
Hi
Do you have snapshots?
Hi
Same principle: copy-on-write (COW). The data blocks do not get modified.
http://www.redbooks.ibm.com/abstracts/redp5557.html?Open
You can backup GPFS with basically anything that can read a POSIX filesystem.
Do you refer to the mmbackup integration?
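If mmbackup is what you mean, a minimal invocation sketch; the filesystem path, node list and server name are made up:

```shell
# Incremental mmbackup of one filesystem to a Spectrum Protect server,
# fanning the scan out over two nodes.
mmbackup /gpfs/gpfs0 -t incremental -N nsd01,nsd02 --tsm-servers TSMSERVER1
```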
--
Cheers
> On 29 Aug 2019, at 17.09, craig.ab...@gmfinancial.com wrote:
>
>
>
> Are there any other options to backup up GPFS other that Spectrum Protect ?
>
>
US to follow up I am pretty sure we can arrange that too.
Hi, from phone so sorry for typos.
I really think you should look into Spectrum Scale Erasure Code Edition (ECE)
for this.
Sure, you could do RAID on each node as you mention here, but that sounds like
a lot of wasted storage capacity to me. Not to forget you get other goodies
like end to
Hi
Writing from phone so excuse the typos.
Assuming you have a system pool (metadata) and some other pool(s), you can
set limits on the maintenance class (which you have done already) and on the other
class, which would affect all the other ops. You can add those per node or
node class, which can be matched to what
Hi
I would really look into QoS instead.
--
Cheers
> On 17 Jun 2019, at 19.33, Alex Chekholko wrote:
>
> Hi Chris,
>
> I think the next thing to double-check is when the maxMBpS change takes
> effect. You may need to restart the nsds. Otherwise I think your plan is
> sound.
>
>
And use QoS.
Less aggressive during peaks, more in the valleys, if your workload allows it.
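A sketch of that peak/valley throttling; the filesystem name and IOPS figure are made up, so tune them to your own measurements and check the mmchqos syntax for your release:

```shell
# Peak hours: cap maintenance traffic (restripes, backup scans, ...).
mmchqos gpfs0 --enable pool=*,maintenance=300IOPS,other=unlimited
# Off-peak (e.g. from cron): open it up again.
mmchqos gpfs0 --enable pool=*,maintenance=unlimited
# Watch what QoS is actually doing.
mmlsqos gpfs0 --seconds 60
```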
https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest
By reading table 30, none at this point
Thanks
Hi
You knew the answer; it is still no.
https://www.mail-archive.com/gpfsug-discuss@spectrumscale.org/msg02249.html
Hi
I am not going to say much about DDN setups, but the first thing that makes my
eyes blurry a bit is
minReleaseLevel 4.2.0.1
when you mention your whole cluster is already on 4.2.3.
Sorry
With cat:
[root@specscale01 IBM_REPO]# cp test a
[root@specscale01 IBM_REPO]# cat a a a a > test && grep ATAG test | wc -l && sleep 4 && grep ATAG test | wc -l
0
0
of 3 nodes on KVM
[root@specscale01 IBM_REPO]# echo "a a a a a a a a a a" > test && grep ATAG test | wc -l && sleep 4 && grep ATAG test | wc -l
0
0
[root@specscale01 IBM_REPO]#
Hi
They are not even open to a routed L3 network? How do they talk between DCs
today, then?
I would really go with L3 and AFM here if the sole purpose is to have a DR.
For HANA 2.0, only SP1 and SP2 are supported.
And Linux on z/VM.
If interested, feel free to open an RFE.
--
Cheers
> On 8 Jun 2017, at 12.46, Andrew Beattie wrote:
>
> Philipp,
>
> Not to my knowledge,
>
> AIX
> Linux on x86 ( RHEL / SUSE / Ubuntu)
> Linux on Power (RHEL / SUSE)
> Windows
>
> are the current
fileset is not suitable for fileset level backup. exit 1. Will post the outcome. Jaime
Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:
> Quoting "Luis Bolinches" <luis.bolinc...@fi.ibm.com>:
>> Hi
>> have you tr
out when you go to replace the DIMM? You able to hot-swap the memory
without anything losing its mind? (I know this is possible in the Z/series
world, but those usually have at least 2-3 more zeros in the price tag).
Hi
While I understand the frustration over time that could be used otherwise, depending on what you are planning with the script wrapping, I would recommend you seriously take a look at the REST API.
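As a taste of what that replaces, a query against the Scale management REST API instead of wrapping mm* output; the host, credentials and endpoint are illustrative, the GUI node serves the API, and -k is only there because the certificate is often self-signed:

```shell
# List block-size info for one filesystem via the scalemgmt REST API.
curl -k -u admin:PASSWORD \
  "https://gui01:443/scalemgmt/v2/filesystems/gpfs0?fields=block"
```

The response is JSON, which is far easier to parse reliably than mm* command output.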
Hi
My 2 cents. Before even thinking too much down that path I would check the following:
- What is the physical sector size on those SSDs? If they are already 4K you won't "save" anything.
- Do you use small (<3.5 KB) files? If so I would still keep 4K inodes.
- Check if 512 can hold the NSDv2 format, from top
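A toy version of the data-in-inode check behind the second point; the overhead figure is an assumption for illustration, not a documented constant (the real limit depends on release and on extended attributes in the inode):

```shell
# Does a file of a given size fit entirely in a 4K inode (data-in-inode)?
INODE_SIZE=4096
INODE_OVERHEAD=512   # assumed; real overhead varies
fits_in_inode() {
  [ "$1" -le $((INODE_SIZE - INODE_OVERHEAD)) ] && echo yes || echo no
}
fits_in_inode 3000   # yes -> served straight from the inode
fits_in_inode 3800   # no  -> needs a data block
```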
I would say a snapshot is not a light process; under heavy I/O and with hundreds of nodes it is pretty hard, actually.
The quiescence needed to take the actual snapshot is not to be taken lightly under such conditions.
Furthermore, the deletion of snapshots can be really fun too when it comes to how heavy it
I usually exclude them. Otherwise you will end up with lots of data on the TSM
backend.
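For the native dsmc client that exclusion is typically a single client-option line; the mount point here is illustrative:

```
* In the client include-exclude options (dsm.sys or the inclexcl file);
* adjust the path to your filesystem mount point.
EXCLUDE.DIR /gpfs/gpfs0/.snapshots
```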
--
Cheers
> On 27 Feb 2017, at 12.23, Hans-Joachim Ehlers wrote:
>
> Hi,
>
> short question: if we are using the native TSM dsmc Client, should we exclude
> the "./.snapshots/."
Hi
The ts* fear is a fair one; they are internal commands, bla bla bla, you know that.
Have you tried mmlsnsd -X?
Hi
I see Kevin's setup regularly. With or without tape, depending on the religious beliefs of the client.
This is the first I've heard of this max_sectors_kb issue, has it already been discussed on the list? Can you point me to any more info?
My 2 cents, and I am sure different people have different opinions.
New kernels might be problematic.
I just got my fun with the RHEL 7.3 kernel and max_sectors_kb for new filesystems. It is something that will come to the FAQ soon; it is already drafted, not yet public.
I guess whatever you do, get a TEST cluster.
> There could be a great reason NOT to use 128K metadata block size, but I
> don’t know what it is. I’d be happy to be corrected about this if it’s out of
> whack.
>
> --
> Stephen
>
>
>
>> On Sep 22, 2016, at 3:37 PM, Luis Bolinches <luis.bolinc..