05:16, Jaime Pinto wrote:
Alert for sysadmins managing TSM/DB2 servers and those responsible for
applying security patches, in particular kernel 3.10.0-1160.36.2.el7.x86_64,
despite the security concerns raised by CVE-2021-33909:
Please hold off on upgrading your RedHat systems for now. We found
out the hard way that kernel 3.10.0-1160.36.2.el7.x86_64 is not
compatible with DB2: after the node reboot DB2 would no longer work, not
only on TSM but also on HPSS. I had to revert the kernel to
3.10.0-1062.18.1.el7.x86_64 to get DB2 working properly again.
---
Jaime Pinto - Storage Analyst
SciNet HPC Consortium - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.ca
University of Toronto
661 University Ave. (MaRS), Suite 1140
Toronto, ON, M5G1M1
P: 416-978-2755
TELL US ABOUT YOUR SUCCESS STORIES
http://www.scinethpc.ca/testimonials
*From:* Jonathan Buzzard
*Sent:* 09 May 2020 23:22
*To:* gpfsug-discuss@spectrumscale.org
*Subject:* Re: [gpfsug-discuss] Odd networking/name resolution issue
On 09/05/2020 12:06, Jaime Pinto wrote:
DNS shouldn't be relied upon on a GPFS cluster for internal
communication/management or data.
The 1980's
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
ls -l /usr/lpp/mmfs/bin | grep r-s | awk '{system("ls -l /usr/lpp/mmfs/bin/"$9)}'
All the best
Jaime
COMMMETHOD TCPIP
TCPPort 1500
PASSWORDACCESS GENERATE
TXNBYTELIMIT 1048576
TCPBuffsize 512
On 2019-11-03 8:56 p.m., Jaime Pinto wrote:
On 11/3/2019 20:24:35, Marc A Kaplan wrote:
Please show
same filesystem, Or different filesystems...
And a single mmbackup instance can drive several TSM servers, which can be
named with an option or in the dsm.sys file:
# --tsm-servers TSMserver[,TSMserver...]
# List of TSM servers to use instead of the servers in the dsm.sys file.
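A minimal sketch of that setup (the server names TSM1/TSM2 and the example filesystem path are hypothetical; the option values follow the dsm.sys fragment quoted earlier in this thread):

```shell
# Write two server stanzas into an example dsm.sys (hypothetical names).
cat > /tmp/dsm.sys.example <<'EOF'
SERVERNAME  TSM1
   COMMMETHOD      TCPIP
   TCPPort         1500
   PASSWORDACCESS  GENERATE

SERVERNAME  TSM2
   COMMMETHOD      TCPIP
   TCPPort         1500
   PASSWORDACCESS  GENERATE
EOF

# A single mmbackup instance can then drive both servers:
#   mmbackup /gpfs/fs1 --tsm-servers TSM1,TSM2
grep -c '^SERVERNAME' /tmp/dsm.sys.example   # counts the two stanzas
```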
izing is very undesirable.
Thanks
Jaime
aybe-_s'
Or even more powerfully
WHERE regex(FILESET_NAME, 'extended-regular-.*-expression')
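For instance, a policy rule using it could look like the following sketch (the rule name, list name, and fileset-name pattern are made up for illustration):

```
/* List files belonging to any fileset whose name matches the pattern */
RULE 'bysubfileset' LIST 'matches'
  WHERE REGEX(FILESET_NAME, 'scratch-[0-9]+')
```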
From: "Jaime Pinto"
To: "gpfsug main discussion list"
Date: 04/18/2018 01:00 PM
Subject: [gpfsug-discuss] mmapplypolicy on nested filesets ...
Sent by: gpfsug-discuss-boun...@spectrumscale.org
Maximum Number of filesets on GPFS v5? (Jaime Pinto)
--
Message: 1
Date: Sun, 04 Feb 2018 14:58:39 -0500
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
To: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>
tion on
limitations for version 5:
https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/6027-2699.htm
https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1pdg_increasefilesetspace.htm
Any hints?
Thanks
Jaime
https://www.ibm.com/support/knowledgecenter/SSSR2R_7.1.1/com.ibm.itsm.hsmul.doc/c_recall_optimized_tape.html
Regards, Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com
----- Original message -----
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
Subject: Re: [gpfsug-discuss] What is an independent fileset? was:
mmbackup with fileset : scope errors
Sent by: gpfsug-discuss-boun...@spectrumscale.org
Hi
When mmbackup has passed the preflight stage (pretty quickly) you'll
find the autogenerated ruleset as /var/mmfs/mmbackup
…that are smallish, and/or you don't need the backups -- create independent
filesets, copy/move/delete the data, rename, voila.
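Spelled out as a command sequence, that advice might look like the sketch below. This is not a tested recipe: the filesystem name fs1, the fileset names, and the junction paths are hypothetical, and the commands should be checked against your GPFS version before use.

```
# Create a new independent fileset (its own inode space) and link it.
mmcrfileset fs1 newfset --inode-space new
mmlinkfileset fs1 newfset -J /gpfs/fs1/newfset

# Copy the data over, then retire the old dependent fileset.
rsync -aHAX /gpfs/fs1/oldfset/ /gpfs/fs1/newfset/
mmunlinkfileset fs1 oldfset
mmdelfileset fs1 oldfset -f
```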
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
To: "Marc A Kaplan" <makap...@us.ibm.com>
Cc: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>
…within filesystems. Moving a
file from one fileset to another requires a copy operation. There is no
fast move nor hardlinking.
--marc
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
To: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>
people with
funding decisions listen there.
So you are limited to either migrating the data from that fileset to a new
independent fileset (there are multiple ways to do that) or using the TSM
client config.
- Original message -
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
S
Wed May 17 21:27:45 2017 mmbackup: Backing up *dependent* fileset
sysadmin3 is not supported
Wed May 17 21:27:45 2017 mmbackup: This fileset is not suitable for
fileset level backup. exit 1
Will post the outcome.
Jaime
on /IBM/GPFS/FSET1
dsm.sys
...
DOMAIN /IBM/GPFS
EXCLUDE.DIR /IBM/GPFS/FSET1
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
To: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>
Date: 17-05-17 23:44
Subject: [gpfsug-discuss] mmbackup with fileset : scope errors
Just bumping up.
When I first posted this subject at the end of March there was a UG
meeting that drove people's attention.
I hope to get some comments now.
Thanks
Jaime
Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:
In the old days of DDN 9900 and gpfs 3.4 I
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
To: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>
Date: 05/08/2017 06:06 PM
Subject: [gpfsug-discuss] help with multi-c
Quoting valdis.kletni...@vt.edu:
On Mon, 08 May 2017 12:06:22 -0400, "Jaime Pinto" said:
Another piece of information is that, as far as GPFS goes, all clusters
are configured to communicate exclusively over Infiniband, each on a
different 10.20.x.x network, but broadcast 10.20.255.2
…your client nodes.
New clients can communicate with older versions of server NSDs. Vice
versa... not so much.
mmrepquota::0:1:::dns:FILESET:0:root:0:0:0:0:none:1:0:0:0:none:i:on:off:::
mmrepquota::0:1:::dns:FILESET:1:users:0:4294967296:4294967296:0:none:1:0:0:0:none:e:on:off:::
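As an aside, the colon-delimited form is easy to post-process. A sketch follows; the field positions ($7 device, $10 name, $12 soft block limit) are inferred from the sample record above, so verify them against the header row your mmrepquota version emits.

```shell
# Pull the device, fileset/user name, and block soft limit out of one record.
line='mmrepquota::0:1:::dns:FILESET:1:users:0:4294967296:4294967296:0:none:1:0:0:0:none:e:on:off:::'
echo "$line" | awk -F: '{printf "%s %s %s\n", $7, $10, $12}'
# prints: dns users 4294967296
```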
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
On 3/28/17, 9:47 AM, "gpfsug-discuss-boun...@spectrumscale.org on
behalf of Jaime Pinto" wrote:
…thing to patch my scripts to
deal with this; however, the principle here is that the reports
generated by GPFS should be the ones keeping consistency.
Thanks
Jaime
TELL US ABOUT YOUR SUCCESS STORIES
http://www.scinethpc.ca/
…having a
senior moment can point you to it?
Kevin
On Jan 5, 2017, at 3:53 PM, Jaime Pinto <pi...@scinet.utoronto.ca> wrote:
Does anyone know of a functional standalone tool to
systematically and recursively find and replicate ACLs that works
well with GPFS?
* We're curren
…extend our downtime until
noontime tomorrow, so I haven't had a chance to do so yet. Now I
don't have to! ;-)
Kevin
On Aug 4, 2016, at 10:59 AM, Jaime Pinto <pi...@scinet.utoronto.ca> wrote:
Since there were inconsistencies in the responses,
…once the grace period on the soft
quota of any of the secondary groups expires, that user will be stopped from
further writing to those groups as well, just as in the primary group.
I hope this clears the waters a bit. I still have to solve my puzzle.
Thanks everyone for the feedback.
Jaime
against primary group
sven
On Wed, Aug 3, 2016 at 9:22 AM, Jaime Pinto <pi...@scinet.utoronto.ca> wrote:
Suppose I want to set both USR and GRP quotas for a user, however
GRP is not the primary group. Will GPFS enforce the secondary group quota?
…awk '{print $4}')
  do
    echo $dev generic
  done
fi
* Copy the edited nsddevices to the rest of the nodes at the same path:
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done
Ben
On Jun 13, 2016, at 11:26 AM, Jaime Pinto <pi...@scine
Since we cannot get GNR outside ESS/GSS appliances, is anybody using
ZFS for software RAID on commodity storage?
Thanks
Jaime
…your
tape mounting (robot?) operates and what other requests may be queued
ahead of yours!
…with Firefox, Chrome, and on Linux, not only IE on
Windows). Users may authenticate via AD or LDAP, and traverse only
what they would be allowed to via Linux permissions and ACLs.
Jaime
Quoting Jonathan Buzzard <jonat...@buzzard.me.uk>:
On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto
GPFS deficiency and benefited from the same fix.
yuri
…parity" feature, which substantially diminishes
rebuild time in case of HD failures.
I'm particularly interested in HPC sites with 5000+ clients mounting
such commodity NSD+HD setups.
Thanks
Jaime
From: Jaime Pinto <pi...@scinet.utoronto.ca>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>,
Marc A Kaplan <makap...@us.ibm.com>
Cc: Dominic Mueller-Wicke01/Germany/IBM@IBMDE
Date: 09.03.2016 16:22
Subject:
y...@il.ibm.com
IBM Israel
gpfsug-discuss-boun...@spectrumscale.org wrote on 03/09/2016 09:56:13 PM:
From: Jaime Pinto <pi...@scinet.utoronto.ca>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 03/09/2016 09:56 PM
Subject: [gpfsug-discuss] GPFS(snaps
the
load and substantially improve the performance of the backup compared
to when I first deployed this solution. However I suspect it could
still be improved further if I was to apply tools from the GPFS side
of the equation.
I would appreciate any comments/pointers.
Thanks
Jaime
---
Jaime
…however, only if
this issue has been resolved (among a few others).
What has been the experience out there?
Thanks
Jaime
Quoting "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>:
Hi Jaime,
Have you tried wiping out /var/mmfs/gen/* and /var/mmfs/etc/* on the
old nodeA?
Kevin
That did the trick.
Thanks Kevin and all that responded privately.
Jaime
On Feb 10, 2016, at 1:26 PM,