> >> ... much we need for GPFS and other "system" processes.
> >>
> >> I can tell you that SLURM is *much* more efficient at killing processes
> >> as soon as they exceed the amount of memory they've requested than PBS /
> >> Moab ever dreamed of being.
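
A sketch of how that kind of enforcement is usually wired up in Slurm -- not
necessarily the exact setup being described above, and the knobs vary a bit by
version, so check your own slurm.conf/cgroup.conf documentation:

    # slurm.conf (fragment)
    TaskPlugin=task/cgroup

    # cgroup.conf (fragment)
    ConstrainRAMSpace=yes
    ConstrainSwapSpace=yes

With the cgroup task plugin constraining RAM, a job step that exceeds its
requested memory is killed by the kernel's cgroup limit rather than waiting on
a scheduler poll.
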
--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine
... reagents.
Sure, we could turn them off, but then we're eating $$$ we could be getting
back from the vendor.
At least SSD prices have come down far enough that we can put our metadata on
fast disks now, even if we can't take advantage of the more efficient small
file allocation yet.
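
For anyone finding this in the archives later, the metadata-on-SSD piece is
just an NSD stanza with usage=metadataOnly in the system pool; the device, NSD,
and server names below are invented, so treat this as a sketch rather than our
actual config:

    # meta.stanza
    %nsd:
      device=/dev/mapper/ssd01
      nsd=ssd_meta_01
      servers=nsd1,nsd2
      usage=metadataOnly
      failureGroup=10
      pool=system

    mmcrnsd -F meta.stanza
    mmadddisk gpfs0 -F meta.stanza

Existing metadata can then be migrated onto the new disks with mmrestripefs if
needed.
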
having POSIX look-alike commands
like ls and find that plug into the GPFS API rather than making VFS calls.
That's of course a project for my Copious Free Time...
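
In the meantime, the closest stand-in for a non-VFS find is probably the policy
engine; a minimal LIST-rule sketch, with made-up paths:

    /* listall.pol */
    RULE EXTERNAL LIST 'allfiles' EXEC ''
    RULE 'listall' LIST 'allfiles'
         SHOW(VARCHAR(FILE_SIZE) || ' ' || VARCHAR(MODIFICATION_TIME))

    mmapplypolicy /gpfs/fs0 -P listall.pol -f /tmp/fs0 -I defer

That scans the metadata in parallel and drops the matches into list files under
the -f prefix, instead of stat()ing every file through the VFS.
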
... where there is functional overlap.
>
> Going either way (POSIX or GPFS-specific), for each API call the
> execution path drops into the kernel and then, if required, makes an
> inter-process call to the mmfsd daemon process.
FWIW, the operating+capital costs we recharge our grants for tape storage
are ~50% of what we recharge them for bulk disk storage.
> ... default ones officially
> described in RFC 2307?
>
> Many thanks.
> Regards,
>
>    Alvise Dorigo
On Thu, May 24, 2018 at 03:46:32PM +0100, Jonathan Buzzard wrote:
> On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote:
> > I haven't needed to change the LDAP attributes that CES uses, but I
> > do see --user-id-attrib in the mmuserauth documentation.
> > Unf
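
If that flag does what it sounds like, it would be passed when file
authentication is configured, something along these lines (every option other
than --user-id-attrib here is from memory and site-specific, so trust the
mmuserauth man page over this sketch):

    mmuserauth service create --data-access-method file --type ldap \
        ... --user-id-attrib uidNumber
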
> ... use the subnets option to identify that it has a private address and
> then dynamically share that with the cluster?
>
> Thanks in advance for your clarifying comments.
>
> -Eric
>
> --
>
> Eric Horst
> University of Washington
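
For reference, the subnets option is set cluster-wide with mmchconfig; nodes
whose interfaces fall inside a listed subnet advertise it and prefer it for
daemon traffic when both ends can reach it. A sketch with an invented address:

    mmchconfig subnets="192.168.10.0"
    mmlsconfig subnets

Nodes without an interface on that subnet simply keep using their primary
daemon addresses.
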
... for the reasonable users to subsidize them.
Yep, we set our fileset inode quota to 1 million/TB of allocated space. It
seems overly generous to me but it's far better than no limit at all.
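
Mechanically, for an independent fileset that is just the fileset inode limit;
a sketch with made-up file system and fileset names, giving a 5 TB fileset a
cap of 5 million inodes:

    mmchfileset gpfs0 projects_foo --inode-limit 5000000
    mmlsfileset gpfs0 projects_foo -L

Dependent filesets share their parent's inode space, so they would need
ordinary file-count quotas instead.
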
[gpfsug-discuss] Hanging file-systems
>
> 1. Are you under memory pressure, or even worse, have you started swapping?
... spreading incorrect information.
TSM can store some amount of metadata in its database without spilling over
to a storage pool, so whether a metadata update is cheap or expensive
depends not just on ACLs/extended attributes but also on the directory entry
name length. It can definitely make for some [...] (corresponding to GPFS
filesets) that stress out our mmbackup cluster at the sort step of mmbackup.
UNIX sort is not RAM-friendly, as it happens.
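
For anyone hitting the same wall, GNU sort's memory and spill behaviour can at
least be steered; the paths below are invented, and whether mmbackup's helper
scripts pass options like these through to their internal sort is something to
verify locally:

    export TMPDIR=/scratch/mmbackup-tmp     # where sort writes its spill files
    sort -S 8G --parallel=4 -T "$TMPDIR" filelist.unsorted > filelist.sorted
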
> > ... large, but it does not
> > distinguish between clients and IO servers.
> >
> > Let me know when you get a chance.
... out and run rpmrebuild for ourselves.
> >
>
> IBM should be hanging their heads in shame if the replacement RPM is
> missing files.
>
> JAB.
>
> --
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> ... rsync on one of the NSD servers, or is
> it better to run it on a client?
>
> The environment:
> GPFS 4.2.3.19, NSDs on CentOS 7.4, clients mostly CentOS 6.4 (connected by
> IB QDR) and CentOS 7.3 (connected by OPA); connection between NSDs and
> storage is IB QDR.
we provide. The newer kernel available in CentOS 7 (and now 8) supports
large numbers of CPUs and large amounts of memory far better than the
ancient CentOS 6 kernel as well.
... SPARC system just to run a
15-year-old component of a pipeline for which they don't have source code...
Containers aren't quite a panacea; there's still the issue of
insecure software being baked into the container, but at least we can limit
what the container can access more easily than running outside a
container.
Part of
the plan is to make it clear where we're willing to accept risk, and to
limit our own liability. No process is going to be perfect, but we at least
know and accept where those imperfections are.
does not mean the backup completed as far as I know.
>
> What solutions are you all using, or does mmbackup in 5.x update the
> filespaceview table?
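
One crude way to check from the admin CLI -- table and column names here are
from memory, so verify them against your server's schema before relying on it:

    dsmadmc -id=admin -password=XXXX -dataonly=yes \
      "select node_name, filespace_name, backup_start, backup_end \
       from filespaces where node_name='GPFS_NODE1'"

As noted above, a populated backup_end only shows that the client reported a
finish, not that every object made it to the server.
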
> ... and deleted for each user,
> including their paths. You can do more tricks if you want.
>
> Jaime
>
>
> On 3/25/2020 10:15:59, Skylar Thompson wrote:
> > We execute mmbackup via a regular TSM client schedule with an incremental
> > action, with a virtualmountpoint set to a
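
For the archives: the virtual mount point piece is just a client option in the
dsm.sys server stanza; the names below are invented and the schedule/mmbackup
wiring around it is site-specific:

    SERVERNAME           TSM1
       TCPSERVERADDRESS  tsm1.example.org
       NODENAME          gpfs-backup-node
       VIRTUALMOUNTPOINT /gpfs/fs0/fileset1

With that, the fileset appears as its own filespace on the server, and mmbackup
can be pointed at /gpfs/fs0/fileset1 (e.g. with --scope inodespace).
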
On Tue, Apr 28, 2020 at 11:30:38AM, Frederick Stock wrote:
> Have you looked at the mmaddcallback command and specifically the file system
> mount callbacks?
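
For reference, registering a mount callback looks roughly like this -- the
script path is invented, and the exact event and variable names should be
checked against the mmaddcallback man page:

    mmaddcallback mountAudit \
        --command /usr/local/sbin/fs-mount-hook.sh \
        --event mount \
        --parms "%eventName %fsName"

The script then runs whenever a file system is mounted, receiving the event
name and file system name as arguments.
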
On Thu, Apr 30, 2020 at 12:50:27PM +0200, Ulrich Sibiller wrote:
> On 28.04.20 at 15:57, Skylar Thompson wrote:
> >> Have you looked at the mmaddcallback command and specifically the file
> >> system mount callbacks?
>
> > We use callbacks successfully to ensure Linux audi
> ... a decent % of oversubscription if they use
> filesets and quotas.
>
> Ed
> OSC
(... midway could leave CCR inconsistent).
I'm with Jonathan here: the command should fail with an informative
message, and the admin can correct the problem (just cd somewhere else).
> ... so we should be able to reach just under
> 20Gbit...
>
> If anyone has any ideas they are welcome!
>
> Thanks in advance,
> Andi Christiansen
... method I set up where we
captured the output of find from both sides of the transfer and preserved
it for posterity, but it obviously did require a hard-stop date on the source.
Fortunately, we seem committed to GPFS, so it might be we never have to do
another bulk transfer outside of the filesystem.
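
One way to implement that capture-and-compare step, with invented paths: record
a normalized listing on each side once writes have stopped, then diff the two.

    find /gpfs/old-fs -printf '%P\t%s\t%T@\n' | sort > /var/tmp/old-fs.manifest
    find /gpfs/new-fs -printf '%P\t%s\t%T@\n' | sort > /var/tmp/new-fs.manifest
    diff -u /var/tmp/old-fs.manifest /var/tmp/new-fs.manifest

An empty diff means the same relative paths, sizes, and mtimes exist on both
sides; it says nothing about file contents, which would need checksums.
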
point + symlink, but I
> > really would prefer to avoid symlinks.
> >
> > Thanks a lot,
> >
> > Marc
> ... data that can be encrypted very quickly.
>
> Is there anything that can protect the GPFS filesystem against this kind of
> attack?
> [backup log excerpt; sizes and timestamps partly truncated in the archive:]
>   ... 06:18:05  DEFAULT  I  /gpfs/nhmfsa/bulk/share/data/mbl/share/workspaces/groups/urban-nature-project/audiowaveform/300_40/unp-grounds-01-1604557538.png
>   545 B  01/25/2022 06:41:17  DEFAULT  I  /gpfs/nhmfsa/bulk/share/data/mbl/share/workspaces/g...
> What's the standard way to accomplish something like this?
> We've used systemd timers/mounts to accomplish it, but that's not ideal.
> Is there a way to do this natively with GPFS, or does this have to be done
> through symlinks or GPFS over NFS?
> ... noauto.
> The noauto lets it boot, but the mount is never mounted properly. Doing
> a manual mount -a mounts it.
>
> On 2/22/22 12:37, Skylar Thompson wrote:
> > Assuming this is on Linux, you ought to be able to use bind mounts for
> > that, something like this in fstab or
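
Roughly this sort of entry -- the GPFS path, target, and unit name are
assumptions to adapt locally:

    # /etc/fstab
    /gpfs/fs0/home  /home  none  bind,noauto,x-systemd.automount,x-systemd.requires=gpfs.service  0  0

The automount defers the bind until something actually touches /home, which
usually gives GPFS time to finish mounting, and the requires= dependency at
least orders it after gpfs.service.
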
> ... doesn't mount until
> after the gpfs.service starts, and even then it's 20-30 seconds.
>
>
> On 2/22/22 14:42, Skylar Thompson wrote:
> > Like Tina, we're doing bind mounts in autofs. I forgot that there might be
> > a race condition if you're doing it in
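
For completeness, the autofs flavour of the same trick, with invented paths: a
direct map doing a bind mount, so nothing is attempted until first access.

    # /etc/auto.master.d/gpfs-home.autofs
    /-    /etc/auto.gpfs-home

    # /etc/auto.gpfs-home
    /home/skylar    -fstype=bind    :/gpfs/fs0/home/skylar

Since autofs only triggers on lookup, the worst case is a short hang while GPFS
finishes mounting rather than a failed mount at boot.
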