[gpfsug-discuss] GPFS/SS UG Event at ORNL, Register by September 1

2018-08-03 Thread Kristy Kallback-Rose
All,

Here are some updates for the Spectrum Scale/GPFS UG Event at ORNL as 
part of the HPCXXL meeting. Below you will find:
— the draft agenda (bottom of page),
— a link to registration (register by September 1 due to ORNL site requirements; see the next item),
— an important note about registration requirements for visiting Oak Ridge National Lab,
— a request for your site presentations,
— information about HPCXXL and who to contact about joining, and
— other upcoming events.

Hope you can attend and see Summit and Alpine firsthand.

Best,
Kristy

Registration link (you can register for just the GPFS/SS day at $0):
https://www.eventbrite.com/e/hpcxxl-2018-summer-meeting-registration-47111539884

IMPORTANT: September 1st is the deadline to register for HPCXXL and the GPFS Day. 
Registration closes earlier than usual because of the background check required to 
attend the event on site at ORNL. The access review process takes at least 3 weeks 
to complete for foreign nationals and 1 week for US citizens, so don't wait too 
long to make your travel decisions.

ALSO: If you are interested in giving a site presentation, please let us know 
as we are trying to finalize the agenda.

About HPCXXL:
HPCXXL is a user group for sites which have large supercomputing and storage 
installations. Because of the history of HPCXXL, the focus of the group is on 
large-scale scientific/technical computing using IBM or Lenovo hardware and 
software, but other vendor hardware and software is also welcome. Some of the 
areas we cover are: Applications, Code Development Tools, Communications, 
Networking, Parallel I/O, Resource Management, System Administration, and 
Training. We address topics across a wide range of issues that are important to 
sustained petascale scientific/technical computing on scalable parallel 
machines. Some of the benefits of joining the group include knowledge sharing 
across members, NDA content availability from vendors, and access to vendor 
developers and support staff.
The HPCXXL user group is a self-organized and self-supporting group. Members 
and affiliates are expected to participate actively in the HPCXXL meetings and 
activities and to cover their own costs for participating. HPCXXL meetings are 
open only to members and affiliates of the HPCXXL. HPCXXL member institutions 
must have an appropriate non-disclosure agreement in place with IBM and Lenovo, 
since at times both vendors disclose and discuss information of a confidential 
nature with the group.
To join HPCXXL, a new organization needs to be sponsored by a current HPCXXL 
member or by the prospective member themselves. This process is straightforward 
and can be completed over email or in person when a representative attends 
their first meeting. If you are interested in learning more, please contact HPCXXL 
president Michael Stephan (m.step...@fz-juelich.de).

Other upcoming GPFS/SS events:
Sep 19+20 HPCXXL, Oak Ridge
Aug 10 Meetup alongside TechU, Sydney
Oct 24 NYC User Meeting, New York  
Nov 11 SC, Dallas
Dec 12 CIUK, Manchester


Draft agenda below, full HPCXXL meeting information here: 
http://hpcxxl.org/meetings/summer-2018-meeting/ 

Duration  Start  End    Title                                    Speaker

Wednesday 19th, 2018

  15     13:00  13:15   Welcome                                  TBD
  30     13:15  13:45   What is new in Spectrum Scale?           Chris Maestas (IBM)
  15     13:45  14:00   What is new in ESS?                      TBD (IBM)
  25     14:00  14:25   Spinning up a Hadoop cluster on demand   TBD (IBM)
  25     14:25  14:50   Running containers on a supercomputer    John Lewars (IBM)
  30     14:50  15:20   === BREAK ===
  20     15:20  15:40   AWE                                      *** TO BE CONFIRMED ***
  20     15:40  16:00   CSCS site report                         *** TO BE CONFIRMED ***
  20     16:00  16:20   Starfish (sponsor talk)                  TBD (Starfish)
  30     16:20  16:50   Network Flow                             John Lewars (IBM)
  30     16:50  17:20   RFEs                                     Carl Zetie (IBM)
  10     17:20  17:30   Wrap-up                                  TBD

Thursday 20th, 2018

  20     08:30  08:50   Alpine – the Summit file system          TBD (ORNL)
  30     08:50  09:20   Performance enhancements for CORAL       TBD (IBM)
  20     09:20  09:40   ADIOS I/O library                        William Godoy (ORNL)
  20     09:40  10:00   AI Reference Architecture                Ted Hoover (IBM)
  30     10:00  10:30   === BREAK ===
  30     10:30  11:00   Encryption on the wire and at rest       Sandeep Ramesh (IBM)
  30     11:00  11:30   Service Update                           *** TO BE CONFIRMED ***
  30     11:30  12:00   Open Forum                               All


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-03 Thread Buterbaugh, Kevin L
Hi All,

Aargh - now I really do feel like an idiot!  I had set up the stanza file over 
a week ago … then had to work on production issues … and completely forgot that 
I had set the block size in the pool stanzas there.  But at least we all now know 
that stanza files override command line arguments to mmcrfs.
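
For anyone who hits the same thing, here is a minimal sketch of the kind of stanza 
file involved (pool names, NSD names, and values below are purely illustrative, not 
my actual file):

  # %pool stanzas -- a blockSize set here takes precedence over the block size
  # given on the mmcrfs command line, which is exactly what bit me
  %pool: pool=system blockSize=1M layoutMap=scatter
  %pool: pool=raid6  blockSize=4M layoutMap=scatter

  # %nsd stanzas -- tie each NSD to a usage and a pool
  %nsd: nsd=nsd01 usage=metadataOnly pool=system failureGroup=1
  %nsd: nsd=nsd02 usage=dataOnly     pool=raid6  failureGroup=2

In other words, a leftover blockSize= line in a %pool stanza will quietly win over 
the -B / --metadata-block-size values passed to mmcrfs.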

My apologies…

Kevin

On Aug 3, 2018, at 1:01 AM, Olaf Weiser <olaf.wei...@de.ibm.com> wrote:

Can u share your stanza file ?

Sent from my iPhone

On 02.08.2018 at 23:15, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote:

OK, so hold on … NOW what’s going on???  I deleted the filesystem … went to 
lunch … came back an hour later … recreated the filesystem with a metadata 
block size of 4 MB … and I STILL have a 1 MB block size in the system pool and 
the wrong fragment size in other pools…

Kevin

/root/gpfs
root@testnsd1# mmdelfs gpfs5
All data on the following disks of gpfs5 will be destroyed:
test21A3nsd
test21A4nsd
test21B3nsd
test21B4nsd
test23Ansd
test23Bnsd
test23Cnsd
test24Ansd
test24Bnsd
test24Cnsd
test25Ansd
test25Bnsd
test25Cnsd
Completed deletion of file system /dev/gpfs5.
mmdelfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
/root/gpfs
root@testnsd1# mmcrfs gpfs5 -F ~/gpfs/gpfs5.stanza -A yes -B 4M -E yes -i 4096 
-j scatter -k all -K whenpossible -m 2 -M 3 -n 32 -Q yes -r 1 -R 3 -T /gpfs5 -v 
yes --nofilesetdf --metadata-block-size 4M

The following disks of gpfs5 will be formatted on node testnsd3:
test21A3nsd: size 953609 MB
test21A4nsd: size 953609 MB
test21B3nsd: size 953609 MB
test21B4nsd: size 953609 MB
test23Ansd: size 15259744 MB
test23Bnsd: size 15259744 MB
test23Cnsd: size 1907468 MB
test24Ansd: size 15259744 MB
test24Bnsd: size 15259744 MB
test24Cnsd: size 1907468 MB
test25Ansd: size 15259744 MB
test25Bnsd: size 15259744 MB
test25Cnsd: size 1907468 MB
Formatting file system ...
Disks up to size 8.29 TB can be added to storage pool system.
Disks up to size 16.60 TB can be added to storage pool raid1.
Disks up to size 132.62 TB can be added to storage pool raid6.
Creating Inode File
  12 % complete on Thu Aug  2 13:16:26 2018
  25 % complete on Thu Aug  2 13:16:31 2018
  38 % complete on Thu Aug  2 13:16:36 2018
  50 % complete on Thu Aug  2 13:16:41 2018
  62 % complete on Thu Aug  2 13:16:46 2018
  74 % complete on Thu Aug  2 13:16:52 2018
  85 % complete on Thu Aug  2 13:16:57 2018
  96 % complete on Thu Aug  2 13:17:02 2018
 100 % complete on Thu Aug  2 13:17:03 2018
Creating Allocation Maps
Creating Log Files
   3 % complete on Thu Aug  2 13:17:09 2018
  28 % complete on Thu Aug  2 13:17:15 2018
  53 % complete on Thu Aug  2 13:17:20 2018
  78 % complete on Thu Aug  2 13:17:26 2018
 100 % complete on Thu Aug  2 13:17:27 2018
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
  98 % complete on Thu Aug  2 13:17:34 2018
 100 % complete on Thu Aug  2 13:17:34 2018
Formatting Allocation Map for storage pool raid1
  52 % complete on Thu Aug  2 13:17:39 2018
 100 % complete on Thu Aug  2 13:17:43 2018
Formatting Allocation Map for storage pool raid6
  24 % complete on Thu Aug  2 13:17:48 2018
  50 % complete on Thu Aug  2 13:17:53 2018
  74 % complete on Thu Aug  2 13:17:58 2018
  99 % complete on Thu Aug  2 13:18:03 2018
 100 % complete on Thu Aug  2 13:18:03 2018
Completed creation of file system /dev/gpfs5.
mmcrfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
/root/gpfs
root@testnsd1# mmlsfs gpfs5
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
                    32768                    Minimum fragment (subblock) size in bytes (other pools)
 -i                 4096                     Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 2                        Default number of metadata replicas
 -M                 3                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 3                        Maximum number of data replicas
 -j                 scatter                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 32                       Estimated number of nodes that will mount file system
 -B                 1048576                  Block size (system pool)
                    4194304                  Block size (other pools)
 -Q                 user;group;fileset
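
A quick sanity check on the -f and -B values above, assuming the GPFS 5.x behaviour 
that every pool in a filesystem gets the same number of sub-blocks per block, set by 
the pool with the smallest block size (that is my reading of the documentation, so 
treat it as an assumption rather than gospel):

  # system pool: 1 MiB blocks with 8 KiB sub-blocks -> 128 sub-blocks per block
  echo $(( 1048576 / 8192 ))    # prints 128
  # the other pools inherit 128 sub-blocks per block, so 4 MiB blocks -> 32 KiB sub-blocks
  echo $(( 4194304 / 128 ))     # prints 32768

So the "unexpected" 32768 fragment size in the 4 MiB pools follows directly from the 
1 MiB system-pool block size that came out of the stanza file.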

Re: [gpfsug-discuss] Sven, the man with the golden gun now at DDN

2018-08-03 Thread Frank Kraemer


FYI - Sven is on a TOP secret mission called "Skyfall"; with his spirit,
super tech skills, and know-how he will educate and convert all the poor
Lustre souls who are fighting for world leadership. The GPFS-Q-team
in Poughkeepsie has prepared him a golden Walther PPK (9mm) with lots of
Scale v5 silver bullets. He was given a top secret
make_all_kind_of_I/O faster debugger with auto-tuning features. And of
course he received a new car from Aston Martin with lots of special features
designed by POK. It has dual V20-cores, lots of RAM, a Mestor transmission,
twin-port RoCE turbochargers, AFM rockets, and LROC escape seats.
Poughkeepsie is still in the process of hiring a larger group of smart and
good-looking NVMeoF I/O girls; feel free to send your ideas and pictures.
The list of selected "Sven Girls" will be published in a new section of the
Scale FAQ.

-frank-
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-03 Thread Olaf Weiser
Can u share your stanza file ?

Sent from my iPhone

> On 02.08.2018 at 23:15, Buterbaugh, Kevin L wrote:
> 
> OK, so hold on … NOW what’s going on???  I deleted the filesystem … went to 
> lunch … came back an hour later … recreated the filesystem with a metadata 
> block size of 4 MB … and I STILL have a 1 MB block size in the system pool 
> and the wrong fragment size in other pools…
> 
> Kevin
> 
> /root/gpfs
> root@testnsd1# mmdelfs gpfs5
> All data on the following disks of gpfs5 will be destroyed:
> test21A3nsd
> test21A4nsd
> test21B3nsd
> test21B4nsd
> test23Ansd
> test23Bnsd
> test23Cnsd
> test24Ansd
> test24Bnsd
> test24Cnsd
> test25Ansd
> test25Bnsd
> test25Cnsd
> Completed deletion of file system /dev/gpfs5.
> mmdelfs: Propagating the cluster configuration data to all
>   affected nodes.  This is an asynchronous process.
> /root/gpfs
> root@testnsd1# mmcrfs gpfs5 -F ~/gpfs/gpfs5.stanza -A yes -B 4M -E yes -i 
> 4096 -j scatter -k all -K whenpossible -m 2 -M 3 -n 32 -Q yes -r 1 -R 3 -T 
> /gpfs5 -v yes --nofilesetdf --metadata-block-size 4M
> 
> The following disks of gpfs5 will be formatted on node testnsd3:
> test21A3nsd: size 953609 MB
> test21A4nsd: size 953609 MB
> test21B3nsd: size 953609 MB
> test21B4nsd: size 953609 MB
> test23Ansd: size 15259744 MB
> test23Bnsd: size 15259744 MB
> test23Cnsd: size 1907468 MB
> test24Ansd: size 15259744 MB
> test24Bnsd: size 15259744 MB
> test24Cnsd: size 1907468 MB
> test25Ansd: size 15259744 MB
> test25Bnsd: size 15259744 MB
> test25Cnsd: size 1907468 MB
> Formatting file system ...
> Disks up to size 8.29 TB can be added to storage pool system.
> Disks up to size 16.60 TB can be added to storage pool raid1.
> Disks up to size 132.62 TB can be added to storage pool raid6.
> Creating Inode File
>   12 % complete on Thu Aug  2 13:16:26 2018
>   25 % complete on Thu Aug  2 13:16:31 2018
>   38 % complete on Thu Aug  2 13:16:36 2018
>   50 % complete on Thu Aug  2 13:16:41 2018
>   62 % complete on Thu Aug  2 13:16:46 2018
>   74 % complete on Thu Aug  2 13:16:52 2018
>   85 % complete on Thu Aug  2 13:16:57 2018
>   96 % complete on Thu Aug  2 13:17:02 2018
>  100 % complete on Thu Aug  2 13:17:03 2018
> Creating Allocation Maps
> Creating Log Files
>3 % complete on Thu Aug  2 13:17:09 2018
>   28 % complete on Thu Aug  2 13:17:15 2018
>   53 % complete on Thu Aug  2 13:17:20 2018
>   78 % complete on Thu Aug  2 13:17:26 2018
>  100 % complete on Thu Aug  2 13:17:27 2018
> Clearing Inode Allocation Map
> Clearing Block Allocation Map
> Formatting Allocation Map for storage pool system
>   98 % complete on Thu Aug  2 13:17:34 2018
>  100 % complete on Thu Aug  2 13:17:34 2018
> Formatting Allocation Map for storage pool raid1
>   52 % complete on Thu Aug  2 13:17:39 2018
>  100 % complete on Thu Aug  2 13:17:43 2018
> Formatting Allocation Map for storage pool raid6
>   24 % complete on Thu Aug  2 13:17:48 2018
>   50 % complete on Thu Aug  2 13:17:53 2018
>   74 % complete on Thu Aug  2 13:17:58 2018
>   99 % complete on Thu Aug  2 13:18:03 2018
>  100 % complete on Thu Aug  2 13:18:03 2018
> Completed creation of file system /dev/gpfs5.
> mmcrfs: Propagating the cluster configuration data to all
>   affected nodes.  This is an asynchronous process.
> /root/gpfs
> root@testnsd1# mmlsfs gpfs5
> flag                value                    description
> ------------------- ------------------------ -----------------------------------
>  -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
>                     32768                    Minimum fragment (subblock) size in bytes (other pools)
>  -i                 4096                     Inode size in bytes
>  -I                 32768                    Indirect block size in bytes
>  -m                 2                        Default number of metadata replicas
>  -M                 3                        Maximum number of metadata replicas
>  -r                 1                        Default number of data replicas
>  -R                 3                        Maximum number of data replicas
>  -j                 scatter                  Block allocation type
>  -D                 nfs4                     File locking semantics in effect
>  -k                 all                      ACL semantics in effect
>  -n                 32                       Estimated number of nodes that will mount file system
>  -B                 1048576                  Block size (system pool)
>                     4194304                  Block size (other pools)
>  -Q                 user;group;fileset       Quotas accounting enabled
>                     user;group;fileset       Quotas enforced
>                     none                     Default quotas enabled
>  --perfileset-quota No                       Per-fileset quota