Re: [gpfsug-discuss] cross-cluster mounting different versions of gpfs

2016-03-18 Thread Uwe Falke
... scripts it could be done quite smoothly (depending on the percentage of compute nodes which can go down at once and on the run time / wall clocks of your jobs, this will take between a few hours and many days ...).

Re: [gpfsug-discuss] cross-cluster mounting different versions of gpfs

2016-03-19 Thread Uwe Falke

Re: [gpfsug-discuss] GPFS and replication.. not a mirror?

2016-04-29 Thread Uwe Falke
... reasons). E2E Check-summing (as in GNR) would of course help here.

Re: [gpfsug-discuss] Manager nodes

2017-01-24 Thread Uwe Falke
... threads.

Re: [gpfsug-discuss] Forcing which node gets expelled?

2016-10-25 Thread Uwe Falke
... on fixing your connectivity issues, as GPFS will not feel comfortable in such an unreliable environment anyway.

Re: [gpfsug-discuss] subnets

2016-10-19 Thread Uwe Falke
Hi Brian, you might use mmfsadm saferdump tscomm to check on which route peer cluster members are reached.
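A minimal sketch of what that check could look like on a cluster node; the redirect target and the grep pattern are assumptions, since the exact output format of tscomm dumps varies by release:

    # Dump the GPFS TCP communication state; "saferdump" is the variant of "dump"
    # intended to be safe to run on production nodes.
    mmfsadm saferdump tscomm > /tmp/tscomm.out
    # Look through the per-peer connection entries to see which local and remote
    # IP addresses (and hence which subnet/route) each connection uses.
    grep -i 'connect' /tmp/tscomm.out | less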

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-11-30 Thread Uwe Falke
... with GPFS and local disks than what you considered? I suppose nothing reasonable ...

Re: [gpfsug-discuss] RAID config for SSD's used for data

2017-04-20 Thread Uwe Falke

Re: [gpfsug-discuss] Lost disks

2017-07-27 Thread Uwe Falke
... > On Thu, 2017-07-27 at 16:18 +0200, Uwe Falke wrote: > > "Just doing something" makes things worse usually. Whether a 3rd party tool knows how to handle GPFS NSDs can be doubted (as long as it ...

Re: [gpfsug-discuss] Lost disks

2017-07-27 Thread Uwe Falke

Re: [gpfsug-discuss] Fail to mount file system

2017-07-05 Thread Uwe Falke
... which should provide the necessary details.

Re: [gpfsug-discuss] help with multi-cluster setup: Network is unreachable

2017-05-09 Thread Uwe Falke

Re: [gpfsug-discuss] BIG LAG since 3.5 on quota accounting reconciliation

2017-05-11 Thread Uwe Falke
... are to be solved via PMRs.

Re: [gpfsug-discuss] FW: [EXTERNAL] FLASH: IBM Spectrum Scale (GPFS) V4.1 and 4.2 levels: network reconnect function may result in file system corruption or undetected file data corruption (2017.10.09)

2017-10-10 Thread Uwe Falke
... block, or a metadata block - so it may hit just a single data file or corrupt your entire file system. However, I think the likelihood that the RPC content passes as a valid RPC header is very low.

Re: [gpfsug-discuss] Checking a file-system for errors

2017-10-11 Thread Uwe Falke
... consistency. If the contents of the replicas of a data block differ, mmfsck won't see any problem (as long as the fs metadata are consistent), but mmrestripefs -c will.
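A short sketch of the two checks contrasted here; "fsname" is a placeholder device name:

    # Metadata consistency check in report-only mode (-n); a full check generally
    # requires the file system to be unmounted.
    mmfsck fsname -n
    # Compare the replicas of data and metadata blocks and repair any mismatches.
    mmrestripefs fsname -c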

Re: [gpfsug-discuss] Checking a file-system for errors

2017-10-11 Thread Uwe Falke

Re: [gpfsug-discuss] Checking a file-system for errors

2017-10-11 Thread Uwe Falke
... by errors in the logs. Do you have reason to assume your fs has problems?

[gpfsug-discuss] nsdperf crash testing RDMA between Power BE and Intel nodes

2017-10-25 Thread Uwe Falke

[gpfsug-discuss] nsdperf crash testing RDMA between Power BE and Intel nodes

2017-10-24 Thread Uwe Falke

Re: [gpfsug-discuss] nsdperf crash testing RDMA between Power BE and Intel nodes

2017-10-25 Thread Uwe Falke

Re: [gpfsug-discuss] Snapshots for backups

2018-05-08 Thread Uwe Falke
... only a finite number. The initial version is then gone forever.

Re: [gpfsug-discuss] Capacity pool filling

2018-06-08 Thread Uwe Falke
... files. But maybe your script is also sufficient.

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Uwe Falke
Hm, RULE 'list_updated_in_capacity_pool' LIST 'updated_in_capacity_pool' FROM POOL 'gpfs23capacity' WHERE CURRENT_TIMESTAMP - MODIFICATION_TIME ...
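For reference, a complete LIST rule along those lines could look like the sketch below; the 14-day threshold, the policy file path, and running it via mmapplypolicy in deferred list mode are assumptions, not details taken from the thread:

    cat > /tmp/list_updated.pol <<'EOF'
    /* List files in the capacity pool that were modified recently;
       the interval is a placeholder. */
    RULE 'list_updated_in_capacity_pool'
      LIST 'updated_in_capacity_pool'
      FROM POOL 'gpfs23capacity'
      WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) < INTERVAL '14' DAYS
    EOF
    # Produce the file lists without acting on them.
    mmapplypolicy gpfs23 -P /tmp/list_updated.pol -I defer -f /tmp/updated_list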

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Uwe Falke
... your current storage shortage?

Re: [gpfsug-discuss] subblock sanity check in 5.0

2018-07-02 Thread Uwe Falke

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-05-02 Thread Uwe Falke
mmfsadm dump pgalloc might get you one step further ...
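A sketch of capturing that dump for later inspection; the output path is arbitrary:

    # Dump pagepool allocation information on the node showing the odd I/O pattern.
    mmfsadm dump pgalloc > /tmp/pgalloc.$(hostname -s).out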

Re: [gpfsug-discuss] V5 Experience

2018-02-09 Thread Uwe Falke
I suppose the new maxBlockSize default is <>1MB, so your config parameter was properly translated. I'd see no need to change anything.
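A sketch of how one might verify that, assuming plain mmlsconfig/mmchconfig usage:

    # Show the current cluster-wide setting and whether it was carried over.
    mmlsconfig maxblocksize
    # Only if it really has to change (changing maxblocksize typically requires the
    # GPFS daemon to be down; check the mmchconfig documentation first):
    # mmchconfig maxblocksize=16M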

Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 79, Issue 21: mmaddcallback documentation issue

2018-08-07 Thread Uwe Falke
"I'm not sure how we go about asking IBM to correct their documentation,..." One way would be to open a PMR, er?, case. Mit freundlichen Grüßen / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Cente

Re: [gpfsug-discuss] RAID type for system pool

2018-09-05 Thread Uwe Falke
... better now, and expensive enterprise SSDs will endure quite a lot of full rewrites, but you need to estimate the MD change rate, apply the RMW overhead, and see where you end up WRT lifetime (and performance).

Re: [gpfsug-discuss] RAID type for system pool

2018-09-09 Thread Uwe Falke

Re: [gpfsug-discuss] Allocation map limits - any way around this?

2018-07-10 Thread Uwe Falke
Hi Bob, are you sure the first added NSD was 1 TB? Whenever I created a FS, the max NSD size was way larger than the one I added initially, not just fourfold.

Re: [gpfsug-discuss] Same file opened by many nodes / processes

2018-07-10 Thread Uwe Falke
... scan (maybe once with jobs accessing the file, once without).

Re: [gpfsug-discuss] File placement rule for new files in directory

2018-07-12 Thread Uwe Falke
"... referenced in initial placement rules, only the following attributes are valid: FILESET_NAME, GROUP_ID, NAME, and USER_ID."
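A minimal placement-policy sketch restricted to those attributes; the fileset name 'projects', the pool names 'fastdata' and 'data', and the file system name 'gpfs0' are placeholders:

    cat > /tmp/placement.pol <<'EOF'
    /* Initial placement: new *.dat files in fileset 'projects' go to pool 'fastdata'. */
    RULE 'place_dat' SET POOL 'fastdata'
      FOR FILESET ('projects')
      WHERE UPPER(NAME) LIKE '%.DAT'
    /* A default placement rule is needed so all other files still get placed. */
    RULE 'default' SET POOL 'data'
    EOF
    # Dry-run the policy first, then install it.
    mmchpolicy gpfs0 /tmp/placement.pol -I test
    mmchpolicy gpfs0 /tmp/placement.pol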

Re: [gpfsug-discuss] Metadata only system pool

2018-01-23 Thread Uwe Falke
... storage is on HDD and md storage on SSD) and that again reduces your chances.

Re: [gpfsug-discuss] Metadata only system pool

2018-01-23 Thread Uwe Falke
... 1320 MiB @ 8 MiB). However, there could be other data structures in an inode (EAs) reducing the space available for DAs. Hence, YMMV.

Re: [gpfsug-discuss] Local event

2018-04-04 Thread Uwe Falke
... ways are logically similar for the writing node (or should be, for your purposes). In short: yes, I think you need to roll out your "quota exceeded" callback to all nodes in the HPC cluster.
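A hedged sketch of registering such a callback cluster-wide; the identifier, the script path, and the node specification are placeholders, and the option set should be checked against the mmaddcallback man page:

    # The script must exist at the same path on every node it is registered for.
    mmaddcallback quotaExceededAlert \
      --command /usr/local/sbin/quota_exceeded.sh \
      --event softQuotaExceeded \
      -N all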

Re: [gpfsug-discuss] Dual server NSDs - change of hostname

2018-04-04 Thread Uwe Falke
... network config.

Re: [gpfsug-discuss] Dual server NSDs - change of hostname

2018-04-05 Thread Uwe Falke
... prevent any havoc for Scale and offers you plenty of opportunity to check your final new network set-up before starting Scale on that renewed node. YMMV, and you might of course try different methods on your test system.

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-04-10 Thread Uwe Falke
... a 1 MiB buffer you need about 13,100 chunks of 80 bytes ***per file*** within those 5 secs.
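The figure quoted above is simply the buffer size divided by the record size; a quick check:

    # 1 MiB buffer / 80-byte records: roughly 13100 records per file in the 5 s interval
    echo $(( 1024 * 1024 / 80 ))     # prints 13107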

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-04-11 Thread Uwe Falke
... in the configuration.

Re: [gpfsug-discuss] ILM: migrating between internal pools while premigrated to external

2018-04-24 Thread Uwe Falke
... GPFS files are done regularly); only in the long run will data be purged from pool1. Thus, migration to external should be done at shorter intervals. Sounds like I can go ahead without hesitation.

Re: [gpfsug-discuss] ILM: migrating between internal pools while premigrated to external

2018-04-25 Thread Uwe Falke
... generated automatically, and thus a systematic naming of files is possible to a large extent.

[gpfsug-discuss] ILM: migrating between internal pools while premigrated to external

2018-04-24 Thread Uwe Falke
... or might not. Is there some knowledge about that in the community? If it were not a reasonable way, we would do the internal migration before the external one, but that imposes a time dependency we'd not have otherwise.

Re: [gpfsug-discuss] ILM: migrating between internal pools while premigrated to external

2018-04-24 Thread Uwe Falke

Re: [gpfsug-discuss] ILM: migrating between internal pools while premigrated to external

2018-04-24 Thread Uwe Falke
... am not a developer of the ILM stuff in Scale, so I cannot fully foresee what'll happen. Hence the question.

Re: [gpfsug-discuss] gpfs client cluster, lost quorum, ccr issues

2018-06-28 Thread Uwe Falke
... if it comes up with quorum; if it does, then go ahead and cleanly de-configure what's needed to remove that node from the cluster gracefully. Once you reach quorum with the remaining nodes, you are in a safe area.
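A rough sketch of that sequence with standard commands; 'badnode' is a placeholder for the node to be removed, and the exact steps depend on how far CCR lets the remaining nodes get:

    mmstartup -a          # start GPFS on the surviving nodes
    mmgetstate -a         # verify the cluster reaches quorum ("active" state)
    # Only once quorum is stable, remove the dead node gracefully:
    mmdelnode -N badnode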

Re: [gpfsug-discuss] How to use RHEL 7 mdadm NVMe devices with Spectrum Scale 4.2.3.10?

2018-11-16 Thread Uwe Falke
Hi Lance, you might need to use /var/mmfs/etc/nsddevices to tell GPFS about these devices (template in /usr/lpp/mmfs/samples/nsddevices.sample).
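A hedged sketch of such a user exit, following the idea of the shipped sample; the 'generic' device type and the return-code convention should be verified against /usr/lpp/mmfs/samples/nsddevices.sample:

    # /var/mmfs/etc/nsddevices - add md devices to GPFS device discovery.
    # Each echoed line is "<device> <driver type>".
    for dev in $(awk '$4 ~ /^md/ {print $4}' /proc/partitions); do
      echo "$dev generic"
    done
    # Per the sample's convention, return 0 so the built-in discovery also runs.
    return 0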

Re: [gpfsug-discuss] How to use RHEL 7 mdadm NVMe devices with Spectrum Scale 4.2.3.10?

2018-11-16 Thread Uwe Falke