Re: [gpfsug-discuss] Upgrading kernel on RHEL

2016-12-01 Thread mark . bergman
In the message dated: Tue, 29 Nov 2016 20:56:25 +, The pithy ruminations from Luis Bolinches were:
=> It's been around in certain cases; some kernel <-> storage combinations get
=> hit, some not
=>
=> Scott referenced it here https://www.ibm.com/developerworks/community/wikis
=>
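
(Not something the thread prescribes, just a common precaution on RHEL while a kernel <-> storage combination is still unverified: hold the kernel back when updating. A minimal sketch, assuming stock yum:)

    uname -r                         # note the currently running, known-good kernel
    yum update --exclude='kernel*'   # update everything except the kernel for now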

Re: [gpfsug-discuss] rpldisk vs deldisk & adddisk

2016-12-01 Thread Matt Weil
I always suspend the disk, then use mmrestripefs -m to remove the data, then delete the disk with mmdeldisk.
-m  Migrates all critical data off of any suspended disk in this file system. Critical data is all data that would be lost if
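
(A minimal sketch of the sequence Matt describes; "gpfs1" and "nsd12" are placeholder filesystem and NSD names:)

    mmchdisk gpfs1 suspend -d nsd12   # stop new block allocations on the disk
    mmrestripefs gpfs1 -m             # migrate critical data off all suspended disks
    mmdeldisk gpfs1 nsd12             # remove the now-drained disk from the filesystem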

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-12-01 Thread Dean Hildebrand
Hi Bob, If you mean #4 with 2x data replication... then I would be very wary, as the chance of data loss would be very high given local disk failure rates. So I think it's really #4 with 3x replication vs #3 with 2x replication (and raid5/6 in node) (with maybe 3x for metadata). The space overhead
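
(Rough space-overhead arithmetic, mine rather than Dean's, assuming 8+2 RAID6 inside each node:)

    #4, 3x GPFS replication:                usable = 1/3 of raw      (~33%)
    #3, 2x GPFS replication over 8+2 RAID6: usable = (8/10)/2 of raw (40%)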

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-12-01 Thread Oesterlin, Robert
Yep, I should have added those requirements :-)
1) Yes, I care about the data. It’s not scratch but a permanent repository of older, less frequently accessed data.
2) Yes, it will be backed up
3) I expect it to grow over time
4) Data integrity requirement: high
Bob Oesterlin Sr Principal Storage

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-12-01 Thread Stephen Ulmer
Just because I don’t think I’ve seen you state it: (How much) Do you care about the data? Is it scratch? Is it test data that exists elsewhere? Does it ever flow from this storage to any other storage? Will it be dubbed business critical two years after they swear to you that it’s not

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-12-01 Thread Oesterlin, Robert
Some interesting discussion here. Perhaps I should have been a bit clearer on what I’m looking at: I have 12 servers with 70*4TB drives each – so the hardware is free. What’s the best strategy for using these as GPFS NSD servers, given that I don’t want to rely on any “bleeding edge”
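
(For scale, my arithmetic rather than Bob's: 12 servers x 70 drives x 4 TB = 3,360 TB raw, which works out to roughly 1.1 PB usable under 3x replication, or about 1.7 PB under 2x, before any RAID or filesystem overhead.)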