We wrapped something based on ZooKeeper around rsync to be able to run
rsync in parallel, by splitting the path into subdirectories and
distributing those:
https://github.com/hpcugent/vsc-zk
It works really well if the number of files in the directories is somewhat
balanced. We use it to rsync some GPFS filesystems.
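For illustration, a minimal sketch of the same idea without the ZooKeeper
coordination, fanning out one rsync per top-level subdirectory with xargs
(all paths here are hypothetical):
$ cd /gpfs/source
# 8 rsyncs in parallel, one per subdirectory; files sitting in the
# top-level directory itself still need a separate pass
$ ls -d */ | xargs -P8 -I{} rsync -a {} /gpfs/dest/{}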
Doug,
This won't really work if you make use of ACLs, use special GPFS
extended attributes, or set quotas, filesets, etc.
So unfortunately the answer is that you need to use a combination of things.
There is work going on to make some of this simpler (e.g. for ACLs), but
it's a longer road to get there.
What's on a 'dataOnly' GPFS 3.5.x NSD besides data and the NSD disk
header, if anything?
I'm trying to understand some file corruption, and one potential
explanation would be if a (non-GPFS) server wrote to a LUN used as a
GPFS dataOnly NSD.
We are not seeing any 'I/O' or filesystem errors, mmf
mmbackupconfig may be of some help. The output is eyeball-able, so one
could tweak and then feed into mmrestoreconfig on the new system.
Even if you don't use mmrestoreconfig, you might like to have the info
collected by mmbackupconfig.
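For reference, a sketch of that workflow with hypothetical filesystem
names (check the man pages for the exact options on your release):
$ mmbackupconfig gpfs1 -o /tmp/gpfs1.cfg      # dump filesets, quotas, policy, etc.
$ vi /tmp/gpfs1.cfg                           # eyeball and tweak as needed
$ mmrestoreconfig gpfs_new -i /tmp/gpfs1.cfg  # replay onto the new filesystem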
Not talking about ESS specifically, but it's not possible to change the block
size of an existing file system – you are pretty much stuck with a file copy.
Something I wish IBM would address; I'd love to change the block size of some
of my file systems too.
Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
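For anyone planning around this: the block size is fixed when the filesystem
is created, so it has to be chosen up front. A sketch with hypothetical
device names:
$ mmlsfs gpfs1 -B                    # show the block size of the existing filesystem
$ mmcrfs gpfs2 -F nsd.stanza -B 4M   # a new filesystem is the only way to get a new block size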
I have found that a tar pipe is much faster than rsync for this sort of thing.
The fastest of these is 'star' (schily tar). On average it is about 2x-5x
faster than rsync for doing this. After one pass with this, you can use rsync
for a subsequent or last-pass sync.
e.g.
$ cd /export/gpfs1/foo
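A minimal sketch of how the rest of such a tar pipe can look, written with
plain GNU tar syntax and a hypothetical destination path (star takes
analogous options and is the faster choice):
$ tar cf - . | (cd /export/gpfs2/foo && tar xf -)
# then a final catch-up pass:
$ rsync -a /export/gpfs1/foo/ /export/gpfs2/foo/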
We have recently purchased an ESS appliance from IBM (GL6) with 1.5PB of
storage. We are in the planning stages of implementation. We would like to
migrate data from our existing GPFS installation (around 300TB) to the new
solution.
We were planning on adding the ESS to our existing GPFS cluster and adding its
disks
An additional detail... this position could be filled in London as well.
thank you
russell
> On Jan 27, 2016, at 9:17 PM, Russell Nordquist wrote:
>
> Since job postings are the theme of the week, here is one in NYC to work on
> GPFS + more scale out storage:
>
> https://jobs.pdtpartners.com/?g
Since this official IBM website (pre)announces transparent cloud tiering
...
http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html?ce=sm6024&cmp=IBMSocial&ct=M16402YW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us&s_tact=M16402YW
And since Oesterlin mentioned Cluster Export Service
From the press release:
"And Spectrum Scale extends its policy-driven targeting, migration,
encryption and unified access to the data in the cloud-scale object
storage."
Take care with the "encryption" claim. Spectrum Scale can encrypt your
data as it is stored onto Spectrum Scale controlled storage
Here's a link for Transparent Cloud Tiering, IBM style
http://www.storagereview.com/ibm_announces_open_beta_for_spectrum_scale_transparent_cloud_tiering
Take a look at the new Transparent Cloud Tiering – it works with S3 and on-prem
cloud as well – we have a local Swift cluster and it works perfectly.
Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
From: gpfsug-discuss-boun...@spectrumscale.org
on behalf of Vic Cornell
Simon is right - we are using DMAPI to tier off to WOS, our object store, and to
StrongBox NAS-fronted tape.
We are very interested in the new “lightweight” protocols as they may allow us
to do stuff that DMAPI makes hard - like tiering off to more than one DMAPI
service from a single filesystem.
>Er no, DMAPI was only ever required for HSM as far as I am aware. Now
>while HSM is a form of tiering, in GPFS parlance tiering generally
>refers to disk pools of varying "speeds", with file placement and
>movement being done via the policy engine.
DMAPI can do more than just HSM though. I
On Fri, 2016-01-29 at 15:46 +0100, serv...@metamodul.com wrote:
> Hi Robert,
> I referred to your posting, I assume ^_^
>
> Note the following is from what I know; since I have not had any chance to work
> with GPFS in the last 2 years, my knowledge will be outdated.
>
> The current GPFS tiering op
Without getting into a whole lot of detail - The service is not based around
the existing DMAPI interface. This service uses the Cluster Export Service
(CES) nodes in GPFS 4.2 to perform the work. There is a process running on
these nodes that’s configured to use a cloud provider and it performs
Hi Robert,
I referred to your posting, I assume ^_^
Note the following is from what I know; since I have not had any chance to work
with GPFS in the last 2 years, my knowledge will be outdated.
The current GPFS tiering options depend on DMAPI, of which I am not a big fan
since it had in the past
Hi Hajo
I was involved in a closed beta of this service. You can read a bit more about
it from the developer at this IBM DeveloperWorks posting:
https://www.ibm.com/developerworks/community/forums/html/topic?id=e9a43f16-4ab2-4edf-a253-83931840895e&ps=25
If this doesn’t cover your questions, let
Spectrum Scale already supports external tiering, to tape for example.
http://www.zurich.ibm.com/sto/systems/bigdata.html
Here the GPFS policy engine is used to select which files to migrate, based for
example on the file size or the date of last access.
The file metadata remains in the filesystem, so 'ls -l' etc. still work.
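As a concrete illustration, a hypothetical policy of that shape: an external
pool backed by an interface script, plus a migrate rule (the script path and
filesystem name are made up; the rule syntax follows the documented GPFS ILM
policy language):
$ cat > migrate.pol <<'EOF'
RULE EXTERNAL POOL 'hsm' EXEC '/var/mmfs/etc/hsmScript'
RULE 'oldbig' MIGRATE FROM POOL 'system' TO POOL 'hsm'
     WHERE FILE_SIZE > 1073741824
       AND (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
EOF
$ mmapplypolicy gpfs1 -P migrate.pol -I test   # dry run: report what would move
$ mmapplypolicy gpfs1 -P migrate.pol -I yes    # actually migrate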
Hi,
I saw in the GPFS forum somebody mentioning IBM Spectrum Scale transparent cloud
tiering:
http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html
Thus the question: does somebody know how that - the tiering into cloud
services - is technically done, and what limitations exist