Alex,
Metadata will be 4KiB.
Depending on the filesystem version you will also have subblocks to consider: V4
filesystems have 1/32 subblocks, V5 filesystems have 1/1024 subblocks (assuming
metadata and data block size is the same).
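A quick sketch of that subblock arithmetic, using the ratios quoted above (illustrative only; actual subblock counts depend on the Scale version and the block size you pick):

```python
# Illustrative only: subblock sizes derived from the ratios quoted above
# (V4: 32 subblocks per block, V5: up to 1024 subblocks per block).
MiB = 1024 * 1024
KiB = 1024

def subblock_size(block_size: int, subblocks_per_block: int) -> int:
    """Smallest allocatable unit for a given block size."""
    return block_size // subblocks_per_block

for version, ratio in (("V4", 32), ("V5", 1024)):
    for block in (8 * MiB, 16 * MiB):
        print(f"{version}: {block // MiB} MiB block -> "
              f"{subblock_size(block, ratio) // KiB} KiB subblock")
```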
My first question would be: “Are you sure that Linux OS is
much is allocated to the OS / Spectrum Scale pagepool
Regards
Andrew Beattie
Technical Specialist - Storage for Big Data & AI
IBM Technology Group
IBM Australia & New Zealand
P. +61 421 337 927
E. abeat...@au1.ibm.com
> On 2 Feb 2022, at 19:14, Talamo Ivano Giuseppe (PSI
ation of RDMA add enough value... possibly not.
Andrew Beattie
Technical Sales - Storage for Big Data and AI
IBM Systems - Storage
IBM Australia & New Zealand
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Jonathan Buzzard" S
I’m not sure for the Lenovo,
but the GUI hardware details tab has MTM and serial information for all
the ESS nodes.
Regards
Andrew
> On 3 Sep 2021, at 01:39, Hannappel, Juergen
wrote:
>
> Hi,
> on an ESS node with power cpu I can get the serial number from
> /proc/device-tree/system-id
>
the 5030.
You could ask your local IBM Presales team to perform a StorM disk model of
the expected performance using your current configuration, to show you what
your performance should look like.
Regards
Andrew Beattie
Technical Sales - Storage for Data and AI
IBM Australia and New Zealand
> On 29
David,
You can use the CES protocol nodes to present a Swift object interface or an S3
object interface for applications to access data from the filesystem / write
data into the filesystem over the object protocol.
Check the OpenStack Swift / S3 compatibility matrix to make sure that the
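As a rough illustration only: a minimal sketch of what application access over the S3 interface might look like, assuming a hypothetical CES object endpoint, credentials and bucket (none of these names come from the thread):

```python
# Minimal sketch: talking S3 to a CES object endpoint with boto3.
# The endpoint URL, credentials and bucket name below are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ces-object.example.org",  # hypothetical CES address
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write a file into the filesystem via the object protocol...
s3.upload_file("results.dat", "project-bucket", "run42/results.dat")

# ...and read it back.
s3.download_file("project-bucket", "run42/results.dat", "results_copy.dat")
```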
changes to the data which is why
you need a copy?
I know of customers that do this regularly with block storage such as the IBM
FlashSystem product family, in conjunction with IBM Spectrum Copy Data
Management. But I don’t believe CDM supports file-based storage.
Regards,
Andrew Beattie
Ryan,
There are two parts to call home:
the Spectrum Scale software call home and the IBM ESS hardware call home.
The software call home should work regardless of hardware platform.
However, it will not pull the detail of the hardware; you will need to
upload hardware logs separately.
Sent from
Jonathan,
You need to create vdisk sets, which will create multiple vdisks; you can then
assign vdisk sets to your filesystem (assigning multiple vdisks at a time).
Things to watch: free space calculations are more complex, as it’s building
multiple vdisks under the covers using multiple RAID
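A rough, made-up illustration of why the free-space arithmetic gets more involved: a vdisk set fans out into one vdisk per recovery group, and each vdisk carries its own erasure-code overhead. All figures and the helper below are hypothetical:

```python
# Rough illustration: usable space when a vdisk set fans out into one vdisk
# per recovery group, each carrying erasure-code overhead.
# All figures are made up for illustration.

def usable_capacity(raw_per_rg_tib, num_rgs, data_strips, parity_strips, set_size_pct):
    """Approximate usable capacity (TiB) of a vdisk set spread over num_rgs recovery groups."""
    efficiency = data_strips / (data_strips + parity_strips)   # e.g. 8 / (8 + 2)
    per_vdisk_raw = raw_per_rg_tib * set_size_pct / 100        # raw claimed per RG
    return num_rgs * per_vdisk_raw * efficiency

# Example: 2 recovery groups of 500 TiB raw, 8+2P, vdisk set sized at 80% of each RG.
print(f"~{usable_capacity(500, 2, 8, 2, 80):.0f} TiB usable")   # -> ~640 TiB
```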
via SMB or NFS, all research users across the university who want to
access one or more of their storage allocations do so via the SMB / NFS mount
points from this specific storage cluster.
Regards,
Andrew Beattie
File & Object Storage - Technical Lead
IBM Australia & New Zealand
Sent
Rob,
Talk to Jake Carroll from the University of Queensland; he has done a
number of presentations at Scale User Groups on UQ’s MeDiCI data fabric,
which is based on Spectrum Scale and makes very aggressive use of AFM.
Their use of AFM is not only on campus, but to remote storage clusters
between
issues onto yourself rather than it being the vendor's problem.
Sent from my iPhone
> On 3 Oct 2020, at 20:06, Jonathan Buzzard
wrote:
>
> On 02/10/2020 23:19, Andrew Beattie wrote:
>> Jonathan,
>> I suggest you get a formal statement from Lenovo as the DSS-G Platform
Jonathan,
I suggest you get a formal statement from Lenovo, as the DSS-G platform is
no longer an IBM platform.
But for ESS-based platforms the answer would be that it is not supported to run
anything on the IO servers other than GNR and the relevant Scale management
services, due to the fact that
AMD.
Sent from my iPhone
> On 3 Sep 2020, at 17:44, Giovanni Bracco wrote:
>
> OK from client side, but I would like to know if the same is also for
> NSD servers with AMD EPYC, do they operate with good performance
> compared to Intel CPUs?
>
> Giovanni
>
>> On
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Giovanni Bracco Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discussion list, Frederick Stock
Iban,
Spectrum Scale Native RAID will not be an option for your scenario.
Scale Erasure Code Edition does not support JBOD enclosures at this point,
and Scale Native RAID is only certified for specific hardware (IBM ESS or
Lenovo GSS).
This means you either need to use hardware RAID
Why don’t you look at packaging your small files into larger files, which
will be handled more effectively?
There is no simple way to replicate / move billions of small files,
but surely you can build your workflow to package the files up into a zip
or tar format, which will simplify not only
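A minimal sketch of the packaging step, assuming the small files sit under a single directory; the paths are hypothetical:

```python
# Minimal sketch: bundle a directory of small files into one tar archive
# before replicating/moving it. Paths are hypothetical.
import tarfile
from pathlib import Path

src = Path("/gpfs/project/smallfiles")          # hypothetical source directory
archive = Path("/gpfs/staging/smallfiles.tar")  # hypothetical staging area

with tarfile.open(archive, "w") as tar:         # use "w:gz" to compress as well
    for f in sorted(src.rglob("*")):
        if f.is_file():
            tar.add(f, arcname=str(f.relative_to(src)))

print(f"Packed {archive}, ready for transfer")
```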
Andi,
You may want to reach out to Jake Carroll at the University of Queensland.
When UQ first started exploring AFM and global AFM transfers, they did
extensive testing around tuning for the NFS stack.
From memory they got to a point where they could pretty much saturate a 10Gbit
link,
Jonathan,
Not entirely true:
Spectrum Scale Standard (Data Access Edition) does include AFM; it does not include AFM-DR.
Spectrum Scale Advanced (Data Management Edition) is required to support AFM-DR.
Regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Andi Nør Christiansen" Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discussion list
://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM)
Regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
across
all the drives in the RG; this guarantees the best performance for each vdisk.
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Olaf Weiser" Sent by: gpfs
to do almost anything with that Intel adapter.
regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Simon Thompson Sent by: gpfsug-discuss-boun...@spectrumscale.org To
Simon,
Are you using Intel 10Gb network adapters with RH 7.6 by any chance?
regards
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Simon Thompson Sent by: gpfsug
r guidance to your question
Regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Sent by: gpfsug-discuss-boun...@spectrumscale.org To: Cc: Subject: [EXTERNAL] [gpfsug-discu
stem based on the number of vdisks per building block I've added to that filesystem, and provide a min performance / max performance number for a filesystem.
Regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm
into an existing filesystem?
Are you creating a new filesystem?
Regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Ryan Novosielski Sent by: gpfsug-discuss-boun
be a blanket -- NOT supported) but for other configurations it's a case of at your own risk.
Regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Weil, Matthew"
RAID 4+2P, RAID 4+3P,
RAID 8+2P and RAID 8+3P options, as well as 2-copy and 3-copy options -- you would need to reach out to your local IBM team to validate if Scale ECE would be supported with JBOD shelves presented to multiple servers.
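For a back-of-the-envelope feel for the space overheads of those options (simple arithmetic only; real ECE deployments have additional spare and metadata overheads not shown here):

```python
# Back-of-the-envelope usable fraction for the protection options listed above.
# Real ECE deployments have additional spare/metadata overheads not shown here.
options = {
    "4+2P":   4 / (4 + 2),
    "4+3P":   4 / (4 + 3),
    "8+2P":   8 / (8 + 2),
    "8+3P":   8 / (8 + 3),
    "2-copy": 1 / 2,
    "3-copy": 1 / 3,
}

for name, fraction in options.items():
    print(f"{name:>6}: {fraction:.1%} of raw capacity usable")
```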
Regards
Andrew Beattie
File and Object Storage Technical
/ mmNametoUID, to manage the ACL and permissions; they then move the data from the AFM filesystem to the HPC scratch filesystem for processing by the HPC (different filesystems within the same cluster).
Regards,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
cluster.
Why are you trying to publish a non-HA RHEL SMB share instead of using the HA CES protocols?
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: vall
/bl1adv_configprotocolsonremotefs.htm
Hope this helps,
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: vall...@cbio.mskcc.org Sent by: gpfsug-discuss-boun
Yes, and it's licensed based on the size of your network pipe.
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Frederick Stock" Sent by: gpfsug-di
Hi there,
Given you have a Lenovo DSS, I suggest you open a support call with Lenovo initially, as I believe all OEMs are tasked with providing their own support -- they have the ability to open a case directly with IBM if they are unable to provide the relevant answers.
Regards
Andrew
Kevin,
That sounds like a useful script;
would you care to share?
Thanks
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Buterbaugh, Kevin L" Sent by: gpfsug-discuss-boun...@spectrums
to the base cluster and by group create exports of different protocols, but it's not available today.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Andi Rhod Christiansen Sent by: gpfsug-discuss-boun
ood chance of getting a pair of switches to connect 2 NSD servers to a single FC storage array for far less than that.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Jan-Frode Myklebust Sent by: gpfs
This one made it, Aaron.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Knister, Aaron S. (GSFC-606.2)[InuTeq, LLC]" Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discu
you're looking for.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Henrik Cednert (Filmlance)" Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discussion list Cc: Subjec
depending on the type of NSD Servers doing the work.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Stephen Ulmer Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discussion list
, to it, it also becomes useful as a GUI server / management server?
regards,
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Matthias Knigge Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main
and their account is deleted, you would almost certainly have issues with the Scale cluster trying to validate users' permissions and having Scale get an error from AD when the credentials that it uses are no longer valid.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail
Stephen,
Sorry, you're right, I had to go back and look up what we were doing for metadata,
but we ended up with a 1MB block for metadata and 8MB for data, and a 32k subblock based on the 1MB metadata block size, effectively a 256k subblock for the data.
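A quick check of that arithmetic, following the figures in the message above (illustrative only):

```python
# Quick check of the figures above: the smallest block size (1 MiB metadata)
# fixes the subblock count, which then applies to the 8 MiB data blocks too.
MiB = 1024 * 1024
KiB = 1024

metadata_block = 1 * MiB
metadata_subblock = 32 * KiB
subblocks_per_block = metadata_block // metadata_subblock    # 32

data_block = 8 * MiB
data_subblock = data_block // subblocks_per_block            # 256 KiB

print(f"{subblocks_per_block} subblocks per block, "
      f"data subblock = {data_subblock // KiB} KiB")
```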
Andrew Beattie
Software Defined Storage
was the default for so many years
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Marc A Kaplan" Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discussion list Cc: Subject: R
will apply to the groups that have been updated, and the next time the compression policy is run they will be recompressed.
In terms of performance overhead -- the answer will always be: it depends on your specific data environment.
Regards,
Andrew Beattie
Software Defined Storage - IT Specialist
development team involved in your design and see what we can support for your requirements.
Regards,
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: vall...@cbio.mskcc.org Sent by: gpfsug-discuss-boun
This thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of Linux
that they "just can't move off".
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Origin
the benefits of object storage erasure coding might be worth looking at, although for a 1- or 2-site scenario the overheads are pretty much the same whether you use some variant of distributed RAID or erasure coding.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone
infrastructure in a DR location, have you considered looking at an object store and allowing your existing Protect environment to HSM out to a disk-based object storage pool distributed over disparate geographic locations? (obviously capacity dependent)
Andrew
configurations
Regards
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "J. Eric Wonderley" <eric.wonder...@vt.edu> Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discuss
advantage of this feature
Regards,
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Marc A Kaplan" <makap...@us.ibm.com> Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main
will have just migrated onto new hardware) less than 6 months after doing the migration to new hardware,
and that migration method can't be to add more hardware.
Regards
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message
Philipp,
Not to my knowledge.
AIX
Linux on x86 (RHEL / SUSE / Ubuntu)
Linux on Power (RHEL / SUSE)
Windows
are the current supported platforms.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From
Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: "Jaime Pinto" <pi...@scinet.utoronto.ca> To: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>, "Andrew Beat
for the users to schedule their data set recalls?
Regards,
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
we are trying to tune the links to improve bandwidth and reduce latency rather than restrict the bandwidth.
Regards,
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Matt Weil <mw...@wustl.edu>
Out of curiosity -- why would you want to?
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message - From: Aaron Knister <aaron.s.knis...@nasa.gov> Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsu
mmhealth might be a good place to start.
CES should probably throw a message along the lines of the following:
mmhealth shows something is wrong with the AD server: ... CES DEGRADED ads_down ...
Andrew Beattie
Software Defined Storage - IT Specialist
Phone
pport have indicated that they think there is a bug in the SELinux code, which is causing this issue, and have suggested that we disable SELinux and try again.
My client's environment is currently deployed on CentOS 7.
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E
of integrated RAID controllers, then any form of distributed RAID is probably the best scenario, RAID 6 obviously.
How many nodes are you planning on building? The more nodes, the more value FPO is likely to bring, as you can be more specific in how the data is written to the nodes.
Andrew Beattie
), to provide accountability for the number of virtual machines they are providing, and would like to extend this capability to their file storage offering, which today is based on basic virtual Windows file servers.
Is anyone doing something similar today? And if so, at what granularity?
Andrew Beattie
I think you will find that AFM in any flavor is a function of the server license, not a client license.
I've always found this to be a pretty good guide, although you now need to add Transparent Cloud Tiering into the bottom column.
Andrew Beattie
Software Defined Storage