Some thoughts:
you give typical cumulative usage values. However, a fast pool might
matter most for traffic spikes. Do you have spikes driving your
current system to the edge?
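If you're not sure, mmpmon is one way to watch for spikes; a minimal sketch,
assuming you can run it on an NSD server (the interval is just an example):

  # sample per-filesystem I/O counters every 10 seconds, machine-readable output
  echo fs_io_s | /usr/lpp/mmfs/bin/mmpmon -p -r 0 -d 10000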
Then: using the SSD pool for writes is straightforward (placement), using
it for reads will only pay off if da
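For the placement side, a minimal policy sketch could look like this (pool and
fileset names are made up, adjust to your setup):

  # place_ssd.pol -- new files in the 'scratch' fileset land on the SSD pool,
  # everything else on the spinning pool
  RULE 'ssd_placement' SET POOL 'ssd_pool' FOR FILESET ('scratch')
  RULE 'default' SET POOL 'data_pool'

  # install the policy on the filesystem (device name is an example)
  mmchpolicy gpfs0 place_ssd.pol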
On Wed, 2017-04-19 at 14:23 -0700, Alex Chekholko wrote:
> On 04/19/2017 12:53 PM, Buterbaugh, Kevin L wrote:
> >
> > So you’re considering the purchase of a dual-controller FC storage array
> > with 12 or so 1.8 TB SSDs in it, with the idea being that that storage
> > would be in its own storage
On Wed, 2017-04-19 at 20:05 +, Simon Thompson (IT Research Support)
wrote:
> By having many LUNs, you get many IO queues for Linux to play with. Also the
> raid6 overhead can be quite significant, so it might be better to go with
> raid1 anyway depending on the controller...
>
> And if only
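For what it's worth, the many-LUNs approach just means giving each LUN its own
NSD; a rough stanza-file sketch (device names, servers and pool are
placeholders):

  %nsd:
    device=/dev/mapper/ssd_lun01
    nsd=ssd01
    servers=nsdserver1,nsdserver2
    usage=dataOnly
    pool=ssd_pool
  %nsd:
    device=/dev/mapper/ssd_lun02
    nsd=ssd02
    servers=nsdserver1,nsdserver2
    usage=dataOnly
    pool=ssd_pool

  # create the NSDs from the stanza file
  mmcrnsd -F ssd_nsds.stanza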
Simon,
We've managed to resolve this issue by switching quotas off and back on again
and rebuilding the quota file.
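For reference, something along these lines (gpfs0 is just a placeholder for
the filesystem device):

  mmchfs gpfs0 -Q no       # disable quota enforcement
  mmchfs gpfs0 -Q yes      # re-enable it
  mmcheckquota gpfs0       # recount usage and rebuild the quota files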
Can I check whether you run quotas on your cluster?
See you in 2 weeks in Manchester.
Thanks in advance.
Peter Childs
Research Storage Expert
ITS Research Infrastructure
Hi,
Having an issue that looks the same as this one:
We can do sequential writes to the filesystem at 7.8 GB/s total, which
is the expected speed for our current storage
backend. While we have even better performance with sequential reads on
raw storage LUNs, using GPFS we can only reach 1G
Interesting. Could you share a little more about your architecture? Is it
possible to mount the fs on an NSD server and run some dd tests from the fs on
the NSD server? If that gives you decent performance, perhaps try NSDPERF next:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki
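For the dd step, something along these lines on one of the NSD servers should
do; the file name, block size and count are just examples:

  # sequential write, then read back, through the GPFS mount with direct I/O
  dd if=/dev/zero of=/gpfs/fs1/ddtest bs=16M count=512 oflag=direct
  dd if=/gpfs/fs1/ddtest of=/dev/null bs=16M iflag=direct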
Hi Kennmeth,
is prefetching off or on at your storage backend?
Raw sequential is very different from GPFS sequential at the storage
device!
GPFS does its own prefetching; the storage would never know which sectors a
sequential read at the GPFS level maps to at the storage level!
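If you want to rule out GPFS-side limits, the usual knobs to look at are
pagepool and maxMBpS; the values below are only placeholders:

  mmlsconfig pagepool maxMBpS    # what the cluster currently runs with
  mmchconfig maxMBpS=10000 -i    # placeholder value; -i applies it immediately
  mmchconfig pagepool=16G        # placeholder; takes effect after GPFS restarts on the nodes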
Kind regards
Hi Kennmeth,
we also had similar performance numbers in our tests: native was far
quicker than through GPFS. When we learned, though, that the client had tested
performance on the FS with a big blocksize (512k) but small files, we
were able to speed it up significantly by using a smaller FS blocksize.
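Worth checking what your filesystem uses; the block size is fixed at creation
time, so changing it means recreating the filesystem (names below are
examples):

  mmlsfs gpfs0 -B                          # current filesystem block size
  # block size can only be chosen at creation, e.g. with a smaller value:
  mmcrfs gpfs1 -F nsd_stanzas.txt -B 256K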