Hi Kenneth,
we also saw similar performance numbers in our tests: native access was far
quicker than going through GPFS. When we learned, though, that the client had
tested performance on a filesystem with a large blocksize (512k) while using
small files, we were able to speed things up significantly by moving to a
smaller FS blocksize.
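To see why the blocksize matters for small files, here is a rough back-of-envelope sketch. It assumes the classic GPFS layout of 32 subblocks per block (the subblock being the smallest allocatable unit); newer GPFS 5.0+ filesystems use a variable subblock count, so the exact numbers differ there.

```python
def space_used(file_bytes: int, blocksize: int, subblocks_per_block: int = 32) -> int:
    """Smallest multiple of the subblock size that can hold the file."""
    subblock = blocksize // subblocks_per_block
    return -(-file_bytes // subblock) * subblock  # ceiling division

KiB = 1024
# A 4 KiB file on a 512 KiB-blocksize FS still occupies a full 16 KiB subblock...
print(space_used(4 * KiB, 512 * KiB))  # 16384
# ...but only 4 KiB (two 2 KiB subblocks) on a 64 KiB-blocksize FS.
print(space_used(4 * KiB, 64 * KiB))   # 4096
```

The same granularity argument applies to I/O: small-file workloads touch far less data per operation than the large blocksize assumes.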
Simon,
We managed to resolve this issue by switching quotas off and back on again,
which rebuilt the quota files.
Can I check whether you run quotas on your cluster?
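For reference, a sketch of that sequence using the standard GPFS quota commands (the device name `gpfs0` is a placeholder; `mmcheckquota` is the step that rebuilds the quota files from actual usage):

```shell
mmquotaoff gpfs0     # disable quota enforcement on the filesystem
mmcheckquota gpfs0   # rebuild/verify quota files against real usage
mmquotaon gpfs0      # re-enable enforcement
```

These are cluster admin commands, so run them in a maintenance window if the quota check is expensive on your filesystem.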
See you in 2 weeks in Manchester.
Thanks in advance.
Peter Childs
Research Storage Expert
ITS Research
Some thoughts:
you give typical cumulative usage values. However, a fast pool might
matter most during traffic spikes. Do you have spikes that drive your
current system to the edge?
Then: using the SSD pool for writes is straightforward (placement); using
it for reads will only pay off if
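For the write side, the placement part can be sketched as a minimal GPFS placement policy (the pool names 'ssd' and 'data' here are placeholders for your actual pools, and the policy would be installed with mmchpolicy):

```
/* Route newly created files to the SSD pool until it is 90% full,
   then fall back to the spinning-disk pool. */
RULE 'to-ssd' SET POOL 'ssd' LIMIT(90)
RULE 'default' SET POOL 'data'
```

The read side is harder because it depends on whether your hot read set actually fits in, and stays in, the fast pool.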
On Wed, 2017-04-19 at 14:23 -0700, Alex Chekholko wrote:
> On 04/19/2017 12:53 PM, Buterbaugh, Kevin L wrote:
> >
> > So you’re considering the purchase of a dual-controller FC storage array
> > with 12 or so 1.8 TB SSD’s in it, with the idea being that that storage
> > would be in its own
On Wed, 2017-04-19 at 20:05, Simon Thompson (IT Research Support)
wrote:
> By having many LUNs, you get many IO queues for Linux to play with. Also the
> raid6 overhead can be quite significant, so it might be better to go with
> raid1 anyway depending on the controller...
>
> And if only
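A quick back-of-envelope on the RAID overhead Simon mentions: a small random write costs 2 disk I/Os on RAID1 (write both mirrors) but typically 6 on RAID6 (read data and both parities, then write all three back). The per-SSD IOPS figure below is a hypothetical round number, not a measured value for any particular array.

```python
# Small-random-write I/O cost per user write, by RAID level.
WRITE_PENALTY = {"raid1": 2, "raid6": 6}

def effective_write_iops(disk_iops: int, ndisks: int, level: str) -> float:
    """Aggregate small-random-write IOPS the array can sustain."""
    return disk_iops * ndisks / WRITE_PENALTY[level]

# 12 SSDs at a hypothetical 50k write IOPS each:
print(effective_write_iops(50_000, 12, "raid1"))  # 300000.0
print(effective_write_iops(50_000, 12, "raid6"))  # 100000.0
```

Controllers with good parity acceleration narrow this gap for streaming writes, but for small random writes the 3x difference is a reasonable first estimate.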