Thanks John. I'll give that a try as soon as I fix an issue with my MDS
servers that cropped up today.
@John,
Can you clarify which values would suggest that my metadata pool is too
slow? I have added a link with values for "op_active" &
"handle_client_request", gathered in a crude fashion but hopefully
enough data to paint a picture of what is happening.
http://pasteb
I should probably have condensed my findings over the course of the day into
one post, but I guess that's just not how I'm built.
Another data point: I ran `ceph daemon mds.cephmds02 perf dump` in a
while loop with a 1 second sleep, grepping out the stats John mentioned, and
at times (~every 10
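For anyone following along, here is a minimal sketch of that sampling loop,
assuming the MDS admin socket is reachable on the local host (the daemon name
cephmds02 is from my setup):

    while true; do
        # dump all MDS perf counters and keep only the two of interest
        ceph daemon mds.cephmds02 perf dump 2>/dev/null \
            | grep -E '"op_active"|"handle_client_request"'
        sleep 1
    done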
I found a way to get the stats you mentioned: mds_server.handle_client_request
& objecter.op_active. I can see these values when I run:
ceph daemon mds.<id> perf dump
I recently restarted the MDS server, so my stats reset, but I still have
something to share:
"mds_server.handle_client_request": 44060
I have installed diamond (built by ksingh, found at
https://github.com/ksingh7/ceph-calamari-packages) on the MDS node, and I am
not seeing the mds_server.handle_client_request OR objecter.op_active
metrics being sent to graphite. Mind you, this is not the graphite that is
part of the calamari install
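Until the collector picks these counters up, a crude workaround is to push
them to graphite directly over its plaintext protocol (TCP port 2003); the
graphite host below is a placeholder:

    GRAPHITE=graphite.example.com    # placeholder host
    VAL=$(ceph daemon mds.cephmds02 perf dump \
        | grep -oE '"handle_client_request"[: ]+[0-9]+' | grep -oE '[0-9]+$')
    # plaintext protocol line: "<metric path> <value> <unix timestamp>"
    echo "ceph.mds.handle_client_request $VAL $(date +%s)" | nc -w1 $GRAPHITE 2003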
Hi John,
You are correct in that my expectations may be incongruent with what is
possible with Ceph(FS). I'm currently copying many small files (images)
from a NetApp to the cluster...files of ~35 KB to be exact...and the number
of objects/files copied thus far is fairly significant (below in bold):
On Tue, Aug 4, 2015 at 10:36 PM, Bob Ababurko wrote:
> My writes are not going as I would expect with respect to IOPS (50-1000 IOPS)
> & write throughput (~25 MB/s max). I'm interested in understanding what it
> takes to create an SSD pool that I can then migrate the current
> cephfs_metadata pool to. I sus
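For reference, moving an existing pool onto SSDs boils down to a CRUSH rule
that selects only SSD-backed OSDs, then pointing the pool at that rule. A
minimal sketch, assuming a CRUSH root named 'ssd' has already been created
and populated, using the pre-Luminous (Hammer-era) syntax:

    # create a simple rule that selects hosts under the 'ssd' root
    ceph osd crush rule create-simple ssd-rule ssd host
    # look up the new rule's id
    ceph osd crush rule dump ssd-rule
    # point the metadata pool at it (assuming the rule got id 1);
    # Ceph then rebalances the pool's PGs onto the SSD OSDs
    ceph osd pool set cephfs_metadata crush_ruleset 1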
I will dig into the network and determine if we have any issues. One thing
to note is that our MTU is 1500 and will not be changed for this test. Simply
put, I am not going to be able to get these changes implemented in our
current network. I don't expect a huge increase in performance by moving
to
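A quick way to rule the network in or out is to measure raw throughput and
latency between the client/MDS hosts and an OSD node, e.g. with iperf (the
host name below is a placeholder):

    # on an OSD node:
    iperf -s
    # from the client or MDS host:
    iperf -c osd-node-01 -t 30               # sustained TCP throughput
    ping -M do -s 1472 -c 100 osd-node-01    # latency at near-MTU size (1472 + 28 = 1500)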
Bob,
Those numbers would seem to indicate some other problem. One of the
biggest culprits of that kind of poor performance is the network. In the
last few months, several reported performance issues have turned out to be
network-related. Not all, but most.
I have my first Ceph cluster up and running and am currently testing CephFS
for file access. It turns out I am not getting excellent write
performance on my cluster via CephFS (kernel driver), and would like to
explore moving my cephfs_metadata pool to SSD.
To quickly describe the cluster:
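For anyone wanting to reproduce the test, a minimal sketch of the
kernel-driver mount and a crude write workload; the monitor address and
secret file path are placeholders:

    # kernel client mount (one monitor shown; secretfile holds the client key)
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # large sequential write
    dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=1024 oflag=direct
    # many small files, closer to the real workload (~35 KB each)
    for i in $(seq 1 1000); do
        dd if=/dev/zero of=/mnt/cephfs/f$i bs=35k count=1 2>/dev/null
    done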