I pulled down the gitbuilder package (ceph version 0.93-223-g5c2ecc3
(5c2ecc3b8901e6491f1fde8858b51794ffa892e2)) and redid the cluster.
The small-file test (time cp small1/* small2/.) went from 2 min
30 sec to 1 min 40 sec. With some initial tuning I was able to
get it down to 1 min.
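The small-file copy test above can be reproduced with a self-contained sketch like the following. The directory names match the thread, but the file count and sizes are placeholders for illustration only; on the real cluster, small1/ and small2/ sit on a CephFS mount rather than a local disk:

```shell
#!/bin/sh
# Hedged sketch of the small-file copy benchmark from the thread.
# The file count (100) and file size (4 KB) are placeholders, not
# the poster's actual data set.
set -e
mkdir -p small1 small2
for i in $(seq 1 100); do
    # 4 KB of zeroes per file stands in for the real small files.
    dd if=/dev/zero of="small1/file$i" bs=1k count=4 2>/dev/null
done
# The timed copy, as run in the thread:
time cp small1/* small2/
echo "copied $(ls small2 | wc -l) files"
```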
On Thu, Apr 2, 2015 at 11:18 PM, Barclay Jameson
almightybe...@gmail.com wrote:
I am using the Giant release. The OSDs and MON/MDS are using default
RHEL 7 kernel. Client is using elrepo 3.19 kernel. I am also using
cephaux.
I reproduced this issue by using giant release. It's a bug in the MDS
On Wed, Apr 1, 2015 at 12:31 AM, Barclay Jameson
almightybe...@gmail.com wrote:
Here is the mds output from the command you requested. I did this
during the small data run (time cp small1/* small2/).
It is 20MB in size so I couldn't find a place online that would accept
that much data.
I may have found something.
I did the build manually, and as such I did _NOT_ set up these config settings:
filestore xattr use omap = false
filestore max
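For reference, the one complete option named above would go in ceph.conf roughly like this (a hedged sketch; the second option is cut off in the thread, so it is deliberately not reconstructed here):

```
[osd]
filestore xattr use omap = false
```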
Nope,
I redid the cluster with the above config options and it did not fix it.
It must have cached the files from the first copy.
Any thoughts on this?
On Sat, Mar 28, 2015 at 10:12 AM, Barclay Jameson
almightybe...@gmail.com wrote:
I redid my entire Ceph build, going back to CentOS 7, hoping to
get the same performance I did last time.
The rados bench test was the best I have ever had, with 740
MB/s write and 1300 MB/s read. This was even better than the first rados bench
test that had performance equal to PanFS. I find
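Figures like these typically come from the rados bench tool; a hedged sketch of the usual invocation, with a placeholder pool name (exact flags can vary by Ceph release), might look like:

```
# "testpool" is a placeholder; run against a scratch pool, not production data.
rados bench -p testpool 60 write --no-cleanup   # write phase, keep objects
rados bench -p testpool 60 seq                  # sequential read phase
rados -p testpool cleanup                       # remove the benchmark objects
```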
Specifically related to BTRFS, if you have random IO to existing objects
it will cause terrible fragmentation due to COW. BTRFS is often faster
than XFS initially but after it starts fragmenting can become much
slower for sequential reads. You may want to try XFS again and see if
you can
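One way to observe the COW fragmentation described above is filefrag, which reports the extent count per file; the OSD object path below is a placeholder assumption, not a path from the thread:

```
# Placeholder OSD data path; after sustained random writes on BTRFS,
# a badly fragmented object file shows a large extent count here.
filefrag /var/lib/ceph/osd/ceph-0/current/0.0_head/someobject
```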
I did a Ceph cluster install 2 weeks ago where I was getting great
performance (~= PanFS): I could write 100,000 1MB files in 61
minutes (PanFS took 59 minutes). I thought I could increase the performance
by adding a better MDS server, so I redid the entire build.
Now it takes 4 times as long to
So this is exactly the same test you ran previously, but now it's on
faster hardware and the test is slower?
Do you have more data in the test cluster? One obvious possibility is
that previously you were working entirely in the MDS' cache, but now
you've got more dentries and so it's kicking data
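One way to check the cache theory above is the MDS admin socket, which exposes performance counters (a hedged sketch; "mds.a" is a placeholder daemon name):

```
# perf dump includes inode and dentry counters that indicate whether
# the working set still fits in the MDS cache.
ceph daemon mds.a perf dump
```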
On Fri, Mar 27, 2015 at 2:46 PM, Barclay Jameson
almightybe...@gmail.com wrote:
Yes, it's the exact same hardware except for the MDS server (although I
tried using the MDS on the old node).
I have not tried moving the MON back to the old node.
My default cache size is mds cache size = 1000.
The OSDs (3 of them) have 16 disks with 4 SSD journal disks.
I created 2048 for
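If the working set is larger than the dentry cache, raising mds cache size in ceph.conf is the usual knob (a hedged sketch; the value below is illustrative, not a recommendation):

```
[mds]
mds cache size = 500000   # counts cached inodes; illustrative value only
```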