On 29/08/14 04:11, Sebastien Han wrote:
Hey all,
See my fio template:
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
time_based
runtime=60
ioengine=rbd
clientname=admin
pool=test
rbdname=fio
invalidate=0 # mandatory
#rw=randwrite
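The template is cut off after the commented-out rw line; a job section to drive it might look like this (bs and iodepth here are assumptions, not from the thread):

```ini
; hypothetical job section appended to the [global] template above
[randwrite]
rw=randwrite
bs=4k
iodepth=32
```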
On 29/08/14 14:06, Mark Kirkwood wrote:
... mounting (xfs) with nobarrier seems to get
much better results. The run below is for a single osd on an xfs
partition from an Intel 520. I'm using another 520 as a journal:
...and adding
filestore_queue_max_ops = 2
improved IOPS a bit more
On 29/08/14 22:17, Sebastien Han wrote:
@Mark thanks trying this :)
Unfortunately using nobarrier and another dedicated SSD for the journal (plus
your ceph setting) didn't bring much; now I can reach 3.5K IOPS.
By any chance, would it be possible for you to test with a single OSD SSD?
On 31/08/14 17:55, Mark Kirkwood wrote:
On 29/08/14 22:17, Sebastien Han wrote:
@Mark thanks trying this :)
Unfortunately using nobarrier and another dedicated SSD for the
journal (plus your ceph setting) didn't bring much; now I can reach
3.5K IOPS.
By any chance, would it be possible
Yes, as Jason suggests - 27 IOPS doing 4k blocks is:
27*4/1024 MB/s = 0.1 MB/s
While the RBD volume is composed of 4MB objects, many of the
(presumably) random 4k IOs can reside in the same 4MB object,
so it is tricky to estimate how many 4MB objects need to be
rewritten
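As a sanity check, that arithmetic can be generalized (a trivial sketch, not something from the thread):

```python
def iops_to_mb_per_s(iops: int, block_kib: int) -> float:
    """Throughput in MB/s for a given IOPS figure and block size in KiB
    (using 1 MB = 1024 KiB, as in the calculation above)."""
    return iops * block_kib / 1024

# 27 IOPS of 4k blocks, as quoted above:
print(round(iops_to_mb_per_s(27, 4), 1))  # 0.1
```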
On 01/09/14 12:36, Mark Kirkwood wrote:
Allegedly this model ssd (128G m550) can do 75K 4k random write IOPS
(running fio on the filesystem I've seen 70K IOPS so is reasonably
believable). So anyway we are not getting anywhere near the max IOPS
from our devices.
We use the Intel S3700
On 01/09/14 17:10, Alexandre DERUMIER wrote:
Allegedly this model ssd (128G m550) can do 75K 4k random write IOPS
(running fio on the filesystem I've seen 70K IOPS so is reasonably
believable). So anyway we are not getting anywhere near the max IOPS
from our devices.
Hi,
Just check this:
On 02/09/14 19:38, Alexandre DERUMIER wrote:
Hi Sebastien,
I got 6340 IOPS on a single OSD SSD. (journal and data on the same partition).
Shouldn't it be better to have 2 partitions, 1 for the journal and 1 for data?
(I'm thinking about filesystem write syncs)
Oddly enough, it does not seem
On 05/09/14 10:05, Dan van der Ster wrote:
That's good to know. I would plan similarly for the wear out. But I want to
also prepare for catastrophic failures -- in the past we've had SSDs just
disappear like a device unplug. Those were older OCZ's though...
Yes - the Intel dc style drives
On 17/09/14 08:39, Alexandre DERUMIER wrote:
Hi,
I'm just surprised that you're only getting 5299 with 0.85, since I've been able
to get 6.4K - well, I was using the 200GB model
Your model is the DC S3700; mine is the DC S3500, which has lower
write specs, so that could explain the difference.
Interesting -
On 19/09/14 15:11, Aegeaner wrote:
I noticed ceph added a key/value store OSD backend feature in firefly, but
I can hardly find any documentation about how to use it. At last I found
that I can add a line in ceph.conf:
osd objectstore = keyvaluestore-dev
but creating the OSD with ceph-deploy fails
On 19/09/14 18:02, Mark Kirkwood wrote:
On 19/09/14 15:11, Aegeaner wrote:
I noticed ceph added a key/value store OSD backend feature in firefly, but
I can hardly find any documentation about how to use it. At last I found
that I can add a line in ceph.conf:
osd objectstore = keyvaluestore-dev
On 23/09/14 18:22, Aegeaner wrote:
Now I use the following script to create a key/value backended OSD, but
the OSD is created down and never goes up.
ceph osd create
umount /var/lib/ceph/osd/ceph-0
rm -rf /var/lib/ceph/osd/ceph-0
mkdir /var/lib/ceph/osd/ceph-0
ceph osd crush add
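For what it's worth, a fuller manual bring-up sequence might look like the sketch below (the crush weight, hostname lookup and sysvinit start are my assumptions, not from the thread; the --mkfs step picks up the objectstore setting from ceph.conf):

```shell
# hedged sketch; assumes "osd objectstore = keyvaluestore-dev" is in ceph.conf
OSD_ID=$(ceph osd create)                 # allocate the next osd id
mkdir -p /var/lib/ceph/osd/ceph-${OSD_ID}
ceph-osd -i ${OSD_ID} --mkfs --mkkey      # initialize the backend store and key
ceph auth add osd.${OSD_ID} osd 'allow *' mon 'allow profile osd' \
    -i /var/lib/ceph/osd/ceph-${OSD_ID}/keyring
ceph osd crush add osd.${OSD_ID} 1.0 host=$(hostname -s)
service ceph start osd.${OSD_ID}          # sysvinit style, as on RHEL 6
```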
On 24/09/14 14:07, Aegeaner wrote:
I turned on the debug option, and this is what I got:
# ./kv.sh
removed osd.0
removed item id 0 name 'osd.0' from crush map
0
umount: /var/lib/ceph/osd/ceph-0: not found
updated
add item id 0 name 'osd.0' weight 1 at location
On 24/09/14 14:29, Aegeaner wrote:
I run ceph on Red Hat Enterprise Linux Server 6.4 (Santiago), and when I
run service ceph start I got:
# service ceph start
ERROR:ceph-disk:Failed to activate
ceph-disk: Does not look like a Ceph OSD, or incompatible version:
On 24/09/14 16:21, Aegeaner wrote:
I have got my ceph OSDs running with keyvalue store now!
Thank Mark! I have been confused for a whole week.
Pleased to hear it! Now you can actually start playing with the key/value
store backend.
There are quite a few parameters, not fully documented yet -
On 25/09/14 01:03, Sage Weil wrote:
On Wed, 24 Sep 2014, Mark Kirkwood wrote:
On 24/09/14 14:29, Aegeaner wrote:
I run ceph on Red Hat Enterprise Linux Server 6.4 (Santiago), and when I
run service ceph start I got:
# service ceph start
ERROR:ceph-disk:Failed to activate
ceph-disk
On 08/10/14 11:02, lakshmi k s wrote:
I am trying to integrate OpenStack Keystone with Ceph Object Store using
the link - http://ceph.com/docs/master/radosgw/keystone. Swift v1.0 (without
keystone) works quite fine. But for some reason, Swift v2.0
I have a recent ceph (0.85-1109-g73d7be0) configured to use keystone for
authentication:
$ cat ceph.conf
...
[client.radosgw.gateway]
host = ceph4
keyring = /etc/ceph/ceph.rados.gateway.keyring
rgw_socket_path = /var/run/ceph/$name.sock
log_file = /var/log/ceph/radosgw.log
rgw_data =
On 08/10/14 18:46, Mark Kirkwood wrote:
I have a recent ceph (0.85-1109-g73d7be0) configured to use keystone for
authentication:
$ cat ceph.conf
...
[client.radosgw.gateway]
host = ceph4
keyring = /etc/ceph/ceph.rados.gateway.keyring
rgw_socket_path = /var/run/ceph/$name.sock
log_file = /var
Yes. I ran into that as well - I used
WSGIChunkedRequest On
in the virtualhost config for the *keystone* server [1] as indicated in
issue 7796.
Cheers
Mark
[1] i.e, not the rgw.
On 08/10/14 22:58, Ashish Chandra wrote:
Hi Mark,
Good you got the solution. But since you have already done
Ok, so that is the thing to get sorted. I'd suggest posting the error(s)
you are getting perhaps here (someone else might know), but definitely
to one of the Debian specific lists.
In the meantime perhaps try installing the packages with aptitude rather
than apt-get - if there is some fancy
or certutil tool on
debian/ubuntu? If so, how did you get around this problem?
On Wednesday, October 8, 2014 7:01 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Ok, so that is the thing to get sorted. I'd suggest posting the error(s)
you are getting perhaps here (someone else might know
:~$ openssl x509 -in /home/gateway/ca.pem -pubkey |
certutil -d /var/lib/ceph/nss -A -n ca -t TCu,Cu,Tuw
certutil: function failed: SEC_ERROR_LEGACY_DATABASE: The
certificate/key database is in an old, unsupported format.
On Wednesday, October 8, 2014 7:55 PM, Mark Kirkwood
mark.kirkw
/client.radosgw.gateway.log
rgw dns name = gateway
On Thursday, October 9, 2014 1:15 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
I ran into this - needed to actually be root via sudo -i or similar,
*then* it worked. The unhelpful error message is, I think, referring to
an uninitialized db.
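Put together, the workaround might look like this (paths as in the thread; being root first is the key step):

```shell
# run as root (e.g. via 'sudo -i') so the db files can be created
mkdir -p /var/ceph/nss
certutil -N -d /var/ceph/nss        # initialize an empty cert db (prompts for a password)
openssl x509 -in /home/gateway/ca.pem -pubkey | \
    certutil -d /var/ceph/nss -A -n ca -t "TCu,Cu,Tuw"
```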
On 09/10/14 16
setup in my environment. If you can and would like to
test it so that we could get it merged it would be great.
Thanks,
Yehuda
On Wed, Oct 8, 2014 at 6:18 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Yes. I ran into that as well - I used
WSGIChunkedRequest On
in the virtualhost
SSLCertificateKeyFile /etc/apache2/ssl/apache.key
SetEnv SERVER_PORT_SECURE 443
On Thursday, October 9, 2014 2:48 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Almost - the converted certs need to be saved on your *rgw* host in
nss_db_path (default is /var/ceph/nss but wherever you have
--
On Thursday, October 9, 2014 3:51 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
No, I don't have any explicit ssl enabled in the rgw site.
Now you might be running into http://tracker.ceph.com/issues/7796. So
check if you have
On Thursday, October 9, 2014 4:45 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hmm - It looks to me like you added the chunked request into Horizon
instead of Keystone. You want virtual host *:35357
On 10/10/14 12:32, lakshmi k s wrote:
Have done this too, but in vain. I made changes
Given your setup appears to be non-standard, it might be useful to see
the output of the 2 commands below:
$ keystone service-list
$ keystone endpoint-list
So we can avoid advising you incorrectly.
Regards
Mark
On 10/10/14 18:46, Mark Kirkwood wrote:
Also just to double check - 192.0.8.2
PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz
mailto:mark.kirkw...@catalyst.net.nz wrote:
Oh, I see. That complicates it a wee bit (looks back at your messages).
I see you have:
rgw_keystone_url = http://192.0.8.2:5000
So you'll need
that as well, but in vain. In fact, that is how I
created the endpoint to begin with. Since that didn't work, I followed
the OpenStack standard, which was to include %tenant-id.
-Lakshmi.
On Friday, October 10, 2014 6:49 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hi,
I think your swift
1 == req done
req=0x7f13e40256a0 http_status=401 ==
2014-10-11 19:38:28.516647 7f13c67ec700 20 process_request() returned -1
On Friday, October 10, 2014 10:15 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Right, well I suggest changing it back, and adding
debug rgw = 20
Ah, yes. So your gateway is called something other than:
[client.radosgw.gateway]
So take a look at what
$ ceph auth list
says (run from your rgw), it should pick up the correct name. Then
correct your ceph.conf, restart and see what the rgw log looks like as
you edge ever closer to
: AQCI5C1UUH7iOhAAWazAeqVLetIDh+CptBtRrQ==
caps: [mon] allow rwx
caps: [osd] allow rwx
On Sunday, October 12, 2014 8:02 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Ah, yes. So your gateway is called something other than:
[client.radosgw.gateway]
So take a look at what
1.0 -A http://gateway.ex.com/auth/v1.0 -U s3User:swiftUser -K
CRV8PeotaW204nE9IyutoVTcnr+2Uw8M8DQuRP7i list
my-Test
I am at total loss now.
On Monday, October 13, 2014 3:25 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Well that certainly looks ok. So entries
Right,
So you have 3 osds, one of which is a mon. Your rgw is on another host
(called gateway it seems). I'm wondering if this is the issue. In my
case I'm using one of my osds as a rgw as well. This *should* not
matter... but it might be worth trying out a rgw on one of your osds
instead.
rgw = 20
rgw keystone url = http://stack1:35357
rgw keystone admin token = tokentoken
rgw keystone accepted roles = admin Member _member_
rgw keystone token cache size = 500
rgw keystone revocation interval = 500
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss/
On 15/10/14 10:25, Mark
On 16/10/14 09:08, lakshmi k s wrote:
I am trying to integrate Openstack keystone with radosgw. I have
followed the instructions as per the link -
http://ceph.com/docs/master/radosgw/keystone/. But for some reason,
keystone flags under [client.radosgw.gateway] section are not being
honored. That
On 16/10/14 10:37, Mark Kirkwood wrote:
On 16/10/14 09:08, lakshmi k s wrote:
I am trying to integrate Openstack keystone with radosgw. I have
followed the instructions as per the link -
http://ceph.com/docs/master/radosgw/keystone/. But for some reason,
keystone flags under
Hi,
While I certainly can (attached) - if your install has keystone running
it *must* have one. It will be hiding somewhere!
Cheers
Mark
On 17/10/14 05:12, lakshmi k s wrote:
Hello Mark -
Can you please paste your keystone.conf? Also it seems that the Icehouse install
that I have does not
, October 16, 2014 3:17 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hi,
While I certainly can (attached) - if your install has keystone running
it *must* have one. It will be hiding somewhere!
Cheers
Mark
On 17/10/14 05:12, lakshmi k s wrote:
Hello Mark -
Can you please paste
I'm doing some fio tests on Giant using fio rbd driver to measure
performance on a new ceph cluster.
However with block sizes 1M (initially noticed with 4M) I am seeing
absolutely no IOPS for *reads* - and the fio process becomes
non-interruptible (needs kill -9):
$ ceph -v
ceph version
On 24/10/14 13:09, Mark Kirkwood wrote:
I'm doing some fio tests on Giant using fio rbd driver to measure
performance on a new ceph cluster.
However with block sizes 1M (initially noticed with 4M) I am seeing
absolutely no IOPS for *reads* - and the fio process becomes
non-interruptible
branch temporarily that makes rbd reads
greater than the cache size hang (if the cache was on). This might be
that. (Jason is working on it: http://tracker.ceph.com/issues/9854)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Oct 23, 2014 at 5:09 PM, Mark Kirkwood
mark.kirkw
It looks to me like this has been considered (mapping default pool size
to 2). However just to check - this *does* mean that you need two (real
or virtual) hosts - if the two osds are on the same host then crush map
adjustment (hosts - osds) will be required.
Regards
Mark
On 29/10/14
That is not my experience:
$ ceph -v
ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec20455b1646)
$ cat /etc/ceph/ceph.conf
[global]
...
osd pool default size = 2
$ ceph osd dump|grep size
pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 128
Righty, both osds are on the same host, so you will need to amend the
default crush rule. It will look something like:
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
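For a single-host setup, one hedged way to amend it is to choose leaves by osd rather than host (everything else as in the default rule):

```
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}
```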
I am doing some testing on our new ceph cluster:
- 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel)
- 8 osd on each (i.e 24 in total)
- 4 compute nodes (ceph clients)
- 10G networking
- ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82)
I'm using one of the compute nodes to run some fio
On 30/10/14 11:16, Mark Kirkwood wrote:
I am doing some testing on our new ceph cluster:
- 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel)
- 8 osd on each (i.e 24 in total)
- 4 compute nodes (ceph clients)
- 10G networking
- ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82)
I'm using
On 03/11/14 14:56, Christian Balzer wrote:
On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote:
On Mon, 3 Nov 2014, Christian Balzer wrote:
c) But wait, you specified a pool size of 2 in your OSD section! Tough
luck, because since Firefly there is a bug that at the very least
prevents OSD
On 04/11/14 03:02, Sage Weil wrote:
On Mon, 3 Nov 2014, Mark Kirkwood wrote:
Ah, I missed that thread. Sounds like three separate bugs:
- pool defaults not used for initial pools
- osd_mkfs_type not respected by ceph-disk
- osd_* settings not working
The last one is a real shock; I would
On 04/11/14 22:02, Sage Weil wrote:
On Tue, 4 Nov 2014, Blair Bethwaite wrote:
On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
In the Ceph session at the OpenStack summit someone asked what the CephFS
survey results looked like.
Thanks Sage, that was me!
Here's the link:
On 05/11/14 10:58, Mark Nelson wrote:
On 11/04/2014 03:11 PM, Mark Kirkwood wrote:
Heh, not necessarily - I put multi mds in there, as we want the cephfs
part to be similar to the rest of ceph in its availability.
Maybe its because we are looking at plugging it in with an Openstack
setup
On 05/11/14 11:47, Sage Weil wrote:
On Wed, 5 Nov 2014, Mark Kirkwood wrote:
On 04/11/14 22:02, Sage Weil wrote:
On Tue, 4 Nov 2014, Blair Bethwaite wrote:
On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
In the Ceph session at the OpenStack summit someone asked what the
CephFS
Hi,
I am following
http://docs.ceph.com/docs/master/radosgw/federated-config/ with giant
(0.88-340-g5bb65b3). I figured I'd do the simple case first:
- 1 region
- 2 zones (us-east, us-west) master us-east
- 2 radosgw instances (client.radosgw.us-east-1, client.radosgw.us-west-1)
- 1 ceph
On 21/11/14 14:49, Mark Kirkwood wrote:
The only things that look odd in the destination zone logs are 383
requests getting 404 rather than 200:
$ grep http_status=404 ceph-client.radosgw.us-west-1.log
...
2014-11-21 13:48:58.435201 7ffc4bf7f700 1 == req done
req=0x7ffca002df00
On 21/11/14 15:52, Mark Kirkwood wrote:
On 21/11/14 14:49, Mark Kirkwood wrote:
The only things that look odd in the destination zone logs are 383
requests getting 404 rather than 200:
$ grep http_status=404 ceph-client.radosgw.us-west-1.log
...
2014-11-21 13:48:58.435201 7ffc4bf7f700 1
On 21/11/14 16:05, Mark Kirkwood wrote:
On 21/11/14 15:52, Mark Kirkwood wrote:
On 21/11/14 14:49, Mark Kirkwood wrote:
The only things that look odd in the destination zone logs are 383
requests getting 404 rather than 200:
$ grep http_status=404 ceph-client.radosgw.us-west-1.log
...
2014
On 22/11/14 10:54, Yehuda Sadeh wrote:
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Fri Nov 21 02:13:31 2014
x-amz-copy-source:bucketbig/_multipart_big.dat.2/fjid6CneDQYKisHf0pRFOT5cEWF_EQr.meta
/bucketbig/_multipart_big.dat.2
On 25/11/14 11:58, Yehuda Sadeh wrote:
On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 22/11/14 10:54, Yehuda Sadeh wrote:
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Fri Nov 21 02:13:31 2014
x-amz-copy
It looks to me like you need to supply the *ids* of the pools, not
their names.
So do:
$ ceph osd dump # (or lspools)
note down the ids of the pools you want to use (suppose I have
cephfs_data 10 and cephfs_metadata 12):
$ ceph mds newfs 10 12 --yes-i-really-mean-it
On 26/11/14 11:30,
On 25/11/14 12:40, Mark Kirkwood wrote:
On 25/11/14 11:58, Yehuda Sadeh wrote:
On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 22/11/14 10:54, Yehuda Sadeh wrote:
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote
On 07/12/14 07:39, Sage Weil wrote:
Thoughts? Suggestions?
Would it make sense to include the radosgw-agent package in this
normalization too?
Regards
Mark
On 10/12/14 07:36, Vivek Varghese Cherian wrote:
Hi,
I am trying to integrate OpenStack Juno Keystone with the Ceph Object
Gateway (radosgw).
I want to use keystone as the users authority. A user that keystone
authorizes to access the gateway will also be created on the radosgw.
Tokens that
On 11/12/14 02:33, Vivek Varghese Cherian wrote:
Hi,
root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d
log-to-stderr
2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3), process
On 14/12/14 17:25, wang lin wrote:
Hi All
I set up my first ceph cluster according to instructions in
http://ceph.com/docs/master/start/quick-ceph-deploy/#storing-retrieving-object-data,
but I got
On 15/12/14 20:54, Vivek Varghese Cherian wrote:
Hi,
Do I need to overwrite the existing .db files and .txt file in
/var/lib/nssdb on the radosgw host with the ones copied from
/var/ceph/nss on the Juno node ?
Yeah - worth a try (we want to rule out any
On 15/12/14 17:44, ceph@panther-it.nl wrote:
I have the following setup:
Node1 = 8 x SSD
Node2 = 6 x SATA
Node3 = 6 x SATA
Having 1 node different from the rest is not going to help... you will
probably get better results if you spread the SSDs through all 3 nodes
and use SATA for osd
Looking at the blog, I notice he disabled the write cache before the
tests: doing this on my m550 resulted in *improved* dsync results (300
IOPS to 700 IOPS) - still not great obviously, but ... interesting.
So do experiment with the settings to see if you can get the 840's
working better for
to try
that don't require him to buy new SSDs!
Cheers
Mark
On 18/12/14 21:28, Udo Lembke wrote:
On 18.12.2014 07:15, Mark Kirkwood wrote:
While you can't do much about the endurance lifetime being a bit low,
you could possibly improve performance using a journal *file* that is
located
On 19/12/14 03:01, Lindsay Mathieson wrote:
On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote:
The effect of this is *highly* dependent on the SSD make/model. My m550s
work vastly better if the journal is a file on a filesystem as opposed
to a partition.
Obviously the Intel S3700/S3500
As others have mentioned, there is no reason you cannot run databases on
Ceph storage. I've been running/testing Postgres and Mysql/Mariadb on
Ceph RBD volumes for quite a while now - since version 0.50 (typically
inside KVM guests, but via the kernel driver it will work too).
With a
I managed to provoke this behavior by forgetting to include
'rbd_children' in the Ceph auth setup for the images and volumes
keyrings (https://ceph.com/docs/master/rbd/rbd-openstack/). Doing a:
$ ceph auth list
should reveal if all is well there.
Regards
Mark
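For reference, the caps from the rbd-openstack doc look roughly like the following (the client and pool names here are illustrative assumptions):

```shell
ceph auth caps client.cinder \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
```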
On 04/04/14 20:56, Mariusz
really understood the
cause.
Could you clarify what's going on that would cause that kind of
asymmetry?
on/tuning read caching on my underlying OSD nodes the situation will
improve but haven't dug into that yet.
~jpr
On 04/04/2014 04:46 AM, Mark
Wow - that is a bit strange:
$ cat /etc/issue
Ubuntu 13.10 \n \l
$ sudo ceph -v
ceph version 0.78-569-g6a4c50d (6a4c50d7f27d2e7632d8c017d09e864e969a05f7)
$ sudo ceph osd erasure-code-profile ls
default
myprofile
profile
profile1
I'd hazard a guess that some of your ceph components are at
I'm not sure if this is relevant, but my 0.78 (and currently building
0.79) are compiled from src git checkout (and packages built from the
same src tree using dpkg-buildpackage Debian/Ubuntu package builder).
Having said that - the above procedure *should* produce equivalent
binaries to the
Hi Karan,
Just to double check - run the same command after ssh'ing into each of
the osd hosts, and maybe again on the monitor hosts too (just in case
you have *some* hosts successfully updated to 0.79 and some still on
0.78).
Regards
Mark
On 08/04/14 22:32, Karan Singh wrote:
Hi Loic
Hi all,
I've noticed that objects are using twice their actual space for a few
minutes after they are 'put' via rados:
$ ceph -v
ceph version 0.79-42-g010dff1 (010dff12c38882238591bb042f8e497a1f7ba020)
$ ceph osd tree
# id    weight  type name       up/down reweight
-1      0.03998 root
On Wed, Apr 9, 2014 at 1:58 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hi all,
I've noticed that objects are using twice their actual space for a few
minutes after they are 'put' via rados:
$ ceph -v
ceph version 0.79-42-g010dff1 (010dff12c38882238591bb042f8e497a1f7ba020)
$ ceph osd tree
the code is
that some layer in the stack preallocates and then trims the
objects back down once writing stops, but I'd like some more data
points before I dig.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Apr 9, 2014 at 7:59 PM, Mark Kirkwood
mark.kirkw
)
weirdness.
Regards
Mark
On 10/04/14 15:41, Mark Kirkwood wrote:
Redoing (attached, 1st file is for 2x space, 2nd for normal). I'm seeing:
$ diff osd-du.0.txt osd-du.1.txt
924,925c924,925
2048 /var/lib/ceph/osd/ceph-1/current/5.1a_head/file__head_2E6FB49A__5
2048 /var/lib/ceph/osd/ceph-1
On 11/04/14 06:35, Udo Lembke wrote:
Hi,
On 10.04.2014 20:03, Russell E. Glaue wrote:
I am seeing the same thing, and was wondering the same.
We have 16 OSDs on 4 hosts. The filesystem is XFS. The OS is CentOS 6.4, ceph
version 0.72.2.
I am importing a 3.3TB disk image into a rbd image.
At
for xfs filesystem
with 'allocsize' mount option.
Check out this
http://serverfault.com/questions/406069/why-are-my-xfs-filesystems-suddenly-consuming-more-space-and-full-of-sparse-file
On Thu, Apr 10, 2014 at 5:41 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz mailto:mark.kirkw...@catalyst.net.nz
clusters.
On 14/04/14 13:30, Mark Kirkwood wrote:
Yeah, I was looking at preallocation as the likely cause, but your link
is way better than anything I'd found (especially with the likely commit
- speculative preallocation - mentioned
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git
Some discussion about this can be found here:
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
Cheers
Mark
On 25/04/14 08:25, Brian Rak wrote:
Is there a recommended way to copy an RBD image between two different
clusters?
My initial thought was 'rbd export - | ssh rbd import -',
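Spelled out, that pipeline might look like this (pool, image and host names are assumptions):

```shell
# stream an image from the local cluster straight into a remote one
rbd -p rbd export myimage - | \
    ssh remote-host "rbd -p rbd import - myimage"
```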
On 01/05/14 18:26, Wido den Hollander wrote:
Ok, thanks for the information. Just something that comes up in my mind:
- Repository location and access
- Documentation efforts for non-RHEL platforms
- Support for non-RHEL platforms
I'm confident that RedHat will make Ceph bigger and better, but
So that's two hosts - if this is a new cluster chances are the pools
have replication size=3, and won't place replica pgs on the same host...
'ceph osd dump' will let you know if this is the case. If it is, either
reduce size to 2, add another host, or edit your crush rules to allow
replica pgs
'.users.uid' replicated size 2 min_size 2 crush_ruleset 0
object_hash rjenkins pg_num 8 pgp_num 8 last_change 56 owner
18446744073709551615 flags hashpspool stripe_width 0
Kind Regards,
Georg
On 09.05.2014 08:29, Mark Kirkwood wrote:
So that's two hosts - if this is a new cluster chances
One thing that would put me off the 530 is its lack of power-off safety
(capacitor or similar). Given the job of the journal, I think an SSD
that has some guarantee of write integrity is crucial - so yeah, the
DC3500 or DC3700 seem like the best choices.
Regards
Mark
On 13/05/14 21:31, Christian
On 15/05/14 11:36, Tyler Wilson wrote:
Hey All,
I am setting up a new storage cluster that absolutely must have the best
read/write sequential speed @ 128k and the highest IOps at 4k read/write
as possible.
My current specs for each storage node are currently;
CPU: 2x E5-2670V2
Motherboard: SM
On 15/05/14 20:23, Séguin Cyril wrote:
Hello,
I would like to test Ceph's I/O performance. I'm searching for a
popular benchmark to create a dataset and run I/O tests that I can reuse
for other distributed file systems and other tests. I have tried
filebench but it seems not to be
On 05/06/14 17:01, yalla.gnan.ku...@accenture.com wrote:
Hi All,
I have a ceph storage cluster with four nodes. I have created block storage
using cinder in openstack and ceph as its storage backend.
So, I see a volume is created in ceph in one of the pools. But how to get
information like
I compile and run from the src build quite often. Here is my recipe:
$ ./autogen.sh
$ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
--with-radosgw
$ time make
$ sudo make install
$ sudo cp src/init-ceph /etc/init.d/ceph
$ sudo cp src/init-radosgw /etc/init.d/radosgw
$ sudo
I can reproduce this in:
ceph version 0.81-423-g1fb4574
on Ubuntu 14.04. I have a two osd cluster with data on two sata spinners
(WD Blacks) and journals on two ssd (Crucial m4's). I'm getting about 3.5
MB/s (kernel and librbd) using your dd command with direct on. Leaving
off direct I'm
On 22/06/14 14:09, Mark Kirkwood wrote:
Upgrading the VM to 14.04 and retesting the case *without* direct I get:
- 164 MB/s (librbd)
- 115 MB/s (kernel 3.13)
So managing to almost get native performance out of the librbd case. I
tweaked both filestore max and min sync intervals (100 and 10
Good point, I had neglected to do that.
So, amending my ceph.conf [1]:
[client]
rbd cache = true
rbd cache size = 2147483648
rbd cache max dirty = 1073741824
rbd cache max dirty age = 100
and also the VM's xml def to include cache to writeback:
<disk type='network' device='disk'>
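The truncated disk element with writeback caching enabled might look something like this (monitor host, pool and image names are my assumptions):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/vm-disk'>
    <host name='ceph-mon1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```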
On 23/06/14 18:51, Christian Balzer wrote:
On Sunday, June 22, 2014, Mark Kirkwood mark.kirkw...@catalyst.net.nz
rbd cache max dirty = 1073741824
rbd cache max dirty age = 100
Mark, you're giving it a 2GB cache.
For a write test that's 1GB in size.
Aggressively set is a bit
On 24/06/14 17:37, Alexandre DERUMIER wrote:
Hi Greg,
So the only way to improve performance would be to not use O_DIRECT (as this
should bypass rbd cache as well, right?).
yes, indeed O_DIRECT bypasses the cache.
BTW, Do you need to use mysql with O_DIRECT ? default innodb_flush_method is