Looks like the write performance of the keyvalue backend is worse than the file store
backend with version 0.87.
For my current cluster, the writing speed is only 1.5MB/s - 4.5MB/s.
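For anyone who wants to reproduce a comparable number, rados bench against a
throwaway pool is one way to measure raw write throughput; a rough sketch
(pool name and PG count are just examples):
  ceph osd pool create kvtest 128
  rados bench -p kvtest 60 write --no-cleanup
  rados -p kvtest cleanup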
From: ceph-users mailto:ceph-users-boun...@lists.ceph.com
Sent: 2014-10-31 08:23
To:
I've had the same issue before during a cluster rebalancing, and after
restarting one of the daemons (can't remember now if it was one of the OSDs
or MONs) the values reset to something more sane and the cluster eventually
recovered when it reached 0 objects degraded.
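For reference, restarting a single daemon on the Ubuntu systems of that era
looks roughly like this (the id 12 is just a placeholder):
  sudo restart ceph-osd id=12         # upstart (Ubuntu)
  sudo service ceph restart osd.12    # sysvinit equivalent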
Additionally when you have a
Hi guys,
I'm having some issues trying to view the logs for a bucket (the
background to this is we're having trouble handling some multipart
uploads over 1000 parts in size, but that's one for another post).
Using `radosgw-admin log list` I can see the logs themselves, e.g.:
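A specific log object can then be dumped by name (the object name below is
purely illustrative, not one from my cluster):
  radosgw-admin log list
  radosgw-admin log show --object=2014-10-31-12-default.4141.1-mybucket   # hypothetical object name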
Thanks. It would be nice though to have a repo where all the packages
are. We lock our packages ourselves, so we would just need to bump the
version instead of adding a repo for each major version:)
- Message from Irek Fasikhov malm...@gmail.com -
Date: Thu, 30 Oct 2014
Hello list,
When we upload a large multipart upload to RGW and it fails, we want
to abort the upload. On large multipart uploads, with say 1000+ parts,
it will consistently return 500 errors when trying to abort the
upload. If you persist and ignore the 500s it will eventually abort
the upload.
Hi Support,
I am attempting to test a Ceph storage cluster on 3 nodes. I have installed
Ubuntu 12.04 LTS on all 3 nodes.
While attempting to create the monitors for node2 and node3, I am getting the
error below:
[ceph-node3][ERROR ] admin_socket: exception getting command
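In case it helps, the usual ceph-deploy sequence for declaring and creating
the initial monitors looks roughly like this (hostnames are placeholders;
this is only a sketch, not a diagnosis of the error above):
  ceph-deploy new ceph-node1 ceph-node2 ceph-node3
  ceph-deploy mon create-initial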
Might be worth looking at the new download infrastructure. If you always
want the latest you can try:
http://download.ceph.com/ceph/latest/
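For example, pointing apt at a fixed release branch instead of /latest/ might
look like this on Ubuntu (the path and codename are assumptions to check
against the new infrastructure):
  echo deb http://download.ceph.com/debian-giant/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt-get update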
On Oct 31, 2014 6:17 AM, Kenneth Waegeman kenneth.waege...@ugent.be
wrote:
Thanks. It would be nice though to have a repo where all the packages are.
We
Any hint?
On 30/10/2014 15:22, Massimiliano Cuttini wrote:
Dear Ceph users,
I just received 2 fresh new servers and I'm starting to set up my
Ceph cluster.
The first step is: create the admin node in order to control the whole
cluster remotely.
I have a big cluster of XEN servers and
On Friday, October 31, 2014, Massimiliano Cuttini m...@phoenixweb.it wrote:
Any hint?
On 30/10/2014 15:22, Massimiliano Cuttini wrote:
Dear Ceph users,
I just received 2 fresh new servers and I'm starting to set up my Ceph
cluster.
The first step is: create the admin node in
On 29.10.2014 18:29, Thomas Alrin wrote:
Hi all,
I'm new to Ceph. What is wrong with this Ceph cluster? How can I make the
status change to HEALTH_OK? Please help.
With the current default pool size of 3 and the default crush rule you
need at least 3 OSDs on separate nodes for a new ceph cluster to reach HEALTH_OK.
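If this is only a test cluster with fewer OSD hosts, a common workaround is
to lower the pool replication instead; a rough sketch (the rbd pool is just
an example, and size 2 is only sensible for testing):
  ceph osd pool get rbd size
  ceph osd pool set rbd size 2
  ceph osd pool set rbd min_size 1
  ceph health detail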
I think I may have answered my own question:
http://tracker.ceph.com/issues/8553
Looks like this is fixed in Giant, which we'll be deploying as soon as
0.87.1 is out ;)
Thanks
Dane
On 31 October 2014 09:08, Dane Elwell dane.elw...@gmail.com wrote:
Hi guys,
I'm having some issues trying to
Hi Sage Weil
Thanks for your reply. Yes, I'm using Ceph v0.86.
I'm reporting some related bugs, hope you can help me:
2014-10-31 15:34:52.927965 7f85efb6b700 0 osd.21 104744 do_command r=0
2014-10-31 15:34:53.105533 7f85f036c700 -1 *** Caught signal
(Segmentation fault) **
in thread 7f85f036c700
I'll test this by manually introducing corrupted data into the ZFS filesystem,
check how ZFS+Ceph interact during a detected file failure/corruption, how it
recovers and any manual steps required, and report back with the results.
As for compression, using lz4 the CPU impact is around
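For the detection/repair half of that test, ZFS normally surfaces and repairs
corruption through a scrub, and lz4 is enabled per dataset; a sketch with a
hypothetical pool/dataset name:
  zfs set compression=lz4 tank/ceph-osd0
  zpool scrub tank
  zpool status -v tank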
Hi there,
I have a few questions regarding pools, radosgw and logging:
1) How do I turn on radosgw logs for a specific pool?
I have this in my config:
rgw enable ops log = false
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
but when I do
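For what it's worth, with the usage log enabled as above it can normally be
read back per user with radosgw-admin; a sketch assuming a hypothetical uid
of johndoe:
  radosgw-admin usage show --uid=johndoe --show-log-entries=true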
Hi All,
I have been working with Openstack Swift + radosgw to stress the whole object
storage from the Swift side (I have been creating containers and objects for
days now) but can't actually find the limitation when it comes to the number of
accounts, containers, objects that can be created
On Fri, Oct 31, 2014 at 9:55 AM, Narendra Trivedi (natrived)
natri...@cisco.com wrote:
Hi All,
I have been working with Openstack Swift + radosgw to stress the whole
object storage from the Swift side (I have been creating containers and
objects for days now) but can’t actually find the
Hi cephers,
I'm designing a new production-like Ceph cluster, but I've run into an issue.
I have 4 nodes with 1 disk for the OS and 3 disks for OSDs on each node. However, I
only have 2 extra disks to use as OSD journals.
My first question is whether it is possible to use a remote disk partition
Thanks, Gregory. Do you know how I can find out where the number of buckets
for a particular user has been configured?
--Narendra
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Friday, October 31, 2014 11:58 AM
To: Narendra Trivedi (natrived)
Cc:
Hi Dan,
I don't know why NBD wouldn't function, but I also don't think it's the
way you should go. Putting the journals on the OSD disks isn't a
terrible option, but you will suffer the expected double write penalty.
If your system disk is an SSD with fast sequential write throughput
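As a rough worked example of that double write penalty: a spindle that
sustains about 100 MB/s of sequential writes can deliver only about 50 MB/s
of client writes when the journal lives on the same disk, because every
client write hits the disk twice (once for the journal, once for the data),
and the seeking between the two regions typically pushes the real figure
lower still.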
It defaults to 1000 and can be set via the radosgw-admin utility or the
admin API via the max-buckets param.
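A quick sketch, assuming a hypothetical uid of johndoe:
  radosgw-admin user info --uid=johndoe | grep max_buckets
  radosgw-admin user modify --uid=johndoe --max-buckets=2000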
On Fri, Oct 31, 2014 at 10:01 AM, Narendra Trivedi (natrived)
natri...@cisco.com wrote:
Thanks, Gregory. Do you know how I can find out where the number of buckets
for a particular
On Fri, Oct 31, 2014 at 3:59 AM, Dane Elwell dane.elw...@gmail.com wrote:
Hello list,
When we upload a large multipart upload to RGW and it fails, we want
to abort the upload. On large multipart uploads, with say 1000+ parts,
it will consistently return 500 errors when trying to abort the
On Fri, Oct 31, 2014 at 8:06 AM, Dane Elwell dane.elw...@gmail.com wrote:
I think I may have answered my own question:
http://tracker.ceph.com/issues/8553
Looks like this is fixed in Giant, which we'll be deploying as soon as
0.87.1 is out ;)
Thanks
Dane
On 31 October 2014 09:08, Dane
On Fri, Oct 31, 2014 at 9:48 AM, Marco Garcês ma...@garces.cc wrote:
Hi there,
I have a few questions regarding pools, radosgw and logging:
1) How do I turn on radosgw logs for a specific pool?
What do you mean? What do you want to log?
I have this in my config:
rgw enable ops log =
Hey cephers!
Another Ceph Developer Summit is behind us, one that we thought went
quite well. In the usual spirit of democracy and openness however,
we'd love to hear your thoughts on both the event itself and the
process leading up to it. To make it easy to collect responses we
have assembled
Hi all,
I am working on a cluster that had a disk fill up. We've attempted to
balance and recover, but we're seeing a strange negative number in the
degraded objects (see below). Is this by design? Or is this a bug?
Additionally, is there any way to recover from this negative state?
Hi
Can multiple Ceph nodes work on one single shared disk? Just like Red Hat
Global FS or Oracle OCFS2.
CephFS, yes, but it's not considered production-ready.
You can also use an RBD volume and place OCFS2 on it and share it that way, too.
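A very rough sketch of the RBD + OCFS2 route (image name, size and mount
point are placeholders, and the OCFS2 cluster stack still has to be
configured on every node first):
  rbd create shared-disk --size 102400
  sudo rbd map shared-disk
  sudo mkfs.ocfs2 -L shared /dev/rbd/rbd/shared-disk
  sudo mount -t ocfs2 /dev/rbd/rbd/shared-disk /mnt/shared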
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
yang.bi...@zte.com.cn
Sent: Friday, October 31, 2014 12:22 AM
To:
No SLES RPMs this release or for Firefly. Is there an issue with building for
SLES, or is it just no longer targeted?
Bill
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Patrick
McGarry [patr...@inktank.com]
Sent: Friday, October 31, 2014
As I understand it SUSE does their own builds of things. Just on
cursory examination it looks like the following repo uses Firefly:
https://susestudio.com/a/HVbCUu/master-ceph
and there is some Calamari work going in here:
https://susestudio.com/a/eEqfPk/calamari-opensuse-13-1
My guess is that
You should start by upgrading to giant, many many bug fixes went in
between .86 and giant.
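A rough sketch of the package side of that upgrade on an Ubuntu node,
assuming the apt repository already points at giant (monitors are
conventionally restarted before OSDs):
  sudo apt-get update && sudo apt-get install ceph ceph-common
  sudo restart ceph-mon-all    # monitors first
  sudo restart ceph-osd-all    # then OSDs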
-Sam
On Fri, Oct 31, 2014 at 8:54 AM, Ta Ba Tuan tua...@vccloud.vn wrote:
Hi Sage Weil
Thanks for your reply. Yes, I'm using Ceph v0.86.
I'm reporting some related bugs, hope you can help me:
2014-10-31
Hi all,
My workload is mostly writes, but when the writes reach a certain
throughput (IOPS-wise not much higher) the read throughput tanks. This
seems to be impacting my VMs' responsiveness overall. Reads recover
after the write throughput drops.
Is there any way to prioritize reads over
Hi Simon,
Have you tried using the Deadline scheduler on the Linux nodes? The deadline
scheduler prioritises reads over writes. I believe it tries to service all
reads within 500ms whilst writes can be delayed up to 5s.
I don’t know the exact effect Ceph will have over the top of this, but
I am already using deadline scheduler, with the default parameters:
read_expire=500
write_expire=5000
writes_starved=2
front_merges=1
fifo_batch=16
I remember tuning them before, didn't make a great difference.
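For reference, these knobs live in sysfs per block device; a sketch assuming
the OSD data disk is sdb:
  cat /sys/block/sdb/queue/scheduler
  echo deadline | sudo tee /sys/block/sdb/queue/scheduler
  echo 250 | sudo tee /sys/block/sdb/queue/iosched/read_expire
  echo 4 | sudo tee /sys/block/sdb/queue/iosched/writes_starved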
-Simon
On Fri, Oct 31, 2014 at 3:43 PM, Nick Fisk n...@fisk.me.uk wrote:
Hi Simon,
Hmmm, it sounds like you are just saturating the spindles to the point that
latency starts to climb to unacceptable levels. The problem being that no
matter how much tuning you apply, at some point the writes will have to start
being put down to the disk and at that point performance will
We have SSD journals, backend disks are actually on SSD-fronted bcache
devices in writeback mode. The client VMs have rbd cache enabled too...
-Simon
On Fri, Oct 31, 2014 at 4:07 PM, Nick Fisk n...@fisk.me.uk wrote:
Hmmm, it sounds like you are just saturating the spindles to the point
that
I think I might have to step out on this one; it sounds like you have all the
basics covered for best performance and I can’t think what else to suggest.
Sorry I couldn’t be of more help.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Xu
(Simon) Chen
Sent: 31
Hi all,
I'm having some issues while trying to activate a new OSD in a
new cluster. The prepare command ran fine, but then the activate
command failed:
ceph@cephbkdeploy01:~/desp-bkp-cluster$ ceph-deploy --overwrite-conf
disk prepare --fs-type btrfs ceph-bkp-osd01:sdf:/dev/sdc
Hi German,
if I'm right, the journal creation on /dev/sdc1 failed (perhaps because
you only specified /dev/sdc instead of /dev/sdc1?).
Do you have partitions on sdc?
Udo
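One way to test that is to pre-create the journal partition and point prepare
at it explicitly; a rough sketch reusing your device names (the 10G size is
just an example; run sgdisk on the OSD host):
  sudo sgdisk --new=1:0:+10G /dev/sdc
  ceph-deploy disk zap ceph-bkp-osd01:sdf
  ceph-deploy --overwrite-conf disk prepare --fs-type btrfs ceph-bkp-osd01:sdf:/dev/sdc1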
On 31.10.2014 22:02, German Anders wrote:
Hi all,
I'm having some issues while trying to activate a new osd in a
new
Hi Samuel and Sage,
I will upgrade to Giant soon. Thank you so much.
--
Tuan
HaNoi-VietNam
On 11/01/2014 01:10 AM, Samuel Just wrote:
You should start by upgrading to giant, many many bug fixes went in
between .86 and giant.
-Sam
On Fri, Oct 31, 2014 at 8:54 AM, Ta Ba Tuan tua...@vccloud.vn