Hi Xiaoxi,
Thanks for the very useful information.
Can you share more details? What is "terribly bad performance" compared
against, and with what kind of usage pattern?
I'm just interested in a key/value backend for better cost/performance without
expensive hardware such as SSD/Fusion-io.
Regards,
Satoru F
Compared to Filestore on SSD (we run LevelDB on top of SSD). The usage pattern
is RBD sequential write (64K * QD8) and random write (4K * QD8); reads seem on
par.
I would suspect a KV backend on HDD will be even worse, compared to Filestore on
HDD.
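For anyone wanting to reproduce a comparable load, here is a rough fio sketch of
that pattern (the pool and image names are placeholders, and the rbd ioengine has
to be available in your fio build):
# sequential 64K writes at queue depth 8 against an RBD image (names are examples)
fio --name=seq64k --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
    --rw=write --bs=64k --iodepth=8 --runtime=120 --time_based
# random 4K writes at queue depth 8
fio --name=rand4k --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
    --rw=randwrite --bs=4k --iodepth=8 --runtime=120 --time_based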
On 2014-12-02 15:03, Yehuda Sadeh wrote:
On Mon, Dec 1, 2014 at 4:26 PM, Ben wrote:
On 2014-12-02 11:25, Yehuda Sadeh wrote:
On Mon, Dec 1, 2014 at 4:23 PM, Ben wrote:
...
How can I tell if the shard has an object in it from the logs?
Search for a different sequence (e.g., search fo
Hm. Already exists.
And now I'm completely confused. Ok, so I'm trying to start over. I've
"ceph-deploy purge"'d all my machines a few times with "ceph-deploy
purgedata" intermixed. I've manually removed all the files I could see
that were generated, except my osd directories, which I appar
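In case it helps, a minimal reset sequence that I would expect to wipe a
ceph-deploy test cluster (hostnames are placeholders; purgedata removes
/var/lib/ceph, so double-check before running):
ceph-deploy purge node1 node2 node3        # remove the ceph packages
ceph-deploy purgedata node1 node2 node3    # remove /var/lib/ceph and /etc/ceph data
ceph-deploy forgetkeys                     # drop the locally cached keyrings
rm -f ceph.conf ceph.log *.keyring         # clean the admin working directory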
On Mon, Dec 1, 2014 at 8:06 AM, John Spray wrote:
> I meant to chime in earlier here but then the weekend happened, comments
> inline
>
> On Sun, Nov 30, 2014 at 7:20 PM, Wido den Hollander wrote:
>> Why would you want all CephFS metadata in memory? With any filesystem
>> that will be a problem.
On Sun, Nov 30, 2014 at 1:15 PM, Andrei Mikhailovsky wrote:
> Greg, thanks for your comment. Could you please share what OS, kernel and
> any nfs/cephfs settings you've used to achieve such good stability?
> Also, what kind of tests have you run to check that?
We're just doing it on our te
On Tue, Nov 25, 2014 at 1:00 AM, Dan Van Der Ster
wrote:
> Hi Greg,
>
>
>> On 24 Nov 2014, at 22:01, Gregory Farnum wrote:
>>
>> On Thu, Nov 20, 2014 at 9:08 AM, Dan van der Ster
>> wrote:
>>> Hi all,
>>> What is compatibility/incompatibility of dumpling clients to talk to firefly
>>> and giant
Hi all, I have a problem with some incomplete pgs. Here’s the backstory: I had
a pool that I had accidentally left with a size of 2. On one of the osd nodes,
the system hdd started to fail and I attempted to rescue it by sacrificing one
of my osd nodes. That went OK and I was able to bring the nod
On 25/11/14 12:40, Mark Kirkwood wrote:
On 25/11/14 11:58, Yehuda Sadeh wrote:
On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood
wrote:
On 22/11/14 10:54, Yehuda Sadeh wrote:
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
wrote:
Fri Nov 21 02:13:31 2014
x-amz-copy-source:bucketbig/_multip
You have to be the root user, whether via login, su or sudo.
So no, you don't have to use sudo - just log on as root.
On 2 December 2014 at 00:05, Jiri Kanicky wrote:
> Hi.
>
> Do I have to install sudo in Debian Wheezy to deploy Ceph successfully? I
> don't normally use sudo.
>
> Thank you
> Jiri
Hi Paulo,
Thanks a lot. I’ve just added the backports line below to /etc/apt/sources.list:
deb http://ftp.debian.org/debian/ wheezy-backports main
and ran: apt-get update
But ceph-deploy still throws warnings. So I added the packages manually (to take
them from wheezy-backports):
apt-get -t wheezy-backports
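For the record, the full sequence I'd expect is something like the following
(the package name is a placeholder - substitute whatever package the warning
names):
echo 'deb http://ftp.debian.org/debian/ wheezy-backports main' >> /etc/apt/sources.list
apt-get update
apt-get -t wheezy-backports install <package-name>   # pull the backports version explicitly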
I'm still using the default values, mostly because I haven't had time to
test.
On Thu, Nov 27, 2014 at 2:44 AM, Andrei Mikhailovsky
wrote:
> Hi Craig,
>
> Are you keeping the filestore, disk and op threads at their default
> values? or did you also change them?
>
> Cheers
>
>
> Tuning these valu
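If it helps while testing, the values an OSD is actually running with can be
read from its admin socket (the daemon id and socket path are examples; the
defaults are, as far as I recall, osd_op_threads=2, osd_disk_threads=1 and
filestore_op_threads=2, but check your release's docs):
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | egrep 'osd_op_threads|osd_disk_threads|filestore_op_threads'
# or change them at runtime for a quick experiment:
ceph tell osd.\* injectargs '--osd_op_threads 4 --filestore_op_threads 4'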
On Tue, Dec 2, 2014 at 12:38 AM, Ken Dreyer wrote:
> On 11/28/14 7:04 AM, Haomai Wang wrote:
> > Yeah, the ceph source repo doesn't contain the Kinetic header file and library
> > source; you need to install the kinetic devel package separately.
>
> Hi Haomai,
>
> I'm wondering if we need AC_CHECK_HEADER([kin
Sorry, it's a typo:
s/WITH_KINETIC/HAVE_KINETIC/
:-)
On Tue, Dec 2, 2014 at 12:51 AM, Julien Lutran
wrote:
>
> Sorry, it didn't change anything:
>
> root@host:~/sources/ceph# head -12 src/os/KeyValueDB.cc
> // -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
> // vim: ts=8 sw
Sorry, it didn't change anything:
root@host:~/sources/ceph# head -12 src/os/KeyValueDB.cc
// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
// vim: ts=8 sw=2 smarttab
#include "KeyValueDB.h"
#include "LevelDBStore.h"
#ifdef HAVE_LIBROCKSDB
#include "RocksDBStore.h"
#endif
On 11/28/14 7:04 AM, Haomai Wang wrote:
> Yeah, the ceph source repo doesn't contain the Kinetic header file and library
> source; you need to install the kinetic devel package separately.
Hi Haomai,
I'm wondering if we need AC_CHECK_HEADER([kinetic/kinetic.h], ...) in
configure.ac to double-check when the us
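Until something like that is in configure.ac, a quick manual pre-flight check
for the header is possible from the shell (this assumes g++ and that the
kinetic C++ client installs its headers on the default include path):
echo '#include <kinetic/kinetic.h>' | g++ -std=c++11 -x c++ -fsyntax-only - \
  && echo 'kinetic headers found' \
  || echo 'install the kinetic devel package first'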
Hi all,
http://ceph.com/docs/master/rados/operations/crush-map/#crush-tunables
described how to set the tunables to legacy, argonaut, bobtail, firefly
or optimal.
But how can I see which profile is active in a Ceph cluster?
With "ceph osd getcrushmap" I don't get much info
(only "tunable ch
Thank you Lionel,
Indeed, I had forgotten about size > min_size. I have set min_size to 1 and my
cluster is UP now. I have deleted the crashed osd and set size to 3 and min_size
to 2.
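For anyone following along, the pool commands involved look roughly like this
(pool name taken from the log output quoted later in this thread; adjust to
your own pools):
ceph osd pool set .rgw.buckets min_size 1   # temporarily allow recovery with a single valid replica
# ... wait for the incomplete PGs to recover ...
ceph osd pool set .rgw.buckets size 3
ceph osd pool set .rgw.buckets min_size 2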
---
With regards,
Stanislav
01.12.2014, 19:15, "Lionel Bouton" :
> On 01/12/2014 17:08, Lionel Bouton wrote:
On Fri, Nov 28, 2014 at 1:48 PM, Florian Haas wrote:
> Out of curiosity: would it matter at all whether or not a significant
> fraction of the files in CephFS were hard links? Clearly the only
> thing that differs in metadata between individual hard-linked files is
> the file name, but I wonder if
On 01/12/2014 17:08, Lionel Bouton wrote:
> I may be wrong here (I'm surprised you only have 4 incomplete pgs, I'd
> expect ~1/3rd of your pgs to be incomplete given your "ceph osd tree"
> output) but reducing min_size to 1 should be harmless and should
> unfreeze the recovering process.
Ignore
On 01/12/2014 15:09, Butkeev Stas wrote:
> pg 13.2 is incomplete, acting [1,3] (reducing pool .rgw.buckets min_size from
> 2 may help; search ceph.com/docs for 'incomplete')
The answer is in the logs: your .rgw.buckets pool is using min_size = 2.
So it doesn't have enough valid pg replicas to s
I meant to chime in earlier here but then the weekend happened, comments inline
On Sun, Nov 30, 2014 at 7:20 PM, Wido den Hollander wrote:
> Why would you want all CephFS metadata in memory? With any filesystem
> that will be a problem.
The latency associated with a cache miss (RADOS OMAP dirfra
On Mon, Dec 01, 2014 at 05:09:31PM +0300, Butkeev Stas wrote:
> Hi all,
> I have Ceph cluster+rgw. Now I have problems with one of OSD, it's down now.
> I check ceph status and see this information
>
> [root@node-1 ceph-0]# ceph -s
> cluster fc8c3ecc-ccb8-4065-876c-dc9fc992d62d
> health
Hi!
I had a very similar issue a few days ago.
For me it wasn't too much of a problem since the cluster was new
without data and I could force recreate the PGs. I really hope that in
your case it won't be necessary to do the same thing.
As a first step try to reduce the min_size from 2 to 1
Hi guys,
I'm interested in using a key/value store as the backend of a Ceph OSD.
In the Firefly release, LevelDB support was mentioned as experimental;
is it the same status in the Giant release?
Regards,
Satoru Funai
Hi all,
I have a Ceph cluster + rgw. Now I have a problem with one of the OSDs; it's down. I
check ceph status and see this information:
[root@node-1 ceph-0]# ceph -s
cluster fc8c3ecc-ccb8-4065-876c-dc9fc992d62d
health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck
unclean
Is there a place I can download the entire repository for giant?
I'm really just looking for an rsync server that presents all the files
here: http://download.ceph.com/ceph/giant/centos6.5/
I know that eu.ceph.com runs one, but I'm not sure how up to date that
is (because of http://eu.ceph.com
Thank you, Paulo.
Metadata = MDS, so the metadata server should have CPU power.
--Roman
On 14-11-28 05:34 PM, Paulo Almeida wrote:
On Fri, 2014-11-28 at 16:37 -0500, Roman Naumenko wrote:
And if I understand correctly, monitors are the access points to the
cluster, so they should provide enough a
Thanks for your input. We will see what we can find out
with the logs and how to proceed from there.
Range query is not that important with today's SSDs: you can see very high
random read IOPS in SSD specs, getting higher day by day. The key problem
here is trying to exactly match one query (get/put) to one SSD
IO (read/write), eliminating the read/write amplification. We kind of believe
Hmm, src/os/KeyValueDB.cc lacks these lines:
#ifdef WITH_KINETIC
#include "KineticStore.h"
#endif
On Mon, Dec 1, 2014 at 6:14 PM, Julien Lutran wrote:
> I'm sorry but the compilation still fails after including the cpp-client
> headers :
>
>
> CXX os/libos_la-KeyValueDB.lo
> os/KeyValueDB.c
Exactly, I'm just looking forward to a better DB backend suitable for
KeyValueStore. It may be a traditional B-tree design.
Originally I thought Kinetic was a good backend, but it doesn't support
range queries :-(
On Mon, Dec 1, 2014 at 10:04 PM, Chen, Xiaoxi wrote:
> We have tested it for a while, b
We have tested it for a while; basically it seems kind of stable but shows
terribly bad performance.
This is not the fault of Ceph, but of LevelDB, or more generally, of all K-V
storage with an LSM design (RocksDB, etc); the LSM tree structure naturally
introduces very large write amplification, 10X to
Hi.
Do I have to install sudo in Debian Wheezy to deploy Ceph successfully? I
don't normally use sudo.
Thank you
Jiri
> On 01 Dec 2014, at 13:37, Daniel Schneller
> wrote:
>
> On 2014-12-01 10:03:35, Dan Van Der Ster said:
>
>> Which version of Ceph are you using? This could be related:
>> http://tracker.ceph.com/issues/9487
>
> Firefly. I had seen this ticket earlier (when deleting a whole pool) and
Yeah, it's mainly used in test environments.
On Mon, Dec 1, 2014 at 6:29 PM, Satoru Funai wrote:
> Hi guys,
> I'm interested in to use key/value store as a backend of Ceph OSD.
> When firefly release, LevelDB support is mentioned as experimental,
> is it same status on Giant release?
> Regards,
>
> Satoru Fun
>>Does it work with virtio-blk if you attach the RBD as a LUN?
virtio-blk doesn't support discard and trimming.
>> Supposedly, SCSI pass-through works in this mode, e.g.
SCSI pass-through works only with virtio-scsi, not virtio-blk.
>>However, it seems that virtio-scsi is slowly becoming preferred o
Hi Andrei!
I had a similar setting with replicated size 2 and min_size also 2.
Changing that didn't change the status of the cluster.
I've also tried to remove the pools and recreate them without success.
Removing and re-adding the OSDs also didn't have any influence!
Therefore and since I d
On 2014-12-01 10:03:35, Dan Van Der Ster said:
Which version of Ceph are you using? This could be related:
http://tracker.ceph.com/issues/9487
Firefly. I had seen this ticket earlier (when deleting a whole pool) and hoped
the backport of the fix would be available some time soon. I must
On 01/12/14 10:22, Alexandre DERUMIER wrote:
>
> Yes, it's working fine.
>
> (you need to use virtio-scsi and enable discard option)
>
Does it work with virtio-blk if you attach the RBD as a LUN? Supposedly,
SCSI pass-through works in this mode, e.g.
...
However, it seems that virtio-s
Ilya,
I see. My server has 24GB of RAM + 3GB of swap. While running the tests,
I've noticed that the server had 14GB of RAM shown as cached and only 2MB
used from swap. Not sure if this is helpful to your debugging.
Andrei
--
Andrei Mikhailovsky
Director
Arhont Information Se
On Mon, Dec 1, 2014 at 1:39 PM, Andrei Mikhailovsky wrote:
> Ilya,
>
> I will try doing that once again tonight as this is a production cluster and
> when dds trigger that dmesg error the cluster's io becomes very bad and I
> have to reboot the server to get things on track. Most of my vms start
>
On Mon, Dec 1, 2014 at 1:09 PM, Dan Van Der Ster
wrote:
> Hi Ilya,
>
>> On 28 Nov 2014, at 17:56, Ilya Dryomov wrote:
>>
>> On Fri, Nov 28, 2014 at 5:46 PM, Dan Van Der Ster
>> wrote:
>>> Hi Andrei,
>>> Yes, I’m testing from within the guest.
>>>
>>> Here is an example. First, I do 2MB reads whe
Ilya,
I will try doing that once again tonight as this is a production cluster and
when dds trigger that dmesg error the cluster's io becomes very bad and I have
to reboot the server to get things on track. Most of my vms start having 70-90%
iowait until that server is rebooted.
I've actuall
I'm sorry but the compilation still fails after including the cpp-client
headers :
CXX os/libos_la-KeyValueDB.lo
os/KeyValueDB.cc: In static member function 'static KeyValueDB*
KeyValueDB::create(CephContext*, const string&, const string&)':
os/KeyValueDB.cc:18:16: error: expected type
Hi Ilya,
> On 28 Nov 2014, at 17:56, Ilya Dryomov wrote:
>
> On Fri, Nov 28, 2014 at 5:46 PM, Dan Van Der Ster
> wrote:
>> Hi Andrei,
>> Yes, I’m testing from within the guest.
>>
>> Here is an example. First, I do 2MB reads when the max_sectors_kb=512, and
>> we see the reads are split into 4
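For reference, a rough way to inspect and lift that limit on a mapped krbd
device (the device name rbd0 is just an example, and the kernel may refuse
values above the hardware limit):
cat /sys/block/rbd0/queue/max_sectors_kb           # 512 means a 2MB read is split into four 512KB requests
echo 4096 > /sys/block/rbd0/queue/max_sectors_kb   # raise the cap, if the kernel allows it
dd if=/dev/rbd0 of=/dev/null bs=2M count=64 iflag=direct   # re-test with 2MB direct reads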
Hi,
Which version of Ceph are you using? This could be related:
http://tracker.ceph.com/issues/9487
See "ReplicatedPG: don't move on to the next snap immediately"; basically, the
OSD is getting into a tight loop "trimming" the snapshot objects. The fix above
breaks out of that loop more frequent
Hi!
We take regular (nightly) snapshots of our Rados Gateway Pools for
backup purposes. This allows us - with some manual pokery - to restore
clients' documents should they delete them accidentally.
The cluster is a 4 server setup with 12x4TB spinning disks each,
totaling about 175TB. We are run
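For anyone curious, pool-level snapshots like these can be taken with plain
rados commands along roughly these lines (the pool name and snapshot naming
scheme are only examples):
rados -p .rgw.buckets mksnap nightly-$(date +%Y%m%d)   # create tonight's pool snapshot
rados -p .rgw.buckets lssnap                           # list existing pool snapshots
rados -p .rgw.buckets rmsnap nightly-20141101          # drop an old one when rotating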
>>I think if you enable TRIM support on your RBD, then run fstrim on your
>>filesystems inside the guest (assuming ext4 / XFS guest filesystem),
>>Ceph should reclaim the trimmed space.
Yes, it's working fine.
(you need to use virtio-scsi and enable discard option)
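A minimal sketch of what that looks like on a plain qemu command line (image
and pool names plus the cache mode are examples; with libvirt the equivalent
is a virtio-scsi controller and discard='unmap' on the disk driver):
qemu-system-x86_64 ... \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=rbd:rbd/vm-disk,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
    -device scsi-hd,bus=scsi0.0,drive=drive0
# then, inside the guest, on an ext4/XFS filesystem:
fstrim -v /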
- Original Message -
On Mon, Dec 1, 2014 at 12:30 AM, Andrei Mikhailovsky wrote:
>
> Ilya, further to your email I have switched back to the 3.18 kernel that
> you've sent and I got similar looking dmesg output as I had on the 3.17
> kernel. Please find it attached for your reference. As before, this is the
> command
[celtic][DEBUG ] create the mon path if it does not exist
mkdir /var/lib/ceph/mon/
2014-12-01 4:32 GMT+03:00 K Richard Pixley :
> What does this mean, please?
>
> --rich
>
> ceph@adriatic:~/my-cluster$ ceph status
> cluster 1023db58-982f-4b78-b507-481233747b13
> health HEALTH_OK
>