It's ok for KeyValueDB.cc now, but I have another problem with
src/os/KineticStore.h :
http://pastebin.com/nhSDQs70
On 12/01/2014 05:58 PM, Haomai Wang wrote:
Sorry, it's a typo:
s/WITH_KINETIC/HAVE_KINETIC/
:-)
On Tue, Dec 2, 2014 at 12:51 AM, Julien Lutran julien.lut...@ovh.net
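For illustration, the fix amounts to renaming the guard macro wherever it appears; a hedged sketch (the grep/sed invocation is one way to apply it, not the thread's exact steps):
grep -rl 'WITH_KINETIC' src/ | xargs sed -i 's/WITH_KINETIC/HAVE_KINETIC/g'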
Hi all,
I used ceph-deploy to deploy a cluster (quick start), but the cluster name is
ceph. Where can I specify the cluster name in the quick-start steps?
Hi,
Thanks for your reply.
I found there is a file, ~/.cephdeploy.conf:
[ceph-deploy-global]
# Overrides for some of ceph-deploy's global flags, like verbosity or cluster
# name
cluster=myclustername
Can it work this way?
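For comparison, ceph-deploy also accepts the cluster name as a command-line flag; a hedged sketch (the host name is a placeholder):
ceph-deploy --cluster myclustername new node1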
On Dec 2, 2014, at 17:48, John Spray john.sp...@redhat.com wrote:
Start
Hi everyone,
I am using CephFS in a production environment now. So far so good.
However, I have a small problem: my ceph status (sometimes) reports a
client failing to respond to cache pressure, but this client has been
disconnected since last week. The ceph-fuse on this server was stopped
(intentionally).
Hi Thomas,
Is the warning appearing and disappearing if you run ceph status in
quick succession? This might be a symptom of
http://tracker.ceph.com/issues/10151, which will be fixed in the next
development release. If the warning is not oscillating on and off
when you run ceph status
Hi John,
Yes, it oscillates.
It really looks like what you have in the bug report, but with a ghost
client.
I'll wait for the next release then.
Thank you!
--
Thomas Lemarchand
Cloud Solutions SAS - Information Systems Manager
On Tue, 2014-12-02 at 11:43 +, John Spray
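One way to look for such a stale session on the MDS side, as a sketch (the daemon name is a placeholder):
ceph daemon mds.<name> session ls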
Hi all,
I was running a scrub while the cluster was in a rebalancing state.
From the OSD logs:
2014-12-02 18:50:26.934802 7fcc6b614700 0 log_channel(default) log [INF] :
0.3 scrub ok
2014-12-02 18:50:27.890785 7fcc6b614700 0 log_channel(default) log [INF] :
0.24 scrub ok
2014-12-02 18:50:31.902978
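For reference, a scrub of a single PG can also be kicked off manually; a sketch (the PG id here is taken from the logs above, purely for illustration):
ceph pg scrub 0.3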
On 12/01/2014 04:53 PM, Brian Rak wrote:
Is there a place I can download the entire repository for giant?
I'm really just looking for an rsync server that presents all the files
here: http://download.ceph.com/ceph/giant/centos6.5/
I know that eu.ceph.com runs one, but I'm not sure how up to
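A hedged sketch of mirroring from the eu.ceph.com rsync server mentioned above (the module path is an assumption):
rsync -av eu.ceph.com::ceph/giant/ ./giant/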
Hi everyone,
In my application I need to know which OSDs contain a given object.
I know how to get that information through the shell command: ceph osd map
[objectname] -p [poolname]
But I can't find the exact API (C, C++) to do this job.
Does anyone have an idea ?
Many
On Mon, Dec 1, 2014 at 1:39 PM, Andrei Mikhailovsky and...@arhont.com wrote:
Ilya,
I will try doing that once again tonight as this is a production cluster, and
when the dds trigger that dmesg error the cluster's I/O becomes very bad and I
have to reboot the server to get things on track. Most of
After a change in our crushmap, to support different rulesets for different
disk technologies, we ran into some weird behavior problems, such as OSDs
appearing in wrong places in the tree after restarting them.
During such an attempt, we had some PGs which would not get synchronized,
whatever
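If the tree placement is the part that regresses on restart, one commonly used knob is pinning CRUSH placement in ceph.conf; a sketch (this may or may not apply here):
[osd]
osd crush update on start = false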
Hello all. Hopefully this is not a stupid question at this point, but I
was hoping I could get some clarity on CentOS7 support. I'm well aware
that it works, but I wonder why it is not explicitly listed on the supported
OSes page:
http://ceph.com/docs/master/start/os-recommendations/
It's
On 12/02/2014 10:30 AM, Frank Even wrote:
Hello all. Hopefully this is not a stupid question at this point, but I
was hoping I could get some clarity on CentOS7 support. I'm well aware
that it works, but I wonder why it is not explicitly listed on the
supported OSes page:
We aren't currently doing any of the ongoing testing which that page covers
on CentOS 7. I think that's because it's going to flow through the same Red
Hat mechanisms as the RHEL7 builds, but I'm not on that team so I can't say
for sure.
-Greg
On Tue, Dec 2, 2014 at 9:39 AM Frank Even
I've found that it helps to shut down the osds before shutting down the
host. Especially if the node is also a monitor. It seems that some OSD
shutdown messages get lost while monitors are holding elections.
On Tue, Dec 2, 2014 at 10:10 AM, Christoph Adomeit
christoph.adom...@gatworks.de
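A hedged sketch of that shutdown order (OSD and monitor ids are placeholders; noout is taken from the thread below):
ceph osd set noout
service ceph stop osd.0
service ceph stop osd.1
service ceph stop mon.a      # stop the monitor last, if colocated
shutdown -h now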
You mean `ceph -w` and `ceph -s` didn't show any PGs in
the active+clean+scrubbing state while pool 2's PGs were being scrubbed?
I see that happen with my really small pools. I have a bunch of RadosGW
pools that contain 5 objects, and ~1kB of data. When I scrub the PGs in
those pools, they
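The scrubbing state on tiny pools can pass in well under a second; one way to catch it, as a sketch:
watch -n 0.5 'ceph pg dump | grep scrubbing'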
Hi Craig,
ceph -s is not showing any PGs in pool 2.
I have 3 pools: rbd and two pools that I created, testPool1 and testPool2.
I have more than 10TB of data in testPool1 and a good amount of data in
testPool2 as well.
I am not using the rbd pool.
Thanks and regards,
Mallikarjun Biradar
On 3 Dec 2014
On 12/02/2014 10:59 AM, Gregory Farnum wrote:
We aren't currently doing any of the ongoing testing which that page
covers on CentOS 7. I think that's because it's going to flow through
the same Red Hat mechanisms as the RHEL7 builds, but I'm not on that
team so I can't say for sure.
-Greg
So
Thanks Craig,
but this is what I am doing.
After setting ceph osd set noout, I do a service ceph stop osd.51,
and as soon as I do this I get growing numbers (200) of slow requests,
although there is not a big load on my cluster.
Christoph
On Tue, Dec 02, 2014 at 10:40:13AM -0800, Craig Lewis
There are two options that can reduce the load on your cluster from
rebuilding so much. If you don't have many OSDs, it would be better to mark
the ZFS OSD out first and let the cluster become healthy, then proceed to
remove the OSD and rebuild it.
If you have a lot of OSDs, then the CRUSH
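A sketch of the first option (the OSD id is a placeholder):
ceph osd out 12    # start draining the ZFS OSD
ceph -w            # watch until the cluster is healthy again
# then remove, reformat, and re-add the OSD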
On Mon, Dec 1, 2014 at 1:51 AM, Daniel Schneller
daniel.schnel...@centerdevice.com wrote:
I could not find any way to throttle the background deletion activity
(the command returns almost immediately).
I'm only aware of osd snap trim sleep. I haven't tried this since my
Firefly upgrade
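For reference, a hedged sketch of setting that throttle at runtime (the value is illustrative):
ceph tell osd.* injectargs '--osd-snap-trim-sleep 0.1'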
If you watch `ceph -w` while stopping the OSD, do you see
2014-12-02 11:45:17.715629 mon.0 [INF] osd.X marked itself down
?
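e.g., something along these lines, as a sketch:
ceph -w | grep 'marked itself down'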
On Tue, Dec 2, 2014 at 11:06 AM, Christoph Adomeit
christoph.adom...@gatworks.de wrote:
Thanks Craig,
but this is what I am doing.
After setting ceph osd set noout I
ceph osd dump | grep ^pool will map pool names to numbers. PGs are named
after the pool; PG 2.xx belongs to pool 2.
rados df will tell you how many items and how much data are in a pool.
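Put together, a sketch (the commented output lines are illustrative, not from this cluster):
ceph osd dump | grep ^pool
# pool 0 'rbd' ...
# pool 2 'testPool2' ...   <- PGs named 2.xx live here
rados df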
On Tue, Dec 2, 2014 at 10:53 AM, Mallikarjun Biradar
mallikarjuna.bira...@gmail.com wrote:
Hi Craig,
ceph -s is
On Tue, Dec 2, 2014 at 12:42 PM, Gregory Farnum g...@gregs42.com wrote:
On Tue, Dec 2, 2014 at 10:55 AM, Ken Dreyer kdre...@redhat.com wrote:
On 12/02/2014 10:59 AM, Gregory Farnum wrote:
We aren't currently doing any of the ongoing testing which that page
covers on CentOS 7. I think
On 12/02/2014 12:42 PM, Gregory Farnum wrote:
On Tue, Dec 2, 2014 at 10:55 AM, Ken Dreyer kdre...@redhat.com wrote:
So it would be fair to remove the I, C flags from CentOS 7?
I've adjusted https://github.com/ceph/ceph/pull/3060 to just use the B
flag.
I didn't set up that page and I don't
You have a total of 2 OSDs, and 2 disks, right?
The safe method is to mark one OSD out, and wait for the cluster to heal.
Delete, reformat, add it back to the cluster, and wait for the cluster to
heal. Repeat. But that only works when you have enough OSDs that the
cluster can heal.
So you'll
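A hedged sketch of one cycle of that procedure (ids are placeholders):
ceph osd out 0               # mark out; wait for active+clean
service ceph stop osd.0
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0
# reformat the disk, re-add the OSD, and wait for the cluster to heal again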
I don't think we expose this as an API as such. If you wanted to do
it from within the ceph source you'd use OSDMap::object_locator_to_pg
and OSDMap::pg_to_acting_osds (see how Client::get_file_extent_osds
does it).
We do expose a special case of this in the libcephfs API
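Given that, the CLI special case mentioned in the original question remains the practical route; a sketch (pool and object names are placeholders):
ceph osd map mypool myobject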
Hi All,
I am done with the setup for Cluster, Block Storage and File System,
http://docs.ceph.com/docs/master/start/
Can someone advise me on how to test or do the benchmarking, etc.?
Thanks in Advance.
Regards
Raj
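For a starting point, rados bench is commonly used; a hedged sketch (pool name and duration are placeholders):
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados -p testpool cleanup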
Hi guys,
Have you looked at WiredTiger?
It has a LevelDB-compatible API and also has a B-tree.
http://source.wiredtiger.com/2.4.1/index.html
According to their benchmark, WiredTiger's performance is better than LevelDB's
for all I/O patterns.
https://github.com/wiredtiger/wiredtiger/wiki/LevelDB-Benchmark
Hi Craig,
but my concern is why ceph status is not reporting for pool 2 (testPool2
in this case). Is it not performing the scrub, or is it a ceph status
reporting issue?
Though I have plenty of objects in testPool2, scrub is not reporting
active+clean+scrubbing in ceph -s.
ems@rack6-ramp-4:~$ sudo
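One way to check whether pool 2's PGs were actually scrubbed is to look at their last-scrub stamps, as a sketch:
ceph pg dump | grep '^2\.'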