Assistance really appreciated. This output says it all:-
ceph@ceph-admin:~$ ceph-deploy osd activate ceph-4:/dev/sdb1
ceph-4:/dev/sdc1 ceph-4:/dev/sdd1
[ceph_deploy.conf][DEBUG ] found configuration file
at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.2):
On Fri, 2014-06-06 at 16:04 -0700, Gregory Farnum wrote:
I haven't used ceph-deploy to do this much, but I think you need to
prepare before you activate and it looks like you haven't done so.
Thanks, Greg. I did do the prepare, and it worked without a hitch :-\
On Sat, 2014-06-07 at 00:45 +0100, Jonathan Gowar wrote:
On Fri, 2014-06-06 at 16:04 -0700, Gregory Farnum wrote:
I haven't used ceph-deploy to do this much, but I think you need to
prepare before you activate and it looks like you haven't done so.
Thanks, Greg. I did do the prepare
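For anyone searching later, a minimal prepare-then-activate sequence, assuming the same host and partitions as in the paste above:

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph-4:/dev/sdb1 ceph-4:/dev/sdc1 ceph-4:/dev/sdd1
ceph@ceph-admin:~$ ceph-deploy osd activate ceph-4:/dev/sdb1 ceph-4:/dev/sdc1 ceph-4:/dev/sdd1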
ceph@ceph-admin:~$ rbd snap purge
cloudstack/20fcd781-2423-436e-afc6-21e75d85111d
Removing all snapshots: 100% complete...done.
ceph@ceph-admin:~$ rbd rm
cloudstack/20fcd781-2423-436e-afc6-21e75d85111d
Removing image: 99% complete...failed.
rbd: error: image still has watchers
This means the image
ceph@ceph-admin:~$ rbd info
cloudstack/20fcd781-2423-436e-afc6-21e75d85111d | grep prefix
block_name_prefix: rbd_data.50f613006c83e
ceph@ceph-admin:~$ rados -p cloudstack listwatchers
rbd_header.50f613006c83e
watcher=10.x.x.23:0/10014542 client.728679 cookie=1
watcher=10.x.x.23:0/11014542
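A sketch of one way past the "image still has watchers" error, using the address from the listwatchers output above. Blacklisting is only sensible if the watching client (here a CloudStack host) is really dead or can safely be cut off; otherwise stop or unmap the image on that client first:

ceph@ceph-admin:~$ ceph osd blacklist add 10.x.x.23:0/10014542
ceph@ceph-admin:~$ rbd rm cloudstack/20fcd781-2423-436e-afc6-21e75d85111d
ceph@ceph-admin:~$ ceph osd blacklist rm 10.x.x.23:0/10014542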
On Wed, 2014-04-16 at 03:02 +0100, Joao Eduardo Luis wrote:
You didn't recreate mon.ceph-3.
The following should take care of that:
1. stop mon.ceph-3
2. ceph mon remove ceph-3
3. mv /var/lib/ceph/mon/ceph-3 /someplace/ceph-3
4. ceph mon getmap -o /tmp/monmap
5. ceph-mon -i ceph-3
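Spelled out as commands, that looks roughly like the sketch below. The keyring handling and the final add/start steps are assumptions based on the usual manual mon procedure, not part of Joao's mail, and the mon data path may be /var/lib/ceph/mon/ceph-ceph-3 depending on cluster naming:

root@ceph-3:~# /etc/init.d/ceph stop mon.ceph-3
root@ceph-3:~# ceph mon remove ceph-3
root@ceph-3:~# mv /var/lib/ceph/mon/ceph-3 /someplace/ceph-3
root@ceph-3:~# ceph mon getmap -o /tmp/monmap
root@ceph-3:~# ceph auth get mon. -o /tmp/mon.keyring
root@ceph-3:~# ceph-mon -i ceph-3 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
root@ceph-3:~# ceph mon add ceph-3 <mon-ip>:6789
root@ceph-3:~# /etc/init.d/ceph start mon.ceph-3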
Hi,
I had an OSD fail, I replaced the drive, and that part of the array is
now optimal. But in the process there's developed a problem with the
mon array.
I have 3 mon servers and 1 is marked down.
I checked there's a mon process running, and have tried restarting the
mon server.
worked my
On Tue, 2014-04-15 at 18:02 +0100, Joao Eduardo Luis wrote:
Ahah! You got bit by #5804: http://tracker.ceph.com/issues/5804
Best solution for your issue:
- shutdown 'mon.ceph-3'
- remove 'mon.ceph-3' from the cluster
- recreate 'mon.ceph-3'
- add 'mon.ceph-3' to the cluster
-Joao
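If ceph-deploy is driving the cluster, the same remove/recreate cycle can probably be done with it; a sketch, assuming a ceph-deploy new enough to support mon destroy and mon add against a running cluster (see the #6638 thread further down):

ceph@ceph-admin:~$ ceph-deploy mon destroy ceph-3
ceph@ceph-admin:~$ ceph-deploy mon add ceph-3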
Thanks. I managed to remove the images by unprotecting them first.
On Fri, 2014-04-04 at 10:15 +0800, YIP Wai Peng wrote:
Yes. You can see whether the snapshots are protected by using snap rm
instead of snap purge.
# rbd --pool mypool snap rm 5216ba99-1d8e-4155-9877-7d77d7b6caa0@snap
# rbd
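Putting the thread together, a sketch of the removal order that worked here, reusing the pool, image, and snapshot names from the example command above. If unprotect refuses because a snapshot still has clones, those clones need flattening or removing first:

# rbd --pool mypool snap unprotect 5216ba99-1d8e-4155-9877-7d77d7b6caa0@snap
# rbd --pool mypool snap purge 5216ba99-1d8e-4155-9877-7d77d7b6caa0
# rbd --pool mypool rm 5216ba99-1d8e-4155-9877-7d77d7b6caa0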
On Wed, 2014-04-02 at 20:42 -0400, Jean-Charles Lopez wrote:
From what is pasted, your remove failed, so make sure you purge the
snapshots and then remove the rbd image.
I already pasted that too.
rbd snap purge 6fa36869-4afe-485a-90a3-93fba1b5d15e
-p cloudstack
Removing all snapshots 2014-04-03
Hi,
I have a small 8TB testing cluster. During testing I've used 94G.
But I have since removed the pools and images from Ceph, so I shouldn't be
using any space, yet the 94G usage remains. How can I reclaim the old
used space?
Also, this:-
ceph@ceph-admin:~$ rbd rm
On Tue, 2014-03-18 at 09:14 -0400, Alfredo Deza wrote:
With ceph-deploy you would do the following (keep in mind this gets
rid of all data as well):
ceph-deploy purge {nodes}
ceph-deploy purgedata {nodes}
Awesome! Nice new clean cluster, with all the right bits :)
Thanks for the assist.
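For the archives, the full wipe-and-redeploy cycle looks roughly like this (node names are placeholders for your own hosts; forgetkeys discards the locally cached keys so the rebuilt cluster starts clean):

ceph@ceph-admin:~$ ceph-deploy purge ceph-1 ceph-2 ceph-3
ceph@ceph-admin:~$ ceph-deploy purgedata ceph-1 ceph-2 ceph-3
ceph@ceph-admin:~$ ceph-deploy forgetkeys
ceph@ceph-admin:~$ ceph-deploy new ceph-1 ceph-2 ceph-3
ceph@ceph-admin:~$ ceph-deploy install ceph-1 ceph-2 ceph-3
ceph@ceph-admin:~$ ceph-deploy mon create-initial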
On Thu, 2014-03-06 at 19:17 -0500, Alfredo Deza wrote:
But what it means is that you kind of deployed monitors that have no
idea how to communicate with the ones that were deployed before.
What's the best way to resolve this then?
This. I'd like to get it back to 1 monitor. Any ideas?
On Thu, 2014-03-06 at 09:02 -0500, Alfredo Deza wrote:
From the admin node:-
http://pastebin.com/AYKgevyF
Ah you added a monitor with ceph-deploy but that is not something that
is supported (yet)
See: http://tracker.ceph.com/issues/6638
This should be released in the upcoming
In an attempt to add a mon server, I appear to have completely broken a
mon service to the cluster:-
# ceph quorum_status --format json-pretty
2014-03-05 14:36:43.704939 7fb065058700 0 monclient(hunting):
authenticate timed out after 300
2014-03-05 14:36:43.705029 7fb065058700 0 librados:
On Wed, 2014-03-05 at 16:35 +0000, Joao Eduardo Luis wrote:
On 03/05/2014 02:30 PM, Jonathan Gowar wrote:
In an attempt to add a mon server, I appear to have completely broken a
mon service to the cluster:-
Did you start the mon you added? How did you add the new monitor?
From the admin
I've a 3 OSD and 1 admin node cluster, running Debian 7 and Ceph 0.72.
I'd like to add XenServer tech-preview node too.
I'm trying to run ceph-deploy install xen-dev (xen-dev is CentOS 6), but it
fails with these sorts of messages:-
[xen-dev][WARNIN] file /usr/lib64/librados.so.2.0.0 from install
Running ceph version 0.67.5. I've an OSD server with 3 OSDs; sdb sdc
and sdd.
sdb and sdc have fallen out, so I've removed and re-added them. But both
OSDs I'm attempting to add refuse to start:-
root@ceph-3:~# /etc/init.d/ceph start osd.6
/etc/init.d/ceph: osd.6 not found (/etc/ceph/ceph.conf
On Mon, 2014-03-03 at 22:32 +, Jonathan Gowar wrote:
I don't mind adding the OSDs manually. IIRC I can do this on the admin
node, then put that out to the cluster, instead of editing each node's
ceph.conf, right?
I added the OSD to ceph.conf and distributed the file. All working
again now.
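For reference, the sort of stanza that cures the "osd.6 not found" error from the sysvinit script, plus pushing the updated ceph.conf out from the admin node (the osd id and ceph-3 follow the paste above; the other node names are placeholders):

[osd.6]
    host = ceph-3

ceph@ceph-admin:~$ ceph-deploy --overwrite-conf config push ceph-1 ceph-2 ceph-3
root@ceph-3:~# /etc/init.d/ceph start osd.6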