Hi all,
So the init script issue is sorted: my grep binary was not working correctly.
I've replaced it and everything seems to be fine.
Which now has me wondering if the binaries I generated are any good... the bad
grep might have caused issues with the build...
I'm going to recompile after
On 12/03/2015, at 03.08, Jesus Chavez (jeschave) jesch...@cisco.com wrote:
Thanks Steffen, I have followed everything and I'm not sure what is going on. Are the mon
keyring and client.admin keyring individual per mon host, or do I need to copy
them from the first initial mon node?
I'm no expert, but I would
Community please explain the 2nd warning on this page:
http://ceph.com/docs/master/rbd/rbd-openstack/
Important Ceph doesn’t support QCOW2 for hosting a virtual machine disk.
Thus if you want to boot virtual machines in Ceph (ephemeral backend or
boot from volume), the Glance image format must
The image in Ceph is in RAW format - should be all fine... so the VM will be using that RAW
format.
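For anyone hitting this, a minimal sketch of the conversion before uploading to Glance (the filenames here are just examples, not from the original thread):
qemu-img info myimage.qcow2                                  # check the current format
qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw   # convert to raw so RBD can boot from it
glance image-create --name myimage --disk-format raw --container-format bare --file myimage.raw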
On 12 March 2015 at 09:03, Azad Aliyar azad.ali...@sparksupport.com wrote:
Community please explain the 2nd warning on this page:
http://ceph.com/docs/master/rbd/rbd-openstack/
Important Ceph doesn’t support QCOW2
Hi all
Does anybody know why I am having this error:
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] capricornio
[ceph_deploy.mon][ERROR ] tauro
[ceph_deploy.mon][ERROR ] aries
the
Hello All,
I have been using Ceph in production for several months, but I have errors with
the Ceph Rados Gateway with multiple users.
I am faced with the following error:
Error trying to create container 'xs02': 409 Conflict:
BucketAlreadyExists
Which corresponds to the documentation:
Hello everyone,
I am currently trying to recover a Ceph cluster from a disaster; I now have
enough OSDs (171 up and in out of 195) and have 2 incomplete PGs left.
However, the question now is not the incomplete PGs; it is about one mon service
that fails to start because a strange, wrong monmap is being used.
Thanks Sam, I'll take a look. Seems sensible enough and worth a shot.
We'll probably call it a day after this and flatten in, but I'm
wondering if it's possible some rbd devices may miss these pg's and
could be exportable? Will have a tinker!
On Wed, Mar 11, 2015 at 7:06 PM, Samuel Just
Hello all.
The current behavior of snapshotting RBD-backed instances in OpenStack involves
uploading the snapshot into Glance.
The resulting Glance image is fully allocated, causing an explosion of
originally sparse RAW images. Is there a way to preserve the sparseness? Else I
can use
I’m trying to create my first ceph disk from a client named bjorn :
[ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k
/etc/ceph/ceph.client.admin.keyring
[ceph@bjorn ~]$ sudo rbd map foo --pool pool_ulr_1 --name client.admin -m
helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
rbd:
On Thu, Mar 12, 2015 at 3:33 PM, Marc Boisis marc.boi...@univ-lr.fr wrote:
I’m trying to create my first ceph disk from a client named bjorn :
[ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k
/etc/ceph/ceph.client.admin.keyring
[ceph@bjorn ~]$ sudo rbd map foo --pool pool_ulr_1
In dmesg:
[ 5981.113104] libceph: client14929 fsid cd7dd0a4-075c-4317-8aed-0758085ea9d2
[ 5981.115853] libceph: mon0 10.10.10.64:6789 session established
My systems are RHEL 7 with 3.10.0-229.el7.x86_64 kernel
On Thu, Mar 12, 2015 at 3:33 PM, Marc Boisis marc.boi...@univ-lr.fr wrote:
I’m
On Thu, Mar 12, 2015 at 3:33 PM, Marc Boisis marc.boi...@univ-lr.fr wrote:
I’m trying to create my first ceph disk from a client named bjorn :
[ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k
/etc/ceph/ceph.client.admin.keyring
[ceph@bjorn ~]$ sudo rbd map foo --pool pool_ulr_1
On Sun, Mar 8, 2015 at 9:21 AM, Francois Lafont flafdiv...@free.fr wrote:
Hello,
Thanks to Jcsp (John Spray I guess) that helps me on IRC.
On 06/03/2015 04:04, Francois Lafont wrote:
~# mkdir /cephfs
~# mount -t ceph 10.0.2.150,10.0.2.151,10.0.2.152:/ /cephfs/ -o
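For reference, the kernel client usually also needs the cephx credentials passed in the options; a sketch assuming the admin key has been dumped to a local secret file (the paths and user name are just examples):
~# ceph auth get-key client.admin > /etc/ceph/admin.secret
~# mount -t ceph 10.0.2.150,10.0.2.151,10.0.2.152:/ /cephfs/ -o name=admin,secretfile=/etc/ceph/admin.secret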
Thanks a lot it’s good
ROOT:bjorn:/root rbd create foo --pool pool_ulr_1 --size 512000 -m
helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
ROOT:bjorn:/root rbd map foo --pool pool_ulr_1 --name client.admin -m
helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
/dev/rbd0
I've no idea if this helps. But I was looking in the meta file of osd.3 to see
if things there made any sense. I'm very much out of my depth.
To me this looks like a bug. Quite possibly a corner case, but bug none the
less.
Anyway I've included my crush map and what look like the osdmap files
I am looking into how I can maximize my space with replication, and I am
trying to understand how I can do that.
I have 145TB of space and a replication of 3 for the pool, and was thinking
that the max data I can have in the cluster at one time is ~47TB... is that
correct? Or is there
Hello,
On Thu, Mar 12, 2015 at 3:07 PM, Thomas Foster thomas.foste...@gmail.com
wrote:
I am looking into how I can maximize my space with replication, and I am
trying to understand how I can do that.
I have 145TB of space and a replication of 3 for the pool and was thinking
that the max
Actually, it's more like 41TB. It's a bad idea to run at near full
capacity (by default past 85%) because you need some space where Ceph
can replicate data as part of its healing process in the event of disk
or node failure. You'll get a health warning when you exceed this ratio.
You can use
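To put rough numbers on that (assuming the default 85% near-full ratio and ignoring overhead):
145 TB raw / 3 replicas = ~48 TB of unique data at 100% full
 48 TB x 0.85           = ~41 TB of unique data before hitting the near-full warning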
If I remember right, the mon key has to be the same between all the mon
hosts. I don't think I added an admin key to my second mon, it got all the
other keys once it joined the mon cluster. I do remember the join taking a
while. Have you checked the firewall to make sure traffic is allowed? I
On 03/12/2015 05:16 AM, Malcolm Haak wrote:
Sorry about all the unrelated grep issues..
So I've rebuilt and reinstalled and it's still broken.
On the working node, even with the new packages, everything works.
On the new broken node, I've added a mon and it works. But I still cannot start
an
Thank you! That helps a lot.
On Mar 12, 2015 10:40 AM, Steve Anthony sma...@lehigh.edu wrote:
Actually, it's more like 41TB. It's a bad idea to run at near full
capacity (by default past 85%) because you need some space where Ceph can
replicate data as part of its healing process in the event
Having two monitors should not be causing the problem you are seeing, like
you say. What is in /var/log/ceph/ceph.mon.*.log?
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 12, 2015 7:39 PM, Georgios Dimitrakakis gior...@acmac.uoc.gr
wrote:
Hi Robert!
Thanks for the
Hi all, after adding OSDs manually and rebooting the server, the OSDs didn't come up
automatically. Am I missing something?
Thanks
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1
My experience with CentOS 7 is that ceph-disk works the best. Systemd has a
fit with extra arguments common in the upstart and SysV scripts. Ceph
installs udev rules that will automatically mount and start OSDs.
The udev rules look for GPT partition UUIDs that are set aside for Ceph to
find
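As a rough sketch (the device name is just an example), preparing a disk this way lets those udev rules do the rest on boot:
ceph-disk prepare --cluster ceph /dev/sdb   # partitions with the Ceph GPT type codes and creates the filesystem
ceph-disk activate /dev/sdb1                # start it now; after a reboot udev matches the partition UUID and does this automatically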
On Fri, Mar 13, 2015 at 1:17 AM, Florent B flor...@coppint.com wrote:
Hi all,
I am testing CephFS again on the Giant release.
I use ceph-fuse.
After deleting a large directory (a few hours ago), I can see that my pool
still contains 217 GB of objects, even though my root directory on CephFS is empty.
Here is the procedure I wrote for our internal use (it is still a work in
progress); it may help you:
*Creating the First Monitor*
Once you have Ceph installed, DNS and networking configured, and a ceph.conf
file built, you are ready to bootstrap the first monitor. The UUID is the
same from the
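For context, the bootstrap boils down to roughly the following (hostname, IP and fsid are placeholders; see the manual deployment docs for the full procedure):
# create the mon keyring, an admin keyring, and merge them
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# build the initial monmap and populate the monitor's data directory
monmaptool --create --add mon1 192.168.0.10 --fsid <your-fsid> /tmp/monmap
ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring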
Several patches aim to solve that by using RBD snapshots instead of QEMU
snapshots.
Unfortunately I doubt we will have something ready for OpenStack Juno.
Hopefully Liberty will be the release that fixes that.
Having RAW images is not that bad since booting from that snapshot will do a
clone.
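For illustration (the pool and image names are made up), the clone-on-boot behaviour is the usual snapshot/clone layering:
rbd snap create images/myimage@snap                  # snapshot the base image
rbd snap protect images/myimage@snap                 # protect it so it can be cloned
rbd clone images/myimage@snap vms/myinstance_disk    # copy-on-write clone used as the instance disk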
Hi Robert, yes I did disable it completely, actually with chkconfig off so the service
doesn't come up when booting. I have 2 networks: 1 with internet access for yum purposes
and one for the public network, so before any configuration I specified that public
network in ceph.conf, but I am not sure if it
Great :) so just 1 more point: step 4 in adding monitors (Add the new
monitor to the Monitor map) - this command actually runs on the new monitor,
right?
Thank you so much!
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267
- Original Message -
From: Ben b@benjackson.email
To: ceph-us...@ceph.com
Sent: Wednesday, March 11, 2015 8:46:25 PM
Subject: Re: [ceph-users] Shadow files
Anyone got any info on this?
Is it safe to delete shadow files?
It depends. Shadow files are badly named objects that
For example, here is my configuration:
superuser@admin:~$ ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
242T 209T 20783G 8.38
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
ec_backup-storage 4 9629G 3.88
Thanks for the feedback. I will be looking forward to those patches in Liberty.
In the meantime, it appears my best option would be to manually sparsify the
Glance images using qemu-img convert.
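Something along these lines should do it (the filenames are examples); qemu-img detects zeroed regions and writes the output sparse:
qemu-img convert -O raw fat-image.raw sparse-image.raw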
Regards,
Charles
-Original Message-
From: Sebastien Han sebastien@enovance.com
Date:
Thanks, I did everything as you mentioned and still have the same issue; it hangs
a lot:
[root@tauro ~]# ceph status
2015-03-12 11:40:50.441084 7f6e20336700 0 -- :/1005688 192.168.4.35:6789/0
pipe(0x7f6e1c0239a0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6e1c023c30).fault
2015-03-12 11:40:53.441517
Thank you Robert! I'll try ;)
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
On Mar 12, 2015, at 11:03 AM, Robert LeBlanc
That command (ceph mon add mon-id ip[:port]) can be run from any
client in the cluster with the admin key; it is a general Ceph command.
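For example (the mon name and IP here are hypothetical):
ceph mon add mon2 192.168.4.36:6789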
On Thu, Mar 12, 2015 at 10:33 AM, Jesus Chavez (jeschave)
jesch...@cisco.com wrote:
Great :) so just 1 point more, step 4 in adding monitors (Add the
Our cluster has millions of objects in it; there has to be an easy way
to reconcile shadow files against objects that no longer exist?
We are in a critical position now because we have millions of objects, a
large number of TB of data, and are closing in on 42 OSDs near full (89% utilization)
out of 112 OSDs.
Hi Steffen, I already had them in my configuration. I am stressed now because it
seems like none of the methods helped :( This is bad; I think I am going to
go back to RHEL 6.6, where XFS is a damn add-on and I have to install from
the CentOS repo and make Ceph work like a patch :( but at least with RHEL 6.6
I'm not sure why you are having such a hard time. I added monitors (and
removed them) on CentOS 7 by following what I had. The thing that kept
tripping me up was firewalld. Once I either shut it off or created a
service for Ceph, it worked fine.
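In case it helps anyone else, a sketch of the firewalld rules using the default Ceph port ranges:
firewall-cmd --zone=public --add-port=6789/tcp --permanent        # monitors
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # OSDs/MDS
firewall-cmd --reload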
What is in /var/log/ceph/ceph-mon.tauro.log when
We all get burned by the firewall at one time or another. Hence the name
'fire'wall! :) I'm glad you got it working.
On Thu, Mar 12, 2015 at 2:53 PM, Jesus Chavez (jeschave) jesch...@cisco.com
wrote:
This is awkward Robert, all this time it was the firewall :( I can't believe I
spent 2 days trying
This is awkward Robert, all this time it was the firewall :( I can't believe I spent
2 days trying to figure it out :(. Thank you so much!
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1
Hi
I just want to tell you there is an rgw object visualisation in our tool called
inkscope, available on GitHub, that could help you.
Best regards
Sent from my Galaxy Ace4 Orange
Original message
From: Italo Santos okd...@gmail.com
Date: 12/03/2015 21:26 (GMT+01:00)
To: Ben
On 12/03/2015, at 20.00, Jesus Chavez (jeschave) jesch...@cisco.com wrote:
That's what I thought and did actually; the monmap and keyring were copied to
the new monitor and there, with those 2 elements, I did the mkfs thing and still have
those messages. Do I need OSDs configured? Because I have none
Hello Ben,
I’m facing the same issue - #10295 (http://tracker.ceph.com/issues/10295)
and I removed the objects directly from rados successfully. But it is very
important to map all the objects before doing that. I recommend you take a look at the
links below to understand more about the objects
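As a sketch of the idea (the pool name is the default rgw data pool, adjust to your layout, and the object name is a placeholder; only remove objects you have confirmed are orphaned):
rados -p .rgw.buckets ls | grep shadow    # list candidate shadow objects
rados -p .rgw.buckets rm <object-name>    # remove one orphaned object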
OK guys, I decided to go back to ceph-deploy; after the mon create command I
got this:
[root@capricornio ~]# ceph-deploy gatherkeys capricornio
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.22): /usr/bin/ceph-deploy
Hello,
I need to understand how replication is accomplished, or
who is taking care of replication - the OSD itself? Because we are using
librados to read/write to the cluster. If librados is not doing parallel
writes according to the desired number of object copies, it could happen that
objects are in the journal
I might be missing something, but it sounds like you already have a monitor
up and running. If you create a new key, the new monitor won't be able to
auth to the existing one. You need to get the monitor key from your
existing monitor and use that for the second (and third) monitor. Look at
step
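Roughly, the steps look like this (the new mon id is a placeholder; the two files have to be copied to the new host):
ceph auth get mon. -o /tmp/ceph.mon.keyring    # on a node with the admin key
ceph mon getmap -o /tmp/monmap
ceph-mon -i mon2 --mkfs --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring   # on the new monitor host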
The primary OSD for an object is responsible for the replication. In a
healthy cluster the workflow is as such:
1. Client looks up primary OSD in CRUSH map
2. Client sends object to be written to primary OSD
3. Primary OSD looks up replication OSD(s) in its CRUSH map
4. Primary OSD
That's what I thought and did actually; the monmap and keyring were copied to the
new monitor and there, with those 2 elements, I did the mkfs thing and still have
those messages. Do I need OSDs configured? Because I have none and I am not sure if it
is required... Also it is weird that the monmap is not taking
On 12-03-15 13:00, Lindsay Mathieson wrote:
On Thu, 12 Mar 2015 12:49:51 PM Vieresjoki, Juha wrote:
But there's really no point, block storage is the only viable option for
virtual machines performance-wise. With images you're dealing with multiple
filesystem layers on top of the actual
Jajaja :) damn firewall :P Well, thank you for your patience! Have a great day
Robert and Steffen
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
But there's really no point, block storage is the only viable option for
virtual machines performance-wise. With images you're dealing with multiple
filesystem layers on top of the actual block devices, plus Ceph as block
storage supports pretty much everything that qcow2 as a format does.
On 12
On Thu, 12 Mar 2015 12:49:51 PM Vieresjoki, Juha wrote:
But there's really no point, block storage is the only viable option for
virtual machines performance-wise. With images you're dealing with multiple
filesystem layers on top of the actual block devices, plus Ceph as block
storage supports
http://docs.openstack.org/image-guide/content/ch_converting.html
On Mar 12, 2015 6:50 AM, Vieresjoki, Juha j...@void.fi wrote:
But there's really no point, block storage is the only viable option for
virtual machines performance-wise. With images you're dealing with multiple
filesystem layers
On Thu, 12 Mar 2015 09:27:43 AM Andrija Panic wrote:
ceph is RAW format - should be all fine...so VM will be using that RAW
format
If you use cephfs you can use qcow2.
Sorry about this,
I sent this at 1AM last night and went to bed, I didn't realise the log was far
too long and the email had been blocked...
I've reattached all the requested files and trimmed the body of the email.
Thank you again for looking at this.
-Original Message-
From:
Hi Cephers,
Has anyone tested the behavior of rados when adding an object to the
cluster with an object name which already exists in the cluster?
With the command: rados put -p testpool myobject testfile
I notice that even if I already have an object called 'myobject' in testpool,
I can still add a
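A quick way to see what is going on, for what it's worth:
rados -p testpool stat myobject && echo "already exists"   # stat succeeds if the object is there
rados put -p testpool myobject testfile                    # a plain put silently overwrites it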
Hi all!
I have updated from 0.80.8 to 0.80.9 and every time I try to restart
a CEPH monitor, a strange monitor appears!
Here is the output:
#/etc/init.d/ceph restart mon
=== mon.master ===
=== mon.master ===
Stopping Ceph mon.master on master...kill 10766...done
=== mon.master ===
I forgot to say that the monitors form a quorum and the cluster's
health is OK
so there aren't any serious troubles other than the annoying message.
Best,
George
Hi all!
I have updated from 0.80.8 to 0.80.9 and every time I try to restart
a CEPH monitor, a strange monitor appears!
Here
Two monitors don't work very well and really don't buy you anything. I
would either add another monitor or remove one. Paxos is most effective
with an odd number of monitors.
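The arithmetic behind that advice (a quorum is a strict majority of the monitors in the map):
1 mon  -> quorum needs 1 -> tolerates 0 failures
2 mons -> quorum needs 2 -> tolerates 0 failures (no better than 1, just more to go wrong)
3 mons -> quorum needs 2 -> tolerates 1 failure
5 mons -> quorum needs 3 -> tolerates 2 failures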
I don't know about the problem you are experiencing or how to help you. An
even number of monitors should work.
Robert
Hi Robert!
Thanks for the feedback! I am aware of the fact that the number of
monitors should be odd, but this is a very basic setup just to test CEPH
functionality and perform tasks there before doing it on our production cluster.
So I am not concerned about that and I really don't
We have an application cluster and Ceph as the storage solution; the cluster consists of
six servers, so we've installed a monitor on every one of them, to keep the Ceph
cluster sane (quorum) if a server or two of them goes down.
You want an odd number for sure, to avoid the classic split-brain problem: