When I run /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway I get this error:
2013-06-19 09:19:55.148536 7f120aa0d820  0 librados: client.radosgw.gateway authentication error (95) Operation not supported
2013-06-19 09:19:55.148923 7f120aa0d820 -1 Couldn't init storage provider (RADOS)
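An authentication error of (95) from librados often points at a cephx problem: the client.radosgw.gateway key is missing from the cluster, or the keyring file named in ceph.conf doesn't exist or isn't readable. A couple of checks, as a sketch — the keyring path below is the conventional location, an assumption; check the 'keyring' line in your ceph.conf:

```shell
# Does the cluster know about the gateway key? (run from a node
# with the admin keyring)
ceph auth get client.radosgw.gateway

# Does the keyring file exist and is it readable by the user
# running radosgw? (path is an assumption)
ls -l /etc/ceph/keyring.radosgw.gateway
```
If the key printed by `ceph auth get` differs from the one in the keyring file, regenerating the keyring from the cluster side is the usual fix.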
How
Why are there so many ceph-create-keys processes? Under Debian, every time I
start the mons another ceph-create-keys process starts up.
Thanks
James
___
ceph-users mailing list
ceph-users@lists.ceph.com
On 19 Jun 2013, at 10:42, James Harper wrote:
> Why are there so many ceph-create-keys processes? Under Debian, every time I
> start the mons another ceph-create-keys process starts up.
I've seen these hang around for no particularly good reason (on Ubuntu). It
seems to happen when there is some
Hi,
We ran into the following problem:
[ 194.789476] libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
[ 194.798526] ceph: loaded (mds proto 32)
[ 194.800431] libceph: client0 fsid 97e515bb-d334-4fa7-8b53-7d85615809fd
[ 194.802534] libceph: mon0 10.255.0.25:6789 session established
Hi all-
Just a quick note to Debian squeeze users: in the course of debugging
ceph-mon memory growth over time, we've determined that (at least in
Stefan Priebe's environment) the tcmalloc (google perftools) library on
Debian squeeze is leaking memory. If you are a Ceph user on squeeze, be
On Wed, 19 Jun 2013, James Harper wrote:
> Why are there so many ceph-create-keys processes? Under Debian, every
> time I start the mons another ceph-create-keys process starts up.
If the processes aren't exiting on their own it means the monitors aren't
forming a quorum, or something else is
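A quick way to confirm whether quorum is the problem — ceph-create-keys waits (and respawns on each mon restart) until the monitors agree. The admin-socket path below is the Debian/Ubuntu default and an assumption:

```shell
# Hangs or errors if the mons have not formed a quorum:
ceph quorum_status

# Ask one monitor directly over its admin socket; this works even
# without quorum and shows the mon's view of its peers:
ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
```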
On Jun 18, 2013, at 12:08 PM, Joe Ryner <jry...@cait.org> wrote:
> I would like to make a local mirror of your yum repositories. Do you support
> any of the standard methods of syncing, aka rsync?
+1. Our Ceph boxes are firewalled from the Internet at large and installing
from a local mirror is
On 6/19/13 12:35 AM, Sage Weil wrote:
Yes, so the init script does create the directory. Even if I manually
create the directory before running the initial 'ceph-deploy mon create',
I am still seeing the exception in the mon_create function that I
originally posted about. Still trying to track
Good day!
Yep. Noticed this problem too under Debian.
It was a problem with the key-copy step. My mon processes were alive, but there
were no keys in the expected location, so there was no quorum, just 4-5
ceph-create-keys processes. I copied the keys manually between all the nodes and
restarted ceph, which fixed it.
Regards, Artem
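For reference, the manual copy Artem describes might look roughly like this. The hostnames, keyring path, and init-script invocation are all assumptions for illustration; adjust for your cluster:

```shell
# Push the admin keyring from a node that has it to the others,
# then restart ceph so the daemons can authenticate.
for host in ceph-node1 ceph-node2; do
    scp /etc/ceph/ceph.client.admin.keyring "$host":/etc/ceph/
done
sudo service ceph -a restart
```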
Hi,
So when bootstrapping radosgw you are not given the option to create the
pools (and therefore set a specific pg_num). There are a lot of pools
created: .rgw, .rgw.gc, .rgw.control, .users.uid, .users.email, .users.
I know I can set osd_pool_default_pg_num but that will apply to all
those
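One possible workaround, as a sketch rather than an official recommendation: radosgw only creates pools that don't already exist, so you can pre-create them yourself with the pg_num you want before starting the gateway. The pg_num value here is an example; size it for your OSD count:

```shell
PG_NUM=128  # example value, an assumption — tune for your cluster

# Pre-create the rgw pools so radosgw doesn't create them with the
# cluster-wide default pg_num:
for pool in .rgw .rgw.gc .rgw.control .users.uid .users.email; do
    ceph osd pool create "$pool" "$PG_NUM" "$PG_NUM"
done
```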
Couple things I caught.
The first wasn't a huge issue but good to note.
The second took me a while to figure out.
1. Default attribute:
ceph-deploy new [HOST]
by default sets 'filestore xattr use omap = true', which is needed for ext4
http://eu.ceph.com/docs/wip-3060/config-cluster/ceph-conf/#osds
ceph@ceph-node0:~/test$ ceph-deploy new 172.18.11.30 172.18.11.32 172.18.11.34
ceph@ceph-node0:~/test$ cat ceph.conf
[global]
fsid = caf39355-bd8f-450e-b026-6001607e62cf
mon initial members = 172, 172, 172
mon host = 172.18.11.30,172.18.11.32,172.18.11.34
auth supported = cephx
osd journal size =
On Thu, 20 Jun 2013, Da Chun wrote:
> ceph@ceph-node0:~/test$ ceph-deploy new 172.18.11.30 172.18.11.32
The mons need (host)names, too:
ceph-deploy new hosta:1.2.3.4 hostb:1.2.3.5 ...
where the name should match `hostname -s` on the node.
BTW, is there a version attribute in ceph-deploy for
+1.
I use apt-mirror to do this now.
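A minimal apt-mirror configuration for this looks roughly like the following. The repository URL and Debian suite are examples (the then-current release line) and an assumption — substitute the repo you actually track:

```shell
# /etc/apt/mirror.list -- apt-mirror sketch for a local Ceph repo mirror
set base_path /var/spool/apt-mirror
deb http://ceph.com/debian-cuttlefish/ wheezy main
clean http://ceph.com/debian-cuttlefish/
```
After running `apt-mirror`, point the firewalled boxes' sources.list at the local web server exporting base_path.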
-- Original --
From: John Nielsen <li...@jnielsen.net>
Date: Thu, Jun 20, 2013 00:21 AM
To: Joe Ryner <jry...@cait.org>
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Repository Mirroring
On Jun 18,