From: Ashley Merrick
Sent: 13 October 2017 07:54:27
To: Shinobu Kinjo
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] cephx
Hello,
http://docs.ceph.com/docs/master/rados/operations/user-management/
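That page covers creating users with pool-scoped caps; a minimal illustrative example, with the user and pool names made up:

    ceph auth get-or-create client.rbduser mon 'allow r' osd 'allow rwx pool=rbd'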
,Ashley
From: Shinobu Kinjo <ski...@redhat.com>
Sent: 13 October 2017 07:41
To: Ashley Merrick
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] cephx
On Fri, Oct 13, 2017 at 3:29 PM, Ashley Merrick wrote:
> Hello,
>
>
> Is it possible to limit a cephx user to one image?
>
>
> I have looked and it seems it's possible per pool, but I can't find a per-image option.
What did you look at?
Best regards,
Shinobu Kinjo
Hello,
Is it possible to limit a cephx user to one image?
I have looked and it seems it's possible per pool, but I can't find a per-image option.
,Ashley
Hi, everyone.
According to the documentation, “auth_cluster_required” means that “the Ceph
Storage Cluster daemons (i.e., ceph-mon, ceph-osd, and ceph-mds) must
authenticate with each other”. So, I guess if I only need to verify the client,
then "auth_cluster_required" doesn't need to be
m" <gfar...@redhat.com>
> To: "Loris Cuoghi" <l...@stella-telecom.fr>
> Cc: ceph-users@lists.ceph.com
> Sent: Tuesday, March 15, 2016 5:54:43 PM
> Subject: Re: [ceph-users] cephx capabilities to forbid rbd creation
On Tue, Mar 15, 2016 at 2:44 PM, Loris Cuoghi wrote:
> So, one key per RBD.
> Or, dynamically enable/disable access to each RBD in each hypervisor's key.
> Uhm, something doesn't scale here. :P
> (I wonder if there's any limit to a key's capabilities string...)
>
> But, as
So, one key per RBD.
Or, dynamically enable/disable access to each RBD in each hypervisor's key.
Uhm, something doesn't scale here. :P
(I wonder if there's any limit to a key's capabilities string...)
But, as it appears, I share your view that it is the only available
approach right now.
Anyone
Hi,
Maybe (not tested) :
[osd] allow * object_prefix ... ?
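If that works (also untested here), it might look roughly like the following; the user, pool, image name, and prefix are made up, and the real block_name_prefix would come from rbd info:

    # find the image's object prefix (block_name_prefix), e.g. rbd_data.102674b0dc51
    rbd info rbd/vm-disk-1
    # scope an existing key to that image's data and header objects
    # (other metadata objects such as rbd_directory may still need access)
    ceph auth caps client.vm1 mon 'allow r' \
        osd 'allow rwx pool=rbd object_prefix rbd_data.102674b0dc51, allow rwx pool=rbd object_prefix rbd_header.102674b0dc51'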
2016-03-15 22:18 GMT+01:00 Loris Cuoghi :
>
> Hi David,
>
> One pool per virtualization host would make it impossible to live
> migrate a VM. :)
>
> Thanks,
>
> Loris
>
>
> On 15/03/2016 22:11, David Casier wrote:
Hi David,
One pool per virtualization host would make it impossible to live
migrate a VM. :)
Thanks,
Loris
On 15/03/2016 22:11, David Casier wrote:
> Hi Loris,
> If I'm not mistaken, there are no RBD ACLs in cephx.
> Why not 1 pool per client and a pool quota?
>
> David.
>
> 2016-02-12 3:34
Hi Loris,
If I'm not mistaken, there are no RBD ACLs in cephx.
Why not 1 pool per client and a pool quota?
David.
2016-02-12 3:34 GMT+01:00 Loris Cuoghi :
> Hi!
>
> We are on version 9.2.0, 5 mons and 80 OSDS distributed on 10 hosts.
>
> How could we twist cephx capabilities so
Hi!
We are on version 9.2.0, 5 mons and 80 OSDs distributed on 10 hosts.
How could we twist cephx capabilities so as to forbid our KVM+QEMU+libvirt
hosts any RBD creation capability?
We currently have an rbd-user key like so:
caps: [mon] allow r
caps: [osd] allow x
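Not an answer to the create/delete part, but for what it's worth the key can at least be scoped to the pool the hypervisors use; a sketch with made-up client and pool names:

    # inspect the current key and caps
    ceph auth get client.rbd-user
    # restrict it to one pool (this still cannot distinguish "create image" from "write data")
    ceph auth caps client.rbd-user mon 'allow r' osd 'allow rwx pool=vms'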
Hello guys,
today we had one storage node (19 OSDs) down for 4 hours,
and now we are observing different problems. When I tried to restart
one OSD, I got an error related to cephx:
2015-06-09 21:09:49.983522 7fded00c7700 0 auth: could not find secret_id=6238
2015-06-09 21:09:49.983585 7fded00c7700
Hi Experts,
After implementing Ceph initially with 3 OSDs, I am now facing an issue:
it reports healthy but sometimes (or often) fails to access the pools,
while sometimes it comes back to normal automatically.
For example:
[ceph@gcloudcon ceph-cluster]$ rados -p volumes ls
What's strange is that the OSD rebalance obviously has no problem; it's just that new
objects can't be written, since the new segments can't be distributed to the new
OSDs.
Here is the error from radosgw.log:
2014-06-17 10:34:01.568754 7fc7e83f4700 0 cephx: verify_reply couldn't decrypt with error: error decoding
It's unlikely to be the issue, but you might check the times on your OSDs.
cephx is clock-sensitive if you're off by more than an hour or two.
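A quick way to eyeball that (hostnames are placeholders):

    # compare wall clocks across the OSD hosts
    for h in osd01 osd02 osd03; do ssh "$h" date +%s; done
    # the monitors also warn about significant clock skew in the health output
    ceph health detail | grep -i clock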
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Jun 17, 2014 at 8:30 AM, Fred Yang frederic.y...@gmail.com wrote:
What's
I'm adding three OSD nodes (36 OSDs in total) to an existing 3-node cluster (35
OSDs) using ceph-deploy. After the disks were prepared and the OSDs activated, the
cluster re-balanced and shows all PGs active+clean:
osdmap e820: 72 osds: 71 up, 71 in
pgmap v173328: 15920 pgs, 17 pools, 12538 MB data,
Did you run ceph-deploy in the directory where you ran ceph-deploy new and
ceph-deploy gatherkeys? That's where the monitor bootstrap key should be.
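If the keyrings really are missing from the working directory, something along these lines should recover them (directory and monitor hostname are placeholders):

    cd ~/my-cluster              # the directory holding ceph.conf from 'ceph-deploy new'
    ceph-deploy gatherkeys mon01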
On Mon, Jun 16, 2014 at 8:49 AM, Fred Yang frederic.y...@gmail.com wrote:
I'm adding three OSD nodes(36 osds in total) to existing 3-node
On Wed, 14 May 2014, Brian Rak wrote:
Why are the defaults for 'cephx require signatures' and similar still false?
Is it still necessary to maintain backwards compatibility with very old
clients by default? It seems like from a security POV, you'd want everything
to be more secure out of the
Why are the defaults for 'cephx require signatures' and similar still
false? Is it still necessary to maintain backwards compatibility with
very old clients by default? It seems like from a security POV, you'd
want everything to be more secure out of the box, and require the user
to
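For anyone who wants the stricter behaviour regardless of the default, these are the relevant ceph.conf settings (a sketch, shown with the stricter values):

    [global]
    cephx require signatures = true
    cephx cluster require signatures = true
    cephx service require signatures = true
    cephx sign messages = true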
Thanks for the response Greg.
Unfortunately, I appear to be missing something. If I use my cephfs key
with these perms:
client.cephfs
key: redacted
caps: [mds] allow rwx
caps: [mon] allow r
caps: [osd] allow rwx pool=data
This is what happens when I mount:
# ceph-fuse -k
Hrm, I don't remember. Let me know which permutation works and we can
dig into it.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Apr 2, 2014 at 9:00 AM, Travis Rhoden trho...@gmail.com wrote:
Thanks for the response Greg.
Unfortunately, I appear to be missing
Ah, I figured it out. My original key worked, but I needed to use the --id
option with ceph-fuse to tell it to use the cephfs user rather than the
admin user. Tailing the log on my monitor pointed out that it was logging
in with client.admin, but providing the key for client.cephfs.
So, final
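So the working invocation was presumably something along these lines (keyring path and mountpoint are illustrative):

    # authenticate as client.cephfs instead of the default client.admin
    ceph-fuse --id cephfs -k /etc/ceph/ceph.client.cephfs.keyring /mnt/cephfs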
At present, the only security permission on the MDS is "allowed to do
stuff", so rwx and * are synonymous. In general * means "is an
admin", though, so you'll be happier in the future if you use rwx.
You may also want a more restrictive set of monitor capabilities, as
somebody else recently pointed out.
Hi all,
I've tested authentication on the client side for pools, no problem so far.
I'm now testing granularity down to the RBD image; I've seen in the docs that we
can limit to an object prefix, so possibly to an RBD image:
http://ceph.com/docs/master/man/8/ceph-authtool/#osd-capabilities
I've got the
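With the syntax from that man page, the keyring caps might be written like this (user, pool, and prefix are made up; the real prefix comes from rbd info):

    # create a local keyring with the caps, then import it into the cluster
    ceph-authtool -C -g -n client.vm1 \
        --cap mon 'allow r' \
        --cap osd 'allow rwx pool=rbd object_prefix rbd_data.102674b0dc51' \
        /etc/ceph/ceph.client.vm1.keyring
    ceph auth import -i /etc/ceph/ceph.client.vm1.keyring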
Hello
I am trying to integrate OpenStack and Ceph. I have successfully configured
Cinder, but there is a problem with the rados lspools command executed during
cinder-volume startup. It looks like this command requires the client.admin
keyring to be readable by the cinder user. Is it possible to specify a different user and keyring?
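The way this is usually handled (user name and paths are illustrative) is a dedicated key for Cinder plus a client section in ceph.conf, so the command no longer needs client.admin:

    # a key for cinder, scoped to its pool
    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes' \
        -o /etc/ceph/ceph.client.cinder.keyring
    # in /etc/ceph/ceph.conf:
    #   [client.cinder]
    #   keyring = /etc/ceph/ceph.client.cinder.keyring
    # then run the command as that identity:
    rados --id cinder lspools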
On Fri, 21 Jun 2013, Maciej Gałkiewicz wrote:
Hello
I am trying to integrate openstack and ceph. I have successfully configured
cinder but there is a problem with rados lspools command executed during
cinder-volume startup. It looks like this command requires client.admin
keyring to be