cluster!
rbd: listing images failed: (1) Operation not permitted
According to the documentation this should work, but it seems it
doesn't. Is there something I am doing wrong, or is this a bug?
Thanks
Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
> Is the flip between the client name "rz" and "user" also a typo? It's
> hard to divine whether it is intentional or not, since you are mixing
> the two throughout.
>
>
> On Fri, 15 Nov 2019 at 10:57, Rainer Krienke
> <krie...@uni-koblenz.de> wrote:
>
I found a typo in my post:
Of course I tried
export CEPH_ARGS="-n client.rz --keyring="
and not
export CEPH_ARGS=="-n client.rz --keyring="
Thanks
Rainer
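For reference, a minimal sketch of what a working setup might look like;
the keyring path here is a placeholder, not taken from this thread:

  # Hypothetical keyring path; point it at wherever the client.rz key
  # was actually exported on this host.
  export CEPH_ARGS="-n client.rz --keyring=/etc/ceph/ceph.client.rz.keyring"
  # With CEPH_ARGS set, rbd should pick up name and keyring implicitly:
  rbd ls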
On 15.11.19 at 07:46, Rainer Krienke wrote:
> Hello,
>
> I try to use CEPH_ARGS in order to use e
behavior? I would like to set both the user name and the
keyring to be used, so that I can run rbd without any parameters.
How do you do this?
Thanks
Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
Web: http://userpages.uni-koblenz.de/~krienke
fixed sequence, e.g. first on one OSD/MON host,
and if the update is successful, then run the Linux system package
updates on the other hosts? Do you use another strategy?
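Not an answer from the thread, but a sketch of one common rolling-update
pattern (the package commands assume a Debian-style host; adjust for your
distribution):

  # Keep CRUSH from rebalancing while daemons restart:
  ceph osd set noout
  # On one host at a time: update packages, restart the Ceph daemons,
  # and wait for HEALTH_OK before moving on to the next host.
  apt-get update && apt-get upgrade -y
  systemctl restart ceph.target
  ceph -s
  # After the last host:
  ceph osd unset noout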
Thanks
Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
Hello,
thanks for the hint. I opened a ticket with a feature request to include
the ec-profile information in the output of ceph osd pool ls detail.
http://tracker.ceph.com/issues/40009
Rainer
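Until such a feature lands, the same information can be looked up in two
steps; a sketch using the profile name quoted below (the pool name is a
placeholder):

  # Which erasure-code profile does a pool use?
  ceph osd pool get mypool erasure_code_profile
  # Show k, m and the other settings of that profile:
  ceph osd erasure-code-profile get jera_4plus2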
On 22.05.19 at 17:04, Jan Fajerski wrote:
> On Wed, May 22, 2019 at 03:38:27PM +0200, Rainer Krienke wrote:
> "jera_4plus2"
>
> -- Dan
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
Web: http://userpages.uni-koblenz.de/~krienke
PGP: http://userpage
{
    "op": "set_chooseleaf_tries",
    "num": 5
},
{
    "op": "set_choose_tries",
    "num": 100
},
{
    "op": "take",
    "item": -1,
    "item_name"
5, so namespaces
won't work for me at the moment.
Could you please explain what the magic behind "class rbd metadata_list"
is? Is it meant to "simply" allow access to the base pool (rbd in my
case), so that I authorize access to the pool instead of a namespace? And if
th
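For context, a sketch of the namespace-scoped setup that Nautilus
documents; the names follow this thread, but treat the exact cap string as
an assumption to verify against the docs:

  # Create a namespace inside the existing pool "rbd":
  rbd namespace create --pool rbd --namespace testnamespace
  # Scope the client to that namespace instead of the whole pool:
  ceph auth caps client.rz mon 'profile rbd' \
      osd 'profile rbd pool=rbd namespace=testnamespace'
  # Create an image inside the namespace (pool/namespace/image spec):
  rbd create --size 1G rbd/testnamespace/testimage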
Hello,
just saw this message on the client when trying and failing to map the
rbd image:
May 20 08:59:42 client kernel: libceph: bad option at
'_pool_ns=testnamespace'
Rainer
On 20.05.19 at 08:56, Rainer Krienke wrote:
> Hello,
>
> on a Ceph Nautilus cluster (14.2.1) running
Operation not permitted
2019-05-20 08:18:29.187 7f42aaffd700 -1 librbd::ImageState:
0x561792408860 failed to open image: (1) Operation not permitted
rbd: map failed: (22) Invalid argument
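"Operation not permitted" on image open usually means the client's caps
don't cover what librbd needs; a quick way to inspect what the client is
actually allowed to do (client name taken from elsewhere in the thread):

  # Print the key and the mon/osd capabilities granted to the client:
  ceph auth get client.rz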
Thanks for your help
Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz,
> new OSDMap the manager sees and I can't
> see how that would go wrong.)
> -Greg
>
>
>> Thanks
>> Rainer
>> --
>> Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
>> 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke
On 14.05.19 at 20:03, Rainer Krienke wrote:
> Hello,
>
> for a freshly set up Ceph cluster I see a strange difference between the
> number of pools shown in the output of ceph -s and what I know should
> be there: no pools at all.
>
> I set up a fresh Nautilus cluster with 144
right afterwards. In this case the pool is created and ceph -s
shows one pool more (5), and if I delete this pool again the counter in
ceph -s goes back to 4.
How can I fix the system so that ceph -s also understands that there are
actually no pools? There must be some inconsistency. Any ideas?
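Not from the thread: since ceph -s reports the pool count from the
manager's cached statistics, one hedged thing to try is forcing a mgr
failover so that view is rebuilt:

  # Fail the active mgr (name as shown by "ceph -s"); a standby takes
  # over and repopulates its statistics:
  ceph mgr fail <active-mgr-name>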
Thanks
R
OSD has a latency until
it can deliver its data shard. So is there a recommendation as to which of my
two k+m examples should be preferred?
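For concreteness, two hypothetical profiles (the actual k+m values are cut
off in this excerpt): k=4/m=2 stores at (k+m)/k = 1.5x raw overhead and
each read touches 4 data shards, while k=8/m=3 stores at only 1.375x but
fans every read across more OSDs, so the slowest of more disks sets the
latency:

  # Hypothetical profiles for comparison; failure domain "host" assumed.
  ceph osd erasure-code-profile set ec_4_2 k=4 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile set ec_8_3 k=8 m=3 crush-failure-domain=host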
Thanks in advance for your help
Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Tel: +49261287 1312 Fax +49261287
> problems
> creating the BlueStore filesystem.
>
> [1] ceph-volume lvm zap /dev/sdg
> ceph-volume lvm prepare --bluestore --data /dev/sdg
>
> On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke <krie...@uni-koblenz.de> wrote:
>
> Hi,
>
> I am quite
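After the quoted zap/prepare sequence, the OSD still needs to be
activated; a sketch of the remaining step (or use "create", which combines
both):

  # Activate every prepared-but-inactive OSD on this host:
  ceph-volume lvm activate --all
  # Alternative: prepare and activate in one command:
  ceph-volume lvm create --bluestore --data /dev/sdg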
--yes-i-really-mean-it
stderr: purged osd.0
--> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd
--cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap
/var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data
/var/lib/ceph/osd/ceph-0/ --osd-uuid
14d041d6-0beb-4056-8df2-3920e2febc
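When mkfs fails like this, a common retry is to wipe the device more
thoroughly before preparing again; a sketch (note that --destroy also
removes the LVM volumes on the device, so double-check the device name):

  # Wipe LVM metadata and data, then retry:
  ceph-volume lvm zap --destroy /dev/sdg
  ceph-volume lvm prepare --bluestore --data /dev/sdg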