[ceph-users] ceph 14.2.6 problem with default args to rbd (--name)

2020-01-20 Thread Rainer Krienke
cluster! rbd: listing images failed: (1) Operation not permitted. According to the documentation this should work, but it seems it doesn't. Is there something I am doing wrong, or is this a bug? Thanks Rainer
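For reference, a minimal sketch of the two invocations being compared here, with a hypothetical client.rz user and keyring path (not taken from the original post):

  # passing the options explicitly works:
  rbd -n client.rz --keyring=/etc/ceph/ceph.client.rz.keyring ls
  # relying on CEPH_ARGS to supply the same defaults:
  export CEPH_ARGS="-n client.rz --keyring=/etc/ceph/ceph.client.rz.keyring"
  rbd ls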

Re: [ceph-users] Strange CEPH_ARGS problems

2019-11-15 Thread Rainer Krienke
Is the flip between the client name "rz" and "user" also a mistype? It's hard to divine whether it is intentional or not, since you keep mixing the two. On Fri, 15 Nov 2019 at 10:57, Rainer Krienke <krie...@uni-koblenz.de> wrote:

Re: [ceph-users] Strange CEPH_ARGS problems

2019-11-15 Thread Rainer Krienke
I found a typo in my post: of course I tried export CEPH_ARGS="-n client.rz --keyring=" and not export CEPH_ARGS=="-n client.rz --keyring=". Thanks Rainer. On 15.11.19 at 07:46, Rainer Krienke wrote: > Hello, > I try to use CEPH_ARGS in order to use e

[ceph-users] Strange CEPH_ARGS problems

2019-11-14 Thread Rainer Krienke
behavior? I would like to set both the user name and the keyring to be used, so that I can run rbd without any parameters. How do you do this? Thanks Rainer
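One way to get this effect, sketched here with a hypothetical client.rz user and keyring path, is to point the client section of ceph.conf at the keyring and select the user once via CEPH_ARGS:

  # /etc/ceph/ceph.conf on the client
  [client.rz]
      keyring = /etc/ceph/ceph.client.rz.keyring

  # select the user once per shell:
  export CEPH_ARGS="--id rz"
  rbd ls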

[ceph-users] Two questions about ceph update/upgrade strategies

2019-06-04 Thread Rainer Krienke
fixed sequence, e.g. first on an osd/mon host and, if the update is successful, then run the Linux system package updates on the other hosts? Do you use another strategy? Thanks Rainer
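For comparison, the commonly documented rolling approach is to upgrade the Ceph daemons in mon -> mgr -> osd order, one host at a time, with rebalancing suppressed while hosts restart; a sketch, independent of the distribution's package tool:

  ceph osd set noout            # avoid data movement while a host is down
  # on each host in turn: install the updated packages, then
  systemctl restart ceph-mon.target ceph-mgr.target ceph-osd.target
  ceph osd unset noout
  ceph versions                 # confirm all daemons run the same release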

Re: [ceph-users] Erasure code profiles and crush rules. Missing link...?

2019-05-22 Thread Rainer Krienke
Hello, thanks for the hint. I opened a ticket with a feature request to include the ec-profile information in the output of ceph osd pool ls detail: http://tracker.ceph.com/issues/40009 Rainer. On 22.05.19 at 17:04, Jan Fajerski wrote: > On Wed, May 22, 2019 at 03:38:27PM +0200, Rainer Krie
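Until that information is shown there, the mapping can be looked up by hand for each pool; a short sketch with placeholder names:

  ceph osd pool get <pool> erasure_code_profile   # which profile the pool was created with
  ceph osd erasure-code-profile get <profile>     # k, m, plugin and crush failure domain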

Re: [ceph-users] Erasure code profiles and crush rules. Missing link...?

2019-05-22 Thread Rainer Krienke
rule "jera_4plus2" -- Dan

[ceph-users] Erasure code profiles and crush rules. Missing link...?

2019-05-22 Thread Rainer Krienke
"op": "set_chooseleaf_tries", "num": 5 }, { "op": "set_choose_tries", "num": 100 }, { "op": "take", "item": -1, "item_name"
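The steps above are a fragment of a crush rule dump; the corresponding commands for inspecting which rule an EC profile generated are (using the rule name mentioned in the replies):

  ceph osd crush rule ls
  ceph osd crush rule dump jera_4plus2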

Re: [ceph-users] ceph nautilus namespaces for rbd and rbd image access problem

2019-05-20 Thread Rainer Krienke
5, so namespaces won't work for me at the moment. Could you please explain what the magic behind "class rbd metadata_list" is? Is it meant to "simply" allow access to the base pool (rbd in my case), so that I authorize access to the pool instead of a namespace? And if th
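For context, a namespace-restricted cap of the kind sketched below (user and names hypothetical) lets a client open images inside one namespace, while listing images touches pool-level RBD metadata, which is where an extra allowance such as the metadata_list class method mentioned above comes in:

  ceph auth get-or-create client.user \
      mon 'profile rbd' \
      osd 'profile rbd pool=rbd namespace=testnamespace'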

Re: [ceph-users] ceph nautilus namespaces for rbd and rbd image access problem

2019-05-20 Thread Rainer Krienke
Hello, I just saw this message on the client when trying and failing to map the rbd image: May 20 08:59:42 client kernel: libceph: bad option at '_pool_ns=testnamespace' Rainer. On 20.05.19 at 08:56, Rainer Krienke wrote: > Hello, > on a ceph Nautilus cluster (14.2.1) runn
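The '_pool_ns' option is what the kernel client is handed when an image inside a namespace is mapped; a sketch of such a map call with hypothetical names, which only works once the running kernel understands RBD namespaces (older kernels reject the option exactly as shown above):

  uname -r                                            # check the running kernel first
  rbd device map rbd/testnamespace/testimage --id user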

[ceph-users] ceph nautilus namespaces for rbd and rbd image access problem

2019-05-19 Thread Rainer Krienke
Operation not permitted 2019-05-20 08:18:29.187 7f42aaffd700 -1 librbd::ImageState: 0x561792408860 failed to open image: (1) Operation not permitted rbd: map failed: (22) Invalid argument Thanks for your help Rainer
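For reference, the setup side of such a test looks roughly like this (a sketch with hypothetical names, not the exact commands from the post):

  rbd namespace create --pool rbd --namespace testnamespace
  rbd namespace ls --pool rbd
  rbd create --size 10G rbd/testnamespace/testimage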

Re: [ceph-users] ceph -s finds 4 pools but ceph osd lspools says no pool which is the expected answer

2019-05-16 Thread Rainer Krienke
new OSDMap the manager sees and I can't see how that would go wrong.) -Greg >> Thanks >> Rainer

Re: [ceph-users] ceph -s finds 4 pools but ceph osd lspools says no pool which is the expected answer

2019-05-14 Thread Rainer Krienke
On 14.05.19 at 20:03, Rainer Krienke wrote: > Hello, > for a freshly set up ceph cluster I see a strange difference between the number of existing pools in the output of ceph -s and what I know should actually be there: no pools at all. > I set up a fresh Nautilus cluster with 144

[ceph-users] ceph -s finds 4 pools but ceph osd lspools says no pool which is the expected answer

2019-05-14 Thread Rainer Krienke
right afterwards. In this case the pool is created and ceph -s shows one pool more (5), and if I delete this pool again the counter in ceph -s goes back to 4. How can I fix the system so that ceph -s also understands that there are actually no pools? There must be some inconsistency. Any ideas? Thanks Rainer
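A quick way to compare the different views while chasing such an inconsistency (a sketch, output omitted):

  ceph -s               # summary counters, including the pool count
  ceph osd lspools      # pools actually present in the OSDMap
  ceph df               # per-pool usage, should list the same pools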

[ceph-users] Need some advice about Pools and Erasure Coding

2019-04-29 Thread Rainer Krienke
OSD has a latency until it can deliver its data shard. So is there a recommendation which of my two k+m examples should be preferred? Thanks in advance for your help Rainer
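Independent of the latency question, the raw-space overhead depends only on the ratio of m to k; a small worked comparison with generic numbers (not necessarily the two profiles from the post):

  # usable fraction = k / (k + m)
  # k=4, m=2 :  4/6  = 0.67 usable, 1.5x raw space, survives 2 lost shards, 6 OSDs per object
  # k=8, m=4 :  8/12 = 0.67 usable, same 1.5x overhead, survives 4 lost shards, 12 OSDs per object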

Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-17 Thread Rainer Krienke
problems > creating the BlueStore filesystem. > [1] ceph-volume lvm zap /dev/sdg > ceph-volume lvm prepare --bluestore --data /dev/sdg > On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke <krie...@uni-koblenz.de> wrote: > Hi, > I am quite
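Spelled out a little further, with the same device path as in the quoted advice (the extra flags are the usual ones, shown here only as a sketch):

  ceph-volume lvm zap /dev/sdg --destroy               # also wipes LVM metadata and partitions
  ceph-volume lvm create --bluestore --data /dev/sdg   # prepare + activate in one step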

[ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-14 Thread Rainer Krienke
ly-mean-it stderr: purged osd.0 --> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 14d041d6-0beb-4056-8df2-3920e2febc