[ceph-users] HA service for RGW and dnsmasq

2024-02-16 Thread Jean-Marc FONTANA

Hello everyone,

We operate 2 clusters as S3 external storage for owncloud.
Both were installed with ceph-deploy (Nautilus), then converted to 
cephadm and upgraded step by step to Reef.

Each cluster has one RGW host, running a dnsmasq package for DNS delegation.
We now feel we should create an HA endpoint for RGW, as explained in the Reef 
documentation.


We have a few questions to which we didn't find answers in the documentation 
before proceeding:


  - is dnsmasq still needed on the existing and new RGW hosts?
  - if the answer is yes, how should it be configured?
  - if the answer is no, how does DNS delegation work?
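
For context, the wildcard record that dnsmasq typically provides for
S3-style bucket addressing looks like the sketch below; as far as we
understand, with an HA endpoint it would simply point at the virtual IP
of the ingress service rather than at a single RGW host (domain and
address are illustrative placeholders):

```
# /etc/dnsmasq.d/rgw.conf -- illustrative sketch only
# Resolve <bucket>.s3.example.com (any bucket name) to the RGW endpoint;
# with an ingress (haproxy/keepalived) service this would be the
# keepalived virtual IP instead of one RGW host's address.
address=/.s3.example.com/192.0.2.10
```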

Any help will be useful.

Thanks and best regards

JM
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-17 Thread Jean-Marc FONTANA

Hello, everyone,

There's nothing but cephadm.log in /var/log/ceph.

To get something else, we tried what David C. proposed (thanks to him!) 
and found:


nov. 17 10:53:54 svtcephmonv3 ceph-mgr[727]: [balancer ERROR root] 
execute error: r = -1, detail = min_compat_client jewel < luminous, 
which is required for pg-upmap. Try 'ceph osd 
set-require-min-compat-client luminous' before using the new interface
nov. 17 10:54:54 svtcephmonv3 ceph-mgr[727]: [balancer ERROR root] 
execute error: r = -1, detail = min_compat_client jewel < luminous, 
which is required for pg-upmap. Try 'ceph osd 
set-require-min-compat-client luminous' before using the new interface
nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR exception] 
Internal Server Error
nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] 
[:::192.168.114.32:53414] [GET] [500] [0.026s] [testadmin] [513.0B] 
/api/rgw/daemon
nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] 
[b'{"status": "500 Internal Server Error", "detail": "The server 
encountered an unexpected condition which prevented it from fulfilling 
the request.", "request_id": "961b2a25-5c14-4c67-a82a-431f08684f80"} ']
nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR exception] 
Internal Server Error
nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] 
[:::192.168.114.32:53409] [GET] [500] [0.012s] [testadmin] [513.0B] 
/api/rgw/daemon
nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] 
[b'{"status": "500 Internal Server Error", "detail": "The server 
encountered an unexpected condition which prevented it from fulfilling 
the request.", "request_id": "baf41a81-1e6b-4422-97a7-bd96b832dc5a"}


The error about min_compat_client has been fixed with the suggested 
command (a nice result :) ),

but the web interface still keeps returning errors.
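
For the record, the state the balancer complained about can be checked
and, if needed, raised with the commands below (a sketch; they must be
run against the cluster):

```
# check the current client compatibility requirement
ceph osd get-require-min-compat-client
# raise it so pg-upmap (and hence the balancer) can work,
# as suggested in the log message
ceph osd set-require-min-compat-client luminous
```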

Thanks for your help,

JM

Le 17/11/2023 à 07:33, Nizamudeen A a écrit :

Hi,

I think it should be in /var/log/ceph/ceph-mgr..log, probably you
can reproduce this error again and hopefully
you'll be able to see a python traceback or something related to rgw in the
mgr logs.

Regards

On Thu, Nov 16, 2023 at 7:43 PM Jean-Marc FONTANA
wrote:


Hello,

These are the last lines of /var/log/ceph/cephadm.log on the active mgr
machine after an error occurred.
As I don't feel this will be very helpful, would you please tell us where
to look?

Best regards,

JM Fontana

2023-11-16 14:45:08,200 7f341eae8740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:46:10,406 7fca81386740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:47:12,594 7fd48f814740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:48:14,857 7fd0b24b1740 DEBUG

cephadm ['--timeout', '895', 'check-host']
2023-11-16 14:48:14,990 7fd0b24b1740 INFO podman (/usr/bin/podman) version
3.0.1 is present
2023-11-16 14:48:14,992 7fd0b24b1740 INFO systemctl is present
2023-11-16 14:48:14,993 7fd0b24b1740 INFO lvcreate is present
2023-11-16 14:48:15,041 7fd0b24b1740 INFO Unit chrony.service is enabled
and running
2023-11-16 14:48:15,043 7fd0b24b1740 INFO Host looks OK
2023-11-16 14:48:15,655 7f36b81fd740 DEBUG

cephadm ['--image', '
quay.io/ceph/ceph@sha256:56984a149e89ce282e9400ca53371ff7df74b1c7f5e979b6ec651b751931483a',
'--timeout', '895', 'ls']
2023-11-16 14:48:17,662 7f17bfc28740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:49:20,131 7fc8a9cc1740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:50:22,284 7f1a6a7eb740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:51:24,505 7f1798dd5740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:52:26,574 7f0185a55740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:53:28,630 7f9bc3fff740 DEBUG

cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 14:54:30,673 7fc3752d0740 DEBUG
--

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Jean-Marc FONTANA
 

cephadm ['--image', 
'quay.io/ceph/ceph@sha256:56984a149e89ce282e9400ca53371ff7df74b1c7f5e979b6ec651b751931483a', 
'--timeout', '895', 'list-networks']
2023-11-16 15:03:53,692 7f652d1e6740 DEBUG 


cephadm ['--timeout', '895', 'gather-facts']
2023-11-16 15:04:56,193 7f2c66ce3740 DEBUG 


cephadm ['--timeout', '895', 'gather-facts']

Le 16/11/2023 à 12:41, Nizamudeen A a écrit :

Hello,

can you also add the mgr logs at the time of this error?

Regards,

On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA 
 wrote:


Hello David,

We tried what you pointed out in your message. First, it was set to

"s3, s3website, swift, swift_auth, admin, sts, iam, subpub"

We tried to set it to "s3, s3website, swift, swift_auth, admin, sts,
iam, subpub, notifications"

and then to "s3, s3website, swift, swift_auth, admin, sts, iam,
notifications",

with no success each time.

We tried then

   ceph dashboard reset-rgw-api-admin-resource

or

   ceph dashboard set-rgw-api-admin-resource XXX

getting a 500 internal error message in a red box in the upper corner
with the first one,

or the 404 error message with the second one.

Thanks for your help,

Cordialement,

JM Fontana


Le 14/11/2023 à 20:53, David C. a écrit :
> Hi Jean Marc,
>
> maybe look at this parameter "rgw_enable_apis", if the values
you have
> correspond to the default (need rgw restart) :
>
>
https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_enable_apis
>
> ceph config get client.rgw rgw_enable_apis
>
> 
>
> Cordialement,
>
> *David CASIER*
>
> ________
>
>
>
> Le mar. 14 nov. 2023 à 11:45, Jean-Marc FONTANA
>  a écrit :
>
>     Hello everyone,
>
>     We operate two clusters that we installed with ceph-deploy in
>     Nautilus
>     version on Debian 10. We use them for external S3 storage
>     (owncloud) and
>     rbd disk images. We had them upgraded to Octopus and Pacific
>     versions on
>     Debian 11 and recently converted them to cephadm and upgraded to
>     Quincy
>     (17.2.6).
>
>     As we now have the orchestrator, we tried updating to 17.2.7
using
>     the
>     command: # ceph orch upgrade start --image
quay.io/ceph/ceph:v17.2.7 <http://quay.io/ceph/ceph:v17.2.7>
>     <http://quay.io/ceph/ceph:v17.2.7>
>
>     Everything went well, both clusters work perfectly for our use,
>     except
>     that the Rados gateway configuration is no longer accessible
from the
>     dashboard with the following error message: Error connecting
to Object
>     Gateway: RGW REST API failed request with status code 404.
>
>     We tried a few solutions found on the internet (reset rgw
    >     credentials,
>     restart rgw and mgr, re-enable dashboard, ...), unsuccessfully.
>
>     Does somebody have an idea ?
>
>     Best regards,
>
>     Jean-Marc Fontana




[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Jean-Marc FONTANA

Hello David,

We tried what you pointed out in your message. First, it was set to

"s3, s3website, swift, swift_auth, admin, sts, iam, subpub"

We tried to set it to "s3, s3website, swift, swift_auth, admin, sts, 
iam, subpub, notifications"


and then to "s3, s3website, swift, swift_auth, admin, sts, iam, 
notifications",


with no success each time.

We then tried

  ceph dashboard reset-rgw-api-admin-resource

or

  ceph dashboard set-rgw-api-admin-resource XXX

getting a 500 internal error message in a red box in the upper corner 
with the first one,

or the 404 error message with the second one.
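
As an aside, the by-eye comparison of API lists we kept doing can be
sketched in a few lines. This is a hypothetical helper, not part of any
Ceph tooling, and the "required" set below is an assumption for
illustration, not an authoritative list of what the dashboard needs:

```python
# Sketch: compare the value returned by
#   ceph config get client.rgw rgw_enable_apis
# against a set of APIs assumed to be needed by the dashboard.
# Pure string handling; nothing here talks to a cluster.
REQUIRED_FOR_DASHBOARD = {"s3", "admin"}  # assumption, not authoritative

def missing_apis(enabled: str, required=frozenset(REQUIRED_FOR_DASHBOARD)):
    """Return the required API names absent from a comma-separated list."""
    enabled_set = {api.strip() for api in enabled.split(",") if api.strip()}
    return sorted(set(required) - enabled_set)

print(missing_apis("s3, s3website, swift, swift_auth, sts, iam"))  # ['admin']
```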

Thanks for your help,

Cordialement,

JM Fontana


Le 14/11/2023 à 20:53, David C. a écrit :

Hi Jean Marc,

maybe look at this parameter "rgw_enable_apis", if the values you have 
correspond to the default (need rgw restart) :


https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_enable_apis

ceph config get client.rgw rgw_enable_apis



Cordialement,

*David CASIER*

____



Le mar. 14 nov. 2023 à 11:45, Jean-Marc FONTANA 
 a écrit :


Hello everyone,

We operate two clusters that we installed with ceph-deploy in
Nautilus
version on Debian 10. We use them for external S3 storage
(owncloud) and
rbd disk images. We had them upgraded to Octopus and Pacific
versions on
Debian 11 and recently converted them to cephadm and upgraded to
Quincy
(17.2.6).

As we now have the orchestrator, we tried updating to 17.2.7 using
the
command: # ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7
<http://quay.io/ceph/ceph:v17.2.7>

Everything went well, both clusters work perfectly for our use,
except
that the Rados gateway configuration is no longer accessible from the
dashboard with the following error message: Error connecting to Object
Gateway: RGW REST API failed request with status code 404.

We tried a few solutions found on the internet (reset rgw
credentials,
restart rgw and mgr, re-enable dashboard, ...), unsuccessfully.

Does somebody have an idea ?

Best regards,

Jean-Marc Fontana




[ceph-users] Problem while upgrade 17.2.6 to 17.2.7

2023-11-14 Thread Jean-Marc FONTANA

Hello everyone,

We operate two clusters that we installed with ceph-deploy in Nautilus 
version on Debian 10. We use them for external S3 storage (owncloud) and 
rbd disk images. We had them upgraded to Octopus and Pacific versions on 
Debian 11 and recently converted them to cephadm and upgraded to Quincy 
(17.2.6).


As we now have the orchestrator, we tried updating to 17.2.7 using the 
command: # ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7


Everything went well, both clusters work perfectly for our use, except 
that the Rados gateway configuration is no longer accessible from the 
dashboard with the following error message: Error connecting to Object 
Gateway: RGW REST API failed request with status code 404.


We tried a few solutions found on the internet (reset rgw credentials, 
restart rgw and mgr, re-enable dashboard, ...), unsuccessfully.


Does somebody have an idea?

Best regards,

Jean-Marc Fontana


[ceph-users] Re: Cephadm migration

2022-10-18 Thread Jean-Marc FONTANA

Hello Adam,

We just tried the ceph orch daemon redeploy command (without "--image") and 
it works; the rgw image is now the right version.


The command we used is

$ sudo ceph orch daemon redeploy rgw.testrgw.svtcephrgwv1.zlfzpx 
quay.io/ceph/ceph:v16.2.10


The old rgw service is still alive, but it seems that's no problem 
for the orchestrator.

We can now go further and find out what is wrong.

Thanks for your help

JM

Le 14/10/2022 à 14:09, Adam King a écrit :

For the weird image, perhaps just "ceph orch daemon redeploy
rgw.testrgw.svtcephrgwv1.invwmo --image quay.io/ceph/ceph:v16.2.10" will
resolve it. Not sure about the other things wrong with it yet but I think
the image should be fixed before looking into that.

On Fri, Oct 14, 2022 at 5:47 AM Jean-Marc FONTANA
wrote:


Hello everyone !

We're operating a small cluster which contains 1 monitor-manager, 3 OSDs
and 1 RGW.
The cluster was initially installed with ceph-deploy in version
Nautilus (14.2.19) then
upgraded in Octopus (15.2.16) and lastly in Pacific (16.2.9).
Ceph-deploy does not work
any more so we need to migrate the cluster in cephadm mode. We did it
following the
official Ceph Doc.

Everything went OK as far as the mon, mgr and OSDs are concerned, but we ran
into serious trouble
when migrating the RGW:

- the RGW podman image is a very exotic version (see below)
- the old service starts after a while, although having been stopped
and removed as explained in the doc,
- we never managed to configure the new gateway with a YAML file.

Versions of the nodes after migrating :

   ##
monitor / manager  : cephadm inspect-image
##

{
  "ceph_version": "ceph version 16.2.10
(45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",
  "image_id":
"32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",
  "repo_digests": [
"
quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd
",
"
quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa
"
  ]
}


   ##
OSDs : cephadm inspect-image
##

{
  "ceph_version": "ceph version 16.2.10
(45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",
  "image_id":
"32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",
  "repo_digests": [
"
quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd
",
"
quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa
"
  ]
}

##
RGW : cephadm inspect-image
##

{
  "ceph_version": "ceph version 16.2.5-387-g7282d81d
(7282d81d2c500b5b0e929c07971b72444c6ac424) pacific (stable)",
  "image_id":
"41387741ad94630f1c58b94fdba261df8d8e3dc2d4f70ad6201739764f43eb2c",
  "repo_digests": [
"
docker.io/ceph/daemon-base@sha256:a038c6dc35064edff40bb7e824783f1bbd325c888e722ec5e814671406216ad5
"
  ]
}

=
Orchestrator
=
ceph orch ps :
-
NAME HOST   PORTS STATUS
REFRESHED  AGE  MEM USE  MEM LIM  VERSION IMAGE ID  CONTAINER ID
mgr.svtcephmonv1 svtcephmonv1  running (5h)
4m ago   2d 341M-  16.2.10 32214388de9d  0464e2e0c71b
mon.svtcephmonv1 svtcephmonv1  running (5h)
4m ago   2d 262M2048M  16.2.10 32214388de9d  28aa77685767
osd.0svtcephosdv01 running (5h)
4m ago   2d 184M4096M  16.2.10 32214388de9d  c5a4a1091cba
osd.1svtcephosdv02 running (5h)
4m ago   2d 182M4096M  16.2.10 32214388de9d  080c8f2b3eca
osd.2svtcephosdv03 running (5h)
4m ago   2d 189M4096M  16.2.10 32214388de9d  b58b549a932d
osd.3svtcephosdv01 running (5h)
4m ago   2d 245M4096M  16.2.10 32214388de9d  9d2f781ae290
osd.4svtcephosdv02 running (5h)
4m ago   2d 233M4096M  16.2.10 32214388de9d  6296db28f1d4
osd.5svtcephosdv03 running (5h)
4m ago   2d 213M4096M  16.2.10 32214388de9d  deb58248e520
rgw.testrgw.svtcephrgwv1.invwmo  svtcephrgwv1   *:80 error   67s
ago  23h--
-
ceph orch host ls  :
-
HOST   ADDR   LABELS  STATUS
svtcephmonv1   192.168.90.51
svtcephosdv01  192.168.90.54
svtcephosdv02  192.168.90.55
svtcephosdv03  192.168.90.56
svtcephrgwv1   192.168.90.57  RGW
5 hosts in cluster

Any help will be welcome, and we can send any information that would
help to solve the problem.

[ceph-users] Re: Cephadm migration

2022-10-14 Thread Jean-Marc FONTANA

Hi Adam,

Thanks for your quick answer. We'll try it soon and keep in touch 
about the result.


Best regards

JM

Le 14/10/2022 à 14:09, Adam King a écrit :

For the weird image, perhaps just "ceph orch daemon redeploy
rgw.testrgw.svtcephrgwv1.invwmo --image quay.io/ceph/ceph:v16.2.10" will
resolve it. Not sure about the other things wrong with it yet but I think
the image should be fixed before looking into that.

On Fri, Oct 14, 2022 at 5:47 AM Jean-Marc FONTANA
wrote:


Hello everyone !

We're operating a small cluster which contains 1 monitor-manager, 3 OSDs
and 1 RGW.
The cluster was initially installed with ceph-deploy in version
Nautilus (14.2.19) then
upgraded in Octopus (15.2.16) and lastly in Pacific (16.2.9).
Ceph-deploy does not work
any more so we need to migrate the cluster in cephadm mode. We did it
following the
official Ceph Doc.

Everything went OK as far as the mon, mgr and OSDs are concerned, but we ran
into serious trouble
when migrating the RGW:

- the RGW podman image is a very exotic version (see below)
- the old service starts after a while, although having been stopped
and removed as explained in the doc,
- we never managed to configure the new gateway with a YAML file.

Versions of the nodes after migrating :

   ##
monitor / manager  : cephadm inspect-image
##

{
  "ceph_version": "ceph version 16.2.10
(45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",
  "image_id":
"32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",
  "repo_digests": [
"
quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd
",
"
quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa
"
  ]
}


   ##
OSDs : cephadm inspect-image
##

{
  "ceph_version": "ceph version 16.2.10
(45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",
  "image_id":
"32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",
  "repo_digests": [
"
quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd
",
"
quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa
"
  ]
}

##
RGW : cephadm inspect-image
##

{
  "ceph_version": "ceph version 16.2.5-387-g7282d81d
(7282d81d2c500b5b0e929c07971b72444c6ac424) pacific (stable)",
  "image_id":
"41387741ad94630f1c58b94fdba261df8d8e3dc2d4f70ad6201739764f43eb2c",
  "repo_digests": [
"
docker.io/ceph/daemon-base@sha256:a038c6dc35064edff40bb7e824783f1bbd325c888e722ec5e814671406216ad5
"
  ]
}

=
Orchestrator
=
ceph orch ps :
-
NAME HOST   PORTS STATUS
REFRESHED  AGE  MEM USE  MEM LIM  VERSION IMAGE ID  CONTAINER ID
mgr.svtcephmonv1 svtcephmonv1  running (5h)
4m ago   2d 341M-  16.2.10 32214388de9d  0464e2e0c71b
mon.svtcephmonv1 svtcephmonv1  running (5h)
4m ago   2d 262M2048M  16.2.10 32214388de9d  28aa77685767
osd.0svtcephosdv01 running (5h)
4m ago   2d 184M4096M  16.2.10 32214388de9d  c5a4a1091cba
osd.1svtcephosdv02 running (5h)
4m ago   2d 182M4096M  16.2.10 32214388de9d  080c8f2b3eca
osd.2svtcephosdv03 running (5h)
4m ago   2d 189M4096M  16.2.10 32214388de9d  b58b549a932d
osd.3svtcephosdv01 running (5h)
4m ago   2d 245M4096M  16.2.10 32214388de9d  9d2f781ae290
osd.4svtcephosdv02 running (5h)
4m ago   2d 233M4096M  16.2.10 32214388de9d  6296db28f1d4
osd.5svtcephosdv03 running (5h)
4m ago   2d 213M4096M  16.2.10 32214388de9d  deb58248e520
rgw.testrgw.svtcephrgwv1.invwmo  svtcephrgwv1   *:80 error   67s
ago  23h--
-
ceph orch host ls  :
-
HOST   ADDR   LABELS  STATUS
svtcephmonv1   192.168.90.51
svtcephosdv01  192.168.90.54
svtcephosdv02  192.168.90.55
svtcephosdv03  192.168.90.56
svtcephrgwv1   192.168.90.57  RGW
5 hosts in cluster

Any help will be welcome, and we can send any information that would
help to solve the problem.





[ceph-users] Cephadm migration

2022-10-14 Thread Jean-Marc FONTANA

Hello everyone !

We're operating a small cluster which contains 1 monitor-manager, 3 OSDs 
and 1 RGW.
The cluster was initially installed with ceph-deploy in version 
Nautilus (14.2.19) then
upgraded in Octopus (15.2.16) and lastly in Pacific (16.2.9). 
Ceph-deploy does not work
any more so we need to migrate the cluster in cephadm mode. We did it 
following the

official Ceph Doc.

Everything went OK as far as the mon, mgr and OSDs are concerned, but we ran 
into serious trouble

when migrating the RGW:

  - the RGW podman image is a very exotic version (see below)
  - the old service restarts after a while, although it has been stopped 
and removed as explained in the doc,

  - we never managed to configure the new gateway with a YAML file.

Versions of the nodes after migrating :

 ##
monitor / manager  : cephadm inspect-image
##

{
    "ceph_version": "ceph version 16.2.10 
(45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",
    "image_id": 
"32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",

    "repo_digests": [
"quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd",
"quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa"
    ]
}


 ##
OSDs : cephadm inspect-image
##

{
    "ceph_version": "ceph version 16.2.10 
(45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",
    "image_id": 
"32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",

    "repo_digests": [
"quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd",
"quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa"
    ]
}

##
RGW : cephadm inspect-image
##

{
    "ceph_version": "ceph version 16.2.5-387-g7282d81d 
(7282d81d2c500b5b0e929c07971b72444c6ac424) pacific (stable)",
    "image_id": 
"41387741ad94630f1c58b94fdba261df8d8e3dc2d4f70ad6201739764f43eb2c",

    "repo_digests": [
"docker.io/ceph/daemon-base@sha256:a038c6dc35064edff40bb7e824783f1bbd325c888e722ec5e814671406216ad5"
    ]
}

=
Orchestrator
=
ceph orch ps :
-
NAME HOST   PORTS STATUS    
REFRESHED  AGE  MEM USE  MEM LIM  VERSION IMAGE ID  CONTAINER ID
mgr.svtcephmonv1 svtcephmonv1  running (5h) 
4m ago   2d 341M    -  16.2.10 32214388de9d  0464e2e0c71b
mon.svtcephmonv1 svtcephmonv1  running (5h) 
4m ago   2d 262M    2048M  16.2.10 32214388de9d  28aa77685767
osd.0    svtcephosdv01 running (5h) 
4m ago   2d 184M    4096M  16.2.10 32214388de9d  c5a4a1091cba
osd.1    svtcephosdv02 running (5h) 
4m ago   2d 182M    4096M  16.2.10 32214388de9d  080c8f2b3eca
osd.2    svtcephosdv03 running (5h) 
4m ago   2d 189M    4096M  16.2.10 32214388de9d  b58b549a932d
osd.3    svtcephosdv01 running (5h) 
4m ago   2d 245M    4096M  16.2.10 32214388de9d  9d2f781ae290
osd.4    svtcephosdv02 running (5h) 
4m ago   2d 233M    4096M  16.2.10 32214388de9d  6296db28f1d4
osd.5    svtcephosdv03 running (5h) 
4m ago   2d 213M    4096M  16.2.10 32214388de9d  deb58248e520
rgw.testrgw.svtcephrgwv1.invwmo  svtcephrgwv1   *:80 error   67s 
ago  23h    -    -    

-
ceph orch host ls  :
-
HOST   ADDR   LABELS  STATUS
svtcephmonv1   192.168.90.51
svtcephosdv01  192.168.90.54
svtcephosdv02  192.168.90.55
svtcephosdv03  192.168.90.56
svtcephrgwv1   192.168.90.57  RGW
5 hosts in cluster

Any help will be welcome, and we can send any information that would 
help to solve the problem.



[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-22 Thread Jean-Marc FONTANA

Hello Iban,

We finally did it! With your example, we set up a client that does 
what we need.
Our only regret is that the ceph auth documentation is not a little more 
explicit; that

could have led us to the solution more quickly.

Many thanks Iban, and Kai Stian Olstad too

Best regards

JM

Le 19/07/2022 à 14:12, Jean-Marc FONTANA a écrit :


Hello Iban,

Thanks for your answer! We finally managed to connect with the 
admin keyring,
and we think that is not the best practice. We shall try your configuration 
and let you know the result.


Best regards

JM

Le 19/07/2022 à 11:08, Iban Cabrillo a écrit :

Hi Jean,

   If you do not want to use the admin user, which is the most logical thing to 
do, you must create a client with rbd access to the pool on which you are going 
to perform the I/O actions.
For example in our case it is the user cinder:
client.cinder
key: 

caps: [mgr] allow r
caps: [mon] profile rbd
caps: [osd] profile rbd pool=vol1, profile rbd pool=vol2 . profile 
rbd pool=volx

   Then install the client keyring on the client node:

cephclient:~ # ls -la /etc/ceph/
total 28
drwxr-xr-x 2 root root 4096 Jul 18 11:37 .
drwxr-xr-x 132 root root 12288 Jul 18 11:37 ...
-rw-r--r-- 1 root root root 64 Oct 19 2017 ceph.client.cinder.keyring
-rw-r--r-- 1 root root root 2018 Jul 18 11:37 ceph.conf

In our case we have added

cat /etc/profile.d/ceph-cinder.sh
export CEPH_ARGS="--keyring /etc/ceph/ceph.client.cinder.keyring --id cinder"

so that it picks it up automatically.

cephclient:~ # rbd ls -p volumes
image01_to_remove
volume-01bbf2ee-198c-446d-80bf-f68292130f5c
volume-036865ad-6f9b-4966-b2ea-ce10bf09b6a9
volume-04445a86-a032-4731-8bff-203dfc5d02e1
..

I hope this helps you.

Cheers, I





[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-19 Thread Jean-Marc FONTANA

Hello Iban,

Thanks for your answer! We finally managed to connect with the admin 
keyring,
and we think that is not the best practice. We shall try your configuration 
and let you know the result.


Best regards

JM

Le 19/07/2022 à 11:08, Iban Cabrillo a écrit :

Hi Jean,

   If you do not want to use the admin user, which is the most logical thing to 
do, you must create a client with rbd access to the pool on which you are going 
to perform the I/O actions.
For example in our case it is the user cinder:
client.cinder
key: 

caps: [mgr] allow r
caps: [mon] profile rbd
caps: [osd] profile rbd pool=vol1, profile rbd pool=vol2 . profile 
rbd pool=volx

   Then install the client keyring on the client node:

cephclient:~ # ls -la /etc/ceph/
total 28
drwxr-xr-x 2 root root 4096 Jul 18 11:37 .
drwxr-xr-x 132 root root 12288 Jul 18 11:37 ...
-rw-r--r-- 1 root root root 64 Oct 19 2017 ceph.client.cinder.keyring
-rw-r--r-- 1 root root root 2018 Jul 18 11:37 ceph.conf

In our case we have added

cat /etc/profile.d/ceph-cinder.sh
export CEPH_ARGS="--keyring /etc/ceph/ceph.client.cinder.keyring --id cinder"

so that it picks it up automatically.

cephclient:~ # rbd ls -p volumes
image01_to_remove
volume-01bbf2ee-198c-446d-80bf-f68292130f5c
volume-036865ad-6f9b-4966-b2ea-ce10bf09b6a9
volume-04445a86-a032-4731-8bff-203dfc5d02e1
..

I hope this helps you.

Cheers, I





[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-19 Thread Jean-Marc FONTANA

Hello,

Thanks for your answer! We finally managed to connect with the admin 
keyring,
but we think that is not the best practice. A little after your 
message, there was another one
indicating a way to set up a proper client. We shall try it and let 
you know the result.


Best regards

JM

Le 19/07/2022 à 10:50, Kai Stian Olstad a écrit :

On 08.07.2022 16:18, Jean-Marc FONTANA wrote:

We're planning to use rbd too and get a block device for a Linux server.
In order to do that, we installed ceph-common packages
and created ceph.conf and ceph.keyring as explained at Basic Ceph
Client Setup — Ceph Documentation
<https://docs.ceph.com/en/pacific/cephadm/client-setup/>
(https://docs.ceph.com/en/pacific/cephadm/client-setup/)

This does not work.

Ceph seems to be installed

$ dpkg -l | grep ceph-common
ii  ceph-common   16.2.9-1~bpo11+1 amd64 common
utilities to mount and interact with a ceph storage cluster
ii  python3-ceph-common   16.2.9-1~bpo11+1 all Python
3 utility libraries for Ceph

$ ceph -v
ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) 
pacific (stable)


But, when using commands that interact with the cluster, we get this 
message


$ ceph -s
2022-07-08T15:51:24.965+0200 7f773b7fe700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support 
[2,1]

[errno 13] RADOS permission denied (error connecting to the cluster)


The default user for ceph is the admin/client.admin do you have that 
key in your keyring?

And is the keyring file readable for the user running the ceph commands?
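
That readability check can be scripted; a small sketch follows (the
helper name is ours, and the temporary file merely stands in for a real
keyring path such as /etc/ceph/ceph.client.admin.keyring):

```python
import os
import tempfile

def keyring_readable(path: str) -> bool:
    """True if the keyring file exists and the current user may read it."""
    return os.path.isfile(path) and os.access(path, os.R_OK)

# Demo with a temporary file standing in for a real keyring
# (illustrative content; a real keyring holds a base64 key).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"[client.admin]\n\tkey = ...\n")
    demo = f.name
os.chmod(demo, 0o600)
print(keyring_readable(demo))  # True when run by the file's owner
os.unlink(demo)
print(keyring_readable(demo))  # False once the file is gone
```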




[ceph-users] Can't setup Basic Ceph Client

2022-07-08 Thread Jean-Marc FONTANA

Hello folks,

We're operating a small ceph test cluster made of 5 VMs: 1 
monitor/manager, 3 OSDs and 1 RADOS gateway for

owncloud S3 external storage use. This works almost fine.

We're planning to use rbd too and get a block device for a Linux server. 
In order to do that, we installed the ceph-common packages
and created ceph.conf and ceph.keyring as explained at Basic Ceph Client 
Setup — Ceph Documentation
(https://docs.ceph.com/en/pacific/cephadm/client-setup/)
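
For reference, the two files that page asks for boil down to something
like the sketch below (the fsid, monitor address and key are
placeholders, not values from our cluster):

```
# /etc/ceph/ceph.conf (sketch)
[global]
    fsid = 00000000-0000-0000-0000-000000000000
    mon_host = 192.0.2.1

# /etc/ceph/ceph.keyring (sketch)
[client.admin]
    key = <base64 key, e.g. from 'ceph auth get client.admin'>
```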


This does not work.

Ceph seems to be installed

$ dpkg -l | grep ceph-common
ii  ceph-common   16.2.9-1~bpo11+1 amd64    common 
utilities to mount and interact with a ceph storage cluster
ii  python3-ceph-common   16.2.9-1~bpo11+1 all  Python 3 
utility libraries for Ceph


$ ceph -v
ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific 
(stable)


But, when using commands that interact with the cluster, we get this message

$ ceph -s
2022-07-08T15:51:24.965+0200 7f773b7fe700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
[errno 13] RADOS permission denied (error connecting to the cluster)

We tried to insert these lines in ceph.conf

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

as explained in an earlier forum thread, but we still get the error message, 
slightly different:



$ ceph -s
2022-07-08T15:51:24.965+0200 7f773b7fe700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2]
[errno 13] RADOS permission denied (error connecting to the cluster)

Does anyone have an idea?

The cluster was installed with ceph-deploy in Nautilus 14.2.21 on Debian 
10.9, then upgraded to Octopus 15.2.16 with ceph-deploy.
Then Debian was upgraded to 11.3 and ceph-deploy couldn't go any further. 
The last cluster upgrade, to Pacific 16.2.9, was made with

Debian apt-get.

If you need more information, ask us; we would be grateful for some help.

JM



[ceph-users] Changing IP addresses

2021-04-06 Thread Jean-Marc FONTANA

Hello everyone,

We have installed a Nautilus Ceph cluster with 3 monitors, 5 OSDs and 1 
RGW gateway.
It works, but now we need to change the IP addresses of these machines 
to put them in a DMZ.

Are there any recommendations on how to go about doing this?
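
For context, for the monitors the Ceph documentation describes a
monmap-editing procedure along the lines sketched below (names and
addresses are placeholders; each mon must be stopped before its map is
injected, and this is only a sketch of the documented steps, not a
tested recipe):

```
# export the current monitor map
ceph mon getmap -o /tmp/monmap
# replace the entry for one monitor with its new address
monmaptool --rm mon1 /tmp/monmap
monmaptool --add mon1 203.0.113.11:6789 /tmp/monmap
# inject the edited map into the (stopped) monitor, then restart it
ceph-mon -i mon1 --inject-monmap /tmp/monmap
```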

Best regards,
