Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Alex Litvak

Thank you for the explanation, Jason, and thank you for opening a ticket for my 
issue.

On 6/25/2019 1:56 PM, Jason Dillaman wrote:

On Tue, Jun 25, 2019 at 2:40 PM Tarek Zegar <tze...@us.ibm.com> wrote:

Sasha,

Sorry, I don't get it. The documentation for the command states that in order to see 
the config DB for everything, do: _"ceph config dump"_
To see what's in the config DB for a particular daemon, do: _"ceph config get <who>"_
To see what's set for a particular daemon (be it from the config DB, override, conf file, 
etc.): _"ceph config show <who>"_

I don't see anywhere that the command you mentioned is valid: "ceph config get 
client.admin"

Here is output from a monitor node on bare metal
root@hostmonitor1:~# ceph config dump
WHO              MASK LEVEL    OPTION                    VALUE RO
mon.hostmonitor1      advanced mon_osd_down_out_interval 30
mon.hostmonitor1      advanced mon_osd_min_in_ratio      0.10
mgr                   unknown  mgr/balancer/active       1     *
mgr                   unknown  mgr/balancer/mode         upmap *
osd.*                 advanced debug_ms                  20/20
osd.*                 advanced osd_max_backfills         2

root@hostmonitor1:~# ceph config get mon.hostmonitor1
WHO              MASK LEVEL    OPTION                    VALUE RO
mon.hostmonitor1      advanced mon_osd_down_out_interval 30
mon.hostmonitor1      advanced mon_osd_min_in_ratio      0.10

root@hostmonitor1:~# ceph config get client.admin
WHO MASK LEVEL OPTION VALUE RO <-blank



What am I missing from what you're suggesting?


This just means that you don't have any MON config store overrides that apply to the 
"client.admin" Ceph user:

$ ceph config get client.admin
WHO MASK LEVEL OPTION VALUE RO
$ ceph config set client debug_rbd 20
$ ceph config set client.admin debug_rbd_mirror 20
$ ceph config get client.admin
WHO          MASK LEVEL    OPTION                    VALUE RO
client            advanced debug_rbd                 20/20
client.admin      advanced debug_rbd_mirror          20/20

The first parameter to "ceph config get" is a "who" (i.e. a Ceph entity like a 
daemon or client).
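
A further sketch of how that entity hierarchy resolves, in case it helps (the option
values and the single-option form of "ceph config get" are only illustrative; exact
output formatting may differ on your release):

$ ceph config set client debug_rbd 10          # applies to every client entity
$ ceph config set client.admin debug_rbd 20    # more specific, wins for client.admin only
$ ceph config get client.admin debug_rbd       # resolve one option for one entity
20/20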



Thank you for clarifying,
Tarek Zegar
Senior SDS Engineer
Email _tze...@us.ibm.com_
Mobile _630.974.7172_





From: Sasha Litvak <alexander.v.lit...@gmail.com>
To: Tarek Zegar <tze...@us.ibm.com>, ceph-users@lists.ceph.com
Date: 06/25/2019 10:38 AM
Subject: [EXTERNAL] Re: Re: Re: [ceph-users] Client admin socket for RBD






Tarek,

Of course you are correct about the client nodes.  I executed this command 
inside of the container that runs the mon.  Or it can be done on the bare metal node 
that runs the mon.  You are essentially querying the mon configuration database.
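
For example (just a sketch; the container name below is a guess based on a typical
ceph-ansible/docker deployment, so adjust it to whatever your mon container is called):

# on the host that runs the mon container
docker exec ceph-mon-hostmonitor1 ceph config get client.admin
# or directly on a bare metal mon/admin node that has a keyring
ceph config get client.admin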

On Tue, Jun 25, 2019 at 8:53 AM Tarek Zegar <_tze...@us.ibm.com_> wrote:

"config get" on a client.admin? There is no daemon for client.admin, I 
get nothing. Can you please explain?


Tarek Zegar
Senior SDS Engineer
Email _tze...@us.ibm.com_
Mobile _630.974.7172_





From: Sasha Litvak <_alexander.v.litvak@gmail.com_>
To: Tarek Zegar <_tze...@us.ibm.com_>
Date: 06/24/2019 07:48 PM
Subject: [EXTERNAL] Re: Re: [ceph-users] Client admin socket for RBD

----



ceph config get client.admin

On Mon, Jun 24, 2019, 1:10 PM Tarek Zegar <_tze...@us.ibm.com_> wrote:
Alex,

Sorry real quick, what did you type to get that last bit of 
info?

Tarek Zegar
Senior SDS Engineer
Email _tze...@us.ibm.com_
Mobile _630.974.7172_




Alex Litva

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Jason Dillaman
On Tue, Jun 25, 2019 at 2:40 PM Tarek Zegar  wrote:

> Sasha,
>
> Sorry, I don't get it. The documentation for the command states that in
> order to see the config DB for everything, do: *"ceph config dump"*
> To see what's in the config DB for a particular daemon, do: *"ceph config
> get <who>"*
> To see what's set for a particular daemon (be it from the config DB,
> override, conf file, etc.): *"ceph config show <who>"*
>
> I don't see anywhere that the command you mentioned is valid: "ceph
> config get client.admin"
>
> Here is output from a monitor node on bare metal
> root@hostmonitor1:~# ceph config dump
> WHO MASK LEVEL OPTION VALUE RO
> mon.hostmonitor1 advanced mon_osd_down_out_interval 30
> mon.hostmonitor1 advanced mon_osd_min_in_ratio 0.10
> mgr unknown mgr/balancer/active 1 *
> mgr unknown mgr/balancer/mode upmap *
> osd.* advanced debug_ms 20/20
> osd.* advanced osd_max_backfills 2
>
> root@hostmonitor1:~# ceph config get mon.hostmonitor1
> WHO MASK LEVEL OPTION VALUE RO
> mon.hostmonitor1 advanced mon_osd_down_out_interval 30
> mon.hostmonitor1 advanced mon_osd_min_in_ratio 0.10
>
> root@hostmonitor1:~# ceph config get client.admin
> WHO MASK LEVEL OPTION VALUE RO <-blank
>

>
> What am I missing from what you're suggesting?
>

This just means that you don't have any MON config store overrides that
apply to the "client.admin" Ceph user:

$ ceph config get client.admin
WHO MASK LEVEL OPTION VALUE RO
$ ceph config set client debug_rbd 20
$ ceph config set client.admin debug_rbd_mirror 20
$ ceph config get client.admin
WHO          MASK LEVEL    OPTION           VALUE RO
client            advanced debug_rbd        20/20
client.admin      advanced debug_rbd_mirror 20/20

The first parameter to "ceph config get" is a "who" (i.e. a Ceph entity
like a daemon or client).

>
>
> Thank you for clarifying,
> Tarek Zegar
> Senior SDS Engineer
> Email *tze...@us.ibm.com* 
> Mobile *630.974.7172*
>
>
>
>
>
> From: Sasha Litvak 
> To: Tarek Zegar , ceph-users@lists.ceph.com
> Date: 06/25/2019 10:38 AM
> Subject: [EXTERNAL] Re: Re: Re: [ceph-users] Client admin socket for RBD
> --
>
>
>
> Tarek,
>
> Of course you are correct about the client nodes.  I executed this command
> inside of the container that runs the mon.  Or it can be done on the bare metal
> node that runs the mon.  You are essentially querying the mon configuration database.
>
>
> On Tue, Jun 25, 2019 at 8:53 AM Tarek Zegar <*tze...@us.ibm.com*
> > wrote:
>
>"config get" on a client.admin? There is no daemon for client.admin, I
>get nothing. Can you please explain?
>
>
>Tarek Zegar
>Senior SDS Engineer
> *Email **tze...@us.ibm.com* 
>Mobile *630.974.7172*
>
>
>
>
>
>From: Sasha Litvak <*alexander.v.lit...@gmail.com*
>>
>To: Tarek Zegar <*tze...@us.ibm.com* >
>Date: 06/24/2019 07:48 PM
>Subject: [EXTERNAL] Re: Re: [ceph-users] Client admin socket for RBD
>--
>
>
>
>ceph config get client.admin
>
>On Mon, Jun 24, 2019, 1:10 PM Tarek Zegar <*tze...@us.ibm.com*
>> wrote:
>   Alex,
>
>  Sorry real quick, what did you type to get that last bit of info?
>
>  Tarek Zegar
>  Senior SDS Engineer
> *Email **tze...@us.ibm.com* 
>      Mobile *630.974.7172*
>
>
>
>
>  Alex Litvak ---06/24/2019 01:07:28 PM---Jason, Here you go:
>
>  From: Alex Litvak <*alexander.v.lit...@gmail.com*
>  >
>  To: *ceph-users@lists.ceph.com* 
>  Cc: ceph-users <
>  *public-ceph-users-idqoxfivofjgjs9i8mt...@plane.gmane.org*
>  >
>  Date: 06/24/2019 01:07 PM
>  Subject: [EXTERNAL] Re: [ceph-users] Client admin socket for RBD
>  Sent by: "ceph-users" <*ceph-users-boun...@lists.ceph.com*
>  >
>  --
>
>
>
>  Jason,
>
>  Here y

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Tarek Zegar

Sasha,

Sorry, I don't get it. The documentation for the command states that in
order to see the config DB for everything, do: "ceph config dump"
To see what's in the config DB for a particular daemon, do: "ceph config get
<who>"
To see what's set for a particular daemon (be it from the config DB,
override, conf file, etc.): "ceph config show <who>"
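
For concreteness, against the mon shown in the output below, the three forms would be
(sketch only; output omitted):

ceph config dump                    # everything stored in the mon config DB
ceph config get mon.hostmonitor1    # config-DB entries that apply to that daemon
ceph config show mon.hostmonitor1   # what the running daemon actually uses, from all sources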

I don't see anywhere that the command you mentioned is valid: "ceph
config get client.admin"

Here is output from a monitor node on bare metal
root@hostmonitor1:~# ceph config dump
WHO              MASK LEVEL    OPTION                    VALUE RO
mon.hostmonitor1      advanced mon_osd_down_out_interval 30
mon.hostmonitor1      advanced mon_osd_min_in_ratio      0.10
mgr                   unknown  mgr/balancer/active       1     *
mgr                   unknown  mgr/balancer/mode         upmap *
osd.*                 advanced debug_ms                  20/20
osd.*                 advanced osd_max_backfills         2

root@hostmonitor1:~# ceph config get mon.hostmonitor1
WHO              MASK LEVEL    OPTION                    VALUE RO
mon.hostmonitor1      advanced mon_osd_down_out_interval 30
mon.hostmonitor1      advanced mon_osd_min_in_ratio      0.10

root@hostmonitor1:~# ceph config get client.admin
WHO MASK LEVEL OPTION VALUE RO   <-blank

What am I missing from what you're suggesting?


Thank you for clarifying,
Tarek Zegar
Senior SDS Engineer
Email tze...@us.ibm.com
Mobile 630.974.7172






From:   Sasha Litvak 
To: Tarek Zegar , ceph-users@lists.ceph.com
Date:   06/25/2019 10:38 AM
Subject:    [EXTERNAL] Re: Re: Re: [ceph-users] Client admin socket for RBD



Tarek,

Of course you are correct about the client nodes.  I executed this command
inside of the container that runs the mon.  Or it can be done on the bare metal
node that runs the mon.  You are essentially querying the mon configuration database.

On Tue, Jun 25, 2019 at 8:53 AM Tarek Zegar  wrote:
  "config get" on a client.admin? There is no daemon for client.admin, I
  get nothing. Can you please explain?


  Tarek Zegar
  Senior SDS Engineer
  Email tze...@us.ibm.com
  Mobile 630.974.7172





  From: Sasha Litvak 
  To: Tarek Zegar 
  Date: 06/24/2019 07:48 PM
  Subject: [EXTERNAL] Re: Re: [ceph-users] Client admin socket for RBD



  ceph config get client.admin

  On Mon, Jun 24, 2019, 1:10 PM Tarek Zegar  wrote:
Alex,

Sorry real quick, what did you type to get that last bit of info?

Tarek Zegar
Senior SDS Engineer
Email tze...@us.ibm.com
Mobile 630.974.7172




Alex Litvak ---06/24/2019 01:07:28 PM---Jason, Here you go:

From: Alex Litvak 
To: ceph-users@lists.ceph.com
Cc: ceph-users <
public-ceph-users-idqoxfivofjgjs9i8mt...@plane.gmane.org>
Date: 06/24/2019 01:07 PM
    Subject: [EXTERNAL] Re: [ceph-users] Client admin socket for RBD
Sent by: "ceph-users" 



Jason,

Here you go:

WHO    MASK LEVEL    OPTION                      VALUE                         RO
client      advanced admin_socket                /var/run/ceph/$name.$pid.asok *
global      advanced cluster_network             10.0.42.0/23                  *
global      advanced debug_asok                  0/0
global      advanced debug_auth                  0/0
global      advanced debug_bdev                  0/0
global      advanced debug_bluefs                0/0
global      advanced debug_bluestore             0/0
global      advanced debug_buffer                0/0
global      advanced debug_civetweb              0/0
global      advanced debug_client                0/0
global      advanced debug_compressor            0/0
global      advanced debug_context               0/0
global      advanced debug_crush                 0/0
global      advanced debug_crypto                0/0
global      advanced debug_dpdk                  0/0
global      advanced debug_eventtrace            0/0
global      advanced debug_filer                 0/0
global      advanced debug_filestore             0/0
global      advanced debug_finisher              0/0
global      advanced debug_fuse                  0/0
global      advanced debug_heartbeatmap          0/0
global      advanced debug_javaclient            0/0
global      advanced debug_journal               0/0
global      advanced debug_journaler             0/0
global      advanced debug_kinetic               0/0
  

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Jason Dillaman
On Mon, Jun 24, 2019 at 4:30 PM Alex Litvak
 wrote:
>
> Jason,
>
> What are you suggesting we do? Removing this line from the config database
> and keeping it in the config files instead?

I think it's a hole right now in the MON config store that should be
addressed. I've opened a tracker ticket [1] to support re-opening the
admin socket after the MON configs are received (if not overridden in
the local conf).
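
In the meantime, keeping the admin_socket override only in the local conf (which is read
before the MON configs arrive) should avoid the warning; a minimal sketch, assuming you
distribute /etc/ceph/ceph.conf to the clients as described elsewhere in this thread:

# /etc/ceph/ceph.conf on the client
[client]
admin_socket = /var/run/ceph/$name.$pid.asok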

> On 6/24/2019 1:12 PM, Jason Dillaman wrote:
> > On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak
> >  wrote:
> >>
> >> Jason,
> >>
> >> Here you go:
> >>
> >> WHO     MASK LEVEL    OPTION       VALUE                         RO
> >> client       advanced admin_socket /var/run/ceph/$name.$pid.asok *
> >
> > This is the offending config option that is causing your warnings.
> > Since the mon configs are read after the admin socket has been
> > initialized, it is ignored (w/ the warning saying setting this
> > property has no effect).
> >
> >> global  advanced cluster_network 10.0.42.0/23  
> >> *
> >> global  advanced debug_asok  0/0
> >> global  advanced debug_auth  0/0
> >> global  advanced debug_bdev  0/0
> >> global  advanced debug_bluefs0/0
> >> global  advanced debug_bluestore 0/0
> >> global  advanced debug_buffer0/0
> >> global  advanced debug_civetweb  0/0
> >> global  advanced debug_client0/0
> >> global  advanced debug_compressor0/0
> >> global  advanced debug_context   0/0
> >> global  advanced debug_crush 0/0
> >> global  advanced debug_crypto0/0
> >> global  advanced debug_dpdk  0/0
> >> global  advanced debug_eventtrace0/0
> >> global  advanced debug_filer 0/0
> >> global  advanced debug_filestore 0/0
> >> global  advanced debug_finisher  0/0
> >> global  advanced debug_fuse  0/0
> >> global  advanced debug_heartbeatmap  0/0
> >> global  advanced debug_javaclient0/0
> >> global  advanced debug_journal   0/0
> >> global  advanced debug_journaler 0/0
> >> global  advanced debug_kinetic   0/0
> >> global  advanced debug_kstore0/0
> >> global  advanced debug_leveldb   0/0
> >> global  advanced debug_lockdep   0/0
> >> global  advanced debug_mds   0/0
> >> global  advanced debug_mds_balancer  0/0
> >> global  advanced debug_mds_locker0/0
> >> global  advanced debug_mds_log   0/0
> >> global  advanced debug_mds_log_expire0/0
> >> global  advanced debug_mds_migrator  0/0
> >> global  advanced debug_memdb 0/0
> >> global  advanced debug_mgr   0/0
> >> global  advanced debug_mgrc  0/0
> >> global  advanced debug_mon   0/0
> >> global  advanced debug_monc  0/00
> >> global  advanced debug_ms0/0
> >> global  advanced debug_none  0/0
> >> global  advanced debug_objclass  0/0
> >> global  advanced debug_objectcacher  0/0
> >> global  advanced debug_objecter  0/0
> >> global  advanced debug_optracker 0/0
> >> global  advanced debug_osd   0/0
> >> global  advanced debug_paxos 0/0
> >> global  advanced debug_perfcounter   0/0
> >> global  advanced debug_rados 0/0
> >> global  advanced debug_rbd   0/0
> >> global  advanced debug_rbd_mirror0/0
> >> global  advanced debug_rbd_replay0/0
> >> global  advanced debug_refs  0/0
> >> global  basiclog_file/dev/null 
> >> *
> >> global  advanced mon_cluster_log_file/dev/null 
> >> *
> >> global  advanced osd_pool_default_crush_rule -1
> >> global  advanced osd_scrub_begin_hour19
> >> global  advanced osd_scrub_end_hour  4
> >> global  advanced osd_scrub_load_threshold0.01
> >> global  advanced osd_scrub_sleep 0.10
> >> global  advanced perftrue
> >> global  advanced public_network  10.0.40.0/23  
> >> *
> >> global  advanced rocksdb_perftrue
> >>
> >> On 6/24/2019 11:50 AM, Jason Dillaman wrote:
> >>> On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak
> >>>  wrote:
> 
>  Hello everyone,
> 
>  I encounter this in nautilus client and not with mimic.  Removing admin 
>  socket entry from config 

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Sasha Litvak
Tarek,

Of course you are correct about the client nodes.  I executed this command
inside of the container that runs the mon.  Or it can be done on the bare metal
node that runs the mon.  You are essentially querying the mon configuration database.


On Tue, Jun 25, 2019 at 8:53 AM Tarek Zegar  wrote:

> "config get" on a client.admin? There is no daemon for client.admin, I get
> nothing. Can you please explain?
>
>
> Tarek Zegar
> Senior SDS Engineer
> Email *tze...@us.ibm.com* 
> Mobile *630.974.7172*
>
>
>
>
>
> From: Sasha Litvak 
> To: Tarek Zegar 
> Date: 06/24/2019 07:48 PM
> Subject: [EXTERNAL] Re: Re: [ceph-users] Client admin socket for RBD
> --
>
>
>
> ceph config get client.admin
>
> On Mon, Jun 24, 2019, 1:10 PM Tarek Zegar <*tze...@us.ibm.com*
> > wrote:
>
>Alex,
>
>Sorry real quick, what did you type to get that last bit of info?
>
>Tarek Zegar
>Senior SDS Engineer
> *Email **tze...@us.ibm.com* 
>Mobile *630.974.7172*
>
>
>
>
>Alex Litvak ---06/24/2019 01:07:28 PM---Jason, Here you go:
>
>From: Alex Litvak <*alexander.v.lit...@gmail.com*
>>
>To: *ceph-users@lists.ceph.com* 
>Cc: ceph-users <
>*public-ceph-users-idqoxfivofjgjs9i8mt...@plane.gmane.org*
>>
>Date: 06/24/2019 01:07 PM
>Subject: [EXTERNAL] Re: [ceph-users] Client admin socket for RBD
>Sent by: "ceph-users" <*ceph-users-boun...@lists.ceph.com*
>>
>--
>
>
>
>Jason,
>
>Here you go:
>
>WHO     MASK LEVEL    OPTION          VALUE                         RO
>client       advanced admin_socket    /var/run/ceph/$name.$pid.asok *
>global       advanced cluster_network 10.0.42.0/23                  *
>global  advanced debug_asok  0/0
>global  advanced debug_auth  0/0
>global  advanced debug_bdev  0/0
>global  advanced debug_bluefs0/0
>global  advanced debug_bluestore 0/0
>global  advanced debug_buffer0/0
>global  advanced debug_civetweb  0/0
>global  advanced debug_client0/0
>global  advanced debug_compressor0/0
>global  advanced debug_context   0/0
>global  advanced debug_crush 0/0
>global  advanced debug_crypto0/0
>global  advanced debug_dpdk  0/0
>global  advanced debug_eventtrace0/0
>global  advanced debug_filer 0/0
>global  advanced debug_filestore 0/0
>global  advanced debug_finisher  0/0
>global  advanced debug_fuse  0/0
>global  advanced debug_heartbeatmap  0/0
>global  advanced debug_javaclient0/0
>global  advanced debug_journal   0/0
>global  advanced debug_journaler 0/0
>global  advanced debug_kinetic   0/0
>global  advanced debug_kstore0/0
>global  advanced debug_leveldb   0/0
>global  advanced debug_lockdep   0/0
>global  advanced debug_mds   0/0
>global  advanced debug_mds_balancer  0/0
>global  advanced debug_mds_locker0/0
>global  advanced debug_mds_log   0/0
>global  advanced debug_mds_log_expire0/0
>global  advanced debug_mds_migrator  0/0
>global  advanced debug_memdb 0/0
>global  advanced debug_mgr   0/0
>global  advanced debug_mgrc  0/0
>global  advanced debug_mon   0/0
>global  advanced debug_monc  0/00
>global  advanced debug_ms0/0
>global  advanced debug_none  0/0
>global  advanced debug_objclass  0/0
>global  advanced debug_objectcacher  0/0
>global  advanced debug_objecter  0/0
>global  advanced debug_optracker 0/0
>global  advanced debug_osd   0/0
>global  advanced de

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Alex Litvak

Jason,

What are you suggesting we do? Removing this line from the config database 
and keeping it in the config files instead?

On 6/24/2019 1:12 PM, Jason Dillaman wrote:

On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak
 wrote:


Jason,

Here you go:

WHO     MASK LEVEL    OPTION       VALUE                         RO
client       advanced admin_socket /var/run/ceph/$name.$pid.asok *


This is the offending config option that is causing your warnings.
Since the mon configs are read after the admin socket has been
initialized, it is ignored (w/ the warning saying setting this
property has no effect).


global  advanced cluster_network 10.0.42.0/23  *
global  advanced debug_asok  0/0
global  advanced debug_auth  0/0
global  advanced debug_bdev  0/0
global  advanced debug_bluefs0/0
global  advanced debug_bluestore 0/0
global  advanced debug_buffer0/0
global  advanced debug_civetweb  0/0
global  advanced debug_client0/0
global  advanced debug_compressor0/0
global  advanced debug_context   0/0
global  advanced debug_crush 0/0
global  advanced debug_crypto0/0
global  advanced debug_dpdk  0/0
global  advanced debug_eventtrace0/0
global  advanced debug_filer 0/0
global  advanced debug_filestore 0/0
global  advanced debug_finisher  0/0
global  advanced debug_fuse  0/0
global  advanced debug_heartbeatmap  0/0
global  advanced debug_javaclient0/0
global  advanced debug_journal   0/0
global  advanced debug_journaler 0/0
global  advanced debug_kinetic   0/0
global  advanced debug_kstore0/0
global  advanced debug_leveldb   0/0
global  advanced debug_lockdep   0/0
global  advanced debug_mds   0/0
global  advanced debug_mds_balancer  0/0
global  advanced debug_mds_locker0/0
global  advanced debug_mds_log   0/0
global  advanced debug_mds_log_expire0/0
global  advanced debug_mds_migrator  0/0
global  advanced debug_memdb 0/0
global  advanced debug_mgr   0/0
global  advanced debug_mgrc  0/0
global  advanced debug_mon   0/0
global  advanced debug_monc  0/00
global  advanced debug_ms0/0
global  advanced debug_none  0/0
global  advanced debug_objclass  0/0
global  advanced debug_objectcacher  0/0
global  advanced debug_objecter  0/0
global  advanced debug_optracker 0/0
global  advanced debug_osd   0/0
global  advanced debug_paxos 0/0
global  advanced debug_perfcounter   0/0
global  advanced debug_rados 0/0
global  advanced debug_rbd   0/0
global  advanced debug_rbd_mirror0/0
global  advanced debug_rbd_replay0/0
global  advanced debug_refs  0/0
global  basiclog_file/dev/null *
global  advanced mon_cluster_log_file/dev/null *
global  advanced osd_pool_default_crush_rule -1
global  advanced osd_scrub_begin_hour19
global  advanced osd_scrub_end_hour  4
global  advanced osd_scrub_load_threshold0.01
global  advanced osd_scrub_sleep 0.10
global  advanced perftrue
global  advanced public_network  10.0.40.0/23  *
global  advanced rocksdb_perftrue

On 6/24/2019 11:50 AM, Jason Dillaman wrote:

On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak
 wrote:


Hello everyone,

I encounter this in nautilus client and not with mimic.  Removing admin socket 
entry from config on client makes no difference

Error:

rbd ls -p one
2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime
2019-06-23 12:58:29.348 7ff2708af700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime

I have no issues running other ceph clients (no messages on the screen with 
ceph -s or ceph iostat from the same box.)
I connected to a few other client nodes and as root I can do the same string
rbd ls -p one


On all the nodes with user libvirt I have seen the admin_socket messages

oneadmin@virt3n1-la:~$  rbd ls -p one 

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Jason Dillaman
On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak
 wrote:
>
> Jason,
>
> Here you go:
>
> WHO     MASK LEVEL    OPTION       VALUE                         RO
> client       advanced admin_socket /var/run/ceph/$name.$pid.asok *

This is the offending config option that is causing your warnings.
Since the mon configs are read after the admin socket has been
initialized, it is ignored (w/ the warning saying setting this
property has no effect).
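
If you do decide to drop it from the MON config store and rely on the local conf
instead, a sketch of the removal (double-check the entity name against your own
"ceph config dump" output first):

$ ceph config rm client admin_socket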

> global  advanced cluster_network 10.0.42.0/23 
>  *
> global  advanced debug_asok  0/0
> global  advanced debug_auth  0/0
> global  advanced debug_bdev  0/0
> global  advanced debug_bluefs0/0
> global  advanced debug_bluestore 0/0
> global  advanced debug_buffer0/0
> global  advanced debug_civetweb  0/0
> global  advanced debug_client0/0
> global  advanced debug_compressor0/0
> global  advanced debug_context   0/0
> global  advanced debug_crush 0/0
> global  advanced debug_crypto0/0
> global  advanced debug_dpdk  0/0
> global  advanced debug_eventtrace0/0
> global  advanced debug_filer 0/0
> global  advanced debug_filestore 0/0
> global  advanced debug_finisher  0/0
> global  advanced debug_fuse  0/0
> global  advanced debug_heartbeatmap  0/0
> global  advanced debug_javaclient0/0
> global  advanced debug_journal   0/0
> global  advanced debug_journaler 0/0
> global  advanced debug_kinetic   0/0
> global  advanced debug_kstore0/0
> global  advanced debug_leveldb   0/0
> global  advanced debug_lockdep   0/0
> global  advanced debug_mds   0/0
> global  advanced debug_mds_balancer  0/0
> global  advanced debug_mds_locker0/0
> global  advanced debug_mds_log   0/0
> global  advanced debug_mds_log_expire0/0
> global  advanced debug_mds_migrator  0/0
> global  advanced debug_memdb 0/0
> global  advanced debug_mgr   0/0
> global  advanced debug_mgrc  0/0
> global  advanced debug_mon   0/0
> global  advanced debug_monc  0/00
> global  advanced debug_ms0/0
> global  advanced debug_none  0/0
> global  advanced debug_objclass  0/0
> global  advanced debug_objectcacher  0/0
> global  advanced debug_objecter  0/0
> global  advanced debug_optracker 0/0
> global  advanced debug_osd   0/0
> global  advanced debug_paxos 0/0
> global  advanced debug_perfcounter   0/0
> global  advanced debug_rados 0/0
> global  advanced debug_rbd   0/0
> global  advanced debug_rbd_mirror0/0
> global  advanced debug_rbd_replay0/0
> global  advanced debug_refs  0/0
> global  basiclog_file/dev/null
>  *
> global  advanced mon_cluster_log_file/dev/null
>  *
> global  advanced osd_pool_default_crush_rule -1
> global  advanced osd_scrub_begin_hour19
> global  advanced osd_scrub_end_hour  4
> global  advanced osd_scrub_load_threshold0.01
> global  advanced osd_scrub_sleep 0.10
> global  advanced perftrue
> global  advanced public_network  10.0.40.0/23 
>  *
> global  advanced rocksdb_perftrue
>
> On 6/24/2019 11:50 AM, Jason Dillaman wrote:
> > On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak
> >  wrote:
> >>
> >> Hello everyone,
> >>
> >> I encounter this in nautilus client and not with mimic.  Removing admin 
> >> socket entry from config on client makes no difference
> >>
> >> Error:
> >>
> >> rbd ls -p one
> >> 2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to set 
> >> admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 
> >> 'admin_socket' may not be modified at runtime
> >> 2019-06-23 12:58:29.348 7ff2708af700 -1 set_mon_vals failed to set 
> >> admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 
> >> 'admin_socket' may not be modified at runtime
> >>
> >> I have no issues running other ceph clients (no messages on the screen 
> >> with ceph -s or ceph iostat from the same box.)
> >> I connected to a few other client nodes and as root I can do the same 
> >> string
> >> rbd ls -p one
> >>
> >>
> >> On all the 

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Alex Litvak

Jason,

Here you go:

WHO    MASK LEVEL    OPTION                      VALUE                         RO
client      advanced admin_socket                /var/run/ceph/$name.$pid.asok *
global      advanced cluster_network             10.0.42.0/23                  *
global      advanced debug_asok                  0/0
global      advanced debug_auth                  0/0
global      advanced debug_bdev                  0/0
global      advanced debug_bluefs                0/0
global      advanced debug_bluestore             0/0
global      advanced debug_buffer                0/0
global      advanced debug_civetweb              0/0
global      advanced debug_client                0/0
global      advanced debug_compressor            0/0
global      advanced debug_context               0/0
global      advanced debug_crush                 0/0
global      advanced debug_crypto                0/0
global      advanced debug_dpdk                  0/0
global      advanced debug_eventtrace            0/0
global      advanced debug_filer                 0/0
global      advanced debug_filestore             0/0
global      advanced debug_finisher              0/0
global      advanced debug_fuse                  0/0
global      advanced debug_heartbeatmap          0/0
global      advanced debug_javaclient            0/0
global      advanced debug_journal               0/0
global      advanced debug_journaler             0/0
global      advanced debug_kinetic               0/0
global      advanced debug_kstore                0/0
global      advanced debug_leveldb               0/0
global      advanced debug_lockdep               0/0
global      advanced debug_mds                   0/0
global      advanced debug_mds_balancer          0/0
global      advanced debug_mds_locker            0/0
global      advanced debug_mds_log               0/0
global      advanced debug_mds_log_expire        0/0
global      advanced debug_mds_migrator          0/0
global      advanced debug_memdb                 0/0
global      advanced debug_mgr                   0/0
global      advanced debug_mgrc                  0/0
global      advanced debug_mon                   0/0
global      advanced debug_monc                  0/00
global      advanced debug_ms                    0/0
global      advanced debug_none                  0/0
global      advanced debug_objclass              0/0
global      advanced debug_objectcacher          0/0
global      advanced debug_objecter              0/0
global      advanced debug_optracker             0/0
global      advanced debug_osd                   0/0
global      advanced debug_paxos                 0/0
global      advanced debug_perfcounter           0/0
global      advanced debug_rados                 0/0
global      advanced debug_rbd                   0/0
global      advanced debug_rbd_mirror            0/0
global      advanced debug_rbd_replay            0/0
global      advanced debug_refs                  0/0
global      basic    log_file                    /dev/null                     *
global      advanced mon_cluster_log_file        /dev/null                     *
global      advanced osd_pool_default_crush_rule -1
global      advanced osd_scrub_begin_hour        19
global      advanced osd_scrub_end_hour          4
global      advanced osd_scrub_load_threshold    0.01
global      advanced osd_scrub_sleep             0.10
global      advanced perf                        true
global      advanced public_network              10.0.40.0/23                  *
global      advanced rocksdb_perf                true

On 6/24/2019 11:50 AM, Jason Dillaman wrote:

On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak
 wrote:


Hello everyone,

I encounter this in nautilus client and not with mimic.  Removing admin socket 
entry from config on client makes no difference

Error:

rbd ls -p one
2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime
2019-06-23 12:58:29.348 7ff2708af700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime

I have no issues running other ceph clients (no messages on the screen with 
ceph -s or ceph iostat from the same box.)
I connected to a few other client nodes and as root I can do the same string
rbd ls -p one


On all the nodes with user libvirt I have seen the admin_socket messages

oneadmin@virt3n1-la:~$  rbd ls -p one --id libvirt
2019-06-23 13:16:41.626 7f9ea0ff9700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime
2019-06-23 13:16:41.626 7f9e8bfff700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime

I can execute all rbd operations on the cluster from client 

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Jason Dillaman
On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak
 wrote:
>
> Hello everyone,
>
> I encounter this in nautilus client and not with mimic.  Removing admin 
> socket entry from config on client makes no difference
>
> Error:
>
> rbd ls -p one
> 2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to set 
> admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 
> 'admin_socket' may not be modified at runtime
> 2019-06-23 12:58:29.348 7ff2708af700 -1 set_mon_vals failed to set 
> admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 
> 'admin_socket' may not be modified at runtime
>
> I have no issues running other ceph clients (no messages on the screen with 
> ceph -s or ceph iostat from the same box.)
> I connected to a few other client nodes and as root I can do the same string
> rbd ls -p one
>
>
> On all the nodes with user libvirt I have seen the admin_socket messages
>
> oneadmin@virt3n1-la:~$  rbd ls -p one --id libvirt
> 2019-06-23 13:16:41.626 7f9ea0ff9700 -1 set_mon_vals failed to set 
> admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 
> 'admin_socket' may not be modified at runtime
> 2019-06-23 13:16:41.626 7f9e8bfff700 -1 set_mon_vals failed to set 
> admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 
> 'admin_socket' may not be modified at runtime
>
> I can execute all rbd operations on the cluster from client otherwise.  
> Commenting client in config file makes no difference
>
> This is an optimised config distributed across the clients; it is almost the
> same as on the servers (no libvirt on the servers)
>
> [client]
> admin_socket = /var/run/ceph/$name.$pid.asok
>
> [client.libvirt]
> admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be 
> writable by QEMU and allowed by SELinux or AppArmor
> log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and 
> allowed by SELinux or AppArmor
>
> # Please do not change this file directly since it is managed by Ansible and 
> will be overwritten
> [global]
> cluster network = 10.0.42.0/23
> fsid = 3947ba2d-1b01-4909-8e3a-f9714f427483
> log file = /dev/null
> mon cluster log file = /dev/null
> mon host = 
> [v2:10.0.40.121:3300,v1:10.0.40.121:6789],[v2:10.0.40.122:3300,v1:10.0.40.122:6789],[v2:10.0.40.123:3300,v1:10.0.40.123:6789]
> perf = True
> public network = 10.0.40.0/23
> rocksdb_perf = True
>
>
> Here is config from mon
>
> NAME              VALUE         SOURCE    OVERRIDES  IGNORES
> cluster_network   10.0.42.0/23  file                 (mon[10.0.42.0/23])
> daemonize         false         override
> debug_asok        0/0           mon
> debug_auth        0/0           mon
> debug_bdev        0/0           mon
> debug_bluefs      0/0           mon
> debug_bluestore   0/0           mon
> debug_buffer      0/0           mon
> debug_civetweb    0/0           mon
> debug_client      0/0           mon
> debug_compressor  0/0           mon
> debug_context     0/0           mon
> debug_crush       0/0           mon
> debug_crypto      0/0           mon
> debug_dpdk   

[ceph-users] Client admin socket for RBD

2019-06-23 Thread Alex Litvak

Hello everyone,

I encounter this with the nautilus client and not with mimic.  Removing the admin socket 
entry from the config on the client makes no difference.

Error:

rbd ls -p one
2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime
2019-06-23 12:58:29.348 7ff2708af700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime

I have no issues running other ceph clients (no messages on the screen with 
ceph -s or ceph iostat from the same box.)
I connected to a few other client nodes, and as root I can run the same command:
rbd ls -p one


On all the nodes with user libvirt I have seen the admin_socket messages

oneadmin@virt3n1-la:~$  rbd ls -p one --id libvirt
2019-06-23 13:16:41.626 7f9ea0ff9700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime
2019-06-23 13:16:41.626 7f9e8bfff700 -1 set_mon_vals failed to set admin_socket 
= /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be 
modified at runtime

I can execute all rbd operations on the cluster from the client otherwise.  
Commenting out the client section in the config file makes no difference.

This is an optimised config distributed across the clients; it is almost the 
same as on the servers (no libvirt on the servers).

[client]
admin_socket = /var/run/ceph/$name.$pid.asok

[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be 
writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and 
allowed by SELinux or AppArmor

# Please do not change this file directly since it is managed by Ansible and 
will be overwritten
[global]
cluster network = 10.0.42.0/23
fsid = 3947ba2d-1b01-4909-8e3a-f9714f427483
log file = /dev/null
mon cluster log file = /dev/null
mon host = 
[v2:10.0.40.121:3300,v1:10.0.40.121:6789],[v2:10.0.40.122:3300,v1:10.0.40.122:6789],[v2:10.0.40.123:3300,v1:10.0.40.123:6789]
perf = True
public network = 10.0.40.0/23
rocksdb_perf = True
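
As an aside, once a client process has actually created its socket from the [client]
setting above, it can be queried directly over that socket; a sketch only, the PID in
the path is just a placeholder:

ceph daemon /var/run/ceph/client.admin.12345.asok config get admin_socket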


Here is config from mon

NAME              VALUE         SOURCE    OVERRIDES  IGNORES
cluster_network   10.0.42.0/23  file                 (mon[10.0.42.0/23])
daemonize         false         override
debug_asok        0/0           mon
debug_auth        0/0           mon
debug_bdev        0/0           mon
debug_bluefs      0/0           mon
debug_bluestore   0/0           mon
debug_buffer      0/0           mon
debug_civetweb    0/0           mon
debug_client      0/0           mon
debug_compressor  0/0           mon
debug_context     0/0           mon
debug_crush       0/0           mon
debug_crypto      0/0           mon
debug_dpdk        0/0           mon
debug_eventtrace  0/0