Thank you for the explanation, Jason, and thank you for opening a ticket for my
issue.
On 6/25/2019 1:56 PM, Jason Dillaman wrote:
On Tue, Jun 25, 2019 at 2:40 PM Tarek Zegar <[email protected]> wrote:
Sasha,
Sorry, I don't get it. The documentation for the command states that to see the
config DB for everything, run: "ceph config dump"
To see what's in the config DB for a particular daemon, run: "ceph config get <daemon>"
To see what's set for a particular daemon (be it from the config DB, an override, the
conf file, etc.), run: "ceph config show <daemon>"
I don't see anywhere that the command you mentioned is valid: "ceph config get
client.admin"
Here is output from a monitor node on bare metal
root@hostmonitor1:~# ceph config dump
WHO MASK LEVEL OPTION VALUE RO
mon.hostmonitor1 advanced mon_osd_down_out_interval 30
mon.hostmonitor1 advanced mon_osd_min_in_ratio 0.100000
mgr unknown mgr/balancer/active 1 *
mgr unknown mgr/balancer/mode upmap *
osd.* advanced debug_ms 20/20
osd.* advanced osd_max_backfills 2
root@hostmonitor1:~# ceph config get mon.hostmonitor1
WHO MASK LEVEL OPTION VALUE RO
mon.hostmonitor1 advanced mon_osd_down_out_interval 30
mon.hostmonitor1 advanced mon_osd_min_in_ratio 0.100000
root@hostmonitor1:~# ceph config get client.admin
WHO MASK LEVEL OPTION VALUE RO <-----blank
What am I missing from what you're suggesting?
This just means that you don't have any MON config store overrides that apply to the
"client.admin" Ceph user:
$ ceph config get client.admin
WHO MASK LEVEL OPTION VALUE RO
$ ceph config set client debug_rbd 20
$ ceph config set client.admin debug_rbd_mirror 20
$ ceph config get client.admin
WHO MASK LEVEL OPTION VALUE RO
client advanced debug_rbd 20/20
client.admin advanced debug_rbd_mirror 20/20
The first parameter to "ceph config get" is a "who" (i.e. a Ceph entity like a
daemon or client).
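To illustrate why "ceph config get client.admin" starts returning rows once type-level and entity-level options are set, here is a rough Python sketch of the matching behavior (an illustration only, not Ceph's actual code): an entity name like "client.admin" picks up options set for its type ("client") and for "global", with the most specific match winning.

```python
# Illustrative sketch only -- NOT Ceph's actual implementation.
# The mon config store applies options from least to most specific
# entity: "global", then the type ("client"), then the full entity
# name ("client.admin"). The most specific setting wins.

def effective_config(store, entity):
    """Merge options that apply to `entity`, most specific last."""
    sections = ["global"]
    if "." in entity:
        sections.append(entity.split(".", 1)[0])  # e.g. "client"
    sections.append(entity)                       # e.g. "client.admin"
    merged = {}
    for section in sections:
        merged.update(store.get(section, {}))
    return merged

# Mirroring the example above: debug_rbd set for all clients,
# debug_rbd_mirror only for client.admin.
store = {
    "client": {"debug_rbd": "20/20"},
    "client.admin": {"debug_rbd_mirror": "20/20"},
}
print(effective_config(store, "client.admin"))
# {'debug_rbd': '20/20', 'debug_rbd_mirror': '20/20'}
print(effective_config(store, "client.libvirt"))
# {'debug_rbd': '20/20'}
```

This is why the earlier "ceph config get client.admin" came back empty: no override in the store applied to "client" or "client.admin" at that point.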
Thank you for clarifying,
Tarek Zegar
Senior SDS Engineer
Email: [email protected]
Mobile: 630.974.7172
From: Sasha Litvak <[email protected]>
To: Tarek Zegar <[email protected]>, [email protected]
Date: 06/25/2019 10:38 AM
Subject: [EXTERNAL] Re: Re: Re: [ceph-users] Client admin socket for RBD
------------------------------------------------------------------------
Tarek,
Of course you are correct about the client nodes. I executed this command
inside the container that runs the mon. Or it can be done on the bare metal node
that runs the mon. You are essentially querying the mon configuration database.
On Tue, Jun 25, 2019 at 8:53 AM Tarek Zegar <[email protected]> wrote:
"config get" on a client.admin? There is no daemon for client.admin, I
get nothing. Can you please explain?
Tarek Zegar
Senior SDS Engineer
From: Sasha Litvak <[email protected]>
To: Tarek Zegar <[email protected]>
Date: 06/24/2019 07:48 PM
Subject: [EXTERNAL] Re: Re: [ceph-users] Client admin socket for RBD
------------------------------------------------------------------------
ceph config get client.admin
On Mon, Jun 24, 2019, 1:10 PM Tarek Zegar <[email protected]> wrote:
Alex,
Sorry, real quick: what did you type to get that last bit of info?
Tarek Zegar
Senior SDS Engineer
From: Alex Litvak <[email protected]>
To: [email protected]
Cc: ceph-users <[email protected]>
Date: 06/24/2019 01:07 PM
Subject: [EXTERNAL] Re: [ceph-users] Client admin socket for RBD
Sent by: "ceph-users" <[email protected]>
------------------------------------------------------------------------
Jason,
Here you go:
WHO MASK LEVEL OPTION VALUE RO
client advanced admin_socket /var/run/ceph/$name.$pid.asok *
global advanced cluster_network 10.0.42.0/23 *
global advanced debug_asok 0/0
global advanced debug_auth 0/0
global advanced debug_bdev 0/0
global advanced debug_bluefs 0/0
global advanced debug_bluestore 0/0
global advanced debug_buffer 0/0
global advanced debug_civetweb 0/0
global advanced debug_client 0/0
global advanced debug_compressor 0/0
global advanced debug_context 0/0
global advanced debug_crush 0/0
global advanced debug_crypto 0/0
global advanced debug_dpdk 0/0
global advanced debug_eventtrace 0/0
global advanced debug_filer 0/0
global advanced debug_filestore 0/0
global advanced debug_finisher 0/0
global advanced debug_fuse 0/0
global advanced debug_heartbeatmap 0/0
global advanced debug_javaclient 0/0
global advanced debug_journal 0/0
global advanced debug_journaler 0/0
global advanced debug_kinetic 0/0
global advanced debug_kstore 0/0
global advanced debug_leveldb 0/0
global advanced debug_lockdep 0/0
global advanced debug_mds 0/0
global advanced debug_mds_balancer 0/0
global advanced debug_mds_locker 0/0
global advanced debug_mds_log 0/0
global advanced debug_mds_log_expire 0/0
global advanced debug_mds_migrator 0/0
global advanced debug_memdb 0/0
global advanced debug_mgr 0/0
global advanced debug_mgrc 0/0
global advanced debug_mon 0/0
global advanced debug_monc 0/00
global advanced debug_ms 0/0
global advanced debug_none 0/0
global advanced debug_objclass 0/0
global advanced debug_objectcacher 0/0
global advanced debug_objecter 0/0
global advanced debug_optracker 0/0
global advanced debug_osd 0/0
global advanced debug_paxos 0/0
global advanced debug_perfcounter 0/0
global advanced debug_rados 0/0
global advanced debug_rbd 0/0
global advanced debug_rbd_mirror 0/0
global advanced debug_rbd_replay 0/0
global advanced debug_refs 0/0
global basic log_file /dev/null *
global advanced mon_cluster_log_file /dev/null *
global advanced osd_pool_default_crush_rule -1
global advanced osd_scrub_begin_hour 19
global advanced osd_scrub_end_hour 4
global advanced osd_scrub_load_threshold 0.010000
global advanced osd_scrub_sleep 0.100000
global advanced perf true
global advanced public_network 10.0.40.0/23 *
global advanced rocksdb_perf true
On 6/24/2019 11:50 AM, Jason Dillaman wrote:
> On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak
> <[email protected]> wrote:
>>
>> Hello everyone,
>>
>> I encounter this with the Nautilus client but not with Mimic. Removing the admin socket entry from the client config makes no difference.
>>
>> Error:
>>
>> rbd ls -p one
>> 2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to set admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be modified at runtime
>> 2019-06-23 12:58:29.348 7ff2708af700 -1 set_mon_vals failed to set admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be modified at runtime
>>
>> I have no issues running other Ceph clients (no messages on the screen with ceph -s or ceph iostat from the same box). I connected to a few other client nodes, and as root I can run the same command:
>> rbd ls -p one
>>
>>
>> On all the nodes, running as the libvirt user, I have seen the admin_socket messages:
>>
>> oneadmin@virt3n1-la:~$ rbd ls -p one --id libvirt
>> 2019-06-23 13:16:41.626 7f9ea0ff9700 -1 set_mon_vals failed to set admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be modified at runtime
>> 2019-06-23 13:16:41.626 7f9e8bfff700 -1 set_mon_vals failed to set admin_socket = /var/run/ceph/$name.$pid.asok: Configuration option 'admin_socket' may not be modified at runtime
>>
>> I can otherwise execute all rbd operations on the cluster from the client. Commenting out the client section in the config file makes no difference.
>>
>> This is an optimized config distributed across the clients; it is almost the same as on the servers (no libvirt section on the servers):
>>
>> [client]
>> admin_socket = /var/run/ceph/$name.$pid.asok
>>
>> [client.libvirt]
>> admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
>> log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor
>>
>> # Please do not change this file directly since it is managed by Ansible and will be overwritten
>> [global]
>> cluster network = 10.0.42.0/23
>> fsid = 3947ba2d-1b01-4909-8e3a-f9714f427483
>> log file = /dev/null
>> mon cluster log file = /dev/null
>> mon host = [v2:10.0.40.121:3300,v1:10.0.40.121:6789],[v2:10.0.40.122:3300,v1:10.0.40.122:6789],[v2:10.0.40.123:3300,v1:10.0.40.123:6789]
>> perf = True
>> public network = 10.0.40.0/23
>> rocksdb_perf = True
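As an aside on the $name.$pid.asok path above: the metavariables are expanded per process, which is why every client instance gets a distinct socket file. A rough sketch of the substitution (illustrative only, not Ceph's implementation; $cctid, a per-context id, is omitted here):

```python
# Illustrative expansion of Ceph config metavariables -- not Ceph's
# actual code. $cluster, $type, $id, $name ($type.$id), and $pid are
# substituted per connecting process, so each instance gets its own
# admin socket path.

def expand_meta(template, cluster, entity_type, entity_id, pid):
    values = {
        "$cluster": cluster,
        "$type": entity_type,
        "$id": entity_id,
        "$name": f"{entity_type}.{entity_id}",
        "$pid": str(pid),
    }
    out = template
    # Replace longer variable names first so "$id" cannot clobber
    # part of "$cluster" or similar.
    for key in sorted(values, key=len, reverse=True):
        out = out.replace(key, values[key])
    return out

print(expand_meta("/var/run/ceph/$name.$pid.asok", "ceph", "client", "libvirt", 4242))
# /var/run/ceph/client.libvirt.4242.asok
```

Each rbd invocation has its own pid, so the [client] setting above produces a fresh, non-conflicting socket per process.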
>>
>>
>> Here is the config from the mon:
>>
>> NAME VALUE SOURCE OVERRIDES IGNORES
>> cluster_network 10.0.42.0/23 file (mon[10.0.42.0/23])
>> daemonize false override
>> debug_asok 0/0 mon
>> debug_auth 0/0 mon
>> debug_bdev 0/0 mon
>> debug_bluefs 0/0 mon
>> debug_bluestore 0/0 mon
>> debug_buffer 0/0 mon
>> debug_civetweb 0/0 mon
>> debug_client 0/0 mon
>> debug_compressor 0/0 mon
>> debug_context 0/0 mon
>> debug_crush 0/0 mon
>> debug_crypto 0/0 mon
>> debug_dpdk 0/0 mon
>> debug_eventtrace 0/0 mon
>> debug_filer 0/0 mon
>> debug_filestore 0/0 mon
>> debug_finisher 0/0 mon
>> debug_fuse 0/0 mon
>> debug_heartbeatmap 0/0 mon
>> debug_javaclient 0/0 mon
>> debug_journal 0/0 mon
>> debug_journaler 0/0 mon
>> debug_kinetic 0/0 mon
>> debug_kstore 0/0 mon
>> debug_leveldb 0/0 mon
>> debug_lockdep 0/0 mon
>> debug_mds 0/0 mon
>> debug_mds_balancer 0/0 mon
>> debug_mds_locker 0/0 mon
>> debug_mds_log 0/0 mon
>> debug_mds_log_expire 0/0 mon
>> debug_mds_migrator 0/0 mon
>> debug_memdb 0/0 mon
>> debug_mgr 0/0 mon
>> debug_mgrc 0/0 mon
>> debug_mon 0/0 mon
>> debug_monc 0/00 mon
>> debug_ms 0/0 mon
>> debug_none 0/0 mon
>> debug_objclass 0/0 mon
>> debug_objectcacher 0/0 mon
>> debug_objecter 0/0 mon
>> debug_optracker 0/0 mon
>> debug_osd 0/0 mon
>> debug_paxos 0/0 mon
>> debug_perfcounter 0/0 mon
>> debug_rados 0/0 mon
>> debug_rbd 0/0 mon
>> debug_rbd_mirror 0/0 mon
>> debug_rbd_replay 0/0 mon
>> debug_refs 0/0 mon
>> err_to_stderr true override
>> keyring $mon_data/keyring default
>> leveldb_block_size 65536 default
>> leveldb_cache_size 536870912 default
>> leveldb_compression false default
>> leveldb_log default
>> leveldb_write_buffer_size 33554432 default
>> log_file override file[/dev/null],mon[/dev/null]
>> log_stderr_prefix debug cmdline
>> log_to_stderr true override
>> log_to_syslog false override
>> mon_allow_pool_delete true mon
>> mon_cluster_log_file /dev/null file (mon[/dev/null])
>> mon_cluster_log_to_stderr true cmdline
>> mon_data /var/lib/ceph/mon/ceph-storage2n2-la cmdline
>> mon_host [v2:10.0.40.121:3300,v1:10.0.40.121:6789],[v2:10.0.40.122:3300,v1:10.0.40.122:6789],[v2:10.0.40.123:3300,v1:10.0.40.123:6789] file
>> mon_initial_members storage2n1-la,storage2n2-la,storage2n3-la file
>> mon_osd_down_out_interval 300 mon
>> osd_pool_default_crush_rule -1 file (mon[-1])
>> osd_scrub_begin_hour 19 mon
>> osd_scrub_end_hour 4 mon
>> osd_scrub_load_threshold 0.010000 mon
>> osd_scrub_sleep 0.100000 mon
>> perf true file (mon[true])
>> public_addr v2:10.0.40.122:0/0 cmdline
>> public_network 10.0.40.0/23 file (mon[10.0.40.0/23])
>> rbd_default_features 61 default
>> rocksdb_perf true file (mon[true])
>> setgroup ceph cmdline
>> setuser ceph cmdline
>
> What's the mon config for the "client.admin" user? "ceph config get client.admin"
>
>>
>> I am not sure why I am getting these messages and why they are inconsistent across the nodes. For example, I am not getting them when I execute rbd in containers running Ceph daemons on the server cluster nodes. Any clue would be appreciated.
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Jason