[ceph-users] Re: Logging control

2023-12-20 Thread Tim Holloway
Thanks, that is most useful to know!

The Ceph docs are very good except where they propagate obsolete
information. For example, they still describe using "ceph-deploy" on
Octopus (my copy didn't come with ceph-deploy - it used cephadm).

And, alas, nothing has been written to delineate the differences between
containerized resource management and "bare" resource management.

Hopefully that will get fixed someday. Ideally, it will be both fixed
and applied retroactively, where applicable.

   Tim

On Wed, 2023-12-20 at 12:32 +, Eugen Block wrote:
> Just to add a bit more information, the 'ceph daemon' command is
> still valid; it just has to be issued inside the containers:
> 
> quincy-1:~ # cephadm enter --name osd.0
> Inferring fsid 1e6e5cb6-73e8-11ee-b195-fa163ee43e22
> [ceph: root@quincy-1 /]# ceph daemon osd.0 config diff | head
> {
>  "diff": {
>  "bluestore_block_db_size": {
>  "default": "0",
>  "mon": "20",
>  "final": "20"
>  },
> 
> Or with the cephadm shell:
> 
> quincy-1:~ # cephadm shell --name osd.0 -- ceph daemon osd.0 config diff | head
> 
> But 'ceph tell' should work as well; I just wanted to show some more
> context and options.
> 
> As for the debug messages, there are a couple of things to tweak as  
> you may have noticed. For example, you could reduce the log level of 
> debug_rocksdb (default 4/5). If you want to reduce the mgr_tick_period
> (the repeating health messages every two seconds) you can do that like
> this:
> 
> quincy-1:~ # ceph config set mgr mgr_tick_period 10
> 
> But don't use too large a period; I had mentioned that in a recent
> thread. 10 seconds seems to work just fine for me, though.
> 
> 
> 
> Quoting Tim Holloway:
> 
> > OK. Found some loglevel overrides in the monitor and reset them.
> > 
> > Restarted the mgr and monitor just in case.
> > 
> > Still getting a lot of stuff that looks like this.
> > 
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:49.542670+
> > mgr.xyz1 (mgr.6889303) 177 : cluster [DBG] pgmap v160: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:51.542+
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v161: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:51.543403+
> > mgr.xyz1 (mgr.6889303) 178 : cluster [DBG] pgmap v161: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:52.748+
> > 7fa1c74a2700  0 [progress INFO root] Processing OSDMap change
> > 20239..20239
> > Dec 19 17:10:53 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:53.544+
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v162: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:10:54 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:53.545649+
> > mgr.xyz1 (mgr.6889303) 179 : cluster [DBG] pgmap v162: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:55.545+
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v163: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:55.834+
> > 7f36de29f700  1 mon.xyz1@1(peon).osd e20239 _set_new_cache_sizes
> > cache_size:1020054731 inc_a>
> > Dec 19 17:10:56 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:

[ceph-users] Re: Logging control

2023-12-20 Thread Eugen Block
Just to add a bit more information, the 'ceph daemon' command is still
valid; it just has to be issued inside the containers:


quincy-1:~ # cephadm enter --name osd.0
Inferring fsid 1e6e5cb6-73e8-11ee-b195-fa163ee43e22
[ceph: root@quincy-1 /]# ceph daemon osd.0 config diff | head
{
"diff": {
"bluestore_block_db_size": {
"default": "0",
"mon": "20",
"final": "20"
},

Or with the cephadm shell:

quincy-1:~ # cephadm shell --name osd.0 -- ceph daemon osd.0 config diff | head


But 'ceph tell' should work as well; I just wanted to show some more
context and options.


As for the debug messages, there are a couple of things to tweak as  
you may have noticed. For example, you could reduce the log level of  
debug_rocksdb (default 4/5). If you want to reduce the mgr_tick_period  
(the repeating health messages every two seconds) you can do that like  
this:


quincy-1:~ # ceph config set mgr mgr_tick_period 10

But don't use too large a period; I had mentioned that in a recent
thread. 10 seconds seems to work just fine for me, though.
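
For the rocksdb lines specifically, lowering debug_rocksdb works the same
way; the level below is just an example, pick whatever suits you:

quincy-1:~ # ceph config set mon debug_rocksdb 1/5
quincy-1:~ # ceph config set osd debug_rocksdb 1/5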




Quoting Tim Holloway:


OK. Found some loglevel overrides in the monitor and reset them.

Restarted the mgr and monitor just in case.

Still getting a lot of stuff that looks like this.

Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:49.542670+
mgr.xyz1 (mgr.6889303) 177 : cluster [DBG] pgmap v160: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:51.542+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v161: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:51.543403+
mgr.xyz1 (mgr.6889303) 178 : cluster [DBG] pgmap v161: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:52.748+
7fa1c74a2700  0 [progress INFO root] Processing OSDMap change
20239..20239
Dec 19 17:10:53 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:53.544+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v162: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:54 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:53.545649+
mgr.xyz1 (mgr.6889303) 179 : cluster [DBG] pgmap v162: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:55.545+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v163: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:55.834+
7f36de29f700  1 mon.xyz1@1(peon).osd e20239 _set_new_cache_sizes
cache_size:1020054731 inc_a>
Dec 19 17:10:56 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:55.546197+
mgr.xyz1 (mgr.6889303) 180 : cluster [DBG] pgmap v163: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:57 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:57.546+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v164: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:57 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:57.751+
7fa1c74a2700  0 [progress INFO root] Processing OSDMap change
20239..20239
Dec 19 17:10:58 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:57.547657+
mgr.xyz1 (mgr.6889303) 181 : cluster [DBG] pgmap v164: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:59 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug

[ceph-users] Re: Logging control

2023-12-19 Thread Tim Holloway
OK. Found some loglevel overrides in the monitor and reset them.

Restarted the mgr and monitor just in case.
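
For the record, the overrides were spotted and cleared roughly like this
(the exact option names in your cluster will differ):

ceph config dump | grep -i debug
ceph config rm mon debug_mon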

Still getting a lot of stuff that looks like this.

Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
[default] Manual compact>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:49.542670+
mgr.xyz1 (mgr.6889303) 177 : cluster [DBG] pgmap v160: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:51.542+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v161: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:51.543403+
mgr.xyz1 (mgr.6889303) 178 : cluster [DBG] pgmap v161: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:52.748+
7fa1c74a2700  0 [progress INFO root] Processing OSDMap change
20239..20239
Dec 19 17:10:53 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:53.544+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v162: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:54 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:53.545649+
mgr.xyz1 (mgr.6889303) 179 : cluster [DBG] pgmap v162: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:55.545+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v163: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:55.834+
7f36de29f700  1 mon.xyz1@1(peon).osd e20239 _set_new_cache_sizes
cache_size:1020054731 inc_a>
Dec 19 17:10:56 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:55.546197+
mgr.xyz1 (mgr.6889303) 180 : cluster [DBG] pgmap v163: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:57 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:57.546+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v164: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:10:57 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:57.751+
7fa1c74a2700  0 [progress INFO root] Processing OSDMap change
20239..20239
Dec 19 17:10:58 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:57.547657+
mgr.xyz1 (mgr.6889303) 181 : cluster [DBG] pgmap v164: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:10:59 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:59.548+
7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v165: 649 pgs: 1
active+clean+scrubbi>
Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: :::10.0.1.2 - -
[19/Dec/2023:22:11:00] "GET /metrics HTTP/1.1" 200 215073 ""
"Prometheus/2.33.4"
Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:11:00.762+
7fa1b6e42700  0 [prometheus INFO cherrypy.access.140332751105776]
:::10.0.1.2 - - [19/De>
Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:11:00.835+
7f36de29f700  1 mon.xyz1@1(peon).osd e20239 _set_new_cache_sizes
cache_size:1020054731 inc_a>
Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-19T22:10:59.549152+
mgr.xyz1 (mgr.6889303) 182 : cluster [DBG] pgmap v165: 649 pgs: 1
active+clean+scrubbin>
Dec 19 17:11:01 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:11:01.548+
7fa1d9fc7700  0 log_chan

[ceph-users] Re: Logging control

2023-12-19 Thread Tim Holloway
The problem with "ceph daemon" is that the results I listed DID come
from running the command on the same machine as the daemon.

But "ceph tell" seems to be more promising.

There's more to the story, since I tried to do blind brute-force
adjustments and they failed also, but let me see if ceph tell gives a
better idea of what I should be doing.

   Tim
On Tue, 2023-12-19 at 16:21 -0500, Wesley Dillingham wrote:
> "ceph daemon" commands need to be run local to the machine where the
> daemon is running. So in this case if you arent on the node where
> osd.1 lives it wouldnt work. "ceph tell" should work anywhere there
> is a client.admin key.
> 
> 
> Respectfully,
> 
> Wes Dillingham
> w...@wesdillingham.com
> LinkedIn
> 
> 
> On Tue, Dec 19, 2023 at 4:02 PM Tim Holloway 
> wrote:
> > Ceph version is Pacific (16.2.14), upgraded from a sloppy Octopus.
> > 
> > I ran afoul of all the best bugs in Octopus, and in the process
> > switched on a lot of stuff better left alone, including some
> > detailed
> > debug logging. Now I can't turn it off.
> > 
> > I am confidently informed by the documentation that the first step
> > would be the command:
> > 
> > ceph daemon osd.1 config show | less
> > 
> > But instead of config information I get back:
> > 
> > Can't get admin socket path: unable to get conf option admin_socket
> > for
> > osd: b"error parsing 'osd': expected string of the form TYPE.ID,
> > valid
> > types are: auth, mon, osd, mds, mgr, client\n"
> > 
> > Which seems to be kind of insane.
> > 
> > Attempting to get daemon config info on a monitor on that machine
> > gives:
> > 
> > admin_socket: exception getting command descriptions: [Errno 2] No
> > such
> > file or directory
> > 
> > Which doesn't help either.
> > 
> > Anyone got an idea?



[ceph-users] Re: Logging control

2023-12-19 Thread Wesley Dillingham
"ceph daemon" commands need to be run local to the machine where the daemon
is running. So in this case if you arent on the node where osd.1 lives it
wouldnt work. "ceph tell" should work anywhere there is a client.admin key.


Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Tue, Dec 19, 2023 at 4:02 PM Tim Holloway  wrote:

> Ceph version is Pacific (16.2.14), upgraded from a sloppy Octopus.
>
> I ran afoul of all the best bugs in Octopus, and in the process
> switched on a lot of stuff better left alone, including some detailed
> debug logging. Now I can't turn it off.
>
> I am confidently informed by the documentation that the first step
> would be the command:
>
> ceph daemon osd.1 config show | less
>
> But instead of config information I get back:
>
> Can't get admin socket path: unable to get conf option admin_socket for
> osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid
> types are: auth, mon, osd, mds, mgr, client\n"
>
> Which seems to be kind of insane.
>
> Attempting to get daemon config info on a monitor on that machine
> gives:
>
> admin_socket: exception getting command descriptions: [Errno 2] No such
> file or directory
>
> Which doesn't help either.
>
> Anyone got an idea?


[ceph-users] Re: Logging control

2023-12-19 Thread Josh Baergen
I would start with "ceph tell osd.1 config diff", as I find that
output the easiest to read when trying to understand where various
config overrides are coming from. You almost never need to use "ceph
daemon" in Octopus+ systems since "ceph tell" should be able to access
pretty much all commands for daemons from any node.
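
Once the diff shows which options carry overrides, clearing them would look
something like this (debug_osd is only a placeholder here):

ceph config rm osd debug_osd                # drop the persistent override
ceph tell osd.1 config set debug_osd 1/5    # or adjust the running daemon directly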

Josh

On Tue, Dec 19, 2023 at 2:02 PM Tim Holloway  wrote:
>
> Ceph version is Pacific (16.2.14), upgraded from a sloppy Octopus.
>
> I ran afoul of all the best bugs in Octopus, and in the process
> switched on a lot of stuff better left alone, including some detailed
> debug logging. Now I can't turn it off.
>
> I am confidently informed by the documentation that the first step
> would be the command:
>
> ceph daemon osd.1 config show | less
>
> But instead of config information I get back:
>
> Can't get admin socket path: unable to get conf option admin_socket for
> osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid
> types are: auth, mon, osd, mds, mgr, client\n"
>
> Which seems to be kind of insane.
>
> Attempting to get daemon config info on a monitor on that machine
> gives:
>
> admin_socket: exception getting command descriptions: [Errno 2] No such
> file or directory
>
> Which doesn't help either.
>
> Anyone got an idea?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io