Re: [ceph-users] cephfs 1 large omap objects

2019-10-30 Thread Patrick Donnelly
On Wed, Oct 30, 2019 at 9:28 AM Jake Grimmett  wrote:
>
> Hi Zheng,
>
> Many thanks for your helpful post, I've done the following:
>
> 1) set the threshold to 1024 * 1024:
>
> # ceph config set osd \
> osd_deep_scrub_large_omap_object_key_threshold 1048576
>
> 2) deep scrubbed all of the PGs on the two OSDs that reported "Large omap
> object found." - these were all in pool 1, which has just four OSDs.
>
>
> Result: After 30 minutes, all deep-scrubs completed, and all "large omap
> objects" warnings disappeared.
>
> ...should we be worried about the size of these OMAP objects?

No. There are only a few of these objects, and they haven't caused
problems in any other cluster so far.

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D



Re: [ceph-users] cephfs 1 large omap objects

2019-10-30 Thread Jake Grimmett
Hi Zheng,

Many thanks for your helpful post, I've done the following:

1) set the threshold to 1024 * 1024:

# ceph config set osd \
osd_deep_scrub_large_omap_object_key_threshold 1048576
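
(To double-check that the new value took effect, something along these
lines should work; osd.2 is just one of the OSDs that logged the warning,
and the second command has to run on the host where osd.2 lives:)

# ceph config dump | grep large_omap
# ceph daemon osd.2 config show | grep osd_deep_scrub_large_omap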

2) deep scrubbed all of the PGs on the two OSDs that reported "Large omap
object found." - these were all in pool 1, which has just four OSDs.
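
(Roughly how the scrubs can be kicked off, in case it's useful to anyone;
<pgid> comes from the ls-by-osd listing, or the whole OSD can be
deep-scrubbed in one go:)

# ceph pg ls-by-osd osd.2
# ceph pg deep-scrub <pgid>
# ceph osd deep-scrub osd.2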


Result: After 30 minutes, all deep-scrubs completed, and all "large omap
objects" warnings disappeared.

...should we be worried about the size of these OMAP objects?

again many thanks,

Jake

On 10/30/19 3:15 AM, Yan, Zheng wrote:
> See https://tracker.ceph.com/issues/42515. Just ignore the warning for now.
> 
> On Mon, Oct 7, 2019 at 7:50 AM Nigel Williams wrote:
>>
>> Out of the blue this popped up (on an otherwise healthy cluster):
>>
>> HEALTH_WARN 1 large omap objects
>> LARGE_OMAP_OBJECTS 1 large omap objects
>> 1 large objects found in pool 'cephfs_metadata'
>> Search the cluster log for 'Large omap object found' for more details.
>>
>> "Search the cluster log" is somewhat opaque, there are logs for many 
>> daemons, what is a "cluster" log? In the ML history some found it in the OSD 
>> logs?
>>
>> Another post suggested removing lost+found, but using cephfs-shell I don't
>> see one at the top level. Is there another way to disable this "feature"?
>>
>> thanks.
>>


--
Jake Grimmett
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.



Re: [ceph-users] cephfs 1 large omap objects

2019-10-29 Thread Yan, Zheng
See https://tracker.ceph.com/issues/42515. Just ignore the warning for now.

On Mon, Oct 7, 2019 at 7:50 AM Nigel Williams wrote:
>
> Out of the blue this popped up (on an otherwise healthy cluster):
>
> HEALTH_WARN 1 large omap objects
> LARGE_OMAP_OBJECTS 1 large omap objects
> 1 large objects found in pool 'cephfs_metadata'
> Search the cluster log for 'Large omap object found' for more details.
>
> "Search the cluster log" is somewhat opaque, there are logs for many daemons, 
> what is a "cluster" log? In the ML history some found it in the OSD logs?
>
> Another post suggested removing lost+found, but using cephfs-shell I don't
> see one at the top level. Is there another way to disable this "feature"?
>
> thanks.
>


Re: [ceph-users] cephfs 1 large omap objects

2019-10-28 Thread Jake Grimmett
Hi Paul, Nigel,

I'm also seeing "HEALTH_WARN 6 large omap objects" warnings with cephfs
after upgrading to 14.2.4.

The affected OSDs are used (only) by the metadata pool:

POOL     ID  STORED  OBJECTS  USED    %USED  MAX AVAIL
mds_ssd   1  64 GiB  1.74M    65 GiB  4.47   466 GiB
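
(For anyone wanting to confirm which OSDs back that pool, something like
this lists its PGs along with their acting OSD sets:)

# ceph pg ls-by-pool mds_ssd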

See below for more log details.

While I'm glad we can silence the warning, should I be worried about the
values reported in the log causing real problems?

many thanks

Jake

[root@ceph1 ~]# zgrep "Large omap object found" /var/log/ceph/ceph.log*

/log/ceph/ceph.log-20191022.gz:2019-10-21 15:43:45.800608 osd.2 (osd.2)
262 : cluster [WRN] Large omap object found. Object:
1:e5134dd5:::10007b4b304.0240:head Key count: 524005 Size (bytes):
242090310
/var/log/ceph/ceph.log-20191022.gz:2019-10-21 15:43:48.440425 osd.2
(osd.2) 263 : cluster [WRN] Large omap object found. Object:
1:e5347802:::1000861ecf6.:head Key count: 395404 Size (bytes):
182676204
/var/log/ceph/ceph.log-20191025.gz:2019-10-24 23:53:25.348227 osd.2
(osd.2) 58 : cluster [WRN] Large omap object found. Object:
1:2f12e2d8:::10007b4b304.0180:head Key count: 1041988 Size (bytes):
481398012
/var/log/ceph/ceph.log-20191026.gz:2019-10-25 10:54:57.478636 osd.2
(osd.2) 69 : cluster [WRN] Large omap object found. Object:
1:effe741b:::1000763dfe6.:head Key count: 640788 Size (bytes):
296043612
/var/log/ceph/ceph.log-20191026.gz:2019-10-25 19:57:11.894099 osd.3
(osd.3) 326 : cluster [WRN] Large omap object found. Object:
1:4b4f7436:::10007b4b304.0200:head Key count: 522689 Size (bytes):
241482318
/var/log/ceph/ceph.log-20191027.gz:2019-10-27 02:30:10.648346 osd.3
(osd.3) 351 : cluster [WRN] Large omap object found. Object:
1:a47c6896:::1000894a736.:head Key count: 768126 Size (bytes):
354873768
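
(If it helps, the current key count of one of the flagged objects can be
checked directly; replace <object> with one of the object names above:)

# rados -p mds_ssd listomapkeys <object> | wc -l
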
On 10/8/19 10:27 AM, Paul Emmerich wrote:
> Hi,
> 
> the default for this warning changed recently (see other similar
> threads on the mailing list), it was 2 million before 14.2.3.
> 
> I don't think the new default of 200k is a good choice, so increasing
> it is a reasonable work-around.
> 
> Paul
> 


-- 
Jake Grimmett
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.



Re: [ceph-users] cephfs 1 large omap objects

2019-10-08 Thread Paul Emmerich
Hi,

the default for this warning changed recently (see other similar
threads on the mailing list), it was 2 million before 14.2.3.

I don't think the new default of 200k is a good choice, so increasing
it is a reasonable work-around.
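
(A minimal sketch of how to get the old behaviour back, if the pre-14.2.3
threshold of 2 million is preferred:)

ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000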

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Oct 7, 2019 at 3:37 AM Nigel Williams wrote:
>
> I've adjusted the threshold:
>
> ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 35
>
> A colleague suggested that this will take effect on the next deep-scrub.
>
> Is the default of 200,000 too small? Will this be adjusted in future
> releases, or is it meant to be adjusted in some use cases?


Re: [ceph-users] cephfs 1 large omap objects

2019-10-06 Thread Nigel Williams
I've adjusted the threshold:

ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 35

A colleague suggested that this will take effect on the next deep-scrub.
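
If you don't want to wait for the next scheduled scrub, it can probably be
forced by hand, roughly like this (mds0_openfiles.0 is the object flagged
elsewhere in this thread; <pgid> is whatever the first command reports):

ceph osd map cephfs_metadata mds0_openfiles.0
ceph pg deep-scrub <pgid>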

Is the default of 200,000 too small? Will this be adjusted in future
releases, or is it meant to be adjusted in some use cases?


Re: [ceph-users] cephfs 1 large omap objects

2019-10-06 Thread Nigel Williams
I followed some other suggested steps, and have this:

root@cnx-17:/var/log/ceph# zcat ceph-osd.178.log.?.gz|fgrep Large
2019-10-02 13:28:39.412 7f482ab1c700  0 log_channel(cluster) log [WRN] :
Large omap object found. Object: 2:654134d2:::mds0_openfiles.0:head Key
count: 306331 Size (bytes): 13993148
root@cnx-17:/var/log/ceph# ceph daemon osd.178 config show | grep
osd_deep_scrub_large_omap
"osd_deep_scrub_large_omap_object_key_threshold": "20",
"osd_deep_scrub_large_omap_object_value_sum_threshold": "1073741824",

root@cnx-11:~# rados -p cephfs_metadata stat 'mds0_openfiles.0'
cephfs_metadata/mds0_openfiles.0 mtime 2019-10-06 23:37:23.00, size 0
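
(The size of 0 from rados stat is expected, since omap data isn't counted
there. A rough way to see how many keys the object actually holds:)

rados -p cephfs_metadata listomapkeys mds0_openfiles.0 | wc -l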


[ceph-users] cephfs 1 large omap objects

2019-10-06 Thread Nigel Williams
Out of the blue this popped up (on an otherwise healthy cluster):

HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'cephfs_metadata'
Search the cluster log for 'Large omap object found' for more details.

"Search the cluster log" is somewhat opaque, there are logs for many
daemons, what is a "cluster" log? In the ML history some found it in the
OSD logs?
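
(One place these messages do turn up is the cluster log kept on the
monitor hosts, /var/log/ceph/ceph.log; for example:)

zgrep 'Large omap object found' /var/log/ceph/ceph.log*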

Another post suggested removing lost+found, but using cephfs-shell I don't
see one at the top level. Is there another way to disable this "feature"?

thanks.