Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Hector Martin

On 07/02/2019 20:21, Marc Roos wrote:

I also do not know exactly how many I have. It is sort of a test setup,
and the bash script creates a snapshot every day. So with 100 dirs it
will be a maximum of 700. But the script first checks whether there is
any data, with:
getfattr --only-values --absolute-names -d -m ceph.dir.rbytes

I don't know what 'leaking old snapshots forever' means; how do I check
whether this is happening? I am quite confident that the bash script
only creates and removes the snap dirs as it should.

Is it not strange that the snaps are shown on fs data pools I am not
using? fs_data does indeed have snapshots, while fs_data.ec21.ssd is empty.


I think the snapshot IDs will apply to all pools in the FS, regardless 
of whether they contain any data referenced by the snapshots.


I just tested this and it seems each CephFS snapshot consumes two 
snapshots in the underlying pools, one apparently created on deletion (I 
wasn't aware of this). So for ~700 snapshots the output you're seeing is 
normal. It seems that using a "rolling snapshot" pattern in CephFS 
inherently creates a "one present, one deleted" pattern in the 
underlying pools.
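
For reference, a minimal way to reproduce that kind of test (the
/mnt/cephfs mount point and 'somedir' directory are hypothetical
placeholders; pool 20 'fs_data' is taken from the ls detail output
quoted further down in this thread):

# Create and then remove a CephFS snapshot on a test directory.
mkdir /mnt/cephfs/somedir/.snap/testsnap
rmdir /mnt/cephfs/somedir/.snap/testsnap
# Compare the removed_snaps intervals of the data pool before and after;
# they are printed on the line following the pool line.
ceph osd pool ls detail | grep -A1 "^pool 20 'fs_data'"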


--
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub


Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
 >> Hmmm, I have a daily cron job creating these on only maybe 100
 >> directories. I remove the snapshot, if it exists, with a rmdir.
 >> Should I do this differently? Maybe e.g. use snap-20190101,
 >> snap-20190102, snap-20190103; then I will always create unique
 >> directories, and the ones removed will also always be unique.
 >
 > The names shouldn't matter. If you're creating 100 snapshots, then
 > having a removed_snaps list with entries on that order may be normal;
 > I'm not sure how many you really have, since your line was truncated,
 > but at least 600 or so? You might want to go through your snapshots
 > and check that you aren't leaking old snapshots forever, or deleting
 > the wrong ones.

I also do not know exactly how many I have. It is sort of a test setup,
and the bash script creates a snapshot every day. So with 100 dirs it
will be a maximum of 700. But the script first checks whether there is
any data, with:
getfattr --only-values --absolute-names -d -m ceph.dir.rbytes
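
A minimal sketch of that check and the daily refresh, with hypothetical
paths standing in for the real script:

DIR=/mnt/cephfs/somedir    # hypothetical mount point and directory
# ceph.dir.rbytes is the recursive byte count of everything under $DIR.
BYTES=$(getfattr --only-values --absolute-names -d -m ceph.dir.rbytes "$DIR")
if [ "${BYTES:-0}" -gt 0 ]; then
    rmdir "$DIR/.snap/snap-1" 2>/dev/null   # remove the old snapshot if it exists
    mkdir "$DIR/.snap/snap-1"               # create the new snapshot
fi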


I don't know what 'leaking old snapshots forever' means; how do I check
whether this is happening? I am quite confident that the bash script
only creates and removes the snap dirs as it should.

Is it not strange that the snaps are shown on fs data pools I am not
using? fs_data does indeed have snapshots, while fs_data.ec21.ssd is empty.



Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Hector Martin

On 07/02/2019 19:19, Marc Roos wrote:
  


Hmmm, I have a daily cron job creating these on only maybe 100
directories. I remove the snapshot, if it exists, with a rmdir.
Should I do this differently? Maybe e.g. use snap-20190101, snap-20190102,
snap-20190103; then I will always create unique directories, and the ones
removed will also always be unique.


The names shouldn't matter. If you're creating 100 snapshots, then having
a removed_snaps list with entries on that order may be normal; I'm not
sure how many you really have, since your line was truncated, but at
least 600 or so? You might want to go through your snapshots and check
that you aren't leaking old snapshots forever, or deleting the wrong ones.
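
One rough way to audit that, assuming a hypothetical /mnt/cephfs mount
point with the snapshotted directories at its top level, would be
something like:

# List every snapshot name under every top-level directory and count how
# often each name occurs; stale names the script should have removed will
# stand out.
for d in /mnt/cephfs/*/; do
    ls "$d.snap"
done | sort | uniq -c | sort -rn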


--
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub


Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
 

Hmmm, I have a daily cron job creating these on only maybe 100
directories. I remove the snapshot, if it exists, with a rmdir.
Should I do this differently? Maybe e.g. use snap-20190101, snap-20190102,
snap-20190103; then I will always create unique directories, and the ones
removed will also always be unique.

[@ .snap]# ls -c1
snap-7
snap-6
snap-5
snap-4
snap-3
snap-2
snap-1
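
If you do go for dated names, a minimal sketch of such a rotation
(hypothetical paths, keeping the last 7 days) could look like:

DIR=/mnt/cephfs/somedir                    # hypothetical mount point and directory
mkdir "$DIR/.snap/snap-$(date +%Y%m%d)"    # today's snapshot, e.g. snap-20190207
CUTOFF=$(date -d '7 days ago' +%Y%m%d)
for s in "$DIR"/.snap/snap-*; do
    [ -d "$s" ] || continue
    name=$(basename "$s")
    # Remove dated snapshots older than the cutoff.
    [ "${name#snap-}" -lt "$CUTOFF" ] && rmdir "$s"
done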







-Original Message-
From: Hector Martin [mailto:hec...@marcansoft.com] 
Sent: 07 February 2019 10:41
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] I get weird ls pool detail output 12.2.11

On 07/02/2019 18:17, Marc Roos wrote:
> 250~1,2252~1,2254~1,2256~1,2258~1,225a~1,225c~1,225e~1,2260~1,2262~1,
> 2264~1,2266~1,2268~1,226a~1,226c~1,226e~1,2270~1,2272~1,2274~1,2276~1,
> 2278~1,227a~1,227c~1,227e~1,2280~1,2282~1,2284~1,2286~1,2288~1,228a~1,
> 228c~1,228e~1,2290~1,2292~1,2294~1,2296~1,2298~1,229a~1,229c~1,229e~1,
> 22a0~1,22

Looks like you are creating a snapshot, creating a snapshot, deleting a 
snapshot, creating a snapshot, creating a snapshot... AIUI this pattern 
will result in an incompressible removed_snaps list. You should probably 
remove some old snapshots unless you really need them. Snapshot ID 
ranges are basically run-length compressed (interval_set), so if you end 
up with a present-removed-present-removed-present-removed pattern it 
won't compress.

I'm not sure about the performance implications of having such a "holey" 
snapshot list though. There's some discussion on this here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020510.html

--
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub


Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Hector Martin

On 07/02/2019 18:17, Marc Roos wrote:

250~1,2252~1,2254~1,2256~1,2258~1,225a~1,225c~1,225e~1,2260~1,2262~1,226
4~1,2266~1,2268~1,226a~1,226c~1,226e~1,2270~1,2272~1,2274~1,2276~1,2278~
1,227a~1,227c~1,227e~1,2280~1,2282~1,2284~1,2286~1,2288~1,228a~1,228c~1,
228e~1,2290~1,2292~1,2294~1,2296~1,2298~1,229a~1,229c~1,229e~1,22a0~1,22


Looks like you are creating a snapshot, creating a snapshot, deleting a 
snapshot, creating a snapshot, creating a snapshot... AIUI this pattern 
will result in an incompressible removed_snaps list. You should probably 
remove some old snapshots unless you really need them. Snapshot ID 
ranges are basically run-length compressed (interval_set), so if you end 
up with a present-removed-present-removed-present-removed pattern it 
won't compress.
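
As a rough way to see how fragmented the list really is, you can count
the intervals in the removed_snaps line (a sketch; pool 55
'fs_data.ec21.ssd' is taken from your ls detail output, and the interval
list is assumed to sit on the line directly after the pool line, as it
does in your paste):

# Count the comma-separated intervals in removed_snaps for pool 55
# ('fs_data.ec21.ssd'). A fully "holey" list has one x~1 interval per
# removed snapshot.
ceph osd pool ls detail | grep -A1 "^pool 55 " \
    | grep removed_snaps | tr ',' '\n' | wc -l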


I'm not sure about the performance implications of having such a "holey" 
snapshot list though. There's some discussion on this here:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020510.html

--
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub


Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
 
Also on pools that are empty, looks like on all cephfs data pools.


pool 55 'fs_data.ec21.ssd' erasure size 3 min_size 3 crush_rule 6 
object_hash rjenkins pg_num 8 pgp_num 8 last_change 29032 flags 
hashpspool,ec_overwrites stripe_width 8192 application cephfs
removed_snaps 
[57f~1,583~1,587~1,58b~1,58f~1,593~1,597~1,59b~1,59d~1,59f~1,5a1~1,5a3~1
,5a5~1,5a7~1,5a9~1,5ab~1,5ad~1,5af~1,5b1~1,5b3~1,5b5~1ea,7a0~3,7a4~b10,1
2b5~3,12b9~3,12bd~22c,14ea~22e,1719~b04,221e~1,2220~1,2222~1,2224~1,2226
~1,2228~1,222a~1,222c~1,222e~1,2230~1,2232~1,2234~1,2236~1,2238~1,223a~1
,223c~1,223e~1,2240~1,2242~1,2244~1,2246~1,2248~1,224a~1,224c~1,224e~1,2
250~1,2252~1,2254~1,2256~1,2258~1,225a~1,225c~1,225e~1,2260~1,2262~1,226
4~1,2266~1,2268~1,226a~1,226c~1,226e~1,2270~1,2272~1,2274~1,2276~1,2278~
1,227a~1,227c~1,227e~1,2280~1,2282~1,2284~1,2286~1,2288~1,228a~1,228c~1,
228e~1,2290~1,2292~1,2294~1,2296~1,2298~1,229a~1,229c~1,229e~1,22a0~1,22
a2~1,22a4~1,22a6~1,22a8~1,22aa~1,22ac~1,22ae~1,22b0~1,22b2~1,22b4~1,22b6
~1,22b8~1,22ba~1,22bc~1,22be~1,22c0~1,22c2~1,22c4~1,22c6~1,22c8~1,22ca~1
,22cc~1,22ce~1,22d0~1,22d2~1,22d4~1,22d6~1,22d8~1,22da~1,22dc~1,22de~1,2
2e0~1,22e2~1,22e4~1,22e6~1,22e8~1,22ea~1,22ec~1,22ee~1,22f0~1,22f2~1,22f
4~1,22f6~1,22f8~1,22fa~1,22fc~1,22fe~1,2300~1,2302~1,2304~1,2306~1,2308~
1,230a~1,230c~1,230e~1,2310~1,2312~1,2314~1,2316~1,2318~1,231a~1,231c~1,
231e~1,2320~1,2322~1,2324~1,2326~1,2328~1,232a~1,232c~1,232e~1,2330~1,23
32~1,2334~1,2336~1,2338~1,233a~1,233c~1,233e~2,2341~1,2343~1,2345~1,2348
~1,234a~1,234c~1,234e~1,2350~1,2352~1,2354~1,2356~1,2358~1,235a~1,235c~1
,235e~1,2360~1,2362~1,2364~1,2366~1,2368~1,236a~1,236c~1,236e~1,2370~1,2
372~1,2374~1,2376~1,2378~1,237a~1,237c~1,237e~1,2380~1,2382~1,2384~1,238
6~1,2388~1,238a~1,238c~1,238e~1,2390~1,2392~1,2394~1,2396~1,2398~1,239a~
1,239c~1,239e~1,23a0~1,23a2~1,23a4~1,23a6


[@]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    115TiB     68.5TiB    46.0TiB      40.16
POOLS:
    NAME               ID     USED     %USED     MAX AVAIL     OBJECTS
    fs_data.ec21.ssd   55     0B       0         525GiB        0





[@]# ceph osd pool ls detail


pool 20 'fs_data' replicated size 3 min_size 2 crush_rule 0 object_hash 
rjenkins pg_num 64 pgp_num 64 last_change 29032 flags hashpspool 
stripe_width 0 application cephfs
removed_snaps
[3~1,5~31,37~768,7a0~3,7a4~b10,12b5~3,12b9~3,12bd~22c,14ea~22e,1719~b04,
221e~1,2220~1,2222~1,2224~1,2226~1,2228~1,222a~1,222c~1,222e~1,2230~1,22
32~1,2234~1,2236~1,2238~1,223a~1,223c~1,223e~1,2240~1,2242~1,2244~1,2246
~1,2248~1,224a~1,224c~1,224e~1,2250~1,2252~1,2254~1,2256~1,2258~1,225a~1
,225c~1,225e~1,2260~1,2262~1,2264~1,2266~1,2268~1,226a~1,226c~1,226e~1,2
270~1,2272~1,2274~1,2276~1,2278~1,227a~1,227c~1,227e~1,2280~1,2282~1,228
4~1,2286~1,2288~1,228a~1,228c~1,228e~1,2290~1,2292~1,2294~1,2296~1,2298~
1,229a~1,229c~1,229e~1,22a0~1,22a2~1,22a4~1,22a6~1,22a8~1,22aa~1,22ac~1,
22ae~1,22b0~1,22b2~1,22b4~1,22b6~1,22b8~1,22ba~1,22bc~1,22be~1,22c0~1,22
c2~1,22c4~1,22c6~1,22c8~1,22ca~1,22cc~1,22ce~1,22d0~1,22d2~1,22d4~1,22d6
~1,22d8~1,22da~1,22dc~1,22de~1,22e0~1,22e2~1,22e4~1,22e6~1,22e8~1,22ea~1
,22ec~1,22ee~1,22f0~1,22f2~1,22f4~1,22f6~1,22f8~1,22fa~1,22fc~1,22fe~1,2
300~1,2302~1,2304~1,2306~1,2308~1,230a~1,230c~1,230e~1,2310~1,2312~1,231
4~1,2316~1,2318~1,231a~1,231c~1,231e~1,2320~1,2322~1,2324~1,2326~1,2328~
1,232a~1,232c~1,232e~1,2330~1,2332~1,2334~1,2336~1,2338~1,233a~1,233c~1,
233e~2,2341~1,2343~1,2345~1,2348~1,234a~1,234c~1,234e~1,2350~1,2352~1,23
54~1,2356~1,2358~1,235a~1,235c~1,235e~1,2360~1,2362~1,2364~1,2366~1,2368
~1,236a~1,236c~1,236e~1,2370~1,2372~1,2374~1,2376~1,2378~1,237a~1,237c~1
,237e~1,2380~1,2382~1,2384~1,2386~1,2388~1,238a~1,238c~1,238e~1,2390~1,2
392~1,2394~1,2396~1,2398~1,239a~1,239c~1,239e~1,23a0~1,23a2~1,23a4~1,23a
6~1,23a8~1,23aa~1,23ac~1,23ae~1,23b0~1,23b2~1,23b4~1,23b6~1,23b8~1,23ba~
1,23bc~1,23be~1,23c0~1,23c2~1,23c4~1,23c6~1,23c8~1,23ca~1,23cc~1,23ce~1,
23d0~1,23d2~1,23d4~1,23d6~1,23d8~1,23da~1,23dc~1,23de~1,23e0~1,23e2~1,23
e4~1,23e6~1,23e8~1,23ea~1,23ec~1,23ee~1,23f0~1,23f2~1,23f4~1,23f6~1,23f8
~1,23fa~1,23fc~1,23fe~1,2400~1,2402~1,2404~1,2406~1,2408~1,240a~1,240c~1
,240e~1,2410~1,2412~1,2414~1,2416~1,2418~1,241a~1,241c~1,241e~1,2420~1,2
422~1,2426~1,2428~1,242a~1,242c~1,242e~1,2430~1,2432~1,2434~1,2436~1,243
8~1,243a~1,243c~1,243e~1,2440~1,2442~1,2444~1,2446~1,2448~1,244a~1,244c~
1,244e~1,2450~1,2452~1,2454~1,2456~1,2458~1,245a~1,245c~1,245e~1,2460~1,
2462~1,2464~1,2466~1,2468~1,246a~1,246c~1,246e~1,2470~1,2472~1,2474~1,24
76~1,2478~1,247a~1,247c~1,247e~1,2480~1,2482~1,2484~1,2486~1,2488~1,248a
~1,248c~1,248e~1,2490~1,2492~1,2494~1,2496~1,2498~1,249a~1,249c~1,249e~1
,24a0~1,24a2~1,24a4~1,24a6~1,24a8~1,24aa~1,24ac~1,24ae~1,24b0~1,24b2~1,2
4b4~1,24b6~1,24b8~1,24ba~1,24bc~1,24be~1,24c0~1,24c2~1,24c4~1,24c6~1,24c
8~1,24ca~1,24cc~1,24ce~1,24d0~1,24d2~1,24d4~1,24d6~1,24d8~1,24da~1,24dc~
1,24de~1,24e0~1,24e2~1,24e4~1,24e6~1,24e8~1,2