Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Hector Martin
On 07/02/2019 20:21, Marc Roos wrote: I also do not know exactly how many I have. It is sort of a test setup and the bash script creates a snapshot every day. So with 100 dirs it will be a maximum of 700. But the script first checks if there is any data with getfattr --only-values --absolute-names
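A minimal sketch of what that per-directory check could look like, assuming the getfattr call (whose attribute name is cut off above) reads the ceph.dir.rentries virtual xattr; the mount path and helper name are illustrative only:

    #!/bin/bash
    # Return success only if the CephFS directory has any entries below it.
    has_data() {
        local dir="$1"
        # ceph.dir.rentries = recursive number of entries under this directory
        local entries
        entries=$(getfattr --only-values --absolute-names -n ceph.dir.rentries "$dir")
        [ "${entries:-0}" -gt 0 ]
    }

    # hypothetical usage: skip the snapshot for empty directories
    if has_data /mnt/cephfs/somedir; then
        echo "directory has data, worth snapshotting"
    fi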

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
Hmmm, I have a daily cron job creating these on maybe only 100 directories. I am removing the snapshot, if it exists, with a rmdir. Should I do this differently? Maybe e.g. use snap-20190101, snap-20190102, snap-20190103, then I will always create unique directories
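A minimal sketch of the date-stamped variant proposed here, assuming snapshots live under the directory's .snap subdirectory and that old ones are pruned with rmdir after seven days; the path and retention window are illustrative, not taken from the original script:

    #!/bin/bash
    # Create a uniquely named snapshot per day and prune anything older than 7 days.
    dir="/mnt/cephfs/somedir"                       # hypothetical CephFS directory

    mkdir "$dir/.snap/snap-$(date +%Y%m%d)"         # e.g. snap-20190207, never reused

    # Remove date-stamped snapshots older than the retention window.
    cutoff=$(date -d "7 days ago" +%Y%m%d)
    for snap in "$dir"/.snap/snap-*; do
        name=$(basename "$snap")
        [ "${name#snap-}" -lt "$cutoff" ] 2>/dev/null && rmdir "$snap"
    done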

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Hector Martin
On 07/02/2019 19:19, Marc Roos wrote: Hmmm, I have a daily cron job creating these on maybe only 100 directories. I am removing the snapshot, if it exists, with a rmdir. Should I do this differently? Maybe e.g. use snap-20190101, snap-20190102, snap-20190103, then I will always create unique directories

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
will also always be unique.

[@ .snap]# ls -c1
snap-7
snap-6
snap-5
snap-4
snap-3
snap-2
snap-1

-----Original Message-----
From: Hector Martin [mailto:hec...@marcansoft.com]
Sent: 07 February 2019 10:41
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] I get weird ls pool detail output 12.2.11
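For comparison, the current rotation that produces the snap-1 … snap-7 listing above presumably recycles a day-of-week name, removing last week's snapshot before taking the new one; a minimal sketch under that assumption:

    #!/bin/bash
    # Rotate a fixed set of seven snapshot names (snap-1 .. snap-7, day of week).
    dir="/mnt/cephfs/somedir"                 # hypothetical CephFS directory
    snap="$dir/.snap/snap-$(date +%u)"        # %u = 1 (Mon) .. 7 (Sun)

    # Drop last week's snapshot of the same name, if it exists, then recreate it.
    [ -d "$snap" ] && rmdir "$snap"
    mkdir "$snap"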

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Hector Martin
On 07/02/2019 18:17, Marc Roos wrote: 250~1,2252~1,2254~1,2256~1,2258~1,225a~1,225c~1,225e~1,2260~1,2262~1,2264~1,2266~1,2268~1,226a~1,226c~1,226e~1,2270~1,2272~1,2274~1,2276~1,2278~1,227a~1,227c~1,227e~1,2280~1,2282~1,2284~1,2286~1,2288~1,228a~1,228c~1,228e~1,2290~1,2292~1,2294~1,2296~1,2298~
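Each comma-separated removed_snaps entry above is an interval of snapshot IDs written as start~length, with both numbers in hexadecimal; a quick sketch of decoding one entry taken from that output:

    # "2252~1" means: 1 snapshot ID removed, starting at hex 2252
    interval="2252~1"
    start=$((16#${interval%%~*}))     # first removed snapid, hex -> decimal
    count=$((16#${interval##*~}))     # how many consecutive snapids the interval covers
    echo "snapids ${start}..$((start + count - 1)) have been removed"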

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
Also on pools that are empty; it looks like it happens on all cephfs data pools.

pool 55 'fs_data.ec21.ssd' erasure size 3 min_size 3 crush_rule 6 object_hash rjenkins pg_num 8 pgp_num 8 last_change 29032 flags hashpspool,ec_overwrites stripe_width 8192 application cephfs removed_snaps [57f~1,583~
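For reference, a line like this comes from the pool detail listing the thread subject refers to; a minimal way to pull it up again and narrow it to the one pool shown above (assuming a client with an admin keyring):

    # show every pool with its flags and removed_snaps interval set
    ceph osd pool ls detail

    # or just the pool from the output above
    ceph osd pool ls detail | grep -F "'fs_data.ec21.ssd'"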