Re: [ceph-users] cephfs ceph: fill_inode badness

2015-12-06 Thread Don Waterloo
265 GB used, 5357 GB / 5622 GB avail; 840 active+clean On 6 December 2015 at 08:18, Yan, Zheng <uker...@gmail.com> wrote: > On Sun, Dec 6, 2015 at 7:01 AM, Don Waterloo <don.water...@gmail.com> > wrote: > > Thanks for the advice. > >
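The usage summary quoted above (265 GB used, 5357 GB / 5622 GB avail; 840 active+clean) is the kind of output the cluster status commands print. A minimal way to re-check it from any node with an admin keyring, as a sketch:

$ ceph -s       # overall health, mon quorum, PG state summary
$ ceph df       # global and per-pool space usage
$ ceph osd df   # per-OSD utilisation and weights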

Re: [ceph-users] cephfs ceph: fill_inode badness

2015-12-05 Thread Don Waterloo
ker...@gmail.com> wrote: > On Fri, Dec 4, 2015 at 10:39 AM, Don Waterloo <don.water...@gmail.com> > wrote: > > I have a file which is untouchable: ls -i gives an error, stat gives an > > error. It shows ??? for all fields except name. > > > > How do I clean this u

[ceph-users] cephfs ceph: fill_inode badness

2015-12-03 Thread Don Waterloo
I have a file which is untouchable: ls -i gives an error, stat gives an error. It shows ??? for all fields except name. How do I clean this up? I'm on Ubuntu 15.10, running 0.94.5 # ceph -v ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) the node that accessed the file then
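A minimal triage sequence for a file in this state, assuming /mnt/cephfs/badfile is a hypothetical stand-in for the affected path:

$ stat /mnt/cephfs/badfile   # reproduces the error described above
$ ls -li /mnt/cephfs/        # the damaged entry shows ??? for every field but the name
$ ceph -v                    # confirm the version in play (0.94.5 here)
$ ceph -s                    # check overall cluster health
$ ceph mds stat              # confirm the MDS is up:active and not laggy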

Re: [ceph-users] cephfs, low performances

2015-12-20 Thread Don Waterloo
On 20 December 2015 at 19:23, Francois Lafont <flafdiv...@free.fr> wrote: > On 20/12/2015 22:51, Don Waterloo wrote: > > > All nodes have 10Gbps to each other > > Even the link client node <---> cluster nodes? > > > OSD: > > $ ceph osd tree > > I
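One way to answer the question about the client link, as a sketch assuming iperf3 is available on both ends (host names are placeholders):

# on one cluster node
$ iperf3 -s
# on the cephfs client
$ iperf3 -c osd-node-1
# and re-check the topology under discussion
$ ceph osd tree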

[ceph-users] cephfs 'lag' / hang

2015-12-18 Thread Don Waterloo
I have 3 systems w/ a cephfs mounted on them. And I am seeing material 'lag'. By 'lag' I mean it hangs for little bits of time (1s, sometimes 5s), but it is very non-repeatable. If I run time find . -type f -print0 | xargs -0 stat > /dev/null it might take ~130ms. But, it might take 10s. Once I've done
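Spelled out, the timing test from the message, run from the root of the cephfs mount (the mount point is a placeholder); repeating it a few times is what exposes the variance between ~130ms and multi-second hangs:

$ cd /mnt/cephfs
$ time find . -type f -print0 | xargs -0 stat > /dev/null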

Re: [ceph-users] cephfs, low performances

2015-12-18 Thread Don Waterloo
On 17 December 2015 at 21:36, Francois Lafont wrote: > Hi, > > I have a ceph cluster, currently unused, and I have (to my mind) very low > performance. > I'm not an expert in benchmarks; here is an example of a quick bench: > >

Re: [ceph-users] cephfs, low performances

2015-12-18 Thread Don Waterloo
On 18 December 2015 at 15:48, Don Waterloo <don.water...@gmail.com> wrote: > > > On 17 December 2015 at 21:36, Francois Lafont <flafdiv...@free.fr> wrote: > >> Hi, >> >> I have a ceph cluster, currently unused, and I have (to my mind) very low >> pe

Re: [ceph-users] cephfs, low performances

2015-12-20 Thread Don Waterloo
On 20 December 2015 at 08:35, Francois Lafont <flafdiv...@free.fr> wrote: > Hello, > > On 18/12/2015 23:26, Don Waterloo wrote: > > > rbd -p mypool create speed-test-image --size 1000 > > rbd -p mypool bench-write speed-test-image > > > > I get >
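The rbd test quoted above, with a cleanup step added at the end (pool and image names are the ones from the message):

$ rbd -p mypool create speed-test-image --size 1000
$ rbd -p mypool bench-write speed-test-image
$ rbd -p mypool rm speed-test-image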

Re: [ceph-users] cephfs, low performances

2015-12-22 Thread Don Waterloo
On 21 December 2015 at 22:07, Yan, Zheng wrote: > > > OK, so I changed the fio engine to 'sync' for the comparison of a single > > underlying OSD vs the cephfs. > > > > The cephfs w/ sync is ~ 115 IOPS / ~500KB/s. > > This is normal because you were doing single-thread sync IO. If
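A sketch of the kind of single-threaded sync fio run being compared here; the job name, directory, size and runtime are illustrative rather than taken from the thread:

$ fio --name=cephfs-sync --directory=/mnt/cephfs \
      --ioengine=sync --rw=randwrite --bs=4k \
      --size=256m --numjobs=1 --runtime=60 --time_based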

Re: [ceph-users] cephfs 'lag' / hang

2015-12-21 Thread Don Waterloo
On 21 December 2015 at 03:23, Yan, Zheng <uker...@gmail.com> wrote: > On Sat, Dec 19, 2015 at 4:34 AM, Don Waterloo <don.water...@gmail.com> > wrote: > > I have 3 systems w/ a cephfs mounted on them. > > And I am seeing material 'lag'. By 'lag' I mean it hangs for
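When chasing an intermittent hang like this, one useful data point is whether requests are queueing on the MDS while the client stalls. A hedged check via the admin socket on the MDS host (mds.a is a placeholder for the actual daemon name):

$ ceph daemon mds.a dump_ops_in_flight   # requests currently stuck in the MDS
$ ceph daemon mds.a perf dump            # counters, including request latencies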

Re: [ceph-users] cephfs, low performances

2015-12-21 Thread Don Waterloo
On 20 December 2015 at 22:47, Yan, Zheng wrote: > fio tests AIO performance in this case. cephfs does not handle AIO > properly; AIO is actually sync IO. That's why cephfs is so slow in > this case.
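The contrast being described can be shown by re-running the same job with an async engine and a deeper queue; per Yan's explanation, on cephfs of that era it still behaves like sync IO. Parameters are again illustrative:

$ fio --name=cephfs-aio --directory=/mnt/cephfs \
      --ioengine=libaio --iodepth=32 --direct=1 \
      --rw=randwrite --bs=4k --size=256m --runtime=60 --time_based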

[ceph-users] ceph hang on pg list_unfound

2016-05-18 Thread Don Waterloo
I am running 10.2.0-0ubuntu0.16.04.1. I've run into a problem w/ the cephfs metadata pool. Specifically, I have a pg w/ an 'unfound' object, but I can't figure out which one, since when I run ceph pg 12.94 list_unfound it hangs (as does ceph pg 12.94 query). I know it's in the cephfs metadata pool since
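For reference, the prefix of the pg id (the 12 in 12.94) is the pool id, so the pool and the acting OSDs can be identified without the commands that hang:

$ ceph osd lspools           # maps pool ids to names; 12 should be the metadata pool
$ ceph health detail         # lists PGs with unfound objects
$ ceph pg map 12.94          # up/acting OSD set for the affected PG
$ ceph pg 12.94 list_unfound # the command that hangs in this report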

[ceph-users] Questions about cache-tier in 12.1

2017-08-10 Thread Don Waterloo
I have a system w/ 7 hosts. Each host has 1x 1TB NVMe and 2x 2TB SATA SSD. The intent was to use this for OpenStack, having Glance stored on the SSD, and Cinder + Nova running a replicated cache-tier pool on NVMe in front of an erasure-coded pool on SSD. The rationale is that, given the copy-on-write, only
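For context, a sketch of the tiering commands such a layout implies; the pool names nvme-cache and ssd-ec are hypothetical, and the thresholds are placeholders to be sized for the actual hardware:

$ ceph osd tier add ssd-ec nvme-cache
$ ceph osd tier cache-mode nvme-cache writeback
$ ceph osd tier set-overlay ssd-ec nvme-cache
$ ceph osd pool set nvme-cache hit_set_type bloom
$ ceph osd pool set nvme-cache target_max_bytes 500000000000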