Re: [ceph-users] ceph-fuse slow cache?

2018-08-26 Thread Yan, Zheng
Could you strace the apache process and check which syscall waits for a long time? On Sat, Aug 25, 2018 at 3:04 AM Stefan Kooman wrote: > > Quoting Gregory Farnum (gfar...@redhat.com): > > > Hmm, these aren't actually the start and end times of the same operation. > > put_inode() is literally adjusting
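
A minimal sketch of tracing a running Apache worker this way; the PID and output path below are placeholders, not taken from the thread:

    # attach to one Apache worker (replace 12345 with a real worker PID);
    # -T prints time spent inside each syscall, -tt adds timestamps, -f follows forks
    strace -f -tt -T -p 12345 -o /tmp/apache-strace.log
    # then look for the syscalls with the largest <...> durations, e.g.:
    grep -E '<[0-9]+\.' /tmp/apache-strace.log | sort -t'<' -k2 -rn | head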

Re: [ceph-users] cephfs kernel client hangs

2018-08-26 Thread Yan, Zheng
Please check client.213528 instead of client.267792. Which kernel version does client.213528 use? On Sat, Aug 25, 2018 at 6:12 AM Zhenshi Zhou wrote: > > Hi, > This time, osdc: > > REQUESTS 0 homeless 0 > LINGER REQUESTS > > monc: > > have monmap 2 want 3+ > have osdmap 4545 want 4546 > have
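
A rough sketch of mapping a client id back to its host and kernel version; the session-listing fields vary between releases, so treat this as an assumption rather than an exact recipe:

    # on the host running the active MDS: find the session with id=213528
    ceph daemon mds.<name> session ls
    # the session's client metadata/address points at the client host; on that host:
    uname -r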

Re: [ceph-users] Design a PetaByte scale CEPH object storage

2018-08-26 Thread John Hearns
James, I echo what Christian Balzer says. Do not fixate on Ceph at this stage; we need to look at what the requirements are. There are alternatives such as Spectrum Scale and Minio. Also, depending on how often the videos are to be recalled, it is worth looking at a tape-based solution. Regarding hardware,

Re: [ceph-users] Can I deploy wal and db of more than one osd in one partition

2018-08-26 Thread David Turner
You need to do them on separate partitions. You can either do sdc{num} or manage the SSD using LVM. On Sun, Aug 26, 2018, 8:39 AM Zhenshi Zhou wrote: > Hi, > I have 4 osd nodes with 4 hdd and 1 ssd on each. > I'm gonna add these osds to an existing cluster. > What I'm confused about is how to
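
A sketch of the LVM route mentioned here, assuming /dev/sdc is the SSD and four OSDs per node (the volume group, LV names and sizes are only illustrative):

    pvcreate /dev/sdc
    vgcreate ceph-db /dev/sdc
    for i in 0 1 2 3; do lvcreate -L 30G -n db-$i ceph-db; done
    # then give each OSD its own LV for block.db, e.g. with ceph-volume:
    # ceph-volume lvm create --bluestore --data /dev/sd<hdd> --block.db ceph-db/db-$i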

Re: [ceph-users] Design a PetaByte scale CEPH object storage

2018-08-26 Thread Christian Balzer
Hello, On Sun, 26 Aug 2018 22:23:53 +0400 James Watson wrote: > Hi CEPHers, > > I need to design an HA CEPH object storage system. The first question that comes to mind is why? Why does it need to be Ceph and why object based (RGW)? From what's stated below it seems that nobody at your

Re: [ceph-users] Why does Ceph probe for end of MDS log?

2018-08-26 Thread Bryan Henderson
>No, the log end in the header is a hint. This is because we can't atomically write to two objects (the header and the last log object) at the same time, so we do atomic appends to the end of the log and flush out the journal header lazily. Thanks; I get it now. >I believe zeroes at the end
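
For anyone who wants to see this behaviour directly, the hinted positions in the header can be compared against the actual journal objects; a sketch, assuming rank 0 of a single filesystem:

    # dump the MDS journal header; write_pos here is only the lazily-flushed hint
    cephfs-journal-tool --rank=<fs_name>:0 header get
    # scan the on-disk log objects to find the true end of the journal
    cephfs-journal-tool --rank=<fs_name>:0 journal inspect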

[ceph-users] Design a PetaByte scale CEPH object storage

2018-08-26 Thread James Watson
Hi CEPHers, I need to design an HA Ceph object storage system. The scenario is that we are recording HD videos, and at the end of the day we need to copy all these video files (each file is approx. 15 TB) to our storage system. 1) Which would be the best storage tech to transfer these PB-size loads

[ceph-users] Error EINVAL: (22) Invalid argument While using ceph osd safe-to-destroy

2018-08-26 Thread Robert Stanford
I am following the procedure here: http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/ When I get to the part to run "ceph osd safe-to-destroy $ID" in a while loop, I get an EINVAL error. I get this error when I run "ceph osd safe-to-destroy 0" on the command line by itself,
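
For reference, the loop from that document is roughly the sketch below; whether the EINVAL comes from the id format or from the OSD's state cannot be told from this message alone:

    ID=0   # the OSD being migrated
    while ! ceph osd safe-to-destroy $ID ; do sleep 60 ; done
    # "osd.$ID" is usually accepted as an alternative spelling of the id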

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-26 Thread Jones de Andrade
Hi Eugen. Thanks for the suggestion. I'll look for the logs (since it's our first attempt with ceph, I'll have to discover where they are, but no problem). One thing in your response caught my attention, however: I haven't made myself clear, but one of the failures we encountered was that the
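
Not an answer from the thread, but as a starting point the logs usually live in the places sketched below (paths assume a stock DeepSea/salt deployment):

    # on each cluster node
    ls -l /var/log/ceph/
    # salt master/minion logs, useful when a deployment stage fails
    ls -l /var/log/salt/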

[ceph-users] Can I deploy wal and db of more than one osd in one partition

2018-08-26 Thread Zhenshi Zhou
Hi, I have 4 osd nodes with 4 hdd and 1 ssd on each. I'm gonna add these osds to an existing cluster. What I'm confused about is how to deal with the ssd. Can I deploy 4 osds with wal and db in one ssd partition, such as: # ceph-disk prepare --bluestore --block.db /dev/sdc --block.wal /dev/sdc
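
For contrast with the single shared partition in the command above, a sketch of the usual one-partition-per-OSD layout (device names and sizes are examples only; the wal co-locates with the db when not given its own device):

    # carve the SSD into one block.db partition per OSD
    sgdisk -n 1:0:+30G -n 2:0:+30G -n 3:0:+30G -n 4:0:+30G /dev/sdc
    # prepare each OSD against its own partition, e.g. for the first HDD:
    ceph-disk prepare --bluestore --block.db /dev/sdc1 /dev/sdd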

Re: [ceph-users] Why rbd rn did not clean used pool?

2018-08-26 Thread Fyodor Ustinov
Hi! In a replicated rbd pool the behavior with a tier is the same. - Original Message - From: "Vasiliy Tolstov" To: "Konstantin Shalygin" Cc: ceph-users@lists.ceph.com, "Fyodor Ustinov" Sent: Sunday, 26 August, 2018 09:39:15 Subject: Re: [ceph-users] Why rbd rn did not clean used pool? Why

Re: [ceph-users] Why rbd rn did not clean used pool?

2018-08-26 Thread Konstantin Shalygin
On 08/26/2018 01:39 PM, Vasiliy Tolstov wrote: Why avoid cache tier? Is this only for erasure, or for replicated too? Because the cache tier is a very uncommon feature. Cephers mostly used it to get rbd writes onto EC pools before Luminous [1]. Why would this be needed for replicated? With cache
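
Since Luminous the cache-tier workaround is no longer needed for this case: an RBD image can keep its metadata in a replicated pool and place its data in an EC pool directly. A sketch, with placeholder pool/image names:

    ceph osd pool set my_ec_pool allow_ec_overwrites true
    rbd create rbd/my_image --size 1T --data-pool my_ec_pool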

Re: [ceph-users] Why rbd rn did not clean used pool?

2018-08-26 Thread Vasiliy Tolstov
Why avoid cache tier? Is this only for erasure or for replicated too? Sun, 26 Aug 2018, 7:42 Konstantin Shalygin : > > Configuration: > > rbd - erasure pool > > rbdtier - tier pool for rbd > > > > ceph osd tier add-cache rbd rbdtier 549755813888 > > ceph osd tier cache-mode rbdtier writeback >
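
If the underlying question is why space is not freed after deleting images through a cache tier, the cache usually has to be flushed and evicted explicitly before objects disappear from the pools; a sketch using the pool name from the quoted configuration (not necessarily the answer given later in the thread):

    # flush dirty objects and evict everything from the cache tier
    rados -p rbdtier cache-flush-evict-all
    # then watch the object counts drain
    ceph df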