[ceph-users] Re: ceph object storage client gui

2020-03-21 Thread Ignazio Cassano
Thanks Konstantin. I presume the ownCloud solution can use Ceph as a storage backend. Ignazio On Sat, 21 Mar 2020, 05:45, Konstantin Shalygin wrote: > On 3/18/20 7:06 PM, Ignazio Cassano wrote: > > Hello All, > > I am looking for a free/open-source object storage client GUI (Linux and > > Windows)

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread XuYun
We had a similar problem that was caused by insufficient RAM: we have 6 OSDs and 32 GB RAM per host, and somehow the swap partition was used by the OS, which led to sporadic performance problems. > On Mar 21, 2020, at 8:45 PM, Jan Pekař - Imatic wrote: > > Each node has 64GB RAM so it should be enough (12 OSD's = 48GB
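A quick way to check for the swap symptom described above (plain Linux diagnostics, a sketch; the swappiness value in the comment is illustrative, not a Ceph recommendation):

```shell
# List active swap devices and how much of each is in use.
cat /proc/swaps

# How eagerly the kernel swaps anonymous memory (default is usually 60).
cat /proc/sys/vm/swappiness

# On OSD hosts a common mitigation is to lower swappiness, e.g.:
#   sysctl -w vm.swappiness=10
```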

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread Jan Pekař - Imatic
Each node has 64 GB RAM, so it should be enough (12 OSDs = 48 GB used). On 21/03/2020 13.14, XuYun wrote: Bluestore requires more than 4 GB of memory per OSD, do you have enough memory? On Mar 21, 2020, at 8:09 PM, Jan Pekař - Imatic wrote: Hello, I have ceph cluster version 14.2.7
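As a back-of-the-envelope check of the numbers above (4 GiB is BlueStore's default osd_memory_target; note it is a target, not a hard cap, so OSDs can exceed it during recovery):

```shell
# Rough per-host memory budget for BlueStore OSDs.
num_osds=12
host_ram_gib=64
target_per_osd_gib=4   # default osd_memory_target, in GiB

used=$(( num_osds * target_per_osd_gib ))
headroom=$(( host_ram_gib - used ))
echo "OSD targets use ${used} GiB, leaving ${headroom} GiB for OS and page cache"
```

The 16 GiB of headroom is what absorbs recovery spikes and page cache; on the 6-OSD/32 GiB hosts mentioned earlier, the headroom shrinks to 8 GiB, which is why swap came into play.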

[ceph-users] Re: Questions on Ceph cluster without OS disks

2020-03-21 Thread Marc Roos
I would say it is not a 'proven technology', otherwise you would see widespread implementation and adoption of this method. However, if you really need the physical disk space, it is a solution. Although I also would have questions about creating an extra redundant environment to service

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread Anthony D'Atri
This is an expensive operation. You want to slow it down, not burden the OSDs. > On Mar 21, 2020, at 5:46 AM, Jan Pekař - Imatic wrote: > > Each node has 64GB RAM so it should be enough (12 OSD's = 48GB used). > >> On 21/03/2020 13.14, XuYun wrote: >> Bluestore requires more than 4G memory
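The knobs usually used to slow recovery down, in the spirit of the advice above, are the backfill/recovery throttles (a sketch; the values are illustrative starting points, not tuned recommendations from this thread, and the commands need a live cluster):

```shell
# Throttle recovery and backfill so client I/O and OSD heartbeats
# are not starved. Values here are illustrative, not tuned advice.
ceph tell 'osd.*' injectargs '--osd-max-backfills 1'
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 1'
ceph tell 'osd.*' injectargs '--osd-recovery-sleep 0.1'
```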

[ceph-users] Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread Jan Pekař - Imatic
Hello, I have a Ceph cluster, version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable), 4 nodes - each node 11 HDDs, 1 SSD, 10Gbit network. The cluster was empty, a fresh install. We filled the cluster with data (small blocks) using RGW. The cluster is now used for testing, so no client was

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread XuYun
Bluestore requires more than 4 GB of memory per OSD, do you have enough memory? > On Mar 21, 2020, at 8:09 PM, Jan Pekař - Imatic wrote: > > Hello, > > I have ceph cluster version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) > nautilus (stable) > > 4 nodes - each node 11 HDD, 1 SSD, 10Gbit network > >

[ceph-users] Questions on Ceph cluster without OS disks

2020-03-21 Thread huxia...@horebdata.cn
Hello Martin, I notice that Croit advocates running a Ceph cluster without OS disks, booting via PXE instead. Do you use an NFS server to serve the root file system for each node, e.g. hosting configuration files, users and passwords, log files, etc.? My question is, will the NFS server be a

[ceph-users] Cephfs mount error 1 = Operation not permitted

2020-03-21 Thread Dungan, Scott A.
I am still very new to Ceph and I have just set up my first small test cluster. I have CephFS enabled (named cephfs) and everything looks good in the dashboard. I added an authorized user key for CephFS with: ceph fs authorize cephfs client.1 / r / rw I then copied the key to a file with: ceph
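For reference, the usual sequence for this setup looks like the following (a sketch; the monitor host and mount point are placeholders, and the common pitfall is putting the full keyring, rather than the bare key, in the secret file):

```shell
# Create a client with read/write caps on the root of the filesystem.
ceph fs authorize cephfs client.1 / rw

# The kernel client's secretfile option expects the bare base64 key,
# not the full [client.1] keyring section.
ceph auth get-key client.1 > /etc/ceph/client.1.secret

# Mount (monitor host and mount point are placeholders); note that
# name= takes the ID without the "client." prefix.
mount -t ceph mon-host:6789:/ /mnt/cephfs \
    -o name=1,secretfile=/etc/ceph/client.1.secret
```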

[ceph-users] Re: Cephfs mount error 1 = Operation not permitted

2020-03-21 Thread Eugen Block
Hi, have you tried to mount with the secret itself instead of a secret file? mount -t ceph ceph-n4:6789:/ /ceph -o name=client.1,secret= If that works, your secret file is not right. If it doesn't, you should check whether the client actually has access to the CephFS pools ('ceph auth list'). Quoting

[ceph-users] Re: Cephfs mount error 1 = Operation not permitted

2020-03-21 Thread Eugen Block
I just remembered there was a thread [1] about that from a couple of weeks ago. It seems you need to add the capabilities to the client. [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/23FDDSYBCDVMYGCUTALACPFAJYITLOHJ/#I6LJR72AJGOCGINVOVEVSCKRIWV5TTZ2 Quoting Eugen
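Following that approach, the fix generally amounts to checking and extending the client's caps (the pool name cephfs_data below is the typical default, an assumption; adjust to the pools actually backing your filesystem):

```shell
# Inspect the client's current capabilities.
ceph auth get client.1

# Grant mon/mds/osd caps explicitly so the kernel client can
# talk to the monitors, the MDS, and the data pool.
ceph auth caps client.1 \
    mon 'allow r' \
    mds 'allow rw' \
    osd 'allow rw pool=cephfs_data'
```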

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread Jan Pekař - Imatic
I understand, so I expected slow requests (like "X slow requests are blocked > 32 sec"), but I was not expecting that heartbeats would be missed or OSDs would be restarted. Maybe this "hard recovery" was not tested enough. I'm also concerned that this OSD restart caused data degradation and recovery -

[ceph-users] Re: Cephfs mount error 1 = Operation not permitted

2020-03-21 Thread Dungan, Scott A.
Eugen, thanks for the tips. I tried appending the key directly in the mount command (secret=) and that produced the same error. I took a look at the thread you suggested and ran the commands that Paul at Croit suggested, even though the Ceph dashboard showed "cephfs" as already set as the

[ceph-users] Maximum limit of lifecycle rule length

2020-03-21 Thread Amit Ghadge
Hi All, we set rgw_lc_max_rules to 1 but we have seen an issue when the XML rule length is > 1 MB: it returns InvalidRange. The format is below (namespace http://s3.amazonaws.com/doc/2006-03-01/, rule body truncated: Enabled, test_1/1 . . . .). Any reason why Ceph does not allow an LC rule length > 1 MB? Thanks, AmitG
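For reference, a well-formed lifecycle rule of the kind being posted can be applied like this (AWS CLI against an RGW endpoint; the endpoint URL, bucket name, and Days value are placeholders, while the ID/Prefix mirror the fragment above):

```shell
# Apply a lifecycle configuration to a bucket on RGW.
# Endpoint URL, bucket name, and Days value are placeholders.
aws --endpoint-url http://rgw.example.com \
    s3api put-bucket-lifecycle-configuration \
    --bucket mybucket \
    --lifecycle-configuration '{
      "Rules": [
        {
          "ID": "test_1",
          "Status": "Enabled",
          "Filter": { "Prefix": "test_1/1" },
          "Expiration": { "Days": 30 }
        }
      ]
    }'
```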