Re: [ceph-users] qemu-img convert vs rbd import performance

2017-12-22 Thread Konstantin Shalygin
It's already in qemu 2.9: http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d "This patch introduces 2 new cmdline parameters. The -m parameter to specify the number of coroutines running in parallel (defaults to 8). And the -W parameter to allow qemu-img to
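For illustration, a minimal sketch of a convert using those two flags with an RBD target (the source file, pool and image names below are placeholders, not from the thread):

    qemu-img convert -p -m 16 -W -O raw source.qcow2 rbd:rbd/destination-image

-m raises the number of parallel coroutines and -W permits out-of-order writes, which is what lets the parallelism actually help when writing into RBD.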

Re: [ceph-users] Removing an OSD host server

2017-12-22 Thread David Turner
The hosts got put there because OSDs started for the first time on a server with that name. If you name the new servers identically to the failed ones, the new OSDs will just place themselves under the host in the crush map and everything will be fine. There shouldn't be any problems with that
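The placement behaviour described here comes from OSDs updating their CRUSH location at startup; a minimal sketch of the relevant setting (this is the default, shown only for reference):

    [osd]
    osd crush update on start = true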

[ceph-users] Removing an OSD host server

2017-12-22 Thread Brent Kennedy
Been looking around the web and I can't find what seems to be a "clean way" to remove an OSD host from the "ceph osd tree" command output. I am therefore hesitant to add a server with the same name, but I still see the removed/failed nodes in the list. Anyone know how to do that? I found an
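For reference, a hedged sketch of removing an empty host bucket from the CRUSH map (assuming all of that host's OSDs have already been removed; the hostname is a placeholder):

    ceph osd crush rm failed-host-01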

[ceph-users] How to evict a client in rbd

2017-12-22 Thread Karun Josy
Hello, I am unable to delete this abandoned image. Rbd info shows a watcher IP. The image is not mapped and has no snapshots. rbd status cvm/image --id clientuser Watchers: watcher=10.255.0.17:0/3495340192 client.390908 cookie=18446462598732841114 How can I evict or blacklist a watcher
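A hedged sketch of blacklisting the watcher shown above so the image can then be removed (the address is the one from the rbd status output; the expiry in seconds is only an illustrative value):

    ceph osd blacklist add 10.255.0.17:0/3495340192 3600
    rbd rm cvm/image --id clientuser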

Re: [ceph-users] Proper way of removing osds

2017-12-22 Thread Karun Josy
Thank you! Karun Josy On Thu, Dec 21, 2017 at 3:51 PM, Konstantin Shalygin wrote: > Is this the correct way to remove OSDs, or am I doing something wrong? >> > The generic way for maintenance (e.g. disk replacement) is to rebalance by changing the osd > weight: > > > ceph osd crush reweight
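A hedged sketch of that gradual-removal flow (osd.12 is a placeholder id; wait for the cluster to return to HEALTH_OK between steps; ceph osd purge exists from Luminous on, older releases need crush remove / auth del / osd rm instead):

    ceph osd crush reweight osd.12 0
    # ...wait for rebalancing to finish...
    ceph osd out osd.12
    systemctl stop ceph-osd@12
    ceph osd purge osd.12 --yes-i-really-mean-it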

Re: [ceph-users] CEPH luminous - Centos kernel 4.14 qfull_time not supported

2017-12-22 Thread Mike Christie
On 12/20/2017 03:21 PM, Steven Vacaroaia wrote: > Hi, > > I apologize for creating a new thread (I already mentioned my issue in > another one) > but I am hoping someone will be able to > provide clarification / instructions > > It looks like the patch for including qfull_time is missing from

[ceph-users] Luminous RGW Metadata Search

2017-12-22 Thread Youzhong Yang
I followed the exact steps of the following page: http://ceph.com/rgw/new-luminous-rgw-metadata-search/ "us-east-1" zone is serviced by host "ceph-rgw1" on port 8000, no issue, the service runs successfully. "us-east-es" zone is serviced by host "ceph-rgw2" on port 8002, the service was unable
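For context, a hedged sketch of how the elasticsearch tier for such a zone is typically configured (the elasticsearch endpoint and shard counts below are illustrative, not taken from the thread):

    radosgw-admin zone modify --rgw-zone=us-east-es \
        --tier-type=elasticsearch \
        --tier-config=endpoint=http://elastic-host:9200,num_shards=10,num_replicas=1
    radosgw-admin period update --commit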

Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Wido den Hollander
On 12/22/2017 02:40 PM, Dan van der Ster wrote: Hi Wido, We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a couple of years. The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox" enclosures. Yes, I see. I was looking for a solution without a JBOD and

Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Wido den Hollander
On 12/22/2017 03:27 PM, Luis Periquito wrote: Hi Wido, what are you trying to optimise? Space? Power? Are you tied to OCP? A lot of things. I'm not tied to OCP, but OCP has a lot of advantages over regular 19" servers and thus I'm investigating Ceph+OCP - Less power loss due to only one

Re: [ceph-users] MDS behind on trimming

2017-12-22 Thread Stefan Kooman
Quoting Stefan Kooman (ste...@bit.nl): > Quoting Dan van der Ster (d...@vanderster.com): > > Hi, > > > > We've used double the defaults for around 6 months now and haven't had any > > behind on trimming errors in that time. > > > >mds log max segments = 60 > >mds log max expiring = 40 >
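The two values quoted above would go in ceph.conf on the MDS hosts (followed by a restart or injectargs); a minimal sketch:

    [mds]
    mds log max segments = 60
    mds log max expiring = 40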

Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Luis Periquito
Hi Wido, what are you trying to optimise? Space? Power? Are you tied to OCP? I remember Ciara had some interesting designs like this: http://www.ciaratech.com/product.php?id_prod=539=en_cat1=1_cat2=67 though I don't believe they are OCP. I also had a look and Supermicro has a few that may fill

Re: [ceph-users] How to use vfs_ceph

2017-12-22 Thread David Disseldorp
On Fri, 22 Dec 2017 12:10:18 +0100, Felix Stolte wrote: > I am using Samba 4.6.7 (shipped with Ubuntu 17.10). I've got it working > now by copying the ceph.client.admin.keyring to /etc/ceph (I'm very > unhappy with that). The ceph:user_id smb.conf functionality was first shipped with Samba
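A hedged sketch of an smb.conf share using vfs_ceph with a dedicated cephx user instead of client.admin (share name, path and user id are placeholders, and it assumes a Samba build that already has ceph:user_id support):

    [cephfs-share]
        path = /shares
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        kernel share modes = no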

Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Dan van der Ster
Hi Wido, We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a couple of years. The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox" enclosures. Other than that, I have nothing particularly interesting to say about these. Our data centre procurement team have

Re: [ceph-users] Cephfs limits

2017-12-22 Thread Yan, Zheng
On Fri, Dec 22, 2017 at 3:23 PM, nigel davies wrote: > Right, ok, I'll take a look. Can you do that after the pool /cephfs has been set > up? > yes, see http://docs.ceph.com/docs/jewel/rados/operations/pools/ > > On 21 Dec 2017 12:25 pm, "Yan, Zheng" wrote:
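From the pools documentation linked above, a hedged sketch of capping an existing CephFS data pool after creation (pool name and limits are placeholders):

    ceph osd pool set-quota cephfs_data max_bytes 107374182400    # 100 GiB
    ceph osd pool set-quota cephfs_data max_objects 1000000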

Re: [ceph-users] [luminous 12.2.2] Cluster write performance degradation problem(possibly tcmalloc related)

2017-12-22 Thread Webert de Souza Lima
On Thu, Dec 21, 2017 at 12:52 PM, shadow_lin wrote: > > After 18:00 the write throughput suddenly dropped and the osd latency > increased. TCMalloc started reclaiming the page heap freelist much more > frequently. All of this happened very fast and every osd had the identical >
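The usual knob in this area is TCMalloc's thread-cache size, set in the environment file the OSD units read; a hedged sketch (the path and 128 MiB value are the commonly packaged defaults on RPM-based systems, adjust for your distribution):

    # /etc/sysconfig/ceph  (Debian/Ubuntu: /etc/default/ceph)
    TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728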

Re: [ceph-users] MDS locations

2017-12-22 Thread Webert de Souza Lima
It depends on how you use it. For me, it runs fine on the OSD hosts, but the MDS server consumes loads of RAM, so be aware of that. If the system load average goes too high due to OSD disk utilization, the MDS server might run into trouble too, as delayed responses from the host could cause the MDS

Re: [ceph-users] cephfs mds millions of caps

2017-12-22 Thread Webert de Souza Lima
On Fri, Dec 22, 2017 at 3:20 AM, Yan, Zheng wrote: > idle client shouldn't hold so many caps. > I'll try to make it reproducible for you to test. yes. For now, it's better to run "echo 3 >/proc/sys/vm/drop_caches" > after the cronjob finishes Thanks. I'll adopt that for now.
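A minimal sketch of that workaround as a cron entry on the client (the schedule is a placeholder; drop_caches only discards clean caches):

    # /etc/cron.d/drop-caches -- run shortly after the nightly job finishes
    30 2 * * * root sync && echo 3 > /proc/sys/vm/drop_caches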

Re: [ceph-users] How to use vfs_ceph

2017-12-22 Thread Felix Stolte
Hi David, I am using Samba 4.6.7 (shipped with Ubuntu 17.10). I've got it working now by copying the ceph.client.admin.keyring to /etc/ceph (I'm very unhappy with that). Which Samba version and Linux distribution are you using? Are you using quotas on subdirectories, and are they applied when you
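One way to avoid shipping the admin keyring, sketched here with hypothetical names and caps (adjust the pool name and permissions to the actual CephFS pools in use):

    ceph auth get-or-create client.samba \
        mon 'allow r' \
        mds 'allow rw' \
        osd 'allow rw pool=cephfs_data' \
        -o /etc/ceph/ceph.client.samba.keyring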

[ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Wido den Hollander
Hi, I'm looking at OCP [0] servers for Ceph and I'm not yet able to find what I'm looking for. First of all, the geek in me loves OCP and the design :-) Now I'm trying to match it with Ceph. Looking at Wiwynn [1] they offer a few OCP servers: - 3 nodes in 2U with a single 3.5" disk [2] -

Re: [ceph-users] Permissions for mon status command

2017-12-22 Thread Andreas Calminder
Thanks! I completely missed that, adding name='client.something' did the trick. /andreas On 22 December 2017 at 02:22, David Turner wrote: > You aren't specifying your cluster user, only the keyring. So the > connection command is still trying to use the default
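A hedged sketch of the pattern David describes, shown with CLI placeholders: create a restricted user, then name it explicitly when connecting instead of relying on the default client.admin:

    ceph auth get-or-create client.something mon 'allow r' \
        -o /etc/ceph/ceph.client.something.keyring
    ceph --name client.something \
        --keyring /etc/ceph/ceph.client.something.keyring status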