Re: [ceph-users] qemu-img convert vs rbd import performance

2017-12-22 Thread Konstantin Shalygin
It's already in qemu 2.9 http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d " This patches introduces 2 new cmdline parameters. The -m parameter to specify the number of coroutines running in parallel (defaults to 8). And the -W parameter to allow qemu-img to w
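The commit linked above added the `-m` option (number of parallel coroutines, default 8) and the `-W` option (allow out-of-order writes) to `qemu-img convert` in QEMU 2.9. A hedged sketch of how that might be used against an RBD target — the source path, pool, and image names here are placeholders, not from the thread:

```shell
# Convert a raw image into RBD with 16 parallel coroutines (-m) and
# out-of-order writes (-W); requires qemu-img >= 2.9.
qemu-img convert -p -O raw -m 16 -W \
    /var/lib/images/source.raw \
    rbd:rbd/destination-image
```

With `-W`, writes may complete out of order, so it is only safe when the target is a fresh image being fully overwritten, as in a convert/import.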

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-21 Thread Alexandre DERUMIER
clusive lock, object map, and try qemu-img convert. - Mail original - De: "Mahesh Jambhulkar" À: "aderumier" Cc: "dillaman" , "ceph-users" Envoyé: Vendredi 21 Juillet 2017 14:38:20 Objet: Re: [ceph-users] qemu-img convert vs rbd import performance T

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-21 Thread Mahesh Jambhulkar
"dillaman" > Cc: "Mahesh Jambhulkar" , "ceph-users" <ceph-users@lists.ceph.com> > Envoyé: Vendredi 21 Juillet 2017 10:51:21 > Objet: Re: [ceph-users] qemu-img convert vs rbd import performance > > Hi, > > there is an RFC here: >

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-21 Thread Alexandre DERUMIER
; Envoyé: Vendredi 21 Juillet 2017 10:51:21 Objet: Re: [ceph-users] qemu-img convert vs rbd import performance Hi, there is an RFC here: "[RFC] qemu-img: make convert async" https://patchwork.kernel.org/patch/9552415/ maybe it could help - Mail original - De:

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-21 Thread Alexandre DERUMIER
5:20:32 Objet: Re: [ceph-users] qemu-img convert vs rbd import performance Running a similar 20G import test within a single OSD VM-based cluster, I see the following: $ time qemu-img convert -p -O raw -f raw ~/image rbd:rbd/image (100.00/100%) real 3m20.722s user 0m18.859s sys 0m20.628s $

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-20 Thread Mahesh Jambhulkar
Thanks for the information Jason! We have a few concerns:

1. Following is our ceph configuration. Is there something that needs to be changed here?

   # cat /etc/ceph/ceph.conf
   [global]
   fsid = 0e1bd4fe-4e2d-4e30-8bc5-cb94ecea43f0
   mon_initial_members = cephlarge
   mon_host = 10.0.0.188
   auth_cluster_r

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-20 Thread Jason Dillaman
Running a similar 20G import test within a single OSD VM-based cluster, I see the following:

$ time qemu-img convert -p -O raw -f raw ~/image rbd:rbd/image
(100.00/100%)
real 3m20.722s
user 0m18.859s
sys 0m20.628s

$ time rbd import ~/image
Importing image: 100% complete...done.
real 2m11.9

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-20 Thread Mahesh Jambhulkar
Adding *rbd readahead disable after bytes = 0* did not help. [root@cephlarge mnt]# time qemu-img convert -p -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-13 Thread Jason Dillaman
On Thu, Jul 13, 2017 at 8:57 AM, Irek Fasikhov wrote: > rbd readahead disable after bytes = 0 There isn't any reading from an RBD image in this example -- plus readahead disables itself automatically after the first 50MB of IO (i.e. after the OS should have had enough time to start its own

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-13 Thread Jason Dillaman
I'll refer you to the original thread about this [1] that was awaiting an answer. I would recommend dropping the "-t none" option since that might severely slow down sequential write operations if "qemu-img convert" is performing 512 byte IO operations. You might also want to consider adding the "-
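As a sketch of the advice above (the source path and image name are invented for illustration), dropping `-t none` lets `qemu-img` fall back to its default writeback cache mode, so small sequential writes can be coalesced before they reach librbd:

```shell
# Default cache mode (writeback) instead of forcing "-t none";
# small sequential writes can then be batched before hitting RBD.
qemu-img convert -p -O raw /mnt/source.raw rbd:rbd/destination-image
```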

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-07-13 Thread Irek Fasikhov
Hi. You need to add the following to ceph.conf:

[client]
rbd cache = true
rbd readahead trigger requests = 5
rbd readahead max bytes = 419430400
rbd readahead disable after bytes = 0
rbd_concurrent_management_ops = 50

2017-07-13 15:29 GMT+03:00 Mahesh Jambhulkar : > Seeing some p
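For a one-off test, Ceph client options like these can generally also be passed on the command line instead of being written to ceph.conf. A hedged example — the source path and image name are placeholders:

```shell
# Override a librbd client option for a single command rather than
# editing ceph.conf; here the management-op concurrency is raised to 50.
rbd import --rbd-concurrent-management-ops 50 \
    /mnt/source.raw rbd/destination-image
```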

[ceph-users] qemu-img convert vs rbd import performance

2017-07-13 Thread Mahesh Jambhulkar
Seeing some performance issues on my ceph cluster with *qemu-img convert* writing directly to ceph, compared to the normal rbd import command. *Direct data copy (without qemu-img convert) took 5 hours 43 minutes for 465GB data.* [root@cephlarge vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]# time rb

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-06-28 Thread Jason Dillaman
Perhaps just one cluster has low latency and the other has excessively high latency? You can use "rbd bench-write" to verify. On Wed, Jun 28, 2017 at 8:04 PM, Murali Balcha wrote: > We will give it a try. I have another cluster of similar configuration and > the converts are working fine. We hav
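A sketch of the `rbd bench-write` check suggested above, run on each cluster for comparison (the image name and sizes here are illustrative, not from the thread):

```shell
# Create a scratch image, measure write throughput, then clean up.
rbd create --size 10240 rbd/bench-image
rbd bench-write rbd/bench-image \
    --io-size 4096 --io-threads 16 --io-total 1073741824   # 1 GiB total
rbd rm rbd/bench-image
```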

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-06-28 Thread Jason Dillaman
Given that your time difference is roughly 10x, my best guess is that qemu-img is sending the IO operations synchronously (queue depth = 1), whereas, by default, "rbd import" will send up to 10 write requests in parallel to the backing OSDs. That assumes that you have really high latency
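One hedged way to see how much queue depth matters on a given cluster is to run `rados bench` at the two concurrency levels being compared (the pool name `rbd` is an assumption):

```shell
# Queue depth 1: roughly what a synchronous writer experiences.
rados bench -p rbd 30 write -t 1 --no-cleanup
# Queue depth 10: what "rbd import" uses by default.
rados bench -p rbd 30 write -t 10 --no-cleanup
# Remove the benchmark objects afterwards.
rados -p rbd cleanup
```

If the second run is close to 10x faster, per-op latency (not raw bandwidth) is the bottleneck, matching the diagnosis above.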

Re: [ceph-users] qemu-img convert vs rbd import performance

2017-06-28 Thread Murali Balcha
We will give it a try. I have another cluster of similar configuration and the converts are working fine. We have not changed any queue depth setting on that setup either. If it turns out to be queue depth, how can we set the queue setting for the qemu-img convert operation? Thank you.

[ceph-users] qemu-img convert vs rbd import performance

2017-06-28 Thread Murali Balcha
Need some help resolving performance issues on my ceph cluster. We are running into acute performance issues when using qemu-img convert. However, the rbd import operation works perfectly alright. Please ignore image format for a minute. I am trying to understand why rbd import performs wel