rbd_cache, limiting read on high iops around 40k
Just an update: there seems to be no proper way to pass the iothread
parameter from openstack-nova (at least not in the Juno release). So a
default single iothread per VM is all we have. So in conclusion, a
nova instance's max iops on ceph rbd ...
To: Somnath Roy somnath@sandisk.com, Irek Fasikhov malm...@gmail.com
Cc: ceph-devel ceph-devel@vger.kernel.org, ceph-users ceph-us...@lists.ceph.com
Sent: Monday, 22 June 2015 07:58:47
Subject: Re: rbd_cache, limiting read on high iops around 40k
Cc: ceph-devel ceph-devel@vger.kernel.org, ceph-users ceph-us...@lists.ceph.com
Sent: Monday, 22 June 2015 11:04:42
Subject: Re: rbd_cache, limiting read on high iops around 40k
> Proxmox 4.0 will allow to enable/disable 1 iothread by disk.

Alexandre, useful option! In Proxmox 3.4, will it be possible to add ...
Cc: ceph-devel ceph-devel@vger.kernel.org, ceph-users ceph-us...@lists.ceph.com
Sent: Monday, 22 June 2015 09:22:13
Subject: Re: rbd_cache, limiting read on high iops around 40k
It is already possible to do this in Proxmox 3.4 (with the latest qemu-kvm
2.2.x updates), but it is necessary to set iothread: 1 in the VM conf file. For single ...
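As a rough sketch of the conf-file change described above (the VMID, storage name, and volume name here are hypothetical, not from the thread):

```
# /etc/pve/qemu-server/101.conf (fragment, Proxmox 3.4 + qemu-kvm 2.2.x)
# per the message above: add an iothread:1 line to the VM conf
iothread: 1
virtio0: ceph-rbd:vm-101-disk-1
```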
Sent: ...:58:21
Subject: Re: rbd_cache, limiting read on high iops around 40k
Thanks, I posted the question to the openstack list. Hopefully I will get some
expert opinion.
On Fri, Jun 12, 2015 at 11:33 AM, Alexandre DERUMIER
aderum...@odiso.com wrote:
Hi,
here is a libvirt xml sample from the libvirt src
(you need ...
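The referenced libvirt sample is cut off above; here is a minimal sketch of a domain fragment wiring one iothread per virtio-blk disk (the pool/image names are illustrative, and host/auth elements for the rbd source are omitted):

```xml
<domain type='kvm'>
  <!-- allocate one iothread per disk for the guest -->
  <iothreads>2</iothreads>
  <devices>
    <!-- each virtio-blk disk pinned to its own iothread -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' iothread='1'/>
      <source protocol='rbd' name='rbd/vm-disk-1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' iothread='2'/>
      <source protocol='rbd' name='rbd/vm-disk-2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>
```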
To: aderumier aderum...@odiso.com
Cc: Somnath Roy somnath@sandisk.com, Irek Fasikhov malm...@gmail.com, ceph-devel ceph-devel@vger.kernel.org, ceph-users ceph-us...@lists.ceph.com
Sent: Friday, 12 June 2015 07:52:41
Subject: Re: rbd_cache, limiting read on high iops around 40k
Hi Alexandre,
I agree with your rationale of one iothread per disk; CPU consumed in
iowait is pretty high in each VM. But I am not finding a way to set
the same on a nova instance. I am using OpenStack Juno with QEMU+KVM.
As per libvirt ...
To: aderumier aderum...@odiso.com, Irek Fasikhov malm...@gmail.com
Cc: ceph-devel ceph-devel@vger.kernel.org, pushpesh sharma pushpesh@gmail.com, ceph-users ceph-us...@lists.ceph.com
Sent: Wednesday, 10 June 2015 09:06:32
Subject: RE: rbd_cache, limiting read on high iops around 40k

Hi Alexandre,
Thanks ...
Sent: Tuesday, June 09, 2015 10:42 PM
To: Irek Fasikhov
Cc: ceph-devel; pushpesh sharma; ceph-users
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k
> Very good work!
> Do you have an rpm file?
> Thanks.

No, sorry, I compiled it manually (and I'm using Debian Jessie as the client ...
Cc: ceph-users ceph-us...@lists.ceph.com
Sent: Tuesday, 9 June 2015 09:28:21
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k
Hi,

> We tried adding more RBDs to a single VM, but no luck.

If you want to scale with more disks in a single qemu VM, you need to use
the iothread feature from qemu and assign 1 iothread per disk (works with
virtio-blk ...
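A sketch of the qemu command-line form of this advice (the drive IDs and rbd image names are illustrative, and auth options for rbd are omitted):

```sh
# One iothread object per disk, each virtio-blk device bound to its
# own iothread via the device's iothread property (QEMU 2.1+)
qemu-system-x86_64 \
  -object iothread,id=iothread1 \
  -object iothread,id=iothread2 \
  -drive file=rbd:rbd/vm-disk-1,if=none,id=drive1,format=raw \
  -device virtio-blk-pci,drive=drive1,iothread=iothread1 \
  -drive file=rbd:rbd/vm-disk-2,if=none,id=drive2,format=raw \
  -device virtio-blk-pci,drive=drive2,iothread=iothread2 \
  ...
```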
From: Mark Nelson mnel...@redhat.com
To: aderumier aderum...@odiso.com, pushpesh sharma pushpesh@gmail.com
Cc: ceph-devel ceph-devel@vger.kernel.org, ceph-users ceph-us...@lists.ceph.com
Sent: Tuesday, 9 June 2015 13:36:31
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k
Hi All
Cc: ceph-devel ceph-devel@vger.kernel.org, pushpesh sharma pushpesh@gmail.com, ceph-users ceph-us...@lists.ceph.com
Sent: Tuesday, 9 June 2015 18:00:29
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k
I also saw a similar performance ...
To: Irek Fasikhov malm...@gmail.com, ceph-devel ceph-devel@vger.kernel.org, ceph-users ceph-us...@lists.ceph.com
Sent: Tuesday, 9 June 2015 15:39:50
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k
In the past we've hit some performance issues with RBD cache that we've
fixed, but we've never really tried pushing a single VM beyond 40+K read
IOPS in testing (or at least I never have). I suspect there's a couple
of possibilities as to why it might be slower, but perhaps joshd can
chime in ...
Cc: ceph-devel ceph-devel@vger.kernel.org, pushpesh sharma pushpesh@gmail.com, ceph-users ceph-us...@lists.ceph.com
Sent: Wednesday, 10 June 2015 07:21:42
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k
Hi, Alexandre.
Very good work!
Do you have an rpm file?
Thanks.

2015-06-10 7:10 GMT+03:00 ...
> In the past we've hit some performance issues with RBD cache that we've
> fixed, but we've never really tried pushing a single VM beyond 40+K read
> IOPS in testing (or at least I never have). I suspect there's a couple
> of possibilities as to why it might be slower, but perhaps joshd can
> chime in ...
To: Robert LeBlanc rob...@leblancnet.us
Cc: Mark Nelson mnel...@redhat.com, ceph-devel ceph-devel@vger.kernel.org, pushpesh sharma pushpesh@gmail.com, ceph-users ceph-us...@lists.ceph.com
Sent: Tuesday, 9 June 2015 18:47:27
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k
Hi,
I'm doing a benchmark (ceph master branch) with randread 4k, qdepth=32,
and rbd_cache=true seems to limit the iops to around 40k.

no cache:
1 client - rbd_cache=false - 1 osd : 38300 iops
1 client - rbd_cache=false - 2 osd : 69073 iops
1 client - rbd_cache=false - 3 osd : 78292 iops

cache:
...
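The exact benchmark invocation isn't shown in the thread; a sketch of a fio job file matching the stated parameters (randread 4k, qdepth=32; the device path and runtime are assumptions, not from the thread):

```ini
# randread-4k.fio -- matches the benchmark parameters quoted above
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
runtime=60
time_based=1

[rbd-test]
# hypothetical guest block device backed by the RBD image
filename=/dev/vdb
```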