Re: [ceph-users] Applications slow in VMs running RBD disks

2019-08-21 Thread Gesiel Galvão Bernardes
Hi Eliza, On Wed, 21 Aug 2019 at 09:30, Eliza wrote: > Hi > > on 2019/8/21 20:25, Gesiel Galvão Bernardes wrote: > > I'm using Qemu/KVM (OpenNebula) with Ceph/RBD to run VMs, and I'm > > having problems with slowness in applications that often are not > >

[ceph-users] Applications slow in VMs running RBD disks

2019-08-21 Thread Gesiel Galvão Bernardes
Hi, I'm using Qemu/KVM (OpenNebula) with Ceph/RBD to run VMs, and I'm having problems with slowness in applications that often are not consuming much CPU or RAM. This problem mostly affects Windows. Apparently the problem is that the application normally loads many small files (e.g. DLLs) and these
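A common first mitigation for many-small-file workloads (such as Windows loading DLLs) on RBD is enabling the librbd client-side cache on the hypervisor. This is only a sketch of client-side settings, not what the original poster ran; the size value is an illustrative assumption:

```ini
# ceph.conf on the hypervisor (client side) -- illustrative values
[client]
rbd cache = true
rbd cache writethrough until flush = true   ; stay in writethrough until the guest issues a flush
rbd cache size = 67108864                   ; 64 MiB per image (assumption)
```

Whether this helps depends on the guest's I/O pattern; for librbd-backed disks the cache is per-image and only takes effect when QEMU's disk cache mode allows it.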

[ceph-users] Time of response of "rbd ls" command

2019-08-13 Thread Gesiel Galvão Bernardes
Hi, I recently noticed that in two of my pools the command "rbd ls" takes several minutes to return its output. These pools have between 100 and 120 images each. Where should I look to find the cause of this slowness? The cluster is apparently fine, without any warning. Thank you very much in
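For a slow "rbd ls", one thing worth checking (a diagnostic sketch; "mypool" is a placeholder pool name) is whether the plain listing or only the long form is slow, since "rbd ls -l" has to open each of the ~100-120 images while the plain form only reads the pool's directory object:

```shell
time rbd ls mypool       # plain listing: reads only the pool's directory object
time rbd ls -l mypool    # long form: opens every image to report size/format

ceph osd perf            # per-OSD commit/apply latency, to spot a slow OSD
```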

Re: [ceph-users] Online disk resize with Qemu/KVM and Ceph

2019-02-15 Thread Gesiel Galvão Bernardes
-Original Message- > From: Gesiel Galvão Bernardes [mailto:gesiel.bernar...@gmail.com] > Sent: 15 February 2019 13:16 > To: Marc Roos > Cc: ceph-users > Subject: Re: [ceph-users] Online disk resize with Qemu/KVM and Ceph > > Hi Marc, > > I tried this and the problem conti

Re: [ceph-users] Online disk resize with Qemu/KVM and Ceph

2019-02-15 Thread Gesiel Galvão Bernardes
; /sys/class/scsi_device/2\:0\:3\:0/device/rescan (sdd) > > > > I have this too, and have to do this too: > > virsh qemu-monitor-command vps-test2 --hmp "info block" > virsh qemu-monitor-command vps-test2 --hmp "block_resize > drive-scsi0-0-0-0 12G" > >

[ceph-users] Online disk resize with Qemu/KVM and Ceph

2019-02-15 Thread Gesiel Galvão Bernardes
Hi, I'm setting up an environment for VMs with Qemu/KVM and Ceph using RBD, and I have the following problem: the guest VM does not recognize a disk resize (increase). The scenario is: Host: CentOS 7.6, Libvirt 4.5, Ceph 13.2.4. I follow these steps to increase the disk (e.g. from 10 GB to 20 GB): # rbd
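The snippet above cuts off at the first command. A hedged sketch of a typical online-resize sequence, assembled from the commands quoted elsewhere in this thread (pool, image, and domain names are placeholders, not the poster's actual values):

```shell
# 1. Grow the RBD image (--size is in MiB, so 20480 = 20 GiB)
rbd resize --size 20480 rbd/vm-disk-1

# 2. Notify the running QEMU guest of the new size
virsh qemu-monitor-command vm-test --hmp "info block"
virsh qemu-monitor-command vm-test --hmp "block_resize drive-scsi0-0-0-0 20G"

# 3. In a Linux guest, rescan the SCSI device if it still reports the old size
echo 1 > /sys/class/scsi_device/2:0:0:0/device/rescan
```

Without step 2 the guest keeps seeing the old geometry, which matches the symptom described in the opening message.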

[ceph-users] Use SSDs for metadata or for a pool cache?

2018-11-17 Thread Gesiel Galvão Bernardes
Hello, I am building a new cluster with 4 hosts, which have the following configuration: 128 GB RAM, 12 SATA HDDs of 8 TB 7.2k rpm, 2 SSDs of 240 GB, 2x10Gb network. I will use the cluster to store RBD images of VMs; I thought of using 2x replication, if it does not get too slow. My question is: Using
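The two options in the subject line can be sketched as follows (a sketch only; device paths and pool names are assumptions, and which option wins depends heavily on the workload):

```shell
# Option A: use the SSDs as BlueStore WAL/DB devices for the HDD OSDs,
# so object metadata and the write-ahead log land on flash
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdb1

# Option B: a small SSD-backed pool as a writeback cache tier
# in front of the HDD-backed RBD pool
ceph osd tier add rbd-hdd rbd-ssd-cache
ceph osd tier cache-mode rbd-ssd-cache writeback
ceph osd tier set-overlay rbd-hdd rbd-ssd-cache
```

Note that 2x 240 GB of SSD per host is small for a cache tier fronting 96 TB of HDD, which is one reason the WAL/DB placement is often the safer default.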

Re: [ceph-users] [Ceph-community] Pool broke after increase pg_num

2018-11-09 Thread Gesiel Galvão Bernardes
Do "ceph health detail" and then pick a stuck PG; what does "ceph pg > PG query" output? > > Has your ceph -s output changed at all since the last paste? > > On Fri, Nov 9, 2018 at 12:08 AM Gesiel Galvão Bernardes < > gesiel.bernar...@gmail.com> wrote
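The diagnostic steps suggested above, written out as a sketch ("2.1f" is a placeholder PG ID, to be replaced with an ID taken from the health output):

```shell
ceph health detail            # lists the stuck PGs with their IDs
ceph pg 2.1f query            # detailed peering/recovery state of one stuck PG
ceph pg dump_stuck unclean    # alternative: dump all stuck PGs at once
```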

[ceph-users] [Ceph-community] Pool broke after increase pg_num

2018-11-08 Thread Gesiel Galvão Bernardes
On Thu, 8 Nov 2018 at 10:00, Joao Eduardo Luis wrote: > Hello Gesiel, > > Welcome to Ceph! > > In the future, you may want to address the ceph-users list > (`ceph-users@lists.ceph.com`) for this sort of issue. > > Thank you, I will do. On 11/08/2018 11:18 AM,