Hello,
in the meantime I have learned a lot from this group (thanks a lot) and solved
many performance problems I initially faced with Proxmox VMs having their
storage on Ceph RBDs.
Where possible I parallelized access across several disks per VM, used
iothreads, and activated writeback caching.
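For reference, in Proxmox these per-disk options can be set with `qm set`; a minimal sketch (the VM ID 100, the storage name "cephstore", and the volume name are placeholders, not taken from the original mail):

```shell
# Hypothetical example: attach a Ceph RBD-backed disk to VM 100 with an
# iothread and writeback caching (VM ID, storage and volume names assumed).
qm set 100 --scsi0 cephstore:vm-100-disk-0,iothread=1,cache=writeback

# iothread=1 requires the virtio-scsi-single controller:
qm set 100 --scsihw virtio-scsi-single
```

These commands must run on a PVE node, so they are shown here only as a configuration sketch.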
Running a bonnie+
On Tue, Apr 14, 2020 at 03:54:30PM +0200, Rainer Krienke wrote:
> Hello,
>
> in the meantime I have learned a lot from this group (thanks a lot) and solved
> many performance problems I initially faced with Proxmox VMs having their
> storage on Ceph RBDs.
>
> I parallelized access to many disks on a vm wh
On 14.04.20 at 16:42, Alwin Antreich wrote:
>> According to these numbers the relationship between write and read performance
>> should be the other way round: writes should be slower than reads, but
>> on a VM it's exactly the opposite?
> Ceph does reads in parallel, while writes are done to the
On Tue, Apr 14, 2020 at 05:21:44PM +0200, Rainer Krienke wrote:
> On 14.04.20 at 16:42, Alwin Antreich wrote:
> >> According to these numbers the relationship between write and read performance
> >> should be the other way round: writes should be slower than reads, but
> >> on a VM it's exactly the other
Hi there
I have 7 servers with PVE 6, all updated...
All servers are named pve1, pve2, and so on...
pve3, pve4 and pve5 each have a 960 GB SSD.
So we decided to create a second pool that will use only these SSDs.
I have read "Ceph CRUSH & device classes" in order to do that!
So, just to do things right,
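The device-class approach described in that chapter boils down to a replicated CRUSH rule restricted to the ssd class, plus a pool that uses it. A sketch, assuming made-up names (rule "ssd-only", pool "ssd-pool", failure domain "host", PG count 128):

```shell
# Create a CRUSH rule that only selects OSDs of device class "ssd"
# (rule name and failure domain are assumptions, not from the mail).
ceph osd crush rule create-replicated ssd-only default host ssd

# Create a pool bound to that rule (pool name and PG counts are placeholders).
ceph osd pool create ssd-pool 128 128 replicated ssd-only
```

These commands need a running Ceph cluster, so they are only a configuration sketch, not a tested sequence for this setup.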
On 14.04.20 at 18:09, Alwin Antreich wrote:
>>
>> In a VM I also tried to read its own striped LV device: dd
>> if=/dev/vg/testlv of=/dev/null bs=1024k status=progress (after clearing
>> the VM's cache). /dev/vg/testlv is a striped LV (on 4 disks) with xfs on
>> it, on which I tested the speed us
On Tue, Apr 14, 2020 at 02:35:55PM -0300, Gilberto Nunes wrote:
> Hi there
>
> I have 7 servers with PVE 6, all updated...
> All servers are named pve1, pve2, and so on...
> pve3, pve4 and pve5 each have a 960 GB SSD.
> So we decided to create a second pool that will use only these SSDs.
> I have reade
Oh! Sorry Alwin.
I have some urgency to do this.
So this is what I did...
First, I inserted all HDDs, both SAS and SSD, into the OSD tree.
Then I checked whether the system would detect the SSDs as ssd and the SAS
disks as hdd, but there was no difference! It showed all of them as hdd!
So I changed the class with these commands:
ceph
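For reference, reassigning an OSD's device class is typically a two-step operation, since an existing class must be removed before a new one can be set (osd.2 below is a placeholder ID, not from the mail):

```shell
# Remove the auto-detected class, then set it explicitly to "ssd".
ceph osd crush rm-device-class osd.2
ceph osd crush set-device-class ssd osd.2

# Verify the result in the OSD tree.
ceph osd tree
```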