On 01/24/17 10:22 AM, Eneko Lacunza wrote:
> Hi Miguel,
>
> On 24/01/17 at 10:11, Miguel González wrote:
>> Reads, and in both: in Proxmox and therefore in the guest VMs.
>>
>> As you can see in my messages, writes seem to be more or less fine,
>> around 140 MB/s in Proxmox, while reads are around 40 MB/s.
As asked by the hosting company, I shut down and tested in rescue mode. I
also tested with Proxmox running but with all VMs shut down (my original
tests were done with all VMs running).
Read speeds do increase, but I find that FSYNCs are still low and reads
vary too much (from 100 MB/s to 140 MB/s).
>
> Re-reading your first post, I think you have a hw/cabling issue. Does sdb
> give the same hdparm results as sda?
Yes, sdb has similar numbers.
It might be a cabling issue. I have checked with a similar server from
them with the same HD model and ext4 (not software RAID though) and the
numbers are
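To test the cabling hypothesis, one sketch (assuming the two RAID members really are /dev/sda and /dev/sdb, and that smartctl and hdparm are available) is to compare each disk's negotiated SATA link speed and buffered read rate; a marginal cable often shows up as a link renegotiated down to 1.5 Gb/s, or as link resets in the kernel log:

```shell
# Sketch, not a definitive procedure: probe both members of the RAID1.
check_disks() {
  for d in "$@"; do
    echo "== $d =="
    # Negotiated link speed; a bad cable often drops to 1.5 Gb/s
    command -v smartctl >/dev/null 2>&1 && \
      smartctl -i "$d" 2>/dev/null | grep -i 'sata version'
    # Buffered read speed; run it two or three times per disk and compare
    command -v hdparm >/dev/null 2>&1 && hdparm -tT "$d" 2>/dev/null
    true   # keep going even if a probe tool is missing
  done
}
check_disks /dev/sda /dev/sdb
# The kernel log records link resets and renegotiations:
dmesg 2>/dev/null | grep -i 'SATA link up' | tail -n 5 || true
```

If one disk reports a slower link or clearly lower hdparm numbers than the other, that would point at the cable or port rather than the software RAID.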
On 24/01/17 at 10:11, Miguel González wrote:
Reads, and in both: in Proxmox and therefore in the guest VMs.
As you can see in my messages, writes seem to be more or less fine, around
140 MB/s in Proxmox, while reads are around 40 MB/s.
At this point I don't know if there is something related to hardware or
software; I have raised a ticket with
Miguel,
Are you having trouble with writes or reads?
From Proxmox or from a guest VM?
On 23/01/17 at 19:39, Miguel González wrote:
Sorry I meant reads, pveperf:
root@myserver:~# pveperf /vz/
CPU BOGOMIPS: 42669.12
REGEX/SECOND: 935261
HD SIZE: 1809.50 GB (/dev/mapper/pve-data)
BUFFERED READS: 94.56 MB/sec
AVERAGE SEEK TIME: 17.88 ms
FSYNCS/SECOND: 10.15
DNS EXT: 16.93 ms
DNS INT:
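That FSYNCS/SECOND figure of ~10 is the striking number here. A rough stand-in for pveperf's fsync test, using only dd, might look like the sketch below; TESTDIR and the probe file name are placeholders for this example:

```shell
# Sketch: approximate fsyncs/second by writing 4 KiB blocks with O_DSYNC.
# TESTDIR is a placeholder; point it at the pve-data mount (e.g. /vz)
# to exercise the same filesystem pveperf measured.
TESTDIR=${TESTDIR:-.}
rate_line=$(dd if=/dev/zero of="$TESTDIR/fsync_probe.bin" \
               bs=4k count=200 oflag=dsync 2>&1 | tail -n 1)
rm -f "$TESTDIR/fsync_probe.bin"
# GNU dd's summary line ends with the transfer rate; dividing that rate
# by 4 KiB gives roughly the number of synchronous writes per second.
echo "$rate_line"
```

Since each block must hit the platter before the next one starts, this is dominated by seek/rotation time, the same thing pveperf's FSYNCS/SECOND measures.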
On Mon, 23 Jan 2017 19:02:22 +0100
Mehmet wrote:
> For data safety I would not disable barriers!
> But do you really mean write performance? As far as I know, hdparm is
> used for read performance...
>
For what reason should barriers not be disabled? Barriers are disabled on
ext3 by default.
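Before touching barriers either way, it may be worth checking what the mounted filesystem actually uses. A small sketch (ext4 mounts with barriers enabled unless `nobarrier` or `barrier=0` appears in the options):

```shell
# Sketch: report whether write barriers are disabled on the root filesystem.
# Swap "/" for /vz (or wherever pve-data is mounted) to check that volume.
mnt_opts=$(findmnt -no OPTIONS / 2>/dev/null || \
           awk '$2 == "/" {print $4}' /proc/mounts | tail -n 1)
echo "options: $mnt_opts"
case "$mnt_opts" in
  *nobarrier*|*barrier=0*) echo "barriers: DISABLED" ;;
  *)                       echo "barriers: enabled (ext4 default)" ;;
esac
# Disabling them (mount -o remount,barrier=0 ...) trades crash safety for
# fsync speed and is only reasonable with a battery-backed write cache.
```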
On Mon, 23 Jan 2017 18:34:11 +0100
Miguel González wrote:
> Considering hdparm reports the same speed for the underlying disks and
> the soft RAID, it seems some settings need to be tuned. I'm asking here
> just to be safe in terms of Proxmox and not to be too aggressive.
root@myserver:~# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sda4[0] sdb4[1]
1931980736 blocks [2/2] [UU]
bitmap: 11/15 pages [44KB], 65536KB chunk
md2 : active raid1 sda2[0] sdb2[1]
20478912 blocks [2/2] [UU]
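The `[2/2] [UU]` in that output already suggests both mirrors are in sync. To rule out a rebuild directly (a resync does steal read bandwidth), each array can also be asked for its current sync state via sysfs; this sketch assumes the md2/md4 names shown above:

```shell
# Sketch: print each array's sync state; "idle" means no rebuild/resync.
md_status() {
  for md in "$@"; do
    f="/sys/block/$md/md/sync_action"
    [ -r "$f" ] && echo "$md: $(cat "$f")"
  done
  true   # arrays that do not exist are simply skipped
}
md_status md2 md4
```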
Hi,
I would say it is more a softraid problem.
What do you get when you do "cat /proc/mdstat"?
Maybe it is still rebuilding.
On 23.01.2017 at 16:35, Miguel González wrote:
Dear all,
I'm running Proxmox 4.2 on a software RAID of two 2 TB SATA disks at
7200 RPM.
I set it up using the OVH Proxmox installer with ext4. I have realized
that another server with just one SATA disk has writes of about
140 MB/s, while this server with two disks has writes of only around 40 MB/s.
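For reference, that 140 vs 40 MB/s comparison can be reproduced with plain dd; this sketch (file name and size are arbitrary placeholders) measures sequential write and cache-bypassing read throughput on whatever filesystem it is run from:

```shell
# Sketch: sequential throughput with dd; increase SIZE_MB for a steadier number.
SIZE_MB=${SIZE_MB:-256}
SCRATCH=${SCRATCH:-./ddtest.bin}
# Write test: conv=fsync makes dd flush to disk before reporting the rate
w_line=$(dd if=/dev/zero of="$SCRATCH" bs=1M count="$SIZE_MB" conv=fsync 2>&1 | tail -n 1)
echo "write: $w_line"
# Read test: iflag=direct bypasses the page cache so the disk is measured
r_line=$(dd if="$SCRATCH" of=/dev/null bs=1M iflag=direct 2>&1 | tail -n 1)
echo "read:  $r_line"
rm -f "$SCRATCH"
```

Without `iflag=direct` (or dropping caches first), the read test would mostly measure RAM, which can hide exactly the kind of slow-read problem described in this thread.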