[Users] SLOW I/O performance

2013-01-21 Thread Alex Leonhardt
Hi All,

This is my current setup:


HV1 has:
 storage_domain_1
 is SPM master

HV2 has:
 storage_domain_2
 is normal (not master)


HV1 has storage_domain_1 mounted via 127.0.0.1 (the mount uses a network
name, but a hosts entry resolves that name to loopback)

HV2 has storage_domain_2 mounted via 127.0.0.1 (again a network name, with
the hosts entry sending it to loopback)
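
For reference, "storage-nfs" below is just a placeholder for whatever name
the domains were actually mounted with; a quick way to confirm that the name
really does resolve to loopback on a given hypervisor is something like:

    import socket

    name = "storage-nfs"  # placeholder - use the name from the mount / hosts entry
    addr = socket.gethostbyname(name)
    print("%s -> %s" % (name, addr))  # expect 127.0.0.1 if the hosts entry is in effect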


All VMs on HV1 have their storage on storage_domain_1, and all VMs on HV2
have their storage on storage_domain_2.


My problem now is that after I finally created all the disks on HV2 over a
super slow management network (ovirtmgmt, which is only 100 Mbit), I'm now
trying to kickstart all the VMs I created. However, formatting a disk is
taking forever - roughly 20-30 minutes for 12 GB, which is about how long it
took to create the disks over the 100 Mbit link in the first place.
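
(As a rough sanity check: 100 Mbit/s is about 12.5 MB/s, and 12 GB at
12.5 MB/s works out to roughly 16 minutes, so 20-30 minutes is in the same
ballpark as pushing the whole disk over the 100 Mbit link.)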

The weirdness really starts with HV2: all VMs on HV1 with disks on
storage_domain_1 have good I/O throughput, while all VMs on HV2 are awfully
slow reading from and writing to disk.

I've tried some network settings to increase throughput, but those didn't
help / had no effect at all.

Has anyone come across this issue? Is it something to do with the ovirtmgmt
interface only being 100 Mbit?

Alex




-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SLOW I/O performance

2013-01-21 Thread Alex Leonhardt
Additionally, I've just seen this in the oVirt engine web console:

Storage domain storage_domain_2 experienced a high latency of 6.76029706001
seconds from host HV2. This may cause performance and functional issues.
Please consult your Storage Administrator
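
In case anyone wants to reproduce that kind of check by hand on both
hypervisors, here's a rough sketch - the mount path below is only a guess at
where the domain might be mounted, so point it at wherever the domain
actually lives on the host:

    import os, time

    # Placeholder - replace with the storage domain's real mount directory
    MOUNT = "/rhev/data-center/mnt/127.0.0.1:_path_to_storage__domain__2"
    probe = os.path.join(MOUNT, "__latency_probe__")

    start = time.time()
    with open(probe, "w") as f:  # small write + fsync forces a round trip to the storage
        f.write("x" * 4096)
        f.flush()
        os.fsync(f.fileno())
    print("write+fsync latency: %.3f s" % (time.time() - start))
    os.unlink(probe)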


Alex



-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |


Re: [Users] SLOW I/O performance

2013-01-21 Thread Alex Leonhardt
Looks like a version issue - v4 is dead slow, while v3 seems to do an OK job
(at the moment). Is there a way to change the mount options once a storage
domain has been added to the DC?
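
For anyone wanting to confirm which NFS version a mount actually ended up
with, something like this (it just parses /proc/mounts, nothing
oVirt-specific) will print it per mount point:

    # List NFS mounts and the vers=/nfsvers= option they were mounted with
    with open("/proc/mounts") as f:
        for line in f:
            dev, mountpoint, fstype, opts = line.split()[:4]
            if fstype in ("nfs", "nfs4"):
                vers = [o for o in opts.split(",")
                        if o.startswith("vers=") or o.startswith("nfsvers=")]
                print("%-40s %-5s %s" % (mountpoint, fstype, ",".join(vers)))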

Alex



-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |


Re: [Users] SLOW I/O performance

2013-01-21 Thread Alex Leonhardt
I'm still not convinced that that was the actual issue here, but performance
has certainly improved enough to run and build these VMs. Could it have
something to do with where a storage domain runs?

Alex



-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |