Re: [ovirt-users] vm nic 10g

2016-11-25 Thread Yaniv Kaul
On Nov 25, 2016 5:22 AM, "qinglong.d...@horebdata.cn" <
qinglong.d...@horebdata.cn> wrote:
>>
>> Yes, I have got 9.5 Gbits/s in the vm.
>> I found that when I copy a file using the scp command I get 275 MB/s (limited
by the disks). But in the admin portal I saw 100% network usage (it should be
about 30%), so the engine regarded the virtual nic as a 1G nic.
>> I also found that the vm QoS can only limit up to 1024 Mbps.

Both sound like issues we should improve. I'm not even sure how to handle
the first one, though.
Y.
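
To reproduce the raw-throughput measurement, a quick host-to-VM test with
iperf3 looks roughly like this (a sketch; iperf3 is assumed to be installed
on both ends, and the VM address is a placeholder):

    # On the VM: start an iperf3 server
    iperf3 -s

    # On the host (or another 10G-attached machine): run the client;
    # 192.0.2.10 is a placeholder for the VM's address
    iperf3 -c 192.0.2.10 -t 30 -P 4    # 30 seconds, 4 parallel streams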

>>
>> On Thu, Nov 24, 2016 at 10:24 AM, qinglong.d...@horebdata.cn <
qinglong.d...@horebdata.cn> wrote:
>>>
>>> Hi, all:
>>> I have an ovirt 4.0.5 environment. There is a 10G nic in the
host. I have created a vm which uses the 10G nic. The type of the nic is
virtio, yet I got a 1G virtual nic in the vm. Now I want a 10G virtual nic
in the vm. Is it possible? Can anyone help? Thanks!
>>
>>
>> Disregard this number - it's a virtual NIC, so the number is virtual as
well. Back in the day when virtio was created, it seemed like a good fast
number to go with. They could have gone with any speed.
>> Please test the real performance and let us know what you've got.
>> Y.
>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] IP address doesn't appear in the host network interfaces tab

2016-11-25 Thread Nathanaël Blanchet



On 08/11/2016 at 16:59, Dan Kenigsberg wrote:

On Tue, Nov 08, 2016 at 04:03:27PM +0100, Nathanaël Blanchet wrote:

Hi all,

I am trying to change my migration network from one to another. To do this, I
assigned an IP on the new vlan for each of my hosts by DHCP, but
when changing the migration network at the cluster level, engine complains
that my host doesn't have an IP.

I tried the "Sync all networks" tab to refresh, but nothing happens. However,
the IPs on the vlan do exist on the host.

The only workaround is to restart vdsmd; then the IPs appear and I
can go further.

Can this be considered a bug?

How did you configure that vlan address on each of your hosts? Via the
Setup Host Network dialog?

Quite often, we have a race where networks configured with dhcp do not
report their acquired ip address to Engine. This is to be solved by

 Bug 1240719 - [RFE] - Send an event to engine upon network changes

but until then - by clicking the "Refresh Capabilities" button, moving
the host to maintenance and back, or stopping-waiting-restarting Vdsm.

If "Refresh Capabilities" solves your issue as well, you're seeing the
problem I'm referring to.
Yes, thank you, that's it. At the beginning I clicked "sync networks" rather
than "refresh capabilities", which is why it didn't work.


Regards,
Dan.


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt on a single-host hosted-engine host

2016-11-25 Thread Derek Atkins
Hi,

On Fri, November 25, 2016 1:02 pm, Gianluca Cecchi wrote:
> On 24/Nov/2016 20:34, "Derek Atkins"  wrote:
>>
>> Hi,
>>
>> I've got ovirt running on a single host (with hosted-engine).  I just
>> upgraded the engine from 4.0.4 to 4.0.5.  That was pretty easy; I just
>> put the system into global maintenance mode, then yum update, then
>> engine-setup, then took the host out of maint mode.  Easy peasy.
>>
>> Now, I want to update the host itself.  Since it's a single host system
>> I know I'll need to shut down all the VMs (because there's no place to
>> migrate them).  This means I'll need to shut down the engine VM, too.
>> That would imply that I can't use the "Update" feature from the ui,
>> right?
>>
>> So what IS the process to properly update a single-host host?
>
> It seems you didn't specify the kind of storage you are using...
> Anything provided by the host itself or what?

All storage is NFS storage exported by the host itself. I didn't mention
it because it shouldn't have mattered -- the HE export mounted but the
others didn't. Although it appears this may all have been due to the
network issues between engine and host -- once I added a workaround for
that issue, the mounting issues seemed to resolve themselves.

I'm not sure why I'm having this network issue. The engine VM gets an ARP
redirect and then can't reach the host anymore -- but of course the host
can reach the VM. It's a strange issue that I haven't been able to track
down (but it is most likely not related to ovirt).

Thanks,

-derek

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] used bandwidth when live migrating

2016-11-25 Thread Nathanaël Blanchet



On 25/11/2016 at 15:25, Michal Skrivanek wrote:

On 25 Nov 2016, at 11:08, Nathanaël Blanchet  wrote:


On 25/11/2016 at 10:28, Nathanaël Blanchet wrote:

Hi all,

We use in production a dedicated 10G vlan link for live migration. In the 
cluster options tab I set the migration bandwidth limit to 1 Mbps. 
Everything works as expected and now 25 vms on a host migrate in a few seconds 
(exactly 13), but I'm not able to measure the real consumed bandwidth. I want 
to evaluate this because my goal is to dedicate a vlan for gluster on 
the same 10G nic, and I don't want an overload issue with gluster when vm 
migrations happen.

So my questions are: how does live migration work? Is it a RAM-to-RAM 
transport between two hosts? Is migration bandwidth limited by disk I/O 
anywhere, or by the nic capabilities? Could 10Gbps be fully used for such 
traffic? What would you advise to make gluster and migration work on the same 
nic (QoS?)

Is this what I need ? 
https://www.ovirt.org/develop/release-management/features/network/detailed-host-network-qos/

Even the existing QoS capabilities should be enough.
Separating storage and migration traffic into different logical networks is 
indeed a good idea.
You can limit the migration bandwidth if it's not critical; do you have any 
specific requirements there?
If I understand what is explained, host network QoS is what I 
need. To do that, it is enough to select the predefined QoS when 
editing the logical network, so that the traffic will be limited on 
all the physical nics used by this vlan. Tell me if I'm wrong.
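
On the measurement question, the simplest approach is to watch the
per-interface counters while a batch of migrations runs (a sketch; it assumes
the sysstat package is installed, and the interface name is a placeholder):

    # Sample all interfaces once per second during the migrations;
    # rxkB/s and txkB/s on the migration VLAN show the real bandwidth used
    sar -n DEV 1
    # ...or filter to the migration interface only:
    sar -n DEV 1 | grep --line-buffered 'em1.100'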



Cluster/DC (Data Center) - control the traffic related to a specific logical 
network throughout the entire cluster/DC, including through its infrastructure 
(e.g. L2 switches).

Cluster/DC-wide QoS remains to be handled in the future.

It seems this is not yet present in 4.0


Thank you for help.



--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt on a single-host hosted-engine host

2016-11-25 Thread Gianluca Cecchi
On 24/Nov/2016 20:34, "Derek Atkins"  wrote:
>
> Hi,
>
> I've got ovirt running on a single host (with hosted-engine).  I just
> upgraded the engine from 4.0.4 to 4.0.5.  That was pretty easy; I just
> put the system into global maintenance mode, then yum update, then
> engine-setup, then took the host out of maint mode.  Easy peasy.
>
> Now, I want to update the host itself.  Since it's a single host system
> I know I'll need to shut down all the VMs (because there's no place to
> migrate them).  This means I'll need to shut down the engine VM, too.
> That would imply that I can't use the "Update" feature from the ui,
> right?
>
> So what IS the process to properly update a single-host host?

It seems you didn't specify the kind of storage you are using...
Anything provided by the host itself or what?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt on a single-host hosted-engine host

2016-11-25 Thread Derek Atkins
HELP!!!

I just did this, but now my storage isn't showing up!  According to
"mount", only the HE domain is mounted; none of my other domains are
mounted. Running 'hosted-engine --connect-storage' does not seem to mount
the storage domains.

If I bring the engine up, it still won't access those domains. And indeed,
the host is now showing up as non-responsive to the engine.

on the host, I see:

[root@ovirt-0 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : OVIRT-0{.dom.ain}
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 76ac9280
Host timestamp : 985
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=985 (Fri Nov 25 11:35:39 2016)
host-id=1
score=3400
maintenance=False
state=EngineUp
stopped=False

So the host seems to think the engine is up.

I can't mount the storage from the UI because the engine claims the host
is non-responsive.  Not surprisingly, this appears to be due to a
networking issue that I didn't have before the update.  Indeed, the engine
can't reach the host, even though I can reach both (and the host can reach
the engine).  Did the host update somehow change the iptables
configuration?  It doesn't look like it.  But I'm not sure why the network
isn't working now when it was working before the update.
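
Two quick checks that might narrow this down (a sketch; ovirtmgmt is assumed
to be the management bridge, and the backup path is a placeholder):

    # Compare the live firewall rules against last night's backup
    iptables-save > /tmp/iptables.now
    diff /tmp/iptables.now /path/to/backup/iptables.save

    # Watch ARP and ICMP on the management bridge while pinging from the
    # engine VM, to catch the redirect the engine VM is receiving
    tcpdump -n -i ovirtmgmt arp or icmp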

So, now what do I do?  I suppose I can check my host backup from last
night and see if I can see anything different in the vdsm configs?

There are two problems to solve:

1) Host doesn't mount my storage
2) Engine can't reach host via network.

HELP!

-derek

On Fri, November 25, 2016 9:53 am, Derek Atkins wrote:
> Hi Simone,
>
> On Fri, November 25, 2016 4:34 am, Simone Tiraboschi wrote:
>> On Thu, Nov 24, 2016 at 8:33 PM, Derek Atkins  wrote:
>
> [snip]
>>> So what IS the process to properly update a single-host host?  My guess
>>> is:
>>>
>>> * shutdown all the VMs
>>>
>>> * go into global maint mode (what IS the difference between global and
>>>   local?)
>>> * shutdown the engine/engine VM
>>> * yum update on the host
>>> * restart services (or reboot, I guess)
>>> * bring the system out of maintenance mode
>>>
>>
>> shutdown or hibernate all the other VMs.
>> set global maintenance mode to avoid ovirt-ha-agent trying to restart
>> the
>> engine vm
>> shutdown the engine VM (from the host CLI) with hosted-engine
>> --vm-shutdown
>
> This is in lieu of "shutdown -h now" from the engine VM?  (i.e., will this
> perform a clean shutdown from within the VM?)
>
>> yum update
>> restart services
>
> So I don't need to run a host equivalent of "engine-setup"?  Great.
>
>> bring the system out of global maintenance mode: in a couple of minutes
>> the
>> HA agent should restart the engine VM
>
> Perfect.  Thanks.  I'll go test this shortly.
>
> -derek
>
> --
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt on a single-host hosted-engine host

2016-11-25 Thread Simone Tiraboschi
On Fri, Nov 25, 2016 at 3:53 PM, Derek Atkins  wrote:

> Hi Simone,
>
> On Fri, November 25, 2016 4:34 am, Simone Tiraboschi wrote:
> > On Thu, Nov 24, 2016 at 8:33 PM, Derek Atkins  wrote:
>
> [snip]
> >> So what IS the process to properly update a single-host host?  My guess
> >> is:
> >>
> >> * shutdown all the VMs
> >>
> >> * go into global maint mode (what IS the difference between global and
> >>   local?)
> >> * shutdown the engine/engine VM
> >> * yum update on the host
> >> * restart services (or reboot, I guess)
> >> * bring the system out of maintenance mode
> >>
> >
> > shutdown or hibernate all the other VMs.
> > set global maintenance mode to avoid ovirt-ha-agent trying to restart the
> > engine vm
> > shutdown the engine VM (from the host CLI) with hosted-engine
> > --vm-shutdown
>
> This is in lieu of "shutdown -h now" from the engine VM?  (i.e., will this
> perform a clean shutdown from within the VM?)
>
>
Yes, exactly.
For emergency cases you also have hosted-engine --vm-poweroff to perform an
instant poweroff.


> > yum update
> > restart services
>
> So I don't need to run a host equivalent of "engine-setup"?  Great.
>
> > bring the system out of global maintenance mode: in a couple of minutes
> > the
> > HA agent should restart the engine VM
>
> Perfect.  Thanks.  I'll go test this shortly.
>
> -derek
>
> --
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ansible Modules for ovirt

2016-11-25 Thread Jean Paul Gatt
Hi,

I have been trying to use the following Ansible modules (based on the last 
devel commit in the ansible github repo for the extras modules, 
43bb97bc3763b0335e245606eb2985314902cc91):

* ovirt_vms

* ovirt_disks

I have had varying degrees of success, and I would like to list the following 
issues:

* Through the REST API, I can define multiple nic configurations 
for cloud-init; however, only one can be done with Ansible. Also, when using 
templates, it automatically assumes I will be using the latest version of the 
template; this had to be defined explicitly when using REST.
* Assume I have deployed from a template. This will create a VM with a 
disk that is already attached. Now, let's say I want to confirm, or add more, 
disks to the VM. If I pass a number of disks that have already been attached 
and I reference them by name (in ovirt_disks), I get an error. If I do the same 
operation by ID, it seems to work. However, I cannot get the ID when running 
Ansible; I need to get it from REST. When I query ovirt_vms, disk attachments 
come up as

    "disk_attachments": [],

If that field were populated, I could in turn run ovirt_disks with the ID 
after querying the VM to get the attachments, but as it is, it cannot be done.



I'll give an example

- name: Get Disks
  ovirt_disks:
    auth: "{{ ovirt_auth }}"
    vm_name: "{{ item.0.name }}"
    name: "{{ item.1.name }}"
    interface: "{{ item.1.interface }}"
  register: r_disks
  with_subelements:
    - "{{ virtual_machines }}"
    - drives

If the drives have already been attached this will fail... but

- name: Add Disks to VMs
  ovirt_disks:
    auth: "{{ ovirt_auth }}"
    vm_name: "{{ item.0.name }}"
    id: 8fc2811e-7774-4903-a5b3-489fdedb377b
    size: "100GiB"
    format: "{{ item.1.format }}"
    storage_domain: "{{ item.1.storage_domain.name }}"
    bootable: "{{ item.1.bootable }}"
    interface: "{{ item.1.interface }}"
    state: attached
  with_subelements:
    - "{{ virtual_machines }}"
    - drives

The second example works even if the disk has already been attached.


I'm assuming that since I'm creating the VM from a template, there might be 
multiple disks with the same name, and the wrong ID is being retrieved.
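
Until disk_attachments is populated by the module, the IDs can be pulled from
the REST API directly and fed into the play (a sketch; the engine URL,
credentials, and VM ID are placeholders):

    # List the disk attachments -- and therefore the disk IDs -- of one VM
    curl -s -k -u 'admin@internal:password' \
      'https://engine.example.com/ovirt-engine/api/vms/VM_UUID/diskattachments'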

I'm happy to do more testing as further commits appear.

Regards,

Jean Paul Gatt



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt on a single-host hosted-engine host

2016-11-25 Thread Derek Atkins
Hi Simone,

On Fri, November 25, 2016 4:34 am, Simone Tiraboschi wrote:
> On Thu, Nov 24, 2016 at 8:33 PM, Derek Atkins  wrote:

[snip]
>> So what IS the process to properly update a single-host host?  My guess
>> is:
>>
>> * shutdown all the VMs
>>
>> * go into global maint mode (what IS the difference between global and
>>   local?)
>> * shutdown the engine/engine VM
>> * yum update on the host
>> * restart services (or reboot, I guess)
>> * bring the system out of maintenance mode
>>
>
> shutdown or hibernate all the other VMs.
> set global maintenance mode to avoid ovirt-ha-agent trying to restart the
> engine vm
> shutdown the engine VM (from the host CLI) with hosted-engine
> --vm-shutdown

This is in lieu of "shutdown -h now" from the engine VM?  (i.e., will this
perform a clean shutdown from within the VM?)

> yum update
> restart services

So I don't need to run a host equivalent of "engine-setup"?  Great.

> bring the system out of global maintenance mode: in a couple of minutes
> the
> HA agent should restart the engine VM

Perfect.  Thanks.  I'll go test this shortly.

-derek

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4.0.5 create vm from template on cinder/ceph backed storage

2016-11-25 Thread Nir Soffer
On Thu, Nov 24, 2016 at 11:03 PM, Jordan Conway
 wrote:
> I've got a problem where, since upgrading to 4.0.5, I am still unable to
> create a vm from a template.
> The issue now is that it seems to be stuck in a loop, trying and failing
> to create the vm, which is making my logs explode with the following
>
> https://paste.fedoraproject.org/489161/14799981/
>
> And in the engine UI "Failed to complete VM fromtemplate-1 creation."
> thousands of times.
>
> The issue started I believe with this sql error
>
> https://paste.fedoraproject.org/489183/00010131/
>
> Any insight on how I can stop the persistent error would be appreciated, as
> it is also preventing me from running engine-setup due to running jobs, even
> though vdsClient shows nothing.
>
> vdsClient -s 0 getAllTasksStatuses
> {'status': {'message': 'OK', 'code': 0}, 'allTasksStatus': {}}

Vdsm is not in the loop when using cinder/ceph storage; the engine
creates volumes in ceph via cinder, not via vdsm.

Can you file an engine bug about this?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] used bandwidth when live migrating

2016-11-25 Thread Michal Skrivanek

> On 25 Nov 2016, at 11:08, Nathanaël Blanchet  wrote:
> 
> 
> On 25/11/2016 at 10:28, Nathanaël Blanchet wrote:
>> Hi all,
>> 
>> We use in production a dedicated 10G vlan link for live migration. In the 
>> cluster options tab I set the migration bandwidth limit to 1 Mbps. 
>> Everything works as expected and now 25 vms on a host migrate in a few 
>> seconds (exactly 13), but I'm not able to measure the real consumed 
>> bandwidth. I want to evaluate this because my goal is to dedicate a 
>> vlan for gluster on the same 10G nic, and I don't want an overload issue 
>> with gluster when vm migrations happen.
>> 
>> So my questions are: how does live migration work? Is it a RAM-to-RAM 
>> transport between two hosts? Is migration bandwidth limited by disk I/O 
>> anywhere, or by the nic capabilities? Could 10Gbps be fully used for such 
>> traffic? What would you advise to make gluster and migration work on the same 
>> nic (QoS?)
> 
> Is this what I need ? 
> https://www.ovirt.org/develop/release-management/features/network/detailed-host-network-qos/

Even the existing QoS capabilities should be enough.
Separating storage and migration traffic into different logical networks is 
indeed a good idea.
You can limit the migration bandwidth if it's not critical; do you have any 
specific requirements there?

> 
> Cluster/DC (Data Center) - control the traffic related to a specific logical 
> network throughout the entire cluster/DC, including through its 
> infrastructure (e.g. L2 switches).
> 
> Cluster/DC-wide QoS remains to be handled in the future.
> 
> It seems this is not yet present in 4.0
> 
>> 
>> Thank you for help.
>> 
>> 
> 
> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastructures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Timout with large image uploads in ovirt 4.0.5

2016-11-25 Thread Nir Soffer
On Thu, Nov 24, 2016 at 11:29 PM, Claude Durocher
 wrote:
> When trying to upload large kvm images (more than 50 GB) with the upload
> option, we get timeouts. In the ovirt-imageio-proxy/image-proxy.log we have:
>
> ERROR 2016-11-24 16:18:51,211 session:293:root:(_decode_ovirt_ticket) Failed
> to verify proxy ticket: Ticket life time expired
>
> We can recover and restart the upload but it is annoying to constantly watch
> the process. Is there a timeout parameter we can change in the
> imageio-proxy?

This is not a timeout but expiration of the ticket. The ticket
lifetime is 5 minutes,
and it is renewed by the client. If the client stops renewing it, the
ticket will expire, and the upload session will be aborted.

Can you file a bug about this issue?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-imageio

Please attach proxy and engine logs showing the timeframe of the upload.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] used bandwidth when live migrating

2016-11-25 Thread Nathanaël Blanchet


On 25/11/2016 at 10:28, Nathanaël Blanchet wrote:

Hi all,

We use in production a dedicated 10G vlan link for live migration. In 
the cluster options tab I set the migration bandwidth limit to 1 
Mbps. Everything works as expected and now 25 vms on a host migrate in 
a few seconds (exactly 13), but I'm not able to measure the real 
consumed bandwidth. I want to evaluate this because my goal is 
to dedicate a vlan for gluster on the same 10G nic, and I don't want 
an overload issue with gluster when vm migrations happen.


So my questions are: how does live migration work? Is it a RAM-to-RAM 
transport between two hosts? Is migration bandwidth limited by disk 
I/O anywhere, or by the nic capabilities? Could 10Gbps be fully used 
for such traffic? What would you advise to make gluster and 
migration work on the same nic (QoS?)


Is this what I need ? 
https://www.ovirt.org/develop/release-management/features/network/detailed-host-network-qos/


Cluster/DC (Data Center) - control the traffic related to a specific 
logical network throughout the entire cluster/DC, including through its 
infrastructure (e.g. L2 switches).


Cluster/DC-wide QoS remains to be handled in the future.

It seems this is not yet present in 4.0



Thank you for help.




--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can not use iscsi storage type on ovirtandGlusterfshyper-converged environment

2016-11-25 Thread 胡茂荣

### vdsm or supervdsm log reports:

MainProcess|jsonrpc.Executor/7::ERROR::2016-11-01 
11:07:00,178::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) Error in 
getPathsStatus

MainProcess|jsonrpc.Executor/4::ERROR::2016-11-01 
11:07:20,964::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) Error in 
getPathsStatus

Some code info:
[root@horeba storage]# pwd
/usr/share/vdsm/storage


[root@horeba storage]# grep "getPathsStatus" -R ./
./devicemapper.py:def _getPathsStatus():
./devicemapper.py:def getPathsStatus():
./devicemapper.py:return getProxy().getPathsStatus()
./multipath.py:pathStatuses = devicemapper.getPathsStatus()



def _getPathsStatus():
    cmd = [EXT_DMSETUP, "status"]
    rc, out, err = misc.execCmd(cmd)
    if rc != 0:
        raise Exception("Could not get device statuses")

    res = {}
    for statusLine in out:
        try:
            devName, statusLine = statusLine.split(":", 1)
        except ValueError:
            if len(out) == 1:
                # return an empty dict when status output is: No devices found
                return res
            else:
                raise

        for m in PATH_STATUS_RE.finditer(statusLine):
            devNum, status = m.groups()
            physdevName = findDev(*[int(i) for i in devNum.split(":")])
            res[physdevName] = {"A": "active", "F": "failed"}[status]

    return res


def getPathsStatus():
    return getProxy().getPathsStatus()

=
The flashcache dm device triggers an error when the getPathsStatus() function
runs. Could the code be changed so that it does not check flashcache dm devices?
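
One possible direction (an untested sketch, not a committed vdsm change):
dmsetup can filter by target type, so restricting the status query to
multipath targets would skip flashcache (and LVM linear) devices entirely:

    # Only multipath targets are listed; flashcache status lines
    # never reach the vdsm parser
    dmsetup status --target multipath

In _getPathsStatus() above that would mean changing cmd to
[EXT_DMSETUP, "status", "--target", "multipath"].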

dmsetup output --->
[root@horebc ~]# dmsetup status
flash_sdb: 0 976771072 flashcache stats: 
reads(1388761), writes(15548965)
read hits(1235671), read hit percent(88)
write hits(6539144) write hit percent(42)
dirty write hits(21372) dirty write hit percent(0)
replacement(147711), write replacement(524881)
write invalidates(0), read invalidates(1)
pending enqueues(810), pending inval(810)
metadata dirties(15196370), metadata cleans(15196322)
metadata batch(30087377) metadata ssd writes(305315)
cleanings(15196322) fallow cleanings(48187)
no room(337139) front merge(716153) back merge(14391395)
force_clean_block(0)
disk reads(153093), disk writes(15530535) ssd reads(16431974) ssd 
writes(15672221)
uncached reads(3714), uncached writes(334235), uncached IO requeue(0)
disk read errors(0), disk write errors(0) ssd read errors(0) ssd write 
errors(0)
uncached sequential reads(0), uncached sequential writes(0)
pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
lru hot blocks(12158976), lru warm blocks(12158976)
lru promotions(0), lru demotions(0)
dedbd337--ca66--43ff--b78c--4e9347682a9c-metadata: 0 1048576 linear 
dedbd337--ca66--43ff--b78c--4e9347682a9c-leases: 0 4194304 linear 
23137643634356633: 0 2147483648 multipath 2 0 0 0 1 1 A 0 1 2 8:128 A 0 0 1 
dedbd337--ca66--43ff--b78c--4e9347682a9c-master: 0 2097152 linear 
dedbd337--ca66--43ff--b78c--4e9347682a9c-inbox: 0 262144 linear 
dedbd337--ca66--43ff--b78c--4e9347682a9c-outbox: 0 262144 linear 


[root@horebc ~]# dmsetup info -C
Name  Maj Min Stat Open Targ Event  
UUID
flash_sdb 253   0 L--w11  0 

dedbd337--ca66--43ff--b78c--4e9347682a9c-metadata 253   4 L--w01  0 
LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIGwifAPyYj9GmjFzCmJkIf9vFFFHn9n7V
dedbd337--ca66--43ff--b78c--4e9347682a9c-leases   253   6 L--w01  0 
LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIVSCllWEYYKziY1bSeiTL0dAKAd27JqDT
23137643634356633 253   3 L--w61  0 
mpath-23137643634356633 
dedbd337--ca66--43ff--b78c--4e9347682a9c-master   253   9 L--w01  0 
LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIiEnFZklRhZfFZ4YRdYWFImKWsUGr5pHg
dedbd337--ca66--43ff--b78c--4e9347682a9c-inbox253   8 L--w01  0 
LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGILobNK8KRD4SzDWyg50aG7jGdcNAi3KNw
dedbd337--ca66--43ff--b78c--4e9347682a9c-outbox   253   5 L--w01  0 
LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIAvbT8CLegbVL802bG3QgLtH7I7llmS6R
flash_sdf 253   2 L--w11  0 

dedbd337--ca66--43ff--b78c--4e9347682a9c-ids  253   7 L--w11  0 
LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIIkeaFaWvxa9wbHy7WrfiXNsP4F2J3gg0
flash_sdd 253   1 L--w1

[ovirt-users] used bandwidth when live migrating

2016-11-25 Thread Nathanaël Blanchet

Hi all,

We use in production a dedicated 10G vlan link for live migration. In 
the cluster options tab I set the migration bandwidth limit to 1 
Mbps. Everything works as expected and now 25 vms on a host migrate in a 
few seconds (exactly 13), but I'm not able to measure the real consumed 
bandwidth. I want to evaluate this because my goal is to 
dedicate a vlan for gluster on the same 10G nic, and I don't want an 
overload issue with gluster when vm migrations happen.


So my questions are: how does live migration work? Is it a RAM-to-RAM 
transport between two hosts? Is migration bandwidth limited by disk I/O 
anywhere, or by the nic capabilities? Could 10Gbps be fully used for such 
traffic? What would you advise to make gluster and migration work on the 
same nic (QoS?)


Thank you for help.


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm.conf on one of the node is missing

2016-11-25 Thread Simone Tiraboschi
On Fri, Nov 25, 2016 at 7:52 AM, knarra  wrote:

> On 11/25/2016 03:35 AM, Simone Tiraboschi wrote:
>
>
> On Thu, Nov 24, 2016 at 3:35 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Nov 24, 2016 at 3:23 PM, knarra  wrote:
>>
>>> On 11/24/2016 07:47 PM, Simone Tiraboschi wrote:
>>>
>>>
>>>
>>> On Thu, Nov 24, 2016 at 3:06 PM, knarra  wrote:
>>>
 On 11/24/2016 07:27 PM, Simone Tiraboschi wrote:



 On Thu, Nov 24, 2016 at 2:39 PM, knarra  wrote:

> On 11/24/2016 06:56 PM, Simone Tiraboschi wrote:
>
>
>
> On Thu, Nov 24, 2016 at 2:08 PM, knarra  wrote:
>
>> On 11/24/2016 06:15 PM, Simone Tiraboschi wrote:
>>
>>
>>
>> On Thu, Nov 24, 2016 at 1:26 PM, knarra  wrote:
>>
>>> Hi,
>>>
>>> I have three nodes with glusterfs as the storage domain. For some
>>> reason I see that vm.conf from /var/run/ovirt-hosted-engine-ha is
>>> missing, and due to this on one of my hosts I see that Hosted Engine
>>> HA : Not Active. Once I copy the file from some other node and restart
>>> the ovirt-ha-broker and ovirt-ha-agent services, everything works fine.
>>> But then this happens again. Can someone please help me identify why
>>> this happens? Below is the log I see in ovirt-ha-agent.logs.
>>>
>>>
>>> https://paste.fedoraproject.org/489120/79990345/
>>>
>>>
>> Once the engine has correctly imported the hosted-engine storage domain,
>> a couple of OVF_STORE volumes will appear there.
>> Every modification to the engine VM configuration will be written by
>> the engine into that OVF_STORE, so all the ovirt-ha-agent instances running
>> on the hosted-engine hosts will be able to re-start the engine VM with a
>> coherent configuration.
>>
>> Until the engine imports the hosted-engine storage domain,
>> ovirt-ha-agent will fall back to the initial vm.conf.
>>
>> In your case the OVF_STORE volume is there,
>> but the agent fails extracting the engine VM configuration:
>> MainThread::INFO::2016-11-24 17:55:04,914::ovf_store::112::
>> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> Extracting Engine VM OVF from the OVF_STORE
>> MainThread::INFO::2016-11-24 17:55:04,919::ovf_store::119::
>> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> OVF_STORE volume path: /rhev/data-center/mnt/glusterSD/10.70.36.79:
>> _engine/27f054c3-c245-4039-b42a-c28b37043016/i
>> mages/fdf49778-9a06-49c6-bf7a-a0f12425911c/8c954add-6bcf-47f
>> 8-ac2e-4c85fc3f8699
>> MainThread::ERROR::2016-11-24 17:55:04,928::ovf_store::124::
>> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> Unable to extract HEVM OVF
>>
>> So it tries to roll back to the initial vm.conf, but that one also
>> seems to be missing some values, so the agent fails:
>> MainThread::ERROR::2016-11-24 17:55:04,974::agent::205::ovir
>> t_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error:
>> ''Configuration value not found: 
>> file=/var/run/ovirt-hosted-engine-ha/vm.conf,
>> key=memSize'' - trying to restart agent
>>
>> Both issues seem storage-related; could you please share your
>> gluster logs?
>>
>>
>>>
>>>
>>> Thanks
>>>
>>> kasturi
>>>
>>>
>> Hi Simone,
>>
>> Below [1] is the link for the sosreports on the first two hosts.
>> The third host has some issue. Once it is up, I will provide the sosreport
>> from there as well.
>>
>
> And the host where you see the initial issue was the third one?
>
> It is on the first host.
>
>
>
>
 It seems that host1 is failing to read from the hosted-engine
 storage domain:

 [2016-11-24 12:33:43.678467] W [MSGID: 114031]
 [client-rpc-fops.c:2938:client3_3_lookup_cbk] 0-engine-client-2:
 remote operation failed. Path: / (----0001)
 [Transport endpoint is not connected]
 [2016-11-24 12:33:43.678747] E [rpc-clnt.c:365:saved_frames_unwind]
 (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f077eba1642]
 (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f077e96775e]
 (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f077e96786e]
 (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f077e968fc4]
 (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x7f077e9698a0] )
 0-engine-client-2: forced unwinding frame type(GlusterFS 3.3)
 op(LOOKUP(27)) called at 2016-11-24 12:33:07.495178 (xid=0x82a1c)
 [2016-11-24 12:33:43.678982] E [rpc-clnt.c:365:saved_frames_unwind]
 (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f077eba1642]
 (-->
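
A quick way to check by hand whether the OVF_STORE is readable on the
failing host (a sketch; the volume path is the one from the agent log above,
and it assumes OVF_STORE volumes are plain tar archives, which is how the
agent extracts them):

    # Run as vdsm so permissions match what the agent sees; a hang or
    # I/O error here points at the gluster transport problem above
    sudo -u vdsm tar -tvf \
      /rhev/data-center/mnt/glusterSD/10.70.36.79:_engine/27f054c3-c245-4039-b42a-c28b37043016/images/fdf49778-9a06-49c6-bf7a-a0f12425911c/8c954add-6bcf-47f8-ac2e-4c85fc3f8699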