Re: [ovirt-users] How to migrate Self Hosted Engine

2016-11-20 Thread Yedidyah Bar David
On Mon, Nov 21, 2016 at 1:09 AM, Gianluca Cecchi
 wrote:
> On Sun, Nov 20, 2016 at 4:08 PM, Yedidyah Bar David  wrote:
>>
>>
>>
>> There is now an option for this in the gui, did you mark it? See also:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1167262
>> --
>> Didi
>
>
> Ah.. I see.
> In "New Host" window there is a section named "Hosted Engine" and it
> defaults to "None" and I was give that..:
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvME9CVGFRLTB0b0k/view?usp=sharing
>
> I didn't know about it.
> I have verified that I was able to put one host into maintenance (the only
> running VM was automatically migrated), then select "Reinstall", and in the
> proposed window choose "Deploy" in the corresponding Hosted Engine section:
> https://drive.google.com/file/d/0BwoPbcrMv8mvWTJMQXpwbHJYc00/view?usp=sharing
>
> It seems ok now

Mostly, yes.

> [root@ovirt02 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date  : True
> Hostname   : ovirt01.localdomain.local
> Host ID: 1
> Engine status  : {"health": "good", "vm": "up",
> "detail": "up"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 8e1ee066
> Host timestamp : 429820
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=429820 (Sun Nov 20 23:58:31 2016)
> host-id=1
> score=3400
> maintenance=False
> state=EngineUp
> stopped=False
>
>
> --== Host 2 status ==--
>
> Status up-to-date  : True
> Hostname   : 192.168.150.103

This is the address you provided in the UI, right?
I suggest using an FQDN and making it well-resolvable. It will then be
easier to change the IP address later if needed.
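For example (just a quick sanity check, using the hostnames already in this
setup), every host and the engine should return the same, correct answer to:

  getent hosts ovirt01.localdomain.local
  getent hosts ovirt02.localdomain.local

If a host is registered by its IP address instead, changing that address
later is much harder.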

> Host ID: 2
> Engine status  : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 839f79f5
> Host timestamp : 429736
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=429736 (Sun Nov 20 23:58:37 2016)
> host-id=2
> score=3400
> maintenance=False
> state=EngineDown
> stopped=False
> [root@ovirt02 ~]#
>
> Is it?
> I was also able now to migrate the hosted engine vm to the second host and
> to connect without problems to its console.
> I'm going to change also the third host.
>
> Two notes:
> 1) it would be nice to pre-filter the drop-down box when you have to choose
> the host to which to migrate the hosted engine...
> If no hosts are available you would get a clear message and no choice at all,
> and if only a subset of the hosts in the cluster is eligible, you would be
> offered only those hosts rather than every host in the cluster.

Makes sense, please open an RFE to track this. Thanks.

>
> 2) If the gui option becomes the default and preferred way to deploy hosts
> in self-hosted-engine environments, I think it should be made clearer that
> following the default action leaves you without high availability for the
> hosted engine vm.
> Alternatively, change the default action to "Deploy", or show a popup when
> the hosted engine vm has only one host configured for it but there are other
> hosts in the cluster.
> Just my opinion.

Makes sense too, but I wonder if people will then be annoyed by forgetting
to uncheck it even after having enough HA hosts, which does have its cost
(both actual resource use and also reservations, IIUC). Perhaps we should
enable by default and/or remind only until you have enough HA hosts, which
can be a configurable number (and default e.g. to 3).

Best regards,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Could not get access to ACL tech driver 'ebiptables'

2016-11-20 Thread Laine Stump

On 11/20/2016 03:40 AM, Edward Haas wrote:



On Sat, Nov 19, 2016 at 7:35 AM, Kenneth Bingham <w...@qrk.us> wrote:

I suspect this has something to do with macspoofing because I found
that I was able to start a guest by changing the virtual network
interface profile to remove network filtering. I verified the guests
are able to start with filtering enabled on the vnic profile if it
is set to false in engine-config and ovirt-engine service bounced.
I'd prefer to leave macspoofing disabled globally and only enable it for
things like VRRP, CARP; but I'll have to leave it enabled for now.
Could it be that the macs of the imported guests, being from the
foreign mac pool, are being blocked by an ebtables policy? I wonder
if I add their Ethernet range to the pools of the gaining Manager...
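For anyone else hitting this: as far as I know the global toggle referred to
above is the EnableMACAntiSpoofingFilterRules engine-config option, and the
per-vNIC filter pushed to the hosts is the vdsm-no-mac-spoofing nwfilter.
Roughly, assuming the option name is unchanged in this version:

  # on the engine
  engine-config -g EnableMACAntiSpoofingFilterRules
  engine-config -s EnableMACAntiSpoofingFilterRules=false
  systemctl restart ovirt-engine
  # on a host, check that the filter libvirt complains about actually exists
  virsh -r nwfilter-list | grep -i spoofing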

On Fri, Nov 18, 2016 at 9:35 PM Kenneth Bingham <w...@qrk.us> wrote:

I imported a guest from its iscsi storage domain and clicked the
green UP button, but the guest failed to start. This was the
first time vdsm tried to create a temporary storage domain for a
host other than hosted_engine. I'm using the same chap
credential that was used with the same iscsi storage domain with
the old instance of Manager. It looks like it wasn't able to get
permission to do something, but everything was set up with
sudo-as-root. I used CentOS 7.2 with the ovirt-release
repository and hosted-engine script to deploy Manager.

From /var/log/vdsm/vdsm.log on the host where it tried to start:
Thread-23385::ERROR::2016-11-19
02:12:41,907::vm::765::virt.vm::(_startUnderlyingVm)
vmId=`c3125d32-ae2a-4d2f-af4c-13661d90ddf9`::*The vm start
process failed*
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 706, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1996, in _run
self._connection.createXML(domxml, flags),
  File
"/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 123, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line
917, in wrapper
return func(inst, *args, **kwargs)
  File "*/usr/lib64/python2.7/site-packages/libvirt.py*", line
3611, in *createXML*
if ret is None:raise libvirtError('virDomainCreateXML()
failed', conn=self)
libvirtError: internal error: *Could not get access to ACL tech
driver 'ebiptables'*

From /usr/lib64/python2.7/site-packages/libvirt.py which raised
the error:
def createXML(self, xmlDesc, flags=0):
"""Launch a new guest domain, based on an XML
description similar
to the one returned by virDomainGetXMLDesc()
This function may require privileged access to the
hypervisor.
The domain is not persistent, so its definition will
disappear when it
is destroyed, or if the host is restarted (see
virDomainDefineXML() to
define persistent domains).

If the VIR_DOMAIN_START_PAUSED flag is set, the guest domain
will be started, but its CPUs will remain paused. The CPUs
can later be manually started using virDomainResume.

If the VIR_DOMAIN_START_AUTODESTROY flag is set, the guest
domain will be automatically destroyed when the
virConnectPtr
object is finally released. This will also happen if the
client application crashes / loses its connection to the
libvirtd daemon. Any domains marked for auto destroy will
block attempts at migration, save-to-file, or snapshots.

virDomainFree should be used to free the resources after the
domain object is no longer needed. """
ret = libvirtmod.virDomainCreateXML(self._o, xmlDesc, flags)
if ret is None:raise libvirtError('virDomainCreateXML()
failed', conn=self)
__tmp = virDomain(self,_obj=ret)
return __tmp


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



Please include the Kernel, libvirt and ebtables versions you run with.
In addition, the logs from libvirt and vdsm (vdsm.log, supervdsm.log)
would help.

Laine, Thomas, can this be related to
https://bugzilla.redhat.com/show_bug.cgi?id=1396032 ?


I doubt it. The end result is the same (guest fails to start if nwfilter 
rules are in place, and error message mentions ebtables), but the error 
is different and means that the initial failure

Re: [ovirt-users] Unable to add host

2016-11-20 Thread knarra

On 11/20/2016 09:24 PM, Oscar Segarra wrote:

Hi,

When I try to add the second host from the ovirt interface I get the 
following error:


[Inline image 2]

Of course, host vdicnode02 does not appear in the GUI, and the gluster
volumes look perfectly up and in sync:


The UI supports a feature called "importing a host into oVirt": if there is
already an existing cluster, you can import that cluster and manage it from
the UI. In your case I see that you already have a cluster, so what you need
to do is import that cluster into the UI. To do that, go to the 'Clusters'
tab, where you will see a link called 'Import'. Simply click on that link and
you will get a popup for adding the hosts. Provide the root password for your
hosts, and all the hosts that are part of the cluster will be imported into
the UI.


[root@vdicnode02 ~]# gluster volume status
Status of volume: vdic-infr-gv0
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick vdicnode01-priv:/vdic-infr/gv049152 0  Y   3039
Brick vdicnode02-priv:/vdic-infr/gv049152 0  Y   1999
Brick vdicnode03-priv:/vdic-infr/gv049152 0  Y   3456
Self-heal Daemon on localhost   N/A   N/AY 
  3043
Self-heal Daemon on vdicnode03-priv N/A   N/AY 
  3496
Self-heal Daemon on vdicnode01-priv N/A   N/AY 
  3267


Task Status of Volume vdic-infr-gv0
--
There are no active volume tasks

Status of volume: vdic-infr2-gv0
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick vdicnode01-priv:/vdic-infr2/gv0   49153 0  Y   3048
Brick vdicnode02-priv:/vdic-infr2/gv0   49153 0  Y   2026
Brick vdicnode03-priv:/vdic-infr2/gv0   49153 0  Y   3450
Self-heal Daemon on localhost   N/A   N/AY 
  3043
Self-heal Daemon on vdicnode01-priv N/A   N/AY 
  3267
Self-heal Daemon on vdicnode03-priv N/A   N/AY 
  3496


Task Status of Volume vdic-infr2-gv0
--
There are no active volume tasks

[root@vdicnode02 ~]#

May I activate self-heal?
Activate self-heal? From the above volume status output I see that the SHD
process is started and its PID is listed, which simply means that self-heal
is already active and running.
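If you want to double-check it end to end, the standard gluster CLI query
(using your volume names) is:

  gluster volume heal vdic-infr-gv0 info
  gluster volume heal vdic-infr2-gv0 info

Both should report zero entries per brick when the bricks are in sync.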


I'd like to know the difference between None, Deploy and Undeploy from 
the Hosted Engine option as well:


[Inline image 1]


Ah! There is a lot to explain here. I would suggest going through the
link below for more details on this.


https://devconfcz2016.sched.org/event/5m20/ovirt-and-gluster-hyperconvergence

Hope the above helps


Thanks a lot.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to migrate Self Hosted Engine

2016-11-20 Thread Gianluca Cecchi
On Sun, Nov 20, 2016 at 4:08 PM, Yedidyah Bar David  wrote:

>
>
> There is now an option for this in the gui, did you mark it? See also:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1167262
> --
> Didi
>

Ah.. I see.
In the "New Host" window there is a section named "Hosted Engine"; it
defaults to "None", and that is what I had left it at:

https://drive.google.com/file/d/0BwoPbcrMv8mvME9CVGFRLTB0b0k/view?usp=sharing

I didn't know about it.
I have verified that I was able to put one host into maintenance (the only
running VM was automatically migrated), then select "Reinstall", and in the
proposed window choose "Deploy" in the corresponding Hosted Engine section:
https://drive.google.com/file/d/0BwoPbcrMv8mvWTJMQXpwbHJYc00/view?usp=sharing

It seems ok now
[root@ovirt02 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : ovirt01.localdomain.local
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 8e1ee066
Host timestamp : 429820
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=429820 (Sun Nov 20 23:58:31 2016)
host-id=1
score=3400
maintenance=False
state=EngineUp
stopped=False


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 192.168.150.103
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 839f79f5
Host timestamp : 429736
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=429736 (Sun Nov 20 23:58:37 2016)
host-id=2
score=3400
maintenance=False
state=EngineDown
stopped=False
[root@ovirt02 ~]#

Is it?
I was also able now to migrate the hosted engine vm to the second host and
to connect without problems to its console.
I'm going to change also the third host.

Two notes:
1) it would be nice to pre-filter the drop-down box when you have to choose
the host to which to migrate the hosted engine...
If no hosts are available you would get a clear message and no choice at all,
and if only a subset of the hosts in the cluster is eligible, you would be
offered only those hosts rather than every host in the cluster.

2) If the gui option becomes the default and preferred way to deploy hosts
in self-hosted-engine environments, I think it should be made clearer that
following the default action leaves you without high availability for the
hosted engine vm.
Alternatively, change the default action to "Deploy", or show a popup when
the hosted engine vm has only one host configured for it but there are other
hosts in the cluster.
Just my opinion.

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to add host

2016-11-20 Thread Oscar Segarra
Hi,

When I try to add the second host from the ovirt interface I get the
following error:

[image: Inline image 2]

Of course, host vdicnode02 does not appear in the GUI, and the gluster
volumes look perfectly up and in sync:

[root@vdicnode02 ~]# gluster volume status
Status of volume: vdic-infr-gv0
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick vdicnode01-priv:/vdic-infr/gv049152 0  Y
3039
Brick vdicnode02-priv:/vdic-infr/gv049152 0  Y
1999
Brick vdicnode03-priv:/vdic-infr/gv049152 0  Y
3456
Self-heal Daemon on localhost   N/A   N/AY
3043
Self-heal Daemon on vdicnode03-priv N/A   N/AY
3496
Self-heal Daemon on vdicnode01-priv N/A   N/AY
3267

Task Status of Volume vdic-infr-gv0
--
There are no active volume tasks

Status of volume: vdic-infr2-gv0
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick vdicnode01-priv:/vdic-infr2/gv0   49153 0  Y
3048
Brick vdicnode02-priv:/vdic-infr2/gv0   49153 0  Y
2026
Brick vdicnode03-priv:/vdic-infr2/gv0   49153 0  Y
3450
Self-heal Daemon on localhost   N/A   N/AY
3043
Self-heal Daemon on vdicnode01-priv N/A   N/AY
3267
Self-heal Daemon on vdicnode03-priv N/A   N/AY
3496

Task Status of Volume vdic-infr2-gv0
--
There are no active volume tasks

[root@vdicnode02 ~]#

May I activate self-heal?

I'd like to know the difference between None, Deploy and Undeploy from the
Hosted Engine option as well:

[image: Inline image 1]

Thanks a lot.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unexpected behaviour during creating VM using oVirt API

2016-11-20 Thread Juan Hernández
On 10/07/2016 08:59 PM, Roman Chukov wrote:
> Hello, 
> 
> I am trying to create a virtual machine from a template using oVirt
> API. Somehow like this:
> 
> ---
> def createVM(connection, cluster, vmname, vmtemplate):
> try:
>  param = params.VM( name=vmname, \
>  cluster=connection.clusters.get(name=cluster), \
>  template=connection.templates.get(name=vmtemplate), \
>  use_latest_template_version = True ) 
> except: 
>   print "Could not construct a request to oVirtapi,please check 
> parameters which were being sent to" 
>   return None
> 
> try:
> connection.vms.add(param)
> except:
> print "I was not able to commit my request into oVirt api."
> return None
> return "OK"
> ---
> 
> Everything is fine when I have only ONE version of a template. But I
> usually create several versions of one template because it is quite
> flexible. In this case, when I run my script, I receive an
> "AmbiguousQueryError" error even if an option
> "use_latest_template_version = True" is used.
> 
> I revised
> file  /usr/lib/python2.7/site-packages/ovirtsdk/utils/filterhelper.py
> and found near line 30 that this error is raised unquestionably:
> 
> --
> if len(result) > 1:
>   raise AmbiguousQueryError(query)
> return result[0] if result else None
> --
> 
> It seems quite strange. Either I do not understand the meaning of the
> "use_latest_template_version" option, or using this option does not make
> sense; I mean that the query constructed by params.VM() will never get past
> filterhelper.py in the current implementation.
> I made a small patch that allows me to use the latest
> version of the template during VM creation:
> 
> --
> if len(result) > 1 :
>   result = result[len(result) - 1]
>   return result
>   #raise AmbiguousQueryError(query)
> return result[0] if result else None
> --
> But I am still not sure whether the original behaviour of filterhelper.py
> is expected. I would be very pleased if you could explain this issue to me.
> 
> My OS is CentOS 7. I use Python python-2.7.5-34.el7.x86_64. Version of
> ovirt-engine-sdk is ovirt-engine-sdk-python-3.6.8.0-1.el7.centos.noarch
> 

The reason for this is that you are searching all the templates that
have a given name, and when a template has multiple versions each
version appears, in the API, as a separate template. So you actually
have multiple templates with the same name.

To avoid that, instead of using "connection.templates.get(name=...)" you
can use "connection.templates.list(query="name=%s" % ...)". That will
give you all the versions of the template. You can just take the first,
or, if you need a specific version, you can look for it explicitly:

  template = None
  for candidate in connection.templates.list(query="name=mytemplate"):
if candidate.get_version().get_template_name() == 'myversion':
  template = candidate
  break

Then you can explicitly use that template:

  connection.vms.add(
params.VM(
  name="newvm",
  cluster=params.Cluster(
id="mycluster",
  ),
  template=params.Template(
id=template.get_id(),
  ),
  use_latest_template_version=True,
)
  )

Note that it is good practice to always create a new instance of
params.Cluster and params.Template when sending a request like this,
that way you make sure that you are only sending to the server the
required data, in this case the cluster name and the template id. The
way you are doing it now is sending the complete representation of the
cluster and template, which is bad for performance.

However, in your case, as you already know the name of the cluster, the name
of the template, and the name of the VM, it is easier to send the request
like this, without searching for the cluster or template first:

  connect.vms.add(
params.VM(
  name="myvm",
  cluster=params.Cluster(
name="mycluster",
  ),
  template=params.Template(
name="mytemplate",
  ),
  use_latest_template_version=True,
),
  )

I take the opportunity to remind you that you are using version 3 of the
API and version 3 of the SDK. Those will be retired starting with version
4.2 of oVirt. If you are already using version 4.0 or newer of oVirt,
consider using version 4 of the API and version 4 of the SDK. You have
documentation and examples here:

  https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk
  https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples

-- 
Commercial address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Registered in the Madrid Mercantile Register – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to migrate Self Hosted Engine

2016-11-20 Thread Yedidyah Bar David
On Sun, Nov 20, 2016 at 4:02 PM, Gianluca Cecchi
 wrote:
>
>
> On Sun, Nov 20, 2016 at 2:54 PM, Gianluca Cecchi 
> wrote:
>>
>> Hello,
>> I have a hyperconverged Gluster cluster with SHE and 3 hosts, originally
>> installed with 4.0.5.
>> The installation was done starting from ovirt01 (named hosted_engine_1 in
>> webadmin gui) and then deploying two more hosts: ovirt02 and ovirt03 from
>> web admin gui itself.
>> All seems ok.
>> I can migrate a normal VM from one host to another, and it is nice that I
>> don't lose the console now.
>> But if I try from webadmin gui to migrate the self hosted engine I get
>> this message:
>>
>>
>> https://drive.google.com/file/d/0BwoPbcrMv8mvY3pURVRkX0p4OW8/view?usp=sharing
>>
>> Is this because the only way to migrate engine is to put its hosting host
>> to maintenance or is there anything wrong?
>> I don't understand the message:
>>
>> The host ovirt02.localdomain.local did not satisfy internal filter HA
>> because it is not a Hosted Engine host..
>>
>>
>> some commands executed on hosted_engine_1 (ovirt01):
>> [root@ovirt01 ~]# vdsClient -s 0 glusterHostsList
>> {'hosts': [{'hostname': '10.10.100.102/24',
>> 'status': 'CONNECTED',
>> 'uuid': 'e9717281-a356-42aa-a579-a4647a29a0bc'},
>>{'hostname': 'ovirt03.localdomain.local',
>> 'status': 'CONNECTED',
>> 'uuid': 'ec81a04c-a19c-4d31-9d82-7543cefe79f3'},
>>{'hostname': 'ovirt02.localdomain.local',
>> 'status': 'CONNECTED',
>> 'uuid': 'b89311fe-257f-4e44-8e15-9bff6245d689'}],
>>  'status': {'code': 0, 'message': 'Done'}}
>> Done
>>
>> [root@ovirt01 ~]# vdsClient -s 0 list
>>
>> 87fd6bdb-535d-45b8-81d4-7e3101a6c364
>> Status = Up
>> nicModel = rtl8139,pv
>> statusTime = 4691827920
>> emulatedMachine = pc
>> pid = 18217
>> vmName = HostedEngine
>> devices = [{'device': 'console', 'specParams': {}, 'type': 'console',
>> 'deviceId': '08628a0d-1c2a-43e9-8820-4c02f14d04e9', 'alias': 'console0'},
>> {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon',
>> 'alias': 'balloon0'}, {'alias': 'rng0', 'specParams': {'source': 'random'},
>> 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x', 'type':
>> 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio', 'type':
>> 'rng'}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type':
>> 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x',
>> 'type': 'pci', 'function': '0x0'}}, {'device': 'vga', 'alias': 'video0',
>> 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain':
>> '0x', 'type': 'pci', 'function': '0x0'}}, {'device': 'vnc',
>> 'specParams': {'spiceSecureChannels':
>> 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
>> 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv',
>> 'macAddr': '00:16:3e:0a:e7:ba', 'linkActive': True, 'network': 'ovirtmgmt',
>> 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {},
>> 'deviceId': '79a745a0-e691-4a3d-8d6b-c94306db9113', 'address': {'slot':
>> '0x03', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function':
>> '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index':
>> '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {},
>> 'readonly': 'True', 'deviceId': '6be25e51-0944-4fc0-93fe-4ecabe32ac6b',
>> 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0',
>> 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type':
>> 'disk'}, {'poolID': '----', 'reqsize': '0',
>> 'index': '0', 'iface': 'virtio', 'apparentsize': '10737418240', 'alias':
>> 'virtio-disk0', 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
>> 'readonly': 'False', 'shared': 'exclusive', 'truesize': '3395743744',
>> 'type': 'disk', 'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
>> 'volumeInfo': {'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
>> 'volType': 'path', 'leaseOffset': 0, 'volumeID':
>> '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'leasePath':
>> '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease',
>> 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path':
>> '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'},
>> 'format': 'raw', 'deviceId': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
>> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type':
>> 'pci', 'function': '0x0'}, 'device': 'disk', 'path':
>> '/var/run/vdsm/storage/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6',
>> 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder':
>> '1', 'volumeID': '94c46bac-0a9f-49e8-

Re: [ovirt-users] How to migrate Self Hosted Engine

2016-11-20 Thread Gianluca Cecchi
On Sun, Nov 20, 2016 at 2:54 PM, Gianluca Cecchi 
wrote:

> Hello,
> I have a hyperconverged Gluster cluster with SHE and 3 hosts, originally
> installed with 4.0.5.
> The installation was done starting from ovirt01 (named hosted_engine_1 in
> webadmin gui) and then deploying two more hosts: ovirt02 and ovirt03 from
> web admin gui itself.
> All seems ok.
> I can migrate a normal VM from one host to another, and it is nice that I
> don't lose the console now.
> But if I try from webadmin gui to migrate the self hosted engine I get
> this message:
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvY3pURVRkX0p4OW8/
> view?usp=sharing
>
> Is this because the only way to migrate engine is to put its hosting host
> to maintenance or is there anything wrong?
> I don't understand the message:
>
> The host ovirt02.localdomain.local did not satisfy internal filter HA
> because it is not a Hosted Engine host..
>
>
> some commands executed on hosted_engine_1 (ovirt01):
> [root@ovirt01 ~]# vdsClient -s 0 glusterHostsList
> {'hosts': [{'hostname': '10.10.100.102/24',
> 'status': 'CONNECTED',
> 'uuid': 'e9717281-a356-42aa-a579-a4647a29a0bc'},
>{'hostname': 'ovirt03.localdomain.local',
> 'status': 'CONNECTED',
> 'uuid': 'ec81a04c-a19c-4d31-9d82-7543cefe79f3'},
>{'hostname': 'ovirt02.localdomain.local',
> 'status': 'CONNECTED',
> 'uuid': 'b89311fe-257f-4e44-8e15-9bff6245d689'}],
>  'status': {'code': 0, 'message': 'Done'}}
> Done
>
> [root@ovirt01 ~]# vdsClient -s 0 list
>
> 87fd6bdb-535d-45b8-81d4-7e3101a6c364
> Status = Up
> nicModel = rtl8139,pv
> statusTime = 4691827920
> emulatedMachine = pc
> pid = 18217
> vmName = HostedEngine
> devices = [{'device': 'console', 'specParams': {}, 'type': 'console',
> 'deviceId': '08628a0d-1c2a-43e9-8820-4c02f14d04e9', 'alias': 'console0'},
> {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type':
> 'balloon', 'alias': 'balloon0'}, {'alias': 'rng0', 'specParams': {'source':
> 'random'}, 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x',
> 'type': 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio',
> 'type': 'rng'}, {'device': 'scsi', 'alias': 'scsi0', 'model':
> 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus':
> '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}}, {'device':
> 'vga', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02',
> 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}},
> {'device': 'vnc', 'specParams': {'spiceSecureChannels':
> 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
> 'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv',
> 'macAddr': '00:16:3e:0a:e7:ba', 'linkActive': True, 'network': 'ovirtmgmt',
> 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {},
> 'deviceId': '79a745a0-e691-4a3d-8d6b-c94306db9113', 'address': {'slot':
> '0x03', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function':
> '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'},
> {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0',
> 'specParams': {}, 'readonly': 'True', 'deviceId': 
> '6be25e51-0944-4fc0-93fe-4ecabe32ac6b',
> 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0',
> 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type':
> 'disk'}, {'poolID': '----', 'reqsize':
> '0', 'index': '0', 'iface': 'virtio', 'apparentsize': '10737418240',
> 'alias': 'virtio-disk0', 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
> 'readonly': 'False', 'shared': 'exclusive', 'truesize': '3395743744',
> 'type': 'disk', 'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
> 'volumeInfo': {'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
> 'volType': 'path', 'leaseOffset': 0, 'volumeID': 
> '94c46bac-0a9f-49e8-9188-627fa0caf2b6',
> 'leasePath': '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.
> local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/
> cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease',
> 'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path':
> '/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.
> local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/
> cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'},
> 'format': 'raw', 'deviceId': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type':
> 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/
> e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/
> 94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'propagateErrors': 'off',
> 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID':
> '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'specParams': {}, 'volumeChain':
> [{'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volType': 'path',
> 'l

Re: [ovirt-users] Self hosted single server network requirements

2016-11-20 Thread Paul-Erik Törrönen

On 2016-11-19 11:28, Joop wrote:

I have never setup such a server but if you create the ovirtmgmt bridge
first then the deploy will leave it alone and you won't lose network
connectivity.


I've set up almost the described combination (though I run the engine VM on
a separate laptop) and found that running the command-line stuff in a screen
session helps a lot: the setup commands can then complete instead of being
interrupted when your (ssh) session is killed.


I also encountered the same issue of network connection loss when the
ovirtmgmt interface took over my primary connection, but I was able to
continue because I had several other already-configured network devices on
my hardware and could access the server through them.


If you only have one network interface, then I would recommend running 
the commands in a screen, and in the case of losing the connection, 
reconnect to the screen through the KVM.
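Roughly, assuming the screen package is available on the host:

  yum install -y screen
  screen -S he-deploy
  hosted-engine --deploy
  # if the ssh session drops, log in again (via the KVM if needed) and run:
  screen -r he-deploy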


My 0.02€

Poltsi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] EPEL and package(s) conflicts

2016-11-20 Thread Kaleb S. KEITHLEY


Yes, disable EPEL. I suppose I should retire nfs-ganesha in EPEL.
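Until then, something like this on the affected hosts should avoid the
conflict (standard yum mechanisms; adjust the repo id if yours differs):

  yum-config-manager --disable epel
  # or keep EPEL enabled but exclude the conflicting packages:
  yum-config-manager --save --setopt="epel.exclude=nfs-ganesha*"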

On 11/17/2016 12:29 PM, Soumya Koduri wrote:


+ Kaleb, Niels

On 11/17/2016 09:24 PM, lejeczek wrote:



On 15/11/16 13:51, knarra wrote:

[+soumya]
On 11/15/2016 06:51 PM, Simone Tiraboschi wrote:



On Tue, Nov 15, 2016 at 1:26 PM, lejeczek <pelj...@yahoo.co.uk> wrote:

hi

I apologize if I missed it when reading the release (repo) notes.
What are users supposed to do with the EPEL repo?
I'm asking because I hit this:

--> Package python-perf.x86_64 0:4.8.7-1.el7.elrepo
will be an update
--> Finished Dependency Resolution
Error: Package:
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
(@ovirt-4.0-centos-gluster37)
   Requires: nfs-ganesha = 2.3.0-1.el7
   Removing: nfs-ganesha-2.3.0-1.el7.x86_64
(@ovirt-4.0-centos-gluster37)
   nfs-ganesha = 2.3.0-1.el7
   Updated By: nfs-ganesha-2.3.2-1.el7.x86_64
(epel)
   nfs-ganesha = 2.3.2-1.el7


Adding Sahina on this.

but would there be any advisory for now? repo disable/enable, exclude
packages?


I think disabling epel repo may fix the issue. Request Kaleb/Niels to
confirm.

Thanks,
Soumya


thanks.
L.



and I also wonder if there might be more?
regards.
L.
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] EPEL and package(s) conflicts

2016-11-20 Thread Soumya Koduri


+ Kaleb, Niels

On 11/17/2016 09:24 PM, lejeczek wrote:



On 15/11/16 13:51, knarra wrote:

[+soumya]
On 11/15/2016 06:51 PM, Simone Tiraboschi wrote:



On Tue, Nov 15, 2016 at 1:26 PM, lejeczek <pelj...@yahoo.co.uk> wrote:

hi

I apologize if I missed it when reading the release (repo) notes.
What are users supposed to do with the EPEL repo?
I'm asking because I hit this:

--> Package python-perf.x86_64 0:4.8.7-1.el7.elrepo
will be an update
--> Finished Dependency Resolution
Error: Package:
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
(@ovirt-4.0-centos-gluster37)
   Requires: nfs-ganesha = 2.3.0-1.el7
   Removing: nfs-ganesha-2.3.0-1.el7.x86_64
(@ovirt-4.0-centos-gluster37)
   nfs-ganesha = 2.3.0-1.el7
   Updated By: nfs-ganesha-2.3.2-1.el7.x86_64
(epel)
   nfs-ganesha = 2.3.2-1.el7


Adding Sahina on this.

but would there be any advisory for now? repo disable/enable, exclude
packages?


I think disabling epel repo may fix the issue. Request Kaleb/Niels to 
confirm.


Thanks,
Soumya


thanks.
L.



and I also wonder if there might be more?
regards.
L.
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to migrate Self Hosted Engine

2016-11-20 Thread Gianluca Cecchi
Hello,
I have a hyperconverged Gluster cluster with SHE and 3 hosts, originally installed with 4.0.5.
The installation was done starting from ovirt01 (named hosted_engine_1 in
webadmin gui) and then deploying two more hosts: ovirt02 and ovirt03 from
web admin gui itself.
All seems ok.
I can migrate a normal VM from one host to another, and it is nice that I
don't lose the console now.
But if I try from webadmin gui to migrate the self hosted engine I get this
message:

https://drive.google.com/file/d/0BwoPbcrMv8mvY3pURVRkX0p4OW8/view?usp=sharing

Is this because the only way to migrate the engine is to put its hosting host
into maintenance, or is there something wrong?
I don't understand the message:

The host ovirt02.localdomain.local did not satisfy internal filter HA
because it is not a Hosted Engine host..


some commands executed on hosted_engine_1 (ovirt01):
[root@ovirt01 ~]# vdsClient -s 0 glusterHostsList
{'hosts': [{'hostname': '10.10.100.102/24',
'status': 'CONNECTED',
'uuid': 'e9717281-a356-42aa-a579-a4647a29a0bc'},
   {'hostname': 'ovirt03.localdomain.local',
'status': 'CONNECTED',
'uuid': 'ec81a04c-a19c-4d31-9d82-7543cefe79f3'},
   {'hostname': 'ovirt02.localdomain.local',
'status': 'CONNECTED',
'uuid': 'b89311fe-257f-4e44-8e15-9bff6245d689'}],
 'status': {'code': 0, 'message': 'Done'}}
Done

[root@ovirt01 ~]# vdsClient -s 0 list

87fd6bdb-535d-45b8-81d4-7e3101a6c364
Status = Up
nicModel = rtl8139,pv
statusTime = 4691827920
emulatedMachine = pc
pid = 18217
vmName = HostedEngine
devices = [{'device': 'console', 'specParams': {}, 'type': 'console',
'deviceId': '08628a0d-1c2a-43e9-8820-4c02f14d04e9', 'alias': 'console0'},
{'device': 'memballoon', 'specParams': {'model': 'none'}, 'type':
'balloon', 'alias': 'balloon0'}, {'alias': 'rng0', 'specParams': {'source':
'random'}, 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x',
'type': 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio',
'type': 'rng'}, {'device': 'scsi', 'alias': 'scsi0', 'model':
'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}}, {'device':
'vga', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02',
'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}},
{'device': 'vnc', 'specParams': {'spiceSecureChannels':
'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv',
'macAddr': '00:16:3e:0a:e7:ba', 'linkActive': True, 'network': 'ovirtmgmt',
'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {},
'deviceId': '79a745a0-e691-4a3d-8d6b-c94306db9113', 'address': {'slot':
'0x03', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function':
'0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'},
{'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0',
'specParams': {}, 'readonly': 'True', 'deviceId':
'6be25e51-0944-4fc0-93fe-4ecabe32ac6b', 'address': {'bus': '1',
'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device':
'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID':
'----', 'reqsize': '0', 'index': '0',
'iface': 'virtio', 'apparentsize': '10737418240', 'alias': 'virtio-disk0',
'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'readonly': 'False',
'shared': 'exclusive', 'truesize': '3395743744', 'type': 'disk',
'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volumeInfo':
{'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volType': 'path',
'leaseOffset': 0, 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6',
'leasePath':
'/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease',
'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path':
'/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'},
'format': 'raw', 'deviceId': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type':
'pci', 'function': '0x0'}, 'device': 'disk', 'path':
'/var/run/vdsm/storage/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6',
'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder':
'1', 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'specParams': {},
'volumeChain': [{'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
'volType': 'path', 'leaseOffset': 0, 'volumeID':
'94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'leasePath':
'/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-

Re: [ovirt-users] oVirt 4.0.5 and SHE console from gui

2016-11-20 Thread Gianluca Cecchi
On Sun, Nov 20, 2016 at 12:40 PM, Gianluca Cecchi  wrote:

> On Sun, Nov 20, 2016 at 11:26 AM, Yedidyah Bar David 
> wrote:
>
>>
>> > Apart from the usual command
>> >  hosted-engine --add-console-password
>> > and the related vncviewer one from the console,
>>
>> Does this work?
>>
>>
>
> Yes, I have to check on which host the VM is and then on that host
>  hosted-engine --add-console-password
>
> and then from my client
>
> vncviewer 192.168.150.102:5900
>
> I'm prompted for a password and after entering it I can open the ovirt
> engine console.
>
>
>>
>> Not sure if this is covered by [1]. Adding Roy.
>> If it's not, you can try manually editing it, check e.g. [2], and feel
>> free to open an RFE.
>>
>> Best,
>>
>> [1] https://www.ovirt.org/develop/release-management/features/ho
>> sted-engine-vm-management/
>> [2] http://lists.ovirt.org/pipermail/users/2015-November/036152.html
>> --
>> Didi
>>
>
> I will try and check...
>
>
>
Sorry, it was actually a problem with name resolution of the particular
host where the engine was running when I tried the first time.
Now it is ok also from web admin gui.

I'm going to check anyway the actions to change the console to spice
protocol.
Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0.5 and SHE console from gui

2016-11-20 Thread Gianluca Cecchi
On Sun, Nov 20, 2016 at 11:26 AM, Yedidyah Bar David 
wrote:

>
> > Apart from the usual command
> >  hosted-engine --add-console-password
> > and the related vncviewer one from the console,
>
> Does this work?
>
>

Yes, I have to check on which host the VM is and then on that host
 hosted-engine --add-console-password

and then from my client

vncviewer 192.168.150.102:5900

I'm prompted for a password and after entering it I can open the oVirt engine console.


>
> Not sure if this is covered by [1]. Adding Roy.
> If it's not, you can try manually editing it, check e.g. [2], and feel
> free to open an RFE.
>
> Best,
>
> [1] https://www.ovirt.org/develop/release-management/features/
> hosted-engine-vm-management/
> [2] http://lists.ovirt.org/pipermail/users/2015-November/036152.html
> --
> Didi
>

I will try and check...
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0.5 and SHE console from gui

2016-11-20 Thread Yedidyah Bar David
On Sat, Nov 19, 2016 at 4:20 PM, Gianluca Cecchi
 wrote:
> Hello,
> In a clean environment configured in 4.0.5 I used VNC for hosted engine VM
> console during setup.
> In web admin gui I see that the graphics protocol for the Hosted Engine VM
> is indeed VNC and Video Type VGA.
>
> I have verified that if I configure a VM with VNC graphics (video type both
> cirrus and vga tried) I'm able to connect to it with remote-viewer from
> Linux client and with virt-viewer in a W2012 client.
> Apart from the usual command
>  hosted-engine --add-console-password
> and the related vncviewer one from the console,

Does this work?

> can I connect  to the VNC engine
> console also from web admin gui?
> Both from Windows and Linux I get error
>
> Unable to connect to the graphic server
> C:\Users\Admini~1\Appdata\Local\Temp\console-12.vv
>
> Unable to connect to the graphic server /tmp/mozilla_g.cecchi0/console.vv

Can you please check/share the .vv files?

>
> Can I set up the console to use SPICE after install? And in that case, can I
> connect to its console from the webadmin gui then?

Not sure if this is covered by [1]. Adding Roy.
If it's not, you can try manually editing it, check e.g. [2], and feel
free to open an RFE.

Best,

[1] 
https://www.ovirt.org/develop/release-management/features/hosted-engine-vm-management/
[2] http://lists.ovirt.org/pipermail/users/2015-November/036152.html
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted single server network requirements

2016-11-20 Thread Yedidyah Bar David
On Sat, Nov 19, 2016 at 11:28 AM, Joop  wrote:
> On 18-11-2016 23:47, Mark Steckel wrote:
>> Hi folks,
>>
>> I'm a complete OVirt newbie trying to set up a self hosted, single server 
>> and looking for some pointers to get going... (While new to OVirt, I'm long 
>> time sysadmin so not totally clueless.)

Welcome :-)

>>
>> The server I'm using has a 4-core CPU, 64 GB of ram, and a pair of 500 GB 
>> SSDs set up with RAID 1 and running CentOS 7.2. (I do not have physical 
>> access to the server but can get KVM access when required.)
>>
>> The goal is to set up this server as a self hosted, single server OVirt 
>> platform. Nothing mission critical, just want to be able to spin up VMs when 
>> needed and destroy them afterwards.
>>
>> Trying to use OVirt 4.0.4 (now 4.0.5).
>>
>> I've read some docs for setting this up including those just below as well 
>> as a bunch of googling and reading through the list archive.
>>
>>http://www.ovirt.org/documentation/how-to/hosted-engine/
>>http://www.ovirt.org/develop/release-management/features/heapplianceflow/
>>
>> http://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
>>
>> Every time I execute "hosted-engine --deploy" the process "hangs" when it 
>> migrates the public IP address to a virtual nic interface. And in this case, 
>> by "hang" I mean that I lose the connection to the server. I have no idea if 
>> the process finishes or errors out.

Can you share logs at this point? Including /var/log/vdsm/* and
/var/log/ovirt-hosted-engine-setup/* . If you have KVM access, you
might manage to get them. Or copy after a reboot.
Also please share your network configuration. Are you using
NetworkManager (IIRC unsupported currently)? DHCP? Static IP conf?
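For example, something along these lines run on the host right after the
failed attempt (plain tar plus a couple of status commands; adjust the paths
if yours differ):

  tar czf /tmp/he-deploy-logs.tar.gz /var/log/vdsm /var/log/ovirt-hosted-engine-setup
  nmcli general status   # is NetworkManager managing the interfaces?
  ip addr show           # current addresses and bridges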

>>
>> So obviously I'm doing something wrong.

Or it might be a bug.

>>
>> I suspect that I need to take care of some network requirement before 
>> starting. However, I can't find anything in the docs of how to prep the 
>> server network environment before running "hosted-engine --deploy".
>>
>> Should probably also mention that some, if not all of the VMs will need a 
>> public IP address. The physical server has a /29 to get started.
>>
> I have never setup such a server but if you create the ovirtmgmt bridge
> first then the deploy will leave it alone and you won't lose network
> connectivity.
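A minimal pre-created bridge along those lines (just a sketch, assuming
CentOS-style ifcfg files, a single NIC named em1 and a static address; adjust
the names and addresses to your environment):

  # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
  DEVICE=ovirtmgmt
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0
  GATEWAY=192.0.2.1
  DELAY=0
  NM_CONTROLLED=no

  # /etc/sysconfig/network-scripts/ifcfg-em1
  DEVICE=em1
  ONBOOT=yes
  BRIDGE=ovirtmgmt
  NM_CONTROLLED=no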

See also [1]. Adding Dan (who wrote this). Best,

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1301879#c1
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Could not get access to ACL tech driver 'ebiptables'

2016-11-20 Thread Edward Haas
On Sat, Nov 19, 2016 at 7:35 AM, Kenneth Bingham  wrote:

> I suspect this has something to do with macspoofing because I found that I
> was able to start a guest by changing the virtual network interface profile
> to remove network filtering. I verified the guests are able to start with
> filtering enabled on the vnic profile if it is set to false in
> engine-config and ovirt-engine service bounced. I'd prefer to leave
> macspoofing disabled globally and only enable it for things like VRRP, CARP;
> but I'll have to leave it enabled for now. Could it be that the MACs of the
> imported guests, being from the foreign mac pool, are being blocked by an
> ebtables policy? I wonder if I add their Ethernet range to the pools of the
> gaining Manager...
>
> On Fri, Nov 18, 2016 at 9:35 PM Kenneth Bingham  wrote:
>
>> I imported a guest from its iscsi storage domain and clicked the green UP
>> button, but the guest failed to start. This was the first time vdsm tried
>> to create a temporary storage domain for a host other than hosted_engine.
>> I'm using the same chap credential that was used with the same iscsi
>> storage domain with the old instance of Manager. It looks like it wasn't
>> able to get permission to do something, but everything was set up with
>> sudo-as-root. I used CentOS 7.2 with the ovirt-release repository and
>> hosted-engine script to deploy Manager.
>>
>> From /var/log/vdsm/vdsm.log on the host where it tried to start:
>> Thread-23385::ERROR::2016-11-19 
>> 02:12:41,907::vm::765::virt.vm::(_startUnderlyingVm)
>> vmId=`c3125d32-ae2a-4d2f-af4c-13661d90ddf9`::*The vm start process
>> failed*
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 706, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 1996, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
>> line 123, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in
>> wrapper
>> return func(inst, *args, **kwargs)
>>   File "*/usr/lib64/python2.7/site-packages/libvirt.py*", line 3611, in
>> *createXML*
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> libvirtError: internal error: *Could not get access to ACL tech driver
>> 'ebiptables'*
>>
>> From /usr/lib64/python2.7/site-packages/libvirt.py which raised the
>> error:
>> def createXML(self, xmlDesc, flags=0):
>> """Launch a new guest domain, based on an XML description similar
>> to the one returned by virDomainGetXMLDesc()
>> This function may require privileged access to the hypervisor.
>> The domain is not persistent, so its definition will disappear
>> when it
>> is destroyed, or if the host is restarted (see
>> virDomainDefineXML() to
>> define persistent domains).
>>
>> If the VIR_DOMAIN_START_PAUSED flag is set, the guest domain
>> will be started, but its CPUs will remain paused. The CPUs
>> can later be manually started using virDomainResume.
>>
>> If the VIR_DOMAIN_START_AUTODESTROY flag is set, the guest
>> domain will be automatically destroyed when the virConnectPtr
>> object is finally released. This will also happen if the
>> client application crashes / loses its connection to the
>> libvirtd daemon. Any domains marked for auto destroy will
>> block attempts at migration, save-to-file, or snapshots.
>>
>> virDomainFree should be used to free the resources after the
>> domain object is no longer needed. """
>> ret = libvirtmod.virDomainCreateXML(self._o, xmlDesc, flags)
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> __tmp = virDomain(self,_obj=ret)
>> return __tmp
>>
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
Please include the Kernel, libvirt and ebtables versions you run with.
In addition, the logs from libvirt and vdsm (vdsm.log, supervdsm.log) would
help.

Laine, Thomas, can this be related to
https://bugzilla.redhat.com/show_bug.cgi?id=1396032 ?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can't import vm from KVM host

2016-11-20 Thread Shahar Havivi
On 18.11.16 09:34, Nelson Lameiras wrote:
> Hello Shahar,
> 
> I've rebuilt my test system and installed the recently released oVirt 4.0.5 +
> the gerrit patch => still no luck. vdsm always gives me the same error:
Hi,

Yes, once you have the alias you are halfway there,
but your alias path looks wrong: it starts with whitespace:
'/  var/lib/libvirt/images/rhel7.1.img'

check it via 'virsh -r dumpxml '
and look whether the whitespace is there too

Regarding the pool, you can check it too via virsh:
virsh pool-list
and all the pool-* commands (to create one). If you have problems building a
pool, let me know.
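For a plain file-backed image like the one above, a directory pool is usually
enough; a rough sequence (adjust the pool name and path, and skip the define
step if a pool for that directory already exists):

  virsh pool-define-as default dir --target /var/lib/libvirt/images
  virsh pool-build default
  virsh pool-start default
  virsh pool-autostart default
  virsh pool-refresh default
  virsh vol-list default

A raw block device such as /dev/sdc is a different story; it would need a
different pool type (or the API change Tomas mentioned in the gerrit patch).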

 Shahar.
> 
> "
> jsonrpc.Executor/6::ERROR::2016-11-17 
> 16:38:56,320::v2v::934::root::(_add_disk_info) Error getting disk size
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 931, in 
> _add_disk_info
> vol = conn.storageVolLookupByPath(disk['alias'])
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4596, in 
> storageVolLookupByPath
> if ret is None:raise libvirtError('virStorageVolLookupByPath() failed', 
> conn=self)
> libvirtError: Volume de stockage introuvable : no storage vol with matching 
> path '/dev/sdc'
> jsonrpc.Executor/6::DEBUG::2016-11-17 
> 16:38:56,325::__init__::555::jsonrpc.JsonRpcServer::(_handle_request) Return 
> 'Host.getExternalVMs' in bridge with [{'status': 'Down', 'graphics': 'spice', 
> 'arch': 'x86_64', 'disks': [ {'capacity': '8589934592', 'format': 'RAW', 
> 'dev': 'hdd', 'allocation': '8589942784', 'alias': '/  
> var/lib/libvirt/images/rhel7.1.img', 'type': 'disk'}, {'alias': '/dev/sdc', 
> 'type': 'disk', 'dev': 'hdb', 'format': 'RAW'}], 'vmId': 
> 'd0ddc4f6-a208-4286-a665-fb9e54d14bef', 'smp': 1, 'has_snapshots': False, 
> 'video': 'qxl', 'memSize': 1  024, 'vmName': 'rhel7.1', 'networks': 
> [{'model': 'virtio', 'macAddr': '52:54:00:00:d7:76', 'type': 'network'}]}]
> "
> 
> FYI I added a normal disk (file based) to the VM in order to try to
> understand what's happening. As expected, this disk is seen by oVirt
> (even without the patch).
> 
> I do recognize something in the above log:
> 
>  'disks': [
>{'capacity': '8589934592', 'format': 'RAW', 'dev': 'hdd', 'allocation': 
> '8589942784', 'alias': '/  var/lib/libvirt/images/rhel7.1.img', 'type': 
> 'disk'}, 
>{'alias': '/dev/sdc', 'type': 'disk', 'dev': 'hdb', 'format': 'RAW'}
>  ], 
> 
> the "alias" line is clearly new since it did not appear before de patch! 
> something is working ;)
> 
> I confirm that when I created the test VM (via virt-manager), I gave the
> path to the device (/dev/sdc) directly, so there is no "pool" defined in
> libvirt to be addressed!
> I read https://gerrit.ovirt.org/#/c/64272/4/ and I agree with Tomas'
> findings: currently the conversion process expects a volume on a pool and
> is not able to use a block device directly.
> 
> So I'm hoping a solution can be found to this problem. We have a few
> hundred VMs waiting to be migrated in production and we don't have an NFS
> server, so this is our only option so far.
> 
> I'm able to test any patch quickly if needed, do not hesitate.
> 
> cordialement, regards, 
> Nelson LAMEIRAS 
> 
> Lyra Network 
> Service Projets et Processus 
> Tel : +33 (0) 5 32 09 09 70 
> 109 rue de l’innovation 
> 31670 Labège - France 
> www.lyra-network.com
> 
> - Original Message -
> From: "Shahar Havivi" 
> To: "Nelson Lameiras" 
> Cc: "Tomas Golembiovsky" , "Michal Skrivanek" 
> , users@ovirt.org
> Sent: Tuesday, November 8, 2016 12:16:55 PM
> Subject: Re: [ovirt-users] can't import vm from KVM host
> 
> On 08.11.16 10:58, Nelson Lameiras wrote:
> > Hi Shahar,
> > 
> > We try to prioritise VM behaviour predictability over resource
> > consumption, so "thin provisioning" is not an option for us. "Preallocated"
> > is always selected over the default behaviour.
> > 
> > Nevertheless, while trying to import a KVM VM (from another host), I get a
> > 0 disk count on the VM, which means I do not even get to the point where I
> > can choose the allocation policy (usually the next screen).
> > This is true with or without the patch proposed below (assuming it has not
> > changed since Sept 29).
> 
> I assume that this line is failing:
> vol = conn.storageVolLookupByPath(disk['alias'])
> the reason may be that the storage is not part of a pool.
> (look at pool-xxx commands via virsh)
> 
> If it doesn't work it may be related to a different API that we will need to
> implement as Tomas suggested in the gerrit patch.
> 
> > 
> > Can I give you any more information?
> > 
> > cordialement, regards, 
> > Nelson LAMEIRAS 
> > 
> > Lyra Network 
> > Service Projets et Processus 
> > Tel : +33 (0) 5 32 09 09 70 
> > 109 rue de l’innovation 
> > 31670 Labège - France 
> > www.lyra-network.com
> > 
> > - Original Message -
> > From: "Shahar Havivi" 
> > To: "Nelson Lameiras" , "Tomas 
> > Golembiovsky" 
> > Cc: "Michal Skrivanek" , users@ovirt.org
> > Sent: Monday, November 7, 2016 1:58:34 PM
> 

Re: [ovirt-users] Old 3.3 deep dive presentations

2016-11-20 Thread Yedidyah Bar David
On Fri, Nov 18, 2016 at 4:23 PM, Martin Sivak  wrote:
> Hi,
>
> does anybody know where the old deep dive presentations from
> http://www.ovirt.org/develop/release-management/releases/3.3/#Deep_dives
> are? Did we lose them in the conversion or do we still have backup
> somewhere?

We have a backup in [1]. There is an open issue about this [2].
Following the discussion there, and the total size of the missing
files there, compared to those already added to the new site, I
personally decided there is no problem simply adding them. I started
working on this, still did not finish. You can find my WIP stuff
in [3]. Best,

[1] http://resources.ovirt.org/old-site-files/
[2] https://github.com/oVirt/ovirt-site/issues/77
[3] https://github.com/didib/ovirt-site/tree/old-site-files

>
> --
> Martin Sivak
> SLA / oVirt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users