Re: [ovirt-users] ovirt minor upgrades for nodes via GUI or CLI?

2017-04-24 Thread Yedidyah Bar David
On Mon, Apr 24, 2017 at 5:41 PM, Matthias Leopold
 wrote:
>
>
> Am 2017-04-24 um 16:37 schrieb Yedidyah Bar David:
>>
>> On Mon, Apr 24, 2017 at 5:28 PM, Matthias Leopold
>>  wrote:
>>>
>>> hi,
>>>
>>> i'm still testing ovirt 4.1.
>>>
>>> i installed engine and 2 nodes in vanilla centos 7.3 hosts with
>>> everything
>>> that came from
>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
>>>
>>> i regularly checked for updates in the engine host OS with "yum update"
>>> (is
>>> there a gui option for this?). it obviously got an ovirt update from
>>> version
>>> 4.1.0 to 4.1.1.1 already some time ago.
>>>
>>> i regularly checked for updates in the nodes via the ovirt web gui
>>> (installation - check for upgrade). there were package updates available
>>> and installed in the past so i thought that everything was fine.
>>>
>>> now i checked with "yum check-update" in the nodes OS shell and noticed
>>> that
>>> ovirt-release41 is still on 4.1.0 and there are 81 packages available for
>>> update (from centos base _and_ ovirt repos including ovirt-release41
>>> itself). ovirt gui tells me 'no updates found'.
>>
>>
>> I think this function only checks for specific packages, not everything
>> yum reports.
>>
>>>
>>> why didn't these updates get installed? is it because of the
>>> ovirt-release41
>>> update? do i have to do this manually with yum?
>>
>>
>> ovirt-release41 itself is not one of these packages, and should in
>> principle
>> be considered "another package" (just like any other package you installed
>> on your machine).
>>
>> Which packages does yum say you have updates for?
>
>
> first my repos, to be sure:
>
> # yum repolist
> ...
> Repo-ID                                    Repo-Name                                          Status
> base/7/x86_64                              CentOS-7 - Base                                     9.363
> centos-opstools-testing/x86_64             CentOS-7 - OpsTools - testing repo                    448
> centos-ovirt-common-candidate/x86_64       CentOS-7 - oVirt common                               198
> centos-ovirt41-candidate/x86_64            CentOS-7 - oVirt 4.1                                   95
> extras/7/x86_64                            CentOS-7 - Extras                                     337
> ovirt-4.1/7                                Latest oVirt 4.1 Release                              455
> ovirt-4.1-centos-gluster38/x86_64          CentOS-7 - Gluster 3.8                                181
> ovirt-4.1-epel/x86_64                      Extra Packages for Enterprise Linux 7 - x86_64     11.550
> ovirt-4.1-patternfly1-noarch-epel/x86_64   Copr repo for patternfly1 owned by patternfly           2
> updates/7/x86_64                           CentOS-7 - Updates                                  1.575
> virtio-win-stable                          virtio-win builds roughly matching what was shipped in latest RHEL    4
> repolist: 24.208
>
> # yum check-update
> ...
> NetworkManager.x86_64                 1:1.4.0-19.el7_3       updates
> NetworkManager-config-server.x86_64   1:1.4.0-19.el7_3       updates
> NetworkManager-libnm.x86_64           1:1.4.0-19.el7_3       updates
> NetworkManager-team.x86_64            1:1.4.0-19.el7_3       updates
> NetworkManager-tui.x86_64             1:1.4.0-19.el7_3       updates
> NetworkManager-wifi.x86_64            1:1.4.0-19.el7_3       updates
> bind-libs-lite.x86_64                 32:9.9.4-38.el7_3.3    updates
> bind-license.noarch                   32:9.9.4-38.el7_3.3    updates
> ca-certificates.noarch                2017.2.11-70.1.el7_3   updates
> dmidecode.x86_64                      1:3.0-2.1.el7_3        updates
> fence-agents-all.x86_64               4.0.11-47.el7_3.5      updates
> fence-agents-apc.x86_64               4.0.11-47.el7_3.5      updates
> fence-agents-apc-snmp.x86_64          4.0.11-47.el7_3.5      updates
> fence-agents-bladecenter.x86_64       4.0.11-47.el7_3.5      updates
> fence-agents-brocade.x86_64           4.0.11-47.el7_3.5      updates
> fence-agents-cisco-mds.x86_64         4.0.11-47.el7_3.5      updates
> fence-agents-cisco-ucs.x86_64         4.0.11-47.el7_3.5      updates
> fence-agents-common.x86_64            4.0.11-47.el7_3.5      updates
> fence-agents-compute.x86_64           4.0.11-47.el7_3.5      updates
> fence-agents-drac5.x86_64             4.0.11-47.el7_3.5      updates
> fence-agents-eaton-snmp.x86_64        4.0.11-47.el7_3.5      updates
> fence-agents-emerson.x86_64           4.0.11-47.el7_3.5      updates
> fence-agents-eps.x86_64               4.0.11-47.el7_3.5      updates
> fence-agents-hpblade.x86_64           4.0.11-47.el7_3.5      updates
> fence-agents-ibmblade.x86_64          4.0.11-47.el7_3.5      updates
> fence-agents-ifmib.x86_64             4.0.11-47.el7_3.5      updates
> fence-agents-ilo-moonshot.x86_64      4.0.11-47.el7_3.5      updates
> fence-agents-ilo-mp.x86_64            4.0.11-47.el7_3.5      updates
> fence-agents-ilo-ssh.x86_64           4.0.11-47.el7_3.5      updates
> fence-agents-ilo2.x86_64              4.0.11-47.el7_3.5      updates
> 

Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host confirmation screen

2017-04-24 Thread knarra

On 04/24/2017 03:59 PM, Nelson Lameiras wrote:

Hi kasturi,

Thanks for your answer,

Indeed, I tried again and after 1 minute and 17 seconds (!!) the 
confirmation screen disappeared. Is it really necessary to wait this 
long for the screen to disappear? (I can see in the background that 
"upgrade" starts a few seconds after clicking ok)


When putting a host into maintenance mode, a circular "waiting" 
animation is used in order to warn the user that "something" is happening. A 
similar animation would be useful in the "upgrade" screen after clicking 
ok, no?


cordialement, regards,



Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux/ Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com 
www.lyra-network.com  | www.payzen.eu 

Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE


Not sure why it takes so long in your case; in my case it just takes a 
few seconds. But Yaniv mentioned a bug on this; it would be good to track it down.



*From: *"knarra" 
*To: *"Nelson Lameiras" , "ovirt 
users" 

*Sent: *Monday, April 24, 2017 7:34:17 AM
*Subject: *Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade 
host confirmation screen


On 04/21/2017 10:20 PM, Nelson Lameiras wrote:

Hello,

Since "upgrade" functionality is available for hosts in oVirt GUI
I have this strange bug :

- Click on "Installation>>Upgrade"
- Click "ok" on confirmation screen
- -> (bug) confirmation screen does not disappear as expected
- Click "ok" again on confirmation screen -> error : "system is
already upgrading"
- Click "cancel" to be able to return to oVirt

This happens on:
ovirt engine : oVirt Engine Version: 4.1.1.6-1.el7.centos
client : windows 10
client : chrome Version 57.0.2987.133 (64-bit)

This bug was already present on oVirt 4.0 before updating to 4.1.

Has anybody else had this problem?

(will try to reproduce with firefox, IE)

cordialement, regards,



Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux/ Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com

www.lyra-network.com  |
www.payzen.eu 

Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Nelson,

Once you click on 'OK' you will need to wait a few seconds 
(before the confirmation disappears), then you can see that the upgrade 
starts. In previous versions, once the user clicked 'OK', the 
confirmation screen usually disappeared immediately.


Thanks

kasturi




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] EXTERNAL: Re: Ovirt self hosted engine network issue

2017-04-24 Thread Brenneman, Brad B.
Ed,

   This is great news.  Thanks for the insight on the workaround for multiple 
gateways.

Brad


William “Brad” Brenneman | Leidos
Senior Systems Engineer | Naval Strike and Intelligence Division
6909 Metro Park Drive   Alexandria, VA 22310
phone: 571.319.8221
“Temporary” mobile:   571 213 6890
william.b.brenne...@leidos.com  |  
leidos.com


From: Edward Haas [mailto:eh...@redhat.com]
Sent: Saturday, April 22, 2017 3:05 AM
To: Brenneman, Brad B.
Cc: Users@ovirt.org
Subject: EXTERNAL: Re: [ovirt-users] Ovirt self hosted engine network issue

Hi Brad,
oVirt originally supported setting the host default gateway only on the 
management network (ovirtmgmt).
The need to set it on a different network has been raised, and for 4.1 an 
intermediate solution has been provided; it will be solved in a more integral 
fashion in 4.2.
The way it is solved in 4.1 is using a custom property.
The commit message explains how to use it: https://gerrit.ovirt.org/#/c/66127
Let us know if it helps.
Thanks,
Edy.
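As a generic illustration of the underlying problem (plain iproute2 policy routing at the OS level, NOT the oVirt custom-property mechanism Edy describes above; interface names and addresses are made up):

# send traffic sourced from em1's address via em1's own gateway,
# while the main routing table keeps its default route elsewhere
ip route add default via 192.0.2.1 dev em1 table 100
ip rule add from 192.0.2.50/32 table 100

The custom property from the gerrit commit is the supported way to get a similar result managed by oVirt itself.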

On Thu, Apr 20, 2017 at 7:59 PM, Brenneman, Brad B. 
> wrote:
Hi,

  I have an ovirt self hosted server with multiple physical NICs in it.

em1 – set for DHCP and used for web resources
bond0 (p1p1, p1p2) – set statically for communicating to internal hosts w/o 
internet

Before I installed the engine vm/appliance, system was able to browse the web 
on em1 and maintain internal comms on bond0 to internals hosts.

After installing self hosted engine/appliance, the host is unable to 
communicate to web resources on em1.  Ovirtmgmt is on the bond and has no comms 
issues.

I installed the logical network for the external web side per the online docs 
and attached it to em1.

Host system can ping the network gateways on each NIC. When launching firefox, 
the system is able to browse to the Hosted-engine VM pages (admin portal, user portal, 
etc.) but is unable to get to external web sites (e.g. google, 
ovirt.org, etc.)

I read that Ovirt 4.1 was supposed to have fixed the multiple gateway issue, 
but am confused as to why I can’t get out.

Any ideas how I can get the host to browse the web again?

Brad

William “Brad” Brenneman | Leidos
Senior Systems Engineer | Naval Strike and Intelligence Division
6909 Metro Park Drive   Alexandria, VA 22310
phone: 571.319.8221
“Temporary” mobile:   571 213 6890
william.b.brenne...@leidos.com  |  
leidos.com



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-04-24 Thread Beckman, Daniel
So I successfully upgraded my engine from 4.0.6 to 4.1.1 with no major issues.

A nice thing I noticed was that my custom CA certificate for https on the admin 
and user portals wasn’t clobbered by setup.

I did have to restore my custom settings for ISO uploader, log collector, and 
websocket proxy:
cp /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf. /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
cp /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf. /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
cp /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf. /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf

Now I’m moving on to updating the oVirt node hosts, which are currently at 
oVirt Node 4.0.6.1. (I’m assuming I should do that before attempting to upgrade 
the cluster and data center compatibility level to 4.1.)

When I right-click on a host and go to Installation / Check for Upgrade, the 
results are ‘no updates found.’ When I log into that host directly, I notice 
it’s still got the oVirt 4.0 repo, not 4.1. Is there an extra step I’m missing? 
The documentation I’ve found 
(http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/)
 doesn’t mention this.
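One step that is often suggested in this situation (an assumption here, not something confirmed in this thread): the repo files on a host come from its installed ovirt-release package, so a host still carrying the 4.0 repo typically needs the 4.1 release package installed first, e.g.:

# on the host, as root (standard yum usage)
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

after which Installation / Check for Upgrade can be retried from the GUI.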


**
If I can offer some unsolicited feedback: I feel like this list is populated 
with a lot of questions that could be averted with a little care and feeding of 
the documentation. It’s unfortunate because that makes for a rocky introduction 
to oVirt, and it makes it look like a neglected project, which I know is not 
the case.

On a related note, I know this has been discussed before but…
The centralized control in GitHub for the documentation does not really 
encourage user contributions. What’s wrong with a wiki? If we’re really 
concerned about bad or malicious edits being posted, keep the official docs in 
git and add a separate wiki that is clearly marked as user-contributed.
**


Thanks,
Daniel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-ha-agent cpu usage

2017-04-24 Thread Gianluca Cecchi
On Fri, Oct 7, 2016 at 3:35 PM, Simone Tiraboschi 
wrote:

>
>
>
> If I can apply the patch also to 4.0.3 I'm going to see if there is then a
> different behavior.
> Let me know,
>
>
> I'm trying it right now.
> Any other tests will be really appreciated.
>
> The patch is pretty simple, you can apply it on the fly.
> You have to shutdown ovirt-ha-broker and ovirt-ha-agent; then you could
> directly edit
> /usr/lib/python2.7/site-packages/api/vdsmapi.py
> around line 97 changing from
> loaded_schema = yaml.load(f)
> to
> loaded_schema = yaml.load(f, Loader=yaml.CLoader)
> Please pay attention to keep exactly the same amount of initial spaces.
>
> Then you can simply restart the HA agent and check.
>
>

Hello,
I'm again registering high CPU spikes of ovirt-ha-agent with only 2-3 VMs up
and with almost no activity.
The package of the involved file
/usr/lib/python2.7/site-packages/api/vdsmapi.py
is now at version vdsm-api-4.19.4-1.el7.centos.noarch and I see that the
file contains this kind of lines

129 try:
130 for path in paths:
131 with open(path) as f:
132 if hasattr(yaml, 'CLoader'):
133 loader = yaml.CLoader
134 else:
135 loader = yaml.Loader
136 loaded_schema = yaml.load(f, Loader=loader)
137
138 types = loaded_schema.pop('types')
139 self._types.update(types)
140 self._methods.update(loaded_schema)
141 except EnvironmentError:
142 raise SchemaNotFound("Unable to find API schema file")

So there is a conditional statement...
How can I be sure that "loader" is set to "yaml.CLoader" that was what in
4.0 was able to lower the cpu usage of ovirt-ha-agent?
Thanks,
Gianluca
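One quick way to answer that question (a generic PyYAML check, assuming the host's python2 is the one vdsm uses): CLoader exists only when PyYAML was built against libyaml, so hasattr tells you which branch the code above takes:

python -c 'import yaml; print(hasattr(yaml, "CLoader"))'
# True  -> yaml.CLoader is available and will be selected by the code above
# False -> only the slow pure-Python yaml.Loader is available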
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-24 Thread Gianluca Cecchi
On Mon, Apr 24, 2017 at 7:01 PM, Sharon Gratch  wrote:

> Hi,
>
>
> On Mon, Apr 24, 2017 at 6:06 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>>
>>
>> On Mon, Apr 24, 2017 at 12:57 PM, Evgenia Tokar 
>> wrote:
>>
>>>
>>> Thanks.
>>>
>>> 1. In the UI under vm devices tab do you have an entry for graphical
>>> device (type=spice)?
>>>
>>
>> It doesn't seem so...
>> See the whole content:
>> https://drive.google.com/file/d/0BwoPbcrMv8mvblV3dDlMelVFS1U
>> /view?usp=sharing
>>
>>
> According to the vm devices list you sent here, there is also no entry
> for the video device (device with type=qxl), although according to one
> of your previous mails, the vm "general" sub tab shows "Video Type: qxl".
>

I confirm that. But please note that in the same general sub tab the
"Graphics Protocol" appears as "None", while normally it should contain
"SPICE".
I resend the link of the general sub tab screenshot:
https://drive.google.com/file/d/0BwoPbcrMv8mvUURobHhKb0kxemM/
view?usp=sharing




>
> 1. Can you please check the vm console configuration to see if headless
> mode is off, just to make sure?  (in ui - edit the vm and check the console
> tab to see if the "Headless mode" is not checked).
>
> (*) headless mode is a new feature added to 4.1
>

I confirm headless is not checked.
See here (you see underneath also the general sub tab):
https://drive.google.com/file/d/0BwoPbcrMv8mvQVhzaGFHSmoxaGc/view?usp=sharing



> 2. what values are assigned in vm console tab for "Video Type" and
> "Graphics protocol" fields?
>

From the above screenshot you see that they are "QXL" and "SPICE"


>
> Thanks,
> Sharon
>
>
>
You are welcome.
I have another environment with a single server and SHE (self-hosted engine),
with a similar update history and now at 4.1.1.6-1 level, where I did not have
the problem, but in that environment the engine was configured with "VNC" and
"Cirrus", so it was not the same.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.1 and ovn problems

2017-04-24 Thread Gianluca Cecchi
On Mon, Apr 24, 2017 at 10:03 PM, Marcin Mirecki 
wrote:

> Looks like the south db works properly. The north db uses the same
> mechanism, just a different schema and port.
>
> Looking at the netstat output it looks like ovn north db is not even
> listening, or is there anything for 6641?
>
>
>
>
Actually yes... it seems that the switch "-t" together with "-p" doesn't catch
the 6641 and 6642 "LISTEN" lines, while if I use "-a" instead of "-t" I get
them too...

with "-a"
root@ovmgr1 ~]# netstat -apn | grep 664
tcp0  0 0.0.0.0:66410.0.0.0:*
LISTEN  6691/ovsdb-server
tcp0  0 0.0.0.0:66420.0.0.0:*
LISTEN  6699/ovsdb-server
tcp0  0 10.4.192.43:664210.4.168.76:38882
ESTABLISHED 6699/ovsdb-server
tcp0  0 10.4.192.43:664210.4.168.75:45486
ESTABLISHED 6699/ovsdb-server
tcp0  0 10.4.192.43:664210.4.168.74:59176
ESTABLISHED 6699/ovsdb-server
unix  3  [ ] STREAM CONNECTED 14119
664/vmtoolsd

with "-t"
[root@ovmgr1 ~]# netstat -tpn | grep 664
tcp0  0 10.4.192.43:664210.4.168.76:38882
ESTABLISHED 6699/ovsdb-server
tcp0  0 10.4.192.43:664210.4.168.75:45486
ESTABLISHED 6699/ovsdb-server
tcp0  0 10.4.192.43:664210.4.168.74:59176
ESTABLISHED 6699/ovsdb-server
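That is standard netstat behaviour rather than anything OVN-specific: without -l or -a, the -t switch lists only established TCP connections, never LISTEN sockets. To show just the listeners:

netstat -ltpn | grep 664
# -l listening sockets only, -t TCP, -p owning process, -n numeric addresses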
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.1 and ovn problems

2017-04-24 Thread Marcin Mirecki
Looks like the south db works properly. The north db uses the same
mechanism, just a different schema and port.

Looking at the netstat output it looks like ovn north db is not even
listening, or is there anything for 6641?







On Mon, Apr 24, 2017 at 10:37 AM, Gianluca Cecchi  wrote:

>
>
> On Sun, Apr 23, 2017 at 11:32 PM, Marcin Mirecki 
> wrote:
>
>> Hello Gianluca,
>>
>> Can you please check the ovn north db log.
>> This is placed in /var/log/openvswitch/ovsdb-server-nb.log
>> Please check if the logs has any new entries when you try to connect and
>> when you issue the 'ovn-nbctl set-connection ptcp:6641' command.
>> If the connection attempt is getting through, pvs db should print an
>> error to the log.
>>
>> Please also try restarting the ovn-northd service.
>>
>> Do the ovn-controllers connect to the south-db?
>> You can verify this by looking at /var/log/openvswitch/ovn-controller.log
>> on the ovn-controller host (please look for entries saying "... <south-db ip>:6642 connected")
>>
>> Marcin
>>
>>
>>
> The ovirt nb log contains:
> 2017-04-24T07:46:51.541Z|1|vlog|INFO|opened log file
> /var/log/openvswitch/ovsdb-server-nb.log
> 2017-04-24T07:46:51.550Z|2|ovsdb_server|INFO|ovsdb-server (Open
> vSwitch) 2.7.0
> 2017-04-24T07:47:01.560Z|3|memory|INFO|2268 kB peak resident set size
> after 10.0 seconds
> 2017-04-24T07:47:01.560Z|4|memory|INFO|cells:100 json-caches:1
> monitors:1 ses
>
> In my ovn-controller.log of my 3 hosts I have this, when I run the 2
> commands below on the provider host
>
> ovn-sbctl set-connection ptcp:6642
> ovn-nbctl set-connection ptcp:6641
>
>
> 2017-04-24T07:56:23.178Z|00247|reconnect|INFO|tcp:10.4.192.43:6642:
> connecting...
> 2017-04-24T07:56:23.178Z|00248|reconnect|INFO|tcp:10.4.192.43:6642:
> connection attempt failed (Connection refused)
> 2017-04-24T07:56:23.178Z|00249|reconnect|INFO|tcp:10.4.192.43:6642:
> waiting 8 seconds before reconnect
> 2017-04-24T07:56:31.187Z|00250|reconnect|INFO|tcp:10.4.192.43:6642:
> connecting...
> 2017-04-24T07:56:31.188Z|00251|reconnect|INFO|tcp:10.4.192.43:6642:
> connected
> 2017-04-24T07:56:31.193Z|00252|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt:
> connecting to switch
> 2017-04-24T07:56:31.193Z|00253|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
> connecting...
> 2017-04-24T07:56:31.201Z|00254|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
> connected
> 2017-04-24T07:56:31.201Z|00255|pinctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt:
> connecting to switch
> 2017-04-24T07:56:31.201Z|00256|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
> connecting...
> 2017-04-24T07:56:31.201Z|00257|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
> connected
> 2017-04-24T07:56:31.202Z|00258|binding|INFO|Releasing lport
> 0a2a47bc-ea0d-4f1d-8f49-ec903e519983 from this chassis.
>
> On my provider I see then the 3 oVirt hosts connected:
> [root@ovmgr1 openvswitch]# netstat -tpn|grep 66
> tcp        0      0 10.4.192.43:6642    10.4.168.76:38882   ESTABLISHED  6699/ovsdb-server
> tcp        0      0 10.4.192.43:6642    10.4.168.75:45486   ESTABLISHED  6699/ovsdb-server
> tcp        0      0 127.0.0.1:5432      127.0.0.1:37074     ESTABLISHED  16696/postgres: eng
> tcp        0      0 10.4.192.43:6642    10.4.168.74:59176   ESTABLISHED  6699/ovsdb-server
> [root@ovmgr1 openvswitch]#
>
> But it seems that the "set" command above is not persistent across reboots
> of the provider host, which in my case is the oVirt engine server
>
>
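A quick check for whether the setting survived the reboot (assuming get-connection is available as the read counterpart of the set-connection command used above):

ovn-nbctl get-connection   # should print ptcp:6641 if the north db setting persisted
ovn-sbctl get-connection   # should print ptcp:6642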


-- 

MARCIN mIRECKI

Red Hat



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-24 Thread Sharon Gratch
Hi,


On Mon, Apr 24, 2017 at 6:06 PM, Gianluca Cecchi 
wrote:

>
>
> On Mon, Apr 24, 2017 at 12:57 PM, Evgenia Tokar  wrote:
>
>>
>> Thanks.
>>
>> 1. In the UI under vm devices tab do you have an entry for graphical
>> device (type=spice)?
>>
>
> It doesn't seem so...
> See the whole content:
> https://drive.google.com/file/d/0BwoPbcrMv8mvblV3dDlMelVFS1U
> /view?usp=sharing
>
>
According to the vm devices list you sent here, there is also no entry for
the video device (device with type=qxl), although according to one of
your previous mails, the vm "general" sub tab shows "Video Type: qxl".

1. Can you please check the vm console configuration to see if headless
mode is off, just to make sure?  (in ui - edit the vm and check the console
tab to see if the "Headless mode" is not checked).

(*) headless mode is a new feature added to 4.1

2. what values are assigned in vm console tab for "Video Type" and
"Graphics protocol" fields?

Thanks,
Sharon



>
>> 2. Can you paste again the contents of the local vm.conf? If you have a
>> graphical device in the engine it should appear there as well.
>>
>> Jenny
>>
>
> [root@ractor ovirt-hosted-engine-ha]# cat /run/ovirt-hosted-engine-ha/vm.conf
> cpuType=Nehalem
> emulatedMachine=pc-i440fx-rhel7.3.0
> vmId=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683
> smp=4
> memSize=16384
> maxVCpus=16
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> vmName=HostedEngine
> display=qxl
> devices={index:0,iface:virtio,format:raw,bootOrder:1,address:{slot:0x06,bus:0x00,domain:0x,type:pci,function:0x0},volumeID:43ee87b9-4293-4d43-beab-582f500667a7,imageID:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,readonly:false,domainID:2025c2ea-6205-4bc1-b29d-745b47f8f806,deviceId:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,poolID:----,device:disk,shared:exclusive,propagateErrors:off,type:disk}
> devices={nicModel:pv,macAddr:00:16:3e:3a:ee:a5,linkActive:true,network:ovirtmgmt,deviceId:4bbb90e6-4f8e-42e0-91ea-d894125ff4a8,address:{slot:0x03,bus:0x00,domain:0x,type:pci,function:0x0},device:bridge,type:interface}
> devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}
> devices={device:usb,type:controller,deviceId:ee985889-6878-463a-a415-9b50a4a810b3,address:{slot:0x01,bus:0x00,domain:0x0000,type:pci,function:0x2}}
> devices={device:virtio-serial,type:controller,deviceId:d99705cd-0ebf-40f0-950b-575ab4e6d934,address:{slot:0x05,bus:0x00,domain:0x,type:pci,function:0x0}}
> devices={device:ide,type:controller,deviceId:ef31f1a2-746a-4188-ae45-ef157d7b5598,address:{slot:0x01,bus:0x00,domain:0x0000,type:pci,function:0x1}}
> devices={device:scsi,model:virtio-scsi,type:controller,deviceId:f41baf47-51f8-42e9-a290-70da06191991,address:{slot:0x04,bus:0x00,domain:0x,type:pci,function:0x0}}
> devices={alias:rng0,specParams:{source:urandom},deviceId:4c7f0e81-c3e8-498f-a5a2-b8c1543e94b4,address:{slot:0x02,bus:0x00,domain:0x,type:pci,function:0x0},device:virtio,model:virtio,type:rng}
> devices={device:console,type:console}
> [root@ractor ovirt-hosted-engine-ha]#
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hi Denis, understood.
> What if in the case of adding a fourth host to the running cluster, will
> the copy of data be kept only twice in any of the 4 servers ?
>

Replica volumes can be built only from 2 or 3 bricks. There is no way to
make a replica volume from 4 bricks.

But you may combine distributed volumes and replica volumes [1]:

gluster volume create test-volume replica 2 transport tcp server1:/b1
server2:/b2 server3:/b3 server4:/b4

test-volume would be like a RAID10 - you will have two replica volumes,
b1+b2 and b3+b4, combined into a single distributed volume. In that case you
will have only two copies of your data. Part of your data will be stored twice
on b1 and b2 and the other part will be stored twice on b3 and b4.
You will be able to extend that distributed volume by adding new replicas, as
sketched below.
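A sketch of that extension (host and brick names assumed); new bricks are added in multiples of the replica count:

# add a third replica pair b5+b6 to the distributed-replicated volume
gluster volume add-brick test-volume server5:/b5 server6:/b6
# then spread existing data onto the new bricks
gluster volume rebalance test-volume start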


[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move Hosted Engine disk?

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 4:34 PM, gflwqs gflwqs  wrote:

>
> I am running ovirt4.1.1 and moving my vm:s is no problem.
> However how do i move my hosted engine disk to the new FC SAN?
> In the engine gui i am able to click move disk but is this enough?
>
>
Short answer: No, it's not enough. Use backup/restore

Long answer: It would be better to reinstall Hosted Engine on a new FC SAN
and restore database from a backup, but that operations is quite tricky.

It also requires some extra space for VMs, as you will be putting a couple
of hosts into maintenance mode and at least one host will be reinstalled.

I would recommend you to migrate all your VMs to non-HE hosts at first. If
you don't have enough hosts for that, you should either temporarily
undeploy HE from some hosts or shutdown some VMs. If both options are not
possible, you may still continue, but you may have some undesired effects.

After that take a database backup and execute engine-migrate-he.py script.
This script will put one of your hosts into maintenance mode, so you will
need some extra space for your VMs (including HE VM).

Now you are safe to switch Hosted Engine to global maintenance mode,
shutdown you HE VM, redeploy HE on some host using new FC SAN and restore
your database. The engine-backup script should be executed with the
--he-remove-hosts option.

Finally, immediately after database restoration, redeploy the existing HE
hosts, so they will join the new HE cluster. It is also safe to activate the
host that was put into maintenance mode by the engine-migrate-he.py script
(but if it was a HE host, just reinstall it).
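A rough sketch of the engine-backup commands involved (paths are placeholders; the exact restore flags for your version should be checked with engine-backup --help):

# on the old engine VM, before the migration:
engine-backup --mode=backup --file=/root/engine-backup.tar.gz --log=/root/backup.log

# on the freshly deployed engine VM on the new FC SAN:
engine-backup --mode=restore --file=/root/engine-backup.tar.gz \
    --log=/root/restore.log --provision-db --he-remove-hosts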
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-24 Thread Gianluca Cecchi
On Mon, Apr 24, 2017 at 12:57 PM, Evgenia Tokar  wrote:

>
> Thanks.
>
> 1. In the UI under vm devices tab do you have an entry for graphical
> device (type=spice)?
>

It doesn't seem so...
See the whole content:
https://drive.google.com/file/d/0BwoPbcrMv8mvblV3dDlMelVFS1U/view?usp=sharing



> 2. Can you paste again the contents of the local vm.conf? If you have a
> graphical device in the engine it should appear there as well.
>
> Jenny
>

[root@ractor ovirt-hosted-engine-ha]# cat
/run/ovirt-hosted-engine-ha/vm.conf
cpuType=Nehalem
emulatedMachine=pc-i440fx-rhel7.3.0
vmId=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683
smp=4
memSize=16384
maxVCpus=16
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
vmName=HostedEngine
display=qxl
devices={index:0,iface:virtio,format:raw,bootOrder:1,address:{slot:0x06,bus:0x00,domain:0x,type:pci,function:0x0},volumeID:43ee87b9-4293-4d43-beab-582f500667a7,imageID:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,readonly:false,domainID:2025c2ea-6205-4bc1-b29d-745b47f8f806,deviceId:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,poolID:----,device:disk,shared:exclusive,propagateErrors:off,type:disk}
devices={nicModel:pv,macAddr:00:16:3e:3a:ee:a5,linkActive:true,network:ovirtmgmt,deviceId:4bbb90e6-4f8e-42e0-91ea-d894125ff4a8,address:{slot:0x03,bus:0x00,domain:0x,type:pci,function:0x0},device:bridge,type:interface}
devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}
devices={device:usb,type:controller,deviceId:ee985889-6878-463a-a415-9b50a4a810b3,address:{slot:0x01,bus:0x00,domain:0x,type:pci,function:0x2}}
devices={device:virtio-serial,type:controller,deviceId:d99705cd-0ebf-40f0-950b-575ab4e6d934,address:{slot:0x05,bus:0x00,domain:0x,type:pci,function:0x0}}
devices={device:ide,type:controller,deviceId:ef31f1a2-746a-4188-ae45-ef157d7b5598,address:{slot:0x01,bus:0x00,domain:0x,type:pci,function:0x1}}
devices={device:scsi,model:virtio-scsi,type:controller,deviceId:f41baf47-51f8-42e9-a290-70da06191991,address:{slot:0x04,bus:0x00,domain:0x,type:pci,function:0x0}}
devices={alias:rng0,specParams:{source:urandom},deviceId:4c7f0e81-c3e8-498f-a5a2-b8c1543e94b4,address:{slot:0x02,bus:0x00,domain:0x,type:pci,function:0x0},device:virtio,model:virtio,type:rng}
devices={device:console,type:console}
[root@ractor ovirt-hosted-engine-ha]#
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt minor upgrades for nodes via GUI or CLI?

2017-04-24 Thread Matthias Leopold



Am 2017-04-24 um 16:37 schrieb Yedidyah Bar David:

On Mon, Apr 24, 2017 at 5:28 PM, Matthias Leopold
 wrote:

hi,

i'm still testing ovirt 4.1.

i installed engine and 2 nodes in vanilla centos 7.3 hosts with everything
that came from http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

i regularly checked for updates in the engine host OS with "yum update" (is
there a gui option for this?). it obviously got an ovirt update from version
4.1.0 to 4.1.1.1 already some time ago.

i regularly checked for updates in the nodes via the ovirt web gui
(installation - check for upgrade). there were package updates available
and installed in the past so i thought that everything was fine.

now i checked with "yum check-update" in the nodes OS shell and noticed that
ovirt-release41 is still on 4.1.0 and there are 81 packages available for
update (from centos base _and_ ovirt repos including ovirt-release41
itself). ovirt gui tells me 'no updates found'.


I think this function only checks for specific packages, not everything
yum reports.



why didn't these updates get installed? is it because of the ovirt-release41
update? do i have to do this manually with yum?


ovirt-release41 itself is not one of these packages, and should in principle
be considered "another package" (just like any other package you installed
on your machine).

Which packages does yum say you have updates for?


first my repos, to be sure:

# yum repolist
...
Repo-ID                                    Repo-Name                                          Status
base/7/x86_64                              CentOS-7 - Base                                     9.363
centos-opstools-testing/x86_64             CentOS-7 - OpsTools - testing repo                    448
centos-ovirt-common-candidate/x86_64       CentOS-7 - oVirt common                               198
centos-ovirt41-candidate/x86_64            CentOS-7 - oVirt 4.1                                   95
extras/7/x86_64                            CentOS-7 - Extras                                     337
ovirt-4.1/7                                Latest oVirt 4.1 Release                              455
ovirt-4.1-centos-gluster38/x86_64          CentOS-7 - Gluster 3.8                                181
ovirt-4.1-epel/x86_64                      Extra Packages for Enterprise Linux 7 - x86_64     11.550
ovirt-4.1-patternfly1-noarch-epel/x86_64   Copr repo for patternfly1 owned by patternfly           2
updates/7/x86_64                           CentOS-7 - Updates                                  1.575
virtio-win-stable                          virtio-win builds roughly matching what was shipped in latest RHEL    4

repolist: 24.208

# yum check-update
...
NetworkManager.x86_64                 1:1.4.0-19.el7_3       updates
NetworkManager-config-server.x86_64   1:1.4.0-19.el7_3       updates
NetworkManager-libnm.x86_64           1:1.4.0-19.el7_3       updates
NetworkManager-team.x86_64            1:1.4.0-19.el7_3       updates
NetworkManager-tui.x86_64             1:1.4.0-19.el7_3       updates
NetworkManager-wifi.x86_64            1:1.4.0-19.el7_3       updates
bind-libs-lite.x86_64                 32:9.9.4-38.el7_3.3    updates
bind-license.noarch                   32:9.9.4-38.el7_3.3    updates
ca-certificates.noarch                2017.2.11-70.1.el7_3   updates
dmidecode.x86_64                      1:3.0-2.1.el7_3        updates
fence-agents-all.x86_64               4.0.11-47.el7_3.5      updates
fence-agents-apc.x86_64               4.0.11-47.el7_3.5      updates
fence-agents-apc-snmp.x86_64          4.0.11-47.el7_3.5      updates
fence-agents-bladecenter.x86_64       4.0.11-47.el7_3.5      updates
fence-agents-brocade.x86_64           4.0.11-47.el7_3.5      updates
fence-agents-cisco-mds.x86_64         4.0.11-47.el7_3.5      updates
fence-agents-cisco-ucs.x86_64         4.0.11-47.el7_3.5      updates
fence-agents-common.x86_64            4.0.11-47.el7_3.5      updates
fence-agents-compute.x86_64           4.0.11-47.el7_3.5      updates
fence-agents-drac5.x86_64             4.0.11-47.el7_3.5      updates
fence-agents-eaton-snmp.x86_64        4.0.11-47.el7_3.5      updates
fence-agents-emerson.x86_64           4.0.11-47.el7_3.5      updates

Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Ok, great, thanks for the clarification.

Therefore a replica 3 configuration means the raw storage space cost is 
'similar' to a RAID 1, and the actual data exists only twice, on two 
different servers.


Regards
Fernando


On 24/04/2017 11:35, Denis Chaplygin wrote:
With an arbiter volume you still have a replica 3 volume, meaning that 
you have three participants in your quorum. But only two of those 
participants keep the actual data. The third one, the arbiter, stores only 
some metadata, not the file contents, so data is not replicated 3 times.


On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI 
> wrote:


But then quorum doesn't replicate data 3 times, does it ?

Fernando


On 24/04/2017 10:24, Denis Chaplygin wrote:

Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI
> wrote:

Out of curiosity, why do you and people in general use
replica 3 more than replica 2 ?


The answer is simple - quorum. With just two participants you
don't know what to do when your peer is unreachable. When you
have three participants, you are able to establish a majority. In
that case, when two participants are able to communicate, they
know that they form the majority, and the lesser part of the
cluster knows that it should not accept any changes.

If I understand correctly this seems overkill and a waste of
storage, as 2 copies of data (replica 2) seem pretty
reasonable, similar to RAID 1, and still in the worst case the
data can be replicated after a failure. I see that replica 3
helps more on performance at the cost of space.


You are absolutely right. You need two copies of data to provide
data redundancy and you need three (or more) members in the cluster
to provide a distinguishable majority. Therefore we have arbiter
volumes, thus solving that issue [1].

[1]

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Setup

2017-04-24 Thread Mohd Zainal Abidin
Hi,

Hardware:

1x DL380 G9 32GB Memory (Ovirt Engine)
3x DL560 G9 512GB Memory (Hypervisor)
4x DL180se 6x3TB 32GB Memory (Storage - GlusterFS)

What is the best setup for my requirements? Which steps do I need to do? How
many NICs do I need to use for each server?

-- 
Thank you
__

Mohd Zainal Abidin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt minor upgrades for nodes via GUI or CLI?

2017-04-24 Thread Yedidyah Bar David
On Mon, Apr 24, 2017 at 5:28 PM, Matthias Leopold
 wrote:
> hi,
>
> i'm still testing ovirt 4.1.
>
> i installed engine and 2 nodes in vanilla centos 7.3 hosts with everything
> that came from http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
>
> i regularly checked for updates in the engine host OS with "yum update" (is
> there a gui option for this?). it obviously got an ovirt update from version
> 4.1.0 to 4.1.1.1 already some time ago.
>
> i regularly checked for updates in the nodes via the ovirt web gui
> (installation - check for upgrade). there were package updates available
> and installed in the past so i thought that everything was fine.
>
> now i checked with "yum check-update" in the nodes OS shell and noticed that
> ovirt-release41 is still on 4.1.0 and there are 81 packages available for
> update (from centos base _and_ ovirt repos including ovirt-release41
> itself). ovirt gui tells me 'no updates found'.

I think this function only checks for specific packages, not everything
yum reports.

>
> why didn't these updates get installed? is it because of the ovirt-release41
> update? do i have to do this manually with yum?

ovirt-release41 itself is not one of these packages, and should in principle
be considered "another package" (just like any other package you installed
on your machine).
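In practice that means updating it like any other package (a generic example, assuming root on the node):

yum update ovirt-release41   # refresh the repo definitions first
yum update                   # then apply the remaining pending updates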

Which packages does yum say you have updates for?

Best,

>
> thx
> matthias
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
With an arbiter volume you still have a replica 3 volume, meaning that you
have three participants in your quorum. But only two of those participants
keep the actual data. The third one, the arbiter, stores only some metadata,
not the file contents, so data is not replicated 3 times.
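Creating such a volume looks like this (a sketch with assumed host and brick names; the third brick becomes the metadata-only arbiter):

gluster volume create myvol replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb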

On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> But then quorum doesn't replicate data 3 times, does it ?
>
> Fernando
>
> On 24/04/2017 10:24, Denis Chaplygin wrote:
>
> Hello!
>
> On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Out of curiosity, why do you and people in general use replica 3 more
>> than replica 2 ?
>>
>
> The answer is simple - quorum. With just two participants you don't know
> what to do when your peer is unreachable. When you have three
> participants, you are able to establish a majority. In that case, when two
> participants are able to communicate, they know that they form the majority,
> and the lesser part of the cluster knows that it should not accept any changes.
>
>
>> If I understand correctly this seems overkill and a waste of storage, as 2
>> copies of data (replica 2) seem pretty reasonable, similar to RAID 1, and
>> still in the worst case the data can be replicated after a failure. I see that
>> replica 3 helps more on performance at the cost of space.
>>
>> You are absolutely right. You need two copies of data to provide data
> redundancy and you need three (or more) members in the cluster to provide
> a distinguishable majority. Therefore we have arbiter volumes, thus solving
> that issue [1].
>
> [1] https://gluster.readthedocs.io/en/latest/Administrator%
> 20Guide/arbiter-volumes-and-quorum/
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Move Hosted Engine disk?

2017-04-24 Thread gflwqs gflwqs
Hi list,
We are migrating away from our current ISCSI SAN to an FC SAN.

I am running ovirt4.1.1 and moving my vm:s is no problem.
However how do i move my hosted engine disk to the new FC SAN?
In the engine gui i am able to click move disk but is this enough?

Regards
Christian
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] after upgrade from 4.0.4 to 4.1.1, no more gettagsbyparent_id

2017-04-24 Thread Fabrice Bacchella
I tried to upgrade from 4.0 to 4.1, I ran engine-setup

And now ovirt-engine start fails with command like:

Caused by: org.springframework.jdbc.BadSqlGrammarException: 
PreparedStatementCallback; bad SQL grammar [select * from  
gettagsbyparent_id()]; nested exception is org.postgresql.util.PSQLException: 
ERROR: function gettagsbyparent_id() does not exist


In the setup log I indeed found:

 drop function if exists public.gettagsbyparent_id(uuid) cascade;


But nothing else. This function is not recreated. Any hint about what happened ?
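One way to confirm what the setup log suggests (assuming the default 'engine' database name and access as the postgres user):

su - postgres -c "psql engine -c '\df gettagsbyparent_id'"
# an empty result means the function was dropped and never recreated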

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Out of curiosity, why do you and people in general use replica 3 more than
> replica 2 ?
>

The answer is simple - quorum. With just two participants you don't know
what to do when your peer is unreachable. When you have three
participants, you are able to establish a majority. In that case, when two
participants are able to communicate, they know that they form the majority,
and the lesser part of the cluster knows that it should not accept any changes.


> If I understand correctly this seems overkill and a waste of storage, as 2
> copies of data (replica 2) seem pretty reasonable, similar to RAID 1, and
> still in the worst case the data can be replicated after a failure. I see that
> replica 3 helps more on performance at the cost of space.
>
> You are absolutely right. You need two copies of data to provide data
redundancy and you need three (or more) members in the cluster to provide
a distinguishable majority. Therefore we have arbiter volumes, thus solving
that issue [1].

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Hello.

Out of curiosity, why do you and people in general use replica 3 
more than replica 2 ?


If I understand correctly this seems overkill and a waste of storage, as 2 
copies of data (replica 2) seem pretty reasonable, similar to RAID 1, 
and still in the worst case the data can be replicated after a failure. I 
see that replica 3 helps more on performance at the cost of space.


Fernando


On 24/04/2017 08:33, Sven Achtelik wrote:


Hi All,

my oVirt-Setup is 3 Hosts with gluster and replica 3. I always try to 
stay on the current version and I’m applying updates/upgrade if there 
are any. For this I put a host in maintenance and also use the “Stop 
Gluster Service”  checkbox. After it’s done updating I’ll set it back 
to active and wait until the engine sees all bricks again and then 
I’ll go for the next host.


This worked fine for me the last month and now that I have more and 
more VMs running the changes that are written to the gluster volume 
while a host is in maintenance become a lot more and it takes pretty 
long for the healing to complete. What I don’t understand is that I 
don’t really see a lot of network usage in the GUI during that time 
and it feels quite slow. The Network for the gluster is a 10G and I’m 
quite happy with the performance of it, it’s just the healing that 
takes long. I noticed that because I couldn’t update the third host 
because of unsynced gluster volumes.


Is there any limiting variable that slows down traffic during healing 
that needs to be configured ? Or should I maybe change my updating 
process somehow to avoid having so many changes in queue?


Thank you,

Sven



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Fwd: import domain and import template

2017-04-24 Thread Jakub Niedermertl
-- Forwarded message --
From: Jakub Niedermertl 
Date: Mon, Apr 24, 2017 at 2:25 PM
Subject: Re: [ovirt-users] import domain and import template
To: "qinglong.d...@horebdata.cn" 


Hi,

thank you for reporting. This is a bug, tracked at
https://bugzilla.redhat.com/show_bug.cgi?id=1444848.

Regards
Jakub

On Thu, Apr 20, 2017 at 9:31 AM, qinglong.d...@horebdata.cn <
qinglong.d...@horebdata.cn> wrote:

> Hi,
> I have created an ovirt 4.1.1.6 environment a few days ago. And I
> have imported a data domain which was used in early version(4.0.0.5)
> sucessfully. But I got a same error when importing all templates of the
> data domain:
> Anyone can help? Thanks!
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread knarra

On 04/24/2017 05:36 PM, Sven Achtelik wrote:


Hi Kasturi,

I’ll try that. Will this be persistent over a reboot of a host or even 
stopping of the complete cluster ?


Thank you


Hi Sven,

This is a volume set option (it has nothing to do with reboot) and it 
will be present on the volume until you reset it manually using the 'gluster 
volume reset' command. You just need to execute 'gluster volume heal 
VOLNAME granular-entry-heal enable' and this will do the right thing 
for you.
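Concretely, for a running volume named e.g. 'data' (the name here is assumed):

gluster volume heal data granular-entry-heal enable
gluster volume get data cluster.granular-entry-heal   # verify the option stuck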


Thanks
kasturi.


From: knarra [mailto:kna...@redhat.com]
Sent: Monday, April 24, 2017 13:44
To: Sven Achtelik ; users@ovirt.org
Subject: Re: [ovirt-users] Hyperconverged Setup and Gluster healing

On 04/24/2017 05:03 PM, Sven Achtelik wrote:

Hi All,

my oVirt-Setup is 3 Hosts with gluster and replica 3. I always
try to stay on the current version and I’m applying
updates/upgrade if there are any. For this I put a host in
maintenance and also use the “Stop Gluster Service”  checkbox.
After it’s done updating I’ll set it back to active and wait until
the engine sees all bricks again and then I’ll go for the next host.

This worked fine for me the last month and now that I have more
and more VMs running the changes that are written to the gluster
volume while a host is in maintenance become a lot more and it
takes pretty long for the healing to complete. What I don’t
understand is that I don’t really see a lot of network usage in
the GUI during that time and it feels quite slow. The Network for
the gluster is a 10G and I’m quite happy with the performance of
it, it’s just the healing that takes long. I noticed that because
I couldn’t update the third host because of unsynced gluster volumes.

Is there any limiting variable that slows down traffic during
healing that needs to be configured ? Or should I maybe change my
updating process somehow to avoid having so many changes in queue?

Thank you,

Sven



___

Users mailing list

Users@ovirt.org 

http://lists.ovirt.org/mailman/listinfo/users

Hi Sven,

Do you have granular entry heal enabled on the volume? If no, 
there is a feature called granular entry self-heal which should be 
enabled with sharded volumes to get the benefits. So when a brick goes 
down and, say, only 1 in a million entries is created/deleted, 
self-heal would be done for only that file; it won't crawl the entire 
directory.


You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal 
enable / disable' command only if the volume is in Created state. If 
the volume is in any state other than Created, for 
example Started, Stopped, and so on, execute the 'gluster volume heal 
VOLNAME granular-entry-heal enable / disable' command to enable or 
disable the granular-entry-heal option.


Thanks

kasturi



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Sven Achtelik
Hi Kasturi,

I'll try that. Will this be persistent over a reboot of a host or even stopping 
of the complete cluster ?


Thank you
From: knarra [mailto:kna...@redhat.com]
Sent: Monday, April 24, 2017 13:44
To: Sven Achtelik ; users@ovirt.org
Subject: Re: [ovirt-users] Hyperconverged Setup and Gluster healing

On 04/24/2017 05:03 PM, Sven Achtelik wrote:
Hi All,

my oVirt-Setup is 3 Hosts with gluster and replica 3. I always try to stay on 
the current version and I'm applying updates/upgrade if there are any. For this 
I put a host in maintenance and also use the "Stop Gluster Service"  checkbox. 
After it's done updating I'll set it back to active and wait until the engine 
sees all bricks again and then I'll go for the next host.

This worked fine for me the last month and now that I have more and more VMs 
running the changes that are written to the gluster volume while a host is in 
maintenance become a lot more and it takes pretty long for the healing to 
complete. What I don't understand is that I don't really see a lot of network 
usage in the GUI during that time and it feels quite slow. The Network for the 
gluster is a 10G and I'm quite happy with the performance of it, it's just the 
healing that takes long. I noticed that because I couldn't update the third 
host because of unsynced gluster volumes.

Is there any limiting variable that slows down traffic during healing that 
needs to be configured ? Or should I maybe change my updating process somehow 
to avoid having so many changes in queue?

Thank you,

Sven




___

Users mailing list

Users@ovirt.org

http://lists.ovirt.org/mailman/listinfo/users

Hi Sven,

Do you have granular entry heal enabled on the volume? If no, there is a 
feature called granular entry self-heal which should be enabled with sharded 
volumes to get the benefits. So when a brick goes down and, say, only 1 in a 
million entries is created/deleted, self-heal would be done for only that file; 
it won't crawl the entire directory.

You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal enable / 
disable' command only if the volume is in Created state. If the volume is in any 
state other than Created, for example Started, Stopped, and so on, 
execute the 'gluster volume heal VOLNAME granular-entry-heal enable / disable' 
command to enable or disable the granular-entry-heal option.

Thanks

kasturi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread knarra

On 04/24/2017 05:03 PM, Sven Achtelik wrote:


Hi All,

my oVirt-Setup is 3 Hosts with gluster and replica 3. I always try to 
stay on the current version and I’m applying updates/upgrade if there 
are any. For this I put a host in maintenance and also use the “Stop 
Gluster Service”  checkbox. After it’s done updating I’ll set it back 
to active and wait until the engine sees all bricks again and then 
I’ll go for the next host.


This worked fine for me the last month and now that I have more and 
more VMs running the changes that are written to the gluster volume 
while a host is in maintenance become a lot more and it takes pretty 
long for the healing to complete. What I don’t understand is that I 
don’t really see a lot of network usage in the GUI during that time 
and it feels quite slow. The Network for the gluster is a 10G and I’m 
quite happy with the performance of it, it’s just the healing that 
takes long. I noticed that because I couldn’t update the third host 
because of unsynced gluster volumes.


Is there any limiting variable that slows down traffic during healing 
that needs to be configured ? Or should I maybe change my updating 
process somehow to avoid having so many changes in queue?


Thank you,

Sven



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Sven,

Do you have granular entry heal enabled on the volume? If no, there 
is a feature called granular entry self-heal which should be enabled 
with sharded volumes to get the benefits. So when a brick goes down and, 
say, only 1 in a million entries is created/deleted, self-heal would 
be done for only that file; it won't crawl the entire directory.


You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal 
enable / disable' command only if the volume is in Created state. If the 
volume is in any state other than Created, for 
example Started, Stopped, and so on, execute the 'gluster volume heal 
VOLNAME granular-entry-heal enable / disable' command to enable or 
disable the granular-entry-heal option.


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Sven Achtelik
Hi All,

my oVirt-Setup is 3 Hosts with gluster and replica 3. I always try to stay on 
the current version and I'm applying updates/upgrade if there are any. For this 
I put a host in maintenance and also use the "Stop Gluster Service"  checkbox. 
After it's done updating I'll set it back to active and wait until the engine 
sees all bricks again and then I'll go for the next host.

This worked fine for me the last month and now that I have more and more VMs 
running the changes that are written to the gluster volume while a host is in 
maintenance become a lot more and it takes pretty long for the healing to 
complete. What I don't understand is that I don't really see a lot of network 
usage in the GUI during that time and it feels quite slow. The Network for the 
gluster is a 10G and I'm quite happy with the performance of it, it's just the 
healing that takes long. I noticed that because I couldn't update the third 
host because of unsynced gluster volumes.

Is there any limiting variable that slows down traffic during healing that 
needs to be configured ? Or should I maybe change my updating process somehow 
to avoid having so many changes in queue?

Thank you,

Sven

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-24 Thread Evgenia Tokar
Thanks.

1. In the UI under vm devices tab do you have an entry for graphical device
(type=spice)?
2. Can you paste again the contents of the local vm.conf? If you have a
graphical device in the engine it should appear there as well.

Jenny
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine FCP SAN can not activate data domain

2017-04-24 Thread Jens Oechsler
Hello
I have a problem with oVirt Hosted Engine Setup version: 4.0.5.5-1.el7.centos.
Setup is using FCP SAN for data and engine.
Cluster has worked fine for a while. It has two hosts with VMs running.
I extended storage with an additional LUN recently. This LUN seems to
be gone from the data domain, and one VM is paused which I assume has data
on that device.

Got these errors in events:

Apr 24, 2017 10:26:05 AM
Failed to activate Storage Domain SD (Data Center DC) by admin@internal-authz
Apr 10, 2017 3:38:08 PM
Status of host cl01 was set to Up.
Apr 10, 2017 3:38:03 PM
Host cl01 does not enforce SELinux. Current status: DISABLED
Apr 10, 2017 3:37:58 PM
Host cl01 is initializing. Message: Recovering from crash or Initializing
Apr 10, 2017 3:37:58 PM
VDSM cl01 command failed: Recovering from crash or Initializing
Apr 10, 2017 3:37:46 PM
Failed to Reconstruct Master Domain for Data Center DC.
Apr 10, 2017 3:37:46 PM
Host cl01 is not responding. Host cannot be fenced automatically
because power management for the host is disabled.
Apr 10, 2017 3:37:46 PM
VDSM cl01 command failed: Broken pipe
Apr 10, 2017 3:37:46 PM
VDSM cl01 command failed: Broken pipe
Apr 10, 2017 3:32:45 PM
Invalid status on Data Center DC. Setting Data Center status to Non
Responsive (On host cl01, Error: General Exception).
Apr 10, 2017 3:32:45 PM
VDSM cl01 command failed: [Errno 19] Could not find dm device named `[unknown]`
Apr 7, 2017 1:28:04 PM
VM HostedEngine is down with error. Exit message: resource busy:
Failed to acquire lock: error -243.
Apr 7, 2017 1:28:02 PM
Storage Pool Manager runs on Host cl01 (Address: cl01).
Apr 7, 2017 1:27:59 PM
Invalid status on Data Center DC. Setting status to Non Responsive.
Apr 7, 2017 1:27:53 PM
Host cl02 does not enforce SELinux. Current status: DISABLED
Apr 7, 2017 1:27:52 PM
Host cl01 does not enforce SELinux. Current status: DISABLED
Apr 7, 2017 1:27:49 PM
Affinity Rules Enforcement Manager started.
Apr 7, 2017 1:27:34 PM
ETL Service Started
Apr 7, 2017 1:26:01 PM
ETL Service Stopped
Apr 3, 2017 1:22:54 PM
Shutdown of VM HostedEngine failed.
Apr 3, 2017 1:22:52 PM
Storage Pool Manager runs on Host cl01 (Address: cl01).
Apr 3, 2017 1:22:49 PM
Invalid status on Data Center DC. Setting status to Non Responsive.


Master data domain is inactive.
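Some generic first diagnostics for a PV that LVM reports missing (not steps given in this thread; rescan-scsi-bus.sh assumes sg3_utils is installed):

pvs -o +pv_uuid      # is jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U listed on any device?
multipath -ll        # are all paths of the extended/new LUN present?
rescan-scsi-bus.sh   # rescan the FC buses for the LUN
pvscan --cache       # as the log warnings below themselves suggest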


vdsm.log:

jsonrpc.Executor/5::INFO::2017-04-20 07:01:26,796::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs: vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['ids']
jsonrpc.Executor/5::DEBUG::2017-04-20 07:01:26,796::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/360050768018182b6c99e|[unknown]|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --refresh bd616961-6da7-4eb0-939e-330b0a3fea6e/ids (cwd None)
jsonrpc.Executor/5::DEBUG::2017-04-20 07:01:26,880::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "  WARNING: Not using lvmetad because config setting use_lvmetad=0.\n  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n  Couldn't find device with uuid jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::INFO::2017-04-20 07:01:26,881::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs: vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['leases']
jsonrpc.Executor/5::DEBUG::2017-04-20 07:01:26,881::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/360050768018182b6c99e|[unknown]|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --refresh bd616961-6da7-4eb0-939e-330b0a3fea6e/leases (cwd None)
jsonrpc.Executor/5::DEBUG::2017-04-20 07:01:26,973::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "  WARNING: Not using lvmetad because config setting use_lvmetad=0.\n  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n  Couldn't find device with uuid jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::INFO::2017-04-20 07:01:26,973::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs: vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['metadata', 'leases', 'ids', 'inbox', 'outbox', 'master']
jsonrpc.Executor/5::DEBUG::2017-04-20 07:01:26,974::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [
Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host confirmation screen

2017-04-24 Thread Yaniv Kaul
On Mon, Apr 24, 2017 at 1:29 PM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> Hi kasturi,
>
> Thanks for your answer,
>
> Indeed, I tried again and after 1 minute and 17 seconds (!!) the
> confirmation screen disappeared. Is it really necessary to wait this long
> for the screen to disappear? (I can see in the background that "upgrade"
> starts a few seconds after clicking ok)
>
> When putting a host into maintenance mode, a circular "waiting" animation is
> used in order to warn the user that "something" is happening. A similar
> animation would be useful in the "upgrade" screen after clicking ok, no?
>

We should certainly improve this.
Can you please open a bug (and attach engine.log)?
Y.


>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> --
> *From: *"knarra" 
> *To: *"Nelson Lameiras" , "ovirt users"
> 
> *Sent: *Monday, April 24, 2017 7:34:17 AM
> *Subject: *Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host
> confirmation screen
>
> On 04/21/2017 10:20 PM, Nelson Lameiras wrote:
>
> Hello,
>
> Since "upgrade" functionality is available for hosts in oVirt GUI I have
> this strange bug :
>
> - Click on "Installation>>Upgrade"
> - Click "ok" on confirmation screen
> - -> (bug) confirmation screen does not disappear as expected
> - Click "ok" again on confirmation screen -> error : "system is already
> upgrading"
> - Click "cancel" to be able to return to oVirt
>
> This happens on:
> ovirt engine : oVirt Engine Version: 4.1.1.6-1.el7.centos
> client : windows 10
> client : chrome Version 57.0.2987.133 (64-bit)
>
> This bug was already present on oVirt 4.0 before updating to 4.1.
>
> Has anybody else had this problem?
>
> (will try to reproduce with firefox, IE)
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
> Hi Nelson,
>
> Once you click 'OK' you will need to wait a few seconds (before
> the confirmation disappears); then you can see that the upgrade starts. In
> previous versions, once the user clicked 'OK', the confirmation screen
> usually disappeared immediately.
>
> Thanks
>
> kasturi
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host confirmation screen

2017-04-24 Thread Nelson Lameiras
Hi kasturi, 

Thanks for your answer, 

Indeed, I tried again and after 1 minute and 17 seconds (!!) the confirmation 
screen disappeared. Is it really necessary to wait this long for the screen to 
disappear? (I can see in the background that "upgrade" starts a few seconds after 
clicking ok) 

When putting a host into maintenance mode, a circular "waiting" animation is used 
in order to warn the user that "something" is happening. A similar animation would 
be useful in the "upgrade" screen after clicking ok, no? 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "knarra"  
To: "Nelson Lameiras" , "ovirt users" 
 
Sent: Monday, April 24, 2017 7:34:17 AM 
Subject: Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host 
confirmation screen 

On 04/21/2017 10:20 PM, Nelson Lameiras wrote: 



Hello, 

Since "upgrade" functionality is available for hosts in oVirt GUI I have this 
strange bug : 

- Click on "Installation>>Upgrade" 
- Click "ok" on confirmation screen 
- -> (bug) confirmation screen does not disappear as expected 
- Click "ok" again on confirmation screen -> error : "system is already 
upgrading" 
- Click "cancel" to be able to return to oVirt 

This happens on: 
ovirt engine : oVirt Engine Version: 4.1.1.6-1.el7.centos 
client : windows 10 
client : chrome Version 57.0.2987.133 (64-bit) 

This bug was already present on oVirt 4.0 before updating to 4.1. 

Has anybody else had this problem? 

(will try to reproduce with firefox, IE) 

cordialement, regards, 




Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 









Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users 




Hi Nelson, 

Once you click 'OK' you will need to wait a few seconds (before the 
confirmation disappears); then you can see that the upgrade starts. In previous 
versions, once the user clicked 'OK', the confirmation screen usually 
disappeared immediately. 


Thanks 

kasturi 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-24 Thread Gianluca Cecchi
On Mon, Apr 24, 2017 at 10:38 AM, Evgenia Tokar  wrote:

> Hi,
>
> Can you attach the agent log from the host?
>
> Thanks,
> Jenny
>
>
>
here it is:
https://drive.google.com/file/d/0BwoPbcrMv8mvamp1N0E5THVHNVE/view?usp=sharing
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-24 Thread Evgenia Tokar
Hi,

Can you attach the agent log from the host?

Thanks,
Jenny

On Fri, Apr 21, 2017 at 12:52 AM, Gianluca Cecchi  wrote:

> Further infos:
>
> - ovirt-hosted-engine-ha package version
>
> [root@ractor ~]# rpm -q ovirt-hosted-engine-ha
> ovirt-hosted-engine-ha-2.1.0.5-1.el7.centos.noarch
> [root@ractor ~]#
>
>
> - serial console works
>
> [root@ractor ~]# hosted-engine --console
> The engine VM is running on this host
> Connected to domain HostedEngine
> Escape character is ^]
>
> CentOS Linux 7 (Core)
> Kernel 3.10.0-514.16.1.el7.x86_64 on an x86_64
>
> ractorshe login: root
> Password:
> Last login: Thu Apr 20 19:14:27 on pts/0
> [root@ractorshe ~]#
>
>
> - Current runtime vm.conf for hosted engine vm
>
> [root@ractor ~]# cat /run/ovirt-hosted-engine-ha/vm.conf
> cpuType=Nehalem
> emulatedMachine=pc-i440fx-rhel7.3.0
> vmId=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683
> smp=4
> memSize=16384
> maxVCpus=16
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> vmName=HostedEngine
> display=qxl
> devices={index:0,iface:virtio,format:raw,bootOrder:1,address:{slot:0x06,bus:0x00,domain:0x,type:pci,function:0x0},volumeID:43ee87b9-4293-4d43-beab-582f500667a7,imageID:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,readonly:false,domainID:2025c2ea-6205-4bc1-b29d-745b47f8f806,deviceId:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,poolID:----,device:disk,shared:exclusive,propagateErrors:off,type:disk}
> devices={nicModel:pv,macAddr:00:16:3e:3a:ee:a5,linkActive:true,network:ovirtmgmt,deviceId:4bbb90e6-4f8e-42e0-91ea-d894125ff4a8,address:{slot:0x03,bus:0x00,domain:0x,type:pci,function:0x0},device:bridge,type:interface}
> devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}
> devices={device:usb,type:controller,deviceId:ee985889-6878-463a-a415-9b50a4a810b3,address:{slot:0x01,bus:0x00,domain:0x,type:pci,function:0x2}}
> devices={device:virtio-serial,type:controller,deviceId:d99705cd-0ebf-40f0-950b-575ab4e6d934,address:{slot:0x05,bus:0x00,domain:0x,type:pci,function:0x0}}
> devices={device:ide,type:controller,deviceId:ef31f1a2-746a-4188-ae45-ef157d7b5598,address:{slot:0x01,bus:0x00,domain:0x,type:pci,function:0x1}}
> devices={device:scsi,model:virtio-scsi,type:controller,deviceId:f41baf47-51f8-42e9-a290-70da06191991,address:{slot:0x04,bus:0x00,domain:0x,type:pci,function:0x0}}
> devices={alias:rng0,specParams:{source:urandom},deviceId:4c7f0e81-c3e8-498f-a5a2-b8c1543e94b4,address:{slot:0x02,bus:0x00,domain:0x,type:pci,function:0x0},device:virtio,model:virtio,type:rng}
> devices={device:console,type:console}
> [root@ractor ~]#
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.1 and ovn problems

2017-04-24 Thread Gianluca Cecchi
On Sun, Apr 23, 2017 at 11:32 PM, Marcin Mirecki 
wrote:

> Hello Gianluca,
>
> Can you please check the ovn north db log.
> This is placed in /var/log/openvswitch/ovsdb-server-nb.log
> Please check if the logs has any new entries when you try to connect and
> when you issue the 'ovn-nbctl set-connection ptcp:6641' command.
> If the connection attempt is getting through, pvs db should print an error
> to the log.
>
> Please also try restarting the ovn-northd service.
>
> Do the ovn-controllers connect to the south-db?
> You can verify this by looking at /var/log/openvswitch/ovn-controller.log
> on the ovn-controller host (please look for entries saying
> "... <south db ip>:6642 connected")
>
> Marcin
>
>
>
The ovirt nb log contains:
2017-04-24T07:46:51.541Z|1|vlog|INFO|opened log file
/var/log/openvswitch/ovsdb-server-nb.log
2017-04-24T07:46:51.550Z|2|ovsdb_server|INFO|ovsdb-server (Open
vSwitch) 2.7.0
2017-04-24T07:47:01.560Z|3|memory|INFO|2268 kB peak resident set size
after 10.0 seconds
2017-04-24T07:47:01.560Z|4|memory|INFO|cells:100 json-caches:1
monitors:1 ses

In the ovn-controller.log on my 3 hosts I see the following when I run the 2
commands below on the provider host:

ovn-sbctl set-connection ptcp:6642
ovn-nbctl set-connection ptcp:6641


2017-04-24T07:56:23.178Z|00247|reconnect|INFO|tcp:10.4.192.43:6642:
connecting...
2017-04-24T07:56:23.178Z|00248|reconnect|INFO|tcp:10.4.192.43:6642:
connection attempt failed (Connection refused)
2017-04-24T07:56:23.178Z|00249|reconnect|INFO|tcp:10.4.192.43:6642: waiting
8 seconds before reconnect
2017-04-24T07:56:31.187Z|00250|reconnect|INFO|tcp:10.4.192.43:6642:
connecting...
2017-04-24T07:56:31.188Z|00251|reconnect|INFO|tcp:10.4.192.43:6642:
connected
2017-04-24T07:56:31.193Z|00252|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt:
connecting to switch
2017-04-24T07:56:31.193Z|00253|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
connecting...
2017-04-24T07:56:31.201Z|00254|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
connected
2017-04-24T07:56:31.201Z|00255|pinctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt:
connecting to switch
2017-04-24T07:56:31.201Z|00256|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
connecting...
2017-04-24T07:56:31.201Z|00257|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt:
connected
2017-04-24T07:56:31.202Z|00258|binding|INFO|Releasing lport
0a2a47bc-ea0d-4f1d-8f49-ec903e519983 from this chassis.

On my provider I see then the 3 oVirt hosts connected:
[root@ovmgr1 openvswitch]# netstat -tpn|grep 66
tcp0  0 10.4.192.43:664210.4.168.76:38882
ESTABLISHED 6699/ovsdb-server
tcp0  0 10.4.192.43:664210.4.168.75:45486
ESTABLISHED 6699/ovsdb-server
tcp0  0 127.0.0.1:5432  127.0.0.1:37074
ESTABLISHED 16696/postgres: eng
tcp0  0 10.4.192.43:664210.4.168.74:59176
ESTABLISHED 6699/ovsdb-server
[root@ovmgr1 openvswitch]#

But it seems that the "set" commands above are not persistent across reboots
of the provider host, which in my case is the oVirt engine server
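
(One possible workaround, as a sketch only: a systemd one-shot unit on the
provider host that reapplies the listeners after ovn-northd starts. The unit
name is illustrative, and the binary paths assume a standard openvswitch-ovn
install:)

  # /etc/systemd/system/ovn-open-listeners.service
  [Unit]
  Description=Re-open OVN NB/SB TCP listeners
  After=ovn-northd.service
  Requires=ovn-northd.service

  [Service]
  Type=oneshot
  ExecStart=/usr/bin/ovn-nbctl set-connection ptcp:6641
  ExecStart=/usr/bin/ovn-sbctl set-connection ptcp:6642

  [Install]
  WantedBy=multi-user.target

  # then: systemctl daemon-reload && systemctl enable ovn-open-listeners.service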
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] upgrade to 4.1

2017-04-24 Thread Yedidyah Bar David
On Sun, Apr 23, 2017 at 2:53 PM, Fabrice Bacchella
 wrote:
>
>> On 23 Apr 2017, at 07:59, Yedidyah Bar David wrote:
>>
>>
>
>> The main reason we require this is for pg_dump/pg_restore which are ran
>> during setup/rollback (if needed). pg_dump can't know for sure that all
>> the changes in the db were done using a client of its own version (that
>> is, current machine usually), and if indeed a newer client was used, it
>> might have used features that pg_dump of the lower version doesn't know
>> how to back up (and especially pg_restore does not know how to restore).
>> See also [1]. I seem to have tested there (can't remember anymore, see
>> comment 13) 9.2 client with 9.5 server and it didn't work. pg_dump(1)
>> manpage says:
>>
>>   Because pg_dump is used to transfer data to newer versions of
>>   PostgreSQL, the output of pg_dump can be expected to load into
>>   PostgreSQL server versions newer than pg_dump's version.  pg_dump can
>>   also dump from PostgreSQL servers older than its own version.
>>   (Currently, servers back to version 7.0 are supported.) However,
>>   pg_dump cannot dump from PostgreSQL servers newer than its own major
>>   version; it will refuse to even try, rather than risk making an invalid
>>   dump. Also, it is not guaranteed that pg_dump's output can be loaded
>>   into a server of an older major version — not even if the dump was
>>   taken from a server of that version. Loading a dump file into an older
>>   server may require manual editing of the dump file to remove syntax not
>>   understood by the older server. Use of the --quote-all-identifiers
>>   option is recommended in cross-version cases, as it can prevent
>>   problems arising from varying reserved-word lists in different
>>   PostgreSQL versions.
>>
>
> I don't get it, but I don't know pg so I might be wrong.
>
> You have a client application (like ovirt) written using features from V1 of 
> pg.

Right.

>
> It's running on a server where version V2 is installed. For good reasons, V2
> >= V1 is needed.

Indeed.

>
> The server is running a version V3. Again, V3 >= V1 is needed. Except for
> the major version, is V3 >= V2 really needed?

For oVirt itself, no.

>
> And for backup the problem is the same. It probably must know every feature
> used in the application (so again V1 or later). Why does it need to match
> both V2 and V3? It will probably match V2 if the installation is the same,
> but that is not mandatory. In a Java application, the client library might
> be a jar provided by the application, while pg_dump is a tool installed with
> native OS packaging. So why complain about V3?

pg_dump is indeed a tool installed from the OS. I copied above the relevant
part of its manpage - if you think it does a wrong thing, perhaps better
continue the discussion on postgresql lists.

For our concern, the only relevant fact is that we decided to backup the
db using pg_dump during setup, and to restore it on rollback. If you want
to make this backup optional, please open an RFE, as already requested, and
we can discuss this there.
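
(For context, the setup-time backup and rollback restore are conceptually
along the lines of the following sketch; this is not the actual engine-setup
code, and the database name, user and host are illustrative:)

  # backup before upgrade
  pg_dump -U engine -h localhost -f engine-backup.sql engine
  # restore on rollback, into a freshly re-created database
  dropdb -U postgres engine && createdb -U postgres -O engine engine
  psql -U engine -h localhost -d engine -f engine-backup.sql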

>
> But with ovirt we have V1=V2=V3,

Not exactly. Using your terminology, we have V1<=V2, V2=V3.

> even at the patch level (9.4.8 against 9.4.11). What kind of feature that
> ovirt doesn't know about might be missing? I don't think ovirt can know
> about anything in 9.4, since you talked about version 9.2 as the officially
> supported version.

At the time, oVirt didn't check versions at all, and so failed on a
certain combination, and it was decided to require V2=V3, as a simple
and effective solution. This was also specifically discussed there,
as you can see for yourself, open to the public for review/comments.
Check especially comments 9 to 19 (in bz 1331168).
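
(To illustrate the kind of check being discussed, a rough shell sketch that
compares client and server versions and requires an exact match, i.e. V2=V3;
the connection details are illustrative, and this is not the actual
engine-setup code:)

  # server version as reported by the running database
  SERVER=$(psql -At -U engine -h localhost -d engine -c 'SHOW server_version;')
  # client version from the locally installed pg_dump
  CLIENT=$(pg_dump --version | awk '{print $3}')
  [ "$SERVER" = "$CLIENT" ] || echo "pg_dump $CLIENT does not match server $SERVER"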

If you think we need a more delicate test, please open an RFE,
preferably providing the details I suggested that should be considered
for one.

Also please recall that you have a third, very simple, IMHO, option.
If for any reason you decided to have your server with version V3,
simply install V3 on your client machine (the oVirt engine). This
will be simple and solve all problems. If your concern is that the
OS does not supply pg V3, then it applies also to the OS of the
server. If that is a different OS, you are welcome to port oVirt
engine to that OS so you can have all you want at once.

You also have a fourth option - patch oVirt to not check for the
version, by reverting the patch that introduced this check (might
require a bit more work than "git revert", but not much):

https://gerrit.ovirt.org/59941

Of course, this will essentially be a fork, and I'd personally not
recommend it. But it's still an option.

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users