Re: [ovirt-users] Nested KVM for oVirt 4.1.2

2017-05-28 Thread Luca 'remix_tj' Lorenzetto
On 29 May 2017 at 12:48 AM,  wrote:

http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/

I have one CentOS 7 host (physical) and 3x oVirt Node 4.1.2 machines (these are VMs).
I have installed vdsm-hook-nestedvm on the host.

Should I install vdsm-hook-macspoof on the 3 node VMs?


Hello,

You should install the macspoof hook on the host you're using for
virtualization, not on the guests.
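
A minimal sketch of that setup (the package names come from the vdsm hooks
collection; registering the custom VM property via engine-config is an
assumption based on the hook's usual usage, so verify against your version):

```shell
# On the physical CentOS 7 host (the hypervisor running the node VMs),
# not inside the node VMs:
yum install -y vdsm-hook-nestedvm vdsm-hook-macspoof
systemctl restart vdsmd

# On the engine machine: expose the macspoof custom VM property so MAC
# spoof filtering can be disabled per-VM (assumed property name/regex):
engine-config -s "UserDefinedVMProperties=macspoof=(true|false)"
systemctl restart ovirt-engine
```

After that, setting macspoof=true on a node VM's custom properties should
let the nested hosts' bridged traffic through.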

Luca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Building oVirt engine on Debian

2017-05-28 Thread Leni Kadali Mutungi
I have read what is in the README.adoc and done all of it.
Unfortunately, even with BUILD_ALL_USER_AGENTS=0, my computer would
still freeze. I had to opt to not build the gwt ui instead. I hope
it's not too important :P.

Build went on fine. Ran into this issue:

install: cannot create regular file
'/usr/local/etc/logrotate.d/ovirt-engine-setup': Permission denied
install: cannot create regular file
'/usr/local/etc/logrotate.d/ovirt-engine-notifier': Permission denied
install: cannot create regular file
'/usr/local/etc/logrotate.d/ovirt-engine': Permission denied
Makefile:301: recipe for target 'copy-recursive' failed
make[2]: *** [copy-recursive] Error 1
make[2]: Leaving directory '/home/user/ovirt-engine'
Makefile:420: recipe for target 'install-packaging-files' failed
make[1]: *** [install-packaging-files] Error 2
make[1]: Leaving directory '/home/user/ovirt-engine'
Makefile:521: recipe for target 'install-dev' failed
make: *** [install-dev] Error 2

Usually I'd solve this by running as root, but since you've advised
against that, what should I do to give a normal user permission to run
this operation, given that the folder is owned by root with group staff?
Looking through other responses to similar situations suggests that
giving a non-root user access to this folder isn't recommended. What
would be the best solution in my case?
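
One common way around this (based on the ovirt-engine README's
development-mode instructions, so treat the exact prefix path as an
assumption) is to install into a prefix your user owns instead of
/usr/local:

```shell
# Build and install into a user-writable prefix so no root is needed;
# $HOME/ovirt-engine is an arbitrary choice of directory.
make clean
make install-dev PREFIX="$HOME/ovirt-engine"

# The logrotate snippets that previously failed with "Permission denied"
# should then land under the prefix instead of /usr/local/etc:
ls "$HOME/ovirt-engine/etc/logrotate.d"
```

This keeps the entire development installation self-contained and
removable without touching system directories.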

-- 
- Warm regards
Leni Kadali Mutungi


[ovirt-users] Nested KVM for oVirt 4.1.2

2017-05-28 Thread ovirt

http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/

I have one CentOS 7 host (physical) and 3x oVirt Node 4.1.2 machines
(these are VMs).

I have installed vdsm-hook-nestedvm on the host.

Should I install vdsm-hook-macspoof on the 3 node VMs?


Re: [ovirt-users] Network interface not working

2017-05-28 Thread Lev Veyde
Hi Alex,

That is quite strange...

Does this happen on both hosts? Have you tried migrating the VM to the
second host to see if the issue remains?

Thanks in advance,


On Fri, May 26, 2017 at 3:02 PM, Герасимов Александр 
wrote:

> Hi Lev.
>
>
> On one of the VMs you only see 1 NIC instead of the 2?
>
> No, both VMs see two NICs, but on the first VM ping works with no errors,
> while on the second VM ping shows 75% packet loss.
>
> OS version on hosts [root@node01 ~]# cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
>
> OS version on VMs [root@node03 ~]# cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
>
>
>
> *first VM*
>
> 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
>
> 00:09.0 Ethernet controller: Red Hat, Inc Virtio network device
>
> [root@node03 ~]# ip l
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
> DEFAULT qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
> 3: eth1:  mtu 1500 qdisc pfifo_fast
> state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:55 brd ff:ff:ff:ff:ff:ff
>
> *second VM*
>
> 00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
> RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20) - but I tested all
> NIC models with no effect
>
> 00:0a.0 Ethernet controller: Red Hat, Inc Virtio network device
>
> [root@node04 ~]# ip link
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
> DEFAULT qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
> 3: ens3:  mtu 1500 qdisc pfifo_fast
> state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff
>
> The logs show no messages except entries like this:
>
> May 26 15:01:01 node04 systemd: Started Session 67263 of user root.
> May 26 15:01:01 node04 systemd: Starting Session 67263 of user root.
> May 26 15:01:01 node04 systemd: Created slice user-600.slice.
> May 26 15:01:01 node04 systemd: Starting user-600.slice.
> May 26 15:01:01 node04 systemd: Started Session 67262 of user bitrix.
> May 26 15:01:01 node04 systemd: Starting Session 67262 of user bitrix.
> May 26 15:01:01 node04 systemd: Removed slice user-600.slice.
> May 26 15:01:01 node04 systemd: Stopping user-600.slice.
>
>
> Hi Alexander,
>
> So if I understand it correctly, you have the following configuration:
> - 2 hosts, each having 2 NICs
> - 2 virtual machines, each have a connection to each one of the NICs
> available on the hosts
>
> On one of the VMs you only see 1 NIC instead of the 2?
>
> Are you sure that the VM is properly configured to have 2 NICs?
>
> What Linux distro and version are you using on the hosts and inside the VMs?
>
> Can you please send us:
> - the logs from the VM, e.g. /var/log/messages
> - the output of lspci -v
> - the output of ip link
>
> Thanks in advance,
>
> 2017-05-18 12:19 GMT+03:00 Герасимов Александр :
>
> > Hi all.
> >
> > I have two servers with oVirt,
> >
> > and two identical virtual machines.
> >
> > Both servers are identical, but on the second virtual machine one
> > network interface is not working. Ping has problems. I tried to change
> > the network driver, but it had no effect.
> >
> > I don't understand what to do.
> >
> >
> > ovirt version and package:
> >
> > rpm -qa|grep ovirt
> > ovirt-imageio-proxy-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
> > ovirt-engine-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-restapi-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-extensions-api-impl-4.0.5.5-1.el7.centos.noarch
> > ovirt-imageio-daemon-0.4.0-1.el7.noarch
> > ovirt-engine-wildfly-10.1.0-1.el7.x86_64
> > ovirt-vmconsole-1.0.4-1.el7.centos.noarch
> > ovirt-engine-cli-3.6.9.2-1.el7.noarch
> > ovirt-engine-websocket-proxy-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-dashboard-1.0.5-1.el7.centos.noarch
> > ovirt-host-deploy-1.5.3-1.el7.centos.noarch
> > ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
> > ovirt-engine-setup-base-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-dwh-setup-4.0.5-1.el7.centos.noarch
> > ovirt-engine-setup-plugin-websocket-proxy-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-setup-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-dbscripts-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-userportal-4.0.5.5-1.el7.centos.noarch
> > ovirt-imageio-common-0.4.0-1.el7.noarch
> > python-ovirt-engine-sdk4-4.0.2-1.el7.centos.x86_64
> > ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
> > ovirt-engine-dwh-4.0.5-1.el7.centos.noarch
> > ovirt-engine-tools-backup-4.0.5.5-1.el7.centos.noarch
> > ovirt-image-uploader-4.0.1-1.el7.centos.noarch
> > 

Re: [ovirt-users] Issue with deploy HostedEngine on second host with Ovirt 4.1

2017-05-28 Thread alex
Yes, all storage domains, the hosted-engine storage domain and the VMs are visible in the oVirt GUI.

26.05.2017, 22:00, "Simone Tiraboschi":
> Are the hosted-engine storage domain and the engine VM correctly visible in the engine?
>
> On Fri, May 26, 2017 at 2:50 PM, a...@rt14.ru wrote:
>> The error you noticed is probably absent:
>> zcat engine.log-20170525.gz | grep "ERROR [org.ovirt.engine.core.bll.hostedengine.HostedEngineConfigFetcher]"
>> and the error noted in the bug description is also absent:
>> zcat engine.log-20170525.gz | grep "TLS issue"
>>
>> 26.05.2017, 20:41, "Simone Tiraboschi":
>>> Ok, I see that the engine is trying to deploy the second host as a
>>> hosted-engine one, but the engine sends only
>>> HOSTED_ENGINE_CONFIG/host_id=str:'2' while all the other values are missing.
>>>
>>> 2017-05-25 02:40:31 DEBUG otopi.context context.dumpEnvironment:770 ENV HOSTED_ENGINE/action="">
>>> 2017-05-25 02:40:31 DEBUG otopi.context context.dumpEnvironment:770 ENV HOSTED_ENGINE_CONFIG/host_id=str:'2'
>>>
>>> I think you hit this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1414696
>>>
>>> Could you please check your engine logs for 'ERROR
>>> [org.ovirt.engine.core.bll.hostedengine.HostedEngineConfigFetcher]'?
>>>
>>> On Fri, May 26, 2017 at 12:47 PM,  wrote:
>>>> Hi, Simone! Done.
>>>>
>>>> 25.05.2017, 17:18, "Simone Tiraboschi":
>>>>> On Wed, May 24, 2017 at 8:10 PM,  wrote:
>>>>>> Hi everybody! I have an issue with the HA configuration of HostedEngine.
>>>>>> I am trying to deploy it to a second host:
>>>>>> 1. Set maintenance for the host
>>>>>> 2. Do a reinstall with the HostedEngine Deploy option
>>>>>> The installation finished successfully and I didn't observe any errors
>>>>>> in the logs, but the host didn't become HostedEngine-ready.
>>>>> Could you please attach your host-deploy logs?
>>>>>> Now when I want to switch the host with the HE to maintenance mode, I
>>>>>> get this message: "Error while executing action: Cannot switch the
>>>>>> Host(s) to Maintenance mode. There are no available hosts capable of
>>>>>> running the engine VM." Could you give me steps to resolve this problem?


Re: [ovirt-users] Issue with deploy HostedEngine on second host with Ovirt 4.1

2017-05-28 Thread a...@rt14.ru
The error you noticed is probably absent:
zcat engine.log-20170525.gz | grep "ERROR [org.ovirt.engine.core.bll.hostedengine.HostedEngineConfigFetcher]"
and the error noted in the bug description is also absent:
zcat engine.log-20170525.gz | grep "TLS issue"

26.05.2017, 20:41, "Simone Tiraboschi":
> Ok, I see that the engine is trying to deploy the second host as a
> hosted-engine one, but the engine sends only
> HOSTED_ENGINE_CONFIG/host_id=str:'2' while all the other values are missing.
>
> 2017-05-25 02:40:31 DEBUG otopi.context context.dumpEnvironment:770 ENV HOSTED_ENGINE/action="">
> 2017-05-25 02:40:31 DEBUG otopi.context context.dumpEnvironment:770 ENV HOSTED_ENGINE_CONFIG/host_id=str:'2'
>
> I think you hit this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1414696
>
> Could you please check your engine logs for 'ERROR
> [org.ovirt.engine.core.bll.hostedengine.HostedEngineConfigFetcher]'?
>
> On Fri, May 26, 2017 at 12:47 PM,  wrote:
>> Hi, Simone! Done.
>>
>> 25.05.2017, 17:18, "Simone Tiraboschi":
>>> On Wed, May 24, 2017 at 8:10 PM,  wrote:
>>>> Hi everybody! I have an issue with the HA configuration of HostedEngine.
>>>> I am trying to deploy it to a second host:
>>>> 1. Set maintenance for the host
>>>> 2. Do a reinstall with the HostedEngine Deploy option
>>>> The installation finished successfully and I didn't observe any errors
>>>> in the logs, but the host didn't become HostedEngine-ready.
>>> Could you please attach your host-deploy logs?
>>>> Now when I want to switch the host with the HE to maintenance mode, I
>>>> get this message: "Error while executing action: Cannot switch the
>>>> Host(s) to Maintenance mode. There are no available hosts capable of
>>>> running the engine VM." Could you give me steps to resolve this problem?


Re: [ovirt-users] Network interface not working

2017-05-28 Thread Герасимов Александр

Hi Lev.


On one of the VMs you only see 1 NIC instead of the 2?

No, both VMs see two NICs, but on the first VM ping works with no errors,
while on the second VM ping shows 75% packet loss.


OS version on hosts [root@node01 ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)

OS version on VMs [root@node03 ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)



*first VM*

00:03.0 Ethernet controller: Red Hat, Inc Virtio network device

00:09.0 Ethernet controller: Red Hat, Inc Virtio network device

[root@node03 ~]# ip l
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT qlen 1

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:  mtu 1500 qdisc pfifo_fast 
state UP mode DEFAULT qlen 1000

link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
3: eth1:  mtu 1500 qdisc pfifo_fast 
state UP mode DEFAULT qlen 1000

link/ether 00:1a:4a:16:01:55 brd ff:ff:ff:ff:ff:ff

*second VM*

00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20) - but I tested all
NIC models with no effect


00:0a.0 Ethernet controller: Red Hat, Inc Virtio network device

[root@node04 ~]# ip link
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT qlen 1

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:  mtu 1500 qdisc pfifo_fast 
state UP mode DEFAULT qlen 1000

link/ether 00:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
3: ens3:  mtu 1500 qdisc pfifo_fast 
state UP mode DEFAULT qlen 1000

link/ether 00:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff

The logs show no messages except entries like this:

May 26 15:01:01 node04 systemd: Started Session 67263 of user root.
May 26 15:01:01 node04 systemd: Starting Session 67263 of user root.
May 26 15:01:01 node04 systemd: Created slice user-600.slice.
May 26 15:01:01 node04 systemd: Starting user-600.slice.
May 26 15:01:01 node04 systemd: Started Session 67262 of user bitrix.
May 26 15:01:01 node04 systemd: Starting Session 67262 of user bitrix.
May 26 15:01:01 node04 systemd: Removed slice user-600.slice.
May 26 15:01:01 node04 systemd: Stopping user-600.slice.
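
To quantify the loss and rule out a duplicate MAC or IP address on the
segment (a common cause of intermittent packet loss like this), something
along these lines could be run inside the second VM; the addresses are
placeholders, not values from this thread:

```shell
# Measure loss over a larger sample (replace 192.0.2.1 with your gateway).
ping -c 100 -i 0.2 192.0.2.1

# Per-interface counters: look for RX/TX errors or drops on the flaky NIC.
ip -s link show ens3

# Duplicate-address detection: a reply means another machine on the
# segment already claims this IP (replace 192.0.2.10 with the VM's own IP).
arping -D -I ens3 -c 3 192.0.2.10
```

If the counters are clean and no duplicate answers, comparing tcpdump on
the VM's tap device and on the host bridge would show where the replies
are being lost.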


Hi Alexander,

So if I understand it correctly, you have the following configuration:
- 2 hosts, each having 2 NICs
- 2 virtual machines, each have a connection to each one of the NICs
available on the hosts

On one of the VMs you only see 1 NIC instead of the 2?

Are you sure that the VM is properly configured to have 2 NICs?

What Linux distro and version are you using on the hosts and inside the VMs?

Can you please send us:
- the logs from the VM, e.g. /var/log/messages
- the output of lspci -v
- the output of ip link

Thanks in advance,

2017-05-18 12:19 GMT+03:00 Герасимов Александр :

> Hi all.
>
> I have two servers with oVirt,
>
> and two identical virtual machines.
>
> Both servers are identical, but on the second virtual machine one
> network interface is not working. Ping has problems. I tried to change
> the network driver, but it had no effect.
>
> I don't understand what to do.
>
>
> ovirt version and package:
>
> rpm -qa|grep ovirt
> ovirt-imageio-proxy-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-restapi-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-4.0.5.5-1.el7.centos.noarch
> ovirt-imageio-daemon-0.4.0-1.el7.noarch
> ovirt-engine-wildfly-10.1.0-1.el7.x86_64
> ovirt-vmconsole-1.0.4-1.el7.centos.noarch
> ovirt-engine-cli-3.6.9.2-1.el7.noarch
> ovirt-engine-websocket-proxy-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-dashboard-1.0.5-1.el7.centos.noarch
> ovirt-host-deploy-1.5.3-1.el7.centos.noarch
> ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
> ovirt-engine-setup-base-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-dwh-setup-4.0.5-1.el7.centos.noarch
> ovirt-engine-setup-plugin-websocket-proxy-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-setup-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-dbscripts-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-userportal-4.0.5.5-1.el7.centos.noarch
> ovirt-imageio-common-0.4.0-1.el7.noarch
> python-ovirt-engine-sdk4-4.0.2-1.el7.centos.x86_64
> ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
> ovirt-engine-dwh-4.0.5-1.el7.centos.noarch
> ovirt-engine-tools-backup-4.0.5.5-1.el7.centos.noarch
> ovirt-image-uploader-4.0.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-tools-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-4.0.5.5-1.el7.centos.noarch
> ovirt-release40-4.0.5-2.noarch
> ovirt-host-deploy-java-1.5.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.0.5.5-1.el7.centos.noarch
> ovirt-iso-uploader-4.0.2-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-4.0.5.5-1.el7.centos.noarch
> ovirt-setup-lib-1.0.2-1.el7.centos.noarch
>