Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2016-01-02 Thread Gianluca Cecchi
Just to add my experience.
I installed a single system with a self-hosted engine on oVirt 3.6.0 and CentOS 7.1,
then updated to oVirt 3.6.1 and CentOS 7.2.
I never had any bonding problems, neither in 3.6.0 nor in 3.6.1.

My current kernel is 3.10.0-327.3.1.el7.x86_64
The server hardware is a PowerEdge M910 blade with 4 Gigabit adapters.

[root@ractor ~]# lspci | grep igab
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)
02:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)

They are connected to Cisco switches with the ports configured for 802.3ad (I
don't have the Cisco model details at hand, but I can verify).

And this is the situation for the bonds used for VM traffic, where my only
customization on top of mode=4 was to specify lacp_rate=1 (the default is slow).
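For reference, a bonding configuration along those lines looks roughly like the
ifcfg file below; this is a minimal sketch with illustrative names, not a copy
of my actual file:

# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
# mode=4 is 802.3ad; lacp_rate=1 ("fast") exchanges LACPDUs every second
# instead of every 30 seconds
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"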

- bridges
[root@ractor ~]# brctl show
bridge name     bridge id           STP enabled     interfaces
;vdsmdummy;     8000.               no
ovirtmgmt       8000.002564ff0bf4   no              bond1
                                                    vnet0
vlan65          8000.002564ff0bf0   no              bond0.65
                                                    vnet1
                                                    vnet2

- bond device for VMs vlans
[root@ractor ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 9
Partner Key: 8
Partner Mac Address: 00:01:02:03:04:0c

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f0
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
port key: 9
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 8
port priority: 32768
port number: 137
port state: 63

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f2
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
port key: 9
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 8
port priority: 32768
port number: 603
port state: 63


- bond device for ovirtmgmt
[root@ractor ~]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 9
Partner Key: 16
Partner Mac Address: 00:01:02:03:04:0c

Slave Interface: em3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f4
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
port key: 9
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 16
port priority: 32768
port number: 145
port state: 63

Slave Interface: em4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f6
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
port key: 9
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32768
oper key: 16
port priority: 32768
port number: 611
port state: 63


No particular settings on the individual interfaces. This is what the system set
for em1, em2, em3 and em4 (output shown only for em1):

[root@ractor ~]# ethtool -k em1
Features for em1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: on
tx-tcp6-segmentation: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on

Re: [ovirt-users] oVirt hosted engine agent and broker duplicate logs to syslog

2016-01-02 Thread Aleksey Chudov
Thank you for the answer.

So, currently there are at least three copies of the agent and broker logs on
CentOS 7.2:
1. agent.log and broker.log files in /var/log/ovirt-hosted-engine-ha/
directory
2. /var/log/messages file
3. journald database

Are errors additionally sent to syslog according to the agent-log.conf and
broker-log.conf files in the /etc/ovirt-hosted-engine-ha/ directory?

It's a bit too much :) Do you plan to fix this duplication?

For example, 2. and 3. can be quickly disabled by adding the StandardOutput=null
option to the unit file:

[Service]
...
StandardOutput=null
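A cleaner way to apply that is a drop-in override, so the change survives
package updates; a minimal sketch, assuming the agent service is named
ovirt-ha-agent (same idea for ovirt-ha-broker):

# systemctl edit ovirt-ha-agent     (opens an override drop-in for the unit)
# cat /etc/systemd/system/ovirt-ha-agent.service.d/override.conf
[Service]
StandardOutput=null
# systemctl daemon-reload && systemctl restart ovirt-ha-agent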

On Sat, Jan 2, 2016 at 2:01 PM, Simone Tiraboschi wrote:

>
> On 31 Dec 2015 at 16:36, "Aleksey Chudov" wrote:
> >
> > Hi,
> >
> > After upgrade from 3.6.0 to 3.6.1 agent and broker duplicate their logs
> to syslog. So, the same messages logged twice to files in
> /var/log/ovirt-hosted-engine-ha/ directory and to /var/log/messages file.
> >
> > Agent and broker configuration files remain the same for 3.5, 3.6.0 and
> 3.6.1 and there is not such logs duplication in 3.5 and 3.6.0.
> >
> > Is it a bug or expected behavior?
>
> I think that it's an expected behavior: for 3.6.1 we had to rewrite the
> systemd startup script due to a slightly different behavior on centos 7.2.
> Prior than that the agent was forking as a daemon while now it's running
> as a systemd service with type=simple so its output goes to systemd logging
> facilities and so also in /var/log/messages according to your system
> configuration.
>
> >
> > OS is CentOS 7.2
> >
> > # rpm -qa 'ovirt*'
> > ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> > ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
> > ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> > ovirt-hosted-engine-ha-1.3.3.5-1.el7.centos.noarch
> > ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
> > ovirt-release36-002-2.noarch
> > ovirt-setup-lib-1.0.0-1.el7.centos.noarch
> > ovirt-hosted-engine-setup-1.3.1.3-1.el7.centos.noarch
> >
> >
> > # cat /etc/ovirt-hosted-engine-ha/agent-log.conf
> > [loggers]
> > keys=root
> >
> > [handlers]
> > keys=syslog,logfile
> >
> > [formatters]
> > keys=long,sysform
> >
> > [logger_root]
> > level=INFO
> > handlers=syslog,logfile
> > propagate=0
> >
> > [handler_syslog]
> > level=ERROR
> > class=handlers.SysLogHandler
> > formatter=sysform
> > args=('/dev/log', handlers.SysLogHandler.LOG_USER)
> >
> > [handler_logfile]
> > class=logging.handlers.TimedRotatingFileHandler
> > args=('/var/log/ovirt-hosted-engine-ha/agent.log', 'd', 1, 7)
> > level=DEBUG
> > formatter=long
> >
> > [formatter_long]
> >
> format=%(threadName)s::%(levelname)s::%(asctime)s::%(module)s::%(lineno)d::%(name)s::(%(funcName)s)
> %(message)s
> >
> > [formatter_sysform]
> > format=ovirt-ha-agent %(name)s %(levelname)s %(message)s
> > datefmt=
> >
> >
> > Aleksey
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Users Digest, Vol 52, Issue 1

2016-01-02 Thread Michael Cooper
Hello Everyone,

I am wondering why I cannot attach my iso_domain to my Data Center.
When I try to attach it, it says there are no valid Data Centers. Why is this
happening? Where should I start looking for a resolution?

Thanks,

-- 
Michael A Cooper
Linux Certified
Zerto Certified
http://www.coopfire.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] how shutdown a host

2016-01-02 Thread alireza sadeh seighalan
Hi everyone,

How can I shut down a host according to a standard procedure? I want to shut
down or reboot hosts for maintenance purposes (for example, hosts 2-6). Thanks
in advance.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt hosted engine agent and broker duplicate logs to syslog

2016-01-02 Thread Simone Tiraboschi
On 31 Dec 2015 at 16:36, "Aleksey Chudov" wrote:
>
> Hi,
>
> After upgrade from 3.6.0 to 3.6.1 agent and broker duplicate their logs
to syslog. So, the same messages logged twice to files in
/var/log/ovirt-hosted-engine-ha/ directory and to /var/log/messages file.
>
> Agent and broker configuration files remain the same for 3.5, 3.6.0 and
3.6.1 and there is not such logs duplication in 3.5 and 3.6.0.
>
> Is it a bug or expected behavior?

I think it's expected behavior: for 3.6.1 we had to rewrite the systemd
startup script due to a slightly different behavior on CentOS 7.2.
Before that the agent forked as a daemon, while now it runs as a systemd
service with Type=simple, so its output goes to the systemd logging
facilities and therefore also to /var/log/messages, depending on your system
configuration.
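Roughly, the difference is between the two snippets below; this is an
illustrative sketch with a made-up ExecStart path, not the actual unit file
shipped with ovirt-hosted-engine-ha:

# old behaviour: the process forks into the background and handles its own output
[Service]
Type=forking
ExecStart=/usr/bin/ha-agent --daemon

# new behaviour: the process stays in the foreground and systemd captures its
# stdout/stderr, so messages also end up in journald and /var/log/messages
[Service]
Type=simple
ExecStart=/usr/bin/ha-agent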

>
> OS is CentOS 7.2
>
> # rpm -qa 'ovirt*'
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-hosted-engine-ha-1.3.3.5-1.el7.centos.noarch
> ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
> ovirt-release36-002-2.noarch
> ovirt-setup-lib-1.0.0-1.el7.centos.noarch
> ovirt-hosted-engine-setup-1.3.1.3-1.el7.centos.noarch
>
>
> # cat /etc/ovirt-hosted-engine-ha/agent-log.conf
> [loggers]
> keys=root
>
> [handlers]
> keys=syslog,logfile
>
> [formatters]
> keys=long,sysform
>
> [logger_root]
> level=INFO
> handlers=syslog,logfile
> propagate=0
>
> [handler_syslog]
> level=ERROR
> class=handlers.SysLogHandler
> formatter=sysform
> args=('/dev/log', handlers.SysLogHandler.LOG_USER)
>
> [handler_logfile]
> class=logging.handlers.TimedRotatingFileHandler
> args=('/var/log/ovirt-hosted-engine-ha/agent.log', 'd', 1, 7)
> level=DEBUG
> formatter=long
>
> [formatter_long]
>
format=%(threadName)s::%(levelname)s::%(asctime)s::%(module)s::%(lineno)d::%(name)s::(%(funcName)s)
%(message)s
>
> [formatter_sysform]
> format=ovirt-ha-agent %(name)s %(levelname)s %(message)s
> datefmt=
>
>
> Aleksey
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2016-01-02 Thread Jon Archer
Hi,

I'm not near the server for a while, but the network is set up as follows:

2x Broadcom NICs with whatever driver works out of the box.

Both are set as slaves of bond0, which is in mode=4 with no explicit options.

The switch is a fairly basic TP-Link, but it behaves almost identically to a
Cisco; it has the 2 ports set up in a port channel.

Long and short of it: this config works with the earlier kernel, but not with
the kernel shipped in 7.2.

The RHEL 7.2 release notes suggest some work has been done on bonding; I wonder
if the default options (LACP rate?) have changed.
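One quick way to compare the effective defaults between the two kernels would
be to boot each one and dump the bond's runtime parameters (I haven't done this
yet, so take it as a suggestion):

# grep -iE 'lacp|aggregator|ad_select' /proc/net/bonding/bond0
# cat /sys/class/net/bond0/bonding/lacp_rate
# cat /sys/class/net/bond0/bonding/ad_select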

Jon

On 30 December 2015 09:44:02 GMT+00:00, Dan Kenigsberg  
wrote:
>On Tue, Dec 29, 2015 at 09:57:07PM +, Jon Archer wrote:
>> Hi Stefano,
>> 
>> It's definitely not the switch, it seems to be the latest kernel
>package
>> (kernel-3.10.0-327.3.1.el7.x86_64) which stops bonding working
>correctly,
>> reverting back to the previous kernel brings the network up in
>802.3ad mode
>> (4).
>> 
>> I know, from reading the release notes of 7.2, that there were some
>changes
>> to the bonding bits in the kernel so i'm guessing maybe some defaults
>have
>> changed.
>> 
>> I'll keep digging and post back as soon as i have something.
>> 
>> Jon
>> 
>> On 29/12/15 19:55, Stefano Danzi wrote:
>> >Hi! I didn't solve yet. I'm still using mode 2 on bond interface.
>What's
>> >your switch model and firmware version?
>
>Hi Jon and Stefano,
>
>We've been testing bond mode 4 with (an earlier)
>kernel-3.10.0-327.el7.x86_64 and experienced no such behaviour.
>
>However, to better identify the suspected kernel bug, could you provide
>more information regarding your network connectivity?
>
>What is the make of your NICs? Which driver do you use?
>
>Do you set special ethtool opts (LRO with bridge was broken in 7.2.0
>kernel if I am not mistaken)?
>
>You have the ovirtmgmt bridge on top of your bond, right?
>
>Can you share your ifcfg*?

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt hosted engine agent and broker duplicate logs to syslog

2016-01-02 Thread Simone Tiraboschi
On 2 Jan 2016 at 13:29, "Aleksey Chudov" wrote:
>
> Thank you for the answer.
>
> So, currently there is at least three copy of agent and broker logs on
centos 7.2:
> 1. agent.log and broker.log files in /var/log/ovirt-hosted-engine-ha/
directory
> 2. /var/log/messages file
> 3. journald database
>
> Does errors additionally send to syslog according to agent-log.conf and
broker-log.conf files in /etc/ovirt-hosted-engine-ha/ directory?
>
> It's a bit too much :) Do you plan to fix this duplication?

If it's too much, I'd prefer to drop the dedicated log file.
Personally, I'm quite comfortable with systemd logging.

> For example 2. and 3. can be quickly disabled by adding
StandardOutput=null option to unit file
>
> [Service]
> ...
> StandardOutput=null
>
> On Sat, Jan 2, 2016 at 2:01 PM, Simone Tiraboschi wrote:
>>
>>
>> On 31 Dec 2015 at 16:36, "Aleksey Chudov" wrote:
>> >
>> > Hi,
>> >
>> > After upgrade from 3.6.0 to 3.6.1 agent and broker duplicate their
logs to syslog. So, the same messages logged twice to files in
/var/log/ovirt-hosted-engine-ha/ directory and to /var/log/messages file.
>> >
>> > Agent and broker configuration files remain the same for 3.5, 3.6.0
and 3.6.1 and there is not such logs duplication in 3.5 and 3.6.0.
>> >
>> > Is it a bug or expected behavior?
>>
>> I think that it's an expected behavior: for 3.6.1 we had to rewrite the
systemd startup script due to a slightly different behavior on centos 7.2.
>> Prior than that the agent was forking as a daemon while now it's running
as a systemd service with type=simple so its output goes to systemd logging
facilities and so also in /var/log/messages according to your system
configuration.
>>
>> >
>> > OS is CentOS 7.2
>> >
>> > # rpm -qa 'ovirt*'
>> > ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> > ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>> > ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> > ovirt-hosted-engine-ha-1.3.3.5-1.el7.centos.noarch
>> > ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
>> > ovirt-release36-002-2.noarch
>> > ovirt-setup-lib-1.0.0-1.el7.centos.noarch
>> > ovirt-hosted-engine-setup-1.3.1.3-1.el7.centos.noarch
>> >
>> >
>> > # cat /etc/ovirt-hosted-engine-ha/agent-log.conf
>> > [loggers]
>> > keys=root
>> >
>> > [handlers]
>> > keys=syslog,logfile
>> >
>> > [formatters]
>> > keys=long,sysform
>> >
>> > [logger_root]
>> > level=INFO
>> > handlers=syslog,logfile
>> > propagate=0
>> >
>> > [handler_syslog]
>> > level=ERROR
>> > class=handlers.SysLogHandler
>> > formatter=sysform
>> > args=('/dev/log', handlers.SysLogHandler.LOG_USER)
>> >
>> > [handler_logfile]
>> > class=logging.handlers.TimedRotatingFileHandler
>> > args=('/var/log/ovirt-hosted-engine-ha/agent.log', 'd', 1, 7)
>> > level=DEBUG
>> > formatter=long
>> >
>> > [formatter_long]
>> >
format=%(threadName)s::%(levelname)s::%(asctime)s::%(module)s::%(lineno)d::%(name)s::(%(funcName)s)
%(message)s
>> >
>> > [formatter_sysform]
>> > format=ovirt-ha-agent %(name)s %(levelname)s %(message)s
>> > datefmt=
>> >
>> >
>> > Aleksey
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt hosted engine agent and broker duplicate logs to syslog

2016-01-02 Thread Nir Soffer
On Sat, Jan 2, 2016 at 2:29 PM, Aleksey Chudov  wrote:
> Thank you for the answer.
>
> So, currently there is at least three copy of agent and broker logs on
> centos 7.2:
> 1. agent.log and broker.log files in /var/log/ovirt-hosted-engine-ha/
> directory
> 2. /var/log/messages file
> 3. journald database

I don't think we can fix the duplication between /var/log/messages and the
journald database.
If we log to syslog, the message will appear in both of them:

# logger `uuidgen`
# tail -n 1 /var/log/messages
Jan  2 17:27:37 xxx root: b364232b-20d1-4e50-aa71-10a9b4513456
# journalctl -n 1
-- Logs begin at Fri 2015-12-11 22:47:26 IST, end at Sat 2016-01-02
17:27:37 IST. --
Jan 02 17:27:37 xxx root[6828]: b364232b-20d1-4e50-aa71-10a9b4513456
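Whether a syslog message lands in both places depends on how journald and
rsyslog are wired together on the host; a quick way to check the standard
CentOS 7 knobs (shown only as a hint, adjust to your setup):

# grep -i ForwardToSyslog /etc/systemd/journald.conf
# grep -i imjournal /etc/rsyslog.conf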

> Does errors additionally send to syslog according to agent-log.conf and
> broker-log.conf files in /etc/ovirt-hosted-engine-ha/ directory?
>
> It's a bit too much :) Do you plan to fix this duplication?
>
> For example 2. and 3. can be quickly disabled by adding StandardOutput=null
> option to unit file
>
> [Service]
> ...
> StandardOutput=null

Did you try that?

>
> On Sat, Jan 2, 2016 at 2:01 PM, Simone Tiraboschi 
> wrote:
>>
>>
>> On 31 Dec 2015 at 16:36, "Aleksey Chudov" wrote:
>> >
>> > Hi,
>> >
>> > After upgrade from 3.6.0 to 3.6.1 agent and broker duplicate their logs
>> > to syslog. So, the same messages logged twice to files in
>> > /var/log/ovirt-hosted-engine-ha/ directory and to /var/log/messages file.
>> >
>> > Agent and broker configuration files remain the same for 3.5, 3.6.0 and
>> > 3.6.1 and there is not such logs duplication in 3.5 and 3.6.0.
>> >
>> > Is it a bug or expected behavior?
>>
>> I think that it's an expected behavior: for 3.6.1 we had to rewrite the
>> systemd startup script due to a slightly different behavior on centos 7.2.
>> Prior than that the agent was forking as a daemon while now it's running
>> as a systemd service with type=simple so its output goes to systemd logging
>> facilities and so also in /var/log/messages according to your system
>> configuration.

How is this related? According to the logger configuration, logs are sent
to the log file and to /dev/log, so nothing should be seen on stderr or stdout.

>>
>> >
>> > OS is CentOS 7.2
>> >
>> > # rpm -qa 'ovirt*'
>> > ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> > ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>> > ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> > ovirt-hosted-engine-ha-1.3.3.5-1.el7.centos.noarch
>> > ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
>> > ovirt-release36-002-2.noarch
>> > ovirt-setup-lib-1.0.0-1.el7.centos.noarch
>> > ovirt-hosted-engine-setup-1.3.1.3-1.el7.centos.noarch
>> >
>> >
>> > # cat /etc/ovirt-hosted-engine-ha/agent-log.conf
>> > [loggers]
>> > keys=root
>> >
>> > [handlers]
>> > keys=syslog,logfile
>> >
>> > [formatters]
>> > keys=long,sysform
>> >
>> > [logger_root]
>> > level=INFO
>> > handlers=syslog,logfile
>> > propagate=0
>> >
>> > [handler_syslog]
>> > level=ERROR

Only errors should appear in /var/log/messages - do you see other log
levels there?

>> > class=handlers.SysLogHandler
>> > formatter=sysform
>> > args=('/dev/log', handlers.SysLogHandler.LOG_USER)
>> >
>> > [handler_logfile]
>> > class=logging.handlers.TimedRotatingFileHandler
>> > args=('/var/log/ovirt-hosted-engine-ha/agent.log', 'd', 1, 7)
>> > level=DEBUG
>> > formatter=long
>> >
>> > [formatter_long]
>> >
>> > format=%(threadName)s::%(levelname)s::%(asctime)s::%(module)s::%(lineno)d::%(name)s::(%(funcName)s)
>> > %(message)s
>> >
>> > [formatter_sysform]
>> > format=ovirt-ha-agent %(name)s %(levelname)s %(message)s
>> > datefmt=
>> >
>> >
>> > Aleksey
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Users Digest, Vol 52, Issue 1

2016-01-02 Thread Nir Soffer
On Sat, Jan 2, 2016 at 12:46 PM, Michael Cooper  wrote:
> Hello Everyone,
>
> I wondering why I cannot attach my iso_domain to my Data Center.
> When I try to attach it says no Valid DataCenters. Why is this happening?
> Where should I start looking for the resolution?

Can you describe your data center? Do you have active hosts? An active
storage domain?

Can you attach engine.log showing this error?

Nir

>
> Thanks,
>
> --
> Michael A Cooper
> Linux Certified
> Zerto Certified
> http://www.coopfire.com
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how shutdown a host

2016-01-02 Thread Nir Soffer
On Sat, Jan 2, 2016 at 11:32 AM, alireza sadeh seighalan
 wrote:
> hi everyone
>
> how can i shutdown a host acording an standard? i want to shutdown or reboot
> hosts for maintenance purposes (for example host 2-6). thanks in advance

You should move the host to maintenance.
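That can be done from the webadmin (select the host -> Maintenance) or,
assuming the ovirt-shell CLI (ovirt-engine-cli) is installed, with something
along these lines - the host name and engine URL are illustrative:

# ovirt-shell -c -l "https://engine.example.com/ovirt-engine/api" -u admin@internal
[oVirt shell (connected)]# action host host2 deactivate
(wait until the host reaches Maintenance, then shut it down or reboot it)
[oVirt shell (connected)]# action host host2 activate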

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SPM

2016-01-02 Thread Nir Soffer
On Thu, Dec 31, 2015 at 5:20 PM, Fernando Fuentes  wrote:
> Team,
>
> I noticed that my SPM moved to another host which was odd because I have
> a set SPM.

What do you mean by "set SPM"?

> Somehow when that happen two of my hosts went down and all my vms when
> in pause state.
> The oddity behind all this is that my primary storage which has allways
> been my SPM was online without any issues..

Your primary storage is a hypervisor used as SPM?

> What could of have cause that? and is there a way prevent from the SPM
> migrating unless there is an issue?

Can you attach the engine log showing the timeframe when the SPM moved to
another host?

Can you attach logs from the host used as SPM showing the same timeframe?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration Failure With FibreChannel+NFS

2016-01-02 Thread Roy Golan
On Thu, Dec 31, 2015 at 12:03 PM, Charles Tassell wrote:

> Hi Everyone,
>
>   I've been playing around with oVirt 3.6.1 to see if we can use it to
> replace VMWare, and I'm running into a problem with live migrations.  They
> fail and I can't seem to find an error message that describes why (the
> error logging is VERY verbose, so maybe I'm just missing the important
> part.)
>   I've setup two hosts that use a fibre channel SAN for the VM datastore
> and an NFS share for the ISO datastore.  I have a VM which is just booting
> off of a SystemRescue ISO file with a 2GB disk.  It seems to run fine, but
> when I try to migrate it to the other host I get the following in the
> engine.log of the hosted engine:
>
> 2015-12-31 09:28:20,433 INFO [org.ovirt.engine.core.bll.MigrateVmCommand]
> (default task-31) [61255087] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[2be4938e-f4a3-4322-bae3-8a9628b81835= ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName cdTest02>]',
> sharedLocks='null'}'
> 2015-12-31 09:28:20,526 INFO
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-31)
> [61255087] Candidate host 'oVirt-01'
> ('cbfd733b-8ced-487d-8754-a2217ce1210f') was filtered out by
> 'VAR__FILTERTYPE__INTERNAL' filter 'Migration' (correlation id: null)
> 2015-12-31 09:28:20,646 INFO [org.ovirt.engine.core.bll.MigrateVmCommand]
> (org.ovirt.thread.pool-8-thread-30) [61255087] Running command:
> MigrateVmCommand internal: false. Entities affected :  ID:
> 2be4938e-f4a3-4322-bae3-8a9628b81835 Type: VMAction group MIGRATE_VM with
> role type USER
> 2015-12-31 09:28:20,701 INFO
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (org.ovirt.thread.pool-8-thread-30) [61255087] START, MigrateVDSCommand(
> MigrateVDSCommandParameters:{runAsync='true',
> hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
> vmId='2be4938e-f4a3-4322-bae3-8a9628b81835', srcHost='
> ovirt-01.virt.roblib.upei.ca',
> dstVdsId='1200a78f-6d05-4e5e-9ef7-6798cf741310', dstHost='
> ovirt-02.virt.roblib.upei.ca:54321', migrationMethod='ONLINE',
> tunnelMigration='false', migrationDowntime='0', autoConverge='false',
> migrateCompressed='false', consoleAddress='null'}), log id: f2548d4
> 2015-12-31 09:28:20,703 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (org.ovirt.thread.pool-8-thread-30) [61255087] START,
> MigrateBrokerVDSCommand(HostName = oVirt-01,
> MigrateVDSCommandParameters:{runAsync='true',
> hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
> vmId='2be4938e-f4a3-4322-bae3-8a9628b81835', srcHost='
> ovirt-01.virt.roblib.upei.ca',
> dstVdsId='1200a78f-6d05-4e5e-9ef7-6798cf741310', dstHost='
> ovirt-02.virt.roblib.upei.ca:54321', migrationMethod='ONLINE',
> tunnelMigration='false', migrationDowntime='0', autoConverge='false',
> migrateCompressed='false', consoleAddress='null'}), log id: 5ec26536
> 2015-12-31 09:28:21,435 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (org.ovirt.thread.pool-8-thread-30) [61255087] FINISH,
> MigrateBrokerVDSCommand, log id: 5ec26536
> 2015-12-31 09:28:21,449 INFO
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (org.ovirt.thread.pool-8-thread-30) [61255087] FINISH, MigrateVDSCommand,
> return: MigratingFrom, log id: f2548d4
> 2015-12-31 09:28:21,504 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-30) [61255087] Correlation ID: 61255087,
> Job ID: ff37dfc9-f543-4e7b-983c-62cb0056959c, Call Stack: null, Custom
> Event ID: -1, Message: Migration started (VM: cdTest02, Source: oVirt-01,
> Destination: oVirt-02, User: admin@internal).
> 2015-12-31 09:28:22,984 WARN
> [org.ovirt.engine.core.vdsbroker.VmsMonitoring]
> (DefaultQuartzScheduler_Worker-3) [] skipping VM
> '2be4938e-f4a3-4322-bae3-8a9628b81835' from this monitoring cycle - the VM
> data has changed since fetching the data
> 2015-12-31 09:28:22,992 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] START, FullListVDSCommand(HostName = ,
> FullListVDSCommandParameters:{runAsync='true',
> hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
> vds='Host[,cbfd733b-8ced-487d-8754-a2217ce1210f]',
> vmIds='[2beb0a49-6f2a-460a-b253-d3fcc7b68d31]'}), log id: dfa1177
> 2015-12-31 09:28:23,628 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] FINISH, FullListVDSCommand, return:
> [{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
> guestDiskMapping={96768549-c104-4e9c-a={name=/dev/vda},
> QEMU_DVD-ROM={name=/dev/sr0}}, vmId=2beb0a49-6f2a-460a-b253-d3fcc7b68d31,
> pid=9358, devices=[Ljava.lang.Object;@16b85d46, smp=2, vmType=kvm,
> displayIp=0, display=vnc, displaySecurePort=-1, memSize=4096,
> displayPort=5900, cpuType=Westmere,
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
> statusTime=4299531100, vmName=HostedEngine, clientIp=, 

Re: [ovirt-users] Configuring another interface for trunked (tagged) VM traffic

2016-01-02 Thread Will Dennis
I found the following (older) article, which gave me a clue…
http://captainkvm.com/2013/04/maximizing-your-10gb-ethernet-in-kvm/

So I configured the following in /etc/sysconfig/network-scripts on each of
my hosts —

[root@ovirt-node-01 network-scripts]# cat ifcfg-enp4s0f0
HWADDR=00:15:17:7B:E9:EA
TYPE=Ethernet
BOOTPROTO=none
NAME=enp4s0f0
UUID=8b006c8c-b5d3-4dae-a1e7-5ca463119be3
ONBOOT=yes
SLAVE=yes
MASTER=bond0

(^^^ same sort of file made for enp4s0f1)

[root@ovirt-node-01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100"

[root@ovirt-node-01 network-scripts]# cat ifcfg-bond0.180
DEVICE=bond0.180
VLAN=yes
BOOTPROTO=static
ONBOOT=yes
BRIDGE=br180

(^^^ same sort of file made for other VLANs)

[root@ovirt-node-03 network-scripts]# cat ifcfg-br180
DEVICE=br180
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
DELAY=0

(^^^ same sort of file made for other bridges)

So that all makes the following sort of device chain:

http://s1096.photobucket.com/user/willdennis/media/ovirt-bond-layout.png.html
(image: ovirt-bond-layout.png)

But then I read this next article:
http://captainkvm.com/2013/04/maximizing-your-10gb-ethernet-in-rhev/

This leads me to believe (if it’s still the same process in current oVirt/RHEV)
that I could stop at the bond0 setup, and then, by tying the networks I created
for the VLANs of interest (which do have the proper VLAN tags set on them) to
the bond, oVirt would automatically create the needed bond0 VLAN sub-interfaces
and the related per-VLAN bridges.

So, is there a way to tie the oVirt networks to the bridges I’ve already created
(they don’t show up in the oVirt webadmin “Setup host networks” dialog), or
should I just attach the oVirt networks to the bond0 interface and let oVirt
create whatever structure it needs? (And if so, I guess I’d need to remove the
bond0 VLAN sub-interfaces and the related per-VLAN bridges I created?)
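If the answer turns out to be "just let oVirt manage it", a minimal cleanup
sketch might look like the following; the device and file names match the ones
above, but treat it as illustrative rather than a verified procedure:

# on each host, for every manually created VLAN/bridge pair:
ifdown br180
ifdown bond0.180
rm -f /etc/sysconfig/network-scripts/ifcfg-br180 \
      /etc/sysconfig/network-scripts/ifcfg-bond0.180
# keep ifcfg-bond0 and its slave ifcfg files in place, then attach the tagged
# logical networks to bond0 via "Setup Host Networks" in the webadmin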


On Dec 31, 2015, at 1:56 PM, Will Dennis wrote:

Hi all,

Taking the next step on configuring my newly-established oVirt cluster, and 
that would be to set up a trunk (VLAN tagged) connection to each cluster host 
(there are 3) for VM traffic. What I’m looking at is akin to setting up 
vSwitches on VMware, except I have never done this on a VMware cluster, just on 
individual hosts…

Anyhow, I have the following NICs available on my three hosts (conveniently, 
they are the exact same hardware platform):

ovirt-node-01 | success | rc=0 >>
3: enp4s0f0:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000
4: enp4s0f1:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000
5: enp12s0f0:  mtu 1500 qdisc pfifo_fast 
master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000

ovirt-node-02 | success | rc=0 >>
3: enp4s0f0:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000
4: enp4s0f1:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000
5: enp12s0f0:  mtu 1500 qdisc pfifo_fast 
master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000

ovirt-node-03 | success | rc=0 >>
3: enp4s0f0:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000
4: enp4s0f1:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000
5: enp12s0f0:  mtu 1500 qdisc pfifo_fast 
master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
qlen 1000

As you may see, I am using the ‘enp12s0f0’ interface on each host for the 
‘ovirtmgmt’ bridge. This network carries the admin traffic as well as Gluster 
distributed filesystem traffic, but I now want to establish a separate link to 
each host for VM traffic. The ‘ovirtmgmt’ bridge is NOT trunked/tagged, only a 
single VLAN is used. For the VM traffic, I’d like to use the ‘enp4s0f0’ 
interface on each host, and tie them into a logical network named “vm-traffic” 
(or the like) and make that a trunked/tagged interface.

Are there any existing succinct instructions on how to do this? I have been 
reading thru the oVirt Admin Manual’s “Logical Networks” section 
(http://www.ovirt.org/OVirt_Administration_Guide#Logical_Network_Tasks) but it 
hasn’t “clicked” in my mind yet...

Thanks,
Will

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] host status "Non Operational" - how to diagnose & fix?

2016-01-02 Thread Will Dennis
I have had one of my hosts go into the state “Non Operational” after I rebooted 
it… I also noticed that in the oVirt webadmin UI, the NIC that’s used in the 
‘ovirtmgmt’ network is showing “down”, but in Linux the NIC is operational and 
up, as is the ‘ovirtmgmt’ bridge…

[root@ovirt-node-02 ~]# ip link sh up
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: bond0:  mtu 1500 qdisc noqueue 
state DOWN mode DEFAULT
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
3: enp4s0f0:  mtu 1500 qdisc 
pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
4: enp4s0f1:  mtu 1500 qdisc 
pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
5: enp12s0f0:  mtu 1500 qdisc pfifo_fast 
master ovirtmgmt state UP mode DEFAULT qlen 1000
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
7: ovirtmgmt:  mtu 1500 qdisc noqueue state UP 
mode DEFAULT
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff

What should I take a look at first?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host status "Non Operational" - how to diagnose & fix?

2016-01-02 Thread Roy Golan
On Sun, Jan 3, 2016 at 2:46 AM, Will Dennis  wrote:

> I have had one of my hosts go into the state “Non Operational” after I
> rebooted it… I also noticed that in the oVirt webadmin UI, the NIC that’s
> used in the ‘ovirtmgmt’ network is showing “down”, but in Linux the NIC is
> operational and up, as is the ‘ovirtmgmt’ bridge…
>
>
Hosts tab -> Network Interfaces subtab -> click "Setup networks" and make
sure "ovirtmgmt" is placed on a working nic.

make sure

> [root@ovirt-node-02 ~]# ip link sh up
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
> DEFAULT
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: bond0:  mtu 1500 qdisc
> noqueue state DOWN mode DEFAULT
> link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 3: enp4s0f0:  mtu 1500 qdisc
> pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
> link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 4: enp4s0f1:  mtu 1500 qdisc
> pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
> link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 5: enp12s0f0:  mtu 1500 qdisc pfifo_fast
> master ovirtmgmt state UP mode DEFAULT qlen 1000
> link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
> 7: ovirtmgmt:  mtu 1500 qdisc noqueue
> state UP mode DEFAULT
> link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
>
> What should I take a look at first?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host status "Non Operational" - how to diagnose & fix?

2016-01-02 Thread Will Dennis
The ‘ovirtmgmt’ network has been & is still placed on a working NIC 
(enp12s0f0)… It’s just that now, oVirt somehow doesn’t *think* it’s working…

http://s1096.photobucket.com/user/willdennis/media/setup-networks.png.html

However, as I showed you in the ‘ip link show up’ output, it is indeed up and 
working.




On Jan 2, 2016, at 8:00 PM, Roy Golan wrote:



On Sun, Jan 3, 2016 at 2:46 AM, Will Dennis wrote:
I have had one of my hosts go into the state “Non Operational” after I rebooted 
it… I also noticed that in the oVirt webadmin UI, the NIC that’s used in the 
‘ovirtmgmt’ network is showing “down”, but in Linux the NIC is operational and 
up, as is the ‘ovirtmgmt’ bridge…


Hosts tab -> Network Interfaces subtab -> click "Setup networks" and make sure 
"ovirtmgmt" is placed on a working nic.

make sure
[root@ovirt-node-02 ~]# ip link sh up
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: bond0:  mtu 1500 qdisc noqueue 
state DOWN mode DEFAULT
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
3: enp4s0f0:  mtu 1500 qdisc 
pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
4: enp4s0f1:  mtu 1500 qdisc 
pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
5: enp12s0f0:  mtu 1500 qdisc pfifo_fast 
master ovirtmgmt state UP mode DEFAULT qlen 1000
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
7: ovirtmgmt:  mtu 1500 qdisc noqueue state UP 
mode DEFAULT
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff

What should I take a look at first?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how shutdown a host

2016-01-02 Thread Oved Ourfali
It will do the migration automatically.
The host will move to "Preparing for Maintenance" status until all VMs are
migrated, and will then move to "Maintenance" status.



On Sun, Jan 3, 2016 at 8:11 AM, Alan Murrell  wrote:

> On 02/01/2016 8:31 AM, Nir Soffer wrote:
>
>> You should move the host to maintenance.
>>
>
> Will moving a host to maintenance mode automatically migrate its VMs to
> another available host in the cluster, or should that be done first?
>
> Regards,
>
> Alan
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how shutdown a host

2016-01-02 Thread Alan Murrell

On 02/01/2016 8:31 AM, Nir Soffer wrote:

You should move the host to maintenance.


Will moving a host to maintenance mode automatically migrate its VMs to 
another available host in the cluster, or should that be done first?


Regards,

Alan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to upgrade ovirt-engine 3.5.5 to 3.6.1 on EL6

2016-01-02 Thread Yedidyah Bar David
On Fri, Jan 1, 2016 at 12:44 AM, Frank Wall  wrote:
> Hi,
>
> I've just tried to upgrade my ovirt-engine 3.5.5 which is still running on 
> EL6,
> but it failed due to a dependency error regarding slf4j:
>
> # engine-setup
> [ INFO  ] Stage: Initializing
> [ INFO  ] Stage: Environment setup
>   Configuration files: 
> ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', 
> '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', 
> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>   Log file: 
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20151231233407-lmovl5.log
>   Version: otopi-1.4.0 (otopi-1.4.0-1.el6)
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== PRODUCT OPTIONS ==--
>
>   --== PACKAGES ==--
>
> [ INFO  ] Checking for product updates...
> [ ERROR ] Yum: [u'ovirt-engine-3.6.1.3-1.el6.noarch requires slf4j >= 1.7.0', 
> u'vdsm-jsonrpc-java-1.1.5-1.el6.noarch requires slf4j >= 1.6.1']
> [ INFO  ] Yum: Performing yum transaction rollback
> [ ERROR ] Failed to execute stage 'Environment customization': 
> [u'ovirt-engine-3.6.1.3-1.el6.noarch requires slf4j >= 1.7.0', 
> u'vdsm-jsonrpc-java-1.1.5-1.el6.noarch requires slf4j >= 1.6.1']
> [ INFO  ] Stage: Clean up
>   Log file is located at 
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20151231233407-lmovl5.log
> [ INFO  ] Generating answer file 
> '/var/lib/ovirt-engine/setup/answers/20151231233424-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
>
> I've followed the upgrade guide [1], and yes, I'm aware that AiO is no longer
> supported on EL6 [2], but this is just my Hosted-Engine VM, *not* an AiO host.
> So I thought it would still work.
>
> I haven't attached any further logs, because this error is really obvious I
> guess. These are the currently installed oVirt packages:
>
> otopi-1.4.0-1.el6.noarch
> otopi-java-1.4.0-1.el6.noarch
> ovirt-engine-3.5.5-1.el6.noarch
> ovirt-engine-backend-3.5.5-1.el6.noarch
> ovirt-engine-cli-3.5.0.5-1.el6.noarch
> ovirt-engine-dbscripts-3.5.5-1.el6.noarch
> ovirt-engine-extension-aaa-jdbc-1.0.4-1.el6.noarch
> ovirt-engine-extensions-api-impl-3.5.5-1.el6.noarch
> ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
> ovirt-engine-lib-3.6.1.3-1.el6.noarch
> ovirt-engine-restapi-3.5.5-1.el6.noarch
> ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
> ovirt-engine-setup-3.6.1.3-1.el6.noarch
> ovirt-engine-setup-base-3.6.1.3-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.6.1.3-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.1.3-1.el6.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.6.1.3-1.el6.noarch
> ovirt-engine-tools-3.5.5-1.el6.noarch
> ovirt-engine-userportal-3.5.5-1.el6.noarch
> ovirt-engine-webadmin-portal-3.5.5-1.el6.noarch
> ovirt-engine-websocket-proxy-3.5.5-1.el6.noarch
> ovirt-host-deploy-1.3.1-1.el6.noarch
> ovirt-host-deploy-java-1.3.1-1.el6.noarch
> ovirt-image-uploader-3.5.1-1.el6.noarch
> ovirt-iso-uploader-3.5.2-1.el6.noarch
> vdsm-jsonrpc-java-1.0.15-1.el6.noarch
>
> Any ideas?
>
> [1] http://www.ovirt.org/OVirt_3.6.1_Release_Notes#oVirt_Hosted_Engine
> [2] http://www.ovirt.org/OVirt_3.6.1_Release_Notes#Known_issues

Sounds like:

https://bugzilla.redhat.com/show_bug.cgi?id=1276651

Can you post exact flow and logs? Thanks.
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host status "Non Operational" - how to diagnose & fix?

2016-01-02 Thread Karli Sjöberg

On 3 Jan 2016, 2:43 AM, Will Dennis wrote:
>
> The ‘ovirtmgmt’ network has been & is still placed on a working NIC 
> (enp12s0f0)… It’s just that now, oVirt somehow doesn’t *think* it’s working…

Here's something I wrote a long time ago now, for those times when 
auto-gui-config fluff just won't do:

http://www.ovirt.org/Bonding_VLAN_Bridge

/K
>
> http://s1096.photobucket.com/user/willdennis/media/setup-networks.png.html
>
> However, as I showed you in the ‘ip link show up’ output, it is indeed up and 
> working.
>
>
>
>
> On Jan 2, 2016, at 8:00 PM, Roy Golan 
> > wrote:
>
>
>
> On Sun, Jan 3, 2016 at 2:46 AM, Will Dennis 
> > wrote:
> I have had one of my hosts go into the state “Non Operational” after I 
> rebooted it… I also noticed that in the oVirt webadmin UI, the NIC that’s 
> used in the ‘ovirtmgmt’ network is showing “down”, but in Linux the NIC is 
> operational and up, as is the ‘ovirtmgmt’ bridge…
>
>
> Hosts tab -> Network Interfaces subtab -> click "Setup networks" and make 
> sure "ovirtmgmt" is placed on a working nic.
>
> make sure
> [root@ovirt-node-02 ~]# ip link sh up
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
> DEFAULT
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: bond0:  mtu 1500 qdisc noqueue 
> state DOWN mode DEFAULT
> link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 3: enp4s0f0:  mtu 1500 qdisc 
> pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
> link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 4: enp4s0f1:  mtu 1500 qdisc 
> pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
> link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
> 5: enp12s0f0:  mtu 1500 qdisc pfifo_fast 
> master ovirtmgmt state UP mode DEFAULT qlen 1000
> link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
> 7: ovirtmgmt:  mtu 1500 qdisc noqueue state 
> UP mode DEFAULT
> link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
>
> What should I take a look at first?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users