Re: [ovirt-users] 3.6 loses network on reboot

2016-03-03 Thread David LeVene
Hi Nir,

Thanks for your help. I’m going to start up another thread so as not to pollute 
what this thread was initially about.

I’ll test your patch, and I suspect it’s the gluster version that is causing 
the problem as I did update the host.

When I create the thread I’ll make sure to include everyone already present (it 
might be a week or so, as my time is limited atm)

Cheers
David

From: Nir Soffer [mailto:nsof...@redhat.com]
Sent: Thursday, March 03, 2016 17:59
To: Dan Kenigsberg ; Sahina Bose ; Ala 
Hino 
Cc: David LeVene ; users@ovirt.org
Subject: Re: [ovirt-users] 3.6 looses network on reboot

On Thu, Mar 3, 2016 at 9:06 AM, Dan Kenigsberg <dan...@redhat.com> wrote:
>
> On Thu, Mar 03, 2016 at 12:54:25AM +, David LeVene wrote:
> >
> > Can you check our patches? They should resolve the problem we saw in the
> > log: https://gerrit.ovirt.org/#/c/54237  (based on oVirt-3.6.3)
> >
> > -- I've manually applied the patch to the node that I was testing on
> > and the networking comes on-line correctly - now I'm encountering a
> > gluster issue with cannot find master domain.
>
> You are most welcome to share your logs (preferably on a different
> thread, to avoid confusion)
>
> >
> > Without the fixes, as a workaround, I would suggest (if possible) to 
> > disable IPv6 on your host boot line and check if all works out for you.
> > -- Ok, but as I can manually apply the patch its good now. Do you know
> > what version are we hoping to have this put into as I won't perform an
> > ovirt/vdsm update until its part of the upstream RPM's
>
> The fix has been proposed to ovirt-3.6.4. I'll make sure it's accepted.
>
> >
> > Do you need IPv6 connectivity? If so, you'll need to use a vdsm hook or 
> > another interface that is not controlled by oVirt.
> > -- Ideally I'd prefer not to have it, but the way our network has been
> > configured some hosts are IPv6 only, so at a min the guests need it..
> > the hypervisors not so much.
>
> May I ask about your IPv6 experience? (only if you feel comfortable sharing
> this publicly). What do these IPv6-only servers do? What do the guests do
> with them?
>
> >
> > -- I've now hit an issue with it not starting up the master storage
> > gluster domain - as it’s a separate issue I'll review the mailing
> > lists & create a new item if its related.. I've attached the
> > supervdsm.log incase you can save me some time and point me in the
> > right direction!
>
> All I see is this
>
> MainProcess|jsonrpc.Executor/4::ERROR::2016-03-03 
> 11:15:04,699::supervdsmServer::118::SuperVdsm.ServerCallback::(wrapper) Error 
> in wrapper
> Traceback (most recent call last):
>   File "/usr/share/vdsm/supervdsmServer", line 116, in wrapper
> res = func(*args, **kwargs)
>   File "/usr/share/vdsm/supervdsmServer", line 531, in wrapper
> return func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/cli.py", line 496, in volumeInfo
> xmltree = _execGlusterXml(command)
>   File "/usr/share/vdsm/gluster/cli.py", line 108, in _execGlusterXml
> raise ge.GlusterCmdExecFailedException(rc, out, err)
> GlusterCmdExecFailedException: Command execution failed
> return code: 2


We have these log lines before the exception:

MainProcess|jsonrpc.Executor/3::DEBUG::2016-03-03 11:02:42,945::utils::669::root::(execCmd)
/usr/bin/taskset --cpu-list 0-39 /usr/sbin/gluster --mode=script volume info
--remote-host=ovirtmount.test.lab data --xml (cwd None)

The command looks correct

MainProcess|jsonrpc.Executor/3::DEBUG::2016-03-03 11:02:43,024::utils::687::root::(execCmd)
FAILED: <err> = '\n'; <rc> = 2

gluster command line failed in an unhelpful way.

(Adding Sahina)

David, can you try to run this command manually on this host? Maybe there is
a --verbose flag revealing more info?

You may also try a simpler command:

gluster volume info --remote-host=ovirtmount.test.lab data
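
For illustration, this is roughly what vdsm's `_execGlusterXml` does, reduced to a sketch you can run by hand to see the rc/stdout/stderr that vdsm hides behind "return code: 2". The command flags are the ones visible in the supervdsm.log above, and the `opRet`/`opErrno` field names follow gluster's `--xml` output shape; treat both as assumptions if your gluster version differs:

```python
import xml.etree.ElementTree as ET


def build_volume_info_cmd(volume, remote_host):
    # Mirrors the command line seen in supervdsm.log above
    # (flags per gluster 3.x; an assumption for other versions).
    return ["/usr/sbin/gluster", "--mode=script", "volume", "info",
            "--remote-host=%s" % remote_host, volume, "--xml"]


def parse_op_errno(xml_text):
    """Pull opRet/opErrno out of gluster's --xml output; a non-zero
    opRet/opErrno pair explains an otherwise silent rc=2 failure."""
    root = ET.fromstring(xml_text)
    return int(root.findtext("opRet")), int(root.findtext("opErrno"))


cmd = build_volume_info_cmd("data", "ovirtmount.test.lab")
print(" ".join(cmd))
```

Running the built command with `subprocess` and printing stderr directly is usually enough to see why the CLI bailed out before producing any XML.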

Another issue you should check: the gluster version on the hosts and on the
gluster nodes *must* match; otherwise you should expect failures accessing the
gluster server.

We have this patch for handling such errors gracefully - can you test it?
https://gerrit.ovirt.org/53785

(Adding Ala)

Nir
This email and any attachments may contain confidential and proprietary 
information of Blackboard that is for the sole use of the intended recipient. 
If you are not the intended recipient, disclosure, copying, re-distribution or 
other use of any of this information is strictly prohibited. Please immediately 
notify the sender and delete this transmission if you received this email in 
error.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 loses network on reboot

2016-03-03 Thread David LeVene


-Original Message-
From: Dan Kenigsberg [mailto:dan...@redhat.com]
Sent: Thursday, March 03, 2016 17:36
To: David LeVene 
Cc: edwa...@redhat.com; sab...@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] 3.6 looses network on reboot

On Thu, Mar 03, 2016 at 12:54:25AM +, David LeVene wrote:
>
> Can you check our patches? They should resolve the problem we saw in
> the
> log: https://gerrit.ovirt.org/#/c/54237  (based on oVirt-3.6.3)
>
> -- I've manually applied the patch to the node that I was testing on
> and the networking comes on-line correctly - now I'm encountering a
> gluster issue with cannot find master domain.

You are most welcome to share your logs (preferably on a different thread, to 
avoid confusion)

--- Will do - I'll start a new thread after I've done some more investigation 
with the pointers given so far in this thread & by Nir in the next reply.

>
> Without the fixes, as a workaround, I would suggest (if possible) to disable 
> IPv6 on your host boot line and check if all works out for you.
> -- Ok, but as I can manually apply the patch its good now. Do you know
> what version are we hoping to have this put into as I won't perform an
> ovirt/vdsm update until its part of the upstream RPM's

The fix has been proposed to ovirt-3.6.4. I'll make sure it's accepted.

-- Great thanks!

>
> Do you need IPv6 connectivity? If so, you'll need to use a vdsm hook or 
> another interface that is not controlled by oVirt.
> -- Ideally I'd prefer not to have it, but the way our network has been
> configured some hosts are IPv6 only, so at a min the guests need it..
> the hypervisors not so much.

May I ask about your IPv6 experience? (only if you feel comfortable sharing 
this publicly). What do these IPv6-only servers do? What do the guests do 
with them?

-- The group I work with has implemented dual stack & if a v4 address is not 
required.. it's not given.. The IPv6 servers will run an application of some 
kind.. could be a webserver, anything really. As they sit behind LBs they 
handle the v4 if required. My personal opinion only: it's too much hassle for 
what it's worth at this point, and I'd prefer it if we only used v4/v6 
addresses at the entry points and v4 internally, or all dual stack.
-- My opinion is based on the fact that I've come across too many pieces of 
software that require tweaking, &/or additional configuration &/or special 
compilation to enable it. It also adds an additional layer of complication when 
troubleshooting applications if v6 isn't working and there's no v4 address to 
test on.
-- A recent example.. downloads.ceph.com advertises v4 and v6 addresses. On an 
IPv6-only host it fails to connect to the repo because the v6 address fails.. 
and there's a v4 address so NAT64 isn't performed. This breaks yum. 
Workaround.. fd64::IPv4 Address.

>
> -- I've now hit an issue with it not starting up the master storage
> gluster domain - as it’s a separate issue I'll review the mailing
> lists & create a new item if its related.. I've attached the
> supervdsm.log incase you can save me some time and point me in the
> right direction!

All I see is this

MainProcess|jsonrpc.Executor/4::ERROR::2016-03-03
11:15:04,699::supervdsmServer::118::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 116, in wrapper
res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 531, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 496, in volumeInfo
xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 108, in _execGlusterXml
raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed return code: 2

which tells me very little. Please share your vdsm.log and gluster logs 
(possibly /var/log/messages as well) so we can understand what has happened.
Make sure to include sabose at redhat.com on the thread.
In the past we heard that network disconnections cause glusterd to crash, so 
it might be the case again.

-- I'll investigate further when I have time, and post again under a different 
topic. Cheers for the tips/logs to review.

Regards,
Dan.


Re: [ovirt-users] 3.6 loses network on reboot

2016-03-02 Thread Nir Soffer
On Thu, Mar 3, 2016 at 9:06 AM, Dan Kenigsberg  wrote:
>
> On Thu, Mar 03, 2016 at 12:54:25AM +, David LeVene wrote:
> >
> > Can you check our patches? They should resolve the problem we saw in the
> > log: https://gerrit.ovirt.org/#/c/54237  (based on oVirt-3.6.3)
> >
> > -- I've manually applied the patch to the node that I was testing on
> > and the networking comes on-line correctly - now I'm encountering a
> > gluster issue with cannot find master domain.
>
> You are most welcome to share your logs (preferably on a different
> thread, to avoid confusion)
>
> >
> > Without the fixes, as a workaround, I would suggest (if possible) to
> > disable IPv6 on your host boot line and check if all works out for you.
> > -- Ok, but as I can manually apply the patch its good now. Do you know
> > what version are we hoping to have this put into as I won't perform an
> > ovirt/vdsm update until its part of the upstream RPM's
>
> The fix has been proposed to ovirt-3.6.4. I'll make sure it's accepted.
>
> >
> > Do you need IPv6 connectivity? If so, you'll need to use a vdsm hook or
> > another interface that is not controlled by oVirt.
> > -- Ideally I'd prefer not to have it, but the way our network has been
> > configured some hosts are IPv6 only, so at a min the guests need it..
> > the hypervisors not so much.
>
> May I ask about your IPv6 experience? (only if you feel comfortable sharing
> this publicly). What do these IPv6-only servers do? What do the guests do
> with them?
>
> >
> > -- I've now hit an issue with it not starting up the master storage
> > gluster domain - as it’s a separate issue I'll review the mailing
> > lists & create a new item if its related.. I've attached the
> > supervdsm.log incase you can save me some time and point me in the
> > right direction!
>
> All I see is this
>
> MainProcess|jsonrpc.Executor/4::ERROR::2016-03-03
> 11:15:04,699::supervdsmServer::118::SuperVdsm.ServerCallback::(wrapper)
> Error in wrapper
> Traceback (most recent call last):
>   File "/usr/share/vdsm/supervdsmServer", line 116, in wrapper
> res = func(*args, **kwargs)
>   File "/usr/share/vdsm/supervdsmServer", line 531, in wrapper
> return func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/cli.py", line 496, in volumeInfo
> xmltree = _execGlusterXml(command)
>   File "/usr/share/vdsm/gluster/cli.py", line 108, in _execGlusterXml
> raise ge.GlusterCmdExecFailedException(rc, out, err)
> GlusterCmdExecFailedException: Command execution failed
> return code: 2


We have these log lines before the exception:

MainProcess|jsonrpc.Executor/3::DEBUG::2016-03-03 11:02:42,945::utils::669::root::(execCmd)
/usr/bin/taskset --cpu-list 0-39 /usr/sbin/gluster --mode=script volume info
--remote-host=ovirtmount.test.lab data --xml (cwd None)

The command looks correct

MainProcess|jsonrpc.Executor/3::DEBUG::2016-03-03 11:02:43,024::utils::687::root::(execCmd)
FAILED: <err> = '\n'; <rc> = 2

gluster command line failed in an unhelpful way.

(Adding Sahina)

David, can you try to run this command manually on this host? Maybe there
is a --verbose flag revealing more info?

You may also try a simpler command:

gluster volume info --remote-host=ovirtmount.test.lab data

Another issue you should check: the gluster version on the hosts and on the
gluster nodes *must* match; otherwise you should expect failures accessing the
gluster server.

We have this patch for handling such errors gracefully - can you test it?
https://gerrit.ovirt.org/53785

(Adding Ala)

Nir


Re: [ovirt-users] 3.6 loses network on reboot

2016-03-02 Thread Dan Kenigsberg
On Thu, Mar 03, 2016 at 12:54:25AM +, David LeVene wrote:
> 
> Can you check our patches? They should resolve the problem we saw in the
> log: https://gerrit.ovirt.org/#/c/54237  (based on oVirt-3.6.3)
> 
> -- I've manually applied the patch to the node that I was testing on
> and the networking comes on-line correctly - now I'm encountering a
> gluster issue with cannot find master domain.

You are most welcome to share your logs (preferably on a different
thread, to avoid confusion)

> 
> Without the fixes, as a workaround, I would suggest (if possible) to disable 
> IPv6 on your host boot line and check if all works out for you.
> -- Ok, but as I can manually apply the patch its good now. Do you know
> what version are we hoping to have this put into as I won't perform an
> ovirt/vdsm update until its part of the upstream RPM's

The fix has been proposed to ovirt-3.6.4. I'll make sure it's accepted.

> 
> Do you need IPv6 connectivity? If so, you'll need to use a vdsm hook or 
> another interface that is not controlled by oVirt.
> -- Ideally I'd prefer not to have it, but the way our network has been
> configured some hosts are IPv6 only, so at a min the guests need it..
> the hypervisors not so much.

May I ask about your IPv6 experience? (only if you feel comfortable sharing
this publicly). What do these IPv6-only servers do? What do the guests do
with them?

> 
> -- I've now hit an issue with it not starting up the master storage
> gluster domain - as it’s a separate issue I'll review the mailing
> lists & create a new item if its related.. I've attached the
> supervdsm.log incase you can save me some time and point me in the
> right direction!

All I see is this

MainProcess|jsonrpc.Executor/4::ERROR::2016-03-03 
11:15:04,699::supervdsmServer::118::SuperVdsm.ServerCallback::(wrapper) Error 
in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 116, in wrapper
res = func(*args, **kwargs) 
  File "/usr/share/vdsm/supervdsmServer", line 531, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 496, in volumeInfo
xmltree = _execGlusterXml(command) 
  File "/usr/share/vdsm/gluster/cli.py", line 108, in _execGlusterXml
raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
return code: 2

which tells me very little. Please share your vdsm.log and gluster logs
(possibly /var/log/messages as well) so we can understand what has happened.
Make sure to include sabose at redhat.com on the thread.
In the past we heard that network disconnections cause glusterd to crash, so
it might be the case again.

Regards,
Dan.


Re: [ovirt-users] 3.6 loses network on reboot

2016-03-02 Thread Nir Soffer
On Thu, Mar 3, 2016 at 2:54 AM, David LeVene 
wrote:

> Hi,
>
> Thanks for the quick responses & help.. answers in-line at the end of this
> email.
>
> Cheers
> David
>
> -Original Message-
> From: Edward Haas [mailto:edwa...@redhat.com]
> Sent: Wednesday, March 02, 2016 20:05
> To: David LeVene ; Dan Kenigsberg <
> dan...@redhat.com>
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] 3.6 looses network on reboot
>
> On 03/02/2016 01:36 AM, David LeVene wrote:
> > Hi Dan,
> >
> > I missed the email as the subject line changed!
> >
> > So we use and run IPv6 in our network - not sure if this is related. The
> Addresses are handed out via SLAAC so that would be where the IPv6 address
> is coming from.
> >
> > My memory is a bit sketchy... but I think if I remove the vmfex/SRIOV
> vNIC and only run with the one vNIC it works fine, it's when I bring the
> second NIC into play with SRIOV the issues arise.
> >
> > Answers inline.
> >
> > -Original Message-
> > From: Dan Kenigsberg [mailto:dan...@redhat.com]
> > Sent: Tuesday, March 01, 2016 00:28
> > To: David LeVene 
> > Cc: edwa...@redhat.com; users@ovirt.org
> > Subject: Re: [ovirt-users] 3.6 looses network on reboot
> >
> > This sounds very bad. Changing the subject, so the wider, more
> problematic issue is visible.
> >
> > Did any other user see this behavior?
> >
> > On Mon, Feb 29, 2016 at 06:27:46AM +, David LeVene wrote:
> >> Hi Dan,
> >>
> >> Answers as follows;
> >>
> >> # rpm -qa | grep -i vdsm
> >> vdsm-jsonrpc-4.17.18-1.el7.noarch
> >> vdsm-hook-vmfex-4.17.18-1.el7.noarch
> >> vdsm-infra-4.17.18-1.el7.noarch
> >> vdsm-4.17.18-1.el7.noarch
> >> vdsm-python-4.17.18-1.el7.noarch
> >> vdsm-yajsonrpc-4.17.18-1.el7.noarch
> >> vdsm-cli-4.17.18-1.el7.noarch
> >> vdsm-xmlrpc-4.17.18-1.el7.noarch
> >> vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
> >>
> >>
> >> There was in this folder ifcfg-ovirtmgnt bridge setup, and also
> route-ovirtmgnt & rule-ovirtmgmt.. but they were removed after the reboot.
> >>
> >> # ls -althr | grep ifcfg
> >> -rw-r--r--. 1 root root  254 Sep 16 21:21 ifcfg-lo -rw-r--r--. 1 root
> >> root  120 Feb 25 14:07 ifcfg-enp7s0f0 -rw-rw-r--. 1 root root  174
> >> Feb
> >> 25 14:40 ifcfg-enp6s0
> >>
> >> I think I modified ifcfg-enp6s0 to get networking up again (eg was set
> to bridge.. but the bridge wasn't configured).. it was a few days ago.. if
> it's important I can reboot the box again to see what state it comes up
> with.
> >>
> >> # cat ifcfg-enp6s0
> >> BOOTPROTO="none"
> >> IPADDR="10.80.10.117"
> >> NETMASK="255.255.255.0"
> >> GATEWAY="10.80.10.1"
> >> DEVICE="enp6s0"
> >> HWADDR="00:25:b5:00:0b:4f"
> >> ONBOOT=yes
> >> PEERDNS=yes
> >> PEERROUTES=yes
> >> MTU=1500
> >>
> >> # cat ifcfg-enp7s0f0
> >> # Generated by VDSM version 4.17.18-1.el7
> >> DEVICE=enp7s0f0
> >> ONBOOT=yes
> >> MTU=1500
> >> HWADDR=00:25:b5:00:0b:0f
> >> NM_CONTROLLED=no
> >>
> >> # find /var/lib/vdsm/persistence
> >> /var/lib/vdsm/persistence
> >> /var/lib/vdsm/persistence/netconf
> >> /var/lib/vdsm/persistence/netconf.1456371473833165545
> >> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets
> >> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
> >>
> >> # cat
> >> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
> >> {
> >> "nic": "enp6s0",
> >> "ipaddr": "10.80.10.117",
> >> "mtu": "1500",
> >> "netmask": "255.255.255.0",
> >> "STP": "no",
> >> "bridged": "true",
> >> "gateway": "10.80.10.1",
> >> "defaultRoute": true
> >> }
> >>
> >> Supervdsm log is attached.
> >
> > Have you edited ifcfg-ovirtmgmt manually?
> > Nope
> >
> > Can you somehow reproduce it, and share its content?
> > Yea, I should be able to reproduce it - just gotta fix it first (create
> the networking manually and get VDSM on-line).  Also it’s a side
> project/investigation at the moment so time isn'

Re: [ovirt-users] 3.6 loses network on reboot

2016-03-02 Thread Edward Haas
On 03/02/2016 01:36 AM, David LeVene wrote:
> Hi Dan,
> 
> I missed the email as the subject line changed!
> 
> So we use and run IPv6 in our network - not sure if this is related. The 
> Addresses are handed out via SLAAC so that would be where the IPv6 address is 
> coming from.
> 
> My memory is a bit sketchy... but I think if I remove the vmfex/SRIOV vNIC 
> and only run with the one vNIC it works fine, it's when I bring the second 
> NIC into play with SRIOV the issues arise.
> 
> Answers inline.
> 
> -Original Message-
> From: Dan Kenigsberg [mailto:dan...@redhat.com]
> Sent: Tuesday, March 01, 2016 00:28
> To: David LeVene 
> Cc: edwa...@redhat.com; users@ovirt.org
> Subject: Re: [ovirt-users] 3.6 looses network on reboot
> 
> This sounds very bad. Changing the subject, so the wider, more problematic 
> issue is visible.
> 
> Did any other user see this behavior?
> 
> On Mon, Feb 29, 2016 at 06:27:46AM +, David LeVene wrote:
>> Hi Dan,
>>
>> Answers as follows;
>>
>> # rpm -qa | grep -i vdsm
>> vdsm-jsonrpc-4.17.18-1.el7.noarch
>> vdsm-hook-vmfex-4.17.18-1.el7.noarch
>> vdsm-infra-4.17.18-1.el7.noarch
>> vdsm-4.17.18-1.el7.noarch
>> vdsm-python-4.17.18-1.el7.noarch
>> vdsm-yajsonrpc-4.17.18-1.el7.noarch
>> vdsm-cli-4.17.18-1.el7.noarch
>> vdsm-xmlrpc-4.17.18-1.el7.noarch
>> vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
>>
>>
>> There was in this folder ifcfg-ovirtmgnt bridge setup, and also 
>> route-ovirtmgnt & rule-ovirtmgmt.. but they were removed after the reboot.
>>
>> # ls -althr | grep ifcfg
>> -rw-r--r--. 1 root root  254 Sep 16 21:21 ifcfg-lo -rw-r--r--. 1 root
>> root  120 Feb 25 14:07 ifcfg-enp7s0f0 -rw-rw-r--. 1 root root  174 Feb
>> 25 14:40 ifcfg-enp6s0
>>
>> I think I modified ifcfg-enp6s0 to get networking up again (eg was set to 
>> bridge.. but the bridge wasn't configured).. it was a few days ago.. if it's 
>> important I can reboot the box again to see what state it comes up with.
>>
>> # cat ifcfg-enp6s0
>> BOOTPROTO="none"
>> IPADDR="10.80.10.117"
>> NETMASK="255.255.255.0"
>> GATEWAY="10.80.10.1"
>> DEVICE="enp6s0"
>> HWADDR="00:25:b5:00:0b:4f"
>> ONBOOT=yes
>> PEERDNS=yes
>> PEERROUTES=yes
>> MTU=1500
>>
>> # cat ifcfg-enp7s0f0
>> # Generated by VDSM version 4.17.18-1.el7
>> DEVICE=enp7s0f0
>> ONBOOT=yes
>> MTU=1500
>> HWADDR=00:25:b5:00:0b:0f
>> NM_CONTROLLED=no
>>
>> # find /var/lib/vdsm/persistence
>> /var/lib/vdsm/persistence
>> /var/lib/vdsm/persistence/netconf
>> /var/lib/vdsm/persistence/netconf.1456371473833165545
>> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets
>> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
>>
>> # cat
>> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
>> {
>> "nic": "enp6s0",
>> "ipaddr": "10.80.10.117",
>> "mtu": "1500",
>> "netmask": "255.255.255.0",
>> "STP": "no",
>> "bridged": "true",
>> "gateway": "10.80.10.1",
>> "defaultRoute": true
>> }
>>
>> Supervdsm log is attached.
> 
> Have you edited ifcfg-ovirtmgmt manually?
> Nope
> 
> Can you somehow reproduce it, and share its content?
> Yea, I should be able to reproduce it - just gotta fix it first (create the 
> networking manually and get VDSM on-line).  Also it’s a side 
> project/investigation at the moment so time isn't on my side...
> 
> Would it help if I take an sosreport before and after? I don't mind emailing 
> these directly to you.
> 
> Do you have NetworkManager running? which version?
> NM is disabled, but the version is...
> # rpm -q NetworkManager
> NetworkManager-1.0.6-27.el7.x86_64
> # systemctl status NetworkManager.service
> ● NetworkManager.service - Network Manager
>Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; 
> vendor preset: enabled)
>Active: inactive (dead)
> 
> It seems that Vdsm has two bugs: on boot, initscripts end up setting an
> ipv6 address that Vdsm never requested
> 
> As mentioned above this would have come from SLAAC which we have setup in our 
> network
> 
> restore-net::INFO::2016-02-25 
> 14:14:58,024::vdsm-restore-net-config::261::root::(_find_changed_or_missing) 
> ovirtmgmt is different or mis


Re: [ovirt-users] 3.6 loses network on reboot

2016-03-01 Thread David LeVene
Hi Dan,

I missed the email as the subject line changed!

So we use and run IPv6 in our network - not sure if this is related. The 
Addresses are handed out via SLAAC so that would be where the IPv6 address is 
coming from.

My memory is a bit sketchy... but I think if I remove the vmfex/SRIOV vNIC and 
only run with the one vNIC it works fine, it's when I bring the second NIC into 
play with SRIOV the issues arise.

Answers inline.

-Original Message-
From: Dan Kenigsberg [mailto:dan...@redhat.com]
Sent: Tuesday, March 01, 2016 00:28
To: David LeVene 
Cc: edwa...@redhat.com; users@ovirt.org
Subject: Re: [ovirt-users] 3.6 looses network on reboot

This sounds very bad. Changing the subject, so the wider, more problematic 
issue is visible.

Did any other user see this behavior?

On Mon, Feb 29, 2016 at 06:27:46AM +, David LeVene wrote:
> Hi Dan,
>
> Answers as follows;
>
> # rpm -qa | grep -i vdsm
> vdsm-jsonrpc-4.17.18-1.el7.noarch
> vdsm-hook-vmfex-4.17.18-1.el7.noarch
> vdsm-infra-4.17.18-1.el7.noarch
> vdsm-4.17.18-1.el7.noarch
> vdsm-python-4.17.18-1.el7.noarch
> vdsm-yajsonrpc-4.17.18-1.el7.noarch
> vdsm-cli-4.17.18-1.el7.noarch
> vdsm-xmlrpc-4.17.18-1.el7.noarch
> vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
>
>
> There was in this folder ifcfg-ovirtmgnt bridge setup, and also 
> route-ovirtmgnt & rule-ovirtmgmt.. but they were removed after the reboot.
>
> # ls -althr | grep ifcfg
> -rw-r--r--. 1 root root  254 Sep 16 21:21 ifcfg-lo -rw-r--r--. 1 root
> root  120 Feb 25 14:07 ifcfg-enp7s0f0 -rw-rw-r--. 1 root root  174 Feb
> 25 14:40 ifcfg-enp6s0
>
> I think I modified ifcfg-enp6s0 to get networking up again (eg was set to 
> bridge.. but the bridge wasn't configured).. it was a few days ago.. if it's 
> important I can reboot the box again to see what state it comes up with.
>
> # cat ifcfg-enp6s0
> BOOTPROTO="none"
> IPADDR="10.80.10.117"
> NETMASK="255.255.255.0"
> GATEWAY="10.80.10.1"
> DEVICE="enp6s0"
> HWADDR="00:25:b5:00:0b:4f"
> ONBOOT=yes
> PEERDNS=yes
> PEERROUTES=yes
> MTU=1500
>
> # cat ifcfg-enp7s0f0
> # Generated by VDSM version 4.17.18-1.el7
> DEVICE=enp7s0f0
> ONBOOT=yes
> MTU=1500
> HWADDR=00:25:b5:00:0b:0f
> NM_CONTROLLED=no
>
> # find /var/lib/vdsm/persistence
> /var/lib/vdsm/persistence
> /var/lib/vdsm/persistence/netconf
> /var/lib/vdsm/persistence/netconf.1456371473833165545
> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets
> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
>
> # cat
> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
> {
> "nic": "enp6s0",
> "ipaddr": "10.80.10.117",
> "mtu": "1500",
> "netmask": "255.255.255.0",
> "STP": "no",
> "bridged": "true",
> "gateway": "10.80.10.1",
> "defaultRoute": true
> }
>
> Supervdsm log is attached.

Have you edited ifcfg-ovirtmgmt manually?
Nope

Can you somehow reproduce it, and share its content?
Yea, I should be able to reproduce it - just gotta fix it first (create the 
networking manually and get VDSM on-line).  Also it’s a side 
project/investigation at the moment so time isn't on my side...

Would it help if I take an sosreport before and after? I don't mind emailing 
these directly to you.

Do you have NetworkManager running? which version?
NM is disabled, but the version is...
# rpm -q NetworkManager
NetworkManager-1.0.6-27.el7.x86_64
# systemctl status NetworkManager.service
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; 
vendor preset: enabled)
   Active: inactive (dead)

It seems that Vdsm has two bugs: on boot, initscripts end up setting an
ipv6 address that Vdsm never requested.

As mentioned above, this would have come from SLAAC, which we have set up in 
our network.
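Dan's earlier suggestion to disable IPv6 on the host boot line could also be narrowed to just the management bridge. A sketch, assuming SLAAC really is the source of the unsolicited address: the sysctls are standard Linux IPv6 settings, the drop-in file name is hypothetical, and this is a workaround rather than a substitute for the vdsm fix (and not an option where the interface itself must carry IPv6):

```ini
# /etc/sysctl.d/90-ovirtmgmt-no-slaac.conf (hypothetical file name)
# Stop the kernel from autoconfiguring a SLAAC address on the bridge,
# and ignore router advertisements on it.
net.ipv6.conf.ovirtmgmt.autoconf = 0
net.ipv6.conf.ovirtmgmt.accept_ra = 0
```

Apply with `sysctl --system` or a reboot.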

restore-net::INFO::2016-02-25 
14:14:58,024::vdsm-restore-net-config::261::root::(_find_changed_or_missing) 
ovirtmgmt is different or missing from persistent configuration. current: 
{'nic': 'enp6s0', 'dhcpv6': False, 'ipaddr': '10.80.10.117', 'mtu': '1500', 
'netmask': '255.255.255.0', 'bootproto': 'none', 'stp': False, 'bridged': True, 
'ipv6addr': ['2400:7d00:110:3:225:b5ff:fe00:b4f/64'], 'gateway': '10.80.10.1', 
'defaultRoute': True}, persisted: {u'nic': u'enp6s0', 'dhcpv6': False, 
u'ipaddr': u'10.80.10.117', u'mtu': '1500', u'netmask': u'255.255.255.0', 
'bootproto': 'none', 'stp': False, u'bridged': True, u'gateway': u'10.80.10.1', 
u'defaultRoute': True}

Re: [ovirt-users] 3.6 looses network on reboot

2016-02-29 Thread Dan Kenigsberg
On Tue, Mar 01, 2016 at 06:33:52AM +, Pavel Gashev wrote:
> I did see it a few times. The first reboot after a new node setup sometimes 
> fails to bring the network up. It tries to remove a network interface that 
> doesn't exist.
> 
> Steps to recover:
> 1. Remove /var/lib/vdsm/persistence/netconf
> 2. Remove /var/run/vdsm/netconf
> 3. Configure network manually
> 4. Start vdsmd service
> 5. Configure network again using web ui. Make sure that config is synced.

Thanks, please share your supervdsm.log of that time if you still have
it available.

It is very important to understand what caused this failure - whether it
is the same as in David's case or different issue.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 looses network on reboot

2016-02-29 Thread Pavel Gashev
I did see it a few times. The first reboot after a new node setup sometimes 
fails to bring the network up. It tries to remove a network interface that 
doesn't exist.

Steps to recover:
1. Remove /var/lib/vdsm/persistence/netconf
2. Remove /var/run/vdsm/netconf
3. Configure network manually
4. Start vdsmd service
5. Configure network again using web ui. Make sure that config is synced.
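A hedged rendering of those steps as a script. The device name (enp6s0) and addresses are the ones from this thread; substitute your own. It defaults to a dry run that only prints the commands; set LIVE=1 to apply them for real (as root):

```shell
# Sketch of Pavel's recovery steps; dry-run by default.
LIVE="${LIVE:-0}"

run() {
    if [ "$LIVE" = "1" ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# Steps 1-2: drop vdsm's persisted and runtime network configuration.
run rm -rf /var/lib/vdsm/persistence/netconf
run rm -rf /var/run/vdsm/netconf

# Step 3: restore management connectivity by hand.
run ip addr add 10.80.10.117/24 dev enp6s0
run ip route add default via 10.80.10.1

# Step 4: start vdsmd. Step 5 -- re-adding ovirtmgmt and verifying the
# host shows as in sync -- is done from the engine web UI, so it has no
# command here.
run systemctl start vdsmd
```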

On Mon, 2016-02-29 at 15:58 +0200, Dan Kenigsberg wrote:

This sounds very bad. Changing the subject, so the wider, more
problematic issue is visible.

Did any other user see this behavior?

On Mon, Feb 29, 2016 at 06:27:46AM +, David LeVene wrote:


Hi Dan,

Answers as follows:

# rpm -qa | grep -i vdsm
vdsm-jsonrpc-4.17.18-1.el7.noarch
vdsm-hook-vmfex-4.17.18-1.el7.noarch
vdsm-infra-4.17.18-1.el7.noarch
vdsm-4.17.18-1.el7.noarch
vdsm-python-4.17.18-1.el7.noarch
vdsm-yajsonrpc-4.17.18-1.el7.noarch
vdsm-cli-4.17.18-1.el7.noarch
vdsm-xmlrpc-4.17.18-1.el7.noarch
vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch


This folder previously contained the ifcfg-ovirtmgmt bridge setup, along with 
route-ovirtmgmt and rule-ovirtmgmt, but they were removed after the reboot.

# ls -althr | grep ifcfg
-rw-r--r--. 1 root root  254 Sep 16 21:21 ifcfg-lo
-rw-r--r--. 1 root root  120 Feb 25 14:07 ifcfg-enp7s0f0
-rw-rw-r--. 1 root root  174 Feb 25 14:40 ifcfg-enp6s0

I think I modified ifcfg-enp6s0 to get networking up again (e.g. it was set to 
bridge, but the bridge wasn't configured). It was a few days ago; if it's 
important I can reboot the box again to see what state it comes up in.

# cat ifcfg-enp6s0
BOOTPROTO="none"
IPADDR="10.80.10.117"
NETMASK="255.255.255.0"
GATEWAY="10.80.10.1"
DEVICE="enp6s0"
HWADDR="00:25:b5:00:0b:4f"
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
MTU=1500

# cat ifcfg-enp7s0f0
# Generated by VDSM version 4.17.18-1.el7
DEVICE=enp7s0f0
ONBOOT=yes
MTU=1500
HWADDR=00:25:b5:00:0b:0f
NM_CONTROLLED=no

# find /var/lib/vdsm/persistence
/var/lib/vdsm/persistence
/var/lib/vdsm/persistence/netconf
/var/lib/vdsm/persistence/netconf.1456371473833165545
/var/lib/vdsm/persistence/netconf.1456371473833165545/nets
/var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt

# cat /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
{
"nic": "enp6s0",
"ipaddr": "10.80.10.117",
"mtu": "1500",
"netmask": "255.255.255.0",
"STP": "no",
"bridged": "true",
"gateway": "10.80.10.1",
"defaultRoute": true
}
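As an aside, the persisted definition above is plain JSON and can be inspected directly. Note the mixed typing vdsm writes out: "bridged" is the *string* "true" while defaultRoute is a real boolean. A small standalone sketch, not part of vdsm itself:

```python
import json

# The persisted ovirtmgmt definition quoted above, verbatim.
persisted = json.loads('''
{
"nic": "enp6s0",
"ipaddr": "10.80.10.117",
"mtu": "1500",
"netmask": "255.255.255.0",
"STP": "no",
"bridged": "true",
"gateway": "10.80.10.1",
"defaultRoute": true
}
''')

print(type(persisted["bridged"]).__name__, persisted["bridged"])            # str true
print(type(persisted["defaultRoute"]).__name__, persisted["defaultRoute"])  # bool True
```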

Supervdsm log is attached.



Have you edited ifcfg-ovirtmgmt manually? Can you somehow reproduce it,
and share its content?
Do you have NetworkManager running? which version?

It seems that Vdsm has two bugs: on boot, initscripts end up setting an
ipv6 address that Vdsm never requested.

restore-net::INFO::2016-02-25 
14:14:58,024::vdsm-restore-net-config::261::root::(_find_changed_or_missing) 
ovirtmgmt is different or missing from persistent configuration. current: 
{'nic': 'enp6s0', 'dhcpv6': False, 'ipaddr': '10.80.10.117', 'mtu': '1500', 
'netmask': '255.255.255.0', 'bootproto': 'none', 'stp': False, 'bridged': True, 
'ipv6addr': ['2400:7d00:110:3:225:b5ff:fe00:b4f/64'], 'gateway': '10.80.10.1', 
'defaultRoute': True}, persisted: {u'nic': u'enp6s0', 'dhcpv6': False, 
u'ipaddr': u'10.80.10.117', u'mtu': '1500', u'netmask': u'255.255.255.0', 
'bootproto': 'none', 'stp': False, u'bridged': True, u'gateway': u'10.80.10.1', 
u'defaultRoute': True}


Then, Vdsm tries to drop the
unsolicited address, but fails. Both must be fixed ASAP.

restore-net::ERROR::2016-02-25 14:14:59,490::__init__::58::root::(__exit__) 
Failed rollback transaction last known good network.
Traceback (most recent call last):
  File "/usr/share/vdsm/network/api.py", line 918, in setupNetworks
keep_bridge=keep_bridge)
  File "/usr/share/vdsm/network/api.py", line 222, in wrapped
ret = func(**attrs)
  File "/usr/share/vdsm/network/api.py", line 502, in _delNetwork
configurator.removeQoS(net_ent)
  File "/usr/share/vdsm/network/configurators/__init__.py", line 122, in 
removeQoS
qos.remove_outbound(top_device)
  File "/usr/share/vdsm/network/configurators/qos.py", line 60, in 
remove_outbound
device, pref=_NON_VLANNED_ID if vlan_tag is None else vlan_tag)
  File "/usr/share/vdsm/network/tc/filter.py", line 31, in delete
_wrapper.process_request(command)
  File "/usr/share/vdsm/network/tc/_wrapper.py", line 38, in process_request
raise TrafficControlException(retcode, err, command)
TrafficControlException: (None, 'Message truncated', ['/usr/sbin/tc', 
'filter', 'del', 'dev', 'enp6s0', 'pref', '5000'])

Regards,
Dan.


Re: [ovirt-users] 3.6 looses network on reboot

2016-02-29 Thread Dan Kenigsberg
This sounds very bad. Changing the subject, so the wider, more
problematic issue is visible.

Did any other user see this behavior?

On Mon, Feb 29, 2016 at 06:27:46AM +, David LeVene wrote:
> Hi Dan,
> 
> Answers as follows:
> 
> # rpm -qa | grep -i vdsm
> vdsm-jsonrpc-4.17.18-1.el7.noarch
> vdsm-hook-vmfex-4.17.18-1.el7.noarch
> vdsm-infra-4.17.18-1.el7.noarch
> vdsm-4.17.18-1.el7.noarch
> vdsm-python-4.17.18-1.el7.noarch
> vdsm-yajsonrpc-4.17.18-1.el7.noarch
> vdsm-cli-4.17.18-1.el7.noarch
> vdsm-xmlrpc-4.17.18-1.el7.noarch
> vdsm-hook-vmfex-dev-4.17.18-1.el7.noarch
> 
> 
> This folder previously contained the ifcfg-ovirtmgmt bridge setup, along with 
> route-ovirtmgmt and rule-ovirtmgmt, but they were removed after the reboot.
> 
> # ls -althr | grep ifcfg
> -rw-r--r--. 1 root root  254 Sep 16 21:21 ifcfg-lo
> -rw-r--r--. 1 root root  120 Feb 25 14:07 ifcfg-enp7s0f0
> -rw-rw-r--. 1 root root  174 Feb 25 14:40 ifcfg-enp6s0
> 
> I think I modified ifcfg-enp6s0 to get networking up again (e.g. it was set to 
> bridge, but the bridge wasn't configured). It was a few days ago; if it's 
> important I can reboot the box again to see what state it comes up in.
> 
> # cat ifcfg-enp6s0
> BOOTPROTO="none"
> IPADDR="10.80.10.117"
> NETMASK="255.255.255.0"
> GATEWAY="10.80.10.1"
> DEVICE="enp6s0"
> HWADDR="00:25:b5:00:0b:4f"
> ONBOOT=yes
> PEERDNS=yes
> PEERROUTES=yes
> MTU=1500
> 
> # cat ifcfg-enp7s0f0
> # Generated by VDSM version 4.17.18-1.el7
> DEVICE=enp7s0f0
> ONBOOT=yes
> MTU=1500
> HWADDR=00:25:b5:00:0b:0f
> NM_CONTROLLED=no
> 
> # find /var/lib/vdsm/persistence
> /var/lib/vdsm/persistence
> /var/lib/vdsm/persistence/netconf
> /var/lib/vdsm/persistence/netconf.1456371473833165545
> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets
> /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
> 
> # cat /var/lib/vdsm/persistence/netconf.1456371473833165545/nets/ovirtmgmt
> {
> "nic": "enp6s0",
> "ipaddr": "10.80.10.117",
> "mtu": "1500",
> "netmask": "255.255.255.0",
> "STP": "no",
> "bridged": "true",
> "gateway": "10.80.10.1",
> "defaultRoute": true
> }
> 
> Supervdsm log is attached.

Have you edited ifcfg-ovirtmgmt manually? Can you somehow reproduce it,
and share its content?
Do you have NetworkManager running? which version?

It seems that Vdsm has two bugs: on boot, initscripts end up setting an
ipv6 address that Vdsm never requested.

restore-net::INFO::2016-02-25 
14:14:58,024::vdsm-restore-net-config::261::root::(_find_changed_or_missing) 
ovirtmgmt is different or missing from persistent configuration. current: 
{'nic': 'enp6s0', 'dhcpv6': False, 'ipaddr': '10.80.10.117', 'mtu': '1500', 
'netmask': '255.255.255.0', 'bootproto': 'none', 'stp': False, 'bridged': True, 
'ipv6addr': ['2400:7d00:110:3:225:b5ff:fe00:b4f/64'], 'gateway': '10.80.10.1', 
'defaultRoute': True}, persisted: {u'nic': u'enp6s0', 'dhcpv6': False, 
u'ipaddr': u'10.80.10.117', u'mtu': '1500', u'netmask': u'255.255.255.0', 
'bootproto': 'none', 'stp': False, u'bridged': True, u'gateway': u'10.80.10.1', 
u'defaultRoute': True}
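The mismatch that triggers the restore/rollback can be reproduced from the two dicts in that log line. A minimal sketch (a hypothetical simplification, not vdsm's actual `_find_changed_or_missing` code, using the exact values logged above):

```python
# Running config as reported at boot (includes the SLAAC-assigned address).
current = {
    'nic': 'enp6s0', 'dhcpv6': False, 'ipaddr': '10.80.10.117',
    'mtu': '1500', 'netmask': '255.255.255.0', 'bootproto': 'none',
    'stp': False, 'bridged': True,
    'ipv6addr': ['2400:7d00:110:3:225:b5ff:fe00:b4f/64'],
    'gateway': '10.80.10.1', 'defaultRoute': True,
}

# Config vdsm persisted - identical except it never requested any ipv6addr.
persisted = {
    'nic': 'enp6s0', 'dhcpv6': False, 'ipaddr': '10.80.10.117',
    'mtu': '1500', 'netmask': '255.255.255.0', 'bootproto': 'none',
    'stp': False, 'bridged': True,
    'gateway': '10.80.10.1', 'defaultRoute': True,
}

# Keys whose running value differs from (or is missing in) the persisted
# config - only the unsolicited SLAAC address remains:
diff = {k: v for k, v in current.items() if persisted.get(k) != v}
print(diff)  # {'ipv6addr': ['2400:7d00:110:3:225:b5ff:fe00:b4f/64']}
```

This is why the network is reported as "different or missing from persistent configuration" even though the administrator changed nothing.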


Then, Vdsm tries to drop the
unsolicited address, but fails. Both must be fixed ASAP.

restore-net::ERROR::2016-02-25 14:14:59,490::__init__::58::root::(__exit__) 
Failed rollback transaction last known good network.
Traceback (most recent call last):
  File "/usr/share/vdsm/network/api.py", line 918, in setupNetworks
keep_bridge=keep_bridge)
  File "/usr/share/vdsm/network/api.py", line 222, in wrapped
ret = func(**attrs)
  File "/usr/share/vdsm/network/api.py", line 502, in _delNetwork
configurator.removeQoS(net_ent)
  File "/usr/share/vdsm/network/configurators/__init__.py", line 122, in 
removeQoS
qos.remove_outbound(top_device)
  File "/usr/share/vdsm/network/configurators/qos.py", line 60, in 
remove_outbound
device, pref=_NON_VLANNED_ID if vlan_tag is None else vlan_tag)
  File "/usr/share/vdsm/network/tc/filter.py", line 31, in delete
_wrapper.process_request(command)
  File "/usr/share/vdsm/network/tc/_wrapper.py", line 38, in process_request
raise TrafficControlException(retcode, err, command)
TrafficControlException: (None, 'Message truncated', ['/usr/sbin/tc', 
'filter', 'del', 'dev', 'enp6s0', 'pref', '5000'])

Regards,
Dan.