It's covered in the FAQ, the ovs-vsctl man page, the ovs-vswitchd.conf.db man 
page, and probably other places.
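
For reference, STP is a per-bridge setting, so it can be turned off with 
something like the following (assuming your bridge is named "ovsbr1", as in 
the logs you posted):

    ovs-vsctl set Bridge ovsbr1 stp_enable=false

You can check the current setting with "ovs-vsctl get Bridge ovsbr1 
stp_enable" (and "rstp_enable" is the analogous column for RSTP).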

--Justin


> On May 26, 2017, at 10:06 AM, Gale Price <[email protected]> wrote:
> 
> No, I do not... 
> Where do I turn it off?
> 
> 
> Gale (Dean) Price
> Lead Admin - Remote Services Lab
> Lab Infrastructure Services & Support
> 8001 Development Drive Morrisville, NC 27560
> Lenovo USA
> Ph: 919-237-8421
> 297-8421
> [email protected]
>  
> 
> -----Original Message-----
> From: Justin Pettit [mailto:[email protected]] 
> Sent: Friday, May 26, 2017 1:00 PM
> To: Gale Price
> Cc: [email protected]
> Subject: Re: [ovs-discuss] Ping Drop Problem
> 
> It looks like you're having STP (spanning tree) issues.  Are the OVS 
> instances running in the VMs?  I'm wondering if your shutting down those VMs 
> is causing STP to get confused, since there are various timeouts for state 
> changes in STP.  Do you need to run STP?
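> 
> A quick way to check is something like:
> 
>     ovs-vsctl list Bridge | grep -E '^name|stp_enable'
> 
> which will show whether stp_enable is set to true on any of your bridges.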
> 
> --Justin
> 
> 
>> On May 26, 2017, at 7:09 AM, Gale Price <[email protected]> wrote:
>> 
>> Well, tcpdump didn't really give me any relevant info.
>> 
>> But in testing some of the VMs, I noticed that one of my older VMs did not 
>> cause a system-wide ping drop when shut down.
>> 
>> My newer VMs, however, do cause it when shut down... 
>> 
>> Here is my ovs-vswitchd.log from when a newer VM is shut down.
>> 
>> It takes about 5-6 pings to get from "Dropped" to "topology"; once the 
>> topology change is detected, the system-wide pings start back up.
>> 
>> 2017-05-26T13:50:10.819Z|01271|ofproto|WARN|ovsbr1: cannot get STP status on nonexistent port 106
>> 2017-05-26T13:50:10.819Z|01272|ofproto|WARN|ovsbr1: cannot get RSTP status on nonexistent port 106
>> 2017-05-26T13:50:10.823Z|01273|netdev_linux|WARN|ethtool command ETHTOOL_GDRVINFO on network device vnet33 failed: No such device
>> 2017-05-26T13:50:10.825Z|01274|netdev_linux|WARN|ethtool command ETHTOOL_GSET on network device vnet33 failed: No such device
>> 2017-05-26T13:50:10.830Z|01275|netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on vnet33 device failed: No such device
>> 2017-05-26T13:50:10.832Z|01276|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on vnet33 device failed: No such device
>> 2017-05-26T13:50:41.892Z|01277|stp|INFO|Dropped 14 log messages in last 373 seconds (most recently, 317 seconds ago) due to excessive rate
>> 2017-05-26T13:50:41.892Z|01278|stp|INFO|ovsbr1: detected topology change.
>> 2017-05-26T13:50:41.892Z|01279|stp|INFO|ovsbr1: detected topology change.
>> 2017-05-26T13:50:41.892Z|01280|stp|INFO|ovsbr1: detected topology change.
>> 2017-05-26T13:50:41.892Z|01281|stp|INFO|ovsbr1: detected topology change.
>> 2017-05-26T13:50:41.892Z|01282|stp|INFO|ovsbr1: detected topology change.
>> 
>> My messages log at the time of the shutdown:
>> 
>> May 26 09:59:46 rslcluster-node3 NetworkManager[1986]: <info>  (vnet33): enslaved to non-master-type device ovs-system; ignoring
>> May 26 10:00:00 rslcluster-node3 journal: Received unexpected event 1
>> May 26 10:00:00 rslcluster-node3 kernel: [3174306.304707] device vnet33 left promiscuous mode
>> May 26 10:00:00 rslcluster-node3 kernel: device vnet33 left promiscuous mode
>> May 26 10:00:01 rslcluster-node3 journal: internal error: End of file from monitor
>> May 26 10:00:01 rslcluster-node3 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port vnet33
>> May 26 10:00:01 rslcluster-node3 dbus[1954]: [system] Activating via systemd: service name='org.freedesktop.machine1' unit='dbus-org.freedesktop.machine1.service'
>> May 26 10:00:01 rslcluster-node3 dbus[1954]: [system] Successfully activated service 'org.freedesktop.machine1'
>> May 26 10:00:01 rslcluster-node3 systemd-machined: New machine qemu-x.
>> May 26 10:00:01 rslcluster-node3 systemd-machined: New machine qemu-.
>> [the previous line repeats 36 more times]
>> May 26 10:00:01 rslcluster-node3 journal: error from service: TerminateMachine: No machine 'qemu-NagiosLogserver' known
>> 
>> I am really at a loss here, and my network guy is not much help, as he is 
>> not into the virtualization world. 
>> 
>> 
>> Gale (Dean) Price
>> Lead Admin - Remote Services Lab
>> Lab Infrastructure Services & Support
>> 8001 Development Drive Morrisville, NC 27560
>> Lenovo USA
>> Ph: 919-237-8421
>> 297-8421
>> [email protected]
>> 
>> 
>> -----Original Message-----
>> From: Justin Pettit [mailto:[email protected]]
>> Sent: Thursday, May 25, 2017 9:10 PM
>> To: Gale Price
>> Cc: [email protected]
>> Subject: Re: [ovs-discuss] Ping Drop Problem
>> 
>> Well, it's hard to know, since you haven't described your topology, 
>> configuration, etc.  That said, I would start with tcpdump or flow counts 
>> to see where packets appear to be getting dropped and why.
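>> 
>> For example (substituting your actual bridge and VM interface names; 
>> "ovsbr1" and "vnet0" here are just placeholders):
>> 
>>     tcpdump -nei vnet0 icmp        # watch pings at the VM's tap device
>>     ovs-ofctl dump-flows ovsbr1    # per-flow packet counts on the bridge
>>     ovs-appctl dpctl/dump-flows    # flows installed in the datapath
>> 
>> Running these before and during an outage should show whether the ICMP 
>> packets reach the bridge and where they stop.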
>> 
>> --Justin
>> 
>> 
>>> On May 25, 2017, at 6:03 PM, Gale Price <[email protected]> wrote:
>>> 
>>> Where would you suggest I start looking?
>>> 
>>> Sent from my iPhone
>>> 
>>>> On May 25, 2017, at 8:39 PM, Justin Pettit <[email protected]> wrote:
>>>> 
>>>> 
>>>>> On May 25, 2017, at 11:11 AM, Gale Price <[email protected]> wrote:
>>>>> 
>>>>> I am using OVS on a Fedora 20 KVM host with 6 defined VLANs.
>>>>> 
>>>>> I am having a problem where, when I stop any VM, I lose connectivity to 
>>>>> all other VMs for about 6-7 pings.
>>>>> Then everything starts working again.
>>>>> 
>>>>> Has anyone seen this, or does anyone know a remedy for it?
>>>> 
>>>> That sounds odd.  I think you'll need to do a bit more debugging before 
>>>> anyone can help you.
>>>> 
>>>> --Justin
>>>> 
>>>> 
>> 
> 

_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
