On Sat, Mar 31, 2018 at 1:09 AM, Ran Shalit <ransha...@gmail.com> wrote:
> On Fri, Mar 30, 2018 at 8:08 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>> On Thu, Mar 29, 2018 at 10:35 PM, Ran Shalit <ransha...@gmail.com> wrote:
>>> On Fri, Mar 30, 2018 at 12:34 AM, Ran Shalit <ransha...@gmail.com> wrote:
>>>> On Thu, Mar 29, 2018 at 9:36 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>>>>> On Thu, Mar 29, 2018 at 11:18 AM, Ran Shalit <ransha...@gmail.com> wrote:
>>>>>> On Thu, Mar 29, 2018 at 8:52 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>>>>>>> On Thu, Mar 29, 2018 at 7:45 AM, Ran Shalit <ransha...@gmail.com> wrote:
>>>>>>>> On Tue, Mar 27, 2018 at 11:43 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>>>>>>>>> On Tue, Mar 27, 2018 at 1:52 AM, Ran Shalit <ransha...@gmail.com> wrote:
>>>>>>>>>> On Thu, Mar 22, 2018 at 7:31 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>>>>>>>>>>> On Thu, Mar 22, 2018 at 9:11 AM, Ran Shalit <ransha...@gmail.com> wrote:
>>>>>>>>>>>> On Mon, Mar 19, 2018 at 10:23 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>>>>>>>>>>>>> On Mon, Mar 19, 2018 at 12:31 PM, Ran Shalit <ransha...@gmail.com> wrote:
>>>>>>>>>>>>>> On Mon, Mar 19, 2018 at 6:27 PM, Alexander Duyck <alexander.du...@gmail.com> wrote:
>>>>>>>>>>>>>>> On Mon, Mar 19, 2018 at 9:07 AM, Ran Shalit <ransha...@gmail.com> wrote:
>>>>>>>>>>>>>>>> Hello,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I am using the igb driver:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> cat output/build/linux-4.10.17/.config | grep IGB
>>>>>>>>>>>>>>>> CONFIG_IGB=y
>>>>>>>>>>>>>>>> CONFIG_IGB_HWMON=y
>>>>>>>>>>>>>>>> CONFIG_IGBVF=y
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But every boot takes a long time until the ethernet is ready.
>>>>>>>>>>>>>>>> I tried to disable auto-negotiation, but nothing has helped yet; the device resists and keeps resetting the phy.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> adapter->fc_autoneg = false;
>>>>>>>>>>>>>>>> hw->mac.autoneg = false;
>>>>>>>>>>>>>>>> hw->phy.autoneg_advertised = 0;
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I tried more flags, but nothing helps. The phy is always disabled/reset at boot (the led is off for 1 second and then on).
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Is there a way to disable auto-negotiation with the igb driver? I use buildroot with kernel 4.10.81.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thank you for any suggestion,
>>>>>>>>>>>>>>>> ran
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Instead of trying to disable it in the driver, why not just change your system configuration to disable it? You should be able to configure things either in NetworkManager or via the network init scripts so that you instead use a forced speed/duplex combination. If nothing else, you can drop support for every other advertised speed/duplex, and that should improve the speed of autoneg itself.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think I tried it and it did not work, but I shall try again. Where should I put it? In the init.d startup scripts?
>>>>>>>>>>>>>> I think I tried it, and yet I have seen that on reset the leds of the phy are always turned off for ~1.5 seconds and then on again. This is actually what I am trying to overcome: this strange reset of the phy on every powerup.
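[Editor's note: the forced speed/duplex configuration suggested above can be done with ethtool from an init script. A minimal sketch; the interface name eth0, the 100 Mb/s choice, and the peer being hard-set to match are all assumptions for illustration:]

```shell
# Force speed/duplex and turn autoneg off entirely. Note that 1000BASE-T
# over copper requires autoneg, so a forced link must use 10 or 100 Mb/s,
# and the link partner must be forced to the same settings:
ethtool -s eth0 speed 100 duplex full autoneg off

# Alternatively, keep autoneg but advertise only 1000baseT/Full
# (ethtool advertise bitmask 0x020), which trims the negotiation itself:
ethtool -s eth0 advertise 0x020
```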
>>>>>>>>>>>>> So it sounds like what you may want to disable is not the phy autoneg but the phy reset itself. If that is what you are looking for, then you might try modifying igb_reset_phy, or at least the spot where we invoke it in igb_power_up_link. You could look at adding a private flag to the igb driver to disable it for your use case if that is the issue.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You could refer to something like this for more information:
>>>>>>>>>>>>>>> https://www.shellhacks.com/change-speed-duplex-ethernet-card-linux/
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - Alex
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thank you very much,
>>>>>>>>>>>>>> ran
>>>>>>>>>>>>>
>>>>>>>>>>>>> Now I am not so certain this will solve your issue. What you may want to do instead is take a look at the function igb_power_up_link in igb_main.c and consider adding a flag check to let you disable it on the systems where you need to; or, if this is for just one driver, you could comment the line out and see if that solves the issue.
>>>>>>>>>>>>
>>>>>>>>>>>> Using dmesg to understand what's going on, I see 2 things:
>>>>>>>>>>>> 1. A long time interval between opening and link up (5.0 - 1.9 = ~3 seconds!)
>>>>>>>>>>>
>>>>>>>>>>> For a 1G link, 3 seconds isn't all that long.
>>>>>>>>>>>
>>>>>>>>>>>> 2. A long time interval between the link-up state and ping success (8 - 5 = ~3 seconds!)
>>>>>>>>>>>
>>>>>>>>>>> I don't know what to tell you about that part. I suspect that may be some delays in notifiers being processed after the interface has reported link up.
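[Editor's note: the intervals read off dmesg above can be computed mechanically from the bracketed timestamps. A small sketch; the log lines below are illustrative stand-ins, not the reporter's actual dmesg output:]

```shell
# Subtract the dmesg timestamp of the igb probe message from that of the
# link-up message to get the link bring-up delay:
printf '%s\n' \
  '[    1.900000] igb 0000:01:00.0: Intel(R) Gigabit Ethernet Network Connection' \
  '[    5.000000] igb 0000:01:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex' |
awk -F'[][]' '/Intel\(R\) Gigabit/ {start=$2} /Link is Up/ {printf "%.1f\n", $2 - start}'
# prints 3.1
```

In practice you would pipe `dmesg` into the awk program instead of the `printf` sample.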
>>>>>>>>>>> In addition, I don't know what your link partner is. Normally for a ping you have to take care of things like setting up routes, getting an ARP out to find your target, and then sending the ping to that address.
>>>>>>>>>>>
>>>>>>>>>>> What might be interesting would be to add a dump of the tx_buffer_info data for the rings that have transmitted packets. For example, we might have reported link up while the link partner was not responding to us because it wasn't up yet.
>>>>>>>>>>
>>>>>>>>>> Hi Alexander,
>>>>>>>>>>
>>>>>>>>>> Is there a way to enable this dump with a compilation flag? How do I dump this information?
>>>>>>>>>> This time interval (from igb probe to link up) is now the major delay (we solved the other delay, from link up to ping).
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> ranran
>>>>>>>>>
>>>>>>>>> Actually, it can be turned on dynamically to some extent. The code to dump the Tx descriptor information is in a function called igb_dump in the driver. To turn on the bits related to dumping Tx packets you would need to use:
>>>>>>>>> ethtool -s <ethX> msglvl hw on
>>>>>>>>> ethtool -s <ethX> msglvl tx_done on
>>>>>>>>> ethtool -s <ethX> msglvl pktdata on
>>>>>>>>>
>>>>>>>>> Then it is just a matter of triggering a call to the reset task. There are a few ways to get there. If nothing else, you might modify the driver to call igb_dump() at the start of the igb_close() function. Then all you would have to do is bring down the interface with something like "ifconfig <ethX> down", and that should trigger the dump of the Tx descriptor data.
>>>>>>>>
>>>>>>>> I've added start/stop logs in the main igb functions. I can now see things more clearly.
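[Editor's note: the recipe above put end to end could look like this. A sketch only; eth0 is an assumption, and the igb_dump()-from-igb_close() call is the local driver patch Alex describes, not stock behavior:]

```shell
# Enable the message-level bits that igb_dump checks before printing
# Tx descriptor and packet data:
ethtool -s eth0 msglvl hw on
ethtool -s eth0 msglvl tx_done on
ethtool -s eth0 msglvl pktdata on

# With igb_dump() patched into igb_close(), bringing the interface down
# triggers the dump; the output lands in the kernel log:
ip link set dev eth0 down
dmesg | tail -n 50
```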
>>>>>>>> Please see the attached log below. It seems that igb_watchdog is somehow responsible for waking up and checking things before deciding that the link is up. Maybe there is a way to call it at a higher rate?
>>>>>>>
>>>>>>> When the link state changes, it is signaled via an interrupt. The time was about 3.5s to bring up the link here, if I am understanding correctly. The other issue I see in your test is that I am guessing you aren't pinging something with a static ARP entry, which is another cause for delay in your test.
>>>>>>>
>>>>>>> One other thing you might try to shave time off of link training is to disable mdix via ethtool ("ethtool -s <ethX> mdix off") and see if that saves you any time. It might shave about 0.5 seconds off of the total link time.
>>>>>>>
>>>>>>> If you want it to link faster, maybe you should look at using something besides 1000BASE-T. Typically about 3 to 4 seconds is the limit for Gigabit copper. You may want to look at a device with a different PHY if you require a faster link time. For example, for automotive applications something like 1000BASE-T1 (https://standards.ieee.org/events/automotive/2016/d1_09_tan_1000base_t1_learning_and_next_gen_use_cases_v6.pdf) is supposed to be the intended way to go, with link times in the sub-200ms range.
>>>>>>>
>>>>>>> - Alex
>>>>>>
>>>>>> I will check your interesting suggestions. Do you think it might help to start igb and networking earlier?
>>>>>
>>>>> No.
>>>>>
>>>>>> I see in the dmesg log that igb starts late in the kernel boot, even when I add ip=... in bootargs instead of using the init.d script (which calls ifup).
>>>>>
>>>>> You need enough of the basics up first, so 2 seconds is pretty reasonable.
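[Editor's note: both suggestions above are one-liners. A sketch assuming the interface is eth0 and the link partner is 10.0.0.1; the MAC address is a placeholder, to be replaced with the partner's real address:]

```shell
# Drop auto MDI-X detection from link training (may save ~0.5 s):
ethtool -s eth0 mdix off

# Install a permanent ARP entry so the first ping doesn't stall on
# ARP resolution:
ip neigh replace 10.0.0.1 lladdr 00:11:22:33:44:55 dev eth0 nud permanent
ping -c 1 10.0.0.1
```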
>>>
>>> I would like to add another very interesting result I got: disabling auto-negotiation did not reduce the time, and I disabled starting the network in init.d (S40network, ifup) and started the network instead with bootargs (ip=10.0.0.2:....)
>>>
>>> Is there any reason for this dependency of these methods on each other?
>>>
>>> Best Regards,
>>> ranran
>>
>> So, as they say, "it takes two to tango". You are controlling one end of the link. What is the other end of the link currently configured for? If it is set to auto-negotiate or to do stuff like auto-mdix, that will add time to link establishment.
>>
> I understand. I did try disabling auto-negotiation on both sides. I haven't tried mdix yet; I am trying to see if I have an mdix setting on the Windows PC NIC, but I don't find such a setting in the Windows adapter menu, so I am not sure I can change it on both sides.
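[Editor's note: for reference, the ip= kernel parameter mentioned above uses the colon-separated form documented in the kernel's nfsroot documentation. A sketch matching the client address in the thread; the gateway, netmask, and device name are assumptions:]

```shell
# Appended to the kernel command line (bootargs):
#   ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>
# Static configuration, no DHCP/BOOTP (autoconf "off"):
ip=10.0.0.2::10.0.0.1:255.255.255.0::eth0:off
```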
If you don't have that level of detail on your link partner, how can you expect the Linux configuration to be of any help at all? I was concerned something like this was going on. You need a symmetric setup if you want any hope of being able to actually judge the changes in configuration. Otherwise you might as well just give up on this, since you can make all the changes you want in the Linux setup and your Windows setup could be artificially limiting your link time in all cases.

>> I'm not sure what you are talking about in regards to dependencies. If you are asking why the network has to be running in order to configure a network interface, I would think that is kind of obvious.
>>
>> Thanks.
>>
>> - Alex
>
> I meant the following dependency:
> When using auto-negotiation (on both sides, of course), I got 10 seconds till ping. Then:
> 1. Removing ifup from the startup script and using bootargs (ip=) instead - no change.
> 2. Adding ifup again (and removing ip= from bootargs) and disabling auto-negotiation - a 1.5 second improvement in time till ping.
> 3. Removing ifup from the startup script again and using ip= in bootargs - an improvement of 1 second in time till ping.
>
> So I mean that I get the improvement of method (3) above only after disabling auto-negotiation. I am not sure what the reason for this improvement is, or why it depends on disabling auto-negotiation. I have dmesg logs of all the trials.
>
> Best Regards,
> Ran

For all I know it could be noise or just a coincidence. It sounds like your environment has too many variables you don't have control over, so I really have no way of knowing what is going on. My advice would be to try to get a symmetric setup where you can set up both ends the same way if you are going to go down this sort of path.
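[Editor's note: comparing trials like the three above is only meaningful if time-to-first-ping is measured identically in every run. A minimal measurement sketch; the interface name and peer address are assumptions, and it must run on the actual hardware:]

```shell
# Measure the time from bringing the interface up to the first
# successful ping of the link partner:
ip link set dev eth0 up
start=$(date +%s.%N)
until ping -c 1 -W 1 10.0.0.1 >/dev/null 2>&1; do :; done
end=$(date +%s.%N)
awk -v a="$start" -v b="$end" 'BEGIN { printf "time to first ping: %.2f s\n", b - a }'
```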
As is, I don't think you ever explained why you are trying to reduce your time to "ping", and the fact that you are as concerned about link time as you are has me wondering whether standard Ethernet is even the right medium for you, or whether you should be looking at something like 1000BASE-T1.

- Alex

_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit http://communities.intel.com/community/wired