Re: extreme network latency

2016-10-19 Thread Sepherosa Ziehau
On Thu, Oct 20, 2016 at 2:22 AM, Richard Nyberg  wrote:
>> For better performance, you can put the if_emx_load="YES" in
>> loader.conf.  However, em0 will become emx0 after loading that module,
>> so make sure that your rc.conf and pf.conf are also updated ;)
>
> It turned out hw.re.msi.enable="1" did not work so well. re0 simply

That's why I didn't enable MSI on re in the first place: some chips
work, some don't.

> stopped doing anything after a while. My current configuration uses re
> and emx, both without polling. This works great since they end up
> using different interrupts with emx instead of em.

Heh, I see; it seems to be an issue of a shared interrupt.  emx
actually uses MSI by default, so re is probably the only interrupt
producer on that pin.
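
Whether two devices were sharing a pin can usually be seen in `vmstat -i`
output.  The filter below is only a sketch; the assumed line layout
(`irqN: dev0 dev1 ... total rate`) may differ between systems:

```shell
# Hypothetical filter over `vmstat -i` output: print any irq line that
# names more than one device, i.e. devices sharing an interrupt pin.
# The column layout (irqN: devices... total rate) is an assumption.
shared_irqs() {
    awk '/^irq/ {
        devs = 0
        for (i = 2; i <= NF; i++) if ($i !~ /^[0-9]+$/) devs++
        if (devs > 1) print $1
    }'
}

# Usage: vmstat -i | shared_irqs
```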

Thanks,
sephe

-- 
Tomorrow Will Never Die


Re: extreme network latency

2016-10-19 Thread Richard Nyberg
> For better performance, you can put the if_emx_load="YES" in
> loader.conf.  However, em0 will become emx0 after loading that module,
> so make sure that your rc.conf and pf.conf are also updated ;)

It turned out hw.re.msi.enable="1" did not work so well; re0 simply
stopped doing anything after a while. My current configuration uses re
and emx, both without polling. This works great, since with emx instead
of em they end up using different interrupts.

Thanks for all your help,
-Richard


Re: extreme network latency

2016-10-18 Thread Sepherosa Ziehau
On Wed, Oct 19, 2016 at 2:19 AM, Richard Nyberg  wrote:
> Hi! Yes it's quite recent hardware.
>
> On 18 October 2016 at 04:20, Sepherosa Ziehau  wrote:
>>
>> Heh, I'd say avoid re :).
>
> I might not have made the wisest choice there. :)
>
>> Try put the following tunable:
>> hw.re.msi.enable="1"
>> to /boot/loader.conf.  And reboot.
>
> This also worked. Using this instead of polling now.
>
>> Output of pciconf -lvc would really be helpful.
>
> Output attached.

For better performance, you can put if_emx_load="YES" in
loader.conf.  However, em0 will become emx0 after loading that module,
so make sure that your rc.conf and pf.conf are also updated ;)
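
Put together, the relevant boot-time settings would look something like
the sketch below (interface names and the LAN address are taken from
this thread; the rc.conf line is an illustrative assumption):

```shell
# /boot/loader.conf (sketch)
if_emx_load="YES"        # em0 attaches as emx0 once this module loads
hw.re.msi.enable="1"     # MSI for re(4); behavior varies by chip

# After the rename, rc.conf and pf.conf must refer to emx0, e.g. in
# /etc/rc.conf:
#   ifconfig_emx0="inet 10.5.2.1 netmask 255.255.255.0"
```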

Thanks,
sephe

-- 
Tomorrow Will Never Die


Re: extreme network latency

2016-10-18 Thread Richard Nyberg
Hi! Yes it's quite recent hardware.

On 18 October 2016 at 04:20, Sepherosa Ziehau  wrote:
>
> Heh, I'd say avoid re :).

I might not have made the wisest choice there. :)

> Try put the following tunable:
> hw.re.msi.enable="1"
> to /boot/loader.conf.  And reboot.

This also worked. Using this instead of polling now.

> Output of pciconf -lvc would really be helpful.

Output attached.

-Richard
hostb0@pci0:0:0:0:  class=0x06 card=0x86941043 chip=0x191f8086 rev=0x07 hdr=0x00
vendor = 'Intel Corporation'
device = 'Skylake Host Bridge/DRAM Registers'
class  = bridge
subclass   = HOST-PCI
cap 09[e0] = vendor (length 16) Intel cap 0 version 1
pcib1@pci0:0:1:0:   class=0x060400 card=0x86941043 chip=0x19018086 rev=0x07 hdr=0x01
vendor = 'Intel Corporation'
device = 'Skylake PCIe Controller (x16)'
class  = bridge
subclass   = PCI-PCI
cap 0d[88] = PCI Bridge card=0x86941043
cap 01[80] = powerspec 3  supports D0 D3  current D0
cap 05[90] = MSI supports 1 message 
cap 10[a0] = PCI-Express 2 root port max data 128(256) link x1(x16)
ecap 0002[100] = VC 1 max VC0
ecap 0005[140] = unknown 1
ecap 0019[d94] = unknown 1
vgapci0@pci0:0:2:0: class=0x03 card=0x86941043 chip=0x19128086 rev=0x06 hdr=0x00
vendor = 'Intel Corporation'
device = 'HD Graphics 530'
class  = display
subclass   = VGA
bar   [10] = type Memory, range 64, base 0xf600, size 16777216, enabled
bar   [18] = type Prefetchable Memory, range 64, base 0xe000, size 268435456, enabled
bar   [20] = type I/O Port, range 32, base 0xf000, size 64, enabled
cap 09[40] = vendor (length 12) Intel cap 0 version 1
cap 10[70] = PCI-Express 2 root endpoint max data 128(128) link x0(x0)
cap 05[ac] = MSI supports 1 message 
cap 01[d0] = powerspec 2  supports D0 D3  current D0
ecap 001b[100] = unknown 1
ecap 000f[200] = unknown 1
ecap 0013[300] = unknown 1
xhci0@pci0:0:20:0:  class=0x0c0330 card=0x86941043 chip=0xa12f8086 rev=0x31 hdr=0x00
vendor = 'Intel Corporation'
device = 'Sunrise Point-H USB 3.0 xHCI Controller'
class  = serial bus
subclass   = USB
bar   [10] = type Memory, range 64, base 0xf723, size 65536, enabled
cap 01[70] = powerspec 2  supports D0 D3  current D0
cap 05[80] = MSI supports 8 messages, 64 bit enabled with 1 message
none0@pci0:0:22:0:  class=0x078000 card=0x86941043 chip=0xa13a8086 rev=0x31 hdr=0x00
vendor = 'Intel Corporation'
device = 'Sunrise Point-H CSME HECI'
class  = simple comms
bar   [10] = type Memory, range 64, base 0xf724d000, size 4096, enabled
cap 01[50] = powerspec 3  supports D0 D3  current D0
cap 05[8c] = MSI supports 1 message, 64 bit 
ahci0@pci0:0:23:0:  class=0x010601 card=0x86941043 chip=0xa1028086 rev=0x31 hdr=0x00
vendor = 'Intel Corporation'
device = 'Sunrise Point-H SATA controller [AHCI mode]'
class  = mass storage
subclass   = SATA
bar   [10] = type Memory, range 32, base 0xf7248000, size 8192, enabled
bar   [14] = type Memory, range 32, base 0xf724c000, size 256, enabled
bar   [18] = type I/O Port, range 32, base 0xf090, size  8, enabled
bar   [1c] = type I/O Port, range 32, base 0xf080, size  4, enabled
bar   [20] = type I/O Port, range 32, base 0xf060, size 32, enabled
bar   [24] = type Memory, range 32, base 0xf724b000, size 2048, enabled
cap 05[80] = MSI supports 1 message enabled with 1 message
cap 01[70] = powerspec 3  supports D0 D3  current D0
cap 12[a8] = SATA Index-Data Pair
pcib2@pci0:0:27:0:  class=0x060400 card=0x86941043 chip=0xa1678086 rev=0xf1 hdr=0x01
vendor = 'Intel Corporation'
device = 'Sunrise Point-H PCI Root Port'
class  = bridge
subclass   = PCI-PCI
cap 10[40] = PCI-Express 2 root port max data 128(256) link x0(x4)
cap 05[80] = MSI supports 1 message 
cap 0d[90] = PCI Bridge card=0x86941043
cap 01[a0] = powerspec 3  supports D0 D3  current D0
ecap 0001[100] = AER 1 0 fatal 0 non-fatal 0 corrected
ecap 000d[140] = unknown 1
ecap 0019[220] = unknown 1
pcib3@pci0:0:28:0:  class=0x060400 card=0x86941043 chip=0xa1108086 rev=0xf1 hdr=0x01
vendor = 'Intel Corporation'
device = 'Sunrise Point-H PCI Express Root Port'
class  = bridge
subclass   = PCI-PCI
cap 10[40] = PCI-Express 2 root port max data 128(256) link x0(x1)
cap 05[80] = MSI supports 1 message 
cap 0d[90] = PCI Bridge card=0x86941043
cap 01[a0] = powerspec 3  supports D0 D3  current D0
pcib4@pci0:0:29:0:  class=0x060400 card=0x86941043 chip=0xa1188086 rev=0xf1 hdr=0x01
vendor = 'Intel Corporation'
device = 'Sunrise Point-H PCI Express Root Port'
class  = bridge
subclass   = PCI-PCI
cap 10[40] = PCI-Express 2 root port max data 128(256) link x0(x1)
cap 05[80] = MSI supports 1 message 
cap 0d[90] 

Re: extreme network latency

2016-10-17 Thread Sepherosa Ziehau
On Tue, Oct 18, 2016 at 4:17 AM, Richard Nyberg  wrote:
> Yes, that was it. Many thanks!
>
> Should I just use polling, which works fine, or is there something one
> can do about the interrupt issue?

Heh, I'd say avoid re :).

Try putting the following tunable in /boot/loader.conf, then reboot:
hw.re.msi.enable="1"

Output of pciconf -lvc would really be helpful.

Thanks,
sephe



-- 
Tomorrow Will Never Die


Re: extreme network latency

2016-10-17 Thread Matthew Dillon
Our network dev Sephe might be able to work out why the NICs are
interfering with each other, but it depends on how old they are.  If they
are old card(s) and/or it is an old motherboard, it might not be worth
tracking down.  Polling is a perfectly acceptable solution for older
stuff.  If the NICs are relatively recent, though, we would want to try
to track down the problem.

I think it is at least worth doing a verbose boot and posting the dmesg
output from it, and also posting the output from 'pciconf -lbcv' with both
NICs present.

-Matt



Re: extreme network latency

2016-10-17 Thread Richard Nyberg
Yes, that was it. Many thanks!

Should I just use polling, which works fine, or is there something one
can do about the interrupt issue?

-Richard



Re: extreme network latency

2016-10-17 Thread Matthew Dillon
That kinda sounds like an interrupt issue, in which case I suggest turning
polling on for both interfaces.  ifconfig  polling ought to do it.
If that fixes the problem, then it is definitely interrupt-related.
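
With the interface names from this thread filled in, that would be
roughly the following (a sketch; reverting is included for completeness):

```shell
# Enable polling per interface (names taken from this thread)
ifconfig em0 polling
ifconfig re0 polling

# To revert:
#   ifconfig em0 -polling
#   ifconfig re0 -polling
```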

-Matt



Re: extreme network latency

2016-10-17 Thread Richard Nyberg
Thanks again for your suggestions.

Actually it's much stranger than I thought. While troubleshooting I
had this configuration:

df (em0) -> switch <- desktop

No other devices or network interfaces were connected. In this
configuration there was no problem at all with latency. I then plugged
in the cable with internet access like below:

internet <- (re0) df (em0) -> switch <- desktop

In this configuration the latency problems immediately showed. The fun
thing is that when I unplugged the re0 interface again the em0
interface stopped responding at all, until I put the cable back to
re0. Then em0 was back but with latency problems.

Another data point is that while I downloaded a large file at speed
from the internet via df to my desktop in the above configuration and
pinged from the desktop to df at the same time, the latency problems
were gone. Until the download was finished and they started again.

-Richard



Re: extreme network latency

2016-10-16 Thread Matthew Dillon
Look for a packet loop on the interface.  Use tcpdump on the interface to
see if there are excess packets being generated from somewhere.  There are
numerous things that can blow up a LAN.  The most common being that a
switch port is wired to loop back into the LAN.
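
A rough way to do that, assuming the interface names from this thread: a
loop typically shows the same broadcast/ARP frames repeating at a very
high rate, so a short capture is usually enough to spot it.

```shell
# Sketch: capture 200 frames on the internal interface and look for the
# same source/destination pairs repeating rapidly (a sign of a loop).
tcpdump -n -e -c 200 -i em0

# Counting frames over a fixed window also works; thousands of frames
# on an otherwise idle LAN point to a storm (note how fast it returns):
#   tcpdump -n -i em0 -c 10000 > /dev/null
```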

-Matt



Re: extreme network latency

2016-10-16 Thread Justin Sherrill
On Sun, Oct 16, 2016 at 11:49 AM, Richard Nyberg  wrote:
> Thanks!
>
> Here are some more datapoints.

I think the only constant at this point is the internal interface on
the DragonFly system.  If you hook the em0 interface that's currently
internal on the DragonFly machine up to your Internet link (i.e.
reverse which interface is internal or external), does it still
perform badly?

If it doesn't work well, then that interface is bad.  I'd be
surprised, because I've seen network ports go bad very rarely, but it's
possible.  Plus, I don't have any other ideas.


Re: extreme network latency

2016-10-16 Thread Richard Nyberg
Thanks!

Here are some more datapoints.

Yeah, the switch for the lan is a simple Netgear GS105. My previous df
gateway was connected in the same way, also with an internal em0.

Turning pf off does not help.
Switching the cable and port between df and the switch does not help.
Pinging www.google.com on the external interface from df gives a
consistent < 2 ms, so the problem is on the internal interface.
Pinging from a device on wifi (access point connected to the same
switch) to df also gives very high latency.
Pinging from the same wifi device to my desktop gives good latency.
Pinging from df to my desktop gives bad latency.



Re: extreme network latency

2016-10-16 Thread Justin Sherrill
This is a problem that's going to require more data.

- If you turn pf off, does the problem go away?
- If you ping from the DragonFly machine to your desktop, do you get
the same results?
- If you ping for an extended period (ping -t), do you get more timeouts?
- Are you directly connected to the DragonFly machine?  I assume it's
all going through a network switch.
- Can you swap out network cables?
- If you are using some sort of home switch, are any NAT capabilities
turned off?

If turning pf off does not affect it, it's not pf.
If pinging DragonFly -> desktop does not do it, it's the desktop.
If you get more timeouts, or if it goes away on a direct link, I'm
thinking it's something with the physical connection.
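
The pf part of the checklist can be exercised directly; a sketch, with a
hypothetical desktop address standing in for the real one:

```shell
pfctl -d                  # disable pf temporarily
ping -c 20 10.5.2.100     # hypothetical desktop address on the LAN
pfctl -e                  # re-enable pf when done
```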



extreme network latency

2016-10-16 Thread Richard Nyberg
Hi users!

I've just changed hardware for my gateway. It's now built on the Asus
Z170I Pro Gaming motherboard and runs df 4.6. Unfortunately it suffers
extreme latency on my lan and I don't know how to troubleshoot this.
Between other devices on the lan there's no problem. The gateway isn't
loaded in the least, since I haven't installed any services other than
dhcp yet.

### Gateway lan interface config:

shoebox# ifconfig em0
em0: flags=8843 mtu 1500
options=1b
inet 10.5.2.1 netmask 0xff00 broadcast 10.5.2.255
inet6 fe80::2e56:dcff:fe96:5961%em0 prefixlen 64 scopeid 0x2
ether 2c:56:dc:96:59:61
media: Ethernet autoselect  (1000baseT )
status: active

# ## /etc/pf.conf

ext_if="re0"
int_if="em0"

scrub in

nat on $ext_if from !($ext_if) -> ($ext_if:0)

block in
pass out keep state

pass quick on { lo0 $int_if }
antispoof quick for { lo0 $int_if }

pass on $ext_if proto { icmp }
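
(A ruleset like the above can be syntax-checked before loading, which
rules out a silently rejected rule:)

```shell
pfctl -nf /etc/pf.conf    # parse the ruleset without loading it
pfctl -sr                 # show the rules actually loaded
```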

### Pinging from my desktop:

C:>ping 10.5.2.1

Pinging 10.5.2.1 with 32 bytes of data:
Request timed out.
Reply from 10.5.2.1: bytes=32 time=66ms TTL=64
Reply from 10.5.2.1: bytes=32 time=559ms TTL=64
Reply from 10.5.2.1: bytes=32 time=1647ms TTL=64

Ping statistics for 10.5.2.1:
Packets: Sent = 4, Received = 3, Lost = 1 (25% loss),
Approximate round trip times in milli-seconds:
Minimum = 66ms, Maximum = 1647ms, Average = 757ms

###

Any ideas?

Best regards,
-Richard