Re: Summer of Code 2005: Improve Libalias

2005-09-06 Thread Mike Jakubik
On Tue, September 6, 2005 10:13 am, Paolo Pisati said: On Tue, Sep 06, 2005 at 04:06:57PM +0400, Gleb Smirnoff wrote: during your work with libalias have you found any bugs or buglets, or rough places, that should be considered for merging into the main FreeBSD CVS tree as soon as possible,

Re: Help - PPPoE Server

2006-02-14 Thread Mike Jakubik
Murugan wrote: Hi, I need sample (working) configuration files to set up a PPPoE server on FreeBSD 4.9. www.google.com

Re: Intel PRO/1000 EB, 82563EB and 82564EB.

2006-08-02 Thread Mike Jakubik
Nikolas Britton wrote: No, I have not tried this; I didn't even know Intel made FreeBSD drivers... I went looking on the site earlier but couldn't find anything. Do you know if they are any good?... I'll check it out, thanks. It is essentially the same driver that FreeBSD uses, but the one

CARP howto

2006-08-12 Thread Mike Jakubik
Does anyone know a good CARP howto for FreeBSD? I've googled around, but I can't find anything specific to FreeBSD.

Re: CARP howto

2006-08-12 Thread Mike Jakubik
Anton Yuzhaninov wrote: Saturday, August 12, 2006, 8:30:00 PM, Mike Jakubik wrote: MJ Does anyone know a good CARP howto for FreeBSD? I've googled around, but MJ I can't find anything specific to FreeBSD. You can use the CARP howto for OpenBSD. Yup, that and the FreeBSD man page gave me all
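For anyone else searching, here is a minimal two-node sketch in the carp(4) style used around FreeBSD 6.x; the VHID, password and address below are made-up placeholders, not a configuration from the thread:

  # /etc/rc.conf on the master (use a higher advskew on the backup node)
  cloned_interfaces="carp0"
  ifconfig_carp0="vhid 1 advskew 0 pass mysecret 192.168.1.50/24"

  # equivalent by hand:
  # ifconfig carp0 create
  # ifconfig carp0 vhid 1 advskew 0 pass mysecret 192.168.1.50/24

The node advertising the lowest advskew for a given VHID becomes master; both boxes need the carp device in the kernel (device carp).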

Re: Load balancing for web servers

2006-08-30 Thread Mike Jakubik
Max Laier wrote: Have a look at: http://www.countersiege.com/doc/pfsync-carp/#big for one idea. All requirements (carp, pf and pfsync) are available in FreeBSD as well. You can load balance with CARP, but AFAIK it only works on the local network segment, i.e. it won't work past a
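If the balanced hosts are not on the same segment, a pf rdr pool is the usual alternative; a minimal pf.conf sketch, with a placeholder external interface and server addresses:

  ext_if = "em0"
  web_servers = "{ 10.0.0.10, 10.0.0.11 }"
  rdr on $ext_if proto tcp from any to ($ext_if) port 80 -> $web_servers round-robin sticky-address

round-robin alternates new connections between the pool members, and sticky-address keeps a given client pinned to the same backend.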

Re: Slow FreeBSD - Windows performance with inflight enabled

2007-01-09 Thread Mike Jakubik
On Mon, January 8, 2007 2:58 pm, Steven Hartland said: I've just been looking at an issue reported by some of our users: downloads from one of our sites, run on FreeBSD 6.1 and Apache 1.3, were strangely slow. After doing some digging around I found that two remote machines on the

Byte counters reset at ~4GB

2004-03-15 Thread Mike Jakubik
Hello, It seems that the byte counters (derived from netstat -nbi) reset at around 4 GB. Is there no way around this? It would be nice to be able to see an accurate display of totals. It just seems pointless to even have this, as 4 GB is just not that much anymore. I know this is a 32bit

Re: Byte counters reset at ~4GB

2004-03-15 Thread Mike Jakubik
Brooks Davis said: Please read the archives of freebsd-net. This has been discussed many times. There are valid reasons for this, particularly the fact that 64-bit counters are much more expensive to update on 32-bit architectures. API breakage is also a problem. We're aware that 2^32 is
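In the meantime, the wrap can be handled in userland by polling faster than the counter can roll over. A minimal /bin/sh sketch, assuming a made-up interface name and that the bytes-in column is picked out correctly for your netstat version:

  #!/bin/sh
  # Accumulate a 64-bit total from a 32-bit counter that wraps at 2^32.
  # Poll often enough that the counter cannot wrap more than once per sample.
  IF=em0
  PREV=0
  TOTAL=0
  while :; do
      # Bytes in for $IF; the column number differs between FreeBSD versions.
      CUR=$(netstat -nbI "$IF" | awk 'NR == 2 { print $8 }')
      if [ "$CUR" -lt "$PREV" ]; then
          TOTAL=$((TOTAL + CUR + 4294967296 - PREV))   # wrapped since last poll
      else
          TOTAL=$((TOTAL + CUR - PREV))
      fi
      PREV=$CUR
      echo "$IF total bytes in: $TOTAL"
      sleep 60
  done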

Re: Byte counters reset at ~4GB

2004-03-15 Thread Mike Jakubik
Max Laier said: There is now: pf comes with 64-bit statistics counters. For now you can put them on one interface only, but future versions will have more flexible statistics. Additionally, there are many accounting programs out there which utilize the various existing (32-bit) counters or the
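For reference, enabling those pf statistics is a single pf.conf line; a minimal sketch:

  # /etc/pf.conf
  set loginterface em0

  # afterwards, "pfctl -s info" reports 64-bit in/out packet and byte
  # counters for em0 under "Interface Stats"

Only one interface can be the loginterface at a time, which matches the limitation mentioned above.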

Re: Byte counters reset at ~4GB

2004-03-15 Thread Mike Jakubik
Max Laier said: Sure, you measure it ;) ... no, of course it is more expensive to update a 64-bit counter on a 32-bit arch, but the key (once again) is decision: while (almost) all of the pf counters are 64-bit types, you can configure it not to use the loginterface and so on. So it's

Re: PPTP VPN using MPD behind NAT help needed

2004-07-14 Thread Mike Jakubik
Motonori Shindo said: Mike, This seems like a DSL router's problem. Because PPTP encapsulates PPP using GRE, which is neither TCP nor UDP, routers sometimes cannot NAT PPTP traffic. Some routers get around this problem by simply passing GRE packets through (and hence this feature is sometimes

Re: PPTP VPN using MPD behind NAT help needed

2004-07-14 Thread Mike Jakubik
Motonori Shindo said: This seems like a DSL router's problem. Because PPTP encapsulates PPP using GRE, which is neither TCP nor UDP, routers sometimes cannot NAT PPTP traffic. Some routers get around this problem by simply passing GRE packets through (and hence this feature is sometimes

NATD no longer works for outgoing PPTP VPN?

2004-07-20 Thread Mike Jakubik
Hello, I have recently discovered, after long periods of trying to debug a VPN server, that I cannot establish PPTP VPN connections any more. The culprit seems to be natd not forwarding GRE properly. I have tried adding a 'redirect_proto gre' option to natd, but the same behaviour occurs. I could
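For reference, the natd(8) syntax in question looks roughly like the sketch below; the internal address and interface are placeholders, not the exact setup from the thread:

  # /etc/natd.conf
  interface fxp0
  use_sockets yes
  same_ports yes
  # forward inbound GRE to the internal PPTP endpoint
  redirect_proto gre 192.168.0.10

  # matching ipfw divert rule:
  # ipfw add divert natd all from any to any via fxp0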

RE: Load Balancing

2004-12-16 Thread Mike Jakubik
Mitch (Bitblock) said: Short answer is Yes. For basic failover, I've used a script which monitors link status and function (by pinging or connecting to a remote host). Failover is accomplished by switching the default route. Using ipfw fwd statements, you can make both links function at
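A bare-bones version of such a failover script might look like the sketch below; the gateways and probe host are placeholders, and the probe is pinned to the primary link so the check still means something after a failover:

  #!/bin/sh
  # Switch the default route to the backup gateway when the primary link fails.
  PRIMARY_GW=203.0.113.1
  BACKUP_GW=198.51.100.1
  PROBE=192.0.2.1                          # a reliable remote host to ping

  # Always reach the probe via the primary link.
  route add -host "$PROBE" "$PRIMARY_GW" > /dev/null 2>&1

  while :; do
      if ping -c 3 "$PROBE" > /dev/null 2>&1; then
          route change default "$PRIMARY_GW" > /dev/null 2>&1
      else
          route change default "$BACKUP_GW" > /dev/null 2>&1
      fi
      sleep 30
  done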

Link state messages

2005-03-08 Thread Mike Jakubik
Hi, I have recently cvsuped to a new snapshot of -current, the existing system was about 1-2 months old. I am now seeing a lot of link state messages in dmesg. em0: link state changed to DOWN em0: link state changed to UP em0: link state changed to DOWN em0: link state changed to UP em0: link

Re: FreeVRRPd project status

2005-04-05 Thread Mike Jakubik
On Tue, April 5, 2005 10:23 pm, Theo Schlossnagle said: On Apr 4, 2005, at 5:05 PM, Eivind Hestnes wrote: If you are looking for an Open Source failover solution, CARP is probably the best choice as it stands today. If you need assistance with the configuration, please reply to the list, and

Re: FreeVRRPd project status

2005-04-05 Thread Mike Jakubik
On Tue, April 5, 2005 10:46 pm, Theo Schlossnagle said: It isn't unmaintained... what makes you think it is unmaintained? wackamole version 2.1.1 was released on July 28th, 2004 (08.31.2004). Compiles fine on my boxen. (4.11, 4-stable, 5.2.1, 5.3-RELEASE-p5) Really? I've never been able to

Re: FreeVRRPd project status

2005-04-07 Thread Mike Jakubik
On Thu, April 7, 2005 11:12 am, Dag-Erling Smørgrav said: You're probably using the wrong version of bison. Yes, Theo Schlossnagle already mentioned this to me. The compile process was using the port version of bison; removing it solved the problem. Thanks.

Re: SOLVED: Degraded TCP performance on Intel PRO/1000

2005-05-07 Thread Mike Jakubik
On Sat, May 7, 2005 5:35 am, Marian Durkovic said: To achieve wirespeed performance, the TX FIFO must be large enough to accommodate 2 jumbo packets (not just 1 as the driver was assuming). There was also a typo in the driver, causing the PBA tuning on most cards to be non-functional. Please

Re: ntop on FreeBSD 5.4ish and threading

2005-05-07 Thread Mike Jakubik
On Sat, May 7, 2005 11:20 am, Joao Barros said: Hi all, I recently tried ntop on FreeBSD 5.4 RC3 and RC4 and was disappointed with the problems I bumped into. I reported this to the ntop developers' mailing list and a few comments about FreeBSD threading came up. It would be interesting if

Re: SOLVED: Degraded TCP performance on Intel PRO/1000

2005-05-11 Thread Mike Jakubik
On Sat, May 7, 2005 12:37 am, Kris Kennaway said: On Fri, May 06, 2005 at 08:59:50AM +0200, Marian Durkovic wrote: Hi all, seems we've found the problem. The performance degradation was happening in the TX path, due to an insufficient setting of the TX packet buffer FIFO on the chip. To

Re: SOLVED: Degraded TCP performance on Intel PRO/1000

2005-05-11 Thread Mike Jakubik
On Wed, May 11, 2005 5:24 pm, Mike Jakubik said: Any luck submitting the patch for this? I looked at Intel's website, and the latest driver for FreeBSD 4.7 is 1.7.35, which is also what -CURRENT uses now. They also state that development is no longer taking place on this driver. For the latest

Re: SOLVED: Degraded TCP performance on Intel PRO/1000

2005-05-12 Thread Mike Jakubik
On Thu, May 12, 2005 3:27 am, Marian Durkovic said: Hi, On Wed, May 11, 2005 at 06:38:48PM -0400, Mike Jakubik wrote: Any luck submitting the patch for this? Yes, it's kern/80932. Good stuff, I'll test it when I get a chance. I looked at Intel's website, and the latest driver for FreeBSD

Outgoing speed problems in -CURRENT (was: Re: SOLVED: Degraded TCP performance on Intel PRO/1000)

2005-05-13 Thread Mike Jakubik
On Thu, May 12, 2005 3:27 am, Marian Durkovic said: Seems like I am getting half the performance when sending to the FreeBSD box. Also, enabling jumbo frames does not help, and sometimes even yields slightly slower results. Yes, that's exactly the problem my patch is addressing - for larger MTU

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-28 Thread Mike Jakubik
cific for your use case. Please post back to the list with your specific findings and NIC/TCP tunables; these are very helpful for the next person! Dave

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-13 Thread Mike Jakubik
  sec  31.1 GBytes  8.91 Gbits/sec  1330 sender [  5]   0.00-30.00  sec  31.1 GBytes  8.91 Gbits/sec  receiver Thanks. On Mon, 13 Jun 2022 14:41:05 -0400 Santiago Martinez <s...@codenetworks.net> wrote

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-16 Thread Mike Jakubik
After multiple tests and tweaks I believe the issue is not with the HW or NUMA related (Infinity Fabric should do around 32 GB/s) but rather with the FreeBSD TCP/IP stack. It's like it can't figure itself out properly for the speed the HW can do; I keep getting widely varying results when testing.
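For anyone comparing notes, the stack-side knobs usually worth checking first on a link this fast are the socket buffer ceilings and the congestion control algorithm. An illustrative sketch (values are examples for discussion, not recommendations from the thread):

  # /boot/loader.conf -- make an alternative CC module available at boot
  cc_htcp_load="YES"

  # /etc/sysctl.conf
  kern.ipc.maxsockbuf=16777216          # raise the socket buffer ceiling
  net.inet.tcp.sendbuf_max=16777216
  net.inet.tcp.recvbuf_max=16777216
  net.inet.tcp.cc.algorithm=htcp        # see: sysctl net.inet.tcp.cc.available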

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-16 Thread Mike Jakubik
  receiver iperf Done. Thank You! On Thu, 16 Jun 2022 17:00:25 -0400 Alexander V. Chernikov wrote > On 16 Jun 2022, at 21:48, Mike Jakubik > <mike.jaku...@swiftsmsgateway.com> wrote: > > After multiple tests and tweaks I believe the issue is not with

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
lem is, if it's PCI backpressure or something else.
  sysctl -a | grep diag_pci_enable
  sysctl -a | grep diag_general_enable
Set these two to 1, then run some traffic and dump all mce sysctls:
  sysctl -a | grep mce > dump.txt
--HPS

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
]   0.00-30.00  sec  29.4 GBytes  8.42 Gbits/sec  3863 sender [  5]   0.00-30.00  sec  29.4 GBytes  8.42 Gbits/sec  receiver On Tue, 14 Jun 2022 10:21:51 -0400 Mike Jakubik wrote Disabling rx/tx pause seems to produce higher peaks. [root@db-02

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
Yes, it is the default of 1500. If I set it to 9000 I get some bizarre network behavior. On Tue, 14 Jun 2022 09:45:10 -0400 Andrey V. Elsukov <bu7c...@yandex.ru> wrote Hi, Do you have the same MTU size on the Linux machine?

Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-13 Thread Mike Jakubik
.77 Gbits/sec  244 sender [  5]   0.00-30.00  sec  30.6 GBytes  8.77 Gbits/sec  receiver More data can be found @ https://forums.freebsd.org/threads/poor-performance-with-stable-13-and-mellanox-connectx-6-mlx5.85460/