Re: Tun and ALTQ

2005-11-09 Thread Marko Cuk

It seems that it works. Thanks.

Damn, for VLANs (802.1Q) you have to specify em, but for tun it's the other 
way around... what a mess, hehe.
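
For anyone hitting this later: as far as I can tell, ALTQ cannot be attached 
to the vlan pseudo-interface, so for 802.1Q you queue on the parent NIC, 
while tun is queued directly - and since tun0 reports no link speed, the 
bandwidth must be stated explicitly. A minimal pf.conf sketch (the interface 
names, the vlan arrangement, and both bandwidth figures are assumptions, not 
my real config):

# 802.1Q case (assuming vlan100 rides on em0): queue on the parent NIC
altq on em0 cbq bandwidth 100Mb queue { std_vlan }
queue std_vlan bandwidth 100Mb cbq(default)

# PPPoE case: queue on tun0 itself; 512Kb is an assumed uplink rate
altq on tun0 cbq bandwidth 512Kb queue { std_dsl }
queue std_dsl bandwidth 512Kb cbq(default)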


Cuk



Brian Fundakowski Feldman wrote:


On Tue, Nov 08, 2005 at 12:00:32AM +0100, Marko Cuk wrote:
 


Resend...

Please, does anyone have any ideas...


What is the status of the tun0 driver and ALTQ ?
I have FreeBSD 6.0-RELEASE and have tried it without success. Why 6.0? 
I don't know... curiosity, maybe... if you think that 5.4 will work better, 
I'll reinstall it.


The tun0 is there because of xDSL (PPPoE).

It seems like packets won't match the queue. Look at the pfctl output (look 
at the bucy rules -- he is a huge consumer, and with the primary uplink 
out for a week, the xDSL is the only backup and he eats all the available 
bandwidth).



THIS IFACE IS TUN0 ( pppoe )
queue root_em0 bandwidth 1Gb priority 0 cbq( wrr root ) {std_ext, bucy_out}
[ pkts:      76053  bytes:    7390221  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:  0  suspends:  0 ]
[ measured:   199.0 packets/s, 146.71Kb/s ]
   



No it isn't, it's em0.  You probably want to be using ALTQ on tun0.
I've done it; it works.

 



--
NetInet d.o.o. http://www.NetInet.si
Private: http://cuk.nu
MountainBikeSlovenia team: http://mtb.si
Slovenian FreeBSD mirror admin http://www2.si.freebsd.org





Tun and ALTQ

2005-11-07 Thread Marko Cuk

Resend...

Please, does anyone have any ideas...


What is the status of the tun0 driver and ALTQ ?
I have FreeBSD 6.0-RELEASE and have tried it without success. Why 6.0? 
I don't know... curiosity, maybe... if you think that 5.4 will work better, 
I'll reinstall it.


The tun0 is there because of xDSL (PPPoE).
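
For context, tun0 is created by userland ppp(8) running the PPPoE session 
over one of the NICs. A minimal ppp.conf sketch of that kind of setup (the 
profile name, the credentials, and em0 as the carrier NIC are assumptions, 
not my actual config):

pppoe:
 # assumed carrier NIC and credentials
 set device PPPoE:em0
 set authname myuser
 set authkey mypass
 set dial
 set login
 add default HISADDR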

It seems like packets won't match the queue. Look at the pfctl output (look 
at the bucy rules -- he is a huge consumer, and with the primary uplink 
out for a week, the xDSL is the only backup and he eats all the available 
bandwidth).



THIS IFACE IS TUN0 ( pppoe )
queue root_em0 bandwidth 1Gb priority 0 cbq( wrr root ) {std_ext, bucy_out}
[ pkts:      76053  bytes:    7390221  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:      0  suspends:      0 ]
[ measured:   199.0 packets/s, 146.71Kb/s ]
queue  std_ext bandwidth 384Kb cbq( default )
[ pkts:      76053  bytes:    7390221  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:      0  suspends:     59 ]
[ measured:   199.0 packets/s, 146.71Kb/s ]

THIS ONE IS PROBLEMATIC - Won't match
queue  bucy_out bandwidth 128Kb
[ pkts:          0  bytes:          0  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:      0  suspends:      0 ]
[ measured:     0.0 packets/s, 0 b/s ]
queue root_em1 bandwidth 1Gb priority 0 cbq( wrr root ) {std_int, bucy_in}
[ pkts:      91920  bytes:  100394990  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:      0  suspends:      0 ]
[ measured:   260.4 packets/s, 2.37Mb/s ]
queue  std_int bandwidth 2Mb cbq( default )
[ pkts:      50302  bytes:   58076735  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:      0  suspends:   2359 ]
[ measured:   194.6 packets/s, 1.89Mb/s ]
queue  bucy_in bandwidth 900Kb
[ pkts:      41618  bytes:   42318255  dropped pkts:    446  bytes: 433317 ]
[ qlength:   0/ 50  borrows:      0  suspends:   7440 ]
[ measured:    65.8 packets/s, 475.89Kb/s ]


queue root_dc0 bandwidth 10Mb priority 0 cbq( wrr root ) {std_int_wifi_in}
[ pkts:       3967  bytes:    1730908  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:      0  suspends:      0 ]
[ measured:     2.6 packets/s, 4.17Kb/s ]
queue  std_int_wifi_in bandwidth 5Mb cbq( default )
[ pkts:       3967  bytes:    1730908  dropped pkts:      0  bytes:      0 ]
[ qlength:   0/ 50  borrows:      0  suspends:      0 ]
[ measured:     2.6 packets/s, 4.17Kb/s ]


These are the rules:

##################################################
# QUEUEING: rule-based bandwidth control.
##################################################

# THIS IS OUR OUTGOING TRAFFIC - UPLOAD
altq on em0 cbq bandwidth 100% queue { std_ext,bucy_out }
queue std_ext   bandwidth 384Kb cbq(default)
queue bucy_out  bandwidth 128Kb


##################################################
# THIS IS OUR INCOMING TRAFFIC - DOWNLOAD
altq on em1 cbq bandwidth 100% queue { std_int,bucy_in }
queue std_int   bandwidth 2Mb cbq(default)
queue bucy_in   bandwidth 900Kb



# QUEUE rules
pass in log on em1 from 10.0.100.0/24 to any queue bucy_out
pass out log on em1 from any to 10.0.100.0/24 queue bucy_in
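
As Brian points out in the reply above, the fix is probably to hang the 
upload queues off tun0, where the packets actually leave the box. A sketch 
of the corrected upload section (512Kb is an assumed uplink rate, picked so 
that the two child queues, 384Kb + 128Kb, sum to it; tun0 cannot report a 
link speed, so it has to be given explicitly):

# ALTQ on the PPPoE interface itself instead of em0
altq on tun0 cbq bandwidth 512Kb queue { std_ext, bucy_out }
queue std_ext   bandwidth 384Kb cbq(default)
queue bucy_out  bandwidth 128Kb

# the pass rule can stay as it is: the queue tag assigned on the way
# in through em1 takes effect when the packet leaves through tun0
pass in log on em1 from 10.0.100.0/24 to any queue bucy_out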



Many thanks for any information. I have changed various Ethernet cards, 
from dc cards to em gigabit cards, etc., without success. I know that 
there have been some issues with tun0 on OpenBSD, but that was a while 
ago.



Cuk


--
NetInet d.o.o. http://www.NetInet.si
Private: http://cuk.nu
MountainBikeSlovenia team: http://mtb.si
Slovenian FreeBSD mirror admin http://www2.si.freebsd.org





[Fwd: Tun]

2005-11-05 Thread Marko Cuk

Max obviously didn't have time to answer...

Any ideas ?

Tnx, Marko Cuk

--
NetInet d.o.o. http://www.NetInet.si
Private: http://cuk.nu
MountainBikeSlovenia team: http://mtb.si
Slovenian FreeBSD mirror admin http://www2.si.freebsd.org



---BeginMessage---

Hello Max !

Please, do you have any ideas...


What is the status of the tun0 driver and ALTQ ?
I have FreeBSD 6.0-RELEASE and have tried it without success. Why 6.0? 
I don't know... curiosity, maybe... if you think that 5.4 will work better, 
I'll reinstall it.


The tun0 is there because of xDSL (PPPoE).

It seems like packets won't match the queue. Look at the pfctl output (look 
at the bucy rules -- he is a huge consumer, and with the primary uplink 
out for a week, the xDSL is the only backup and he eats all the available 
bandwidth).



THIS IFACE IS TUN0 ( pppoe )
queue root_em0 bandwidth 1Gb priority 0 cbq( wrr root ) {std_ext, bucy_out}
 [ pkts:      76053  bytes:    7390221  dropped pkts:      0  bytes:      0 ]
 [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
 [ measured:   199.0 packets/s, 146.71Kb/s ]
queue  std_ext bandwidth 384Kb cbq( default )
 [ pkts:      76053  bytes:    7390221  dropped pkts:      0  bytes:      0 ]
 [ qlength:   0/ 50  borrows:      0  suspends:     59 ]
 [ measured:   199.0 packets/s, 146.71Kb/s ]

THIS ONE IS PROBLEMATIC - Won't match
queue  bucy_out bandwidth 128Kb
 [ pkts:          0  bytes:          0  dropped pkts:      0  bytes:      0 ]
 [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
 [ measured:     0.0 packets/s, 0 b/s ]
queue root_em1 bandwidth 1Gb priority 0 cbq( wrr root ) {std_int, bucy_in}
 [ pkts:      91920  bytes:  100394990  dropped pkts:      0  bytes:      0 ]
 [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
 [ measured:   260.4 packets/s, 2.37Mb/s ]
queue  std_int bandwidth 2Mb cbq( default )
 [ pkts:      50302  bytes:   58076735  dropped pkts:      0  bytes:      0 ]
 [ qlength:   0/ 50  borrows:      0  suspends:   2359 ]
 [ measured:   194.6 packets/s, 1.89Mb/s ]
queue  bucy_in bandwidth 900Kb
 [ pkts:      41618  bytes:   42318255  dropped pkts:    446  bytes: 433317 ]
 [ qlength:   0/ 50  borrows:      0  suspends:   7440 ]
 [ measured:    65.8 packets/s, 475.89Kb/s ]


queue root_dc0 bandwidth 10Mb priority 0 cbq( wrr root ) {std_int_wifi_in}
 [ pkts:       3967  bytes:    1730908  dropped pkts:      0  bytes:      0 ]
 [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
 [ measured:     2.6 packets/s, 4.17Kb/s ]
queue  std_int_wifi_in bandwidth 5Mb cbq( default )
 [ pkts:       3967  bytes:    1730908  dropped pkts:      0  bytes:      0 ]
 [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
 [ measured:     2.6 packets/s, 4.17Kb/s ]


These are the rules:

##################################################
# QUEUEING: rule-based bandwidth control.
##################################################
# THIS IS OUR OUTGOING TRAFFIC - UPLOAD
altq on em0 cbq bandwidth 100% queue { std_ext,bucy_out }
queue std_ext   bandwidth 384Kb cbq(default)
queue bucy_out  bandwidth 128Kb


##################################################
# THIS IS OUR INCOMING TRAFFIC - DOWNLOAD
altq on em1 cbq bandwidth 100% queue { std_int,bucy_in }
queue std_int   bandwidth 2Mb cbq(default)
queue bucy_in   bandwidth 900Kb



# QUEUE rules
pass in log on em1 from 10.0.100.0/24 to any queue bucy_out
pass out log on em1 from any to 10.0.100.0/24 queue bucy_in



Many thanks for any information. I have changed various Ethernet cards, 
from dc cards to em gigabit cards, etc., without success. I know that 
there have been some issues with tun0 on OpenBSD, but that was a while 
ago.


Bye, Marko

P.S. I have a very large system, dual-homed, all based on pf, and there is 
one more issue too, but we'll discuss it later. I have managed it with ipf 
(only that source routing issue).

Anyway, your work is very good and thanks for porting pf to FreeBSD.

--
NetInet d.o.o. http://www.NetInet.si
Private: http://cuk.nu
MountainBikeSlovenia team: http://mtb.si
Slovenian FreeBSD mirror admin http://www2.si.freebsd.org




---End Message---

Re: NFS on 5.4

2005-06-13 Thread Marko Cuk

Aha, ok, thanks...

(31 Oct 2004) The results of netstat -m can become incorrect on SMP 
systems when debug.mpsafenet is set to 1 (default). This is an error in 
the statistics gathering because of a race condition in the counters, 
not an actual memory leak.
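
For what it's worth, the huge figure in the output quoted below (4294962365 
mbufs) is exactly what a counter that has raced a few units below zero looks 
like when printed as an unsigned 32-bit number: 2^32 - 4931 = 4294962365. A 
quick illustrative C check (hypothetical, not from the errata):

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    /* the race can push the in-use counter slightly below zero;
       printed as an unsigned 32-bit number it looks enormous */
    int32_t mbufs_in_use = -4931;   /* assumed underflow amount */
    /* prints 4294962365, the figure from the netstat -m output */
    printf("%" PRIu32 "\n", (uint32_t)mbufs_in_use);
    return 0;
}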


I'll put the kernel into debug mode and see, as Max suggested.

Tnx



Kris Kennaway wrote:


On Mon, Jun 13, 2005 at 10:29:40AM +0200, Marko Čuk wrote:
 


Hello !

Does anyone know what's going on with NFS on 5.4?

I have the same hardware as before, when 5.3 was installed, except the 
installation is a fresh 5.4. I have NFS mounted, some FTP app is using 
it extensively, and the server crashes once a day. I have noticed:



After 8 hours of uptime...

su-2.05b# netstat -m
739 mbufs in use
736/32768 mbuf clusters in use (current/max)
0/3/4464 sfbufs in use (current/peak/max)
1656 KBytes allocated to network
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
749 calls to protocol drain routines

suddenly after 5 minutes...

4294962365 mbufs in use
359/25600 mbuf clusters in use (current/max)
10/37/6656 sfbufs in use (current/peak/max)
4193789 KBytes allocated to network
0 requests for sfbufs denied
0 requests for sfbufs delayed
1453 requests for I/O initiated by sendfile
0 calls to protocol drain routines
   



See the release errata; this is not an indication of a problem.

Kris
 



--
Please send all support related questions to [EMAIL PROTECTED]

Work @ http://www.xenya.si
Private: http://cuk.nu
Sports: http://www.cuk.nu
Slovenian FreeBSD mirror admin http://www2.si.freebsd.org
Slovenian OpenBSD mirror ftp://openbsd.cuk.nu/pub/OpenBSD





NATd high internal load - help

2001-04-10 Thread Marko Cuk

Hello !!

I have problems with high load on a FreeBSD box. First I had 4.2-STABLE,
then I cvsuped to 4.3-RC. Same thing.

When high traffic occurs on the 100 Mbit hub to which fxp0 is connected, the
load and CPU usage of the natd process get very high, and after a while it
won't pass packets to the outside world anymore.

The configuration:
-Celeron 400 and BX motherboard, 512Mb RAM PC100.
-fxp Intel 10/100 cards

fxp0 - connected to the hub; the interface has a routable IP because there
is a network behind it
fxp1 - connected to outside world

Routing is working, everything was ok.

Then I wanted to set an alias on fxp0 with an unregistered 192.168.x.x
range, get natd working to hide most of the client computers, and leave the
routable IPs for the servers, etc.

It's very strange. I have NATd at home and it is working without any
problems.

The MAXUSERS variable is set to 196 and netstat -m:
261/944/14336 mbufs in use (current/peak/max):
236 mbufs allocated to data
25 mbufs allocated to packet headers
206/438/3584 mbuf clusters in use (current/peak/max)
1112 Kbytes allocated to network (10% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

Top:

last pid:  9475;  load averages:  0.97,  0.51,  0.35    up 8+09:49:40  11:47:42
48 processes:  4 running, 44 sleeping
CPU states: 14.0% user,  0.0% nice, 71.3% system,  9.3% interrupt,  5.4% idle
Mem: 170M Active, 246M Inact, 58M Wired, 20M Cache, 61M Buf, 6752K Free
Swap: 2000M Total, 48K Used, 2000M Free

  PID USERNAME PRI NICE  SIZERES STATETIME   WCPUCPU COMMAND
 9475 root  36   0   448K   228K RUN  0:12 28.87% 24.56% ping

/etc/rc.conf
firewall_enable="YES"
firewall_type="/etc/rc.pcx"
firewall_script="/etc/rc.firewall"
firewall_quiet="YES"
natd_program="/sbin/natd"
natd_enable="YES"   
natd_flags="-f /etc/natd.conf"

/etc/natd.conf
interface fxp1    (if I put an IP here, it's the same problem)
#log yes
log_denied yes
unregistered_only yes
#use_sockets yes
#same_ports yes
#dynamic yes
I also commented those things out, as Blaz Zupan told me.

ipfw natd rule:
add 80 divert natd ip from any to any via fxp1

Now I don't have any idea what to do. Did I miss something? What did
I do wrong?

I also have 84 ipfw firewall rules (most of them reset and deny, plus
a few dummynet pipes).
Is the processor too slow for that?
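
One thing that might take load off natd: with "ip from any to any", every
packet crossing fxp1 is copied to userland and back even when it needs no
translation. Narrowing the divert rule so that only translatable traffic is
diverted is worth a try - a sketch, assuming the clients sit in
192.168.0.0/16 and with $public_ip standing in for fxp1's real address
(both are assumptions):

# divert only outbound packets from the unregistered range...
add 80 divert natd ip from 192.168.0.0/16 to any out via fxp1
# ...and only inbound packets addressed to the aliased public IP
add 81 divert natd ip from any to $public_ip in via fxp1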

Cuk





Re: rc.firewall script natd on 4.1.1

2000-11-08 Thread Marko Cuk



"Crist J . Clark" wrote:



 Details please? The natd(8) process is not actually started from
 rc.firewall, but from rc.network. In which of these is the supposed
 breakage? Or do you mean the divert(4) rule in rc.firewall is not
 being added properly?

That's right !

 Could you post the actual problem and the
 versions of the file(s) you are using?
 --
 Crist J. Clark   [EMAIL PROTECTED]

Bad file:


# Setup system for firewall service.
# $FreeBSD: src/etc/rc.firewall,v 1.30.2.6 2000/09/21 07:44:53 ru Exp $

And this one works and adds the natd divert rule to ipfw correctly:

# Setup system for firewall service.
# $FreeBSD: src/etc/rc.firewall,v 1.30.2.4 2000/05/28 19:17:15 asmodai Exp $
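
Until the regression is tracked down, one workaround might be to add the
missing rule by hand, mirroring what the older revision generated (a
sketch - the rule number 50 is what I recall the old script using, and
${natd_interface} is whatever rc.conf sets; neither is verified against
revision 1.30.2.6):

ipfw add 50 divert natd all from any to any via ${natd_interface}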


Cuk







Re: Stability

2000-11-08 Thread Marko Cuk



Roman Shterenzon wrote:


 I have a perfectly good PR about vinum (panics) open. There's not even a
 single follow-up (kern/22103).
 I couldn't stand it any longer, so I'm not able to recreate it since I'm
 using RAID1 on those disks now. I waited for almost a month, but
 apparently nothing was done. (I opened one PR before that, but it was badly
 formatted and had to be closed.)

 I understand that people have other things to do, and FreeBSD is a volunteer
 project, but we should face the truth - the man page for vinum should state
 that RAID5 is experimental and prone to crashes. It should be emphasized
 that it shouldn't be used in sensitive environments.
 I know other people for whom it rendered their servers unusable.

 I managed 8( to crash it today as well. I'll probably move to a hardware
 RAID solution instead; I'm quite fed up with vinum.


Wich good hardware solution is supported under FreeBSD ?

 I have a crash dump from today; perhaps I'll open another PR.


I had a crash an hour ago, and I was only cvsupping ports. Yesterday I was
changing the speed of an ipfw pipe and it crashed after a second. I cannot
imagine such things.

I can't have 20 days of uptime because of so many crashes.

Yes, I know a solution for a stable FreeBSD: leave it completely alone and
don't do anything.

It's no good. FreeBSD is at 4.1.

Cuk




