Re: : : rouge IPs / user

2007-12-13 Thread Otto Moerbeek
On Thu, Dec 13, 2007 at 02:25:21AM -0500, Daniel Ouellet wrote:

 Otto Moerbeek wrote:
 I'm only talking about the tear-down. The three-way handshake happens
 before that; both sockets are in the ESTABLISHED state. You have to read
 half-close as a verb (action), and half-open as a description of the
 state of the connection. Check the state transition diagram, and maybe do
 some reading in Stevens's TCP/IP Illustrated Volume I (which has the
 state diagram on the inside cover ;-)

 I did a lot of reading, and it looks like I have cleared up my
 misunderstanding and see where I confused myself:

 1) A socket gets into CLOSE_WAIT when the remote side sends a FIN and the 
 local side sends an ACK, but has not yet sent its FIN.

 2) A socket gets into FIN_WAIT_2 when the local side closes the socket and 
 sends a FIN (FIN_WAIT_1) and then receives an ACK for that FIN but has not 
 yet received a FIN from the remote side.

 I got confused because I never see any CLOSE_WAIT on my server (no
 complaint, for sure), but I do see FIN_WAIT_2 and obviously confused the two.

 I am puzzled as to why they would get so many CLOSE_WAITs. I guess it
 might be an underpowered server: maybe when the httpd child process is
 done, the master or whatnot doesn't have time to process it and as
 such keeps the socket open. Not sure, but I guess it's possible.

 In any case, I appreciate the reply, and it sure cleared up that part of 
 the handshake anyway, so I will not make the same mistakes again. (;

 Best,

 Daniel

It's easy to force a socket into CLOSE_WAIT:

On server:

$ nc -l <port>

On client:

$ nc <server> <port>

That gets you ESTABLISHED on both sides.

^Z the server side

close the client side by typing ^D

The client moves to FIN_WAIT_2, and the server to CLOSE_WAIT.

fg the server process

The client moves to TIME_WAIT and the server socket will disappear.

It depends on the application whether sockets staying in CLOSE_WAIT are
a problem or not: it might be intentional (in the hulp duplex case),
or it might be a program forgetting to do a close.
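The nc walkthrough above can also be reproduced programmatically. Below is a minimal Python sketch (my illustration, not part of the thread; the function name is invented) exercising the same half-close with plain sockets; recv() returning empty is what the server sees while its end sits in CLOSE_WAIT:

```python
import socket

def half_close_demo():
    # Listener stands in for "nc -l"; cli is the nc client.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()           # both ends now ESTABLISHED

    cli.shutdown(socket.SHUT_WR)     # the ^D step: client sends FIN;
                                     # server side enters CLOSE_WAIT,
                                     # client side enters FIN_WAIT_2
    eof = conn.recv(512)             # server sees EOF: recv() returns b""
    conn.close()                     # server's FIN; client goes TIME_WAIT
    cli.close()
    srv.close()
    return eof
```

Pausing between shutdown() and close() (as ^Z does for nc) and running netstat -an should show CLOSE_WAIT on the accepted socket and FIN_WAIT_2 on the client socket.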

-Otto

BTW: it's volume II that has the state diagram on the inside cover.
Volume I has it in chapter 18, after the discussion of half-close.



Re: : : rouge IPs / user

2007-12-13 Thread Stuart Henderson
On 2007/12/13 09:09, Otto Moerbeek wrote:
 It's depening on the application if sockets staying in CLOSE_WAIT are
 a problem or not: it might be intentional (in the hulp duplex case),
 or it might be a program forgetting to do a close.

Does select() notify the application of FIN from the other side?

If not, that would explain things; it wouldn't be reasonable for
httpd to manually try to receive from all sockets in keepalive
to see whether it needs to close the socket, since it will only
wait KeepAliveTimeout (default 15s) before closing them anyway.



Re: : : rouge IPs / user

2007-12-13 Thread Hannah Schroeter
Hi!

On Thu, Dec 13, 2007 at 11:10:51AM +, Stuart Henderson wrote:
On 2007/12/13 09:09, Otto Moerbeek wrote:
 It's depening on the application if sockets staying in CLOSE_WAIT are
 a problem or not: it might be intentional (in the hulp duplex case),
 or it might be a program forgetting to do a close.

Does select() notify the application of FIN from the other side?

I haven't followed the rest of the thread, so answering this question
out of context: yes. A FIN from the other side says that there'll be no
further data from the other side, i.e. it leads to an EOF condition on
the data stream. So after the application has read all the remaining
data from the socket, the socket will be readable (in the sense of the
read bit from select(), or POLLIN from poll(), or EVFILT_READ for
kqueue/kevent), and if you read, it'll return 0, just the same as if you
had EOF on a file.
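This behavior can be checked directly: before the peer's FIN the socket is not readable; after the FIN, select() reports it readable and a read returns 0 (b"" in Python). A small sketch under those assumptions (my own illustration, not from the thread):

```python
import select
import socket

def fin_is_readable():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()

    before, _, _ = select.select([conn], [], [], 0.2)  # no data, no FIN yet
    cli.shutdown(socket.SHUT_WR)                       # peer sends FIN
    after, _, _ = select.select([conn], [], [], 2.0)   # FIN -> readable
    data = conn.recv(512)                              # EOF: returns b""
    for s in (conn, cli, srv):
        s.close()
    return len(before), len(after), data
```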

[...]

Kind regards,

Hannah.



Re: : : rouge IPs / user

2007-12-13 Thread Otto Moerbeek
On Thu, Dec 13, 2007 at 11:10:51AM +, Stuart Henderson wrote:

 On 2007/12/13 09:09, Otto Moerbeek wrote:
  It's depening on the application if sockets staying in CLOSE_WAIT are
  a problem or not: it might be intentional (in the hulp duplex case),

Strange typo by me... that's a Dutch word, but not very relevant in
this context. It should be half-duplex of course. Stupid autonomous
fingers ;-)

  or it might be a program forgetting to do a close.
 
 Does select() notify the application of FIN from the other side?
 
 If not, that would explain things, it wouldn't be reasonable for
 httpd to manually try and receive from all sockets in keepalive
 to see whether it needs to close the socket, since it will only
 wait KeepAliveTimeout (default 15s) before it closes them anyway.

Nice suggestion, but if you've marked the fd for read I would expect
select to notify if the other side does a shutdown(SHUT_WR).

Other scenarios are also conceivable: like the server socket being
blocked because of outgoing data that cannot be written out. That
might prevent the server from doing a close too. But in the end the
close will happen; otherwise you would run out of fds very soon.
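The half-duplex case mentioned above is easy to sketch: the client half-closes after sending its request, and the server can still deliver the reply while its own end sits in CLOSE_WAIT. A hedged Python illustration (invented names, not httpd code):

```python
import socket

def reply_after_half_close():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()

    cli.sendall(b"request")            # stand-in for an HTTP request
    cli.shutdown(socket.SHUT_WR)       # half-close: server goes CLOSE_WAIT

    request = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:                  # EOF after the client's FIN
            break
        request += chunk

    conn.sendall(b"reply")             # server may still write: half duplex
    conn.close()
    reply = cli.recv(4096)             # client can still read the reply
    cli.close()
    srv.close()
    return request, reply
```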

-Otto



Re: : : : rouge IPs / user

2007-12-13 Thread Raimo Niskanen
On Thu, Dec 13, 2007 at 12:35:03PM +0100, Otto Moerbeek wrote:
 On Thu, Dec 13, 2007 at 11:10:51AM +, Stuart Henderson wrote:
 
  On 2007/12/13 09:09, Otto Moerbeek wrote:
   It's depening on the application if sockets staying in CLOSE_WAIT are
   a problem or not: it might be intentional (in the hulp duplex case),
 
 Strange typo by me... that's a Dutch word, but not very relevant in
 this context. It should be half-duplex of course. Stupid autonomous
 fingers ;-)
 
   or it might be a program forgetting to do a close.
  
  Does select() notify the application of FIN from the other side?
  
  If not, that would explain things, it wouldn't be reasonable for
  httpd to manually try and receive from all sockets in keepalive
  to see whether it needs to close the socket, since it will only
  wait KeepAliveTimeout (default 15s) before it closes them anyway.
 
 Nice suggestion, but if you've marked the fd for read I would expect
 select to notify if the other side does a shutdown(SHUT_WR).
 
 Other scenarios are also thinkable: like the server socket being
 blocked because of outgoing data that cannot be written out. That
 might prevent the server from doing a close too. But in the end the
 close will happen, otherwise you would run out of fd's very soon.
 

I have been doing a little ktrace'ing on the httpd worker process(es)
that get stuck with a socket in CLOSE_WAIT.

It appears that when it finally (after e.g. 8.5 minutes) continues, it does:
 28749 httpd 1197541469.470707 EMUL  native
 28749 httpd 53.276754 RET   write -1 errno 32 Broken pipe
 28749 httpd 0.31 CALL  close(0x4)
 28749 httpd 0.05 RET   close 0
 28749 httpd 0.12 CALL  gettimeofday(0xcfbd12c8,0)
 28749 httpd 0.02 RET   gettimeofday 0
 28749 httpd 0.02 CALL  gettimeofday(0xcfbd12c8,0)
 28749 httpd 0.01 RET   gettimeofday 0
 28749 httpd 0.02 CALL  stat(0x2d5a3945,0xcfbd1370)
 28749 httpd 0.02 NAMI  /etc/resolv.conf
 28749 httpd 0.09 RET   stat 0
 28749 httpd 0.12 CALL  open(0x2d5a1ffb,0,0x1b6)
 28749 httpd 0.04 NAMI  /etc/hosts
 28749 httpd 0.05 RET   open 4
 28749 httpd 0.03 CALL  fstat(0x4,0xcfbd16d0)
 28749 httpd 0.02 RET   fstat 0
 28749 httpd 0.03 CALL  mprotect(0x8111e000,0x1000,0x3)
 28749 httpd 0.05 RET   mprotect 0
 28749 httpd 0.06 CALL  mprotect(0x8111e000,0x1000,0x1)
 28749 httpd 0.03 RET   mprotect 0
 28749 httpd 0.02 CALL  read(0x4,0x8103a000,0x4000)
 28749 httpd 0.04 GIO   fd 4 read 725 bytes
That must be the start of looking up my own hostname; it proceeds with
looking up the remote host over DNS and then writes a log entry:
 28749 httpd 0.19 CALL  write(0x16,0x7cc0b4ac,0x7c)
 28749 httpd 0.08 GIO   fd 22 wrote 124 bytes
:
 28749 httpd 0.02 RET   write 124/0x7c
 28749 httpd 0.11 CALL  close(0x3)
 28749 httpd 0.16 RET   close 0
 28749 httpd 0.03 CALL  sigaction(0x1e,0xcfbd1e00,0xcfbd1df0)
 28749 httpd 0.03 RET   sigaction 0
 28749 httpd 0.04 CALL  accept(0x12,0xcfbd1e50,0xcfbd1e3c)
 28749 httpd 2.751318 RET   accept 3
There comes the next connection. Note the close(0x3) call;
the lsof output said the HTTP connection socket had fd 3.

To compare with the end of a later successful request/reply
in the same log:
 28749 httpd 0.04 CALL  write(0x3,0x85b7200c,0xb57)
 28749 httpd 0.42 GIO   fd 3 wrote 2903 bytes
:
 28749 httpd 0.10 RET   write 2903/0xb57
 28749 httpd 0.06 CALL  gettimeofday(0xcfbd12c8,0)
 28749 httpd 0.03 RET   gettimeofday 0
 28749 httpd 0.02 CALL  gettimeofday(0xcfbd12c8,0)
 28749 httpd 0.02 RET   gettimeofday 0
 28749 httpd 0.01 CALL  stat(0x2d5a3945,0xcfbd1370)
 28749 httpd 0.02 NAMI  /etc/resolv.conf
 28749 httpd 0.09 RET   stat 0
and then the lookup of my name and the other name, and the log write:
 28749 httpd 0.21 CALL  write(0x16,0x7cc0b2ac,0x97)
 28749 httpd 0.08 GIO   fd 22 wrote 151 bytes
:
 28749 httpd 0.02 RET   write 151/0x97
 28749 httpd 0.11 CALL  shutdown(0x3,0x1)
 28749 httpd 0.17 RET   shutdown 0
 28749 httpd 0.03 CALL  select(0x4,0xcfbd1b70,0,0,0xcfbd1b68)
 28749 httpd 0.115966 RET   select 1
 28749 httpd 0.09 CALL  read(0x3,0xcfbd1bf0,0x200)
 28749 httpd 0.04 RET   read 0
 28749 httpd 0.04 CALL  close(0x3)
 28749 httpd 0.04 RET   close 0
 28749 httpd 0.05 CALL  sigaction(0x1e,0xcfbd1e00,0xcfbd1df0)
 28749 httpd 0.02 RET   sigaction 0
 28749 httpd 0.02 CALL  munmap(0x8b60d000,0xa2b)
 28749 httpd 0.18 RET   munmap 0
 28749 httpd 0.16 CALL  accept(0x12,0xcfbd1e50,0xcfbd1e3c)

Note that close(0x3) follows select() returning 1 and read() returning 0.

In the failure-case log, close(0x3) is called without select()
and read(), probably because fd 3 is already known to be
dead due to the failing write().
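The success-case tail above (shutdown, select, read returning 0, close) is the classic lingering-close pattern. A rough Python rendering of that sequence (an illustration of what the ktrace shows, not httpd's actual code; names and the timeout are mine):

```python
import select
import socket

def lingering_close(sock, timeout=2.0):
    # Mirror of the traced sequence: shutdown(fd, SHUT_WR), then
    # select()/read() until EOF (read returns 0), then close().
    sock.shutdown(socket.SHUT_WR)              # send our FIN first
    while True:
        readable, _, _ = select.select([sock], [], [], timeout)
        if not readable:
            break                              # peer never answered; give up
        if sock.recv(512) == b"":
            break                              # peer's FIN seen; safe to close
    sock.close()
```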

So, the conclusion closest at hand is that it is a 

Re: : : : rouge IPs / user

2007-12-13 Thread Raimo Niskanen
On Thu, Dec 13, 2007 at 12:35:03PM +0100, Otto Moerbeek wrote:
:
   or it might be a program forgetting to do a close.
  
  Does select() notify the application of FIN from the other side?
  
  If not, that would explain things, it wouldn't be reasonable for
  httpd to manually try and receive from all sockets in keepalive
  to see whether it needs to close the socket, since it will only
  wait KeepAliveTimeout (default 15s) before it closes them anyway.
 
 Nice suggestion, but if you've marked the fd for read I would expect
 select to notify if the other side does a shutdown(SHUT_WR).
 
 Other scenarios are also thinkable: like the server socket being
 blocked because of outgoing data that cannot be written out. That
 might prevent the server from doing a close too. But in the end the
 close will happen, otherwise you would run out of fd's very soon.
 
   -Otto

The behaviour is starting to make sense now. Scenario:

* The client connects to the server, sends its request and
  then closes the socket, that is shutdown() aka half-close.
  It can still read the reply.
* The server accepts the connection, reads the request,
  and may or may not notice that the client has done
  a shutdown() - it is not important. Nevertheless the
  server can not close the socket since it has a
  reply to deliver. And the server host TCP stack
  has noticed the shutdown() so the socket already
  enters CLOSE_WAIT.
* The server starts sending the reply, which may be large,
  e.g. a file download. In the middle of this transfer
  the client's ethernet cable gets unplugged, the
  client host gets powered off, a firewall in the
  path goes bananas, or whatnot.
* The server is now stuck in a write() call since the
  server host TCP stack has to wait quite a while
  to be sure the connection is really dead.
  And the state is still CLOSE_WAIT.

If the client program died, the client host's TCP
stack would close the socket and tell the server host's
TCP stack, which would fail the hanging write() call.
So there must be a harder error, such as a network
outage or power outage, to induce this problem.

If this scenario is correct, there is nothing to do
about it except decreasing the likelihood of the
server socket being half-closed while sending
the reply; having KeepAliveTimeout in
httpd.conf at its default (15) or slightly lower
seems to do the trick, but I do not know how.

If there is some quirk in httpd's implementation
of the KeepAliveTimeout that makes it not notice
the half-close and keep the socket open for the whole
KeepAliveTimeout, that would explain it.

-- 

/ Raimo Niskanen, Erlang/OTP, Ericsson AB



Re: : rouge IPs / user

2007-12-12 Thread knitti
I have to correct myself a bit: the socket is in CLOSE_WAIT after
receiving the client's FIN (and acknowledging it). The server hasn't
yet sent its FIN, so the connection is properly half-closed; the server
_could_ send some data down the line, as its part of the connection
is still up. Translation: the server didn't close its socket for some
reason or non-reason.

To find that out I'll have to read some code, which may or may not
turn up something (interesting for me).

--knitti



Re: : : rouge IPs / user

2007-12-12 Thread Raimo Niskanen
On Wed, Dec 12, 2007 at 10:11:05AM +0100, knitti wrote:
 I have to correct myself a bit: the socket is in CLOSE_WAIT after
 receiving the clients FIN (and acknowledging it). The server hasn't
 yet sent its FIN, so the connection is properly half closed, the server
 _could_ send some data down the line as its part of the connection
 is still up. Translation: the server didn't close its socket for some
 reason or non-reason.
 
 For that to find out I'll have to read some code, which may or may not
 turn up something (interesting for me).
 
 --knitti

Interesting for me too, and most probably for others. It became an
interesting discussion of my CLOSE_WAIT problem after all...

To summarize (as I see it):

* pf synproxy state does not affect these CLOSE_WAIT sockets, since
  the SYN proxy is only active during connection establishment.
  But it is good to use anyway, since it prevents IP spoofing.
* Reducing httpd.conf:KeepAliveTimeout decreases the number of
  sockets in CLOSE_WAIT. I had it at 150 seconds (my mistake,
  probably the origin of the problem). The default is 15 seconds.
  My setting is now 10 seconds; problem probably solved.
  Thanks to all who contributed to the solution!
* A httpd server socket enters CLOSE_WAIT when the client
  closes (or half-closes) its end and sends FIN to the
  server TCP stack that replies ACK and enters CLOSE_WAIT.
  The socket proceeds out of CLOSE_WAIT when httpd calls
  close() on the socket.

So, the remaining question is why httpd does not close the socket.
Even though KeepAlive is in effect, since the client has closed its
end no more requests can come on it, and the server
should be able to notice that the client has closed its
socket end, either by recv() returning 0 or from a poll()
return value. The server also should be able to know whether
it has more data to send to complete the reply.
I see no reason to hold the socket in CLOSE_WAIT for the whole
KeepAliveTimeout, and am interested to learn why.
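The check described here, noticing the client's close via recv() returning 0 or a poll() return value, could look roughly like the sketch below (invented names; this is not claimed to match httpd's real keepalive loop):

```python
import select
import socket

def keepalive_wait(conn, timeout_ms):
    # Wait for either a new request or the client's FIN during the
    # keepalive period. Returns "request", "closed", or "timeout".
    p = select.poll()
    p.register(conn.fileno(), select.POLLIN)
    if not p.poll(timeout_ms):
        return "timeout"                 # nothing happened; keep waiting
    # Readable: either new request bytes, or EOF after the client's FIN.
    return "closed" if conn.recv(4096) == b"" else "request"
```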


I have also learned to avoid hijacking threads.

-- 
Not the original thread poster,
but the one that hijacked the thread for my CLOSE_WAIT problem,
and probably got mistaken for the original thread poster,
and thereby got accused of being too lazy/dumb to use pf,
and of not listening to advice,
and more...

/ Raimo Niskanen, Erlang/OTP, Ericsson AB



Re: : : rouge IPs / user

2007-12-12 Thread knitti
On 12/12/07, Raimo Niskanen [EMAIL PROTECTED] wrote:
 * A httpd server socket enters CLOSE_WAIT when the client
   closes (or half-closes) its end and sends FIN to the
   server TCP stack that replies ACK and enters CLOSE_WAIT.
   The socket proceeds out of CLOSE_WAIT when httpd calls
   close() on the socket.

 So, the remaining question is why httpd does not close the socket.
 Even though KeepAlive is in effect, since the client has closed its
 end there can come no more request on it, and the server
 should be able to notice that the client has closed its
 socket end either by recv() returning 0, or from a poll()
 return value. The server also should be able to know if
 it has more data to send to complete the reply.
 I see no reason to hold the socket in CLOSE_WAIT the whole
 KeepAliveTimeout time, and am interested to learn why.

WARNING: I'm not very experienced reading C code, so take my words
with heaps of salt.

The interesting code is most probably in http_main.c,
http://www.openbsd.org/cgi-bin/cvsweb/src/usr.sbin/httpd/src/main/http_main.c

The problem would be forgetting to call ap_bclose() after ending a
connection, either because all data has been sent or because the connection
was aborted. What I can read with some confidence is that keeping a
socket open beyond sending any data is not intentional, and there is
nothing (for me) which suggests that it happens at all.

Noob questions/statements ahead:

The code whose implications (aside from the clearly visible intention of what
the code *should* do) are least clear to me is lingering_close() and lingerout()
(is that a signal handler for SIGALRM?).

I would suspect some kind of (signal?) race (not necessarily there), in
which ap_bclose() gets called on a different socket than intended (thus
shutting down another connection as a side effect). BUT since the whole
code doesn't run threaded, I can't come up with anything which would
actually suggest that.

I would appreciate it if someone told me whether my interpretation is rather
wrong or rather right ;)


--knitti



Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

Raimo Niskanen wrote:

Interesting for me too, and most probably for others. It became an
interesting discussion of my CLOSE_WAIT problem after all...

To summarize (as I see it):

* pf synproxy state does not affect these CLOSE_WAIT sockets since
  the SYN proxy is only active during connection establishement.
  But it is a good to use anyway since it prevents IP spoofing.


Why not? Just test it out. What happens if you get a DDoS on your httpd, 
as an example, or someone tries to connect to it? You send a packet to httpd; it 
will create a socket to reply to your connection request, send the 
source IP an ACK, and then wait for the reply ACK that will never come. So, 
what does this do to your httpd then??? How many sockets will you have 
pending responses here? You use one socket per user connection to your 
httpd. You have 25 real users accessing your httpd and 1,000 fake users, 
without pf in the path. I will ask you this simple question then.


How many sockets will your httpd use, and how many will end up waiting on a 
reply and then waiting to close?



* Reducing httpd.conf:KeepAliveTimeout decreases the number of
  sockets in CLOSE_WAIT. I had it at 150 seconds (my mistake,
  probably the problem origin). The default is 15 seconds.
  My setting is now 10 seconds, problem probably solved.
  Thanks to all contributing to the solution!


Glad it provided you where to look.


* A httpd server socket enters CLOSE_WAIT when the client
  closes (or half-closes) its end and sends FIN to the
  server TCP stack that replies ACK and enters CLOSE_WAIT.
  The socket proceeds out of CLOSE_WAIT when httpd calls
  close() on the socket.


The close process has three stages as well: the client asks to close, 
the server replies, and the client confirms. So: close, ACK, and ACK.


Did you verify that the client sent the last required ACK to the 
original request of the server to close?


There is also a keepalive in the TCP stack, and if I remember well, I 
think the default set by the RFC is not a small amount of time.


So, if that were the case for each connection, don't you think you would 
run out of sockets just a few minutes after starting httpd?


Something can be done to help the lost ones, but leaving them alone is no 
problem after you have fixed your original problem above.



So, the remaining question is why httpd does not close the socket.
Even though KeepAlive is in effect, since the client has closed its
end there can come no more request on it, and the server
should be able to notice that the client has closed its
socket end either by recv() returning 0, or from a poll()
return value. The server also should be able to know if
it has more data to send to complete the reply.
I see no reason to hold the socket in CLOSE_WAIT the whole
KeepAliveTimeout time, and am interested to learn why.


Again, are you sure all the RFC process was done? Who is waiting on whom 
here? Also, I think you may be confusing a few things here. httpd not 
closing a socket and KeepAlive being in effect are contradictory: 
the point of KeepAlive in httpd is to keep the socket open for the 
next possible request from that same user. It is not that KeepAlive is what 
will make the socket close. httpd will keep it open specifically to avoid 
spending resources forking another httpd process if need be, which is 
way more costly in the OS than just keeping the one already open ready 
for that user.


If you are so stuck on this, then disable httpd KeepAlive 
altogether. I sure wouldn't recommend doing that, but if that's what makes 
your life better, then please do it.


Then there are some possible adjustments in the sysctl part of the TCP 
stack in the OS, but I will not go back there again if just httpd's 
KeepAlive gives you a conceptual problem. Doing so, I would only provide 
you lots of rope to hang your httpd with, or maybe you. (;




Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

knitti wrote:

The problem would be to forget calling ap_bclose() after ending a
connection, either because all data has been sent or the connection has
been aborted. What I can read with some confidence, is that keeping a
socket open beyond sending any data is not intentional, and there is
nothing (for me) which suggests that it would happen at all.


Logically, if that were the case, wouldn't you think you would run out of 
sockets just a few minutes after starting httpd? I am not saying 
there aren't any bugs in httpd, or that there are. Fair to assume there are 
some, but to that extent, I couldn't imagine so. Just think about it for 
a second: what would the effect be if that were the case?



Noob questions/statements ahead:

The code, which implications (aside from the clear visible intention what the
code *should do) are least clear to me for lingering_close() and lingerout()
(is this a signal handler for SIG_ALRM?).

I would suspect some kind of (signal?) race (not nessessarily there), in
which ap_bclose() gets called on a different socket than intended (thus
shutting down another connection as a side effect). BUT since the whole
code doesn't run threaded, I can't come up with something which would
actually suggest that.

I would appreciate if someone told me whether my interpretation is rather
wrong or rather right ;)


I can't say either way in a knowing fashion here. After cleaning up a 
lot of code in httpd in 2004 & 2005, I got real sick of looking at it. 
Maybe one day I will go back again and do more.


But here are a few things to think about that I think will point you to 
where it is and how you might be able to affect how it reacts. I am not 
putting a judgment on that, however, as I think in most cases more harm 
can be done than good, and it is way too dependent on each one's setup. But 
nevertheless, just think about this.


- The application needs sockets, sends requests to create and destroy them, 
and keeps using them after they are created. Who does that, the kernel or 
the application?


- Who receives the socket create and destroy requests, creates or destroys 
them, and passes the handle to the application when ready? 
The kernel, or the application?


- Who handles the signaling, meaning the handshake, opening, CLOSE_WAIT, 
retransmissions, etc.? The application or the kernel?


- So, in the end, if a socket is in CLOSE_WAIT, is it the application 
or the kernel at that point? Meaning, was it already requested to be 
closed and is it now a signaling issue, or is it an application that hasn't 
asked to close the socket yet? (;


- If jammed in the CLOSE_WAIT state, is it because it hasn't sent the ACK for 
the client's request to close the socket?


- Or is it that it did send the ACK to the client and is now waiting on 
the final ACK from that client?


- Or is it that it reached that point because it was a never fully 
established three-way-handshake connection to start with, maybe?


- Or is it because the client just opened a socket, got what it needed, and 
didn't bother to close the socket properly as it should?


- Now, where is the application, in this case httpd, involved here?

- Where can keepalive in httpd help, or not?

- Where does the pf proxy help, or not?

- Where does keepalive in the TCP stack (sysctl) help, or not?

So, I think I am done with this one. Knowing where you are in the 
exchange process, the answers to the above will tell you where you 
can have the impact you are looking for.


I think I tried to help as much as I should here and pointed out where to 
look for which part of the issue, which is not in a single place and is not 
affected by a single aspect of httpd usage.


That's why there isn't a single answer to the questions here; it will 
always depend on your specific setup, traffic patterns, load, etc.


Hope it helps you some nevertheless and provides you something to think 
about.


I also have in the archives many tests already done on httpd, and some 
changes made and their effects for sysctl values. Some good, some bad, but 
it's there if you want to know more; however, I can't really recommend a 
specific solution, as it is way too dependent on your situation.


For example, you could reduce the keepalive in sysctl a lot if you want to 
help the CLOSE_WAIT situation, but at the same time this will increase the 
exchange of messages between valid connections as well. So, on one hand it 
will make your sockets close sooner, but at the same 
time you will increase the load on other already active connections. 
Which one is right, and what is the best setup for you? I can't say, nor can 
anyone else really. The defaults are pretty sane; you can change some of 
them, yes, but then it's always a trade-off between two or more things.


In all honesty, I can't tell you which one is best for you. I did many 
tests for myself, and what works for me by no means might work for you. 
But it's in the archive if you care to look, however. There was some very 
valid feedback on it as 

Re: rouge IPs / user

2007-12-12 Thread Dan Farrell
SPEWS is an excellent example of why trusting strangers on the Internet
that you can't even communicate with doesn't work.

danno

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Nick Guenther
Sent: Friday, December 07, 2007 11:13 PM
To: OpenBSD-Misc
Subject: Re: rouge IPs / user


See, that requires trusting that the other 'security experts' are actually
being honest and working for each other's benefit... but that system
isn't secure; how do you distinguish a 'security expert' from an
'infiltrator'?
You *must* have decentralized systems/methods for this. There's no way
to combine data together; the best you can do is share techniques
which you can verify with your own logic -- except for blacklists like
SPEWS, and even then there are all sorts of politics and troubles.

-Nick



Re: : : rouge IPs / user

2007-12-12 Thread knitti
On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
 Raimo Niskanen wrote:
  Interesting for me too, and most probably for others. It became an
  interesting discussion of my CLOSE_WAIT problem after all...
 
  To summarize (as I see it):
 
  * pf synproxy state does not affect these CLOSE_WAIT sockets since
the SYN proxy is only active during connection establishement.
But it is a good to use anyway since it prevents IP spoofing.

 Why not? Just test it out. What happen if you get a DDoS on your httpd
 as an example, or try to connect to it. You send a packet to httpd, it
 will create a socket to reply to your connection request and send the
 source IP ACK and then wait for the reply ACK that will never come. So,
 what does this do to your httpd then??? How many sockets will you have
 pending responses here? You use one socket per user connection to your
 httpd. You have 25 real users accessing your httpd and 1,000 fake users
 without pf in the path. I will aksed you this simple question then.

Don't confuse CLOSE_WAIT with a SYN flood. If httpd doesn't close
its socket, neither will the proxy. And even if it did, this doesn't
close httpd's socket.
I think I'm repeating myself, but the problem is *not* that httpd
waits for any client data. It _has_ seen the client's FIN (or it wouldn't
go into CLOSE_WAIT), but keeps its side open.

 the close process have three stage as well. The client asked to close,
 the server reply and the client confirmed. So, close, ACK and ACK.

Not exactly. The long version is: the side that wishes to close sends
FIN, the other side sends ACK (4-way close: each side sends a FIN and an
ACK). If the side that receives the first FIN decides to close
immediately as well, it can combine the FIN and the ACK (FIN - FIN/ACK - ACK).


 Did you verify that the client sent the last required ACK to the
 original request of the server to close?

If the server closes first and the client doesn't ACK, the socket should be
in TIME_WAIT. After some time, I think, the server may send a RST if
the client doesn't ACK its FIN.

 There is also a keep alive in the tcp stack and if I remember well I
 think it is set by default by the RFC is not a small amount of time.

Yes, TCP keepalives are empty ACK packets (or with a 1-octet payload),
but while the TCP connection is open (while TCP keepalives might
be sent), the socket doesn't go into CLOSE_WAIT. It does when the
client FINs its connection, which should also end the sending of TCP
keepalives.
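For reference, the TCP-stack keepalive discussed here is a per-socket option, separate from httpd's KeepAlive directive. A minimal sketch (illustrative only; the function name is mine):

```python
import socket

def enable_tcp_keepalive(sock):
    # Transport-layer keepalive: the kernel sends probe segments on an
    # otherwise idle connection. It only runs while the connection is
    # open, so it is unrelated to a socket already in CLOSE_WAIT.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
```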

 Again, are you sure all the RFC process was done? Who is waiting on who
 here? Also, I think you may be confusing a few things here. httpd not
 closing a socket and having KeepAlive is in effect are contradictory.

In theory, they are simply not related, because they are on different
protocol layers. In practice, there seems to be a correlation by
implementation.

--knitti



Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

knitti wrote:

* pf synproxy state does not affect these CLOSE_WAIT sockets since
  the SYN proxy is only active during connection establishement.
  But it is a good to use anyway since it prevents IP spoofing.

Why not? Just test it out. What happen if you get a DDoS on your httpd
as an example, or try to connect to it. You send a packet to httpd, it
will create a socket to reply to your connection request and send the
source IP ACK and then wait for the reply ACK that will never come. So,
what does this do to your httpd then??? How many sockets will you have
pending responses here? You use one socket per user connection to your
httpd. You have 25 real users accessing your httpd and 1,000 fake users
without pf in the path. I will aksed you this simple question then.


don't confuse the CLOSE_WAIT with a SYN flood. if httpd doesn't close
its socket, the proxy will neither. And even if it did, this doesn't
close httpd's
socket. I think I'm repeating myself, but the problem is *not* that httpd
waits for any client data. I _has_ seen the clients FIN (or it wouldn't go into
CLOSE_WAIT), but keeps its side open.


I am not. It may look like that at first glance, but I am not. I am only 
saying that using PF in front of httpd will reduce the possible number 
of httpd CLOSE_WAITs you might see. By default httpd can only support up 
to 256 connections, unless you increase that and compile it again. PF does 
not have that limit, and as such would help keep your httpd going 
even if you still have a bunch of CLOSE_WAIT states.


As to whether PF would close the httpd sockets: no, it wouldn't. If I 
misled you down that path, then I am sorry. What will affect your CLOSE_WAIT 
time (when you reach that point) are the TCP stack values, which I am 
reluctant to suggest adjusting, as they sure can create way more harm 
than good. But even then, there are limits to what can be done.


My point with PF here was that it would reduce the possible number of 
CLOSE_WAIT states you could see in the first place, which is one 
of the original goals of the question.


If you want to reduce the time it stays in that state, then you need to 
adjust some of the TCP stack values, but doing so will also introduce other 
effects, and it is impossible for me, or anyone else I think, to say 
what the optimum values are for your specific setup. I 
wouldn't claim to know what's best for you in that case. I can't.


Hope this clears that part up.

Best,

Daniel



Re: : : rouge IPs / user

2007-12-12 Thread knitti
On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
 I am only
 saying that using PF in front of httpd will reduce the possible number
 of httpd close_wait you might see. By default httpd can only support up
 to 256 connections, unless you increase it and compile it again.

I don't understand why pf would reduce this. Every single CLOSE_WAIT
stems from a former established connection, and pf can do nothing
to convince httpd to close its socket. No rogue clients involved here.

 lead you in that path, then I am sorry. What will affect your close_wait
 time (when you reach that point) are the tcp stack value, witch I am
 reluctant to suggest to adjust as they sure can create way more harm
 then goods.

I don't think there is a sysctl for that. TCP connections don't expire by
default unless you make them, and the same should go for a half-closed
one. There are perfectly legitimate reasons for long-lived half-closed
TCP connections.

 My point with PF here was that it would reduce the possible numbers of
 close_wait state you could possibly see in the first place, witch is one
 of the original goal of the question.

Why?


--knitti



Re: : : rouge IPs / user

2007-12-12 Thread knitti
On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
 knitti wrote:
  The problem would be to forget calling ap_bclose() after ending a
  connection, either because all data has been sent or the connection has
  been aborted. What I can read with some confidence, is that keeping a
  socket open beyond sending any data is not intentional, and there is
  nothing (for me) which suggests that it would happen at all.

 Logically, if that were the case, wouldn't you think you would run out of
 sockets just a few minutes after starting httpd? I am not saying
 there aren't any bugs in httpd, or that there are. Fair to assume there are
 some, but to that extent, I couldn't imagine so. Just think about it for
 a second. What would the effect be if that were the case?

I think you misunderstood me. I meant I don't see any obvious occasion
in which the problem I assumed (forgetting ap_bclose()) would occur.
So I don't see any bug (surprise), but something occurs. So either I don't
see the bug because it's not obvious (surprise, again), or my
assumption (ap_bclose() not called) is wrong.

My question: would not calling ap_bclose() show this behaviour?

 - The application needs sockets, sends requests to create and destroy them,
 and keeps using them after they are created. Who does that, kernel or
 application?

I assume the kernel creates the actual socket, but the app keeps it as long
as it wants (or longer ;-)

 - Who receives the socket creation and destruction requests and will create
 or destroy them and pass the handle to the application when ready:
 the kernel, or the application?

 - Who is handling the signaling, meaning handshake, opening, CLOSE_WAIT,
 retransmissions, etc.: application or kernel?

 - So, in the end, if a socket is in CLOSE_WAIT, is it the application,
 or the kernel at that point? Meaning, was it already requested to be
 closed and is now a signaling issue, or is it an application that hasn't
 asked to close the socket yet? (;

I *assume* that it is the application forgetting to close(), because if the
kernel forgot to close() something that is more or less a file, we would
also have massive numbers of stale open files lying around.

 - If jammed in CLOSE_WAIT state, is it because it hasn't sent the ACK for
 the request from the client to close the socket?

 - Or is it that it did send the ACK to the client and is now waiting on
 the final ACK from that client to do it?

 - Or is it that it reached that point because it was a connection that was
 never fully established by the three-way handshake to start with, maybe?

 - Or is it because the client just opened a socket, got what it needed, and
   didn't bother to close the socket properly as it should?

_please_, read my last mails, or look at a TCP state diagram.


 - Now, where is the application, in the case httpd involved here?

CLOSE_WAIT is a defined state. The simplest explanation is not
closing the socket even after recognizing there is nothing more to
read from it.

 - Where can keep alive in httpd help, or not?

 - Where pf proxy help or not?

 - Where keep alive in tcp stack (sysctl) help or not?

These three questions I simply don't understand. Please rephrase them.


 That's why there isn't a single answer to the questions here and it will
 always depend on your specific setup, traffic patterns and load, etc.

It seems we are of different opinions here. I'm more or less convinced
now that there is a bug: the socket is not closed even after httpd has
nothing more to send. Under the assumption that my interpretation of the
problem is not fundamentally flawed.


 Example: you could reduce the keep-alive in sysctl a lot if you want to
 help the CLOSE_WAIT, but at the same time this will increase all the
 exchanged messages between valid connections as well. So, on one hand it
 will reduce the delay in closing your sockets, but at the same
 time you will increase the load on other already active connections.

Well, I think turning the wrong knobs will do harm, there you are right.
Tuning TCP keepalives would be the wrong knob.

 left, unless it does give you a problem other than a feeling of wanting
 it to look different, you should put it to rest I think.

Unless I can reproduce it, I will also let it rest, after being convinced
that I can't find the bug by reading the code alone ;)

--knitti



Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

knitti wrote:

On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:

I am only
saying that using PF in front of httpd will reduce the possible number
of httpd close_wait you might see. By default httpd can only support up
to 256 connections, unless you increase it and compile it again.


I don't understand why pf would reduce this. Every single CLOSE_WAIT
stems from a former established connection, and pf can do nothing
to convince httpd to close its socket. No rogue clients involved here.


lead you in that path, then I am sorry. What will affect your close_wait
time (when you reach that point) are the TCP stack values, which I am
reluctant to suggest adjusting as they can surely create far more harm
than good.


I don't think there is a sysctl for that. TCP connections don't expire by
default unless you make them, and the same should go for a half-closed
one. There are perfectly legitimate reasons for long-lived half-closed
TCP connections.


TCP does its handshake and takes actions based on some timeout values, some 
fixed in the RFCs, so that can be affected in good or bad ways. So a 
proper combination of them with proper values can achieve the requested 
effect. Some examples off the top of my head could be the following, and 
I am not saying it's wise to go and change them for no good reason, but 
they will affect the efficiency of your sockets in various ways, which I 
wouldn't venture to say I could explain fully without mistakes. I 
only point out that there are ways to achieve what you are looking for, 
maybe indirectly, but I think there are.



net.inet.tcp.keepidle       # Time a connection must be idle before a
                            # keepalive is sent.

net.inet.tcp.keepinittime   # Used by the syncache to time out SYN
                            # requests.

net.inet.tcp.keepintvl      # Interval between keepalives sent to
                            # remote machines.

net.inet.tcp.rstppslimit    # Maximum number of outgoing TCP RST
                            # packets per second.  RST packets
                            # exceeding this value are rate-limited
                            # and will not go out from the node.
                            # A negative value disables rate limiting.

net.inet.tcp.synbucketlimit # Maximum number of entries allowed per
                            # hash bucket in the TCP SYN cache.

net.inet.tcp.syncachelimit  # Maximum number of entries allowed in
                            # the TCP SYN cache.



My point with PF here was that it would reduce the possible number of
CLOSE_WAIT states you could see in the first place, which is one
of the original goals of the question.


Why?


OK, I could be wrong, and I am sure someone with a huge stick will hit me 
with it if I say something stupid, and/or there might be something I am 
overlooking or not understanding fully, which is surely possible as well. (;


But if httpd receives a fake connection that does not do the full 
handshake, isn't there a socket opened and/or used by httpd for that 
fake connection anyway? Meaning it tries to communicate with that fake 
source and can't, and eventually will close and (that's where maybe I am 
failing here) will end up in CLOSE_WAIT maybe?


Or, are you saying that the ONLY possible way a socket ends up in 
CLOSE_WAIT state is when it was fully opened properly in the first 
place? If so, then I stand corrected and I was/am wrong about that part 
of my suggestions. So, is that the case then?


Best,

Daniel



Re: : : rouge IPs / user

2007-12-12 Thread knitti
On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
 net.inet.tcp.keepidle
 net.inet.tcp.keepinittime
 net.inet.tcp.keepintvl
 net.inet.tcp.rstppslimit
 net.inet.tcp.synbucketlimit
 net.inet.tcp.syncachelimit

Nope, shouldn't apply, unless my TCP knowledge is wrong or there
is a bug which makes them affect it unintentionally.


  My point with PF here was that it would reduce the possible number of
  CLOSE_WAIT states you could see in the first place, which is one
  of the original goals of the question.
 
  Why?

 OK, I could be wrong, and I am sure someone with a huge stick will hit me
 with it if I say something stupid, and/or there might be something I am
 overlooking or not understanding fully, which is surely possible as well. (;

 But if httpd receives a fake connection that does not do the full
 handshake, isn't there a socket opened and/or used by httpd for that
 fake connection anyway? Meaning it tries to communicate with that fake
 source and can't, and eventually will close and (that's where maybe I am
 failing here) will end up in CLOSE_WAIT maybe?

no fake connections involved, CLOSE_WAIT is a state _after_ having a
fully established connection

 Or, are you saying that the ONLY possible way a socket ends up in
 CLOSE_WAIT state is when it was fully opened properly in the first
 place? If so, then I stand corrected and I was/am wrong about that part
 of my suggestions. So, is that the case then?

Yes. Random example:
http://www4.informatik.uni-erlangen.de/Projects/JX/Projects/TCP/tcpstate.html


--knitti



Re: : : rouge IPs / user

2007-12-12 Thread Otto Moerbeek
On Wed, Dec 12, 2007 at 10:42:23PM +0100, knitti wrote:

 On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
  net.inet.tcp.keepidle
  net.inet.tcp.keepinittime
  net.inet.tcp.keepintvl
  net.inet.tcp.rstppslimit
  net.inet.tcp.synbucketlimit
  net.inet.tcp.syncachelimit
 
 nope, shoudn't apply, unless my TCP knowledge is wrong or there
 is a bug, which makes it affecting it unintentional
 
 
   My point with PF here was that it would reduce the possible numbers of
   close_wait state you could possibly see in the first place, witch is one
   of the original goal of the question.
  
   Why?
 
  OK, I could be wrong and I am sure someone with a huge stick will hit me
  with it if I say something stupid, and/or there might be something I am
  overlooking or not understanding fully, witch is sure possible as well. (;
 
  But if httpd received a fake connection that do not do the full
  handshake, isn't it there a socket open and/or use by httpd for that
  fake connection anyway. Meaning it tries to communicate with that fake
  source and can't and eventually will close and (that's where may be I am
  failing here) will end up in close_wait may be?
 
 no fake connections involved, CLOSE_WAIT is a state _after_ having a
 fully established connection
 
  Or, are you saying that the ONLY possible way a socket end up in
  close_wait state is ONLY when and ONLY possible if it was fully open
  properly in the first place? If so, then I stand corrected and I was/am
  wrong about that part of my suggestions. So, is it the case then?
 
 Yes. Random example:
 http://www4.informatik.uni-erlangen.de/Projects/JX/Projects/TCP/tcpstate.html
 

I did not follow the complete thread, but I'd like to mention one thing:
there might be half-open connections involved here. 

A client might do a half-close (i.e. shutdown(SHUT_WR)) after sending
a request. This will make the connection a half-duplex one. IIRC, after the
shutdown, the server moves to CLOSE_WAIT, but will still be able to
send data to the client, until it decides that it is done and closes
down the connection.
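That half-close sequence can be sketched in Python. This is a hedged illustration of the mechanics, not httpd's code; the request and response strings are made up:

```python
import socket

# The client shuts down its write side after sending the request; the
# server, whose socket is then in CLOSE_WAIT, can still send the whole
# response back before closing.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.create_connection(("127.0.0.1", srv.getsockname()[1]))
conn, _ = srv.accept()

cli.sendall(b"GET / HTTP/1.0\r\n\r\n")
cli.shutdown(socket.SHUT_WR)          # half-close: our FIN goes out,
                                      # but the read side stays open

request = b""
while True:                           # server reads until EOF (the FIN)
    chunk = conn.recv(1024)
    if not chunk:
        break
    request += chunk

conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")  # legal while in CLOSE_WAIT
conn.close()                          # server's FIN finishes the tear-down

response = b""
while True:                           # client still receives the reply
    chunk = cli.recv(1024)
    if not chunk:
        break
    response += chunk

cli.close()
srv.close()
```

So a connection lingering in CLOSE_WAIT is not necessarily a bug; here it is simply the window in which the server finishes its response.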

-Otto



Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

Otto Moerbeek wrote:

I did not follow the complete thread, but I like to mention one thing:
there might be half open connections involved here. 


A client might do a half close (i.e. shutdown(SHUT_WR)) after sending
a request. This will make the connection a half-duplex one.  iirc, after the
shutdown, the server moves to CLOSE_WAIT, but will still be able to
send data to the client, until it decides that it is done and closes
down the connection.


Thanks for your feedback Otto!

I keep reading this and maybe my English is not getting it fully. For 
my own knowledge and understanding: when you say half-open and half-close, 
in both cases have they done the full three-step handshake, or maybe 
skipped some of it?


In other words, can they or can't they reach CLOSE_WAIT without doing the 
full, complete and proper three-step handshake?


I would appreciate understanding that for myself, as I thought it was 
clear, but I guess I still have a gap unfilled. (;


Best,

Daniel



Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

knitti wrote:

On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:

net.inet.tcp.keepidle
net.inet.tcp.keepinittime
net.inet.tcp.keepintvl
net.inet.tcp.rstppslimit
net.inet.tcp.synbucketlimit
net.inet.tcp.syncachelimit


nope, shoudn't apply, unless my TCP knowledge is wrong or there
is a bug, which makes it affecting it unintentional


They would help at making more sockets available to httpd sooner, but for 
this problem you are right, they wouldn't. If I understood my reading 
properly, a socket gets to CLOSE_WAIT only after the FIN/ACK, and from 
that state it then waits for the application to close; only after the 
application has closed is the last ACK sent. That's how I understand it, 
so you are right and I am wrong about the CLOSE_WAIT state.



My point with PF here was that it would reduce the possible number of
CLOSE_WAIT states you could see in the first place, which is one
of the original goals of the question.

Why?

OK, I could be wrong, and I am sure someone with a huge stick will hit me
with it if I say something stupid, and/or there might be something I am
overlooking or not understanding fully, which is surely possible as well. (;

But if httpd receives a fake connection that does not do the full
handshake, isn't there a socket opened and/or used by httpd for that
fake connection anyway? Meaning it tries to communicate with that fake
source and can't, and eventually will close and (that's where maybe I am
failing here) will end up in CLOSE_WAIT maybe?


no fake connections involved, CLOSE_WAIT is a state _after_ having a
fully established connection


Agreed, unless it's somehow possible to reach that state without having 
an established and confirmed connection first, and it looks like that's 
not possible. So, you are right again. (;



Or, are you saying that the ONLY possible way a socket ends up in
CLOSE_WAIT state is when it was fully opened properly in the first
place? If so, then I stand corrected and I was/am wrong about that part
of my suggestions. So, is that the case then?


Yes. Random example:
http://www4.informatik.uni-erlangen.de/Projects/JX/Projects/TCP/tcpstate.html


Thanks!

Daniel



Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

knitti wrote:

On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:

knitti wrote:

The problem would be to forget calling ap_bclose() after ending a
connection, either because all data has been sent or the connection has
been aborted. What I can read with some confidence, is that keeping a
socket open beyond sending any data is not intentional, and there is
nothing (for me) which suggests that it would happen at all.

Logically, if that were the case, wouldn't you think you would run out of
sockets just a few minutes after starting httpd? I am not saying
there aren't any bugs in httpd, or that there are. Fair to assume there are
some, but to that extent, I couldn't imagine so. Just think about it for
a second. What would the effect be if that were the case?


I think you misunderstood me. I meant I don't see any obvious occasion
in which the problem I assumed (forgetting ap_bclose()) would occur.
So I don't see any bug (surprise), but something occurs. So either I don't
see the bug because it's not obvious (surprise, again), or my
assumption (ap_bclose() not called) is wrong.

My question: would not calling ap_bclose() show this behaviour?


Maybe; I am not sure, but based on the information you provided previously 
it would make sense, I guess. A stupid idea: maybe something like Ajax 
might keep it going... No, I don't think so.



- The application needs sockets, sends requests to create and destroy them,
and keeps using them after they are created. Who does that, kernel or
application?


I assume the kernel creates the actual socket, but the app keeps it as long
as it wants (or longer ;-)


- Who receives the socket creation and destruction requests and will create
or destroy them and pass the handle to the application when ready:
the kernel, or the application?

- Who is handling the signaling, meaning handshake, opening, CLOSE_WAIT,
retransmissions, etc.: application or kernel?

- So, in the end, if a socket is in CLOSE_WAIT, is it the application,
or the kernel at that point? Meaning, was it already requested to be
closed and is now a signaling issue, or is it an application that hasn't
asked to close the socket yet? (;


I *assume* that it is the application forgetting to close(), because if the
kernel forgot to close() something that is more or less a file, we would
also have massive numbers of stale open files lying around.


Makes sense.


- If jammed in CLOSE_WAIT state, is it because it hasn't sent the ACK for
the request from the client to close the socket?

- Or is it that it did send the ACK to the client and is now waiting on
the final ACK from that client to do it?

- Or is it that it reached that point because it was a connection that was
never fully established by the three-way handshake to start with, maybe?

- Or is it because the client just opened a socket, got what it needed, and
  didn't bother to close the socket properly as it should?


_please_, read my last mails, or look at a TCP state diagram.


I did, thank you.


- Now, where is the application, in the case httpd involved here?


CLOSE_WAIT is a defined state. The simplest explanation is not
closing the socket even after recognizing there is nothing more to
read from it.


One would think so. What would really be nice is to be able to create 
these states, I guess, to understand how they stay there in the 
first place. I tried to see if I could, but I can't right now.



- Where can keep alive in httpd help, or not?

- Where pf proxy help or not?

- Where keep alive in tcp stack (sysctl) help or not?


These three questions I simply don't understand. Please rephrase them.


It was intended as a reflection more than as questions, to show where each 
part could affect the efficiency of httpd, but in this case we left the 
part related to that and just focused on why CLOSE_WAIT was present. 
However, there must be a relation, as the original poster corrected his 
problem of having too many CLOSE_WAIT states by reducing the KeepAlive to 
10 seconds instead of the wrong setting of 150 seconds. Thinking 
about it, though, unless the wrong information was provided, there shouldn't 
have been a CLOSE_WAIT issue but a lack of available sockets. I think 
they should have been FIN_WAIT_2 and not CLOSE_WAIT if I think about it. 
I must have mixed up the two, assuming it was FIN_WAIT_2 but continuing 
with CLOSE_WAIT. I think you can definitely help FIN_WAIT_2, but as for 
CLOSE_WAIT, as you put it, and as I think more about it, you are right. 
It would look like an application issue somehow.



That's why there isn't a single answer to the questions here and it will
always depend on your specific setup, traffic patterns and load, etc.


It seems we are of different opinions here. I'm more or less convinced
now that there is a bug: the socket is not closed even after httpd has
nothing more to send. Under the assumption that my interpretation of the
problem is not fundamentally flawed.


Maybe not of a different opinion. There obviously was a part that I 
misunderstood and 

Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

knitti wrote:

On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:

I am only
saying that using PF in front of httpd will reduce the possible number
of httpd close_wait you might see. By default httpd can only support up
to 256 connections, unless you increase it and compile it again.


I don't understand why pf would reduce this. Every single CLOSE_WAIT
stems from a former established connection, and pf can do nothing
to convince httpd to close its socket. No rogue clients involved here.


Yes you are right.


lead you in that path, then I am sorry. What will affect your close_wait
time (when you reach that point) are the TCP stack values, which I am
reluctant to suggest adjusting as they can surely create far more harm
than good.


I don't think there is a sysctl for that. TCP connections don't expire by
default unless you make them, and the same should go for a half-closed
one. There are perfectly legitimate reasons for long-lived half-closed
TCP connections.


In my flow of emails I mixed two things; glad you cut it apart. It would 
help for normal heavy traffic, or fake traffic, and it's possible to affect 
the keep-alive from the TCP side, making sockets available sooner, etc. 
But again, specifically for CLOSE_WAIT, I stand corrected.



My point with PF here was that it would reduce the possible number of
CLOSE_WAIT states you could see in the first place, which is one
of the original goals of the question.


Why?


I misspoke (or in this case, miswrote). I confused myself between that 
state and the idle/waiting/FIN_WAIT_x/etc. of fake connections that would 
take httpd sockets away from legitimate traffic, which PF would/could 
definitely help with. However, in this thread I lost track of where we 
were in the process, and especially, I had a gap in my understanding that 
misled me as well.


It was interesting, however.

Thanks

Daniel



Re: : : rouge IPs / user

2007-12-12 Thread Otto Moerbeek
On Wed, Dec 12, 2007 at 08:57:18PM -0500, Daniel Ouellet wrote:

 Otto Moerbeek wrote:
 I did not follow the complete thread, but I like to mention one thing:
 there might be half open connections involved here. A client might do a 
 half close (i.e. shutdown(SHUT_WR)) after sending
 a request. This will make the connection a half-duplex one.  iirc, after 
 the
 shutdown, the server moves to CLOSE_WAIT, but will still be able to
 send data to the client, until it decides that it is done and closes
 down the connection.

 Thanks for your feedback Otto!

 I keep reading this and maybe my English is not getting it fully. For my 
 own knowledge and understanding: when you say half-open and half-close, in 
 both cases have they done the full three-step handshake, or maybe skipped 
 some of it?

 In other words, can they or can't they reach CLOSE_WAIT without doing the 
 full, complete and proper three-step handshake?

 I would appreciate understanding that for myself, as I thought it was clear, 
 but I guess I still have a gap unfilled. (;

I'm only talking about the tear-down. The three-way handshake happens
before that; both sockets are in ESTABLISHED state. You have to read
half-close as a verb (action), and half-open as a description of the
state of the connection. 

Check the state transition diagram, and maybe do some reading in
Stevens's TCP/IP Illustrated, Volume 1 (which has the state diagram on the
inside cover ;-)

-Otto



Re: : : rouge IPs / user

2007-12-12 Thread Daniel Ouellet

Otto Moerbeek wrote:

I'm only talking about the tear-down. The three-way handshake happens
before that; both sockets are in ESTABLISHED state. You have to read
half-close as a verb (action), and half-open as a description of the
state of the connection. 


Check the state transition diagram, and maybe do some reading in
Stevens's TCP/IP Illustrated, Volume 1 (which has the state diagram on the
inside cover ;-)


I did a lot of reading and it looks like I cleared up my misunderstanding, 
and I see where I confused myself:


1) A socket gets into CLOSE_WAIT when the remote side sends a FIN and 
the local side sends an ACK but has not yet sent its own FIN.


2) A socket gets into FIN_WAIT_2 when the local side closes the socket 
and sends a FIN (FIN_WAIT_1) and then receives an ACK for that FIN but 
has not yet received a FIN from the remote side.
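The two definitions above can be observed directly. This is a hedged, Linux-only sketch (it reads the state byte from the TCP_INFO socket option; the numeric state values are taken from linux/tcp.h and do not apply to other systems, where netstat shows the same states by name):

```python
import socket
import time

TCP_FIN_WAIT2, TCP_CLOSE_WAIT = 5, 8   # state numbers from linux/tcp.h

def tcp_state(sock):
    # Linux-specific: the first byte of the TCP_INFO struct is tcpi_state.
    return sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 152)[0]

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(("127.0.0.1", srv.getsockname()[1]))
conn, _ = srv.accept()

cli.shutdown(socket.SHUT_WR)   # the "local side" (client) sends its FIN
conn.recv(1)                   # server sees EOF: the FIN has arrived
time.sleep(0.5)                # give the ACK of that FIN time to come back

server_state = tcp_state(conn) # remote FIN seen, no close() yet -> case 1
client_state = tcp_state(cli)  # FIN sent and ACKed, no FIN back -> case 2
assert server_state == TCP_CLOSE_WAIT
assert client_state == TCP_FIN_WAIT2

conn.close()
cli.close()
srv.close()
```

The two states are thus mirror images of the same half-closed connection: CLOSE_WAIT on the side that received the FIN, FIN_WAIT_2 on the side that sent it.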


I got confused because I never see any CLOSE_WAIT on my server (no 
complaint, for sure), but I do see FIN_WAIT_2, and I obviously confused 
the two.


I am puzzled as to why they would get so many CLOSE_WAIT, and I guess it 
might be an underpowered server: maybe when the httpd child process 
is done, the master or whatnot doesn't have time to process it and 
as such keeps the socket open because of that. Not sure, but I guess it's 
possible.


In any case, I appreciate the reply, and I have certainly cleared up that 
part of the handshake anyway, so I will not make the same mistakes again. (;


Best,

Daniel



Re: : rouge IPs / user

2007-12-11 Thread Raimo Niskanen
On Tue, Dec 11, 2007 at 01:15:11AM +1300, Joel Wiramu Pauling wrote:
 Tip.
 
 Don't allow password challenge. Problem solved. Just use key'd ssh and this
 problem disappears.
 

Been there, done that.

You answered the wrong question.

I want to know if and what I can do (on the server side) about HTTP
clients that put sockets on my httpd server in state CLOSE_WAIT and
thereby chew up all sockets for the server causing a kind of
denial of service state.

And yes, I have googled for HTTP server socket CLOSE_WAIT and
did not get much wiser.



 
 On 11/12/2007, Raimo Niskanen [EMAIL PROTECTED] wrote:
 
  I have a related problem, but I am not sure if the source
  IPs are nasty computers or just...
 
  # lsof -ni:www
  shows me lots of connections hanging in state CLOSE_WAIT
  from some hosts (often in China). These used to eat all
  sockets for httpd. Now I have a max-src-conn limit so
  it is not a real problem any more.
 
  I now also log hosts that succeed in getting many
  sockets into CLOSE_WAIT, and they are still there.
 
  What do the gurus say? What can I do about these hosts?
 
 
 
  On Fri, Dec 07, 2007 at 09:51:52AM -0800, badeguruji wrote:
   I am getting constant hacking attempts into my computer
   from the following IPs. Although I have configured my ssh
   config and tcp-wrappers to deny such attempts, I
   wish some expert soul in this community would 'fix' this
   rogue hacker forever, for everyone's good.
  
   This hacker could be spoofing the IPs, but I have only
   the IPs in my message logs (and a URL)...
  
   218.6.16.30
   195.187.33.66
   202.29.21.6
   60.28.201.57
   218.24.162.85
   wpc4643.amenworld.com
   202.22.251.23
   219.143.232.131
   220.227.218.21
   124.30.42.36
  
   -for community.
  
   -BG
  
   
   ~~Kalyan-mastu~~
 
  --
 
  / Raimo Niskanen, Erlang/OTP, Ericsson AB

-- 

/ Raimo Niskanen, Erlang/OTP, Ericsson AB



Re: : rouge IPs / user

2007-12-11 Thread knitti
On 12/11/07, Raimo Niskanen [EMAIL PROTECTED] wrote:
 I want to know if and what I can do (on the server side) about HTTP
 clients that put sockets on my httpd server in state CLOSE_WAIT and
 thereby chew up all sockets for the server causing a kind of
 denial of service state.

 And yes, I have googled for HPPT server socket CLOSE_WAIT and
 did not get much wiser.

If I understand correctly, you could try synproxy states with pf and let these
states expire rapidly. If the states expire, I *think* pf should end the
connection completely, so your half-closed sockets don't get stale.
BUT perhaps I didn't get it at all and this makes no sense ;)

--knitti



Re: : rouge IPs / user

2007-12-11 Thread Marti Martinez
Yep, synproxy is your answer for OpenBSD. For Linux or FreeBSD, try
enabling SYN cookies.

On Dec 11, 2007 5:43 AM, knitti [EMAIL PROTECTED] wrote:
 On 12/11/07, Raimo Niskanen [EMAIL PROTECTED] wrote:
  I want to know if and what I can do (on the server side) about HTTP
  clients that put sockets on my httpd server in state CLOSE_WAIT and
  thereby chew up all sockets for the server causing a kind of
  denial of service state.
 
  And yes, I have googled for HPPT server socket CLOSE_WAIT and
  did not get much wiser.

 If I understand correctly, you could try synproxy states with pf and let these
 states expire rapidly. If the states expire, I *think* pf should end the
 connection completely, so your half-closed sockets don't get stale.
 BUT perhaps I didn't get it at all and this makes no sense ;)

 --knitti





-- 
Systems Programmer, Principal
Electrical & Computer Engineering
The University of Arizona
[EMAIL PROTECTED]



Re: : rouge IPs / user

2007-12-11 Thread Daniel Ouellet

Raimo Niskanen wrote:

On Tue, Dec 11, 2007 at 01:15:11AM +1300, Joel Wiramu Pauling wrote:

Tip.

Don't allow password challenge. Problem solved. Just use key'd ssh and this
problem disappears.



Bin there, done that.

You answered the wrong question.


I think you have gotten the right answer many times so far, but you just 
refuse to take the advice. People have told you many times to just use pf 
and be done with it.


You just reply and dismiss them like one here:

I was advised to use pf, but right now a simple ssh config and 
hosts.allow/deny is serving me fine. I will learn and use pf in due course.



I want to know if and what I can do (on the server side) about HTTP
clients that put sockets on my httpd server in state CLOSE_WAIT and
thereby chew up all sockets for the server causing a kind of
denial of service state.


People have given you the answer over and over, but it is up to you to 
listen to the advice.



And yes, I have googled for HTTP server socket CLOSE_WAIT and
did not get much wiser.


I am not sure you actually did, but I will give you the benefit of the doubt here.

Again, the same answer and the same advice: get with it and use pf.

If you had googled it, you would have seen exactly the answer and an 
example for your question, here yet again using pf:


http://openbsd.org/faq/pf/filter.html#synproxy

It's one thing to ask for help and advice, and users here have given you 
plenty of really good advice; it's another to refuse it, dismiss it, and 
come back saying no one gave you the answer, or only answered 
the wrong question.


The answer to your problem is just to use PF, or maybe the real problem 
is between the monitor and the chair.


Please, just read up on it, do it right, and stop saying people are not 
helping you. They are, and they have given you the right answer, but you 
refuse it.
 Your computer(s), your choice, that I get, but then don't say you 
don't get help.


There is a great FAQ on PF and it's easy to read:

Spend the same amount of time reading it as you spend writing emails and 
you will know it much better than I do, it seems.


http://openbsd.org/faq/pf/

If you want more, then read the great docs on it here:

http://www.bsdly.net/~peter/pf.html

and if that still doesn't answer your questions, then get the book:

http://nostarch.com/frameset.php?startat=pf

So far ALL the answers to your various questions on the subject and its 
variations have been to use PF, so just do it.


Hope this helps you some.

Best,

Daniel



Re: : rouge IPs / user

2007-12-11 Thread Stuart Henderson
On 2007/12/11 09:40, Marti Martinez wrote:
 Yep, synproxy is your answer for OpenBSD. For Linux or FreeBSD, try
 enabling syn cookies.

synproxy works at the start of the connection, not the end.

CLOSE_WAIT is the state where the network stack waits for
the application (httpd) to close the connection after receiving
the client's FIN.



Re: : rouge IPs / user

2007-12-11 Thread knitti
On 12/11/07, Stuart Henderson [EMAIL PROTECTED] wrote:
 On 2007/12/11 09:40, Marti Martinez wrote:
  Yep, synproxy is your answer for OpenBSD. For Linux or FreeBSD, try
  enabling syn cookies.

 synproxy works at the start of the connection, not the end.

 CLOSE_WAIT is the state where the network stack waits for
 the application (httpd) to close the connection after receiving
 the client's FIN.

oh sorry, then I was wrong. So when the client's FIN is already in, is it
(depending on how long that takes) normal behaviour of httpd,
or could it be considered a bug?


--knitti



Re: : rouge IPs / user

2007-12-11 Thread Daniel Ouellet

knitti wrote:

On 12/11/07, Stuart Henderson [EMAIL PROTECTED] wrote:

On 2007/12/11 09:40, Marti Martinez wrote:

Yep, synproxy is your answer for OpenBSD. For Linux or FreeBSD, try
enabling syn cookies.

synproxy works at the start of the connection, not the end.

CLOSE_WAIT is the state where the network stack waits for
the application (httpd) to close the connection after receiving
the client's FIN.


oh sorry, then I was wrong. So when the client's FIN is already in, is it
(depending on how long that takes) normal behaviour of httpd,
or could it be considered a bug?


It's not a bug, but a feature, I guess. It's useful for a keep-alive 
setup and can be adjusted in httpd as well, or turned off if that really 
annoys you. I am not recommending that, however.


PF can help in making sure the connections you pass to your httpd server 
are legitimate ones (completed three-way handshake), and then you can 
adjust the keep-alive on httpd to reduce it if you want, or maybe turn 
it off in very bad cases.


Even in the very worst cases, you could adjust some of the net.inet.tcp.* 
sysctl values to help, but I am not going there, as in most cases users 
will make it way worse rather than better. You have to have a very busy 
server to start playing with these values, for both or either of pf and 
the httpd keep-alive.


If it just annoys you to see the CLOSE_WAIT sockets, in netstat for 
example, but the httpd server is operating normally, then just let 
it be.


It is also possible to adjust PF to start limiting the states in 
its table as you come under very heavy load, but again, that's 
not for everyone. You can set up PF to expire states sooner than it 
otherwise would as you approach the high limits, etc.


But again, all this is for very heavy setups and servers. I could be 
wrong, but I don't think that's the issue in this case.


In any case, in the interest of answering your question, you can always 
read up on this a bit. The adaptive options and various timeouts in PF, 
combined with some changes in httpd.conf for keep-alive, will carry you 
a long way:


http://www.openbsd.org/cgi-bin/man.cgi?query=pf.conf&sektion=5&manpath=OpenBSD+4.2
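As a sketch of the adaptive options and timeouts that man page describes (the numbers here are invented for illustration, not recommendations):

```
# When the state table passes adaptive.start entries, pf scales all
# timeouts down linearly, reaching zero at adaptive.end.
set limit states 10000
set timeout { adaptive.start 6000, adaptive.end 12000 }
set timeout { tcp.closing 60, tcp.finwait 30 }
```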

So, if you configure PF to use some of that, then change the httpd 
default for keep-alive and reduce it if need be, as well as making 
sysctl changes, you can make a system support a hell of a lot more 
traffic, but at the same time you can shoot yourself in the foot pretty 
badly and make it way worse as well. So, unless you really have to, and 
you truly understand each aspect of it, leaving it alone is best, and a 
simple PF configuration alone will carry you a very long way.


There is a lot that can be done; however, when you reach this level, one 
answer doesn't fit all, and it really depends on your setup.


Hope this helps answer your question.

Daniel



Re: : rouge IPs / user

2007-12-11 Thread knitti
On 12/11/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
[... snipped away a lot ...]
 There is a lot that can be done; however, when you reach this level, one
 answer doesn't fit all, and it really depends on your setup.

 Hope this helps answer your question.

It's not me having the problem, but I desire to understand it. AFAIK
HTTP keep-alives have nothing to do with it. If the socket is in
CLOSE_WAIT, the TCP connection can't be reused; the server
has sent its FIN and the client its FIN/ACK, but the server hasn't
yet sent its final ACK.
I can imagine some possibilities why this happens (some might
not be valid due to my lack of knowledge):
- the server didn't clean up its socket, so it stays there until the
process dies eventually
- the server does this to keep its socket (what I don't know: can
a socket be reused in any state?)


btw: I might be going off topic here, but I think it applies to
OpenBSD's httpd. I won't send any further mail to this thread
you tell me to shut up.

--knitti



Re: : rouge IPs / user

2007-12-11 Thread Daniel Ouellet

knitti wrote:

On 12/11/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
[... snipped away a lot ...]

There is a lot that can be done; however, when you reach this level, one
answer doesn't fit all, and it really depends on your setup.

Hope this helps answer your question.


It's not me having the problem, but I desire to understand it. AFAIK


I understand that, but you did ask a valid question on the state of 
the socket connection, and I tried to answer that. It wasn't directed at 
the previous guy who can't search on Google and asked for advice but 
refused very valid answers. Sorry if you feel I confused the two, but I 
didn't. It may not have been obvious in my writing, however.



HTTP keep-alives have nothing to do with it. If the socket is in
CLOSE_WAIT, the TCP connection can't be reused; the server
has sent its FIN and the client its FIN/ACK, but the server hasn't
yet sent its final ACK.


Well, actually it does under normal operation. See, if you get a 
connection from a user and have keep-alive set up, the socket will stay 
open to speed up the next request from the same user without having to 
establish a new connection, reusing the same socket for speed, but at 
the same time keeping that socket open and not yet ready to close for 
the next user. So, you see, with a longer keep-alive setting in httpd 
you will reach the CLOSE_WAIT state later, instead of sooner as with a 
shorter keep-alive setting. What I am explaining, maybe not as well as 
I would like, is the combined impact of PF and httpd, as well as the 
net.inet.tcp.* settings in sysctl. They all interact in some ways, and 
as such, as I also said, none of them should be looked at in isolation 
from the others.


Just as an example: if you set the keep-alive to 2 minutes instead of 
the 15-second default, you will use many more sockets and possibly end 
up running out of sockets, all depending on traffic, obviously.


Now, is the keep-alive from httpd the only party responsible for having 
sockets in CLOSE_WAIT? No, it is not. But it does play a role in there 
as well, in making more or fewer of them available.


What's important here is that the maximum number of TCP/IP sockets in 
the CLOSE_WAIT state cannot exceed the maximum number of TCP/IP sockets 
allowed to the web server, or in here httpd.


netstat -an can show you the state of the various sockets, or for a more 
limited display:


netstat -an | grep WAIT
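A quick way to summarize that output is to count sockets per state with awk; the sample text below stands in for real netstat -an output (addresses invented), since the exact columns vary by system:

```shell
# Count sockets per TCP state; $sample stands in for live `netstat -an` output.
sample='tcp  0  0  10.0.0.1.80  192.0.2.7.33211  CLOSE_WAIT
tcp  0  0  10.0.0.1.80  192.0.2.8.44122  ESTABLISHED
tcp  0  0  10.0.0.1.80  192.0.2.9.55133  CLOSE_WAIT'
printf '%s\n' "$sample" |
    awk '/^tcp/ { count[$NF]++ } END { for (s in count) print s, count[s] }'
# Against a live system:
#   netstat -an | awk '/^tcp/ { count[$NF]++ } END { for (s in count) print s, count[s] }'
```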


I can imagine some possibilities why this happens (some might
not be valid due to my lack of knowledge):
- the server didn't clean up its socket, so it stays there until the
process dies eventually


It will clean it up eventually, or it can maybe be forced with some 
directive in httpd about socket usage; I can't recall right this instant 
and I would need to look. I may be confusing two things here as well, 
but it might be possible to do it. Not sure. I wonder if 
net.inet.tcp.keepidle, or something similar, wouldn't actually affect it 
here. I would think so, but I could be wrong.


I think the CLOSE_WAIT state and its duration are a function of the OS 
stack, not the application itself, in this case httpd. I could be wrong 
here, and I would love for someone to correct me if I do not understand 
it properly. But my understanding is that this is controlled by the OS, 
not the application itself, other than the keep-alive, obviously, in 
this case.



- the server does this to keep its socket (what I don't know: can
a socket be reused in any state?)


No, it can't. See above. You are limited by the MaxClients directive in 
httpd anyway, as far as the web server is concerned here. You sure can 
increase that above the compiled-in maximum of 256 if you change it in 
the include file and recompile, but again, that should only be done on 
very busy servers.



btw: I might be going off topic here, but I think it applies to
OpenBSD's httpd. I won't send any further mail to this thread
you tell me to shut up.


I didn't do any such thing. The original poster, however, should/may 
take the advice, or drop it. (;


I actually find it interesting: not the original subject, but where it 
was/is going.


Daniel



Re: : rouge IPs / user

2007-12-11 Thread knitti
On 12/12/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
 knitti wrote:
  HTTP keep-alives have nothing to do with it. If the socket is in
  CLOSE_WAIT, the TCP connection can't be reused; the server
  has sent its FIN and the client its FIN/ACK, but the server hasn't
  yet sent its final ACK.

 Well, actually it does under normal operation. See, if you get a
 connection from a user and have keep-alive set up, the socket will stay
 open to speed up the next request from the same user without having to
 establish a new connection, reusing the same socket for speed, but at
 the same time keeping that socket open and not yet ready to close for
 the next user. So, you see, with a longer keep-alive setting in httpd
 you will reach the CLOSE_WAIT state later, instead of sooner as with a
 shorter keep-alive setting. What I am explaining, maybe not as well as
 I would like, is the combined impact of PF and httpd, as well as the
 net.inet.tcp.* settings in sysctl. They all interact in some ways, and
 as such, as I also said, none of them should be looked at in isolation
 from the others.
[...]
 I think the CLOSE_WAIT state and its duration are a function of the OS
 stack, not the application itself, in this case httpd. I could be wrong
 here, and I would love for someone to correct me if I do not understand
 it properly. But my understanding is that this is controlled by the OS,
 not the application itself, other than the keep-alive, obviously, in
 this case.


you tell me that there is some correlation between HTTP keep alives and
a socket ending up in CLOSE_WAIT for some time. That is the practical
observation. But I'm interested in whether this is by design or not.
RFC 2616 doesn't mention implementation details, and I can't see why
the socket implementation (OS) would want to keep a socket in
CLOSE_WAIT for some time (not sending a final ACK).

  btw: I might be going off topic here, but I think it applies to
  OpenBSD's httpd. I won't send any further mail to this thread
  you tell me to shut up.

 I didn't do such thing. The original poster however should/may take the
 advice, or drop it. (;

sorry for the confusion, I forgot to write an "if" after "thread"

--knitti



Re: : rouge IPs / user

2007-12-11 Thread Daniel Ouellet

knitti wrote:

you tell me that there is some correlation between HTTP keep alives and
a socket ending up in CLOSE_WAIT for some time. That is the practical
observation. But I'm interested in whether this is by design or not.
RFC 2616 doesn't mention implementation details, and I can't see why
the socket implementation (OS) would want to keep a socket in
CLOSE_WAIT for some time (not sending a final ACK).


No. I am saying that there is a direct relation between the socket not 
being available and the value assigned to the keep-alive: it makes the 
socket take more time to reach the CLOSE_WAIT state, and as such reduces 
the number of sockets you can use and, as a side effect of this, limits 
the number of users httpd can handle.


As to the second part of that question, meaning after it reaches 
CLOSE_WAIT, how long does it stay in it? I think, and that's where my 
knowledge and understanding are lacking some, that it is at that point 
an OS matter, and as such may be adjustable by some OS variable, not an 
application one, at that time.


See, the difference is that the creation, usage and destruction of 
sockets are an application function, but all the signaling and handling 
of them is an OS function. At a minimum, that's how I understand it, and 
as such, when you reach the CLOSE_WAIT state, that's not under the 
application layer's control anymore, but the OS's, and as such it can be 
helped by OS changes.


I may be wrong here and if so, I would love for someone to correct that 
for me, but that's how I understand it.


The creation, usage and closing of the socket itself are application 
related, but the signaling, etc. is a function of the TCP/IP stack under 
the OS's control, and this CLOSE_WAIT state is under the TCP/IP stack's 
control and as such not an application issue, but an OS control factor 
that may be helped some, and only if needed under heavy traffic, as 
otherwise the defaults are as good as it gets.


I hope this makes it more clear; that's my own understanding, or lack 
thereof, anyway.


Maybe I make a fool of myself here (it wouldn't be the first time, and I 
only learn by extending myself out and learning from my mistakes), but 
that is what I understand, thinking about it now.


So, that's why I pointed out the three parts that would/could help in this case.

Best,

Daniel



Re: : rouge IPs / user

2007-12-11 Thread Daniel Ouellet

knitti wrote:

you tell me that there is some correlation between HTTP keep alives and
a socket ending up in CLOSE_WAIT for some time. That is the practical
observation. But I'm interested in whether this is by design or not.
RFC 2616 doesn't mention implementation details, and I can't see why
the socket implementation (OS) would want to keep a socket in
CLOSE_WAIT for some time (not sending a final ACK).


One more thing I also forgot to add, or maybe it didn't come across as 
clearly as it should have.


If you put pf in front of it and use it to proxy the connections, it 
will only pass to httpd the connections that are real, and as such save 
you sockets that httpd would otherwise have to manage and that would 
end up in CLOSE_WAIT.


Why? Let's say someone doesn't like your site and sends you a bunch of 
fake connection initiations to occupy all your sockets, and as such 
makes your site totally useless.


You can increase the number of connections httpd can support, recompile 
it and use it; or, a much more logical and practical way, is to use pf 
to actually filter these connections and avoid in the first place the 
problem with the limit httpd has by default.


If someone tries to establish a connection to httpd directly, then it 
will use a socket even if httpd can't reply to the forged source, and as 
such use your resources and, I guess, end up in the CLOSE_WAIT state 
waiting for the final ACK that will never come, as it is a fake source.


However, putting PF in front of it, your httpd wouldn't suffer this 
depletion of the sockets it can use.


Now, adjusting the TCP stack values would/could then improve the time 
sockets stay in this CLOSE_WAIT state.


So, they are all connected whichever way or angle you try to look at it.

Keep-alive, the max spare connections, etc., for speed, and the time 
delay before the httpd application releases that socket back to the OS.


PF handles the fake/forged TCP connections that would otherwise occupy 
your available httpd sockets, which would then need to do the full cycle 
of open, wait and close based on the preset delays, and may stay open 
for way more time than you want, as they will wait forever on the ACK 
from the fake source.


And the TCP stack variables make more or fewer of them (sockets) 
available sooner or later.


So, that's how each one interacts with the others in many ways.

Hopefully I didn't make more of a mess than there already was, and made 
it a little bit clearer. That's my intent anyway.


Hope it helps you anyhow.

Best,

Daniel



Re: rouge IPs / user

2007-12-10 Thread Raimo Niskanen
I have a related problem, but I am not sure if the source
IPs are nasty computers or just...

# lsof -ni:www 
shows me lots of connections hanging in state CLOSE_WAIT
from some hosts (often in China). These used to eat all
sockets for httpd. Now I have a max-src-conn limit so
it is not a real problem any more.
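The max-src-conn limit described here might be sketched in pf.conf along these lines; the table name, macros, and numbers are invented for illustration:

```
# Limit concurrent connections per source and quarantine offenders.
table <abusers> persist
block in quick proto tcp from <abusers> to any port www
pass in on $ext_if proto tcp from any to any port www \
    flags S/SA keep state \
    (max-src-conn 50, max-src-conn-rate 25/10, overload <abusers> flush)
```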

I now also log hosts that succeed in getting many
sockets in CLOSE_WAIT, and they are still there.

What do the gurus say? What can I do about these hosts?



On Fri, Dec 07, 2007 at 09:51:52AM -0800, badeguruji wrote:
 I am getting constant hacking attempt into my computer
 from following IPs. Although, I have configured my ssh
 config and tcp-wrappers to deny such attempts. But I
 wish some expert soul in this community 'fix' this
 rouge hacker for ever, for everyones good.
 
 This hacker could be spoofing the IPs, but i have only
 the IPs in my message logs(and a url)...
 
 218.6.16.30
 195.187.33.66
 202.29.21.6
 60.28.201.57
 218.24.162.85
 wpc4643.amenworld.com
 202.22.251.23
 219.143.232.131
 220.227.218.21
 124.30.42.36
 
 -for community.
 
 -BG
 
 
 ~~Kalyan-mastu~~

-- 

/ Raimo Niskanen, Erlang/OTP, Ericsson AB



Re: rouge IPs / user

2007-12-10 Thread Joel Wiramu Pauling
Tip.

Don't allow password challenge. Problem solved. Just use key'd ssh and this
problem disappears.


On 11/12/2007, Raimo Niskanen [EMAIL PROTECTED] wrote:

 I have a related problem, but I am not sure if the source
 IPs are nasty computers or just...

 # lsof -ni:www
 shows me lots of connections hanging in state CLOSE_WAIT
 from some hosts (often in China). These used to eat all
 sockets for httpd. Now I have a max-src-conn limit so
 it is not a real problem any more.

 I now also log hosts that succeed in getting many
 sockets in CLOSE_WAIT, and they are still there.

 What do the gurus say? What can I do about these hosts?



 On Fri, Dec 07, 2007 at 09:51:52AM -0800, badeguruji wrote:
  I am getting constant hacking attempt into my computer
  from following IPs. Although, I have configured my ssh
  config and tcp-wrappers to deny such attempts. But I
  wish some expert soul in this community 'fix' this
  rouge hacker for ever, for everyones good.
 
  This hacker could be spoofing the IPs, but i have only
  the IPs in my message logs(and a url)...
 
  218.6.16.30
  195.187.33.66
  202.29.21.6
  60.28.201.57
  218.24.162.85
  wpc4643.amenworld.com
  202.22.251.23
  219.143.232.131
  220.227.218.21
  124.30.42.36
 
  -for community.
 
  -BG
 
  
  ~~Kalyan-mastu~~

 --

 / Raimo Niskanen, Erlang/OTP, Ericsson AB



Re: rouge IPs / user

2007-12-07 Thread Daniel Ouellet

badeguruji wrote:

I am getting constant hacking attempt into my computer
from following IPs. Although, I have configured my ssh
config and tcp-wrappers to deny such attempts. But I
wish some expert soul in this community 'fix' this
rouge hacker for ever, for everyones good.



Not sure that I understand what you are asking.

Just put these IP's in your pf configuration and then forget about it.

That's all there is to it.



rouge IPs / user

2007-12-07 Thread badeguruji
I am getting constant hacking attempts into my computer
from the following IPs. Although I have configured my ssh
config and tcp-wrappers to deny such attempts, I
wish some expert soul in this community would 'fix' this
rogue hacker for ever, for everyone's good.

This hacker could be spoofing the IPs, but I have only
the IPs in my message logs (and a url)...

218.6.16.30
195.187.33.66
202.29.21.6
60.28.201.57
218.24.162.85
wpc4643.amenworld.com
202.22.251.23
219.143.232.131
220.227.218.21
124.30.42.36

-for community.

-BG


~~Kalyan-mastu~~



Re: rouge IPs / user

2007-12-07 Thread Greg Thomas
On Dec 7, 2007 10:03 AM, Daniel Ouellet [EMAIL PROTECTED] wrote:
 badeguruji wrote:
  I am getting constant hacking attempt into my computer
  from following IPs. Although, I have configured my ssh
  config and tcp-wrappers to deny such attempts. But I
  wish some expert soul in this community 'fix' this
  rouge hacker for ever, for everyones good.


 Not sure that I understand what you are asking.

 Just put these IP's in your pf configuration and then forget about it.

 That's all there is to it.

He's already been told that in this previous thread:

[plz. help] constant attack from: 201.244.17.162, 222.231.60.88,
82.207.116.209

Greg
-- 
Bicycle ride in the low desert:
http://lodesertprotosites.org/pokerrun/pokerrun.html

Dethink to survive - Mclusky



Re: rouge IPs / user

2007-12-07 Thread Nick Guenther
On 12/7/07, badeguruji [EMAIL PROTECTED] wrote:
 Steve, you were able to understand my concern/wish.
 Aren't all security experts, just building their own
 islands with the problem [of unsecure space] remaining
 as it always was? we should try to build a secure
 'atmosphere' where 'clouds of all colors/density' can
 freely glide with less caution in mind? A frame-work
 for internet security like Java, where all different
 kind of web-servers(and all other apps for that
 matter) can concentrate on their job, rather then
 worrying about security - is needed.

See, that requires trusting that the other 'security experts' are actually
being honest and working for each other's benefit... but if that system
isn't secure, how do you distinguish a 'security expert' from an
'infiltrator'?
You *must* have decentralized systems/methods for this. There's no way
to combine data together; the best you can do is share techniques
which you can verify with your own logic -- except for blacklists like
SPEWS, and even then there are all sorts of politics and troubles.

-Nick



Re: rouge IPs / user

2007-12-07 Thread new_guy
badeguruji wrote:
 
 I am getting constant hacking attempt into my computer
 from following IPs. Although, I have configured my ssh...
 

This is so common that we ignore it at Virginia Tech. Some days, we log 20k
- 30k ssh brute force attempts... I'd like to track 'em down and string 'em
up too, but I've got better things to do and really, it's quite harmless :)

-- 
View this message in context: 
http://www.nabble.com/rouge-IPs---user-tf4963521.html#a14225107
Sent from the openbsd user - misc mailing list archive at Nabble.com.



Re: rouge IPs / user

2007-12-07 Thread Insan Praja SW

On Sat, 08 Dec 2007 04:05:34 +0700, Unix Fan [EMAIL PROTECTED] wrote:

I think this is the second time you've posted something similar to  
this... I have news for you




Everyone gets such traffic in their logs.. from DoS'ers and other  
mischievous individuals..




There really isn't much you can do about it either, and if you report  
back to each IP's abuse email.. chances are it originated from some 80  
year old grandmothers trojan infected computer.




Just use sane firewall rules... only enable services you need, and suck  
it up!! ;)




-Nix Fan.

 You could try to sinkhole them: direct their traffic to lo0, lo1, lo2  
or whatever you want. Handling such attacks is everyday life for a net  
admin anyway... There are a lot of tutorials/best practices on the net.  
You might know better than me.

Cheers

Insan



Re: rouge IPs / user

2007-12-07 Thread Unix Fan
I think this is the second time you've posted something similar to this... I 
have news for you



Everyone gets such traffic in their logs.. from DoS'ers and other mischievous 
individuals..



There really isn't much you can do about it either, and if you report back to 
each IP's abuse email.. chances are it originated from some 80-year-old 
grandmother's trojan-infected computer.



Just use sane firewall rules... only enable services you need, and suck it up!! 
;)



-Nix Fan.



Re: rouge IPs / user

2007-12-07 Thread Steve Shockley

STeve Andre' wrote:

The one time I did send mail to an ISP was when one little
vandal developed an inordinate fondness for the web server,
and hit it 110,000 times in a week.  Fortunately the ISP did
do something about that one.  But the lice, I don't think you
can do anything about, unless you consider it a hobby.


For fun, I like to make the NT4 Option Pack default server web page 
the index.html on some OpenBSD servers I set up.  It's fun to watch and 
see who's paying attention to their logs when they start trying 
8-year-old exploitz.




Re: rouge IPs / user

2007-12-07 Thread STeve Andre'
On Friday 07 December 2007 12:51:52 badeguruji wrote:
 I am getting constant hacking attempt into my computer
 from following IPs. Although, I have configured my ssh
 config and tcp-wrappers to deny such attempts. But I
 wish some expert soul in this community 'fix' this
 rouge hacker for ever, for everyones good.

 This hacker could be spoofing the IPs, but i have only
 the IPs in my message logs(and a url)...

 218.6.16.30
 195.187.33.66
 202.29.21.6
 60.28.201.57
 218.24.162.85
 wpc4643.amenworld.com
 202.22.251.23
 219.143.232.131
 220.227.218.21
 124.30.42.36

 -for community.

 -BG

It isn't going to happen.  For one thing, it's very likely that 
several people are involved, probing your network.  Last
year my web server was getting hit once a second for about
two days, the efforts of at least 20 different creatures probing
around.  What are you going to do about that?  I consider
these people net lice.

The one time I did send mail to an ISP was when one little
vandal developed an inordinate fondness for the web server,
and hit it 110,000 times in a week.  Fortunately the ISP did
do something about that one.  But the lice, I don't think you
can do anything about, unless you consider it a hobby.

--STeve Andre'



Re: rouge IPs / user

2007-12-07 Thread Nick Guenther
On Dec 7, 2007 1:03 PM, Daniel Ouellet [EMAIL PROTECTED] wrote:
 badeguruji wrote:
  I am getting constant hacking attempt into my computer
  from following IPs. Although, I have configured my ssh
  config and tcp-wrappers to deny such attempts. But I
  wish some expert soul in this community 'fix' this
  rouge hacker for ever, for everyones good.

 Not sure that I understand what you are asking.

I think he's advocating e-violence of some sort?
Hahahahahahah.



Re: rouge IPs / user

2007-12-07 Thread Jon Radel
badeguruji wrote:

 And seriously, 'anything' in self-defense is not
 violence (or e-violence) - I am not going in hackers'
 territory to teach him a lesson, i am only trying to
 build a wall [by asking the experts] which can save
 all those who are NOT-hacking into other people's
 computers, and want to operate in a secure environment
 (with-in those walls)

How can you prove that you aren't attempting to social engineer us
into launching a denial of service attack against some perfectly
innocent net lice?  Think about the model a bit more.

--Jon Radel
[EMAIL PROTECTED]



Re: rouge IPs / user

2007-12-07 Thread Axton
On Dec 7, 2007 12:51 PM, badeguruji [EMAIL PROTECTED] wrote:
 I am getting constant hacking attempt into my computer
 from following IPs. Although, I have configured my ssh
 config and tcp-wrappers to deny such attempts. But I
 wish some expert soul in this community 'fix' this
 rouge hacker for ever, for everyones good.

 This hacker could be spoofing the IPs, but i have only
 the IPs in my message logs(and a url)...

 218.6.16.30
 195.187.33.66
 202.29.21.6
 60.28.201.57
 218.24.162.85
 wpc4643.amenworld.com
 202.22.251.23
 219.143.232.131
 220.227.218.21
 124.30.42.36

 -for community.

 -BG

 
 ~~Kalyan-mastu~~



Afraid it's a fact of life when running things on the open net.  Don't
worry about it.  Make sure the way you authenticate to ssh isn't weak.
 I use key based authentication and don't use passwords.  This gives
me peace of mind.  It's a bit harder to guess and I don't have to
worry about accounts with weak passwords.  I also only allow specific
users to authenticate to ssh.  The DoS hits I get periodically are the
ones that bother me.

Axton Grams



Re: rouge IPs / user

2007-12-07 Thread badeguruji
Thanks guys.

Steve, you were able to understand my concern/wish.

Yes, I have posted the same issue earlier; that time I
was looking for a solution for 'myself', this time I
wish something could be done 'for everyone', so I
publicized the IPs the hacker ('net lice') was coming from.

I was advised to use pf, but right now a simple
ssh config and hosts.allow/deny is serving me fine. I
will learn and use pf in due course.

And seriously, 'anything' in self-defense is not
violence (or e-violence). I am not going into the hacker's
territory to teach him a lesson; I am only trying to
build a wall [by asking the experts] which can save
all those who are NOT hacking into other people's
computers and want to operate in a secure environment
(within those walls)

Aren't all security experts just building their own
islands, with the problem [of unsecure space] remaining
as it always was? Shouldn't we try to build a secure
'atmosphere' where 'clouds of all colors/density' can
freely glide with less caution in mind? A framework
for internet security like Java, where all different
kinds of web servers (and all other apps for that
matter) can concentrate on their job, rather than
worrying about security, is needed.

thank you.

-BG

--- Nick Guenther [EMAIL PROTECTED] wrote:

 On Dec 7, 2007 1:03 PM, Daniel Ouellet
 [EMAIL PROTECTED] wrote:
  badeguruji wrote:
   I am getting constant hacking attempt into my
 computer
   from following IPs. Although, I have configured
 my ssh
   config and tcp-wrappers to deny such attempts.
 But I
   wish some expert soul in this community 'fix'
 this
   rouge hacker for ever, for everyones good.
 
  Not sure that I understand what you are asking.
 
 I think he's advocating e-violence of some sort?
 Hahahahahahah.
 
 




~~Kalyan-mastu~~