Re: thoughts on kernel security issues

2005-01-27 Thread Jesse Pollard
On Thursday 27 January 2005 11:18, Zan Lynx wrote:
> On Thu, 2005-01-27 at 10:37 -0600, Jesse Pollard wrote:
>
> >
> > > > Unfortunately, there will ALWAYS be a path, either direct, or
> > > > indirect between the secure net and the internet.
> > >
> > > Other than letting people use secure computers after they have seen the
> > > Internet, a good setup has no indirect paths.
> >
> > Ha. Hahaha...
> >
> > Reality bites.
>
> In the reality I'm familiar with, the defense contractor's secure
> projects building had one entrance, guarded by security guards who were
> not cheap $10/hr guys, with strict instructions.  No computers or
> computer media were allowed to leave the building except with written
> authorization of a corporate officer.  The building was shielded against
> Tempest attacks and verified by the NSA.  Any computer hardware or media
> brought into the building for the project was physically destroyed at
> the end.
>

And you are assuming that everybody follows the rules.

When a PHB, whether military or not (and not a contractor), comes in and
says "... I don't care what it takes... get that data over there NOW..."
guess what - it gets done. Even if it is "less secure" in the process.

Oh - and about that "physically destroyed" - that used to be true.

Until it was pointed out to them that destruction of 300TB of data
media would cost them about $2 million.

Suddenly, erasing became popular. And sufficient. The media was then
reused in a non-secure facility, operated by the same CO.

> Secure nets _are_ possible.

Yes they are. But they are NOT reliable.
Don't ever assume a "secure" network really is.

All it means is: "as secure as we can manage".
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: thoughts on kernel security issues

2005-01-27 Thread Jesse Pollard
On Wednesday 26 January 2005 13:56, Bill Davidsen wrote:
> On Wed, 26 Jan 2005, Jesse Pollard wrote:
> > On Tuesday 25 January 2005 15:05, linux-os wrote:
> > > This isn't relevant at all. The Navy doesn't have any secure
> > > systems connected to a network to which any hackers could connect.
> > > The TDRS communications satellites provide secure channels
> > > that are disassembled on-board. Some ATM-slot, after decryption
> > > is fed to a LAN so the sailors can have an Internet connection
> > > for their lap-tops. The data took the same paths, but it's
> > > completely independent and can't get mixed up no matter how
> > > hard a hacker tries.
> >
> > Obviously you didn't hear about the secure network being hit by the "I
> > love you" virus.
> >
> > The Navy doesn't INTEND to have any secure systems connected to a network
> > to which any hackers could connect.
>
> What's hard about that? Matter of physical network topology, absolutely no
> physical connection, no machines with a 2nd NIC, no access to/from I'net.
> Yes, it's a PITA, add logging to a physical printer which can't be erased
> if you want to make your CSO happy (corporate security officer).

And you are ASSUMING the connection was authorized. I can assure you that 
there are about 200 (more or less) connections from the secure net to the
internet expressly for the purpose of transferring data from the internet
to the secure net for analysis. And not ALL of these connections are
authorized. Some are done via sneakernet; others by running a cable ("I need
the data NOW... I'll just disconnect afterward...") that is not visible
for very long. Still other connections are made by picking up a system and
carrying it from one connection to another (a version of sneakernet, though
here it sometimes needs a hand cart).

> > Unfortunately, there will ALWAYS be a path, either direct, or indirect
> > between the secure net and the internet.
>
> Other than letting people use secure computers after they have seen the
> Internet, a good setup has no indirect paths.

Ha. Hahaha...

Reality bites.

> > The problem exists. The only way to protect is to apply layers of protection.
> >
> > And covering the possible unknown errors is a good way to add protection.


Re: thoughts on kernel security issues

2005-01-26 Thread Jesse Pollard
On Tuesday 25 January 2005 15:05, linux-os wrote:
> On Tue, 25 Jan 2005, John Richard Moser wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA1
> >
[snip]
> > In this context, it doesn't make sense to deploy a protection A or B
> > without the companion protection, which is what I meant.  You're
> > thinking of fixing specific bugs; this is good and very important (as
> > effective proactive security BREAKS things that are buggy), but there is
> > a better way to create a more secure environment.  Fixing the bugs
> > increases the quality of the product, while adding protections makes
> > them durable enough to withstand attacks targetting their own flaws.
>
> Adding protections for which no known threat exists is a waste of
> time, effort, and adds to the kernel size. If you connect a machine
> to a network, it can always get hit with so many broadcast packets
> that it has little available CPU time to do useful work. Do we
> add a network throttle to avoid this? If so, then you will hurt
> somebody's performance on a quiet network. Everything done in
> the name of "security" has its cost. The cost is almost always
> much more than advertised or anticipated.
>
> > Try reading through (shameless plug)
> > http://www.ubuntulinux.org/wiki/USNAnalysis and then try to understand
> > where I'm coming from.
>
> This isn't relevant at all. The Navy doesn't have any secure
> systems connected to a network to which any hackers could connect.
> The TDRS communications satellites provide secure channels
> that are disassembled on-board. Some ATM-slot, after decryption
> is fed to a LAN so the sailors can have an Internet connection
> for their lap-tops. The data took the same paths, but it's
> completely independent and can't get mixed up no matter how
> hard a hacker tries.

Obviously you didn't hear about the secure network being hit by the "I love 
you" virus.

The Navy doesn't INTEND to have any secure systems connected to a network to 
which any hackers could connect.

Unfortunately, there will ALWAYS be a path, either direct, or indirect between
the secure net and the internet.

The problem exists. The only way to protect is to apply layers of protection.

And covering the possible unknown errors is a good way to add protection.


Re: bzImage, root device Q

2001-07-20 Thread Jesse Pollard

On Fri, 20 Jul 2001, D. Stimits wrote:
>When booting to a bzImage kernel, bytes 508 and 509 can be used to name
>the minor and major number of the intended root device (although it can
>be overridden with a command line parameter). Other characteristics are
>also available this way, through bytes in the kernel. rdev makes a
>convenient way to hex edit those bytes.
>
>What I'm more curious about is how does the kernel know what filesystem
>_type_ the root is? Are there similar bytes in the bzImage, and can rdev
>change this? And is there a command line syntax to allow specifying
>filesystem type (e.g., something like "vmlinuz root=/dev/scd0,iso9660"
>or "vmlinuz root=/dev/scd0,xfs")? Or is this limited in some way,
>requiring mount on one or a few known filesystem types ("linux native"
>subset comes to mind), followed by a chroot or pivot_root style command
>(which in turn means no direct root mount of some filesystem types)?

Take a look at fs/super.c - function mount_root().

It reads the filesystem superblock (from the root device specified by
major/minor) and determines the filesystem type from that. There is a loop
that cycles through all known (i.e. built-in) filesystems until one works.

If none do, then it panics.

-- 
-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: Uncle Sam Wants YOU!

2001-07-02 Thread Jesse Pollard

"Jim Roland" <[EMAIL PROTECTED]>:
> From: "Jesse Pollard" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>; "Kurt Maxwell Weber" <[EMAIL PROTECTED]>; "J Sloan"
> <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Sunday, July 01, 2001 3:03 PM
> Subject: Re: Uncle Sam Wants YOU!
> 
> 
> [snip]
> > >In that case, I have the following options:
> > >1) Start my own ISP
> >
> > Only if the upstream provider doesn't require you to use windows.
> >
> > >2) Use Windows XP
> > >3) Not use Windows XP and not be able to use my current ISP
> > >4) Go to a different ISP
> >
> > You may not be able to find another. It took me a year. I gave up. I was
> > fortunate that Verio doesn't care what you have... though if you use
> > the dialup or basic dsl, MS is it, or no real support.
> >
> > >I'll just have to decide which I value more.  As long as I won't be
> killed
> > >for using a different OS, I still have a choice.
> >
> > No, but you might be forced out of a job.
> 
> In one of the large metro areas in which I live, there are a LOT of ISPs
> that do not require you to use Windows, but will not support you beyond the
> IP layer if you don't.  Use linux, install PPP with MS-CHAPv2 (with or
> without MPPE) for your dialup connection and it works just fine on a
> Winblows-only ISP.  DSL or Cable, just acquire your actual IP settings
> program a Linksys router/hub box and be done with it.

Better re-read the fine print in the "fair use" statement. Both DSL and
cable, and dialup (in New Orleans at least), will disconnect you if you run
ANY unattended operation (if they determine it IS unattended). No daemon
services. No routing/NAT (unless they do it). No remote login. No mail. DHCP
reconfig every 4 to 8 hours (or whenever they choose).

They will let you plug in, but will not provide any support (even TCP/IP is
not assured).

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: Uncle Sam Wants YOU!

2001-07-01 Thread Jesse Pollard

On Sun, 01 Jul 2001, Jesse Pollard wrote:
>On Sun, 01 Jul 2001, Kurt Maxwell Weber wrote:
>>I'll just have to decide which I value more.  As long as I won't be killed 
>>for using a different OS, I still have a choice.
>
>No, but you might be forced out of a job.

Apologies for the followup to a followup:

You might be, if the life monitoring sensors in surgery suddenly need
to be "registered with MS, or ... will be shut down..."  ;-)

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: Uncle Sam Wants YOU!

2001-07-01 Thread Jesse Pollard

On Sun, 01 Jul 2001, Kurt Maxwell Weber wrote:
>On Sunday 01 July 2001 13:48, you wrote:
>> Kurt Maxwell Weber wrote:
>> > I'm going to take a break from lurking to point out that I am not
>> > dissatisfied with Windows.  It has its uses, as do Linux (and NetBSD, and
>> > Solaris, and the other operating systems I have installed at home). 
>> > Frankly, I don't have a problem with Microsoft.  If I don't like their
>> > product, I'm free to choose not to use it.
>>
>> You do understand, don't you, that microsoft is
>> working frantically to take that choice away from
>> you? it's easy to sit back and say it doesn't affect
>> you, til one day you realize that you can't connect
>> to your ISP unless you are running windows xp.
>>
>> Then it hits you, and it's too late.
>
>In that case, I have the following options:
>1) Start my own ISP

Only if the upstream provider doesn't require you to use windows.

>2) Use Windows XP
>3) Not use Windows XP and not be able to use my current ISP
>4) Go to a different ISP

You may not be able to find another. It took me a year. I gave up. I was
fortunate that Verio doesn't care what you have... though if you use
the dialup or basic dsl, MS is it, or no real support.

>I'll just have to decide which I value more.  As long as I won't be killed 
>for using a different OS, I still have a choice.

No, but you might be forced out of a job.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: [Re: gcc: internal compiler error: program cc1 got fatal signal 11]

2001-06-29 Thread Jesse Pollard

> --- Jesse Pollard <[EMAIL PROTECTED]> wrote:
> > > "This is almost always the result of flakiness in your hardware -
> > > either RAM (most likely), or motherboard (less likely)."
> > >
> > > I cannot understand this. There are many other stuffs that I
> > > compiled with gcc without any problem. Again compilation is only
> > > a application. It only parse and gernerates object files. How can
> > > RAM or motherboard makes different
> >
> > It's most likely flaky memory.
> >
> > Remember - a single bit that drops can cause the signal 11. It doesn't
> > have to happen consistently either. I had the same problem until I
> > slowed down memory access (that seemed to cover the borderline chip).
> >
> > The compiler uses different amounts of memory depending on the source
> > file, number of symbols defined (via include headers). When the
> > multiple passes occur simultaneously, there is higher memory pressure,
> > and more of the free space used. One of the pages may flake out.
> > Compiling the kernel puts more pressure on memory than compiling most
> > applications.
>
> Almost always ?
> It seems like gcc is THE ONLY program which gets signal 11
> Why the X server doesn't get signal 11 ?
> Why others programs don't get signal 11 ?

Load the system down with lots of processes/large image windows. Unless the
bit in question is in a pointer, or in data used in pointer arithmetic or a
function call, it won't segfault. Applications (if an instruction page gets
hit) may get an illegal instruction instead.

> I remember that once Bill Gates was asked about
> crashes in windows and he said: It's a hardware
> problem.
> It was also a joke on that subject:
> Winerr xxx: Hardware problem (it's not our fault, it's
> not, it's not, it's not, it's not...)

Yup - because it crashed VERY frequently when it was obviously a
software bug.

> Seems to me like Micro$oft way of handling problems.
> 
> We must agree that gcc is full of bugs (xanim does not run corectly if it
> is compiled with gcc 2.95.3, and other programs which use floating point
> calculations do the same (spice 3f5))

Generating wrong code is different from a segfault.

Currently I'm using egcs-2.91.66 on a 486, without problems.
(I don't do floating point on a 486... too slow).

> Some time ago I installed Linux (Redhat 6.0) on my 
> pc (Cx486 8M RAM) and gcc had a lot of signal 11 (a
> couple every hour) I was upgrading
> the kernel every time there was a new kernel and
> from 2.2.12(or 14) no more signal 11 (very rare)
> Is this still a hardware problem ?
> Was a bug in kernel ?

Not likely - it could just depend on whether all of available memory
was used. If the physical page with the problem doesn't get used
very often, it won't show up. If the bit in question is not part
of a pointer, or used in pointer arithmetic (actually, any operation
on addresses), again it won't show up. Wrong, or slightly wrong,
results MAY show up instead.

> I think the last answer is more obvious.(or the gcc
> had a bug and the kernel -- a workaround).
> 
> Sorry for bothering you but in every piece of linux
> documentation signal 11 seems to be __identic__ with
> hardware problem.
> Bye

Only when it appears in random locations.

GCC is a fairly well debugged program and doesn't segfault
unless you run out of memory or have flaky memory.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: [Re: gcc: internal compiler error: program cc1 got fatal signal 11]

2001-06-29 Thread Jesse Pollard

> 
> 
> "This is almost always the result of flakiness in your hardware - either
> RAM (most likely), or motherboard (less likely).  "
>  
>   I cannot understand this. There are many other
> stuffs that I compiled with gcc without any problem. Again compilation is only
> a application. It  only parse and gernerates object files. How can RAM or
> motherboard makes different

It's most likely flaky memory.

Remember - a single bit that drops can cause the signal 11. It doesn't have
to happen consistently either. I had the same problem until I slowed down
memory access (that seemed to cover the borderline chip).

The compiler uses different amounts of memory depending on the source file,
number of symbols defined (via include headers). When the multiple passes
occur simultaneously, there is higher memory pressure, and more of the
free space used. One of the pages may flake out. Compiling the kernel
puts more pressure on memory than compiling most applications.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [Re: gcc: internal compiler error: program cc1 got fatal signal 11]

2001-06-29 Thread Jesse Pollard

-  Received message begins Here  -

 
 
 --- Jesse Pollard [EMAIL PROTECTED] wrote:
   
   This is almost always the result of flakiness in your hardware - either
   RAM (most likely), or motherboard (less likely).
  
  I cannot understand this. There are many other
  stuffs that I compiled with gcc without any problem. Again compilation is only
  a application. It  only parse and gernerates object files. How can RAM or
  motherboard makes different
  
  It's most likely flaky memory.
  
  Remember: a single dropped bit can cause the signal 11. It doesn't have
  to happen consistently, either. I had the same problem until I slowed down
  memory access (that seemed to cover for the borderline chip).
  
  The compiler uses different amounts of memory depending on the source file
  and the number of symbols defined (via include headers). When the multiple
  passes run simultaneously, there is higher memory pressure, and more of the
  free space is used. One of the pages may flake out. Compiling the kernel
  puts more pressure on memory than compiling most applications.
  
 -
  Jesse I Pollard, II
  Email: [EMAIL PROTECTED]
  
  Any opinions expressed are solely my own.
  -
  To unsubscribe from this list: send the line unsubscribe linux-kernel in
  the body of a message to [EMAIL PROTECTED]
  More majordomo info at  http://vger.kernel.org/majordomo-info.html
  Please read the FAQ at  http://www.tux.org/lkml/
 
 Almost always ?
 It seems like gcc is THE ONLY program which gets signal 11
 Why the X server doesn't get signal 11 ?
 Why others programs don't get signal 11 ?

Load the system down with lots of processes/large image windows. Unless
the bit in question is in a pointer, or in data used in pointer arithmetic
or a function call, it won't segfault. Applications may instead get an
illegal instruction (if an instruction page gets hit).

 I remember that once Bill Gates was asked about
 crashes in windows and he said: It's a hardware
 problem.
 It was also a joke on that subject:
 Winerr xxx: Hardware problem (it's not our fault, it's
 not, it's not, it's not, it's not...)

Yup - because it crashed VERY frequently when it was obviously a
software bug.

 Seems to me like Micro$oft way of handling problems.
 
 We must agree that gcc is full of bugs (xanim does not
 
 run corectly if it is compiled with gcc 2.95.3 
 and other programs which use floating point
 calculations do the same (spice 3f5))

Generating wrong code is a different problem from a segfault.

Currently I'm using egcs-2.91.66 on a 486, without problems.
(I don't do floating point on a 486... too slow).

 Some time ago I installed Linux (Redhat 6.0) on my 
 pc (Cx486 8M RAM) and gcc had a lot of signal 11 (a
 couple every hour) I was upgrading
 the kernel every time there was a new kernel and
 from 2.2.12(or 14) no more signal 11 (very rare)
 Is this still a hardware problem ?
 Was a bug in kernel ?

Not likely - it could just depend on whether all of available memory
was used. If the physical page with the problem doesn't get used
very often, it won't show up. If the bit in question is not part
of a pointer, or used in pointer arithmetic (or any other operation
on addresses), again it won't show up as a crash. Wrong, or slightly
wrong, results MAY show up.

 I think the last answer is more obvious.(or the gcc
 had a bug and the kernel -- a workaround).
 
 Sorry for bothering you but in every piece of linux
 documentation signal 11 seems to be __identic__ with
 hardware problem.
 Bye

Only when it appears in random locations.

GCC is a fairly well-debugged program; it doesn't segfault
unless you run out of memory, or have flaky memory.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: What is the best way for multiple net_devices

2001-06-28 Thread Jesse Pollard

-  Received message begins Here  -

> 
> On Wed, Jun 27, 2001 at 06:04:02PM -0400, Jeff Garzik wrote:
> > andrew may wrote:
> > > 
> > > Is there a standard way to make multiple copies of a network device?
> > > 
> > > For things like the bonding/ipip/ip_gre and others they seem to expect
> > > insmod -o copy1 module.o
> > > insmod -o copy2 module.o
> > 
> > The network driver should provide the capability to add new devices.
> 
> I am planning to write or patch some drivers to do this as well as other
> things. 
> 
> I would want to add things at run time after the module is alreaded loaded.
> So options to the module won't work.
> 
> I don't know how to use ifconfig to create a new device.

Ifconfig doesn't create the new device; when the driver module is loaded,
it looks for all devices on the bus and creates the table with those
entries. To locate them, an "ifconfig -a" will do.

> Any examples of drivers and apps that do this cleanly. The ones I have
> seen are not.

The only one I've seen is SCSI (I believe it was done with
"echo 1 >/proc/ "). If a new device is present (turned on), the new
entry is appended.

Another one (similar) is the parport. Loading parport_probe rescans, and
defines the new devices.

Another is the module version of IDE: unloading/reloading ide-probe rescans
the IDE devices.

These ARE clumsy because you have to unload them to do a rescan; also,
I think the tables are contained inside the probe module. I don't think
you can unload the probe module if one of the devices is busy (though the
SCSI version might be closest to what you want, it is also the most complex).

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: How to change DVD-ROM speed?

2001-06-27 Thread Jesse Pollard

> 
> On Wed, Jun 27 2001, Jeffrey W. Baker wrote:
> > > On Wed, Jun 27 2001, Jeffrey W. Baker wrote:
> > > > I am trying to change the spin rate of my IDE DVD-ROM drive.  My system is
> > > > an Apple PowerBook G4, and I am using kernel 2.4.  I want the drive to
> > > > spin at 1X when I watch movies.  Currently, it spins at its highest speed,
> > > > which is very loud and a large power load.
> > > >
> > > > /proc/sys/dev/cdrom/info indicates that the speed of the drive can be
> > > > changed.  I use hdparm -E 1 /dev/dvd to attempt to set the speed, and it
> > > > reports success.  However, the drive continues to spin at its highest
> > > > speed.
> > >
> > > Linux still uses the old-style SET_SPEED command, which is probably not
> > > supported correctly by your newer drive. Just checking, I see latest Mt
> > > Fuji only lists it for CD-RW. For DVD, we're supposed to do
> > > SET_STREAMING to specify such requirements.
> > >
> > > Feel free to implement it :-)
> > 
> > I will be happy to :)  Should I hang conditional code off the existing
> > ioctl (CDROM_SELECT_SPEED, ide_cdrom_select_speed) or use a new one?
> 
> Excellent. I'd say use the same ioctl if you can, but default to using
> SET_STREAMING for DVD drives.

As long as it still works for the combo drives - CD/CD-RW/DVD.
Sony VAIO high-end laptops have them, Toshiba has one, maybe others by now.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [PATCH] User chroot

2001-06-27 Thread Jesse Pollard

[EMAIL PROTECTED] (David Wagner):
> H. Peter Anvin wrote:
> >By author:Jorgen Cederlof <[EMAIL PROTECTED]>
> >> If we only allow user chroots for processes that have never been
> >> chrooted before, and if the suid/sgid bits won't have any effect under
> >> the new root, it should be perfectly safe to allow any user to chroot.
> >
> >Safe, perhaps, but also completely useless: there is no way the user
> >can set up a functional environment inside the chroot.
> 
> Why is it useless?  It sounds useful to me, on first glance.  If I want
> to run a user-level network daemon I don't trust (for instance, fingerd),
> isolating it in a chroot area sounds pretty nice: If there is a buffer
> overrun in the daemon, you can get some protection [*] against the rest
> of your system being trashed.  Am I missing something obvious?

1. The libraries are already protected by ownership (root usually).
2. Any penetration is limited to what the user can access.
3. (non-desktop or server) Does the administrator really want users
   giving out access to the system to unknown persons? (I know, it's not
   prevented in either case.. yet)
4. inetd already does this. Spawned processes do not have to run as root...
5. A chroot environment (to be useful) must have libraries/executables for any
   subprocesses that may do an exec. It doesn't matter whether it is done
   by a user or by root, but with root, at least the administrator KNOWS
   that the daemon process is untrusted, how many there are, and what
   accounts they are in... And can be assured that each gets a separate
   UID and does/does not share files (and which files)...
6. There is no difference in the interpretation of setuid files between a
   chroot environment and outside a chroot environment.

Wait for the Linux Security Module work - it may give you a better way to
define access controls that DO allow what you want.

> [*] Yes, I know chroot is not sufficient on its own to completely
> protect against this, but it is a useful part of the puzzle, and
> there are other things we can do to deal with the remaining holes.


-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [comphist] Re: Microsoft and Xenix.

2001-06-27 Thread Jesse Pollard

Rob Landley <[EMAIL PROTECTED]>:
> On Monday 25 June 2001 16:19, [EMAIL PROTECTED] wrote:
...
> > I learnt my computing on a PDP8/E with papertape punch/reader, RALF,
> > Fortran II, then later 2.4Mb removable cartridges (RK05 I think).  toggling
> > in the bootstrap improved your concentration. Much later you could
> > get a single chip(?) version of this in a wee knee sized box.
> 
> "A quarter century of unix" mentions RK05 cartridges several times, but never 
> says much ABOUT them.
> 
> Okay, so they're 2.4 megabyte removable cartridges?  How big?  Are they tapes 
> or disk packs?  (I.E. can you run off of them or are they just storage?)  I 
> know lots of early copies of unix were sent out from Bell Labs on RK05 
> cartidges signed "love, ken"...

Ah, the memories... (apologies for the interruptions, but I just had to ...)

RK05 cartridges looked very similar to a floppy case the size of an old 78 RPM
record (about 12 inches across, 2 - 3 inches high). I never used them, but
I did see them. They were among the first disk drives DEC ever made. Not the
first (I think that was the DF-32, for PDP-8 systems, with 32 K bytes of disk
space). The raw storage was reported as 2.5 MB, formatted was ~2.4 MB, with
two recording surfaces. The drive looked very similar to a modern CD drive
that would fit in about a 3U (ummm 4U?) 19 inch rack. It did have a
write enable/disable switch. If I remember right,
these were originally made for the PDP 11/10-20 systems used for laboratory
device control - chromatographs were mentioned by the chemistry department
back in school.

I may have an old DEC peripheral specification book at home (11/45 version).
I really liked those books that DEC used to put out. If you ever needed to
program a DEC interface, that book had everything. It was almost like the
engineers were bragging about how easy the interfaces were to program.

> What was that big reel to reel tape they always show in movies, anyway?

I think they were CDC transports.

> I need a weekend just to collate stuff...
> 
> > One summer job was working on a PDP15 analog computer alongside an 11/20
> > with DECTAPE, trying to compute missile firing angles. [A simple version of
> > Pres Bush's starwars shield].
> 
> Considering that the Mark I was designed to compute tables of artillery 
> firing angles during World War II...  It's a distinct trend, innit?  And the 
> source of the game "artillery duel", of course...

Or the 11/34 version of the Lunar Lander (load from paper tape, graphics
display on VT11 - 512x512 8 bit color). It used to be distributed as a
diagnostic tool (hardware level interrupts, dual A/D conversion via joystick,
I/O via VT11). Any memory, DMA, or bus configuration errors would hang
the system with a known diagnostic one-liner message explaining the problem.

I also saw a report of a "terminal warfare" event where the graphics display
was being used for text editing when two little stick figure men would walk
onto the display, pick up a line, and then walk off the screen. There was
nothing the user could do until it finished. The text buffer wasn't touched,
only the display buffer.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Linux and system area networks

2001-06-26 Thread Jesse Pollard

-  Received message begins Here  -

> 
> > "Pete" == Pete Zaitcev <[EMAIL PROTECTED]> writes:
> 
> Roland> The rough idea is that WSD is a new user space library
> Roland> that looks at sockets calls and decides if they have to go
> Roland> through the usual kernel network stack, or if they can be
> Roland> handed off to a "SAN service provider" which bypasses the
> Roland> network stack and uses hardware reliable transport and
> Roland> possibly RDMA.
> 
> Pete> That can be done in Linux just as easily, using same DLLs
> Pete> (they are called .so for "shared object"). If you look at
> Pete> Ashok Raj's Infi presentation, you may discern "user-level
> Pete> sockets", if you look hard enough. I invite you to try, if
> Pete> errors of others did not teach you anything.
> 
> I think you misunderstood the point.  Microsoft is providing this WSD
> DLL as a standard part of W2K now.  This means that hardware vendors
> just have to write a SAN service provider, and all Winsock-using
> applications benefit transparently.  No matter how good your TCP/IP
> implementation is, you still lose (especially in latency) compared to
> using reliable hardware transport.  Oracle-with-VI and DAFS-vs-NFS
> benchmarks show this quite clearly.

You do lose in security. You can't use IPsec over such a device without
some drastic overhaul.

> Linux has nothing to compare to Winsock Direct.  I agree, one could
> put an equivalent in glibc, or one could take advantage of Linux's
> relatively low system call latency and put something in the kernel.
> The unfortunate consequence of this is that SAN (system area network)
> hardware vendors are not going to support Linux very well.
> 
> BTW, do you have a pointer to Ashok Raj's presentation?

That would be useful. We had a presentation here, but it did not
show any great detail (mostly marketing drivel: "it will be faster/more
efficient/less overhead..." but nothing about security).
 
> Roland> This means that all applications that use Winsock benefit
> Roland> from the advanced network hardware.  Also, it means that
> Roland> Windows is much easier for hardware vendors to support
> Roland> than other OSes.  For example, Alacritech's TCP/IP offload
> Roland> NIC only works under Windows.  Microsoft is also including
> Roland> Infiniband support in Windows XP and Windows 2002.
> 
> Pete> IMHO, Alacritech is about to join scores and scores of
> Pete> vendors who tried that before. Customers understand very
> Pete> soon that a properly written host based stack works much
> Pete> better in the face of a changing environment: Faster CPUs,
> Pete> new CPUs (IA-64), new network protocols (ECN). Besides, it
> Pete> is easy to "accelerate" a bad network stack, but try to
> Pete> outdo a well done stack.
> 
> OK, how about an Infiniband network with a TCP/IP gateway at the edge?
> Have we thought about how Linux servers should use the gateway to talk
> to internet hosts?  Surely there's no point in running TCP/IP inside
> the Infiniband network, so there needs to be some concept of "socket
> over Infiniband."

One of the problems I haven't seen explained is how address translation
between TCP/IP and any SAN would work, much less how security is going to be
controlled. Personally, I think it will end up equivalent to TCP/IP over
Fibre Channel...

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Linux and system area networks

2001-06-26 Thread Jesse Pollard

-  Received message begins Here  -

 
  Pete == Pete Zaitcev [EMAIL PROTECTED] writes:
 
 Roland The rough idea is that WSD is a new user space library
 Roland that looks at sockets calls and decides if they have to go
 Roland through the usual kernel network stack, or if they can be
 Roland handed off to a SAN service provider which bypasses the
 Roland network stack and uses hardware reliable transport and
 Roland possibly RDMA.
 
 Pete That can be done in Linux just as easily, using same DLLs
 Pete (they are called .so for shared object). If you look at
 Pete Ashok Raj's Infi presentation, you may discern user-level
 Pete sockets, if you look hard enough. I invite you to try, if
 Pete errors of others did not teach you anything.
 
 I think you misunderstood the point.  Microsoft is providing this WSD
 DLL as a standard part of W2K now.  This means that hardware vendors
 just have to write a SAN service provider, and all Winsock-using
 applications benefit transparently.  No matter how good your TCP/IP
 implementation is, you still lose (especially in latency) compared to
 using reliable hardware transport.  Oracle-with-VI and DAFS-vs-NFS
 benchmarks show this quite clearly.

You do loose in security. You can't use IPSec over such a device without
some drastic overhaul.

 Linux has nothing to compare to Winsock Direct.  I agree, one could
 put an equivalent in glibc, or one could take advantage of Linux's
 relatively low system call latency and put something in the kernel.
 The unfortunate consequence of this is that SAN (system area network)
 hardware vendors are not going to support Linux very well.
 
 BTW, do you have a pointer to Ashok Raj's presentation?

That would be usefull. We had a presentation here, but it did not
show any great detail (mostly marketing drivel it will be faster/more
efficient/less overhead.. but nothing about security).
 
 Roland This means that all applications that use Winsock benefit
 Roland from the advanced network hardware.  Also, it means that
 Roland Windows is much easier for hardware vendors to support
 Roland than other OSes.  For example, Alacritech's TCP/IP offload
 Roland NIC only works under Windows.  Microsoft is also including
 Roland Infiniband support in Windows XP and Windows 2002.
 
 Pete IMHO, Alacritech is about to join scores and scores of
 Pete vendors who tried that before. Customers understand very
 Pete soon that a properly written host based stack works much
 Pete better in the face of a changing environment: Faster CPUs,
 Pete new CPUs (IA-64), new network protocols (ECN). Besides, it
 Pete is easy to accelerate a bad network stack, but try to
 Pete outdo a well done stack.
 
 OK, how about an Infiniband network with a TCP/IP gateway at the edge?
 Have we thought about how Linux servers should use the gateway to talk
 to internet hosts?  Surely there's no point in running TCP/IP inside
 the Infiniband network, so there needs to be some concept of socket
 over Infiniband.

One of the problems I haven't seen explained is how the address translation
between TCP/IP and any SAN would work. Much less how security is going to be
controlled. Personally, I think it will end up equivalent to TCP/IP over
Fibre Channel...

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: The Joy of Forking

2001-06-25 Thread Jesse Pollard

Rick Hohensee <[EMAIL PROTECTED]>:
> > On Sun, 24 Jun 2001, Rick Hohensee wrote:
> > >2.4.5 is 26 meg now. It's time to consider forking the kernel. Alan has
> > >already stuck his tippy-toe is that pool, and his toe is fine.
> > >
> > >   forget POSIX
> > >   The standards that matter are de-facto standards. Linux is the
> > >   standard. Congratulations. Take your seat in the chair for 
> > >   First Violin. 
> > 
> > NO. I port too many programs both ways. I need POSIX compliancy as much as
> > is possible. That way the programs will compile and go among Linux, UNICOS,
> > IRIX, Solaris, AIX, and sometimes HP-UX.
> 
> That's fine for things unix does well. Realtime is one counterexample. 

That depends entirely on the definition of "Realtime". UNICOS can be used
as realtime (I understand it used to monitor nuclear reactors). If you need
microsecond response times, then unix of any flavor is not suitable. If you
mean "fast enough to watch DVDs" then you are into 100s of milliseconds, where
Linux should be fast enough (with read-ahead caching).

> > >   rtlinux by default
> > >   no SMP
> > >   SMP doesn't scale. If this fork comes, the smart maintainer
> > >   will take the non-SMP fork.
> > 
> > Depends on platform and bus. From reports, it seems to scale just fine on
> > non-intel systems.
> 
> Big expensive systems. Non-desktop systems. Non-end-user systems. And
> clustering will eat its lunch eventually anyway.

Alpha-based systems and UltraSparc systems are used for desktops, as are MIPS
systems. They are also used for servers and clusters.

> > >   x86 only (and similar, e.g. Crusoe)
> > 
> > Again, Linux is the only system that CAN run on anything from PDA through
> > supercomputer clusters.
> > 
> 
> NetBSD claims 24 platforms. Forths run on everything you've never heard of
> below a PDA.

When performance is below a PDA, Forth IS a reasonable system. It is also
reasonable for single-purpose dedicated functions like sensor monitoring and
printers (without network, though it can be coerced). Forth just isn't
that useful (well... less so than other languages) on systems that can afford
the software for compilers/linkers/multi-tasking/multi-processing.
 
> > >   mimimal VM cacheing
> > >   So you can red-switch the box without journalling with
> > >   reasonable damage, which for an end-user is a file or two.
> > >   Having done a lot of very wrong things with Linux, I'm 
> > >   impressed that ext2 doesn't self-destruct under abuse.
> > 
> > Not if you want some speed out of it.
> 
> Again, throughput is a server thing. 

Refer to the DVD complaints about lack of performance. Linux does need
improvement in I/O throughput. CPU throughput is a real pain if the
decryption isn't fast enough.
 
> > 
> > >   in-kernel interpreter
> > >   I have one working. It's fun.
> > 
> > VIRUSES, VIRUSES, and MORE VIRUS entry points. Assuming you mean both
> > translator and execution at the same time.
> 
> And assembler. This is called get your hands greasy. Fun. Your box. Not
> the admin's box. 

A kernel module to compile/link source code??? The security hassles alone
aren't worth the effort. I've also seen reports of a "postscript" virus that
takes over a printer and discards any output other than that of the person who
"printed" the virus; also (hazy memory) of some printers being taken over as
a platform to launch other attacks. I don't like in-kernel interpreters.

> > >   EOL is CR
> > >   The one thing Dos got right and unix got wrong. Also, in my
> > >   2-month experience in a cube on a LAN, the most annoying thing
> > >   about trying to be a Linux end-user in a Dos shop. Printers
> > >   are CRLF, fer crissakes.
> > >   This is not a difficult mod, but it's a lot of little changes
> > >   throughout a box. Things that look for EOLs are the part that
> > >   has to be fixed by hand, and can be inclusive of CRLF and LF.
> > 
> > I've used both. They are equivalent. Live with it.
> > 
> 
> We disagree, but I wont rant about the phone company breaking a perfectly
> good telegraphy protocol called ASCII.

The phone company wasn't the first to do that - DEC PDP-8 systems also had
a tendency to drop CR. Their "All-in-One" office hardware dropped both CR
and LF in favor of a record length field for text files (RMS-8/10/11
products - RMS => Record Management System). It was both faster and produced
smaller files that way.
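
For what it's worth, interconverting the two conventions is trivial - a
one-liner in each direction. A minimal sketch in plain sh (the file names
are illustrative, not from any particular system):

```shell
#!/bin/sh
# CRLF and LF text differ by one byte per line; tr/awk convert losslessly.
printf 'hello\r\nworld\r\n' > /tmp/dos.txt

# CRLF -> LF: strip the carriage returns
tr -d '\r' < /tmp/dos.txt > /tmp/unix.txt

# LF -> CRLF: re-append a carriage return to every line
awk '{ printf "%s\r\n", $0 }' /tmp/unix.txt > /tmp/dos2.txt

# The round trip is byte-for-byte identical
cmp -s /tmp/dos.txt /tmp/dos2.txt && echo "round-trip ok"

rm -f /tmp/dos.txt /tmp/unix.txt /tmp/dos2.txt
```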

> > >   Plan 9-style header files structure
> > >   Plan 9's most amazing stuff to me is the subtle refinements,
> > >   like sane header files. Sane C header files, _oh_ _my_ _God_. 
> > 
> > As long as source code portability is maintained.
> 
> Dennis Ritchie, who signs the checks for the people that wrote Plan 9,
> said an interesting thing about portability. He said "good code is
> portable code." I infer from that, and from the Plan 9 sources, and from
> the design of unix and the two-character commands in /bin/, that he
> relates "good" very strongly with "simple". Not slavish adherence to
> standards. Plan 9 C isn't ANSI, for 

Re: The Joy of Forking

2001-06-24 Thread Jesse Pollard

On Sun, 24 Jun 2001, Rick Hohensee wrote:
>2.4.5 is 26 meg now. It's time to consider forking the kernel. Alan has
>already stuck his tippy-toe is that pool, and his toe is fine.
>
>The "thou shalt not fork" commandment made sense at one point, when free
>unix was a lost tribe wandering hungry in the desert. When you have a
>project with several million users that has a scope that simply doesn't
>scale, it doesn't. Forking should be done responsibly, and with great joy.
>As in nature, software success breeds diversity. Linux should diversify.
>This is cause for celebration, ceremony, throwing of bouquets and so on.
>
>I have done a few trivial things that people with rather shallow ideas of
>what unix is about have excoriated as "NOT UNIX!". So far that's been
>absurd, but my stuff is getting more intrusive. Linux is far more
>interesting to me for it's general usefulness and openness, which are
>inextricably related, than for it's unixness, although unix is certainly
>beautiful.
>
>Alan was going to file for divorce over dev_t. Isn't is funny how
>estranged couples so often are so much alike? dev_t is crucial, of course,
>but it's not the biggest geological fault in the kernel. SMP is. I have
>dropped hints about this before. An SMP system is more fundamentally
>different than UP than a 386 is different than other big microprocessors.
>
>As I mentioned that Steve Ballmer mentioned, Linux isn't getting any
>traction on the client, the end-user desktop box. That's a huge and poorly
>served market, so there are lots of tragically shallow ideas of how to
>approach it. A few variations on the Linux theme are in order, that
>preserve unixness, openness, but that don't have pretenses of being Big
>UNIX(TM).
>
>For a client-use Linux kernel, I suggest, and will be and have been
>persuing, features and non-features such as...
>
>   forget POSIX
>   The standards that matter are de-facto standards. Linux is the
>   standard. Congratulations. Take your seat in the chair for 
>   First Violin. 

NO. I port too many programs both ways. I need POSIX compliancy as much as
is possible. That way the programs will compile and go among Linux, UNICOS,
IRIX, Solaris, AIX, and sometimes HP-UX.

>   rtlinux by default
>   no SMP
>   SMP doesn't scale. If this fork comes, the smart maintainer
>   will take the non-SMP fork.

Depends on platform and bus. From reports, it seems to scale just fine on
non-intel systems.

>   x86 only (and similar, e.g. Crusoe)

Again, Linux is the only system that CAN run on anything from PDA through
supercomputer clusters.

>   mimimal VM cacheing
>   So you can red-switch the box without journalling with
>   reasonable damage, which for an end-user is a file or two.
>   Having done a lot of very wrong things with Linux, I'm 
>   impressed that ext2 doesn't self-destruct under abuse.

Not if you want some speed out of it.

>   in-kernel interpreter
>   I have one working. It's fun.

VIRUSES, VIRUSES, and MORE VIRUS entry points. Assuming you mean both
translator and execution at the same time.

>   EOL is CR
>   The one thing Dos got right and unix got wrong. Also, in my
>   2-month experience in a cube on a LAN, the most annoying thing
>   about trying to be a Linux end-user in a Dos shop. Printers
>   are CRLF, fer crissakes.
>   This is not a difficult mod, but it's a lot of little changes
>   throughout a box. Things that look for EOLs are the part that
>   has to be fixed by hand, and can be inclusive of CRLF and LF.

I've used both. They are equivalent. Live with it.

>   Plan 9-style header files structure
>   Plan 9's most amazing stuff to me is the subtle refinements,
>   like sane header files. Sane C header files, _oh_ _my_ _God_. 

As long as source code portability is maintained.

>   excellent localizability
>   e.g. kernel error strings mapped to a file, or an #include
>   that can be language-specific. My DSFH stuff also. 

This is quite reasonable. Actually, unless you are referring to kernel-internal
error codes, it's already done with perror.

>
>What about GUI's, and "desktops" and such? They're nice. They are
>secondary, however. The free unix world doesn't often enough make the
>point that GUI's are much more important when the underlying OS sucks,
>which it doesn't in Linux. 

If you only use a compute/disk server. Otherwise you are saying "no desktop
publishing, word processing, or image analysis".

Are you still using DOS only?

>In short, an open source OS for end-users should be very serious about
>simplicity, and not just pay lip-service to it. There is evidence of the
>value of this in the marketplace. What doesn't exist is an OS where
>simplicity is systemic. This is why end-user issues pertain to the kernel
>at all. This is how open source should be. Simple, 

Re: Controversy over dynamic linking -- how to end the panic

2001-06-21 Thread Jesse Pollard

-  Received message begins Here  -

> 
> "Eric S. Raymond" wrote:
> > 
> > The GPL license reproduced below is copyrighted by the Free Software
> > Foundation, but the Linux kernel is copyrighted by me and others who
> > actually wrote it.
> > 
> > The GPL license requires that derivative works of the Linux kernel
> > also fall under GPL terms, including the requirement to disclose
> > source.  The meaning of "derivative work" has been well established
> > for traditional media, and those precedents can be applied to
> > inclusion of source code in a straightforward way.  But as of
> > mid-2001, neither case nor statute law has yet settled under what
> > circumstances *binary* linkage of code to a kernel makes that code a
> > derivative work of the kernel.
> > 
> > To calm down the lawyers, I as the principal kernel maintainer and
> > anthology copyright holder on the code am therefore adding the
> > following interpretations to the kernel license:
> > 
> > 1. Userland programs which request kernel services via normal system
> >calls *are not* to be considered derivative works of the kernel.
> > 
> > 2. A driver or other kernel component which is statically linked to
> >the kernel *is* to be considered a derivative work.
> > 
> > 3. A kernel module loaded at runtime, after kernel build, *is not*
> >to be considered a derivative work.
> > 
> > These terms are to be considered part of the kernel license, applying
> > to all code included in the kernel distribution.  They define your
> > rights to use the code in *this* distribution, however any future court
> > may rule on the underlying legal question and regardless of how the
> > license or interpretations attached to future distributions may change.
> 
> I disagree with 2.  Consider the following:
> 
> - GPL library foo is used by application bar.  bar must be GPL because
> foo is.  I agree with this.
> - Non-GPL library foo is used by GPL application bar.  foo does NOT
> become GPL just because bar is, even if bar statically linked foo in.
> 
> The kernel is the equivalent of an application.  If someone needs to
> statically link in a driver, which is the equivalent of a library, I
> don't see how that should make the driver GPL.

Isn't this all covered by the LGPL? (Library GPL)

If the kernel is counted as a library (by the module, or the module
counted as a library by the kernel) doesn't that fit with the LGPL
definition? I believe the LGPL covers the runtime library.

This was hashed out some time ago - if I remember correctly, Linus didn't
favor the LGPL for the kernel because that meant the interface between
the kernel and modules had to become more "static", restricting future
enhancements of the kernel/module interface.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



Re: Alan Cox quote? (was: Re: accounting for threads)

2001-06-21 Thread Jesse Pollard

Rob Landley <[EMAIL PROTECTED]>:
> 
> On Wednesday 20 June 2001 17:20, Albert D. Cahalan wrote:
> > Rob Landley writes:
> > > My only real gripe with Linux's threads right now [...] is
> > > that ps and top and such aren't thread aware and don't group them
> > > right.
> > >
> > > I'm told they added some kind of "threadgroup" field to processes
> > > that allows top and ps and such to get the display right.  I haven't
> > > noticed any upgrades, and haven't had time to go hunting myself.
> >
> > There was a "threadgroup" added just before the 2.4 release.
> > Linus said he'd remove it if he didn't get comments on how
> > useful it was, examples of usage, etc. So I figured I'd look at
> > the code that weekend, but the patch was removed before then!
> 
> Can we give him feedback now, asking him to put it back?
> 
> > Submit patches to me, under the LGPL please. The FSF isn't likely
> > to care. What, did you think this was the GNU system or something?
> 
> I've stopped even TRYING to patch bash.  Try a for loop calling "echo $$&"; 
> every single process bash forks off has the PARENT'S value for $$, which is 
> REALLY annoying if you spend an afternoon writing code not knowing that and 
> then wonder why the various processes' temp files are stomping each other...

Actually - you have an error there. $$ gets evaluated during the parse, not
during execution of the subprocess. To get what you are describing it is
necessary to use "sh -c 'echo $$'" to force the delay of evaluation. The only
"bug" interpretation is in the evaluation of the quotes. If echo '$$' &
delayed the interpretation of "$$" until the subprocess shell reparsed the
line, then the $$ would be substituted as you wanted. This delay can only be
done via the "sh -c ..." method. (It's the same with the Bourne/Korn shells.)
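
A minimal sketch in plain sh showing the difference (the variable names are
illustrative):

```shell
#!/bin/sh
# $$ is expanded by the current shell when the line is parsed, so
# subshells and background jobs inherit the parent's value:
parent=$$
sub=$(echo $$)            # command-substitution subshell: still the parent's PID
child=$(sh -c 'echo $$')  # a freshly exec'd shell: its own PID

[ "$sub" = "$parent" ] && echo "subshell sees parent PID"
[ "$child" != "$parent" ] && echo "sh -c sees its own PID"
```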

> Oh, and anybody who can explain this is welcome to try:
> 
> lines=`ls -l | awk '{print "\""$0"\""}'`
> for i in $lines
> do
>   echo line:$i
> done

That depends on what you are trying to do. Are you trying to echo the
entire "ls -l"? Or are you trying to convert an "ls -l" into a single
column based on a field extracted from "ls -l"?

If the latter, then the script should be:

ls -l | awk '{print $9}' | while read i
do
    echo line: $i
done

If the fields don't matter, but you want each line processed in the
loop do:

ls -l | while read i
do
   echo line:$i
done

Bash doesn't need patching for this.

Again, the evaluation of the quotes is biting you. When the $lines
parameter is evaluated, the quotes are present as literal characters.

bash word-splits the expanded "for" list. It expects a list of single
tokens, rather than a list of quoted strings - the embedded quotes are
not re-parsed. This is the same as in the Bourne/Korn shells.
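
A minimal sketch in plain sh reproducing the behavior (the input file and
names are illustrative):

```shell
#!/bin/sh
printf 'one two\nthree four\n' > /tmp/lines.txt

# Wrap each line in literal double quotes, as the awk one-liner did:
lines=$(awk '{print "\"" $0 "\""}' /tmp/lines.txt)

# "for" splits the expansion on whitespace; the quotes stay literal,
# so two input lines become four tokens:
for i in $lines; do echo "for:$i"; done

# "while read" delivers one whole line per iteration instead:
while read -r i; do echo "read:$i"; done < /tmp/lines.txt

rm -f /tmp/lines.txt
```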

If you want such elaborate handling of strings, I suggest using perl.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



Re: The latest Microsoft FUD. This time from BillG, himself.

2001-06-21 Thread Jesse Pollard


> 
> On Wed, Jun 20, 2001 at 11:09:10PM +0100, Alan Cox wrote:
> > > http://www.zdnet.com/zdnn/stories/news/0,4586,5092935,00.html > 
> > 
> > Of course the URL that goes with that is :
> > http://www.microsoft.com/windows2000/interix/features.asp
> > 
> > Yes., Microsoft ship GNU C (quite legally) as part of their offerings...
> 
> Which brings up an interesting question for us all.  Let's postulate, for
> the sake of discussion, that we agree on the following:
> 
> a) Linux (or just about any Unix) is a better low level OS than NT
> b) Microsoft's application infrastructure is better (the COM layer,
>the stuff that lets apps talk to each, the desktop, etc).

Not completely - the COM layer is (in my opinion) part of what propagates some
of their security problems. What else would be capable of disabling a
cruiser so fast (and taking two hours to restart)...

There appears to be no functional difference between COM and CORBA
(based on superficial knowledge only) except specification availability.

> I know we can argue that KDE/GNOME/whatever is going to get there or is
> there or is better, etc., but for the time being lets just pretend that
> the Microsoft stuff is better.
> 
> What would be wrong with Microsoft/Linux?  It would be:
> 
> a) the Linux kernel
> b) the Microsoft API ported to X
> c) Microsoft apps
> d) Linux apps
> 
> Since Microsoft is all about making money, it doesn't matter if they
> charge for the dll's or the OS, either one is fine, you can't run Word
> without them.  If you don't need the Microsoft apps, you could strip
> them off and strip off the dlls and ship all the rest of it without
> giving Microsoft a dime.  If you do need the apps or you want the app
> infrastructure, you have to give Microsoft exactly what you have to give
> them today - money - but you can run Word side by side with Ghostview
> or whatever.  Microsoft could charge exactly the same amount for the
> dll's as they charge for the OS, none of the end users can tell the
> difference anyway.

Ah yes, raise the Mr. Bill tax... The DLLs ought to be less than half
the price of the OS... after all, they are a small part of the distribution
and belong to the application(s).

If you attempt to find a full installation of NT (JUST the OS), it will
cost ~400+ dollars (US). If you then add Office, add an additional 200.
If you want program development, add another 200 to 600, maybe more
since I haven't looked recently.

For the most part, I wouldn't complain too much about their prices. If the
products would work. If they didn't have such horrible security. If the
"patches" supplied would also work and not introduce more and different
failures.

BTW, the prices are actually slightly less than what AT&T, SCO, and others
charged for pieces of a unix system when they were originally sold
($600 base os, $600 application development, $600 documentation workbench
all values approximate, from memory).

> I'm unimpressed with what Microsoft calls an operating system and
> I'm equally unimpressed with what Unix calls an application layer.
> For the last 10 years, Unix has gotten the OS right and the apps wrong
> and Microsoft has gotten the apps right and the OS wrong.  Seems like
> there is potential for a win-win.

I'm equally unimpressed by their applications - how many macro viruses
exist? How do they propagate? How many times do they change file formats?
How many patches are (re)issued to "fix" the same problem?

The biggest improvement would be that users could remain with a version
that works for them and NOT be forced to pay more money for the same
functionality (watch out for the XP license virus... also known as
a logic bomb).

> You can scream all you want that "it isn't free software" but the fact
> of the matter is that you all scream that and then go do your slides for
> your Linux talks in PowerPoint.

Not by choice - I'm forced to use M$ crap because the conferences will
not accept anything else (yet another monopoly point). Personally, I would
prefer to use Applix, StarOffice, WordPerfect, FrameMaker, ... Only one
of which is "free".

I agree that M$ applications should be available. But until M$ quits
appropriating other peoples code and calling it theirs I, for one, don't
want to be forced to use them.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



 and Microsoft has gotten the apps right and the OS wrong.  Seems like
 there is potential for a win-win.

I'm equally unimpressed by their applications - how many macro viruses
exist? How do they propagate? How many times do they change file formats?
How many patches are (re)issued to fix the same problem?

The biggest improvement would be that users could remain with a version
that works for them and NOT be forced to pay more money for the same
functionality (watch out for the XP license virus... also known as
a logic bomb).

 You can scream all you want that it isn't free software but the fact
 of the matter is that you all scream that and then go do your slides for
 your Linux talks in PowerPoint.

Not by choice - I'm forced to use M$ crap because the conferences will
not accept anything else (yet another monopoly point). Personally, I would
prefer to use Applix, StarOffice, WordPerfect, FrameMaker, ... Only one
of which is free.

I agree that M$ applications should be available. But until M$ quits
appropriating other peoples code and calling it theirs I, for one, don't
want to be forced to use them.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Alan Cox quote? (was: Re: accounting for threads)

2001-06-21 Thread Jesse Pollard

Rob Landley <[EMAIL PROTECTED]>:

> On Wednesday 20 June 2001 17:20, Albert D. Cahalan wrote:
> > Rob Landley writes:
> > > My only real gripe with Linux's threads right now [...] is
> > > that ps and top and such aren't thread aware and don't group them
> > > right.
> > >
> > > I'm told they added some kind of threadgroup field to processes
> > > that allows top and ps and such to get the display right.  I haven't
> > > noticed any upgrades, and haven't had time to go hunting myself.
> >
> > There was a threadgroup added just before the 2.4 release.
> > Linus said he'd remove it if he didn't get comments on how
> > useful it was, examples of usage, etc. So I figured I'd look at
> > the code that weekend, but the patch was removed before then!
>
> Can we give him feedback now, asking him to put it back?
>
> > Submit patches to me, under the LGPL please. The FSF isn't likely
> > to care. What, did you think this was the GNU system or something?
>
> I've stopped even TRYING to patch bash.  Try a for loop calling echo $$:
> every single process bash forks off has the PARENT'S value for $$, which is
> REALLY annoying if you spend an afternoon writing code not knowing that and
> then wonder why the various processes' temp files are stomping each other...

Actually - you have an error there. $$ gets evaluated during the parse, not
during execution of the subprocess. To get what you are describing, it is
necessary to run sh -c 'echo $$' to force the delayed evaluation. The only
arguable bug is in the evaluation of the quotes: if echo '$$' delayed the
interpretation of $$ until the subprocess shell reparsed the line, the $$
would be substituted as you wanted. That delay can only be had via the
sh -c ... method. (It's the same with the Bourne/Korn shells.)
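
A minimal, hedged illustration of the parse-time expansion described above
(plain POSIX sh; the PIDs will of course differ on each run):

```shell
#!/bin/sh
# $$ is expanded by the invoking shell while the line is parsed, so
# every child forked in the loop reports the parent's PID:
for i in 1 2 3; do
    ( echo "child sees: $$" )   # parent's PID, three times
done

# Forcing a fresh parse in a new shell yields that shell's own PID:
sh -c 'echo "my own pid: $$"'
```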

> Oh, and anybody who can explain this is welcome to try:
>
> lines=`ls -l | awk '{print \$0}'`
> for i in $lines
> do
>   echo line:$i
> done

That depends on what you are trying to do. Are you trying to echo the
entire ls -l? Or are you trying to convert an ls -l into a single
column based on a field extracted from ls -l?

If the latter, then the script should be (with $fieldno replaced by the
field number you want):

ls -l | awk '{print $fieldno}' | while read i
do
   echo line: $i
done

If the fields don't matter, but you want each line processed in the
loop, do:

ls -l | while read i
do
   echo line:$i
done

Bash doesn't need patching for this.

Again, the evaluation of the quotes is biting you. When the $lines
parameter is evaluated, the quotes are present.

bash is doing a double evaluation for the for loop. It expects
a list of single tokens, rather than a list of quoted strings. This is
the same as in the Bourne/Korn shells.

If you want such elaborate handling of strings, I suggest using perl.
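
A short sketch of the word-splitting being described, runnable in plain sh
(printf stands in for ls -l so the output is predictable):

```shell
#!/bin/sh
# An unquoted $lines is split on whitespace, including newlines, so the
# for loop sees one word per token, not one line per iteration:
lines=`printf 'a b\nc d\n'`
for i in $lines; do
    echo "token: $i"
done                              # four iterations: a, b, c, d

# Reading the pipeline line by line keeps each line intact:
printf 'a b\nc d\n' | while read i; do
    echo "line: $i"
done                              # two iterations: "a b" and "c d"
```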

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Controversy over dynamic linking -- how to end the panic

2001-06-21 Thread Jesse Pollard

-  Received message begins Here  -

 
> Eric S. Raymond wrote:
> >
> > The GPL license reproduced below is copyrighted by the Free Software
> > Foundation, but the Linux kernel is copyrighted by me and others who
> > actually wrote it.
> >
> > The GPL license requires that derivative works of the Linux kernel
> > also fall under GPL terms, including the requirement to disclose
> > source.  The meaning of "derivative work" has been well established
> > for traditional media, and those precedents can be applied to
> > inclusion of source code in a straightforward way.  But as of
> > mid-2001, neither case nor statute law has yet settled under what
> > circumstances *binary* linkage of code to a kernel makes that code a
> > derivative work of the kernel.
> >
> > To calm down the lawyers, I as the principal kernel maintainer and
> > anthology copyright holder on the code am therefore adding the
> > following interpretations to the kernel license:
> >
> > 1. Userland programs which request kernel services via normal system
> >    calls *are not* to be considered derivative works of the kernel.
> >
> > 2. A driver or other kernel component which is statically linked to
> >    the kernel *is* to be considered a derivative work.
> >
> > 3. A kernel module loaded at runtime, after kernel build, *is not*
> >    to be considered a derivative work.
> >
> > These terms are to be considered part of the kernel license, applying
> > to all code included in the kernel distribution.  They define your
> > rights to use the code in *this* distribution, however any future court
> > may rule on the underlying legal question and regardless of how the
> > license or interpretations attached to future distributions may change.
>
> I disagree with 2.  Consider the following:
>
> - GPL library foo is used by application bar.  bar must be GPL because
> foo is.  I agree with this.
> - Non-GPL library foo is used by GPL application bar.  foo does NOT
> become GPL just because bar is, even if bar statically linked foo in.
>
> The kernel is the equivalent of an application.  If someone needs to
> statically link in a driver, which is the equivalent of a library, I
> don't see how that should make the driver GPL.

Isn't this all covered by the LGPL? (Library GPL)

If the kernel is counted as a library (by the module, or the module
counted as a library by the kernel) doesn't that fit with the LGPL
definition? I believe the LGPL covers the runtime library.

This was hashed out some time ago - if I remember correctly, Linus didn't
favor the LGPL for the kernel because that meant the interface between
the kernel and modules had to become more static, restricting future
enhancements of the kernel/module interface.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: obsolete code must die

2001-06-14 Thread Jesse Pollard

-  Received message begins Here  -

> 
> Cleanup is a nice idea , but Linux should support old hardware and should
> not affect them in any way.
> 
> Jaswinder.

I agree - and added my comments below.

> - Original Message -
> From: "Daniel" <[EMAIL PROTECTED]>
> To: "Linux kernel" <[EMAIL PROTECTED]>
> Sent: Wednesday, June 13, 2001 5:44 PM
> Subject: obsolete code must die
> 
> 
> > Anyone concerned about the current size of the kernel source code? I am,
> and
> > I propose to start cleaning house on the x86 platform. I mean it's all
> very
> > well and good to keep adding features, but stuff needs to go if kernel
> > development is to move forward. Before listing the gunk I want to get rid
> > of, here's my justification for doing so:
> > -- Getting rid of old code can help simplify the kernel. This means less
> > chance of bugs.
> > -- Simplifying the kernel means that it will be easier for newbies to
> > understand and perhaps contribute.
> > -- a simpler, cleaner kernel will also be of more use in an academic
> > environment.
> > -- a smaller kernel is easier to maintain and is easier to re-architect
> > should the need arise.
> > -- If someone really needs support for this junk, they will always have
> the
> > option of using the 2.0.x, 2.2.x or 2.4.x series.
> >
> > So without further ado here're the features I want to get rid of:
> >
> > i386, i486
> > The Pentium processor has been around since 1995. Support for these older
> > processors should go so we can focus on optimizations for the pentium and
> > better processors.

I'm still using 486 systems. They work fine for a DSL firewall. Why change?
I'd have to buy a whole new system. The case won't hold anything newer - so
it costs $600-$800; I'd rather put that into fixing up the house... or getting
a newer workstation (1.4 GHz looks REALLY nice). I don't need high performance
for a firewall that only handles ~700Kbits/sec over a 10BaseT network.

I also understand that 386 systems make excellent terminal servers...

> > math-emu
> > If support for i386 and i486 is going away, then so should math emulation.
> > Every intel processor since the 486DX has an FPU unit built in. In fact
> > shouldn't FPU support be a userspace responsibility anyway?

Not when the code must support register save/restore on context switches.
Now the meat of the emulator, perhaps. But then you must also provide a
way for applications that don't know about the lack of an FPU to suddenly
have access to a new library, accessed via a kernel trap (illegal
instruction). This imposes more context switches on an already slow system
(though why anyone would use floating point on one is beyond me ... maybe
performance tracking of firewall/terminal server use...).

> > ISA bus, MCA bus, EISA bus
> > PCI is the defacto standard. Get rid of CONFIG_BLK_DEV_ISAPNP,
> > CONFIG_ISAPNP, etc
> >
> > ISA, MCA, EISA device drivers
> > If support for the buses is gone, there's no point in supporting devices
> for
> > these buses.

Not on the 386/486 systems (at least the ISA/EISA based ones).

> > all code marked as CONFIG_OBSOLETE
> > Since we're cleaning house we may as well get rid of this stuff.
> >
> > MFM/RLL/XT/ESDI hard drive support
> > Does anyone still *have* an RLL drive that works? At the very least get
> rid
> > of the old driver (eg CONFIG_BLK_DEV_HD_ONLY, CONFIG_BLK_DEV_HD_IDE,
> > CONFIG_BLK_DEV_XD, CONFIG_BLK_DEV_PS2)
> >
> > parallel/serial/game ports
> > More controversial to remove this, since they are *still* in pretty wide
> > use -- but USB and IEEE 1394 are the way to go. No ifs ands or buts.

Really? I'm still running my printer this way (and just bought a parallel
printer/copier/scanner - the USB port isn't finished yet). And one of my
serial mice. Not to mention the plan to add the UPS to the serial lines.
It's still cheaper to use existing serial ports than to buy 4 serial ports
for USB. USB doesn't buy me any performance advantage (yet).

> > a.out
> > Who needs it anymore. I love ELF.
> >
> > I really think doing a clean up is worthwhile. Maybe while looking for
> stuff
> > to clean up we'll even be able to better comment the existing code. Any
> > other features people would like to get rid of? Any comments or
> suggestions?
> > I'd love to start a good discussion about this going so please send me
> your
> > 2 cents.
> >
> > Daniel


-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: isolating process..

2001-06-07 Thread Jesse Pollard

-  Received message begins Here  -

> 
> On Wed, Jun 06, 2001 at 09:57:25PM +0200, Erik Mouw wrote:
> 
> >> Is it possible by any means to isolate any given process, so that
> >> it'll be unable to crash system. 
> > You just gave a nice description what an OS kernel should do :)
> * Sigh * :-)
> 
> > > Please, supply ANY suggestions.
> > > 
> > > My ideas:
> > > 
> > > create some user, and decrease his ulimits up to a minimum of 1 process,
> > > 0 core size, appropriate memory/ etc.
> > That's indeed the way to do it.
> But how should I restrict him from opening a socket and sending some data
> (my IP, for example) somewhere?
> 
> I think I'll end up writing a kernel module which will restrict all
> ioctls but few {mmap, brk, geteuid, geuid, etc..} for given UID.

You might look into the Linux Security Module project. It's not finished
but the hooks may give you what you need to start. See

http://mail.wirex.com/mailman/listinfo/linux-security-module

BTW, it is not possible to guarantee that the process can't crash the system
unless there are no other processes...
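
For completeness, a rough sketch of the ulimit approach mentioned earlier in
the thread - a subshell so only the untrusted command is capped (the flag
letters, and whether -v is supported at all, vary between shells):

```shell
#!/bin/sh
# Run a command under lowered resource limits.  The subshell keeps the
# reduced limits from affecting the invoking shell.
run_limited() {
    (
        ulimit -c 0                  # no core files
        ulimit -t 5 2>/dev/null      # at most 5 CPU seconds
        ulimit -v 65536 2>/dev/null  # ~64 MB address space, if supported
        exec "$@"
    )
}

run_limited echo "still running, but fenced in"
```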

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: OOM process killer: strange X11 server crash...

2001-05-25 Thread Jesse Pollard

Ishikawa <[EMAIL PROTECTED]>:

>Anyway, this time, here is what was printed on the screen (the tail end
> of it).
> --- begin quote ---
> ... could not record the above. they scrolled up and disappeared...
> Out of Memory: Killed process 4550 (XF8_SVGA.ati12).
> __alloc_pages: 0-order allocation failed.
> VM: killing process XF86_SVGA.ati12
> --- end quote
> 
> And before the message disappeared, I think I saw the
> netscape process was killed, too.
> I checked the log message and looked for "Memory"
> Sure enough I found netscape was killed, too, in this case.
> 
> May 25 09:16:46 duron kernel: Memory: 255280k/262080k available (978k
> kernel cod
> e, 6412k reserved, 378k data, 224k init, 0k highmem)
> ...
> May 25 10:45:31 duron kernel: Out of Memory: Killed process 5562
> (netscape).
> May 25 10:45:31 duron kernel: Out of Memory: Killed process 5450
> (XF86_SVGA.ati1
> 2).
>  ...

Something I have noticed with netscape is that if the X server is
killed out from under it (user logout, or killing X11 manually), it
continues to run. The process appears to be looping around select,
attempting to reconnect to the (now dead) X server, and not exiting
like it should.

Other times it seems to terminate properly. The problem may exhibit itself
if netscape is waiting for some asynchronous event (like a name service
lookup, maybe) and misses the signal that its socket to the X server
has failed. If a kill -15 doesn't terminate the rogue netscape, then it
takes a kill -9. In my experience it is in a tight loop, ignoring
normal user input. It could still be expanding its memory consumption...


-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: LANANA: To Pending Device Number Registrants

2001-05-16 Thread Jesse Pollard

Bob Glamm <[EMAIL PROTECTED]>:

> Finally, there has to be an *easy* way of identifying devices from software.
> You're right, I don't care if my network cards are numbered 0-1-2, 2-0-1,
> or in any other permutation, *as long as I can write something like this*:
> 
>   # start up networking
>   for i in eth0 eth1 eth2; do
>   identify device $i
>   get configuration/config procedure for device $i identity
>   configure $i
>   done
> 
> Ideally, the identity of device $i would remain the same across reboots.
> Note that the device isn't named by its identity, rather, the identity is
> acquired from the device.
> 
> This gets difficult for certain situations but I think those situations
> are rare.  Most modern hardware I've seen has some intrinsic identification
> built on board.

Not that rare here - HIPPI, parallel, serial lines, fibre channel, FDDI
interfaces... Anywhere there are two or more interfaces that do not or
cannot carry self identification. Consider the problem of two rocket port
multiplexor adapters where one uses high speed devices, another lower speed.
If the interfaces are swapped, which is the high speed? Or multiple parallel
interfaces. Which one has the color printer?

Even in cases with ethernet: if there are two interfaces, which one is
eth0? Only the MAC address knows for sure, and some of the interfaces allow
changing the MAC address. It depends entirely on the wiring, not the MAC,
as to which IP to assign. This is doable using DHCP, but not static IP.
Serial/parallel interfaces don't have that option. Neither do FDDI devices;
broadcast on FDDI isn't really defined.
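
As a sketch of why stable identity matters here, the quoted "identify,
then configure" loop might look like this, keying off the MAC rather than
probe order (get_mac and the MACs below are placeholders, standing in for
however the identity is actually read - ifconfig output, an ioctl, etc.):

```shell
#!/bin/sh
# Configure interfaces by hardware identity, not by probe order.
get_mac() {
    # Placeholder: pretend only eth0 answers, with a fixed MAC.
    case "$1" in
        eth0) echo 00:11:22:33:44:55 ;;
        *)    return 1 ;;
    esac
}

for i in eth0 eth1 eth2; do
    mac=$(get_mac "$i") || continue   # skip interfaces we can't identify
    case "$mac" in
        00:11:22:33:44:55) echo "configuring $i as the outside interface" ;;
        *)                 echo "no static config for $i ($mac)" ;;
    esac
done
```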

> > Linux gets this right. We don't give 100Mbps cards different names from
> > 10Mbps cards - and pcmcia cards show up in the same namespace as cardbus,
> > which is the same namespace as ISA. And it doesn't matter what _driver_ we
> > use.
> > 
> > The "eth0..N" naming is done RIGHT!
> > 
> > > 2 (disk domain). I have multiple spindles on multiple SCSI adapters. 
> > 
> > So? Same deal. You don't have eth0..N, you have disk0..N. 
> [...]
> > Linux gets this _somewhat_ right. The /dev/sdxxx naming is correct (or, if
> > you look at only IDE devices, /dev/hdxxx). The problem is that we don't
> > have a unified namespace, so unlike eth0..N we do _not_ have a unified
> > namespace for disks.
> 
> This numbering seems to be a kernel categorization policy.  E.g.,
> I have k eth devices, numbered eth0..k-1.  I have m disks, numbered
> disc0..m-1, I have n video adapters, numbered fb0..n-1, etc.  This
> implies that at some point someone will have to maintain a list of 
> device categories.
> 
> IMHO the example isn't consistent though.  ethXX devices are a different
> level of classification from diskYY.  I would argue that *all* network
> devices should be named net0, net1, etc., be they Ethernet, Token Ring, Fibre
> Channel, ATM.  Just as different disks be named disk0, disk1, etc., whether
> they are IDE, SCSI, ESDI, or some other controller format.

Ummm... not sure. Fibre channel attached to disks is/should be treated
differently from fibre channel attached to compute clusters. Different
characteristics of use. Ethernet/fibre/FDDI/ATM all have drastically
different characteristics and have to be initialized differently. Even
HIPPI can be used as a network device... but it is also a storage
attachment device. Different characteristics, different MTU.

There must still be a way to specify/identify this uniqueness.

Looking at SCSI:

1. hardware controller driver
2. SCSI mid layer
3. device specific driver

The mid level is used to make multiple hardware controllers look the same
to the device specific drivers. This hierarchy must also be identified, and
in some cases (non-SCSI issues) should be initialized differently.

When there are two (or more) hardware controllers, there may need to be
different ways to force the hardware controller to scan for newly attached
devices. The mid layer may have to scan for new/replaced hardware controllers
(hot swap PCI), the device specific drivers may need to scan for new units
(possibly initiated from lower levels, possibly from external request).

Each unit at each level needs to be able to be addressed.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: LANANA: To Pending Device Number Registrants

2001-05-16 Thread Jesse Pollard

Bob Glamm [EMAIL PROTECTED]:

 Finally, there has to be an *easy* way of identifying devices from software.
 You're right, I don't care if my network cards are numbered 0-1-2, 2-0-1,
 or in any other permutation, *as long as I can write something like this*:
 
   # start up networking
   for i in eth0 eth1 eth2; do
   identify device $i
   get configuration/config procedure for device $i identity
   configure $i
   done
 
 Ideally, the identity of device $i would remain the same across reboots.
 Note that the device isn't named by its identity, rather, the identity is
 acquired from the device.
 
 This gets difficult for certain situations but I think those situations
 are rare.  Most modern hardware I've seen has some intrinsic identification
 built on board.

Not that rare here - HIPPI, parallel, serial lines, fibre channel, FDDI
interfaces... Anywhere there are two or more interfaces that do not or
cannot carry self-identification. Consider the problem of two RocketPort
multiplexer adapters where one drives high-speed devices and the other
lower-speed ones. If the interfaces are swapped, which is the high-speed
one? Or multiple parallel interfaces - which one has the color printer?

Even with ethernet: if there are two interfaces, which one is eth0?
Only the MAC address knows for sure, and some of the interfaces allow
changing the MAC address. Which IP to assign depends entirely on the
wiring, not the MAC. That is doable using DHCP, but not static IP.
Serial/parallel interfaces don't have that option. Neither do FDDI
devices; broadcast on FDDI isn't really defined.

  Linux gets this right. We don't give 100Mbps cards different names from
  10Mbps cards - and pcmcia cards show up in the same namespace as cardbus,
  which is the same namespace as ISA. And it doesn't matter what _driver_ we
  use.
  
  The eth0..N naming is done RIGHT!
  
   2 (disk domain). I have multiple spindles on multiple SCSI adapters. 
  
  So? Same deal. You don't have eth0..N, you have disk0..N. 
 [...]
  Linux gets this _somewhat_ right. The /dev/sdxxx naming is correct (or, if
  you look at only IDE devices, /dev/hdxxx). The problem is that we don't
  have a unified namespace, so unlike eth0..N we do _not_ have a unified
  namespace for disks.
 
 This numbering seems to be a kernel categorization policy.  E.g.,
 I have k eth devices, numbered eth0..k-1.  I have m disks, numbered
 disc0..m-1, I have n video adapters, numbered fb0..n-1, etc.  This
 implies that at some point someone will have to maintain a list of 
 device categories.
 
 IMHO the example isn't consistent though.  ethXX devices are a different
 level of classification from diskYY.  I would argue that *all* network
 devices should be named net0, net1, etc., be they Ethernet, Token Ring, Fibre
 Channel, ATM.  Just as different disks be named disk0, disk1, etc., whether
 they are IDE, SCSI, ESDI, or some other controller format.

Ummm... not sure. Fibre channel attached to disks is (or should be)
treated differently from fibre channel attached to compute clusters -
different characteristics of use. Ethernet/fibre/FDDI/ATM all have
drastically different characteristics and have to be initialized
differently. Even HIPPI can be used as a network device... but it is
also a storage attachment device. Different characteristics, different
MTU.

There must still be a way to specify/identify this uniqueness.

Looking at SCSI:

1. hardware controller driver
2. SCSI mid layer
3. device specific driver

The mid level is used to make multiple hardware controllers look the same
to the device specific drivers. This hierarchy must also be identified, and
in some cases (non-SCSI issues) should be initialized differently.

When there are two (or more) hardware controllers, there may need to be
different ways to force the hardware controller to scan for newly attached
devices. The mid layer may have to scan for new/replaced hardware controllers
(hot swap PCI), the device specific drivers may need to scan for new units
(possibly initiated from lower levels, possibly from external request).

Each unit at each level needs to be able to be addressed.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Not a typewriter

2001-05-14 Thread Jesse Pollard

-  Received message begins Here  -

> 
> > IIRC, the 6 character linker requirement came from when the Bell Labs folk
> > ported the C compiler to the IBM mainframe world, not from the early UNIX (tm)
> > world.  During the original ANSI C meetings, I got the sense from the IBM rep,
> 
> 6 character linker name limits are very old. Honeywell L66 GCOS3/TSS which I
> had the dubious pleasure of experiencing and which is a direct derivative of
> GECOS and thus relevant to the era like many 36bit boxes uses 6 char link names
> 
> Why - well because 6 BCD characters fit in a 36-bit word and it's a single compare
> to check symbol matches

well... actually it was 6-bit "ASCII" computed from: (char - ' '). It depends
entirely on the architecture and implementation. EBCD/6-bit/7-bit and EBCDIC
were all supported on the Honeywell systems.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: mount /dev/hdb2 /usr; swapon /dev/hdb2 keeps flooding

2001-05-13 Thread Jesse Pollard

On Sat, 12 May 2001, Alexander Viro wrote:
>On Sun, 13 May 2001, Alan Cox wrote:
>
>> > > root@kama3:/home/szabi# cat /proc/mounts
>> > > /dev/hdb2 /usr ext2 rw 0 0
>> > > root@kama3:/home/szabi# swapon /dev/hdb2
>> > 
>> > - Doctor, it hurts when I do it!
>> > - Don't do it, then.
>> > 
>> > Just what behaviour had you expected?
>> 
>> EBUSY would be somewhat nicer.
>
>Probably. Try to convince Linus that we need such exclusion. All stuff
>needed to implement it is already there - see blkdev_open() for details.
>OTOH, as long as kernel itself survives that... In this case I can see
>the point in "give them enough rope" approach.

It doesn't matter  The device is not a swap partition - from the
original message:
> root@kama3:/home/szabi# swapon /dev/hdb2
> set_blocksize: b_count 1, dev ide0(3,66), block 2, from c0126b48
> set_blocksize: b_count 1, dev ide0(3,66), block 3, from c0126b48
> set_blocksize: b_count 1, dev ide0(3,66), block 4, from c0126b48
> set_blocksize: b_count 1, dev ide0(3,66), block 5, from c0126b48
> set_blocksize: b_count 1, dev ide0(3,66), block 6, from c0126b48
> set_blocksize: b_count 1, dev ide0(3,66), block 7, from c0126b48
> set_blocksize: b_count 1, dev ide0(3,66), block 8, from c0126b48
> set_blocksize: b_count 1, dev ide0(3,66), block 1, from c0126b48
> Unable to find swap-space signature
> swapon: /dev/hdb2: Invalid argument

The error message is quite clear (the set_blocksize output isn't, but that
happens during identification and is status information, not an error).

If you are going to swap on a mounted file system, then you have to
specify a swap file, formatted for swap.

The message that was output says exactly what is needed to protect
against configuration errors.

-- 
-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: ide messages in log. Hard disk dying or linux ide problem?

2001-05-08 Thread Jesse Pollard

"Joel Beach" <[EMAIL PROTECTED]>:
> Hi,
> 
> Until three or four weeks ago, I have been running kernel 2.4.2 with no
> problems. However, my hard disk now seems to be playing up. In my system log, I
> get the following messages.
> 
> May  3 08:13:14 kinslayer kernel: hda: dma_intr: error=0x40 {
> UncorrectableError }, LBAsect=3790389, sector=3790208
> May  3 08:13:14 kinslayer kernel: end_request: I/O error, dev 03:01 (hda),
> sector 3790208
> May  3 08:22:34 kinslayer kernel: hda: dma_intr: error=0x40 {
> UncorrectableError }, LBAsect=4614116, sector=4613824
> May  3 08:22:34 kinslayer kernel: end_request: I/O error, dev 03:01 (hda),
> sector 4613824
> 
> This only seems to affect access to my mounted FAT32 partition. Sometimes,
> windows itself won't load because it can't find the registry file. 
> 
> The problem manifests itself when the computer is first turned on. You can tell
> immediately if the problem is going to happen as the BIOS autodetect of the
> hard drive takes a long time. The access noise is also quite peculiar, with
> three low pitched accesses, followed by three high pitched accesses.
> 
> The problem seems to disappear after the computer has been used for a while,
> which seems to suggest flakey hardware to me.

Flakey hardware... It is amazing, but I just had an IDE disk fail (continuous
running for the last year and a half - disk, not system) with the same
symptoms. We decided that it was a head crash. Sometimes the disk was
identified, sometimes not. After the initial failure it ran for about 12 hours,
then did it again. Off and on it would work, sometimes recognized, sometimes
unable to read the partition table. I got it working just long enough to copy
most of the current configuration files.

A windows diagnostic disk couldn't even recover the disk via low level
formatting (we didn't expect it to, but wanted to try the software out).

In addition to the errors you show, it is possible to get "short read"
errors as well.

BTW - the kernel was an old 2.0.33 system that has given very good service...

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: inserting a Forth-like language into the Linux kernel

2001-05-06 Thread Jesse Pollard

On Sat, 05 May 2001, Rick Hohensee wrote:
>kspamd/H3sm is now making continuous writes to tty1 from an 
>in-kernel thread. It was locking on a write to /dev/console by
>init, so I made /dev/console a plain file. This is after 
>hollowing out sys_syslog to be a null routine, and various 
>other minor destruction.
>
>I am now typing at you on tty4 or so while the kernel itself 
>sends an endless stream of d's to tty1. It will scroll-lock 
>and un-scroll-lock, which is how I can tell it's not just a 
>static screen of d's.
>
>I don't know about H1 S, but the ability to open a tty
>normally directly into kernelspace may prove popular, particularly 
>with a Forth on that tty in that kernelspace. Persons with actual 
>kernel clue may want to look at allowing /dev/console users and 
>an in-kernel tty user to play nice. For my purposes I'll do without 
>a real /dev/console and syslogging for now. 
>
>Now I get to find out how many worlds of trouble I didn't foresee
>in _reading_ a tty from the kernel :o)
>
>If someone knows of another example of interpreter-like behavior 
>directly in a unix in-kernel thread I'd like to know about it.  

Only in reference to allowing for virus infection of the kernel.

It isn't a good idea.

-- 
-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



RE: [RFC] Direct Sockets Support??

2001-05-03 Thread Jesse Pollard

-  Received message begins Here  -

> 
> 
>   > Doesn't this bypass all of the network security controls? Granted
> - it is
>   > completely reasonable in a dedicated environment, but I would
> think the
>   > security loss would prevent it from being used for most usage.
> 
>   Direct Sockets makes sense only in clustering (server farms) to
> reduce intra-farm communication. It is *not* supposed to be used for regular
> internet. Direct Sockets over subnets is also tough to implement it across
> different topology subnets. Fabrics like Infiniband provide security on
> hardware, so there is no need to worry about it. The simple point  is that
> hw supports TCP/IP, then why do we need a software TCP/IP over it?

Because the hardware doesn't have the user's security context. All it can
see are addresses, socket numbers, and protocol. Neither can it be extended
with that information (IPSec). Authentication of the connections is not
possible.

Now... if the server farm only runs one job at a time, it is irrelevant...

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



RE: [RFC] Direct Sockets Support??

2001-05-03 Thread Jesse Pollard


>   > Define 'direct sockets' firstly.
>   Direct Sockets is the ability by which the application (using sockets)
> can use the hardwares features to provide connection, flow control,
> etc.,instead of the TCP and IP software module. A typical hardware
> technology is Infiniband . In Infiniband, the hardware supports IPv6 . For
> this type of devices there is no need for software TCP/IP. But for
> networking application, which mostly uses sockets, there is a performance
> penalty with using software TCP/IP over this hardware. 
> 
> > I have seen several lines of attack on very high bandwidth devices.
> > Firstly
> > the linux projects a while ago doing usermode message passing directly
> > over
> > network cards for ultra low latency. Secondly there was a VI based project
> > that was mostly driven from userspace.
> > 
>   The application needs to rewritten to use VIPL, but if we could
> provide a sockets over VI (or Sockets over IB), then the existing
> applications can run with a known environment. 
> 
> 
> > One thing that remains unresolved is the question as to whether the very
> > low
> > cost Linux syscalls and zero copy are enough to achieve this using a
> > conventional socket API and the kernel space, or whether a hybrid direct 
> > access setup is actually needed.
> > 
>   My point is that if the hardware is capable of doing TCP/IP , we
> should let the sockets layer talk directly to it (direct sockets). Thereby
> the application which uses the sockets will get better performance.

Doesn't this bypass all of the network security controls? Granted - it is
completely reasonable in a dedicated environment, but I would think the
security loss would prevent it from being used in most settings.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: init process in 2.2.19

2001-04-27 Thread Jesse Pollard

Subba Rao <[EMAIL PROTECTED]>:
> I am trying to add a process which is to be managed by init. I have added the
> following entry to /etc/inittab
> 
> SV:2345:respawn:env - PATH=/usr/local/bin:/usr/sbin:/usr/bin:/bin svscan /service
> </dev/null 2>/dev/console
> 
> After saving, I execute the following command:
> 
>   # kill -HUP 1
> 
> This does not start the process I have added. The process that I have added
> only starts when I do:
> 
>   # init u
> or
>   # telinit u
> 
> PS - The process will not start even after a reboot. I have to manually do one
> of the above commands as root.
> 
> My kernel version is : 2.2.19
> Distro : Slackware
> GCC : gcc version egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)
> 
> Any help appreciated.

I'm using Slackware 7.1, so one of the following possible solutions may work:

First
Make sure the daemon is available at boot time - if /usr/local/bin is
where the svscan daemon exists, then /usr/local must be part of the
root file system.

What I do is have a "/host" directory tree on the root file system
for this purpose. Alternatively, I start the daemon when the system
enters multi-user mode (either /etc/rc.d/rc.local, or one of the
already existing scripts related to what the daemon does).

A second possibility (try this first - it's easier):
I see that the daemon is to run in levels "2345". There is a possibility
that you have this entry near the beginning of the inittab. If so, try
putting it at the end. I believe that init runs each line of the
inittab for a given run level in the same order that it appears in the
file. Putting the entry last should allow it to be started AFTER
all file systems are mounted - the  entry for multiuser mode is:

# Script to run when going multi user.
rc:2345:wait:/etc/rc.d/rc.M

If your daemon entry follows this line then it may work as you
expect.
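
Concretely, the reordered section of /etc/inittab would read (the svscan
entry is the one from the original question, minus its redirections):

```
# Script to run when going multi user.
rc:2345:wait:/etc/rc.d/rc.M

# Respawned daemons go after rc.M so that /usr/local is mounted first:
SV:2345:respawn:env - PATH=/usr/local/bin:/usr/sbin:/usr/bin:/bin svscan /service
```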

Remember, any facility that the daemon depends on must be
initialized before the daemon starts - If it uses the network
then the network needs to be loaded (mine needs sockets loaded...)
before the daemon is started.

Note: since the assumption that the daemon is in /usr/local and
  that /usr/local is a separate file system is true, then
  you will no longer be able to dismount the /usr/local
  file system while in multi-user mode (it's busy). This may
  only be relevent to how your backups are done.

BTW, SIGHUP may not be the correct signal - from the init manpage:

   SIGHUP
Init looks for /etc/initrunlvl and  /var/log/initrun-
lvl.   If  one  of  these  files exist and contain an
ASCII runlevel, init switches to  the  new  runlevel.
This  is  for backwards compatibility only! .  In the
normal case (the files don't exist) init behaves like
telinit q was executed.

The only documented startup is "init u" or "telinit u". To re-read the
inittab file use "init q" or "telinit q". I suspect the manpage is a
little "inaccurate" in stating that SIGHUP is equivalent to "telinit q".

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: /proc format (was Device Registry (DevReg) Patch 0.2.0)

2001-04-25 Thread Jesse Pollard

Tim Jansen <[EMAIL PROTECTED]>:
> On Wednesday 25 April 2001 21:37, you wrote:
> > Personally, I think
> >>proc_printf(fragment, "%d %d",get_portnum(usbdev), usbdev->maxchild);
> > is shorter (and faster) to parse with
> > fscanf(input, "%d %d", &portnum, &maxchild);
> 
> Right, but what happens if you need to extend the format? For example
> somebody adds support for USB 2.0 to the kernel and you need to add some new
> values. Then you would have the choice between changing the format and
> breaking applications, or keeping the format and not providing the additional
> information.
> With XML (or single-value-per-file) it is easy to tell applications to ignore
> unknown tags (or files). When you just list values you will be damned sooner
> or later, unless you make up additional rules that say how apps should handle
> these cases. And then your approach is no longer simple, but possibly even
> more complicated.

Not necessarily. If the "extended data" is put following the current data
(since the data is currently record oriented), just making the output
format longer will not/should not cause problems in reading the data.
Just look at FORTRAN for an example of an extensible input :-) More data
on the record will/should just be ignored. The only coding change might
be to use an fgets to read a record, followed by an sscanf to get the known
values.

Alternatively, you can always put one value per record:
tag:value
tag2:value2...

This is still simpler than XML to read, and to generate.

The problem with this and XML is the same - if the tag is no longer relevant
(or changes its name), then the output must either continue to include it, or
break applications that depend on that tag.

In all cases, atomic extraction of the structured data will be problematic,
since there may be buffering issues in output. XML is very verbose, the
tagged format is better; but a series of values goes even farther...

Try them out - just go through the /proc/net formats and stick in the
XML... Just don't count on the regular utilities to decode them. It would
give some actual results to compare with the current structure.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [PATCH] Single user linux

2001-04-25 Thread Jesse Pollard

-  Received message begins Here  -

> 
> On Wed, 25 Apr 2001, Rick Hohensee wrote:
> 
> > [EMAIL PROTECTED] wrote:
> > > for those who didn't read that patch, i #define capable(),
> > > suser(), and fsuser() to 1. the implication is all users
> > > will have root capabilities.
> >
> > How is that not single user?
> 
> Every user still has it's own account, means profile etc.

Until some user removes all the other users...
Or reads the other users' mail...
Or changes the other users' configuration...

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: /proc format (was Device Registry (DevReg) Patch 0.2.0)

2001-04-25 Thread Jesse Pollard

-  Received message begins Here  -

> 
> On Wednesday 25 April 2001 19:10, you wrote:
> > The command
> >   more foo/* foo/*/*
> > will display the values in the foo subtree nicely, I think.
> 
> Unfortunately it displays only the values. Dumping numbers and strings 
> without knowing their meaning (and probably not even the order) is not very 
> useful.
> 
> > Better to factor the XML part out to a userspace library...
> 
> But the one-value per file approach is MORE work. It would be less work to 
> create XML and factor out the directory structure in user-space :)
> Devreg collects its data from the drivers, each driver should contribute the 
> information that it can provide about the device.
> Printing a few values in XML format using the functions from xmlprocfs is as 
> easy as writing
> proc_printf(fragment, "<usb:topology port=\"%d\" portnum=\"%d\"/>\n",
> get_portnum(usbdev), usbdev->maxchild);
> 
> Extending the devreg output with driver-specific data means registering a 
> callback function that prints the driver's data. The driver should use its 
> own XML namespace, so whatever the driver adds will not break any 
> (well-written) user-space applications. The data is created on-demand, so the 
> values can be dynamic and do not waste any space when devreg is not used. 
> 
> The code is easy to read and not larger than a solution that creates static 
> /proc entries, and holding the data completely static would take much more 
> memory. And it takes less code than a solution that would create the values 
> in /proc dynamically because this would mean one callback per file or a 
> complicated way to specify several values with a single callback. 

Personally, I think

proc_printf(fragment, "%d %d",get_portnum(usbdev), usbdev->maxchild);

(or the string "ddd ddd" with d representing a digit)

is shorter (and faster) to parse with

fscanf(input,"%d %d",&usbdev,&maxchild);

Than it would be to try parsing

<usb:topology port="ddd" portnum="ddd"/>

with an XML parser.
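As a rough sketch of the difference (the attribute layout is assumed from the example above, and a conforming XML parser would need far more than this naive scan):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Plain format: one sscanf call does the whole job. */
int parse_plain(const char *line, int *port, int *maxchild)
{
    return sscanf(line, "%d %d", port, maxchild) == 2;
}

/* XML-ish format: even a naive, non-conforming attribute scan
   takes real string work (and handles no entities, namespaces,
   attribute reordering with whitespace, ...). */
int parse_xml(const char *line, int *port, int *maxchild)
{
    const char *p = strstr(line, "port=\"");
    const char *m = strstr(line, "portnum=\"");

    if (!p || !m)
        return 0;
    *port = atoi(p + strlen("port=\""));
    *maxchild = atoi(m + strlen("portnum=\""));
    return 1;
}
```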

Sorry - XML is good for some things. It is not designed to be an
interface language between a kernel and user space.

I am NOT in favor of "one file per value", but structured data needs
to be written in a reasonable, concise manner. XML is intended for
communication between disparate systems in an extremely precise manner,
to allow some self documentation to be included when the communication
fails.

Even Lisp S expressions are easier :-)

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [OFFTOPIC] Re: [PATCH] Single user linux

2001-04-24 Thread Jesse Pollard

-  Received message begins Here  -

> 
> > 1. email -> sendmail
> > 2. sendmail figures out what it has to do with it. turns out it's deliver
> ...
> 
> > Now, in order for step 4 to be done safely, procmail should be running
> > as the user it's meant to deliver the mail for. for this to happen
> > sendmail needs to start it as that user in step 3 and to do that it
> > needs extra privs, above and beyond that of a normal user.
> 
>   email -> sendmail
>   sendmail 'its local' -> spool
> 
> user:
>   get_mail | procmail
>   mutt
> 
> The mail server doesnt need to run procmail. If you wanted to run mail batches
> through on a regular basis you can use cron for it, or leave a daemon running

And get_mail must have elevated privileges to search for the user's mail...
or sendmail must have already switched user on receipt to put it in the
user's inbox, which also requires privileges...

And an additional daemon (owned by the user) is yet another attack point...

Cron could be used to batch message handling... as long as it runs before
the user's quota is used up. This becomes the same as using IMAP or fetchmail
to download it.

It's much more efficient to process each mail message as it arrives.

All this does is move the program that requires privileges to somewhere
else. It doesn't eliminate it.

Granted, sendmail could use a better implementation of a security model.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [OFFTOPIC] Re: [PATCH] Single user linux

2001-04-24 Thread Jesse Pollard

Tomas Telensky <[EMAIL PROTECTED]>
> On Tue, 24 Apr 2001, Alexander Viro wrote:
> > On Tue, 24 Apr 2001, Tomas Telensky wrote:
> > 
> > > of linux distributions the standard daemons (httpd, sendmail) are run as
> > > root! Having multi-user system or not! Why? For only listening to a port
> > > <1024? Is there any elegant solution?
> > 
> > Sendmail is old. Consider it as a remnant of times when network was
> > more... friendly. Security considerations were mostly ignored - and
> > not only by sendmail. It used to be choke-full of holes. They were
> > essentially debugged out of it in late 90s. It seems to be more or
> > less OK these days, but it's full of old cruft. And splitting the
> > thing into reasonable parts and leaving them with minaml privileges
> > they need is large and painful work.

Actually, if you view sendmail as being an expert system it is very
cutting edge :-) It can identify a user from very skimpy data if it
is allowed to (fuzzy matching user names). It identifies local hosts
(with FQDN or partial name, or only host name).

> Thanks for the comment. And why not just let it listen to 25 and then
> being run as uid=nobody, gid=mail?

Because then everybody's mail would be owned by user "nobody".

There are some ways to do this, but they are unreliable.

   1. If the user's mail is delivered to /var/mail/<username>; then the
  file /var/mail/<username> must always exist.

This requires ALL MUAs to truncate the file.
Some MUAs use file existence to determine if there is new mail.
If it doesn't exist, then no new mail... ever.

   2. sendmail will not be able to create the /var/mail/<username> mail box.

   3. sendmail will not be able to process forwarding mail.
User nobody should not be able to read files in the user's home
directory... .forward files are private to the user...

   4. sendmail will not be able to process user mail filters (same problem
as forwarding).

Note: these filters are applied on receipt of mail (saves time and
disk space since the filter can discard mail immediately or put it
in appropriate folders immediately).

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: light weight user level semaphores

2001-04-20 Thread Jesse Pollard

Olaf Titz <[EMAIL PROTECTED]>:
> > Ehh.. I will bet you $10 USD that if libc allocates the next file
> > descriptor on the first "malloc()" in user space (in order to use the
> > semaphores for mm protection), programs _will_ break.
> 
> Of course, but this is a result from sloppy coding. In general, open()
> can just return anything and about the only case where you can even
> think of ignoring its result is this:
>  close(0); close(1); close(2);
>  open("/dev/null", O_RDWR); dup(0); dup(0);
> (which is even not clean for other reasons).
> 
> I can't imagine depending on the "fact" that the first fd I open is 3,
> the next is 4, etc. And what if the routine in question is not
> malloc() but e.g. getpwuid()? Both are just arbitrary library
> functions, and one of them clearly does open file descriptors,
> depending on their implementation.
> 
> What would the reason[1] be for wanting contiguous fd space anyway?
> 
> Olaf
> 
> [1] apart from not having understood how poll() works of course.

Optimization use in select: If all "interesting" file id's are known
to be below "n", then only the first "n" bits in a FD_ISSET need to
be examined. As soon as the bits are scattered, it takes MUCH longer
to check for activity

It may not be the "best" way, but what I tend to do is:

 Umm - this is snipped from a multiplexed logger using FIFOs for
 an indeterminate amount of data from different utilities sending
 text buffers (normally one line at a time, but could be more).

static void fd_init(argc, argv)
int argc;       /* number of parameters */
char **argv;    /* parameter list       */
{
    int i, j;   /* scratch counters */
    static char str[50];

    pnames = argv;
    FD_ZERO(&in_files);     /* init all file descriptor sets */

    for (i = 0; i <= MAX_LOG && i < argc; i++) {
        sprintf(str, "/tmp/%s", pnames[i]);
        mkfifo(str, 0600);  /* assume it exists */
        inlogfd[i] = open(str, O_RDONLY | O_NDELAY);
        FD_SET(inlogfd[i], &in_files);
    }
    used = i;
}


Then I can scan for any activity by:

do {
    /* note: select's first argument should really be (highest fd + 1) */
    while (select(MAX_LOG, &active, NULL, NULL, NULL) >= 0) {
        for (i = 0; i < used; i++) {    /* used is a count, so < not <= */
            if (FD_ISSET(inlogfd[i], &active)) {
                r = ioctl(inlogfd[i], FIONREAD, &n);
                while (n > 0) {
                    r = (n > BUF_MAX - 1) ? BUF_MAX - 1 : n;
                    read(inlogfd[i], buf, r);
                    printbuf(pnames[i], r);
                    n -= r;
                }
            }
        }
        active = in_files;
    }
} while (errno == EINTR);

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



RE: IP Acounting Idea for 2.5

2001-04-17 Thread Jesse Pollard

-  Received message begins Here  -

> 
> Jesse Pollard replies:
> to Leif Sawyer who wrote: 
> >> Besides, what would be gained in making the counters RO, if 
> >> they were cleared every time the module was loaded/unloaded?
> > 
> > 1. Knowledge that the module was reloaded.
> > 2. Knowledge that the data being measured is correct.
> > 3. Having reliable measures.
> > 4. Being able to derive valid statistics.
> > 
> 
> Good.  Now that we have valid objectives to reach, which of these
> are NOT met by making the fixes entirely in userspace, say by
> incorporating a wrapper script to ensure that no external applications
> can flush the table counters?
> 
> They're still all met, right?

Nope - some of the applications that may be purchased do not have
to go through the scripts to reset the counters. They just need access
to the counters, and reset is built into the applications. This violates
all 4 objectives.

> And we haven't had to fill the kernel with more cruft.

Removing/no-oping the reset code would make the module
SMALLER, and simpler.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



RE: IP Acounting Idea for 2.5

2001-04-17 Thread Jesse Pollard

Leif Sawyer <[EMAIL PROTECTED]>:
> > And that introduces errors in measurement. It also depends on 
> > how frequently an uncontroled process is clearing the counters.
> > You may never be able to get a valid measurement.
> 
> This is true.  Which is why application programmers need to write
> code as if they are not the only [ab]users of data.
> 
> Which brings me back to my point.
> 
> Don't force the kernel to uphold your local application requirements
> of stable counters.
> 
> Enforce it in the userspace portion of the code.
> 
> 
> Yes, you could extend the proc filesystem (ugh) with a flag that could
> be read by the ip[chains|tables] user app to determine if clearing flags
> were allowed.  Then a simple
> 
> echo 1 > /proc/sys/net/ipv4/counters_locked
> 
> or some such cruft.  But I don't see this extension making into the
> standard kernel at this time.  It just seems to be wasteful.
> 
> 
> If you (at your site) really need this type of functionality, it's
> pretty darn simple to write a wrapper to ip[tables|chains] which
> silently (or not so) drops the option to clear the counters before
> calling the real version.
> 
> Besides, what would be gained in making the counters RO, if they were
> cleared every time the module was loaded/unloaded?

1. Knowledge that the module was reloaded.
2. Knowledge that the data being measured is correct.
3. Having reliable measures.
4. Being able to derive valid statistics.


-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: PATCH(?): linux-2.4.4-pre2: fork should run child first

2001-04-17 Thread Jesse Pollard

Brunet <[EMAIL PROTECTED]>:
> >"Adam J. Richter" <[EMAIL PROTECTED]> said:
> >
> >>I suppose that running the child first also has a minor
> >> advantage for clone() in that it should make programs that spawn lots
> >> of threads to do little bits of work behave better on machines with a
> 
> There is another issue with this proposition. I have begun to write (free
> time, slow pace) an userland sandbox which allows me to prevent a process
> and its childs to perform some given actions, like removing files or
> writing in some directories. This works by ptrace-ing the process,
> modifying system calls and faking return values. It also needs,
> obviously, to ptrace-attach childs of the sandboxed process. When the
> parent in a fork runs first, the sandbox program has time to
> ptrace-attach the child before it does any system call. Obviously, if the
> child runs first, this is no longer the case.
> 
> If it is decided that the child should run first in a fork, there should
> be a way to reliably ptrace-attach it before it can do anything.
> 
> By the way, I tried to solve this problem in my sandbox program by
> masqerading any fork call into a clone system call with the flag
> CLONE_PTRACE. I had hoped that the child would in this way start already
> ptraced. However, this didn't work as expected. The child did start in a
> ptraced state, but the owner of the trace was its parent (which issued
> the fork), and not my sandbox process which was ptracing this parent. I
> find that this behaviour is really weird and useless. I could simulate
> the current behaviour simply by calling ptrace(TRACE_ME,..) in the child.
> What is the real use of the CLONE_PTRACE flag ?

I believe it allows the debugger to start the process to be debugged.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Is printing broke on sparc ?

2001-04-17 Thread Jesse Pollard

"Mr. James W. Laferriere" <[EMAIL PROTECTED]>:
[snip]
>   .. ie:  cat /etc/printcap > /dev/lp0(or /dev/par0)
>   gets me :
> 
> /c#eodiecnyotai rhernili s to rpaemn
> s eehpo o-.ROLPR0 roif{\=sl:x
>   /p:ao/lr
>   which is where it rolls off the paper .
>   printer is a DECLaser 2200  .  I have the PostScript option card
>   for it , but when it is installed -notthing- gets output so I
>   tried the above experiment without it installed .  With the option
>   installed the display shows 'PS Waiting' Then shortly 'PS
>   Processing' then 'PS Ready' .  This happens whether I cat .ps
>   files or not .  I beleive that something is garbling the data
>   being sent .

I have the 5100 printer - It is expecting PCL when the PS option is not
set. With it set, it only prints postscript.

What I did was to pass the data through enscript/nenscript to convert
to postscript. Then I had no problems at all.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



RE: IP Acounting Idea for 2.5

2001-04-17 Thread Jesse Pollard

Leif Sawyer <[EMAIL PROTECTED]>:
> > From: Ian Stirling [mailto:[EMAIL PROTECTED]]
> > > Manfred Bartz responded to
> > > > Russell King <[EMAIL PROTECTED]> who writes:
> > 
> > > > You just illustrated my point.  While there is a reset capability
> > > > people will use it and accounting/logging programs will get wrong
> > > > data.  Resetable counters might be a minor convenience 
> > when debugging
> > > > but the price is unreliable programs and the loss of the 
> > ability of
> > > > several programs to use the same counters.
> > > 
> > > You of course, are commenting from the fact that your 
> > applications are
> > > stupid, written poorly, and cannot handle 'wrapped' data.  Take MRTG
> > 
> > > Similarly, if my InPackets are at 102345 at one read, and 
> > 2345 the next
> > > read,
> > > and I know that my counter is 32 bits, then I know i've 
> > wrapped and can do
> > 
> > I think the point being made is that if InPackets are at 
> > 102345 at one read,
> > and 2345 the next, and you know it's a 32 bit counter, it's completely
> > unreliable to assume that you have in fact received 4294867296
> > packets, if the counter can be zeroed.
> > You can say nothing other than at least 2345 packets, at most 
> > 2345+n*2^32 have been got since you last checked.
> 
> Ah, yes.. I seem to have misplaced a bit of text in my reply.
> 
> The continuation of thought:
> 
> How the application derives the status of a wrapped counter or
> a zero'ed counter is dependant on the device being monitored.
> 
> Yes, you have to know what your interface is capable of (maxbytes/sec)
> so that you can do a simple calculation where:
> 
> maximum_throughput = maxbytes_sec * (time_now - time_last_read)
> 
> and if your previous good counter + the maximum throughput wraps the
> counter, you have a good chance that you've simply wrapped.
> 
> If not, then you can assume that your counters were cleared at some point,
> log the data you've got, and keep moving forward.

And that introduces errors in measurement. It also depends on how frequently
an uncontrolled process is clearing the counters. You may never be able to
get a valid measurement.
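The wrap-vs-reset guess quoted above (maximum_throughput = maxbytes_sec * elapsed) can be sketched in a few lines; all numbers and the rate bound are made up for illustration:

```shell
# Sketch of the wrap-vs-reset heuristic debated in this thread.  A
# cleared counter and a wrapped counter can only be told apart by an
# assumed maximum rate -- which is exactly why the reply above calls
# the measurement unreliable.  All values are illustrative.
prev=102345        # previous reading
cur=2345           # current reading
rate=1000000       # assumed max units/sec for the interface
interval=60        # seconds between the two reads
wrap=4294967296    # 2^32

if [ "$cur" -lt "$prev" ]; then
    if [ $((prev + rate * interval)) -ge "$wrap" ]; then
        echo "probably wrapped: delta=$((wrap - prev + cur))"
    else
        echo "probably cleared: at least $cur units since reset"
        # prints: probably cleared: at least 2345 units since reset
    fi
fi
```

With these numbers the bound rules out a wrap, so the only safe claim is "at least 2345 units" — Jesse's point being that an uncontrolled reset makes even that guess untrustworthy.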

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: PATCH(?): linux-2.4.4-pre2: fork should run child first

2001-04-17 Thread Jesse Pollard

Brunet <[EMAIL PROTECTED]>:
> "Adam J. Richter" <[EMAIL PROTECTED]> said:
> 
> > I suppose that running the child first also has a minor
> > advantage for clone() in that it should make programs that spawn lots
> > of threads to do little bits of work behave better on machines with a
> 
> There is another issue with this proposition. I have begun to write (free
> time, slow pace) a userland sandbox which allows me to prevent a process
> and its children from performing some given actions, like removing files or
> writing in some directories. This works by ptrace-ing the process,
> modifying system calls and faking return values. It also needs,
> obviously, to ptrace-attach children of the sandboxed process. When the
> parent in a fork runs first, the sandbox program has time to
> ptrace-attach the child before it does any system call. Obviously, if the
> child runs first, this is no longer the case.
> 
> If it is decided that the child should run first in a fork, there should
> be a way to reliably ptrace-attach it before it can do anything.
> 
> By the way, I tried to solve this problem in my sandbox program by
> masquerading any fork call into a clone system call with the flag
> CLONE_PTRACE. I had hoped that the child would in this way start already
> ptraced. However, this didn't work as expected. The child did start in a
> ptraced state, but the owner of the trace was its parent (which issued
> the fork), and not my sandbox process which was ptracing this parent. I
> find this behaviour really weird and useless. I could simulate
> the current behaviour simply by calling ptrace(TRACE_ME,..) in the child.
> What is the real use of the CLONE_PTRACE flag ?

I believe it allows the debugger to start the process to be debugged.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



RE: IP Acounting Idea for 2.5

2001-04-17 Thread Jesse Pollard

Leif Sawyer <[EMAIL PROTECTED]>:
> > And that introduces errors in measurement. It also depends on 
> > how frequently an uncontrolled process is clearing the counters.
> > You may never be able to get a valid measurement.
> 
> This is true.  Which is why application programmers need to write
> code as if they are not the only [ab]users of data.
> 
> Which brings me back to my point.
> 
> Don't force the kernel to uphold your local application requirements
> of stable counters.
> 
> Enforce it in the userspace portion of the code.
> 
> <subtopic>
> Yes, you could extend the proc filesystem (ugh) with a flag that could
> be read by the ip[chains|tables] user app to determine if clearing flags
> were allowed.  Then a simple
> 
> echo 1 > /proc/sys/net/ipv4/counters_locked
> 
> or some such cruft.  But I don't see this extension making it into the
> standard kernel at this time.  It just seems to be wasteful.
> </subtopic>
> 
> If you (at your site) really need this type of functionality, it's
> pretty darn simple to write a wrapper to ip[tables|chains] which
> silently (or not so) drops the option to clear the counters before
> calling the real version.
> 
> Besides, what would be gained in making the counters RO, if they were
> cleared every time the module was loaded/unloaded?

1. Knowledge that the module was reloaded.
2. Knowledge that the data being measured is correct.
3. Having reliable measures.
4. Being able to derive valid statistics.
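The wrapper to ip[tables|chains] that the quoted text suggests — one that silently drops the counter-clearing option before calling the real version — could be sketched like this. The path of the renamed binary and the `-Z`/`--zero` spellings are assumptions for the sketch, not from the original mail:

```shell
# Hypothetical wrapper for ip[tables|chains]: silently drop the
# counter-zeroing flags, then hand everything else to the real binary.
# REAL's path and the -Z/--zero option names are assumptions.
REAL=/sbin/iptables.real

filter_zero() {
    # Rebuild the argument list without the flags we refuse to pass on.
    for a in "$@"; do
        shift
        case "$a" in
            -Z|--zero) continue ;;
        esac
        set -- "$@" "$a"
    done
    # A deployed wrapper would do: exec "$REAL" "$@"
    echo "would run: $REAL $*"
}

filter_zero -L -Z -v
# prints: would run: /sbin/iptables.real -L -v
```

Jesse's counter-argument, of course, is that purchased applications reset the counters through the kernel interface directly and never pass through such a script.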


-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



RE: IP Acounting Idea for 2.5

2001-04-17 Thread Jesse Pollard

-  Received message begins Here  -

 
> Jesse Pollard replies:
> to Leif Sawyer who wrote: 
> > > Besides, what would be gained in making the counters RO, if 
> > > they were cleared every time the module was loaded/unloaded?
> > 
> > 1. Knowledge that the module was reloaded.
> > 2. Knowledge that the data being measured is correct.
> > 3. Having reliable measures.
> > 4. Being able to derive valid statistics.
> 
> Good.  Now that we have valid objectives to reach, which of these
> are NOT met by making the fixes entirely in userspace, say by
> incorporating a wrapper script to ensure that no external applications
> can flush the table counters?
> 
> They're still all met, right?

Nope - some of the applications that may be purchased do not have
to go through the scripts to reset the counters. They just need access
to the counters, and reset is built into the applications. This violates
all 4 objectives.

> And we haven't had to fill the kernel with more cruft.

Removing/no-oping the reset code would make the module
SMALLER, and simpler.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: fsck, raid reconstruction & bad bad 2.4.3

2001-04-15 Thread Jesse Pollard

On Sun, 15 Apr 2001, Bernd Eckenfels wrote:
>In article <[EMAIL PROTECTED]> you wrote:
>>>(There is no config file to disable/alter this .. no work-around that I
>>>know of ..)
>
>> You can't be serious.  Go sit down and think about what's going on.
>
>Well, there are two potential solutions:
>
>a) stop rebuild until fsck is fixed

And let fsck read bad data because the raid doesn't yet recognize the correct
one?

There is nothing to fix in fsck. It should NOT know about the low level
block storage devices. If it does, then fsck for EACH filesystem will
have to know about ALL different raid hardware/software implementations.

>b) wait with fsck until rebuild is fixed

Depends on your definition of "fixed". The most I can see to fix is
reduce the amount of continued update in favor of updating those blocks
being read (by fsck or anything else). This really ought to be a runtime
configuration option. If it is set to 0, then no automatic repair would
be done.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: [RFC] exec_via_sudo

2001-04-10 Thread Jesse Pollard

kees <[EMAIL PROTECTED]>:
> 
> Hi
> 
> Unix/Linux have a lot of daemons that have to run as root because they
> need to access some specific data or run special programs. They are
> vulnerable as we learn.
> Is there any way to have something like an exec call that is
> subject to a sudo like permission system? That would run the daemons
> as a normal user but allow only for specific functions i.e. NOT A SHELL.
> comments?

Simple answer: no.

1. The exec system call (or library) has no way to communicate with the
   user for getting a password.
2. A user is not always present when the exec is done (cron/at/batch...).
   there is no terminal like device available.
3. In the cases where terminals are available, which terminal? The program
   doing the exec may have been detached (background/nohup...).
4. In the cases where the user is connected via a window - there is no
   known way to provide that communication. (the DISPLAY environment might
   not be present...)

More complex answer: in some cases.

If the application doing the exec is programmed to, then it may open
an input channel and actually use "sudo" to start another program. It will
be up to the implementation of "sudo" to accept the communication path
and perform suitable validation.

The primary weakness in this is that the communication path may not be
trusted by sudo... terminal-type devices are easier to validate than
others (windowing systems, for instance).

The problem with cron/at/batch cannot be solved since the user context
for any authentication path is missing. It would be necessary to authenticate
the communication path, before authenticating to sudo...
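What sudo itself can do toward the original goal — run a daemon as a normal user but allow only specific functions, not a shell — is a configuration matter. A hypothetical sudoers fragment (account and program names are illustrative):

```
# /etc/sudoers fragment (illustrative): the "logd" account may run
# exactly one program as "appuser", non-interactively -- not a shell.
logd ALL = (appuser) NOPASSWD: /usr/local/bin/rotate-logs
```

Note that NOPASSWD sidesteps the password-prompt problems in points 1-4 above, at the cost of authenticating only the invoking uid, not a human.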

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: OOM killer???

2001-03-29 Thread Jesse Pollard

David Lang <[EMAIL PROTECTED]>:
>one of the key places where the memory is 'allocated' but not used is in
>the copy on write conditions (fork, clone, etc) most of the time very
>little of the 'duplicate' memory is ever changed (in fact most of the time
>the program that forks then executes some other program) on a lot of
>production boxes this would be a _very_ significant additional overhead in
>memory (think a busy apache server, it forks a bunch of processes, but
>currently most of that memory is COW and never actually needs to be
>duplicated)

So? If the requirement is no-overcommit, then assume it WILL be overwritten.
Allocate sufficient swap for the requirement.

Now, it shouldn't be necessary to include the text segment - after all
this should be marked RX.

Actually just X would do, but on Intel systems that also means R, and if W
is set it also means RWX. I hope that Intel gets a better clue about memory
protection sometime soon.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: Bug in the file attributes ?

2001-03-29 Thread Jesse Pollard

-  Received message begins Here  -

> 
> 
> Hi,
> 
> I just made a manipulation that disturbs me. So I'm asking whether it's a
> bug or a feature.
> 
> user> su
> root> echo "test" > test
> root> ls -l
> -rw-r--r--   1 root root5 Mar 29 19:14 test
> root> exit
> user> rm test
> rm: remove write-protected file `test'? y
> user> ls test
> ls: test: No such file or directory
> 
> This is in the user home directory.
> Since the file is read only for the user, it should not be able to remove
> it. Moreover, the user can't write to test.
> So I think this is a bug.

Nope - rm only updates the directory, which the user owns, not the file.
The prompt is just being nice.
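The point is easy to demonstrate: unlink() checks write permission on the containing directory, never the file's own mode. A throwaway sketch:

```shell
# A read-only file in a writable directory can be removed, because
# unlink() only needs write permission on the directory.
dir=$(mktemp -d)
echo test > "$dir/test"
chmod 444 "$dir/test"          # the file itself is read-only
rm -f "$dir/test"              # -f skips the courtesy prompt; unlink succeeds
[ ! -e "$dir/test" ] && echo "removed despite read-only mode"
# prints: removed despite read-only mode
rmdir "$dir"
```

The interactive prompt quoted above comes from rm noticing the file mode, not from the kernel refusing anything.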

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: Disturbing news..

2001-03-29 Thread Jesse Pollard

Walter Hofmann <[EMAIL PROTECTED]>:
> On Wed, 28 Mar 2001, Jesse Pollard wrote:
[snip]
> > Now, if ELF were to be modified, I'd just add a segment checksum
> > for each segment, then put the checksum in the ELF header as well as
> > in the/a segment header just to make things harder. At exec time a checksum
> > verify could (expensive) be done on each segment. A reduced level could be
> > done only on the data segment or text segment. This would at least force
> > the virus to completly read the file to regenerate the checksum.
> 
> So? The virus will just redo the checksum. Sooner or later there will be a
> routine to do this in libbfd and this all reduces to a single additional
> line of code. 

true.

> > That change would even allow for signature checks of the checksum if the
> > signature was stored somewhere else (system binaries/setuid binaries...).
> > But only in a high risk environment. This could even be used for a scanner
> > to detect ANY change to binaries (and fast too - signature check of checksums
> > wouldn't require reading the entire file).
> 
> One sane way to do this is to store the sig on a ro medium and make the
> kernel check the sig of every binary before it is run.

Only for trusted binaries. (extreme paranoia now).
 
> HOWEVER, this means no compilers will work, and you have to delete all
> script languages like perl or python (or make all of them check the
> signature).

Compilers should work normally; the link phase is what would generate
the checksums, though if each object file contained a checksum for the
segment then the interpreters/dynamic loaders would have the choice.

The only applications I see as really needing to check such signatures
are those using PAM. These should do it anyway. The dynamic linking programs
should do so only if they are configured to do so.

> Useless again, IMO.
> 
> > In any case, the problem is limited to one user, even if nothing is done.
> 
> Your best bet is to educate your users.

User education is a reasonable substitute as long as users can be directed
to follow the rules.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: Linux connectivity trashed.

2001-03-29 Thread Jesse Pollard

"J . A . Magallon" <[EMAIL PROTECTED]>:
> On 03.29 Richard B. Johnson wrote:
> > 
> > The penetration occurred because somebody changed our  firewall
> > configuration
> > so that all of the non-DHCP addresses, i.e., all the real IP addresses had
> > complete
> > connectivity to the outside world. This meant that every Linux and Sun
> > Workstation
> > in this facility was exposed to tampering from anywhere in the world. This
> > appears
> > to be part of a plan to remove all non-DHCP machines by getting them
> > trashed.
> >
> 
> See the cleverness of his network admins, that spent their time configuring
> a firewall to MAKE HOLES where there are not any...

And obviously they did not tell anyone they were doing so...

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: OOM killer???

2001-03-29 Thread Jesse Pollard

Guest section DW <[EMAIL PROTECTED]>:
> 
> On Thu, Mar 29, 2001 at 01:02:38PM +0100, Sean Hunter wrote:
> 
> > The reason the aero engineers don't need to select a passanger to throw out
> > when the plane is overloaded is simply that the plane operators do not allow
> > the plane to become overloaded.
> 
> Yes. But today Linux willing overcommits. It would be better if
> the default was not to.

Preferably, the default should be a configure option, with runtime
alterations.

> > Furthermore, why do you suppose an aeroplane has more than one altimeter,
> > artifical horizon and compass?  Do you think it's because they are unable to
> > make one of each that is reliable?  Or do you think its because they are
> > concerned about what happens if one fails _however unlikely that is_.
> 
> Unix V6 did not overcommit, and panicked if it was out of swap
> because that was a cannot happen situation.

Ummm... no. The user got "ENOMEM" or "insufficient memory for fork", or
"swap error". The system didn't panic unless there was an I/O error on
the swap device.

> If you argue that we must design things so that there is no overcommit
> and still have an OOM killer just in case, I have no objections at all.

good.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.



Re: OOM killer???

2001-03-29 Thread Jesse Pollard

Guest section DW [EMAIL PROTECTED]:
 
 On Thu, Mar 29, 2001 at 01:02:38PM +0100, Sean Hunter wrote:
 
  The reason the aero engineers don't need to select a passanger to throw out
  when the plane is overloaded is simply that the plane operators do not allow
  the plane to become overloaded.
 
 Yes. But today Linux willing overcommits. It would be better if
 the default was not to.

Preferably, the default should be a configure option, with runtime
alterations.

  Furthermore, why do you suppose an aeroplane has more than one altimeter,
  artifical horizon and compass?  Do you think it's because they are unable to
  make one of each that is reliable?  Or do you think its because they are
  concerned about what happens if one fails _however unlikely that is_.
 
 Unix V6 did not overcommit, and panicked if is was out of swap
 because that was a cannot happen situation.

Ummm... no. The user got "ENOMEM" or "insufficient memory for fork", or
"swap error". The system didn't panic unless there was an I/O error on
the swap device.

 If you argue that we must design things so that there is no overcommit
 and still have an OOM killer just in case, I have no objections at all.

good.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Linux connectivity trashed.

2001-03-29 Thread Jesse Pollard

"J . A . Magallon" [EMAIL PROTECTED]:
 On 03.29 Richard B. Johnson wrote:
  
  The penetration occurred because somebody changed our  firewall
  configuration
  so that all of the non-DHCP addresses, i.e., all the real IP addresses had
  complete
  connectivity to the outside world. This meant that every Linux and Sun
  Workstation
  in this facility was exposed to tampering from anywhere in the world. This
  appears
  to be part of a plan to remove all non-DHCP machines by getting them
  trashed.
 
 
 See the cleverness of his network admins, that spent their time configuring
 a firewall to MAKE HOLES where there are not any...

And obviously not tell anyone they were doing so

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Disturbing news..

2001-03-29 Thread Jesse Pollard

Walter Hofmann [EMAIL PROTECTED]:
 On Wed, 28 Mar 2001, Jesse Pollard wrote:
[snip]
  Now, if ELF were to be modified, I'd just add a segment checksum
  for each segment, then put the checksum in the ELF header as well as
  in the/a segment header just to make things harder. At exec time a checksum
  verify could (expensive) be done on each segment. A reduced level could be
  done only on the data segment or text segment. This would at least force
  the virus to completly read the file to regenerate the checksum.
 
 So? The virus will just redo the checksum. Sooner or later their will be a
 routine to do this in libbfd and this all reduces to a single additional
 line of code. 

true.

  That change would even allow for signature checks of the checksum if the
  signature was stored somewhere else (system binaries/setuid binaries...).
  But only in a high risk environment. This could even be used for a scanner
  to detect ANY change to binaries (and fast too - signature check of checksums
  wouldn't require reading the entire file).
 
 One sane way to do this is to store the sig on a ro medium and make the
 kernel check the sig of every binary before it is run.

Only for trusted binaries. (extreme paranoia now).
 
 HOWEVER, this means no compilers will work, and you have to delete all
 script languages like perl or python (or make all of them check the
 signature).

Compilers should work normally, the link phase is what would generate
the checksums, though if each object file contained a checksum for the
segment then the interpreters/dynamic loaders would have the choice.

The only applications I see as really needing to check such signatures
are those using PAM. These should do it anyway. The dynamic linking programs
should do so only if they are configured to do so.

 Useless again, IMO.
 
  In any case, the problem is limited to one user, even if nothing is done.
 
 Your best bet is to educate your users.

User eduation is a reasonable substitute as long as they can be directed
to follow the rules.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Bug in the file attributes ?

2001-03-29 Thread Jesse Pollard

-  Received message begins Here  -

 
 
 Hi,
 
 I just made a manipulation that disturbs me. So I'm asking whether it's a
 bug or a features.
 
 user su
 root echo "test"  test
 root ls -l
 -rw-r--r--   1 root root5 Mar 29 19:14 test
 root exit
 user rm test
 rm: remove write-protected file `test'? y
 user ls test
 ls: test: No such file or directory
 
 This is in the user home directory.
 Since the file is read only for the user, it should not be able to remove
 it. Moreover, the user can't write to test.
 So I think this is a bug.

Nope - rm only updates the directory, which the user owns; not the file.
The prompt is just being nice.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: OOM killer???

2001-03-29 Thread Jesse Pollard

David Lang <[EMAIL PROTECTED]>:
> one of the key places where the memory is 'allocated' but not used is in
> the copy-on-write conditions (fork, clone, etc.); most of the time very
> little of the 'duplicate' memory is ever changed (in fact, most of the time
> the program that forks then executes some other program). On a lot of
> production boxes this would be a _very_ significant additional overhead in
> memory (think of a busy apache server: it forks a bunch of processes, but
> currently most of that memory is COW and never actually needs to be
> duplicated).

So? If the requirement is no-overcommit, then assume it WILL be overwritten.
Allocate sufficient swap for the requirement.

Now, it shouldn't be necessary to include the text segment - after all
this should be marked RX.

Actually, just X would do, but on Intel systems that also implies R, and if W
is set it also means RWX. I hope that Intel gets a better clue about memory
protection sometime soon.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



Re: Larger dev_t

2001-03-28 Thread Jesse Pollard

Oliver Neukum <[EMAIL PROTECTED]>:
> 
> > My suggestion would be to add a filesystem label (optional) to the
> > homeblock of all filesystmes, then load that identifier into the
> > /proc/partitions file. This would allow a search to locate the
> > device parameters for any filesystem being mounted. If the label
> > is unavailable, then it must be mounted manually or via the current
> > structure. This would work for floppy/CD/DVD (although SCSI versions
> > would have a relocation problem for these devices).
> 
> And what would you do if the names collide ?

Refuse to mount - give the admin time to fix them in single-user mode;
changing a volume name alone should not be prevented. How to fix... let
the admin look in /proc/partitions, take one (I'd pick the second
one seen) and change its name. Mount the first using the devfs-associated
name and verify that the contents are what is expected. Mount the second
and see what it should be. This situation should only occur via a dd copy
of an entire volume; the procedure on copying should include changing the
copied volume name... This is almost equivalent to having multiple mirror
partitions, in which case a "mount the first seen" would be reasonable.

> This might work for drives with unique identifiers in hardware, but for 
> anything else it is a nice addition, but I wouldn't identify an essential 
> partition that way. Furthermore you need to address removable media. There a 
> way to specify a drive opposed to a filesystem or medium is needed.

I didn't mean to say that there should be NO way to reach a specific drive.
There should be a devfs entry that corresponds to the entries in the
/proc/partitions list. This is what I think mount should do anyway.
First search the /proc/partitions list for the volume; then use the
associated entry in devfs to actually do the mount. It's just a way
to allow the reorganization of the volume-to-device-name mapping.

I'm still thinking about how the root filesystem could be mounted during
boot where devfs and /proc are not yet mounted.

There should be a similar way to map removable media devices (even if it
takes using device serial numbers) to fixed device names. That way a
symbolic link could be created to point to the correct physical device:

i.e., I want my SCSI tape drive (serial number 06408-XXX) to be called "tape":

Locate the serial number in /proc/scsi/scsi, then use the devfs name that
corresponds to this device (scsi2/target 6/lun/00 or similar) and
create a symbolic link for it. This does assume that the serial number or
equivalent is available to be searched for. It also assumes that the
devfs name can be derived from the entry in /proc/scsi/scsi (or wherever
the specification ends up).

Is this reasonable? Perhaps not for small systems, but when lots of dynamic
devices are available it is needed.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



Re: Disturbing news..

2001-03-28 Thread Jesse Pollard

Russell King <[EMAIL PROTECTED]>
> 
> On Wed, Mar 28, 2001 at 08:40:42AM -0600, Jesse Pollard wrote:
> > Now, if ELF were to be modified, I'd just add a segment checksum
> > for each segment, then put the checksum in the ELF header as well as
> > in the/a segment header just to make things harder. At exec time a checksum
> > verify could (expensive) be done on each segment. A reduced level could be
> > done only on the data segment or text segment. This would at least force
> > the virus to completely read the file to regenerate the checksum.
> 
> Checksums don't help that much - virus writers would treat it as "part
> of the set of alterations that need to be made" and then the checksum
> becomes zero protection.
> 
 [ snip of good stuff ]
> Therefore, if you follow good easy system administration techniques, then
> you end up minimising the risk of getting:
> 
> 1. viruses
> 2. trojans
> 3. malicious users
> 
> cracking your system.  If you don't follow these techniques, then you're
> asking for lots of trouble, and no amount of checksumming/signing/etc
> will ever save you.

Absolutely true. The only thing checksumming and the like is good for is
detecting the fact afterward by external comparison.

I like MLS for the ability to catch ATTEMPTS to make unauthorized
modification.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



Re: Disturbing news..

2001-03-28 Thread Jesse Pollard

Russell King <[EMAIL PROTECTED]>:
> On Wed, Mar 28, 2001 at 08:15:57AM -0600, Jesse Pollard wrote:
> > objcopy - copies object files. Object files are not marked executable...
> 
> objcopy copies executable files as well - check the kernel makefiles
> for examples.

At the time it's copying, the input doesn't need to be executable. That
appears to be a byproduct of a library link.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



Re: Disturbing news..

2001-03-28 Thread Jesse Pollard

Sean Hunter <[EMAIL PROTECTED]>:
> On Wed, Mar 28, 2001 at 06:08:15AM -0600, Jesse Pollard wrote:
> > Sure - very simple. If the execute bit is set on a file, don't allow
> > ANY write to the file. This does modify the permission bits slightly
> > but I don't think it is an unreasonable thing to have.
> > 
> 
> Are we not then in the somewhat zen-like state of having an "rm" which can't
> "rm" itself without needing to be made non-executable so that it can't execute?

We've been in that state for a long time... (careful updating that libc.so
file... you can't overwrite/delete it without having some REAL problems show up.)

It just calls for some careful activity. If rm is being replaced, first
rename it; then put the new one in place; chmod the old one; delete the old
one. It is directly comparable to the libc.so update procedure.

I should have left off the "very simple" remark.

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-



Re: Disturbing news..

2001-03-28 Thread Jesse Pollard


> 
> On Wed, Mar 28, 2001 at 06:08:15AM -0600, Jesse Pollard wrote:
> > Sure - very simple. If the execute bit is set on a file, don't allow
> > ANY write to the file. This does modify the permission bits slightly
> > but I don't think it is an unreasonable thing to have.
> 
> Even easier method - remove the write permission bits from all executable
> files, and don't do the unsafe thing of running email/web browsers/other
> user-type stuff as user root.
> 
> If it still worries you that root can write to files without the 'w' bit
> set, modify the capabilities of the system to prevent it (there is a bit
> that can be set which will remove this ability for all new processes).

How about just adding MLS ... :-)

-
Jesse I Pollard, II
Email: [EMAIL PROTECTED]

Any opinions expressed are solely my own.
-


