Re: obsolete code must die

2001-06-14 Thread L. K.

> 
> i386, i486
> The Pentium processor has been around since 1995. Support for these older
> processors should go so we can focus on optimizations for the pentium and
> better processors.

A lot of people use Linux on old machines in networking environments as
routers/firewalls.


> 
> math-emu
> If support for i386 and i486 is going away, then so should math emulation.
> Every intel processor since the 486DX has an FPU unit built in. In fact
> shouldn't FPU support be a userspace responsibility anyway?
> 
> ISA bus, MCA bus, EISA bus
> PCI is the de facto standard. Get rid of CONFIG_BLK_DEV_ISAPNP,
> CONFIG_ISAPNP, etc

ISA network cards are still in use. Even new motherboard manufacturers
still put an ISA slot on their boards.


> 
> ISA, MCA, EISA device drivers
> If support for the buses is gone, there's no point in supporting devices for
> these buses.

Sometimes an ISA card performs better than a PCI one.




/me



3C905B -- EEPROM (I believe so) problem

2001-06-13 Thread L. K.


Hi,

I have a 3Com 3C905B Ethernet card that was hit by a power outage for
approximately 0.5 seconds. Now the kernel does not recognize the card
anymore. When I do lspci, I see a 3Com Ethernet controller, type unknown
0xff (rev 3x). The BIOS reports the card as an Ethernet card at system
boot-up. I ran the diagnostic program for 3Com cards from Donald Becker
and all the card registers are  and . I believe something
happened to the EEPROM of the card. I would like to know if I can
overwrite it with a new one so that I can make my Ethernet card work
again.


Thank you,

Eugen



Re: temperature standard - global config option?

2001-06-09 Thread L. K.



On 8 Jun 2001, Bill Pringlemeir wrote:

> 
> > "MHW" == Michael H Warfield <[EMAIL PROTECTED]> writes:
> [snip]
>  MHW> Yes, bits are free, sort of...  That's why an extra decimal
>  MHW> place is "ok".  Keeping precision within an order of magnitude
>  MHW> of accuracy is within the realm of reasonable.  Running out to
>  MHW> two decimal places for this particular application is just
>  MHW> silly.  If it were for calibrated lab equipment, fine.  But not
>  MHW> for CPU temperatures.
> 
> You do introduce some rounding errors if the measurement isn't in
> Celsius or Kelvin.  Ie, you must do a conversion because the hardware
> isn't in the desired units.  In this case, the extra precision will be
> beneficial.  
> 
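
To make the rounding point above concrete, here is a tiny userspace sketch
(the sensor value and helper code are invented for illustration, not taken
from any driver) of what gets lost when a reading in 0.5 degree steps is
stored as whole kelvin instead of 0.01 K units:

#include <stdio.h>

int main(void)
{
        /*
         * Hypothetical sensor that reports in 0.5 C steps: raw = 75
         * means 37.5 C.  Rounding to whole kelvin throws away almost
         * half a degree; a 0.01 K counter keeps the full reading.
         */
        int raw = 75;
        double celsius = raw * 0.5;                    /* 37.50 C  */
        double kelvin  = celsius + 273.15;             /* 310.65 K */

        int whole_k = (int)(kelvin + 0.5);             /* 311 -> 0.35 K error */
        int centi_k = (int)(kelvin * 100.0 + 0.5);     /* 31065 -> exact      */

        printf("%.2f C -> %d K rounded, %d in 0.01 K units\n",
               celsius, whole_k, centi_k);
        return 0;
}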


Take for example the motherboards with temperature sensors on them. At
some point they will display the temperature of the motherboard. This
raises questions: the motherboard will not have the same temperature at
two distinct points. The temperature will be higher where there are
present some thermistors or transistors that need cooling and have a
heatsink on top of them. So where are the sensors on the motherboard
placed? I don't think there are a lot of them whose readings are then
averaged for display; that would be a stupid thing to do.

You want to know the temperature of the motherboard for different reasons.
If you have a case that is sealed by the manufacturer and cannot be kept
open to allow the components to cool down when things get hot inside or
outside (in the summer), you will have to rely on the motherboard sensors.
And if you happen to have in your computer a graphics card that generates
a lot of heat (like a Voodoo 2, or a GeForce without a cooler), or a TV
tuner, some of the heat generated will be passed on to the motherboard.
Some motherboard manufacturers also place between the PCI/ISA slots
components that can be affected by the heat generated by the cards in the
slots. At that point your confidence in the motherboard sensors
disappears. The only one you can trust nowadays is the sensor that
measures the temperature of the CPU (and even that is not very accurate).
We can talk about accuracy of temperature measurement when the sensors
inside our computers get as good as those used in a laboratory. Until
then ...


> If you are going your route, you should send error bars with all the
> measurements ;-) Fine, too many decimals leads to a false sense of
> security.  However, no one knows the accuracy of any future
> temperature sensors so why not accommodate the possibility.  Certainly
> some band gap semis can give a pretty good measurement if you have
> good coupling.  If the temperature sensor was built into the CPU, you
> might actually have accuracy!
> 

I haven't encountered any CPU with built-in temperature sensors.


> regards,
> Bill Pringlemeir.
> 
> This thread keeps going and going and going...

and going, and going ... and still going ...

> 
> 
and still going ...



Regards,

/me




RE: temperature standard - global config option?

2001-06-09 Thread L. K.


> > From: L. K. [mailto:[EMAIL PROTECTED]]
> > I really do not believe that for a CPU or a motherboard +- 1 
> > degree would make any difference.
> 
> You haven't pushed your system, or run it in a hostile
> environment then.  There are many places where systems are run
> right up to the edge of thermal breakdown, and it's a firm
> requirement to know exactly what that edge is.
> 
 I didn't push my system because I believe it performs well without
overclocking. There is a good chance of frying the chip, and for this
reason I run my system at the frequency the manufacturer specified.


>  
> > If a CPU runs fine at, say, 37 degrees C, I do not believe it 
> > will have any problems running at 38 or 36 degrees. I support
> > the idea of having very good sensors for temperature
> > monitoring, but CPU and motherboard temperature do not depend
> > on a rise of 1 degree; they depend on the temperature
> > rising by 10 or more degrees. I hope you understand
> > what I want to say.
> 
> I have a CPU that runs great up to 43C, and shuts down hard at 44C
> so I obviously want to know how close I am to that.  I don't want
> rounding errors to get in the way, and I don't want changes
> between kernel revs to affect it either.
> 

It might be as you say, but I really do not believe that your chip will fry
at 44C. I have never seen a chip fry because the temperature was 1 degree
higher than the one it is supposed to work at. And I have worked with a
lot of CPUs and motherboards.


> If we've got the bitspace, keep the counters as granular as
> possible within the useable range that we're designing for.
> 
> counter = .01 * degrees kelvin
> 

I said it before and I'll say it again: I support the idea of having very
high precision, BUT this is not the case for a personal computer; it may
concern high-end systems that must run in a controlled environment at a
fixed temperature.
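
For concreteness, a minimal sketch of what such a counter representation
could look like, assuming the counter simply holds the temperature in units
of 0.01 K; the type and helper names are invented for illustration:

#include <stdio.h>

typedef unsigned short temp_counter_t;      /* 0.01 K units, 0 .. 655.35 K */

static temp_counter_t celsius_to_counter(double celsius)
{
        return (temp_counter_t)((celsius + 273.15) * 100.0 + 0.5);
}

static double counter_to_celsius(temp_counter_t counter)
{
        return counter / 100.0 - 273.15;
}

int main(void)
{
        temp_counter_t t = celsius_to_counter(37.0);   /* 31015 */

        printf("37 C -> %u -> %.2f C\n", (unsigned)t, counter_to_celsius(t));
        printf("+- 1 C is +- 100 counts, so whole degrees are easy to read off\n");
        return 0;
}

A 16-bit counter in these units tops out at 655.35 K, far more headroom
than any CPU or motherboard will ever need.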

> 
> 




Re: temperature standard - global config option?

2001-06-08 Thread L. K.



On Fri, 8 Jun 2001, Albert D. Cahalan wrote:

> Michael H. Warfield writes:
> 
> > We don't have sensors that are accurate to 1/10 of a K and certainly not
> > to 1/100 of a K.  Knowing the CPU temperature "precise" to .01 K when
> > the accuracy of the best sensor we are likely to see is no better than
> > +- 1 K is just about as relevant as negative absolute temperatures.
> ...
> > Even if we had, or could anticipate, sensors with a +- .01 K,
> > the relevance of knowing the CPU temperature to that precision is
> > lost on me.  I see no sense in stuffing a field with meaningless
> > bits just because the field will hold them.  In fact, this "false precision"
> > quickly leads to the false impression of accuracy.  Based on several
> > messages I have seen on this thread and in private E-Mail, there are a
> > number of people who don't seem to grasp the fundamental difference
> > between precision and accuracy and truly don't understand that adding
> > meaningless precision like this adds nothing to the accuracy.
> >
> > I can see maybe making it precise to .1 K.  But stuffing the bits
> > in there to be precise to .01 K just because we have the bits and not
> > because we have any realistic information to fill the bits in with, is
> > just silly to me.  Just as silly as allowing for negative numbers in an
> > absolute temperature field.  We have the bits to support it, but why?
> 
> The bits are free; the API is hard to change.
> Sensors might get better, at least on high-end systems.
> Rounding gives a constant 0.15 degree error.
> Only the truly stupid would assume accuracy from decimal places.
> Again, the bits are free; the API is hard to change.
> 
> One might provide other numbers to specify accuracy and precision.
> 
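
If accuracy and precision really were to be reported alongside the value,
one possible shape for the record, purely as an illustration with invented
field names, would be:

/* Illustration only: report accuracy and resolution next to the value. */
struct temp_reading {
        unsigned short value;        /* temperature, 0.01 K units           */
        unsigned short accuracy;     /* +/- accuracy of the sensor, 0.01 K  */
        unsigned short resolution;   /* smallest step the sensor reports    */
};

Userspace could then decide for itself how many decimal places are actually
worth displaying.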

I really do not believe that for a CPU or a motherboard +- 1 degree would
make any difference.

If a CPU runs fine at, say, 37 degrees C, I do not believe it will have any
problems running at 38 or 36 degrees. I support the idea of having very
good sensors for temperature monitoring, but CPU and motherboard
temperature do not depend on a rise of 1 degree; they depend on the
temperature rising by 10 or more degrees. I hope you understand what
I want to say.



Regards,


> 




Re: temperature standard - global config option?

2001-06-08 Thread L. K.

> > Are you really sure about this ?
> 
> I am. I have the Abitur (the German degree after 13 years of school),
> with physics being an important course, and there cannot
> be any temperature less than 0 K (or -273.15°C if you want).
> This is because temperature is nothing but the movement of
> pieces of matter (and even photons, ergo energy).

Thanks for enlightening me. Physics is not one of my strong points. I'm
used to the Celsius scale; maybe that's why I didn't believe it the first
time.


Regards,




> 
> -mirabilos
> -- 
> C:\>debug
> -e100 EA F0 FF 00 F0
> -g
> --->Enjoy!
> 
> 



Re: temperature standard - global config option?

2001-06-07 Thread L. K.



On Thu, 7 Jun 2001, Albert D. Cahalan wrote:

> Negative temperatures do not really exist.
> 

Are you really sure about this?



> 
> 
> 
> 
> 




Re: temperature standard - global config option?

2001-06-07 Thread L. K.


Why not make it Celsius? It is easier to read that way.



On Thu, 7 Jun 2001, Philips wrote:

> Hello All!
> 
>   Kelvin is a good idea in general - it is always positive ;-)
> 
>   0.01*K fits in 16 bits and gives a reasonable range.
> 
>   but maybe something like K<<6 could be an option? (to allow the use of shifts
> instead of muls/divs). It would be much easier to extract the int part.
> 
>   just my 2 eurocents.
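
A rough sketch of the two encodings being discussed, with invented helper
names: decimal fixed point in units of 0.01 K, which needs a divide by 100
to get whole kelvin, versus K<<6 binary fixed point, where the integer part
falls out with a shift:

#include <stdio.h>

/* Decimal fixed point: units of 0.01 K. */
static unsigned int kelvin_to_centi(double kelvin)
{
        return (unsigned int)(kelvin * 100.0 + 0.5);
}

/* Binary fixed point: 6 fractional bits, i.e. units of 1/64 K. */
static unsigned int kelvin_to_q6(double kelvin)
{
        return (unsigned int)(kelvin * 64.0 + 0.5);
}

int main(void)
{
        double t = 310.15;                        /* 37 C expressed in kelvin */
        unsigned int centi = kelvin_to_centi(t);  /* 31015 */
        unsigned int q6    = kelvin_to_q6(t);     /* 19850 */

        /* Whole kelvin: a divide for the 0.01 K form, a shift for K<<6. */
        printf("0.01 K units: %u -> %u K\n", centi, centi / 100);
        printf("K<<6 units  : %u -> %u K\n", q6, q6 >> 6);
        return 0;
}

In 16 bits the K<<6 form reaches about 1023 K and the 0.01 K form 655.35 K;
both cover the range that matters here, so the trade-off is resolution
(1/64 K versus 1/100 K) against the convenience of shifts.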
