Linux-Development-Sys Digest #200, Volume #7 Thu, 16 Sep 99 01:14:04 EDT
Contents:
Re: Kernel Install Applet ("Drydd")
Re: Can only see 8Gb of 13Gb disk. (Kelly Burkhart)
Re: Device Driver for NuDAQ PCI-7200 data acquisition (Dmitri A. Sergatskov)
Re: TAO: the ultimate OS (Paul J Collins)
Re: help.. This should work but why doesn't it??? (write fnc in driver) (Keith
Wright)
Re: threads (David Schwartz)
help.. This should work but why doesn't it??? (write fnc in driver) (Karlo Szabo)
Re: 497.2 days ought to be enough for everybody (Ray)
Re: UDMA vs IDE: performance comparison wanted (M van Oosterhout)
Re: help.. This should work but why doesn't it??? (write fnc in driver) (Karlo
Szabo)
Re: Adding XML to Linux (Proposal Outline) (Christopher Browne)
----------------------------------------------------------------------------
From: "Drydd" <someone@special>
Subject: Re: Kernel Install Applet
Date: Wed, 15 Sep 1999 15:02:41 -0400
The other thing I think you're overlooking is that the whole reason
we... er, they, I sure as hell ain't on the development team, distribute
kernel source is that every machine and every administrator has different
expectations of and needs from their Linux system. There really isn't a
'best' kernel configuration; it's something everyone has to come up with
for their own particular needs.
The process, from start to finish, for compiling and installing a kernel
on a machine using LILO, is as follows. I put this here because I know I had
problems with some of these steps many years ago...
The following assumes that a file named linux-2.2.12.tar.gz exists in
the /tmp directory of your Linux partition. It also assumes installation to
/usr/src.
cd /usr/src
rm -rf linux
tar -xzf /tmp/linux-2.2.12.tar.gz   (Extract the source from the archive)
ln -s /usr/src/linux-2.2.12 linux   (Create a symbolic link, so we don't
                                     have to relink the places where the
                                     compiler will look for certain
                                     includes and what not, trust us ;))
cd linux
make menuconfig   (I say use the menu configuration because a) if you're
                   not sure what all the available options are, it has
                   slightly better help files, and b) if you make a
                   mistake in the configure shell script, in my
                   experience it means starting all over again.)
make dep
make clean
make bzImage
make modules
make modules_install
cd /etc
Here's the tricky part, but only if you don't know how to use vi, or
didn't install some other text editor with your installation... I like
joe personally.
-use a text editor to edit the file: lilo.conf
-examine carefully the entry in lilo.conf that boots Linux from the
/vmlinuz kernel
-make a copy of that entry using a different label and pointing it to
/usr/src/linux/arch/i386/boot/bzImage instead of /vmlinuz (see the
example after this list)
-save your changes to the file.
-now we're back at the command prompt:)
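For reference, the original entry and the new copy might look something
like this (a sketch only; the 'test' label and the /dev/hda1 root device
are placeholders for whatever your lilo.conf actually says):
image=/vmlinuz
      label=linux
      root=/dev/hda1
      read-only
image=/usr/src/linux/arch/i386/boot/bzImage
      label=test
      root=/dev/hda1
      read-only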
run the 'lilo' program, just type the sucker... it should go through and
redo the lilo image on your hard drive's boot sector; then you can restart,
selecting the new lilo image to boot from.
There are a lot of steps, but they all make sense... Sorry for the
repetition of the other guy's response, but I think this question needed a
slightly more... delicate approach :) After all, at least he's not asking us
how to edit a registry!
Peter T. Breuer <[EMAIL PROTECTED]> wrote in message
news:7roo96$kbd$[EMAIL PROTECTED]...
> Cocheese ([EMAIL PROTECTED]) wrote:
> : After teaching a few crash courses to the other service techs I work
with
> : about Linux and the programs, I got to thinking about a few
"Suggestions".
>
> : Nobody I know personally has ever installed the kernel and lived to
> : tell about it. In fact, a friend of mine and I spent an entire week
trying
>
> Hmmm .. I assure you that I have installed close on a thousand kernels
> and all work (I mean that I followed development from 1.1.something and
> have patched kernels up to the eyeballs, and run labs with 200 machines
> ...)
>
> : and to no such luck. Although I am now using the RPM version to install
> : the newest kernels (AND THE DAMN THING STILL DOESN'T TELL THE CORRECT
> : VERSION!) I began to think that obviously there is a strong need for a
> : program/applet for us "not-so-learned" users.
>
> Uh. Don't use redhat, and don't use rpm. Then you won't have any
> problems.
>
> : automatically install a zipped kernel (ex. zip,tar,etc...) through a
> : simple command that would run the executable program and compile it?
>
> Sure: rm /usr/src/linux; tar xzvfC linux-foo.tgz /usr/src; ln -s
> linux-foo /usr/src/linux; cd /usr/src/linux; cp ../*/linux*/.config .;
> make oldconfig; make zImage; make zlilo; make modules; make modules_install;
>
> Hic. That'll be $10.
>
> Werrsa problem?
>
> : If anyone is up to the challenge I assure you many of the
> : distribution's would eat that program right up (since they are really
> : going all out to make Linux a little more user friendly- thus getting
your
> : name in the "Program Hall Of Fame" -LOL
>
> : **********************************************************************
> : P.S. A good example of something i was thinking would be to type
something
> : under the console like this:
>
> : KERNEL "/location/kernel_name-2.x.x.tgz"
> : **********************************************************************
>
> Eh?
>
> --
> Peter
------------------------------
Crossposted-To: comp.os.linux.misc
Subject: Re: Can only see 8Gb of 13Gb disk.
From: Kelly Burkhart <[EMAIL PROTECTED]>
Date: 15 Sep 1999 19:26:03 -0500
> On Tue, 14 Sep 1999 01:12:00 GMT, Robert Heller <[EMAIL PROTECTED]> wrote:
> > Web Serf <[EMAIL PROTECTED]>,
> > In a message on Mon, 13 Sep 1999 01:05:57 +0000, wrote :
> >
> >WS> Hello all, A long time ago I installed RH5.1 on this box and after
> >WS> trying a few things gave up on the idea of using more than 8GB of my
> >WS> 13Gb disk (I understand the BIOS limitations problem). In a while I'll
> >WS> be getting a new system and reformatting this one. I have tried adding
> >WS> 'append hda="1647,256,63"' to the lilo.conf file. This didn't work.
> >WS> Any ideas?
> >
> >What does your partition table look like? The Ext2 fs can only deal
> >with about 9 gig/partition, but Linux has no trouble with properly
> >*partitioned* disks of much larger sizes.
I have a Western Digital 13000RTL 13 gig drive and am able to use all
of it on my RH5.1 system.
The disk comes with some BIOS magic in a program called EZ-Drive, which
is installed on the boot partition. I initially assumed that I would
not need this and tried everything to get Linux to recognize the
entire disk to no avail. I then made the large drive the master and
my previous drive the slave, installed EZ-Drive, moved lilo from the
MBR to my root partition on the new slave and told EZ-Drive to boot
there (I did this some time ago so I don't remember all the details
exactly).
Linux is able to recognize the EZ-Drive magic and adjust whatever
needs to be adjusted to recognize the entire disk. I was able to
fdisk and carve up all 13 gig of the drive.
HTH
--
Kelly R. Burkhart
[EMAIL PROTECTED]
MIDL error 0xc0000005: unexpected compiler problem. Try to find a work around.
-- Microsoft IDL compiler error message
------------------------------
From: [EMAIL PROTECTED] (Dmitri A. Sergatskov)
Crossposted-To: comp.os.linux.hardware
Subject: Re: Device Driver for NuDAQ PCI-7200 data acquisition
Date: 16 Sep 1999 01:26:01 GMT
On Tue, 14 Sep 1999 14:44:45 -0400, Arthur Perlo <[EMAIL PROTECTED]> wrote:
>Hi,
>
>linux PC with PCI, using DMA. I have a NuDaq PCI-7200
>2) can anyone recommend a different card for which a
>linux driver exists?
>
National Instruments have a driver for their E-series card.
They also have some links to linux-lab and other resources.
http://www.natinst.com/linux
Regards,
Dmitri.
------------------------------
From: Paul J Collins <[EMAIL PROTECTED]>
Crossposted-To: alt.os.linux,comp.os.linux.advocacy,comp.os.misc,comp.unix.advocacy
Subject: Re: TAO: the ultimate OS
Date: 15 Sep 1999 23:22:04 +0100
>>>>> "Vladimir" == Vladimir Z Nuri <[EMAIL PROTECTED]> writes:
--snip--
Vladimir> perhaps you did not read about the acrimony surrounding
Vladimir> the events of the IPO in which red hat offered stock
Vladimir> options to some developers but not others based on a
Vladimir> byzantine & incomprehensible system.
--snip--
The "byzantine and incomprehensible system" was that of Etrade (I
believe, I could be wrong) and not Red Hat themselves. Red Hat were
trying to give prominent free software and open source developers a
chance to realise something financial from their efforts, even if they
had not embarked on their projects with any expectation of same.
Paul.
--
Paul Collins <[EMAIL PROTECTED]> Public Key On Keyserver.
Fingerprint: 88BA 2393 8E3C CECF E43A 44B4 0766 DD71 04E5 962C
"I am a stranger in a strange land,
distracted by bright and shiny objects."
------------------------------
From: Keith Wright <[EMAIL PROTECTED]>
Subject: Re: help.. This should work but why doesn't it??? (write fnc in driver)
Date: 15 Sep 1999 21:56:42 -0400
Karlo Szabo <[EMAIL PROTECTED]> writes:
> Hi
> the following is my write function from my module.
>
> I'm having problems getting the value of count,
> and copying the contents of buf from user space.
>
> This is defined at the beginning of the module
>
> static char *write_buf;
Shouldn't this be: char write_buf[SIZE]? You need to allocate space
for the stuff, not just a pointer.
>
>
> static int z_write(struct inode *node,
> struct file *file,
> const char *buf,
> int count)
> {char *chbuf;
> unsigned long copy_size;
> chbuf = (char *) buf;
>
> copy_size = count;
> write_buf="pre copy from us"; /*This is not being overwritten by
> copy_from_user */
I'm not sure why it doesn't get overwritten and blow up in your face,
but I know it's not nice to try to overwrite a constant. Should be:
    strcpy(write_buf, "pre copy");
(but I'm not sure strcpy is available in kernel address space; maybe
write: for(t=write_buf, s="pre copy"; *t++=*s++; *s);
or something droll like that.)
>
> copy_from_user(write_buf,chbuf,copy_size);
> printk(KERN_NOTICE "write open %s Count is %i\n",write_buf,
> copy_size);
> return copy_size; /*This is always returned as a large negative
> number */
Another mystery, what was 'count' to begin with?
> };
--
-- Keith Wright <[EMAIL PROTECTED]>
Programmer in Chief, Free Computer Shop <http://www.free-comp-shop.com>
--- Food, Shelter, Source code. ---
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: threads
Date: Wed, 15 Sep 1999 15:58:14 -0700
Leslie Mikesell wrote:
>
> In article <[EMAIL PROTECTED]>,
> David Schwartz <[EMAIL PROTECTED]> wrote:
> >
> > If the system is loaded by other programs, I had better do as much work
> >as possible in my timeslice. Otherwise, I'm going to slow those other
> >programs down. Or, to put it another way, the more loaded the system is,
> >the more work there is to do at any one time.
>
> And the more important it is to let the kernel scheduler wake
> up the right task at the right time.
Yes, and the fewer tasks there are, the easier this is.
> > A single-process model means that you can do more work when there is
> >more work to do. A process-per-connection model means that when you have
> >more work to do, you incur extra penalties and context switches.
> >
> > It really is that simple.
>
> I still don't see this in the case where many different programs
> are running, each with separate i/o streams, each with inputs
> that become ready separately, which is the way I am used to
> seeing unix machines run. If a bunch of programs written
> to select()/read() each are getting one read()'s worth of
> input in round-robin fashion, it has to be more work than
> if they just did a blocking read().
Of course. But this is the _unloaded_ case, where you don't care about
performance. The case you care about is the _loaded_ one, where the
server is actually being _worked_, that is, where more than one thing is
happening at a time.
> > It is. Really. I'm afraid I can't do better than present argumentation
> >for why this should be so (fewer blocks, fewer stacks, fewer context
> >switches) and point to years of experience that demonstrates that it is.
>
> I can see this might be true for single-purpose machines where
> everything is fed to the same program, but I generally don't
> do that.
High performance machines, where performance matters, are generally
pretty much dedicated to a single function. The highest performance DNS
servers in the world don't waste their time serving web pages. Heavily
loaded news servers don't also handle mail. That just doesn't make any
sense at all.
I'm not talking about what you do. Probably what you do doesn't require
high performance or efficient server design. And that's fine, go for
ease of use or ease of maintainability.
> > If another program needs those CPU cycles, then it can have them. And
> >while it's using them, a few more packets will arrive. That will allow
> >me to do the work of ten connections with fewer expensive system calls
> >and fewer context switches. That will make that other program happier.
>
> But you keep ignoring the case where only one other packet arrives
> for you and now you have to do another system call to get it, taking
> time away from another program whose packet just arrived.
This is the simple case, where there's so little to do that you only do
one thing at a time. You don't optimize for this case.
Again, it's vitally important that high-performance servers be designed
to become more efficient as load increases. This creates a 'softness' in
the response curve that helps to prevent catastrophic collapse.
One technique to do this is to allow the number of context switches to
go down as load increases. It's not the only way, but it's a very
important way. Another way is to send larger packets when load
increases, or do more work on each connection when you handle it.
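As a purely illustrative sketch of that first technique (none of this is
from a real server; service_loop, conns[] and handle_ready() are made-up
names): a select()-based loop pays for one system call per wakeup no
matter how many connections are ready, so the busier the server gets,
the more work each select() is amortized over.

#include <sys/select.h>

#define MAX_CONNS 1024

extern int  conns[MAX_CONNS];       /* connected sockets, -1 if unused */
extern void handle_ready(int fd);   /* reads and processes pending data */

void service_loop(void)
{
    for (;;) {
        fd_set rfds;
        int i, maxfd = -1;

        FD_ZERO(&rfds);
        for (i = 0; i < MAX_CONNS; i++) {
            if (conns[i] >= 0) {
                FD_SET(conns[i], &rfds);
                if (conns[i] > maxfd)
                    maxfd = conns[i];
            }
        }
        if (maxfd < 0)
            break;                  /* nothing left to watch */

        /* One system call regardless of how many sockets are ready;
         * under load, many bits come back set from a single wakeup. */
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            continue;

        for (i = 0; i < MAX_CONNS; i++)
            if (conns[i] >= 0 && FD_ISSET(conns[i], &rfds))
                handle_ready(conns[i]);
    }
}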
> >> > 200 is nothing. Try 5,000. Or 16,000.
> >>
> >> The nature of the http protocol is such that you don't need to
> >> run that many at once unless you are responding slowly and
> >> apache has some awkward things happening in accept() that would
> >> be a problem long before you see any difference from the
> >> rest of the process model.
>
> > I disagree entirely. Suppose you are serving out a 50Mb file to people
> >over 33.6 modem connections. Each connection will remain for as long as
> >it takes it to get that whole file.
>
> OK, each T1 can usefully feed 60 or so of those slow clients, so
> you might want 60 times the number of T1's you have of httpd's
> sending at once. I only have 4 so that's still not a big
> number.
On the Internet yes, but what of a corporate Intranet running at
100Mbps? What about the future when we have gigabit Ethernet?
Again, if you don't care about performance, fine, stop bitching about
it. But I do care about performance, and I'm talking about the ways to
get the absolute most of it.
> But, is it a good idea to allow 16,000 TCP connections to a
> single box in any case?
No, it's not. I generally recommend limiting it to about 10,000. When
you need more, you use cheap front end boxes to 'aggregate' multiple
incoming TCP connections to a single connection to the real server. Or,
if possible, just add more servers.
> What is the memory footprint of
> the tcp window for all those?
Well, 512Mb of RAM in a server is not that uncommon these days. The
memory footprint of the TCP windows varies with the operating system. And it
depends upon the protocol, which determines whether you'll gain any
benefit from larger windows or not.
Since this is a Linux newsgroup, I'll just mention that my PPro-200 box
with 160Mb of RAM was able to handle 16,000 connections in a test server
without much difficulty. But this was on a very fast network, so not as
much data had to back up as you would expect in a more realistic test.
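(Back-of-the-envelope: 160Mb spread over 16,000 connections is only
about 10Kb per connection, so the per-socket buffers have to stay small;
even 512Mb over the same count is only about 32Kb each.)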
> What happens when a major
> internet router glitch causes a window of packets to
> be lost or delayed to the point where you have a complete
> window buffered for most of those connections and you
> start retransmitting?
You eat memory like crazy, but there's lots of things you can do about
that. It depends upon the specifics of the application and the protocol
involved.
DS
------------------------------
From: Karlo Szabo <[EMAIL PROTECTED]>
Subject: help.. This should work but why doesn't it??? (write fnc in driver)
Date: Thu, 16 Sep 1999 09:45:13 +1000
Hi
the following is my write function from my module.
I'm having problems getting the value of count,
and copying the contents of buf from user space.
This is defined at the beginning of the module:
static char *write_buf;
static int z_write(struct inode *node,
                   struct file *file,
                   const char *buf,
                   int count)
{
    char *chbuf;
    unsigned long copy_size;

    chbuf = (char *) buf;
    copy_size = count;
    write_buf = "pre copy from us";   /* This is not being overwritten by
                                         copy_from_user */
    copy_from_user(write_buf, chbuf, copy_size);
    printk(KERN_NOTICE "write open %s Count is %i\n", write_buf,
           copy_size);
    return copy_size;   /* This is always returned as a large negative
                           number */
}
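One likely culprit, assuming this is built against a 2.2-series kernel:
the prototype above is the old 2.0-style write method, but 2.2's
file_operations declare write as taking (struct file *, const char *,
size_t, loff_t *). With the old prototype every argument lands one slot
off: 'count' actually receives the loff_t pointer (hence the huge
number), and the real byte count arrives in the slot declared as 'buf',
which is why the follow-up below finds that int cnt = (int) buf gives
the right value. A minimal corrected sketch under that assumption
(BUF_SIZE and w_buf are illustrative names):

/* Sketch of a 2.2-style write method; error handling kept minimal. */
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <asm/uaccess.h>

#define BUF_SIZE 128
static char w_buf[BUF_SIZE];

static ssize_t z_write(struct file *file, const char *buf,
                       size_t count, loff_t *ppos)
{
    size_t copy_size = count;

    if (copy_size > BUF_SIZE - 1)        /* leave room for the '\0' */
        copy_size = BUF_SIZE - 1;
    if (copy_from_user(w_buf, buf, copy_size))
        return -EFAULT;                  /* bad user-space pointer */
    w_buf[copy_size] = '\0';             /* printk needs a terminator */

    printk(KERN_NOTICE "write: got \"%s\", count %lu\n",
           w_buf, (unsigned long) copy_size);
    return copy_size;
}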
------------------------------
From: [EMAIL PROTECTED] (Ray)
Subject: Re: 497.2 days ought to be enough for everybody
Date: 16 Sep 1999 00:25:04 GMT
Hi,
> Is it possible to organise a shutdown & reboot sometime before you go?
> This'd certainly be one option I'd be looking at 8?)
Rebooting is not the answer, it's the question, and the answer is "no!" :>
Would you like to rely on an OS that would have to be rebooted every 497
days?
Anyway, thanks for all the tips; the machine survived and is happily running.
What I did was kill most unneeded processes except for sshd (so I could log
in after the wraparound) and leave for my holiday :>
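(For the record, the number in the subject line is just the 32-bit
jiffies counter wrapping at HZ=100: 2^32 ticks / 100 ticks per second =
42,949,672.96 seconds, or a little over 497 days.)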
(:ul8er, r@y
------------------------------
Date: Thu, 16 Sep 1999 13:27:11 +1000
From: M van Oosterhout <[EMAIL PROTECTED]>
Subject: Re: UDMA vs IDE: performance comparison wanted
Joseph H Allen wrote:
> Did you change any 'hdparm' settings before you did this test?
> Try: hdparm -m 16 -c 1 -A 1 -W 1 /dev/hda
>
> This makes sure that it's actually reading multiple sectors at a time, that
> read ahead is enabled, that 32-bit mode is enabled, and that write caching
> is enabled. This usually makes a huge difference if it had not been set,
> since Linux makes pessimistic settings by default.
I'm not sure whether this is common, but if I used hdparm to
change *any* of the default settings, the disk just went slower.
So it gets its maximum speed when it's marked for single-sector,
16-bit I/O.
Quantum Fireball EX 12.7A
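(If you want numbers rather than feel, hdparm has a crude built-in
benchmark: 'hdparm -t /dev/hda' times buffered reads from the disk and
'hdparm -T' times cached reads, so you can compare before and after
changing a setting.)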
Martijn van Oosterhout
Australia
------------------------------
From: Karlo Szabo <[EMAIL PROTECTED]>
Subject: Re: help.. This should work but why doesn't it??? (write fnc in driver)
Date: Thu, 16 Sep 1999 13:40:57 +1000
> > This is defined at the beginning of the module
> >
> > static char *write_buf;
>
> Shouldn't this be: char write_buf[SIZE]? You need to allocate space
> for the stuff, not just a pointer.
I now also have, at the beginning of the module:
    static char w_buf[BUF_SIZE];
and in the module_init() I have
    write_buf = w_buf;
so I guess the pointer points to the allocated array.
>
> >
> >
> > static int z_write(struct inode *node,
> > struct file *file,
> > const char *buf,
> > int count)
> > {char *chbuf;
> > unsigned long copy_size;
> > chbuf = (char *) buf;
> >
> > copy_size = count;
> > write_buf="pre copy from us"; /*This is not being overwritten by
> > copy_from_user */
>
> I'm not sure why it doesn't get overwritten and blow up in your face,
> but I know it's not nice to try to overwrite a constant. Should be:
> strcpy(write_buf, "pre copy");
I'm not overwriting any constants.
The above works fine, no explosions.
It will explode if I do try to overwrite the const buf: that results in
no keys working; only "alt" + function keys to get to a different
desktop still work.
> (but I'm not sure strcpy is available in kernel address space, maybe
> write: for(t=write_buf,s="pre copy"; *t++=*s++; *s);
> or something droll like that.
>
That's what the copy_from_user is for:
it copies copy_size bytes from a pointer into user space to the
write_buf.
> >
> > copy_from_user(write_buf,chbuf,copy_size);
> > printk(KERN_NOTICE "write open %s Count is %i\n",write_buf,
> > copy_size);
> > return copy_size; /*This is always returned as a large negative
> > number */
>
> Another mystery, what was 'count' to begin with?
The same as copy_size???
The only thing which seems to work is if I do:
    int cnt = (int) buf;
This always gives the correct number of bytes in the buf.
karlo
>
> > };
>
> --
> -- Keith Wright <[EMAIL PROTECTED]>
>
> Programmer in Chief, Free Computer Shop <http://www.free-comp-shop.com>
> --- Food, Shelter, Source code. ---
------------------------------
From: [EMAIL PROTECTED] (Christopher Browne)
Subject: Re: Adding XML to Linux (Proposal Outline)
Reply-To: [EMAIL PROTECTED]
Date: Thu, 16 Sep 1999 02:56:19 GMT
On Wed, 15 Sep 1999 17:53:00 GMT, Howard B. Golden
<[EMAIL PROTECTED]> wrote:
>Bear with me as I give a bit of quasihistory: A key benefit of Unix was
>its organization around text files. Programmers were encouraged to
>write "filters" that would transform the input text file into the
>output text file. Unix included pipes to make it easy to string these
>filters together.
>
>Text files can handle a lot of problems, but they can't handle many
>important things. So I propose adding additional "paradigms." (I hope
>we can find a better word.)
>
>The most obvious to me is a tree structure. What I have in mind is
>adding features to programs that make it easy to design filters that
>take trees as inputs and return trees as outputs. (A text file is a
>simple tree, consisting of either one leaf (the whole file), or as many
>leaves as there are characters, or as many leaves as there are lines,
>depending on how you look at it.) I'll say more about the
>implementation below.
>
>You may be wondering what this has to do with XML. The point is, XML
>can represent many tree structures. (Can it represent all? I don't
>know, but maybe a theoretician can answer.) I propose to use XML as
>the "external" (between computers) way of transmitting trees. Within
>the computer, trees might be represented using an object-oriented
>structure that wouldn't require a text representation.
I will "take exception" at this point, and raise these issues:
a) A severe problem with XML is that it must be both generated and
parsed, and it is a nontrivial matter (probably an unsolvable
problem) to ensure that XML documents that are generated are
guaranteed to be both:
1) Parsable, and
2) Conformant to the DTD that the document claims to conform to.
b) "Come on over baby, whole lot of parsin' goin' on!"
With suitable apologies to Jerry Lee Lewis,
<http://www.fiftiesweb.com/lyrics/wholotta.htm>
This is a corollary to a); if everything's XML, then you need to make
sure it's all *valid* XML.
c) What about those DTDs?
If it all needs to be validated, then you need a pile of DTDs too.
d) UNIX already has ways of representing hierarchy; That's What
Kernighan, Ritchie, Pike, et al Created Directories For.
Between directories and links, there are some good ways of
representing hierarchy already. The filesystems available so far have
not been efficient at managing Hordes Of Tiny Files (HOTFs), which is why
Reiserfs is quite an important project.
>How could this be implemented? Here's my first thought: Currently,
>programs get two parameters at their "main" entry point, argc and argv,
>a count and a list of text strings, respectively. They read
>from "stdin" and write to "stdout", which are assumed to be text
>streams. What if we set up a new entry point, say "xmain" that gets a
>single "xarg" (which is a tree). The program can read (or
>process) "xstdin" and write "xstdout", which are also trees.
You might want to look at
<http://www.ozemail.com.au/~birchb/linuxml/linuxml.htm>, which
proposes a not dissimilar scheme.
I'd tend to think that making Reiserfs stable (and portable to other
UNIXes) would be a better approach, allowing creation of HOTFs,
perhaps using XML as a way of managing structured data when it is
actually critical to *parse* the structured data.
Alternatively, this also arguably looks a Whole Lot like the notion of
passing around Lisp-like structures rather than Dumb Text. You might
look at <http://www.hex.net/~cbbrowne/lisposes.html>, and particularly
at NASOS.
The point here is that if you plan to pass around everything as trees,
it becomes Highly Logical for those trees to actually be Lisp-style
lists.
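To make that concrete, here is one made-up tree written both ways (the
element names are invented for the example):

    <config><server><port>8080</port></server></config>

    (config (server (port "8080")))

Both carry the same structure, but the second form is far cheaper to
generate and to parse correctly, which is exactly what points a) and b)
above are worrying about.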
--
"Note that if I can get you to `su and say' something just by asking,
you have a very serious security problem on your system and you should
look into it." -- Paul Vixie, vixie-cron 3.0.1 installation notes
[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/xml.html>
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************