Linux-Development-Sys Digest #227, Volume #6 Wed, 6 Jan 99 20:14:06 EST
Contents:
readdir_r ("David")
Re: Registry for Linux - Bad idea (George MacDonald)
Re: Registry for Linux - Bad idea (George MacDonald)
Change thread name? (Ian D Romanick)
Re: Glibc2.0.7 where is it ? (brent verner)
Re: device driver (Adrian 'Dagurashibanipal' von Bidder)
Re: Timing events via parallel port?... (Eric Crampton)
Re: disheartened gnome developer (Navindra Umanee)
Re: blocksize / file write speed anomaly (Jerry Dinardo)
iBCS? (JAD)
----------------------------------------------------------------------------
From: "David" <[EMAIL PROTECTED]>
Subject: readdir_r
Date: Wed, 6 Jan 1999 13:56:53 -0800
Does anyone know where to find documentation for readdir_r() on Linux? It
does appear in dirent.h. I'm porting a piece of code from Solaris to Red
Hat 5.1. The problem is that readdir_r() does not seem to return the same
values as it does on Solaris. This seems odd to me since readdir_r() is
supposed to be part of the POSIX standard (I thought), and Solaris 2.5 and
Linux conform to it for the most part. Could the Linux readdir_r() function
be conforming to an earlier standard? Any documentation would really help.
This is really driving me nuts. Thanks for any help.
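Here's roughly what I'm doing - a minimal sketch of the POSIX.1c form that
the Linux dirent.h declares, where readdir_r() returns an int and hands the
entry back through a pointer. My guess, and it's only a guess, is that
Solaris 2.5 defaults to an older draft form that returns a struct dirent *
instead, which would explain the different return values:

#include <stdio.h>
#include <dirent.h>

/* Minimal sketch of the POSIX.1c readdir_r() loop as glibc declares it.
 * readdir_r() returns 0 on success or an errno value on error; *result
 * is set to NULL when the end of the directory is reached. */
static int list_dir(const char *path)
{
    DIR *dir;
    struct dirent entry;        /* caller-supplied storage   */
    struct dirent *result;      /* NULL at end-of-directory  */

    dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return -1;
    }
    while (readdir_r(dir, &entry, &result) == 0 && result != NULL)
        printf("%s\n", entry.d_name);
    closedir(dir);
    return 0;
}

int main(void)
{
    return list_dir(".") < 0 ? 1 : 0;
}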
David
------------------------------
From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Registry for Linux - Bad idea
Date: Wed, 06 Jan 1999 09:30:05 GMT
Robert Krawitz wrote:
>
> [EMAIL PROTECTED] writes:
>
> > > It is *not* dated, unscalable nor unwieldy. Using tools such as
> > > secure shell or rdist one can easily write a program to update
> > > numerous machines automatically. It took me less than an hour to come
> > > up with a perl script which updates all of our Linux servers
> > > automatically. It does this using RSA encryption, and requires each
> > > machine's root password to be typed in before it will execute. The
> > > reason why these things are so easy to construct: they use PLAIN OLD
> > > TEXT and standard I/O.
> >
> > You are a system administrator. You know perl, you're familiar with rdist;
> > sounds to me like you used PGP to encrypt outbound data. Clearly, you have a
> > command over the systems involved here.
> >
> > Not everyone does.
>
> Anyone who's managing a large cluster of machines had *better* have
> good command over the systems. Trying to hide the fact that large
> systems are unpredictable and complex under a thin glossy veneer is
> asking for trouble. Better that people understand that up front. A
> person not familiar with Unix should not be managing a cluster -- or
> more -- of Unix machines. Better that this person should realize it
> up front than think that he does know what to do (because of all the
> fancy user interfaces).
>
> > > Linuxconf already works just fine, and it has managed to centralize
> > > more configuration information in one easy-to-use place than Microsoft
> > > has ever dreamed of (SMC is just a ghost of what linuxconf can already
> > > do).
> >
> > It's my understanding that linuxconf works by directly accessing the text
> > files when it needs to do work. Wouldn't it be nice if linuxconf only had one
> > interface into all the configs?
>
> No, because it still has to know all the gritty details. A sendmail
> configuration editor won't be materially simpler to write if it has to
> update a bunch of keys in a database vs. updating a sendmail.cf file.
> The semantics are the hard part, not the syntax. Rationalizing the
> semantics would help a lot more than rationalizing the syntax.
>
> > Wouldn't it be nice if someone who is using
> > vi or emacs on a config file would not clobber changes that are pending in
> > linuxconf?
>
> Fine, so linuxconf can check before writing the file out. For that
> matter, linuxconf could touch the file when it first changes its
> internal state so that an emacs user will get a nasty message if s/he
> tries to edit the file. I'd argue that all this fancy interface stuff
> is intended for less knowledgeable users trying to maintain their home
> or SOHO systems, not for large installations, and so that this is
> somewhat of a red herring.
>
> > Wouldn't it be nice if the folks at RedHat could concentrate on
> > making the interface truly first rate rather than coding plugins and allowing
> > for software changes everytime the author of a program gets industrious?
>
> Again, this is looking at the wrong issue. What changes when "the
> author of a program gets industrious" are the semantics -- the schema,
> if you will -- not the syntax. For example, there have been some
> extensions to inetd.conf. These extensions are generally back
> compatible, so an old inetd.conf file will work well enough on a newer
> system. However, linuxconf still has to be fixed to know about these
> changes. It doesn't matter whether the changes are to a text file or
> to relational tables, anything that modifies an inetd configuration
> has to know about the changes.
>
One solution to changing schemas is to externalize the schema into
a separate file, then version tag it. Thus when a new version of an
app comes along, or more precisely its interface changes (i.e. its
persistent storage interface), then define a new interface spec (i.e.
schema). This would allow one module to handle many schemas.
Also, most files support some kind of comment mechanism, which
means the files could be "classed" and version tagged. Thus
one could possibly tell from the flat file what interface module
to use. Each interface module could also have a validation
routine which checks whether the file's syntax/semantics are ok.
Worst case, you could try each version of a module until one
matches, or none does, in which case punt.
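To make that concrete, here is a rough C sketch of the dispatch idea. The
"#%" comment tag format and the handler table are purely made up for
illustration - the real format would be whatever the interface spec defines:

#include <stdio.h>
#include <string.h>

typedef int (*parse_fn)(FILE *fp);

/* Hypothetical version-specific parser modules. */
static int parse_inetd_v1(FILE *fp) { (void)fp; /* old schema */ return 0; }
static int parse_inetd_v2(FILE *fp) { (void)fp; /* new schema */ return 0; }

struct schema {
    const char *klass;      /* which file family, e.g. "inetd.conf" */
    int         version;    /* schema revision                      */
    parse_fn    parse;      /* module that understands it           */
};

static struct schema schemas[] = {
    { "inetd.conf", 1, parse_inetd_v1 },
    { "inetd.conf", 2, parse_inetd_v2 },
};

/* Read a "#% <class> <version>" tag from the leading comments of a flat
 * file and hand the file to the matching interface module.  (The "try
 * each module's validation routine" fallback is left out of the sketch.) */
int parse_config(const char *path)
{
    char line[256], klass[64] = "";
    int version = 0, i, rc = -1;
    FILE *fp = fopen(path, "r");

    if (fp == NULL)
        return -1;

    while (fgets(line, sizeof(line), fp) != NULL && line[0] == '#') {
        if (sscanf(line, "#%% %63s %d", klass, &version) == 2)
            break;
    }
    rewind(fp);

    for (i = 0; i < (int)(sizeof(schemas) / sizeof(schemas[0])); i++) {
        if (version == schemas[i].version &&
            strcmp(klass, schemas[i].klass) == 0) {
            rc = schemas[i].parse(fp);
            break;
        }
    }
    fclose(fp);
    return rc;      /* -1: unknown schema, so punt */
}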
--
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live! - [EMAIL PROTECTED] (7th Coding Battalion)
------------------------------
From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Registry for Linux - Bad idea
Date: Wed, 06 Jan 1999 10:31:36 GMT
Tristan Wibberley wrote:
>
> Andrew Morton wrote:
> >
> > > It is *not* dated, unscalable nor unwieldy. Using tools such as
> > > secure shell or rdist one can easily write a program to update
> > > numerous machines automatically. It took me less than an hour to come
> > > up with a perl script which updates all of our Linux servers
> > > automatically. It does this using RSA encryption, and requires each
> > > machine's root password to be typed in before it will execute. The
> > > reason why these things are so easy to construct: they use PLAIN OLD
> > > TEXT and standard I/O.
> >
> > - no provision for local overrides (inheritance)
>
> I agree, this is awkward, but a well designed application makes this
> easily possible (and it's not difficult to do it). One solution I've
> seen is to have a number of files each inheriting from the next. Though
> I agree, the key/value pair method does make this easier. I think there
> is too much cruft on this argument for the discussion to continue in
> this thread or in THREE newsgroups. Start a new thread (leaving out the
> word registry - it's not) in comp.os.linux.development.apps and discuss
> the requirements, then look at how the current system doesn't meet
> those, and come up with ways to alter it (rather than scrap it).
>
> > - apps must be restarted
>
> A well designed application doesn't require this, it just needs to be
> told via SIGUSR1 or such to reread its config and continue gracefully.
> You can do this with a simple tool that knows how to tell applications
> to do this. (and no, doing it automatically as soon as a change is made
> to the config is not good enough, the superuser must do it).
>
> > - requires clients to be online
>
> Any system that doesn't cache data requires this. Network mounted
> filesystems can do this.
>
> > - all the other things we've been saying.
> >
> > > As for it being expensive for "administration and training resources",
> > > I have one word: linuxconf. I think you will be very surprised when
> > > RedHat 6+Gnome+linuxconf hits the shelves.
> >
> > Linuxconf is an attempt to graft a uniform interface onto legacy chaos.
>
> I think it's time to leave out end user administration helpers from the
> argument, we all know that it's got nothing to do with it (If you need a
> system more complex than the current one, you're not an end user :).
>
> > What we're doing here is exploring the requirements and design of a
> > _new_ approach which will be uniform from end-to-end. Please stop
> > shouting at us - some good may come of all this.
>
> I don't think George is trying to do that, just get a uniform interface
> for the application programmer (As he keeps stressing the importance of
> the application, and not the administration).
>
Actually I'm being a bit sneaky, I am!! It's that I want the applications
to be the proving ground! Also, linuxconf is already tackling this job, and
perhaps it will be sufficient, so I don't see a burning need there.
In other words, what we come up with will at best be only an incremental
improvement of the system admin stuff; however, we can make fairly
major improvements on the application side.
> > A few random thoughts:
> >
> > - As one of our Microsoft friends pointed out in the halloween docs, the
> > OSS community is good at following tail lights. We are now almost in
> > the embarrassing position of being in the overtaking lane. It's time to
> > look ahead and to innovate. This will require some evolution of the
> > communication and management model.
>
> Change for change's sake... I think you're going to ruin your own idea
> thinking like that. The reason OSS plays catchup is because we don't
> have to scrap stuff and replace it for marketing purposes. If you're
> going to do this, do it because you have a specific problem that needs
> solving - That's why Linux is good, and why Windows isn't.
>
> > - The discussion here tends to leap ahead of itself: we're getting into
> > fine grain implementation issues without having identified the
> > requirements.
>
Software development is never really a linear process; there is always
some thinking ahead to implementation.
> Exactly, specify what you need first. Exactly what you need, not
> innovative crap, but just the things that will solve your problem. Then
> stop and think what it limits, and decide if the limitations are worth
> it, or go back and rethink to eliminate the limitations (I don't mean
> what innovations does it limit, but what problems does it prevent
> solutions to - exactly list the problems). Then code a demo. Then you'll
> have something that other people will like and use, and that will become a
> standard (it will happen only if you take the advice in this paragraph... I
> guarantee it.)
>
Wise words indeed!!
> We're serious people in these newsgroups, we don't want to hear
> 'registry', 'active' or 'innovation'. We want to hear 'here's a problem,
> and here's a solution', followed by 'here's a problem that your solution
> may cause', etc.
>
> The reason there's been such a big argument is that all that happened at
> the start was that we got 'The current solution is not a complex system
> covering every possibility, let's change it... I know, a registry - a
> registry for registry's sake'.
>
> Now you have finally figured out problems for your solution to cater
> for. But you need to go back and do it right. Then bring it here.
>
> > Can we please take this from the top?
>
> *VERY* good idea. I will join in and help.
Agreed!! Here's a first cut at a problem definition:
http://at.home/gmd/opStore/rfo/probDefn.html
I'm still not satisfied with it, so expect it to change somewhat.
Comments are welcome.
--
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live! - [EMAIL PROTECTED] (7th Coding Battalion)
------------------------------
From: [EMAIL PROTECTED] (Ian D Romanick)
Subject: Change thread name?
Date: 6 Jan 1999 16:23:36 -0800
OK, I'm sure a way exists, but I'm just too stupid to find it. How does
one change the name of a thread (POSIX thread) under Linux? I want each of
my threads to show up in ps/top with a descriptive name. I'm sure that this
will be 100% unportable, but that's not a concern at this time.
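[A minimal sketch of one Linux-only answer, assuming a kernel that supports
the prctl(PR_SET_NAME) call, which sets the name ps/top display for the
calling thread. The thread name here is made up; link with -lpthread.]

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/prctl.h>

/* Sketch only: give the calling thread a name that ps/top can show.
 * prctl(PR_SET_NAME) is Linux-specific and silently truncates the name
 * to 15 characters, which suits the "100% unportable" requirement. */
static void *worker(void *arg)
{
    prctl(PR_SET_NAME, (unsigned long)(const char *)arg, 0, 0, 0);
    sleep(30);                  /* long enough to look at it in ps/top */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, "my-worker");
    pthread_join(tid, NULL);
    return 0;
}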
--
"Except it usually sounds like a fog horn instead of a snare"
Cool compression stuff at http://www.cs.pdx.edu/~idr
------------------------------
From: brent verner <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc
Subject: Re: Glibc2.0.7 where is it ?
Date: Wed, 06 Jan 1999 07:26:54 -0500
AFAIK ...
ftp://alpha.gnu.org/gnu
maybe there is some other place
brent
Matt wrote:
>
> Help,
>
> I need to upgrade to glibc 2.0.7. I have upgraded to 2.0.6, but
> now I need glibc 2.0.7. Does anyone know where I can get a download
> of the locale, crypt and other files I need (like the 2.0.6 ones)?
>
> Many thanks
>
> Matt
------------------------------
From: [EMAIL PROTECTED] (Adrian 'Dagurashibanipal' von Bidder)
Subject: Re: device driver
Date: Wed, 06 Jan 1999 12:58:10 GMT
On 4 Jan 1999 18:57:00 GMT, [EMAIL PROTECTED] (Scott Savarese) wrote:
>I have been sending a few posts recently on creating a device driver for my
>CDROM on my Compaq 1640 laptop. [...] located as master on the second
>IDE controller.
Usually, you have /dev/cdrom symlinked to /dev/hdc, so, yes,
major/minor are the same.
What does the IDE driver say at boot time? It should be something
like this:
# dmesg | less
[... quite a lot ...]
hda: Whateveryourharddiskis 4G <hda1 hda2>
hdc: ATAPI CDROM
[... another lot ...]
What does 'mount -t iso9660 /dev/hdc /mnt' say?
What does 'cat /dev/hdc > /dev/null' say?
Any CD-ROM drive which is truly IDE compatible should be addressable
with the standard ATAPI CD-ROM driver.
However, I don't know too much about laptops - perhaps there is
something in the Laptop-HOWTO (or you could write a contribution
once you've found out...)
--
Greets from over there
Dagurashibanipal
EMail: [EMAIL PROTECTED]
Nothing travels faster than light.
With, of course, the exception of bad news. -- D. Adams
------------------------------
From: Eric Crampton <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Timing events via parallel port?...
Date: 6 Jan 1999 13:04:52 GMT
Maxwell Lock <[EMAIL PROTECTED]> writes:
> I'm thinking about developing some code to time slot car
> races.
Wow, that's amazing. My brother just wrote such a thing for
Windows. (This was before I converted him to Linux; he's much happier
now!)
> ... I'd like to be able to use interrupts or some
> such.
If you aren't too concerned about portability, you can use /dev/rtc
(you'll need to compile that into the kernel if it isn't already). See
the documentation in the Documentation directory of the kernel; it has
code examples, too. You can get some pretty good interrupt rates with
it.
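Something along these lines should get you started - a minimal sketch in
the spirit of the kernel's Documentation/rtc.txt example. The 64 Hz rate
is arbitrary; rates above 64 Hz generally need root:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
    unsigned long data;
    int i, fd = open("/dev/rtc", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/rtc");
        exit(1);
    }
    if (ioctl(fd, RTC_IRQP_SET, 64) < 0) {    /* 64 interrupts per second   */
        perror("RTC_IRQP_SET");
        exit(1);
    }
    if (ioctl(fd, RTC_PIE_ON, 0) < 0) {       /* enable periodic interrupts */
        perror("RTC_PIE_ON");
        exit(1);
    }
    for (i = 0; i < 64; i++) {
        /* read() blocks until the next interrupt; the low byte of `data'
           is an IRQ type bitmask, the rest is the interrupt count. */
        if (read(fd, &data, sizeof(data)) < 0) {
            perror("read /dev/rtc");
            exit(1);
        }
    }
    printf("got %d RTC ticks\n", i);
    ioctl(fd, RTC_PIE_OFF, 0);
    close(fd);
    return 0;
}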
------------------------------
From: Navindra Umanee <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.development.apps,comp.os.linux.x
Subject: Re: disheartened gnome developer
Date: 7 Jan 1999 00:33:40 GMT
Tristan Wibberley <[EMAIL PROTECTED]> wrote:
> Navindra Umanee wrote:
>>
>> jedi <[EMAIL PROTECTED]> wrote:
>> >>The runtime version of Qt (just the .so file) is not a development tool.
>> >
>> > Back to spreading old lies again I see...
>> >
>> > [deletia]
>> >
>> > You know very well that a 'runtime version' would
>> > not all be sufficient even for a user workstation.
>>
>> Sometimes you have a way with words... Elaborate, why is it a lie?
>
> No... Please don't elaborate. This thread's more resilient than an eggy
> fart in a sauna!
Huh? I think you're confusing threads. Here's the gist of this one:
Claim: You cannot use the runtime version of Qt to develop an
application.
Counter-claim: Roberto is a liar and the 'runtime version' is not
sufficient for a user workstation. (???)
> Let it die now.
>
> BTW, note the followups.
Sorry, but what does gnu.misc.discuss have to do with it?
I've set the followups to comp.os.linux.advocacy.
-N.
--
"These download files are in Microsoft Word 6.0 format. After unzipping,
these files can be viewed in any text editor, including all versions of
Microsoft Word, WordPad, and Microsoft Word Viewer." [Microsoft quote]
< http://www.cs.mcgill.ca/~navindra/editors/ >
------------------------------
From: Jerry Dinardo <[EMAIL PROTECTED]>
Subject: Re: blocksize / file write speed anomaly
Date: Wed, 06 Jan 1999 19:46:01 -0500
My previous post should have said:
5. write block 0,2,4,6 ...24998
6. write block 1,3,5,7 ...24999
However, even with just the even block writes my computer takes quite a long time.
I have tested it on four IBM PCs (different speeds and Linux versions).
The only time that I got results similar to yours was when I ran it on an AIX
machine (model R30 - 4 PowerPC 604 processors - 256 MB memory).
I also ran it on NT and got some very interesting results: 5 seconds for the first
loop and 6 seconds for the second loop. The NT machine was the same model as the Linux
machine (although it had a lot of memory), so I am wondering if NT really fsynced?
I have included the program so you can see exactly what I did (although Netscape
reformatted it).
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <time.h>
#include <string.h>

#define SYSBLK  (1024*2)        /* 2 KB block size        */
#define BIGLOOP 25000           /* number of blocks       */

int write_linux(int,int);
int rwrite_linux(int,int);
int pdur(time_t t1);

int main(void) {
    write_linux(SYSBLK,BIGLOOP);
    rwrite_linux(SYSBLK,BIGLOOP);
    return 0;
}

/************************************************************/
/* WRITE STUFF                                              */
/************************************************************/

/* Sequential pass: write blocks 0,1,2,... in order, then fsync. */
int write_linux(int wsize,int loopcnt) {
    int i,fd,rc;
    time_t t1=time(0);
    char buf[SYSBLK],*fn="stufit.dat";

    printf("write_linux running ...\n");
    remove(fn);
    fd=open(fn,O_CREAT|O_TRUNC|O_WRONLY,0600);
    if (fd<0) {
        perror("write_linux open failed");
        exit(1);
    }
    memset(buf,'x',sizeof(buf));
    for (i=0;i<loopcnt;i++) {
        rc=lseek(fd,i*wsize,SEEK_SET);
        if (rc!=i*wsize) {
            perror("write_linux seek failed");
            exit(1);
        }
        rc=write(fd,buf,wsize);
        if (rc!=wsize) {
            perror("write_linux write failed");
            exit(1);
        }
    }
    rc=fsync(fd);
    if (rc<0) {
        perror("write_linux fsync failed");
        exit(1);
    }
    close(fd);
    pdur(t1);
    return 0;
}

/* "Random" pass: rewrite the even blocks, then the odd blocks, then fsync. */
int rwrite_linux(int wsize,int loopcnt) {
    int i,fd,rc;
    time_t t1=time(0);
    char buf[SYSBLK],*fn="stufit.dat";

    printf("rwrite_linux running ...\n");
    fd=open(fn,O_WRONLY,0600);
    if (fd<0) {
        perror("rwrite_linux open failed");
        exit(1);
    }
    memset(buf,'x',sizeof(buf));
    for (i=0;i<loopcnt;i+=2) {            /* even blocks: 0,2,4,... */
        rc=lseek(fd,i*wsize,SEEK_SET);
        if (rc!=i*wsize) {
            perror("rwrite_linux seek failed");
            exit(1);
        }
        rc=write(fd,buf,wsize);
        if (rc!=wsize) {
            perror("rwrite_linux write failed");
            exit(1);
        }
    }
    printf("%d records written\n",i/2);
    for (i=1;i<loopcnt;i+=2) {            /* odd blocks: 1,3,5,... */
        rc=lseek(fd,i*wsize,SEEK_SET);
        if (rc!=i*wsize) {
            perror("rwrite_linux seek failed");
            exit(1);
        }
        rc=write(fd,buf,wsize);
        if (rc!=wsize) {
            perror("rwrite_linux write failed");
            exit(1);
        }
    }
    printf("%d records written\n",i/2);
    rc=fsync(fd);
    if (rc<0) {
        perror("rwrite_linux fsync failed");
        exit(1);
    }
    close(fd);
    pdur(t1);
    return 0;
}

/************************************************************/
/* MISC STUFF                                               */
/************************************************************/

/* Print and return the elapsed wall-clock time since t1. */
int pdur(time_t t1) {
    int dur=time(0)-t1;
    printf(" Duration %d seconds\n",dur);
    return dur;
}
Stefaan A Eeckels wrote:
> In article <[EMAIL PROTECTED]>,
> Jerry Dinardo <[EMAIL PROTECTED]> writes:
> > After looking at the problem further, I found that the problem has nothing
> > to do with blocksize. The problem occurs with random disk IO.
> >
> > 1. write 25000 2KB blocks sequentially.
> > 2. fsync
> > 3. close
> > 4. open output
> > 5. write block 0,2,4,6 ...24998
> > 6. write block 0,2,4,6 ...24999
> > 7. fsync
> > 8. close
> >
> > the program takes 10 seconds to do the sequential writes.
> > It then takes 259 seconds to do the random writes.
> >
> > the same program takes 10 seconds and 16 (vs 259) seconds to run on an AIX
> > system with slower disks than the linux system.
>
> I have tried to replicate your results, but I don't get the same
> results (on a PII 266):
> $ ./testspeed
> Wall time sequential write: 25000 2K byte blocks in 7 seconds (=7142.857 Kb/s)
> Wall time random write: 25000 2K byte blocks in 10 seconds (=5000.000 Kb/s)
>
> Could you post a copy of the test program?
>
> Take care,
>
> --
> Stefaan
> --
>
> PGP key available from PGP key servers (http://www.pgp.net/pgpnet/)
> ___________________________________________________________________
> Perfection is reached, not when there is no longer anything to add,
> but when there is no longer anything to take away. -- Saint-Exupéry
------------------------------
From: JAD <[EMAIL PROTECTED]>
Subject: iBCS?
Date: Wed, 06 Jan 1999 13:48:02 +0100
Last time I looked into the iBCS matter I was a little surprised to
find that nothing seemed to have happened since 1994(?).
Is that really so?
I was looking for a way to execute programs made for
Solaris 7 (Intel) under Linux, but I'm not sure about what
I found. Does anybody have any experience?
Also I wondered if there is support for AIX programs (for
Intel CPUs)?
/jan
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************