Linux-Development-Sys Digest #266, Volume #6     Wed, 13 Jan 99 01:14:13 EST

Contents:
  Re: disheartened gnome developer (jedi)
  Re: virtualizing i386-linux (Steven Hand)
  Re: iso9660: time-stamp mismatch/bug? (Horst von Brand)
  Re: Linux Sound Engine (Peter Steiner)
  Re: silly question (mlw)
  Re: iso9660: time-stamp mismatch/bug? (Jim Van Zandt)
  Re: Open Configuration Storage - was Registry for Linux (Leslie Mikesell)
  Re: Registry for Linux - Bad idea (Stefan Monnier)
  How do you read core dump on Linux? ("http://www.otokovideo.com")
  Re: silly question (George MacDonald)
  Re: Linux should not support non-free modules (MalkContent)
  RH 5.2 and 2.2.x kernels [Re: 2.2.0pre6 booting errors] (Mumit Khan)
  Re: IPMasquerading / SSH (Daniel R. Grayson)
  Re: virtualizing i386-linux (Andi Kleen)
  Re: Registry - Already easily doable, in part. (Todd Knarr)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (jedi)
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.development.apps,comp.os.linux.x
Subject: Re: disheartened gnome developer
Date: Tue, 12 Jan 1999 17:08:17 -0800

On Tue, 12 Jan 1999 18:42:20 GMT, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] wrote:
>> On Mon, 11 Jan 1999 17:13:58 GMT, [EMAIL PROTECTED] <[EMAIL PROTECTED]>
>> posted:
>> >>   You'll have to be more specific.
>> >>
>>   All I've found so far in the Redhat control-panel is like this:
>> >>
>> >> Red Hat Linux netcfg 2.18
>> >> Copyright (C) 1996, 1997 Red Hat Software
>> >> Redistributable under the terms of the GNU General Public License
>> >
>> >Gee, so the control panel is owned by Red Hat, and they could rerelease them
>> >tomorrow under a proprietary license. Just like I said they could (and said
>> >they won't do it) when you called me a liar.
>> >
>> >I suppose you won't apologize, of course.
>
>> You miss the consideration that the third line of that comment shows
>> a *really big stick* that strongly discourages Red Hat Software from
>> doing a proprietary release.
><Snip reasons why Red Hat won't do it>
>
>No, I didn't miss anything.
>That is why I said they would not do it. All I said is that they *can*
>do it. Going back to what I actually said a few posts ago:

        Which does nothing to all of the millions of copies floating
        around with the old licence. That licence doesn't suddenly
        become null and void. We would still all be able to make whatever
        derivative works we wish from it, including printing it out and
        making paper boats out of it.

[deletia]

-- 
                Herding Humans ~ Herding Cats
  
Neither will do a thing unless they really want to, or         |||
is coerced to the point where it will scratch your eyes out   / | \
as soon as your grip slips.

        In search of sane PPP docs? Try http://penguin.lvcm.com

------------------------------

From: Steven Hand <[EMAIL PROTECTED]>
Subject: Re: virtualizing i386-linux
Date: 12 Jan 1999 15:50:42 +0000


Michael Brunnbauer <[EMAIL PROTECTED]> writes:
> i'm playing with the idea to virtualize i386-linux (several kernels running
> simultaneously on one machine). this could be useful in environments where
> security is important, especially firewall-configurations. 

While you may well wish to implement this from scratch, you might like
to take a look at L4-Linux (http://os.inf.tu-dresden.de/L4/LinuxOnL4/)
This has the linux kernel running as a task on top of a micro-kernel 
(much like MkLinux, but smaller). It might be easy to restrict the set
of real resources which 'the' linux kernel gets, and instead run
multiple ones as different tasks. 

HTH, 

S.



------------------------------

From: [EMAIL PROTECTED] (Horst von Brand)
Subject: Re: iso9660: time-stamp mismatch/bug?
Date: 13 Jan 1999 02:03:35 GMT

In article <77g38k$[EMAIL PROTECTED]>, Jim Van Zandt wrote:
>I see a similar problem with vfat filesystems.  When I create a file
>with Linux then later examine it with Windows, I see the wrong time.
>In Unix, the timestamp is in UTC and any timezone information is
>applied by the displaying program (like ls).  For a DOS filesystem,
>timestamps are in local time.

Unix (and Linux) uses UTC exclusively, the translation to/from local time is
done in libc. It's controlled by environment variables, so different users
can use different timezones at the same time on the same system. This
strategy also gets rid of strange time warps (2 x 23:00, or 23:59 -> 1:00).
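
For example, two shells on the same machine can show the same instant in
different zones (the zone names are only illustrative):

    TZ=UTC date                  # prints the current time in UTC
    TZ=America/Santiago date     # the same instant, in Chilean local time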

Yep, there are braindamaged systems around that don't grok this.  Too bad.
-- 
Horst von Brand                             [EMAIL PROTECTED]
Casilla 9G, Viña del Mar, Chile                               +56 32 672616

------------------------------

From: [EMAIL PROTECTED] (Peter Steiner)
Subject: Re: Linux Sound Engine
Date: 12 Jan 1999 18:05:00 +0100
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>, Caolan (McNamara) wrote:

>on the user level side, there are a few sound daemons that already do some
>of this. esd and nas come to mind. i took a look at writing a device driver
>that would mix incoming streams to /dev/audio to allow multiple-program
>support; one of the things that stumped me was what to do with the existing
>programs that mmap /dev/audio.

What exactly does mmapping /dev/audio do? Raw access to the DMA buffer?
I've never read about this before.

Ciao,

Peter
-- 
   _   x    ___
  / \_/_\_ /,--'  [EMAIL PROTECTED] (Peter Steiner)
  \/>'~~~~//
    \_____/   signature V0.2 alpha

------------------------------

From: mlw <[EMAIL PROTECTED]>
Subject: Re: silly question
Date: Tue, 12 Jan 1999 18:24:21 +0000

Brett W. McCoy wrote:
> 
> On Tue, 12 Jan 1999 13:09:14 +0000, mlw <[EMAIL PROTECTED]> wrote:
> 
> >A utility like Dos's xcopy would be useful. I have often wanted to be
> >able to do the sorts of things it can do. Yes, it is possible to
> >synthesize the behavior with a few other utilities, but, that does not
> >diminish the fact that it would be a good utility to have.
> >
> >In UNIX terms it would probably be called xcp.
> 
> Why not alias cp plus the appropriate options (like -a or -dpR)?  What
> difference does it make if there's a native binary or you "synthesize" it
> with other commands?
> 
> --
> Brett W. McCoy

Because cp does not do the same thing that xcopy can do. 

I would have to programmatically run something like:

tar cf - pattern | (cd destdir && tar xf -), but be very careful of paths.

For instance:

I have a directory, it has subdirectories. The tree contains .doc, .xls,
.aw, and .as files. I want to copy the directory tree, but only with
the .as files. With xcopy, it would be:
xcopy \path\*.as . /s /e /h /c

Under UNIX it would be:

cd srcpath
tar cvf /tmp/fubar.tmp .
cd /destpath
tar xvf /tmp/fubar.tmp '*.as'   # quoted so tar, not the shell, matches it

An 'xcp' command would be useful.
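
Until then, something close can be synthesized in one pass with find and
cpio; a sketch (srcpath and destpath are placeholders):

cd srcpath
find . -name '*.as' -print | cpio -pdmv /destpath

cpio's pass-through mode (-p) recreates the directory structure (-d) and
preserves modification times (-m) for just the matching files.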

-- 
Mohawk Software
Windows 95, Windows NT, UNIX, Linux. Applications, drivers, support. 
Visit the Mohawk Software website: www.mohawksoft.com

------------------------------

From: [EMAIL PROTECTED] (Jim Van Zandt)
Subject: Re: iso9660: time-stamp mismatch/bug?
Date: 12 Jan 1999 18:11:32 GMT
Reply-To: [EMAIL PROTECTED]

[EMAIL PROTECTED] (Jim Y. Kwon) writes:
>
>When mounting CD-ROMs, all the files display the wrong date/time - on my
>particular machine all CD-ROM files are off by 15 hours. I was doing some
>archive work and burning some CDs and I noticed this discrepancy. I checked
>these CDs under Win95 and the files showed the correct date/time (the same
>as the date/times of the original files).

I see a similar problem with vfat filesystems.  When I create a file
with Linux then later examine it with Windows, I see the wrong time.
In Unix, the timestamp is in UTC and any timezone information is
applied by the displaying program (like ls).  For a DOS filesystem,
timestamps are in local time.  

I would expect the kernel to do a UTC->local conversion when a file
timestamp is written, and a local->UTC conversion before the timestamp
is handed over to the display program.  I guess these conversions are
not being done.

                        - Jim Van Zandt

------------------------------

From: [EMAIL PROTECTED] (Leslie Mikesell)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: 12 Jan 1999 22:36:10 -0600

In article <[EMAIL PROTECTED]>,
George MacDonald  <[EMAIL PROTECTED]> wrote:
>> 
>> You can detect a network failure pretty quickly and move on
>> to the next possibility just like detecting the lack of a file.
>> The only slow failure mode is where you make a DNS request but
>> you don't actually have a connection to a working server - as
>> long as this timeout is logged to the console along with what
>> to do about it, I don't see a problem.
>> 
>
>You're assuming I want to plug into the network; try assuming
>I don't.

Then you have to use local files or prompt the user.  That's
done, probably can't be done much better, and isn't all
that interesting. 

>> That's the point of having the alternates, but if you have to
>> explicitly configure the configuration utility, why not spend
>> your time configuring the application instead?
>> 
>You should only need to configure the config utility when you
>are defining new configuration scenarios/contexts/profiles.

Why should you expect the configuration of the configuration
utility itself to be any less volatile than any of the other
components using the facility?

>Once that's done, then if you wish or need to, configure the apps
>relative to the contexts/profiles.

But why have a new facility if it can't do this for you? 

>> >Let's say I
>> >am a consultant and bring my laptop into a site, perhaps
>> >to do a demo. I may not need to reconfigure or may not want to.
>> 
>> If you want an IP address on my network, you have to reconfigure.
>> If you don't hook to the network then nothing changes.
>
>That's your network, I'm not talking about your network. I'm
>talking about "a" network. Some sites may have "public"
>plug points with DHCP already configured so guests can
>plug in. Also some networks may be "open", some "closed"
>and others somewhere in between. These are policy decisions
>and have no business being hard coded into whatever solution
>we offer. 

How do you propose to find this information if we don't come
up with a standard way to ask for it?

>Forcing config and pre-defining settings are one kind
>of modality, there are others. We should define a system
>that allows for what you wish, but that does not mandate it.

Then it either has to ask what is available or you have
to manually configure it.  Manual configuration is already
there, so what is there to the new facility other than finding
things available on a network?

>You are talking about defining the rules for different 
>classes/objects and how to set the rules. I am talking about
>how to build the mechanism to support that.

I'm saying you should build a mechanism that will use itself
to figure out the rules.

>So we want the same thing, except I also want application
>configuration to be smarter. i.e. I may want to publish
>my public address book so that I can access it from anywhere
>on the Internet.

LDAP already does that.
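
For instance, a published address book can be queried from anywhere with
one command (the host and base DN here are hypothetical):

    ldapsearch -h ldap.example.com -b 'o=MyOrg,c=US' 'cn=Les*'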

 Les Mikesell
   [EMAIL PROTECTED]

------------------------------

From: Stefan Monnier 
<[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Registry for Linux - Bad idea
Date: 12 Jan 1999 23:37:34 -0500

>>>>> "Christopher" == Christopher Browne <[EMAIL PROTECTED]> writes:
> 1) Site config: /etc/site/foo.conf  (/etc/site might be NFS mounted
> from a central server; feel free to suggest a better location to stick
> this...)
> 2) Host-based config: /etc/foo.conf
> 3) $HOME/.foo.rc
> 4) $HOME/etc/foo.conf
> 5) $HOME/GNUStep/Library/Defaults/foo/Defaults

To encompass the ~/.foorc style, we could just say that CONFPATH has a list
of strings to which the file name is appended (rather than a list of
directories) so that you'd have:

        CONFPATH=/etc/site/:/etc/:~/.:~/etc/
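
A program would then just try each prefix in order until it finds a
readable file; a minimal sh sketch (the program name "foo" is
illustrative, and a real implementation would expand the ~ itself):

        name=foo
        IFS=:
        for prefix in $CONFPATH; do
            conf="$prefix$name"
            if [ -r "$conf" ]; then
                echo "reading $conf"    # first match wins
                break
            fi
        done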


-- Stefan

------------------------------

From: "http://www.otokovideo.com" <[EMAIL PROTECTED]>
Subject: How do you read core dump on Linux?
Date: 13 Jan 1999 04:55:03 GMT
Reply-To: [EMAIL PROTECTED]

How do you read a core dump on Linux?
Thanks.
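
The usual tool for this is gdb; a minimal sketch, assuming the binary that
dumped core is ./myprog and was compiled with -g:

    gdb ./myprog core
    (gdb) bt            # stack backtrace at the point of the crash
    (gdb) info locals   # local variables in the selected frame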

Patrick
--

sincerely,

Otoko Entertainment, LLC
http://www.otokovideo.com
http://www.otokovideo.com/wupi
http://www.otokovideo.com/photo



------------------------------

From: George MacDonald <[EMAIL PROTECTED]>
Subject: Re: silly question
Date: Wed, 13 Jan 1999 05:17:18 GMT

mlw wrote:
> 
> George MacDonald wrote:
> >
> > mlw wrote:
> > >
> > > mlw wrote:
> > > >
> > > > Peter Pointner wrote:
> > > > >
> > > > > mlw <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > Actually, I would love to see Xcopy on Linux. "xcopy /s /e /h /c this
> > > > > > there" would be great.
> > > > >
> > > > > I can't remember all the options, but are you sure "cp -a" doesn't do
> > > > > what you want?
> > > > >
> > > > > Peter
> > > >
> > > > I have looked; maybe I am dense, but I would like to do something like:
> > > >
> > > > xcopy *.cpp -s -e -h -c ../anotherdir
> > > >
> > > > This will copy all of the files that end with .cpp to another directory,
> > > > recreating the directory structure with them. If an error happens it
> > > > will continue.
> > >
> > > Don't get me wrong, I am a Linux user. Look at my mail headers. There
> > > just isn't a command with the abilities of Xcopy on Linux, but, hey if
> > > that is all I miss, I am way ahead.
> >
> > see the man page for duhduh(8), it's a secret command that is only
> > visible to root users who put the secret word "..." into a file
> > called /...enableHacks.  You need to use fsdb to create the file,
> > then set its permissions to 9876543210.
> >
> > Hope this helps
> 
> Yeh, if you want to insult me, that would require that I have enough
> regard for you to affect me.
> 
> This sort of thing is uncalled for.

Sorry, you asked an obviously silly question about something not related
to the subject matter of this group. I thought you were looking for a silly
answer. If not, I suggest you try comp.os.linux.help.



-- 
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live!   - [EMAIL PROTECTED] (7th Coding Battalion)

------------------------------

From: [EMAIL PROTECTED] (MalkContent)
Subject: Re: Linux should not support non-free modules
Crossposted-To: comp.os.linux.misc,comp.os.linux.advocacy
Date: Wed, 13 Jan 1999 05:28:17 GMT

: 
: It's the power of a hardware giant like Intel behind UDI that provides 
: the chance that MS will run into problems when trying to ignore it.
:

        Friends, in all forty of these quite educational posts I have seen
almost seven utterly divergent points of view.
        Not one of these is from someone who sounds like a remotely recent
convert to "the great glory of linux".

        I HATE WRITING CODE.

        I can't stress that enough.  I never will like it.
        Sadly, I am therefore amongst the ranks of the M$ zombies out there.
Without the know-how of a programmer, and only just learning Linux, I could
easily become a person who resents it.

        The linux interface is greatly less user-friendly, and so divergent
from windoze (I don't necessarily *like* them...) that the average end user
*will* resent it.
        Note: the average end user is also heavily influenced by television.

        So, we're trapped in a mad conundrum - do we risk the wilds of
standardization, and possibly commercialization (and thus public
visibility); or does linux stay a happy communistic community complaining
about how unfair the situation between linux and M$'s monster is?

        As a hard-working warehouse monkey, I really don't enjoy coming
home to find out that my music player for my sound card crashed while I'm
at work, and locked the system because I may or may not have configured a
kernel right.
        As we all look at our respective situations, we must also remember
that linux is *NOT* built for the typical end user - it's for those who do
want to learn more about it, and maybe help it grow.
        It's made each of us who can use it more marketable in the computer
workplace (some more than others), but taken away the
        "So, Sven, didja buy that new game?"
        "Yeah, I love it! Have you..."
from all of us, and given us the reply
        "No, X doesn't support my machine, and the drivers for A, B, and C..."

        all by our own choice.

Are any of us willing to give up our personal tinkered systems for what may
one day become another windoze?

Just a thought.

Malkcontent.
 

------------------------------

From: [EMAIL PROTECTED] (Mumit Khan)
Subject: RH 5.2 and 2.2.x kernels [Re: 2.2.0pre6 booting errors]
Date: 13 Jan 1999 05:29:14 GMT

[ posted with courtesy copy to Frank Hale  <[EMAIL PROTECTED]> ]

In article <[EMAIL PROTECTED]>,
Frank Hale  <[EMAIL PROTECTED]> wrote:
>
>I'm sure glad everybody keeps pointing out that they have successfully
>built and are running 2.2.0pre6 without any problems, but the fact of
>the matter is this helps me out nil.
>

Hi Frank,

I can understand your frustration only too well, but have faith. If I can
do it, so can anyone! I finally did get my new box running, and it smokes!

Here's what I did on a DELL dual-PII/512M/FireGL-100PRO/AIC78xx-SCSI-only 
machine.

1. Install RH 5.2 and updates (I needed to update XFree86 for Fire GL)

2. Grab the following *source* RPMs from RawHide (best site I've found
   is ftp://rufus.w3.org/linux/rawhide/):

   kernel-2.1.133-1.src.rpm

   [See #5 below for the following components]

   modutils-2.1.121-2.src.rpm
   net-tools-1.49-2.src.rpm
   procinfo-15-1.src.rpm
   util-linux-2.9-6.src.rpm

   Optionally:

   knfsd-981204-1.src.rpm

   The following I build locally since my copies have patches that are
   not in the release sources, but you might as well grab these as
   well:

   binutils-2.9.1.0.19a-2.src.rpm
   egcs-1.1.1-3.i386.rpm
   egcs-c++-1.1.1-3.i386.rpm
   egcs-g77-1.1.1-3.i386.rpm
   egcs-objc-1.1.1-3.i386.rpm

3. Now install the essential source components; they get built in the
   next steps:
   
   $ rpm -ivh kernel-2.1.133-1.src.rpm modutils-2.1.121-2.src.rpm \
       net-tools-1.49-2.src.rpm procinfo-15-1.src.rpm

4. Let's do the kernel first:

a. Now tweak the SMP config in the kernel:
   
   $ cd /usr/src/redhat/BUILD/linux

   Edit .config to suit. Mine looks like the following:

   #
   # Processor type and features
   #
   # CONFIG_M386 is not set
   # CONFIG_M486 is not set
   # CONFIG_M586 is not set
   CONFIG_M686=y
   # CONFIG_MATH_EMULATION is not set
   CONFIG_MTRR=y
   CONFIG_SMP=y
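
   If you edit .config by hand like this, it's worth letting the config
   machinery fill in any dependent options afterwards, from the kernel
   source tree:

   $ make oldconfig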

b. Now build the kernel:

   $ cd /usr/src/redhat/SPECS
   $ rpm -ba kernel-2.2.spec

   This step puts things by default under /usr/src/redhat. After this
   step, you have ready-to-go RPMs in /usr/src/redhat/RPMS/i386/.

c. Now you install:
   
   $ cd /usr/src/redhat/RPMS/i386
   $ rpm -ivh kernel-2.1.133-1.i386.rpm 

d. Upgrade (you might have to use --force) the kernel headers as well:

   $ rpm -Uvh kernel-headers-2.1.133-1.i386.rpm 
   $ rpm -ivh kernel-source-2.1.133-1.i386.rpm

e. Create initial RAM image, especially if you boot off SCSI and so on:
   (See the Kernel Upgrade HOWTO at RedHat site)
   
   $ cd /boot
   $ /sbin/mkinitrd -v initrd-2.1.133-1.img 2.1.133-1

f. Edit /etc/lilo.conf as necessary and reboot. I'm assuming you have
   a rescue disk and so on.
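
   For reference, a lilo.conf stanza for the new kernel might look like
   the following (a sketch: the image, label, and root device names are
   assumptions, so adjust for your setup):

   image=/boot/vmlinuz-2.1.133-1
        label=linux-new
        initrd=/boot/initrd-2.1.133-1.img
        root=/dev/sda1
        read-only

   Remember to re-run /sbin/lilo afterwards so the change takes effect.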

5. Update the rest of the components (modutils, etc). I actually ran
   without updating for a while, and it worked just fine.

After this was successful, I let the machine build a whole bunch of stuff
using ``make -j'' (an egcs-1.1.1 bootstrap takes 9 min 30s, for example)
for about 5 hours, and was then reasonably confident about its stability.

I then built 2.2.0-pre6 + ac2 (ac == Alan Cox) using egcs-1.1.1 (didn't
use the usual kernel default compiler, gcc-2.7.x) and installed that,
and re-ran all the compile jobs. Some of these compiles take >400M
of VM, so it's a good memory test as well ;-)

For Alan Cox patches, see
  ftp://ftp.linux.org.uk/       (which has been off the net for a while)
  ftp://blue.netnation.com/pub/alan/patch-2.2.0pre6ac2.bz2

I apologize in advance for any omissions and/or errors ...

Regards,
Mumit


------------------------------

From: [EMAIL PROTECTED] (Daniel R. Grayson)
Subject: Re: IPMasquerading / SSH
Date: 12 Jan 1999 23:33:20 -0600

mlw <[EMAIL PROTECTED]> writes:

> Daniel R. Grayson wrote:
> > 
> > I don't know what you mean by "without using masq".  With two computers, one
> > modem, and an ISP that hands me one IP number when I dial up with ppp, I
> > don't know how to get both of them on the network without using IP
> > masquerading.  And it works fine.  Except for the expiration time.
> > 
> > I also don't know what you mean by "I could be the server to ...", sorry --
> > it sounds like you're offering to be part of my modem setup, so it must be a
> > misprint.
> 
> You misunderstood what I meant. Sometimes servers will disconnect with
> no activity on their own. You may be tracking down a problem which does
> not exist. Do others not going through a masquerading server have this
> same problem? Is this piece of information known?
> 
> It is just that when you say you have tried various things and still
> have a problem, perhaps you don't have one.

I'm not sure what you mean by "server".  Perhaps you mean my "internet
service provider", but it doesn't participate in or know about the
masquerading.  In any case, the modem stays connected.  Here is my setup in
more detail.

        hosts A and B are connected to each other by ethernet in my house

        host B also has a modem which dials up my ISP using ppp

        host B implements "IP masquerading"

        when host A contacts host C somewhere else in the world, say by
                telnet, the traffic goes through host B.

        on host B, the command "netstat -M" will display each such
                "masqueraded" connection, together with its expiration
                time.  I can watch the time till expiration tick down
                from 15 minutes to 0 for the connection from A to C --
                each time some data are exchanged, it gets reset to
                15 minutes.  Eventually the connection is terminated.

        host A can have multiple connections with hosts in the outside
                world -- only the one that stayed idle for 15 minutes
                is terminated.  The others remain active, and the modem
                connection is still there.
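
If the 15-minute expiration itself is the problem, the masquerading
timeouts are tunable; a sketch, assuming a 2.0-series kernel with ipfwadm
(the three values are the tcp, tcpfin, and udp timeouts, in seconds):

        ipfwadm -M -s 7200 10 160

On 2.1/2.2 kernels, the ipchains equivalent is "ipchains -M -S 7200 10 160".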

------------------------------

From: Andi Kleen <[EMAIL PROTECTED]>
Subject: Re: virtualizing i386-linux
Date: 12 Jan 1999 17:54:11 +0100

In article <77fjap$o6e$[EMAIL PROTECTED]>,
Michael Brunnbauer <[EMAIL PROTECTED]> writes:

> the physical memory is divided statically for exclusive use by the virtual
> kernels. a later version could assign physical mem to the kernels dynamically.
> io-ports,hardware interrupts, segments for memory-mapped-io are assigned in
> the same way. only one kernel can use a piece of hardware.

This means if you run N instances of the kernel you need N SCSI adapters,
N network cards, N graphics adapters, etc. If you require that anyway, why
not just buy multiple PCs?

> there are 2 possible ways to share hardware.
>  -an inter-kernel communications mechanism. special device drivers of one
>   kernel could use low or high level device drivers of another that 'owns'
>   the hardware. another nice use of a communication mechanism is a
>   network-driver to build virtual LANs between the kernels.
>  -virtualized drivers with switching in the metakernel (useful for console and
>   keyboard)

> cli/sti could be implemented by masking all assigned hardware interrupts of the
> calling kernel.

rtlinux (www.rtlinux.org) already does this in a limited fashion. It
virtualizes sti/cli to guarantee fast interrupt response for its hard
real-time tasks. The regular kernel is simply the task with the lowest
priority. It doesn't use any protection domains, though, and only supports
one kernel instance. rtlinux doesn't do its own IO (except where specially
implemented in some rttask) and relies on the normal kernel for that.

Another related design is MkLinux. All hardware-dependent services have
been stripped from it, and it uses Mach services for them. The hardware is
not virtualized; all interrupt handlers and device drivers run inside Mach.

On the other hand - is it worth it? PC hardware has become so cheap, much
cheaper than programmer time. Just buy a few old Pentiums, run multiple
Linuxes and use hardware firewalling [5 inches of air(tm) - patent
pending]. Most people seem to be moving to clustered configurations; is it
worth taking the opposite path?

-Andi 

------------------------------

From: Todd Knarr <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Registry - Already easily doable, in part.
Date: 13 Jan 1999 05:46:43 GMT

In comp.os.linux.development.system Stefan Monnier 
<[EMAIL PROTECTED]> wrote:
> My take on it is that you need a CONFPATH setting that would typically look
> like CONFPATH=$HOME/etc:/etc:/usr/local/etc.

The only thing is that the set of config files can vary depending on the
software. For example, something like fetchmail _only_ has user
configuration almost by definition; the name resolver only has system-wide
configuration; a mail program probably has both; some applications might
have things in between system-wide and per-user; and in the most perverse
case you might have an app with a different search path depending on which
user is running it (e.g. admin users search in /usr/local/admin/etc between
system and user config while non-admin users do not).
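
Even that last case could be expressed with a plain CONFPATH set per user
class at login time; a hypothetical sketch (the group name "admin" and the
/usr/local/admin/etc location are just the example above):

    if id -Gn | grep -qw admin; then
        CONFPATH=$HOME/etc:/usr/local/admin/etc:/etc
    else
        CONFPATH=$HOME/etc:/etc
    fi
    export CONFPATH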

>What to do when data appears both in ~/etc and in /etc can be tricky.  Also it might
> be desirable to allow the admin to enforce some part of the config (for instance
> by forcefully prepending /etc/impose to CONFPATH).

One problem with having the admin force things: often the admin is not
qualified to know when things should be forced. I'm often in that position
as a developer, where the admins know only the current release version and
the version in testing, while _I'm_ working about 4 versions down the road
from that and the configuration that works for the release version will
cause the development version to malfunction ( or worse yet, will cause the
development version to do things that cause the entire rest of the company
to malfunction ).

This applies to some of the other suggestions being bandied about here
as well. For example, if a consultant comes in he probably wants things
like DNS, routing and mailservers configured from the local network's
configuration but he does _not_ want his word processor to suddenly
acquire the company's standard configuration in the process.

-- 
Contrary to popular belief, Unix is user friendly. It just happens to be
very selective about who its friends are. 
                                -- Kyle Hearn 

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
