Linux-Development-Sys Digest #267, Volume #6 Wed, 13 Jan 99 02:14:27 EST
Contents:
Re: silly question (mlw)
Re: 2.2.0pre6 booting errors (bill davidsen)
Linux LVM (was "things I'd pay to have developed for Linux...") (Andreas Dilger)
Re: Dynamically linked library (Frank Sweetser)
Re: moving linux to different partition (David Fox)
Re: Linux Sound Engine (bill davidsen)
Re: disheartened gnome developer ([EMAIL PROTECTED])
Re: Open Configuration Storage - was Registry for Linux (George MacDonald)
Re: How to get disk I/O rate? (Aurel Balmosan)
Re: virtualizing i386-linux (Emile van Bergen)
Re: virtualizing i386-linux (Michael Brunnbauer)
----------------------------------------------------------------------------
From: mlw <[EMAIL PROTECTED]>
Subject: Re: silly question
Date: Tue, 12 Jan 1999 18:41:56 +0000
Josef Moellers wrote:
> It's a philosophical question.
Why is it a philosophical question? There is a task which is useful
which no utilities perform easily. Yes, any number of utilities can be
strung together to do it, just not easily.
Why does UNIX have 'rmdir' and 'rm' when both can remove directories?
Why are there so many shells?
Because it is this wealth of utilities that makes UNIX easier to use
than other OS platforms.
>
> DOS has the approach that there has to be a specific tool for each and
> every purpose (... Gates can think of). If a user wants to do something,
> there has to be a ready-to-use tool. If there is no tool, you can't do
> it! So there is no use for a scripting language, hence there is none.
This is not true. DOS, Windows and NT support pipes, and one can type:
dir | sort | more
in DOS, Windows and NT; UNIX does not have a lock on these features.
>
> UNIX has a tool-based approach that you can build a tool for a specific
> purpose from simpler tools. In order to do so, you need a powerful
> scripting language, hence there are even a number of shells!
>
> "Finally, the UNIX system provided a new style of computing, a new way
> of thinking of how to attack a problem with a computer. This style was
> based on the use of tools: using programs separately or in combination
> to get a job done, rather than doing it by hand [pointy-clicky], by
> monolithic self-sufficient subsystems [xcopy], or by special-purpose,
> one-time programs." (R. Pike, B.W. Kernighan)
XCOPY is not a monolithic self-sufficient subsystem any more than rm
or cp is. In fact, cp is deficient in that it can't do what xcopy does.
>
> BTW If your task is performed often enough, you can always build your
> own tool and call it ...
> > In UNIX terms it would probably be called xcp.
> ..., yes, "xcp" would be a good name!
>
> usage: xcp 'pattern' dstdir
>
> Beware that 'pattern' will be expanded by the shell _before_ xcp is
> invoked, so it must be quoted to prevent this!
I love UNIX, but the UNIX attitude is tiring. UNIX is not a fixed
document, it is a living platform. If a single utility can make a
specific task easier, there is NO reason why it would not be useful to
have or pursue.
This all started with a little rant. I stated that the abilities of
xcopy would make doing a few things in UNIX a bit easier. Rather than
thoughtful statements like "I've never needed that" or "That would be
cool," I get "Oh NO! you can't do that, UNIX has all the pieces with
which you can build that command," "Philosophically, you shouldn't," etc.
I don't need a lecture about UNIX. I have been using UNIX, off and on,
since 1985: the original Suns, SCO Xenix (286), Interactive UNIX,
Solaris, Linux, NetBSD, and FreeBSD.
--
Mohawk Software
Windows 95, Windows NT, UNIX, Linux. Applications, drivers, support.
Visit the Mohawk Software website: www.mohawksoft.com
------------------------------
From: [EMAIL PROTECTED] (bill davidsen)
Subject: Re: 2.2.0pre6 booting errors
Date: 13 Jan 1999 05:46:32 GMT
In article <[EMAIL PROTECTED]>,
Frank Hale <[EMAIL PROTECTED]> wrote:
| I'm sure glad everybody keeps pointing out that they have successfully
| built and are running 2.2.0pre6 without any problems, but the fact of
| the matter is this helps me out nil.
Frank, I did one system Slackware 3.6, and it just dropped in. I did
another built from about 3.2 and upgraded all the painful way, and it is
a mess (but works). There is a file in the Documentation directory
called Changes which lists every blessed thing you need and where to get
it.
- make a /usr/src/linux-2.2.0pre6 directory with the kernel unpacked in
  it
- point the symbolic link /usr/src/linux at it
- pull each and every package listed in Changes, compile and install
  it
- now: make menuconfig, make dep, make clean, make bzImage,
  make modules, make modules_install
That will work, it's just a PITA, and do *not* omit steps one and two;
you need the right include files in place to get correct builds.
I have done it this way, it's very painful, but it will absolutely work
with virtually any system which booted enough to compile its own kernel
and had an ELF compiler.
NOTE: read the installation instructions on library upgrades; as I
recall there's something non-obvious you *must* delete to make it work,
and a reboot is needed in the middle. Follow the instructions and you
will have joy.
Or order Slackware 3.6 from CheapBytes for $2 and install only the
kernel.
--
bill davidsen <[EMAIL PROTECTED]> CTO, TMR Associates, Inc
"Too soon we grow old, and too late we grow smart" -Arthur Godfrey
------------------------------
From: [EMAIL PROTECTED] (Andreas Dilger)
Crossposted-To:
comp.os.linux.misc,comp.os.linux.development.apps,comp.os.linux.hardware
Subject: Linux LVM (was "things I'd pay to have developed for Linux...")
Date: 12 Jan 1999 17:51:24 GMT
In article <77efbr$rvq$[EMAIL PROTECTED]>,
Peter Samuelson <[EMAIL PROTECTED]> wrote:
>anyway I really *do* wish you could mirror an entire VG at once; I
>tried once for quite a while to coax AIX into doing this.
In AIX 4.2 and up, there is a command "mirrorvg" which does exactly
this. There is a SMIT entry for it as well ;-)
>> Striped or mirrored, not both, at least up to 4.2.1 kernel.
>
>You are of course correct. Seems like a silly design limitation.
I believe that Linux LVM will be able to do this once mirroring is
available.
>Right on. I was amazed when we got an SGI O2 in here and it had IRIX
>6.3 preinstalled on a single partition of the 4G disk. I was used to
>AIX which by default creates separate fs's for /, /tmp, /var, /home and
>/usr
I agree that for single-user systems multiple partitions can be a
hassle. However, on AIX with LVM and jfs, it is so easy to resize
filesystems that there is little reason not to have multiple filesystems
in the first place. It WOULD be nice if you could shrink filesystems
on AIX (even if it meant unmounting them...)
>ID strings
>on each disk can cut both ways, but I think a vgck utility to repair or
>compensate for incomplete or corrupted vg's would help a lot and not be
>that hard to do.
I think that LVM with disk IDs (like PVIDs under AIX, but probably using
the same UUID code that ext2 uses) is the best way to go, so we don't
have to deal with the nightmare of disks being renamed on IDE or SCSI
when you re-configure your system. Having disk IDs would mean we don't
have to change the way Linux does disk major/minor device numbers in
order to rid ourselves of the evil of /dev/sdc becoming /dev/sdd when
you connect an external SCSI drive (for example).
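For what it's worth, ext2 already stores exactly such a UUID in its
superblock, so a vgck-style tool could match disks no matter what name
they come up under. A rough sketch of reading it; the offsets
(superblock at byte 1024, s_magic at 56, s_uuid at 104) are from my
reading of ext2_fs.h, so treat them as assumptions to verify:

  /* uuid_peek.c -- print the ext2 volume UUID of a device (sketch) */
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      unsigned char sb[1024];
      int i, fd;

      if (argc != 2) {
          fprintf(stderr, "usage: uuid_peek <device>\n");
          return 1;
      }
      fd = open(argv[1], O_RDONLY);
      if (fd < 0 || lseek(fd, 1024, SEEK_SET) < 0 ||
          read(fd, sb, sizeof sb) != sizeof sb) {
          perror(argv[1]);
          return 1;
      }
      /* s_magic (0xEF53, stored little-endian) sits at offset 56 */
      if (sb[56] != 0x53 || sb[57] != 0xEF) {
          fprintf(stderr, "not an ext2 filesystem\n");
          return 1;
      }
      /* s_uuid is the 16-byte field at offset 104 */
      for (i = 0; i < 16; i++)
          printf("%02x%s", sb[104 + i],
                 (i == 3 || i == 5 || i == 7 || i == 9) ? "-" : "");
      printf("\n");
      close(fd);
      return 0;
  }

An LVM with its own PVIDs would do the same sort of lookup against its
own on-disk label rather than the filesystem superblock.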
>Anyway, Heinz's LVM is reportedly modeled after the HP-UX one which I
>understand is somewhat similar to AIX. I eagerly await version 0.5
>which has been almost here for about six months now. I have been
>reluctant to hack 0.4 (to support glibc and newer kernels) just in case
>0.5 already has this stuff....
I'm hoping that someone will develop AIX compatibility commands for
Linux LVM, so I don't have to use the HP versions, which I find more
confusing...
Cheers, Andreas
--
Andreas Dilger University of Calgary \"If a man ate a pound of pasta and
Micronet Research Group \ a pound of antipasto, would they
Dept of Electrical & Computer Engineering \ cancel out, leaving him still
http://www-mddsp.enel.ucalgary.ca/People/adilger/ hungry?" -- Dogbert
------------------------------
From: Frank Sweetser <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Dynamically linked library
Date: 12 Jan 1999 14:08:44 -0500
Vlad Doubrov <[EMAIL PROTECTED]> writes:
> Hi All!
>
> Perhaps I'm a stupid dummy in Linux programming, but could anyone tell
> me how I can create dynamically linked libraries, link some of them at
> runtime in my application, use some of their functions, and then unload
> them? And after some time, just change the libraries without recompiling
> the main program. Just like dynamic DLLs in win32... Under Linux, I don't
> even know where to start or what manpage to read in this case :(
>
> PS. I need this to be able to add some plugins into my app later.
man dlopen()
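That is, something like this minimal sketch of the
dlopen()/dlsym()/dlclose() cycle (the plugin path and the plugin_init
entry point are made up for illustration; link the host with -ldl):

  /* plugin_host.c -- minimal dlopen() sketch; cc plugin_host.c -ldl */
  #include <stdio.h>
  #include <dlfcn.h>

  int main(void)
  {
      void *handle;
      int (*plugin_init)(void);   /* assumed plugin entry point */
      const char *err;

      handle = dlopen("./myplugin.so", RTLD_NOW); /* hypothetical plugin */
      if (!handle) {
          fprintf(stderr, "dlopen: %s\n", dlerror());
          return 1;
      }

      dlerror(); /* clear any stale error before dlsym() */
      plugin_init = (int (*)(void)) dlsym(handle, "plugin_init");
      if ((err = dlerror()) != NULL) {
          fprintf(stderr, "dlsym: %s\n", err);
          dlclose(handle);
          return 1;
      }

      plugin_init();   /* call into the library */
      dlclose(handle); /* unload; the .so can now be swapped on disk */
      return 0;
  }

The plugin itself is built as a shared object, e.g.
"gcc -shared -fPIC -o myplugin.so myplugin.c", and can be replaced
later without recompiling the main program.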
--
Frank Sweetser rasmusin at wpi.edu fsweetser at blee.net | PGP key available
paramount.ind.wpi.edu RedHat 5.2 kernel 2.2.0pre5ac1 i586 | at public servers
I know it's weird, but it does make it easier to write poetry in perl. :-)
-- Larry Wall in <[EMAIL PROTECTED]>
------------------------------
From: d s f o x @ c o g s c i . u c s d . e d u (David Fox)
Subject: Re: moving linux to different partition
Date: 12 Jan 1999 22:00:54 -0800
Peter Ross <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] (Daniel R. Grayson) writes:
> >richard hankins <[EMAIL PROTECTED]> writes:
> >> Is it possible to correctly copy my complete linux installation,
> >> currently on one small partition, to another partition?
> >> How do I handle the /dev and /proc directories?
> >>
> >'tar' can handle the whole thing, including the devices in /dev. Tell tar
> >not to cross onto other file systems with the -l ( --one-file-system )
> >option, and it will stay out of /proc. There is no need to copy anything in
> >/proc since it is all put there by the kernel when the /proc file system is
> >mounted. When extracting, tar should even make the mount point for /proc for
> >you, but you can make it yourself with "mkdir /proc".
> >
> >Also remember to use the -p option when extracting at the other end, to
> >prevent your umask from interfering.
> >
> >So, this ought to work:
> >
> > tar cfl - -C / . | tar xfp - -C /newpartition
>
> AFAIK you will also need to update the root option in /etc/lilo.conf to
> point to the new partition. Then rerun lilo. Don't forget to keep a
> bootdisk handy because I imagine that this process could go horribly
> wrong.
Two more things. Run "umask 0" before doing the tar command or some
of your file permissions might change. And update the partition
devices in the new /etc/fstab before you reboot.
--
David Fox http://hci.ucsd.edu/dsf xoF divaD
UCSD HCI Lab baL ICH DSCU
------------------------------
From: [EMAIL PROTECTED] (bill davidsen)
Subject: Re: Linux Sound Engine
Date: 13 Jan 1999 05:29:15 GMT
In article <[EMAIL PROTECTED]>,
Peter Steiner <[EMAIL PROTECTED]> wrote:
| How can I create a device like that? Multiple processes are supposed to
| write to /dev/leaf at the same time while internally the audio mixer
| must handle all connections separately. It looks like a kind of named
| socket to me, but can this be made compatible with the original
| /dev/dsp behaviour? I'd prefer a userspace solution here. Mixing the
| audio data shouldn't be a big problem.
That lovely and very useful SysV feature, the message queue. Actually I
don't know if this would handle the bandwidth; otherwise it certainly
has most of the characteristics you need, N-to-M connections.
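For the flavor of it, a minimal sketch; the key and the message layout
are invented here, and whether the queue can sustain audio bandwidth is
exactly the open question. Each client msgsnd()s fragments tagged with
its own id, and the mixer msgrcv()s them all from one queue:

  /* mq_sketch.c -- SysV message queue: many writers, one mixer */
  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/ipc.h>
  #include <sys/msg.h>

  #define MIX_KEY 0x4d495821   /* arbitrary key; real code would ftok() */

  struct audio_msg {
      long mtype;              /* required field; use client id > 0 */
      char frag[512];          /* one small fragment of audio data */
  };

  int main(void)
  {
      int qid = msgget(MIX_KEY, IPC_CREAT | 0666);
      struct audio_msg m;

      if (qid < 0) { perror("msgget"); return 1; }

      /* a client sends one fragment tagged with its id */
      m.mtype = 1;
      memset(m.frag, 0, sizeof m.frag);
      if (msgsnd(qid, &m, sizeof m.frag, 0) < 0) perror("msgsnd");

      /* the mixer reads any client's fragment (mtype 0 = first queued) */
      if (msgrcv(qid, &m, sizeof m.frag, 0, 0) < 0) perror("msgrcv");
      printf("mixer got a fragment from client %ld\n", m.mtype);
      return 0;
  }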
--
bill davidsen <[EMAIL PROTECTED]> CTO, TMR Associates, Inc
"Too soon we grow old, and too late we grow smart" -Arthur Godfrey
------------------------------
From: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.development.apps,comp.os.linux.x
Subject: Re: disheartened gnome developer
Date: Tue, 12 Jan 1999 18:57:22 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
> Roberto Alsina writes:
> > I was told that was one of the things lawyers had to think a lot about.
> > Basically the right to rerelease has already been granted. So, the
> > successors can't take it away, just like they can't take away my own Qt
> > copy.
>
> Ok. I don't know anything about Norwegian bankruptcy law, but here is a
> scenario for US law:
I am not a lawyer either, much less a Norwegian one.
Then again, the man who wrote that is; the goals of the agreements
were explained to him, and he says the agreements should accomplish
what was explained.
Just as side notes to your bankruptcy remarks:
1) Assets can be and are released in bankruptcies. Think of an asset
   under escrow. That asset will be released on bankruptcy. So, in some
   cases it won't happen as you describe. The lawyer says it won't in
   this case.
2) The judge can't just throw away a contract signed by the company.
   Otherwise, all commercial operations of a bankrupted company would
   be under perennial revision when that company goes under!
Imagine you buy a house from a construction company which has only
built that one house and goes under. Do you expect the judge to
come and say you don't own the house because it has to be used to
pay creditors? I know the analogy is not exact, but I can't think
of a simpler one right now :-)
--
Roberto Alsina (KDE developer, MFCH)
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: Wed, 13 Jan 1999 06:19:24 GMT
Leslie Mikesell wrote:
>
> In article <[EMAIL PROTECTED]>,
> George MacDonald <[EMAIL PROTECTED]> wrote:
> >>
> >> You can detect a network failure pretty quickly and move on
> >> to the next possibility just like detecting the lack of a file.
> >> The only slow failure mode is where you make a DNS request but
> >> you don't actually have a connection to a working server - as
> >> long as this timeout is logged to the console along with what
> >> to do about it, I don't see a problem.
> >>
> >
> >You're assuming I want to plug into the network; try assuming
> >I don't.
>
> Then you have to use local files or prompt the user. That's
> done, probably can't be done much better, and isn't all
> that interesting.
>
Perhaps at the system level, but there is much to do in the
app realm.
For starters:
- Re-organizing where the 40+ .app files in my $HOME go
- A library for developers that can:
  - read config values
  - write config values (handling multiple updates)
  - use a common file format
  - evaluate between user-defined values, system-defined values, ...,
    allowing overrides and values marked final
- Automatically keeping multiple versions of configs
- Allow named sets of configs
- Provide a "safe" set of config values
- Allow storage of config values in a DB if I choose
- Integrate the various system config files to allow them to be
  accessed via one mechanism
- Provide an object-oriented mechanism to allow component config
  settings that are seen across all apps
- Dynamic notification of config changes, should an app request it
- And finally, allow seamless integration with a network-based config
  system that has configurable contexts that support multiple security
  policies (a toy sketch of the override idea follows below)
Hmmm
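To make the override item concrete, a toy sketch; every name in it is
hypothetical, and a real library would read these tables from files or
the network instead of hard-coding them:

  /* toy_cfg.c -- layered config lookup: user overrides system,
   * except where the system entry is marked final. */
  #include <stdio.h>
  #include <string.h>

  struct entry { const char *key, *val; int final; };

  static struct entry sys_cfg[] = {        /* system-defined values */
      { "editor",   "vi",   0 },
      { "net.mode", "open", 1 },           /* policy, marked final */
  };
  static struct entry usr_cfg[] = {        /* user-defined values */
      { "editor",   "emacs",  0 },
      { "net.mode", "closed", 0 },         /* loses: system is final */
  };

  #define NELEM(t) (sizeof(t) / sizeof((t)[0]))

  static const char *find(struct entry *t, int n, const char *key)
  {
      int i;
      for (i = 0; i < n; i++)
          if (strcmp(t[i].key, key) == 0)
              return t[i].val;
      return NULL;
  }

  /* lookup order: final system values, then user, then system */
  static const char *cfg_get(const char *key, const char *dflt)
  {
      const char *v;
      int i;
      for (i = 0; i < (int)NELEM(sys_cfg); i++)
          if (sys_cfg[i].final && strcmp(sys_cfg[i].key, key) == 0)
              return sys_cfg[i].val;
      v = find(usr_cfg, NELEM(usr_cfg), key);
      if (v == NULL)
          v = find(sys_cfg, NELEM(sys_cfg), key);
      return v != NULL ? v : dflt;
  }

  int main(void)
  {
      printf("editor   = %s\n", cfg_get("editor", "?"));   /* emacs */
      printf("net.mode = %s\n", cfg_get("net.mode", "?")); /* open */
      return 0;
  }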
> >> That's the point of having the alternates, but if you have to
> >> explicitly configure the configuration utility, why not spend
> >> your time configuring the application instead?
> >>
> >You should only need to configure the config utility when you
> >are defining new configuration scenarios/contexts/profiles.
>
> Why should you expect the configuration of the configuration
> utility itself to be any less volatile than any of the other
> components using the facility?
Seems obvious to me? Perhaps you are limiting your discussion
to system configuration. I am talking mainly about application
config first, then system config.
>
> >Once done then if you wish/need configure the apps relative
> >to the contexts/profiles.
>
> But why have a new facility if it can't do this for you?
>
Well, in many cases it could, but there are exceptions where
a user may want to tweak it.
> >> >Lets say I
> >> >am a consultant and bring my laptop into a site, perhaps
> >> >to do a demo. I may not need to reconfigure or may not want to.
> >>
> >> If you want an IP address on my network, you have to reconfigure.
> >> If you don't hook to the network then nothing changes.
> >
> >That's your network, I'm not talking about your network. I'm
> >talking about "a" network. Some sites may have "public"
> >plug points with DHCP already configured so guests can
> >plug in. Also some networks may be "open", some "closed"
> >and others somewhere in between. These are policy decisions
> >and have no business being hard coded into whatever solution
> >we offer.
>
> How do you propose to find this information if we don't come
> up with a standard way to ask for it?
I see at least two modalities, i.e. you know where the data
is, or you don't know, so you ask where the data is.
>
> >Forcing config and pre-defining settings are one kind
> >of modality, there are others. We should define a system
> >that allows for what you wish, but that does not mandate it.
>
> Then it either has to ask what is available or you have
> to manually configure it. Manual configuration is already
> there, so what is there to the new facility other than finding
> things available on a network?
Storing things on the network, integrating the various storage
mechanisms behind one common interface.
I think I have already stated that most parts of what I have
been talking about are already done in one form or another.
Obviously there are many solutions that each address part
of the configuration issue. So there really is nothing new
in terms of solution, only that they are integrated into
a cohesive service that is easy for developers and flexible
for end users and administrators. Done properly, it will
enable applications to make greater use of the existing
capabilities that are somewhat fractured and therefore not
highly used.
>
> >You are talking about defining the rules for different
> >classes/objects and how to set the rules. I am talking about
> >how to build the mechanism to support that.
>
> I'm saying you should build a mechanism that will use itself
> to figure out the rules.
Yes, I agree.
>
> >So we want the same thing, except I also want application
> >configuration to be smarter. i.e. I may want to publish
> >my public address book so that I can access it from any
> >where on the Internet.
>
> LDAP already does that.
Yes, but you have to custom-code for LDAP; what do you do then if
you want to use ACAP? Recode the application? That's not what end
users want; they just want to be able to use the services.
--
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live! - [EMAIL PROTECTED] (7th Coding Battalion)
------------------------------
From: Aurel Balmosan <[EMAIL PROTECTED]>
Subject: Re: How to get disk I/O rate?
Date: Tue, 12 Jan 1999 19:14:42 GMT
JiSook Kim <[EMAIL PROTECTED]> wrote:
> Hi!
> I'm writing a program that monitors the disk I/O rate (bps) per disk.
> The system has three hard disks (/dev/hda1, /dev/sda1, /dev/sdb1).
> How do I get the I/O rate per disk?
Check vmstat, which is included with the ps package.
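For example (assuming the procps vmstat):
  vmstat 5
prints a summary line every five seconds; the "bi"/"bo" columns are
blocks read/written across all block devices. Note that it is
system-wide, so truly per-disk rates will take more digging than
vmstat alone provides.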
Bye,
Aurel.
--
================================================================
Aurel Balmosan | [EMAIL PROTECTED], [EMAIL PROTECTED]
http://gaia.owl.de/~aurel/ |
================================================================
------------------------------
From: Emile van Bergen <[EMAIL PROTECTED]>
Subject: Re: virtualizing i386-linux
Date: Tue, 12 Jan 1999 21:27:34 +0100
Michael Brunnbauer wrote:
>
> hi all
>
> i'm playing with the idea to virtualize i386-linux (several kernels running
> simultaneously on one machine). this could be useful in environments where
> security is important, especially firewall configurations. Example:
> distributing 2 services on 2 machines is more secure than running them both on
> one. of course there are other useful applications for a virtualized linux.
>
> my current understanding of the 2.0.x kernel suggests that this can be
> accomplished in an elegant and quite easy way.
>
> current linux (2.0.x):
> every process in i386-linux is a task as supported by x86 processors, and there
> are no other tasks. interrupts and system calls are always handled in the
> current task. the kernel runs with privilege 0, the applications with 3.
>
> virtualized linux:
> the basic idea is to stay with the whole concept of linux and simply add a
> privilege level.
>
> the meta-kernel runs with level 0, the linux-kernel with 1 and the applications
> with 3. all parts of linux that need level 0 must be rewritten to use a
> "system call" of the metakernel (e.g. task switch, direct modification of
> page tables, cli and sti). i hope that these parts will be found in the
> architecture-dependent source and include files only, and not in device drivers.
> the meta-kernel handles all interrupts and calls the handlers of the virtual
> kernels accordingly. no extra tasks for the meta-kernel are needed (same
> concept as normal linux).
>
> the physical memory is divided statically for exclusive use by the virtual
> kernels. a later version could assign physical mem to the kernels dynamically.
> io-ports, hardware interrupts, and segments for memory-mapped io are assigned in
> the same way. only one kernel can use a piece of hardware.
>
> there are 2 possible ways to share hardware.
> -an inter-kernel communications mechanism. special device drivers of one
> kernel could use low or high level device drivers of another that 'owns'
> the hardware. another nice use of a communication mechanism is a
> network-driver to build virtual LANs between the kernels.
> -virtualized drivers with switching in the metakernel (useful for console and
> keyboard)
>
> cli/sti could be implemented by masking all assigned hardware interrupts of the
> calling kernel.
>
> i'm not an expert in kernel development - perhaps i missed something that makes
> virtualizing linux impossible or difficult. i would be glad if some experienced
> developers could comment on this concept.
>
> thnx,
> cu,
> brunni
You're looking at linux with its underbelly replaced by a small
microkernel (or 'metakernel' as you call it). This has already been done
in MkLinux (Linux on Mach). The only difference is that you'd like it to
be a multiserver (more 'kernel' servers running on the microkernel),
whereas MkLinux is a single server... But I guess it'd be a starting
point for such a thing.
Now, if only the 386 could virtualise a 386... oh, that would make
wonderful debuggers, multitaskers (linux, nt, os2 all running cosily
next to each other)... Sadly, in the real world the 386 can only
virtualise an 8086 with 32-bit general registers.
I had some ideas about something like this as well. My concept was:
   -------   -------
   | App |   | App |   ... etc.
   -------   -------
      |         |
   --------------------------------------------------------
   | small microkernel (sched + msg.), let's call it V/OS |
   --------------------------------------------------------
      |1                  |2                 |3   <- uses low level
      |                   |                  |       services from
      |                   |                  |       1, 2 or 3
 ---------------------    |    ----------------------------
 | mkserver for Unix |    |    | mkserver on raw hardware |
 ---------------------    |    ----------------------------
      |                   |                  |
 --------------------     |           --------------
 | traditional Unix |     |           | Bare metal |
 --------------------     |           --------------
                          |
             -------------------------
             | mkserver on mkserver! |
             -------------------------
                          |
                   ad infinitum - 1
(V/OS as in Virtualisable OS, or as in the Dutch Vos = Fox!)
I'm sure there are lots of pitfalls, and maybe it can't even be done on
a [345]86, but looks nice, doesn't it?
MVS was way ahead...
--
M.vr.gr. / Best regards,
Emile van Bergen (preferred e-mail address: [EMAIL PROTECTED])
This e-mail message is 100% electronically degradable and produced
on a GNU/Linux system.
------------------------------
From: Michael Brunnbauer <[EMAIL PROTECTED]>
Subject: Re: virtualizing i386-linux
Date: 12 Jan 1999 19:22:47 GMT
Andi Kleen <[EMAIL PROTECTED]> wrote:
: This means if you run N instances of the kernel you need N SCSI adapters,
: N network cards, N graphic adapters etc. If you require that anyways, why
: not just buy multiple PCs then?
imagine a special network device for a virtual LAN between the kernels, with one
kernel owning all the hardware and the others booting diskless from it.
other device drivers to share a SCSI adaptor or a filesystem are possible.
: on the other hand - is it worth it? PC hardware has become so cheap, much
: cheaper than programmer time.
imagine a separate machine for every network service. no security hole in one
service can affect another one.
tripwire is a program to monitor the filesystem for changes by hackers.
you have to keep it and its database on read-only media, and it will not help
if the hacker changes the kernel. but you could run a separate kernel, not
connected to a network, with tripwire and other monitors (the boot device
should not be accessible to the 'primary' kernel, of course).
cu,
brunni
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************