Linux-Development-Sys Digest #253, Volume #6     Mon, 11 Jan 99 00:14:14 EST

Contents:
  SunOS {4,5} emulation (was: Poor NFS performance from Solaris to Linux) (Paul Vojta)
  Re: CDROM under DOSEMU (Victor Wagner)
  Re: disheartened gnome developer ([EMAIL PROTECTED])
  ( First timer) Kernel/modules BUILD PROBLEMS !!! (mvrao)
  Re: blocksize / file write speed anomaly (Ingo Rohloff)
  Re: Open Configuration Storage - was Registry for Linux (Leslie Mikesell)
  Re: Linux Sound Engine (Andreas Bombe)
  Re: Ram And comp.Fan (Andreas Bombe)
  Re: Open Configuration Storage - was Registry for Linux (Leslie Mikesell)
  Telephony and Linux (Screech)
  Re: Adopting COM? ("Osvaldo Pinali Doederlein")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Paul Vojta)
Subject: SunOS {4,5} emulation (was: Poor NFS performance from Solaris to Linux)
Date: 10 Jan 1999 17:28:40 -0800

In article <77a76p$2vv$[EMAIL PROTECTED]>,
 <[EMAIL PROTECTED]> wrote:
>They claim that SPARC Linux runs SunOS4 aka. Solaris1-binaries, but no
>SunOS5 aka. Solaris2-binaries. Did I miss something ?

Who is ``they''?

SunOS 4 emulation:  earlier sparc kernels supported it (provided you compiled
the kernel with support for a.out binaries); recent kernels have it disabled
with the following comment in linux/include/asm-sparc/namei.h:

/* Until we solve, why SunOS apps sometime crash, disable gnemul support
  for Sun OS */

SunOS 5 emulation:  you need to install the iBCS package.  I've never tried
it myself.

--Paul Vojta, [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Victor Wagner)
Crossposted-To: comp.os.linux.development.apps
Subject: Re: CDROM under DOSEMU
Date: 10 Jan 1999 10:05:12 +0300
Reply-To: [EMAIL PROTECTED]

mvrao ([EMAIL PROTECTED]) wrote:
: How do I get dosemu to recognize my CDROM drive ?

Do you really need to?

Typically it is enough to mount the CDROM under Linux and make its root
directory available as a network drive with lredir.
I have worked with DOSEMU for four years and have never needed anything else.
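For example, the usual sequence is roughly the following (a sketch; the device
name, mount point, and drive letter are examples and vary by system):

```
# On the Linux side:
mount -t iso9660 /dev/cdrom /mnt/cdrom

# Then, inside DOSEMU:
C:\> lredir e: linux\fs/mnt/cdrom
```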

Of course, there are special situations, such as games that check for the
presence of a licensed CDROM. For these rare situations, cdrom.sys is
provided with the DOSEMU distribution.

I have always felt that giving DOSEMU direct access to a piece of hardware
is evil, because it may prevent any Unix program from accessing the same
hardware while DOSEMU is running, and may cause problems if two or more
DOSEMU instances are running simultaneously. That is probably not the case
with a CDROM, because it is read-only.

-- 
========================================================
I have tin news and pine mail...
Victor Wagner @ home       =         [EMAIL PROTECTED] 

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.development.apps,comp.os.linux.x
Subject: Re: disheartened gnome developer
Date: Sun, 10 Jan 1999 22:15:35 GMT

Roberto Alsina writes:
> Not "reportedly".You can see a scan of the legal papers at www.troll.no.
> Dammit, I signed some of those papers myself.

I wrote 'reportedly' to make it clear that I have only indirect knowledge
of the matter.  While I wish KDE and Troll Tech well, I do not follow their
adventures particularly closely.

> No, not GPL. A really free license.

A GPL compatible license?  Good.

> If that happens, the code will be owned by the KDE Free Qt foundation, of
> which I am member, and which is bound by its statutes to immediately
> release the code under a BSD-like license.

And this foundation is properly chartered and incorporated?  Good.  This
might actually work, if you can pry the rights away from Troll Tech's
successors in a timely fashion.

> Yes, that too. And not "it looks", but it will.

It certainly looks likely, but we can't be certain until it happens.

> I found your message a bit insulting, and entirely misinformed.

Don't be so hostile.  My point was that a work still in copyright *always*
has an owner, and that copyright law gives that owner the right to
license the work however it sees fit.  In the unlikely event that Troll
Tech went bust and your foundation acquired the code and then violated its
charter by selling proprietary licenses, those licenses might very well
be considered valid by the courts, even if the board members were all
hauled off to jail for fraud.
-- 
John Hasler
[EMAIL PROTECTED] (John Hasler)
Dancing Horse Hill
Elmwood, WI

------------------------------

From: mvrao <[EMAIL PROTECTED]>
Subject: ( First timer) Kernel/modules BUILD PROBLEMS !!!
Date: Mon, 11 Jan 1999 01:55:05 +0000

I managed to build the kernel and also the modules.

Then I ran make modules_install, and it copied everything to a directory
under /lib/modules named after the version I compiled the kernel as.

I also copied the System.map file to /boot with the proper filename.

Now I do not know where to find the 'module-info' file. Without it, lilo
continues to use the older modules directory (the module-info and
System.map of the previous versions).
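For reference, the usual manual install sequence after building looks
roughly like this (a sketch; paths and the version string are examples, and
as far as I know module-info is generated by Red Hat's kernel packaging
rather than by a vanilla kernel build):

```
make bzImage modules modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.2.1
cp System.map /boot/System.map-2.2.1
# add or update an image= entry in /etc/lilo.conf, then re-run:
lilo
```

Forgetting to re-run lilo after copying in a new kernel image is a common
reason the old kernel (and thus the old modules directory) keeps being used.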

Most of the changes I made were in the modules (I changed some loadable
driver settings), which I cannot see because the kernel, upon booting,
uses the modules from the previous version. In fact, even though I
create the 'preferred' link to point to the new version, it simply
removes the link at boot time.

Please help me here; this is my first kernel build.

TIA


------------------------------

From: Ingo Rohloff <[EMAIL PROTECTED]>
Subject: Re: blocksize / file write speed anomaly
Date: Sun, 10 Jan 1999 21:12:57 GMT

Jerry Dinardo <[EMAIL PROTECTED]> wrote:

> 1. write 25000 2KB blocks sequentially.
> 2. fsync
> 3. close
> 4. open output
> 5. write block 0,2,4,6 ...24998
> 6. write block 0,2,4,6 ...24999

(Which was later corrected to 1,3,5,7,...24999)

> 7. fsync
> 8. close

> the program takes 10 seconds to do the sequential writes.
> It then takes 259 seconds to do the random writes.

By the way, I just thought about this more deeply...

The problem is that this isn't a "random" write; it is the worst
case possible...

A cache makes sense because of locality of access: a "normal" program
usually accesses the same area of the disk over and over again.
Your program does exactly the opposite (it is not random): it first
writes linearly to the disk, which is the exact opposite of a
localized access.

That first linear pass fills the buffer cache up with the file
(because it's a write-back cache).

In the second read/write section the following happens (at least
that's what I think):

First you write the blocks 0, 2, ..., 24998. Near the end of these
writes, the kernel has to evict some of the other blocks from the
cache to make room for the last ones (24000 and up). Unfortunately,
the blocks that get evicted are 1, 3, 5, 7, ..., because they are the
oldest ones.

When you then start to write 1, 3, 5, ..., the blocks 0, 2, 4, ...
are evicted from the cache in turn. The disk access pattern this
produces must be very bad indeed...
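The eviction pattern described above can be illustrated with a toy LRU
buffer-cache simulation (the block counts and cache capacities are made up
for illustration; the real kernel cache is considerably more complicated):

```python
from collections import OrderedDict

def second_phase_misses(capacity, n_blocks):
    """Simulate an LRU write-back cache; return the number of cache
    misses in the evens-then-odds rewrite phase."""
    cache = OrderedDict()
    misses = 0

    def access(block):
        nonlocal misses
        if block in cache:
            cache.move_to_end(block)       # hit: mark most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the oldest block
            cache[block] = None

    for b in range(n_blocks):              # phase 1: sequential writes
        access(b)
    misses = 0                             # count only the second phase
    for b in range(0, n_blocks, 2):        # phase 2: even blocks...
        access(b)
    for b in range(1, n_blocks, 2):        # ...then odd blocks
        access(b)
    return misses

small = second_phase_misses(capacity=2000, n_blocks=2500)  # cache < file
big = second_phase_misses(capacity=3000, n_blocks=2500)    # cache >= file
```

With a cache big enough for the whole file, the second phase never misses;
with a slightly-too-small cache, the evens-then-odds pattern keeps evicting
exactly the blocks that are about to be needed, so it misses constantly.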


I figure that things get better if you use 4K blocks, because then a
block corresponds to exactly one memory page (which is also 4K), so
perhaps the virtual memory code can handle things more gracefully...

To be honest, I'm not really sure about this explanation, but it's the
only one I can think of.

so long
  Ingo

-- 
Please remove .PLOPP at the end of my mail address if you want to reach
me. I'm sorry for the inconvenience, but I'm fed up with spammers.

------------------------------

From: [EMAIL PROTECTED] (Leslie Mikesell)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: 10 Jan 1999 20:36:00 -0600

In article <[EMAIL PROTECTED]>,
George MacDonald  <[EMAIL PROTECTED]> wrote:
>> 
>> hmm... that actually makes a lot of sense, assigning a priority or a weight
>> to each setting.  there would have to be some restrictions, though - the
>> system config should be able to unconditionally override a user's
>> settings.  still i like the idea =)
>
>Actually this is quite interesting. As frank says we definately need a way
>for administrators to *lock* values by marking them final and then also
>not allowing the evaluation methods or "store path" to be overriden. 

I think you also want a way for any network-available value to act
either as a fall-back default, used only where the local value hasn't
been configured at all, or, in the opposite case, for the local copy
to be used only if no one has configured the setting at a more global
level.

>We also want to allow the opposite(in some cases), i.e. allow the user
>to override these(e.g. A Trusted user). I had only envisage one evaluation
>mechanism, but the priority evaluation suggest there may be others.

I'd expect some sort of hierarchy where you don't know the number of
levels ahead of time.  That is, you might have a laptop that runs
off-network sometimes, on a home network with one other host sometimes,
and on the office network where there are department, division, and
enterprise defaults for different values.  Having a wide value range
with space available on both sides of the initial defaults should let
things fall into their assigned places.
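A layered lookup of the kind discussed here can be sketched roughly as
follows (the layer names, keys, and modes are all invented for
illustration; "mandatory" plays the role of the *locked* values and
"default" the role of the fall-back values mentioned above):

```python
# Layers ordered from most global (enterprise) to most local (user).
#   "mandatory": locks the value against more-local overrides.
#   "default":   used only if no other layer sets the key.
#   "normal":    the most local normal setting wins.
LAYERS = [
    ("enterprise", {"proxy": ("proxy.corp:3128", "mandatory"),
                    "editor": ("vi", "default")}),
    ("department", {"editor": ("emacs", "normal")}),
    ("user",       {"editor": ("nano", "normal"),
                    "proxy": ("localhost:8080", "normal")}),
]

def resolve(key, layers=LAYERS):
    """Walk the layers from global to local and pick the winning value."""
    chosen = None
    for name, settings in layers:
        if key not in settings:
            continue
        value, mode = settings[key]
        if mode == "mandatory":
            return value                  # locked: ignore local overrides
        if mode == "default" and chosen is not None:
            continue                      # something else already set it
        chosen = value                    # "normal": most local wins
    return chosen
```

Because the walk never assumes a fixed number of layers, the same code
handles a laptop that is off-network (one layer) or on an enterprise
network (many layers).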

>So perhaps making the evaluation mechanisms definable externally and
>then configuring them? 

You will need a way to assign the priorities to the existing
mechanisms (files, etc.) that do not supply them directly.  New
mechanisms can be built to supply their own along with the value
(like asking for a DNS MX record).  There is a certain intellectual
purity in using the same mechanism internally as you present
externally.  I'd recommend assigning a default priority to values
obtained by each method and get this setting just like all the
others.  Note that if you allow local files to be used at all
you probably can't prevent local administrators from overriding
the network-mandated defaults and I'm not sure how much effort
should go into even trying.

  Les Mikesell
   [EMAIL PROTECTED]

------------------------------

From: Andreas Bombe <[EMAIL PROTECTED]>
Subject: Re: Linux Sound Engine
Date: 10 Jan 1999 19:03:27 +0100

Ross Vandegrift <[EMAIL PROTECTED]> wrote:
> The idea sprang after downloading esd, the Enlightened SounDaemon.  Esd
> runs in the background and any esd aware apps open esd, not the sound
> device.  The problem here is that this ties up the sound device
> completely.  So, I was thinking, why not make a real-time mixer
> pipeline?  It could replace the current kernel interface to provide
> transparent support for existing applications, and simply run on top of
> the OSS Modules.  This engine would have a number of benefits that I can
> see:
>       1) Sound becomes non-exclusive - an arbitrary number of apps can now
> play sound through one device.
>       2) Real-time mixing - these sounds can be given an `Alpha channel'',
> and mixed before sending it to the sound card to create effects.

Actually, this is what esd is about: any number of apps can play sound
through esd, which mixes the streams in real time and outputs them to
the dsp device.  esd also comes with a script that preloads a library
to make any program esd-aware (though this doesn't always work).

Apart from the other goodies (like sound over the network), it even
seems to include some filter functionality that could probably be used
to post-process the audio stream to create effects (that's just a guess
from looking at the sources; I'm not sure about it).
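The real-time mixing idea, with a per-stream gain playing the role of the
"alpha channel" mentioned above, is conceptually simple.  A toy sketch over
signed 16-bit samples might look like this (none of this is esd's actual
code; the sample values are invented):

```python
def mix(streams, gains):
    """Mix equal-length streams of signed 16-bit samples, scaling each
    stream by its gain and clamping the sum to the 16-bit range."""
    out = []
    for samples in zip(*streams):
        total = sum(int(s * g) for s, g in zip(samples, gains))
        out.append(max(-32768, min(32767, total)))  # clip, don't wrap
    return out

a = [1000, -2000, 30000]
b = [500, 500, 30000]
mixed = mix([a, b], gains=[1.0, 0.5])  # b mixed in at half volume
```

The clamping step matters: letting the sum wrap around instead of clipping
produces loud, ugly distortion.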

-- 
Andreas E. Bombe <[EMAIL PROTECTED]>
http://home.pages.de/~andreas.bombe/

PGP 1024bit ID:0x886663c9 fingerprint: 30EC0973847B5583 C47A91D99DC54BB0
    2048bit ID:0xF62D5CC1 fingerprint: 136BBC1536B8B77A 200558E86FAAF8ED

------------------------------

From: Andreas Bombe <[EMAIL PROTECTED]>
Subject: Re: Ram And comp.Fan
Date: 10 Jan 1999 19:07:42 +0100

User470357 <[EMAIL PROTECTED]> wrote:

>    I would like to know if linux becomes much faster with more ram or does it
> have a little effect. Say I had 16 mb and upgrade to 144 mb would I see a great
> diffrence in speed in command line and in x windows.

If you're running X or any memory-intensive apps, you will almost
certainly see speed improvements.  Any memory you're not using will
automatically be used by Linux for disk buffers, so anything on disk
that was accessed recently will still be in memory and can be loaded
from there instead of from disk.

-- 
Andreas E. Bombe <[EMAIL PROTECTED]>
http://home.pages.de/~andreas.bombe/

PGP 1024bit ID:0x886663c9 fingerprint: 30EC0973847B5583 C47A91D99DC54BB0
    2048bit ID:0xF62D5CC1 fingerprint: 136BBC1536B8B77A 200558E86FAAF8ED

------------------------------

From: [EMAIL PROTECTED] (Leslie Mikesell)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: 10 Jan 1999 21:07:35 -0600

In article <[EMAIL PROTECTED]>,
George MacDonald  <[EMAIL PROTECTED]> wrote:
>> 
>> Someone has to care, since someone has to configure it.   The last
>> thing I want is yet-another-protocol service to manage in the name
>> of making things simpler.  If this needs more than ordinary files
>> and possibly NFS, can it use some existing protocol like http or
>> LDAP?
>
>There is a protocol ACAP, see RFC 2244 which is designed to addess
>the networking aspect of application configuration. But it should
>be hidden behind a common interface, so that normal flat files
>can be used if desired. You should visit 
>
>    http://andrew2.andrew.cmu.edu/cyrus/acap/
>
>They have one doc that does a comparison of the various protocols
>and why they created a new one.

I agree that the other existing protocols don't do quite what we
need, but on the other hand I don't see everyone jumping to install
ACAP.  It is just another thing to set up and maintain and it doesn't
replace the need for LDAP.

>My approach is to build an
>architecture that can use any of these, perhaps with some limitations.

>Decisions as to what level of security, performance, extra features
>should be left to site administrators as much as possible. Admin
>policy will have a significant bearing on these decisions. At the
>simplest level one needs a 
>
>       get - i.e. read, select, ...
>       
>and to store some kind of
>
>       set - i.e. write, update, ...
>
>and perhaps some kind of
>
>       start transaction - open, ...
>       end transaction - close, ..
>
>The goal is to keep it as simple as posssible, so that it can map
>onto each of the underlying mechanisms.

How do you propose to map transactions onto something that doesn't
supply a transaction mechanism?  Suppose you want to update
values that are independent in LDAP, but the set of updates must
be held together when you retrieve the values as a configuration.
How can you prevent a machine from getting a mix of old and
new values (a potential disaster) while the update is in progress?

>Impossible to do all this? Absolutely not! Difficult?
>Of course, making complex things simple always is.

Yes, it really is impossible to do this perfectly.  However, even
with config files, Unix is already pretty sloppy: the only
transactioning mechanism is the number of people who know the
root password. You might be able to make it 'good enough'.
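For plain files, the usual 'good enough' trick is to write the new
configuration to a temporary file and rename it into place, so a reader
sees either the complete old version or the complete new one, never a
partial write (a sketch, not a full transaction mechanism; it does nothing
for updates spanning several files or servers):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace path with data so readers never see a partial update.
    rename() within one filesystem is atomic on POSIX systems."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)  # note: created with mode 0600
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the data hits the disk first
        os.rename(tmp, path)      # the atomic switch-over
    except Exception:
        os.unlink(tmp)            # don't leave the temp file behind
        raise
```

The temporary file must live in the same directory (same filesystem) as the
target, or the rename stops being atomic.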

>Don't think I can do it? Have a look at treeps, 

I wouldn't question your capabilities.  You are just attempting something
that needs transactioning and it can't be done right unless the
back end supports it - or you are willing to accept occasional errors.
Perhaps you should start with postgresql (which supports transactions
and has an open license) as the preferred network store.  Even then
you will have problems if you allow settings stored at different
levels in the hierarchy to be used if they come from different databases
or files.

  Les Mikesell
    [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Screech)
Crossposted-To: comp.os.linux.development
Subject: Telephony and Linux
Date: Mon, 11 Jan 1999 04:09:39 GMT

I am looking for a library for telephony under Linux.  In particular,
I need something that will allow me to utilize caller ID.  Is there
such a beast?  TIA...



------------------------------

From: "Osvaldo Pinali Doederlein" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: Adopting COM?
Date: Mon, 11 Jan 1999 00:33:27 +0100

Christopher B. Browne <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>On the other hand, it is likely that one or another of the ORBs will
>become a "preferable option."
>Applications tend to get deployed for a specific ORB; the client and
>server code *is* ORB-specific to at least some extent.  As a result,
>GNOME apps will almost certainly be ORBit-specific.  They may request
>services that are being provided via another ORB; nothing stops you
>from installing multiple ORBs.
>At this point, *all* the ORBs for Linux are too immature to be treated
>as critical system facilities for anything that's critical.

I was thinking about that.  Maybe all you need is a standard procedure so
that any ORB can be plugged into the OS.  People who buy big Linux
distributions can also get all the free ORBs included, since there's
plenty of space on CD-ROM distributions and ORBs are not that big.  The
people doing the kernel could set up the rules for implementing things
like CORBA Services and helper daemons (such as interface repositories or
location/activation daemons) as modules, and provide standard APIs or
admin tools for the basic management of each piece (these could be just
wrappers around the different admin interfaces of each ORB).  The idea is
to add to the OS some smart standards, scripts and wrapper libraries that
make all ORBs look the same to the user, to the administrator, and as much
as possible to the developer.  This is almost as good as having a single
standardized ORB.

It's surely a big advantage of CORBA that you have a choice of ORBs, as
well as of language and platform, and it's not nice to select one ORB as
preferred and make competition difficult for the people producing all the
other ORBs for Linux.  BUT, as much as I hate to admit it (as somebody who
thinks CORBA is better than COM), and as Linux users will hate to admit
too (as people who think Unix is better than Windows), the inclusion of
the COM ORB in Windows NT is a terrific advantage in both technical and
marketing terms.  It's a big technical advantage because of the added
performance, ease of administration, and system integration (the OS itself
uses it extensively and exposes more functionality through ORB interfaces
every day).  It's a big marketing advantage because the ORB is "always
there", so there are no deployment complications; its use implies no
additional cost (or admin work, in the free-software world); and its use
by the OS to expose important features forces developers to learn it and
write applications to it, which helps move the system toward a more
up-to-date reality.

I see that an OS like Linux has great potential to become a _real_
revolution.  But this is not going to happen if developers insist on using
technologies, APIs and standards from the seventies, no matter how open
and how bug-free/efficient they are on Linux.  It will only happen if you
are bold enough to embrace today's and tomorrow's big opportunities (my
best bets are CORBA and Java) and support them much better than the
alternatives.  I can imagine an architecture where the OS includes an ORB
and depends on it; a specific ORB is selected by the distribution (or by
the user at installation time), but you can plug in another ORB, and even
have several ORBs installed on the same system with only one of them
selected as the "system ORB" and used by the system facilities that need
one.  Of course, in addition to a very good implementation of normal CORBA
interoperability, the ORBs would carry the burden of additional standard
interfacing to the OS, but I wouldn't expect that to be a major issue.
BTW, if you look around, there are commercial Unices beginning to deliver
included CORBA ORBs.






------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
