Linux-Development-Sys Digest #257, Volume #6     Mon, 11 Jan 99 17:14:23 EST

Contents:
  Re: i960 RP/RD driver ("jay")
  phentsize error? (Shane Owenby)
  Re: GUI, The Next Generation (Doug DeJulio)
  Re: disheartened gnome developer (Marco Anglesio)
  Re: blocksize / file write speed anomaly (Peter Steiner)
  Rebuilding kernel help ("Tim Underwood")
  Re: blocksize / file write speed anomaly (jerry)
  Re: Open Configuration Storage - was Registry for Linux (George MacDonald)
  Re: GUI, The Next Generation (Anselm Lingnau)
  Re: Open Configuration Storage - was Registry for Linux (Leslie Mikesell)
  Re: Open Configuration Storage - was Registry for Linux (Leslie Mikesell)
  2.2.0pre5/6 IDE disk geometry detection problem (Marko Siladin)
  Re: Open Configuration Storage - was Registry for Linux (George MacDonald)
  Help about wish and tcl commands ! (YANG Tong)

----------------------------------------------------------------------------

From: "jay" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.hardware
Subject: Re: i960 RP/RD driver
Date: Mon, 11 Jan 1999 19:37:09 GMT

Doesn't the Intel i960 still use the 82557/8 chipset?  I haven't tried it
yet, but I think it would be worth a try.

jay

[EMAIL PROTECTED] wrote in message <77b3qu$140$[EMAIL PROTECTED]>...
>I was reading that Alan Cox (http://www.linux.org.uk/diary) was using the
>Linux Qlogic ISP2100 driver as the basis for his i960-based card.
>
>The driver can be found at:
>
>http://www.iol.unh.edu/consortiums/fc/fc_linux.html
>ftp://gfs.lcse.umn.edu/pub/GFS/FC-drivers
>
>I am not sure if this helps, but hopefully it is what you are looking for.
>
>Martin Giguere
>
>  Neal Richter <[EMAIL PROTECTED]> wrote:
>> Hello,
>>     Does anyone have any source out there for a device driver for any
>> kind of  i960 RP/RD based PCI board?
>>
>>     I would appreciate any info anyone can provide!
>>
>>     Thanks
>>
>> --
>> ____________________________________
>> Neal Richter
>> Software Engineer
>> Salt Lake Digital Imaging
>> 124 South 600 West
>> Logan, UT  84321
>> 435-787-2803
>> 435-787-2810 FAX
>>
>> [EMAIL PROTECTED]
>>
>>
>
>-----------== Posted via Deja News, The Discussion Network ==----------
>http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own



------------------------------

From: [EMAIL PROTECTED] (Shane Owenby)
Subject: phentsize error?
Date: 11 Jan 1999 20:15:31 GMT
Reply-To: [EMAIL PROTECTED]

I am trying to build an Apache module that gives this error
when loading:

(redhat) /shane/one1/bin% ./apachectl start
Syntax error on line 219 of /shane/one1/conf/httpd.conf:
Cannot load /shane/one1/libexec/mod_ibm_mt.so into server: 
/shane/one1/libexec/mod_ibm_mt.so: ELF file's phentsize not the expected size
./apachectl start: httpd could not be started
Line 219 is just the LoadModule line, so it's not really a syntax error.

I built both Apache and this module on the same machine
using gcc-2.7.2.3-14

Does anyone have any ideas as to why this might be happening?

(redhat) ~/apache-1.3/src/modules/extra% uname -a
Linux redhat.raleigh.ibm.com 2.0.36 #1 Tue Oct 13 22:17:11 EDT 1998 i686 unknown

PS: I know there is not a problem with the module itself, because if I build
it into the Apache core (statically) it works like a charm. Also, the Apache
code that gives this error is only doing a dlopen() on the library... so I am
at a loss as to the cause of the error.
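
In case it helps anyone reproduce this, the header field the loader is
complaining about can be read directly with a few lines of C (illustrative
only; it assumes a 32-bit ELF object, which is what gcc produces on this
i686 box):

#include <elf.h>
#include <stdio.h>

/* Print the e_phentsize field of an ELF file's header. */
int main(int argc, char **argv)
{
    FILE *f;
    Elf32_Ehdr eh;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f || fread(&eh, sizeof(eh), 1, f) != 1) {
        perror(argv[1]);
        return 1;
    }
    /* A normal 32-bit shared object should report
     * e_phentsize == sizeof(Elf32_Phdr), i.e. 32. */
    printf("e_phentsize = %u (expected %u)\n",
           (unsigned) eh.e_phentsize, (unsigned) sizeof(Elf32_Phdr));
    fclose(f);
    return 0;
}

Running that against mod_ibm_mt.so (and against the httpd that does work)
should show whether the .so's program header entry size really is off, or
whether dlopen() is being handed something it just doesn't like.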

Thanks 
Shane

------------------------------

From: [EMAIL PROTECTED] (Doug DeJulio)
Subject: Re: GUI, The Next Generation
Date: 11 Jan 1999 15:21:13 -0500

In article <[EMAIL PROTECTED]>,
Derek B. Noonburg <[EMAIL PROTECTED]> wrote:
>I have no idea if the source for wm or wmc is available.

I believe it is.  When I worked for the AUIS consortium, I remember
lobbying until I secured permission to distribute it.

I don't believe anyone has ever gotten it working under Linux.  I
don't think it's worked on anything more than a 1-bit-per-pixel display since
the very early days.

The big thing *I* liked about it was that its network protocol was so
lean.  It ran acceptably over 9600bps SLIP connections, back in the
days before VJ header compression existed (I was using a network of
Sun2 workstations at home in those days, so this was noticeable and
appreciated).
-- 
Doug DeJulio      | mailto:[EMAIL PROTECTED]
HKS, Incorporated | http://www.hks.net/~ddj/

------------------------------

From: Marco Anglesio <[EMAIL PROTECTED]>
Subject: Re: disheartened gnome developer
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.development.apps,comp.os.linux.x
Date: Mon, 11 Jan 1999 16:11:50 GMT

In comp.os.linux.development.system Luke Scharf <[EMAIL PROTECTED]> wrote:
> I've never read Marx, and I may be biased by my own philosophy.  I got
> the impression that in *IDEAL* communism, everyone works for the common
> good, and the people who work harder get the respect of the community. 

Not merely that; the concept of the Stakhanovite is alive and well in the
open source movement. The Stakhanovite is the worker who works so hard and
diligently as to inspire other workers to work harder. Linus, or Alan Cox,
or even RMS would be Stakhanovites. Or some of the more educated posters
on c.o.l.development.system. 

> Those who are lazy get no respect.  I.E. it's still capitalism with the
> currency being time & effort in one direction and esteem in the other.

Respect may be imparted as an abstract kind of currency, perhaps, but you
can't use respect to get other people to work for you and turn a profit off
their labour. In the Marxist sense respect is not currency: it is not a
medium of exchange but is imparted by consent of the masses, it cannot be
quantitatively measured, and it cannot be exchanged for anything.
Exchangeability is the essence of currency.

That said, Marx is not *that* difficult or long a read, and is probably as
essential as any other work to political consciousness of any flavour.

marco

--
Marco Anglesio                                    Like Captain Idiot 
mpa at the-wire dot com                 in Astounding Science comics
http://www.the-wire.com/~mpa              (The Manchurian Candidate)



------------------------------

From: [EMAIL PROTECTED] (Peter Steiner)
Subject: Re: blocksize / file write speed anomaly
Date: 11 Jan 1999 17:11:58 +0100
Reply-To: [EMAIL PROTECTED]

I wrote:

>This program makes my computer sleep 2 times for 100 seconds each.

The processes block on disk I/O.

All the computer is doing is writing dirty buffers to disk. I checked
with SysRq-M:

Start:

Time         buffers used (last) locked  dirty
46:54  CLEAN:  4366 , 59  (2134),   0  ,     0
46:54 LOCKED:  1664 , 41  (1627),   0  ,     0

Now write dirty buffers:

48:13  CLEAN: 23699 , 55  (23597),  1  ,     0
48:13 LOCKED:   156 , 49  (   65), 92  ,     0
48:13  DIRTY: 17633 ,  2  (    2),  1  , 17633

48:17  CLEAN: 24261 , 55  (24237),  1  ,     0  (stays const.)
48:17 LOCKED:   389 , 49  (   65), 92  ,     0
48:17  DIRTY: 16838 ,  3  (    3),  2  , 16838

48:19 LOCKED:   873 , 49  (   65), 92  ,     0
48:19  DIRTY: 16354 ,  3  (    3),  2  , 16354

48:21 LOCKED:  1517 , 49  (   65), 92  ,     0
48:21  DIRTY: 15710 ,  3  (    3),  2  , 15710

48:23 LOCKED:  1873 , 49  (   65), 92  ,     0
48:23  DIRTY: 15354 ,  3  (    3),  2  , 15354

...

48:45 LOCKED:  6998 , 49  ( 3118), 91  ,     0
48:45  DIRTY: 10229 ,  2  (    2),  1  , 10229

48:46 LOCKED:  7166 , 49  ( 3118), 91  ,     0
48:46  DIRTY: 10061 ,  2  (    2),  1  , 10061

...

49:09 LOCKED: 12466 , 49  ( 3118), 91  ,     0
49:09  DIRTY:  4761 ,  2  (    2),  1  ,  4761

49:14 LOCKED: 13782 , 49  ( 3118), 91  ,     0
49:14  DIRTY:  3445 ,  2  (    2),  1  ,  3445

49:27 LOCKED: 16792 , 49  ( 3118), 91  ,     0
49:27  DIRTY:   435 ,  2  (    2),  1  ,   435

It seems DIRTY buffers are converted into LOCKED buffers (written
back?) and they stay LOCKED until all DIRTY buffers are written. Then
all LOCKED buffers are unlocked immediately ( <1 sec).

I don't understand this: What's the difference between LOCKED buffers
and buffers that are "locked" (the last but one column)? Anyway...

The problem occurs even with a small number of records:

#define BIGLOOP 5000

write_linux running ...
  Duration 1 seconds
rwrite_linux running ...
2500 records written
  Duration 22 seconds    (That's almost all due to the fsync().)
2500 records written
  Duration 22 seconds

Now I comment out the fsync()'s.

write_linux running ...
  Duration 2 seconds
rwrite_linux running ...
2500 records written
  Duration 0 seconds
2500 records written
  Duration 0 seconds

Looks good... However, 30 seconds later the computer sleeps for 40 seconds
(yes, 40 seconds; these "interleaved" writes are not combined)
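
For reference, the rewrite pass I'm timing looks roughly like this (a
minimal sketch, not the exact benchmark; BIGLOOP, RECSIZE and the file
name are illustrative, and the sequential pass that creates the file is
left out):

#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BIGLOOP 5000
#define RECSIZE 1024

int main(void)
{
    char rec[RECSIZE];
    int fd = open("testfile", O_WRONLY);   /* file written earlier */
    int i, pass;

    if (fd < 0) {
        perror("testfile");
        return 1;
    }
    memset(rec, 'x', sizeof(rec));
    for (pass = 0; pass < 2; pass++) {     /* even records, then odd */
        for (i = pass; i < BIGLOOP; i += 2) {
            lseek(fd, (off_t)i * RECSIZE, SEEK_SET);
            if (write(fd, rec, RECSIZE) != RECSIZE) {
                perror("write");
                return 1;
            }
        }
        fsync(fd);    /* commenting this out gives the "0 seconds" runs */
        printf("%d records written\n", BIGLOOP / 2);
    }
    close(fd);
    return 0;
}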

Ciao,

Peter
-- 
   _   x    ___
  / \_/_\_ /,--'  [EMAIL PROTECTED] (Peter Steiner)
  \/>'~~~~//
    \_____/   signature V0.2 alpha

------------------------------

From: "Tim Underwood" <[EMAIL PROTECTED]>
Subject: Rebuilding kernel help
Date: Mon, 11 Jan 1999 16:15:45 GMT

I am using RH 5.2 "Official", installed as WS from the CDROM.  I am simply
trying to get "dial-on-demand" working.  That feature is not supported in
the default installation.  Anytime I try to run pppd, I get something like
the following:

"dial on demand is not supported in kernel version 2.2.0"

From the sketchy docs I have, I seem to understand that the kernel was
compiled with ppp 2.2.0.  I need to rebuild the kernel with ppp 2.3.5.

I downloaded the .rpms for the 2.0.36 kernel source and headers (DL'd from
Red Hat's site) and installed both with the --replacefiles option.
I then installed the ppp 2.3.5 source rpm from Red Hat's source CD.

I looked over the docs for rebuilding the kernel, and the docs in ppp.

1.  Ran the ./configure in ppp
2.  Ran 'make install'  - installed new headers, etc. in kernel source
3.  Ran 'make dep' - no errors,  a few warnings
4.  Ran 'make modules' - ran OK, then got errors in ppp.c - too few args for
a macro - don't remember the name.

What am I doing wrong?  There were several other ppp files in the rpm (a
couple of patch files, etc.) that I didn't mess with.  No docs were
"unrpm'ed" for the ppp source rpm.  I don't know what these patch files are
for, whether I should use them, or how to get them installed properly.

So, is there an easier way to get dial-on-demand, or can someone help me to
get the kernel recompiled with ppp 2.3.5?

Thanx.



------------------------------

From: jerry <[EMAIL PROTECTED]>
Subject: Re: blocksize / file write speed anomaly
Date: Mon, 11 Jan 1999 11:35:29 -0500

Then why does it go so fast (14 times faster) when you write blocks
0,2,4 ... out as a NEW file instead of writing to an existing file?

Ingo Rohloff wrote:
> By the way: I just thought about this more deeply...

> The problem is that this isn't a "random" write, but
> the worst case possible...
>
> A cache makes sense, because of the idea of localized
> accesses, which means that a "normal" program usually
> accesses the same area of the disk over and over again.
>
> Your program is completely contrary (not random).
>
> It first writes linearly to the disk (which is exactly the
> opposite of a localized access).
>
> First you write all blocks linearly to disk, which means
> that the buffer cache is filled up with the file.
> (Because it's a write back cache.)
>
> In the second rw section the following happens (at least
> that's what I think) :
>
> First you start to write all blocks from 0,2,... 24999.
> Near the end of these writes the kernel has to throw some
> of the other blocks out of the cache to create space
> for the last blocks (24000...). Unfortunately, the blocks which
> will be thrown out are (1,3,5,7 ...), because they are the oldest
> ones.
>
> When you then start to write (1,3,5,...) the blocks 0,2,4,...
> are thrown out of the cache. The access characteristics for
> such an action must be very bad indeed...
>
> I figure that things get better if you use 4K blocks, because
> then a block corresponds to exactly one memory page (which is
> also 4K), so perhaps the virtual memory code is able to
> handle things more gracefully....
>
> To be honest, I'm not really sure about what I explained, but
> it's the only explanation I can think of.
>
> so long
>   Ingo
>
> --
> Please remove .PLOPP at the end of the mail address if you want to reach
> me. I'm sorry for the inconvenience, but I'm fed up with spammers.




------------------------------

From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: Mon, 11 Jan 1999 21:09:03 GMT

Leslie Mikesell wrote:
> 
> In article <[EMAIL PROTECTED]>,
> George MacDonald  <[EMAIL PROTECTED]> wrote:
> >>
> >> Someone has to care, since someone has to configure it.   The last
> >> thing I want is yet-another-protocol service to manage in the name
> >> of making things simpler.  If this needs more than ordinary files
> >> and possibly NFS, can it use some existing protocol like http or
> >> LDAP?
> >
> >There is a protocol, ACAP (see RFC 2244), which is designed to address
> >the networking aspect of application configuration. But it should
> >be hidden behind a common interface, so that normal flat files
> >can be used if desired. You should visit
> >
> >    http://andrew2.andrew.cmu.edu/cyrus/acap/
> >
> >They have one doc that does a comparison of the various protocols
> >and why they created a new one.
> 
> I agree that the other existing protocols don't do quite what we
> need, but on the other hand I don't see everyone jumping to install
> ACAP.  It is just another thing to set up and maintain and it doesn't
> replace the need for LDAP.
> 
> >My approach is to build an
> >architecture that can use any of these, perhaps with some limitations.
> 
> >Decisions as to what level of security, performance, extra features
> >should be left to site administrators as much as possible. Admin
> >policy will have a significant bearing on these decisions. At the
> >simplest level one needs a
> >
> >       get - i.e. read, select, ...
> >
> >and to store some kind of
> >
> >       set - i.e. write, update, ...
> >
> >and perhaps some kind of
> >
> >       start transaction - open, ...
> >       end transaction - close, ..
> >
> >The goal is to keep it as simple as possible, so that it can map
> >onto each of the underlying mechanisms.
> 
> How do you propose to map transactions onto something that doesn't
> supply a transaction mechanism?  Suppose you want to update
> values that are independent in LDAP but the set of updates must
> be held together when you retrieve the values as a configuration?
> How can you prevent a machine from getting a mix of old and
> new values (a potential disaster) while the update is in progress?

Good question! See below.

> 
> >Impossible to do all this? Absolutely not! Difficult?
> >Of course, making complex things simple always is.
> 
> Yes, it really is impossible to do it perfectly.  However, even
> with config files unix is pretty sloppy already with the only
> transactioning mechanism being the number of people who know the
> root password. You might be able to make it 'good enough'.
> 
> >Don't think I can do it? Have a look at treeps,
> 
> I wouldn't question your capabilities.  You are just attempting something
> that needs transactioning and it can't be done right unless the
> back end supports it - or you are willing to accept occasional errors.
> Perhaps you should start with postgresql (which supports transactions
> and has an open license) as the preferred network store.  Even then
> you will have problems if you allow settings stored at different
> levels in the hierarchy to be used if they come from different databases
> or files.

There are a couple of different ways to go about this. The first is to
simply say that the underlying mechanism's limitations flow through, i.e.
if it's possible to corrupt an LDAP directory via simultaneous updates
from different locations on the same shared data, then that is a
limitation of that storage/access "module". The same can be said for
flat files accessed by root: it is possible for two root users to be
modifying a set of files and end up corrupting them. Yet it is a
practical solution in many situations.

Just yesterday I was running an application that modified application
config files in my home directory. The program documentation said that
running two copies of the program could corrupt my files. I was
aware of that, ran two copies of the program, and only updated the
config files from one of them. My point is that a solution in
one situation may not be practical in another, so the first
approach is to allow the config service to be configured to the desired
level of service. Thus, if you want the RDBMS features, then configure the
service that way. The thing we have to do in this case is ensure the API
carries sufficient calls to let the underlying mechanisms work, e.g. an
opStore_close() might do the update->commit or rollback.
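
To make that concrete, the wrapper interface could look something like the
sketch below. The names are hypothetical (only opStore_close was mentioned
above); each storage/access "module" - flat file, LDAP, ACAP, RDBMS - would
supply its own implementation behind it, and the transactional guarantees
are whatever that module can deliver:

/* Hypothetical opStore wrapper API - an interface sketch, not a spec. */
typedef struct opStore opStore;        /* opaque per-backend handle */

/* open  ~ "start transaction" where the backend has one */
opStore *opStore_open(const char *app, int flags);

/* get   ~ read / select */
int opStore_get(opStore *s, const char *key, char *value, unsigned int len);

/* set   ~ write / update (may only be staged until close) */
int opStore_set(opStore *s, const char *key, const char *value);

/* close ~ "end transaction": commit or roll back where the backend can,
 * plain flush-and-unlock where it can't */
int opStore_close(opStore *s, int commit);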

Now the second way is to build a service that supports all the concurrency/
transactional mechanisms within the service itself. This is not easy, as you
suggest, because some of the underlying mechanisms do not directly support it.
However, that does not mean it's impossible, only more difficult. Consider
that TCP provides a "reliable" service on top of an "unreliable" one.

We also need to build a module that uses a new flat file format which
will handle the transactional mechanism. This can be done using
file locking, copy-on-write, or versioning-based repositories. There
are ways to do this that are not as efficient as a DBMS, and probably
not as robust either, but they would be practical on small systems.
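
One data point: the classic flat-file trick is to write a complete new copy
of the file and rename() it over the old one; rename() is atomic on a local
Unix filesystem, so a reader (or a crash) sees either the old config or the
new one, never a half-written mix. A minimal sketch, with illustrative names
(a real module would also take a lock around the whole read-modify-write):

#include <stdio.h>
#include <unistd.h>

/* Atomically replace a config file with new contents. */
int replace_config(const char *path, const char *tmppath,
                   const char *contents)
{
    FILE *f = fopen(tmppath, "w");
    int ok;

    if (!f)
        return -1;
    ok = fputs(contents, f) != EOF &&
         fflush(f) == 0 &&
         fsync(fileno(f)) == 0;     /* make sure it is really on disk */
    if (fclose(f) != 0)
        ok = 0;
    if (!ok) {
        unlink(tmppath);            /* throw the half-written copy away */
        return -1;
    }
    return rename(tmppath, path);   /* atomic on the same filesystem */
}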

The strategy is to get an API that wraps the current mechanisms,
allows us to build a simple file-based mechanism, and then allows
us to build a top-to-bottom, end-to-end system that has all the
features and is reliable/concurrent/transactional... If this
can be achieved using existing mechanisms that are configured
properly, then the last part may not be needed.


-- 
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live!   - [EMAIL PROTECTED] (7th Coding Battalion)

------------------------------

From: Anselm Lingnau <[EMAIL PROTECTED]>
Subject: Re: GUI, The Next Generation
Date: 10 Jan 1999 16:54:10 +0100

In article <[EMAIL PROTECTED]>,
Derek B. Noonburg <[EMAIL PROTECTED]> wrote:

> I have no idea if the source for wm or wmc is available.

However, there would be nothing to prevent somebody from writing a
nonoverlapping-window window manager for X11, in the same way that we
have all sorts of different window managers. One could probably take
fvwm or twm source and butcher it into something resembling such a
beast.

Anselm
-- 
Anselm Lingnau ......................... [EMAIL PROTECTED]
The purpose of computing is insight, not numbers.            -- Richard Hamming

------------------------------

From: [EMAIL PROTECTED] (Leslie Mikesell)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: 11 Jan 1999 10:53:22 -0600

In article <[EMAIL PROTECTED]>,
Frank Sweetser  <[EMAIL PROTECTED]> wrote:
>
>one of the goals is to make the simple case simple.  for a plain
>non-networked, single user, home machine, the default opStore config could
>just point to /etc/opStore/<app>.conf priority 5, then
>~/.opStore/<app>.conf priority 10.  this would require no maintenance on
>the part of the sysadmin (== user here), and give the same behavior as is
>common now - it wouldn't be a full service, but rather just another library
>the app uses.
>
>with more complex systems, the system administrator would have the ability
>to specify a more complex configuration.  in that case, it can safely be
>assumed that there would already be a system administrator capable of
>dealing with more complex setups.

I think additional goals here should be to take advantage of existing
network data stores (LDAP, etc.) and to encourage consolidation of
such data into something that can be maintained consistently.  Personally
I'd like to see postgresql as the back-end storage, since it already
supports transactions, has Java, ODBC, Perl DBI, and some other
client interfaces, and is included in the Red Hat distribution.

>> Yes, it really is impossible to do it perfectly.  However, even
>> with config files unix is pretty sloppy already with the only
>> transactioning mechanism being the number of people who know the
>> root password. You might be able to make it 'good enough'.
>
>hrm... i think you're probably right here.  we can play some tricks with
>file locking and rename(), but for resources spread across multiple
>sources (in particular ones w/out true transaction support, which will
>probably be most of 'em) there will always be a danger.  having an opStore
>lock on each resource *should* mean that 2 opStore apps won't corrupt each
>other, though that won't help with other entities (ie, someone with vi)
>from interfering.  the best we can do in that case is make things no more
>dangerous than they already are.

You can do it right if you provide a data store that supports transactions
and one or more interfaces to do updates (perl DBI would be good and
easily adaptable to many SQL servers).  Non-transactioning data stores
and combining data from more than one source could still be supported
but at the user's risk.
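
For example, with postgresql the update side can just wrap the changes in a
transaction; a rough libpq sketch (the "config" table and its columns are
invented for illustration):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=opstore");
    const char *cmds[] = {
        "BEGIN",
        "UPDATE config SET value = 'smtp.example.com' WHERE key = 'mailhost'",
        "UPDATE config SET value = '25' WHERE key = 'mailport'",
        "COMMIT",            /* readers see both changes or neither */
        NULL
    };
    int i;

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect: %s", PQerrorMessage(conn));
        return 1;
    }
    for (i = 0; cmds[i] != NULL; i++) {
        PGresult *res = PQexec(conn, cmds[i]);
        if (PQresultStatus(res) != PGRES_COMMAND_OK) {
            fprintf(stderr, "%s: %s", cmds[i], PQerrorMessage(conn));
            PQclear(res);
            PQclear(PQexec(conn, "ROLLBACK"));
            break;
        }
        PQclear(res);
    }
    PQfinish(conn);
    return 0;
}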

  Les Mikesell
    [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Leslie Mikesell)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: 11 Jan 1999 10:56:47 -0600

In article <[EMAIL PROTECTED]>,
Frank Sweetser  <[EMAIL PROTECTED]> wrote:
>
>the existing plan already includes locally configured metadata; this
>metadata could include default priorities for any resources that do not
>specify one themselves.

If you can't obtain these values through the same mechanism you 
propose for everyone else's configurations then you have done
something wrong.

  Les Mikesell
   [EMAIL PROTECTED]

------------------------------

From: Marko Siladin <[EMAIL PROTECTED]>
Subject: 2.2.0pre5/6 IDE disk geometry detection problem
Date: Mon, 11 Jan 1999 11:49:31 -0500
Reply-To: [EMAIL PROTECTED]

I tried to send this to the linux-kernel mailing list, but vger
seems to be too busy. I hope someone involved in kernel development is
reading this list as well:

I've just upgraded from 2.1.131 to 2.2.0pre5 & 2.2.0pre6 and
now I have trouble building boot disks using LILO. It looks
like the problem is in the detection of the geometry of my
IDE hard disk:

in 2.1.131 cat /proc/ide/hda/geometry results in:
physical     4092/16/63
logical      847/128/63  <- correct

in 2.2.0 pre5/6 I get :
physical     4092/16/63
logical      511/128/63  <- false

the actual physical geometry of the disk is 6780/16/63,
so both kernels get that one wrong - but at least 2.1.131
gets the logical geometry right (from the BIOS).

After making a kernel and typing 'make install':

Creating a lilo bootdisk...
mkdir /tmp/boot12461
mke2fs -q /dev/fd0
mke2fs 1.12, 9-Jul-98 for EXT2 FS 0.5b, 95/08/09
mount -t ext2 /dev/fd0 /tmp/boot12461
cd /tmp/boot12461
cp /boot/vmlinuz /boot/boot.b .
lilo -C - <<- EOF2
geo_comp_addr: Cylinder 823 beyond end of media (511)
set +e; cd /; umount /dev/fd0; rmdir /tmp/boot12461

The created boot disk is not bootable. I can boot into
2.1.131 and make the boot disks for 2.2.preX.

To me it seems like a bug was introduced - or
do I need to change the way I make my boot disks
(by providing the explicit geometry)?

...Marko

please CC replies to 
[EMAIL PROTECTED] 
thanks

------------------------------

From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: Mon, 11 Jan 1999 21:20:17 GMT

Frank Sweetser wrote:
> 
> [EMAIL PROTECTED] (Leslie Mikesell) writes:
> 
> > In article <[EMAIL PROTECTED]>,
> > Frank Sweetser  <[EMAIL PROTECTED]> wrote:
> > >
> > >one of the goals is to make the simple case simple.  for a plain
> > >non-networked, single user, home machine, the default opStore config could
> > >just point to /etc/opStore/<app>.conf priority 5, then
> > >~/.opStore/<app>.conf priority 10.  this would require no maintenance on
> > >the part of the sysadmin (== user here), and give the same behavior as is
> > >common now - it wouldn't be a full service, but rather just another library
> > >the app uses.
> > >
> > >with more complex systems, the system administrator would have the ability
> > >to specify a more complex configuration.  in that case, it can safely be
> > >assumed that there would already be a system administrator capable of
> > >dealing with more complex setups.
> >
> > I think additional goals here should be to take advantage of existing
> > network data stores (LDAP, etc.) and to encourage consolidation of
> > such data into something that can be maintained consistently.  Personally
> > I'd like to see postgresql as the back-end storage since it already
> > supports transactions and has java, odbc, perl DBI, and some other
> > client interfaces and is included in the RedHat distribution.
> 
> agreed.  postgresql does sound like a good choice for one of the initial
> modules.
> 
> > >> Yes, it really is impossible to do it perfectly.  However, even
> > >> with config files unix is pretty sloppy already with the only
> > >> transactioning mechanism being the number of people who know the
> > >> root password. You might be able to make it 'good enough'.
> > >
> > >hrm... i think you're probably right here.  we can play some tricks with
> > >file locking and rename(), but for resources spread across multiple
> > >sources (in particular ones w/out true transaction support, which will
> > >probably be most of 'em) there will always be a danger.  having an opStore
> > >lock on each resource *should* mean that 2 opStore apps won't corrupt each
> > >other, though that won't help with other entities (ie, someone with vi)
> > >from interfering.  the best we can do in that case is make things no more
> > >dangerous than they already are.
> >
> > You can do it right if you provide a data store that supports transactions
> > and one or more interfaces to do updates (perl DBI would be good and
> > easily adaptable to many SQL servers).  Non-transactioning data stores
> > and combining data from more than one source could still be supported
> > but at the user's risk.
> >
> >   Les Mikesell
> >     [EMAIL PROTECTED]
> 
> however, here's the problem.  values can come in from multiple locations.
> say any number of these values can be changed.  one or more of these
> locations do not support guaranteed record locking or transactions.  say
> you make a set of changes that in addition to any transaction capable
> locations, also covers values originating from multiple non-capable
> locations.  how do you guarantee consistency across the entire
> configuration?
> 
> the more i think about it, the more i think that there just won't be a sane
> way to guarantee consistency on all sources, esp when multiple ones don't
> support guaranteeing it.  first off, when we write out changes, we should
> only write out the delta.  second, i suggest that we only write out the
> changes to a single, configurable location.  for the regular user, 98% of
> the time this will just be ~/.opStore/<app>.conf  for the admin, design a
> tool capable of grokking where each item comes from, and can update the
> values on each one individually.  by providing this tool, the work of
> guaranteeing consistency is largely offloaded to the admin.  it would be a
> lot nicer to do it automagically, but i don't see any way that we can do it
> without faking parts of it and hoping nothing blows up.
> 

Yeah, I was thinking something similar, i.e. that you simply keep all
changes. Thus nothing can ever be lost, and the resolution of conflicts
can be done according to the particular situation's needs. Some
of this could be automated by defining resolution algorithms/methods
and configuring them. One of the configuration options for resolution
could be to involve a human (user/admin/...).

One could then say the "current view" is the application of a set of
changes onto the "base config". This is nice because it also provides
a history/fail-safe mechanism. Of course a DBMS wouldn't be able
to support this.
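
A minimal sketch of that idea, with an in-memory table standing in for
whatever store actually holds the base config and the change log (names
and sizes are purely illustrative):

#include <stdio.h>
#include <string.h>

#define MAXKEYS 64

struct kv { const char *key, *value; };
struct view { struct kv entries[MAXKEYS]; int n; };

/* Replay one recorded change onto the current view; later changes win. */
static void apply(struct view *v, const char *key, const char *value)
{
    int i;

    for (i = 0; i < v->n; i++)
        if (strcmp(v->entries[i].key, key) == 0) {
            v->entries[i].value = value;
            return;
        }
    if (v->n < MAXKEYS) {
        v->entries[v->n].key = key;
        v->entries[v->n].value = value;
        v->n++;
    }
}

int main(void)
{
    struct view v;
    int i;

    v.n = 0;
    apply(&v, "editor", "vi");      /* base config */
    apply(&v, "color",  "off");
    apply(&v, "editor", "emacs");   /* a later change overrides the base */

    for (i = 0; i < v.n; i++)       /* the "current view" */
        printf("%s = %s\n", v.entries[i].key, v.entries[i].value);
    return 0;
}

Nothing is ever thrown away: to resolve a conflict or roll back, you just
drop or reorder entries in the change log and rebuild the view.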


-- 
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live!   - [EMAIL PROTECTED] (7th Coding Battalion)

------------------------------

From: YANG Tong <[EMAIL PROTECTED]>
Subject: Help about wish and tcl commands !
Date: Mon, 11 Jan 1999 22:21:21 +0000

Hi, 

Just a question about the Tcl language:
How can I put a bitmap (and, I hope, a pixmap) in a button?
Let me explain with a simple example (on the screen):

/root# wish
% button .b1 -bitmap ??????

Which value must I put in ????? (I have an X11 bitmap file called cdrom.xbm
in the directory /root)

Thanks.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
