Linux-Development-System Digest #272, Volume #6     Wed, 13 Jan 99 10:20:36 EST

Contents:
  Re: Registry for Linux - Bad idea (George MacDonald)
  Re: How do you read core dump on Linux? (Daniel R. Grayson)
  Re: How do you read core dump on Linux? (Arun Sharma)
  Re: silly question (mlw)
  Re: How do you read core dump on Linux? (Timothy J. Lee)
  Re: iso9660: time-stamp mismatch/bug? (Villy Kruse)
  Re: silly question (Josef Moellers)
  Re: iso9660: time-stamp mismatch/bug? (Villy Kruse)
  Total amount of physical memory (Yao-Lu Tsai)
  Re: Total amount of physical memory (Richard Jones)
  Re: Total amount of physical memory (Josef Moellers)
  Re: How can I tell if a kernel module is loaded from within kernel code? (Villy Kruse)
  Re: Total amount of physical memory (Mark Tranchant)

----------------------------------------------------------------------------

From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Registry for Linux - Bad idea
Date: Wed, 13 Jan 1999 05:12:25 GMT

Christopher Browne wrote:
> 
> On Thu, 07 Jan 1999 10:25:28 GMT, George MacDonald <[EMAIL PROTECTED]> wrote:
> >Christopher Browne wrote:
> >>
> >> On 05 Jan 1999 19:59:16 -0500, Frank Sweetser <[EMAIL PROTECTED]> wrote:
> >> >Tristan Wibberley <[EMAIL PROTECTED]> writes:
> >> >> 2) then a couple of libraries for parsing flat text will be most
> >> >> appropriate no? Simplest to implement.
> >> >
> >> >agreed.  this would be the first logical module to implement.  i actually
> >> >have two of them in front of me now (profile code from the kerberos package
> >> >courtesy ted t'so, and libconfig from sunsite), one of which will probably
> >> >end up getting stuffed into the flat text module.
> >>
> >> Indeed.
> >>
> >> Note that there will likely be two pieces:
> >> a) Read the config, and
> >> b) Write the config.
> >>
> >> The UNIX "approach" has been for these to be distinctly separated.
> >> Which is, for things that are commonly referenced, but seldom changed, a
> >> Good Thing.
> >
> >This model works well for CLI based users who know the unix "model".
> >However newer users and GUI based users have a learning curve to be
> >able to do the "write" part.
> 
> I'm not talking about applications.  I'm talking about the way it works
> "behind the curtain."
> 
> It doesn't matter very much, when designing the low level part, how the
> "user side" works.
> 
> If the system provides a "pretty, Barneyfied tool," the user will not
> ever need to touch the data by hand, which means that the user *doesn't
> care* about the data representation.
> 

True for most end users. Some, though, will use the CLI and go looking for
config files simply because that's the way they are used to doing things,
or because they like to "know" how it works.

> From a design perspective, the first thing we need to do is establish a
> good data representation.  And *then* figure out how to make it "Barney
> friendly." Looking at Barney first represents a wonderful opportunity to
> misdesign things.

Well, I agree a good data representation is needed, but what data are we
going to represent? Do we need to represent

        keys   and string values which are "typed" by the app
        keys   and lists of strings typed by the apps
        keys   and typed values
        keys   and lists of returned typed values

        keys   and a list of objects, some of which are values,
               some of which are meta data?

        Do we need to support a redirect or link, to say "get the value
         from somewhere else"?

        How do you label something as final? Is that meta data on a field?

What do we call these: attributes, fields, objects? Exactly what does
the term we use mean? ... Is it object oriented? Are there class definitions,
are there methods? Can methods be defined, overridden, ...?

Many questions, some of which depend on the higher level model
we decide on.

> 
> >platform and it would be nice to be able to minimize the required learning
> >for users who are task oriented. So one big question is, do you want to
> 
> Data representation != Tools to manipulate data.
> 
> >       support a newer GUI based model for "end users"
> >       support the CLI model where users work with CLI and file tools
> >       support both
> >
> >I think doing both is the best approach.
> 
> I agree that having both "powerful" and "Barney-friendly" data
> manipulation tools is a good thing.  That is orthogonal to data
> *representation.*
> 

Well, yes and no. For example, do you support Unicode values? If so,
by hex? Following the HTML way? ...

Do you use SGML to define the grammar ... or follow a C++-style
class definition, or a CORBA interface spec, ...


> >> Everyone seems to want to come up with a "sexy" system that will
> >> represent the "be-all and end-all."
> >>
> >>   "I'll come up with the perfect configuration system, and it will make
> >>    me famous!"
> >
> >Even the best config system would get you a wet rag response! Don't
> >count on getting famous out of it, you may help a lot of developers
> >and end users, but most likely they won't even give it a passing
> >thought. In fact if it's done well, it will be invisible.
> 
> Which is to say that from the enduser's perspective, how data is
> represented will be invisible.  Which is what I said earlier.

For GUI users, yes; for CLI users, not necessarily.

> 
> >> Unfortunately, actually implementing such a universal thing requires
> >> that a *lot* of programs be modified.  Which requires more effort than
> >> anyone is likely to be willing to employ.  It's *not* as simple as
> >> coming up with the "perfect config system."
> >
> >More wisdom, any good solution will find a way to work with the
> >existing mechanism and extend it to be better. Also as you
> >say it should not take much effort on the part of a developer
> >otherwise they won't bother. I believe they should get something
> >out of using a config service and not the other way around.
> 
> True.
> 
> Which means that a good API will allow a programmer to construct some
> common tools that can manipulate multiple data sources.
> 
Yup, but as you say later: first things first. We need an architecture
that can cope with multiple data sources, perhaps translating to some
neutral data format.

> >> If, in contrast, a scheme is set up that is *useful enough* and
> >> *convenient enough* that it convinces *SOME* developers to adopt it,
> >> thereby reducing the number of completely independent configuration
> >> systems, that's GOOD ENOUGH.
> >>
> >> We don't need a "Unified Field Theory" of configuration systems; we need
> >> something that's Good Enough, and perhaps that's somewhat better than we
> >> have now.
> >
> >I tend to think through a problem to come up with a nice clean conceptual
> >solution. This includes describing the *ultimate* desired solution.
> 
> But when you talk about an "*ultimate* desired solution," you are
> assuming something akin to a "Grand Unified Field Theory," where there
> will be a single clean conceptual solution.
> 
> I claim that it is not proven that that single solution exists.
> 

Well, I'm striving to come up with a paragraph or two that can define
the solution. It needs to be clean, clear and easily communicated. It's
important to be able to set a vision or framework in someone
else's head so you can hang the details on it.

Saying it's a distributed hierarchical persistent network
based object storage system doesn't quite cut it. 

> There are diverse varieties of configuration, which means that fitting
> them all onto one data store may be like the legendary Procrustean Bed,
> where Procrustes would modify peoples' sizes so they'd fit.  Stretching
> those that are too short, and chopping bits off of those that are too
> tall.
> 

Hmm, I guess he never heard of folding space.


> >> Make sure it is documented clearly how this is to work, so that it is
> >> *CLEAR* which files will be evaluated in what order.  The problem with
> >> (in contrast) X resource information is that the order of evaluation is
> >> *not* clear.
> >
> >How so? I thought it was  .Xdefaults, any xrdb'ed values, app-defaults,
> >$HOME/AppName, command line, internalized defaults
> >
> >Well there is XAPPRES and the localizing, ... Jesh that's at least 8
> >levels!!!
> 
> That there are on the order of 8 levels, and that people have a hard
> time remembering which goes in which order, is very much the point.
> 

I typically only use four of them!

> >The directory locations should not be fixed, perhaps a default
> >list defined in a
> >
> >    /etc/app.conf
> >
> >and maybe a switch kind of mechanism like that used in /etc/nsswitch.conf,
> >perhaps
> >
> >    /etc/appswitch.conf
> >
> >I'm wondering if apps should be able to have their own switch.conf
> >files, or perhaps a line in the config file that specifies the
> >evaluation order and the types of sub-service to use.
> >
> >This would allow setting system wide defaults for applications,
> >i.e. get from local files, then CORBA, then ACAP, ...
> >
> >and then allow individual apps to change from that.
> 
> That's pretty fair.
> 
> >Well this is one way to separate the data from the storage
> >mechanism. There are also other ways to do this.
> >
> >> I would suggest documenting something on the order of five sets of
> >> places for defaults to come from, for application foo:
> >> 1) Site config: /etc/site/foo.conf  (/etc/site might be NFS mounted
> >> from a central server; feel free to suggest a better location to stick
> >> this...)
> >> 2) Host-based config: /etc/foo.conf
> >> 3) $HOME/.foo.rc
> >> 4) $HOME/etc/foo.conf
> >> 5) $HOME/GNUStep/Library/Defaults/foo/Defaults
> >
> >That's a good example of typical complexity, but the number of levels
> >should really be arbitrary, no?
> 
> Agreed.
> 
> >/etc is typically for system related config files, app config files
> >are found all over the place  /usr/lib/$app  /home/$app  /opt/$app
> >/usr/local/lib/$app ...   I think this should be site configurable,
> >perhaps in the /etc/app.conf
> >
> >Also I don't think using a "/etc" in the $HOME will work,
> >I commonly have my own etc directory and I have seen many people
> >use the same. So a .something is preferable. I have also seen
> >.etc used in $HOME, but don't recall exactly where.
> 
> I've got stuff in $HOME/etc; I treat it as an extra place for "config
> info."
> 
> I don't much care if we're using 'dot files' in $HOME, or files in
> $HOME/etc, or $HOME/GNUStep/...; the point is to have *some* sort of
> convention so that it is easy to search for configuration information.

Using a dot directory in $HOME is the typical way to handle lots of
config info. Examples are CDE, GNOME, ... Using the . allows the
file to be "hidden", at least from a normal ls. It's not the
best way to do it, but that's the convention. Invading the
user's name space, i.e. with a non-dot file/directory, is considered
a real no-no.

> 
> >Finally my $HOME already has way too many .$app dirs, too many
> >for my tastes, hence one of the reasons for my desire to push
> >them down to ".userStore/Applications".
> 
> The huge number of .dot .files that collect in $HOME is quite annoying.
> It would be nice if they hid under a directory; whether that be
> $HOME/.userstore/, or $HOME/etc/, or $HOME/GNUStep/, I do not really
> care which is used.
> 
> - $HOME/.userstore/ has the non-salutory effect that it is somewhat
> "hidden."
> 
We can make the name configurable; that way, if/when someone comes up
with a better one, we can set it to what they want. We could even allow
the user an environment variable, i.e. USER_STORE=".myStore".

> - $HOME/GNUStep/ appears to associate things with GNUStep, which is
> somewhat silly if applications *aren't* associated with it.
> 
> - $HOME/etc/ seems to me to be most sensible, as it agrees with the use
> of /etc/ for "global" configuration.

I do like this, but then again I grok /etc. Many users may not know
what /etc does. Also, I have seen a lot of people use $HOME/etc,
including myself, so it may run into problems. This is really a non-issue
as it should be definable. I just like the idea of using stores,
because I want the helpers to be called store managers. It gives me
an easy metaphor to document things, i.e. if it's not on the shelf,
the manager checks the back room (his cache). If he's out of stock, he calls
up and requests an immediate delivery ... If a delivery cannot be
made in time, he might get it from cold storage (the repo). ...

> 
> --
> if (argc > 1 && strcmp(argv[1], "-advice") == 0) {
>   printf("Don't Panic!\n");
>   exit(42);
> }

 srand("/dev/hda");

-- 
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live!   - [EMAIL PROTECTED] (7th Coding Battalion)

------------------------------

From: [EMAIL PROTECTED] (Daniel R. Grayson)
Subject: Re: How do you read core dump on Linux?
Date: 12 Jan 1999 23:21:12 -0600

"http://www.otokovideo.com" <[EMAIL PROTECTED]> writes:

> How do you read core dump on Linux?
> Thanks.
> 

Use

        gdb foo core

where foo is the name of the executable file that died, and examine the core
file with gdb commands.  If you don't know the name, put any old thing there
and gdb will tell you.


------------------------------

Subject: Re: How do you read core dump on Linux?
From: Arun Sharma <[EMAIL PROTECTED]>
Date: Wed, 13 Jan 1999 05:21:09 GMT

"http://www.otokovideo.com" <[EMAIL PROTECTED]> writes:

> How do you read core dump on Linux?

gdb executable core

        -Arun

------------------------------

From: mlw <[EMAIL PROTECTED]>
Subject: Re: silly question
Date: Wed, 13 Jan 1999 05:24:12 +0000

Tristan Wibberley wrote:
> 
> mlw wrote:
> >
> >
> > cd srcpath
> > tar cvf /tmp/fubar.tmp .
> > cd /destpath
> > tar xvf /tmp/fubar.tmp '*.as'
> >
> > an 'xcp' command would be useful.
> 
> put it into a file called xcp in your path, 'chmod +x xcp'. This is
> called making a script, it will do it all with one command.
> 
> btw tar has an option to specify the directory to extract to.
> 
> --
> Tristan Wibberley               Linux is a registered trademark
>                                 of Linus Torvalds.

I am not a NEWBIE!!! Geez, I know how to write a script file. Geez!

Why is it that when one says "Gee, it would be nice to have this...." you
people go out of your way to explain, in the most condescending
phraseology, that one does not, in fact, want that nice program?

I have been working on UNIX off and on for almost 15 years. I know what
scripts are, I know how to pipe commands together, I know how to write
device drivers.

I just think a simple xcopy-like program would be nice. Why am I treated
as a heretic?


-- 
Mohawk Software
Windows 95, Windows NT, UNIX, Linux. Applications, drivers, support. 
Visit the Mohawk Software website: www.mohawksoft.com

------------------------------

From: [EMAIL PROTECTED] (Timothy J. Lee)
Subject: Re: How do you read core dump on Linux?
Reply-To: see-signature-for-email-address---junk-not-welcome
Date: Wed, 13 Jan 1999 05:21:49 GMT

[EMAIL PROTECTED] writes:
|How do you read core dump on Linux?

$ some-program
Segmentation fault (core dumped)
$ gdb some-program core

Of course, it helps if some-program was compiled with debugging
symbols (-g option to gcc).
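
(A minimal illustration, not from the original post: a hypothetical
crash.c that dies with SIGSEGV, so there is a core file to look at.)

/* crash.c -- deliberately dereference NULL to force a core dump.
 * Build with symbols:   gcc -g -o crash crash.c
 * Allow core files:     ulimit -c unlimited
 * Then:                 ./crash ; gdb crash core ; (gdb) bt
 */
#include <stdio.h>

int main(void)
{
        int *p = NULL;          /* invalid pointer */
        *p = 42;                /* SIGSEGV here; the kernel writes "core" */
        printf("never reached\n");
        return 0;
}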

-- 
========================================================================
Timothy J. Lee                                                   timlee@
Unsolicited bulk or commercial email is not welcome.             netcom.com
No warranty of any kind is provided with this message.

------------------------------

From: [EMAIL PROTECTED] (Villy Kruse)
Subject: Re: iso9660: time-stamp mismatch/bug?
Date: 13 Jan 1999 08:53:16 +0100

In article <77g38k$[EMAIL PROTECTED]>, Jim Van Zandt <[EMAIL PROTECTED]> wrote:
>
>I would expect the kernel to do a UTC->local conversion when a file
>timestamp is written, and a local->UTC conversion before the timestamp
>is handed over to the display program.  I guess these conversions are
>not being done.
>


It does! :-)  Provided that someone is kind enough to tell the kernel
what the current TZ offset is, and at least with Red Hat this is not done.

It works like this:

vfat time stamp in local time 
-> converted to UTC using kernel TZ offset
-> converted to local time by ls using /etc/localtime 
   or the TZ environment variable


As the kernel tz offset remains unset, the first conversion appears not to
take place, even though the vfat file system code is written to do so.

From the code below, look at the following line:

        secs += sys_tz.tz_minuteswest*60;
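
(A sketch, not from the original post: one way to hand the kernel its
TZ offset from user space is settimeofday(2) with a NULL timeval, which
is roughly what clock(8)/hwclock do at boot. The file name and the
example offset below are mine.)

/* settz.c -- tell the kernel the local offset from UTC (hypothetical
 * example).  Must run as root.  Note: the first settimeofday() call
 * that passes a timezone with a NULL timeval may also "warp" the
 * system clock, so boot scripts normally do this exactly once, early.
 */
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
        struct timezone tz;

        tz.tz_minuteswest = 5 * 60;     /* example: UTC-5 (US Eastern) */
        tz.tz_dsttime = 0;

        if (settimeofday(NULL, &tz) < 0) {
                perror("settimeofday");
                return 1;
        }
        return 0;
}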


Villy


::::::::::::::
/usr/src/linux-2.0.35/fs/fat/misc.c
::::::::::::::

[ snip ]

extern struct timezone sys_tz;


/* Convert a MS-DOS time/date pair to a UNIX date (seconds since 1 1 70). */

int date_dos2unix(unsigned short time,unsigned short date)
{
        int month,year,secs;

        month = ((date >> 5) & 15)-1;
        year = date >> 9;
        secs = (time & 31)*2+60*((time >> 5) & 63)+(time >> 11)*3600+86400*
            ((date & 31)-1+day_n[month]+(year/4)+year*365-((year & 3) == 0 &&
            month < 2 ? 1 : 0)+3653);
                        /* days since 1.1.70 plus 80's leap day */
        secs += sys_tz.tz_minuteswest*60;
        if (sys_tz.tz_dsttime) {
            secs -= 3600;
        }
        return secs;
}


/* Convert linear UNIX date to a MS-DOS time/date pair. */

void fat_date_unix2dos(int unix_date,unsigned short *time,
    unsigned short *date)
{
        int day,year,nl_day,month;

        unix_date -= sys_tz.tz_minuteswest*60;
        if (sys_tz.tz_dsttime) unix_date += 3600;

        *time = (unix_date % 60)/2+(((unix_date/60) % 60) << 5)+
            (((unix_date/3600) % 24) << 11);
        day = unix_date/86400-3652;
        year = day/365;
        if ((year+3)/4+365*year > day) year--;
        day -= (year+3)/4+365*year;
        if (day == 59 && !(year & 3)) {
                nl_day = day;
                month = 2;
        }
        else {
                nl_day = (year & 3) || day <= 59 ? day : day-1;
                for (month = 0; month < 12; month++)
                        if (day_n[month] > nl_day) break;
        }
        *date = nl_day-day_n[month-1]+1+(month << 5)+(year << 9);
}

[ snip ]

------------------------------

From: Josef Moellers <[EMAIL PROTECTED]>
Subject: Re: silly question
Date: Wed, 13 Jan 1999 08:40:37 +0100

mlw wrote:
> 
> Josef Moellers wrote:
> > It's a philosophical question.
> Why is it a philosophical question? There is a task which is useful
> which no utilities perform easily. Yes, any number of utilities can be
> strung together to do it, just not easily.

It's a philosophical question, or should I have written "matter of
principle": whether an OS should provide single commands for each and
every purpose (and no means of easily extending the set of existing
commands, as DOS does) or provide a basic set of commands (and an easy
way of combining these into more powerful sets, as UNIX does).

As more than one person has pointed out: UNIX allows you to roll your
own. This is one of the basic design issues behind UNIX!

> Why does UNIX have 'rmdir' and 'rm', when both can remove directories?

Has been answered.

> Why are there so many shells?

Has been answered.

> Because it is this wealth of utilities that makes UNIX easier to use
> than other OS platforms.

And you're complaining about a single missing sub-functionality?

> > DOS has the approach that there has to be a specific tool for each and
> > every purpose (... Gates can think of). If a user wants to do something,
> > there has to be a ready-to-use tool. If there is no tool, you can't do
> > it! So there is no use for a scripting language, hence there is none.
> 
> This is not true. DOS, Windows and NT support pipes. And one can type:
> dir | sort | more
> in DOS, Windows and NT, UNIX does not have a lock on these features.

I wasn't referring to a (badly implemented) functionality, I was
referring to a scripting language (aka shell) that allows you to easily
string together a set of commands. You can give that set of commands a
name ... voila ... a new command is born.

[ ... ]

> XCOPY is not a monolithic self-sufficient subsystem any more than is rm
> or cp. In fact, cp is deficient in that it can't do what xcopy does.

Well, I know a little of DOS. Why aren't you complaining that COPY
doesn't do what XCOPY can? Imagine XCOPY didn't exist on DOS; how
would you go about doing what you want to do?
There is functionality in DOS that UNIX doesn't have, and vice versa.
But it's easy to replicate DOS's functionality in UNIX.

[ ... ]

> I love UNIX, but the UNIX attitude is tiring. UNIX is not a fixed
> document, it is a living platform. If a single utility can make a
> specific task easier, there is NO reason why it would not be useful
> to have or pursue.

UNIX has grown through people who have discovered a deficiency and
written a tool to fill the gap. Lots of people have gained popularity
this way: the guys from Berkeley (Leffler, Joy, et al.), Richard
Stallman, Linus Torvalds, Larry Wall.
These people did not keep complaining that this and that
functionality (BSD: paging et al., Stallman: a powerful editor that can
double as an OS B-{), Torvalds: cost-free UNIX on industry-standard
hardware, Wall: yet another scripting language for easier text
processing) didn't exist; they "just" sat down and did the job.

> This all started with a little rant. I stated that the abilities of xcopy
> would make doing a few things in UNIX a bit easier. Rather than
> thoughtful statements like "I've never needed that" or "That would be
> cool", I get "Oh NO! you can't do that, UNIX has all the pieces with
> which you can build that command", "Philosophically, you shouldn't," etc.

At the time you posted your "little rant", we didn't know your
background and abilities. We provided help as we usually do: we showed
how you can implement what you need. Since you have explained in depth
that you know how to do it and keep re-iterating your "little rant",
you risk being considered a "troll".
1. We agree: there is no single standard UNIX command that does what you
need.
2. You agree that it is possible to roll your own.

> I don't need a lecture about UNIX. I have been using UNIX, off and on,
> since 1985. The original Suns, SCO Xenix (286), Interactive UNIX,
> Solaris, Linux, NetBSD, and FreeBSD.

Apparently you do need a lecture.
You know the technicalities, but you apparently don't know the design
issues that hide behind them. You obviously know "how",
but you don't know "why".
We now all know that you know how to write a shell script. Now please
explain what keeps you from doing just that.

-- 
Josef Moellers          [EMAIL PROTECTED]
        UNIX - Live free or die!
PS Dieser Artikel enthaelt einzig und allein meine persoenlichen
Ansichten!
PS This article contains my own, personal opinion only!

------------------------------

From: [EMAIL PROTECTED] (Villy Kruse)
Subject: Re: iso9660: time-stamp mismatch/bug?
Date: 13 Jan 1999 09:09:51 +0100

In article <[EMAIL PROTECTED]>,
Jim Y. Kwon <[EMAIL PROTECTED]> wrote:
>
>When mounting CD-ROM's, all the files display the wrong date/time - on my
>particular machine all CD-ROM files are off by 15 hours. I was doing some
>archive work and burning some CD's and I noticed this discrepancy. I checked
>these CD's under Win95 and the files showed the correct date/time (as the
>date/times from the original files). All my CD's exhibit this behavior,
>including commercially-made CD's.
>



The time fields on iso9660 file systems do have a field for specifying
the time zone.  I'm not sure whether a zero in this field means that
the time stamps are supposed to be in UTC.  I believe that
the Linux code treats the time stamps as UTC.  It wouldn't surprise
me if Windows ignored the time zone issue when creating CD images;
actually, this would be a function of the CD formatting software.


Someone should consult the ISO 9660 text to see which is correct.



Villy

------------------------------

From: Yao-Lu Tsai <[EMAIL PROTECTED]>
Subject: Total amount of physical memory
Date: Wed, 13 Jan 1999 18:30:25 +0800
Reply-To: [EMAIL PROTECTED]

Hi,

Is there any system call or library function in Linux
for C programmers to get the total amount of
physical memory installed?

Thanks.

-- 

+--------------------------------+---------------------------------+
| Yao-Lu Tsai                    | Project Manager at OODB Team,   |
| (H) 886-2-2746-5471            | Syscom Computer CO., Taiwan.    |
| [EMAIL PROTECTED]       | (O)   886-2-2741-8010 EXT 8857  |
| http://dbmaker.syscom.com.tw   | (FAX) 886-2-731-3913            |
+--------------------------------+---------------------------------+

------------------------------

From: Richard Jones <[EMAIL PROTECTED]>
Subject: Re: Total amount of physical memory
Date: Wed, 13 Jan 1999 10:40:52 +0000

Yao-Lu Tsai <[EMAIL PROTECTED]> wrote:
: Hi,

: Is there any system call or library function in Linux
: for C programmers to get the total amount of
: physical memory installed?

Not a system call as such. The normal way to
do this is to parse /proc/meminfo (although that
raises the question of what to do when /proc
isn't mounted - you can generally
ignore such cases, as lots of stuff breaks
if /proc isn't mounted).

This is from 2.2.0-pre6:

$ cat /proc/meminfo 
        total:    used:    free:  shared: buffers:  cached:
Mem:  64991232 64008192   983040 21356544  1347584 19275776
Swap: 246743040 162455552 84287488
MemTotal:     63468 kB
MemFree:        960 kB
MemShared:    20856 kB
Buffers:       1316 kB
Cached:       18824 kB
SwapTotal:   240960 kB
SwapFree:     82312 kB
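
(A minimal sketch of the parsing, keying on the "MemTotal:" line shown
above; the file name meminfo.c is mine, not Rich's.)

/* meminfo.c -- read the total memory figure from /proc/meminfo */
#include <stdio.h>

int main(void)
{
        FILE *fp = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = -1;

        if (fp == NULL) {
                perror("/proc/meminfo");        /* /proc not mounted? */
                return 1;
        }
        while (fgets(line, sizeof line, fp) != NULL) {
                if (sscanf(line, "MemTotal: %ld kB", &kb) == 1)
                        break;                  /* found the line */
        }
        fclose(fp);
        if (kb < 0) {
                fprintf(stderr, "MemTotal not found\n");
                return 1;
        }
        printf("MemTotal: %ld kB\n", kb);
        return 0;
}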

Rich.

PS. If your program needs to know how much
memory is installed, and it's not a `memory
monitoring tool', then it's probably doing
something wrong.

-- 
-      Richard Jones. Linux contractor London and SE areas.        -
-    Very boring homepage at: http://www.annexia.demon.co.uk/      -
- You are currently the 1,991,243,100th visitor to this signature. -
-    Original message content Copyright (C) 1998 Richard Jones.    -

------------------------------

From: Josef Moellers <[EMAIL PROTECTED]>
Subject: Re: Total amount of physical memory
Date: Wed, 13 Jan 1999 13:05:02 +0100

Richard Jones wrote:
> 
> Yao-Lu Tsai <[EMAIL PROTECTED]> wrote:
> : Hi,
> 
> : Is there any system call or library function in Linux
> : for C programmers to get the total amount of
> : physical memory installed ?
> 
> Not a system call as such. The normal way to
> do this is to parse /proc/meminfo (although that
> raises the question of what to do when /proc
> isn't mounted - you can generally
> ignore such cases, as lots of stuff breaks
> if /proc isn't mounted).

I was about to answer the same, but dared to look into the source first
B-{)

The amount given in /proc/meminfo is the amount _unreserved_, i.e. the
amount of physical memory MINUS the amount of reserved memory, so it's
less than the amount of physical memory.

The only place where I've found the amount of physically installed
memory is in the boot messages recorded in the /var/log/boot.msg file
(the following is from my machine):
Memory: 30792k/32768k available (808k kernel code, 384k reserved, 784k data)

-- 
Josef Moellers          [EMAIL PROTECTED]
        UNIX - Live free or die!
PS Dieser Artikel enthaelt einzig und allein meine persoenlichen
Ansichten!
PS This article contains my own, personal opinion only!

------------------------------

From: [EMAIL PROTECTED] (Villy Kruse)
Subject: Re: How can I tell if a kernel module is loaded from within kernel code?
Date: 13 Jan 1999 13:07:47 +0100

In article <[EMAIL PROTECTED]>,
Ronald S. Kundla Jr. <[EMAIL PROTECTED]> wrote:
>Hi everybody!
>
>I am experimenting on the 2.0.35 kernel where I want to move a chunk
>of kernel code into a loadable module (the "bridge" code to be exact).
>
>The purpose of this is to put a "stub" in place that would allow me to
>create custom bridge code for a specialized application without
>destroying :) too much of the Linux kernel.
>
>1) Is this feasible?
>

Probably.

>2) How can I tell from the kernel if that module has been loaded by
>
>   the user/OS?
>

The standard way is to have a register procedure defined in the resident
kernel and let the module call this routine when it gets loaded and unloaded.


>3) Should I register it as a netdevice? What functions could I use to 
>   look up the registration and verify it has been loaded?
>

Only if it is a netdevice.  If it is a special-purpose module, you should
probably make your own register_xxx() routine.
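
(A rough sketch of that pattern, with hypothetical names; symbol-export
and module use-count details are omitted. The resident kernel holds a
hook pointer plus register/unregister calls, and the module fills the
pointer in from init_module()/cleanup_module().)

/* --- in the resident kernel: the stub (hypothetical names) --- */
#include <linux/skbuff.h>

static int (*bridge_hook)(struct sk_buff *) = NULL;

void register_bridge_hook(int (*fn)(struct sk_buff *))
{
        bridge_hook = fn;               /* module announces itself */
}

void unregister_bridge_hook(void)
{
        bridge_hook = NULL;             /* module gone */
}

int bridge_input(struct sk_buff *skb)   /* called from the net code */
{
        if (bridge_hook != NULL)        /* "is the module loaded?" */
                return bridge_hook(skb);
        return 0;                       /* not loaded: do nothing */
}

/* --- in the loadable module --- */
extern void register_bridge_hook(int (*fn)(struct sk_buff *));
extern void unregister_bridge_hook(void);

static int my_bridge(struct sk_buff *skb)
{
        /* custom bridge code goes here */
        return 0;
}

int init_module(void)
{
        register_bridge_hook(my_bridge);
        return 0;
}

void cleanup_module(void)
{
        unregister_bridge_hook();
}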


>Also, is there a recommended book besides the one from Rubini that has
>comprehensive kernel information?
>




Villy

------------------------------

From: Mark Tranchant <[EMAIL PROTECTED]>
Subject: Re: Total amount of physical memory
Date: Wed, 13 Jan 1999 11:54:21 +0000
Reply-To: [EMAIL PROTECTED]

Open and read /proc/meminfo. This will allow you to read the maximum
amount of physical memory available to the system.

Alternatively, it seems that you can subtract 4096 from the size of
/proc/kcore to get the installed RAM size in bytes. Don't take that as
gospel though.
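
(Illustrating the /proc/kcore trick, with the same "not gospel" caveat;
the file name kcore.c is mine.)

/* kcore.c -- guess installed RAM from the size of /proc/kcore */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
        struct stat st;

        if (stat("/proc/kcore", &st) < 0) {
                perror("/proc/kcore");
                return 1;
        }
        /* /proc/kcore is an image of physical memory plus a 4096-byte
           header page, hence the subtraction. */
        printf("guessed RAM: %ld bytes\n", (long) st.st_size - 4096);
        return 0;
}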

Mark.

Yao-Lu Tsai wrote:
> 
> Hi,
> 
> Is there any system call or library function in Linux
> for C programmers to get the total amount of
> physical memory installed?
> 
> Thanks.
> 
> --
> 
> +--------------------------------+---------------------------------+
> | Yao-Lu Tsai                    | Project Manager at OODB Team,   |
> | (H) 886-2-2746-5471            | Syscom Computer CO., Taiwan.    |
> | [EMAIL PROTECTED]       | (O)   886-2-2741-8010 EXT 8857  |
> | http://dbmaker.syscom.com.tw   | (FAX) 886-2-731-3913            |
> +--------------------------------+---------------------------------+

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
