Linux-Development-Apps Digest #765, Volume #6    Thu, 13 Jul 00 16:13:11 EDT

Contents:
  Re: creating a window ("Norm Dresner")
  Re: keyboard with additional function keys (Paul Fox)
  Re: Library for I/O Base Adress access wanted (Markus Kossmann)
  Re: Problems porting C app from HP UNIX to LINUX ("Scott Clough")
  Re: Problems porting C app from HP UNIX to LINUX (Martin Kroeker)
  Re: Problems porting C app from HP UNIX to LINUX (Kaz Kylheku)
  Re: Problems porting C app from HP UNIX to LINUX (Kaz Kylheku)
  Re: Where is <limits>? (John Gluck)
  Re: Where is <limits>? (Kaz Kylheku)
  Re: sizeof() in gcc (John Gluck)
  linking error (Danny Tran)
  Re: Problems porting C app from HP UNIX to LINUX (David T. Blake)
  Re: sizeof() in gcc (John Gluck)
  link errors in Alpha (h0l0gRaM)
  Re: Help (Christopher Plant)
  Re: Problems porting C app from HP UNIX to LINUX ("Scott Clough")

----------------------------------------------------------------------------

Reply-To: "Norm Dresner" <[EMAIL PROTECTED]>
From: "Norm Dresner" <[EMAIL PROTECTED]>
Subject: Re: creating a window
Date: Thu, 13 Jul 2000 17:19:56 GMT

man  ncurses

    Norm

Nick Lahtinen <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I need to create an application that would display entered text in one
> window and text messages received in another, with a way to switch
> between them programmatically, and both would need to be contained in
> one larger application window.  Is there a way to do this with basic
> Linux libraries, or would using Qt be the best option?  I am kind of new
> to this, so samples would be helpful, too.
> Thanks a bunch.
>
> Nick
>
>


------------------------------

From: Paul Fox <[EMAIL PROTECTED]>
Subject: Re: keyboard with additional function keys
Crossposted-To: 
comp.os.linux.development,comp.os.linux.development.system,comp.os.linux.hardware
Date: Thu, 13 Jul 2000 17:37:28 GMT

In comp.os.linux.hardware LY <[EMAIL PROTECTED]> wrote:
: I have a keyboard with about 30 additional function keys. If I press one key

have you looked at the man pages for loadkeys, dumpkeys, showkey, and
keytables?

have you looked in /var/log/messages (i think) for reports of unknown
scancodes after using your keyboard?

i was able to load up values for a similar keyboard that i have.  (though
mine wasn't producing doubles like yours is -- mine were simply not
understood.)

paul
=---------------------
  paul fox, [EMAIL PROTECTED]

------------------------------

From: Markus Kossmann <[EMAIL PROTECTED]>
Subject: Re: Library for I/O Base Adress access wanted
Date: Thu, 13 Jul 2000 18:36:36 +0200

Wolfgang Fritzsche wrote:
> 
> Hi all,
> 
> just searching for a library to manipulate, for example, the printer
> port at 0x378 or an 8-bit I/O interface card
> 
Did you read the Linux I/O port programming mini-HOWTO?
--
Markus Kossmann                                    
[EMAIL PROTECTED]

------------------------------

Reply-To: "Scott Clough" <[EMAIL PROTECTED]>
From: "Scott Clough" <[EMAIL PROTECTED]>
Subject: Re: Problems porting C app from HP UNIX to LINUX
Date: Thu, 13 Jul 2000 18:00:15 GMT

"Kaz Kylheku" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Thu, 13 Jul 2000 14:14:04 GMT, Scott Clough <[EMAIL PROTECTED]>
> wrote:
> >The most frequent crasher I encounter with ported code is someone
> >fclosing a file which is already closed.  On some Unix's (and Windows)
> >this is either ignored or returns an error (invalid file descriptor),
> >but on Linux it seg faults
>
> A FILE * is not a file descriptor, but a pointer to an object in your
> program's address space.

You are correct, I used the low-level call terminology which doesn't apply
to fclose.

>
> >Seems fairly drastic to me...
>
> What is so drastic about it? It is undefined behavior. It would not even
> occur to me to fclose something twice.
>

Of course you wouldn't INTENTIONALLY do it - I was responding to a post that
pointed out some _bugs_ in some code, and I was trying to be helpful by
pointing out _bugs_ I've seen in the past that cause seg faulting on Linux.


> The fclose function deallocates the stream object that you pass to it.
> It is analogous to calling free on a memory that was created with
> malloc.
>
> Do you have any idea what it might cost in order to validate each FILE *
> pointer on every call to the stdio library, so that less ``drastic''
> behavior could be provided?

Oh, come on.  Other, very heavily used stdio implementations do a bit of
error checking so they don't _crash_.  When you're doing file i/o, a few
durn pointer checks won't make a nit of difference to performance.  That's
the kind of frugal thinking that got UNIX case-dependent file names (just
imagine how many cycles we'd burn if we had to do a case-independent check
for every open? GASP!)
>
> I think that the code which fclosed twice only worked by fluke on the
> original platforms from which the program was ported. What probably
> happened was that on the second call to fclose, the memory of the FILE
> object was still there, and contained the file descriptor number. The
> fclose function applied the close system call to that descriptor which
> returned EBADF, and so the function bailed.
>
Or, maybe it checked the list of open files it has to keep for an
fcloseall() function to work... and did nothing but return a meaningful
error (which was, unfortunately, subsequently ignored).

> Imagine if that memory had been overwritten in such a way that the file
> descriptor field referred to a valid file descriptor. The close would
> then blow away some unrelated file, report success and fclose would then
> proceed to deallocate the deallocated memory a second time.
>
Yep, that's a pretty gruesome scenario.  Good argument for code that
validates a few parameters.

> --
> #exclude <windows.h>
>
Ohhh, now I see...



------------------------------

From: Martin Kroeker <[EMAIL PROTECTED]>
Subject: Re: Problems porting C app from HP UNIX to LINUX
Date: Thu, 13 Jul 2000 17:47:40 GMT

Some things to look for:

- use of uninitialized variables
- fclosing files that are already closed
- use of strcmp() and friends with NULL pointers - on HPUX, these are
  valid and treated as pointers to empty strings
- if your code does dirty tricks at the bit level (or does binary I/O),
  remember that endianness differs between 7xx series and PCs

HTH,
Martin 
-- 
Dr. Martin Kroeker, daVeg GmbH Darmstadt  CAD/CAM/CAQ  [EMAIL PROTECTED]
                      Precision Powered by Penguins

------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Subject: Re: Problems porting C app from HP UNIX to LINUX
Reply-To: [EMAIL PROTECTED]
Date: Thu, 13 Jul 2000 18:37:52 GMT

On Thu, 13 Jul 2000 18:00:15 GMT, Scott Clough <[EMAIL PROTECTED]>
wrote:
>>
>> Do you have any idea what it might cost in order to validate each FILE *
>> pointer on every call to the stdio library, so that less ``drastic''
>behavior
>> could be provided?
>
>Oh, come on.  Other, very heavily used stdio implementations do a bit of
>error checking so they don't _crash_.  When you're doing file i/o, a few
>durn pointer checks won't make a nit of difference to performance.  That's

Complete myth. I/O becomes CPU intensive when you have decent I/O hardware to
balance your computer architecture. Or when the I/O takes place to and from
buffers. Another consideration is that the CPU cycles you waste are taken from
other processes. This might not matter, or even be noticeable, when the lightly
loaded system has cycles to spare but that assumption can change.

One purpose of the stdio library is to reduce the number of system calls while
allowing the program to transfer data in more convenient units, such as
individual characters. For example, you can use the getc function to obtain
individual characters from a stream; most of these calls fetch a byte from a
buffer. Every few thousand operations results in a relatively expensive system
call that refills the buffer.  Would you validate the FILE * pointer in each
call to getc?  In a procedure like the lexical analyzer of a compiler, that
could drastically impact the performance.

I don't think that correct programs should be penalized by extra bloat that
compensates for incorrect programs.  I'd welcome such checks in a special
debugging version of the library, not in a production version.

Unconditional validation only makes sense between protection domains where
security and stability of the overall system is at stake, like at
process-kernel or machine-world boundaries.

>the kind of frugal thinking that got UNIX case-dependent file names (just
>imagine how many cycles we'd burn if we had to do a case-independent check
>for every open? GASP!)

Case sensitivity is better than locale-dependent behavior.  What do you do when
two filenames are the same name in one character encoding, but not in another?

Also, how do you handle a file set downloaded from a platform that
is case sensitive in the naming of *its* files?

With a case sensitive system, you can be careful about naming when preparing
files that are to be used on another platform.  In accepting files from other
platforms, you don't have a problem with case.

A system that silently maps two distinct filenames to the same data unit is
brain-damaged from a cross-platform interoperability point of view.

-- 
#exclude <windows.h>

------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Subject: Re: Problems porting C app from HP UNIX to LINUX
Reply-To: [EMAIL PROTECTED]
Date: Thu, 13 Jul 2000 18:39:32 GMT

On Thu, 13 Jul 2000 17:47:40 GMT, Martin Kroeker <[EMAIL PROTECTED]> wrote:
>Some things to look for:
>
>- use of uninitialized variables
>- fclosing files that are already closed
>- use of strcmp() and friends with NULL pointers - on HPUX, these are
>  valid and treated as pointers to empty strings
>- if your code does dirty tricks at the bit level (or does binary I/O),
>  remember that endianness differs between 7xx series and PCs

Here is another: on some traditional UNIX compilers, string literals
are writable. Writing to a string literal is undefined behavior and
on Linux results in an access violation.

-- 
#exclude <windows.h>

------------------------------

From: John Gluck <[EMAIL PROTECTED]>
Crossposted-To: gnu.gcc
Subject: Re: Where is <limits>?
Date: Thu, 13 Jul 2000 14:47:58 -0400

Christopher Prosser wrote:

> Hi Folks,
>   I'm trying to compile some stuff under a standard RedHat 6.2 dist and I
> can't find <limits>. It doesn't seem to be anywhere on my disk as doing a
> find and looking in all the usual places doesn't turn it up. There is a
> check-in comment at the gcc/egcs site that states this file was rolled into
> libstdc++3 and was rolled into gcc 2.80 (something). The 6.2 dist comes with
> 2.91.66.
>   Any ideas?
> -chris prosser

I think you are looking for limits.h.
If everything is installed OK, you should find it in one of the standard
include dirs.
On my Sun (running Solaris) it's in /usr/include/limits.h.

--
John Gluck  (Passport Kernel Design Group)

(613) 765-8392  ESN 395-8392

Unless otherwise stated, any opinions expressed here are strictly my own
and do not reflect any official position of Nortel Networks.




------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Crossposted-To: gnu.gcc
Subject: Re: Where is <limits>?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 13 Jul 2000 18:53:55 GMT

On Thu, 13 Jul 2000 14:47:58 -0400, John Gluck <[EMAIL PROTECTED]>
wrote:
>Christopher Prosser wrote:
>
>> Hi Folks,
>>   I'm trying to compile some stuff under a standard RedHat 6.2 dist and I
>> can't find <limits>. It doesn't seem to be anywhere on my disk as doing a
>> find and looking in all the usual places doesn't turn it up. There is a
>> check-in comment at the gcc/egcs site that states this file was rolled into
>> libstdc++3 and was rolled into gcc 2.80 (something). The 6.2 dist comes with
>> 2.91.66.
>>   Any ideas?
>> -chris prosser
>
>I think you are looking for limits.h

Or perhaps he is just missing the c from the C++ equivalent, <climits>.

-- 
#exclude <windows.h>

------------------------------

From: John Gluck <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.system
Subject: Re: sizeof() in gcc
Date: Thu, 13 Jul 2000 14:54:40 -0400

Norm Dresner wrote:

> I'm getting a "parse error" on the (simplified) line
>     #if    ( sizeof(int) ) != 4
>
> Is this really illegal?

Yes

>
>
> Is there any other way to do size comparisons in the pre-processor?
>
>     Norm

Check the limits.h file...
It should give you enough info to determine the number of bytes for an
int and most anything else as well.


--
John Gluck  (Passport Kernel Design Group)

(613) 765-8392  ESN 395-8392

Unless otherwise stated, any opinions expressed here are strictly my own
and do not reflect any official position of Nortel Networks.




------------------------------

From: Danny Tran <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.system
Subject: linking error
Date: Thu, 13 Jul 2000 12:04:20 -0700

Hi  I'm using RedHat 6.2 for the x86.  I'm using a linker script which
is just the default script with some extra sections added.  The default
script is elf_i386.  I've made no changes for the elf program headers.
The documentation states the program headers should be handled
automatically.

The flags used to link:

ld \
  -m elf_i386 \
  -lc \
  -lm \
  -lpthread \
  -warn-section-align \
  -warn-common \
  -warn-constructors \
  -t \
  --verbose

The error:
/usr/bin/ld: warning: changing start of section .note.ABI-tag by 1 bytes
/usr/bin/ld: warning: changing start of section .gnu.version by 1 bytes
/usr/bin/ld: warning: changing start of section stbbram by 4 bytes
/usr/bin/ld: warning: changing start of section .plt by 1 bytes
/usr/bin/ld: warning: changing start of section .text by 12 bytes
/usr/bin/ld: warning: changing start of section .rodata by 10 bytes
/usr/bin/ld: warning: changing start of section .bss by 28 bytes
/usr/bin/ld: warning: changing start of section .nocache by 24 bytes
/usr/bin/ld: emerald.elf: Not enough room for program headers (allocated
6, need 7)
/usr/bin/ld: final link failed: Bad value

Any thoughts?

Thanks
-Danny


------------------------------

From: [EMAIL PROTECTED] (David T. Blake)
Subject: Re: Problems porting C app from HP UNIX to LINUX
Date: 13 Jul 2000 15:30:46 GMT
Reply-To: [EMAIL PROTECTED]

Scott Clough <[EMAIL PROTECTED]> wrote:
> The most frequent crasher I encounter with ported code is
> someone fclosing a file which is already closed. On some Unix's
> (and Windows) this is either ignored or returns an error (invalid
> file descriptor), but on Linux it seg faults. Seems fairly
> drastic to me...

There can be substantial problems with size_t variables. Often people
ASSUME its underlying type: on Linux they will often cast it to int, on
64-bit Unices to long. size_t is unsigned long (64 bits) on 64-bit
systems, and 32 bits (unsigned int) on x86 Linux. size_t is used quite a
bit for return values from sizeof, and for operands to malloc - things
very likely to cause a seg fault.


-- 
Dave Blake
[EMAIL PROTECTED]

------------------------------

From: John Gluck <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.system
Subject: Re: sizeof() in gcc
Date: Thu, 13 Jul 2000 15:06:59 -0400

Villy Kruse wrote:

> On Mon, 10 Jul 2000 16:37:15 GMT, Norm Dresner <[EMAIL PROTECTED]> wrote:
> >I'm getting a "parse error" on the (simplified) line
> >    #if    ( sizeof(int) ) != 4
> >
> >Is this really illegal?
> >
> >Is there any other way to do size comparisons in the pre-processor?
> >
> >    Norm
> >
> >
>
> You can write it as a proper C conditional statement.  Any decent compiler
> will recognize the condition is always false/true, whatever the case may
> be and eliminate the dead code.
>
>  if ( sizeof(int) != 4 ) {
>         alternative one;
>  } else {
>         alternative two;
>  }
>
> Read "decent compiler" as one that does eliminate dead code.
>
> Villy

That solution only works well if there are one or two spots where you're
trying to include or exclude code.
One should also not rely on the quality of a compiler where possible...

There is this nifty file called "limits.h" which is architecture- and
OS-dependent.
It contains all sorts of useful things, for example a define for INT_MAX.
The file was specifically designed to keep people from turning into
contortionists to do very simple things.

--
John Gluck  (Passport Kernel Design Group)

(613) 765-8392  ESN 395-8392

Unless otherwise stated, any opinions expressed here are strictly my own
and do not reflect any official position of Nortel Networks.




------------------------------

From: h0l0gRaM <[EMAIL PROTECTED]>
Subject: link errors in Alpha
Date: Thu, 13 Jul 2000 22:28:12 +0300

I'm trying to compile Midnight Commander, mc-4.5.42,
but there's a little problem when ld tries to link.
Here's the error:

/usr/bin/ld: plain-gmc: Not enough room for program headers (allocated
6, need )
/usr/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status

What does that mean?
-- 
Too often I find that the volume of paper expands to fill the available
briefcases.
                -- Governor Jerry Brown

------------------------------

From: Christopher Plant <[EMAIL PROTECTED]>
Subject: Re: Help
Date: Thu, 13 Jul 2000 20:46:49 +0100

Keith Brown wrote:

> Christopher Plant wrote:
>
> > I'm 16 and have taught myself C to the level of malloc/free linked lists
> > fifo ,etc and want help!  Can anyone recommend a good book describing
> > socket routines and other linux tasks.  I'm trying to write a small
> > messaging system for my win98/linux network.  HELP PLEASE!
> >
>
> I'm sure most people would recommend a few books by Richard Stevens:
>
> "Advanced Programming in the UNIX Environment"
> http://www.kohala.com/start/apue.html
> "UNIX Network Programming Volume 1"
> http://www.kohala.com/start/unpv12e.html
> "UNIX Network Programming Volume 2"
> http://www.kohala.com/start/unpv22e/unpv22e.html
>
> These are probably the reference standards, but they are not cheap.
>
> I also have a book by John Shapley Gray:
>
> "Interprocess Communications in UNIX: The Nooks & Crannies"
> http://www.phptr.com/ptrbooks/ptr_0138995923.html
>
> This is very good if you're only interested in the subject material and is
> a little cheaper, at least it was...
>
> > P.S Anyone else who has similar experience on windoze and would write a
> > windoze client would be a GOD in my eyes!
>
> Well, if you use Berkeley sockets for the Linux client, the code is already
> mostly portable to Windows using Windows Sockets. It will only require a
> few minor changes and these can be #ifdef'ed so you can work from the same
> code base.
>
> Beware the First Commandment...:-)
>
> --
> Keith Brown
> [EMAIL PROTECTED]
> web.wt.net/~bahalana

Thanx.  Bit scared of windoze after I wrote a small database system on
linux and it fucked my dad's windoze box up!


------------------------------

Reply-To: "Scott Clough" <[EMAIL PROTECTED]>
From: "Scott Clough" <[EMAIL PROTECTED]>
Subject: Re: Problems porting C app from HP UNIX to LINUX
Date: Thu, 13 Jul 2000 19:51:15 GMT

"Kaz Kylheku" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Thu, 13 Jul 2000 18:00:15 GMT, Scott Clough <[EMAIL PROTECTED]>
> wrote:
> >>
> >> Do you have any idea what it might cost in order to validate each
> >> FILE * pointer on every call to the stdio library, so that less
> >> ``drastic'' behavior could be provided?
> >
> >Oh, come on.  Other, very heavily used stdio implementations do a bit of
> >error checking so they don't _crash_.  When you're doing file i/o, a few
> >durn pointer checks won't make a nit of difference to performance.  That's
>
> Complete myth. I/O becomes CPU intensive when you have decent I/O
> hardware to balance your computer architecture. Or when the I/O takes
> place to and from buffers.
You're still comparing the effects of a few cycles at the start of a system
call to moving many bytes.  Or what should be many bytes... see below, after
your paragraph on getc...

> Another consideration is that the CPU cycles you waste are taken from
> other processes. This might not matter, or even be noticeable, when the
> lightly loaded system has cycles to spare but that assumption can change.

On a pre-emptive multitasked system, the cycles I waste are clearly not
taken from other processes, or I could just code an infinite loop and slow
the system to a crawl.

>
> One purpose of the stdio library is to reduce the number of system
> calls while allowing the program to transfer data in more convenient
> units, such as individual characters. For example, you can use the getc
> function to obtain individual characters from a stream; most of these
> calls fetch a byte from a buffer. Every few thousand operations results
> in a relatively expensive system call that refills the buffer.  Would
> you validate the FILE * pointer in each call to getc?  In a procedure
> like the lexical analyzer of a compiler, that could drastically impact
> the performance.

NOW you're talking inefficient code.  I once used a function written by
someone else, which merely took a FILE* as input and pumped it out to
stdout.  It did so character by character, which I know was easy for the
programmer.  Problem is, even on a fast Linux box, large files were
introducing bothersome delays.  And yes, the file operations were
buffered by stdio, but that wasn't the problem - the problem was a
gazillionteen function calls.  A 1k buffer and a few code tweaks later,
it's notably faster, and with the calls down by 2000x, sure, go ahead
and validate my input.

>
> I don't think that correct programs should be penalized by extra bloat
> that compensates for incorrect programs.  I'd welcome such checks in a
> special debugging version of the library, not in a production version.

Hmmm, maybe we can see eye-to-eye on that one.  I know the Mac wraps its
debug lib pretty heavily with all sorts of checks, to good effect.

>
> Unconditional validation only makes sense between protection domains where
> security and stability of the overall system is at stake, like at
> process-kernel or machine-world boundaries.
>
> >the kind of frugal thinking that got UNIX case-dependent file names (just
> >imagine how many cycles we'd burn if we had to do a case-independent
> >check for every open? GASP!)
>
> Case sensitivity is better than locale-dependent behavior.  What do you
> do when two filenames are the same name in one character encoding, but
> not in another?
>

Unicode.  But seriously, filesystems just set down locale-comparison rules,
and give you functions for dealing with them.  It's a lot more rare than
just getting the case wrong...

> Also, how do you handle a file set downloaded from a platform that
> is case sensitive in the naming of *its* files?
>
> With a case sensitive system, you can be careful about naming when
> preparing files that are to be used on another platform.  In accepting
> files from other platforms, you don't have a problem with case.
>
The blame for both of these issues lies squarely with the fact that there
are two ways of doing it.  The issues you bring up don't point to a clear
winner... I would blanch at any serious distribution that contained two
distinct files in the same directory that differed only in case.

> A system that silently maps two distinct filenames to the same data
> unit is brain-damaged from a cross-platform interoperability point of
> view.
Hmmm, I've had few troubles sharing filenames between Mac and Windows...
Oops, I forgot about your "#exclude <windows.h>" again.

One more thing I suspect we agree on - the other post in this thread that
indicated that strcmp on HPUX was validated, and made null ptrs empty
strings - Whoa!  string functions are one place I would NOT waste cycles (as
they are more likely to deal with numerous small amounts of data) and I
certainly wouldn't 'help' the programmer by hiding his/her mistake behind a
spontaneously generated (char *)0.  Give me a core dump here.

ps.  I recently had the misfortune of typing a NULL instead of a (char*)0 at
the end of an execl parameter list - and I got an error 14, Bad Address back
from Linux.  Gasp!  Parameter checking!  <sarcasm>Now I'll never write two
processes that execl each other a kajillion times, just think of the wasted
cycles... </sarcasm>



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.apps) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-Apps Digest
******************************
