Linux-Development-Sys Digest #983, Volume #7     Thu, 29 Jun 00 09:13:20 EDT

Contents:
  Re: a better process kill (Kaz Kylheku)
  problem with makefile??? (newbie?) ("Jeff D. Hamann")
  Re: read(2) error EINTR (Villy Kruse)
  Serial comms in C/C++/probably anything! (Jem)
  Kernel-Crash (Carsten Scholten)
  Re: Why so many collisions ? ("Tod D. Ihde")
  Number of packets received in a client server program ? ("Sasha")
  Re: xntpd not starting at boot ("Dennis Huang")
  Re: Why so many collisions ? ("Boris Pran")
  Re: Why no defrag? (M Sweger)
  Re: problem with makefile??? (newbie?) (Kurt Wall)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Subject: Re: a better process kill
Reply-To: [EMAIL PROTECTED]
Date: Thu, 29 Jun 2000 05:15:09 GMT

On Thu, 29 Jun 2000 04:26:57 GMT, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
>Having had a hardware failure that led to a hung process, I started to
>think again about better ways to handle process kills.  The hardware
>was a tape drive that lost an interrupt due to a loose SCSI cable.
>
>There are 2 things I think could be improved:
>
>1.  When a process is killed with -KILL, this should be so terminal that
>    it has no hope of ever running again (should the lost interrupt
>    somehow mysteriously re-appear).  What I'd like for -KILL to also
>    do to a process is to forcibly close all its open file descriptors
>    that are not actually the blocking ones, including disassociating
>    the process from any current directory (e.g. I should be able to
>    umount its current directory), and unmapping its program space (so
>    I can umount the filesystem it ran from).  Short of the lost I/O
>    being on a filesystem-mounted device, I should be able to get a clean
>    unmount and shutdown done.

SIGKILL does do these things. I suspect your process got hosed in an
uninterruptible sleep, in which a process cannot be woken by a signal.
The driver in question needs to be bullet-proofed against lost
interrupts; e.g., it should have a watchdog timer that will unblock a
process that has been waiting too long.

-- 
#exclude <windows.h>

------------------------------

From: "Jeff D. Hamann" <[EMAIL PROTECTED]>
Crossposted-To: gnu.gcc.help
Subject: problem with makefile??? (newbie?)
Date: Wed, 28 Jun 2000 23:35:31 -0700

For some reason I can't figure out why this makefile isn't working:



/* main.c */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main( int argc, char *argv[] )
{
  float a;

  a = 4.0 * atan( 1.0 );
  fprintf( stdout, "PI = %f\n", a );

  return 0;
}

and the makefile


# Unix/Linux makefile used for ANSI compliance testing
# and portability testing

OBJS = main.o
LIB = /usr/lib/libc.so /usr/lib/libm.so
INCLUDES = /usr/include

mainp : ${OBJS} ${LIB}
 ${CC} -o $@ ${OBJS} ${LIB}

main.o : main.c
 ${CC} -c main.c


I keep getting the error

', needed by 'main.o'.  Stop.get 'main.c

I'm running RedHat 6.0 on a 66-486 16mb ram and didn't have any problems
with makefiles on another machine. Any ideas why this error keeps happening,
no matter what the target?

Jeff.




===========================================================
Jeff D. Hamann
2601 NE Jack London Street #27
Corvallis, Oregon 97330 USA
541-740-5988
[EMAIL PROTECTED]





------------------------------

From: [EMAIL PROTECTED] (Villy Kruse)
Subject: Re: read(2) error EINTR
Date: 29 Jun 2000 07:24:45 GMT

On Wed, 28 Jun 2000 22:04:56 GMT,
                 [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:



>H. Peter Anvin <[EMAIL PROTECTED]> wrote:
>>> The SunOS thing is annoying. However, Linux, *BSD, and possibly others
>>> default to restarting system calls if you use signal() rather than
>>> sigaction(). I think the confusion about my statement of system calls
>>> being restarted by default on most modern implementations is that I was
>>> referring only to signal() and not sigaction().
>
>> Linux doesn't, unless you compile with __BSD_SOURCE.
>
>I suspect either this is incorrect, or __BSD_SOURCE is incorrectly
>turned on in the header files in some cases. I'd have to make time and
>peruse the source to be sure, but the original poster _did_ note the
>above behavior. Whether or not he defined __BSD_SOURCE himself, I have
>no idea.
>


The behaviour of signal() changed when moving from libc5 to glibc2.
The standard signal() behaviour is now BSD style, and SysV style is
available if you set the appropriate flag.

The Linux kernel can do it either way; the libc library chooses the
behaviour.

A few programs broke when linked with glibc2, because these programs
assumed that signals would interrupt read calls, and that was no longer
the case.



Villy



------------------------------

From: Jem <[EMAIL PROTECTED]>
Subject: Serial comms in C/C++/probably anything!
Date: Thu, 29 Jun 2000 09:14:30 +0100

Hi guys,

I'm developing an application that takes bytes off a serial port and 
throws them out of a socket interface onto a LAN. All is well and good, 
BUT how do I make sure that the bytes are "untampered" with, i.e. \r 
hasn't been removed or changed into \n or \0, and ALL control codes will 
pass through?

There are loads of termios settings, I realise, but it would appear that 
\r will be affected all the time by things like what column the cursor 
is in, etc... Am I reading the manuals wrong? Is there a nice 
straightforward way of getting each /dev/ttyS0 byte, byte by byte?

Currently I'm using open() and read() to get the bytes... Maybe I need to 
go to a lower level somehow? I mean, transferring binary data over a 
com port must be possible; it's just that the books seem to approach it 
from the point of view of talking to headless server devices and the like.

The options I have come up with so far would be:

c_iflag :       IGNBRK
                ~ISTRIP
                ~INLCR
                ~IGNCR
                ~ICRNL

c_oflag :       OPOST
                ~ONOCR
                ~OCRNL
                ~ONLRET

c_cflag :       CLOCAL          
                CS8             8 bit
                ~CSTOPB
                ~HUPCL

Do I need ALL of these constants in termios or am I going about it all 
wrong?

Thanks for any help,

Jeremy

------------------------------

From: Carsten Scholten <[EMAIL PROTECTED]>
Subject: Kernel-Crash
Date: Thu, 29 Jun 2000 10:13:33 +0200

Hi!

I don't know if someone here can help me. My kernel crashes after about
3-4 weeks, every time on a Thursday morning, with messages similar to
those below.
I think it may be something to do with the file system.
I'm using 2.2.14 from a SuSE 6.4 distribution.

Maybe someone can help me with this problem, or just give me some
information on where to start searching for the failure.

Thanks in advance,

Carsten Scholten


Jun 29 00:00:00 localhost /USR/SBIN/CRON[32523]: (root) CMD ( rm -f
/var/cron/lastrun/cron.daily) 
Jun 29 00:00:12 localhost su: (to nobody) root on none
Jun 29 00:00:12 localhost PAM-unix2[32708]: session started for user
nobody, service su 
Jun 29 00:00:36 localhost kernel: Negative d_count (-1) for
3.9.16/s3virge5.html
Jun 29 00:00:36 localhost kernel: Unable to handle kernel NULL pointer
dereference at virtual address 00000000
Jun 29 00:00:36 localhost kernel: current->tss.cr3 = 01728000, %cr3 =
01728000
Jun 29 00:00:36 localhost kernel: *pde = 00000000
Jun 29 00:00:36 localhost kernel: Oops: 0002
Jun 29 00:00:36 localhost kernel: CPU:    0
Jun 29 00:00:36 localhost kernel: EIP:    0010:[dput+330/352]
Jun 29 00:00:36 localhost kernel: EFLAGS: 00010292
Jun 29 00:00:36 localhost kernel: eax: 00000032   ebx: c19f4a80   ecx:
0000001d   edx: 0000001e
Jun 29 00:00:36 localhost kernel: esi: ffffffff   edi: c3a5e220   ebp:
000001cd   esp: c6e37d8c
Jun 29 00:00:36 localhost kernel: ds: 0018   es: 0018   ss: 0018
Jun 29 00:00:36 localhost kernel: Process find (pid: 32709, process nr:
67, stackpage=c6e37000)
Jun 29 00:00:36 localhost kernel: Stack: c454f760 c19f4ae0 c19f4a80
c189ad60 c6e37dcc c0144d02 00000802 00004918 
Jun 29 00:00:36 localhost kernel:        00001000 c0131c63 c19f4a80
c189ad60 00000002 c48b2660 00000000 c02a24b0 
Jun 29 00:00:36 localhost kernel:        00000fff c6e37e1c c1efe240
00000002 00000001 00000fff c02a24b0 00000fff 
Jun 29 00:00:36 localhost kernel: Call Trace: [inode_getblk+74/420]
[prune_dcache+263/312] [try_to_free_inodes+190/260] [__brelse+19/100]
[cprt+9024/38422] [grow_inodes+32/428] [get_new_inode+197/304] 
Jun 29 00:00:36 localhost kernel:        [iget+103/116]
[ext2_lookup+90/140] [real_lookup+91/180] [update_process_times+98/108]
[lookup_dentry+304/504] [getname+95/156] [__namei+49/104]
[do_8259A_IRQ+142/156] 
Jun 29 00:00:36 localhost kernel:        [sys_newlstat+20/116]
[do_IRQ+65/72] [common_interrupt+24/32] [system_call+52/64]
[startup_32+43/285] 
Jun 29 00:00:36 localhost kernel: Code: c7 05 00 00 00 00 00 00 00 00 83
c4 10 5b 5e 83 c4 14 c3 8d

------------------------------

From: "Tod D. Ihde" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.networking
Subject: Re: Why so many collisions ?
Date: Thu, 29 Jun 2000 02:37:32 -0500

Your problems could also be caused by one of the cards erroneously going into
full duplex mode (or the hub doing that, for that matter)... The most common
symptom of this is a high collision rate.

You might want to boot a DOSish OS & force the cards to be half duplex & see if
that solves it (do them 1 at a time, so that if it DOES fix it, you know which
card is the offender). Also, try different cables - they can muck things up as
well (esp. when working w/ full duplex cards). Also, try using a crossover cable
between the 2 machines (thereby taking the hub out of the network) and see if
that cures it as well (you'll have to do an ifconfig to see the collisions; no
col. lights on the cards. :( )

 Tod.


Cedric Blancher wrote:

> "Boris Pran" <[EMAIL PROTECTED]> wrote in the message news:
> 8inape$c37$[EMAIL PROTECTED]
> > I understand your points, except my hub is 10MB and 10/100MB card is in
> 10MB
> > mode, so no need to slowing it down in that part.
> >
> > Having it both on 10MB does the slower computer still generates collisions
> > to slow down the connection or there is more sophisticated way than
> > generating collisions?
>
> If both cards are in 10Mbps mode, there's no need any more for a slow down.
>
> > I can understand that the slow computer could be the bottleneck but I
> would
> > expect that some kind of acknowledgement would be done on the protocol or
> > even higher level - not flooding the slower computer with packets and
> > slowing down complete network.
>
> No. Network speed is up to ethernet layer to control.
>
> But I don't think that your slowest computer could be flooded by a 10Mbps
> network.


------------------------------

From: "Sasha" <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.linux.networking,comp.os.linux.development.apps,comp.os.linux.misc
Subject: Number of packets received in a client server program ?
Date: Thu, 29 Jun 2000 08:55:41 GMT

Hi all

I have a client server program. I am interested to see how many successive
packets I have received and how much time it takes to receive a packet, using
tcpdump.

I want to see this result when I am running my program on one computer over
the loopback (sending and receiving data on my own computer).

I also want to see the result when I am running my program on two computers
(running the client on one computer and the server on another).

Please tell me what command I should use to see the time and the number of
packets received or sent by the client or server.

Thanks in advance, Sasha




------------------------------

From: "Dennis Huang" <[EMAIL PROTECTED]>
Subject: Re: xntpd not starting at boot
Date: Thu, 29 Jun 2000 17:05:00 +0800

Dear Brad,

    try changing the file name "K10xntpd" to "S90xntpd" in
/etc/rc.d/rc5.d/ if you start X at boot-time, or in
/etc/rc.d/rc3.d/ if you use console mode.

yours, Dennis

"Brad Clawsie" <[EMAIL PROTECTED]> wrote in message
news:8j11uf$uh2$[EMAIL PROTECTED]...
> Hi.
>
> I am having a problem with the standard RH6.2 xntpd package. It does not
> start xntpd at boot time, even though /etc/rc.d/init.d contains the
> xntpd script, which I am able to run manually like
>
> # sudo /etc/rc.d/init.d/xntpd start
>
> after the machine boots.
>
> /var/log/boot.log and /var/log/messages do not show any boot time
> activities regarding xntpd.
>
> Anything obvious I am missing?
>
> Thanks in advance for reading this and any solutions you may have.
>
> Brad Clawsie
>
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.



------------------------------

From: "Boris Pran" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.networking
Subject: Re: Why so many collisions ?
Date: Thu, 29 Jun 2000 12:35:32 +0200

I did everything you said except the crossover cable, but later on I posted a
message explaining that my problems with collisions are application
dependent - which is more than confusing to me.

Here is a copy of that message - check it out and comment.

Boris
============================================================================
=================================================

Yes, it seems like a long message but I couldn't make it any shorter without
dropping anything.
So, once upon a time...

A few days ago I posted a message describing my puzzlement about the many
collisions generated when connecting from one Linux box to another as an
X terminal.

I first thought that the network cards were the problem (half/full duplex), or
the big difference in speed between the client and the server on the network,
but...

After a few days of experimenting, here is what I found out, and I have to
say that the result is pretty confusing and unexpected to me.

Everything stated below was occurring regardless of the server / client
combination or the network card setup / combination. I had three different
computers (a dual Celeron 433 with a D-Link DFE530TX, a Compaq PIII 450 with
an Intel PRO 10/100, and a 486 with some NE2k PCI clone) used in different
roles.
All three computers ran RedHat 6.2 Linux with the same versions of X and
kernel - which was maybe wrong, but I didn't have any other distribution to
make the group less homogeneous.

* When I connect from the client to the server using, e.g., Samba and do some
heavy copying, collisions are generated only here and there - nothing
significant, I would say.

* To my big surprise, when I connect as an X terminal, and especially if the
application I am running has a graphical display (like Adobe Acrobat Reader
or VMware), the number of collisions is just enormous - it goes up to 40% of
the transmitted / received packets.

* All the services (except the incriminated X) that I tried generated no
significant number of collisions, if any (I guess that could have something
to do with the possibility that most of those services were not able to fill
up the 10Mb bandwidth !? - am I right !?)

Now, can some good soul tell me what the hell is going on?

I would expect that X uses sockets the same way as any other service /
application.
The kernel is supposed (at least in theory) to take care of the content being
written to the socket and safely take it to the other side (from the
application through the TCP, IP and Ethernet layers); the application doesn't
have anything to do with it - correct?!
So, how can one service, which is at the application level, generate that
many more collisions than any other?
Don't they use the same mechanism to send the data to the other side of the
socket?
What did I miss in the whole story?

============================================================================
=============================================

Tod D. Ihde <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Your problems could also be caused by one of the cards erroneously going
into
> full duplex mode (or the hub doing that, for that matter)... The most
common
> symptom of this is a high collision rate.
>
> You might want to boot a DOSish OS & force the cards to be half duplex &
see if
> that solves it (do them 1 at a time, so that if it DOES fix it, you know
which
> card is the offender). Also, try different cables - they can muck things
up as
> well (esp. when working w/ full duplex cards). Also, try using a crossover
cable
> between the 2 machines (thereby taking the hub out of the network) and see
if
> that cures it as well (you'll have to do an ifconfig to see the collisions,
no
> col. lights on the cards. :( )
>
>  Tod.
>
>
> Cedric Blancher wrote:
>
> > "Boris Pran" <[EMAIL PROTECTED]> wrote in the message news:
> > 8inape$c37$[EMAIL PROTECTED]
> > > I understand your points, except my hub is 10MB and 10/100MB card is
in
> > 10MB
> > > mode, so no need to slowing it down in that part.
> > >
> > > Having it both on 10MB does the slower computer still generates
collisions
> > > to slow down the connection or there is more sophisticated way than
> > > generating collisions?
> >
> > If both cards are in 10Mbps mode, there's no need any more for a slow
down.
> >
> > > I can understand that the slow computer could be the bottleneck but I
> > would
> > > expect that some kind of acknowledgement would be done on the protocol
or
> > > even higher level - not flooding the slower computer with packets and
> > > slowing down complete network.
> >
> > No. Network speed is up to ethernet layer to control.
> >
> > But I don't think that your slowest computer could be flooded by a
10Mbps
> > network.
>



------------------------------

From: [EMAIL PROTECTED] (M Sweger)
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Why no defrag?
Date: 29 Jun 2000 12:01:38 GMT
Reply-To: [EMAIL PROTECTED]


Does this work for all Linux filesystems, i.e. UMSDOS, ext2, MSDOS
(mtools utilities), and other Unix flavors such as Sun filesystems?

It would be nice if defrag were a universal defragmenter for all filesystems.
Do the disk drives (such as root) have to be unmounted to run the defrag?



: >> Hello.
: >> 
: >> I've seen the question posted to this group many times now, is there a
: >> defrag for linux? The conclusion is there is not. I'm sitting at my
: >> desk, thinking, why not?
: >
: >Obviously you should check this out:
: >
: >/usr/doc/LDP/sag/index.html
: >
: >Reference section:
: > TV
: >          Stephen Tweedie and Alexei Vovenko. Linux filesystem
: >          defragmenter. Available electronically from \
: >          ftp://sunsite.unc.edu/pub/Linux/system/Filesystems/defrag-0.6.t
: >          ar.gz.
: >
: >the more current ones are defrag-0.7x, hacked for easier compilation
: >under glibc2.1x version.
: >

: I'm guessing that after a defrag it'd be a good idea to rerun lilo
: since your kernel may have moved?

: -- 
: Paul Richards (Pauldoo) - http://www.pauldoo.co.uk
:       .~.
:       /V\       L   I   N   U   X
:      // \\     >Phear the Penguin<
:     /(   )\
:      ^^-^^

--
        Mike,
        [EMAIL PROTECTED]


------------------------------

From: Kurt Wall <[EMAIL PROTECTED]>
Crossposted-To: gnu.gcc.help
Subject: Re: problem with makefile??? (newbie?)
Date: 29 Jun 2000 12:53:09 GMT

In comp.os.linux.development.system Jeff D. Hamann <[EMAIL PROTECTED]> wrote:
> For some reason I can't figure out why this makefile isn't working


> OBJS = main.o
> LIB = /usr/lib/libc.so /usr/lib/libm.so
> INCLUDES = /usr/include

> mainp : ${OBJS} ${LIB}
>  ${CC} -o $@ ${OBJS} ${LIB}

> main.o : main.c
>  ${CC} -c main.c


> I keep getting the error

> ', needed by 'main.o'.  Stop.get 'main.c

You probably have embedded carriage returns (\r or ^M) at
the end of each line.  Also make sure that each command is
preceded by an actual TAB character and not spaces.
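Once the carriage returns are stripped (e.g. tr -d '\r' < Makefile > Makefile.new), the makefile can also be simplified. A sketch assuming gcc and GNU make: linking with -lm replaces the hard-coded /usr/lib/libc.so and /usr/lib/libm.so paths (libc is linked automatically), and each command line begins with a real TAB:

```makefile
CC   = gcc
OBJS = main.o

mainp: $(OBJS)
	$(CC) -o $@ $(OBJS) -lm

main.o: main.c
	$(CC) -c main.c

clean:
	rm -f mainp $(OBJS)
```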

Kurt
-- 
Genetics explains why you look like your father, and if you don't, why
you should.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
