Linux-Development-Sys Digest #114, Volume #8     Wed, 30 Aug 00 21:13:17 EDT

Contents:
  Re: client connect() fails to see SYN ACK ("Dave Rhodes")
  Re: purify and memory managers ("Paul D. Smith")
  Re: spin locks (Karl Heyes)
  Re: Linux, XML, and assaulting Windows (Christopher Browne)
  Re: Kernel panic: VFS: (Karl Heyes)

----------------------------------------------------------------------------

From: "Dave Rhodes" <[EMAIL PROTECTED]>
Subject: Re: client connect() fails to see SYN ACK
Date: Wed, 30 Aug 2000 19:35:43 -0400

Also, what is the effect of "SO_DEBUG" - does it turn on logging
or something? If so (or if connect failures are logged anyway), where
might I find the logs? Sorry for not being more familiar with Linux.

"Dave Rhodes" <[EMAIL PROTECTED]> wrote in message
news:8ok0v1$40o$[EMAIL PROTECTED]...
> Hi Developers-
>
> I am trying to develop an advanced server application that can manipulate
> IP packets prior to their being sent on a TCP connection. In a prior posting,
> Andi Kleen told me that I would have to use an AF_PACKET socket along with
> ipchains blocking to keep the kernel's TCP/IP stack from seeing the packets
> and responding. This was certainly a big help and improved things, but now I
> must implement my own TCP, which is what I am trying to do. There is still
> something wrong; here are the facts:
>
> * On the client, a typical approach is used: socket(AF_INET, SOCK_STREAM, 0)
>   along with a connect() call using the server's IP and a custom port #
>   (above 50000).
> * On the server side, if I use the system's socket/bind/listen/accept calls,
>   the connection is made; this verifies that the client is working (I think).
> * I am using a non-blocking socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL))
>   as the basis for the server. This seems to work on the 'lo' interface too.
> * The client sends the SYN and I respond with a SYN ACK, but it seems that
>   the client never sees it, as it later retransmits a SYN with the same
>   sequence number (the ipchains rule did work to block the normal TCP
>   processing on the server).
> * I am making up the server's initial sequence number using rand seeded with
>   the process id and time. I don't think I have access to the system's ISN
>   generator, but since the system is blocked from seeing these packets, I
>   don't think I need to coordinate with it, right?
> * I am sending the response out the same interface the recvfrom came in on,
>   and the same undesired behavior occurs whether the client and server are
>   on the same machine (using the 'lo' interface) or remote using 'eth0'.
> * I _think_ all my IP and TCP checksums are right; I check my outgoing
>   packet with the same code used to screen incoming packets for correctness.
>   The ack # is +1 from the received sequence number, the ports are right,
>   etc.
>
> I have done the following to try to figure out why the client doesn't see
> my response:
>
> * I have used tcpdump, iptraf, and even snoop on a LAN-connected Solaris
>   machine, and carefully gone through the packets bit by bit, comparing
>   them with the 'correct' packets that are sent by the built-in system
>   calls. Everything, including the TCP options (MSS, tstamps, wscale,
>   etc.), seems right. All of these tools 'see' the packet on the lo or
>   eth0 interface exactly as I expect to see it.
> * I have gone through the Linux 2.2 source code to try to see what might
>   prevent the (client) tcp_connect call from working
>   (in net/ipv4/tcp_{,output,input}.c) and elsewhere. There is a comment in
>   one of these files to the effect that "unfortunately ... TCP magic is
>   used..." but I couldn't trace that out exactly.
>
> Here are the only things that I think could be the problem, but I am not
> sure:
>
> * ARP/RARP is done by the system prior to the system's accept call, but the
>   ethernet addresses I am using on the outgoing packet seem right (and this
>   wouldn't affect the lo loopback anyway, right?).
> * The TCP checksum is wrong (the IP checksum is checked by snoop); this
>   would cause the packet to be dropped by the client before getting to the
>   user code, but I really think it's right (I even checked it against a
>   known-good packet from the system to make sure I get the same result).
>
> Anyway, does anyone have an idea about what might be wrong? Especially
> about things that I didn't think about? Are there other reasons that the
> client would drop an incoming SYN ACK packet?
>
> Much indebted,
> dave
>
>
>
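Since a wrong TCP checksum is high on the suspect list, one cross-check is an independent implementation of the ones-complement sum over the IPv4 pseudo-header plus TCP segment. This is only a sketch for sanity-checking, not a substitute for the poster's code:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    # Pad to an even length, sum 16-bit big-endian words,
    # then fold any carries back into the low 16 bits.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    # IPv4 pseudo-header: src addr, dst addr, zero byte,
    # protocol (6 = TCP), and TCP segment length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF
```

A useful property for debugging: computed over a segment whose checksum field already holds the correct checksum, the result is zero.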



------------------------------

From: "Paul D. Smith" <[EMAIL PROTECTED]>
Subject: Re: purify and memory managers
Date: 30 Aug 2000 20:13:43 -0400
Reply-To: [EMAIL PROTECTED]

%% [EMAIL PROTECTED] (Kaz Kylheku) writes:

  kk> There is a bounds checking patch for GCC, which instruments the
  kk> intermediate code rather than the object code.

Yes, that is also cool, but my understanding is that it only works with
arrays.  Perhaps I'm atypical, but I use pointers to allocated memory
_MUCH_ more than I use arrays.

-- 
===============================================================================
 Paul D. Smith <[EMAIL PROTECTED]>         Network Management Development
 "Please remain calm...I may be mad, but I am a professional." --Mad Scientist
===============================================================================
   These are my opinions---Nortel Networks takes no responsibility for them.

------------------------------

From: Karl Heyes <[EMAIL PROTECTED]>
Subject: Re: spin locks
Date: Thu, 31 Aug 2000 01:21:00 +0000

In article <[EMAIL PROTECTED]>, Josef Moellers
<[EMAIL PROTECTED]> wrote:
> Karl Heyes wrote:
>>
> 

...

> 
> You don't want to yield() if you can get the spinlock. Somehow the formatting
> of my code got mixed up. If I don't get the spinlock, then I yield(),
> otherwise I break out of the loop, get on with the job and release the lock.


I know what you mean; it was far too early in the morning to be reading
spinlock-type code.
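The pattern being discussed (try the lock; if it's held, yield; otherwise do the work and release) can be sketched in userspace roughly as follows, assuming a POSIX system where sched_yield is available:

```python
import os
import threading

_lock = threading.Lock()

def with_spinlock(critical_section):
    # Spin on the lock: if the non-blocking acquire fails, give up
    # the timeslice rather than burning CPU, then try again.
    while not _lock.acquire(blocking=False):
        os.sched_yield()
    try:
        return critical_section()
    finally:
        _lock.release()
```

Whether this beats a blocking primitive depends, as noted below, on how the cost of the wakeup path compares with the length of the critical section.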

> 
> Of course. You have to check whether the overhead of using a semaphore
> function (semget(2), semctl(2), and semop(2)) is in an acceptable relation to
> the duration of the critical section and the actions therein. But it's the
> same in the kernel: if the critical section is too long or you run the risk
> of switching context inside, you might just as well go to sleep() rather than
> use spinlocks.
> 

I don't know if POSIX semaphores are any different, but I always thought
semop and friends were awful.

> If there were only one solution for every problem, we'd all be using
> Microsoft products and VisualBasic.
> --

And most experienced computer people would have to be working in technical
support.

karl.



------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: alt.os.linux,comp.text.xml,comp.os.linux.misc
Subject: Re: Linux, XML, and assaulting Windows
Reply-To: [EMAIL PROTECTED]
Date: Thu, 31 Aug 2000 00:18:19 GMT

Centuries ago, Nostradamus foresaw a time when [EMAIL PROTECTED] would say:
>In article <pb_q5.545915$[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] wrote:
>> Centuries ago, Nostradamus foresaw a time when [EMAIL PROTECTED]
>> would say:
>> >Sure you could use xml, as long as your install program can write it.
>> >It would be equivalent to the registry in WinX or the assorted /etc files
>> >(and more) in *x.  But these mechanisms work (ugly as they may be in
>> >their own unique ways).  Why are you trying to fix the part of software
>> >installation & configuration that isn't broken?
>>
>> Indeed.
>>
>> What the world could use is some Better Tools for managing /etc files.
>>
>> For that purpose, I find I very much like cfengine
>>    <http://www.iu.hioslo.no/cfengine/>,
>> which provides a rule-oriented system with operators for setting up
>> directory links, modifying text-based config files (which is _very_
>> nice for modifying things like /etc/hosts, /etc/fstab, and such),
>> copying files into place, and Lots Of Other Stuff.
>>
>> There would be _some_ merit to creating an XML or SGML DTD to describe
>> cfengine rules, thereby allowing cfengine configuration files to be
>> managed using the fabled "generic XML editing tools," and validated
>> before being dropped into place to give at least some _limited_
>> guarantees of good behaviour.
>>
>> That would essentially amount to things like:
>>
>> <filestatusrules>
>>    <filestatusrule>
>>     <filename> /etc/printcap </filename>
>>     <mode> 644 </mode>
>>     <owner> root </owner>
>>     <action> fixplain </action>
>>    </filestatusrule>
>>    <!-- replacing  "/etc/printcap m=644 o=root action=fixplain" -->
>>    <filestatusrule>
>>    <filename> /usr/sbin/sendmail </filename>
>>    <mode> 755 </mode>
>>    <owner> root </owner>
>>    <action> fixplain    </action>
>>    </filestatusrule>
>>    <!-- replacing  "/usr/sbin/sendmail m=755 o=root action=fixplain" -->
>> </filestatusrules>
>> <editfiles>
>> <editfile>
>>   <filename>/etc/apt/sources.list </filename>
>>   <appendifnosuchline> deb file:/brownes/knuth/debianstuff unstable main
>>   </appendifnosuchline>
>>   <appendifnosuchline> deb http://alpha.onshore.com/debian local/
>>   </appendifnosuchline>
>>   <appendifnosuchline> deb http://hops.harvestroad.com.au/ debian/
>>   </appendifnosuchline>
>> </editfile>
>> <editfile>
>> <filename> /etc/hosts </filename>
>> <appendifnosuchline> 192.168.1.5     knuth.brownes.org       knuth cache
>> </appendifnosuchline>
>> <appendifnosuchline> 192.168.1.1     dantzig.brownes.org     dantzig
>> </appendifnosuchline>
>> <appendifnosuchline> 192.168.1.7     godel.brownes.org       godel
>> </appendifnosuchline>
>> </editfile>
>> </editfiles>
>>
>> Mind you, the existing cfengine syntax _works_, which means that it
>> would be likely to take some convincing to "force" anyone to move over
>> to using an XML parser for this.
>
>The cfengine does indeed cover much of the ground we need to cover.
>And you have made a very insightful observation in that XML could be
>used to describe in general what a software component requires.  But I
>do not think the right approach would be to simply write cfengine
>syntax rules directly into the XML.  Instead, why not render the XML
>into cfengine rules?
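The "render the XML into cfengine rules" idea is easy to prototype. A minimal sketch against the hypothetical elements used in the example above (none of this is a real DTD or standard), turning each <filestatusrule> back into a one-line cfengine entry:

```python
import xml.etree.ElementTree as ET

def render_filestatus(xml_text):
    # Emit one cfengine-style line per <filestatusrule>, e.g.
    # "/etc/printcap m=644 o=root action=fixplain".
    root = ET.fromstring(xml_text)
    rules = []
    for rule in root.iter("filestatusrule"):
        get = lambda tag: rule.findtext(tag, "").strip()
        rules.append("%s m=%s o=%s action=%s" %
                     (get("filename"), get("mode"),
                      get("owner"), get("action")))
    return rules
```

The same shape of walk would work for the <editfiles> section; the point is just that the translation is mechanical once element names are agreed on.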

Of course, the question that _really_ needs asking is why XML ever comes up
in the first place.

It sounds like what you're trying to do is to design something that
cfengine would be perfectly suited for.  

Issues that come up in system configuration include:
- Dealing with resource locking;
- Describing and setting permissions;
- Describing what links should exist, whether those be symbolic links
  or references in files (e.g. - having a host identified in
  /etc/hosts)

The primary merit of XML is that (despite being considerably _more_
difficult to parse than S-expressions) it is _somewhat_ less difficult
to parse than SGML, and resembles the HTML that only the most
pointy-haired of technical managers are incapable of coping with.  

XML buys you the ability to get a "cheap parser."  

It does not solve the other problems involved in building complex
systems, which raises the question of why XML need get involved in the
first place.

>What you need to do is simply identify the data structures that will
>need to be referenced by the cfengine.  Then use a translator to
>produce the cfengine rules.  So, in your example, those constructs such
>as machine names, ip addresses, filenames, etc. that may vary from
>system to system should instead be referenced by a XML variable.

Those constructs are intended to be the _invariants_; the whole point
of the exercise is to establish the IP addresses, hostnames and such.
They are _never_ generic.

What could vary is where the configuration info may be put, and the
cfengine language is designed to provide the ability to describe that
sort of thing.

>These variables represent decision points in how the software
>component should be rendered, and vary from machine to machine, and
>network to network.
>
>The values for these variables can be collected by prompting the user,
>or as supplied by a separate XML description.

By the way, I think you're rather confused about the nature of XML.

It isn't a language that can _have_ "variables."  It is a static
language; nothing _can_ vary, so that the notion of "variable" is
pretty nonsensical.  There are entities, consisting of elements, which
may have attributes.  A Lisp-like perspective would view this as
involving a set of "static bindings."

>The question is, why use XML if I can just write the cfengine rules
>myself?

A good question indeed.

>We have (in the cfengine) the need for the same kind of separation.
>The use of cfengine requires a large scripting base which can define
>the installation and health of a collection of systems on a network.
>The generation of these scripts is the hardest part of the cfengine
>approach.  I am not a big user of cfengine, but I am unaware of any
>standard for distributing applications that allows them to simply "drop
>into" cfengine scripts.

There are no common conventions for this, probably primarily because
people haven't seriously thought to use it in this way.

It would be a cool idea [for someone else to implement :-)] to build a
packaging tool that would use cfengine as the installation tool rather
than the random groupings of shell scripts that are often used with
RPM and DPKG.  I think this would be _reasonably_ practical, and that
a reasonable design would include several scripts:

a) One cfengine script to move files into place.  This would run only
   once, and be fairly "hard coded."

b) One, possibly written in something else, to ask the user and/or
   system any information that needs to be asked.  For a web server,
   for instance, this might involve asking what port to accept
   requests on, as well as any proxies to pass requests to.

   This script would then 'fill in blanks' to generate other scripts.

   If you ever want to redesign configuration, you'd rerun this
   script.

c) A cfengine script would then be set up to "fix up" the config
   files based on the parameters provided by b).

   This might involve filling in the HTTP port number, or fixing up
   permissions on files/directories, or indicating log rotation
   policies, or other such stuff.

   This script might be re-run as needed to clean up configuration; it
   would be a reasonably good move to rerun this on a regular
   [daily/weekly?]  basis.
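Script b) above is essentially a template filler. A trivial sketch, where the @KEY@ placeholder convention and the parameter names are made up for illustration:

```python
def fill_template(template, params):
    # Replace each @KEY@ placeholder with the user-supplied value,
    # e.g. "@HTTP_PORT@" -> "8080"; the result is the generated
    # configuration (or cfengine script) for step c).
    out = template
    for key, value in params.items():
        out = out.replace("@%s@" % key, str(value))
    return out
```

The answers themselves could come from prompting the user or from a previously saved answers file; either way the output feeds the "fix up" script in c).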

>Do you think it would help if software were defined in XML abstractions
>that could be combined and rendered into cfengine scripts in an
>automated way?  Or is there something about cfengine scripts that would
>make this too difficult.  In your XML example, the only problem I see
>is that you hard coded IP addresses, machine names, directories, and
>file names, some of which may need to vary if the description were to
>be generalized.  Some of the operations are pretty platform dependent,
>but that I think is okay.

I hard coded the IP addresses entirely intentionally; that was the
whole _point_ of the exercise.

As for "defining the software in XML abstractions," that either
doesn't make sense, if we speak in any sort of "complete" sense, or
represents something only _marginally_ useful.

In the approach suggested above, there would be a whole "horde" of
cfengine scripts of the various sorts.

There would be _some_ value in the "second variety" being presented in
a form that would make it easy to rewrite them into a more efficiently
processed form.

Thus, if we had the following sequence of "periodically-run" scripts:
  /etc/cfengine/packages/daily/apache
  /etc/cfengine/packages/daily/squid
  /etc/cfengine/packages/daily/zsh
  /etc/cfengine/packages/daily/ftpd
  /etc/cfengine/packages/daily/ftp
  /etc/cfengine/packages/daily/inetd
  /etc/cfengine/packages/daily/cfengine
  ...

There are two approaches:
a) Have cfengine run through them all individually, or
b) [Somehow, magically] join them together to generate "master.daily",
   and have cfengine run once on One Big Script.

If the "daily" things were represented in XML, or S-expressions, or
some other "readily-walkable-tree," it would be straightforward to
walk through them all and assemble that One Big Script in a somewhat
optimized order.

On the other hand, it could be just as easy, and require no
fundamentally new code, to have the main file do:

import:
   Hr00:
     /etc/cfengine/daily.master

And (daily?) rebuild daily.master via:
 "echo 'import:' > /etc/cfengine/daily.master'
 "cd /etc/cfengine; ls packages/daily >> /etc/cfengine/daily.master"

XML doesn't forcibly enter into this _at all_.
-- 
(concatenate 'string "aa454" "@" "freenet.carleton.ca")
<http://www.hex.net/~cbbrowne/lsf.html>
"DTDs are  not common knowledge  because programming students  are not
taught markup.  A markup language is not a programming language."
-- Peter Flynn <[EMAIL PROTECTED]>

------------------------------

From: Karl Heyes <[EMAIL PROTECTED]>
Subject: Re: Kernel panic: VFS:
Date: Thu, 31 Aug 2000 02:00:23 +0000

In article <[EMAIL PROTECTED]>, xavian anderson macpherson
<[EMAIL PROTECTED]> wrote:
> i am having a similar problem.
> 
> i have compiled my kernel-2.4test7 in what was originally a suse 6.4
> environment.  when compiled, it sets the device as (3,65), but when the
> system boots, it looks for (3,41) as the root device.  this may have resulted
> due to the fact that i have installed several linux-mandrake components into
> my original suse system.  i did this because i wanted a system that was more
> compliant with industry standards.  i have found out the hard way that linux
> is not a truly cohesive environment.  if it was, there wouldn't be multiple
> distributions each with their own peculiarities in their kernels and
> directories.  

linux is a kernel, only one (important) part of a system.  Also please bear
in mind that you are using a development kernel.  I'm not sure what the
state of current suse (or other distributions) is regarding 2.4.  Mandrake is
redhat based, not suse, so there will be potential pitfalls.

> 
> one of the things that i've found out is that suse uses a different file
> directory structure than some of the other distributions, and for this
> reason, i no longer recommend it to anyone interested in moving to linux from
> windows.  it is basically like moving from an overtly proprietary system, to
> one that is covertly proprietary.  for this reason, i am fairly certain
> that i will migrate to the WALNUTCREEK FreeBSD-4.0 powerpak come september
> 1st.
> 

From a user's perspective, knowledge of the directory structure layout
shouldn't be a requirement.  Look at what the LSB is doing.

> anyway, back to the main subject.  how do i set the root device correctly, so
> that when my system boots, it will find everything where it is supposed to
> be?  also, i am using the reiserfs filesystem on all of my partitions except
> my /boot partition.  what exactly does this root device refer to, and where
> do i find its description?
> 

rdev bzImage /dev/hda1

This will set up the kernel image to boot off /dev/hda1.   I don't think you
have to run lilo after this, but it won't hurt.
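As for the (3,65) vs (3,41) confusion above: the kernel reports device numbers in hexadecimal in its boot and panic messages, and 65 decimal is 0x41 hex, so the two forms may well name the same device. A quick check of the old 16-bit dev_t encoding:

```python
def old_dev_t(major, minor):
    # Old-style 16-bit dev_t: major number in the high byte,
    # minor number in the low byte.
    return (major << 8) | minor

# Major 3, minor 65 printed the way the kernel prints it (hex)
# comes out as "03:41" -- the same device, written two ways.
```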

The kernel has to know how to get things going: after initialising, the root
partition is mounted and init is invoked.


> since i mentioned the FreeBSD migration: is there any way to change my
> existing filesystem over to UFS without having to lose my data?  it occurs to
> me that if you can defragment a hard disk without losing data, you should be
> able to use the same type of procedure to change your filesystem. 
> 

depends on how complex the filesystem is, along with how much space there is.
It's easier to transfer the data elsewhere (make a backup!), recreate the
filesystem, and restore.

> if however FreeBSD can use reiserfs, then i don't need to do anything to my
> system at all!
> 

What's the license situation?

> the only other option that i can see, is if there is some way to compile
> FreeBSD kernel components into a linux kernel; or vice-versa.  i mean, is it
> possible to combine the sources of the two systems into one kernel.  adding
> only those portions which are not supported in linux that are required by
> FreeBSD?  certainly, if BSD and linux are truly open as they claim to be,
> there is no intellectual basis for the segregation of the two systems;
> especially since they are both supposed to be related.

You need to state what __you__ mean by the word open.

> 
> if there is some kind of repository of kernel modules for both linux and BSD,
> with some kind of an index of what the individual modules do, then someone
> somewhere should be able to aggregate them into a cohesive library system, so
> that you could then use a program like make xconfig to select which
> components you want from this collective library to create your kernel.
> 

There are differences between linux and freebsd with respect to, say, memory
handling or network buffers.

> i mean after all, with linux you can compile all or most of the system as
> modules; which can then be removed at a later time if you don't want that
> particular feature.  what i don't know is whether or not you have to notify
> the kernel (within its source code) that a particular feature is present,
> for a module to be usable?  or is it simply a matter of adding a module to
> /lib/modules in order for that feature to become available to the system. 
> does the module initiate notification to the kernel of its presence, or does
> the kernel have to know beforehand, from its creation, of the module's
> function?  why can't there be an open socket to the kernel, allowing any
> module to be added later at anytime, if that feature does not already exist.
> 

a combination of modprobe and /etc/conf.modules can handle many cases.
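For instance, a conf.modules fragment along these lines (driver name and option chosen purely for illustration) lets modprobe resolve a device name to a module and hand it parameters:

```
# map the network interface name to a driver module
alias eth0 3c59x
# options passed to the module when modprobe loads it
options 3c59x debug=1
```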

> if modules only have to be added to /lib/modules for the system to be able
> to use them, what would prohibit you from putting BSD modules into linux?
> after all, that's how linux is able to understand the partitions and
> filesystems of other operating systems; so why stop there?

You can do that, as long as the modules interface with the rest of the kernel
correctly.

> 
> i also want to know if there is a way to use bzip2 to make my kernel image;
> bzImage > bz2Image.  i just installed bzip2-1.0.1, so i am not familiar with
> compressing an image with bzip2.  what is the syntax for doing so? what would
> i do? is it simply a matter of `make -is dep bzip2 -z bzImage', or do i drop
> the `bz' before Image?  will whatever is responsible for determining that the
> size of the kernel is under 1024KB's, recognize that you have compressed
> the `Image' with bzip2?  so what's the deal?
> 

Stick with bzImage.  After the kernel image loads, it has to decompress itself
into memory.  (The `bz' in bzImage stands for `big zImage', not bzip2.)


karl.


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
