Re: [hackers] Re: Realtek

2003-04-01 Thread Nate Williams
  An address that works.  Without further knowledge of your laptop, it
  is impossible for me to say.  You will have to find this out by trial
  and error.  Some folks like 0xf800, others like 0x4 and
  one uses 0xd4000, but the last one I don't recommend.
 
 0xf800 seems to work on my StinkPad (still can't get the serial
 port to work though).

You have to enable it using the PS2 'DOS' command (the Windows one won't
work, for whatever reason).  Once it's properly enabled/configured, it
acts like any other normal serial port.  (It's a pain to get it working
right, since it involves dozens of reboots in order to understand what
exactly the configuration *should* be.  From memory, it wasn't as
obvious as it could have been.)



Nate


Re: making CVS more convenient

2003-03-17 Thread Nate Williams
   That's the plan for the next stage, provided that the first stage
   goes well. I'm yet to play with CVSup and see if it can be
   integrated there (as with system()) easily without making a lot
   of changes to CVS itself. Otherwise I'm afraid it's going to
   be a large amount of work to duplicate this functionality :-(
  
  Another choice is to have the commit also be made to the 'cache' if and
  only if the remote (master) repository has accepted the commit.
  
  That way, the commit is made in both repositories using the same
  algorithm, so in essence they should be in sync.
 
 Yes, makes sense.
  
   Yet another idea is to be able to make local commits with committing
   them to the central remote repository later.
  
  I'd do the reverse, since the possibility of synchronization problems
  is a huge deal.  Imagine if someone 'snuck' in and made a commit in
  the master tree after your local commit was made, but before 'later'
  occurred and your cache pushed it out to the master tree.
 
 It gets handled in the same way as now: I believe, CVS checks
 whether the checked-out version matches the top of the branch,
 and if it does not then it refuses to commit and requires you
 to make an update. So the same thing can be done for a local branch:
 check that its base version is still the top of the real branch,
 and if so then commit. Otherwise require an update/merge.

Except that it's possible that the 'local' cache is out-of-date
w/respect to the remote repository, say if someone made a commit to it
since the last 'synchronization' of the local cache.

You don't want that commit to happen, since it shouldn't be allowed:
the file is really not up-to-date w/respect to the master.  The worst
case is the committed change would conflict with the as-yet-unseen
change on the master, so allowing the local user to commit it means that
when the 'cache' attempts to commit it later, it will fail, and the
'cache code' is required to figure out how to extract the commit from
the local cache, update the local cache, re-apply the (now conflicting)
commit back to the local cache and somehow inform the user at a later
point.

*UGH*

  If you only allow the commit if it can occur locally, you're ensuring
  that the cache can't get out of date with local changes.  I tried
  something like the above (cause it was easier to implement), and it
  worked most of the time.  However, the times it didn't work it was a
  royal pain in the *ss to cleanup and get the original commit back out.
 
 Maybe I just was not clear: I think that making the commits in the
 local copy on the real top of the tree is a quite bad idea.

I think it's a good idea *IF and only IF* the commit to the remote tree
works.  That way, the local user isn't required to re-synchronize his
cached tree against the master tree, since there is a high likelihood that
after the commit the user will also want to continue working on the same
files.  If no re-synchronization occurs, as soon as a 'cvs update' is
done, the local cache must either re-synchronize itself (doing the exact
same work as if it had just done the commit), or the newly committed change
will be reverted back out, since the local cache will now be
out-of-date.

 What I want to get is some temporary versioned storage to play around with
 while I work on the code. After the code gets finished, it
 gets committed to the master repository just as it gets committed now.

What happens to the local cache *right after* the commit occurs?  In
essence, it's no longer valid right after a commit, since it's now
out-of-date with the master (it doesn't include the newly committed
changes).

   Now I have to use RCS
   locally for the temporary in-development versions of files. Would
   be nice to have a kind of a local branch which can be later committed
   as a whole - in one commit per file, or by duplicating all the
   intermediate versions with their messages.
  
  Agreed.  The downside to the above method is that it requires network
  access to make a commit.  However, it certainly simplifies the
  problem set. :) :)
 
 Well, at least the commit would get done in one batch (of course,
 unless a conflict happens).

Right, it's a step in the right direction.


Nate



Re: making CVS more convenient

2003-03-17 Thread Nate Williams
   It gets handled in the same way as now: I believe, CVS checks
   whether the checked-out version matches the top of the branch,
   and if it does not then it refuses to commit and requires you
   to make an update. So the same thing can be done for a local branch:
   check that its base version is still the top of the real branch,
   and if so then commit. Otherwise require an update/merge.
  
  Except that it's possible that the 'local' cache is out-of-date
  w/respect to the remote repository, say if someone made a commit to it
  since the last 'synchronization' of the local cache.
  
  You don't want that commit to happen, since it shouldn't be allowed:
  the file is really not up-to-date w/respect to the master.  The worst
  case is the committed change would conflict with the as-yet-unseen
  change on the master, so allowing the local user to commit it means that
  when the 'cache' attempts to commit it later, it will fail, and the
  'cache code' is required to figure out how to extract the commit from
  the local cache, update the local cache, re-apply the (now conflicting)
  commit back to the local cache and somehow inform the user at a later
  point.
  
  *UGH*
 
 Yes, this is way too complicated and error-prone. This is why I don't 
 want to change the cache without changing the master in the same way
 first.

I think we're in *violent* agreement at this point. :)

If you only allow the commit if it can occur locally, you're ensuring
that the cache can't get out of date with local changes.  I tried
something like the above (cause it was easier to implement), and it
worked most of the time.  However, the times it didn't work it was a
royal pain in the *ss to cleanup and get the original commit back out.
  
   Maybe I just was not clear: I think that making the commits in the
   local copy on the real top of the tree is a quite bad idea.
  
  I think it's a good idea *IF and only IF* the commit to the remote tree
  works.  That way, the local user isn't required to re-synchronize his
  cached tree against the master tree, since there is a high likelihood that
 
 Agreed. So the commit would essentially work as a commit plus
 resynchronization of a subset of files in the cache.

*grin*  I love it when a plan comes together.



Nate



Re: making CVS more convenient

2003-03-16 Thread Nate Williams
 The idea is to support a cache repository (the one copied to a local
 machine by CVSup or CTM) transparently. So that the reads from
 directory will go from the local cache repository (and won't
 overstrain the remote server, and will be fast too), while the commits
 and other changes will go into the remote master repository.

Good stuff.  I wanted something like this *years* ago when we first
started using CVS in FreeBSD.

 The value specified in CVSROOTCACHE is the local path to the cache
 repository. All the check-outs, updates, diffs etc. will be obtained 
 from there.  All the check-ins, tagging etc. will go into the master 
 repository specified by CVSROOT. Naturally, to see these changes
 in the cache repository, it needs to be updated by some outside
 means such as CVSup or CTM.

So, the cache doesn't automagically update itself when commits are done?
This is less useful, since oftentimes after a commit has been done the
user is still working in the same general area, so a 'cvs update' would
now give the user older files, since the read-only cache is not
up-to-date.  Thus, every time you make a commit, you must also
synchronize the cache.

If this could be done automagically, it would make the cache more
'coherent' and things much more useful.
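
For anyone who hasn't seen the patch: as I understand the proposal, a
session would look roughly like the sketch below.  The repository host
and paths here are placeholders, and CVSROOTCACHE is the variable Sergey
described above.

CVSROOT=:ext:user@master.example.org:/home/ncvs   # the master; commits go here
CVSROOTCACHE=/local/ncvs                          # CVSup'd copy; reads come from here
export CVSROOT CVSROOTCACHE

cvs checkout src/usr.bin/foo    # answered from /local/ncvs
cvs diff -u foo.c               # also answered from the local cache
cvs commit -m 'fix foo' foo.c   # goes to the master named in CVSROOT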


Nate



Re: making CVS more convenient

2003-03-16 Thread Nate Williams
   The value specified in CVSROOTCACHE is the local path to the cache
   repository. All the check-outs, updates, diffs etc. will be obtained
   from there.  All the check-ins, tagging etc. will go into the master
   repository specified by CVSROOT. Naturally, to see these changes
   in the cache repository, it needs to be updated by some outside
   means such as CVSup or CTM.
  
  So, the cache doesn't automagically update itself when commits are done?
  This is less useful, since oftentimes after a commit has been done the
  user is still working in the same general area, so a 'cvs update' would
  now give the user older files, since the read-only cache is not
  up-to-date.  Thus, every time you make a commit, you must also
  synchronize the cache.
 
 That's the plan for the next stage, provided that the first stage
 goes well. I'm yet to play with CVSup and see if it can be
 integrated there (as with system()) easily without making a lot 
 of changes to CVS itself. Otherwise I'm afraid it's going to
 be a large amount of work to duplicate this functionality :-(

Another choice is to have the commit also be made to the 'cache' if and
only if the remote (master) repository has accepted the commit.

That way, the commit is made in both repositories using the same
algorithm, so in essence they should be in sync.

This saves the overhead of running a complete synchronization of all the
files.  And, you still have the safe 'fallback' of mirroring the remote
tree, which should clean up any problems you had, and which is still
needed for any non-local modifications.
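
As a sketch of what I mean (and nothing more), a dumb wrapper around
cvs(1) could do it today.  The repository paths are placeholders, and
this glosses over the Id/committer issues that come up elsewhere in the
thread:

#!/bin/sh
# Commit to the master first; mirror the commit into the local cache
# if and only if the master accepted it.
MASTER=:ext:user@master.example.org:/home/ncvs
CACHE=/local/ncvs

cvs -d ${MASTER} commit "$@" || exit 1   # master rejected it, so stop here
cvs -d ${CACHE}  commit "$@"             # master took it, repeat it locally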

 Yet another idea is to be able to make local commits with committing
 them to the central remote repository later.

I'd do the reverse, since the possibility of synchronization problems
is a huge deal.  Imagine if someone 'snuck' in and made a commit in
the master tree after your local commit was made, but before 'later'
occurred and your cache pushed it out to the master tree.

If you only allow the commit if it can occur locally, you're ensuring
that the cache can't get out of date with local changes.  I tried
something like the above (cause it was easier to implement), and it
worked most of the time.  However, the times it didn't work it was a
royal pain in the *ss to cleanup and get the original commit back out.

 Now I have to use RCS
 locally for the temporary in-development versions of files. Would
 be nice to have a kind of a local branch which can be later committed
 as a whole - in one commit per file, or by duplicating all the
 intermediate versions with their messages.

Agreed.  The downside to the above method is that it requires network
access to make a commit.  However, it certainly simplifies the
problem set. :) :)



Nate



Re: making CVS more convenient

2003-03-16 Thread Nate Williams
  The corollary to that would be to teach cvs(1) to commit changes to
  the master repo that have been made to the local repo.  Version number
  sync would be a problem, but it'd be really cool to be able to do
  that.  With a branch mapping layer (RELENG_5_SEANC - HEAD), people
  could actually get back into the habit of using CVS as a development
  tool instead of just a way of publishing finalized work.
 
 Nate's suggestion covers the version number issue... sort of.  It
 assumes that the patches will be approved for commit to the main
 repo

This is easy to get around, b/c if the commit doesn't happen
successfully on the repo, then the commit fails, and as such it won't be
committed to the local branch either (the remote commit failed).

 and it assumes that the main repo will not get tagged in
 between.

For *most* users, this is not a problem.  My solution is for the
developer.  However, it's not intended to make the local cache a
complete mirror of the remote repository.  That is a whole 'nother
project. :)

 The main problem with that is pretty obvious, especially
 around code-freeze/code-slush, but also for anyone without a commit
 bit, or who is being mentored, and so lacks the ability to just
 commit.

Even during code-freeze, it does allow you to do everything you *need* to
do safely.  If you attempt a commit and your local cache isn't
up-to-date, then the commit will fail, and the user will have to
re-synchronize their repository.  Fortunately, this is a mostly rare
occurrence, especially if there are regularly scheduled synchronization
runs (e.g., nightly cron jobs).

 A more minor problem is that it replaced the version conflict with a
 $FreeBSD$ conflict that CVSup has to ignore.

See above.  This is mostly a non-issue as long as the versions are kept
up-to-date.  No merges will be attempted whatsoever, even if they
would not necessarily cause conflicts.

However, this is all a pipe-dream, although Sergey's work is making it
less pie-in-the-sky.

The other solution to the problem is the P4 route.  Making things so
darn efficient that there's little need to have a local mirror.  Where
this falls down is when the remote developer doesn't have a 24x7
connection to the main repository.  From what I've been told ClearCase
allows for 'mirrored read-only' repositories similar to what most of the
open-source CVS developers have been doing with sup/CVSup for years,
although it's nowhere near as efficient as CVSup at creating snapshots.




Nate



RE: making CVS more convenient

2003-03-16 Thread Nate Williams
  The other solution to the problem is the P4 route.  Making things so
  darn efficient that there's little need to have a local mirror.  Where
  this falls down is when the remote developer doesn't have a 24x7
  connection to the main repository.  From what I've been told ClearCase
  allows for 'mirrored read-only' repositories similar to what 
  most of the
  open-source CVS developers have been doing with sup/CVSup for years,
  although it's nowhere near as efficient as CVSup at creating
  snapshots.
 
 The current version of Perforce has p4proxy which caches a local copy
 of the depot files used.

Does it still require a working net link to the master repository?  When
it was originally released, I remember it being useful for slow links,
but not so good on non-existent links (i.e., airplane rides, etc.)

 What is the status of Perforce in the FreeBSD project?  Is the issue the
 absence of a p4up?  Licensing?  Inertia?

See the archives for a more thorough discussion, but I believe the
licensing is the biggest issue.  If we moved to commercial software, it
would make it much more difficult for the average developer to track our
progress.



Nate



Re: making CVS more convenient

2003-03-16 Thread Nate Williams
   Nate's suggestion covers the version number issue... sort of.  It
   assumes that the patches will be approved for commit to the main
   repo
  
  This is easy to get around, b/c if the commit doesn't happen
  successfully on the repo, then the commit fails, and as such it won't be
  committed to the local branch either (the remote commit failed).
 
 I see how you are viewing this: as a means of avoiding a full
 CVSup.
 
 I think the reason the cache was wanted was not to avoid the
 CVSup, but to allow operations *other than CVSup* to proceed
 more quickly?

Having a local 'CVS' tree mirrored already does this.  However, it's a
hassle, since every time you make a commit you have to modify the
parameters (or use an alias), and after the commit has completed you
have to resynchronize your mirrored tree, otherwise your commit will be
lost on your first 'cvs update'.
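
Concretely, the workaround today looks something like this (paths are
placeholders), which is exactly the per-commit fiddling a real cache
would remove:

CVSROOT=/local/ncvs; export CVSROOT                           # reads hit the local mirror
cvs -d :ext:user@master.example.org:/home/ncvs commit foo.c   # commits must name the master
cvsup -g -L 2 /etc/cvs-supfile                                # then refresh the mirror, or the
                                                              # commit vanishes from view on
                                                              # the next 'cvs update'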

 If so, this kind of reduces the reason for having a local cache:
 attempt locally, and then, if successful, attempt remotely.

See above.  It reduces the 'hassle' factor immensely.

 
   and it assumes that the main repo will not get tagged in
   between.
  
  For *most* users, this is not a problem.  My solution is for the
  developer.  However, it's not intended to make the local cache a
  complete mirror of the remote repository.  That is a whole 'nother
  project. :)
 
 Specifically, it's for the FreeBSD developer, not the developer
 who uses FreeBSD.  8-).

I wasn't trying to imply that the CVS mods were specifically targeted at
FreeBSD users.  For what it's worth, I have had *numerous* occasions
outside of the project where this functionality would have been helpful.

   A more minor problem is that it replaced the version conflict with a
   $FreeBSD$ conflict that CVSup has to ignore.
  
  See above.  This is mostly a non-issue as long as the versions are kept
  up-to-date.  No merges will be attempted whatsoever, even if they
  would not necessarily cause conflicts.
 
 I think this is still an issue because of the date, and because of
 the committer name.

  If the files are the same version, the committer name would also
be the same in the Id tag, even when it's committed.

 Even if the committer name is the same (in your
 scenario where the FreeBSD developer is the issue, I'll concede it
 might be, except in the mentor case), the timestamp will still shoot
 you in the foot.

CVS doesn't use timestamps.  The Id is also created at checkout time, so
its value in the database is mostly irrelevant.
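
For reference, the expanded tag in a checked-out file looks roughly like
this (the path and revision here are made up):

$FreeBSD: src/usr.bin/foo/foo.c,v 1.23 2003/03/16 18:00:00 nate Exp $

so the version, date and committer are all filled in against whatever
repository the file was checked out from.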

  The other solution to the problem is the P4 route.  Making things so
  darn efficient that there's little need to have a local mirror.  Where
  this falls down is when the remote developer doesn't have a 24x7
  connection to the main repository.  From what I've been told ClearCase
  allows for 'mirrored read-only' repositories similar to what most of the
  open-source CVS developers have been doing with sup/CVSup for years,
  although it's nowhere near as efficient as CVSup at creating snapshots.
 
 Yes, P4 solves a *lot* of the problems, except the mirroring in
 the first place.  ClearCase is nice, in its way, but you are right
 about CVSup being a much better tool for the job; that's why all
 the people who complain about it continue running it anyway... 8-).

*grin*


Nate



Re: making CVS more convenient

2003-03-16 Thread Nate Williams
   I see how you are viewing this: as a means of avoiding a full
   CVSup.
  
   I think the reason the cache was wanted was not to avoid the
   CVSup, but to allow operations *other than CVSup* to proceed
   more quickly?
  
  Having a local 'CVS' tree mirrored already does this.  However, it's a
  hassle since everytime you make a commit, you have to modify the
  parameters (or use an alias), and after the commit has completed, you
  have to resynchronize your mirrored tree otherwise your commit will be
  lost on your first 'cvs update'.
 
 Yes.  This is the main gripe you are representing here, in a
 nutshell, I think:
 
   I want to CVSup, get on an airplane to the U.S., and
be able to work like I normally work, without having
to worry about synchronization, because it will happen
automatically next time I connect up to the net.
 
 In theory, application and theory are the same, but in application,
  they are not.  8-).

Sort of.  I want to be able to CVSup, get on an airplane, create a bunch
of changes (all the while having access to logs, the ability to do diffs,
etc.), get off the airplane, dial up from my hotel, commit my changes,
and have everything *just* work w/out having to re-synchronize my tree.

Right now, I can do this, but it requires a lot of steps to get right,
and there's room for mistakes being made the entire time.  (Accidentally
committing changes to the local tree which get lost upon
re-synchronization, confusion on the part of the users, requiring
additional tools such as CVSup, configuration, etc..)

   If so, this kind of reduces the reason for having a local cache:
   attempt locally, and then, if successful, attempt remotely.
  
  See above.  It reduces the 'hassle' factor immensely.
 
 I don't think it can.  The commits you want to make are to the
 local (offline) repository copy, and you want them to be reflected
 back to the online repository, automatically.

See above.  I'm not expecting to commit when offline.  I'm expecting to
commit online, but I don't want to have to set up the CVSupd at the
remote end, ensure that any time we add new modules it's kept up-to-date,
ensure that the users don't accidentally commit to the wrong tree, etc..

I want to set things up *once* in CVS, and have it *just* work.  If they
attempt to commit but don't have a connection to the master, then CVS
will fail to allow the commit.  If they attempt to commit and their
local version does not match the version in the master, it fails.  You
get the picture.

What you described (and I deleted) was a description of what might be
nice, but what I think is nearly impossible to do correctly 100% of the
time.

 A more minor problem is that it replaced the version conflict with a
 $FreeBSD$ conflict that CVSup has to ignore.
   
See above.  This is mostly a non-issue as long as the versions are kept
up-to-date.  No merges will be attempted whatsoever, even if they
would not necessarily cause conflicts.
  
   I think this is still an issue because of the date, and because of
   the committer name.
  
    If the files are the same version, the committer name would also
  be the same in the Id tag, even when it's committed.
 
 Nope.  I commit locally as nwilliams, and then I commit on
 FreeBSD.org as nate.  Then I try to update, and the versions
 match, but the tag data in the file itself doesn't.

I couldn't commit as 'nwilliams' on the master repository, since the
master doesn't allow commits by nwilliams.  Again, when commits are
done, they are done *remotely* against the master, and then 'mirrored'
in the local cache.  However, if the master aborts the commit, it
simply won't get done locally.


Nate



Re: Trailing whitespace in FreeBSD

2003-02-11 Thread Nate Williams
 Heh, bet you didn't know that bento's predecessor was called thud.
 And we had a 'ripcord' for a while too.   I just don't remember exactly
 which machine it became.  I think it was a temporary name for the machine
 that became hub.

Don't forget gndrsh, which was freefall's replacement (and later became
freefall).



Nate




Re: Trailing whitespace in FreeBSD

2003-02-11 Thread Nate Williams
  Wow, deja-vu!
 
 Hey! I've got a GREAT idea!  I whipped up this nifty perl script and
 I can run it over the src tree to delete all the trailing whitespace!
 And even better, I can collapse tabs at the beginning of lines! What
 a great deal! That should be good for a few hundred commits!

Gofer it!  Make sure to forward your commit email to Rod when you're
done. :)



Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
 Obviously, my goal is to mitigate as much as possible - I have accepted
 that I cannot stop all DDoS - my question is, do serious people ever
 attempt to do the mitigation/load shedding with a host-based firewall (in
 this case fbsd+ipfw) ?  Or would all serious people interested in
 mitigating attacks use an appliance, like a netscreen ?

Why not use a freebsd firewall in front of the host?  That way, the
freebsd box is acting like an appliance, and thus it 'filters' out the
DDoS load and leaves the host CPU free to service the DDoS attacks
that make it past your appliance.

This is what I do, and because my pipe is fairly small and my site is
mostly unknown, the 486/66 box that I use has *way* more than enough
power to deal with the simple task of filtering packets, since it has
nothing else it needs to do.

 I will say this - 9/10 attacks that hurt me do not do anything interesting
 - in fact they are even low bandwidth (2-3 megabits/s) but they have a
 packet/second rate that just eats up all my firewall cpu and no traffic
 goes through - and as soon as the attack goes away the firewall is fine.

Is your firewall also doing the WWW hosting?  If so, then the amount of
CPU it needs is much higher than what a dedicated firewall needs.  If
it's eating up all the CPU and you're using a dedicated firewall,
methinks your rules need tweaking to 'optimize' them.  It's *very* easy
to generate firewall rules that work fine, but are far from optimal when
put under load.

 So, I am looking at putting in more sophisticated traffic shaping
 (limiting packets/s from each IP I have) and skipto rules to make the
 ruleset more efficient ... but this is going to be a lot of work, and I
 want to know if it is all just a waste because no matter how good I get at
 a freebsd firewall, a netscreen 10 will always be better ?

See above.  A poorly configured netscreen will perform no better than a
poorly equipped freebsd dedicated firewall.


Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
 Again, thank you very much for your advice and comments - they are very
 well taken.
 
 I will clarify and say that the fbsd system I am using / talking about is
 a _dedicated_ firewall.  Only port 22 is open on it.

Ah, OK.  That wasn't clear from your emails.

 The problem is, I have a few hundred ipfw rules (there are over 200
 machines behind this firewall) and so when a DDoS attack comes, every
 packet has to traverse those hundreds of rules - and so even though the
 firewall is doing nothing other than filtering packets, the cpu gets all
 used up.

If you've created your rules well, then you should only have to traverse
a couple of dozen rules at most for the majority of packets.

 I have definitely put rules at the very front of the ruleset to filter out
 bad packets, and obvious attacks, but there is a new one devised literally
 every day.

Agreed, but establishing good rules and optimizing them helps to
mitigate both current *and* future attacks.

As an example of something that tends to help, adding rules that apply
*ONLY* to the external (internet) interface helps out a ton.  Otherwise,
a packet has to traverse the ruleset twice (once as it passes the
external interface, and again at the internal interface).

This is but one *very* easily implemented optimization that many do not
realize.  To be honest, I only recently implemented it in my own
ruleset, having been made aware of it, even though my firewall is not
CPU bound.  A good idea is a good idea, and I strive to keep my firewall
as optimized as I can, as long as I can maintain protection of my
internal machines.
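
Concretely, the difference is between a rule like the first one below,
which gets checked as a packet crosses *each* interface, and the second,
which is only checked where it can actually match (the interface name
and port are just examples):

/sbin/ipfw add 1000 deny tcp from any to any 6667
/sbin/ipfw add 1000 deny tcp from any to any 6667 via fxp0 in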

On that matter, I've been following this and other threads, and am
doing some 'research' into tightening up my firewall against the more
common DoS/DDoS attacks.  However, as Terry pointed out, for the most
part there is little that can be done about a 'dedicated' attack if the
attacker can fill up your pipe.  Ultimately, all you can do is keep from
responding to the attacker (since responding just creates needless,
unnecessary traffic at your end), or at least do things to slow the
attacker down (in some cases the attack relies on responses from your
machines).

The bottom line is that there's a good chance that *IF* your firewall is
CPU bound under attack, your firewall ruleset could use some optimizing.
In my experience, it's hard to wipe out a well configured firewall.
Now, it is possible to saturate a network link, but generally that
doesn't take out the firewall itself; it certainly makes it difficult
for 'valid' traffic to get through due to packet loss and other
'niceties' that
typically occur in a saturated network. :(

 So, you say that a poorly configured netscreen is no better than a poorly
 configured freebsd+ipfw ... but what about the best possibly configured
 netscreen vs. the best possibly configured freebsd+ipfw ?

IMHO, they would perform similarly, depending on the hardware used on
both appliances.  Now, if your firewall is a 386sx/15, and the
netscreen has a P4-3.0GHz CPU in it, there would certainly be a
difference in performance. :) :) :)

As far as the suggestion to use the FreeBSD box in bridging mode, I
can't speak to that.  My attempts to do so were less than successful, so
I stuck with the more 'common' router/firewall combination.

FWIW, I now have two firewalls in my rather 'puny' network.  One is used
to connect my 'DMZ' LAN to my ISP via a wireless link, which uses a
*very* simple ruleset to ensure that only traffic destined for my
network is passed in, and that traffic with a source address from my
network is passed out.  There are also some simple 'spoofing' rules in
place, but the ruleset is less than 2-dozen rules.  (Again, it's
optimized by interface, so that packets only need to visit the rules
once).  This keeps a bunch of Windows/broadcast crap that lives on my
ISP's network from cluttering up my firewall logs, and also sanitizes
the traffic both to/from my network so that my firewall doesn't have to
mess with it.  On my 'real' firewall, I have a copy of these rules in
place as well, but that's because paranoia runs deep, but these rules
are rarely hit.

I suspect I could do without the 'outer' firewall, but since it's got
nothing else to do besides pass packets from one interface to the other,
I figured it wouldn't hurt to give it *something* else to do. :) :)



Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
 So you are saying that if I put in:
 
 ipfw add 1 deny tcp from any to 10.10.10.10 6667
 
 That an incoming packet for 10.10.10.10 on port 6667 will go through the
 rule set _twice_ (once for each interface) ?

No, a packet that matches that deny rule dies on its first pass through
the rules.  However, you want to optimize your firewall for packets that
*do* go through, so your 'typical' packets (which are passed) go through
two sets of rules (once per interface) before they are delivered.

You don't want to stick the 'block abnormal packets' rules at the top of
the list, IMO.  You want those at the end, since abnormal packets are
*usually* the exception.  Optimize for the standard case.

 Or are you saying that rules I have in place that _allow_ traffic should
 have their _allowance_ apply to the external, because then they pass
 through the firewall without ever checking the ruleset for the internal
 interface they have to pass through ?

Exactly.

 Also, if you don't mind, could you post your paranoia rules:

Sure.  This set of rules probably won't work verbatim, as I've massaged
it and added/removed some rules so that it makes more sense.

-cut here-
# netif is my external interface, and myeip is its address.  If this box
# has no external services on it, the rules related to myeip can be removed.
netif=fxp0
myeip=10.0.9.2
mynet=10.0.10.0/24
mybcast=10.0.10.255


# Only in rare cases do you want to change this rule
/sbin/ipfw add 10 pass all from any to any via lo0
/sbin/ipfw add 15 deny log all from any to 127.0.0.0/8
/sbin/ipfw add 20 deny log all from 127.0.0.0/8 to any


# Don't allow packets with source addresses from my boxes 'in'.
/sbin/ipfw add  100 deny log all from ${mynet} to any via ${netif} in
/sbin/ipfw add  110 deny log all from ${myeip} to any via ${netif} in


# RFC 1918 unroutable hosts shouldn't be generating incoming/outgoing packets.
/sbin/ipfw add  200 deny log all from 192.168.0.0/16 to any via ${netif}
/sbin/ipfw add  205 deny log all from any to 192.168.0.0/16 via ${netif}
/sbin/ipfw add  210 deny log all from 172.16.0.0/12  to any via ${netif}
/sbin/ipfw add  215 deny log all from any to 172.16.0.0/12  via ${netif}
/sbin/ipfw add  220 deny log all from 10.0.0.0/8 to any via ${netif}
/sbin/ipfw add  225 deny log all from any to 10.0.0.0/8 via ${netif}


# Multicast source addresses.  Don't send out packets with a multicast
# source address, and ignore incoming TCP multicast packets (which are bogus).
/sbin/ipfw add  230 deny log all from 224.0.0.0/8 to any via ${netif}
/sbin/ipfw add  235 deny log tcp from any to 224.0.0.0/8 via ${netif}


# Other misc. netblocks that shouldn't see traffic
#   0.0.0.0/8 should never be seen on the net
#   169.254.0.0/16 is used by M$ if a box can't find a DHCP server
#   204.152.64.0/23 is an old Sun block for private cluster networks
#   192.0.2.0/24 is reserved for documentation examples
/sbin/ipfw add  240 deny log all from 0.0.0.0/8  to any via ${netif}
/sbin/ipfw add  245 deny log all from any to 0.0.0.0/8  via ${netif}
/sbin/ipfw add  250 deny log all from 169.254.0.0/16  to any via ${netif}
/sbin/ipfw add  260 deny log all from 204.152.64.0/23 to any via ${netif}
/sbin/ipfw add  270 deny log all from 192.0.2.0/24    to any via ${netif}

###
# Ignore/block (don't log) broadcast packets.
/sbin/ipfw add  400 deny all from ${mynet} to ${mybcast} via ${netif} out

###
# Ensure that outgoing packets have a source address from my machine now.
/sbin/ipfw add  500 pass all from ${mynet} to any via ${netif} out
/sbin/ipfw add  510 pass all from ${myeip} to any via ${netif} out

###
# Packets should be sanitized from invalid addresses at this point.
/sbin/ipfw add  600 pass all from any to ${mynet} via ${netif} in
/sbin/ipfw add  610 pass all from any to ${myeip} via ${netif} in

###
# Hmm, time to go kill something on my network that is using an invalid
# address.  However, we 'actively' reject packets from internal boxes.
/sbin/ipfw add  700 reject log all from any to any via ${netif} out

###
# Catch-all to see what I've missed.
/sbin/ipfw add  710 deny log all from any to any via ${netif} in
-cut here-

Note the use of 'deny' vs. 'reject'.  I don't want externally generated
packets to generate anything from my end.  Stupid packets don't even
deserve a response that uses up *MY* bandwidth.

The above are just the start, and are my 'first line of defense', since
they don't require any stateful inspection in the firewall.

There's much more to it, but these rules, along with some local
customizations, are what has worked for me and my previous employers.  I
use logging to verify that I've caught things, and I monitor the box
regularly with tcpdump and such to ensure that I'm catching these.
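
The monitoring itself is nothing fancy; something along these lines (the
interface name is a placeholder):

/sbin/ipfw -a list                  # per-rule packet/byte counters: which rules get hit?
tcpdump -n -i fxp0 not port 22      # eyeball what shows up on the outside wire besides ssh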

This, like email spam protection, is something that I have to stay
vigilant about...




Nate


Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
  You don't want to stick the 'block abnormal packets' rules at the top of
  the list, IMO.  You want those at the end, since abnormal packets are
  *usually* the exception.  Optimize for the standard case.
 
 Wow - that is _very interesting_ that you say this.  We were having a
 similar discussion on -network a few days back where the consensus was the
 exact opposite - and based on that I have actually stuck my 10-15 deny
 rules at the very front of my ruleset.  Things like:
 
 deny tcp from any to any tcpflags syn tcpoptions !mss
 deny tcp from any to any tcpflags syn,rst
 deny tcp from any to any tcpflags fin,syn,rst,psh,ack
 
 and so on.  The thinking is this (and you may find this interesting):

I watched that discussion, but I don't remember folks saying that the
rules should go at the top.  Apparently I didn't pay close enough
attention. :(

 During normal traffic, yes, there are now 5 extra rules that every single
 packet has to go through in addition to whatever other portion of the
 ruleset they would have to go through.  But, in normal traffic - even if
 the bandwidth is getting up to 8-12 megabits/s, the packets/s rate is
 still fairly low, and thus the firewall is not taxed.  However, in an
 attack, the bandwidth might go as high as double normal situations BUT the
 packets/s can climb as high as 100 times the rate you normally see.  So
 now you have 100x normal packets/s going through 200 rules to get to the
 special case deny rules (like the ones above).  Therefore, by putting the
 special case deny rules at the very beginning, yes you add a very small
 amount of extra work for normal traffic, but you improve your situation
 _dramatically_ in the case of suddenly ramping up to 12,000 packets/s of
 SYN flood ... and by kicking these out immediately at the top of the
 ruleset you at least buy yourself some time.

Yep, you dump things as quick as possible.

 My problem is that every time I add a new rule to the top, a new kind of
 attack is used, and gets through just fine - so I have 12K packets/s
 coming through all 300 rules of mine no matter what I put in :)

It may be time to look at things differently.  Rather than trying to
generate a rule for every attack, it may be time to start thinking about
only allowing 'valid' packets through.  I know this is a bit harder than
it may sound, but you may be surprised to find that it works.

Kind of like how the people who look for counterfeits do.  Rather than
spend their time trying to find a counterfeit, they analyze what a
'good' bill is, so that when a counterfeit arrives, it's immediately
obvious.

Again, this may be beyond what ipfw can do today, but it's certainly
something to think about.
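
In ipfw terms, 'only allowing valid packets through' boils down to a
default-deny ruleset that enumerates the traffic you expect and drops
everything else.  A minimal sketch (the address and ports are
placeholders):

/sbin/ipfw add  1000 allow tcp from any to any established
/sbin/ipfw add  1100 allow tcp from any to 10.10.10.10 22,25,80,443 setup
/sbin/ipfw add  1200 allow udp from any to 10.10.10.10 53
/sbin/ipfw add 65000 deny ip from any to any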

At this point, without more details as to what your rules look like, I'm
still of the opinion that they can be optimized to handle the higher
loads.


Nate





Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
 Try this simple ruleset:
 
 possible deny log tcp from any to any setup tcpoptions !mss
 
 ipfw add allow ip from any to any out
 ipfw add allow ip from any to your.c.net{x,y,z,so on...}
 ipfw add deny log ip from any to any

I'd limit these to the outside interface, for performance reasons.


# Whatever the interface is...
outif=fxp0
ipfw add allow ip from any to any out via ${outif}
ipfw add allow ip from any to your.c.net{x,y,z,so on...} via ${outif}
ipfw add deny log ip from any to any via ${outif}

etc...

Or, you could do:
# The internal interface is not filtered
intif=fxp1
ipfw add allow all from any to any via ${intif}

# Everything else only applies to the external interface
ipfw add allow ip from any to any out
ipfw add allow ip from any to your.c.net{x,y,z,so on...}
ipfw add deny log ip from any to any



Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
  So, you say that a poorly configured netscreen is no better than a poorly
  configured freebsd+ipfw ... but what about the best possibly configured
  netscreen vs. the best possibly configured freebsd+ipfw ?
 
 The answer to that particular question depends on what you mean
 by configured.
 
 Netscreen hs integral load shedding in its stack.
 
 FreeBSD is actually adding pointers and other complexity to its
 stack, to attribute packets with metadata for mandatory access
 controls, and for some of the IPSEC and other stuff that Sam
 Leffler has been doing.  If you have IPSEC compiled into your
 kernel at all, each coneection setup for IPv4, and the per
 connection overhead for IPv4, is very, very high, because the
 IPSEC code allocates a context, even if IPSEC is never invoked,
 rather than using a default context.

Except that it's acting as a router, and as such there is no 'setup'
except for the one he is using to configure/monitor the firewall via
SSH.

In essence, a no-op in a dedicated firewall setup.

  FreeBSD timers used in
 the TCP stack to not scale well (this is relative to your point
 of view, e.g. they don't scale well to 1,000,000 connections,
 but can be tuned to be OK for 10,000 connections).

Again, you're missing the point.  This is a dedicated firewall, not a
firewall being used at the point of service.

[ The rest of the irrelevant descriptions deleted ]


Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
 will freebsd+ipfw always be worse in a ~10 meg/s throughput network
 that gets attacked all the time than a purpose-built appliance like a
 netscreen ?

I think it's been said that, in general, the answer is no.  It should
behave as well, and in some cases better.  There are cases where it will
perform worse, but in 'general' it should behave similarly.




Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
   Try this simple ruleset:
   
   possible deny log tcp from any to any setup tcpoptions !mss
   
   ipfw add allow ip from any to any out
   ipfw add allow ip from any to your.c.net{x,y,z,so on...}
   ipfw add deny log ip from any to any
  
  I'd limit these to the outside interface, for performance rules.
  
  # Whatever the interface is...
  outif=fxp0
  ipfw add allow ip from any to any out via ${outif}
  ipfw add allow ip from any to your.c.net{x,y,z,so on...} via ${outif}
  ipfw add deny log ip from any to any via ${outif}
  
  etc...

 Your above ruleset seems to be correct ... if add
 some rule for outcoming traffic.
 I was too fast and keep in mind only incoming traffic.
 
 Effectivity depends on number of interfaces.
 If I remember right, one external and one internal.
 If such, the ruleset without interfaces defined
 for allow rules is not worse then without interfaces IMHO.

Not true.  The packets still pass through 'both' interfaces, and as such
the number of rules they must traverse is doubled (once for the internal
interface, once for the external).  Halving the # of rules a packet must
traverse is an easy way to decrease the load on the CPU. :)

For most people, it makes little difference, but the user in question
has a firewall that's overloaded, so a 2x decrease in the # of rules
traversed might make the difference, since the 'load' is caused by
packets that shouldn't be getting through.


Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
  In any case, he's got something else strange going on, because
  his load under attack, according to his numbers, never gets above
  the load you'd expect on 10Mbit old-style ethernet, so he's got
  something screwed up; probably, he has a loop in his rules, and
  a packet gets trapped and reprocessed over and over again (a
  friend of mine had this problem back in early December).

 If I remember correctly he has less then 10Mbit
 uplink and a lot of count rules for client accounting.

Ahh, I remember now.  Good point.

 It is reason I recommend him to use userland accounting.

Or another (separate) box inline with the original firewall for
accounting.

 And as far as I understand a lot of count rules is
 the reason for trouble.

If this is the case, then I agree.  A firewall that is under attack
should only be used as a firewall, not an accounting tool.



Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
  If I remember correctly he has less then 10Mbit
  uplink and a lot of count rules for client accounting.
  It is reason I recommend him to use userland accounting.
  And as far as I understand a lot of count rules is
  the reason for trouble.
 
 I removed all the count rules a week or so ago.  Now I just have 2-300
 rules in the form:
 

[ Snip ]

Seriously, if you want more help, you're going to have to give more
details than 'of the form'.  Send a couple of us (not the entire list)
your rules to look at, and maybe something will jump out.  At this
point, we can only guess, and spin our wheels trying to help you out.

 allow tcp from $IP to any established
 allow tcp from any to $IP established
 allow tcp from any to $IP 22,25,80,443 setup
 deny ip from any to $IP

Seems like overkill to me, when you can do something simpler with a
single rule, although depending on that rule is risky with ipfw, since
it *can* be spoofed (as you are well aware). ;(
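
(By 'a single rule' I mean one global rule in place of the per-IP pair
of 'established' rules, i.e.:

allow tcp from any to any established

which is also exactly the rule that can be spoofed with a forged ACK,
hence the caveat.)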

 and I have that same set in there about 50-70 times - one for each
 customer IP address hat has requested it.  That's it :)

Yikes.  Can't you simply allow in *all* the packets for an entire
netblock, and let them bounce around in the network for any
'non-listening' host?

 So each packet I get goes through about 5 rules at the front to check for
 bogus packets, then about 70 sets of the above until it either matches one
 of those, or goes out the end with the default allow rule.

If you've got a default allow rule, what's the point of the above rules?

Again, specific details (i.e., your rules list) would certainly go a long
way.



Nate




Re: FreeBSD firewall for high profile hosts - waste of time ?

2003-01-16 Thread Nate Williams
 PS: I still think that if your CPU pegs, you've got a loop in there
 somewhere.  Most common case is a reject or deny.  Try changing
 all of them to drop, instead, and see if that fixes it.

FWIW, deny == drop.  The 'reject' rule is the one that sends out ICMP
and RST packets.


Nate




Re: sendfile() in tftpd?

2002-04-23 Thread Nate Williams

[ TFTP performance is poor ]

   USE TFTP to get a tiny image up, and then go TCP.
 
  
  Going to TCP soon assumes that you have a lossless medium in order to
  transmit packets over.  If you're using a lossy medium, TFTP (and other
  UDP based protocols) can kick their butt because of TCP's assumption
  that packet loss is a function of congestion, which is often not the
  case in lossy mediums such as wireless. :(
 
 tftp in particular probably won't, because it uses the same packet
 window concept as TCP, but with the window set to 1.

Actually, it still tends to kick TCP's butt in very lossy networks,
because of resends and other vagaries of TCP.  We've done benchmarks,
and when packet loss gets bad, TCP's backoff algorithm (which causes
window size shrinkage *and* increases in resend delays) causes TCP to
slow to a crawl.  We've found that TFTP's timeouts tend to work better,
although it may be more an issue of having lower overhead vs. TCP.

 It is a protocol that is braindead by design, in order to be simple to
 implement.  It was never pretended that performance was a design goal.

Completely agreed on that point.  Another point worth mentioning is that
it's rather trivial to add in some extensions to TFTP (that are
backwards compatible) to speed it up by increasing the window size to
even 2 packets.  We were able to do that and it almost halved the
transfer times. :)

However, it required slight modifications on the part of the sender, and
the ability to recognize when the double window size modification had to
be disabled because certain (very slow) pieces of hardware couldn't
handle even the slight speedup of packets.



Nate





Re: sendfile() in tftpd?

2002-04-23 Thread Nate Williams

  Going to TCP soon assumes that you have a lossless medium in order to
  transmit packets over.  If you're using a lossy medium, TFTP (and other
  UDP based protocols) can kick their butt because of TCP's assumption
  that packet loss is a function of congestion, which is often not the
  case in lossy mediums such as wireless. :(
 
 THat's true.  I can't really think of an example of such a
 medium, though, that you would still trust to netboot something.
 8-).
 
 Maybe 802.11b.  8-(.

Exactly!  Or, something that boots remotely over satellite (for easier
maintenance).

 The specific problem here is that UDP is ``too slow''; it looks
 like a classic Doctor, it hurts when I do this  8-) 8-).

Actually, UDP is *faster* than TCP in these sorts of environments, if
you know what you are doing. :) :) :) :)

Overheard while my former company was demo'ing a product to a client,
and the client was talking to our competitor:

'Hmm, how come your product doesn't do anything, when their product
seems to be working fine here?'



Nate




Re: sendfile() in tftpd?

2002-04-23 Thread Nate Williams

  [ TFTP performance is poor ]
  
 USE TFTP to get a tiny image up, and then go TCP.
   

Going to TCP soon assumes that you have a lossless medium in order to
transmit packets over.  If you're using a lossy medium, TFTP (and other
UDP based protocols) can kick their butt because of TCP's assumption
that packet loss is a function of congestion, which is often not the
case in lossy mediums such as wireless. :(
   
   tftp in particular probably won't, because it uses the same packet
   window concept as TCP, but with the window set to 1.
  
  Actually, it still tends to kick TCP's butt in very lossy networks,
  because of resends and other vagaries of TCP.  We've done benchmarks,
  and when packet loss gets bad, TCP's backoff algorithm (which causes
  window size shrinkage *and* increases in resend delays) causes TCP to
  slow to a crawl.  We've found that TFTP's timeouts tend to work better,
  although it may be more an issue of having lower overhead vs. TCP.
 
 This is an issue with TCP in your situation. You're playing with network
 equipment TCP wasn't designed for, and noticing that TCP isn't anywhere
 near perfect. It's relatively simple (see OSI before you disagree...),
 and optimized for common network technology at the time it was designed.
 (And sometimes those optimizations work...)

Yer' preachin to the choir here. :)

 There are things it doesn't fit well. A connection-less reliable
 datagram protocol might have been a better choice for http, for example.

You mean like TTCP?  Unfortunately, it wasn't available, and because of
inertia, it will probably never happen. :(

   It is a protocol that is braindead by design, in order to be simple to
   implement.  It was never pretended that performance was a design goal.
  
  Completely agreed on that point.  Another point worth mentioning is that
  it's rather trivial to add in some extensions to TFTP (that are
  backwards compatible) to speed it up by increasing the window size to
  even 2 packets.  We were able to do that and it almost halved the
  transfer times. :)
 
 Probably true, but the better solution is to find something else (or
 make something else) that doesn't completely suck like TFTP does.

Because it's used so rarely, having it suck every once in a while isn't
so bad.  TFTP is well supported natively in lots of hardware platforms,
so rather than re-inventing the wheel over and over again, we chose to
continue using what other vendors have used, but we 'optimized' it for
our situation.  That's called 'good engineering' in my book. :)

  However, it required slight modifications on the part of the sender, and
  the ability to recognize when the double window size modification had to
  be disabled because certain (very slow) pieces of hardware couldn't
  handle even the slight speedup of packets.
 
 I suspect that you might be better off solving your lossy network issues
 with a layer under IP, rather than tinkering with the protocols that sit
 on top.

Actually, I disagree.  IP is all about routing, and since we still want
packets routed correctly, something on top of IP *is* the best
solution.  (Check out your OSI model. :)

In any case, even writing my own 'RDP' (Reliable Datagram Protocol) on
top of IP was massive overkill.  It means messing around with the stack
on every OS used in the product (which includes Windoze).  Most of the
stacks are *NOT* written with extensibility in mind, even assuming you
can get your hands on the source code.

The amount of resources (both in terms of finding people with enough
expertise as well as the time to do proper testing) to do such a task is
beyond the scope of almost any hardware vendor.

Been there, done that, only going to do it again if it makes sense...




Nate




Re: sendfile() in tftpd?

2002-04-23 Thread Nate Williams

[ moved to -chat, since this has no business being in -hackers anymore ]

   Probably true, but the better solution is to find something else (or
   make something else) that doesn't completely suck like TFTP does.
  
  Because it's used so rarely, having it suck every once in a while isn't
  so bad.  TFTP is well supported natively in lots of hardware platforms,
  so rather than re-inventing the wheel over and over again, we chose to
  continue using what other vendors have used, but we 'optimized' it for
  our situation.  That's called 'good engineering' in my book. :)
 
 That's what everyone else said, and why that stupid protocol still
 exists.

No, it exists because it's good enough to do the job.  It's not optimal,
but it's good enough.  Optimal for all situations means re-inventing TCP
over and over again, which is non-optimal from an engineering point of
view, IMO.

However, it required slight modifications on the part of the sender, and
the ability to recognize when the double window size modification had to
be disabled because certain (very slow) pieces of hardware couldn't
handle even the slight speedup of packets.
   
   I suspect that you might be better off solving your lossy network issues
   with a layer under IP, rather than tinkering with the protocols that sit
   on top.
  
  Actually, I disagree.  IP is all about routing, and since we still want
  packets routed correctly, something on top of IP *is* the best
  solution.  (Check out your OSI model. :)
 
 Why should routing enter into it?

Because that is what IP does.  IP's job is to route packets.  No
reliability, no streaming, no nothing.  It just makes sure a packet
gets from point A to point B.  (See your ISO layering pictures and
descriptions.)

 Ethernet is a pretty mindless MAC layer, but there are others like PPP
 or Token Ring that aren't, and IP runs fine over them.

And your point is?

  In any case, even writing my own 'RDP' (Reliable Datagram Protocol) on
  top of IP was massive overkill.  It means messing around with the stack
  on every OS used in the product (which includes Windoze).  Most of the
  stacks are *NOT* written with extensibility in mind, even assuming you
  can get your hands on the source code.
 
 On Windows, you could put that RDP layer into the network driver, and
 not mess with the rest of the network stack. Or, perhaps write a
 driver that sits between an existing network driver and the IP
 stack. I did something like that with DOS drivers once.

Yer preaching to the choir.  It's *WAY* too much for *WAY* too little
gain.  On top of it, it's non-portable.

  The amount of resources (both in terms of finding people with enough
  expertise as well as the time to do proper testing) to do such a task is
  beyond the scope of almost any hardware vendor.
  
  Been there, done that, only going to do it again if it makes sense...
 
 Sounds suspiciously like a problem with the standard for your medium.

???  It's my job to provide optimum solutions to my employer, using the
least amount of both my resources and their resources.  In other words,
if we were in a perfect world, with no time schedules, and infinite
resources, a perfect solution as you suggest might be a good idea.  But,
unfortunately (or fortunately, depending on your perspective), I have to
work with what I got, since I've got a zillion other things to do
besides trying to perfect a file transfer protocol that is used less
than .01% of the time in the lifetime of the product.

I'd rather spend my time optimizing the other 99.99% of the
functionality of the product, since that's what most of my customers
are more concerned with.

Some would call this good engineering, but apparently you aren't in that
group.

In another product at another company, I messed with perfecting a data
transfer protocol that worked over wireless mediums.  Like it or not,
loss is not only common, it's considered acceptable.  In that case,
because of design considerations (it must run on legacy hardware running
on legacy software, i.e., any PC running any version of Win95/98), messing
with device drivers is simply unfeasible.  So, you work with what you
got, which means a standard TCP/IP stack, with no additions.

Amazingly enough, both products *work* despite all of the limitations.
I know it might be hard for you to believe that one *can* provide a
working solution without resorting to OS hacking, but it is not only
possible, sometimes it's actually fun. :)


Nate

ps. I've done my share of kernel hacking, and my current job is *all*
kernel hacking, but just because I have a hammer in my toolbox doesn't
mean that every problem looks like a nail. :) :) :)

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Any interest in core dump's from Nokia (Ipsolin?)

2002-04-20 Thread Nate Williams

 Just curious how the relationship is with Nokia's spin-off of FreeBSD
 for their security platform?

The FreeBSD used on the Nokia product is different enough from the stock
FreeBSD code that a dump would be essentially useless.  A dump requires
that you at least have access to the symbols, and even then, given that
they've made significant kernel changes (especially to the routing
code), it may not even give you much of a hint as to what went on.





Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Interesting sysctl variables in Mac OS X with hw info

2002-03-13 Thread Nate Williams

 hw.busfrequency = 133326902
 hw.cpufrequency = 66700
 hw.cachelinesize = 32
 hw.l1icachesize = 32768
 hw.l1dcachesize = 32768
 hw.l2settings = -2147483648
 hw.l2cachesize = 262144
 
 Assuming that some or all of this information can be derived on x86 /
 alpha / sparc, how useful do folks think it would be to have this
 information be available from sysctl space?  I personally would love
 to see CPU and bus speed info.

Note, CPU speed on x86 laptops is variable depending on power control.
I'm not sure this is the case on the Apple hardware.
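
(FWIW, a bit of this is already reachable on the x86 side today.  The
following is only a sketch of what I poke at; machdep.tsc_freq is the
one I'm least sure is exported by every kernel:)

  sysctl hw.model hw.machine hw.pagesize
  sysctl machdep.tsc_freq   # CPU clock in Hz on i386, where exported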


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: C vs C++

2002-03-06 Thread Nate Williams

[ Moving this thread over to -chat as well.  We'll get them all over in time ]

Raymond Wiker writes:
 Giorgos Keramidas writes:
   Well, to be frank, I've seen a few C++ coding style documents, that suggest
   avoiding stdout.h altogether when writing in C++.

 I assume you mean stdio.h?

 Anyway, I *really* can't see any reason not to use iostream,
 fstream, sstream and friends. 

The fact that the programmer has no control over *how* the data is
displayed, and relies on the person who wrote the class to display the
data is one good reason.

iostreams gives all the control to the person who writes the class, so
in order to print things out, you have to extend the class (which often
means peeking into its private data, a violation of layering), or doing
all sorts of kludges/hacks to get things working.

 I also cannot see any reason not to use exceptions, the standard
 containers, the string classes etc.

Because exceptions are *still* not portable across multiple platforms.
There are N different implementations of exceptions, 'standard
containers', and all of them behave slightly differently.

IMO, this is probably the biggest single stumbling block for using C++
extended features.  Very few people know how to use these features
correctly, and since they were so unportable, they are essentially
unused except by those folks who worked very hard at using them, and as
such have a higher clue-factor than most.

 Used properly, these make it possible to write code that is
 inherently safer than anything built around printf/scanf, char *,
 longjump, etc. Without these (and a few others) you may just as well
 stay with standard C.

Safer?  The intricacies of printf/scanf are *well* known, so I wouldn't
say that it's any more/less safe.  At least with the above functions,
you *know* ahead of time the issues, vs. some random implementation of a
class you don't want to look at.

Exceptions are great, but there are too many gotchas because the
behavior is not standardized well enough to depend on them.  (And, if
you're not careful, you can cause yourself *all* sorts of problems using
them.)

 Then again, if you want to do object-oriented programming, C++
 is probably not the right choice. If you want to use several different
 paradigms simultaneously in one language, C++ may be a better fit -
 although Common Lisp is a much better choice :-)

Except that it's *obnoxiously* hard to deploy it.


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: C vs C++

2002-03-05 Thread Nate Williams

[ moved to -chat ]

  Because that underlying assumption is false, and I'm making
  fun of it.

 Well, that in itself is wrong. C++ code IS harder to write and write
 correctly and efficiently, as I would assume it is for any OO language.

Not so.  Having done C professionally for umpteen years, C++ for a
little less than umpteen years, and Java for 4, I can say w/out
reservation that C++ sucks.  OOP programming doesn't *have* to be hard.
C++ puts too many roadblocks in your way.

It's not just because Java is newer that it's displacing C++ as the
primary development language.  It's because C++ as a language is *NOT*
well-designed (design by committee).  C is becoming more and more like
C++ in this regard.  (And before Terry starts whining about strongly
typed languages, let me state that IMO strongly typed languages are a
good thing, since they allow you to verify your code at *COMPILE* time,
vs. at runtime.)

I can get more done in a shorter period of time with Java than with C++.
However, when speed is the issue, the computer gets more done in a
shorter amount of time with C than I can with either Java/C++.

My Java programs can often-times run *faster* than my own C++ programs,
simply because Java (the language) makes it easier to produce a good
design.  I don't find the limitations to be limitations so much, and
they tend to force me to do better design up front.  Both are OOP
languages, but C++ *feels* like a non-OOP language with some hooks to
make it more OOP like.  (I'd like to play with Smalltalk, but alas
there's no market for it, and there's no time left in my day to work on
what I need to get done, let alone for things like playing with ST.)

C++ in its simple form *can* be easier to maintain, but it rarely turns
out that way.  As programmers, it's difficult to not succumb to the
temptation to use the latest/greatest feature of the language, since at
the time it certainly *seems* like it would help things out in the
long-term. :)

Finally, well-written/optimized C++ code is an abomination to look at,
and requires sacrificing small animals at altars whenever you need to
modify it. :)




Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD-1.X public cvs?

2002-02-26 Thread Nate Williams

  A FreeBSD 1.X CVS tree has been found, which has its first import as
  386BSD 0.1 + PK 024.  There are a couple minor points that need to be
  clarified from Caldera before it can be made public.
 
 Has there been any more progress with this?

There have been no clarifications from Caldera, AFAIK.  At least,
nothing I've heard about. :(



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: A question about timecounters

2002-02-05 Thread Nate Williams

  Can you try to MFC rev 1.111 and see if that changes anything ?
 
 That produced some interesting results.  I am still testing under
 very heavy network interrupt load.  With the change from 1.111, I
 still get the microuptime messages about as often.  But look how
 much larger the reported backwards jumps are:
 
 microuptime() went backwards (896.225603 - 888.463636)
 microuptime() went backwards (896.225603 - 888.494440)
 microuptime() went backwards (896.225603 - 888.500875)
 microuptime() went backwards (1184.392277 - 1176.603001)
 microuptime() went backwards (1184.392277 - 1176.603749)
 
 (Ok, I'll MFC 1.111)

Huh?  It appears that 1.111 makes things worse, not better (larger
jumps).

Can you explain why you think this is a good thing, since it seems to
be a bad thing to me.

 Sanity-check: this is NOT a multi-CPU system, right ?

As stated before, both are  1Ghz single-CPU systems running -stable,
although I'm sure John is capable of answering this on his own. :)

 We now have three options left:
   hardclock interrupt starvation 

This is Bruce's hypothesis, right?

   scheduling related anomaly wrt to the use of microuptime().
   arithmetic overflow because the call to microuptime() gets
   interrupted for too long.

'Interrupted for too long'.  Do you mean 'not interrupted enough', aka
a long interrupt blockage?  (I'm trying to understand here.)



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: A question about timecounters

2002-02-05 Thread Nate Williams

 How are issues (1) and (3) above different?
 
 ps. I'm just trying to understand, and am *NOT* trying to start a
 flame-war. :) :) :)
 
 If the starvation happens to hardclock() or rather tc_windup() the effect
 will be cumulative and show up in permanent jumps in the output of date
 for instance.  In stable hardclock() is spl-protected so this would be
 _really_ bad news.
 
 If the starvation happens in any of {micro|nano}[up]time() (but not the
 get variants!) then it will result in a single spurious reading.

Ok, the bulb is starting to grow from dim to bright. :)

 The premise for avoiding locking in the access functions to timecounters
 was precisely that we could trust them to not be pre-empted for long
 enough for the hardware to roll over; if this is not the case we lose
 because the overflow in the hardware counter means that the timecounter
 we calculate from is not valid for the delta we get from the hardware.
 
 I'm not sure this answers your question, if not it is not bad will, just
 me not understanding the question :-)

*grin*

I think I understand the problem.  Let me try to rephrase to make sure.

1) If we have an interrupt lockout (*NOT* due to time-counting code),
   then we'd have a problem since the hardclock would never get run.

2) If however, the locking done to protect the timecounter code happens
   to make getting/setting the timecounter take too long, we'd get
   similar results, but for *completely* different reasons.

Let me be more precise.

(1)
  cli();
   /* Take a really long time doing something */
  sti();

(2)
  /* Do something */
  gettime();  /* Takes a really long time to complete */

The first is harder to track down/fix, simply because you don't know
*who* the offender is.  The latter is essentially the same problem to
fix, but *may* be easier to fix since the offending code *IS* the
timecounter code.

Am I even close to understanding?



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: A question about timecounters

2002-02-05 Thread Nate Williams

   Can you try to MFC rev 1.111 and see if that changes anything ?
  
  That produced some interesting results.  I am still testing under
  very heavy network interrupt load.  With the change from 1.111, I
  still get the microuptime messages about as often.  But look how
  much larger the reported backwards jumps are:
  
  microuptime() went backwards (896.225603 - 888.463636)
  microuptime() went backwards (896.225603 - 888.494440)
  microuptime() went backwards (896.225603 - 888.500875)
  microuptime() went backwards (1184.392277 - 1176.603001)
  microuptime() went backwards (1184.392277 - 1176.603749)
  
  (Ok, I'll MFC 1.111)
 
 Huh?  It appears that 1.111 makes things worse, not better (larger
 jumps).
 
 No, 1.111 makes the jumps report more correctly I think. 

Now, if that ain't a glowing reason to MFC it, I don't know one (I
think). :) :)

 They will maybe save your meal in less bad cases than yours, but in
 yours they just make sure that we don't get invalid number of
 microseconds in a timeval, and consequently we get more honest output.

How can you verify that this is the case?

  We now have three options left:
 hardclock interrupt starvation 
 
 This is Bruce's hypothesis, right?
 
 Also mine for that matter.
 
 scheduling related anomaly wrt to the use of microuptime().
 arithmetic overflow because the call to microuptime() gets
 interrupted for too long.
 
 'Interrupted for too long'.  Do you mean 'not interrupted enough', aka
 a long interrupt blockage?  (I'm trying to understand here.)
 
 See my previous email, I just explained it there.

I still didn't understand, hence the reason for the question.  (The
explanation was in the email I originally responded to).

I understand the 'overflow' issue, but it would seem to my naive
thinking that it would occur only when interrupts are blocked for a
period of time, which is the same as hardclock interrupt starvation in
my mind.

How are issues (1) and (3) above different?



Nate

ps. I'm just trying to understand, and am *NOT* trying to start a
flame-war. :) :) :)

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD-1.X public cvs?

2002-01-30 Thread Nate Williams

   Caldera's License Agreement:
   
   http://www.tuhs.org/Archive/Caldera-license.pdf
  
  Thanks.  However, this isn't as specific as I'd like it to be.  It
  implies that Net1/Net2 are now 'legal', but it doesn't give explicit
  release of said source code.
  
  Well, I have never heard claims that BSD was tainted by any USL
  release besides 32V, so this is good enough for me to put my 1.X
  tree up without fearing ugly lawyers.
  
  Now, where did all those CD's go...
 
 If all else fails I have stored my FreeBSD 1.0 CD as a precious
 gem ;) Cannot find the 386BSD 0.1 + PK024 QIC tape though :(

A FreeBSD 1.X CVS tree has been found, which has its first import as
386BSD 0.1 + PK 024.  There are a couple minor points that need to be
clarified from Caldera before it can be made public.


Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD-1.X public cvs?

2002-01-30 Thread Nate Williams

  A FreeBSD 1.X CVS tree has been found, which has its first import as
  386BSD 0.1 + PK 024.  There are a couple minor points that need to be
  clarified from Caldera before it can be made public.
 
 Just curious, but will this be folded in the main CVS tree, or will it be
 available as a separate tree/cvsup dist? I'd imagine that the CVS hackery
 needed to implement the former takes a lot of time...

It would be *way* too much work to fold it into the release.  You'd end
up with a completely different CVS tree, and have little/no gain from
doing it.

I also don't see the FreeBSD project making it available as a CVSup dist
either.  *IF* it's made publicly available, I could see it as a port
or something like that.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD-1.X public cvs?

2002-01-29 Thread Nate Williams

 Now that ancient unix has been relicensed with an old-style BSD licence,
 is the FreeBSD-1.X cvs repository going to be made public?

Out of curiosity, why?

And, where have you heard that it's been relicensed?



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD-1.X public cvs?

2002-01-29 Thread Nate Williams

Now that ancient unix has been relicensed with an old-style BSD licence,
is the FreeBSD-1.X cvs repository going to be made public?
   
   Out of curiosity, why?
  
  Out of curiosity :)
 
 Kirk was surprised by how popular the CSRG archives CDs are.

I got one of those too. :)

   And, where have you heard that it's been relicensed?
  
  http://minnie.tuhs.org/PUPS/
 
 There's also a link from Caldera's own site
 http://www.caldera.com/company/news/

Thanks.  I'm going to wait and see what happens w/regards to the talking
heads on this, and if the consensus is that it's legal to post, I'll
upload the bits to freefall.


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD-1.X public cvs?

2002-01-29 Thread Nate Williams

  Caldera's License Agreement:
  
  http://www.tuhs.org/Archive/Caldera-license.pdf
 
 Thanks.  However, this isn't as specific as I'd like it to be.  It
 implies that Net1/Net2 are now 'legal', but it doesn't give explicit
 release of said source code.
 
 Well, I have never heard claims that BSD was tainted by any USL
 release besides 32V, so this is good enough for me to put my 1.X tree
 up without fearing ugly lawyers.

Ahh, the advantages of being overseas, away from litigious lawyers. :)



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Amount of free memory available in system?

2002-01-12 Thread Nate Williams

Is there a simple sysctl or a command line utility I can use to
determine how much free memory is available in a system?

I've got an embedded application that has *very* limited memory, and I
was trying to figure out how much memory was available for the userland
applications.

'top' has something, as well as 'vmstat'.  Unfortunately, because of the
limited amount of disk space available on this box, I don't have access
to either one of those.

Is there a sysctl I can use to determine how much free memory is
available on the box?

Note, I've disabled swapping, since the *ONLY* thing running is an MFS
at the point I'm checking.

/stand/sysctl vm.defer_swapspace_pageouts=1
/stand/sysctl vm.disable_swapspace_pageouts=1



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Amount of free memory available in system?

2002-01-12 Thread Nate Williams

  Is there a simple sysctl or a command line utility I can use to
  determine how much free memory is available in a system?
  
  I've got an embedded application that has *very* limited memory, and I
  was trying to figure out how much memory was available for the userland
  applications.
  
  'top' has something, as well as 'vmstat'.  Unfortunately, because of the
  limited amount of disk space available on this box, I don't have access
  to either one of those.
  
  Is there a sysctl I can use to determine how much free memory is
  available on the box?
 
 Why not look how top and vmstat calculate it and do that in your code.

I was hoping to be able to do 'sysctl foo.bar.bletch' and have it tell
me the information.
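
(Something along these lines is roughly what I was hoping for, assuming
the vm.stats tree is compiled into the kernel on this box; free pages
times the page size gives bytes:)

  /stand/sysctl vm.stats.vm.v_free_count
  /stand/sysctl hw.pagesize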


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: path_mtu_discovery

2002-01-05 Thread Nate Williams

 : Out of curiosity, where do MTUs  ~512 occur?
 
 Old slip links that used it to reduce latency.  I suspect that there
 aren't too many of them left in the world.

You'd be surprised.  I measured SLIP's efficiency (in throughput) to be
about 5-15% better than PPP in older versions of FreeBSD (both using the
kernel implementations, slip and ppp).  Since both were using static
configurations on both ends and the only traffic was TCP/IP, there was
no reason to use PPP over SLIP, so we opted for using the more efficient
protocol.

I imagine SLIP is still more efficient than PPP in terms of CPU,
although I haven't measured it in years.


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2002-01-01 Thread Nate Williams

 Remember also that Matt recently shot Reno for performance
 reasons

Actually, it was turned back on less than 72 hours later, when he
found/fixed a bug in NewReno.  It was only off for a little bit, and
only in -stable.

, when compared to Linux, when he should probably have
 simply cranked initial window size to 3 (Jacobson) and added
 piggy-back ACKs (Mogul).

See above.  You need to pay more attention to what's going on. :)


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Re[2]: switching to real mode

2001-12-06 Thread Nate Williams

  I saw an example of switching in real mode in linux' sources (it looks
  pretty clear) and thought it is possible to do the same under FreeBSD.
  The problem is I'm absolutely lost in FreeBSD's physical memory management
  implementation (page tables and directory and so on).
 
 That code is quite broken. You need to check out the ones I mentioned
 earlier. All that the code does in the linux kernel is fail badly.
 
 Actually there used to be in freebsd some really nice code for popping
 into real mode and back again. It was to support calling BIOS for certain
 things.

I believe the code is still there, and it's used for APM bios calls,
if I remember correctly.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Re[2]: switching to real mode

2001-12-06 Thread Nate Williams

   I saw an example of switching in real mode in linux' sources (it looks
   pretty clear) and thought it is possible to do the same under FreeBSD.
   The problem is I'm absolutely lost in FreeBSD's physical memory management
   implementation (page tables and directory and so on).
  
  That code is quite broken. You need to check out the ones I mentioned
  earlier. All that the code does in the linux kernel is fail badly.
  
  Actually there used to be in freebsd some really nice code for popping
  into real mode and back again. It was to support calling BIOS for certain
  things.
  
  I believe the code is still there, and it's used for APM bios calls,
  if I remember correctly.
 
 It uses VM86 mode, not real mode.  Going back and forth to real mode is too
 expensive.

Ahh, that's right.  However, if I remember right, in older versions of
FreeBSD it did the real-mode switch, because we didn't have the VM86
code.

(This would be prior to FreeBSD 4).


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Found the problem, w/patch (was Re: FreeBSD performing worse than Linux?)

2001-12-03 Thread Nate Williams

  Unfortunately, I'm unable to run tcpdump on the client, since it's
  running NT and we're not allowed to install any 3rd party apps on it
  (such as the WinDump package).
 
  NT???  You wouldn't happen to be seeing performance problems with
 Samba, I hope?

We're not using Samba over 100's of miles. :)

 There are some known Samba/FreeBSD issues that can cause
 abysmal performance (~30-40KB/sec -- yes, kilobytes/sec), even with
 100BT cards.

This may be due to problems that Matt Dillon just recently fixed this
weekend in FreeBSD's TCP/IP stack.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2001-11-30 Thread Nate Williams

 : FWIW, I'm seeing this as well.  However, this appears to be a new
 : occurance, as we were using a FreeBSD 3.X system for our reference test
 : platform.
 :
 :Someone recently submitted a PR about TCP based NFS being significantly
 :slower under 4.X. I wonder if it could be related?
 :
 : http://www.freebsd.org/cgi/query-pr.cgi?pr=misc/32141
 :
 :There is quite a lot of detail in the PR and the submitter has no
 :trouble reproducing the problem.
 :
 : David.
 
Hmm.  I'll play with it a bit tomorrow.  Er, later today.  One thing
I noticed recently with NFS/TCP is that I have to run 'nfsiod -n 4'
on the client to get reasonable TCP performance.  I don't think I
had to do that before.  It sure sounds similar... like a delayed-ack
problem or improper transmit side backoff.
 
It would be nice if someone able to reproduce the problem can test
the TCP connection with newreno turned off (net.inet.tcp.newreno)
and delayed acks turned off (net.inet.tcp.delayed_ack).  If that
fixes the problem, it narrows down our search considerably.

John Capo replied that turning off both did not help his setup any.

I was supposed to be testing things yesterday, but the guys got pulled
away on another project.  Perhaps today I'll get a chance to get some
tcpdump's and some more test data.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Found the problem, w/patch (was Re: FreeBSD performing worse than Linux?)

2001-11-30 Thread Nate Williams


 I believe I have found the problem.  The transmit side has a
 maximum burst count imposed by newreno.  As far as I can tell, if
 this maxburst is hit (it defaults to 4 packets), the transmitter
 just stops - presumably until it receives an ack.

Note, my experiences (and John Capo's) are showing degraded performance
when *NOT* on a LAN segment.  In other words, when packet loss enters
the mix, performance tends to fall off rather quickly.

This is with or without newreno (which should theoretically help with
packet loss).  John claims that disabling delayed_ack doesn't seem to
affect his performance, and I've not been able to verify if delayed_ack
helps/hurts in my situation, since the testers have been pressed for
time so I can't get them to iterate through the different settings.

I do however have some packet dumps, although I'm not sure they will
tell anything. :(



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Found the problem, w/patch (was Re: FreeBSD performing worse than Linux?)

2001-11-30 Thread Nate Williams

 :Note, my experiences (and John Capo's) are showing degraded performance
 :when *NOT* on a LAN segment.  In other words, when packet loss enters
 :the mix, performance tends to fall off rather quickly.
 :
 :This is with or without newreno (which should theoretically help with
 :packet loss).  John claims that disabling delayed_ack doesn't seem to
 :affect his performance, and I've not been able to verify if delayed_ack
 :helps/hurts in my situation, since the testers have been pressed for
 :time so I can't get them to iterate through the different settings.
 :
 :I do however have some packet dumps, although I'm not sure they will
 :tell anything. :(
 :
 :Nate
 
 Packet loss will screw up TCP performance no matter what you do.  

I know, dealing with that issue is my day job. :)

My point is that older FreeBSD releases (and newer Linux releases) seem
to be dealing with it in a more sane manner.  At least, it didn't affect
performance nearly as much as it does in newer releases.

 NewReno, assuming it is working properly, can improve performance
 for that case but it will not completely solve the problem
 (nothing will).  Remember that our timers are only good to around
 20ms by default, so even the best retransmission case is going to
 create a serious hiccup.

See above.

 The question here is... is it actually packet loss that is creating
 this issue for you and John, or is it something else?

In my opinion, it's how the TCP stack recovers from packet loss that is
the problem.

 The only way
 to tell for sure is to run tcpdump on BOTH the client and server
 and then observe whether packet loss is occurring by comparing the dumps.

Unfortunately, I'm unable to run tcpdump on the client, since it's
running NT and we're not allowed to install any 3rd party apps on it
(such as the WinDump package).

I'm not saying that I expect the same results as I get on the LAN
segment, but I *am* expecting results that are equivalent to what we
were seeing with FreeBSD 3.x, and that are in the same ballpark as
(or better than) the Linux systems sitting next to it.

Given that I get great LAN results, I no longer suspect I have an
Ethernet autonegotiation problem, since I can get almost wire speed with
local nodes, and close to maximum performance with our wireless products
when the network segment the FreeBSD server is on is relatively idle.

 I would guess that turning off delayed-acks will improve performance
 in the face of packet loss, since a lost ack packet in that case will
 not be as big an issue.

I'm not sure I agree.  I wouldn't expect it would help/hinder the
performance assuming a correctly performing stack, *UNLESS* the packet
loss was completely due to congestion.  In that case, delayed-acks *may*
improve things, but I doubt it would help much with TCP backoff and
such.




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2001-11-29 Thread Nate Williams

 I started noticing some TCP weirdness when I moved my bandwidth
 stats site from my office to my colo facility last week.  The colo
 is five miles away by road and 1200 miles away by network.  Netscape
 would stop for seconds at a time while loading the graph images but
 there was no consistency.  Worked properly sometimes and sometimes
 not.

Thanks for the much more detailed bug report vs. mine.  Can you try
disabling delayed acks to see if that helps, per another poster's
response to this thread?

sysctl net.inet.tcp.delayed_ack=0



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: tar and nodump flag (fwd)

2001-11-29 Thread Nate Williams

 Of course, if you only know GNUtar, Star's standard option handling
 _may_ look strange. But then why did FreeBSD switch to GNUtar instead
 of keeping a real tar?

Because there didn't exist a real tar at the time that FreeBSD was
created.


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: tar and nodump flag (fwd)

2001-11-29 Thread Nate Williams

  Of course, if you only know GNUtar, Star's standard option handling
  _may_ look strange. But then why did FreeBSD switch to GNUtar instead
  of keeping a real tar?
 
 Because there didn't exist a real tar at the time that FreeBSD was
 created.
 
 Well this is from BSD-4.3:

[ SNIP ]

 ... And it has no Copyright AT&T inside.
 

That may be, but at the time FreeBSD was created (so many years ago),
there was no 'real' tar to choose from.  BSD-4.3 tar was not available
publicly.  I'm not sure it's available publicly even now.  (Is it
part of 4.4-Lite/Lite2?)

We tried a number of different versions of tar to distribute initially
(including the one from Minix, which Andrew Tanenbaum graciously gave us
permission to use), but we decided that GNU-tar was the best of the
available versions.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2001-11-28 Thread Nate Williams

  I know my lack of information isn't helping much, and that I've not done
  much to help debug the problem.  However, all my attempts to track down
  what is causing this from a high-level (w/out digging into the code
  itself and analyzing tcpdump output) have come up empty.
 
 It's not only not helping much, but it's pretty lame:
 
 .) You won't run tcpdump.

I haven't been able to run tcpdump up till this point because the field
trial folks won't tell me when they're running the tests.  I haven't
been able to get any real-world data (yet), and after the recent
drubbing that Linux made, I may not get a chance because they're
chomping at the bit to replace the box now.

 .) You won't look at the code.

Wrong.  I've merged in bugfixes.  But, I don't have time to walk through
the entire TCP/IP stack trying to figure out what's changed between 3.X
and 4.X.

 .) You won't give good details.

I'm telling you all the details I have.  If I had better details, I'd
have given them.  I've not said anything up till this point because I
haven't had good details, but Greg's post reflected the same sort of
behavior I've been seeing, so I was simply agreeing with his unknown
friend.

 Is there anything else you can do other than to possibly spread FUD
 about FreeBSD's network performance?

You call it FUD, I call it real-world results.  I'm not the only one
who's seen these kinds of results.

 Get off your behind and do some serious investigation (I'm pretty
 certain you're capable of it) and we'll be able to work this out
 in no time.

It's not my problem, except because of my interest in making FreeBSD
look good.  Fighting for FreeBSD has made me more enemies than
friends here, so I'm not doing myself any favors by continually trying to
defend the numbers/results we're seeing.

If you want me to shut up and go into a corner, it might make you feel
better, but it certainly won't solve the real problem.


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2001-11-28 Thread Nate Williams

  If you want me to shut up and go into a corner, it might make you feel
  better, but it certainly won't solve the real problem.
 
 I made it clear that my problem was not with the complaint itself.

No, you didn't.

 My problem with it was with the lack of technical backing or any
 real way for me to reproduce the problem.

For what it's worth, I can't reproduce it either, but that's because of
lack of resources, not ability.  I don't have the necessary
hardware/network connection to cause the weird behavior.  If I did, I'd
give you better information.

However, I don't doubt they're seeing this behavior.  They've got graphs,
and what behavior I could reproduce showed that things were indeed
completely bogus.  Disabling newreno fixed *those* problems I could fix.

 to shut up and go into a corner, I want you to get off your ass and
 gather us all up some useful information so that we can solve the
 problem.

No can do, sorry.

 Lastly, if these people are intent on running Linnex, they'll spread
 however much FUD that they need to and provide as little information
 as possible in order to effect the switch.

It's not FUD.  These people aren't OS bigots, they're folks trying to
get a job done.  They could care less if the box was running WinNT, if
it got the job done.  (FWIW, the ftp client boxes are running NT and
Win2K, which *may* have something to do with it, but I don't know, since
I can't reproduce it. :()

Basically, all I'm saying is that Greg's friend's results are similar to
what I'm seeing.  What's causing this is unknown.  If it was known I
wouldn't have sent any messages out, since I could have fixed it
myself.  But, I can't, so you get a message saying 'Me Too', which isn't
much help except to verify that there may be some truth to the report.
(Call it unbiased independent verification.)






Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



RE: FreeBSD performing worse than Linux?

2001-11-28 Thread Nate Williams

 I had a similar problem, especially with different FreeBSD 4.x boxes (4.1.1,
 4.2, 4.3, 4.4-stable after dirpref merge) and with Windows NT systems, but
 the crap performance was only limited to FTP. SSH, NFS and CVS operations
 were all fine.

We're not using any of the other listed services, but we are using both
FTP and WWW, and both show decreased performance.  (However, the latter
may be a configuration issue, so it may be irrelevant.)

The pre-4.3 boxes are all using RTL8029 cards, and the 4.3+
 boxes are all Intel 8255x-based cards. The laptop has 4.4-stable and a
 D-Link DFE-650. The poor performance showed up in interactions with the
 100Mbit/s cards (Intel, D-Link).

I'm using fxp cards, as described in the email to Peter.

 They have all disappeared since I've
 explicitly set the links to 100Mbit/s with full-duplex. The switches and
 hubs are all 10/100 D-Links.

I've messed with autonegotiation.  The funny thing is that performance
on the LAN segment is quite good; it's the non-LAN performance that is
poor.

 My guess is that the autonegotiation feature of both the fxp and ed drivers
 somehow adversely affects FTP.

Hmm, I can hard-code and see what happens.  I did mess with the
autonegotiation stuff initially, and it didn't seem to make any
difference.

I will try again.
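
(To be concrete, what I mean by hard-coding is roughly the following,
assuming the switch port really is 100Mbit full-duplex; fxp0 is just my
interface name:)

  ifconfig fxp0 media 100baseTX mediaopt full-duplex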



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



RE: FreeBSD performing worse than Linux?

2001-11-28 Thread Nate Williams

 As a follow-up, I've just checked the newreno setting on the boxes I
 experienced the problems with - newreno is on.
 I'll try turning it off and see if I experience any problems. BTW, what does
 it do exactly?

It's supposed to improve the handling of retransmits/ACKs (and thus
performance) in the case of packet loss.
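
(The knob itself is just a sysctl, so toggling it for a test is a
one-liner; setting it back to 1 re-enables it:)

  sysctl net.inet.tcp.newreno=0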

 Also, a query on my timesheets shows that I had the same FTP problems on a
 FreeBSD 3.2 box with the dc driver talking to an NT4 Terminal Server with
 onboard Intel 8255x controller via a 10/100 hub (full duplex), and also a
 FreeBSD 4.0 box with the rl driver talking to an NT4 Terminal Server with
 onboard Intel 8255x controller via a 10Mbit/s hub (full duplex). Disabling
 autonegotiation on the FreeBSD NIC fixed it. Only FTP was affected in both
 cases - SMTP, HTTP and SSH were all fine.

I've got HTTP problems as well, although as I stated before, that might
be a configuration issue.  FTP is certainly affected.


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2001-11-27 Thread Nate Williams

 I've just been talking with a friend of mine from the Samba team.
 He's about to change jobs, and a lot of his work in future will
 involve FreeBSD.  He's just been doing some performance testing, and
 while the numbers are pretty even (since he discovered soft updates
 :-), he's noticing some significant performance differences,
 particularly on the TCP/IP area.

FWIW, I'm seeing this as well.  However, this appears to be a new
occurrence, as we were using a FreeBSD 3.X system for our reference test
platform.  I recently updated it to FreeBSD 4.4-RELEASE, and I'm getting
nothing but complaints about broken connections, poor performance, and
very inconsistent results.

They are now considering installing Linux on this box with the hope that
they can get consistent results.  (Unfortunately, FreeBSD 3.X is out
because I convinced them that we needed to upgrade to 4.X due to
security measures, so we can't go back.) 

Note, some of the performance issues were made better by disabling the
TCP newreno implementation, but it's still poor and very inconsistent
for hosts not on the local network, while the Linux box next to it gets
much more consistent results.

I know my lack of information isn't helping much, and that I've not done
much to help debug the problem.  However, all my attempts to track down
what is causing this from a high-level (w/out digging into the code
itself and analyzing tcpdump output) have come up empty.




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2001-11-27 Thread Nate Williams

 Note, some of the performance issues were made better by disabling the
 TCP newreno implementation, but it's still poor and very inconsistent
 for hosts not on the local network, while the Linux box next to it gets
 much more consistent results.
 
 For what it's worth I have disabled newreno at my customer sites as well
 and felt and heard less bogosity since.

It's actually pretty awful.  However, even with the fix I merged back
into RELENG_4, the performance with/without newreno is still *much*
worse (in terms of consistently giving the same results) than the code
in FreeBSD 3.x.

The interesting thing is that the application that's getting the most
press is one of our field technicians downloading a file over anonymous
ftp by hand, so it's not like we're generating tons of traffic, or
a lot of parallel connections.

The connections hang, abort, and those that complete have numbers that
are *all* over the map.  However, when connected to a Linux box on the
same network, none of these bad things occur. :(

(And, we've verified the network is up by running ping in another
window.)




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: FreeBSD performing worse than Linux?

2001-11-27 Thread Nate Williams

  FWIW, I'm seeing this as well.  However, this appears to be a new
  occurrence, as we were using a FreeBSD 3.X system for our reference test
  platform.  I recently updated it to FreeBSD 4.4-RELEASE, and I'm getting
  nothing but complaints about broken connections, poor performance, and
  very inconsistent results.
  
  They are now considering installing Linux on this box with the hope that
  they can get consistent results.  (Unfortunately, FreeBSD 3.X is out
  because I convinced them that we needed to upgrade to 4.X due to
  security measures, so we can't go back.) 
 
 And they somehow think any variant of Linux is going to be better on
 this point?

More to the point, it *IS* better with Linux. :(

(At least, comparing the latest FreeBSD with the 'latest' version of
some release of Linux.  I'm not sure if it's Mandrake, or RedHat, or
what.  I wasn't involved in that end of things.)

I'm still trying to figure out if it's some simple configuration that's
causing the problems, but the field trial folks are starting to get
annoyed with my constant 'excuses' as to why we shouldn't just switch to
Linux.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



dump/restore and DIRPREF

2001-10-02 Thread Nate Williams

After Kris's recent report of 'massive speedups' using dirpref, I've
been toying with the idea of backing up my box, and then restoring it.

However, backup/restore are so much faster than doing a tar/untar.

If I do a backup of my FS and then wipe the disk, will the 'restore'
cause the same (inefficient) directory layout to appear on disk?

I wouldn't think so since the directory layout is controlled by the
kernel, but I do know that dump/restore are much lower-layer tools than
tar, so they may possibly have layout information embedded in them.

Is my assumption correct?
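
(For concreteness, the workflow I have in mind is roughly the following,
after newfs'ing the target filesystem; /usr and /mnt/usr are only
placeholders:)

  dump -0af - /usr | (cd /mnt/usr && restore -rf -)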



Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: dump/restore and DIRPREF

2001-10-02 Thread Nate Williams

 After Kris's recent report of 'massive speedups' using dirpref, I've
 been toying with the idea of backing up my box, and then restoring it.
 
 However, backup/restore are so much faster than doing a tar/untar.
 
 If I do a backup of my FS and then wipe the disk, will the 'restore'
 cause the same (inefficient) directory layout to appear on disk?
 
 I wouldn't think so since the directory layout is controlled by the
 kernel, but I do know that dump/restore are much lower-layer tools than
 tar, so they may possibly have layout information embedded in them.
 
 Is my assumption correct?
 
 no.

Actually, yes, but I didn't word it very well above.

 Dump reads the raw device and finds everything by hand.
 
 Restore (like tar!) just open/write/close/chown regular files.

So my assumption *is* correct that it won't matter if I use dump/restore
to do the job, and that the lower-layer efficiencies of dump don't
affect the resulting layout done by restore.

Great, thanks for the quick response!


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: What is VT_TFS?

2001-09-05 Thread Nate Williams

   TRW supported a lot of the early
   386BSD/FreeBSD effort, back before Walnut Creek CDROM threw
   in and had us change the version number from 0.1 to 1.0 to
   make it a bit easier to sell.
  
  *Huh*  That's revisionist history if I've ever heard it.  We
  did a 1.0 release for FreeBSD because we wanted to differentiate
  ourselves from 386BSD (lot of bad blood there with the Jolitz's)
  and NetBSD (which had a 0.8 release at that time).
 
 FWIW: This is all archived on Minnie, thanks to Warren Toomey.

Sure, and I've got archives of it as well.

 I believe that Julian was the first corporately employed
 person, who had at least part of his paid job as working on
 the 386BSD/FreeBSD code.

Yes, and the original SCSI system was Julian's, which was later replaced
by CAM.

 Bill Jolitz approved a 0.5 interim release of 386BSD

And then Lynn revoked this, and posted a public message to the world
stating what obnoxious fiends we were.

As the person who spoke with both Bill and Lynn getting their approval
(Jordan did as well), I'm *very* familiar with the process.

 Some of the people who later split off NetBSD and released the
 NetBSD 0.8 release had reverse engineered the patchkit format,
 and built tools to do the same thing.

Actually, no.  It was the person who was going to take it from me (I
could name him, but it wouldn't do much good).  The new maintainer
didn't do anything or respond to email for over 3 months, so Jordan took
it over from where I left off.

NetBSD was Chris Demetriou's child after he got fed up with Bill's
promises never coming true.  I was the third committer on what would
later become the NetBSD development box, but I still naively assumed
that Bill's promises would eventually come to fruition.

NetBSD happened when Lynn's famous email was sent out claiming we were
all evil incarnate, and that no-one understood them anymore.

Soon afterward NetBSD 0.8 was released, but Adam Glass (the owner of the
second account on the NetBSD development box) was a big 68K fan, so his
influence (as well as Chris's) made NetBSD into a cross-platform OS.

 Progress was made on the 386BSD 0.5 release under the auspices
 of the patchkit maintainers, who had their position of control
 because I did not distribute the patchkit patch making shell
 scripts very widely, in order to ensure serialization, so that
 the patches, when applied, would work, have proper dependency
 tracking, and not result in conflicts.

Actually, all of the patchkit maintainers (myself, Jordan, and Rod) had
access to your shell software.  However, it turned out that avoiding
conflicts was hard, because serialization often required patches upon
patches upon patches upon patches, and at some point, the
creation/maintenance of the patchkit was greater than building a new
release.  (Plus the fact that you couldn't install the patches w/out a
running system, and the running system couldn't be installed on certain
hardware w/out patches, causing a catch-22).

 There was an angry posting on Usenet by Lynne Jolitz; in it,
 she claimed that 1/3 of the patchkit was good, 1/3 was benign
 (but unnecessary), and 1/3 was crap.  Then she would not say
 which 1/3 was which; this pissed off more people than the
 original claim that only 1/3 of the code was any good.
 
 After much sniping back and forth, Bill Jolitz posted, and
 revoked his previous permission to use the 386BSD name (a
 common law trademark belonging to him), and therefore he had
 effectively scuttled the interim release under the 386BSD
 name.

Close, but the original posting was by Bill, and the revocation was done
by Lynn.

 Unwilling to throw away many months of work, it was decided to
 go forward with the release, under the name FreeBSD 0.1.
 
 Walnut Creek CDROM suggested that the version number be changed
 to 1.0, in order to make it an easier sell on CDROM.
 
 Check with Warren, if you don't believe this account.

I was involved with the entire affair, and Warren's archive doesn't
include much of what later became 'core' email.  Also, it doesn't
include the phone conversations with Bill and Lynn, which (obviously)
aren't in the public domain.




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: What is VT_TFS?

2001-09-05 Thread Nate Williams

  You're replying to Terry for christs sake!  What did you expect if not
  revisionist $anything ?
  
  Which reminds me, Adrian still owes us his story about ref :-)
 
 Poul, you're going off again, without regard for facts.
 
 Remember the last time FreeBSD history came up, I proved Nate
 mistaken in his claim that my authorship of the original 386BSD
 FAQ was revisionist history.

No you didn't.  You changed the questions. :)

 You can check these facts out in the archives on Minnie; I can
 also provide almost every email I ever sent or received (if it
 resulted in a response from me to the author), from 1988 forward,
 since I have it all archived, since even at the time, I felt it
 might end up being an important historical record.  At the very
 least, it has provided me with a rich source of information from
 which to draw, in order to study Open Source projects in general,
 and 386BSD, FreeBSD, and NetBSD, in particular.

You're not the only pack-rat around here.  Be careful of your claims,
since they could come back to bite you.



Nate

ps. I still have my phone-logs of my conversations with Bill as well. ;)

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: What is VT_TFS?

2001-09-05 Thread Nate Williams

   Bill Jolitz approved a 0.5 interim release of 386BSD
  
  And then Lynn revoked this, and posted a public message to the world
  stating what obnoxious fiends we were.
 
 Actually, Lynne didn't have the right to do this; the trademark
 was Bill's, so the revocation wasn't valid until Bill did it.
 
 
   Some of the people who later split off NetBSD and released the
   NetBSD 0.8 release had reverse engineered the patchkit format,
   and built tools to do the same thing.
  
  Actually, no.  It was the person who was going to take it from me (I
  could name him, but it wouldn't do much good).  The new maintainer
  didn't do anything or respond to email for over 3 months, so Jordan took
  it over from where I left off.
 
 I was aware that CGD had reverse engineered it.

He didn't.  Chris never used the patchkit, nor did he ever release any
patches.  He used some of the patches, but never got involved in
anything but his own BSD release.

 I wasn't aware
 that you had given the tools to the people who later released
 the 1000 level patches.

He was supposed to be the next maintainer. :(

  NetBSD happened when Lynn's famous email was sent out claiming we were
  all evil incarnate, and that no-one understood them anymore.
 
 I talked to Lynne and Bill through much of that time; it was
 (unfortunately) a discussion well before the fireworks that
 resulted in him knowing about common law trademarks.  I was
 still on good terms with them, well after the NetBSD 0.8
 release, and we mostly just lost touch, rather than letting
 the bickering come between us.

I'm surprised you were able to talk to them.  Lynn refused to talk to me
(or anyone else) on the phone towards the end, and then the famous email
was released.

 As for the binaries, we had a number of patched floppy images
 floating around (I personally couldn't boot the thing at all
 until I binary edited the floppy to look for 639 instead of
 640 in the CMOS base memory data registers).

Right, but they weren't good enough for a complete install.

 Unfortunately, I cut myself out of the loop early on that,
 due to the impending purchase of USL by Novell, which went through
 in June of 1994, after off shore locations which were not Berne
 Convention signatories had been found to house the code in case the
 worst happened, so this email is not part of my personal archives.
 I hope someone, somewhere has saved it for posterity...

It's on 120MB QIC tapes in the drawer next to me.  The 'original'
386BSD/FreeBSD development box (prior to WC's involvement) with the tape
drive is still in service as my firewall. :)



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: local changes to CVS tree

2001-09-05 Thread Nate Williams

  CVS claims to support multiple vendor branches, but in practice it
  doesn't work in any useful sense.  There's at least one place in the
  CVS sources where the vendor branch is hard-coded as 1.1.1.  You
  really don't want to use multiple vendor branches -- trust me. :-)
  Use two repositories instead, or use perforce.
 
 I guess I'll ask the usual question:
 
 Any chance of getting CVSup to transfer from a remote repository
 to a local vendor branch, instead of from a remote repository to
 a local repository?

The problem is that you aren't just transferring bits from the HEAD, but
from multiple active branches.  As John already stated, CVS doesn't
handle multiple 'vendor' branches well (and in this case, the FreeBSD
tree has vendor (CSRG) branches, FreeBSD vendor branches (RELENG_2,
RELENG_3, ...), and contrib vendor branches (TCSH, GCC, etc.)).

CVS is simply not set up to do what you ask. :(
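
(The single-vendor-branch case that CVS *does* handle looks roughly like
this; the repository path and tags below are made up:)

  cd /path/to/freebsd/src
  cvs -d /home/ncvs-local import -m "FreeBSD RELENG_4 snapshot" \
      src FREEBSD RELENG_4_20010905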


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: local changes to CVS tree

2001-09-05 Thread Nate Williams

   Any chance of getting CVSup to transfer from a remote repository
   to a local vendor branch, instead of from a remote repository to
   a local repository?
  
  The problem is that you aren't just transferring bits from the HEAD, but
  from multiple active branches.  As John already stated, CVS doesn't
  handle multiple 'vendor' branches well (and in this case, the FreeBSD
  tree has vendor (CSRG) branches, FreeBSD vendor branches (RELENG_2,
  RELENG_3, ..., contrib vendor branches (TCSH, GCC, etc..)
  
  CVS is simply not setup to do what you ask. :(
 
 No, Terry's idea is sound as long as you only try to track one branch
 of FreeBSD.

So, you're saying that the person would choose the branch (which may be
RELENG_4 *OR* HEAD).  I can see how that would work for RELENG_4, but
for the HEAD, many of the files on the HEAD in /usr/src/contrib are on
vendor branches, which means it would be a *nightmare* to get that right
(IMO).

 I.e., you consider FreeBSD to be your vendor, and you do
 a checkout-mode type of fetch from a branch of the FreeBSD repository
 and directly import it onto your own vendor branch.  This would meet
 the needs of a lot of people, e.g., companies who make products based
 on FreeBSD.

Agreed.  Although, it may not be as useful to developers, who often have
to track development in multiple branches (for MFC's).

 I have had this on my to-do list for a long time, but I have no idea
 if or when it'll ever get implemented.  It would require a focused
 period of working on it that I just don't have these days.  Maybe if
 the economy gets worse ...

*sigh* Let's hope it doesn't come down to that.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: What is VT_TFS?

2001-09-04 Thread Nate Williams

  What is the file system that uses VT_TFS in vnode.h? Is it still available
  on FreeBSD?  Thanks.
 
 Julian added it for TRW Financial Services; the first public
 reference machine for 386BSD (which later became FreeBSD and
 NetBSD) was ref.tfs.com.

So far so good.  ref died an ugly horrible death, although I think I
still have lying around a 4mm backup tape of what was left of it.

 TRW supported a lot of the early
 386BSD/FreeBSD effort, back before Walnut Creek CDROM threw
 in and had us change the version number from 0.1 to 1.0 to
 make it a bit easier to sell.

*Huh*  That's revisionist history if I've ever heard it.  We did a 1.0
release for FreeBSD because we wanted to differentiate ourselves from
386BSD (a lot of bad blood there with the Jolitzes) and NetBSD (which had
a 0.8 release at that time).




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Recommendation for minor KVM adjustments for the release

2001-08-19 Thread Nate Williams

 - I would like to cap the size of the buffer cache at 200MB,
   giving us another 70MB or so of KVM which is equivalent to
   another 30,000 or so nmbclusters.
  
 That also seems like overkill for the vast majority of systems.
 
 But probably not for the large-memory systems (and on the machines
 with small memory the limit will be smaller anyway). Having a
 machine with a few gigs of memory and being able to use only 200MB
 for the buffer cache seems to be quite bad for a general-purpose
 machine. 
 
Uh, I don't think you understand what this limit is about. It's
 essentially the limit on the amount of filesystem directory data that
 can be cached. It does not limit the amount of file data that can
 be cached - that is only limited by the amount of RAM in the machine.

Ahh, thanks for the clarification.  I retract my previous email about
limiting this as well.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Silly crackers... NT is for kids...

2001-08-17 Thread Nate Williams

 Which just brings me to another point, why not just turn ssh on by default
 and turn telnetd off by default, given the latest exploit.

As Bruce already mentioned, this is the new default in 4.4.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Development for older FreeBSD releases

2001-07-12 Thread Nate Williams

  Building a new development box from a set of 2.2.8 CDs would
  certainly be a simple and guaranteed method if that's an option
  for you.
 
 Unfortunately it's not guaranteed...a lot of new hardware has been
 released since December 1998 (the date of 2.2.8-RELEASE).  :-p

Why is that important?  I've got brand-new boxes running 2.2.8-R w/out
any problems.  Occasionally I needed to back-port an ethernet driver or
fix, but otherwise things work fine.




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Status of encryption hardware support in FreeBSD

2001-07-02 Thread Nate Williams

   I think you've missed the fact that the '486 solution requires an
   add-on board (priced at $80.) and the faster cpu solution doesnt. That
   adds a lot of margin to get a faster MB, more than enough to
   compensate for the board.

   Not necessarily.  The upgraded motherboard also requires a faster
   processor, and the two parts added together are almost certainly going
   to be more than $80.

 
 There is nothing more annoying than someone who argues subjects he clearly 
 knows nothing about. 

I agree. :)

 You are way off on your pricing. Way off. A 633 Celeron 
 is under 50. Q1 for petes sake. The cost difference would be less than $20. 
 in quantity. It would be less than $80. Q1.

That's just the CPU.  You've left off the motherboard, as well as the memory
and other supporting hardware required for the CPU to do the work.

 Theres an old saying about being penny-wise and pound foolish. Using a
 486 in todays networking and cost environment is just plain moronic.

See your first sentence.  You *really* don't know what you are talking
about.





Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Status of encryption hardware support in FreeBSD

2001-06-30 Thread Nate Williams

     Really?  Have you even looked at the net4501 board which was mentioned?
     It's a single-board computer constructed for some specific communication
     applications, with no VGA or keyboard support, or spinning fans, and is
     pretty inexpensive and in a very small form factor.  Why do I want to
     replace this with a new motherboard?

    Because my motherboard is 20 times faster, has VGA support, doesnt require
    an add-on board to do fast encryption and costs about the same as yours.
    Thats why.

   Again, you are only considering your personal case.  If crypto should
   be needed on an embedded appliance, I don't think they would need
   a lightning-fast processor and VGA support, when crypto is all
   they want.
   
 
 Your premise that embedded appliances are somehow doomed to use pitifully 
 outdated processors is simply wrong.

Who said anything about pitifully outdated processors?  I can buy a heck
of a lot of CPU horsepower w/out buying the latest/greatest CPU.

As a matter of fact, in almost all cases, the best bang for the buck
would be for processors that you imply are 'pitifully outdated'.

 Embedded MBs with speeds enough to eliminate the requirement for 1) a
 slot and 2) an external board are available for less than the delta in
 cost.

That's simply not true.  If you are building 'truly embedded' systems
(ie; 100K+ boxes), you're not going to be using 'off the shelf' PC
parts.  You're going to be specifying particular parts to be used, and
in general a difference of $1-5 in the cost of the CPU makes a *huge*
difference in the price-point you are trying to hit.

Oftentimes it's easier to build a hierarchy of products, with each
individual tier having the same 'basic' setup (keeping costs low), and
by adding additional 'special purpose' boards, you can increase the
functionality of the box while only increasing the costs trivially.

 So, logically speaking, anyone 
 with a requirement for crypto would simply chose a faster embedded MB 
 solution.

You seem to have a different definition of embedded than many others do,
Bryan.  Have you ever been involved with specifying and building
embedded systems products, or are you just talking out of the side of
your mouth?



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Status of encryption hardware support in FreeBSD

2001-06-30 Thread Nate Williams

   Your premise that embedded appliances are somehow doomed to use pitifully
   outdated processors is simply wrong.

   Who said anything about pitifully outdated processors?  I can buy a heck
   of a lot of CPU horsepower w/out buying the latest/greatest CPU.

   As a matter of fact, in almost all cases, the best bang for the buck
   would be for processors that you imply are 'pitifully outdated'.
 
 I think you've missed the fact that the '486 solution requires an
 add-on board (priced at $80.) and the faster cpu solution doesnt. That
 adds a lot of margin to get a faster MB, more than enough to
 compensate for the board.

Not necessarily.  The upgraded motherboard also requires a faster
processor, and the two parts added together are almost certainly going
to be more than $80.

In any case, that's just one example.  There are many more examples
where a PIII-400 is more than adequate to do most of the processing, if
you don't have to do encryption.  If you involve encryption, we're
talking a lot more CPU, memory, and such.  You can easily get an add-on
board for *much* less than it costs to upgrade your memory, mboard, and CPU.




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Java (Was Re: NGPT 1.0.0 port to freebsd)

2001-06-29 Thread Nate Williams

  With the current license, this won't be installed as part of the base
  kernel.  (GPL and/or LGPL)
 
 I understand it'll continue to be a port. Am I hearing that it is
 unacceptable even as a temporary solution because of the license ?
 
  It's been answered time and time again over the past months, so you must
  not be paying attention.  The binary distribution hasn't been created
  because we don't have a legal license to do so (yet).
 
 Yes, I've been reading that for a long time now, but it (what Sun is
 doing) doesn't make any sense to me. Are Sun's reasons
 
 (a) Technical ? Passing of JCK etc ? 
 (b) Political ? Yet another competitor to Solaris ?

Sun is very picky about the license they want to give us.  In
particular, due to a recent fight in court they had with a well-known
company in the Pacific Northwest, the type of license they are proposing
protects them from just about everything, but doesn't give us enough
leeway to actually distribute anything under it.

The difficulty has been trying to appease Sun's lawyers w/out overly
restricting the team's ability to create and maintain the JDK long-term.
(In other words, we don't want to have to go through this over and over
again for each new JDK release).

 From your posting it appears that it's technical (not passing JCK),

Passing the JCK/TCK is simply an exercise that we haven't done yet.
Basically, once you pass the TCK, you must ship the *EXACT* version of
the binary without any modifications.  Since we are still doing
development of the port, it seemed a waste of time to run the TCK when
we may have to run it again if/when the license is signed.  (Running the
TCK is a long, drawn out process that one doesn't want to repeat if at
all possible.)

 well as political (not getting the license to run JCK). What is their
 answer reg: blackdown.org doing the same ?

Blackdown was given access to the JDK before the recent lawsuit, and as
such has 'special' privileges that Sun is no longer willing to grant
to new licensees.

 May be getting Zdnet to publish an article on this is the right way to
 go ? The bug parades and votes didn't seem to help much.

Actually, it's the reason that Sun is doing the dance with us right
now.  The whole Java affair has been a series of mis-steps by all
parties (myself, BSDi, and Sun), so no one party shares the entire
blame.  The most recent issue was the BSDi/WindRiver acquisition, which
left us w/out any legal advisors (unless we wanted to pay out of
pocket, which would have cost upwards of $2K to solve, not something I
can afford).

We're hoping to have something for you in the near future.
Unfortunately, my Sun contact went on vacation yesterday before I could
get some stuff ironed out, and when he gets back from vacation, I'm
going on vacation, so nothing can get done with this for at least
another month.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Query: How to tell if Microsoft is using BSD TCP/IP code?

2001-06-25 Thread Nate Williams

  http://www.microsoft.com/windows2000/interix/interixinc.asp
  Plenty of GNU stuff there, though it doesn't say so explicitly.
  Of course, they say it's all meant only for legacy Unix stuff.
 
 Can you substantiate your claim there is plenty of GNU stuff in
 Interix, or are you just talking out your ass as usual?

That's uncalled for, Wes.  Interix contains *lots* of GNU code, but to be
fair to M$, the company that developed Interix was acquired by M$ long
before Linux was as big of a threat to its business as it is now.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Query: How to tell if Microsoft is using BSD TCP/IP code?

2001-06-15 Thread Nate Williams

 I've had several marketing types approach me recently for details as
 to whether or not Microsoft was using the BSD TCP/IP stack and/or user
 utilities, and though it's always been common knowledge in the
 community that they were, when I set about to prove it I found it to
 be less easy than I'd thought.  I've strings'd various binaries and
 DLLs in my copy of Windows 98 but have yet to find anything resembling
 proof.  Does anyone out there have any details or discovery techniques
 for confirming or disproving this assertion either way?  It would be
 very useful (for us) from a PR standpoint to know.

I think the nmap folks noticed that the stack in Win98 (I don't remember
if it was in Win2K as well) behaved almost exactly like the BSD stack in
ways that weren't mandatory.  Their conclusion was that it had to be
based on the BSD code to get such similar behavior, since no other stack
behaved in this manner.
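
(That kind of check is basically nmap's remote OS fingerprinting; against a
test box it's along the lines of the following, with a made-up hostname.)

  # remote OS fingerprinting of a target host
  nmap -O win98-box.example.com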



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: -R for make update ?

2001-05-22 Thread Nate Williams

   Is there any specific reason why one needs to be able to
   write a lock to the CVS repo when running 'make update'
   to get a freshly checked out source?
  
  Yeah: you aren't running your CVS server in pserver
  mode, and so are trying to do a lock, either in your
  local copy, or over NFS.
  
  If you run your repository in pserver mode, the CVS server
  will be connected to over the network, instead of attacking
  your CVS repo directly, and you won't have the problem you
  are seeing, since the cvs server will be able to get the
  lock, no problem.
 
 It will also be freakishly slow, and use massive amounts of temp
 space.

No slower than cvs using rsh/ssh, although it does tend to create a lot
of inodes in /tmp.  (It doesn't create a lot of temp space, other than
what is used to create the directories...)
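
For reference, pointing a client at a pserver repository is only a couple
of commands (the hostname and repository path below are made up); the
server side basically just needs a 'cvspserver' entry in inetd.conf,
listening on port 2401:

  CVSROOT=:pserver:builder@cvshost.example.com:/home/ncvs
  export CVSROOT
  cvs login                # caches the scrambled password in ~/.cvspass
  cvs -q checkout src/sys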




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: if_fxp - the real point

2001-03-14 Thread Nate Williams

 You cant strong-arm companies into making their intellectual property
 rights publicly available. its a losing argument.

Strange, in that it worked for a number of video-card vendors when
XFree86 either dropped support and/or never supported the card in
question.



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Setting memory allocators for library functions.

2001-02-26 Thread Nate Williams

[ Memory overcommit ]

 One important way to gain confidence that you're little box won't
 silently crash at the worst possible time for the customer is to
 be able to *prove* to yourself that it can't happen, given certain
 assumptions. Those assumptions usually include things like "the
 hardware is working properly" (e.g., no ECC errors) and "the compiler
 compiled my C code correctly".
 
 Given these basic assumptions, you go through and check that you've
 properly handled every possible case of input (malicious or otherwise)
 from the outside world. Part of the "proof" is verifying that you've
 checked all of your malloc(3) return values for NULL.. and assuming
 that if malloc(3) returns != NULL, then the memory is really there.
 
 Now, if malloc can return NULL and the memory *not* really be there,
 ^^^
I assume you meant 'can't' here, right?

 there is simply no way to prove that your code is not going to crash.

Even in this case, there's no way to prove your code is not going to
crash.

The kernel has bugs, your software will have bugs (unless you've proved
that it doesn't, and doing so on any significant piece of software will
probably take longer to do than the amount of time you've spent writing
and debugging it).

And, what's to say that your correctly working software won't go bad
right in the middle of your program running.

There is no such thing as 100% fool-proof.

 This memory overcommit thing is the only case that I can think of
 where this happens, given the basic assumptions of correctly
 functioning hardware, etc. That is why it's especially annoying to
 (some) people.

If you need 99.999% fool-proof, memory-overcommit can be one of the
many classes of problems that bite you.  However, in embedded systems,
most folks design the system with particular software in mind.
Therefore, you know ahead of time how much memory should be used, and
can plan for how much memory is needed (overcommit or not) in your
hardware design.  (We're doing this right now in our 3rd generation
product at work.)

If the amount of memory is unknown (because of changing load conditions,
and/or lack-of-experience with newer hardware), then overcommit *can*
allow you to actually run 'better' than a non-overcommit system, though
it doesn't necessarily give you the same kind of predictability when you
'hit the wall' the way a non-overcommit system does.

Our embedded OS doesn't do memory-overcommit, but sometimes I wish it
did, because it would give us some things for free.  However, *IF* it
did, we'd need some sort of mechanism (ie; AIX's SIGDANGER) to tell us that
memory was getting tight, so the application could start dumping unused memory,
or at least have an idea that something bad was happening so it could
attempt to cleanup before it got whacked. :)



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Setting memory allocators for library functions.

2001-02-26 Thread Nate Williams

 Even in this case, there's no way to prove your code is not going to
 crash.
 
 Sure.  But you can at least prove that all crashes are the result of bugs,
 not merely design "features".

'Proving' something is correct is left as an exercise for the folks who
have way too much time on their hands.  At my previous job (SRI), we had
folks who worked full-time trying to prove algorithms.

In general, proving out simple algorithms takes months, when the
algorithm itself took 1-2 hours to design and write.

Another thing is that crashes may have occurred because of invalid
input, invalid output, valid but not expected input, etc...

Again, memory overcommit is only *one* class of bugs that is avoided.
The phrase "can't see the forest for the trees" jumps to mind. :)




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Nate Williams

 is mkdir(3) guaranteed to be atomic?  
 
 Yes.
 
 Are there filesystem type cases where this might not be the case 
 (NFS being my main concern )
 
 No.

Yes.  NFS doesn't guarantee atomicity, because it can't.  If the mkdir
call returns, you have no guarantee that the remote directory has been
created (caching, errors, etc...)
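
(For what it's worth, the usual mkdir-as-a-lock idiom -- sketched roughly
below, with a made-up path -- is exactly the kind of thing that gets shaky
once NFS caching and retransmitted requests enter the picture.)

  #!/bin/sh
  # mkdir-as-a-lock sketch; the lock directory path is just a placeholder
  lockdir=/var/spool/mytool/lock

  if mkdir "$lockdir" 2>/dev/null; then
      trap 'rmdir "$lockdir"' 0     # drop the lock on exit
      # ... critical section ...
      echo "got the lock, doing the work"
  else
      echo "lock already held (or a stale NFS reply)" >&2
      exit 1
  fi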




Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Nate Williams

   Are there filesystem type cases where this might not be the case 
   (NFS being my main concern )
   
   No.
  
  Yes.  NFS doesn't guarantee atomicity, because it can't.  If the mkdir
  call returns, you have no guarantee that the remote directory has been
  created (caching, errors, etc...)
 
 I can handle it if there is a case where both fail, but is there a
 case where both can SUCCEED ?? 

What do you mean 'both succeed'?


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Nate Williams

   I can handle it if there is a case where both fail, but is there a
   case where both can SUCCEED ?? 
  
  What do you mean 'both succeed'?
 
 My understanding is that, on non-broken filesystems, calls to
 mkdir(2) either succeed by creating a new directory, or fail and return
 EEXIST (note: excluding all other types of errors :-))
 
 However, NFS seems to have issues, so the question is:  could both
 mkdir(2) calls actually succeed and claim to have created the same
 directory (even if it is?), or is one ALWAYS guaranteed to fail, as on
 a normal fs.

You're implying that you are making two calls to create the same
directory.  Am I correct?

The answer is 'maybe'.  It depends on the remote NFS server.  Matt or one
of the other NFS gurus may know more, but I wouldn't count on *anything*
over NFS.  If you need atomicity, you need lockd, which isn't
implemented on FreeBSD.
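
(You can at least see whether a given server registers a lock manager at
all with something like the following; the hostname is a placeholder.)

  # list the RPC services the NFS server advertises and look for nlockmgr
  rpcinfo -p nfsserver.example.com | grep nlockmgr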


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: make bug? (dependency names with '$')

2001-02-20 Thread Nate Williams

Jason Brazile writes:
  :   I want to construct a portable Makefile to build a java application.
  
  That's not possible.  Java specifies a half assed make system as part
  of the language, so it is nearly impossible to use another make system
  on top of it unless you are willing to live with a whole slew of
  problems.

That's not true.  I built a 100K line application using make w/out any
problems.  (It builds on Win9X, NT, FreeBSD, and Solaris).


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



RE: make bug? (dependency names with '$')

2001-02-20 Thread Nate Williams

  
I want to construct a portable Makefile to build a java application.
 
 I've played with Java and Make in the past, but I found that spawning a new
 instance of the Java compiler is more expensive than compiling a pretty big
 bunch of files. gcc starts up a lot quicker than a JVM.

Jikes is your friend.  We switched from using javac because of this.




Nate
ps. This should probably be moved to freebsd-java.

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: make bug? (dependency names with '$')

2001-02-20 Thread Nate Williams

 Jason Brazile [EMAIL PROTECTED] writes:
I want to construct a portable Makefile to build a java application.
 
 Don't bother.
 
  a) use jikes instead of javac, it's much faster and gives better
 diagnostics.

Agreed.

  b) to rebuild, just list all the source (.java) files on the jikes
 command line. Jikes will figure out what needs rebuilding and what
 doesn't. If there are too many files, list them all (each on one
 line) in a text file (e.g. 'sources') and specify '@sources' on
 the command line.

Disagree.  If you want it to be portable, don't use a non-standard
extension to a tool, such as jikes' dependency features.

We used jikes for our day-to-day development, but moved back to using
'javac' for our Q/A and final builds.  That way we can complain to Sun
when things don't work. ;)



Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: make bug? (dependency names with '$')

2001-02-20 Thread Nate Williams

  Disagree.  If you want it to be portable, don't use a non-standard
  extension to a tool, such as jikes dependency features.
  
  We used jikes for our day-day development, but move back to using
  'javac' for our Q/A and final builds.  That way we can complain to Sun
  when things don't work. ;)
 
 So what's the problem? javac also automatically builds dependencies,
 it's just not as good at it as jikes.

Not in a way that's usable by make.  Jikes can be used to build external
dependency files.
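
If I remember right, the invocation is something like this (file and
directory names are invented, so treat it as a sketch):

  # +M asks jikes to write makefile-style .u dependency files as it compiles
  jikes +M -d classes Foo.java Bar.java
  # -> Foo.u and Bar.u, which a Makefile can pull in with 'include'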

 Also, my experience is that unless you're paying Sun significant
 amounts of $$, their reaction to bug reports is to close their eyes,
 hum real loud and hope they go away.

True, but Sun is no different than anyone else in that regard.  I have
found the individual developers somewhat easier to work with, if you
can get a contact within Sun.


Nate

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: kernel type

2000-12-17 Thread Nate Williams

   PS. Before this starts a flame war, let me say that I really believe
   that MacOS X is a very good thing for everyone involved, although the
   choice of Mach for the microkernel seems a little arbitrary if not
   misguided.
  
  It's hardly arbitrary, though the jury's still out as to whether it's
  misguided or not.  You may remember that Apple bought a little company
  called NeXT a few years back.  Well, that company's people had a lot
  to do with the OS design of OS X and let's not forget the design of
  NeXTStep.
 
 Yeah, but in what sense is that use of Mach a serious
 microkernel, if it's only got one server: BSD?  I've never
 understood the point of that sort of use.  It makes sense for a
 QNX or GNU/Hurd or minix or Amoeba style of architecture, but
 how does Mach help Apple, instead of using the bottom half of
 BSD as well as the top half?

Kernel threads out of the box?


Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: kernel type

2000-12-17 Thread Nate Williams

  Kernel threads out of the box?
 
 The Mach kernel makes use of a thread primitive and a task primitive;
 however, their BSD OS personality is largely single-threaded with
 something approximately equivilent to our Giant -- they refer to this as a
 "Funnel", through which access to the BSD code is funneled so as to
 prevent problems.

Interesting.

 My understanding from a brief chat while in their
 Cupertino office is that they are in the process of gradually pushing
 locks down for specific subsystems (networking, etc), in much the same
 style we are.  While there, I suggested that closer coordination between
 our development teams could save a lot of redundant work, given that the
 primitives we're using are probably quite similar (although presumably
 non-identical).  It would be great if someone wanted to step up and help
 Apple coordinate their work with our work better, as it would allow more
 code sharing and more rapid development, as well as more wide-spread
 testing.  If anyone is interested in looking at doing this, I have a list
 of relevent contacts in their kernel group.

The reason I mentioned the above is that Apple has HotSpot (Sun's fast
Java VM) running under OS/X, which requires kernel threads.

Until FreeBSD's kernel threads are a bit more 'user-friendly', we can't
do anything with HotSpot.  (As I understand it, HotSpot runs very well
on OS/X, so they seem to have gotten that part right...)


Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: vm_pageout_scan badness

2000-12-03 Thread Nate Williams

  I'm going to take this off of hackers and to private email.  My reply
  will be via private email.

Actually, I was enjoying the discussion, since I was learning something
in the process of hearing you debug this remotely.

It sure beats the K&R vs. ANSI discussion. :)



Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Legacy ethernet cards in FreeBSD

2000-11-10 Thread Nate Williams

 :   3Com 3c503ISA
 
 I think so.  The ed driver supports this

I'm pretty sure the ed driver doesn't support the 503.  I think we
dropped support for the 503 a *REALLY* long time ago (2.1 days...)

 :   DEC EtherworksISA
 :   DEC DE205 ISA
 
 don't know about these.  lnc driver supports them maybe ?

It used to be that the le driver supported them, but apparently it's broken now.




Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Legacy ethernet cards in FreeBSD

2000-11-10 Thread Nate Williams

   :   3Com 3c503ISA
   
   I think so.  The ed driver supports this
  
  I'm pretty sure the ed driver doesn't support the 503.  I think we
  dropped support for the 503 a *REALLY* long time ago (2.1 days...)
 
 You are probably confusing it with the 501 or 505.  The 503 is basically 
 an NE1000 (with a better probe routine).

You're indeed correct.  It was the 501 that we dropped support for,
sorry for the false information.



Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: IPFW bug/incoming TCP connections being let in.

2000-10-20 Thread Nate Williams

 I had blocked incoming TCP connections coming into my network using
 IPFW, and I noticed that my brother was able to establish a Napster
 connection, even though I had blocked it earlier.

*sigh*

Thanks to Guy Helmer for being patient with me as I fretted about this.

I just found out that Napster leaves a client running in the background,
and even though I had added firewall rules to block new connections to
the server, the old 'established' connection was still up and running.

I didn't realize that was the case, so every time I 'restarted'
Napster the packets were getting through.  In fact, what had happened
was that the 'GUI' was being stopped/restarted, but the network portion
was running the entire time.

Once Guy walked me through and showed me that things were indeed working
correctly, we rebooted the box and my rules worked fine.

Sorry for the false alarm!


Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Blocking Napster (WAS: IPFW bug/incoming TCP connections being let in.)

2000-10-20 Thread Nate Williams

   I had blocked incoming TCP connections coming into my network using
   IPFW, and I noticed that my brother was able to establish a Napster
   connection, even though I had blocked it earlier.
  
  *sigh*
  
  Thanks to Guy Helmer for being patient with me as I fretted about this.
  
  I just found out that Napster leaves a client running in the background,
  and even though I had added firewall rules to block new connections to
  the server, the old 'established' connection was still up and running.
  
 
 This might be helpful to you and others.  Since napster uses what ever
 ports it can find the best way is to block the servers.
 
 # Napster
 $fwcmd add deny tcp from any to 208.178.163.56/29 via tun0
 $fwcmd add deny tcp from any to 208.178.175.128/29 via tun0
 $fwcmd add deny tcp from any to 208.49.239.240/28 via tun0
 $fwcmd add deny tcp from any to 208.49.228.0/24 via tun0
 $fwcmd add deny tcp from any to 208.184.216.0/24 via tun0

I had these rules in place, but it appears that there are new servers in
place.  I also had to add

 $fwcmd add deny tcp from any to 64.124.41.0/24 via tun0

(I'm guessing it's a class C; I had just hit two addresses in that
block, so I blocked the entire class C.)

The above is the reason I was trying to do a 'port' block of the Napster
servers, because trying to keep up with IP addresses is a real pain in
the butt...



Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



IPFW bug/incoming TCP connections being let in.

2000-10-19 Thread Nate Williams

I had blocked incoming TCP connections coming into my network using
IPFW, and I noticed that my brother was able to establish a Napster
connection, even though I had blocked it earlier.

I thought, no worries, I'll just block it at the port level.

I read a couple of articles, and noted that connections from  to the
server should be blocked.

Easy enough, I'll just block my clients from establishing connections to
port .
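
The rule was basically the following (the actual port number is elided
above, so I'll write it as a placeholder; ${client} is the same as in the
rules further down):

  ipfw add deny log tcp from ${client} to any ${server_port} out xmit ep0 setup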

Unfortunately, that doesn't work.  Looking at tcpdump output, the
'server' appears to initiate a TCP connection from  - some random
port.  My firewall rules do *NOT* allow incoming TCP connections to be
made to internal machines, since they only allow 'setup' packets to go
out.

So, how can Napster work?  What happened to the 3-way handshake?  I
could see an issue if the OS's were hacked to get around this and not
require a 3-way handshake, but the client in this case is a Win98 box.

I'm *really* confused, and more than a little concerned, since it
appears that somehow Napster is getting around the 3-way handshake.
Does Napster create 'raw' sockets that emulate TCP traffic?

In an attempt to attempt to debug what was going on, I stuck the
following rules in place;

00016 allow log tcp from ${client} to any out xmit ep0 setup
00017 allow log tcp from any to ${client} in recv ep0 setup
00018 allow log tcp from ${client} to any out xmit ep0 established
00019 allow log tcp from any to ${client} in recv ep0 established
00020 allow log tcp from ${client} to any out xmit ep0
00021 allow log tcp from any to ${client} in recv ep0

Then, I started up Napster and logged what showed up.

It was suprising (to me at least).

One would think that rules 16 or 17 *must* be hit first, because the
connection has to be initially established.  However, it doesn't work
that way.

ipfw: 18 Accept TCP CLIENT-IP:1897 NAPSTER: out via ep0
ipfw: 19 Accept TCP NAPSTER: CLIENT-IP:1897 in via ep0
ipfw: 19 Accept TCP NAPSTER: CLIENT-IP:1897 in via ep0
ipfw: 19 Accept TCP NAPSTER: CLIENT-IP:1897 in via ep0
ipfw: 18 Accept TCP CLIENT-IP:1897 NAPSTER: out via ep0
ipfw: 19 Accept TCP NAPSTER: CLIENT-IP:1897 in via ep0
ipfw: 19 Accept TCP NAPSTER: CLIENT-IP:1897 in via ep0
ipfw: 19 Accept TCP NAPSTER: CLIENT-IP:1897 in via ep0
ipfw: 18 Accept TCP CLIENT-IP:1897 NAPSTER: out via ep0
ipfw: 18 Accept TCP CLIENT-IP:1897 NAPSTER: out via ep0
ipfw: 19 Accept TCP NAPSTER: CLIENT-IP:1897 in via ep0


No 'setup' packets are sent at all.

Confused and concerned




Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: IPFW bug/incoming TCP connections being let in.

2000-10-19 Thread Nate Williams

  I had blocked incoming TCP connections coming into my network using
  IPFW, and I noticed that my brother was able to establish a Napster
  connection, even though I had blocked it earlier.
  
  I thought, no worries, I'll just block it at the port level.
  
  I read a couple of articles, and noted that connections from  to the
  server should be blocked.
  
  Easy enough, I'll just block my clients from establishing connections to
  port .
  
  Unfortunately, that doesn't work.  Looking at tcpdump output, the
  'server' appears to initiates a TCP connection from  - some random
  port.  My firewall rules do *NOT* allow incoming TCP connections to be
  made to internal machines, since they only allow 'setup' packets to go
  out.
  
  So, how can Napster work?  What happened to the 3-way handshake?  I
  could see an issue if the OS's were hacked to get around this and not
  require a 3-way handshake, but the client in this case in a Win98 box.
 
 The remote napster client sends a message through the central Napster
 server, which relays the message to your Napster client to tell your
 machine to make a connection to the remote machine.

This much I understand.  However, I'm not making any downloads, so my
client isn't (yet) connecting to another client.  I'm trying to block
connections to the server.  How is the client connecting to the server?
I don't see *any* TCP setup packets being sent out by my client, so how
is the client communicating with the server via TCP?

(I *AM* seeing TCP packets being sent out, but they are being sent as
'established' connections, before a setup packet is being sent.)

 The regular 3-way handshake is occurring.  It's just not initiated by the
 machine you would expect.

The only way my client can work is if it initiates the connection, but I
don't see it initiating a connection to port .

So, how then is the Napster server at port  communicating with my client?

 You'd have to block outgoing SYNs to any
 outside host at port  (but anyone who knows anything about ports could
 change their port number and get around your block).

That was what I did, but the rule is never being hit.  However, there
appears to be a connection from my client to port  on the server.




Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Automatic updates (was Re: How long for -stable...)

2000-10-04 Thread Nate Williams

  I think that we can do a lot with cvsupd.  I've used cvsupd to grab
  binaries on an experimental basis and it seems to work great.  I've
 
 Hmmm.  Does cvsupd also move a target out of the way if it already
 exists and it's in the process of replacing it?  What if the target is
 chflag'd but can be unprotected at the current security level?

I know the author, and I'll bet you he could be convinced to modify it
to do what we need. *grin*



Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Automatic updates (was Re: How long for -stable...)

2000-10-03 Thread Nate Williams

[ Culled the list way down, and moved it to -hackers ]

 We could also look into providing an "update" command or something
 which would pull either sources or binaries over from a snapshot box
 and make the process of getting up to the branch-head a lot easier.
 It's long been on my wishlist and I'm at the point where I'd be
 willing to devote some BSDi resources to both writing the software
 and setting up a build box for creating the relevant binaries on an
 ongoing basis.

Whoo hoo.  Sounds like a *great* plan!


Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Trouble with dynamic loading of C++ libs in PHP v4.02 on FreeBSD 4.1

2000-09-14 Thread Nate Williams

  We are trying to create a dynamic library of extensions to PHP 4.02.
  This library implements a C++ class and has a C interface using the "Extern C"
  declaration.
  This library is linked with libstdc++.so.3 .
  
  If the library is called in a C program = no trouble.
  If the library is called from PHP with the "dlopen()" function =
  [Warning: Unable to load dynamic library
  '/users/em/ftp/php/test_cpp/debug/libphptest.so' - /usr/lib/libstdc++.so.3:
  Undefined symbol "__ti9exception" in
  /usr/local/httpd/htdocs/www/Iti_q/testso.php on line 2
 
 This is because FreeBSD uses an archive library "libgcc.a" instead
 of a shared library.  That means that everything from libgcc which
 is needed by your shared libraries had better already be linked into
 the main program.  The right solution is for us to use a shared
 library for libgcc.

At one point libgcc was shared (FreeBSD 1.*), and it caused way more
problems than it solved.




Nate


  (Note to eager committers: don't do this without
 coordinating with obrien.  There are ramifications that aren't
 obvious.)
 
 As a work-around, try adding this to your main program.  (I am
 assuming it is a C++ program too.)
 
 extern void terminate(void);
 void (*kludge_city)(void) = terminate;
 
 Another possibility would be to link explicitly with libgcc when
 creating your dynamic library:
 
 cc -shared -o libphptest.so ... -lgcc
 
 That might cause other problems, but probably not.
 
 John
 -- 
   John Polstra   [EMAIL PROTECTED]
   John D. Polstra  Co., Inc.Seattle, Washington USA
   "Disappointment is a good sign of basic intelligence."  -- Chögyam Trungpa
 
 
 
 To Unsubscribe: send mail to [EMAIL PROTECTED]
 with "unsubscribe freebsd-hackers" in the body of the message


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Trouble with dynamic loading of C++ libs in PHP v4.02 on FreeBSD 4.1

2000-09-14 Thread Nate Williams

  At one point libgcc was shared (FreeBSD 1.*), and it caused way more
  problems that it solved.
 
 Do you remember any details?  I analyzed it pretty thoroughly (I
 thought) more than a year ago, and decided the shared library was the
 best solution.

If I remember right (and my memory is fuzzy for stuff that far back)
there were a couple of issues.

1) Speed.  Shared libraries are slower than static libraries (PIC
   et al.), and the stuff in libgcc tends to be performance-centric.
2) Ease of use.  Every time we upgraded or modified libgcc, it required
   keeping around the old libgcc.so.  I don't think we had much
   experience with versioning back then, so we tended to either 'never'
   increment the versions or 'overdo' it.

We weren't making releases nearly as often then, so keeping backwards
compatibility was more difficult as people tended to be running -stable
instead of releases.  Nowadays we handle backwards compatibility better,
so having N different versions of libgcc is still annoying, but easier
to deal with (/usr/lib/compat).

 At that time, I converted my -current system to using a shared libgcc
 and ran it like that for 6 months at least without any problems.  I
 believe David O'Brien was also using a shared libgcc for a long time
 without problems.

There were no problems, it was just annoying.  Many were of the opinion
that 'not everything should be linked shared', since it tends to clutter
up /usr/lib, and also tends to slow things down.

Static linking isn't always bad

 The non-shared libgcc used to work for us mainly because on the i386
 architecture practically nothing from libgcc was ever used.

Aren't there quite a few 'math' routines inside libgcc (mods and divs
and all sorts of low-level 32/64-bit routines that are quite often used...)

 That is no longer the case, because of all the exception support that
 has been added to it for C++.  If a shared library uses exceptions (as
 does libstdc++) but the main program doesn't, you get undefined symbol
 errors as the original poster reported.  Using a shared libgcc
 completely solves that.

Ahh.  Can't we just make the linker *always* add libgcc onto the end?
Because it's a static library, if the symbol isn't used, then it
won't be linked into the library?

 Also, we _desperately_ need to switch away from the setjmp/longjmp
 exception implementation and start using the now-standard DWARF2
 implementation.  It makes a tremendous performance difference even in
 programs that don't use exceptions at all.  (I measured it once.)  But
 that in turn requires more support from libgcc, and exacerbates the
 problems associated with using a non-shared libgcc.

How so?


Nate


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message


