Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Dmitri Nikulin

On 2/1/07, ricardo [EMAIL PROTECTED] wrote:

On Wed, 31 Jan 2007 21:35:42 -0500 (EST)
Justin C. Sherrill [EMAIL PROTECTED] wrote:

 On Wed, January 31, 2007 3:18 pm, Matthew Dillon wrote:
 

  I am seriously considering our options with regards to ZFS or a
  ZFS-like filesystem.  We clearly need something to replace UFS,
  but I am a bit worried that porting ZFS would be as much work
  as simply designing a new filesystem from scratch.

 One of the reasons people are so excited about ZFS is because it
 solves the problem of managing space.  Disk management is and has
 always been a pain in the rear, and ZFS goes a long way toward
 reducing that.

 While constructing a new filesystem will help your goals, it will also
 mean that DragonFly users miss out on having all the other advantages
 that come with ZFS.  Put another way, we're going to lose planned
 functionality.

  You're implying that ZFS=God; in other words, you're implying that
there could be no better FS than ZFS. A very obnoxious statement!


That's not his point. He means that ZFS, while very good at what it
is, would not be optimal for transparent clustering. And a file system
which is designed for clustering won't necessarily be as good as ZFS
on single machines. Either way, some use cases become sub-optimal,
and it's a choice of what's more important to do first.

ZFS is optimized all the way down to avoiding byte swapping with a
simple but adequate endian-adaptiveness technique, and, being as new
as it is, it still has a few years' worth of optimization potential.
It's definitely not going to perform as well on DragonFly as it does
on Solaris for a long time, but it could still be better than UFS by
design alone. Any optimization over that is just a bonus.
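
The endian-adaptiveness idea mentioned above can be sketched roughly as follows (a simplified illustration only; ZFS's real on-disk format differs, and the magic value here is just for demonstration): blocks are written in the writer's native byte order, led by a known magic number, and a reader byte-swaps only when the magic decodes wrong in its own native order.

```python
import struct
import sys

# Simplified sketch: each block starts with a known magic number written
# in the writer's native byte order.  A reader decodes the magic in its
# own native order; if it comes out byte-swapped, the whole block needs
# swapping.  (Illustrative only; ZFS's real on-disk layout differs.)

MAGIC = 0x00BAB10C  # illustrative per-block magic value

def write_block(value):
    """Write a 64-bit value in the writer's native order, magic first."""
    order = "<" if sys.byteorder == "little" else ">"
    return struct.pack(order + "QQ", MAGIC, value)

def read_block(blob):
    """Decode a block from either byte order, swapping only on mismatch."""
    native = "<" if sys.byteorder == "little" else ">"
    magic, value = struct.unpack(native + "QQ", blob)
    if magic != MAGIC:
        # Magic didn't match in native order: the writer used the other
        # byte order, so re-decode with swapped endianness.
        swapped = ">" if native == "<" else "<"
        magic, value = struct.unpack(swapped + "QQ", blob)
    assert magic == MAGIC
    return value

# A block round-trips regardless of which byte order wrote it.
assert read_block(write_block(42)) == 42
```

The payoff is that same-endian machines (the common case) never pay for a byte swap; only cross-endian reads do.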

On the other hand, I'm not convinced there's a need to make a new
filesystem just for clustering, not just yet anyway. How about 9P?
It's not like clustering is a brand new problem, it's had decades of
research applied and there is no shortage of work to reference until
it's practical to attempt to do better.

---
Dmitri Nikulin

Centre for Synchrotron Science
Monash University
Victoria 3800, Australia


Re: buildworld error v1.8 - undefined reference to `EVP_sha256'

2007-02-01 Thread Sascha Wildner

j s wrote:

Here's the error.  I have attempted building several times including
from a cleaned /usr/src and cvsup with tag=.  and
tag=DragonFly_RELEASE_1_8_Slip

cc -O -pipe -mtune=pentiumpro
-I/usr/src/secure/libexec/sftp-server/../../lib/libssh
-I/usr/src/secure/libexec/sftp-server/../../../crypto/openssh-4
-L/usr/pkg/lib -o sftp-server sftp-server.o sftp-common.o -lssh
-lcrypto
/usr/obj/usr/src/world_i386/usr/lib/libssh.so: undefined reference to
`EVP_sha256'
*** Error code 1

Stop in /usr/src/secure/libexec/sftp-server.
*** Error code 1

I've modified make.conf to avoid building certain subsystems:

NO_BIND=        true    # do not build BIND
NO_GAMES=       true    # do not enter the games subdirectory
NO_I4B=         true    # do not build isdn4bsd package
NO_LPR=         true    # do not build lpr and related programs
NO_SENDMAIL=    true    # do not build sendmail and related programs


Don't know where to go with this and a suggestion would be very helpful.


Hm, how do you build? Do you have other settings in make.conf?

I'm not able to reproduce this using the settings you gave.

Sascha

--
http://yoyodyne.ath.cx


Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Bill Hacker

Matthew Dillon wrote:

:Besides the finalization of vkernel, what else can we expect in 2.0? There
:are many long-awaited (not only by me) features and additions:
:- ZFS

I am seriously considering our options with regards to ZFS or a
ZFS-like filesystem.  We clearly need something to replace UFS,
but I am a bit worried that porting ZFS would be as much work
as simply designing a new filesystem from scratch.

One big advantage of a from-scratch design is that I would be
able to address the requirements of a clustered operating system
in addition to the requirements of multi-terabyte storage media.


Tilt.

I thought that was one of the design goals of ZFS?

Would it not make sense also to look again at the pioneering work done in Plan 
9?

'Clustering' per se is not new - only the DFLY approach to same.

Bill



ath(4) major update

2007-02-01 Thread Sepherosa Ziehau

Hi all,

Following is a patch updating ath(4) to the latest hal:
http://leaf.dragonflybsd.org/~sephe/ath0.9.20.3.diff

This patch is against src/sys

For HEAD users, this patch should apply cleanly.

For 1.8 users, you probably need to apply the following patch first:
http://leaf.dragonflybsd.org/~sephe/sample.h.diff

Please test/review it.
If there are no objections, it will be committed in two weeks.

Best Regards,
sephe

--
Live Free or Die


Re: vkernel migration

2007-02-01 Thread Bill Hacker

Nigel Weeks wrote:

Just an idea for thought over your next coffee...

I'm wondering if it would be conceivably possible to move a vkernel process (and
any sub-processes it had) to another host? It'd have to stop temporarily, or at
least slow down immensely while pumping all the userland data inside the
vkernel to the other host.

It might just be easier to have a vkernel powered by multiple physical
kernels (on different machines), giving the appearance of an SMP machine
inside the vkernel.

(Insert 40,000 lines of code here...)

Nige.



Hmm... there is a sort of 'left-handed' precedent..

Before Microslog got its 'one way' licensing mitts on Connectix' Virtual PC..
one could close a VPC session in 'save state' - which preserved even the 
(virtual) program counter of the (virtual) PC (not to mention registers, stack, 
memory) in the 'container file' along with everything else.


That container file could be e-mailed halfway 'round the globe, loaded on 
another VPC-equipped box, and fired up - with the cursor in the same spreadsheet 
cell as when saved. Or whatever. Whether minutes - or months - earlier.


And it fired-up fast.

MS later prevented a container created on box A from running on box B.
Or even between logged-in user A and logged-in user B on the same box.

So long as the registers, caches, RAM, stack, and program counter are preserved, 
it is as portable as the (virtualized?) hardware supports.


The IP stack, OTOH, *should* get hairy - but they handle that anyway - as when a 
network cable is swapped over to a different link and IP.


Bill


Re: buildworld error v1.8 - undefined reference to `EVP_sha256'

2007-02-01 Thread YONETANI Tomokazu
On Wed, Jan 31, 2007 at 03:10:21PM -0800, j s wrote:
 cc -O -pipe -mtune=pentiumpro
 -I/usr/src/secure/libexec/sftp-server/../../lib/libssh
 -I/usr/src/secure/libexec/sftp-server/../../../crypto/openssh-4
 -L/usr/pkg/lib -o sftp-server sftp-server.o sftp-common.o -lssh


You have something similar to this

LDFLAGS=-L/usr/pkg/lib

in /etc/make.conf?  Try commenting it out, or use a custom make.conf
only for {build,install}{kernel,world} to avoid this.  You can use
$__MAKE_CONF to switch make.conf files:

cd /usr/src
env __MAKE_CONF=/etc/src.conf make buildworld

Cheers.


Re: buildworld error v1.8 - undefined reference to `EVP_sha256'

2007-02-01 Thread j s

On 2/1/07, Sascha Wildner [EMAIL PROTECTED] wrote:

j s wrote:
 Here's the error.  I have attempted building several times including
 from a cleaned /usr/src and cvsup with tag=.  and
 tag=DragonFly_RELEASE_1_8_Slip


snip


Hm, how do you build? Do you have other settings in make.conf?

I'm not able to reproduce this using the settings you gave.

Sascha


Hi Sascha,
Thanks for taking a look. I'm following the man build instructions.
I started with an empty /usr/src and cvsup -g -L 2 file.  The latest
cvsup I had set tag=DragonFly_RELEASE_1_8_Slip, then proceeded
directly to make buildworld.

I'm also attempting the upgrade from DragonFly 1.7.0-DEVELOPMENT #0.

I've set LDFLAGS=-L/usr/pkg/lib, CPPFLAGS=-I/usr/pkg/include and
LD_LIBRARY_PATH=/usr/pkg/lib:/usr/local/lib:/usr/lib in the
environment.

I didn't bother sending any more make.conf content because it is
pretty standard install config.  However, here you go just in case:

# $FreeBSD: src/etc/defaults/make.conf,v 1.97.2.80 2003/02/15 16:34:56
trhodes Exp $
# $DragonFly: src/etc/defaults/make.conf,v 1.17 2006/07/02 16:03:42
swildner Exp $
#
#
#CPUTYPE=i686
#NO_CPU_CFLAGS= true    # Don't add -march=cpu to CFLAGS automatically
#NO_CPU_COPTFLAGS=true  # Don't add -march=cpu to COPTFLAGS automatically
#
# The CCVER variable controls which GCC-version to use by default.  It
# should be set using ?= so as not to interfere with CCVER overrides from
# userland or the buildworld.  We currently recommend that an override NOT
# be set in /etc/make.conf and that gcc 3.4 not yet be used to build the boot
# blocks, boot loader, or the kernel.
#
#CCVER?=gcc34   # (use GCC 3.4, default)
#CCVER?=gcc40   # (use GCC 4.0, experimental, must define WANT_GCC40)
#
# Build gcc40 with world.  This won't make gcc40 the default compiler
#WANT_GCC40=yes
#
# CFLAGS controls the compiler settings used when compiling C code.
# Note that optimization settings above -O (-O2, ...) are not recommended
# or supported for compiling the world or the kernel - please revert any
# nonstandard optimization settings to -O before submitting bug reports
# to the developers.
#
#CFLAGS= -O -pipe
#
# CXXFLAGS controls the compiler settings used when compiling C++ code.
# Note that CXXFLAGS is initially set to the value of CFLAGS.  If you wish
# to add to CXXFLAGS value, += must be used rather than =.  Using =
# alone will remove the often needed contents of CFLAGS from CXXFLAGS.
#
# BDECFLAGS are a set of gcc warning settings that Bruce Evans has suggested
# for use in developing FreeBSD and testing changes.  They can be used by
# putting CFLAGS+=${BDECFLAGS} in /etc/make.conf.  -Wconversion is not
# included here due to compiler bugs, e.g., mkdir()'s mode_t argument.
#
BDECFLAGS=  -W -Wall -ansi -pedantic -Wbad-function-cast -Wcast-align \
   -Wcast-qual -Wchar-subscripts -Winline \
   -Wmissing-prototypes -Wnested-externs -Wpointer-arith \
   -Wredundant-decls -Wshadow -Wstrict-prototypes -Wwrite-strings
#
# WARNS_WERROR causes -Werror to be added when WARNS is in effect.
#WARNS_WERROR=  yes
#
# To compile just the kernel with special optimizations, you should use
# this instead of CFLAGS (which is not applicable to kernel builds anyway).
# There is very little to gain by using higher optimization levels, and doing
# so can cause problems.
#
#COPTFLAGS= -O -pipe
#
# Strip the file before installing.  Setting this to nothing will keep
# the debugging symbols in the installed files.
#STRIP= -s
#
# Compare before install
#INSTALL=install -C
#
# To build ppp with normal permissions
#PPP_NOSUID=true
# To enable installing ssh(1) with the setuid bit turned on
#ENABLE_SUID_SSH=   true
#
# To use bsdtar as the standard tar.  The default is GNU tar.
#WITH_BSDTAR=   true
#
# To avoid building various parts of the base system:
NO_BIND=        true    # do not build BIND
#NO_CRYPT=      true    # do not build crypto code
#NO_CVS=        true    # do not build CVS
#NO_FORTRAN=    true    # do not build g77 and related libraries
NO_GAMES=       true    # do not enter the games subdirectory
NO_I4B=         true    # do not build isdn4bsd package
#NO_IPFILTER=   true    # do not build IP Filter package
#NO_LIBC_R=     true    # do not build libc_r (re-entrant version of libc)
NO_LPR=         true    # do not build lpr and related programs
#NO_MAILWRAPPER=true    # do not build the mailwrapper(8) MTA selector
#NO_MODULES=    true    # do not build modules with the kernel
#NO_OBJC=       true    # do not build Objective C support
#NO_OPENSSH=    true    # do not build OpenSSH
#NO_OPENSSL=    true    # do not build OpenSSL (implies NO_OPENSSH)
#NO_PKGTOOLS=   true    # do not build included package tools (for pkgsrc)
NO_SENDMAIL=    true    # do not build sendmail and related programs
#NO_SHARE=      true    # do not enter the share subdirectory
NO_X=           true    # do not compile in XWindows support (e.g. doscmd)
#NOINFO=        true    # do not make or install info files

Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Justin C. Sherrill
On Thu, February 1, 2007 3:20 am, Dmitri Nikulin wrote:

 That's not his point. He means that ZFS, while very good at what it
 is, would not be optimal for transparent clustering. And a file system
 which is designed for clustering won't necessarily be as good as ZFS
 on single machines. Either way, some use cases become sub-optimal,
 and it's a choice of what's more important to do first.

Sort of.  I'm saying that if Matt rolls his own filesystem instead of
using ZFS, that new filesystem is either:

1: not going to have the variety of tools available with zfs for handling
things like disk pooling/snapshots/data scrubbing/insert zfs term here.

2: going to have those features, which means Matt's time is going to be
eaten up reimplementing features already present in other filesystems.

It's a moot point until Matt can evaluate modifying existing filesystems
vs. building a new one.  I don't want NIH-ism to get in the way of
having something neat, though.




Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Simon 'corecode' Schubert

Justin C. Sherrill wrote:

It's a moot point until Matt can evaluate modifying existing filesystems
vs. building a new one.  I don't want NIH-ism to get in the way of
having something neat, though.


Yah.  I think porting ZFS and possibly inventing a new FS or pimping up ZFS can 
run in parallel and thus ZFS _should_ be done.

cheers
 simon

--
Serve - BSD +++  RENT this banner advert  +++ASCII Ribbon   /\
Work - Mac  +++  space for low €€€ NOW!1  +++  Campaign \ /
Party Enjoy Relax   |   http://dragonflybsd.org  Against  HTML   \
Dude 2c 2 the max   !   http://golden-apple.biz   Mail + News   / \





clustered (file) system on multiple host

2007-02-01 Thread Martin P. Hellwig
Reading the list about vkernel, clustering and file systems, I've begun to 
wonder.


How on earth can you make a cluster without paying for the extreme 
overhead for locking everything?


Is it a question of locking more specifically, or is there some 'magical' 
technology that helps you out of that? Logically speaking (and this is more a 
problem of my knowledge) there always must be some overhead, since all 
machines must know at the same time what the state of an object is, thus 
creating 'much'(?) traffic for synchronization. So I guess that there 
must be some tricks to avoid too much traffic.


Or do I just get the concept of clustering wrong (I see it as 
high-availability combined with load-balancing)?


--
mph


Re: clustered (file) system on multiple host

2007-02-01 Thread Matthew Dillon

:Reading the list about vkernel, clustering and file systems, I've begun to 
:wonder.
:
:How on earth can you make a cluster without paying for the extreme 
:overhead for locking everything?
:
:Is it a question of locking more specifically, or is there some 'magical' 
:technology that helps you out of that? Logically speaking (and this is more a 
:problem of my knowledge) there always must be some overhead, since all 
:machines must know at the same time what the state of an object is, thus 
:creating 'much'(?) traffic for synchronization. So I guess that there 
:must be some tricks to avoid too much traffic.
:
:Or do I just get the concept of clustering wrong (I see it as 
:high-availability combined with load-balancing)?
:
:-- 
:mph

It actually isn't too hard.  You simply lock a larger area than
requested (at least until you start getting conflicts).  So if a
program opens the file fubar the machine acquires a range lock on
the entire contents of the file if it can.  Accesses then do not
require individual locking.  If multiple programs are competing
for the same data space, the range locks become more fine grained.

It all comes down to managing the conflict space.  The fewer
conflicts there are, the larger the spaces that can be reserved.

That's the gist of how it can be done efficiently.
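
The scheme described above can be sketched as a toy model (illustrative only, under assumed semantics; this is not DragonFly's actual cache-coherency code): a client optimistically reserves the whole file, and an over-broad reservation is shrunk back to the region its holder actually asked for when someone else's request conflicts with it.

```python
# Toy sketch of optimistic range locking: reserve the whole file when
# there is no contention, and fall back to finer-grained ranges only
# when requests conflict.  (Illustrative; not DragonFly's real code.)

class RangeLockManager:
    def __init__(self, file_size):
        self.file_size = file_size
        self.grants = {}    # owner -> granted (start, end), half-open
        self.requests = {}  # owner -> range the owner actually asked for

    def acquire(self, owner, start, end):
        # On conflict with an over-broad grant, shrink it back to what
        # that owner actually requested (locks become finer grained).
        for other in list(self.grants):
            if other == owner:
                continue
            gs, ge = self.grants[other]
            if gs < end and start < ge:          # grants overlap
                rs, re = self.requests[other]
                if rs < end and start < re:      # true data conflict
                    raise BlockingIOError("range held by another owner")
                self.grants[other] = (rs, re)
        # Grant the largest range free of the (now shrunken) grants.
        lo, hi = 0, self.file_size
        for other, (gs, ge) in self.grants.items():
            if other == owner:
                continue
            if ge <= start:
                lo = max(lo, ge)
            elif gs >= end:
                hi = min(hi, gs)
        self.grants[owner] = (lo, hi)
        self.requests[owner] = (start, end)
        return lo, hi

mgr = RangeLockManager(file_size=1 << 20)
# First client asks for 4 KB but is granted the entire file.
assert mgr.acquire("a", 0, 4096) == (0, 1 << 20)
# A second client forces "a" back to its real range, then gets the rest.
assert mgr.acquire("b", 8192, 12288) == (4096, 1 << 20)
```

The key property is the one Matt notes: with few conflicts, nearly every access proceeds under one coarse reservation and needs no per-access locking.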

-Matt
Matthew Dillon 
[EMAIL PROTECTED]


Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Matthieu Guéguen

On 2/1/07, Simon 'corecode' Schubert [EMAIL PROTECTED] wrote:

Yah.  I think porting ZFS and possibly inventing a new FS or pimping up ZFS can 
run in parallel and thus ZFS _should_ be done.

cheers
  simon



Yes, I second that. Maybe ZFS could be improved to handle the problems
Matt listed. And I think that could be easier than creating a whole new
filesystem (and testing this new FS, tuning it, writing tools for it...).

By the way, congrats to all DragonFly developers for this 1.8 release !


Re: buildworld error v1.8 - undefined reference to `EVP_sha256'

2007-02-01 Thread j s

On 2/1/07, YONETANI Tomokazu [EMAIL PROTECTED] wrote:

On Wed, Jan 31, 2007 at 03:10:21PM -0800, j s wrote:
 cc -O -pipe -mtune=pentiumpro
You have something similar to this

LDFLAGS=-L/usr/pkg/lib

in /etc/make.conf?  Try commenting out, or use a custom make.conf
only for {build,install}{kernel,world} to avoid this.  You can use
$__MAKE_CONF to switch make.conf files:


It appears that my custom LDFLAGS setting in the environment screwed
things up.  After replying to Sascha I got to thinking that might be
the case so I set about removing it from the environment and rerunning
the build.  Happy to report the build was successful.

 DragonFly 1.8.0-RELEASE #0: Thu Feb  1 08:54:00 PST 2007 


BTW, the LDFLAGS setting is in the root .cshrc file, not in the
make.conf file and nothing exists in the make.conf file to override
this setting.

Thanks,
js


Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Chris Csanady

2007/1/31, Matthew Dillon [EMAIL PROTECTED]:


I am seriously considering our options with regards to ZFS or a
ZFS-like filesystem.  We clearly need something to replace UFS,
but I am a bit worried that porting ZFS would be as much work
as simply designing a new filesystem from scratch.


It is worth noting that Sun is looking at extending ZFS to be a
cluster-aware filesystem.  If you dig through their mailing list
archives, you will see that it is a topic that pops up every now and
then.

In any case, I feel that it would be best to port ZFS, even if you
intend to create a new filesystem.  It is a great local filesystem,
and it will offer compatibility with Solaris, MacOS, and FreeBSD. (and
probably Linux once it is relicensed.)  It seems a waste not to take
advantage of Sun's efforts, especially since the code is so
portable--in fact, almost all of the OS dependent bits are in a single
file.

Pawel Jakub Dawidek made very rapid progress on the FreeBSD port.
Considering that DragonFly now has a virtual kernel and much simpler
VFS, the project should be vastly easier.  If you were to work on it,
I wouldn't be surprised if you could finish the core of the work in a
weekend.  Probably the most time-consuming part will be interfacing
with the device layer; things like supporting EFI labels,
automatically discovering disks, and so forth.

They even have a porting guide if you are interested:

 http://www.opensolaris.org/os/community/zfs/porting


One big advantage of a from-scratch design is that I would be
able to address the requirements of a clustered operating system
in addition to the requirements of multi-terabyte storage media.


Even with a from-scratch design, ZFS is well worth careful
examination.  There are many things it does very well, and
re-implementing even a fraction of its features would be very time
consuming.  In the meantime, it would be good to have ZFS.

The one part of it that I think could be handled better is the
inflexibility of the redundancy.  It would be nice to specify
redundancy per-dataset, and not be tied to the underlying static vdev
redundancy.  RAIDZ is also a bit inflexible itself; it would be great
to throw arbitrarily sized disks into a pool and not have to worry
about the layout at all.  To distribute blocks and recovery blocks
(much like with par2) across machines.  Full 3-way mirroring is quite
expensive, but would be necessary over a WAN.  The current limitations
though seem to be the result of a compromise, considering that this is
a very difficult problem.
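
The par2-style idea above, spreading data blocks plus recovery blocks across machines, can be illustrated with the simplest possible erasure code: a single XOR parity block. (Real schemes such as par2 or RAIDZ use Reed-Solomon codes so they can survive multiple simultaneous losses; this sketch survives exactly one.)

```python
# Minimal erasure-coding illustration: one XOR parity block lets a set
# of equal-sized data blocks survive the loss of any single block.

def make_parity(blocks):
    """XOR all blocks together into one recovery block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(blocks, parity, lost_index):
    """Rebuild the block at lost_index from the survivors plus parity."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return make_parity(survivors + [parity])

data = [b"node0data", b"node1data", b"node2data"]  # blocks spread over hosts
parity = make_parity(data)                         # stored on a fourth host
assert recover(data, parity, lost_index=1) == b"node1data"
```

Compared with full 3-way mirroring, this stores one extra block per stripe instead of two extra copies of everything, which is the storage-efficiency argument the paragraph above is making.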

Finally, I think that the network filesystem is the single largest
gaping hole in modern operating systems.  All of the commonly
available systems are absolutely awful, and I have been anticipating
DragonFly's CCMS.  It seems that with this and the VFS messaging work,
it should be almost trivial to create a fast and solid remote
filesystem.  That said, the very paradigm of the network filesystem
should probably be tossed in favor of the clusterable filesystem which
I imagine you have in mind.

Chris


Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Bill Hacker

Chris Csanady wrote:

very well-thought-out post in re ZFS. Thanks!

I'd only add that porting one or more 'foreign' fs in general seems to be a good 
idea - it is bound to show up things not yet covered well.


In all of the published comparison tests, I have never seen a single 'always 
best' fs anyway.


Pre-ZFS, though, JFS and XFS were consistently 'nearly always in the top 3' - IOW 
pretty good all-around *compromises*.


And therein lies the rub:  One man's needs for optimization differ from the next's.

But one thing is *for sure*: NFS and SMBFS and others (TVFS, Andrew...) have 
'problems' of one sort or another.


So *anything* that makes for better shared storage - even if it must rely on 
nothing slower than gig-E or 10-gig-E to be at its best - is a plus.


...shared multi-host SCSI RAID controllers being rudely rare and expensive...

;-)

Bill



Re: vkernel migration

2007-02-01 Thread Dave Hayes
Dmitri Nikulin [EMAIL PROTECTED] writes:
 What? Opening this e-mail is acceptance of the terms specified in
 this e-mail?

Not exactly, according to what I read. Nevertheless it's pretty
unenforceable, for the same reasons you can't stop spam.

 A great example is already in DragonFly - process checkpointing. I
 don't even know how it works as well as it does.

Has this been coupled with the new vkernel mods yet? In other words,
could I build a checkpointable kernel and then pause it, put it away
for a month, and come back to it? ( Pardon me if I'm sounding naive
here. :) )
--
Dave Hayes - Consultant - Altadena CA, USA - [EMAIL PROTECTED] 
 The opinions expressed above are entirely my own 

We have just enough religion to make us hate, but not
enough to make us love one another.  
  --Jonathan Swift




jail/virtual servers and multiple network interfaces

2007-02-01 Thread Jeffrey Williams
I regularly make use of jail in FreeBSD for hosting virtual servers 
(typically for application development and testing, and infrastructure 
server configuration testing).


One thing I have always found frustrating is the inability to set up 
additional network interfaces on the machine so that they can be 
dedicated to the jailed servers, in such a way that all the host's 
network traffic stays on the primary interface, and all the jail's 
network traffic uses its own dedicated interface.  I.e. a virtual 
network stack, for the jailed server, that can be bound directly to a 
NIC separate from the one used by the host environment.


I did read a 2003 Usenix white paper by Marko Zec on this 
topic (http://www.usenix.org/events/usenix03/tech/freenix03/full_papers/zec/zec_html/index.html), 
but have not found any mature implementations.


Anyways, I was curious if this type of functionality is being 
implemented, or in consideration for implementation, in DragonFlyBSD?


Thanks
Jeffrey Williams



Re: jail/virtual servers and multiple network interfaces

2007-02-01 Thread Simon 'corecode' Schubert

Jeffrey Williams wrote:
One thing I have always found frustrating is the inability to set up 
additional network interfaces on the machine so that they can be 
dedicated to the jailed servers, in such a way that all the host's 
network traffic stays on the primary interface, and all the jail's 
network traffic uses its own dedicated interface.  i.e. a virtual 
network stack, for the jailed server, that can be bound directly to a 
separate NIC than the one used by the host environment.


Not quite it, but what happens when you assign the second NIC's IP to the jail?

Anyways, I was curious if this type of functionality is being 
implemented, or in consideration for implementation, in DragonFlyBSD?


Not yet.  It adds quite some infrastructure as well, so I am not sure if it is 
worth it.  Apart from that, we're always happy to welcome enthusiastic 
developers :)

cheers
 simon

--
Serve - BSD +++  RENT this banner advert  +++ASCII Ribbon   /\
Work - Mac  +++  space for low €€€ NOW!1  +++  Campaign \ /
Party Enjoy Relax   |   http://dragonflybsd.org  Against  HTML   \
Dude 2c 2 the max   !   http://golden-apple.biz   Mail + News   / \





Re: jail/virtual servers and multiple network interfaces

2007-02-01 Thread Jeffrey Williams

Simon 'corecode' Schubert wrote:
 Jeffrey Williams wrote:
 One thing I have always found frustrating is the inability to set up 
additional network interfaces on the machine so that they can be 
dedicated to the jailed servers, in such a way that all the host's 
network traffic stays on the primary interface, and all the jail's 
network traffic uses its own dedicated interface.  i.e. a virtual 
network stack, for the jailed server, that can be bound directly to a 
separate NIC than the one used by the host environment.


 Not quite it, but what happens when you assign the second NIC's IP to 
the jail?
I have actually tried setting that up; unfortunately all of the jail's 
outbound traffic still goes through the primary interface (even though 
the jail's IP address is not bound to that interface).  The crux of the 
problem is that even though the jail's services are bound to the IP address 
of the second NIC, the jail still shares a common network stack with the 
host environment, such that it uses the host's routing tables, arp 
tables, etc., which will always route traffic to the first interface. 
Some people have suggested that I might be able to solve the problem 
with a creative implementation of ipfw/static routing, but I don't think 
that would really work, because the problem isn't limited to layer 3 
(IP), but is also layer 2 (ethernet/arp), when both NICs are connected 
to the same network segment.
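
The distinction being drawn here, one shared routing table versus a routing table per virtual stack, can be mocked up as follows (a toy model using string prefixes in place of real IP route lookup; actual network-stack virtualization is kernel work along the lines of Zec's paper):

```python
# Toy model of why binding a jail to an IP isn't enough: route lookup is
# done per *stack*, not per source address.  Giving each jail its own
# stack (its own routing table) lets its traffic leave via its own NIC.

class NetStack:
    def __init__(self):
        self.routes = []  # (prefix, interface) pairs

    def add_route(self, prefix, iface):
        self.routes.append((prefix, iface))
        self.routes.sort(key=lambda r: -len(r[0]))  # longest prefix first

    def lookup(self, dst):
        for prefix, iface in self.routes:
            if dst.startswith(prefix):
                return iface
        raise LookupError("no route to host")

# One shared stack: host and jail traffic both exit via the host's NIC,
# no matter which address the jail's sockets are bound to.
shared = NetStack()
shared.add_route("", "em0")  # default route via the primary NIC
assert shared.lookup("203.0.113.9") == "em0"

# Virtualized: the jail's own stack has its own default route via em1.
jail_stack = NetStack()
jail_stack.add_route("", "em1")
assert jail_stack.lookup("203.0.113.9") == "em1"
```

This is exactly why ipfw tricks on the shared stack fall short: the second stack's separate ARP and routing state, not just a different bound address, is what keeps the jail's traffic on its own NIC.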



 Anyways, I was curious if this type of functionality is being 
implemented, or in consideration for implementation, in DragonFlyBSD?


 Not yet.  It adds quite some infrastructure as well, so I am not sure 
if it is worth it.  Apart from that, we're always happy to welcome 
enthusiastic developers :)
I am flattered that you think I am a developer; alas, I am simply a 
humble sys admin.  That being said, I have been working hard to collect 
enough spare hardware to offer up some testing and development platforms 
to the DragonFly crowd (as well as a few other projects), and will 
hopefully be able to help by participating in testing in the near future.


I have been watching you guys with great anticipation, ever since Matt 
first announced the project.


 cheers
  simon

Thanks,
Jeff