Re: jail/virtual servers and multiple network interfaces

2007-02-02 Thread Magnus Eriksson

On Thu, 1 Feb 2007, Jeffrey Williams wrote:

i.e. a virtual network stack, for the jailed server, that can be bound 
directly to a separate NIC than the one used by the host environment.


  This sort of thing can be done with NetBSD/Xen and bridging*, I think. 
I'd be surprised if there weren't a vkernel way of doing it (if not now, 
then soon).



 * Networking is not one of my strong suits, but there seem to be at least 
two ways: http://mail-index.netbsd.org/port-xen/2006/11/15/0005.html


 (Sorry if this is off topic.)


Magnus



Re: ath(4) major update

2007-02-02 Thread Ja'far Railton
On Thu, Feb 01, 2007 at 07:55:24PM +0800, Sepherosa Ziehau wrote:
 Hi all,

 Following is a patch updating ath(4) to the latest hal:
 http://leaf.dragonflybsd.org/~sephe/ath0.9.20.3.diff

 This patch is against src/sys

 For HEAD users, this patch should be applied cleanly.

 For 1.8 users, you probably need to apply the following patch first:
 http://leaf.dragonflybsd.org/~sephe/sample.h.diff

 Please test/review it.

Hello

I applied both patches to 1.8-Release and I get the following error
message in dmesg. What is my next step? (This is on a Thinkpad R30 and a
D-Link DWL-G650.) TIA.

[snip]
sio1: can't drain, serial port might not exist, disabling
ppc0: parallel port not found.
Manufacturer ID: 71021200
Product version: 7.1
Product name: Atheros Communications, Inc. | AR5001-- | Wireless
LAN Reference Card | 00 |
Functions: Network Adaptor, Memory
Function Extension: 02808d5b00
Function Extension: 0240548900
Function Extension: 02001bb700
Function Extension: 0280a81201
Function Extension: 0200366e01
Function Extension: 0200512502
Function Extension: 02006cdc02
Function Extension: 0280f93703
Function Extension: 0200a24a04
Function Extension: 0308
Function Extension: 040600032f123456
Function Extension: 0501
CIS reading done
ath0: Atheros 5212 mem 0x8801-0x8801 irq 11 at device 0.0 on
cardbus0
ath0: unable to attach hardware; HAL status 3
device_probe_and_attach: ath0 attach returned 6
cbb0: CardBus card activation failed
ad0: 38154MB FUJITSU MHT2040AS [77520/16/63] at ata0-master UDMA66
acd0: CDROM CD-224E at ata1-master PIO4
[/snip]


hardware compatibility - Nvidia SATA controller

2007-02-02 Thread Jon Nathan
Hello,

I'm trying to install Dragonfly BSD 1.8 on a Dell XPS 600.  It has an
integrated Nvidia Nforce 4 Intel Edition SATA RAID Controller, but
Dragonfly can't find the hard disk attached to it.

I looked for hardware compatibility lists, but couldn't find anything
referencing different SATA chipsets and what was supported.  Mailing
list searches seem to indicate that ATAng was not really implemented,
but that was a while ago:

http://leaf.dragonflybsd.org/mailarchive/kernel/2003-11/msg00043.html

Any info on this chipset?  Thanks,

-Jon



Re: hardware compatibility - Nvidia SATA controller

2007-02-02 Thread Nuno Antunes

On 2/2/07, Jon Nathan [EMAIL PROTECTED] wrote:

Hello,

I'm trying to install Dragonfly BSD 1.8 on a Dell XPS 600.  It has an
integrated Nvidia Nforce 4 Intel Edition SATA RAID Controller, but
Dragonfly can't find the hard disk attached to it.

I looked for hardware compatibility lists, but couldn't find anything
referencing different SATA chipsets and what was supported.  Mailing
list searches seem to indicate that ATAng was not really implemented,
but that was a while ago:

http://leaf.dragonflybsd.org/mailarchive/kernel/2003-11/msg00043.html

Any info on this chipset?  Thanks,

-Jon




Hello Jon,

Have you tried NATA?

For NATA, add options PCI_MAP_FIXUP to your kernel, comment out the
old ata devices, add device nata, device natadisk and device natapicd.
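
In kernel-config form, those edits would look roughly like this (a sketch
against a GENERIC-style config; the old device names are taken from the
stock config and should be checked against your own kernel config file):

```
options  PCI_MAP_FIXUP   # needed by the nata driver to remap PCI resources

#device  ata             # old ATA driver -- comment out
#device  atadisk
#device  atapicd

device   nata            # new ATA infrastructure
device   natadisk        # ATA disks
device   natapicd        # ATAPI CD/DVD drives
```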
Credits due to [EMAIL PROTECTED] :-)

Cheers,
Nuno


Re: hardware compatibility - Nvidia SATA controller

2007-02-02 Thread Jon Nathan
* Nuno Antunes [EMAIL PROTECTED] [02-02-2007 17:23]:

 On 2/2/07, Jon Nathan [EMAIL PROTECTED] wrote:
 Hello,
 
 I'm trying to install Dragonfly BSD 1.8 on a Dell XPS 600.  It has an
 integrated Nvidia Nforce 4 Intel Edition SATA RAID Controller, but
 Dragonfly can't find the hard disk attached to it.
 
 I looked for hardware compatibility lists, but couldn't find anything
 referencing different SATA chipsets and what was supported.  Mailing
 list searches seem to indicate that ATAng was not really implemented,
 but that was a while ago:
 
 http://leaf.dragonflybsd.org/mailarchive/kernel/2003-11/msg00043.html
 
 Any info on this chipset?  Thanks,
 
 -Jon
 
 Hello Jon,
 
 Have you tried NATA?
 
 For NATA, add options PCI_MAP_FIXUP to your kernel, comment out the
 old ata devices, add device nata, device natadisk and device natapicd.
 Credits due to [EMAIL PROTECTED] :-)
 
 Cheers,
 Nuno

If this isn't in the kernel on the install CD, it's probably not much
use to me.  

You seem to imply that ata and nata are mutually exclusive above.
Is this the case?  If not, is there a chance that nata could be
added to the kernel for future releases?

-Jon



Re: hardware compatibility - Nvidia SATA controller

2007-02-02 Thread Nuno Antunes

On 2/2/07, Jon Nathan [EMAIL PROTECTED] wrote:

* Nuno Antunes [EMAIL PROTECTED] [02-02-2007 17:23]:

 On 2/2/07, Jon Nathan [EMAIL PROTECTED] wrote:
 Hello,
 
 I'm trying to install Dragonfly BSD 1.8 on a Dell XPS 600.  It has an
 integrated Nvidia Nforce 4 Intel Edition SATA RAID Controller, but
 Dragonfly can't find the hard disk attached to it.
 
 I looked for hardware compatibility lists, but couldn't find anything
 referencing different SATA chipsets and what was supported.  Mailing
 list searches seem to indicate that ATAng was not really implemented,
 but that was a while ago:
 
 http://leaf.dragonflybsd.org/mailarchive/kernel/2003-11/msg00043.html
 
 Any info on this chipset?  Thanks,
 
 -Jon
 
 Hello Jon,

 Have you tried NATA?

 For NATA, add options PCI_MAP_FIXUP to your kernel, comment out the
 old ata devices, add device nata, device natadisk and device natapicd.
 Credits due to [EMAIL PROTECTED] :-)

 Cheers,
 Nuno

If this isn't in the kernel on the install CD, it's probably not much
use to me.

You seem to imply that ata and nata are mutually exclusive above.
Is this the case?  If not, is there a chance that nata could be
added to the kernel for future releases?

-Jon




Yes, I think NATA is mutually exclusive with ATA, and it's not enabled
by default. NATA is supposed to replace the old ATA driver in the
future, once more testing has been done.

Nuno.


DragonFlyBSD Thread on osnews

2007-02-02 Thread Jonathan Weeks
FYI -- there was a DragonFlyBSD 1.8 announcement on osnews, with a 
thread discussing Linux scalability vs DragonFlyBSD, which might bear 
an educated response:


http://www.osnews.com/comment.php?news_id=17114&offset=30&rows=34&threshold=-1

I admit I'm not the most experienced kernel programmer in the world, 
but I have a few years of Linux and AIX kernel programming experience. 
Maybe you are more qualified, I don't know.


You say Linux scales up to 2048 CPUs, but on what kind of system?

The top end of the SGI Altix line of Linux supercomputers runs 4096 
CPUs, and IBM validated Linux on a 2048-CPU System P. Linux scales to 
1024 CPUs without any serious lock contention. At 2048 it shows some 
contention for root and /usr inode locks, but no serious performance 
impact. Directory traversal will be the first to suffer as we move 
toward 4096 CPUs and higher, so that's where the current work is 
focused.


Is this the same kernel I get on RHEL? Can I use this same kernel on a 
4-CPU system? What Linux version allows you to mix any number of 
computers, each with any number of CPUs, and treats them all as one 
logical computer while being able to scale linearly?


Choose the latest SMP kernel image from Red Hat. The feature that 
allows this massive scaling is called scheduler domains, introduced by 
Nick Piggin in version 2.6.7 (in 2004). There is no special kernel 
config flag or recompilation required to activate this feature, but 
there are some tunables you need to set (via a userspace interface) to 
reflect the topology of your supercomputer (i.e. grouping CPUs in a 
tree of domains).


Usually massive supercomputers are installed, configured, and tuned by 
the vendor. They'd probably compile a custom kernel instead of using 
the default RHEL image. But it could work out of the box if you really 
wanted it to.


...rather than rely on locking, spinning, and threading processes to 
infinity, it will assign processes to CPUs and then allow the processes 
to communicate with each other through messages.


That's fine. It's just that nobody has proven that message passing is 
more efficient than fine-grained locking. It's my understanding 
(correct me if I'm wrong) that DF requires that, in order to modify the 
hardware page table, a process must send a message to all other CPUs 
and block waiting for responses from all of them. In addition, an 
interrupted process is guaranteed to resume on the same processor after 
return from interrupt, even if the interrupt modified the local runqueue.


The result is that minor page faults (page is resident in memory but 
not in the hardware page table) become blocking operations. Plus, you 
have interrupts returning to threads that have become blocked by the 
interrupt (and must immediately yield), and the latency for waking up 
the highest priority thread on a CPU can be as high as one whole 
timeslice.
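
The send-and-block protocol described above can be illustrated with a toy
simulation (pure illustration in Python; the Cpu class and the message
names are invented for the sketch, not DragonFly code):

```python
import queue
import threading

class Cpu(threading.Thread):
    """Toy per-CPU worker that services messages from its inbox."""
    def __init__(self, idx):
        super().__init__(daemon=True)
        self.idx = idx
        self.inbox = queue.Queue()
        self.tlb_flushes = 0

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:            # shutdown sentinel
                return
            kind, ack = msg
            if kind == "invalidate":
                self.tlb_flushes += 1  # stand-in for flushing a TLB entry
                ack.set()              # reply to the initiating CPU

def update_page_table(cpus, initiator):
    """Post an invalidate message to every other CPU, then block for all acks."""
    acks = []
    for cpu in cpus:
        if cpu.idx == initiator:
            continue
        ack = threading.Event()
        cpu.inbox.put(("invalidate", ack))
        acks.append(ack)
    for ack in acks:                   # the update blocks until everyone replies
        ack.wait()

cpus = [Cpu(i) for i in range(4)]
for c in cpus:
    c.start()
update_page_table(cpus, initiator=0)
flushes = [c.tlb_flushes for c in cpus]
for c in cpus:
    c.inbox.put(None)
for c in cpus:
    c.join()
print(flushes)   # [0, 1, 1, 1] -- the initiator never flushed its own entry
```

The point of the sketch is only that update_page_table cannot return until
every other CPU has processed its message, which is exactly the blocking
behaviour being criticized.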


DF has serialization resources, but they are called tokens instead of 
locks; I'm not quite sure what the difference is. There also seems to 
be a highly-touted locking system that allows multiple writers to write 
to different parts of a file, which is interesting because Linux, 
FreeBSD, and even SVR4 have extent-based file locks that do the same 
thing. What's different about this method?
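
The extent-based file locks mentioned here are, on Linux and the BSDs,
exposed as POSIX byte-range locks. A small demonstration that two processes
can lock disjoint extents of the same file while overlapping extents
conflict (the try_lock helper is invented for this sketch):

```python
import fcntl
import os
import tempfile

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\0" * 200)
tmp.close()

def try_lock(f, start, length):
    """Try a non-blocking exclusive lock on a byte range; True if acquired."""
    try:
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB, length, start, os.SEEK_SET)
        return True
    except OSError:
        return False

parent = open(tmp.name, "r+b")
assert try_lock(parent, 0, 100)            # this process now holds bytes 0..99

pid = os.fork()
if pid == 0:                               # child: POSIX locks are per-process
    child = open(tmp.name, "r+b")
    disjoint = try_lock(child, 100, 100)   # different extent: granted
    overlap = try_lock(child, 50, 100)     # overlaps the parent's range: denied
    os._exit(0 if disjoint and not overlap else 1)

_, status = os.waitpid(pid, 0)
disjoint_ok = (os.WEXITSTATUS(status) == 0)
os.unlink(tmp.name)
print("disjoint writers allowed:", disjoint_ok)
```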


I hope I've addressed your questions adequately. Locks are evil, I 
know, but they seem to be doing quite well at the moment. Maybe by the 
time DF is ready for production use there will be machines that push 
other UNIX implementations beyond their capabilities. But for now, 
Linux is a free kernel for over a dozen architectures that scales 
better than some proprietary UNIX kernels do on their target 
architecture. That says a lot about the success of its design.




Re: DragonFlyBSD Thread on osnews

2007-02-02 Thread Petr Janda
From what I've gathered on this mailing list, DF can't scale anywhere 
near as well as Linux at the moment, because DF still operates under the BGL.


Petr


Re: vkernel migration

2007-02-02 Thread Matthew Dillon

: A great example is already in DragonFly - process checkpointing. I
: don't even know how it works as well as it does.
:
:Has this been coupled with the new vkernel mods yet? In other words,
:could I build a checkpointable kernel and then pause it, put it away
:for a month, and come back to it? ( Pardon me if I'm sounding naive
:here. :) )
:--
:Dave Hayes - Consultant - Altadena CA, USA - [EMAIL PROTECTED] 
: The opinions expressed above are entirely my own 

It won't work, but it would not be hard to save and restore the state:

* The checkpointing code doesn't understand VPAGETABLE
  mappings.   Offset and page directory settings would have to be
  saved and restored.

* Network (TAP) interface code would have to reopen the interface and
  re-set its parameters.

* Console driver would have to restore the console mode via termios.

* Kernel core would have to re-create the VM spaces for the various 
  processes on checkpoint restore (but otherwise would not have to do
  anything fancy, since the VM spaces are controlled entirely by 
  VPAGETABLE mappings).
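
For illustration, the console-mode step amounts to capturing and
reapplying termios state around a restore; a minimal sketch (hypothetical
helper names, not actual DragonFly checkpoint code):

```python
import os
import termios

def save_console_state(fd):
    """Capture termios settings so they can be reapplied after a restore."""
    try:
        return termios.tcgetattr(fd)
    except termios.error:
        return None          # fd is not a terminal (e.g. output redirected)

def restore_console_state(fd, state):
    """Reapply saved settings; a no-op when there was nothing to save."""
    if state is None:
        return False
    termios.tcsetattr(fd, termios.TCSADRAIN, state)
    return True

# A pipe is not a tty, so there is no console state to capture:
r, w = os.pipe()
print(save_console_state(r) is None)     # True
print(restore_console_state(r, None))    # False
os.close(r)
os.close(w)
```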

-Matt
Matthew Dillon 
[EMAIL PROTECTED]