Dear all,
Well,
I've been receiving quite a lot of nice and *warm* and joyful
email for the thread ;-)
Some of it quite enlightening (including your email, Theo, it
gave me lots of enlightenment; you surely are a funny guy ;-)
After a lot of Google clicking as well, I think I get the big
picture of
arief_mulya wrote:
Dear all,
Well,
I've been receiving quite a lot of nice and *warm* and joyful
email for the thread ;-)
Some of it quite enlightening (including your email, Theo, it
gave me lots of enlightenment; you surely are a funny guy ;-)
After a lot of Google clicking as well, I think
I think that you just burned all possible bridges with your rampant
cross-posting. At least I hope so.
--
If it's moving, encrypt it. If it's not moving, encrypt
it till it moves, then encrypt it some more.
To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe
Respected Sir/Madam,
I am Dev, doing my research in the Centre for Telecommunications Research,
King's College London. My research project involves evaluating the
performance of MIP6 TCP in the presence of fragmentation and without
fragmentation. I am using Kame MIP6 for FreeBSD 4.4 and have
While I'm 100% aware of the pitfalls of such a setup, I find myself
implementing linux in a cluster because it can export 5G-ish of a disk
on each node to one machine that generates a gigantic filesystem.
This is done with linux's network-block-device (NBD). I'd like to
know if someone has
Hello!
I have problems with a USB Logitech iFeel MouseMan mouse.
When I connect it, the boot log looks like:
===
uhci0: Intel 82371SB (PIIX3) USB controller port 0xe400-0xe41f irq 9 at device 7.2
on pci0
usb0: Intel 82371SB (PIIX3) USB controller on uhci0
usb0: USB revision 1.0
uhub0: Intel UHCI
How do I find how much memory (real and/or virtual) is being used by a set
of processes, taking shared pages into account? I see per-process numbers I
can use (vmspace_resident_count and vmspace_swap_count), and overall usage
numbers exist, but I can't find a better way of measuring multiple
check out /proc/PID/map for a really detailed map of the process.
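Summing per-process numbers double-counts regions shared between processes (shared library text, for instance). A minimal sketch of the deduplication bookkeeping, in Python, assuming an invented, simplified map format — real /proc/PID/map output on FreeBSD has more columns, and the tuple layout here is purely illustrative:

```python
# Sketch: sum mapped bytes across several processes, counting each
# shared file-backed region only once. The (start, end, perms, backing)
# tuples are a simplified stand-in for /proc/PID/map lines.

def total_unique_bytes(process_maps):
    seen_shared = set()   # (backing object, start, end) already counted
    total = 0
    for maps in process_maps:
        for start, end, perms, backing in maps:
            size = end - start
            if backing is None:
                # anonymous memory is private to the process: always count it
                total += size
            elif (backing, start, end) not in seen_shared:
                # file-backed region: count it only the first time we see it
                seen_shared.add((backing, start, end))
                total += size
    return total

# two processes mapping the same libc text plus private anonymous memory
p1 = [(0x1000, 0x5000, "r-x", "/usr/lib/libc.so"),
      (0x8000, 0x9000, "rw-", None)]
p2 = [(0x1000, 0x5000, "r-x", "/usr/lib/libc.so"),
      (0xa000, 0xc000, "rw-", None)]
total = total_unique_bytes([p1, p2])   # libc region counted once
```

The same idea applies whether the per-region data comes from parsing /proc/PID/map or from walking vm_map entries in the kernel: the key is keying shared objects so each is charged once.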
On Wed, 29 Jan 2003, James Gritton wrote:
How do I find how much memory (real and/or virtual) is being used by a set
of processes, taking shared pages into account? I see per-process numbers I
can use (vmspace_resident_count
Julian Elischer [EMAIL PROTECTED] writes:
check out /proc/PID/map for a really detailed map of the process.
That looks good for a single process, but suffers from the problem I'm
having. For example, if I run a program that simply mallocs a chunk of
memory and reads through it (to map it all
On Wed, 29 Jan 2003, David Gilbert wrote:
While I'm 100% aware of the pitfalls of such a setup, I find myself
implementing linux in a cluster because it can export 5G-ish of a disk
on each node to one machine that generates a gigantic filesystem. This
is done with linux's network-block-device
Matthew == Matthew N Dodd [EMAIL PROTECTED] writes:
Matthew On Wed, 29 Jan 2003, David Gilbert wrote:
While I'm 100% aware of the pitfalls of such a setup, I find myself
implementing linux in a cluster because it can export 5G-ish of a
disk on each node to one machine that generates a
On Wed, 29 Jan 2003, David Gilbert wrote:
but that would be no different than using NFS directly. mdconfig
won't aggregate several chunks of files ... and last I checked md wasn't
entirely happy with NFS (some form of chicken-and-egg problem)
So use vinum, CCD or add the files as swap and
Matthew == Matthew N Dodd [EMAIL PROTECTED] writes:
Matthew So use vinum, CCD or add the files as swap and make a
Matthew swap-backed filesystem.
Matthew No reason to invent a totally new low level filesystem here.
Actually, I can see that working ... but it's going to be a whole lot
less
On Wed, 29 Jan 2003, David Gilbert wrote:
As I understand, NBD is just a little driver that lets you mount
foo:/dev/ad0s1g over the network and proxies the block transactions
across.
Right, you still have to stripe/mirror on the client side though. I don't
think it will be all that bad.
Any
Matthew == Matthew N Dodd [EMAIL PROTECTED] writes:
Matthew On Wed, 29 Jan 2003, David Gilbert wrote:
As I understand, NBD is just a little driver that lets you mount
foo:/dev/ad0s1g over the network and proxies the block transactions
across.
Matthew Right, you still have to stripe/mirror on
On Wed, 29 Jan 2003, David Gilbert wrote:
It doesn't work that way. The result of NBD is a /dev/nbd0, not a
filesystem. Block 0 of /dev/nbd0 is block 0 of /dev/hda1 (say). nbd
runs as a server on the node with the disk and as a client on the node
using the disk. Yes, you still stripe on the
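The server/client block-proxy idea described above can be sketched in a few lines. The protocol here is invented for illustration (4-byte block index in, raw block bytes out); the real NBD wire protocol has a handshake, writes, and error replies:

```python
# Toy sketch of NBD's shape: a server exports a backing store's blocks
# over a socket, and a client fetches block N by index. Block N of the
# "remote device" is simply bytes N*BLOCK .. (N+1)*BLOCK of the backing.

import socket
import struct
import threading

BLOCK = 512

def serve_one(backing, srv):
    # answer block-read requests on one connection:
    # request = 4-byte big-endian block index, reply = BLOCK bytes
    conn, _ = srv.accept()
    with conn:
        while True:
            req = conn.recv(4)
            if len(req) < 4:
                break
            (idx,) = struct.unpack(">I", req)
            conn.sendall(backing[idx * BLOCK:(idx + 1) * BLOCK])

def read_block(addr, idx):
    # client side: fetch block idx of the remote "device"
    with socket.create_connection(addr) as c:
        c.sendall(struct.pack(">I", idx))
        buf = b""
        while len(buf) < BLOCK:
            chunk = c.recv(BLOCK - len(buf))
            if not chunk:
                break
            buf += chunk
    return buf

backing = bytes(range(256)) * 8                 # 2048-byte pretend disk
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_one, args=(backing, srv), daemon=True).start()
block1 = read_block(srv.getsockname(), 1)       # bytes 512..1023 of the backing
```

Nothing filesystem-aware happens on the wire — which is exactly why the striping and the filesystem still live entirely on the client side.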
In message [EMAIL PROTECTED], Matthew N. Dodd writes:
On Wed, 29 Jan 2003, David Gilbert wrote:
It doesn't work that way. The result of NBD is a /dev/nbd0, not a
filesystem. Block 0 of /dev/nbd0 is block 0 of /dev/hda1 (say). nbd
runs as a server on the node with the disk and as a client on
On Thu, 30 Jan 2003 [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], Matthew N. Dodd writes:
On Wed, 29 Jan 2003, David Gilbert wrote:
So involving NFS isn't really going to make that much of a difference.
Yes, it sure would.
nfs1:/foo/foo1 - md1
nfs2:/foo/foo2 - md2
ccd0 64 none md1
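The interleave number in a ccdconfig line like the one above controls how logical blocks round-robin across components. A hypothetical sketch of that mapping (illustrative only — real ccd also handles flags and mirroring):

```python
# Sketch of interleave striping: map a logical block number to
# (component disk, block offset on that component) for an interleave
# of `interleave` blocks across `ncomponents` disks.

def stripe_map(logical_block, interleave, ncomponents):
    stripe = logical_block // interleave      # which interleave-sized stripe
    within = logical_block % interleave       # offset inside that stripe
    component = stripe % ncomponents          # stripes round-robin over disks
    offset = (stripe // ncomponents) * interleave + within
    return component, offset

# with interleave 64 over two md devices, block 130 lands on the
# first component at offset 66
where = stripe_map(130, 64, 2)
```

Consecutive 64-block runs alternate between the two md-backed components, which is what spreads the I/O across both NFS servers.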
On Wed, 29 Jan 2003, Brandon D. Valentine wrote:
IMO NBD is less of a hack than you think it is. It is one of the
necessary components for creating a single system image from a cluster
of commodity hardware and this is something Linux developers are working
earnestly on. They're targeting a
geom meets netgraph.. :-)
You could possibly do something with the ng_device node
that exports a device into the dev namespace from netgraph.
(The version in the tree is currently broken; the author is rewriting
it.) Adding a geom top-end to it might give you
something quite cute..
On Thu,
On Wed, 29 Jan 2003, Julian Elischer wrote:
geom meets netgraph.. :-)
You could possibly do something with the ng_device node that exports a
device into the dev namespace from netgraph. (The version in the tree is
currently broken; the author is rewriting it.) Adding a geom top-end
to it
Matthew N. Dodd wrote:
They should really look at Sprite. (And anyone that's doing clustering and
not looking at VMS deserves what they get.)
On a real cluster running a single image, all the drives would just
show up. There wouldn't be any hacking going on. Stuff like this kind of
On Wed, 29 Jan 2003, Terry Lambert wrote:
And anyone that's doing clustering and thinks that it can't be done on a
32-bit machine, and not looking at the VAX, which runs VMS, deserves
what they get.
...Sorry, had to be said... 8-).
If we were talking about clustering 32-bit machines with
On Wed, Jan 29, 2003 at 06:06:20PM -0500, Matthew N. Dodd wrote:
What you really want is SCSI over IP. Anything else is just a hack and
not to be trusted. I think that NFS is less of a hack than NBD though.
IMO NBD is less of a hack than you think it is. It is one of the
necessary
On Thu, Jan 30, 2003 at 12:13:26AM +0100, [EMAIL PROTECTED] wrote:
NBD wouldn't be hard to implement on FreeBSD, the easiest way would
be to write two GEOM modules to do it: a client and a server.
No, I don't have time to do that right now, but I will happily
guide anybody who wants to
On Wed, Jan 29, 2003 at 09:44:59PM -0500, Matthew N. Dodd wrote:
If we were talking about clustering 32-bit machines with less than 128 MB
of memory each, that would be true.
I suppose you could use some sort of PAE to allow every cluster member's
address space to be mapped but a 64 bit
Thus spake James Gritton [EMAIL PROTECTED]:
The object's ref_count hasn't changed, which is what I meant about seeing
reference counts in the kernel that were apparently not counting what I'm
looking for. I did see a ref_count increase on the first object
(presumably the text image), but
:Thus spake James Gritton [EMAIL PROTECTED]:
: The object's ref_count hasn't changed, which is what I meant about seeing
: reference counts in the kernel that were apparently not counting what I'm
: looking for. I did see a ref_count increase on the first object
: (presumably the text image),
Audsin wrote:
I am Dev, doing my research in the Centre for Telecommunications Research,
King's College London. My research project involves evaluating the
performance of MIP6 TCP in the presence of fragmentation and without
fragmentation. I am using Kame MIP6 for FreeBSD 4.4 and have configured