Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Matthew Dillon

:Hey Matt,
:1) Does your filesystem plan include the ability to grow and shrink a 
:partition/volume? ie. /home is running out of space so we could run 
:"shrinkfs" ... on /usr which has a lot of space and "growfs" ... on /home

The filesystem's backing store will be segmented.  Segment size can 
range from 1MB to 4GB (ish).  A 'partition' would be able to hold
multiple segments from the filesystem's point of view, but the main
purpose of the segmentation is to create large, near-independent
blocks of data which can be dealt with on a segment-by-segment basis
(e.g. for recovery, fsck/check, replication, growing, and shrinking
purposes).

Segmentation also means the filesystem's backing store is not 
restricted to a single block device but can be glued together
with several block devices, or even mixed-and-matched between
separate replicated data stores for recovery purposes.

So, yes, it will be possible to grow or shrink the filesystem on
a segment-by-segment basis.

:2) Are you going to do away with the disklabel stuff and replace it with 
:something better/easier to use?

Probably not in 2.0.  The disklabel still serves a purpose with
regards to mixing and matching different filesystems.

However, within the context of the new filesystem itself each
'segment' will be completely identified in its header so segments
belonging to different filesystems could comingle within one
disklabel partition.  The disklabel would simply say that the
storage is associated with the new filesystem but would not imply
that a particular partition would be associated 1:1 with a mount,
as partitions are currently.

This would effectively remove the partitioning requirement.  You would
just say how many segments you wanted each 'filesystem' to use, 
dynamically.  Growing is easy.  Shrinking would require a background
scan or temporary relocation of the affected segments but would
also be easy.

Since segments will self-identify in their header, the actual physical
location of a segment becomes irrelevant.

If you had 1TB of storage and 4GB segments the kernel would have to
do only 256 I/O's (reading the segment headers) to self-identify all
the segments and associate them with their filesystems.  Just as an
example.  Such a list would be cached, of course, but the point is
that for recovery purposes the OS would be able to regenerate the
list from scratch, given only access to the physical storage, with 
minimal delay.
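
A rough C sketch of the self-identifying segment idea, purely for
illustration -- nothing below is an actual on-disk format, and every field
name and size is invented.  It only shows why one header read per segment is
enough to rebuild the segment-to-filesystem map, and why growing or shrinking
reduces to adding or evacuating whole segments:

#include <stdint.h>

#define SEG_MAGIC   0x53454748U         /* arbitrary magic for the example */

/*
 * Hypothetical segment header.  Each 1MB-4GB segment begins with a header
 * naming the filesystem it belongs to, so the physical location (and the
 * disklabel partition it happens to sit in) carries no meaning of its own.
 */
struct seg_header {
    uint32_t seg_magic;      /* marks a valid segment header              */
    uint8_t  fs_uuid[16];    /* filesystem this segment belongs to        */
    uint64_t seg_index;      /* logical position within that filesystem   */
    uint64_t seg_bytes;      /* segment size (1MB .. ~4GB)                */
    uint64_t seg_flags;      /* e.g. in-use, free, being relocated        */
};

/*
 * Header reads needed to re-identify every segment on a volume from
 * scratch: 1TB of storage in 4GB segments is 256 I/Os, per the example
 * in the message above.
 */
static uint64_t
scan_ios(uint64_t volume_bytes, uint64_t seg_bytes)
{
    return (volume_bytes + seg_bytes - 1) / seg_bytes;
}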

:3) Is vinum finally gonna die with the new filesystem? ie. volume 
:manager will be integrated in the new file system, like ZFS?
:
:Cheers,
:
:Petr

I personally have never used vinum.  I never trusted the code enough
to use it... not so much the original code, but the fact that it has
gone unmaintained for so long a period of time.

But, yes, the new filesystem will have its own volume manager based
on the principle of self-identifying disk segments.

Note that I am not talking about RAID-5 here.  I'm talking about
replication topologies only.  I have no intention of supporting RAID-5
or other physical abstractions beyond pure replication at the logical
level.  This isn't to say that RAID-5 would not be supportable, only
that it would have to be implemented at the block level or the device
level rather than at the filesystem level.  The replication, on the
other hand, will be fully integrated into the filesystem.

-Matt
Matthew Dillon 
<[EMAIL PROTECTED]>


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Rahul Siddharthan
Matthew Dillon wrote:
> Believe me, I think about this all the time.  I frankly have no idea
> whether 'DragonFly The OS' itself will survive the test of time,
> but I guarantee you that everything we develop for 'DragonFly The OS',
> especially more portable entities such as filesystems and cluster
> protocols, *WILL*.
>
> The moment you leave the context of the operating system codebase and
> enter the context of userspace, you guarantee code survivability.

But isn't there a lot of kernel infrastructure in DragonFly that you
have done, to allow this stuff to run in userspace?  So won't any
other operating system need to have that infrastructure too?  Or will
it be fairly straightforward to, say, run MattFS under FUSE on Linux?

There would certainly be great interest in the Linux world in a robust
filesystem with the features you describe and a BSD licence.  Uptake
of ZFS has been slow because its licence conflicts with the GPL, so it
can't be put in the kernel.  The other filesystems on Linux don't do
anything revolutionary.

I've been running Linux for a while now, since a sane distro
(Debian/Ubuntu) lets me focus on work rather than struggling with
ports/pkgsrc every time I want to install a package, but I seriously
want to install DragonFly on my next computer some weeks/months from
now... perhaps dual-booting with FreeBSD or NetBSD, so that I can
share my /home partition.

Rahul


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Matthew Dillon

:
:On Mon, February 19, 2007 5:37 pm, Matthew Dillon wrote:
:
:> I have many requirements that need to be fulfilled by the new
:> filesystem.  I have just completed the basic design work and I feel
:> quite confident that I can have the basics working by our Summer
:> release.
:
:How much is "the basics"? i.e. generally usable as the filesystem, or
:available only as committed code?

As in the design spec.  It works on paper.  I haven't started coding
anything yet.

-Matt
Matthew Dillon 
<[EMAIL PROTECTED]>


Why not FUSE ZFS?

2007-02-19 Thread Dmitri Nikulin

I got to thinking: there is a lot of debate raging over whether ZFS or
DFS (my tentative nickname for the alternative) is a better priority
for development. I was previously in the ZFS camp, thinking that a good
on-disk file system which is gaining a lot of hype and adoption would
be a very good bargaining chip in DragonFly's pile.

Now it just occurred to me that we can have ZFS for the nominal price of
supporting FUSE. It's not as fast as a pure kernel version, of course,
but Matt says he wants userland file systems anyway and FUSE is a
de-facto standard for that. Even NetBSD's puffs now has a 'refuse'
compatibility kit. FreeBSD also has FUSE support. DragonFly can
support FUSE by wrapping SYSLINK as is already planned, and this
immediately gives it a lot of file systems currently beyond reach.
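
For anyone who has not looked at FUSE, here is a minimal read-only userland
filesystem against the FUSE 2.x C API (the classic 'hello' example).  It is
only a sketch of what writing a FUSE filesystem involves; nothing in it is
specific to DragonFly, SYSLINK, puffs, or the ZFS port, and it assumes a
system with the FUSE library and headers installed:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static const char *hello_path = "/hello";
static const char *hello_str  = "hello from a userland filesystem\n";

/* Report a root directory containing one read-only file. */
static int hello_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(hello_str);
        return 0;
    }
    return -ENOENT;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                         off_t off, struct fuse_file_info *fi)
{
    (void)off; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0);
    fill(buf, "..", NULL, 0);
    fill(buf, hello_path + 1, NULL, 0);
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t off,
                      struct fuse_file_info *fi)
{
    size_t len = strlen(hello_str);
    (void)fi;
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, hello_str + off, size);
    return (int)size;
}

static struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    /* e.g.:  ./hello -f /mnt/hello */
    return fuse_main(argc, argv, &hello_ops, NULL);
}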

FreeBSD has an in-kernel ZFS port well on the way, and it seems to be ahead
of the FUSE version. I don't know how useful this is to DragonFly (as
far as being less work to port than the Solaris code, or being
portable to userland).

Not sure about the license issues of the FUSE ZFS, but purely
pragmatically, it may be the 'right' thing to develop rather than try
to port a kernel implementation into the DragonFly kernel. Like Matt
said, even if DragonFly doesn't survive, a lot of its technology will,
and developing things in the userland is precisely the way to ensure
that's what happens. It seems almost obvious that this is how ZFS
should be approached too, and it's already well on the way.

---
Dmitri Nikulin

Centre for Synchrotron Science
Monash University
Victoria 3800, Australia


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Dmitri Nikulin

On 2/20/07, Petr Janda <[EMAIL PROTECTED]> wrote:

Hey Matt,
1) Does your filesystem plan include the ability to grow and shrink a
partition/volume? ie. /home is running out of space so we could run
"shrinkfs" ... on /usr which has a lot of space and "growfs" ... on /home

2) Are you going to do away with the disklabel stuff and replace it with
something better/easier to use?

3) Is vinum finally gonna die with the new filesystem? ie. volume
manager will be integrated in the new file system, like ZFS?


Seems all of this is handled in one fell swoop with the ZFS design.
Neat 'disklabels' which are dynamically sizable and can be configured
over multiple volumes. And with very little limitation all around.

It's entirely possible to use the same foundations without using the
same file system itself, and this would be good for DragonFly even
before it supports the full ZFS.

---
Dmitri Nikulin

Centre for Synchrotron Science
Monash University
Victoria 3800, Australia


Re: Gcc-ada errata?

2007-02-19 Thread Bill Hacker

walt wrote:

Bill Hacker wrote:

.

Joy!  gcc 4.0 and later should be OK as-is?


Well, corecode has imported just the bare-nekkid gcc/g++ parts, not
the fancy stuff like ada.  But -- maybe the native ada-4.1.2 code
will compile without a bootstrap ada compiler, dunno.


===

From the DFLY 1.8 release notes:

"Bring in GCC-4.1 (setenv CCVER GCC41 to use)."

Let's see.

=

# setenv CCVER GCC41
# gcc -v
gcc: in path []/usr/libexec/GCC41/gcc: No such file or directory


If you're willing to build DFly from source you just add a line to
your /etc/make.conf 'WANT_GCC41=true' (IIRC) and it will get built.
Then, after gcc41 is installed, you setenv CCVER GCC41 to actually
use it instead of the default gcc3x.


Thanks - may come back to that.

For now, 'one challenge at a time', so trying gcc4.1 on FBSD 6 AMD-64.

If it is broken there, it is broken, full-stop, AFAIAC (neither Leenux nor its 
interface layer being on my radar).


And 'make check' was not happy just now - well before getting at all Ada-ish, so 
I may have to revert to trying an older version on FBSD 4.X.


Or use the PowerBook :-(

Bill





Re: Gcc-ada errata?

2007-02-19 Thread walt
Bill Hacker wrote:
> .
> 
> Joy!  gcc 4.0 and later should be OK as-is?

Well, corecode has imported just the bare-nekkid gcc/g++ parts, not
the fancy stuff like ada.  But -- maybe the native ada-4.1.2 code
will compile without a bootstrap ada compiler, dunno.

> 
> ===
> 
> From the DFLY 1.8 release notes:
> 
> "Bring in GCC-4.1 (setenv CCVER GCC41 to use)."
> 
> Let's see.
> 
> =
> 
> # setenv CCVER GCC41
> # gcc -v
> gcc: in path []/usr/libexec/GCC41/gcc: No such file or directory

If you're willing to build DFly from source you just add a line to
your /etc/make.conf 'WANT_GCC41=true' (IIRC) and it will get built.
Then, after gcc41 is installed, you setenv CCVER GCC41 to actually
use it instead of the default gcc3x.


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Petr Janda

Matthew Dillon wrote:



-Matt
	Matthew Dillon 
	<[EMAIL PROTECTED]>


  

Hey Matt,
1) Does your filesystem plan include the ability to grow and shrink a 
partition/volume? ie. /home is running out of space so we could run 
"shrinkfs" ... on /usr which has a lot of space and "growfs" ... on /home


2) Are you going to do away with the disklabel stuff and replace it with 
something better/easier to use?


3) Is vinum finally gonna die with the new filesystem? ie. volume 
manager will be integrated in the new file system, like ZFS?


Cheers,

Petr


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Justin C. Sherrill
On Mon, February 19, 2007 5:37 pm, Matthew Dillon wrote:

> I have many requirements that need to be fulfilled by the new
> filesystem.  I have just completed the basic design work and I feel
> quite confident that I can have the basics working by our Summer
> release.

How much is "the basics"? i.e. generally usable as the filesystem, or
available only as committed code?



Re: Gcc-ada errata?

2007-02-19 Thread Chris Turner

there was some recent commit to CURRENT about needing to enable GCC4 via
make.conf ..

but if that still doesn't work:

Might be possible to bootstrap and/or port via FreeBSD emulation ..

I've got a DragonFly support patch for CMUCL that I keep meaning
to submit that was developed via the easily-obtained
FreeBSD executables..

not sure how that would hook in with the GCC build procedure,
but :

"This is left as an exercise for the reader" :)




Bill Hacker wrote:
> walt wrote:
>> On Tue, 20 Feb 2007, Bill Hacker wrote:
>>
>>> Date: Tue, 20 Feb 2007 00:34:37 +0800
>>> From: Bill Hacker <[EMAIL PROTECTED]>
>>> Newsgroups: dragonfly.users
>>> Subject: Gcc-ada errata?
>>>
>>> Anyone seen this (DFLY 1.8 REL, Celeron 1 GHz, 512MB SDRAM,
>>> IBM/Hitachi PATA
>>> 20GB HDD):
>>>
>>> =
>>> # cd /usr/pkgsrc/lang/gcc34-ada
>>>
>>> # bmake install -DUSE_GCC34
>>
>> Oops -- ignore my other answer.  I read the README ;o)  The gcc34
>> Makefile
>> clearly assumes that you've already built gcc34 *with* ada.  AFAICT, the
>> only way you can do that is to install gcc3 first, because that includes
>> ada by default.  You can then bootstrap to gcc34, or maybe even directly
>> to gcc34-ada without building gcc34, not sure.
>>
> 
> Let's see...
> 
> ===
> # cd gcc3-ada
> # bmake install clean
> ERROR: This package is not available for these platforms: Darwin-*-*
> Interix-*-* DragonFly-*-*.
> ERROR: This package has set PKG_FAIL_REASON:
> ERROR: gcc3-ada-3.3.6 is not available for DragonFly-1.8.0-i386
> ERROR: This package has set PKG_SKIP_REASON:
> ERROR: Sorry, need an Ada compiler
> *** Error code 1
> 
> Stop.
> bmake: stopped in /usr/pkgsrc/lang/gcc3-ada
> 
> ===
> 
> # cd ..
> # cd gcc3
> # bmake install clean
> WARNING: [depends.mk] Unknown object format for installed package
> digest-20050731
> => Required installed package digest>=20010302: digest-20050731 found
> ===> Checking for vulnerabilities in gcc3-3.3.6
> work -> /usr/obj/pkgsrc/lang/gcc3/work
> ===> Installing dependencies for gcc3-3.3.6
> => Required installed package gcc3-c-3.3.6{,nb*}: NOT found
> => Verifying reinstall for ../../lang/gcc3-c
> ERROR: This package is not available for these platforms: Darwin-*-*
> Interix-*-* DragonFly-*-*.
> ERROR: This package has set PKG_FAIL_REASON:
> ERROR: gcc3-c-3.3.6 is not available for DragonFly-1.8.0-i386
> *** Error code 1
> 
> Stop.
> bmake: stopped in /usr/pkgsrc/lang/gcc3-c
> *** Error code 1
> 
> Stop.
> bmake: stopped in /usr/pkgsrc/lang/gcc3
> 
> ===
> 
> 'GNATS' as the cartoon character said
> 
> Hmm.
> 
> (ssh's to a FreeBSD 6.X box.)
> 
> +++
> triligon# cd /usr/ports/lang/gnats
> triligon# less pkg-descr
> The GPL version of GNU Ada compiler system built on GCC 3.4.6
> 
> Binaries built with this version of compiler are covered by the
> GPL license.  Use the Ada compiler bundled with GCC 4.0 and
> subsequent for LGPL licensing.
> 
> 
> 
> Joy!  gcc 4.0 and later should be OK as-is?
> 
> ===
> 
> From the DFLY 1.8 release notes:
> 
> "Bring in GCC-4.1 (setenv CCVER GCC41 to use)."
> 
> Let's see.
> 
> =
> 
> # setenv CCVER GCC41
> # gcc -v
> gcc: in path []/usr/libexec/GCC41/gcc: No such file or directory
> 
> =
> 
> And even with case changed, the subdir:
> 
> /usr/libexec/gcc41 is indeed empty...
> 
> Perhaps the ISO image is not enough. But I do not see gcc41 on
> Chlamydia, either.
> 
> Bill
> 
> 



Re: Gcc-ada errata?

2007-02-19 Thread Bill Hacker

walt wrote:

On Tue, 20 Feb 2007, Bill Hacker wrote:


Date: Tue, 20 Feb 2007 00:34:37 +0800
From: Bill Hacker <[EMAIL PROTECTED]>
Newsgroups: dragonfly.users
Subject: Gcc-ada errata?

Anyone seen this (DFLY 1.8 REL, Celeron 1 GHz, 512MB SDRAM, IBM/Hitachi PATA
20GB HDD):

=
# cd /usr/pkgsrc/lang/gcc34-ada

# bmake install -DUSE_GCC34


Oops -- ignore my other answer.  I read the README ;o)  The gcc34 Makefile
clearly assumes that you've already built gcc34 *with* ada.  AFAICT, the
only way you can do that is to install gcc3 first, because that includes
ada by default.  You can then bootstrap to gcc34, or maybe even directly
to gcc34-ada without building gcc34, not sure.



Let's see...

===
# cd gcc3-ada
# bmake install clean
ERROR: This package is not available for these platforms: Darwin-*-* Interix-*-* 
DragonFly-*-*.

ERROR: This package has set PKG_FAIL_REASON:
ERROR: gcc3-ada-3.3.6 is not available for DragonFly-1.8.0-i386
ERROR: This package has set PKG_SKIP_REASON:
ERROR: Sorry, need an Ada compiler
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc3-ada

===

# cd ..
# cd gcc3
# bmake install clean
WARNING: [depends.mk] Unknown object format for installed package 
digest-20050731
=> Required installed package digest>=20010302: digest-20050731 found
===> Checking for vulnerabilities in gcc3-3.3.6
work -> /usr/obj/pkgsrc/lang/gcc3/work
===> Installing dependencies for gcc3-3.3.6
=> Required installed package gcc3-c-3.3.6{,nb*}: NOT found
=> Verifying reinstall for ../../lang/gcc3-c
ERROR: This package is not available for these platforms: Darwin-*-* Interix-*-* 
DragonFly-*-*.

ERROR: This package has set PKG_FAIL_REASON:
ERROR: gcc3-c-3.3.6 is not available for DragonFly-1.8.0-i386
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc3-c
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc3

===

'GNATS' as the cartoon character said

Hmm.

(ssh's to a FreeBSD 6.X box.)

+++
triligon# cd /usr/ports/lang/gnats
triligon# less pkg-descr
The GPL version of GNU Ada compiler system built on GCC 3.4.6

Binaries built with this version of compiler are covered by the
GPL license.  Use the Ada compiler bundled with GCC 4.0 and
subsequent for LGPL licensing.



Joy!  gcc 4.0 and later should be OK as-is?

===

From the DFLY 1.8 release notes:

"Bring in GCC-4.1 (setenv CCVER GCC41 to use)."

Let's see.

=

# setenv CCVER GCC41
# gcc -v
gcc: in path []/usr/libexec/GCC41/gcc: No such file or directory

=

And even with case changed, the subdir:

/usr/libexec/gcc41 is indeed empty...

Perhaps the ISO image is not enough. But I do not see gcc41 on Chlamydia, 
either.

Bill




Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Matthew Dillon
:I think it's important to ask oneself these questions since it's a shame
:to waste time on something that nobody can ever appreciate. On the other
:hand, in Matt's and many others' vision clustered computing will perhaps
:be an integral part of the future, just like many-core processors will be
:in all personal computers of the future. Because of this, only now are
:people scrambling and trying to figure out how they can squeeze more juice
:out of their programs and operating systems in SMP environments. 

 Believe me, I think about this all the time.  I frankly have no idea
 whether 'DragonFly The OS' itself will survive the test of time, 
 but I guarantee you that everything we develop for 'DragonFly The OS',
 especially more portable entities such as filesystems and cluster
 protocols, *WILL*.

 The moment you leave the context of the operating system codebase and
 enter the context of userspace, you guarantee code survivability.

 This is, ultimately, what the DragonFly operating system is intended
 to support... the SYSLINK clustering protocol will allow all the major
 pieces to be moved into userland.  And, I will add, the execution
 context piece can *ALREADY* be controlled by userland, with only a modest
 number of new system calls (DragonFly's VMSPACE_*() system calls)...
 our virtual kernel is proof of that.

:Jokingly: I think the notion of functional individual computers
:helping each other out sounds a bit like neurons in a brain. The
:technological singularity is coming, nothing can stop it!

Well, we can dream.  Unless the world self-destructs, AI in all its
Sci-fi glory will become a reality.  It will happen in the next 80-200
years, most likely.  However, I won't be leading that particular project.
Hehe.  I'm more an infrastructure guy.

-Matt
Matthew Dillon 
<[EMAIL PROTECTED]>


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Matthew Dillon
I've been letting the conversation run to see what people have to say.
I am going to select this posting by Brett to answer, as well as
provide some more information.

:I am not sure I understand the potential aim of the new file system -
:is it to allow all nodes on the SSI (I purposefully avoid terms like
:"grid") to have all "local" data actually on their hard drive or is it
:more like each node is aware of all data on the SSI, but the data may
:be scattered about all of the nodes on the SSI?

I have many requirements that need to be fulfilled by the new
filesystem.  I have just completed the basic design work and I feel
quite confident that I can have the basics working by our Summer
release.

- On-demand filesystem check and recovery.  No need to scan the entire
  filesystem before going live after a reboot.

- Infinite snapshots (e.g. on a 30 second sync), with the ability to collapse
  snapshots for any time interval as a means of recovering space (a rough
  sketch of the collapse idea follows this list).

- Multi-master replication at the logical layer (not the physical layer),
  including the ability to self-heal a corrupted filesystem by accessing
  replicated data.  Multi-master means that each replicated store can
  act as a master and new filesystem ops can run independently on any
  of the replicated stores and will flow to the others.

- Infinite log replication.  No requirement to keep a log of changes
  for the purposes of replication, meaning that replication targets can
  be offline for 'days' without affecting performance or operation.
  ('mobile' computing, but also replication over slow links for backup
  purposes and other things).

- 64 bit file space, 64 bit filesystem space.  No space restrictions
  whatsoever.

- Reliably handle data storage for huge multi-hundred-terabyte
  filesystems without fear of unrecoverable corruption.

- Cluster operation - ability to commit data to locally replicated
  store independently of other nodes, access governed by cache
  coherency protocols.
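
As a toy illustration of the snapshot-collapse item above -- not taken from
any design spec, and saying nothing about how space is actually reclaimed --
collapsing a time interval can be pictured as keeping only the newest
snapshot inside the interval and releasing the rest:

#include <stddef.h>
#include <stdint.h>

/*
 * Keep only the newest snapshot whose timestamp falls in [start, end);
 * snapshots outside the interval are kept untouched.  Returns the new
 * count (the survivor is appended at the end; ordering is not preserved).
 * How the filesystem frees the space held exclusively by the dropped
 * snapshots is its own business -- this only picks which ones survive.
 */
static size_t
collapse_range(uint64_t *snap_times, size_t n, uint64_t start, uint64_t end)
{
    size_t out = 0;
    uint64_t newest = 0;
    int have_newest = 0;

    for (size_t i = 0; i < n; i++) {
        if (snap_times[i] >= start && snap_times[i] < end) {
            if (!have_newest || snap_times[i] > newest) {
                newest = snap_times[i];
                have_newest = 1;
            }
        } else {
            snap_times[out++] = snap_times[i];   /* outside range: keep */
        }
    }
    if (have_newest)
        snap_times[out++] = newest;              /* survivor for the range */
    return out;
}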

:So, in effect, is it similar in concept to the notion of storing bits
:of files across many places using some unified knowledge of where the
:bits are? This of course implies redundancy and creates synchronization
:problems to handle (assuming no global clock), but I certainly think
:it is a good goal.  In reality, how redundant will the data be?  In a
:practical sense, I think the principle of "locality" applies here -
:the pieces that make up large files will all be located very close to
:one another (aka, clustered around some single location).

Generally speaking the topology is up to the end-user.  The main
issue for me is that the type of replication being done here is
logical layer replication, not physical replication.  You can
think of it as running a totally independent filesystem for each
replication target, but the filesystems cooperate with each other
and cooperate in a clustered environment to provide a unified,
coherent result.

:From my experience, one of the largest issues related to large-scale
:computing is the movement of large files, but with the trend moving
:towards huge local disks and many-core architectures (which I agree
:with), I see the "grid" concept of geographically diverse machines
:connected as a single system being put to rest in favor of local
:clusters of many-core machines.
:...
:With that, the approach that DfBSD is taking is vital wrt distributed
:computing, but any hard requirement of moving huge files long
:distances (even if done so in parallel) might not be so great.  What
:is required is native parallel I/O that is able to handle locally
:distributed situations - because it is within a close proximity that
:many processes would be writing to a "single" file.  Reducing the
:scale of the problem may provide some clues into how it may be used
:and how it should handle the various situations effectively.
:...
:Additionally, the concept of large files somewhat disappears when you
:are talking about shipping off virtual processes to execute on some
:other processor or core because they are not shipped off with a whole
:lot of data to work on.  I know this is not necessarily a SSI concept,
:but one that DfBSD will have people wanting to do.
:
:Cheers,
:Brett

There are two major issues here:  (1) Where the large files reside
and (2) How much of that data running programs need to access.

For example, let's say you had a 16 gigabyte database.  There is a
big difference between scanning the entire 16 gigabytes of data
and doing a query that only has to access a few hundred kilobytes
of the data.
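
As a toy sketch of that cost difference -- invented names and numbers, not
DragonFly code -- the choice between shipping the data to the program and
migrating the program's execution context to the data comes down to a
comparison like this:

#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model only.  Given how many bytes a request will actually touch and
 * the bandwidth of the link between the two nodes, compare the cost of
 * moving the data with the (roughly fixed) cost of moving the program's
 * execution context.
 */
struct placement_estimate {
    uint64_t bytes_needed;        /* data the request will actually read   */
    uint64_t link_bytes_per_sec;  /* bandwidth between the two nodes        */
    double   migrate_cost_sec;    /* cost of moving the execution context   */
};

/* True if migrating the execution context looks cheaper than moving data. */
static bool
should_migrate_context(const struct placement_estimate *e)
{
    double data_cost_sec =
        (double)e->bytes_needed / (double)e->link_bytes_per_sec;

    /*
     * A full 16GB scan over a ~100MB/s link is minutes of transfer, so
     * moving the program wins; a query touching a few hundred kilobytes
     * is milliseconds, so moving the data wins.
     */
    return data_cost_sec > e->migrate_cost_sec;
}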

No matter what, you can't avoid reading data that a program insists
on reading.  If the data is not cacheable or the amount of data
being read is huge, the cluster has the choice of moving the program's
running context closer to the storage, or transfering the d

Re: Gcc-ada errata?

2007-02-19 Thread walt
On Tue, 20 Feb 2007, Bill Hacker wrote:

> Date: Tue, 20 Feb 2007 00:34:37 +0800
> From: Bill Hacker <[EMAIL PROTECTED]>
> Newsgroups: dragonfly.users
> Subject: Gcc-ada errata?
>
> Anyone seen this (DFLY 1.8 REL, Celeron 1 GHz, 512MB SDRAM, IBM/Hitachi PATA
> 20GB HDD):
>
> =
> # cd /usr/pkgsrc/lang/gcc34-ada
>
> # bmake install -DUSE_GCC34

Oops -- ignore my other answer.  I read the README ;o)  The gcc34 Makefile
clearly assumes that you've already built gcc34 *with* ada.  AFAICT, the
only way you can do that is to install gcc3 first, because that includes
ada by default.  You can then bootstrap to gcc34, or maybe even directly
to gcc34-ada without building gcc34, not sure.



Re: Gcc-ada errata?

2007-02-19 Thread walt
On Tue, 20 Feb 2007, Bill Hacker wrote:

> Date: Tue, 20 Feb 2007 00:34:37 +0800
> From: Bill Hacker <[EMAIL PROTECTED]>
> Newsgroups: dragonfly.users
> Subject: Gcc-ada errata?
>
> Anyone seen this (DFLY 1.8 REL, Celeron 1 GHz, 512MB SDRAM, IBM/Hitachi PATA
> 20GB HDD):
>
> =
> # cd /usr/pkgsrc/lang/gcc34-ada
>
> # bmake install -DUSE_GCC34

> => Required installed package gcc34>=3.4.5nb1: NOT found

I thought I would try to build it on NetBSD, but then I noticed
this at the beginning of the gcc34 build:

The following variables will affect the build process of this package,
gcc34-3.4.6nb1.  Their current value is shown below:

* BUILD_ADA = NO

So, this feels like a Catch-22.  The makefile says it needs an ada compiler
to bootstrap the new ada compiler, but the new compiler by default does
not build ada.  Seems to me that even if the gcc34 package did build okay, you
would still need an ada compiler.  Anyone here think this should work?



Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Peter Serwe

Michel Talon wrote:

Of course it is none of my business, but I have always wondered about the
real usefulness of a clustering OS in the context of free systems, and your
post allows me to explain why.  People who have the money to buy machines by
the thousands, run them, pay the electricity bill, etc. should also have
the money to pay $$$ to IBM, and not count on the generosity of unpaid
developers.  Small installations are the natural target of free systems, and
in this context I remain convinced that the clustering ideas have a utility
next to null.  And frankly, I doubt they have any utility for big systems
if you don't use high-speed, low-latency interconnects, which are far more
expensive than the machines themselves, or if, even with this highly
expensive hardware, you don't have highly skilled programmers able to really
make use of concurrency.
On the contrary, the disks of Joe User are becoming bigger and bigger, and his
processor is getting more and more cores, so there is clearly a need for
file systems appropriate for big disks and sufficiently reliable (ZFS
being an example), and for operating systems able to use multiple cores
efficiently.
  
Open source software in a business context is about business applications
that a small company can fire up, scale up, and run for the long haul.  A lot
of the 'generosity of unpaid developers' you refer to is actually funded by
the companies they work for, where working hours are left available for
people to work on pieces of 'free' code while maintaining a place to live,
food in their stomachs, and a lifestyle of their choosing.

For very small companies, who might have a core team of innovative people
and don't want VC financing to dictate the use of Micro$oft or other, shall
we say, typically less functional, less customizable, and less friendly
platforms on the basis of 'protecting the investment' or funneling money
into another investment, 'free' software is critical.  The less software my
company has to use that is proprietary, closed source, and licen$ed at often
ludicrous fees to pay for a marketing arm and the CEO's Porsche, Ferrari,
Bentley, or whatever, the more money we have to hire people and pay
$alaries, which gives us more time to innovate.

Lastly, where there's an application that a business needs, there are
resources to develop it.

I can't count the number of projects that benefit from a given company doing
customization work and then releasing the non-proprietary bits via some sort
of 'free to use' license.

If it weren't for some notably large companies using open source software,
open source software wouldn't be nearly as far along as it is.  Academia can
only take it so far.
Peter

--
Peter Serwe 

http://www.infostreet.com

"The only true sports are bullfighting, mountain climbing and auto racing." 
-Ernest Hemingway

"Because everything else requires only one ball." -Unknown

"Do you wanna go fast or suck?" -Mike Kojima

"There are two things no man will admit he cannot do well: drive and make 
love." -Sir Stirling Moss



Re: Installing Dragonfly 1.8 hangs BIOS completely

2007-02-19 Thread Rauf Kuliyev

Hi,

I bet it is IBM ThinkPad. You can find additional information here:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/install.html#BOOT-ON-THINKPAD

Regards,
Rauf

On 2/19/07, Michael Neumann <[EMAIL PROTECTED]> wrote:

Hi,

Just a few minutes ago, I installed Dragonfly 1.8 onto my laptop.
Then I rebooted, and the BIOS hung up completely after showing that it
detected the harddisk and cdrom. I powered down and tried again, but
that didn't work either. I couldn't boot a CD or anything else, or even
enter the BIOS setup.

The only thing that worked was to remove the harddisk physically and
then plugging it back in a few seconds after the BIOS had passed the device
detection. Using this method, I booted the Dragonfly installer cd and
used the disk tools to wipe out the beginning of the harddisk. Then I
rebooted again and voila, I could boot normally (without removing the
harddisk). Phew!

Now I tried a second time to install Dragonfly 1.8, but after I reboot
the BIOS hangs again!

I know that the BIOS should not hang itself up, but on the other hand
that didn't happen with any other operating system I installed on my
laptop (FreeBSD, NetBSD, DragonFly 1.6). So I think there is something
wrong in the 1.8 version. Any hints?

Regards,

   Michael



Re: Installing Dragonfly 1.8 hangs BIOS completely

2007-02-19 Thread Bill Hacker

Michael Neumann wrote:

Hi,

Just a few minutes ago, I installed Dragonfly 1.8 onto my laptop.
Then I rebooted, and the BIOS hung up completely after showing that it 
detected the harddisk and cdrom. I powered down and tried again, but 
that didn't work either. I couldn't boot a CD or anything else, or even 
enter the BIOS setup.


The only thing that worked was to remove the harddisk physically and 
then plugging it back in a few seconds after the BIOS had passed the device 
detection. Using this method, I booted the Dragonfly installer cd and 
used the disk tools to wipe out the beginning of the harddisk. Then I 
rebooted again and voila, I could boot normally (without removing the 
harddisk). Phew!


Now I tried a second time to install Dragonfly 1.8, but after I reboot 
the BIOS hangs again!


I know that the BIOS should not hang itself up, but on the other hand 
that didn't happen with any other operating system I installed on my 
laptop (FreeBSD, NetBSD, DragonFly 1.6). So I think there is something 
wrong in the 1.8 version. Any hints?


Regards,

  Michael


Welll... you haven't told us *which* laptop (make, model, age, CPU), whether you 
have APM/ACPI enabled or not, whether storage devices are autodetecting, swapped, or set 
to boot out-of-order, etc.  Any and all of that info might help.


...and nothing attached to a serial port while booting, please.

Bill


Installing Dragonfly 1.8 hangs BIOS completely

2007-02-19 Thread Michael Neumann

Hi,

Just a few minutes ago, I installed Dragonfly 1.8 onto my laptop.
Then I rebooted, and the BIOS hung up completely after showing that it 
detected the harddisk and cdrom. I powered down and tried again, but 
that didn't work either. I couldn't boot a CD or anything else, or even 
enter the BIOS setup.


The only thing that worked was to remove the harddisk physically and 
then plugging it back in a few seconds after the BIOS had passed the device 
detection. Using this method, I booted the Dragonfly installer cd and 
used the disk tools to wipe out the beginning of the harddisk. Then I 
rebooted again and voila, I could boot normally (without removing the 
harddisk). Phew!


Now I tried a second time to install Dragonfly 1.8, but after I reboot 
the BIOS hangs again!


I know that the BIOS should not hang itself up, but on the other hand 
that didn't happen with any other operating system I installed on my 
laptop (FreeBSD, NetBSD, DragonFly 1.6). So I think there is something 
wrong in the 1.8 version. Any hints?


Regards,

  Michael


Gcc-ada errata?

2007-02-19 Thread Bill Hacker
Anyone seen this (DFLY 1.8 REL, Celeron 1 GHz, 512MB SDRAM, IBM/Hitachi PATA 
20GB HDD):


=
# cd /usr/pkgsrc/lang/gcc34-ada

# bmake install -DUSE_GCC34
WARNING: [depends.mk] Unknown object format for installed package 
digest-20050731
=> Required installed package digest>=20010302: digest-20050731 found
===> Checking for vulnerabilities in gcc34-ada-3.4.6
=> Checksum SHA1 OK for gcc-3.4.6.tar.bz2
=> Checksum RMD160 OK for gcc-3.4.6.tar.bz2
work -> /usr/obj/pkgsrc/lang/gcc34-ada/work
===> Installing dependencies for gcc34-ada-3.4.6
=> Required installed package gmake>=3.78: gmake-3.81 found
=> Required installed package {gettext-tools>=0.14.5,gettext>=0.10.36}: 
gettext-tools-0.14.6 found

=> Required installed package gcc34>=3.4.5nb1: NOT found
=> Verifying reinstall for ../../lang/gcc34
WARNING: [depends.mk] Unknown object format for installed package 
digest-20050731
=> Required installed package digest>=20010302: digest-20050731 found
===> Checking for vulnerabilities in gcc34-3.4.6nb1
=> Checksum SHA1 OK for gcc-3.4.6.tar.bz2
=> Checksum RMD160 OK for gcc-3.4.6.tar.bz2
===> Building for gcc34-3.4.6nb1
(ulimit -d `ulimit -H -d`; && cd /usr/obj/pkgsrc/lang/gcc34/work/obj && 
/usr/bin/env USETOOLS=no ICONVPREFIX=/usr SHELL=/bin/sh PTHREAD_CFLAGS=\ 
-pthread\  PTHREAD_LDFLAGS=\ -pthread PTHREAD_LIBS=-lc_r PTHREADBASE=/usr CC=cc 
CFLAGS=-O2\ -I/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/include\ 
-I/usr/include\ -I/usr/pkg/include 
CPPFLAGS=-I/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/include\ -I/usr/include\ 
-I/usr/pkg/include CXX=c++ CXXFLAGS=-O2\ 
-I/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/include\ -I/usr/include\ 
-I/usr/pkg/include COMPILER_RPATH_FLAG=-Wl,-R F77=f77 FC=f77 FFLAGS=-O LANG=C 
LC_COLLATE=C LC_CTYPE=C LC_MESSAGES=C LC_MONETARY=C LC_NUMERIC=C LC_TIME=C 
LDFLAGS=-L/usr/lib\ -Wl,-R/usr/lib\ -L/usr/pkg/lib\ -Wl,-R/usr/pkg/lib 
LINKER_RPATH_FLAG=-R 
PATH=/usr/obj/pkgsrc/lang/gcc34/work/.wrapper/bin:/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/bin:/usr/obj/pkgsrc/lang/gcc34/work/.gcc/bin:/usr/obj/pkgsrc/lang/gcc34/work/.tools/bin:/usr/pkg/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/pkg/bin:/usr/pkg/sbin:/usr/games:/usr/local/sbin:/usr/local/bin:/usr/pkg/xorg/bin:/usr/X11R6/bin:/root/bin:/usr/pkg/bin:/usr/pkg/xorg/bin 
PREFIX=/usr/pkg PKG_SYSCONFDIR=/usr/pkg/etc CXXCPP=cpp CPP=cpp 
LINK_ALL_LIBGCC_HACK= LOCALBASE=/usr/pkg 
NO_WHOLE_ARCHIVE_FLAG=-Wl,--no-whole-archive 
WHOLE_ARCHIVE_FLAG=-Wl,--whole-archive X11BASE=/usr/pkg/xorg X11PREFIX=/usr/pkg 
PKGMANDIR=man MAKECONF=/dev/null OBJECT_FMT=ELF USETOOLS=no 
BSD_INSTALL_PROGRAM=/usr/bin/install\ -c\ -s\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_SCRIPT=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_LIB=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_DATA=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 444 
BSD_INSTALL_MAN=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 444 
BSD_INSTALL=/usr/bin/install BSD_INSTALL_PROGRAM_DIR=/usr/bin/install\ -d\ -o\ 
root\ -g\ wheel\ -m\ 755 BSD_INSTALL_SCRIPT_DIR=/usr/bin/install\ -d\ -o\ root\ 
-g\ wheel\ -m\ 755 BSD_INSTALL_LIB_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ 
wheel\ -m\ 755 BSD_INSTALL_DATA_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ wheel\ 
-m\ 755 BSD_INSTALL_MAN_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ wheel\ -m\ 755 
BSD_INSTALL_GAME=/usr/bin/install\ -c\ -s\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_GAME_DATA=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 444 
BSD_INSTALL_GAME_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ wheel\ -m\ 755 
INSTALL_INFO= MAKEINFO=/usr/obj/pkgsrc/lang/gcc34/work/.tools/bin/makeinfo 
PKG_CONFIG= 
PKG_CONFIG_LIBDIR=/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/lib/pkgconfig:/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/share/pkgconfig 
PKG_CONFIG_LOG=/usr/obj/pkgsrc/lang/gcc34/work/.pkg-config.log WRAPPER_DEBUG=no 
WRAPPER_UPDATE_CACHE=yes VIEWBASE=/usr/pkg /usr/pkg/bin/gmake bootstrap)

Syntax error: "&&" unexpected
*** Error code 2

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc34
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc34
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc34-ada



Bill Hacker



Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Bill Hacker

Robert Luciani wrote:

*snip* (clustering discussion..)



Jokingly: I think the notion of functional individual computers
helping each other out sounds a bit like neurons in a brain. The
technological singularity is coming, nothing can stop it!


Oddly, they *are* - but not in the way theoreticians around the time 
'Metropolis' or even 'Brave New World' were written might have envisaged, i.e. 
neither centrally controlled nor even 'closely coupled'.


The 'net, e-mail, file & media exchange, IRC... one could go on... they seem to 
JFDI w/r/t all manner of pragmatic 'sharing', in keeping with one of the essential 
characteristics of their human users:


- For the most part, a little 'latency' is not unwelcome.

IOW 'Mañana' - response / gratification in minutes, if not hours, is usually as 
good as we can absorb anyway w/o becoming totally 'time-slaved' to the 
machinery that was intended to be the servant - not the master.


Specialized scientific applications are just that - specialized, and often 
warranting from-the-ground-up bespoke software - OS included.


Not to put too fine a point on it, but most of these need a very different core 
than (any of the) Unixes anyway: real-time execs, self-monitoring, 
soft fault tolerance, etc.


Bill


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Robert Luciani
> Bill Hacker wrote:
> 
>> Simon 'corecode' Schubert wrote:
>>> Rupert Pigott wrote:
>> 
>> 
>> But a brief scan of those that were 'free' brings up the question:
>> 
>> 'Just who is it that actually NEEDS this anyway?'
>> 
>> Bill Hacker
> 
> Well, Rupert Pigott gave some pretty convincing explanations of the
> usefulness of the concept. Personally I see the problem from the other
> side: I am a physicist in a lab which has a cluster, and the lab below us
> also has a cluster. Besides pissing contests over who has the biggest, I
> have a hard time finding the real usefulness of this stuff. Basically the
> cluster is used by people as a collection of independent computers running
> independent computations. Probably few of these guys, if any, have any
> notion of concurrent programming. On the other hand, I know an American
> physicist who has a cluster and does real clustered computations on it, in
> fact QCD computations on the lattice. This is the work of his life; he has
> learnt MPI and other hard stuff to exploit parallelism. People like that
> are so rare, and can get funding both for the hardware and the software,
> that this doesn't justify, in my opinion, a lot of effort from free
> software developers. Of course developers develop what they like; I have
> absolutely nothing against that. People see advantages in distributed
> filesystems. Now AFS is 30 years old and its rejuvenated version ARLA is
> 10 years old; who is really using that in the real world? Of course I
> don't know much, but I don't know of a single installation using it. This
> says a lot about the usefulness or the necessity of these concepts.
> 
>

I think it's important to ask oneself these questions since it's a shame
to waste time on something that nobody can ever appreciate. On the other
hand, in Matt's and many others' vision clustered computing will perhaps
be an integral part of the future, just like many-core processors will be
in all personal computers of the future. Because of this, only now are
people scrambling and trying to figure out how they can squeeze more juice
out of their programs and operating systems in SMP environments. 

Jokingly: I think the notion of functional individual computers
helping each other out sounds a bit like neurons in a brain. The
technological singularity is coming, nothing can stop it!