Re: ltp man page

2009-12-13 Thread Bill Hacker

Pierre Abbat wrote:
I'm trying to figure out how to try out ltp. I looked at the man page and 
found this sentence:


Pan uses the signal ratchet found in other zoo tools.

Huh???

Pierre


... should make sense in the context of what follows:

The first time pan is signaled it sends a SIGTERM to the active pgrps; the 
second time it sends SIGHUP; the third time a SIGINT; after that it always sends 
SIGKILL.


- but perhaps not to all native-languages.

Perhaps 'escalates' or 'successively re-tries with increased urgency' of its 
signals (until it has terminated all ...)


... might be clearer.

Just omitting the sentence you cite and starting with the one I've cited above 
strikes me as better yet.
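The escalation order is easier to see as code; here is a minimal Python sketch of the ratchet the man page describes (the function name and structure are mine, not pan's actual implementation):

```python
import signal

# Signal "ratchet": each successive signal to pan escalates what it sends
# to the active process groups, ending (and staying) at SIGKILL.
ESCALATION = [signal.SIGTERM, signal.SIGHUP, signal.SIGINT]

def next_signal(times_signaled):
    """Signal to forward on the Nth (0-based) time pan itself is signaled."""
    if times_signaled < len(ESCALATION):
        return ESCALATION[times_signaled]
    return signal.SIGKILL  # after the third, always SIGKILL
```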


Bill





Re: Updating USB stack from FBSD 8.x and others

2009-12-13 Thread Bill Hacker

elekktrett...@exemail.com.au wrote:

I don't know if non-profit would help us here - people would have to pay
money to the non-profit entity, which would then hire someone (Alexander)
to do the work.  Once money gets involved with an official organization,
the processes have to be very clear and very fair.  We'd probably need to
move to a grant application process like what the FreeBSD Foundation does
now.

We're fine with this informal volunteer process we have now, as a way to
move money between individuals.  I've seen open source non-profits end up
with problems because money gets awarded to individual developers in a
not-too-clear way and other developers get hurt, emotionally and
financially.


I know what you mean, but being able to use such contributions for tax
deductions would surely encourage more people to contribute.

Petr



Not to forget the several thou (in US$ or equivalent) that have to go for all the 
unavoidable setup paperwork and forever-after periodic filings.


That's money that can't fund anything but chair-warmers - a class already in 
massive surplus.


:-(

Bill


Re: Fluxbox 1.1.1 apps

2009-11-17 Thread Bill Hacker

Niklas Rosencrantz wrote:

Dear dragonfly team,
I enable Fluxbox 1.1.1 from a Dell Laptop dfly 2.4.1 regularly wikiing
http://www.dragonflybsd.org/docs/user/dellcpx/
welcoming your instructions how to enable more apps and browsers e.g.
firefox. The best browser starting is links-graphics, konquerer and more
also start, still many apps appear in the menus registering as installed and
don't start. Hence probably path issue. pidgin IM does start, gimp won't,
igoo won't, might be random or config the thing. If doable, please inform
whether apps in menu are supposed to start the program or path to path
configuration. Starting a root gui, gnome starts OK and not mozilla. My goal
is migrating my dev environment.
Thanks & regards
Nick Rosencrantz



When encountering dodgy behaviour with X Window System apps, try calling their 
binary name from a terminal window instead of the menu. Then slide the app's 
window (if one even opens at all) aside so it doesn't cover that terminal.



Reason:

error info will be there (missing libs et al) that would not be as easily seen 
if started with a menu click, and it may or may not be reaching any of the logs...


Doesn't FIX the problem, but may help ID the cause.
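The same trick can be done programmatically; here is a small Python sketch of capturing a program's stderr complaints (the failing `ls` call is just a stand-in for a misbehaving X app):

```python
import subprocess

def run_and_capture(cmd):
    """Run a program by name and return (exit code, stderr text) --
    the same diagnostics (missing libs etc.) you'd see in a terminal."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stderr

# A deliberately failing command leaves its complaint on stderr:
rc, err = run_and_capture(["ls", "/no/such/path"])
```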

HTH,

Bill Hacker


Re: Xorg configuration

2009-11-01 Thread Bill Hacker

jus...@shiningsilence.com wrote:

On Saturday 31 October 2009 22:58:27 jus...@shiningsilence.com wrote:



I used X -config xorg.conf.new and it came up blank. Should I post the
conf or
the log file?


Both - we're not low on electrons.  If you have someplace to put them, you
can do that and paste a link.

Another help option is the #dragonflybsd IRC channel on EFNet; if you are
comfortable using IRC, there's a good number of people in that channel,
all of whom are knowledgeable and may be able to help more immediately. 
It's usually busiest during daytime hours in Europe.




Not sure it is germane but 'something' in xorg went at least temporarily 
pear-shaped on the latest FreeBSD step-ups (7.1 - 7.2 as well as 8 RC1) and one 
other *BSD w/r Intel VGA driver vs VESA autoselection and loading.


Symptom was an i9XX that was previously automagically handled w/o need of a conf 
file stalling with 'unable' vs prior art falling back to VESA if all else failed.


Taming it with conf file entries quickly became tedious, and was of no interest 
in a constantly-changing workshop of portable storage devices, so I've simply 
backleveled until the Xorg community recover their automagicality.


NB: These are changes in the Xorg 'containership' - not the host OS - have been 
extant for quite a while now, BUT AFAIK not relevant to other-than i8XX / i9XX 
Intel VGA.


HTH,

Bill


Re: Xorg configuration

2009-11-01 Thread Bill Hacker

Saifi Khan wrote:

On Sun, 1 Nov 2009, Bill Hacker wrote:


Not sure it is germane but 'something' in xorg went at least temporarily
pear-shaped on the latest FreeBSD step-ups (7.1 - 7.2 as well as 8 RC1) and
one other *BSD w/r Intel VGA driver vs VESA autoselection and loading.

Symptom was an i9XX that was previously automagically handled w/o need of a
conf file stalling with 'unable' vs prior art falling back to VESA if all else
failed.

Taming it with conf file entries quickly became tedious, and was of no
interest in a constantly-changing workshop of portable storage devices, so
I've simply backleveled until the Xorg community recover their automagicality.

NB: These are changes in the Xorg 'containership' - not the host OS - have been
extant for quite a while now, BUT AFAIK not relevant to other-than i8XX / i9XX
Intel VGA.



On one of my laptops, I'm running FreeBSD
8.0-CURRENT-200906 and it has an Intel 945GM chipset.

Xorg 1.6.x is running absolutely fine !

Tested it with both KDE 3.5.10 and DWM.

There were some xorg.conf changes between Xorg server 1.5.x and
1.6.x

Can you please highlight the specific issues ?


thanks
Saifi.



Lenovo Asian-market G400 3000 laptop of January 2008 vintage:

Intel T2130 CPU, 1.86 GHz unicore hyperthreading as if dual-core

2 GB RAM

Intel 82945GM rev 0X03 video

Same issue surfaced in both FreeBSD 7.2-RELEASE and 8-RC1, neither updated 
beyond the original ISO. Was not a problem in 7.1-RELEASE, though it had been 
updated several times (not always with updates to the installed Xorg & sputniks).


The log (and CLI artifacts) showed a hang on an attempt to load an Intel VGA 
driver, whose specific identity I did not take note of - only that it was neither 
on the box nor in the ports tree.


I'm not expecting Xorg to have all possible vendor-specific drivers all the 
time, or even most of the time - but it *used to* drop into generic VESA ELSE 
vendor-BIOS specific VESA instead of hanging.


Given that a HDD here may be attached to any of several machines - which vary on 
kbd and mouse as well as VGA and display res, VESA is preferred over otherwise 
wrong-for-all-but-one conf files. Finding a hardware-specific driver is 
serendipitous, but only if the attempt doesn't create more pain than gain.


As stipulated, the problem appears to have been transient:

*newer* Xorg seems to be fine:

- I have 1.6.4 RC 1 on OpenBSD-4.6 - initially reports Intel 82945GM,
jumps through a few hoops (very few), then uses 'built-in' Intel VESA at the 
correct 1280 x 800 native resolution.


*older* Xorg was at least hang-free:

- I have 1.4.2, 'usable' if suboptimal, on NetBSD 5.0.1 - initially reports Intel 
Mobile 945GM/GMS/GME, 943/940GML Express, plays with itself for a while, 
finally settles on VESA VBE BIOS mode for the (correct) Intel 82945GM chipset, 
but with incorrect native resolution (1024 x 1068 vs the actual 1280 x 800) for 
the Samsung panel in use.


Unrelated, but Haiku OS nails it precisely on the first go, and displays 
as sharply as if it were OS X - something the X Window System has a hard time 
matching (at least in the various apps), regardless of KDE, Gnome, or my usual - 
Xfce4.


Where DragonFly is w/r Xorg version at the moment is an exercise for others - I've 
presently got Haiku in the partition where I usually put DragonFly, so have 
neither dmesg nor Xorg.0.log handy OR saved.


Best,

Bill


Re: Network maintainance on dragonflybsd.org

2009-10-28 Thread Bill Hacker

Matthew Dillon wrote:

There is going to be some serious restructuring of my subnet some
time in the next two weeks.  Theoretically I'll be drilled through
my cable modem while my DSL is down but it is unclear how well that
will work and the ISP muffed up the router they tried to FedEx me
by sending it saturday-delivery on a weekend I wasn't home and
not checking the box that allows the delivery guy to just leave the
package.  Sigh.

So this is a heads-up that there may be some significant outages.
I'll post (if possible) when I have more details.

-Matt
	Matthew Dillon 
	dil...@backplane.com


'Best case' w/r the cable modem is a substantial drop in your upload / all 
others' download b/w. Otherwise not much diff.


Shout if you need the loan of a server for a few weeks. I've got a backup online 
at One IAdvantage (Kwun Tong, HK) that is not critically needed for that job 
until December.


Bill Hacker


Re: Instant crash with Linux OpenOffice

2009-06-06 Thread Bill Hacker

Matthew Dillon wrote:

:Hi,
:
:I have recently upgraded the root fs to Hammer on one of my machines. Since
:then, I have been unable to run any version of OpenOffice.
:Previously, misc/openoffice2-bin and misc/openoffice3-bin ran fine.
:
:The OS is DragonFly 2.2.2-RELEASE.
:
:The splash screen begins to appear and then OpenOffice crashes with some weird
:errors:
:
:$ soffice
:javaldx: Could not find a Java Runtime Environment! 
:sh: g/,: No such file or directory

:
:$ soffice
:javaldx: Could not find a Java Runtime Environment! 
:sh: -c: line 0: unexpected EOF while looking for matching `''

:sh: -c: line 1: syntax error: unexpected end of file
:
:I'm not sure how to debug this.
:
:-- 
:Francois Tigeot


I've had similar issues with OpenOffice but I have never been able
to figure out what's wrong.  I don't think it is HAMMER per-say, but
instead something related to our linux emulation that is breaking.
It seems to happen more consistently with HAMMER but issues also pop
up with UFS.


OO 2.X was also consistently useless on Mac OS X 10.3.9 with an all-UFS fs. Shows 
the OO splash screen, dies, and goes where all responsible bloatware should take itself.


OO 1.X did work on Mac UFS under 10.3.9.

OO 2.X works OK on OSX 10.4 with hfs+ fs.



It could be related to the path prefixing the linux emulation does
to give priority to the emulation directory structure on lookups.

-Matt
	Matthew Dillon 
	dil...@backplane.com


Probably a lot more to it than that. I'd point at javishness, but Macs have 
/* oxymoron warning */ fairly usable Java.


Bill


Re: Shuttle SN78SH7 - success, and starting AHCI driver port from OpenBSD.

2009-05-28 Thread Bill Hacker

Matthew Dillon wrote:

I've upgraded my test boxes to a new generation.  The Shuttle SN78SH7
barebones (meaning one must buy the cpu, memory, and drives separately),
with a Phenom x4 cpu pretty much just works.  Everything probes and
it boots up without complaint.  Dmesg output is here:

http://apollo.backplane.com/DFlyMisc/ShuttleSN78SH7.txt

This baby doesn't have any PS/2 connectors.  The keyboard/mouse is
all USB, and does appear to work just fine.

I also went with both a SATA HD and a SATA DVD-RW, so no IDE stuff at
all in my box.

I got the whole mess from newegg.com, it was delivered smartly and
ridiculously cheap for the horsepower it represents.  I already had
the hard drives.  Everything else came to less then $600 per box
(shuttle, cpu, memory, DVD-RW).

--

In anycase, the major reason for updating was to get some test boxes
worked up with native AHCI hardware so I can port the OpenBSD AHCI driver.

So I am now officially porting that driver and I expect to have it done
for our July release.  OpenBSD has a very nice implementation which fully
supports command queueing and being able to separate out the AHCI
support from the NATA driver will allow us to slowly phase-out the NATA
driver for all-SATA systems.  I also think we can get removable SATA
working (something we want since ESATA is becoming the new standard for
external hard drives).

-Matt
	Matthew Dillon 
	dil...@backplane.com


Matt,

Where will 'phaseout' leave natacontrol RAID et al?

Bill


Re: mail/mailx question

2009-05-24 Thread Bill Hacker

Tim Darby wrote:

I was trying to write a mail script and couldn't find a way to set the
from field to an arbitrary address.  Is there a way to do this with
the base mail client?  I ended up installing mutt to solve the
problem.

Tim


AFAIK, you will need to have your script bypass mail, Mail, and mailx, and call 
the MTA binary directly:


man sendmail, the -F and -f options.

exim fully supports both flags.

courier-mta probably does also. Not currently installed here.

postfix 'sendmail option' interface supports at least the '-F'.
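A sketch of how a script might compose that direct MTA call; the path, addresses, and helper name below are illustrative, not from the thread:

```python
def sendmail_argv(envelope_from, full_name, recipients,
                  sendmail_path="/usr/sbin/sendmail"):
    """Build an argv for a sendmail(8)-compatible binary, setting an
    arbitrary envelope sender (-f) and From: full name (-F)."""
    return [sendmail_path, "-f", envelope_from, "-F", full_name,
            *recipients]

argv = sendmail_argv("robot@example.com", "Nightly Reports",
                     ["admin@example.org"])
# subprocess.run(argv, input=message_text, text=True)  # to actually send
```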


Bill


Re: nb suffix to package names

2009-05-21 Thread Bill Hacker

Archimedes Gaviola wrote:

On Thu, May 21, 2009 at 5:26 PM, Saifi Khan saifi.k...@twincling.org wrote:

Hi:

Noticed that some of the packages installed from the dfly 2.2.1
DVD have 'nb' suffixes, for example:

xconsole-1.0.3nb1
shared-mime-info-0.51nb2
rxvt-unicode-8.3nb4

Two questions here:
 . what does the 'nb' suffix mean ?
 . what is the interpretation of the number after 'nb' ?


thanks
Saifi.



Hi Saifi,

I think this 'nb' is a NetBSD convention for their software packages in
pkgsrc, which is what DragonFly is using.

Thanks,
Archimedes


Google finds a bit of history:

The NetBSD pkgsrc collection as of
   2000-02-14 contains versions of the ssh.com 1.2.27 distribution,
   patched to avoid the buffer overflow; these are identified by an
   nb1 suffix in the package version.

and:

/* this struct defines a version number */
typedef struct arr_t {
        unsigned c;      /* # of version numbers */
        unsigned size;   /* size of array */
        int *v;          /* array of decimal numbers */
        int netbsd;      /* any nb suffix */
} arr_t;


No idea what current use is, but a NetBSD sub-ID similar to a more generic 'p(X)' 
patchlevel does look to be a reasonable fit.
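A quick illustration of that fit: splitting a pkgsrc-style version string into the upstream part and the nb patchlevel. This helper is my own, assuming nb level 0 when no suffix is present:

```python
import re

def split_nb(version):
    """Split e.g. '0.51nb2' into ('0.51', 2); no suffix means nb level 0."""
    m = re.match(r"^(.*?)(?:nb(\d+))?$", version)
    upstream, nb = m.group(1), m.group(2)
    return upstream, int(nb) if nb else 0
```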


Bill



Re: Hammer error?

2009-05-19 Thread Bill Hacker

Sdävtaker wrote:

Hello,
I was doing some backup from Machine0 to Machine1 using hammer
mirror-copy and got a suppicious new file called hammer.core.
Looking around the files it looks like a success copy but i still
curious about this new 720kb file.
Someone can tell me if i need to worry about it and where can i start
checking for errors in case there was one?
Machine0 has DFBSD2.2.0-release, Machine1 has DFBSD2.2.1-release,
could it be the problem? should i just upgrade both?
Sdav


ISTR *several* places where 'tis said hammer really needs to be 'matched'

As you've got 2.0 and skipped 2.1 for 2.2.1, I'd suggest having the odd core 
dump is b***dy lucky.


An upgrade will probably deliver more immediate 'treasure' than Rikku has time 
or inclination to bless you with...


;-)

Bill



Re: Disk question

2009-05-16 Thread Bill Hacker

Tim Darby wrote:

I have a machine that was running Windows XP until I recently
installed 2.2.2 on it.  This was mainly for the purpose of trying out
Hammer.  It contains a 40GB drive, which I made the boot drive and
installed with Hammer.  The other 2 drives are a 300GB Samsung IDE and
a 200GB Western Digital IDE, both connected to a SiI 0680 controller.
I've successfully installed Hammer on the WD drive, but the Samsung
fails as follows:

# newfs_hammer -L BACKUP /dev/ad6s1a
Volume 0 DEVICE /dev/ad6s1a size 279.46GB
initialize freemap volume 0
ad6: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=2245359
ad6: TIMEOUT - READ_DMA retrying (0 retries left) LBA=2245359
ad6: WARNING - READ_DMA status=51<READY,DSC,ERROR>
error=84<ICRC,ABORTED> LBA=2245359
newfs_hammer: get_buffer: /dev/ad6s1a:20814000 Read failed at
offset 1149583360: Input/output error

I ran a Samsung diagnostic on it and it found a DMA command timeout
problem (service code AJ27), so it appears that the drive really does
have a problem.  Just for kicks though, I tried installing UFS on this
drive and it gave me no complaints and seems to actually work.  Also,
Windows XP seemed to be working fine with this drive.  Does that make
any sense?

Tim


I don't see your Silicon Image HBA reported in dmesg all that differently from 
one I am using.


Relevant ID's from FreeBSD 7.1 AMD64 are;

atapci0: SiI SiI 3114 SATA150 controller port 
0x40a0-0x40a7,0x4094-0x4097,0x4098-0x409f,0x4090-0x4093,0x4080-0x408f

mem 0xdc04-0xdc0403ff irq 19 at device 6.0 on pci10

Yours from DragonFly were:

atapci0: SiI 0680 UDMA133 controller port
0xec70-0xec7f,0xec90-0xec93,0xec98-0xec9f,0xeca8-0xecab,0xecb0-0xecb7
mem 0xff1afc00-0xff1afcff irq 11 at device 11.0 on pci2


Further - I've had chronic trouble of the same sort with WD, Maxtor, Fujitsu, 
and Samsung drives on the Intel ICH7 (Tyan Tomcat) under FreeBSD between 6.2 and 
7.1 - not entirely absent on 7.1, either - and have moved drives OFF the onboard 
ICH7 onto a cheap-as-dirt SiI PCI HBA that runs 'em slower, but trouble-free.


Mine has:

atapci1: Intel ICH7 SATA300 controller port 
0x30c0-0x30c7,0x30b4-0x30b7,0x30b8-0x30bf,0x30b0-0x30b3,0x30a0-0x30af mem 
0xdc500400-0xdc5007ff irq 19 at device 31.2 on pci0



You are showing an ICH2:

atapci1: Intel ICH2 UDMA100 controller port
0xffa0-0xffaf,0x376,0x170-0x177,0x3f6,0x1f0-0x1f7 at device 31.1 on
pci0

(rest of dmesg snipped).


So I don't think it is the drives, and it may not be DragonFly, either.

Intel's ICH series are as ubiquitous as housefly feces - and not a great deal 
more welcome in our house these days, as drive controllers go.


HTH,

Bill



Re: Seize During Boot

2009-05-08 Thread Bill Hacker

TP Reitzel wrote:

ohci1.pci1.pcib1.pci0.pcib0.legacypci0.nexus0.root0 ohci1: NEC uPD
9210 USB Controller [tentative] mem 0xfddff000-0xfddf irq 10 at
device 10.0 on pci1 pcib1: device ohci1 requested decoded memory
range 0xfddff000-0xfddf usb1: OHCI version 1.0, legacy support 
usb1.ohci1.pci1.pcib1.pci0.pcib0.legacypci0.nexus0.root0 usb1: NEC
uPD 9210 USB controller [tentative] on ohci1 usb1: USB revision 1.0 
uhub1.usb1.ohci1.pci1.pcib1.pci0.pcib0.legacypci0.nexus0.root0 uhub1:

NEC OHCI root hub, class 9/0, rev 1.00/1.00, addr 1 [tentative] on
usb1 uhub1: 3 ports with 3 removable, self powered uhub1:


At the final uhub1:, the cursor sits to the right of the colon and
the system is frozen. I have 3 USB video cameras plugged into these
three ports. If I unplug the cameras, DragonFly boots fine so the
problem isn't the actual 4 USB port PCI card, but rather the USB
cameras plugged into the USB ports of this PCI card, a Belkin FU220


If these same USB cameras are plugged into the onboard USB controller
(nForce 430 chipset), DragonFly boot fine as well. This problem needs
to be fixed for sure. HTH



Q1:

Once the system is already up..

- either booted without the cameras or

- booted with them initially on the nForce internal

What happens if the cameras are THEN plugged in to the Belkin?


Q2:

- Will the boot complete normally with just ONE camera on the Belkin?

- If so, how about TWO?

If one or two work, but NOT three, I'd suspect the current drain of 
three cameras is too much for the Belkin to handle AND still keep its 
bus conversations noise-free, clean-edged and within voltage and timing 
tolerance.


If so, the 'fixed for sure' is not software. It would need a stronger 
attach point (nForce), a second Belkin, fewer cameras, or ...


... an external *powered* USB hub.

An oscilloscope would confirm that - but you probably aren't in Hong 
Kong, so mine is no use.


Bill


Re: Fwd: kqemu patch for DragonFly BSD on kqemu1.4.0pre1

2009-04-29 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Matthew Dillon wrote:

:Naoya Sugioka wrote:
: I've just sent a mail to gnat at netbsd.org to reach pkgsrc community
: if I can include them to their pkgsrc tree now.
:
:I don't have a good idea how to make a pkgsrc package for kernel 
module. :Actually I'd prefer it to be imported into base. Yah, I 
agree.  There seem to be some files missing from the

kqemu-dragonfly.patch.tar file you posted.  It's making calls
to routines that are not present in files in the tar.

e.g. kqemu_log(), but no file in the tar contains a function by
that name.

It will also require a major security and software review, and also
needs a master SYSCTL-based enable (default disabled).


Why an enable?  Just don't load the module if you don't want it to be 
used.  qemu doesn't load the module itself.



Actually it does on some OS/hosts.

Or 'can' check for it and load it on startup if not already loaded - 
unless otherwise directed, anyway.




Also, I don't think we should put $random kernel modules into base.  Why 
isn't pkgsrc a good location?


cheers
 simon



Probably a better place until there are sufficient resources to ensure 
that kqemu (heck, 'any' .ko) will always have been thoroughly tested 
with the release it travels with.


Bill


Re: Mid-term release 2.2.1 slated for Sunday

2009-04-23 Thread Bill Hacker

Matthew Dillon wrote:

I will be doing the build for 2.2.1 on Satuday and we will release
Sunday 26 Apr 2009.

-Matt
	Matthew Dillon 
	dil...@backplane.com


Question: (particularly given HAMMER's penchant for large media...)

Will the 'stock' installer be GUID Partition Table capable?
(including creation as, installation to, and boot from)

Bill Hacker


Re: Mid-term release 2.2.1 slated for Sunday

2009-04-23 Thread Bill Hacker

Matthew Dillon wrote:

:Question: (particularly given HAMMER's penchant for large media...)
:
:Will the 'stock' installer be GUID Partition Table capable?
:(including creation as, installation to, and boot from)
:
:Bill Hacker

Not unless someone did the work to integrate gpt into the installer.

-Matt
	Matthew Dillon 
	dil...@backplane.com


No way I'll have anything definitive in time, so not to worry.

I'm working with a manual approach on FreeBSD, trialing editing and 
migration using DragonFlyBSD on FreeBSD-created GPT and the reverse.
Just about to introduce NetBSD-current into that mix. OpenBSD 'later' if 
at all.


What I am after is an 'agnostic' stand-alone-capable maintenance toolset 
to support both portability/sharing of 'data' storage (UFS 1 if need 
be) and multi-boot. I'm hoping to side-step the NIH legacy disklabel 
disjoint between/among the *BSD's.


Looks to be more a scripting exercise than coding - the tools are 
generally already there in each OS.


Regards,

Bill



Re: Installing DragonFly

2009-04-15 Thread Bill Hacker

Colin Adams wrote:

I couldn't use Windows anything - that is banned from my house.


Good news, that. Should reduce your long-term risk of stroke or heart attack.

;-)



Equally, I can't use a Linux fdisk (for instance), because I can't
boot the computer at all if the disk is plugged in.
If I remove uncable the disk, then I can boot from the DragonFly live
DVD (or any other live CD/DVD presumably). But then I can't do
anything to the disk because it isn't plugged in.


Suggestion (from 'bitter' experience) - get your hands on another HDD, 
(temporarily) make that one the 'primary', and set it up with (at least) 
one or more *BSDs and a low-hassle Linux. My mix of choice is FreeBSD, 
NetBSD, DFLY & Vector Linux 5.9 Std edition. (It often helps to see how, 
or IF, 'the other guy' reads your MBR and disklabel.)


You should then be able to attach the problematic disk - before or after 
boot. (Suspicion - is your BIOS set to boot from it? And if so, can that 
be changed?)


Further - RAID quite aside, FreeBSD atacontrol and DFLY natacontrol have 
convenient utilities to list, attach/detach, and re-scan ATA channels and 
devices et al w/o reboot.


fdisk and disklabel / bsdlabel, then newfs, should let you re-slice etc. 
to clean up the problematic HDD.


Presuming that HDD is the newer/larger/faster or otherwise more 
desirable device, you should then be able to reverse the process and do 
further experimentation on the 'other' less-valuable HDD as a secondary.


It can be helpful to have multiple versions of /etc/fstab on each that 
can be 'cp'ed into place rather than edited, to either/both get desired 
dev IDs to fit detached/swapped situations, and/or do only partial 
mounting with the rest done manually or by a script other than fstab.


Thereafter, DFLY/FreeBSD boot manager should handle the rest painlessly.

You *can* 'get there from here' with a Live CD - but a fully-functional 
HDD install gives you a richer toolset and more flexibility for 
relatively low cost in time and hardware - especially if the 'other' HDD 
can be USB-attached.


HTH,

Bill Hacker



2009/4/15 Simon 'corecode' Schubert corec...@fs.ei.tum.de:

I bet this is the set all bits to one on CHS overflow thing in fdisk.  I'd
really like to know how we are supposed to handle this (better).

Colin, sorry for trashing your computer.  I think we are well aware of this
issue, but we simply don't know exactly how to deal with it.  Could you
maybe use Windows' fdisk to create a large partition on the drive and then
report back how the partition table looks like?  In this case we could
adjust our fdisk so that this won't happen again.

thanks
 simon

Colin Adams wrote:

What appears to have happened is that in some way it has trashed my
disk-drive - I can still get the machine to boot from the live CD, but
only if I physically disconnect the hard-disk first.


2009/4/8 Hasso Tepper ha...@estpak.ee:

Colin Adams wrote:

Well, if that is the case the ISO should not be available for download
- there should be a fixed version.

Well. It shouldn't be any way fatal, but in general I agree - we should
release 2.2.1 ASAP, really.


--
Hasso Tepper





Re: ASUS Eee compatibility

2009-03-20 Thread Bill Hacker

Sepherosa Ziehau wrote:

On Fri, Mar 20, 2009 at 6:00 AM, John Leimon jlei...@gmail.com wrote:

Hello,

Has anybody tested the latest version of dragonfly with an ASUS Eee netbook?
I would like to know if there are any hardware compatibility issues.


Old eee seemed to have msk(4), which is reported to work.

Newer eee has ale(4), which does not work reliably:
1) One user reported, it does not work properly if cable is unplugged
when/before bringing up the interface
2) From another user's report, the ale_stop() in ale_init() may cause
later ale_reset() fail, and PHY will no longer work after that.

I currently suspect the mii_phy_reset() in ale_mediachange() may be
the cause of the problem (Linux's driver seems to comment that PHY
reset must set autonegotiation bit to workaround certain PHY bug).
Well, I do not have any of these hardware though.

Best Regards,
sephe



Not entirely germane - but DragonFlyBSD 2.1, 2.2, 2.3 worked *very 
smoothly* on at least the CPU family and (predecessor?) support chips of 
the VIA-CPU based el-cheapo MB.


To wit: C3 & C7 with VIA Rhine NICs, VIA/S3 Unichrome VGA, VIA onboard 
and add-in [P|S]ATA SiI OR VIA chipsets.


Nary a whimper, 'core' functionality OR ACPI.

Dunno about sound. I use Marantz for that...


OTOH, applied to a Lenovo G400 - 3000 with Core2 Solo Intel mobile 
whatsis, ACPI had to be disabled ELSE tolerate a chronic kvetching about 
being unable to talk to the battery, temperature, etc sensors.


Not reported 'coz the sub-model Lenovo I have was replaced very early in 
its production life, and the specs alone indicate the newer one is quite 
different. No point in chasing orphans...


OTOH, a brand-new-if-'remaindered' laptop with a decent screen and 
keyboard, 2 GB RAM, 120GB HDD, and Intel hyperthreading that actually 
*works* as 2 processors was too good to pass up for a mere US$ 340... 
'orphan' or not...


About US$ 40 cheaper than an Eee at the time (Hong Kong market...)

Bill


Re: 1 week until Summer of Code application time

2009-03-03 Thread Bill Hacker

Sdävtaker wrote:

It will be amazing if someone can get FreeBSD-UFS mountable (at least for read).




I didn't realize that it wasn't.

Should I cease doing it?

OpenBSD is problematic among slices on the same disk, but even that is 
apparently resolvable via disklabel editing.


Haven't felt the need... yet..

Bill





On Tue, Mar 3, 2009 at 01:27, Justin C. Sherrill
jus...@shiningsilence.com wrote:

Here's a heads-up:

Application time for organizations (not students) to get into the 2009
Summer of Code program is 1 week away.

If you are any of these things:

- potential student
- potential mentor
- person with an idea for a project

Please mark it down at:

http://www.dragonflybsd.org/gsoc2009/

It's OK to suggest an idea even if you don't have the time to mentor it.
I do need more mentors names - remember, it gets you $500 in addition to
helping the DragonFly project a great deal.








Re: glx module

2009-03-03 Thread Bill Hacker

Jim Chapman wrote:
I installed release 2.2.0 and added the xorg packages including 
modular-xorg-server.

When I start X I get a message

Failed to load module glx (module does not exist, 0)

This prevents the xorg.conf which uses the intel driver from starting. 
If I change the xorg.conf file to use the vesa driver I still get the 
message but X starts.


Is this a problem with the package and how do I get the module?

Thanks





pkg_radd modular-xorg-drivers?

As well as:

modular-xorg-fonts

modular-xorg-libs

IF the modular-xorg-server has not pulled them as depends.


ls /var/db/pkg/xf86-video-* will list 'em.

OTOH - not all vid cards have finished drivers.

I'm in VESA mode on one box with 1680x1050 screen until a newer 
VIA/S3-Unichrome is tuned-up.


OTOH, VESA is damn decent these days...

Bill



HAMMER fs limits

2009-03-03 Thread Bill Hacker

Matt,

Is HAMMER ready for this?

http://blogs.zdnet.com/storage/?p=383&tag=rbxccnbzd1

35 bits stored.

.. per each *electron* .

Stackable to 70-bits per electron.  So far.

Somebody must have anticipated signed integers and parity..

;-)

Bill


Re: HAMMER and RAID 5

2009-03-02 Thread Bill Hacker

Dmitri Nikulin wrote:

On Tue, Mar 3, 2009 at 1:08 PM, Mag Gam magaw...@gmail.com wrote:

I was wondering if HAMMER will ever have network based RAID 5. After
researching several file systems it seems HAMMER is probably  the
closest to achieve this problem and will make HAMMER a pioneer.


Intuitively I highly doubt network RAID5 is worth it. Even local disk
RAID5 is unusable for many work loads.

In contrast, check out some of the more flexible RAID10 modes
available in Linux:
http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
You can get N/M effective space (N raw storage / M copies) with
RAID0-like striping for all of it. It performs very well and certainly
much better than the parity-based RAID5.

Imagine how RAID5 would work with network devices:

Read old data block from one server
Read parity block from another server
Generate new parity block
Write data block to one server
Write parity block to another server

All with NO atomicity guarantees, so HAMMER would have to pick up the
slack. Even in the best case you have 8x the latency of a single trip
to a machine (4 request/response pairs of 2 IOs each). All compared to
a one round trip (2 IOs) to write to a plain slave, or N round trips
for N redundant copies. What is an acceptable penalty on local disks
is pretty heavy for network storage.

If you really want, you can use vinum over iSCSI to get networked
RAID5, but it will not perform well.
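The round-trip arithmetic above reduces to a few lines of back-of-envelope Python (the function names are mine; trips are counted as one-way network hops):

```python
def raid5_write_trips():
    """Networked RAID5 read-modify-write: 4 request/response pairs
    (read old data, read parity, write data, write parity)."""
    pairs = 4
    return pairs * 2  # each pair = one request + one response

def mirror_write_trips(copies=1):
    """Plain replication: one round trip (2 one-way trips) per copy."""
    return copies * 2

def effective_space(raw, copies):
    """RAID10-style N/M usable space: raw storage over copy count."""
    return raw / copies
```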



Adding to that (as we have spent the past 12+ months researching all this..)

- there IS prior art, and lots of it. [1]

- none of it is fast - even over local 'Infiniband'

- the most practical compromise seems to be deferred background 
replication to 'pools' that are themselves *hardware* RAID5 (6 or 10).


- 'hammer mirror-stream', especially if done over something faster than 
ssh - eg: locally over 10GigE, iSCSI, or e-SATA over raw Ethernet - is a 
primo candidate for having at least one rapid-restoration near-real-time 
snapshot.


But at the present state of the art, HAMMER is challenged w/r quotas, 
subvolume-only selective replication, and r/w mounting of the mirrored 
snapshot(s).


Quite possibly there will be no 'one size fits all' solution. Too many 
compromises that pull in opposing directions.


As has always been the case..

HTH,

Bill

[1] Start with the Wikipedia article on distributed file systems, 
particularly replicated and fault-tolerant.


Most are either IBM/Sun/Oracle/$AN-vendor 'mainframe & big-bucks' 
class, ELSE Linux whole-damn-world-in-the-kernel wannabes.


Among the contenders:

- Gluster (problematic getting it to work with fuse on FreeBSD)

- GFarm (wants to link in its own utils)

- MooseFS (compiles sweetly on FreeBSD - but sparse docs)

- Chiron (dirt-simple, but needs manual work if/as/when backends break)

- Ceph (relies on btrfs - which is scary, as the btrfs developers 
themselves claim 'not ready yet..')


Aside from Ceph, most of the others I mention use 'any POSIX fs' for 
eventual store.


Chiron, to name one of many, expects those to be already-mounted smbfs 
or NFS mounts.


AFAIK, 'POSIX' compatibility includes HAMMER fs, whether over sshfs, 
sftp, NFS, SMBFS, or ...


so .. 'possibilities abound'.

Speaking from the transpacific fiber private-network alpha test 
exposure, there ain't no magic to the network, though!


What folks forget is that the delays introduced by each router or switch 
add up - even at 'light speed' to latency 'puters do not like.


One can hope for paired electron technology but not 'soon'

;-)


Re: Hammer FS: imposing a size limit on PFS?

2009-02-25 Thread Bill Hacker

Matthew Dillon wrote:

Our installer support for HAMMER isn't advanced enough yet.  What we
really want is a UFS /boot, swap, and then a HAMMER root that covers
everything else.


I would (and have) taken that a step farther.

'once upon a time'

'/usr' was not part of the core essential. It really was 'userland'

Long since, far too many of the *system's* 'needful things' in the way 
of binaries and libs migrated there. Recall my push to get a decent 
static-compiled editor into the root partition so we could at least edit 
a FUBAR'ed /etc/rc.conf w/o having to manually mount a (potentially 
damaged) /usr.


'These days' one gains a bit of respect for NetBSD / OpenBSD putting 
things into /usr/pkg rather than /usr/local, if only to keep them out of 
the way of 'real userland' - and even looks yearningly at the Linux use 
of '/opt'


Reality is that a 'healthy' system needs '/usr' (libs and binaries) and 
'/var' (pidfiles, logs, and spools) to be mounted more or less 'at all 
costs'.


Ergo, one wants to push anything that really IS userland and user-app, 
or 'production use' specific out into bespoke mounts.


True whether the box is to be used for familiarization, learning, 
experimenting, OR 'production'.


And regardless of fs chosen



The idea with HAMMER is you just create one big filesystem and use the
PFS functionality to break it up into separate management domains.
Currently a size limit may not be placed on a PFS.

-Matt


I want my 'core' to be as damage-resistant as can be. So long as it IS 
such, and can boot and mount rapidly and respond to console - better yet 
ssh from afar - I have the wherewithal to manage, repair, or even nuke 
and reinstall - all the rest.


Ergo, absent a 'netboot' or flash/USB boot - I submit:

'Best Current Practice';

Minimum with one device:

- A modest 'slice' for the OS install, partitioned and UFS, OR 'shared' 
with hammer.


- One or more SEPARATE partitions if not SLICES for hammer-as-bulk storage, 
application support, etc.


IOW - not entangled in any way with '/usr', '/var'. You can wipe it and 
start over over-the-wire, as the 'core' is isolated.


Better yet - multiple devices, where on the second and subsequent devices 
hammer owns the entire device.


If we cannot isolate and protect the 'core' within a hammer PFS, then we 
should not put it into the same PFS 'family' and open it to overflow or 
damage.


JM-scar-tissue's-2CW - but we have found this 'safe' from CP/M 1.X onward.

Logs and spool aside, 'core' has slow or no rate of change.

Bill



Re: BFBI OTT bug or limitation? UPDATE2

2009-02-25 Thread Bill Hacker


Responded to off-list, as it is:

A) tedious the way I did it, and not certain to have been free of 
hardware glitches..


B) a lead-pipe cinch that Matt will come up with a better test methodology.

;-)

Bill


Matthew Dillon wrote:

:Bill Hacker wrote:
: Top-posting to my own post ...
:
:Again.
:
:Reproduced the original verbose crash. Only the last line is the same as 
:below.

:
:Failed to set up the HP-200LX serial, so will run it again...
:
:Bill
: 
: :-(
: 
: du -h > dulist
: 
: Two more runs, one OK  with hammer mirror-stream over ssh NOT running, 
: second  run with it mirroring a nearly empty dirtree (five static, 
: one-line text files only), runs for several minutes, then drops into 
: debugger with a mere three lines, rather than the original 
: scrolled-off-the screen;
: 
: CRC DATA @ 900a3b15b280/128 FAILED

: Debugger (CRCFAILED: DATA)
: Stopped at Debugger+0x34: movb $0, in_Debugger.3970
: 
: But this does not seem to relate. Could be an NATACONTROL + old HDD I/O 
: error artifact.
: 
: First looong err message had to do with vnodes..
: 
: More testing to do, then will swap-in a newer 500 GB SATA.
: 
: Bill


Bill, could you be more precise on exactly which tests you are running?
If I understand it you are doing some combination of bonnie++ and 
mirroring but you are explicitly filling up the disk and allowing it

to hit full-disk error conditions.

It sounds like a test I should be running here, particularly if you
managed to get a CRCFAILED panic out of it.  I want to try to reproduce
that.

It is possible that the CRCFAILED is due to an actual I/O screwup,
since we're talking PATA drives here.  This could be particularly
true if they are master/slave on the same controller.  However, my
first assumption is that it is a software bug related to pounding
it while it is full.

Note that HAMMER is not designed to run on an 8GB partition.  The
minimum is 50GB.  That said, HAMMER should never get a CRC error
even if running on an 8GB partition.  So I want to track it down.

-Matt



BFBI OTT bug or limitation?

2009-02-23 Thread Bill Hacker

Brute Force and Bloody Ignorance is Over The Top when:

'du -h'

saves keystrokes over:

'shutdown now'

Likewise a few other ways to either reboot or drop into the debugger.

Environment:

Target box:

VIA C7, 2 GB DDR2-533

2 X IBM 60GB PATA as NATACONTROL RAID1

2.3.0. 'default install' to all-HAMMER

root-mounted /slave1 PFS-created as such by hammer mirror-copy over ssh


Source box:

Lenovo G400 3000 laptop. 33GB slice ad0s1 for DFLY.

2.2.0 Installed to UFS, spare 8 GB partition *ONLY* later formatted and 
mounted as hammerfs '/hmr', made into a master for testing, '/hmr/master'.


ACTION:

hammer mirror-stream /hmr/master Thor@target_IP:/slave1
over ssh 100 Mbps internal link.

Fire off bonnie++ to fill the /hmr partition with fairly deep recursion.
It fills and stops gracefully with a '...cannot write' message, and 
begins to clean up its work area.


meanwhile, the ssh link has been doing its best - and it's best is very 
good.


Watching the target sees ~/slave1 gradually clear as bonnie++ mops up 
the master, until back where du shows zero usage, slaves having no 
snapshots of their own.


But the /hmr/master mount has gone from zero to 94% used, and the target 
has gone from 76% used to 87% used.


'du' on the master cannot seem to locate where TF the '94%' df reports 
for /hmr is hiding, but never mind.. we can nuke and newfs that 
partition at will.


But where is the used space on the *target* hiding?

'du -h /' on the all-hammerfs target *reboots* it somewhere along the way.

.  Comes back up quick - I'll give it that...

But hang on.  Could an 'ordinary' user do that at will?

'du -h > dulist' (for later grep'ing) throws a panic and drops DFLY into 
the debugger...


Also worrisome...

By comparison, a UFS fs, when overloaded, ordinarily soldiers on with 
109% utilized and a 'no space on device' message. For days...


Hammer needs to get there also...

If this is an out-of-memory situation with 2GB, it shouldn't be.

If the fs is full, the exit should be graceful, not catastrophic.

If no one else can reproduce this, I'll try it on other hardware - and 
with a serial terminal.


NB: Rather small drives and partitions used. /hmr/master 8GB, entire 
hammer fs on target only 60 GB.


That part is intentional.

No need to wait all day to see if it happens on a half-terabyte also.

Panic not captured. Do we need it, or is this a known issue?

Bill


Re: Hammer or ZFS based backup, encryption

2009-02-23 Thread Bill Hacker

Jasse Jansson wrote:


On Feb 23, 2009, at 8:10 AM, Bill Hacker wrote:


*trimmed*



But there we are. Startup 'seeding' is unavoidable, but thereafter ... 
among other things, looking to reduce the reliance on rsync (and 
similar CVS'ish or git'ish techniques) having to 'inventory' stuff at 
a high per-file level that a 'hammer mirror-stream' (or GMIRROR to 
networked RAID) could do 'as you go along' at a lower level - closer 
to the actual blocks as they are being written.


How, and how well, would ZFS handle redundant pools on separate sites?

And can that be a streaming process - even if that means the 
redundancy target is r/o for 'the duration', as a hammer slave would be?


I have not tried it myself, but there is a ZFS send/receive command,
that might do the trick.





Kaiser Jasse -- Authorized Stealth Oracle



Looks promising. VERY promising...:

http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch06s03.html

Thanks for pointing that out

Bill


Re: OT - was Hammer or ZFS based backup, encryption

2009-02-23 Thread Bill Hacker

Jeremy Chadwick wrote:

On Mon, Feb 23, 2009 at 05:19:14PM +0100, Jasse Jansson wrote:

On Feb 23, 2009, at 11:54 AM, Bill Hacker wrote:

Robert Luciani wrote:

Freddie Cash wrote:

Booting FreeBSD 7.1 into a full KDE 4.2
desktop takes less than 5 minutes.  This is using 3x 120 GB SATA
drives in a single raidz1.

Wow 5 minutes?!
I don't think I'd be pushing it if I said that seems really slow. :S
On such a fast machine I'd be irritated if it took over a minute to 
boot. On my
3Ghz Athlon X2 w/ 2Gb RAM, DragonFly boots vkernels, hosting  
services, and Gnome

in about a minute with my two 500Gb hammer drives.
One minute 45 seconds into Xfce4 for a VIA C7 @ 1.5 GHz, 2 GB DDR-533, 
all-hammerfs on 2 natacontrol RAID1 'antique' 60 GB PATA UDMA 100 HDD.


One minute 4 seconds into Xfce4 for an Intel T2130 @ 1.86 GHz 2 GB ? 
RAM, 1 X 120 GB 2.5 HDD, DFLY on 33 GB ad0s1, UFS2 with one hammer  
partition.


RAIDZ looks to be the wall-time hog
RAIDZ is known to be slow, even the main developers admit it if you  
force them to it.


I'd recommend the individual seeing ~5 minute boot times try disabling
ZFS prefetching.  Place vfs.zfs.prefetch_disable=1 in
/boot/loader.conf and reboot the machine.



For same-box use, I'm happy to use hardware RAID or even a low-level 
(block) pseudo RAID, such as [n]atacontrol.


A good LSi controller or such with decent cache (and battery) can make 
even a marginal fs work faster..


'back in the days of .. '  100 Mbps TCNS (ARCnet), an EISA-bus Novell fs 
with twin CDC Wren IV on twin SCSI controllers (duplexed) was faster 
over-the-wire than any local HDD we could buy for ISA-bus 386'en.


OTOH, Netware had grown up on ARCnet, cheated by assuming zero-error 
links. Which TCNS, on either fibre-optic or coax, actually delivered. 
Unlike 10BASE-T Ethernet of that era...


Bill




Re: New mirror in Russia

2009-02-23 Thread Bill Hacker

Justin C. Sherrill wrote:

On Mon, February 23, 2009 12:18 am, Bill Hacker wrote:


Does anyone have stats on how much b/w and storage the typical mirrors
need to do a decent job?


Mirroring from chlamydia.fs.ie.tum.de is currently 80G of space; I don't
have a handle on the bandwidth usage right yet.  I suspect it's small
outside of release times.



Must be so - the iso downloads are 'average' among such things, but the 
individual apps pulled from pkgsrc are quite a bit faster than most.


jaist in Japan is faster yet from Hong Kong.

Thanks,

Bill


Re: OT - was Hammer or ZFS based backup, encryption

2009-02-22 Thread Bill Hacker

Michael Neumann wrote:

Am Sat, 21 Feb 2009 19:17:11 -0800
schrieb Jeremy Chadwick j...@parodius.com:


On Sun, Feb 22, 2009 at 11:59:57AM +1100, Dmitri Nikulin wrote:

On Sun, Feb 22, 2009 at 10:34 AM, Bill Hacker w...@conducive.org
wrote:

Hopefully more 'good stuff' will be ported out of Solaris before
it hits the 'too costly vs the alternatives' wall and is orphaned.

Btrfs has been merged into mainline Linux now, and although it's
pretty far behind ZFS in completeness at the moment, it represents a
far greater degree of flexibility and power. In a couple of years
when it's stable and user friendly, high-end storage solutions will
move back to Linux, after having given Sun a lot of contracts due
specifically to ZFS.

The fact that btrfs offers grow/shrink capability puts it ahead of ZFS
with regards to home users who desire a NAS.  I can't stress this
point enough.  ZFS's lack of this capability limits its scope.  As it
stands now, if you replace a disk with a larger one, you have to go
through this extremely fun process to make use of the new space
available:

- Offload all of your data somewhere (read: not zfs export); rsync
  is usually what people end up using -- if you have multiple ZFS
  filesystems, this can take some time
- zpool destroy
- zpool create
- zfs create

And if you add a new disk to the system, it's impossible to add that
disk to the existing pool -- you can, of course, create an entirely
new zpool which uses that disk, but that has nothing to do with the
existing zpool.  So you get to do the above dance.


Hm, I thought that would work easily with ZFS, and at least in theory I
think that should work well with ZFS. Or what is wrong with:

  zpool add tank /dev/ad8s1

Okay zpool remove doesn't seem to work as expected, but it should
work well at least for RAID-1 (which probably no one uses for large
storage systems ;-). Maybe zfs replace works, if you replace an old
disk, with a larger disk, and split it into two partitions, the one
equally sized to the old, and the other containing the remainder of the
space. Then do:

  zfs replace tank old_device new_device_equally_sized
  zfs add tank new_device_remainder

But you probably know more about ZFS than me ;-)

As for Hammer, I worked on some patches that will allow it to expand a
Hammer FS while mounted. It's actually very easy to implement (~100
LoC). And the shrink case should be at least in theory pretty easy to
implement, thanks to reblocking. So with very little work, we can make
Hammer grow/shrink natively (maybe it's in the next release). 


Regards,

  Michael


Side issue again - just brought up DFLY 2.3.0, default all-hammer layout.

- atop a natacontrol RAID1 on a pair of salvaged 60 GB IBM 'Deathstars'.

- VIA C7 1.5 GHz CPU (the el-cheapo MB aimed at Wal-Mart)

- pulled half the RAM, leaving only 1GB

Periodically, the aged IBMs sound like squirrels having a go in a 
gravel pit, Xfce4 is *very* slow, but it hadn't started swapping, and 
based on what 'top' is showing should actually work decently for basic 
file, web, or mail serving, especially as the OpenSSL recognizes the VIA 
padlock engine.  Plenty fast enough with 2GB, BTW.


ZFS is not famed for tolerating that meagre a resource ration as well as 
HAMMER


Hardly a scientific test, but I can go off and grab lunch while 
OpenSolaris and ZFS boot to a GUI on 2 to 4 GB of RAM and a Core-D 2.6 
GHz and 'recent' WD SATA drives.


Not sure I have enough years left to try it on the VIA - at least 'til I 
can get the 64-bit Nano...


;-)

Bill


Re: OT - was Hammer or ZFS based backup, encryption

2009-02-22 Thread Bill Hacker

Jeremy Chadwick wrote:

On Sun, Feb 22, 2009 at 01:36:28PM +0100, Michael Neumann wrote:

Am Sat, 21 Feb 2009 19:17:11 -0800
schrieb Jeremy Chadwick j...@parodius.com:


*snip*


The one thing we all agree on is that, generally speaking, UFS isn't
cutting it.  :-)



*I* don't agree.

Mind, we don't currently run any single-store over 2 TB with it here, 
but there is a great deal of life left in FFS2/UFS2.


I don't view either ZFS or HAMMER as coming into being because UFS 
doesn't work as a file system - but rather that UFS and necessary 
'add-ons' (+ snapshots + incremental backups + network bandwidth 
limitations + peak loading + admin labor costs) - could all be improved 
upon.


IOW - the 'package'.

HAMMER - or ZFS - are sets of compromises - and all such things are 
compromises - that address that 'package' need  - itself dictated by the 
astonishing growth in drive sizes, and the even more rapid growth in the 
all-too-frequently outright garbage the human race chooses to save on 
those...


;-)

IOW HAMMER or ZFS or  [Linux flavor of the day fs] are 'nice to have 
maybe' today. The 'essential' part is still a ways off for most of us.


But coming fast.

Look, for example, at the data stream DFarm was designed to capture...

Now think about a *serious* load. Perhaps Interactive gamer scoring...

:-)

Bill


Re: New mirror in Russia

2009-02-22 Thread Bill Hacker

Justin C. Sherrill wrote:

http://df.v12.su/mirror/

It's mirroring from chlamydia.fs.ei.tum.de nightly, so it has ISOs,
packages, etc.  I get good speeds to it from the other side of the planet.

It's listed on the Downloads page on the DragonFly website now too.  I'll
add FTP access when I can get to it.


Justin,

Does anyone have stats on how much b/w and storage the typical mirrors 
need to do a decent job?


Bill


Re: Hammer or ZFS based backup, encryption

2009-02-22 Thread Bill Hacker

Freddie Cash wrote:

On Sat, Feb 21, 2009 at 10:39 AM, Csaba Henk csaba.h...@creo.hu wrote:

I need to setup a backup machine, and I intend to utilize today's
snapshotty filesystems (which boils down to Dfly+Hammer or FBSD+ZFS --
btrfs is not there yet, and I don't feel like delving into Solaris).
Set up such an OS with such an fs, and backup by syncing to the
snapshotty fs and create a snapshot.

I wonder about the following things:

1) Any idea how this approach scales relative to more conventional solutions,
like rdiff-backup or dump(8)? I see the pros, but are there any
cons? How effective is taking regular snapshots space-wise?

2) Is there any practical argument for choosing between Dfly+Hammer and
FBSD+ZFS? (Feel free to give biased answers :) )

3) I'd like to encrypt stuff, either at device or fs level. For
FreeBSD there is geli(8). I haven't found anything for DragonFly.
Is there any way to get at it on DragonFly?


We do this at work, using FreeBSD 7.1 and ZFS, for backing up over 80
remote Linux and FreeBSD servers, and 1 Windows station.  We have two
servers, one that does the backups every night, and another that
mirrors the backups during the day.


*trimmed* (description of a quite decent ZFS approach)



After the initial rsync of a server, which can take several days as it
can easily max out an ADSL link's upload bandwidth, the daily run
takes about 6 hours, most of which is waiting for rsync to generate
the file listing.



*snipped*

But there we are. Startup 'seeding' is unavoidable, but thereafter ... 
among other things, looking to reduce the reliance on rsync (and similar 
CVS'ish or git'ish techniques) having to 'inventory' stuff at a high 
per-file level that a 'hammer mirror-stream' (or GMIRROR to networked 
RAID) could do 'as you go along' at a lower level - closer to the actual 
blocks as they are being written.


How, and how well, would ZFS handle redundant pools on separate sites?

And can that be a streaming process - even if that means the redundancy 
target is r/o for 'the duration', as a hammer slave would be?


Bill


Re: the 'why' of pseudofs

2009-02-18 Thread Bill Hacker

Jost Tobias Springenberg wrote:

I do not want to sound offensive here but I don't get the point of this 
discussion at all.
What exactly is wrong with null mounts and / or the way PFS work?

If you want to have separate partitions instead of PFS, that's perfectly fine; nobody 
forces you to use PFS everywhere. In fact it might be very reasonable to keep data from home 
directories separated from other data or the like.


Regarding null_mounts, I do not see any program that might not be able to work 
with them.
And user failure is also not a real issue here, as those mounts can be specified in fstab as well as 
any other mount, and the concept is not that hard to grasp.

Regards,
Tobi


No offense taken (nor meant).

And thanks for the input - taken on-board.

For someone who has spent much of the last two years experimenting with 
Gfarm, Gluster, Ceph, DFarm, Chiron, AFS, Coda, (AND NOT ZFS) etc ad 
nauseum..


...even (most reluctantly) having to endure Linux lib-version-Hell to be 
able to test several of those too-often 'fuse'-dependent critters *at all*,


... it isn't the *concept* that is hard to grasp.

It's the gotcha's I'm trying to ring-fence.

;-)

And, to seek better sync between docs and methods.

EX:

man (5) hammer has an example of creating and mounting /home/pfs/master, 
/home/pfs/slave. Then /home/master and /home/slave for mirror-copy & 
mirror-stream.


But the 2.3.0 DEVELOPMENT iso had already mounted /home (/usr, /var, 
etc)  as PFS


By the time I had sorted a new non-pfs mount directly off '/' and got 
all the mirror-stream stuff working, a simple reboot blew away the 
now-multiply-recursively-mounted, BFBI privs-and-ownership-FUBAR'ed, 
and overly aggressively pfs-destroy'ed '/home' altogether


.. *before* I had remapped all console messages to a log file.. so I 
saw it complain - but had no details.


Recoverable? Maybe, maybe not.

But testing calls for a complete re-install to a different formula 
anyway, so


'Essayons'

Regards,

Bill


EUREKA - was the 'why' of pseudofs

2009-02-18 Thread Bill Hacker

Matthew Dillon wrote:

There are several reasons for using PFSs.


EUREKA!

Matt - you've re-invented Ramphotyphlops braminus:

Weigh this:


PFS = Parthenogenetic File System


hammer pfs-master  = select a host.

hammer pfs-slave   = induce ovulation.

hammer mirror-copy = self-inseminate the egg, provide nutrients to
 grow a clone with the instincts of the parent.

hammer mirror-stream = continue growing/educating the child.

   While growing, deny it independence.
   (parent has r/w, others can see but not touch = no deviation)


hammer pfs-upgrade = hatch the egg, grant adulthood.


HAMMER fs structure - DNA if you will - must be precisely replicated in 
core form before any body-mass (files) save its own 'schema' can be 
added. Notochordata grow the spinal cord, THEN 'bud' the organs. The 
nerve-'tree' remains connected, trunked back to the brain, enforces 
order, reports environment.


Snapshotting, reblocking, pruning are comparable to our nightly sort, 
merge, index, and re-write of short-term memory to long-term memory.


So we're not really growing a Ramphotyphlops here.

We're growing the indexed storage portion and the storage/retrieval 
mechanism  . of a brain.


Proven pattern among Odontata, too:

http://ecoevo.uvigo.es/Olalla/index_en.htm

.. so this file system could live a very, very long time..

Wonder how long before it grows wings

...or an appetite...

:-)

Bill


Re: Fwd: EUREKA - was the 'why' of pseudofs

2009-02-18 Thread Bill Hacker

Colin Adams wrote:

-- Forwarded message --
From: Colin Adams colinpaulad...@googlemail.com
Date: 2009/2/18
Subject: Re: EUREKA - was the 'why' of pseudofs


2009/2/18 Bill Hacker w...@conducive.org:

Proven pattern among Odontata, too:

http://ecoevo.uvigo.es/Olalla/index_en.htm


That should be Odonata (= tooth-jawed) (Dragonflies and damselflies to
the rest of you - i.e Bill is trying to make the posting relevant to
DragonFly BSD).


Thanks - typing and proofing was easier when I still had the sight of
two eyes...

;-)



Incidentally, I looked at that web page, and by a curious coincidence,
I had an email this week from the guy she mentions as her PhD
supervisor - as he is the webmaster of the Worldwide Dragonfly
Association, and I was complaining about broken links.



'small world..'


Anyway, thanks for the link. I knew about the parthenogenetic population
of Ischnura hastata on the Azores, but I didn't know there was a
downloadable thesis on the subject, so I've just grabbed it.

It's really time I actually started using DragonFly (perhaps to port
GHC to it, as I am programming in Haskell these days). Is it available
64-bit yet?



Dunno. We've been 'greening down' to VIA CPU for lack of enough UPS
budget in the Data Centre, and their first 64-bit is still scarce.

But ISTR at least a couple of the devel team run on AMD-64..

What I've got to sort is whether/how soon DFLY uses/will use the in-built
VIA hardware encryption engine.

I've seen tests showing it to need only 5% of the resources a
general-purpose CPU needs for the same encryption/decryption workload,
ergo letting the lowly VIA punch well above its weight in an
increasingly ssh/TLS'ed world. IF the algorithms it supports are among
the choices, anyway...

Meanwhile - and I expect this was already well-known among the
cognoscenti - scp'ing vs mirror-copy'ing from a *single* as-current PFS
snapshot shows hammerfs on a laptop - especially one that has slept
through four days' worth of cron's reblock-prune - can 'punch above its
weight' also. But in a different way.

For kicks, I've scp'ed from the root of /pfs (/pfs/usr ...) as well as
from the mount-point of each individual (virtual) mount (/usr..).

Naturally, scp -r is expanding the snapshots retained over a four day
period.

Predictable result?

- Four copies on the target from /pfs/usr et al, PLUS the ONE copy from
/usr.  Same files, near-zero actually changed, save for /var/log.  But
scp -r cannot know that there was no change, so

Five times the storage space needed on the target as on the original.

Glad it was not three weeks...

If a man won't take fishing instruction, just let him figure out on his
own how to fish

.. and he'll much better appreciate a fish 'n chips shop...

;-)

Bill



Re: hammer history ls

2009-02-17 Thread Bill Hacker

Steve O'Hara-Smith wrote:

On Tue, 17 Feb 2009 11:14:16 +0800
Bill Hacker w...@conducive.org wrote:

All-numeral dating might be more human-language independent, and an 
input-format reminder built-in, as in:


--startDDMM=20092008 --endDDMM=01012009


For all numeric it's probably best to use ISO8601 formats like
mmdd, mmddThhmm, mmddThhmmss and even hhmmss.



Agree that. Also more consistent with other utils aleady in-place.

(wonder why  ...)

;-)

Bill


the 'why' of pseudofs

2009-02-17 Thread Bill Hacker

Folks,

Google was no help, and I have only the last 54,000 or so of the 
DragonFlyBSD newsgroup messages to hand on the PowerBook, wherein a 
message-body search on pfs, PFS, pseudofs turned up only about 240 or so 
messages for Mark One eyeball processing..


That now done, I find:

Several of these cover conception, gestation, birth, and education  - 
the 'what' or 'how' of pseudofs / PFS, so to speak.


ONE of which lists the pro /con vs PFS_NOT. And that one not really 
hard-edged.


NONE of which tell me with any degree of absolute-ish-ness, if you will..

.. that one cannot, or even 'should not' run a HAMMER fs *without* PFS 
mounts.


.. or nullfs mounts.

or even without softlinks.  Perish the thought. Or the confusion...

At all.

EG: 'none of the above'.

Mind - I see the rationale - even necessity - for their use in more than 
a few circumstances.


But I cannot seem to find the prohibitions against their 'non-use'.

What do you suppose breaks if I do not apply these in an initial setup, 
but rather leave them until specific needs arise, such as volume 
expansion, export, or mirroring?


I have in mind small drive(s) for /, /usr, /var/, /tmp, /home
- perhaps not even hammerfs, those. Nothing there that was ever overly 
hard to back up, restore, or JF replace. My mailstore, for example, has 
never lived in any of those. Nor web pages. Nor databases.


It is on separate, much larger, drive(s) for /data, /mail, /web, /pub 
and such - where 'mission critical' clients live and play.


UFS(1)/FFS(1) - not UFS2/FFS2 - has made for less hassle when hardware 
goes pear-shaped or OS migration is afoot.


Enter (BF)HAMMER

But what concept am I missing here? Nice-to-have? Or absolute necessity?

Regards,

Bill


Re: off-box mirror-stream and friends - hacked to work

2009-02-16 Thread Bill Hacker

Matthew Dillon wrote:

:I have what appears to be a 'Catch 22', wherein:
:
:hammer mirror-stream /master user@remote_IP:/new_slave
:
:returns:
:
:PFS slave /new-slave does not exist.
:Do you want to create a new slave PFS? (yes|no) No terminal for response
:Aborting operation
:validate_mrec_header: short read
:
:'No terminal for response' 


Has now been hacked to work when ssh requires passwords,
*so long as*

EITHER:

- the user@target user is 'root' or equivalent.

(testing only - NOT recommended for prime-time!)


ELSE

- there is a mount-point on the target that is NOT in an existing PFS 
AND the non-root caller UID/EUID has the requisite privs to that mount 
to create dirs, PFS, links, files et al on it.


As with a cpdup or rsync operator, a bespoke user:group and carefully 
crafted privs and security are the order of the day.


Ugly hack below - needs more work before a formal patch, but it JFW's.

Regards,

Bill Hacker




- find in /usr/src/sbin/hammer/cmd_mirror.c

 AS READS =
/*
 * Get a yes or no answer from the terminal.  The program may be run as
 * part of a two-way pipe so we cannot use stdin for this operation.
 */
static int
getyn(void)
{
char buf[256];
FILE *fp;
int result;

fp = fopen("/dev/tty", "r");
if (fp == NULL) {
fprintf(stderr, "No terminal for response\n");
return(-1);
}
result = -1;
while (fgets(buf, sizeof(buf), fp) != NULL) {
if (buf[0] == 'y' || buf[0] == 'Y') {
result = 1;
break;
}
if (buf[0] == 'n' || buf[0] == 'N') {
result = 0;
break;
}
fprintf(stderr, "Response not understood\n");
break;
}
fclose(fp);
return(result);
}

 WAS AMENDED TO READ ===
/*
 * If we hadn't meant to do it, we'd not have asked, so fake
 * a 'Yes' for now. LATER - remove the call to this entirely
 * and replace with a count-down timeout and instructions to
 * use 'Ctrl c' to Abort. Borrow that code from cmd_pseudofs.c
 */
static int
getyn(void)
{
int result;
result = 1;
return(result);
}
==
Built with 'make' from within /usr/src/sbin by temporarily setting 
'subdirs = hammer' (and no others) in the /usr/src/sbin Makefile.


Resulting /usr/src/sbin/hammer/hammer binary, and no others, then 
manually copied into /sbin on BOTH source and destination machines.


Working well so far..





Re: hammer history ls

2009-02-16 Thread Bill Hacker

Matthew Dillon wrote:

:Hi again,


*trimmed*



If you wanted to get really fancy you could also implement date ranges.

hammer --start-date=20-Sep-2008 --end-date=01-Jan-2009 ...

-Matt



Yes, PLEASE!

More work -  one time.

But bound to make the wetware happier - all the time.

All-numeral dating might be more human-language independent, and an 
input-format reminder built-in, as in:


--startDDMM=20092008 --endDDMM=01012009

Bill



off-box mirror-stream and friends

2009-02-15 Thread Bill Hacker

I have what appears to be a 'Catch 22', wherein:

hammer mirror-stream /master user@remote_IP:/new_slave

returns:

PFS slave /new-slave does not exist.
Do you want to create a new slave PFS? (yes|no) No terminal for response
Aborting operation
validate_mrec_header: short read

'No terminal for response' ... was ass-u-me'd to be a byproduct of coming 
in off an Xfce4 terminal (Xorg & Xfce4 are quite happy on 2.3.0, BTW)


Dropped back out to the raw tty0 console and tried it from there.

No joy.

Not known to me (with only two DFLY boxen so far) if that would succeed 
if BOTH source and destination were remote from the box making the call.


It does not succeed if the IP of the local host is used - only adds the 
inability to get both passwords into the waiting mouths.


The command *appears* to succeed if/as/when I *manually* create 'new_slave' 
in advance with a matching shared_uuid. A local mirror-copy to it 
succeeds, with new_slave showing the files mirrored.


However, while the -vvv flag gives 5-sec updates, they all show a newer 
starting point than pfs-status has for the target, and the contents of 
the slave never change.


That holds true even if I add, edit, or delete files on the /master, and 
if I reblock the master after so doing.


By way of contrast, mirror-stream between on-box master and on-box slave 
 - same command otherwise - works fine.  No chdir needed to see the 
updates, just a 'View, Reload' in thunar and sputniks.


Query: Can the loop that seeks a 'yes' be changed to a 5-second 
countdown-timer with a message such as:


Creating new_slave ... Hit Ctrl-c to abort

...absent which it JFDI.

Thanks,

Bill Hacker


Re: 2.2 Release Progress Update #2

2009-02-15 Thread Bill Hacker

Matthew Dillon wrote:

:Questions w/r Rel 2.2 and HAMMER:
:

*trimmed*


:Query 1:
:- Will 2.2 have the Version 2 / 'WIP' or otherwise?

No.  It's still version 1 because the version 2 tests did not pan
out.  Do not use version 2 (and the hammer program will complain a
lot if you try).


Noted, thanks.



:Query 2:

*trimmed*

:However - where /home is itself a pfs mount, as in the default install, 
:a warning message will follow on the next three lines:

:You are attempting to access a PFS softlink... et al
:
:Further operations fail with that error message and/or no such file.

It wants you to create the actual PFS softlinks on PFS #0... the root
mount.



Noted. Which works fine on-box.

Note for future in planning a 'production' install:

For portability, it may make sense to reserve slices and/or additional 
storage media NOT part of the base in order to keep (OS+tools) & 
(app/user datastore) less interdependent.



*trimmed*


:w/o the 'pfs':
:
: hammer mirror-copy /master /slave

They are not null-mounted by the hammer utility.  It only creates the
softlinks.



Noted as related to need to work from outside/above an existing PFS, 
thanks. Will dig deeper.


:OTOH - I'm not yet able to get the u...@host via ssh remote equivalent 
:to work, with or without a pre-existing slave on the target.

:


Hacking the offending code now. Seems that it loses ability to find the 
'involved' tty to receive an answer to the '[yes|no]' challenge whenever 
a PWD is required of one end or the other of the ssh connection. DOES 
accept a pwd for ONE end, but not for two, though it shows the 
appropriate challenge.


In either case, a countdown with 'Ctrl-c to abort' as used in 
pfs-destroy should be more universal if/as/when mirror-copy / 
mirror-stream are to be invoked by a cron script or such...


IF certs or other non-password auth are ALSO required, the man page 
should so indicate.


:I thought it was to be possible to mount at points other than '/', such 
:as to make selected subtrees available / exportable / mirror-able?

:


*trimmed*


No, you can only mirror whole PFSs.  You cannot mirror subdirectory
trees.



Noted.


*trimmed*


:.despite citing the matching pfs number in the error message.

You should be able to mount the slave after you've done the initial
mirroring operation.  The slave has no root directory until the initial
mirroring operation has been done (the root directory has to be
mirrored from the master first).



ACK. Absent (yet) getting the ssh toolset to agree with a 
password-authed environment, w/r either mirror-copy or mirror-stream to 
a not-yet-existing slave, 'cheating' seems to work for testing:


- create a slave on same-box, mirror-copy from master to it, dd the 
result into place on the remote box.


One of the issues for the uninitiated is that the default install leaves 
only the '/' not in a PFS, ergo the man page example fails, AND the 
change from '/home/whatever' to '/whatever' requires that an ssh caller 
arrive with root privs in order to create the slave.


Short term, local LAN, testing only, I have experimented with allowing 
root login and/or granting insane privs.


Longer term, mounting a separate HDD and granting requisite privs to 
that store (only) seems appropriate.


Not new - same environment a cpdup or rsync runner needs.

'Quis custodiet..' et al.


However, any mount_null will lock the slave's transaction id... you
are mounting a snapshot, not the current state of the slave.

-Matt
	Matthew Dillon 
	dil...@backplane.com


I hear that last part, but ... presume the 5-sec default updates are 
still at work on (at least) the raw softlink.


A reload in Thunar has no-more-than 5-sec lag between what /master and 
/slave display as I make changes in /master. Cross-box, for now, not yet 
sorted how to get remote to update.


NB: Did have an 8-hour time diff between the two boxes. Both on GMT and 
the same dntpd now.  Starting over once I've corrected the code (other message).


Thanks  regards,

Bill


Re: off-box mirror-stream and friends

2009-02-15 Thread Bill Hacker

Matthew Dillon wrote:

:I have what appears to be a 'Catch 22', wherein:
:
:hammer mirror-stream /master user@remote_IP:/new_slave
:
:returns:
:
:PFS slave /new-slave does not exist.
:Do you want to create a new slave PFS? (yes|no) No terminal for response
:Aborting operation
:validate_mrec_header: short read
:
:'No terminal for response' ...was ass-u-me-d to be a byproduct of coming 
:in off an Xfce4-terminal (Xorg & Xfce4 are quite happy on 2.3.0, BTW)

:
:Dropped back out to the raw tty0 console and tried it from there.
:
:No joy.

Definitely a bug in the hammer utility, I'm not sure there is 
anything I can do about it though because the remote ssh connection

has no channel to accept a Y or N answer... stdin and stdout are used
for the protocol stream and I think stderr is output only.


Found and am working on a fix.

in /usr/src/sbin/hammer/cmd_hammer.c

the plan is to remove the request for a [yes|no] response and replace it 
with the countdown timer that pfs-destroy uses.


Not having coded C since 386-16 was brand new doesn't help, but, 
'Hacker' has been the family name since some time prior to 1400 AD.


;-)



In anycase, I think what this means is that this feature currently
only works if the slave is local (non-ssh connection).  So you
would be able to do it with remote_master local_slave.



Had done. Still no joy. Likewise from a 'disinterested third party' 
wherein both ends (appeared to be) remote.


No difference if on the console or via Xorg, either.

:The command *appears* to succeed if/as/when I *manually* create 'new_slave' 
:in advance with a matching shared_uuid. A local mirror-copy to it 
:succeeds, with new_slave showing the files mirrored.

:
:However, while the -vvv flag gives 5-sec updates, they all show a newer 
:starting point than pfs-status has for the target, and the contents of 
:the slave never change.


You must access the slave via its softlink to get the latest version
synced from the master.  If you try to access the slave via a null-mount
you will be accessing a snapshot of the slave, not the current state of
the slave.  The null mount locks in the transaction id of the slave.



Thanks - that will help...

:By way of contrast, mirror-stream between on-box master and on-box slave 
:  - same command otherwise - works fine.  No chdir needed to see the 
:updates, just a 'View, Reload' in thunar and sputniks.


You are probably accessing it via the softlink, yes?  The gui is
probably using an absolute path.  If you were to CD into a sub-directory
(even through the softlink), you would be accessing a snapshot as-of
when you did the CD, not the latest synced copy.



'Yes but'.  Will have to reconfirm, but I can see the softlink, its 
target, and the mount.


Thunar reports the softlink as a 'broken softlink'.

Thunar reads the nullfs mount in a 5-sec-lag state.

Thunar sees the @@string25 pfs as frozen at time of creation.

AFAIK, this means the mirror-stream is updating what *started as* a 
snapshot.


'll -F' at the CLI is, if anything, less helpful

hammer pfs-status helps.

:Query: Can the loop that seeks a 'yes' be changed to a 5-second 
:countdown-timer with a message such as:

:
:Creating new_slave ... Hit Ctrl-c to abort
:
:...absent which it JFDI.
:
:Thanks,
:
:Bill Hacker

That won't work, the target over an ssh link has no tty channel.

Adding an option to create the slave automatically and passing it to
the target hammer utility when it is run via the ssh, so it never has
to ask at all, would work.  If someone would like to do that and submit
a patch, I don't think it would take more than 20 minutes of
programming.




-Matt
	Matthew Dillon 
	dil...@backplane.com


Beavering away. Perhaps under five minutes of programming for whoever 
wrote it (Michael? Thomas?).


2 days of researching what and how 'C' does with return variables, and 
how to sort what was in the mind of the original author when he asked 
for them where he did... (why 'tty' and not stdin - which accepts the 
password OK).


And should the ssh session have been called with different flags, or at 
least the calling code made to pop out at intermediate points for 
interactive response, or ...


Not even sure if I should be using 'make', 'bmake' or just 'cc', as my 
binaries so far have *way* different sizes from stock. They all run, oddly 
enough... just not yet as wanted.


but I'll deal with that... just installed 'heme' as all I *really* need 
to do is change the one byte in the binary returned by the (failed) tty 
query to fake success, ELSE a JMP past the query routine altogether.


Bill



Re: HAMMER hosed?

2009-02-13 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Bill Hacker wrote:

.or perhaps not..

Have 120 GB HDD sliced for:

- FreeBSD

- DFLY with hammerfs

- OpenBSD

- NetBSD

FreeBSD installed last.

Unfortunately, did not think to do the within-slice partitioning for 
FreeBSD with DragonFly's modern toolset (..once bitten..)


Ergo, though I had FreeBSD NOT write bootblock or touch the MBR, it 
did munge the disklabel..


:-(

The DFLY bootloader I was using now throws 'invalid partition' and 
DFLY liveCD disklabel reports 'Bad magic number' for that slice.


I guess you used disklabel64?  What you could try is overwriting the 
disklabel64 by a new one which is exactly the same.  The contained 
hammer filesystem should not be destroyed in that case.  Let us know if 
you need help figuring out how to do that.


cheers
 simon


Hi Simon, Thanks for the quick reply...

The install would have used whatever the default was as of the
DEVELOPMENT snapshot of just a few days ago.

DFLY was happy cooperating with the (at the time) DFLY, Slackware,
OpenBSD, NetBSD and each booted fine off the new DFLY bootloader.

FreeBSD 8- December snapshot was used to change the type of the second
slice, sub-partition it, then install itself to replace Linux.

Bad move, as along the way it screwed the hammerfs-bootable DFLY somehow.

fdisk sees what was expected.

The other three OS'en still boot and run normally.

Selecting DFLY (F1) returns 'invalid partition'

What I get with either disklabel or disklabel64 off the DFLY
Live/Install CD is:

 'bad pack magic number'


Attempts to edit the label give:

'Operation not supported by device'


Now - IF I knew what bits or bytes to change and where, I'm happy to go
after it with a hex editor... or dd. or whatever.

But I had not made a disklabel copy, so 

Best,

Bill




Re: HAMMER hosed?

2009-02-13 Thread Bill Hacker

Bill Hacker wrote:

Simon 'corecode' Schubert wrote:

Bill Hacker wrote:

Hi Simon, Thanks for the quick reply...

The install would have used whatever the default was as of the
DEVELOPMENT snapshot of just a few days ago.

DFLY was happy cooperating with the (at the time) DFLY, Slackware,
OpenBSD, NetBSD and each booted fine off the new DFLY bootloader.

FreeBSD 8- December snapshot was used to change the type of the second
slice, sub-partition it, then install itself to replace Linux.

Bad move, as along the way it screwed the hammerfs-bootable DFLY 
somehow.


fdisk sees what was expected.

The other three OS'en still boot and run normally.

Selecting DFLY (F1) returns 'invalid partition'

What I get with either disklabel or disklabel64 off the DFLY
Live/Install CD is:

 'bad pack magic number'


Attempts to edit the label give:

'Operation not supported by device'


Now - IF I knew what bits or bytes to change and where, I'm happy to go
after it with a hex editor... or dd. or whatever.

But I had not made a disklabel copy, so 


you could post the output of

dd if=/dev/adXXsYY count=4 | hd

for us to debug.  Alternatively, you can try killing the disklabel with

dd if=/dev/zero of=/dev/adXXsYY count=4

and then re-creating it.  it basically has to read:

a: * 0 HAMMER
b: $SWAPSIZE * swap

where swapsize is the value you entered in the installer.  The default 
value depends on your memory size and is 
2*next_power_of_2(your_memory_in_MB) MB.


cheers
 simon


I've gotten into disklabel -e mode with NetBSD.

Not going to change anything just yet, but rather write what it sees, do 
the same with OpenBSD and FreeBSD (perhaps even a Linux).


Will post those as well as the dd output 'shortly'.

Thanks,

Bill



dd output heme'd to readable is attached, as it would word-wrap to 
uselessness.


Other views below.

Thanks,

Bill


=
NetBSD sees:

# /dev/rwd0d:
type: unknown
disk: Hitachi HTS5416
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 232581
total sectors: 234441648
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0   # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

16 partitions:
#        size    offset  fstype      [fsize bsize cpg/sgs]
 a:   8192016 188731620  4.2BSD        2048 16384    0  # (Cyl. 187233*- 195360*)
 b:   2048256 196923636  swap                           # (Cyl. 195360*- 197392*)
 c:  45703980 188731620  unused           0     0       # (Cyl. 187233*- 232574)
 d: 234441648         0  unused           0     0       # (Cyl.      0 - 232580)
 e:  62910477        63  4.2BSD           0     0    0  # (Cyl.      0*-  62411*)
 f:  62910540  62910540  Linux Ext2       0     0       # (Cyl.  62411*- 124822*)
 g:  62910540 125821080  unknown                        # (Cyl. 124822*- 187233*)
 h:  12288528 198971892  4.2BSD        2048 16384    0  # (Cyl. 197392*- 209583*)
 i:   8192016 211260420  4.2BSD        2048 16384    0  # (Cyl. 209583*- 217710*)
 j:   8192016 219452436  4.2BSD        2048 16384    0  # (Cyl. 217710*- 225837*)
 k:   6791148 227644452  4.2BSD        2048 16384    0  # (Cyl. 225837*- 232574)

NOTES: The Linux Ext2 s/b type 165 / A5 FreeBSD, as it was so set then 
FreeBSD installed and tested from it.

=

OpenBSD sees (not much it has not put its own prints on.. but it boots)

# /dev/rwd0c:
type: ESDI
disk: ad0s3
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 14593
total sectors: 234441648
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0   # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

8 partitions:
#        size     offset  fstype  [fsize bsize  cpg]
 a:  62910540  125821080  4.2BSD    2048 16384    1
 c: 234441648          0  unused       0     0



DFLY won't read anything at all...



0000:EB 3C 00 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 
00 00 ...
0018:12 00 02 00 00 00 00 00 00 00 00 00 00 16 1F 66 6A 00 51 50 06 53 
31 C0 ...fj.QP.S1.
0030:88 F0 50 6A 10 89 E5 E8 C0 00 8D 66 10 CB FC 31 C9 8E C1 8E D9 8E 
D1 BC ..Pj...f...1
0048:00 7C 89 E6 BF 00 07 FE C5 F3 A5 BE EE 7D 80 FA 80 72 2C B6 01 E8 
60 00 .|...}...r,...`.
0060:B9 01 00 BE AA 8E B6 01 80 7C 04 A5 75 07 E3 19 F6 04 80 75 14 83 
C6 10 .|..u..u
0078:FE C6 80 FE 05 72 E9 49 E3 E1 BE A2 7D EB 4B 31 D2 89 16 00 09 B6 
10 E8 .r.I}.K1
0090:2E 00 BB 00 90 8B 77 0A 01 DE BF 00 C0 B9 00 AE 29 F1 F3 A4 FA 49 
74 14 ..w.)It.
00A8:E4 64 A8 02 75 F7 B0 D1 E6 64 E4 64 A8 02 75 FA B0 DF E6 60 FB E9 
50 13 .d..ud.d..u`..P.
00C0:BB EC 8C 8B 44 08 8B

Re: HAMMER hosed?

2009-02-13 Thread Bill Hacker

Bill Hacker wrote:

*trimmed*

Forgot this one:

=== 


FreeBSD sees:

FreeBSD sysinstal sees:

ad0 slices:

Offset     Size      End        Name   PType  Desc         Subtype  Flags
0          63        62         -      12     unused       0
63         62910225  62910287   ad0s1  8      freebsd      165
62910540   62919539  252        -      12     unused       0
62910540   62910540  125821079  ad0s2  8      freebsd      165
125821080  62910540  188731619  ad0s3  4      OpenBSD FFS  166
188731620  45703980  234435599  ad0s4  4      NetBSD FFS   169
234435600  6048      234441647  -      12     unused       0


ad0s2 partitions:

ad0s2a  /      6144MB
ad0s2d  /usr  12288MB
ad0s2e  /var   6144MB
ad0s2f  /home  3072MB
ad0s2b  swap   2048MB
ad0s2g  /tmp   1022MB

Re: New site design

2008-06-04 Thread Bill Hacker

Justin C. Sherrill wrote:

James Frazer put together a nice site redesign based on some discussion we
had here before.  He sent me a copy of it all, and I'm only now catching
up enough to show it:

http://www.shiningsilence.com:81/

I'm happy with the layout and content; I'm looking for further suggestions
for tweaks.  If there's no show-stopping objections, I'd like to move the
dragonflybsd.org site to this.




The red insect logo is artistic

- but the wing config is more akin to that of the common housefly than a 
dragonfly


:-(

Bill


Re: Announcing DragonFly mirror stats

2008-05-22 Thread Bill Hacker

Christian Sturm wrote:

Hi,

I set up a mirror stats script on:

http://www.dragonfly.dublu.org/

Well, in fact this page links to two status pages. One shows the
package mirrors status and the other the status of snapshot
mirrors. They are more or less self describing. I want to thank
Simon for adding the time stamp cronjob (it wasn't that easy
*g*).

Originally I wanted to change the script a lot, but I'm not sure
if it's a good idea to bloat and extend something like this.
Maybe I'll change it a bit when I get bored, but for now it will
stay like this.

I hope someone can use it :-)

Greetings,
Christian


Nice work!

Bill


Re: Installation on Macbook Pro

2008-03-18 Thread Bill Hacker

Christopher Rawnsley wrote:

Here is a little update for my problem...

On 12 Mar 2008, at 02:48, YONETANI Tomokazu wrote:

IIRC, you need to fiddle with ad5*.  A better alternative I can think of
is to partition (or maybe even disklabel it and newfs -O1) using FreeBSD
installer first, then boot with DragonFly LiveCD, and continue the rest
of procedure explained in /README .


Formatting with FreeBSD worked fine. FreeBSD appeared to have formatted 
the disk differently by switching the slice order. So the 165 ID 
(DragonFly/Free BSD) slice came before my 175 ID (HFS+) slice even 
though the physical disk layout was the other way round (Significant at 
all?). I now had a properly sliced drive so I rebooted with DF which 
could now write to the disk :). Thank you for that, Yonetani.


So I rebooted and this time booted from the hard drive. Boot loader came 
up! Unfortunately, it couldn't boot the DF slice. So I went back to the 
CD and (this is where I got a bit stupid...) fired up fdisk to check 
everything out. Nothing too odd. Had a look at gpt's output too, which I 
found to be a little odd. The DF slice shared the same index (of 1) with 
my EFI slice with Mac OS X at index 2.


Let me first explain that, from what I read, in order for a computer to 
comply with EFI they have to have an extra slice set aside for EFI. On 
my laptop that means that I have this in the form of a 200MB slice with 
the rest for whatever.


 Now I remember seeing when Mac OS X and Windows
was installed, I booted into the DF live CD and ran gpt. The output that 
time around was EFI slice at 1, Mac OS X at 2 and Windows at 3. Even 
though I don't think boot0 can boot from the GPT, I wonder whether the 
FreeBSD installer changed something in a similar way to the MBR. Very 
speculative, I know but I can't check out anymore 'cause...




Side Note:  Recall my saying Apple marched to the beat of a different 
orchestra?


If memory serves, this is where some of that shows.

There isn't just one such 'hidden' slice - there is one preceding *every 
other* slice/partition. I.E. my 6 UFS slices, having been set up with OS 
X have 6 100 MB slices in between them.


There is information about this online in OS X-specific articles, and it 
is why I suggest using a separate HDD entirely, as I do with *BSD/PPC 
(FW800 is *fast*, and even USB 2.0 is decent)


Going on my hunch that something in the MBR was wrong and was causing 
boot0 to not read it properly I decided to open up fdisk and see if I 
could find anything that might be a cause. I checked each slice through 
using the -u option (IIRC) and when I got to the Mac OS X slice it 
complained about the head boundaries being all wrong. I rather stupidly 
thought that I'd let it automatically update this. This caused nothing 
to boot etc. Long story short I ended wiping everything in favour of it 
being quicker for me (I had deadlines to meet...). It was all backed up 
so hopefully that won't keep anyone awake at night!


Just for 'education' as to how different things really are - read, but 
do not alter, with both fdisk and disklabel from FreeBSD/PPC AND 
OpenBSD/PPC as well as DFLY.




I think I will come back to DF in the not so distant future but I'm just 
going to let myself go on other things for now. Thanks a lot to all you 
guys who tried to help me. I really appreciate it :)


--
Chris

P.S. Would it be possible to by pass the MBR all together in favour of 
GPT by simply using another boot loader other than boot0 


Yes - though that isn't entirely 'bypassing' the MBR in all cases, it 
can be made so.


See the various 'boot floppy' methods, including grub (complex for 
complexity's own sake, but flexible) and 'GAG' (simple, but effective), 
or even Warp LVM if you can get your hands on a copy - then think USB 
stick instead of floppy.
stick instead of floppy.


It is also fairly easy to alter and re-assemble boot0 to build a custom 
loader. Even if asm language is not your long suit, the code is in 
'symbolic' assembler, and there is very little of it, so it is easy to 
grok. (Mine drops never-used-here FAT/NTFS/DOS recognition in favor of 
grokking Minix, Solaris, Syllable fs types, for example)


or would this 
also require changes in the kernel or elsewhere?


Not required - yet.

Potential gains *could* be had where multiple controllers and drives 
exist - my 'normal' environment.


What is wanted is that the next two stages are as BFBI simple as boot0.

Ex: If changing drive order (cartridges, externals) or controller 
channel-order, the visible part is an fstab that 'follows suit'.


In FreeBSD or DFLY, this can be accomplished by creative disklabeling 
and such, e.g. :


- set up a RAID1 array with (n)atacontrol, even if you never add the 
second drive.  fstab will have entries such as /dev/ar(x)s(n) - which 
will be 'found' even if the devices of the set that had been /dev/ad0 
moves to /dev/ad6 or ad10. Similar with GEOM / GMIRROR in FreeBSD.


- in FreeBSD (haven't tested with DFLY), use 

Re: One thing ive been wondering for a long while

2008-03-12 Thread Bill Hacker

[EMAIL PROTECTED] wrote:

How does one list all diskdrives connected to the system (the ones mounted
AND unmounted) on a BSD system without looking at dmesg? Normally, id look
into dmesg to find that ie. my usb drive is on da0 or da1, but there must
be a much better way to find out.

Cheers,
Petr



Couple of 'convenient' ways, as they provide a good deal of info in one go:

(n)atacontrol list

- covers ATAPI

camcontrol [devlist | periphlist]

- handles similar functionality for SCSI.

NB: You would still need to look at dmesg if/as/when there are one or 
more controllers that report on scan but get no driver assigned.


As happens ...

Likewise, depending on OS, there are a number of not-really-SCSI devices 
- usually USB or FW, but even PATA/SATA on some controllers - that are 
mapped to 'scsibus' and may not be reported by either of the above 'coz 
they don't buy into the charade.


HTH,

Bill


Re: Installation on Macbook Pro

2008-03-10 Thread Bill Hacker

Christopher Rawnsley wrote:

On 9 Mar 2008, at 21:10, Bill Hacker wrote:

I would actually recommend an external HDD on FW-800 or USB2.


I don't have one of those handy at the moment so I think I'll keep on 
trying without for the moment.


Apple marches to the beat of a whole different orchestra w/r disk 
layout & labels, so I had to do that to get FreeBSD/PPC or OpenBSD/PPC 
up on my G4 PowerBook 17.


Ah maybe but I like to try none-the-less :)


And I don't use hfs at all - not even for OS X.


Mind me asking what you use and why?


Apple's UFS. Not fully compatible with 'real' UFS, but:

- gives me consistent directory structure, file handling and 
case-sensitivity across the PowerBook and the *BSD & *n*x servers I 
work with all day.


- makes it easier to keep 'Finder' like stuff from defecating in the 
machinery.


- which - along with a few other tweaks, lets a 1 GHz G4 perform really 
well with 12 'desktops', yet w/o having to listen to its raspy 
variable-speed cooling fan unless the room is over 24 C or so.




DFLY - or any *BSD - needs only a fraction of the resources even a 
cleaned-up and stripped-down OS X consumes.


Well I don't have any problems with it. Runs lovely and smooth. Of 
course, I would like to get DragonFly running like that too.


--
Chris



Not knocking it 'as shipped'.

But the tweaks above - and a higher-RPM replacement HDD - have extended 
the useful life of the G4 by several years already. The replacement may 
well be a few more years out - and maybe ARM RISC based or such as this 
beast is quite heavy once in the Halliburton case it needs to stay 
healthy at 50 to 75 thousand air miles a year.


Other than the above tweaks, I keep the PowerBook as a 'don't f**k with' 
appliance as all but one other machine within reach (seldom fewer than 
half a dozen) is constantly being reconfigured. Some host a full dozen 
OS or variants on one box.


Too easy to lose track of stuff without an 'appliance' as sanity-anchor.

or attempted one, anyway ..

;-)

Bill


Re: Installation on Macbook Pro

2008-03-09 Thread Bill Hacker

Christopher Rawnsley wrote:

Hey everyone,
  I'm having a few problems when trying to install 1.12 on a Macbook 
Pro. So it loads off the DVD (I didn't have any CDs spare but it seems 
to work fine) and, after a bit, a prompt comes up so I can choose what 
kernel I want. So here is the first problem. If I choose option 1 the 
boot hangs at this message:
uhci4: UHCI (generic) USB controller port 0x6040-0x605f irq 9 at device 
29.2 on pci0


It doesn't look like anything went wrong on that line but I have listed 
it hoping it might be a clue to something else...


OK, so I try door number 2 instead (without ACPI) which boots fine and I 
am invited to log in. I log in as installer and start the process. The 
first step is to partition the drive. I choose my hard drive which is 
detected as being 'ad4' (which struck me as odd. I thought they started 
at 0?). I have a 120GB drive of which I want to install DragonFly to a 
15GB slice. The installer tells me that it format is successful and then 
asks how I would like to set up my partitions (or sub-partitions). I 
stuck with the default and continue. Then get:


Execution of the command
/sbin/disklabel -r ad4s3 > /tmp/install.disklabel.ad4s3
FAILED with a return code of 4.

If I skip I then get:

Execution of the command
/usr/bin/grep '^ c:'
/tmp/install.disklabel.ad4s3
> /tmp/install.disklabel.ad4s3
FAILED with a return code of 1.

If I keep skipping I get more and more error messages...

I hope I have given enough information for someone to help :) If things 
go well, I will write up my experiences in the wiki.


Thank you in advance.

--
Chris


What else has had its fingers on that disk and its label prior to the 
attempt?


And/or - do you have a disk that nothing Apple-ish has yet touched?

Bill




Re: Installation on Macbook Pro

2008-03-09 Thread Bill Hacker

Christopher Rawnsley wrote:

On 9 Mar 2008, at 19:55, Bill Hacker wrote:
What else has had its fingers on that disk and its label prior to the 
attempt?


Well I used the Boot Camp Assistant (basically a partitioner) from 
within Mac OS X which resized my disk for installing Windows. So it 
formatted it with FAT32 but I just tried overwriting it with DF.



And/or - do you have a disk that nothing Apple-ish has yet touched?


Not sure what you mean by this. Do you mean another internal disk? If 
so, then no. I just have a 120GB volume of which 15GB is reserved for DF 
the rest is an HFS+ volume for Mac OS X. I also have a 2GB USB flash 
drive. Do you think I should I try that?


I would actually recommend an external HDD on FW-800 or USB2.

Apple marches to the beat of a whole different orchestra w/r disk layout 
& labels, so I had to do that to get FreeBSD/PPC or OpenBSD/PPC up on my 
G4 PowerBook 17.


And I don't use hfs at all - not even for OS X.

The Wintel models might seem easier by virtue of using a commodity CPU, 
 but *everything* Apple does is different, sometimes outright contrarian.


(If only IBM had made an affordable Power architecture laptop...)

DFLY - or any *BSD - needs only a fraction of the resources even a 
cleaned-up and stripped-down OS X consumes.


Ergo, a used 'Industry Standard' Wintel would save you time vs the Apple 
'appliance' as well as performing better.



Bill


Re: FreeBSD 7, DragonFly's status

2008-03-09 Thread Bill Hacker

Kris Kennaway wrote:

Adrian Michael Nida wrote:


SnipAndRearrange/

The benchmark at http://people.freebsd.org/~kris/scaling/os-mysql.png

SnipAndRearrange/

Is measuring 1.8.  We're at 1.12 now.  I'm sure an updated graph has a 
different

trend.  Take it upon yourself to redo the benchmark.


Hi Adrian,

Per your request I reran the benchmark, and I also took the opportunity 
to do some additional performance comparisons between Dragonfly 1.12, 
FreeBSD 4.11 and FreeBSD 7.0.  Here is a brief report of my results.


Kris



Kris,

w/r the http://people.freebsd.org/~kris/scaling/mysql.html page

The link to the MySQL config:

http://www.freebsd.org/%7Ekris/scaling/my.cnf

..gives me a 404.

I don't have even a Quad-core I can spare from duty at the moment, but 
I'd like to at least see what the relative UMP & dual-core results are 
on one of the OpenSolaris releases we have handy.


Solaris-on-x86 subjectively seems relatively faster now than 'SlowLaris' 
days, but still no great shakes speed-wise.


Thanks,

Bill



=

In May 2007 I ran some benchmarks of Dragonfly 1.8 to evaluate
progress of its SMP implementation, which was the original focus of
the project when it launched in 2003 and is still widely believed to
be an area in which they had made concrete progress.  This was part of
a larger cross-OS multiprocessor performance evaluation comparing
improvements in FreeBSD to Linux, NetBSD and other operating systems.

The 2007 results [1] showed essentially no performance increase from
multiple processors on dragonfly 1.8, in contrast to the performance
of FreeBSD 7.0 which scaled to 8 CPUs on the benchmark.

Recently Dragonfly 1.12 was released, and the question was raised on
the dragonfly-users list [2] of how well the OS performs after a
further year of development.  I performed several benchmarks to study
this question.

MySQL
-
This is a good general test of kernel performance and parallelism, as
well as performance of the thread library.  MySQL performance
(together with PostgreSQL performance) has been a driving force in
FreeBSD, Linux and NetBSD SMP development over the past year, see

http://people.freebsd.org/~kris/scaling/os-mysql.png

In this round of testing I compared Dragonfly 1.12-RELEASE, FreeBSD
4.11-STABLE and FreeBSD 7.0-RELEASE, running on the same 8-core Xeon
hardware.  On Dragonfly and FreeBSD 4.11 the GENERIC kernel
configuration was used except for enabling SMP and APIC_IO (for the
SMP tests), and removing I486_CPU.  Under FreeBSD 7.0 the GENERIC
kernel was used except for enabling the SCHED_ULE scheduler, removing
I486_CPU and enabling SMP when appropriate.  The test applications
were compiled from ports/pkgsrc and the same versions and
configuration options used for each OS.

Other configuration is the same as in my previous test and is also
documented here

http://people.freebsd.org/~kris/scaling/mysql.html

Here are the results:

http://people.freebsd.org/~kris/scaling/dfly-mysql.png

Dragonfly 1.12 achieves peak SMP performance of only 15% better than
UP performance, and drops to about 50% below UP performance at higher
loads.  Enabling SMP has a 20% performance overhead on this benchmark.

UP mode is faster than 4.11 when using the libthread_xu library.  With
libc_r (not graphed) performance is identical to 4.11 in both UP and
SMP mode, so the UP performance increase is most likely due to the
thread library.

Note: I am using mysql 5.0.51 in the current tests, which has
different performance characteristics than the older 5.0.37 tested
last year, so the current data cannot directly be compared to the
previous dragonfly 1.8 graphs to evaluate whether a small amount of
progress was made since 1.8.  However, there does not appear to be any
significant performance improvement from dragonfly 1.8 to 1.12.

FreeBSD 7.0 scales to 8 CPUs on this benchmark.  Peak performance is
6.5 times higher than peak dragonfly performance, and 9.0 times higher
than FreeBSD 4.11 performance.  UP performance is consistent with SMP
performance with a single thread.  7.0 UP is 45% faster than 4.11 UP
and 10% faster than dragonfly UP.

Note that while these benchmarks are on a test system with 8 CPU
cores, the results also provide information about performance on
systems with fewer than 8 cores, such as dual core systems.  If the
system does not show appreciable performance gain when 2 threads are
running and most CPUs are idle, it is unlikely to perform much better
when the system only has 2 CPUs.  I could not test this directly
because I don't know how to disable CPUs at boot time/run time in
dragonfly.
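The scaling figures quoted here all reduce to simple ratios against the UP baseline; a sketch with made-up throughput numbers (not Kris's measured data) shows the arithmetic being read off the graphs:

```python
# Illustrative only: SMP scaling ratios from throughput samples.
# All numbers are invented stand-ins, not the benchmark's actual data.
up_tps = 1000.0                                         # UP-kernel throughput
smp_tps = {1: 950.0, 2: 1800.0, 4: 3300.0, 8: 5600.0}   # SMP kernel, N threads

smp_overhead = 1 - smp_tps[1] / up_tps          # cost of just enabling SMP
speedup = {n: tps / up_tps for n, tps in smp_tps.items()}

print(f"SMP overhead: {smp_overhead:.0%}")
for n, s in sorted(speedup.items()):
    print(f"{n} threads: {s:.2f}x vs UP")
```

If the curve is flat between 1 and 2 threads here, it will be flat on a 2-CPU box too, which is the point made above about scaling down.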

For example, this graph shows FreeBSD 7.0 running postgresql on the
same system with 1, 2, 4 or 8 CPU cores active, as well as comparing
the UP and SMP kernel running with 1 CPU active

http://people.freebsd.org/~kris/scaling/pgsql-ncpu.pdf

The performance seen with 8 CPUs also scales down to 1, 2 and 4 CPUs.
This also shows that there is negligible overhead from 

Re: FreeBSD 7, DragonFly's status

2008-02-28 Thread Bill Hacker

Matthew Dillon wrote:

Well, I'll give you my 5-second opinion.



*snip*



* Our interrupt routing subsystem really needs a major upgrade.
  (i.e. a major port from FreeBSD).



Given that theirs has choked several times on some fairly common 
hardware that DID work thru 6.2 RELEASE, that would not necessarily be 
the first or best place to look.



Where I think the future is:

* SMP, Storage, and SSI.  Real time mirroring at the logical level.
  HAMMER is a major component for the storage, SSI, and mirroring
  components, and I believe HAMMER will be a large interest magnet.

Our project goals have not changed, but if I had it all to do over
again I would have started work on HAMMER much earlier then I did.

I spent more time then I should have perfecting the low level
infrastructure, trying to build a base upon which all the other
work could occur.

-Matt



It may seem so in the rear-view mirror, but had you NOT done the 
low-level infrastructure, AND the 2+ year code-clean-up of what was 
adapted from Free (and other) BSD, nothing else would be working as well 
as it does.


That was time that pays back with long-running and ongoing dividends.

JM2CW,

Bill


Re: Dragonfly Routers

2008-02-23 Thread Bill Hacker

Adrian Chadd wrote:

On 20/02/2008, Bill Hacker [EMAIL PROTECTED] wrote:


 Routing and firewalling is a specialty that has become a very
 high-volume hardware/ASIC/RTOS field where any router a PC could at one
 time match on speed has become so cheap and flexible off-the-shelf it is
 no longer worth the bother to roll yer own *and maintain it* for any
 serious throughput.


Thing is, there are people who report doing 10ge (almost) line rate 64
byte pps on current PC hardware, with the right combination of PCIe,
decent chipsets and crazy tuned forwarding code complete with
prefetching.

Its just not being done in open source.



Nor for 'just one' 10GigE port, and only on 'PC' hardware for grins.

Serious system vendors (Cisco, Foundry, Juniper - many others) are able, 
for example, to set-up and tear-down multiple thousands of 
hardware-accelerated SSL connections *per second*, keep tens of 
thousands of simultaneous SSL sessions active at a time.


Or other specific routing or switching tasks - few of them on the same 
specialized box.


And carrier data centers have racks and racks of the various flavors of 
these. They have to.


The speed is due to hardware, firmware, RISC, ASIC, ... e.g. 'bespoke' 
silicon and single-minded software.


Not really what one would consider a proper 'OS' at all.

There may be a browser interface for configuration, but asking one of 
these boxes to be a general-purpose httpd, let alone an MTA or an RDBMS 
host, would be expensive and frustrating. Bit like swatting flies with a 
16"/50. Wrong tool for the job.


And the reverse w/r 'PC' platforms.

At the 'low end', cheap-and-cheerful gear from Asian makers with 40 GB/s 
backplane 'fabric' now costs about the price of a decent CPU - let alone 
the fast NICS, RAM, and MB to support said CPU. Or the maintenance.


Once a niche has been commoditized - and cheaply - it is time to offload 
that function and turn to challenges NOT yet so well-covered.


Clustering and better-managed large file systems come to mind...

;-)

Bill


Re: Dragonfly Routers

2008-02-19 Thread Bill Hacker

Dave Hayes wrote:

Bill Hacker [EMAIL PROTECTED] writes:

Dave Hayes wrote:

Has anyone here tried to use DragonFly BSD as a router where the box had
more than 4 network interfaces? I'm wondering if too many network
interfaces on one machine would have performance issues?
What sort of hardware, 


As yet unspecified hardware, which is why I am asking the list. :)

I seem to remember some very old idea that more than 4 network
interfaces on a PCI bus was a Bad Idea(tm). 


The PCI bus itself needed a (originally DEC) chipset to 'bridge' each 
group of four slots.


Some 'commodity' MB have more than four slots as much for positioning 
'fat' cards, such as RAID or VGA with fans, but cannot assign full 
resources to all of them at once.


ISTR the IBM specs once listed an RS/6000 as being expandable to 53 PCI 
slots, but doubt anyone has ever done it in anger.


Others cannot have but one or two PCI (Asus was notorious for this), as 
they've used up the resources for onboard chipsets.


Many boards have the bridge chip (or function of same) to support 
ever-growing arsenals of onboard stuff - but also use it up 'onboard'.


Ergo *very few* available MB have user-available bridged PCI busses with 
more than 4 fully-usable slots.


PCI-X and PCI-e are a whole 'nuther - even more complex - story, but 
really fast interfaces (10 Gig-E) can stress the whole I/O 
infrastructure, if not CPU and RAM.




I know conventional wisdom suggests specifying an application and
expected load, but in this particular case I don't really know
exact numbers in advance, I can only determine that the load is
on the scale of ~100 machines, and several gigE networks.



'..several GigE *networks*' ?

OS aside, even 'server-grade' or 'carrier grade' MB are not well-suited 
to that. They place too many other demands on their I/O channels.


Go for bespoke hardware with fast backplane fabric. 40 GB/s and up, and 
dedicated to nothing else but moving the data.


Buy medium to low-end, it is cheap enough to retire for better kit when 
need be, and continues to drop in price as capability / functionality 
increases. There are rooms full of obsolete high-cost gear all over the 
place.


Look for those that do NOT run a *BSD or Linux OS. The closer to 
bare-metalloid state-machine, the faster it will run and the less admin 
work it will need.


I've run six pci-bus 10/100 NICs as an ipfw(1) bridging router under 
FreeBSD 4.8, 1 GHz Celeron, 512 MB PC133 SDRAM with acceptable performance.
Ergo I wouldn't expect DragonFly to take a back seat relative to any of 
the other *BSD's - or Linuces.


I'm not bridging, I'm actually routing...so that will take some of the
load off the idea. The downside is I'm routing gigE and I don't want too
much speed to be sacrificed. 


Serious router/firewall kit is on a different 'Planet' (or Cisco, or ..)
and better served with an RTOS.


Perhaps. I don't have any data to confirm or deny this, though it seems
reasonable. 


Check for reviews and actual benchmarks, whether you want/need multiple 
in-built VPN, multiple segmenting et al.


Routing and firewalling is a specialty that has become a very 
high-volume hardware/ASIC/RTOS field where any router a PC could at one 
time match on speed has become so cheap and flexible off-the-shelf it is 
no longer worth the bother to roll yer own *and maintain it* for any 
serious throughput.


Bill


Re: Dragonfly Routers

2008-02-17 Thread Bill Hacker

Dave Hayes wrote:

Has anyone here tried to use DragonFly BSD as a router where the box had
more than 4 network interfaces? I'm wondering if too many network
interfaces on one machine would have performance issues?


What sort of hardware, application, load?

I've run six pci-bus 10/100 NICs as an ipfw(1) bridging router under 
FreeBSD 4.8, 1 GHz Celeron, 512 MB PC133 SDRAM with acceptable performance.


Ergo I wouldn't expect DragonFly to take a back seat relative to any of 
the other *BSD's - or Linuces.


Not for any readily available off-the shelf hardware that can actually 
run a full-house Operating System (e.g. 'PC' acting as a router).


Serious router/firewall kit is on a different 'Planet' (or Cisco, or ..)
and better served with an RTOS.


Bill



Re: Mounting dragonfly ufs in freebsd

2008-02-01 Thread Bill Hacker

Jonas Trollvik wrote:

On 2/1/08, Chris Turner [EMAIL PROTECTED] wrote:


kind of a PITA, but if you can read 'a', you could keep 2 labels in
'a', and relabel it as required..

(as long as the label doesn't extend into the partition & hose things 
up ..)

if the filesystem is working, it's probably just the label that has
problems..

If you're just testing driver compatibility now  again,
I'd think that would might be good enough..



Yes this might work.  Thanks for the tip.

Otherwise I'll have a look into how complicated the disklabel code is
to change in freebsd to see if I can change it to work like dragonfly

Regards
Jonas


Dunno if this *still* works - in fact I may be wrong that it *ever* did..but

ISTR that if I used FreeBSD 4.X to set up a disk, then DFLY *and* all 
versions of FreeBSD - 4.X and newer - could read the disklabel.


Subsequently, as we pull server HDD well before they fail, I've simply 
used separate surplus HDD.


For a laptop, of course, that would mean USB, FireWire, or such...

HTH,

BIll




Re: rsync vs. cvsup benchmarks

2008-01-31 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Garance A Drosihn wrote:

Just use rsync, and shut up about it already.


What are you people blabbering about?


Fair question, simple answer.

Not wanting to throw a useful tool out on specious grounds.

 cvsup SUCKS.

Not my field of expertise. Google 'Escort Services'.

 not the idea, but
the language it is implemented in.  and cvsup inherits the suckage.  as 
simple as that.  if it was written in a portable language, nobody would 
bother using rsync.  vince's benchmarks were just to establish one 
realisation: that rsync is not significantly worse than cvsup.  end of 
story.  move on.




?? the *language* [1]?

'inherits the suckage'? (so much for *that* view of birth control.)

;-)

'..nobody would *bother* using rsync.' [2] ??

'..establish one realisation' ???

'realisation' indeed...

Snort enough dried horseshit up your nose and you could come to believe 
that the whole damn WORLD stinks!


But that's only from the observation point of your *own nose*.
It doesn't make it so.


Give this a think instead:

HAMMER fs is expected to deliver capabilities that can reduce the 
workload required to ascertain what is/was 'of interest' as-at a 
specified tag and/or point in time.


Built-in to the fs.

*But neither cvsup/csup nor rsync as they presently stand are aware of 
that, nor equipped to take advantage of it.*


So the bottom line is that a new 'none of the above' utility could be a 
very good thing to have.


Especially if the client fs is other-than HAMMER fs.

Until such time as that animal is coded, it just *might* be easier to 
adapt cvsup/csup than rsync.


Embarassing?

Or an inspiration to go off and code that tool?

There is precedent in Plan9's specialized fs'en. No CVS repository 
needed per se. But please - keep the feet on the matching legs..


:-)

Bill


[1] See csup. In C.

[2] cvsup/csup affecteth not rsync's utility one whit. Nor the reverse. 
Each is good at what it does best.




Re: rsync vs. cvsup benchmarks

2008-01-31 Thread Bill Hacker

Chris Turner wrote:


I have some benchmark test results comparing rsync to cvsup.  




okay.. so like:

you'd think with all of these repository copies flying around,
there'd be a lot less flaming and a lot more coding going on..

enough!

sheesh.. You people are making me want to write this email

IN EMACS

and,

WE ALL KNOW HOW HORRIBLE EMACS IS !!!

especially

WHEN COMPARED TO VI !!!

and also, just to finish this off (for now:)

People on each side are LIKE NAZIS

etc, etc, etc.


LOL!

I think I'd rather have polio than either EMACS or VI

But you are right - try for new and better solutions.

On which I say again HAMMER fs to HAMMER fs will probably have neat 
syncing features inbuilt.


Pulling from HAMMER fs to other fs OTOH, needs new tools if best use 
of HAMMER features is to be used advantageously.


Eiffel should be good for coding that...

(ducks and waddles away)

;-)

Bill



Re: rsync considered superior

2008-01-30 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Hello Vincent,

Vincent Stemen wrote:
The results are dramatic, with rsync performing hundreds of percent 
faster on

average while only loading the processor on the client side a little over
a third as much as cvsup.  Either the performance claims about cvsup 
being
faster than rsync are based on theory without real world testing or 
cvsup has

gotten a lot slower or rsync has gotten a lot faster than in the past.


Thank you for these thorough tests!  We finally have some hard numbers 
to work with.  I think it is obvious that rsync should be the preferred 
update mechanism if you want to download the cvs repository.  Cvsup 
might still be better suited when only downloading the checked out sources.


To state it clearly for everybody:

=

  Use rsync to sync your repos!  It is faster and can even be compiled!



To state it even MORE clearly...

 ...so long as you do not give a damn about the extra load you are 
placing on the source server


WBH



=

cheers
  simon



Think about it.

rsync predates CVSUP.

If rsync plus a bit of scripting or 'steering' code was better 'all around'?

- cvsup would never have seen the light of day in the first place.

- NOR been adopted so *very* widely.

- NOR have remained in service for so long on so many projects.

- NOR survived challenges from 'Mercurial' and several other similar tools.

A vast supposition about rsync, backed up by half-vast testing doesn't 
change any of that. Not even with a nicely done write-up.


It is all still one-ended.

Set up the repo you have mirrored as a source server.

Instrument that server's load with 100 simultaneous rsync clients and 
again with 100 simultaneous cvsup clients.


Post the results.
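A minimal skeleton for that kind of test might look like the following. SERVER and MODULE would be yours to fill in; the pull is faked with a sleep here so the harness runs anywhere:

```python
# Skeleton of the proposed load test: N concurrent client pulls against
# a server, timed as a batch. The real pull (rsync or cvsup) is an
# assumption left as a comment; a sleep stands in for it here.
import time
from concurrent.futures import ThreadPoolExecutor

N = 10  # scale to 100 for the test proposed above

def pull(i):
    # substitute e.g.:
    # subprocess.run(["rsync", "-a", f"rsync://{SERVER}/{MODULE}", f"copy.{i}"])
    time.sleep(0.1)
    return i

start = time.time()
with ThreadPoolExecutor(max_workers=N) as pool:
    done = list(pool.map(pull, range(N)))
elapsed = time.time() - start
print(f"{len(done)} clients in {elapsed:.2f}s")
```

Run it once driving rsync and once driving cvsup, while watching the server side with top(1) or vmstat - the server load, not the client wall-clock, is what this thread is arguing about.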

Bill


Re: rsync considered superior

2008-01-30 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

*snip*

Simon,

Your command of the *language* is superb.

But it isn't about debating skills.

Test 100 simultaneous connections.

Or Not.

IDGASEW


Bill


Re: Some mail from dragonfly.users being rejected

2008-01-29 Thread Bill Hacker

Vincent Stemen wrote:

On Tue, Jan 29, 2008 at 02:24:52AM +, Bill Hacker wrote:

Vincent Stemen wrote:

Hopefully tmda will white list based on the sender address.

Tmda does tend to eventually earn a net reduction in traffic.

One way or another...

Bill


True :-).

It did white list based on sender.  I got this :-).



'One way or another ...'

patience...

Bill


Re: Some mail from dragonfly.users being rejected

2008-01-28 Thread Bill Hacker

Vincent Stemen wrote:

*snip*


Hopefully tmda will white list based on the sender address.


Tmda does tend to eventually earn a net reduction in traffic.

One way or another...

Bill


Re: kernel can't find boot device ad

2008-01-25 Thread Bill Hacker

mustkaru wrote:

Hi,

I successfully installed Dragonfly on my laptop and compiled a new
kernel. However, when booting the new kernel panics: can't find boot
device 'ad' (root is on /dev/ad0s1a). There is no 'ad' in the list of
devices recognized by kernel. ata0 appears in the boot messages
though. The nata devices are all present in my kernel config file,
precisely as in GENERIC. What might be wrong?

thanks, Must

--


'my laptop' is a bit too generic to go anywhere w/r troubleshooting

;-)

Can you boot from the original install media, see what it recognized for 
the ATA controller and attached device(s) - then do the same with the 
one with the new kernel - at least as much as you can see.


I'd say you compiled w/o a needed driver except that it sounds as if you 
aren't even getting to where the boot loader can *find* that kernel and 
its drivers.


What did you change for your new kernel?

Bill




Re: how to get dragonfly and freebsd source code

2008-01-22 Thread Bill Hacker

dark0s Optik wrote:

But I don't want to program in assembly, but in C language. For
example, I would like to analyze C code of FreeBSD or Linux for
processors. I'm not capable to assemble hardware.

2008/1/21, Matthew Dillon [EMAIL PROTECTED]:

   I think you have deciide exactly *what* you want to program.  If
   you want to mess around with hardware, e.g. blinking an LED or driving
   a LED display or something like that, or build a small controller such
   as a thermostat or a hottub controller, then the best way to do that is
   to buy an 80x51 series microcontroller.  Those things are almost
   completely self contained and have a little 8 bit microprocessor on them.
   They can't run anything sophisticated but they're the best way to learn
   how to program a processor.  They are also extremely cheap, in the
   $3-$15 range typically.  And you can very easily breadboard them.

   High-end chips like the ultrasparc, or intel, or amd those are very
   complex cpus and frankly you are better off simply writing assembly
   from inside a real operating system, like DragonFly (or any unix) and
   running it that way, instead of on bare hardware.

   -Matt







Matt is telling you how to take the first steps if you want to learn and 
grow. Shortcuts are few, and none are free. Time, study, and more time 
must be invested.


Starting directly with 'C' AND an Operating System port is like trying 
to conduct an orchestra and learning to read sheet music in the same 
afternoon. All you would produce is confusing noise.


If you are serious - start with Minix. It has a very small code-base, 
was designed specifically as a teaching and learning tool, and has books 
to match that explain everything in it.


Bill




Re: cvsup

2008-01-21 Thread Bill Hacker

Matthew Dillon wrote:

   People shouldn't worry about server side overhead all that much.  Cpu
   cycles are cheap and the cvs tree is completely cached in memory anyway.
   And the only effect that extra network bandwidth has is that it takes
   a little longer to run the operation.  Now, granted, we don't want people
   to be downloading a whole copy's worth of network bandwidth every day,
   but rsync is close enough that I just don't care.

   One of the original reasons for using cvsup was so people could maintain
   local branches of the repository.  I don't think people do this much
   anymore, if they do it all.  Disk space is so cheap these days that
   keeping a master sync copy and a separate one for local work is not a big
   deal.

-Matt



Not that DragonFly is not already usable - but the time is approaching - 
perhaps Q3 or Q4 2008 - when enough 'good stuff' will converge to give 
it a distinct 'edge'. Especially if the 'major' *BSD doesn't soon find 
its way back to a more predictable and less fragile model.


*Then* the bandwidth and all could very much matter.

OTOH, enlisting more mirrors and managing them appropriately can 
probably happen rather rapidly if/as/when that sort of avalanche begins.


PP

Bill



Re: cvsup

2008-01-21 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Matthew Dillon wrote:
   One of the original reasons for using cvsup was so people could 
maintain

   local branches of the repository.  I don't think people do this much
   anymore, if they do it all.  Disk space is so cheap these days that
   keeping a master sync copy and a separate one for local work is not 
a big

   deal.


People use git or hg nowadays :)  I just can't stop nagging, it is 
unbelievably useful, especially for team work.


cheers
  simon



Several such options can co-exist. But developers and 
'deployer/maintainer' admins have different needs.

- The developer needs fine-grain control of diverse options

- The end-user / admin needs protection from breakage or stupidity, 
doesn't want to know anything about granularity beyond CPU family and 
release number.


CVS has been the 'compromise' that is at least not harmful or overly 
demanding.


Rather than 'nag' - set up what you want and see who joins or lends a hand.

If it adds enough value to enough people, bandwidth and storage will be 
attracted to the solution.


If not, not. Yet.

;-)

Bill



Re: cvsup

2008-01-21 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Bill Hacker wrote:

CVS has been the 'compromise' that is at least not harmful or overly
demanding.


CVS *is* harmful.


To you, and other running experimental differences, perhaps so..


 I can't run a patch and work on a different issue
myself - I'll mix both.  Or I'll have to check out into another tree and
lose the patch.



Indeed. And have to go find it and manually re-apply, and/or alter and 
re-apply 'coz it no longer fits quite the same on code that has moved on 
in other details.


But unless and until the patch is vetted and accepted into the 
mainstream that is exactly what a 'production' user wants.


As few surprises as possible.

As said - developers and production users have different priorities and 
CVS is a fairly effective compromise - ELSE it would have been scrapped 
long ago by a lot more projects than have done so to date.


The alternatives are no longer new, nor are they without their own set 
of irritants.



Rather than 'nag' - set up what you want and see who joins or lends a hand.


It is really cumbersome to keep any repo synchronized, especially if you
want to have a nice repo which reflects vendor branches correctly.


Understood. But *any* repo and any toolset needs effort.


Basically all manual CVS interference has to be dealt with either in the
tool or manually.



Alternatives may provide more comfortable knobs and buttons - but AFAIK, 
none of them yet read minds, let alone cover the sharp edges.



If it adds enough value to enough people, bandwidth and storage will be
attracted to the solution.


Bandwidth and storage isn't the issue.  I can develop forever in my git
repo, and nobody might ever notice.  And it won't magically make the
project switch from CVS.

cheers
  simon


From tracking your work, I fully appreciate that you have made a great 
deal of effort, committed a lot of time and resources, and delivered 
much valuable code - so yes = anything that removes barriers to that 
gets some support.


But if your peer contributors - who must have nearly identical concerns 
- don't yet see CVS as a target for 'real soon now' replacement, there 
must not (yet) be an overwhelming case for change.



Change can't take place until it is possible for those involved to eval 
both tools side-by-side - on DragonFly, not 'other project x' - until 
they are convinced the advantage is worthwhile AND will not make it 
harder in general to use.


I'm not defending CVS in particular. I'm just saying if *most* folks 
don't see whatever as broken a fix won't get a lot of followers.


...or we would have left 'C' for something else about fifteen years ago, 
let alone the x86 archeologitecture.


;-)

Bill



Re: cvsup

2008-01-21 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Bill Hacker wrote:


*trimmed*

The bottom line - viewing it not a a coder, which I haven't been for 
around 30 years - but as a Manager of scarce resources - primarily 
*time*, and not even my own in this case - is that:


- Apparent: No readily available 'one size' fits all needs well enough 
to justify excluding others.


- Apparent: There is enough gain to justify running parallel options

- Suspected: There is NOT PRESENTLY enough 'assured' gain to attract the 
support needed to run more than one (or even *any*) as well as they need 
to be run in order to secure value. See the comment about the existing 
git repository not being current... A good idea, bed-ridden, is less 
mobile than a bandaged-up idea on crutches.


Even so - it should NOT be necessary to chose 'one and only' one, e.g. 
move off CVS to else-thing.


The missing resource is not addressed by taking any one coder away from 
coding to maintain a repository - nor creating a new toolset from scratch.


It *may* be solvable by enlisting several coders to digress long enough 
to invest in the automation that makes it possible for less effort in 
future to keep at least one alternative repository system in top form.


I do not know if that should be git or some other CVS alternative.

But the payback has to become apparent in months if not weeks - no more 
than that - ELSE it is 'faster' to continue to deal with the Devil one 
knows than retrain - and *still* struggle.


Even so - someone with admin skills AND understanding of what matters 
AND how it has to work, AND daily availability to stay on top of it, 
i.e. almost by definition NOT a coder - would be needed to keep it sorted.


That's the issue as I see it - need for what we 'Politically Incorrect' 
old farts used to (be allowed to) call a 'Gal Friday'.


- but analysing  a barrier isn't the same as removing it, and I am in no 
better position to actually fix it than anyone else that has yet spoken.


That said - while one cannot 'herd cats' or 'direct' the efforts of 
volunteers... perhaps 'will' and 'need' can enlist enough consensus to 
move a solution out of 'wish for' and into 'useful' land.


pkgsrc revanche. It has paid off.

ELSE - do the best that can be done with existing tools for a while longer.

Bill


Re: Futures - HAMMER comparison testing?

2008-01-18 Thread Bill Hacker

Matthew Dillon wrote:
:But - at the end of the day - how much [extra?] on-disk space will be 
:needed to insure mount 'as-of' is 'good enough' for some realistic span 
:(a week?, a month?)? 'Forever' may be too much to ask.


The amount of disk needed is precisely the same as the amount of
historical data (different from current data) that must be retained,
plus record overhead.

So it comes down to how much space you are willing to eat up to store
the history, and what kind of granularity you will want for the history.


OK - so it WILL be a 'tunable', then.
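A back-of-envelope version of that sizing, with every number an illustrative assumption rather than a measured HAMMER figure:

```python
# Back-of-envelope history sizing: retained history = daily churn times
# retention window, plus per-record overhead. All inputs are invented
# assumptions for illustration, not HAMMER measurements.
def history_bytes(daily_churn_mb, retention_days, record_overhead=0.05):
    """MB of history kept for a given churn rate and retention window."""
    data = daily_churn_mb * retention_days
    return data * (1 + record_overhead)

# e.g. 200 MB/day of changed data, 30-day window:
print(round(history_bytes(200, 30)), "MB")
```

The knob being discussed is retention_days (and the pruning granularity); the churn rate is a property of the workload, not the filesystem.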

FWIW - my yardsticks at the 'heavy' or most wasteful end are punch card 
& paper/mylar tape on low/no RAM systems, where 'backup' is essentially 
of 'infinite' granularity, moving through WORM storage to Plan9 Venti, 
et al.


AFAIK, none of the oldest 'write once' methods are in even 'virtualized' 
use - save possibly in the FAA or military fields, as few entities have 
any practical use for that sort of history.


At the other end, one of our projects involved storing the floor plans 
of 60,000 buildings on RAID1. A technician manually rebuilding a failed 
array mirrored the empty HDD onto the full one, and over 600 CDs had to 
be manually reloaded.


In that case, there never had been risk of loss - anyone could buy the 
latest CDs from the government lands department.


What his error cost us was 'only' time and inconvenience.

HAMMER cannot protect against all forms of human error - BUT - if it 
inherently rebuilds more intelligently than the least-intelligent of 
RAID1, it can greatly reduce the opportunity for that sort of 'accident' 
to occur.




:How close are we to being able to start predicting that storage-space 
:efficiency relative to ${some_other_fs}?

:
:Bill

Ultimately it will be extremely efficient simply by the fact that
there will be a balancer going through it and repacking it.


... constantly, and in the background... (I presume)


.. and with tunable frequency and priority. (I wish, eventually).


For the moment (and through the alpha release) it will be fairly
inefficient because it is using fixed 16K data records, even for small
files.  The on-disk format doesn't care... records can reference 
variable-length data from around 1MB down to 64 bytes.  But supporting

variable-length data requires implementing some overwrite cases that
I don't want to do right now.


Is variable-length still likely to have a payback if the data records 
were to be fixed at 512B or 1024B or integer multiples thereof?


This only applies to regular files, of course.  Directories store 
directory entries as records, not as data, so directories are packed 
really nicely. 


e.g. if you have one record representing, say, 1MB of data, and you
write 64 bytes right smack in the middle of that, the write code will
have to take that one record, mark it as deleted, then create three
records to replace it (one pointing to the unchanged left portion of
the original data, one pointing to the 64 bytes of overwritten data,
and one pointing to the unchanged right portion of the original data).
The recovery and deletion code will also have to deal with that sort
of overlayed data situation.  I'm not going to be writing that
feature for a bit.  There are some quick hacks I can do too, for
small files, but it's not on my list prior to the alpha release.
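
That overwrite case is easy to reproduce with dd - this is only a sketch
of the I/O pattern HAMMER would have to represent as three records
(unchanged left portion / 64 overwritten bytes / unchanged right
portion), not of HAMMER itself; the file path is illustrative:

```shell
# Simulate 'write 64 bytes smack in the middle' of a 1 MB record.
dd if=/dev/zero of=/tmp/rec.dat bs=1k count=1024 2>/dev/null  # the 1 MB "record"
printf '%064d' 7 | \
    dd of=/tmp/rec.dat bs=1 seek=524288 conv=notrunc 2>/dev/null
wc -c < /tmp/rec.dat   # still 1048576 bytes -- only the middle changed
```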

Remember that HAMMER is designed for large filesystems which don't fill
up instantly.  Consequently it will operate under the assumption that
it can take its time to recover free space.  If one doesn't want to use
the history feature one can turn it off, of course, or use a very
granular retention policy.

My local backup system is currently using a 730GB UFS partition and it
is able to backup apollo, crater, and leaf with daily cpdups (using
the hardlink snapshot trick) going back about 3 months.  In fact, I
can only fill up that 730GB about half way because fsck runs out of
memory and fails once you get over around 50 million inodes (mostly
dependent on the number of directories you have)... on UFS that is.
I found that out the hard way.


...which reminds us what we will ALL soon face if we do NOT seek newer 
solutions!




It takes almost a day for fsck to
recover the filesystem even half full.  I'll be happy when I can throw
that old stuff away.

-Matt
	Matthew Dillon 
	[EMAIL PROTECTED]


... or just relegate it to what it still does faster/better. IF..

I hope and trust that DragonFly BSD will earn a place as a 'broad 
spectrum' OS, competitive across the board with alternatives.


But - if not, or even just 'not at first'

- much as OpenBSD and NetBSD have long been seen as good choices for 
routers and firewalls, DragonFly should be able to carve out a viable 
niche as the better choice for centralized / 

Re: cvsup

2008-01-18 Thread Bill Hacker

Vincent Stemen wrote:

*snip*



Unless I am overlooking something obvious,


It is not likely so many projects would be using cvsup for as long as 
they have if the rsync advantage was that great, or that simple [1].


Have you:

A) compared the loads and bandwidth as well as the time on BOTH end 
machines - host as well as client?


B) tested for the 'more common' case where cvsup/csup are applied to 
rather more sparse pulls of just a fraction of older, larger 
repositories (older *BSD's) - and by more users simultaneously?


Unless I am wrong, cvsup/csup places more of the load of determining 
what to pull on the client, less on the source server.


I think I am going to stick with updating our repository via rsync :-).




It may be the right answer for now, and for what you are doing.

It may be less so for general end-user use - or even your own if/as/when 
mirror hosts are under heavier load.


Most older mirror hosts throttle each connection as well as limit the 
maximum number permitted simultaneously. The one you are using presently 
seems not to do so.
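
For reference, that sort of server-side throttling is typically expressed
in the rsync daemon configuration; a minimal rsyncd.conf sketch (module
name, path, and limit all invented for illustration) capping simultaneous
clients might look like:

```
# /etc/rsyncd.conf -- illustrative fragment; names and limits assumed
[dragonfly-src]
    path = /home/rsync/dragonfly
    read only = yes
    max connections = 10     # cap simultaneous clients for this module
```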


The key is to include measurement of host and bandwidth as well as 
client.  TANSTAAFL.



Bill

[1] subversion, mercurial, et al. alternatives are a different type of 
issue.


Re: Futures - HAMMER comparison testing?

2008-01-17 Thread Bill Hacker

Michael Neumann wrote:

Bill Hacker wrote:
I'm guessing it will be a while yet before HAMMER is ready for this, 
but it seems to be moving fast - and cleanly - so...


Sorry to hijack this thread. Just wanna mention a little write down of 
mine about HammerFS features (and sometimes comparing it with ZFS):


http://www.ntecs.de/blog/articles/2008/01/17/zfs-vs-hammerfs

I can't wait to try it out for real!

Regards,

  Michael


Michael - that's a good start!

Such a good start that I'd like to suggest the 'un-blog-like' course 
of correcting the original, at least 'for a while yet', rather than 
blogging-on the errata at the tail.


I don't think your ZFS assessment is 100% accurate, so a bit of clean-up 
there could reduce flame-bait.


That may earn further alteration if/as/when Sun integrates 'Lustre' 
features. At present, HAMMERfs and ZFS have only partial overlap in 
their Venn diagrams.


But I encourage you to keep updating and tracking the changes.

Maybe it should open with a date-stamped 'current state of ...'?

Thanks!

Bill


Re: Futures - HAMMER comparison testing?

2008-01-17 Thread Bill Hacker

Matthew Dillon wrote:

:Sorry to hijack this thread.


Not to worry!

It was *intended* to be 'hijacked'.  Welcome!

Just wanna mention a little write down of 
:mine about HammerFS features (and sometimes comparing it with ZFS):

:
:http://www.ntecs.de/blog/articles/2008/01/17/zfs-vs-hammerfs
:
:I can't await to try it out in real!
:
:Regards,
:
:   Michael

Nice.  There are a few factual mistakes but nothing major.  ZFS is
reliable on a 386 system, you just have to limit its memory consumption
with (I think) a sysctl.  HAMMER's resource use is still far lower,
though.  There are still reports of total corruption for ZFS on FreeBSD
but the issue doesn't sound insurmountable.



Not just on FreeBSD. There are some 'bad news' ZFS stories on the 
Solaris-on-UltraSPARC lists as well.


But - so far- more seem to be 'tuning' issues than structural.


HAMMER doesn't journal.  Theoretically by carefully ordering
certain I/O operations it will not have to journal.


As with 'softupdates' the net result - reduced risk of damage -  is more 
important than the technique used to deliver it.



More importantly,
HAMMER can 'recover', as in regenerate, the contents of a cluster on
the fly by scanning the records in that cluster and then rebuilding
the B-Tree and allocation radix trees from scratch.  As long as
recognizable records are present, it should be possible to recover a
considerable amount of information even after the disk gets scratched.



Again - so long as it can be trusted to actually *deliver the goods*, 
the method is 'don't care' to the end-user.



The historical nature and backup sections are correct, minus your
journaling comment.  Basically you make backups by working from an
as-of access.   You don't even have to make an as-of mount, there's
an @@timestamp extension that allows you to access the filesystem
as-of any time.  So, e.g. you can do things like:
'diff /mnt /mnt/@@0xtimestamp'.
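
Taking the @@timestamp form above at face value, a daily compare could be
scripted something like the sketch below - the @@0x-hex-seconds syntax
and the /mnt mount point are assumptions for illustration, not the final
interface:

```shell
# Build an 'as-of one day ago' path in the @@0x<hex seconds> style.
# Illustrative only -- syntax and mount point are assumed.
asof=$(( $(date +%s) - 86400 ))             # one day back, in seconds
snap="/mnt/@@0x$(printf '%x' "$asof")"
echo "diff -r /mnt $snap"                   # the command one would run
```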



From 'decision maker land' - the hottest of hot buttons are not speed 
or bragging rights but rather:


- The fs will not fail, embarrass, or cost lost time or money.

- It will need less time to install, configure, and maintain than other 
options (backup/restoral costs included).


Needing less effort to muck about with tarballs and rsync offsets a 
(possibly) slower fs. Bigtime.


So does avoiding lost, corrupted, or even just wrongly erased data.


I am down to three major items for the release:  The Recovery, balancing,
and vacuuming subsystems.  All are interrelated and I am making good
progress.  Beyond that the spike code needs some major tweaking but
the only effect of that is poor write performance (probably through
the alpha release).


30% of the I/O speed of current alternatives is fine with me. 25% or 
less might be problematic.


50% is even OK for the long-term if the backup/restoral/rollback pays 
off. Those things are otherwise not 'free'.


Of course, there are many other little issues that need to be dealt with 
before the release as well.

Post release I'll have a go at implementing backup/mirroring streaming.
I have a pretty good idea how to implement it -- basically by storing a
last-transaction-id in cluster headers, super-cluster headers, and
volume headers, in order to reduce the amount of initial scanning
required to resynchronize a stream.

-Matt
	Matthew Dillon 
	[EMAIL PROTECTED]


That last part sounds like 'journaling' to me.

But - at the end of the day - how much [extra?] on-disk space will be 
needed to ensure mount 'as-of' is 'good enough' for some realistic span 
(a week?, a month?)? 'Forever' may be too much to ask.


How close are we to being able to start predicting that storage-space 
efficiency relative to ${some_other_fs}?


Bill




Re: USB stack

2008-01-16 Thread Bill Hacker

Hasso Tepper wrote:

Simon 'corecode' Schubert wrote:

Then devices should be probed and if there is a better match than ugen
and if the ugen device is not open, it should be detached from ugen and
attached to the new driver.  Do you think this would be possible?


There are a lot of things that suck in the USB stack all BSDs are using 
at the moment. This particular one is one of the smallest ones I worry 
about ;).


There is a light, though. There is a new USB stack known as HPS or usb4bsd 
under development in FreeBSD. This particular problem is already resolved 
there, as are the two biggest problems with the current stack - the sucking 
isochronous transfer support and the sucking ugen.


See these for details:
http://www.turbocat.net/~hselasky/usb4bsd/
http://perforce.freebsd.org/depotTreeBrowser.cgi?FSPC=//depot/projects/usb/src/sys/dev/usb

Volunteers to port needed ;P.




Here is a just-discovered oddity to also look for - though I haven't 
tested it on a 'real' *BSD yet.


OS X 10.3.9 PowerBook 17 G4

- USB external kbd.

--  If directly attached to the PowerBook, kbd works when coming out of 
'sleep'.


-- If kbd is attached via an unpowered USB hub, it is ignored when 
coming out of 'sleep', and has to be unplugged/replugged (either 
kbd-to-hub or hub-to-PowerBook).


This apparently relates to powering-down USB when sleeping, but I cannot 
fathom why there should be the difference exhibited once it awakens.


Anyone able to test for that on DFLY?

Bill


Futures - HAMMER comparison testing?

2008-01-14 Thread Bill Hacker
I'm guessing it will be a while yet before HAMMER is ready for this, but 
it seems to be moving fast - and cleanly - so...


.. to the extent networked clustering AND inherent recoverability are 
intended to be major strengths - to what comparable fs should we expect 
to look if we were to compare performance and features?



My own first impression includes:

- IBM's 'GPFS'  (hard for mere mortals to get access to a realistic test 
environment, but perhaps performance is well documented somewhere in a 
Redpaper?)


- Just-acquired-by-Sun 'Lustre' (an unknown to me)

- CMU's AFS (in ports. Has never seemed particularly magical to me..)

- Plan 9's Fossil/Venti (likewise, even when run 'native' - IMNSHO, 
actually no longer viable).


- Is NFS a realistic candidate in some incarnation?

Others? [1]

 and - eventual performance quite aside, should we be looking at stress 
test design [2], or are there benchmarks already appropriate?



Bill Hacker


[1] I don't know that ZFS *inherently* plays in the cluster yet - but 
Lustre was allegedly purchased in order to be worked in with ZFS so... 
'maybe soon'.


[2] Tracking ZFS from mailing lists gives me the impression that it has 
been relatively solid for PJD on small HDD and RAM. Likewise for Sun on 
massively greater RAM and HDD arrays.


BUT - that many of the reported problems are from folks in the 'tween 
decks' - i.e. larger arrays and RAM than PJD reports as solid, but less 
available RAM-to-disk than Sun generally deploys for real-world use.


So the question arises if HAMMER will

a) be overly sensitive to / hungry for - memory as a resource

b) if so, at least just slow down when short-changed rather than crashing.

c) IF crashed (or otherwise treated rudely), be more certain of 
trouble-free recovery than product 'x'.


Re: renaming the current working directory (cwd)

2008-01-12 Thread Bill Hacker

Vincent Stemen wrote:

On 2008-01-12, Bill Hacker [EMAIL PROTECTED] wrote:

Vincent wrote:

Hi.

If my cwd gets renamed by another process, it disappears out from under
me.

To reproduce:

$ mkdir zzz1
$ cd zzz1

In another shell:

$ mv zzz1 zzz2

Back in the first shell:

$ ls
ls: .: No such file or directory
$ cd ..
cd: no such file or directory: ..


This seems to be unique to Dragonfly.  I tested on FreeBSD, NetBSD, and
Linux and, it does not affect the shell that was in the directory
because it was never deleted.  I verified, in Dragonfly, that the inode
number of the directory did not change after being renamed.

Is this a bug or a feature?

I am running DragonflyBSD 1.10.1-RELEASE



A misunderstanding, perhaps...

On FreeBSD, OS X or OpenBSD the more common action is an rm -Rf - which 
results in the original shell looking at a no-longer-valid dirtree, and 
saying so when you try to do an ls ... or whatever. 'no such file or 
directory'.


My test was to add to your test - above - 'touch fudge' while in the 
dirtree, then rm -Rf the dirtree from a different shell and try to ls or 
'touch fudge2' from the shell still in the now-removed dirtree.


As expected 'no such file or directory' was returned.

Likewise trying to backstep with 'cd ..' as the point of reference is 
lost. Needs a specific cd /path/to/whatever/still-exists


Then I tried an 'mv' with the 'fudge' file in place...

OS X 10.3.9, OpenBSD 4.2, and FreeBSD 6.2 are the same as DFLY.

The original shell still shows in prompt that it is in the OLD dirtree 
and still sees the file 'fudge'.


Yes, this is what I expected to get and what I do get on every other
system.  My FreeBSD test was also on version 6.2.  But it behaves as if
the directory was removed on Dragonfly.  Are you sure you tested on
Dragonfly as well?


No - I specifically did NOT test on Dragonfly... (as you had already 
done so). I just happened to have all three of the others running at the 
time. I *can* add another HDD and check DFLY, but trust you are not 
inventing this...


;-)


If so, what version?  I also tested today on our
other Dragonfly machine running Dragonfly 1.8.0-RELEASE and got the same
results.  The directory disappears on the original shell.  



As above, it *should* disappear *unless* the change used hardlinks.


I did another test today.  I renamed the directory back to the original
name in the second shell and it reappeared in the original shell.
Unlike the other Unix systems, the shell on Dragonfly seems to require
the name to stay the same, not just the inode.  I also verified it was
not a characteristic of the specific shell.  I got the same behavior in
Zshell as well as sh.


shell1 $ mkdir zzz
shell1 $ cd zzz
shell1 $ touch fudge
shell1 $ ll
total 0
-rw-r--r--  1 vince  vince  - 0 Jan 12 13:18 fudge

shell2 $ mv zzz zzz2

shell1 $ ll
ls: .: No such file or directory

shell2 $ mv zzz2 zzz

shell1 $ ll
total 0
-rw-r--r--  1 vince  vince  - 0 Jan 12 13:18 fudge
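
Vincent's inode observation is easy to re-check; a quick sketch (the
temp-directory name is generated, and this should behave the same on any
of the systems mentioned):

```shell
# Verify that mv keeps the directory's inode while changing its name.
d=$(mktemp -d /tmp/cwdtest.XXXXXX)
before=$(ls -di "$d" | awk '{print $1}')
mv "$d" "${d}.renamed"
after=$(ls -di "${d}.renamed" | awk '{print $1}')
echo "$before $after"        # same inode number, different name
rmdir "${d}.renamed"
```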




Again - trusting your tests - that 'hints' to me that Dragonfly has not 
used (the classical) hardlink approach - wherein the information is not 
destroyed until the 'last known' consumer of it has relinquished it.


I could be wrong about that - plus - I'm not sure I would class it as a 
'bug' even if true.


Note that to the best of my recollection, DOS, Win, OS/2, and many other 
non-Unix OS'en would also be no longer able to find the path or file 
once 'rename'-ed OR deleted by another's action.


OTOH - depending on which fs was in use, several of those would have 
prevented the second shell from doing so in the first place because 
another 'consumer' had it 'open'. Ergo, the behaviour can be fs-specific 
as well as OS-specific.




Any other shell does not. It sees them in the new dirtree. NB: This OS X 
is using UFS, not hfs


What I believe you are seeing is neither bug nor 'feature' - just a 
normal characteristic of hardlinks, which mv will use when possible.


Your shell that sits 'IN' the mv'ed dirtree has it 'open'. Once you move 
off of it, however, that link is eaten and you cannot go back.


HTH,


Bill


And therein may lie the key.

First impression is that an OS/fs combination should *either* 'track' 
the change made by others - or prevent such changes.


Yet 'rm' and 'mv' already behave differently on Unix and FFS/UFS.

Which one is 'right' depends on whether the goal is consistency with 
other *nix or consistency between 'rm' and 'mv'.


Personally, I am au fait with DragonFly doing it a different way than 
other *nix - so long as that is the intent and 'fits' the rest of the 
model. DragonFly tends to be both very well planned, and to take 
cognizance that a good deal of 'legacy land' has long been in need of 
cleanup.


One would expect that an *unplanned* change from 'legacy' could break a 
whole host of other things expected by convention - but there is scant 
evidence that it has done so to-date.


I'll cite an unrelated example of 'differences':

 - On FreeBSD

Re: renaming the current working directory (cwd)

2008-01-12 Thread Bill Hacker

Vincent Stemen wrote:

*snip*


the name to stay the same, not just the inode.  I also verified it was
not a characteristic of the specific shell.  I got the same behavior in
Zshell as well as sh.


I overlooked that, but I don't *think* the shell makes a difference on 
this issue.


FWIW -

I was using the default tcsh on OS X, was an admin and wheel member, but 
not su'ed to root.


On OpenBSD it was ksh, on FreeBSD bash - both logged-in on the console 
as root, not su'ed.


Bill


Re: renaming the current working directory (cwd)

2008-01-11 Thread Bill Hacker

Vincent wrote:

Hi.

If my cwd gets renamed by another process, it disappears out from under
me.

To reproduce:

$ mkdir zzz1
$ cd zzz1

In another shell:

$ mv zzz1 zzz2

Back in the first shell:

$ ls
ls: .: No such file or directory
$ cd ..
cd: no such file or directory: ..


This seems to be unique to Dragonfly.  I tested on FreeBSD, NetBSD, and
Linux and, it does not affect the shell that was in the directory
because it was never deleted.  I verified, in Dragonfly, that the inode
number of the directory did not change after being renamed.

Is this a bug or a feature?

I am running DragonflyBSD 1.10.1-RELEASE




A misunderstanding, perhaps...

On FreeBSD, OS X or OpenBSD the more common action is an rm -Rf - which 
results in the original shell looking at a no-longer-valid dirtree, and 
saying so when you try to do an ls ... or whatever. 'no such file or 
directory'.


My test was to add to your test - above - 'touch fudge' while in the 
dirtree, then rm -Rf the dirtree from a different shell and try to ls or 
'touch fudge2' from the shell still in the now-removed dirtree.


As expected 'no such file or directory' was returned.

Likewise trying to backstep with 'cd ..' as the point of reference is 
lost. Needs a specific cd /path/to/whatever/still-exists


Then I tried an 'mv' with the 'fudge' file in place...

OS X 10.3.9, OpenBSD 4.2, and FreeBSD 6.2 are the same as DFLY.

The original shell still shows in prompt that it is in the OLD dirtree 
and still sees the file 'fudge'.


Any other shell does not. It sees them in the new dirtree. NB: This OS X 
is using UFS, not hfs


What I believe you are seeing is neither bug nor 'feature' - just a 
normal characteristic of hardlinks, which mv will use when possible.


Your shell that sits 'IN' the mv'ed dirtree has it 'open'. Once you move 
off of it, however, that link is eaten and you cannot go back.


HTH,


Bill


Re: Qemu running on DragonFly

2007-12-02 Thread Bill Hacker

Francois Tigeot wrote:

Hi,

I just wanted to let you know that emulators/qemu is working great
with DragonFly as a host.

It doesn't build out of the box with pkgsrc, but the following patches fix
the compilation problems:

http://mail-index.netbsd.org/pkgsrc-users/2007/08/19/0001.html

They were sent by Patrick Georgi back in August and never committed.
I have opened a PR (pkg/37463).

Using the network in bridge mode is a little bit tricky but these commands
did it for me (I use tap2):

ln -s /dev/tap2 /dev/tap
ifconfig bridge0 create
ifconfig tap2 up
ifconfig bridge0 addm tap2
ifconfig bridge0 up

qemu -net nic -net tap,ifname=tap2,script=no -hda diskfile.img -m 256 -localtime



Thank you for that!

Bill Hacker


Re: Adding ad[4-6]s* to default devices

2007-10-28 Thread Bill Hacker

Chris Turner wrote:

Any thoughts?

More systems are coming out with 2 controllers - a PATA and a SATA,
and as a result there might be more users using ad4 and so on..



Three or more is becoming more common now.

- legacy IDE / PATA in bridge chipset - vanilla as to (emulated) bus, addr, etc.

- SATA in bridge chipset - not all identical, but converging.

- second SATA in discrete IC - depends on maker.

Ex: recent 'commodity' Asus and Gigabyte boards when 6 or 8 SATA / e-SATA are furnished.

Not to mention add-in supplemental controllers



I know there's the MAKEDEV.local hook, just thought I'd mention it since
I keep having to remember to MAKEDEV on some of my systems after
installing them. If OK I'll probably go ahead and make the change.

Thanks,
- Chris


No manual step needed 'ere on either of those cited, nor on older MB with one or 
more added PCI bus controllers.


Whatever DFLY used JFW'ed for me.

BIOS / PnP / ACPI / AHCI differences, perhaps?

Bill


Re: Weird issues with fdisk/dd

2007-06-08 Thread Bill Hacker

Joe Holden wrote:

Matthew Dillon wrote:

What are you running, the 1.8 release or 1.9 development?  There are
some very major differences in how storage is handled between the two.
It kinda looks like you are running 1.9.


I am indeed running 1.9.

For 1.9 you are going to have to regenerate all your devices in
/dev.

cd /dev
rm ad*
./MAKEDEV ad0
./MAKEDEV ad1
./MAKEDEV ad2
etc...


I have done the above, presumably a reboot was needed so I have done that
also, however i'm still getting the same messages.

Is there anything else I can try, or am I being completely stupid?

Thanks for the prompt response!

Regards,
Joe


*Which* FreeBSD were you coming from?

And had you previously set up the problematic disk with that (or other) OS?

Bill


Re: tty/pty/console wish

2007-06-05 Thread Bill Hacker

Matthew Dillon wrote:

I think the last time I used screen was 20 years ago.  I just leave
all my xterms open.  Sometimes I have upwards of 30 windows open across
four virtual screens in X.  When people were describing the NATA bugs
I had an xterm open in an unsaved vi for over a week with all my notes.
My version of Mac stickies!

I have about two dozen little FVWM2 buttons which run a remote ssh in
an xterm, and use -e none so the ssh doesn't interpret control escapes:

(Title Staid, Icon up, Back lightgreen, Action `Exec exec xterm -T TITLE -e ssh -a -e none HOSTNAME`)

-Matt


Ditto, OS X or Xfce4 on (another)BSD. Sometimes several weeks, and HKG to Zurich 
or IAD to ZRH or HKG as well.


OTOH, more recent ssh DO want to time out if inactive, which I have not yet had 
reason to try to alter, as it seems a good thing, actually.
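
Should one ever want to alter that behaviour, the usual knobs are
client-side keep-alives; a sketch of the relevant OpenSSH client options
(the interval and count values here are illustrative):

```
# ~/.ssh/config -- keep long-idle sessions from being dropped
Host *
    ServerAliveInterval 60   # probe the server after 60 s of silence
    ServerAliveCountMax 3    # give up after 3 unanswered probes
```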


'screen' had been of quite recent value though, as the ZRH-HKG link had pretty 
high latency, and dying in the middle of a make world was not nice back when 
slower machines (or our lowball VIA Samuel 2's) took the better part of a day to 
complete it, instead of the hour or two per chunk the dual-cores now deliver.


That said, there is an old 'wish for' w/r remote access that I would still like 
to see made a 'standardized option':


That of configuring a machine to be able to drop into an ssh-receptive mode - 
perhaps limited to only a preset correspondent - if/as/when it stalls in 
single-user mode and cannot go multi.


In theory, of course this is no longer 'single' user mode... but it would be a 
useful fiddle for maintenance of remote servers that do not have an IP KVM, 
expensive 'lights out' module, or site technicians who speak anything but 
Winders. IOW - most of them.


JM2CW,

Bill




Re: cvs commit: src/sys/dev/disk/fd fd.c fdc.h

2007-05-22 Thread Bill Hacker

Petr Janda wrote:

Matt Emmerton wrote:
You may be interested to know that my primary DFly development machine 
is an
AMD Sempron-based system with a BIOS date of late 2004.  This machine 
could
hardly be considered ancient, but yet it has ISA slots and numerous 
other

devices that rely on the ISA bus architecture.

Be careful what you wish for -- removing ISA support will prevent a great
deal of hardware from running DFly, and I don't think anyone is 
interested

in taking that step.

--
Matt Emmerton

  
In that case, maybe leave ISA bus support in for legacy, but remove 
support for rare ISA-based cards that no one uses.


Petr


Many devices that were once on a visible, external, socketed ISA bus card have 
long since moved into ~bridge silicon, and most BIOS (and OS) expect some of the 
key ones to be there - be they emulated, downsized, integrated - whatever, and 
all at far higher speed than the 'real' ISA' bus ever could have reached.


It could be another ten years before that 'worldview' changes. If then.

Bill


Re: rebuild DragonFly using gcc 4.1

2007-05-05 Thread Bill Hacker

Erik Wikström wrote:

On 2007-05-05 20:59, arnuld wrote:

On 5/5/07, arnuld [EMAIL PROTECTED] wrote:
i wanted to have GCC 4.x based DragonFly system (for a C++ project)


Arnuld,

Pardon the top post, but I'd take all this a step or so further for a 'newbie' 
with only one machine:


- Install a 'production' release of DragonFly in a separate slice before 
installing 'preview' or 'head' in its own slice.


Equip the 'production' one with email, browser, man pages - all the 'tools' you 
need.


And *keep* that one installed. i.e. - be able to boot to no fewer than TWO DFLY.
Throw in FreeBSD, Minix or a Linux if you have the space, time, and inclination.
Use a second drive or a USB stick if need be.

Now you have a DragonFly baseline that is 'known stable', gets you man pages, 
packages, tarballs, browsing, etc - all with minimal hassle vs HEAD.


From it, you can also manually mount the slice and partitions of the 'RD' 
version 'next door'.  That lets you transfer files, inspect, alter, fix things 
of the sibling. Or clean it out and start with another.


... which may be just another copy of the release version equipped with the GCC 
4.x compiler suite.


PREVIEW or HEAD are probably not needed for your project at all *unless* their 
newest features - not just Gcc 4.x - are specifically part of the project itself.


HTH,

Bill Hacker



and Trevor Kendall advised this:


If you are using 1.8.x:
In /etc/make.conf uncomment WANT_GCC41=yes and rebuild.

If you are using HEAD, it is built automatically already.

To use it set CCVER to gcc41.
--


but Matt said please do NOT use HEAD. so i used LATEST preview
release which was GCC 3.4 based. In /etc/mk.conf i added a line
WANT_GCC41=yes but i do not know how to rebuild ? i  tried Google
and got this:

# make buildworld
# make buildkernel


More on this at the end.


when i try this i get this message: do not know how to build world.
funny message, i thought.  so i have these questions:

1.) Is it a good idea to use  DragonFly preview release as my
general-purpose OS. actually, i want learn UNIX and i want GCC 4.x to
work on a C++ project. i just use a Window Manager, xine-ui, firefox,
emacs, bash, gimp, audacious or xmms and nothing else. i don't use
Desktops.


Well, most of those applications will probably be available from pkgsrc 
and if not then some other workalike will be. And it's quite a good 
system to learn about UNIX on, since it does not try to hide its 
heritage like some Linux distros (Ubuntu).



2.) how to rebuild the preview release for gcc4.x and how long this
process will be ?


It depends a bit on your computer and internet connection, but it can 
take up to a couple of hours on a not too old PC.



3.) is it necessary to do this before rebuilding world:

cvs -d [EMAIL PROTECTED]:/cvsroot co pkgsrc

even after 4 hours, it is still downloading packages :-(. is it
downloading the source code of every package ?


No, it's not necessary; in fact I would not do that at all if I were 
you. Using binary packages is much easier; see below for more info.



4.) DragonFlyBSd guide also advises to use cvs up after step 3. what
is it and how much time will that take ?


It downloads the DragonFly sourcecode to your computer so that you can 
compile it. If you have a slow connection you might want to download a 
compressed tarball of the sources instead of using cvsup. You can 
download tarballs from here: 
ftp://chlamydia.fs.ei.tum.de/pub/DragonFly/snapshots/src



hmm.. this page gives lots of information that is not present in the
DragonFlyBSD handbook:

http://wiki.dragonflybsd.org/index.cgi/QuickStartBSDUser


Yes, the steps under Keeping up to date are good, but use the file

/usr/share/examples/cvsup/DragonFly-release1_8-supfile

or

/usr/share/examples/cvsup/DragonFly-preview-supfile

if you want preview. If you have a slow connection you might want to 
download a tarball as mentioned above.


For third party application don't follow what's written under Installing 
software, look at http://wiki.dragonflybsd.org/index.cgi/HowToPkgsrc and 
follow the steps under the section Pre-built pkgsrc packages, but use 
the addresses found on


ftp://packages.stura.uni-rostock.de/pkgsrc-current/DragonFly/RELEASE/i386/All 



if a package you want can't be found there take a look in

ftp://packages.stura.uni-rostock.de/pkgsrc-current/DragonFly/RELEASE/i386/vulnerable 





Re: Need Reassurance for using DragonFly for Prodduction

2007-04-08 Thread Bill Hacker

Siju George wrote:

*snip*


Does it run quite Well on Intel's Core2Duo


1.8 runs fine here on Core-D, both dual and single core, 2.8 and 3.0 GHz, tested 
with 1 GB SDRAM, DDR 266, and 2 GB DDR2 @ 800 MHz FSB. Tyan, Asus, MSi, et al.


Should be no problem on Core-2 Duo with a 1024 MHz FSB and commodity hardware of 
decent grade.


Runs well at the other end of the performance spectrum on VIA C3 also.

Been a while since I did a NetBSD install, but ISTR that DragonFly is easier by 
comparison.


Either way, it makes life simpler to have a FreeBSD 6 or 7 CD around with 
'sysinstall' to do the disk slicing and MBR setup beforehand.


Bill



Re: Anyone got a trailer I can borrow?

2007-03-20 Thread Bill Hacker

Nigel Weeks wrote:

Just wondering if DragonFly will ever support UltraSPARC platforms?

It's a silly question, as I already know it depends entirely on manpower,
but have a look at what showed up on eBay:
http://cgi.ebay.com.au/ws/eBayISAPI.dll?ViewItem&item=220092087657

26 processors, 26GB ram, 4 scsi cards, 2 gigabit NICs.

It'd be a nice system to test SMP/clustering code, without needing multiple
machines...

N


Nigel Weeks
Tech Support and Systems Developer
Rural Press Tasmania
The Examiner Newspaper
Ph. 03 6336 7234
Mob. 0408 133 738
Email.  [EMAIL PROTECTED]





Yah well... it might also be like the Data Printer CT-1064 1000 LPM monster I 
once picked up for my Z-80 for a mere US$100.


Helps if you own your own coal mine to power the half-ton beast...

;-)

Multiple machines are probably a more realistic test anyway.

Plus - they need not be 24 X 7 X 365 dedicated or even necessarily on the same 
continent.


Volunteers joining/leaving the test 'pool' on some spare/idle-time schedule,
SETI@home style, would provide a real test...

Bill


Re: To be a new DFly commiter

2007-03-18 Thread Bill Hacker

Michel Talon wrote:

*trimmed*


work, and in particular they have understood the obvious, that is a ports
system must be centered about binary packages, not recompiling source. 
This is true for at least two reasons:

- first, today users don't want to lose time compiling


Often true.  Penguinistas, for example, are used to rebooting and don't seem to 
mind binaries that don't match their environment.



- second, it is *impossible* to guarantee reliability of a system based on
source code, because two people may compile the same software on different
background, and obtain different result. This is a fundamental issue that
nobody will be able to solve.



Of *course* they may obtain different results. Purpose of the exercise.

That's why, for 'production', one compiles on the target and in the environment 
that is to host the resulting binary.  If something is mis-matched, the error is 
'in your face', right up front - even if cryptic and not helpful.  It is not 
buried in a mostly-working binary waiting to bite you later.


For 'sport', OTOH, do whatever gives you the most intense feeling the fastest.

Hint:  Computers may not even have a role

;-)

Bill



Re: wiki log of #dragonfly irc channel

2007-03-09 Thread Bill Hacker

Joerg Sonnenberger wrote:

On Fri, Mar 09, 2007 at 01:28:10AM +, Helge Rohde wrote:
Yeah, I would have thought so too. But apparently they do bend their rules 
when they see the need; at least in Germany they *can* put you into jail until 
you tell them the passphrase, and I have heard similar from other European 
countries.


That's only partly true. First of all, without the consent of a judge,
you can be put into jail only until the next day. Second, to keep you in
prison they have to have enough evidence to warrant it. Third, you can
not be forced to answer the questions.

Joerg


Dream on, guys, dream on.

'They have to...' have been much in the news of late for NOT doing what they 
allegedly 'have to'.


Bigtime.  And not just in the Yew-Ass-A, either.

And that's just what is public information.

But might we let this not-specifically-related-to-DragonFly thread sort of 
wander off to a more appropriate venue?


Of which there is no shortage.

Bill



Re: 1.8.x: fdisk and disklabel fails to operate disk

2007-02-24 Thread Bill Hacker

Tero Mäntyvaara wrote:

Hi!

I have used Linux. I managed to install 1.8.x to my system (in one
drive) and it boot as I expected. I headed to make my hard drive
mirrored,


'Mirrored' by which means, of the several available?

i.e. - are you on 'raw' ata / nata, 'pseudo-RAID' ata controller, 'raw' SCSI 
controller, 'hardware' RAID SCSI controller... or... what?


NB: It sounds like the first, i.e. pure 'software' RAID1... but it would help to 
have more information as to what you were doing and at what point in the process 
it stalled.


Unlike WinWOES/Leenux, DFLY has no 'tick the box' interface to RAID1 
creation (AFAIK...).


;-)

 but both fdisk and disklabel said that both drives were not

configured. :-O


Tero Mäntyvaara


Been doing 'hardware' RAID since SMD/SCSI were 'new', 'software' RAID since MFM, 
RLL and GCR controllers were in use. Using SCSI RAID, atacontrol, and gmirror 
(FreeBSD) on everything but my personal PowerBook still yet...


Very often the setup is 'polite' w/r possibly overwriting precious data - i.e. 
it wants the drive that is to be added to the mirror set to be 'raw' - neither 
partitioned nor labeled *at all*.


IOW - it is about to duplicate the MBR, partitioning, and disklabel as well as 
the data. Or near-as-dammit, and will not over-write if such pre-exists.


Bill
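As an aside, the usual way to hand such a tool a 'raw' drive is to zero out the first blocks so no stale MBR, slice table, or disklabel survives. A hedged sketch, demonstrated against a scratch image file rather than a real device (substitute the actual mirror-member device only once you are certain which one it is — dd is unforgiving):

```shell
# Create a 1 MiB scratch "disk" to demonstrate on; on real hardware
# this would be the mirror-member device (e.g. /dev/ad1), not a file.
dd if=/dev/zero of=scratch.img bs=512 count=2048 2>/dev/null

# Zeroing the first sectors wipes the MBR, partitioning and disklabel,
# leaving the drive looking factory-raw to the mirroring tool.
# conv=notrunc overwrites in place without shortening the target.
dd if=/dev/zero of=scratch.img bs=512 count=64 conv=notrunc 2>/dev/null

# Size is unchanged - only the leading sectors were rewritten.
wc -c < scratch.img
```

This is the same trick the installer's disk tools perform when asked to wipe the beginning of a drive.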



Re: Native jdk build - success

2007-02-20 Thread Bill Hacker

Francois Tigeot wrote:

Hi,

Today, I finally succeeded in building a native version of wip/jdk14.

The big difference with my previous attempts was the DragonFly version:
1.4.5-RELEASE

So, we have:

1.4.5 = success
1.6.2 = failure
1.8.0 = failure

No matter what version of the pkgsrc system I used, the results were the
same.

The error was also the same in all cases:

Error: failed 
/usr/obj/pkgsrc/wip/jdk14/work/control/build/bsd-i586/lib/i386/client/libjvm.so, because 
Undefined symbol JNI_CreateJavaVM

Sadly, I'm not sure I will be able to pinpoint what exactly has changed between
1.4 and 1.6 to cause this.



'pinpoint' may be the gcc ver & libs installed by default at each of the above 
releases?


Bill


Re: Plans for 1.8+ (2.0?)

2007-02-19 Thread Bill Hacker

Robert Luciani wrote:

*snip* (clustering discussion..)



Jokingly: I think the notion of functional individual computers
helping each other out sounds a bit like neourons in a brain. The
technological singularity is coming, nothing can stop it!


Oddly, they *are* - but not in the way theoreticians around the time 
'Metropolis' or even 'Brave New World' were written might have envisaged, i.e. 
neither centrally controlled nor even 'close-coupled'.


The 'net, e-mail, file & media exchange, IRC... one could go on... seem to JFDI 
w/r all manner of pragmatic 'sharing' with one of the essential characteristics 
of their human users:


- For the most part, a little 'latency' is not unwelcome.

IOW 'Mañana' - response / gratification in minutes, if not hours, is usually as 
good as we could absorb anyway w/o becoming totally 'time-slaved' to the 
machinery that was intended to be servant - not master.


Specialized scientific applications are just that - specialized, and often 
warranting from-the-ground-up bespoke software - OS included.


Not to put too fine a point on it, but most of these need a very different core 
than (any of the) Unix anyway. Real-time execs, self-monitoring, 
soft fault-tolerance, etc.


Bill


Gcc-ada errata?

2007-02-19 Thread Bill Hacker
Anyone seen this (DFLY 1.8 REL, Celeron 1 GHz, 512MB SDRAM, IBM/Hitachi PATA 
20GB HDD):


=
# cd /usr/pkgsrc/lang/gcc34-ada

# bmake install -DUSE_GCC34
WARNING: [depends.mk] Unknown object format for installed package 
digest-20050731
= Required installed package digest=20010302: digest-20050731 found
=== Checking for vulnerabilities in gcc34-ada-3.4.6
= Checksum SHA1 OK for gcc-3.4.6.tar.bz2
= Checksum RMD160 OK for gcc-3.4.6.tar.bz2
work - /usr/obj/pkgsrc/lang/gcc34-ada/work
=== Installing dependencies for gcc34-ada-3.4.6
= Required installed package gmake=3.78: gmake-3.81 found
= Required installed package {gettext-tools=0.14.5,gettext=0.10.36}: 
gettext-tools-0.14.6 found

= Required installed package gcc34=3.4.5nb1: NOT found
= Verifying reinstall for ../../lang/gcc34
WARNING: [depends.mk] Unknown object format for installed package 
digest-20050731
= Required installed package digest=20010302: digest-20050731 found
=== Checking for vulnerabilities in gcc34-3.4.6nb1
= Checksum SHA1 OK for gcc-3.4.6.tar.bz2
= Checksum RMD160 OK for gcc-3.4.6.tar.bz2
=== Building for gcc34-3.4.6nb1
(ulimit -d `ulimit -H -d`;  cd /usr/obj/pkgsrc/lang/gcc34/work/obj  
/usr/bin/env USETOOLS=no ICONVPREFIX=/usr SHELL=/bin/sh PTHREAD_CFLAGS=\ 
-pthread\  PTHREAD_LDFLAGS=\ -pthread PTHREAD_LIBS=-lc_r PTHREADBASE=/usr CC=cc 
CFLAGS=-O2\ -I/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/include\ 
-I/usr/include\ -I/usr/pkg/include 
CPPFLAGS=-I/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/include\ -I/usr/include\ 
-I/usr/pkg/include CXX=c++ CXXFLAGS=-O2\ 
-I/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/include\ -I/usr/include\ 
-I/usr/pkg/include COMPILER_RPATH_FLAG=-Wl,-R F77=f77 FC=f77 FFLAGS=-O LANG=C 
LC_COLLATE=C LC_CTYPE=C LC_MESSAGES=C LC_MONETARY=C LC_NUMERIC=C LC_TIME=C 
LDFLAGS=-L/usr/lib\ -Wl,-R/usr/lib\ -L/usr/pkg/lib\ -Wl,-R/usr/pkg/lib 
LINKER_RPATH_FLAG=-R 
PATH=/usr/obj/pkgsrc/lang/gcc34/work/.wrapper/bin:/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/bin:/usr/obj/pkgsrc/lang/gcc34/work/.gcc/bin:/usr/obj/pkgsrc/lang/gcc34/work/.tools/bin:/usr/pkg/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/pkg/bin:/usr/pkg/sbin:/usr/games:/usr/local/sbin:/usr/local/bin:/usr/pkg/xorg/bin:/usr/X11R6/bin:/root/bin:/usr/pkg/bin:/usr/pkg/xorg/bin 
PREFIX=/usr/pkg PKG_SYSCONFDIR=/usr/pkg/etc CXXCPP=cpp CPP=cpp 
LINK_ALL_LIBGCC_HACK= LOCALBASE=/usr/pkg 
NO_WHOLE_ARCHIVE_FLAG=-Wl,--no-whole-archive 
WHOLE_ARCHIVE_FLAG=-Wl,--whole-archive X11BASE=/usr/pkg/xorg X11PREFIX=/usr/pkg 
PKGMANDIR=man MAKECONF=/dev/null OBJECT_FMT=ELF USETOOLS=no 
BSD_INSTALL_PROGRAM=/usr/bin/install\ -c\ -s\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_SCRIPT=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_LIB=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_DATA=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 444 
BSD_INSTALL_MAN=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 444 
BSD_INSTALL=/usr/bin/install BSD_INSTALL_PROGRAM_DIR=/usr/bin/install\ -d\ -o\ 
root\ -g\ wheel\ -m\ 755 BSD_INSTALL_SCRIPT_DIR=/usr/bin/install\ -d\ -o\ root\ 
-g\ wheel\ -m\ 755 BSD_INSTALL_LIB_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ 
wheel\ -m\ 755 BSD_INSTALL_DATA_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ wheel\ 
-m\ 755 BSD_INSTALL_MAN_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ wheel\ -m\ 755 
BSD_INSTALL_GAME=/usr/bin/install\ -c\ -s\ -o\ root\ -g\ wheel\ -m\ 555 
BSD_INSTALL_GAME_DATA=/usr/bin/install\ -c\ -o\ root\ -g\ wheel\ -m\ 444 
BSD_INSTALL_GAME_DIR=/usr/bin/install\ -d\ -o\ root\ -g\ wheel\ -m\ 755 
INSTALL_INFO= MAKEINFO=/usr/obj/pkgsrc/lang/gcc34/work/.tools/bin/makeinfo 
PKG_CONFIG= 
PKG_CONFIG_LIBDIR=/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/lib/pkgconfig:/usr/obj/pkgsrc/lang/gcc34/work/.buildlink/share/pkgconfig 
PKG_CONFIG_LOG=/usr/obj/pkgsrc/lang/gcc34/work/.pkg-config.log WRAPPER_DEBUG=no 
WRAPPER_UPDATE_CACHE=yes VIEWBASE=/usr/pkg /usr/pkg/bin/gmake bootstrap)

Syntax error:  unexpected
*** Error code 2

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc34
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc34
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc34-ada



Bill Hacker



Re: Installing Dragonfly 1.8 hangs BIOS completly

2007-02-19 Thread Bill Hacker

Michael Neumann wrote:

Hi,

Just a few minutes ago, I installed Dragonfly 1.8 onto my laptop.
Then I rebooted, and the BIOS hung up completely after showing that it had 
detected the harddisk and cdrom. I powered down and tried again, but 
that didn't work either. I couldn't boot a CD or anything else, or 
even enter the BIOS setup.


The only thing that worked was to remove the harddisk physically and 
then pluging it in a few seconds after the BIOS crossed the detection of 
the devices. Using this method, I booted the Dragonfly installer cd and 
used the disk tools to wipe out the beginning of the harddisk. Then I 
rebooted again and voila, I could boot normally (without removing the 
harddisk). Puh!


Now I tried a second time to install Dragonfly 1.8, but after I reboot 
the BIOS hangs again!


I know that the BIOS should not hang up itself, but on the other hand 
that didn't happen with any other operating system I installed on my 
laptop (FreeBSD, NetBSD, DragonFly 1.6). So I think there is something 
wrong in the 1.8 version. Any hints?


Regards,

  Michael


Well... you haven't told us *which* laptop (make, model, age, CPU, whether you 
have APM/ACPI enabled or not, whether storage devices are autodetecting, swapped, 
set to boot out-of-order, etc.) - any and all of that info might help.


...and nothing attached to a serial port while booting, please.

Bill


Re: Gcc-ada errata?

2007-02-19 Thread Bill Hacker

walt wrote:

On Tue, 20 Feb 2007, Bill Hacker wrote:


Date: Tue, 20 Feb 2007 00:34:37 +0800
From: Bill Hacker [EMAIL PROTECTED]
Newsgroups: dragonfly.users
Subject: Gcc-ada errata?

Anyone seen this (DFLY 1.8 REL, Celeron 1 GHz, 512MB SDRAM, IBM/Hitachi PATA
20GB HDD):

=
# cd /usr/pkgsrc/lang/gcc34-ada

# bmake install -DUSE_GCC34


Oops -- ignore my other answer.  I read the README ;o)  The gcc34 Makefile
clearly assumes that you've already built gcc34 *with* ada.  AFAICT, the
only way you can do that is to install gcc3 first, because that includes
ada by default.  You can then bootstrap to gcc34, or maybe even directly
to gcc34-ada without building gcc34, not sure.



Let's see...

===
# cd gcc3-ada
# bmake install clean
ERROR: This package is not available for these platforms: Darwin-*-* Interix-*-* 
DragonFly-*-*.

ERROR: This package has set PKG_FAIL_REASON:
ERROR: gcc3-ada-3.3.6 is not available for DragonFly-1.8.0-i386
ERROR: This package has set PKG_SKIP_REASON:
ERROR: Sorry, need an Ada compiler
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc3-ada

===

# cd ..
# cd gcc3
# bmake install clean
WARNING: [depends.mk] Unknown object format for installed package 
digest-20050731
= Required installed package digest=20010302: digest-20050731 found
=== Checking for vulnerabilities in gcc3-3.3.6
work - /usr/obj/pkgsrc/lang/gcc3/work
=== Installing dependencies for gcc3-3.3.6
= Required installed package gcc3-c-3.3.6{,nb*}: NOT found
= Verifying reinstall for ../../lang/gcc3-c
ERROR: This package is not available for these platforms: Darwin-*-* Interix-*-* 
DragonFly-*-*.

ERROR: This package has set PKG_FAIL_REASON:
ERROR: gcc3-c-3.3.6 is not available for DragonFly-1.8.0-i386
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc3-c
*** Error code 1

Stop.
bmake: stopped in /usr/pkgsrc/lang/gcc3

===

'GNATS!' as the cartoon character said...

Hmm.

(ssh's to a FreeBSD 6.X box.)

+++
triligon# cd /usr/ports/lang/gnats
triligon# less pkg-descr
The GPL version of GNU Ada compiler system built on GCC 3.4.6

Binaries built with this version of compiler are covered by the
GPL license.  Use the Ada compiler bundled with GCC 4.0 and
subsequent for LGPL licensing.



Joy!  gcc 4.0 and later should be OK as-is?

===

From the DFLY 1.8 release notes:

Bring in GCC-4.1 (setenv CCVER GCC41 to use).

Let's see.

=

# setenv CCVER GCC41
# gcc -v
gcc: in path []/usr/libexec/GCC41/gcc: No such file or directory

=

And even with case changed, the subdir:

/usr/libexec/gcc41 is indeed empty...

Perhaps the ISO image is not enough. But I do not see gcc41 on Chlamydia, 
either.

Bill




Re: Gcc-ada errata?

2007-02-19 Thread Bill Hacker

walt wrote:

Bill Hacker wrote:

.

Joy!  gcc 4.0 and later should be OK as-is?


Well, corecode has imported just the bare-nekkid gcc/g++ parts, not
the fancy stuff like ada.  But -- maybe the native ada-4.1.2 code
will compile without a bootstrap ada compiler, dunno.


===

From the DFLY 1.8 release notes:

Bring in GCC-4.1 (setenv CCVER GCC41 to use).

Let's see.

=

# setenv CCVER GCC41
# gcc -v
gcc: in path []/usr/libexec/GCC41/gcc: No such file or directory


If you're willing to build DFly from source you just add a line to
your /etc/make.conf 'WANT_GCC41=true' (IIRC) and it will get built.
Then, after gcc41 is installed, you setenv CCVER GCC41 to actually
use it instead of the default gcc3x.
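The recipe above boils down to one make.conf line plus one environment setting. Sketched here exactly as described (note the variable spelling carries walt's own 'IIRC' caveat, so verify it against your source tree's build documentation before relying on it):

```
# /etc/make.conf - ask the next buildworld to also build gcc 4.1
WANT_GCC41=true

# after buildworld/installworld, select it per-shell (csh syntax):
#   setenv CCVER GCC41
```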


Thanks - may come back to that.

For now, 'one challenge at a time', so trying gcc4.1 on FBSD 6 AMD-64.

If it is broken there, it is broken, full-stop, AFAIAC (neither Leenux nor its 
interface layer being on my radar).


And 'make check' was not happy just now - well before getting at all Ada-ish, so 
I may have to revert to trying an older version on FBSD 4.X.


Or use the PowerBook :-(

Bill





Re: Plans for 1.8+ (2.0?)

2007-02-18 Thread Bill Hacker

Simon 'corecode' Schubert wrote:

Rupert Pigott wrote:

I don't know if IBM's GridFS does any better with the latency, but it
certainly scales a lot better but the barrier for adoption is $$$. It
costs $$$ and it costs a lot more $$$ to train up and hire the SAs to run
it. There are other options like AFS too, but people tend to be put 
off by

the learning curve and the fact it's an extra rather than something that
is packaged with the OS.


Do you happen to have links to GridFS and other systems you mentioned?

cheers
 simon



Google, surprisingly, did not find Big Blue's 'RedBook' fingerprints so much as 
university work, some with government funding (US, Chinese, Brazilian, French, et 
al):


Overview with more links:

http://www.scl.ameslab.gov/Projects/Infrastructure/gridafs.html

a (free) .pdf:

http://www.slac.stanford.edu/econf/C0303241/proc/papers/THAT005.PDF



Standards bodies (fee):

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?isnumber=34198&arnumber=1630912&count=73&index=25

http://doi.ieeecomputersociety.org/10.1109/CCGRID.2006.141



Other (fee) publications

http://www.springerlink.com/content/5xpry6xu0nnwrwcd/

http://www.springerlink.com/content/rdrdbu6pgeanxgqu/


- most citations indicate implementations that appear to rely heavily on AFS 
legacy.


Also of interest 'Distributed Shared Memory':

http://perso.ens-lyon.fr/laurent.lefevre/dsm2006/


But a brief scan of those that were 'free' brings up the question:

'Just who is it that actually NEEDS this anyway?'

Bill Hacker


Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Bill Hacker

Matthew Dillon wrote:

:Besides the finalization of vkernel, what else can we expect into 2.0? There 
are many long-awaited (not only by me) features and additions:
:- ZFS

I am seriously considering our options with regards to ZFS or a
ZFS-like filesystem.  We clearly need something to replace UFS,
but I am a bit worried that porting ZFS would be as much work
as simply designing a new filesystem from scratch.

One big advantage of a from-scratch design is that I would be
able to address the requirements of a clustered operating system
in addition to the requirements of multi-terabyte storage media.


Tilt.

I thot that was one of the design goals of ZFS?

Would it not make sense also to look again at the pioneering work done in Plan 
9?

'Clustering' per se is not new - only the DFLY approach to same.

Bill



Re: vkernel migration

2007-02-01 Thread Bill Hacker

Nigel Weeks wrote:

Just an idea for thought over your next coffee...

I'm wondering if it would conceivably be possible to move a vkernel process (and
any sub-processes it had) to another host. It'd have to stop temporarily, or at
least slow down immensely, while pumping all the userland data inside the
vkernel to the other host.

It might just be easier to have a vkernel powered by multiple physical
kernels (on different machines), giving the appearance of an SMP machine
inside the vkernel.

(Insert 40,000 lines of code here...)

Nige.



Hmm... there is a sort of 'left-handed' precedent..

Before Microslog got its 'one way' licensing mitts on Connectix' Virtual PC... 
one could close a VPC session in 'save state' - which preserved even the 
(virtual) program counter of the (virtual) PC (not to mention registers, stack, 
memory) in the 'container file' along with everything else.


That container file could be e-mailed halfway 'round the globe, loaded on 
another VPC-equipped box, and fired up - with the cursor in the same spreadsheet 
cell as when saved. Or whatever. Whether minutes - or months - earlier.


And it fired-up fast.

MS later prevented a container created on box A from running on box B.
Or even logged-in user A and logged-in user B on the same box.

So long as the registers, caches, RAM, stack, and program counter are preserved, 
it is as portable as the (virtualized?) hardware supports.


The IP stack, OTOH, *should* get hairy - but they handle that anyway - as when a 
network cable is swapped over to a different link and IP.


Bill


Re: Plans for 1.8+ (2.0?)

2007-02-01 Thread Bill Hacker

Chris Csanady wrote:

very well-thought-out post in re ZFS. Thanks!

I'd only add that porting one or more 'foreign' fs in general seem to be a good 
idea - it is bound to show up things not yet covered well.


In all of the published comparison tests, I have never seen a single 'always 
best' fs anyway.


Pre ZFS,though, JFS and XFS were consistently 'nearly always in the top 3' - IOW 
pretty good all-around *compromises*.


And therein lies the rub:  One man's needs for optimization differ from the next's.

But one thing is *for sure*: NFS and SMBFS and others (TVFS, Andrew...) have 
'problems' of one sort or another.


So *anything* that makes for a better shared storage - even if it must rely on 
nothing slower than gig-E or 10-Gig-E to be at its best, is a plus.


...shared multi-host SCSI RAID controllers being rudely rare and expensive...

;-)

Bill


