Re: FreeBSD on a Mac Mini Intel?

2008-11-26 Thread Ian Jefferson


On Tue, 25 Nov 2008, Tom Marchand wrote:

snip

  Ian,
 
  You could always test it using VMware Fusion and then let
  us know
 
  Er, gee thanks.  I'll just have a word with the VMware guys about
  fully
  abstracting the mini in software... back in a jiffy ;-)
 

 Actually VMware has a Mac version, which is what the poster was
 probably referring to.
 ___

FreeBSD 6.x installs and runs well, as far as I can tell, on VMware Fusion.
I have done this, but I don't recall the specifics.  I'm pretty sure I
tried 6.1 and 6.3, though I forget which architecture (amd64 vs. i386).

However, personal preference: I'd rather run the box as native FreeBSD and
not have to bother with Mac OS X.

I was actually musing that this (a VM) might be a nice way to pre-install a
complete custom system: install, configure, add packages, tweak your
favorite kernel settings, etc., then dump/restore to a real disk and pop it
into a physical system.
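As a sketch of that workflow (host names and paths are hypothetical, and it assumes the VM's disk layout matches the target), dump(8) piped over ssh is one way to move the finished image out of the VM:

```sh
# Inside the configured FreeBSD VM: level-0 dump of each filesystem,
# snapshotted live (-L) and piped to a staging host over ssh.
dump -0Lauf - /    | ssh stage 'cat > /dumps/root.dump'
dump -0Lauf - /var | ssh stage 'cat > /dumps/var.dump'
dump -0Lauf - /usr | ssh stage 'cat > /dumps/usr.dump'
```

On the physical box you would then newfs the partitions, mount them, and run restore -rf on each dump in turn.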

I used to do something like this with NeXT systems.  It worked great.

It all sounds promising enough to buy a new toy.  I'll let you all know
real soon now if/how I get it running.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: FreeBSD on a Mac Mini Intel?

2008-11-24 Thread Ian Jefferson


On Mon, 24 Nov 2008, Andrew Gould wrote:

 On Sun, Nov 23, 2008 at 9:08 AM, John Almberg [EMAIL PROTECTED] wrote:

  On Nov 21, 2008, at 11:42 PM, Ian Jefferson wrote:
 
   Is anyone running FreeBSD on a Mac Mini Intel?
 
 

 Ian,

 You could always test it using VMware Fusion and then let us know
 ;-)


Er, gee thanks.  I'll just have a word with the VMware guys about fully
abstracting the mini in software... back in a jiffy ;-)

OK, any comments on other low-power platforms?

I'm sorely tempted to just buy one (mini-intel) and promise to write up
the results on some web page somewhere.


FreeBSD on a Mac Mini Intel?

2008-11-21 Thread Ian Jefferson

Is anyone running FreeBSD on a Mac Mini Intel?

I've looked around for a definitive discussion on the topic, but couldn't
find anything on this list or via Google.


I'd like to replace a couple of relatively power-hungry servers with a
couple of Intel Mac Minis.  For my purposes they are plenty good enough.
I'd prefer to stay with the 6.X release for now.


I've got a lot of Macs around running OS X, but in this case these boxes
would be headless, without keyboards.  An alternate serial console would be
nice if it can be rigged up via USB and a serial converter.
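For what it's worth, on FreeBSD boxes that do have a real serial port, these are the usual two knobs (speed and tty name as shipped in 6.x; whether the loader can drive a USB-serial adapter is a separate question):

```conf
# /boot/loader.conf
console="comconsole"

# /etc/ttys -- turn on a login on the first serial port
ttyd0   "/usr/libexec/getty std.9600"   dialup  on secure
```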


IJ


Re: vinum stability?

2006-07-07 Thread Ian Jefferson

One thing you might consider is that gvinum is quite flexible.

The subdisks that make up a vinum raid 5 plex are partitions.  This means
you can create raid 5 sets without using each entire disk, and the disks
don't need to be the same model or size.  It's also handy for spares: if
you start having media errors, a new partition on the offending disk might
be one option, but any other disk that can hold a partition equal in size
to the subdisks in the raid 5 plex will also do.
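As an illustration of that flexibility (device names and sizes are hypothetical; the grammar follows vinum(8)), a config can draw equal-sized subdisks from partitions on three quite different disks:

```
drive d1 device /dev/ad4s1d     # partitions, not whole disks
drive d2 device /dev/ad6s1d
drive d3 device /dev/ad10s1d

volume r5
  plex org raid5 512k           # 512 KB stripe size
    sd length 10g drive d1      # only 10 GB of each disk is used
    sd length 10g drive d2
    sd length 10g drive d3
```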


Having said that, I'm finding gvinum tricky to understand and use.  It
seems to be on the mend, though: the documentation is improving, and the
raid 5 set I had running seemed pretty stable through a 40-minute iozone
benchmark.  That's all I've done with it to date.


IJ

On Jul 6, 2006, at 8:56 AM, Jeremy Ehrhardt wrote:

I have a quad-core Opteron nForce4 box running 6.1-RELEASE/amd64  
with a gvinum RAID 5 setup comprising six identical SATA drives on  
three controllers (the onboard nForce4 SATA, which is apparently  
two devices, and one Promise FastTrak TX2300 PCI SATA RAID  
controller in IDE mode), combined into one volume named drugs.  
We've been testing this box as a file server, and it usually works  
fine, but smartd reported a few bad sectors on one of the drives,  
then a few days later it crashed while I was running chmod -R on a  
directory on drugs and had to be manually rebooted. I can't  
figure out exactly what happened, especially given that RAID 5 is  
supposed to be robust against single drive failures and that  
despite the bad blocks smartctl claims the drive is healthy.


I have three questions:
1: what's up with gvinum RAID 5? Does it crash randomly? Is it  
considered stable? Will it lose data?
2: am I using a SATA controller that has serious problems or  
something like that? In other words, is this actually gvinum's fault?
3: would I be better off using a different RAID 5 system on another  
OS?


Jeremy Ehrhardt
[EMAIL PROTECTED]


Re: gvinum question: why are subdisks not attached?

2006-07-05 Thread Ian Jefferson

Howdy,

On Jun 26, 2006, at 9:39 AM, Travis H. wrote:


Hiya,

I finally resolved the source of my gvinum problems.  Every time I
reboot, the plexes and volumes come up attached to one another, but
both are size zero and the subdisks exist but are not attached.  Has
anyone a guess about the source of this problem?


One possibility is your partition type, though I'm not certain (see my
bsdlabel output sample below).  I was working on a Raid5 configuration and
finally did get it working.  I have a few suggestions.


One is to try to find old vinum documentation.  What finally brought
clarity for me was the vinum(8) man page from FreeBSD 4.7.  It has a lot
more information, including a nice section on the config file format and
examples.  Of course I could have just missed it in the newer pages (nope,
just checked again).


The second is to try marking your partitions as type vinum.

The third is to drop the slices, since the configuration that worked for
my Raid 5 set used plain partitions, no slices (see below).


The fourth is to move up to 6.1.  I did this during my struggles, but I
don't think it was my issue.  6.1 does have improved man pages :-), though
not as good as the 4.7 pages.  You have to read both, because gvinum
doesn't implement the full vinum command set.  I *think* the config file
format is the same, though.


# /dev/ad4:
# same as /dev/ad8 and /dev/ad10 for a Raid 5 3-disk system.
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a: 390721952       16   unused        0     0
  c: 390721968        0   unused        0     0   # "raw" part, don't edit
  d: 209715200             vinum

Finally, enough people have been able to get (or claim) vinum success to
make me believe it's OK.  There are two key issues in my mind.  I'll add
that now that I've found TFM, I can RTFM.  ;-)


#1 it is a little fragile (i.e. not yet idiot-proof enough for this hack)
#2 the documentation needs some work... but that's kind of up to us
now, isn't it...


I think I owe freebsd-doc a long post, but I have at least another 20
hours of understanding and playing to do.  This is a great capability for
small systems.  Frankly, I'm a bit puzzled by geom and why the change was
made from whatever it replaced.  Overall 6.x looks like a step backwards
from a filesystem management perspective... maybe someone could comment on
geom... perhaps a future enabler?


Ian


moving gvinum raid 5 volume between systems

2006-07-05 Thread Ian Jefferson
Could anyone offer some guidance on how, under 6.1, I can move 3 gvinum
disks from one system to another?


There is an old post:

http://lists.freebsd.org/pipermail/freebsd-questions/2004-August/054607.html


That covers pretty much what I am doing.  In my case I have 6.1 i386 and
6.1 amd64 versions of FreeBSD on one machine.  I'd like to be able to
mount the same raid 5 set under either OS.
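I haven't verified this move under 6.1 myself, but since vinum keeps its configuration on the drives themselves, the usual sketch (volume name hypothetical) is just:

```sh
kldload geom_vinum             # or geom_vinum_load="YES" in /boot/loader.conf
gvinum list                    # drives, plexes and the volume should appear
                               # from the on-disk config
fsck_ufs -n /dev/gvinum/r5vol  # check before mounting
mount /dev/gvinum/r5vol /mnt
```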


IJ


Re: OT: Torn between SCSI and SATA for RAID

2006-05-28 Thread Ian Jefferson



I'd rather run 5 SATA cables than one SCSI cable (say 68-pin) with
multiple heads...   The darn SCSI cables are so thick,
comparatively, that running them in your case is a lot harder :-)





Well, everyone's mileage may vary.  Parallel cables only work nicely when
you have a stack of drives all close together and lined up.  I personally
yearn for a simple 40 Gbps daisy-chainable serial bus.  I hoped FireWire
would be it, but we seem to be stuck at 800 Mbps.


The other cabling option I forgot about is USB2 or Firewire.

There are a number of very low-cost external cases that pre-package
USB/FireWire-to-SATA converters.  You basically fill the case with SATA or
ATA drives and connect your computer to it via a single FireWire or USB
cable.  I have not seen one of these that is hot-swap yet, but I did see a
few recently in Tokyo's Akihabara district for ~$100, so I assume they are
available all over.  The boxes I have seen are 4-drive systems.  Just fill
them with your favorite commodity hard disk, I guess.


At ~50 MB/s the interface is plenty fast, and it greatly simplifies the
cabling inside the PC.


IJ



Re: OT: Torn between SCSI and SATA for RAID

2006-05-25 Thread Ian Jefferson

Hi Chris,

I have many of the same questions.  SATA is plenty fast for home systems,
and modern drives are smoking stuff that was enterprise-class just a few
years ago.  'twas ever thus.


Cables are a nightmare IMHO.  This was by far the biggest reason I was a
fan of SCSI for a long time: you can make a pretty effective and tidy Raid
system by custom-making a short daisy-chain SCSI cable.  I have not
explored this recently, but used to do it ~5+ years ago for non-raid
applications.  We used to run into device compatibility problems on the
SCSI bus, though, so sticking to the same drive manufacturer might be a
good idea.  Perhaps things have improved.


You can buy old 80-pin 16-bit SCSI controllers quite reasonably on eBay.
Even though the bus speeds might be only 40 or 80 MB/sec (that's bytes),
this still exceeds what I get on single-disk SATA benchmarks.  My
impression is that modern drives are backward compatible with older SCSI,
but I've not tested this extensively, just a couple of anecdotes.


You can do quite well in the used enterprise market; have a look at
pricewatch.com for some low-cost SCSI disks.  My experience has been that
S/P-ATA drives are easily available in large sizes (300 GB and up),
whereas SCSI seems to be available in volume only in smaller sizes,
~100-200 GB.


Above is mostly supposition.

I have been experimenting with SATA to see what's possible.  There are
gizmos out there, backplanes, that make the cabling issue easier:


I have one of these:
http://www.mwave.com/mwave/viewspec.hmx?scriteria=BA20689

And I'm considering one of these:
http://www.mwave.com/mwave/viewspec.hmx?scriteria=BA20690

Similar devices are available for SCSI and PATA drives, though they are a
little difficult to find.  You can google for backplane, 3X5, 2X3, that
type of thing.


I finally got gvinum to work for me under 6.1-RELEASE i386 for Raid 5.
The volume manager concept appeals to me because you can work with smaller
pieces of storage than whole disks.  So with the same set of physical
disks you can contemplate different RAID strategies, depending on how much
performance you want, all at the same time.  So far my benchmarks indicate
that a 3-partition raid 5 vinum volume performs fine for me.  Minimum
write performance is around 7 MB/s and minimum read around 14 MB/s, but
usually writes came in on the low side of 15 MB/s and reads around
50 MB/s.  This is all just a first attempt, without any effort to tune the
raid set.  With two 5X3 backplanes and software Raid 5 you could build a
4 TB system PDQ, and your drives would not have to be identical.
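For reference, the numbers above came from iozone-style runs; a minimal invocation (mount point and file size hypothetical, the size chosen to exceed RAM so caching doesn't flatter the result) looks like:

```sh
# auto mode: record and file sizes stepped automatically, capped at 512 MB
iozone -a -g 512m -f /r5vol/iozone.tmp
```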


Even with a backplane device though you end up with quite a cable issue.

The last option I've considered is SATA-to-SCSI backplanes.  There are
commercial solutions that let you put up to 12 SATA or PATA drives in an
enclosure and then connect to your host computer via SCSI.  I haven't
found anything cheap, though (cheap = under 20% of the drive cost).  Apple
sells such a device, as do numerous other manufacturers.  Search for SATA
Raid.


IJ


On May 11, 2006, at 7:51 PM, [EMAIL PROTECTED] wrote:




My questions that I'm posting is not really related towards the  
performance of
the system, it's more towards the capacity of the system... I guess  
it boils
down to the physical hardware... How does everything connect, how  
to expand
systems, and how to run arrays bigger than what one single  
controller can

provide...

--
C




Re: Advice on RAID?

2006-05-12 Thread Ian Jefferson

Robert,

I think I already sent out this link, which documents FreeBSD Raid 5
performance:

http://www25.big.or.jp/~jam/filesystem/

I recently saw an article with similar benchmarks using geom and vinum in
a Japanese FreeBSD magazine, and the Handbook section on vinum does warn
about the write performance of Raid 5.  For lots of applications, though,
Raid 5's low write performance is not an issue.  (It's not an issue for
me.)


Were you able to get gvinum raid 5 working? Could you share that  
experience?


I'd really like to use gvinum for raid 5 with a 3-SATA-drive + 2-IDE-drive
setup, but so far I have not been able to get it to work. :-(


IJ

On May 12, 2006, at 6:35 AM, Robert Fitzpatrick wrote:

I have looked into and tried FreeBSD 6.0 Vinum and GEOM RAID in our  
PIII
SCSI 80-pin server with the help of several here on the list. I'm  
pretty

much going to use GEOM RAID-1 for the system disks using Ralf's doc. I
have room for 3 more disks. Would you recommend using Vinum RAID-5 on
three 73GB drives or using GEOM RAID-1 again on 2 147GB drives?

If there is no big reason to use either over the other, we've  
decided to

go for the most space and RAID-5. But the amount of space we would be
gaining is probably less than 50GB, correct?

Or do you have another solution on our $700 budget. It is a debate  
here

and would like to get experienced insight.

Thanks in advance for your time!
--
Robert



Re: Software RAID guidance

2006-05-04 Thread Ian Jefferson
I have been unable to get vinum to work under 6.0.  I'm no expert though.

Vinum became gvinum in 6.0 and is implemented using geom.

Recently the gvinum man page has been updated; it is available in 6.1-RC1.

I think if you want mirroring only, you should consult the geom pages.  It
seems as though geom is the way of the future, but it does not currently
support R5, which is what I was looking for.

Somewhere out there is a pretty comprehensive set of iozone benchmarks
comparing Linux and BSD software Raid.  Ah, found it:
http://www25.big.jp/~jam/filesystem/old/

This might give you some ideas.

On Thu, 4 May 2006, Robert Fitzpatrick wrote:

 I have an old NT4 PIII here that has a pair Adaptec Array1000 Family
 controllers with 2 pairs of identical drives on one of them (2 IBM 9GB
 and 2 Seagate 35GB). From what I googled, *nix does not support the
 controller, so I have removed the RAID arrays and loaded FreeBSD 6.0
 onto the two IBM drives. Now, I wanted to mirror the other two for data
 and looking for guidance as to whether it is first of all suited for
 software RAID and if so, CCD or vinum. I am contemplating vinum because
 the handbook mentions CCD is when cost is the important factor and for
 me, is reliability. What would someone suggest? If vinum, one thing I
 don't quite understand is do I create the partitions to be used in the
 device? There doesn't seem to be a man for gvinum and the link to it in
 the handbook section 19.6.1 is broken.

 Thanks in advance.

 --
 Robert



Re: Problem creating DR bootable disk

2006-04-29 Thread Ian Jefferson
I just moved my 6.0-RELEASE install from one slice to another; the
procedure is similar.  You could look at my response to the thread on
expanding a partition, same idea.  So relocating a copy of 6.0 (at least)
works OK for booting.
What happens when you pull the raid card?

I'm guessing that the BIOS is ignored pretty early in the boot process.
Your disks might get renumbered somehow.

Choice of boot manager?

IJ

On Fri, 28 Apr 2006, Joe Gross wrote:

 I'm running FreeBSD 6.1-RC #0 with a generic kernel and an Asus A8V
 motherboard. I have three IDE disks in my system. Two are on a 3ware
 7200 RAID card in a RAID1 configuration. This is currently used to
 boot.

 The third disk is intended to be a DR disk, with a nightly script to
 mount, sync, change twed0 to ad0 in fstab, and unmount. If something
 untoward would happen to the main RAID, I could simply reset the boot
 list in the BIOS and boot off the DR disk that has an image from early
 that morning. This is also a handy way to shuffle the OS onto larger
 disks as I upgrade.

 I was using this in 4.10 with an Asus a7v133 board and fortunately
 never had to utilize the DR capability. It did pass tests for booting
 off the new disk and I did several disk upgrade shuffles over the
 years.

 For 4.10 the script I used to initialize the DR disk and add boot
 blocks was:

 #!/bin/sh
 dd if=/dev/zero of=/dev/ad0 bs=1k count=1
 fdisk -BI ad0
 disklabel -B -w -r ad0s1 auto
 disklabel -R -r ad0s1 disklabel.250
 disklabel -B -r ad0s1
 newfs -U -i 20480 /dev/ad0s1a

 This doesn't work with 6.0. When I try to boot off the secondary disk
 it gets through the initial loader and then spews what looks like a
 repeating register dump. Nothing short of a power cycle will kill
 it. I can't say exactly what it says since it's scrolling too fast to
 read.

 Everything I've read indicates the procedure hasn't changed in
 6.0. Any suggestions on where to look or what to try?

 Thanks for the help,

 Joe


Re: Problem creating DR bootable disk

2006-04-29 Thread Ian Jefferson

Hmmm

I'm probably making obvious suggestions, but I think I'd be inclined to do
a fresh install of the same OS version on the target disk and see if that
boots OK.


If it does then something in your mirror tools is the issue.
If it doesn't then it's a bootstrap problem.

Re: boot manager

I'm not a bootstrap expert, but I was thinking that GRUB seems pretty
flexible and might help in this case.  I have not installed it with
FreeBSD, but I have it on a FreeDOS/Linux machine.  There are others also.


IJ

On Apr 29, 2006, at 5:56 PM, Joe Gross wrote:


On Sat, Apr 29, 2006 at 04:08:36AM -0400, Ian Jefferson wrote:


What happens when you pull the raid card?


Same thing.


Choice of boot manager?


Doesn't the -B option just install the standard boot manager?

Joe


Re: How do you resize an existing partition / slice ?

2006-04-28 Thread Ian Jefferson
If you have extra disk space, it's fairly straightforward to use
dump/restore and re-partition.


I recently found myself desiring to re-slice my disk from a single  
slice to 5 slices.


The basics were to dump the contents of my root, var, and usr partitions
to 3 files on another disk.  I booted from a distribution CD and installed
a new OS on slice 2 with the default partitions.  Slice 1 became a 1 GB
partition for DOS, should the urge strike me later; slice 2 was a new OS;
and slice 3 was for the old/existing OS.


I relabeled my 2nd slice and restored the original OS to the various
partitions.  The only thing I forgot was to rename the partitions in
/etc/fstab on my old OS.  The change in slice number went like this:


/dev/ad0s1b    none    swap    sw    0    0
/dev/ad0s1a    /       ufs     rw    1    1
/dev/ad0s1e    /tmp    ufs     rw    2    2
/dev/ad0s1f    /usr    ufs     rw    2    2
/dev/ad0s1d    /var    ufs     rw    2    2

Became:

/dev/ad0s3b    none    swap    sw    0    0
/dev/ad0s3a    /       ufs     rw    1    1
/dev/ad0s3e    /tmp    ufs     rw    2    2
/dev/ad0s3f    /usr    ufs     rw    2    2
/dev/ad0s3d    /var    ufs     rw    2    2
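Mechanically, that slice rename is a one-liner with sed; a sketch (demonstrated here on a scratch file, since on the real system you would run the same substitution over a copy of /etc/fstab and inspect it before installing it):

```shell
# Demonstrate the rename on a sample fstab line in /tmp.
printf '/dev/ad0s1a\t/\tufs\trw\t1\t1\n/dev/ad0s1d\t/var\tufs\trw\t2\t2\n' > /tmp/fstab.old
# Rewrite slice 1 device names to slice 3.
sed 's|/dev/ad0s1|/dev/ad0s3|' /tmp/fstab.old > /tmp/fstab.new
cat /tmp/fstab.new
```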



In your case if you want just the original slice you would do  
something like:


boot from the live filesystem CD
fdisk your drive,
label (partition) your drive to the configuration you want
newfs each new partition,
mount the drive you used for backup
mount each of the new partitions of your original disk
Restore each of the partitions in turn.

Reboot from the original disk.

This looks a bit complex but is not too bad at all.  This dump/restore (or
even tar'ing filesystems) is something we used a long time ago to image
NeXT systems.  It's been quite reliable for me.  Your mileage may vary.
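Spelled out as commands, the steps above look roughly like this (a sketch only; the device names, the backup disk ad1s1e, and the dump file names are hypothetical):

```sh
# Booted from the live-filesystem CD:
fdisk -BI ad0                 # one FreeBSD slice spanning the disk
bsdlabel -w -B ad0s1          # write a standard label plus bootstrap
bsdlabel -e ad0s1             # edit in the partitions you want
newfs /dev/ad0s1a             # ...and newfs each new partition

mount /dev/ad1s1e /backup     # the disk holding the dump files
mount /dev/ad0s1a /mnt
cd /mnt && restore -rf /backup/root.dump
# repeat the mount/restore pair for /usr, /var, etc., then reboot
```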


IJ


On Apr 29, 2006, at 3:24 AM, Low Kian Seong wrote:


Dear all,

Like the subject shows, I would just like to know how do i resize an
existing partition or slice, ermm with minimum loss of data of course.

Thanks.


gvinum help under 6.0 release

2006-04-28 Thread Ian Jefferson

Hi folks,

Well, I think I've run out of ideas with gvinum.  I need some help.


I cannot get gvinum to work for me at all in setting up a raid5 set.  This
is the first FreeBSD gizmo I've run into that has proven dangerously
unreliable.  Each time I use it I get a panic, and one of these attempts
kept the machine from booting until I did a bsdlabel -B /dev/adxx for each
of the three drives.  (It just did it again.)


I was surprised that even a dd if=/dev/zero of=/dev/adxx would not clean
things up.  I read somewhere that geom does something to prevent
overwriting parts of the device.


I'm hoping someone will point out something I'm doing horribly wrong.

Synopsis:

The drives in question are ad4, ad8 and ad10, all identical disks.

One attempt was under amd64 6.0-RELEASE with this gvinum patch; I get
panics with i386 6.1-RC1 as well:

http://wikitest.freebsd.org/moin.cgi/GvinumMoveRename

bsd2# cat vinum_r5.config
drive a device /dev/ad4s1d
drive b device /dev/ad8s1d
drive c device /dev/ad10s1d

volume r5vol
  plex org raid5 15g
    sd length 5g drive a
    sd length 5g drive b
    sd length 5g drive c


bsd2# cat sdisk.bsdlabel
# for vinum configuration
# good for /dev/ad4s1, /dev/ad8s1 and /dev/ad10s1
#
# /dev/ad4s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a: 390721889       16   unused        0     0
  c: 390721905        0   unused        0     0   # "raw" part, don't edit
  d:       50g       17    vinum

bsd2# cat clean.sh
dd if=/dev/zero of=/dev/ad4 bs=1k count=100
fdisk -I ad4
bsdlabel -R /dev/ad4s1 sdisk.bsdlabel
dd if=/dev/zero of=/dev/ad8 bs=1k count=100
fdisk -I ad8
bsdlabel -R /dev/ad8s1 sdisk.bsdlabel
dd if=/dev/zero of=/dev/ad10 bs=1k count=100
fdisk -I ad10
bsdlabel -R /dev/ad10s1 sdisk.bsdlabel


The story so far:

I recently purchased a Gigabyte GA-8VT880P (VIA PT880 Pro chipset) and put
an Intel Celeron D 336 (EM64T) processor in it.  I suppose by today's
standards this is a pretty low-end board, but it's way fast for what I
need.


On this system the motherboard replaced an older AMD Athlon board.  I'm
using this system to study an upgrade path for a 4.7 system I'm running.
I stupidly decided to install the amd64 6.0-RELEASE; at this point I'm
still running the GENERIC kernel.  I say stupidly because the target
system is really going to run an i386 release of 5.x or 6.x.  Later I
repented, after gvinum frustration, re-sliced, and moved my original
install onto slice 2.


For disk, I have two PATA drives that I have been using for some time and
3 Samsung SATA drives.  All but one of the drives are attached to the
motherboard controller.  I have an add-on Buffalo SATA/PATA card (also
~cheap) with a VT6421L chipset.


I'll note that one of the SATA connections on the add-on board does not
seem to function correctly with the amd64 release.  I have not tested it
with my i386 RC1 yet.  However, I did run some default iozone tests on all
the working SATA and PATA drives, so I'm fairly confident that the working
SATA connection really is OK.


I also have a SATA backplane.  These are called various things, but the
basic idea is to put three 3.5-inch drives in two 5.25-inch half-height
external slots.  This one has hot-swap capability (dubious, IMHO).  For me
this is a convenient mechanical place to put disks.  Again, I'm confident
this part is OK, since I ran the iozone tests with the drives in this
enclosure.


What I am trying to do:

What I'm studying is how to put together a software raid5 volume  
and generally I like what I was reading about vinum as a flexible  
tool to manage space.  I'm not terribly concerned with IO performance  
since I'm still calibrated to 5MB/s sustained throughput and keep  
wondering why 2kb of code keeps getting repacked into 35MB's of  
bloat. (see grumble) above :-).


TIA

