Using ccd with zfs

2008-07-21 Thread Steven Schlansker

Hello -questions,
I have a FreeBSD ZFS storage system working wonderfully with 7.0.   
It's set up as three 3-disk RAIDZs -triplets of 500, 400, and 300GB  
drives.


I recently purchased three 750GB drives and would like to convert to  
using a RAIDZ2.  As ZFS has no restriping capabilities yet, I will  
have to nuke the zpool from orbit and make a new one.  I would like to  
verify my methodology against your experience to see if what I wish to  
do is reasonable:


I plan to first take 2 of the 750GB drives and make an unreplicated
1.5TB zpool as temporary storage.  Since ZFS doesn't seem to have
the ability to create zpools in degraded mode (with missing drives) I  
plan to use iSCSI to create two additional drives (backed by /dev/ 
zero) to fake having two extra drives, relying on ZFS's RAIDZ2  
protection to keep everything running despite the fact that two of the  
drives are horribly broken ;)


To make these 500, 400, and 300GB drives useful, I would like to
stitch them together using ccd.  I would use it as 500+300 = 800GB and
400+400 = 800GB.


That way, in the end I would have
750 x 3
500 + 300 x 3
400 + 400 x 1
400 + 200 + 200 x 1
as the members in my RAIDZ2 group.  I understand that this is slightly  
less reliable than having real drives for all the members, but I am  
not interested in purchasing 5 more 750GB drives.  I'll replace the  
drives as they fail.
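
For concreteness, I imagine the creation step looking roughly like this -
a sketch only, with hypothetical device names (ad8 being the one free
750GB drive, ccd0 through ccd4 the concatenated members, and fake0/fake1
whatever stands in for the two 750s still holding the temporary pool):

# zpool create tank raidz2 ad8 ccd0 ccd1 ccd2 ccd3 ccd4 fake0 fake1

followed later by one zpool replace per fake member as the real drives
free up.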


I am wondering if there are any logistical problems.  The three parts  
I am worried about are:


1) Are there any problems with using an iSCSI /dev/zero drive to fake  
drives for creation of a new zpool, with the intent to replace them  
later with proper drives?


2) Are there any problems with using CCD under zpool?  Should I stripe
or concatenate?  Will the startup scripts (either by design or, less
likely, intelligently) decide to start CCD before ZFS?  The zpool
should start without me interfering, correct?


3) I hear a lot about how you should use whole disks so ZFS can enable  
write caching for improved performance.  Do I need to do anything  
special to let the system know that it's OK to enable the write  
cache?  And persist across reboots?


Any other potential pitfalls?  Also, I'd like to confirm that there's
no way to do this with pure ZFS - I read the documentation but it
doesn't seem to have support for nesting vdevs (which would let me do
this without ccd).


Thanks for any information that you might be able to provide,
Steven Schlansker


Re: Using ccd with zfs

2008-07-21 Thread John Nielsen
On Tuesday 22 July 2008 12:18:31 am Steven Schlansker wrote:
 Hello -questions,
 I have a FreeBSD ZFS storage system working wonderfully with 7.0.
 It's set up as three 3-disk RAIDZs -triplets of 500, 400, and 300GB
 drives.

 I recently purchased three 750GB drives and would like to convert to
 using a RAIDZ2.  As ZFS has no restriping capabilities yet, I will
 have to nuke the zpool from orbit and make a new one.  I would like to
 verify my methodology against your experience to see if what I wish to
 do is reasonable:

 I plan to first take 2 of the 750GB drives and make an unreplicated
 1.5TB zpool as temporary storage.  Since ZFS doesn't seem to have
 the ability to create zpools in degraded mode (with missing drives) I
 plan to use iSCSI to create two additional drives (backed by /dev/
 zero) to fake having two extra drives, relying on ZFS's RAIDZ2
 protection to keep everything running despite the fact that two of the
 drives are horribly broken ;)

 To make these 500, 400, and 300GB drives useful, I would like to
 stitch them together using ccd.  I would use it as 500+300 = 800GB and
 400+400 = 800GB.

 That way, in the end I would have
 750 x 3
 500 + 300 x 3
 400 + 400 x 1
 400 + 200 + 200 x 1
 as the members in my RAIDZ2 group.  I understand that this is slightly
 less reliable than having real drives for all the members, but I am
 not interested in purchasing 5 more 750GB drives.  I'll replace the
 drives as they fail.

 I am wondering if there are any logistical problems.  The three parts
 I am worried about are:

 1) Are there any problems with using an iSCSI /dev/zero drive to fake
 drives for creation of a new zpool, with the intent to replace them
 later with proper drives?

I don't know about the iSCSI approach but I have successfully created a 
degraded zpool using md and a sparse file in place of the missing disk. 
Worked like a charm and I was able to transfer everything to the zpool 
before nuking the real device (which I had been using for temporary 
storage) and replacing the md file with it.

You can create a sparse file using dd:
dd if=/dev/zero of=sparsefile bs=512 seek=(size of the fake device in 
512-byte blocks) count=0

Turn it into a device node using mdconfig:
mdconfig -a -t vnode -f sparsefile

Then create your zpool using the /dev/md0 device (unless the mdconfig 
operation returns a different node number).

The size of the sparse file should not be bigger than the size of the real 
device you plan to replace it with. If using GEOM (which I think you 
should, see below), be sure to remember to subtract 512 bytes for each 
level of each provider (GEOM modules store their metadata in the last 
sector of each provider so that space is unavailable for use). To be on the 
safe side you can whack a few KB off.
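
To make that concrete: a nominal 750GB drive is 750,000,000,000 / 512 =
1,464,843,750 sectors (check the exact count of your real drives with
diskinfo -v), so shaving a little off, something like this (illustrative
numbers, not tested against your hardware):

dd if=/dev/zero of=/root/fake0 bs=512 seek=1464840000 count=0
mdconfig -a -t vnode -f /root/fake0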

You can't remove the fake device from a running zpool but the first time you 
reboot it will be absent and the zpool will come up degraded.
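
Once one of the real 750GB drives frees up, you would presumably swap it
in with zpool replace - roughly, assuming the pool is named tank, the
placeholder came up as md0 and the new disk is ad10:

zpool replace tank md0 ad10
zpool status tank    (and wait for the resilver to finish)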

 2) Are there any problems with using CCD under zpool?  Should I stripe
 or concatenate?  Will the startup scripts (either by design or, less
 likely, intelligently) decide to start CCD before ZFS?  The zpool
 should start without me interfering, correct?

I would suggest using gconcat rather than CCD. Since it's a GEOM module (and 
you will have remembered to load it via /boot/loader.conf) it will 
initialize its devices before ZFS starts. It's also much easier to set up 
than CCD. If you are concatenating two devices of the same size you could 
consider using gstripe instead, but think about the topology of your drives 
and controllers and the likely usage patterns your final setup will create 
to decide if that's a good idea.
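
The setup is only a couple of commands. A rough sketch with made-up
device names (two drives ad4 and ad6 to be concatenated):

kldload geom_concat
gconcat label -v pair0 /dev/ad4 /dev/ad6
echo 'geom_concat_load="YES"' >> /boot/loader.conf

after which the combined device shows up as /dev/concat/pair0 and can be
handed to zpool like any other disk.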

 3) I hear a lot about how you should use whole disks so ZFS can enable
 write caching for improved performance.  Do I need to do anything
 special to let the system know that it's OK to enable the write
 cache?  And persist across reboots?

Not that I know of. As I understand it, ZFS _assumes_ it's working with whole
disks, and since it uses its own I/O scheduler, performance can be degraded
for anything sharing a physical device with a ZFS slice.
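
If you want to check the ATA side anyway, write caching on ata(4) disks
is governed by the hw.ata.wc loader tunable, which defaults to on in
recent releases as far as I know:

sysctl hw.ata.wc
echo 'hw.ata.wc="1"' >> /boot/loader.conf    (to persist it explicitly)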

 Any other potential pitfalls?  Also, I'd like to confirm that there's
 no way to do this with pure ZFS - I read the documentation but it
 doesn't seem to have support for nesting vdevs (which would let me do
 this without ccd).

You're right, you can't do this with ZFS alone. Good thing FreeBSD is so 
versatile. :)

JN

 Thanks for any information that you might be able to provide,
 Steven Schlansker


problems with handbook instructions for creating a CCD on 6-RELEASE

2005-12-09 Thread Mike Hunter
Hi,

I ran into some trouble following the handbook to create a CCD on a server
I built recently.

I have 4 identical disks that I wanted to use in a CCD.  Problem 1 was
after labeling each disk per the instructions, I end up with an 'a' and a
'c' partition.  When I follow the instructions to create an all-encompassing
'e' partition, the partition editor complains that I am creating an
overlapping partition.  Removing the 'a' partition fixes the problem.

Then, after I run ccdconfig, disklabel -e ccd0 does not work, I have to
run disklabel -w ccd0 auto and get things going from there.

Finally, I'm a bit confused about soft-updates:

# mount
/dev/da0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/da0s1e on /tmp (ufs, local, soft-updates)
/dev/da0s1f on /usr (ufs, local, soft-updates)
/dev/da0s1d on /var (ufs, local, soft-updates)
/dev/ccd0c on /data3 (ufs, local)
/dev/da1s1e on /data4 (ufs, local)

Ok, so softupdates sounds cool, but it's off on ccd0... why?  It's treated
the same in /etc/fstab:

# cat /etc/fstab 
# Device        Mountpoint      FStype  Options         Dump    Pass#
/dev/da0s1b     none            swap    sw              0       0
/dev/da0s1a     /               ufs     rw              1       1
/dev/da0s1e     /tmp            ufs     rw              2       2
/dev/da0s1f     /usr            ufs     rw              2       2
/dev/da0s1d     /var            ufs     rw              2       2
/dev/ccd0c      /data3          ufs     rw              2       3
/dev/da1s1e     /data4          ufs     rw              2       4
/dev/cd0        /cdrom          cd9660  ro,noauto       0       0

Please cc me as I'm not on -questions.

Thanks!

Mike


Re: gmirror, gvinum or ccd to mirror root-filesystem under 6.0R

2005-11-17 Thread Peter Schuller
 i plan to install 6.0-R in near future and ask myself if i should use
 gmirror, ccd or gvinum (again) for software-raid for mirroring the root
 file-system, as to:
 - reliability, stability issues
 - performance issues
 - minimum installation/configuration effort 
 - advantages / disadvantages of gmirror vs. ccd vs. gvinum
 
 what are the experiences here ?

Personally I currently do not trust vinum at all (any and all of my
edge case tests / simulated hardware failures have turned into
disasters). ccd I haven't tried, but I have set up root-on-gmirror on
at least three machines so far.

I am very happy with gmirror; I have only observed two major problems
so far.

Firstly, geom/geom_mirror seems to obtain an exclusive open of the
drive. This makes it a royal pain to update the boot sector of a drive
while the system is booted with geom having claimed the device (and it
doesn't help that boot0cfg does not report the error properly, and the
patch I sent has been ignored so far).

Secondly, on at least one occasion, the total failure of a mirror
(a rebuild test where the drive being rebuilt FROM had a bad sector)
resulted in a kernel panic. The filesystem was mounted at the time,
so I presume this isn't a problem with geom_mirror per se, but
rather has to do with an attempt to access a destroyed geom or similar.

(This wasn't the root filesystem btw - if it was the root filesystem
then the system has a right to panic :))

-- 
/ Peter Schuller, InfiDyne Technologies HB

PGP userID: 0xE9758B7D or 'Peter Schuller [EMAIL PROTECTED]'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org



Re: gmirror, gvinum or ccd to mirror root-filesystem under 6.0R

2005-11-17 Thread Peter Clutton
On 11/17/05, Reinhard [EMAIL PROTECTED] wrote:
 currently i use gvinum under 5.4-R to mirror (raid-1) my
 root-file-system. works nice but was a little bit
 complicate/nasty to setup (
 i plan to install 6.0-R in near future and ask myself if i should use
 gmirror, ccd or gvinum (again) for software-raid for mirroring the root
 file-system, as to:
 - reliability, stability issues
 - performance issues
 - minimum installation/configuration effort
 - advantages / disadvantages of gmirror vs. ccd vs. gvinum

From what I know, vinum is very powerful, and currently has the most
extensive set of tools. However, for a simple root RAID-1, I find
gmirror simple, straightforward, and pretty much error free.
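
For what it's worth, the basic setup is only a few commands. A sketch,
assuming disks ad0 (already carrying the system) and ad2, a mirror named
gm0, and the usual caveat that gmirror keeps its metadata in the last
sector of the disk, which must therefore be unused:

# gmirror label -v -b round-robin gm0 /dev/ad0
# echo 'geom_mirror_load="YES"' >> /boot/loader.conf

then reboot, point /etc/fstab at /dev/mirror/gm0s1* instead of ad0s1*,
and attach the second disk:

# gmirror insert gm0 /dev/ad2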


gmirror, gvinum or ccd to mirror root-filesystem under 6.0R

2005-11-16 Thread Reinhard

hi list

Currently I use gvinum under 5.4-R to mirror (RAID-1) my
root file system. It works nicely but was a little bit
complicated/nasty to set up (basically it was the procedure described
at http://devel.reinikainen.net/docs/how-to/Vinum/#Chapter3.2 )

I plan to install 6.0-R in the near future and am asking myself whether
I should use gmirror, ccd or gvinum (again) as software RAID for mirroring
the root file system, as to:
- reliability, stability issues
- performance issues
- minimum installation/configuration effort 
- advantages / disadvantages of gmirror vs. ccd vs. gvinum

What are the experiences here?

thanks for answers,
~reinhard

-- 
My mother drinks to forget she drinks.
-- Crazy Jimmy


Configuring ccd during install

2005-11-14 Thread Ben Siemon
I have an older machine I got from work that has several identical
SCSI drives that I want to merge into one and mount as /home. Can
this be done during install? If not, how do I tell it to mount the new
thing I create as /home after the initial install? I have read the RAID
explanation on using ccd and that all makes sense; I just do not see
how to mount the thing created with ccd in a useful way outside of
/home/newDisks.
--
cheers

Ben Siemon

cs.baylor.edu/~siemon


Re: 5.4 kernel ccd driver

2005-06-29 Thread Kris Kennaway
On Sat, Jun 25, 2005 at 07:32:24PM -0700, Casey Scott wrote:
 Has ccd driver support been removed from the 5.4 kernel? Below caused me to 
 ask the question. 
 
 ccdconfig ccd0c 1 0 ad2e ad3e
 ccdconfig: Provider not found
 or possibly kernel and ccdconfig out of sync

No, but it may not be compiled into your kernel.  Check the kernel
config and load a module if necessary.
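
For example, on 5.x (module name from memory, so double-check):

# kldstat | grep ccd
# kldload geom_ccd

or make sure the kernel config contains device ccd (pseudo-device ccd
on 4.x) and rebuild.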

Kris





5.4 kernel ccd driver

2005-06-25 Thread Casey Scott
Has ccd driver support been removed from the 5.4 kernel? Below caused me to 
ask the question. 

ccdconfig ccd0c 1 0 ad2e ad3e
ccdconfig: Provider not found
or possibly kernel and ccdconfig out of sync


Casey


ccd usage

2005-06-23 Thread Dan Z
Greetings,

I'm planning a new install and my question regards the usage of ccd. 
I have two disks of 30G and 20G.  Is it possible to use ccd to create
a single /usr partition across these two disks?  How might this be
done?  Can it be done from the sysinstall menu off the boot disk or
will I need to do some toying around after initial install is
completed?

Also, while not part of the ccd question, if I'm not mistaken, I can
create multiple swap partitions to spread swap usage across multiple
drives.  Is this true?

Thanks in advance.

Dan Z.


Re: ccd usage

2005-06-23 Thread Dmitry Mityugov
On 6/23/05, Dan Z [EMAIL PROTECTED] wrote:
 Greetings,
 
 I'm planning a new install and my question regards the usage of ccd.
 I have two disks of 30G and 20G.  Is it possible to use ccd to create
 a single /usr partition across these two disks?  How might this be
 done?  Can it be done from the sysinstall menu off the boot disk or
 will I need to do some toying around after initial install is
 completed?
 
 Also, while not part of the ccd question, if I'm not mistaken, I can
 create multiple swap partitions to spread swap usage across multiple
 drives.  Is this true?

Yes, this is true. From
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/configtuning-initial.html:

On larger systems with multiple SCSI disks (or multiple IDE disks
operating on different controllers), it is recommended that swap be
configured on each drive (up to four drives). The swap partitions
should be approximately the same size. The kernel can handle arbitrary
sizes but internal data structures scale to 4 times the largest swap
partition. Keeping the swap partitions near the same size will allow
the kernel to optimally stripe swap space across disks.
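
For example, the fstab entries for swap on two disks would look like
this (device names hypothetical):

/dev/ad0s1b     none    swap    sw      0       0
/dev/ad2s1b     none    swap    sw      0       0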

Hopefully your first question will be answered by somebody else.

-- 
Dmitry

We live less by imagination than despite it - Rockwell Kent, N by E


Re: ccd usage

2005-06-23 Thread Anatoliy Dmytriyev

Dan Z wrote:


Greetings,

I'm planning a new install and my question regards the usage of ccd. 
I have two disks of 30G and 20G.  Is it possible to use ccd to create

a single /usr partition across these two disks?  How might this be
done?  Can it be done from the sysinstall menu off the boot disk or
will I need to do some toying around after initial install is
completed?



http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/raid.html
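
In short, something along these lines - an untested sketch with
hypothetical device names, done after the initial install since
sysinstall itself cannot build a ccd:

# ccdconfig ccd0 0 0 /dev/ad0e /dev/ad1e    (interleave 0 concatenates,
                                             if memory serves)
# disklabel -r -w ccd0c auto
# newfs /dev/ccd0c
# ccdconfig -g > /etc/ccd.conf

then mount /dev/ccd0c somewhere temporary, copy the existing /usr over,
and point the /usr line in /etc/fstab at /dev/ccd0c.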



--
Anatoliy Dmytriyev
[EMAIL PROTECTED]




Corrupted Disk? [Was: Re: FreeBSD 5.x CCD]

2004-11-18 Thread Gerard Samuel
Gerard Samuel wrote:
Kris Kennaway wrote:
On Wed, Nov 17, 2004 at 09:22:51PM -0500, Gerard Samuel wrote:
 

Well you just burst my bubble.
I was hoping I was missing a node.  Im trying to figure out a 
problem Im
having -
http://lists.freebsd.org/pipermail/freebsd-questions/2004-November/064973.html 

Thanks
  

Just mount /dev/ccd0 instead of /dev/ccd0c since the latter refers to
the entire disk anyway.
Here is what I did...
1.  Unconfigured ccd
hivemind# ccdconfig -U -f /etc/ccd.conf
2.  Reconfigured it
hivemind# ccdconfig -C -f /etc/ccd.conf
3.  Tried mount the drive the way you recommended, and got
hivemind# mount /dev/ccd0 /storage
mount: /dev/ccd0: Operation not permitted
Im going to try googling to see what I can find out.
But if anyone knows why I cannot mount this, then by all means,
let me know.
Thanks 
I noticed this in the logs when I was trying to mount the disk -
Nov 18 09:05:20 hivemind kernel: WARNING: R/W mount of /storage denied.  
Filesystem is not clean - run fsck

I ran fsck a few times, but it couldn't fix the drive.
hivemind# fsck -t ffs -y /dev/ccd0
--snip--
** Phase 2 - Check Pathnames
MISSING '.'  I=896624  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:18 2004
DIR=?
CANNOT FIX, FIRST ENTRY IN DIRECTORY CONTAINS mesg.27.2.jar
MISSING '..'  I=896624  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:18 2004
DIR=?
CANNOT FIX, SECOND ENTRY IN DIRECTORY CONTAINS mesg.28.2.jar
DIRECTORY CORRUPTED  I=941361  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941361  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
FIX? yes
DIRECTORY CORRUPTED  I=941362  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941362  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
FIX? yes
DIRECTORY CORRUPTED  I=941365  OWNER=nobody MODE=40755
SIZE=1024 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941365  OWNER=nobody MODE=40755
SIZE=1024 MTIME=Jun 11 23:19 2004
DIR=?
FIX? yes
DIRECTORY CORRUPTED  I=941393  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941393  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
CANNOT FIX, FIRST ENTRY IN DIRECTORY CONTAINS
fsck_ffs: inoinfo: inumber -2115204267 out of range
My question: if fsck cannot repair a drive, does it mean that all hope
is lost?
Thanks


Re: Corrupted Disk? [Was: Re: FreeBSD 5.x CCD]

2004-11-18 Thread Gerard Samuel
Gerard Samuel wrote:
Gerard Samuel wrote:
Kris Kennaway wrote:
On Wed, Nov 17, 2004 at 09:22:51PM -0500, Gerard Samuel wrote:
 

Well you just burst my bubble.
I was hoping I was missing a node.  Im trying to figure out a 
problem Im
having -
http://lists.freebsd.org/pipermail/freebsd-questions/2004-November/064973.html 

Thanks
  

Just mount /dev/ccd0 instead of /dev/ccd0c since the latter refers to
the entire disk anyway.
Here is what I did...
1.  Unconfigured ccd
hivemind# ccdconfig -U -f /etc/ccd.conf
2.  Reconfigured it
hivemind# ccdconfig -C -f /etc/ccd.conf
3.  Tried mount the drive the way you recommended, and got
hivemind# mount /dev/ccd0 /storage
mount: /dev/ccd0: Operation not permitted
Im going to try googling to see what I can find out.
But if anyone knows why I cannot mount this, then by all means,
let me know.
Thanks 

I noticed this in the logs when I was trying to mount the disk -
Nov 18 09:05:20 hivemind kernel: WARNING: R/W mount of /storage 
denied.  Filesystem is not clean - run fsck

I ran fsck a few times, but it couldn't fix the drive.
hivemind# fsck -t ffs -y /dev/ccd0
--snip--
** Phase 2 - Check Pathnames
MISSING '.'  I=896624  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:18 2004
DIR=?
CANNOT FIX, FIRST ENTRY IN DIRECTORY CONTAINS mesg.27.2.jar
MISSING '..'  I=896624  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:18 2004
DIR=?
CANNOT FIX, SECOND ENTRY IN DIRECTORY CONTAINS mesg.28.2.jar
DIRECTORY CORRUPTED  I=941361  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941361  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
FIX? yes
DIRECTORY CORRUPTED  I=941362  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941362  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
FIX? yes
DIRECTORY CORRUPTED  I=941365  OWNER=nobody MODE=40755
SIZE=1024 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941365  OWNER=nobody MODE=40755
SIZE=1024 MTIME=Jun 11 23:19 2004
DIR=?
FIX? yes
DIRECTORY CORRUPTED  I=941393  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
SALVAGE? yes
MISSING '.'  I=941393  OWNER=nobody MODE=40755
SIZE=512 MTIME=Jun 11 23:19 2004
DIR=?
CANNOT FIX, FIRST ENTRY IN DIRECTORY CONTAINS
fsck_ffs: inoinfo: inumber -2115204267 out of range
My question: if fsck cannot repair a drive, does it mean that all
hope is lost?
Thanks 

Ah, forget it.
I bit the bullet and newfs'ed the ccd, and now it mounts without any
complaints.
So all that data went to /dev/null.
Note to self (and hopefully to others):  Back up your ccd arrays before
reinstalling the OS...


Re: Getting back my CCD Raid

2004-11-17 Thread Gerard Samuel
Gerard Samuel wrote:
I had a ccd raid 0 drive setup under 4.10.
I did a fresh install of 5.3, with the thought, that I could just
reenable the settings for the ccd drive, to bring it back to life
with its data intact.
1.  Added "device ccd" to the kernel and rebuilt it.
2.  Verified that the disklabels are intact for the drives ad0/ad2
# /dev/ad0:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 53464320        0    unused        0     0    # raw part, don't edit
  e: 53464320        0    4.2BSD        0     0     0

# /dev/ad2:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 53464320        0    unused        0     0    # raw part, don't edit
  e: 53464320        0    4.2BSD        0     0     0

3.  Ran ccdconfig ccd0 32 0 /dev/ad0e /dev/ad2e
4.  Ran ccdconfig -g > /etc/ccd.conf
5.  Try mounting the ccd with mount /dev/ccd0c /storage and I get
mount: /dev/ccd0c: No such file or directory
The device does exist -
hivemind# ls -al /dev/ccd*
crw-r-  1 root  operator4,  49 Nov 16 19:16 /dev/ccd0
I even tried configuring the drive before mounting but -
hivemind# ccdconfig -C
ccdconfig: Unit 0 already configured
or possibly kernel and ccdconfig out of sync
Could someone point out to me, what Im doing wrong?
Or is it even possible to achieve the results that Im looking for?
Should I be reconstructing the raid from scratch, deleting the data on 
them?

Thanks 

Any other ideas???
Thanks


FreeBSD 5.x CCD

2004-11-17 Thread Gerard Samuel
This is to anyone who is successfully running a CCD raid under
5.x.
I want to compare your ccd* device nodes under /dev to what I have.
This is what I have.
hivemind# ls -al /dev/ccd*
crw-r-  1 root  operator4,  38 Nov 17 10:53 /dev/ccd0
I want to see if I'm missing the ccd0c node.
Thanks for your time.


Re: FreeBSD 5.x CCD

2004-11-17 Thread Kris Kennaway
On Wed, Nov 17, 2004 at 12:22:57PM -0500, Gerard Samuel wrote:
 This is to anyone who is successfully running a CCD raid under
 5.x.
 I want to compare your ccd* device nodes under /dev to what I have.
 This is what I have.
 hivemind# ls -al /dev/ccd*
 crw-r-  1 root  operator4,  38 Nov 17 10:53 /dev/ccd0
 
 I want to see if Im missing the ccd0c node.
 Thanks for your time.

I have 

$ ls -l /dev/ccd*
crw-r-  1 root  operator4,  28 Oct 28 19:14 /dev/ccd0
crw-r-  1 root  operator4,  29 Oct 28 19:14 /dev/ccd1

which I use for swap and a single fs partition, respectively.

Kris





Re: FreeBSD 5.x CCD

2004-11-17 Thread Gerard Samuel
Kris Kennaway wrote:
On Wed, Nov 17, 2004 at 12:22:57PM -0500, Gerard Samuel wrote:
 

This is to anyone who is successfully running a CCD raid under
5.x.
I want to compare your ccd* device nodes under /dev to what I have.
This is what I have.
hivemind# ls -al /dev/ccd*
crw-r-  1 root  operator4,  38 Nov 17 10:53 /dev/ccd0
I want to see if Im missing the ccd0c node.
Thanks for your time.
   

I have 

$ ls -l /dev/ccd*
crw-r-  1 root  operator4,  28 Oct 28 19:14 /dev/ccd0
crw-r-  1 root  operator4,  29 Oct 28 19:14 /dev/ccd1
which I use for swap and a single fs partition, respectively.
Well you just burst my bubble.
I was hoping I was missing a node.  I'm trying to figure out a problem I'm
having -
http://lists.freebsd.org/pipermail/freebsd-questions/2004-November/064973.html
Thanks


Re: FreeBSD 5.x CCD

2004-11-17 Thread Kris Kennaway
On Wed, Nov 17, 2004 at 09:22:51PM -0500, Gerard Samuel wrote:

 Well you just burst my bubble.
 I was hoping I was missing a node.  Im trying to figure out a problem Im
 having -
 http://lists.freebsd.org/pipermail/freebsd-questions/2004-November/064973.html
 Thanks

Just mount /dev/ccd0 instead of /dev/ccd0c since the latter refers to
the entire disk anyway.




Re: FreeBSD 5.x CCD

2004-11-17 Thread Gerard Samuel
Kris Kennaway wrote:
On Wed, Nov 17, 2004 at 09:22:51PM -0500, Gerard Samuel wrote:
 

Well you just burst my bubble.
I was hoping I was missing a node.  Im trying to figure out a problem Im
having -
http://lists.freebsd.org/pipermail/freebsd-questions/2004-November/064973.html
Thanks
   

Just mount /dev/ccd0 instead of /dev/ccd0c since the latter refers to
the entire disk anyway.
Here is what I did...
1.  Unconfigured ccd
hivemind# ccdconfig -U -f /etc/ccd.conf
2.  Reconfigured it
hivemind# ccdconfig -C -f /etc/ccd.conf
3.  Tried mount the drive the way you recommended, and got
hivemind# mount /dev/ccd0 /storage
mount: /dev/ccd0: Operation not permitted
I'm going to try googling to see what I can find out.
But if anyone knows why I cannot mount this, then by all means,
let me know.
Thanks


Getting back my CCD Raid

2004-11-16 Thread Gerard Samuel
I had a ccd raid 0 drive setup under 4.10.
I did a fresh install of 5.3, with the thought, that I could just
reenable the settings for the ccd drive, to bring it back to life
with its data intact.
1.  Added "device ccd" to the kernel and rebuilt it.
2.  Verified that the disklabels are intact for the drives ad0/ad2
# /dev/ad0:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 53464320        0    unused        0     0    # raw part, don't edit
  e: 53464320        0    4.2BSD        0     0     0

# /dev/ad2:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 53464320        0    unused        0     0    # raw part, don't edit
  e: 53464320        0    4.2BSD        0     0     0

3.  Ran ccdconfig ccd0 32 0 /dev/ad0e /dev/ad2e
4.  Ran ccdconfig -g > /etc/ccd.conf
5.  Try mounting the ccd with mount /dev/ccd0c /storage and I get
mount: /dev/ccd0c: No such file or directory
The device does exist -
hivemind# ls -al /dev/ccd*
crw-r-  1 root  operator4,  49 Nov 16 19:16 /dev/ccd0
I even tried configuring the drive before mounting but -
hivemind# ccdconfig -C
ccdconfig: Unit 0 already configured
or possibly kernel and ccdconfig out of sync
Could someone point out to me what I'm doing wrong?
Or is it even possible to achieve the results that I'm looking for?
Should I be reconstructing the raid from scratch, deleting the data on them?
Thanks


Re: Getting back my CCD Raid

2004-11-16 Thread Olivier Nicole
 mount: /dev/ccd0c: No such file or directory
 hivemind# ls -al /dev/ccd*
 crw-r-  1 root  operator4,  49 Nov 16 19:16 /dev/ccd0

It seems that the device does not exist, rather :)

You have /dev/ccd0 but not /dev/ccd0c!

First of all I'd try MAKEDEV

Olivier


Re: Getting back my CCD Raid

2004-11-16 Thread Gerard Samuel
Olivier Nicole wrote:
mount: /dev/ccd0c: No such file or directory
hivemind# ls -al /dev/ccd*
crw-r-  1 root  operator4,  49 Nov 16 19:16 /dev/ccd0
   

It seems that the device does not exist, rather :)
You have /dev/ccd0 but not /dev/ccd0c!
First of all I'd try MAKEDEV
I'm running 5.3 (maybe that wasn't clear in the original email).
I shouldn't have to run MAKEDEV.


Disk I/O Perforamnce with CCD

2004-10-19 Thread Kristofer Pettijohn

I'm looking for some suggestions on I/O performance.

I'm using FreeBSD 4.10-RELEASE on a Usenet transit server running
Diablo for the transit software.

I have 4 Seagate ST373435LC SCSI drives, 70GB each, and I am using
CCD to bind them together with RAID-0 stripes.

I can pull in anywhere from 30-40 MB/sec and push out ~8-15 MB/sec,
averaging about 50 MB/sec throughput.  Feeds coming in are coming
in just fine, but sending stuff back out is lagging behind; it's
falling about a half hour behind every hour.

I've used tunefs to set the average file size to 20 MB and enabled
soft-updates, as these are generally larger binary files that just
get appended to, and then seeked later on to send the article out.
I've played with setting the stripe size anywhere between 8MB
and 64MB, and did not see much change in performance between those.

Maybe I'm just missing something small, but on these SCSI drives
which have 160 MB/s transfer rates, I'm expecting a bit more than
I'm getting with CCD.
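
For reference, the invocation I mean looks like this (partition names
illustrative) - and note that ccd's interleave is counted in 512-byte
sectors, so e.g. 128 here would be a 64KB stripe:

# ccdconfig ccd0 128 0 /dev/da0s1e /dev/da1s1e /dev/da2s1e /dev/da3s1e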

Can someone give me any pointers to look at or suggestions of things
to try?

Thanks!

Kristofer



3+ TB Storage... CCD, growfs, etc...

2004-10-19 Thread dag vilmar tveit

Did you get FreeBSD working with large disks?


Re: 3+ TB Storage... CCD, growfs, etc...

2004-10-19 Thread Jeremy Faulkner
On Mon, 2003-10-20 at 00:12, dag vilmar tveit wrote:
 Did you get the correct year set in FreeBSD?

 Did you get FreeBSD working with large disks?

Keep an eye on: http://www.freebsd.org/projects/bigdisk/ for more
information on that work.
-- 
Jeremy Faulkner [EMAIL PROTECTED]




Installing 4.9-STABLE on 2 HDDs w/ ccd

2004-03-01 Thread Alex Soares de Moura
Hello,

Please Cc: my email, as I'm not on the list.

I'd like to know if it's possible to do a fresh FreeBSD-4.9-STABLE install
on filesystems on a ccd device over 2 SCSI HDDs. The system is an
IBM Netfinity 3500, with an Adaptec controller and IBM HDDs.
I've been trying to accomplish the task using the reference below, without
success:
FreeBSD Handbook - RAID
http://www.freebsdsystems.com/handbook/raid.html
I'm trying to avoid Vinum for 2 reasons: first, it seems that it can't
hold the / filesystem, and second, it's overkill for my needs.
Any tips are appreciated.

Thank you very much,

Alex


Re: RAID (ccd/vinum) with 2 harddisks

2004-02-23 Thread Tony Frank
Hi,

On Mon, Feb 23, 2004 at 08:32:40AM +0100, Axel S. Gruner wrote:
 Hi.
 
 Greg 'groggy' Lehey wrote:
 On Monday, 23 February 2004 at  8:27:33 +0100, Axel S. Gruner wrote:
 
 Hi.
 
 I want to use RAID on a IBM xSeries 345 with two harddisks.
 So the problem is, i can not setup a RAID System via Installation of
 FreeBSD (it will be 4.9 or maybe 5.2) and i do not have a System
 harddisk so that i can use the other ones as a RAID System.
  Is there a way to setup RAID (CCD or Vinum) on a System with 2
  harddisks (mirroring or striping)?
 
 
 Yes.  It's described in the man pages.
 
 Oh boy, i need a real large cup of coffee, or some glasses.

In addition to the man pages I found these resources of particular use:

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html
and
http://www.vinumvm.org/cfbsd/vinum.pdf

From my own experience it can be done without much trouble.
I burnt myself a few times in the process but that was mostly user fault.

Regards,

Tony


RAID (ccd/vinum) with 2 harddisks

2004-02-22 Thread Axel S. Gruner
Hi.

I want to use RAID on an IBM xSeries 345 with two hard disks.
The problem is, I cannot set up a RAID system via the FreeBSD
installation (it will be 4.9 or maybe 5.2), and I do not have a
separate system hard disk so that I could use the other ones as a
RAID system.
Is there a way to set up RAID (CCD or Vinum) on a system with 2
hard disks (mirroring or striping)?

I would appreciate any suggestions.

asg


Re: RAID (ccd/vinum) with 2 harddisks

2004-02-22 Thread Greg 'groggy' Lehey
On Monday, 23 February 2004 at  8:27:33 +0100, Axel S. Gruner wrote:
 Hi.

 I want to use RAID on a IBM xSeries 345 with two harddisks.
 So the problem is, i can not setup a RAID System via Installation of
 FreeBSD (it will be 4.9 or maybe 5.2) and i do not have a System
 harddisk so that i can use the other ones as a RAID System.
 Is there a way to setup RAID (CCD or Vinum) on a System with 2
 harddisks (mirroring or striping)?

Yes.  It's described in the man pages.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
Note: I discard all HTML mail unseen.
Finger [EMAIL PROTECTED] for PGP public key.
See complete headers for address and phone numbers.




Re: RAID (ccd/vinum) with 2 harddisks

2004-02-22 Thread Axel S. Gruner
Hi.

Greg 'groggy' Lehey wrote:
On Monday, 23 February 2004 at  8:27:33 +0100, Axel S. Gruner wrote:

Hi.

I want to use RAID on a IBM xSeries 345 with two harddisks.
So the problem is, i can not setup a RAID System via Installation of
FreeBSD (it will be 4.9 or maybe 5.2) and i do not have a System
harddisk so that i can use the other ones as a RAID System.
Is there a way to setup RAID (CCD or Vinum) on a System with 2
harddisks (mirroring or striping)?


Yes.  It's described in the man pages.
Oh boy, I need a really large cup of coffee, or some glasses.
Thanks Greg.
asg


ccd recipe for extending a filesystem

2003-11-16 Thread David Muir Sharnoff

Hi.  I've come up with a recipe to grow a filesystem.  I would
like to do several things here:

1.  Share the recipe so others can use it
2.  Have someone verify my recipe -- I think it works, but
another set of eyes would help
3.  Ask if anyone knows a reason this won't work.

Anyway... 

This is not for those who want clean systems or make mistakes when
typing...  To make a filesystem bigger (I'm assuming a normal
filesystem here) like /foobar mounted on /dev/sd3d
   
1. make two new partitions. The first should be 32 blocks long (call
   it /dev/sd4e). The second should be the new space you want for
   your filesystem (16 blocks will be overhead) (call it /dev/sd4f)
   # disklabel -e sd4
2. zero the new small partition
   # dd if=/dev/zero of=/dev/sd4e
3. unmount the filesystem
4. capture the first 16 blocks of the filesystem:
   # dd if=/dev/sd3d of=/root/x count=16
5. make a ccd partition that spans the three partitions:
   # ccdconfig -c ccd2 0 none /dev/sd4e /dev/sd3d /dev/sd4f
   # disklabel -r -w ccd2c auto
   # disklabel ccd2c
6. back up the new disklabel
   # disklabel ccd2c > /root/y
7. restore the captured 16 blocks:
   # dd if=/root/x of=/dev/ccd2c
8. unfortunately the last command clobbered the disklabel; restore it:
   # disklabel -R -r ccd2c /root/y
   # disklabel ccd2c
9. fsck and mount the partition to make sure it's okay
10. # ccdconfig -g > /etc/ccd.conf
11. change /etc/fstab: /dev/sd3d becomes /dev/ccd2c
12. unmount the partition
13. use growfs to expand it
14. fsck and mount the partition to make sure it's okay

   I think making a backup first might be a good idea :-)

   If you are trying to grow a ccd partition that is set up like what you
   get after following the above instructions, it's easier, but not as
   much easier as you might expect.

1. make the new partition for the additional space (remember 16 blocks
   will be overhead). Call it /dev/sd5h.
   # disklabel -e sd5
2. unmount the filesystem
3. capture the first 16 blocks of the filesystem:
   # dd if=/dev/ccd2c of=/root/x count=16
4. zero the original small partition
   # dd if=/dev/zero of=/dev/sd4e
5. make a ccd partition that spans all the partitions:
   # ccdconfig -c ccd2 0 none /dev/sd4e /dev/sd3d /dev/sd4f /dev/sd5h
   # disklabel -r -w ccd2c auto
   # disklabel ccd2c
6. back up the new disklabel
   # disklabel ccd2c > /root/y
7. restore the captured 16 blocks:
   # dd if=/root/x of=/dev/ccd2c
8. unfortunately the last command clobbered the disklabel; restore it:
   # disklabel -R -r ccd2c /root/y
   # disklabel ccd2c
9. fsck and mount the partition to make sure it's okay
10. ccdconfig -g > /etc/ccd.conf
11. unmount the partition
12. use growfs to expand it
13. fsck and mount the partition to make sure it's okay
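
For steps 11-13 the actual commands are simply (with the ccd unmounted):

   # growfs /dev/ccd2c
   # fsck /dev/ccd2c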

-Dave


Re: 3+ TB Storage... CCD, growfs, etc...

2003-06-17 Thread Kevin Marcus
 Hi all,

 I am looking at the promise ultratrak RM 15000

 (http://www.promise.com/product/product_detail_eng.asp?productId=109&familyId=6)
 RAID appliance with a 3TB disk configuration. This box connects
 to the host with a SCSI 160 interface which is no problem, and as I
 understand it UFS2 is 64 bit so I am not constrained by a 2TB
 filesystem limit. The smallest size file on this box will be 11GB, and
 there will be lots of them.

 My questions are.

 1) What is the maximum filesystem size with UFS2?
 Are there any special tuning parameters that I should be aware of that
 will better optimize the disk?

 2) How much CPU/Ram would be suggested per TB of disk attached?

 3) If I wanted to eventually strip two+ of these external boxes what
 would I need to do? Given this configuration would Vinum or CCD be
 better? Why?

 Oh... and this will be running Samba to serve these files to windows
 pc's over 1Gb copper ethernet.

I can't speak to the exact hardware you're using, although I can speak to
my experiences with large filesystems.  I primarily use Adaptec RAID cards
directly connected to their disks.  This is also somewhat nice since, for
the FreeBSD 4.x series, Adaptec also has a utility ('raidutil') which
allows you to monkey around with the RAID right from the OS instead of the
POST/BIOS.  These utilities do not work on FreeBSD 5.0 or 5.1, even if you
have the proper shared libs, so that is moot.

So to try and answer these questions: 
1) You will likely have problems with fdisk and disklabel as soon as you
try to get filesystems over 2TB because of int32 overflows.  I don't even
want to think of what fsck would do on a non-64-bit system.

2) Uh, as much as possible?  There are various limits around the sizes of
a single block of memory that can be allocated, although these are somewhat
easy to tweak.  Older versions of FreeBSD could only use 2-4G of RAM with
some kernel hacking *AND* disabling swapping altogether.  I am not sure
of the current state of affairs there - Matt Dillon could probably tell
you, though.

3) can't help you there

I have more recently become concerned with FreeBSD's ability to simply
have more than 2TB of disk actually attached.  I recently acquired a
system with about 2.1TB of disk and I have tried again, various versions,
all of which have the same problem as shown below:
-
ad0: 190782MB WDC WD2000JB-32EVA0 [387621/16/63] at ata0-master UDMA100
da1 at asr1 bus 0 target 0 lun 0
da1: ADAPTEC RAID-0 380E Fixed Direct Access SCSI-2 device
da1: Tagged Queueing Enabled
da1: 840086MB (1720496128 512 byte sectors: 255H 63S/T 41559C)
da2 at asr1 bus 0 target 10 lun 0
da2: ADAPTEC RAID-0 380E Fixed Direct Access SCSI-2 device
da2: Tagged Queueing Enabled
da2: 560057MB (1146996736 512 byte sectors: 255H 63S/T 5861C)
da0 at asr0 bus 0 target 0 lun 0
da0: ADAPTEC RAID-5 3A0L Fixed Direct Access SCSI-2 device
da0: Tagged Queueing Enabled
da0: 572346MB (1172164608 512 byte sectors: 255H 63S/T 7427C)
Mounting root from ufs:/dev/ad0s1a
da2: cannot find label (no disk label)
da2s1: cannot find label (no disk label)
da2: cannot find label (no disk label)
da2s1: cannot find label (no disk label)
da2: cannot find label (no disk label)
da2s1: cannot find label (no disk label)
da2: cannot find label (no disk label)
-
No matter what I do and no matter how hard I try, I am simply unable to
get the da2 filesystem to function.  Since this system is brand new I have
tried rebuilding the RAID (which I originally had simply as a single large
1.3+TB filesystem, which didn't work at all).  Anyway, I've rearranged
things in various orders and tried making them individual disks, smaller
systems, etc., but as soon as the actual attached storage goes over 2TB, I
start to get these types of errors.  Interestingly, fdisk and disklabel
also really hose the da2 system as well.  You can't even rewrite the
partition table through fdisk -i on the system.

So I would be very, very wary before you attempt to add storage
beyond 2TB to a single system, and would be cautious about any single
filesystem between 1-2TB.





3+ TB Storage... CCD, growfs, etc...

2003-06-12 Thread Max Clark
Hi all,

I am looking at the Promise UltraTrak RM 15000
(http://www.promise.com/product/product_detail_eng.asp?productId=109&familyId=6)
RAID appliance with a 3TB disk configuration. This box connects to the
host with a SCSI 160 interface which is no problem, and as I understand it
UFS2 is 64 bit so I am not constrained by a 2TB filesystem limit. The
smallest size file on this box will be 11GB, and there will be lots of them.

My questions are.
1) What is the maximum filesystem size with UFS2? Are there any special
tuning parameters that I should be aware of that will better optimize the
disk?
2) How much CPU/Ram would be suggested per TB of disk attached?
3) If I wanted to eventually strip two+ of these external boxes what would I
need to do? Given this configuration would Vinum or CCD be better? Why?

Oh... and this will be running Samba to serve these files to windows pc's
over 1Gb copper ethernet.

Thanks in advance,
Max



Any problems using ccd with large disk ?

2003-06-04 Thread Pranav A. Desai
Hi!

  We are planning to install 6x170GB disks on our server and use ccd to
create a logging partition. Is anyone aware of any problems that might
be caused by using ccd on large disks or any other problem related to BIOS
or filesystem limitation? I would appreciate any suggestions.

Thank you for your time.

-Pranav

***
Pranav A. Desai



ccd problem

2003-01-28 Thread Damien Hull
I'm trying to set up RAID 0 between two 80 gig drives using ccd, but it's
not working. Here is the setup:

1. two 80 gig drives
2. both on a Promise Ultra 66 card
3. system disk is separate

According to the FreeBSD handbook you need to run disklabel to give the
disk a label and to change the partition type to 4.2BSD. I tried running
disklabel, but it tells me that it can't perform either operation.

It's almost like I don't have access to the drives when I use disklabel.
I can format and partition both drives when I use /stand/sysinstall.

If anyone knows what's going on here, I'd appreciate the help.






Re: ccd without ccd.conf

2002-11-05 Thread Ed Powers
Hi.

 Hey people, 
 
 Got a bit of a problem..  I made a ccd out of two drives, /dev/ad0e and
 /dev/ad1e.  I put stuff on them and rebooted the server.  I forgot to write 
 out a /etc/ccd.conf.  Is there any way I can still mount the ccd after the  
 reboot?  Or is all hope lost?   
 
 Thanks, 
 Christopher J. Umina

Your CCD should still be intact after a reboot. So long as you recall the
interleave you used (if any), just run ccdconfig and remount.
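
Something like this, assuming an interleave of 32 (substitute whatever
you actually used, and keep the components in their original order):

# ccdconfig ccd0 32 0 /dev/ad0e /dev/ad1e
# mount /dev/ccd0c /mnt
# ccdconfig -g > /etc/ccd.conf    (so the file exists next time)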

Ed







ccd without ccd.conf

2002-11-04 Thread Christopher J. Umina
Hey people,

Got a bit of a problem..  I made a ccd out of two drives, /dev/ad0e and
/dev/ad1e.  I put stuff on them and rebooted the server.  I forgot to write
out a /etc/ccd.conf.  Is there any way I can still mount the ccd after the
reboot?  Or is all hope lost?

Thanks,
Christopher J. Umina






CCD

2002-07-18 Thread Christopher J. Umina

Hey guys and gals,

I'm just looking for an opinion on using CCD for striping on two
IDE drives.  Anybody have anything to say about performance, reliability,
manageability and so on?

Thanks A Lot,
Christopher J. Umina





Re: CCD

2002-07-18 Thread Roger 'Rocky' Vetterberg

Christopher J. Umina wrote:
 Hey guys and gals,
 
   I'm just looking for an opinion on using CCD for striping on two
 IDE drives.  Anybody have anything to say about performance, reliability,
 manageability and so on?
 
 Thanks A Lot,
 Christopher J. Umina
 

I have never tried CCD, but Vinum works fine for me. It does 
striping, mirroring, RAID-5 and most other RAID combinations you 
can think of.

--
R


