Re: Volume Manager on FreeBSD ( ZFS / VINUM / GEOM )

2010-04-20 Thread krad
On 20 April 2010 03:25, Alberto Mijares amijar...@gmail.com wrote:

 On Mon, Apr 19, 2010 at 8:37 PM, Leandro F Silva
 fsilvalean...@gmail.com wrote:
  Hi all,
 
  I'd like to know what kind of technology you are using on FreeBSD for
  volume management, I mean Z file system (ZFS), Vinum, GEOM, or anything else.
  It seems that Oracle won't offer support for ZFS on OpenSolaris, so do you
  know if FreeBSD will keep working with ZFS?
 
  I have some old production servers that aren't using any kind of volume
  manager, so could you please share what you're using / your ideas /
  complaints with us.


 I'd put my hand in the fire for gvinum (vinum + GEOM). ZFS is great, but
 I'm not an expert yet.

 Regards.


 Alberto Mijares



If you are looking at running Oracle, it won't be supported on FreeBSD either.
The only free OS it will be supported on now is going to be Linux,
unfortunately (assuming it doesn't have to be Red Hat Enterprise). However, if
you are already paying for Oracle support, I doubt it will cost much more to
get a support contract for Solaris. If you are mission critical, that would be
your best option, as you would have full vendor support and still have ZFS,
which is very useful for hot backups.
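
On that last point, a ZFS hot backup is usually just a snapshot plus a
send/receive; a minimal sketch, where the pool, dataset and backup host names
are only placeholders:

  zfs snapshot tank/data@nightly     # atomic, near-instant snapshot
  zfs send tank/data@nightly | ssh backuphost zfs receive backup/data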


Volume Manager on FreeBSD ( ZFS / VINUM / GEOM )

2010-04-19 Thread Leandro F Silva
Hi all,

I'd like to know what kind of technology you are using on FreeBSD for volume
management, I mean Z file system (ZFS), Vinum, GEOM, or anything else.
It seems that Oracle won't offer support for ZFS on OpenSolaris, so do you know
if FreeBSD will keep working with ZFS?

I have some old production servers that aren't using any kind of volume
manager, so could you please share what you're using / your ideas /
complaints with us.

Thank you !


Re: Volume Manager on FreeBSD ( ZFS / VINUM / GEOM )

2010-04-19 Thread Alberto Mijares
On Mon, Apr 19, 2010 at 8:37 PM, Leandro F Silva
fsilvalean...@gmail.com wrote:
 Hi all,

 I'd like to know what kind of technology you are using on FreeBSD for volume
 management, I mean Z file system (ZFS), Vinum, GEOM, or anything else.
 It seems that Oracle won't offer support for ZFS on OpenSolaris, so do you know
 if FreeBSD will keep working with ZFS?

 I have some old production servers that aren't using any kind of volume
 manager, so could you please share what you're using / your ideas /
 complaints with us.


I'd put my hand in the fire for gvinum (vinum + GEOM). ZFS is great, but
I'm not an expert yet.

Regards.


Alberto Mijares


Software RAID performance? RAID-Z or vinum and RAID5?

2009-03-16 Thread Mike Manlief
I'm looking into moving a workstation from Ubuntu 10 to FreeBSD 7.1
(both amd64) and I'm a bit worried about storage -- specifically
moving from mdadm, which performs very well for me.

Currently in Linux I use an mdadm RAID5 of 5 disks.  After investigating
FreeBSD storage options, RAID-Z sounds optimal[1].  I'd like to avoid
levels 3 and 1 due to write bottlenecks[2], and level 0 for obvious
reasons.  Migrating from the existing mdadm is not an issue.  I also
do not plan to boot from the software array.

Various docs/postings seem to indicate that using ZFS/RAID-Z under
FreeBSD will destroy my computer, run over my cat, and bail out the
investment banking industry.  Will it really perform that poorly on a
Phenom and 8GB RAM?  Significantly more resources than mdadm in Linux?
 How about compared to RAID 5 under vinum?

Thanks,
~Mike Manlief

1: The ability to read the array with the Linux FUSE ZFS
implementation is very appealing; don't care about performance for
such inter-op scenarios.  Copy-on-write sounds awesome too.

2: ...and even level 5, now that I've learned of RAID-Z.
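
For anyone following along, creating the equivalent array with ZFS on FreeBSD
is a one-liner; a sketch, with the device names being placeholders:

  zpool create tank raidz da0 da1 da2 da3 da4   # 5-disk single-parity RAID-Z
  zpool status tank                             # verify the layout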


Vinum/FreeBSD 6.4

2009-02-01 Thread Don O'Neil
Are there any disk size/volume size limitations on Vinum with FreeBSD 6.4? 

Can I run Vinum on 4 x 500 GB drives and get a 1 TB RAID 10 config?



vinum raid degraded

2008-12-12 Thread Gerhard Schmidt
Hi,

I'm running a gvinum raid array with 4x80G drives. This raid has been running
for 4 years now. Today I found out that the status is degraded. All
drives are up but one subdisk is stale. How can I get the raid out of
degraded mode? I have attached the output of gvinum l.

Greeting
Estartu

-- 
-
Gerhard Schmidt   | E-Mail: schm...@ze.tum.de
TU-München|
WWW & Online Services |
Tel: 089/289-25270|
Fax: 089/289-25257| PGP-Publickey auf Anfrage

4 drives:
D vinumdrive3   State: up   /dev/ad7A: 0/78533 MB (0%)
D vinumdrive2   State: up   /dev/ad6A: 0/78533 MB (0%)
D vinumdrive1   State: up   /dev/ad5s1  A: 0/78533 MB (0%)
D vinumdrive0   State: up   /dev/ad4A: 0/78533 MB (0%)

1 volume:
V daten State: up   Plexes:   1 Size:230 GB

1 plex:
P daten.p0   R5 State: degraded Subdisks: 4 Size:230 GB

4 subdisks:
S daten.p0.s3   State: up   D: vinumdrive3  Size: 76 GB
S daten.p0.s2   State: up   D: vinumdrive2  Size: 76 GB
S daten.p0.s1   State: staleD: vinumdrive1  Size: 76 GB
S daten.p0.s0   State: up   D: vinumdrive0  Size: 76 GB
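
If I remember the gvinum syntax right, a stale subdisk is revived in place
with the start command; a minimal sketch, using the subdisk name from the
listing above:

  gvinum start daten.p0.s1   # rebuild the stale subdisk from the other three
  gvinum list                # it should show up as reviving, then up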

Vinum for hard disk drive balanced load sharing configuration example RAID-5

2008-09-18 Thread Nash Nipples
Dear Listmates,

Is it possible to configure vinum to balance the wear between two or
three non-volatile storage devices of different sizes?

I have read the manual thoroughly and noticed that there are certain
restrictions applied to the hard drive sizes in the proposed RAID5 data
handling implementation.

The fact that plexes are used as structural entities makes me wonder why the
size of an actual hard drive should make a difference to the actual I/O layer:
when the plexes are even and the subdisk sizes are even, why not just do the
I/O sequentially?

Can someone please provide me with a working example of a RAID5 configuration?

Sincerely

Nash


  


Vinum Raid 5: Bad HD Crashes Server. Can't rebuild array.

2007-10-18 Thread FX Charpentier
 Hi there,



I set up this FreeBSD 4.9 box a while back (I know, I know, this isn't the
latest version; but look, it's been running perfectly since then), with the OS
on a SCSI drive and a vinum volume on 3 IDE 200GB drives, hooked onto a Promise
IDE controller.



A) The Crash

= = = = = = = 

The vinum volume is set in a Raid 5 configuration.  Here is how it's  
configured:




   drive d1 device /dev/ad4s1h
   drive d2 device /dev/ad5s1h
   drive d3 device /dev/ad6s1h
   volume datastore
     plex org raid5 256k
       subdisk length 185g drive d1
       subdisk length 185g drive d2
       subdisk length 185g drive d3





Each drive in the array had a single partition, and was labeled with a type
of vinum and an h partition.



Last Saturday night, drive d2 (ad5) went bad.  To my surprise the server
stopped, crashed and automatically rebooted.  I got a kernel panic at the
console and the server would stop during the boot process when trying to
start / mount the vinum volume.



=> Q1: Isn't a Raid 5 configuration supposed to allow me to run on a degraded
array, when 1 of the drives is missing?

=> Q2: Did I do anything wrong with the vinum config above?





B) The Recovery (well, sort of)

= = = = = = = = = = = = = = =

So, the next day I got a brand new 250GB hard drive and replaced d2 (ad5).
Then I did the fixit floppy thing to comment out vinum from both rc.conf and
fstab.  This way I was able to start the server.


I prepared the new drive with Fdisk first, then did a 'disklabel' to change
the type to vinum and the partition to h.  After that I created a special
vinum configuration file called 'recoverdata' to recover the volume, and put
drive d2 device /dev/ad5s1h there.  Finally I ran: vinum create -v
recoverdata.  This worked and I finally entered vinum in interactive mode.



First thing, I started vinum with the 'start' command.  That worked.  Next, I
did an 'ld -v' to bring up information about the vinum drives.  Vinum drive d1
came up with the right information.  d2 came up with some information.  d3 had
all fields, but no information.  It was just like a drive with only blank
information.

I checked that d2, the formerly failed drive, was pointing at ad5, then ran an
'lv -r' to ensure that datastore.p0 said 'degraded'.  It did.  Finally, to
rebuild the array, I ran: start datastore.p0.

At that point I didn't notice right away, but I had vinum [xxx]: reviving
datastore.p0.s0.  I started to get worried, since the drive to rebuild is
datastore.p0.s1.  Then the revive failed at 69%.

I tried start datastore.p0.s1 to rebuild the array, but that failed at 69%
too.

=> Q3: What can I do to revive the array?  I don't know what to do at this
point.
=> Q4: Did I do anything wrong in the recovery process?  Just want to make sure
I learn from my mistakes.

Many thanks for your help in advance.











Re: GEOM, Vinum difference

2007-09-27 Thread Brian A. Seklecki
On Wed, 2007-08-22 at 08:51 +0400, Rakhesh Sasidharan wrote:
 Lowell Gilbert wrote:
 
  Rakhesh Sasidharan [EMAIL PROTECTED] writes:
 
  I see that if I want to do disk striping/ concating/ mirroring,
  FreeBSD offers the GEOM utilities and the Vinum LVM (which fits into
  the GEOM architecture). Why do we have two different ways of doing the

...

 definitely a difference. Thanks!
 
 Another (related) question: both gvinum and the geom utilities like 
 gmirror and gstripe etc provide for RAID0, RAID1, and RAID3. Any 
 advantages/ disadvantages of using one instead of the other?

It depends greatly upon your application and needs.  A common practice
in a typical 6-disk server is to use a RAID1 set of smaller-capacity,
faster (higher-RPM) disks for the system file systems, while combining
the larger, slower disks into RAID1 sets that are then RAID0'd together
for space, performance, and redundancy: RAID1+0.
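
A rough sketch of that layout with gmirror and gstripe, where the device names
are only placeholders:

  gmirror label -v m0 da2 da3                 # first RAID1 set of big disks
  gmirror label -v m1 da4 da5                 # second RAID1 set
  gstripe label -v data mirror/m0 mirror/m1   # RAID0 across the two mirrors
  newfs -U /dev/stripe/data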

~BAS



Re: Pre-geom vinum compatibility

2007-09-22 Thread Wojciech Puchar

I have a 4.11-R box that I'm planning on reinstalling a fresh 6.2-R on.  Not an 
upgrade, but a fresh binary install after newfsing the system partitions.

A remaining planning issue is that I have a pre-GEOM vinum data volume on other 
disks.  The handbook mentions gvinum retaining the same disk metadata.  Does 
this mean that I should be able to mount that 4.11 vinum volume after 6.2 and 
gvinum is installed on the system disk?

Anything I should watch out for?

To be sure, export the vinum definitions to a file (it includes the beginning
and ending sector of each subdisk), and if gvinum won't pick it up by itself,
just use that file with 'create'.
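
A sketch of that, assuming the old vinum(8) binary is still available on the
4.11 system and the file name is just an example:

  vinum printconfig > /root/vinum.conf   # save the existing definitions on 4.11
  # after the fresh 6.2 install, if the volume does not appear by itself:
  gvinum create /root/vinum.conf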



Pre-geom vinum compatibility

2007-09-21 Thread clear
I have a 4.11-R box that I'm planning on reinstalling a fresh 6.2-R on.  Not an 
upgrade, but a fresh binary install after newfsing the system partitions. 

A remaining planning issue is that I have a pre-GEOM vinum data volume on other 
disks.  The handbook mentions gvinum retaining the same disk metadata.  Does 
this mean that I should be able to mount that 4.11 vinum volume after 6.2 and 
gvinum is installed on the system disk?

Anything I should watch out for?

Thanks,

-Jed


Re: GEOM, Vinum difference

2007-08-22 Thread Michel Talon
Rakhesh Sasidharan wrote:

 Another (related) question: both gvinum and the geom utilities like 
 gmirror and gstripe etc provide for RAID0, RAID1, and RAID3. Any 
 advantages/ disadvantages of using one instead of the other?

There has been a polemic between Greg Lehey and PJ Dawidek about the
comparative advantages of raid3 and raid5. You can find the exchanges on
Google. One example being:
http://arkiv.freebsd.se/?ml=freebsd-performance&a=2004-08&t=227183
As far as I remember there are arguments showing that raid3 is better
than raid5 both in terms of speed and of data security. It seems that
raid5 has mostly a hype factor for him, but I may err. Anyway, it is for
such reasons that in the modern geom system, raid3 has been implemented
and not raid5. But vinum has been ported to the geom framework for the
benefit of old users, or of people who like it. For example if you are
using FreeBSD-4 or DragonFlyBSD, vinum is the standard tool, and you
may prefer getting expertise in just one tool.

Finally, none of these raid systems is really good, both for performance
and security. If you are concerned with your data and want good write
speed, you must buy enough disks and use raid 10. Another important
factor is ease of use.  The geom tools, gmirror, gstripe, graid3, etc.
are *very* easy to use.  The documentation in the man pages is clear,
sufficient for doing work, and not too long. On the contrary, vinum was
traditionally documented in a very hermetic way. But more recently, Greg
Lehey has provided a very clear chapter of his book on his web site
which can be recommended, but is not short. Note that the documentation is a
critical aspect of such systems because its lack may bite you in case a
disk crashes and you need to adopt correct procedures under stress.
Also for some time the gvinum stuff was extremely buggy, and was
completely non-functional when I tried it. I hope it is fixed now.
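
As an illustration of how terse the geom tools are, a three-disk RAID3 set is
just a couple of commands (the device names here are placeholders):

  graid3 label -v gr0 da1 da2 da3   # RAID3 wants 3, 5 or 9 components
  newfs -U /dev/raid3/gr0
  mount /dev/raid3/gr0 /mnt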

-- 

Michel TALON



Re: GEOM, Vinum difference

2007-08-22 Thread Rakhesh Sasidharan


Michel Talon wrote:


Rakhesh Sasidharan wrote:


Another (related) question: both gvinum and the geom utilities like
gmirror and gstripe etc provide for RAID0, RAID1, and RAID3. Any
advantages/ disadvantages of using one instead of the other?


There has been a polemic between Greg Lehey and PJ Dawidek about the
comparative advantages of raid3 and raid5. You can find the exchanges on
Google. One example being:
http://arkiv.freebsd.se/?ml=freebsd-performance&a=2004-08&t=227183
As far as i remember there are arguments showing that raid3 is better
than raid5 both in terms of speed and of data security. It seems that
raid5 has mostly a hype factor for him, but i may err. Anyways it is for
such reasons that in the modern geom system, raid3 has been implemented
and not raid5. But vinum has been ported to the geom framework for the
benefit of old users, or of people who like it. For example if you are
using FreeBSD-4 or DragonFlyBSD, vinum is the standard tool, and you
may prefer getting expertise in just one tool.

Finally none of these raid systems is really good, both for performance
and security. If you are concerned with your data and want good write
speed, you must buy enough disks and use raid 10. Another important
factor is ease of use.  The geom tools, gmirror, gstripe, graid3, etc.
are *very* easy to use.  The documentation in the man pages is clear,
sufficient for doing work, and not too long. On the contrary, vinum was
traditionaly documented in a very hermetic way. But more recently, Greg
Lehey has provided a very clear chapter of his book on his web site
which can be recommanded, but is not short. Note the documentation is a
critical aspect of such systems because its lack may bite you in case a
disk crashes and you need to adopt correct procedures under stress.
Also for some time the gvinum stuff was extremely buggy, and was
completely non functional when i tried it. I hope it is fixed now.


Great, thanks Michel! :) That's just the sort of info I was looking for...


Regards,

- Rakhesh
http://rakhesh.com/


Re: GEOM, Vinum difference

2007-08-21 Thread Lowell Gilbert
Rakhesh Sasidharan [EMAIL PROTECTED] writes:

 I see that if I want to do disk striping/ concating/ mirroring,
 FreeBSD offers the GEOM utilities and the Vinum LVM (which fits into
 the GEOM architecture). Why do we have two different ways of doing the
 same tasks -- any advantages/ disadvantages to either approach?

 I did check the archives before posting this question. Got a couple of
 hits, but they seem to be old info. Hence this question.

 The GEOM utilities seem to be newer, fancier, and probably the
 future. Vinum seems to be how things used to happen earlier. After
 GEOM was introduced, if Vinum had been discarded, I would have
 understood. But it wasn't. Instead, it was rewritten for GEOM and is
 probably still actively maintained. So I wonder why we have two ways
 of doing the same tasks ...

 What I understand from the archives is that Vinum was _probably_
 rewritten for GEOM coz the GEOM utilities were still new and not as
 time tested as Vinum. Is that the case? So will Vinum continue to be
 around for a while or it be discarded?

geom(4) does not provide RAID.  It provides framework services that
are used by gvinum(8), (and by many other disk-related capabilities).


Re: GEOM, Vinum difference

2007-08-21 Thread Rakhesh Sasidharan


Lowell Gilbert wrote:


Rakhesh Sasidharan [EMAIL PROTECTED] writes:


I see that if I want to do disk striping/ concating/ mirroring,
FreeBSD offers the GEOM utilities and the Vinum LVM (which fits into
the GEOM architecture). Why do we have two different ways of doing the
same tasks -- any advantages/ disadvantages to either approach?

I did check the archives before posting this question. Got a couple of
hits, but they seem to be old info. Hence this question.

The GEOM utilities seem to be newer, fancier, and probably the
future. Vinum seems to be how things used to happen earlier. After
GEOM was introduced, if Vinum had been discarded, I would have
understood. But it wasn't. Instead, it was rewritten for GEOM and is
probably still actively maintained. So I wonder why we have two ways
of doing the same tasks ...

What I understand from the archives is that Vinum was _probably_
rewritten for GEOM coz the GEOM utilities were still new and not as
time tested as Vinum. Is that the case? So will Vinum continue to be
around for a while or it be discarded?


geom(4) does not provide RAID.  It provides framework services that
are used by gvinum(8), (and by many other disk-related capabilities).


Missed that one! :) There's no geom utility for RAID5, so that's 
definitely a difference. Thanks!


Another (related) question: both gvinum and the geom utilities like 
gmirror and gstripe etc provide for RAID0, RAID1, and RAID3. Any 
advantages/ disadvantages of using one instead of the other?


Thanks,

- Rakhesh
http://rakhesh.com/


GEOM, Vinum difference

2007-08-20 Thread Rakhesh Sasidharan


Hi,

I see that if I want to do disk striping/ concating/ mirroring, FreeBSD 
offers the GEOM utilities and the Vinum LVM (which fits into the GEOM 
architecture). Why do we have two different ways of doing the same tasks 
-- any advantages/ disadvantages to either approach?


I did check the archives before posting this question. Got a couple of 
hits, but they seem to be old info. Hence this question.


The GEOM utilities seem to be newer, fancier, and probably the future. 
Vinum seems to be how things used to happen earlier. After GEOM was 
introduced, if Vinum had been discarded, I would have understood. But it 
wasn't. Instead, it was rewritten for GEOM and is probably still 
actively maintained. So I wonder why we have two ways of doing the same 
tasks ...


What I understand from the archives is that Vinum was _probably_ rewritten 
for GEOM coz the GEOM utilities were still new and not as time tested as 
Vinum. Is that the case? So will Vinum continue to be around for a while 
or it be discarded?



- Rakhesh
http://rakhesh.com/


Re: Vinum configuration syntax

2007-08-14 Thread CyberLeo Kitsana
Modulok wrote:
 Take the following example vinum config file:
 
 drive a device /dev/da2a
 drive b device /dev/da3a
 
 volume rambo
 plex org concat
 sd length 512m drive a
 plex org concat
 sd length 512m drive b
   
--------8<--------cut here--------8<--------
drive disk1 device /dev/ad4s1h
drive disk2 device /dev/ad5s1h
drive disk3 device /dev/ad6s1h

volume raid5
plex org raid5 512k
sd length 190782M drive disk1
sd length 190782M drive disk2
sd length 190782M drive disk3
--------8<--------cut here--------8<--------

This syntax still worked for me as of gvinum in 6.2. However, the new
SoC patches for geom_vinum functionality may change some things when
included.
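
For completeness, feeding a file like the one above to gvinum is simply (the
file name here is just an example):

  gvinum create raid5.conf   # creates the drives, plex and subdisks
  gvinum lv -r raid5         # check that the volume and all its parts are up
  newfs -U /dev/gvinum/raid5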

-- 
Fuzzy love,
-CyberLeo
Technical Administrator
CyberLeo.Net Webhosting
http://www.CyberLeo.Net
[EMAIL PROTECTED]

Furry Peace! - http://.fur.com/peace/


Vinum configuration syntax

2007-08-13 Thread Modulok
Take the following example vinum config file:

drive a device /dev/da2a
drive b device /dev/da3a

volume rambo
plex org concat
sd length 512m drive a
plex org concat
sd length 512m drive b

The keyword concat specifies the relationship between the plexes and
the subdisks. All writes, are always written to all plexes of a given
volume, thus the example above is a mirror with two plexes, each being
comprised of one very small subdisk. I understand this. What I don't
understand, is how to implement a RAID-5 volume.

The only two vinum plex organizations listed in the handbook were
striped and concat. How do I implement striping with distributed
parity (RAID 5)? This was not covered (or I missed it) in the
handbook, or the vinum(4) manual page, or the gvinum(8) manual page,
or in The Complete FreeBSD. There is a lot of great material on how
vinum is implemented and how great it will make your life, but
painfully little on the actual configuration syntax.

In the vinum(4) man page, it describes a number of mappings between
subdisks and plexes including: Concatenated, Striped and RAID-5;
however, these are section headings, and in the example config files
the keywords used were striped and concat, not Striped and
Concatenated. There has to be at least one other subdisk
to plex mapping:

Vinum implements the RAID-0, RAID-1 and RAID-5 models, both
individually and in combination.

RAID-5 is mentioned several times, but no examples were ever given.
What is the plex organization keyword, raid5, raid-5, RAID-5,
5, parity, disparity? I could use trial and error, but there has
to be a document with this information somewhere. Other than rummaging
through source code, is there any additional documentation on vinum
configuration syntax, (A strict specification would be great!)? I
found a FreeBSD Diary article using vinum, but it wasn't for RAID-5,
so no luck there.

FreeBSD 6.1-RELEASE
-Modulok-


Re: Creating vinum RAID 1 on place

2006-07-09 Thread Greg 'groggy' Lehey
On Friday,  7 July 2006 at 11:29:46 +0700, Olivier Nicole wrote:
 Hi,

 Is there a trick on the way to build a vinum RAID 1 without backup-in
 the data first?

Sometimes.  It's described in The Complete FreeBSD, page 236 or so.
See http://www.lemis.com/grog/Documentation/CFBSD/ for how to get the
book.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: Creating vinum RAID 1 on place

2006-07-07 Thread John Nielsen
On Friday 07 July 2006 00:29, Olivier Nicole wrote:
 Is there a trick on the way to build a vinum RAID 1 without backup-in
 the data first?

 I have the two disk that will get mirrored. One of the disk if
 formated as UFS 4.2 and already holds all the data. The second disk is
 blank.

 NormallyI should start with 2 blank disks, label them as vinum, create
 the vinum plex, then push the data on that RAID. Is there a way to do
 it without blanking both disk first (a RAID 0 on a single disk, copy
 the data on the RAID 0), label the other disk as vinum and create a
 RAID1?

This is quite possible.  The 100% safe way would be to configure the blank 
disk as the sole member of a (degraded) mirror set, use dump / restore to 
transfer the data from the existing filesystem to the mirror, then wipe the 
old filesystem and add the original disk to the mirror.

The faster but only 90% safe way would be to gmirror label the partition 
containing the existing filesystem and then add the second disk as a 
member. This is not safe if the last sector of the existing provider (where 
gmirror stores its metadata) is (or could be in the future) used by the 
filesystem. Frequently the geometry works out such that there are spare 
sectors at the end of a partition that are not used by newfs, but if you're 
not sure then don't go this route. See the archives of this and other lists 
for details about this situation.
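
A sketch of the 100% safe route with gmirror, where ad1 is the blank disk,
/data is the mountpoint of the existing filesystem (on ad0), and all of those
names are placeholders:

  gmirror label -v gm0 ad1                         # one-member (degraded) mirror
  newfs -U /dev/mirror/gm0
  mount /dev/mirror/gm0 /mnt
  dump -0aLf - /data | (cd /mnt && restore -rf -)  # copy the existing filesystem
  # once everything checks out, wipe the old filesystem and attach its disk:
  gmirror insert gm0 ad0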

JN


Re: Creating vinum RAID 1 on place

2006-07-07 Thread John Nielsen
On Friday 07 July 2006 08:04, John Nielsen wrote:
 On Friday 07 July 2006 00:29, Olivier Nicole wrote:
  Is there a trick on the way to build a vinum RAID 1 without backup-in
  the data first?
 
  I have the two disk that will get mirrored. One of the disk if
  formated as UFS 4.2 and already holds all the data. The second disk is
  blank.
 
  NormallyI should start with 2 blank disks, label them as vinum, create
  the vinum plex, then push the data on that RAID. Is there a way to do
  it without blanking both disk first (a RAID 0 on a single disk, copy
  the data on the RAID 0), label the other disk as vinum and create a
  RAID1?

 This is quite possible.  The 100% safe way would be to configure the blank
 disk as the sole member of a (degraded) mirror set, use dump / restore to
 transfer the data from the existing filesystem to the mirror, then wipe
 the old filesystem and add the original disk to the mirror.

 The faster but only 90% safe way would be to gmirror label the partition
 containing the existing filesystem and then adding the second disk as a
 member. This is not safe if the last sector of the existing provider (where
 gmirror stores its metadata) is (or could be in the future) used by the
 filesystem. Frequently the geometry works out such that there are spare
 sectors at the end of a partition that are not used by newfs, but if you're
 not sure then don't go this route. See the archives of this and other lists
 for details about this situation.

Sorry, I completely missed the vinum in your message the first time through. 
My comments above apply to GEOM mirroring (gmirror) and not to vinum. I would 
recommend gmirror over vinum for RAID 1, though, as it's much simpler to get 
going and at least as robust.

JN


Re: vinum stability?

2006-07-07 Thread Ian Jefferson

One thing you might consider is that gvinum is quite flexible.

The subdisks in vinum that make up a raid 5 plex are partitions.
This means you can create raid 5 sets without using each entire disk,
and the disks don't need to be the same model or size.  It's also
handy for spares.  If you start having media errors, a new partition
on the offending disk might be one option, but any other disk that
supports a partition size equal to the ones used as subdisks in the
raid 5 plex will also do.


Having said that, I'm finding it tricky to understand and use gvinum.
It seems to be on the mend though; the documentation is improving and
the raid 5 set I had running seemed pretty stable for a 40-minute
iozone benchmark.  That's all I've done with it to date.


IJ

On Jul 6, 2006, at 8:56 AM, Jeremy Ehrhardt wrote:

I have a quad-core Opteron nForce4 box running 6.1-RELEASE/amd64
with a gvinum RAID 5 setup comprising six identical SATA drives on
three controllers (the onboard nForce4 SATA, which is apparently
two devices, and one Promise FastTrak TX2300 PCI SATA RAID
controller in IDE mode), combined into one volume named drugs.
We've been testing this box as a file server, and it usually works
fine, but smartd reported a few bad sectors on one of the drives,
then a few days later it crashed while I was running chmod -R on a
directory on drugs and had to be manually rebooted. I can't
figure out exactly what happened, especially given that RAID 5 is
supposed to be robust against single drive failures and that
despite the bad blocks smartctl claims the drive is healthy.


I have three questions:
1: what's up with gvinum RAID 5? Does it crash randomly? Is it
considered stable? Will it lose data?
2: am I using a SATA controller that has serious problems or
something like that? In other words, is this actually gvinum's fault?
3: would I be better off using a different RAID 5 system on another
OS?


Jeremy Ehrhardt
[EMAIL PROTECTED]




Re: vinum stability?

2006-07-06 Thread Chuck Swiger

Jeremy Ehrhardt wrote:
We've been testing this box as a 
file server, and it usually works fine, but smartd reported a few bad 
sectors on one of the drives, then a few days later it crashed while I 
was running chmod -R on a directory on drugs and had to be manually 
rebooted. I can't figure out exactly what happened, especially given 
that RAID 5 is supposed to be robust against single drive failures and 
that despite the bad blocks smartctl claims the drive is healthy.


As soon as you notice bad sectors appearing on a modern drive, it's time to 
replace it.  This is because modern drives already use spare sectors to 
replace failing data areas transparently, and when that no longer can be done 
because all of the spares have been used, the drive is likely to die shortly 
thereafter.


RAID-5 provides protection against a single-drive failure, but once errors are 
seen, the RAID-volume is operating in degraded mode which involves a 
significant performance penalty and you no longer have any protection against 
data loss-- if you have a problem with another disk in the meantime before the 
failing drive gets replaced, you're probably going to lose the entire RAID 
volume and all data on it.



I have three questions:
1: what's up with gvinum RAID 5? Does it crash randomly? Is it 
considered stable? Will it lose data?


Gvinum isn't supposed to crash randomly, and it is reasonably stable, but it
doesn't seem to be as reliable as either a hardware RAID setup or the older
vinum from FreeBSD-4 and earlier.


As for losing data, see above.

2: am I using a SATA controller that has serious problems or something 
like that? In other words, is this actually gvinum's fault?


If you had a failing drive, that's not gvinum's fault.  gvinum is supposed to 
handle a single-drive failure, but it's not clear what actually went 
wrong...log messages or dmesg output might be useful.



3: would I be better off using a different RAID 5 system on another OS?


Changing OSes won't make much difference; using hardware to implement the RAID 
might be an improvement, rather than using gvinum's software RAID.  Of course, 
you'd have to adjust your config to fit within your hardware controller's 
capabilities.


--
-Chuck


Creating vinum RAID 1 on place

2006-07-06 Thread Olivier Nicole
Hi,

Is there a trick on the way to build a vinum RAID 1 without backup-in
the data first?

I have the two disks that will get mirrored. One of the disks is
formatted as UFS 4.2 and already holds all the data. The second disk is
blank.

Normally I should start with 2 blank disks, label them as vinum, create
the vinum plex, then push the data onto that RAID. Is there a way to do
it without blanking both disks first (a RAID 0 on a single disk, copy
the data onto the RAID 0), then label the other disk as vinum and create a
RAID 1?

best regards,

Olivier


vinum stability?

2006-07-05 Thread Jeremy Ehrhardt
I have a quad-core Opteron nForce4 box running 6.1-RELEASE/amd64 with a 
gvinum RAID 5 setup comprising six identical SATA drives on three 
controllers (the onboard nForce4 SATA, which is apparently two devices, 
and one Promise FastTrak TX2300 PCI SATA RAID controller in IDE mode), 
combined into one volume named drugs. We've been testing this box as a 
file server, and it usually works fine, but smartd reported a few bad 
sectors on one of the drives, then a few days later it crashed while I 
was running chmod -R on a directory on drugs and had to be manually 
rebooted. I can't figure out exactly what happened, especially given 
that RAID 5 is supposed to be robust against single drive failures and 
that despite the bad blocks smartctl claims the drive is healthy.


I have three questions:
1: what's up with gvinum RAID 5? Does it crash randomly? Is it 
considered stable? Will it lose data?
2: am I using a SATA controller that has serious problems or something 
like that? In other words, is this actually gvinum's fault?

3: would I be better off using a different RAID 5 system on another OS?

Jeremy Ehrhardt
[EMAIL PROTECTED]


Re: vinum stability?

2006-07-05 Thread Peter A. Giessel


On 7/5/2006 15:56, Jeremy Ehrhardt seems to have typed:
 3: would I be better off using a different RAID 5 system on another OS?

You would be best off with a 3ware card (www.3ware.com) running RAID 5
(hardware raid > software raid).

It works great in FreeBSD and is *very* stable and fault tolerant.



Re: vinum stability?

2006-07-05 Thread Jonathan Horne
On Wednesday 05 July 2006 19:05, Peter A. Giessel wrote:
 On 7/5/2006 15:56, Jeremy Ehrhardt seems to have typed:
  3: would I be better off using a different RAID 5 system on another OS?

 You would be best off with a 3ware card (www.3ware.com) running RAID 5
 (hardware raid > software raid).

 It works great in FreeBSD and is *very* stable and fault tolerant.


I have a 3ware card in my production server running a RAID5, and it's never
skipped a beat.

If you don't buy a raid card (with 6 or more channels), try breaking the usage
up.  Put the system partitions on one controller, and build a 4 disk raid5 on
the other, and see if it behaves differently (i.e., remove the
cross-controller raid from the equation and see what happens).

cheers,
jonathahn


Re: is vinum in FBSD 6.0?

2006-06-04 Thread Greg 'groggy' Lehey
On Friday,  2 June 2006 at  5:04:15 -0500, Kevin Kinsey wrote:
 Travis H. wrote:

 Is there some kind of IP lawsuit over vinum or something?

 If so, it's never been mentioned ;-)

It has now, but it's the first time I've heard of it.

 It's a valid question, but I don't think Greg's that kind of guy.
 As for Veritas, I *think* they had some sort of agreement (re: the
 name), (but I could be blowing smoke there); IIRC, vinum(8) was
 patterned after the idea of the Veritas software, and not in any
 way a copy or clone of it

In fact, I never asked VERITAS.  I only modelled Vinum on VxVM,
anyway; there are big differences.

In any case, there was one IP issue at the very beginning: I developed
the RAID-5 functionality under contract with Cybernet Inc., and part
of that agreement was that I would not release it until 18 months
after it became functional.  That time has long passed, and RAID-5 has
of course been released.  There have never been any conflicts arising
from IP issues, neither with Cybernet, VERITAS, myself or anybody
else.

 As others have stated, vinum has been replaced by gvinum.  Greg
 had stated in the past that the GEOM layer's introduction had badly
 broken vinum, so I'm guessing that vinum was removed so that no one
 would attempt to use it on a newer system and get unexpected
 results.

The original intention was to modify Vinum to work with GEOM.  Lukas
did the work, and he chose to rewrite significant parts of it, and
also to rename it.  I disagreed with both of these decisions (see the
problems they've caused, like what's being discussed here), but he's
the man, and he gets to call the shots.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: is vinum in FBSD 6.0?

2006-06-02 Thread bsd
 I recently installed 6.0, and there doesn't seem to be a vinum binary.

 There is a gvinum binary, but it doesn't even implement all of the
 commands in its own help screen.

 I'm somewhat confused.  Did I screw up my install, or is this normal?

 Is there some kind of IP lawsuit over vinum or something?
 --
 Scientia Est Potentia -- Eppur Si Muove
 Security guru for rent or hire - http://www.lightconsulting.com/~travis/
 --
 GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484

vinum has been replaced by gvinum and GEOM; see the handbook for setup.

Rob





Re: is vinum in FBSD 6.0?

2006-06-02 Thread Kevin Kinsey

Travis H. wrote:


Is there some kind of IP lawsuit over vinum or something?


If so, it's never been mentioned ;-)

It's a valid question, but I don't think Greg's that kind
of guy.  As for Veritas, I *think* they had some sort of
agreement (re: the name), (but I could be blowing smoke there);
IIRC, vinum(8) was patterned after the idea of the
Veritas software, and not in any way a copy or clone of it

As others have stated, vinum has been replaced by gvinum.
Greg had stated in the past that the GEOM layer's introduction
had badly broken vinum, so I'm guessing that vinum was removed
so that no one would attempt to use it on a newer system and
get unexpected results.

My $0.02, IANAL, IANAE, etc.,

Kevin Kinsey

--
It is now 10 p.m.  Do you know where Henry Kissinger is?
-- Elizabeth Carpenter



is vinum in FBSD 6.0?

2006-06-01 Thread Travis H.

I recently installed 6.0, and there doesn't seem to be a vinum binary.

There is a gvinum binary, but it doesn't even implement all of the
commands in its own help screen.

I'm somewhat confused.  Did I screw up my install, or is this normal?

Is there some kind of IP lawsuit over vinum or something?
--
Scientia Est Potentia -- Eppur Si Muove
Security guru for rent or hire - http://www.lightconsulting.com/~travis/ --
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484


Re: is vinum in FBSD 6.0?

2006-06-01 Thread Mikhail Goriachev
Travis H. wrote:
 I recently installed 6.0, and there doesn't seem to be a vinum binary.
 
 There is a gvinum binary, but it doesn't even implement all of the
 commands in its own help screen.
 
 I'm somewhat confused.  Did I screw up my install, or is this normal?
 
 Is there some kind of IP lawsuit over vinum or something?

Hi,

The following is an extract from:

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-vinum.html

[...]

Note: Starting with FreeBSD 5, Vinum has been rewritten in order to fit
into the GEOM architecture (Chapter 18), retaining the original ideas,
terminology, and on-disk metadata. This rewrite is called gvinum (for
GEOM vinum). The following text usually refers to Vinum as an abstract
name, regardless of the implementation variant. Any command invocations
should now be done using the gvinum command, and the name of the kernel
module has been changed from vinum.ko to geom_vinum.ko, and all device
nodes reside under /dev/gvinum instead of /dev/vinum. As of FreeBSD 6,
the old Vinum implementation is no longer available in the code base.

[...]

Cheers,
Mikhail.

-- 
Mikhail Goriachev
Webanoide

Telephone: +61 (0)3 62252501
Mobile Phone: +61 (0)4 38255158
E-Mail: [EMAIL PROTECTED]
Web: http://www.webanoide.org

PGP Key ID: 0x4E148A3B
PGP Key Fingerprint: D96B 7C14 79A5 8824 B99D 9562 F50E 2F5D 4E14 8A3B


vinum concat

2006-05-17 Thread Joe Auty

Hello,

Can anybody recommend using vinum to concatenate across two disks?
What are the upsides? Downsides?


Are there any tutorials explaining how to do so? So far, based on the
lack of info I've been able to find, it seems to me that this is a
rarely used configuration... I'm wondering what the reasons for this
might be?








---
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
[EMAIL PROTECTED]




Re: vinum concat

2006-05-17 Thread Emil Thelin

On Wed, 17 May 2006, Joe Auty wrote:

Are their any tutorials explaining how to do so? So far, based on the lack of 
info I've been able to find, it seems to me that this is a rarely used 
configuration... I'm wondering what the reasons for this might be?


http://devel.reinikainen.net/docs/how-to/Vinum/ might be helpful.

/e

--
http://hostname.nu/~emil


Re: vinum concat

2006-05-17 Thread Joe Auty
There might be some helpful nuggets in there, but I'm looking to
basically combine the storage of multiple disks, like RAID-0, except
I want my second drive written to only when my first drive has been
filled. I understand this can be done via vinum concatenation. I'm
looking for general feedback on whether anybody has tried this setup,
how it worked, and what was useful to know to get started.




On May 17, 2006, at 12:06 PM, Emil Thelin wrote:


On Wed, 17 May 2006, Joe Auty wrote:

Are there any tutorials explaining how to do so? So far, based on
the lack of info I've been able to find, it seems to me that this
is a rarely used configuration... I'm wondering what the reasons
for this might be?


http://devel.reinikainen.net/docs/how-to/Vinum/ might be helpful.

/e

--
http://hostname.nu/~emil




Creating Vinum Volume during install

2006-05-04 Thread Loren M. Lang
I am trying to create a vinum file system during the install so I can
also use it for the root filesystem as described in the handbook, but it
appears that the geom_vinum modules are not available from the FreeBSD
6.1-RC2 disc 1 LiveCD shell.  Are the modules not available or do I need
to load something for it to work?  If they're not available, what other
choices do I have?  Freesbie?

-- 
Loren M. Lang
[EMAIL PROTECTED]
http://www.alzatex.com/


Public Key: ftp://ftp.tallye.com/pub/lorenl_pubkey.asc
Fingerprint: CEE1 AAE2 F66C 59B5 34CA  C415 6D35 E847 0118 A3D2
 




Vinum and upgrading from 5.1-RELEASE

2006-04-13 Thread Chris Hastie
I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.

The system has two mirrored disks, with Vinum used for the root
filesystem. make buildworld, make buildkernel and make installkernel
happened without complaint. But on rebooting vinum loaded, but failed to
load any volumes. Consequently the root filesystem didn't mount.

I ended up manually mounting ad0s1a, then in a desperate attempt to get
out of the situation

mount -u -w /
mount /dev/ad2s1a /root
cd /boot
mv kernel kernel.55
mv kernel.old kernel
cd /root/boot
mv kernel kernel.55
mv kernel.old kernel
reboot

To my surprise and relief this worked and I managed to get more or less
back to where I started.

But what do I need to do now to upgrade this system and have it actually
work?

For info, output of vinum dumpconfig below:

Drive chaucer:  Device /dev/ad0s1h
Created on xxx at Mon Sep 29 21:14:13 2003
Config last updated Thu Apr 13 07:59:46 2006
Size:  58946229760 bytes (56215 MB)
volume rootvol state up
volume varvol state up
volume tempvol state up
volume usrvol state up
plex name rootvol.p0 state up org concat vol rootvol
plex name rootvol.p1 state up org concat vol rootvol
plex name varvol.p0 state up org concat vol varvol
plex name varvol.p1 state up org concat vol varvol
plex name tempvol.p0 state up org concat vol tempvol
plex name tempvol.p1 state up org concat vol tempvol
plex name usrvol.p0 state up org concat vol usrvol
plex name usrvol.p1 state up org concat vol usrvol
sd name rootvol.p0.s0 drive shakespeare len 524288s driveoffset 265s state up plex rootvol.p0 plexoffset 0s
sd name rootvol.p1.s0 drive chaucer len 524288s driveoffset 265s state up plex rootvol.p1 plexoffset 0s
sd name varvol.p0.s0 drive shakespeare len 31457280s driveoffset 524553s state up plex varvol.p0 plexoffset 0s
sd name varvol.p1.s0 drive chaucer len 31457280s driveoffset 524553s state up plex varvol.p1 plexoffset 0s
sd name tempvol.p0.s0 drive shakespeare len 1048576s driveoffset 31981833s state up plex tempvol.p0 plexoffset 0s
sd name tempvol.p1.s0 drive chaucer len 1048576s driveoffset 31981833s state up plex tempvol.p1 plexoffset 0s
sd name usrvol.p0.s0 drive shakespeare len 82098946s driveoffset 33030409s state up plex usrvol.p0 plexoffset 0s
sd name usrvol.p1.s0 drive chaucer len 82098946s driveoffset 33030409s state up plex usrvol.p1 plexoffset 0s

Drive /dev/ad0s1h: 54 GB (58946229760 bytes)
Drive shakespeare:  Device /dev/ad2s1h
Created on xxx at Mon Sep 29 21:14:13 2003
Config last updated Thu Apr 13 07:59:46 2006
Size:  58946229760 bytes (56215 MB)
volume rootvol state up
volume varvol state up
volume tempvol state up
volume usrvol state up
plex name rootvol.p0 state up org concat vol rootvol
plex name rootvol.p1 state up org concat vol rootvol
plex name varvol.p0 state up org concat vol varvol
plex name varvol.p1 state up org concat vol varvol
plex name tempvol.p0 state up org concat vol tempvol
plex name tempvol.p1 state up org concat vol tempvol
plex name usrvol.p0 state up org concat vol usrvol
plex name usrvol.p1 state up org concat vol usrvol
sd name rootvol.p0.s0 drive shakespeare len 524288s driveoffset 265s state up plex rootvol.p0 plexoffset 0s
sd name rootvol.p1.s0 drive chaucer len 524288s driveoffset 265s state up plex rootvol.p1 plexoffset 0s
sd name varvol.p0.s0 drive shakespeare len 31457280s driveoffset 524553s state up plex varvol.p0 plexoffset 0s
sd name varvol.p1.s0 drive chaucer len 31457280s driveoffset 524553s state up plex varvol.p1 plexoffset 0s
sd name tempvol.p0.s0 drive shakespeare len 1048576s driveoffset 31981833s state up plex tempvol.p0 plexoffset 0s
sd name tempvol.p1.s0 drive chaucer len 1048576s driveoffset 31981833s state up plex tempvol.p1 plexoffset 0s
sd name usrvol.p0.s0 drive shakespeare len 82098946s driveoffset 33030409s state up plex usrvol.p0 plexoffset 0s
sd name usrvol.p1.s0 drive chaucer len 82098946s driveoffset 33030409s state up plex usrvol.p1 plexoffset 0s

Drive /dev/ad2s1h: 54 GB (58946229760 bytes)


and /etc/fstab
# Device              Mountpoint  FStype  Options       Dump  Pass#
/dev/ad0s1b           none        swap    sw            0     0
/dev/ad2s1b           none        swap    sw            0     0
/dev/vinum/rootvol    /           ufs     rw            1     1
/dev/vinum/tempvol    /tmp        ufs     rw            2     2
/dev/vinum/usrvol     /usr        ufs     rw            2     2
/dev/vinum/varvol     /var        ufs     rw,userquota  2     2
/dev/ad1s1d           /bak        ufs     rw            2     2


-- 
Chris Hastie

Re: Vinum and upgrading from 5.1-RELEASE

2006-04-13 Thread Emil Thelin

On Thu, 13 Apr 2006, Chris Hastie wrote:


I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.


Since 5.3 you should probably use the geom-aware gvinum instead of vinum.

I think the manual says that both vinum and gvinum can be used on 5.3 and
later, but I've never actually got vinum to work on 5.3 or later.


/e


Re: Vinum and upgrading from 5.1-RELEASE

2006-04-13 Thread Chris Hastie

On Thu, 13 Apr 2006, Emil Thelin [EMAIL PROTECTED] wrote:


On Thu, 13 Apr 2006, Chris Hastie wrote:


I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.


Since 5.3 you should probably use the geom-aware gvinum instead of vinum.

I think the manual says that both vinum and gvinum can be used on 5.3 and
later, but I've never actually got vinum to work on 5.3 or later.


Any hints as to how I make this migration? Is it as simple as putting
geom_vinum_load=YES in /boot/loader.conf, or is there further configuration
to do? Is gvinum happy with the same configuration data as vinum? Presumably
device names are different so I'll have to change /etc/fstab accordingly?
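
A sketch of what those two changes would look like, based on the fstab posted
earlier in this thread (whether gvinum accepts the old on-disk configuration
automatically is exactly the open question):

  # /boot/loader.conf
  geom_vinum_load="YES"

  # /etc/fstab -- device nodes move from /dev/vinum/... to /dev/gvinum/...
  /dev/gvinum/rootvol   /      ufs   rw            1   1
  /dev/gvinum/tempvol   /tmp   ufs   rw            2   2
  /dev/gvinum/usrvol    /usr   ufs   rw            2   2
  /dev/gvinum/varvol    /var   ufs   rw,userquota  2   2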


--
Chris Hastie


Re: Vinum and upgrading from 5.1-RELEASE

2006-04-13 Thread Emil Thelin

On Thu, 13 Apr 2006, Chris Hastie wrote:


On Thu, 13 Apr 2006, Emil Thelin [EMAIL PROTECTED] wrote:


On Thu, 13 Apr 2006, Chris Hastie wrote:


I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.


Since 5.3 you should probably use the geom-aware gvinum instead of vinum.

I think the manual says that both vinum and gvinum can be used on 5.3 and
later, but I've never actually got vinum to work on 5.3 or later.


Any hints as to how I make this migration? Is it as simple as putting
geom_vinum_load=YES in /boot/loader.conf, or is there further configuration
to do? Is gvinum happy with the same configuration data as vinum? Presumably
device names are different so I'll have to change /etc/fstab accordingly?


I've never tried it so I'm not sure how to migrate from vinum to gvinum, 
check the handbook and ask google.


But my gut feeling about it is that it will probably not be that easy, 
g(vinum) has a way of causing headaches..


/e

--
http://blogg.hostname.nu || http://photoblog.hostname.nu


Root on vinum volume on freebsd6.0/sparc64

2006-04-03 Thread Stephan Zaubzer

Hi!
Is there a way to place the root partition of FreeBSD on a mirrored vinum
volume? I have read tutorials which explain the procedure in more or
less detail, but all the tutorials seem to deal with the x86 version.
For example, the tutorials assume that FreeBSD is installed on a slice
and that the first 8 sectors of this slice contain the bootstrap, and they
tell you to move some partitions a few sectors.
On sparc64 I did not succeed in creating overlapping partitions, nor is
there the possibility of shifting a partition a few sectors (to leave space
for the vinum configuration data). Does anyone have experience with a root
partition on a vinum volume on sparc64?

Regards
Stephan


Vinum FreeBSD 4.11-STABLE

2006-01-29 Thread Forth
Hi,
I am trying to set up a vinum mirrored plex with two disks:
ad2: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-master UDMA33
ad3: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-slave UDMA33
The disks are new and fully functional, but when I do:
#vinum start
#vinum
vinum - mirror -v -n mirror /dev/ad2 /dev/ad3
I get this:
drive vinumdrive0 device /dev/ad2
Can't create drive vinumdrive0, device /dev/ad2: Can't initialize drive
vinumdrive0

P.S.
Sorry for my English. :)


Re: Vinum FreeBSD 4.11-STABLE

2006-01-29 Thread Greg 'groggy' Lehey
On Monday, 30 January 2006 at 10:31:13 +0300, Forth wrote:
 Hi,
 I am trying setup a vinum mirrored plex with two disks:
 ad2: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-master UDMA33
 ad3: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-slave UDMA33
 Disks are new and fully functional, but when i do:
 #vinum start
 #vinum
 vinum - mirror -v -n mirror /dev/ad2 /dev/ad3
 i get this:
 drive vinumdrive0 device /dev/ad2
 Can't create drive vinumdrive0, device /dev/ad2: Can't initialize drive
 vinumdrive0

You should be using partitions, not disk drives.  Create a partition
of type vinum, for example /dev/ad2s1h, and specify that.  The man
page explains in more detail.
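
In outline (adjust device and slice names to suit): label each disk so that
it has a partition of fstype vinum (disklabel -e ad2s1, then the same for
ad3s1), and then give those partitions to vinum:

  # vinum mirror -v -n mirror /dev/ad2s1h /dev/ad3s1h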

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.


pgpHrIkJCSqEG.pgp
Description: PGP signature


vinum RAID 1, FreeBSD 4-STABLE two different size drives

2006-01-18 Thread Mark Cullen

Hello :-)

I've come across a deal whereby I can either buy two identical 20GB 
drives, or a 20GB drive and a 40GB drive for the same price as the two 
20GB's. I was intending to use the drives for a RAID 1 array and have 
read that ideally the drives should be identical, but that it is doable with 
different-sized drives.


Obviously as both options are the same price I would like to get more 
for my money and get the 40GB and 20GB drive, rather than two 20GB's. 
This should also give me the option of buying only a single 40GB drive 
later on for cheap if I need to bump the space up a bit, rather than 
buying two drives.


Simple questions really I suppose. Is having two different size drives 
going to work? Is it going to make configuration trickier in any way? 
Are there any disadvantages (other than I will be losing out on 20GB on 
the 40GB drive, and that it's only going to be as fast as the slowest 
drive I imagine)? I've never ever touched vinum before!
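
From the handbook examples, I'm guessing the config would end up looking
something like this, with both subdisks capped at the size of the smaller
drive (device names and sizes below are made up):

  drive small device /dev/ad0s1h
  drive big   device /dev/ad2s1h
  volume data
    plex org concat
      sd length 19g drive small
    plex org concat
      sd length 19g drive big

so the extra 20GB on the larger drive would just sit unused (or could be
labelled as a separate scratch partition).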


Thanks in advance for any advice / comments,
Mark
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: your advice on vinum, RAIDs

2005-12-29 Thread Robert Slade
On Thu, 2005-12-29 at 06:09, Joe Auty wrote:
 Hello,
 
 I've been considering buying another hard drive for my FreeBSD  
 machine, and building a RAID with my current hard drive so that both  
 drives are treated as one.
 
 Do any of you have experience in this area? Is this advisable? I'm  
 assuming I'd be looking at creating a RAID-5? Can this be done  
 without reformatting my current drive? Does this setup work well? Do  
 you have any general advice for me? I need to know if there is risk  
 involved here.
 
 
 
 Thanks in advance!

Joe,

It depends on what you need to do. If you just want data integrity then
you need raid 1 - mirroring and GEOM is your friend. There is a good
section in the Handbook on setting a GEOM raid 1 without formatting the
original drive.
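
From memory, the Handbook procedure boils down to roughly this (do check it
for the exact steps before trying, since the details matter):

  gmirror load                    # or geom_mirror_load="YES" in /boot/loader.conf
  gmirror label -v gm0 /dev/ad2   # create the mirror on the *new* disk only
  # partition/newfs /dev/mirror/gm0, copy the data over, boot from it,
  # then pull the original disk into the mirror:
  gmirror insert gm0 /dev/ad0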


If you are also looking for more drive space, then raid 5 gives a
measure of both. 

In both cases, the warning of backing up the system first applies.

Rob  

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: your advice on vinum, RAIDs

2005-12-29 Thread Chuck Swiger

Joe Auty wrote:
I've been considering buying another hard drive for my FreeBSD  machine, 
and building a RAID with my current hard drive so that both  drives are 
treated as one.


This is known as RAID-1 mirroring.

Do any of you have experience in this area? Is this advisable? 


Yes and yes.  :-)


I'm assuming I'd be looking at creating a RAID-5?


Not with only two drives.
RAID-5 needs at least 3, and is wasteful unless you have 4-5.


Can this be done without reformatting my current drive?


You can set up mirroring without reformatting, but be sure you have good backups 
of your data regardless.



Does this setup work well?  Do you have any general advice for me? I need to
know if there is risk  involved here.


When choosing RAID levels, you are making a tradeoff between performance, 
reliability, and cost:


If you prefer... ...consider using:
---
performance, reliability:RAID-1 mirroring
performance, cost:   RAID-0 striping
reliability, performance:RAID-1 mirroring (+ hot spare, if possible)
reliability, cost:   RAID-5 (+ hot spare)
cost, reliability:   RAID-5
cost, performance:   RAID-0 striping

If you've got enough drives, using RAID-10 or RAID-50 will also improve 
performance compared to stock RAID-1 or RAID-5 modes.


--
-Chuck
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: your advice on vinum, RAIDs

2005-12-29 Thread Joe Auty


On Dec 29, 2005, at 4:14 AM, Robert Slade wrote:


On Thu, 2005-12-29 at 06:09, Joe Auty wrote:

Hello,

I've been considering buying another hard drive for my FreeBSD
machine, and building a RAID with my current hard drive so that both
drives are treated as one.

Do any of you have experience in this area? Is this advisable? I'm
assuming I'd be looking at creating a RAID-5? Can this be done
without reformatting my current drive? Does this setup work well? Do
you have any general advice for me? I need to know if there is risk
involved here.



Thanks in advance!


Joe,

It depends on what you need to do. If you just want data integrity  
then

you need raid 1 - mirroring and GEOM is your friend. There is a good
section in the Handbook on setting a GEOM raid 1 without formatting  
the

original drive.


If you are also looking for more drive space, then raid 5 gives a
measure of both.

In both cases, the warning of backing up the system first applies.




Hmmm. What I need is more drive space. Should I look at GEOM  
rather than vinum? Do you know whether the drives would need to be  
reformatted in order to setup the RAID?


I'll definitely heed your advice on backing up the drive first!







---
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
[EMAIL PROTECTED]


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: your advice on vinum, RAIDs

2005-12-29 Thread Joe Auty

Some great advice here!

What RAID level would you recommend for simply maximizing the hard  
disk space I have available? This is just my personal backup machine  
and will consist of two drives, so I don't need kick ass performance,  
and I don't need my files mirrored. I take it that striping is what I  
need to look at? RAID-0? Can I setup striping without reformatting,  
or only mirroring?


Sorry, still learning the basics here


On Dec 29, 2005, at 9:31 AM, Chuck Swiger wrote:


Joe Auty wrote:
I've been considering buying another hard drive for my FreeBSD   
machine, and building a RAID with my current hard drive so that  
both  drives are treated as one.


This is known as RAID-1 mirroring.


Do any of you have experience in this area? Is this advisable?


Yes and yes.  :-)


I'm assuming I'd be looking at creating a RAID-5?


Not with only two drives.
RAID-5 needs at least 3, and is wasteful unless you have 4-5.


Can this be done without reformatting my current drive?


You can set up mirroring without reformatting, but be sure you have  
good backups of your data regardless.


Does this setup work well?  Do you have any general advice for me?  
I need to

know if there is risk  involved here.


When choosing RAID levels, you are making a tradeoff between  
performance, reliability, and cost:


If you prefer... ...consider using:
---
performance, reliability:RAID-1 mirroring
performance, cost:   RAID-0 striping
reliability, performance:RAID-1 mirroring (+ hot spare, if  
possible)

reliability, cost:   RAID-5 (+ hot spare)
cost, reliability:   RAID-5
cost, performance:   RAID-0 striping

If you've got enough drives, using RAID-10 or RAID-50 will also  
improve performance compared to stock RAID-1 or RAID-5 modes.


--
-Chuck
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions- 
[EMAIL PROTECTED]


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: your advice on vinum, RAIDs

2005-12-29 Thread Chuck Swiger

Joe Auty wrote:

Some great advice here!

What RAID level would you recommend for simply maximizing the hard  disk 
space I have available?


RAID-0 striping.  Note that it gives you no redundancy or protection.

This is just my personal backup machine  and 
will consist of two drives, so I don't need kick ass performance,  and I 
don't need my files mirrored. I take it that striping is what I  need to 
look at? RAID-0? Can I setup striping without reformatting,  or only 
mirroring?


If you plan to use RAID most effectively, plan to reformat.
Do not attempt to setup RAID without having backups.

However, you can glue two disks together without reformatting using something 
called concatenation, but it doesn't perform as well as reformatting using 
striping.
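
For example, with the GEOM classes in 5.x and later the two setups look
roughly like this (device names made up; as written here both end with a
newfs, so back up first):

  # RAID-0 striping:
  gstripe label -v st0 /dev/ad0s1d /dev/ad1s1d
  newfs /dev/stripe/st0

  # plain concatenation:
  gconcat label -v big /dev/ad0s1d /dev/ad1s1d
  newfs /dev/concat/big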


--
-Chuck
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: your advice on vinum, RAIDs

2005-12-29 Thread Martin Cracauer
Joe Auty wrote on Thu, Dec 29, 2005 at 10:12:02AM -0500: 
 Some great advice here!
 
 What RAID level would you recommend for simply maximizing the hard  
 disk space I have available? This is just my personal backup machine  
 and will consist of two drives, so I don't need kick ass performance,  
 and I don't need my files mirrored. I take it that striping is what I  
 need to look at? RAID-0? Can I setup striping without reformatting,  
 or only mirroring?

Why do you want RAID in the first place, then? RAID-0 is the only option
that doesn't lose space, but it increases your risk for the benefit of
performance.  Since you don't need performance, there is no point in
taking the risk, much less the bootstrapping hassle.

You cannot convert existing filesystems to raid without first moving
the data somewhere.
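
(The usual dance being something like: dump the filesystem somewhere safe,
rebuild the disks the way you want them, and restore, e.g.

  dump -0af - /usr | (cd /backup/usr && restore -rf -)

with /backup obviously living on a disk that isn't being rebuilt.)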

Martin
-- 
%%%
Martin Cracauer cracauer@cons.org   http://www.cons.org/cracauer/
FreeBSD - where you want to go, today.  http://www.freebsd.org/
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: your advice on vinum, RAIDs

2005-12-29 Thread Robert Slade
On Thu, 2005-12-29 at 15:05, Joe Auty wrote:
 On Dec 29, 2005, at 4:14 AM, Robert Slade wrote:
 
  On Thu, 2005-12-29 at 06:09, Joe Auty wrote:
  Hello,
 
  I've been considering buying another hard drive for my FreeBSD
  machine, and building a RAID with my current hard drive so that both
  drives are treated as one.
 
  Do any of you have experience in this area? Is this advisable? I'm
  assuming I'd be looking at creating a RAID-5? Can this be done
  without reformatting my current drive? Does this setup work well? Do
  you have any general advice for me? I need to know if there is risk
  involved here.
 
 
 
  Thanks in advance!
 
  Joe,
 
  It depends on what you need to do. If you just want data integrity  
  then
  you need raid 1 - mirroring and GEOM is your friend. There is a good
  section in the Handbook on setting a GEOM raid 1 without formatting  
  the
  original drive.
 
 
  If you are also looking for more drive space, then raid 5 gives a
  measure of both.
 
  In both cases, the warning of backing up the system first applies.
 
 
 
 Hmmm. What I need is more drive space. Should I look at GEOM  
 rather than vinum? Do you know whether the drives would need to be  
 reformatted in order to setup the RAID?
 
 I'll definitely heed your advice on backing up the drive first!

Joe,

If you are not worried about integrity, I would leave RAID alone. To
add more space, just add the drive and mount it. I have just added more
space on a machine by mounting a new drive as /data, copying the home
dirs across, and then relinking /home to the new location.
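
In other words, something along these lines (the device name will differ on
your box):

  newfs /dev/ad1s1d
  mkdir /data
  mount /dev/ad1s1d /data
  cp -Rp /home /data                          # ends up as /data/home
  mv /home /home.old && ln -s /data/home /home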

Have a look at:

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/disks-adding.html

Rob

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


your advice on vinum, RAIDs

2005-12-28 Thread Joe Auty

Hello,

I've been considering buying another hard drive for my FreeBSD  
machine, and building a RAID with my current hard drive so that both  
drives are treated as one.


Do any of you have experience in this area? Is this advisable? I'm  
assuming I'd be looking at creating a RAID-5? Can this be done  
without reformatting my current drive? Does this setup work well? Do  
you have any general advice for me? I need to know if there is risk  
involved here.




Thanks in advance!









---
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
[EMAIL PROTECTED]


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


vinum stuck in 'initalizing' state ...

2005-11-28 Thread Marc G. Fournier


but, from what I can tell, it is doing absolutely nothing, both via iostat 
and vinum list ... the server has been 'up' for 15 minutes ... when I 
accidentally typed 'vinum init vm.p0.s0' instead of 'vinum start 
vm.p0.s0', the machine hung up and required a cold boot ... when it came 
back up, the initialize was running:


Subdisk vm.p0.s0:
Size:  72865336320 bytes (69489 MB)
State: initializing
Plex vm.p0 at offset 0 (0  B)
Initialize pointer:  0  B (0%)
Initialize blocksize:0  B
Initialize interval: 0 seconds
Drive d0 (/dev/da1s1a) at offset 135680 (132 kB)

But, nothing is happening on da1:

neptune# iostat 5 5
  tty da0  da1  da2 cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0  144  2.00 1513  2.95   0.00   0  0.00   0.00   0  0.00   2  0  4  0 93
   0   64 14.24 191  2.66   0.00   0  0.00   0.00   0  0.00  12  0 14  2 71
   0  138  7.92 172  1.33   0.00   0  0.00   0.00   0  0.00   7  0  6  0 87
   0   19 11.27 159  1.75   0.00   0  0.00   0.00   0  0.00  16  0  5  1 78
   0   69  8.95 157  1.37   0.00   0  0.00   0.00   0  0.00  23  0  4  1 72

Not sure what to do/try to give it a kick :(  Or did I just lose that 
whole file system?


This is with FreeBSD 4.x still ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum stuck in 'initalizing' state ...

2005-11-28 Thread Marc G. Fournier


Ignore, Google'd a bit longer and found the answer ...

On Mon, 28 Nov 2005, Marc G. Fournier wrote:



but, from what I can tell, is doing absolutely nothing, both via iostat and 
vinum list ... the server has been 'up' for 15 minutes ... when I 
accidentally typed 'vinum init vm.p0.s0' instead of 'vinum start vm.p0.s0', 
the machine hung up and required cold boot ... when it came back up, the 
initalize was running:


Subdisk vm.p0.s0:
   Size:  72865336320 bytes (69489 MB)
   State: initializing
   Plex vm.p0 at offset 0 (0  B)
   Initialize pointer:  0  B (0%)
   Initialize blocksize:0  B
   Initialize interval: 0 seconds
   Drive d0 (/dev/da1s1a) at offset 135680 (132 kB)

But, nothing is happening on da1:

neptune# iostat 5 5
 tty da0  da1  da2 cpu
tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
  0  144  2.00 1513  2.95   0.00   0  0.00   0.00   0  0.00   2  0  4  0 93
  0   64 14.24 191  2.66   0.00   0  0.00   0.00   0  0.00  12  0 14  2 71
  0  138  7.92 172  1.33   0.00   0  0.00   0.00   0  0.00   7  0  6  0 87
  0   19 11.27 159  1.75   0.00   0  0.00   0.00   0  0.00  16  0  5  1 78
  0   69  8.95 157  1.37   0.00   0  0.00   0.00   0  0.00  23  0  4  1 72

Not sure what to do/try to give it a kick :(  Or did I just lose that whole 
file system?


This is with FreeBSD 4.x still ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664




Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Incorrectly followed steps to re-sync vinum subdisk

2005-11-10 Thread Ahnjoan Amous
Tonight while checking a few things on my personal machine I noticed
that one of my vinum sub disks was stale.  Included below are the
steps I took attempting to remedy this.  Clearly I did not follow the
proper procedures and now the volume is un-mountable.  Currently when
I attempt to mount the /export filesystem the machine panics and
reboots.  My apologies in advance for the length of information
included.  I include it so that it might aid in recovery of the data
on this filesystem.  It has been cleaned up a bit but am happy to
provide the full output with the multiple help command output. 
Shortly after the command sequence included the machine rebooted. 
Since then I have booted in to single user mode and commented out
/export from fstab.  The machine is fully functioning with the
exception of the important data on /export.

If anyone doesn't believe I will be able to recover the data on my own
but does know of a company that might be able to recover data written
on a vinum mirror would you send me their name?

Thank you
Ahnjoan

[EMAIL PROTECTED]:~  df -k
Filesystem   1K-blocks UsedAvail Capacity  Mounted on
/dev/vinum/root101297461138   870800 7%/
devfs110   100%/dev
/dev/vinum/var 1012974   355886   57605238%/var
/dev/vinum/crash   20260306  1863942 0%/var/crash
/dev/vinum/tmp 1012974   16   931922 0%/tmp
/dev/vinum/usr 8122126  4714322  275803463%/usr
/dev/vinum/home4058062 6922  3726496 0%/home
/dev/vinum/tmp1   50777034 35262392 1145248075%/export
procfs   440   100%/proc
devfs110   100%/var/db/dhcpd/dev
#
[EMAIL PROTECTED]:~  vinum list
2 drives:
D been  State: up   /dev/ad2s1h A: 7739/76347 MB (10%)
D evie  State: up   /dev/ad0s1h A: 7739/76347 MB (10%)

7 volumes:
V root  State: up   Plexes:   2 Size:   1024 MB
V var   State: up   Plexes:   2 Size:   1024 MB
V crash State: up   Plexes:   2 Size:   2048 MB
V tmp   State: up   Plexes:   2 Size:   1024 MB
V usr   State: up   Plexes:   2 Size:   8192 MB
V home  State: up   Plexes:   2 Size:   4096 MB
V tmp1  State: up   Plexes:   2 Size: 50 GB

14 plexes:
P root.p0 C State: up   Subdisks: 1 Size:   1024 MB
P var.p0  C State: up   Subdisks: 1 Size:   1024 MB
P crash.p0C State: up   Subdisks: 1 Size:   2048 MB
P tmp.p0  C State: up   Subdisks: 1 Size:   1024 MB
P usr.p0  C State: up   Subdisks: 1 Size:   8192 MB
P home.p0 C State: up   Subdisks: 1 Size:   4096 MB
P tmp1.p0 C State: faulty   Subdisks: 1 Size: 50 GB
P root.p1 C State: up   Subdisks: 1 Size:   1024 MB
P var.p1  C State: up   Subdisks: 1 Size:   1024 MB
P crash.p1C State: up   Subdisks: 1 Size:   2048 MB
P tmp.p1  C State: up   Subdisks: 1 Size:   1024 MB
P usr.p1  C State: up   Subdisks: 1 Size:   8192 MB
P home.p1 C State: up   Subdisks: 1 Size:   4096 MB
P tmp1.p1 C State: up   Subdisks: 1 Size: 50 GB

14 subdisks:
S root.p0.s0State: up   D: been Size:   1024 MB
S var.p0.s0 State: up   D: been Size:   1024 MB
S crash.p0.s0   State: up   D: been Size:   2048 MB
S tmp.p0.s0 State: up   D: been Size:   1024 MB
S usr.p0.s0 State: up   D: been Size:   8192 MB
S home.p0.s0State: up   D: been Size:   4096 MB
S tmp1.p0.s0State: staleD: been Size: 50 GB
S root.p1.s0State: up   D: evie Size:   1024 MB
S var.p1.s0 State: up   D: evie Size:   1024 MB
S crash.p1.s0   State: up   D: evie Size:   2048 MB
S tmp.p1.s0 State: up   D: evie Size:   1024 MB
S usr.p1.s0 State: up   D: evie Size:   8192 MB
S home.p1.s0State: up   D: evie Size:   4096 MB
S tmp1.p1.s0State: up   D: evie Size: 50 GB
#
[EMAIL PROTECTED]:~  vinum
vinum - detach tmp1.p0.s0
#
vinum - list
2 drives:
D been  State: up   /dev

Incorrectly followed steps to re-sync vinum subdisk

2005-11-10 Thread Ahnjoan Amous
Tonight while checking a few things on my personal machine I noticed
that one of my vinum sub disks was stale.  Included below are the
steps I took attempting to remedy this.  Clearly I did not follow the
proper procedures and now the volume is un-mountable.  Currently when
I attempt to mount the /export filesystem the machine panics and
reboots.  My apologies in advance for the length of information
included.  I include it so that it might aid in recovery of the data
on this filesystem.  It has been cleaned up a bit but am happy to
provide the full output with the multiple help command output.
Shortly after the command sequence included the machine rebooted.
Since then I have booted in to single user mode and commented out
/export from fstab.  The machine is fully functioning with the
exception of the important data on /export.

If anyone doesn't believe I will be able to recover the data on my own
but does know of a company that might be able to recover data written
on a vinum mirror would you send me their name?

Thank you
Ahnjoan

[EMAIL PROTECTED]:~  df -k
Filesystem   1K-blocks UsedAvail Capacity  Mounted on
/dev/vinum/root101297461138   870800 7%/
devfs110   100%/dev
/dev/vinum/tmp1   50777034 35262392 1145248075%/export
procfs   440   100%/proc
devfs110   100%/var/db/dhcpd/dev
#
[EMAIL PROTECTED]:~  vinum list
2 drives:
D been  State: up   /dev/ad2s1h A: 7739/76347 MB (10%)
D evie  State: up   /dev/ad0s1h A: 7739/76347 MB (10%)

7 volumes:
V root  State: up   Plexes:   2 Size:   1024 MB
V tmp1  State: up   Plexes:   2 Size: 50 GB

14 plexes:
P root.p0 C State: up   Subdisks: 1 Size:   1024 MB
P tmp1.p0 C State: faulty   Subdisks: 1 Size: 50 GB
P root.p1 C State: up   Subdisks: 1 Size:   1024 MB
P tmp1.p1 C State: up   Subdisks: 1 Size: 50 GB

14 subdisks:
S root.p0.s0State: up   D: been Size:   1024 MB
S tmp1.p0.s0State: staleD: been Size: 50 GB
S root.p1.s0State: up   D: evie Size:   1024 MB
S tmp1.p1.s0State: up   D: evie Size: 50 GB
#
[EMAIL PROTECTED]:~  vinum
vinum - detach tmp1.p0.s0
#
vinum - list
2 drives:
D been  State: up   /dev/ad2s1h A: 7739/76347 MB (10%)
D evie  State: up   /dev/ad0s1h A: 7739/76347 MB (10%)

7 volumes:
V root  State: up   Plexes:   2 Size:   1024 MB
V tmp1  State: up   Plexes:   2 Size: 50 GB

14 plexes:
P root.p0 C State: up   Subdisks: 1 Size:   1024 MB
P tmp1.p0 C State: faulty   Subdisks: 0 Size:  0  B
P root.p1 C State: up   Subdisks: 1 Size:   1024 MB
P tmp1.p1 C State: up   Subdisks: 1 Size: 50 GB

14 subdisks:
S root.p0.s0State: up   D: been Size:   1024 MB
S tmp1.p0.s0State: staleD: been Size: 50 GB
S root.p1.s0State: up   D: evie Size:   1024 MB
S tmp1.p1.s0State: up   D: evie Size: 50 GB
#
vinum - start tmp1.p0.s0
Can't start tmp1.p0.s0: Invalid argument (22)
#
vinum - start
** no additional drives found: No such file or directory
Can't save Vinum config: No child processes
#
vinum - start tmp1.p0.s0
Can't start tmp1.p0.s0: Invalid argument (22)
#
vinum - list
2 drives:
D been  State: up   /dev/ad2s1h A: 7739/76347 MB (10%)
D evie  State: up   /dev/ad0s1h A: 7739/76347 MB (10%)

7 volumes:
V root  State: up   Plexes:   2 Size:   1024 MB
V tmp1  State: up   Plexes:   2 Size: 50 GB

14 plexes:
P root.p0 C State: up   Subdisks: 1 Size:   1024 MB
P tmp1.p0 C State: faulty   Subdisks: 0 Size:  0  B
P root.p1 C State: up   Subdisks: 1 Size:   1024 MB
P tmp1.p1 C State: up   Subdisks: 1 Size: 50 GB

14 subdisks:
S root.p0.s0State: up   D: been Size:   1024 MB
S tmp1.p0.s0

Incorrectly followed steps to re-sync vinum subdisk

2005-11-08 Thread Ahnjoan Amous
Tonight while checking a few things on my personal machine I noticed
that one of my vinum sub disks was stale. Included below are the
steps I took attempting to remedy this. Clearly I did not follow the
proper procedures and now the volume is un-mountable. Currently when
I attempt to mount the /export filesystem the machine panics and
reboots. My apologies in advance for the length of information
included. I include it so that it might aid in recovery of the
data on this filesystem. It has been cleaned up a bit but am happy
to provide the full output with the multiple help command output.
Shortly after the command sequence included the machine rebooted.
Since then I have booted in to single user mode and commented out
/export from fstab. The machine is fully functioning with the
exception of the important data on /export.

 If anyone doesn't believe I will be able to recover the data on my
own but does know of a company that might be able to recover data
written on a vinum mirror would you send me their name?

 Thank you

Ahnjoan

 [EMAIL PROTECTED]:~  df -k

Filesystem 1K-blocks Used Avail Capacity Mounted on

/dev/vinum/root 1012974 61138 870800 7% /

devfs 1 1 0 100% /dev

/dev/vinum/var 1012974 355886 576052 38% /var

/dev/vinum/crash 2026030 6 1863942 0% /var/crash

/dev/vinum/tmp 1012974 16 931922 0% /tmp

/dev/vinum/usr 8122126 4714322 2758034 63% /usr

/dev/vinum/home 4058062 6922 3726496 0% /home

/dev/vinum/tmp1 50777034 35262392 11452480 75% /export

procfs 4 4 0 100% /proc

devfs 1 1 0 100% /var/db/dhcpd/dev

##

[EMAIL PROTECTED]:~  vinum list

2 drives:

D been State: up /dev/ad2s1h A: 7739/76347 MB (10%)

D evie State: up /dev/ad0s1h A: 7739/76347 MB (10%)

 7 volumes:

V root State: up Plexes: 2 Size: 1024 MB

V var State: up Plexes: 2 Size: 1024 MB

V crash State: up Plexes: 2 Size: 2048 MB

V tmp State: up Plexes: 2 Size: 1024 MB

V usr State: up Plexes: 2 Size: 8192 MB

V home State: up Plexes: 2 Size: 4096 MB

V tmp1 State: up Plexes: 2 Size: 50 GB

 14 plexes:

P root.p0 C State: up Subdisks: 1 Size: 1024 MB

P var.p0 C State: up Subdisks: 1 Size: 1024 MB

P crash.p0 C State: up Subdisks: 1 Size: 2048 MB

P tmp.p0 C State: up Subdisks: 1 Size: 1024 MB

P usr.p0 C State: up Subdisks: 1 Size: 8192 MB

P home.p0 C State: up Subdisks: 1 Size: 4096 MB

P tmp1.p0 C State: faulty Subdisks: 1 Size: 50 GB

P root.p1 C State: up Subdisks: 1 Size: 1024 MB

P var.p1 C State: up Subdisks: 1 Size: 1024 MB

P crash.p1 C State: up Subdisks: 1 Size: 2048 MB

P tmp.p1 C State: up Subdisks: 1 Size: 1024 MB

P usr.p1 C State: up Subdisks: 1 Size: 8192 MB

P home.p1 C State: up Subdisks: 1 Size: 4096 MB

P tmp1.p1 C State: up Subdisks: 1 Size: 50 GB

 14 subdisks:

S root.p0.s0 State: up D: been Size: 1024 MB

S var.p0.s0 State: up D: been Size: 1024 MB

S crash.p0.s0 State: up D: been Size: 2048 MB

S tmp.p0.s0 State: up D: been Size: 1024 MB

S usr.p0.s0 State: up D: been Size: 8192 MB

S home.p0.s0 State: up D: been Size: 4096 MB

S tmp1.p0.s0 State: stale D: been Size: 50 GB

S root.p1.s0 State: up D: evie Size: 1024 MB

S var.p1.s0 State: up D: evie Size: 1024 MB

S crash.p1.s0 State: up D: evie Size: 2048 MB

S tmp.p1.s0 State: up D: evie Size: 1024 MB

S usr.p1.s0 State: up D: evie Size: 8192 MB

S home.p1.s0 State: up D: evie Size: 4096 MB

S tmp1.p1.s0 State: up D: evie Size: 50 GB

##

[EMAIL PROTECTED]:~  vinum

vinum - detach tmp1.p0.s0

##

vinum - list

2 drives:

D been State: up /dev/ad2s1h A: 7739/76347 MB (10%)

D evie State: up /dev/ad0s1h A: 7739/76347 MB (10%)

 7 volumes:

V root State: up Plexes: 2 Size: 1024 MB

V var State: up Plexes: 2 Size: 1024 MB

V crash State: up Plexes: 2 Size: 2048 MB

V tmp State: up Plexes: 2 Size: 1024 MB

V usr State: up Plexes: 2 Size: 8192 MB

V home State: up Plexes: 2 Size: 4096 MB

V tmp1 State: up Plexes: 2 Size: 50 GB

 14 plexes:

P root.p0 C State: up Subdisks: 1 Size: 1024 MB

P var.p0 C State: up Subdisks: 1 Size: 1024 MB

P crash.p0 C State: up Subdisks: 1 Size: 2048 MB

P tmp.p0 C State: up Subdisks: 1 Size: 1024 MB

P usr.p0 C State: up Subdisks: 1 Size: 8192 MB

P home.p0 C State: up Subdisks: 1 Size: 4096 MB

P tmp1.p0 C State: faulty Subdisks: 0 Size: 0 B

P root.p1 C State: up Subdisks: 1 Size: 1024 MB

P var.p1 C State: up Subdisks: 1 Size: 1024 MB

P crash.p1 C State: up Subdisks: 1 Size: 2048 MB

P tmp.p1 C State: up Subdisks: 1 Size: 1024 MB

P usr.p1 C State: up Subdisks: 1 Size: 8192 MB

P home.p1 C State: up Subdisks: 1 Size: 4096 MB

P tmp1.p1 C State: up Subdisks: 1 Size: 50 GB

 14 subdisks:

S root.p0.s0 State: up D: been Size: 1024 MB

S var.p0.s0 State: up D: been Size: 1024 MB

S crash.p0.s0 State: up D: been Size: 2048 MB

S tmp.p0.s0 State: up D: been Size: 1024 MB

S usr.p0.s0 State: up D: been

Re: Help setting up Vinum mirror

2005-10-04 Thread Peter Clutton
 In some releases 'vinum_enable=yes' in /etc/rc.conf caused a kernel panic at
 boot.
 Hence my question what OS...

 Arno

Thanks for the replies everyone and sorry for the slow reply.
I'm running 5.3, and realised I had to run newfs and mount etc. to get
it going. I was getting confused, thinking that it wouldn't add
the mount to /etc/fstab, and wondering if I should put it in there myself.

Apart from adding the vinum_enable etc., will I need to add the info
to /etc/fstab referring to the name I gave it in vinum? I'm starting to get
it a lot more after reading the man page a few times, and Greg Lehey's info
in The Complete FreeBSD, and was happy to get it running.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


i would like to recover data from my vinum partition

2005-10-04 Thread Milo
I broke my system (did a make world with RELENG_5_4 while running 4.10). I
was running mirrored /usr partitions with vinum using ad0 and ad2. This
morning I didn't have time to set up vinum and installed 5.4 from CD onto
ad0 as usual.

How can I see the data on my vinum drive (ad2) using mount? Other methods
you know of to access the drive to see if I can recover any data?

I tried mounting but only get incorrect super block error.
root# mount -t ufs /dev/ad2s1e /mnt/ad2
mount: /dev/ad2s1e on /mnt/ad2: incorrect super block

I also tried changing the fstype from vinum to 4.2BSD using bsdlabel but
no such luck.
root# bsdlabel ad2s1
# /dev/ad2s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
c: 241248042 0 unused 0 0 # raw part, don't edit
e: 233815971 0 vinum
f: 7432071 233815971 4.2BSD 2048 16384 89

Suggestions greatly appreciated. I searched the lists and google but could
not find the answer.
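
The next thing I'm planning to try, since the partition type is vinum, is to
load the geom_vinum module and see whether it finds the old volume on ad2
(untested; the volume name is whatever it was called before):

  kldload geom_vinum
  gvinum list
  mount /dev/gvinum/<volume-name> /mnt/ad2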

Milan
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Help setting up Vinum mirror

2005-09-29 Thread Peter Clutton
Hi, I have gone through the docs on this but am just missing a couple of points
conceptually, and would be grateful for any help.

Basically i have created two slices on two IDE drives and mounted them
(through fdisk, label etc), and had that all up and running correctly.
I then went into Vinum in interactive mode and (hopefully) created a
mirror by typing
mirror -d /dev/ad0s2d  /dev/ad1s1d  .  It then gave me successful
messages and gave the drive a name and said it's up.

I'm just wondering after this point, can i just type quit and it's up
and running? I noticed on reboot the directories that were my mount
point for these partitions say they are not a directory now. Do i
need to go on and mount the mirror? Or did i make a mistake mounting
these partitions before creating the mirror. How do i utilize it after
issuing the mirror command.
Many thanks in advance.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Help setting up Vinum mirror

2005-09-29 Thread FreeBSD usergroup


On 29 sep 2005, at 13:28, Peter Clutton wrote:


Hi, I have gone through the docs on this but am just missing a  
couple of points

conceptually, and would be grateful for any help.

Basically i have created two slices on two IDE drives and mounted them
(through fdisk, label etc), and had that all up and running correctly.
I then went into Vinum in interactive mode and (hopefully) created a
mirror by typing
mirror -d /dev/ad0s2d  /dev/ad1s1d  .  It then gave me successful
messages and gave the drive a name and said it's up.

I'm just wondering after this point, can i just type quit and it's up
and running? I noticed on reboot the directories that were my mount
point for these partitions say they are not a directory now. Do i
need to go on and mount the mirror? Or did i make a mistake mounting
these partitions before creating the mirror. How do i utilize it after
issuing the mirror command.
Many thanks in advance.



Which FBSD release do you use?

basically (FBSD  5.3)
for vinum you just have to type:
vinum start
after a reboot and it'll read the config from the disks and put the  
volume in /dev/vinum/

from there you can mount it manually or add a line to /etc/fstab
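
e.g. (using whatever name you gave the volume):

  newfs /dev/vinum/<name>        # only the very first time; it wipes the volume
  mount /dev/vinum/<name> /mnt

or in /etc/fstab:

  /dev/vinum/<name>   /data   ufs   rw   2   2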

Arno

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Help setting up Vinum mirror

2005-09-29 Thread Drew Tomlinson

On 9/29/2005 10:04 AM FreeBSD usergroup wrote:



On 29 sep 2005, at 13:28, Peter Clutton wrote:


Hi, I have gone through the docs on this but am just missing a  
couple of points

conceptually, and would be grateful for any help.

Basically i have created two slices on two IDE drives and mounted them
(through fdisk, label etc), and had that all up and running correctly.
I then went into Vinum in interactive mode and (hopefully) created a
mirror by typing
mirror -d /dev/ad0s2d  /dev/ad1s1d  .  It then gave me successful
messages and gave the drive a name and said it's up.

I'm just wondering after this point, can i just type quit and it's up
and running? I noticed on reboot the directories that were my mount
point for these partitions say they are not a directory now. Do i
need to go on and mount the mirror? Or did i make a mistake mounting
these partitions before creating the mirror. How do i utilize it after
issuing the mirror command.
Many thanks in advance.



Which FBSD release do you use?

basically (FBSD  5.3)
for vinum you just have to type:
vinum start
after a reboot and it'll read the config from the disks and put the  
volume in /dev/vinum/
from there you can mount it manually or add a line to /etc/fstab 


Neither of these is probably necessary.  Vinum started automatically when 
'vinum' was typed on the console to create the mirror.  Once the mirror 
was created and shown as 'up', the volume was created in /dev/vinum.  
However there is something you need to add to /etc/rc.conf to have vinum 
start automatically upon booting and thus make your volume available for 
mounting.  Seems it was 'vinum_enable = yes' or something like that.  
Search /etc/defaults/rc.conf for the exact line.
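
Something like the following, though check /etc/defaults/rc.conf for the
exact spelling on your release (the second knob is only for gvinum on 5.3
and later):

  # /etc/rc.conf -- classic vinum
  start_vinum="YES"

  # /boot/loader.conf -- gvinum
  geom_vinum_load="YES"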


HTH,

Drew

--
Visit The Alchemist's Warehouse
Magic Tricks, DVDs, Videos, Books,  More!

http://www.alchemistswarehouse.com

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Help setting up Vinum mirror

2005-09-29 Thread FreeBSD usergroup


On 29 sep 2005, at 21:57, Drew Tomlinson wrote:


On 9/29/2005 10:04 AM FreeBSD usergroup wrote:




On 29 sep 2005, at 13:28, Peter Clutton wrote:



Hi, I have gone through the docs on this but am just missing a   
couple of points

conceptually, and would be grateful for any help.

Basically i have created two slices on two IDE drives and mounted  
them
(through fdisk, label etc), and had that all up and running  
correctly.

I then went into Vinum in interactive mode and (hopefully) created a
mirror by typing
mirror -d /dev/ad0s2d  /dev/ad1s1d  .  It then gave me successful
messages and gave the drive a name and said it's up.

I'm just wondering after this point, can i just type quit and  
it's up

and running? I noticed on reboot the directories that were my mount
point for these partitions say they are not a directory now. Do i
need to go on and mount the mirror? Or did i make a mistake mounting
these partitions before creating the mirror. How do i utilize it  
after

issuing the mirror command.
Many thanks in advance.




Which FBSD release do you use?

basically (FBSD  5.3)
for vinum you just have to type:
vinum start
after a reboot and it'll read the config from the disks and put  
the  volume in /dev/vinum/

from there you can mount it manually or add a line to /etc/fstab



Neither of this is probably necessary.  Vinum started automatically  
when 'vinum' was typed on the console to create the mirror.  Once  
the mirror was created and shown as 'up', the volume was created  
in /dev/vinum.  However there is something you need to add to /etc/ 
rc.conf to have vinum start automatically upon booting and thus  
make your volume available for mounting.  Seems it was  
'vinum_enable = yes' or something like that.  Search /etc/ 
defaults/rc.conf for the exact line.


HTH,

Drew

--


In some releases 'vinum_enable=yes' in /etc/rc.conf caused a kernel
panic at boot.

Hence my question what OS...

Arno
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Vinum migration 4.x-5.4

2005-08-19 Thread Robin Smith
There seems to be a consensus in the references I've found that vinum
is completely broken on 5.4 and that gvinum/geom_vinum is not ready
for production use.  As it seems to me, this means that anyone using
4.11 (say) and vinum will have to abandon vinum (i.e. quit doing software
RAID) in order to upgrade to 5.4.  That can be both laborious and slow
(e.g. if you have /usr on, say, a four-drive vinum volume in 4.11, you're
going to have to replace those drives with something else in order to go
to 5.4).  Is that false, and is there a relatively simple way to get 
geom_vinum in 5.4 to read a vinum configuration produced under 4.11 and
start the vinum volume as it is?

Robin Smith
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum migration 4.x-5.4

2005-08-19 Thread Stijn Hoop
On Fri, Aug 19, 2005 at 11:01:55AM -0500, Robin Smith wrote:
 There seems to be a consensus in the references I've found that vinum
 is completely broken on 5.4

That is true. IMHO it should be removed from RELENG_5 and _6 if it isn't
already.

 and that gvinum/geom_vinum is not ready for production use.

Well the only reason it might not be is that it hasn't seen widespread
testing, as far as I can tell it should all just work. I do use gvinum
on a 5-STABLE host and it has worked well for me in the past [1].

 As it seems to me, this means that anyone using
 4.11 (say) and vinum will have to abandon vinum (i.e. quit doing software
 RAID) in order to upgrade to 5.4.

5.4 does have alternatives to vinum (which is another reason why gvinum
hasn't received as much testing): gmirror, graid3, gstripe, gconcat.

 That can be both laborious and slow
 (e.g. if you have /usr on, say, a four-drive vinum volume in 4.11, you're
 going to have to replace those drives with something else in order to go
 to 5.4.

I'd say building a new test box is about the only sane way to do it.

 Is that false, and is there a relatively simple way to get 
 geom_vinum in 5.4 to read a vinum configuration produced under 4.11 and
 start the vinum volume as it is?

As far as I can tell, it should just work. To pick up the latest round
of vinum fixes it might be best to run 5-STABLE (ie. RELENG_5) but it
should not be necessary unless you run into difficulties.

But the only way to know for sure if things work, is to test...

--Stijn

[1] for some reason I discovered a configuration problem earlier this
week, but the other part of the mirror is holding up and it seems
that I can reconstruct the broken part this weekend. If anything,
it seems that a gvinum mirrored plex is robust.

-- 
Coca-Cola is solely responsible for ensuring that people - too stupid to know
not to tip half-ton machines on themselves - are safe. Forget parenting - the
blame is entirely on the corporation for designing machines that look so
innocent and yet are so deadly.
-- http://www.kuro5hin.org/?op=displaystory;sid=2001/10/28/212418/42


pgpPDFFxixFyi.pgp
Description: PGP signature


Re: Vinum migration 4.x-5.4

2005-08-19 Thread Paul Mather
On Fri, 19 Aug 2005 11:01:55 -0500, Robin Smith
[EMAIL PROTECTED] wrote:

 There seems to be a consensus in the references I've found that vinum
 is completely broken on 5.4 and that gvinum/geom_vinum is not ready
 for production use.  As it seems to me, this means that anyone using
 4.11 (say) and vinum will have to abandon vinum (i.e. quit doing
 software
 RAID) in order to upgrade to 5.4.  That can be both laborious and slow
 (e.g. if you have /usr on, say, a four-drive vinum volume in 4.11,
 you're
 going to have to replace those drives with something else in order to
 go
 to 5.4.  Is that false, and is there a relatively simple way to get 
 geom_vinum in 5.4 to read a vinum configuration produced under 4.11
 and
 start the vinum volume as it is?

I am using geom_vinum on RELENG_5 without problems.  However, I use only
mirrored and concat plexes, and most of the problems I've heard people
experiencing involve RAID 5 plexes.

Geom_vinum uses the same on-disk metadata format as Vinum, so it will
read a configuration produced under 4.x---in fact, this was one of its
design goals.  BTW, Vinum is not the only software RAID option under
5.x: you can use geom_concat (gconcat) or geom_stripe (gstripe) for RAID
0; geom_mirror (gmirror) for RAID 1; and geom_raid3 (graid3) for RAID 3.
I successfully replaced my all-mirrored geom_vinum setup in-place on one
system with a geom_mirror setup.
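
In practice, picking up a 4.x configuration should just be a matter of
loading the module on the 5.4 box and seeing what it finds, e.g.:

  # /boot/loader.conf
  geom_vinum_load="YES"

  # after a reboot (or a kldload geom_vinum):
  gvinum list
  ls /dev/gvinum/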

Finally, if you are migrating from 4.x to 5.x, you might consider a
binary installation with restore rather than a source upgrade.  That
way, you can newfs your filesystems as UFS2 and get support for, e.g.,
snapshots, background fsck, etc.

Cheers,

Paul.
-- 
e-mail: [EMAIL PROTECTED]

Without music to decorate it, time is just a bunch of boring production
 deadlines or dates by which bills must be paid.
--- Frank Vincent Zappa
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


vinum problem

2005-08-02 Thread yuri
Hi all,
 
I'm Makara from Cambodia. I'm a FreeBSD newbie. When I try to configure vinum,
I always get this message every time I start vinum:
 
vinum: Inappropriate ioctl for device
 
and a few minutes later my PC restarts. I hope you can help solve the problem. 
Thanks, and sorry for my English.


-
 Start your day with Yahoo! - make it your home page 
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Vinum Bootstrap Help

2005-07-22 Thread Ben Craig
Hi All,
 
I've been trying to get a bootstrapped vinum volume up and running on a 5.4
release system (generic kernel, minimal install), based on this How-to:
 
http://devel.reinikainen.net/docs/how-to/Vinum/
 
But I've managed to run into a problem that no amount of Googling, reading
the archive of this list, or reading the manual seems to help me get past.
 
Basically, I have Vinum configured fine and can successfully run:
 
vinum  create -f /etc/vinum.conf
 
The Vinum volume is all fine and a vinum  list shows no problems.  I can
also successfully do a fsck on each of the mounts, which are:
 
/
/home
/tmp
/var
 
However, it appears that the vinum config isn't being saved, as rebooting
the machine can't find the vinum root partition, and after manually booting
to the pre-vinum root (ufs:ad0s1a) running vinum  list shows no volume
information.
 
During the reboot, vinum appears to load ok, but it can't find the root (as
shown by the last bit of the dmesg):
 
vinum: loaded
vinum: no drives found
Mounting root from ufs:/dev/vinum/root
setrootbyname failed
ffs_mountroot: can't find rootvp
Root mount failed: 6
 
The relevant config files look like this:
 
/etc/fstab
 
# Device           Mountpoint  FStype  Options    Dump  Pass#
/dev/vinum/swap    none        swap    sw         0     0
/dev/vinum/root    /           ufs     rw         1     1
/dev/vinum/home    /home       ufs     rw         2     2
/dev/vinum/tmp     /tmp        ufs     rw         2     2
/dev/vinum/var     /var        ufs     rw         2     2
/dev/acd0          /cdrom      cd9660  ro,noauto  0     0

 
 
/boot/loader.conf
 
vinum_load=YES
vinum.autostart=YES
 
Any suggestions as to how to sort this would be greatly appreciated.
 
Regards,
 
Ben Craig.
 
 
 
 
 
 
 
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum Bootstrap Help

2005-07-22 Thread David Kelly


On Jul 22, 2005, at 5:51 PM, Ben Craig wrote:

However, it appears that the vinum config isn't being saved, as  
rebooting
the machine can't find the vinum root partition, and after manually  
booting
to the pre-vinum root (ufs:ad0s1a) running vinum  list shows no  
volume

information.


Wasn't trying to boot root but had same problem. Changed  
start_cmd=gvinum start (note the g) in /etc/rc.d/vinum (note the  
filename was not renamed) and suitably edit /etc/fstab and have had  
no problem since.


Or apparently rather than use start_vinum=YES in /etc/rc.conf and  
changing /etc/rc.d/vinum, placing geom_vinum_load=YES in /boot/ 
loader.conf does the trick.
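
i.e. something like this (I haven't tried it with root on a gvinum volume
myself, so treat the root line as a guess):

  # /boot/loader.conf
  geom_vinum_load="YES"

  # /etc/fstab then points at the gvinum nodes, e.g.
  /dev/gvinum/root   /       ufs   rw   1   1
  /dev/gvinum/home   /home   ufs   rw   2   2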


gvinum is said not to have all the features of vinum, but if vinum
won't start on boot then it isn't much good. gvinum has enough
features to work for me.


--
David Kelly N4HHE, [EMAIL PROTECTED]

Whom computers would destroy, they must first drive mad.

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Vinum subdisk crash

2005-06-30 Thread Gareth Bailey
Hello,

It appears that one of the vinum subdisks on my server has crashed. On 
rebooting I get the following message:


-- start message --

Warning: defective objects

V usr State:down Plexes:2 Size:37GB
P usr.p0 C State:faulty Subdisks:1 Size:37GB
P usr.p1 C State:faulty Subdisks:1 Size:37GB
S usr.p0.s0 State:crashed P0:0 B Size:37GB
S usr.p1.s0 State:stale P0:0 B Size:37GB

[some fsck messages]
Can't open /dev/vinum/usr: Input/output error
[some more fsck messages]
THE FOLLOWING FILE SYSTEM HAD AN UNEXPECTED INCONSISTENCY:
/dev/vinum/usr (/usr)
Automatic file system check failed . . . help!
Enter full pathname of shell

-- end message --

I have a straight forward configuration based on the Bootstrapping Vinum: A 
Foundation for Reliable Servers article by Robert Van Valzah.
What could have caused this? The disks are pretty new. Please advise on the 
quickest route to getting our server back online.

Much appreciated,

Gareth Bailey
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum subdisk crash

2005-06-30 Thread Gareth Bailey
Hello everyone,

Just to add, I had to type out the above message manually since I can't get 
access to anything with the crashed subdisk on /usr.
With regard to Greg's requests for information when reporting vinum problems 
as stated on the vinumvm.org website (http://vinumvm.org), I can provide the 
following info:

What problems are you having?
My usr.p0.s0 subdisk reports a 'crashed' status

Which version of FreeBSD are you running?
4.10 Stable

Have you made any changes to the system sources, including Vinum?
No

Supply the output of the *vinum list* command. If you can't start Vinum, 
supply the on-disk configuration, as described below. If you can't start 
Vinum,
then (and only then) send a copy of the configuration file.
I can't get anything off the system, and its too long to type out! (I have 
the same layout as in the Van Valzah article.)

Supply an *extract* of the Vinum history file. Unless you have explicitly 
renamed it, it will be */var/log/vinum_history*. This file can get very big; 
please limit it to the time around when you have the problems. Each line 
contains a timestamp at the beginning, so you will have no difficulty in 
establishing which data is of relevance.
I will summarise the tail of vinum_history (doesn't seem to provide any 
interesting info):
30 Jun 2005 ***vinum started***
30 Jun 2005 list
30 Jun 2005 ***vinum started***
30 Jun 2005 dumpconfig

Supply an *extract* of the file */var/log/messages*. Restrict the extract to 
the same time frame as the history file. Again, each line contains a 
timestamp at the beginning, so you will have no difficulty in establishing 
which data is of relevance.
Again, I will summarise the tail contents of messages:
30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) trying PIO mode
30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) status=59 error=40
30 Jun server /kernel: vinum: usr.p0.s0 is crashed by force
30 Jun server /kernel: vinum: usr.p0 is faulty
30 Jun server /kernel: vinum: usr is down
30 Jun server /kernel: fatal:usr.p0.s0 read error, block 29126985 for 49152 
bytes
30 Jun server /kernel: usr.p0.s0: user buffer block 28102720 for 49152 bytes

If you have a crash, please supply a backtrace from the dump analysis as 
discussed below under Kernel Panics. Please don't delete the crash dump; it 
may be needed for further analysis.
I'm not sure if a kernel panic occurred?

I hope this information helps, and that someone can give me some advice! 

Cheers,
Gareth

On 6/30/05, Gareth Bailey [EMAIL PROTECTED] wrote:
 
 Hello,
 
 It appears that one of the vinum subdisks on my server has crashed. On 
 rebooting I get the following message:
 
 
 -- start message --
 
 Warning: defective objects
 
 V usr State:down Plexes:2 Size:37GB
 P usr.p0 C State:faulty Subdisks:1 Size:37GB
 P usr.p1 C State:faulty Subdisks:1 Size:37GB
 S usr.p0.s0 State:crashed P0:0 B Size:37GB
 S usr.p1.s0 State:stale P0:0 B Size:37GB
 
 [some fsck messages]
 Can't open /dev/vinum/usr: Input/output error
 [some more fsck messages]
 THE FOLLOWING FILE SYSTEM HAD AN UNEXPECTED INCONSISTENCY:
 /dev/vinum/usr (/usr)
 Automatic file system check failed . . . help!
 Enter full pathname of shell
 
 -- end message --
 
 I have a straight forward configuration based on the Bootstrapping Vinum: 
 A Foundation for Reliable Servers article by Robert Van Valzah.
 What could have caused this? The disks are pretty new. Please advise on 
 the quickest route to getting our server back online.
 
 Much appreciated,
 
 Gareth Bailey
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum subdisk crash

2005-06-30 Thread Greg 'groggy' Lehey
[Format recovered--see http://www.lemis.com/email/email-format.html]

Broken wrapping, unclear attribution, incorrect quotation levels.  It
took five minutes of my time fixing this message to a point where I
could reply to it.

On Thursday, 30 June 2005 at 15:37:56 +0200, Gareth Bailey wrote:
 Hello everyone,

 Just to add, I had to type out the above message manually since I can't get
 access to anything with the crashed subdisk on /usr.
 With regard to Greg's requests for information when reporting vinum problems
 as stated on vinumvm.org http://vinumvm.org's website, I can provide the
 following info:

 What problems are you having?

 My usr.p0.s0 subdisk reports a 'crashed' status

 Supply the output of the *vinum list* command. If you can't start
 Vinum, supply the on-disk configuration, as described below. If you
 can't start Vinum, then (and only then) send a copy of the
 configuration file.

 I can't get anything off the system, and its too long to type out!
 (I have the same layout as in the Van Valzah article.)

Would you like a reply like "I have an answer, but it's too long to
type out"?  Do you really expect me to go and re-read Bob's article?
This, along with your hard-to-read message, is a good way to be
ignored.

 Supply an *extract* of the file */var/log/messages*. Restrict the
 extract to the same time frame as the history file. Again, each
 line contains a timestamp at the beginning, so you will have no
 difficulty in establishing which data is of relevance.

 30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
 29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) trying PIO mode
 30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
 29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) status=59 error=40

You have a hardware problem with /dev/ad0.

 30 Jun server /kernel: vinum: usr.p0.s0 is crashed by force
 30 Jun server /kernel: vinum: usr.p0 is faulty
 30 Jun server /kernel: vinum: usr is down

And this suggests that your configuration is non-resilient.

 I hope this information helps, and that someone can give me some
 advice!

Yes.  Replace the disk.
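Once the disk is swapped, the usual sequence looks roughly like this (a sketch using the object names from the listing above, and assuming the other plex still holds good data):

# recreate the bsdlabel on the new disk so the vinum partition exists again, then:
vinum start                  # re-read the on-disk configuration from the surviving drive
vinum start usr.p0.s0        # revive the replaced subdisk from the other plex
vinum list                   # watch the revive progress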

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
The virus contained in this message was detected by LEMIS anti-virus.

Finger [EMAIL PROTECTED] for PGP public key.
See complete headers for address and phone numbers.




Re: Vinum and Volumes Larger Than 2TB

2005-06-29 Thread Nikolas Britton
On 6/27/05, Bri [EMAIL PROTECTED] wrote:
  Howdy,
 
  I'm attempting to use Vinum to concat multiple plexes together to make a 
 single 4.5TB volume. I've noticed that once I hit the 2TB mark it seems to 
 fail, it looks like once it hits 2TB the size gets reset to 0. Example below,
 
 [EMAIL PROTECTED]:~# vinum create /etc/vinum0.conf
 2 drives:
 D partition0State: up   /dev/da0A: 0/0 MB
 D partition1State: up   /dev/da1A: 0/0 MB
 
 1 volumes:
 V vinum0State: up   Plexes:   1 Size:   1858 GB
 
 1 plexes:
 P vinum0.p0   C State: up   Subdisks: 2 Size:   3906 GB
 
 2 subdisks:
 S vinum0.p0.s0  State: up   D: partition0   Size:   1953 GB
 S vinum0.p0.s1  State: up   D: partition1   Size:   1953 GB
 
 
  Now I've seen mentions of people using Vinum on larger partitions and it 
 seems to work ok. Also when I use gvinum it succeeds, however given the state 
 of the gvinum implementation I'd like to stick with vinum.
 
 
  Suggestions/comments anyone?
 

What version of FreeBSD are you using? Also, I seem to remember from my
reading somewhere that there is still a soft limit of 2TB, depending
on what you do and how you do it, and that vinum had this problem.
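For what it's worth, the numbers in the listing above are consistent with a 32-bit sector count wrapping at 2TB (that is an assumption, but the arithmetic matches):

# 2^32 sectors of 512 bytes is 2048 GB, the point where such a counter wraps
echo '2^32 * 512 / 1024^3' | bc
2048
# 3906 GB of subdisks minus one 2048 GB wrap leaves the 1858 GB the volume reports
echo '3906 - 2048' | bc
1858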
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Vinum and Volumes Larger Than 2TB

2005-06-27 Thread Bri
 Howdy,

 I'm attempting to use Vinum to concat multiple plexes together to make a
single 4.5TB volume. I've noticed that once I hit the 2TB mark it seems to
fail; it looks like once it hits 2TB the size gets reset to 0. Example below,

[EMAIL PROTECTED]:~# vinum create /etc/vinum0.conf 
2 drives:
D partition0State: up   /dev/da0A: 0/0 MB
D partition1State: up   /dev/da1A: 0/0 MB

1 volumes:
V vinum0State: up   Plexes:   1 Size:   1858 GB

1 plexes:
P vinum0.p0   C State: up   Subdisks: 2 Size:   3906 GB

2 subdisks:
S vinum0.p0.s0  State: up   D: partition0   Size:   1953 GB
S vinum0.p0.s1  State: up   D: partition1   Size:   1953 GB


 Now I've seen mentions of people using Vinum on larger partitions and it seems
to work ok. Also when I use gvinum it succeeds; however, given the state of the
gvinum implementation I'd like to stick with vinum.


 Suggestions/comments anyone?

 Thanks in advance,

-Bri
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Vinum & GVinum Recommendations

2005-06-27 Thread Drew Tomlinson
I am using 4.11 and have been an FBSD user since the beginning of version 
4.  I successfully used vinum to create concatenated volumes that have 
grown over time.  Vinum has proved very stable as long as the drives 
were IDE or SCSI.  However, I outgrew the confines of my case and added a 
firewire enclosure that contains two IDE drives.  This was somewhere 
around 4.6.


Initially, the firewire drives worked well once I learned I needed to 
set the SCSI delay kernel config option back up to 15 secs from 5 
(unfortunately I lost my volumes getting that education).  But when I 
upgraded from 4.9 to 4.10, vinum and the firewire devices didn't play 
well.  I posted about it here:


http://lists.freebsd.org/pipermail/freebsd-questions/2004-December/069290.html
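For anyone hitting the same thing: the delay mentioned above is the SCSI_DELAY kernel option, and the value is in milliseconds. A sketch of the two usual ways to set it (which may not match the exact setup here):

options SCSI_DELAY=15000          # in the kernel configuration file
kern.cam.scsi_delay="15000"       # or, on 5.x and later, as a tunable in /boot/loader.conf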

Later, I discovered there was an sbp timeout issue that might be part of 
the problem and I posted about it here:


http://lists.freebsd.org/pipermail/freebsd-firewire/2005-February/000331.html

Anyway, I've been using the delete/create vinum volume workaround after 
every reboot as I never found a solution.  But this last time, I lost 
both vinum volumes performing that workaround.  I don't know what 
happened exactly other than fsck started getting disk errors.  At the 
first problem, I answered 'n' and aborted the fsck.  I tried the 
delete/create workaround again but was unable to stop or rm the 
subdisks, plexes, drives, etc.  Then I rebooted and did the delete/create
thing, but fsck could no longer read the disks.


Without going on and on, the bottom line is that now I've lost both 
vinum volumes again.  Since the bulk of my system is gone (I sure hope 
the tapes I have are good), I've decided to consider moving up to 
version 5.  I've googled for info on gvinum but see a lot of posts about 
it being flaky.  What's its status at this time?  I really like 
having one large concatenated drive for storing my digital pictures, 
video, etc. but at this point, stability is more important.  Also, how 
does ccd work with 5?


I'd appreciate hearing of your experiences with vinum, gvinum, and ccd, 
especially as they relate to firewire devices.


Thanks,

Drew

--
Visit The Alchemist's Warehouse
Magic Tricks, DVDs, Videos, Books, & More!

http://www.alchemistswarehouse.com

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum & GVinum Recommendations

2005-06-27 Thread Mac Mason
On Mon, Jun 27, 2005 at 11:57:15AM -0700, Drew Tomlinson wrote:
 I'd appreciate hearing of your experiences with vinum, gvinum, and ccd, 
 especially as they relate to firewire devices.

In my experience, vinum doesn't play well with GEOM, and gvinum isn't
anywhere near feature-complete. (I haven't looked at gvinum in a while;
it has probably improved greatly)

On the other hand, using GEOM itself has worked quite well; I use
gmirror to mirror /usr, and gconcat to string a collection of drives
together. Both have worked flawlessly since I set them up.
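For readers wanting to reproduce that kind of setup, a minimal sketch (the device names and labels below are only examples, not the actual layout described above):

kldload geom_mirror
gmirror label -v -b round-robin gm0 /dev/ad1s1f /dev/ad2s1f
newfs /dev/mirror/gm0                   # then mount it as /usr

kldload geom_concat
gconcat label -v data /dev/ad4 /dev/ad6
newfs /dev/concat/data

# add geom_mirror_load="YES" and geom_concat_load="YES" to /boot/loader.conf
# so both devices come back automatically at boot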

--Mac




Re: Vinum & GVinum Recommendations

2005-06-27 Thread Drew Tomlinson

On 6/27/2005 12:04 PM Mac Mason wrote:


On Mon, Jun 27, 2005 at 11:57:15AM -0700, Drew Tomlinson wrote:

 I'd appreciate hearing of your experiences with vinum, gvinum, and ccd,
 especially as they relate to firewire devices.

In my experience, vinum doesn't play well with GEOM, and gvinum isn't
anywhere near feature-complete. (I haven't looked at gvinum in a while;
it has probably improved greatly)

On the other hand, using GEOM itself has worked quite well; I use
gmirror to mirror /usr, and gconcat to string a collection of drives
together. Both have worked flawlessly since I set them up.

   --Mac

Is GEOM something that is built in to 5, or is it a port?  I don't see it
in ports, so I assume it's built in.  If gconcat works well with
firewire, that would suit my needs just fine.


Thanks for your reply,

Drew

--
Visit The Alchemist's Warehouse
Magic Tricks, DVDs, Videos, Books, & More!

http://www.alchemistswarehouse.com

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


vinum question: ide chain crashed, one or two drives dead?

2005-06-22 Thread Joseph Kerian
I was away for a long weekend when I received an annoyed phone call
regarding a site that I manage. While it was not terribly difficult to
bring the website back online, I lost either a controller card or an
IDE cable, and this produced a string of errors that eventually led to
a kernel panic. I would very much like to be able to recover some of
this data, and am curious if I am able to.

Running FreeBSD nene 5.3-RELEASE-p5 with a custom kernel to include
IPFW options.

The following was my /var/log/messages for the relevant times:
Jun 20 04:30:44 nene kernel: ad2: WARNING - removed from configuration
Jun 20 04:30:44 nene kernel: ad3: WARNING - removed from configuration
Jun 20 04:30:44 nene kernel: ata1-master: FAILURE - unknown CMD (0xb0) timed out
Jun 20 04:30:44 nene kernel: vinum: ideraid.p0.s0 is crashed by force
Jun 20 04:30:44 nene kernel: vinum: ideraid.p0 is degraded
Jun 20 04:30:44 nene kernel: fatal:ideraid.p0.s0 read error, block
231758025 for 16384 bytes
Jun 20 04:30:44 nene kernel: ideraid.p0.s0: user buffer block
1158788032 for 16384 bytes
Jun 20 04:30:44 nene kernel: dua: fatal drive I/O error, block
231758025 for 16384 bytes
Jun 20 04:30:44 nene kernel: vinum: drive dua is down
Jun 20 04:30:44 nene kernel: vinum: ideraid.p0.s0 is stale by force
Jun 20 04:30:44 nene kernel: fatal :ideraid.p0.s0 write error, block
231758025 for 16384 bytes
Jun 20 04:30:44 nene kernel: ideraid.p0.s0: user buffer block
1158788032 for 16384 bytes
Jun 20 04:30:44 nene kernel: dua: fatal drive I/O error, block
231758025 for 16384 bytes
Jun 20 04:30:44 nene smartd[604]: Device: /dev/ad2, failed to read
SMART Attribute Data
Jun 20 04:30:44 nene kernel: vinum: Can't write config to /dev/ad3a, error 6
Jun 20 04:30:44 nene kernel: vinum: drive eva34 is down
Jun 20 04:30:44 nene kernel: vinum: ideraid.p0.s1 is crashed
Jun 20 04:30:44 nene kernel: vinum: ideraid.p0 is corrupt
Jun 21 03:01:08 nene kernel: fatal:ideraid.p0.s1 read error, block
301684457 for 16384 bytes
Jun 21 03:01:08 nene kernel: ideraid.p0.s1: user buffer block
1508419040 for 16384 bytes
Jun 21 03:01:08 nene kernel: eva34: fatal drive I/O error, block
301684457 for 16384 bytes
Jun 21 03:01:08 nene kernel: fatal:ideraid.p0.s1 read error, block
301684457 for 16384 bytes
Jun 21 03:01:08 nene kernel: ideraid.p0.s1: user buffer block
1508419040 for 16384 bytes
Jun 21 03:01:08 nene kernel: eva34: fatal drive I/O error, block
301684457 for 16384 bytes
Jun 21 03:01:09 nene kernel: 4 bytes
Jun 21 03:01:09 nene kernel: fatal:ideraid.p0.s1 read error, block
301684457 for 16384 bytes
Jun 21 03:01:09 nene kernel: ideraid.p0.s1: user buffer block
1508419040 for 16384 bytes
Jun 21 03:01:09 nene kernel: eva34: fatal drive I/O error, block
301684457 for 16384 bytes
(approximately 3000 repeats of those 3 lines later)
Jun 21 03:01:37 nene kernel: vinum: ideraid.p0.s1 is stale by force
Jun 21 03:01:37 nene kernel: vinum: ideraid.p0.s0 is crashed by force
Jun 21 03:01:37 nene kernel: fatal:ideraid.p0.s0 read error, block
177262153 for 16384 bytes
Jun 21 03:01:37 nene kernel: ideraid.p0.s0: user buffer block
886309184 for 16384 bytes
Jun 21 03:01:37 nene kernel: dua: fatal drive I/O error, block
177262153 for 16384 bytes
Jun 21 03:01:37 nene kernel: fatal:ideraid.p0.s0 read error, block
177563337 for 16384 bytes
Jun 21 03:01:37 nene kernel: ideraid.p0.s0: user buffer block
887814592 for 16384 bytes
Jun 21 03:01:37 nene kernel: dua: fatal drive I/O error, block
177563337 for 16384 bytes
Jun 21 03:01:37 nene kernel: fatal:ideraid.p0.s0 read error, block
178466889 for 16384 bytes
Jun 21 03:01:37 nene kernel: ideraid.p0.s0: user buffer block
892330816 for 16384 bytes
Jun 21 03:01:37 nene kernel: dua: fatal drive I/O error, block
178466889 for 16384 bytes
Jun 21 03:01:37 nene kernel: vinum: ideraid.p0.s0 is stale by force

vinum list: (after using start)
6 drives:
D gva250State: up   /dev/ad7a   A: 0/238475 MB (0%)
D eva200State: up   /dev/ad6a   A: 0/190782 MB (0%)
D gva200State: up   /dev/ad5a   A: 0/190782 MB (0%)
D eva250State: up   /dev/ad4a   A: 0/238475 MB (0%)
D eva34 State: up   /dev/ad3a   A: 0/190782 MB (0%)
D dua   State: up   /dev/ad2a   A: 0/190782 MB (0%)

2 volumes:
V ideraid   State: up   Plexes:   1   Size:931 GB
V mirror2   State: up   Plexes:   2   Size: 46 GB

3 plexes:
P ideraid.p0 R5 State: corrupt Subdisks: 6   Size:931 GB
P mirror2.p0  C State: up   Subdisks: 1   Size: 46 GB
P mirror2.p1  C State: up   Subdisks: 1   Size: 46 GB

8 subdisks:
S ideraid.p0.s0 State: stale   D: dua  Size:186 GB
S ideraid.p0.s1 State: crashed D: eva34Size:186 GB
S ideraid.p0.s2 State: up   D: eva250   Size:186 GB
S ideraid.p0.s3 State: up   D: gva200   Size:186 GB
S ideraid.p0.s4

vinum question

2005-06-05 Thread Wojciech Puchar

i'm reading about the vinum disk manager; i did some tests and it works fine.

but the question - could the root partition be a vinum volume? (i have compiled
the kernel with vinum built in, not as a module).


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum question

2005-06-05 Thread TAOKA Fumiyoshi
On Sun, 5 Jun 2005 14:14:43 +0200 (CEST)
Wojciech Puchar [EMAIL PROTECTED] wrote:

 i'm reading about vinum disk manager, did some test, works fine.
 
 but the question - could root partition be vinum volume? (i have compiled 
 kernel with vinum built in, not as module).

FreeBSD Handbook
17.9 Using Vinum for the Root Filesystem
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html
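In short, the chapter boils down to loader settings along these lines, plus a root entry in /etc/fstab that points at the vinum volume (a sketch; see the handbook for the full procedure):

vinum_load="YES"            # /boot/loader.conf: load the module early
vinum.autostart="YES"       # scan the drives and start vinum before root is mounted
# /etc/fstab then uses something like /dev/vinum/root for /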

-- 
TAOKA Fumiyoshi
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum problems on FreeBSD 5.4-STABLE

2005-05-28 Thread FreeBSD questions mailing list


On 28 mei 2005, at 02:21, Kris Kirby wrote:



Trying to make a mirror of two slices, but I seem to be running into some
issues here. I'm not on questions, so please CC me on all replies.

Maybe it's a good idea to subscribe, cuz then you would have been able
to read what we wrote about this for the last couple of days



# /etc/vinum.conf
volume var
plex name var.p0 org concat
drive va device /dev/ad0s1g
sd name var.p0.s0 drive va length 256m
plex name var.p1 org concat
drive vb device /dev/ad1s1d
sd name var.p1.s0 drive vb length 256m

When I create it, using -f, I get:

vinum - l
2 drives:
D vaState: up   /dev/ad0s1g A: 1791/2048 MB
(87%) - note triple allocation
D vbState: up   /dev/ad1s1d A: 255/512 MB
(49%)

1 volumes: V var State: up Plexes:  2 Size:  256 MB

2 plexes:
P var.p0 C State: up Subdisks:  1 Size:  256 MB
P var.p1 C State: faulty Subdisks:  1 Size:  256 MB

2 subdisks:
S var.p0.s0 State: up D: va Size:  256 MB
S var.p1.s0 State: empty D: vb Size:  256 MB


vinum does not like FBSD 5.4, or vice versa; it is not supported

you could manually 'setstate up var.p1.s0' and 'setstate up var.p1'
that should work
i tested it with a raid 1 mirror, removed the HD that came up ok and
checked if the mirror would still come up, which it did

i definitely do not recommend it though
while you aren't relying on the raid yet, go use something other than vinum


Doesn't seem like this is right. I also run into a problem when trying to
do a resetconfig:

vinum - resetconfig
 WARNING!  This command will completely wipe out your vinum  
configuration.

 All data will be lost.  If you really want to do this, enter the text

 NO FUTURE
 Enter text - NO FUTURE
Can't find vinum config: Inappropriate ioctl for device

you'll be getting this all the time if you continue to use vinum in
FBSD 5.4

and the configuration is still there after I try to resetconfig.  When I
reboot and do a vinum start, I get an error that there are no drives in
the vinum config.


do 'vinum read' but expect to have kernel panics...



FreeBSD ginsu.catonic.net 5.4-STABLE FreeBSD 5.4-STABLE #1: Fri May 27
01:28:15 PDT 2005 [EMAIL PROTECTED]:/usr/src/sys/i386/ 
compile/SMP

i386

dmesg availible on request. No securelevel in place, at -1. Thanks in
advance.


again, i'd stay away from vinum/gvinum in FBSD 5.x  if i were you...
if you don't, come join me in the mental institute for stubborn vinum
users :)

Arno
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum: Inappropriate ioctl for device

2005-05-28 Thread FreeBSD questions mailing list


On 28 mei 2005, at 09:01, [EMAIL PROTECTED] wrote:





[...]


you're welcome

maybe the complete vinum chapter should be removed from the handbook?

Arno



Perhaps the Vinum chapter should say up front that Vinum works with  
FreeBSD 4.x but not with 5.x


jd



yeah better idea
Arno
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum: Inappropriate ioctl for device

2005-05-28 Thread web



[...]

you're welcome

maybe the complete vinum chapter should be removed from the handbook?

Arno


Perhaps the Vinum chapter should say up front that Vinum works with FreeBSD 
4.x but not with 5.x


jd


Janos Dohanics
[EMAIL PROTECTED]
http://www.3dresearch.com/ 


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Vinum problems on FreeBSD 5.4-STABLE

2005-05-28 Thread Kris Kirby

Trying to make a mirror of two slices, but I seem to be running into some 
issues here. I'm not on questions, so please CC me on all replies.

# /etc/vinum.conf
volume var
plex name var.p0 org concat
drive va device /dev/ad0s1g
sd name var.p0.s0 drive va length 256m
plex name var.p1 org concat
drive vb device /dev/ad1s1d
sd name var.p1.s0 drive vb length 256m

When I create it, using -f, I get:

vinum - l
2 drives:
D vaState: up   /dev/ad0s1g A: 1791/2048 MB 
(87%) - note triple allocation
D vbState: up   /dev/ad1s1d A: 255/512 MB 
(49%)

1 volumes: V var State: up Plexes:  2 Size:  256 MB

2 plexes: 
P var.p0 C State: up Subdisks:  1 Size:  256 MB
P var.p1 C State: faulty Subdisks:  1 Size:  256 MB

2 subdisks: 
S var.p0.s0 State: up D: va Size:  256 MB
S var.p1.s0 State: empty D: vb Size:  256 MB

Doesn't seem like this is right. I also run into a problem when trying to 
do a resetconfig:

vinum - resetconfig
 WARNING!  This command will completely wipe out your vinum configuration.
 All data will be lost.  If you really want to do this, enter the text

 NO FUTURE
 Enter text - NO FUTURE
Can't find vinum config: Inappropriate ioctl for device

and the configuration is still there after I try to resetconfig. When I 
reboot and do a vinum start, I get an error that there are no drives in 
the vinum config. 

FreeBSD ginsu.catonic.net 5.4-STABLE FreeBSD 5.4-STABLE #1: Fri May 27 
01:28:15 PDT 2005 [EMAIL PROTECTED]:/usr/src/sys/i386/compile/SMP  
i386

dmesg availible on request. No securelevel in place, at -1. Thanks in 
advance.

--
Kris Kirby [EMAIL PROTECTED]
   BIG BROTHER IS WATCHING YOU!
 This message brought to you by the US Department of Homeland Security
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum: Inappropriate ioctl for device

2005-05-28 Thread FreeBSD questions mailing list


On 27 mei 2005, at 16:38, [EMAIL PROTECTED] wrote:





[...]

not very encouraging... Is it a RAID 5 you were able to make work
under 5.4?

jd



hey
yeah a 3x160 GB RAID5 and 2x80 GB RAID 1 in FreeBSD 5.4-p1
 i get the same error message every time i start vinum or whenever i
execute a command in vinum
and really loads of kernel panics whenever i try to get it to work
after a crash
i have an unorthodox way of recovering from this because it's nearly
impossible to do stuff in vinum without causing kernel panics
yesterday everything was ruined again (after upgrading to p1)
i wiped the complete config (resetconfig - NO FUTURE)
read in the config file (create RAID5)
set every state up manually (setstate up ...)
and rebuild parity
took about 10 hours to rebuild but then everything is back up and
running

i tried Gvinum too but that doesn't have the setstate nor the rebuild
parity command and still you can't stop gvinum (gvinum stop doesn't
work, nor does kldunload geom_vinum.ko)

i have no way of changing to a different soft raid due to lack of
space to back up so i'm stuck with this for as long as it takes :)

so, one piece of advice: don't do it
hehehe
Arno



Many thanx... I guess I better stick with 4.11

jd



you're welcome

maybe the complete vinum chapter should be removed from the handbook?

Arno

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum: Inappropriate ioctl for device

2005-05-27 Thread FreeBSD questions mailing list


On 26 mei 2005, at 23:10, jd wrote:



I am trying to set up Vinum on a new system, and I get the error  
message:

vinum: Inappropriate ioctl for device.

Here are the details:

...

Vinum used to work beautiful under 4.11; I'm wondering what do I  
need to

change to make it work under 5.4?



Go back to 4.11!
vinum is a nightmare in 5.4
and gvinum is not nearly mature enough...

I do have it running but every update it takes me 2 days to get the  
RAIDs back up


Arno


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum: Inappropriate ioctl for device

2005-05-27 Thread FreeBSD questions mailing list


On 27 mei 2005, at 16:13, [EMAIL PROTECTED] wrote:





[...]





Go back to 4.11!
vinum is a nightmare in 5.4
and gvinum is not nearly mature enough...

I do have it running but every update it takes me 2 days to get the
RAIDs back up

Arno



Arno,

not very encouraging... Is it a RAID 5 you were able to make work  
under 5.4?


jd



hey
yeah a 3x160 GB RAID5 and 2x80 GB RAID 1 in FreeBSD 5.4-p1
 i get the same error message every time i start vinum or whenever i
execute a command in vinum
and really loads of kernel panics whenever i try to get it to work  
after a crash
i have an unorthodox way of recovering from this because it's nearly  
impossible to do stuff in vinum without causing kernel panics

yesterday everything was ruined again (after upgrading to p1)
i wiped the complete config (resetconfig - NO FUTURE)
read in the config file (create RAID5)
set every state up manually (setstate up ...)
and rebuild parity
took about 10 hours to rebuild but then everything is back up and  
running
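Spelled out as commands, that sequence is roughly the following (the object names are only examples, not the actual configuration):

vinum resetconfig                 # answer NO FUTURE at the prompt
vinum create /etc/vinum.conf      # read the configuration file back in
vinum setstate up raid5.p0.s0     # repeat for every subdisk and plex
vinum rebuildparity raid5.p0      # the ten-hour part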


i tried Gvinum too but that doesn't have the setstate nor the rebuild
parity command and still you can't stop gvinum (gvinum stop doesn't  
work, nor does kldunload geom_vinum.ko)


i have no way of changing to a different soft raid due to lack of  
space to back up so i'm stuck with this for as long as it takes :)


so, one piece of advice: don't do it
hehehe
Arno
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum: Inappropriate ioctl for device

2005-05-27 Thread Mark Bucciarelli

FreeBSD questions mailing list wrote:

i tried Gvinum too but that doesn't have the setstate nor the rebuild
parity command and still you can't stop gvinum (gvinum stop doesn't  
work, nor does kldunload geom_vinum.ko)


try gmirror for raid 1.  it worked great for me.

could gmirror and gstripe be used to get raid5?  i think i read a geom 
provider can be used as a consumer ...
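Stacking does work (a GEOM provider can indeed be consumed by another GEOM class), but gstripe over gmirror gives RAID 1+0 rather than RAID 5. A minimal sketch, with example device names:

gmirror label m0 /dev/da0 /dev/da1
gmirror label m1 /dev/da2 /dev/da3
gstripe label -v st0 /dev/mirror/m0 /dev/mirror/m1
newfs /dev/stripe/st0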


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


vinum: Inappropriate ioctl for device

2005-05-26 Thread jd

I am trying to set up Vinum on a new system, and I get the error message:
vinum: Inappropriate ioctl for device.

Here are the details:

- FreeBSD 5.4-RELEASE

- vinum list
3 drives:
D a State: up   /dev/ad0s1g A: 0/74692 MB (0%)
D b State: up   /dev/ad2s1h A: 0/74692 MB (0%)
D c State: up   /dev/ad3s1h A: 0/74692 MB (0%)

3 volumes:
V tmp   State: up   Plexes:   3 Size:  0 B
V var   State: up   Plexes:   3 Size:  0 B
V usr   State: up   Plexes:   3 Size:  0 B

9 plexes:
P tmp.p0 R5 State: degraded Subdisks: 1 Size:  0 B
P tmp.p1 R5 State: degraded Subdisks: 1 Size:  0 B
P tmp.p2 R5 State: degraded Subdisks: 1 Size:  0 B
P var.p0 R5 State: degraded Subdisks: 1 Size:  0 B
P var.p1 R5 State: degraded Subdisks: 1 Size:  0 B
P var.p2 R5 State: degraded Subdisks: 1 Size:  0 B
P usr.p0 R5 State: degraded Subdisks: 1 Size:  0 B
P usr.p1 R5 State: degraded Subdisks: 1 Size:  0 B
P usr.p2 R5 State: degraded Subdisks: 1 Size:  0 B

9 subdisks:
S tmp.p0.s0 State: emptyD: aSize:512 MB
S tmp.p1.s0 State: emptyD: bSize:512 MB
S tmp.p2.s0 State: emptyD: cSize:512 MB
S var.p0.s0 State: emptyD: aSize:   1024 MB
S var.p1.s0 State: emptyD: bSize:   1024 MB
S var.p2.s0 State: emptyD: cSize:   1024 MB
S usr.p0.s0 State: emptyD: aSize: 71 GB
S usr.p1.s0 State: emptyD: bSize: 71 GB
S usr.p2.s0 State: emptyD: cSize: 71 GB

- cat /var/log/vinum_history
26 May 2005 16:08:05.271518 *** vinum started ***
26 May 2005 16:08:43.517343 *** vinum started ***
26 May 2005 16:09:11.268732 create -f /etc/vinum.conf
drive a device /dev/ad0s1g
drive b device /dev/ad2s1h
drive c device /dev/ad3s1h
volume tmp setupstate
plex org raid5 512k
sd length 512m drive a
plex org raid5 512k
sd length 512m drive b
plex org raid5 512k
sd length 512m drive c
volume var setupstate
plex org raid5 512k
sd length 1024m drive a
plex org raid5 512k
sd length 1024m drive b
plex org raid5 512k
sd length 1024m drive c
volume usr setupstate
plex org raid5 512k
sd length 0 drive a
plex org raid5 512k
sd length 0 drive b
plex org raid5 512k
sd length 0 drive c

26 May 2005 16:47:17.470116 *** vinum started ***
26 May 2005 16:47:17.471861 list

- May 26 16:08:05 ferrando kernel: vinum: loaded
May 26 16:08:05 ferrando vinum: Inappropriate ioctl for device
May 26 16:08:43 ferrando vinum: Inappropriate ioctl for device
May 26 16:09:11 ferrando kernel: vinum: drive a is up
May 26 16:09:11 ferrando kernel: vinum: drive b is up
May 26 16:09:11 ferrando kernel: vinum: drive c is up
May 26 16:09:11 ferrando kernel: vinum: plex tmp.p0 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: tmp.p0 is degraded
May 26 16:09:11 ferrando kernel: vinum: tmp is up
May 26 16:09:11 ferrando kernel: vinum: plex tmp.p1 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: tmp.p1 is degraded
May 26 16:09:11 ferrando kernel: vinum: plex tmp.p2 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: tmp.p2 is degraded
May 26 16:09:11 ferrando kernel: vinum: plex var.p0 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: var.p0 is degraded
May 26 16:09:11 ferrando kernel: vinum: var is up
May 26 16:09:11 ferrando kernel: vinum: plex var.p1 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: var.p1 is degraded
May 26 16:09:11 ferrando kernel: vinum: plex var.p2 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: var.p2 is degraded
May 26 16:09:11 ferrando kernel: vinum: plex usr.p0 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: usr.p0 is degraded
May 26 16:09:11 ferrando kernel: vinum: usr is up
May 26 16:09:11 ferrando kernel: vinum: plex usr.p1 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum: usr.p1 is degraded
May 26 16:09:11 ferrando kernel: vinum: plex usr.p2 does not have at
least 3 subdisks
May 26 16:09:11 ferrando kernel: vinum
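Separate from the 5.4 support issues discussed elsewhere in this thread, the "plex ... does not have at least 3 subdisks" messages point at the configuration itself: a raid5 plex needs three or more subdisks inside the same plex, not three one-subdisk plexes per volume. A corrected sketch for one of the volumes (same sizes as above; parity still has to be built, e.g. with vinum init, before use):

volume tmp
plex org raid5 512k
sd length 512m drive a
sd length 512m drive b
sd length 512m drive c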

Re: vinum: Inappropriate ioctl for device

2005-05-26 Thread Andrea Venturoli

jd wrote:

I am trying to set up Vinum on a new system, and I get the error message:
vinum: Inappropriate ioctl for device.

Here are the details:

- FreeBSD 5.4-RELEASE


I'd be glad if someone steps in and says I'm wrong, but AFAIK vinum is 
not supported anymore on new releases.
At least in my experience it stopped working when I upgraded from 5.2.1 
to 5.3, producing continuous crashes until I switched to gmirror.
You might want to look at gvinum, but last I checked it wasn't quite 
finished.


 bye
av.

P.S. Please, let me know if you manage to make it.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Hardware RAID 5 - Need vinum?

2005-05-13 Thread Thomas Hurst
* Greg 'groggy' Lehey ([EMAIL PROTECTED]) wrote:

 There have been issues with growfs in the past; last time I looked
 it hadn't been updated to handle UFS 2.  If you don't need the UFS 2
 functionality, you might be better off using UFS 1 if you intend to
 grow the file system.

growfs gained basic UFS2 support in June 2002 according to the CVS
log.  It seems pretty unloved; the last interesting commit was 7
months ago (important fixes, still not MFCd) but that's why we
have backups, right? :)

-- 
Thomas 'Freaky' Hurst
http://hur.st/
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Hardware RAID 5 - Need vinum?

2005-05-11 Thread Tony Shadwick
The problem I've had in the past in Windows for example:
Drive D: is a RAID5 volume, 400GB, nearly full.  If I add a 200GB drive to 
the array, the 'disk' that Drive D: resides on is now ~600GB, but Drive D: 
will remain 400GB.  I would have to utilize a third party piece of 
software to resize Drive D: to utilize all 400GB, or create another 
partition to use that extra 200GB.

In my case, /media/video will still only have 400GB available to it.  I'm 
creating one partition on the array with one slice.  My understanding then 
is if I go into the label editor after adding my new drive, I'll have 
200GB of free space, and I could create another slice and another 
mountpoint, but not simply add that additional space to my original slice 
and mountpoint at /media/video.

Now, since I originally posted this message, I did more digging, and found 
some posts regarding growfs.  Perhaps that command is what I'm looking 
for, and would allow me to grow /media/video to use all 600GB in that 
case.

Now my only concern is whether or not the SX6000 supports nondestructively 
growing a RAID5 array.  If I'm right about growfs that is. :)

On Wed, 11 May 2005, Subhro wrote:
On 5/11/2005 2:35, Tony Shadwick wrote:
What my concern is when I start to fill up the ~400GB of space I'm giving 
myself with this set.  I would like to simply insert another 200GB drive 
and expand the array, allowing the hardware raid to do the work.
That is what everybody does. It is very much normal.
The problem I see with this is that yes, the /dev/(raid driver name)0 will 
now be that much larger, however the original partition size and the 
subsequent slices will still be the original size. 
I could not understand what you meant by RAID device entry would be larger. 
The various entries inside the /dev are nothing but sort of handles to the 
various devices present in the system. If you want to manipulate or utilize 
some device for a particular device present on your box from a particular 
application, then you can reference the same using the handles in the /dev. 
And the handles remains the same in size irrespective of whether you have 1 
hard disk or 100 hard disks in some kind of RAID.

Do I need to (and is there a way?) to utilize vinum and still allow the 
hardware raid controller to do the raid5 gruntwork and still have the 
ability to arbitrarily grow the volume as needed?  The only other solution 
I see is to use vinum to software-raid the set of drives, leaving it as a 
glorified ATA controller card, and the cpu/ram of the card unitilized and 
burden the system CPU and RAM with the task.
The main idea in favor of using Hardware RAID solutions over software RAID 
solutions is you can let the CPU do things which are more worthwhile than 
managing I/O. The I/O can be well handled and is indeed better handled by the 
chip on the RAID controller card than the CPU. If you add another disk to 
your RAID or replace a dead disk at any point of time, then the RAID card 
should automatically detect the change and rebuild the volumes as and when 
required. This would be completely transparent to the OS and sometimes also 
transparent to the user.

Regards
S.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Hardware RAID 5 - Need vinum?

2005-05-11 Thread Subhro
On 5/11/2005 19:33, Tony Shadwick wrote:
The problem I've had in the past in Windows for example:
Drive D: is a RAID5 volume, 400GB, nearly full.  If I add a 200GB 
drive to the array, the 'disk' that Drive D: resides on is now ~600GB, 
but Drive D: will remain 400GB.  I would have to utilize a third party 
piece of software to resize Drive D: to utilize all 400GB, or create 
another partition to use that extra 200GB.

In my case. /media/video will still only have 400GB available to it.  
I'm creating one partition on the array with one slice.  My 
understanding then is if I go into the label editor after adding my 
new drive, I'll have 200GB of free space, and I could create another 
slice and another mountpoint, but not simply add that additional space 
to my original slice and mountpoint at /media/video.

Now, since I originally posted this message, I did more digging, and 
found some posts regarding growfs.  Perhaps that command is what I'm 
looking for, and would allow me to grow /media/video to use all 600GB 
in that case.

Now my only concern is whether or not the SX6000 support 
nondestructively growing a RAID5 array.  If I'm right about growfs 
that is. :)
You have already answered your question :). BTW kindly do not top post,
and wrap mails at 72 characters. It really creates a mess in my text
mode client :(.

Regards
S.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Hardware RAID 5 - Need vinum?

2005-05-11 Thread Tony Shadwick
On Wed, 11 May 2005, Subhro wrote:
On 5/11/2005 19:33, Tony Shadwick wrote:
The problem I've had in the past in Windows for example:
Drive D: is a RAID5 volume, 400GB, nearly full.  If I add a 200GB drive to 
the array, the 'disk' that Drive D: resides on is now ~600GB, but Drive D: 
will remain 400GB.  I would have to utilize a third party piece of software 
to resize Drive D: to utilize all 400GB, or create another partition to use 
that extra 200GB.

In my case. /media/video will still only have 400GB available to it.  I'm 
creating one partition on the array with one slice.  My understanding then 
is if I go into the label editor after adding my new drive, I'll have 200GB 
of free space, and I could create another slice and another mountpoint, but 
not simply add that additional space to my original slice and mountpoint at 
/media/video.

Now, since I originally posted this message, I did more digging, and found 
some posts regarding growfs.  Perhaps that command is what I'm looking for, 
and would allow me to grow /media/video to use all 600GB in that case.

Now my only concern is whether or not the SX6000 support nondestructively 
growing a RAID5 array.  If I'm right about growfs that is. :)
You have already answered your question :). BTW kindly do not top post and 
wrap up mails at 72 characters. IT really creates a mess in my text mode 
client :(.

Regards
S.

Nani?  I'm using pine in its default config.  Totally bizarre.  I'll look
into it though.  Thanks for the help!

Tony
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Hardware RAID 5 - Need vinum?

2005-05-11 Thread Greg 'groggy' Lehey
On Tuesday, 10 May 2005 at 16:05:50 -0500, Tony Shadwick wrote:
 I've worked with RAID5 in FreeBSD in the past, with either vinum or a
 hardware raid solution.  Never had any problems either way.

 I'm now building a server for myself at home, and I'm creating a large
 volume to store video.  I have purchased 3 200GB EIDE hard drives, and a 6
 channel Promise SX6000 ATA RAID controller.

 I know how to set up a RAID5 set, and create a mountpoint (say
 /media/video).

 What my concern is when I start to fill up the ~400GB of space I'm giving
 myself with this set.  I would like to simply insert another 200GB drive
 and expand the array, allowing the hardware raid to do the work.

 The problem I see with this is that yes, the /dev/(raid driver name)0 will
 now be that much larger, however the original partition size and the
 subsequent slices will still be the original size.  Do I need to (and is
 there a way?) to utilize vinum and still allow the hardware raid
 controller to do the raid5 gruntwork and still have the ability to
 arbitrarily grow the volume as needed?  The only other solution I see is
 to use vinum to software-raid the set of drives, leaving it as a glorified
 ATA controller card, and the cpu/ram of the card unitilized and burden the
 system CPU and RAM with the task.

What you need here is not Vinum (which would replace the hardware RAID
array), but growfs.  You'd need that with Vinum as well.

There have been issues with growfs in the past; last time I looked it
hadn't been updated to handle UFS 2.  If you don't need the UFS 2
functionality, you might be better off using UFS 1 if you intend to
grow the file system.
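A sketch of the growfs step itself, once the controller has grown the array and the slice and label have been enlarged to match (the device name below is only a placeholder, not the Promise array's actual one):

umount /media/video
growfs /dev/da0s1e          # grow the UFS filesystem to fill the enlarged partition
fsck -t ufs /dev/da0s1e     # sanity check before remounting
mount /media/video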

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Hardware RAID 5 - Need vinum?

2005-05-10 Thread Tony Shadwick
I've worked with RAID5 in FreeBSD in the past, with either vinum or a 
hardware raid solution.  Never had any problems either way.

I'm now building a server for myself at home, and I'm creating a large 
volume to store video.  I have purchased 3 200GB EIDE hard drives, and a 6 
channel Promise SX6000 ATA RAID controller.

I know how to set up a RAID5 set, and create a mountpoint (say 
/media/video).

My concern is what happens when I start to fill up the ~400GB of space I'm giving 
myself with this set.  I would like to simply insert another 200GB drive 
and expand the array, allowing the hardware raid to do the work.

The problem I see with this is that yes, the /dev/(raid driver name)0 will 
now be that much larger, however the original partition size and the 
subsequent slices will still be the original size.  Do I need to (and is 
there a way?) to utilize vinum and still allow the hardware raid 
controller to do the raid5 gruntwork and still have the ability to 
arbitrarily grow the volume as needed?  The only other solution I see is 
to use vinum to software-raid the set of drives, leaving it as a glorified 
ATA controller card, and the cpu/ram of the card unutilized, burdening the 
system CPU and RAM with the task.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Hardware RAID 5 - Need vinum?

2005-05-10 Thread Subhro
On 5/11/2005 2:35, Tony Shadwick wrote:
What my concern is when I start to fill up the ~400GB of space I'm 
giving myself with this set.  I would like to simply insert another 
200GB drive and expand the array, allowing the hardware raid to do the 
work.
That is what everybody does. It is very much normal.
The problem I see with this is that yes, the /dev/(raid driver name)0 
will now be that much larger, however the original partition size and 
the subsequent slices will still be the original size.  
I could not understand what you meant by the RAID device entry becoming 
larger. The various entries inside /dev are nothing but handles to the 
devices present in the system. If you want to manipulate or utilize a 
particular device on your box from an application, you reference it 
using its handle in /dev. And the handles remain the same in size 
irrespective of whether you have 1 hard disk or 100 hard disks in some 
kind of RAID.

Do I need to (and is there a way?) to utilize vinum and still allow 
the hardware raid controller to do the raid5 gruntwork and still have 
the ability to arbitrarily grow the volume as needed?  The only other 
solution I see is to use vinum to software-raid the set of drives, 
leaving it as a glorified ATA controller card, and the cpu/ram of the 
card unitilized and burden the system CPU and RAM with the task.
The main idea in favor of using Hardware RAID solutions over software 
RAID solutions is you can let the CPU do things which are more 
worthwhile than managing I/O. The I/O can be well handled and is indeed 
better handled by the chip on the RAID controller card than the CPU. If 
you add another disk to your RAID or replace a dead disk at any point of 
time, then the RAID card should automatically detect the change and 
rebuild the volumes as and when required. This would be completely 
transparent to the OS and sometimes also transparent to the user.

Regards
S.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum (Again)

2005-04-26 Thread Greg 'groggy' Lehey
On Friday, 22 April 2005 at 16:11:55 -0400, Timothy Radigan wrote:
 Hi all,

Ok, I'm still having trouble with vinum. I got it to load at start, but the
 vinum.autostart=YES in /boot/loader.conf returns a vinum: no drives
 found message.

 I had the mirrored set up and running before the reboot and the file system
 was mounted and everything, I even made sure to issue a saveconfig in
 vinum to make sure the configuration was written to the drives.

 There is no mention of anything else in the Handbook or in the Complete
 FreeBSD chapter on Vinum that describes how to get the configured drives
 loaded at boot up.  Am I missing something?

 Here is my /etc/vinum.conf:

 drive a device /dev/ad1
 drive b device /dev/ad2

These should be partitions, not disks.
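For example, something along these lines instead (the partition letters here are only illustrative; they should be bsdlabel partitions of type vinum):

drive a device /dev/ad1s1h
drive b device /dev/ad2s1h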

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.


pgpr4tBYTS3Dy.pgp
Description: PGP signature


Re: Vinum

2005-04-22 Thread Kevin Kinsey
Timothy Radigan wrote:
I know this topic has come up before, but how in the world do you get vinum
to load AND start itself at boot time so I don't have to repair my mirrored
volumes every reboot?
I have tried to add start_vinum=YES to /etc/rc.conf but that ends in a
'panic dangling node' error.  I also tried to add vinum_load=YES to
/boot/loader.conf and that will load vinum at boot without panicking on me,
but it doesn't start vinum and bring the mirrored devices up.  I have to
physically log into the server and type vinum start to bring up vinum.
What I want, is that on a reboot, vinum is loaded at boot and is started so
that I can mount my vinum drive through /etc/fstab
Any ideas/suggestions?
--Tim
 

If you're on 5.X, I believe you want gvinum instead of vinum; vinum
has been trying hard to catch up with the addition of the GEOM layer
in 5.X (Kudos to Lukas E!)
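For gvinum the boot-time setup is just a loader knob (assuming 5.3 or later; a sketch, not a full migration recipe):

geom_vinum_load="YES"       # /boot/loader.conf; gvinum then picks up its drives at boot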
Kevin Kinsey
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]

