Hi all,
I'd like to know what kind of technology are you using on FreeBSD for volume
manager, I mean, Z file system (ZFS), VINUM, GEOM, or anyone else.
Seems that Oracle won't offer support for ZFS on OpenSolaris, so do you know
if FreeBSD will keep working with ZFS?
I had some old
SD will destroy my computer, run over my cat, and bail out the
investment banking industry. Will it really perform that poorly on a
Phenom and 8GB RAM? Significantly more resources than mdadm in Linux?
How about compared to RAID 5 under vinum?
Thanks,
~Mike Manlief
1: The ability to read the
Are there any disk size/volume size limitations on Vinum with FreeBSD 6.4?
Can I run Vinum on four 500 GB drives and get a 1 TB RAID10 config?
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To
Hi,
I'm running a gvinum raid array with 4x80G drives. This raid has been running
for 4 years now. Today I found out that the status is degraded. All
drives are up but one subdisk is stale. How can I get the raid out of
degraded mode? I have attached the output of gvinum l
Greeting
Estartu
--
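As a hedged sketch of the usual first step for this situation (the subdisk name below is hypothetical; the real one comes from the actual `gvinum l` output), restarting the stale subdisk makes gvinum rebuild it from the healthy drives rather than merely forcing its state:

```
# gvinum list                # identify the stale subdisk, e.g. raid.p0.s2
# gvinum start raid.p0.s2    # revive it; gvinum resyncs from the live data
# gvinum list                # repeat until the subdisk is "up" again
```

Forcing the state with setstate skips the resync, so it is normally the wrong tool for a stale subdisk.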
Dear Listmates,
Is it possible to configure vinum for balancing the mileage between two and
three non-volatile storage devices of a different size?
I have read the manual thoroughly and noticed that there are certain restrictions
applied to the hard drive sizes in the proposed RAID5 data handling
Hi there,
I set up this FreeBSD 4.9 a while back (I know, I know, this isn't the latest
version; but look, it's been running perfectly since then), with the OS on a
SCSI drive and a vinum volume on 3 IDE 200GB drives, hooked on a Promise IDE
controller.
A) The Crash
=========
I have a 4.11-R box that I'm planning on reinstalling a fresh 6.2-R on. Not an
"upgrade", but a fresh binary install after newfsing the system partitions.
A remaining planning issue is that I have a pre-GEOM vinum data volume on other
disks. The handbook mentions gvinum ret
data security. It seems that
raid5 has mostly a hype factor for him, but I may err. Anyway, it is for
such reasons that in the modern GEOM system, raid3 has been implemented
and not raid5. But vinum has been ported to the GEOM framework for the
benefit of old users, or of people who like it. For example
Hi,
I see that if I want to do disk striping/concatenating/mirroring, FreeBSD
offers the GEOM utilities and the Vinum LVM (which fits into the GEOM
architecture). Why do we have two different ways of doing the same tasks
-- any advantages/disadvantages to either approach?
I did check the
Take the following example vinum config file:
drive a device /dev/da2a
drive b device /dev/da3a
volume rambo
  plex org concat
    sd length 512m drive a
  plex org concat
    sd length 512m drive b
The keyword "concat" specifies the relationship between the plex and its subdisks
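For context, a sketch of how such a config file is typically brought to life (assuming it is saved as /etc/vinum.conf and the vinum driver is loaded; the mount point is arbitrary):

```
# vinum create /etc/vinum.conf    # read the config and create drives/volume
# newfs /dev/vinum/rambo          # the volume appears as a single device
# mount /dev/vinum/rambo /mnt
```

Note that because the two plexes each hold a complete copy of the data, this particular config yields a 512 MB mirrored volume, not 1 GB of combined space.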
On Friday, 7 July 2006 at 11:29:46 +0700, Olivier Nicole wrote:
> Hi,
>
> Is there a trick on the way to build a vinum RAID 1 without backup-in
> the data first?
Sometimes. It's described in "The Complete FreeBSD", page 236 or so.
See http://www.lemis.com/grog/Docu
One thing you might consider is that gvinum is quite flexible.
The subdisks in vinum that make up a raid 5 plex are partitions.
This means you can create raid 5 sets without using each entire disk
and the disks don't need to be the same model or size. It's also
handy for spare
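To illustrate that flexibility, a sketch of a raid5 plex built from equally sized partitions on dissimilar disks (all device names and sizes here are hypothetical):

```
drive d1 device /dev/ad4s1e    # partition on a 200 GB disk
drive d2 device /dev/ad6s1e    # partition on an 80 GB disk
drive d3 device /dev/da0s1e    # partition on a SCSI disk of another size
volume r5
  plex org raid5 512k
    sd length 70g drive d1
    sd length 70g drive d2
    sd length 70g drive d3
```

Only the space given to the subdisks has to match; the rest of each disk stays available for other uses or for spares.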
Hi,
Is there a trick on the way to build a vinum RAID 1 without backing up
the data first?
I have the two disks that will get mirrored. One of the disks is
formatted as UFS 4.2 and already holds all the data. The second disk is
blank.
Normally I should start with 2 blank disks, label them as vinum
as either a hardware RAID setup or the older
vinum from FreeBSD-4 and earlier.
As for losing data, see above.
2: am I using a SATA controller that has serious problems or something
like that? In other words, is this actually gvinum's fault?
If you had a failing drive, that's not
On 7/5/2006 15:56, Jeremy Ehrhardt seems to have typed:
> 3: would I be better off using a different RAID 5 system on another OS?
You would be best off with a 3ware card (www.3ware.com) running RAID 5
(hardware raid >> software raid).
It works great in FreeBSD and is *very* stable and fault tol
I have a quad-core Opteron nForce4 box running 6.1-RELEASE/amd64 with a
gvinum RAID 5 setup comprising six identical SATA drives on three
controllers (the onboard nForce4 SATA, which is apparently two devices,
and one Promise FastTrak TX2300 PCI SATA RAID controller in IDE mode),
combined into
On Friday, 2 June 2006 at 5:04:15 -0500, Kevin Kinsey wrote:
> Travis H. wrote:
>>
>> Is there some kind of IP lawsuit over vinum or something?
>
> If so, it's never been mentioned ;-)
It has now, but it's the first time I've heard of it.
> It's a v
Travis H. wrote:
Is there some kind of IP lawsuit over vinum or something?
If so, it's never been mentioned ;-)
It's a valid question, but I don't think Greg's that kind
of guy. As for Veritas, I *think* they had some sort of
agreement (re: the name), (but I could be
I recently installed 6.0, and there doesn't seem to be a vinum binary.
There is a gvinum binary, but it doesn't even implement all of the
commands in its own help screen.
I'm somewhat confused. Did I screw up my install, or is this normal?
Is there some kind of IP lawsui
There might be some helpful nuggets in there, but I'm looking to
basically combine the storage of multiple disks, like RAID-0, except
I want my second drive written to only when my first drive has been
filled. I understand this can be done via vinum concatenation. I'm
looking f
t/docs/how-to/Vinum/ might be helpful.
/e
--
http://hostname.nu/~emil
Hello,
Can anybody recommend using vinum to concatenate across two disks?
What are the upsides? Downsides?
Are there any tutorials explaining how to do so? So far, based on the
lack of info I've been able to find, it seems to me that this is a
rarely used configuration... I'm
I am trying to create a vinum file system during the install so I can
also use it for the root filesystem as described in the handbook, but it
appears that the geom_vinum modules are not available from the FreeBSD
6.1-RC2 disc 1 LiveCD shell. Are the modules not available or do I need
to load
On Thu, 13 Apr 2006, Chris Hastie wrote:
I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.
Since 5.3 you should probably use the geom-aware gvinum instead of vinum
I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.
The system has two mirrored disks, with Vinum used for the root
filesystem. make buildworld, make buildkernel and
Hi!
Is there a way to place the root partition of FreeBSD in a mirrored vinum
volume? I have read tutorials which explain the procedure in more or
less detail, but all the tutorials seem to deal with the x86 version.
For example the tutorials assume that freebsd is installed on a slice
and
Hi,
I am trying to set up a vinum mirrored plex with two disks:
>ad2: 38204MB [77622/16/63] at ata1-master UDMA33
>ad3: 38204MB [77622/16/63] at ata1-slave UDMA33
Disks are new and fully functional, but when I do:
#vinum start
#vinum
vinum -> mirror -v -n mirror /dev/ad2 /dev/ad3
I get th
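One hedged guess worth checking in this situation: vinum expects to be handed bsdlabel partitions whose fstype is set to vinum, not bare disks, so `mirror /dev/ad2 /dev/ad3` may fail where partitions would work. A sketch of the config-file equivalent (partition names hypothetical):

```
drive m0 device /dev/ad2s1h    # bsdlabel partition of type "vinum"
drive m1 device /dev/ad3s1h
volume mirror
  plex org concat
    sd length 0 drive m0       # length 0 = use all available space
  plex org concat
    sd length 0 drive m1
```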
only going to be as fast as the slowest
drive I imagine)? I've never ever touched vinum before!
Thanks in advance for any advice / comments,
Mark
Joe Auty wrote:
> Some great advice here!
> What RAID level would you recommend for simply maximizing the hard disk
> space I have available?
RAID-0 striping. Note that it gives you no redundancy or protection.
> This is just my personal backup machine and
> will consist of two drives, so I don't
Some great advice here!
What RAID level would you recommend for simply maximizing the hard
disk space I have available? This is just my personal backup machine
and will consist of two drives, so I don't need kick ass performance,
and I don't need my files mirrored. I take it that striping i
measure of both.
In both cases, the warning of backing up the system first applies.
Hmmm. What I need is more drive space. Should I look at GEOM
rather than vinum? Do you know whether the drives would need to be
reformatted in order to setup the RAID?
I'll definitely heed your
Joe Auty wrote:
> I've been considering buying another hard drive for my FreeBSD machine,
> and building a RAID with my current hard drive so that both drives are
> treated as one.
This is known as RAID-1 mirroring.
> Do any of you have experience in this area? Is this advisable?
Yes and yes. :
Hello,
I've been considering buying another hard drive for my FreeBSD
machine, and building a RAID with my current hard drive so that both
drives are treated as one.
Do any of you have experience in this area? Is this advisable? I'm
assuming I'd be looking at creating a RAID-5? Can this b
Ignore, Google'd a bit longer and found the answer ...
but, from what I can tell, is doing absolutely nothing, both via iostat
and vinum list ... the server has been 'up' for 15 minutes ... when I
accidentally typed 'vinum init vm.p0.s0' instead of 'vinum start
vm.p0.s0', the machine hung up and required cold boot
Tonight while checking a few things on my personal machine I noticed
that one of my vinum sub disks was stale. Included below are the
steps I took attempting to remedy this. Clearly I did not follow the
proper procedures and now the volume is un-mountable. Currently when
I attempt to mount the
I broke my system (did make world with RELENG_5_4 while running 4.10) and I
was running
mirrored /usr partitions with vinum using ad0 and ad2. This morning I didn't
have time to setup vinum and installed 5.4 from cd onto ad0 as usual.
How can I see the data on my vinum drive (ad2) using
going. I was getting confused thinking that it wouldn't add
the mount to /etc/fstab, and if I put it in there myself.
Apart from adding the vinum_enable etc, will I need to add the info
to /etc/fstab referring to the name I gave it in vinum.. I'm starting to get
it a lot more after reading th
all up and running correctly.
I then went into Vinum in interactive mode and (hopefully) created a
mirror by typing
mirror -d /dev/ad0s2d /dev/ad1s1d . It then gave me successful
messages and gave the drive a name and said it's "up".
I'm just wondering after this point, can
Hi, I have gone through the docs on this but am just missing a couple of points
conceptually, and would be grateful for any help.
Basically i have created two slices on two IDE drives and mounted them
(through fdisk, label etc), and had that all up and running correctly.
I then went into Vinum in
On Fri, Aug 19, 2005 at 11:01:55AM -0500, Robin Smith wrote:
> There seems to be a consensus in the references I've found that vinum
> is completely broken on 5.4
That is true. IMHO it should be removed from RELENG_5 and _6 if it isn't
already.
> and that gvinum/geom_vin
There seems to be a consensus in the references I've found that vinum
is completely broken on 5.4 and that gvinum/geom_vinum is not ready
for production use. As it seems to me, this means that anyone using
4.11 (say) and vinum will have to abandon vinum (i.e. quit doing software
RAID) in ord
Hi all,
I'm Makara from Cambodia. I'm a FreeBSD newbie. When I try to configure vinum, I
always get this message every time I start vinum:
vinum: Inappropriate ioctl for device
and a few minutes later my PC restarts. I hope you can solve the problem.
Thanks sorry for
On Jul 22, 2005, at 5:51 PM, Ben Craig wrote:
> However, it appears that the vinum config isn't being saved, as rebooting
> the machine can't find the vinum root partition, and after manually booting
> to the pre-vinum root (ufs:ad0s1a) running vinum -> list shows no volume
Hi All,
I've been trying to get a bootstrapped vinum volume up and running on a 5.4
release system (generic kernel, minimal install), based on this How-to:
http://devel.reinikainen.net/docs/how-to/Vinum/
But I've managed to run into a problem that no amount of Googling, reading
the
Hello everyone,
Just to add, I had to type out the above message manually since I can't get
access to anything with the crashed subdisk on /usr.
With regard to Greg's requests for information when reporting vinum problems
as stated on vinumvm.org <http://vinumvm.org>'s w
Hello,
It appears that one of the vinum subdisks on my server has crashed. On
rebooting I get the following message:
<-- start message -->
Warning: defective objects
V usr       State: down    Plexes: 2    Size: 37 GB
P usr.p0  C State: faulty  Subdisks: 1  Size: 37 GB
P usr.p1  C State: faulty  Subdisks: 1  Siz
On Mon, Jun 27, 2005 at 11:57:15AM -0700, Drew Tomlinson wrote:
> I'd appreciate hearing of your experiences with vinum, gvinum, and ccd,
> especially as they relate to firewire devices.
In my experience, vinum doesn't play well with GEOM, and gvinum isn't
anywhere near
I am using 4.11 and have been a FBSD user since the beginning of version
4. I successfully used vinum to create concatenated volumes that have
grown over time. Vinum has proved very stable as long as the drives
were IDE or SCSI. However I outgrew the confines of my case and added a
firewire
Howdy,
I'm attempting to use Vinum to concat multiple plexes together to make a
single 4.5TB volume. I've noticed that once I hit the 2TB mark it seems to
fail, it looks like once it hits 2TB the size gets reset to 0. Example below,
[EMAIL PROTECTED]:~# vinum create /etc/vinu
moved from configuration
Jun 20 04:30:44 nene kernel: ad3: WARNING - removed from configuration
Jun 20 04:30:44 nene kernel: ata1-master: FAILURE - unknown CMD (0xb0) timed out
Jun 20 04:30:44 nene kernel: vinum: ideraid.p0.s0 is crashed by force
Jun 20 04:30:44 nene kernel: vinum: ideraid.p0 is degra
I'm reading about the vinum disk manager; I did some tests, and it works fine.
But the question: could the root partition be a vinum volume? (I have compiled
the kernel with vinum built in, not as a module.)
On 28 May 2005, at 09:01, [EMAIL PROTECTED] wrote:
[...]
you're welcome
maybe the complete vinum chapter should be removed from the handbook?
Arno
Perhaps the Vinum chapter should say up front that Vinum works with
FreeBSD 4.x but not with 5.x
jd
yeah better idea
about this for the last couple of days
# /etc/vinum.conf
volume var
plex name var.p0 org concat
drive va device /dev/ad0s1g
sd name var.p0.s0 drive va length 256m
plex name var.p1 org concat
drive vb device /dev/ad1s1d
sd name var.p1.s0 drive vb length 256m
When I create it, using -f, I get:
vi
On 27 May 2005, at 16:38, [EMAIL PROTECTED] wrote:
[...]
not very encouraging... Is it a RAID 5 you were able to make work
under 5.4?
jd
hey
yeah, a 3x160 GB RAID5 and a 2x80 GB RAID1 in FreeBSD 5.4-p1
I get the same error message every time I start vinum or whenever I
execute a command
oncat
drive vb device /dev/ad1s1d
sd name var.p1.s0 drive vb length 256m
When I create it, using -f, I get:
vinum -> l
2 drives:
D va  State: up  /dev/ad0s1g  A: 1791/2048 MB (87%) <- note triple allocation
D vb  State: up  /dev/ad1s1d  A:
FreeBSD questions mailing list wrote:
I tried gvinum too, but that doesn't have the setstate nor the rebuild
parity command, and still you can't stop gvinum (gvinum stop doesn't
work, nor does kldunload geom_vinum.ko)
Try gmirror for RAID 1; it worked great for me.
could gmirror and gstripe
On 27 May 2005, at 16:13, [EMAIL PROTECTED] wrote:
[...]
Go back to 4.11!
vinum is a nightmare in 5.4
and gvinum is not nearly mature enough...
I do have it running but every update it takes me 2 days to get the
RAIDs back up
Arno
Arno,
not very encouraging... Is it a RAID 5 you
On 26 May 2005, at 23:10, jd wrote:
I am trying to set up Vinum on a new system, and I get the error
message:
"vinum: Inappropriate ioctl for device".
Here are the details:
...
Vinum used to work beautifully under 4.11; I'm wondering what I need to
change to make it
jd wrote:
I am trying to set up Vinum on a new system, and I get the error message:
"vinum: Inappropriate ioctl for device".
Here are the details:
- FreeBSD 5.4-RELEASE
I'd be glad if someone steps in and says I'm wrong, but AFAIK vinum is
not supported anymore on new r
I am trying to set up Vinum on a new system, and I get the error message:
"vinum: Inappropriate ioctl for device".
Here are the details:
- FreeBSD 5.4-RELEASE
- vinum list
3 drives:
D a State: up /dev/ad0s1g A: 0/74692 MB (0%)
D b
* Greg 'groggy' Lehey ([EMAIL PROTECTED]) wrote:
> There have been issues with growfs in the past; last time I looked
> it hadn't been updated to handle UFS 2. If you don't need the UFS 2
> functionality, you might be better off using UFS 1 if you intend to
> grow the file system.
growfs gained
On Tuesday, 10 May 2005 at 16:05:50 -0500, Tony Shadwick wrote:
> I've worked with RAID5 in FreeBSD in the past, with either vinum or a
> hardware raid solution. Never had any problems either way.
>
> I'm now building a server for myself at home, and I'm creating a la
On 5/11/2005 19:33, Tony Shadwick wrote:
The problem I've had in the past in Windows for example:
Drive D: is a RAID5 volume, 400GB, nearly full. If I add a 200GB
drive to the array, the 'disk' that Drive D: resides on is now ~600GB,
but Drive D: will remain 400GB. I would have to utilize a thi
/dev.
And the handles remain the same in size irrespective of whether you have 1
hard disk or 100 hard disks in some kind of RAID.
Do I need to (and is there a way?) utilize vinum and still allow the
hardware raid controller to do the raid5 gruntwork and still have the
ability to arbitrar
I've worked with RAID5 in FreeBSD in the past, with either vinum or a
hardware raid solution. Never had any problems either way.
I'm now building a server for myself at home, and I'm creating a large
volume to store video. I have purchased 3 200GB EIDE hard drives, and a 6
Hi all,
Ok, I'm still having trouble with vinum, I got it to load at start, but the
vinum.autostart="YES" in /boot/loader.conf returns a "vinum: no drives
found" message.
I had the mirrored set up and running before the reboot and the file system
was mounted and everyt
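For what it's worth, a sketch of the pieces classic vinum autostart depends on (partition letter and sizes below are hypothetical): the loader settings load the driver early, and autostart only discovers drives that sit on bsdlabel partitions whose fstype is vinum, so a missing or wrong label is one common cause of a "no drives found" message.

```
# /boot/loader.conf
vinum_load="YES"
vinum.autostart="YES"

# The bsdlabel on each member disk must carry a partition of type "vinum", e.g.:
#   h: 39102336  16  vinum
```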