On 03/05/10 15:49, Bart Noordervliet wrote:
On Fri, Mar 5, 2010 at 21:31, Josef Bacik wrote:
Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
Yes, though you won't be able to remove more than 1 at a time (since it wants you
to keep at least two disks around). Thanks,
On Friday 05 March 2010 23:13:54 Mike Fedyk wrote:
> On Fri, Mar 5, 2010 at 1:49 PM, Bart Noordervliet wrote:
> > Maybe it's worth considering leaving the burdened raid* terminology
> > behind and naming the btrfs redundancy modes more clearly by what they
> > do. For instance "-d double|triple" or
On Fri, Mar 5, 2010 at 1:49 PM, Bart Noordervliet wrote:
> On Fri, Mar 5, 2010 at 21:31, Josef Bacik wrote:
>>> Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
>>
>> Yes, though you won't be able to remove more than 1 at a time (since it wants you
>> to keep at least
The way we report df usage is way confusing for everybody, including some other
utilities (bacula for one). So this patch makes df a little bit more
understandable. First we make used actually count the total amount of used
space in all space infos. This will give us a real view of how much dis
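For readers following along, the patch above effectively sums usage across every space info rather than relying on a single counter. A rough C sketch of that idea, assuming the fs_info->space_info list and a bytes_used-style counter from the btrfs code of this era (field names are approximations, not the actual patch):

/*
 * Rough sketch of totalling usage across all space infos, as the patch
 * description suggests.  The list name (fs_info->space_info) and the
 * bytes_used field are assumptions based on the btrfs code of the time,
 * not quoted from the real patch.
 */
static u64 total_used_bytes(struct btrfs_fs_info *fs_info)
{
	struct btrfs_space_info *sinfo;
	u64 used = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(sinfo, &fs_info->space_info, list)
		used += sinfo->bytes_used;	/* bytes in allocated extents */
	rcu_read_unlock();

	return used;
}

df's "used" would then be this sum, and "available" the raw total minus it, which is what makes the reported numbers track real disk consumption.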
On Fri, Mar 5, 2010 at 21:31, Josef Bacik wrote:
>> Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
>
> Yes, though you won't be able to remove more than 1 at a time (since it wants you
> to keep at least two disks around). Thanks,
>
> Josef
Hmm, I would expect the raid
Thank you!
One more question:
Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
On Mar 5, 2010, at 1:58 PM, Chris Ball wrote:
> Hi,
>
>> DF with btrfs is a loaded question. In the RAID1 case you are
>> going to show 3TB of free space, but every time you use some space
On Fri, Mar 05, 2010 at 02:29:56PM -0600, Grady Neely wrote:
> Thank you!
>
> One more question:
>
> Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
>
Yes, though you won't be able to remove more than 1 at a time (since it wants you
to keep at least two disks around).
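The "keep at least two disks" behaviour Josef describes comes from a guard in the device-removal path. A simplified sketch of what that check looks like, approximating fs/btrfs/volumes.c of this era (not the verbatim kernel source):

/*
 * Simplified sketch of the raid1 guard in the device-removal path
 * (approximate names, not the verbatim fs/btrfs/volumes.c code).
 * With any RAID1 block group present, removal is refused once it
 * would leave fewer than two devices, so a three-device raid1 pool
 * can only lose one device per remove operation.
 */
if ((all_avail & BTRFS_BLOCK_GROUP_RAID1) &&
    fs_info->fs_devices->num_devices <= 2) {
	printk(KERN_ERR "btrfs: unable to go below two devices on raid1\n");
	return -EINVAL;
}

This also explains why, starting from three devices, a second removal is refused: it would leave the raid1 pool with a single device.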
Hi,
I get an oops with 2.6.33-0.46.rc8.git1.fc13.x86_64 while trying to
mount a degraded raid1 btrfs filesystem.
Here are the steps I performed to get to this stage.
- Install fedora12 btrfs / on sda2
- mkfs.btrfs -m raid1 -d raid1 /dev/sda7
- cp -a from sda2 to sda7
- reboot into sda7 as /
- bt
Hi,
> DF with btrfs is a loaded question. In the RAID1 case you are
> going to show 3TB of free space, but every time you use some space
> you are going to show 3 times the amount used (I think that's
> right). There are some patches forthcoming to make the reporting
> for RAID stuf
Instead of hard coding the minimum I/O alignment, use the smallest
bdev_logical_blocksize in the filesystem. Also change the alignment
tests to determine the real user request minimum alignment and make
all eof tail and device checks on that user blocksize.
Signed-off-by: jim owens
---
fs/btrf
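As a sketch of the first change, finding the smallest logical block size could look roughly like the following, walking the device list and asking the block layer for each device's size. bdev_logical_block_size() is the stock block-layer accessor; the function name and list-walk details here are illustrative, not the patch itself:

/*
 * Illustrative sketch only: walk the filesystem's devices and record
 * the smallest logical block size, instead of assuming 512 bytes.
 * Function name and list-walk details are approximations, not the
 * actual patch.
 */
static unsigned int smallest_logical_block_size(struct btrfs_fs_devices *fs_devices)
{
	struct btrfs_device *device;
	unsigned int min_bs = PAGE_SIZE;	/* start from an upper bound */

	list_for_each_entry(device, &fs_devices->devices, dev_list) {
		if (!device->bdev)		/* missing/unopened device */
			continue;
		min_bs = min_t(unsigned int, min_bs,
			       bdev_logical_block_size(device->bdev));
	}
	return min_bs;
}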
In a multi-device filesystem, it is possible to have devices with
different block sizes, as in 512, 1024, 2048, 4096. DirectIO read
will check user request alignment is valid for at least one device.
Signed-off-by: jim owens
---
fs/btrfs/volumes.c | 24 +++-
fs/btrfs/volum
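For the alignment test itself, the usual direct I/O pattern is to mask the file offset and each iovec's base and length against (blocksize - 1). A generic sketch of that check, with the block size passed in rather than hard-coded to 512 (names are illustrative, not the patch):

/*
 * Illustrative direct I/O alignment check: the offset and every
 * buffer address and length must be multiples of the given logical
 * block size (assumed to be a power of two).  Not the actual btrfs
 * patch.
 */
static int dio_aligned(loff_t offset, const struct iovec *iov,
		       unsigned long nr_segs, unsigned int blocksize)
{
	unsigned long mask = blocksize - 1;
	unsigned long seg;

	if (offset & mask)
		return 0;
	for (seg = 0; seg < nr_segs; seg++) {
		if (((unsigned long)iov[seg].iov_base | iov[seg].iov_len) & mask)
			return 0;
	}
	return 1;	/* aligned */
}

With per-device block sizes, a request that is misaligned for a 4096-byte device may still be valid for a 512-byte device, which is why the patch checks the user request against at least one device's block size.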
The following patches add the field for tracking the smallest
device block size in the filesystem and using it instead of
the hard coded 512 byte values in dio.c.
I also implemented a simpler test for user misalignment on
devices with larger block sizes.
It passes fsx, but I have not tested mixed
On Fri, Mar 05, 2010 at 01:28:00PM -0600, Grady Neely wrote:
> Hello,
>
> I have three 1TB drives that I wanted to make a RAID1 system on. I issued the
> following command "mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd"
> and it seems to have created the fs with no issue. When I do an
Hello,
I have three 1TB drives that I wanted to make a RAID1 system on. I issued the
following command "mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd" and
it seems to have created the fs with no issue. When I do a df -h, I see that
the available space is 3TB. Seems like with RAID1