Chris Murphy posted on Wed, 23 Dec 2015 19:38:23 -0700 as excerpted:
> There's a worthwhile distinction between stability of raid56 vs all
> other profiles, and btrfs multiple device failure behavior. Right now
> there's no monitoring or notification of failures to user space. In
> fact Btrfs itself…
There's a worthwhile distinction between stability of raid56 vs all
other profiles, and btrfs multiple device failure behavior. Right now
there's no monitoring or notification of failures to user space. In
fact Btrfs itself doesn't really understand device failures, a device
can spit out many read
Neuer User posted on Wed, 23 Dec 2015 11:45:28 +0100 as excerpted:
> - both hdd and ssd in one LVM VG
> - one LV on each hdd, containing a btrfs filesystem
> - both btrfs LV configured as RAID1
> - the single SSD used as a LVM cache device for both HDD LVs to speed up
>   random access, where possible…
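The layout sketched in the bullets above could be provisioned roughly as follows. All device names, VG/LV names, and cache-pool sizes here are assumptions for illustration, not taken from the thread:

```shell
# Assumed devices: /dev/sda, /dev/sdb = 3TB HDDs; /dev/sdc = 120GB SSD
pvcreate /dev/sda /dev/sdb /dev/sdc
vgcreate vg0 /dev/sda /dev/sdb /dev/sdc

# One LV per HDD, each pinned to its own physical volume
lvcreate -l 100%PVS -n hdd1 vg0 /dev/sda
lvcreate -l 100%PVS -n hdd2 vg0 /dev/sdb

# Two cache pools carved out of the SSD, one per HDD LV
lvcreate --type cache-pool -L 55G -n cache1 vg0 /dev/sdc
lvcreate --type cache-pool -L 55G -n cache2 vg0 /dev/sdc
lvconvert --type cache --cachepool vg0/cache1 vg0/hdd1
lvconvert --type cache --cachepool vg0/cache2 vg0/hdd2

# btrfs RAID1 (data and metadata) across the two cached LVs
mkfs.btrfs -d raid1 -m raid1 /dev/vg0/hdd1 /dev/vg0/hdd2
```

Note this gives each HDD LV its own independent cache pool; btrfs sees only the two cached LVs and remains unaware of the SSD underneath.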
Goffredo Baroncelli posted on Wed, 23 Dec 2015 19:20:32 +0100 as
excerpted:
> Duncan talked about an N-way mirroring, where each disk contains a copy
> of the same data. Nobody talked about N-way mirroring where N is less
> than the number of the available disks.
Well, to be fair, I did /try/ to t
Donald Pearson posted on Wed, 23 Dec 2015 09:53:41 -0600 as excerpted:
> Additionally real Raid10 will run circles around what BTRFS is doing in
> terms of performance. In the 20 drive array you're striping across 10
> drives, in BTRFS right now you're striping across 2 no matter what. So
> not only do I lose in terms…
jwalmer posted on Wed, 23 Dec 2015 17:52:10 -0500 as excerpted:
> Just an avid follower of the project checking in. It has been about nine
> months since the initial Raid 5/6 features were released in 3.19 and
> they are still listed as incomplete/experimental on the Wiki.
>
> Admittedly, I don't understand how such a large and distributed project…
On Tue, Dec 22, 2015 at 02:22:40AM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> Commit 27d077ec0bda (common: use mount/umount helpers everywhere) made
> a few btrfs tests fail for 2 different reasons:
>
> 1) Some tests (btrfs/029 and btrfs/031) use $SCRATCH_MNT as a mount
>    point…
Eric Sandeen writes:
>> 3) A lot of users don't even know that mounting ro can still modify the device
>>    Yes, I didn't know this point until I checked the log replay code of
>>    btrfs.
>>    Adding such a mount option alias may draw some attention from users.
>
> Given that nothing in the documentation implies…
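The behaviour under discussion can be illustrated roughly like this (device and mount-point names are hypothetical; exact behaviour with a dirty log varies by kernel version):

```shell
# Even a read-only mount may replay the btrfs log tree,
# writing to the underlying device:
mount -o ro /dev/sdb1 /mnt

# Setting the block device itself read-only is one way to guarantee
# no writes reach the disk before mounting:
blockdev --setro /dev/sdb1
mount -o ro /dev/sdb1 /mnt   # may now refuse if a log replay would be needed
```

This is a sketch of the concern, not a recommended procedure; the thread is precisely about whether an explicit mount-option alias should make this distinction visible.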
On Mon, Dec 21, 2015 at 01:18:22PM +0800, Anand Jain wrote:
>
>
> >BTW, any good idea for btrfs to do such operation like
> >enabling/disabling some minor features? Especially when it can be set on
> >individual file/dirs.
> >
> >Features like incoming write time deduplication, is designed to be
On Wed, Dec 23, 2015 at 3:15 PM, Donald Pearson
wrote:
> On Wed, Dec 23, 2015 at 12:20 PM, Goffredo Baroncelli
>> Duncan talked about an N-way mirroring, where each disk contains a copy of
>> the same data. Nobody talked about N-way mirroring where N is less than the
>> number of the available disks…
Hello dev crew,
Just an avid follower of the project checking in. It has been about nine months
since the initial Raid 5/6 features were released in 3.19 and they are still
listed as incomplete/experimental on the Wiki.
Admittedly, I don't understand how such a large and distributed project
pr
On Wed, Dec 23, 2015 at 12:20 PM, Goffredo Baroncelli
wrote:
> On 2015-12-23 16:53, Donald Pearson wrote:
> [...]
>>
>> Additionally real Raid10 will run circles around what BTRFS is doing
>> in terms of performance. In the 20 drive array you're striping across
>> 10 drives, in BTRFS right now you're striping across 2 no matter what…
On Wed, Dec 23, 2015 at 1:24 PM, Neuer User wrote:
> One other thing:
>
> I read that btrfs has some options that are turned off for SSDs as they
> might be harmful or so. In my case btrfs, however, would not know about
> the SSD and probably use its HDD optimized settings. The result,
> however, would be forwarded also to the SSD via lvmcache…
On Wed, Dec 23, 2015 at 1:21 PM, Neuer User wrote:
> Am 23.12.2015 um 20:49 schrieb Chris Murphy:
>> Seems to me if the LV's on the two HDDs are exposed, the lvmcache has
>> to separately keep track of those LVs. So as long as everything is
>> working correctly, it should be fine. That includes either transient…
On 12/23/15 21:07, Neuer User wrote:
> Understood. However, do SSDs really do automatic deduplication? I might
> be completely wrong here, but that sounds to be a rather complex
> mechanism, requiring lots of RAM to deduplicate 100 GB. I wouldn't have
> thought that typical SSDs include that?
tl;dr…
One other thing:
I read that btrfs has some options that are turned off for SSDs as they
might be harmful or so. In my case btrfs, however, would not know about
the SSD and probably use its HDD optimized settings. The result,
however, would be forwarded also to the SSD via lvmcache. Do I see that
right?
Am 23.12.2015 um 20:49 schrieb Chris Murphy:
> Seems to me if the LV's on the two HDDs are exposed, the lvmcache has
> to separately keep track of those LVs. So as long as everything is
> working correctly, it should be fine. That includes either transient
> or persistent, but consistent, errors for…
Am 23.12.2015 um 20:45 schrieb Noah Massey:
> On Wed, Dec 23, 2015 at 6:38 AM, Neuer User wrote:
> I believe Martin's concern is two-fold:
>
> The first, major issue, concerns the default writeback cache mode,
> which makes the SSD a single point of failure.
> (in writeback mode, a write to a block…
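The mitigation for that single point of failure can be expressed with standard LVM commands; the VG/LV names below are hypothetical:

```shell
# Inspect the current cache mode of a cached LV
lvs -o name,cache_mode vg0/hdd1

# writethrough completes a write only after it reaches the origin (HDD) LV,
# so losing the SSD costs performance, not data
lvchange --cachemode writethrough vg0/hdd1
```

With writethrough, the SSD holds no data that is not also on the HDD, so the btrfs RAID1 redundancy on the HDD LVs is preserved even if the cache device dies.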
On Wed, Dec 23, 2015 at 4:38 AM, Neuer User wrote:
> Am 23.12.2015 um 12:21 schrieb Martin Steigerwald:
>> Hi.
>>
>> As far as I understand this way you basically lose the RAID 1 semantics of
>> BTRFS. While the data is redundant on the HDDs, it is not redundant on the
>> SSD. It may work for a pure read cache…
On Wed, Dec 23, 2015 at 6:38 AM, Neuer User wrote:
> Am 23.12.2015 um 12:21 schrieb Martin Steigerwald:
>> Hi.
>>
>> As far as I understand this way you basically lose the RAID 1 semantics of
>> BTRFS. While the data is redundant on the HDDs, it is not redundant on the
>> SSD. It may work for a pure read cache…
On 2015-12-23 16:53, Donald Pearson wrote:
[...]
>
> Additionally real Raid10 will run circles around what BTRFS is doing
> in terms of performance. In the 20 drive array you're striping across
> 10 drives, in BTRFS right now you're striping across 2 no matter what.
> So not only do I lose in ter
On Tue, Dec 22, 2015 at 10:13 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Donald Pearson posted on Tue, 22 Dec 2015 17:56:29 -0600 as excerpted:
>
>
>>> Also understand with Btrfs RAID 10 you can't lose more than 1 drive
>>> reliably. It's not like a strict raid1+0 where you can lose all of the
>>> "
Am 23.12.2015 um 12:21 schrieb Martin Steigerwald:
> Hi.
>
> As far as I understand this way you basically lose the RAID 1 semantics of
> BTRFS. While the data is redundant on the HDDs, it is not redundant on the
> SSD. It may work for a pure read cache, but for write-through you definitely
>
Am Mittwoch, 23. Dezember 2015, 11:45:28 CET schrieb Neuer User:
> Hello
Hi.
> I want to set up a small homeserver, based on an HP Microserver Gen8 (4GB
> RAM, 2x3TB HDD + 1x120GB SSD) and Proxmox as distro.
>
> The server will be used to host a (small) number of virtual machines,
> most of them being LXC containers…
Hello
I want to set up a small homeserver, based on an HP Microserver Gen8 (4GB
RAM, 2x3TB HDD + 1x120GB SSD) and Proxmox as distro.
The server will be used to host a (small) number of virtual machines,
most of them being LXC containers, few being KVM machines. One of the
LXC containers will host a