On Sat, Mar 10, 2018 at 03:55:25AM +0100, Christoph Anton Mitterer wrote:
> Just wondered... was it ever planned (or is there some equivalent) to
> get support for btrfs in zerofree?
Do you want zerofree for thin storage optimization, or for security?
For the former, you can use fstrim; this is e
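For the thin-storage case, a minimal fstrim sketch (mount point is hypothetical; requires root and discard support through the whole storage stack):

```shell
# Discard all unused blocks on the mounted filesystem; on thinly
# provisioned storage the freed extents are returned to the pool.
fstrim -v /mnt/data
# Or trim every mounted filesystem that supports discard:
fstrim --all --verbose
```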
Austin S. Hemmelgarn wrote:
On 2018-03-09 11:02, Paul Richards wrote:
Hello there,
I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive.
Before I attempt any recovery I’d like to ask what is the recommended
approach? (The wiki docs suggest consulting here before attempting
recovery.)
On 09.03.2018 19:43, Austin S. Hemmelgarn wrote:
>
> If the answer to either one or two is no but the answer to three is yes,
> pull out the failed disk, put in a new one, mount the volume degraded,
> and use `btrfs replace` as well (you will need to specify the device ID
> for the now-missing failed disk).
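As a sketch of those steps (device names and the devid are hypothetical; `btrfs filesystem show` reports which devid is missing):

```shell
# Mount degraded with the failed disk already pulled:
mount -o degraded /dev/sdb /mnt
# Find the devid of the missing device:
btrfs filesystem show /mnt
# Replace missing devid 2 with the new disk, waiting in the foreground:
btrfs replace start -B 2 /dev/sdd /mnt
btrfs replace status /mnt
```

This is an admin recipe, not something to run blindly; the devid and target device must match your own `btrfs filesystem show` output.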
On 9.3.2018 20:03, Martin Svec wrote:
> On 9.3.2018 17:36, Nikolay Borisov wrote:
>> On 23.02.2018 16:28, Martin Svec wrote:
>>> Hello,
>>>
>>> we have a btrfs-based backup system using btrfs snapshots and rsync.
>>> Sometimes,
>>> we hit an ENOSPC bug and the filesystem is remounted read-only.
On 9.03.2018 21:05, Alex Adriaanse wrote:
> Am I correct to understand that nodatacow doesn't really avoid CoW when
> you're using snapshots? In a filesystem that's snapshotted
Yes, so nodatacow won't interfere with how snapshots operate. For more
information on that topic check the following
>>> And then report back on the output of the extra debug
>>> statements.
>>>
>>> Your global rsv is essentially unused, this means
>>> in the worst case the code should fallback to using the global rsv
>>> for satisfying the memory allocation for delayed refs. So we should
>>> figure out wh
The btrfs inspect dump-tree CLI picks the disk with the largest generation
to read the root tree, even when not all of the devices were provided on
the CLI. But with a two-disk RAID1 you may need to know what is on each
disk individually, so this option -x | --noscan indicates to use only the
given disk to dump
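Under the proposed option, usage would look something like this (device names hypothetical):

```shell
# Without --noscan, dump-tree scans all registered devices and reads the
# root tree from the member with the largest generation:
btrfs inspect-internal dump-tree /dev/sda
# With the proposed -x | --noscan, only the named device is consulted,
# so the two RAID1 members can be inspected individually:
btrfs inspect-internal dump-tree --noscan /dev/sda
btrfs inspect-internal dump-tree --noscan /dev/sdb
```

The `-x | --noscan` flag is the patch's proposal, so its exact spelling may differ in whatever btrfs-progs release eventually carries it.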
On Sat, 2018-03-10 at 09:16 +0100, Adam Borowski wrote:
> Do you want zerofree for thin storage optimization, or for security?
I don't think one can really use it for security (neither on SSDs nor
HDDs).
On both, zeroed blocks may still be readable by forensic measures.
So optimisation, i.e. digging
On Sat, 2018-03-10 at 14:04 +0200, Nikolay Borisov wrote:
> So for OLTP workloads you definitely want nodatacow enabled, bear in
> mind this also disables crc checksumming, but your db engine should
> already have such functionality implemented in it.
Unlike repeated claims made here on the list a
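Applying the quoted advice per-directory, rather than mounting the whole filesystem with -o nodatacow, is usually done with the No_COW file attribute (the path is hypothetical; the flag only affects files created after it is set, on btrfs):

```shell
# Mark a database data directory so new files inherit No_COW
# (this also disables data checksumming for those files):
mkdir -p /srv/db-nocow
chattr +C /srv/db-nocow
lsattr -d /srv/db-nocow   # the 'C' flag should now be listed
```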
On Sat, 10 Mar 2018 15:19:05 +0100
Christoph Anton Mitterer wrote:
> TRIM/discard... not sure how far this is really a solution.
It is the solution in a great many usage scenarios; I don't know enough about
your particular one, though.
Note you can use it on HDDs too, even without QEMU and the like: via using
LVM "thin" volumes.
On 10.3.2018 13:13, Nikolay Borisov wrote:
And then report back on the output of the extra debug
statements.
Your global rsv is essentially unused, this means
in the worst case the code should fallback to using the global rsv
for satisfying the memory allocation for delayed refs.
I am 100% sure Netapp Flexclone can change the ownership of the clone content.
We are using that functionality right now.
https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-cmpr-900%2Fvolume__clone__create.html
When you create a clone in Flexclone, you can specify a uid/gid.
On Fri, Mar 9, 2018 at 10:10 PM, Miguel Ojeda
wrote:
> On Sat, Mar 10, 2018 at 4:11 AM, Randy Dunlap wrote:
>> On 03/09/2018 04:07 PM, Andrew Morton wrote:
>>> On Fri, 9 Mar 2018 12:05:36 -0800 Kees Cook wrote:
>>>
When max() is used in stack array size calculations from literal values
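To see why the thread keeps circling C's constant-expression rules, a hedged demo (assumes gcc is installed; the macro names are illustrative, not the kernel's actual definitions). The kernel's max() expands to a GCC statement expression, which is never an integer constant expression, so an array sized with it is a VLA and -Wvla fires even for literal arguments, while a plain ternary macro stays constant:

```shell
# Write a small C file with both macro styles and compile with -Wvla.
cat > max_vla.c <<'EOF'
#define kernel_style_max(a, b) ({ int _a = (a), _b = (b); _a > _b ? _a : _b; })
#define simple_max(a, b) ((a) > (b) ? (a) : (b))
int main(void)
{
    char ok[simple_max(16, 32)];        /* constant expression: fine   */
    char bad[kernel_style_max(16, 32)]; /* statement expr: -Wvla warns */
    ok[0] = 0;
    bad[0] = 0;
    return 0;
}
EOF
gcc -std=gnu89 -Werror=vla -c max_vla.c 2>/dev/null \
    && echo "no warning" || echo "-Wvla fired"
```

Without -Werror=vla the same file builds, since VLAs (and statement expressions) are GNU extensions accepted in gnu89 mode.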
On Sat, Mar 10, 2018 at 07:37:22PM +0500, Roman Mamedov wrote:
> Note you can use it on HDDs too, even without QEMU and the like: via using LVM
> "thin" volumes. I use that on a number of machines, the benefit is that since
> TRIMed areas are "stored nowhere", those partitions allow for incredibly
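A sketch of that LVM thin setup (volume group name, pool name, and sizes are hypothetical):

```shell
# Create a thin pool, then an overprovisioned thin volume inside it:
lvcreate -L 100G --thinpool tp vg0
lvcreate -V 250G --thin -n backups vg0/tp
mkfs.ext4 /dev/vg0/backups
# Either mount with -o discard or run fstrim periodically, so that
# TRIMed areas are returned to the thin pool:
mount -o discard /dev/vg0/backups /mnt/backups
```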
On Fri, Mar 9, 2018 at 11:03 PM, Miguel Ojeda
wrote:
>
> Just compiled 4.9.0 and it seems to work -- so that would be the
> minimum required.
>
> Sigh...
>
> Some enterprise distros are either already shipping gcc >= 5 or will
> probably be shipping it soon (e.g. RHEL 8), so how much does it hurt
On Sat, Mar 10, 2018 at 7:33 AM, Kees Cook wrote:
>
> And sparse freaks out too:
>
> drivers/net/ethernet/via/via-velocity.c:97:26: sparse: incorrect
> type in initializer (different address spaces) @@ expected void
> *addr @@ got struct mac_regs [noderef]
On Sat, Mar 10, 2018 at 7:33 AM, Kees Cook wrote:
>
> Alright, I'm giving up on fixing max(). I'll go back to STACK_MAX() or
> some other name for the simple macro. Bleh.
Oh, and I'm starting to see the real problem.
It's not that our current "min/max()" are broken. It's that "-Wvla" is garbage
On Sat, 2018-03-10 at 19:37 +0500, Roman Mamedov wrote:
> Note you can use it on HDDs too, even without QEMU and the like: via using
> LVM "thin" volumes. I use that on a number of machines, the benefit is that
> since TRIMed areas are "stored nowhere", those partitions allow for
> incredibly f
On Sat, 2018-03-10 at 16:50 +0100, Adam Borowski wrote:
> Since we're on a btrfs mailing list
Well... my original question was whether someone could add zerofree
support for btrfs (which I think would best be done by someone who knows
how btrfs really works)... thus I directed the question to this list
On Sat, Mar 10, 2018 at 5:30 PM, Linus Torvalds
wrote:
> On Sat, Mar 10, 2018 at 7:33 AM, Kees Cook wrote:
>>
>> Alright, I'm giving up on fixing max(). I'll go back to STACK_MAX() or
>> some other name for the simple macro. Bleh.
>
> Oh, and I'm starting to see the real problem.
>
> It's not that our current "min/max()" are broken. It's that "-Wvla" is garbage
On Sat, Mar 10, 2018 at 9:34 AM, Miguel Ojeda
wrote:
>
> So the warning is probably implemented to just trigger whenever VLAs
> are used but the given standard does not allow them, for all
> languages. The problem is why the ISO C90 frontend is not giving an
> error for using invalid syntax for ar
Thanks all for the help again.
I just wrote a blog post to explain the process to others should anyone
need this later.
http://marc.merlins.org/perso/btrfs/post_2018-03-09_Btrfs-Tips_-Rescuing-A-Btrfs-Send-Receive-Relationship.html
Marc
--
"A mouse is a device used to point at the xterm you want
On Sat, 10 Mar 2018 16:50:22 +0100
Adam Borowski wrote:
> Since we're on a btrfs mailing list, if you use qemu, you really want
> sparse format:raw instead of qcow2 or preallocated raw. This also works
> great with TRIM.
Agreed, that's why I use RAW. QCOW2 would add a second layer of COW on top
of Btrfs, which sounds like a nightmare.
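The sparse-raw idea can be sketched with plain coreutils (file name and size are hypothetical):

```shell
# Create a 10 GiB sparse raw image: the apparent size is 10 GiB, but no
# blocks are allocated until the guest actually writes to them. TRIM
# from the guest can later punch written blocks back out.
truncate -s 10G disk.img
stat -c 'apparent size: %s bytes' disk.img
du -k disk.img   # allocated size in KiB: 0
```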
On Sat, 2018-03-10 at 23:31 +0500, Roman Mamedov wrote:
> QCOW2 would add a second layer of COW on top of
> Btrfs, which sounds like a nightmare.
I've just seen there is even a nocow option "specifically" for btrfs...
it seems however that it doesn't disable the CoW of qcow, but rather
that of btrfs
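That option is passed at image creation time; as a sketch (file name and size hypothetical):

```shell
# 'nocow=on' sets the No_COW attribute on the image file at creation,
# disabling btrfs's CoW for it; qcow2's own internal CoW is unaffected.
qemu-img create -f qcow2 -o nocow=on vm.qcow2 20G
lsattr vm.qcow2   # on btrfs, shows the 'C' flag
```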
On Sat, Mar 10, 2018 at 6:51 PM, Linus Torvalds
wrote:
>
> So in *historical* context - when a compiler didn't do variable length
> arrays at all - the original semantics of C "constant expressions"
> actually make a ton of sense.
>
> You can basically think of a constant expression as something t
Andrei Borzenkov posted on Sat, 10 Mar 2018 13:27:03 +0300 as excerpted:
> And "missing" is not the answer because I obviously may have more than
> one missing device.
"missing" is indeed the answer when using btrfs device remove. See the
btrfs-device manpage, which explains that if there's mo
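As a sketch of that usage (device name and mount point hypothetical):

```shell
# Mount degraded, then remove the absent device by the keyword 'missing':
mount -o degraded /dev/sda /mnt
# 'missing' stands in for the first device recorded as absent; with more
# than one missing device, repeat the command once per device:
btrfs device remove missing /mnt
```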
The regression was introduced into btrfs in Linux v4.4; it refuses to create
new files after log replay, returning -EEXIST.
Although the problem occurs on btrfs only, there is nothing btrfs-specific in
the test, so it is made generic.
The kernel fix is
Btrfs: fix unexpected -EEXIST when creating