On 2018-12-07 01:43, Doni Crosby wrote:
This is qemu-kvm? What's the cache mode being used? It's possible the
usual write guarantees are thwarted by VM caching.
Yes, it is a Proxmox host running the system, so it is a QEMU VM; I'm
unsure of the caching situation.
On the note of QEMU and the cache
On 2018-12-06 23:09, Andrei Borzenkov wrote:
06.12.2018 16:04, Austin S. Hemmelgarn wrote:
* On SCSI devices, a discard operation translates to a SCSI UNMAP
command. As pointed out by Ronnie Sahlberg in his reply, this command
is purely advisory, may not result in any actual state change
On 2018-12-06 01:11, Robert White wrote:
(1) Automatic and selective wiping of unused and previously used disk
blocks is a good security measure, particularly when there is an
encryption layer beneath the file system.
(2) USB attached devices _never_ support TRIM and they are the most
likely
On 2018-12-05 14:50, Roman Mamedov wrote:
Hello,
To migrate my FS to a different physical disk, I have added a new empty device
to the FS, then ran the remove operation on the original one.
Now my FS has only devid 2:
Label: 'p1' uuid: d886c190-b383-45ba-9272-9f00c6a10c50
Total
On 2018-12-04 08:37, Graham Cobb wrote:
On 04/12/2018 12:38, Austin S. Hemmelgarn wrote:
In short, USB is _crap_ for fixed storage, don't use it like that, even
if you are using filesystems which don't appear to complain.
That's useful advice, thanks.
Do you (or anyone else) have any
On 2018-12-04 00:37, Tomasz Chmielewski wrote:
I'm trying to use btrfs on an external USB drive, without much success.
When the drive is connected for 2-3+ days, the filesystem gets remounted
readonly, with BTRFS saying "IO failure":
[77760.444607] BTRFS error (device sdb1): bad tree block
On 2018-11-15 13:39, Juan Alberto Cirez wrote:
Is BTRFS mature enough to be deployed on a production system to underpin
the storage layer of a 16+ ipcameras-based NVR (or VMS if you prefer)?
For NVR, I'd say no. BTRFS does pretty horribly with append-only
workloads, even if they are WORM
On 11/13/2018 10:31 AM, David Sterba wrote:
On Mon, Oct 01, 2018 at 09:31:04PM +0800, Anand Jain wrote:
+ /*
+ * we are going to replace the device path, make sure it's the
+ * same device if the device is mounted
+ */
+ if (device->bdev) {
+ struct
On 11/4/2018 11:44 AM, waxhead wrote:
Sterling Windmill wrote:
Out of curiosity, what led to you choosing RAID1 for data but RAID10
for metadata?
I've flip-flopped between these two modes myself after finding out
that BTRFS RAID10 doesn't work how I would've expected.
Wondering what made you
On 10/30/2018 12:10 PM, Ulli Horlacher wrote:
On Mon 2018-10-29 (17:57), Remi Gauvin wrote:
On 2018-10-29 02:11 PM, Ulli Horlacher wrote:
I want to know how much free space is left and have problems
interpreting the output of:
btrfs filesystem usage
btrfs filesystem df
btrfs filesystem
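The commands above report space from different angles; as a rough illustration of how to read the per-profile lines, the output of `btrfs filesystem df` can be parsed mechanically. A minimal Python sketch, using invented sample output (real values and profiles will differ):

```python
import re

# Hypothetical sample of `btrfs filesystem df` output; values are invented.
SAMPLE = """\
Data, single: total=100.00GiB, used=80.00GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.00GiB, used=1.10GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
"""

UNITS = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}

def to_bytes(s: str) -> int:
    """Convert a size string like '80.00GiB' to bytes."""
    num, unit = re.fullmatch(r"([\d.]+)([A-Za-z]+)", s).groups()
    return int(float(num) * UNITS[unit])

def parse_df(text: str) -> dict:
    """Map each block-group type to (allocated, used) byte counts."""
    out = {}
    for line in text.splitlines():
        m = re.match(r"(\w+), (\w+): total=(\S+), used=(\S+)", line)
        if m:
            kind, _profile, total, used = m.groups()
            out[kind] = (to_bytes(total), to_bytes(used))
    return out

usage = parse_df(SAMPLE)
data_alloc, data_used = usage["Data"]
print(f"Data: {data_used / data_alloc:.0%} of allocated chunks used")
```

The key distinction for interpreting the numbers: "total" here is space already allocated to chunks of that type, not device capacity, which is why `df` and `usage` can seem to disagree.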
On 18/10/2018 08.02, Anton Shepelev wrote:
I wrote:
What may be the reason for a CRC mismatch on a BTRFS file in
a virtual machine:
csum failed ino 175524 off 1876295680 csum 451760558
expected csum 1446289185
Shall I seek the culprit in the host machine or in the
guest one? Supposing the
On 2018-10-16 16:27, Chris Murphy wrote:
On Tue, Oct 16, 2018 at 9:42 AM, Austin S. Hemmelgarn
wrote:
On 2018-10-16 11:30, Anton Shepelev wrote:
Hello, all
What may be the reason for a CRC mismatch on a BTRFS file in
a virtual machine:
csum failed ino 175524 off 1876295680 csum
On 2018-10-16 11:30, Anton Shepelev wrote:
Hello, all
What may be the reason for a CRC mismatch on a BTRFS file in
a virtual machine:
csum failed ino 175524 off 1876295680 csum 451760558
expected csum 1446289185
Shall I seek the culprit in the host machine or in the guest
one?
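For context on messages like the above: btrfs data checksums are CRC-32C, and a "csum failed" line means the checksum stored in the csum tree no longer matches one recomputed from the data just read. A minimal pure-Python sketch of the CRC-32C algorithm itself (bitwise, reflected polynomial 0x82F63B78; the kernel of course uses an optimized implementation):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), as used for btrfs checksums."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected polynomial 0x82F63B78
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the nine ASCII digits "123456789"
print(hex(crc32c(b"123456789")))  # 0xe3069283
```

The specific numbers in the kernel log (stored vs. expected csum) are just the two sides of exactly this kind of comparison, computed over a data block.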
On 2018-10-15 10:42, Anton Shepelev wrote:
Hugo Mills to Anton Shepelev:
While trying to resolve free space problems, I found
that I cannot interpret the output of:
btrfs filesystem show
Label: none uuid: 8971ce5b-71d9-4e46-ab25-ca37485784c8
Total devices 1 FS bytes used
On 2018-10-13 18:28, Chris Murphy wrote:
Is it practical and desirable to make Btrfs based OS installation
images reproducible? Or is Btrfs simply too complex and
non-deterministic? [1]
The main three problems with Btrfs right now for reproducibility are:
a. many objects have uuids other than
On 2018-10-14 07:08, waxhead wrote:
In case BTRFS fails to WRITE to a disk. What happens?
Does the bad area get mapped out somehow? Does it try again until it
succeeds, or until it "times out" or reaches a threshold counter?
Does it eventually try to write to a different disk (in case of using
the
On 2018-10-07 09:37, Holger Hoffstätte wrote:
The Prometheus statistics collection/aggregation/monitoring/alerting system
[1] is quite popular, easy to use and will probably be the basis for the
upcoming OpenMetrics "standard" [2].
Prometheus collects metrics by polling host-local "exporters"
On 2018-10-05 20:34, Duncan wrote:
Wilson, Ellis posted on Fri, 05 Oct 2018 15:29:52 + as excerpted:
Is there any tuning in BTRFS that limits the number of outstanding reads
at a time to a small single-digit number, or something else that could
be behind small queue depths? I can't
On 2018-10-01 04:56, Anand Jain wrote:
It's not impossible to imagine that a device OR a btrfs image has
been copied just by using the dd or the cp command, in which case both
copies of the btrfs will have the same fsid. If, on a system with
automount enabled, the copied FS gets scanned.
On 2018-09-19 15:08, Goffredo Baroncelli wrote:
On 18/09/2018 19.15, Goffredo Baroncelli wrote:
b. The bootloader code, would have to have sophisticated enough Btrfs
knowledge to know if the grubenv has been reflinked or snapshot,
because even if +C, it may not be valid to overwrite, and COW
On 2018-09-18 15:00, Chris Murphy wrote:
On Tue, Sep 18, 2018 at 12:25 PM, Austin S. Hemmelgarn
wrote:
It actually is independent of /boot already. I've got it running just fine
on my laptop off of the EFI system partition (which is independent of my
/boot partition), and thus have no issues
On 2018-09-18 14:57, Chris Murphy wrote:
On Tue, Sep 18, 2018 at 12:16 PM, Andrei Borzenkov wrote:
18.09.2018 08:37, Chris Murphy wrote:
The patches aren't upstream yet? Will they be?
I do not know. Personally I think it is much easier to make the grub location
independent of /boot, allowing
On 2018-09-18 14:38, Andrei Borzenkov wrote:
18.09.2018 21:25, Austin S. Hemmelgarn wrote:
On 2018-09-18 14:16, Andrei Borzenkov wrote:
18.09.2018 08:37, Chris Murphy wrote:
On Mon, Sep 17, 2018 at 11:24 PM, Andrei Borzenkov
wrote:
18.09.2018 07:21, Chris Murphy wrote:
On Mon, Sep 17, 2018
On 2018-09-18 14:16, Andrei Borzenkov wrote:
18.09.2018 08:37, Chris Murphy wrote:
On Mon, Sep 17, 2018 at 11:24 PM, Andrei Borzenkov wrote:
18.09.2018 07:21, Chris Murphy wrote:
On Mon, Sep 17, 2018 at 9:44 PM, Chris Murphy wrote:
On 2018-09-06 03:23, Nathan Dehnel wrote:
https://lwn.net/Articles/287289/
In 2008, HP released the source code for a filesystem called advfs so
that its features could be incorporated into linux filesystems. Advfs
had a feature where a group of file writes were an atomic transaction.
On 2018-08-30 13:13, Axel Burri wrote:
On 29/08/2018 21.02, Austin S. Hemmelgarn wrote:
On 2018-08-29 13:24, Axel Burri wrote:
This patch allows building distinct binaries for specific btrfs
subcommands, e.g. "btrfs-subvolume-show" which would be identical to
"btrfs
On 2018-08-29 13:24, Axel Burri wrote:
This patch allows building distinct binaries for specific btrfs
subcommands, e.g. "btrfs-subvolume-show" which would be identical to
"btrfs subvolume show".
Motivation:
While btrfs-progs offer the all-inclusive "btrfs" command, it gets
pretty cumbersome
On 2018-08-29 08:33, Nikolay Borisov wrote:
On 29.08.2018 15:09, Qu Wenruo wrote:
On 2018/8/29 4:35 PM, Nikolay Borisov wrote:
Here is the userspace tooling support for utilising the new metadata_uuid field,
enabling the change of fsid without having to rewrite every metadata block. This
nnot be easily disabled, and without the
apt-btrfs-snapshot package scheduling cleanups it's not ever
automatically removed?
> just google it, there is no mention of this behaviour
> Il giorno mar 28 ago 2018 alle ore 19:07 Austin S. Hemmelgarn
> mailto:ahferro...@gmai
On 2018-08-28 12:05, Noah Massey wrote:
On Tue, Aug 28, 2018 at 11:47 AM Austin S. Hemmelgarn
wrote:
On 2018-08-28 11:27, Noah Massey wrote:
On Tue, Aug 28, 2018 at 10:59 AM Menion wrote:
[sudo] password for menion:
ID gen top level path
On 2018-08-28 11:27, Noah Massey wrote:
On Tue, Aug 28, 2018 at 10:59 AM Menion wrote:
[sudo] password for menion:
ID gen top level path
-- --- -
257 600627 5 /@
258 600626 5 /@home
296 599489 5
On 2018-08-27 18:53, John Petrini wrote:
Hi List,
I'm seeing corruption errors when running btrfs device stats but I'm
not sure what that means exactly. I've just completed a full scrub and
it reported no errors. I'm hoping someone here can enlighten me.
Thanks!
The first thing to understand
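For anyone in a similar spot: `btrfs device stats` prints persistent per-device counters (they survive reboots and are only cleared with `-z`), so nonzero corruption_errs can reflect long-past events even when a fresh scrub is clean. A Python sketch that flags nonzero counters, using invented sample output (the device name and values here are hypothetical):

```python
# Hypothetical sample of `btrfs device stats` output; counter values invented.
SAMPLE = """\
[/dev/sdd].write_io_errs    0
[/dev/sdd].read_io_errs     0
[/dev/sdd].flush_io_errs    0
[/dev/sdd].corruption_errs  42
[/dev/sdd].generation_errs  0
"""

def parse_stats(text: str) -> dict:
    """Map (device, counter) -> value for each stats line."""
    stats = {}
    for line in text.splitlines():
        left, value = line.rsplit(None, 1)        # split off the count
        dev, counter = left[1:].split("].")       # strip '[' and split on '].'
        stats[(dev, counter)] = int(value)
    return stats

stats = parse_stats(SAMPLE)
problems = {k: v for k, v in stats.items() if v}  # only nonzero counters
print(problems)
```

Anything nonzero is worth investigating, but corruption_errs in particular counts checksum failures seen at read time, which may be older than the most recent scrub.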
On 2018-08-27 17:05, Eugene Bright wrote:
Greetings!
BTRFS wiki says there is no per-subvolume compression option [1].
At the same time, the following command allows me to set properties per-subvolume:
btrfs property set /volume compression zstd
Corresponding get command shows distinct
On 2018-08-23 10:04, Stefan Malte Schumacher wrote:
Hello,
I originally had a RAID with six 4TB drives, which was more than 80
percent full. So now I bought
a 10TB drive, added it to the array, and gave the command to remove the
oldest drive in the array.
btrfs device delete /dev/sda
On 2018-08-22 11:01, David Sterba wrote:
On Wed, Aug 22, 2018 at 09:56:59AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-22 09:48, David Sterba wrote:
On Tue, Aug 21, 2018 at 01:01:00PM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10
On 2018-08-22 09:48, David Sterba wrote:
On Tue, Aug 21, 2018 at 01:01:00PM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10:04AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who
On 2018-08-21 23:57, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 21 Aug 2018 13:01:00 -0400 as
excerpted:
Otherwise, the only option for people who want it set is to patch the
kernel to get noatime as the default (instead of relatime). I would
look at pushing such a patch upstream
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10:04AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who wants to avoid the overhead from them can just
use the `noatime` mount option.
It would be great if someone
On 2018-08-21 09:43, David Howells wrote:
Qu Wenruo wrote:
But to be more clear, NOSSD shouldn't be a special case.
In fact currently NOSSD only affects whether we will output the message
"enabling ssd optimization", no real effect if I didn't miss anything.
That's not quite true. In:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who wants to avoid the overhead from them can just
use the `noatime` mount option.
It would be great if someone finally fixed this old bug then:
https://bugzilla.kernel.org/show_bug.cgi?id=61601
Until then, it seems practically
On 2018-08-21 08:06, Adam Borowski wrote:
On Mon, Aug 20, 2018 at 08:16:16AM -0400, Austin S. Hemmelgarn wrote:
Also, slightly OT, but atimes are not where the real benefit is here for
most people. No sane software other than mutt uses atimes (and mutt's use
of them is not sane, but that's
On 2018-08-19 06:25, Andrei Borzenkov wrote:
Sent from my iPhone
On 19 Aug 2018, at 11:37, Martin Steigerwald wrote:
waxhead - 18.08.18, 22:45:
Adam Hunt wrote:
Back in 2014 Ted Tso introduced the lazytime mount option for ext4
and shortly thereafter a more generic VFS
On 2018-08-17 08:50, Roman Mamedov wrote:
On Fri, 17 Aug 2018 14:28:25 +0200
Martin Steigerwald wrote:
First off, keep in mind that the SSD firmware doing compression only
really helps with wear-leveling. Doing it in the filesystem will help
not only with that, but will also give you more
On 2018-08-17 08:28, Martin Steigerwald wrote:
Thanks for your detailed answer.
Austin S. Hemmelgarn - 17.08.18, 13:58:
On 2018-08-17 05:08, Martin Steigerwald wrote:
[…]
I have seen a discussion about the limitation in point 2. That
allowing to add a device and make it into RAID 1 again
On 2018-08-17 05:08, Martin Steigerwald wrote:
Hi!
This happened about two weeks ago. I already dealt with it and all is
well.
Linux hung on suspend so I switched off this ThinkPad T520 forcefully.
After that it did not boot the operating system anymore. Intel SSD 320,
latest firmware, which
On 2018-08-10 06:07, Cerem Cem ASLAN wrote:
Original question is here: https://superuser.com/questions/1347843
How can we be sure that a readonly snapshot is not corrupted due to a disk failure?
Is the only way to calculate the checksums one by one and store them
for further examination, or does
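On the question above: btrfs already checksums data, and `btrfs scrub` verifies a read-only snapshot against those stored checksums, so an external manifest is only belt-and-braces. If you do want independent checksums anyway, a minimal Python sketch (a throwaway directory stands in for the snapshot path, which is an assumption for demonstration only):

```python
import hashlib
import os
import tempfile

def manifest(root: str) -> dict:
    """SHA-256 of every regular file under root, keyed by relative path."""
    out = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            out[os.path.relpath(path, root)] = digest
    return out

# Demo: a temporary directory stands in for the read-only snapshot.
with tempfile.TemporaryDirectory() as snap:
    with open(os.path.join(snap, "a.txt"), "w") as fh:
        fh.write("hello")
    baseline = manifest(snap)   # store this (e.g. as JSON) for later audits
    recheck = manifest(snap)    # later: recompute and compare
    print(recheck == baseline)
```

In practice the stored baseline would live on separate media, since a manifest kept on the failing disk proves little.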
On 2018-08-12 03:04, Andrei Borzenkov wrote:
12.08.2018 06:16, Chris Murphy wrote:
On Fri, Aug 10, 2018 at 9:29 PM, Duncan <1i5t5.dun...@cox.net> wrote:
Chris Murphy posted on Fri, 10 Aug 2018 12:07:34 -0600 as excerpted:
But whether data is shared or exclusive seems potentially ephemeral,
On 2018-08-10 14:07, Chris Murphy wrote:
On Thu, Aug 9, 2018 at 5:35 PM, Qu Wenruo wrote:
On 8/10/18 1:48 AM, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limit on
On 2018-08-10 14:21, Tomasz Pala wrote:
On Fri, Aug 10, 2018 at 07:39:30 -0400, Austin S. Hemmelgarn wrote:
I.e.: every shared segment should be accounted within quota (at least once).
I think what you mean to say here is that every shared extent should be
accounted to quotas for every
On 2018-08-09 13:48, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limit on exclusive/shared bytes, further
complicating the problem.
3) Btrfs quota only accounts data/metadata
On 2018-08-09 19:35, Qu Wenruo wrote:
On 8/10/18 1:48 AM, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limit on exclusive/shared bytes, further
complicating the problem.
3)
On 2018-08-02 06:56, Qu Wenruo wrote:
On 2018-08-02 18:45, Andrei Borzenkov wrote:
Sent from my iPhone
On 2 Aug 2018, at 10:02, Qu Wenruo wrote:
On 2018年08月01日 11:45, MegaBrutal wrote:
Hi all,
I know it's a decade-old question, but I'd like to hear your thoughts
of today. By
On 2018-07-20 14:41, Hugo Mills wrote:
On Fri, Jul 20, 2018 at 09:38:14PM +0300, Andrei Borzenkov wrote:
20.07.2018 20:16, Goffredo Baroncelli wrote:
[snip]
Limiting the number of disks per raid, in BTRFS, would be quite simple to implement in the
"chunk allocator"
You mean that currently
On 2018-07-20 13:13, Goffredo Baroncelli wrote:
On 07/19/2018 09:10 PM, Austin S. Hemmelgarn wrote:
On 2018-07-19 13:29, Goffredo Baroncelli wrote:
[...]
So far you are repeating what I said: the only useful raid profiles are
- striping
- mirroring
- striping+parity (even limiting
On 2018-07-20 01:01, Andrei Borzenkov wrote:
18.07.2018 16:30, Austin S. Hemmelgarn wrote:
On 2018-07-18 09:07, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 6:35 AM, Austin S. Hemmelgarn
wrote:
If you're doing a training presentation, it may be worth mentioning that
preallocation
On 2018-07-19 13:29, Goffredo Baroncelli wrote:
On 07/19/2018 01:43 PM, Austin S. Hemmelgarn wrote:
On 2018-07-18 15:42, Goffredo Baroncelli wrote:
On 07/18/2018 09:20 AM, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM
On 2018-07-19 03:27, Qu Wenruo wrote:
On 2018-07-14 02:46, David Sterba wrote:
Hi,
I have some goodies that go into the RAID56 problem, although not
implementing all the remaining features, it can be useful independently.
This time my hackweek project
On 2018-07-18 15:42, Goffredo Baroncelli wrote:
On 07/18/2018 09:20 AM, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
On 07/15/2018
On 2018-07-18 17:32, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 12:01 PM, Austin S. Hemmelgarn
wrote:
On 2018-07-18 13:40, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 11:14 AM, Chris Murphy
wrote:
I don't know for sure, but based on the addresses reported before and
after dd
On 2018-07-18 13:40, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 11:14 AM, Chris Murphy wrote:
I don't know for sure, but based on the addresses reported before and
after dd for the fallocated tmp file, it looks like Btrfs is not using
the originally fallocated addresses for dd. So maybe it
On 2018-07-18 09:07, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 6:35 AM, Austin S. Hemmelgarn
wrote:
If you're doing a training presentation, it may be worth mentioning that
preallocation with fallocate() does not behave the same on BTRFS as it does
on other filesystems. For example
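To illustrate the point being made: `posix_fallocate()` reserves space up front, which on most filesystems guarantees that later writes into that range cannot fail with ENOSPC; on BTRFS, copy-on-write means an overwrite may be directed to newly allocated extents, so the guarantee is weaker. A small sketch of the API itself (this runs on whatever filesystem holds your temp directory, so it demonstrates the call, not the btrfs-specific behavior):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile() as tmp:
    fd = tmp.fileno()
    # Reserve 1 MiB up front; st_size grows to the reserved length.
    os.posix_fallocate(fd, 0, 1024 * 1024)
    size = os.fstat(fd).st_size
    # Overwrite the first 4 KiB. On non-CoW filesystems this reuses the
    # reserved blocks; on btrfs the rewrite may land in new extents instead.
    os.pwrite(fd, b"x" * 4096, 0)
    data_head = os.pread(fd, 4, 0)

print(size, data_head)
```

This is why a training presentation should flag fallocate()-based preallocation strategies: code that relies on "reserved, therefore writable" semantics can still hit ENOSPC on btrfs.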
On 2018-07-18 03:20, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
On 07/15/2018 04:37 PM, waxhead wrote:
Striping and
On 2018-07-18 04:39, Duncan wrote:
Duncan posted on Wed, 18 Jul 2018 07:20:09 + as excerpted:
As implemented in BTRFS, raid1 doesn't have striping.
The argument is that because there's only two copies, on multi-device
btrfs raid1 with 4+ devices of equal size so chunk allocations tend to
On 2018-07-17 13:54, Martin Steigerwald wrote:
Nikolay Borisov - 17.07.18, 10:16:
On 17.07.2018 11:02, Martin Steigerwald wrote:
Nikolay Borisov - 17.07.18, 09:20:
On 16.07.2018 23:58, Wolf wrote:
Greetings,
I would like to ask what is a healthy amount of free space to
keep on each device
On 2018-07-16 16:58, Wolf wrote:
Greetings,
I would like to ask what is a healthy amount of free space to keep on
each device for btrfs to be happy?
This is how my disk array currently looks:
[root@dennas ~]# btrfs fi usage /raid
Overall:
Device size:
On 2018-07-16 14:29, Goffredo Baroncelli wrote:
On 07/15/2018 04:37 PM, waxhead wrote:
David Sterba wrote:
An interesting question is the naming of the extended profiles. I picked
something that can be easily understood but it's not a final proposal.
Years ago, Hugo proposed a naming scheme
On 2018-07-03 03:35, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 02 Jul 2018 07:49:05 -0400 as
excerpted:
Notably, most Intel systems I've seen have the SATA controllers in the
chipset enumerate after the USB controllers, and the whole chipset
enumerates after add-in cards (so
On 2018-07-02 13:34, Marc MERLIN wrote:
On Mon, Jul 02, 2018 at 12:59:02PM -0400, Austin S. Hemmelgarn wrote:
Am I supposed to put LVM thin volumes underneath so that I can share
the same single 10TB raid5?
Actually, because of the online resize ability in BTRFS, you don't
technically _need_
On 2018-07-02 11:19, Marc MERLIN wrote:
Hi Qu,
thanks for the detailled and honest answer.
A few comments inline.
On Mon, Jul 02, 2018 at 10:42:40PM +0800, Qu Wenruo wrote:
For full, it depends. (but for most real world case, it's still flawed)
We have small and crafted images as test cases,
On 2018-07-02 11:18, Marc MERLIN wrote:
Hi Qu,
I'll split this part into a new thread:
2) Don't keep unrelated snapshots in one btrfs.
I totally understand that maintaining different btrfs filesystems would hugely add
maintenance pressure, but as explained, all snapshots share one
fragile extent
On 2018-06-30 02:33, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 29 Jun 2018 14:31:04 -0400 as
excerpted:
On 2018-06-29 13:58, james harvey wrote:
On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn
wrote:
On 2018-06-29 11:15, james harvey wrote:
On Thu, Jun 28, 2018 at 6:27 PM
On 2018-06-30 01:32, Andrei Borzenkov wrote:
30.06.2018 06:22, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
excerpted:
On 2018-06-24 16:22, Goffredo Baroncelli wrote:
On 06/23/2018 07:11 AM, Duncan wrote:
waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200
On 2018-06-29 23:22, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
excerpted:
On 2018-06-24 16:22, Goffredo Baroncelli wrote:
On 06/23/2018 07:11 AM, Duncan wrote:
waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
According to this:
https
On 2018-06-29 13:58, james harvey wrote:
On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn
wrote:
On 2018-06-29 11:15, james harvey wrote:
On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy
wrote:
And an open question I have about scrub is whether it only ever is
checking csums, meaning
On 2018-06-29 11:15, james harvey wrote:
On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy wrote:
And an open question I have about scrub is whether it only ever is
checking csums, meaning nodatacow files are never scrubbed, or if the
copies are at least compared to each other?
Scrub never looks
On 2018-06-29 07:04, marble wrote:
Hello,
I have an external HDD. The HDD contains no partition.
I use the whole HDD as a LUKS container. Inside that LUKS is a btrfs.
It's used to store some media files.
The HDD was hooked up to a Raspberry Pi running up-to-date Arch Linux
to play music from the
On 2018-06-28 07:46, Qu Wenruo wrote:
On 2018-06-28 19:12, Austin S. Hemmelgarn wrote:
On 2018-06-28 05:15, Qu Wenruo wrote:
On 2018-06-28 16:16, Andrei Borzenkov wrote:
On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo
wrote:
On 2018-06-28 11:14, r...@georgianit.com wrote:
On Wed, Jun
On 2018-06-28 05:15, Qu Wenruo wrote:
On 2018-06-28 16:16, Andrei Borzenkov wrote:
On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote:
On 2018-06-28 11:14, r...@georgianit.com wrote:
On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote:
Please get yourself clear of what other raid1 is
On 2018-06-25 21:05, Sterling Windmill wrote:
I am running a single btrfs RAID10 volume of eight LUKS devices, each
using a 2TB SATA hard drive as a backing store. The SATA drives are a
mixture of Seagate and Western Digital drives, some with RPMs ranging
from 5400 to 7200. Each seems to
On 2018-06-25 12:07, Marc MERLIN wrote:
On Tue, Jun 19, 2018 at 12:58:44PM -0400, Austin S. Hemmelgarn wrote:
In your situation, I would run "btrfs pause ", wait to hear from
a btrfs developer, and not use the volume whatsoever in the meantime.
I would say this is probably good
On 2018-06-24 16:22, Goffredo Baroncelli wrote:
On 06/23/2018 07:11 AM, Duncan wrote:
waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
According to this:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
section 1.2
It claims that BTRFS still have
On 2018-06-19 12:30, james harvey wrote:
On Tue, Jun 19, 2018 at 11:47 AM, Marc MERLIN wrote:
On Mon, Jun 18, 2018 at 06:00:55AM -0700, Marc MERLIN wrote:
So, I ran this:
gargamel:/mnt/btrfs_pool2# btrfs balance start -dusage=60 -v . &
[1] 24450
Dumping filters: flags 0x1, state 0x0, force
On 2018-06-15 13:40, Chris Murphy wrote:
On Fri, Jun 15, 2018 at 5:33 AM, ein wrote:
Hello group,
does anyone have any luck hosting qemu kvm images residing on a BTRFS
filesystem while serving
the volume via iSCSI?
I encountered some unidentified problem and I am able to replicate it.
On 2018-05-29 10:02, ein wrote:
On 05/29/2018 02:12 PM, Austin S. Hemmelgarn wrote:
On 2018-05-28 13:10, ein wrote:
On 05/23/2018 01:03 PM, Austin S. Hemmelgarn wrote:
On 2018-05-23 06:09, ein wrote:
On 05/23/2018 11:09 AM, Duncan wrote:
ein posted on Wed, 23 May 2018 10:03:52 +0200
On 2018-05-28 13:10, ein wrote:
On 05/23/2018 01:03 PM, Austin S. Hemmelgarn wrote:
On 2018-05-23 06:09, ein wrote:
On 05/23/2018 11:09 AM, Duncan wrote:
ein posted on Wed, 23 May 2018 10:03:52 +0200 as excerpted:
IMHO the best course of action would be to disable checksumming for
your
vm
On 2018-05-23 06:09, ein wrote:
On 05/23/2018 11:09 AM, Duncan wrote:
ein posted on Wed, 23 May 2018 10:03:52 +0200 as excerpted:
IMHO the best course of action would be to disable checksumming for your
vm files.
Do you mean '-o nodatasum' mount flag? Is it possible to disable
checksumming
On 2018-05-21 13:43, David Sterba wrote:
On Fri, May 18, 2018 at 01:10:02PM -0400, Austin S. Hemmelgarn wrote:
On 2018-05-18 12:36, Niccolò Belli wrote:
On Friday 18 May 2018 18:20:51 CEST, David Sterba wrote:
Josef started working on that in 2014 and did not finish it. The patches
can
On 2018-05-21 09:42, Timofey Titovets wrote:
On Mon, 21 May 2018 at 16:16, Austin S. Hemmelgarn <ahferro...@gmail.com>:
On 2018-05-19 04:54, Niccolò Belli wrote:
On Friday 18 May 2018 20:33:53 CEST, Austin S. Hemmelgarn wrote:
With a bit of work, it's possible to handle things sanely
On 2018-05-19 04:54, Niccolò Belli wrote:
On Friday 18 May 2018 20:33:53 CEST, Austin S. Hemmelgarn wrote:
With a bit of work, it's possible to handle things sanely. You can
deduplicate data from snapshots, even if they are read-only (you need
to pass the `-A` option to duperemove and run
On 2018-05-18 13:18, Niccolò Belli wrote:
On Friday 18 May 2018 19:10:02 CEST, Austin S. Hemmelgarn wrote:
and also forces the people who have ridiculous numbers of snapshots to
deal with the memory usage or never defrag
Whoever has at least one snapshot is never going to defrag anyway
On 2018-05-18 12:36, Niccolò Belli wrote:
On Friday 18 May 2018 18:20:51 CEST, David Sterba wrote:
Josef started working on that in 2014 and did not finish it. The patches
can be still found in his tree. The problem is in excessive memory
consumption when there are many snapshots that need
PM, Jeff Mahoney wrote:
On 5/17/18 8:25 AM, Austin S. Hemmelgarn wrote:
On 2018-05-16 22:32, Anand Jain wrote:
On 05/17/2018 06:35 AM, David Sterba wrote:
On Wed, May 16, 2018 at 06:03:56PM +0800, Anand Jain wrote:
Not yet ready for the integration. As I need to introduce
-o
On 2018-05-17 10:46, Jeff Mahoney wrote:
On 5/16/18 6:35 PM, David Sterba wrote:
On Wed, May 16, 2018 at 06:03:56PM +0800, Anand Jain wrote:
Not yet ready for the integration. As I need to introduce
-o no_read_mirror_policy instead of -o read_mirror_policy=-
Mount option is most likely not
On 2018-05-16 22:32, Anand Jain wrote:
On 05/17/2018 06:35 AM, David Sterba wrote:
On Wed, May 16, 2018 at 06:03:56PM +0800, Anand Jain wrote:
Not yet ready for the integration. As I need to introduce
-o no_read_mirror_policy instead of -o read_mirror_policy=-
Mount option is most likely
On 2018-05-16 09:23, Anand Jain wrote:
On 05/16/2018 07:25 PM, Austin S. Hemmelgarn wrote:
On 2018-05-15 22:51, Anand Jain wrote:
Add a kernel log when the balance ends, either for cancel or completed
or if it is paused.
---
v1->v2: Moved from 2/3 to 3/3
fs/btrfs/volumes.c | 7 +++
On 2018-05-15 22:51, Anand Jain wrote:
Add a kernel log when the balance ends, either for cancel or completed
or if it is paused.
---
v1->v2: Moved from 2/3 to 3/3
fs/btrfs/volumes.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index
n, and you
can't inspect that code yourself).
On 08/05/2018 at 13:32, Austin S. Hemmelgarn wrote:
On 2018-05-08 03:50, Rolf Wald wrote:
Hello,
some hints inside
On 08.05.2018 at 02:22, faurepi...@gmail.com wrote:
Hi,
I'm curious about btrfs, and maybe considering it for my new laptop
in
On 2018-05-08 03:50, Rolf Wald wrote:
Hello,
some hints inside
On 08.05.2018 at 02:22, faurepi...@gmail.com wrote:
Hi,
I'm curious about btrfs, and maybe considering it for my new laptop
installation (a Lenovo T470).
I was going to install my usual lvm+ext4+full disk encryption setup, but
On 2018-05-03 04:11, Andrei Borzenkov wrote:
On Wed, May 2, 2018 at 10:29 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
...
Assume you have a BTRFS raid5 volume consisting of 6 8TB disks (which gives
you 40TB of usable space). You're storing roughly 20TB of data on it, using
On 2018-05-02 16:40, Goffredo Baroncelli wrote:
On 05/02/2018 09:29 PM, Austin S. Hemmelgarn wrote:
On 2018-05-02 13:25, Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, nothing