On 30.03.2021 09:16 Wang Yugui wrote:
Hi,
On 30.03.21 9:24, Wang Yugui wrote:
Hi, Nikolay Borisov
With a lot of dump_stack()/printk inserted around ENOMEM in btrfs code,
we found the call stack leading to the ENOMEM.
See the file -btrfs-dump_stack-when-ENOMEM.patch
#cat /usr/hpc-bio/xfstests/
On 11.03.2021 18:58 Martin Raiber wrote:
On 01.02.2021 23:08 Martin Raiber wrote:
On 27.01.2021 22:03 Chris Murphy wrote:
On Wed, Jan 27, 2021 at 10:27 AM Martin Raiber wrote:
Hi,
seems 5.10.8 still has the ENOSPC issue when compression is used
(compress-force=zstd,space_cache=v2):
Jan 27
On 29.03.2021 19:25 Henning Schild wrote:
> On Mon, 29 Mar 2021 19:30:34 +0300,
> Andrei Borzenkov wrote:
>
>> On 29.03.2021 16:16, Claudius Heine wrote:
>>> Hi,
>>>
>>> I am currently investigating the possibility to use `btrfs-stream`
>>> files (generated by `btrfs send`) for deploying a image
On 11.03.2021 15:43 Filipe Manana wrote:
> On Wed, Mar 10, 2021 at 5:18 PM Martin Raiber wrote:
>> Hi,
>>
>> I have this in a btrfs directory. Linux kernel 5.10.16, no errors in dmesg,
>> no scrub errors:
>>
>> ls -lh
>> total 19G
>
On 01.02.2021 23:08 Martin Raiber wrote:
> On 27.01.2021 22:03 Chris Murphy wrote:
>> On Wed, Jan 27, 2021 at 10:27 AM Martin Raiber wrote:
>>> Hi,
>>>
>>> seems 5.10.8 still has the ENOSPC issue when compression is used
>>> (compress-force=zstd,spa
missing the parent directory fsync).
So far no negative consequences... (except that programs might get confused).
echo 3 > /proc/sys/vm/drop_caches doesn't help.
Regards,
Martin Raiber
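The "parent directory fsync" the fragment refers to is part of the usual durable-rename pattern: flush the file, rename it into place, then flush the parent directory so the new directory entry itself survives a crash. A minimal sketch using coreutils `sync FILE` (which fsyncs its arguments); the filenames are made up:

```shell
set -e
cd "$(mktemp -d)"

printf 'payload' > data.tmp
sync data.tmp      # fsync(2) the file's data and metadata
mv data.tmp data   # atomically rename into place
sync .             # fsync(2) the parent directory, persisting the entry
```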
On 26.02.2021 18:00 David Sterba wrote:
> On Fri, Jan 08, 2021 at 12:02:48AM +0000, Martin Raiber wrote:
>> When reading from btrfs file via io_uring I get following
>> call traces:
>>
>> [<0>] wait_on_page_bit+0x12b/0x270
>> [<0>
ver, I don't really care
if it has to iterate over all block group metadata after mount for a few
seconds, if that means it has less write IOs for every write. The calculus
obivously changes for a hard disk where reading this metadata would talke
forever due to low IOPS.
Regards,
Martin Raiber
On 27.01.2021 22:03 Chris Murphy wrote:
> On Wed, Jan 27, 2021 at 10:27 AM Martin Raiber wrote:
>> Hi,
>>
>> seems 5.10.8 still has the ENOSPC issue when compression is used
>> (compress-force=zstd,space_cache=v2):
>>
>> Jan 27 11:02:14 kernel:
UKS-RC-a6414fd731ce4f878af44c3987bce533 1.00MiB
Regards,
Martin Raiber
On 12.01.2021 18:01 Pavel Begunkov wrote:
On 12/01/2021 15:36, David Sterba wrote:
On Fri, Jan 08, 2021 at 12:02:48AM +, Martin Raiber wrote:
When reading from btrfs file via io_uring I get following
call traces:
Is there a way to reproduce by common tools (fio) or is a specialized
one
ing "preferred_metadata=metadata" option for years and the
impact e.g. for
rsync backups is huge.
Martin
8730f12b7962b21ea9ad2756abce1e205d22db84 ("btrfs: flag files as
supporting buffered async reads") with 5.9. io_uring will read
the data via worker threads if it can't be read without sync IO
this way.
Signed-off-by: Martin Raiber
---
fs/btrfs/file.c | 15 +--
1 fil
e v1, is what you wrote above?
Or would it be more straight forward than that with a newer kernel?
Best,
--
Martin
On 07.07.2019 14:15 Qu Wenruo wrote:
>
> On 2019/7/6 4:28 AM, Martin Raiber wrote:
>> More research on this. Seems a generic error reporting mechanism for
>> this is in the works https://lkml.org/lkml/2018/6/1/640 .
> sync() system call is defined as void sync(void);
lication to think
data is on disk even though it isn't.
On 05.07.2019 16:22 Martin Raiber wrote:
> Hi,
>
> I realize this isn't a btrfs specific problem but syncfs() returns no
> error even on complete fs failure. The problem is (I think) that the
> return value of sb->
tem
changes to disk.
For btrfs there is a work-around of using BTRFS_IOC_SYNC (which I am
going to use now) but that is obviously less user friendly than syncfs().
Regards,
Martin Raiber
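The asymmetry discussed above is visible even from the command line: sync(2) is declared `void sync(void)` and can report nothing, whereas syncfs(2) returns -1/errno. Coreutils `sync -f PATH` calls syncfs(2) and propagates a failure through its exit status. This is only a sketch of checking that status; it does not demonstrate the error-swallowing bug itself, and BTRFS_IOC_SYNC has no shell equivalent:

```shell
cd "$(mktemp -d)"

# `sync -f .` calls syncfs(2) on the filesystem containing ".",
# so a syncfs() error would show up as a non-zero exit status.
if sync -f .; then
    status=ok
else
    status=failed
fi
echo "syncfs via 'sync -f': $status"
```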
I've fixed the same problem(s) by increasing the global metadata size as
well. Though I haven't encountered them since Josef Bacik's block rsv
rework in 5.0.
Another problem with increasing the global metadata size is that I
think it is the only way dirty metadata is throttled. If increased too
mu
On 23.05.2019 19:41 Austin S. Hemmelgarn wrote:
> On 2019-05-23 13:31, Martin Raiber wrote:
>> On 23.05.2019 19:13 Austin S. Hemmelgarn wrote:
>>> On 2019-05-23 12:24, Chris Murphy wrote:
>>>> On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
>>>> wrote:
On 23.05.2019 19:13 Austin S. Hemmelgarn wrote:
> On 2019-05-23 12:24, Chris Murphy wrote:
>> On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
>> wrote:
>>>
>>> On 2019-05-22 14:46, Cerem Cem ASLAN wrote:
Could you confirm or disclaim the following explanation:
https://unix.stackexch
On 26.03.2019 14:37 Qu Wenruo wrote:
> On 2019/3/26 6:24 PM, berodual_xyz wrote:
>> Mount messages below.
>>
>> Thanks for your input, Qu!
>>
>> ##
>> [42763.884134] BTRFS info (device sdd): disabling free space tree
>> [42763.884138] BTRFS info (device sdd): force clearing of disk cache
>> [42763.8
On 14.03.2019 23:20 Chris Murphy wrote:
> If you install btrfs-progs 4.20+ you'll see the documentation for
> supporting swapfiles on Btrfs, supported in kernel 5.0+. `man 5 btrfs`
>
> Anyone with access to the wiki should update the FAQ
> https://btrfs.wiki.kernel.org/index.php/FAQ#Does_btrfs_supp
thout patching.
Regards,
Martin Raiber
essful in compelling the drive manufacturers to make
DEALLOCATE perform well for typical application workloads. So I'm not
holding my breath...
--
Martin K. Petersen Oracle Linux Engineering
d is dead on anything but the cheapest
devices. And on those it is probably going to be
performance-prohibitive to use it in any other way than a weekly fstrim.
--
Martin K. Petersen Oracle Linux Engineering
devices really need to distinguish between discard-as-a-hint where
it is free to ignore anything that's not a whole multiple of whatever
the internal granularity is, and the WRITE ZEROES use case where the end
result needs to be deterministic.
--
Martin K. Petersen Oracle Linux Engineering
(device dm-0): forced readonly
[ 51.357442] BTRFS: error (device dm-0) in
btrfs_run_delayed_refs:2978: errno=-2 No such entry
On Sun, Feb 17, 2019 at 5:27 PM Martin Pöhlmann wrote:
>
> Tried zero-log. After reboot the system booted again. But all
> sub-volumes are mounted read-only.
>
>
ice dm-0) in __btrfs_free_extent:6828:
errno=-2 No such entry
[ 51.357441] BTRFS info (device dm-0): forced readonly
[ 51.357442] BTRFS: error (device dm-0) in
btrfs_run_delayed_refs:2978: errno=-2 No such entry
On Sat, Feb 16, 2019 at 9:46 PM Martin Pöhlmann wrote:
>
> Thanks a lot for your help.
>
a usb-bootable recovery system w/ 5.0rc6 first.
On Sat, Feb 16, 2019 at 1:54 AM Qu Wenruo wrote:
>
>
>
> On 2019/2/16 5:31 AM, Martin Pöhlmann wrote:
> > Hello,
> >
> > After a reboot I am lost with an unmountable BTRFS partition. Before
> > reboot I had
Hello,
After a reboot I am lost with an unmountable BTRFS partition. Before
reboot I had first compile problems with freezing IntelliJ. These
persisted after a first reboot, after a second reboot I am faced with
the following error after entering the dm-crypt password (also after
manual mount with
e bug – even with the firmware version that was
supposed to fix this issue – is out of service now. Although I was able
to bring it back to a working (but blank) state with a secure erase, I
am just not going to use such a SSD for anything serious.
Thanks,
--
Martin
On 06.02.2019 01:22 Qu Wenruo wrote:
> On 2019/2/6 6:18 AM, Stephen R. van den Berg wrote:
>> Are these Sysreq+w dumps not usable?
>>
> Sorry for the late reply.
>
> The hang looks pretty strange, and doesn't really look like previous
> deadlock caused by tree block locking.
> But some strange behav
On 14.12.2018 09:07 ethanlien wrote:
> Martin Raiber wrote on 2018-12-12 23:22:
>> On 12.12.2018 15:47 Chris Mason wrote:
>>> On 28 May 2018, at 1:48, Ethan Lien wrote:
>>>
>>> It took me a while to trigger, but this actually deadlocks ;) More
>>> below.
e to make progress on page
> writeback.
>
I had lockups with this patch as well. If you put e.g. a loop device on
top of a btrfs file, loop sets PF_LESS_THROTTLE to avoid a feed back
loop causing delays. The task balancing dirty pages in
btrfs_finish_ordered_io doesn't have the flag and causes slow-downs. In
my case it managed to cause a feedback loop where it queues other
btrfs_finish_ordered_io and gets stuck completely.
Regards,
Martin Raiber
I was having the same issue with kernels 4.19.2 and 4.19.4. I don’t appear to
have the issue with 4.20.0-0.rc1 on Fedora Server 29.
The issue is very easy to reproduce on my setup, not sure how much of it is
actually relevant, but here it is:
- 3 drive RAID5 created
- Some data moved to it
- Ex
olume.
Does that work recursively?
I would find it quite unexpected if running btrfs subvol list in or on
the root directory of a BTRFS filesystem did not display any subvolumes
on that filesystem, no matter where they are.
Thanks,
--
Martin
reports the raw size and sometimes the logical
size. Especially in the "Total" line I find this a bit inconsistent.
"RAID1" columns show logical size, "Unallocated" shows raw size.
Also "Used:" in the global section shows raw size and "Free
(estimated):" shows logical size.
Thanks
--
Martin
Filipe Manana - 05.10.18, 17:21:
> On Fri, Oct 5, 2018 at 3:23 PM Martin Steigerwald
wrote:
> > Hello!
> >
> > On ThinkPad T520 after battery was discharged and machine just
> > blacked out.
> >
> > Is that some sign of regular consistency check / replay
30b2fea7fb77 ]---
[6.251219] BTRFS info (device dm-3): checking UUID tree
Thanks,
--
Martin
s-progs are that?
I am wondering whether to switch to free space tree v2. Would it provide
a benefit for regular / and /home filesystems as dual SSD BTRFS RAID-1
on a laptop?
Thanks,
--
Martin
On 08.09.2018 at 18:24, Adam Borowski wrote:
> On Thu, Sep 06, 2018 at 06:08:33AM -0400, Austin S. Hemmelgarn wrote:
>> On 2018-09-06 03:23, Nathan Dehnel wrote:
>>> So I guess my question is, does btrfs support atomic writes across
>>> multiple files? Or is anyone interested in such a feature?
>>
ytime. Has any thought been given to adding support
> > for lazytime to Btrfs?
[…]
> Is there any news regarding this?
I'd like to know whether there is any news about this as well.
If I understand it correctly, this could even help BTRFS performance a
lot, because it is COW'ing metadata.
Thanks,
--
Martin
Roman Mamedov - 18.08.18, 09:12:
> On Fri, 17 Aug 2018 23:17:33 +0200
>
> Martin Steigerwald wrote:
> > > Do not consider SSD "compression" as a factor in any of your
> > > calculations or planning. Modern controllers do not do it anymore,
> > > th
Austin S. Hemmelgarn - 17.08.18, 14:55:
> On 2018-08-17 08:28, Martin Steigerwald wrote:
> > Thanks for your detailed answer.
> >
> > Austin S. Hemmelgarn - 17.08.18, 13:58:
> >> On 2018-08-17 05:08, Martin Steigerwald wrote:
[…]
> >>> Anyway, creating
Hi Roman.
Now with proper CC.
Roman Mamedov - 17.08.18, 14:50:
> On Fri, 17 Aug 2018 14:28:25 +0200
>
> Martin Steigerwald wrote:
> > > First off, keep in mind that the SSD firmware doing compression
> > > only
> > > really helps with wear-leveling. Doing
Austin S. Hemmelgarn - 17.08.18, 15:01:
> On 2018-08-17 08:50, Roman Mamedov wrote:
> > On Fri, 17 Aug 2018 14:28:25 +0200
> >
> > Martin Steigerwald wrote:
> >>> First off, keep in mind that the SSD firmware doing compression
> >>> only
> >
7fe655700
R09: 0101
[Fri Aug 17 16:21:06 2018] R10: 56521bf7c0cc R11: 0246
R12: 7f67fd6d6440
[Fri Aug 17 16:21:06 2018] R13: 7f67fd6d5900 R14: 0064
R15: 0000
Regards,
Martin Raiber
Thanks for your detailed answer.
Austin S. Hemmelgarn - 17.08.18, 13:58:
> On 2018-08-17 05:08, Martin Steigerwald wrote:
[…]
> > I have seen a discussion about the limitation in point 2. That
> > allowing to add a device and make it into RAID 1 again might be
> > dange
not compress, but Crucial
m500 mSATA SSD does. That has been the secondary SSD that still had all
the data after the outage of the Intel SSD 320.
Overall I am happy, cause BTRFS RAID 1 gave me access to the data after
the SSD outage. That is the most important thing about it for me.
Thanks,
--
Martin
On 02.08.2018 14:27 Austin S. Hemmelgarn wrote:
> On 2018-08-02 06:56, Qu Wenruo wrote:
>>
>> On 2018-08-02 18:45, Andrei Borzenkov wrote:
>>>
>>> Sent from iPhone
>>>
On Aug 2, 2018, at 10:02, Qu Wenruo
wrote:
> On 2018-08-01 11:45, MegaBrutal wrote:
> Hi all,
Andrei Borzenkov - 02.08.18, 12:35:
> Sent from iPhone
>
> > On Aug 2, 2018, at 12:16, Martin Steigerwald
> > wrote:
> > Hugo Mills - 01.08.18, 10:56:
> >>> On Wed, Aug 01, 2018 at 05:45:15AM +0200, MegaBrutal wrote:
> >>> I know it
gaddict.com/posts/friends-dont-let-friends-use-btrfs-for-oltp
Interestingly it also compares with ZFS which is doing much better. So
maybe there is really something to be learned from ZFS.
It was not clear to me whether the benchmark was run on an SSD; as Tomas
notes, the "ssd" mount
Nikolay Borisov - 17.07.18, 10:16:
> On 17.07.2018 11:02, Martin Steigerwald wrote:
> > Nikolay Borisov - 17.07.18, 09:20:
> >> On 16.07.2018 23:58, Wolf wrote:
> >>> Greetings,
> >>> I would like to ask what is a healthy amount of free space to
>
aintenance from user space,
the filesystem needs to be fixed. Ideally I would not have to worry about
whether to regularly balance a BTRFS or not. In other words: I should
not have to visit a performance analysis and tuning course in order to
use a computer with BTRFS filesystem.
Thanks
On 10.07.2018 09:04 Pete wrote:
> I've just had the error in the subject which caused the file system to
> go read-only.
>
> Further part of error message:
> WARNING: CPU: 14 PID: 1351 at fs/btrfs/extent-tree.c:3076
> btrfs_run_delayed_refs+0x163/0x190
>
> 'Screenshot' here:
> https://drive.google.
> > Maybe it's a good idea to add it to "submitting-patches.rst"?
>
> I guess it's not officially documented but if you do git log --grep
> "Link:" you'd see quite a lot of patches actually have a Link pointing
> to the original thread if it
compressible just fine.
> Hardware is fine. Passes memtest86+ in SMP mode. Works fine on all
> other files.
>
>
>
> [ 381.869940] BUG: unable to handle kernel paging request at
> 00390e50 [ 381.870881] BTRFS: decompress failed
[…]
--
Martin
Hello Chris,
On 7.5.2018 at 18:37, Chris Mason wrote:
>
>
> On 7 May 2018, at 12:16, Martin Svec wrote:
>
>> Hello Chris,
>>
>> On 7.5.2018 at 16:49, Chris Mason wrote:
>>> On 7 May 2018, at 7:40, Martin Svec wrote:
>>>
>>>
Hello Chris,
On 7.5.2018 at 16:49, Chris Mason wrote:
> On 7 May 2018, at 7:40, Martin Svec wrote:
>
>> Hi,
>>
>> According to man btrfs [1], I assume that metadata_ratio=1 mount option
>> should
>> force allocation of one metadata chunk after every alloca
errors.
Best regards.
Martin
[1] https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs(5)#MOUNT_OPTIONS
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
tions too.
Martin
On 21.4.2018 at 9:38, David Goodwin wrote:
> Hi,
>
> I'm running a 3TiB EBS based (2+1TiB devices) volume in EC2 which contains
> about 500 read-only
> snapshots.
>
> btrfs-progs v4.7.3
>
> There are two dmesg trace things belo
On 10.3.2018 at 15:51, Martin Svec wrote:
> On 10.3.2018 at 13:13, Nikolay Borisov wrote:
>>
>>
>>>>> And then report back on the output of the extra debug
>>>>> statements.
>>>>>
>>>>> Your global rsv is essentia
On 10.3.2018 at 13:13, Nikolay Borisov wrote:
>
>
>
And then report back on the output of the extra debug
statements.
Your global rsv is essentially unused, this means
in the worst case the code should fall back to using the global rsv
for satisfying the memory a
On 9.3.2018 at 20:03, Martin Svec wrote:
> On 9.3.2018 at 17:36, Nikolay Borisov wrote:
>> On 23.02.2018 16:28, Martin Svec wrote:
>>> Hello,
>>>
>>> we have a btrfs-based backup system using btrfs snapshots and rsync.
>>> Sometimes,
>>
On 9.3.2018 at 17:36, Nikolay Borisov wrote:
>
> On 23.02.2018 16:28, Martin Svec wrote:
>> Hello,
>>
>> we have a btrfs-based backup system using btrfs snapshots and rsync.
>> Sometimes,
>> we hit ENOSPC bug and the filesystem is remounted read-only. Howe
it an
indication of a damaged
filesystem? Also note that rebuilding free space cache doesn't help.
Thank you.
Martin
On 23.2.2018 at 15:28, Martin Svec wrote:
> Hello,
>
> we have a btrfs-based backup system using btrfs snapshots and rsync.
> Sometimes,
> we hit ENOSPC bug and th
] BTRFS info (device sdb): forced readonly
[285169.096979] BTRFS warning (device sdb): Skipping commit of aborted
transaction.
[285169.096981] BTRFS: error (device sdb) in cleanup_transaction:1873:
errno=-28 No space left
How can I help you to fix this issue?
Regards,
Martin Svec
On 08.01.2018 19:34 Austin S. Hemmelgarn wrote:
> On 2018-01-08 13:17, Graham Cobb wrote:
>> On 08/01/18 16:34, Austin S. Hemmelgarn wrote:
>>> Ideally, I think it should be as generic as reasonably possible,
>>> possibly something along the lines of:
>>>
>>> A: While not strictly necessary, runnin
them with btrfs_should_throttle_delayed_refs. Maybe by
creating a snapshot of a file and then modifying it (some action that
creates delayed refs, is not truncate which is already throttled and
does not commit a transaction which is also throttled).
Regards,
Martin Raiber
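A rough user-space way to generate such delayed refs, along the lines the paragraph suggests, is to reflink-copy a file and then overwrite part of the copy. Filenames here are made up, and `cp --reflink=auto` silently falls back to a plain copy on filesystems without reflink support, so the btrfs-specific effect only occurs on btrfs:

```shell
set -e
cd "$(mktemp -d)"

# Create a file, then a reflink "snapshot" of it that shares its extents.
dd if=/dev/zero of=orig.img bs=1M count=8 status=none
cp --reflink=auto orig.img snap.img

# Overwrite one 4K block of the copy; on btrfs this CoWs the shared
# extent and queues delayed refs, processed when the transaction commits.
dd if=/dev/urandom of=snap.img bs=4K count=1 conv=notrunc status=none
sync
```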
On 03.12.2017 16:39 Martin Raiber wrote:
> On 26.11.2017 at 17:02, Tomasz Chmielewski wrote:
>> On 2017-11-27 00:37, Martin Raiber wrote:
>>> On 26.11.2017 08:46 Tomasz Chmielewski wrote:
>>>> Got this one on a 4.14-rc7 filesystem with some 400 GB left:
>>>
On 26.11.2017 at 17:02, Tomasz Chmielewski wrote:
> On 2017-11-27 00:37, Martin Raiber wrote:
>> On 26.11.2017 08:46 Tomasz Chmielewski wrote:
>>> Got this one on a 4.14-rc7 filesystem with some 400 GB left:
>> I guess it is too late now, but I guess the "btrfs fi
T: 131072
> ZSTD_DStreamWorkspaceBound with ZSTD_BTRFS_MAX_INPUT: 549424
>
> This is not something I could fix easily, we'd probably need a tuned
> version of ZSTD for grub constraints. Adding Nick to CC.
Somehow I am happy that I still have a plain Ext4 for /boot. :)
Thanks fo
David Sterba - 14.11.17, 19:49:
> On Tue, Nov 14, 2017 at 08:34:37AM +0100, Martin Steigerwald wrote:
> > Hello David.
> >
> > David Sterba - 13.11.17, 23:50:
> > > while 4.14 is still fresh, let me address some concerns I've seen on
> > > linux
> &
ZSTD is safe to use? Are you aware of any other issues?
I consider switching from LZO to ZSTD on this ThinkPad T520 with Sandybridge.
Thank you,
--
Martin
t completely sure
it was btrfs's fault and as usual not all the conditions may be
relevant. Could also be instead an upper layer error (Hyper-V storage),
memory issue or an application error.
Regards,
Martin Raiber
On 02.11.2017 16:10 Hans van Kranenburg wrote:
> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>> help to have multiple btrfs-cleaner threads? The block layer underneath
>> would have higher throughput w
Hi,
snapshot cleanup is a little slow in my case (50TB volume). Would it
help to have multiple btrfs-cleaner threads? The block layer underneath
would have higher throughput with more simultaneous read/write requests.
Regards,
Martin Raiber
e/btrfs/keep/2017-10-27-shotgunblast.png
> [4]
> https://syrinx.knorrie.org/~knorrie/btrfs/keep/2016-12-18-heatmap-scripting/
> fsid_ed10a358-c846-4e76-a071-3821d423a99d_startat_320029589504_at_1482095269
> .png [5] https://www.spinics.net/lists/linux-btrfs/msg64418.html
> [6]
> https://git.ke
more ENOSPC issues with 4.9.x than with the latest 4.14.
Regards,
Martin
userspace, I am all for it.
[1] http://www.netbsd.org/releases/formal-7/NetBSD-7.0.html
(tons of presentation PDFs on their site as well)
Thanks,
--
Martin
ther file operations?
>
as far as I can see it only uses the log tree in some cases where the
log tree was already used for the file or the parent directory. The
cases are documented here
https://github.com/torvalds/linux/blob/master/fs/btrfs/tree-log.c#L45 .
So rename isn't much heavier
tall the client in the VM. It excludes unnecessary
stuff like e.g. page files or the shadow storage area from the image
backups, as well and has a mode to store image backups as raw btrfs files.
Linux VMs I'd backup as files either from the hypervisor or from in VM.
If you want to backup
all dirty
>> data pages to disk, and then commit transaction.
>> While only calling btrfs_commit_transacation() doesn't trigger dirty
>> page writeback.
>>
>> So there is a difference.
this conversation made me realize why btrfs has sub-optimal meta-data
performa
filesystems with
> snapshots, is unlikely to hit me.
Hmmm, the BTRFS filesystems on my laptop 3 to 5 or even more years old. I stick
with 4.10 for now, I think.
The older ones are RAID 1 across two SSDs, the newer one is single device, on
one SSD.
These filesystems didn't fail me in years and s
filesystems like Ext4 and XFS can do it… so this should be
possible with BTRFS as well.
Thanks,
--
Martin
Christoph,
> We will only have sense data if the command exectured and got a SCSI
> result, so this is pointless.
"executed"
Reviewed-by: Martin K. Petersen
--
Martin K. Petersen Oracle Linux Engineering
*clone, int error)
> {
> + struct multipath *m = ti->private;
> + struct dm_mpath_io *mpio = get_mpio_from_bio(clone);
> + struct pgpath *pgpath = mpio->pgpath;
> unsigned long flags;
>
> - if (!error)
> - return 0;
to my past experience something like xfs_repair surpasses btrfs
check in the ability to actually fix a broken filesystem by a great extent.
Ciao,
--
Martin
Martin Steigerwald - 22.04.17, 20:01:
> Chris Murphy - 22.04.17, 09:31:
> > Is the file system created with no-holes?
>
> I have how to find out about it and while doing accidentally set that
I didn't find out how to find out about it and…
> feature on another filesystem (bt
Hello Chris.
Chris Murphy - 22.04.17, 09:31:
> Is the file system created with no-holes?
I have how to find out about it and while doing accidentally set that feature
on another filesystem (btrfstune only seems to be able to enable the feature,
not show the current state of it).
But as there i
ASAP.
Thanks,
Martin
Martin Steigerwald - 14.04.17, 21:35:
> Hello,
>
> backup harddisk connected via eSATA. Hard kernel hang, mouse pointer
> freezing two times seemingly after finishing /home backup and creating new
> snapshot on source BTRFS SSD RAID 1 for / in order to b
. Anyway deleting that
subvolume works, and as I suspected an issue with the backup disk, I
started with that one.
I got
merkaba:~> btrfs --version
btrfs-progs v4.9.1
merkaba:~> cat /proc/version
Linux version 4.9.20-tp520-btrfstrim+ (martin@merkaba) (gcc version 6.3.0
20170321 (Debian
ears now for 48TB of data (not enough room in the host for the
disks).
Never suffered an eSATA disconnect.
Had the usual cooling fan failures and HDD failures due to old age.
All just a case of ensuring undisturbed clean cabling and a good UPS?...
(BTRFS spanning four disks per external pack has
It looks like you're right!
On a different machine:
# btrfs sub list / | grep -v lxc
ID 327 gen 1959587 top level 5 path mnt/reaver
ID 498 gen 593655 top level 5 path var/lib/machines
# btrfs sub list / -d | wc -l
0
Ok, apparently it's a regression in one of the latest versions then.
But, it seem
On 13.2.2017 21:03, Hans van Kranenburg wrote:
On 02/13/2017 12:26 PM, Martin Mlynář wrote:
I've currently run into a strange problem with BTRFS. I'm using it as my
daily driver as root FS. Nothing complicated, just few subvolumes and
incremental backups using btrbk.
Now I've
used 132.89GiB
devid1 size 200.00GiB used 200.00GiB path /dev/mapper/vg0-btrfsroot
Thank you for your time,
Best regards
--
Martin Mlynář
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
> On 2017-02-08 07:14, Martin Raiber wrote:
>> Hi,
>>
>> On 08.02.2017 03:11 Peter Zaitsev wrote:
>>> Out of curiosity, I see one problem here:
>>> If you're doing snapshots of the live database, each
s snapshots shouldn't be much behind
the properly snapshotted state, so I see the advantages more with
usability and taking care of corner cases automatically.
Regards,
Martin Raiber
On 04.01.2017 00:43 Hans van Kranenburg wrote:
> On 01/04/2017 12:12 AM, Peter Becker wrote:
>> Good hint, this would be an option and i will try this.
>>
>> Regardless of this the curiosity has packed me and I will try to
>> figure out where the problem with the low transfer rate is.
>>
>> 2017-01