2017-10-13 20:15 GMT+02:00 Chris Murphy:
> On Fri, Oct 13, 2017 at 12:02 PM, Duncan <1i5t5.dun...@cox.net> wrote:
>> Those warnings aren't anything to be /too/ worried about. They are
>> triggered when a btrfs device size isn't a multiple of the btrfs
>> sectorsize
2017-10-13 13:02 GMT+02:00 Duncan <1i5t5.dun...@cox.net>:
> Those warnings aren't anything to be /too/ worried about. They are
> triggered when a btrfs device size isn't a multiple of the btrfs
> sectorsize (currently 4 KiB on amd64 aka x86_64). You can manually
> shrink your btrfs devices the
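(For illustration, a check-and-shrink along those lines might look like
this; /dev/sdb, the mount point /mnt/btrfs, and device ID 1 are all
placeholders, not taken from the original thread:)

# SIZE=$(blockdev --getsize64 /dev/sdb)
# echo $(( SIZE % 4096 ))
(a non-zero result means the device size is not a multiple of the 4 KiB sectorsize)
# btrfs filesystem resize 1:$(( SIZE / 4096 * 4096 )) /mnt/btrfs
(rounds the btrfs device size down to the nearest 4 KiB boundary)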
2017-10-13 10:40 GMT+02:00 Juan Orti Alcaine <j.orti.alca...@gmail.com>:
> Hi,
>
> I've upgraded my system to Fedora 27 and now I see many btrfs
> warnings, although the system seems to be working fine. Is this
> something I should worry about?
I'm getting more warnings
Hi,
I've upgraded my system to Fedora 27 and now I see many btrfs
warnings, although the system seems to be working fine. Is this
something I should worry about?
I have a scrub running with no errors so far:
# btrfs scrub status /mnt/btrfs
scrub status for 038b2b48-fd2d-4565-b2b1-d07847ecca8c
2017-08-31 13:36 GMT+02:00 Roman Mamedov:
> If you could implement SSD caching in front of your FS (such as lvmcache or
> bcache), that would work wonders for performance in general, and especially
> for mount times. I have seen amazing results with lvmcache (of just 32 GB) for
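(A minimal lvmcache setup of that sort, sketched with placeholder names;
assume the slow LV is vg0/data and the SSD is /dev/sdc:)

# pvcreate /dev/sdc
# vgextend vg0 /dev/sdc
# lvcreate -L 32G -n cachedata vg0 /dev/sdc
# lvcreate -L 64M -n cachemeta vg0 /dev/sdc
# lvconvert --type cache-pool --poolmetadata vg0/cachemeta vg0/cachedata
# lvconvert --type cache --cachepool vg0/cachedata vg0/data

(the cache can later be detached again with lvconvert --uncache vg0/data)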
2017-03-13 12:29 GMT+01:00 Hérikz Nawarro:
> Hello everyone,
>
> Is btrfs safe to use for home storage today? No RAID, just secure
> storage for some files, with snapshots created from it.
>
In my humble opinion, yes. I've been running a RAID1 btrfs at home for 5
years and I
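(For that use case, a read-only snapshot is a single command; a sketch,
assuming /home/data is a subvolume and /home/.snapshots exists, both
hypothetical paths:)

# btrfs subvolume snapshot -r /home/data /home/.snapshots/data-$(date +%F)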
Hi, today I got this bug after a power failure.
One file that was being written during the power failure appeared
after reboot as:
$ ls -la data/0/3833
?-. 1 root root 0 Jan 1 1970 data/0/3833
I decided to delete it, but I got this bug after a few seconds and the
system halted. I had
2016-06-18 0:01 GMT+02:00 Hans van Kranenburg:
> Hi!
>
> After playing around a bit for a few months with a bunch of
> proof-of-concept level scripts to be able to debug my btrfs file
> systems, the inevitable happened:
>
> https://github.com/knorrie/python-btrfs/
Hello, I've hit this bug when removing the device
/dev/mapper/vg_hd04-lv_btrfs_hd04 from this filesystem. The only
peculiarity is that it mixes partitions and an LVM logical volume.
The device was removed successfully and no further errors have been seen.
# btrfs fi show
Label: 'btrfs_raid1'
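(The removal itself would have been done with something like the
following; the mount point /mnt/hd04 is an assumption based on the rest
of the thread:)

# btrfs device delete /dev/mapper/vg_hd04-lv_btrfs_hd04 /mnt/hd04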
Hello,
I have added a new disk to my filesystem and I'm doing a balance right
now, but I'm a bit worried that the disk usage does not get updated as
it should. I remember from earlier versions that you could see the
disk usage being balanced across all disks.
These are the commands I've run:
#
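(The commands are cut off in the archive; for an add-a-disk-then-balance
scenario they would typically be something like the following, with
/dev/sdc and /mnt/btrfs as placeholders:)

# btrfs device add /dev/sdc /mnt/btrfs
# btrfs balance start /mnt/btrfs
(progress and per-device usage can then be watched with:)
# btrfs balance status /mnt/btrfs
# btrfs filesystem show /mnt/btrfs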
2015-08-11 15:20 GMT+02:00 Austin S Hemmelgarn <ahferro...@gmail.com>:
How much slack space was allocated by BTRFS before running the balance (i.e.,
how big a difference was there between the allocated and used space), and
did the balance run to completion? If you had a lot of mostly empty chunks
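(A concrete way to check both, assuming the filesystem is mounted at
/mnt/btrfs:)

# btrfs filesystem show /mnt/btrfs
(per-device "used" here is allocated chunk space, to compare against device size)
# btrfs filesystem df /mnt/btrfs
("total" is allocated space, "used" is actually occupied, per chunk type)
# btrfs balance start -dusage=10 /mnt/btrfs
(rewrites only data chunks at most 10% full, reclaiming mostly-empty ones)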
I'm not a developer, but I read this a few days ago. Could it be helpful?
https://blog-vpodzime.rhcloud.com/?p=61
https://github.com/rhinstaller/libblockdev
2015-05-27 14:31 GMT+02:00 Stef Bon <stef...@gmail.com>:
Hi,
I'm working on a program (using SQLite, FUSE and btrfs) to provide
backup and
On 2015-03-29 22:04, Holger Hoffstätte wrote:
On Sun, 29 Mar 2015 13:40:56 -0600, Chris Murphy wrote:
On Sun, Mar 29, 2015 at 1:15 PM, Juan Orti Alcaine
<juan.o...@miceliux.com> wrote:
The filesystem was created with one device, and I have added two more
devices afterwards. To convert
Hello, I'm experiencing problems while balancing a filesystem to
raid1. The versions I'm using are:
kernel-4.0.0-0.rc5.git1.3.fc22.x86_64
btrfs-progs-3.19.1-1.fc22.x86_64
The filesystem was created with one device, and I have added two more
devices afterwards. To convert it to raid1, I have
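(The usual sequence for such a conversion, with the device names and
mount point as placeholders:)

# btrfs device add /dev/sdb /dev/sdc /mnt
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt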
On 2014-10-14 18:54, Robert White wrote:
Howdy,
So I run several Gentoo systems, and I upgraded two of them to kernel
3.17.0:
One using BTRFS for root.
One using ext3 for root (via the ext4 driver).
_Both_ systems exhibited strange behavior (long pauses and then hangs
requiring a hard power-off).
On 2014-10-15 15:46, Josef Bacik wrote:
On 10/15/2014 03:08 AM, Juan Orti Alcaine wrote:
I've also experienced Btrfs corruption with 3.17.0 (Fedora 21 alpha).
It has happened twice, each time after a clean reinstall and a wipe
of the old fs. In less than a day, both installations got
On 2014-10-15 16:30, Josef Bacik wrote:
On 10/15/2014 10:05 AM, Juan Orti Alcaine wrote:
On 2014-10-15 15:46, Josef Bacik wrote:
On 10/15/2014 03:08 AM, Juan Orti Alcaine wrote:
I've also experienced Btrfs corruption with 3.17.0 (Fedora 21
alpha).
It has happened twice, each time
I cannot find the answer to this one. How can I determine which
subvolume I have mounted at a certain path? I've looked through /sys but
found no clue.
Thank you.
--
Juan Orti
https://miceliux.com
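(Two ways to answer this, sketched with /mnt/foo as a placeholder path:
the mount root recorded in /proc/self/mountinfo is the subvolume path,
and btrfs-progs can report it directly:)

# grep /mnt/foo /proc/self/mountinfo
# btrfs subvolume show /mnt/foo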
Hello, I noticed that file capabilities are lost on received subvolumes, so I
opened the bug report #68891 [1]. I don't know if other xattrs are affected by
this problem.
I'd just like to know whether fixing this issue is on the developers' radar.
Thank you.
[1]
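(A minimal reproducer for the report, with every path hypothetical:)

# setcap cap_net_raw+ep /mnt/src/ping
# btrfs subvolume snapshot -r /mnt/src /mnt/src/.snap
# btrfs send /mnt/src/.snap | btrfs receive /mnt/backup
# getcap /mnt/backup/.snap/ping
(on affected versions the capability is missing on the received copy)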
This kind of crash happens to me very often when I delete a large
number (200+) of snapshots at once.
There is very high I/O for a while, and after that, the system
froze at intervals. I had to reboot the system to get it responsive
again.
Versions used:
kernel-3.13.6-200.fc20.x86_64
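(A common workaround is to delete in small batches instead of all at
once, pacing the deletions with a filesystem sync in between; a sketch,
with the snapshot layout purely hypothetical:)

# for s in /mnt/snapshots/2014-01-*; do btrfs subvolume delete "$s"; btrfs filesystem sync /mnt; done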
I got this error when deleting around a hundred snapshots with kernel
3.13.0 and btrfs-progs 3.12:
[ 831.628833] [sched_delayed] sched: RT throttling activated
[ 858.586718] BUG: soft lockup - CPU#1 stuck for 22s! [btrfs-transacti:643]
[ 858.586721] Modules linked in: ipt_MASQUERADE
I'm trying to delete an empty subvolume, but I can't. It is a LUKS
device and I'm using autofs; I have unmounted it, closed the LUKS
device, and mounted it again, but it doesn't work.
# mount | grep hd04
systemd-1 on /mnt/hd04 type autofs
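(For reference, the cycle described above would look something like
this; the underlying device /dev/sdd1, the mapping name luks-hd04, and
the subvolume name are all hypothetical:)

# umount /mnt/hd04
# cryptsetup luksClose luks-hd04
# cryptsetup luksOpen /dev/sdd1 luks-hd04
# mount /dev/mapper/luks-hd04 /mnt/hd04
# btrfs subvolume delete /mnt/hd04/empty-subvol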