On 06.12.2018 16:04, Austin S. Hemmelgarn wrote:
>
> * On SCSI devices, a discard operation translates to a SCSI UNMAP
> command. As pointed out by Ronnie Sahlberg in his reply, this command
> is purely advisory, may not result in any actual state change on the
> target device, and is not
On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
> I have some additional info.
>
> I found the reason the FS got corrupted. It was a single failing drive,
> which caused the entire cabinet (containing 7 drives) to reset. So the
> FS suddenly lost 7 drives.
>
This remains a mystery to me. btrfs is
On 27.10.2018 21:12, Remi Gauvin wrote:
> On 2018-10-27 01:42 PM, Marc MERLIN wrote:
>
>>
>> I've been using btrfs for a long time now but I've never had a
>> filesystem where I had 15GB apparently unusable (7%) after a balance.
>>
>
> The space isn't unusable. It's just allocated. (It's used in
On 27.10.2018 18:45, Lennert Buytenhek wrote:
> Hello!
>
> FS_IOC_FIEMAP on btrfs seems to be returning fe_physical values that
> don't always correspond to the actual on-disk data locations. For some
> files the values match, but e.g. for this file:
>
> # filefrag -v foo
> Filesystem type is:
On 24.10.2018 3:36, Marc MERLIN wrote:
> Normally btrfs fi show will show lost space because
> your trees aren't balanced.
> Balance usually reclaims that space, or most of it.
> In this case, not so much.
>
> kernel 4.17.6:
>
> saruman:/mnt/btrfs_pool1# btrfs fi show .
> Label: 'btrfs_pool1'
On 19.10.2018 12:41, Cerem Cem ASLAN wrote:
> By saying "manually", I mean copying files into a subvolume on a
> different mountpoint manually, then mark the target as if it were
> created by "btrfs send | btrfs receive".
>
> Rationale:
>
> When we delete all common snapshots from source, we have to
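For background on why the "mark as received" step matters (my illustration; nothing in the thread specifies this code): an incremental `btrfs send -p` stream names its parent by UUID, and the receiving side looks for a local subvolume whose "Received UUID" matches it. A toy matcher in Python, with made-up UUID values:

```python
# Toy model: find the receiver-side subvolume that corresponds to a
# sender-side parent snapshot, the way tooling pairs them up by UUID.
def find_received(subvols, sender_uuid):
    """subvols: list of dicts with 'uuid' and 'received_uuid' keys."""
    for sv in subvols:
        if sv.get('received_uuid') == sender_uuid:
            return sv
    return None

receiver = [
    {'uuid': 'aaaa-01', 'received_uuid': None},       # ordinary subvolume
    {'uuid': 'bbbb-02', 'received_uuid': '1111-ff'},  # made by btrfs receive
]
match = find_received(receiver, '1111-ff')
print(match['uuid'])
```

A manually copied subvolume has `received_uuid` unset, so it will never match; filling that field in is exactly what "mark as received" means here.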
On 16.10.2018 0:33, Chris Murphy wrote:
> On Mon, Oct 15, 2018 at 3:26 PM, Anton Shepelev wrote:
>> Chris Murphy to Anton Shepelev:
>>
How can I track down the origin of this mount point:
/dev/sda2 on /home/hana type btrfs
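Not part of the original mail, but one reliable way to answer this: `findmnt -o TARGET,SOURCE,FSROOT /home/hana` shows which device and subvolume a mount point came from, and the same data lives in `/proc/self/mountinfo`. A minimal Python sketch that parses a mountinfo-style line (the sample line and subvolume names are hypothetical):

```python
def parse_mountinfo_line(line):
    """Split one /proc/self/mountinfo line into its named fields.
    The variable-length optional-fields list ends at the lone '-'."""
    fields = line.split()
    sep = fields.index('-')
    return {
        'fs_root': fields[3],          # subvolume path as seen by the kernel
        'mount_point': fields[4],
        'fstype': fields[sep + 1],
        'source': fields[sep + 2],
        'super_options': fields[sep + 3].split(','),
    }

# Hypothetical line resembling the mount in question:
line = ("98 1 0:43 /@/hana /home/hana rw,relatime shared:30 - "
        "btrfs /dev/sda2 rw,space_cache,subvolid=265,subvol=/@/hana")
info = parse_mountinfo_line(line)
print(info['source'], info['mount_point'], info['fs_root'])
```

The `subvol=` entry in the superblock options is what tells you which part of the btrfs filesystem is mounted there.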
On 09.10.2018 18:52, Chris Murphy wrote:
> On Tue, Oct 9, 2018 at 8:48 AM, Gervais, Francois
> wrote:
>> Hi,
>>
>> If I have a snapshot where I overwrite a big file of which only a
>> small portion is different, will the whole file be rewritten in
>> the snapshot? Or only the different part
On 18.09.2018 22:11, Austin S. Hemmelgarn wrote:
> On 2018-09-18 14:38, Andrei Borzenkov wrote:
>> On 18.09.2018 21:25, Austin S. Hemmelgarn wrote:
>>> On 2018-09-18 14:16, Andrei Borzenkov wrote:
>>>> On 18.09.2018 08:37, Chris Murphy wrote:
>>>>> On Mo
On 18.09.2018 21:57, Chris Murphy wrote:
> On Tue, Sep 18, 2018 at 12:16 PM, Andrei Borzenkov
> wrote:
>> On 18.09.2018 08:37, Chris Murphy wrote:
>
>>> The patches aren't upstream yet? Will they be?
>>>
>>
>> I do not know. Personally I think much e
On 18.09.2018 21:28, Gervais, Francois wrote:
>> No. It is already possible (by setting received UUID); it should not be
>> made too open to easy abuse.
>
>
> Do you mean edit the UUID in the byte stream before btrfs receive?
>
No, I mean setting received UUID on subvolume. Unfortunately, it is
On 18.09.2018 21:25, Austin S. Hemmelgarn wrote:
> On 2018-09-18 14:16, Andrei Borzenkov wrote:
>> On 18.09.2018 08:37, Chris Murphy wrote:
>>> On Mon, Sep 17, 2018 at 11:24 PM, Andrei Borzenkov
>>> wrote:
>>>> On 18.09.2018 07:21, Chris Murphy wrote:
>>>
On 18.09.2018 08:37, Chris Murphy wrote:
> On Mon, Sep 17, 2018 at 11:24 PM, Andrei Borzenkov
> wrote:
>> On 18.09.2018 07:21, Chris Murphy wrote:
>>> On Mon, Sep 17, 2018 at 9:44 PM, Chris Murphy
>>> wrote:
>>>> https://btrfs.wiki.kernel.org/index.php/FAQ
On 18.09.2018 20:56, Gervais, Francois wrote:
>
> Hi,
>
> I'm trying to apply a btrfs send diff (done through -p) to another subvolume
> with the same content as the proper parent but with a different uuid.
>
> I looked through btrfs receive and I get the feeling that this is not
> possible
On 18.09.2018 07:21, Chris Murphy wrote:
> On Mon, Sep 17, 2018 at 9:44 PM, Chris Murphy wrote:
>> https://btrfs.wiki.kernel.org/index.php/FAQ#Does_grub_support_btrfs.3F
>>
>> Does anyone know if this is still a problem on Btrfs if grubenv has
>> xattr +C set? In which case it should be possible to
Sent from iPhone
> On 19 Aug 2018, at 11:37, Martin Steigerwald wrote:
>
> waxhead - 18.08.18, 22:45:
>> Adam Hunt wrote:
>>> Back in 2014 Ted Tso introduced the lazytime mount option for ext4
>>> and shortly thereafter a more generic VFS implementation which was
>>> then merged into
On 14.08.2018 18:16, Hans van Kranenburg wrote:
> On 08/14/2018 03:00 PM, Dmitrii Tcvetkov wrote:
>>> Scott E. Blomquist writes:
>>> > Hi All,
>>> >
>>> > [...]
>>
>> I'm not a dev, just a user.
>> btrfs-zero-log is for a very specific case[1], not for transid errors.
>> Transid errors mean that some
On 12.08.2018 10:04, Andrei Borzenkov wrote:
>
> On ZFS snapshots are contained in dataset and you limit total dataset
> space consumption including all snapshots. Thus the end effect is the same -
> deleting data that is itself captured in snapshot does not make a single
> byte availa
On 12.08.2018 06:16, Chris Murphy wrote:
> On Fri, Aug 10, 2018 at 9:29 PM, Duncan <1i5t5.dun...@cox.net> wrote:
>> Chris Murphy posted on Fri, 10 Aug 2018 12:07:34 -0600 as excerpted:
>>
>>> But whether data is shared or exclusive seems potentially ephemeral, and
>>> not something a sysadmin should
On 10.08.2018 12:33, Tomasz Pala wrote:
>
>> For 4 disk with 1T free space each, if you're using RAID5 for data, then
>> you can write 3T data.
>> But if you're also using RAID10 for metadata, and you're using default
>> inline, we can use small files to fill the free space, resulting in 2T
>>
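The arithmetic being quoted can be checked directly: RAID5 leaves n-1 disks' worth of capacity for data, while RAID10 mirrors every byte once, so small files inlined into RAID10 metadata are capped at half the raw space. A quick sketch of the two formulas (my illustration of the numbers in the quote, not btrfs's allocator):

```python
def raid5_usable(disks, size):
    # one disk's worth of each stripe holds parity
    return (disks - 1) * size

def raid10_usable(disks, size):
    # every byte is mirrored once, so half the raw capacity
    return disks * size // 2

print(raid5_usable(4, 1))   # 3 (TB) writable as RAID5 data chunks
print(raid10_usable(4, 1))  # 2 (TB) if small files land in RAID10 metadata
```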
On 10.08.2018 21:21, Tomasz Pala wrote:
> On Fri, Aug 10, 2018 at 07:39:30 -0400, Austin S. Hemmelgarn wrote:
>
>>> I.e.: every shared segment should be accounted within quota (at least once).
>> I think what you mean to say here is that every shared extent should be
>> accounted to quotas for
On 10.08.2018 10:33, Tomasz Pala wrote:
> On Fri, Aug 10, 2018 at 07:03:18 +0300, Andrei Borzenkov wrote:
>
>>> So - the limit set on any user
>>
>> Does btrfs support per-user quota at all? I am aware only of per-subvolume
>> quotas.
>
> Well, this is a kind
Sent from iPhone
> On 2 Aug 2018, at 10:02, Qu Wenruo wrote:
>
>
>
>> On 2018-08-01 11:45, MegaBrutal wrote:
>> Hi all,
>>
>> I know it's a decade-old question, but I'd like to hear your thoughts
>> of today. By now, I became a heavy BTRFS user. Almost everywhere I use
>> BTRFS,
Sent from iPhone
> On 2 Aug 2018, at 12:16, Martin Steigerwald wrote:
>
> Hugo Mills - 01.08.18, 10:56:
>>> On Wed, Aug 01, 2018 at 05:45:15AM +0200, MegaBrutal wrote:
>>> I know it's a decade-old question, but I'd like to hear your
>>> thoughts
>>> of today. By now, I became a heavy
On 24.07.2018 15:16, Marc Joliet wrote:
> Hi list,
>
> (Preemptive note: this was with btrfs-progs 4.15.1, I have since upgraded to
> 4.17. My kernel version is 4.14.52-gentoo.)
>
> I recently had to restore the root FS of my desktop from backup (extent tree
> corruption; not sure how, possibly
On 20.07.2018 20:16, Goffredo Baroncelli wrote:
> On 07/20/2018 07:17 AM, Andrei Borzenkov wrote:
>> On 18.07.2018 22:42, Goffredo Baroncelli wrote:
>>> On 07/18/2018 09:20 AM, Duncan wrote:
>>>> Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
>>>
On 18.07.2018 22:42, Goffredo Baroncelli wrote:
> On 07/18/2018 09:20 AM, Duncan wrote:
>> Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
>> excerpted:
>>
>>> On 07/17/2018 11:12 PM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
On 18.07.2018 16:30, Austin S. Hemmelgarn wrote:
> On 2018-07-18 09:07, Chris Murphy wrote:
>> On Wed, Jul 18, 2018 at 6:35 AM, Austin S. Hemmelgarn
>> wrote:
>>
>>> If you're doing a training presentation, it may be worth mentioning that
>>> preallocation with fallocate() does not behave the same
On 18.07.2018 03:05, Qu Wenruo wrote:
>
>
> On 2018-07-18 04:59, Marc MERLIN wrote:
>> Ok, I did more testing. Qu is right that btrfs check does not crash the
>> kernel.
>> It just takes all the memory until linux hangs everywhere, and somehow (no
>> idea why)
>> the OOM killer never triggers.
On 03.07.2018 10:15, Duncan wrote:
> Andrei Borzenkov posted on Tue, 03 Jul 2018 07:25:14 +0300 as excerpted:
>
>> On 02.07.2018 21:35, Austin S. Hemmelgarn wrote:
>>> them (trimming blocks on BTRFS gets rid of old root trees, so it's a
>>> bit dangerous to do
On 02.07.2018 21:35, Austin S. Hemmelgarn wrote:
> them (trimming blocks on BTRFS gets rid of old root trees, so it's a bit
> dangerous to do it while writes are happening).
Could you please elaborate? Do you mean btrfs can trim data before new
writes are actually committed to disk?
On 03.07.2018 04:37, Qu Wenruo wrote:
>
> BTW, IMHO the bcache is not really helping for a backup system, which is
> more write oriented.
>
There is a new writecache target which may help in this case.
"Could not find base snapshot matching UUID
xxx" would be far less ambiguous.
> Marc
>
> On Sun, Jul 01, 2018 at 01:03:37AM +0200, Hannes Schweizer wrote:
>> On Sat, Jun 30, 2018 at 10:02 PM Andrei Borzenkov
>> wrote:
>>>
>>> 30.06.2018 21:49, Andre
On 30.06.2018 21:49, Andrei Borzenkov wrote:
> On 30.06.2018 20:49, Hannes Schweizer wrote:
...
>>
>> I've tested a few restore methods beforehand, and simply creating a
>> writeable clone from the restored snapshot does not work for me, eg:
>> # create some source sn
On 30.06.2018 20:49, Hannes Schweizer wrote:
> On Sat, Jun 30, 2018 at 8:24 AM Andrei Borzenkov wrote:
>>
>> Do not reply privately to mails on the list.
>>
>> On 29.06.2018 22:10, Hannes Schweizer wrote:
>>> On Fri, Jun 29, 2018 at 7:44 PM Andrei Borzenkov
>>
Do not reply privately to mails on the list.
On 29.06.2018 22:10, Hannes Schweizer wrote:
> On Fri, Jun 29, 2018 at 7:44 PM Andrei Borzenkov wrote:
>>
>> On 28.06.2018 23:09, Hannes Schweizer wrote:
>>> Hi,
>>>
>>> Here's my environment:
>>> Linux diablo
On 30.06.2018 06:22, Duncan wrote:
> Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
> excerpted:
>
>> On 2018-06-24 16:22, Goffredo Baroncelli wrote:
>>> On 06/23/2018 07:11 AM, Duncan wrote:
waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
> According
On 28.06.2018 23:09, Hannes Schweizer wrote:
> Hi,
>
> Here's my environment:
> Linux diablo 4.17.0-gentoo #5 SMP Mon Jun 25 00:26:55 CEST 2018 x86_64
> Intel(R) Core(TM) i5 CPU 760 @ 2.80GHz GenuineIntel GNU/Linux
> btrfs-progs v4.17
>
> Label: 'online' uuid: e4dc6617-b7ed-4dfb-84a6-26e3952c8390
On 28.06.2018 12:15, Qu Wenruo wrote:
>
>
> On 2018-06-28 16:16, Andrei Borzenkov wrote:
>> On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote:
>>>
>>>
>>> On 2018-06-28 11:14, r...@georgianit.com wrote:
>>>>
>>>>
>>>> O
On Thu, Jun 28, 2018 at 11:16 AM, Andrei Borzenkov wrote:
> On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote:
>>
>>
>> On 2018-06-28 11:14, r...@georgianit.com wrote:
>>>
>>>
>>> On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote:
>>>
On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote:
>
>
> On 2018-06-28 11:14, r...@georgianit.com wrote:
>>
>>
>> On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote:
>>
>>>
>>> Please get yourself clear of what other raid1 is doing.
>>
>> A drive failure, where the drive is still there when the
On 07.06.2018 05:50, Christoph Anton Mitterer wrote:
> Hey.
>
> Just wondered about the following:
>
> When I have a btrfs which acts as a master and from which I make copies
> of snapshots on it via send/receive (with using -p at send) to other
> btrfs which acts as copies like this:
> master
On 23.05.2018 09:32, Nikolay Borisov wrote:
>
>
> On 22.05.2018 23:05, ein wrote:
>> Hello devs,
>>
>> I tested BTRFS in production for about a month:
>>
>> 21:08:17 up 34 days, 2:21, 3 users, load average: 0.06, 0.02, 0.00
>>
>> Without power blackout, hardware failure, SSD's SMART is flawless
On Wed, May 2, 2018 at 10:29 PM, Austin S. Hemmelgarn
wrote:
...
>
> Assume you have a BTRFS raid5 volume consisting of 6 8TB disks (which gives
> you 40TB of usable space). You're storing roughly 20TB of data on it, using
> a 16kB block size, and it sees about 1GB of
On 09.03.2018 19:43, Austin S. Hemmelgarn wrote:
>
> If the answer to either one or two is no but the answer to three is yes,
> pull out the failed disk, put in a new one, mount the volume degraded,
> and use `btrfs replace` as well (you will need to specify the device ID
> for the now missing
On 10.03.2018 02:13, Saravanan Shanmugham (sarvi) wrote:
>
> Netapp's storage system has the concept of snapshots/clones.
> And when I create a clone from a snapshot, I can give/change ownership of
> the entire tree in the volume to a different userid.
You are probably mistaken. NetApp FlexClone
On 09.03.2018 08:38, Liu Bo wrote:
> On Thu, Mar 08, 2018 at 09:15:50AM +0300, Andrei Borzenkov wrote:
>> On 07.03.2018 21:49, Liu Bo wrote:
>>> Hi,
>>>
>>> In the following steps[1], if on receiver side has got
>>> changed via 'btrfs property set', then
On 08.03.2018 19:02, Marc MERLIN wrote:
> On Thu, Mar 08, 2018 at 09:34:45AM +0300, Andrei Borzenkov wrote:
>> On 08.03.2018 09:06, Marc MERLIN wrote:
>>> On Tue, Mar 06, 2018 at 12:02:47PM -0800, Marc MERLIN wrote:
>>>>> https://githu
On 08.03.2018 09:06, Marc MERLIN wrote:
> On Tue, Mar 06, 2018 at 12:02:47PM -0800, Marc MERLIN wrote:
>>> https://github.com/knorrie/python-btrfs/commit/1ace623f95300ecf581b1182780fd6432a46b24d
>>
>> Well, I had never heard about it until now, thank you.
>>
>> I'll see if I can make it work when I
On 07.03.2018 21:49, Liu Bo wrote:
> Hi,
>
> In the following steps[1], if on receiver side has got
> changed via 'btrfs property set', then after doing incremental
> updates, receiver gets a different snapshot from what sender has sent.
>
> The reason behind it is that there is no change about
On 05.03.2018 19:16, Marc MERLIN wrote:
> Howdy,
>
> I did a bunch of copies and moving around subvolumes between disks and
> at some point, I did a snapshot dir1/Win_ro.20180205_21:18:31
> dir2/Win_ro.20180205_21:18:31
>
> As a result, I lost the ro flag, and apparently 'Received UUID' which is
>
On Thu, Mar 1, 2018 at 12:26 PM, vinayak hegde wrote:
> No, there is no opened file which is deleted, I did umount and mounted
> again and reboot also.
>
> I think I am hitting the below issue, lot of random writes were
> happening and the file is not fully written and
On Wed, Feb 28, 2018 at 9:01 AM, vinayak hegde wrote:
> I ran full defragment and balance both, but it didn't help.
Showing the same information immediately after full defragment would be helpful.
> My created and accounting usage files are matching the du -sh output.
>
On Wed, Feb 28, 2018 at 2:26 PM, Shyam Prasad N wrote:
> Hi,
>
> Thanks for the reply.
>
>> * `df` calls `statvfs` to get its data, which tries to count physical
>> allocation accounting for replication profiles. In other words, data in
>> chunks with the dup, raid1, and
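A sketch of the accounting described above, with assumed data-to-raw multipliers for a few profiles (illustrative only, not the kernel's actual statfs code):

```python
# Assumed data-to-raw multipliers for a few btrfs chunk profiles:
# dup and raid1 store every byte twice, raid10 mirrors its stripes.
RAW_FACTOR = {'single': 1, 'dup': 2, 'raid1': 2, 'raid10': 2}

def raw_bytes(chunks):
    """chunks: iterable of (profile, logical_bytes) pairs.
    Returns physical bytes consumed on disk."""
    return sum(size * RAW_FACTOR[profile] for profile, size in chunks)

# 10 GiB of raid1 data plus 1 GiB of dup metadata:
GiB = 1 << 30
used = raw_bytes([('raid1', 10 * GiB), ('dup', 1 * GiB)])
print(used // GiB)  # 22: df-style accounting sees twice the logical size
```

This is why `df` output on a replicated btrfs volume differs from what `du` reports: the former counts raw allocation, the latter logical file sizes.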
On 27.02.2018 01:54, Emil.s wrote:
> Hello,
>
> I'm trying to restore a subvolume from a backup, but I'm failing when
> I try to setup the replication chain again.
>
> Previously I had disk A and B, where I was sending snapshots from A to
> B using "send -c /disk_a/1 /disk_a/2 | receive /disk_b"
On 11.02.2018 04:02, Hans van Kranenburg wrote:
...
>
>> - /dev/sda6 / btrfs
>> rw,relatime,ssd,space_cache,subvolid=259,subvol=/@/.snapshots/1/snapshot
>> 0 0
>
> Note that changes on atime cause writes to metadata, which means cowing
> metadata blocks and unsharing them from a previous snapshot,
On 08.02.2018 06:03, Chris Murphy wrote:
> On Wed, Feb 7, 2018 at 6:26 PM, Nick Gilmour wrote:
>> Hi all,
>>
>> I have successfully restored a snapshot of root but now when I try to
How exactly was it done?
>> make a new snapshot I get this error:
>> IO Error (.snapshots is
On 29.01.2018 14:24, Adam Borowski wrote:
...
>
> So any event (the user's request) has already happened. An rc system, of
> which systemd is one, knows whether we reached the "want root filesystem" or
> "want secondary filesystems" stage. Once you're there, you can issue the
> mount() call and let
On 28.01.2018 18:57, Duncan wrote:
> Andrei Borzenkov posted on Sun, 28 Jan 2018 11:06:06 +0300 as excerpted:
>
>> On 27.01.2018 18:22, Duncan wrote:
>>> Adam Borowski posted on Sat, 27 Jan 2018 14:26:41 +0100 as excerpted:
>>>
>>>> On Sat, Jan 27, 2
On 27.01.2018 18:22, Duncan wrote:
> Adam Borowski posted on Sat, 27 Jan 2018 14:26:41 +0100 as excerpted:
>
>> On Sat, Jan 27, 2018 at 12:06:19PM +0100, Tomasz Pala wrote:
>>> On Sat, Jan 27, 2018 at 13:26:13 +0300, Andrei Borzenkov wrote:
>>>
>>>>> I j
e any systemd message when degraded option
>> is missing and have to remount manually with degraded.
>>
>> It seems it is better to use mdadm for raid and btrfs over it as i
>> understand. Even in recent kernel ?
>> I hav me to do some bench and compare...
>
raded and only 1/2 root device.
Then your initramfs does not use systemd.
> --
> Christophe Yayon
> cyayon-l...@nbux.org
>
>
>
> On Sat, Jan 27, 2018, at 06:50, Andrei Borzenkov wrote:
>> On 26.01.2018 17:47, Christophe Yayon wrote:
>>> Hi Austin,
On 26.01.2018 17:47, Christophe Yayon wrote:
> Hi Austin,
>
> Thanks for your answer. It was my opinion too as the "degraded" seems to be
> flagged as "Mostly OK" on btrfs wiki status page. I am running Archlinux with
> recent kernel on all my servers (because of use of btrfs as my main
>
On Tue, Jan 16, 2018 at 9:45 AM, Chris Murphy wrote:
...
>>
>> Unless some better fix is in the works, this _should_ be a systemd unit or
>> something. Until then, please put it in the FAQ.
>
> At least openSUSE has a systemd unit for a long time now, but last
> time I
On 16.01.2018 00:56, Dave wrote:
> I want to exclude my ~/.cache directory from snapshots. The obvious
> way to do this is to mount a btrfs subvolume at that location.
>
> However, I also want the ~/.cache directory to be nodatacow. Since the
> parent volume is COW, I believe it isn't possible to
On Wed, Dec 20, 2017 at 11:07 PM, Chris Murphy wrote:
>
> YaST doesn't have Btrfs raid1 or raid10 options; and also won't do
> encrypted root with Btrfs either because YaST enforces LVM to do LUKS
> encryption for some weird reason; and it also enforces NOT putting
>
On 19.12.2017 22:47, Chris Murphy wrote:
>
>>
>> BTW, doesn't SuSE use btrfs by default? Would you expect everyone using
>> this distro to research every component used?
>
> As far as I'm aware, only Btrfs single device stuff is "supported".
> The multiple device stuff is definitely not supported
On Tue, Dec 19, 2017 at 1:28 AM, Chris Murphy wrote:
> On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain wrote:
>
>> Agreed. IMO degraded-raid1-single-chunk is an accidental feature
>> caused by [1], which we should revert back, since..
>>- balance
On 18.12.2017 19:49, Ulli Horlacher wrote:
> I want to mount an alternative subvolume of a btrfs filesystem.
> I can list the subvolumes when the filesystem is mounted, but how do I
> know them, when the filesystem is not mounted? Is there a query command?
>
> root@xerus:~# mount | grep /test
>
On 02.12.2017 03:27, Qu Wenruo wrote:
>
> That's the difference between how sub show and quota works.
>
> For quota, it's per-root owner check.
> Means even a file extent is shared between different inodes, if all
> inodes are inside the same subvolume, it's counted as exclusive.
> And if any of
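The rule Qu describes can be stated compactly: an extent counts as exclusive to a subvolume if every inode referencing it lives in that one subvolume, and as shared otherwise. A toy classifier (mine, not the kernel's qgroup code; subvolume ids are made up):

```python
def classify_extent(owning_subvols):
    """owning_subvols: set of subvolume ids referencing one extent.
    Per the quota rule described above, cross-inode sharing inside a
    single subvolume still counts as 'exclusive'."""
    return 'exclusive' if len(owning_subvols) == 1 else 'shared'

print(classify_extent({256}))       # reflinked between inodes, one subvolume
print(classify_extent({256, 257}))  # referenced from a snapshot too
```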
On 01.12.2017 21:04, Austin S. Hemmelgarn wrote:
> On 2017-12-01 12:13, Andrei Borzenkov wrote:
>> On 01.12.2017 20:06, Hans van Kranenburg wrote:
>>>
>>> Additional tips (forgot to ask for your /proc/mounts before):
>>> * Use the noatime mount option, so that on
On 01.12.2017 20:06, Hans van Kranenburg wrote:
>
> Additional tips (forgot to ask for your /proc/mounts before):
> * Use the noatime mount option, so that only accessing files does not
> lead to changes in metadata,
Isn't "lazytime" the default today? It gives you correct atime + no extra
metadata
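For reference, the trade-off in /etc/fstab form (device UUIDs and subvolume names below are placeholders): noatime suppresses atime updates entirely, while lazytime keeps timestamps correct in memory and only writes them to disk occasionally or alongside other inode changes.

```
# /etc/fstab -- illustrative entries, UUIDs and subvolumes are placeholders
UUID=xxxx-xxxx  /data  btrfs  noatime,subvol=@data   0  0
UUID=xxxx-xxxx  /home  btrfs  lazytime,subvol=@home  0  0
```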
On 29.11.2017 16:24, Austin S. Hemmelgarn wrote:
> On 2017-11-28 18:49, David Sterba wrote:
>> On Tue, Nov 28, 2017 at 09:31:57PM +0000, Nick Terrell wrote:
>>>
On Nov 21, 2017, at 8:22 AM, David Sterba wrote:
On Wed, Nov 15, 2017 at 08:09:15PM +0000, Nick Terrell
On 19.11.2017 09:17, Chris Murphy wrote:
> fstrim should trim free space, but it only trims unallocated. This is
> with kernel 4.14.0 and the entire 4.13 series. I'm pretty sure it
> behaved this way with 4.12 also.
>
Well, I was told it should also trim free space ...
On 16.11.2017 19:13, Kai Krakow wrote:
...
> > BTW: From user API perspective, btrfs snapshots do not guarantee
> > perfect granular consistent backups.
Is it documented somewhere? I was relying on crash-consistent
write-order-preserving snapshots in NetApp for as long as I can remember.
And I was sure
On 14.11.2017 12:56, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> after a controller firmware bug / failure i've a broken btrfs.
>
> # parent transid verify failed on 181846016 wanted 143404 found 143399
>
> running repair, fsck or zero-log always results in the same failure message:
>
On 04.11.2017 21:55, Chris Murphy wrote:
> On Sat, Nov 4, 2017 at 12:27 PM, Andrei Borzenkov <arvidj...@gmail.com> wrote:
>> On 04.11.2017 10:05, Adam Borowski wrote:
>>> On Sat, Nov 04, 2017 at 09:26:36AM +0300, Andrei Borzenkov wrote:
>>>> 04.11.2017 07:49, Adam
On 04.11.2017 10:05, Adam Borowski wrote:
> On Sat, Nov 04, 2017 at 09:26:36AM +0300, Andrei Borzenkov wrote:
>> On 04.11.2017 07:49, Adam Borowski wrote:
>>> On Fri, Nov 03, 2017 at 06:15:53PM -0600, Chris Murphy wrote:
>>>> Ancient bug, still seems to be a bug.
On 04.11.2017 07:49, Adam Borowski wrote:
> On Fri, Nov 03, 2017 at 06:15:53PM -0600, Chris Murphy wrote:
>> Ancient bug, still seems to be a bug.
>> https://bugzilla.redhat.com/show_bug.cgi?id=906591
>>
>> The issue is that updatedb by default will not index bind mounts, but
>> by default on Fedora
On 02.11.2017 20:13, Austin S. Hemmelgarn wrote:
>>
>> 2. I want to limit access to sftp, so there will be no custom commands
>> to execute...
> A custom version of the 'quota' command would be easy to add in there.
> In fact, this is really the only option right now, since setting up sudo
> (or
On 01.11.2017 15:01, Austin S. Hemmelgarn wrote:
...
> The default subvolume is what gets mounted if you don't specify a
> subvolume to mount. On a newly created filesystem, it's subvolume ID 5,
> which is the top-level of the filesystem itself. Debian does not
> specify a subvolume in /etc/fstab
On 31.10.2017 20:45, Austin S. Hemmelgarn wrote:
> On 2017-10-31 12:23, ST wrote:
>> Hello,
>>
>> I've recently learned about btrfs and am considering utilizing it for my needs.
>> I have several questions in this regard:
>>
>> I manage a dedicated server remotely and have some sort of script that
>>
On 26.10.2017 15:18, Lentes, Bernd wrote:
>
>> -Original Message-
>> From: linux-btrfs-ow...@vger.kernel.org
>> [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Lentes, Bernd
>> Sent: Tuesday, October 24, 2017 6:44 PM
>> To: Btrfs ML
>> Subject: RE: SLES
On Tue, Oct 24, 2017 at 2:53 PM, Austin S. Hemmelgarn
wrote:
>
> SLES (and OpenSUSE in general) does do something special though, they use
> subvolumes and qgroups to replicate multiple independent partitions (which
> is a serious pain in the arse), and they have
On 19.10.2017 23:04, Chris Murphy wrote:
> Btrfs
> is not just supported by SUSE, it's the default file system.
>
It is the default choice for root starting with SLES12, not in SLES11. But
yes, it should still be supported.
I do not hold my breath though. From what I can tell, transid errors are
usually
On 07.10.2017 00:27, Hans van Kranenburg wrote:
> On 10/06/2017 10:07 PM, Andrei Borzenkov wrote:
>>
>> What is the reason behind allowing the change from ro to rw in the first place?
>> What is the use case?
>
> I think this is a case of "well, nobody actually has been thin
On 06.10.2017 20:49, Hans van Kranenburg wrote:
> On 10/06/2017 07:24 PM, David Sterba wrote:
>> On Thu, Oct 05, 2017 at 05:03:47PM +0800, Anand Jain wrote:
>>> On 10/05/2017 04:22 PM, Nikolay Borisov wrote:
Currently when a read-only snapshot is received and subsequently its ro
property
On Mon, Oct 2, 2017 at 11:19 AM, Misono, Tomohiro
wrote:
> This patch changes "subvol set-default" to also accept the subvolume path
> for convenience.
>
> This is one of the issues on github:
> https://github.com/kdave/btrfs-progs/issues/35
>
> If there are two
On 30.09.2017 14:57, Goffredo Baroncelli wrote:
> (please ignore my previous email, because I wrote somewhere "top id" instead
> of "top level")
> Hi All,
>
> I am trying to figure out what "top level" means in the output of "btrfs sub
> list"
>
>
Digging in git history - "top level"
On 30.09.2017 17:53, Peter Grandi wrote:
>> I am trying to figure out what "top level" means in the
>> output of "btrfs sub list"
>
> The terminology (and sometimes the detailed behaviour) of Btrfs
> is not extremely consistent, I guess because of permissive
> editorship of the design, in a "let
On 26.09.2017 10:31, Lukas Pirl wrote:
> On 09/25/2017 06:11 PM, linux-bt...@oh3mqu.pp.hyper.fi wrote as excerpted:
>> After a long googling (about more complex situations) I suddenly
>> noticed "device sdb" WTF??? Filesystem is mounted from /dev/md3 (sdb
>> is part of that mdraid) so btrfs should
On 24.09.2017 16:53, Fuhrmann, Carsten wrote:
> Hello,
>
> 1)
> I used direct write (no page cache) but I didn't disable the Disk cache of
> the HDD/SSD itself. In all tests I wrote 1GB and looked for the runtime of
> that write process.
So "latency" on your diagram means total time to write 1GiB
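If "latency" there really is the total runtime for writing 1 GiB, converting it to an average throughput is straightforward (the 8-second runtime below is made up for illustration):

```python
GiB = 1 << 30

def throughput_mib_s(bytes_written, runtime_s):
    """Average throughput in MiB/s from total bytes and total runtime."""
    return bytes_written / (1 << 20) / runtime_s

# If writing 1 GiB took 8 s, the average throughput is 128 MiB/s:
print(throughput_mib_s(GiB, 8.0))
```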
On 20.09.2017 22:05, Antoine Belvire wrote:
> Hello,
>
>> All snapshots listed in -c options and snapshot that we want to
>> transfer must have the same parent uuid, unless -p is explicitly
>> provided.
>
> It's the same mount point rather than the same parent uuid, like cp
> --reflink, isn't it?
On 19.09.2017 03:41, Dave wrote:
> new subject for new question
>
> On Mon, Sep 18, 2017 at 1:37 PM, Andrei Borzenkov <arvidj...@gmail.com> wrote:
>
>>>> What scenarios can lead to "ERROR: parent determination failed"?
>>>
>>> The man p
On 18.09.2017 09:10, Dave wrote:
> I use snap-sync to create and send snapshots.
>
> GitHub - wesbarnett/snap-sync: Use snapper snapshots to backup to external
> drive
> https://github.com/wesbarnett/snap-sync
>
Are you trying to back up the top-level subvolume? I just reproduced this
behavior with
system. What you can do is rotate devices, i.e. remove
/dev/md126, set the seed flag on md127 and add md126 back.
I actually tested it and it works for me.
> Thank you very much for the reply.
> Greetings.
>
> On Tuesday, 12 September 2017 6:34:15 (CEST), Andrei Borzenkov wrote
On Tue, Sep 19, 2017 at 1:24 PM, Graham Cobb wrote:
> On 19/09/17 01:41, Dave wrote:
>> Would it be correct to say the following?
>
> Like Duncan, I am just a user, and I haven't checked the code. I
> recommend Duncan's explanation, but in case you are looking for
> something
On 18.09.2017 11:45, Graham Cobb wrote:
> On 18/09/17 07:10, Dave wrote:
>> For my understanding, what are the restrictions on deleting snapshots?
>>
>> What scenarios can lead to "ERROR: parent determination failed"?
>
> The man page for btrfs-send is reasonably clear on the requirements
> btrfs
On Mon, Sep 18, 2017 at 11:20 AM, Tomasz Chmielewski wrote:
>>> # df -h /var/lib/lxd
>>>
>>> FWIW, standard (aka util-linux) df is effectively useless in a situation
>>> such as this, as it really doesn't give you the information you need (it
>>> can say you have lots of space
On 18.09.2017 05:31, Dave wrote:
> Sometimes when using btrfs send-receive, I get errors like this:
>
> ERROR: parent determination failed for
>
> When this happens, btrfs send-receive backups fail. And all subsequent
> backups fail too.
>
> The issue seems to stem from the fact that an automated