Re: kernel BUG at /linux/fs/btrfs/extent-tree.c:1833!

2015-10-10 Thread Peter Becker
lete. > > Using "btrfs check --repair" has never resulted in success for me (for > some root filesystems (single profiles for s m d) on real and virtual > machines), so I would only use that once you have your files backed up > on some other (cloned) filesystem. > > /H

Re: kernel BUG at /linux/fs/btrfs/extent-tree.c:1833!

2015-10-10 Thread Peter Becker
5-10-10 21:23 GMT+02:00 Peter Becker <floyd@gmail.com>: > Hi Henk, > > I have tried it with kernel 4.1.6 and 4.2.3, and btrfs-progs 4.2.1 and 4.2.2 > .. the same error. > The system freezes after 70% of balancing. > > Scrub completes without error. > > Has someone a hint wha

Re: kernel BUG at /linux/fs/btrfs/extent-tree.c:1833!

2015-10-11 Thread Peter Becker
67] [] shrink_zone+0x291/0x2b0 [44929.058268] [] kswapd+0x500/0x9b0 [44929.058269] [] ? mem_cgroup_shrink_node_zone+0x130/0x130 [44929.058270] [] kthread+0xc9/0xe0 [44929.058271] [] ? kthread_create_on_node+0x180/0x180 [44929.058272] [] ret_from_fork+0x3f/0x70 [44929.058273] [] ? kthread_create_on_

Re: kernel BUG at /linux/fs/btrfs/extent-tree.c:1833!

2015-10-11 Thread Peter Becker
le but I need the new space. 2015-10-10 21:48 GMT+02:00 Peter Becker <floyd@gmail.com>: > btrfs balance start -m /media/RAID > > completes without any error, but the result of device usage is confusing me. > Metadata on sdb and sdc are 2 GiB, but on sdd (the new added device)

Re: kernel BUG at /linux/fs/btrfs/extent-tree.c:1833!

2015-10-11 Thread Peter Becker
The output of btrfs check --readonly /dev/sdb: http://pastebin.com/UxkeVd7Y (many entries with "extent buffer leak"). The output of btrfs-show-super -i0 /dev/sd[bcd] && btrfs-show-super -i1 /dev/sd[bcd] && btrfs-show-super -i2 /dev/sd[bcd]: http://pastebin.com/zs7B8827

Re: kernel BUG at /linux/fs/btrfs/extent-tree.c:1833!

2015-10-11 Thread Peter Becker
Ok, that's what I expected. :) If it will work :)

kernel BUG at /linux/fs/btrfs/extent-tree.c:1833!

2015-10-09 Thread Peter Becker
First I added a new device to my btrfs RAID1 pool and started a balance. After ~5 hours the balance hangs and CPU usage goes to 100% (kworker/u4 uses all the CPU). What should I do now? Run "btrfs check --repair" on all devices? Kernel: 4.2.3-040203-generic, btrfs-progs v4.2.1. Full syslog:

Re: Resize doesn't work as expected

2016-05-28 Thread Peter Becker
Thanks for the clarification. I've probably overlooked this. But shouldn't "resize max" do what you expect, instead of falling back to an "invisible" devid 1? 2016-05-28 22:52 GMT+02:00 Alexander Fougner <fougne...@gmail.com>: > 2016-05-28 22:32 GMT+02:00 Pete

Resize doesn't work as expected

2016-05-28 Thread Peter Becker
Hello, I have found a small issue but I don't know if this is intended. Start with a RAID1 setup with 3 x 4GB devices. If you replace one of these devices with a 2GB device and run "resize max", nothing happens. Only if you resize with the device ID does the additional GB become usable. Look at
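
For reference, resizing a specific device works by prefixing the device ID (a minimal sketch; devid 2 and the mount point are assumptions on my part):

  # grow only the replaced device (devid 2 here) to its maximum size
  btrfs filesystem resize 2:max /mnt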

Re: "No space left on device" and balance doesn't work

2016-06-01 Thread Peter Becker
Try this:

btrfs fi balance start -musage=0 /
btrfs fi balance start -dusage=0 /
btrfs fi balance start -musage=1 /
btrfs fi balance start -dusage=1 /
btrfs fi balance start -musage=5 /
btrfs fi balance start -musage=10 /
btrfs fi balance start -musage=20 /
btrfs fi balance start -dusage=5 /
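
The same escalation as a loop (a minimal sketch, assuming the filesystem is mounted at /):

  for u in 0 1 5 10 20; do
      btrfs fi balance start -musage=$u /
      btrfs fi balance start -dusage=$u /
  done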

Re: Resize doesn't work as expected

2016-05-29 Thread Peter Becker
2016-05-29 19:11 GMT+02:00 Chris Murphy <li...@colorremedies.com>: > On Sat, May 28, 2016 at 3:42 PM, Peter Becker <floyd@gmail.com> wrote: >> Thanks for the clarification. I've probably overlooked this. >> >> But should "resize max" does not do

invalid opcode 0000 / kernel bug with defective HDD

2016-06-28 Thread Peter Becker
The cause of the kernel bugs was a defective HDD (/dev/sdd). The kernel BUG:

May 16 07:41:38 nas kernel: [37168.832800] btrfs_dev_stat_print_on_error: 470 callbacks suppressed
May 16 07:41:38 nas kernel: [37168.832806] BTRFS error (device sdd): bdev /dev/sdb errs: wr 49293, rd 567248, flush 0, corrupt
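
The per-device error counters quoted in that log line can also be read directly (a sketch; the mount point is an assumption):

  sudo btrfs device stats /media/RAID   # prints write/read/flush/corruption/generation error counts per device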

btrfs filesystem du - Failed to lookup root id - Inappropriate ioctl for device

2016-03-27 Thread Peter Becker
Hi, I found the described error if I execute du with btrfs-progs v4.5 on kernel v4.5.

floyd@nas ~ $ sudo btrfs version
btrfs-progs v4.5
floyd@nas ~ $ uname -r
4.5.0-040500-generic
floyd@nas ~ $ sudo btrfs fi show
Label: 'RAID' uuid: 3247737b-87f9-4e8c-8db3-2beed50fb104 Total devices 4 FS

Fwd: Snapshots slowing system

2016-03-19 Thread Peter Becker
> Not sure if there is much else to do about fragmentation apart from running a > balance, which would probably make the machine very sluggish for a day or so. I think a full balance run makes no sense from this point of view. A weekly freeing of unused blocks and consolidation of underused blocks makes more

Re: btrfs filesystem du - Failed to lookup root id - Inappropriate ioctl for device

2016-04-18 Thread Peter Becker
Same environment, updated to btrfs-progs 4.5.1; new errors in "fi du":

$ sudo btrfs fi du /media/RAID/owncloud/
140.00KiB 0.00B - /media/RAID/owncloud//.snapshot/weekly_2016-03-26_07:56:42/docker/postgres
264.00KiB 0.00B -

Re: how to understand "btrfs fi show" output? "No space left" issues

2016-09-20 Thread Peter Becker
For the future: disable COW for all database containers. 2016-09-20 9:28 GMT+02:00 Peter Becker <floyd@gmail.com>: > * If this does NOT solve the "No space left" issues, you must remove old snapshots. > > 2016-09-20 9:27 GMT+02:00 Peter Becker <floyd@gmail.com>:
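
Disabling COW is usually done via the C file attribute on an empty directory, so that new database files inherit it (a sketch; the path is illustrative):

  mkdir -p /media/RAID/databases
  chattr +C /media/RAID/databases   # new files created inside inherit nodatacow
  # note: +C has no effect on already-written data; copy existing files in afresh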

Re: build error btrfs-progs 4.8.1

2016-10-22 Thread Peter Becker
Solved: deleting and cloning again works, but rebase / reset --hard does not. Hmm, curious. 2016-10-21 23:30 GMT+02:00 Peter Becker <floyd@gmail.com>: > Generate build-system by: >aclocal:aclocal (GNU automake) 1.15 >autoconf: autoconf (GNU Autoconf) 2.69 >auto
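
A fresh clone likely fixed it because reset --hard leaves untracked generated files behind; git clean should achieve the same from within the existing checkout (an untested assumption on my side):

  git clean -xfd          # remove all untracked and ignored files, including stale build output
  ./autogen.sh && ./configure && make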

Re: Drive Replacement

2016-10-21 Thread Peter Becker
If you have >750 GB free, you can simply remove one of the drives:

btrfs device delete /dev/sd[x] /mnt
# power off, replace device
btrfs device add /dev/sd[y] /mnt

If not, you can use a USB-SATA adapter or an eSATA port and do the following:

btrfs device add /dev/sd[y] /mnt
btrfs device delete
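
If the replacement disk can be attached at the same time, btrfs replace does the copy in one step without two rebalances (a sketch; device names are placeholders):

  btrfs replace start /dev/sd[old] /dev/sd[new] /mnt
  btrfs replace status /mnt   # watch progress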

Re: build error btrfs-progs 4.8.1

2016-10-21 Thread Peter Becker
: yes backtrace support: yes btrfs-convert: yes (ext2) Type 'make' to compile.

make: *** No rule to make target "list.h", needed by "ctree.o". Stop.

2016-10-21 23:23 GMT+02:00 Peter Becker <floyd@gmail.com>: > $ un

build error btrfs-progs 4.8.1

2016-10-21 Thread Peter Becker
$ uname -r
4.8.3-040803-generic
$ git remote -v
origin git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git (fetch)
origin git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git (push)
$ git pull
Already up-to-date.
$ ./autogen.sh
...
$ ./configure
...
$ make
...

Re: Problem with btrfs snapshots

2016-11-03 Thread Peter Becker
Have you tried running "sync" between the snapshot creation commands? 2016-11-03 13:22 GMT+01:00 Дмитрий Нечаев : > Hello. > We have a strange situation with btrfs snapshots. We have a special > script to create snapshots, and if we create several snapshots at the same > time
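
What that suggestion looks like in a script (a sketch; subvolume and snapshot paths are illustrative):

  btrfs subvolume snapshot /data /data/.snapshots/snap-a
  sync   # flush dirty data so the next snapshot starts from a settled state
  btrfs subvolume snapshot /data /data/.snapshots/snap-b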

Re: Problem with btrfs snapshots

2016-11-03 Thread Peter Becker
(copy for mailing list) 2016-11-03 15:16 GMT+01:00 Дмитрий Нечаев : Yes. We tried "sync" in our script but it doesn't help. It works only when we make one snapshot at a time. Even if we use "sync" before and after creating the snapshot, it doesn't help.

Re: raid levels and NAS drives

2016-10-10 Thread Peter Becker
From experience, for a video archive and backup server it is not a problem to use desktop drives if you respect the following things: 1. Avoid stock "green" drives; for example, use the WD Idle tool to stop excessive load cycles on WD Green drives. 2. Desktop drives don't have time-limited error
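
For point 2, drives that support SCT ERC can be told to give up on a bad sector quickly instead of stalling the whole array (a sketch; 7 seconds is a common value, the device name is a placeholder):

  smartctl -l scterc,70,70 /dev/sdX   # limit read/write error recovery to 7.0 seconds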

Re: duperemove : some real world figures on BTRFS deduplication

2016-12-08 Thread Peter Becker
> 2016-12-08 16:11 GMT+01:00 Swâmi Petaramesh : > > Then it took another 48 hours just for "loading the hashes of duplicate > extents". > I'm currently addressing this issue with the following patches: https://github.com/Floyddotnet/duperemove/commits/digest_trigger Tested

[markfasheh/duperemove] Why is the blocksize limited to 1MB?

2016-12-30 Thread Peter Becker
Hello, I have an 8 TB volume with multiple files of hundreds of GB each. I'm trying to dedupe this because the first hundred GB of many files are identical. With 128KB blocksize and the nofiemap and lookup-extents=no options, it will take more than a week (dedupe only, already hashed). So I tried -b 100M
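
For reference, an invocation of the kind described (a sketch; the hashfile path is illustrative, and the --lookup-extents/--dedupe-options spellings are my assumption about current duperemove):

  duperemove -rd -b 1M --hashfile=/var/tmp/raid.hash \
      --lookup-extents=no --dedupe-options=nofiemap /media/RAID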

Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-03 Thread Peter Becker
e, and the dedup data/metadata feature of btrfs is really nice, in particular if one considers the legally prescribed 7-year retention period. 2017-01-03 13:40 GMT+01:00 Austin S. Hemmelgarn <ahferro...@gmail.com>: > On 2016-12-30 15:28, Peter Becker wrote: >> >> Hello, i have a 8 TB volume

Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-04 Thread Peter Becker
Austin S. Hemmelgarn <ahferro...@gmail.com>: > On 2017-01-03 16:35, Peter Becker wrote: >> >> As i understand the duperemove source-code right (i work on/ try to >> improve this code since 5 or 6 weeks on multiple parts), duperemove >> does hashing and calcula

Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-03 Thread Peter Becker
Good hint, this would be an option and I will try it. Regardless of this, curiosity has gripped me and I will try to figure out where the problem with the low transfer rate is. 2017-01-04 0:07 GMT+01:00 Hans van Kranenburg <hans.van.kranenb...@mendix.com>: > On 01/03/2017 08:24

Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-03 Thread Peter Becker
the reflinks 3. unlocks the new extent. If I'm not wrong in my understanding of the duperemove source code, this behavior should also affect the online dedupe feature that Qu Wenruo works on. 2017-01-03 21:40 GMT+01:00 Austin S. Hemmelgarn <ahferro...@gmail.com>: > On 2017-01-03 15

Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-09 Thread Peter Becker
2017-01-09 2:09 GMT+01:00 Zygo Blaxell <ce3g8...@umail.furryterror.org>: > On Wed, Jan 04, 2017 at 07:58:55AM -0500, Austin S. Hemmelgarn wrote: >> On 2017-01-03 16:35, Peter Becker wrote: >> >As i understand the duperemove source-code right (i work on/ try to >> &

Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-02 Thread Peter Becker
> achieved. > 1M is already a little bit too big in size. > > Thanks, > Xin > > > > > Sent: Friday, December 30, 2016 at 12:28 PM > From: "Peter Becker" <floyd@gmail.com> > To: linux-btrfs <linux-btrfs@vger.kernel.org> > Subject: [mark

Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-03 Thread Peter Becker
Question 1: why so slow? Question 2a: would a higher extent size perform better? Question 2b: or did I misunderstand something? 2017-01-03 20:37 GMT+01:00 Austin S. Hemmelgarn <ahferro...@gmail.com>: > On 2017-01-03 14:21, Peter Becker wrote: >> >> All invocations are j

Fwd: [markfasheh/duperemove] Why is the blocksize limited to 1MB?

2017-01-03 Thread Peter Becker
-- Forwarded message -- From: Austin S. Hemmelgarn <ahferro...@gmail.com> Date: 2017-01-03 20:37 GMT+01:00 Subject: Re: [markfasheh/duperemove] Why is the blocksize limited to 1MB? To: Peter Becker <floyd@gmail.com> On 2017-01-03 14:21, Peter Becker wrote: >

Re: netapp-alike snapshots?

2017-08-22 Thread Peter Becker
.de>: > On Tue 2017-08-22 (15:44), Peter Becker wrote: >> I use: https://github.com/jf647/btrfs-snap >> >> 2017-08-22 15:22 GMT+02:00 Ulli Horlacher <frams...@rus.uni-stuttgart.de>: >> > With Netapp/waffle you have automatic hourly/daily/weekly snapshots.

Fwd: confusing "no space left" -- how to troubleshoot and "be prepared"?

2017-05-18 Thread Peter Becker
2017-05-18 15:41 GMT+02:00 Yaroslav Halchenko : > > our python-based program crashed with > > File > "/home/yoh/proj/datalad/datalad/venv-tests/local/lib/python2.7/site-packages/gitdb/stream.py", > line 695, in write > os.write(self._fd, data) > OSError: [Errno 28] No

Re: snapshots of encrypted directories?

2017-09-15 Thread Peter Becker
2017-09-15 12:01 GMT+02:00 Ulli Horlacher : > On Fri 2017-09-15 (06:45), Andrei Borzenkov wrote: > >> The actual question is - do you need to mount each individual btrfs >> subvolume when using encfs? > > It gets even worse with ecryptfs: I do not know at all how

Re: how to run balance successfully (No space left on device)?

2017-09-18 Thread Peter Becker
I'm not sure if it would help, but maybe you could try adding an 8GB (or larger) USB flash drive to the pool and then start the balance. If it works out, you can remove it from the pool afterwards.
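
The workaround as commands (a sketch; /dev/sdX is the flash drive, /mnt the full filesystem):

  btrfs device add /dev/sdX /mnt
  btrfs balance start -dusage=5 /mnt   # balance now has scratch space to work with
  btrfs device delete /dev/sdX /mnt    # migrates chunks off and drops the stick again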

Re: Workqueue: events_unbound btrfs_async_reclaim_metadata_space [btrfs]

2017-09-07 Thread Peter Becker
You can check the usage of each block group with the following script. If there are many block groups with low usage, you should run btrfs balance -musage= -dusage= /data

cd /tmp
wget https://raw.githubusercontent.com/kdave/btrfs-progs/master/btrfs-debugfs
chmod +x btrfs-debugfs
stats=$(sudo
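
A sketch of the bucketing on top of btrfs-debugfs output, assuming its per-block-group lines end in a fractional "usage" field as in current btrfs-progs:

  sudo ./btrfs-debugfs -b /data | awk '
  /usage/ { u = $NF * 100
            if (u < 50) a++; else if (u < 80) b++;
            else if (u < 90) c++; else if (u < 100) d++; else e++ }
  END     { printf "00-49: %d\n50-79: %d\n80-89: %d\n90-99: %d\n100: %d\n", a, b, c, d, e }'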

Re: Workqueue: events_unbound btrfs_async_reclaim_metadata_space [btrfs]

2017-09-07 Thread Peter Becker
2017-09-07 16:37 GMT+02:00 Marco Lorenzo Crociani : [...] > I got:
>
> 00-49: 1
> 50-79: 0
> 80-89: 0
> 90-99: 1
> 100: 25540
>
> this means that fs has only one block group used under 50% and 1 between 90 > and 99% while the rest are all full?

Yes ..

Re: netapp-alike snapshots?

2017-08-22 Thread Peter Becker
I use: https://github.com/jf647/btrfs-snap 2017-08-22 15:22 GMT+02:00 Ulli Horlacher : > With Netapp/waffle you have automatic hourly/daily/weekly snapshots. > You can find these snapshots in every local directory (readonly). > Example: > > framstag@fex:/sw/share:
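
A hypothetical crontab for it (I'm assuming btrfs-snap's <dir> <tag> <count> calling convention from its README; check the actual syntax before use):

  # /etc/cron.d/btrfs-snap -- hypothetical schedule
  0 * * * *   root  /usr/local/bin/btrfs-snap -r /home hourly 24
  15 0 * * *  root  /usr/local/bin/btrfs-snap -r /home daily 7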

Re: [PATCH 0/2] Policy to balance read across mirrored devices

2018-01-30 Thread Peter Becker
A little question about mount -o read_mirror_policy=. How would this work with RAID1 over 3 or 4 HDDs? In particular, what happens if the desired block is not available on device ? Could I repeat this option, like the device option, to specify an order/priority like this: mount -o read_mirror_policy=

Re: [PATCH 0/2] Policy to balance read across mirrored devices

2018-01-31 Thread Peter Becker
stripe to use] = [prefer stripes present on read_mirror_policy devids] > [fall back to pid % stripe count] Perhaps I'm not able to express myself well in English, or did I misunderstand you? 2018-01-31 15:26 GMT+01:00 Anand Jain <anand.j...@oracle.com>: > > > On 01/31/2018 06:47 PM,

Re: [PATCH 0/2] Policy to balance read across mirrored devices

2018-01-31 Thread Peter Becker
es to use this as performance tuning, at least the feature with the devid. Thanks Austin, thanks Anand. 2018-01-31 17:11 GMT+01:00 Austin S. Hemmelgarn <ahferro...@gmail.com>: > On 2018-01-31 09:52, Peter Becker wrote: >> >> This is all clear. My question refers to "use

Re: [PATCH V5 RESEND] Btrfs: enchanse raid1/10 balance heuristic

2018-09-20 Thread Peter Becker
I like the idea. Do you have any benchmarks for this change? The general logic looks good to me.