I may be mistaken, but I think that:
btrfstune -x dev # can improve performance because it reduces metadata size
Also, in recent versions of btrfs-progs the default nodesize changed from 4k to 16k; that
can also help (but for this you must reformat the fs).
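For example, the nodesize is chosen at mkfs time, so this needs a reformat (the device name below is just a placeholder):
# mkfs.btrfs -n 16k /dev/sdX
Recent btrfs-progs already default to a 16k nodesize, so the explicit -n is only needed with older tools.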
To clean up the output of btrfs fi df /, you can try:
btrfs bal start -f /
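Another option, if the goal is just to shrink the allocated-but-unused space that btrfs fi df reports, is a filtered balance; the usage thresholds below are only an example:
# btrfs balance start -dusage=10 -musage=10 /
This rewrites only data/metadata chunks that are less than 10% full, so nearly empty chunks get freed and 'total' moves back towards 'used'.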
2014-06-05 18:52 GMT+03:00 Igor M igor...@gmail.com:
One more question. Is there any other way to find out file fragmentation?
I just copied a 35Gb file onto a new btrfs filesystem (compressed) and
filefrag reports 282275 extents found. This can't be right?
Yes, because filefrag shows compressed
I am working on readahead in systemd and trying to complete a TODO for it.
One of the TODOs is:
readahead: use BTRFS_IOC_DEFRAG_RANGE instead of BTRFS_IOC_DEFRAG
ioctl, with START_IO
Can someone explain what the START_IO flag in BTRFS_IOC_DEFRAG_RANGE does?
Does it just force writing the data out after defragmenting, or do
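As far as I can tell from the kernel source, START_IO makes btrfs_defrag_file() call filemap_flush() on the defragmented range, i.e. writeback is started immediately instead of leaving the pages dirty. A minimal userspace sketch of passing the flag (an illustration only, not code from the original thread):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	struct btrfs_ioctl_defrag_range_args args;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDWR);
	if (fd < 0)
		return 1;
	memset(&args, 0, sizeof(args));
	args.len = (__u64)-1;                     /* whole file */
	args.flags = BTRFS_DEFRAG_RANGE_START_IO; /* start writeback right away */
	if (ioctl(fd, BTRFS_IOC_DEFRAG_RANGE, &args) < 0)
		perror("BTRFS_IOC_DEFRAG_RANGE");
	return 0;
}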
Good day.
I have several questions about data deduplication on btrfs.
Sorry if I ask stupid questions or waste your time %)
What about an implementation of offline data deduplication? I don't see
any activity in this area; maybe I need to ask a particular person?
Where is the problem? Maybe a
Jose, I'll add my two cents.
I know that you want to back up the data from the raid over the network, and that
you have only 11 TB of data on a 40 TB fs.
As far as I know, you can safely resize the btrfs fs with btrfs fi resize,
something like this:
$ btrfs fi df /
Data, single: total=81.00GiB, used=60.33GiB
System, DUP:
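If shrinking really is wanted before the copy, a sequence along these lines should work online (the size and mountpoint are placeholders, not from the original thread):
$ btrfs filesystem resize -20T /mnt   # shrink the fs by 20 TiB while mounted
$ btrfs filesystem show /mnt          # check the new size
$ btrfs filesystem resize max /mnt    # grow back to the whole device later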
command, like in the project ideas (btrfs dev set-seed /dev/ or something else).
And if seed is the same as read-only, maybe the command should be named
something like readonly 1/0 dev?
Signed-off-by: Timofey Titovets nefelim...@gmail.com
---
Documentation/Makefile | 3 +-
Documentation/btrfs-device.txt | 19
Not long ago I installed Ubuntu 14.04 for a friend and it is working well,
but I don't know whether the distro's devs port fixes from newer kernels to 3.13
or not.
2014-08-12 10:46 GMT+03:00 Cyril Scetbon cyril.scet...@free.fr:
Hi guys,
Can you tell me if there are any pitfalls to use Btrfs under Ubuntu 14.04
No problem =).
Then just ignore the patch.
2014-08-19 17:03 GMT+03:00 David Sterba dste...@suse.cz:
On Mon, Aug 11, 2014 at 03:17:11AM +0300, Timofey Titovets wrote:
According to https://btrfs.wiki.kernel.org/index.php/Project_ideas#btrfs
Quote:
merge functionality of btrfstune, eg. under btrfs
2014-08-24 8:41 GMT+03:00 Brian Norris computeaboversforpe...@gmail.com:
It looks like this is intended to be 64-bit arithmetic, but it's actually
performed as 32-bit. Fix that. (Note that 'increment' was being
initialized twice, so this patch removes one of those.)
Caught by Coverity Scan (CID
[$] uname -a
Linux beplan 3.18.0-rc1-next-20141023-ARCH-dirty #1 SMP PREEMPT Sat
Oct 25 22:19:01 FET 2014 x86_64 GNU/Linux
I have a custom kernel config, and I disabled the debug features in the kernel.
If needed I can enable them and recompile the kernel.
After this message, the system boot continues and
2014-10-31 22:23 GMT+03:00 Nishant Agrawal nragra...@cs.wisc.edu:
Hi,
I want to compile btrfs as a kernel module with special debug prints. Can
anyone help me with instructions on how to do that?
I tried searching online but I couldn't find anything.
Regards,
Nishant
Hello, I suggest a temporary solution for using a swap file under btrfs.
I tested it, and it works well.
I came up with a simple way to create and use the swap file; just see the
following sh code:
swapfile=$(losetup -f)    # find a free loop device
truncate -s 8G /swap      # create an 8G sparse swap file
losetup $swapfile /swap   # attach the file to the loop device
mkswap $swapfile          # format the loop device as swap
swapon $swapfile          # and enable it
Thanks to all who answered.
I've gotten it several times, after a reboot or an unclean shutdown of the system.
This is a very strange bug, because if I reboot and mount it from a live
cd, everything is okay, and after rebooting back into the system, the system successfully
mounts everything and works fine.
I did try to find any previous reports of this, and found
So cool, thanks Hugo :)
2015-04-02 14:46 GMT+03:00 Hugo Mills h...@carfax.org.uk:
On Thu, Apr 02, 2015 at 02:38:24PM +0300, Timofey Titovets wrote:
I've gotten it several times, after a reboot or an unclean shutdown of the system.
This is a very strange bug, because if I reboot and mount it from a live
cd
2015-06-22 19:03 GMT+03:00 Chris Murphy li...@colorremedies.com:
On Mon, Jun 22, 2015 at 5:35 AM, Timofey Titovets nefelim...@gmail.com
wrote:
Okay, logs: I released disk /dev/sde1 and got:
Jun 22 14:28:40 srv-lab-ceph-node-01 kernel: Read(10): 28 00 11 1d 69
00 00 00 08 00
Jun 22 14:28
And again, if I try
echo 1 > /sys/block/sdf/device/delete
Jun 22 14:44:16 srv-lab-ceph-node-01 kernel: ------------[ cut here ]------------
Jun 22 14:44:16 srv-lab-ceph-node-01 kernel: kernel BUG at
/build/buildd/linux-3.19.0/fs/btrfs/extent_io.c:2056!
Jun 22 14:44:16 srv-lab-ceph-node-01
Okay, logs: I released disk /dev/sde1 and got:
Jun 22 14:28:40 srv-lab-ceph-node-01 kernel: Read(10): 28 00 11 1d 69
00 00 00 08 00
Jun 22 14:28:40 srv-lab-ceph-node-01 kernel: blk_update_request: I/O
error, dev sde, sector 287140096
Jun 22 14:28:40 srv-lab-ceph-node-01 kernel: mptbase: ioc0:
Upd:
I've tried removing the disk the 'right' way:
# echo 1 > /sys/block/sdf/device/delete
Everything is okay and the system doesn't crash immediately on a 'sync' call and can
work for some time without problems, but after some call, which I can
reproduce with:
# apt-get update
the testing system gets a kernel crash (on which I
Oh, I forgot to mention: I've tested this on 3.19+ kernels.
I can get the trace from the screen if it is interesting for developers.
2015-05-26 14:23 GMT+03:00 Timofey Titovets nefelim...@gmail.com:
Hi list,
I'm a regular on this list and I really like btrfs; I want to use it on a production
server, and I want to replace the hw raid
Oh, thanks for the advice, I'll get it and attach it.
I.e., as I understand it, behaviour like this is not expected - cool.
2015-05-26 22:49 GMT+03:00 Chris Murphy li...@colorremedies.com:
Without a complete dmesg it's hard to say what's going on. The call
trace alone probably doesn't show the instigating factor
Hi list,
I'm a regular on this list and I really like btrfs; I want to use it on a production
server, and I want to replace the hw raid on it.
Test case: a server with N SCSI disks,
2 SAS disks used for the raid 1 root fs.
If I just remove one disk physically, everything is okay: the kernel shows me write errors and
the system continues
Zygo, you are right
Thread closed, thanks
2015-10-23 3:14 GMT+03:00 Zygo Blaxell <ce3g8...@umail.furryterror.org>:
> On Tue, Oct 20, 2015 at 04:29:46PM +0300, Timofey Titovets wrote:
>> For performance reason, leave data at the start of disk, is preferable
>> while dedupin
Ubuntu creates a snapshot before each release upgrade.
sudo mount /dev/sda6 /mnt -o rw,subvol=/;
ls /mnt
2015-11-14 9:16 GMT+03:00 Brenton Chapin :
> Thanks for the ideas. Sadly, no snapshots, unless btrfs does that by
> default. Never heard of snapper before.
>
> Don't see
2015-10-20 17:56 GMT+03:00, Filipe Manana <fdman...@gmail.com>:
> On Tue, Oct 20, 2015 at 2:29 PM, Timofey Titovets <nefelim...@gmail.com>
> wrote:
>> For performance reason, leave data at the start of disk, is preferable
>> while deduping
>
> Have you
, by representing it as offsets of
blocks.
It returns the average data offset of all "pagesized" blocks in the given range
for an inode.
The function is cloned from btrfs_clone().
Changes from V1:
Added a new function which computes the "normal" offset
Signed-off-by: Timofey Titovets <nefelim...@g
ret;
+}
+
+/**
* btrfs_clone() - clone a range from inode file to another
*
* @src: Inode to clone from
--
2.6.1
From 8719129c0fb4d5325f9f87ee34f591d301933f67 Mon Sep 17 00:00:00 2001
From: Timofey Titovets <nefelim...@gmail.com>
Date: Wed, 21 Oct 2015 03:31:57 +0300
Subject:
For performance reasons, leaving data at the start of the disk is preferable
while deduping.
It might make sense for these reasons:
1. Spinning rust - the start of the disk is much faster
2. Btrfs can deallocate empty data chunks from the end of the fs - i.e. it compacts the fs
Signed-off-by: Timofey Titovets <nefe
This patch has a LOT of errors, sorry, please ignore it.
2015-10-21 4:11 GMT+03:00 Timofey Titovets <nefelim...@gmail.com>:
> It's just a proof of concept, and i hope to see feedback/ideas/review about
> it.
> ---
> While deduplication,
> Btrfs produce extent and file fragmen
Hello guys,
I like btrfs, and I want to put it into production soon;
one of the features that I want to use is deduplication.
I frequently test duperemove on btrfs and have already seen this problem before.
I know that btrfs used to change mtime while deduping, but after the dedup
fixes from Mark
Hi list,
I've caught an I/O error caused by a csum mismatch.
Can I force the fs to read the data anyway?
It would really not be cool if the only way is to use btrfs restore.
# Info: a VM machine; after a power failure it got 2 blocks with errors, and one
mysql table can't be read by mysql (and also, I can't just dump it)
--
Have
Thanks for the tip, Hugo *_*
2015-09-15 23:38 GMT+03:00 Hugo Mills <h...@carfax.org.uk>:
> On Tue, Sep 15, 2015 at 10:59:48PM +0300, Timofey Titovets wrote:
>> Hi list,
>> i've catch a io error, caused by csum mismatch
>> Can i force fs to read data?
>> This is real
FYI:
It looks like the patch
"Btrfs: fix read corruption of compressed and shared extents"
partially fixed my issue.
2015-08-26 22:33 GMT+03:00 Timofey Titovets <nefelim...@gmail.com>:
> Hello guys,
> i like btrfs, and i want put it in production soon,
> one of the feature
Thanks Filipe,
this fully fixes my issue.
2015-09-29 15:49 GMT+03:00 Filipe Manana <fdman...@gmail.com>:
> On Tue, Sep 29, 2015 at 1:38 PM, Timofey Titovets <nefelim...@gmail.com>
> wrote:
>> FYI:
>> Looks like patch:
>> Btrfs: fix read corruption of compressed a
for (page_idx = 0; page_idx < nr_pages; page_idx++) {
> page = list_entry(pages->prev, struct page, lru);
> @@ -4234,12 +4237,12 @@ int extent_readpages(struct extent_io_tree *tree,
> if (nr < ARRAY_SIZE(pagepool))
> continue;
2015-09-25 16:52 GMT+03:00 Jim Salter :
> Pretty much bog-standard, as ZFS goes. Nothing different than what's
> recommended for any generic ZFS use.
>
> * set blocksize to match hardware blocksize - 4K drives get 4K blocksize, 8K
> drives get 8K blocksize (Samsung SSDs)
> * LZO
Hi guys, I've hit a strange problem:
btrfs with mysql; mysql rewrites data, but space consumption keeps increasing,
i.e.:
df -h
/dev/sdb1 112G 73G 39G 66% /var/lib/mysql
du -hs
43G /var/lib/mysql/
btrfs fi df /var/lib/mysql
Data, single: total=78.00GiB, used=72.39GiB
System, single:
Thanks for the explanation.
2015-12-18 15:49 GMT+03:00 Henk Slager <eye...@gmail.com>:
> On Fri, Dec 18, 2015 at 10:37 AM, Timofey Titovets <nefelim...@gmail.com>
> wrote:
>> Hi guys, i catch strange problem:
>> btrfs with mysql, mysql rewrite data, but space cons
Hi guys,
I just found this document:
http://www.cs.technion.ac.il/~erez/Papers/lfbtree-full.pdf
It describes an implementation of a lock-free btree.
I believe it can be interesting for someone
(AFAIK btrfs uses a btree)
--
Have a nice day,
Timofey.
2016-06-30 16:58 GMT+03:00 Timofey Titovets <nefelim...@gmail.com>:
> 2016-06-30 14:57 GMT+03:00 Anand Jain <anand.j...@oracle.com>:
>>
>>
>> Thanks for reporting.
>>
>> Right. Application shouldn't notice the EIO. First of all,
>> we
Hi list,
I have done some stability testing and, as far as I can see, I get unexpected errors.
So:
Take 2 flash devices:
/dev/sdb
/dev/sdc
Format them:
mkfs.btrfs -L TEST -m raid1 -d raid1 /dev/sdb /dev/sdc
Mount:
mount /dev/sdb /mnt
Config test.fio:
[global]
size=1g
filename=/mnt/testfile.fio
numjobs=1
runtime=60
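The job file is cut off here; for reference, a complete config along these lines would generate the write load (the rw mode, block size and ioengine below are my assumptions, not from the original mail):
[global]
size=1g
filename=/mnt/testfile.fio
numjobs=1
runtime=60
time_based
[writer]
rw=randwrite
bs=4k
ioengine=libaio
iodepth=8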
2016-06-30 14:57 GMT+03:00 Anand Jain :
>
>
> Thanks for reporting.
>
> Right. Application shouldn't notice the EIO. First of all,
> we are not stopping IO to the disk which is pulled out. The
> below patches 11/13 and 12/13 fixes it.
>
> [PATCH 11/13] btrfs: introduce
2017-02-07 17:13 GMT+03:00 Peter Zaitsev :
> Hi Hugo,
>
> For the use case I'm looking for I'm interested in having snapshot(s)
> open at all time. Imagine for example snapshot being created every
> hour and several of these snapshots kept at all time providing quick
>
>> I think that you have a problem with extent bookkeeping (if I
>> understand how btrfs manages extents correctly).
>> So to deal with it, try enabling compression, as compression will force
>> all extents to be fragmented at a size of ~128kb.
>
> No, it will compress everything in chunks of 128kB, but it will
Thank you for your great work:
JFYI Packaged in AUR:
https://aur.archlinux.org/packages/python-btrfs-heatmap/
--
Have a nice day,
Timofey.
2017-02-03 15:57 GMT+03:00 Hans van Kranenburg <hans.van.kranenb...@mendix.com>:
> On 02/03/2017 12:25 PM, Timofey Titovets wrote:
>> Thank you for your great work:
>> JFYI Packaged in AUR:
>> https://aur.archlinux.org/packages/python-btrfs-heatmap/
>
> Hey, t
2017-02-03 17:27 GMT+03:00 Hans van Kranenburg <hans.van.kranenb...@mendix.com>:
> On 02/03/2017 03:18 PM, Timofey Titovets wrote:
>> 2017-02-03 15:57 GMT+03:00 Hans van Kranenburg
>> <hans.van.kranenb...@mendix.com>:
>>> On 02/03/2017 12:25 PM, Timofey Tit
2017-01-28 11:03 GMT+03:00 Rich Gannon :
> Hello Btrfs users and devs,
>
>
> I've gone searching through the mailing list dating back to 2014 or so and
> never found a positive answer - mostly "guesses" about as to the answer,
> albeit with good theories. I have two separate
Hi, today I tried to move my FS from an old HDD to a new SSD.
While it was processing, I hit an I/O error and the device remove operation was cancelled.
Dmesg:
[ 1015.010241] blk_update_request: I/O error, dev sda, sector 81353664
[ 1015.010246] BTRFS error (device sdb1): bdev /dev/sda1 errs: wr 0,
rd 23, flush 0, corrupt
2017-03-02 3:40 GMT+03:00 Chris Murphy <li...@colorremedies.com>:
> On Wed, Mar 1, 2017 at 12:38 PM, Kai Krakow <hurikha...@gmail.com> wrote:
>> Am Wed, 1 Mar 2017 19:04:26 +0300
>> schrieb Timofey Titovets <nefelim...@gmail.com>:
>>
>>> Hi, today i
https://btrfs.wiki.kernel.org/index.php/Status
I suggest marking RAID1/10 as 'mostly ok',
as btrfs RAID1/10 is safe for the data, but not for the applications that use it,
i.e. it does not hide an I/O error even when it could be masked.
https://www.spinics.net/lists/linux-btrfs/msg56739.html
/* Retest it with
Did you try: nofail,noauto,x-systemd.automount ?
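For example, an /etc/fstab line along these lines (the UUID and mount point are placeholders):
UUID=<fs-uuid>  /data  btrfs  defaults,nofail,noauto,x-systemd.automount  0  0
With x-systemd.automount the filesystem is only mounted on first access, and nofail stops a missing or slow device from failing the boot.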
2016-08-29 9:28 GMT+03:00 Stefan Priebe - Profihost AG :
> Hi Qu,
>
> Am 29.08.2016 um 03:48 schrieb Qu Wenruo:
>>
>>
>> At 08/29/2016 04:15 AM, Stefan Priebe - Profihost AG wrote:
>>> Hi,
>>>
>>> i'm trying to get my 60TB
Hi, I use btrfs for NFS VM replica storage and for NFS shared VM storage.
Right now I have a small problem where VM image deletion takes too long
and the NFS client shows a timeout on the deletion
(ESXi storage migration, as an example).
Kernel: Linux nfs05 4.7.0-0.bpo.1-amd64 #1 SMP Debian 4.7.5-1~bpo8+2
2016-10-20 15:09 GMT+03:00 Austin S. Hemmelgarn <ahferro...@gmail.com>:
> On 2016-10-20 05:29, Timofey Titovets wrote:
>>
>> Hi, i use btrfs for NFS VM replica storage and for NFS shared VM storage.
>> At now i have a small problem what VM image deletion took to long
Hi, I use btrfs as storage for root and data on ElasticSearch
servers and I've hit a strange bug where the servers hang.
But I get this stack trace only if I start Elastic.
Debian 8 x64
Linux msq-k1-srv-ids-02 4.8.0-1-amd64 #1 SMP Debian 4.8.5-1
(2016-10-28) x86_64 GNU/Linux
I also caught it on Debian.
So it's a btrfs problem;
I hit the hang again with 4.8.7, and I can't reproduce it if the ES data is stored on ext4.
Trace from 4.8.7:
Nov 25 14:09:30 msq-k1-srv-ids-01 kernel: INFO: task
btrfs-transacti:4143 blocked for more than 120 seconds.
Nov 25 14:09:30 msq-k1-srv-ids-01 kernel: Not tainted 4.8.0-1-amd64
2016-11-18 21:15 GMT+03:00 Goffredo Baroncelli :
> Hello,
>
> these are only my thoughts; no code here, but I would like to share it hoping
> that it could be useful.
>
> As reported several times by Zygo (and others), one of the problem of raid5/6
> is the write hole. Today
2016-11-18 23:32 GMT+03:00 Janos Toth F. :
> Based on the comments of this patch, stripe size could theoretically
> go as low as 512 byte:
> https://mail-archive.com/linux-btrfs@vger.kernel.org/msg56011.html
> If these very small (0.5k-2k) stripe sizes could really work
2016-12-08 18:42 GMT+03:00 Austin S. Hemmelgarn :
> On 2016-12-08 10:11, Swâmi Petaramesh wrote:
>>
>> Hi, Some real world figures about running duperemove deduplication on
>> BTRFS :
>>
>> I have an external 2,5", 5400 RPM, 1 TB HD, USB3, on which I store the
>> BTRFS
Hi, as the wiki says (https://btrfs.wiki.kernel.org/index.php/Glossary):
"A part of a block group. Chunks are either 1 GiB in size (for data) or
256 MiB (for metadata)."
The btrfs tools show me that the allocated size is not 1GiB aligned; have things
changed? Am I missing something?
# btrfs fi df /; btrfs fi usage /;
Calculate the byte core set for the data sample:
Sort the bucket's counters in decreasing order.
Count how many of them cover 90% of the sample.
If the core set is small (<=25%), the data is easily compressible.
If the core set is large (>=80%), the data is not compressible.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com
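A plain-C illustration of that core set calculation (userspace sketch only, not the kernel patch; the 256-entry bucket and u32 counters are assumptions):

#include <stdint.h>
#include <stdlib.h>

/* Sort the per-byte-value counters in decreasing order. */
static int cmp_desc(const void *a, const void *b)
{
	uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;

	return (x < y) - (x > y);
}

/* Return how many byte values are needed to cover 90% of the sample. */
static unsigned int byte_core_set_size(uint32_t bucket[256], uint32_t sample_size)
{
	uint32_t covered = 0, threshold = sample_size * 90 / 100;
	unsigned int i;

	qsort(bucket, 256, sizeof(bucket[0]), cmp_desc);
	for (i = 0; i < 256 && covered < threshold; i++)
		covered += bucket[i];
	return i;   /* <= 64 (25%): compressible; >= 205 (80%): not */
}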
Calculate the byte set size for the data sample:
Calculate how many unique bytes occurred in the sample,
by counting all bytes in the bucket with a count > 0.
If the byte set is small (~25%), the data is easily compressible.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compressi
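The byte set itself is simpler; sketched in the same userspace style (illustration only, not the patch):

#include <stdint.h>

/* Number of distinct byte values seen in the sample (non-empty bucket slots). */
static unsigned int byte_set_size(const uint32_t bucket[256])
{
	unsigned int i, count = 0;

	for (i = 0; i < 256; i++)
		if (bucket[i] > 0)
			count++;
	return count;   /* around 64 (25%) or less: easily compressible */
}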
e care about where and what symbol is stored
in the bucket for now.
Changes v2 -> v3 (only updates patch #3):
- Fix a u64 division problem by using u32 for input_size
- Fix the input size calculation: start - end -> end - start
- Add the missing sort.h header
Timofey Titovets (3):
Btrfs: heuristic add sim
Get a small sample from the input data and calculate
the byte type counts for that sample into a bucket.
The bucket will store info about which bytes,
and how many of each, have been detected in the sample.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 24 ++--
fs
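A userspace sketch of the sampling step (the 16-byte samples taken every 256 bytes match the figures given in the later cover letter; everything else here is assumed):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SAMPLE_READ_SIZE 16
#define SAMPLE_STEP      256

/* Walk the input in SAMPLE_STEP strides, look at SAMPLE_READ_SIZE bytes
 * each time, and count byte values into a 256-entry bucket. */
static void sample_into_bucket(const uint8_t *in, size_t len, uint32_t bucket[256])
{
	size_t i, j;

	memset(bucket, 0, 256 * sizeof(bucket[0]));
	for (i = 0; i + SAMPLE_READ_SIZE <= len; i += SAMPLE_STEP)
		for (j = 0; j < SAMPLE_READ_SIZE; j++)
			bucket[in[i + j]]++;
}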
help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/Timofey-Titovets/Btrfs-populate-heuristic-with-detection-logic/20170729-061208
> config: arm-arm5 (attached as .config)
> compiler: arm-linux-gnueabi-gcc (Debian 6.1.1-9) 6.1.1 201607
2017-08-01 23:21 GMT+03:00 Leonidas Spyropoulos :
> On 01/08/17, E V wrote:
>> In general I think btrfs takes time proportional to the size of your
>> metadata to mount. Bigger and/or fragmented metadata leads to longer
>> mount times. My big backup fs with >300GB of metadata
For now the code just returns true;
more complex heuristic code will be added later.
Changes v1 -> v2:
- Heuristic call moved into the inode_need_compress() function
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 22 ++
fs/btrfs/comp
For now that code just return true
Later more complex heuristic code will be added
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 30 ++
fs/btrfs/compression.h | 2 ++
fs/btrfs/inode.c | 10 +-
3 files chang
For now that code just return true
Later more complex heuristic code will be added
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 22 ++
fs/btrfs/compression.h | 2 ++
fs/btrfs/inode.c | 18 --
3 files chang
Calculate byte core set for data sample:
Sort bucket's numbers in decreasing order
Count how many numbers use 90% of sample
If core set are low (<=25%), data are easily compressible
If core set high (>=80%), data are not compressible
Signed-off-by: Timofey Titovets <nefelim...@gmail.com
Get small sample from input data and calculate
byte type count for that sample into bucket.
Bucket will store info about which bytes
and how many has been detected in sample
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 24 ++--
fs
e care about where and what symbol stored
in bucket at now
Timofey Titovets (3):
Btrfs: heuristic add simple sampling logic
Btrfs: heuristic add byte set calculation
Btrfs: heuristic add byte core set calculation
fs/btrfs/compression.c | 108 +++
Calculate byte set size for data sample:
Calculate how many unique bytes has been in sample
By count all bytes in bucket with count > 0
If byte set low (~25%), data are easily compressible
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compressi
Calculate the byte core set for the data sample.
For a small core set, the data is easily compressible;
for a large core set, the data is not compressible.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 60 ++
fs/btrfs/compres
Calculate the byte set size for the data sample;
if the byte set is small, the data is easily compressible.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 27 +++
fs/btrfs/compression.h | 1 +
2 files changed, 28 insertions(+)
diff --git a/fs
Get a small sample from the input data
and calculate the byte type counts for that sample.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 24 ++--
fs/btrfs/compression.h | 11 +++
2 files changed, 33 insertions(+), 2 deletions(-)
diff
Based on kdave for-next,
as the heuristic skeleton is already merged.
Populate the heuristic with basic code that:
1. Collects a sample from the input data
2. Calculates the byte set for the sample,
to detect easily compressible data
3. Calculates the byte core set size,
to detect both easily compressible and incompressible data
Timofey
Hi Nick Terrell,
If I understood everything correctly,
zstd can compress (decompress) data in a way compatible with gzip (zlib).
Is that also true for the in-kernel library?
If so, does it make sense to directly replace zlib with
zstd (configured to work like zlib) in place (for example, for btrfs
zlib
Add a heuristic computation before compression,
to avoid loading the resource-heavy compression workspace
if the data probably can't be compressed.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/Makefile| 2 +-
fs/btrfs/heuristic.c
The heuristic code computes the Shannon entropy in cases where the
other methods can't make a clear decision.
That calculation needs floating point,
but as floating point can't be used in the kernel,
let's just precalculate all our input/output values.
Signed-off-by: Timofey Titovets <nefe
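A userspace illustration of the precalculation idea (the real patch uses a precomputed log2_lshift16() table; here the table is simply filled once at init so the per-call path stays integer-only; the scale and table size are assumptions):

#include <math.h>
#include <stdint.h>

#define LOG2_SCALE 256      /* fixed point: 1 bit == 256 units */
#define MAX_SAMPLE 8192     /* assumed maximum sample size */

static uint32_t log2_fixed[MAX_SAMPLE + 1];

static void entropy_init(void)
{
	for (unsigned int i = 1; i <= MAX_SAMPLE; i++)
		log2_fixed[i] = (uint32_t)(log2((double)i) * LOG2_SCALE + 0.5);
}

/* H = sum(c_i/N * log2(N/c_i)); returns entropy scaled by LOG2_SCALE,
 * so fully random data approaches 8 * LOG2_SCALE. */
static uint32_t shannon_entropy(const uint32_t bucket[256], uint32_t sample_size)
{
	uint64_t acc = 0;

	for (unsigned int i = 0; i < 256; i++)
		if (bucket[i])
			acc += (uint64_t)bucket[i] *
			       (log2_fixed[sample_size] - log2_fixed[bucket[i]]);
	return (uint32_t)(acc / sample_size);
}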
nings about big stack.
- Small cleanups
v3 -> v4:
- Split: btrfs_compress_heuristic() to simplify code
- Drop BUG_ON(!page),
this already checked in extent_range_clear_dirty_for_io()
- Add zero detection
- Add more macros for tuning
- Fix several typos
Timofey Titovets (2):
Btrfs: add precompute
2017-07-03 20:30 GMT+03:00 David Sterba <dste...@suse.cz>:
> On Sat, Jul 01, 2017 at 07:56:02PM +0300, Timofey Titovets wrote:
>> Add a heuristic computation before compression,
>> for avoiding load resource heavy compression workspace,
>> if data are probably can't be
For now that code just return true
Later more complex heuristic code will be added
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 22 ++
fs/btrfs/compression.h | 2 ++
fs/btrfs/inode.c | 25 -
3 files c
2017-07-29 16:36 GMT+03:00 Timofey Titovets <nefelim...@gmail.com>:
> Based on kdave for-next
> As heuristic skeleton already merged
> Populate heuristic with basic code.
>
> First patch: add simple sampling code
> It's get 16 byte samples with 256 bytes shifts
> ov
Calculate byte set size for data sample:
Calculate how many unique bytes has been in sample
By count all bytes in bucket with count > 0
If byte set low (~25%), data are easily compressible
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/heurist
Heuristic workspace:
- Add a bucket for storing byte type counters
- Add a sample array for storing a partial copy of
the input data range
- Add a counter for storing the current sample size in the workspace
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/heuristic.
Copy sample data from the input data range to the sample buffer,
then calculate the byte type counts for that sample into the bucket.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/heuristic.c | 31 +--
1 file changed, 29 insertions(+), 2 deletions(-)
diff
Use memcmp to check the sample data against zeroes.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/heuristic.c | 18 ++
1 file changed, 18 insertions(+)
diff --git a/fs/btrfs/heuristic.c b/fs/btrfs/heuristic.c
index 5336638a3b7c..4557ea1db373 100644
--- a/fs
Calculate byte core set for data sample:
Sort bucket's numbers in decreasing order
Count how many numbers use 90% of sample
If core set are low (<=25%), data are easily compressible
If core set high (>=80%), data are not compressible
Signed-off-by: Timofey Titovets <nefelim...@gmail.com
stic code to external file
- Make heuristic use compression workspaces
- Add check sample to zeroes
Timofey Titovets (6):
Btrfs: heuristic make use compression workspaces
Btrfs: heuristic workspace add bucket and sample items
Btrfs: Implement heuristic sampling logic
Btrfs: heuristic ad
Move the heuristic to an external file.
Implement compression workspace support for
the heuristic's resources.
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/Makefile | 2 +-
fs/btrfs/compression.c | 18 +
fs/btrfs/compression.h | 7 -
fs/btrfs/heuristic.c
Calculate byte set size for data sample:
Calculate how many unique bytes has been in sample
By count all bytes in bucket with count > 0
If byte set low (~25%), data are easily compressible
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compressi
Change counter type in bucket item u16 -> u32
- Drop other fields from bucket item for now,
no one use it
Timofey Titovets (3):
Btrfs: heuristic add simple sampling logic
Btrfs: heuristic add byte set calculation
Btrfs: heuristic add byte core set calculation
fs/btrfs/c
Calculate byte core set for data sample:
Sort bucket's numbers in decreasing order
Count how many numbers use 90% of sample
If core set are low (<=25%), data are easily compressible
If core set high (>=80%), data are not compressible
Signed-off-by: Timofey Titovets <nefelim...@gmail.com
Get small sample from input data and calculate
byte type count for that sample into bucket.
Bucket will store info about which bytes
and how many has been detected in sample
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
fs/btrfs/compression.c | 24 ++--
fs
2017-06-20 1:09 GMT+03:00 Timofey Titovets <nefelim...@gmail.com>:
> Hi, for last several days i try work on entropy calculation that can
> be usable in btrfs compression code (for detect bad compressible
> data),
>
> I've implemented:
> - avg meaning (Problems with accur
I'm done with the heuristic method,
so I'm posting some performance test output:
(I store the test data in /run/user/$UID/, and the script just runs the program 2 times)
###
# The performance test will measure the initialization time
# and subtract it from the run time of the tests.
# This may be inaccurate in some cases
# But this
ke log2_lshift16() more like a binary tree, as suggested by:
Adam Borowski <kilob...@angband.pl>
Changes since v2:
- Fix page read address overflow in heuristic.c
- Make "bucket" dynamically allocated, to fix warnings about the big stack.
- Small cleanups
Timofey Titovets (2):
Btr
Heuristic code compute shannon entropy in cases when
other methods can't make clear decision
For realization that calculation it's needs floating point,
but as this doesn't possible to use floating point,
lets just precalculate all our input/output values
Signed-off-by: Timofey Titovets <nefe