On 11/16/17, Austin S. Hemmelgarn wrote:
> I'm pretty sure defrag is equivalent to 'compress-force', not
> 'compress', but I may be wrong.
Are there any devs who can confirm this?
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 11/15/17, Martin Steigerwald wrote:
> Somehow I am happy that I still have a plain Ext4 for /boot. :)
You may use uncompressed btrfs for /boot.
Both Syslinux (my choice) and Grub support it.
On 11/15/17, Lukas Pirl wrote:
> you might be interested in the thread "Read before you deploy
> btrfs + zstd"¹.
Thanks. I've read it. Bootloader is not an issue since /boot is on
another uncompressed fs.
Let me make my question more generic:
Can there be any issues for
Kernel 4.14 now includes btrfs zstd compression support.
My question:
I currently have a fs mounted and used with "compress=lzo"
option. What happens if I change it to "compress=zstd"?
My guess is that existing files will still be read and decompressed via lzo,
and new files will be written with zstd.
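That guess matches how btrfs compression works: the algorithm is recorded per extent, so old lzo extents stay readable after the switch. A hedged sketch of switching and optionally recompressing old data (the mount point is illustrative):

```shell
# Remount with zstd; only newly written extents use the new algorithm.
# Existing lzo-compressed extents remain readable, since the kernel
# picks the decompressor per extent.
mount -o remount,compress=zstd /mnt/data

# Optionally rewrite old files so they are recompressed with zstd.
# Note: defragmenting can break reflink/snapshot sharing.
btrfs filesystem defragment -r -czstd /mnt/data
```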
On 8/1/17, Duncan <1i5t5.dun...@cox.net> wrote:
> Imran Geriskovan posted on Mon, 31 Jul 2017 22:32:39 +0200 as excerpted:
>>>> Now the init on /boot is a "19 lines" shell script, including lines
>>>> for keymap, hdparm, cryptsetup. And let's not forg
Do you have any experience/advice/comment regarding dup data on ssds?
>>> Very good question. =:^)
>> Now the init on /boot is a "19 lines" shell script, including lines for
>> keymap, hdparm, cryptsetup. And let's not forget this is possible by a
>> custom kernel and its reliable buddy
On 7/30/17, Duncan <1i5t5.dun...@cox.net> wrote:
>>> Also, all my btrfs are raid1 or dup for checksummed redundancy
>> Do you have any experience/advice/comment regarding
>> dup data on ssds?
> Very good question. =:^)
> Limited. Most of my btrfs are raid1, with dup only used on the device-
>
On 7/9/17, Duncan <1i5t5.dun...@cox.net> wrote:
> I have however just upgraded to new ssds then wiped and setup the old
> ones as another backup set, so everything is on brand new filesystems on
> fast ssds, no possibility of old undetected corruption suddenly
> triggering problems.
>
> Also, all
On 5/15/17, Tomasz Kusmierz wrote:
> Theoretically all sectors in over provision are erased - practically they
> are either erased or waiting to be erased or broken.
> Over provisioned area does have more uses than that. For example if you have
> a 1TB drive where you
On 5/14/17, Tomasz Kusmierz wrote:
> In terms of over provisioning of SSD it’s a give and take relationship … on
> good drive there is enough over provisioning to allow a normal operation on
> systems without TRIM … now if you would use a 1TB drive daily without TRIM
> and
On 5/12/17, Kai Krakow wrote:
> I don't think it is important for the file system to know where the SSD
> FTL located a data block. It's just important to keep everything nicely
> aligned with erase block sizes, reduce rewrite patterns, and free up
> complete erase blocks as
On 5/12/17, Duncan <1i5t5.dun...@cox.net> wrote:
> FWIW, I'm in the market for SSDs ATM, and remembered this from a couple
> weeks ago so went back to find it. Thanks. =:^)
>
> (I'm currently still on quarter-TB generation ssds, plus spinning rust
> for the larger media partition and backups, and
On 4/17/17, Roman Mamedov wrote:
> "Austin S. Hemmelgarn" wrote:
>> * Compression should help performance and device lifetime most of the
>> time, unless your CPU is fully utilized on a regular basis (in which
>> case it will hurt performance, but still
Well, this may be the follow-up for the Btrfs/SSD discussion.
Probably nobody here has had their hands on these Optane SSDs yet (or has someone?)
Anyway, what are your expectations/projections about
memory/storage hybrid tech?
XPoint and/or other tech will eventually make memory and storage
converge.
With
Hi,
Sometime ago we had some discussion about SSDs.
Within the limits of unknown/undocumented device infos,
we had loosely covered data retention capability/disk age/lifetime
interrelations, the (in?)effectiveness of btrfs dup on SSDs, etc.
Now, as time passed and with some accumulated experience
Oops.. I mean 4.9/4.10 Experiences
On 2/16/17, Imran Geriskovan <imran.gerisko...@gmail.com> wrote:
> What are your experiences for btrfs regarding 4.10 and 4.11 kernels?
> I'm still on 4.8.x. I'd be happy to hear from anyone using 4.1x for
> a very typical single disk setup. Are
What are your experiences for btrfs regarding 4.10 and 4.11 kernels?
I'm still on 4.8.x. I'd be happy to hear from anyone using 4.1x for
a very typical single disk setup. Are they reasonably stable/good
enough for this case?
I don't know if it is btrfs related but I'm getting
hard freezes on 4.8.17.
So I went back to 4.8.14 (with identical .config file).
It is one of my kernels which is known to be trouble
free for a long time.
Since they are real hard lockups, I can't provide
any logs.. Does anyone experience
>> I seem to have a similar issue to a subject in December:
>> Subject: page allocation stall in kernel 4.9 when copying files from one
>> btrfs hdd to another
>> In my case, this is caused when rsync'ing large amounts of data over NFS
>> to the server with the BTRFS file system. This was not
> Wait wait wait a second:
> This is 256 MB SINGLE created
> by GPARTED, which is the replacement of MANUALLY
> CREATED 127MB DUP which is now non-existent..
> Which I was not aware was a DUP at the time..
> Peeww... Small btrfs is full of surprises.. ;)
What's more, I also have another 128MB
> btrfs filesystem df /mnt/back/boot
> Data, single: total=8.00MiB, used=0.00B
> System, DUP: total=8.00MiB, used=16.00KiB
> Metadata, DUP: total=32.00MiB, used=112.00KiB
> GlobalReserve, single: total=16.00MiB, used=0.00B
> IT IS DUP!!
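If the accidental DUP profiles are unwanted on that small filesystem, a balance can convert them in place; a hedged sketch against the quoted mount point (the `soft` filter skips chunks already in the target profile, and `-f` is required when reducing the system-chunk redundancy):

```shell
# Convert metadata and system chunks from DUP to single.
btrfs balance start -mconvert=single,soft -sconvert=single,soft -f /mnt/back/boot

# Verify the resulting profiles.
btrfs filesystem df /mnt/back/boot
```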
Wait wait wait a second:
This is 256 MB SINGLE created
by
>> Just to note again:
>> Ordinary 127MB btrfs gives "Out of space" around 64MB payload. 128MB is
>> usable to the end.
> Thanks, and just to clarify for others possibly following along or
> googling it up later, that's single mode (as opposed to dup mode) for at
> least data, if in normal
On 9/11/16, Chris Murphy wrote:
> Something else that's screwy in that bug that I just realized, why is
> it not defaulting to mixed-block groups on a 100MiB fallocated file? I
> thought mixed-bg was the default below a certain size like 2GiB or
> whatever?
>> With an
On 9/11/16, Duncan <1i5t5.dun...@cox.net> wrote:
> Martin Steigerwald posted on Sun, 11 Sep 2016 17:32:44 +0200 as excerpted:
>>> What is the smallest recommended fs size for btrfs?
>>> Can we say size should be in multiples of 64MB?
>> Do you want to know the smallest *recommended* or the smallest
What is the smallest recommended fs size for btrfs?
- There are mentions of 256MB around the net.
- Gparted reserves minimum of 256MB for btrfs.
With an ordinary partition on a single disk,
fs created with just "mkfs.btrfs /dev/sdxx":
- 128MB works fine.
- 127MB works, but behaves as if it were 64MB.
Can
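The size-threshold behavior described above can be reproduced without a spare partition by using a loopback file; a hedged sketch (sizes, paths, and the loop device name are illustrative):

```shell
# Create a 128M backing file and format it with default options.
truncate -s 128M /tmp/btrfs-test.img
losetup -f --show /tmp/btrfs-test.img   # prints the allocated loop device
mkfs.btrfs /dev/loop0

# Mount and inspect how much space is actually allocatable.
mount /dev/loop0 /mnt/test
btrfs filesystem df /mnt/test
```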
> Why not just create a Systemd unit (or whatever the proper term is) that
> runs on boot and runs the mount command manually and doesn't wait for it to
> return? Seems easier than messing with init systems.
Exactly: Never "mess" with inits..
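For reference, the unit suggested in the quote above might look roughly like this hedged sketch (unit name, device, and mount point are all made up; `Type=simple` means systemd considers the unit started immediately and does not wait for the mount command to return):

```ini
# /etc/systemd/system/srv-bigvol-mount.service (hypothetical)
[Unit]
Description=Mount a large btrfs volume without blocking boot
After=local-fs.target

[Service]
# simple: boot does not wait for a slow btrfs mount to finish.
Type=simple
ExecStart=/usr/bin/mount /dev/sdb1 /srv/bigvol

[Install]
WantedBy=multi-user.target
```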
>>> I can't find any fstab setting for systemd to raise this timeout.
>>> There's just the x-systemd.device-timeout but this controls how long to
>>> wait for the device and not for the mount command.
>>> Is there any solution for big btrfs volumes and systemd?
>>> Stefan
Switch to Runit.
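For anyone finding this later: newer systemd releases document a separate timeout for the mount command itself in systemd.mount(5); a hedged /etc/fstab sketch (device, mount point, and duration are illustrative):

```
# x-systemd.mount-timeout= controls how long systemd waits for the
# mount command, as opposed to x-systemd.device-timeout= for the device.
/dev/sdb1  /srv/bigvol  btrfs  defaults,x-systemd.mount-timeout=10min  0  0
```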
>> On a side note, I really wish BTRFS would just add LZ4 support. It's a
>> lot more deterministic WRT decompression time than LZO, gets a similar
>> compression ratio, and runs faster on most processors for both
>> compression and decompression.
Relative ratios according to
>> What are your disk space savings when using btrfs with compression?
> * There's the compress vs. compress-force option and discussion. A
> number of posters have reported that for mostly text, compress didn't
> give them expected compression results and they needed to use compress-
> force.
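The distinction drawn above comes from the kernel's heuristic: with plain `compress`, a file whose early extents look incompressible gets flagged and skipped from then on, while `compress-force` runs every extent through the compressor. A hedged sketch (device and mount points are illustrative):

```shell
# Heuristic compression: may give poor ratios on files that start
# with incompressible data.
mount -o compress=lzo /dev/sdb1 /mnt/a

# Forced compression: every extent is compressed regardless.
mount -o compress-force=lzo /dev/sdb1 /mnt/b
```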
On 11/30/15, Duncan <1i5t5.dun...@cox.net> wrote:
> Of course you can also try compress-force(=lzo the default
> compression so the =spec isn't required), which should give
> you slightly better performance than zlib, but also a bit
> less efficient compression in terms of size saved.
lzo perf
>>> After upgrading from systemd227 to 228
>>> these messages began to show up during boot:
>>> [ 24.652118] BTRFS: could not find root 8
>>> [ 24.664742] BTRFS: could not find root 8
> b. For the OP, is it possible quotas was ever enabled on this file system?
Quotas have never been enabled.
It happens on every boot.
With systemd.log_level=debug boot parameter appended,
I could not find any meaningful operation just before the message.
The systemd journal boot dump will be in your personal mailbox shortly.
After upgrading from systemd227 to 228
these messages began to show up during boot:
[ 24.652118] BTRFS: could not find root 8
[ 24.664742] BTRFS: could not find root 8
Are they important?
Regards,
It's not about snapshots, but here is another incremental
backup recipe for optical media like DVDs and Blu-rays:
Base Backup:
1) Create encrypted loopback devices of DVD or Blu-ray sizes.
2) Create a compressed multi device Btrfs spanning these
loopback devices. (To save space, you may use
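The two steps above might be sketched like this; everything here is illustrative (file names, sizes, loop devices, mapper names), and the cryptsetup commands prompt interactively for passphrases:

```shell
# 1) DVD-sized encrypted loopback devices.
for i in 1 2 3; do
  truncate -s 4480M /backup/vol$i.img
  losetup /dev/loop$i /backup/vol$i.img
  cryptsetup luksFormat /dev/loop$i     # interactive: confirms + passphrase
  cryptsetup open /dev/loop$i backvol$i
done

# 2) One compressed multi-device btrfs spanning the encrypted volumes.
mkfs.btrfs -d single -m single /dev/mapper/backvol1 \
    /dev/mapper/backvol2 /dev/mapper/backvol3
mount -o compress /dev/mapper/backvol1 /mnt/backup
```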
On 6/25/14, Duncan 1i5t5.dun...@cox.net wrote:
Imran Geriskovan posted on Wed, 25 Jun 2014 15:01:49 +0200 as excerpted:
Note that gdisk gives a default 8-sector alignment value for AF disks.
That is, the 'sector' gdisk means is the 'logical sector'!
A sufficiently determined user may create misaligned
On 6/25/14, Chris Murphy li...@colorremedies.com wrote:
On Jun 25, 2014, at 1:47 AM, Hugo Mills h...@carfax.org.uk wrote:
The question is, why? If you have enough disk media errors to make
it worth using multiple copies, then your storage device is basically
broken and needs replacing, and
On 6/23/14, Martin K. Petersen martin.peter...@oracle.com wrote:
Anyway. The short answer is that Linux will pretty much always do I/O in
multiples of the system page size regardless of the logical block size
of the underlying device. There are a few exceptions to this such as
direct I/O,
On 6/25/14, Hugo Mills h...@carfax.org.uk wrote:
Storage is pretty cheap now, and to have multiple copies in btrfs is
something that I think could be used a lot. I know I will use multiple
copies of my data if made possible.
The question is, why? If you have enough disk media errors to
The 64KB Btrfs bootloader pad is 8 sector aligned, so for 512e AF disks
there's no problem formatting the whole drive. The alignment problem
actually happens when partitioning it, using old partition tools that don't
align on 8 sector boundaries. There are some such tools still floating
On 6/19/14, Russell Coker russ...@coker.com.au wrote:
On Wed, 18 Jun 2014 21:29:39 Daniel Cegiełka wrote:
Everything works fine. Is such a solution recommended? In my
opinion, the creation of the partitions seems to be completely
unnecessary if you can use btrfs.
If you don't need to have
On 6/18/14, Daniel Cegiełka daniel.cegie...@gmail.com wrote:
I created btrfs directly to disk using such a scheme (no partitions):
cd /mnt
btrfs subvolume create __active
btrfs subvolume create __active/rootvol
Everything works fine. Is such a solution recommended? In my
opinion, the
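For context, a layout like `__active/rootvol` is usually paired with explicit subvolume mounts; a hedged sketch (device names are illustrative):

```shell
# Mount the top-level subvolume (id 5) for administration...
mount -o subvolid=5 /dev/sda /mnt

# ...and mount the actual root subvolume for the running system.
mount -o subvol=__active/rootvol /dev/sda /target
```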
I've experienced the following with balance:
Setup:
- Kernel 3.12.9
- 11 DVD sized (4.3GB) loopback devices.
(9 Read-Only Seed devices + 2 Read/Write devices)
- 9 device seed created with -m single -d single and made
Read-only with btrfstune -S 1 ...
- 2 devices were added at different dates. NO
I'm trying to track this down - this started happening without changing
the kernel in use, so probably
a corrupted filesystem. The symptoms are that all memory is suddenly used
by no apparent source. OOM
killer is invoked on every task, still can't free up enough memory to
continue.
I
Every write on an SSD block reduces its data retention capability.
No concrete figures, but it is assumed to be
- 10 years for new devices
- 1 year at rated usage. (There are much lower figures around.)
Hence, I would not trade retention time and wear for
autodefrag with no/minor benefits on SSD.
On 12/12/13, Chris Mason c...@fb.com wrote:
For me anyway, data=dup in mixed mode is definitely an accident ;)
I personally think data dup is a false sense of security, but drives
have gotten so huge that it may actually make sense in a few
configurations.
Sure, it's not about any security
That's actually the reason btrfs defaults to SINGLE metadata mode on
single-device SSD-backed filesystems, as well.
But as Imran points out, SSDs aren't all there is. There's still
spinning rust around.
And defaults aside, even on SSDs it should be /possible/ to specify data-
dup mode,
What's more (in relation to our long-term data integrity aim),
the order of magnitude for their unpowered data retention period is
1 YEAR. (Read it as 6 months to 2-3 years.)
Does btrfs need to date-stamp each block/chunk to ensure that data is
rewritten before suffering flash memory bitrot?
Is
:
mkfs.btrfs -m dup 4 -d dup 3 ... (4 duplicates for metadata, 3
duplicates for data)
I kindly request your comments. (At least for -d dup)
Regards,
Imran Geriskovan
Currently, if you want to protect your data against bit-rot on
a single device you must have 2 btrfs partitions and mount
them as Raid1.
No, this also works:
mkfs.btrfs -d dup -m dup -M device
Thanks a lot.
I guess docs need an update:
https://btrfs.wiki.kernel.org/index.php/Mkfs.btrfs:
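A quick way to confirm what the `-d dup -m dup -M` invocation above actually produced; a hedged sketch (device and mount point are illustrative, and with `-M` data and metadata share mixed chunks):

```shell
mount /dev/sdb1 /mnt

# In mixed-bg mode the profiles show up as a combined line,
# e.g. "Data+Metadata, DUP: ..."
btrfs filesystem df /mnt
```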
-- Forwarded message --
From: Imran Geriskovan imran.gerisko...@gmail.com
Date: Wed, 11 Dec 2013 02:14:25 +0200
Subject: Re: Feature Req: mkfs.btrfs -d dup option on single device
To: Chris Murphy li...@colorremedies.com
Current btrfs-progs is v3.12. 0.19 is a bit old. But yes
I'm not a developer, I'm just an ape who wears pants. Chris Mason is the
lead developer. All I can say about it is that it's been working for me OK
so far.
Great:) Now, I understand that you were using -d dup, which is quite
valuable for me. And since GMail only shows first names in the Inbox list,