s and can live with them the same way that people that don't care
generally live with defaults that may or may not be the absolute best
case for them, but are generally at least not horrible.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
, dup for both data and metadata on non-deduping ssds, but of
course that means data takes double the space since there's two copies of
it, and that gets kind of expensive on ssd, if it's more than the
fraction of a GiB that's /boot.
--
be too big to be practical, at least for other than
the lucky few with the sort of memory required, as well.)
--
. I do periodic backups, but have run restore a
couple of times when a filesystem wouldn't mount, in order to get back as
much of the delta between the last backup and current as possible. Of
course I know not doing more frequent backups is a calculated risk and I
was prepared to have t
to be triggering when you're away from the machine, but your
problem does seem a bit different than mine (mine was a consistent
crash), and I don't believe mine made release code anyway, so it's likely
the similarity is just coincidence.
--
e, as too many unfortunate people eventually find out,
actions, or the lack of them, speak louder than words, and if the data is
lost due to not having a backup, well, the only thing to do is to be
happy that the thing your actions defined as worth more than that data,
the time/hassle/resources
ackups demonstrating that
value, because if you don't, you have a very real possibility of
demonstrating that you did /not/ value the data as much as you claimed
to, because it wasn't backed up and that lack of backup demonstrated the
lie in any claim to the contrary. IOW, backups speak
y a btrfs bug, even if you argue it's only in the
documentation, because the manpage (tho still 4.6.1, here) says it
resumes an interrupted scrub but won't start a new one if the scrub
finished successfully, and an abort is definitely an interruption, not a
successful finish.
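For reference, the resume semantics described there map onto the scrub subcommands roughly like this (a sketch only; `/mnt` is a placeholder, and these need root on a mounted btrfs, so don't paste them blindly):

```shell
# Start a scrub in the background on the filesystem mounted at /mnt.
btrfs scrub start /mnt

# Abort it part-way through; the position reached is recorded.
btrfs scrub cancel /mnt

# Per the manpage wording quoted above, "start" resumes an interrupted
# scrub from the recorded position rather than beginning a new one
# (there's also an explicit "btrfs scrub resume /mnt").
btrfs scrub start /mnt

# Check progress / completion state at any time.
btrfs scrub status /mnt
```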
--
onably current btrfs userland as well.
So I'd recommend upgrading to the latest kernel 4.4 if you want to stay
with the stable series, or 4.6 or 4.7 if you want current, and then (less
important) upgrading the btrfs userspace as well. It's possible the
newer kernel will handle the comb
happen.
Of course that may mean backing up and recreating the filesystem fresh in
order to have autodefrag on from the beginning, if you're looking at
trying it on existing filesystems that are likely highly fragmented.
--
1 GiB as
>> > I said.
And... I see that the progs v4.7-rc1 release has the 32M default.
=:^)
--
is actually compressed or not, since if it is, filefrag will report
multiple 128 KiB extents, while if it's not, extent sizes should be much
less regular, likely with larger extents unless the file is often
modified and rewritten in place.
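As a concrete illustration, here's one way to count those suspiciously regular 128 KiB extents in `filefrag -v` output. The sample lines below are made up for the sketch (real output varies by e2fsprogs version), and the 32-block extent length assumes 4 KiB blocks:

```shell
# Made-up sample in the shape of `filefrag -v` extent lines
# (ext: logical_offset: physical_offset: length: expected: flags):
sample='   0:        0..      31:      34816..     34847:     32:
   1:       32..      63:      35072..     35103:     32:
   2:       64..      95:      35336..     35367:     32:             last,eof'

# Count extents whose length is exactly 32 blocks (32 * 4 KiB = 128 KiB);
# a long, regular run of these suggests the file is stored compressed.
n128=$(printf '%s\n' "$sample" |
  awk -F: '{gsub(/ /, "", $4); if ($4 == "32") c++} END {print c + 0}')
echo "$n128"
```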
--
purpose of
testing it, in order to be sure it's ready for that day when btrfs
really is finally stable enough that it can in clear conscience be
recommended to people without backups, as at least as stable and problem
free as whatever they were using previously. Tho that day's
had some place to write it to,
because btrfs restore is read-only against the btrfs you're restoring
from, and thus has no chance of causing further damage, while of course
btrfs check --init-csum-tree writes to the filesystem in question and
thus is a higher risk.
--
rts gpt partition tables and has
functionality that allows you to restore the primary from the secondary
as above. It has a good manpage, and there's more info about it on the
home page as well.
http://www.rodsbooks.com/gdisk/
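The interactive steps look roughly like this (sketch only; `/dev/sdX` is a placeholder, and the recovery-menu letters are from memory, so verify against the gdisk manpage before writing anything to disk):

```shell
gdisk /dev/sdX
# At the gdisk prompt:
#   r    enter the recovery and transformation menu
#   c    load the backup (secondary) partition table, rebuilding the main
#   v    verify the rebuilt table
#   w    write it to disk (or q to quit without saving)
```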
--
not
compress-force, and presumably not autodefrag. So there's probably a lot
of uncompressed files that were heavily fragmented and thus in many
extents, that a defrag run helped consolidate, thus reducing mount time
substantially.
--
't want
to lose in the meantime, is now recommended.
--
'm assuming it did and that you
had a full raid1 btrfs fi df report at one point.
A third way would be if some other bug triggered
btrfs to suddenly start writing single mode
chunks. There were some bugs like that in the
past, but they've been fixed for some time. But
perhaps there are sim
ing optional, which means things need to work without it, too.
--
lude a short, often
one-line per revision, revision changelog. It helps reviewers keep track
of what changed since the last time they looked at the patch. See pretty
much any on-list v2+ patch as an example.
--
Chris Mason posted on Fri, 08 Jul 2016 12:02:35 -0400 as excerpted:
> Can you please run the attached test program:
Umm... you want him to run it on the affected 4.6.x and late 4.7-rcs, not
on the unaffected 4.5.x, correct?
--
ilesystem, those will normally only identify the one device of a multi-
device filesystem, but the by-id links are keyed on device serial and
partition number, and if you are using GPT partitioning, you have
by-partuuid and (if you set them when setting up the partitions)
by-partlabel as well.
--
same kernel and progs you were likely using half a year
ago when you did that balance, just to nail down for sure whether it did
eat the files back then, so we don't have to worry about some other
problem.
--
you can duplicate the behavior once again with a
reasonably current kernel within the two-release series either LTS or
current range, as specified above, and can provide the logs, etc, from
it...
--
lly support the last two series, 4.6 and 4.5 ATM.
(FWIW, userspace version numbers are synced to kernelspace numbers.
Backward compatibility is supported, so a good rule of thumb there is to
run btrfs userspace at least as new as your kernel, which assuming you're
staying in line wit
t all these opcode traces too, until
someone explained that to me.)
--
le but as yet unoptimized btrfs raid10.
And of course there's one other alternative, zfs, if you are good with
its hardware requirements and licensing situation.
But I'd recommend btrfs raid1 as the simple choice. It's what I'm using
here (tho on a pair of ssds, so far smaller bu
the raid56 warning up there on the
arch wiki is a good idea, indeed.
--
've been noticing that in the "stripe length"
patches, when the comment associated with the patch suggests it's "strip
length" they're actually talking about, using the "N strips, one per
device, make a stripe" definition.
--
using raid56
mode for anything but testing with throw-away data, at this point.
Anything else is simply irresponsible.
Does that mean we need to put a "raid56 mode may eat your babies" level
warning in the manpage and require a --force to either mkfs.btrfs or
balance to raid56 mo
at
once triggering a system crash sure does sound familiar, here.
--
, and 3.18 was trouble-free enough btrfs-
wise, that we could expand that to three LTS series now, as the
indications were we might when 4.4 was still new. But it seems that
while we did support it a bit longer, say 2.5 LTS series, that couldn't
continue until the /next/ LTS came out.
r occasional
bugs that may force you to use them, but btrfs can be reasonably used for
daily use provided you are doing so.
--
al metadata chunk
size.
But I'd not worry about it yet. Once unallocated space gets down to
about half a TB, or either data or metadata size becomes multiple times
actual usage, a balance will arguably be useful. But the numbers look
pretty healthy ATM.
--
entire filesystem. The process runs
in the foreground.
So the balance start operation runs in the foreground, but as explained
elsewhere in the manpage, the balance is interruptible by unmount and
will automatically restart after a remount. It can also be paused and
resumed or canceled with
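A sketch of that lifecycle in command form (the mountpoint and the `-dusage=50` filter are placeholders, not recommendations, and these need root on a mounted btrfs):

```shell
# Runs in the foreground until done (or until interrupted):
btrfs balance start -dusage=50 /mnt

# From another terminal, the running balance can be managed:
btrfs balance status /mnt
btrfs balance pause  /mnt
btrfs balance resume /mnt
btrfs balance cancel /mnt
```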
t package for it and sometimes pre-release or first release
deployment versions with systemd/udev/lvm patches that others simply
don't have yet, but it could yet be a few years before it /fully/
settles down.
--
ime to
look up the bug ATM, but you might try the latest 4.1 and 4.4 LTS kernels
as well as the latest 4.6 and 4.7-rc kernels to see if it makes a
difference. If it's a problem with 4.6 and 4.7-rc then it becomes
interesting, and if it works with 4.4 or 4.1, even more so.
--
check without further
options should be a read-only operation. You can post the results from
that and ask if it's safe to run check with the --repair option, or if
you should try something else.
Of course, particularly once you have fresh backups available, another
option is to simply
6T and replacing with an 8T.
Don't forget the resize. That should leave you with two devices with
free space and thus hopefully allow normal raid1 reallocation with a
device remove again.
--
d take it from there, but unless you're going to be
deleting several TB of stuff as you add, at least doing a few TB worth of
balance to the new drive to start the process should result in a pretty
even spread as it fills up the rest of the way.
--
output of the yes command to stdin, for instance, or similar
sysadmin prompt automation tricks). A number of folks have mentioned
that and requested a way to say "yes, really all, don't ask again", an
option that btrfs restore unfortunately doesn't have yet.
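The prompt-automation workaround mentioned is the usual `yes`-piping trick, something like (paths are placeholders):

```shell
# btrfs restore prompts before continuing when it loops over many files;
# lacking a "yes to all" option, feed it an endless stream of "y":
yes | btrfs restore -v /dev/sdX /mnt/recovery-target
```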
--
ly different
> from a conventional raid10 where it's either gonna completely work or
> completely fail.
Yes, thanks, CMurphy. That's exactly what I was trying to explain. =:^)
--
agments in place, defrag likely won't help a lot, because the free
space as well will be heavily fragmented. So starting off with a clean
and new filesystem and using autodefrag from the beginning really is your
best bet.
--
y hit)
here. I could find it but I'd have to do a search in my own list
archives, and now that you are aware of the problem, you can of course do
the search as well, if you need to. =:^)
--
he patch as soon as you can thing, but not
something to be hugely worried about in the meantime.
--
that back for you, if you set it up
that way, of course.
--
m (tho it did
and does still have the reflinks/snapshots problem, but that's a totally
different issue).
Meanwhile, it's news to me that autodefrag doesn't have that problem any
longer...
--
've been off the platform for a decade and a half now.)
I may have to play with it a bit, when I have more time (I'm moving in a
couple days...).
--
d really consider raid56
mode as reasonably stable as btrfs in general, which is to say,
stabilizing, but not yet fully stable, so even then, the standard admin
backup rule that if you don't have backups you consider the data to be
worth less than the time/resources/hassle to do those b
, split them into cron.daily1 and cron.daily2, scheduled at
different times, bisecting the problem by seeing which one the behavior
follows.
Repeat as needed until you've discovered the culprit, then examine
exactly what it's doing to the filesystem.
And please report your results.
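The split itself can be mechanical; a throw-away demonstration of the idea (all directory and job names here are invented stand-ins; the real thing would operate on /etc/cron.daily and two new run-parts directories scheduled at different times):

```shell
# Work in a scratch directory so nothing real is touched.
demo=$(mktemp -d)
mkdir "$demo/cron.daily1" "$demo/cron.daily2"

# Stand-ins for the daily job scripts (names invented):
for job in logrotate man-db mlocate snapper trim; do
  touch "$demo/$job"
done

# Alternate the jobs between the two halves:
i=0
for job in "$demo"/*; do
  [ -f "$job" ] || continue
  i=$((i + 1))
  if [ $((i % 2)) -eq 1 ]; then
    mv "$job" "$demo/cron.daily1/"
  else
    mv "$job" "$demo/cron.daily2/"
  fi
done

ls "$demo/cron.daily1"   # half to schedule at the first time slot
ls "$demo/cron.daily2"   # half to schedule at the second time slot
```

Whichever half the misbehavior follows gets split again, until a single script remains.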
in this regard,
as balance (and check) operations simply don't scale well when there's
thousands or tens of thousands of reflinks per extent to account for.
Unfortunately (in this regard) it's incredibly easy and fast to create
snapshots, deceptively so, masking the work balance and c
the fix is integrated, just
to see if any other raid56 related bugs turn up, before actually
considering it reasonably usable. And definitely ask again then (if you
haven't been following the list and further raid56 development in the
meantime) before you start relying on it, just in cas
Nicholas D Steeves posted on Wed, 25 May 2016 16:36:13 -0400 as excerpted:
> On 25 May 2016 at 15:03, Duncan <1i5t5.dun...@cox.net> wrote:
>> Dmitry Katsubo posted on Wed, 25 May 2016 16:45:41 +0200 as excerpted:
>>> btrfs-restore [needs an o]ption that appl
e some reason it was excluded that I
as a non-dev simply didn't grok.
--
serious additional complexity, as I suspect it
might, yes, the option might still be added at some point, but the
priority to do it will be pretty low, which means the point at which it's
actually added is likely to be out there five years or more, simply
because there's so many othe
tion as well, to restore
timestamps and owner/perms information. Similarly, there's an option to
restore symlinks as well, without which they'll be missing. So you
probably do want to check that manpage. Just sayin'. =:^)
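A sketch of such an invocation with those options, assuming a reasonably current btrfs-progs (check your manpage for the exact flags; device and target are placeholders):

```shell
#   -m / --metadata  restore owner, mode and timestamps
#   -S / --symlink   restore symbolic links (otherwise they're skipped)
#   -x / --xattr     restore extended attributes, if you use them
btrfs restore -m -S -x -v /dev/sdX /mnt/recovery-target
```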
--
s beside the point in terms of the original question. I
think the problem is with his understanding of restore. See the reply
directly to his post, that I'll be making after this one.
--
VM on its own subvolume), if a
300ish cap per subvolume is maintained, the 15K total snapshots per
filesystem should still work reasonably well, so I should be able to drop
the overall filesystem cap recommendation and simply recommend a per-
subvolume snapshot cap of a few hundred?
--
the wiki page link. Try the normal mode first. If it
fails and you need further help with the advanced usage stuff, you can
ask more questions then.
https://btrfs.wiki.kernel.org/index.php/Restore
--
case, if the directory simply doesn't
exist, that would prevent the status files from being written, and thus
read, as well.
--
I
mounted root, or some other subvolume (which I don't have here to worry
about, but for those who do...) writable?
--
ost of a single ssd, the
total cost of four of them plus four hdds should still be below the cost
of five ssds, and you're still not using more than the 8 total hookups
you had already mentioned, so it should be quite reasonable to do it that
way.
--
onally because as I said I went the ssd route and that has been fine
for me, but at least one regular here says this sort of arrangement works
quite well, with the mdraid0s underneath to some extent making up for
btrfs raid1's bad read-scheduling.
--
omething, I think it's gone from that regard, as the emulation above
would of course have a different subvolume ID.
--
inline
metadata state to the multiples of 4 KiB block data extent state.
And if all the files had just shrunk, say from compaction (if done in-
place, not with a copy and rename), perhaps it's the reverse, the
transition from written data blocks to inline metadata state.
--
nearer 20%, that's still a significant
savings for the initial inline result, with the dedup-packer coming along
later to clean things up properly.
--
n). Then simply keep them separate, only attaching one at a time and
DEFINITELY never mounting the filesystem with both the clone and the
original devices attached, so the kernel can't get confused and write to
the wrong one because the other one is never there at the same time to
provide the
or something else, and that precludes just
letting things be, unless of course you can afford to simply buy your way
out of the problem with more storage devices.
--
place by previous snapshots and the new version, not the three that
you're likely to have if you wait until snapshots have been done before
doing the defrag (the old version as in previous snapshots, the new
version as initially written and locked in place by post-change pre-
defrag sna
David Sterba posted on Wed, 11 May 2016 16:47:39 +0200 as excerpted:
> btrfs-progs 4.5.2 have been released. A bugfix release.
So 4.5.3 as stated in the subject line, or 4.5.2 as stated in the message
body?
Given that I'm on 4.5.2, it must be 4.5.3 that's just released. =:^)
--
g my data eggs
all in the same filesystem basket with subvolumes, where if the
filesystem goes out all the subvolumes go with it!
--
the get-go? Presumably that can indeed be patched in, but not
being a dev, even if I could figure out a patch that worked for it,
there's a fair chance it would be more a hack than proper code. (As an
admin I have a patch that switches the normal relatime default to noatime,
so I don'
d
already demonstrated in multiple different ways that the data was really
of only trivial value to you, so you can simply blow away the existing
broken filesystem and start over with a new test. Words can claim
otherwise, but actions don't lie. 8^0
--
Adam Borowski posted on Sun, 08 May 2016 01:11:18 +0200 as excerpted:
> Duncan wrote:
>> > btrfs_destroy_inode
>
>> That's a known apparent false-positive warning on current 4.6-rc kernel
>> btrfs. The destroy-inode bit is related to a file deletion happening
n I use exclusively the lower level scripting language config stuff,
here.
--
if you
haven't changed it, and would have failed to boot as a result.
--
either by specifically including it in the initr*, or
by setting rootflags= on the kernel commandline via grub or whatever. If
your initr* first mounts it read-only, and it's not mounted writable
until you've switched to the main system and are using the main system's
fstab, then f
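Concretely, with a root subvolume named e.g. `@root` (name assumed for the sketch), the two places that must agree might look like:

```shell
# Kernel command line (grub.cfg or /etc/default/grub), read before fstab
# exists, so the initr* mounts the right subvolume from the start:
#   rootflags=subvol=@root

# /etc/fstab entry used once the main system takes over and remounts rw:
#   UUID=...  /  btrfs  subvol=@root,noatime  0 0
```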
k that I'd get
them to change the policy and call systemd's failure to support the
multiple fstab entries for a single mountpoint feature a bug, instead of
a feature.
Oh, well... As long as it still works in practice, as it has so far,
systemd complaining every time I reload it,
't bothered looking into it further, but
I'd have a head-start on it if I did as I'm already used to doing custom
service, target and timer units. Without that information, it'd
definitely take awhile longer to figure it all out, and I expect many
people will simply give up and fi
ack with dirty memory above the unlabeled/foreground value
is indeed happening in process context, because the kernel is charging
write time to individual processes at that point (the part you got
right), the value is still global and it's still actually the kernel
doing the writing -- it
run would correct the remaining errors and
further runs would return no errors at all.
So AFAIK, raid1 scrub handles the missing writes well too, as long as
scrub is run again whenever there's unverified errors, so it can detect
and correct more on the next run after the parent layer was fixed s
ual issue, or be silenced, by 4.6 release.
If you want further details, as I said, there's at least two other
threads with people reporting and discussing it, so read the last week or
two of the list archive (or even just the non-patch original thread
starter posts) and you'll find t
lper
scripts that let me easily follow git logs for every package, and even do
git bisects when necessary, without having to manually switch into the
package's git dir first. =:^)
--
che layer, and see if it gives you similar behavior in
terms of even df of a tmpfs hanging like that, of course without the
positive effects of bcache as well, tho.
--
y error?
FWIW I didn't actually copy it, I just clicked the link, but my client is
dumb enough to include closing brackets/parentheses in links, so...
And I'm dumb enough to have forgotten that irritating bug in my client,
tho I know about it and have caught and corrected it before, so...
Mike Fleetwood posted on Sun, 01 May 2016 14:54:44 +0100 as excerpted:
> On 1 May 2016 at 13:47, Duncan <1i5t5.dun...@cox.net> wrote:
>> Direct from that section of my /etc/sysctl.conf:
>>
>> ##
(up to 5 minutes or so,
tho I've read of people going to extremes and setting it to 15
minutes or even longer, tho that of course risks losing all that work
in a crash (!!)), again, to let the drive stay spun down for longer.
Between the two, setting much lower writeback cache size triggers,
and u
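The knobs being discussed are the vm.dirty_* sysctls; an illustrative (not recommended) /etc/sysctl.conf fragment, with values picked purely as examples:

```shell
# Smaller writeback-size triggers: start background writeback with less
# dirty data accumulated, and hard-throttle writers sooner.
# vm.dirty_background_bytes = 16777216    # 16 MiB
# vm.dirty_bytes            = 67108864    # 64 MiB

# Or the interval route: let dirty data age longer before writeback, so
# an idle drive can stay spun down (risking that much work in a crash):
# vm.dirty_writeback_centisecs = 30000    # wake writeback every 5 min
# vm.dirty_expire_centisecs    = 30000    # write out data older than 5 min
```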
e error that receive is, but I haven't the foggiest what the
problem would be.
I guess another possibility would be that btrfs check is violating some
sort of security policy such as selinux. I don't run anything of that
nature here so know little more about it beyond the possibility, but
.
> Scrub completes without errors on both FS.
That's good. =:^)
--
ement or simply to change the number of devices in
the array, to the point that completion could take weeks, so long that
the chance of a device death during the balance is non-trivial, which
means while the process technically works, in practice it's not actually
usable. Given that similar
rly
reliable at more than one point, making multi-device anything over it
rather unwise. JBOD /as/ /JBOD/, creating individual single-device
filesystems on each device (or device partition), may be somewhat more
workable, but multi-device, whether at the btrfs level or dm- or md-raid
level un
d could handle the legal situation, but didn't consider btrfs suitably
stable and mature, I'd go zfs. If I didn't need the features, probably
xfs, of course on top of mdraid or possibly hardware raid and mdraid
hybrid, to get the multi-hdd coverage.
--
al case where one
overrules the other, is hard. So it's best to use just the option
you want and not confuse people, at least others trying to make sense of
things even if you yourself know which one gets applied, with both.
--
f course it also forces
btrfs to a more deterministic distribution of those chunk copies, so you
can lose up to all the devices in one of those raid0s, as long as the
other one remains functional, but that's nothing to really count on, so
you still plan for single device failure redundancy only
re of the problem and presumably working on it,
but it's equally obviously not fixed yet.
Were I seeing the problem frequently (again, I've not seen it at all),
I'd likely drop back to 4.5 until there's a fix, tho if it takes long
enough 4.5 might be going out of support, 4.4-L
Austin S. Hemmelgarn posted on Mon, 25 Apr 2016 07:18:10 -0400 as
excerpted:
> On 2016-04-23 01:38, Duncan wrote:
>>
>> And again with snapshotting operations. Making a snapshot is normally
>> nearly instantaneous, but there's a scaling issue if you have too many
>
an the other two, but
I've never had to use it as I don't use extended attributes on anything
on the filesystems I needed to recover. So I have no experience with it,
but you shouldn't need it unless you /have/ extended attributes to
restore. It's definitely not
y're edited a bit
differently than the git pull notices, where Linus and git log readers
are the primary audience.
--
ve, and
you'll better understand how to effectively work with the filesystem when
you're done. It's well worth the time invested! =:^)
https://btrfs.wiki.kernel.org
--
t
believe the bug has even been fully traced down yet, 4.8 is definitely
the earliest I'd say consider it again, and a more conservative
recommendation might be to ask again around 4.10.)
--