Feel free to close this. The bug has not affected me in over a decade.
Shouldn't this bug be closed by now if it was fixed?
As far as I understand, the problem has been fixed for overwriting by
rename and overwriting by truncate. Is it an issue at all for just
overwriting part of a file, without truncating it first?
I realize that there are basically no guarantees when fsync() is not
used, but will such writes to
Can someone point me toward documentation for data=alloc_on_commit?
I am getting 0 byte files after system freezes on Ubuntu 10.04.01
(amd64) with kernel version 2.6.32-25. Just want to understand how one
uses alloc_on_commit and how it works before I use it, and I can't find
any proper
Installed Karmic with ext4 on a new PC today. Installed FGLRX
afterwards. All of a sudden the PC froze completely. No mouse-movement,
no keyboard. Hard reset. After reboot lots of configuration files that
were recently changed had zero length. The system became unusable due to
this (lots of error
Hi,
I found a workaround to the problem of determining the cleartext filenames.
*Before* you delete the zero-byte files, back 'em up:
1) find .Private -size 0b | xargs tar -czvf zerofiles.tgz
2) Unmount your encrypted home
3a) Temporarily move the good files away:
mv .Private
Hi,
I am also bitten by the above ecryptfs messages slowly filling my /var/log and
have a followup question to the cleanup workaround presented by Dustin in
comment #57
of this bug:
Is there any way to determine (i.e., decrypt) which files have been messed up,
so I know if there is anything
I have added a separate bug for the problem of (de-)crypting filenames,
see https://bugs.launchpad.net/ecryptfs/+bug/493779
Yours,
Steffen
Ted Ts'o:
You can opine all you want, but the problem is that POSIX does not
specify anything ...
I'll opine that POSIX needs to be updated.
The use of the create-new-file-write-rename design pattern is pervasive,
and it is expected that after a crash either the new contents or the old
contents of
Would it be possible to create sync policies (per distribution, per
user, per application), and in this way ensure a flexibility/compromise
that every user could choose and change?
After another reboot some more problems with kwallet, here is dmesg.
** Attachment added: dmesg.txt
http://launchpadlibrarian.net/27107305/dmesg.txt
And after another clean shutdown and a reboot, I finally had to reformat
my home partition and restore it from a backup, as the fsck gave a huge
number of errors and unlinked inodes. I've gone back to ext3 and will wait
for 2.6.30 final before new tests. Here is the final dmesg from just after
the fsck. As
Jose, please open a separate bug, as this is an entirely different
problem. (I really hate Ubuntu bugs that have a generic description,
because it seems to generate Ubuntu Launchpad Syndrome --- a problem
which seems to cause users to search for bugs, see something that looks
vaguely similar,
Ok, I'll try installing 2.6.30 final for Ubuntu and report a new bug. As for
the fsck, the only time I didn't boot into single user mode and run fsck by
hand was that one. My fstab entry is simple - LABEL=Home /home ext4
relatime,defaults 0 0 - and most errors I had were data
I just subscribed to this bug as I started seeing this behaviour with
2.6.30-rc6 on my Aspire One. First it was the 0 length files after a crash
(the latest Intel drivers still hang sometimes at suspend/resume or at
logout/shutdown, and
only the magic REISUB gets you out of it), and once I saw my
Now I didn't even have a crash, but on reboot my kdewallet.kwl file was empty. I
removed it, and in syslog I got the following:
EXT4-FS warning (device mmcblk0p1): ext4_unlink: Deleting nonexistent file
(274), 0
No worries, André! Some more feedback on 2.6.30: I've been using
2.6.30-rc3 then 2.6.30-rc5 without problems in Jaunty for several weeks
now (I used to get kernel panics at least twice a week with 2.6.28) and
am now trying 2.6.30-rc6. Still so far so good.
Hi
Thanks for your answers, Rocko. Today I have installed the Karmic Koala
Alpha 1 with Kernel 2.6.30-5-generic, and it seems that all the former
problems with ext4 are gone. For testing purposes I have created 5 big
dvd iso files (together about 30 GB of data), moved them around in the
system,
Hello everyone
Just reporting some observations after making a brand new installation
of Ubuntu 9.04 with ext4 as default file system on my Sony Vaio VGN-
FS195VP. Since the installation some days ago I have again had four hard
locks, but luckily - despite my experiences some weeks ago - without any
@André: you might be experiencing one or two different bugs that are
possibly related to ext4 in the Jaunty kernel - see
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/348731 and
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/330824. The latter
happens when you try and delete lots of
@Theo: would it be hard to implement something like I suggested, ie
storing rename backup metadata for crash recovery? I think in the
discussion on your blog someone says that reiserfs already does this via
'save links' (comment 120).
Alternatively, if there were a barrier to all renames instead
I agree with Daniel - consistency should be a primary objective of any
journaling file system.
Would it be possible to do something like store both the old and new
inodes when a rename occurs, and to remove the old inode when the data
is written? This way it could operate like it is currently,
@Theo
Sorry for the false alarm. Filed it as soon as I found the 0 byte file while
still investigating the source. I've since created and submitted a patch (via
launchpad, https://bugs.launchpad.net/ubuntu/+source/gajim/+bug/349661) that I
believe should correct gajim's behavior in this area.
@Theo: I vote for what (I think) lots of people are saying: if the file
system delays writing of data to improve performance, it should delay
renames and truncates as well so you don't get *complete* data loss in
the event of a crash... Why have a journaled file system if it allows
you to lose
@Rocko,
If you really want this, you can disable delayed allocation via the
mount option, nodelalloc. You will take a performance hit and your
files will be more fragmented. But if you have applications which
don't call fsync(), and you have an unstable system, then you can use
the mount
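For reference, this is roughly what that looks like; the device and mount
point below are placeholders, but nodelalloc itself is the ext4 mount
option named above:

# /etc/fstab entry with delayed allocation disabled
/dev/sda5  /home  ext4  defaults,nodelalloc  0  2

# or remount an already-mounted filesystem without rebooting
mount -o remount,nodelalloc /home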
@Jamin,
We'd have to see how gajim is rewriting the application file. If it is
doing open/truncate/write/close, there will always be the chance that
the file would be lost if you crash right after the truncate. This is
true with both ext3 and ext4. With the workaround, the chances of
losing
@Theo,
Been digging through the source to track down how it does it. Managed
to find it. It does use a central consistent method, which does use a
tempfile. However, it does not (as of yet) force a sync. I'm working
on getting that added to the code now. Here's the python routine it
uses:
That looks like it removes the file before it does the rename, so it
misses the special overwrite-by-rename workaround. This is slightly
unsafe on any filesystem, since you might be left with no config file
with the correct name if the system crashes in a small window, fsync()
or no. Seemingly
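To spell out the window being described, here is a minimal sketch in C
(the file names are hypothetical):

#include <stdio.h>   /* rename() */
#include <unistd.h>  /* unlink() */

int main(void)
{
    /* UNSAFE variant: remove the old file first */
    unlink("config");                 /* old file is gone... */
    /* ...a crash in this window leaves no "config" at all, fsync or no */
    rename("config.tmp", "config");
    return 0;
}

A bare rename("config.tmp", "config"), without the unlink(), replaces the
old file atomically and avoids the window entirely.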
What that code does is stupid, yes. It shouldn't remove the original
unless the platform is win32. *Windows* (except with Transactional NTFS)
doesn't support an atomic rename, so it's no surprise that Python under
Windows doesn't either.
You're seeing a zero-length file because Ts'o's fix for ext4
@Daniel,
Note that if you don't call fsync(), and hence you don't check the
error returns from fsync(), your application won't be notified about any
possible I/O errors. So that means if the new file doesn't get written
out due to media errors, the rename may also end up wiping out the
The filesystem should be fixed to allocate blocks on *every* commit,
not just ones overwriting existing files.
alloc_on_commit mode has been added. Those who want to use it (and take
the large associated performance hit) can use it. It's a tradeoff that
is and should be in the hands of the
If you accept that it makes sense to allocate on rename commits for
overwrites of *existing* files, it follows that it makes sense to commit
on *all* renames. Otherwise, users can still see zero-length junk files
when writing a file out for the first time. If an application writes out
a file using
If you accept that it makes sense to allocate on rename commits for
overwrites of *existing* files, it follows that it makes sense to commit
on *all* renames.
Renaming a new file over an existing one carries the risk of destroying
*old* data. If I create a new file and don't rename it to
The risk isn't data loss; if you forgo fsync, you accept the risk of
some data loss. The issue that started this whole debate is consistency.
The risk here is of the system ending up in an invalid state with zero-
length files *THAT NEVER APPEARED ON THE RUNNING SYSTEM* suddenly
cropping up. A
On Fri, 2009-03-27 at 22:55, Daniel Colascione wrote:
The risk isn't data loss; if you forgo fsync, you accept the risk of
some data loss. The issue that started this whole debate is consistency.
The risk here is of the system ending up in an invalid state with zero-
length files *THAT
First of all, the program under discussion got it wrong. It shouldn't
have unlinked the destination filename. But the scenario it unwittingly
created is *identical* to the first-time creation of a filename via a
rename, and that's a very important case. EVERY program will encounter
it the first
Daniel Phillips, developer of the Tux3 filesystem, wants to make sure that
renames come after the file data is written, even when delayed writing of
metadata is introduced to it:
http://mailman.tux3.org/pipermail/tux3/2009-March/000829.html
I know this report claims that a fix is already in Jaunty for this
issue. However, I just found myself with a 0 byte configuration file
after a system lockup (flashing caps lock).
$ uname -ra
Linux odin 2.6.28-11-generic #37-Ubuntu SMP Mon Mar 23 16:40:00 UTC 2009 x86_64
GNU/Linux
@189: Jamin,
The fix won't protect against a freshly written new file (configuration
or otherwise); it only protects against a file which is replaced via
rename or truncate. But if it was a file that previously didn't exist,
then you can still potentially get a zero-length file --- just as you
@Theo
The file in question was a previously existing configuration file for my IM
client (gajim). All IM accounts and preferences were lost. Not a huge deal,
but definitely a preexisting file. The system kernel panicked (flashing caps
lock) while chatting. The kernel panic is a separate
Linus made some comments about the filesystem's behaviour:
http://lkml.org/lkml/2009/3/24/415
http://lkml.org/lkml/2009/3/24/460
The problem seems to be 2.6.28-11. My system is stable with 2.6.28-9. I have
reported bug #346691.
@nicobrainless: Sounds like a hardware failure to me. I'd suggest
investigating the smartctl utility (in the package 'smartmontools') to
check on the general health of the drive.
Note that this isn't a troubleshooting forum, nor is 'too many
comments' really a good excuse for not reading them.
@Carey
Sorry, I did read a fair amount of the comments but realized that my
problem was slightly different...
I already investigated the hardware side, and all kinds of tests (short,
long, and I don't remember the third kind) returned no errors! I also
reinstalled everything again on an ext3
I am experiencing the exact same problem... I was using an ext3 partition
converted to ext4, and ended up reinstalling everything as data loss killed
all of /etc...
My fresh Jaunty on a fresh ext4 has already given me 'read-only file system'
twice, and now I don't know what to do...
Olli Salonen wrote:
BTW I am running on a fully updated Jaunty with kernel 2.6.28-11... and it
started about 5 days ago
In this post, Ts'o writes: "Since there is no location on disk, there is
no place to write the data on a commit; but it also means that there is
no security problem." Well, this means that the specific security
problem identified, exposure of information to those who are not
authorized to see it,
Well, at least it seems that in tweaking the kernel to make ext4 behave
nicely, something has been broken on the JFS side.
My box uses JFS for the root FS, and all worked OK at least up to two days ago
(2.6.28-9, IIRC).
With both 2.6.28-10 and 2.6.28-11, all sorts of filesystem corruption started
to pop up,
Graziano: please open another bug for your issues. The only files
touched by the patches for this bug are in fs/ext4, and nothing else in
the source tree.
http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-jaunty.git;a=commit;h=f305d27b95849da130c3319e51054309c371e92a
Theodore, you're a bright guy, but this really means that you can't use
EXT4 for anything at all.
fsync() is slow on many (most?) filesystems, and it grinds the entire
system to a halt. What you're saying is that those applications have
to know the details of the filesystem implementation, and
Shit, not used to launchpad, I didn't see all the comments it hid by
default, I read the entire first page and didn't see my comment was
answered. Ignore what I wrote, it's been covered already.
Wow, this thing sure is being actively discussed. I might as well weigh
in:
- I side with those that say lots of small files are a good idea --
the benefits of having many small human-readable config files are self-
evident to anyone who's ever had to deal with the Windows registry.
Suggesting
There have been a lot of the same arguments (and more than a few
misconceptions) made over and over again in this thread, so I've created
a rather long blog post, "Don't fear the fsync!", which will hopefully
answer a number of the questions people have raised:
@Tom
I might not be good at making my point sometimes, but you clearly sum
things up very well. Way better than I do.
@Aryeh
In ext3, too many applications use fsync; I think that comes from the ext2
days, when not syncing could lead to corrupt filesystems, not just empty
files. Same with
To conclude everything:
Probably no distribution should ship with ext4 as the default, because
ext3's behaviour was broken and everyone relies on it. And ext4 should not
be shipped as the default for the same reason that people are warned about
using XFS: potential data loss on crashes.
So as this
KDE has a framework for reading and writing application settings. So the
solution should be simple: switch on the fsync call at the same Ubuntu release
where ext4 is the default file system. Does anybody know what the situation
of the GNOME environment is? Does a similar switch exist?
Of course
Guys, see comment 45 and comment 154. A workaround is going to be
committed to 2.6.30 and has already been committed to Jaunty. The bug
is fixed. There will be no data loss in these applications when using
ext4; it will automatically fsync() in these cases (truncate then
recreate, create new
This bug was fixed in the package linux - 2.6.28-10.32
---
linux (2.6.28-10.32) jaunty; urgency=low
[ Amit Kucheria ]
* Delete prepare-ppa-source script
[ Andy Isaacson ]
* SAUCE: FSAM7400: select CHECK_SIGNATURE
* SAUCE: LIRC_PVR150: depends on VIDEO_IVTV
- LP:
@Volodymyr
I finished recompiling the kernel with Theodore Ts'o's patches, and reran
Volodymyr's test cases with the patched kernel. The results are:
File System  Method  Performance (Typical, Minimum, Maximum)  #Lost  %Lost
ext4patch    1       0.44, 0.41, 0.50                         1      1.00%
ext4patch
Ted,
I am not sure if this was covered yet but if so I apologize in advance
for beating a dead horse.
The current opinion is that if you want to make metadata (e.g. the
out-of-order rename case) reliable you must use sync... Ok, that would
require a huge number of changes to many programs, but it
I updated my Lame test case to include more scenarios and added a
version implemented in C.
You can use it to estimate the reliability of ext3/4 filesystems.
I will post my results (much) later - I will be busy with sport dancing
on Sun-Mon.
Can anybody prepare a small QEMU or VMware image with a recent patched
For the application side of things I filed a bug report for KDE:
https://bugs.kde.org/show_bug.cgi?id=187172
@helios
I fail to see how this could ever be a KDE bug. fsync() only *helps*; it
will never make sure that things are solved permanently for good. The
reason a rename() is used with a temp file is that *nothing* can get
100% durability (even using fsync). App developers want
@CowBoyTim
Power failure during fsync() will result in a half-written file, but
that's why the correct sequence is
1) Create new temp file
2) Write to new temp file
3) fsync() new temp file
4) rename() over old file
If there's a power failure before or during step 3, the temp file will
be
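For anyone who wants to copy that, here is a minimal sketch of the sequence
in C; the file names are placeholders and error handling is abbreviated,
but the order of operations is the one listed above:

#include <fcntl.h>   /* open() */
#include <stdio.h>   /* rename(), perror() */
#include <unistd.h>  /* write(), fsync(), close() */

int main(void)
{
    const char tmp[] = "config.tmp", dst[] = "config"; /* placeholder names */
    const char buf[] = "new contents\n";

    /* 1) + 2) create the new temp file and write the new contents */
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, buf, sizeof buf - 1) != (ssize_t)(sizeof buf - 1)) {
        perror("write"); return 1;
    }

    /* 3) fsync() the temp file and check the error return, so media
       errors are detected before the old file is touched */
    if (fsync(fd) != 0) { perror("fsync"); return 1; }
    close(fd);

    /* 4) rename() over the old file: after a crash you see either the
       complete old contents or the complete new contents */
    if (rename(tmp, dst) != 0) { perror("rename"); return 1; }
    return 0;
}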
@Volodymyr
I did some experimenting with your test cases. My results so far are:
File System  Method  Performance (Typical, Minimum, Maximum)  #Lost  %Lost
ext3         1       0.43, 0.42, 0.50                         1      1.00%
ext3         2       0.32, 0.30, 0.33                         0      0.00%
ext3         3       0.19, 0.16
@CowBoyTim
I agree with you. I work with real-time industrial systems, where the
shop floor systems are considered unreliable. We have all the same
issues as a regular desktop user, except our users have bigger hammers.
The attraction of ext3 was the journalling with the ordered data mode.
If
@Bogdan Gribincea
While the discussion here has concentrated on rewriting config files,
you also report a loss of MySQL databases. What table configuration were
you using, and were the underlying files corrupted or reduced to 0
length?
InnoDB is intended to be ACID compliant, and takes care to
@Adrian Cox
I'm sorry but I was in the middle of work so I just quickly restored a
backup without really looking at what happened.
Some MYD files were truncated to 0 but I didn't take the time to
investigate the cause. It was a standard Jaunty MySQL 5.0.x install
using MyISAM tables.
I wonder if KDE really rewrites its config files on startup? Why write
something at all on startup? Maybe a KDE dev can comment on this...
@Volodymyr,
You can only fsync given a file descriptor, but I think writing an fsync
binary that opens the file read-only, fsyncs the descriptor, and closes
the file should work.
You can only fsync given a file descriptor, but I think writing an
fsync binary that opens the file read-only, fsyncs the descriptor, and
closes the file should work.
Use this little program to verify your assumptions (I have no time right
now):
#include <fcntl.h> /* open(), O_RDONLY */
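The program itself was cut off above; here is a minimal self-contained
sketch of such an fsync utility, using only standard POSIX calls (note
that a later comment in this thread reports that write access may be
required on some setups):

#include <fcntl.h>   /* open(), O_RDONLY */
#include <stdio.h>   /* fprintf(), perror() */
#include <unistd.h>  /* fsync(), close() */

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    /* open read-only, as proposed above */
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    /* flush the file's dirty pages to disk and check the result */
    if (fsync(fd) != 0) { perror("fsync"); return 1; }
    return close(fd) != 0;
}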
Just for reference:
http://flamingspork.com/talks/2007/06/eat_my_data.odp
I wrote something similar, but with one change -- it turns out you must
have write access to the file you want to fsync (or fdatasync).
It seems to work, but I have not had time to do a power loss simulation.
Would be useful performance-wise on any system but ext3 (where calling
this is identical
Per Ted's suggestion
(https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781/comments/56),
I've applied the following 3 commits in order to preserve some ext3
semantics.
http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-jaunty.git;a=commit;h=0903d3a2925f3cffb78ca611c4e3356ac7ffef8a
@Michel Salim
You can only fsync given a file descriptor, but I think writing an
fsync binary that opens the file read-only, fsyncs the descriptor, and
closes the file should work.
Wouldn't that only guarantee the updates through that descriptor (none)
are synced?
@Theodore Ts'o
3.a) open and read file ~/.kde/foo/bar/baz
3.b) fd = open(~/.kde/foo/bar/baz.new, O_WRONLY|O_TRUNC|O_CREAT)
3.c) write(fd, buf-of-new-contents-of-file, size-of-new-contents-of-file)
3.d) fsync(fd) --- and check the error return from the fsync
3.e) close(fd)
3.f) rename(~/.kde/foo/bar/baz.new, ~/.kde/foo/bar/baz)
Just in case this has not been done yet: I have experienced this »data
loss problem« with XFS, losing the larger part of my gnome settings,
including the evolution ones (uh-oh).
Alas, filesystems are not databases. Obviously, there's some work to be
done in application space.
@Olaf
From the manpage, fsync() transfers all modified in-core data of the
file referred to by the file descriptor fd. So it should really be all
pending writes, not just the writes that take place using the current
fd.
I cannot really reboot any of my machines right now, but it does make
sense.
@mkluwe:
Filesystems are databases by definition (they are structured sets of data).
But of course not all databases are or work alike, because they serve different
purposes...
Maybe it would be good to amend POSIX to provide an (optional?)
mechanism for guaranteed transactional integrity for
The fundamental problem is that there are two similar but different
operations an application developer can request:
1. open(A)-write(A,data)-close(A)-rename(A,B): replace the contents of B
with data, atomically. I don't care when or even if you make the change,
but whenever you get around to it,
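To make the contrast concrete, here is a sketch of operation 1 in C (names
are placeholders). Note the deliberate absence of fsync(): the caller asks
only for atomicity, not durability, and whether the filesystem preserves
the write-before-rename ordering is exactly what this thread is arguing
about:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Replace the contents of B with data, atomically, with no demand
   about when (or whether) it reaches disk -- no fsync() anywhere. */
int replace_contents(const char *b, const char *data, size_t len)
{
    char a[4096];
    snprintf(a, sizeof a, "%s.new", b);   /* A = B.new */
    int fd = open(a, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, data, len);
    close(fd);
    if (n != (ssize_t)len)
        return -1;
    return rename(a, b);                  /* rename(A, B) */
}

int main(void)
{
    const char msg[] = "hello\n";
    return replace_contents("demo.conf", msg, sizeof msg - 1) != 0;
}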
While it may not be guaranteed by POSIX, operation 1's atomicity is
nevertheless something any sane filesystem should provide.
That's very misguided. It's /not/ guaranteed by POSIX, and going above
and beyond POSIX in every respect is a surefire recipe for terrible
performance.
A new SATA standard with a tiny battery, to ensure buffers are written,
and fsync() implemented as a no-op, is very high on my wishlist...
The idea is free ;)
@Matthew: I reject your premise. ZFS preserves ordering guarantees
between individual writes. UFS maintains a dependency graph of all
pending filesystem operations. Both these filesystems perform rather
well, especially the former.
@Matthew:
No, rewriting virtually every application to do fsync when updating
files is a surefire recipe for terrible performance. Atomic updates
without fsync are how we ensure good performance.
So a couple of things, since this has gone totally out of control.
First of all, ZFS also has delayed allocation (also called allocate on
flush). That means it will suffer similar issues if you crash without
doing an f*sync(), and the page cache hasn't been flushed out yet.
Solaris partisans
@Theo: could you comment on the points made above (e.g., comment #98),
namely about why the truncate operation is immediate and the writing
operation is delayed? I think that's a very good point; if both
operations were delayed no (old) data would be lost, while achieving top
performance.
I've been following this bug for a while now and I must say it's quite
amusing. However, I do find it quite strange that people keep referring
to this as a bug when it has been clearly and more than once explained
that it really isn't. So I thought I'd chime in and repeat it in nice
and friendly letters
THIS IS NOT A BUG!
No, it's a feature. You automatically get a clean desktop configuration
once in a while, because your desktop effects were probably too fancy
and your desktop was too cluttered. Entirely as intended.
Seriously, though, this IS a bug, and it involves data loss, so it's an
Perhaps off topic and/or well known, but this paper claims to solve all these
problems? (At least compared to ext3)
http://www.usenix.org/events/osdi06/tech/nightingale.html
@Theo,
Appreciate everything you've done for ext filesystems and Linux in
general. A few comments:
Slightly more sophisticated application writers will do this:
2.a) open and read file ~/.kde/foo/bar/baz
2.b) fd = open(~/.kde/foo/bar/baz.new, O_WRONLY|O_TRUNC|O_CREAT)
2.c) write(fd,
I agree with Daniel's comment #113. The data integrity semantics of
ext3's data=ordered is indeed useful in practice, and ext4 should not
introduce different semantics (essentially no safer than data=writeback)
for the option with the same name. The current behavior should be
called
THIS IS NOT A BUG!
I would consider it a bug. As far as I understand it, the problem is that
flushing to the filesystem does not occur in the correct order. Metadata
should be flushed after data has been flushed, to ensure transactional
integrity. But exactly this is what ext4 currently does not do. Hence
Ubuntu 9.04 alpha amd64, ext4
I play World of Goo 1.4, and every time I press Quit it stops working,
and the only way to return to Ubuntu is a 'hard' reboot (turning the
power switch off).
So, one time after such a reboot I lost my save file from the game, Pidgin
displayed 'can't read buddy.lst' and
Finally, I'll note that Fedora folks haven't really been complaining about
this, so far as I know.
Which should make people ask the question, why is Ubuntu different?
This, to me, gets to the root of the loggerheads displayed in this bug.
The reason Ubuntu is different is because it is *more*
So servers, presumably running with a CLI and not GNOME or KDE, should be
relatively unaffected by this, correct?
I've been using ext4 on my desktop (/, not /home) for quite some time
and have seen no problems. I like the performance boost it gives me and
would like to give it to my servers as
@Brett,
Servers generally run on UPS's, and most server applications generally
are set up to use fsync() where it is needed. So you should be able to
use ext4 in good health. :-) I'm using a GNOME desktop myself, but I
use a very minimalistic set of GNOME applications, and so I don't see a
@Kai,
But you can imagine what happens to fs performance if
every application does fsyncs after every write or before
every close. Performance would suffer badly.
Note that the 'fsync causes performance problems' meme got started
precisely because of ext3's data=ordered mode. This is what
The reason why the write operation is delayed is because of
performance.
Yup, I understand that and I'm all for it. Delay writing for hours if
that improves performance further, that's great. But the question
remains: why is the _truncate_ operation not delayed as well? The gap
between the
As for a configuration registry:
Filesystems are about files in exactly the same way that SQLite and other
databases are about records. Actually, files could be treated like some
sort of records in some sort of very specific database (if we ignore some
specifics).
And we, the users, expect BOTH databases
@Theodore,
Note that the 'fsync causes performance problems' meme got started
precisely because of ext3's data=ordered mode. This is what causes
all dirty blocks to be pushed out to disks on an fsync(). Using ext4 with
delayed allocation, or ext3 with data=writeback, fsync() is actually quite