15:25 <rbasak> tsimonq2: the gvfs fix looks good to me superficially,
though I think the surgery involved in the fix deserves a closer review
which will take me more time. I'll look again later. Thank you again for
driving both of these.

You received this bug notification because you are a member of Desktop
Packages, which is subscribed to nautilus in Ubuntu.

  [SRU] cut-n-paste move files got stuck forever

Status in caja package in Ubuntu:
Status in gvfs package in Ubuntu:
  Fix Released
Status in nautilus package in Ubuntu:
Status in pcmanfm package in Ubuntu:
Status in caja source package in Xenial:
Status in gvfs source package in Xenial:
  Fix Committed
Status in nautilus source package in Xenial:
Status in pcmanfm source package in Xenial:

Bug description:
  Without these fixes, copying more than a few files in several file
  managers, including but not limited to Nautilus and Caja, either
  completely freezes the file manager (if this happens during a Cut and
  Paste, it destroys user data by leaving files half moved) or slows it
  to a crawl. These bugs do not occur when using the mv command in the
  terminal, which moves the files in a couple of seconds at most. This
  is a nightmare for people who need to Cut and Paste a lot of files
  quickly.

  [Test Case]
  Either find some files that you would be OK with losing (or have a
  backup of), or create them yourself. You can run the following
  command to generate 1024 zero-filled 1 MB files (do this in an empty
  directory where you have at least 1 GB of free space), which is a
  valid way to reproduce this bug:

  for i in $(seq 1 1024); do
      dd if=/dev/zero of=$i bs=4k iflag=fullblock,count_bytes count=1M
  done

  (Credit to https://stackoverflow.com/a/11779492/4123313 and
  http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_09_02.html. Neither
  truncate nor fallocate is used because the bug cannot be reproduced
  with those tools; my theory is that dd actually writes the zeroes,
  while those tools report that the zeroes were written to the
  filesystem but don't seem to actually write them, although I could be
  wrong.)

  If you open Nautilus or Caja and Cut and Paste these 1024 files
  somewhere else, the transfer slows to a crawl, and in one instance
  where I tried this with 1024 zero-filled 512 MB files, Nautilus
  actually crashed. This is very different from using mv in the
  terminal beforehand, which transfers these files flawlessly.

  Once this fix is applied and I recreate these files, the transfer in
  Nautilus and Caja is almost as fast as using mv (I'd say within a
  second), which is what should happen.

  [Breakdown of the patches and the regression potential for each]
  First, I'd like to thank levien for providing the patch links to fix
  this. Without that, it would have been a bit more work to hunt down
  which patches are needed.

  Patch Description: The metabuilder stores files and data in sorted lists. An 
insertion into a sorted list is done in linear time, which highly affects 
efficiency. In order to fix this, use GSequence instead which does insertion 
and deletion in logarithmic time.

  This patch is included because it helps with a large number of files.
  Regardless of whether the files are 1 MB or 10 GB, the sorting of the
  files to be moved/copied is inefficient. This addresses part of the
  problem by making sure a large number of files can be handled
  efficiently. The regression potential is low; the only case I can see
  where a package would regress is a file manager that depends (I'm not
  100% sure of this theory, but it's a possibility) on monitoring the
  status of the list at a given time and is incompatible with
  GSequence. In that case, the package would have something peculiar
  hardcoded, and would most likely have an upstream fix that could be
  backported, because this change is in the gvfs release already
  shipped in 17.04 and later (so they should have fixed it already). I
  can't think of a package that does this (from what I can tell, it's
  not sane code if it isn't compatible with this change), but if
  something does regress, it should be trivial to fix.
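
  The cost difference behind this patch can be sketched with a small
  Python analogue. This is only an illustration of the complexity
  argument, not the gvfs code: GSequence is a GLib balanced-tree
  container, so it also makes the insertion itself logarithmic, while
  the sketch below only counts the comparisons needed to find the
  insertion point.

```python
def linear_insert(items, value):
    """Find the slot by scanning from the front: O(n) comparisons."""
    comparisons = 0
    for i, existing in enumerate(items):
        comparisons += 1
        if value < existing:
            items.insert(i, value)
            return comparisons
    items.append(value)
    return comparisons

def binary_insert(items, value):
    """Find the slot by binary search: O(log n) comparisons."""
    lo, hi, comparisons = 0, len(items), 0
    while lo < hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] < value:
            lo = mid + 1
        else:
            hi = mid
    items.insert(lo, value)
    return comparisons

# Insert 1024 already-sorted values: the worst case for the scan.
a, b = [], []
linear_total = sum(linear_insert(a, v) for v in range(1024))
binary_total = sum(binary_insert(b, v) for v in range(1024))
assert a == b == list(range(1024))
print(linear_total, binary_total)
```

  For these 1024 inserts the linear scan does dozens of times as many
  comparisons as the binary search, and the gap widens as the file
  count grows.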

  Patch Description: Some of the lists used by the metabuilder have items 
appended to them. Appending to a list is done in linear time and this highly 
affects efficiency. In order to fix this, use GQueue which supports appending 
in constant time.

  This patch is extremely similar in function to the last one, but this
  one addresses items appended to a list rather than files and data in
  sorted lists. As such, the regression potential is similarly low: it
  would require a package to hardcode something incompatible with
  GQueue, and such a package would most likely have an upstream fix
  that could be backported, because this change is in the gvfs release
  already shipped in 17.04 and later (so they should have fixed it
  already). I can't think of a package that does this (from what I can
  tell, it's not sane code if it isn't compatible with this change),
  but if something does regress, it should be trivial to fix.
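
  The append cost can be sketched the same way in Python (again an
  analogue, not the gvfs code: a GList-style append walks to the tail
  on every call, while GQueue keeps a tail pointer):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def append_without_tail(head, value):
    """GList-style append: walk to the end every time. Returns (head, steps)."""
    node, steps = Node(value), 0
    if head is None:
        return node, steps
    cur = head
    while cur.next is not None:
        cur, steps = cur.next, steps + 1
    cur.next = node
    return head, steps

class TailQueue:
    """GQueue-style: head and tail pointers give O(1) append."""
    def __init__(self):
        self.head = self.tail = None
    def append(self, value):
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

def to_list(node):
    out = []
    while node is not None:
        out.append(node.value)
        node = node.next
    return out

head, walk_steps = None, 0
q = TailQueue()
for v in range(1000):
    head, steps = append_without_tail(head, v)
    walk_steps += steps
    q.append(v)

assert to_list(head) == to_list(q.head) == list(range(1000))
print(walk_steps)  # ~n*n/2 pointer hops for the tail-less version, 0 with a tail
```

  Building a 1000-element list costs roughly n²/2 pointer hops without
  a tail pointer and none with one, which is exactly the quadratic
  blow-up this patch removes.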

  Patch Description: The file->children condition is always true at this point, 
child->children should be there instead in order to speed up processing. This 
fix doesn't affect the functionality, it just slightly improves processing.

  This is a trivial iteration on top of the other two patches that
  improves performance without affecting functionality. It simply
  eliminates an unnecessary step when walking the files by testing
  child->children instead, which improves processing time. The
  regression potential is low because all of the variables already
  exist, and the patch would likely have been folded into one of the
  other two if the performance enhancement had been noticed earlier.
  Also, since it doesn't affect functionality, a regression here seems
  unlikely; if one exists, it would be in one of the other patches.
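
  As a hypothetical sketch of the kind of change described (the names
  below are illustrative, not the actual gvfs code), moving the guard
  onto the child handles leaves inline and skips one recursive call per
  leaf:

```python
calls = {"unguarded": 0, "guarded": 0}

def walk_unguarded(node):
    # Old shape: every child triggers a recursive call, even leaves,
    # because the guard it relied on was always true at this point.
    calls["unguarded"] += 1
    names = [node["name"]]
    for child in node.get("children", []):
        names += walk_unguarded(child)
    return names

def walk_guarded(node):
    # Patched shape: check the *child's* children before recursing,
    # handling leaves inline and skipping one call per leaf.
    calls["guarded"] += 1
    names = [node["name"]]
    for child in node.get("children", []):
        if child.get("children"):
            names += walk_guarded(child)
        else:
            names.append(child["name"])
    return names

tree = {"name": "root",
        "children": [{"name": "dir",
                      "children": [{"name": "a"}, {"name": "b"}]},
                     {"name": "c"}]}
assert walk_unguarded(tree) == walk_guarded(tree)
print(calls)  # the guarded walk makes fewer recursive calls
```

  Both walks produce the same result; the guarded one just does less
  work, which matches the "doesn't affect functionality, only
  processing time" description above.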

  Patch Description: g_dbus_connection_flush_sync causes troubles in certain 
cases when using multiple threads. The problem occurs when there are ongoing 
gvfs_metadata_call_set_sync calls. It blocks threads e.g. when moving files 
over Nautilus. Nautilus freezes sometimes due to it. Use sync methods instead 
of flushing dbus. I don't see any significant slowdown when using sync methods 
instead of the flushing according to my testing. It fixes hangs in Nautilus and 
moving in Nautilus is almost instant again.

  This patch fixes the Nautilus crashes.

  It helps fix problems that occur in some cases when multiple threads
  are in use. One comment that is removed alongside the code it
  annotates (this stood out to me) is the following:

  /* flush the call with the expense of sending all queued messages on
  the connection */

  It shows that when this was initially written, care wasn't taken to
  ensure it would behave well with a large number of queued calls,
  i.e. the case in this bug. If the user does not have enough memory,
  Nautilus will stop and fail to continue the transfer (which is my
  theory about what happened when I reproduced this on my hardware; it
  showed an error message along the lines of "Out of Memory").

  This patch takes care to ensure that is never the case: it uses the
  synchronous methods (e.g. gvfs_metadata_call_remove_sync) instead of
  flushing dbus, which is much more efficient and ensures that a large
  number of files will not cause a crash.
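
  The memory behaviour can be illustrated with a hedged Python
  analogue; no dbus is involved, this only models "queue everything,
  then flush" versus "one synchronous call at a time":

```python
def flush_style(n, handle):
    """Queue every call and flush at the end: peak queue size grows with n."""
    queue, peak = [], 0
    for i in range(n):
        queue.append(i)
        peak = max(peak, len(queue))
    results = [handle(msg) for msg in queue]  # the "flush"
    return results, peak

def sync_style(n, handle):
    """Issue each call synchronously: at most one message in flight."""
    results = []
    for i in range(n):
        results.append(handle(i))
    return results, 1

handle = lambda msg: msg * 2  # stand-in for a metadata call
flushed, flush_peak = flush_style(1024, handle)
synced, sync_peak = sync_style(1024, handle)
assert flushed == synced
print(flush_peak, sync_peak)  # 1024 vs 1: growing vs constant backlog
```

  Both styles produce the same results, but the flush-style backlog
  grows with the number of files while the sync style keeps it
  constant, which is why the sync approach avoids the out-of-memory
  failure described above.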

  In any case, these patches are tracked in upstream bug 757747 (
  https://bugzilla.gnome.org/show_bug.cgi?id=757747 ), and if
  regressions in this code are found, they will likely be noted there.

  [Original Description]
  With Nautilus, I navigated to a folder that contained about 2,000 gif files.
  I selected all of them
  I cut with Ctrl+X
  I navigated to another folder
  I pasted with Ctrl+V

  => A popup window appeared saying "Preparing to move N files", and it got 
stuck there forever.
  Nautilus stopped responding. I waited half an hour, then I had to kill it.

  Actually, some of the files were moved, which is the worst thing
  that could happen.

  I tried again with the remaining files and it happened again!! It's

  I suspect it has to do with displaying the previews of a big number of
  files. Nautilus becomes completely unmanageable whenever you open
  folders with lots of files. Maximum priority should be given to the UI
  and actual operations; displaying of the previews should be done in
  the background and should never, ever slow down user operations the
  slightest bit. Seems pretty obvious.

  Also note that 2000 is not even that many files; actually it's very few.

  ProblemType: Bug
  DistroRelease: Ubuntu 12.10
  Package: nautilus 1:3.5.90.really.3.4.2-0ubuntu4.2
  ProcVersionSignature: Ubuntu 3.5.0-25.38-generic
  Uname: Linux 3.5.0-25-generic i686
  NonfreeKernelModules: nvidia
  ApportVersion: 2.6.1-0ubuntu10
  Architecture: i386
  Date: Tue Feb 26 18:23:52 2013
   b'org.gnome.nautilus.window-state' b'geometry' b"'1312x713+64+71'"
   b'org.gnome.nautilus.window-state' b'sidebar-width' b'180'
   b'org.gnome.nautilus.window-state' b'start-with-status-bar' b'true'
  InstallationDate: Installed on 2010-06-23 (979 days ago)
  InstallationMedia: Ubuntu 10.04 LTS "Lucid Lynx" - Release i386 (20100429)
  MarkForUpload: True
   PATH=(custom, no user)
  SourcePackage: nautilus
  UpgradeStatus: Upgraded to quantal on 2013-01-13 (44 days ago)

To manage notifications about this bug go to:

Mailing list: https://launchpad.net/~desktop-packages
Post to     : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp
