I've done some testing in this area and turned up information that may be
relevant to this bug, so I wanted to record it here. All of this is on my
LAN, as my server resides on my LAN, over a gigabit connection. From my
desktop, the 8GB-RAM system I mentioned above, I pushed 8,000 files
totalling 30GB to my WebDAV server with zero issues and no visible memory
spike whatsoever. By contrast, sending a single 5.9GB file caused RAM
usage to spike, and once roughly 4GB of the 5.9GB had been sent, the
connection dropped with RAM nearly 100% maxed.

It played significantly nicer with smaller files, such as the 8,000-file
(30GB) batch above that went through with zero issues. I have to assume
the files are being cached in RAM prior to sending, and with many small
files the system can release each cached file as soon as it has been
sent, so no visible memory spike builds up. The single 5.9GB file, on the
other hand, is one continuous file, so there is nothing to "let go" of
until the whole transfer finishes; caching it in its entirety would
consume most of the 8GB of RAM, which is probably where the 8,000-file
scenario has its advantage.
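To make the distinction concrete, here is a minimal sketch (not the actual gvfs code; the chunk size and function names are my own illustrative assumptions) contrasting the buffer-everything behaviour this bug describes with a streaming upload whose peak memory use is bounded regardless of file size:

```python
CHUNK_SIZE = 64 * 1024  # 64 KiB per read; an illustrative value


def upload_buffered(path, send):
    # What the bug describes: the whole file is read into RAM first,
    # so peak memory grows with file size -- fatal for a 5.9GB file
    # on an 8GB machine.
    data = open(path, "rb").read()
    send(data)


def upload_streamed(path, send):
    # A streaming client reads and transmits one chunk at a time,
    # so peak memory stays near CHUNK_SIZE no matter how big the file is.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            send(chunk)
```

With many small files, even the buffered approach only ever holds one small file in RAM at a time, which would explain why the 8,000-file batch sailed through while the single large file did not.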

When I ran into this "bug" I decided to try out alternatives. I found
davfs2, which mounts WebDAV shares from the terminal. davfs2 had the same
caching behaviour, but I realized it relies on hard disk space rather
than RAM. A quick read of the mount.davfs man page shows:

##################################
Caching

mount.davfs tries to reduce HTTP-traffic by caching and reusing data.
Information about directories and files are held in memory, while
downloaded files are cached on disk.

mount.davfs will consider cached information about directories and file 
attributes valid for a configurable time and look up this information on the 
server only after this time has expired (or there is other evidence that this 
information is stale). So if somebody else creates or deletes files on the 
server it may take some time before the local file system reflects this.
##################################
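For reference, the disk cache davfs2 uses can be tuned in
/etc/davfs2/davfs2.conf. These option names are from my reading of the
davfs2.conf man page; the values below are only examples, not
recommendations:

```
# /etc/davfs2/davfs2.conf -- example values, adjust to your setup
cache_dir  /var/cache/davfs2   # where downloaded files are cached on disk
cache_size 2048                # maximum cache size in MiBytes
```

So davfs2 trades disk space for RAM, which matches what I observed.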

I can't help but wonder: if davfs2 caches by default, does that suggest
Nautilus/gvfs is actually handling WebDAV as intended, just caching to
RAM instead of to disk? Can any devs comment on this?

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to gnome-vfs2 in Ubuntu.
https://bugs.launchpad.net/bugs/42720

Title:
  Copying files to webdav broken (first completely copied to RAM and
  then transfered)

Status in Gnome VFS Filesystem Abstraction library:
  Won't Fix
Status in “gnome-vfs2” package in Ubuntu:
  Triaged

Bug description:
  Hi,
  when copying files to a webdav share with nautilus or gnomevfs-copy the
following happens currently:

  1. a 0 byte file is created
  2. the file is completely copied to RAM; nautilus shows this as progress
from 0% to 100%
  3. the file is finally transfered; nautilus stalls at 100% until it's
finished

  
  while copying the file to RAM you could obviously run out of memory if the 
file is too large

  
  (to get an easy to setup webdav share you could use gnome-user-share)

To manage notifications about this bug go to:
https://bugs.launchpad.net/gnome-vfs/+bug/42720/+subscriptions
