Hello,
I am attempting to verify my understanding of how MIME type assignment works in
s3cmd. I'm hoping this post will make this detail easier for others to find
too.
I looked at the S3.py code and surmised that s3cmd will attempt to use the
python-magic module if it is present. If it is
at 5:08 PM, WagnerOne wag...@wagnerone.com wrote:
Hi,
The man page states the following:
--no-check-md5
Do not check MD5 sums when comparing files for [sync]. Only size will be
compared. May significantly speed up transfer but may also miss some changed
files.
When this says only
Hi,
During some of my initial S3 uploads using s3cmd, I didn't realize S3 stores a
different hash for multipart uploads than it does for normal uploads.
I have since raised the multipart threshold in my .s3cfg well beyond any of my
file sizes, so multipart uploading
, n, upload_count, total_size)
n_copies, saved_bytes, failed_copy_files = remote_copy(s3,
copy_pairs, destination_base)
# upload file that could not be copied
On Mon, Apr 21, 2014 at 3:45 PM, WagnerOne wag...@wagnerone.com wrote:
example s3cmd for the below
/usr
with --debug enabled, privately.
Thanks,
Matt
On Thu, Apr 10, 2014 at 3:30 PM, WagnerOne wag...@wagnerone.com wrote:
Hi,
When attempting to s3cmd sync two buckets today, I encountered an "An
unexpected error has occurred." message from s3cmd itself.
I am using the latest available
the HEAD call, and calculate the MD5 of the local file (and use
--cache-file to record that for posterity), and compare.
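The local half of that comparison, computing the file's MD5, can be sketched like this (illustrative code, not s3cmd's); chunked reading keeps memory flat for large files:

```python
import hashlib

def local_md5(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file through MD5 one chunk at a time
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```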
On Sun, Apr 13, 2014 at 5:08 PM, WagnerOne wag...@wagnerone.com wrote:
I've struggled with some huge object-count transfers and have gone back and
forth between the aws s3 CLI and s3cmd.
The aws s3 CLI seems to edge out s3cmd on speed and RAM consumption when doing
huge object-count transfers. However, the aws s3 CLI seems to choke on large
object counts, and s3cmd offers so much
When this says only size will be compared, I'm taking it to mean only the size
of
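My reading of the flag, expressed as code (a hedged sketch with illustrative names, not s3cmd internals): a size mismatch always marks a file changed; with --no-check-md5, equal sizes are simply assumed unchanged.

```python
def files_differ(local_size, local_md5, remote_size, remote_md5,
                 check_md5=True):
    if local_size != remote_size:
        return True   # different size always means "changed"
    if not check_md5:
        return False  # --no-check-md5: same size is treated as unchanged
    return local_md5 != remote_md5  # same size, so compare checksums
```

The "may also miss some changed files" caveat is then the second branch: an edit that leaves the byte count unchanged goes undetected.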
, 2014 at 4:48 PM, WagnerOne wag...@wagnerone.com wrote:
Running this same sync in debug, I see additional detail following the INFO:
Summary: ... line.
I'm not sure what I should anonymize in that output, so I'd prefer to share
it with a dev only. I can produce that on request.
Mike
On Apr 10, 2014, at 3:21 PM, WagnerOne wag...@wagnerone.com wrote:
Mostly just an observation to report... occasionally, when I stop s3cmd with
Ctrl-C while also running under --dry-run, I'll see:
Cleaning up. Please wait...
Completed parts...
It then hangs indefinitely until I kill the process. This seems to happen
more often with --dry-run than it
to include directories as well as files. That's
not a trivial undertaking, for what is really a corner case added by S3
later. I'd be happy to review a patch that cleanly deals with empty
directories as objects.
On Tue, Mar 11, 2014 at 8:32 AM, WagnerOne wag...@wagnerone.com wrote:
While this feature is fantastic, I can't find a lot of detail on it in general.
I wonder how to disable it?
During initial uploads at least, our DirectConnect link seems to be faster in
copying the files themselves than s3cmd is at telling S3 to remote copy
objects.
Would that simply be using
Hi,
I noticed empty directories on my source (non-S3) side aren't making it to my
target (S3) side.
I searched the archives and saw this is a known issue. Is there a timeline for
when this may be resolved?
Also, what is the current behavior if I create the required empty directories
in s3
at 6:07 PM, WagnerOne wag...@wagnerone.com wrote:
I've identified the subdirectory with the huge file count in the content I
need to transfer systematically.
Will --exclude allow me to sync everything but that directory, so I can then
work within that subdir separately, or will I hit
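For reference, s3cmd's --exclude takes a shell-style glob (there is also --rexclude for regular expressions). The filtering it implies can be approximated with the stdlib; the pattern 'hugedir/*' below is a hypothetical stand-in for the subdirectory in question.

```python
import fnmatch

def filtered(paths, exclude_pattern):
    # Keep only paths that do NOT match the glob-style exclude pattern
    return [p for p in paths if not fnmatch.fnmatch(p, exclude_pattern)]
```

So something like `s3cmd sync --exclude 'hugedir/*' src/ s3://bucket/` would sync everything except that subtree, leaving it to be transferred separately.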
in an SQLite on-disk or in-memory database for transient use in storing
and comparing the local and remote file lists, but that's a fairly heavy
undertaking and not one anyone has chosen to develop.
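The idea above can be sketched in a few lines; the schema and names here are illustrative only, not an existing s3cmd feature:

```python
import sqlite3

# In-memory table holding both listings; SQL then does the set arithmetic
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (side TEXT, key TEXT, size INTEGER)")
db.executemany("INSERT INTO files VALUES (?, ?, ?)", [
    ("local",  "a.txt", 10),
    ("local",  "b.txt", 20),
    ("remote", "a.txt", 10),
])
# Keys present locally but missing remotely, i.e. candidates for upload
to_upload = [row[0] for row in db.execute(
    "SELECT key FROM files WHERE side = 'local' "
    "EXCEPT SELECT key FROM files WHERE side = 'remote'")]
```

An on-disk database would additionally let the listings spill out of RAM, which is the point being made about huge object counts.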
Thanks,
Matt
On Thu, Mar 6, 2014 at 4:18 PM, WagnerOne wag...@wagnerone.com wrote:
Hi