Re: configured endpoint not honored

2022-09-30 Thread John Calcote
Hi all - I found a solution to this problem and thought others might find it useful. I wrote up the issue and the fix in this Stack Overflow answer: https://stackoverflow.com/questions/73169813/jclouds-getbucketlocation-timeout-on-getblob/73902608#73902608 On Thu, Sep 29, 2022 at 9:43 AM John Calcote wrote
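
A likely shape of the fix, going by the question below: pin the region up front via jclouds' standard `jclouds.regions` override (`LocationConstants.PROPERTY_REGIONS`) so the client never needs to resolve it through s3.amazonaws.com. A minimal sketch with placeholder region and credentials:

```
import java.util.Properties;

import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.location.reference.LocationConstants;

public class RegionPinnedClient {
    public static BlobStoreContext build(String identity, String credential) {
        Properties overrides = new Properties();
        // Restrict jclouds to a single region so it never has to call
        // GetBucketLocation against s3.amazonaws.com to discover one.
        // "us-west-2" is a placeholder; use the region your firewall allows.
        overrides.setProperty(LocationConstants.PROPERTY_REGIONS, "us-west-2");

        return ContextBuilder.newBuilder("aws-s3")
                .credentials(identity, credential)
                .overrides(overrides)
                .buildView(BlobStoreContext.class);
    }
}
```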

configured endpoint not honored

2022-09-29 Thread John Calcote
to go to s3.amazonaws.com. We have the firewall set up so that traffic can only go to the region we want to use, so this clearly fails. What do we have to do to tell jclouds "here's my region, trust me"? Thanks in advance, John Calcote

Re: SimpleDateFormat.rfc822DateParse exiting while holding lock

2022-05-05 Thread John Calcote
ds should be as easy as opening a pull request in GitHub, and you don't need an ASF account for that. What resources do you need access to that are not available? Is it Slack? On Thu, May 5, 2022 at 5:30 PM John Calcote wrote: > I would love to contribute, howe

Re: SimpleDateFormat.rfc822DateParse exiting while holding lock

2022-05-05 Thread John Calcote
I would love to contribute, however, ASF makes it near impossible to access the appropriate resources. The account creation page for ASF indicates you need an apache.org email address, but if you don't have one you should "contact the workspace administrator at ASF". But then it provides no clue

SimpleDateFormat.rfc822DateParse exiting while holding lock

2022-05-04 Thread John Calcote
I have a deadlock scenario in jclouds where a jstack shows 14 threads waiting for a lock in SimpleDateFormat.rfc822DateParse that is owned by a 15th thread that is no longer in the method. In fact, the lock holder is waiting for work to do in the thread pool. Should I report this as an issue in
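
Background for the report above: java.util.SimpleDateFormat is not thread-safe, so any shared instance must be guarded by a lock, and that lock is exactly what the 14 threads are queued on. A sketch (plain Java, not jclouds code) of the contention-free alternative using java.time:

```
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class Rfc822Dates {
    // DateTimeFormatter is immutable and thread-safe, so concurrent
    // threads can parse without contending for a shared lock the way
    // they must around a synchronized SimpleDateFormat instance.
    // RFC 1123 is the updated form of the RFC 822 date format.
    private static final DateTimeFormatter RFC_822 =
            DateTimeFormatter.RFC_1123_DATE_TIME;

    public static Date parse(String text) {
        return Date.from(ZonedDateTime.parse(text, RFC_822).toInstant());
    }
}
```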

Re: S3 multipart upload throws NPE occasionally

2021-09-14 Thread John Calcote
r and it works. I'll compile and test this myself. I'll let you know if it seems to fix the issue. John On Tue, Sep 14, 2021 at 4:52 PM John Calcote wrote: > Hi Andrew, Thanks for the quick response, but I don't think there's anything wrong with that regex. The expression -

S3 multipart upload throws NPE occasionally

2021-09-09 Thread John Calcote
Hi Andrew, I'm running jclouds 2.3.0 these days. Here's the call stack: java.lang.NullPointerException: Null id at org.jclouds.blobstore.domain.AutoValue_MultipartUpload.<init>(AutoValue_MultipartUpload.java:32) ~[jclouds-blobstore-2.3.0.jar:2.3.0] at

Guice problem in latest jclouds

2021-05-12 Thread John Calcote
Andrew, We've upgraded to jclouds 2.3.0. We see this occasionally and when we do, additional calls into jclouds (upload) just hang: ``` java.lang.IllegalStateException: Constructor not ready at

Re: filesystem (LocalBlobStore) failing intermittently on initial "upload" in v2.2.1

2021-01-11 Thread John Calcote
I looked over the code changes myself and I agree with you. It's unlikely anything in jclouds is causing this error. I'm currently investigating issues with our build system where the tmp directory may be close to full or some other space/perms-related issue. Thanks, John

Re: filesystem (LocalBlobStore) failing intermittently on initial "upload" in v2.2.1

2021-01-08 Thread John Calcote
Hi Andrew, Your theory about another client deleting the file cannot be correct. This is an integration test where a single test is running - the container is created in /tmp//test-container. The test creates the test container in the setup method, uploads several files, manipulates some data,

Re: filesystem (LocalBlobStore) failing intermittently on initial "upload" in v2.2.1

2021-01-07 Thread John Calcote
One more comment - it's an intermittent failure. Sometimes it works, sometimes not.

filesystem (LocalBlobStore) failing intermittently on initial "upload" in v2.2.1

2021-01-07 Thread John Calcote
We use the filesystem (LocalBlobStore) implementation for integration testing in our code. I'm seeing the following behavior in version 2.2.1 which I recently upgraded to: ``` 2021-01-07 20:03:43.503,7964896021968234 {} DEBUG c.p.d.m.j.AbstractUploadCallable [cpu-normalpri-1] [task 1] State
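
For readers unfamiliar with this setup, a minimal sketch of a filesystem-backed BlobStore as used for integration tests; the base directory and container name are placeholders:

```
import java.util.Properties;

import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.filesystem.reference.FilesystemConstants;

public class LocalBlobStoreExample {
    public static void main(String[] args) {
        Properties overrides = new Properties();
        // Blobs are stored as plain files under this directory.
        overrides.setProperty(FilesystemConstants.PROPERTY_BASEDIR, "/tmp/blobstore");

        try (BlobStoreContext context = ContextBuilder.newBuilder("filesystem")
                .overrides(overrides)
                .buildView(BlobStoreContext.class)) {
            BlobStore store = context.getBlobStore();
            store.createContainerInLocation(null, "test-container");
        }
    }
}
```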

Re: AWS v4 Signature algorithm required for generic-S3

2020-12-11 Thread John Calcote
Would you be able to submit a PR for this? The unsigned payload is also a desired feature and easy to add after fixing the above. We plan to release 2.3.0 later this month so it could include these features if you can help out! On Thu, Dec 10, 2020 at

Re: AWS v4 Signature algorithm required for generic-S3

2020-12-10 Thread John Calcote
I see someone asked this very question a while back (2018): https://lists.apache.org/thread.html/3553946dcbe7c72fc8f82c6353c9a5c56d7b587da18a4ebe12df1e26%40%3Cuser.jclouds.apache.org%3E Did anyone ever solve the double read issue regarding disabling data-hashing? On 2020/12/10 18:03:13, John

AWS v4 Signature algorithm required for generic-S3

2020-12-10 Thread John Calcote
Hi all - We have a client who would like to use jclouds with ONTAP 9.8, which requires V4 signatures AND path-based URL naming. I believe the true AWS S3 provider requires virtual bucket URL naming. Anyone have any ideas on how to either: 1) coerce jclouds into using the AWS S3 provider with
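
One starting point for (1), sketched under the assumption that the `aws-s3` provider's V4 signer works against a custom endpoint once virtual-host bucket naming is switched off via `jclouds.s3.virtual-host-buckets` (`S3Constants.PROPERTY_S3_VIRTUAL_HOST_BUCKETS`); the endpoint URL is hypothetical:

```
import java.util.Properties;

import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.s3.reference.S3Constants;

public class PathStyleV4Client {
    public static BlobStoreContext build(String identity, String credential) {
        Properties overrides = new Properties();
        // Force path-style request URLs (https://host/bucket/key) instead
        // of virtual-host style (https://bucket.host/key).
        overrides.setProperty(S3Constants.PROPERTY_S3_VIRTUAL_HOST_BUCKETS, "false");

        // "aws-s3" supplies the V4 signer; the endpoint is overridden to
        // point at the on-prem ONTAP S3 service (hypothetical URL).
        return ContextBuilder.newBuilder("aws-s3")
                .endpoint("https://ontap.example.com")
                .credentials(identity, credential)
                .overrides(overrides)
                .buildView(BlobStoreContext.class);
    }
}
```

Whether this combination actually satisfies ONTAP 9.8 is exactly what the thread is asking; the sketch is only the obvious first thing to try.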

how to unwrap a back-end provider

2020-05-21 Thread John Calcote
Hi Andrew, I've searched high and low for how to do that "unwrap" trick you mentioned here: https://jclouds.apache.org/start/concepts/#providers I've even played with evaluating different methods on the blobstore context and builder. I cannot find an obvious (or unobvious) way to access the
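
For later readers: in jclouds 2.x the BlobStoreContext view exposes `unwrapApi`, which appears to be the trick the concepts page alludes to. A sketch with placeholder endpoint and credentials:

```
import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.s3.S3Client;

public class UnwrapExample {
    public static void main(String[] args) {
        BlobStoreContext context = ContextBuilder.newBuilder("s3")
                .endpoint("https://s3.example.com")    // hypothetical endpoint
                .credentials("identity", "credential") // placeholders
                .buildView(BlobStoreContext.class);

        // Drop below the portable abstraction to the provider-specific API.
        S3Client s3 = context.unwrapApi(S3Client.class);
        boolean exists = s3.bucketExists("my-bucket");
        System.out.println("bucket exists: " + exists);

        context.close();
    }
}
```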

multi-part upload writing temp files

2020-05-21 Thread John Calcote
Hi Andrew, I've got code that's uploading using the jclouds multi-part abstraction. Here are the log trace messages for a portion of an upload of a 1G file: 2020-05-21 16:52:07.203 +,36570333402959 {} DEBUG o.j.r.i.InvokeHttpMethod [cloud-normalpri-4] >> invoking PutObject 2020-05-21
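
For reference, the multi-part abstraction being exercised here is driven from the portable API roughly like this (a sketch; the container name, blob name, and file are placeholders):

```
import java.io.File;

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.options.PutOptions;
import org.jclouds.io.payloads.FilePayload;

public class MultipartUploadExample {
    // The BlobStore is assumed to come from a BlobStoreContext as in
    // the other examples.
    public static void upload(BlobStore store, File file) {
        Blob blob = store.blobBuilder("big-object")
                .payload(new FilePayload(file))
                .contentLength(file.length())
                .build();
        // multipart() asks jclouds to split the payload into parts and
        // upload them via the provider's multipart API.
        store.putBlob("my-container", blob, PutOptions.Builder.multipart());
    }
}
```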

Re: Range download

2020-05-18 Thread John Calcote
On 2020/05/18 20:00:16, John Calcote wrote: > while (length < size) { Oops! Should have been: while (length >= size) {

Range download

2020-05-18 Thread John Calcote
I'm trying to create a sort of multi-part download mechanism using jclouds download with a GetOption of "range". I'd like to just download 5MB ranges until I run out of file to download. The doc on the use of this option is a bit sparse. Can someone help me understand the semantics of download
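
The semantics, as far as the HTTP Range header defines them, are an inclusive byte range. A sketch of the 5MB chunk loop, assuming the total size has already been fetched from the blob's metadata:

```
import java.io.InputStream;
import java.io.OutputStream;

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.options.GetOptions;

import com.google.common.io.ByteStreams;

public class RangedDownload {
    private static final long CHUNK = 5L * 1024 * 1024; // 5 MB

    // totalSize would normally come from blobMetadata(...).getContentMetadata()
    public static void download(BlobStore store, String container, String name,
                                long totalSize, OutputStream out) throws Exception {
        for (long offset = 0; offset < totalSize; offset += CHUNK) {
            // range(start, end) is an inclusive byte range, so the last
            // byte of this chunk is offset + CHUNK - 1 (capped at EOF).
            long end = Math.min(offset + CHUNK, totalSize) - 1;
            Blob blob = store.getBlob(container, name,
                    GetOptions.Builder.range(offset, end));
            try (InputStream in = blob.getPayload().openStream()) {
                ByteStreams.copy(in, out);
            }
        }
    }
}
```

The inclusive end is also the likely source of the loop-condition confusion in the follow-up above.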

Re: tokens and oauth

2019-02-05 Thread john . calcote
Thanks everyone for the responses - we were looking at doing some sort of token-based auth for a cloud version of our service, but it seems this has become a dead issue now. We cared most about AWS, but might also have been interested at some point in Google. Regardless, not a hot issue at this

joyent triton

2019-02-05 Thread john . calcote
Hi all -- Anyone know if Joyent Triton (private cloud service) is supported by jclouds? My preliminary research says no, but if anyone else knows better, I'd appreciate it. John

tokens and oauth

2019-01-10 Thread john . calcote
Hi Andrew, I've been asked to look into converting our jclouds-based cloud client to use tokens for authentication. The specific ask was to use OAuth tokens, but I think tokens in general would be a good start. At the moment, we just use the root account access/secret keys to authenticate, but
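
If the tokens in question are AWS STS session tokens, jclouds models them as SessionCredentials, which can be fed in through a credentials supplier. A sketch under that assumption:

```
import org.jclouds.ContextBuilder;
import org.jclouds.aws.domain.SessionCredentials;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.domain.Credentials;

import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

public class TokenAuthExample {
    public static BlobStoreContext build(String accessKey, String secretKey,
                                         String sessionToken) {
        // Temporary STS credentials: access key, secret, and session token.
        Supplier<Credentials> creds = Suppliers.ofInstance(
                SessionCredentials.builder()
                        .accessKeyId(accessKey)
                        .secretAccessKey(secretKey)
                        .sessionToken(sessionToken)
                        .build());

        return ContextBuilder.newBuilder("aws-s3")
                .credentialsSupplier(creds)
                .buildView(BlobStoreContext.class);
    }
}
```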

Re: hgst hcp server returning 500 internal server error

2018-09-20 Thread john . calcote
jclouds attempts to parse as XML with a SAX parser. This generates a stack trace in the log. It's incidental, but you might want to try to do a little pre-checking of the content before attempting to parse it with SAX. Just a thought.) Thanks in advance, John Calcote

hgst hcp server returning 500 internal server error

2018-09-15 Thread john . calcote
-checking of the content before attempting to parse it with SAX. Just a thought.) Thanks in advance, John Calcote Hammerspace, Inc.

Re: putBlob failing, but acting like success

2018-08-17 Thread john . calcote
Nevermind - egg on face. There was an exception caught in my code, but it was buried in the middle of the HttpResponseException content so I missed it. The problem turned out to be one where s3 ninja was literally returning success without having written the file due to a previous error where

putBlob failing, but acting like success

2018-08-17 Thread john . calcote
I'm calling putBlob to put a blob to an S3 ninja server. The server is returning a ridiculous 500 server error response page - it contains a ton of javascript and the jclouds SAX parser is choking on the content of a

Re: putBlob fails to send proper content when using payload(InputStream)

2018-04-13 Thread john . calcote
> I notice you filed a related bug against s3ninja recently -- you may want to try S3Proxy[1] instead which has a more complete implementation and actually uses jclouds as its backend. Hi Andrew, Just tried out s3proxy - it's exactly what we're looking for. I tried setting it up to
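
For anyone else trying the same setup, a sketch of an s3proxy properties file fronting the jclouds filesystem provider (values are placeholders; see the s3proxy README for the authoritative property list):

```
# s3proxy.conf - a minimal local S3 endpoint backed by the filesystem provider
s3proxy.endpoint=http://127.0.0.1:8080
s3proxy.authorization=none
jclouds.provider=filesystem
jclouds.identity=local-identity
jclouds.credential=local-credential
jclouds.filesystem.basedir=/tmp/s3proxy
```

Per the project README, this is then started with something like `java -jar s3proxy --properties s3proxy.conf`.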

Re: putBlob fails to send proper content when using payload(InputStream)

2018-04-13 Thread John Calcote
Thanks Andrew. Very helpful information. I'll definitely look at s3proxy. John On Fri, Apr 13, 2018, 12:06 AM Andrew Gaul wrote: > On Thu, Apr 12, 2018 at 01:41:38PM -, john.calc...@gmail.com wrote: >> Additional information on this issue: I've discovered by virtue of a

Re: putBlob fails to send proper content when using payload(InputStream)

2018-04-12 Thread john . calcote
Additional information on this issue: I've discovered by virtue of a wireshark session that jclouds client is NOT sending chunked transfer-encoding, but rather aws-chunked content-encoding. Can anyone tell me why this is necessary, since A) it accomplishes the same thing that chunked

Re: putBlob fails to send proper content when using payload(InputStream)

2018-04-11 Thread john . calcote
On 2018/04/11 20:28:55, john.calc...@gmail.com wrote: > I just found out that if I use "s3" provider type rather than "aws-s3" provider it works. I've set a breakpoint in the read(byte[], ...) method of my ByteBufferBackedInputStream and I can see that the

Re: putBlob fails to send proper content when using payload(InputStream)

2018-04-11 Thread john . calcote
I just found out that if I use "s3" provider type rather than "aws-s3" provider it works. I've set a breakpoint in the read(byte[], ...) method of my ByteBufferBackedInputStream and I can see that the difference appears to be that the "s3" provider is using a straight upload, while the "aws-s3"

putBlob fails to send proper content when using payload(InputStream)

2018-04-11 Thread john . calcote
uds.aws.s3.blobstore.AWSS3BlobStore.putBlob(AWSS3BlobStore.java:85) at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:248) at test.App.main(App.java:98) Process finished with exit code 1 ``` Thanks in advance, John Calcote
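
The pattern that generally avoids these streaming surprises is to hand jclouds the content length along with the InputStream, so it can issue a plain single PUT. A sketch:

```
import java.io.InputStream;

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;

public class PutBlobExample {
    // Give jclouds the exact content length up front so it does not
    // have to buffer or chunk-encode an InputStream of unknown size.
    public static void put(BlobStore store, String container, String name,
                           InputStream data, long length) {
        Blob blob = store.blobBuilder(name)
                .payload(data)
                .contentLength(length)
                .build();
        store.putBlob(container, blob);
    }
}
```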

Re: putBlob with an already existing object

2018-04-05 Thread john . calcote
> multi-part uploads. Unfortunately Atmos is odd for a number of reasons and delete-and-retry was the best workaround at the time, especially for a low-popularity provider. Some blobstores like Ceph can address this issue with conditional PUT but this is not supported elsewhere. In the
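
The delete-and-retry workaround described above, translated into portable-abstraction client code (a sketch; note the obvious race if another client recreates the key between the remove and the retry):

```
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.KeyAlreadyExistsException;
import org.jclouds.blobstore.domain.Blob;

public class OverwriteHelper {
    // Mirrors the AtmosUtils.putBlob workaround: on a provider that
    // refuses overwrites, delete the old key and retry once.
    public static String putOverwriting(BlobStore store, String container, Blob blob) {
        try {
            return store.putBlob(container, blob);
        } catch (KeyAlreadyExistsException e) {
            store.removeBlob(container, blob.getMetadata().getName());
            return store.putBlob(container, blob);
        }
    }
}
```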

Re: putBlob with an already existing object

2018-04-05 Thread john . calcote
Thanks for the quick response Andrew - > The closest analog is AtmosUtils.putBlob which retries on KeyAlreadyExistsException after removing. Generally the jclouds portable abstraction tries to make all blobstores act the same and uses the native behavior for the providers. Which blobstore

Re: Requiring Java 8 and Guava 21+

2018-04-04 Thread john . calcote
On 2018/03/24 19:40:39, Andrew Gaul wrote: > jclouds-dev has a thread on requiring Java 8 and Guava 21+. > when not if. Any comments on this proposed change? I've been using Java 8 for 2 years now and I started late. I can't believe what I was missing out on - Java 8 is

putBlob with an already existing object

2018-04-04 Thread john . calcote
What is jclouds's general policy with regard to putting a blob to a cloud service where the blob already exists and the cloud provider doesn't allow overwrites? Seems like it would be nice to be able to treat the operation like an idempotent HTTP PUT, but if the service disallows

deleteObject with missing object

2018-04-04 Thread john . calcote
What does deleteObject do when the object is not present in the cloud? Does it silently return success (as with the corresponding idempotent HTTP verb)? When writing a client, one doesn't want to catch an exception for a missing object and, not knowing what it's for, just retry the operation.