Re: Reusing BlobStores

2014-01-26 Thread Andrew Gaul
Instantiating a BlobStoreContext performs several requests, such as
authentication and populating metadata like bucket locations.  This
process has some overhead, so users should issue many requests against a
single BlobStore.  After instantiation, each BlobStore request, e.g.,
getBlob, issues an HTTP request, creating an HTTP connection as
necessary.  From a functional perspective users should not worry about
the underlying HTTP connection pool reconnecting.  If an authentication
token expires jclouds will automatically renew it using the provided
credentials.
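
A minimal sketch of the intended usage (the provider name, credentials,
and container/blob names are placeholders):

BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
      .credentials("identity", "credential")
      .buildView(BlobStoreContext.class);
BlobStore blobStore = context.getBlobStore();
Blob blob1 = blobStore.getBlob("container", "key1");  // each call issues an HTTP request
Blob blob2 = blobStore.getBlob("container", "key2");  // reusing pooled connections
context.close();  // close once, after all requests complete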

On Sun, Jan 26, 2014 at 09:30:08PM -0500, John D. Ament wrote:
 Hi all,
 
 Per the docs here:
 http://jclouds.apache.org/documentation/userguide/blobstore-guide/
 BlobStores should be reused for multiple requests, not a single one
 per request.  However, most of these requests have a timeout involved,
 where the connection is only good for so long.  So, does jclouds
 automatically reconnect to the provider if the session times out?
 
 John

-- 
Andrew Gaul
http://gaul.org/


Re: Reusing BlobStores

2014-01-27 Thread Andrew Gaul
You correctly understand this and can create any number of
BlobStoreContexts for simultaneous use of different providers and
regions.  Region support for Rackspace is incomplete, and you might want
to look at the newer implementation in jclouds-labs-openstack.

On Mon, Jan 27, 2014 at 01:35:31PM -0500, John D. Ament wrote:
 Andrew,
 
 Ok, thanks for the heads up.  In my case, I end up needing two blob
 store contexts, since I have different blob stores in different
 regions (the context appears to be region specific, at least in my use
 with Rackspace).  So I should essentially create these as singletons,
 and send data as appropriate.
 
 Thanks,
 
 John
 
 On Sun, Jan 26, 2014 at 9:44 PM, Andrew Gaul g...@apache.org wrote:
  Instantiating a BlobStoreContext performs several requests, such as
  authentication and populating metadata like bucket locations.  This
  process has some overhead, so users should issue many requests against a
  single BlobStore.  After instantiation, each BlobStore request, e.g.,
  getBlob, issues an HTTP request, creating an HTTP connection as
  necessary.  From a functional perspective users should not worry about
  the underlying HTTP connection pool reconnecting.  If an authentication
  token expires jclouds will automatically renew it using the provided
  credentials.
 
  On Sun, Jan 26, 2014 at 09:30:08PM -0500, John D. Ament wrote:
  Hi all,
 
  Per the docs here:
  http://jclouds.apache.org/documentation/userguide/blobstore-guide/
  BlobStores should be reused for multiple requests, not a single one
  per request.  However, most of these requests have a timeout involved,
  where the connection is only good for so long.  So, does jclouds
  automatically reconnect to the provider if the session times out?
 
  John
 
  --
  Andrew Gaul
  http://gaul.org/

-- 
Andrew Gaul
http://gaul.org/


Clojure support

2014-02-19 Thread Andrew Gaul
Does anyone use the jclouds Clojure bindings, specifically
blobstore2.clj?  These have not seen many changes over the years and
have made it harder to evolve the underlying Java jclouds APIs, e.g.,
https://github.com/jclouds/jclouds/pull/44 .  Can someone volunteer to
maintain these?

-- 
Andrew Gaul
http://gaul.org/


Re: Chunked upload with glance

2014-04-15 Thread Andrew Gaul
You can enable multi-part upload on a per-blob basis with:

BlobStore.putBlob(containerName, blob, new PutOptions().multipart());
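
A fuller sketch of the same call (the file path and container name are
placeholders; ByteSource and Files are from Guava's com.google.common.io):

ByteSource byteSource = Files.asByteSource(new File("/path/to/image.img"));
Blob blob = blobStore.blobBuilder("image.img")
      .payload(byteSource)
      .contentLength(byteSource.size())
      .build();
String eTag = blobStore.putBlob(containerName, blob, new PutOptions().multipart());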

On Wed, Apr 16, 2014 at 10:19:20AM +0530, Shital Patil wrote:
 Hi,
 I am trying to upload big images using *jclouds glance labs 1.8.0 and jdk
 1.7*. With jclouds glance we can upload small images. But when we try large
 images it throws -
 
 java.lang.IllegalArgumentException: Cannot transfer 2 GB or larger chunks
 due to JDK 1.6 limitations. Use chunked encoding or multi-part upload, if
 possible. For more information:
 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6755625
 EXception Cannot transfer 2 GB or larger chunks due to JDK 1.6 limitations.
 Use chunked encoding or multi-part upload, if possible. For more
 information: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6755625
 
 
 How to specify chunked upload option ?
 How to do multi-part upload?
 
 Thank you

-- 
Andrew Gaul
http://gaul.org/


Re: 1.6.4 release?

2014-04-16 Thread Andrew Gaul
On Wed, Apr 16, 2014 at 08:43:11PM +0200, Andrew Phillips wrote:
 I'm interested in a 1.6.4 release to resolve:
https://issues.apache.org/jira/browse/JCLOUDS-427
 
 We have a couple of systems for which we do not want to do the
 async-synchronous upgrade right now; but we would like to upgrade Java.
 
 Thanks for letting us know, John. We've seen this question from a
 couple of other users, too.
 
 @users: please vote on this by replying to this thread if you're
 also interested!
 
 @John: are you able to use 1.6.4-SNAPSHOT builds at present, or how
 are you handling this issue?

-1, prefer not to spread our finite volunteer resources across
maintaining multiple stable branches and validating releases.  A few
users have asked for a 1.5.11 release as well!  We should push them to
upgrade to a more recent version of jclouds instead.

-- 
Andrew Gaul
http://gaul.org/


Re: Clojure support

2014-05-27 Thread Andrew Gaul
On Wed, Feb 19, 2014 at 01:04:43PM -0800, Andrew Gaul wrote:
 Does anyone use the jclouds Clojure bindings, specifically
 blobstore2.clj?  These have not seen many changes over the years and
 have made it harder to evolve the underlying Java jclouds APIs, e.g.,
 https://github.com/jclouds/jclouds/pull/44 .  Can someone volunteer to
 maintain these?

Repeating my request for a Clojure maintainer.  In addition to making
evolving the APIs more difficult, jclouds has a three-year-stale
dependency on Clojure (1.3 vs. the current 1.6).  I will ask the
development team to drop support unless someone volunteers to maintain
it.

-- 
Andrew Gaul
http://gaul.org/


Re: blob storage provider for Google?

2014-05-27 Thread Andrew Gaul
On Tue, May 27, 2014 at 09:48:01PM +, Guenther Thomsen wrote:
  You may also want to have a look at  
  https://github.com/jclouds/jclouds-labs-google/pull/25 and perhaps get  
  in touch with Bhathiya to see what the plans are..?
 
 Thanks.
 
 This seems to be work in progress for Google Cloud Storage. Are there plans 
 to support Google Drive as well?
 
 ~ Guenther

Google Drive (or Dropbox) support would be an interesting addition to
jclouds.  These syncing services provide programmatic access to the
underlying storage, perhaps sufficient to implement the jclouds
BlobStore abstraction.  Note that the use cases for syncing services are
very different from those of object storage (like Google Cloud Storage);
the latter tends to scale to a greater number and size of objects and
more clients, and supports different privilege mechanisms.  If this interests
you, I encourage you to look at the existing Google support which
implements OAuth.  This will give you a good head start on any
implementation.  Good luck!

-- 
Andrew Gaul
http://gaul.org/


Re: LDAP Integration/Support

2014-05-29 Thread Andrew Gaul
On Wed, May 28, 2014 at 08:28:08AM +0200, Kenny Aondona wrote:
 I am new to jclouds, I was wondering if jClouds supports LDAP or Active
 Directory(or any of that kind).
 Can someone please advise me regarding an app for an enterprise of 100
 people, the app will be used for accessing blobstores across 3-4 different
 cloud storage providers.
 What would be the best approach for managing user identities and
 access/authorization using jClouds. Thanks !!

You should architect your application to authenticate users against an
LDAP/AD server, store a mapping from objects to user permissions in some
persistent store, and vend signed URLs to clients when they request and
have permission to read/write a given object.  jclouds does not offer
any support for LDAP/AD, and the greater Java ecosystem generally has
poor support for it.
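
If you go this route, jclouds can generate the signed URLs via the
portable signer; a minimal sketch (container and blob names are
placeholders):

BlobRequestSigner signer = context.getSigner();
HttpRequest request = signer.signGetBlob("container", "blob");
URI url = request.getEndpoint();  // hand this URL to the authorized client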

-- 
Andrew Gaul
http://gaul.org/


Re: Maginatics Cloud Storage Platform

2014-06-02 Thread Andrew Gaul
On Mon, Jun 02, 2014 at 06:00:52PM -0300, felipe gutierrez wrote:
 Hi,
 
 I have a doubt about Maginatics Cloud Storage Platform.
 
 JClouds is a subproject of it?
 
 Can I use the platform for free? Where can I download?
 
 Thanks in advance!
 Felipe

Maginatics is one of the many vendors developing a commercial product
which includes jclouds code.  Their developers actively participate in
jclouds development and contribute code.  Please contact Maginatics at
http://maginatics.com/ for information on their offering.

-- 
Andrew Gaul
http://gaul.org/


Re: Error on using JClouds SshClient

2014-06-15 Thread Andrew Gaul
Can you try calling payload.getContentMetadata().setContentLength(...)?
Also you might want to use a ByteSourcePayload instead, e.g.,

ByteSource byteSource = Files.asByteSource(
      new File("C:/Users/Udara/input.txt"));
Payload payload = new ByteSourcePayload(byteSource);
payload.getContentMetadata().setContentLength(byteSource.size());

This will allow the jclouds retry mechanism to work in the presence of
network failures.  InputStreamPayloads are not retryable.

On Mon, Jun 16, 2014 at 09:22:15AM +0530, Nipun Udara wrote:
 Hi all
 
 FileInputStream inputStream = new FileInputStream("C:/Users/Udara/input.txt");
 try {
    ssh.connect();
    Payload payload = new InputStreamPayload(inputStream);
    ssh.put("input.txt", payload);
 } finally {
    if (ssh != null) {
       ssh.disconnect();
    }
 }
 
 when I use the jclouds SshClient put method above I get the following error,
 
 [ERROR] 
 (ec2-user:rsa[fingerprint(f7:fd:af:ae:25:0b:d3:99:2a:c2:ee:8c:83:be:92:0c),sha1(0f:4b:9e:f7:a7:f8:b1:4e:f4:7c:68:3b:2e:e2:e7:68:f3:25:8c:32)]@
 54.85.181.97:22) error acquiring Put(path=[input.txt]) (not retryable): null
 java.lang.NullPointerException
 at
 org.jclouds.sshj.SshjSshClient$PutConnection$1.getLength(SshjSshClient.java:323)
 at
 net.schmizz.sshj.sftp.SFTPFileTransfer$Uploader.upload(SFTPFileTransfer.java:182)
  at
 net.schmizz.sshj.sftp.SFTPFileTransfer$Uploader.access$100(SFTPFileTransfer.java:172)
 at net.schmizz.sshj.sftp.SFTPFileTransfer.upload(SFTPFileTransfer.java:70)
  at net.schmizz.sshj.sftp.SFTPClient.put(SFTPClient.java:248)
 at
 org.jclouds.sshj.SshjSshClient$PutConnection.create(SshjSshClient.java:314)
  at
 org.jclouds.sshj.SshjSshClient$PutConnection.create(SshjSshClient.java:290)
 at org.jclouds.sshj.SshjSshClient.acquire(SshjSshClient.java:196)
  at org.jclouds.sshj.SshjSshClient.put(SshjSshClient.java:346)
 at org.apache.airavata.gfac.utils.JCloudsFileTransfer.uploadFileToEc2(JCl
 
  I can put strings or StringPayloads but when I use InputStreamPayloads I
 get this error. Any help regarding this?
 
 Regards
 Nipun Udara

-- 
Andrew Gaul
http://gaul.org/


Re: jCloud and HTTPS

2014-07-08 Thread Andrew Gaul
On Tue, Jul 08, 2014 at 11:15:33PM +0200, Andrew Phillips wrote:
 Quoting Bk Lau bklau2...@gmail.com:
 I'm new to using HTTPS to  access OpenStack compute API hosted on a HTTPS
 url. Is there any special settings that I should set to allow HTTPs call to
 endpoints to succeed?.
 
 Have you tried simply setting the HTTPS endpoint when building your
 context, as described in the ContextBuilder Javadoc? [1]
 
 If that fails, could you put the error in a Pastie or Gist?

Also try experimenting with setting Constants.PROPERTY_RELAX_HOSTNAME
and Constants.PROPERTY_TRUST_ALL_CERTS in ContextBuilder.
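
A sketch of the overrides (the endpoint and credentials are
placeholders; only use trust-all-certs against endpoints you control):

Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_RELAX_HOSTNAME, "true");
overrides.setProperty(Constants.PROPERTY_TRUST_ALL_CERTS, "true");
ComputeServiceContext context = ContextBuilder.newBuilder("openstack-nova")
      .endpoint("https://keystone.example.com:5000/v2.0/")
      .credentials("tenant:user", "password")
      .overrides(overrides)
      .buildView(ComputeServiceContext.class);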

-- 
Andrew Gaul
http://gaul.org/


Re: Is it possible to abort putBlob operation?

2014-07-08 Thread Andrew Gaul
On Thu, Jul 03, 2014 at 07:18:56PM +0200, Nikola Knezevic wrote:
 is there a way to abort a putBlob operation, making sure that the object
 store will not create the blob?

jclouds does not support aborting putBlob, but you appear to have found
a clever workaround below!  jclouds 1.7.3 also does not support aborting
getBlob but we are tracking this issue and a fix at:

https://issues.apache.org/jira/browse/JCLOUDS-417
https://github.com/jclouds/jclouds/pull/435

 For example, if I issue a putBlob operation, where I generate the content
 and pass it to the Blob through a PipedInputStream/PipedOutputStream pair,
 would it be possible to abort the operation if there is an exception in the
 generation?  I'm not sure whether merely closing a blob's stream would
 prevent the object store from creating the blob. If the input stream I pass
 to blob.setPayload() throws an IOException, would that do the trick?

This will work, although you should provide the blobstore some cookie to
know that the blob did not complete successfully, e.g., Content-Length
or Content-MD5.  Further, ensure that you use a non-repeatable Payload
like an InputStream; specifically, a PipedInputStream will work.
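
A sketch of that workaround (names and length are placeholders; the
generator thread and error handling are elided):

PipedInputStream in = new PipedInputStream();
PipedOutputStream out = new PipedOutputStream(in);  // generator writes here
Blob blob = blobStore.blobBuilder("generated-object")
      .payload(in)
      .contentLength(expectedLength)  // a short write should fail the PUT
      .build();
// an IOException from the pipe aborts the request before it completes
blobStore.putBlob("container", blob);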

jclouds should provide real support for aborting putBlob.  Can you open
a JIRA issue to track this?  It intersects with some of our
AsyncBlobStore discussion.

-- 
Andrew Gaul
http://gaul.org/


Re: [DISCUSS] Java 6 support

2014-07-15 Thread Andrew Gaul
On Wed, May 28, 2014 at 11:57:18AM -0700, Andrew Gaul wrote:
 jclouds presently supports Java 6, 7, and 8 which imposes extra
 development costs and prevents uptake of new language and library
 features including try-with-resources, NIO.2, and HTTP client
 improvements.  Oracle ceased public updates to Java 6 in early 2013[1]
 and jclouds could use this to guide its support strategy.  The jclouds
 developers would like to understand how many users continue to use Java
 6 and what prevents upgrading to newer versions.  Please respond to this
 thread with any relevant information.  Thanks!

I collected some limited statistics from JIRA which show that most bug
reporters use Java 7 and none use Java 6.  Perhaps we can use a similar
data-driven approach when we move to Java 8 in a few years.

major version summary:

 10 Java 1.7
  1 Java 1.8

minor version summary:

  3 Java 1.7
  2 Java 1.7.0_21
  1 Java 1.7.0_25
  2 Java 1.7.0_45
  1 Java 1.7.0_51
  1 Java 1.7.0_55
  1 Java 1.8

individual JIRA issues:

JCLOUDS-247: Java 1.7
JCLOUDS-249: Java 1.7.0_25
JCLOUDS-498: Java 1.7.0_45
JCLOUDS-519: Java 1.7
JCLOUDS-539: Java 1.7.0_51
JCLOUDS-542: Java 1.7
JCLOUDS-556: Java 1.7.0_45
JCLOUDS-569: Java 1.8
JCLOUDS-604: Java 1.7.0_21
JCLOUDS-605: Java 1.7.0_21
JCLOUDS-626: Java 1.7.0_55

collected from:

for i in `seq 650`; do echo $i; wget https://issues.apache.org/jira/si/jira.issueviews:issue-xml/JCLOUDS-$i/JCLOUDS-$i.xml || break; done
grep -A1 'environment' * | grep -i -e jdk -e java | tr '\r' '\n'

-- 
Andrew Gaul
http://gaul.org/


Re: How to write a Blob using an OutputStream?

2014-08-04 Thread Andrew Gaul
On Mon, Aug 04, 2014 at 04:39:15PM -0400, Steve Kingsland wrote:
 I'm trying to use jclouds to write to an S3-compatible object store (Ceph),
 and I'd like to use an OutputStream to write the payload for a Blob. How do
 I do this?
 
 I'm working on an existing system which uses a stream-based abstraction
 around all of the file I/O, that looks like this:
 
 public interface ResourceFactory {
 InputStream getInputStream(String resourcePath) throws IOException;
 
 OutputStream getOutputStream(String resourcePath) throws IOException;
 }
 
 I was able to implement getInputStream() for *reading* a blob from jclouds,
 but I'm not sure how to return an OutputStream for *writing* a blob.
 
 I know this question has already been asked
 https://groups.google.com/forum/#!topic/jclouds/F2pCt9i7TSg, but it seems
 like a common-enough use case that it shouldn't be terribly complicated to
 implement. Can anyone provide suggestions for how to accomplish this?
 
 The best I could find is Payload#writeTo
 http://demobox.github.io/jclouds-maven-site-1.7.2/1.7.2/jclouds/apidocs/org/jclouds/io/WriteTo.html,
 which accepts an OutputStream but is @Deprecated. Thanks in advance!

Steve, I am not sure I understand your use case.  putBlob consumes an
input *source*, e.g., ByteSource or InputStream.  Why do you want to
provide it an output *sink*, e.g., OutputStream?  If you have a special
need, could you provide a custom implementation of ByteSource or
InputStream, or use PipedInputStream/PipedOutputStream if you really
must use an OutputStream?

-- 
Andrew Gaul
http://gaul.org/


Re: How to write a Blob using an OutputStream?

2014-08-04 Thread Andrew Gaul
Please look at PipedInputStream/PipedOutputStream which should address
this use case.
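
A sketch of the adapter (names are hypothetical; the upload runs on a
java.util.concurrent executor thread so the caller can keep writing to
the OutputStream):

final PipedInputStream in = new PipedInputStream();
PipedOutputStream out = new PipedOutputStream(in);
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> eTag = executor.submit(new Callable<String>() {
   @Override public String call() {
      Blob blob = blobStore.blobBuilder("documentPath")
            .payload(in)
            .contentLength(knownLength)  // if the length is known up front
            .build();
      return blobStore.putBlob("container", blob);
   }
});
// return "out" from getOutputStream(); closing it completes the upload,
// after which eTag.get() yields the result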

On Mon, Aug 04, 2014 at 08:10:49PM -0400, Steve Kingsland wrote:
 My use case is:
 
 1. the calling code is generating content in memory, and wants an
 OutputStream to write it to (currently it's going to disk);
 
 2. the putBlob() method wants a byte[], InputStream, etc. that it can read
 from.
 
 My problem is that *both* parties want to control the transaction. Here is
 what my calling code looks like:
 
 OutputStream documentOutputStream = null;
 try {
    documentOutputStream =
       this.documentResourceFactory.getDocumentOutputStream(documentPath);
    renderAndWriteDocument(renderContext, documentOutputStream);
 }
 catch (IOException e) {
    ...
 }
 finally {
    Closeables.closeQuietly(documentOutputStream);
 }
 
 I'm trying to create an implementation of DocumentResourceFactory that
 returns an OutputStream for writing the document to an Object Store using
 jclouds, instead of writing it to the local file system. I guess a
 stream-based API isn't really supported for writing to object stores...
 
 In my case, the files are small enough that I'm OK buffering them in
 memory. So what I'm planning to do, if there are no better options, is to
 create an OutputStream implementation that buffers the file contents, and
 uploads it to the blob store when flush()/close() is called. But that
 doesn't sound great, so I'm hoping maybe someone else has a better idea?
 
 
 
 Steve Kingsland
 Senior Software Engineer
 Opower  http://www.opower.com/
 We’re hiring! See jobs here http://www.opower.com/careers
 
 
 On Mon, Aug 4, 2014 at 7:53 PM, Andrew Gaul g...@apache.org wrote:
 
  On Mon, Aug 04, 2014 at 04:39:15PM -0400, Steve Kingsland wrote:
   I'm trying to use jclouds to write to an S3-compatible object store
  (Ceph),
   and I'd like to use an OutputStream to write the payload for a Blob. How
  do
   I do this?
  
   I'm working on an existing system which uses a stream-based abstraction
   around all of the file I/O, that looks like this:
  
   public interface ResourceFactory {
   InputStream getInputStream(String resourcePath) throws IOException;
  
   OutputStream getOutputStream(String resourcePath) throws IOException;
   }
  
   I was able to implement getInputStream() for *reading* a blob from
  jclouds,
   but I'm not sure how to return an OutputStream for *writing* a blob.
  
   I know this question has already been asked
   https://groups.google.com/forum/#!topic/jclouds/F2pCt9i7TSg, but it
  seems
   like a common-enough use case that it shouldn't be terribly complicated
  to
   implement. Can anyone provide suggestions for how to accomplish this?
  
   The best I could find is Payload#writeTo
   
  http://demobox.github.io/jclouds-maven-site-1.7.2/1.7.2/jclouds/apidocs/org/jclouds/io/WriteTo.html
  ,
   which accepts an OutputStream but is @Deprecated. Thanks in advance!
 
  Steve, I am not sure I understand your use case.  putBlob consumes an
  input *source*, e.g., ByteSource or InputStream.  Why do you want to
  provide it an output *sink*, e.g., OutputStream?  If you have a special
  need, could you provide a custom implementation of ByteSource or
  InputStream, or use PipedInputStream/PipedOutputStream if you really
  must use an OutputStream?
 
  --
  Andrew Gaul
  http://gaul.org/
 

-- 
Andrew Gaul
http://gaul.org/


Re: How to configure Jclouds S3 API to put the container name *after* the hostname?

2014-08-11 Thread Andrew Gaul
On Mon, Aug 11, 2014 at 08:19:56PM -0400, Steve Kingsland wrote:
 I'm trying to use jclouds' S3 API to connect to an internally-hosted Ceph
 server, which exposes an S3-compliant API. Use other (ruby and perl)
 clients, I've noticed that my requests work fine when I put the
 container/bucket name *after* the hostname, like this:
 
 https://opower.internal/*mybucket/*?acl
 
 However, jclouds is putting the container name *in the hostname* (S3 calls
 this a virtual bucket), like so:
 
 https://*mybucket.*opower.internal/?acl
 
 And that's failing with an AccessDenied error that I haven't figured out
 yet, but am still working on. In the mean time, is there a way to configure
 jclouds to put the container name *after* the host name, in the request?

Which version of jclouds do you use?  jclouds 1.7.0 should have resolved
this issue:

https://issues.apache.org/jira/browse/JCLOUDS-305

If you use 1.7.0 or newer, can you share the failing operations with
wire logs?  Also make sure you use the generic s3 provider and not the
aws-s3 specific provider.
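
For reference, a generic s3 context against a private endpoint looks
something like this (the endpoint and credentials are placeholders):

BlobStoreContext context = ContextBuilder.newBuilder("s3")
      .endpoint("https://opower.internal")
      .credentials("accessKey", "secretKey")
      .buildView(BlobStoreContext.class);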

-- 
Andrew Gaul
http://gaul.org/


Re: How does jclouds store my files?

2014-08-11 Thread Andrew Gaul
On Mon, Aug 11, 2014 at 05:08:16PM -0300, felipe gutierrez wrote:
 I am new to jclouds. What I am doing is: opening a disk with Java and
 formatting it with jclouds. So, instead of writing with Files.write() I use
 jclouds.

 When I format I can see about 63 blocks of 2MB. The blocks next to it, say
 the 30 neighbors, are the indexes of my files copied to the disk. It is so
 true that if I save 100 files for example, I can keep these blocks and move
 the another blocks to my cloud and I can still open the disk and see all
 files inside. Only see, of course. If I tried to open I need to move back
 the exactly blocks of that file I want to open.
 
 My question: When I copied 50GB of movies I still can see all movies on my
 disk. When I open the movie I need to restore all blocks. It is what my app
 does. But I got a claim saying the index of the movie are not integrate. I
 still can see the size of the movie is 800MB, but when I open it says only
 20 minutes of movie. So I would like to restore all the blocks of that
 movie, not only what my app is asking to read.

Can you explain how you open a disk and what it means to format it with
jclouds?  Also what does an index mean?  The jclouds filesystem provider
does not provide any indexing; the underlying operating system provides
this.

-- 
Andrew Gaul
http://gaul.org/


S3Proxy 1.0.0 release

2014-08-12 Thread Andrew Gaul
Over the last few weeks I have hacked up S3Proxy, which provides an S3
interface on top of the jclouds BlobStore portable abstraction.  This
enables new uses of jclouds, especially for non-programmers, similar to
jclouds-cli.  For example, one could use an existing S3 application with
a CloudFiles backend or create a local S3 blobstore using the
filesystem backend.  I appreciate any feedback:

https://github.com/andrewgaul/s3proxy

-- 
Andrew Gaul
http://gaul.org/


Re: S3Proxy 1.0.0 release

2014-08-13 Thread Andrew Gaul
On Tue, Aug 12, 2014 at 07:50:38PM +0200, Andrew Phillips wrote:
 Over the last few weeks I have hacked up S3Proxy, which provides an S3
 interface on top of the jclouds BlobStore portable abstraction.
 
 Looks cool! Is there an example or a guide somewhere demonstrating
 how to use this?
 
 One comment about the Limitations: is "single-part uploads larger
 than 2 GB" still applicable?

I expanded on the readme, giving examples of the two use cases of
translating to non-S3 object stores and providing a local S3 for
testing.  Unfortunately jclouds 1.8 has the 2 GB single-part limit
although I will upgrade to jclouds 2.0 as soon as we release it.  Thanks
for the feedback!

-- 
Andrew Gaul
http://gaul.org/


Re: Error: No space left on device

2014-08-27 Thread Andrew Gaul
Felipe, you exhausted your rootfs space; a file system with 83 GB
available cannot accommodate a 100 GB copy.  Perhaps change your basedir
to your home partition which has 750 GB available?  Note that the
filesystem blobstore does not create any temporary files although other
processes can consume this space.

On Mon, Aug 25, 2014 at 03:12:59PM -0300, felipe gutierrez wrote:
 I am using rootfs, but I deleted the blocks to start again. So I have 83G
 free. My copy of 100G stopped at 80G.
 
 $ df -h
 Filesystem                                              Size  Used Avail Use% Mounted on
 rootfs                                                   92G  4.7G   83G   6% /
 udev                                                     10M     0   10M   0% /dev
 tmpfs                                                   779M  620K  779M   1% /run
 /dev/disk/by-uuid/5147a770-64ed-4aae-918e-21bd237b359b   92G  4.7G   83G   6% /
 tmpfs                                                   5.0M     0  5.0M   0% /run/lock
 tmpfs                                                   4.6G   72K  4.6G   1% /run/shm
 /dev/sda6                                               811G   17G  753G   3% /home
 
 
 On Mon, Aug 25, 2014 at 2:44 PM, Andrew Phillips andr...@apache.org wrote:
 
  How could I delete the files only the temporary directory? I also cant find
  this directory at /tmp
 
 
  All I can see in your stacktrace is:
 
 
   java.lang.RuntimeException: java.io.IOException: No space left on device
 
 
  It does not say *which* device is out of space. Could you check on your
  target system to find out which device has no space left [1]?
 
  ap
 
  [1] https://kb.iu.edu/d/agfe
 

-- 
Andrew Gaul
http://gaul.org/


Re: Guava versions

2014-09-04 Thread Andrew Gaul
On Thu, Sep 04, 2014 at 08:49:26PM +0300, Inbar Stolberg wrote:
 Is it agreed that those should be avoided/removed in future releases?
 Also if there is a patched version for 1.8 with guava 15 or 17 without
 @Beta interfaces pls let me know.

Google's generous and well-communicated compatibility guarantees do not
apply to @Beta APIs; use of these undermines jclouds' goal of
forward-compatibility with Guava.  That being said, jclouds benefits
from a lot of these interfaces instead of reimplementing equivalent
functionality.

I do not plan to work on Guava backward compatibility; please send pull
requests if you want to include this functionality.  Note that we would
need to provide this across all the jclouds repositories, including
labs.  Also note that Guava 15 TypeToken is incompatible with Java 7 so
going further back than Guava 16 is likely impossible:

https://issues.apache.org/jira/browse/JCLOUDS-427

-- 
Andrew Gaul
http://gaul.org/


Re: Maximum number of Retries JCloud do to AWS

2014-10-23 Thread Andrew Gaul
By default jclouds will retry HTTP commands 5 times.  You can configure
this by setting jclouds.max-retries in the ContextBuilder overrides.
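
A sketch of the override (the provider and credentials are placeholders;
"0" disables retries entirely, mirroring the maxErrorRetry example below):

Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_MAX_RETRIES, "0");
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
      .credentials("accessKey", "secretKey")
      .overrides(overrides)
      .buildView(ComputeServiceContext.class);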

On Thu, Oct 23, 2014 at 11:41:30AM -0700, Manisha Eleperuma wrote:
 Hi all,
 
 I want to know how many times by default the JCloud API retries the
 instance creation in AWS? Does AWS explicitly specify an amount?
 According to AWS docs, we need to specifically set the maxErrorRetry count
 to 0 to stop AWS from retrying.
 Each AWS SDK implements automatic retry logic. The AWS SDK for Java
 automatically retries requests, and you can configure the retry settings
 using the ClientConfiguration class. For example, in some cases, such as a
 web page making a request with minimal latency and no retries, you might
 want to turn off the retry logic. Use the ClientConfiguration class and
 provide a maxErrorRetry value of 0 to turn off the retries.
 Please confirm.
 
 Thanks and Regards
 Manisha
 -- 
 ~Regards
 *Manisha Eleperuma*
 Software Engineer, Cloud TG
 WSO2, Inc.: http://wso2.com
 lean.enterprise.middleware
 
 *blog:  http://manisha-eleperuma.blogspot.com/
 http://manisha-eleperuma.blogspot.com/*

-- 
Andrew Gaul
http://gaul.org/


Re: too may files open

2014-11-18 Thread Andrew Gaul
On Tue, Nov 18, 2014 at 05:27:48PM -0300, felipe gutierrez wrote:
 Some time ago I asked here why I am getting too many files open with jclouds
 and the answer was I needed to close my BlobStoreContext. I did, but now I
 am getting this error again. I believe the context is being opened in another
 location. This is my code http://pastebin.com/xCMMMqFR. I close the
 context at lines 175 and 508. Maybe I have to close other objects. But I am
 still getting the error:
 
 Caused by: java.io.FileNotFoundException:
 /home/felipe/udrive/disks/storage/bench53473ResourcegraveISCSI9284/355 (Too
 many files open)

Try closing the results of blob.getPayload().getInput() (lines 428 and
431).  You can do this via Java 7 try-with-resources or with
ByteStreams2.toByteArrayAndClose.  These InputStreams are actually
FileInputStreams and likely the cause of your leaks.
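
For example, with try-with-resources (a sketch; ByteStreams is the Guava
helper):

try (InputStream is = blob.getPayload().getInput()) {
   byte[] data = ByteStreams.toByteArray(is);
}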

-- 
Andrew Gaul
http://gaul.org/


Re: Fwd: Re: Download file from Swift

2014-12-23 Thread Andrew Gaul
Try calling outputStream.flush() since it is a BufferedOutputStream.
It is generally easier to use the Guava helpers in these situations,
e.g., Files.asByteSink(File).writeFrom(InputStream), which handles
resource management for you.  Also do not forget to close all of the
InputStreams and OutputStreams!
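
A sketch combining both suggestions, following the names in the quoted
example below:

SwiftObject swiftObject = objectApi.get("uploadObjectFromFile.txt");
File file = File.createTempFile("uploadObjectFromFile", ".txt");
try (InputStream in = swiftObject.getPayload().openStream()) {
   Files.asByteSink(file).writeFrom(in);  // writes, flushes, and closes the sink
}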

On Tue, Dec 23, 2014 at 12:45:07PM +0100, Sofiane Soulfly wrote:
 I tried your solution and the problem is that it creates a temp file,
 which is empty. My objective is to download the file and its content.
 
 
  Forwarded Message 
 Subject:  Re: Download file from Swift
 Date: Mon, 22 Dec 2014 17:01:34 +
 From: Everett Toews everett.to...@rackspace.com
 Reply-To: user@jclouds.apache.org
 To:   user@jclouds.apache.org user@jclouds.apache.org
 
 
 
 What version of jclouds are you using?
 
 Try this…
 
 ObjectApi objectApi = cloudFiles.getObjectApi(REGION, CONTAINER);
 SwiftObject swiftObject = objectApi.get("uploadObjectFromFile.txt");
 
 // Write the object to a file
 File file = File.createTempFile("uploadObjectFromFile", ".txt");
 BufferedOutputStream outputStream = new BufferedOutputStream(
       new FileOutputStream(file));
 ByteStreams.copy(swiftObject.getPayload().openStream(), outputStream);
 
 Everett
 
 
 On Dec 22, 2014, at 6:23 AM, Sofiane Soulfly saw...@hotmail.com wrote:
 
  Hello everybody,
  
  after I successfully uploaded a text file to Swift, now I want to download
  that file into my file system.
  I tried with ObjectApi.get but nothing was downloaded.
  Thank you.
  
  Regards,
  
  SB
 
 
 

-- 
Andrew Gaul
http://gaul.org/


SwiftProxy (was: S3Proxy 1.0.0 release)

2015-05-11 Thread Andrew Gaul
And now the equivalent for Swift:

https://github.com/bouncestorage/swiftproxy

On Tue, Aug 12, 2014 at 12:02:01AM -0700, Andrew Gaul wrote:
 Over the last few weeks I have hacked up S3Proxy, which provides an S3
 interface on top of the jclouds BlobStore portable abstraction.  This
 enables new uses of jclouds, especially for non-programmers, similar to
 jclouds-cli.  For example, one could use an existing S3 application with
 a CloudFiles backend or create a local S3 blobstore using the
 filesystem backend.  I appreciate any feedback:
 
 https://github.com/andrewgaul/s3proxy
 
 -- 
 Andrew Gaul
 http://gaul.org/

-- 
Andrew Gaul
http://gaul.org/


OpenStack Summit Vancouver

2015-05-18 Thread Andrew Gaul
Anyone interested in jclouds attending OpenStack Summit in Vancouver
this week?  Let's meet either for lunch or after hours!

-- 
Andrew Gaul
http://gaul.org/


Re: Bulk deletes in jclouds 1.8.0 for Swift?

2015-06-29 Thread Andrew Gaul
1.8.0 includes the provider-level support for bulk delete.  However,
1.9.0 added portable abstraction support:

https://issues.apache.org/jira/browse/JCLOUDS-144

You should reference the jclouds-labs-openstack repository for 1.8.x,
since we promoted the Swift API to core in 1.9.0.
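
With 1.9.0, the portable call is a single method which issues one bulk
request where the provider supports it; a minimal sketch:

blobStore.removeBlobs("container", ImmutableList.of("name1", "name2"));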

On Tue, Jun 30, 2015 at 12:31:07AM +, Forrest Townsend wrote:
 In jclouds version 1.8.0 did this feature make it in?
 https://issues.apache.org/jira/browse/JCLOUDS-685
 
 I was poking around and could not find the related code that this work item
 referenced. So I'm asking here: is a bulk delete supported on Swift with
 jclouds 1.8.0? By bulk delete I mean, one HTTP request to delete a
 collection of object names.
 
 Thanks,
 Forrest T.

-- 
Andrew Gaul
http://gaul.org/


Re: Java 7 and 8 features

2015-10-27 Thread Andrew Gaul
Related to Java versions, Apache jclouds uses modernizer-maven-plugin to
encourage use of modern Java APIs such as ArrayList,
String.getBytes(Charset), etc.:

https://github.com/andrewgaul/modernizer-maven-plugin

On Tue, Oct 27, 2015 at 01:49:41PM -0200, Vinicius Corrêa de Almeida wrote:
> I analyzed some releases and noticed that they do not use Java 7 features
> like multi-catch, nor Java 8 lambda expressions and other features, so I
> came by this email to ask why the developers are not using these features?

-- 
Andrew Gaul
http://gaul.org/


Re: JCloud - Openstack supported versions

2015-10-28 Thread Andrew Gaul
OpenStack Swift maintains backwards compatibility with previous releases
so the existing code should work with newer Swift versions.  Do you have
a specific issue with Kilo?

On Wed, Oct 28, 2015 at 11:06:44AM +0530, Trupti Mali wrote:
> Hi,
> I was checking out JCloud since I have got Openstack Swift already installed 
> with my system. I am using Kilo version of Openstack Swift. I was trying to 
> see JCloud to Openstack version mapping but failed to get any pointers. Can 
> you please affirm if JCloud 1.9.1 supports latest Openstack Version that is 
> Kilo. And please point me to a page which has this information.
> 
> 
> Thanks
> Trupti

-- 
Andrew Gaul
http://gaul.org/


Re: JCloud - Openstack supported versions

2015-10-28 Thread Andrew Gaul
Swift maintains backwards compatibility, and thus the latest jclouds
release should support it.  However, newer Swift releases may have additional
functionality that jclouds does not yet support.

On Wed, Oct 28, 2015 at 11:40:36AM +0530, Trupti Mali wrote:
> Thanks for the response Andrew. No I don’t have any issues yet . I was 
> analysing JCloud to be used with Swift Kilo. And before starting to do that I 
> wanted to confirm that latest JCloud version works well with Openstack Kilo.
> 
> Sorry - it was not very clear from your answer - were you explaining
> Openstack's backward compatibility, or did you mean JCloud instead?
> 
> > On 28-Oct-2015, at 11:37 am, Andrew Gaul <g...@apache.org> wrote:
> > 
> > OpenStack Swift maintains backwards compatibility with previous releases
> > so the existing code should work with newer Swift versions.  Do you have
> > a specific issue with Kilo?
> > 
> > On Wed, Oct 28, 2015 at 11:06:44AM +0530, Trupti Mali wrote:
> >> Hi,
> >> I was checking out JCloud since I have got Openstack Swift already 
> >> installed with my system. I am using Kilo version of Openstack Swift. I 
> >> was trying to see JCloud to Openstack version mapping but failed to get 
> >> any pointers. Can you please affirm if JCloud 1.9.1 supports latest 
> >> Openstack Version that is Kilo. And please point me to a page which has 
> >> this information.
> >> 
> >> 
> >> Thanks
> >> Trupti
> > 
> > -- 
> > Andrew Gaul
> > http://gaul.org/
> 

-- 
Andrew Gaul
http://gaul.org/


Re: Request Signature, Content Dispositioning, and RequestAuthorizeSignature

2015-10-15 Thread Andrew Gaul
Unfortunately jclouds does not support content disposition or type in
signed requests presently.  Another user opened an issue tracking this:

https://issues.apache.org/jira/browse/JCLOUDS-1016

This is a relatively easy feature to add; would either you or Halvdan
like to investigate this further?

On Fri, Oct 09, 2015 at 11:01:00AM -0400, Jeremy Glesner wrote:
> Hello,
> 
> We have gotten our first jclouds/S3 app up and running using a combination
> of the general and provider specific APIs.  I have a question about blob
> signatures that I hope someone might be able to help me with.
> 
> I'm familiar with the standard method for getting a blob request signature:
> 
> signer = context.getSigner();
> request = signer.signGetBlob(bucket, key);
> 
> This produces a standard HttpRequest object with method, endpoint, and
> headers: date and authorization.  However, on an object where I've defined
> a content disposition (via BaseMutableContentMetadata and the
> setContentDisposition on payload.setContentMetadata()), I was hoping that
> the signature would (or could be made to) include the
> "response-content-disposition=attachment; filename=" in the HttpRequest
> response.  However, I don't see a way to do this.
> 
> I did find the RequestAuthorizeSignature class which provides methods that
> would allow me to build the HttpRequest myself, adding the
> response-content-disposition, and to sign that request appropriately in
> order to build the authorization.  However, RequestAuthorizeSignature looks
> to be an internal class and not something that I can readily use.
> 
> What is the appropriate way in jclouds to incorporate the
> response-content-disposition in a blob request signature so that when a
> non-java client retrieves the resource, it will download with the provided
> file name?
> 
> V/r,
> 
> Jeremy

-- 
Andrew Gaul
http://gaul.org/


Re: aws-s3 etag when using multipart

2015-09-29 Thread Andrew Gaul
S3 emits different ETags for single- and multi-part uploads.  You can
use both types of ETags for future conditional GET and PUT operations
but only single-part upload returns an MD5 hash.  Multi-part upload
returns an opaque token which is likely a hash of the per-part hashes
combined with the number of parts.

You can ensure data integrity in transit by comparing the ETag or by
providing a Content-MD5 for single-part uploads.  Multi-part is more
complicated; each upload part call can have a Content-MD5 and each call
returns the MD5 hash.  jclouds supplies the per-part ETag hashes to the
final complete multi-part upload call but does not provide a way to
check the results of per-part calls or a way to supply a Content-MD5 for
each.

Fixing this requires calculating the MD5 in
BaseBlobStore.putMultipartBlob.  We could either calculate it beforehand
for repeatable Payloads or compare afterwards for InputStream payloads.
There is some subtlety to this for providers like Azure which do not
return an MD5 ETag.  We would likely want to guard this with a property
since not every caller wants to pay the CPU overhead.  Would you like to
take a look at this?

If you want a purely application fix, look at calling the BlobStore
methods initiateMultipartUpload, uploadMultipartPart, and
completeMultipartUpload.  jclouds internally uses these to implement
putBlob with new PutOptions().multipart().
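
A sketch of that application-level approach, assuming the jclouds 2.0
method shapes (part slicing and per-part hashing are elided; partition
is a hypothetical helper):

MultipartUpload mpu = blobStore.initiateMultipartUpload(
      container, blob.getMetadata(), PutOptions.NONE);
List<MultipartPart> parts = new ArrayList<>();
for (Payload partPayload : partition(payload)) {
   // set Content-MD5 on partPayload and verify the returned part ETag here
   parts.add(blobStore.uploadMultipartPart(mpu, parts.size() + 1, partPayload));
}
String eTag = blobStore.completeMultipartUpload(mpu, parts);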

On Tue, Sep 22, 2015 at 05:10:18PM +0200, Veit Guna wrote:
> Hi.
>  
> We're using jclouds 1.9.1 with the aws-s3 provider. Until now, we have used 
> the returned etag of blobStore.putBlob() to manually verify
> against a client provided hash. That worked quite well for us. But since we 
> are hitting the 5GB limit of S3, we switched to the multipart() upload
> that jclouds offers. But now, putBlob() returns someting like 
> - e.g. 90644a2d0c7b74483f8d2036f3e29fc5-2 that of course
> fails with our validation.
>  
> I guess this is due to the fact, that each chunk is hashed separately and 
> send to S3. So there is no complete hash over the whole payload that could
> be returned by putBlob() - is that correct?
>  
> During my research I stumbled across this:
>  
> https://github.com/jclouds/jclouds/commit/f2d897d9774c2c0225c199c7f2f46971637327d6
>  
> Now I'm wondering, what the contract of putBlob() is. Should it only return 
> valid etag/hashes otherwise return null?
>  
> I'm asking that, because otherwise, I would have to start parsing and 
> validating the returned value by myself and skip any
> validation when it isn't a normal md5 hash. My guess is, that this is the 
> hash from the last transferred chunk plus
> the chunk number?
>  
> Maybe someone can shed some light on this :).
>  
> Thanks
> Veit
>  

-- 
Andrew Gaul
http://gaul.org/


Re: Logging current jclouds API version

2015-12-16 Thread Andrew Gaul
org.jclouds.JcloudsVersion contains the jclouds version.  jclouds also
includes this in the User-Agent header of each HTTP request.
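
For example (a sketch; the static JcloudsVersion.get() accessor is an
assumption on my part):

System.out.println("jclouds version: " + JcloudsVersion.get());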

On Wed, Dec 16, 2015 at 06:22:03AM +, Forrest Townsend wrote:
> Hey,
> 
> Is there a way that I could access the current jclouds version that I am
> using within the API?
> 
> I can imagine running a script to access the dependencies' version, but my
> goal is to print out the jclouds version number as a header when I start a
> debug trace.
> 
> Any thoughts?
> 
> Thanks,
> Forrest T.

-- 
Andrew Gaul
http://gaul.org/


Re: Does there exist an OpenStack API implementation backed by JClouds?

2015-11-20 Thread Andrew Gaul
If you want an OpenStack Swift API (storage) on top of jclouds, then
SwiftProxy provides this.  If you want another OpenStack API like Nova
(compute), you will have to write this yourself.  Certainly you can
write your own provider and then use the portable jclouds abstraction.

On Fri, Nov 20, 2015 at 09:53:53AM +0800, philip andrew wrote:
> Hello Andrew,
> 
> As I understand, correct me if I am wrong, JClouds *can* call an OpenStack
> API as one of its providers but JClouds also has other providers which
> JClouds call which do not necessarily operate through the OpenStack API
> when JClouds is used. So I want to write a provider to my Cloud which I can
> put inside JClouds as a provider, that is my own work which I will be
> doing. This will allow me to expose a JClouds API to my customers.
> 
> I want someone else to make an OpenStack API over JClouds so that I can
> ALSO expose an OpenStack API to customers, not just a JClouds API. My
> customers want an OpenStack API and a JClouds API.
> 
> I can imagine one problem with this is that the API's won't be a 1-1
> mapping and also some information may be lost on API call, I mean JClouds
> API may be a smaller "set" of information than OpenStack API's "set". so I
> would want to have all of this done in Java (or Scala) so that down the
> call stack if some information was lost down the call stack it could be
> retrieved from a thread local variable or some other method at the bottom
> of the call stack so that I could get all the information I needed from the
> OpenStack API call.
> 
> Is the above all true?
> Does this seem viable?
> Do you agree or disagree with the way I am suggesting?
> 
> As I know JClouds can connect to OpenStack so you may be thinking, why
> don't I just write an OpenStack API to my cloud, but at this moment I have
> a Java API to my cloud right now, so if I write my code as a provider in
> Java to JClouds then I don't need to deal with REST API calls, I only write
> Java code inside which JClouds calls to my Java API to my cloud. This is
> the reason why I don't want to write an OpenStack API to my cloud but to do
> it in this round-about way is that I have a full Java API to my cloud.
> 
> Thanks!
> Philip
> 
> 
> 
> 
> On Fri, Nov 20, 2015 at 1:30 AM, Andrew Phillips <andr...@apache.org> wrote:
> 
> > Hi Philip
> >
> > Has anyone produced an OpenStack API which calls JClouds in
> >> its implementation?
> >>
> >
> > Could you give some details about what you're trying to put together here?
> >
> > As Andrew G's answer hinted, I could imagine using jclouds in a proxy-like
> > setup, but since jclouds itself expects to call an OpenStack API, I can't
> > immediately see how it would be used to create a "standalone" OpenStack
> > implementation.
> >
> > Regards
> >
> > ap
> >

-- 
Andrew Gaul
http://gaul.org/


Re: BlobStore#clearContainer never returns after closing BlobStoreContext

2016-06-13 Thread Andrew Gaul
Agreed.  Would you like to submit a pull request via GitHub?  You can
find instructions here:

https://www.openhub.net/accounts/khc

On Mon, Jun 13, 2016 at 10:26:07AM +, Bram Pouwelse wrote:
> Thanks for the quick response Andrew!
> 
> If I understand correctly closing the context destroys the ExecutorService
> but after that it is still possible to use the context for operations that
> don't need this Executor service? That feels a bit weird, I get that I
> shouldn't use the context after closing but in case I made a mistake and do
> use it after closing I'd personally prefer to get something like a
> ContextClosedException instead of a method call that never returns.
> 
> Regards,
> Bram
> 
> 
> On Fri, Jun 10, 2016 at 6:34 PM Andrew Gaul <g...@apache.org> wrote:
> 
> > Bram, you cannot use the BlobStoreContext or BlobStore view after
> > closing it.  Closing destroys the ExecutorService that clearContainer
> > uses to issue asynchronous operations.
> > The filesystem provider does not
> > use asynchronous operations and thus does not have this issue.  In the
> > future jclouds will remove its ExecutorService and require that callers
> > provide an external one for asynchronous calls like clearContainer.
> >
> > On Fri, Jun 10, 2016 at 12:45:41PM +, Bram Pouwelse wrote:
> > > Hi,
> > >
> > > I'm running into some never-ending integration tests in my project after
> > > upgrading jclouds to 1.9.2; this seems to be caused by calling
> > > BlobStore#clearContainer after the BlobStoreContext has been closed. I've
> > > attached a patch to demonstrate this issue using the
> > > org.jclouds.blobstore.integration.TransientServiceIntegrationTest at the
> > > end of this mail.
> > >
> > > After figuring out the root cause of my problem I did some more tests
> > and I
> > > don't really get what the close method is supposed to do, If I try the
> > same
> > > using the filesystem provider there is no issue? How is it possible the
> > > BlobStoreContext (and BlobStore instances created from that context) are
> > > still usable after closing the context?
> > >
> > > Regards,
> > > Bram
> > >
> > > Patch (tested on the 1.9.x branch):
> > >
> > > diff --git
> > >
> > a/blobstore/src/test/java/org/jclouds/blobstore/integration/TransientBlobIntegrationTest.java
> > >
> > b/blobstore/src/test/java/org/jclouds/blobstore/integration/TransientBlobIntegrationTest.java
> > > index eeaaf96..e63cef6 100644
> > > ---
> > >
> > a/blobstore/src/test/java/org/jclouds/blobstore/integration/TransientBlobIntegrationTest.java
> > > +++
> > >
> > b/blobstore/src/test/java/org/jclouds/blobstore/integration/TransientBlobIntegrationTest.java
> > > @@ -16,6 +16,8 @@
> > >   */
> > >  package org.jclouds.blobstore.integration;
> > >
> > > +import org.jclouds.blobstore.BlobStore;
> > > +
> > >  import
> > org.jclouds.blobstore.integration.internal.BaseBlobIntegrationTest;
> > >  import org.testng.annotations.Test;
> > >
> > > @@ -24,4 +26,13 @@ public class TransientBlobIntegrationTest extends
> > > BaseBlobIntegrationTest {
> > > public TransientBlobIntegrationTest() {
> > >provider = "transient";
> > > }
> > > +
> > > +   @Test(groups = { "integration"})
> > > +   public void testClearContainerAfterCloseNeverEnds() throws Exception
> > {
> > > +BlobStore blobStore = view.getBlobStore();
> > > +blobStore.createContainerInLocation(null, "test");
> > > +blobStore.putBlob("test",
> > > blobStore.blobBuilder("dummy").payload("test".getBytes()).build());
> > > +view.close();
> > > +blobStore.clearContainer("test");
> > > +   }
> > >  }
> >
> > --
> > Andrew Gaul
> > http://gaul.org/
> >

-- 
Andrew Gaul
http://gaul.org/


Re: requested location eu-central-1, which is not in the configured locations

2016-03-20 Thread Andrew Gaul
I can confirm that the aws-s3 provider still uses the v2 signatures.  I
committed several fixes for the v4 signatures in JCLOUDS-766 but one
blocking issue remains.  v4 URL signing requires a content hash for the
server to accept the PUT request but the jclouds API does not allow for
this.  One workaround would be to use v2 signatures for jclouds URL
signing and v4 signatures for other requests, which would work for most
regions.  If you can live without this functionality, please use the
following branch for now:

https://github.com/andrewgaul/jclouds/tree/aws-signature-v4

Unfortunately I cannot work on this further for a few weeks.  Let's
track further discussion with JCLOUDS-1090.

On Wed, Mar 09, 2016 at 11:57:53PM +0530, Ranjith R wrote:
> Hi,
>I would also like to know how to use AWS S3 v4 signature for
> authentication.  Is there way to specify which signature version to use
> while constructing a BlobStoreContext?
> 
> Thanks in advance,
> Ranjith
> 
> On Tue, Mar 8, 2016 at 3:24 PM, Archana C <carchan...@yahoo.co.uk> wrote:
> 
> > From the amazon documentation, we understand that the region eu-central-1
> > (Frankfurt) supports only AWS signature version 4, and from
> > https://issues.apache.org/jira/browse/JCLOUDS-480 we understand the
> > jclouds already has the support added for version 4 signature.
> >
> > What we are trying to figure out is *if* we need to do anything from the
> > client side to change the authorization signature.  We were hoping the
> > BlobStore APIs might work as they used to without changing the client code;
> > however, we see the requests sent are using V2 signature, and the server is
> > rejecting them.
> >
> > 2016-03-08 14:57:57,044 DEBUG [jclouds.wire] [main] >> "Test[\n]"
> > 2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> PUT
> > https://testcontainer3.s3-eu-central-1.amazonaws.com/file1 HTTP/1.1
> > 2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Expect:
> > 100-continue
> > 2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Host: 
> > *testcontainer3.s3-eu-central-1.amazonaws.com
> > <http://testcontainer3.s3-eu-central-1.amazonaws.com/>*
> > 2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Date: Tue, 08
> > Mar 2016 09:27:50 GMT
> > *2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Authorization:
> > AWS AKIAISCW6DRRITWR6IWQ:6AndVHQV2w75OXQDq/9sWt37KN0=*
> > 2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Content-Type:
> > application/unknown
> > 2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Content-Length: 5
> >
> > *org.jclouds.http.HttpResponseException: Server rejected operation
> > connecting to PUT
> > https://testcontainer3.s3-eu-central-1.amazonaws.com/file1
> > <https://testcontainer3.s3-eu-central-1.amazonaws.com/file1> HTTP/1.1*
> > * at
> > org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:118)*
> > * at
> > org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90)*
> > * at
> > org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73)*
> > * at
> > org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44)*
> >
> > *Any help on how to use the AWS version 4 signature for the blobstore APIs
> > will be much appreciated. *
> >
> > *NB: With the last commit mentioned in the above reply, we do not see the
> > region not supported exception anymore. *
> >
> > *Thanks!*
> >
> > *Regards*
> >
> > *Archana*
> > <https://uk-mg42.mail.yahoo.com/neo/launch?.rand=6ppet36ik2beh#>
> >
> >
> > On Tuesday, 8 March 2016, 10:54, Andrew Gaul <g...@apache.org> wrote:
> >
> >
> > Please test again with the latest master which includes a fix:
> >
> > https://git-wip-us.apache.org/repos/asf?p=jclouds.git;a=commit;h=c18371a7
> >
> > On Mon, Mar 07, 2016 at 12:29:59PM +, Archana C wrote:
> > > public class App
> > > {
> > > public static void main( String[] args ) throws IOException
> > > {
> > > // TODO Auto-generated method stub
> > > // TODO Auto-generated method stub
> > > String containername = "archanatrial12";
> > > String objectname = "object1";
> > > String tempFile =
> > "/home/archana/Eclipse/trialV42/src/main/java/trialV41/trialV42/result.txt";
> > > //int length;

Re: requested location eu-central-1, which is not in the configured locations

2016-03-07 Thread Andrew Gaul
Please test again with the latest master which includes a fix:

https://git-wip-us.apache.org/repos/asf?p=jclouds.git;a=commit;h=c18371a7

On Mon, Mar 07, 2016 at 12:29:59PM +, Archana C wrote:
> public class App
> {
>     public static void main( String[] args ) throws IOException
>     {
>         String containername = "archanatrial12";
>         String objectname = "object1";
>         String tempFile = "/home/archana/Eclipse/trialV42/src/main/java/trialV41/trialV42/result.txt";
>         //int length;
>
>         // s3.amazonaws.com   s3.eu-central-1.amazonaws.com   s3-external-1.amazonaws.com
>
>         BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
>                 .credentials("XXX", "YYY")
>                 .buildView(BlobStoreContext.class);
>
>         // Access the BlobStore
>         BlobStore blobStore = context.getBlobStore();
>         //Location loc = "us-east-1";
>         Location loc = new LocationBuilder().scope(LocationScope.REGION)
>                 .id("eu-central-1")
>                 .description("region")
>                 .build();
>
>         // Create a Container
>         blobStore.createContainerInLocation(loc, containername);
>
>         // Create a Blob
>         File input = new File("/home/archana/Eclipse/jclouds1/src/main/java/jclouds1/sample.txt");
>         long length = input.length();
>         // Add a Blob
>         Blob blob = blobStore.blobBuilder(objectname).payload(Files.asByteSource(input))
>                 .contentLength(length).contentDisposition(objectname).build();
>
>         // Upload the Blob
>         String eTag = blobStore.putBlob(containername, blob);
>         System.out.println(eTag);
>     }
> }
>
> Error: requested location eu-central-1, which is not in the configured locations
> A solution to rectify the issue is required.
>
> Regards
> Archana

-- 
Andrew Gaul
http://gaul.org/


Re: jClouds 2.0 MultiPart Upload

2017-02-03 Thread Andrew Gaul
PutOptions *takes* an ExecutorService which allows multiple threads to
concurrently upload multiple parts.
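
A sketch of the call (assuming the multipart(ExecutorService) overload
described above; the pool size is arbitrary):

ExecutorService executor = Executors.newFixedThreadPool(4);
String eTag = blobStore.putBlob(containerName, blob,
      new PutOptions().multipart(executor));
executor.shutdown();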

On Sat, Feb 04, 2017 at 03:34:30AM +, Archana C wrote:
> Hi 
> 
> I think the question was not clear. Parallel upload of multiple files is fine
> and that can be achieved by using an ExecutorService.
> The question here is: does multipart upload, i.e., the upload of each part,
> happen in parallel?
> Is sequential upload of parts deprecated?
> Regards
> Archana
> 
>  
> 
> On Saturday, 4 February 2017, 1:30, Andrew Gaul <g...@apache.org> wrote:
>  
> 
 We rewrote multi-part uploads in jclouds 2.0.  You should pass an
 ExecutorService via PutOptions in your call to BlobStore.putBlob.
> 
> On Fri, Feb 03, 2017 at 01:11:15PM +, Archana C wrote:
> > Hi 
> > 
> > Is SequentialMultiPartUpload deprecated in jClouds 2.0? Are all the
> > multipart uploads parallel now?
> > Regards
> > Archana
> > 
> >    On Friday, 3 February 2017, 18:39, Archana C <carchan...@yahoo.co.uk> wrote:
> >  
> > 
> >  Thanks it helped
> > Regards
> > Archana
> > 
> >    On Friday, 3 February 2017, 12:06, Ignasi Barrera <n...@apache.org> wrote:
> >  
> > 
> >  It looks like the OOM exception is thrown when writing the wire logs. When 
> >using the blob store apis you might see binary data in the logs, as the 
> >"jclouds.wire" logger logs the response/request payloads which might be huge 
> >for some blobs and can cause this kind of exceptions.
> > Could you try disabling the wire logs? (I recommend doing this for 
> > production environments).
> > Perhaps for your use case the "jclouds.headers" are enough; that will log 
> > all request/reponse path and headers but skip the bodies.
> > More on this here:https://issues.apache.org/jira/browse/JCLOUDS-1187
> > https://issues.apache.org/jira/browse/JCLOUDS-932
> > 
> > 
> > HTH!
> > I.
> > On Feb 3, 2017 06:22, "Archana C" <carchan...@yahoo.co.uk> wrote:
> > 
> > Hi 
> > 
> > I have written a sample code for multipart upload using jClouds-2.0
> >     Properties overrides = new Properties();
> >     BlobStoreContext context = ContextBuilder.newBuilder("openstack-swift")
> >             .endpoint("http://x.xxx.xx.xx:5000/v2.0")
> >             .credentials("xx:xx", "xx")
> >             .overrides(overrides)
> >             .modules(modules)
> >             .buildView(BlobStoreContext.class);
> >     BlobStore blobStore = context.getBlobStore();
> >     blobStore.createContainerInLocation(null, CONTAINER_NAME);
> >     Path path = Paths.get("test2");
> >     File f = new File("test2");
> >     byte[] byteArray = Files.readAllBytes(path);
> >     Payload payload = newByteSourcePayload(wrap(byteArray));
> >     PutOptions opt = new PutOptions();
> >     opt.multipart();
> >     Blob blob = blobStore.blobBuilder(OBJECT_NAME)
> >             .payload(payload).contentLength(f.length())
> >             .build();
> >     String etag = blobStore.putBlob(CONTAINER_NAME, blob, opt);
> > test2 is the file I am trying to upload which is of size 36MB and I am 
> > getting the following exception
> > 10:21:52.355 [main] DEBUG o.j.h.i.JavaUrlHttpCommandExecutorService - Sending request 1344471693: PUT http://x.x.x.x:8091/v1/AUTH_0909ac10e7024847b1a9fe9787c7de8f/arctestMP HTTP/1.1
> > 10:21:52.356 [main] DEBUG jclouds.headers - >> PUT http://x.x.x.x:8091/v1/AUTH_0909ac10e7024847b1a9fe9787c7de8f/arctestMP HTTP/1.1
> > 10:21:52.356 [main] DEBUG jclouds.headers - >> Accept: application/json
> > 10:21:52.357 [main] DEBUG jclouds.headers - >> X-Auth-Token: fd72b74db90c46cabcca3f317d5a09d4
> > 10:21:53.129 [main] DEBUG o.j.h.i.JavaUrlHttpCommandExecutorService - Receiving response 1344471693: HTTP/1.1 201 Created
> > 10:21:53.129 [main] DEBUG jclouds.headers - << HTTP/1.1 201 Created
> > 10:21:53.129 [main] DEBUG jclouds.headers - << Date: Fri, 03 Feb 2017 04:51:53 GMT
> > 10:21:53.129 [main] DEBUG jclouds.headers - << X-Trans-Id: tx83ba6249347c43c99bb41-0058940c68
> > 10:21:53.129 [main] DEBUG jclouds.headers - << Connection: keep-alive
> > 10:21:53.129 [main] DEBUG jclouds.headers - << Content-Type: text/html; charset=UTF-8
> > 10:21:53.129 [main] DEBUG jclouds.header

Re: S3 bucket policies

2017-01-26 Thread Andrew Gaul
jclouds only supports bucket and object ACLs.  Odd that minio does not
support these, as S3 clones implement ACLs more widely than policies.  That
being said, the functionality is simple and we could easily add bucket
policies.  Or you could use an external tool like the aws cli to set
them.  I opened an issue for this and we welcome pull requests on
GitHub:

https://issues.apache.org/jira/browse/JCLOUDS-1232
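
For reference, a minimal sketch of the ACL support jclouds does expose
(container and blob names are illustrative; whether minio honors these calls
is a separate question):

    // make a container publicly readable via ACLs rather than a bucket policy
    blobStore.setContainerAccess("my-container", ContainerAccess.PUBLIC_READ);
    // or grant anonymous read on a single object
    blobStore.setBlobAccess("my-container", "my-blob", BlobAccess.PUBLIC_READ);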

On Tue, Jan 24, 2017 at 05:40:06PM +0100, cen wrote:
> Hi,
> 
> I am trying to use jClouds 2.0 with minio (s3 clone). Minio does not
> support ACLs but they do support bucket policies.
> 
> Does jClouds blob store support bucket policies? I couldn't find any
> mentions in the docs and my source code exploration didn't find
> anything that fits the description. If this exists, an example would
> be most welcome.
> 
> Best regards, cen
> 
> 

-- 
Andrew Gaul
http://gaul.org/


Re: how to test swift api in transient mode

2017-01-18 Thread Andrew Gaul
[parallel discussion on Stack Overflow
https://stackoverflow.com/questions/40929969/transient-mode-for-unit-tests-with-jclouds-and-openstack-swift-doesnt-work
 ]

You can't create a SwiftApi view of a transient blobstore; instead you
must use the portable BlobStore view:

String provider = ...;  // openstack-swift or transient
BlobStoreContext context = ContextBuilder.newBuilder(provider)
        .endpoint(endpoint)
        .credentials(user, password)
        .buildView(BlobStoreContext.class);  // BlobStoreContext is a view, so use buildView
BlobStore blobStore = context.getBlobStore();
// interact with blobStore, e.g., get, put
...
context.close();

On Tue, Nov 29, 2016 at 03:00:00PM +0100, Aleksandra Nowak wrote:
> I had to switch (or I thought that I had to) because I wanted
> JClouds to take care of repeating PUT operations after a failure.
> And for now it looks like it helped for this issue.
> However, I would like to have junit test for this code.
> Before I used the same code but with "transient" as a provider.
> Now, after changing the dependency version to 2.0.0, its unit tests throw:
> 
> com.google.inject.ConfigurationException: Guice configuration errors:
> 
> 1) No implementation for org.jclouds.openstack.swift.v1.SwiftApi was bound.
>   while locating org.jclouds.openstack.swift.v1.SwiftApi
> 
> 
> The same piece of code works perfectly when run with a live swift
> instance and "openstack-swift" as provider.
> Best,
> Aleksandra
> 
> On 25.11.2016 at 20:03, Andrew Gaul wrote:
> >Aleksandra, you can use both openstack-swift and transient APIs via the
> >BlobStore interface.  You should only need to change the provider name
> >from swift to openstack-swift.
> >
> >On Fri, Nov 25, 2016 at 04:32:57PM +0100, Aleksandra Nowak wrote:
> >>Hi,
> >>I'm trying to migrate my code to use jclouds 2.0.0. I changed from
> >>swift dependency to openstack-swift, so I use SwiftApi object to
> >>access Swift (instead of BlobStore which I used before). The code
> >>seems to look fine when I run it on a live cluster.
> >>
> >>But I had unit tests that were using transient mode/container and
> >>now I cannot make them work.
> >>
> >>I got:
> >>
> >>com.google.inject.ConfigurationException: Guice configuration errors:
> >>
> >>1) No implementation for org.jclouds.openstack.swift.v1.SwiftApi was bound.
> >>   while locating org.jclouds.openstack.swift.v1.SwiftApi
> >>
> >>Is transient mode still valid in 2.0? It is listed as possible
> >>"providerOrApi" in
> >>org.jclouds.ContextBuilder#newBuilder(java.lang.String). How to use
> >>it? There is no information about it on the documentation pages, whereas
> >>it still claims that " Write your unit tests without mocking
> >>complexity or the brittleness of remote connections. Writing tests
> >>for cloud endpoints is difficult. jclouds provides you with Stub
> >>connections that simulate a cloud without creating network
> >>connections." (from
> >>http://jclouds.apache.org/start/what-is-jclouds/)
> >>
> >>Thank you in advance,
> >>Aleksandra
> >>
> 

-- 
Andrew Gaul
http://gaul.org/


Re: Issue when trying to download from Swift3 middleware...

2017-01-19 Thread Andrew Gaul
Sorry for the late response -- can you enable jclouds wire logs and try
again?

https://jclouds.apache.org/reference/logging/
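
A minimal sketch of wiring the logs up, assuming an SLF4J backend (provider,
endpoint, and credentials are placeholders):

    BlobStoreContext context = ContextBuilder.newBuilder("openstack-swift")
          .endpoint(endpoint)
          .credentials(identity, credential)
          // route the jclouds.wire and jclouds.headers categories through
          // SLF4J so the logging backend controls their levels
          .modules(ImmutableSet.<Module>of(new SLF4JLoggingModule()))
          .buildView(BlobStoreContext.class);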

On Mon, Dec 05, 2016 at 03:20:35PM +0530, Pratheesh wrote:
> Hi
> 
>  
> 
> I am connecting to Swift3 middleware and trying to upload and download files
> from the configured bucket. Upload is working perfectly. 
> 
> But the download is failing with following error:
> 
> org.jclouds.aws.AWSResponseException: request GET
> http://mydomain:8080/myapp?acl HTTP/1.1 failed with code 500, error:
> AWSError{requestId='txb8a5b39946394b1282d4a-0058452eb1',
> requestToken='txb8a5b39946394b1282d4a-0058452eb1', code='InternalError',
> message='unexpected status code 406'}
> 
>  
> 
> Can you please let me know how to resolve this error?
> 
>  
> 
> Sample code tried:
> 
> 
> Properties overrides = new Properties();
> overrides.setProperty(PROPERTY_S3_VIRTUAL_HOST_BUCKETS, "false");
> overrides.setProperty(PROPERTY_S3_SERVICE_PATH, "/");
> 
> BlobStoreContext context = ContextBuilder.newBuilder("s3")
>         .endpoint("http://mydomain:8080")
>         .credentials("userid", "password")
>         .overrides(overrides)
>         .buildView(BlobStoreContext.class);
> BlobStore blobStore = context.getBlobStore();
> System.out.println("Obtained blob store ...");
> 
> Blob blob = blobStore.getBlob("mybucket", "Dashboard.jpg");
> System.out.println("Finding file...");
> InputStream in = blob.getPayload().openStream();
> System.out.println("Downloading...");
> 
> FileOutputStream fout = new FileOutputStream("E:\\jCloudStorage\\downloads\\Dashboard.jpg");
> byte[] bytes = new byte[1024];
> int length = -1;
> while ((length = in.read(bytes, 0, 1024)) != -1) {
>     fout.write(bytes, 0, length);
> }
> fout.flush();
> fout.close();
> in.close();
> System.out.println("Download complete...");
> 
>  
> 

-- 
Andrew Gaul
http://gaul.org/


Re: Directory Name - 2.0.0-SNAPSHOT

2016-10-26 Thread Andrew Gaul
You did not provide an exact test case so I will provide one for you.  I
successfully tested the following against aws-s3, azureblob, and
filesystem:

   @Test(groups = { "integration", "live" })
   public void testListRecursive() throws Exception {
      BlobStore blobStore = view.getBlobStore();
      String containerName = getContainerName();
      try {
         blobStore.putBlob(containerName, blobStore.blobBuilder("blob-1").payload("").build());
         blobStore.putBlob(containerName, blobStore.blobBuilder("blob-2").payload("").build());
         blobStore.putBlob(containerName, blobStore.blobBuilder("dir/blob-3").payload("").build());

         ListContainerOptions options = new ListContainerOptions().recursive();
         PageSet<? extends StorageMetadata> pageSet = view.getBlobStore().list(containerName, options);
         assertThat(pageSet).hasSize(3);
         assertThat(pageSet.getNextMarker()).isNull();

         Iterator<? extends StorageMetadata> it = pageSet.iterator();
         assertThat(it.next().getName()).isEqualTo("blob-1");
         assertThat(it.next().getName()).isEqualTo("blob-2");
         assertThat(it.next().getName()).isEqualTo("dir/blob-3");
      } finally {
         returnContainer(containerName);
      }
   }

If you use the fake directory support in 1.9, your mileage may vary, and
jclouds 2.0 deprecated this misfeature and 2.1 will remove it:

https://issues.apache.org/jira/browse/JCLOUDS-1066

Prefix and delimiter support added in jclouds 2.0 replaces directories.

On Wed, Oct 26, 2016 at 04:05:33PM +, Paya, Ashkan wrote:
> Thank you for your response Andrew. So I'm trying to construct the structure I 
> mentioned earlier and then perform the container listing on it using 
> ListContainerOptions.Builder.recursive(). Here is the result from different 
> providers:
> 
> * Filesystem:
>   - blob-1
>   - blob-2
>   - dir/blob-3
> 
> * AWS-S3:
>   - blob-1
>   - blob-2
>   - dir/
>   - dir/blob-3
> 
> * Azureblob
>   - blob-1
>   - blob-2
>   - dir
>   - dir/blob-3
> 
> On a separate note, there is a discrepancy between aws and azure directories 
> since we have ‘/‘ in the returned results of the former. This was not the 
> case in 1.9.2.
> 
> Thank you,
> Ashkan
> 
> 
> 
> 
> 
> On 10/25/16, 9:43 PM, "Andrew Gaul" <g...@apache.org> wrote:
> 
> >[Moving to jclouds-user list]
> >
> >Can you provide the exact test case, including ListContainerOptions, and
> >results from both the filesystem and s3 providers?  2.0 includes many
> >changes to align the former with the latter.
> >
> >For what it is worth, directories are a jclouds fiction and something we
> >deprecated in 2.0 and will remove in 2.1.  The new prefix and delimiter
> >support in 2.0 matches how real providers work.
> >
> >On Wed, Oct 26, 2016 at 12:52:34AM +, Paya, Ashkan wrote:
> >> Hello,
> >> 
> >> When I create a directory within a container in Filesystem, 
> >> blobstore.list() does not show the directory name separately. For example, 
> >> when I generate the following structure and call blobstore.list, I do not 
> >> get the ‘dir/‘ as a separate element:
> >> 
> >> Container
> >> |___blob1
> >> |___blob2
> >> |___dir
> >>   |__ blob3
> >> 
> >> => blobstore.list returns:
> >> 
> >>   *   blob1
> >>   *   blob2
> >>   *   dir/blob3
> >> 
> >> Thank you,
> >> Ashkan
> >
> >-- 
> >Andrew Gaul
> >http://gaul.org/

-- 
Andrew Gaul
http://gaul.org/


Re: Accessing blob-store by private-key only

2016-10-26 Thread Andrew Gaul
If you want HTTP basic access authentication you will need to write a
custom authentication module which implements BlobRequestSigner.  You
might look at how AWS-S3 uses a v4 signer while S3 uses a v2 signer.  To
disable authentication you would do something similar.

On Wed, Oct 26, 2016 at 05:34:57AM +, Archana C wrote:
> Hi 
> 
> The provider is S3 compatible.  How do we do HTTP basic access authentication 
> rather than credentials?
> How do we disable HTTP basic authentication and use only certificates for 
> authentication?
> Regards
> Archana
> 
> On Wednesday, 26 October 2016, 10:18, Andrew Gaul <g...@apache.org> wrote:
>  
> 
>  Can you provide more details on your use case, e.g., which provider?
> All providers use an identity and credential.  Long ago someone asked
> about HTTP basic access authentication which we do not support but
> should be easy to add.
> 
> On Tue, Oct 25, 2016 at 07:09:52AM +, Archana C wrote:
> > Hi 
> > 
> > Is there any way to authenticate a blob store using a private key alone, 
> > instead of passing credentials (identity, key)?  Does jclouds support 
> > that kind of authentication?
> > 
> > Regards
> > Archana
> 
> -- 
> Andrew Gaul
> http://gaul.org/
> 

-- 
Andrew Gaul
http://gaul.org/


Re: Accessing blob-store by private-key only

2016-10-25 Thread Andrew Gaul
Can you provide more details on your use case, e.g., which provider?
All providers use an identity and credential.  Long ago someone asked
about HTTP basic access authentication which we do not support but
should be easy to add.

On Tue, Oct 25, 2016 at 07:09:52AM +, Archana C wrote:
> Hi 
> 
> Is there any way to authenticate a blob store using a private key alone, 
> instead of passing credentials (identity, key)?  Does jclouds support that 
> kind of authentication?
> 
> Regards
> Archana

-- 
Andrew Gaul
http://gaul.org/


Re: Directory Name - 2.0.0-SNAPSHOT

2016-10-25 Thread Andrew Gaul
[Moving to jclouds-user list]

Can you provide the exact test case, including ListContainerOptions, and
results from both the filesystem and s3 providers?  2.0 includes many
changes to align the former with the latter.

For what it is worth, directories are a jclouds fiction and something we
deprecated in 2.0 and will remove in 2.1.  The new prefix and delimiter
support in 2.0 matches how real providers work.

On Wed, Oct 26, 2016 at 12:52:34AM +, Paya, Ashkan wrote:
> Hello,
> 
> When I create a directory within a container in Filesystem, blobstore.list() 
> does not show the directory name separately. For example, when I generate the 
> following structure and call blobstore.list, I do not get the ‘dir/‘ as a 
> separate element:
> 
> Container
> |___blob1
> |___blob2
> |___dir
>   |__ blob3
> 
> => blobstore.list returns:
> 
>   *   blob1
>   *   blob2
>   *   dir/blob3
> 
> Thank you,
> Ashkan

-- 
Andrew Gaul
http://gaul.org/


Re: Content Length with compression with Jclouds PutBlob

2016-10-12 Thread Andrew Gaul
What you desire is not possible with single-part uploads.
Content-Encoding is not magic; adding it to an HTTP PUT is just an
instruction for subsequent HTTP GET to decode using that filter.  You
still need to calculate the Content-Length before the HTTP PUT for most
providers, e.g., AWS S3.

You may accomplish what you want via multi-part uploads, if your
provider's MPU restrictions allow it.  For example, AWS S3 requires that
all parts except the final one contain at least 5 MB.  You must slice
your data and ensure compression yields at least this part size.  Note
that you can individually gzip each part; the format allows
concatenation of multiple streams as the following shell code
demonstrates:

$ (echo | gzip -c ; echo | gzip -c) | gunzip -c



You must use the new Multipart API introduced in jclouds 2.0 for this
approach.
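
A rough sketch of that approach (untested; the slices source and gzip helper
are assumed, and each compressed part must still meet the provider's minimum
part size):

    BlobMetadata metadata = blobStore.blobBuilder("object.gz")
          .payload(new byte[0])             // placeholder; only the metadata is used
          .contentEncoding("gzip")
          .build().getMetadata();
    MultipartUpload mpu = blobStore.initiateMultipartUpload(container, metadata,
          new PutOptions());
    List<MultipartPart> parts = new ArrayList<MultipartPart>();
    int partNumber = 1;
    for (byte[] slice : slices) {
       byte[] compressed = gzip(slice);     // each slice gzipped independently
       parts.add(blobStore.uploadMultipartPart(mpu, partNumber++,
             Payloads.newByteArrayPayload(compressed)));
    }
    String etag = blobStore.completeMultipartUpload(mpu, parts);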

On Thu, Oct 06, 2016 at 08:29:23PM +0530, Dileep Dixith wrote:
> Hi,
> 
> We are planning to enable compression before we send data to Cloud as part 
> of put blob. We have written a payload and ByteSource.
> 
> For larger files, once we open a stream with the cloud, we read a slice from 
> the local file and send it to the cloud as part of the putBlob method.
> 
> During this, we have a compression module which will compress each slice 
> of data rather than the complete stream in one shot, and we want to change the 
> content length after the compression module performs its operation.
> 
> But it looks like once we open a stream we cannot change the 
> content length of the blob.
> 
> I want to know if there is a way to change the content length of the metadata 
> after compression of all the slices completes.
> 
> Regards,
> Dileep

-- 
Andrew Gaul
http://gaul.org/


Re: Fwd: Jclouds with HDFS file system

2017-03-24 Thread Andrew Gaul
You can try looking at a bit-rotted example here:

https://github.com/jclouds/jclouds-examples/tree/master/blobstore-hdfs

Note that this code does not work any more and you will need to do
significant work to port it from jclouds 1.1 to 2.0.

On Wed, Mar 22, 2017 at 11:42:05PM -0400, Andrew Phillips wrote:
> [forwarding to user@...]
> 
>  Original Message 
> Subject: Jclouds with HDFS file system
> Date: 2017-03-22 22:43
> From: Chaitanya Anumalasetty <chaituchaitu...@gmail.com>
> To: andr...@apache.org
> 
> Hi Andrew,
> 
> I need one example Java code to upload/download file from HDFS file
> system using Jclouds.
> 
> If you can help me that will be great.
> Thanks in advance.

-- 
Andrew Gaul
http://gaul.org/


Re: Swift Multipart upload SLO or DLO

2017-04-04 Thread Andrew Gaul
jclouds supports static large objects with Swift.  We could add support
for dynamic objects but these have a number of caveats and differ from
other providers.

On Tue, Apr 04, 2017 at 04:28:56PM +, Archana C wrote:
> Hi 
> 
> Does jclouds 2.0.0 support Swift Static Large Object upload (SLO) or Dynamic 
> Large Object upload (DLO)?
> 
> As per our observation,
> 1. It looks like jClouds does SLO and not DLO.
> 2. SLO requires no headers whereas DLO requires X-Object-Manifest as a header 
> during manifest upload, as mentioned in [1].
> 
> [1] https://docs.openstack.org/user-guide/cli-swift-large-object-creation.html
> Regards
> Archana
> 

-- 
Andrew Gaul
http://gaul.org/


Re: Swift Multipart Manifest Upload

2017-04-04 Thread Andrew Gaul
Mat Mannion reported and fixed a related issue; does this address your
question?

https://github.com/jclouds/jclouds/pull/1083
https://issues.apache.org/jira/browse/JCLOUDS-1264

On Tue, Mar 28, 2017 at 07:59:16AM +, Archana C wrote:
> 
> The Content-Length of the manifest file for a multipart upload in Swift does not 
> have the size of the entire blob; instead it has the content length of the manifest 
> file.  As per the javadoc [1], the Content-Length of the manifest must be the size of 
> the blob.
> As per my observation, the content length of the manifest is not the content 
> length of the blob.
> Also, from the comments mentioned in the reference below, it is clear that the content 
> length will be set to the object size once the PUT operation is complete.
> Can you point us to the code snippet where the computation of content length 
> happens once the PUT is completed?
> 
> [1] 
> https://github.com/jclouds/jclouds/blob/master/apis/openstack-swift/src/main/java/org/jclouds/openstack/swift/v1/binders/BindManifestToJsonPayload.java
> 
> Regards
> Archana

-- 
Andrew Gaul
http://gaul.org/


Re: jClouds 2.0

2017-04-06 Thread Andrew Gaul
jclouds is a community project so DLO support will be added when someone
contributes it, likely you.  I would look at how other libraries bridge
this gap and submit a pull request[1].  Alternatively you could migrate
your DLO to SLO with some external tool.

[1] https://cwiki.apache.org/confluence/display/JCLOUDS/How+to+Contribute

On Thu, Apr 06, 2017 at 04:03:04PM +, Archana C wrote:
> 
> Hi 
> 1. Is there any timeline planned for DLO support ?
> 2. How do we achieve backward compatibility between the ETags of objects 
> migrated from 1.9 (DLO) and the ETags recalled in 2.0 (SLO)?
> 
> Regards
> Archana
> 

-- 
Andrew Gaul
http://gaul.org/


Re: Swift Multipart upload SLO or DLO

2017-04-05 Thread Andrew Gaul
Sorry I cannot help you debug your Swift configuration.  jclouds SLO
works with properly configured public providers like Rackspace so I
suggest exploring the differences between it and your local setup.
jclouds 2.0.0 does not support DLO and the older jclouds 1.9.1 Swift
implementation works differently so I do not recommend using it.  I did
not mention a test case and do not understand your last comment.

On Wed, Apr 05, 2017 at 10:20:15AM +, Archana C wrote:
> Hi
> 
> 1. Do you enable "SLO" in Swift as a required filter (swift/proxy/server.py)?
> 
>     required_filters = [
>         {'name': 'catch_errors'},
>         {'name': 'gatekeeper',
>          'after_fn': lambda pipe: (['catch_errors']
>                                    if pipe.startswith('catch_errors')
>                                    else [])},
>         {'name': 'dlo', 'after_fn': lambda _junk: [
>             'staticweb', 'tempauth', 'keystoneauth',
>             'catch_errors', 'gatekeeper', 'proxy_logging']}]
> 
> Should this be slo for static large object upload?  Or is there anything to
> be done to treat the request as SLO?
> 
> 2. I noticed that "DLO" is enabled as a default filter (confirmed from Swift
> logs), hence adding the header "X-Object-Manifest" only helped get the correct
> Content-Length.
> 3. I compared jclouds 1.9.1 CommonSwiftClient.java:putObjectManifest with
> jclouds 2.0 StaticLargeObject.java:replaceManifest, and I figured out that
> "X-Object-Manifest" is used as required in jclouds 1.9.1 and has been
> omitted in 2.0.
> 
> Please share the sample test case that you mentioned.
> Regards
> Archana
> 
> On Tuesday, 4 April 2017, 23:39, Andrew Gaul <g...@apache.org> wrote:
>  
> 
>  jclouds supports static large objects with Swift.  We could add support
> for dynamic objects but these have a number of caveats and differ from
> other providers.
> 
> On Tue, Apr 04, 2017 at 04:28:56PM +, Archana C wrote:
> > Hi 
> > 
> > Does jclouds 2.0.0 support Swift Static Large Object upload (SLO) or 
> > Dynamic Large Object upload (DLO)?
> > 
> > As per our observation,
> > 1. It looks like jClouds does SLO and not DLO.
> > 2. SLO requires no headers whereas DLO requires X-Object-Manifest as header 
> > while manifest upload as mentioned in [1].
> > 
> > [1] 
> > https://docs.openstack.org/user-guide/cli-swift-large-object-creation.html
> > Regards
> > Archana
> > 
> 
> -- 
> Andrew Gaul
> http://gaul.org/
> 

-- 
Andrew Gaul
http://gaul.org/


Re: Openstack-Swift

2017-04-14 Thread Andrew Gaul
Why don't you use the rackspace-cloudfiles-us provider which sets up
authentication for you?
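
A minimal sketch (the username and API key are placeholders):

    BlobStoreContext context = ContextBuilder.newBuilder("rackspace-cloudfiles-us")
          .credentials("myUsername", "myApiKey")  // the provider wires up keystone auth
          .buildView(BlobStoreContext.class);
    BlobStore blobStore = context.getBlobStore();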

On Thu, Apr 13, 2017 at 10:30:14PM +, Paya, Ashkan wrote:
> Hello,
> 
> We want to verify access to our openstack-swift account under Rackspace via 
> “apiAccessKeyCredential” keystone authentication type. Our credentials are 
> valid and we can issue the following curl command successfully and receive 
> tokenId, tenant name and Id and etc.
> 
>     curl https://identity.api.rackspacecloud.com/v2.0/tokens \
>       -X POST \
>       -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"${USER}","apiKey":"${APIKEY}"}}}' \
>       -H "Content-type: application/json" | python -m json.tool
> 
> Now, when we configure our BlobStoreContext object using the same parameters 
> (endpoint, keystone auth type and etc) and trying to get the 
> RegionScopedBlobStoreContext form it, we receive the following response:
> 
>   *   Failed in command: …., org.jclouds.http.HttpResponseException: command: 
> POST https://identity.api.rackspacecloud.com/v2.0/tokens HTTP/1.1 failed with 
> response: HTTP/1.1 400 Bad Request; content: 
> [{"badRequest":{"code":400,"message":"Invalid json request body"}}]
> 
> In which they explain 400 as “Missing required parameters. This error also 
> occurs if you include both the tenant name and ID in the request.”. Has 
> anyone else experienced the same scenario or have some insight to share?
> 
> Thank you,
> Ashkan

-- 
Andrew Gaul
http://gaul.org/


Re: jClouds 2.0

2017-04-08 Thread Andrew Gaul
Please refer to the OpenStack Swift documentation[1] which should answer
all your questions.  All jclouds pull requests must have unit and
integration tests so you should be able to determine this empirically as
well.

[1] https://docs.openstack.org/developer/swift/overview_large_objects.html

On Sat, Apr 08, 2017 at 04:00:23AM +, Archana C wrote:
> Ok. We will work on this.
> As per our observation, ETag computation for SLO and DLO differs:
> For DLO, the ETag is the MD5 of the manifest file.
> For SLO, the ETag is the MD5 of the concatenated part ETags.
> Is our understanding correct?
> Regards
> Archana
> 
> On Friday, 7 April 2017, 1:00, Andrew Gaul <g...@apache.org> wrote:
>  
> 
>  jclouds is a community project so DLO support will be added when someone
> contributes it, likely you.  I would look at how other libraries bridge
> this gap and submit a pull request[1].  Alternatively you could migrate
> your DLO to SLO with some external tool.
> 
> [1] https://cwiki.apache.org/confluence/display/JCLOUDS/How+to+Contribute
> 
> On Thu, Apr 06, 2017 at 04:03:04PM +, Archana C wrote:
> > 
> > Hi 
> > 1. Is there any timeline planned for DLO support ?
> > 2. How do we achieve backward compatibility between the ETags of objects 
> > migrated from 1.9 (DLO) and the ETags recalled in 2.0 (SLO)?
> > 
> > Regards
> > Archana
> > 
> 
> -- 
> Andrew Gaul
> http://gaul.org/
> 

-- 
Andrew Gaul
http://gaul.org/


Re: remove EMC Atmos provider?

2017-07-08 Thread Andrew Gaul
I found that AT&T Synaptic offers a public Atmos provider which our
integration tests mostly pass.  fog and libcloud continue to provide
Atmos support and I believe Dell/EMC ECS supports Atmos API as well as
S3.  Thus we should give Atmos a reprieve.

Except that I want to improve our object listing support, preferring
prefix and delimiters over our fake directory support:

https://issues.apache.org/jira/browse/JCLOUDS-1066

I will investigate this more but I do not want to do a bunch of hacking
to support an unused provider.  Please speak up if you use Atmos.

On Fri, Jul 07, 2017 at 10:24:38AM +0200, Ignasi Barrera wrote:
> Given the silence in this thread, I'd say go ahead and remove it.
> 
> On 3 July 2017 at 23:47, Andrew Gaul <g...@apache.org> wrote:
> > Does anyone use the EMC Atmos provider?  As I understand it, EMC has not
> > developed this in some years, preferring the S3 and Swift protocols for
> > its products.  Atmos is a bit of an odd bird, having more of a
> > filesystem interface and semantics than an object store and lacking
> > features like multi-part upload and server-side filtering.  The recent
> > disappearance of Atmos Online leaves me unable to test the provider for
> > the jclouds 2.0.2 release.  I would like to remove the Atmos provider if
> > no one is using it.
> >
> > --
> > Andrew Gaul
> > http://gaul.org/

-- 
Andrew Gaul
http://gaul.org/


Re: SigV4 for S3 compatible Object Store

2017-07-20 Thread Andrew Gaul
jclouds only supports V4 signature with the aws-s3 provider and not the
generic s3 api.  Generally free software projects do not make plans to
add a given feature; instead we rely on volunteers to contribute.  Would
you like to add support for this?  Community software requires community
contributions!

On Fri, Jul 21, 2017 at 04:00:20AM +, Archana C wrote:
> Hi,
> jclouds versions up to 2.0.1 support signature V2 for S3-compatible object 
> storage. Is there a plan to support signature V4 for S3-compatible object 
> storage?
> Regards
> Archana

-- 
Andrew Gaul
http://gaul.org/


remove EMC Atmos provider?

2017-07-03 Thread Andrew Gaul
Does anyone use the EMC Atmos provider?  As I understand it, EMC has not
developed this in some years, preferring the S3 and Swift protocols for
its products.  Atmos is a bit of an odd bird, having more of a
filesystem interface and semantics than an object store and lacking
features like multi-part upload and server-side filtering.  The recent
disappearance of Atmos Online leaves me unable to test the provider for
the jclouds 2.0.2 release.  I would like to remove the Atmos provider if
no one is using it.

-- 
Andrew Gaul
http://gaul.org/


remove Clojure support?

2017-07-03 Thread Andrew Gaul
Does anyone use the jclouds Clojure bindings?  I raised this question
three years ago and despite some offers to help these bindings have seen
little activity:

https://lists.apache.org/thread.html/ff486477fa5af2ef944845d6b998d12b39e144e7353ba7cca532@1392843883@%3Cuser.jclouds.apache.org%3E

These bindings add to jclouds maintenance and make evolving core APIs
harder.  Further they really belong outside of the jclouds repository.
I would like to remove these in 2.1.0 unless someone makes a compelling
case.

-- 
Andrew Gaul
http://gaul.org/


Re: Update default part size other than 32MB

2017-04-26 Thread Andrew Gaul
Opened https://issues.apache.org/jira/browse/JCLOUDS-1280 to track this
feature request.

On Tue, Feb 14, 2017 at 01:55:02PM -, Rishika Kedia wrote:
> Hello,
> 
> Is there a way to update the default part size during the MPU, other than 32MB, 
> for the AWS-S3 blobstore in jclouds 2.0?  Earlier, for 1.9.* versions, this was not an 
> issue as MultipartUploadSlicingAlgorithm.java was not a final class.
> 
> Thanks,
> Rishika

-- 
Andrew Gaul
http://gaul.org/


Re: Promoting B2 and GCS

2017-05-09 Thread Andrew Gaul
I promoted the B2 and GCS providers from labs to core in the master
branch which will become 2.1.0.  We use the labs repository to incubate
providers and promote them to core after they mature.  Users should
update their maven groupIDs from org.apache.jclouds.labs to
org.apache.jclouds.provider in your applications.

On Tue, Mar 21, 2017 at 08:00:00PM -0400, Andrew Gaul wrote:
> Honestly, we could promote GCS to core today -- let's skip the
> intermediate step.  The URL signers are not much work which I could
> implement some time next week.  The InputStream hack uses multipart
> upload to hack around a jclouds limitation but works fine from a user
> perspective.  Can you share your filter-branch commands so I can
> preserve history?
> 
> Related, we could promote Backblaze B2 from labs to core.  From my
> biased perspective this implementation has high quality.  B2 lacks a
> provider guide which I could also write next week.
> 
> On Tue, Mar 21, 2017 at 09:12:29AM +0100, Ignasi Barrera wrote:
> > Thanks for the links!
> > 
> > I don't have a good overview on the status of all the blob store
> > providers, so I trust your criteria. How significant are these issues,
> > when it comes to the BlobStore abstraction?
> > 
> > BTW, could we consider moving it to the "labs" repo? It does not make
> > sense to maintain a separate repo just for that provider. Moving to
> > labs does not require a groupId change, so it can be done in master
> > and 2.0.x.
> > 
> > On 21 March 2017 at 08:45, Andrew Gaul <g...@apache.org> wrote:
> > > We have a meta-issue[1] tracking exactly this!  From my perspective, GCS
> > > lacks temporary signed URLs[2] and has an involved issue with
> > > InputStream payloads[3] which make its implementation inferior to other
> > > blobstore providers.  I can sprint on the former for 2.1.0 but the
> > > latter requires some effort.  It would be great to eliminate
> > > jclouds-labs-google from a release process so perhaps we could overlook
> > > these issues?
> > >
> > > [1] https://issues.apache.org/jira/browse/JCLOUDS-944
> > > [2] https://issues.apache.org/jira/browse/JCLOUDS-902
> > > [3] https://issues.apache.org/jira/browse/JCLOUDS-912
> > >
> > > On Tue, Mar 21, 2017 at 08:21:44AM +0100, Ignasi Barrera wrote:
> > >> I'm just wondering if we should think about promoting GCS in the next
> > >> major. And if not, what's missing?
> > >>
> > >>
> > >> I.
> > >
> > > --
> > > Andrew Gaul
> > > http://gaul.org/
> 
> -- 
> Andrew Gaul
> http://gaul.org/

-- 
Andrew Gaul
http://gaul.org/


Re: unsubscribe

2017-05-17 Thread Andrew Gaul
You must follow instructions here:

https://www.apache.org/foundation/mailinglists.html

On Wed, May 17, 2017 at 12:56:06PM -0500, Lahiru Sandaruwan wrote:
> -- 
> --
> 
> Lahiru Sandaruwan
> Associate Technical Lead,
> WSO2 Inc., http://wso2.com
> 
> lean.enterprise.middleware
> 
> m: +94773325954
> e: lahi...@wso2.com b: http://lahiruwrites.blogspot.com/
> in: http://lk.linkedin.com/pub/lahiru-sandaruwan/16/153/146

-- 
Andrew Gaul
http://gaul.org/


Re: bulk-delete objects

2017-05-17 Thread Andrew Gaul
This is likely a limitation of the Swift API.

On Wed, May 17, 2017 at 12:10:47PM +, Archana C wrote:
> Hi 
> I am trying to perform a bulk-delete operation on a container.
> The request looks something like this:
> 
> {method=DELETE, 
> endpoint=http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete,
> headers={Accept=[application/json], 
> X-Auth-Token=[5c94641e6a4146f1a9857d6892206bbb]}, payload=[content=true, 
> contentMetadata=[cacheControl=null, contentDisposition=null, 
> contentEncoding=null, contentLanguage=null, contentLength=41, 
> contentMD5=null, contentType=text/plain, expires=null], written=false, 
> isSensitive=false]}
> 
> http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete
> 
> For non-MPU uploads, bulk-delete works fine. But in the case of bulk-delete on 
> multipart-uploaded files, only the manifest file gets removed; the segments 
> remain.
> Is there something missing?
> Regards
> Archana

-- 
Andrew Gaul
http://gaul.org/


Re: bulk-delete objects

2017-05-17 Thread Andrew Gaul
I filed https://bugs.launchpad.net/swift/+bug/1691523 to track this.

On Wed, May 17, 2017 at 10:53:56AM -0700, Andrew Gaul wrote:
> This is likely a limitation of the Swift API.
> 
> On Wed, May 17, 2017 at 12:10:47PM +, Archana C wrote:
> > Hi 
> > I am trying to perform bulk-delete operation on a container.
> > Request looks something like this: {method=DELETE, 
> > endpoint=http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete,
> >  headers={Accept=[application/json], 
> > X-Auth-Token=[5c94641e6a4146f1a9857d6892206bbb]}, payload=[content=true, 
> > contentMetadata=[cacheControl=null, contentDisposition=null, 
> > contentEncoding=null, contentLanguage=null, contentLength=41, 
> > contentMD5=null, contentType=text/plain, expires=null], written=false, 
> > isSensitive=false]}
> > http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete
> > For Non MPU uploads, bulk-delete works fine. But in case of bulk-delete on 
> > multipart uploaded files, only the manifest file get removed, the segments 
> > remains.
> > Is there something missing ?
> > Regards
> > Archana
> 
> -- 
> Andrew Gaul
> http://gaul.org/

-- 
Andrew Gaul
http://gaul.org/


Re: bulk-delete objects

2017-05-18 Thread Andrew Gaul
[Please do not email me privately; instead use the public mailing lists.
I will not respond to further private queries.]

Bulk delete does not work with either SLO or DLO objects.  Swift has
some problems with this as you should have discovered in this bug
report:

https://bugs.launchpad.net/swift/+bug/1691523

On Thu, May 18, 2017 at 05:53:26AM +, Archana C wrote:
> Thanks.
> Is the case the same for DLO?
> Looking at the jclouds 1.9 source, I observe that for a removeBlobs API 
> call, we internally trigger removeBlob for each element in the iterable. Is that 
> the expected way to bulk-delete DLO uploads?
> Regards
> Archana
> 
> On Wednesday, 17 May 2017, 23:24, Andrew Gaul <g...@apache.org> wrote:
>  
> 
>  This is likely a limitation of the Swift API.
> 
> On Wed, May 17, 2017 at 12:10:47PM +, Archana C wrote:
> > Hi 
> > I am trying to perform bulk-delete operation on a container.
> > Request looks something like this{method=DELETE, 
> > endpoint=http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete,
> >  headers={Accept=[application/json], 
> > X-Auth-Token=[5c94641e6a4146f1a9857d6892206bbb]}, payload=[content=true, 
> > contentMetadata=[cacheControl=null, contentDisposition=null, 
> > contentEncoding=null, contentLanguage=null, contentLength=41, 
> > contentMD5=null, contentType=text/plain, expires=null], written=false, 
> > isSensitive=false]}
> > http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete
> > For Non MPU uploads, bulk-delete works fine. But in case of bulk-delete on 
> > multipart uploaded files, only the manifest file get removed, the segments 
> > remains.
> > Is there something missing ?
> > Regards
> > Archana
> 
> -- 
> Andrew Gaul
> http://gaul.org/
> 

-- 
Andrew Gaul
http://gaul.org/


Re: directoryExists vs blobExists

2017-09-18 Thread Andrew Gaul
jclouds deprecated and will remove support for directories from the
portable abstraction since most object stores do not support them.
Directories should still work through the provider level but I recommend
using the prefix and delimiter options instead.  Tracking issue:

https://issues.apache.org/jira/browse/JCLOUDS-1066
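
As a hedged sketch, the directoryExists check above can instead be phrased
as a prefix listing (names are illustrative):

    // "does anything live under folder/?" -- no directory concept required
    ListContainerOptions options = new ListContainerOptions()
          .prefix("folder/").maxResults(1);
    boolean folderExists = !blobStore.list("container", options).isEmpty();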

On Mon, Sep 18, 2017 at 05:26:31PM +0200, GARDAIS Ionel wrote:
> Hi, 
> 
> Got a question about using the filesystem API. 
> A blob was written using putBlob(« container », « folder/file.txt »); 
> 
> directoryExists(« container », « folder") returns true 
> 
> blobExists(« container », « folder") returns false 
> blobExists(« container », « folder/") returns false 
> 
> As directoryExists() is deprecated, what should be used ? 
> 
> Thanks, 
> Ionel 
> 


-- 
Andrew Gaul
http://gaul.org/


Re: directoryExists vs blobExists

2017-09-19 Thread Andrew Gaul
Prefix allows clients to filter results which only match a prefix.  For
example, prefix a/ with a/1, a/2 and b/3 keys would only match a/1 and
a/2.  Delimiter rolls results up at the first occurrence of the delimiter.  For
example, delimiter / with the previous keys would return the common prefixes a/ and b/.

Prefix and delimiter have more utility and match how blobstores actually
work.  File systems have an actual hierarchy such that creating a/b will
create two entries and must prohibit operations like removing a before
removing a/b.  Object stores remove these invariants to increase
scalability.  jclouds previously mapped directory operations onto
pseudo-file system operations, inconsistently and poorly.
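
A short sketch of the two options with the keys from the example above:

    // returns a/1 and a/2: only keys matching the prefix
    PageSet<? extends StorageMetadata> underA =
          blobStore.list("container", new ListContainerOptions().prefix("a/"));
    // returns the rolled-up prefixes a/ and b/: keys grouped at the first /
    PageSet<? extends StorageMetadata> topLevel =
          blobStore.list("container", new ListContainerOptions().delimiter("/"));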

On Tue, Sep 19, 2017 at 08:37:26AM +0200, GARDAIS Ionel wrote:
> Thanks Andrew.
> 
> Do I understand correctly that "prefix and delimiter" is "path ending with a 
> /" (like in "folder/") ?
> 
> Regards,
> -- 
> Ionel GARDAIS
> Tech'Advantage CIO - IT Team manager
> 
> - Mail original -
> De: "Andrew Gaul" <g...@apache.org>
> À: "user" <user@jclouds.apache.org>
> Envoyé: Lundi 18 Septembre 2017 19:53:08
> Objet: Re: directoryExists vs blobExists
> 
> jclouds deprecated and will remove support for directories from the
> portable abstraction since most object stores do not not support them.
> Directories should still work through the provider level but I recommend
> using the prefix and delimiter options instead.  Tracking issue:
> 
> https://issues.apache.org/jira/browse/JCLOUDS-1066
> 
> On Mon, Sep 18, 2017 at 05:26:31PM +0200, GARDAIS Ionel wrote:
> > Hi, 
> > 
> > Got a question about using the filesystem API. 
> > As a blob wrote using putBlob(« container », "folder/file.t xt »); 
> > 
> > directoryExists(« container », « folder") returns true 
> > 
> > blobExists(« container », « folder") returns false 
> > blobExists(« container », « folder/") returns false 
> > 
> > As directoryExists() is deprecated, what should be used ? 
> > 
> > Thanks, 
> > Ionel 
> > 
> 
> 
> -- 
> Andrew Gaul
> http://gaul.org/
> 

-- 
Andrew Gaul
http://gaul.org/


Re: BlobStore list() and PageSet

2017-12-03 Thread Andrew Gaul
Unfortunately both the transient and filesystem blobstores have
inefficient implementations which enumerate the entire blobstore even
when returning only a subset of keys.  LocalBlobStore.list calls
getBlobKeysInsideContainer and filters on the output.  Instead it should
call into the storage strategy with the marker, limit, prefix, and
delimiter and the strategy could more efficiently enumerate the keys.
This is easy for the transient store if we change to a sorted map but
requires some more work for filesystem since we have to issue readdir
selectively.

Could you open a JIRA issue for this and perhaps submit a pull request
to fix it?

On Sun, Dec 03, 2017 at 04:47:38PM +0100, GARDAIS Ionel wrote:
> Hi, 
> 
> I have a question regarding BlobStore listing. 
> We are currently using the FileSystem storage implementation 
> 
> private long containerSize(String containerName) {
>     long containerSize = 0;
>     String marker = null;
>     PageSet<? extends StorageMetadata> containerSML = null;
>     ListContainerOptions lcoRecursive = new ListContainerOptions().recursive();
> 
>     (1) containerSML = blobStore.list(containerName, lcoRecursive);
>     containerSize += pageSize(containerSML);
>     marker = containerSML.getNextMarker();
> 
>     while (marker != null) {
>         lcoRecursive.afterMarker(marker);
>         (2) containerSML = blobStore.list(containerName, lcoRecursive);
>         containerSize += pageSize(containerSML);
>         marker = containerSML.getNextMarker();
>     }
> 
>     log.debug("container:{} size:{}", containerName, containerSize);
>     return containerSize;
> }
> 
> private long pageSize(PageSet<? extends StorageMetadata> containerStorageMetadataList) {
>     long size = 0;
>     for (StorageMetadata containerStorageMetadata : containerStorageMetadataList) {
>         if (containerStorageMetadata.getType() == StorageType.BLOB) {
>             log.debug("name:{} size:{}", containerStorageMetadata.getName(),
>                     containerStorageMetadata.getSize());
>             size += containerStorageMetadata.getSize();
>         } else {
>             log.debug("other in container: {}", containerStorageMetadata.getType());
>         }
>     }
>     return size;
> }
> 
> 
> By calling list() with ListContainerOptions().recursive() on both (1) and 
> (2), all of the container's blobs are enumerated for every PageSet. 
> I tried to call list() with recursive() on (1) only, then iterating over 
> PageSets without the recursive flag set, but I don't get all blobs. 
> 
> Is there a way to iterate over all blobs in a container in a more effective 
> way ? 
> 
> Thanks, 
> Ionel 
> 


-- 
Andrew Gaul
http://gaul.org/


Re: jclouds BlobStore - hierarchy within given (Provider, Location, Container) tuple

2018-02-22 Thread Andrew Gaul
All major providers (Azure, B2, GCS, S3, Swift) allow names
/like/this/one.jpg, natively support / as a delimiter, and support
listing via arbitrary prefixes.  Atmos is the oddball which actually
supports directories as a concept.  While has similarities, it cannot
list all objects which start with "a" or use non-/ delimiters.

On Thu, Feb 22, 2018 at 08:19:49AM -0600, Logan Widick wrote:
> So I can assume that all providers would either allow names
> "/like/this/one.jpg" or change such names on my behalf to whatever
> delimiters are supported? Or is there a way to get a provider's suggested
> delimiters for "directory emulation" (like the forward slash in the above
> example) and "file extension emulation" (like the period in the above
> example)?

-- 
Andrew Gaul
http://gaul.org/


Re: putBlob failing, but acting like success

2018-08-17 Thread Andrew Gaul
On Fri, Aug 17, 2018 at 10:04:29PM -, john.calc...@gmail.com wrote:
> I'm calling putBlob to put a blob to an S3 ninja server. The server is 
> returning a ridiculous 500 server error response page - it contains a ton of 
> javascript and the jclouds SAX parser is choking on the content of a 

Re: Need to reset an http header in Jcloud.

2018-03-01 Thread Andrew Gaul
The default Java HTTP client adds this header.  You can work around this
via a different HTTP client, e.g., Apache or OkHttp:

ContextBuilder.modules(ImmutableSet.of(new OkHttpCommandExecutorServiceModule()))

On Thu, Mar 01, 2018 at 08:25:26PM +0530, Ranjith R wrote:
> That sounds like the problem in
> 
> https://mail-archives.apache.org/mod_mbox/jclouds-user/201601.mbox/%3c7fb39cd6f35fba4a84437e6707bd6f59150d5...@g9w0757.americas.hpqcorp.net%3E
> 
> Check the responses, there was suggestion to use a different http client
> and that worked.
> 
> On Thu, Mar 1, 2018 at 8:00 PM, Pratheesh <pratheesh...@teknowmics.com>
> wrote:
> 
> > Hi,
> >
> >
> >
> > I am using jclouds 1.9.1 to connect to a custom S3 server. But when I try to
> > upload a file to this server, I am getting the following error: "HTTP/1.1
> > 500 Internal Server Error".
> >
> > I found the issue: because we send the HTTP header "Accept: text/html,
> > image/gif, image/jpeg, *; q=.2, */*; q=.2", the server throws this
> > error.  If I modify this to "Accept: text/plain", it works perfectly.
> >
> > So I would like to know: is there a way to remove this HTTP header in
> > jclouds? Please provide a sample if one exists.
> >
> >
> >
> > Regards
> >
> > Pratheesh
> >
> >

-- 
Andrew Gaul
http://gaul.org/


Re: putBlob with an already existing object

2018-04-04 Thread Andrew Gaul
On Thu, Apr 05, 2018 at 02:12:28AM -, john.calc...@gmail.com wrote:
> What is jclouds's general policy with regard to putting a blob to a cloud 
> service where the blob already exists and the cloud provider doesn't allow 
> overwrites?
> 
> Seems like it would be nice to be able to treat the operation like it's an 
> idempotent http PUT, but if the service disallows overwrites, jclouds would 
> receive an exception in this case. Jclouds could then verify that the 
> existing object has the same content and silently return "ok" as if the put 
> worked. 
> 
> However, what happens if the cloud service has an object with the same name 
> and different content? The only way to maintain the idempotent quality would 
> be to silently delete the existing object and try the put again under the 
> covers - this seems imprudent to me and unlikely to be the current 
> functionality.

The closest analog is AtmosUtils.putBlob, which retries on
KeyAlreadyExistsException after removing.  Generally the jclouds
portable abstraction tries to make all blobstores act the same and uses
the native behavior for the providers.  Which blobstore has similar
behavior?  I am not sure how we should handle this for almost
S3-compatible implementations like Hitachi.

> P.S. I'd look this stuff up myself if I could only trace my way to the bottom 
> levels of the jclouds code. There's so much interface wrapping going on in 
> there, along with dependency injection, it's nearly impossible to tell where 
> the rubber hits the road. If anyone can provide a hint about how to read the 
> code from user-level to wire-level, I'd really appreciate it.

jclouds uses metaprogramming which allows compact notation but obscures
the intent.  Most of the magic lies in RestAnnotationProcessor if you
want to see how it works.

-- 
Andrew Gaul
http://gaul.org/


Re: deleteObject with missing object

2018-04-04 Thread Andrew Gaul
On Thu, Apr 05, 2018 at 02:04:58AM -, john.calc...@gmail.com wrote:
> What does deleteObject do when the object is not present in the cloud? does 
> it silently return success (as with the corresponding idempotent http verb)?
> 
> When writing a client, one doesn't want to catch an exception for a missing 
> object and, not knowing what it's for, just retry the operation.

All blobstores allow deletes of non-existent keys.
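
So callers need no existence check or exception handling around deletes; a
tiny sketch:

    // succeeds silently whether or not the key exists, like an idempotent HTTP DELETE
    blobStore.removeBlob("container", "possibly-missing-key");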

-- 
Andrew Gaul
http://gaul.org/


Re: putBlob with an already existing object

2018-04-05 Thread Andrew Gaul
On Thu, Apr 05, 2018 at 04:03:04PM -, john.calc...@gmail.com wrote:
> Thanks for the quick response Andrew - 
> 
> > The closet analog is AtmosUtils.putBlob which retries on
> > KeyAlreadyExistsException after removing.  Generally the jclouds
> > portable abstraction tries to make all blobstores act the same and uses
> > the native behavior for the providers.  Which blobstore has similar
> > behavior?  I am not sure how we should handle this for almost
> > S3-compatible implementations like Hitachi.
> 
> So, clarifying - you're saying if it gets a KeyAlreadyExistsException, it 
> then deletes the key and retries the put? That seems a bit harsh - what if 
> you're building a distributed system on top of jclouds and you have two 
> cluster nodes racing to put the same key? Would it not be better to at least 
> test the metadata to see if you're trying to overwrite the same data and just 
> silently return ok?

Agreed that this is racy and something
https://issues.apache.org/jira/browse/JCLOUDS- unsuccessfully tried
to address through a newer header that not all implementations support.
Atmos does not return an ETag so we cannot check the same content,
although ETag checking does not always work on S3, for example with
multi-part uploads.  Unfortunately Atmos is odd for a number of reasons
and delete and retry was the best workaround at the time, especially for
a low-popularity provider.  Some blobstores like Ceph can address this
issue with conditional PUT but this is not supported elsewhere.

-- 
Andrew Gaul
http://gaul.org/


Re: putBlob fails to send proper content when using payload(InputStream)

2018-04-13 Thread Andrew Gaul
On Fri, Apr 13, 2018 at 09:29:19PM -, john.calc...@gmail.com wrote:
> Just tried out s3proxy - it's exactly what we're looking for. I tried setting 
> it up to use aws-v4 authorization against a filesystem backend like this:
>
> $ cat s3proxy.conf 
> s3proxy.authorization=aws-v4
> s3proxy.endpoint=http://0.0.0.0:8080
> s3proxy.identity=identity
> s3proxy.credential=secret
> jclouds.provider=filesystem
> jclouds.filesystem.basedir=/tmp/s3proxy
> 
> It seemed to work, but it can't find the container (test-bucket) when I do a 
> PUT that works against amazon. This came back from jclouds client when 
> configured to use the aws-s3 provider (I added an /etc/hosts entry for the 
> vhost-buckets issue - worked like a charm):

S3Proxy interpreted your PUT object operation as a PUT bucket operation;
I suspect that you need to set s3proxy.virtual-host as documented here:

https://github.com/gaul/s3proxy/blob/master/src/main/java/org/gaul/s3proxy/S3ProxyConstants.java#L50

If this does not help, could you follow up with a GitHub issue at
https://github.com/gaul/s3proxy since this does not relate to the
jclouds backend?

-- 
Andrew Gaul
http://gaul.org/


Re: putBlob fails to send proper content when using payload(InputStream)

2018-04-13 Thread Andrew Gaul
On Thu, Apr 12, 2018 at 01:41:38PM -, john.calc...@gmail.com wrote:
> Additional information on this issue: I've discovered by virtue of a 
> wireshark session that jclouds client is NOT sending chunked 
> transfer-encoding, but rather aws-chunked content-encoding. Can anyone tell 
> me why this is necessary, since A) it accomplishes the same thing that 
> chunked transfer-encoding does (except that it's not compatible with most web 
> servers' built-in ability to handle chunked encoding) and B) we're sending 
> the content-length header?

aws-s3 uses V4 signing while s3 uses V2 signing.  V4 uses a chunked
encoding to sign the payload as well as the headers while V2 signs only
the headers.  V4 uses the AWS encoding because of the signatures it
attaches.  I believe you can Guice override the signer type in s3 to get
the same behavior as aws-s3.  If you are using a local S3 clone and not
AWS itself you really should use the s3 provider since aws-s3 just
overrides endpoints and regions.
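
A minimal sketch of targeting a clone with the generic api (the endpoint and
credentials are placeholders):

    BlobStoreContext context = ContextBuilder.newBuilder("s3")  // generic S3, V2 signing
          .endpoint("http://localhost:8080")                    // the clone's endpoint
          .credentials("identity", "credential")
          .buildView(BlobStoreContext.class);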

I notice you filed a related bug against s3ninja recently -- you may
want to try S3Proxy[1] instead which has a more complete implementation
and actually uses jclouds as its backend.

[1] https://github.com/gaul/s3proxy

-- 
Andrew Gaul
http://gaul.org/


Requiring Java 8 and Guava 21+

2018-03-24 Thread Andrew Gaul
jclouds-dev has a thread on requiring Java 8 and Guava 21+.  Guava has
proven to be a painful dependency for developers and users due to
jclouds use of beta and deprecated methods.  Presently users can only
specify a few versions of Guava and no modern versions.  In jclouds
2.2.0 we would like to require Guava 21 which in turn requires Java 8.
Unfortunately I do not see another way forward and this is an issue of
when, not if.  Any comments on this proposed change?

-- 
Andrew Gaul
http://gaul.org/


Re: Does jClouds support AZURE file storage (not the BLOB one)

2018-03-28 Thread Andrew Gaul
On Tue, Mar 27, 2018 at 10:52:29PM +0300, Totyo Totev wrote:
> I found that jclouds supports Azure Blob ("azureblob") as a provider, but is
> there any support for "file storage" on Azure infrastructure
> (*.file.core.windows.net)?

jclouds does not support Azure file storage today.  We would happily
accept pull requests to add this!  This is simpler than Azure blob which
has its own protocol; Azure file uses SMB2 so we only need a
provisioning API and some code to hook up to Azure compute.

-- 
Andrew Gaul
http://gaul.org/


Re: Signature v4 support for non amazon S3

2018-10-16 Thread Andrew Gaul
Sorry for the late reply, but I believe this needs some jclouds work to
use.  We would welcome patches for this and to enable v4 signing more
easily!

On Thu, Jun 21, 2018 at 09:52:36PM +0530, Ranjith R wrote:
> With sigv4, we see that we are doing single chunk upload with signed
> payload.  We also noticed that the data is read twice (once for calculating
> the hash and once for actual transfer).  While reading the sig v4
> documentation at
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
> I saw that there is a unsigned payload option.   Is there a way to use
> unsigned payload from jclouds to avoid this double read?
> 
> Thanks,
> Ranjith
> 
> On Thu, Jun 21, 2018 at 8:33 PM Ignasi Barrera  wrote:
> 
> > Yeah! :)
> >
> >
> > On 21 June 2018 at 16:41, Ranjith R  wrote:
> >
> >> Thanks Ignasi.  That worked.
> >>
> >> Thanks,
> >> Ranjith
> >>
> >> On Thu, Jun 21, 2018 at 4:03 PM Ignasi Barrera  wrote:
> >>
> >>> I haven't tried it, but you should be able to define a Guice module that
> >>> extends the default S3 module and overrides the request signer
> >>> configuration. Then you can pass that one to the list of modules you pass
> >>> when creating the context:
> >>>
> >>> @ConfiguresHttpApi
> >>> public static class S3V4SignerModule extends S3HttpApiModule<S3Client> {
> >>>    @Override
> >>>    protected void bindRequestSigner() {
> >>>       bind(RequestAuthorizeSignature.class).to(RequestAuthorizeSignatureV4.class).in(Scopes.SINGLETON);
> >>>    }
> >>> }
> >>>
> >>> public static void main(String[] args) {
> >>>    ContextBuilder.newBuilder("s3")
> >>>       ...
> >>>       .modules(ImmutableSet.of(new S3V4SignerModule(), ...))
> >>>       ...
> >>> }
> >>>
> >>>
> >>> Make sure you annotate the custom module with "@ConfiguresHttpApi".
> >>> Can you try this?
> >>>
> >>>
> >>>
> >>> I.
> >>>
> >>>
> >>> On 21 June 2018 at 11:58, Ranjith R  wrote:
> >>>
> >>>> I was looking at https://issues.apache.org/jira/browse/JCLOUDS-480 and
> >>>> it talks about the default signing for AWS being v4 and other s3 clones
> >>>> being v2.  I just want to know if I can use v4 for a s3 clone?  Is there
> >>>> any example that I can look at?
> >>>>
> >>>> Thanks,
> >>>> Ranjith
> >>>>
> >>>> On Mon, Jun 18, 2018 at 7:21 PM Ranjith R  wrote:
> >>>>
> >>>>> Hi All - I know signature v4 signing is implemented for Amazon S3
> >>>>> (aws-s3). Just wanted to know if I can use v4 signing for a non amazon
> >>>>> cloud which supports S3 API and sigV4  (s3).  If it does, what changes
> >>>>> should be done from the client side?
> >>>>>
> >>>>> Thanks,
> >>>>> Ranjith
> >>>>>
> >>>>
> >>>
> >

-- 
Andrew Gaul
http://gaul.org/


Re: hgst hcp server returning 500 internal server error

2018-10-16 Thread Andrew Gaul
On Thu, Sep 20, 2018 at 07:42:10PM -, john.calc...@gmail.com wrote:
> On 2018/09/16 05:19:15, john.calc...@gmail.com  
> wrote: 
> > Hi Andrew,
> > 
> > I know it's not your business to support HCP, but I'm hoping you've seen 
> > this sort of thing in the past and know of a possible fix/work-around for 
> > it. I've got a little test program that connects to an HGST HCP cloud 
> > server and sends a single file - no matter what settings I use, I get back 
> > a 500 internal server error (with no error details, of course). The 
> > configuration is an SSL connection using https and port 443. I had 
> > handshake fatal alerts from the server when I ran it on windows, but moving 
> > my program to a RedHat 7.3 system got me past the SSL connection issues. 
> > Now I just see that 500 internal server error when trying to putBlob. Note 
> > that I do a removeBlob first, and that succeeds (though it's likely the 
> > blob is not there). I've also tried several different content forms, 
> > including stream, string, byte array, etc.
> > 
> > Do you know of any special configuration that must be used with HCP?
> > 
> > (It might also interest you to know, if you don't already, that HCP returns 
> > an empty entity on this error, which jclouds attempts to parse as XML with 
> > a SAX parser. This generates a stack trace in the log. It's incidental, but 
> > you might want to try to do a little pre-checking of the content before 
> > attempting to parse it with SAX. Just a thought.)
> > 
> > Thanks in advance,
> > John Calcote
> > Hammerspace, Inc.
> > 
> Though we're still not sure, we believe these 500 errors are caused either by 
> disk-full or container-quota-full conditions.

Sorry for the late reply, but I have also heard users complain about 500
errors when overwriting existing blobs.  Apparently HCP can be configured
in a write-once manner.  Unfortunately I don't have any direct
experience with HGST.

-- 
Andrew Gaul
http://gaul.org/


Re: NullPointer in BlobStore#streamBlob using RegionScopedSwiftBlobStore (2.1.1)

2018-10-16 Thread Andrew Gaul
On Wed, Aug 22, 2018 at 11:33:48AM +0200, Jean Helou wrote:
> Hello,
> 
> I am currently working on integrating JClouds to create a blobstore storage
> for apache james. I noticed that using
> 
> blobStore.streamBlob("container", "wrongid");
> 
> yields a NullPointerException when trying to access a blob which doesn't
> exist. The NPE is raised in RegionScopedSwiftBlobStore when the streamBlob
> method tries to read the blob metadata.
> 
> I think it should null check and return null or an empty inputstream but
> not raise an NPE
> 
> see https://github.com/jclouds/jclouds
> /blob/46759f8bda00f86ef934345846e22e2bd2b0d7ae/apis/openstack-swift/src/main/java/org/
> jclouds/openstack/swift/v1/blobstore/RegionScopedSwiftBlobStore.java#L691
> 
> for now I work around by using
> 
> blobStore.getBlob("container","id");
> 
> doing the nullcheck myself and then getting the inputstream from the blob's
> payload.

Sorry for the late reply, but I find the implementation of
BlobStore.streamBlob confusing since it lives in the portable
abstraction yet only has one implementation.  I recommend using getBlob,
which actually does stream the payload.
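
For anyone hitting this, a minimal sketch of that workaround, assuming a
BlobStore built from a BlobStoreContext (container and key names are
placeholders):

    import java.io.IOException;
    import java.io.InputStream;

    import org.jclouds.blobstore.BlobStore;
    import org.jclouds.blobstore.domain.Blob;

    // Returns null for a missing blob instead of raising an NPE.
    static InputStream streamIfPresent(BlobStore blobStore, String container,
            String key) throws IOException {
        Blob blob = blobStore.getBlob(container, key);  // null when the key is absent
        if (blob == null) {
            return null;  // caller decides how to handle the missing blob
        }
        return blob.getPayload().openStream();  // streams the payload lazily
    }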

-- 
Andrew Gaul
http://gaul.org/


Re: aws s3 list objects v2

2019-01-15 Thread Andrew Gaul
Great, let's continue the discussion at
https://github.com/jclouds/jclouds/pull/1267 .

jclouds commits must merge to master first although I can backport them
to 2.1.x.  Timing releases is a sore point but S3Proxy already upgraded
to 2.1.2-SNAPSHOT so S3Proxy's snapshot releases and Docker images will
immediately include such fixes.  Unfortunately its next release will now
be gated on jclouds.

On Tue, Jan 15, 2019 at 10:20:45AM -0500, Justinus Menzel wrote:
> Andrew, thank you for responding.
> I really need both parts. Hence I made changes to s3proxy which rely on
> changes to jclouds.
> I can setup the pull request with target https://github.com/gaul/s3proxy
> But it also needs the jclouds change where jclouds back-end issues the
> 'list-type=2' request.
> I'll try to setup a pull request into the jclouds repo.
> Not sure how easily this PR will get accepted and merged. Apparently
> changes need to go into master.
> How long before master becomes stable and can be used for building/
> releasing s3proxy?
> Anyways, I'll give it a shot.
> Best,

-- 
Andrew Gaul
http://gaul.org/


Re: aws s3 list objects v2

2019-01-03 Thread Andrew Gaul
On Fri, Nov 30, 2018 at 09:53:31AM -0500, Justinus Menzel wrote:
> I've been playing around with s3proxy and jclouds and was wondering if
> there are any plans to support s3's list objects v2 call.
> https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html
> Seems like some tools already rely on V2, for example the aws CLI command.

There are two parts to this: the S3Proxy front-end interpretation of
`continuation-token` and the jclouds back-end issuing of `list-type=2`.
The latter is easy to implement as an additional call in `S3Client`;
would you like to submit a pull request?  I suspect you really want the
former support, however.  Honestly I have read the documentation a few
times and do not understand the subtlety between v1 and v2 calls; these
look the same?

It looks like you implemented this but sent the pull request to the
wrong repository:

https://github.com/vyasa-analytics/s3proxy/pull/1

Could you send this to https://github.com/gaul/s3proxy ?

It is unfortunate that clients do not fall back from v2 to v1
automatically like some do for v4 to v2 authentication, but S3Proxy
should just add support for v2.

-- 
Andrew Gaul
http://gaul.org/


[ANNOUNCE] Apache jclouds 2.1.2 released

2019-02-06 Thread Andrew Gaul
The Apache jclouds team is pleased to announce the release of jclouds
2.1.2.

Apache jclouds is an open source multi-cloud toolkit for the Java
platform that gives you the freedom to create applications that are
portable across clouds while giving you full control to use
cloud-specific features.

The source archives for the release are available here:
https://jclouds.apache.org/start/install/

The Maven artifacts for the release are available in Maven Central,
under the org.apache.jclouds group ID.

The release notes are available here:
https://jclouds.apache.org/releasenotes/2.1.2/

We welcome your help and feedback. For more information on how to report
problems, and to get involved, visit the project website at:
https://jclouds.apache.org/

The Apache jclouds Team

-- 
Andrew Gaul
http://gaul.org/


Re: joyent triton

2019-02-05 Thread Andrew Gaul
There have been a few incomplete attempts at supporting Joyent if you
would like to use them as a basis to implement a new provider:

https://github.com/jclouds/jclouds/pulls?q=is%3Apr+joyent+is%3Aclosed
https://github.com/jclouds/jclouds-labs/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+joyent+

On Tue, Feb 05, 2019 at 10:17:56PM +0100, Ignasi Barrera wrote:
> No, Joyent is not one of the currently supported providers
> 
> On Tue, 5 Feb 2019 at 22:14, john.calc...@gmail.com 
> wrote:
> 
> > Hi all --
> >
> > Anyone know if joyent triton (private cloud service) is supported by
> > jclouds? My preliminary research says no, but if anyone else knows better,
> > I'd appreciate it.
> >
> > John
> >

-- 
Andrew Gaul
http://gaul.org/


Re: Does jcloud blobstore support "simple migration" from S3 to GCS?

2019-01-28 Thread Andrew Gaul
On Tue, Jan 29, 2019 at 01:10:29AM -, bclif...@systemsbiology.org
wrote:
> It appears that the request is being sent to S3.amazonaws.com rather
> than to storage.googleapis.com. My understanding is that one should
> pass s3-aws as the provider, but presumably somewhere the
> GOOG-ACCESS-KEY should be a clue to instead send the request to GCS.
> 
> So, my question: Is this migration not implemented in blobstore or am
> I just calling the example incorrectly? 

You should set the provider to "s3" instead of "aws-s3" and set the
endpoint to https://storage.googleapis.com.
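
A minimal sketch of that configuration, assuming GCS interoperability
(HMAC) credentials; the key values below are placeholders:

    import org.jclouds.ContextBuilder;
    import org.jclouds.blobstore.BlobStore;
    import org.jclouds.blobstore.BlobStoreContext;

    BlobStoreContext context = ContextBuilder.newBuilder("s3")
            .endpoint("https://storage.googleapis.com")
            .credentials("GOOG-ACCESS-KEY", "GOOG-SECRET-KEY")  // placeholder HMAC pair
            .buildView(BlobStoreContext.class);
    BlobStore blobStore = context.getBlobStore();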

-- 
Andrew Gaul
http://gaul.org/


Re: Range download

2020-05-19 Thread Andrew Gaul
On Mon, May 18, 2020 at 08:00:16PM -, John Calcote wrote:
> I'm trying to create a sort of multi-part download mechanism using jclouds 
> download with a GetOption of "range". I'd like to just download 5MB ranges 
> until I run out of file to download. The doc on the use of this option is a 
> bit sparse. Can someone help me understand the semantics of download with a 
> range option? 
> 
> Here's what I'd like to do:
> 
> long start = 0;
> long size = 5 * 1024 * 1024;
> long length = size;
> 
> while (length < size) {
> long limit = start + size
> Blob blob = getBlob(container, key, range(start, limit - 1));
> ContentMetadata meta = blob.getMetadata().getContentMetadata();
> long length = meta.getContentLength();
> 
> // stream 'length' bytes into a buffer and process the buffer here
> 
> start = limit;
> }
> 
> Assumptions:
> 1) content length will return the actual size downloaded if it's less than 
> the range size specified.
> 2) It's OK to request a range of bytes that's outside the file extant - 
> you'll just get back a length of zero.

BlobStore.getBlob takes an optional argument that you can populate via:

new GetOptions().range(start, end)

Note that both offsets of this range are inclusive, so a 5 MB chunk
starting at start ends at start + 5 * 1024 * 1024 - 1.
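
A hedged sketch of the whole loop, assuming blobStore, container, and
key are in scope: look up the total length once, then fetch inclusive
5 MB ranges.  Sizing against the length up front avoids requesting a
range past the end of the object, which providers may reject.

    import java.io.InputStream;

    import org.jclouds.blobstore.domain.Blob;
    import org.jclouds.blobstore.domain.BlobMetadata;
    import org.jclouds.blobstore.options.GetOptions;

    BlobMetadata meta = blobStore.blobMetadata(container, key);
    long total = meta.getContentMetadata().getContentLength();
    long chunk = 5L * 1024 * 1024;
    for (long start = 0; start < total; start += chunk) {
        long end = Math.min(start + chunk, total) - 1;  // inclusive last byte
        Blob blob = blobStore.getBlob(container, key,
                new GetOptions().range(start, end));
        try (InputStream in = blob.getPayload().openStream()) {
            // stream and process bytes [start, end] here
        }
    }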

-- 
Andrew Gaul
http://gaul.org/


[ANNOUNCE] Apache jclouds 2.2.1 released

2020-05-14 Thread Andrew Gaul
The Apache jclouds team is pleased to announce the release of jclouds
2.2.1.

Apache jclouds is an open source multi-cloud toolkit for the Java
platform that gives you the freedom to create applications that are
portable across clouds while giving you full control to use
cloud-specific features.

The source archives for the release are available here:
https://jclouds.apache.org/start/install/

The Maven artifacts for the release are available in Maven Central,
under the org.apache.jclouds group ID.

The release notes are available here:
https://jclouds.apache.org/releasenotes/2.2.1/

We welcome your help and feedback. For more information on how to report
problems, and to get involved, visit the project website at:
https://jclouds.apache.org/

The Apache jclouds Team

-- 
Andrew Gaul
http://gaul.org/


Re: how to unwrap a back-end provider

2020-05-21 Thread Andrew Gaul
Here is an example of unwrapping a compute provider:

https://stackoverflow.com/questions/23044060/how-access-native-provider-api-with-jclouds-1-7

You can unwrap a blobstore provider in the same way.
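
For a blobstore, the same pattern looks roughly like this; the identity,
credential, and bucket values are placeholders, and aws-s3 paired with
S3Client is just one concrete combination:

    import org.jclouds.ContextBuilder;
    import org.jclouds.blobstore.BlobStoreContext;
    import org.jclouds.s3.S3Client;

    String identity = "access-key";    // placeholder
    String credential = "secret-key";  // placeholder
    BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
            .credentials(identity, credential)
            .buildView(BlobStoreContext.class);
    // drop down to the provider-specific API, e.g. to query a bucket's region
    S3Client s3Client = context.unwrapApi(S3Client.class);
    String location = s3Client.getBucketLocation("my-bucket");  // placeholder bucket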

On Fri, May 22, 2020 at 12:13:31AM -, John Calcote wrote:
> Hi Andrew,
> 
> I've searched high and low for how to do that "unwrap" trick you mentioned 
> here:
> 
> https://jclouds.apache.org/start/concepts/#providers
> 
> I've even played with evaluating different methods on the blobstore context 
> and builder. I cannot find an obvious (or unobvious) way to access the 
> back-end provider functionality as mentioned in the link above. 
> 
> Can you please provide a simple example of how one does this?
> 
> To be specific, I'm looking for a way to access the amazon S3 back-end 
> provider so I can find out what region jclouds has configured for the AWS S3 
> provider.
> 
> Thanks,
> John

-- 
Andrew Gaul
http://gaul.org/


Re: multi-part upload writing temp files

2020-05-21 Thread Andrew Gaul
This is nothing to worry about.  jclouds creates temporary files when
HTTP payloads exceed 256 KB to avoid exhausting memory.  You can see the
source of this message at
core/src/main/java/org/jclouds/logging/internal/Wire.copy.  I believe
that this code is only exercised with debug logging; usually jclouds
streams payloads directly.

On Thu, May 21, 2020 at 04:56:17PM -, John Calcote wrote:
> Hi Andrew,
> 
> I've got code that's uploading using the jclouds multi-part abstraction. Here 
> are the log trace messages for a portion of an upload of a 1G file:
> 
> 2020-05-21 16:53:12.020 +,36635150704660 {} DEBUG o.j.h.i.HttpWire [user 
> thread 18] over limit 33554416/262144: wrote temp file
> 
> I'm concerned about the trailing messages - "over limit ... wrote temp file" 
> - what does this mean?
> 
> Thanks,
> John

-- 
Andrew Gaul
http://gaul.org/


Re: Azure Datalake connectivity

2020-10-07 Thread Andrew Gaul
As I understand it, jclouds should be able to access Azure Data Lake v2
(not v1) via the existing Azure Blob provider.  Data is stored at the
container level, and both ADL and Blob can access it.  So I would try
using the existing "blob.core.windows.net" endpoint with your container
name.  Sorry I have not tested this myself.
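
Untested, but the wiring would be the usual azureblob setup; the account
name and key below are placeholders:

    import org.jclouds.ContextBuilder;
    import org.jclouds.blobstore.BlobStoreContext;

    // The provider defaults to https://<account>.blob.core.windows.net,
    // which ADLS Gen2 accounts also expose alongside dfs.core.windows.net.
    BlobStoreContext context = ContextBuilder.newBuilder("azureblob")
            .credentials("mystorageaccount", "base64-account-key")
            .buildView(BlobStoreContext.class);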

On Tue, Oct 06, 2020 at 04:25:00PM +0100, Rauf Butt wrote:
> Dear community,
> 
> I want to connect to Azure Data Lake using jclouds.azureblob.  Does
> azureblob provide connectivity to Data Lake? If so, any sample code
> available for Azure data lake connectivity?
> 
> Azure Datalake intro:
> https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction
> 
> Connection credentials include the following:
> tenant_id = ""
> client_id = ""
> client_secret = ""
> storage_account_name = ""
> file_system (or container) = ""
> 
> Note: Data lake uses domain "dfs.core.windows.net" instead of "
> blob.core.windows.net". I used  to change the endpoint.
> 
> I found the sample below for blob connectivity but its not working.
> https://jclouds.apache.org/guides/azure-storage/
> 
> Regards,
> Rauf

-- 
Andrew Gaul
http://gaul.org/


Re: jclouds s3 support implicit aws credentials via IAM roles

2020-07-15 Thread Andrew Gaul
Timo, this is definitely a desired feature.  I don't believe it is too
much work to add -- would you like to try implementing it yourself?
Please file a JIRA issue and we can help you with any questions.

On Fri, Jul 10, 2020 at 09:21:45PM +, Timo Naroska wrote:
> Hi,
> 
> I’m evaluating jclouds blobstore for accessing s3 and azure. When using 
> jclouds client on an ec2 instance, I was wondering if the client would 
> automatically pick up AWS credentials from EC2 metadata server. Basically the 
> functionality that the AWS SDK does out of the box.
> 
> I found this ancient request in legacy depot:
> https://github.com/jclouds/legacy-jclouds/issues/1280
> 
> Has anything like this ever been added to jclouds?
> 
> Thanks
> Timo

-- 
Andrew Gaul
http://gaul.org/


dependency troubles

2020-06-19 Thread Andrew Gaul
We have had many reports of Guava version incompatibilities in recent
years.  To address this, I proposed moving jclouds dependencies from
Guava 18 to 22 and from Java 7 to 8.  This should give us compatibility
with Guava 22-30:

https://github.com/apache/jclouds/pull/75

We also have several reports about gson, which is hard to address due
to OSGi but is something we may upgrade as well.  Finally, I have a
lingering issue with BouncyCastle that prevents upgrading OkHttp, which
in turn prevents tests from succeeding with later Java versions:

https://github.com/apache/jclouds/pull/71

I wonder what other dependencies have troubled users?  I would like
jclouds 2.3.0 to address all of our common dependency issues since we
will break some (and unbreak more) users due to the Guava upgrade.

-- 
Andrew Gaul
http://gaul.org/


Re: filesystem (LocalBlobStore) failing intermittently on initial "upload" in v2.2.1

2021-01-07 Thread Andrew Gaul
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_181]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/16100498223118761388361431415820/test-container/byHashV1/0001001020
>  B3A2D81C390E0531DBCF0DEC082C4CA96D26F26AA9A26A26E80B5F00FA9F48E3
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) 
> ~[?:1.8.0_181]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_181]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_181]
> at 
> sun.nio.fs.UnixFileAttributeViews$Posix.readAttributes(UnixFileAttributeViews.java:218)
>  ~[?:1.8.0_181]
> at 
> sun.nio.fs.UnixFileAttributeViews$Posix.readAttributes(UnixFileAttributeViews.java:131)
>  ~[?:1.8.0_181]
> at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>  ~[?:1.8.0_181]
> at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>  ~[?:1.8.0_181]
> at java.nio.file.Files.readAttributes(Files.java:1737) ~[?:1.8.0_181]
> at java.nio.file.Files.getPosixFilePermissions(Files.java:2004) 
> ~[?:1.8.0_181]
> at 
> org.jclouds.filesystem.strategy.internal.FilesystemStorageStrategyImpl.setBlobAccess(FilesystemStorageStrategyImpl.java:686)
>  ~[filesystem-2.2.1.jar:2.2.1]
> ... 14 more
> ```
> 
> JClouds seems to be having trouble reading the existing POSIX attributes on 
> the file it just wrote (so it can modify them).
> 
> Has anyone else seen this, or have there been changes between 2.1 and 2.2.0 
> to the filesystem implementation?

-- 
Andrew Gaul
http://gaul.org/


Re: AWS v4 Signature algorithm required for generic-S3

2020-12-11 Thread Andrew Gaul
Some background: there is a generic s3 api with most of the
functionality and an aws-s3 provider which inherits from the former and
overrides the endpoints.  The aws-s3 provider also overrides the signer,
as you can see in AWSS3HttpApiModule.bindRequestSigner and
AWSS3BlobStoreContextModule.bindRequestSigner.  Instead,
S3HttpApiModule.bindRequestSigner and
S3BlobStoreContextModule.bindRequestSigner should set this conditionally
based on a property.
can see some examples like @Named(Constants.PROPERTY_SESSION_INTERVAL)
which allow setting values based on a default or a user-provided value.
Fixing this will require moving AWSS3BlobRequestSignerV4 and its tests
from the aws-s3 provider to the s3 api.
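
As a rough illustration of what "conditionally based on a property"
could look like inside S3HttpApiModule (the property name here is
hypothetical, not an existing jclouds constant):

    import com.google.inject.Injector;
    import com.google.inject.Provides;
    import javax.inject.Named;
    import javax.inject.Singleton;

    import org.jclouds.s3.filters.RequestAuthorizeSignature;
    import org.jclouds.s3.filters.RequestAuthorizeSignatureV2;
    import org.jclouds.s3.filters.RequestAuthorizeSignatureV4;

    @Provides
    @Singleton
    RequestAuthorizeSignature provideRequestSigner(
            @Named("jclouds.s3.signer-version") String version,  // hypothetical property
            Injector injector) {
        // choose the V4 signer when requested, otherwise keep the V2 default
        return "4".equals(version)
                ? injector.getInstance(RequestAuthorizeSignatureV4.class)
                : injector.getInstance(RequestAuthorizeSignatureV2.class);
    }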

jclouds has two styles of tests.  You can run the unit tests via:

mvn test -pl :s3,:aws-s3

and the integration tests via:

mvn integration-test -pl :s3 -Plive \
-Dtest.s3.endpoint="${JCLOUDS_ENDPOINT}" \
-Dtest.s3.identity="${JCLOUDS_IDENTITY}" \
-Dtest.s3.credential="${JCLOUDS_CREDENTIAL}"
mvn integration-test -pl :aws-s3 -Plive \
-Dtest.aws-s3.identity="${JCLOUDS_IDENTITY}" \
-Dtest.aws-s3.credential="${JCLOUDS_CREDENTIAL}"

The ideal initial state would be to allow the s3 api to default to V2
and opt into V4 while the aws-s3 provider continues to default to V4.
There is a reasonable question about what the s3 api default should be
which we could address in a subsequent commit.

If you get stuck it might make sense to share a work-in-progress PR on
GitHub.  I hope this helps!

On Sat, Dec 12, 2020 at 06:43:45AM -, John Calcote wrote:
> Andrew,
> 
> I would love to submit this PR. But I'm going to need some guidance. I'm a 
> pretty good coder, but I must tell you that there are big gaps in my 
> understanding of how the jclouds code works. 
> 
> Can you please point me in the right direction? Tell me which files to look 
> at and I'll try to decipher what needs to be done. And any additional 
> information you think might be relevant. My hope is that I can do the work, 
> saving you that effort, if you will spend just a few minutes jotting down how 
> you would approach the patch since you obviously understand the code better 
> than most.
> 
> Thanks very much for the opportunity to help.
> 
> 
> On 2020/12/12 03:04:12, Andrew Gaul  wrote: 
> > 1) and 2) are hacky but will work.  However we should really properly
> > support this by giving the generic S3 provider a property.  This would
> > require reparenting some code from aws-s3 to s3.  Would you be able to
> > submit a PR for this?
> > 
> > The unsigned payload is also a desired feature and easy to add after
> > fixing the above.
> > 
> > We plan to release 2.3.0 later this month so it could include these
> > features if you can help out!
> > 
> > On Thu, Dec 10, 2020 at 07:42:22PM -, John Calcote wrote:
> > > I see someone asked this very question a while back (2018): 
> > > https://lists.apache.org/thread.html/3553946dcbe7c72fc8f82c6353c9a5c56d7b587da18a4ebe12df1e26%40%3Cuser.jclouds.apache.org%3E
> > > 
> > > Did anyone ever solve the double read issue regarding disabling 
> > > data-hashing?
> > > 
> > > On 2020/12/10 18:03:13, John Calcote  wrote: 
> > > > Hi all - 
> > > > 
> > > > We have a client who would like to use jclouds with ONTAP 9.8, which 
> > > > requires V4 signatures AND path-based URL naming. I believe the true 
> > > > AWS S3 provider requires virtual bucket URL naming. Anyone have any 
> > > > ideas on how to either:
> > > > 
> > > > 1) coerce jclouds into using the AWS S3 provider with path-based naming 
> > > > or
> > > > 2) coerce jclouds into using the generic S3 provider with V4 signing
> > > > 
> > > > Thanks in advance.
> > > > John
> > > > 
> > 
> > -- 
> > Andrew Gaul
> > http://gaul.org/
> > 

-- 
Andrew Gaul
http://gaul.org/


Re: AWS v4 Signature algorithm required for generic-S3

2020-12-11 Thread Andrew Gaul
1) and 2) are hacky but will work.  However we should really properly
support this by giving the generic S3 provider a property.  This would
require reparenting some code from aws-s3 to s3.  Would you be able to
submit a PR for this?

The unsigned payload is also a desired feature and easy to add after
fixing the above.

We plan to release 2.3.0 later this month so it could include these
features if you can help out!

On Thu, Dec 10, 2020 at 07:42:22PM -, John Calcote wrote:
> I see someone asked this very question a while back (2018): 
> https://lists.apache.org/thread.html/3553946dcbe7c72fc8f82c6353c9a5c56d7b587da18a4ebe12df1e26%40%3Cuser.jclouds.apache.org%3E
> 
> Did anyone ever solve the double read issue regarding disabling data-hashing?
> 
> On 2020/12/10 18:03:13, John Calcote  wrote: 
> > Hi all - 
> > 
> > We have a client who would like to use jclouds with ONTAP 9.8, which 
> > requires V4 signatures AND path-based URL naming. I believe the true AWS S3 
> > provider requires virtual bucket URL naming. Anyone have any ideas on how 
> > to either:
> > 
> > 1) coerce jclouds into using the AWS S3 provider with path-based naming or
> > 2) coerce jclouds into using the generic S3 provider with V4 signing
> > 
> > Thanks in advance.
> > John
> > 

-- 
Andrew Gaul
http://gaul.org/


Re: filesystem (LocalBlobStore) failing intermittently on initial "upload" in v2.2.1

2021-01-08 Thread Andrew Gaul
I looked through:

git log upstream/2.2.x blobstore/
git log upstream/2.2.x apis/filesystem/

But there are only a few changes, and I do not believe that these
commits could cause your symptoms.  You could try bisecting to be sure.
Given that my suggested race condition is so easy to fix, I would try
that first; in any case the fix would help the project.

On Fri, Jan 08, 2021 at 05:49:17PM -, John Calcote wrote:
> Hi Andrew,
> 
> Your theory about another client deleting the file cannot be correct. This is 
> an integration test where a single test is running - the container is created 
> in /tmp//test-container. The test creates the test 
> container in the setup method, uploads several files, manipulates some data, 
> and then deletes the entire container in the teardown method.
> 
> This is the very first file added, and the test-container itself was created 
> milliseconds earlier. There are no other clients running at this time. If you 
> look back at the log snippet I pasted, you'll see that the log line
> 
> 2021-01-07 20:03:43.576,7964896094167857 {} DEBUG o.j.b.c.LocalBlobStore 
> [cloud-normalpri-0] Put blob with key [byHashV1/0001001020 
> B3A2D81C390E0531DBCF0DEC082C4CA96D26F26AA9A26A26E80B5F00FA9F48E3 ] to 
> container [test-container]
> 
> which is logged by jclouds filesystem impl is logged four times in a row 
> (because jclouds is retrying under the covers for us, as we use Payload style 
> payloads, which are retriable). Only the last error is logged ultimately, but 
> it's clear this error is happening at least four times in quick succession 
> before jclouds gives up and throws the exception (which we log).
> 
> Just prior to this log line, jclouds logs the creation of the container and 
> the test-container directory.
> 
> We upgraded from 2.2.0 to 2.2.1 recently and this started happening. The test 
> has been in place for years, without seeing this issue. Suddenly, after 
> upgrading, it starts happening. 
> 
> Thanks,
> John

-- 
Andrew Gaul
http://gaul.org/


jclouds 2.3.0-SNAPSHOT testing

2021-01-31 Thread Andrew Gaul
We are preparing for 2.3.0 and jclouds could use your help!  The next
release requires Java 8 and upgraded many dependencies, including Guava
and Guice.  This will break some projects but improve compatibility with
modern Java environments.  Users can help ensure a successful release by
testing 2.3.0-SNAPSHOT:

https://jclouds.apache.org/start/install/#maven

You may need to wait a few hours for the latest snapshot to be
generated.

-- 
Andrew Gaul
http://gaul.org/


Re: filesystem (LocalBlobStore) failing intermittently on initial "upload" in v2.2.1

2021-01-31 Thread Andrew Gaul
I opened a pull request:

https://github.com/apache/jclouds/pull/94

On Fri, Jan 08, 2021 at 10:01:56AM +0900, Andrew Gaul wrote:
> Hi John, the code in LocalBlobStore.putBlob looks like this:
> 
>   String eTag = storageStrategy.putBlob(containerName, blob);
>   setBlobAccess(containerName, blobKey, options.getBlobAccess());
> 
> I suspect that the object is being removed by another client in between
> these calls.  Instead FilesystemStorageStrategyImp.putBlob should allow
> an options parameter to set the blobAccess on the temporary file which
> then gets renamed to the real file name.  Would you like to submit a PR
> for this?
> 
> On Thu, Jan 07, 2021 at 10:09:29PM -, John Calcote wrote:
> > We use the filesystem (LocalBlobStore) implementation for integration 
> > testing in our code.
> > 
> > I'm seeing the following behavior in version 2.2.1 which I recently 
> > upgraded to:
> > 
> > ```
> > 2021-01-07 20:03:43.503,7964896021968234 {} DEBUG 
> > c.p.d.m.j.AbstractUploadCallable [cpu-normalpri-1] [task 1] State change: 
> > (data) chunk [0-1048575] - uploading
> > 2021-01-07 20:03:43.575,7964896093429743 {} DEBUG 
> > o.j.f.s.i.FilesystemStorageStrategyImpl [cpu-normalpri-1] Creating 
> > container test-container
> > 2021-01-07 20:03:43.575,7964896093478721 {} DEBUG 
> > o.j.f.s.i.FilesystemStorageStrategyImpl [cpu-normalpri-1] Creating 
> > directory //tmp/16100498223118761388361431415820/test-container
> > 2021-01-07 20:03:43.575,7964896093881924 {} DEBUG 
> > c.p.d.m.j.AbstractUploadCallable [cpu-normalpri-1] [task 1] State change: 
> > (data) chunk [1048576-2097151] - checksummed 
> > checksum=D8ACDD71E006E090F4FBF32CAF0627A3
> > 2021-01-07 20:03:43.575,7964896093909591 {} DEBUG 
> > c.p.d.m.j.AbstractUploadCallable [cpu-normalpri-1] [task 1] State change: 
> > (data) chunk [1048576-2097151] - uploading
> > ..
> > 2021-01-07 20:03:43.576,7964896094167857 {} DEBUG o.j.b.c.LocalBlobStore 
> > [cloud-normalpri-0] Put blob with key [byHashV1/0001001020 
> > B3A2D81C390E0531DBCF0DEC082C4CA96D26F26AA9A26A26E80B5F00FA9F48E3 ] to 
> > container [test-container]
> > ..
> > 2021-01-07 20:03:43.577,7964896095288960 {} DEBUG o.j.b.c.LocalBlobStore 
> > [cloud-normalpri-0] Put blob with key [byHashV1/0001001020 
> > B3A2D81C390E0531DBCF0DEC082C4CA96D26F26AA9A26A26E80B5F00FA9F48E3 ] to 
> > container [test-container]
> > ..
> > 2021-01-07 20:03:43.584,7964896102791478 {} DEBUG o.j.b.c.LocalBlobStore 
> > [cloud-normalpri-1] Put blob with key [byHashV1/0001001020 
> > B3A2D81C390E0531DBCF0DEC082C4CA96D26F26AA9A26A26E80B5F00FA9F48E3 ] to 
> > container [test-container]
> > ..
> > 2021-01-07 20:03:43.597,7964896115564074 {} DEBUG o.j.b.c.LocalBlobStore 
> > [cloud-normalpri-1] Put blob with key [byHashV1/0001001020 
> > B3A2D81C390E0531DBCF0DEC082C4CA96D26F26AA9A26A26E80B5F00FA9F48E3 ] to 
> > container [test-container]
> > ..
> > 2021-01-07 20:03:43.605,7964896123355487 {} ERROR 
> > c.p.d.m.j.AbstractUploadCallable [cloud-normalpri-0] [task 1] Upload 
> > failed: chunk [1048576-2097151] - uploading
> > java.lang.RuntimeException: java.nio.file.NoSuchFileException: 
> > /tmp/16100498223118761388361431415820/test-container/byHashV1/0001001020
> >  B3A2D81C390E0531DBCF0DEC082C4CA96D26F26AA9A26A26E80B5F00FA9F48E3
> > at com.google.common.base.Throwables.propagate(Throwables.java:240) 
> > ~[guava-21.0.jar:?]
> > at 
> > org.jclouds.filesystem.strategy.internal.FilesystemStorageStrategyImpl.setBlobAccess(FilesystemStorageStrategyImpl.java:694)
> >  ~[filesystem-2.2.1.jar:2.2.1]
> > at 
> > org.jclouds.blobstore.config.LocalBlobStore.setBlobAccess(LocalBlobStore.java:409)
> >  ~[jclouds-blobstore-2.2.1.jar:2.2.1]
> > at 
> > org.jclouds.blobstore.config.LocalBlobStore.putBlob(LocalBlobStore.java:794)
> >  ~[jclouds-blobstore-2.2.1.jar:2.2.1]
> > at 
> > org.jclouds.blobstore.config.LocalBlobStore.putBlob(LocalBlobStore.java:533)
> >  ~[jclouds-blobstore-2.2.1.jar:2.2.1]
> > at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source) 
> > ~[?:?]
> > at 
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >  ~[?:1.8.0_181]
> > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
> > at 
> > com.google.inject.internal.DelegatingInvocationHandler.invoke(DelegatingInvocationHandler.java:37)
> >  ~[guice-3.0.jar:?]
> > at com.sun.prox

Re: [ANNOUNCE] Apache jclouds 2.3.0 released

2021-03-18 Thread Andrew Gaul
On Thu, Mar 18, 2021 at 09:16:51AM +0100, Jean-Noël Rouvignac (ForgeRock) wrote:
> Side note: I am interested in helping reduce the reliance on guava (as I
> did with xmlbuilder).
> I am not even contemplating getting rid of it given how deeply it is used.
> But we need to start somewhere. Less adherence == potentially less breakage.

We will gladly accept PRs which improve dependency issues and I
appreciate you removing xmlbuilder!  Java 8 introduced CompletableFuture
so it is possible to migrate from ListenableFuture.  Some technical debt
has accumulated over the years but we should keep chipping away at it.
I do think that using Guava in the public interfaces makes it difficult
to shade this dependency and thus ListenableFuture might be an easy
place to start.
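
For a flavor of the ListenableFuture work, a small bridge like this (a
sketch, not existing jclouds code) lets internals move to
CompletableFuture while keeping Guava off new public signatures:

    import java.util.concurrent.CompletableFuture;

    import com.google.common.util.concurrent.FutureCallback;
    import com.google.common.util.concurrent.Futures;
    import com.google.common.util.concurrent.ListenableFuture;
    import com.google.common.util.concurrent.MoreExecutors;

    static <T> CompletableFuture<T> toCompletableFuture(ListenableFuture<T> lf) {
        CompletableFuture<T> cf = new CompletableFuture<>();
        Futures.addCallback(lf, new FutureCallback<T>() {
            @Override public void onSuccess(T result) { cf.complete(result); }
            @Override public void onFailure(Throwable t) { cf.completeExceptionally(t); }
        }, MoreExecutors.directExecutor());
        return cf;
    }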

-- 
Andrew Gaul
http://gaul.org/


Re: enable dependabot on jclouds github repo(s)?

2021-03-18 Thread Andrew Gaul
On Thu, Mar 18, 2021 at 11:38:02AM +0100, Fritz Elfert wrote:
> Jean-Noël mentioning security scanners in our recent discussion makes me think:
> 
> It would be nice to have dependabot enabled in the github repo settings
> (Security & analysis).
> If both alerts and security updates are enabled, it automatically creates 
> pull requests for the relevant changes.

This is something we could experiment with although there are more
considerations for upgrading dependencies than simply getting the latest
version, as the recent thread about Guava and Guice demonstrates.  My
experience with these automatic tools is that they work better for
applications than frameworks.  We would also want to align with other
Apache projects -- do we have some similar infrastructure already?

-- 
Andrew Gaul
http://gaul.org/


Re: [ANNOUNCE] Apache jclouds 2.3.0 released

2021-03-18 Thread Andrew Gaul
On Thu, Mar 18, 2021 at 09:38:19AM +0100, Jean-Noël Rouvignac (ForgeRock) wrote:
> Thanks for highlighting ListenableFuture for a start. Given the size of
> jclouds (about 400kLOC of production code with a stupid wc -l) and my
> inexperience of its codebase, I need help to know where to start. :)
> I'll see what I can do about it. Do you want to create an issue that
> describes this work and what you expect out of it?
> 
> You also mentioned that some public APIs return `ImmutableMap`. Have these
> methods been deprecated in favour of an overload returning a `Map`?

I think opening a JIRA issue describing the possible improvements makes
sense since this spans several PRs (and we should open issues for all
functional changes to make release notes work anyway).  ImmutableMap is
another good place to start -- some style guides suggest returning
ImmutableMap instead of Map to hint at immutability but this
unnecessarily makes the interface reliant on Guava.  I would have to
look at interface-maven-plugin again to see everything but unfortunately
I ran into issues with the way jclouds uses public access across
subpackages.
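
As a sketch of the ImmutableMap direction (the interface and method
names here are made up for illustration, not real jclouds APIs):

    import java.util.Map;

    import com.google.common.collect.ImmutableMap;

    public interface ExampleMetadata {
        /** @deprecated prefer {@link #userMetadata()}, which hides the Guava type. */
        @Deprecated
        ImmutableMap<String, String> immutableUserMetadata();

        // still immutable at runtime, but Guava no longer leaks into the signature
        Map<String, String> userMetadata();
    }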

-- 
Andrew Gaul
http://gaul.org/


  1   2   >