Yes, I guess you need to have access to the bucket.
Not sure how it will work otherwise.

Thanks
Padma


On Oct 24, 2017, at 10:51 AM, Charles Givre <[email protected]> wrote:

Hi Padma,
I’m wondering if the issue is that I only have access to a subfolder in the S3 bucket, i.e.:

s3://bucket/folder1/folder2/folder3

I only have access to folder3.  Might that be causing the issue?
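If so, is there a way to scope the plugin to just that prefix? Something like this is what I had in mind, pointing a workspace at the subfolder (just a sketch; the bucket and folder names are placeholders):

{
  "type": "file",
  "enabled": true,
  "connection": "s3a://bucket",
  "workspaces": {
    "root": {
      "location": "/folder1/folder2/folder3",
      "writable": false,
      "defaultInputFormat": null
    }
  }
}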
—C



On Oct 24, 2017, at 13:49, Padma Penumarthy <[email protected]> wrote:

Charles, can you try exactly what I did?
I did not do anything else other than enable the S3 plugin and change the plugin
configuration like this.

{
"type": "file",
"enabled": true,
"connection": "s3a://<bucket-name>",
"config": {
  "fs.s3a.access.key": "XXXX",
  "fs.s3a.secret.key": "YYYY"
 },
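
Once that is saved, a quick sanity check from sqlline (assuming the plugin is named s3, as in my setup) would be:

0: jdbc:drill:zk=local> use s3;
0: jdbc:drill:zk=local> show files in s3.root;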


Thanks
Padma


On Oct 24, 2017, at 10:06 AM, Charles Givre <[email protected]> wrote:

Hi everyone and thank you for your help.  I’m still not able to connect to S3.


Here is the error I’m getting:

0: jdbc:drill:zk=local> use s3;
Error: RESOURCE ERROR: Failed to create schema tree.


[Error Id: 57c82d90-2166-4a37-94a0-1cfeb0cdc4b6 on charless-mbp-2.fios-router.home:31010] (state=,code=0)
java.sql.SQLException: RESOURCE ERROR: Failed to create schema tree.


[Error Id: 57c82d90-2166-4a37-94a0-1cfeb0cdc4b6 on charless-mbp-2.fios-router.home:31010]
at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
at org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
at org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1895)
at org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:61)
at org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:473)
at org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1100)
at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:477)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:181)
at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:109)
at org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:121)
at org.apache.drill.jdbc.impl.DrillStatementImpl.execute(DrillStatementImpl.java:101)
at sqlline.Commands.execute(Commands.java:841)
at sqlline.Commands.sql(Commands.java:751)
at sqlline.SqlLine.dispatch(SqlLine.java:746)
at sqlline.SqlLine.begin(SqlLine.java:621)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: Failed to create schema tree.


[Error Id: 57c82d90-2166-4a37-94a0-1cfeb0cdc4b6 on charless-mbp-2.fios-router.home:31010]
at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:368)
at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:90)
at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274)
at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:244)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
0: jdbc:drill:zk=local>


Here is my conf-site.xml file:

<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>XXX</value>
</property>

<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>XXX</value>
</property>

<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>XXX</value>
</property>

<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>XXX</value>
</property>

<property>
  <name>fs.s3a.awsAccessKeyId</name>
  <value>XXX</value>
</property>

<property>
  <name>fs.s3a.awsSecretAccessKey</name>
  <value>XXX</value>
</property>

And my config info:

{
"type": "file",
"enabled": true,
"connection": "s3://<my bucket>",
"config": null,
"workspaces": {
 "root": {
   "location": "/",
   "writable": false,
   "defaultInputFormat": null
 }
},

I did copy jets3t-0.9.4.jar to the /jars/3rdparty path.  Any debugging suggestions?
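
For reference, here is how I confirmed the jar landed where Drill can see it ($DRILL_HOME here is just shorthand for wherever Drill is unpacked):

ls $DRILL_HOME/jars/3rdparty | grep -i jets3t
jets3t-0.9.4.jar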
—C


On Oct 20, 2017, at 15:55, Arjun kr <[email protected]> wrote:

Hi Charles,


I'm not aware of any such settings. As Padma mentioned in a previous mail, it works fine for me when I follow the instructions in https://drill.apache.org/docs/s3-storage-plugin/ .


Thanks,


Arjun


________________________________
From: Charles Givre <[email protected]>
Sent: Friday, October 20, 2017 11:48 PM
To: [email protected]
Subject: Re: S3 Connection Issues

Hi Arjun,
Thanks for your help.  Are there settings in S3 that would prevent Drill from connecting?  I’ll try the hdfs shell, but I am able to connect with the AWS CLI tool.  My hunch is that either a permission is not set correctly on S3 or I’m missing some config variable in Drill.
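
To test that hunch, I may also check what the keys actually resolve to on the AWS side, along these lines (the bucket name is a placeholder):

aws sts get-caller-identity
aws s3api get-bucket-acl --bucket my-bucket
aws s3 ls s3://my-bucket/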
— C


On Oct 20, 2017, at 14:12, Arjun kr <[email protected]> wrote:

Hi Charles,


Any chance you can test S3 connectivity with other tools like the hdfs shell or Hive, in case you haven't tried already (and these tools are available)? This may help identify whether it is a Drill-specific issue.


For connecting via hdfs, you can try the command below.


hadoop fs -Dfs.s3a.access.key="XXXX" -Dfs.s3a.secret.key="YYYYY" -ls s3a://<bucket-name>/


Enable DEBUG logging if needed.


export HADOOP_ROOT_LOGGER=DEBUG,console


Thanks,


Arjun


________________________________
From: Padma Penumarthy <[email protected]>
Sent: Friday, October 20, 2017 3:00 AM
To: [email protected]
Subject: Re: S3 Connection Issues

Hi Charles,

I tried us-west-2 and it worked fine for me with Drill built from the latest source.
I did not do anything special.
I just enabled the S3 plugin and updated the plugin configuration like this.

{
"type": "file",
"enabled": true,
"connection": "s3a://<bucket-name>",
"config": {
"fs.s3a.access.key": "XXXX",
"fs.s3a.secret.key": "YYYY"
},

I am able to run show databases and can also query the parquet files I uploaded to the bucket.

0: jdbc:drill:zk=local> show databases;
+---------------------+
|     SCHEMA_NAME     |
+---------------------+
| INFORMATION_SCHEMA  |
| cp.default          |
| dfs.default         |
| dfs.root            |
| dfs.tmp             |
| s3.default          |
| s3.root             |
| sys                 |
+---------------------+
8 rows selected (2.892 seconds)


Thanks
Padma

On Oct 18, 2017, at 9:18 PM, Charles Givre <[email protected]> wrote:

Hi Padma,
The bucket is in us-west-2.  I also discovered that some of the variable names
in the documentation on the main Drill site are incorrect. Do I need to specify
the region in the configuration somewhere?

As an update, after discovering that the variable names are incorrect and that 
I didn’t have Jets3t installed properly, I’m now getting the following error:

jdbc:drill:zk=local> show databases;
Error: RESOURCE ERROR: Failed to create schema tree.


[Error Id: e6012aa2-c775-46b9-b3ee-0af7d0b0871d on charless-mbp-2.fios-router.home:31010]

(org.apache.hadoop.fs.s3.S3Exception) org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message></Error>
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get():175
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode():221

Thanks,
— C


On Oct 19, 2017, at 00:14, Padma Penumarthy <[email protected]> wrote:

Which AWS region are you trying to connect to?
We have a problem connecting to regions that support only the v4 signature,
since the version of Hadoop we include in Drill is old.
Last time I tried, using Hadoop 2.8.1 worked for me.
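
If building against a newer Hadoop is not an option, one thing that might be worth trying (I have not verified this with the Hadoop version we bundle, and fs.s3a.endpoint only exists in newer s3a clients) is pinning the regional endpoint in the plugin config:

"config": {
  "fs.s3a.access.key": "XXXX",
  "fs.s3a.secret.key": "YYYY",
  "fs.s3a.endpoint": "s3.us-west-2.amazonaws.com"
},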

Thanks
Padma


On Oct 18, 2017, at 8:14 PM, Charles Givre <[email protected]> wrote:

Hello all,
I’m trying to use Drill to query data in an S3 bucket and am running into some
issues which I can’t seem to fix.  I followed the various instructions online
to set up Drill with S3, and put my keys in both conf-site.xml and the
plugin config, but every time I attempt to do anything I get the following
errors:


jdbc:drill:zk=local> show databases;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 56D1999BD1E62DEB, AWS Error Code: null, AWS Error Message: Forbidden


[Error Id: 65d0bb52-a923-4e98-8ab1-65678169140e on charless-mbp-2.fios-router.home:31010] (state=,code=0)
0: jdbc:drill:zk=local> show databases;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 4D2CBA8D42A9ECA0, AWS Error Code: null, AWS Error Message: Forbidden


[Error Id: 25a2d008-2f4d-4433-a809-b91ae063e61a on charless-mbp-2.fios-router.home:31010] (state=,code=0)
0: jdbc:drill:zk=local> show files in s3.root;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 2C635944EDE591F0, AWS Error Code: null, AWS Error Message: Forbidden


[Error Id: 02e136f5-68c0-4b47-9175-a9935bda5e1c on charless-mbp-2.fios-router.home:31010] (state=,code=0)
0: jdbc:drill:zk=local> show schemas;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 646EB5B2EBCF7CD2, AWS Error Code: null, AWS Error Message: Forbidden


[Error Id: 954aaffe-616a-4f40-9ba5-d4b7c04fe238 on charless-mbp-2.fios-router.home:31010] (state=,code=0)

I have verified that the keys are correct by using the AWS CLI to download
some of the files, but I’m at a loss as to how to debug this.  Any
suggestions?
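
For completeness, these are the CLI checks that worked with the same keys (bucket name redacted; the object name is just a placeholder):

aws s3 ls s3://<my bucket>/
aws s3 cp s3://<my bucket>/<some file> .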
Thanks in advance,
— C
