Hi Arjun,

Thanks! That worked and I am now able to query S3. But I didn't understand your last line, or how this worked with the 2.7 jar. Can you please explain this a bit or provide a reference link?

@Padma, I was trying to build from source and executed the steps below, but got an error:

Java version: 1.8.0_151
Maven version: 3.5.2

1. git clone https://git-wip-us.apache.org/repos/asf/drill.git
2. cd drill && vi pom.xml --> changed the Hadoop version to 2.9.0
3. mvn clean install -DskipTests
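For reference, the edit in step 2 amounts to bumping the Hadoop version property in the root pom.xml. A minimal sketch (the property name is taken from the Drill root POM of that era; verify it against your checkout):

```xml
<!-- root pom.xml (sketch): bump the Hadoop dependency version -->
<properties>
  <hadoop.version>2.9.0</hadoop.version>  <!-- previously 2.7.1 -->
</properties>
```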
Error:

[WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireFilesSize failed with message:
The file drill-jdbc-all-1.13.0-SNAPSHOT.jar is outside the expected size range.
This is likely due to you adding new dependencies to a java-exec and not updating the excludes in this module. This is important as it minimizes the size of the dependency of Drill application users.
/opt/apache-s/apache-drill-s/drill/exec/jdbc-all/target/drill-jdbc-all-1.13.0-SNAPSHOT.jar size (35620228) too large. Max. is 35000000
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Drill Root POM .............................. SUCCESS [03:02 min]
[INFO] tools/Parent Pom ................................... SUCCESS [  0.510 s]
[INFO] tools/freemarker codegen tooling ................... SUCCESS [02:42 min]
[INFO] Drill Protocol ..................................... SUCCESS [ 16.736 s]
[INFO] Common (Logical Plan, Base expressions) ............ SUCCESS [01:31 min]
[INFO] Logical Plan, Base expressions ..................... SUCCESS [ 17.550 s]
[INFO] exec/Parent Pom .................................... SUCCESS [  0.572 s]
[INFO] exec/memory/Parent Pom ............................. SUCCESS [  0.525 s]
[INFO] exec/memory/base ................................... SUCCESS [  8.056 s]
[INFO] exec/rpc ........................................... SUCCESS [  6.208 s]
[INFO] exec/Vectors ....................................... SUCCESS [01:28 min]
[INFO] contrib/Parent Pom ................................. SUCCESS [  0.461 s]
[INFO] contrib/data/Parent Pom ............................ SUCCESS [  0.443 s]
[INFO] contrib/data/tpch-sample-data ...................... SUCCESS [ 21.941 s]
[INFO] exec/Java Execution Engine ......................... SUCCESS [05:54 min]
[INFO] exec/JDBC Driver using dependencies ................ SUCCESS [ 14.827 s]
[INFO] JDBC JAR with all dependencies ..................... FAILURE [ 44.417 s]
[INFO] contrib/kudu-storage-plugin ........................ SKIPPED
[INFO] contrib/opentsdb-storage-plugin .................... SKIPPED
[INFO] contrib/mongo-storage-plugin ....................... SKIPPED
[INFO] contrib/hbase-storage-plugin ....................... SKIPPED
[INFO] contrib/jdbc-storage-plugin ........................ SKIPPED
[INFO] contrib/hive-storage-plugin/Parent Pom ............. SKIPPED
[INFO] contrib/hive-storage-plugin/hive-exec-shaded ....... SKIPPED
[INFO] contrib/hive-storage-plugin/core ................... SKIPPED
[INFO] contrib/drill-gis-plugin ........................... SKIPPED
[INFO] contrib/kafka-storage-plugin ....................... SKIPPED
[INFO] Packaging and Distribution Assembly ................ SKIPPED
[INFO] contrib/mapr-format-plugin ......................... SKIPPED
[INFO] contrib/sqlline .................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16:52 min
[INFO] Finished at: 2018-02-14T12:27:17+05:30
[INFO] Final Memory: 156M/1571M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (enforce-jdbc-jar-compactness) on project drill-jdbc-all: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed. -> [Help 1]
[ERROR]
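The failing rule is the RequireFilesSize check in exec/jdbc-all/pom.xml; one common workaround when building against a newer Hadoop is to raise its maxsize. A sketch, assuming the element layout below roughly matches that checkout:

```xml
<!-- exec/jdbc-all/pom.xml, maven-enforcer-plugin rules (sketch) -->
<requireFilesSize>
  <maxsize>36000000</maxsize>  <!-- raised from 35000000 to fit the larger Hadoop 2.9.0 dependencies -->
  <files>
    <file>${project.build.directory}/drill-jdbc-all-${project.version}.jar</file>
  </files>
</requireFilesSize>
```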





On Wed, Feb 14, 2018 12:32 PM, Arjun kr <arjun...@outlook.com> wrote:

If you have the 'hadoop-aws-2.9.0.jar' jar in the Drill classpath, replace it with the original AWS jar that comes with the tarball.

The class 'org/apache/hadoop/fs/GlobalStorageStatistics' is not available in the Hadoop common jar - hadoop-common-2.7.1.jar (it was added in 2.8.0). You can try with the original tarball installation jars.

Thanks,
Arjun
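A sketch of the jar swap described above (the DRILL_HOME path and jar locations are assumptions; adjust for your installation):

```shell
# Assumed layout: third-party jars live under $DRILL_HOME/jars/3rdparty
DRILL_HOME=/opt/apache-drill-1.11.0
cd "$DRILL_HOME/jars/3rdparty"
mv hadoop-aws-2.9.0.jar /tmp/                             # set aside the mismatched 2.9.0 jar
cp /path/to/tarball/jars/3rdparty/hadoop-aws-2.7.1.jar .  # restore the jar shipped with the tarball
```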




________________________________
From: Anup Tiwari <anup.tiw...@games24x7.com>
Sent: Wednesday, February 14, 2018 11:49 AM
To: user@drill.apache.org
Subject: Re: S3 Connection Issues
Hi Arjun,

I tried what you said but it's not working, and queries are going into the ENQUEUED state. Please find the log below:

Error:

[drill-executor-1] ERROR o.a.d.exec.server.BootStrapContext - org.apache.drill.exec.work.foreman.Foreman.run() leaked an exception.
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/GlobalStorageStatistics$StorageStatisticsProvider
  at java.lang.Class.forName0(Native Method) ~[na:1.8.0_72]
  at java.lang.Class.forName(Class.java:348) ~[na:1.8.0_72]
  at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:91) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:219) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:216) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_72]
  at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_72]
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:216) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:208) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:153) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:77) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:64) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:149) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas(StoragePluginRegistryImpl.java:396) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema(SchemaTreeProvider.java:110) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema(SchemaTreeProvider.java:99) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:164) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:153) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:139) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.planner.sql.SqlConverter.<init>(SqlConverter.java:111) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:101) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:79) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1050) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:280) ~[drill-java-exec-1.11.0.jar:1.11.0]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_72]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_72]
  at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.GlobalStorageStatistics$StorageStatisticsProvider
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_72]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_72]
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[na:1.8.0_72]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_72]
  ... 38 common frames omitted




@padma, thanks for help but i will try to build it out using below link and if

things didn't worked out then will surely need your help :-

https://drill.apache.org/docs/compiling-drill-from-source/

Compiling Drill from Source - Apache
Drill<https://drill.apache.org/docs/compiling-drill-from-source/>

drill.apache.org

To develop Drill, you compile Drill from source code and then set up a project
in Eclipse for use as your development environment. To review or contribute to
Drill ...










Also as you have mentioned, will change hadoop version to 2.9.0 in pom file and

then build it.Let me know if anything needs to be taken care of.
On Wed, Feb 14, 2018 9:17 AM, Padma Penumarthy <ppenumar...@mapr.com> wrote:

Yes, I built it by changing the version in the pom file.

Try and see if what Arjun suggested works. If not, you can download the source, change the version, and build, or, if you prefer, I can provide you with a private build that you can try.

Thanks,
Padma

On Feb 13, 2018, at 1:46 AM, Anup Tiwari <anup.tiw...@games24x7.com> wrote:

Hi Padma,

As you have mentioned "Last time I tried, using Hadoop 2.8.1 worked for me", have you built Drill with Hadoop 2.8.1? If yes, can you provide the steps?

I have downloaded the 1.11.0 tarball and replaced hadoop-aws-2.7.1.jar with hadoop-aws-2.9.0.jar, but I am still not able to query the S3 bucket successfully; queries are stuck in the STARTING state.

We are trying to query the "ap-south-1" region, which supports only v4 signatures.
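For v4-only regions such as ap-south-1, the s3a connector (given a new enough hadoop-aws, 2.8+) is normally pointed at the region-specific endpoint. A sketch of the core-site.xml entry, with the endpoint value following standard AWS naming rather than anything stated in this thread:

```xml
<!-- conf/core-site.xml (sketch): region endpoint for a v4-only region -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.ap-south-1.amazonaws.com</value>
</property>
```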
On Thu, Oct 19, 2017 9:44 AM, Padma Penumarthy <ppenumar...@mapr.com> wrote:
Which AWS region are you trying to connect to?

We have a problem connecting to regions which support only v4 signatures, since the version of Hadoop we include in Drill is old. Last time I tried, using Hadoop 2.8.1 worked for me.

Thanks,
Padma
On Oct 18, 2017, at 8:14 PM, Charles Givre <cgi...@gmail.com> wrote:
Hello all,

I'm trying to use Drill to query data in an S3 bucket and am running into some issues which I can't seem to fix. I followed the various instructions online to set up Drill with S3, and put my keys in both core-site.xml and the plugin config, but every time I attempt to do anything I get the following errors:
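For context, the keys referred to here normally go into Drill's conf/core-site.xml using the standard s3a property names; a sketch with placeholder values, not the poster's actual configuration:

```xml
<!-- conf/core-site.xml (sketch): S3 credentials for the s3a connector -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```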
jdbc:drill:zk=local> show databases;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 56D1999BD1E62DEB, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 65d0bb52-a923-4e98-8ab1-65678169140e on charless-mbp-2.fios-router.home:31010] (state=,code=0)

0: jdbc:drill:zk=local> show databases;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 4D2CBA8D42A9ECA0, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 25a2d008-2f4d-4433-a809-b91ae063e61a on charless-mbp-2.fios-router.home:31010] (state=,code=0)

0: jdbc:drill:zk=local> show files in s3.root;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 2C635944EDE591F0, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 02e136f5-68c0-4b47-9175-a9935bda5e1c on charless-mbp-2.fios-router.home:31010] (state=,code=0)

0: jdbc:drill:zk=local> show schemas;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 646EB5B2EBCF7CD2, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 954aaffe-616a-4f40-9ba5-d4b7c04fe238 on charless-mbp-2.fios-router.home:31010] (state=,code=0)
I have verified that the keys are correct by using the AWS CLI and downloading some of the files, but I'm kind of at a loss as to how to debug. Any suggestions?

Thanks in advance,
-- C

Regards,
Anup Tiwari