Hello Vladimir,

I am not certain whether Drill will source the core-site.xml file from the
Hadoop directory, but I know that you can provide one in the Drill conf/
directory.
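
Something along these lines should work as a starting point (a sketch only;
it just mirrors the fs.igfs.impl property you quote below, and I have not
tested it against an IGFS endpoint myself):

<!-- $DRILL_HOME/conf/core-site.xml -->
<configuration>
  <property>
    <!-- map the igfs:// scheme to Ignite's Hadoop FileSystem implementation -->
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
</configuration>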

That being said, I did not think a new core-site entry was needed to add a
filesystem implementation; I thought the JARs simply needed to be added to
the classpath. Have you added the JARs containing the Ignite FileSystem
implementation to the Drill classpath? The easiest way I know to do this is
to copy them into the jars/3rdparty directory of the Drill installation.
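
For example, roughly (the paths and jar names below are guesses, so adjust
them to your layout; I am assuming the Ignite Hadoop accelerator ships the
igfs FileSystem class in its ignite-core and ignite-hadoop jars):

# copy the Ignite jars onto Drill's classpath
cp $IGNITE_HOME/libs/ignite-core-*.jar $DRILL_HOME/jars/3rdparty/
cp $IGNITE_HOME/libs/ignite-hadoop/ignite-hadoop-*.jar $DRILL_HOME/jars/3rdparty/
# restart the drillbit(s) so the new jars are picked up
$DRILL_HOME/bin/drillbit.sh restart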

- Jason

On Fri, Feb 5, 2016 at 9:55 AM, Vladimir Ozerov <voze...@gridgain.com>
wrote:

> *Petar,*
>
> I created a ticket in the Ignite JIRA. I hope someone from the community
> will be able to take a look at it soon:
> https://issues.apache.org/jira/browse/IGNITE-2568
> Please keep an eye on it.
>
> Cross-posting the issue to the Drill dev list.
>
> *Dear Drill folks,*
>
> We have our own implementation of the Hadoop FileSystem here in Ignite. It
> has a unique URI scheme ("igfs://") and is normally registered in Hadoop's
> core-site.xml like this:
>
> <property>
>     <name>fs.igfs.impl</name>
>     <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
> </property>
>
>
> However, when we try to use this file system as a data source in Drill, an
> exception is thrown (see the stack trace below). I suspect that the default
> Hadoop core-site.xml is somehow not taken into consideration by Drill. Could
> you please give us a hint on how to properly configure a custom Hadoop
> FileSystem implementation in your system?
>
> Thank you!
>
> Vladimir.
>
> Stack trace:
>
> java.io.IOException: No FileSystem for scheme: igfs
> at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:92) ~[drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213) ~[drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210) ~[drill-java-exec-1.4.0.jar:1.4.0]
> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_40-ea]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea]
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
> at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:78) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.planner.sql.DrillSqlWorker.<init>(DrillSqlWorker.java:93) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907) [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244) [drill-java-exec-1.4.0.jar:1.4.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40-ea]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40-ea]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]
>
>
>
> On Fri, Feb 5, 2016 at 4:18 PM, pshomov <pe...@activitystream.com> wrote:
>
> >
> > Hi Vladimir,
> >
> > My bad about that ifgs://; I fixed it, but it changed nothing.
> >
> > I don’t think Drill cares much about Hadoop settings. It never asked me
> > to point it to an installation or configuration of Hadoop. I believe
> > they have their own storage plugin mechanism and one of their built-in
> > plugins happens to be the HDFS one.
> >
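> > Just to illustrate, a file-system plugin in the Drill web UI is configured
> > as JSON along these lines (a rough sketch only; the igfs connection URI,
> > host and port are placeholders I have not verified against an IGFS
> > endpoint):
> >
> > {
> >   "type": "file",
> >   "enabled": true,
> >   "connection": "igfs://igfs@localhost:10500/",
> >   "workspaces": {
> >     "root": { "location": "/", "writable": false, "defaultInputFormat": null }
> >   },
> >   "formats": {
> >     "csv": { "type": "text", "extensions": ["csv"], "delimiter": "," }
> >   }
> > }
> >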
> > Here is (part of) the Drill log
> >
> > 2016-02-05 13:14:03,507 [294b5fe3-8f63-2134-67e0-42f7111ead44:foreman] ERROR o.a.d.exec.util.ImpersonationUtil - Failed to create DrillFileSystem for proxy user: No FileSystem for scheme: igfs
> > java.io.IOException: No FileSystem for scheme: igfs
> > at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:92) ~[drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213) ~[drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210) ~[drill-java-exec-1.4.0.jar:1.4.0]
> > at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_40-ea]
> > at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea]
> > at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
> > at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:78) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.planner.sql.DrillSqlWorker.<init>(DrillSqlWorker.java:93) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907) [drill-java-exec-1.4.0.jar:1.4.0]
> > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244) [drill-java-exec-1.4.0.jar:1.4.0]
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40-ea]
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40-ea]
> > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]
> > 2016-02-05 13:14:03,556 [294b5fe3-8f63-2134-67e0-42f7111ead44:foreman] ERROR o.a.drill.exec.work.foreman.Foreman - SYSTEM ERROR: IOException: No FileSystem for scheme: igfs
> >
> >
> > [Error Id: 6c95179a-6d26-498c-905f-dc18509c1651 on 192.168.1.42:31010]
> > org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IOException: No FileSystem for scheme: igfs
> >
> >
> > I copied the same Ignite jars that go into Hadoop over to Drill just in
> > case, but that did not help either.
> > I think the only way is to write a Drill storage plugin for Ignite, or to
> > somehow make the Ignite caching happen inside Hadoop and be totally
> > transparent to Drill.
> >
> > Thank you for the detailed help; any further ideas are, as always, welcome ;)
> >
> > Best regards,
> >
> > Petar
> >
> >
>
