It looks like file system configuration is created inside
WorkspaceSchemaFactory
constructor:
this.fsConf = plugin.getFsConf();
Does this mean that we have to implement our own Ignite FileSystemPlugin to
be able to work with Drill?
On Fri, Feb 5, 2016 at 8:55 PM, Vladimir Ozerov
Hello Vladimir,
I am not certain if Drill will source the core-site.xml file from the
hadoop directory, but I know that you can provide one in the Drill conf/
directory.
That being said, I did not think that a new core-site entry was needed to
add a filesystem implementation. I thought that the
*Peter,*
I created the ticket in Ignite JIRA. I hope someone from the community will
be able to take a look at it soon -
https://issues.apache.org/jira/browse/IGNITE-2568
Please keep an eye on it.
Cross-posting the issue to Drill dev list.
*Dear Drill folks,*
We have our own implementation of
Wonderful. When you have a beta version, I can test it if you need.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p2849.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
I take tickets
https://issues.apache.org/jira/browse/IGNITE-1922 ,
https://issues.apache.org/jira/browse/IGNITE-2525 (they duplicate each
other).
Hi Jason,
adding
fs.igfs.impl
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem
fs.AbstractFileSystem.igfs.impl
org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem
to the core-site.xml in the Drill conf/ folder tied the whole
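Spelled out in core-site.xml property syntax, the two entries Jason lists would presumably look like this (a sketch; the class names are exactly the ones given above):

```xml
<!-- Sketch of the two IGFS entries above in core-site.xml property form -->
<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
</property>
```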
Petar,
IGFS configuration consists of two steps: starting Ignite node and
adjusting Hadoop configuration.
*1) Starting Ignite node:*
- Download Apache Ignite Hadoop Accelerator (
http://ignite.apache.org/download.cgi#binaries) and unpack it.
- If you want to link IGFS and HDFS, please
Hi Vladimir,
Changed back fs.defaultFS (in my case to hdfs://localhost:9000) and the
HDFS service is back on its feet ;)
My problem is that Apache Drill does not accept the igfs:// scheme ("No
FileSystem for scheme: figs"). It knows the hdfs:// scheme pretty well,
though. I am guessing that accessing
Hi,
To my knowledge there is an issue with data eviction from off-heap when near
cache configuration is used.
However, note that EvictionPolicy is not supported for the offheap tiered
mode, and you have to define the limits this way:
cacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
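For context, a slightly fuller sketch around that line, assuming the Apache Ignite 1.x cache configuration API (the cache name and the 512 MB cap are illustrative assumptions, not values from this thread):

```java
// Sketch only: OFFHEAP_TIERED with an explicit off-heap memory limit
// instead of an EvictionPolicy.
CacheConfiguration<Long, String> cacheCfg =
    new CacheConfiguration<>("myCache"); // cache name is an assumption
cacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
// Cap off-heap memory directly; entries beyond this limit are evicted
// from the off-heap tier.
cacheCfg.setOffHeapMaxMemory(512L * 1024 * 1024); // 512 MB, illustrative
```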
Hi Vladimir,
Thank you for keeping very speedy responses to my questions! Much, much
appreciated!
I apparently missed the point that I am supposed to run an Ignite node
outside of Hadoop; I thought it would spin one up in-process. Anyway, I
followed your instructions and set up a
Petar,
Yes, I mean setting igfs://igfs@localhost:10500 to Drill's config. I see in
your email that you typed "ifgs" instead of "igfs". Is it a typo in email
or in the Drill configuration as well? Please try changing it to "igfs" and
possibly restart the Drill instance, because maybe it simply didn't pick
Hi,
The client failed to connect to a server host. Was any server node active at
the time the client was connecting to the cluster?
Did you use the same configuration (example-ignite.xml) for the client node
as well? If you didn't, make sure that the client uses the same IP finder
settings.
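One way to read "same IP finder settings" in code form is the following sketch, assuming a static TcpDiscoveryVmIpFinder; the address list is illustrative and must match whatever the server's example-ignite.xml uses:

```java
// Sketch: client node reusing the servers' discovery/IP finder settings.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true); // join as a client, not a server

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
// Must match the addresses configured on the server nodes.
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discoSpi);
```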
Petar,
Looks like this could be what we need - Storage Plugin -
https://drill.apache.org/docs/plugin-configuration-basics/
Could you please try configuring new plugin for IGFS?
If the problem is still there, could you please provide detailed error
description and possibly logs?
Vladimir.
On
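Drill file-system storage plugins of the kind Vladimir links to are defined as JSON in the Drill web UI; a minimal sketch pointing at IGFS might look like the following (the host, port, workspace, and format entries are assumptions for illustration):

```json
{
  "type": "file",
  "enabled": true,
  "connection": "igfs://igfs@localhost:10500/",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "csv": { "type": "text", "extensions": ["csv"], "delimiter": "," }
  }
}
```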
Eviction of entries doesn't trigger flushing of data from the
write-behind store [1] to your disk storage.
The write-behind store flushes data based on the flush frequency and its
internal queue size.
Actually, when you put an entry, its copy will be stored in offheap and
in the write-behind
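The flush frequency and queue size mentioned above are set on the cache configuration; a sketch assuming the Ignite 1.x API (the numeric values are illustrative assumptions):

```java
// Sketch: write-behind store flushing is governed by flush frequency
// and the internal queue (flush) size, not by eviction.
CacheConfiguration<Long, String> cacheCfg =
    new CacheConfiguration<>("myCache"); // cache name is an assumption
cacheCfg.setWriteThrough(true);
cacheCfg.setWriteBehindEnabled(true);
cacheCfg.setWriteBehindFlushFrequency(5_000); // flush every 5 seconds...
cacheCfg.setWriteBehindFlushSize(10_240);     // ...or when the queue reaches this size
```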
Hi,
We don't use any Near caches. You can find our configuration on this gist:
https://gist.github.com/anonymous/ebfa88893764f8a3a5f8
Note that we know that these settings are ridiculous for any serious
production environment but we are just interested in what happens when the
offheap gets
Hi Vladimir,
My bad about that ifgs://; I fixed it, but it changed nothing.
I don’t think Drill cares much about Hadoop settings. It never asked me to
point it to an installation or configuration of Hadoop. I believe they have
their own storage plugin mechanism and one of their built-in plugins
I’m seeing this exception when my Service implementation calls loadCache() from
within its init() method:
Caused by: org.apache.ignite.IgniteCheckedException: class
org.apache.ignite.IgniteCheckedException: Sessions attributes and checkpoints
are disabled by default for better performance (to
Steve,
Looks like some of our internal jobs are not marked with the special
annotation that should prevent such issues. I created a ticket to fix
this [1].
As a workaround, I can suggest filtering out our jobs by package name
(org.apache.ignite) and applying your custom logic only to your jobs.
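The package-name workaround could be sketched as a small helper like the one below; the class and method names are my own for illustration, not an Ignite API:

```java
// Sketch of the suggested workaround: skip Ignite-internal jobs by
// package name and apply custom logic only to user jobs.
public class JobFilter {
    /** Returns true for jobs that are not Ignite-internal. */
    public static boolean isUserJob(String jobClassName) {
        return !jobClassName.startsWith("org.apache.ignite");
    }

    public static void main(String[] args) {
        // Ignite-internal job: should be skipped.
        System.out.println(isUserJob("org.apache.ignite.internal.SomeJob"));
        // User job: custom logic applies.
        System.out.println(isUserJob("com.example.MyComputeJob"));
    }
}
```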