Re: Flink Streamer Compatibility.

2018-09-25 Thread Saikat Maitra
Hi

Here is the working example https://github.com/samaitra/streamers

Let me know if you have any questions.

Regards,
Saikat

On Tue, Sep 25, 2018 at 2:22 PM, Anand Vijai  wrote:

> Is there a working example of how the integration works to sink Flink data
> into an Ignite Cache?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Flink Streamer Compatibility.

2018-09-25 Thread Anand Vijai
Is there a working example of how the integration works to sink Flink data
into an Ignite Cache?





Re: java.lang.IllegalArgumentException: Can not set final

2018-09-25 Thread smurphy
Thanks for the responses.

Ilya - I did try your suggestion: removing the final modifier, deleting the
constructor, and using only getters and setters.
I even went as far as making the fields public.
All these changes still resulted in an IllegalArgumentException.

If it helps, I was able to write a JUnit test that replicated the exception
and then fixed it along the lines of the following article:

https://docs.oracle.com/javase/tutorial/reflect/member/fieldTrouble.html
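For reference, the failure described in that article can be reproduced with plain reflection. The sketch below is standalone demo code (the Holder class is made up for illustration, not taken from the original test): Field.set on a final field throws unless setAccessible(true) is applied first, after which a non-static final instance field can be written.

```java
import java.lang.reflect.Field;

public class FinalFieldDemo {
    public static class Holder {
        public final int value = 42;
    }

    public static void main(String[] args) throws Exception {
        Holder h = new Holder();
        Field f = Holder.class.getDeclaredField("value");

        // Without setAccessible(true), writing a final field fails with
        // IllegalAccessException: "Can not set final int field ...".
        try {
            f.set(h, 7);
        } catch (IllegalAccessException e) {
            System.out.println("caught: " + e.getMessage());
        }

        // After setAccessible(true), a non-static final instance field
        // can be written reflectively (static finals still cannot).
        f.setAccessible(true);
        f.set(h, 7);
        System.out.println("value=" + f.getInt(h)); // value=7
    }
}
```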





ignite .net programatically access cluster config (not working)

2018-09-25 Thread wt
Hi

I have a single node cluster started with the following config (the XML
configuration was stripped by the mailing-list archive).

When I launch a client and read IIgnite.GetCluster().Configuration, the data
storage config the client gets is null. Here is a screenshot of the config:

2018-09-25_18-22-15.png

Is there something I am missing with accessing the cluster config through a
client?





Re: java.lang.IllegalArgumentException: Can not set final

2018-09-25 Thread vkulichenko
Probably that's the issue :) In any case, Java serialization successfully
deserializes such objects, so I think it's a bug.

-Val





Re: java.lang.IllegalArgumentException: Can not set final

2018-09-25 Thread Ilya Kasnacheev
Hello!

I can see that java.lang.reflect.Field is used. Why would it set final
fields?

Regards,
-- 
Ilya Kasnacheev


Tue, 25 Sep 2018 at 19:22, vkulichenko :

> Ilya,
>
> Do you know what is the reason for such a limitation? It doesn't sound right
> to me; I believe any other marshaller would work just fine with final
> fields.
>
> -Val
>
>
>
>


Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-25 Thread vkulichenko
Ray,

This sounds suspicious. Please show your configuration and the execution
plan for the query.

-Val





Re: java.lang.IllegalArgumentException: Can not set final

2018-09-25 Thread vkulichenko
Ilya,

Do you know what is the reason for such a limitation? It doesn't sound right
to me; I believe any other marshaller would work just fine with final
fields.

-Val





Re: kafka.common.KafkaException: Failed to parse the broker info from zookeeper

2018-09-25 Thread ilya.kasnacheev
Hello!

From the thread dumps it looks like this is more of a Kafka issue than an
Apache Ignite one. Have you tried newer releases, by the way?

Regards,





Re: Failed to wait for initial partition map exchange

2018-09-25 Thread Ilya Kasnacheev
Hello!

Regarding PME problems:
OOM will cause this. High GC could cause this under some circumstances.
High CPU or disk usage should not cause this. Network unavailability (such
as a closed communication port) could also cause it.

But the prime cause is programming errors. Either these are errors on the
Apache Ignite side (triggered by some unusual circumstances, since all normal
cases should be covered by tests), or they are in your code.

Deadlocks are a typical example. If you have deadlocks in code exposed to
Apache Ignite, or you manage to lock up Apache Ignite in other ways
(listeners, invokes and continuous queries are notorious for that, since
there are limitations on the operations you can use from within them), you
can hit an infinite PME very easily.

However, it's hard to say without reviewing logs and thread dumps.

Regards,
-- 
Ilya Kasnacheev


Thu, 13 Sep 2018 at 1:31, ndipiazza3565 :

> I'm trying to build up a list of possible causes for this issue.
>
> I'm only really interested in the issues that occur after successful
> production deployments. Meaning the environment has been up for some time
> successfully, but then later on our ignite nodes will not start and stick
>
> But as of now, a certain bad behavior from a single node in the ignite
> cluster can cause a deadlock
>
> * Anything that causes one of the ignite nodes to become unresponsive
>   * oom
>   * high gc
>   * high cpu
>   * high disk usage
> * Network issues?
>
> I'm trying to get a list of the causes for this issue so I can troubleshoot
> further.
>
>
>
>


Re: Map C# class to Cassandra persistence settings

2018-09-25 Thread ilya.kasnacheev
Hello!

Be the first to try!

Note that you should probably (a) enable simple binary name mapping[1] (i.e.
use class="Person"), and (b) use Java types for primitives[2] (or their Java
wrapper classes).

1.
https://apacheignite-net.readme.io/docs/platform-interoperability#section-default-behavior
2.
https://apacheignite-net.readme.io/docs/platform-interoperability#section-type-compatibility

Regards,





Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-25 Thread Ilya Kasnacheev
Hello!

Can you show the index that you are creating here?

Regards,
-- 
Ilya Kasnacheev


Tue, 25 Sep 2018 at 8:23, Ray :

> Let's say I have two tables I want to join together.
> Table a has around 10 million rows and its primary key is x and y.
> I have created an index on fields x and y for table a.
>
> Table b has one row and its primary key is x and y.
> That row in table b has a corresponding row in table a
> with the same primary key.
>
> When I try to execute this query to join them, "select a.*, b.* from a
> inner join b where (a.x = b.x) and (a.y = b.y);", it takes more than 4
> seconds to show only one record.
> I also examined the plan for that SQL and confirmed the index I created is
> used for it.
>
> Ideally, with a hash join this should take less than half a second.
>
>
>
>
>
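For contrast, the hash-join strategy mentioned above can be sketched in plain Java. This is illustrative code, not how the SQL engine is implemented; the composite key (x, y) from the example is encoded as a string. The idea is to build a hash table over one side and probe it, so each lookup costs O(1) rather than an index scan per probe.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashJoinSketch {

    /** Hash join: probe the map built over the big table with each row of the small one. */
    public static List<String> join(Map<String, String> big, Map<String, String> small) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, String> row : small.entrySet()) {
            String match = big.get(row.getKey()); // O(1) probe per row
            if (match != null) {
                out.add(match + "|" + row.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Table a: ~1M rows keyed by the composite (x, y), encoded as "x:y".
        Map<String, String> a = new HashMap<>();
        for (int i = 0; i < 1_000_000; i++) {
            a.put(i + ":" + i, "a" + i);
        }
        // Table b: a single row whose key also exists in a.
        Map<String, String> b = Collections.singletonMap("7:7", "b7");

        System.out.println(join(a, b)); // [a7|b7]
    }
}
```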


Re: java.lang.NullPointerException in GridDhtPartitionsExchangeFuture

2018-09-25 Thread Ilya Kasnacheev
Hello!

It's hard to say without reviewing logs, but it seems that there's some
inconsistency with regards to cache metadata on nodes.

Regards,
-- 
Ilya Kasnacheev


Tue, 25 Sep 2018 at 0:13, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Hi all,
>
> We use Ignite 1.9.
>
>
>
> We could see this in our logs.  All we do is cache.get() , cache.put()
> operations. With this log being seen, is it possible for  cache.put or
> ignite.getOrCreateCache() method calling threads be blocked forever ?
> (unfortunately we couldn’t get a thread dump to prove that, but from
> application logs, it looks like it).
>
>
>
> java.lang.NullPointerException: null
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.updatePartitionSingleMap(GridDhtPartitionsExchangeFuture.java:1446)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processMessage(GridDhtPartitionsExchangeFuture.java:1199)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$100(GridDhtPartitionsExchangeFuture.java:86)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$1.apply(GridDhtPartitionsExchangeFuture.java:1167)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$1.apply(GridDhtPartitionsExchangeFuture.java:1155)
>
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
>
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:228)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceive(GridDhtPartitionsExchangeFuture.java:1155)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.processSinglePartitionUpdate(GridCachePartitionExchangeManager.java:1304)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.access$1200(GridCachePartitionExchangeManager.java:116)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:310)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:308)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:1992)
>
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:1974)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:827)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:369)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:293)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:95)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:238)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1222)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:850)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$2100(GridIoManager.java:108)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:790)
>
> at
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:428)
>
> at java.lang.Thread.run(Thread.java:748)
>
>
>


Re: java.lang.IllegalArgumentException: Can not set final

2018-09-25 Thread Ilya Kasnacheev
Hello!

You can't set final fields when deserializing binary objects.

Consider changing them to non-final fields with getters and setters.
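A minimal sketch of that refactoring (the Person class here is hypothetical, just to show the shape): replace the final field and constructor-only initialization with a mutable field plus getter and setter, so the marshaller can instantiate the object and populate it afterwards.

```java
// Before (fails to deserialize):
//   public class Person {
//       private final String name;
//       public Person(String name) { this.name = name; }
//   }

// After: non-final field with getter and setter.
public class Person {
    private String name;

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        Person p = new Person();
        p.setName("Alice");
        System.out.println(p.getName()); // Alice
    }
}
```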

Regards,
-- 
Ilya Kasnacheev


Tue, 25 Sep 2018 at 1:31, smurphy :

> Hmm, pasting the stack trace into the page didn't work.
> Here it is as an attachment:
>
>
> stackTrace.txt
> 
>
>
>
>
>
>
>


Re: Nullpointer exception in IgniteHadoopIgfsSecondaryFileSystem

2018-09-25 Thread Ilya Kasnacheev
Hello!

You should put the IgniteHadoopIgfsSecondaryFileSystem into the
IgniteConfiguration and then start the Ignite instance; only then will it be
initialized properly.

See https://apacheignite-fs.readme.io/docs/secondary-file-system
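The wiring looks roughly like the fragment below. This is a sketch reconstructed from the bean names that survived in the quoted config, so treat the exact property values (e.g. the IGFS name) as placeholders:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="fileSystemConfiguration">
    <list>
      <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
        <property name="name" value="igfs"/>
        <property name="secondaryFileSystem">
          <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
            <property name="fileSystemFactory">
              <bean class="org.apache.ignite.hadoop.fs.CachingHadoopFileSystemFactory">
                <property name="uri" value="hdfs://localhost:9000/"/>
              </bean>
            </property>
          </bean>
        </property>
      </bean>
    </list>
  </property>
</bean>
```

Once Ignite is started with this configuration, obtain the file system via ignite.fileSystem("igfs") and call mkdirs() on that, rather than on a hand-instantiated IgniteHadoopIgfsSecondaryFileSystem.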

Regards,
-- 
Ilya Kasnacheev


Tue, 25 Sep 2018 at 14:00, Divya Darshan DD :

> Hi Team,
>
>
>
> I am exploring Ignite for my use case (explained in the later section of
> this email). Currently I am trying to create a directory in HDFS
> (stand-alone) with Java code, but I am getting a NullPointerException.
> Could you please help me out? The necessary information is below; please
> let me know if any more information is needed from my end.
>
>
>
> *Java code:*
>
>
>
> IgniteHadoopIgfsSecondaryFileSystem i = new
> IgniteHadoopIgfsSecondaryFileSystem();
> IgfsPath workDir = new IgfsPath("/ddd/fs");
> i.mkdirs(workDir);
>
>
>
>
>
> *Error:*
>
> Line IGFSExample.java:169 à i.mkdirs(workDir)
>
>
>
> Exception in thread "main" java.lang.NullPointerException
>
> at
> org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.mkdirs(IgniteHadoopIgfsSecondaryFileSystem.java:198)
>
> at IGFSExample.mkdirs(IGFSExample.java:169)
>
> at IGFSExample.main(IGFSExample.java:82)
>
> [15:44:15] Ignite node stopped OK [uptime=00:00:00.090]
>
>
>
> *Default-config.xml (only important bits)*
>
> (The XML configuration was mangled by the mailing-list archive. The
> recoverable fragments show a PropertyPlaceholderConfigurer, an
> IgniteConfiguration with a ConnectorConfiguration, a
> FileSystemConfiguration with an IgfsIpcEndpointConfiguration, and a
> secondary file system wired through IgniteHadoopIgfsSecondaryFileSystem
> with a CachingHadoopFileSystemFactory pointing at hdfs://localhost:9000/.)
>
> *Use case:*
>
>
>
> We have a huge volume of data in S3. We want to cache the data in Ignite
> (caching will be done either via LRU or custom code as per requirement).
> We want to keep only the data that is frequently used, or have a sliding
> window, i.e. keep 5 days of data in the cache in a sliding-window manner.
> Whatever data is not eligible for the cache needs to be evicted to S3.
> Is it possible for Ignite to handle this kind of use case?
> Thanks and Regards,
> Divya Bamotra
>


kafka.common.KafkaException: Failed to parse the broker info from zookeeper

2018-09-25 Thread rishi007bansod
I have deployed Kafka in Kubernetes using
https://github.com/Yolean/kubernetes-kafka, but while consuming with the
Kafka consumer I get the following error:
SEVERE: Failed to resolve default logging config file:
config/java.util.logging.properties
[10:23:00]__   
[10:23:00]   /  _/ ___/ |/ /  _/_  __/ __/ 
[10:23:00]  _/ // (7 7// /  / / / _/   
[10:23:00] /___/\___/_/|_/___/ /_/ /___/  
[10:23:00] 
[10:23:00] ver. 1.9.0#20170302-sha1:a8169d0a
[10:23:00] 2017 Copyright(C) Apache Software Foundation
[10:23:00] 
[10:23:00] Ignite documentation: http://ignite.apache.org
[10:23:00] 
[10:23:00] Quiet mode.
[10:23:00]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[10:23:00] 
[10:23:00] OS: Linux 3.10.0-862.11.6.el7.x86_64 amd64
[10:23:00] VM information: OpenJDK Runtime Environment
1.8.0_181-8u181-b13-1~deb9u1-b13 Oracle Corporation OpenJDK 64-Bit Server VM
25.181-b13
[10:23:02] Configured plugins:
[10:23:02]   ^-- None
[10:23:02] 
[10:23:02] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[10:23:02] Security status [authentication=off, tls/ssl=off]
[10:23:03] REST protocols do not start on client node. To start the
protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system
property.
[10:23:24] Topology snapshot [ver=8, servers=1, clients=1, CPUs=112,
heap=53.0GB]
[10:23:34] Performance suggestions for grid  (fix if possible)
[10:23:34] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[10:23:34]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[10:23:34]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to
JVM options)
[10:23:34]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[10:23:34]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[10:23:34] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[10:23:34] 
[10:23:34] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[10:23:34] 
[10:23:34] Ignite node started OK (id=c10d143b)
[10:23:34] Topology snapshot [ver=7, servers=1, clients=2, CPUs=168,
heap=80.0GB]
start creating caches
inside caches
{xgboostMainCache=IgniteCacheProxy [delegate=GridDhtAtomicCache
[deferredUpdateMsgSnd=org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$3@66c83fc8,
near=null, super=GridDhtCacheAdapter
[multiTxHolder=java.lang.ThreadLocal@ae7950d,
super=GridDistributedCacheAdapter [super=GridCacheAdapter
[locMxBean=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@6fd1660,
clusterMxBean=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl@4a6c18ad,
aff=org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl@5e8604bf,
igfsDataCache=false, mongoDataCache=false, mongoMetaCache=false,
igfsDataCacheSize=null, igfsDataSpaceMax=0,
asyncOpsSem=java.util.concurrent.Semaphore@20095ab4[Permits = 500],
name=xgboostMainCache, size=0, opCtx=null],
xgboostTrainedDataColumnSetCache=IgniteCacheProxy
[delegate=GridDhtAtomicCache
[deferredUpdateMsgSnd=org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$3@53e3a87a,
near=null, super=GridDhtCacheAdapter
[multiTxHolder=java.lang.ThreadLocal@4dafba3e,
super=GridDistributedCacheAdapter [super=GridCacheAdapter
[locMxBean=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@546621c4,
clusterMxBean=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl@621f89b8,
aff=org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl@f339eae,
igfsDataCache=false, mongoDataCache=false, mongoMetaCache=false,
igfsDataCacheSize=null, igfsDataSpaceMax=0,
asyncOpsSem=java.util.concurrent.Semaphore@2822c6ff[Permits = 500],
name=xgboostTrainedDataColumnSetCache, size=0, opCtx=null]}
end creating caches
start creating data streamers
end creating  data streamers
Launching Prediction Module
41098 [main] INFO  kafka.utils.VerifiableProperties  - Verifying properties
41527 [main] INFO  kafka.utils.VerifiableProperties  - Property
auto.offset.reset is overridden to smallest
41528 [main] WARN  kafka.utils.VerifiableProperties  - Property
bootstrap.servers is not valid
41528 [main] INFO  kafka.utils.VerifiableProperties  - Property group.id is
overridden to IgniteGroup_1
41528 [main] INFO  kafka.utils.VerifiableProperties  - Property
zookeeper.connect is overridden to zookeeper.kafka:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:rsrc:slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in

Re: Is ID generator split brain compliant?

2018-09-25 Thread Anton Vinogradov
Denis,

As far as I understand, the question is about IgniteAtomicSequence?
We fixed IgniteSet to be persisted and recovered properly.

Pavel Pereslegin,

Could you please check whether we have the same issue with
IgniteAtomicSequence?

Sat, 22 Sep 2018 at 4:17, Denis Magda :

> So far, it looks pretty good except that it does not provide persistence
>> out
>> of the box. But I can work around it by backing latest generated ID in a
>> persistent cache and initializing ID generator with the latest value on a
>> cluster restart.
>
>
> Sounds like a good solution. *Anton*, I do remember a discussion on the
> dev list regarding persistence support for data structures. Are we
> releasing anything related soon? I can't recall all the details.
>
> However, one thing I could not find an answer for is if the out of the box
>> ID generator is split brain compliant. I cannot afford to have a duplicate
>> ID and want to understand if duplicate ID(s) could occur in a split-brain
>> scenario. If yes, what is the recommended approach to handling that
>> scenario?
>
>
> It should be split-brain tolerant if ZooKeeper Discovery is used:
>
> https://apacheignite.readme.io/docs/zookeeper-discovery#section-failures-and-split-brain-handling
>
> --
> Denis
>
> On Wed, Sep 19, 2018 at 3:37 PM abatra  wrote:
>
>> Hi,
>>
>> I have a requirement to create a distributed cluster-unique ID generator
>> microservice. I have done a PoC on it using Apache Ignite ID Generator.
>>
>> I created a 2 node cluster with two instances of microservices running on
>> each node. Nodes are in the same datacenter (in fact in the same network
>> and
>> will always be deployed in the same network) and I use TCP/IP discovery to
>> discover cluster nodes.
>>
>> So far, it looks pretty good except that it does not provide persistence
>> out
>> of the box. But I can work around it by backing latest generated ID in a
>> persistent cache and initializing ID generator with the latest value on a
>> cluster restart.
>>
>> However, one thing I could not find an answer for is if the out of the box
>> ID generator is split brain compliant. I cannot afford to have a duplicate
>> ID and want to understand if duplicate ID(s) could occur in a split-brain
>> scenario. If yes, what is the recommended approach to handling that
>> scenario?
>>
>>
>>
>>
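The workaround described above (backing the latest generated ID in a persistent cache and re-seeding the generator on restart) can be sketched with plain JDK types. This is illustrative stand-in code, not the Ignite API; the map plays the role of the persistent cache:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class PersistentIdGenerator {
    private static final String KEY = "latest-id";

    private final ConcurrentHashMap<String, Long> store; // stand-in for a persistent cache
    private final AtomicLong seq;

    public PersistentIdGenerator(ConcurrentHashMap<String, Long> store) {
        this.store = store;
        // On (re)start, seed the sequence from the last persisted value.
        this.seq = new AtomicLong(store.getOrDefault(KEY, 0L));
    }

    public long nextId() {
        long id = seq.incrementAndGet();
        store.put(KEY, id); // persist every issued ID
        return id;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Long> cache = new ConcurrentHashMap<>();
        PersistentIdGenerator g1 = new PersistentIdGenerator(cache);
        System.out.println(g1.nextId()); // 1
        System.out.println(g1.nextId()); // 2

        // Simulated restart: a new generator resumes from the stored value.
        PersistentIdGenerator g2 = new PersistentIdGenerator(cache);
        System.out.println(g2.nextId()); // 3
    }
}
```

Note this only covers restarts; uniqueness across a split brain still depends on the discovery layer (e.g. ZooKeeper Discovery), as discussed above.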
>


Nullpointer exception in IgniteHadoopIgfsSecondaryFileSystem

2018-09-25 Thread Divya Darshan DD
Hi Team,



I am exploring Ignite for my use case (explained in the later section of
this email). Currently I am trying to create a directory in HDFS
(stand-alone) with Java code, but I am getting a NullPointerException. Could
you please help me out? The necessary information is below; please let me
know if any more information is needed from my end.



*Java code:*



IgniteHadoopIgfsSecondaryFileSystem i = new
IgniteHadoopIgfsSecondaryFileSystem();
IgfsPath workDir = new IgfsPath("/ddd/fs");
i.mkdirs(workDir);





*Error:*

Line IGFSExample.java:169 à i.mkdirs(workDir)



Exception in thread "main" java.lang.NullPointerException

at
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.mkdirs(IgniteHadoopIgfsSecondaryFileSystem.java:198)

at IGFSExample.mkdirs(IGFSExample.java:169)

at IGFSExample.main(IGFSExample.java:82)

[15:44:15] Ignite node stopped OK [uptime=00:00:00.090]



*Default-config.xml (only important bits)*

(The XML configuration was stripped by the mailing-list archive.)

*Use case:*



We have a huge volume of data in S3. We want to cache the data in Ignite
(caching will be done either via LRU or custom code as per requirement). We
want to keep only the data that is frequently used, or have a sliding
window, i.e. keep 5 days of data in the cache in a sliding-window manner.
Whatever data is not eligible for the cache needs to be evicted to S3.
Is it possible for Ignite to handle this kind of use case?
Thanks and Regards,
Divya Bamotra