Re: Continuous Query on Multiple caches

2017-08-28 Thread rishi007bansod
Hi,
In our case, data is coming from 2 Kafka streams. We want to compare the
current data from the 2 streams and take some action (e.g. raise an alert).
We want to make this processing event-based, i.e. as soon as data arrives
from the 2 streams, we should take the action associated with this event.
For example:
if ((Curr_stream1.f0 - Curr_stream2.f0) > T) then raise alert.

Initially I thought of caching both streams' data and then comparing it,
but that will take more time to process.
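A sketch of the comparison step we have in mind, in plain Java (all names here are illustrative, not part of any Ignite API; the two on-stream methods would be called from whatever listeners receive the stream events):

```java
// Hypothetical helper: keeps the latest value seen on each stream and
// signals an alert when their difference exceeds a threshold T.
public class StreamComparator {
    private final double threshold;
    private Double latest1; // latest f0 from stream 1
    private Double latest2; // latest f0 from stream 2

    public StreamComparator(double threshold) {
        this.threshold = threshold;
    }

    /** Called from the stream-1 listener (e.g. a continuous-query local listener). */
    public synchronized boolean onStream1(double f0) {
        latest1 = f0;
        return check();
    }

    /** Called from the stream-2 listener. */
    public synchronized boolean onStream2(double f0) {
        latest2 = f0;
        return check();
    }

    /** True only when both streams have reported and the gap exceeds T. */
    private boolean check() {
        return latest1 != null && latest2 != null && (latest1 - latest2) > threshold;
    }
}
```

The caller would raise the alert whenever one of the on-stream methods returns true.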

Thanks,
Rishikesh



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-on-Multiple-caches-tp16444p16473.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Configure Apache-Ignite as Out-Proc instead of in-Proc

2017-08-28 Thread shuvendu

Hi ,

Is there any configuration to make Apache Ignite run as Out-Proc (out of
process) instead of in-Proc (in process)?

thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Configure-Apache-Ignite-as-Out-Proc-instead-of-in-Proc-tp16471.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: We want to configure near cache on client so that we can handle high TPS items and avoid network call to server

2017-08-28 Thread hiten
Point 5. According to the documentation: "utilizing Ignite with affinity
colocation, near caches should not be used".




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/We-want-to-configure-near-cache-on-client-so-that-we-can-handle-high-TPS-items-and-avoid-network-calr-tp16463p16470.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Retrieving multiple keys with filtering

2017-08-28 Thread Dmitriy Setrakyan
Andrey,

I am not sure I understand. According to the EntryProcessor API [1], you can
choose to return nothing.

Also, to my knowledge, you can still do parallel reads while executing the
EntryProcessor. Perhaps other community members can elaborate on this.

[1]
https://static.javadoc.io/javax.cache/cache-api/1.0.0/index.html?javax/cache/processor/EntryProcessor.html

D.


On Mon, Aug 28, 2017 at 8:29 PM, Andrey Kornev 
wrote:

> Dmitriy,
>
>
> It's good to be back!  Glad to find Ignite community as vibrant
> and thriving as ever!
>
> Speaking of invokeAll(), even if we ignore for a moment the overhead
> associated with locking/unlocking a cache entry prior to passing it to the
> EntryProcessor as well as the overhead associated with enlisting the
> touched entries in a transaction, the bigger problem with using
> invokeAll() for filtering is that EntryProcessor must return a value. I'm
> not aware of any way to make EntryProcessor drop the entry from the
> response. The only option is to use a null (or false) to indicate a
> filtered out entry. In my specific case, I'll end up sending back a whole
> bunch of nulls in the result map as I expect most of the keys to be
> rejected by the filter.
>
> Overall, invokeAll() is not what one would call an *efficient* (the key word
> in my original question) way of filtering.
>
> Thanks!
> Andrey
>
> --
> *From:* Dmitriy Setrakyan 
> *Sent:* Saturday, August 26, 2017 8:37 AM
> *To:* user
>
> *Subject:* Re: Retrieving multiple keys with filtering
>
> Andrey,
>
> Good to hear from you. Long time no talk.
>
> I don't think invokeAll has only update semantics. You can definitely use
> it just to look at the keys and return a result. Also, as you mentioned,
> Ignite compute is a viable option as well.
>
> The reason that predicates were removed from the get methods is because
> the API was becoming unwieldy, and also because JCache does not require it.
>
> D.
>
> On Thu, Aug 24, 2017 at 10:50 AM, Andrey Kornev 
> wrote:
>
>> Well, I believe invokeAll() has "update" semantics and using it for
>> read-only filtering of cache entries is probably not going to be efficient
>> or even appropriate.
>>
>>
>> I'm afraid the only viable option I'm left with is to use Ignite's
>> Compute feature:
>>
>> - on the sender, group the keys by affinity.
>>
>> - send each group along with the filter predicate to their affinity nodes
>> using IgniteCompute.
>>
>> - on each node, use getAll() to fetch the local keys and apply the filter.
>>
>> - on the sender node, collect the results of the compute jobs into a map.
>>
>>
>> It's unfortunate that Ignite dropped that original API. What used to be a
>> single API call is now a non-trivial algorithm and one has to worry about
>> things like what happens if the grid topology changes while the compute
>> jobs are executing, etc.
>>
>> Can anyone think of any other less complex/more robust approach?
>>
>> Thanks
>> Andrey
>>
>> --
>> *From:* slava.koptilin 
>> *Sent:* Thursday, August 24, 2017 9:03 AM
>> *To:* user@ignite.apache.org
>> *Subject:* Re: Retrieving multiple keys with filtering
>>
>> Hi Andrey,
>>
>> Yes, you are right. ScanQuery scans all entries.
>> Perhaps, IgniteCache#invokeAll(keys, cacheEntryProcessor) with custom
>> processor will work for you.
>> https://ignite.apache.org/releases/2.1.0/javadoc/org/apache/
>> ignite/IgniteCache.html#invokeAll(java.util.Set,%20org
>> .apache.ignite.cache.CacheEntryProcessor,%20java.lang.Object...)
>>
>> Thanks!
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Retrieving-multiple-keys-with-filtering-
>> tp16391p16400.html
>>
>>
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Reusing CacheStores connections?

2017-08-28 Thread Matt
Hi,

Following the documentation I came up with the following code, but now I
wonder if this is really the way to go with CacheStores.

https://gist.github.com/fdc613759d4d7a845631e0b71aafa559

Using a profiler I found out that openConnection() is executed more than 1000
times; my application spends 10% of its time in this method alone.

Shouldn't Ignite reuse the connections somehow? Is there any way to improve
this?

An example with better performance would be really helpful.

Cheers,
Matt
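One common way to avoid the per-operation openConnection() cost is to share a single connection source (in practice a pooled javax.sql.DataSource) across all store operations rather than opening a connection per call. A minimal plain-Java illustration of the reuse pattern (the ConnectionFactory/Connection types here are stand-ins, not Ignite or JDBC API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: the store lazily obtains one shared connection source and
// reuses it for every load/write instead of opening per operation.
public class PooledStore {
    interface Connection { void close(); }

    static class ConnectionFactory {
        final AtomicInteger opened = new AtomicInteger();
        Connection open() {               // expensive in a real driver
            opened.incrementAndGet();
            return () -> { /* returned to the pool, not really closed */ };
        }
    }

    private final ConnectionFactory pool;
    private volatile Connection shared; // reused by every store operation

    public PooledStore(ConnectionFactory pool) { this.pool = pool; }

    private Connection connection() {
        Connection c = shared;
        if (c == null) {
            synchronized (this) {
                if (shared == null)
                    shared = pool.open(); // opened once, then reused
                c = shared;
            }
        }
        return c;
    }

    public void load(Object key)  { connection(); /* SELECT ... */ }
    public void write(Object key) { connection(); /* INSERT ... */ }
}
```

In a real CacheStore one would hold a thread-safe pooled DataSource rather than a single Connection, but the reuse principle is the same.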


Re: DataStreamer operation failed

2017-08-28 Thread Jessie Lin
Hello Pranas, we had similar issues when running a client node and a server.
If you find out how to fix it, I would appreciate it if you could post the
solution here.

Jessie

On Mon, Aug 28, 2017 at 4:57 PM, Pranas Baliuka 
wrote:

> Thanks, Konstantin, for looking into it.
>
> The issue was that the server node and client node serialization were not
> compatible. Moved to
> and am now using a single server-type
> node for further testing.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/DataStreamer-operation-failed-tp16439p16465.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Why DataStreamer.flush() is not flushing?

2017-08-28 Thread Pranas Baliuka
I'm trying to add 100M time-series measurements in chunks of BLOCK = 4_500
measurements per value, using the following structures:

Key:
public class Key {
  private int securityId;
  private long date;
}

Value:
public class OHLC {
  private long date;
  private int securityId;
  private int size;
  private long[] time;
  private double[] open;
  private double[] high;
  private double[] low;
  private double[] close;
  private double[] marketVWAP;
}

I need some kind of checkpoint to flush the queues to the cache, ideally
every 30 seconds.

I've made attempts by configuring the streamer:
streamer.allowOverwrite(true);
streamer.perNodeBufferSize(20);
streamer.autoFlushFrequency(TimeUnit.SECONDS.toMillis(30));
streamer.skipStore(false);
streamer.keepBinary(true);

and even explicitly flushing:
if (blockId % 20 == 0)
  streamer.flush();

After flush() is invoked (supposed to be a blocking operation), I check the
count of the cache:

final IgniteCache<Key, OHLC> cache = ignite.getOrCreateCache(CACHE_NAME);
System.out.println(" >>> Simulator - Inserted " + cache.size() * BLOCK_SIZE + " " + new Date());
Thread.sleep(TimeUnit.SECONDS.toMillis(40));
System.out.println(" >>> Simulator - Inserted " + cache.size() * BLOCK_SIZE + " " + new Date());
Thread.sleep(TimeUnit.SECONDS.toMillis(40));
System.out.println(" >>> Simulator - Inserted " + cache.size() * BLOCK_SIZE + " " + new Date());

But I am getting .size() == 1.

According to the documentation:

flush(): "Streams any remaining data, ... this method blocks and doesn't
allow to add any data until all data is streamed."

size(): "Gets the number of all entries cached across all nodes. By default,
if {@code peekModes} value isn't defined, only size of primary copies across
all nodes will be returned."

From what I understand, this does not work on 2.1.0. Is there some known
workaround for flushing the data from the streamer to the cache?
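Note also that since each entry packs BLOCK = 4_500 measurements into one OHLC value, cache.size() counts entries (keys), not measurements, so even a fully flushed 100M-measurement load would show only about 22k entries. A quick plain-Java sanity check of the expected counts:

```java
// cache.size() counts cache entries (keys); each entry here holds one
// OHLC block of BLOCK measurements, so the expected entry count is the
// ceiling of measurements / BLOCK.
public class BlockMath {
    static long expectedEntries(long measurements, int blockSize) {
        return (measurements + blockSize - 1) / blockSize; // ceiling division
    }

    public static void main(String[] args) {
        long entries = expectedEntries(100_000_000L, 4_500);
        System.out.println(entries);          // prints 22223
        System.out.println(entries * 4_500L); // prints 100003500
    }
}
```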

Thanks a lot
Pranas




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Why-DataStreamer-flush-is-not-flushing-tp16466.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: DataStreamer operation failed

2017-08-28 Thread Pranas Baliuka
Thanks, Konstantin, for looking into it.

The issue was that the server node and client node serialization were not
compatible. Moved to
and am now using a single server-type
node for further testing.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/DataStreamer-operation-failed-tp16439p16465.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster segmentation

2017-08-28 Thread Biren Shah
Hi Val,

Did you get a chance to look at the code snippet I shared?

If I understand correctly, when I do get() on a cache, it creates a copy of
the value and returns that copy. Do you think turning off that behavior will
help?

Thanks,
Biren

On 8/24/17, 2:16 PM, "vkulichenko"  wrote:

Biren,

Can you show the code of the receiver?

-Val



--
View this message in context: 
https://urldefense.proofpoint.com/v2/url?u=http-3A__apache-2Dignite-2Dusers.70518.x6.nabble.com_Cluster-2Dsegmentation-2Dtp16314p16411.html=DwICAg=Zok6nrOF6Fe0JtVEqKh3FEeUbToa1PtNBZf6G01cvEQ=rbkF1xy5tYmkV8VMdTRVaIVhaXCNGxmyTB5plfGtWuY=uTFJ0dsOfKebPVHeYtxynWyF05QZ1L_VwKl88GOCfhs=qpsio0YIs_DqiTGNkLSMR-z76AFBwbNv-LvhPjwQOy8=
Sent from the Apache Ignite Users mailing list archive at Nabble.com.




We want to configure near cache on client so that we can handle high TPS items and avoid network call to server

2017-08-28 Thread hiten
Below are the questions related to near cache on client side:
1. Is the configuration below the correct way to keep 1000 items per client
and evict entries that have not been accessed in 5 seconds?

NearCacheConfiguration nearConf = new NearCacheConfiguration<>();
nearConf.setNearEvictionPolicy(new LruEvictionPolicy<>(1000));
nearConf.setExpiryPolicyFactory(AccessedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 5)));

2. Is it possible not to evict entries from the near cache when an entry
gets updated on the server node? We want to control eviction through the
ExpiryPolicy.

3. How can we get the near cache stats? Like owned entry count or eviction
count?

4. Is it possible to configure near cache for only predefined subset of the
keys?

5. Does the near cache get populated only when we access data via map.get(k)
or cache.get(k) methods, and not via IgniteCallable?




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/We-want-to-configure-near-cache-on-client-so-that-we-can-handle-high-TPS-items-and-avoid-network-calr-tp16463.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite 2.0 transaction timeout not holding up

2017-08-28 Thread Amit Pundir
Hi,
I am using Ignite 2.0 and working with transactions with a 60-second
timeout. In the logs I see Ignite timeout exceptions whose messages report
timeouts of less than 60 seconds.

The transaction concurrency has been set to PESSIMISTIC and the transaction
isolation level is REPEATABLE_READ.

Following is the grep'ed pattern from the logs, which shows various timeouts
that are not 60 seconds.

Could you please explain how the timeout works?
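For reference, a simplified sketch of how such a transaction is started with an explicit timeout (not a standalone reproducer: it assumes a running Ignite node, and the cache operations are elided):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class TxTimeoutSketch {
    static void update(Ignite ignite) {
        // 60_000 ms timeout; 0 = unknown number of entries in the tx
        try (Transaction tx = ignite.transactions().txStart(
                PESSIMISTIC, REPEATABLE_READ, 60_000L, 0)) {
            // pessimistic cache operations acquire their locks here, each
            // against whatever remains of the 60 s budget
            tx.commit();
        }
    }
}
```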


class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=6,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@23687102]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=6,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@23687102]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=59980,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@5e9b3cc0]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=59980,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@5e9b3cc0]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=59979,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@5121ce59]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=59979,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@5121ce59]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=22679,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@2f69f495]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=22679,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@2f69f495]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=31604,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@5941d255]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=31604,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@5941d255]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=34418,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@403d0c15]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=34418,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@403d0c15]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=33692,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@52d9ec26]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=33692,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@52d9ec26]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=34509,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@6095b78b]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=34509,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@6095b78b]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=6,
tx=org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter$1@55dfcc29]

class
org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
Failed to acquire lock within provided timeout for transaction
[timeout=6,

Ignite 2.0 server node shutdown

2017-08-28 Thread Amit Pundir
Hi,
Does an Ignite server node shut down automatically after a period of
inactivity?

I am using Ignite 2.0 and found one of my nodes shut down with the following
logs (below). The last 6 hours of logs contain just the node metrics,
essentially no application activity. Will a node shut down in such a
situation? If so, please suggest ways to avoid it.

Thanks


[18:47:06,537][INFO][grid-timeout-worker-#19%NPRO%][IgniteKernal%NPRO]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=2efcf0df, name=NPRO, uptime=37:08:21:647]
^-- H/N/C [hosts=24, nodes=24, CPUs=56]
^-- CPU [cur=0.23%, avg=0.25%, GC=0%]
^-- PageMemory [pages=8913]
^-- Heap [used=494MB, free=90.1%, comm=1920MB]
^-- Non heap [used=73MB, free=-1%, comm=75MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
^-- Outbound messages queue [size=0]
*[18:47:06,537][INFO][grid-timeout-worker-#19%NPRO%][IgniteKernal%NPRO]
FreeList [name=NPRO, buckets=256, dataPages=1764, reusePages=2]
[18:47:57,683][INFO][Thread-2][G] Invoking shutdown hook...*




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-2-0-server-node-shutdown-tp16461.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Testing Ignite Applications Locally

2017-08-28 Thread Sergey Kozlov
Yakov, ok. I'll do that

On Mon, Aug 28, 2017 at 6:19 PM, Yakov Zhdanov  wrote:

> >As far as Maven archetype, Yakov, is the only purpose of it to load a
> project, so users can add tests to it?
>
> Yes, but not just add. We will also be providing test examples, data
> loading application samples, shell scripts, etc. This can be a pretty handy
> way to create a project with all dependencies initialized and useful stuff.
>
> --Yakov
>



-- 
Sergey Kozlov
GridGain Systems
www.gridgain.com


NullPointerException at unwrapBinariesIfNeeded(GridCacheContext.java:1719)

2017-08-28 Thread zbyszek
Dear All,

I was wondering if this is a known issue (I am using 2.0) and whether I can
prevent it somehow.
It has happened only twice, is not reproducible, and occurred while
iterating over a query cursor:

try (QueryCursor<List<?>> cursor = cache.query(sqlQuery)) {
    for (List<?> row : cursor) { // <-- this is the line causing the exception
        if (rowConsumer != null) {
            rowConsumer.accept(row);
        }
    }
}

java.lang.NullPointerException
at
org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinariesIfNeeded(GridCacheContext.java:1719)
at
org.apache.ignite.internal.processors.query.GridQueryCacheObjectsIterator.next(GridQueryCacheObjectsIterator.java:64)
at
org.apache.ignite.internal.processors.query.GridQueryCacheObjectsIterator.next(GridQueryCacheObjectsIterator.java:29)
at com.markit.n6platform.s6.dao.DynamicDAO.execute(DynamicDAO.java:42)


Thank you in advance,
zbyszek



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-at-unwrapBinariesIfNeeded-GridCacheContext-java-1719-tp16459.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Equally distributing Kafka topics within the cluster

2017-08-28 Thread zbyszek
Thank you, Slava.

I am testing some solutions, and if this yields something to be proud of,
I will definitely share it...

regards,
zbyszek



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Equally-distributing-Kafka-topics-within-the-cluster-tp16377p16458.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Retrieving multiple keys with filtering

2017-08-28 Thread Andrey Kornev
Dmitriy,


It's good to be back!  Glad to find Ignite community as vibrant and thriving 
as ever!

Speaking of invokeAll(), even if we ignore for a moment the overhead associated 
with locking/unlocking a cache entry prior to passing it to the EntryProcessor 
as well as the overhead associated with enlisting the touched entries in a 
transaction, the bigger problem with using invokeAll() for filtering is that 
EntryProcessor must return a value. I'm not aware of any way to make 
EntryProcessor drop the entry from the response. The only option is to use a 
null (or false) to indicate a filtered out entry. In my specific case, I'll end 
up sending back a whole bunch of nulls in the result map as I expect most of 
the keys to be rejected by the filter.

Overall, invokeAll() is not what one would call an *efficient* (the key word in my 
original question) way of filtering.

Thanks!
Andrey


From: Dmitriy Setrakyan 
Sent: Saturday, August 26, 2017 8:37 AM
To: user
Subject: Re: Retrieving multiple keys with filtering

Andrey,

Good to hear from you. Long time no talk.

I don't think invokeAll has only update semantics. You can definitely use it 
just to look at the keys and return a result. Also, as you mentioned, Ignite 
compute is a viable option as well.

The reason that predicates were removed from the get methods is because the API 
was becoming unwieldy, and also because JCache does not require it.

D.

On Thu, Aug 24, 2017 at 10:50 AM, Andrey Kornev 
> wrote:

Well, I believe invokeAll() has "update" semantics and using it for read-only 
filtering of cache entries is probably not going to be efficient or even 
appropriate.


I'm afraid the only viable option I'm left with is to use Ignite's Compute 
feature:

- on the sender, group the keys by affinity.

- send each group along with the filter predicate to their affinity nodes using 
IgniteCompute.

- on each node, use getAll() to fetch the local keys and apply the filter.

- on the sender node, collect the results of the compute jobs into a map.


It's unfortunate that Ignite dropped that original API. What used to be a 
single API call is now a non-trivial algorithm and one has to worry about 
things like what happens if the grid topology changes while the compute jobs 
are executing, etc.

Can anyone think of any other less complex/more robust approach?

Thanks
Andrey


From: slava.koptilin >
Sent: Thursday, August 24, 2017 9:03 AM
To: user@ignite.apache.org
Subject: Re: Retrieving multiple keys with filtering

Hi Andrey,

Yes, you are right. ScanQuery scans all entries.
Perhaps, IgniteCache#invokeAll(keys, cacheEntryProcessor) with custom
processor will work for you.
https://ignite.apache.org/releases/2.1.0/javadoc/org/apache/ignite/IgniteCache.html#invokeAll(java.util.Set,%20org.apache.ignite.cache.CacheEntryProcessor,%20java.lang.Object...)

Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Retrieving-multiple-keys-with-filtering-tp16391p16400.html



Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Cache.destroy() & close() does not delete SwapFile Ignite 2.0

2017-08-28 Thread afedotov
Hi,

Deletion happens on next startup of the node.

Kind regards,
Alex.

On Thu, Aug 24, 2017 at 7:13 PM, Ramzinator [via Apache Ignite Users] <
ml+s70518n16401...@n6.nabble.com> wrote:

> Hi
>
> It appears that even when the ignite node shuts down, it does not delete
> the created cache files.
> Is there any prebuilt way in ignite to delete these files?
>
> Thanks,
> Ramz
>
> --
> If you reply to this email, your message will be added to the discussion
> below:
> http://apache-ignite-users.70518.x6.nabble.com/Cache-
> destroy-close-does-not-delete-SwapFile-Ignite-2-0-tp13205p16401.html




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-destroy-close-does-not-delete-SwapFile-Ignite-2-0-tp13205p16456.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: SqlFieldsQuery NPE on my schema

2017-08-28 Thread slava.koptilin
Hello,

I tried the code you provided with Apache Ignite 2.1.0 and it works without
errors.
Could you check your code with the latest Apache Ignite version?
If the issue is still reproducible, could you provide a simple reproducer
(for instance maven project on github)?

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SqlFieldsQuery-NPE-on-my-schema-tp16409p16455.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Testing Ignite Applications Locally

2017-08-28 Thread Yakov Zhdanov
>As far as Maven archetype, Yakov, is the only purpose of it to load a
project, so users can add tests to it?

Yes, but not just add. We will also be providing test examples, data
loading application samples, shell scripts, etc. This can be a pretty handy
way to create a project with all dependencies initialized and useful stuff.

--Yakov


Re: Testing Ignite Applications Locally

2017-08-28 Thread Yakov Zhdanov
Sergey, can you please elaborate?

--Yakov

2017-08-26 23:51 GMT+03:00 Sergey Kozlov :

> The idea is great!
>
> Also I would suggest an ability to run new (modified) tests 100 times in a
> loop on the CI server to make sure that they don't cause sporadic
> failures (we can include that as part of the requirements before review)
>
> --
> Sergey Kozlov
> GridGain Systems
> www.gridgain.com
>


Re: CacheMode - REPLICATED related questions

2017-08-28 Thread agura
Answers to all your questions (except the 4th) depend on the cache write
synchronization mode. In FULL_SYNC mode the exception will be thrown on the
client and you will be able to catch and process it. For PRIMARY_SYNC and
FULL_ASYNC modes the client will get a successful result, and in this case
the data may be inconsistent between primary and backup.

So you should choose the mode that provides the required consistency
guarantee for you.

Your (1,1) entry will be available only if it was written to the partition
on the node. You can use the readFromBackup flag to control which partition
type (primary or backup) will serve entry reads. So, for your use case,
every get operation will return the (1,1) entry after the put operation if
readFromBackup == false, independently of the synchronization mode.
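For example, a configuration along these lines (a sketch; the cache name and key/value types are illustrative):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReplicatedCacheConfig {
    static CacheConfiguration<Integer, Integer> create() {
        CacheConfiguration<Integer, Integer> ccfg =
            new CacheConfiguration<>("myReplicatedCache");
        ccfg.setCacheMode(CacheMode.REPLICATED);
        // FULL_SYNC: put/putAll wait for all copies, so replication
        // failures surface to the client as exceptions
        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
        // always read the primary copy rather than a local backup
        ccfg.setReadFromBackup(false);
        return ccfg;
    }
}
```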



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CacheMode-REPLICATED-related-questions-tp16423p16452.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: CacheMode - REPLICATED related questions

2017-08-28 Thread userx
Hi Agura,
For QUESTION 4, related to replication, I meant the following scenarios.

What does it mean for the clients which have done putAll operations if:

1) While replicating from M1 to M2, there is a network glitch lasting longer
than the configured join timeout?
2) M2 is shut down for some reason?
3) There is a RuntimeException during replication?
4) Say we intend to put (1,1) in a cache. If the cache is configured in
REPLICATED mode and the replication process has not completed, whether the
sync mode is PRIMARY_SYNC or FULL_SYNC, is the entry (1,1) available to a
client for a read even though it has not been replicated?

Do we have any unit tests available that cover such scenarios on a local
node?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CacheMode-REPLICATED-related-questions-tp16423p16451.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query on Multiple caches

2017-08-28 Thread slava.koptilin
Hi Rishikesh,

ContinuousQuery is designed to work with a single cache only,
so there is no way to use it with multiple caches.
Could you please share your use case in more detail?

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-on-Multiple-caches-tp16444p16450.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Scalebility of Inite Threads

2017-08-28 Thread afedotov
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages.
To subscribe, send an empty email to user-subscr...@ignite.apache.org and
follow simple instructions in the reply.

I've some questions to clarify:
1. What do you mean by "Ignite threads"? Is it the number of
threads/clients reading/writing data from/into the cache?
2. Are you checking reads or writes, or both?
3. What is the CPU and memory utilization in case of 1 thread, 16 threads,
and 32 threads?
4. Do you observe high GC pauses under increasing load?

In general, every client thread implies that a request will be executed in
a separate thread on the server nodes.
Increasing the number of clients/threads, you need to take into account
parameters such as the number of physical CPU cores, the amount of memory
and its speed, and the network speed on the physical machines running the
Ignite server nodes.
If, under some load, you become bound by, for example, CPU, then response
time for each thread will increase, but overall throughput should grow.
It's a common trade-off between response time and throughput.

Performance also differs depending on whether you are referencing mostly the
same set of data or different sets.

If you are executing mostly reads, then please check whether enabling the
auxiliary on-heap cache will improve things. In this case, an eviction
policy should be configured properly to avoid OOM.
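A configuration sketch of the on-heap cache with an eviction policy mentioned above (assumes Ignite 2.x; the cache name and the 100k entry cap are illustrative):

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapCacheConfig {
    static CacheConfiguration<Integer, byte[]> create() {
        CacheConfiguration<Integer, byte[]> ccfg =
            new CacheConfiguration<>("readMostlyCache");
        // keep an on-heap copy of hot entries in front of off-heap storage
        ccfg.setOnheapCacheEnabled(true);
        // cap the on-heap layer so it cannot cause OOM
        ccfg.setEvictionPolicy(new LruEvictionPolicy<>(100_000));
        return ccfg;
    }
}
```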
Kind regards,
Alex.

On Sat, Aug 26, 2017 at 10:05 PM, sandeep jain [via Apache Ignite Users] <
ml+s70518n16430...@n6.nabble.com> wrote:

> We have a scenario where we need to have around 600 million keys in cache.
> We have partitioned the cache and followed the Ignite manual on performance
> tuning. We are observing that the performance degrades as we increase the
> number of threads. With one thread the speed we got was 5000 records per
> sec; with 16 threads the speed reduced to 1800, and with 32 threads it
> reduced further to 1200 records per sec. When we analyzed the stack, we
> could see a lot of threads in the waiting state and only a few in the
> runnable state. We suspect a potential synchronization issue which is
> dragging down the performance. It could also be that I am doing things
> incorrectly. Any suggestions are welcome and appreciated.
>
> --
> If you reply to this email, your message will be added to the discussion
> below:
> http://apache-ignite-users.70518.x6.nabble.com/
> Scalebility-of-Inite-Threads-tp16430.html




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Scalebility-of-Inite-Threads-tp16430p16449.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: DataStreamer operation failed

2017-08-28 Thread Konstantin Dudkov
Hi,

Could you please create a reproducer project that fails this way? It will help
us find the bug.

Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/DataStreamer-operation-failed-tp16439p16448.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Is the following statement true in all cases for REPLICATED mode ?

2017-08-28 Thread agura
Hi,

When the persistence store is enabled, data pages that can't be kept in memory
will be evicted to the persistence store. But the capacity of your disks still
limits your data set size, so the statement is still correct.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-the-following-statement-true-in-all-cases-for-REPLICATED-mode-tp16432p16447.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: CacheMode - REPLICATED related questions

2017-08-28 Thread agura
Hi,

please see my answers (inlined)


userx wrote
> QUESTION 1:
> Is there a way we can identify which of the three machines will be
> the PRIMARY node? And does it change w.r.t. different clients, say if C1
> is doing put, the primary is M1 and if C2 is doing put the primary is M2.
> The reason I am asking this question is if replication fails for C1 and
> say M1 goes down for some reason, then even if M2 and M3 are there, they
> are not serving any purpose. 

You can identify the primary node only for a particular partition. Partitions
are evenly distributed among the nodes, and for a particular partition the
primary and backup nodes will always be different, so there is no single
primary node for all partitions. The affinity function is responsible for
assigning primary and backup nodes to each partition, and you can't change
this behavior dynamically.


userx wrote
> QUESTION 2:
> Can we configure or designate say M1 always to be primary ?

You can do this only by using a backup filter passed to
RendezvousAffinityFunction. But I don't think that it's a good idea, because
usually all requests will be sent to the primary node and, in your case, only
one node will be under load while the others will not.


userx wrote
> QUESTION 3:
> We have one of the CacheWriteSynchronizationMode as PRIMARY_SYNC which
> means that put operation will wait for data to be written on PRIMARY NODE
> (say M1)? Is there a way to ensure data is replicated to at least 2
> out of the 3 machines listed? I could see the options as FULL_SYNC or
> PRIMARY_SYNC so its either all of them or just the primary one.

There is no such possibility at present.
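For reference, the synchronization mode is set per cache; a minimal Spring XML sketch (the cache name is hypothetical, not from this thread):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myReplicatedCache"/>
    <property name="cacheMode" value="REPLICATED"/>
    <!-- FULL_SYNC: puts wait for all nodes; PRIMARY_SYNC: only the primary. -->
    <property name="writeSynchronizationMode" value="FULL_SYNC"/>
</bean>
```

There is no intermediate "N of M replicas" mode, so the choice is between these two (plus FULL_ASYNC).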


userx wrote
> QUESTION 4:
> What happens if REPLICATION activity fails ? Is there any documentation
> for the same as to what the cluster does in such cases or does client have
> to take any actions on its part say log etc ?

I'm afraid that I don't understand your question. Do you mean rebalancing
due to a node outage or data propagation to backups?


userx wrote
> QUESTION5:
> I am soon going to enter production with Xmx as 1G and "Persistent Store
> Enabled" to take care of data to be cached whose size > 1G. Your
> https://apacheignite.readme.io/docs/preparing-for-production document has
> mentioned the JVM related settings for a server with Xmx of 10G. Since I
> am going with 1G, is there any documentation on the recommended settings
> for an Xmx of 1G?


It depends on your use cases, and there is no "silver bullet" advice here. It
would be great to test your application under load before going to
production.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CacheMode-REPLICATED-related-questions-tp16423p16446.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


History, multiple source systems, Data Vault using Ignite...

2017-08-28 Thread Mikhail
Hello,

              We have a typical task: we need to implement an application which
will receive data (and updates) from multiple source systems. There will also be
a default (our own) data source, which can be updated by our application. Only the
latest version of the data should be the "actual" one retrievable from our
application, but full audit trails of updates from every system must always be kept
in order to investigate issues. The team is currently considering Data Vault [1] as
one of the possible solutions (though to me it looks superfluous). Is it a good
option to implement a Data Vault architecture by means of Ignite? Has anybody
implemented applications with such requirements? We want to use Ignite because
in the future we will have data analytics processes (machine learning). What possible
solutions for this task using Ignite do you see?

[1]  https://en.wikipedia.org/wiki/Data_vault_modeling  
--
Best Regards,
Mikhail

Continuous Query on Multiple caches

2017-08-28 Thread rishi007bansod
Hi,
   In the documentation
https://apacheignite.readme.io/v2.1/docs/continuous-queries
continuous queries are described for a single cache only. In our case we want
to use them across multiple caches. How can we use a continuous query for
this? Please provide an example.

Thanks,
Rishikesh
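One possible approach (a sketch, not from this thread): a continuous query is always registered on a single cache, so you register one per cache and share the reaction logic between the listeners. The cache names, value types, and threshold below are all hypothetical:

```java
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;

/** Sketch: one ContinuousQuery per cache feeding shared alert logic. */
public class TwoCacheWatcher {
    static final double THRESHOLD = 5.0; // hypothetical alert threshold T

    /** Pure alert condition: |a - b| exceeds the threshold. */
    static boolean shouldAlert(double a, double b) {
        return Math.abs(a - b) > THRESHOLD;
    }

    public static void watch(Ignite ignite) {
        IgniteCache<Integer, Double> s1 = ignite.cache("stream1");
        IgniteCache<Integer, Double> s2 = ignite.cache("stream2");

        ContinuousQuery<Integer, Double> q1 = new ContinuousQuery<>();
        q1.setLocalListener(evts -> {
            for (CacheEntryEvent<? extends Integer, ? extends Double> e : evts) {
                // Compare the update against the latest value from the other stream.
                Double other = s2.get(e.getKey());
                if (other != null && shouldAlert(e.getValue(), other))
                    System.out.println("ALERT for key " + e.getKey());
            }
        });
        s1.query(q1); // keep the returned cursor if you need to cancel later

        // A mirror ContinuousQuery on s2 (reading from s1) is registered the same way.
    }
}
```

Each query fires independently on updates to its own cache; the listener pulls the counterpart value, so an alert is evaluated as soon as either stream updates.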



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-on-Multiple-caches-tp16444.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: fetching all the tasks already scheduled and to know the status of the task

2017-08-28 Thread chandrika
Hello Alex,

Thanks a lot for a quick reply .

I have one more query regarding point 2 from the earlier post. You said there
is a way to get this using ComputeTaskFuture, but if I have to use
SchedulerFuture to get the information below, how do I go about it?
1. fetching all the tasks in the ignite nodes
2. all the jobs associated with a task

Our task is defined as given below:
SchedulerFuture fut = ignite.scheduler().scheduleLocal(..)

The reason I am asking is that I need to schedule all the jobs with a cron
expression, which is our requirement.

thanks and regards,
chandrika



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/fetching-all-the-tasks-already-scheduled-and-to-know-the-status-of-the-task-tp16393p16443.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Equally distributing Kafka topics within the cluster

2017-08-28 Thread slava.koptilin
Hi Zbyszek,

I don't think there is an out-of-the-box way to do this.
Your approach based on topology events and some shared state looks workable.
Another option is to implement your own affinity function, but I don't think
that is a simple path.

In any case, it would be great if you could share your solution with the
community.

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Equally-distributing-Kafka-topics-within-the-cluster-tp16377p16442.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Failed to run reduce query locally

2017-08-28 Thread Taras Ledkov

Hi,

Looks like the problem has been fixed in ignite-2.1.

Please take a look at the ticket IGNITE-5190. [1]

[1] https://issues.apache.org/jira/browse/IGNITE-5190

On 24.08.2017 19:23, igor.tanackovic wrote:

I have a query which can be executed in H2 console but fails on Ignite's
.query(sql).getAll():

SELECT i.* FROM "cache".CACHEDITEM  AS i inner JOIN (
 SELECT ci.position, MAX(ci.lastModifiedTime) AS modifiedTime FROM
"cache".CACHEDITEM AS ci
   WHERE ci.startTime<=NOW()
   AND ci.endTime>NOW()
   AND ci.stripeId = 301
   GROUP BY ci.position ORDER BY ci.position) i2
 WHERE i.position=i2.position
 AND i.lastModifiedTime=i2.modifiedTime
 AND i.startTime<=NOW()
 AND i.endTime>NOW()
 AND i.stripeId=301
 GROUP BY i.position ORDER BY i.position


Caused by: org.apache.ignite.IgniteCheckedException: Failed to execute SQL
query.
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:1226)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1278)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1253)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:813)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1493)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:94)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1534)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:94)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:113)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.springframework.data.ignite.IgniteAdapter.execute(IgniteAdapter.java:135)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.ignite.repository.query.IgniteQueryEngine.execute(IgniteQueryEngine.java:74)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.keyvalue.core.AbstractKeyValueAdapter.find(AbstractKeyValueAdapter.java:84)
~[spring-data-keyvalue-1.2.3.RELEASE.jar:?]
at
org.springframework.data.ignite.IgniteTemplate$2.doInKeyValue(IgniteTemplate.java:307)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.ignite.IgniteTemplate$2.doInKeyValue(IgniteTemplate.java:302)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.ignite.IgniteTemplate.execute(IgniteTemplate.java:273)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
... 181 more
Caused by: org.h2.jdbc.JdbcSQLException: General error:
"java.lang.ArrayIndexOutOfBoundsException: 1"; SQL statement:
SELECT
I__Z0___KEY _KEY,
I__Z0___VAL _VAL
FROM (SELECT
__C0_0 POSITION,
MAX(__C0_1) AS MODIFIEDTIME
FROM PUBLIC.__T0
GROUP BY __C0_0
ORDER BY 1, 1, 2) I2__Z2
  INNER JOIN (SELECT
__C1_0 I__Z0__LASTMODIFIEDTIME,
__C1_1 I__Z0___VAL,
__C1_2 I__Z0___KEY,
__C1_3 I__Z0__POSITION
FROM PUBLIC.__T1
ORDER BY 4, 1) __Z3
  ON TRUE
WHERE TRUE AND (TRUE AND (TRUE AND ((I__Z0__POSITION = I2__Z2.POSITION) AND
(I__Z0__LASTMODIFIEDTIME = I2__Z2.MODIFIEDTIME
GROUP BY I__Z0__POSITION
ORDER BY =I__Z0__POSITION [5-195]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
~[h2-1.4.195.jar:1.4.195]
at org.h2.message.DbException.get(DbException.java:168)
~[h2-1.4.195.jar:1.4.195]
at org.h2.message.DbException.convert(DbException.java:295)
~[h2-1.4.195.jar:1.4.195]
at org.h2.command.Command.executeQuery(Command.java:215)
~[h2-1.4.195.jar:1.4.195]
at
org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:111)
~[h2-1.4.195.jar:1.4.195]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:1219)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1278)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1253)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:813)

Re: Kerberos installation

2017-08-28 Thread ravi
Hi Evgenii,
  Thanks for the reply. We want to use Ignite on top of the existing
kerberized Hadoop/Hive deployment without any code change (i.e. use HDFS as
the secondary file system and refer to Ignite-specific config files while
accessing HDFS, without writing any Java code).
  In this case, can you provide the needed Ignite config file with
KerberosHadoopFileSystemFactory and its allowed properties? We cannot find
enough documentation.
   Please reply.

Regards
Ravi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Kerberos-installation-tp16200p16440.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


DataStreamer operation failed

2017-08-28 Thread Pranas Baliuka
I am prototyping Apache Ignite to store a simple data grid with simple
objects:

Key:
  private final int securityId;
  private final long time;

OHLC: 
  private final int securityId;
  private final long time;
  private final double open;
  private final double high;
  private final double low;
  private final double close;
  private final double marketVWAP;

After inserting 20 to 30 such key-value entries I get:


[16:33:32]__   
[16:33:32]   /  _/ ___/ |/ /  _/_  __/ __/ 
[16:33:32]  _/ // (7 7// /  / / / _/   
[16:33:32] /___/\___/_/|_/___/ /_/ /___/  
[16:33:32] 
[16:33:32] ver. 2.1.0#20170721-sha1:a6ca5c8a
[16:33:32] 2017 Copyright(C) Apache Software Foundation
[16:33:32] 
[16:33:32] Ignite documentation: http://ignite.apache.org
[16:33:32] 
[16:33:32] Quiet mode.
[16:33:32]   ^-- Logging to file
'/Users/pranas/Apps/apache-ignite-fabric-2.1.0-bin/work/log/ignite-2ac0c6b2.log'
[16:33:32]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[16:33:32] 
[16:33:32] OS: Mac OS X 10.12.6 x86_64
[16:33:32] VM information: Java(TM) SE Runtime Environment 1.8.0_40-b27
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.40-b25
[16:33:32] Configured plugins:
[16:33:32]   ^-- None
[16:33:32] 
[16:33:32] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[16:33:32] Security status [authentication=off, tls/ssl=off]
[16:33:32] REST protocols do not start on client node. To start the
protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system
property.
[16:33:33] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[16:33:33] 
[16:33:33] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[16:33:33] 
[16:33:33] Ignite node started OK (id=2ac0c6b2)
[16:33:33] Topology snapshot [ver=6, servers=1, clients=1, CPUs=8,
heap=11.0GB]
 >>> Apache Ignite node is up and running.
 >>> Simulator - Real code would process journal events ... TODO.
Processed 0 events so far
Processed 10 events so far
Processed 20 events so far
[2017-08-28 16:33:46,592][ERROR][data-streamer-#36%null%][DataStreamerImpl]
DataStreamer operation failed.
class org.apache.ignite.IgniteCheckedException: Failed to finish operation
(too many remaps): 32
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:869)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:834)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:382)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:346)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:334)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:494)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:473)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1803)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:333)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1097)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: DataStreamer
request failed [node=74d746ea-6b57-49be-85eb-3064262ab039]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1792)
... 8 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to marshal
response error, see node log for details.
at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.start(DataStreamProcessor.java:102)
at
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1788)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:938)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1896)
at

Re: Kerberos installation

2017-08-28 Thread Evgenii Zhuravlev
Hi, take a look at KerberosHadoopFileSystemFactory; it can be used in a
kerberized Hadoop environment:

KerberosHadoopFileSystemFactory delegate = new
KerberosHadoopFileSystemFactory();

delegate.setKeyTab("foo.keytab");
delegate.setKeyTabPrincipal("foo");
delegate.setReloginInterval(0);

Evgenii
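For the XML route, a factory like this can be wired into the secondary file system declaration; a sketch only (the URI, keytab path, and principal are placeholders, not values from this thread):

```xml
<property name="secondaryFileSystem">
    <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
        <property name="fileSystemFactory">
            <bean class="org.apache.ignite.hadoop.fs.KerberosHadoopFileSystemFactory">
                <property name="uri" value="hdfs://namenode:8020/"/>
                <property name="keyTab" value="/path/to/foo.keytab"/>
                <property name="keyTabPrincipal" value="foo"/>
            </bean>
        </property>
    </bean>
</property>
```

This mirrors the Java snippet above: the factory logs in from the keytab and re-logs in periodically, instead of using the CachingHadoopFileSystemFactory that assumes SIMPLE authentication.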


2017-08-26 18:09 GMT+03:00 ravi :

> Hi,
>   I have kerberos enabled zookeeper based hadoop HA environment. With out
> ignite, I am able to access my hadoop file system with valid kerberos
> ticket.
>   However by running the below command, I am getting below exception. Is
> there anything i am missing in default-config.xml?. Can u provide some
> proper documentation to make hdfs as secondary file system ( kerberos
> enabled hadoop environment ) to ignite?.
>
> $HADOOP_HOME/bin/hdfs --config /$HADOOP_HOME/ignite_conf dfs -mkdir /tut1
>
> Ignite default-config.xml (Hadoop as secondary file system):
>
> <property name="secondaryFileSystem">
>   <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>     <property name="fileSystemFactory">
>       <bean class="org.apache.ignite.hadoop.fs.CachingHadoopFileSystemFactory">
>         <property name="uri" value="hdfs://192.168.126.207:8020/"/>
>       </bean>
>     </property>
>   </bean>
> </property>
>
> /opt/Ignite2/ignite-hadoop-2.1.0/config/hadoop/
> core-site.xml
> 
> 
> 
> 
> 
> 
>
>
>
> Stack Trace:-
>
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.
> AccessControlException):
> SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client.call(Client.java:1475)
> [2017-08-26 18:50:11,731][ERROR][igfs-igfs-ipc-#63%null%][IgfsImpl] File
> info operation in DUAL mode failed [path=/tut1]
> class org.apache.ignite.igfs.IgfsException: Failed to get file status
> [path=/tut1]
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Kerberos-installation-tp16200p16427.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>