Re: Re: Where can we get the partition assignment of a Cache after the cluster changed?

2017-10-24 Thread aa...@tophold.com
hi Slava,

Thanks for the details. I adjusted it a bit: CacheA and CacheB are hosted on
the first node, whose NodeType attribute is X,



while CacheC is hosted on another node whose NodeType is Y. When creating the
cache configurations, I restrict CacheA and CacheB to node X, and CacheC to node Y:

CacheConfiguration ccfg = new CacheConfiguration(name);

ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setEagerTtl(true);
ccfg.setName(name);
ccfg.setNodeFilter(new AttributeNodeFilter("NodeType", nodeType));
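For context, AttributeNodeFilter matches against a user attribute that each node must advertise in its own configuration. A minimal sketch of the node side (the attribute name "NodeType" is taken from the filter above; everything else is illustrative):

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

// Illustrative sketch: this node advertises NodeType=X, so only caches
// whose AttributeNodeFilter requires NodeType=X keep partitions here.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setUserAttributes(Collections.singletonMap("NodeType", "X"));
Ignite nodeX = Ignition.start(cfg);
```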

If I bring up node X, all events are as expected. But now if I start node Y,
I still get a lot of rebalance events for CacheA and CacheB.

Node Y's start or stop should not affect CacheA and CacheB; their partitions
never change or rebalance. Log after Y started:


[11:05:46] Topology snapshot [ver=2, servers=2, clients=0, CPUs=8, heap=18.0GB]

Received event [evt=CACHE_REBALANCE_STARTED,evt=CacheRebalancingEvent 
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode 
[id=55063d61-4168-4727-a5b9-1e1622fb57aa, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
192.168.10.36, 192.168.56.1, 2001:0:d388:7101:1c4c:155:3f57:f5db], 
sockAddrs=[Aaron/192.168.10.36:47501, 
/2001:0:d388:7101:1c4c:155:3f57:f5db:47501, /0:0:0:0:0:0:0:1:47501, 
/127.0.0.1:47501, /192.168.56.1:47501], discPort=47501, order=2, intOrder=2, 
lastExchangeTime=1508900746905, loc=false, ver=2.2.0#20170915-sha1:5747ce6b, 
isClient=false], discoEvtType=10, discoTs=1508900746975, 
discoEvtName=NODE_JOINED, nodeId8=16c5f906, msg=Cache rebalancing event., 
type=CACHE_REBALANCE_STARTED, tstamp=1508900747387]

Received event [evt=CACHE_REBALANCE_STOPPED,evt=CacheRebalancingEvent 
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode 
[id=55063d61-4168-4727-a5b9-1e1622fb57aa, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
192.168.10.36, 192.168.56.1, 2001:0:d388:7101:1c4c:155:3f57:f5db], 
sockAddrs=[Aaron/192.168.10.36:47501, 
/2001:0:d388:7101:1c4c:155:3f57:f5db:47501, /0:0:0:0:0:0:0:1:47501, 
/127.0.0.1:47501, /192.168.56.1:47501], discPort=47501, order=2, intOrder=2, 
lastExchangeTime=1508900746905, loc=false, ver=2.2.0#20170915-sha1:5747ce6b, 
isClient=false], discoEvtType=10, discoTs=1508900746975, 
discoEvtName=NODE_JOINED, nodeId8=16c5f906, msg=Cache rebalancing event., 
type=CACHE_REBALANCE_STOPPED, tstamp=1508900747387]



Regards
Aaron


aa...@tophold.com
 
From: slava.koptilin
Date: 2017-10-24 20:29
To: user
Subject: Re: Re: Where can we get the partition assignment of a Cache after the 
cluster changed?
Hi Aaron,
 
I just started a single Ignite node as follows:
Ignite ignite = Ignition.start("examples/config/example-ignite.xml");
 
IgnitePredicate<Event> rebalanceEventLsnr = evt -> {
    System.out.println("Received event [evt=" + evt.name() + ", evt=" + evt.toString());
    return true; // Continue listening.
};
ignite.events().localListen(rebalanceEventLsnr,
    EventType.EVT_CACHE_REBALANCE_STOPPED,
    EventType.EVT_CACHE_REBALANCE_STARTED);
 
Thread.sleep(5000);
IgniteCache cache = ignite.getOrCreateCache(createCacheConfiguration("CacheA"));
IgniteCache cache2 = ignite.getOrCreateCache(createCacheConfiguration("CacheB"));
 
and got the following output:
[15:22:39,460][INFO ][exchange-worker-#31%null%][GridCacheProcessor] Started
cache [name=CacheA, mode=PARTITIONED]
Received event [evt=CACHE_REBALANCE_STARTED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STOPPED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
[15:22:39,478][INFO
][exchange-worker-#31%null%][GridCachePartitionExchangeManager] Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=1,
minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
node=1ee1a908-2424-46b0-911b-0a2c70ad68e2]
[15:22:39,480][INFO ][exchange-worker-#31%null%][GridCacheProcessor] Started
cache [name=CacheB, mode=PARTITIONED]
Received event [evt=CACHE_REBALANCE_STARTED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STOPPED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STARTED, evt=CacheRebalancingEvent
[cacheName=CacheB, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STOPPED, evt=CacheRebalancingEvent
[cacheName=CacheB, part=-1, discoNode=TcpDiscoveryNode [...]]
[15:22:39,485][INFO
][exchange-worker-#31%null%][GridCachePartitionExchangeManager] Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=1,
minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
node=1ee1a908-2424-46b0-911b-0a2c70ad68e2]
 
So, I see messages for both caches.
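Coming back to the subject line (getting the partition assignment of a cache after the cluster changed), a sketch using the Affinity API rather than rebalance events; the cache name is illustrative:

```java
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

// Illustrative sketch: dump the current partition-to-primary-node mapping
// for a cache; call it again after a topology change to see what moved.
Affinity<Object> aff = ignite.affinity("CacheA");
for (int part = 0; part < aff.partitions(); part++) {
    ClusterNode primary = aff.mapPartitionToNode(part);
    System.out.println("partition " + part + " -> primary " + primary.id());
}
```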
 
 
 
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Client Near Cache Configuration Lost after Cluster Node Removed

2017-10-24 Thread Timay
I believe I had the same issue; I have posted a test and my findings to the
user group, which can be found here:

http://apache-ignite-users.70518.x6.nabble.com/Near-Cache-Topoolgy-change-causes-NearCache-to-always-miss-td17539.html



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Near Cache Topoolgy change causes NearCache to always miss.

2017-10-24 Thread Timay
Hey Slava, 

Just some more details: it looks like GridNearCacheEntry.primaryNode is the
suspect; that is what updates the topVer. If you create 2 nodes and then X
clients, and use one of the clients to create a cache with a near-cache
configuration, it seems to work as expected. However, if you create a cache
on a server node instance and populate it, but then read through a near cache
created via the client, a topology change sets the topVer to none, and
primaryNode never gets to reset the topVer, causing GridNearCacheEntry.valid
to return false.

Hope that makes some sense; attached is a test I used. It's crude, but it
should at least show what I am trying to convey. The good test shows what is
expected: the miss count is just the initial call plus the one after the
invalidation from the topology change. The bad test has almost all reads
marked as misses, which causes the hard hit to the cluster.
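For reference, a minimal sketch of the client-side near-cache setup described above (the cache name and the clientIgnite instance are illustrative assumptions):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.NearCacheConfiguration;

// Illustrative sketch: the cache already exists on the server nodes;
// a client node attaches a near cache to it and reads through that.
// 'clientIgnite' is assumed to be an Ignite instance started in client mode.
IgniteCache<Integer, String> near = clientIgnite.createNearCache(
    "myCache", new NearCacheConfiguration<Integer, String>());
String v = near.get(1); // should be served locally after the first read
```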

GridCacheNearClientMissTest.java

  

Thanks
Tim(ay)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: How the Ignite Service performance? When we test the CPU soon be occupied 100%

2017-10-24 Thread aa...@tophold.com
Thanks Andrey!  Now it is better; we are trying to move non-core logic to
separate instances.

What I learned from the last several months of using Ignite is that we should
set up Ignite as a standalone data node and put my application logic in
another one.

Otherwise it brings too much instability to my application. I am not sure
whether this is the best practice?


Regards
Aaron


aa...@tophold.com
 
From: Andrey Mashenkov
Date: 2017-10-24 19:29
To: user; aaron
Subject: Re: How the Ignite Service performance? When we test the CPU soon be 
occupied 100%
Hi aaron,

Can you make and share JFR profile [1] if it is still actual?

[1] 
https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-flightrecorder-settings


-- 
Best regards,
Andrey V. Mashenkov


Re: Failed to query ignite

2017-10-24 Thread Andrey Mashenkov
Hi,

Got it.
Ignite uses H2 underneath. Try using BIGINT, as it should be mapped to the
Java Long type.
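A sketch of what this suggests, reusing the table from the thread (everything else is illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative sketch: declare the key column as BIGINT, which maps to
// java.lang.Long, and bind the parameter with setLong.
// CREATE TABLE test (id BIGINT, name VARCHAR, PRIMARY KEY (id))
//     WITH "backups=1,affinityKey=id";
Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
PreparedStatement stmt = conn.prepareStatement("select * from test where id = ?");
stmt.setLong(1, 1L); // bind as long to match BIGINT
try (ResultSet rs = stmt.executeQuery()) {
    while (rs.next())
        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
}
conn.close(); // close only after the ResultSet has been consumed
```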

On Tue, Oct 24, 2017 at 9:54 PM, iostream  wrote:

> Hi Andrew,
>
> The result set was /null/ because the query failed in ignite. This is the
> only stack trace printed.
>
> The problem occurs only when I query on LONG column "id". When I query on
> VARCHAR column "name" (/eg. select * from test where name = "abc"/), I am
> able to get the result set.
>
> What is the correct way to set argument to query on a field whose data type
> is LONG?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite/Cassandra failing to use supplied value for where clause

2017-10-24 Thread Kenan Dalley
These were included in the project file link in the initial post. Here they
are in text for reference. The only thing not posted is the pom.xml, and it's
just a standard Spring Boot Ignite-with-Cassandra setup.
cassandra-ignite.xml
test1-cassandra-persistence-settings.xml
cassandra-connection-settings.xml
Test1Key.java
Test1.java
Application.java
ApplicationConfig.java




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Ignite-cassandra module issue

2017-10-24 Thread Tobias Eriksson
Hi 
Did you ever resolve this problem? I have it too, on and off.
In fact it sometimes happens just like that: e.g. I have been running my
Ignite client and then stopped it, and when I run it again a while later, all
of a sudden this error shows up. And that is the first thing that happens,
and there is NOT a massive amount of load on Cassandra at that time. But I
have also seen it when I hammer Ignite/Cassandra with updates/inserts.

This is a deal-breaker for me; I need to understand how to fix this, because
having this in production is not an option.

-Tobias




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Web Session Clustering for JBoss...

2017-10-24 Thread vkulichenko
Hi Hugh,

The containers mentioned in the list are just the ones that were tested by
the Ignite community. Only standard servlet APIs are used there, so it's
supposed to work with any container.

I would recommend trying to set this up and letting us know if you have any
specific issues.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Failed to query ignite

2017-10-24 Thread iostream
Hi Andrew,

The result set was /null/ because the query failed in Ignite. This is the
only stack trace printed.

The problem occurs only when I query on the LONG column "id". When I query on
the VARCHAR column "name" (/e.g. select * from test where name = "abc"/), I
am able to get the result set.

What is the correct way to set an argument to query on a field whose data
type is LONG?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Grid freezing

2017-10-24 Thread smurphy
Thanks Evgenii,

I'll add that catch block and see if it sheds any light on the issue.

The client's transaction configuration is set to Optimistic and Serializable,
and the transaction within the try-with-resources block is explicitly set to
Optimistic and Serializable, which should preclude deadlock... but there does
seem to be a deadlock.

The servers do not have any transaction configuration set explicitly.
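For reference, a sketch of the optimistic/serializable try-with-resources pattern being discussed (the cache name and key are illustrative):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

// Illustrative sketch: optimistic serializable transactions detect conflicts
// at commit time instead of holding locks, so lock-ordering deadlocks should
// not occur; conflicting commits fail with TransactionOptimisticException.
try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
    IgniteCache<Integer, String> cache = ignite.cache("myCache");
    cache.put(1, "value");
    tx.commit();
}
```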



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cannot activate Ignite with custom security plugin

2017-10-24 Thread calebs
I just checked out the ignite-2.3 branch, and I'm pleasantly surprised to see
that the following two cases are already included in the
GridRestProcessor.authorize method:

case CLUSTER_ACTIVE:
case CLUSTER_INACTIVE: 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-10-24 Thread Anand Vijai
Hi Andrew,


I did try the Iterator implementation using iterator.hasNext() (you can see
that it's commented out in the code I shared), but the performance is only
slightly higher (hence I gave the range of 10-100 keys per second).

Do you want me to try using a cursor explicitly instead of an iterator?
I need to go through all the keys and perform the computation.


Regards
Anand Vijai



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-10-24 Thread Andrey Mashenkov
Hi Anand,

It looks like the cluster-wide operation cache.size() is performed on every
iteration.
Please take a look at how to iterate over cache entries via ScanQuery [1].

[1] https://apacheignite.readme.io/v2.2/docs/cache-queries#scan-queries
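A minimal sketch of iterating via a ScanQuery cursor instead of calling the cluster-wide cache.size() on every step (the key/value types are taken from the code below; the rest is illustrative):

```java
import javax.cache.Cache;

import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

// Illustrative sketch: the cursor streams all entries once and is closed
// automatically by try-with-resources; each entry is consumed exactly once.
try (QueryCursor<Cache.Entry<FactKey, Fact>> cur =
         cache.query(new ScanQuery<FactKey, Fact>())) {
    for (Cache.Entry<FactKey, Fact> e : cur) {
        FactKey key = e.getKey();
        // ... batch keys here and call cache.invokeAll(...) periodically ...
    }
}
```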

On Tue, Oct 24, 2017 at 7:40 PM, Anand Vijai  wrote:

> I have a cache with approx 1MM rows and is partitioned across 3 server
> nodes.
> I am trying to retrieve all the keys from the cache and then trying to
> perform an update operation using cache.invoke.
> The cache.invoke itself is very fast but the retrieval of the keys takes a
> lot of time - approximately 10-100 keys per second is the throughput I am
> getting.
> The cache has 15 fields and 4 of them are defined as key fields and are
> indexed but since I am using Scanquery i am guessing that the indexes arent
> used?
>
> What is the best/correct way to retrieve all keys and perform an update to
> the cache within the server nodes without returning any data back to the
> client node.
>
> Originally i had iterated through the iterator one key at a time and
> thought
> the update operation was taking time. But i changed the code to create a
> keyset first and then did a cache.invoke in separate blocks of code as
> below:
>
> // Execute the query.
> Iterator<Cache.Entry<FactKey, Fact>> iterator =
> cache.query(scanQuery).iterator();
> Set<FactKey> KEYS_SET = new HashSet<>();
> FactKey key = new FactKey();
> Fact val = null;
> ExecutionTimer t = new ExecutionTimer(); // this is just to track
> time taken in seconds
> t.start();
>
> for (int i = 0; i < cache.size(CachePeekMode.ALL); i++) {
> //while (iterator.hasNext()) {
> key = iterator.next().getKey();
>
> KEYS_SET.add(iterator.next().getKey());
>
> if(i%1000 == 0)
> {
> System.out.println(">>> Update Count: " + i);
> t.end();
> System.out.println("Time taken for: " + i + " " +
> t.duration());
> t.start();
> cache.invokeAll(KEYS_SET, (entry, args) -> {
> val = entry.getValue();
> val.setAmount(val.getAmt1() + val.getAmt2());
> ;
> return null;
> });
> KEYS_SET.clear();
> System.out.println(">>> Updating ADB: " + i);
> t.end();
> System.out.println("Timer for cache invoke: " + t.duration());
> t.start();
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-10-24 Thread Anand Vijai
I have a cache with approx 1MM rows, partitioned across 3 server nodes.
I am trying to retrieve all the keys from the cache and then perform an
update operation using cache.invoke.
The cache.invoke itself is very fast, but the retrieval of the keys takes a
lot of time: approximately 10-100 keys per second is the throughput I am
getting.
The cache has 15 fields; 4 of them are defined as key fields and are indexed,
but since I am using ScanQuery I am guessing that the indexes aren't used?

What is the best/correct way to retrieve all keys and perform an update to
the cache within the server nodes without returning any data back to the
client node?

Originally I iterated through the iterator one key at a time and thought the
update operation was taking time. But I changed the code to create a keyset
first and then did a cache.invoke in separate blocks of code, as below:

// Execute the query.
Iterator<Cache.Entry<FactKey, Fact>> iterator =
cache.query(scanQuery).iterator();
Set<FactKey> KEYS_SET = new HashSet<>();
FactKey key = new FactKey();
Fact val = null;
ExecutionTimer t = new ExecutionTimer(); // this is just to track
time taken in seconds
t.start();

for (int i = 0; i < cache.size(CachePeekMode.ALL); i++) {
//while (iterator.hasNext()) {
key = iterator.next().getKey();

KEYS_SET.add(iterator.next().getKey());

if(i%1000 == 0)
{
System.out.println(">>> Update Count: " + i);
t.end();
System.out.println("Time taken for: " + i + " " + t.duration());
t.start();
cache.invokeAll(KEYS_SET, (entry, args) -> {
val = entry.getValue();
val.setAmount(val.getAmt1() + val.getAmt2());
;
return null;
});
KEYS_SET.clear();
System.out.println(">>> Updating ADB: " + i);
t.end();
System.out.println("Timer for cache invoke: " + t.duration());
t.start();
}



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite/Cassandra failing to use supplied value for where clause

2017-10-24 Thread Andrey Mashenkov
Hi Kenan,

It seems the column mapping is missing.
You are mixing annotation configuration via @QuerySqlField with creating the
table via "CREATE TABLE". I'm not sure it can work this way.

It seems you have neither QueryEntities configured nor IndexedTypes, which
Ignite needs in order to process the annotations and learn about the
id <-> my_id mapping.
Would you please share your cache configuration?
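For illustration, a sketch of a QueryEntity-based configuration with an explicit column alias (the class and field names here are assumptions, not taken from your project):

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

// Illustrative sketch: declare the queryable fields explicitly and alias
// the Java field "id" to the table column "my_id".
CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("test1");
QueryEntity qe = new QueryEntity("java.lang.Long", "com.example.Test1");
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", "java.lang.Long");
fields.put("name", "java.lang.String");
qe.setFields(fields);
qe.setAliases(Collections.singletonMap("id", "my_id"));
ccfg.setQueryEntities(Collections.singletonList(qe));
```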


On Mon, Oct 23, 2017 at 8:03 PM, Kenan Dalley  wrote:

> Here's the C* setup:
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Client Near Cache Configuration Lost after Cluster Node Removed

2017-10-24 Thread torjt
To answer your question: using Visor, one can see cache misses increase in
the scenario described. Furthermore, measured cache.get() calls increase from
sub-millisecond to 20-30 millis.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: queryResult.getAll() stuck !!

2017-10-24 Thread Andrey Mashenkov
Hi Sumit,

What version of Ignite do you use?
Do you observe any CPU load?
Would you please share the query plan? You can get it via the H2 console [1].
Did you create the table via the "CREATE TABLE" SQL command?

It is possible you are hitting a known issue: in version 2.1, simple queries
by primary key could trigger a full scan instead of using the index in some
cases.
This is already fixed and will be available in 2.3, which is coming soon.

[1]
https://apacheignite.readme.io/v2.2/docs/sql-performance-and-debugging#using-h2-debug-console
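The plan can also be fetched without the H2 console by prefixing the query with EXPLAIN; a sketch (the table name is illustrative):

```java
import java.util.List;

import org.apache.ignite.cache.query.SqlFieldsQuery;

// Illustrative sketch: EXPLAIN returns the H2 execution plan as rows,
// showing whether an index or a full scan is used.
List<List<?>> plan = cache.query(
    new SqlFieldsQuery("EXPLAIN SELECT * FROM mytable WHERE id = ?").setArgs(1L)
).getAll();
for (List<?> row : plan)
    System.out.println(row.get(0));
```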



On Wed, Oct 18, 2017 at 12:03 PM, Sumit Sethia 
wrote:

> Hi Alexander, there is no suspicious log as far as I can see.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: OBDC Ignite driver error in MS Excel: ODBC version is not supported

2017-10-24 Thread vkondraschenko
Igor,

Excel data import scenario is designed to query for any connection
parameters after clicking on 'Connect...'. These parameters are supposed to
be queried by the driver. But in fact the error occurs prior to querying for
any additional data.
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Hibernate Entity Cache Gets Stuck during GridCacheProxyImpl.put

2017-10-24 Thread Andrey Mashenkov
Hi,

Looks like the thread is waiting for an async transactional operation.

Would you please share a full thread dump?
There should be a thread in the thread pool that is hung or still in progress.

On Tue, Oct 17, 2017 at 12:31 PM, ksampath 
wrote:

> Hi,
>
> *Environmental Details :*
> ignite 2.2.0, Hibernate 4.3.0, JDK version 1.8_131
>
> We configured Cache to use Local. Configured it as Second Level cache for
> Hibernate. Cache Strategy used as Read- Write. However, Application gets
> stuck while loading entity cache by Hibernate Can you please check this
> issue?
>
> Let me know if any other details required?
>
> *Stuck Thread Trace :*
>
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:176)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter$21.op(GridCacheAdapter.java:2355)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter$21.op(GridCacheAdapter.java:2353)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4107)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2353)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2334)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2311)
> at org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.put(GridCacheProxyImpl.java:561)
> at org.apache.ignite.cache.hibernate.HibernateCacheProxy.put(HibernateCacheProxy.java:195)
> at org.apache.ignite.cache.hibernate.HibernateReadWriteAccessStrategy.putFromLoad(HibernateReadWriteAccessStrategy.java:102)
> at org.apache.ignite.cache.hibernate.HibernateAccessStrategyAdapter.putFromLoad(HibernateAccessStrategyAdapter.java:140)
> at org.apache.ignite.cache.hibernate.HibernateAbstractRegionAccessStrategy.putFromLoad(HibernateAbstractRegionAccessStrategy.java:54)
> at org.hibernate.engine.internal.TwoPhaseLoad.doInitializeEntity(TwoPhaseLoad.java:221)
> at org.hibernate.engine.internal.TwoPhaseLoad.initializeEntity(TwoPhaseLoad.java:144)
> at org.hibernate.loader.Loader.initializeEntitiesAndCollections(Loader.java:1115)
> at org.hibernate.loader.Loader.processResultSet(Loader.java:973)
> at org.hibernate.loader.Loader.doQuery(Loader.java:921)
> at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:355)
> at org.hibernate.loader.Loader.doList(Loader.java:2554)
> at org.hibernate.loader.Loader.doList(Loader.java:2540)
> at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2370)
> at org.hibernate.loader.Loader.list(Loader.java:2365)
> at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:497)
> at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:387)
> at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:236)
> at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1300)
> at org.hibernate.internal.QueryImpl.list(QueryImpl.java:103)
> at org.hibernate.jpa.internal.QueryImpl.list(QueryImpl.java:573)
> at org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:449)
>
> Regards,
> ksampath
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Benchmark results questions

2017-10-24 Thread dkarachentsev
Hi Ray,

I've got the same results in my environment and am checking what happens.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cannot activate Ignite with custom security plugin

2017-10-24 Thread Andrey Mashenkov
Hi,

Looks like a bug: GridRestProcessor doesn't support the CLUSTER_ACTIVE
command.
However, the command is present in the REST command list (see the
GridRestCommand enum).


I've created a ticket for this [1]


[1] https://issues.apache.org/jira/browse/IGNITE-6741

On Tue, Oct 24, 2017 at 5:22 PM, calebs  wrote:

> Version: Ignite 2.2.
>
> Partial Ignite Config:
>
> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     ...
>     <property name="persistentStoreConfiguration">
>         <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration">
>             <property name="persistentStorePath" value="/tmp/ignite/work"/>
>         </bean>
>     </property>
>     ...
> </bean>
>
> A jar that contains our custom security plugin for the security named
> ACSPluginProvider & ACSSecurityProcessor is placed in $IGNITE_HOME/libs
> folder.
>
> Run ignite.sh to start the single data node and see
> ACSSecurityProcessor.start method is called.
>
> 10-23 20:46:16.567 [main ] INFO
> apache.ignite.internal.IgniteKernal%cdev_cluster - Configured caches [in
> 'sysMemPlc' memoryPolicy: ['ignite-sys-cache']]
> 10-23 20:46:16.601 [main ] INFO
> apache.ignite.internal.IgniteKernal%cdev_cluster - 3-rd party licenses can
> be found at: /opt/ignite/libs/licenses
> 10-23 20:46:16.663 [main ] INFO
> internal.processors.plugin.IgnitePluginProcessor - Configured plugins:
> 10-23 20:46:16.664 [main ] INFO
> internal.processors.plugin.IgnitePluginProcessor -   ^-- ACSPluginProvider
> 1.0.0
> 10-23 20:46:16.664 [main ] INFO
> internal.processors.plugin.IgnitePluginProcessor -   ^-- MaxPoint
> 10-23 20:46:16.664 [main ] INFO
> internal.processors.plugin.IgnitePluginProcessor -
> 10-23 20:46:16.673 [main ] INFO  platform.auth.ignite.ACSSecurityProcessor
> -
> start
> 10-23 20:46:16.726 [main ] INFO  spi.communication.tcp.TcpCommunicationSpi
> -
> Successfully bound communication NIO server to TCP port [port=47100,
> locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0,
> pairedConn=false]
>
> Then run control.sh to activate
>
> /opt/ignite$ control.sh --port 11211 --activate
> Oct 23, 2017 8:48:29 PM
> org.apache.ignite.internal.client.impl.connection.
> GridClientNioTcpConnection
> 
> INFO: Client TCP connection established: /127.0.0.1:11211
> Oct 23, 2017 8:48:30 PM
> org.apache.ignite.internal.client.impl.GridClientImpl 
> INFO: Client started [id=d2e2b816-61e3-47ff-9d88-ae4c8b3eb2ae,
> protocol=TCP]
>
> Then, I see ACSSecurityProcessor.authenticate is called, but then followed
> by Unexpected command: CLUSTER_ACTIVE exception.
>
> 10-23 20:48:29.688 [rest-#46%cdev_cluster%] INFO
> platform.auth.ignite.ACSSecurityProcessor - authenticate:
> id=d2e2b816-61e3-47ff-9d88-ae4c8b3eb2ae, login=null
> 10-23 20:48:30.051 [rest-#47%cdev_cluster%] ERROR
> internal.processors.rest.GridRestProcessor - Client request execution
> failed
> with error.
> java.lang.AssertionError: Unexpected command: CLUSTER_ACTIVE
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.authorize(
> GridRestProcessor.java:817)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.
> handleRequest(GridRestProcessor.java:250)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(
> GridRestProcessor.java:91)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(
> GridRestProcessor.java:157)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 10-23 20:48:30.056 [rest-#47%cdev_cluster%] ERROR
> internal.processors.rest.GridRestProcessor - Runtime error caught during
> grid runnable execution: GridWorker [name=rest-proc-worker,
> igniteInstanceName=cdev_cluster, finished=false, hashCode=328132029,
> interrupted=false, runner=rest-#47%cdev_cluster%]
> java.lang.AssertionError: Unexpected command: CLUSTER_ACTIVE
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.authorize(
> GridRestProcessor.java:817)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.
> handleRequest(GridRestProcessor.java:250)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(
> GridRestProcessor.java:91)
> at
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(
> GridRestProcessor.java:157)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> As I examine the source code of GridRestProcessor at line 817
>
> /**
>  * @param req REST request.
>  * @param sCtx 

Cannot activate Ignite with custom security plugin

2017-10-24 Thread calebs
Version: Ignite 2.2.

Partial Ignite Config:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <property name="persistentStoreConfiguration">
        <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration">
            <property name="persistentStorePath" value="/tmp/ignite/work"/>
        </bean>
    </property>
    ...
</bean>

A jar that contains our custom security plugin for the security named
ACSPluginProvider & ACSSecurityProcessor is placed in $IGNITE_HOME/libs
folder.

Run ignite.sh to start the single data node; you can see that the
ACSSecurityProcessor.start method is called:

10-23 20:46:16.567 [main ] INFO 
apache.ignite.internal.IgniteKernal%cdev_cluster - Configured caches [in
'sysMemPlc' memoryPolicy: ['ignite-sys-cache']]
10-23 20:46:16.601 [main ] INFO 
apache.ignite.internal.IgniteKernal%cdev_cluster - 3-rd party licenses can
be found at: /opt/ignite/libs/licenses
10-23 20:46:16.663 [main ] INFO 
internal.processors.plugin.IgnitePluginProcessor - Configured plugins:
10-23 20:46:16.664 [main ] INFO 
internal.processors.plugin.IgnitePluginProcessor -   ^-- ACSPluginProvider
1.0.0
10-23 20:46:16.664 [main ] INFO 
internal.processors.plugin.IgnitePluginProcessor -   ^-- MaxPoint
10-23 20:46:16.664 [main ] INFO 
internal.processors.plugin.IgnitePluginProcessor -
10-23 20:46:16.673 [main ] INFO  platform.auth.ignite.ACSSecurityProcessor -
start
10-23 20:46:16.726 [main ] INFO  spi.communication.tcp.TcpCommunicationSpi -
Successfully bound communication NIO server to TCP port [port=47100,
locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]

Then run control.sh to activate

/opt/ignite$ control.sh --port 11211 --activate
Oct 23, 2017 8:48:29 PM
org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection

INFO: Client TCP connection established: /127.0.0.1:11211
Oct 23, 2017 8:48:30 PM
org.apache.ignite.internal.client.impl.GridClientImpl 
INFO: Client started [id=d2e2b816-61e3-47ff-9d88-ae4c8b3eb2ae, protocol=TCP]

Then I see that ACSSecurityProcessor.authenticate is called, but it is
followed by an 'Unexpected command: CLUSTER_ACTIVE' exception.

10-23 20:48:29.688 [rest-#46%cdev_cluster%] INFO 
platform.auth.ignite.ACSSecurityProcessor - authenticate:
id=d2e2b816-61e3-47ff-9d88-ae4c8b3eb2ae, login=null
10-23 20:48:30.051 [rest-#47%cdev_cluster%] ERROR
internal.processors.rest.GridRestProcessor - Client request execution failed
with error.
java.lang.AssertionError: Unexpected command: CLUSTER_ACTIVE
at
org.apache.ignite.internal.processors.rest.GridRestProcessor.authorize(GridRestProcessor.java:817)
at
org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:250)
at
org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:91)
at
org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:157)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
10-23 20:48:30.056 [rest-#47%cdev_cluster%] ERROR
internal.processors.rest.GridRestProcessor - Runtime error caught during
grid runnable execution: GridWorker [name=rest-proc-worker,
igniteInstanceName=cdev_cluster, finished=false, hashCode=328132029,
interrupted=false, runner=rest-#47%cdev_cluster%]
java.lang.AssertionError: Unexpected command: CLUSTER_ACTIVE
at
org.apache.ignite.internal.processors.rest.GridRestProcessor.authorize(GridRestProcessor.java:817)
at
org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:250)
at
org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:91)
at
org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:157)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Examining the source code of GridRestProcessor at line 817:

/**
 * @param req REST request.
 * @param sCtx Security context.
 * @throws SecurityException If authorization failed.
 */
private void authorize(GridRestRequest req, SecurityContext sCtx) throws
    SecurityException {
    SecurityPermission perm = null;
    String name = null;

    switch (req.command()) {
        case CACHE_GET:
        case CACHE_CONTAINS_KEY:
        case CACHE_CONTAINS_KEYS:

        case NAME:
        case LOG:
            break;

        default:
            throw new AssertionError("Unexpected command: " +
                req.command());   <- line 817
    }

    if (perm != null)
        ctx.security().authorize(name, perm, sCtx);
}

So, why command 

Re: Failed to query ignite

2017-10-24 Thread Andrey Mashenkov
Hi,

Looks like the stack trace is incomplete; it doesn't contain the reason
why JdbcResponse has a 'null' res field.

Please check that the connection string is correct (by default, port 10800
should be used) and share the full logs.
Also, it is possible you are closing the connection before the results are
retrieved from the ResultSet.
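The ordering issue Andrey points at can be illustrated without a real JDBC driver. The classes below are stand-ins (not java.sql) that mimic a result set whose rows become unreadable once its connection is closed:

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderSketch {
    // Stand-in for a connection; closing it invalidates open results.
    static class FakeConnection {
        boolean open = true;
        FakeResultSet query() { return new FakeResultSet(this); }
        void close() { open = false; }
    }

    // Stand-in for a result set backed by the connection.
    static class FakeResultSet {
        private final FakeConnection conn;
        private int row = 0;
        FakeResultSet(FakeConnection conn) { this.conn = conn; }
        boolean next() {
            if (!conn.open)
                throw new IllegalStateException("connection closed before results were read");
            return row++ < 3; // pretend there are 3 rows
        }
    }

    public static void main(String[] args) {
        // Correct order: drain the result set, THEN close the connection.
        FakeConnection conn = new FakeConnection();
        FakeResultSet rs = conn.query();
        List<Integer> rows = new ArrayList<>();
        while (rs.next())
            rows.add(rows.size());
        conn.close();
        System.out.println("read " + rows.size() + " rows");

        // Buggy order (as in the code in the question): close, then read.
        FakeConnection conn2 = new FakeConnection();
        FakeResultSet rs2 = conn2.query();
        conn2.close();
        try {
            rs2.next();
        }
        catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With real JDBC the same rule applies: iterate the ResultSet fully (or copy what you need) before calling `conn.close()`, ideally with try-with-resources so the close happens after the read block.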

On Tue, Oct 24, 2017 at 3:02 PM, iostream  wrote:

> Hi,
>
> I have created a table in my ignite cluster (v2.1) using the following DDL
> -
>
> CREATE TABLE test
> (
>   id LONG,
>   name   VARCHAR,
>   PRIMARY KEY (id)
>
> )
> WITH "backups=1,affinityKey=id";
>
> I am trying to query the table using IgniteJdbcThinDriver. The code to
> query is as follows -
>
> String sql = "select * from test where id = ?";
> List params = new ArrayList();
> params.add(1L);
> ResultSet rs = null;
> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
> Connection conn =
> DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
> PreparedStatement stmt = conn.prepareStatement(sql);
> if (null != params) {
> int i = 1;
> for (Object param : params) {
> stmt.setObject(i, param);
> i++;
> }
> }
> rs = stmt.executeQuery();
> conn.close();
>
> I am getting the following error -
>
> java.sql.SQLException: Failed to query Ignite.
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute0(JdbcThinStatement.java:123)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinPreparedStatement.
> executeWithArguments(JdbcThinPreparedStatement.java:221)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinPreparedStatement.
> executeQuery(JdbcThinPreparedStatement.java:68)
> at
> com.walmart.ecommerce.fulfillment.node.commons.
> manager.dlr.work.Tester.main(Tester.java:34)
> Caused by: class org.apache.ignite.IgniteCheckedException: Error server
> response: [req=JdbcQueryExecuteRequest [schemaName=null, pageSize=1024,
> maxRows=0, sqlQry=select * from test where id = ?, args=[1]],
> resp=JdbcResponse [res=null, status=1, err=javax.cache.CacheException:
> class
> org.apache.ignite.IgniteCheckedException: null]]
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.
> sendRequest(JdbcThinTcpIo.java:253)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.
> queryExecute(JdbcThinTcpIo.java:227)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute0(JdbcThinStatement.java:109)
> ... 3 more
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: ODBC Ignite driver error in MS Excel: ODBC version is not supported

2017-10-24 Thread Igor Sapego
OK, it seems the first message just was not shown to you by the
UI. The message is "Failed to establish connection with the host".

Please make sure that the Ignite node is up and reachable with the
DSN you are specifying.

Best Regards,
Igor

On Tue, Oct 24, 2017 at 3:18 PM, vkondraschenko <
vkondrasche...@griddynamics.com> wrote:

> Igor, I have captured logs using the 'tracing feature'. I enabled tracing
> before the experiment and stopped it right after. Please find the SQL.LOG
> file attached.
>
> SQL.LOG  >
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: OutOfMemoryError

2017-10-24 Thread Andrey Mashenkov
Hi Shuvendu,

Looks like the JVM can't allocate a contiguous memory buffer to serialize the
cache entry into a byte array.
Please check that the node has enough heap memory for this.



On Tue, Oct 24, 2017 at 11:37 AM, shuvendu <
shuvendu@travelcentrictechnology.com> wrote:

> Hi,
>
> While pushing a huge array list to cache we are getting the following
> error.
>
> Java exception occurred [class=java.lang.OutOfMemoryError, message=Java
> heap
> space]|Apache.Ignite.Core.Common.IgniteException: Java exception occurred
> [class=java.lang.OutOfMemoryError, message=Java heap space] --->
> Apache.Ignite.Core.Common.JavaException: java.lang.OutOfMemoryError: Java
> heap space
> at
> org.apache.ignite.internal.binary.streams.BinaryMemoryAllocatorChunk.
> reallocate(BinaryMemoryAllocatorChunk.java:69)
> at
> org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.
> ensureCapacity(BinaryHeapOutputStream.java:65)
> at
> org.apache.ignite.internal.binary.streams.BinaryAbstractOutputStream.
> unsafeEnsure(BinaryAbstractOutputStream.java:253)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteBinaryObject(
> BinaryWriterExImpl.java:937)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.
> write(BinaryClassDescriptor.java:729)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.
> marshal0(BinaryWriterExImpl.java:206)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.
> marshal(BinaryWriterExImpl.java:147)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.
> marshal(BinaryWriterExImpl.java:134)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteObject(
> BinaryWriterExImpl.java:496)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteCollection(
> BinaryWriterExImpl.java:764)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.
> write(BinaryClassDescriptor.java:694)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.
> marshal0(BinaryWriterExImpl.java:206)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.
> marshal(BinaryWriterExImpl.java:147)
> at
> org.apache.ignite.internal.binary.BinaryWriterExImpl.
> marshal(BinaryWriterExImpl.java:134)
> at
> org.apache.ignite.internal.binary.GridBinaryMarshaller.
> marshal(GridBinaryMarshaller.java:248)
> at
> org.apache.ignite.internal.processors.cache.binary.
> CacheObjectBinaryProcessorImpl.marshal(CacheObjectBinaryProcessorImpl
> .java:721)
> at
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.
> prepareMarshal(CacheObjectImpl.java:116)
> at
> org.apache.ignite.internal.processors.cache.GridCacheMessage.
> prepareMarshalCacheObject(GridCacheMessage.java:528)
> at
> org.apache.ignite.internal.processors.cache.GridCacheMessage.
> prepareMarshalCacheObjects(GridCacheMessage.java:518)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.
> GridNearAtomicFullUpdateRequest.prepareMarshal(
> GridNearAtomicFullUpdateRequest.java:383)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onSend(
> GridCacheIoManager.java:1095)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(
> GridCacheIoManager.java:1129)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(
> GridCacheIoManager.java:1180)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.
> GridNearAtomicAbstractUpdateFuture.sendSingleRequest(
> GridNearAtomicAbstractUpdateFuture.java:311)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.
> GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFutu
> re.java:481)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.
> GridNearAtomicSingleUpdateFuture.mapOnTopology(
> GridNearAtomicSingleUpdateFuture.java:441)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.
> GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFu
> ture.java:248)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1167)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:656)
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(
> GridCacheAdapter.java:2334)
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(
> GridCacheAdapter.java:2311)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(
> IgniteCacheProxy.java:1502)
>
> Thanks
>
> Shuvendu
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: TcpDiscoveryS3IpFinder AmazonS3Exception: Slow Down

2017-10-24 Thread Dave Harvey
Opened support tickets with GridGain and AWS.  The former suggested this,
which helped:

[XML configuration snippet not preserved by the archive]

The throttling is per S3 bucket, and S3 discovery requires a private bucket
(or you get lots of errors). AWS confirmed that the load was light and that
the errors were always on the first bucket enumeration.   They did suggest
moving from version 1.11.75 of the AWS libs to 1.11.219, but fundamentally
they delayed responding long enough for the symptoms to go away on their own,
without explanation.   I can no longer reproduce the problem, without having
changed anything.





Re: heap dump on ignite servers

2017-10-24 Thread Andrey Mashenkov
Hi Sumit,

Sorry for the late answer.

Yes, possibly there is a memory leak in some cases on ignite-1.7.
Try switching to a newer version.

Please ignore this if it is no longer relevant.

-- 
Best regards,
Andrey V. Mashenkov


Re: Question Ignite Memory Consumption / Object size (Apache Ignite .NET)

2017-10-24 Thread Evgenii Zhuravlev
Hi Mario,

You have 7 indexed fields; I think that could be the reason for the high
memory consumption. As for estimating memory size, I would recommend loading
variously sized data sets several times and measuring the memory consumed by
each. That will help you understand the approximate memory size needed. These
metrics will help you with this: https://apacheignite.readme.io/docs/memory-metrics

Regards,
Evgenii

2017-10-20 10:35 GMT+03:00 Elmers, Mario (M) :

> Hi Pavel,
>
>
>
> After doing some research, with QuerySQLField and Indexes, also with the
> heap size,
>
> it seems that with each QuerySQLField the amount of memory needed is
> doubled by the size of the
>
> field.
>
>
>
> All test are done by 3 Windows 2012R2 Nodes with Apache.Ignite 2.2 heap
> size 1GB per java switch
>
> 7.5 GB Data ASCII Textfiles
>
> 64 million rows of data
>
> Key is a guid data type
>
>
>
> So my assumption was that I need a little more than 21GB:
>
> 7.5 GB of ASCII data                 -> 15 GB of UTF data
> 64 million Guid keys                 ->  1 GB
> Cache overhead, 3 * 300 MB           ->  1.2 GB
> Statically assigned heap, 1 GB/node  ->  3 GB
> 
> Sum                                  -> 20.2 GB
>
>
>
> test 1
>
> data load with No QueryEntities defined. -> 21.4 GB used memory
>
>
>
> test 2
>
> data load with QueryEntities defined but no [QuerySQLField] attributes. ->
> 22.2 GB used memory
>
>
>
> test 3
>
> data load with QueryEntities defined and [QuerySQLField] attributes. ->
> 38,7 GB used memory
>
>
>
> After running test 3 it seems that when I use QuerySQLField then the
> memory needed is doubled
>
> by my estimation. The difference between test 2 and test 3 is 16.5 GB of
> memory needed.
>
>
>
> src:
>
>
>
> DataItem this is the only class which is used. It’s a simple class for
> only getting the data into the db.
>
>
>
>
>
> public class DataItem : IBinarizable
>
> {
>
> [QuerySqlField(IsIndexed = true)]
>
> public DateTime DateTime;
>
> [QuerySqlField]
>
> public short FracSec;
>
> [QuerySqlField(IsIndexed = true)]
>
> public string EventType = "";
>
> [QuerySqlField(IsIndexed = true)]
>
> public string Category = "";
>
> [QuerySqlField(IsIndexed = true)]
>
> public string Area = "";
>
> [QuerySqlField]
>
> public string Node = "";
>
> [QuerySqlField]
>
> public string Unit = "";
>
> [QuerySqlField(IsIndexed = true)]
>
> public string Module = "";
>
> [QuerySqlField]
>
> public string Module_Description = "";
>
> [QuerySqlField]
>
> public string Attribute = "";
>
> [QuerySqlField(IsIndexed = true)]
>
> public string State = "";
>
> [QuerySqlField(IsIndexed = true)]
>
> public string Level = "";
>
> [QuerySqlField]
>
> public string Desc1 = "";
>
> [QuerySqlField]
>
> public string Desc2 = "";
>
> [QuerySqlField]
>
> public string Desc3 = "";
>
> [QuerySqlField]
>
> public string Desc4 = "";
>
>
>
>
>
> public void ReadBinary(IBinaryReader reader)
>
> {
>
> Area = reader.ReadString("Area");
>
> Attribute = reader.ReadString("Attribute");
>
> Category =reader.ReadString("Category");
>
> DateTime? tmp = reader.ReadTimestamp("DateTime");
>
> if (tmp.HasValue)
>
> DateTime = tmp.Value;
>
> Desc1 =reader.ReadString("Desc1");
>
> Desc2=reader.ReadString("Desc2");
>
> Desc3 =reader.ReadString("Desc3");
>
> Desc4=reader.ReadString("Desc4");
>
> EventType =reader.ReadString("EventType" );
>
> FracSec= reader.ReadShort("FracSec");
>
> Level=reader.ReadString("Level");
>
> Module=reader.ReadString("Module");
>
> Module_Description=reader.ReadString("Module_Description");
>
> Node=reader.ReadString("Node");
>
> State=reader.ReadString("State");
>
> Unit=reader.ReadString("Unit");
>
> }
>
>
>
> public void WriteBinary(IBinaryWriter writer)
>
> {
>
> writer.WriteString("Area", Area);
>
> writer.WriteString("Attribute", Attribute);
>
> writer.WriteString("Category", Category);
>
> writer.WriteTimestamp("DateTime", DateTime);
>
> writer.WriteString("Desc1", Desc1);
>
> writer.WriteString("Desc2", Desc2);
>
> writer.WriteString("Desc3", Desc3);
>
> writer.WriteString("Desc4", Desc4);
>
> writer.WriteString("EventType", EventType);
>
> writer.WriteShort("FracSec", FracSec);
>
> writer.WriteString("Level", Level);
>
> writer.WriteString("Module", 

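Mario's sizing estimate above can be checked with plain arithmetic; the component figures below are his stated assumptions, not measurements of Ignite internals (note that his "3 * 300 MB" overhead line is carried as the 1.2 GB figure he used in the sum):

```java
public class MemoryEstimate {
    public static void main(String[] args) {
        double utfData  = 7.5 * 2; // 7.5 GB of ASCII roughly doubles as UTF-16 text on heap
        double guidKeys = 1.0;     // 64 million Guid keys
        double overhead = 1.2;     // cache overhead, stated as 3 * 300 MB, carried as 1.2 GB
        double heap     = 3 * 1.0; // statically assigned heap, 1 GB per node, 3 nodes

        double total = utfData + guidKeys + overhead + heap;
        System.out.printf("estimated total: %.1f GB%n", total); // prints 20.2
    }
}
```

This matches test 1 and test 2 (21.4 GB and 22.2 GB observed) reasonably well; the jump to 38.7 GB in test 3 is the unexplained part attributed to the QuerySqlField indexes.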
Re: How to cancel IgniteRunnale on remote node?

2017-10-24 Thread Andrey Mashenkov
Hi James,

Yes, Ignite will not cancel a running job if it implements
ComputeJobMasterLeaveAware, so you can stop it gracefully.

You need to do nothing extra; a node always receives topology change events
(e.g. NODE_LEFT).



> Thanks a lot
> Can the remote IgniteRunnable be gracefully shut down if I implement the
> ComputeJobMasterLeaveAware interface in my Kafka consumer IgniteRunnable
> and close the Kafka streamer in onMasterNodeLeft()?
> And if the remote server needs to receive the master-node-left event, is
> there any configuration that must be added to IgniteConfiguration?
> Thanks
> James
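The graceful-shutdown pattern discussed here can be sketched without Ignite or Kafka. The ManagedRunnable class and its onMasterNodeLeft() method below are stand-ins for the user's job and for the ComputeJobMasterLeaveAware callback, for illustration only:

```java
import java.util.concurrent.CountDownLatch;

public class MasterLeaveSketch {
    // Stand-in for a long-running job (e.g. a Kafka consumer loop).
    static class ManagedRunnable implements Runnable {
        private volatile boolean stopped = false;
        private final CountDownLatch done = new CountDownLatch(1);

        public void run() {
            while (!stopped) {
                // poll Kafka / do work here
                Thread.yield();
            }
            // close streamers, consumers, etc. before exiting
            done.countDown();
        }

        // Analogue of ComputeJobMasterLeaveAware.onMasterNodeLeft():
        // flip a flag so the loop exits and cleans up on its own terms.
        void onMasterNodeLeft() { stopped = true; }

        void awaitDone() throws InterruptedException { done.await(); }
    }

    public static void main(String[] args) throws InterruptedException {
        ManagedRunnable job = new ManagedRunnable();
        Thread t = new Thread(job);
        t.start();

        job.onMasterNodeLeft(); // master left: request a graceful stop
        job.awaitDone();
        t.join();
        System.out.println("job stopped gracefully");
    }
}
```

The key points are the volatile stop flag (so the worker thread sees the change) and doing cleanup inside the loop's own thread rather than killing it from outside.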


-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite server send big data to client and make client crash

2017-10-24 Thread Mikhail
Hi Shawn,

Could you please share the logs? Are there any errors? I don't think the
client should crash with no message in the logs.

Thanks,
Mike.





Re: AWS Apache Ignite AMI startup.sh reports spurrious errors if options have blanks

2017-10-24 Thread Mikhail
Hi Dave,

I think you need to find the author of this script; in any case, I didn't
find it in our repository.

Please use the following doc to run Ignite in AWS:
https://apacheignite.readme.io/docs/aws-deployment

Thanks,
Mike.





Re: Inserting data into Ignite got stuck when memory is full with persistent store enabled.

2017-10-24 Thread Dmitry Pavlov
Hi Ray

Regarding the question about eviction:
yes, eviction starts only when the memory region is full.

When eviction starts, a warning message is logged. Since we did not
find this message, we can assume that increasing the region will not speed up
loading for now.

Was the problem solved? Or is it still reproducible?

Sincerely,
Pavlov Dmitry


пт, 20 окт. 2017 г. в 14:26, Dmitry Pavlov :

> Hi Ray,
>
> I checked the dumps and from them it is clear that the client node can not
> provide more data load, since the cluster is already busy. But there is no
> activity on the server node.
>
> I suppose that the problem could be located on some other server node of
> the three remaining. The logs you sent earlier also do not contain obvious
> errors.
>
> Could you collect and send the logs and thread dumps from all server
> nodes?
>
>
>
> Sincerely,
>
> Dmitriy Pavlov
>
> чт, 19 окт. 2017 г. в 16:25, Ray :
>
>> Hi Dmitriy,
>>
>> Thanks for the reply.
>>
>> I know the eviction is automatic, but does eviction happen only when the
>> memory is full?
>> From the log, I didn't see any "Page evictions started, this will affect
>> storage performance" message.
>>
>> So my guess is that memory is not fully used up and no eviction happened.
>>
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite is failing while starting Web Application

2017-10-24 Thread Mikhail
Hi Manipvl,

I think we can always recover after a checked exception.

However, your question is too general; could you please explain where you are
catching the Ignite checked exception?

Thanks,
Mike.







Re: Ignite node not stopping after segmentation

2017-10-24 Thread Mikhail
Hi Biren,

Ignite has a good doc for JVM tuning:
https://apacheignite.readme.io/docs/jvm-and-system-tuning

I think G1 can help to avoid long pauses. You can also increase the failure
detection timeout:
https://apacheignite.readme.io/docs/cluster-config#failure-detection-timeout
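A minimal configuration sketch in Spring XML (the property name matches IgniteConfiguration; the 30-second value is an arbitrary example, not a recommendation):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Raise the failure detection timeout so long GC pauses are not
         mistaken for node failure (the default is 10 seconds). -->
    <property name="failureDetectionTimeout" value="30000"/>
</bean>
```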

Thanks,
Mike.






Re: Grid freezing

2017-10-24 Thread Evgenii Zhuravlev
Looks like a possible deadlock. Please add a timeout to your transactions and
check them with deadlock detection, as described here:
https://apacheignite.readme.io/docs/transactions#section-deadlock-detection

Evgenii

2017-10-23 21:16 GMT+03:00 smurphy :

> IgnitePortionDequeuer.java
>  t1317/IgnitePortionDequeuer.java>
>
> top.visor
> 
>
> Hi Evgenii,
>
> See the attached top command and java file..
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Re: Where can we get the partition assignment of a Cache after the cluster changed?

2017-10-24 Thread slava.koptilin
Hi Aaron,

I just started a single Ignite node as follows:
Ignite ignite =
Ignition.start("examples/config/example-ignite.xml");

IgnitePredicate rebalanceEventLsnr = evt -> {
System.out.println("Received event [evt=" + evt.name() + ",
evt=" + evt.toString());
return true; // Continue listening.
};
ignite.events().localListen(rebalanceEventLsnr,
EventType.EVT_CACHE_REBALANCE_STOPPED,
EventType.EVT_CACHE_REBALANCE_STARTED);

Thread.sleep(5000);
IgniteCache cache =
ignite.getOrCreateCache(createCacheConfiguration("CacheA"));
IgniteCache cache2 =
ignite.getOrCreateCache(createCacheConfiguration("CacheB"));

and got the following output:
[15:22:39,460][INFO ][exchange-worker-#31%null%][GridCacheProcessor] Started
cache [name=CacheA, mode=PARTITIONED]
Received event [evt=CACHE_REBALANCE_STARTED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STOPPED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
[15:22:39,478][INFO
][exchange-worker-#31%null%][GridCachePartitionExchangeManager] Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=1,
minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
node=1ee1a908-2424-46b0-911b-0a2c70ad68e2]
[15:22:39,480][INFO ][exchange-worker-#31%null%][GridCacheProcessor] Started
cache [name=CacheB, mode=PARTITIONED]
Received event [evt=CACHE_REBALANCE_STARTED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STOPPED, evt=CacheRebalancingEvent
[cacheName=CacheA, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STARTED, evt=CacheRebalancingEvent
[cacheName=CacheB, part=-1, discoNode=TcpDiscoveryNode [...]]
Received event [evt=CACHE_REBALANCE_STOPPED, evt=CacheRebalancingEvent
[cacheName=CacheB, part=-1, discoNode=TcpDiscoveryNode [...]]
[15:22:39,485][INFO
][exchange-worker-#31%null%][GridCachePartitionExchangeManager] Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=1,
minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
node=1ee1a908-2424-46b0-911b-0a2c70ad68e2]

So, I see messages for both caches.





Re: Ignite YARN deployment mode issues

2017-10-24 Thread ilya.kasnacheev
Hello Ray,

Yes, please try it and tell if it helps.

Regards,





Re: ODBC Ignite driver error in MS Excel: ODBC version is not supported

2017-10-24 Thread vkondraschenko
Igor, I have captured logs using the 'tracing feature'. I enabled tracing
before the experiment and stopped it right after. Please find the SQL.LOG
file attached.

SQL.LOG   





Failed to query ignite

2017-10-24 Thread iostream
Hi,

I have created a table in my ignite cluster (v2.1) using the following DDL
-

CREATE TABLE test 
(
  id LONG,
  name   VARCHAR,
  PRIMARY KEY (id)
  
)
WITH "backups=1,affinityKey=id";

I am trying to query the table using IgniteJdbcThinDriver. The code to
query is as follows -

String sql = "select * from test where id = ?";
List params = new ArrayList();
params.add(1L);
ResultSet rs = null;
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
Connection conn =
DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
PreparedStatement stmt = conn.prepareStatement(sql);
if (null != params) {
int i = 1;
for (Object param : params) {
stmt.setObject(i, param);
i++;
}
}
rs = stmt.executeQuery();
conn.close();

I am getting the following error -

java.sql.SQLException: Failed to query Ignite.
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:123)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinPreparedStatement.executeWithArguments(JdbcThinPreparedStatement.java:221)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinPreparedStatement.executeQuery(JdbcThinPreparedStatement.java:68)
at
com.walmart.ecommerce.fulfillment.node.commons.manager.dlr.work.Tester.main(Tester.java:34)
Caused by: class org.apache.ignite.IgniteCheckedException: Error server
response: [req=JdbcQueryExecuteRequest [schemaName=null, pageSize=1024,
maxRows=0, sqlQry=select * from test where id = ?, args=[1]],
resp=JdbcResponse [res=null, status=1, err=javax.cache.CacheException: class
org.apache.ignite.IgniteCheckedException: null]]
at
org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.sendRequest(JdbcThinTcpIo.java:253)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.queryExecute(JdbcThinTcpIo.java:227)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:109)
... 3 more





Re: Ignite Spark - how to override and not append

2017-10-24 Thread Николай Ижиков
Hello, Yair.

According to the documentation, `saveValues` works as follows: it saves values
from the given RDD into Ignite, and *a unique key will be generated for each
value of the given RDD.*

I think you can use `savePairs` with the `overwrite` flag instead.



2017-10-24 11:44 GMT+03:00 Yair Ogen :

> I am using IgniteRdd to store and share RDD's between Spark jobs.
>
> I noticed that if I run the Spark job over and over again, it seems that
> igniteContext.saveValues appends the values and doesn't behave like 'put'
> in the direct cache API.
>
> How can I work with IgniteRDD with override and not append?
>
> Regards,
>
> Yair
>



-- 
Nikolay Izhikov
nizhikov@gmail.com


Re: How the Ignite Service performance? When we test the CPU soon be occupied 100%

2017-10-24 Thread Andrey Mashenkov
Hi aaron,

Can you record and share a JFR profile [1] if this is still relevant?

[1]
https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-flightrecorder-settings


-- 
Best regards,
Andrey V. Mashenkov


Re: using dataStreamer inside Ignite service grid execute

2017-10-24 Thread slava.koptilin
Hello,

> is there a way to pass the same streamer to execute function? 
I don't think so. The obvious solution is to create a data streamer inside
the service implementation and use it for populating data / executing the
stream visitor logic.

Thanks.





Re: Re: Where can we get the partition assignment of a Cache after the cluster changed?

2017-10-24 Thread aa...@tophold.com
Thanks Slava, yes, after adding that configuration I can finally receive the
notifications.

But it seems that every cache's rebalance events are delivered to me with the
cache name set to the cache I monitor.

For example, I only care about CacheA's events:

final CacheRebalancingEvent cre = (CacheRebalancingEvent) event;
if ("CacheA".equals(cre.cacheName())) {

}

But now, even when CacheB joins the cluster, I still get an event with
'CacheA' as #cacheName.

If I compare the local node's primary partitions for CacheA, they actually
have not changed.


Regards
Aaron


aa...@tophold.com
 
From: slava.koptilin
Date: 2017-10-24 18:39
To: user
Subject: Re: Re: Where can we get the partition assignment of a Cache after the 
cluster changed?
Hi Aaron,
 
Please check that CACHE_REBALANCE events are registered in
IgniteConfiguration via xml or java code (by default, all these events are
disabled).
 











 
Could you try the following code?
IgnitePredicate rebalanceEventLsnr = evt -> {
System.out.println("Received event [evt=" + evt.name() + ",
evt=" + evt.toString());
 
if (evt instanceof CacheRebalancingEvent) {
CacheRebalancingEvent rebalancingEvt =
(CacheRebalancingEvent) evt;
 
if (rebalancingEvt.cacheName().equals(IG_CACHE_NAME)) {
IgniteCache c = ignite.cache(IG_CACHE_NAME);
ClusterNode localNode = ignite.cluster().localNode();
 
int[] backups =
ignite.affinity(c.getName()).backupPartitions(localNode);
int[] primaries =
ignite.affinity(c.getName()).primaryPartitions(localNode);
 
System.out.println("Local node : " + localNode.id());
System.out.println("\t primary: " +
Arrays.toString(primaries));
System.out.println("\t backups: " +
Arrays.toString(backups));
   
System.out.println("-");
}
}
 
return true; // Continue listening.
};
ignite.events().localListen(rebalanceEventLsnr,
EventType.EVT_CACHE_REBALANCE_STOPPED);
 
 
 
 


Re: ComputeGrid API in C++

2017-10-24 Thread Igor Sapego
Why have you decided to modify the implementation? Are there
any special requirements that make this necessary? Why
can't you register all the necessary classes in user code?

Best Regards,
Igor

On Mon, Oct 23, 2017 at 5:04 PM, asingh  wrote:

> That was it!
> Thank you so much, Igor.
>
> I just had to add a call to GetBinding() and RegisterComputeFunc() in
> platforms/cpp/ignite/src/ignite.cpp main() method. I'm not used to
> modifying
> an implementation in order to run an example ;)
>
> Although, this means that every time I change the closure, I have to
> rebuild
> ignite as well. Not a big problem but if anyone has a more elegant
> solution,
> I am all ears!
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Near Cache Topology change causes NearCache to always miss.

2017-10-24 Thread slava.koptilin
Hi Timay,

I will try to reproduce the issue on my side.

Thanks!





Re: ODBC Ignite driver error in MS Excel: ODBC version is not supported

2017-10-24 Thread Igor Sapego
Hi,

Can you provide a log file?
You can find how to enable ODBC logging here - [1].

[1] -
https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.datacon.doc/t_datacon_enabling_odbc_logging_win.html

Best Regards,
Igor

On Tue, Oct 24, 2017 at 1:29 PM, vkondraschenko <
vkondrasche...@griddynamics.com> wrote:

> I am trying to use the ODBC Ignite driver from MS Excel 2010 (importing
> data
> from the ODBC data source), but it fails to connect with the message "ODBC
> version is not supported".
>
> OS: Windows 10 64-bit
> Client: MS Excel 2010 64-bit
> ODBC driver:
> apache-ignite-fabric-2.2.0-bin\platforms\cpp\bin\odbc\
> ignite-odbc-amd64.msi
> Error message: ODBC version is not supported
>
> I see similar reported issues for other clients. Could you please clarify
> how to get around the problem?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Re: Where can we get the partition assignment of a Cache after the cluster changed?

2017-10-24 Thread slava.koptilin
Hi Aaron,

Please check that CACHE_REBALANCE events are registered in
IgniteConfiguration via xml or java code (by default, all these events are
disabled).













Could you try the following code?
IgnitePredicate rebalanceEventLsnr = evt -> {
System.out.println("Received event [evt=" + evt.name() + ",
evt=" + evt.toString());

if (evt instanceof CacheRebalancingEvent) {
CacheRebalancingEvent rebalancingEvt =
(CacheRebalancingEvent) evt;

if (rebalancingEvt.cacheName().equals(IG_CACHE_NAME)) {
IgniteCache c = ignite.cache(IG_CACHE_NAME);
ClusterNode localNode = ignite.cluster().localNode();

int[] backups =
ignite.affinity(c.getName()).backupPartitions(localNode);
int[] primaries =
ignite.affinity(c.getName()).primaryPartitions(localNode);

System.out.println("Local node : " + localNode.id());
System.out.println("\t primary: " +
Arrays.toString(primaries));
System.out.println("\t backups: " +
Arrays.toString(backups));
   
System.out.println("-");
}
}

return true; // Continue listening.
};
ignite.events().localListen(rebalanceEventLsnr,
EventType.EVT_CACHE_REBALANCE_STOPPED);






ODBC Ignite driver error in MS Excel: ODBC version is not supported

2017-10-24 Thread vkondraschenko
I am trying to use the ODBC Ignite driver from MS Excel 2010 (importing data
from the ODBC data source), but it fails to connect with the message "ODBC
version is not supported". 

OS: Windows 10 64-bit 
Client: MS Excel 2010 64-bit 
ODBC driver:
apache-ignite-fabric-2.2.0-bin\platforms\cpp\bin\odbc\ignite-odbc-amd64.msi 
Error message: ODBC version is not supported 

I see similar reported issues for other clients. Could you please clarify
how to get around the problem?





Ignite Spark - how to override and not append

2017-10-24 Thread Yair Ogen
I am using IgniteRdd to store and share RDD's between Spark jobs.

I noticed that if I run the Spark job over and over again, it seems that
igniteContext.saveValues appends the values and doesn't behave like 'put'
in the direct cache API.

How can I work with IgniteRDD with override and not append?

Regards,

Yair


OutOfMemoryError

2017-10-24 Thread shuvendu
Hi,

While pushing a huge array list to cache we are getting the following error.

Java exception occurred [class=java.lang.OutOfMemoryError, message=Java heap space]
Apache.Ignite.Core.Common.IgniteException: Java exception occurred [class=java.lang.OutOfMemoryError, message=Java heap space] ---> Apache.Ignite.Core.Common.JavaException: java.lang.OutOfMemoryError: Java heap space
    at org.apache.ignite.internal.binary.streams.BinaryMemoryAllocatorChunk.reallocate(BinaryMemoryAllocatorChunk.java:69)
    at org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.ensureCapacity(BinaryHeapOutputStream.java:65)
    at org.apache.ignite.internal.binary.streams.BinaryAbstractOutputStream.unsafeEnsure(BinaryAbstractOutputStream.java:253)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteBinaryObject(BinaryWriterExImpl.java:937)
    at org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:729)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:206)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteObject(BinaryWriterExImpl.java:496)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteCollection(BinaryWriterExImpl.java:764)
    at org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:694)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:206)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
    at org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:248)
    at org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshal(CacheObjectBinaryProcessorImpl.java:721)
    at org.apache.ignite.internal.processors.cache.CacheObjectImpl.prepareMarshal(CacheObjectImpl.java:116)
    at org.apache.ignite.internal.processors.cache.GridCacheMessage.prepareMarshalCacheObject(GridCacheMessage.java:528)
    at org.apache.ignite.internal.processors.cache.GridCacheMessage.prepareMarshalCacheObjects(GridCacheMessage.java:518)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicFullUpdateRequest.prepareMarshal(GridNearAtomicFullUpdateRequest.java:383)
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onSend(GridCacheIoManager.java:1095)
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1129)
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1180)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:311)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:481)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:441)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1167)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:656)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2334)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2311)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1502)
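(Editor's note: the trace shows a plain Java heap exhaustion while marshalling a cache value during a put — `doWriteCollection` in the stack suggests a large collection inside a single entry. A usual first step is to raise the Ignite JVM heap; a hypothetical sketch for an ignite.sh start-up, where the -Xms/-Xmx values are assumptions and not taken from this thread:)

```shell
# Hypothetical heap settings for the Ignite JVM; tune to your workload.
# HeapDumpOnOutOfMemoryError makes the next OOM diagnosable offline.
export JVM_OPTS="-Xms2g -Xmx8g -XX:+HeapDumpOnOutOfMemoryError"
bin/ignite.sh config/default-config.xml
```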

Thanks 

Shuvendu



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Fwd: Hadoop Accelerator doesn't work when use SnappyCodec compression

2017-10-24 Thread Evgenii Zhuravlev
-- Forwarded message --
From: Evgenii Zhuravlev 
Date: 2017-10-20 12:31 GMT+03:00
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec
compression
To: C Reid 


A few days ago I ran Hive, Hadoop and Ignite with snappy compression
without any problem. That was Hadoop 2.7.1, but I think your version should
work too. The Apache Ignite codebase contains tests for the snappy codec.
One of them, with small changes, is attached - please run it in your
environment and show us the results.

Thanks,
Evgenii

2017-10-20 11:30 GMT+03:00 C Reid :

> Yeah, I tried all the methods I found on Google, and the results were the
> same.
>
> Also, since it's just an "export LD_LIBRARY_PATH=..." line in
> 'ignite.sh', I'm not sure whether it actually takes effect on grid startup.
>
> We are planning to run 1000+ grids in a cluster, and the production
> env has plenty of .snappy files, so I'm struggling now...
> Btw, my hadoop version is 2.6.0, does it matter?
>
> Thanks for your patience.
> --
> *From:* Evgenii Zhuravlev 
> *Sent:* 19 October 2017 17:07
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
> compression
>
> Could you also try to set LD_LIBRARY_PATH variable with path to the
> folder with native libraries?
>
> 2017-10-17 17:56 GMT+03:00 C Reid :
>
>> I just tried, got the same:
>> "Unable to load native-hadoop library for your platform... using
>> builtin-java classes where applicable"
>> "java.lang.UnsatisfiedLinkError:
>> org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z"
>>
>> I also tried adding all the related native libraries under one of the JDK
>> folders where all the *.so files are located, but Ignite just couldn't
>> load them, which is strange.
>> --
>> *From:* Evgenii Zhuravlev 
>> *Sent:* 17 October 2017 21:25
>>
>> *To:* user@ignite.apache.org
>> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
>> compression
>>
>> Have you tried removing the ${HADOOP_HOME}/lib/native libraries from the
>> path and adding only the /usr/lib64/ folder?
>>
>> 2017-10-17 12:18 GMT+03:00 C Reid :
>>
>>> Tried, and did not work.
>>>
>>> --
>>> *From:* Evgenii Zhuravlev 
>>> *Sent:* 17 October 2017 16:41
>>> *To:* C Reid
>>> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
>>> compression
>>>
>>> I'd recommend adding /usr/lib64/ to JAVA_LIBRARY_PATH
>>>
>>> Evgenii
>>>
>>> 2017-10-17 11:29 GMT+03:00 C Reid :
>>>
 Yes, IgniteNode runs on the DataNode machine.

 [had...@hadoop-offline033.dx.momo.com ignite]$ echo $HADOOP_HOME
 /opt/hadoop-2.8.1-all
 [had...@hadoop-offline033.dx.momo.com ignite]$ echo $IGNITE_HOME
 /opt/apache-ignite-hadoop-2.2.0-bin

 and in ignite.sh
JVM_OPTS="${JVM_OPTS} -Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64/libsnappy.so.1:${HADOOP_HOME}/lib/native/libhadoop.so"

 But exception is thrown as mentioned.
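(Editor's note: -Djava.library.path expects a list of directories, not individual .so files, so the two .so entries in the setting above are effectively ignored by the JVM. A corrected sketch, hypothetical but keeping the same locations:)

```shell
# java.library.path entries must be directories; the JVM derives the
# file name itself (e.g. "snappy" -> libsnappy.so on Linux).
JVM_OPTS="${JVM_OPTS} -Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64"
```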
 --
 *From:* Evgenii Zhuravlev 
 *Sent:* 17 October 2017 15:44

 *To:* user@ignite.apache.org
 *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
 compression

 Do you run Ignite on the same machine as hadoop?

 I'd recommend you check these env variables:
 IGNITE_HOME, HADOOP_HOME and JAVA_LIBRARY_PATH. JAVA_LIBRARY_PATH
 should contain the path to the folder with the libsnappy files.
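(Editor's note: the JVM-side lookup can be inspected with a small stdlib-only snippet — the `LibPathCheck` class below is a hypothetical helper, not part of this thread. `System.mapLibraryName` shows the file name the JVM will search for, and `java.library.path` lists the directories it searches; an UnsatisfiedLinkError like the one above means that pair never matched.)

```java
// Minimal sketch of how System.loadLibrary resolves a native library:
// the JVM maps the logical name to a platform-specific file name, then
// scans each directory on java.library.path for that file.
public class LibPathCheck {
    public static void main(String[] args) {
        // e.g. "libsnappy.so" on Linux, "snappy.dll" on Windows
        System.out.println(System.mapLibraryName("snappy"));
        // Directories System.loadLibrary(...) will search, one per line
        String path = System.getProperty("java.library.path", "");
        for (String dir : path.split(java.io.File.pathSeparator)) {
            System.out.println(dir);
        }
    }
}
```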

 Evgenii

 2017-10-17 8:45 GMT+03:00 C Reid :

> Hi Evgenii,
>
> Checked, as shown:
>
> 17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Trying to load the
> custom-built native-hadoop library...
> 17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Loaded the
> native-hadoop library
> 17/10/17 13:43:12 WARN bzip2.Bzip2Factory: Failed to load/initialize
> native-bzip2 library system-native, will use pure-Java version
> 17/10/17 13:43:12 INFO zlib.ZlibFactory: Successfully loaded &
> initialized native-zlib library
> Native library checking:
> hadoop:  true /opt/hadoop-2.8.1-all/lib/native/libhadoop.so
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib64/libsnappy.so.1
> lz4: true revision:10301
> bzip2:   false
> openssl: true /usr/lib64/libcrypto.so
>
> --
> *From:* Evgenii Zhuravlev 
> *Sent:* 17 October 2017 13:34
> *To:* user@ignite.apache.org
> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
> compression
>
> Hi,
>
> Have you checked "hadoop checknative -a" ? What it shows for snappy?