Re: Ignite-cassandra module issue

2017-11-09 Thread Michael Cherkasov
Hi Dmitriy,

I created a ticket for this:
https://issues.apache.org/jira/browse/IGNITE-6853
It will be fixed in 2.4.

Thanks,
Mike.


Re: Ignite-cassandra module issue

2017-11-08 Thread Dmitriy Setrakyan
Hi Michael, do you have any update for the issue?


Re: Ignite-cassandra module issue

2017-11-02 Thread Michael Cherkasov
Hi Tobias,

Thank you for explaining how to reproduce it; I'll try your instructions. I spent
several days trying to reproduce the issue, but I thought the cause was simply too
high a load, and I didn't stop the client during testing.
I'll follow your instructions and try to fix the issue.

Thanks,
Mike.


Re: Ignite-cassandra module issue

2017-10-25 Thread Tobias Eriksson
Hi Andrey et al
I believe I now know what the problem is: the Cassandra session is refreshed, but a
prepared statement is created/used against the old session just before the refresh,
and using that old prepared statement with the new session does not work.

The way to reproduce it is:

1) Start an Ignite Server Node
2) Start a client which inserts a batch of 100 elements
3) End the client
4) Now the Ignite Server Node returns the Cassandra session to the pool
5) Wait 5+ minutes
6) Now the Ignite Server Node has done a clean-up of the “unused” Cassandra sessions
7) Start a client which inserts a batch of 100 elements
8) Boom! The exception starts to happen

The reason is:

1) Execute is called for a BATCH
2) The prepared statement is reused, since there is a cache of those
3) It is about to do session().execute(batch)
4) BUT the call to session() results in refreshing the session, and this is where the
prepared statements the old session knew about are cleaned up
5) Now it loops over the 100 elements with a NEW session but an OLD prepared statement

This is a bug; a schematic sketch of the sequence follows below.
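
For illustration only, here is a minimal schematic sketch in Java (against the DataStax
3.x driver types) of the sequence described above. It is NOT the actual ignite-cassandra
CassandraCacheStore code; the class and member names (SchematicCassandraStore,
preparedCache, writeBatch, session()) are made up to mirror steps 1)-5):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

/** Schematic sketch only -- not the real ignite-cassandra CassandraCacheStore. */
class SchematicCassandraStore {
    /** 2) prepared statements are cached across calls. */
    private final Map<String, PreparedStatement> preparedCache = new ConcurrentHashMap<>();

    /** Current session; a background clean-up may close and replace it after ~5 minutes of idling. */
    private volatile Session session;

    SchematicCassandraStore(Session initial) {
        this.session = initial;
    }

    void writeBatch(String cql, Iterable<Object[]> rows) {
        // 1)-2) the statement comes from the cache; it may have been prepared on the OLD session.
        PreparedStatement ps = preparedCache.computeIfAbsent(cql, q -> session.prepare(q));

        BatchStatement batch = new BatchStatement();
        for (Object[] row : rows)
            batch.add(ps.bind(row));

        // 3)-5) session() can refresh the session right here, so the batch executes on a NEW
        // session while ps was prepared on the OLD one -- the combination that produces
        // "InvalidQueryException: Tried to execute unknown prepared query".
        session().execute(batch);
    }

    private Session session() {
        // Hypothetical stand-in for the module's session accessor, which may
        // transparently recreate the session after the idle clean-up.
        return session;
    }
}

The essential point is that the prepared-statement cache and the session are refreshed
independently, so a batch can end up pairing an old statement with a new session.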

-Tobias


From: Andrey Mashenkov <andrey.mashen...@gmail.com>
Reply-To: "user@ignite.apache.org" <user@ignite.apache.org>
Date: Wednesday, 25 October 2017 at 14:12
To: "user@ignite.apache.org" <user@ignite.apache.org>
Subject: Re: Ignite-cassandra module issue

Hi Tobias,

What ignite version do you use? May be this was already fixed in latest one?
I see related fix inclueded in upcoming 2.3 version.

See IGNITE-5897 [1] issue. It is unobvious, but this fix session init\end 
logic, so session should be closed in proper way.

[1] https://issues.apache.org/jira/browse/IGNITE-5897


On Wed, Oct 25, 2017 at 11:13 AM, Tobias Eriksson 
<tobias.eriks...@qvantel.com<mailto:tobias.eriks...@qvantel.com>> wrote:
Hi
 Sorry did not include the context when I replied
 Has anyone been able to resolve this problem, cause I have it too on and
off
In fact it sometimes happens just like that, e.g. I have been running my
Ignite client and then stop it, and then it takes a while and run it again,
and all by a sudden this error shows up. An that is the first thing that
happens, and there is NOT a massive amount of load on Cassandra at that
time. But I have also seen it when I hammer Ignite/Cassandra with
updates/inserts.

This is a deal-breaker for me, I need to understand how to fix this, cause
having this in production is not an option.

-Tobias


Hi!
I'm using the cassandra as persistence store for my caches and have one
issue by handling a huge data (via IgniteDataStreamer from kafka).
Ignite Configuration:
final IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setIgniteInstanceName("test");
igniteConfiguration.setClientMode(true);
igniteConfiguration.setGridLogger(new Slf4jLogger());
igniteConfiguration.setMetricsLogFrequency(0);
igniteConfiguration.setDiscoverySpi(configureTcpDiscoverySpi());
final BinaryConfiguration binaryConfiguration = new BinaryConfiguration();
binaryConfiguration.setCompactFooter(false);
igniteConfiguration.setBinaryConfiguration(binaryConfiguration);
igniteConfiguration.setPeerClassLoadingEnabled(true);
final MemoryPolicyConfiguration memoryPolicyConfiguration = new
MemoryPolicyConfiguration();
memoryPolicyConfiguration.setName("3Gb_Region_Eviction");
memoryPolicyConfiguration.setInitialSize(1024L * 1024L * 1024L);
memoryPolicyConfiguration.setMaxSize(3072L * 1024L * 1024L);

memoryPolicyConfiguration.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
final MemoryConfiguration memoryConfiguration = new MemoryConfiguration();
memoryConfiguration.setMemoryPolicies(memoryPolicyConfiguration);
igniteConfiguration.setMemoryConfiguration(memoryConfiguration);

Cache configuration:
final CacheConfiguration<String, BinaryObject> cacheConfiguration = new
CacheConfiguration<>();
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setStoreKeepBinary(true);
cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
cacheConfiguration.setBackups(0);
cacheConfiguration.setStatisticsEnabled(false);
cacheConfiguration.setName("TestCache");

cacheConfiguration.setReadThrough(true);
cacheConfiguration.setWriteThrough(true);

cacheConfiguration.setWriteBehindEnabled(true);
cacheConfiguration.setWriteBehindFlushFrequency(1);
cacheConfiguration.setWriteBehindFlushSize(0);
cacheConfiguration.setWriteBehindFlushThreadCount(2);
cacheConfiguration.setWriteBehindBatchSize(1);


final CassandraCacheStoreFactory<String, BinaryObject>
cacheStoreFactory = new CassandraCacheStoreFactory<>();
 

Re: Ignite-cassandra module issue

2017-10-25 Thread Andrey Mashenkov
Hi Tobias,

What Ignite version do you use? Maybe this has already been fixed in the latest one?
I see a related fix included in the upcoming 2.3 version.

See the IGNITE-5897 [1] issue. It is not obvious, but this fix changes the session
init/end logic, so the session should be closed properly.

[1] https://issues.apache.org/jira/browse/IGNITE-5897
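
For context, a minimal hypothetical sketch of the kind of fix being discussed (this is
NOT the actual IGNITE-5897 patch; the RefreshAwareSessionHolder class and its members
are made up): whenever the session is recreated, the statements prepared on the old
session are dropped, so every query is prepared against the session that will execute it.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

/** Hypothetical sketch only -- not the ignite-cassandra implementation. */
class RefreshAwareSessionHolder {
    private final Cluster cluster;
    private final String keyspace;
    private final Map<String, PreparedStatement> preparedCache = new ConcurrentHashMap<>();
    private volatile Session session;

    RefreshAwareSessionHolder(Cluster cluster, String keyspace) {
        this.cluster = cluster;
        this.keyspace = keyspace;
        this.session = cluster.connect(keyspace);
    }

    /** Recreate the session and drop every statement prepared on the old one. */
    synchronized void refresh() {
        session.close();
        session = cluster.connect(keyspace);
        preparedCache.clear(); // old prepared statements must not outlive their session
    }

    /** Always prepare (or re-prepare) against the session that will run the query. */
    PreparedStatement prepare(String cql) {
        return preparedCache.computeIfAbsent(cql, q -> session.prepare(q));
    }

    Session session() {
        return session;
    }
}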



Re: Ignite-cassandra module issue

2017-10-25 Thread Tobias Eriksson
Hi,
Sorry, I did not include the context when I replied.
Has anyone been able to resolve this problem? I have it too, on and off.
In fact it sometimes happens just like that: I have been running my Ignite client,
then stop it; after a while I run it again, and all of a sudden this error shows up.
And that is the first thing that happens, and there is NOT a massive amount of load
on Cassandra at that time. But I have also seen it when I hammer Ignite/Cassandra
with updates/inserts.

This is a deal-breaker for me; I need to understand how to fix this, because having
this in production is not an option.

-Tobias


Hi!
I'm using Cassandra as a persistence store for my caches and have an issue when
handling a huge amount of data (via IgniteDataStreamer from Kafka).
Ignite configuration:
final IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setIgniteInstanceName("test");
igniteConfiguration.setClientMode(true);
igniteConfiguration.setGridLogger(new Slf4jLogger());
igniteConfiguration.setMetricsLogFrequency(0);
igniteConfiguration.setDiscoverySpi(configureTcpDiscoverySpi());
final BinaryConfiguration binaryConfiguration = new BinaryConfiguration();
binaryConfiguration.setCompactFooter(false);
igniteConfiguration.setBinaryConfiguration(binaryConfiguration);
igniteConfiguration.setPeerClassLoadingEnabled(true);
final MemoryPolicyConfiguration memoryPolicyConfiguration = new MemoryPolicyConfiguration();
memoryPolicyConfiguration.setName("3Gb_Region_Eviction");
memoryPolicyConfiguration.setInitialSize(1024L * 1024L * 1024L);
memoryPolicyConfiguration.setMaxSize(3072L * 1024L * 1024L);
memoryPolicyConfiguration.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
final MemoryConfiguration memoryConfiguration = new MemoryConfiguration();
memoryConfiguration.setMemoryPolicies(memoryPolicyConfiguration);
igniteConfiguration.setMemoryConfiguration(memoryConfiguration);

Cache configuration:
final CacheConfiguration<String, BinaryObject> cacheConfiguration = new CacheConfiguration<>();
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setStoreKeepBinary(true);
cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
cacheConfiguration.setBackups(0);
cacheConfiguration.setStatisticsEnabled(false);
cacheConfiguration.setName("TestCache");

cacheConfiguration.setReadThrough(true);
cacheConfiguration.setWriteThrough(true);

cacheConfiguration.setWriteBehindEnabled(true);
cacheConfiguration.setWriteBehindFlushFrequency(1);
cacheConfiguration.setWriteBehindFlushSize(0);
cacheConfiguration.setWriteBehindFlushThreadCount(2);
cacheConfiguration.setWriteBehindBatchSize(1);


final CassandraCacheStoreFactory<String, BinaryObject> cacheStoreFactory = new CassandraCacheStoreFactory<>();
final DataSource dataSource = new DataSource();
dataSource.setContactPoints(contactPoints);
dataSource.setReadConsistency("ONE");
dataSource.setWriteConsistency("ONE");
dataSource.setLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()));
cacheStoreFactory.setDataSource(dataSource);

// NOTE: the XML markup of this template was stripped by the mail archive; it is
// reconstructed here from the String.format arguments below, so the element names
// are an assumption.
final String CASSANDRA_PERSISTENCE =
    "<persistence keyspace=\"%s\" table=\"%s\">" +
    "<keyPersistence class=\"%s\" strategy=\"%s\"/>" +
    "<valuePersistence class=\"%s\" strategy=\"%s\"/>" +
    "</persistence>";
final KeyValuePersistenceSettings settings = new KeyValuePersistenceSettings(
    String.format(CASSANDRA_PERSISTENCE, "test", "test_table",
        "java.lang.String", "PRIMITIVE",
        "org.apache.ignite.binary.BinaryObject", "BLOB"));
cacheStoreFactory.setPersistenceSettings(settings);
cacheConfiguration.setCacheStoreFactory(cacheStoreFactory);

When the application has been working for some time (an hour or more, sometimes less),
I see these exceptions on the Ignite nodes:
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=4f43d78b, name=null, uptime=00:12:00:072]
^-- H/N/C [hosts=3, nodes=3, CPUs=96]
^-- CPU [cur=0%, avg=1.86%, GC=0%]
^-- PageMemory [pages=118064]
^-- Heap [used=4800MB, free=53.12%, comm=10240MB]
^-- Non heap [used=78MB, free=-1%, comm=80MB]
^-- Public thread pool [active=0, idle=32, qSize=0]
^-- System thread pool [active=0, idle=32, qSize=0]
^-- Outbound messages queue [size=0]
[15:28:28,626][INFO][grid-timeout-worker-#39%null%][IgniteKernal] FreeList [name=null, buckets=256, dataPages=102080, reusePages=0]
[15:29:02,317][WARNING][sys-#106%null%][CassandraCacheStore] Prepared statement cluster error detected, refreshing Cassandra session
com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query :

Re: Ignite-cassandra module issue

2017-10-24 Thread Tobias Eriksson
Hi,
Did you ever resolve this problem? I have it too, on and off.
In fact it sometimes happens just like that: I have been running my Ignite client,
then stop it; after a while I run it again, and all of a sudden this error shows up.
And that is the first thing that happens, and there is NOT a massive amount of load
on Cassandra at that time. But I have also seen it when I hammer Ignite/Cassandra
with updates/inserts.

This is a deal-breaker for me; I need to understand how to fix this, because having
this in production is not an option.

-Tobias






Re: Ignite-cassandra module issue

2017-06-21 Thread dkarachentsev
Hi,

Is it possible to provide a simple reproducer?
Answering your second question: yes, you can use BinaryObject as the persistence
class name.

Thanks!
-Dmitry.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cassandra-module-issue-tp13808p14025.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite-cassandra module issue

2017-06-15 Thread nash1k
s.java:6621)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:954)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

After this I have to restart the cluster and the application, or I get a
"transactions deadlock" message.

And I checked: the network is fine, and there is no OOME in Ignite or Cassandra
(the Cassandra logs don't say anything about it). I understand that this is an issue
with persistence into Cassandra, but I don't know how to fix it. And after some
attempts the Ignite node is stopped. I tried to change the write-behind parameters
and to disable write-behind entirely, but I have the same issue. Maybe I have to
look somewhere else? Maybe this is important: I use IgniteDataStreamer for reading
values from Kafka (earlier I used simple cache operations and didn't have this
problem at all).

final IgniteDataStreamer<String, BinaryObject> streamer = ignite.dataStreamer(callCache().getName());
streamer.autoFlushFrequency(5000);
streamer.keepBinary(true);
streamer.perNodeBufferSize(5120);
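
For illustration, a minimal hypothetical usage sketch of such a streamer (the cache name
"TestCache", the binary type "TestValue" and its field are assumptions, not the poster's
actual Kafka consumer code; an existing Ignite instance `ignite` is assumed):

try (IgniteDataStreamer<String, BinaryObject> streamer = ignite.dataStreamer("TestCache")) {
    streamer.autoFlushFrequency(5000);
    streamer.keepBinary(true);

    // Build a binary value without having the value class on the classpath.
    BinaryObject value = ignite.binary().builder("TestValue")
        .setField("payload", "hello")
        .build();

    // addData() is buffered and flushed in batches; with write-through/write-behind
    // enabled, flushed entries reach Cassandra via the configured cache store.
    streamer.addData("key-1", value);

    streamer.flush();
}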


And one more question about persistence settings. If I'm working with
BinaryObjects and ignite doesn't know about my class types, should I use
valuePersistence class =  "org.apache.ignite.binary.BinaryObject"? I did it
because peer class loading doesn't work for my own classes and I don't want
to add new jars into all ignite nodes and all my apps.

Thanks for help!




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cassandra-module-issue-tp13808.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.