3D XPoint outperforms SSDs - verified on Apache Ignite

2017-10-23 Thread Denis Magda
Igniters,

Some of our community members benchmarked regular SSDs vs Intel Optane (3D
XPoint). Here is a report:
http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-ssds-verified-on.html

Please help to spread the word:
https://twitter.com/denismagda/status/922587654033518593

—
Denis

Re: intermittent exceptions while starting ignite cluster

2017-10-23 Thread slava.koptilin
Hi,

Let's start with the communication SPI first. Please configure the communication
ports that way and check that the discovery and communication port ranges do not
overlap.
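
For example, a hedged Java sketch (port numbers are arbitrary) that keeps the
discovery and communication port ranges disjoint:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setLocalPort(48500);      // discovery uses 48500-48599
discoSpi.setLocalPortRange(100);

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setLocalPort(48100);       // communication uses 48100-48199
commSpi.setLocalPortRange(100);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);
cfg.setCommunicationSpi(commSpi);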

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cannot find schema for object with compact footer

2017-10-23 Thread zshamrock
Slava, what is the consequence/price of disabling the compact footer? It
looks like it only happens in the integration tests, and never (at least we
have not observed this problem) in production.

This is the test (we use Spock):

class SessionsCacheITSpec extends IgniteCacheITSpec {
    @Inject
    @Subject
    IgniteCache sessionsCache

    @Inject
    SessionsRepository sessionsRepository

    def "verify on the start sessionsCache cache is empty"() {
        expect:
        sessionsCache.size() == 0
    }

    def "verify get on non existent session id returns null"() {
        setup:
        def sessionId = UUID.randomUUID().toString()

        when:
        def session = sessionsCache.get(sessionId)

        then:
        session == null
    }

    def "verify get on the existing session id session is loaded from db"() {
        setup:
        ...
        def session = new Session(...
        )
        sessionsRepository.save(session)

        when:
        def actualSession = sessionsCache.get(sessionId)

        then:
        actualSession == session
    }
}

where IgniteCacheITSpec is the following:

@SpringApplicationConfiguration
@IntegrationTest
@DirtiesContext
class IgniteCacheITSpec extends Specification {

    @Inject
    CacheEnvironment cacheEnvironment

    @Inject
    @Qualifier("client")
    Ignite igniteClient

    @Shared
    Ignite igniteServer

    @Configuration
    @ComponentScan([...])
    static class IgniteClientConfiguration {
    }

    def setupSpec() {
        Ignition.setClientMode(false)
        igniteServer = Ignition.start(new ClassPathResource("META-INF/grid.xml").inputStream)
    }

    def cleanupSpec() {
        def grids = Ignition.allGrids()
        // stop all clients first
        grids.findAll { it.configuration().isClientMode() }.forEach { it.close() }
        igniteServer.close()
    }
}


where igniteClient is defined as a Spring bean in one of the
configuration classes, like the following:

@Bean(name = "ignite", destroyMethod = "close")
@Qualifier("client")
public Ignite ignite() {
    Ignition.setClientMode(true);
    return Ignition.start(igniteConfiguration());
}

Probably that could be due to the usage of @DirtiesContext, although that is just
a guess.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: intermittent exceptions while starting ignite cluster

2017-10-23 Thread Ranjit Sahu
For both the discovery and the communication SPI, you mean?

On Mon, Oct 23, 2017 at 11:26 PM, slava.koptilin 
wrote:

> Hi Ranjit,
>
> It seems that all ports in the range you provided are already bound.
> Perhaps, your machine doesn't release ports fast enough.
> Please try to increase the port range up to 100.
>
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: map types in Cassandra

2017-10-23 Thread afedotov
Hi,

It's hard to say for sure what the problem is without the logs, but it is
obviously a configuration issue.
Please take a look at the corresponding documentation:
https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra

You can also find my example here:
https://github.com/symbicator/apache-ignite-cassandra1

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Grid freezing

2017-10-23 Thread smurphy
IgnitePortionDequeuer.java
top.visor

Hi Evgenii,

See the attached top command output and Java file.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How do you fix affinity key backups in cache configuration?

2017-10-23 Thread slava.koptilin
Hello,

It looks like you are trying to start a node which has an inconsistent
configuration for atomic types.
Please check that all nodes in the cluster (clients and servers) have an
identical configuration [1].

something like the following:
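
(A hedged Java sketch; the backup count shown is illustrative and must be set
identically on every client and server node.)

import org.apache.ignite.configuration.AtomicConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

AtomicConfiguration atomicCfg = new AtomicConfiguration();
atomicCfg.setBackups(1);   // the number of backups must match on all nodes

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setAtomicConfiguration(atomicCfg);
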
[1] https://apacheignite.readme.io/docs/atomic-types#atomic-configuration

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: intermittent exceptions while starting ignite cluster

2017-10-23 Thread slava.koptilin
Hi Ranjit,

It seems that all ports in the range you provided are already bound.
Perhaps, your machine doesn't release ports fast enough.
Please try to increase the port range up to 100.

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Web Session Clustering

2017-10-23 Thread whcmachado
Hello!

First, I'm sorry for my late reply.

I didn't close my web browser between those tests.

But since you mentioned cookies, I looked at the cookies in the Chrome developer
tools, and for some unknown reason I had a cookie for '/' and
'/Ignite-WebApp'. I cleaned those cookies and everything works well now.

Now we're going to try to set up Ignite on our main project which uses
frameworks like Jboss Seam 2.2.1.CR1, JSF 1.2_15, Facelets 1.1.14, Richfaces
3.3.3, Hibernate 3.3.2, c3p0 0.9.1.2, Spring 3.1.3 and Servlet 2.5 (maybe we
can update the Servlet version, but not the others).

As we're not using the most recent versions of those frameworks, and there
are things like Seam contexts, do you think that our application is
suitable for Ignite Web Session Clustering?

Do you have any advice?

I really appreciate any help you can provide.

Thank you!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite/Cassandra failing to use supplied value for where clause

2017-10-23 Thread Kenan Dalley
Here's the C* setup:





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite/Cassandra failing to use supplied value for where clause

2017-10-23 Thread Kenan Dalley
I've got to be missing something simple here, but I'm having trouble with
Ignite calling C* with the POJO key that I'm supplying it when it tries to
"cache.get".  Whenever I try to supply the "cache.get" method with my POJO
key, it complains that it's null when it tries to execute the CQL.  I've
included a link to my project below.

This is the call in the application:
final Test1 value3 = cache.get(new Test1Key("test123"));

With Test1Key as:


Exception Thrown

java.lang.IllegalStateException: Failed to execute CommandLineRunner
    at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:779) ~[spring-boot-1.5.1.RELEASE.jar:1.5.1.RELEASE]
    at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:760) ~[spring-boot-1.5.1.RELEASE.jar:1.5.1.RELEASE]
    at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:747) ~[spring-boot-1.5.1.RELEASE.jar:1.5.1.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) ~[spring-boot-1.5.1.RELEASE.jar:1.5.1.RELEASE]
    at com.gm.examples.cassandra_persistence_store.Application.main(Application.java:55) [classes/:na]
Caused by: javax.cache.integration.CacheLoaderException: class org.apache.ignite.IgniteException: Failed to execute Cassandra CQL statement:
select "column_1", "column_2", "column_3", "column_4", "column_5", "column_6", "column_7", "column_8", "column_9" from "dev_qlty"."test1" where "my_id"=?;
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:327) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:282) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:418) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:384) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter$15.call(GridCacheAdapter.java:2024) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter$15.call(GridCacheAdapter.java:2022) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6608) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:959) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) ~[ignite-core-2.2.0.jar:2.2.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_92]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_92]
    at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_92]
Caused by: org.apache.ignite.IgniteException: Failed to execute Cassandra CQL statement:
select "column_1", "column_2", "column_3", "column_4", "column_5", "column_6", "column_7", "column_8", "column_9" from "dev_qlty"."test1" where "my_id"=?;
    at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:185) ~[ignite-cassandra-store-2.2.0.jar:2.2.0]
    at org.apache.ignite.cache.store.cassandra.CassandraCacheStore.load(CassandraCacheStore.java:189) ~[ignite-cassandra-store-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:98) ~[ignite-core-2.2.0.jar:2.2.0]
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:316) ~[ignite-core-2.2.0.jar:2.2.0]
    ... 11 common frames omitted
Caused by: org.apache.ignite.IgniteException: Failed to execute Cassandra CQL statement:
select "column_1", "column_2", "column_3", "column_4", "column_5", "column_6", "column_7", "column_8", "column_9" from "dev_qlty"."test1" where "my_id"=?;
    at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:167) ~[ignite-cassandra-store-2.2.0.jar:2.2.0]
    ... 14 common frames omitted
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid null value in condition for column my_id
    at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50) ~[cassandra-driver-core-3.0.0.jar:na]
    at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37) ~[cassandra-driver-core-3.0.0.jar:na]
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245) ~[cassandra-driver-core-3.0.0.jar:na]
    at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63) ~[cassandra-driver-core-3.0.0.jar:na]
    at

intermittent exceptions while starting ignite cluster

2017-10-23 Thread Ranjit Sahu
Hi Team,

I am getting the attached exception once in a while when starting the
Ignite cluster.
Even though no one else is running Ignite, I sometimes get "address already in
use".
I have specified a range of 10 for both the discovery and communication
ports.
Any help on what I am missing here would be appreciated.

my code below

val igniteConfig = new IgniteConfiguration()
igniteConfig.setGridName(igniteGridName)
val tcpDiscoverySpi = new TcpDiscoverySpi()
val ipFinder = new TcpDiscoveryVmIpFinder()
ipFinder.setAddresses(ipAddresses.value.toList.asJava)
tcpDiscoverySpi.setIpFinder(ipFinder)
tcpDiscoverySpi.setLocalPort(Integer.valueOf(portNumber.value))
tcpDiscoverySpi.setLocalPortRange(10)
igniteConfig.setDiscoverySpi(tcpDiscoverySpi)

val commSpi=new TcpCommunicationSpi()

commSpi.setLocalPort(Integer.valueOf(portNumber.value)+1000)
commSpi.setLocalPortRange(10)
// Overriding communication SPI.
igniteConfig.setCommunicationSpi(commSpi)
Ignition.getOrStart(igniteConfig)


Thanks,
Ranjit


17/10/23 10:46:06 WARN scheduler.TaskSetManager: Lost task 18.0 in
stage 1.0 (TID 58, c157mgh.int.westgroup.com, executor 25): class
org.apache.ignite.IgniteException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.communication.GridIoManager]
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:927)
at org.apache.ignite.Ignition.getOrStart(Ignition.java:417)
at un.api.resolver.dao.CompanyDAO.getIgnite(CompanyDAO.scala:92)
at 
un.api.dataloader.IgniteDataLoader$$anonfun$initIgniteCluster$2.apply(IgniteDataLoader.scala:71)
at 
un.api.dataloader.IgniteDataLoader$$anonfun$initIgniteCluster$2.apply(IgniteDataLoader.scala:67)
at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1866)
at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1866)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
start manager: GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.communication.GridIoManager]
at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1589)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:857)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1794)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1598)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1041)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:568)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:529)
at org.apache.ignite.Ignition.getOrStart(Ignition.java:414)
... 13 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
get SPI attributes.
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:249)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.start(GridIoManager.java:231)
at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1584)
... 20 more
Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to
initialize TCP server: 0.0.0.0/0.0.0.0
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.getNodeAttributes(TcpCommunicationSpi.java:1481)
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:232)
... 22 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
bind to any port within range [startPort=48750, portRange=10,
locHost=0.0.0.0/0.0.0.0]
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.resetNioServer(TcpCommunicationSpi.java:1762)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.getNodeAttributes(TcpCommunicationSpi.java:1478)
... 23 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
initialize NIO selector.
at 
org.apache.ignite.internal.util.nio.GridNioServer.createSelector(GridNioServer.java:670)
at 
org.apache.ignite.internal.util.nio.GridNioServer.<init>(GridNioServer.java:289)
at 
org.apache.ignite.internal.util.nio.GridNioServer.<init>(GridNioServer.java:88)

Re: How to query to local partition cache data

2017-10-23 Thread Andrey Mashenkov
Hi,

setLocal - if true, the query will be executed on the local node's data only.
setCollocated - if true, rows are collocated on the GROUP BY field and
Ignite can evaluate aggregate functions on the Map step.
setDistributedJoins - if true, Ignite will try to join rows with rows
on other nodes.

Could you please clarify what weird behavior you observe, and how?

BTW:
In the first case, affinityRun makes the task execute on the node that is
primary for the given key, but a local SQL query will run over all of the
primary data available on that node.

In the second case, using setPartition restricts the SQL query to the given
partition, and the key can be a non-affinity key for that partition.
So, here the task can run on the node that is primary for Key=1, but make a
remote query to the node that is primary for partition=1.
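
For illustration, a hedged Java sketch of the two cases described above, assuming
an Ignite instance "ignite" and a SQL-enabled cache "person" with value type
Person (both names are hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Case 1: run a job on the primary node for key 1; setLocal(true) restricts the
// query to whatever primary data resides on that node (not only data for key 1).
ignite.compute().affinityRun("person", 1, () -> {
    Ignition.localIgnite().cache("person")
        .query(new SqlFieldsQuery("select count(*) from Person").setLocal(true))
        .getAll();
});

// Case 2: setPartitions(1) scopes the query to partition 1, which may be owned by
// a different node than the primary node for key 1, so the query can go remote.
ignite.cache("person")
    .query(new SqlFieldsQuery("select count(*) from Person").setPartitions(1))
    .getAll();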



On Sun, Oct 22, 2017 at 11:53 AM, dark  wrote:

> Result of the document
>
> setLocal is a local query
> setCollocated specifies that the data associated with Cache's Join function
> is local
>
> It seems like this.
> But why does the Partitioned Cache increase the GridQueryExecutor's Task on
> all nodes?
>
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


How do you fix affinity key backups in cache configuration?

2017-10-23 Thread ndipiazza3565
I was just hit with this error, which caused my whole Ignite cluster to fail to
start:

org.apache.ignite.IgniteCheckedException: Affinity key backups mismatch (fix
affinity key backups in cache configuration or set
-DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property)
[cacheName=ignite-atomics-sys-cache, localAffinityKeyBackups=1,
remoteAffinityKeyBackups=0, rmtNodeId=d663345e-x7ba-5c85-6144-1234a7d3f721]
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.checkAttributeMismatch(GridCacheUtils.java:1144)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.checkCache(GridCacheProcessor.java:2915)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:756)
~[ignite-core-1.7.0.jar:1.7.0]
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:930)
[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1739)
[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1589)
[ignite-core-1.7.0.jar:1.7.0]
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
[ignite-core-1.7.0.jar:1.7.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:569)
[ignite-core-1.7.0.jar:1.7.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:516)
[ignite-core-1.7.0.jar:1.7.0]
at org.apache.ignite.Ignition.start(Ignition.java:322)
[ignite-core-1.7.0.jar:1.7.0]

What does "fix affinity key backups in cache configuration" mean?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite in embedded mode in Spark running in dynamic mode

2017-10-23 Thread Ranjit Sahu
Hi Val,

Could you please confirm.

Thanks,
Ranjit

On Thu, Oct 19, 2017 at 6:24 PM, Ranjit Sahu  wrote:

> Hi Val,
>
> Do you mean use yarn deployment ?
>
> Thanks,
> Ranjit
>
> On Wed, Oct 18, 2017 at 10:54 PM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Hi Ranjit,
>>
>> That's a known problem with embedded mode. Spark can start and stop executor
>> processes at any moment of time, and having embedded Ignite server nodes
>> there means frequent rebalancing and potential data loss. I would recommend
>> using standalone mode instead.
>>
>> -Val
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: Client Near Cache Configuration Lost after Cluster Node Removed

2017-10-23 Thread Nikolai Tikhonov
Hello,

Could you say how you determined that the client node loads data from a remote
node, bypassing the near cache?
I'm not able to reproduce this behaviour locally; could you share a simple
Maven project that reproduces it?
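
For reference, a minimal client-side near cache setup along the lines of step 2 in
the report quoted below might look like this (a hedged sketch; the cache name and
configuration path are hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.NearCacheConfiguration;

Ignition.setClientMode(true);
Ignite client = Ignition.start("client-config.xml");

// Attach a near cache on the client to an existing server-side cache.
NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
IgniteCache<Integer, String> cache = client.getOrCreateNearCache("myCache", nearCfg);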

On Tue, Oct 17, 2017 at 12:54 AM, torjt  wrote:

> Hello All,
>
> We are having an issue with Ignite client near caches being "lost" upon a
> cluster node being removed from the Ignite cluster.  Furthermore, using
> version 2.1.0, we are seeing this issue when another client joins the
> topology.  I built Ignite from GIT today, 10/16/17, with the latest
> changes,
> ver. 2.3.0-SNAPSHOT.  As of version 2.3.0-SNAPSHOT, bringing clients
> up/down
> does not cause an active client to lose its near cache and performance is
> good.  However, when we remove a node from the cluster, the client
> immediately communicates with the cluster and disregards its near cache.
> Restarting the client remedies the issue.
>
> The following are the steps to reproduce the issue:
> Apache Ignite Version:
> *SNAPSHOT#20171016-sha1:ca6662bcb4eecc62493e2e25a572ed0b982c046c*
> 1.  Start 2 Ignite servers
> 2.  Start client with caches configured as near-cache
> 3.  Access caches
> 4.  Stop node client is connected to
> 4a.  Client immediately bypasses near cache and access "cluster" for cache
> miss
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous update Data Grid Cache

2017-10-23 Thread blackfield

The use case is similar to OP.

We have a large table that we import/load from a dataset.

This dataset is generated periodically such that we need to re-load this
whole dataset to Ignite.

If we re-load the new dataset against the same Ignite table, the users will
be impacted during that window.

We are looking for ways to minimize the impact, e.g. load the data to
another table->perform swap, etc.

Let me know if there are existing capabilities that help minimize the impact in
the above scenario.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ComputeGrid API in C++

2017-10-23 Thread asingh
That was it!
Thank you so much, Igor.

I just had to add a call to GetBinding() and RegisterComputeFunc() in the
platforms/cpp/ignite/src/ignite.cpp main() method. I'm not used to modifying
an implementation in order to run an example ;)

Although this means that every time I change the closure, I have to rebuild
Ignite as well. Not a big problem, but if anyone has a more elegant solution,
I am all ears!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node and cluster maintenance best practice

2017-10-23 Thread blackfield
Thank you, Denis.

What about best practices for bringing down a node for maintenance (e.g.
applying an OS patch, hardware maintenance, etc.)?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Grid freezing

2017-10-23 Thread Evgenii Zhuravlev
Hi!

How many nodes do you have? How many CPUs? Could you provide the code of
com.company.node.ignite.IgnitePortionDequeuer,
or at least the dequeuePortions method?

Evgenii

2017-10-20 0:24 GMT+03:00 smurphy :

> threaddump.tdump
>
> I am using Ignite v2.1, and my code, which uses Optimistic/Serializable
> transactions, is locking up. When it does, there are a lot of `WARNING:
> Found long running transaction` and `WARNING: Found long running cache
> future` messages in the logs (see below). Eventually the grid freezes and
> there are a lot of "IgniteInterruptedException: Got interrupted while
> waiting for future to complete." (see below). Also, see attached thread
> dump
> when the grid is finally stopped.
>
> Using Visor, I could see the active threads increase until all were
> consumed.
> I boosted the thread pool count to 100 and saw the active threads rise
> steadily until all were consumed. At that point, the grid freezes up.
>
> Using visor ping multiple times, the servers always ping successfully but
> the client fails intermittently.
>
> LONG RUNNING TRANSACTION:
>
> 2017-10-19 13:03:06,698 WARN  [ ] grid-timeout-worker-#15%null%
> {Log4JLogger.java:480} - Found long running transaction
> [startTime=12:30:33.985, curTime=13:03:06.682, tx=GridNearTxLocal
> [mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping
> [entries=[IgniteTxEntry [key=KeyCacheObjectImpl [part=126,
> val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
> hasValBytes=true], cacheId=1481046058, txKey=IgniteTxKey
> [key=KeyCacheObjectImpl [part=126, val=GridCacheInternalKeyImpl
> [name=SCAN_IDX, grpName=default-ds-group], hasValBytes=true],
> cacheId=1481046058], val=[op=TRANSFORM, val=null], prevVal=[op=NOOP,
> val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=[IgniteBiTuple
> [val1=GetAndIncrementProcessor [], val2=[Ljava.lang.Object;@49d9dc6c]],
> ttl=-1, conflictExpireTime=-1, conflictVer=null, explicitVer=null,
> dhtVer=null, filters=[], filtersPassed=false, filtersSet=true,
> entry=GridDhtDetachedCacheEntry [super=GridDistributedCacheEntry
> [super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=126,
> val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
> hasValBytes=true], val=null, startVer=1508434155232, ver=GridCacheVersion
> [topVer=119473256, order=1508434155232, nodeOrder=2243], hash=-375255214,
> extras=null, flags=0]]], prepared=0, locked=false,
> nodeId=36b4d422-d011-4f77-919a-b8ffb089614b, locMapped=false,
> expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
> serReadVer=null, xidVer=GridCacheVersion [topVer=119473256,
> order=1508434155230, nodeOrder=2243]]], explicitLock=false, dhtVer=null,
> last=false, nearEntries=0, clientFirst=false,
> node=36b4d422-d011-4f77-919a-b8ffb089614b]], nearLocallyMapped=false,
> colocatedLocallyMapped=false, needCheckBackup=null, hasRemoteLocks=false,
> thread=,
> mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping
> [entries=[IgniteTxEntry [key=KeyCacheObjectImpl [part=126,
> val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
> hasValBytes=true], cacheId=1481046058, txKey=IgniteTxKey
> [key=KeyCacheObjectImpl [part=126, val=GridCacheInternalKeyImpl
> [name=SCAN_IDX, grpName=default-ds-group], hasValBytes=true],
> cacheId=1481046058], val=[op=TRANSFORM, val=null], prevVal=[op=NOOP,
> val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=[IgniteBiTuple
> [val1=GetAndIncrementProcessor [], val2=[Ljava.lang.Object;@49d9dc6c]],
> ttl=-1, conflictExpireTime=-1, conflictVer=null, explicitVer=null,
> dhtVer=null, filters=[], filtersPassed=false, filtersSet=true,
> entry=GridDhtDetachedCacheEntry [super=GridDistributedCacheEntry
> [super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=126,
> val=GridCacheInternalKeyImpl [name=SCAN_IDX, grpName=default-ds-group],
> hasValBytes=true], val=null, startVer=1508434155232, ver=GridCacheVersion
> [topVer=119473256, order=1508434155232, nodeOrder=2243], hash=-375255214,
> extras=null, flags=0]]], prepared=0, locked=false,
> nodeId=36b4d422-d011-4f77-919a-b8ffb089614b, locMapped=false,
> expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
> serReadVer=null, xidVer=GridCacheVersion [topVer=119473256,
> order=1508434155230, nodeOrder=2243]]], explicitLock=false, dhtVer=null,
> last=false, nearEntries=0, clientFirst=false,
> node=36b4d422-d011-4f77-919a-b8ffb089614b]], super=GridDhtTxLocalAdapter
> [nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[],
> explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
> depEnabled=false, txState=IgniteTxImplicitSingleStateImpl [init=true,
> recovery=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
> [topVer=119473256, order=1508434155230, nodeOrder=2243], writeVer=null,
> implicit=true, loc=true, threadId=109, 

Re: Cannot find schema for object with compact footer

2017-10-23 Thread slava.koptilin
Hi,

It looks like there is an issue with the binary serialization of objects.
I would suggest the following workaround:
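
A hedged Java sketch of that workaround - disabling the compact footer via
BinaryConfiguration (it must be configured identically on all server and client
nodes):

import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

BinaryConfiguration binaryCfg = new BinaryConfiguration();
binaryCfg.setCompactFooter(false);   // disable compact footers for binary objects

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setBinaryConfiguration(binaryCfg);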

Could you please share the code of your test? It will be very useful in
order to understand the root cause of the issue.

By the way, could you try to reproduce the issue using the latest version of
Apache Ignite?

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite 2.3 - replicated cache lost data after restart cluster nodes with persistence enabled

2017-10-23 Thread Dmitry Pavlov
Hi,

I tried to write the same code that will execute the described scenario.
The results are as follows:

If I do not give enough time to completely rebalance partitions, then the
newly launched node will not have enough data for count(*).

If I do not wait long enough for the data to be distributed across the
grid, the query will return a smaller number - the number of records that
have been uploaded to the node. I guess GridDhtPartitionDemandMessage entries
can be found in the Ignite debug log at this moment.

If I wait for a sufficient amount of time, or directly call the wait on the
newly joined node

ignite2.cache(CACHE).rebalance().get();

then all results will be correct.
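
A hedged sketch expanding the call above to every cache (so that rebalancing has
finished for all caches before the next node is restarted):

// Wait until rebalancing has completed for all caches on the newly joined node.
for (String name : ignite2.cacheNames())
    ignite2.cache(name).rebalance().get();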


About your question: what happens if one cluster node crashes in the
middle of the rebalance process?

In this case the normal failover scenario is started, and data is rebalanced
within the cluster. If there are enough WAL records on the nodes representing
history from the crash point, then only the recent changes (delta) will be sent
over the network. If there is not enough history to rebalance with the most
recent changes, then the partition will be rebalanced from scratch to the new node.


Sincerely,

Pavlov Dmitry


сб, 21 окт. 2017 г. в 2:07, Manu :

> Hi,
>
> after restart, the data does not seem to be consistent.
>
> We have been waiting until rebalance was fully completed to restart the
> cluster to check if durable memory data rebalance works correctly and sql
> queries still work.
> Another question (it's not this case): what happens if one cluster node
> crashes in the middle of the rebalance process?
>
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Hammering Cassandra with Ignite seems to get ignite into an infinite loop

2017-10-23 Thread slava.koptilin
Hi Tobias,

As far as I know, the Cassandra module was tested with Cassandra Database 3.3 and
Cassandra Driver 3.0.0.
Perhaps this issue is related to Cassandra v3.11. Could you please check whether
the issue is reproducible with Cassandra Database 3.3 and Cassandra Driver
3.0.0?

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ComputeGrid API in C++

2017-10-23 Thread Igor Sapego
Have you registered the same function for all three nodes,
or only for the one you are calling RunAsync() on? You should
register it on every node, as this is a local operation.

Best Regards,
Igor

On Fri, Oct 20, 2017 at 7:06 PM, asingh  wrote:

> That makes sense.
>
> Now I am using ignite.GetBinding() from within the main() function
>
> In the main function, I am populating a vector of strings and calling
> RunAsync() three times.
> The ignite node within the binary is able to process the string but the
> other two ignite nodes report the same IgniteException of job not being
> registered.
>
>
> Looks like the example binary is able to communicate with the two ignite
> server nodes but those two nodes have no idea about the PrintWords' Call
> method.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


need suggestions for Ignite cluster isolation

2017-10-23 Thread Jeff Jiao
Hi Ignite Community,

We implemented a generic Ignite cache component in our group; the purpose is
that everyone in our group can integrate Ignite caches into their
applications by simply adding some configuration.

But we find that Ignite's cluster isolation mechanism is not very friendly
here... based on my knowledge of Ignite, people must specify ports to
isolate clusters. We have a lot of services that contain cache components;
if we switch to Ignite, it will be very difficult to manage the ports,
and we cannot guarantee that there are no port conflicts.
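
For reference, the port-based isolation being referred to typically looks like the
following hedged sketch (host name and ports are arbitrary): each logical cluster
gets its own discovery port range, and its IP finder lists only addresses with
that range, so nodes from different clusters never discover each other.

import java.util.Collections;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Collections.singletonList("host1:49500..49502"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setLocalPort(49500);
discoSpi.setLocalPortRange(2);
discoSpi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setIgniteInstanceName("cluster-a")
    .setDiscoverySpi(discoSpi);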

Do you have any suggestions for this?


Thanks in advance,
Jeff



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/