Re: Ignite + Spark Streaming on Amazon EMR Exception

2016-10-21 Thread vkulichenko
Hi,

You can try setting the -Djava.net.preferIPv4Stack=true system property to bind
to IPv4 addresses only. This will probably help.
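For example, assuming the standard ignite.sh launcher, which reads extra JVM arguments from the JVM_OPTS environment variable (this variable name is an assumption — check your own start script):

```shell
# Force the JVM to bind IPv4 addresses only.
# JVM_OPTS is assumed to be the variable your launcher (e.g. ignite.sh)
# passes to the JVM; adjust if your start script uses a different name.
export JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"
echo "$JVM_OPTS"
```

When starting an embedded node from your own application, pass the same flag directly on the java command line instead.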

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Spark-Streaming-on-Amazon-EMR-Exception-tp8410p8423.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Random SSL unsupported record version

2016-10-21 Thread vkulichenko
Hi,

Isn't this thread a duplicate of this one?
http://apache-ignite-users.70518.x6.nabble.com/Random-SSL-unsupported-record-version-td8236.html

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Random-SSL-unsupported-record-version-tp8406p8422.html


Re: Ignite servers go down if put to large data into cluster

2016-10-21 Thread vkulichenko
Hi,

There is a response on SO:
http://stackoverflow.com/questions/40171122/ignite-servers-go-down-if-put-large-data-into-cluster

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-servers-go-down-if-put-to-large-data-into-cluster-tp8395p8421.html


Re: SLF4J AND LOG4J delegation exception with ignite dependency

2016-10-21 Thread vkulichenko
There is a response on SO:
http://stackoverflow.com/questions/40183878/slf4j-and-log4j-binding-exception-in-spring-boot-app-with-ignite-dependency

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SLF4J-AND-LOG4J-delegation-exception-with-ignite-dependency-tp8415p8420.html


Re: zero downtime while upgrade

2016-10-21 Thread Abhishek Jain
Thanks Denis for the information.

Regards
Abhishek

On Fri, Oct 21, 2016 at 4:16 PM, Denis Magda  wrote:

> Hello Abhishek,
>
> That will not work either. Only nodes with the same Apache Ignite
> version can co-exist in a single cluster.
>
> —
> Denis
>
> On Oct 21, 2016, at 8:11 AM, Abhishek Jain 
> wrote:
>
> Thanks for your very quick response.
>
> But is it possible to run multiple versions of Apache Ignite nodes on
> different host machines within the same cluster?
>
> Regards
> Abhishek
>
> On Fri, Oct 21, 2016 at 8:37 AM, Vladislav Pyatkov 
> wrote:
>
>> Hi,
>>
>> Unfortunately not, because changes between Ignite versions often involve
>> architectural changes.
>>
>> GridGain supports rolling upgrades [1], but this works for minor versions
>> only.
>>
>>
>> [1]: https://gridgain.readme.io/docs/rolling-upgardes
>>
>> On Fri, Oct 21, 2016 at 3:50 PM, Abhishek Jain <
>> mail.abhishekj...@gmail.com> wrote:
>>
>>> Hi Folks,
>>>
>>> Does Apache Ignite support zero downtime while upgrading to a new
>>> version?
>>>
>>> Regards
>>> Abhishek
>>>
>>
>>
>>
>> --
>> Vladislav Pyatkov
>>
>
>
>


Re: zero downtime while upgrade

2016-10-21 Thread Denis Magda
Hello Abhishek,

That will not work either. Only nodes with the same Apache Ignite version
can co-exist in a single cluster.

—
Denis

> On Oct 21, 2016, at 8:11 AM, Abhishek Jain  
> wrote:
> 
> Thanks for your very quick response.
> 
> But is it possible to run multiple versions of Apache Ignite nodes on
> different host machines within the same cluster?
> 
> Regards
> Abhishek
> 
> On Fri, Oct 21, 2016 at 8:37 AM, Vladislav Pyatkov  > wrote:
> Hi,
> 
> Unfortunately not, because changes between Ignite versions often involve
> architectural changes.
> 
> GridGain supports rolling upgrades [1], but this works for minor versions only.
> 
> 
> [1]: https://gridgain.readme.io/docs/rolling-upgardes 
> 
> 
> On Fri, Oct 21, 2016 at 3:50 PM, Abhishek Jain  > wrote:
> Hi Folks,
> 
> Does Apache Ignite support zero downtime while upgrading to a new version?
> 
> Regards
> Abhishek
> 
> 
> 
> -- 
> Vladislav Pyatkov
> 



SLF4J AND LOG4J delegation exception with ignite dependency

2016-10-21 Thread chevy
Hi,

The exception below is thrown when I include the Ignite dependency in my
Spring Boot app. Even though the reason seems obvious (two jars are in a
delegation loop here), can you suggest how I can fix this? I am not adding
any of these jars directly; they are included automatically by other
dependencies.

build.gradle
--
buildscript {
    repositories {
        mavenCentral()
        maven { url "http://repo.spring.io/libs-snapshot" }
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.4.1.RELEASE")
        classpath 'mysql:mysql-connector-java:5.1.34'
    }
}

// Apply the java plugin to add support for Java
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'spring-boot'

jar {
baseName = 'gs-spring-boot'
version =  '0.1.0'
}

// In this section you declare where to find the dependencies of your project
repositories {
    // Use 'jcenter' for resolving your dependencies.
    // You can declare any Maven/Ivy/file repository here.
    mavenCentral()
    maven { url "http://repo.spring.io/libs-snapshot" }
    jcenter()
}

sourceCompatibility = 1.8
targetCompatibility = 1.8

// In this section you declare the dependencies for your production and test code
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.boot:spring-boot-starter-jdbc")
    compile("org.springframework.boot:spring-boot-starter-data-jpa")
    compile("mysql:mysql-connector-java:5.1.34")

    // Ignite dependencies
    compile group: 'org.apache.ignite', name: 'ignite-core', version: '1.6.0'
    compile group: 'org.apache.ignite', name: 'ignite-spring', version: '1.6.0'
    compile group: 'org.apache.ignite', name: 'ignite-indexing', version: '1.6.0'
    compile group: 'org.apache.ignite', name: 'ignite-rest-http', version: '1.6.0'
    compile group: 'org.apache.ignite', name: 'ignite-log4j', version: '1.6.0'

    testCompile 'junit:junit:4.12'
}


Exception:
-
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/Cellar/gradle/2.13/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.21/7238b064d1aba20da2ac03217d700d91e02460fa/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/Cellar/gradle/2.13/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.1.7/9865cf6994f9ff13fce0bf93f2054ef6c65bb462/logback-classic-1.1.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Detected both log4j-over-slf4j.jar AND bound slf4j-log4j12.jar on the class path, preempting StackOverflowError.
SLF4J: See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
    at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
    at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
    at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
    at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
    at com.boot.App.<clinit>(App.java:42)
Caused by: java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND bound slf4j-log4j12.jar on the class path, preempting StackOverflowError. See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.
    at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:54)
    ... 8 more
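One common way out, sketched under the assumption that you keep Logback (Spring Boot's default) as the single SLF4J backend, is to exclude the competing Log4j binding globally in build.gradle:

```gradle
configurations.all {
    // Keep Logback as the only SLF4J binding; drop the Log4j binding that
    // arrives transitively, which breaks the delegation loop with
    // log4j-over-slf4j. If you prefer Log4j instead, exclude
    // 'ch.qos.logback:logback-classic' and 'org.slf4j:log4j-over-slf4j'
    // rather than this module.
    exclude group: 'org.slf4j', module: 'slf4j-log4j12'
}
```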




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SLF4J-AND-LOG4J-delegation-exception-with-ignite-dependency-tp8415.html


Re: How to enable includeEventTypes property of IgniteConfiguration without restart cluster?

2016-10-21 Thread vkulichenko
You can use the IgniteEvents.enableLocal(int... types) method for this.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-enable-includeEventTypes-property-of-IgniteConfiguration-without-restart-cluster-tp8391p8414.html


Re: Ignite query threads

2016-10-21 Thread vkulichenko
Hi,

The public pool is used for computations, and the system pool is used for all
cache operations (gets, puts, queries, etc.). Having a separate thread pool
for queries will not give you a performance improvement, because it is the
OS's responsibility to schedule thread execution anyway.
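For reference, both pool sizes are settable on IgniteConfiguration; a Spring XML sketch with illustrative values:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Threads serving compute jobs (the public pool). -->
    <property name="publicThreadPoolSize" value="16"/>
    <!-- Threads serving cache operations, including queries (the system pool). -->
    <property name="systemThreadPoolSize" value="16"/>
</bean>
```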

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-query-threads-tp8398p8413.html


Re: Data Streamer

2016-10-21 Thread Vladislav Pyatkov
Hi Anil,

This sounds very questionable.
Could you please attach your source code?

On Fri, Oct 21, 2016 at 5:16 PM, Anil  wrote:

> HI,
>
> I was loading data into the Ignite cache using parallel tasks by
> broadcasting the task. Each task (an IgniteCallable implementation) has its
> own data streamer. Is that the correct approach?
>
> Loading data into the Ignite cache using the data streamer is very slow
> compared to normal cache.put.
>
> Is that expected? Or do I need some configuration changes to improve the
> performance?
>
> Thanks.
>
>


-- 
Vladislav Pyatkov


Re: zero downtime while upgrade

2016-10-21 Thread Abhishek Jain
Thanks for your very quick response.

But is it possible to run multiple versions of Apache Ignite nodes on
different host machines within the same cluster?

Regards
Abhishek

On Fri, Oct 21, 2016 at 8:37 AM, Vladislav Pyatkov 
wrote:

> Hi,
>
> Unfortunately not, because changes between Ignite versions often involve
> architectural changes.
>
> GridGain supports rolling upgrades [1], but this works for minor versions
> only.
>
>
> [1]: https://gridgain.readme.io/docs/rolling-upgardes
>
> On Fri, Oct 21, 2016 at 3:50 PM, Abhishek Jain <
> mail.abhishekj...@gmail.com> wrote:
>
>> Hi Folks,
>>
>> Does Apache Ignite support zero downtime while upgrading to a new version?
>>
>> Regards
>> Abhishek
>>
>
>
>
> --
> Vladislav Pyatkov
>


Ignite + Spark Streaming on Amazon EMR Exception

2016-10-21 Thread Geektimus
Hello,

I'm working with this set of technologies (Spark 1.6.0 and Ignite 1.6.0) and
I'm having some issues; the most troublesome is this exception:

16/10/20 22:09:15 ERROR DAGSchedulerEventProcessLoop: DAGSchedulerEventProcessLoop failed; shutting down SparkContext
java.lang.NumberFormatException: For input string: "1%1"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:492)
    at java.lang.Integer.parseInt(Integer.java:527)
    at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
    at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
    at org.apache.spark.util.Utils$.parseHostPort(Utils.scala:877)
    at org.apache.spark.scheduler.cluster.YarnScheduler.getRackForHost(YarnScheduler.scala:37)
    at org.apache.spark.scheduler.TaskSetManager$$anonfun$org$apache$spark$scheduler$TaskSetManager$$addPendingTask$1.apply(TaskSetManager.scala:208)
    at org.apache.spark.scheduler.TaskSetManager$$anonfun$org$apache$spark$scheduler$TaskSetManager$$addPendingTask$1.apply(TaskSetManager.scala:187)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.TaskSetManager.org$apache$spark$scheduler$TaskSetManager$$addPendingTask(TaskSetManager.scala:187)
    at org.apache.spark.scheduler.TaskSetManager$$anonfun$1.apply$mcVI$sp(TaskSetManager.scala:166)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at org.apache.spark.scheduler.TaskSetManager.<init>(TaskSetManager.scala:165)
    at org.apache.spark.scheduler.TaskSchedulerImpl.createTaskSetManager(TaskSchedulerImpl.scala:200)
    at org.apache.spark.scheduler.TaskSchedulerImpl.submitTasks(TaskSchedulerImpl.scala:164)
    at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1052)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:921)
    at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:861)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1607)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
I'm using the XML config to use the discovery SPI from S3:

<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
                <property name="awsCredentials" ref="aws.creds"/>
                <property name="bucketName" value="embedded-ignite-discovery"/>
            </bean>
        </property>
    </bean>
</property>
But that doesn't work, so I changed the Scala code to do the same inside the
code:

val cfg = new IgniteConfiguration with Serializable
//val clientCfg = new ClientConfiguration()
val ipFinder = new CustomTcpDiscoveryS3IpFinder
val discoverySPI = new TcpDiscoverySpi
val accessKey = configHelper.envOrElseConfig("aws.security.credentials.access_key")
val secretKey = configHelper.envOrElseConfig("aws.security.credentials.secret_key")
val awsCredentials = new BasicAWSCredentials(accessKey, secretKey)
ipFinder.setAwsCredentials(awsCredentials)
ipFinder.setBucketName(configHelper.envOrElseConfig("ignite.aws.s3.bucket_name"))
discoverySPI.setIpFinder(ipFinder)
But that fails too; it throws the same exception with the "1%1". When I check
the bucket I can see a file called "0:0:0:0:0:0:0:1:1%1", and I think that is
the IPv6 address of the loopback interface; in some way Spark or Ignite is
failing to handle that address. In the attempt log I can see this:

>>> +--+
>>> Ignite ver. 1.6.0#20160518-sha1:0b22c45bb9b97692208fd0705ddf8045ff34a031
>>> +--+
>>> OS name: Linux 4.1.17-22.30.amzn1.x86_64 amd64
>>> CPU(s): 4
>>> Heap: 11.0GB
>>> VM name: 6268@ip-172-31-41-59
>>> Local node [ID=B284DCEF-BB88-4AF1-BAA2-C7F337A8E579, order=2, clientMode=true]
>>> Local node addresses: [ip-172-31-41-59.ec2.internal/*0:0:0:0:0:0:0:1%1*, /127.0.0.1, /172.31.41.59]
>>> Local ports: TCP:11211 TCP:47100 TCP:48100
Notice the "*0:0:0:0:0:0:0:1%1*". I also tried to do:

cfg.setDiscoverySpi(discoverySPI)
cfg.setLocalHost("127.0.0.1")
cfg.setClientMode(true)
cfg.setPeerClassLoadingEnabled(false)

But in that case it hangs at:

16/10/20 22:49:27 INFO IgniteKernal: Security status [authentication=off, tls/ssl=off]
16/10/20 22:49:27 INFO GridTcpRestProtocol: Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
16/10/20 22:49:28 WARN CustomTcpDiscoveryS3IpFinder: Amazon client configuration is not set (will use default).
16/10/20 22:49:28 WARN TcpDiscoverySpi: Failed to connect to any address from IP finder
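The failure mode is reproducible without Spark or Ignite: Linux may append a %zone index to a link-local or loopback IPv6 address, and a naive host:port split then feeds "1%1" to Integer.parseInt. A minimal self-contained sketch (a simplified stand-in for Spark's parseHostPort, not its exact code):

```java
public class ZoneIdParseDemo {
    // Naive host:port split, standing in for Spark's parseHostPort logic.
    static int portOf(String hostPort) {
        return Integer.parseInt(hostPort.substring(hostPort.lastIndexOf(':') + 1));
    }

    public static void main(String[] args) {
        System.out.println(portOf("172.31.41.59:47100")); // prints 47100
        try {
            // IPv6 loopback with zone index: the "port" substring is "1%1".
            portOf("0:0:0:0:0:0:0:1%1");
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage()); // For input string: "1%1"
        }
    }
}
```

The -Djava.net.preferIPv4Stack=true workaround suggested in the reply to this thread avoids publishing the zoned IPv6 address in the first place.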

Data Streamer

2016-10-21 Thread Anil
Hi,

I was loading data into the Ignite cache using parallel tasks by broadcasting
the task. Each task (an IgniteCallable implementation) has its own data
streamer. Is that the correct approach?

Loading data into the Ignite cache using the data streamer is very slow
compared to normal cache.put.

Is that expected? Or do I need some configuration changes to improve the
performance?

Thanks.


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-21 Thread Vladislav Pyatkov
Hi,

Yes, please attach new dumps (taken without the cache store puts in place).
That will narrow down the search for the cause.

On Fri, Oct 21, 2016 at 3:54 PM, bintisepaha  wrote:

> This was done to optimize our writes to the DB. On every save, we do not
> want to delete and insert records, so we do a digest comparison. Do you
> think this causes an issue? How does the cache store handle transactions or
> locks? When a node dies, if a flusher thread is doing write-behind, how
> does that affect data rebalancing?
>
> If you could answer the above questions, it will give us more clarity.
>
> We are removing it now, but killing a node still stalls the cluster.
> Will send the latest thread dumps to you today.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-
> grid-with-ignite-1-7-tp8130p8405.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: zero downtime while upgrade

2016-10-21 Thread Vladislav Pyatkov
Hi,

Unfortunately not, because changes between Ignite versions often involve
architectural changes.

GridGain supports rolling upgrades [1], but this works for minor versions
only.


[1]: https://gridgain.readme.io/docs/rolling-upgardes

On Fri, Oct 21, 2016 at 3:50 PM, Abhishek Jain 
wrote:

> Hi Folks,
>
> Does Apache Ignite support zero downtime while upgrading to a new version?
>
> Regards
> Abhishek
>



-- 
Vladislav Pyatkov


Random SSL unsupported record version

2016-10-21 Thread styriver
Hello, from time to time we see random errors like these. We are running Java
8. I am assuming that it is happening because of the nested exception:
javax.net.ssl.SSLException: Unsupported record version Unknown-4.6.
What concerns me is the other "caused by":
class org.apache.ignite.IgniteCheckedException: Remote node ID is not as expected
[expected=e0cd4a40-6cc2-49f2-9536-b3453713f649,
rcvd=e55562b0-c39f-4550-9d94-255fde805e52]

We are using certificates and have tried both 1024 and 2048 key sizes. We
would like to move to QA for certification, but this is preventing us from
doing so. I have run SSL debug, and these two ciphers seem to come up as
invalidated much more often than others. I have included some session
information from the logs. If it is a cipher selection issue, is there some
way to restrict them in the application?

TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_RSA_WITH_AES_128_CBC_SHA


Line 459:   Line 83584:  44 %% Initialized:  [Session-2,
SSL_NULL_WITH_NULL_NULL]
Line 460:   Line 83635: %% Negotiating:  [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Line 463:   Line 84711: %% Cached server session: [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Line 464:   Line 84711: %% Cached server session: [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Line 474:   Line 193780: %% Invalidated:  [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Line 781:   Line 62199: 0260: 74 82 3D E1 %% Initialized:  
[Session-2,
SSL_NULL_WITH_NULL_NULL]
Line 782:   Line 62201: %% Negotiating:  [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Line 785:   Line 62782: %% Cached server session: [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Line 786:   Line 62782: %% Cached server session: [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Line 837:   Line 1264792: %% Invalidated:  [Session-2,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]

Out of the 69 total "invalidated" occurrences, I only see this cipher
invalidated once:
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

62 occurrences of invalidated
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

6 occurrences of invalidated
TLS_RSA_WITH_AES_128_CBC_SHA
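If this does turn out to be a cipher selection issue, the enabled suites can be narrowed at the JSSE level with the standard javax.net.ssl API; whether your Ignite version's SslContextFactory exposes such a setting directly would need to be checked separately, so this is only a sketch of the underlying mechanism:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class CipherRestriction {
    // Build SSLParameters limited to an explicit cipher list; an SSLEngine or
    // SSLServerSocket created from the context can be configured with these.
    static SSLParameters restrictedParams() {
        try {
            SSLContext ctx = SSLContext.getDefault();
            SSLParameters params = ctx.getDefaultSSLParameters();
            params.setCipherSuites(new String[] {
                // Illustrative choice: the suite invalidated least in the logs above.
                "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
            });
            params.setProtocols(new String[] { "TLSv1.2" });
            return params;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(String.join(",", restrictedParams().getCipherSuites()));
    }
}
```

A JVM-wide alternative is tightening the jdk.tls.disabledAlgorithms entry in the JRE's java.security file.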


javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
Query execution failed: GridCacheQueryBean [qry=GridCacheQueryAdapter
[type=SCAN, clsName=null, clause=null, filter=null, part=null,
incMeta=false, metrics=GridCacheQueryMetricsAdapter [minTime=0, maxTime=0,
sumTime=0, avgTime=0.0, execs=0, completed=0, fails=0], pageSize=1024,
timeout=0, keepAll=true, incBackups=false, dedup=false,
prj=org.apache.ignite.internal.cluster.ClusterGroupAdapter@7696161a,
keepBinary=false, subjId=daace623-eb07-49a4-a586-6d1735e24859, taskHash=0],
rdc=null, trans=null]
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1502)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.next(GridCacheQueryFutureAdapter.java:176)
at
org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$5.onHasNext(GridCacheDistributedQueryManager.java:634)
at
org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2$1.onHasNext(IgniteCacheProxy.java:501)
at
org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
at
org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:73)
at
com.xxx.documentviewer.imaging.cache.service.IgniteLoanCacheServiceImpl.getEvictionKeys(IgniteLoanCacheServiceImpl.java:232)
at
com.xxx.documentviewer.imaging.cache.service.IgniteLoanCacheServiceImpl.unregisterInactiveImages(IgniteLoanCacheServiceImpl.java:131)
at
com.xxx.documentviewer.imaging.service.CacheEvictionServiceImpl.unregisterInactiveImages(CacheEvictionServiceImpl.java:52)
at
com.xxx.documentviewer.controller.ImageCleanupMaintenanceWebserviceController.imageCleanup(ImageCleanupMaintenanceWebserviceController.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at
org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:222)
at
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:137)
at

Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-21 Thread bintisepaha
This was done to optimize our writes to the DB. On every save, we do not want
to delete and insert records, so we do a digest comparison. Do you think this
causes an issue? How does the cache store handle transactions or locks? When
a node dies, if a flusher thread is doing write-behind, how does that affect
data rebalancing?

If you could answer the above questions, it will give us more clarity.

We are removing it now, but killing a node still stalls the cluster.
Will send the latest thread dumps to you today.

Thanks,
Binti



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-grid-with-ignite-1-7-tp8130p8405.html


zero downtime while upgrade

2016-10-21 Thread Abhishek Jain
Hi Folks,

Does Apache Ignite support zero downtime while upgrading to a new version?

Regards
Abhishek


Re: One question about Partition-aware data loading

2016-10-21 Thread Vladislav Pyatkov
Hi Bob,

This is not clear to me. Why does listing the columns have a bad impact on
performance?
Ignite does not have a specific pattern for reusing the existing CacheStore
implementation; as with everything else, you can look at the code of
CacheJdbcPojoStore [1].

[1]:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStore.java
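For intuition only, the column-to-field mapping that CacheJdbcPojoStore performs boils down to reflection, so listing 100 columns by hand is not required. A tiny self-contained sketch (the Person class and column names here are hypothetical, and the real store additionally handles type conversion, keys, and its type-metadata configuration):

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class RowMapperSketch {
    public static class Person {
        public long id;
        public String firstName;
        public String lastName;
    }

    // Map a row (column name -> value) onto same-named public fields of a POJO,
    // so a wide table needs no hand-written per-column assignments.
    static <T> T mapRow(Map<String, Object> row, Class<T> type) {
        try {
            T obj = type.getDeclaredConstructor().newInstance();
            for (Map.Entry<String, Object> col : row.entrySet()) {
                Field f = type.getField(col.getKey());
                f.set(obj, col.getValue());
            }
            return obj;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("id", 1L);
        row.put("firstName", "Ann");
        row.put("lastName", "Lee");
        Person p = mapRow(row, Person.class);
        System.out.println(p.id + " " + p.firstName + " " + p.lastName); // prints 1 Ann Lee
    }
}
```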

On Fri, Oct 21, 2016 at 11:57 AM, 胡永亮/Bob  wrote:

> Hi everyone,
>
> In official document, there are some code about Partition-aware data
> loading.
>
> private void loadPartition(Connection conn, int part,
>     IgniteBiInClosure<Long, Person> clo) {
>   try (PreparedStatement st = conn.prepareStatement(
>       "select * from PERSONS where partId=?")) {
>     st.setInt(1, part);
>
>     try (ResultSet rs = st.executeQuery()) {
>       while (rs.next()) {
>         Person person = new Person(rs.getLong(1), rs.getString(2), rs.getString(3));
>
>         clo.apply(person.getId(), person);
>       }
>     }
>   }
>   catch (SQLException e) {
>     throw new CacheLoaderException("Failed to load values from cache store.", e);
>   }
> }
>
> I have a question about a real scenario in the code above, in the line
> constructing Person: my table (like Person) has 100 columns, so I would
> have to list many columns, which is not very efficient.
>
> But in the default implementation of cache.loadCache(), there is good code
> for mapping the DB table to a cache object.
>
> Can I reuse this code through some API?
>
> Thanks for your reply.
>
>
> --
> Bob
>
> 
> ---
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
> 
> ---
>



-- 
Vladislav Pyatkov


Re: Couchbase as persistent store

2016-10-21 Thread Igor Sapego
You are welcome )

Best Regards,
Igor

On Fri, Oct 21, 2016 at 10:39 AM, kvipin  wrote:

> Igor, thanks a lot for confirming that everything is working fine on the
> Apache Ignite side. That helped me focus on my code only, which was really
> the problematic piece.
>
> The biggest problem was that I didn't have exception handling in my code,
> hence I was not getting any clue. Once I added exception handling,
> everything was out in the open.
>
> The first problem was that I had the "storeKeepBinary" property enabled
> through my configuration file, which was causing a problem when typecasting
> the record back to a TestTable object before constructing a JSON document
> from it. The second problem was that the bytes and timestamp Java datatypes
> couldn't be stored directly in the JSON object, so I stored them as string
> data.
> Following is the relevant portion of working code:
>
> ...
> // This method is called whenever "putAll(...)" methods are called on IgniteCache.
> @Override public void writeAll(Collection<Cache.Entry<? extends K, ? extends V>> entries)
>     throws CacheWriterException {
>     Bucket conn = connection();
>     // Syntax of the MERGE statement is database specific and should be
>     // adapted for your database. If your database does not support MERGE,
>     // then use sequential update/insert statements.
>     for (Cache.Entry<? extends K, ? extends V> entry : entries) {
>         try {
>             TestTable val = entry.getValue(); // type casting error if "storeKeepBinary" property enabled
>             conn.insert(JsonDocument.create(entry.getKey().toString(),
>                 JsonObject.create()
>                     .put("tid", val.getTid())
>                     .put("idint", val.getIdint())
>                     .put("idbigint", val.getIdbigint())
>                     .put("idchar", val.getIdchar())
>                     .put("idbinary", val.getIdbinary().toString())
>                     .put("idvarbinary", val.getIdvarbinary().toString())
>                     .put("idvarchar", val.getIdvarchar())
>                     .put("idts", val.getIdts().toString())));
>             // bytes couldn't be stored in the JSON object; applied toString
>             // on them to make it work
>         } catch (Exception e) {
>             System.out.println("There was an error inserting record: " + e.getMessage());
>         }
>     }
> }
> ...
>
> Now things work fine for me.
>
> Special thanks to Igor and Val for giving your precious time, things
> would've been much more difficult without your kind support.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Couchbase-as-persistent-store-tp7476p8396.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite metrics

2016-10-21 Thread Vladislav Pyatkov
Hi Anil,

The "Non heap" row does not contain information about caches. If you want to
see how much memory is used by a particular cache, you can enable metrics for
it (the statisticsEnabled property of the cache configuration) and then track
the cache's metrics through the API:

ignite.cache("name").metrics().getOffHeapAllocatedSize()

or through the JMX bean.
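Enabling per-cache statistics in Spring XML typically looks like this (cache name illustrative; the property corresponds to CacheConfiguration.setStatisticsEnabled):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="name"/>
    <!-- Collect cache metrics so metrics() and the JMX bean report real values. -->
    <property name="statisticsEnabled" value="true"/>
</bean>
```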

On Fri, Oct 21, 2016 at 10:38 AM, Anil  wrote:

> HI,
>
> I have loaded around 20M records into a 4-node Ignite cluster.
>
> Following is the ignite metrics logged in the log of one node.
>
>
> ^-- Node [id=c0e3dc45, name=my-grid, uptime=20:17:27:096]
> ^-- H/N/C [hosts=4, nodes=4, CPUs=32]
> ^-- CPU [cur=0.23%, avg=0.26%, GC=0%]
> ^-- Heap [used=999MB, free=71.96%, comm=1819MB]
> ^-- Non heap [used=101MB, free=-1%, comm=105MB]
> ^-- Public thread pool [active=0, idle=16, qSize=0]
> ^-- System thread pool [active=0, idle=16, qSize=0]
> ^-- Outbound messages queue [size=0]
>
>
> Each node is consuming around 20 GB of RAM (as seen from the htop command).
>
> Ignite Configuration :
>
>  
> 
> 
> 
> 
> 
>
> From the log, non-heap used is 101 MB and heap used is 999 MB, but the
> actual RAM used by the process is 20 GB.
>
> Can you please clarify these numbers?
>
> Thanks
>



-- 
Vladislav Pyatkov


Re: Does Apache Ignite is suitable for real time notifications in a distributed project?

2016-10-21 Thread Pavel Tupitsyn
Hi Alexandr,

Ignite is a good fit for your use case. (4) and (5) can be achieved
via Continuous Query [1] with a remote filter and local listener:

- create an Ignite cache to store messages (use expiration to store
temporarily)
- start a continuous query on all servers with a remote filter and local
listener
- add messages to cache, include receiver user name
- remote filter analyses the message and accepts only messages which are
for the users on the current server
- local listener delivers the message through WebSocket

Let me know if this helps.

Pavel.

[1] https://apacheignite.readme.io/docs/continuous-queries
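The server-side filtering in the steps above boils down to a predicate on the message's receiver. A plain-Java sketch of just that predicate (Message is a hypothetical class; in Ignite the same check would live inside the remote filter registered with the ContinuousQuery):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

public class ReceiverFilterSketch {
    public static class Message {
        public final String from, to, body;
        public Message(String from, String to, String body) {
            this.from = from; this.to = to; this.body = body;
        }
    }

    // Each server node keeps the set of users connected to it via WebSocket
    // and accepts only messages addressed to one of them.
    static Predicate<Message> filterFor(Set<String> localUsers) {
        return msg -> localUsers.contains(msg.to);
    }

    public static void main(String[] args) {
        Set<String> local = new HashSet<>();
        local.add("B");
        Predicate<Message> filter = filterFor(local);
        System.out.println(filter.test(new Message("A", "B", "hi"))); // true: B is on this server
        System.out.println(filter.test(new Message("A", "D", "hi"))); // false: D is elsewhere
    }
}
```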



On Fri, Oct 21, 2016 at 9:49 AM, Jörn Franke  wrote:

> Hi,
>
> For me that looks more like something suitable for stomp.js+messaging bus
> (eg rabbitmq).
>
> > On 21 Oct 2016, at 07:08, Alexandr Porunov 
> wrote:
> >
> > Hello,
> >
> > I am developing a messaging system with notifications via WebSockets
> (When the user 'A' sends a message to the user 'B' I need to show a
> notification for the user 'B' about a new message). Different users are
> connected to different servers. I wonder whether Apache Ignite is
> suitable for this kind of situation.
> > I am at the design stage right now. I think it has to be like this:
> > 1. User 'A' sends a message to the user 'B'
> > 2. Server which is connected with the user 'A' receives the message.
> > 3. Server which is connected with the user 'A' sends the message to the
> Apache Ignite.
> > 4. Apache Ignite somehow understands to which server it has to deliver
> the message.
> > 5. Apache Ignite sends the message to the server which is connected with
> the user 'B'.
> > 6. Server which is connected with the user 'B' sends a notification to
> the user 'B' through the WebSocket.
> >
> > Maybe I am wrong about the design for real-time notifications. Maybe it
> has to be done in a totally different way. I haven't found information
> about building notifications in a distributed project.
> >
> > Is it possible to build such a system with Apache Ignite or Apache
> Ignite isn't suitable for such purposes?
> >
> > Sincerely,
> > Alexandr
>


One question about Partition-aware data loading

2016-10-21 Thread 胡永亮/Bob
Hi everyone,

In official document, there are some code about Partition-aware data 
loading.

private void loadPartition(Connection conn, int part,
    IgniteBiInClosure<Long, Person> clo) {
    try (PreparedStatement st = conn.prepareStatement(
            "select * from PERSONS where partId=?")) {
        st.setInt(1, part);

        try (ResultSet rs = st.executeQuery()) {
            while (rs.next()) {
                Person person = new Person(rs.getLong(1), rs.getString(2),
                    rs.getString(3));

                clo.apply(person.getId(), person);
            }
        }
    }
    catch (SQLException e) {
        throw new CacheLoaderException("Failed to load values from cache store.", e);
    }
}

I have a question about the "select *" statement above in a real scenario:
my table, like Person, has 100 columns, so I would have to list many columns,
which is not very efficient. But the default implementation of
cache.loadCache() already contains good code for mapping the DB table to a
cache object. Can I reuse that code through some API? Thanks for your reply.
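If the goal is to avoid listing all 100 columns by hand, one possible direction (a sketch, not a confirmed answer) is Ignite's built-in CacheJdbcPojoStore, which drives the table-to-object mapping declaratively through JdbcType/JdbcTypeField descriptors instead of hand-written SQL. All table, class, and field names below are hypothetical, and property names should be checked against your Ignite version:

```xml
<!-- Hypothetical sketch: declarative column-to-field mapping for
     CacheJdbcPojoStore; verify against your Ignite version. -->
<bean class="org.apache.ignite.cache.store.jdbc.JdbcType">
    <property name="cacheName" value="personCache"/>
    <property name="databaseTable" value="PERSONS"/>
    <property name="keyType" value="java.lang.Long"/>
    <property name="valueType" value="com.example.Person"/>
    <property name="valueFields">
        <list>
            <!-- One JdbcTypeField per column: (SQL type, DB column name,
                 Java field type, POJO field name). Repeat per column. -->
            <bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
                <constructor-arg value="#{T(java.sql.Types).VARCHAR}"/>
                <constructor-arg value="NAME"/>
                <constructor-arg value="java.lang.String"/>
                <constructor-arg value="name"/>
            </bean>
        </list>
    </property>
</bean>
```

With the store configured this way, cache.loadCache(...) can use the generated SQL and mapping, so the column list lives in one place in configuration rather than in hand-written queries.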



Bob


---
Confidentiality Notice: The information contained in this e-mail and any 
accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential 
and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of 
this communication is
not the intended recipient, unauthorized use, forwarding, printing,  storing, 
disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this 
communication in error,please
immediately notify the sender by return e-mail, and delete the original message 
and all copies from
your system. Thank you.
---


Ignite query threads

2016-10-21 Thread Anil
Hi Ignite team,

Is there a way to configure a fixed set of resources (threads) only for
Ignite queries?

Jobs, queries and other tasks will be running on the Ignite cluster at the
same time, so having a fixed set of resources for queries would ensure that
query response time is not impacted.

I see the public and system thread pools. How can I use them as fixed
resources for a particular kind of task?

Thanks.
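For reference, the shared pool sizes themselves are configurable on IgniteConfiguration. A hedged Spring XML sketch follows; property availability varies by Ignite release, so treat this as an assumption to verify:

```xml
<!-- Sketch: sizing Ignite's shared thread pools. In older releases SQL
     queries run in the public pool; newer releases also expose a
     dedicated queryThreadPoolSize property. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Pool that runs compute jobs and user tasks. -->
    <property name="publicThreadPoolSize" value="16"/>
    <!-- Pool for internal cache and system operations. -->
    <property name="systemThreadPoolSize" value="16"/>
</bean>
```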


Re: Couchbase as persistent store

2016-10-21 Thread kvipin
Igor, thanks a lot for confirming that everything is working fine on the
Apache Ignite side. That helped me focus on my own code, which was really
the problematic piece.

The biggest problem was that I didn't have exception handling in my code,
hence I was not getting any clue. Once I added exception handling,
everything was out in the open.

The first problem was that I had the "storeKeepBinary" property enabled in
my configuration file, which caused a problem when casting the record back
to a TestTable object before constructing a JSON document from it. The
second problem was that the bytes and timestamp Java datatypes couldn't be
stored directly in a JSON object, so I stored them as strings.

Following is the relevant portion of working code:

...
// This method is called whenever "putAll(...)" methods are called
// on IgniteCache.
@Override public void writeAll(Collection<Cache.Entry<?, ? extends TestTable>> entries)
        throws CacheWriterException {
    Bucket conn = connection();
    for (Cache.Entry<?, ? extends TestTable> entry : entries) {
        try {
            // Type casting error here if "storeKeepBinary" is enabled.
            TestTable val = entry.getValue();
            conn.insert(JsonDocument.create(entry.getKey().toString(),
                JsonObject.create()
                    .put("tid", val.getTid())
                    .put("idint", val.getIdint())
                    .put("idbigint", val.getIdbigint())
                    .put("idchar", val.getIdchar())
                    // bytes can't be stored in a JSON object directly;
                    // toString() is applied to make it work
                    .put("idbinary", val.getIdbinary().toString())
                    .put("idvarbinary", val.getIdvarbinary().toString())
                    .put("idvarchar", val.getIdvarchar())
                    // timestamp stored as string for the same reason
                    .put("idts", val.getIdts().toString())));
        } catch (Exception e) {
            System.out.println("There was an error inserting record: " + e.getMessage());
        }
    }
}
...

Now things work fine for me.

Special thanks to Igor and Val for giving your precious time, things
would've been much more difficult without your kind support.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Couchbase-as-persistent-store-tp7476p8396.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite metrics

2016-10-21 Thread Anil
Hi,

I have loaded around 20 M records into a 4-node Ignite cluster.

Following are the Ignite metrics logged by one node:


^-- Node [id=c0e3dc45, name=my-grid, uptime=20:17:27:096]
^-- H/N/C [hosts=4, nodes=4, CPUs=32]
^-- CPU [cur=0.23%, avg=0.26%, GC=0%]
^-- Heap [used=999MB, free=71.96%, comm=1819MB]
^-- Non heap [used=101MB, free=-1%, comm=105MB]
^-- Public thread pool [active=0, idle=16, qSize=0]
^-- System thread pool [active=0, idle=16, qSize=0]
^-- Outbound messages queue [size=0]


Each node is consuming around 20 GB of RAM (as seen via the htop command).

Ignite Configuration: (the XML snippet was stripped by the list archive)

From the log, non-heap used is 101 MB and heap used is 999 MB, but the
actual RAM used by the Java process is 20 GB.

Can you please clarify these numbers?

Thanks


Re: ClassCastException while fetching data from IgniteCache (with custom persistent store)

2016-10-21 Thread amit
Val,
Thanks for the quick reply.
Peer class loading is not configured, and the cache is running under a
Spring Boot app.

You are right, it is a class-loader issue.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ClassCastException-while-fetching-data-from-IgniteCache-with-custom-persistent-store-tp8377p8394.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Does Apache Ignite is suitable for real time notifications in a distributed project?

2016-10-21 Thread Jörn Franke
Hi,

For me that looks more like a job for STOMP.js plus a messaging bus (e.g.
RabbitMQ).

> On 21 Oct 2016, at 07:08, Alexandr Porunov  wrote:
> 
> Hello,
> 
> I am developing a messaging system with notifications via WebSockets (When 
> the user 'A' sends a message to the user 'B' I need to show a notification 
> for the user 'B' about a new message). Different users are connected to 
> different servers. I wonder whether Apache Ignite is suitable for this 
> kind of situation.
> I am at the design stage right now. I think it has to work like this:
> 1. User 'A' sends a message to the user 'B'
> 2. Server which is connected with the user 'A' receives the message.
> 3. Server which is connected with the user 'A' sends the message to the 
> Apache Ignite.
> 4. Apache Ignite somehow understands to which server it has to deliver the 
> message.
> 5. Apache Ignite sends the message to the server which is connected with the 
> user 'B'.
> 6. Server which is connected with the user 'B' sends a notification to the 
> user 'B' through the WebSocket.
> 
> Maybe I am wrong about the design for real-time notifications. Maybe it has 
> to be done in a totally different way. I haven't found information about 
> building notifications in a distributed project.
> 
> Is it possible to build such a system with Apache Ignite or Apache Ignite 
> isn't suitable for such purposes? 
> 
> Sincerely,
> Alexandr
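Whatever transport is chosen (Ignite messaging, STOMP over a broker, or something else), the routing in steps 4-5 reduces to a per-user registry that maps a user to the server currently holding their connection. A minimal in-process Java sketch of that idea follows; all class and method names are hypothetical illustrations, not Ignite API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

/**
 * Toy per-user message router: each server registers a delivery callback
 * for the users connected to it; send() forwards a message to whichever
 * server currently holds the recipient (steps 4-5 of the design above).
 */
public class NotificationRouter {
    private final Map<String, Consumer<String>> routes = new ConcurrentHashMap<>();

    /** Called by a server when user userId connects to it (step 2). */
    public void register(String userId, Consumer<String> serverEndpoint) {
        routes.put(userId, serverEndpoint);
    }

    /** Steps 4-5: look up the server holding the recipient and forward. */
    public boolean send(String toUserId, String message) {
        Consumer<String> endpoint = routes.get(toUserId);
        if (endpoint == null)
            return false; // recipient offline, no server holds them
        endpoint.accept(message);
        return true;
    }

    public static void main(String[] args) {
        NotificationRouter router = new NotificationRouter();
        List<String> inboxB = new CopyOnWriteArrayList<>();
        // The server holding user 'B' registers a callback; in a real
        // system this callback would push over B's WebSocket (step 6).
        router.register("B", inboxB::add);
        router.send("B", "hello from A");
        System.out.println(inboxB);
    }
}
```

In a distributed deployment this registry would live in a shared cache or be replaced by topic-based messaging, so that any server can resolve where user 'B' is connected.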