Re: Embedded mode ignite on spark

2016-08-16 Thread percent620
==5)===
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/filecache/652/spark-assembly-1.6.0-hadoop2.5.0-cdh5.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/17 09:28:10 INFO executor.CoarseGrainedExecutorBackend: Registered
signal handlers for [TERM, HUP, INT]
16/08/17 09:28:11 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:11 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:11 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:12 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:12 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/17 09:28:12 INFO Remoting: Starting remoting
16/08/17 09:28:13 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutorActorSystem@xxx:46368]
16/08/17 09:28:13 INFO util.Utils: Successfully started service
'sparkExecutorActorSystem' on port 46368.
16/08/17 09:28:13 INFO storage.DiskBlockManager: Created local directory at
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/blockmgr-8f0bd302-1f10-4f03-b639-e5a38341e80c
16/08/17 09:28:13 INFO storage.MemoryStore: MemoryStore started with
capacity 10.3 GB
16/08/17 09:28:13 INFO executor.CoarseGrainedExecutorBackend: Connecting to
driver: spark://CoarseGrainedScheduler@yyy:64381
16/08/17 09:28:13 INFO executor.CoarseGrainedExecutorBackend: Successfully
registered with driver
16/08/17 09:28:13 INFO executor.Executor: Starting executor ID 5 on host

16/08/17 09:28:13 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 9012.
16/08/17 09:28:13 INFO netty.NettyBlockTransferService: Server created on
9012
16/08/17 09:28:13 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/08/17 09:28:13 INFO storage.BlockManagerMaster: Registered BlockManager
16/08/17 09:28:17 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 0
16/08/17 09:28:17 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID
0)
16/08/17 09:28:17 INFO executor.Executor: Fetching
http://xxx:64443/jars/spark_zmqpull_engine841.jar with timestamp
1471397276248
16/08/17 09:28:17 INFO util.Utils: Fetching
http://xxx:64443/jars/spark_zmqpull_engine841.jar to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-47ab6a3d-0286-492c-b9d1-fd5bd52ec80d/fetchFileTemp1877526252998936286.tmp
16/08/17 09:28:23 INFO util.Utils: Copying
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-47ab6a3d-0286-492c-b9d1-fd5bd52ec80d/15389795251471397276248_cache
to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_07/./spark_zmqpull_engine841.jar
16/08/17 09:28:24 INFO executor.Executor: Adding
file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_07/./spark_zmqpull_engine841.jar
to class loader
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Started reading broadcast
variable 0
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 1571.0 B, free 1571.0 B)
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Reading broadcast
variable 0 took 144 ms
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 1608.0 B, free 3.1 KB)
16/08/17 09:28:26 INFO internal.IgniteKernal: 

>>>    __________  ________________  
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7    // /  / / / _/    
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 1.6.0#20160518-sha1:0b22c45b
>>> 2016 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

16/08/17 09:28:26 INFO internal.IgniteKernal: Config URL: n/a
16/08/17 09:28:26 INFO internal.IgniteKernal: Daemon mode: off
16/08/17 09:28:26 INFO internal.IgniteKernal: OS: Linux
2.6.32-220.23.2.ali878.el6.x86_64 amd64
16/08/17 09:28:26 INFO internal.IgniteKernal: OS user: hbase
16/08/17 09:28:26 INFO internal.IgniteKernal: Language runtime: Java
Platform API Specification ver. 1.7
16/08/17 09:28:26 INFO 

Re: Embedded mode ignite on spark

2016-08-16 Thread percent620
4)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/filecache/785/spark-assembly-1.6.0-hadoop2.5.0-cdh5.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/17 09:28:13 INFO executor.CoarseGrainedExecutorBackend: Registered
signal handlers for [TERM, HUP, INT]
16/08/17 09:28:14 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:14 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:14 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:15 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:15 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:15 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/17 09:28:15 INFO Remoting: Starting remoting
16/08/17 09:28:15 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutorActorSystem@v:62569]
16/08/17 09:28:15 INFO util.Utils: Successfully started service
'sparkExecutorActorSystem' on port 62569.
16/08/17 09:28:15 INFO storage.DiskBlockManager: Created local directory at
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/blockmgr-eeadd1d1-8dd8-48ef-84da-f3ed72048c80
16/08/17 09:28:15 INFO storage.MemoryStore: MemoryStore started with
capacity 10.3 GB
16/08/17 09:28:16 INFO executor.CoarseGrainedExecutorBackend: Connecting to
driver: spark://CoarseGrainedScheduler@10.194.70.26:64381
16/08/17 09:28:16 INFO executor.CoarseGrainedExecutorBackend: Successfully
registered with driver
16/08/17 09:28:16 INFO executor.Executor: Starting executor ID 4 on host

16/08/17 09:28:16 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 15883.
16/08/17 09:28:16 INFO netty.NettyBlockTransferService: Server created on
15883
16/08/17 09:28:16 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/08/17 09:28:16 INFO storage.BlockManagerMaster: Registered BlockManager
16/08/17 09:28:17 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 4
16/08/17 09:28:17 INFO executor.Executor: Running task 4.0 in stage 0.0 (TID
4)
16/08/17 09:28:17 INFO executor.Executor: Fetching
http://10.194.70.26:64443/jars/spark_zmqpull_engine841.jar with timestamp
1471397276248
16/08/17 09:28:17 INFO util.Utils: Fetching
http://10.194.70.26:64443/jars/spark_zmqpull_engine841.jar to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-e30ebe68-c99e-443d-87c2-9f9372e9759d/fetchFileTemp4290126145592989981.tmp
16/08/17 09:28:23 INFO util.Utils: Copying
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-e30ebe68-c99e-443d-87c2-9f9372e9759d/15389795251471397276248_cache
to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_05/./spark_zmqpull_engine841.jar
16/08/17 09:28:24 INFO executor.Executor: Adding
file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_05/./spark_zmqpull_engine841.jar
to class loader
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Started reading broadcast
variable 0
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 1571.0 B, free 1571.0 B)
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Reading broadcast
variable 0 took 143 ms
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 1608.0 B, free 3.1 KB)
16/08/17 09:28:25 INFO internal.IgniteKernal: 

>>>    __________  ________________  
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7    // /  / / / _/    
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 1.6.0#20160518-sha1:0b22c45b
>>> 2016 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

16/08/17 09:28:25 INFO internal.IgniteKernal: Config URL: n/a
16/08/17 09:28:25 INFO internal.IgniteKernal: Daemon mode: off
16/08/17 09:28:25 INFO internal.IgniteKernal: OS: Linux
2.6.32-220.23.2.ali878.el6.x86_64 amd64
16/08/17 09:28:25 INFO internal.IgniteKernal: OS user: hbase
16/08/17 09:28:25 INFO internal.IgniteKernal: Language runtime: Java
Platform API Specification ver. 1.7

Re: Embedded mode ignite on spark

2016-08-16 Thread percent620
Thanks vkulichenko.

1) I have tried the code below, using the new method you suggested, on our
production environment and submitted it to the YARN cluster. The final
result is not correct; it seems that data was lost in this case.

val sharedRDD = ic.fromCache(new CacheConfiguration[String,
String]().setCacheMode(CacheMode.REPLICATED));

2) These are all the executor logs.

spark-submit command:

$ /u01/spark-1.6.0-hive/bin/spark-submit --driver-memory 4G \
    --class com..ValidSparkCache --master yarn \
    --executor-cores 5 --executor-memory 15000m --num-executors 5 \
    --conf spark.rdd.compress=false \
    --conf spark.shuffle.compress=false \
    --conf spark.broadcast.compress=false \
    /u01/xx/spark_zmqpull_engine841.jar

initialRDD count => 10, partitions => 10
totalCounter => 1024, but it should be 10

5 executors in total (--num-executors 5).

==1)=
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/filecache/871/spark-assembly-1.6.0-hadoop2.5.0-cdh5.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/17 09:28:11 INFO executor.CoarseGrainedExecutorBackend: Registered
signal handlers for [TERM, HUP, INT]
16/08/17 09:28:12 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:13 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:13 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:13 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:13 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/17 09:28:13 INFO Remoting: Starting remoting
16/08/17 09:28:13 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutorActorSystem@:18374]
16/08/17 09:28:13 INFO util.Utils: Successfully started service
'sparkExecutorActorSystem' on port 18374.
16/08/17 09:28:13 INFO storage.DiskBlockManager: Created local directory at
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/blockmgr-5c0d998e-15b8-43c5-b73d-96e1e76af179
16/08/17 09:28:14 INFO storage.MemoryStore: MemoryStore started with
capacity 10.3 GB
16/08/17 09:28:14 INFO executor.CoarseGrainedExecutorBackend: Connecting to
driver: spark://CoarseGrainedScheduler@10.194.70.26:64381
16/08/17 09:28:14 INFO executor.CoarseGrainedExecutorBackend: Successfully
registered with driver
16/08/17 09:28:14 INFO executor.Executor: Starting executor ID 1 on host xxx
16/08/17 09:28:14 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 28593.
16/08/17 09:28:14 INFO netty.NettyBlockTransferService: Server created on
28593
16/08/17 09:28:14 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/08/17 09:28:14 INFO storage.BlockManagerMaster: Registered BlockManager
16/08/17 09:28:17 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 1
16/08/17 09:28:17 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID
1)
16/08/17 09:28:17 INFO executor.Executor: Fetching
http://10.194.70.26:64443/jars/spark_zmqpull_engine841.jar with timestamp
1471397276248
16/08/17 09:28:17 INFO util.Utils: Fetching
http://x:64443/jars/spark_zmqpull_engine841.jar to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-a963fd26-783c-4f09-947c-48163270764d/fetchFileTemp8027243307717085221.tmp
16/08/17 09:28:23 INFO util.Utils: Copying
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-a963fd26-783c-4f09-947c-48163270764d/15389795251471397276248_cache
to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_02/./spark_zmqpull_engine841.jar
16/08/17 09:28:24 INFO executor.Executor: Adding
file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_02/./spark_zmqpull_engine841.jar
to class loader
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Started reading broadcast
variable 0
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 1571.0 B, free 1571.0 B)
16/08/17 09:28:24 INFO 

what are the differences between put and replace method

2016-08-16 Thread timothy
What are the differences between the put and replace APIs? In the
documentation, the replace method is described as an atomic method.
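
For context, a minimal sketch of the JCache semantics the question refers
to (cache name and values are placeholders, not from the original post):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PutVsReplace {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, String> cache = ignite.getOrCreateCache("test");

            // put() stores the value unconditionally, creating the entry if absent.
            cache.put("k", "v1");

            // replace() is an atomic check-and-set: it writes only if the key
            // is already present, and returns whether it did.
            boolean replaced = cache.replace("k", "v2"); // true: "k" exists
            boolean ignored  = cache.replace("x", "v");  // false: no entry for "x"

            System.out.println(replaced + " " + ignored + " " + cache.get("k"));
        }
    }
}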




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-are-the-differences-between-put-and-replace-method-tp7117.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Near Cache

2016-08-16 Thread javastuff....@gmail.com
How can I monitor a NEAR cache? MBeans and Visor are not helping; is there
another way?
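
For what it's worth, a sketch of one possible way to inspect near-cache
contents locally (an assumption on my part, not a confirmed answer to this
thread):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;

public class NearCacheCheck {
    static void printNearSize(IgniteCache<Object, Object> cache) {
        // Counts only the entries currently held in this node's near cache.
        int nearSize = cache.localSize(CachePeekMode.NEAR);
        System.out.println("Near cache size: " + nearSize);
    }
}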





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-Cache-tp7033p7116.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: IgniteException: No FileSystem for scheme: hdfs

2016-08-16 Thread vkulichenko
Hi Vikram,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


vikramT wrote
> I would like to know how I can specify the Hadoop library path in the
> Java program provided in my question. As you might have observed, I am
> using just a Java program to initiate the Ignite application, which is
> starting but not able to access HDFS.

You can use the '-cp' option to provide an additional classpath when running
the Java app from the command line. See [1] for details.

[1] http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html
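
For example (all paths and class names here are placeholders; point them at
your actual application jar and Hadoop client jars):

java -cp "myapp.jar:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs/*" com.example.MyIgniteApp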

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/IgniteException-No-FileSystem-for-scheme-hdfs-tp7014p7115.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: CacheWriter#writeAll javadocs inconsistent with implementations ?

2016-08-16 Thread vkulichenko
Hi Kristian,

That's actually a good point! I see that we're handling this properly in the
cache store manager, but our own implementations are not correct. I created
a ticket: https://issues.apache.org/jira/browse/IGNITE-3700

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CacheWriter-writeAll-javadocs-inconsistent-with-implementations-tp7101p7113.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Puzzled about JVM configuration

2016-08-16 Thread vkulichenko
I would also add that these settings are optimized for on-heap caches. If
you have a different use case (especially a compute use case, which can
introduce many different types of load), additional tuning may be required.
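
For example, a compute-heavy deployment might instead favor a larger heap
and a pause-oriented collector (illustrative values only, not an official
recommendation):

-Xms10g -Xmx10g
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200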

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Puzzled-about-JVM-configuration-tp7091p7114.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Any plan for a book on Apache Ignite

2016-08-16 Thread Denis Magda
Shamim,

The table of contents is impressive. Do you plan to sell the book, or are
you going to distribute it for free?

In both cases I don't see anything wrong with adding the book to the docs
[1] or some other section of the site.

However, before adding it to the main Apache Ignite site, I think the book
should be reviewed by Apache Ignite committer(s).

[1] https://apacheignite.readme.io/docs

—
Denis
 
> On Aug 12, 2016, at 11:47 PM, srecon wrote:
> 
> Denis Magda wrote
>> Cross-posting to the dev list.
>> 
>> Wow, this is great. Are you one of the authors?
>> 
>> —
>> Denis
>> 
>>> On Aug 12, 2016, at 12:08 PM, srecon wrote:
>>> 
>>> One book is going to be published soon; you can check the table of
>>> contents here: https://leanpub.com/ignite. Best Regards
>>> View this message in context: Re: Any plan for a book on Apache Ignite
>>> http://apache-ignite-users.70518.x6.nabble.com/Any-plan-for-a-book-on-Apache-Ignite-tp357p7022.html
>>> Sent from the Apache Ignite Users mailing list archive
>>> http://apache-ignite-users.70518.x6.nabble.com/ at Nabble.com.
> 
> Yes, I am one of them.
> -- 
> Shamim
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Any-plan-for-a-book-on-Apache-Ignite-tp357p7037.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Ignite read through cache with cache loader factory does not work with CreatedExpiryPolicy

2016-08-16 Thread vkulichenko
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


itzmedinesh wrote
> I have enabled read-through and created a custom cache loader factory for
> my Ignite configuration (cacheTemplateConfig).
> 
> I have also set CreatedExpiryPolicy for the complete cache instance:
> 
> igniteCacheConfig.setExpiryPolicyFactory(CreatedExpiryPolicy
>     .factoryOf(CACHE_EXPIRY_DURATION.get(cacheConfig.getExpiryDuration())));
> 
> this.igniteCache = ignite.getOrCreateCache(cacheTemplateConfig);
> 
> When I use this.igniteCache.get(key), the load(key) method is called and
> my cache is populated. However, the cache entry does not expire after the
> duration of "ONE_MINUTE" which I have configured.
> 
> This configuration works as expected for AccessedExpiryPolicy and
> TouchedExpiryPolicy.
> 
> If I load an entry into the cache using igniteCache.put(key, value)
> (without the cache loader factory), then CreatedExpiryPolicy works
> perfectly.

It seems that you're right. I created a ticket [1] that you can watch.
Hopefully someone in the community will pick it up and fix it.

[1] https://issues.apache.org/jira/browse/IGNITE-3699
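
For anyone reproducing this, a minimal sketch of the reported setup (the
cache name and loader class are placeholders):

Ignite ignite = Ignition.start();

CacheConfiguration<String, String> cfg = new CacheConfiguration<>("myCache");
cfg.setReadThrough(true);
cfg.setCacheLoaderFactory(FactoryBuilder.factoryOf(MyCacheLoader.class));
cfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));

IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);

// An entry created by a get()-triggered load(key) should expire one minute
// after creation; per this report it does not (see IGNITE-3699).
String v = cache.get("key1");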

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-read-through-cache-with-cache-loader-factory-does-not-work-with-CreatedExpiryPolicy-tp7106p7111.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data collocation question

2016-08-16 Thread vkulichenko
AdamDec wrote
> Hmm... but I have several indexes with different types. What then?

In setIndexedTypes you have to provide classes that are scanned for
annotations. These classes can define multiple indexes.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-collocation-question-tp7042p7108.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Puzzled about JVM configuration

2016-08-16 Thread Vladislav Pyatkov
Hello,

Please provide additional information.
Can you provide a thread dump from the server?
What processes are running on the cluster?

We need to understand where the garbage is generated.
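
For example, a thread dump can be captured with jstack (replace <pid> with
the Ignite server's process id):

jstack -l <pid> > ignite-server-threads.txt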

On Tue, Aug 16, 2016 at 12:39 PM, yucigou  wrote:

> I'm a bit puzzled about the basic JVM configuration (which is said to
> have proven to provide fairly smooth throughput without large spikes) on
> the Ignite JVM and System Tuning page
> (https://apacheignite.readme.io/docs/jvm-and-system-tuning):
>
> -XX:NewSize=128m
> -XX:MaxNewSize=128m
> -XX:MaxTenuringThreshold=0
> -XX:SurvivorRatio=1024
>
> Below is the CPU and memory profile captured:
> [image: Ignite_basic_JVM_configuration.png]
>
> (By the way, there is no heavy job running for my application, except for
> the Ignite REST API, enabled with the Ignite web console invoking it
> through its agent.)
>
> You can see that the minor GCs are very frequent, and so are the full
> GCs, and the CPU usage is very high as a result. The size of the young
> gen space is very small. Also, live objects are not aged at all; they are
> promoted to the old gen immediately.
>
> I just wonder what is the reasoning behind these settings:
> (1) Small young gen space size
> (2) Tiny survivor space with max tenuring threshold set to zero (which
> means
> no aging)
>
> Thank you,
> Yuci
>
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Puzzled-about-JVM-configuration-tp7091.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Cannot get the data in Tableau

2016-08-16 Thread austin solomon
Hi Igor,

I have specified the "Cache" connection argument while connecting to
Ignite's ODBC driver, and I can see the column names of the created schema
(check my previous post for screenshots).

However, I am not able to preview or query the cache.

Below is my full XML configuration.

[The XML configuration was stripped by the mailing list archive; only the
standard Spring beans header and the discovery address range survive:
10.10.1.33:47500..47509]
Thanks,



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cannot-get-the-data-in-Tableau-tp6876p7104.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: how to verify the cached elements in each of partitions

2016-08-16 Thread Alexey Goncharuk
Hi,

If I understand correctly, you want to reduce the total number of
partitions for your cache to 2. Is there any reason you want to do this? It
is impossible to change the number of partitions without a full cluster
restart, so if at some point you want to add more nodes to your cluster, it
will be impossible unless you change your configuration and restart all the
nodes.

If this is really what you want, you need to override the method "int
partitions()" and return 2 from it (otherwise, as Vladislav mentioned, you
will still have 1024 partitions).
To check that your affinity function works, you can use the
ignite.affinity(String cacheName) interface.
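
For illustration, a minimal sketch of such an override (the class name is
made up; a real affinity function needs a stable key-to-partition mapping):

public class TwoPartitionAffinity extends RendezvousAffinityFunction {
    @Override public int partitions() {
        // Fix the cache at 2 partitions instead of the default 1024.
        return 2;
    }

    @Override public int partition(Object key) {
        // Any deterministic mapping into [0, partitions()) works; masking
        // the sign bit avoids a negative result for Integer.MIN_VALUE.
        return (key.hashCode() & Integer.MAX_VALUE) % 2;
    }
}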

2016-08-16 14:48 GMT+03:00 Vladislav Pyatkov :

> Hi,
>
> First, this method (public int partition(final Object key)) returns the
> partition number for a key; if you have 1024 partitions, it must return a
> number between 0 and 1023. Look at the implementation in
> RendezvousAffinityFunction [1].
>
> If you want to understand how partitions distributed by nodes, you can use
> the code snippet:
>
>
> // Getting affinity for the person cache.
> Affinity<Object> affinity = ignite().affinity(cacheName);
>
> // Building a list of all partition numbers.
> List<Integer> rndParts = new ArrayList<>(affinity.partitions());
>
> for (int i = 0; i < affinity.partitions(); i++)
>     rndParts.add(i);
>
> // Getting the partition-to-node mapping.
> Map<Integer, ClusterNode> partPerNodes =
>     affinity.mapPartitionsToNodes(rndParts);
>
>
> The number of entries on a node or in a partition can be seen via
> cache.localSizeLong(CachePeekMode.ALL) and cache.localSizeLong(1,
> CachePeekMode.ALL), respectively.
>
> [1] org.apache.ignite.cache.affinity.rendezvous.
> RendezvousAffinityFunction#partition
>
> On Tue, Aug 16, 2016 at 7:00 AM, minisoft_rm 
> wrote:
>
>> Hi Experts, I implemented the customised partition thing like below:
>> [
>> public class MattRendezvousAffinityFunction extends
>> RendezvousAffinityFunction
>> {
>>
>> ..
>> public int partition(final Object key){
>>.
>>//return 0 or 1 because I launch two nodes
>> }
>>
>> }
>> ]
>>
>> I feel I have already touched the most important part. Now I know the
>> default number of partitions is 1024.
>> After launching two Ignite server node JVMs, I expected to see items go
>> to partitions on each of the nodes.
>>
>> But how can I check this? (I feel all the items go to the first started
>> node.)
>>
>> The "cache -scan" command cannot show entries within one partition...
>> right?
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/how-to-verify-the-cached-elements-in-each-of-partitions-tp7085.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Vladislav Pyatkov
>


Re: LOOK THORUGH THIS ERROR

2016-08-16 Thread Ravi Puri
[inline image: jar dependency list, not preserved by the archive]

This is my jar list of dependencies, which I reduced and matched as per your
suggestions. I still get the same error; what should I do now?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/LOOK-THORUGH-THIS-ERROR-tp6977p7102.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Local mode recommended setup

2016-08-16 Thread Piubelli, Manuel
Perfect, that answers my question, thank you Vladislav.

From: Vladislav Pyatkov [mailto:vldpyat...@gmail.com]
Sent: 16 August 2016 11:49
To: user@ignite.apache.org
Subject: Re: Local mode recommended setup

Hello,

If I understand correctly, you want to create a node that runs until it is
cancelled by the system (for example, by a kill command).
If so, you should not invoke ignite.close() (explicitly or via a
try-with-resources block):

public static void main(String[] args) throws URISyntaxException, IOException {
    Ignition.setClientMode(false);

    Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

    IgniteCache cache = FullOrdCacheLoader.getCacherLoader().loadAndGetCache();
}

On Tue, Aug 16, 2016 at 1:14 PM, Piubelli, Manuel wrote:
Hi All,

I am trying to set up a cluster in local mode: all my data fits in RAM,
operations are read-only, and data is loaded from a cache or disk at
startup. I have a very simple question:

How can I configure some code to be run at the startup of every node? When
defining my own start class, the Ignite node expectedly stops after loading
the cache. Is there a stayAlive() method or similar that I can execute on
the Ignite or cluster object?

Sample Code:

public static void main(String[] args) throws URISyntaxException, IOException {
    Ignition.setClientMode(false);

    try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
        IgniteCache cache = FullOrdCacheLoader.getCacherLoader().loadAndGetCache();
        // load complete, now stay alive to receive client requests
    }
}

Much appreciated,
Manuel



--
Vladislav Pyatkov


Re: Ignite Compute - Rebalance Scenario

2016-08-16 Thread Taras Ledkov

Hi,

An affinity job doesn't block rebalancing. We reserve the partition to
prevent its data from being removed from the node while the job is running.
So the topology changes and partitions are moved to other nodes, but the
data stays available locally for the duration of the affinity job. Magic!
Please modify your test as shown below and verify that the data is
available for a local query.


@IgniteInstanceResource
private Ignite ignite;
...
public void run() {
    ...
    while (true) {
        assert "a".equals(ignite.cache(CACHE_NAME).localPeek("a", CachePeekMode.ALL));

On 16.08.2016 11:32, Kamal C wrote:

Vladislav,

Partitions are moved to another node while executing the job.

Val,

I've tried with the latest nighty build. Still, it has the same behavior.
To reproduce the issue, I've used `while(true)` inside the IgniteRunnable
task. You can reproduce it with the below gist[1].

[1]: https://gist.github.com/Kamal15/0a4066de152b8ebc856fc264f7b4037d

Regards,
Kamal C

On Sat, Aug 13, 2016 at 12:15 AM, vkulichenko wrote:


Note that this was changed recently [1] and the change was not
released in
1.7. Kamal, can you try the nightly build [2] and check if it
works as you
expect there?

[1] https://issues.apache.org/jira/browse/IGNITE-2310

[2] https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/





--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Compute-Rebalance-Scenario-tp7004p7021.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.




--
Taras Ledkov
Mail-To: tled...@gridgain.com



Rebalance is skipped

2016-08-16 Thread Sergii Tyshlek
Hello there!

Some time ago we started moving from old GridGain to current Apache Ignite
(1.6, now 1.7).

Here are some cache config properties we use:


cacheMode=PARTITIONED
atomicityMode=ATOMIC
atomicWriteOrderMode=PRIMARY
writeSynchronizationMode=PRIMARY_SYNC

rebalanceMode=ASYNC
rebalanceBatchesPrefetchCount=2
rebalanceDelay=3
rebalanceTimeout=1

backups=1
affinity=FairAffinityFunction
affinity.partitions=1024
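
For reference, the same properties expressed through the Java configuration
API (a sketch; the cache name is taken from the logs below):

CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("p_queryResults");
ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
ccfg.setAtomicWriteOrderMode(CacheAtomicWriteOrderMode.PRIMARY);
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
ccfg.setRebalanceBatchesPrefetchCount(2);
ccfg.setRebalanceDelay(3);
ccfg.setRebalanceTimeout(1);
ccfg.setBackups(1);
ccfg.setAffinity(new FairAffinityFunction(1024));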


At first, everything looked OK, but then I noticed that our data is not
distributed evenly between nodes (despite the fact that we use
FairAffinityFunction, the coordinator node hoards most of the cache
entries). Later I discovered (by wrapping my own class around
FairAffinityFunction) that the affinity function works as expected, but the
rebalancing does not.

Shortly after starting 6 nodes (after the last one joins the topology), the
following debug logs appear:

-
2016-08-15 07:36:43,070 DEBUG [exchange-worker-#318%EQGrid%]
[preloader.GridDhtPreloader] -  Skipping partition
assignment (state is not MOVING): GridDhtLocalPartition [id=0,
map=org.apache.ignite.internal.processors.cache.GridCacheCon
currentMapImpl@5c35248d, rmvQueue=GridCircularBuffer [sizeMask=255,
idxGen=0], cntr=0, state=OWNING, reservations=0, empty=true,
createTime=08/15/2016 07:36:33]
...
// repeats total of 1024 times, where id=0..1023, one for every partition
// then it's followed by
2016-08-15 07:36:43,177 DEBUG [exchange-worker-#318%EQGrid%]
[preloader.GridDhtPartitionDemander] -  Adding partition
assignments: GridDhtPreloaderAssignments [topVer=AffinityTopologyVersion
[topVer=6, minorTopVer=0], cancelled=false, exchId=GridDhtPartitionExchangeId
[topVer=AffinityTopologyVersion [topVer=6, minorTopVer=0], nodeId=383991fb,
evt=NODE_JOINED], super={}]
2016-08-15 07:36:43,177 DEBUG [exchange-worker-#318%EQGrid%]
[preloader.GridDhtPartitionDemander] -  Rebalancing is not
required [cache=p_queryResults, topology=AffinityTopologyVersion [topVer=6,
minorTopVer=0]]
2016-08-15 07:36:43,178 DEBUG [exchange-worker-#318%EQGrid%]
[preloader.GridDhtPartitionDemander] -  Completed rebalance
future: RebalanceFuture [sndStoppedEvnt=false,
topVer=AffinityTopologyVersion [topVer=6, minorTopVer=0], updateSeq=6]
2016-08-15 07:36:43,179 INFO [exchange-worker-#318%EQGrid%]
[cache.GridCachePartitionExchangeManager] - Skipping rebalancing (nothing
scheduled) [top=AffinityTopologyVersion [topVer=6, minorTopVer=0],
evt=NODE_JOINED, node=383991fb-5453-4893-9040-1baa1291881a]
-

So I started digging. Using GridDhtPartitionTopology, I got partitions map,
which (aggregated) looked like this:
-
Node: 38ae4165-474d-4ed4-a292-cca78b8df5c3, partitions: {MOVING=340}
Node: 8ac8d327-dc59-473f-a3e1-c5861f63f0e6, partitions: {MOVING=341}
Node: c7047158-9e7b-494f-bceb-3a5774853a6c, partitions: {MOVING=342}
Node: c9cc1a1f-f037-43c8-8855-0f1ccb8f0ec5, partitions: {MOVING=342}
Node: dce874ff-cc1e-41c8-9e82-abfb3dfa535e, partitions: {OWNING=1024}
Node: de783f6d-dc48-46b8-a387-91dd3d181150, partitions: {MOVING=342}
-

Important point is that such distribution never changes, neither right
after grid start, nor after few hours. Ingesting (or not ingesting) data
also doesn't seem to affect this. Changing rebalanceDelay and commenting
out affinityMapper also made no difference.
From what I'm seeing, the affinity function distributes partitions evenly
(6 nodes, ~341 partitions each = 2048, i.e. 1024 partitions plus a backup of
each), but the coordinator node just never releases 1024-341=683 partitions,
remaining an owner of every partition in the grid.

Please, help me understand what might cause such behavior. I included logs
and properties, which seemed relevant to the issue, but I'll provide more
if needed.

- regards, Sergii




Re: Fail to join topology and repeat join process

2016-08-16 Thread Jason
Hi Vladislav,

By scanning through the log, I found that the serialization error may be
the root cause. When the node preceding the newly added node tried to send
the messages (NodeAddedMessage/NodeAddFinishedMessage) to the new node, it
failed, so it marked the new node as "failed" and sent a NodeFailedMessage
to the coordinator; when the coordinator received that, it removed the new
node and notified all the other nodes.

I've attached the serialization error; would you mind taking a look?

Thanks,
-Jason

serialization_error.txt (attached)





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Fail-to-join-topology-and-repeat-join-process-tp6987p7092.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Ignite performance

2016-08-16 Thread Piubelli, Manuel
Hello,



Just to close this off: I separated the latency and throughput measurements
and also found the main reason for the flat growth: a slow node in the
cluster. With all clusters in the same subnet, I am seeing the following
latencies:



Latency (ms) by percentile:

Entries (K)      50     75     90     95     99    100
5                 3      3      3      4      9     93
50               10     12     13     14     23     46
500              74     77     83     89    108    119
10,000         1457   2010   2304   2756   3471   3476



Thanks all for the help.



Manuel



-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: 06 August 2016 06:20
To: user@ignite.apache.org
Subject: RE: Ignite performance



Hi Manuel,



Are the results in the latest table in milliseconds? I don't see significant
growth with the number of elements there; am I missing something? Actually,
your query calculates an average over the whole data set, which means the
cache is scanned, so I would expect the latency to depend on the cache size.
Makes sense?



-Val







--

View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-performance-tp6703p6825.html

Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Puzzled about JVM configuration

2016-08-16 Thread yucigou
I'm a bit puzzled about the basic JVM configuration (which is said to have
proven to provide fairly smooth throughput without large spikes) on the
Ignite JVM and System Tuning page
(https://apacheignite.readme.io/docs/jvm-and-system-tuning):

-XX:NewSize=128m 
-XX:MaxNewSize=128m 
-XX:MaxTenuringThreshold=0 
-XX:SurvivorRatio=1024

Below is the CPU and memory profile captured:

[image: Ignite_basic_JVM_configuration.png]

(By the way, there is no heavy job running for my application, except for
the Ignite REST API, enabled with the Ignite web console invoking it
through its agent.)

You can see that the minor GCs are very frequent, and so are the full GCs,
and the CPU usage is very high as a result. The size of the young gen space
is very small. Also, live objects are not aged at all; they are promoted to
the old gen immediately.

I just wonder what is the reasoning behind these settings:
(1) Small young gen space size
(2) Tiny survivor space with max tenuring threshold set to zero (which means
no aging)

Thank you,
Yuci






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Puzzled-about-JVM-configuration-tp7091.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cassandra - Ignite Exception

2016-08-16 Thread Kamal C
Igor,

I'm facing this issue on the Cassandra nodes' side. I've added a
distributed lock (an Ignite semaphore) in the client to avoid the error you
mentioned when multiple clients start simultaneously.
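
For context, a minimal sketch of that kind of guard (the semaphore name is
a placeholder; 'ignite' is an already-started Ignite instance):

// One permit, failover-safe, created on first use.
IgniteSemaphore sem = ignite.semaphore("cassandra-store-init", 1, true, true);

sem.acquire();
try {
    // Startup work that must not run in multiple clients at once.
}
finally {
    sem.release();
}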

Sometimes, on client nodes, I see the warnings below:

WARN [2016-07-30T10:11:08,092] CassandraCacheStore: warning(): Cassandra
session refreshed
 WARN [2016-07-30T10:11:08,093] CassandraCacheStore: warning(): Prepared
statement cluster error detected, refreshing Cassandra session
com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute
unknown prepared query : 0x92523c1ca5cb89ede8b4e16b79b31c63. You may have
used a PreparedStatement that was created with another Cluster instance.
    at com.datastax.driver.core.SessionManager.makeRequestMessage(SessionManager.java:568) ~[cassandra-driver-core-3.0.0.jar:?]
    at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:131) ~[cassandra-driver-core-3.0.0.jar:?]
    at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:225) [ignite-cassandra-1.6.0.jar:1.6.0]
    at org.apache.ignite.cache.store.cassandra.CassandraCacheStore.deleteAll(CassandraCacheStore.java:354) [ignite-cassandra-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.removeAll(GridCacheStoreManagerAdapter.java:725) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updatePartialBatch(GridDhtAtomicCache.java:2482) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateWithBatch(GridDhtAtomicCache.java:2087) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1624) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1484) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapSingle(GridNearAtomicUpdateFuture.java:555) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:749) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:544) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:202) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$25.apply(GridDhtAtomicCache.java:1242) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$25.apply(GridDhtAtomicCache.java:1240) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:703) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.removeAllAsync0(GridDhtAtomicCache.java:1240) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.removeAllAsync(GridDhtAtomicCache.java:620) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.removeAll(GridDhtAtomicCache.java:613) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.removeAll(IgniteCacheProxy.java:1409) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters.updateAll(DataStreamerCacheUpdaters.java:93) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Batched.receive(DataStreamerCacheUpdaters.java:162) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6459) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:944) [ignite-core-1.6.0.jar:1.6.0]
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) [ignite-core-1.6.0.jar:1.6.0]
    at ...

Re: Ignite Compute - Rebalance Scenario

2016-08-16 Thread Kamal C
Vladislav,

Partitions are moved to another node while executing the job.

Val,

I've tried with the latest nighty build. Still, it has the same behavior.
To reproduce the issue, I've used `while(true)` inside the IgniteRunnable
task. You can reproduce it with the below gist[1].

[1]: https://gist.github.com/Kamal15/0a4066de152b8ebc856fc264f7b4037d

Regards,
Kamal C

On Sat, Aug 13, 2016 at 12:15 AM, vkulichenko  wrote:

> Note that this was changed recently [1] and the change was not released in
> 1.7. Kamal, can you try the nightly build [2] and check if it works as you
> expect there?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-2310
> [2] https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Compute-Rebalance-Scenario-tp7004p7021.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: LOOK THORUGH THIS ERROR

2016-08-16 Thread Ravi Puri
[message consisted of inline images not preserved by the archive]



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/LOOK-THORUGH-THIS-ERROR-tp6977p7087.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.