Re: ignite memory issues:Urgent in production

2016-09-20 Thread percent620
Can anyone help me fix this issue? It is happening in our production
environment.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-memory-issues-Urgent-in-production-tp7817p7842.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite memory issues:Urgent in production

2016-09-17 Thread percent620
*More server logs are below; I found that several Ignite servers shut down
automatically.
*
Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to send
message to remote node: TcpDiscoveryNode
[id=f59d7d01-b01d-46b2-b679-17b73313ae98, addrs=[y, 127.0.0.1],
sockAddrs=[y/y:0, /127.0.0.1:0], discPort=0, order=2486,
intOrder=1265, lastExchangeTime=1474169373925, loc=false,
ver=1.7.0#20160801-sha1:383273e3, isClient=true]
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1996)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:1936)
at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1304)
... 30 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to connect
to node (is node still alive?). Make sure that each ComputeTask and cache
Transaction has a timeout set in order to prevent parties from waiting
forever in case of network issues
[nodeId=f59d7d01-b01d-46b2-b679-17b73313ae98, addrs=[y/y:47100,
/127.0.0.1:47100]]
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2499)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2140)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2034)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1970)
... 32 more
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to
connect to address: y/y:47100
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2504)
... 35 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:117)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2363)
... 35 more
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to
connect to address: /127.0.0.1:47100
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2504)
... 35 more
Caused by: class org.apache.ignite.IgniteCheckedException: Remote node 
ID
is not as expected [expected=f59d7d01-b01d-46b2-b679-17b73313ae98,
rcvd=a4df12c5-fe9e-4b3f-b652-0ec02111dc7b]
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeHandshake(TcpCommunicationSpi.java:2614)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2371)
... 35 more
[11:29:44,714][SEVERE][marshaller-cache-#228%null%][CacheContinuousQueryHandler]
Failed to send event notification to node:
19ca5b90-ae92-41fa-ae54-d1427e41185d
class org.apache.ignite.IgniteCheckedException: Failed to send message (node
may have left the grid or TCP connection cannot be established due to
firewall issues) [node=TcpDiscoveryNode
[id=19ca5b90-ae92-41fa-ae54-d1427e41185d, addrs=[y, 127.0.0.1],
sockAddrs=[/127.0.0.1:0, y/y:0], discPort=0, order=2481,
intOrder=1260, lastExchangeTime=1474169373764, loc=false,
ver=1.7.0#20160801-sha1:383273e3, isClient=true], topic=T4
[topic=TOPIC_CACHE, id1=1fd3a002-42a8-3e13-a1aa-bf164b7f2d64,
id2=19ca5b90-ae92-41fa-ae54-d1427e41185d, id3=1], msg=GridContinuousMessage
[type=MSG_EVT_NOTIFICATION, routineId=bf2fb8b0-db98-4f6e-8fa4-514d00dcf5e7,
data=null, futId=null], policy=2]
at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1309)
at
org.apache.ignite.internal.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:1540)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1337)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1308)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1290)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendNotification(GridContinuousProcessor.java:945)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.addNotification(GridContinuousProcessor.java:888)
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:787)
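The exception above explicitly recommends timeouts so that parties do not wait forever on network failures. As an illustrative sketch only (the property names exist in Ignite 1.x, but the values below are placeholders, not tuned recommendations), the failure-detection and communication timeouts can be set in the node's Spring XML configuration:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Fail fast when a node becomes unreachable (milliseconds). -->
    <property name="failureDetectionTimeout" value="10000"/>
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <!-- Give up on dead TCP connections sooner (milliseconds). -->
            <property name="connectTimeout" value="5000"/>
            <property name="maxConnectTimeout" value="60000"/>
        </bean>
    </property>
</bean>
```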
   

ignite memory issues:Urgent in production

2016-09-17 Thread percent620
Hello, I have an urgent Ignite issue in our production environment.

I have deployed an Ignite cluster of 7 standalone server nodes;
each Ignite node has 40 GB of memory, 270 GB in total.

[10:05:11] Topology snapshot [ver=2356, servers=7, clients=0, CPUs=1488,
heap=270GB]



All the Ignite connections are set to "client" mode. When we have 60
clients (each client has 4 GB), Ignite reports the following:

*[10:05:11] Topology snapshot [ver=2356, servers=7, clients=60, CPUs=1488,
heap=510GB]*

Sometimes all the Ignite nodes shut down quickly, with this error message:
[13:08:12,549][SEVERE][exchange-worker-#136%null%][GridDhtPartitionsExchangeFuture]
Failed to reinitialize local partitions (preloading will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2582,
minorTopVer=0], nodeId=8837eae8, evt=NODE_FAILED]
java.lang.NullPointerException
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:734)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:473)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1440)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
[13:08:12] Ignite node stopped OK [uptime=25:07:08:283]



I have 2 questions:
1) Can you please tell me what is wrong, given this error message?
2)
*[10:05:11] Topology snapshot [ver=2356, servers=7, clients=60, CPUs=1488,
heap=510GB]*
The total client memory is 240 GB (60 client nodes * 4 GB). Is this the root
cause?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-memory-issues-Urgent-in-production-tp7817.html


how to get all keys and values stored into ignite cache for ignite1.6?

2016-09-08 Thread percent620
Hello,

I need to get all the keys and values from an IgniteCache in Ignite 1.6.

Can anyone tell me how to do that? Thanks!
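For reference, in Ignite 1.6 an unfiltered ScanQuery iterates every entry in a cache. A minimal sketch, assuming a client connecting with the config path and cache name that appear elsewhere in these threads:

```scala
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.ScanQuery
import scala.collection.JavaConverters._

object DumpCache extends App {
  // Start a node against the running cluster (path and cache name are the
  // ones used in these threads; adjust for your environment).
  val ignite = Ignition.start("/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
  val cache = ignite.cache[Integer, Integer]("sharedBaselineCacheRDD")

  // A ScanQuery with no filter streams entries partition by partition,
  // so the whole cache is never materialized in a single JVM.
  val cursor = cache.query(new ScanQuery[Integer, Integer]())
  try cursor.asScala.foreach(e => println(s"${e.getKey} -> ${e.getValue}"))
  finally cursor.close()
}
```

This needs a running cluster, so it is a sketch rather than a standalone runnable program.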



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/how-to-get-all-keys-and-values-stored-into-ignite-cache-for-ignite1-6-tp7612.html


Re: Yarn Ignite Container Automatically exit when other yarn application running

2016-09-07 Thread percent620
Thanks for the response, Nikolai Tikhonov-2.


*Did you use kerberos authentication for YARN?
*No

*And could you clarify which version of hadoop are you using?*
hadoop-2.6.0-cdh5.5.0



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Yarn-Ignite-Container-Automatically-exit-when-other-yarn-application-running-tp7335p7599.html


Re: Spark stage was hang via ignite

2016-09-04 Thread percent620
We have two executors that are running very slowly...






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Spark-stage-was-hang-via-ignite-tp7485p7525.html


Can anyone give me a full demo for embedded ignite for spark?

2016-09-02 Thread percent620




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Can-anyone-give-me-a-full-demo-for-embedded-ignite-for-spark-tp7486.html


Spark stage was hang via ignite

2016-09-02 Thread percent620
/u01/spark-1.6.0-hive/bin/spark-submit --driver-memory 4G --class
com.ETLTransform --master yarn --executor-cores 4 --executor-memory 1000m
--num-executors 20 --conf spark.rdd.compress=false --conf
spark.shuffle.compress=false --conf spark.broadcast.compress=false
/u01/spark_engine863.jar -quesize 10 -batchSize 5000 -writethread 30
-runningSeconds 20
args name=-quesize value=10
args name=-batchSize value=5000
args name=-writethread value=30
args name=-runningSeconds value=20
16/09/02 22:21:17 WARN Utils: Service 'SparkUI' could not bind on port 4040.
Attempting port 4041.
16/09/02 22:21:17 WARN Utils: Service 'SparkUI' could not bind on port 4041.
Attempting port 4042.
16/09/02 22:21:17 WARN Utils: Service 'SparkUI' could not bind on port 4042.
Attempting port 4043.
16/09/02 22:21:17 WARN Utils: Service 'SparkUI' could not bind on port 4043.
Attempting port 4044.
ignite:start==
*[Stage 0:==>  (15 + 5)
/ 20]*

The stage hangs at this point. Is there any progress on this?


I looked at the running executors in the Spark UI and found the following log
messages:

16/09/02 22:21:43 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/09/02 22:21:44 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/09/02 22:21:44 INFO Remoting: Starting remoting
16/09/02 22:21:44 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutorActorSystem@vmsecdomain010194070063.cm10:56914]
16/09/02 22:21:44 INFO util.Utils: Successfully started service
'sparkExecutorActorSystem' on port 56914.
16/09/02 22:21:44 INFO storage.DiskBlockManager: Created local directory at
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5645/blockmgr-218e5cde-129e-41f8-b05e-c262e24c346f
16/09/02 22:21:44 INFO storage.MemoryStore: MemoryStore started with
capacity 6.8 GB
16/09/02 22:21:44 INFO executor.CoarseGrainedExecutorBackend: Connecting to
driver: spark://CoarseGrainedScheduler@10.194.70.26:51811
16/09/02 22:21:44 INFO executor.CoarseGrainedExecutorBackend: Successfully
registered with driver
16/09/02 22:21:44 INFO executor.Executor: Starting executor ID 2 on host
xxx
16/09/02 22:21:45 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 41006.
16/09/02 22:21:45 INFO netty.NettyBlockTransferService: Server created on
41006
16/09/02 22:21:45 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/09/02 22:21:45 INFO storage.BlockManagerMaster: Registered BlockManager
16/09/02 22:21:50 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 1
16/09/02 22:21:50 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 9
16/09/02 22:21:50 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 17
16/09/02 22:21:50 INFO executor.Executor: Running task 9.0 in stage 0.0 (TID
9)
16/09/02 22:21:50 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID
1)
16/09/02 22:21:50 INFO executor.Executor: Running task 17.0 in stage 0.0
(TID 17)
16/09/02 22:21:50 INFO executor.Executor: Fetching
http://10.194.70.26:48676/jars/spark_zmqpull_engine863.jar with timestamp
1472826078761
16/09/02 22:21:50 INFO util.Utils: Fetching
http://10.194.70.26:48676/jars/spark_zmqpull_engine863.jar to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5645/spark-50073b87-72e3-4f4a-84cd-f01a7d5061dd/fetchFileTemp8818016450869617667.tmp
16/09/02 22:21:58 INFO util.Utils: Copying
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5645/spark-50073b87-72e3-4f4a-84cd-f01a7d5061dd/-10798427751472826078761_cache
to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5645/container_1455892346017_5645_01_09/./spark_zmqpull_engine863.jar
16/09/02 22:21:59 INFO executor.Executor: Adding
file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5645/container_1455892346017_5645_01_09/./spark_zmqpull_engine863.jar
to class loader
16/09/02 22:21:59 INFO broadcast.TorrentBroadcast: Started reading broadcast
variable 0
16/09/02 22:21:59 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 1568.0 B, free 1568.0 B)
16/09/02 22:21:59 INFO broadcast.TorrentBroadcast: Reading broadcast
variable 0 took 129 ms
16/09/02 22:21:59 INFO storage.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 1608.0 B, free 3.1 KB)
16/09/02 22:21:59 INFO internal.IgniteKernal: 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 1.6.0#20160518-sha1:0b22c45b
>>> 2016 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

16/09/02 22:21:59 INFO internal.IgniteKernal: Config URL: n/a
16/09/02 22:21:59

Re: Ignite Cache Update(k,v)

2016-09-01 Thread percent620
IgniteCluster API provides startNodes() method that allows to do this
programmatically on the remote machines (it uses SSH connection). 


Can you please give me a full example, or a link for this? Thanks again.
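For reference, a hedged sketch of what an IgniteCluster.startNodes() call can look like. The host, credentials, home directory and node count below are placeholders; the map keys ("host", "uname", "passwd", "nodes", "igniteHome") follow the IgniteCluster javadoc:

```scala
import java.util.{Collections, HashMap => JMap}
import org.apache.ignite.Ignition

object StartRemoteNodes extends App {
  val ignite = Ignition.start("/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")

  // One map per remote host; all values here are placeholders.
  val host = new JMap[String, AnyRef]()
  host.put("host", "172.16.186.201")                          // machine to SSH into
  host.put("uname", "root")                                   // SSH user
  host.put("passwd", "secret")                                // or use key-based auth
  host.put("nodes", Integer.valueOf(2))                       // nodes to start there
  host.put("igniteHome", "/u01/apache-ignite-fabric-1.6.0-bin")

  // restart=false, timeout=10s, maxConn=1 parallel SSH connection
  ignite.cluster().startNodes(Collections.singletonList(host), null, false, 10000, 1)
}
```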



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-Update-k-v-tp7296p7470.html


Re: Yarn Ignite Container Automatically exit when other yarn application running

2016-09-01 Thread percent620
But in the YARN containers I cannot find any errors.


Sometimes the YARN AM for Ignite shuts down (and sometimes a new AM is
restarted; I don't know why).



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Yarn-Ignite-Container-Automatically-exit-when-other-yarn-application-running-tp7335p7469.html


Re: Ignite Cache Update(k,v)

2016-08-28 Thread percent620
OK, thanks vkulichenko!

If I use the IgniteCache API directly in my scenario, how do I start an
Ignite server on each worker machine?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-Update-k-v-tp7296p7362.html


Re: Embedded mode ignite on spark

2016-08-27 Thread percent620
We have two scenarios that we need to support in our production environment.

1. We have a lot of intermediate data that needs to be saved.

We use the Ignite-on-YARN integration to save this data, and we have already
deployed it successfully.

Our code to start Ignite on YARN looks like this:
*val igniteContext = new IgniteContext[Integer, Integer](sc,
"/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")*

We have no issues with scenario 1.

2. In another scenario we use the embedded Ignite-Spark integration: all the
Ignite processes run with the Spark job, and the data cannot be shared.


val igniteContext = new IgniteContext[Integer, Integer](sc,
"/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml",*false*)

We use this code to start Ignite. Is this right?

We get an error message when we use this approach:
16/08/26 16:23:52 ERROR YarnScheduler: Lost executor 1 on : Container
marked as failed: container_1455892346017_5494_01_02 on host:
vmsecdomain010194054060.cm10. Exit status: -100. Diagnostics: Container
released on a *lost* node 

Sometimes this mode also kills the existing *yarn-ignite-container* process;
I think the two should be separate. Right?
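For reference, the difference between the two snippets above comes down to the third IgniteContext constructor argument. A sketch (paths as in this thread; the default of the flag is per the ignite-spark 1.6 API):

```scala
// Standalone mode (default, standalone = true): executors connect as clients
// to an externally managed cluster, e.g. one deployed on YARN, and the data
// outlives the Spark job.
val standaloneCtx = new IgniteContext[Integer, Integer](sc,
  "/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")

// Embedded mode (standalone = false): every executor starts a server node
// inside its own JVM; the nodes, and the data they hold, stop with the job.
val embeddedCtx = new IgniteContext[Integer, Integer](sc,
  "/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml", false)
```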







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p7359.html


Yarn Ignite Container Automatically exit when other yarn application running

2016-08-26 Thread percent620
Hello,

I am facing an important issue. I deployed the Ignite-on-YARN application
successfully, and everything was okay.


But today, when someone else ran a Spark job on YARN (a job that does not
contain Ignite), I got the error message below:

*16/08/26 16:23:52 ERROR YarnScheduler: Lost executor 1 on : Container
marked as failed: container_1455892346017_5494_01_02 on host:
vmsecdomain010194054060.cm10. Exit status: -100. Diagnostics: Container
released on a *lost* node
*16/08/26 16:23:52 ERROR YarnScheduler: Lost an executor 1 (already
removed): Pending loss reason.

This container is the YARN Ignite container. I looked at the YARN container
logs and found that 1 executor was lost on Ignite.


Why? Can anyone help me?


Regarding the YARN Ignite integration, I use static IP discovery, and I have
specified all the workers' IPs in the Ignite configuration. Could this lead
to the error?





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Yarn-Ignite-Container-Automatically-exit-when-other-yarn-application-running-tp7335.html


Ignite Cache Update(k,v)

2016-08-25 Thread percent620
Hello, I am facing an issue in a real business scenario.

I have deployed my Spark job with Ignite (embedded integration), and we save
an RDD of (key, value) pairs to Ignite.

But when a single key is updated, we want to update only that key. How do we
handle this situation?

Calling savePairs rewrites all the RDD entries, and I am afraid this will
lead to performance issues. Thanks!
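One way to avoid re-saving the whole RDD is to update the single entry through the IgniteCache that backs the IgniteRDD. A sketch, assuming the cache name and config path used elsewhere in these threads; the key and value are placeholders:

```scala
// Update one entry via the backing IgniteCache instead of savePairs.
val igniteContext = new IgniteContext[Integer, Integer](sc,
  "/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
val cache = igniteContext.ignite().cache[Integer, Integer]("sharedBaselineCacheRDD")

cache.put(42, 100)   // touches only this key; other entries are untouched

// Reads through the IgniteRDD see the new value:
val sharedRDD = igniteContext.fromCache("sharedBaselineCacheRDD")
```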



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-Update-k-v-tp7296.html


Re: Embedded mode ignite on spark

2016-08-25 Thread percent620
I just read this article and have only one question:


1) How does the embedded Ignite-Spark integration differ from the
Ignite-on-YARN integration? Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p7294.html


Re: Yarn deployment for static TcpDiscoverySpi issues:Urgent In Production

2016-08-24 Thread percent620
I have already fixed my issue: I added classpath://spring-bean.jar so that
the jar is loaded locally instead of from the internet.

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Yarn-deployment-for-static-TcpDiscoverySpi-issues-Urgent-In-Production-tp7205p7289.html


Re: Yarn deployment for memory capacity make a bigger than before: Urgent!!!

2016-08-24 Thread percent620
Thanks Nikolai Tikhonov-2,

Two questions:

1) How do I configure the -Xmx and -Xms JVM options for the client nodes? I
submitted this job via spark-submit; do you mean changing that memory?


2) Can you explain the reason? Can you please double-check this?

*Why does a local 127.0.0.1 address join the YARN Ignite topology? Is this
correct? I think it is not.*
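On question 1, a sketch of how the client heap is usually set: the Ignite client node runs inside the Spark executor JVM, so its -Xmx is the executor heap, which spark-submit sets via --executor-memory (Spark rejects -Xmx placed in spark.executor.extraJavaOptions). The sizes and jar path below are placeholders:

```shell
# Heap for the driver-side and executor-side Ignite clients (placeholders).
/u01/spark-1.6.0-hive/bin/spark-submit \
  --master yarn \
  --driver-memory 4G \
  --executor-memory 4G \
  /u01/spark_engine863.jar
```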



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Yarn-deployment-for-memory-capacity-make-a-bigger-than-before-Urgent-tp7275p7282.html


Yarn deployment for memory capacity make a bigger than before: Urgent!!!

2016-08-24 Thread percent620
Here are my detailed steps:
1) [root@sparkup1 config]# cat default-config.xml

http://www.springframework.org/schema/beans";
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
xmlns:util="http://www.springframework.org/schema/util";
   xsi:schemaLocation="http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans-4.1.xsd";>


 
   

 

   
   
172.16.186.200:47500..47509
   
172.16.186.201:47500..47509
   
172.16.186.202:47500..47509
   

   

 
   
  

[root@sparkup1 config]# 
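Note: the archive has stripped the XML tags from the config above, leaving only the address values. A typical static-IP discovery configuration with those addresses looks roughly like the following reconstruction (not necessarily the exact original):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <value>172.16.186.200:47500..47509</value>
                            <value>172.16.186.201:47500..47509</value>
                            <value>172.16.186.202:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```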

2) The YARN container logs are below.

I started 3 Ignite nodes, each with 1024 MB of memory:


$cat cluster.properties
# The number of nodes in the cluster.
IGNITE_NODE_COUNT=3
# The number of CPU Cores for each Apache Ignite node.
IGNITE_RUN_CPU_PER_NODE=1
# The number of Megabytes of RAM for each Apache Ignite node.
IGNITE_MEMORY_PER_NODE=1024
# The version ignite which will be run on nodes.
IGNITE_VERSION=1.0.6
IGNITE_WORK_DIR=/u01/yueyi/apache-ignite-hadoop-1.6.0-bin
IGNITE_XML_CONFIG=/ignite/releases/apache-ignite-hadoop-1.6.0-bin/config/default-config.xml
IGNITE_RELEASES_DIR=/ignite/releases/
IGNITE_USERS_LIBS=/ignite/releases/apache-ignite-hadoop-1.6.0-bin/libs/
#IGNITE_HOSTNAME_CONSTRAINT=vmsecdomain010194070026.cm10
IGNITE_PATH=/ignite/releases/


[root@sparkup3 config]# tail -f
/usr/hadoop-2.4.1/logs/userlogs/application_1472047995043_0001/container_1472047995043_0001_01_03/stdout
[07:13:45] Configured plugins:
[07:13:45]   ^-- None
[07:13:45]
[07:13:46] Security status [authentication=off, tls/ssl=off]
[07:13:47] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[07:13:47]
[07:13:47] Ignite node started OK (id=20fb73be)
[07:13:47] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1,
heap=1.0GB]
[07:13:48] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2,
heap=2.0GB]
[07:13:50] Topology snapshot [ver=3, servers=3, clients=0, CPUs=3,
heap=3.0GB]
==
Everything is OK through the steps above.



3)
spark-submit *--driver-memory 4G* --class com.ignite.testIgniteSharedRDD
--master yarn --executor-cores 2 --executor-memory 1000m --num-executors 2
--conf spark.rdd.compress=false --conf spark.shuffle.compress=false --conf
spark.broadcast.compress=false
/root/limu/ignite/spark-project-jar-with-dependencies.jar


4) The YARN logs then become:
[07:13:46] Security status [authentication=off, tls/ssl=off]
[07:13:47] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[07:13:47]
[07:13:47] Ignite node started OK (id=20fb73be)
[07:13:47] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1,
heap=1.0GB]
[07:13:48] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2,
heap=2.0GB]
[07:13:50] Topology snapshot [ver=3, servers=3, clients=0, CPUs=3,
heap=3.0GB]
*[07:16:54] Topology snapshot [ver=4, servers=3, clients=1, CPUs=3,
heap=7.0GB]* (correct)

/[07:17:06] Topology snapshot [ver=5, servers=4, clients=1, CPUs=3,
heap=8.0GB]
[07:17:07] Topology snapshot [ver=6, servers=5, clients=1, CPUs=3,
heap=9.0GB]/

This is not correct. Why are there now 5 servers and 9 GB of heap?



Detailed log:
nohup: ignoring input
16/08/24 07:16:17 INFO spark.SparkContext: Running Spark version 1.6.1
16/08/24 07:16:18 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
16/08/24 07:16:18 INFO spark.SecurityManager: Changing view acls to: root
16/08/24 07:16:18 INFO spark.SecurityManager: Changing modify acls to: root
16/08/24 07:16:18 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(root); users with modify permissions: Set(root)
16/08/24 07:16:19 INFO util.Utils: Successfully started service
'sparkDriver' on port 56368.
16/08/24 07:16:20 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/24 07:16:20 INFO Remoting: Starting remoting
16/08/24 07:16:20 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriverActorSystem@172.16.186.200:45375]
16/08/24 07:16:20 INFO util.Utils: Successfully started service
'sparkDriverActorSystem' on port 45375.
16/08/24 07:16:20 INFO spark.SparkEnv: Registering MapOutputTracker
16/08/24 07:16:20 INFO spark.SparkEnv: Registering BlockManagerMaster
16/08/24 07:16:20 INFO storage.DiskBlockManager: Created local directory at
/tmp/blockmgr-8739bc15-9e06-449

Re: Yarn deployment for static TcpDiscoverySpi issues:Urgent In Production

2016-08-24 Thread percent620
Thanks Val,

I ran this successfully with YARN in my local environment, but when I run it
in the production environment, I always get the Spring beans error below.

 
http://www.springframework.org/schema/beans";
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
   xsi:schemaLocation=" 
   http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd";>   
 


 
   

   

   
   
172.16.186.200:47500..47509
   
172.16.186.201:47500..47509
   
172.16.186.202:47500..47509
   

   

 
   
  



In my local environment we can reach the internet, but in our production
environment internet access is disabled. Is this the root cause?
16/08/24 16:43:33 ERROR TaskSetManager: Task 5 in stage 1.0 failed 4 times;
aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Task 5 in stage 1.0 failed 4 times, most recent failure:
Lost task 5.3 in stage 1.0 (TID 24, ): class
org.apache.ignite.IgniteCheckedException: Failed to instantiate Spring XML
application context
[springUrl=file:/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml,
err=Line 34 in XML document from URL
[file:/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml] is
invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 34;
columnNumber: 69; cvc-elt.1: Cannot find the declaration of element
'beans'.]
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:391)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:639)
at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:678)
at
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:717)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at org.apache.ignite.spark.Once.apply(IgniteContext.scala:198)
at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:138)
at
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:171)
at
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:170)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:756)
Caused by:
org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line
34 in XML document from URL
[file:/u01/apache-ignite-fabric-1.6.0-bin/config/default-config.xml] is
invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 34;
columnNumber: 69; cvc-elt.1: Cannot find the declaration of element 'beans'.
at
org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:398)
at
org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:335)
at
org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:303)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:379)
... 21 more





--
View this message in context: 
http://apache-ignite-users.

Re: Yarn deployment for static TcpDiscoverySpi issues:Urgent In Production

2016-08-23 Thread percent620
Thanks Val, I have fixed my issue above; thanks for your great help with it.






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Yarn-deployment-for-static-TcpDiscoverySpi-issues-Urgent-In-Production-tp7205p7253.html


Re: Yarn deployment for static TcpDiscoverySpi issues:Urgent In Production

2016-08-23 Thread percent620
Here is my code:
1)
val igniteContext = new IgniteContext[Integer, Integer](sc,
"/u01//apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
println("define igniteContext end===")
val sharedRDD = igniteContext.fromCache("sharedBaselineCacheRDD")
//val sharedRDD = igniteContext.fromCache(new
CacheConfiguration[Integer, Integer]().setCacheMode(CacheMode.REPLICATED))

val initalRDD = sc.parallelize(1 to 10,10).map(i => (new Integer(i),
new Integer(i)))
println("initalRDD.couner=/. " + initalRDD.count() +"\tpartition=> " +
initalRDD.partitions.size)

sharedRDD.savePairs(initalRDD, true)
println("=>totalcounter" + sharedRDD.count + "\t paris => " +
sharedRDD.partitions.size)
println("=>" + sharedRDD.filter(_._2 > 5).count)


2) My Ignite default-config.xml uses static IP discovery, NOT multicast.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Yarn-deployment-for-static-TcpDiscoverySpi-issues-Urgent-In-Production-tp7205p7238.html


Re: Yarn deployment for static TcpDiscoverySpi issues:Urgent In Production

2016-08-23 Thread percent620
I fixed the file issue with
"file:/usr/apache-ignite-fabric-1.6.0-bin/config/default-config.xml" today.

I verified again in our production environment and found another issue:
1) I set IGNITE_HOME on the Spark driver machine and submitted the jar to
the YARN cluster:
export IGNITE_HOME=/u01//apache-ignite-fabric-1.6.0-bin

$/u01/spark-1.6.0-hive/bin/spark-submit --driver-memory 4G --class
com.ValidSparkCache --master yarn --executor-cores 5 --executor-memory
15000m --num-executors 50 --conf spark.rdd.compress=false --conf
spark.shuffle.compress=false --conf spark.broadcast.compress=false
/u01//spark_zmqpull_engine847.jar 

16/08/23 16:42:07 WARN Utils: Service 'SparkUI' could not bind on port 4040.
Attempting port 4041.
16/08/23 16:42:07 WARN Utils: Service 'SparkUI' could not bind on port 4041.
Attempting port 4042.
16/08/23 16:42:07 WARN Utils: Service 'SparkUI' could not bind on port 4042.
Attempting port 4043.
16/08/23 16:42:07 WARN Utils: Service 'SparkUI' could not bind on port 4043.
Attempting port 4044.
define igniteContext start===
define igniteContext end===
[Stage 0:===>  (4 + 6) /
10][16:43:17] New version is available at ignite.apache.org: 1.7.0
initalRDD.couner=/. 10  partition=> 10  
*16/08/23 16:43:21 WARN TaskSetManager: Lost task 7.0 in stage 1.0 (TID 17,
xxx): class org.apache.ignite.IgniteCheckedException: Spring XML
configuration path is invalid: config/default-config.xml. Note that this
path should be either absolute or a relative local file system path,
relative to META-INF in classpath or valid URL to IGNITE_HOME.
*   at
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3580)
at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:678)
at
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:717)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at org.apache.ignite.spark.Once.apply(IgniteContext.scala:198)
at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:138)
at
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:171)
at
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:170)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:756)
Caused by: java.net.MalformedURLException: no protocol:
config/default-config.xml
at java.net.URL.<init>(URL.java:585)
at java.net.URL.<init>(URL.java:482)
at java.net.URL.<init>(URL.java:431)
at
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3571)
... 18 more

[Stage 1:>(0 + 10) /
10]16/08/23 16:43:31 WARN TaskSetManager: Lost task 3.2 in stage 1.0 (TID
23, ): class org.apache.ignite.IgniteCheckedException: Spring XML
configuration path is invalid: config/default-config.xml. Note that this
path should be either absolute or a relative local file system path,
relative to META-INF in classpath or valid URL to IGNITE_HOME.
at
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3580)
at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:678)
at
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:717)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at org.apache.ignite.spark.Once.apply(IgniteContext.scala:198)
at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:138)
at
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:171)
at
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:170)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.sca

Yarn deployment for static TcpDiscoverySpi issues:Urgent In Production

2016-08-22 Thread percent620
Hello, 

Everything is okay when I integrate Ignite with YARN using *Multicast Based
Discovery* in my local Spark and YARN cluster, but in our production env some
ports couldn't be opened, so I need to specify static IP addresses so the
nodes can discover each other.

When running with my configuration I encountered the following issue. My
detailed steps are listed below.

1、config/default-config.xml

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>172.16.186.200:47500..47509</value>
                                <value>172.16.186.201:47500..47509</value>
                                <value>172.16.186.202:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

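For reference, the same static-IP discovery can also be configured programmatically instead of via Spring XML. This is a minimal sketch (the address list mirrors the XML config above; passing such a configuration through the closure-based IgniteContext constructor sidesteps config-file path resolution on the executors entirely):

```scala
import java.util.Arrays

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

// Static IP finder with the same address ranges as the XML config above.
val ipFinder = new TcpDiscoveryVmIpFinder()
ipFinder.setAddresses(Arrays.asList(
  "172.16.186.200:47500..47509",
  "172.16.186.201:47500..47509",
  "172.16.186.202:47500..47509"))

// Discovery SPI using the static IP finder.
val spi = new TcpDiscoverySpi()
spi.setIpFinder(ipFinder)

// Ignite configuration overriding the default (multicast) discovery.
val cfg = new IgniteConfiguration()
cfg.setDiscoverySpi(spi)
```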

2、my java code on idea
package com.ignite
import org.apache.ignite.spark._
import org.apache.ignite.configuration._
import org.apache.spark.{SparkConf, SparkContext}
/**
  * Created by limu on 2016/8/14.
  */
object testIgniteSharedRDD {
  def main(args: Array[String]): Unit = {
val conf = new SparkConf().setAppName("testIgniteSharedRDD")
val sc = new SparkContext(conf)

  /*  val cfg = new IgniteConfiguration()
cfg.setIgniteHome("/usr/apache-ignite-fabric-1.6.0-bin")
*/
//val ic = new IgniteContext[Integer, Integer](sc, () => new
IgniteConfiguration())
  val ic = new IgniteContext[Integer, Integer](sc,
"/usr/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
val sharedRDD = ic.fromCache("sharedIgniteRDD-ling-sha111o")
println("original.sharedCounter=> " + sharedRDD.count())

sharedRDD.savePairs(sc.parallelize(1 to 77000, 10).map(i => (new
Integer(i), new Integer(i))))
println("final.sharedCounter=> " + sharedRDD.count())

println("final.condition.counter=> " + sharedRDD.filter(_._2 >
21000).count)
  }
}


3、Yarn container logs
Logs for container_1471869381289_0001_01_01


SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/hadoop-2.4.1/tmp/nm-local-dir/usercache/root/appcache/application_1471869381289_0001/filecache/10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/22 05:38:52 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-nodemanagers-proxies : 500
16/08/22 05:38:52 INFO client.RMProxy: Connecting to ResourceManager at
sparkup1/172.16.186.200:8030
Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
INFO: Application master registered.
Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 1,908, cpu 1.
Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 1,908, cpu 1.
Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 1,908, cpu 1.
16/08/22 05:38:54 INFO impl.AMRMClientImpl: Received new token for :
sparkup3:46170
16/08/22 05:38:54 INFO impl.AMRMClientImpl: Received new token for :
sparkup2:55406
Aug 22, 2016 5:38:54 AM org.apache.ignite.yarn.ApplicationMaster
onContainersAllocated
INFO: Launching container: container_1471869381289_0001_01_02.
16/08/22 05:38:54 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: sparkup3:46170
16/08/22 05:38:54 INFO impl.AMRMClientImpl: Received new token for :
sparkup1:53711
Aug 22, 2016 5:38:54 AM org.apache.ignite.yarn.ApplicationMaster
onContainersAllocated
INFO: Launching container: container_1471869381289_0001_01_03.
16/08/22 05:38:54 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: sparkup2:55406
Aug 22, 2016 5:38:55 AM org.apache.ignite.yarn.ApplicationMaster
onContainersAllocated
INFO: Launching container: container_1471869381289_0001_01_04.
16/08/22 05:38:55 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: sparkup1:53711


4、spark-submit errors


[root@sparkup1 config]# clear
[root@sparkup1 config]# spark-submit --driver-memory 2G --class
com.ignite.testIgniteSharedRDD --master yarn --executor-c

Re: Embedded mode ignite on spark

2016-08-21 Thread percent620
Can you please provide a demo? I'm new to Ignite; thanks very much for
your great help on this!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p7198.html


Re: Embedded mode ignite on spark

2016-08-21 Thread percent620
Thanks vkulichenko,
I will use TcpDiscoveryVmIpFinder instead of multicast for Ignite, but I
faced another issue.

My integration mode is: embedded-mode Ignite on Spark.
1)
Scala code (in IDEA) as below:
val spi = new TcpDiscoverySpi()
val ipFinder = new TcpDiscoveryVmIpFinder()
// Set initial IP addresses.
// Note that you can optionally specify a port or a port range.
ipFinder.setAddresses(util.Arrays.asList("172.16.186.200",
"172.16.186.200:47500..47509"))
spi.setIpFinder(ipFinder)
val cfg = new IgniteConfiguration()
// Override default discovery SPI.
cfg.setDiscoverySpi(spi)
val igniteContext = new IgniteContext[Integer,Integer](sc, () => cfg, false)


2) I ran spark-submit to submit my jar to the YARN cluster, but faced the
following error message:

spark-submit --driver-memory 2G --class com.ignite.testIgniteEmbedRDD
--master yarn --executor-cores 2 --executor-memory 1000m --num-executors 2
--conf spark.rdd.compress=false --conf spark.shuffle.compress=false --conf
spark.broadcast.compress=false
/root/limu/ignite/spark-project-jar-with-dependencies.jar
16/08/21 03:15:13 INFO spark.SparkContext: Running Spark version 1.6.1
16/08/21 03:15:14 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
16/08/21 03:15:14 INFO spark.SecurityManager: Changing view acls to: root
16/08/21 03:15:14 INFO spark.SecurityManager: Changing modify acls to: root
16/08/21 03:15:14 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(root); users with modify permissions: Set(root)
16/08/21 03:15:15 INFO util.Utils: Successfully started service
'sparkDriver' on port 38970.
16/08/21 03:15:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/21 03:15:15 INFO Remoting: Starting remoting
16/08/21 03:15:16 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriverActorSystem@172.16.186.200:34025]
16/08/21 03:15:16 INFO util.Utils: Successfully started service
'sparkDriverActorSystem' on port 34025.
16/08/21 03:15:16 INFO spark.SparkEnv: Registering MapOutputTracker
16/08/21 03:15:16 INFO spark.SparkEnv: Registering BlockManagerMaster
16/08/21 03:15:16 INFO storage.DiskBlockManager: Created local directory at
/tmp/blockmgr-ffc108d3-da0d-4ff4-a910-8ff2a66d0463
16/08/21 03:15:16 INFO storage.MemoryStore: MemoryStore started with
capacity 1259.8 MB
16/08/21 03:15:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/08/21 03:15:17 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/08/21 03:15:17 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
16/08/21 03:15:17 INFO util.Utils: Successfully started service 'SparkUI' on
port 4040.
16/08/21 03:15:17 INFO ui.SparkUI: Started SparkUI at
http://172.16.186.200:4040
16/08/21 03:15:17 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-6406a8e6-0a53-4925-a17e-158ce3b4aa6e/httpd-292e03c2-1805-4e68-916f-acd4eef0c265
16/08/21 03:15:17 INFO spark.HttpServer: Starting HTTP Server
16/08/21 03:15:17 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/08/21 03:15:17 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:36680
16/08/21 03:15:17 INFO util.Utils: Successfully started service 'HTTP file
server' on port 36680.
16/08/21 03:15:18 INFO spark.SparkContext: Added JAR
file:/root/limu/ignite/spark-project-jar-with-dependencies.jar at
http://172.16.186.200:36680/jars/spark-project-jar-with-dependencies.jar
with timestamp 1471774518173
16/08/21 03:15:18 INFO client.RMProxy: Connecting to ResourceManager at
sparkup1/172.16.186.200:8032
16/08/21 03:15:18 INFO yarn.Client: Requesting a new application from
cluster with 3 NodeManagers
16/08/21 03:15:18 INFO yarn.Client: Verifying our application has not
requested more than the maximum memory capability of the cluster (8192 MB
per container)
16/08/21 03:15:18 INFO yarn.Client: Will allocate AM container, with 896 MB
memory including 384 MB overhead
16/08/21 03:15:18 INFO yarn.Client: Setting up container launch context for
our AM
16/08/21 03:15:18 INFO yarn.Client: Setting up the launch environment for
our AM container
16/08/21 03:15:18 INFO yarn.Client: Preparing resources for our AM container
16/08/21 03:15:19 INFO yarn.Client: Uploading resource
file:/usr/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar
->
hdfs://sparkup1:9000/user/root/.sparkStaging/application_1471720446331_0026/spark-assembly-1.6.1-hadoop2.6.0.jar
16/08/21 03:15:25 INFO yarn.Client: Uploading resource
file:/tmp/spark-6406a8e6-0a53-4925-a17e-158ce3b4aa6e/__spark_conf__3046103417669035498.zip
->
hdfs://sparkup1:9000/user/root/.sparkStaging/application_1471720446331_0026/__spark_conf__3046103417669035498.zip
16/08/21 03:15:25 INFO spark.SecurityManager: Changing view acls to: root
16/08/21 03:15:25 INFO spark.SecurityManager: Changing modify acls to: root
16/08/21 03:15:25 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users wi

Re: Embedded mode ignite on spark

2016-08-18 Thread percent620
Anyone can help me to fix this issue? thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p7146.html


Re: Embedded mode ignite on spark

2016-08-16 Thread percent620
Can you please help me to fix this issue this week? I will apply this
function to our production env. Thanks again for your great help on this
issue.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p7123.html


Re: Embedded mode ignite on spark

2016-08-16 Thread percent620
==5)===
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/filecache/652/spark-assembly-1.6.0-hadoop2.5.0-cdh5.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/17 09:28:10 INFO executor.CoarseGrainedExecutorBackend: Registered
signal handlers for [TERM, HUP, INT]
16/08/17 09:28:11 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:11 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:11 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:12 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:12 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/17 09:28:12 INFO Remoting: Starting remoting
16/08/17 09:28:13 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutorActorSystem@xxx:46368]
16/08/17 09:28:13 INFO util.Utils: Successfully started service
'sparkExecutorActorSystem' on port 46368.
16/08/17 09:28:13 INFO storage.DiskBlockManager: Created local directory at
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/blockmgr-8f0bd302-1f10-4f03-b639-e5a38341e80c
16/08/17 09:28:13 INFO storage.MemoryStore: MemoryStore started with
capacity 10.3 GB
16/08/17 09:28:13 INFO executor.CoarseGrainedExecutorBackend: Connecting to
driver: spark://CoarseGrainedScheduler@yyy:64381
16/08/17 09:28:13 INFO executor.CoarseGrainedExecutorBackend: Successfully
registered with driver
16/08/17 09:28:13 INFO executor.Executor: Starting executor ID 5 on host

16/08/17 09:28:13 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 9012.
16/08/17 09:28:13 INFO netty.NettyBlockTransferService: Server created on
9012
16/08/17 09:28:13 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/08/17 09:28:13 INFO storage.BlockManagerMaster: Registered BlockManager
16/08/17 09:28:17 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 0
16/08/17 09:28:17 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID
0)
16/08/17 09:28:17 INFO executor.Executor: Fetching
http://xxx:64443/jars/spark_zmqpull_engine841.jar with timestamp
1471397276248
16/08/17 09:28:17 INFO util.Utils: Fetching
http://xxx:64443/jars/spark_zmqpull_engine841.jar to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-47ab6a3d-0286-492c-b9d1-fd5bd52ec80d/fetchFileTemp1877526252998936286.tmp
16/08/17 09:28:23 INFO util.Utils: Copying
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-47ab6a3d-0286-492c-b9d1-fd5bd52ec80d/15389795251471397276248_cache
to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_07/./spark_zmqpull_engine841.jar
16/08/17 09:28:24 INFO executor.Executor: Adding
file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_07/./spark_zmqpull_engine841.jar
to class loader
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Started reading broadcast
variable 0
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 1571.0 B, free 1571.0 B)
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Reading broadcast
variable 0 took 144 ms
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 1608.0 B, free 3.1 KB)
16/08/17 09:28:26 INFO internal.IgniteKernal: 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 1.6.0#20160518-sha1:0b22c45b
>>> 2016 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

16/08/17 09:28:26 INFO internal.IgniteKernal: Config URL: n/a
16/08/17 09:28:26 INFO internal.IgniteKernal: Daemon mode: off
16/08/17 09:28:26 INFO internal.IgniteKernal: OS: Linux
2.6.32-220.23.2.ali878.el6.x86_64 amd64
16/08/17 09:28:26 INFO internal.IgniteKernal: OS user: hbase
16/08/17 09:28:26 INFO internal.IgniteKernal: Language runtime: Java
Platform API Specification ver. 1.7
16/08/17 09:28:26 INFO internal.Igni

Re: Embedded mode ignite on spark

2016-08-16 Thread percent620
4)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/filecache/785/spark-assembly-1.6.0-hadoop2.5.0-cdh5.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/17 09:28:13 INFO executor.CoarseGrainedExecutorBackend: Registered
signal handlers for [TERM, HUP, INT]
16/08/17 09:28:14 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:14 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:14 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:15 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:15 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:15 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/17 09:28:15 INFO Remoting: Starting remoting
16/08/17 09:28:15 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutorActorSystem@v:62569]
16/08/17 09:28:15 INFO util.Utils: Successfully started service
'sparkExecutorActorSystem' on port 62569.
16/08/17 09:28:15 INFO storage.DiskBlockManager: Created local directory at
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/blockmgr-eeadd1d1-8dd8-48ef-84da-f3ed72048c80
16/08/17 09:28:15 INFO storage.MemoryStore: MemoryStore started with
capacity 10.3 GB
16/08/17 09:28:16 INFO executor.CoarseGrainedExecutorBackend: Connecting to
driver: spark://CoarseGrainedScheduler@10.194.70.26:64381
16/08/17 09:28:16 INFO executor.CoarseGrainedExecutorBackend: Successfully
registered with driver
16/08/17 09:28:16 INFO executor.Executor: Starting executor ID 4 on host

16/08/17 09:28:16 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 15883.
16/08/17 09:28:16 INFO netty.NettyBlockTransferService: Server created on
15883
16/08/17 09:28:16 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/08/17 09:28:16 INFO storage.BlockManagerMaster: Registered BlockManager
16/08/17 09:28:17 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 4
16/08/17 09:28:17 INFO executor.Executor: Running task 4.0 in stage 0.0 (TID
4)
16/08/17 09:28:17 INFO executor.Executor: Fetching
http://10.194.70.26:64443/jars/spark_zmqpull_engine841.jar with timestamp
1471397276248
16/08/17 09:28:17 INFO util.Utils: Fetching
http://10.194.70.26:64443/jars/spark_zmqpull_engine841.jar to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-e30ebe68-c99e-443d-87c2-9f9372e9759d/fetchFileTemp4290126145592989981.tmp
16/08/17 09:28:23 INFO util.Utils: Copying
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-e30ebe68-c99e-443d-87c2-9f9372e9759d/15389795251471397276248_cache
to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_05/./spark_zmqpull_engine841.jar
16/08/17 09:28:24 INFO executor.Executor: Adding
file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_05/./spark_zmqpull_engine841.jar
to class loader
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Started reading broadcast
variable 0
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 1571.0 B, free 1571.0 B)
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Reading broadcast
variable 0 took 143 ms
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 1608.0 B, free 3.1 KB)
16/08/17 09:28:25 INFO internal.IgniteKernal: 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 1.6.0#20160518-sha1:0b22c45b
>>> 2016 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

16/08/17 09:28:25 INFO internal.IgniteKernal: Config URL: n/a
16/08/17 09:28:25 INFO internal.IgniteKernal: Daemon mode: off
16/08/17 09:28:25 INFO internal.IgniteKernal: OS: Linux
2.6.32-220.23.2.ali878.el6.x86_64 amd64
16/08/17 09:28:25 INFO internal.IgniteKernal: OS user: hbase
16/08/17 09:28:25 INFO internal.IgniteKernal: Language runtime: Java
Platform API Specification ver. 1.7
16

Re: Embedded mode ignite on spark

2016-08-16 Thread percent620
Thanks vkulichenko.

1) I tried the code with your new method on our production env and submitted
it to the YARN cluster; the final result is not correct. It seems that cache
data was lost.
val sharedRDD = ic.fromCache(new CacheConfiguration[String,
String]().setCacheMode(CacheMode.REPLICATED));
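One thing worth checking here (my own assumption, not something confirmed in this thread): fromCache with an anonymous CacheConfiguration resolves the cache by its name, which is left unset above. Giving the cache an explicit name ensures every executor resolves the same cache. A sketch, where the cache name "sharedReplicatedCache" is hypothetical and `ic` is the IgniteContext from the surrounding code:

```scala
import org.apache.ignite.cache.CacheMode
import org.apache.ignite.configuration.CacheConfiguration

// Hypothetical cache name; the point is that it is explicit and identical
// on every node that calls fromCache.
val cacheCfg = new CacheConfiguration[String, String]("sharedReplicatedCache")
cacheCfg.setCacheMode(CacheMode.REPLICATED)

val sharedRDD = ic.fromCache(cacheCfg)
```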

2) this is all the executor logs
spark-submit print logs
$/u01/spark-1.6.0-hive/bin/spark-submit --driver-memory 4G --class
com..ValidSparkCache --master yarn --executor-cores 5 --executor-memory
15000m --num-executors 5 --conf spark.rdd.compress=false --conf
spark.shuffle.compress=false --conf spark.broadcast.compress=false
/u01/xx/spark_zmqpull_engine841.jar 
 
initalRDD.couner=/. 10  partition=> 10  
*=>totalcounter2  paris => 1024 (it should be 10)

**=>1*

totally num-executors 5 

==1)=
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/filecache/871/spark-assembly-1.6.0-hadoop2.5.0-cdh5.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/17 09:28:11 INFO executor.CoarseGrainedExecutorBackend: Registered
signal handlers for [TERM, HUP, INT]
16/08/17 09:28:12 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:12 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:13 INFO spark.SecurityManager: Changing view acls to: hbase
16/08/17 09:28:13 INFO spark.SecurityManager: Changing modify acls to: hbase
16/08/17 09:28:13 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(hbase); users with modify permissions: Set(hbase)
16/08/17 09:28:13 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/17 09:28:13 INFO Remoting: Starting remoting
16/08/17 09:28:13 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutorActorSystem@:18374]
16/08/17 09:28:13 INFO util.Utils: Successfully started service
'sparkExecutorActorSystem' on port 18374.
16/08/17 09:28:13 INFO storage.DiskBlockManager: Created local directory at
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/blockmgr-5c0d998e-15b8-43c5-b73d-96e1e76af179
16/08/17 09:28:14 INFO storage.MemoryStore: MemoryStore started with
capacity 10.3 GB
16/08/17 09:28:14 INFO executor.CoarseGrainedExecutorBackend: Connecting to
driver: spark://CoarseGrainedScheduler@10.194.70.26:64381
16/08/17 09:28:14 INFO executor.CoarseGrainedExecutorBackend: Successfully
registered with driver
16/08/17 09:28:14 INFO executor.Executor: Starting executor ID 1 on host xxx
16/08/17 09:28:14 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 28593.
16/08/17 09:28:14 INFO netty.NettyBlockTransferService: Server created on
28593
16/08/17 09:28:14 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/08/17 09:28:14 INFO storage.BlockManagerMaster: Registered BlockManager
16/08/17 09:28:17 INFO executor.CoarseGrainedExecutorBackend: Got assigned
task 1
16/08/17 09:28:17 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID
1)
16/08/17 09:28:17 INFO executor.Executor: Fetching
http://10.194.70.26:64443/jars/spark_zmqpull_engine841.jar with timestamp
1471397276248
16/08/17 09:28:17 INFO util.Utils: Fetching
http://x:64443/jars/spark_zmqpull_engine841.jar to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-a963fd26-783c-4f09-947c-48163270764d/fetchFileTemp8027243307717085221.tmp
16/08/17 09:28:23 INFO util.Utils: Copying
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/spark-a963fd26-783c-4f09-947c-48163270764d/15389795251471397276248_cache
to
/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_02/./spark_zmqpull_engine841.jar
16/08/17 09:28:24 INFO executor.Executor: Adding
file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5248/container_1455892346017_5248_01_02/./spark_zmqpull_engine841.jar
to class loader
16/08/17 09:28:24 INFO broadcast.TorrentBroadcast: Started reading broadcast
variable 0
16/08/17 09:28:24 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 1571.0 B, free 1571.0 B)
16/08/17 09:28:24 INFO broadcast.

Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-14 Thread percent620
Can you please tell me which ports should be opened on the driver or YARN
cluster machines? This is our production env; I suspect something needs to be
configured in the firewall ports.

1)
I verified all these steps on my local cluster (3 Spark machines [1 master
and 2 workers], one YARN cluster machine); everything is OK except the cache
counter.




 cache counter issues:

val igniteContext = new IgniteContext[Integer,Integer](sc, () => new
IgniteConfiguration(),false)
val sharedRDD = igniteContext.fromCache("sharedBaselineCacheRDD")
val initalRDD = sc.parallelize(1 to 10,10).map(i => (new Integer(i),
new Integer(i)))
println("initalRDD.couner=/. " + initalRDD.count() +"\tpartition=> " +
initalRDD.partitions.size)

sharedRDD.savePairs(initalRDD, true)
println("=>totalcounter" + sharedRDD.count + "\t paris => " +
sharedRDD.partitions.size)
println("=>" + sharedRDD.filter(_._2 > 5).count) 
*The final result is 4, NOT 5; this also happened with the embedded
integration with Spark.*



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p7051.html


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-13 Thread percent620
Hello, Nikolai,

YES, I can ping the YARN cluster from the driver machine.

I tried to telnet localhost 50075 on the driver machine, but it failed. Is
this the root cause of this issue? Thanks!!!
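A quick way to test reachability from the driver machine without telnet is a plain TCP connect. This is a small standard-library helper (hypothetical names, a sketch only) that could be used to probe, for example, Ignite's default discovery and communication ports, 47500 and 47100:

```scala
import java.net.{InetSocketAddress, Socket}

// Returns true if a TCP connection to host:port succeeds within timeoutMs.
def portOpen(host: String, port: Int, timeoutMs: Int = 2000): Boolean = {
  val socket = new Socket()
  try {
    socket.connect(new InetSocketAddress(host, port), timeoutMs)
    true
  } catch {
    case _: java.io.IOException => false
  } finally {
    socket.close()
  }
}

// Example probes (adjust hosts/ports to your cluster):
// portOpen("172.16.186.200", 47500)
// portOpen("172.16.186.200", 47100)
```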





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p7040.html


Re: Embedded mode ignite on spark

2016-08-13 Thread percent620
Thanks vkulichenko

I will describe my issue in detail below. I have the same code in two
scenarios, but the first one is correct and the second one is not.

I've been studying Ignite in recent days but can't get a correct result on
this. Hopefully someone can help me.

==
1、Run ignite with spark-shell
1)./spark-shell --jars
/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-core-1.6.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-spark/ignite-spark-1.6.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/cache-api-1.0.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-log4j/ignite-log4j-1.6.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-log4j/log4j-1.2.17.jar
--packages
org.apache.ignite:ignite-spark:1.6.0,org.apache.ignite:ignite-spring:1.6.0

2)running the following code on spark-shell
val ic = new IgniteContext[Int, Int](sc, () => new
IgniteConfiguration(),false)
val sharedRDD = ic.fromCache("sharedBaselineCacheRDD")
val initalRDD = sc.parallelize(1 to 10,10).map(i => (i, i))
println("initalRDD.counter=/. " + initalRDD.count() +"\t
partitionCounter=> " + initalRDD.partitions.size)

//sharedRDD.saveValues(initalRDD.map(line=>line._1))
sharedRDD.savePairs(initalRDD, true)//override cache on ignite
println("=>totalIgniteEmbedCounter" + sharedRDD.count + "\t
igniteParitionCounter => " + sharedRDD.partitions.size)
println("=>totalIgniteFilterConditionEmbedCounter" +
sharedRDD.filter(_._2 > 5).count)

3)result as below
scala> import org.apache.ignite.spark._
import org.apache.ignite.spark._

scala> import org.apache.ignite.configuration._
import org.apache.ignite.configuration._

scala> val ic = new IgniteContext[Int, Int](sc, () => new
IgniteConfiguration(),false)
ic: org.apache.ignite.spark.IgniteContext[Int,Int] =
org.apache.ignite.spark.IgniteContext@74e72ff4

scala> val sharedRDD = ic.fromCache("sharedBaselineCacheRDD")
sharedRDD: org.apache.ignite.spark.IgniteRDD[Int,Int] = IgniteRDD[1] at RDD
at IgniteAbstractRDD.scala:31

scala> val initalRDD = sc.parallelize(1 to 10,10).map(i => (i, i))
initalRDD: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[3] at map
at :33

scala> println("initalRDD.counter=/. " + initalRDD.count() +"\t
partitionCounter=> " + initalRDD.partitions.size)
initalRDD.counter=/. 10  partitionCounter=> 10

scala> sharedRDD.savePairs(initalRDD, true)//override cache on ignite

scala> println("=>totalIgniteEmbedCounter" + sharedRDD.count + "\t
igniteParitionCounter => " + sharedRDD.partitions.size)
*=>totalIgniteEmbedCounter10  igniteParitionCounter => 1024*
scala> println("=>totalIgniteFilterConditionEmbedCounter" +
sharedRDD.filter(_._2 > 5).count)
*=>totalIgniteFilterConditionEmbedCounter5*

totalIgniteEmbedCounter is 10, which is right
totalIgniteFilterConditionEmbedCounter is 5, which is right
==


2、IDEA project
1) create a Maven project in IDEA
2) import the Ignite Maven dependencies as above [1]
3) code as below for IDEA
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext

object TestIgniteEmbedCache {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("TestIgniteEmbedCache")
    val sc = new SparkContext(conf)

    //val ic = new IgniteContext[Int, Int](sc, () => new IgniteConfiguration().setIncludeEventTypes(EventType.EVT_TASK_FAILED), false)
    val ic = new IgniteContext[Int, Int](sc, () => new IgniteConfiguration(), false)
    val sharedRDD = ic.fromCache("sharedBaselineCacheRDD")
    val initalRDD = sc.parallelize(1 to 10, 10).map(i => (i, i))
    println("initalRDD.counter= " + initalRDD.count() + "\tpartitionCounter=> " + initalRDD.partitions.size)

    //sharedRDD.saveValues(initalRDD.map(line => line._1))
    sharedRDD.savePairs(initalRDD, true) // overwrite cache on Ignite
    println("=>totalIgniteEmbedCounter" + sharedRDD.count + "\tigniteParitionCounter => " + sharedRDD.partitions.size)
    println("=>totalIgniteFilterConditionEmbedCounter" + sharedRDD.filter(_._2 > 5).count)
  }
}
4、run maven clean assembly:assembly to get sparkignitedemo.jar

5、upload this jar to our Linux driver machine and submit it to the YARN cluster
using the spark-submit command below

/u01/spark-1.6.0-hive/bin/spark-submit --driver-memory 8G --class com.TestIgniteEmbedCache --master yarn --executor-cores 5 --executor-memory 1000m --num-executors 10 --conf spark.rdd.compress=false --conf spark.shuffle.compress=false --conf spark.broadcast.compress=false /home/sparkignitedemo.jar


6、result: this is the issue
totalIgniteEmbedCounter is: 4 or 3000 (I think it is random)
totalIgniteFilterConditionEmbedCounter is: 1 or 2000 (random)
==

This makes me very confused: why does the same code give two different
results? Can anyone help me? I have been blocked on this issue for several
days.

Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-12 Thread percent620
scala> [14:52:04] New version is available at ignite.apache.org: 1.7.0
16/08/12 14:52:46 ERROR TcpDiscoverySpi: Failed to reconnect to cluster
(consider increasing 'networkTimeout' configuration property)
[networkTimeout=5000]
16/08/12 15:37:54 ERROR GridClockSyncProcessor: Failed to send time sync
snapshot to remote node (did not leave grid?)
[nodeId=20d8035e-abec-44ca-a5c0-7e3308984d83,
msg=GridClockDeltaSnapshotMessage [snapVer=GridClockDeltaVersion [ver=34,
topVer=13], deltas={3286d19e-72d5-4353-86d2-03ffdb6c4733=0,
ac2c7723-fa93-49d5-92c3-1d815b6b178b=0,
7d7790d2-a67f-4d76-b36f-47dc08025594=0,
3cbcbe73-f29d-4051-952a-9ba7b80cf1c3=0,
20d8035e-abec-44ca-a5c0-7e3308984d83=0,
f94defdc-ca9b-450b-91c0-6a6b26f5d553=0}], err=Failed to send message (node
may have left the grid or TCP connection cannot be established due to
firewall issues) [node=TcpDiscoveryNode
[id=20d8035e-abec-44ca-a5c0-7e3308984d83, addrs=[XXX, 127.0.0.1],
sockAddrs=[/Z:47500, /XXX:47500, /127.0.0.1:47500], discPort=47500,
order=1, intOrder=1, lastExchangeTime=1470987075851, loc=false,
ver=1.6.0#20160518-sha1:0b22c45b, isClient=false], topic=TOPIC_TIME_SYNC,
msg=GridClockDeltaSnapshotMessage [snapVer=GridClockDeltaVersion [ver=34,
topVer=13], deltas={3286d19e-72d5-4353-86d2-03ffdb6c4733=0,
ac2c7723-fa93-49d5-92c3-1d815b6b178b=0,
7d7790d2-a67f-4d76-b36f-47dc08025594=0,
3cbcbe73-f29d-4051-952a-9ba7b80cf1c3=0,
20d8035e-abec-44ca-a5c0-7e3308984d83=0,
f94defdc-ca9b-450b-91c0-6a6b26f5d553=0}], policy=2]]
16/08/12 15:38:02 ERROR TcpDiscoverySpi: Failed to reconnect to cluster
(consider increasing 'networkTimeout' configuration property)
[networkTimeout=5000]
16/08/12 15:50:02 ERROR TcpDiscoverySpi: Failed to reconnect to cluster
(consider increasing 'networkTimeout' configuration property)
[networkTimeout=5000]
Exception in thread "ignite-update-notifier-timer" class
org.apache.ignite.IgniteClientDisconnectedException: Client node
disconnected: null
at
org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:87)
at
org.apache.ignite.internal.cluster.ClusterGroupAdapter.guard(ClusterGroupAdapter.java:170)
at
org.apache.ignite.internal.cluster.ClusterGroupAdapter.nodes(ClusterGroupAdapter.java:288)
at
org.apache.ignite.internal.processors.cluster.ClusterProcessor$UpdateNotifierTimerTask.safeRun(ClusterProcessor.java:224)
at 
org.apache.ignite.internal.util.GridTimerTask.run(GridTimerTask.java:34)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p7010.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-12 Thread percent620
Get the following error message after a few minutes
scala> val ic = new IgniteContext[Integer,
Integer](sc,"/u01/yueyi/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
ic: org.apache.ignite.spark.IgniteContext[Integer,Integer] =
org.apache.ignite.spark.IgniteContext@26917c50

scala> [14:52:04] New version is available at ignite.apache.org: 1.7.0
16/08/12 14:52:46 ERROR TcpDiscoverySpi: Failed to reconnect to cluster
(consider increasing 'networkTimeout' configuration property)
[networkTimeout=5000]


What should I do next on this issue? Can anyone tell me? Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p7007.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Embedded mode ignite on spark

2016-08-12 Thread percent620
I verified this code as below

val ic = new IgniteContext[Int, Int](sc, () => new IgniteConfiguration(), false)
val sharedRDD = ic.fromCache("sharedBaselineCacheRDD")
val initalRDD = sc.parallelize(1 to 10, 10).map(i => (i, i))
println("initalRDD.couner=/. " + initalRDD.count() + "\tparition=> " + initalRDD.partitions.size)
sharedRDD.savePairs(initalRDD)
println("=>totalcounter" + sharedRDD.count + "\t paris => " + sharedRDD.partitions.size)
println("=>" + sharedRDD.filter(_._2 > 5).count)

but this result is not what I need:

define shared context start
define shared context end
[Stage 1:=>   (28 + 2) / 30][09:29:21] New version is available at ignite.apache.org: 1.7.0
initalRDD.couner=/. 10  parition=> 10
=>totalcounter4  paris => 1024  (need to be 1)
=>1
==


Can anyone help me with this issue? Thanks!!!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p7001.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Embedded mode ignite on spark

2016-08-12 Thread percent620
Thanks, Alisher.

Can you also give more ideas about why Ignite caches discard data? Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p6999.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread percent620
Hello, Nikolai,

I'm new to Ignite; can you please provide more detailed steps for this? Thanks
again!!!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p6986.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread percent620
1) container logs as below
 
1、[20:07:14] Ignite node started OK (id=a3a87e37)
[20:07:14] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24,
heap=2.0GB]
[20:07:14] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48,
heap=4.0GB]
[20:07:15] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
heap=6.0GB]
[20:07:16] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]

2、[20:07:13] Security status [authentication=off, tls/ssl=off]
[20:07:16] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]
[20:07:17] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[20:07:17] 
[20:07:17] Ignite node started OK (id=6c5ee58d)
[20:07:17] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]

3、[20:07:13] Security status [authentication=off, tls/ssl=off]
[20:07:15] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
heap=6.0GB]
[20:07:16] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]
[20:07:17] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[20:07:17] 
[20:07:17] Ignite node started OK (id=d5a9f244)
[20:07:17] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]

4、[20:07:15] Security status [authentication=off, tls/ssl=off]
[20:07:17] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[20:07:17] 
[20:07:17] Ignite node started OK (id=9c74f586)
[20:07:17] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]



2) spark-shell hangs there

3) I'm running spark-shell on the driver machine, NOT the yarn cluster

Thanks!!!




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p6979.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread percent620
Hello, Nikolai,
1、
Just updated the configuration default-config.xml at
/u01/XXX/apache-ignite-fabric-1.6.0-bin/config/default-config.xml

2)./hdfs dfs -put
/u01/XXX/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/



3)
scala> import org.apache.ignite.spark._
import org.apache.ignite.spark._

scala> import org.apache.ignite.configuration._
import org.apache.ignite.configuration._

scala> val ic = new IgniteContext[Integer, Integer](sc,
"/u01/XXX/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")


it's also hanging.

Did I miss some files that should be changed? Thanks!!!




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p6974.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread percent620
Thanks Nikolai very much.

As you requested, I changed the configuration
/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml

but this file is an HDFS file:

1)
$./hdfs dfs -text
/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hadoop-2.6.0-cdh5.5.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hadoop-2.6.0-cdh5.5.0/share/hadoop/common/lib/tachyon-client-0.9.0-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]




<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- the bean definitions were stripped by the mailing-list archive -->
</beans>


2) get the following error message as below

scala> val ic = new IgniteContext[Integer, Integer](sc,
"/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
class org.apache.ignite.IgniteCheckedException: Spring XML configuration
path is invalid:
/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml.
Note that this path should be either absolute or a relative local file
system path, relative to META-INF in classpath or valid URL to IGNITE_HOME.
at
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3580)
at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:678)
at
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:717)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(IgniteContext.scala:85)
at org.apache.ignite.spark.Once.apply(IgniteContext.scala:198)
at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:138)
at org.apache.ignite.spark.IgniteContext.(IgniteContext.scala:59)
at org.apache.ignite.spark.IgniteContext.(IgniteContext.scala:85)
at
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:33)
at
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:38)
at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:40)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:42)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:44)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:46)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:48)
at $iwC$$iwC$$iwC$$iwC$$iwC.(:50)
at $iwC$$iwC$$iwC$$iwC.(:52)
at $iwC$$iwC$$iwC.(:54)
at $iwC$$iwC.(:56)
at $iwC.(:58)
at (:60)
at .(:64)
at .()
at .(:7)
at .()
at $print()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at 
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at 
org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at
org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
at
org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
at
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
at
org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkIL
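The exception message above is explicit: the Spring XML path given to IgniteContext must be a local filesystem path, a path relative to META-INF on the classpath, or a URL to IGNITE_HOME, so an HDFS path will not resolve. A hedged sketch of the working pattern, reusing the local path already shown in this thread (not verified against this cluster):

```scala
// Sketch only: point IgniteContext at a config file that exists on the
// LOCAL filesystem of the driver and of every executor, not at an HDFS path.
val localCfg = "/u01/XXX/apache-ignite-fabric-1.6.0-bin/config/default-config.xml"
val ic = new IgniteContext[Integer, Integer](sc, localCfg)
```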

Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread percent620
$cat default-config.xml




<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- the bean definitions were stripped by the mailing-list archive -->
</beans>

$cat example-default.xml





<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">
    <!-- the bean definitions were stripped by the mailing-list archive; the
         only value that survived is the discovery IP finder address range: -->
    <!-- 127.0.0.1:47500..47509 -->
</beans>


--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p6963.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread percent620
1、Adjusted cluster.properties as below
$cat cluster16.properties
# The number of nodes in the cluster.
IGNITE_NODE_COUNT=4
# The number of CPU Cores for each Apache Ignite node.
IGNITE_RUN_CPU_PER_NODE=1
# The number of Megabytes of RAM for each Apache Ignite node.
IGNITE_MEMORY_PER_NODE=2048
# The version ignite which will be run on nodes.
IGNITE_VERSION=1.6.0
IGNITE_WORK_DIR=/u01/yueyi/apache-ignite-fabric-1.6.0-bin/
IGNITE_XML_CONFIG=/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
IGNITE_RELEASES_DIR=/ignite/release16/
#IGNITE_USERS_LIBS=/u01/yueyi/apache-ignite-fabric-1.6.0-bin/libs/
#IGNITE_HOSTNAME_CONSTRAINT=vmsecdomain010194070026.cm10
IGNITE_PATH=/ignite/release16/apache-ignite-fabric-1.6.0-bin.zip

2、hdfs directory
$./hdfs dfs -ls /ignite/release16
drwxr-xr-x   - hbase hbase  0 2016-08-11 12:44
/ignite/release16/apache-ignite-fabric-1.6.0-bin
-rw-r--r--   3 hbase hbase  175866626 2016-08-11 18:31
/ignite/release16/apache-ignite-fabric-1.6.0-bin.zip

3、yarn console
INFO: Application master registered.
Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
xx1:29077
16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
xxx2:57492
16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
xxx3:59929
16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
xx4:23159

5、container logs as below
1)
[18:33:13] Security status [authentication=off, tls/ssl=off]
[18:33:18] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[18:33:18] 
[18:33:18] Ignite node started OK (id=a060a3ee)
[18:33:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
heap=6.0GB]
[18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]

2)[18:33:13] Security status [authentication=off, tls/ssl=off]
[18:33:19] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[18:33:19] 
[18:33:19] Ignite node started OK (id=5c8dfd50)
[18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]

3)[18:33:12] Ignite node started OK (id=4e75d238)
[18:33:12] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24,
heap=2.0GB]
[18:33:14] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48,
heap=4.0GB]
[18:33:17] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
heap=6.0GB]
[18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]

4)[18:33:14] Ignite node started OK (id=250fcb93)
[18:33:14] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48,
heap=4.0GB]
[18:33:17] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
heap=6.0GB]
[18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=8.0GB]

faced issues=

1、spark-shell test
./spark-shell --jars
/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-core-1.6.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-spark/ignite-spark-1.6.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/cache-api-1.0.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-log4j/ignite-log4j-1.6.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-log4j/log4j-1.2.17.jar
--packages
org.apache.ignite:ignite-spark:1.6.0,org.apache.ignite:ignite-spring:1.6.0

2、SQL context available as sqlContext.

scala> import org.apache.ignite.spark._
import org.apache.ignite.spark._

scala> import org.apache.ignite.configuration._
import org.apache.ignite.configuration._

scala> val ic = new IgniteContext[Integer, Integer](sc, () => new
IgniteConfiguration())




the spark-shell hangs here

Thanks again




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p6961.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread percent620


Thanks very much for your quick response!!!
I will attach the container logs for this issue.

Can you please also help me with the issue below?
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tt6942.html

 


At 2016-08-11 17:43:40, "Nikolai Tikhonov-2 [via Apache Ignite Users]" 
 wrote:

Hi,

Could you please provide logs from the containers?

Also, the IGNITE_PATH property is incorrect: it should contain the path to the
Apache Ignite zip archive, for example
/ignite/apache-ignite-fabric-1.7.0-bin.zip. IGNITE_USERS_LIBS is not needed
here either; that property is only used when you want to deploy your own
libraries to the cluster.
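Applying that advice to the cluster16.properties shown later in this thread, the corrected lines would look like the sketch below (the paths are the ones this thread already uses; treat it as an example, not a verified config):

```
# IGNITE_PATH must point at the Ignite zip archive on HDFS, not at a directory
IGNITE_PATH=/ignite/release16/apache-ignite-fabric-1.6.0-bin.zip
# IGNITE_USERS_LIBS is omitted: it is only needed to ship your own libraries
```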


On Thu, Aug 11, 2016 at 7:54 AM, percent620 <[hidden email]> wrote:
Thanks vkulichenko for the quick response.

Here are my detailed steps for deploying and integrating Spark with Ignite.

1、Followed these guidelines on how to deploy the ignite-yarn application.
<http://apache-ignite-users.70518.x6.nabble.com/file/n6941/ignite-yarn.png>

It succeeded and the log looks OK
==
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5077/filecache/10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/10 22:54:57 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Application master registered.
Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
16/08/10 22:54:59 INFO impl.AMRMClientImpl: Received new token for :
vmsecdomain010194062066.cm10:61362
16/08/10 22:54:59 INFO impl.AMRMClientImpl: Received new token for :
vmsecdomain010194062042.cm10:42077
Aug 10, 2016 10:54:59 PM org.apache.ignite.yarn.ApplicationMaster
onContainersAllocated
INFO: Launching container: container_1455892346017_5077_02_02.
16/08/10 22:54:59 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: vmsecdomain010194062066.cm10:61362
Aug 10, 2016 10:54:59 PM org.apache.ignite.yarn.ApplicationMaster
onContainersAllocated
INFO: Launching container: container_1455892346017_5077_02_03.
16/08/10 22:54:59 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: vmsecdomain010194062042.cm10:42077
Aug 10, 2016 10:55:08 PM org.apache.ignite.yarn.ApplicationMaster
onContainersCompleted
INFO: Container completed. Container id:
container_1455892346017_5077_02_02. State: COMPLETE.
Aug 10, 2016 10:55:09 PM org.apache.ignite.yarn.ApplicationMaster
onContainersCompleted
INFO: Container completed. Container id:
container_1455892346017_5077_02_03. State: COMPLETE.


2、downloaded apache-ignite-fabric-1.6.0-bin.zip and unzipped it to the
/u01/XXX/apache-ignite-fabric-1.6.0-bin directory.

3、cluster16.properties content is as below
$cat cluster16.properties
# The number of nodes in the cluster.
IGNITE_NODE_COUNT=2
# The number of CPU Cores for each Apache Ignite node.
IGNITE_RUN_CPU_PER_NODE=1
# The number of Megabytes of RAM for each Apache Ignite node.
IGNITE_MEMORY_PER_NODE=2048
# The version ignite which will be run on nodes.
IGNITE_VERSION=1.6.0
IGNITE_WORK_DIR=/u01/XXX/apache-ignite-fabric-1.6.0-bin/
IGNITE_XML_CONFIG=/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
IGNITE_RELEASES_DIR=/ignite/release16/
IGNITE_USERS_LIBS=/u01/XXX/apache-ignite-fabric-1.6.0-bin/libs/
IGNITE_PATH=/ignite/release16/


hdfs directory is as below
===
./hdfs dfs -ls /ignite/
drwxr-xr-x   - hbase hbase  0 2016-08-10 17:07 /ignite/release16
drwxr-xr-x   - hbase hbase  0 2016-08-02 06:25 /ignite/releases
drwxr-xr-x   - hbase hbase  0 2016-08-10 17:24 /ignite/workdir
-rw-r--r--   3 hbase hbase   27710331 2016-08-10 17:04 /ignite/yarn
=
$./hdfs dfs -ls /ignite/release16
drwxr-xr-x   - hbase hbase  0 2016-08-11 12:44
/ignite/release16/apache-ignite-fabric-1.6.0-bin

=

4、Running Spark code on YARN; the code is as below
val igniteContext = new IgniteContext[String, BaseLine](sc, () => new
IgniteConfiguration())


the code hangs here and I think the client can't connect to the server

5、from the YARN console I found the following error message
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib

Embedded mode ignite on spark

2016-08-10 Thread percent620
Hello, I want to integrate Ignite with Spark in embedded mode; here are my
detailed steps.
1、added maven dependencies for pom.xml
 

<dependencies>
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-core</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-indexing</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-visor-console</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-spring</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-spark</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-yarn</artifactId>
        <version>1.6.0</version>
    </dependency>
</dependencies>


2、Wrote Spark code and submitted it as a YARN application; the code is
val igniteContext = new IgniteContext[String, BaseLine](sc, () => new
IgniteConfiguration(), false)

false means embedded mode for Ignite, right?
tempDStream.foreachRDD(rdd => {
  val rddCounter = rdd.count().toInt
  totallyCounter += rddCounter
  println(DateUtil.getNowDate() + " invoked! count=" + rddCounter + ";totallyCounter= => " + totallyCounter)
  if (!rdd.isEmpty()) {
    //==start==
    val cacheRdd = igniteContext.fromCache("partitioned")

    // fianlLatestBaselineRDD: the total RDD that needs to be cached in Ignite
    cacheRdd.savePairs(fianlLatestBaselineRDD)

    println("xx=> " + xxx() + "\t " +
      " xxx => " + xxx() + "\t " +
      " cacheRdd.counter=> " + cacheRdd.count())
  }
})
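For context, the third IgniteContext constructor argument toggles the deployment mode. A minimal sketch, assuming the Ignite 1.6 ignite-spark API (not verified against this cluster; BaseLine is this thread's own class):

```scala
// false -> embedded mode: Ignite server nodes run inside the Spark executors,
//          so cached data lives and dies with the executors.
// true (the default) -> standalone mode: executors join an external,
//          separately started Ignite cluster as clients.
val embedded   = new IgniteContext[String, BaseLine](sc, () => new IgniteConfiguration(), false)
val standalone = new IgniteContext[String, BaseLine](sc, () => new IgniteConfiguration())
```

In embedded mode, cache lifetime is tied to executor lifetime, which may explain cached entries disappearing when YARN reallocates executors.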
3、the code runs on YARN successfully, but I found that Ignite can't cache all
the data; it discards some of it. For example:
2016-08-11 11:28:00: es write invoked! count=10; totallyCounter= => 10
cacheRdd.start.counter=> 0
2、cacheRdd.start.fianlLatestBaselineRDD.counter=> 10
xxx.count=> 140   yyy.count => 10   cacheRdd.counter=> 5 (needs to be 10)
===
2016-08-11 11:28:48: write invoked! count=9; totallyCounter= => 19
cacheRdd.start.counter=> 5
.count=> 126   yyy.count => 9   cacheRdd.counter=> 9 (needs to be 19)
=


Can anyone help me with this issue? Thanks again





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite for Spark on YARN Deployment

2016-08-10 Thread percent620
Thanks vkulichenko for the quick response.

Here are my detailed steps for deploying and integrating Spark with Ignite.

1、Followed these guidelines on how to deploy the ignite-yarn application.

It succeeded and the log looks OK
==
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5077/filecache/10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/10 22:54:57 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Application master registered.
Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
16/08/10 22:54:59 INFO impl.AMRMClientImpl: Received new token for :
vmsecdomain010194062066.cm10:61362
16/08/10 22:54:59 INFO impl.AMRMClientImpl: Received new token for :
vmsecdomain010194062042.cm10:42077
Aug 10, 2016 10:54:59 PM org.apache.ignite.yarn.ApplicationMaster
onContainersAllocated
INFO: Launching container: container_1455892346017_5077_02_02.
16/08/10 22:54:59 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: vmsecdomain010194062066.cm10:61362
Aug 10, 2016 10:54:59 PM org.apache.ignite.yarn.ApplicationMaster
onContainersAllocated
INFO: Launching container: container_1455892346017_5077_02_03.
16/08/10 22:54:59 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: vmsecdomain010194062042.cm10:42077
Aug 10, 2016 10:55:08 PM org.apache.ignite.yarn.ApplicationMaster
onContainersCompleted
INFO: Container completed. Container id:
container_1455892346017_5077_02_02. State: COMPLETE.
Aug 10, 2016 10:55:09 PM org.apache.ignite.yarn.ApplicationMaster
onContainersCompleted
INFO: Container completed. Container id:
container_1455892346017_5077_02_03. State: COMPLETE.


2、downloaded apache-ignite-fabric-1.6.0-bin.zip and unzipped it to the
/u01/XXX/apache-ignite-fabric-1.6.0-bin directory.

3、cluster16.properties content is as below
$cat cluster16.properties
# The number of nodes in the cluster.
IGNITE_NODE_COUNT=2
# The number of CPU Cores for each Apache Ignite node.
IGNITE_RUN_CPU_PER_NODE=1
# The number of Megabytes of RAM for each Apache Ignite node.
IGNITE_MEMORY_PER_NODE=2048
# The version ignite which will be run on nodes.
IGNITE_VERSION=1.6.0
IGNITE_WORK_DIR=/u01/XXX/apache-ignite-fabric-1.6.0-bin/
IGNITE_XML_CONFIG=/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
IGNITE_RELEASES_DIR=/ignite/release16/
IGNITE_USERS_LIBS=/u01/XXX/apache-ignite-fabric-1.6.0-bin/libs/
IGNITE_PATH=/ignite/release16/


hdfs directory is as below
===
./hdfs dfs -ls /ignite/
drwxr-xr-x   - hbase hbase  0 2016-08-10 17:07 /ignite/release16
drwxr-xr-x   - hbase hbase  0 2016-08-02 06:25 /ignite/releases
drwxr-xr-x   - hbase hbase  0 2016-08-10 17:24 /ignite/workdir
-rw-r--r--   3 hbase hbase   27710331 2016-08-10 17:04 /ignite/yarn
=
$./hdfs dfs -ls /ignite/release16
drwxr-xr-x   - hbase hbase  0 2016-08-11 12:44
/ignite/release16/apache-ignite-fabric-1.6.0-bin

=

4、Running Spark code on YARN; the code is as below
val igniteContext = new IgniteContext[String, BaseLine](sc, () => new
IgniteConfiguration())


the code hangs here and I think the client can't connect to the server

5、from the YARN console I found the following error message
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/appcache/application_1455892346017_5077/filecache/10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/10 17:24:16 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
Aug 10, 2016 5:24:16 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Application master registered.
Aug 10, 2016 5:24:16 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 2,432, cpu 1.
Aug 10, 2016 5:24:16 PM org.apache.ignite.yar