Failed to find affinity server node

2019-05-11 Thread mehdi sey
Hi. I have 3 server nodes and one client node. I want to put data only via the
client node and have it stored in the server-node caches. I started the 3
server nodes with the command below:
/usr/local/apache-ignite-fabric-2.6.0-bin/bin/ignite.sh
I chose example-cache.xml (attached) as the input for the above command.
My program code is as follows:

import java.util.Scanner;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CacheCreateClient {
    /** Cache name. */
    private static final String EMP_CACHE_NAME = "Employee_Cache";

    public static void main(String[] args) {
        System.out.println(">>> Please make sure a server node is started using class " +
            "com.igniteexamples.createcache.StartServerNode before starting this..");

        // Set this instance as a client node, then start it.
        Ignition.setClientMode(true);

        try (Ignite ignite =
                 Ignition.start("/usr/local/apache-ignite-fabric-2.6.0-bin/examples/config/example-cache.xml")) {
            System.out.println();
            System.out.println(">>> CacheCreateClient example started.");
            System.out.println(">>> Is client node: " + Ignition.isClientMode());
            System.out.println(">>> Number of nodes in cluster: " + ignite.cluster().nodes().size());

            // Create the Employee cache with the default configuration.
            IgniteCache<Long, Employee> employeeCache = ignite.createCache(EMP_CACHE_NAME);
            System.out.println(">>> Cache created with name: " + employeeCache.getName());

            System.out.println(">>> Inserting record in the cache..");
            Employee employee = new Employee();
            employee.setName("Simbaa");
            employee.setProjectId(123);
            employee.setAddress("Planet Earth");
            employee.setSalary(10);
            employeeCache.put(1001L, employee);

            System.out.println(">>> Inserting record in the cache..");
            Employee employee2 = new Employee();
            employee2.setName("mehdi");
            employee2.setProjectId(125);
            employee2.setAddress("yasouj");
            employee2.setSalary(11);
            employeeCache.put(1002L, employee2);

            System.out.println(">>> Number of records in the cache: " + employeeCache.size());

            // Should be zero here: caches are deployed on server nodes, not on this client.
            System.out.println(">>> Number of records on the local client node: " +
                employeeCache.localSize());

            // Wait for user input, then stop both the server and client nodes.
            Scanner readUserInput = new Scanner(System.in);
            readUserInput.nextLine();
            ignite.cluster().stopNodes();
            System.out.println("Finished!");
        }
    }
}

After executing the above code in my IntelliJ IDE I encounter the error
below:

/usr/lib/jvm/java-8-oracle/bin/java
-javaagent:/snap/intellij-idea-community/143/lib/idea_rt.jar=38523:/snap/intellij-idea-community/143/bin
-Dfile.encoding=UTF-8 -classpath
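A frequent cause of "Failed to find affinity server node" is that the client starts its own one-node topology instead of joining the servers, so there is no server node to host the cache. One thing worth checking (a sketch with assumed addresses, not taken from the attached file) is that example-cache.xml gives the client the same discovery addresses the three server nodes use:

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <!-- Assumed: all three server nodes run on this host's default
                     discovery port range; replace with your servers' addresses. -->
                <property name="addresses">
                    <list>
                        <value>127.0.0.1:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```

The `ignite.cluster().nodes().size()` printout in the client code is a good sanity check: with three servers and one client it should report 4.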

igfs as cache for hdfs run on apache ignite accelerator but not on apache ignite 2.6

2019-03-14 Thread mehdi sey
I want to execute the Hadoop wordcount example on Apache Ignite. I have used
IGFS as a cache for HDFS in my Ignite configuration, but after submitting the
job via Hadoop for execution on Ignite I encountered the error below. Thanks
in advance to anyone who can help! Note that I can run IGFS as a cache for
HDFS on the Apache Ignite Hadoop Accelerator version 2.6.

Using configuration: examples/config/filesystem/example-igfs-hdfs.xml

[00:47:13]__
[00:47:13]   /  _/ ___/ |/ /  _/_  __/ __/
[00:47:13]  _/ // (7 7// /  / / / _/
[00:47:13] /___/\___/_/|_/___/ /_/ /___/
[00:47:13]
[00:47:13] ver. 2.6.0#20180710-sha1:669feacc
[00:47:13] 2018 Copyright(C) Apache Software Foundation
[00:47:13]
[00:47:13] Ignite documentation: http://ignite.apache.org
[00:47:13]
[00:47:13] Quiet mode.
[00:47:13]   ^-- Logging to file
'/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-f3712946.log'
[00:47:13]   ^-- Logging by 'Log4JLogger [quiet=true,
config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
[00:47:13]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[00:47:13]
[00:47:13] OS: Linux 4.15.0-46-generic amd64
[00:47:13] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[00:47:13] Configured plugins:
[00:47:13]   ^-- Ignite Native I/O Plugin [Direct I/O]
[00:47:13]   ^-- Copyright(C) Apache Software Foundation
[00:47:13]
[00:47:13] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[00:47:22] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[00:47:22] Security status [authentication=off, tls/ssl=off]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
[00:47:23] HADOOP_HOME is set to /usr/local/hadoop
[00:47:23] Resolved Hadoop classpath locations:
/usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs,
/usr/local/hadoop/share/hadoop/mapreduce
[00:47:26] Performance suggestions for grid  (fix if possible)
[00:47:26] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[00:47:26]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[00:47:26]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[00:47:26]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[00:47:26]   ^-- Enable ATOMIC mode if not using transactions (set
'atomicityMode' to ATOMIC)
[00:47:26]   ^-- Disable fully synchronous writes (set
'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
[00:47:26] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[00:47:26]
[00:47:26] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[00:47:26]
[00:47:26] Ignite node started OK (id=f3712946)
[00:47:26] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8,
offheap=1.6GB, heap=1.0GB]
[00:47:26]   ^-- Node [id=F3712946-0810-440F-A440-140FE4AB6FA7,
clusterState=ACTIVE]
[00:47:26] Data Regions Configured:
[00:47:27]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB,
persistenceEnabled=false]
[00:47:35] New version is available at ignite.apache.org: 2.7.0
[2019-03-13 00:47:46,978][ERROR][igfs-igfs-ipc-#53][IgfsImpl] File info
operation in DUAL mode failed [path=/output]
class org.apache.ignite.IgniteException: For input string: "30s"
at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
at

org.apache.ignite.IgniteException: For input string: "30s" in ignite hadoop execution

2019-03-13 Thread mehdi sey
I want to execute the Hadoop wordcount example on Apache Ignite. I have used
IGFS as a cache for HDFS in my Ignite configuration, but after submitting the
job via Hadoop for execution on Ignite I encountered the error below. Thanks
in advance to anyone who can help!

Using configuration: examples/config/filesystem/example-igfs-hdfs.xml

[00:47:13]__   
[00:47:13]   /  _/ ___/ |/ /  _/_  __/ __/ 
[00:47:13]  _/ // (7 7// /  / / / _/   
[00:47:13] /___/\___/_/|_/___/ /_/ /___/  
[00:47:13] 
[00:47:13] ver. 2.6.0#20180710-sha1:669feacc
[00:47:13] 2018 Copyright(C) Apache Software Foundation
[00:47:13] 
[00:47:13] Ignite documentation: http://ignite.apache.org
[00:47:13] 
[00:47:13] Quiet mode.
[00:47:13]   ^-- Logging to file
'/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-f3712946.log'
[00:47:13]   ^-- Logging by 'Log4JLogger [quiet=true,
config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
[00:47:13]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[00:47:13] 
[00:47:13] OS: Linux 4.15.0-46-generic amd64
[00:47:13] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[00:47:13] Configured plugins:
[00:47:13]   ^-- Ignite Native I/O Plugin [Direct I/O] 
[00:47:13]   ^-- Copyright(C) Apache Software Foundation
[00:47:13] 
[00:47:13] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[00:47:22] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[00:47:22] Security status [authentication=off, tls/ssl=off]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
[00:47:23] HADOOP_HOME is set to /usr/local/hadoop
[00:47:23] Resolved Hadoop classpath locations:
/usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs,
/usr/local/hadoop/share/hadoop/mapreduce
[00:47:26] Performance suggestions for grid  (fix if possible)
[00:47:26] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[00:47:26]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[00:47:26]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[00:47:26]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[00:47:26]   ^-- Enable ATOMIC mode if not using transactions (set
'atomicityMode' to ATOMIC)
[00:47:26]   ^-- Disable fully synchronous writes (set
'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
[00:47:26] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[00:47:26] 
[00:47:26] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[00:47:26] 
[00:47:26] Ignite node started OK (id=f3712946)
[00:47:26] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8,
offheap=1.6GB, heap=1.0GB]
[00:47:26]   ^-- Node [id=F3712946-0810-440F-A440-140FE4AB6FA7,
clusterState=ACTIVE]
[00:47:26] Data Regions Configured:
[00:47:27]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB,
persistenceEnabled=false]
[00:47:35] New version is available at ignite.apache.org: 2.7.0
[2019-03-13 00:47:46,978][ERROR][igfs-igfs-ipc-#53][IgfsImpl] File info
operation in DUAL mode failed [path=/output]
class org.apache.ignite.IgniteException: For input string: "30s"
at
org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
at
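A known trigger for `IgniteException: For input string: "30s"` in this kind of setup is a Hadoop 2.8+-style configuration file that uses duration-suffixed values (for example a `30s` timeout in an hdfs-site.xml/hdfs-default.xml property), while the Hadoop client code visible to the Ignite node still reads the property with a plain long parser. The sketch below only reproduces that parse failure and shows the kind of suffix normalization that sidesteps it; the property names involved are assumptions, and the practical fix is usually to use plain numeric values (e.g. `30000` milliseconds) in the Hadoop config files the Ignite node reads:

```java
// Reproduces the "For input string: \"30s\"" failure: a duration-suffixed
// value is rejected by the plain long parser used by older Hadoop clients.
public class DurationParseDemo {
    /** Strips a trailing single-letter time-unit suffix ("s", "m", "h") before parsing. */
    static long stripUnitAndParse(String v) {
        char last = v.charAt(v.length() - 1);
        if (last == 's' || last == 'm' || last == 'h')
            v = v.substring(0, v.length() - 1);
        return Long.parseLong(v);
    }

    public static void main(String[] args) {
        try {
            Long.parseLong("30s"); // what the old parsing path effectively does
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }
        System.out.println(stripUnitAndParse("30s")); // prints 30
    }
}
```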

Re: Exception when running hadoop fs -ls igfs://igfs@localhost:10500/

2019-03-12 Thread mehdi sey
Hi. I have the same problem as you. I followed your post, but my problem is
still not solved. I encounter this error when I execute the Hadoop wordcount
example on Ignite (I have used IGFS as a cache for HDFS). When I run the
wordcount example I get the following error:
[23:11:13]__   
[23:11:13]   /  _/ ___/ |/ /  _/_  __/ __/ 
[23:11:13]  _/ // (7 7// /  / / / _/   
[23:11:13] /___/\___/_/|_/___/ /_/ /___/  
[23:11:13] 
[23:11:13] ver. 2.6.0#20180710-sha1:669feacc
[23:11:13] 2018 Copyright(C) Apache Software Foundation
[23:11:13] 
[23:11:13] Ignite documentation: http://ignite.apache.org
[23:11:13] 
[23:11:13] Quiet mode.
[23:11:13]   ^-- Logging to file
'/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-66905db1.log'
[23:11:13]   ^-- Logging by 'Log4JLogger [quiet=true,
config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
[23:11:13]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[23:11:13] 
[23:11:13] OS: Linux 4.15.0-46-generic amd64
[23:11:13] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[23:11:14] Configured plugins:
[23:11:14]   ^-- Ignite Native I/O Plugin [Direct I/O] 
[23:11:14]   ^-- Copyright(C) Apache Software Foundation
[23:11:14] 
[23:11:14] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[23:11:14] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[23:11:14] Security status [authentication=off, tls/ssl=off]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
[23:11:16] HADOOP_HOME is set to /usr/local/hadoop
[23:11:16] Resolved Hadoop classpath locations:
/usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs,
/usr/local/hadoop/share/hadoop/mapreduce
[23:11:18] Performance suggestions for grid  (fix if possible)
[23:11:18] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[23:11:18]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[23:11:18]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[23:11:18]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[23:11:18]   ^-- Enable ATOMIC mode if not using transactions (set
'atomicityMode' to ATOMIC)
[23:11:18]   ^-- Disable fully synchronous writes (set
'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
[23:11:18] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[23:11:18] 
[23:11:18] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[23:11:18] 
[23:11:18] Ignite node started OK (id=66905db1)
[23:11:18] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8,
offheap=1.6GB, heap=1.0GB]
[23:11:18]   ^-- Node [id=66905DB1-732F-40F3-BD65-7CE9E73DB610,
clusterState=ACTIVE]
[23:11:18] Data Regions Configured:
[23:11:18]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB,
persistenceEnabled=false]
[23:11:28] New version is available at ignite.apache.org: 2.7.0
[2019-03-12 23:11:29,119][ERROR][igfs-igfs-ipc-#52][IgfsImpl] File info
operation in DUAL mode failed [path=/output]
class org.apache.ignite.IgniteException: For input string: "30s"
at
org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
at


Re: ClassNotFoundException when using IgniteHadoopIgfsSecondaryFileSystem

2019-03-10 Thread mehdi sey
Hi. I want to start an Ignite node with a configuration file named
example-igfs.xml, which I altered to use IGFS as a cache layer for HDFS. I
start the node with the command below:
/usr/local/apache-ignite-fabric-2.6.0-bin/bin/ignite.sh
/usr/local/apache-ignite-fabric-2.6.0-bin/examples/config/filesystem/example-igfs.xml

But after executing this command I encounter the following error:
/  _/ ___/ |/ /  _/_  __/ __/
[09:57:48]  _/ // (7 7// /  / / / _/  
[09:57:48] /___/\___/_/|_/___/ /_/ /___/  
[09:57:48]
[09:57:48] ver. 2.6.0#20180710-sha1:669feacc
[09:57:48] 2018 Copyright(C) Apache Software Foundation
[09:57:48]
[09:57:48] Ignite documentation: http://ignite.apache.org
[09:57:48]
[09:57:48] Quiet mode.
[09:57:48]   ^-- Logging to file
'/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-246509e8.0.log'
[09:57:48]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[09:57:48]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[09:57:48]
[09:57:48] OS: Linux 4.15.0-43-generic amd64
[09:57:48] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[09:57:48] Configured plugins:
[09:57:48]   ^-- None
[09:57:48]
[09:57:48] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[09:57:48] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[09:57:48] Security status [authentication=off, tls/ssl=off]
[09:57:49,412][SEVERE][main][IgniteKernal] Exception during start
processors, node will be stopped and close connections
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:361)
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:374)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:456)
at
org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.safeCreateConfiguration(HadoopUtils.java:334)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.start(HadoopBasicFileSystemFactoryDelegate.java:129)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.start(HadoopCachingFileSystemFactoryDelegate.java:58)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.start(HadoopIgfsSecondaryFileSystemDelegateImpl.java:413)
at
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:276)
at org.apache.ignite.internal.processors.igfs.IgfsImpl.<init>(IgfsImpl.java:185)
at org.apache.ignite.internal.processors.igfs.IgfsContext.<init>(IgfsContext.java:102)
at
org.apache.ignite.internal.processors.igfs.IgfsProcessor.start(IgfsProcessor.java:116)
at
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1739)
at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:990)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
at
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1069)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:955)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693)
at org.apache.ignite.Ignition.start(Ignition.java:352)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
Caused by: java.lang.ClassNotFoundException:
com.google.common.base.Preconditions
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 23 more
[09:57:49,414][SEVERE][main][IgniteKernal] Got exception while starting
(will rollback startup routine).
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:361)
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:374)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:456)
at
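The `NoClassDefFoundError` above means Guava's `Preconditions` class is not on the node's classpath; Hadoop's `Configuration` class needs it, so the ignite-hadoop module expects Hadoop's dependency jars (including Guava) to be visible. A hypothetical diagnostic sketch for checking whether a dependency is loadable before starting the node; the usual remedy in reports like this is to make the Guava jar from `$HADOOP_HOME/share/hadoop/common/lib` visible to Ignite, e.g. by copying it into the distribution's `libs/` folder:

```java
// Probe whether a class is visible to the current classloader. If the probe
// for com.google.common.base.Preconditions fails, Guava is missing from the
// classpath and Hadoop's Configuration cannot initialize.
public class ClasspathProbe {
    /** Returns true if the named class can be loaded by the current classloader. */
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("guava Preconditions visible: "
            + isOnClasspath("com.google.common.base.Preconditions"));
    }
}
```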

Failed to start grid: com/google/common/base/Preconditions

2019-03-07 Thread mehdi sey
Hi. I want to start an Ignite node with a configuration file named
example-igfs.xml, which I altered to use IGFS as a cache layer for HDFS. I
start the node with the command below:
/usr/local/apache-ignite-fabric-2.6.0-bin/bin/ignite.sh
/usr/local/apache-ignite-fabric-2.6.0-bin/examples/config/filesystem/example-igfs.xml

But after executing this command I encounter the following error:
/  _/ ___/ |/ /  _/_  __/ __/ 
[09:57:48]  _/ // (7 7// /  / / / _/   
[09:57:48] /___/\___/_/|_/___/ /_/ /___/  
[09:57:48] 
[09:57:48] ver. 2.6.0#20180710-sha1:669feacc
[09:57:48] 2018 Copyright(C) Apache Software Foundation
[09:57:48] 
[09:57:48] Ignite documentation: http://ignite.apache.org
[09:57:48] 
[09:57:48] Quiet mode.
[09:57:48]   ^-- Logging to file
'/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-246509e8.0.log'
[09:57:48]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[09:57:48]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[09:57:48] 
[09:57:48] OS: Linux 4.15.0-43-generic amd64
[09:57:48] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[09:57:48] Configured plugins:
[09:57:48]   ^-- None
[09:57:48] 
[09:57:48] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[09:57:48] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[09:57:48] Security status [authentication=off, tls/ssl=off]
[09:57:49,412][SEVERE][main][IgniteKernal] Exception during start
processors, node will be stopped and close connections
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:361)
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:374)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:456)
at
org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.safeCreateConfiguration(HadoopUtils.java:334)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.start(HadoopBasicFileSystemFactoryDelegate.java:129)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.start(HadoopCachingFileSystemFactoryDelegate.java:58)
at
org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.start(HadoopIgfsSecondaryFileSystemDelegateImpl.java:413)
at
org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.start(IgniteHadoopIgfsSecondaryFileSystem.java:276)
at org.apache.ignite.internal.processors.igfs.IgfsImpl.<init>(IgfsImpl.java:185)
at org.apache.ignite.internal.processors.igfs.IgfsContext.<init>(IgfsContext.java:102)
at
org.apache.ignite.internal.processors.igfs.IgfsProcessor.start(IgfsProcessor.java:116)
at
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1739)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:990)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1069)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:955)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693)
at org.apache.ignite.Ignition.start(Ignition.java:352)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
Caused by: java.lang.ClassNotFoundException:
com.google.common.base.Preconditions
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 23 more
[09:57:49,414][SEVERE][main][IgniteKernal] Got exception while starting
(will rollback startup routine).
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
at
org.apache.hadoop.conf.Configuration$DeprecationDelta.(Configuration.java:361)
at
org.apache.hadoop.conf.Configuration$DeprecationDelta.(Configuration.java:374)
at org.apache.hadoop.conf.Configuration.(Configuration.java:456)
at

Error in running wordcount hadoop example in ignite

2019-02-27 Thread mehdi sey
hi,
i want to execute the hadoop wordcount example on apache ignite. i have used
apache-ignite-hadoop-2.6.0-bin to execute the map-reduce tasks. my
default-config.xml in the apache-ignite-hadoop-2.6.0-bin/config folder is
as below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <description>
        Spring file for Ignite node configuration with IGFS and Apache
        Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    [the remaining bean definitions were stripped by the mail archive]
</beans>
i have run an ignite node with the below command on linux ubuntu:

*/usr/local/apache-ignite-hadoop-2.6.0-bin/bin/ignite.sh
/usr/local/apache-ignite-hadoop-2.6.0-bin/config/default-config.xml*

after starting the ignite node i executed the hadoop wordcount example on
ignite with the below command:

*./hadoop jar
/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar
wordcount /input/hadoop output2*

but after executing the above command i encountered an error, shown in the
attached image. please help with solving this problem. i have also seen the
link below, but it did not help.
http://apache-ignite-users.70518.x6.nabble.com/NPE-issue-with-trying-to-submit-Hadoop-MapReduce-tc2146.html#a2183
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: read from igniteRDD and write to igniteRDD

2019-01-22 Thread mehdi sey
i have written a piece of code that reads data from an ignite cache table,
but i encounter the following error.

/usr/lib/jvm/java-8-oracle/bin/java
-javaagent:/snap/intellij-idea-community/113/lib/idea_rt.jar=46131:/snap/intellij-idea-community/113/bin
-Dfile.encoding=UTF-8 -classpath

read from igniteRDD and write to igniteRDD

2019-01-14 Thread mehdi sey
hi. as we know we can create igniteRDD for sharing between spark worker. i
want to know how we can read from igniteRDD from spark executor and how to
write to igniteRDD from spark executor. is it possible to share an example?
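The two directions asked about can be sketched in one small job. This is a hedged sketch, not the official example: the XML path, cache name, and numbers are assumptions, loosely following Ignite's shared-RDD sample.

```scala
import org.apache.ignite.spark.{IgniteContext, IgniteRDD}
import org.apache.spark.{SparkConf, SparkContext}

object SharedRDDReadWrite extends App {
  val sc = new SparkContext(new SparkConf().setAppName("SharedRDDReadWrite"))

  // IgniteContext is the entry point; the XML path here is an assumption.
  val ic = new IgniteContext(sc, "config/example-shared-rdd.xml")

  // fromCache attaches an IgniteRDD to the named cache.
  val sharedRDD: IgniteRDD[Int, Int] = ic.fromCache("sharedRDD")

  // Write: savePairs pushes (key, value) pairs from the Spark executors into the cache.
  sharedRDD.savePairs(sc.parallelize(1 to 100, 4).map(i => (i, i * i)))

  // Read: IgniteRDD is a normal Spark RDD, so transformations run on the executors.
  val bigSquares = sharedRDD.filter(_._2 > 50).count()
  println(s"pairs with value > 50: $bigSquares")

  ic.close(true)
  sc.stop()
}
```

Since IgniteRDD extends Spark's RDD, executor-side transformations such as filter and map read through to the cache, and savePairs is the executor-side write path.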





Distributed Training in tensorflow

2019-01-06 Thread mehdi sey
Distributed training allows the computational resources of a whole cluster
to be used, and thus speeds up the training of deep learning models.
TensorFlow is a machine learning framework that natively supports
distributed neural network training, inference and other computations.
Using this ability, we can calculate gradients on the nodes where the data
are stored, reduce them, and finally update the model parameters. In the
case of TensorFlow on Apache Ignite, must we run a TensorFlow worker on
each server in the cluster so that it can work on that server's data?





error in running shared rdd in ignite

2019-01-05 Thread mehdi sey
hi, i have a code for writing into ignite rdd. this program read data from
spark rdd and catch it on ignite rdd. i run it with command line in Linux
Ubuntu but in the middle of execution i have encounter with below error. i
checked in spark UI for watching if job complete or not but the job is not
complete and failed. why? i have attached piece of code that i have wrote
and run with command.

$SPARK_HOME/bin/spark-submit --class "com.gridgain.RDDWriter" --master
spark://linux-client:7077 ~/spark\ and\ ignite\
issue/ignite-and-spark-integration-master/ignite-rdd/ignite-spark-scala/target/ignite-spark-scala-1.0.jar
 
2019-01-05 11:47:02 WARN  Utils:66 - Your hostname, linux-client resolves to
a loopback address: 127.0.1.1, but we couldn't find any external IP address!
2019-01-05 11:47:02 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind
to another address
2019-01-05 11:47:03 WARN  NativeCodeLoader:62 - Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
2019-01-05 11:47:03 INFO  SparkContext:54 - Running Spark version 2.4.0
2019-01-05 11:47:03 INFO  SparkContext:54 - Submitted application: RDDWriter
2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing view acls to: mehdi
2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing modify acls to:
mehdi
2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing view acls groups to: 
2019-01-05 11:47:03 INFO  SecurityManager:54 - Changing modify acls groups
to: 
2019-01-05 11:47:03 INFO  SecurityManager:54 - SecurityManager:
authentication disabled; ui acls disabled; users  with view permissions:
Set(mehdi); groups with view permissions: Set(); users  with modify
permissions: Set(mehdi); groups with modify permissions: Set()
2019-01-05 11:47:03 WARN  MacAddressUtil:136 - Failed to find a usable
hardware address from the network interfaces; using random bytes:
88:26:00:23:5d:50:a0:61
2019-01-05 11:47:03 INFO  Utils:54 - Successfully started service
'sparkDriver' on port 36233.
2019-01-05 11:47:03 INFO  SparkEnv:54 - Registering MapOutputTracker
2019-01-05 11:47:03 INFO  SparkEnv:54 - Registering BlockManagerMaster
2019-01-05 11:47:03 INFO  BlockManagerMasterEndpoint:54 - Using
org.apache.spark.storage.DefaultTopologyMapper for getting topology
information
2019-01-05 11:47:03 INFO  BlockManagerMasterEndpoint:54 -
BlockManagerMasterEndpoint up
2019-01-05 11:47:03 INFO  DiskBlockManager:54 - Created local directory at
/tmp/blockmgr-6e47832e-855a-4305-a293-662379733b7f
2019-01-05 11:47:03 INFO  MemoryStore:54 - MemoryStore started with capacity
366.3 MB
2019-01-05 11:47:03 INFO  SparkEnv:54 - Registering OutputCommitCoordinator
2019-01-05 11:47:03 INFO  log:192 - Logging initialized @2024ms
2019-01-05 11:47:04 INFO  Server:351 - jetty-9.3.z-SNAPSHOT, build
timestamp: unknown, git hash: unknown
2019-01-05 11:47:04 INFO  Server:419 - Started @2108ms
2019-01-05 11:47:04 INFO  AbstractConnector:278 - Started
ServerConnector@5ba745bc{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-01-05 11:47:04 INFO  Utils:54 - Successfully started service 'SparkUI'
on port 4040.
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@606fc505{/jobs,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@2c30b71f{/jobs/json,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@1d81e101{/jobs/job,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@bf71cec{/jobs/job/json,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@22d6cac2{/stages,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@30cdae70{/stages/json,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@1654a892{/stages/stage,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@6c000e0c{/stages/stage/json,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@5f233b26{/stages/pool,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@44f9779c{/stages/pool/json,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@6974a715{/storage,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@5e8a459{/storage/json,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@43d455c9{/storage/rdd,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@4c9e9fb8{/storage/rdd/json,null,AVAILABLE,@Spark}
2019-01-05 11:47:04 INFO  ContextHandler:781 - Started

error in running shared rdd in ignite

2019-01-05 Thread mehdi sey
hi, i have code that writes into an ignite rdd. the program reads data from
a spark rdd and caches it in an ignite rdd. i run it from the command line
on Linux Ubuntu, but in the middle of execution i encounter the error
below. i checked the spark UI to see whether the job completed; it failed
without completing. why? i have attached the piece of code that i wrote
and the command i ran.

this is my scala code:
package com.gridgain

import org.apache.ignite.spark.{IgniteContext, IgniteRDD}
import org.apache.spark.{SparkConf, SparkContext}

object RDDWriter extends App {
  val conf = new SparkConf().setAppName("RDDWriter")
  val sc = new SparkContext(conf)
  val ic = new IgniteContext(sc,
"/usr/local/apache-ignite-fabric-2.6.0-bin/examples/config/spark/example-shared-rdd.xml")
  val sharedRDD: IgniteRDD[Int, Int] = ic.fromCache("sharedRDD")
  sharedRDD.savePairs(sc.parallelize(1 to 1000, 10).map(i => (i, i)))
  ic.close(true)
  sc.stop()
}

object RDDReader extends App {
  val conf = new SparkConf().setAppName("RDDReader")
  val sc = new SparkContext(conf)
  val ic = new IgniteContext(sc,
"/usr/local/apache-ignite-fabric-2.6.0-bin/examples/config/spark/example-shared-rdd.xml")
  val sharedRDD: IgniteRDD[Int, Int] = ic.fromCache("sharedRDD")
  val greaterThanFiveHundred = sharedRDD.filter(_._2 > 500)
  println("The count is " + greaterThanFiveHundred.count())
  ic.close(true)
  sc.stop()
}
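For reference, the example-shared-rdd.xml passed to IgniteContext above ships with the Ignite distribution. A minimal equivalent, sketched from the standard Ignite Spring format (everything beyond the bean and property names is an assumption), defines the cache the code attaches to:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Must match the name passed to ic.fromCache(...) -->
                <property name="name" value="sharedRDD"/>
                <property name="cacheMode" value="PARTITIONED"/>
            </bean>
        </property>
    </bean>
</beans>
```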

this is the result of running it:

$SPARK_HOME/bin/spark-submit --class "com.gridgain.RDDWriter" --master
spark://linux-client:7077 ~/spark\ and\ ignite\
issue/ignite-and-spark-integration-master/ignite-rdd/ignite-spark-scala/target/ignite-spark-scala-1.0.jar
 
2019-01-05 12:10:44 WARN  Utils:66 - Your hostname, linux-client resolves to
a loopback address: 127.0.1.1; using 192.168.43.225 instead (on interface
wlp3s0)
2019-01-05 12:10:44 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind
to another address
2019-01-05 12:10:46 WARN  NativeCodeLoader:62 - Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
2019-01-05 12:10:48 INFO  SparkContext:54 - Running Spark version 2.4.0
2019-01-05 12:10:48 INFO  SparkContext:54 - Submitted application: RDDWriter
2019-01-05 12:10:48 INFO  SecurityManager:54 - Changing view acls to: mehdi
2019-01-05 12:10:48 INFO  SecurityManager:54 - Changing modify acls to:
mehdi
2019-01-05 12:10:48 INFO  SecurityManager:54 - Changing view acls groups to: 
2019-01-05 12:10:48 INFO  SecurityManager:54 - Changing modify acls groups
to: 
2019-01-05 12:10:48 INFO  SecurityManager:54 - SecurityManager:
authentication disabled; ui acls disabled; users  with view permissions:
Set(mehdi); groups with view permissions: Set(); users  with modify
permissions: Set(mehdi); groups with modify permissions: Set()
2019-01-05 12:10:51 INFO  Utils:54 - Successfully started service
'sparkDriver' on port 42209.
2019-01-05 12:10:51 INFO  SparkEnv:54 - Registering MapOutputTracker
2019-01-05 12:10:51 INFO  SparkEnv:54 - Registering BlockManagerMaster
2019-01-05 12:10:51 INFO  BlockManagerMasterEndpoint:54 - Using
org.apache.spark.storage.DefaultTopologyMapper for getting topology
information
2019-01-05 12:10:51 INFO  BlockManagerMasterEndpoint:54 -
BlockManagerMasterEndpoint up
2019-01-05 12:10:51 INFO  DiskBlockManager:54 - Created local directory at
/tmp/blockmgr-97d7b468-57a8-4fb3-a951-25a6a1312922
2019-01-05 12:10:51 INFO  MemoryStore:54 - MemoryStore started with capacity
366.3 MB
2019-01-05 12:10:51 INFO  SparkEnv:54 - Registering OutputCommitCoordinator
2019-01-05 12:10:52 INFO  log:192 - Logging initialized @9014ms
2019-01-05 12:10:52 INFO  Server:351 - jetty-9.3.z-SNAPSHOT, build
timestamp: unknown, git hash: unknown
2019-01-05 12:10:52 INFO  Server:419 - Started @9118ms
2019-01-05 12:10:52 INFO  AbstractConnector:278 - Started
ServerConnector@456abb66{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-01-05 12:10:52 INFO  Utils:54 - Successfully started service 'SparkUI'
on port 4040.
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@77e80a5e{/jobs,null,AVAILABLE,@Spark}
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@1654a892{/jobs/json,null,AVAILABLE,@Spark}
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@2577d6c8{/jobs/job,null,AVAILABLE,@Spark}
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@6c000e0c{/jobs/job/json,null,AVAILABLE,@Spark}
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@5f233b26{/stages,null,AVAILABLE,@Spark}
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@44f9779c{/stages/json,null,AVAILABLE,@Spark}
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started
o.s.j.s.ServletContextHandler@6974a715{/stages/stage,null,AVAILABLE,@Spark}
2019-01-05 12:10:52 INFO  ContextHandler:781 - Started

Ignite and spark for deep learning

2019-01-02 Thread mehdi sey
Hi. both platforms (spark and ignite) compute in memory. instead of loading
data into the ignite cache, we can also load data into spark memory and
cache it on the spark nodes. if we can do this (caching on the spark nodes),
why load data into the ignite cache at all? is the benefit of the ignite
cache only sharing RDDs between spark jobs and indexed queries? i want to
integrate spark and ignite as a deep learning platform, using DL4J (deep
learning 4 java) as the deep learning framework: dl4j runs on the spark
nodes and the spark nodes are integrated with ignite. Is there any speed-up
in this idea? if i use ignite, can i only use it to cache data for spark,
or can i use spark and ignite as processing engines simultaneously?





error in running shared rdd example with intellij

2018-12-28 Thread mehdi sey
hi, i want to run SharedRddExample in the intellij IDE, but i encounter the
error below. why?
/usr/lib/jvm/java-8-oracle/bin/java
-DIGNITE_HOME=/usr/local/apache-ignite-fabric-2.6.0-bin/
-javaagent:/snap/intellij-idea-community/109/lib/idea_rt.jar=35933:/snap/intellij-idea-community/109/bin
-Dfile.encoding=UTF-8 -classpath

error in ignite-spark

2018-12-26 Thread mehdi sey
hi. i want to execute a RDD example in spark from example folder of ignite
2.7, but i have encounter and error just like an attached picture. in import
section you see an underlined line. i have added dependency but still is
remained why?





error in importing ignite 2.7 to netbeans

2018-12-25 Thread mehdi sey
hi, i use netbeans 8.2 and i have imported apache ignite 2.7 into it. when
i try to run the example at
/examples/src/main/spark/org/apache/ignite/examples/spark/SharedRDDExamples.java
i get errors in the import section for the two imports below:
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;

both imports are underlined in red at the top of the code. i added the
package from the dependencies, found org.apache.ignite:ignite-spark and
selected the 2.7 jar, but after importing, the same errors remain. why?





differences between IgniteRDD and SparkRDD

2018-12-25 Thread mehdi sey
hi.
first question:
can we create an rdd with spark and store it in an ignite rdd, or can we
only create the rdd with ignite and share it with spark jobs?

second question:
what exactly does the piece of code below do?

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.{IgniteContext, IgniteRDD}
import org.apache.spark.{SparkConf, SparkContext}

object RDDProducer extends App {
  val conf = new SparkConf().setAppName("SparkIgnite")
  val sc = new SparkContext(conf)
  val ic = new IgniteContext[Int, Int](sc, () => new IgniteConfiguration())
  val sharedRDD: IgniteRDD[Int, Int] = ic.fromCache("partitioned")
  sharedRDD.savePairs(sc.parallelize(1 to 10, 10).map(i => (i, i)))
}
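As a reading of the snippet: it starts an embedded Ignite node inside each Spark process, binds an IgniteRDD to a cache named "partitioned", and writes the pairs (1,1) through (10,10) into it from 10 Spark partitions. A companion consumer, sketched here with assumed names and mirroring the same generic-IgniteContext style used in the snippet, would read the same pairs back from another job:

```scala
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.{IgniteContext, IgniteRDD}
import org.apache.spark.{SparkConf, SparkContext}

object RDDConsumer extends App {
  val conf = new SparkConf().setAppName("SparkIgniteConsumer")
  val sc = new SparkContext(conf)
  // Same embedded-Ignite context style as the producer above.
  val ic = new IgniteContext[Int, Int](sc, () => new IgniteConfiguration())
  // Attach to the same "partitioned" cache: the pairs survive between Spark jobs.
  val sharedRDD: IgniteRDD[Int, Int] = ic.fromCache("partitioned")
  println("pairs in cache: " + sharedRDD.count())
  sc.stop()
}
```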



