Thanks, Alexey. I believe I have tried all four methods you suggested below,
without any success:

1. Specified the absolute path to default-config.xml, using both the file:///
and hdfs:/// schemes.

2. Passed the config file to spark-submit via
--files=file:///.../..../apache-ignite-fabric-2.1.0-bin/config/default-config.xml

3. Exported IGNITE_HOME=/..../..../apache-ignite-fabric-2.1.0-bin/ on all
the Spark nodes and passed "config/default-config.xml" as a relative path.

4. Created the config file under "src/resources/", but no luck.

Please see the complete logs below.

Also, the default-config.xml file is present at its default location on all
Spark nodes. Do I still need to pass it in the code then? If I do not pass it,
startup fails with "it cannot find any IPs from multicast or IPFinder."
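For reference, one workaround I am considering is to skip the XML path entirely and build the IgniteConfiguration programmatically in the closure that IgniteContext serializes to the executors, with a static IP finder instead of multicast. This is only a sketch: the server address and the fact that our context is created this way are placeholders, not taken from the actual application code.

```scala
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder
import org.apache.spark.SparkContext

import scala.collection.JavaConverters._

object IgniteContextSketch {

  // Build the configuration in code instead of resolving an XML path on
  // every executor. The () => IgniteConfiguration closure is serialized
  // to the executors, so no config file needs to exist on the workers.
  def igniteCfg(): IgniteConfiguration = {
    val ipFinder = new TcpDiscoveryVmIpFinder()
    // Placeholder address: replace with the real Ignite server nodes.
    ipFinder.setAddresses(Seq("10.20.19.61:47500..47509").asJava)

    val discovery = new TcpDiscoverySpi().setIpFinder(ipFinder)

    new IgniteConfiguration()
      .setDiscoverySpi(discovery)
      .setClientMode(true) // the Spark side joins as a client node
  }

  def buildContext(sc: SparkContext): IgniteContext =
    new IgniteContext(sc, () => igniteCfg())
}
```

Since every executor evaluates the same closure, this also sidesteps the "Spring XML configuration path is invalid" errors seen per-task in the logs below.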


Thank you.



On Fri, Nov 3, 2017 at 6:52 AM, Alexey Kukushkin <kukushkinale...@gmail.com>
wrote:

> Hi,
>
> The problem is Ignite cannot find your configuration file.
>
> Ignite tries these 3 steps to find the configuration file:
>
>    1. Try to resolve it as a URI
>    2. If previous step fails, try to resolve it as $IGNITE_HOME + <the
>    path you specified>
>    3. If previous step fails, try to resolve it in CLASSPATH.
>
> Thus, you have multiple options to specify the config path:
>
>    1. Specify an absolute path
>    2. Pass the config path as a command line parameter
>    3. Create your path relative to $IGNITE_HOME
>    4. If you use the Maven or Gradle Java plugin, you can create the file
>    under "resources/path-to-ignite-config.xml". During the
>    "process-resources" phase Maven will copy it to
>    "target/classes/path-to-ignite-config.xml", and from there it will be
>    packaged into the JAR. Your app can then find it as
>    "path-to-ignite-config.xml", since it will be on the CLASSPATH.
>
>
>
17/11/03 23:48:24 main INFO Utils: Successfully started service 'SparkUI' on 
port 4043.
17/11/03 23:48:24 main INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at 
http://10.20.19.61:4043
17/11/03 23:48:24 main INFO SparkContext: Added JAR 
file:/home/ec2-user/rchalil/omega-turbine-2.1.0-jar-with-dependencies.jar at 
spark://10.20.19.61:44155/jars/omega-turbine-2.1.0-jar-with-dependencies.jar 
with timestamp 1509752904136
17/11/03 23:48:24 main INFO deprecation: fs.default.name is deprecated. 
Instead, use fs.defaultFS
17/11/03 23:48:24 main INFO TimelineClientImpl: Timeline service address: 
http://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:8188/ws/v1/timeline/
17/11/03 23:48:25 main INFO RMProxy: Connecting to ResourceManager at 
ec2-54-70-208-24.us-west-2.compute.amazonaws.com/10.20.19.61:8032
17/11/03 23:48:25 main INFO Client: Requesting a new application from cluster 
with 4 NodeManagers
17/11/03 23:48:25 main INFO Client: Verifying our application has not requested 
more than the maximum memory capability of the cluster (56448 MB per container)
17/11/03 23:48:25 main INFO Client: Will allocate AM container, with 896 MB 
memory including 384 MB overhead
17/11/03 23:48:25 main INFO Client: Setting up container launch context for our 
AM
17/11/03 23:48:25 main INFO Client: Setting up the launch environment for our 
AM container
17/11/03 23:48:25 main INFO Client: Preparing resources for our AM container
17/11/03 23:48:26 main WARN Client: Neither spark.yarn.jars nor 
spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/11/03 23:48:27 main INFO deprecation: fs.default.name is deprecated. 
Instead, use fs.defaultFS
17/11/03 23:48:27 main INFO Client: Uploading resource 
file:/tmp/spark-77fc135a-efec-49eb-97a5-5bf752b8aca6/__spark_libs__6294623288396555268.zip
 -> 
hdfs://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:9000/user/ec2-user/.sparkStaging/application_1504040437393_0459/__spark_libs__6294623288396555268.zip
17/11/03 23:48:28 main INFO Client: Uploading resource 
file:/usr/lib/spark/python/lib/pyspark.zip -> 
hdfs://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:9000/user/ec2-user/.sparkStaging/application_1504040437393_0459/pyspark.zip
17/11/03 23:48:28 main INFO Client: Uploading resource 
file:/usr/lib/spark/python/lib/py4j-0.10.4-src.zip -> 
hdfs://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:9000/user/ec2-user/.sparkStaging/application_1504040437393_0459/py4j-0.10.4-src.zip
17/11/03 23:48:28 main INFO Client: Uploading resource 
file:/tmp/spark-77fc135a-efec-49eb-97a5-5bf752b8aca6/__spark_conf__8122462146742146551.zip
 -> 
hdfs://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:9000/user/ec2-user/.sparkStaging/application_1504040437393_0459/__spark_conf__.zip
17/11/03 23:48:28 main INFO SecurityManager: Changing view acls to: ec2-user
17/11/03 23:48:28 main INFO SecurityManager: Changing modify acls to: ec2-user
17/11/03 23:48:28 main INFO SecurityManager: Changing view acls groups to: 
17/11/03 23:48:28 main INFO SecurityManager: Changing modify acls groups to: 
17/11/03 23:48:28 main INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(ec2-user); groups 
with view permissions: Set(); users  with modify permissions: Set(ec2-user); 
groups with modify permissions: Set()
17/11/03 23:48:28 main INFO Client: Submitting application 
application_1504040437393_0459 to ResourceManager
17/11/03 23:48:28 main INFO YarnClientImpl: Submitted application 
application_1504040437393_0459
17/11/03 23:48:28 main INFO SchedulerExtensionServices: Starting Yarn extension 
services with app application_1504040437393_0459 and attemptId None
17/11/03 23:48:29 main INFO Client: Application report for 
application_1504040437393_0459 (state: ACCEPTED)
17/11/03 23:48:29 main INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.ec2-user
         start time: 1509752908500
         final status: UNDEFINED
         tracking URL: 
http://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:8088/proxy/application_1504040437393_0459/?spark=true
         user: ec2-user
17/11/03 23:48:30 main INFO Client: Application report for 
application_1504040437393_0459 (state: ACCEPTED)
17/11/03 23:48:31 main INFO Client: Application report for 
application_1504040437393_0459 (state: ACCEPTED)
17/11/03 23:48:32 dispatcher-event-loop-2 INFO 
YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as 
NettyRpcEndpointRef(null)
17/11/03 23:48:32 dispatcher-event-loop-0 INFO 
YarnSchedulerBackend$YarnSchedulerEndpoint: Received 
AMStart(container_1504040437393_0459_01_000001, 
ip-10-20-19-182.us-west-2.compute.internal)
17/11/03 23:48:32 dispatcher-event-loop-7 INFO YarnClientSchedulerBackend: 
addWebUIFilter: Setting spark.ui.proxyBase to 
/proxy/application_1504040437393_0459
17/11/03 23:48:32 dispatcher-event-loop-7 INFO YarnClientSchedulerBackend: Add 
WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, 
Map(PROXY_HOSTS -> ec2-54-70-208-24.us-west-2.compute.amazonaws.com, 
PROXY_URI_BASES -> 
http://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:8088/proxy/application_1504040437393_0459),
 /proxy/application_1504040437393_0459
17/11/03 23:48:32 dispatcher-event-loop-7 INFO JettyUtils: Adding filter: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/11/03 23:48:32 main INFO Client: Application report for 
application_1504040437393_0459 (state: ACCEPTED)
17/11/03 23:48:33 main INFO Client: Application report for 
application_1504040437393_0459 (state: RUNNING)
17/11/03 23:48:33 main INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: 10.20.19.182
         ApplicationMaster RPC port: 0
         queue: root.ec2-user
         start time: 1509752908500
         final status: UNDEFINED
         tracking URL: 
http://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:8088/proxy/application_1504040437393_0459/?spark=true
         user: ec2-user
17/11/03 23:48:33 main INFO YarnClientSchedulerBackend: Application 
application_1504040437393_0459 has started running.
17/11/03 23:48:33 main INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 45115.
17/11/03 23:48:33 main INFO NettyBlockTransferService: Server created on 
10.20.19.61:45115
17/11/03 23:48:33 main INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
17/11/03 23:48:33 main INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, 10.20.19.61, 45115, None)
17/11/03 23:48:33 dispatcher-event-loop-3 INFO BlockManagerMasterEndpoint: 
Registering block manager 10.20.19.61:45115 with 2.5 GB RAM, 
BlockManagerId(driver, 10.20.19.61, 45115, None)
17/11/03 23:48:33 main INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, 10.20.19.61, 45115, None)
17/11/03 23:48:33 main INFO BlockManager: external shuffle service port = 7337
17/11/03 23:48:33 main INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, 10.20.19.61, 45115, None)
17/11/03 23:48:33 main INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6150afc6{/metrics/json,null,AVAILABLE,@Spark}
17/11/03 23:48:33 main INFO deprecation: fs.default.name is deprecated. 
Instead, use fs.defaultFS
17/11/03 23:48:33 main INFO EventLoggingListener: Logging events to 
hdfs://ec2-54-70-208-24.us-west-2.compute.amazonaws.com:9000/spark-history/application_1504040437393_0459.lz4
17/11/03 23:48:33 main INFO QuboleAllocationStrategy: Starting Qubole dynamic 
allocation with min executors : 1 and max executors : 2
17/11/03 23:48:33 main INFO ExecutorAllocationManager: QuboleAllocationStrategy 
is used for Executor dynamic allocation
17/11/03 23:48:33 main INFO ExecutorAllocationManager: Request for initial 
executors(1) acknowledged - true
17/11/03 23:48:33 main INFO YarnScheduler$$anon$1: Adding shutdown hook for 
context org.apache.spark.SparkContext@6cb662e8.
17/11/03 23:48:33 SparkListenerBus INFO ExecutorsListener: 
onAMStart(Some(container_1504040437393_0459_01_000001), 
Some(ip-10-20-19-182.us-west-2.compute.internal))
17/11/03 23:48:36 dispatcher-event-loop-3 INFO 
YarnSchedulerBackend$YarnDriverEndpoint: Registered executor 
NettyRpcEndpointRef(null) (10.20.19.162:51520) with ID 1 and 
container_1504040437393_0459_01_000002 with size 14495514624
17/11/03 23:48:36 SparkListenerBus INFO ExecutorAllocationManager: New executor 
1 has registered (new total is 1)
17/11/03 23:48:36 dispatcher-event-loop-4 INFO BlockManagerMasterEndpoint: 
Registering block manager ip-10-20-19-162.us-west-2.compute.internal:41287 with 
5.7 GB RAM, BlockManagerId(1, ip-10-20-19-162.us-west-2.compute.internal, 
41287, None)
17/11/03 23:48:36 main INFO YarnClientSchedulerBackend: SchedulerBackend is 
ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/11/03 23:48:36 main INFO YarnScheduler: 
YarnClientClusterScheduler.postStartHook done.
17/11/03 23:48:37 main INFO XmlBeanDefinitionReader: Loading XML bean 
definitions from URL 
[file:/home/ec2-user/rchalil/apache-ignite-fabric-2.1.0-bin/config/default-config.xml]
17/11/03 23:48:37 main INFO GenericApplicationContext: Refreshing 
org.springframework.context.support.GenericApplicationContext@6a07e6ca: startup 
date [Fri Nov 03 23:48:37 UTC 2017]; root of context hierarchy
17/11/03 23:48:37 main INFO IgniteKernal: 

>>>    __________  ________________  
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7    // /  / / / _/    
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.1.0#20170720-sha1:a6ca5c8a
>>> 2017 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

17/11/03 23:48:37 main INFO IgniteKernal: Config URL: n/a
17/11/03 23:48:37 main INFO IgniteKernal: Daemon mode: off
17/11/03 23:48:37 main INFO IgniteKernal: OS: Linux 4.9.43-17.38.amzn1.x86_64 
amd64
17/11/03 23:48:37 main INFO IgniteKernal: OS user: ec2-user
17/11/03 23:48:37 main INFO IgniteKernal: PID: 5300
17/11/03 23:48:37 main INFO IgniteKernal: Language runtime: Java Platform API 
Specification ver. 1.8
17/11/03 23:48:37 main INFO IgniteKernal: VM information: Java(TM) SE Runtime 
Environment 1.8.0_60-b27 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 
25.60-b23
17/11/03 23:48:37 main INFO IgniteKernal: VM total memory: 4.4GB
17/11/03 23:48:37 main INFO IgniteKernal: Remote Management [restart: off, 
REST: off, JMX (remote: off)]
17/11/03 23:48:37 main INFO IgniteKernal: 
IGNITE_HOME=/home/ec2-user/rchalil/apache-ignite-fabric-2.1.0-bin/
17/11/03 23:48:37 main INFO IgniteKernal: VM arguments: [-Xmx5G, 
-Djava.net.preferIPv4Stack=true]
17/11/03 23:48:37 main INFO IgniteKernal: Configured caches [in 'sysMemPlc' 
memoryPolicy: ['ignite-sys-cache']]
17/11/03 23:48:37 main INFO IgniteKernal: 3-rd party licenses can be found at: 
/home/ec2-user/rchalil/apache-ignite-fabric-2.1.0-bin//libs/licenses
17/11/03 23:48:38 main INFO IgnitePluginProcessor: Configured plugins:
17/11/03 23:48:38 main INFO IgnitePluginProcessor:   ^-- None
17/11/03 23:48:38 main INFO IgnitePluginProcessor: 
17/11/03 23:48:38 main INFO TcpCommunicationSpi: Successfully bound 
communication NIO server to TCP port [port=47113, locHost=0.0.0.0/0.0.0.0, 
selectorsCnt=4, selectorSpins=0, pairedConn=false]
17/11/03 23:48:38 main WARN TcpCommunicationSpi: Message queue limit is set to 
0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC 
or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
17/11/03 23:48:38 main WARN NoopCheckpointSpi: Checkpoints are disabled (to 
enable configure any GridCheckpointSpi implementation)
17/11/03 23:48:38 main WARN GridCollisionManager: Collision resolution is 
disabled (all jobs will be activated upon arrival).
17/11/03 23:48:38 main INFO IgniteKernal: Security status [authentication=off, 
tls/ssl=off]
Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
java.lang.ClassNotFoundException: 
org.apache.ignite.logger.java.JavaLoggerFileHandler
java.lang.ClassNotFoundException: 
org.apache.ignite.logger.java.JavaLoggerFileHandler
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.util.logging.LogManager$5.run(LogManager.java:965)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.util.logging.LogManager.loadLoggerHandlers(LogManager.java:958)
        at 
java.util.logging.LogManager.initializeGlobalHandlers(LogManager.java:1578)
        at java.util.logging.LogManager.access$1500(LogManager.java:145)
        at 
java.util.logging.LogManager$RootLogger.accessCheckedHandlers(LogManager.java:1667)
        at java.util.logging.Logger.getHandlers(Logger.java:1776)
        at 
org.apache.ignite.logger.java.JavaLogger.findHandler(JavaLogger.java:399)
        at 
org.apache.ignite.logger.java.JavaLogger.configure(JavaLogger.java:229)
        at org.apache.ignite.logger.java.JavaLogger.<init>(JavaLogger.java:170)
        at org.apache.ignite.logger.java.JavaLogger.<init>(JavaLogger.java:126)
        at 
org.apache.ignite.IgniteJdbcDriver.<clinit>(IgniteJdbcDriver.java:369)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at java.lang.Class.newInstance(Class.java:442)
        at 
java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
        at 
java.util.ServiceLoader$LazyIterator.access$700(ServiceLoader.java:323)
        at java.util.ServiceLoader$LazyIterator$2.run(ServiceLoader.java:407)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:409)
        at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
        at java.sql.DriverManager$2.run(DriverManager.java:603)
        at java.sql.DriverManager$2.run(DriverManager.java:583)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.sql.DriverManager.loadInitialDrivers(DriverManager.java:583)
        at java.sql.DriverManager.<clinit>(DriverManager.java:101)
        at org.h2.Driver.load(Driver.java:155)
        at org.h2.Driver.<clinit>(Driver.java:41)
        at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:1837)
        at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:233)
        at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1788)
        at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:930)
        at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1896)
        at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1648)
        at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1076)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:596)
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:536)
        at org.apache.ignite.Ignition.getOrStart(Ignition.java:414)
        at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:143)
        at org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:58)
        at org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:71)
        at 
com.expedia.ecpr.risk.data.spark.util.SparkStreamingContextController.<init>(SparkStreamingContextController.scala:53)
        at 
com.expedia.ecpr.risk.data.spark.streaming.IncrementalStreamingProcesser.process(IncrementalStreamingProcesser.scala:48)
        at 
com.expedia.ecpr.risk.data.spark.streaming.omegaturbine$.main(omegaturbine.scala:46)
        at 
com.expedia.ecpr.risk.data.spark.streaming.omegaturbine.main(omegaturbine.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:884)
        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:230)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:133)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/11/03 23:48:38 main INFO SqlListenerProcessor: SQL connector processor has 
started on TCP port 10812
17/11/03 23:48:38 main INFO GridRestProcessor: REST protocols do not start on 
client node. To start the protocols on client node set 
'-DIGNITE_REST_START_ON_CLIENT=true' system property.
17/11/03 23:48:38 main INFO IgniteKernal: Non-loopback local IPs: 10.20.19.61
17/11/03 23:48:38 main INFO IgniteKernal: Enabled local MACs: 06FF3B84A312
17/11/03 23:48:39 main WARN GridDiscoveryManager: Local node's value of 
'java.net.preferIPv4Stack' system property differs from remote node's (all 
nodes in topology should have identical value) [locPreferIpV4=true, 
rmtPreferIpV4=null, locId8=e9b8f75b, rmtId8=eabde2e4, 
rmtAddrs=[ip-10-20-19-61.us-west-2.compute.internal/0:0:0:0:0:0:0:1%lo, 
/10.20.19.61, /127.0.0.1]]
17/11/03 23:48:39 exchange-worker-#29%null% INFO time: Started exchange init 
[topVer=AffinityTopologyVersion [topVer=83, minorTopVer=0], crd=false, evt=10, 
node=TcpDiscoveryNode [id=e9b8f75b-4088-43c7-b899-d96f5383383b, 
addrs=[10.20.19.61, 127.0.0.1], 
sockAddrs=[ip-10-20-19-61.us-west-2.compute.internal/10.20.19.61:0, 
/127.0.0.1:0], discPort=0, order=83, intOrder=0, 
lastExchangeTime=1509752918820, loc=true, ver=2.1.0#20170720-sha1:a6ca5c8a, 
isClient=true], evtNode=TcpDiscoveryNode 
[id=e9b8f75b-4088-43c7-b899-d96f5383383b, addrs=[10.20.19.61, 127.0.0.1], 
sockAddrs=[ip-10-20-19-61.us-west-2.compute.internal/10.20.19.61:0, 
/127.0.0.1:0], discPort=0, order=83, intOrder=0, 
lastExchangeTime=1509752918820, loc=true, ver=2.1.0#20170720-sha1:a6ca5c8a, 
isClient=true], customEvt=null]
17/11/03 23:48:39 exchange-worker-#29%null% INFO GridCacheProcessor: Started 
cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc, mode=REPLICATED, 
atomicity=TRANSACTIONAL]
17/11/03 23:48:39 grid-nio-worker-tcp-comm-0-#17%null% INFO 
TcpCommunicationSpi: Established outgoing communication connection 
[locAddr=/10.20.19.61:54590, 
rmtAddr=ip-10-20-19-182.us-west-2.compute.internal/10.20.19.182:47100]
17/11/03 23:48:39 exchange-worker-#29%null% INFO 
GridDhtPartitionsExchangeFuture: Snapshot initialization completed 
[topVer=AffinityTopologyVersion [topVer=83, minorTopVer=0], time=0ms]
17/11/03 23:48:39 exchange-worker-#29%null% INFO time: Finished exchange init 
[topVer=AffinityTopologyVersion [topVer=83, minorTopVer=0], crd=false]
17/11/03 23:48:39 sys-#31%null% INFO GridDhtPartitionsExchangeFuture: Snapshot 
initialization completed [topVer=AffinityTopologyVersion [topVer=83, 
minorTopVer=0], time=0ms]
17/11/03 23:48:39 grid-nio-worker-tcp-comm-1-#18%null% INFO 
TcpCommunicationSpi: Established outgoing communication connection 
[locAddr=/10.20.19.61:45578, 
rmtAddr=ip-10-20-19-84.us-west-2.compute.internal/10.20.19.84:47100]
17/11/03 23:48:39 main INFO IgniteKernal: Performance suggestions for grid  
(fix if possible)
17/11/03 23:48:39 main INFO IgniteKernal: To disable, set 
-DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
17/11/03 23:48:39 main INFO IgniteKernal:   ^-- Enable G1 Garbage Collector 
(add '-XX:+UseG1GC' to JVM options)
17/11/03 23:48:39 main INFO IgniteKernal:   ^-- Set max direct memory size if 
getting 'OOME: Direct buffer memory' (add 
'-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
17/11/03 23:48:39 main INFO IgniteKernal:   ^-- Disable processing of calls to 
System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
17/11/03 23:48:39 main INFO IgniteKernal: Refer to this page for more 
performance suggestions: 
https://apacheignite.readme.io/docs/jvm-and-system-tuning
17/11/03 23:48:39 main INFO IgniteKernal: 
17/11/03 23:48:39 main INFO IgniteKernal: To start Console Management & 
Monitoring run ignitevisorcmd.{sh|bat}
17/11/03 23:48:39 main INFO IgniteKernal: 
17/11/03 23:48:39 main INFO IgniteKernal: 

>>> +----------------------------------------------------------------------+
>>> Ignite ver. 2.1.0#20170720-sha1:a6ca5c8a97e9a4c9d73d40ce76d1504c14ba1940
>>> +----------------------------------------------------------------------+
>>> OS name: Linux 4.9.43-17.38.amzn1.x86_64 amd64
>>> CPU(s): 8
>>> Heap: 4.4GB
>>> VM name: 5...@ip-10-20-19-61.us-west-2.compute.internal
>>> Local node [ID=E9B8F75B-4088-43C7-B899-D96F5383383B, order=83, 
>>> clientMode=true]
>>> Local node addresses: 
>>> [ip-10-20-19-61.us-west-2.compute.internal/10.20.19.61, /127.0.0.1]
>>> Local ports: TCP:10812 TCP:47113 

17/11/03 23:48:39 main INFO GridDiscoveryManager: Topology snapshot [ver=83, 
servers=4, clients=3, CPUs=40, heap=14.0GB]
17/11/03 23:48:39 main INFO SparkContext: Starting job: foreachPartition at 
IgniteRDD.scala:226
17/11/03 23:48:39 dag-scheduler-event-loop INFO DAGScheduler: Got job 0 
(foreachPartition at IgniteRDD.scala:226) with 10 output partitions
17/11/03 23:48:39 dag-scheduler-event-loop INFO DAGScheduler: Final stage: 
ResultStage 0 (foreachPartition at IgniteRDD.scala:226)
17/11/03 23:48:39 dag-scheduler-event-loop INFO DAGScheduler: Parents of final 
stage: List()
17/11/03 23:48:39 dag-scheduler-event-loop INFO DAGScheduler: Missing parents: 
List()
17/11/03 23:48:39 dag-scheduler-event-loop INFO DAGScheduler: Submitting 
ResultStage 0 (MapPartitionsRDD[2] at map at 
SparkStreamingContextController.scala:56), which has no missing parents
17/11/03 23:48:39 dag-scheduler-event-loop INFO MemoryStore: Block broadcast_0 
stored as values in memory (estimated size 3.0 KB, free 2.5 GB)
17/11/03 23:48:39 dag-scheduler-event-loop INFO MemoryStore: Block 
broadcast_0_piece0 stored as bytes in memory (estimated size 1914.0 B, free 2.5 
GB)
17/11/03 23:48:39 dispatcher-event-loop-6 INFO BlockManagerInfo: Added 
broadcast_0_piece0 in memory on 10.20.19.61:45115 (size: 1914.0 B, free: 2.5 GB)
17/11/03 23:48:39 dag-scheduler-event-loop INFO SparkContext: Created broadcast 
0 from broadcast at DAGScheduler.scala:1022
17/11/03 23:48:39 dag-scheduler-event-loop INFO DAGScheduler: Submitting 10 
missing tasks from ResultStage 0 (MapPartitionsRDD[2] at map at 
SparkStreamingContextController.scala:56)
17/11/03 23:48:39 dag-scheduler-event-loop INFO YarnScheduler: Adding task set 
0.0 with 10 tasks
17/11/03 23:48:39 dispatcher-event-loop-3 INFO TaskSetManager: Starting task 
0.0 in stage 0.0 (TID 0, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 0, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 dispatcher-event-loop-4 INFO BlockManagerInfo: Added 
broadcast_0_piece0 in memory on 
ip-10-20-19-162.us-west-2.compute.internal:41287 (size: 1914.0 B, free: 5.7 GB)
17/11/03 23:48:40 spark-dynamic-executor-allocation INFO 
QuboleAllocationStrategy: Jobs running : [SQL = 0 Other = 1] Executors need : 
[Job = 0 Mem = 0] Stages : [Active = 1 Pending = 0 Completed = 0] Tasks : 
[Active = 1 Pending = 9 Completed = 0] 
17/11/03 23:48:40 spark-dynamic-executor-allocation INFO 
QuboleAllocationStrategy: Requesting 1 more executor(s) to be added and desired 
total is 1
17/11/03 23:48:40 dispatcher-event-loop-2 INFO TaskSetManager: Starting task 
1.0 in stage 0.0 (TID 1, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 1, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 task-result-getter-0 WARN TaskSetManager: Lost task 0.0 in 
stage 0.0 (TID 0, ip-10-20-19-162.us-west-2.compute.internal, executor 1): 
class org.apache.ignite.IgniteCheckedException: Spring XML configuration path 
is invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.
        at 
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3734)
        at 
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:708)
        at 
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:747)
        at 
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$1.apply(IgniteContext.scala:71)
        at 
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$1.apply(IgniteContext.scala:71)
        at org.apache.ignite.spark.Once.apply(IgniteContext.scala:197)
        at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:137)
        at 
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:227)
        at 
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:226)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2131)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2131)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.MalformedURLException: no protocol: 
config/default-config.xml
        at java.net.URL.<init>(URL.java:586)
        at java.net.URL.<init>(URL.java:483)
        at java.net.URL.<init>(URL.java:432)
        at 
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3725)
        ... 18 more

17/11/03 23:48:40 dispatcher-event-loop-0 INFO TaskSetManager: Starting task 
0.1 in stage 0.0 (TID 2, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 0, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 task-result-getter-1 INFO TaskSetManager: Lost task 1.0 in 
stage 0.0 (TID 1) on ip-10-20-19-162.us-west-2.compute.internal, executor 1: 
org.apache.ignite.IgniteCheckedException (Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.) [duplicate 1]
17/11/03 23:48:40 dispatcher-event-loop-7 INFO TaskSetManager: Starting task 
1.1 in stage 0.0 (TID 3, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 1, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 task-result-getter-2 INFO TaskSetManager: Lost task 0.1 in 
stage 0.0 (TID 2) on ip-10-20-19-162.us-west-2.compute.internal, executor 1: 
org.apache.ignite.IgniteCheckedException (Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.) [duplicate 2]
17/11/03 23:48:40 dispatcher-event-loop-5 INFO TaskSetManager: Starting task 
0.2 in stage 0.0 (TID 4, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 0, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 task-result-getter-3 INFO TaskSetManager: Lost task 1.1 in 
stage 0.0 (TID 3) on ip-10-20-19-162.us-west-2.compute.internal, executor 1: 
org.apache.ignite.IgniteCheckedException (Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.) [duplicate 3]
17/11/03 23:48:40 dispatcher-event-loop-6 INFO TaskSetManager: Starting task 
1.2 in stage 0.0 (TID 5, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 1, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 task-result-getter-0 INFO TaskSetManager: Lost task 0.2 in 
stage 0.0 (TID 4) on ip-10-20-19-162.us-west-2.compute.internal, executor 1: 
org.apache.ignite.IgniteCheckedException (Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.) [duplicate 4]
17/11/03 23:48:40 dispatcher-event-loop-7 INFO TaskSetManager: Starting task 
0.3 in stage 0.0 (TID 6, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 0, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 task-result-getter-1 INFO TaskSetManager: Lost task 1.2 in 
stage 0.0 (TID 5) on ip-10-20-19-162.us-west-2.compute.internal, executor 1: 
org.apache.ignite.IgniteCheckedException (Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.) [duplicate 5]
17/11/03 23:48:40 dispatcher-event-loop-5 INFO TaskSetManager: Starting task 
1.3 in stage 0.0 (TID 7, ip-10-20-19-162.us-west-2.compute.internal, executor 
1, partition 1, PROCESS_LOCAL, 6495 bytes)
17/11/03 23:48:40 task-result-getter-2 INFO TaskSetManager: Lost task 0.3 in 
stage 0.0 (TID 6) on ip-10-20-19-162.us-west-2.compute.internal, executor 1: 
org.apache.ignite.IgniteCheckedException (Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.) [duplicate 6]
17/11/03 23:48:40 task-result-getter-2 ERROR TaskSetManager: Task 0 in stage 
0.0 failed 4 times; aborting job
17/11/03 23:48:40 dag-scheduler-event-loop INFO YarnScheduler: Cancelling stage 0
17/11/03 23:48:40 dag-scheduler-event-loop INFO YarnScheduler: Stage 0 was 
cancelled
17/11/03 23:48:40 dag-scheduler-event-loop INFO DAGScheduler: ResultStage 0 
(foreachPartition at IgniteRDD.scala:226) failed in 0.978 s due to Job aborted 
due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: 
Lost task 0.3 in stage 0.0 (TID 6, ip-10-20-19-162.us-west-2.compute.internal, 
executor 1): class org.apache.ignite.IgniteCheckedException: Spring XML 
configuration path is invalid: config/default-config.xml. Note that this path 
should be either absolute or a relative local file system path, relative to 
META-INF in classpath or valid URL to IGNITE_HOME.
        at 
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3734)
        at 
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:708)
        at 
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:747)
        at 
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$1.apply(IgniteContext.scala:71)
        at 
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$1.apply(IgniteContext.scala:71)
        at org.apache.ignite.spark.Once.apply(IgniteContext.scala:197)
        at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:137)
        at 
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:227)
        at 
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:226)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2131)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2131)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.MalformedURLException: no protocol: 
config/default-config.xml
        at java.net.URL.<init>(URL.java:586)
        at java.net.URL.<init>(URL.java:483)
        at java.net.URL.<init>(URL.java:432)
        at 
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3725)
        ... 18 more

Driver stacktrace:
17/11/03 23:48:40 task-result-getter-3 INFO TaskSetManager: Lost task 1.3 in 
stage 0.0 (TID 7) on ip-10-20-19-162.us-west-2.compute.internal, executor 1: 
org.apache.ignite.IgniteCheckedException (Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.) [duplicate 7]
17/11/03 23:48:40 task-result-getter-3 INFO YarnScheduler: Removed TaskSet 0.0, 
whose tasks have all completed, from pool 
17/11/03 23:48:40 main INFO DAGScheduler: Job 0 failed: foreachPartition at 
IgniteRDD.scala:226, took 1.108570 s
17/11/03 23:48:40 SparkListenerBus WARN ExecutorAllocationManager: No stages 
are running, but numRunningTasks != 0
17/11/03 23:48:40 main INFO DAGScheduler: Throwing exception before/after 
sc.stop
17/11/03 23:48:40 main ERROR myLogger: The following exception was thrown: 
IncrementalStreamingProcessor.streamingProcessor
    Exception   : class org.apache.spark.SparkException
    Message     : Job aborted due to stage failure: Task 0 in stage 0.0 failed 
4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 
ip-10-20-19-162.us-west-2.compute.internal, executor 1): class 
org.apache.ignite.IgniteCheckedException: Spring XML configuration path is 
invalid: config/default-config.xml. Note that this path should be either 
absolute or a relative local file system path, relative to META-INF in 
classpath or valid URL to IGNITE_HOME.
        at 
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3734)
        at 
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:708)
        at 
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:747)
        at 
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$1.apply(IgniteContext.scala:71)
        at 
org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$1.apply(IgniteContext.scala:71)
        at org.apache.ignite.spark.Once.apply(IgniteContext.scala:197)
        at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:137)
        at 
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:227)
        at 
org.apache.ignite.spark.IgniteRDD$$anonfun$savePairs$1.apply(IgniteRDD.scala:226)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2131)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2131)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.MalformedURLException: no protocol: 
config/default-config.xml
        at java.net.URL.<init>(URL.java:586)
        at java.net.URL.<init>(URL.java:483)
        at java.net.URL.<init>(URL.java:432)
        at 
org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.java:3725)
        ... 18 more

Driver stacktrace:
    Cause       : class org.apache.ignite.IgniteCheckedException: Spring XML 
configuration path is invalid: config/default-config.xml. Note that this path 
should be either absolute or a relative local file system path, relative to 
META-INF in classpath or valid URL to IGNITE_HOME.
    Stack Trace : 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1461)
        
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1449)
        
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1448)
        
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1448)
        
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:828)
        
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:828)
        scala.Option.foreach(Option.scala:257)
        
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:828)
        
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1676)
        
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1631)
        
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1620)
        org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:633)
        org.apache.spark.SparkContext.runJob(SparkContext.scala:2105)
        org.apache.spark.SparkContext.runJob(SparkContext.scala:2118)
        org.apache.spark.SparkContext.runJob(SparkContext.scala:2131)
        org.apache.spark.SparkContext.runJob(SparkContext.scala:2145)
        
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:926)
        
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:924)
        
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:924)
        org.apache.ignite.spark.IgniteRDD.savePairs(IgniteRDD.scala:226)
        
com.expedia.ecpr.risk.data.spark.util.SparkStreamingContextController.<init>(SparkStreamingContextController.scala:56)
        
com.expedia.ecpr.risk.data.spark.streaming.IncrementalStreamingProcesser.process(IncrementalStreamingProcesser.scala:48)
        
com.expedia.ecpr.risk.data.spark.streaming.omegaturbine$.main(omegaturbine.scala:46)
        
com.expedia.ecpr.risk.data.spark.streaming.omegaturbine.main(omegaturbine.scala)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        java.lang.reflect.Method.invoke(Method.java:497)
        
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:884)
        org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:205)
        org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:230)
        org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:133)
        org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

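The `Caused by: java.net.MalformedURLException: no protocol` at the bottom of both traces is the real root cause: the executor-side code path (`IgniteUtils.resolveSpringUrl`) falls back to `java.net.URL`, and a bare relative path like `config/default-config.xml` has no scheme, so URL construction fails. A minimal sketch reproducing that behavior (the `/opt/ignite/...` path below is just a hypothetical example, not a path from this cluster):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class ConfigPathCheck {
    public static void main(String[] args) {
        // A bare relative path has no scheme, so java.net.URL rejects it --
        // this is the MalformedURLException at the bottom of the trace above.
        try {
            new URL("config/default-config.xml");
            System.out.println("unexpectedly parsed");
        } catch (MalformedURLException e) {
            System.out.println("relative path: " + e.getMessage());
        }
        // An absolute file: URL parses fine, which is why an absolute path
        // (or file:/// URL) that exists on every executor node usually works.
        try {
            URL ok = new URL("file:///opt/ignite/config/default-config.xml");
            System.out.println("absolute URL ok, protocol=" + ok.getProtocol());
        } catch (MalformedURLException e) {
            System.out.println("unexpected: " + e.getMessage());
        }
    }
}
```

Note that the `resolveSpringUrl` frame sits inside the executor task (`IgniteContext.ignite` called from `IgniteRDD.savePairs`), so whatever path is passed must resolve on each worker node, not just on the driver; an absolute `file:///` path present at the same location on all executors, or a config file bundled into the application JAR and referenced by its classpath-relative name, are the usual ways to satisfy that.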