Re: IgniteSparkSession exception:Ignite instance with provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite instance? [name=null]

2018-09-20 Thread vkulichenko
Does the configuration file exist on the worker nodes? It looks like Ignite actually
fails to start there for some reason, and then you eventually get this
exception. Are there any other exceptions in the worker/executor logs?
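A quick way to verify that first question (whether the file is actually readable at the path each JVM uses) is a plain stdlib check run on each worker host. This is only a sketch; the class name is illustrative, and "example-ignite.xml" is the file name used later in this thread:

```java
import java.io.File;

public class ConfigCheck {
    // Returns true only if the config file exists and is readable
    // at this path, relative to the current JVM's working directory.
    public static boolean configPresent(String path) {
        File f = new File(path);
        return f.isFile() && f.canRead();
    }

    public static void main(String[] args) {
        // File name taken from this thread; adjust to your deployment.
        System.out.println(configPresent("example-ignite.xml"));
    }
}
```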

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteSparkSession exception:Ignite instance with provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite instance? [name=null]

2018-09-20 Thread yangjiajun
The configuration is from: https://github.com/apache/ignite/blob/master/examples/config/example-default.xml.

The code runs well with Spark local[*], but it throws the exception when I run it
with my Spark standalone cluster, which has a master node and a worker
node. The exception occurs in the SQL execution stage; the listTables method runs
well.

Here is my code:
    /**
     * Ignite config file.
     */
    private static final String CONFIG = "example-ignite.xml";

    /**
     * Test cache name.
     */
    private static final String CACHE_NAME = "testCache";

    /**
     * @throws IOException
     */
    public static void main(String args[]) throws AnalysisException, IOException {

        //setupServerAndData();
        Ignite ignite = Ignition.start(CONFIG);
        CacheConfiguration ccfg = new CacheConfiguration<>(CACHE_NAME).setSqlSchema("PUBLIC");

        IgniteCache cache = ignite.getOrCreateCache(ccfg);

        //Creating Ignite-specific implementation of Spark session.
        IgniteSparkSession igniteSession = IgniteSparkSession.builder()
            .appName("Spark Ignite catalog example")
            .master("spark://my_computer_ip:7077")
            //.master("local[*]")
            .config("spark.executor.instances", "2")
            .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .config("spark.sql.adaptive.enabled", true)
            .config("spark.sql.cbo.enabled", true)
            .igniteConfig(CONFIG)
            .getOrCreate();

        //Adjust the logger to exclude the logs of no interest.
        Logger.getRootLogger().setLevel(Level.ERROR);
        Logger.getLogger("org.apache.ignite").setLevel(Level.INFO);

        System.out.println("List of available tables:");

        //Showing existing tables.
        igniteSession.catalog().listTables().show();

        //Selecting data through Spark SQL engine.
        Dataset df = igniteSession.sql("select count(*) from my test_table");
        System.out.println(df.count());
        Ignition.stop(false);
    }
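Note that CONFIG above is a relative path. In a standalone cluster each executor JVM resolves it against its own working directory (or IGNITE_HOME), not the driver's, so the same string can point at different locations on different machines. A small stdlib sketch for printing the absolute path a given JVM would actually use (the class name is illustrative):

```java
import java.nio.file.Paths;

public class ConfigPath {
    // Resolves a relative config name against this JVM's working
    // directory, which can differ between driver and executors.
    public static String resolve(String cfg) {
        return Paths.get(cfg).toAbsolutePath().normalize().toString();
    }

    public static void main(String[] args) {
        System.out.println(resolve("example-ignite.xml"));
    }
}
```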





Re: IgniteSparkSession exception:Ignite instance with provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite instance? [name=null]

2018-09-20 Thread aealexsandrov
Hi,

Can you provide the code and configuration? The following works okay for me:

//start the server
Ignite ignite = Ignition.start("server_config.xml");

//activate the cluster
ignite.cluster().active(true);

//create the spark session with daemon client node inside
IgniteSparkSession igniteSparkSession = IgniteSparkSession.builder()
    .appName("example")
    .master("local")
    .config("spark.executor.instances", "2")
    .igniteConfig("client_config.xml")
    .getOrCreate();

System.out.println("spark start--");

igniteSparkSession.sql("SELECT * FROM DdaRecord");
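For context, the client_config.xml referenced above is assumed to be a client-mode node configuration. A minimal sketch in Ignite's Spring XML format (discovery settings elided):

```xml
<bean id="ignite.cfg"
      class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Join the cluster as a client node rather than a server node. -->
    <property name="clientMode" value="true"/>
</bean>
```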

BR,
Andrei





IgniteSparkSession exception:Ignite instance with provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite instance? [name=null]

2018-09-19 Thread yangjiajun
I use IgniteSparkSession to execute Spark SQL but get an
exception: org.apache.ignite.IgniteIllegalStateException: Ignite instance
with provided name doesn't exist. Did you call Ignition.start(..) to start
an Ignite instance? [name=null]

My test case runs well when I run Spark in local mode, but it throws the
exception when I run it with my local Spark cluster. I tried to find out why in
the mailing list but did not get a clear reason.

My test environment:
My app uses the default settings from the examples.
OS: Windows 10
JDK: 1.8.0_112
Ignite version is 2.6.0; I start a node with default settings.
Spark version is 2.3.1; I start a standalone cluster with a master and a
worker. I have copied the required jars from Ignite to Spark.

The full exception stack trace is: 

Exception in thread "main" org.apache.spark.SparkException: Exception thrown in awaitResult:
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:140)
	at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoinExec.doExecute(BroadcastNestedLoopJoinExec.scala:343)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:92)
	at org.apache.spark.sql.execution.exchange.ExchangeCoordinator.doEstimationIfNecessary(ExchangeCoordinator.scala:211)
	at org.apache.spark.sql.execution.exchange.ExchangeCoordinator.postShuffleRDD(ExchangeCoordinator.scala:259)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:124)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:371)
	at org.apache.spark.sql.execution.SortExec.inputRDDs(SortExec.scala:121)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.InputAdapter.doExecute(WholeStageCodegenExec.scala:363)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at