Hi,
I am able to fetch data, create tables, and put data into Hive from the
spark-shell (Scala command line), but when I write Java code to do the
same and submit it through spark-submit, I get *"Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources"*.
I am using Spark 1.4.0 and have tried Hive 0.13.1 and 0.14.0 with Hadoop 2.4.
I can see the job is able to connect to the metastore: it gives an error
if I put a wrong table name in the SELECT statement, so it is parsing the
query as well. But for a valid query it loops on the above message,
*"Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient resources"*.
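For context, the Java code in question is roughly of the following shape (a minimal sketch reconstructed from the log below, which mentions SparkHiveInsertor.java:22 and the table createdfromhive; my actual class is longer):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class SparkHiveInsertor {
    public static void main(String[] args) {
        // Master URL comes from spark-submit; the Hive metastore URI
        // comes from hive-site.xml, so neither is hard-coded here
        SparkConf conf = new SparkConf().setAppName("SparkHiveInsertor");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        HiveContext hive = new HiveContext(jsc.sc());

        // The query parses fine and the metastore connection succeeds;
        // it is the count() action that never gets any executors
        DataFrame result = hive.sql("FROM createdfromhive SELECT key, value");
        System.out.println("row count = " + result.count());
    }
}
```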

I am also unsure about the first warning I am getting: *"WARN
util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable"*.
Kindly suggest whether I am missing any jars on the classpath, have
conflicting jars, or need to build Spark against the appropriate version of Hive.
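From what I have read, this warning usually appears when no worker can satisfy the application's resource request, or when the workers cannot connect back to the driver. One thing I plan to try is capping the request explicitly; a sketch, with placeholder values to be adjusted to what the Master UI shows per worker (spark.executor.memory and spark.cores.max are standard Spark standalone-mode properties):

```java
import org.apache.spark.SparkConf;

public class ResourceConfSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("hivespark")
                // Ask for no more than a single worker advertises
                // in the standalone Master UI
                .set("spark.executor.memory", "512m")  // placeholder value
                .set("spark.cores.max", "2");          // placeholder value
        System.out.println(conf.get("spark.executor.memory"));
    }
}
```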
Below is the console output I am getting while submitting the job, FYI:

*Thanks in advance*


15/10/09 12:54:35 INFO spark.SparkContext: Running Spark version 1.4.0
15/10/09 12:54:35 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
15/10/09 12:54:35 INFO spark.SecurityManager: Changing view acls to:
someuser
15/10/09 12:54:35 INFO spark.SecurityManager: Changing modify acls to:
someuser
15/10/09 12:54:35 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(someuser); users with modify permissions: Set(someuser)
15/10/09 12:54:36 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/09 12:54:36 INFO Remoting: Starting remoting
15/10/09 12:54:36 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriver@masterip:39712]
15/10/09 12:54:36 INFO util.Utils: Successfully started service
'sparkDriver' on port 39712.
15/10/09 12:54:36 INFO spark.SparkEnv: Registering MapOutputTracker
15/10/09 12:54:36 INFO spark.SparkEnv: Registering BlockManagerMaster
15/10/09 12:54:36 INFO storage.DiskBlockManager: Created local directory at
/tmp/spark-b8531c8e-1005-46ab-bfc6-293acc9f9677/blockmgr-2ce2e308-db03-4d7f-8361-274c6ee2551f
15/10/09 12:54:36 INFO storage.MemoryStore: MemoryStore started with
capacity 265.4 MB
15/10/09 12:54:36 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-b8531c8e-1005-46ab-bfc6-293acc9f9677/httpd-2efe7a54-3b3b-4d68-b11b-38a374eedd67
15/10/09 12:54:36 INFO spark.HttpServer: Starting HTTP Server
15/10/09 12:54:36 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/10/09 12:54:36 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:57194
15/10/09 12:54:36 INFO util.Utils: Successfully started service 'HTTP file
server' on port 57194.
15/10/09 12:54:36 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/10/09 12:54:36 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/10/09 12:54:36 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
15/10/09 12:54:36 INFO util.Utils: Successfully started service 'SparkUI' on
port 4040.
15/10/09 12:54:36 INFO ui.SparkUI: Started SparkUI at http://masterip:4040
15/10/09 12:54:37 INFO spark.SparkContext: Added JAR
file:/home/someuser/eoc/spark/sparkhive/hivespark.jar at
http://masterip:57194/jars/hivespark.jar with timestamp 1444375477084
15/10/09 12:54:37 INFO client.AppClient$ClientActor: Connecting to master
akka.tcp://masterip:7077/user/Master...
15/10/09 12:54:38 INFO cluster.SparkDeploySchedulerBackend: Connected to
Spark cluster with app ID app-20151009125437-0008
15/10/09 12:54:38 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 56979.
15/10/09 12:54:38 INFO netty.NettyBlockTransferService: Server created on
56979
15/10/09 12:54:38 INFO storage.BlockManagerMaster: Trying to register
BlockManager
15/10/09 12:54:38 INFO storage.BlockManagerMasterEndpoint: Registering block
manager masterip:56979 with 265.4 MB RAM, BlockManagerId(driver, masterip,
56979)
15/10/09 12:54:38 INFO storage.BlockManagerMaster: Registered BlockManager
15/10/09 12:54:38 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend
is ready for scheduling beginning after reached minRegisteredResourcesRatio:
0.0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> starting   false
15/10/09 12:54:40 INFO hive.HiveContext: Initializing execution hive,
version 0.13.1
15/10/09 12:54:41 INFO hive.metastore: Trying to connect to metastore with
URI thrift://masterip:9083
15/10/09 12:54:41 INFO hive.metastore: Connected to metastore.
15/10/09 12:54:42 INFO session.SessionState: No Tez session required at this
point. hive.execution.engine=mr.
15/10/09 12:54:43 INFO parse.ParseDriver: Parsing command: FROM
createdfromhive SELECT key, value
15/10/09 12:54:43 INFO parse.ParseDriver: Parse Completed
15/10/09 12:54:43 INFO hive.HiveContext: Initializing
HiveMetastoreConnection version 0.13.1 using Spark classes.
15/10/09 12:54:44 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
15/10/09 12:54:44 INFO hive.metastore: Trying to connect to metastore with
URI thrift://masterip:9083
15/10/09 12:54:44 INFO hive.metastore: Connected to metastore.
15/10/09 12:54:44 INFO session.SessionState: No Tez session required at this
point. hive.execution.engine=mr.
15/10/09 12:54:45 INFO Configuration.deprecation: mapred.map.tasks is
deprecated. Instead, use mapreduce.job.maps
15/10/09 12:54:46 INFO storage.MemoryStore: ensureFreeSpace(392664) called
with curMem=0, maxMem=278302556
15/10/09 12:54:46 INFO storage.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 383.5 KB, free 265.0 MB)
15/10/09 12:54:46 INFO storage.MemoryStore: ensureFreeSpace(33948) called
with curMem=392664, maxMem=278302556
15/10/09 12:54:46 INFO storage.MemoryStore: Block broadcast_0_piece0 stored
as bytes in memory (estimated size 33.2 KB, free 265.0 MB)
15/10/09 12:54:46 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on masterip:56979 (size: 33.2 KB, free: 265.4 MB)
15/10/09 12:54:46 INFO spark.SparkContext: Created broadcast 0 from count at
SparkHiveInsertor.java:22
15/10/09 12:54:46 INFO execution.Exchange: Using SparkSqlSerializer2.
15/10/09 12:54:47 INFO spark.SparkContext: Starting job: count at
SparkHiveInsertor.java:22
15/10/09 12:54:47 INFO mapred.FileInputFormat: Total input paths to process
: 1
15/10/09 12:54:48 INFO scheduler.DAGScheduler: Registering RDD 4 (count at
SparkHiveInsertor.java:22)
15/10/09 12:54:48 INFO scheduler.DAGScheduler: Got job 0 (count at
SparkHiveInsertor.java:22) with 1 output partitions (allowLocal=false)
15/10/09 12:54:48 INFO scheduler.DAGScheduler: Final stage: ResultStage
1(count at SparkHiveInsertor.java:22)
15/10/09 12:54:48 INFO scheduler.DAGScheduler: Parents of final stage:
List(ShuffleMapStage 0)
15/10/09 12:54:48 INFO scheduler.DAGScheduler: Missing parents:
List(ShuffleMapStage 0)
15/10/09 12:54:48 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0
(MapPartitionsRDD[4] at count at SparkHiveInsertor.java:22), which has no
missing parents
15/10/09 12:54:48 INFO storage.MemoryStore: ensureFreeSpace(10912) called
with curMem=426612, maxMem=278302556
15/10/09 12:54:48 INFO storage.MemoryStore: Block broadcast_1 stored as
values in memory (estimated size 10.7 KB, free 265.0 MB)
15/10/09 12:54:48 INFO storage.MemoryStore: ensureFreeSpace(5565) called
with curMem=437524, maxMem=278302556
15/10/09 12:54:48 INFO storage.MemoryStore: Block broadcast_1_piece0 stored
as bytes in memory (estimated size 5.4 KB, free 265.0 MB)
15/10/09 12:54:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on masterip:56979 (size: 5.4 KB, free: 265.4 MB)
15/10/09 12:54:48 INFO spark.SparkContext: Created broadcast 1 from
broadcast at DAGScheduler.scala:874
15/10/09 12:54:48 INFO scheduler.DAGScheduler: Submitting 2 missing tasks
from ShuffleMapStage 0 (MapPartitionsRDD[4] at count at
SparkHiveInsertor.java:22)
15/10/09 12:54:48 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with
2 tasks
15/10/09 12:55:03 WARN scheduler.TaskSchedulerImpl: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources
15/10/09 12:55:18 WARN scheduler.TaskSchedulerImpl: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources
[... the same WARN message repeats every 15 seconds ...]
15/10/09 12:57:18 WARN scheduler.TaskSchedulerImpl: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark-submit-hive-connection-through-spark-Initial-job-has-not-accepted-any-resources-tp24993.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
