Nanyin K created SPARK-34964:
--------------------------------

             Summary: Initial job has not accepted any resources
                 Key: SPARK-34964
                 URL: https://issues.apache.org/jira/browse/SPARK-34964
             Project: Spark
          Issue Type: Bug
          Components: Spark Shell
    Affects Versions: 2.4.3
            Reporter: Nanyin K


When I run

val df = spark.read.parquet("file-name")

I get the following warning:

---

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your 
cluster UI to ensure that workers are registered and have sufficient resources

---
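
For reference, this is roughly the session, as a minimal sketch; the master URL, resource flags, and file path below are placeholders rather than the exact values from my environment:

---

// Hypothetical launch of the shell against the standalone master:
//   spark-shell --master spark://<master-host>:7077 --executor-memory 2g --total-executor-cores 2

// spark.read.parquet itself launches a small job to read the Parquet footers
// and infer the schema; that is the "parquet at <console>" job visible in the
// driver log below, and the WARN above is printed while this job is still
// waiting to be offered executor resources.
val df = spark.read.parquet("/path/to/file.parquet")

---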

 

But in the Spark UI I can see that the workers have been allocated resources, and the job does eventually finish, as the driver log below shows.

---

21/04/06 18:53:19 DEBUG CoarseGrainedSchedulerBackend$DriverEndpoint: Launching 
task 0 on executor id: 0 hostname: 172.19.1.10.
[Stage 0:> (0 + 1) / 1]21/04/06 18:53:19 TRACE MessageDecoder: Received message 
RpcRequest: RpcRequest{requestId=5229867462640513598, 
body=NettyManagedBuffer{buf=PooledUnsafeDirectByteBuf(ridx: 21, widx: 1266, 
cap: 4096)}}
21/04/06 18:53:19 DEBUG DefaultTopologyMapper: Got a request for 172.19.1.10
21/04/06 18:53:19 INFO BlockManagerMasterEndpoint: Registering block manager 
172.19.1.10:33604 with 6.2 GB RAM, BlockManagerId(0, 172.19.1.10, 33604, None)
21/04/06 18:53:19 TRACE TransportRequestHandler: Sent result 
RpcResponse{requestId=5229867462640513598, 
body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=84 cap=128]}} to 
client /192.168.86.166:52962
21/04/06 18:53:19 TRACE MessageDecoder: Received message RpcRequest: 
RpcRequest{requestId=6615969934743750947, 
body=NettyManagedBuffer{buf=PooledUnsafeDirectByteBuf(ridx: 21, widx: 184, cap: 
4096)}}
21/04/06 18:53:19 TRACE TransportRequestHandler: Sent result 
RpcResponse{requestId=6615969934743750947, 
body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=47 cap=64]}} to 
client /192.168.86.166:52962
21/04/06 18:53:19 TRACE MessageDecoder: Received message OneWayMessage: 
OneWayMessage{body=NettyManagedBuffer{buf=PooledUnsafeDirectByteBuf(ridx: 13, 
widx: 1580, cap: 4096)}}
21/04/06 18:53:20 TRACE MessageDecoder: Received message RpcRequest: 
RpcRequest{requestId=7930182830316713697, 
body=NettyManagedBuffer{buf=PooledUnsafeDirectByteBuf(ridx: 21, widx: 308, cap: 
4096)}}
21/04/06 18:53:20 TRACE TransportRequestHandler: Sent result 
RpcResponse{requestId=7930182830316713697, 
body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=695 cap=1024]}} to 
client /192.168.86.166:52962
21/04/06 18:53:20 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0.0, 
runningTasks: 1
21/04/06 18:53:20 TRACE MessageDecoder: Received message RpcRequest: 
RpcRequest{requestId=7986398875126667853, 
body=NettyManagedBuffer{buf=PooledUnsafeDirectByteBuf(ridx: 21, widx: 85, cap: 
1024)}}
21/04/06 18:53:20 TRACE NettyBlockRpcServer: Received request: 
OpenBlocks{appId=app-20210406185106-0013, execId=driver, 
blockIds=[broadcast_0_piece0]}
21/04/06 18:53:20 TRACE NettyBlockRpcServer: Registered streamId 181166493000 
with 1 buffers
21/04/06 18:53:20 TRACE TransportRequestHandler: Sent result 
RpcResponse{requestId=7986398875126667853, 
body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=13 cap=13]}} to 
client /192.168.86.166:41162
21/04/06 18:53:20 TRACE MessageDecoder: Received message ChunkFetchRequest: 
ChunkFetchRequest{streamChunkId=StreamChunkId{streamId=181166493000, 
chunkIndex=0}}
21/04/06 18:53:20 TRACE TransportRequestHandler: Received req from 
/192.168.86.166:41162 to fetch block StreamChunkId{streamId=181166493000, 
chunkIndex=0}
21/04/06 18:53:20 DEBUG BlockManager: Getting local block broadcast_0_piece0 as 
bytes
21/04/06 18:53:20 TRACE BlockInfoManager: Task -1024 trying to acquire read 
lock for broadcast_0_piece0
21/04/06 18:53:20 TRACE BlockInfoManager: Task -1024 acquired read lock for 
broadcast_0_piece0
21/04/06 18:53:20 DEBUG BlockManager: Level for block broadcast_0_piece0 is 
StorageLevel(disk, memory, 1 replicas)
21/04/06 18:53:20 TRACE OneForOneStreamManager: Removing stream id 181166493000
21/04/06 18:53:20 TRACE BlockInfoManager: Task -1024 releasing lock for 
broadcast_0_piece0
21/04/06 18:53:20 TRACE TransportRequestHandler: Sent result 
ChunkFetchSuccess{streamChunkId=StreamChunkId{streamId=181166493000, 
chunkIndex=0}, 
buffer=org.apache.spark.storage.BlockManagerManagedBuffer@3e40e742} to client 
/192.168.86.166:41162
21/04/06 18:53:20 TRACE MessageDecoder: Received message RpcRequest: 
RpcRequest{requestId=5705684349884845205, 
body=NettyManagedBuffer{buf=PooledUnsafeDirectByteBuf(ridx: 21, widx: 199, cap: 
2048)}}
21/04/06 18:53:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
172.19.1.10:33604 (size: 26.3 KB, free: 6.2 GB)
21/04/06 18:53:20 TRACE TransportRequestHandler: Sent result 
RpcResponse{requestId=5705684349884845205, 
body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=47 cap=64]}} to 
client /192.168.86.166:52962
21/04/06 18:53:21 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0.0, 
runningTasks: 1
21/04/06 18:53:21 TRACE MessageDecoder: Received message OneWayMessage: 
OneWayMessage{body=NettyManagedBuffer{buf=CompositeByteBuf(ridx: 5, widx: 
7700, cap: 7700, components=4)}}
21/04/06 18:53:21 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0.0, 
runningTasks: 0
21/04/06 18:53:21 DEBUG TaskSetManager: No tasks for locality level NO_PREF, so 
moving to locality level ANY
21/04/06 18:53:21 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) 
in 2404 ms on 172.19.1.10 (executor 0) (1/1)
21/04/06 18:53:21 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have 
all completed, from pool
21/04/06 18:53:21 INFO DAGScheduler: ResultStage 0 (parquet at <console>:23) 
finished in 107.840 s
21/04/06 18:53:21 DEBUG DAGScheduler: After removal of stage 0, remaining 
stages = 0
21/04/06 18:53:21 INFO DAGScheduler: Job 0 finished: parquet at <console>:23, 
took 107.883663 s

---

 



