asheeshgarg commented on issue #621: Broadcast Join Failure
URL: 
https://github.com/apache/incubator-iceberg/issues/621#issuecomment-554484451
 
 
   Ran Spark with debug logging and have the following observations:
   
   19/11/15 15:59:36 INFO Executor: Starting executor ID driver on host 
localhost
   19/11/15 15:59:36 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 45811.
   19/11/15 15:59:36 INFO NettyBlockTransferService: Server created on 
100.80.47.20:45811
   19/11/15 15:59:36 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
   19/11/15 15:59:36 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, 100.80.47.20, 45811, None)
   19/11/15 15:59:36 INFO BlockManagerMasterEndpoint: Registering block manager 
100.80.47.20:45811 with 34.0 GB RAM, BlockManagerId(driver, 100.80.47.20, 
45811, None)
   19/11/15 15:59:36 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, 100.80.47.20, 45811, None)
   19/11/15 15:59:36 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, 100.80.47.20, 45811, None)
   19/11/15 15:59:37 INFO SharedState: loading hive config file: 
file:/hadoop/conf/hive-site.xml
   19/11/15 15:59:37 INFO SharedState: Setting hive.metastore.warehouse.dir 
('null') to the value of spark.sql.warehouse.dir ('file:/job/spark-warehouse').
   19/11/15 15:59:37 INFO SharedState: Warehouse path is 
'file:/job/spark-warehouse'.
   19/11/15 15:59:37 INFO StateStoreCoordinatorRef: Registered 
StateStoreCoordinator endpoint
   19/11/15 15:59:42 INFO TableScan: Scanning table 
hdfs:///user/datalake/iceberg/eqty/reference snapshot 5286228287138102009 
created at 2019-11-05 21:40:03.510 with filter true
   19/11/15 15:59:42 INFO TableScan: Scanning table 
hdfs:///user/datalake/iceberg/eqty/pricing snapshot 9199449611237852387 created 
at 2019-11-05 22:14:58.716 with filter true
   19/11/15 15:59:43 INFO DataSourceV2Strategy: 
   
   19/11/15 15:59:46 INFO CodeGenerator: Code generated in 12.349638 ms
   19/11/15 15:59:46 INFO ZlibFactory: Successfully loaded & initialized 
native-zlib library
   19/11/15 15:59:46 INFO CodecPool: Got brand-new decompressor [.gz]
   19/11/15 15:59:47 INFO MemoryStore: Block taskresult_0 stored as bytes in 
memory (estimated size 7.8 MB, free 34.0 GB)
   19/11/15 15:59:47 INFO BlockManagerInfo: Added taskresult_0 in memory on 
100.80.47.20:45811 (size: 7.8 MB, free: 34.0 GB)
   19/11/15 15:59:47 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 
8178722 bytes result sent via BlockManager)
   19/11/15 15:59:47 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 
1, localhost, executor driver, partition 1, PROCESS_LOCAL, 138170 bytes)
   19/11/15 15:59:47 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
   19/11/15 15:59:48 INFO TransportClientFactory: Successfully created 
connection to /100.80.47.20:45811 after 59 ms (0 ms spent in bootstraps)
   19/11/15 15:59:48 ERROR TransportResponseHandler: Still have 1 requests 
outstanding when connection from /100.80.47.20:45811 is closed
   19/11/15 15:59:48 ERROR OneForOneBlockFetcher: Failed while starting block 
fetches
   java.io.IOException: Connection from /100.80.47.20:45811 closed
        at 
org.apache.spark.network.client.TransportResponseHandler.channelInactive(TransportResponseHandler.java:146)
        at 
org.apache.spark.network.server.TransportChannelHandler.channelInactive(TransportChannelHandler.java:108)
        at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
   
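   One way to check whether the broadcast exchange itself is the trigger (an 
assumption on my side, not something the logs confirm) is to disable automatic 
broadcast joins and rerun the same query. A minimal sketch:

```scala
// Hedged sketch: force the planner to fall back to a sort-merge join by
// disabling automatic broadcast joins. If the "connection closed" error
// disappears, the failure is tied to fetching the broadcast blocks rather
// than to the scan itself.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

// Re-run the join between the two tables from the logs afterwards.
```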
   **The executor dies immediately when the block fetch is called. The 
TableScan seems to be returning only table label information:
   19/11/15 15:59:42 INFO TableScan: Scanning table 
hdfs:///user/datalake/iceberg/eqty/reference snapshot 5286228287138102009 
created at 2019-11-05 21:40:03.510 with filter true
   19/11/15 15:59:42 INFO TableScan: Scanning table 
hdfs:///user/datalake/iceberg/eqty/pricing snapshot 9199449611237852387 created 
at 2019-11-05 22:14:58.716 with filter true
   
   Does it include how many Parquet files the scan covers? The fact that it 
fails immediately makes me think the code generated by the SQL executor has 
some issue. If needed I can attach the logs for the SQL generated by Spark.**
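   The TableScan INFO line does not print file counts, but (as a sketch, 
assuming the tables are HadoopTables-backed, which the hdfs:/// paths suggest) 
the number of data files the scan would cover can be counted directly with the 
Iceberg API:

```scala
// Hedged sketch using the public Iceberg scan-planning API; the table path
// is taken from the log lines above.
import org.apache.hadoop.conf.Configuration
import org.apache.iceberg.hadoop.HadoopTables

val tables = new HadoopTables(new Configuration())
val table  = tables.load("hdfs:///user/datalake/iceberg/eqty/reference")

// planFiles() yields one FileScanTask per data-file split in the scan, so
// counting them shows what Spark's DataSourceV2 reader would actually read.
var fileCount = 0L
val tasks = table.newScan().planFiles()
try {
  tasks.forEach(_ => fileCount += 1)
} finally {
  tasks.close()
}
println(s"data files in scan: $fileCount")
```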
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

