Hi all,

I had some queries that ran fine on 1.1.0-SNAPSHOT at commit b1b20301 (Aug 24), but
on the current master branch they no longer work. I looked into the stderr
file on the executor and found the following lines:

14/09/26 16:52:46 ERROR nio.NioBlockTransferService: Exception handling buffer message
java.io.IOException: Channel not open for writing - cannot extend file to required size
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:868)
        at org.apache.spark.network.FileSegmentManagedBuffer.nioByteBuffer(ManagedBuffer.scala:73)
        at org.apache.spark.network.nio.NioBlockTransferService.getBlock(NioBlockTransferService.scala:203)
        at org.apache.spark.network.nio.NioBlockTransferService.org$apache$spark$network$nio$NioBlockTransferService$$processBlockMessage(NioBlockTransferService.scala:179)
        at org.apache.spark.network.nio.NioBlockTransferService$$anonfun$2.apply(NioBlockTransferService.scala:149)
        at org.apache.spark.network.nio.NioBlockTransferService$$anonfun$2.apply(NioBlockTransferService.scala:149)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at org.apache.spark.network.nio.BlockMessageArray.foreach(BlockMessageArray.scala:28)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at org.apache.spark.network.nio.BlockMessageArray.map(BlockMessageArray.scala:28)
        at org.apache.spark.network.nio.NioBlockTransferService.org$apache$spark$network$nio$NioBlockTransferService$$onBlockMessageReceive(NioBlockTransferService.scala:149)
        at org.apache.spark.network.nio.NioBlockTransferService$$anonfun$init$1.apply(NioBlockTransferService.scala:68)
        at org.apache.spark.network.nio.NioBlockTransferService$$anonfun$init$1.apply(NioBlockTransferService.scala:68)
        at org.apache.spark.network.nio.ConnectionManager.org$apache$spark$network$nio$ConnectionManager$$handleMessage(ConnectionManager.scala:677)
        at org.apache.spark.network.nio.ConnectionManager$$anon$10.run(ConnectionManager.scala:515)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Shuffle compression was turned off, because I hit a Snappy PARSING_ERROR
whenever shuffle compression was enabled; even after setting the native
library path, decompression in Snappy still failed. With shuffle compression
turned off, I still get the message above on some of my nodes, while the
others report that an ack was not received after 60 seconds. Does anyone have
any ideas? Thanks for your help!
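
For reference, here is roughly how I turned shuffle compression off (a
minimal sketch; spark.shuffle.compress is the standard conf key, and the app
name is just a placeholder):

    import org.apache.spark.{SparkConf, SparkContext}

    // Disable compression of shuffle outputs so the Snappy codec is never
    // exercised on the shuffle path.
    val conf = new SparkConf()
      .setAppName("query-repro") // placeholder
      .set("spark.shuffle.compress", "false")
    val sc = new SparkContext(conf)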

Thanks,
Daoyuan Wang
