surajchoubey opened a new issue, #5327:
URL: https://github.com/apache/incubator-gluten/issues/5327

   ### Problem description
   
   ```
   $SPARK_HOME/bin/spark-shell \
       --master spark://localhost:7077 \
       --deploy-mode client \
       --conf spark.plugins=io.glutenproject.GlutenPlugin \
       --conf spark.memory.offHeap.enabled=true \
       --conf spark.memory.offHeap.size=20g \
       --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager \
       --jars /home/agentperry/Documents/Zettabolt/spark-3.2.4-bin-hadoop3.2/jars/gluten-velox-bundle-spark3.2_2.12-1.1.1.jar
   2024-04-08 16:40:55,238 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Setting default log level to "WARN".
   To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
   OpenJDK 64-Bit Server VM warning: You have loaded library /tmp/gluten-b515788b-b16c-4cd4-98e9-43cbf50fd401/jni/0d51f686-650e-46a1-a8a4-1c767df4c01d/gluten-7964092948296879699/libvelox.so which might have disabled stack guard. The VM will try to fix the stack guard now.
   It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
   #
   # A fatal error has been detected by the Java Runtime Environment:
   #
   #  SIGILL (0x4) at pc=0x000077d021717353, pid=53799, tid=0x000077d056fff640
   #
   # JRE version: OpenJDK Runtime Environment (8.0_402-b06) (build 1.8.0_402-8u402-ga-2ubuntu1~22.04-b06)
   # Java VM: OpenJDK 64-Bit Server VM (25.402-b06 mixed mode linux-amd64 compressed oops)
   # Problematic frame:
   # C  [libgluten.so+0x317353]  gluten::Runtime::registerFactory(std::string const&, std::function<gluten::Runtime* (std::unordered_map<std::string, std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, std::string> > > const&)>)+0x23
   #
   # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
   #
   # An error report file with more information is saved as:
   # /home/agentperry/hs_err_pid53799.log
   #
   # If you would like to submit a bug report, please visit:
   #   http://bugreport.java.com/bugreport/crash.jsp
   # The crash happened outside the Java Virtual Machine in native code.
   # See problematic frame for where to report the bug.
   
   ```
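
   A SIGILL (illegal instruction) inside a prebuilt native library such as `libgluten.so` commonly means the binary was compiled for CPU instruction-set extensions the host CPU does not support. As a first diagnostic step, a minimal sketch (assuming a Linux x86_64 host; the exact flag list the Gluten 1.1.1 Velox bundle needs is an assumption here, not taken from its build config):

   ```shell
   #!/bin/sh
   # Sketch: check whether this CPU advertises the instruction-set extensions
   # that prebuilt Gluten/Velox binaries are typically compiled with. The flag
   # list below is an assumption; the real requirement depends on how the
   # bundle jar's native libraries were built.
   has_cpu_flag() {
       grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | grep -qw "$1"
   }

   for f in sse4_2 avx avx2 bmi2; do
       if has_cpu_flag "$f"; then
           echo "$f: present"
       else
           echo "$f: MISSING"
       fi
   done
   ```

   If any required flag is missing (`lscpu` shows the same list), the usual workarounds are to run on a CPU that supports it or to build Gluten from source on the target machine so the compiler targets the local ISA.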
   
   ### System information
   
   #########################
   # Monday 08 April, 2024 #
   #########################
   
   agentperry@warmachine86-2 (20Y700B5IG)
   Ubuntu 22.04.4 LTS (GNU/Linux 6.5.0-26-generic)
   Arch: x86_64
   Packages: 2221 (dpkg) 
   Shell: Z Shell 
   Public IPv4:  (AS17488 - Hathway IP Over Cable Internet - IN)
   Local IP: 192.168.1.8 172.17.0.1 
   
   
   ### CMake log
   
   The same spark-shell crash output as in the problem description was pasted here; only the trailing abort line is new:
   
   ```bash
   /home/agentperry/Documents/Zettabolt/spark-3.2.4-bin-hadoop3.2//bin/spark-shell: line 47: 53799 Aborted                 (core dumped) "${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.repl.Main --name "Spark shell" "$@"
   ```
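
   The JVM message notes that core dumps were disabled and suggests `ulimit -c unlimited`. A sketch of how to apply that in the shell that launches spark-shell, and then pull the key line from the crash report the JVM already wrote (the hs_err path is taken verbatim from the log above and will differ per run):

   ```shell
   #!/bin/sh
   # Sketch: enable core dumps for this shell session, as the JVM message
   # suggests, then extract the faulting native frame from the existing
   # hs_err crash report.
   ulimit -c unlimited 2>/dev/null || true   # may be capped by a hard limit
   echo "core dump limit: $(ulimit -c)"

   # The "Problematic frame" line pinpoints the faulting native symbol:
   grep -m1 'Problematic frame' /home/agentperry/hs_err_pid53799.log 2>/dev/null || true
   ```

   With core dumps enabled, a subsequent crash leaves a core file that can be opened with `gdb` against `libgluten.so` for a full native backtrace; the hs_err file alone is usually enough to confirm an ISA mismatch.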
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

