sogngenWang opened a new issue, #56431:
URL: https://github.com/apache/doris/issues/56431

   ### Search before asking
   
   - [x] I had searched in the 
[issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Version
   
   3.0.7
   
   ### What's Wrong?
   
   When we execute the following SQL:
   ```
   ALTER TABLE xxx SET (
       "bloom_filter_columns" = "country, activate_date, gid, data_type, channelname, platform, timestamp, pending_value"
   );
   ```
   After a few minutes (about 1~2 minutes), 2 of the 3 BE processes crashed. The table is about 32 GB in size and has roughly 120 million rows. Restarting the crashed BE processes also fails, with no further error log; only after we cancel the schema change job do the BEs restart successfully (the commands we used are sketched below, after the screenshot).
   ![Image](https://github.com/user-attachments/assets/2388823e-e948-4302-8593-51881ea238ca)
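   
   For reference, this is roughly how we checked and cancelled the schema change job so that the BEs could start again; it is only a sketch with placeholder database/table names:
   
   ```
   -- Check the state of the schema change job triggered by the ALTER above.
   SHOW ALTER TABLE COLUMN FROM your_db WHERE TableName = "xxx";
   
   -- Cancel the pending schema change job; after this the BEs restart successfully.
   CANCEL ALTER TABLE COLUMN FROM xxx;
   ```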
   
   This is my be.out:
   ```
   INFO: java_cmd /data2/apache-doris-3.0.7-bin-x64/jdk-17.0.10/bin/java
   INFO: jdk_version 17
   StdoutLogger 2025-09-25 02:28:26,093 Start time: Thu Sep 25 02:28:26 UTC 2025
   INFO: java_cmd /data2/apache-doris-3.0.7-bin-x64/jdk-17.0.10/bin/java
   INFO: jdk_version 17
   Java HotSpot(TM) 64-Bit Server VM warning: Option CriticalJNINatives was 
deprecated in version 16.0 and will likely be removed in a future release.
   WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will 
impact performance.
   start BE in local mode
   *** Query id: 0-0 ***
   *** is nereids: 0 ***
   *** tablet id: 0 ***
   *** Aborted at 1758767359 (unix time) try "date -d @1758767359" if you are 
using GNU date ***
   *** Current BE git commitID: 64651a9f2e ***
   *** SIGSEGV address not mapped to object (@0x138) received by PID 44774 (TID 
45534 OR 0x7fd0d0e3a700) from PID 312; stack trace: ***
    0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, 
siginfo_t*, void*) at 
/home/zcp/repo_center/doris_release/doris/be/src/common/signal_handler.h:421
    1# PosixSignals::chained_handler(int, siginfo*, void*) [clone .part.0] in 
/data2/apache-doris-3.0.7-bin-x64/jdk-17.0.10/lib/server/libjvm.so
    2# JVM_handle_linux_signal in 
/data2/apache-doris-3.0.7-bin-x64/jdk-17.0.10/lib/server/libjvm.so
    3# 0x00007FD2C1C5E400 in /lib64/libc.so.6
    4# 
doris::SegcompactionWorker::_do_compact_segments(std::shared_ptr<std::vector<std::shared_ptr<doris::segment_v2::Segment>,
 std::allocator<std::shared_ptr<doris::segment_v2::Segment> > > >) at 
/home/zcp/repo_center/doris_release/doris/be/src/olap/rowset/segcompaction.cpp:267
    5# 
doris::SegcompactionWorker::compact_segments(std::shared_ptr<std::vector<std::shared_ptr<doris::segment_v2::Segment>,
 std::allocator<std::shared_ptr<doris::segment_v2::Segment> > > >) at 
/home/zcp/repo_center/doris_release/doris/be/src/olap/rowset/segcompaction.cpp:355
    6# 
doris::StorageEngine::_handle_seg_compaction(std::shared_ptr<doris::SegcompactionWorker>,
 std::shared_ptr<std::vector<std::shared_ptr<doris::segment_v2::Segment>, 
std::allocator<std::shared_ptr<doris::segment_v2::Segment> > > >, unsigned 
long) at 
/home/zcp/repo_center/doris_release/doris/be/src/olap/olap_server.cpp:1233
    7# std::_Function_handler<void (), 
doris::StorageEngine::submit_seg_compaction_task(std::shared_ptr<doris::SegcompactionWorker>,
 std::shared_ptr<std::vector<std::shared_ptr<doris::segment_v2::Segment>, 
std::allocator<std::shared_ptr<doris::segment_v2::Segment> > > 
>)::$_0>::_M_invoke(std::_Any_data const&) at 
/var/local/ldb-toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/std_function.h:291
    8# doris::ThreadPool::dispatch_thread() at 
/home/zcp/repo_center/doris_release/doris/be/src/util/threadpool.cpp:609
    9# doris::Thread::supervise_thread(void*) at 
/home/zcp/repo_center/doris_release/doris/be/src/util/thread.cpp:499
   10# start_thread in /lib64/libpthread.so.0
   11# clone in /lib64/libc.so.6
   
   ```
   In addition: when I create a new table with about 10,000 rows and then execute the same ALTER to add bloom_filter_columns, it succeeds. Likewise, if we create a new table with these bloom_filter_columns set at creation time, it also works.
   
   
   
   
   ### What You Expected?
   
   How can we resolve this issue? We often need to optimize SQL or the table schema for our application.
   
   ### How to Reproduce?
   
   Create a table of about 32 GB with roughly 120 million rows, then add the bloom filter via ALTER TABLE; this causes the BE processes to crash. A hedged sketch of the reproduction steps follows.
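   
   The sketch below only illustrates the steps; the column names come from the bloom_filter_columns list above, but the column types, keys, bucketing, and table name are assumptions rather than our exact production DDL:
   
   ```
   -- Hypothetical schema: column names match the bloom_filter_columns above,
   -- but the types, keys and bucketing are guesses, not our production DDL.
   CREATE TABLE bf_repro (
       `country`       VARCHAR(64),
       `activate_date` DATE,
       `gid`           BIGINT,
       `data_type`     VARCHAR(32),
       `channelname`   VARCHAR(64),
       `platform`      VARCHAR(32),
       `timestamp`     BIGINT,
       `pending_value` VARCHAR(64)
   ) DUPLICATE KEY(`country`, `activate_date`)
   DISTRIBUTED BY HASH(`gid`) BUCKETS 16
   PROPERTIES ("replication_num" = "3");
   
   -- Load roughly 120 million rows (~32 GB) into the table, then:
   ALTER TABLE bf_repro SET (
       "bloom_filter_columns" = "country, activate_date, gid, data_type, channelname, platform, timestamp, pending_value"
   );
   
   -- Observation from above: with a small table (~10,000 rows), or when
   -- "bloom_filter_columns" is set in PROPERTIES at CREATE TABLE time,
   -- the crash does not occur.
   ```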
   
   ### Anything Else?
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

