CritasWang opened a new pull request, #17207: URL: https://github.com/apache/iotdb/pull/17207
Fixes the "end-of-stream mid-frame" HTTP/2 frame truncation errors in the Flight SQL integration tests.

Root cause:
The gRPC default thread pool executor fails to properly handle subsequent RPCs on the same HTTP/2 connection in the DataNode JVM environment, where standalone Netty JARs on the classpath conflict with the grpc-netty bundled in the fat jar.

Fix:
1. directExecutor(): run gRPC handlers on the Netty event loop thread, bypassing the default executor's thread scheduling issues (key fix; see the transport sketch at the end of this description)
2. flowControlWindow(1MB): an explicit HTTP/2 flow control window prevents framing errors while duplicate Netty JARs coexist on the classpath
3. Exclude io.netty from the fat jar POM: use the standalone Netty JARs already on the DataNode classpath instead of bundling duplicates

Additional bug fixes:
- TsBlockToArrowConverter: fix an NPE when getColumnNameIndexMap() returns null for SHOW DATABASES queries (fall back to column indices)
- FlightSqlAuthHandler: add null guards in authenticate() and appendToOutgoingHeaders() for CallHeaders with null internal maps
- FlightSqlAuthHandler: rewrite as a CallHeaderAuthenticator with Bearer token reuse and a Basic auth fallback
- FlightSqlSessionManager: add a user token cache for session reuse
- IoTDBFlightSqlProducer: handle non-query statements (USE, CREATE, etc.) by returning an empty FlightInfo, and use the TicketStatementQuery protobuf format

Test changes:
- Use fully qualified table names (database.table) instead of a USE statement, so each test issues exactly one GetFlightInfo and one DoGet RPC per connection (see the client sketch at the end of this description)
- All 5 integration tests pass: testShowDatabases, testQueryWithAllDataTypes, testQueryWithFilter, testQueryWithAggregation, testEmptyResult
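
Illustrative transport sketch (fix items 1 and 2): a minimal example of the two settings using the plain io.grpc.netty.NettyServerBuilder API. The actual Flight SQL server in the DataNode is built through Arrow Flight, which wraps the Netty transport internally, so the real wiring in this PR may differ; the method name and parameters below are placeholders.

```java
import io.grpc.BindableService;
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import java.io.IOException;

public class FlightSqlTransportSketch {

  // Hypothetical helper; port and service instance are placeholders for illustration.
  public static Server startServer(int port, BindableService flightSqlService) throws IOException {
    Server server =
        NettyServerBuilder.forPort(port)
            // Run gRPC handlers directly on the Netty event loop thread instead of the
            // default thread pool executor, so later RPCs on the same HTTP/2 connection
            // are not affected by the default executor's scheduling issues.
            .directExecutor()
            // Explicit 1 MB HTTP/2 flow control window to avoid framing errors.
            .flowControlWindow(1024 * 1024)
            .addService(flightSqlService)
            .build();
    return server.start();
  }
}
```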

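Illustrative client sketch (test changes): a minimal Flight SQL query using a fully qualified table name, so the whole exchange is one GetFlightInfo plus one DoGet on a single connection. Host, port, database, and table names are placeholders, and the actual integration-test helpers in this PR may differ.

```java
import org.apache.arrow.flight.FlightClient;
import org.apache.arrow.flight.FlightInfo;
import org.apache.arrow.flight.FlightStream;
import org.apache.arrow.flight.Location;
import org.apache.arrow.flight.sql.FlightSqlClient;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;

public class FullyQualifiedQuerySketch {

  public static void main(String[] args) throws Exception {
    // Host, port, database, and table names are placeholders only.
    try (BufferAllocator allocator = new RootAllocator();
        FlightClient client =
            FlightClient.builder(allocator, Location.forGrpcInsecure("localhost", 8904)).build();
        FlightSqlClient sqlClient = new FlightSqlClient(client)) {

      // Fully qualified table name (database.table), so no separate USE statement is
      // needed: one GetFlightInfo for the query plan...
      FlightInfo info = sqlClient.execute("SELECT * FROM test_db.test_table");

      // ...and one DoGet to fetch the results.
      try (FlightStream stream = sqlClient.getStream(info.getEndpoints().get(0).getTicket())) {
        while (stream.next()) {
          System.out.println(stream.getRoot().contentToTSVString());
        }
      }
    }
  }
}
```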