linghengqian commented on issue #21347:
URL: https://github.com/apache/shardingsphere/issues/21347#issuecomment-1685514896

   - ShardingSphere's Native Image build CI started failing unexpectedly after https://github.com/apache/shardingsphere/pull/28099. Refer to https://github.com/apache/shardingsphere/actions/runs/5912312066/job/16035597496 . At least https://github.com/apache/shardingsphere/commit/fd574b8148331d9d2d23a2843ef83fef07323018 still builds normally. The GraalVM error in the log below names two possible remedies; a sketch of what they would look like follows the log.
   ```shell
   [1/8] Initializing...                                                                                  (19.9s @ 0.39GB)
    Java version: 17.0.8+7, vendor version: GraalVM CE 17.0.8+7.1
    Graal compiler: optimization level: 2, target machine: x86-64-v3
    C compiler: gcc (linux, x86_64, 11.4.0)
    Garbage collector: Serial GC (max heap size: 80% of RAM)
    2 user-specific feature(s)
    - com.oracle.svm.polyglot.groovy.GroovyIndyInterfaceFeature
    - com.oracle.svm.thirdparty.gson.GsonFeature
   [2/8] Performing analysis...  []                                                                       (56.0s @ 1.20GB)
     11,564 (82.58%) of 14,003 types reachable
     18,916 (60.69%) of 31,169 fields reachable
     58,086 (53.95%) of 107,671 methods reachable
      3,818 types,   148 fields, and 2,673 methods registered for reflection

   Error: java.util.concurrent.ExecutionException: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of ch.qos.logback.core.status.InfoStatus are allowed in the image heap as this class should be initialized at image runtime. Object has been initialized by the io.grpc.netty.shaded.io.netty.channel.AbstractChannel class initializer with a trace:
           at ch.qos.logback.core.status.InfoStatus.<init>(InfoStatus.java:18)
           at ch.qos.logback.classic.util.ContextInitializer.statusOnResourceSearch(ContextInitializer.java:156)
           at ch.qos.logback.classic.util.ContextInitializer.getResource(ContextInitializer.java:125)
           at ch.qos.logback.classic.util.ContextInitializer.findURLOfDefaultConfigurationFile(ContextInitializer.java:114)
           at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:132)
           at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84)
           at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
           at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
           at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
           at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.Slf4JLoggerFactory.<init>(Slf4JLoggerFactory.java:42)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.Slf4JLoggerFactory$NopInstanceHolder.<clinit>(Slf4JLoggerFactory.java:63)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.Slf4JLoggerFactory.getInstanceWithNopCheck(Slf4JLoggerFactory.java:59)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.InternalLoggerFactory.useSlf4JLoggerFactory(InternalLoggerFactory.java:62)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.InternalLoggerFactory.newDefaultFactory(InternalLoggerFactory.java:42)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.InternalLoggerFactory.getDefaultFactory(InternalLoggerFactory.java:111)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.InternalLoggerFactory.getInstance(InternalLoggerFactory.java:134)
           at io.grpc.netty.shaded.io.netty.util.internal.logging.InternalLoggerFactory.getInstance(InternalLoggerFactory.java:127)
           at io.grpc.netty.shaded.io.netty.channel.AbstractChannel.<clinit>(AbstractChannel.java:45)
   .  To fix the issue mark ch.qos.logback.core.status.InfoStatus for build-time initialization with --initialize-at-build-time=ch.qos.logback.core.status.InfoStatus or use the the information from the trace to find the culprit and --initialize-at-run-time=<culprit> to prevent its instantiation.

   Error: Use -H:+ReportExceptionStackTraces to print stacktrace of underlying exception

   ------------------------------------------------------------------------------------------------------------------------
                         11.8s (15.1% of total time) in 68 GCs | Peak RSS: 3.08GB | CPU load: 4.74

   ========================================================================================================================
   Finished generating 'apache-shardingsphere-proxy-native' in 1m 16s.
   com.oracle.svm.driver.NativeImage$NativeImageError
           at org.graalvm.nativeimage.driver/com.oracle.svm.driver.NativeImage.showError(NativeImage.java:1982)
           at org.graalvm.nativeimage.driver/com.oracle.svm.driver.NativeImage.build(NativeImage.java:1598)
           at org.graalvm.nativeimage.driver/com.oracle.svm.driver.NativeImage.performBuild(NativeImage.java:1557)
           at org.graalvm.nativeimage.driver/com.oracle.svm.driver.NativeImage.main(NativeImage.java:1531)
   
   ```
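   The error message quotes two remedies. Purely as a sketch (neither flag is currently wired into ShardingSphere's build; the class names are copied from the trace above, and `...` stands for the rest of the real `native-image` arguments, which are elided here):
   ```shell
   # Option 1: initialize the logback status class at build time, so its
   # instances are permitted in the image heap:
   native-image --initialize-at-build-time=ch.qos.logback.core.status.InfoStatus ...

   # Option 2: push the class initializer that created the object (the
   # culprit from the trace) to run time instead:
   native-image --initialize-at-run-time=io.grpc.netty.shaded.io.netty.channel.AbstractChannel ...
   ```
   Neither is a real fix here: the trace suggests more shaded Netty classes would need the same treatment one by one, hence the dependency exclusion described below instead.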
   - The reason for this CI failure is simple: `org.apache.shardingsphere:shardingsphere-proxy-bootstrap` depends on `io.grpc:grpc-netty-shaded:1.51.0`, and the shared GraalVM Reachability Metadata repository contains no metadata for `io.grpc:grpc-netty-shaded:1.51.0` that would override the metadata shipped inside the library itself, the way it does for the plain `io.netty` libraries. The early investigation is located in https://github.com/micronaut-projects/micronaut-gcp/issues/532 ; it looks like the GraalVM team has not reached a conclusion yet.
   - I'll open a PR to temporarily exclude `org.apache.shardingsphere:shardingsphere-cluster-mode-repository-etcd` from the Native Image build process; a quick way to verify the offending artifact has actually left the dependency tree is sketched below.
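   As a hedged sketch for verifying that exclusion (standard `maven-dependency-plugin` usage; the `./mvnw` wrapper invocation from the repository root is an assumption):
   ```shell
   # Sketch: list every path through which grpc-netty-shaded enters the build.
   # Once the etcd repository module is excluded, this should print no matches
   # under the proxy modules.
   ./mvnw dependency:tree -Dincludes=io.grpc:grpc-netty-shaded
   ```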

