Junning Liang created HUDI-7286:
-----------------------------------

             Summary: On the Flink side, the index.type parameter is case sensitive
                 Key: HUDI-7286
                 URL: https://issues.apache.org/jira/browse/HUDI-7286
             Project: Apache Hudi
          Issue Type: Bug
            Reporter: Junning Liang
            Assignee: Junning Liang


If a lowercase index.type value is supplied via SQL hints on the Flink side (for example /*+ OPTIONS('index.type' = 'bloom') */), the job throws the following exception:
{code:java}
Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to start the operator coordinators
        at org.apache.flink.runtime.scheduler.SchedulerBase.startAllOperatorCoordinators(SchedulerBase.java:1254) ~[flink-dist_2.11-1.12.0-2101.jar:1.12.0-2101]
        at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:620) ~[flink-dist_2.11-1.12.0-2101.jar:1.12.0-2101]
        at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1035) ~[flink-dist_2.11-1.12.0-2101.jar:1.12.0-2101]
        at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:719) ~[?:1.8.0_312]
        ... 27 more
Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hudi.index.HoodieIndex.IndexType.bloom
        at java.lang.Enum.valueOf(Enum.java:238) ~[?:1.8.0_312]
        at org.apache.hudi.index.HoodieIndex$IndexType.valueOf(HoodieIndex.java:155) ~[blob_p-9c5a14ae562d04e25991a03bf9668559004f6a49-3c2d929685fdf31da3c98d017482739a:0.13.1-012]
        at org.apache.hudi.configuration.OptionsResolver.getIndexType(OptionsResolver.java:249) ~[blob_p-9c5a14ae562d04e25991a03bf9668559004f6a49-3c2d929685fdf31da3c98d017482739a:0.13.1-012]
        at org.apache.hudi.util.StreamerUtil.getIndexConfig(StreamerUtil.java:148) ~[blob_p-9c5a14ae562d04e25991a03bf9668559004f6a49-3c2d929685fdf31da3c98d017482739a:0.13.1-012]
        at org.apache.hudi.util.FlinkWriteClients.getHoodieClientConfig(FlinkWriteClients.java:226) ~[blob_p-9c5a14ae562d04e25991a03bf9668559004f6a49-3c2d929685fdf31da3c98d017482739a:0.13.1-012]
        at org.apache.hudi.util.FlinkWriteClients.createWriteClient(FlinkWriteClients.java:74) ~[blob_p-9c5a14ae562d04e25991a03bf9668559004f6a49-3c2d929685fdf31da3c98d017482739a:0.13.1-012]
        at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:212) ~[blob_p-9c5a14ae562d04e25991a03bf9668559004f6a49-3c2d929685fdf31da3c98d017482739a:0.13.1-012]
        at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:190) ~[flink-dist_2.11-1.12.0-2101.jar:1.12.0-2101]
        at org.apache.flink.runtime.scheduler.SchedulerBase.startAllOperatorCoordinators(SchedulerBase.java:1248) ~[flink-dist_2.11-1.12.0-2101.jar:1.12.0-2101]
        at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:620) ~[flink-dist_2.11-1.12.0-2101.jar:1.12.0-2101]
        at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1035) ~[flink-dist_2.11-1.12.0-2101.jar:1.12.0-2101]
        at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:719) ~[?:1.8.0_312]
        ... 27 more
{code}
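The second cause in the trace points at the root of the problem: the lowercase value from the hint ("bloom") is passed straight to Enum.valueOf, which only matches enum constant names exactly, so org.apache.hudi.index.HoodieIndex.IndexType.BLOOM is never found. Below is a minimal, self-contained sketch (not the actual Hudi code; the enum subset and helper name are illustrative) of how the lookup could be made case-insensitive by normalizing the configured value before the enum lookup:
{code:java}
import java.util.Locale;

public class IndexTypeLookup {

  // Illustrative stand-in for a few org.apache.hudi.index.HoodieIndex.IndexType constants.
  enum IndexType { BLOOM, GLOBAL_BLOOM, FLINK_STATE, BUCKET }

  // Hypothetical helper: upper-casing before Enum.valueOf makes the option case-insensitive.
  static IndexType parseIndexType(String configured) {
    return IndexType.valueOf(configured.toUpperCase(Locale.ROOT));
  }

  public static void main(String[] args) {
    // IndexType.valueOf("bloom") would throw IllegalArgumentException, as in the trace above;
    // the normalized lookup resolves it to BLOOM.
    System.out.println(parseIndexType("bloom"));
  }
}
{code}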
 


