It is caused by the following assert. Maybe we could use *File.separator*
instead of "/".

*assertThat(optional.get()).isEqualTo(hadoopHome + "/conf");*
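
A platform-independent version could look roughly like this (just a sketch
reusing the hadoopHome and optional variables from the existing test, with
File from java.io; not a verified patch):

    // Build the expected path with the platform-dependent separator
    // instead of hard-coding "/".
    final String expectedConfDir = hadoopHome + File.separator + "conf";
    assertThat(optional.get()).isEqualTo(expectedConfDir);

    // Or let java.io.File do the joining with the right separator:
    assertThat(optional.get()).isEqualTo(new File(hadoopHome, "conf").getPath());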

Would you like to create a ticket and open a PR for this issue?

Best,
Yang

hjw <1010445...@qq.com> wrote on Sunday, August 21, 2022 at 19:44:

> When I run mvn clean install, it runs the Flink test cases.
> However, I get these errors:
> [ERROR] Failures:
> [ERROR]
>  
> KubernetesClusterDescriptorTest.testDeployApplicationClusterWithNonLocalSchema:155
> Previous method call should have failed but it returned:
> org.apache.flink.kubernetes.KubernetesClusterDescriptor$$Lambda$839/1619964974@70e5737f
> [ERROR]
>  
> AbstractKubernetesParametersTest.testGetLocalHadoopConfigurationDirectoryFromHadoop1HomeEnv:132->runTestWithEmptyEnv:149->lambda$testGetLocalHadoopConfigurationDirectoryFromHadoop1HomeEnv$3:141
> Expected: is
> "C:\Users\10104\AppData\Local\Temp\junit5662202040601670287/conf"
>      but: was
> "C:\Users\10104\AppData\Local\Temp\junit5662202040601670287\conf"
> [ERROR]
>  
> AbstractKubernetesParametersTest.testGetLocalHadoopConfigurationDirectoryFromHadoop2HomeEnv:117->runTestWithEmptyEnv:149->lambda$testGetLocalHadoopConfigurationDirectoryFromHadoop2HomeEnv$2:126
> Expected: is
> "C:\Users\10104\AppData\Local\Temp\junit7094401822178578683/etc/hadoop"
>      but: was
> "C:\Users\10104\AppData\Local\Temp\junit7094401822178578683\etc\hadoop"
> [ERROR]
>  KubernetesUtilsTest.testLoadPodFromTemplateWithNonExistPathShouldFail:110
> Expected: Expected error message is "Pod template file
> /path/of/non-exist.yaml does not exist."
>      but: The throwable <org.apache.flink.util.FlinkRuntimeException: Pod
> template file \path\of\non-exist.yaml does not exist.> does not contain the
> expected error message "Pod template file /path/of/non-exist.yaml does not
> exist."
>
> I believe the errors occur because of the different file system path
> separators (Unix, Windows, etc.).
>
>
> Env:
> Flink version: 1.15
> Maven: 3.2.5
> JDK: 1.8
> Environment: Windows 10
>
