[ 
https://issues.apache.org/jira/browse/FLINK-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-15929:
----------------------------
    Summary: test_set_requirements_with_cached_directory failed on travis  
(was: test_dependency failed on travis)

> test_set_requirements_with_cached_directory failed on travis
> ------------------------------------------------------------
>
>                 Key: FLINK-15929
>                 URL: https://issues.apache.org/jira/browse/FLINK-15929
>             Project: Flink
>          Issue Type: Test
>          Components: API / Python
>            Reporter: Dian Fu
>            Priority: Major
>             Fix For: 1.10.1, 1.11.0
>
>
> The Python test "test_dependency" is unstable. It failed on Travis with the
> following stack trace (the task thread is blocked in
> MapControlClientPool.getClient, waiting for the Python process environment to
> register its control client):
> {code}
> "Source: PythonInputFormatTableSource(a, b) -> SourceConversion(table=[default_catalog.default_database.Unregistered_TableSource_862767019, source: [PythonInputFormatTableSource(a, b)]], fields=[a, b]) -> StreamExecPythonCalc -> Calc(select=[f0 AS _c0, a]) -> SinkConversionToRow -> Map -> Sink: Unnamed (1/2)" #581 prio=5 os_prio=0 cpu=32.04ms elapsed=302.56s tid=0x0000000001f26000 nid=0x4662 waiting on condition  [0x00007f0acb7f5000]
>    java.lang.Thread.State: TIMED_WAITING (parking)
>       at jdk.internal.misc.Unsafe.park([email protected]/Native Method)
>       - parking to wait for  <0x000000008aa3bfc0> (a java.util.concurrent.CompletableFuture$Signaller)
>       at java.util.concurrent.locks.LockSupport.parkNanos([email protected]/LockSupport.java:234)
>       at java.util.concurrent.CompletableFuture$Signaller.block([email protected]/CompletableFuture.java:1798)
>       at java.util.concurrent.ForkJoinPool.managedBlock([email protected]/ForkJoinPool.java:3128)
>       at java.util.concurrent.CompletableFuture.timedGet([email protected]/CompletableFuture.java:1868)
>       at java.util.concurrent.CompletableFuture.get([email protected]/CompletableFuture.java:2021)
>       at org.apache.beam.runners.fnexecution.control.MapControlClientPool.getClient(MapControlClientPool.java:69)
>       at org.apache.beam.runners.fnexecution.control.MapControlClientPool$$Lambda$1090/0x0000000100d70040.take(Unknown Source)
>       at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:126)
>       at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:178)
>       at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:162)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
>       - locked <0x000000008aa02788> (a org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$StrongEntry)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
>       at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
>       at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:211)
>       at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:202)
>       at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:185)
>       at org.apache.flink.python.AbstractPythonFunctionRunner.open(AbstractPythonFunctionRunner.java:179)
>       at org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator$ProjectUdfInputPythonScalarFunctionRunner.open(AbstractPythonScalarFunctionOperator.java:193)
>       at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:139)
>       at org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:143)
>       at org.apache.flink.table.runtime.operators.python.BaseRowPythonScalarFunctionOperator.open(BaseRowPythonScalarFunctionOperator.java:86)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask$$Lambda$874/0x0000000100a4d840.run(Unknown Source)
>       at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
>       - locked <0x000000008ae28e58> (a java.lang.Object)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>       at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>       at java.lang.Thread.run([email protected]/Thread.java:834)
> {code}
> instance: https://api.travis-ci.org/v3/job/646511525/log.txt
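For context, the TIMED_WAITING frames at the top of the dump (Unsafe.park -> CompletableFuture$Signaller -> CompletableFuture.timedGet) are the standard shape of a thread blocked in a timed CompletableFuture.get: the caller parks until the future completes or the timeout fires. A minimal sketch of that pattern, with a future that is never completed standing in for the client the environment process failed to register (the class name, message strings, and the 1-second timeout are illustrative, not taken from Beam or Flink):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BlockedClientWait {
    public static void main(String[] args) throws Exception {
        // Stand-in for the control client future that getClient() waits on;
        // in the failing test, nothing ever completes it.
        CompletableFuture<String> client = new CompletableFuture<>();
        try {
            // While parked here, a thread dump shows this thread in
            // TIMED_WAITING (parking) on a CompletableFuture$Signaller,
            // matching the task thread in the trace above.
            client.get(1, TimeUnit.SECONDS);
            System.out.println("client connected");
        } catch (TimeoutException e) {
            System.out.println("timed out waiting for client");
        }
    }
}
```

The dump was taken while the wait was still in progress; if it never ends before the test harness gives up, the test times out the way this sketch's `get` does.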



--
This message was sent by Atlassian Jira
(v8.3.4#803005)