See <https://ci-beam.apache.org/job/beam_LoadTests_Python_ParDo_Flink_Batch/1279/display/redirect>

Changes:


------------------------------------------
[...truncated 2.76 KB...]
FLINK_NUM_WORKERS=5
DETACHED_MODE=true
JOB_SERVER_IMAGE=gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest
GCS_BUCKET=gs://beam-flink-cluster
HARNESS_IMAGES_TO_PULL=gcr.io/apache-beam-testing/beam_portability/beam_python3.7_sdk:latest
HADOOP_DOWNLOAD_URL=https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
GCLOUD_ZONE=us-central1-a
FLINK_TASKMANAGER_SLOTS=1
FLINK_DOWNLOAD_URL=https://archive.apache.org/dist/flink/flink-1.15.0/flink-1.15.0-bin-scala_2.12.tgz
ARTIFACTS_DIR=gs://beam-flink-cluster/beam-loadtests-python-pardo-flink-batch-1279

[EnvInject] - Variables injected successfully.
[beam_LoadTests_Python_ParDo_Flink_Batch] $ /bin/bash -xe /tmp/jenkins829094611264905160.sh
+ echo Setting up flink cluster
Setting up flink cluster
[beam_LoadTests_Python_ParDo_Flink_Batch] $ /bin/bash -xe /tmp/jenkins2184745189623917528.sh
+ cd <https://ci-beam.apache.org/job/beam_LoadTests_Python_ParDo_Flink_Batch/ws/src/.test-infra/dataproc>
+ ./flink_cluster.sh create
+ GCLOUD_ZONE=us-central1-a
+ DATAPROC_VERSION=2.1-debian
++ echo us-central1-a
++ sed -E 's/(-[a-z])?$//'
+ GCLOUD_REGION=us-central1
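The region above is derived from the zone by stripping the trailing `-<letter>` suffix with sed. A minimal Python sketch of the same transformation (a hypothetical helper, not part of flink_cluster.sh):

```python
import re

def zone_to_region(zone: str) -> str:
    """Strip a trailing '-<letter>' suffix (e.g. '-a') from a GCE zone,
    mirroring the sed expression 's/(-[a-z])?$//' used in the script."""
    return re.sub(r"(-[a-z])?$", "", zone)

print(zone_to_region("us-central1-a"))  # us-central1
```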
+ MASTER_NAME=beam-loadtests-python-pardo-flink-batch-1279-m
+ INIT_ACTIONS_FOLDER_NAME=init-actions
+ FLINK_INIT=gs://beam-flink-cluster/init-actions/flink.sh
+ BEAM_INIT=gs://beam-flink-cluster/init-actions/beam.sh
+ DOCKER_INIT=gs://beam-flink-cluster/init-actions/docker.sh
+ FLINK_LOCAL_PORT=8081
+ FLINK_TASKMANAGER_SLOTS=1
+ YARN_APPLICATION_MASTER=
+ create
+ upload_init_actions
+ echo 'Uploading initialization actions to GCS bucket: gs://beam-flink-cluster'
Uploading initialization actions to GCS bucket: gs://beam-flink-cluster
+ gsutil cp -r init-actions/beam.sh init-actions/docker.sh init-actions/flink.sh gs://beam-flink-cluster/init-actions
Copying file://init-actions/beam.sh [Content-Type=text/x-sh]...
Copying file://init-actions/docker.sh [Content-Type=text/x-sh]...
Copying file://init-actions/flink.sh [Content-Type=text/x-sh]...
Operation completed over 3 objects/13.5 KiB.
+ create_cluster
+ local metadata=flink-snapshot-url=https://archive.apache.org/dist/flink/flink-1.15.0/flink-1.15.0-bin-scala_2.12.tgz,
+ metadata+=flink-start-yarn-session=true,
+ metadata+=flink-taskmanager-slots=1,
+ metadata+=hadoop-jar-url=https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
+ [[ -n gcr.io/apache-beam-testing/beam_portability/beam_python3.7_sdk:latest ]]
+ metadata+=,beam-sdk-harness-images-to-pull=gcr.io/apache-beam-testing/beam_portability/beam_python3.7_sdk:latest
+ [[ -n gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest ]]
+ metadata+=,beam-job-server-image=gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest
+ local image_version=2.1-debian
+ echo 'Starting dataproc cluster. Dataproc version: 2.1-debian'
Starting dataproc cluster. Dataproc version: 2.1-debian
+ gcloud dataproc clusters create beam-loadtests-python-pardo-flink-batch-1279 --region=us-central1 --num-workers=5 --master-machine-type=n1-standard-2 --worker-machine-type=n1-standard-2 --metadata flink-snapshot-url=https://archive.apache.org/dist/flink/flink-1.15.0/flink-1.15.0-bin-scala_2.12.tgz,flink-start-yarn-session=true,flink-taskmanager-slots=1,hadoop-jar-url=https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar,beam-sdk-harness-images-to-pull=gcr.io/apache-beam-testing/beam_portability/beam_python3.7_sdk:latest,beam-job-server-image=gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest, --image-version=2.1-debian --zone=us-central1-a --optional-components=FLINK,DOCKER --quiet
Waiting on operation [projects/apache-beam-testing/regions/us-central1/operations/2700f9fa-a04e-3f2d-86f0-37d8b4837f0b].
Waiting for cluster creation operation...
WARNING: Consider using Auto Zone rather than selecting a zone manually. See https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone
WARNING: Permissions are missing for the default service account '[email protected]', missing permissions: [storage.objects.get, storage.objects.update] on the staging_bucket 'projects/_/buckets/dataproc-staging-us-central1-844138762903-fb7i4jhf'. This usually happens when a custom resource (ex: custom staging bucket) or a user-managed VM Service account has been provided and the default/user-managed service account hasn't been granted enough permissions on the resource. See https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#VM_service_account.
WARNING: Permissions are missing for the default service account '[email protected]', missing permissions: [storage.objects.get, storage.objects.update] on the temp_bucket 'projects/_/buckets/dataproc-temp-us-central1-844138762903-khnec1vl'. This usually happens when a custom resource (ex: custom staging bucket) or a user-managed VM Service account has been provided and the default/user-managed service account hasn't been granted enough permissions on the resource. See https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#VM_service_account.
...done.
Created [https://dataproc.googleapis.com/v1/projects/apache-beam-testing/regions/us-central1/clusters/beam-loadtests-python-pardo-flink-batch-1279]
Cluster placed in zone [us-central1-a].
+ get_leader
+ local i=0
+ local application_ids
+ local application_masters
+ echo 'Yarn Applications'
Yarn Applications
++ gcloud compute ssh --zone=us-central1-a --quiet yarn@beam-loadtests-python-pardo-flink-batch-1279-m '--command=yarn application -list'
++ grep 'Apache Flink'
Writing 3 keys to /home/jenkins/.ssh/google_compute_known_hosts
2023-04-07 13:55:18,035 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at beam-loadtests-python-pardo-flink-batch-1279-m.c.apache-beam-testing.internal./10.128.1.53:8032
2023-04-07 13:55:18,363 INFO client.AHSProxy: Connecting to Application History server at beam-loadtests-python-pardo-flink-batch-1279-m.c.apache-beam-testing.internal./10.128.1.53:10200
+ read line
+ echo application_1680875596067_0001 flink-dataproc Apache Flink root default RUNNING UNDEFINED 100% http://10.128.1.55:46241
application_1680875596067_0001 flink-dataproc Apache Flink root default RUNNING UNDEFINED 100% http://10.128.1.55:46241
++ sed 's/ .*//'
++ echo application_1680875596067_0001 flink-dataproc Apache Flink root default RUNNING UNDEFINED 100% http://10.128.1.55:46241
+ application_ids[$i]=application_1680875596067_0001
++ sed -E 's#.*(https?://)##'
++ echo application_1680875596067_0001 flink-dataproc Apache Flink root default RUNNING UNDEFINED 100% http://10.128.1.55:46241
++ sed 's/ .*//'
+ application_masters[$i]=10.128.1.55:46241
+ i=1
+ read line
+ '[' 1 '!=' 1 ']'
+ YARN_APPLICATION_MASTER=10.128.1.55:46241
+ echo 'Using Yarn Application master: 10.128.1.55:46241'
Using Yarn Application master: 10.128.1.55:46241
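The get_leader step above scrapes `yarn application -list` output: the first field is the application id, and the host:port following the tracking URL's scheme is the ApplicationMaster address. A hypothetical Python equivalent of the two sed pipelines (helper name is illustrative only):

```python
import re

def parse_yarn_line(line: str):
    """Extract the YARN application id (sed 's/ .*//') and the
    ApplicationMaster host:port (sed -E 's#.*(https?://)##' then
    's/ .*//') from one 'yarn application -list' row."""
    app_id = line.split()[0]
    master = re.sub(r".*https?://", "", line).split()[0]
    return app_id, master

row = ("application_1680875596067_0001 flink-dataproc Apache Flink root "
       "default RUNNING UNDEFINED 100% http://10.128.1.55:46241")
print(parse_yarn_line(row))
```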
+ [[ -n gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest ]]
+ start_job_server
+ gcloud compute ssh --zone=us-central1-a --quiet yarn@beam-loadtests-python-pardo-flink-batch-1279-m '--command=sudo --user yarn docker run --detach --publish 8099:8099 --publish 8098:8098 --publish 8097:8097 --volume ~/.config/gcloud:/root/.config/gcloud gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest --flink-master=10.128.1.55:46241 --artifacts-dir=gs://beam-flink-cluster/beam-loadtests-python-pardo-flink-batch-1279'
Existing host keys found in /home/jenkins/.ssh/google_compute_known_hosts
Unable to find image 'gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest' locally
latest: Pulling from apache-beam-testing/beam_portability/beam_flink1.15_job_server
Digest: sha256:a430b98fe0c85e097dcf202d761fdf1cec460cfb9aed9c1e202285a60ecf3bd7
Status: Downloaded newer image for gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest
0fd0fb93de556881fe42c70c890d288b8581d68e8e9facec1a7ff59101cf9871
+ start_tunnel
++ gcloud compute ssh --quiet --zone=us-central1-a yarn@beam-loadtests-python-pardo-flink-batch-1279-m '--command=curl -s "http://10.128.1.55:46241/jobmanager/config";'
Existing host keys found in /home/jenkins/.ssh/google_compute_known_hosts
+ local 'job_server_config=[{"key":"blob.server.port","value":"39969"},{"key":"yarn.flink-dist-jar","value":"file:/usr/lib/flink/lib/flink-dist-1.15.3.jar"},{"key":"classloader.check-leaked-classloader","value":"False"},{"key":"jobmanager.execution.failover-strategy","value":"region"},{"key":"high-availability.cluster-id","value":"application_1680875596067_0001"},{"key":"jobmanager.rpc.address","value":"beam-loadtests-python-pardo-flink-batch-1279-w-0.c.apache-beam-testing.internal"},{"key":"jobmanager.memory.jvm-overhead.min","value":"611948962b"},{"key":"io.tmp.dirs","value":"/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1680875596067_0001"},{"key":"taskmanager.network.numberOfBuffers","value":"4096"},{"key":"parallelism.default","value":"8"},{"key":"taskmanager.numberOfTaskSlots","value":"2"},{"key":"env.hadoop.conf.dir","value":"/etc/hadoop/conf"},{"key":"yarn.application.name","value":"flink-dataproc"},{"key":"taskmanager.heap.mb","value":"5836"},{"key":"taskmanager.memory.process.size","value":"5836 mb"},{"key":"web.port","value":"0"},{"key":"jobmanager.heap.mb","value":"5836"},{"key":"jobmanager.memory.off-heap.size","value":"134217728b"},{"key":"execution.target","value":"yarn-session"},{"key":"jobmanager.memory.process.size","value":"5836 mb"},{"key":"web.tmpdir","value":"/tmp/flink-web-73fc9bc2-4ea6-4b16-a9c1-17e8b96505ce"},{"key":"jobmanager.rpc.port","value":"35665"},{"key":"rest.bind-address","value":"beam-loadtests-python-pardo-flink-batch-1279-w-0.c.apache-beam-testing.internal"},{"key":"internal.io.tmpdirs.use-local-default","value":"true"},{"key":"execution.attached","value":"false"},{"key":"internal.cluster.execution-mode","value":"NORMAL"},{"key":"rest.address","value":"beam-loadtests-python-pardo-flink-batch-1279-w-0.c.apache-beam-testing.internal"},{"key":"fs.hdfs.hadoopconf","value":"/etc/hadoop/conf"},{"key":"jobmanager.memory.jvm-metaspace.size","value":"268435456b"},{"key":"$internal.deployment.config-dir","value":"/usr/lib/flink/conf"},{"key":"$internal.yarn.log-config-file","value":"/usr/lib/flink/conf/log4j.properties"},{"key":"jobmanager.memory.heap.size","value":"5104887390b"},{"key":"jobmanager.memory.jvm-overhead.max","value":"611948962b"}]'
+ local key=jobmanager.rpc.port
++ echo 10.128.1.55:46241
++ cut -d : -f1
+ local yarn_application_master_host=10.128.1.55
++ python -c 'import sys, json; print([e['\''value'\''] for e in json.load(sys.stdin) if e['\''key'\''] == u'\''jobmanager.rpc.port'\''][0])'
++ echo '[{"key":"blob.server.port","value":"39969"},{"key":"yarn.flink-dist-jar","value":"file:/usr/lib/flink/lib/flink-dist-1.15.3.jar"},{"key":"classloader.check-leaked-classloader","value":"False"},{"key":"jobmanager.execution.failover-strategy","value":"region"},{"key":"high-availability.cluster-id","value":"application_1680875596067_0001"},{"key":"jobmanager.rpc.address","value":"beam-loadtests-python-pardo-flink-batch-1279-w-0.c.apache-beam-testing.internal"},{"key":"jobmanager.memory.jvm-overhead.min","value":"611948962b"},{"key":"io.tmp.dirs","value":"/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1680875596067_0001"},{"key":"taskmanager.network.numberOfBuffers","value":"4096"},{"key":"parallelism.default","value":"8"},{"key":"taskmanager.numberOfTaskSlots","value":"2"},{"key":"env.hadoop.conf.dir","value":"/etc/hadoop/conf"},{"key":"yarn.application.name","value":"flink-dataproc"},{"key":"taskmanager.heap.mb","value":"5836"},{"key":"taskmanager.memory.process.size","value":"5836' 'mb"},{"key":"web.port","value":"0"},{"key":"jobmanager.heap.mb","value":"5836"},{"key":"jobmanager.memory.off-heap.size","value":"134217728b"},{"key":"execution.target","value":"yarn-session"},{"key":"jobmanager.memory.process.size","value":"5836' 'mb"},{"key":"web.tmpdir","value":"/tmp/flink-web-73fc9bc2-4ea6-4b16-a9c1-17e8b96505ce"},{"key":"jobmanager.rpc.port","value":"35665"},{"key":"rest.bind-address","value":"beam-loadtests-python-pardo-flink-batch-1279-w-0.c.apache-beam-testing.internal"},{"key":"internal.io.tmpdirs.use-local-default","value":"true"},{"key":"execution.attached","value":"false"},{"key":"internal.cluster.execution-mode","value":"NORMAL"},{"key":"rest.address","value":"beam-loadtests-python-pardo-flink-batch-1279-w-0.c.apache-beam-testing.internal"},{"key":"fs.hdfs.hadoopconf","value":"/etc/hadoop/conf"},{"key":"jobmanager.memory.jvm-metaspace.size","value":"268435456b"},{"key":"$internal.deployment.config-dir","value":"/usr/lib/flink/conf"},{"key":"$internal.yarn.log-config-file","value":"/usr/lib/flink/conf/log4j.properties"},{"key":"jobmanager.memory.heap.size","value":"5104887390b"},{"key":"jobmanager.memory.jvm-overhead.max","value":"611948962b"}]'
+ local jobmanager_rpc_port=35665
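The start_tunnel step above fetches Flink's /jobmanager/config endpoint (a JSON array of key/value objects) and plucks out jobmanager.rpc.port with an inline Python expression. Unrolled into a small readable sketch (sample payload abbreviated from the log; the helper name is illustrative):

```python
import json

def get_config_value(config_json: str, key: str) -> str:
    """Return the value for `key` from Flink's /jobmanager/config
    payload, which is a JSON array of {"key": ..., "value": ...} objects."""
    return [e["value"] for e in json.loads(config_json) if e["key"] == key][0]

sample = '[{"key":"jobmanager.rpc.port","value":"35665"},{"key":"web.port","value":"0"}]'
print(get_config_value(sample, "jobmanager.rpc.port"))  # 35665
```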
++ [[ true == \t\r\u\e ]]
++ echo ' -Nf >& /dev/null'
+ local 'detached_mode_params= -Nf >& /dev/null'
++ [[ -n gcr.io/apache-beam-testing/beam_portability/beam_flink1.15_job_server:latest ]]
++ echo '-L 8099:localhost:8099 -L 8098:localhost:8098 -L 8097:localhost:8097'
+ local 'job_server_ports_forwarding=-L 8099:localhost:8099 -L 8098:localhost:8098 -L 8097:localhost:8097'
+ local 'tunnel_command=gcloud compute ssh --zone=us-central1-a --quiet yarn@beam-loadtests-python-pardo-flink-batch-1279-m -- -L 8081:10.128.1.55:46241 -L 35665:10.128.1.55:35665 -L 8099:localhost:8099 -L 8098:localhost:8098 -L 8097:localhost:8097 -D 1080  -Nf >& /dev/null'
+ eval gcloud compute ssh --zone=us-central1-a --quiet yarn@beam-loadtests-python-pardo-flink-batch-1279-m -- -L 8081:10.128.1.55:46241 -L 35665:10.128.1.55:35665 -L 8099:localhost:8099 -L 8098:localhost:8098 -L 8097:localhost:8097 -D 1080 -Nf '>&' /dev/null
++ gcloud compute ssh --zone=us-central1-a --quiet yarn@beam-loadtests-python-pardo-flink-batch-1279-m -- -L 8081:10.128.1.55:46241 -L 35665:10.128.1.55:35665 -L 8099:localhost:8099 -L 8098:localhost:8098 -L 8097:localhost:8097 -D 1080 -Nf
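The tunnel command assembled above forwards the local Flink UI port (8081) to the YARN ApplicationMaster, forwards the JobManager RPC port, forwards the three Beam job-server ports, and opens a SOCKS proxy on 1080. A hypothetical Python reconstruction of that argument assembly (function name and defaults are assumptions for illustration):

```python
def tunnel_args(app_master: str, rpc_port: str,
                local_flink_port: str = "8081",
                job_server_ports=("8099", "8098", "8097")) -> list:
    """Build the port-forwarding arguments passed after `--` to
    `gcloud compute ssh`: Flink UI -> ApplicationMaster, JobManager RPC,
    Beam job-server ports, plus a SOCKS proxy on 1080."""
    host = app_master.split(":")[0]
    args = ["-L", f"{local_flink_port}:{app_master}",
            "-L", f"{rpc_port}:{host}:{rpc_port}"]
    for port in job_server_ports:
        args += ["-L", f"{port}:localhost:{port}"]
    args += ["-D", "1080", "-Nf"]
    return args

print(" ".join(tunnel_args("10.128.1.55:46241", "35665")))
```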
[beam_LoadTests_Python_ParDo_Flink_Batch] $ /bin/bash -xe /tmp/jenkins3271435652555297470.sh
+ echo '*** ParDo Python Load test: 20M 100 byte records 10 iterations ***'
*** ParDo Python Load test: 20M 100 byte records 10 iterations ***
[Gradle] - Launching build.
[src] $ <https://ci-beam.apache.org/job/beam_LoadTests_Python_ParDo_Flink_Batch/ws/src/gradlew> -PloadTest.mainClass=apache_beam.testing.load_tests.pardo_test -Prunner=PortableRunner '-PloadTest.args=--job_name=load-tests-python-flink-batch-pardo-1-0407125558 --project=apache-beam-testing --publish_to_big_query=true --metrics_dataset=load_test --metrics_table=python_flink_batch_pardo_1 --influx_measurement=python_batch_pardo_1 --input_options='{"num_records": 20000000,"key_size": 10,"value_size": 90}' --iterations=10 --number_of_counter_operations=0 --number_of_counters=0 --parallelism=5 --job_endpoint=localhost:8099 --environment_type=DOCKER --environment_config=gcr.io/apache-beam-testing/beam_portability/beam_python3.7_sdk:latest --influx_db_name=beam_test_metrics --influx_hostname=http://10.128.0.96:8086 --runner=PortableRunner' -PpythonVersion=3.7 --continue --max-workers=8 -Dorg.gradle.jvmargs=-Xms2g -Dorg.gradle.jvmargs=-Xmx6g -Dorg.gradle.vfs.watch=false -Pdocker-pull-licenses :sdks:python:apache_beam:testing:load_tests:run
To honour the JVM settings for this build a single-use Daemon process will be forked. See https://docs.gradle.org/7.5.1/userguide/gradle_daemon.html#sec:disabling_the_daemon.

FAILURE: Build failed with an exception.

* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the User Manual chapter on the daemon at https://docs.gradle.org/7.5.1/userguide/gradle_daemon.html
Process command line: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Xmx6g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -cp /home/jenkins/.gradle/wrapper/dists/gradle-7.5.1-bin/7jzzequgds1hbszbhq3npc5ng/gradle-7.5.1/lib/gradle-launcher-7.5.1.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 7.5.1
Please read the following process output to find out more:
-----------------------

FAILURE: Build failed with an exception.

* What went wrong:
Timeout waiting to lock daemon addresses registry. It is currently in use by another Gradle instance.
Owner PID: 1065572
Our PID: 1469369
Owner Operation: 
Our operation: 
Lock file: /home/jenkins/.gradle/daemon/7.5.1/registry.bin.lock

* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Exception is:
org.gradle.cache.LockTimeoutException: Timeout waiting to lock daemon addresses registry. It is currently in use by another Gradle instance.
Owner PID: 1065572
Our PID: 1469369
Owner Operation: 
Our operation: 
Lock file: /home/jenkins/.gradle/daemon/7.5.1/registry.bin.lock
        at org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.timeoutException(DefaultFileLockManager.java:344)
        at org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.lock(DefaultFileLockManager.java:305)
        at org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.<init>(DefaultFileLockManager.java:164)
        at org.gradle.cache.internal.DefaultFileLockManager.lock(DefaultFileLockManager.java:110)
        at org.gradle.cache.internal.DefaultFileLockManager.lock(DefaultFileLockManager.java:96)
        at org.gradle.cache.internal.DefaultFileLockManager.lock(DefaultFileLockManager.java:91)
        at org.gradle.cache.internal.OnDemandFileAccess.updateFile(OnDemandFileAccess.java:51)
        at org.gradle.cache.internal.SimpleStateCache.update(SimpleStateCache.java:87)
        at org.gradle.cache.internal.FileIntegrityViolationSuppressingPersistentStateCacheDecorator$1.create(FileIntegrityViolationSuppressingPersistentStateCacheDecorator.java:50)
        at org.gradle.cache.internal.FileIntegrityViolationSuppressingPersistentStateCacheDecorator.doUpdate(FileIntegrityViolationSuppressingPersistentStateCacheDecorator.java:67)
        at org.gradle.cache.internal.FileIntegrityViolationSuppressingPersistentStateCacheDecorator.update(FileIntegrityViolationSuppressingPersistentStateCacheDecorator.java:47)
        at org.gradle.launcher.daemon.registry.PersistentDaemonRegistry.store(PersistentDaemonRegistry.java:232)
        at org.gradle.launcher.daemon.server.DaemonRegistryUpdater.onStart(DaemonRegistryUpdater.java:80)
        at org.gradle.launcher.daemon.server.Daemon.start(Daemon.java:171)
        at org.gradle.launcher.daemon.bootstrap.DaemonMain.doAction(DaemonMain.java:125)
        at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:50)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:60)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:37)
        at org.gradle.launcher.daemon.bootstrap.GradleDaemon.main(GradleDaemon.java:22)


* Get more help at https://help.gradle.org


* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
