[ https://issues.apache.org/jira/browse/FLINK-16750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17072551#comment-17072551 ]

Robert Metzger commented on FLINK-16750:
----------------------------------------

Yes, there were other problems. The issue was that AZP machines have pretty 
large disks (80GB), but only 15GB of that are actually available to the build. 
Since we download Maven artifacts and Docker images, and produce logs etc., 
less than 10% of the disk was free by the time the YARN tests ran.
YARN's disk health checker marks a local dir as bad once disk utilization 
exceeds 90% (the default threshold), so the NodeManagers declared themselves 
"my disk is full" and the test could not schedule anything on those 
NodeManagers.
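For reference, a rough bash sketch of what that health check amounts to. The 
property name and its 90% default are real YARN configuration 
({{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}); 
the directory and threshold values below are illustrative, not taken from this 
build:
{code:bash}
#!/usr/bin/env bash
# Hypothetical approximation of the NodeManager disk health check.
# THRESHOLD mirrors YARN's default
# yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage (90.0).
THRESHOLD=90
# One of the NodeManager local dirs created in the Dockerfile above.
DIR=/hadoop-data/nm-local-dirs

# Extract the "use%" of the filesystem holding $DIR as a bare number.
USED_PCT=$(df --output=pcent "$DIR" | tail -n 1 | tr -dc '0-9')

if [ "$USED_PCT" -ge "$THRESHOLD" ]; then
  echo "local-dir $DIR is ${USED_PCT}% full -> NodeManager would mark it as bad"
fi
{code}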

In the above logs, we seem to have 40GB of disk space available, which should 
be more than enough. I believe we are facing a different issue here.


> Kerberized YARN on Docker test fails due to stalling Hadoop cluster
> -------------------------------------------------------------------
>
>                 Key: FLINK-16750
>                 URL: https://issues.apache.org/jira/browse/FLINK-16750
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Docker, Deployment / YARN, Tests
>            Reporter: Zhijiang
>            Priority: Critical
>              Labels: test-stability
>             Fix For: 1.11.0
>
>
> Build: 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6563&view=results]
> Logs:
> {code:java}
> 2020-03-24T08:48:53.3813297Z 
> ==============================================================================
> 2020-03-24T08:48:53.3814016Z Running 'Running Kerberized YARN on Docker test 
> (custom fs plugin)'
> 2020-03-24T08:48:53.3814511Z 
> ==============================================================================
> 2020-03-24T08:48:53.3827028Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53382133956
> 2020-03-24T08:48:56.1944456Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-24T08:48:56.2300265Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-24T08:48:56.2412349Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-24T08:48:56.2861072Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-24T08:48:56.8025297Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-24T08:48:56.8499071Z Flink Tarball directory 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53382133956
> 2020-03-24T08:48:56.8501170Z Flink tarball filename flink.tar.gz
> 2020-03-24T08:48:56.8502612Z Flink distribution directory name 
> flink-1.11-SNAPSHOT
> 2020-03-24T08:48:56.8504724Z End-to-end directory 
> /home/vsts/work/1/s/flink-end-to-end-tests
> 2020-03-24T08:48:56.8620115Z Building Hadoop Docker container
> 2020-03-24T08:48:56.9117609Z Sending build context to Docker daemon  56.83kB
> 2020-03-24T08:48:56.9117926Z 
> 2020-03-24T08:48:57.0076373Z Step 1/54 : FROM sequenceiq/pam:ubuntu-14.04
> 2020-03-24T08:48:57.0082811Z  ---> df7bea4c5f64
> 2020-03-24T08:48:57.0084798Z Step 2/54 : RUN set -x     && addgroup hadoop    
>  && useradd -d /home/hdfs -ms /bin/bash -G hadoop -p hdfs hdfs     && useradd 
> -d /home/yarn -ms /bin/bash -G hadoop -p yarn yarn     && useradd -d 
> /home/mapred -ms /bin/bash -G hadoop -p mapred mapred     && useradd -d 
> /home/hadoop-user -ms /bin/bash -p hadoop-user hadoop-user
> 2020-03-24T08:48:57.0092833Z  ---> Using cache
> 2020-03-24T08:48:57.0093976Z  ---> 3c12a7d3e20c
> 2020-03-24T08:48:57.0096889Z Step 3/54 : RUN set -x     && apt-get update && 
> apt-get install -y     curl tar sudo openssh-server openssh-client rsync 
> unzip krb5-user
> 2020-03-24T08:48:57.0106188Z  ---> Using cache
> 2020-03-24T08:48:57.0107830Z  ---> 9a59599596be
> 2020-03-24T08:48:57.0110793Z Step 4/54 : RUN set -x     && mkdir -p 
> /var/log/kerberos     && touch /var/log/kerberos/kadmind.log
> 2020-03-24T08:48:57.0118896Z  ---> Using cache
> 2020-03-24T08:48:57.0121035Z  ---> c83551d4f695
> 2020-03-24T08:48:57.0125298Z Step 5/54 : RUN set -x     && rm -f 
> /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_rsa_key /root/.ssh/id_rsa     && 
> ssh-keygen -q -N "" -t dsa -f /etc/ssh/ssh_host_dsa_key     && ssh-keygen -q 
> -N "" -t rsa -f /etc/ssh/ssh_host_rsa_key     && ssh-keygen -q -N "" -t rsa 
> -f /root/.ssh/id_rsa     && cp /root/.ssh/id_rsa.pub 
> /root/.ssh/authorized_keys
> 2020-03-24T08:48:57.0133473Z  ---> Using cache
> 2020-03-24T08:48:57.0134240Z  ---> f69560c2bc0a
> 2020-03-24T08:48:57.0135683Z Step 6/54 : RUN set -x     && mkdir -p 
> /usr/java/default     && curl -Ls 
> 'http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz'
>  -H 'Cookie: oraclelicense=accept-securebackup-cookie' |         tar 
> --strip-components=1 -xz -C /usr/java/default/
> 2020-03-24T08:48:57.0148145Z  ---> Using cache
> 2020-03-24T08:48:57.0149008Z  ---> f824256d72f1
> 2020-03-24T08:48:57.0152616Z Step 7/54 : ENV JAVA_HOME /usr/java/default
> 2020-03-24T08:48:57.0155992Z  ---> Using cache
> 2020-03-24T08:48:57.0160104Z  ---> 770e6bfd219a
> 2020-03-24T08:48:57.0160410Z Step 8/54 : ENV PATH $PATH:$JAVA_HOME/bin
> 2020-03-24T08:48:57.0168690Z  ---> Using cache
> 2020-03-24T08:48:57.0169451Z  ---> 2643e1a25898
> 2020-03-24T08:48:57.0174785Z Step 9/54 : RUN set -x     && curl -LOH 'Cookie: 
> oraclelicense=accept-securebackup-cookie' 
> 'http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip'     && unzip 
> jce_policy-8.zip     && cp /UnlimitedJCEPolicyJDK8/local_policy.jar 
> /UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security
> 2020-03-24T08:48:57.0187797Z  ---> Using cache
> 2020-03-24T08:48:57.0188202Z  ---> 51cf2085f95d
> 2020-03-24T08:48:57.0188467Z Step 10/54 : ARG HADOOP_VERSION=2.8.4
> 2020-03-24T08:48:57.0199344Z  ---> Using cache
> 2020-03-24T08:48:57.0199846Z  ---> d169c15c288c
> 2020-03-24T08:48:57.0200652Z Step 11/54 : ENV HADOOP_URL 
> http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz
> 2020-03-24T08:48:57.0207191Z  ---> Using cache
> 2020-03-24T08:48:57.0207580Z  ---> 08ac89421521
> 2020-03-24T08:48:57.0208300Z Step 12/54 : RUN set -x     && curl -fSL 
> "$HADOOP_URL" -o /tmp/hadoop.tar.gz     && tar -xf /tmp/hadoop.tar.gz -C 
> /usr/local/     && rm /tmp/hadoop.tar.gz*
> 2020-03-24T08:48:57.0217559Z  ---> Using cache
> 2020-03-24T08:48:57.0222452Z  ---> 96f43975850d
> 2020-03-24T08:48:57.0222688Z Step 13/54 : WORKDIR /usr/local
> 2020-03-24T08:48:57.0227121Z  ---> Using cache
> 2020-03-24T08:48:57.0227742Z  ---> d7a373550b3a
> 2020-03-24T08:48:57.0230107Z Step 14/54 : RUN set -x     && ln -s 
> /usr/local/hadoop-${HADOOP_VERSION} /usr/local/hadoop     && chown root:root 
> -R /usr/local/hadoop-${HADOOP_VERSION}/     && chown root:root -R 
> /usr/local/hadoop/     && chown root:yarn 
> /usr/local/hadoop/bin/container-executor     && chmod 6050 
> /usr/local/hadoop/bin/container-executor     && mkdir -p 
> /hadoop-data/nm-local-dirs     && mkdir -p /hadoop-data/nm-log-dirs     && 
> chown yarn:yarn /hadoop-data     && chown yarn:yarn 
> /hadoop-data/nm-local-dirs     && chown yarn:yarn /hadoop-data/nm-log-dirs    
>  && chmod 755 /hadoop-data     && chmod 755 /hadoop-data/nm-local-dirs     && 
> chmod 755 /hadoop-data/nm-log-dirs
> 2020-03-24T08:48:57.0239657Z  ---> Using cache
> 2020-03-24T08:48:57.0240080Z  ---> fe6ee683c47c
> 2020-03-24T08:48:57.0240498Z Step 15/54 : ENV HADOOP_HOME /usr/local/hadoop
> 2020-03-24T08:48:57.0254055Z  ---> Using cache
> 2020-03-24T08:48:57.0255217Z  ---> b2f008722129
> 2020-03-24T08:48:57.0255627Z Step 16/54 : ENV HADOOP_COMMON_HOME 
> /usr/local/hadoop
> 2020-03-24T08:48:57.0265972Z  ---> Using cache
> 2020-03-24T08:48:57.0268289Z  ---> cb9ef72f009c
> 2020-03-24T08:48:57.0268597Z Step 17/54 : ENV HADOOP_HDFS_HOME 
> /usr/local/hadoop
> 2020-03-24T08:48:57.0282380Z  ---> Using cache
> 2020-03-24T08:48:57.0282761Z  ---> 8807470383b6
> 2020-03-24T08:48:57.0283002Z Step 18/54 : ENV HADOOP_MAPRED_HOME 
> /usr/local/hadoop
> 2020-03-24T08:48:57.0290009Z  ---> Using cache
> 2020-03-24T08:48:57.0290398Z  ---> f29af9e468d3
> 2020-03-24T08:48:57.0290643Z Step 19/54 : ENV HADOOP_YARN_HOME 
> /usr/local/hadoop
> 2020-03-24T08:48:57.0301335Z  ---> Using cache
> 2020-03-24T08:48:57.0303437Z  ---> a40b81c07751
> 2020-03-24T08:48:57.0303940Z Step 20/54 : ENV HADOOP_CONF_DIR 
> /usr/local/hadoop/etc/hadoop
> 2020-03-24T08:48:57.0312152Z  ---> Using cache
> 2020-03-24T08:48:57.0312724Z  ---> 3ddde8f321fc
> 2020-03-24T08:48:57.0313016Z Step 21/54 : ENV YARN_CONF_DIR 
> /usr/local/hadoop/etc/hadoop
> 2020-03-24T08:48:57.0324346Z  ---> Using cache
> 2020-03-24T08:48:57.0326658Z  ---> e7c5c87c78ae
> 2020-03-24T08:48:57.0326945Z Step 22/54 : ENV HADOOP_LOG_DIR /var/log/hadoop
> 2020-03-24T08:48:57.0335638Z  ---> Using cache
> 2020-03-24T08:48:57.0336303Z  ---> 19fa74b6ddf5
> 2020-03-24T08:48:57.0336627Z Step 23/54 : ENV HADOOP_BIN_HOME $HADOOP_HOME/bin
> 2020-03-24T08:48:57.0347178Z  ---> Using cache
> 2020-03-24T08:48:57.0347569Z  ---> 0e61cb4baa17
> 2020-03-24T08:48:57.0347853Z Step 24/54 : ENV PATH $PATH:$HADOOP_BIN_HOME
> 2020-03-24T08:48:57.0357957Z  ---> Using cache
> 2020-03-24T08:48:57.0359318Z  ---> 1132f6bea4b0
> 2020-03-24T08:48:57.0359590Z Step 25/54 : ENV KRB_REALM EXAMPLE.COM
> 2020-03-24T08:48:57.0375069Z  ---> Using cache
> 2020-03-24T08:48:57.0375659Z  ---> eb4310843bad
> 2020-03-24T08:48:57.0376107Z Step 26/54 : ENV DOMAIN_REALM example.com
> 2020-03-24T08:48:57.0388001Z  ---> Using cache
> 2020-03-24T08:48:57.0388620Z  ---> 76f551f2be74
> 2020-03-24T08:48:57.0388884Z Step 27/54 : ENV KERBEROS_ADMIN admin/admin
> 2020-03-24T08:48:57.0400652Z  ---> Using cache
> 2020-03-24T08:48:57.0401285Z  ---> f5873d8f1421
> 2020-03-24T08:48:57.0401533Z Step 28/54 : ENV KERBEROS_ADMIN_PASSWORD admin
> 2020-03-24T08:48:57.0411164Z  ---> Using cache
> 2020-03-24T08:48:57.0411557Z  ---> 36d1d14c6f5e
> 2020-03-24T08:48:57.0411803Z Step 29/54 : ENV KEYTAB_DIR /etc/security/keytabs
> 2020-03-24T08:48:57.0426178Z  ---> Using cache
> 2020-03-24T08:48:57.0426591Z  ---> d42e99aee9ae
> 2020-03-24T08:48:57.0426863Z Step 30/54 : RUN mkdir /var/log/hadoop
> 2020-03-24T08:48:57.0439458Z  ---> Using cache
> 2020-03-24T08:48:57.0439952Z  ---> 9e9595ce01fd
> 2020-03-24T08:48:57.0440478Z Step 31/54 : ADD config/core-site.xml 
> $HADOOP_HOME/etc/hadoop/core-site.xml
> 2020-03-24T08:48:57.0451866Z  ---> Using cache
> 2020-03-24T08:48:57.0452282Z  ---> 099e5f6c7f65
> 2020-03-24T08:48:57.0453642Z Step 32/54 : ADD config/hdfs-site.xml 
> $HADOOP_HOME/etc/hadoop/hdfs-site.xml
> 2020-03-24T08:48:57.0466808Z  ---> Using cache
> 2020-03-24T08:48:57.0467397Z  ---> fce74a42eb4b
> 2020-03-24T08:48:57.0467880Z Step 33/54 : ADD config/mapred-site.xml 
> $HADOOP_HOME/etc/hadoop/mapred-site.xml
> 2020-03-24T08:48:57.0480297Z  ---> Using cache
> 2020-03-24T08:48:57.0480722Z  ---> 9fb3ffe76409
> 2020-03-24T08:48:57.0481203Z Step 34/54 : ADD config/yarn-site.xml 
> $HADOOP_HOME/etc/hadoop/yarn-site.xml
> 2020-03-24T08:48:57.0494140Z  ---> Using cache
> 2020-03-24T08:48:57.0494795Z  ---> 3320ebae06da
> 2020-03-24T08:48:57.0495379Z Step 35/54 : ADD config/container-executor.cfg 
> $HADOOP_HOME/etc/hadoop/container-executor.cfg
> 2020-03-24T08:48:57.0511280Z  ---> Using cache
> 2020-03-24T08:48:57.0511808Z  ---> 05140847c212
> 2020-03-24T08:48:57.0512075Z Step 36/54 : ADD config/krb5.conf /etc/krb5.conf
> 2020-03-24T08:48:57.0519329Z  ---> Using cache
> 2020-03-24T08:48:57.0519738Z  ---> 25e1c2e8f1a3
> 2020-03-24T08:48:57.0520220Z Step 37/54 : ADD config/ssl-server.xml 
> $HADOOP_HOME/etc/hadoop/ssl-server.xml
> 2020-03-24T08:48:57.0530068Z  ---> Using cache
> 2020-03-24T08:48:57.0531115Z  ---> 70b679e30d8f
> 2020-03-24T08:48:57.0531614Z Step 38/54 : ADD config/ssl-client.xml 
> $HADOOP_HOME/etc/hadoop/ssl-client.xml
> 2020-03-24T08:48:57.0543078Z  ---> Using cache
> 2020-03-24T08:48:57.0543489Z  ---> 845390e7b152
> 2020-03-24T08:48:57.0543940Z Step 39/54 : ADD config/keystore.jks 
> $HADOOP_HOME/lib/keystore.jks
> 2020-03-24T08:48:57.0559208Z  ---> Using cache
> 2020-03-24T08:48:57.0559624Z  ---> 1e6832e790ae
> 2020-03-24T08:48:57.0561034Z Step 40/54 : RUN set -x     && chmod 400 
> $HADOOP_HOME/etc/hadoop/container-executor.cfg     && chown root:yarn 
> $HADOOP_HOME/etc/hadoop/container-executor.cfg
> 2020-03-24T08:48:57.0570803Z  ---> Using cache
> 2020-03-24T08:48:57.0571279Z  ---> fb3ccf7f2ddf
> 2020-03-24T08:48:57.0583483Z Step 41/54 : ADD config/ssh_config 
> /root/.ssh/config
> 2020-03-24T08:48:57.0585160Z  ---> Using cache
> 2020-03-24T08:48:57.0585589Z  ---> a28602396090
> 2020-03-24T08:48:57.0593011Z Step 42/54 : RUN set -x     && chmod 600 
> /root/.ssh/config     && chown root:root /root/.ssh/config
> 2020-03-24T08:48:57.0604119Z  ---> Using cache
> 2020-03-24T08:48:57.0604615Z  ---> c2bf58d06d91
> 2020-03-24T08:48:57.0608303Z Step 43/54 : RUN set -x     && ls -la 
> /usr/local/hadoop/etc/hadoop/*-env.sh     && chmod +x 
> /usr/local/hadoop/etc/hadoop/*-env.sh     && ls -la 
> /usr/local/hadoop/etc/hadoop/*-env.sh
> 2020-03-24T08:48:57.0621126Z  ---> Using cache
> 2020-03-24T08:48:57.0621985Z  ---> 505c2265d857
> 2020-03-24T08:48:57.0623404Z Step 44/54 : RUN set -x     && sed  -i 
> "/^[^#]*UsePAM/ s/.*/#&/"  /etc/ssh/sshd_config     && echo "UsePAM no" >> 
> /etc/ssh/sshd_config     && echo "Port 2122" >> /etc/ssh/sshd_config
> 2020-03-24T08:48:57.0636513Z  ---> Using cache
> 2020-03-24T08:48:57.0637000Z  ---> a6bdf40286cd
> 2020-03-24T08:48:57.0637693Z Step 45/54 : EXPOSE 50470 9000 50010 50020 50070 
> 50075 50090 50475 50091 8020
> 2020-03-24T08:48:57.0642927Z  ---> Using cache
> 2020-03-24T08:48:57.0643330Z  ---> 1e1811dee129
> 2020-03-24T08:48:57.0643565Z Step 46/54 : EXPOSE 19888
> 2020-03-24T08:48:57.0662359Z  ---> Using cache
> 2020-03-24T08:48:57.0662744Z  ---> 84ac2b50e182
> 2020-03-24T08:48:57.0663074Z Step 47/54 : EXPOSE 8030 8031 8032 8033 8040 
> 8042 8088 8188
> 2020-03-24T08:48:57.0669101Z  ---> Using cache
> 2020-03-24T08:48:57.0669455Z  ---> e9fb1d01d351
> 2020-03-24T08:48:57.0669674Z Step 48/54 : EXPOSE 49707 2122
> 2020-03-24T08:48:57.0686164Z  ---> Using cache
> 2020-03-24T08:48:57.0686531Z  ---> 7eab694ca1ff
> 2020-03-24T08:48:57.0686768Z Step 49/54 : ADD bootstrap.sh /etc/bootstrap.sh
> 2020-03-24T08:48:57.0699979Z  ---> Using cache
> 2020-03-24T08:48:57.0700602Z  ---> 84a17e5aeb61
> 2020-03-24T08:48:57.0707198Z Step 50/54 : RUN chown root:root 
> /etc/bootstrap.sh
> 2020-03-24T08:48:57.0712464Z  ---> Using cache
> 2020-03-24T08:48:57.0713356Z  ---> e8f0d0747d96
> 2020-03-24T08:48:57.0713607Z Step 51/54 : RUN chmod 700 /etc/bootstrap.sh
> 2020-03-24T08:48:57.0725383Z  ---> Using cache
> 2020-03-24T08:48:57.0725744Z  ---> a3925ed49b38
> 2020-03-24T08:48:57.0725996Z Step 52/54 : ENV BOOTSTRAP /etc/bootstrap.sh
> 2020-03-24T08:48:57.0737194Z  ---> Using cache
> 2020-03-24T08:48:57.0737575Z  ---> 03c6eb94061b
> 2020-03-24T08:48:57.0737841Z Step 53/54 : ENTRYPOINT ["/etc/bootstrap.sh"]
> 2020-03-24T08:48:57.0771133Z  ---> Using cache
> 2020-03-24T08:48:57.0823069Z  ---> 38b3364feb31
> 2020-03-24T08:48:57.0823496Z Step 54/54 : CMD ["-h"]
> 2020-03-24T08:48:57.0824017Z  ---> Using cache
> 2020-03-24T08:48:57.0824353Z  ---> 6a0ea5b176da
> 2020-03-24T08:48:57.0824554Z Successfully built 6a0ea5b176da
> 2020-03-24T08:48:57.0892074Z Successfully tagged 
> flink/docker-hadoop-secure-cluster:latest
> 2020-03-24T08:48:57.0910678Z Starting Hadoop cluster
> 2020-03-24T08:48:57.6739281Z Creating network "docker-hadoop-cluster-network" 
> with the default driver
> 2020-03-24T08:48:57.8449099Z Creating kdc ... 
> 2020-03-24T08:49:02.0113435Z 
> 2020-03-24T08:49:02.0113912Z Creating kdc ... done
> 2020-03-24T08:49:02.0189918Z Creating master ... 
> 2020-03-24T08:49:02.9580010Z 
> 2020-03-24T08:49:02.9581843Z Creating master ... done
> 2020-03-24T08:49:02.9777185Z Creating slave1 ... 
> 2020-03-24T08:49:02.9842419Z Creating slave2 ... 
> 2020-03-24T08:49:04.1086391Z 
> 2020-03-24T08:49:04.1088004Z Creating slave1 ... done
> 2020-03-24T08:49:05.2410969Z 
> 2020-03-24T08:49:05.2411416Z Creating slave2 ... done
> 2020-03-24T08:49:05.3990290Z Waiting for hadoop cluster to come up. We 
> have been trying for 0 seconds, retrying ...
> 2020-03-24T08:49:10.5008535Z Waiting for hadoop cluster to come up. We have 
> been trying for 5 seconds, retrying ...
> 2020-03-24T08:49:15.5982809Z Waiting for hadoop cluster to come up. We have 
> been trying for 10 seconds, retrying ...
> 2020-03-24T08:49:20.7037451Z Waiting for hadoop cluster to come up. We have 
> been trying for 15 seconds, retrying ...
> 2020-03-24T08:49:26.2425385Z Waiting for hadoop cluster to come up. We have 
> been trying for 20 seconds, retrying ...
> 2020-03-24T08:49:30.8851829Z Waiting for hadoop cluster to come up. We have 
> been trying for 25 seconds, retrying ...
> 2020-03-24T08:49:35.9631385Z Waiting for hadoop cluster to come up. We have 
> been trying for 30 seconds, retrying ...
> 2020-03-24T08:49:41.0247910Z Waiting for hadoop cluster to come up. We have 
> been trying for 36 seconds, retrying ...
> 2020-03-24T08:49:46.0842495Z Waiting for hadoop cluster to come up. We have 
> been trying for 41 seconds, retrying ...
> 2020-03-24T08:49:51.1365516Z Waiting for hadoop cluster to come up. We have 
> been trying for 46 seconds, retrying ...
> 2020-03-24T08:49:56.2420482Z Waiting for hadoop cluster to come up. We have 
> been trying for 51 seconds, retrying ...
> 2020-03-24T08:50:01.3742474Z Waiting for hadoop cluster to come up. We have 
> been trying for 56 seconds, retrying ...
> 2020-03-24T08:50:06.4544741Z Waiting for hadoop cluster to come up. We have 
> been trying for 61 seconds, retrying ...
> 2020-03-24T08:50:11.5222600Z Waiting for hadoop cluster to come up. We have 
> been trying for 66 seconds, retrying ...
> 2020-03-24T08:50:16.6059130Z Waiting for hadoop cluster to come up. We have 
> been trying for 71 seconds, retrying ...
> 2020-03-24T08:50:21.6991616Z Waiting for hadoop cluster to come up. We have 
> been trying for 76 seconds, retrying ...
> 2020-03-24T08:50:26.7822382Z Waiting for hadoop cluster to come up. We have 
> been trying for 81 seconds, retrying ...
> 2020-03-24T08:50:31.8510300Z Waiting for hadoop cluster to come up. We have 
> been trying for 86 seconds, retrying ...
> 2020-03-24T08:50:36.9395910Z Waiting for hadoop cluster to come up. We have 
> been trying for 91 seconds, retrying ...
> 2020-03-24T08:50:42.0380732Z Waiting for hadoop cluster to come up. We have 
> been trying for 97 seconds, retrying ...
> 2020-03-24T08:50:47.1131270Z Waiting for hadoop cluster to come up. We have 
> been trying for 102 seconds, retrying ...
> 2020-03-24T08:50:52.1891377Z Waiting for hadoop cluster to come up. We have 
> been trying for 107 seconds, retrying ...
> 2020-03-24T08:50:57.2581160Z Waiting for hadoop cluster to come up. We have 
> been trying for 112 seconds, retrying ...
> 2020-03-24T08:51:02.3508713Z Waiting for hadoop cluster to come up. We have 
> been trying for 117 seconds, retrying ...
> 2020-03-24T08:51:07.4743652Z Command: start_hadoop_cluster failed. Retrying...
> 2020-03-24T08:51:07.4745108Z Starting Hadoop cluster
> 2020-03-24T08:51:08.2718568Z kdc is up-to-date
> 2020-03-24T08:51:08.2725012Z master is up-to-date
> 2020-03-24T08:51:08.2733408Z slave2 is up-to-date
> 2020-03-24T08:51:08.2736954Z slave1 is up-to-date
> 2020-03-24T08:51:08.3947596Z Waiting for hadoop cluster to come up. We have 
> been trying for 0 seconds, retrying ...
> 2020-03-24T08:51:13.5303970Z Waiting for hadoop cluster to come up. We have 
> been trying for 5 seconds, retrying ...
> 2020-03-24T08:51:18.6516317Z Waiting for hadoop cluster to come up. We have 
> been trying for 10 seconds, retrying ...
> 2020-03-24T08:51:23.7639624Z Waiting for hadoop cluster to come up. We have 
> been trying for 15 seconds, retrying ...
> 2020-03-24T08:51:28.8441232Z Waiting for hadoop cluster to come up. We have 
> been trying for 20 seconds, retrying ...
> 2020-03-24T08:51:33.9296121Z Waiting for hadoop cluster to come up. We have 
> been trying for 25 seconds, retrying ...
> 2020-03-24T08:51:38.9915568Z Waiting for hadoop cluster to come up. We have 
> been trying for 30 seconds, retrying ...
> 2020-03-24T08:51:44.1440391Z Waiting for hadoop cluster to come up. We have 
> been trying for 36 seconds, retrying ...
> 2020-03-24T08:51:49.2617758Z Waiting for hadoop cluster to come up. We have 
> been trying for 41 seconds, retrying ...
> 2020-03-24T08:51:54.3844462Z Waiting for hadoop cluster to come up. We have 
> been trying for 46 seconds, retrying ...
> 2020-03-24T08:51:59.5216946Z Waiting for hadoop cluster to come up. We have 
> been trying for 51 seconds, retrying ...
> 2020-03-24T08:52:04.6230187Z Waiting for hadoop cluster to come up. We have 
> been trying for 56 seconds, retrying ...
> 2020-03-24T08:52:09.7220391Z Waiting for hadoop cluster to come up. We have 
> been trying for 61 seconds, retrying ...
> 2020-03-24T08:52:15.3140656Z Waiting for hadoop cluster to come up. We have 
> been trying for 66 seconds, retrying ...
> 2020-03-24T08:52:19.9193415Z Waiting for hadoop cluster to come up. We have 
> been trying for 71 seconds, retrying ...
> 2020-03-24T08:52:25.0348418Z Waiting for hadoop cluster to come up. We have 
> been trying for 77 seconds, retrying ...
> 2020-03-24T08:52:30.1658755Z Waiting for hadoop cluster to come up. We have 
> been trying for 82 seconds, retrying ...
> 2020-03-24T08:52:35.2446996Z Waiting for hadoop cluster to come up. We have 
> been trying for 87 seconds, retrying ...
> 2020-03-24T08:52:40.4010037Z Waiting for hadoop cluster to come up. We have 
> been trying for 92 seconds, retrying ...
> 2020-03-24T08:52:45.5485657Z Waiting for hadoop cluster to come up. We have 
> been trying for 97 seconds, retrying ...
> 2020-03-24T08:52:50.6730841Z Waiting for hadoop cluster to come up. We have 
> been trying for 102 seconds, retrying ...
> 2020-03-24T08:52:55.7893643Z Waiting for hadoop cluster to come up. We have 
> been trying for 107 seconds, retrying ...
> 2020-03-24T08:53:00.9478818Z Waiting for hadoop cluster to come up. We have 
> been trying for 112 seconds, retrying ...
> 2020-03-24T08:53:06.0800169Z Waiting for hadoop cluster to come up. We have 
> been trying for 118 seconds, retrying ...
> 2020-03-24T08:53:11.2323229Z Command: start_hadoop_cluster failed. Retrying...
> 2020-03-24T08:53:11.2338512Z Starting Hadoop cluster
> 2020-03-24T08:53:11.9390009Z kdc is up-to-date
> 2020-03-24T08:53:11.9399273Z master is up-to-date
> 2020-03-24T08:53:11.9406393Z slave1 is up-to-date
> 2020-03-24T08:53:11.9410878Z slave2 is up-to-date
> 2020-03-24T08:53:12.1021795Z Waiting for hadoop cluster to come up. We have 
> been trying for 1 seconds, retrying ...
> 2020-03-24T08:53:17.1752513Z Waiting for hadoop cluster to come up. We have 
> been trying for 6 seconds, retrying ...
> 2020-03-24T08:53:22.3236055Z Waiting for hadoop cluster to come up. We have 
> been trying for 11 seconds, retrying ...
> 2020-03-24T08:53:27.5275023Z Waiting for hadoop cluster to come up. We have 
> been trying for 16 seconds, retrying ...
> 2020-03-24T08:53:32.6631513Z Waiting for hadoop cluster to come up. We have 
> been trying for 21 seconds, retrying ...
> 2020-03-24T08:53:37.7841989Z Waiting for hadoop cluster to come up. We have 
> been trying for 26 seconds, retrying ...
> 2020-03-24T08:53:42.9402545Z Waiting for hadoop cluster to come up. We have 
> been trying for 31 seconds, retrying ...
> 2020-03-24T08:53:48.0328066Z Waiting for hadoop cluster to come up. We have 
> been trying for 37 seconds, retrying ...
> 2020-03-24T08:53:53.2066746Z Waiting for hadoop cluster to come up. We have 
> been trying for 42 seconds, retrying ...
> 2020-03-24T08:53:58.3635830Z Waiting for hadoop cluster to come up. We have 
> been trying for 47 seconds, retrying ...
> 2020-03-24T08:54:03.5110570Z Waiting for hadoop cluster to come up. We have 
> been trying for 52 seconds, retrying ...
> 2020-03-24T08:54:08.6926874Z Waiting for hadoop cluster to come up. We have 
> been trying for 57 seconds, retrying ...
> 2020-03-24T08:54:13.8639022Z Waiting for hadoop cluster to come up. We have 
> been trying for 62 seconds, retrying ...
> 2020-03-24T08:54:19.0404467Z Waiting for hadoop cluster to come up. We have 
> been trying for 68 seconds, retrying ...
> 2020-03-24T08:54:24.1378591Z Waiting for hadoop cluster to come up. We have 
> been trying for 73 seconds, retrying ...
> 2020-03-24T08:54:29.2954846Z Waiting for hadoop cluster to come up. We have 
> been trying for 78 seconds, retrying ...
> 2020-03-24T08:54:34.4262440Z Waiting for hadoop cluster to come up. We have 
> been trying for 83 seconds, retrying ...
> 2020-03-24T08:54:39.5658523Z Waiting for hadoop cluster to come up. We have 
> been trying for 88 seconds, retrying ...
> 2020-03-24T08:54:44.7203280Z Waiting for hadoop cluster to come up. We have 
> been trying for 93 seconds, retrying ...
> 2020-03-24T08:54:49.8942413Z Waiting for hadoop cluster to come up. We have 
> been trying for 98 seconds, retrying ...
> 2020-03-24T08:54:55.0230760Z Waiting for hadoop cluster to come up. We have 
> been trying for 104 seconds, retrying ...
> 2020-03-24T08:55:00.1042970Z Waiting for hadoop cluster to come up. We have 
> been trying for 109 seconds, retrying ...
> 2020-03-24T08:55:05.3062066Z Waiting for hadoop cluster to come up. We have 
> been trying for 114 seconds, retrying ...
> 2020-03-24T08:55:10.4777123Z Waiting for hadoop cluster to come up. We have 
> been trying for 119 seconds, retrying ...
> 2020-03-24T08:55:15.6283207Z Command: start_hadoop_cluster failed. Retrying...
> 2020-03-24T08:55:15.6311726Z Command: start_hadoop_cluster failed 3 times.
> 2020-03-24T08:55:15.6312657Z ERROR: Could not start hadoop cluster. 
> Aborting...
> 2020-03-24T08:55:16.2261690Z Stopping slave2 ... 
> 2020-03-24T08:55:16.2264733Z Stopping slave1 ... 
> 2020-03-24T08:55:16.2266876Z Stopping master ... 
> 2020-03-24T08:55:16.2269543Z Stopping kdc    ... 
> 2020-03-24T08:55:27.2063580Z 
> 2020-03-24T08:55:27.2065065Z Stopping slave1 ... done
> 2020-03-24T08:55:27.2065797Z 
> 2020-03-24T08:55:27.2066403Z Stopping slave2 ... done
> 2020-03-24T08:55:37.4761055Z 
> 2020-03-24T08:55:37.4762182Z Stopping master ... done
> 2020-03-24T08:55:47.7228995Z 
> 2020-03-24T08:55:47.7230042Z Stopping kdc    ... done
> 2020-03-24T08:55:47.7634469Z Removing slave2 ... 
> 2020-03-24T08:55:47.7635546Z Removing slave1 ... 
> 2020-03-24T08:55:47.7635808Z Removing master ... 
> 2020-03-24T08:55:47.7636190Z Removing kdc    ... 
> 2020-03-24T08:55:47.7832191Z 
> 2020-03-24T08:55:47.7833067Z Removing master ... done
> 2020-03-24T08:55:47.7905052Z 
> 2020-03-24T08:55:47.7905828Z Removing slave1 ... done
> 2020-03-24T08:55:47.7909474Z 
> 2020-03-24T08:55:47.7910596Z Removing slave2 ... done
> 2020-03-24T08:55:47.8035403Z 
> 2020-03-24T08:55:47.8036069Z Removing kdc    ... done
> 2020-03-24T08:55:47.8036506Z Removing network 
> docker-hadoop-cluster-network
> 2020-03-24T08:55:47.9130868Z rm: cannot remove 
> '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53382133956/flink.tar.gz':
>  No such file or directory
> 2020-03-24T08:55:47.9134301Z [FAIL] Test script contains errors.
> 2020-03-24T08:55:47.9141146Z Checking for errors...
> 2020-03-24T08:55:47.9287104Z No errors in log files.
> 2020-03-24T08:55:47.9395651Z Checking for exceptions...
> 2020-03-24T08:55:47.9457646Z No exceptions in log files.
> 2020-03-24T08:55:47.9458492Z Checking for non-empty .out files...
> 2020-03-24T08:55:47.9473048Z grep: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/log/*.out:
>  No such file or directory
> 2020-03-24T08:55:47.9477287Z No non-empty .out files.
> 2020-03-24T08:55:47.9477632Z 
> 2020-03-24T08:55:47.9478361Z [FAIL] 'Running Kerberized YARN on Docker test 
> (custom fs plugin)' failed after 6 minutes and 51 seconds! Test exited with 
> exit code 1
> 2020-03-24T08:55:47.9479020Z 
> 2020-03-24T08:55:48.2388098Z No taskexecutor daemon to stop on host fv-az661.
> 2020-03-24T08:55:48.4589272Z No standalonesession daemon to stop on host 
> fv-az661.
> 2020-03-24T08:55:48.8998885Z 
> 2020-03-24T08:55:48.9104592Z ##[error]Bash exited with code '1'.
> 2020-03-24T08:55:48.9120831Z ##[section]Finishing: Run e2e tests{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
