[jira] [Created] (HADOOP-19064) [thirdparty] add -mvnargs option to create-release command line

2024-02-01 Thread Shilun Fan (Jira)
Shilun Fan created HADOOP-19064:
---

 Summary: [thirdparty]  add -mvnargs option to create-release 
command line
 Key: HADOOP-19064
 URL: https://issues.apache.org/jira/browse/HADOOP-19064
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: hadoop-thirdparty
Affects Versions: thirdparty-1.3.0
Reporter: Shilun Fan
Assignee: Shilun Fan


We need create-release to support passing extra arguments to mvn. This JIRA is 
similar to HADOOP-18198.

I use this option to pass a custom user.home to the maven build.
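
A minimal sketch of how this could be wired into dev-support/bin/create-release, 
modeled on the HADOOP-18198 change to Hadoop's own create-release; the option 
and variable names below are illustrative, not the final patch.

{code:bash}
# Sketch only: parse a --mvnargs option and forward its value to every mvn
# invocation made by create-release.
MVN_ARGS=""
for i in "$@"; do
  case ${i} in
    --mvnargs=*)
      MVN_ARGS="${i#*=}"
    ;;
  esac
done

# Later, wherever create-release runs mvn:
# shellcheck disable=SC2086
mvn ${MVN_ARGS} clean install

# Example: pass a custom user.home to the maven build.
dev-support/bin/create-release --mvnargs="-Duser.home=/home/jenkins"
{code}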






[jira] [Created] (HADOOP-19063) [thirdparty] Fix the docker image setuptools-scm && lazy-object-proxy

2024-02-01 Thread Shilun Fan (Jira)
Shilun Fan created HADOOP-19063:
---

 Summary: [thirdparty] Fix the docker image setuptools-scm && 
lazy-object-proxy
 Key: HADOOP-19063
 URL: https://issues.apache.org/jira/browse/HADOOP-19063
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: hadoop-thirdparty
Affects Versions: thirdparty-1.3.0
Reporter: Shilun Fan
Assignee: Shilun Fan


setuptools-scm and lazy-object-proxy have been upgraded many times upstream, and 
installing them via pip without pinning a version reports errors. We need to pin 
the versions we install.
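
For reference, the pinned versions that appeared in the 1.2.0 Dockerfile (see 
the diff quoted in the Hadoop Thirdparty 1.2.0 RC0 vote thread below) show the 
shape of the fix; the versions pinned for thirdparty-1.3.0 may differ.

{code:bash}
# Dockerfile fragment: pin both packages instead of letting pip resolve the
# latest releases. The versions shown are the ones from the 1.2.0 RC0 diff.
RUN pip2 install setuptools-scm==5.0.2
RUN pip2 install lazy-object-proxy==1.5.0
{code}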






[jira] [Created] (HADOOP-19062) Improve create-release RUN script

2024-02-01 Thread Shilun Fan (Jira)
Shilun Fan created HADOOP-19062:
---

 Summary: Improve create-release RUN script
 Key: HADOOP-19062
 URL: https://issues.apache.org/jira/browse/HADOOP-19062
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: hadoop-thirdparty
Affects Versions: 3.5.0
Reporter: Shilun Fan
Assignee: Shilun Fan


Running create-release builds a Docker image locally, but three of the RUN 
instructions in the generated Dockerfile may fail:

1. RUN groupadd --non-unique -g 0 root
{code:java}
=> ERROR [16/20] RUN groupadd --non-unique -g 0 root
0.2s
--
 > [16/20] RUN groupadd --non-unique -g 0 root:
0.154 groupadd: group 'root' already exists
--
Dockerfile:100

  98 |
  99 | LABEL org.apache.hadoop.create-release="cr-19697"
 100 | >>> RUN groupadd --non-unique -g 0 root
 101 | RUN useradd -g 0 -u 0 -m root
 102 | RUN chown -R root /home/root
{code}
2. RUN useradd -g 0 -u 0 -m root
{code:java}
 > [17/20] RUN useradd -g 0 -u 0 -m root:
0.165 useradd: user 'root' already exists
--
Dockerfile:101

  99 | LABEL org.apache.hadoop.create-release="cr-12068"
 100 | RUN groupadd --non-unique -g 0 root; exit 0;
 101 | >>> RUN useradd -g 0 -u 0 -m root
 102 | RUN chown -R root /home/root
 103 | ENV HOME /home/root
{code}
3. RUN chown -R root /home/root
{code:java}
 > [18/20] RUN chown -R root /home/root:
0.168 chown: cannot access '/home/root': No such file or directory
--
Dockerfile:102

 100 | RUN groupadd --non-unique -g 0 root; exit 0;
 101 | RUN useradd -g 0 -u 0 -m root; exit 0;
 102 | >>> RUN chown -R root /home/root
 103 | ENV HOME /home/root
 104 | RUN mkdir -p /maven

{code}
Even if these three commands fail, the subsequent steps can still be executed, 
so I appended "; exit 0;" to each of them.
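
In the generated Dockerfile this corresponds to the following create-release 
lines (matching the diff that was posted in the Hadoop Thirdparty 1.2.0 RC0 
vote thread):

{code:bash}
# create-release echoes these RUN instructions into the Dockerfile; the
# trailing "; exit 0;" lets the image build continue when the group or user
# already exists, or when /home/${user_name} does not exist yet.
echo "RUN groupadd --non-unique -g ${group_id} ${user_name}; exit 0;"
echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name}; exit 0;"
echo "RUN chown -R ${user_name} /home/${user_name}; exit 0;"
{code}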






Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 RC0

2024-02-01 Thread slfan1989
Thank you very much for the review! I will make sure the next RC does not have 
this diff between the git tag and the source tarball.

Best Regards,
Shilun Fan.

On Fri, Feb 2, 2024 at 9:59 AM Takanobu Asanuma  wrote:

> It also looks good to me, except for the diff.
>
> * Verified signatures and hashes
> * Reviewed the documents
> * Successfully built from source with `mvn clean install`
> * Successfully compiled Hadoop trunk and branch-3.4 using the Hadoop
> thirdparty 1.2.0
>
> Anyway, since hadoop-thirdparty-1.1.1 has some high vulnerabilities,
> hadoop-thirdparty-1.2.0 would be required for Hadoop-3.4.0.
>
> Thanks,
> - Takanobu
>
> Feb 2, 2024 (Fri) 4:45 slfan1989 :
>
> > Thank you for helping to review Hadoop-Thirdparty-1.2.0-RC0 and providing
> > feedback!
> >
> > I followed the "how to release" documentation and tried to package it
> using
> > create-release and Dockerfile, but I couldn't successfully package it
> > directly. Some modifications are required before compilation. I should
> > submit a pull request to fix this issue before
> > Hadoop-Thirdparty-1.2.0-RC0 compile.
> >
> > This is an area that needs improvement. We should ensure that the code of
> > src is consistent with the tag.
> >
> > On Fri, Feb 2, 2024 at 2:25 AM Ayush Saxena  wrote:
> >
> > >
> > > There is some diff b/w the git tag & the src tar, the Dockerfile & the
> > > create-release are different, Why?
> > >
> > > Files hadoop-thirdparty/dev-support/bin/create-release and
> > > hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release differ
> > >
> > > Files hadoop-thirdparty/dev-support/docker/Dockerfile and
> > > hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile differ
> > >
> > >
> > > ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
> > > hadoop-thirdparty/dev-support/bin/create-release
> > > hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release
> > >
> > > 444,446c444,446
> > >
> > > < echo "RUN groupadd --non-unique -g ${group_id} ${user_name}"
> > >
> > > < echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name}"
> > >
> > > < echo "RUN chown -R ${user_name} /home/${user_name}"
> > >
> > > ---
> > >
> > > > echo "RUN groupadd --non-unique -g ${group_id} ${user_name}; exit
> > > 0;"
> > >
> > > > echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name};
> > > exit 0;"
> > >
> > > > echo "RUN chown -R ${user_name} /home/${user_name}; exit 0;"
> > >
> > > ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
> > > hadoop-thirdparty/dev-support/docker/Dockerfile
> > > hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile
> > >
> > > 103a104,105
> > >
> > > > RUN rm -f /etc/maven/settings.xml && ln -s
> /home/root/.m2/settings.xml
> > > /etc/maven/settings.xml
> > >
> > > >
> > >
> > > 126a129,130
> > >
> > > > RUN pip2 install setuptools-scm==5.0.2
> > >
> > > > RUN pip2 install lazy-object-proxy==1.5.0
> > >
> > > 159d162
> > >
> > > <
> > >
> > >
> > >
> > >
> > > Other things look Ok,
> > > * Built from source
> > > * Verified Checksums
> > > * Verified Signatures
> > > * Validated files have ASF header
> > >
> > > Not sure if having diff b/w the git tag & src tar is ok, this doesn't
> > look
> > > like core code change though, can anybody check & confirm?
> > >
> > > -Ayush
> > >
> > >
> > > On Thu, 1 Feb 2024 at 13:39, Xiaoqiao He 
> wrote:
> > >
> > >> Gentle ping. @Ayush Saxena  @Steve Loughran
> > >>  @inigo...@apache.org 
> > >> @Masatake
> > >> Iwasaki  and some other folks.
> > >>
> > >> On Wed, Jan 31, 2024 at 10:17 AM slfan1989 
> > wrote:
> > >>
> > >> > Thank you for the review and vote! Looking forward to other folks
> > >> helping
> > >> > with voting and verification.
> > >> >
> > >> > Best Regards,
> > >> > Shilun Fan.
> > >> >
> > >> > On Tue, Jan 30, 2024 at 6:20 PM Xiaoqiao He 
> > >> wrote:
> > >> >
> > >> > > Thanks Shilun for driving it and making it happen.
> > >> > >
> > >> > > +1(binding).
> > >> > >
> > >> > > [x] Checksums and PGP signatures are valid.
> > >> > > [x] LICENSE files exist.
> > >> > > [x] NOTICE is included.
> > >> > > [x] Rat check is ok. `mvn clean apache-rat:check`
> > >> > > [x] Built from source works well: `mvn clean install`
> > >> > > [x] Built Hadoop trunk with updated thirdparty successfully
> (include
> > >> > update
> > >> > > protobuf shaded path).
> > >> > >
> > >> > > BTW, hadoop-thirdparty-1.2.0 will be included in release-3.4.0,
> hope
> > >> we
> > >> > > could finish this vote before 2024/02/06(UTC) if there are no
> > >> concerns.
> > >> > > Thanks all.
> > >> > >
> > >> > > Best Regards,
> > >> > > - He Xiaoqiao
> > >> > >
> > >> > >
> > >> > >
> > >> > > On Mon, Jan 29, 2024 at 10:42 PM slfan1989 
> > >> wrote:
> > >> > >
> > >> > > > Hi folks,
> > >> > > >
> > >> > > > Xiaoqiao He and I have put together a release candidate (RC0)
> for
> > >> > Hadoop
> > >> > > > Thirdparty 1.2.0.
> > >> > > >
> > >> > > > The RC is available at:
> > >> > > >
> > >> > >
> > >> >
> > >>
> >
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.2.0-RC0
> > >> > > >
> 

Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 RC0

2024-02-01 Thread Takanobu Asanuma
It also looks good to me, except for the diff.

* Verified signatures and hashes
* Reviewed the documents
* Successfully built from source with `mvn clean install`
* Successfully compiled Hadoop trunk and branch-3.4 using the Hadoop
thirdparty 1.2.0

Anyway, since hadoop-thirdparty-1.1.1 has some high vulnerabilities,
hadoop-thirdparty-1.2.0 would be required for Hadoop-3.4.0.

Thanks,
- Takanobu

Feb 2, 2024 (Fri) 4:45 slfan1989 :

> Thank you for helping to review Hadoop-Thirdparty-1.2.0-RC0 and providing
> feedback!
>
> I followed the "how to release" documentation and tried to package it using
> create-release and Dockerfile, but I couldn't successfully package it
> directly. Some modifications are required before compilation. I should
> submit a pull request to fix this issue before
> Hadoop-Thirdparty-1.2.0-RC0 compile.
>
> This is an area that needs improvement. We should ensure that the code of
> src is consistent with the tag.
>
> On Fri, Feb 2, 2024 at 2:25 AM Ayush Saxena  wrote:
>
> >
> > There is some diff b/w the git tag & the src tar, the Dockerfile & the
> > create-release are different, Why?
> >
> > Files hadoop-thirdparty/dev-support/bin/create-release and
> > hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release differ
> >
> > Files hadoop-thirdparty/dev-support/docker/Dockerfile and
> > hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile differ
> >
> >
> > ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
> > hadoop-thirdparty/dev-support/bin/create-release
> > hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release
> >
> > 444,446c444,446
> >
> > < echo "RUN groupadd --non-unique -g ${group_id} ${user_name}"
> >
> > < echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name}"
> >
> > < echo "RUN chown -R ${user_name} /home/${user_name}"
> >
> > ---
> >
> > > echo "RUN groupadd --non-unique -g ${group_id} ${user_name}; exit
> > 0;"
> >
> > > echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name};
> > exit 0;"
> >
> > > echo "RUN chown -R ${user_name} /home/${user_name}; exit 0;"
> >
> > ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
> > hadoop-thirdparty/dev-support/docker/Dockerfile
> > hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile
> >
> > 103a104,105
> >
> > > RUN rm -f /etc/maven/settings.xml && ln -s /home/root/.m2/settings.xml
> > /etc/maven/settings.xml
> >
> > >
> >
> > 126a129,130
> >
> > > RUN pip2 install setuptools-scm==5.0.2
> >
> > > RUN pip2 install lazy-object-proxy==1.5.0
> >
> > 159d162
> >
> > <
> >
> >
> >
> >
> > Other things look Ok,
> > * Built from source
> > * Verified Checksums
> > * Verified Signatures
> > * Validated files have ASF header
> >
> > Not sure if having diff b/w the git tag & src tar is ok, this doesn't
> look
> > like core code change though, can anybody check & confirm?
> >
> > -Ayush
> >
> >
> > On Thu, 1 Feb 2024 at 13:39, Xiaoqiao He  wrote:
> >
> >> Gentle ping. @Ayush Saxena  @Steve Loughran
> >>  @inigo...@apache.org 
> >> @Masatake
> >> Iwasaki  and some other folks.
> >>
> >> On Wed, Jan 31, 2024 at 10:17 AM slfan1989 
> wrote:
> >>
> >> > Thank you for the review and vote! Looking forward to other folks
> >> helping
> >> > with voting and verification.
> >> >
> >> > Best Regards,
> >> > Shilun Fan.
> >> >
> >> > On Tue, Jan 30, 2024 at 6:20 PM Xiaoqiao He 
> >> wrote:
> >> >
> >> > > Thanks Shilun for driving it and making it happen.
> >> > >
> >> > > +1(binding).
> >> > >
> >> > > [x] Checksums and PGP signatures are valid.
> >> > > [x] LICENSE files exist.
> >> > > [x] NOTICE is included.
> >> > > [x] Rat check is ok. `mvn clean apache-rat:check`
> >> > > [x] Built from source works well: `mvn clean install`
> >> > > [x] Built Hadoop trunk with updated thirdparty successfully (include
> >> > update
> >> > > protobuf shaded path).
> >> > >
> >> > > BTW, hadoop-thirdparty-1.2.0 will be included in release-3.4.0, hope
> >> we
> >> > > could finish this vote before 2024/02/06(UTC) if there are no
> >> concerns.
> >> > > Thanks all.
> >> > >
> >> > > Best Regards,
> >> > > - He Xiaoqiao
> >> > >
> >> > >
> >> > >
> >> > > On Mon, Jan 29, 2024 at 10:42 PM slfan1989 
> >> wrote:
> >> > >
> >> > > > Hi folks,
> >> > > >
> >> > > > Xiaoqiao He and I have put together a release candidate (RC0) for
> >> > Hadoop
> >> > > > Thirdparty 1.2.0.
> >> > > >
> >> > > > The RC is available at:
> >> > > >
> >> > >
> >> >
> >>
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.2.0-RC0
> >> > > >
> >> > > > The RC tag is
> >> > > >
> >> > >
> >> >
> >>
> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.2.0-RC0
> >> > > >
> >> > > > The maven artifacts are staged at
> >> > > >
> >> >
> https://repository.apache.org/content/repositories/orgapachehadoop-1398
> >> > > >
> >> > > > Comparing to 1.1.1, there are three additional fixes:
> >> > > >
> >> > > > HADOOP-18197. Upgrade Protobuf-Java to 3.21.12
> >> > > > 

Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2024-02-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/623/

[Jan 31, 2024, 5:30:35 AM] (github) HADOOP-19056. Highlight RBF features and 
improvements targeting version 3.4. (#6512) Contributed by Takanobu Asanuma.




-1 overall


The following subsystems voted -1:
blanks hadolint mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 333] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 

Apache Hadoop qbt Report: branch-3.3+JDK8 on Linux/x86_64

2024-02-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/

[Jan 25, 2024, 3:25:01 PM] (github) YARN-11639. CME and NPE in 
PriorityUtilizationQueueOrderingPolicy (#6… (#6493)
[Jan 25, 2024, 10:24:01 PM] (github) HADOOP-18883. [ABFS]: Expect-100 JDK bug 
resolution: prevent multiple server calls (#6022)
[Jan 25, 2024, 11:19:48 PM] (github) HADOOP-19015.  Increase 
fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting for 
connection from pool. (#6372) (#6487)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestReconstructStripedFileWithValidator 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/results-compile-cc-root.txt
 [48K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/results-compile-javac-root.txt
 [364K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/blanks-eol.txt
 [15M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/results-shellcheck.txt
 [20K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/xml.txt
 [32K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/results-javadoc-javadoc-root.txt
 [972K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [528K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/146/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 [96K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


[jira] [Created] (HADOOP-19061) Capture exception in rpcRequestSender.start() in IPC.Connection.run()

2024-02-01 Thread Xing Lin (Jira)
Xing Lin created HADOOP-19061:
-

 Summary: Capture exception in rpcRequestSender.start() in 
IPC.Connection.run()
 Key: HADOOP-19061
 URL: https://issues.apache.org/jira/browse/HADOOP-19061
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 3.5.0
Reporter: Xing Lin









Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 RC0

2024-02-01 Thread slfan1989
Thank you for helping to review Hadoop-Thirdparty-1.2.0-RC0 and providing
feedback!

I followed the "how to release" documentation and tried to package it using
create-release and Dockerfile, but I couldn't successfully package it
directly. Some modifications are required before compilation. I should
submit a pull request to fix this issue before
Hadoop-Thirdparty-1.2.0-RC0 compile.

This is an area that needs improvement. We should ensure that the code of
src is consistent with the tag.

On Fri, Feb 2, 2024 at 2:25 AM Ayush Saxena  wrote:

>
> There is some diff b/w the git tag & the src tar, the Dockerfile & the
> create-release are different, Why?
>
> Files hadoop-thirdparty/dev-support/bin/create-release and
> hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release differ
>
> Files hadoop-thirdparty/dev-support/docker/Dockerfile and
> hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile differ
>
>
> ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
> hadoop-thirdparty/dev-support/bin/create-release
> hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release
>
> 444,446c444,446
>
> < echo "RUN groupadd --non-unique -g ${group_id} ${user_name}"
>
> < echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name}"
>
> < echo "RUN chown -R ${user_name} /home/${user_name}"
>
> ---
>
> > echo "RUN groupadd --non-unique -g ${group_id} ${user_name}; exit
> 0;"
>
> > echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name};
> exit 0;"
>
> > echo "RUN chown -R ${user_name} /home/${user_name}; exit 0;"
>
> ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
> hadoop-thirdparty/dev-support/docker/Dockerfile
> hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile
>
> 103a104,105
>
> > RUN rm -f /etc/maven/settings.xml && ln -s /home/root/.m2/settings.xml
> /etc/maven/settings.xml
>
> >
>
> 126a129,130
>
> > RUN pip2 install setuptools-scm==5.0.2
>
> > RUN pip2 install lazy-object-proxy==1.5.0
>
> 159d162
>
> <
>
>
>
>
> Other things look Ok,
> * Built from source
> * Verified Checksums
> * Verified Signatures
> * Validated files have ASF header
>
> Not sure if having diff b/w the git tag & src tar is ok, this doesn't look
> like core code change though, can anybody check & confirm?
>
> -Ayush
>
>
> On Thu, 1 Feb 2024 at 13:39, Xiaoqiao He  wrote:
>
>> Gentle ping. @Ayush Saxena  @Steve Loughran
>>  @inigo...@apache.org 
>> @Masatake
>> Iwasaki  and some other folks.
>>
>> On Wed, Jan 31, 2024 at 10:17 AM slfan1989  wrote:
>>
>> > Thank you for the review and vote! Looking forward to other folks
>> helping
>> > with voting and verification.
>> >
>> > Best Regards,
>> > Shilun Fan.
>> >
>> > On Tue, Jan 30, 2024 at 6:20 PM Xiaoqiao He 
>> wrote:
>> >
>> > > Thanks Shilun for driving it and making it happen.
>> > >
>> > > +1(binding).
>> > >
>> > > [x] Checksums and PGP signatures are valid.
>> > > [x] LICENSE files exist.
>> > > [x] NOTICE is included.
>> > > [x] Rat check is ok. `mvn clean apache-rat:check`
>> > > [x] Built from source works well: `mvn clean install`
>> > > [x] Built Hadoop trunk with updated thirdparty successfully (include
>> > update
>> > > protobuf shaded path).
>> > >
>> > > BTW, hadoop-thirdparty-1.2.0 will be included in release-3.4.0, hope
>> we
>> > > could finish this vote before 2024/02/06(UTC) if there are no
>> concerns.
>> > > Thanks all.
>> > >
>> > > Best Regards,
>> > > - He Xiaoqiao
>> > >
>> > >
>> > >
>> > > On Mon, Jan 29, 2024 at 10:42 PM slfan1989 
>> wrote:
>> > >
>> > > > Hi folks,
>> > > >
>> > > > Xiaoqiao He and I have put together a release candidate (RC0) for
>> > Hadoop
>> > > > Thirdparty 1.2.0.
>> > > >
>> > > > The RC is available at:
>> > > >
>> > >
>> >
>> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.2.0-RC0
>> > > >
>> > > > The RC tag is
>> > > >
>> > >
>> >
>> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.2.0-RC0
>> > > >
>> > > > The maven artifacts are staged at
>> > > >
>> > https://repository.apache.org/content/repositories/orgapachehadoop-1398
>> > > >
>> > > > Comparing to 1.1.1, there are three additional fixes:
>> > > >
>> > > > HADOOP-18197. Upgrade Protobuf-Java to 3.21.12
>> > > > https://github.com/apache/hadoop-thirdparty/pull/26
>> > > >
>> > > > HADOOP-18921. Upgrade to avro 1.11.3
>> > > > https://github.com/apache/hadoop-thirdparty/pull/24
>> > > >
>> > > > HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976
>> > > > https://github.com/apache/hadoop-thirdparty/pull/23
>> > > >
>> > > > You can find my public key at :
>> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> > > >
>> > > > Best Regards,
>> > > > Shilun Fan.
>> > > >
>> > >
>> >
>>
>


Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 RC0

2024-02-01 Thread Ayush Saxena
There is some diff b/w the git tag & the src tar: the Dockerfile & the
create-release script are different. Why?

Files hadoop-thirdparty/dev-support/bin/create-release and
hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release differ

Files hadoop-thirdparty/dev-support/docker/Dockerfile and
hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile differ


ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
hadoop-thirdparty/dev-support/bin/create-release
hadoop-thirdparty-1.2.0-src/dev-support/bin/create-release

444,446c444,446

< echo "RUN groupadd --non-unique -g ${group_id} ${user_name}"

< echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name}"

< echo "RUN chown -R ${user_name} /home/${user_name}"

---

> echo "RUN groupadd --non-unique -g ${group_id} ${user_name}; exit 0;"

> echo "RUN useradd -g ${group_id} -u ${user_id} -m ${user_name}; exit
0;"

> echo "RUN chown -R ${user_name} /home/${user_name}; exit 0;"

ayushsaxena@ayushsaxena hadoop-thirdparty-1.2.0-RC0 % diff
hadoop-thirdparty/dev-support/docker/Dockerfile
hadoop-thirdparty-1.2.0-src/dev-support/docker/Dockerfile

103a104,105

> RUN rm -f /etc/maven/settings.xml && ln -s /home/root/.m2/settings.xml
/etc/maven/settings.xml

>

126a129,130

> RUN pip2 install setuptools-scm==5.0.2

> RUN pip2 install lazy-object-proxy==1.5.0

159d162

<




Other things look Ok,
* Built from source
* Verified Checksums
* Verified Signatures
* Validated files have ASF header
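
For anyone repeating these checks, they amount to roughly the following
commands (file names are illustrative, substitute the actual artifacts from
the RC directory):

# import the release KEYS, then verify signature and checksum
gpg --import KEYS
gpg --verify hadoop-thirdparty-1.2.0-src.tar.gz.asc
sha512sum hadoop-thirdparty-1.2.0-src.tar.gz   # compare with the published .sha512

# build from source and run the ASF license (Rat) check
tar xzf hadoop-thirdparty-1.2.0-src.tar.gz && cd hadoop-thirdparty-1.2.0-src
mvn clean apache-rat:check
mvn clean install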

Not sure if having a diff b/w the git tag & the src tar is ok; it doesn't look
like a core code change though. Can anybody check & confirm?

-Ayush


On Thu, 1 Feb 2024 at 13:39, Xiaoqiao He  wrote:

> Gentle ping. @Ayush Saxena  @Steve Loughran
>  @inigo...@apache.org  @Masatake
> Iwasaki  and some other folks.
>
> On Wed, Jan 31, 2024 at 10:17 AM slfan1989  wrote:
>
> > Thank you for the review and vote! Looking forward to other forks helping
> > with voting and verification.
> >
> > Best Regards,
> > Shilun Fan.
> >
> > On Tue, Jan 30, 2024 at 6:20 PM Xiaoqiao He 
> wrote:
> >
> > > Thanks Shilun for driving it and making it happen.
> > >
> > > +1(binding).
> > >
> > > [x] Checksums and PGP signatures are valid.
> > > [x] LICENSE files exist.
> > > [x] NOTICE is included.
> > > [x] Rat check is ok. `mvn clean apache-rat:check`
> > > [x] Built from source works well: `mvn clean install`
> > > [x] Built Hadoop trunk with updated thirdparty successfully (include
> > update
> > > protobuf shaded path).
> > >
> > > BTW, hadoop-thirdparty-1.2.0 will be included in release-3.4.0, hope we
> > > could finish this vote before 2024/02/06(UTC) if there are no concerns.
> > > Thanks all.
> > >
> > > Best Regards,
> > > - He Xiaoqiao
> > >
> > >
> > >
> > > On Mon, Jan 29, 2024 at 10:42 PM slfan1989 
> wrote:
> > >
> > > > Hi folks,
> > > >
> > > > Xiaoqiao He and I have put together a release candidate (RC0) for
> > Hadoop
> > > > Thirdparty 1.2.0.
> > > >
> > > > The RC is available at:
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.2.0-RC0
> > > >
> > > > The RC tag is
> > > >
> > >
> >
> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.2.0-RC0
> > > >
> > > > The maven artifacts are staged at
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1398
> > > >
> > > > Comparing to 1.1.1, there are three additional fixes:
> > > >
> > > > HADOOP-18197. Upgrade Protobuf-Java to 3.21.12
> > > > https://github.com/apache/hadoop-thirdparty/pull/26
> > > >
> > > > HADOOP-18921. Upgrade to avro 1.11.3
> > > > https://github.com/apache/hadoop-thirdparty/pull/24
> > > >
> > > > HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976
> > > > https://github.com/apache/hadoop-thirdparty/pull/23
> > > >
> > > > You can find my public key at :
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > > Best Regards,
> > > > Shilun Fan.
> > > >
> > >
> >
>


Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 RC0

2024-02-01 Thread Xiaoqiao He
Gentle ping. @Ayush Saxena  @Steve Loughran
 @inigo...@apache.org  @Masatake
Iwasaki  and some other folks.

On Wed, Jan 31, 2024 at 10:17 AM slfan1989  wrote:

> Thank you for the review and vote! Looking forward to other folks helping
> with voting and verification.
>
> Best Regards,
> Shilun Fan.
>
> On Tue, Jan 30, 2024 at 6:20 PM Xiaoqiao He  wrote:
>
> > Thanks Shilun for driving it and making it happen.
> >
> > +1(binding).
> >
> > [x] Checksums and PGP signatures are valid.
> > [x] LICENSE files exist.
> > [x] NOTICE is included.
> > [x] Rat check is ok. `mvn clean apache-rat:check`
> > [x] Built from source works well: `mvn clean install`
> > [x] Built Hadoop trunk with updated thirdparty successfully (include
> update
> > protobuf shaded path).
> >
> > BTW, hadoop-thirdparty-1.2.0 will be included in release-3.4.0, hope we
> > could finish this vote before 2024/02/06(UTC) if there are no concerns.
> > Thanks all.
> >
> > Best Regards,
> > - He Xiaoqiao
> >
> >
> >
> > On Mon, Jan 29, 2024 at 10:42 PM slfan1989  wrote:
> >
> > > Hi folks,
> > >
> > > Xiaoqiao He and I have put together a release candidate (RC0) for
> Hadoop
> > > Thirdparty 1.2.0.
> > >
> > > The RC is available at:
> > >
> >
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.2.0-RC0
> > >
> > > The RC tag is
> > >
> >
> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.2.0-RC0
> > >
> > > The maven artifacts are staged at
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1398
> > >
> > > Comparing to 1.1.1, there are three additional fixes:
> > >
> > > HADOOP-18197. Upgrade Protobuf-Java to 3.21.12
> > > https://github.com/apache/hadoop-thirdparty/pull/26
> > >
> > > HADOOP-18921. Upgrade to avro 1.11.3
> > > https://github.com/apache/hadoop-thirdparty/pull/24
> > >
> > > HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976
> > > https://github.com/apache/hadoop-thirdparty/pull/23
> > >
> > > You can find my public key at :
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > Best Regards,
> > > Shilun Fan.
> > >
> >
>