Re: [DISCUSS] hadoop-thirdparty 1.0.0 release

2020-02-21 Thread Ayush Saxena
Thanks, Vinay, for initiating this.
+1 for the plan.
Good luck!!!
-Ayush

> On 21-Feb-2020, at 12:26 PM, Vinayakumar B  wrote:
> 
> Hi All,
> 
> Since the Hadoop 3.3.0 release is around the corner, it's time for
> hadoop-thirdparty's first-ever release, without which hadoop-3.3.0 cannot
> proceed.
> 
> Below are the tentative dates for the RC and the release. Since there is not
> much activity in this repo (other than the opentracing-related one, which I
> just merged), I am keeping the plan a little aggressive.
> Please let me know any concerns regarding the same.
> 
> RC-0 : 25-Feb-2020
> Release : 03-Mar-2020 (after 7 days Voting)
> 
> -Vinay

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: This week's Hadoop storage community online meetup (APAC)

2020-02-21 Thread Wei-Chiu Chuang
Hi Ahmed,

The schedule is up to date. Would you like to volunteer to host one? Do you
have a topic you'd like to share? I've been trying to keep the call
informative, with a topic each time, and I send out a notice beforehand.
But maybe that's not the right way of doing it.

In fact, given that 3.3.0 is approaching, I would suggest using this call
as an opportunity to push the release forward (discuss blocker issues and
features to be included). How does that sound? The Hadoop community has
historically tended to be asynchronous, so I don't know whether people
would accept that, but I'd like to give it a try.

On Wed, Feb 19, 2020 at 12:11 PM Ahmed Hussein  wrote:

> Hi Wei-Chiu,
>
> I joined the meeting on Feb 19th on the zoom link sent in the original
> message but no one was there.
> Is the schedule up to date? (
>
> https://calendar.google.com/calendar/b/3?cid=aGFkb29wLmNvbW11bml0eS5zeW5jLnVwQGdtYWlsLmNvbQ
> )
>
> On Thu, Feb 13, 2020 at 11:47 AM Wei-Chiu Chuang
>  wrote:
>
> > Thanks for joining the call last night/yesterday.
> >
> > Please find the video recording here:
> >
> >
> https://cloudera.zoom.us/rec/play/tMYpdeyp_Ts3EoaR5ASDVPV7W429Kays03Id-PMPzxq9WyZQZlGkbrtBM-Hk48mM3YD3z9xi2zWexHZz?continueMode=true
> >
> > The presentation slides are available in my personal Google Drive:
> > https://drive.google.com/open?id=1IUPtknaPUeKIL74TpNt-R6CK5ICz5veW
> >
> > On Wed, Feb 12, 2020 at 9:50 PM Wei-Chiu Chuang 
> > wrote:
> >
> > > Gentle reminder for this event.
> > > Siyao will lead the session first, followed by a demo.
> > >
> > > Zoom link: https://cloudera.zoom.us/j/880548968
> > >
> > > Past meeting minutes:
> > >
> > >
> >
> https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit?usp=sharing
> > >
> > >
> > > On Mon, Feb 10, 2020 at 1:42 PM Wei-Chiu Chuang 
> > > wrote:
> > >
> > >> Hi!
> > >>
> > >> I would like to spend this week's session to discuss Distributed
> Tracing
> > >> in Hadoop.
> > >>
> > >> We had a session at the last Bay Area Hadoop meetup back in June
> > >> discussing the Distributed Tracing work we have been doing. I'd like
> to
> > >> share our latest update with the APAC community.
> > >>
> > >> Zoom link: https://cloudera.zoom.us/j/880548968
> > >>
> > >> Time/Date:
> > >> Feb 12 10PM (US west coast PST) / Feb 13 2pm (Beijing) / Feb 13 3pm
> > >> (Tokyo) / Feb 13 11:30am (New Delhi)
> > >>
> > >
> >
>


Re: [DISCUSS] hadoop-thirdparty 1.0.0 release

2020-02-21 Thread Wei-Chiu Chuang
+1

On Fri, Feb 21, 2020 at 1:22 AM Akira Ajisaka  wrote:

> Thanks, Vinayakumar, for starting the discussion.
>
> +1 for the release plan.
> I think the release vote timeframe is now 5 days, not 7 days.
>
> -Akira
>
> On Fri, Feb 21, 2020 at 3:56 PM Vinayakumar B 
> wrote:
>
> > Hi All,
> >
> > Since the Hadoop 3.3.0 release is around the corner, it's time for
> > hadoop-thirdparty's first-ever release, without which hadoop-3.3.0 cannot
> > proceed.
> >
> > Below are the tentative dates for the RC and the release. Since there is
> > not much activity in this repo (other than the opentracing-related one,
> > which I just merged), I am keeping the plan a little aggressive.
> > Please let me know any concerns regarding the same.
> >
> >  RC-0 : 25-Feb-2020
> > Release : 03-Mar-2020 (after 7 days Voting)
> >
> > -Vinay
> >
>


[GitHub] [hadoop-thirdparty] smengcl commented on issue #5: HADOOP-16867. [thirdparty] Add shaded JaegerTracer

2020-02-21 Thread GitBox
smengcl commented on issue #5: HADOOP-16867. [thirdparty] Add shaded 
JaegerTracer
URL: https://github.com/apache/hadoop-thirdparty/pull/5#issuecomment-589702740
 
 
   Thanks @jojochuang and @vinayakumarb for reviewing and committing this!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-02-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/

[Feb 20, 2020 2:27:15 PM] (snemeth) YARN-10143. YARN-10101 broke Yarn logs CLI. 
Contributed by Adam Antal
[Feb 20, 2020 3:04:06 PM] (pjoseph) YARN-10119. Option to reset AM failure 
count for YARN Service




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
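
A generic fix for the last two warnings (a sketch only; WorkerIdLike below is a stand-in class, not the real org.apache.hadoop.applications.mawo WorkerId source) is an equals() that handles null and foreign types via instanceof:

```java
/** Stand-in class showing the equals()/null-check pattern FindBugs asks for.
 *  Illustrative only; not the actual WorkerId implementation. */
class WorkerIdLike {
    private final String id;

    WorkerIdLike(String id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        // instanceof evaluates to false for null, so this single check
        // covers both warnings: no blind cast, no NPE on equals(null).
        if (!(o instanceof WorkerIdLike)) return false;
        return id.equals(((WorkerIdLike) o).id);
    }

    @Override
    public int hashCode() { return id.hashCode(); }
}
```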

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
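
The default-encoding warnings above have a standard remedy (a sketch; the helper below is illustrative, not CosNativeFileSystemStore's code): pass an explicit Charset instead of relying on the platform default.

```java
import java.nio.charset.StandardCharsets;

/** Illustrative fix for "reliance on default encoding": always name the
 *  charset when converting between bytes and String. */
class CharsetSafe {
    static String bytesToString(byte[] raw) {
        // new String(raw) would use the platform default encoding;
        // UTF-8 is explicit and stable across JVMs and locales.
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```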

Failed junit tests :

   hadoop.hdfs.server.namenode.TestFSEditLogLoader 
   hadoop.hdfs.TestDecommissionWithBackoffMonitor 
   hadoop.hdfs.TestFileAppend4 
   hadoop.hdfs.TestErasureCodingExerciseAPIs 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-compile-cc-root.txt
  [8.0K]

   javac:

   

[jira] [Resolved] (HADOOP-16711) Add way to skip "verifyBuckets" check in S3A fs init()

2020-02-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16711.
-
Fix Version/s: 3.3.0
 Release Note: 
The probe for an S3 bucket existing now uses the V2 API, which fails fast if 
the caller lacks access to the bucket.
The property fs.s3a.bucket.probe can be used to change this. If using a third 
party library which doesn't support this API call, change it to "1".

To skip the probe entirely, use "0". This will make filesystem instantiation 
slightly faster, at the cost of postponing all issues related to bucket existence 
and client authentication until the filesystem API calls are first used.


  was:
The probe for an S3 bucket existing now uses the V2 API, which fails fast if 
the caller lacks access to the bucket.
The property fs.s3a.bucket.probe can be used to change this. If using a third 
party library which doesn't support this API call, change it to "1".

to skip the test entirely, use "0". This will make filesystem instantiation 
slightly faster, at a cost of postponing all issues related to bucket existence 
and client authentication until the filesystem API calls are first used.


   Resolution: Fixed

+1; fixed in trunk ... credited everyone who contributed to it.

FWIW I've been trying "v0" with my hadoop-aws tests; all seems good. I haven't 
tried comparing the suite with and without the flag.
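
As a hedged sketch (the fs.s3a.bucket.probe property name and the 2/1/0 semantics are taken from the release note; the class and method here are illustrative, not S3A internals), the three probe levels can be modelled like this:

```java
/** Illustrative model of the fs.s3a.bucket.probe levels. This is not S3A
 *  code; it only encodes the documented semantics: 2 = V2 probe (default),
 *  1 = V1 probe for third-party stores, 0 = skip the probe entirely. */
class BucketProbeLevel {
    static String describe(int level) {
        switch (level) {
            case 0:
                return "skip probe: faster init; bucket/auth problems surface on first API call";
            case 1:
                return "V1 probe: for third-party stores lacking the V2 call";
            case 2:
                return "V2 probe: fails fast if the caller lacks access to the bucket";
            default:
                throw new IllegalArgumentException("unknown fs.s3a.bucket.probe value: " + level);
        }
    }
}
```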

> Add way to skip "verifyBuckets" check in S3A fs init()
> --
>
> Key: HADOOP-16711
> URL: https://issues.apache.org/jira/browse/HADOOP-16711
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Rajesh Balamohan
>Assignee: Mukund Thakur
>Priority: Minor
>  Labels: performance
> Fix For: 3.3.0
>
> Attachments: HADOOP-16711.prelim.1.patch
>
>
> When authoritative mode is enabled with s3guard, it would be good to skip 
> the verifyBuckets call during S3A filesystem init(). This would save a call 
> to S3 during the init method.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-02-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/patch-javadoc-root-jdk1.8.0_242.txt
  [4.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/603/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   

[jira] [Created] (HADOOP-16877) S3A FS deleteOnExit to skip the exists check

2020-02-21 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16877:
---

 Summary: S3A FS deleteOnExit to skip the exists check
 Key: HADOOP-16877
 URL: https://issues.apache.org/jira/browse/HADOOP-16877
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.1
Reporter: Steve Loughran


S3A FS deleteOnExit is getting that 404 in because it looks for file.exists() 
before adding; it should just queue the path for a delete.

Proposed: also have processDeleteOnExit() skip those checks; just call 
delete(). 
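
A minimal sketch of the proposed behaviour (class and method names are illustrative, not the S3AFileSystem implementation): enqueue unconditionally and let delete() tolerate absent paths at shutdown.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of delete-on-exit without the exists() probe. Not S3A code;
 *  it shows "queue now, delete later" with no HEAD request up front. */
class DeleteOnExitSketch {
    private final List<String> queued = new ArrayList<>();

    void deleteOnExit(String path) {
        queued.add(path); // no exists() check, so no 404-generating HEAD
    }

    /** Returns how many paths were handed to delete(); delete() itself is
     *  expected to treat a missing path as a no-op. */
    int processDeleteOnExit() {
        int n = queued.size();
        queued.clear();
        return n;
    }
}
```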




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16706) ITestClientUrlScheme fails for accounts which don't support HTTP

2020-02-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16706.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> ITestClientUrlScheme fails for accounts which don't support HTTP
> 
>
> Key: HADOOP-16706
> URL: https://issues.apache.org/jira/browse/HADOOP-16706
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> I'm setting up a new Storage account to play with encryption options. I'm 
> getting a test failure in 
> {{testClientUrlScheme[0](org.apache.hadoop.fs.azurebfs.ITestClientUrlScheme)}}
>  as the account doesn't support HTTP.
> Proposed: catch and recognise 'AccountRequiresHttps', and downgrade those 
> particular parameterised tests to skipped tests.
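
The proposed downgrade could look roughly like this (a sketch under assumptions: the "AccountRequiresHttps" code is quoted from the issue text; the class and method names are invented, not the ABFS test suite's actual API):

```java
/** Sketch of classifying the HTTP-scheme test outcome. Only the
 *  "AccountRequiresHttps" error code comes from the issue; everything
 *  else here is illustrative. */
class HttpSchemeTestOutcome {
    static String classify(String serverErrorCode) {
        if ("AccountRequiresHttps".equals(serverErrorCode)) {
            // Account policy forbids HTTP: an environment constraint,
            // not a product bug, so skip rather than fail.
            return "SKIPPED";
        }
        return "FAILED";
    }
}
```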




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] hadoop-thirdparty 1.0.0 release

2020-02-21 Thread Akira Ajisaka
Thanks, Vinayakumar, for starting the discussion.

+1 for the release plan.
I think the release vote timeframe is now 5 days, not 7 days.

-Akira

On Fri, Feb 21, 2020 at 3:56 PM Vinayakumar B 
wrote:

> Hi All,
>
> Since the Hadoop 3.3.0 release is around the corner, it's time for
> hadoop-thirdparty's first-ever release, without which hadoop-3.3.0 cannot
> proceed.
>
> Below are the tentative dates for the RC and the release. Since there is not
> much activity in this repo (other than the opentracing-related one, which I
> just merged), I am keeping the plan a little aggressive.
> Please let me know any concerns regarding the same.
>
>  RC-0 : 25-Feb-2020
> Release : 03-Mar-2020 (after 7 days Voting)
>
> -Vinay
>