[jira] [Resolved] (HADOOP-17224) Install Intel ISA-L library in Dockerfile

2021-01-21 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HADOOP-17224.
---
Resolution: Fixed

Merged the PR again. Thank you, everyone.

> Install Intel ISA-L library in Dockerfile
> -
>
> Key: HADOOP-17224
> URL: https://issues.apache.org/jira/browse/HADOOP-17224
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> Currently, the ISA-L library is not installed in the Docker container, so Jenkins 
> skips the native tests TestNativeRSRawCoder and TestNativeXORRawCoder.






[jira] [Created] (HADOOP-17486) Provide fallbacks for callqueue ipc namespace properties

2021-01-21 Thread Jim Brennan (Jira)
Jim Brennan created HADOOP-17486:


 Summary: Provide fallbacks for callqueue ipc namespace properties
 Key: HADOOP-17486
 URL: https://issues.apache.org/jira/browse/HADOOP-17486
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.1.4
Reporter: Jim Brennan


Filing this proposal on behalf of [~daryn], based on comments he made in one of 
our internal Jiras.

The following settings are currently specified per port:
{noformat}
  /**
   * CallQueue related settings. These are not used directly, but rather
   * combined with a namespace and port. For instance:
   * IPC_NAMESPACE + ".8020." + IPC_CALLQUEUE_IMPL_KEY
   */
  public static final String IPC_NAMESPACE = "ipc";
  public static final String IPC_CALLQUEUE_IMPL_KEY = "callqueue.impl";
  public static final String IPC_SCHEDULER_IMPL_KEY = "scheduler.impl";
  public static final String IPC_IDENTITY_PROVIDER_KEY = "identity-provider.impl";
  public static final String IPC_COST_PROVIDER_KEY = "cost-provider.impl";
  public static final String IPC_BACKOFF_ENABLE = "backoff.enable";
  public static final boolean IPC_BACKOFF_ENABLE_DEFAULT = false;
{noformat}
If one of these properties is not specified for the port, the defaults are 
hard-coded.
It would be nice to provide a way to specify a fallback default property that 
would be used for all ports.  If the property for a specific port is not 
defined, the fallback would be used, and if the fallback is not defined it 
would use the hard-coded defaults.

We would likely need to make the same change for the properties defined by these 
pluggable classes themselves, for example the properties used in WeightedTimeCostProvider.

The fallback properties could be specified by dropping the port from the 
property name.  For example, the fallback for {{ipc.8020.cost-provider.impl}} 
would be {{ipc.cost-provider.impl}}.
Another option would be to use something more explicit like 
{{ipc.default.cost-provider.impl}}.
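
A minimal sketch of the proposed lookup order, assuming a small helper on top of 
Configuration (the class and method names here are illustrative, not part of any patch):
{code}
import org.apache.hadoop.conf.Configuration;

public class CallQueueFallbackSketch {
  static final String IPC_NAMESPACE = "ipc";

  /**
   * Resolve e.g. port=8020, key="callqueue.impl":
   * try ipc.8020.callqueue.impl, then ipc.callqueue.impl,
   * then fall back to the hard-coded default.
   */
  static String resolve(Configuration conf, int port, String key, String hardCodedDefault) {
    String perPort  = IPC_NAMESPACE + "." + port + "." + key;
    String fallback = IPC_NAMESPACE + "." + key;
    return conf.get(perPort, conf.get(fallback, hardCodedDefault));
  }
}
{code}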







Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2021-01-21 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/

[Jan 19, 2021 8:42:40 AM] (Szilard Nemeth) YARN-10573. Enhance placement rule 
conversion in fs2cs in weight mode and enable it by default. Contributed by 
Peter Bacsko
[Jan 19, 2021 5:19:27 PM] (noreply) HADOOP-17433. Skipping network I/O in S3A 
getFileStatus(/) breaks ITestAssumeRole. (#2600)
[Jan 20, 2021 6:58:59 AM] (Sunil G) YARN-10512. CS Flexible Auto Queue 
Creation: Modify RM /scheduler endpoint to include mode of operation for CS. 
Contributed by Szilard Nemeth.
[Jan 20, 2021 2:22:44 PM] (Szilard Nemeth) YARN-10578. Fix Auto Queue Creation 
parent handling. Contributed by Andras Gyori
[Jan 21, 2021 1:07:46 AM] (noreply) HDFS-15783. Speed up 
BlockPlacementPolicyRackFaultTolerant#verifyBlockPlacement (#2626)




-1 overall


The following subsystems voted -1:
mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.client.TestRMFailoverProxyProvider 
   hadoop.yarn.client.TestNoHaRMFailoverProxyProvider 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.documentstore.TestDocumentStoreCollectionCreator
 
   
hadoop.yarn.server.timelineservice.documentstore.TestDocumentStoreTimelineReaderImpl
 
   
hadoop.yarn.server.timelineservice.documentstore.TestDocumentStoreTimelineWriterImpl
 
   
hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.TestCosmosDBDocumentStoreWriter
 
   
hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.TestCosmosDBDocumentStoreReader
 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/diff-compile-cc-root.txt
 [48K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/diff-compile-javac-root.txt
 [600K]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/diff-checkstyle-root.txt
 [16M]

   mvnsite:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/patch-mvnsite-root.txt
 [500K]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/pathlen.txt
 [12K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/diff-patch-pylint.txt
 [60K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/diff-patch-shellcheck.txt
 [20K]

   shelldocs:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/diff-patch-shelldocs.txt
 [96K]

   whitespace:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/whitespace-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/whitespace-tabs.txt
 [2.0M]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/diff-javadoc-javadoc-root.txt
 [480K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/109/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 [112K]
  

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-01-21 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/

[Jan 20, 2021 6:58:59 AM] (Sunil G) YARN-10512. CS Flexible Auto Queue 
Creation: Modify RM /scheduler endpoint to include mode of operation for CS. 
Contributed by Szilard Nemeth.
[Jan 20, 2021 2:22:44 PM] (Szilard Nemeth) YARN-10578. Fix Auto Queue Creation 
parent handling. Contributed by Andras Gyori
[Jan 21, 2021 1:07:46 AM] (noreply) HDFS-15783. Speed up 
BlockPlacementPolicyRackFaultTolerant#verifyBlockPlacement (#2626)




-1 overall


The following subsystems voted -1:
pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.hdfs.server.federation.router.TestRouterMultiRack 
   hadoop.hdfs.server.federation.router.TestRouterAllResolver 
   hadoop.hdfs.server.federation.router.TestSafeMode 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/diff-compile-javac-root.txt
  [564K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/whitespace-tabs.txt
  [2.0M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/diff-javadoc-javadoc-root.txt
  [2.0M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [352K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [156K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [56K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/393/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt
  [8.0K]
   

[jira] [Created] (HADOOP-17485) port UGI#getGroupsSet optimizations into 2.10

2021-01-21 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17485:
--

 Summary: port UGI#getGroupsSet optimizations into 2.10
 Key: HADOOP-17485
 URL: https://issues.apache.org/jira/browse/HADOOP-17485
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


HADOOP-17079 introduced an optimization that adds UGI#getGroupsSet and uses 
Set#contains() instead of List#contains() to speed up lookups against large groups, 
while minimizing List->Set conversions in the Groups#getGroups() call.

This ticket is to port those changes to branch-2.10.
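
A minimal sketch of the lookup pattern being ported, using only the JDK (the class 
name is illustrative):
{code}
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class GroupLookupSketch {
  public static void main(String[] args) {
    List<String> groupList = Arrays.asList("users", "admins", "hdfs");

    // Convert once and keep the Set, instead of scanning the List repeatedly.
    Set<String> groupSet = new LinkedHashSet<>(groupList);

    // List#contains is O(n) per call; Set#contains is O(1) expected.
    System.out.println(groupList.contains("admins")); // true, linear scan
    System.out.println(groupSet.contains("admins"));  // true, hash lookup
  }
}
{code}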






[jira] [Resolved] (HADOOP-17484) Typo in hadop-aws index.md

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17484.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

> Typo in hadop-aws index.md
> --
>
> Key: HADOOP-17484
> URL: https://issues.apache.org/jira/browse/HADOOP-17484
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3
>Affects Versions: 3.4.0
>Reporter: Maksim
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> - https://github.com/apache/hadoop/pull/2634/files






[jira] [Created] (HADOOP-17484) Fixed a small syntax in Markdown for index.md

2021-01-21 Thread Maksim (Jira)
Maksim created HADOOP-17484:
---

 Summary: Fixed a small syntax in Markdown for index.md
 Key: HADOOP-17484
 URL: https://issues.apache.org/jira/browse/HADOOP-17484
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Maksim









[jira] [Resolved] (HADOOP-16535) wasb Azure storage exceptions get wrapped as NoSuchElementExceptions

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16535.
-
Resolution: Won't Fix

> wasb Azure storage exceptions get wrapped as NoSuchElementExceptions
> 
>
> Key: HADOOP-16535
> URL: https://issues.apache.org/jira/browse/HADOOP-16535
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Trying to run the abfs tests on branch-3.2, mkdirs calls are failing with a 
> storage exception about an unknown container...one which is being wrapped by too 
> many layers.
> As well as identifying the cause of the test failure, it might be good for 
> the abfs exception handling code to look inside NoSuchElementException exceptions 
> and pull out any StorageException cause.






[jira] [Resolved] (HADOOP-15624) Release Hadoop 2.7.8

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15624.
-
Resolution: Won't Fix

> Release Hadoop 2.7.8
> 
>
> Key: HADOOP-15624
> URL: https://issues.apache.org/jira/browse/HADOOP-15624
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.7.7
>Reporter: Steve Loughran
>Priority: Major
>
> Planning ahead for the 2.7.8 release






[jira] [Resolved] (HADOOP-15371) Wasb fs prints (jetty's) "Logging initialized" message when instantiated

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15371.
-
Resolution: Won't Fix

> Wasb fs prints (jetty's) "Logging initialized" message when instantiated
> 
>
> Key: HADOOP-15371
> URL: https://issues.apache.org/jira/browse/HADOOP-15371
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Wasb initialization triggers an irrelevant log message from jetty about it 
> being initialized. 






[jira] [Resolved] (HADOOP-15044) Wasb getFileBlockLocations() returns too many locations.

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15044.
-
Resolution: Won't Fix

> Wasb getFileBlockLocations() returns too many locations.
> 
>
> Key: HADOOP-15044
> URL: https://issues.apache.org/jira/browse/HADOOP-15044
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The wasb mimicking of {{getFileBlockLocations()}} uses the length of the file 
> to calculate the number of blocks to create (i.e. file.length/blocksize), when it 
> should use only the range of the request.
> As a result, you always get the number of blocks in the whole file, not the 
> number spanning the range (start, len). If that range is smaller (i.e. start > 0 or 
> len < file.length), you end up with some 0-byte-range blocks at the end.
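
Illustrative arithmetic only (not the WASB source): the block count should be derived 
from the requested range rather than the whole file.
{code}
public class BlockRangeSketch {
  // What the current code effectively does: blocks for the whole file.
  static long blocksForWholeFile(long fileLength, long blockSize) {
    return (fileLength + blockSize - 1) / blockSize;
  }

  // What the contract expects: blocks spanning only (start, len).
  static long blocksForRange(long start, long len, long blockSize) {
    if (len <= 0) {
      return 0;
    }
    long firstBlock = start / blockSize;
    long lastBlock = (start + len - 1) / blockSize;
    return lastBlock - firstBlock + 1;
  }

  public static void main(String[] args) {
    // 1 GB file, 256 MB blocks, caller asks for the first 1 MB:
    System.out.println(blocksForWholeFile(1L << 30, 256L << 20)); // 4
    System.out.println(blocksForRange(0, 1L << 20, 256L << 20));  // 1
  }
}
{code}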






[jira] [Resolved] (HADOOP-15050) swift:// doesn't support createNonRecursive. hence the new FS createFile(path) builder

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15050.
-
Resolution: Won't Fix

> swift:// doesn't support createNonRecursive. hence the new FS 
> createFile(path) builder
> --
>
> Key: HADOOP-15050
> URL: https://issues.apache.org/jira/browse/HADOOP-15050
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Major
>
> swift throws an exception in createNonRecursive(), so it fails to work with the 
> new {{FileSystem.createFile(path)}} builder, which uses that mode by default (because 
> it's the right thing to do :)






[jira] [Resolved] (HADOOP-13256) define FileSystem.listStatusIterator, implement contract tests

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13256.
-
Resolution: Duplicate

> define FileSystem.listStatusIterator, implement contract tests
> --
>
> Key: HADOOP-13256
> URL: https://issues.apache.org/jira/browse/HADOOP-13256
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Priority: Major
>
> HADOOP-10987 added a new listing API to FS, but left out the specification 
> and contract tests. This JIRA covers the task of adding them.






[jira] [Resolved] (HADOOP-14767) WASB to implement copyFromLocalFile()

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14767.
-
Resolution: Won't Fix

> WASB to implement copyFromLocalFile()
> -
>
> Key: HADOOP-14767
> URL: https://issues.apache.org/jira/browse/HADOOP-14767
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> WASB just uses the default FS copyFromLocalFile. If HADOOP-14766 adds an 
> object-store-friendly upload command, wasb would benefit the most if it had a 
> {{copyFromLocalFile()}} command tuned to make the most of the API.






[jira] [Resolved] (HADOOP-14813) Windows build fails "command line too long"

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14813.
-
Resolution: Won't Fix

> Windows build fails "command line too long"
> ---
>
> Key: HADOOP-14813
> URL: https://issues.apache.org/jira/browse/HADOOP-14813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
> Environment: Windows. username "Administrator"
>Reporter: Steve Loughran
>Priority: Minor
>
> Trying to build trunk as user "administrator" fails in 
> native-maven-plugin/hadoop common with "command line too long". By the look of 
> things, it's the number of artifacts from the maven repository which is 
> filling up the line; the classpath really needs to go in a file instead, assuming 
> the maven plugin will let us.






[jira] [Resolved] (HADOOP-14755) WASB to implement listFiles(Path f, boolean recursive) through flat list

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14755.
-
Resolution: Won't Fix

> WASB to implement  listFiles(Path f, boolean recursive) through flat list
> -
>
> Key: HADOOP-14755
> URL: https://issues.apache.org/jira/browse/HADOOP-14755
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Major
>
> WASB doesn't override {{FileSystem.listFiles(Path f, boolean recursive)}}, so it 
> picks up the base treewalk implementation. As the blobstore does implement 
> deep listing itself, it should be "straightforward" to implement this.






[jira] [Resolved] (HADOOP-12996) remove @InterfaceAudience.LimitedPrivate({"HDFS"}) from FSInputStream

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12996.
-
Resolution: Duplicate

> remove @InterfaceAudience.LimitedPrivate({"HDFS"}) from FSInputStream
> -
>
> Key: HADOOP-12996
> URL: https://issues.apache.org/jira/browse/HADOOP-12996
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
>  Labels: newbie
>
> FSInputStream is universally used and subclassed, and is core to the HCFS 
> specifications, yet it is tagged {{@InterfaceAudience.LimitedPrivate("HDFS")}}.
> Remove that tag, as it's clearly untrue.






[jira] [Resolved] (HADOOP-13172) And an UnsupportedFeatureException for Filesystems to throw on unsupported operations

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13172.
-
Resolution: Won't Fix

> And an UnsupportedFeatureException for Filesystems to throw on unsupported 
> operations
> -
>
> Key: HADOOP-13172
> URL: https://issues.apache.org/jira/browse/HADOOP-13172
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Filesystems which don't support things like append() tend to throw a simple 
> IOE, which makes it hard to distinguish "append didn't work for some IO 
> problem" from "append isn't implemented".
> If we add a new exception, {{UnsupportedFeatureException}}, make it the 
> strict failure mode of such operations when they are not supported, and patch all our 
> filesystems to raise it, then at least calling code has a straightforward check. Same 
> for any other unimplemented feature.
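
A hypothetical sketch of the proposed exception, using the name from the description; 
nothing here is existing Hadoop code:
{code}
import java.io.IOException;

/** Hypothetical: raised by filesystems for operations they do not implement. */
public class UnsupportedFeatureException extends IOException {
  public UnsupportedFeatureException(String feature, String scheme) {
    super("Feature '" + feature + "' is not supported by filesystem '" + scheme + "'");
  }
}
{code}
A filesystem without append() would then throw
{{new UnsupportedFeatureException("append", "swift")}}, and callers could catch it
separately from ordinary IOExceptions.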






[jira] [Resolved] (HADOOP-12614) Add a generic .isOffline() method to filesystems to probe availability

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12614.
-
Resolution: Won't Fix

> Add a generic .isOffline() method to filesystems to probe availability
> --
>
> Key: HADOOP-12614
> URL: https://issues.apache.org/jira/browse/HADOOP-12614
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Looking at some of the Spark `HistoryServer` code, they do reflection games 
> to check whether HDFS is in safe mode or not, games which vary with version 
> and could be at risk of failing with the client/server split (fortunately, 
> it's all client-side). Nor do the checks apply to other filesystems, which 
> could have their own online/offline state.
> I propose adding new methods {{FileSystem.isOffline()}} and 
> {{FileContext.isOffline()}} that return true if an FS knows that it is 
> offline. For HDFS: safe mode. For other filesystems? Maybe network state, 
> the disk being r/w, etc. Their choice. The default would be false: an FS is not 
> offline.
> Obviously, {{!isOffline()}} doesn't guarantee the FS is fully functional; 
> that's why I propose {{isOffline()}}: it is less dangerous than the opposite, 
> {{isLive()}} or {{isAvailable()}}, which may be making promises that 
> cannot hold.
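
A sketch of the shape of the proposed probe, with the default the description asks 
for (this is not part of the FileSystem API):
{code}
interface OfflineProbe {
  /**
   * @return true only if the filesystem knows it is offline, e.g. HDFS in
   * safe mode. A false return promises nothing about overall health.
   */
  default boolean isOffline() {
    return false;
  }
}
{code}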






[jira] [Resolved] (HADOOP-11531) NativeAzureFsInputStream doesn't report error on seek+read past EOF

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11531.
-
Resolution: Won't Fix

> NativeAzureFsInputStream doesn't report error on seek+read past EOF
> ---
>
> Key: HADOOP-11531
> URL: https://issues.apache.org/jira/browse/HADOOP-11531
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Priority: Minor
>
> This is based on looking at the code, needs a test to verify.
> If you look at {{NativeAzureFsInputStream.skip(pos)}}, the code opens the 
> input source, then sets its position to be {{pos = in.skip(pos)}}. This will 
> be the position requested, or the length of the file: whichever is less.
> A read() will then return -1, as it's at the end of the file. 
> All the other filesystems behave differently, throwing an EOFException if you seek 
> past the length of the file, or, POSIX-style, raising it on the read after a 
> seek past the EOF.






[jira] [Resolved] (HADOOP-9626) Add an interface for any exception to serve up an Exit code

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9626.

Resolution: Duplicate

> Add an interface for any exception to serve up an Exit code
> ---
>
> Key: HADOOP-9626
> URL: https://issues.apache.org/jira/browse/HADOOP-9626
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-9626-001.patch
>
>
> Various exceptions include exit codes, specifically 
> {{Shell.ExitCodeException}} and {{ExitUtils.ExitException}}.
> If all exceptions that want to pass an exit code up to the main method 
> implemented an interface with the method {{int getExitCode()}}, it'd be 
> easier to extract exit codes from these exceptions in a unified way, and so 
> generate the desired exit codes in the application itself.
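
A minimal sketch of such an interface and of the unified extraction it enables 
(the names are illustrative):
{code}
/** Hypothetical: any exception carrying a process exit code implements this. */
interface ExitCodeProvider {
  int getExitCode();
}

class Launcher {
  /** Unified extraction in main(): no per-exception-type special cases. */
  static int exitCodeOf(Throwable t, int fallback) {
    return (t instanceof ExitCodeProvider)
        ? ((ExitCodeProvider) t).getExitCode()
        : fallback;
  }
}
{code}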






[jira] [Resolved] (HADOOP-8607) Replace references to "Dr Who" in codebase with @BigDataBorat

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8607.

Resolution: Won't Fix

Nobody has submitted a patch and this issue is 9 years old; everyone should 
use Kerberos. Closing as WONTFIX.

> Replace references to "Dr Who" in codebase with @BigDataBorat
> -
>
> Key: HADOOP-8607
> URL: https://issues.apache.org/jira/browse/HADOOP-8607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.15.3, 0.16.4, 0.18.3, 0.19.1, 1.0.3, 2.0.0-alpha
>Reporter: Steve Loughran
>Assignee: Sanjay Radia
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> People complain that having "Dr Who" in the code causes confusion and isn't 
> appropriate in Hadoop now that it has matured.
> I propose that we replace this anonymous user ID with {{@BigDataBorat}}. This 
> will
> # Increase brand awareness of @BigDataBorat and their central role in the Big 
> Data ecosystem.
> # Drive traffic to twitter, and increase their revenue. As contributors to 
> the Hadoop platform, this will fund further Hadoop development.
> Patching the code is straightforward; no easy tests, though we could monitor 
> twitter followers to determine rollout of the patch in the field.






[jira] [Resolved] (HADOOP-12593) multiple "volatile long" field declarations exist in the Hadoop codebase

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12593.
-
Resolution: Invalid

> multiple "volatile long" field declarations exist in the Hadoop codebase
> 
>
> Key: HADOOP-12593
> URL: https://issues.apache.org/jira/browse/HADOOP-12593
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If you get your IDE to scan for "volatile long", you find 20-30 entries. 
> Compound updates (such as increments) on volatile `long` variables are not atomic, so 
> these usages can be vulnerable to race conditions generating invalid data.
> They need to be replaced by AtomicLong references, except in the specific 
> case that you want performance values for statistics and are prepared to 
> take the risk.
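
A sketch of the suggested replacement; the counter names are made up. Individual 
reads and writes of a volatile long are atomic, it is the compound read-modify-write 
that can race:
{code}
import java.util.concurrent.atomic.AtomicLong;

class StatsSketch {
  private volatile long bytesReadVolatile;             // "+=" below is read, add, write
  private final AtomicLong bytesRead = new AtomicLong();

  void onReadRacy(long n) { bytesReadVolatile += n; }  // two concurrent calls can lose an update
  void onReadSafe(long n) { bytesRead.addAndGet(n); }  // atomic update
  long total()            { return bytesRead.get(); }
}
{code}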






[jira] [Resolved] (HADOOP-8047) CachedDNSToSwitchMapping caches negative results forever

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8047.

Resolution: Won't Fix

> CachedDNSToSwitchMapping caches negative results forever
> 
>
> Key: HADOOP-8047
> URL: https://issues.apache.org/jira/browse/HADOOP-8047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 1.0.0, 0.23.0, 0.24.0
>Reporter: Steve Loughran
>Priority: Trivial
>
> This is very minor, just worth filing in JIRA unless someone wants to rethink 
> topology caching for a dynamic world.
> # The CachedDNSToSwitchMapping caches the results from all relayed DNS 
> queries.
> # The DNS script mapper returns the default rack for all unknown entries (or 
> when the script fails)
> # The Cache stores this in its map and never re-resolves it.
> As a result, if a node is added to a live cluster that the existing script 
> cannot resolve, then it won't get assigned to a rack unless the script is 
> updated before the rack map is resolved. 
> This isn't usually that important, it just means "update your scripts before 
> adding new racks". Perhaps there should be a page on that activity, "runbook 
> and checklist for adding new servers and racks".
> Where it would matter is if anyone started playing with dynamic topologies, but 
> in that situation the cached mapping itself would become the liability, as it 
> assumes that servers never switch switches in a live system: the topology is 
> static for existing nodes. 






[jira] [Resolved] (HADOOP-8232) Provide a command line entry point to view/test topology options

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8232.

Resolution: Won't Fix

> Provide a command line entry point to view/test topology options
> 
>
> Key: HADOOP-8232
> URL: https://issues.apache.org/jira/browse/HADOOP-8232
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Affects Versions: 0.23.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-8232.patch, HADOOP-8232.patch
>
>
> Add a new command line entry point "topo" with commands for preflight 
> checking of a clusters topology setup. 
> The initial operations would be to list the implementation class of the 
> mapper, and attempt to load it, resolve a set of supplied hostnames, then 
> dump the topology map after the resolution process.
> Target audience: 
> # ops teams trying to get a new/changed script working before deploying it on 
> a cluster.
> # someone trying to write their first script.
> Resolve and list the rack mappings of the given host
> {code}
> hadoop topo test [host1] [host2] ... 
> {code}
> This would load the hostnames from a given file, resolve all of them and list 
> the results:
> {code}
> hadoop topo testfile filename
> {code}
>  This version is intended for the ops team who have a list of hostnames, IP 
> addresses. 
> * Rather than just list them, the ops team may want to mandate that there 
> were no /default-rack mappings found, as that is invariably a sign that the 
> script isn't handling a hostname properly.
> * No attempt to be clever and do IP address resolution, FQDN to hostname 
> mapping, etc.






[jira] [Resolved] (HADOOP-8629) Add option for TableMapping to reload mapping file on match failure

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8629.

Resolution: Won't Fix

> Add option for TableMapping to reload mapping file on match failure
> ---
>
> Key: HADOOP-8629
> URL: https://issues.apache.org/jira/browse/HADOOP-8629
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Steve Loughran
>Priority: Minor
>
> As commented in HADOOP-7030, the table mapping topology mapper handles new 
> node addition worse than the script mapping, because the table mapping is 
> frozen for the life of the service.
> I propose adding an option (true by default?) for the class to look at the 
> timestamp of the mapping file and reload it on a change, if a hostname 
> lookup failed. 
>  
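
A minimal sketch of the proposed behaviour, assuming a simple host-to-rack table; 
none of this is the actual TableMapping code:
{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;

class ReloadingTableSketch {
  private final File mappingFile;
  private long loadedMtime = -1;
  private Map<String, String> hostToRack = new HashMap<>();

  ReloadingTableSketch(File mappingFile) {
    this.mappingFile = mappingFile;
  }

  synchronized String resolve(String host) {
    String rack = hostToRack.get(host);
    if (rack == null && mappingFile.lastModified() != loadedMtime) {
      load();                 // re-read only when a lookup missed and the file changed
      rack = hostToRack.get(host);
    }
    return rack != null ? rack : "/default-rack";
  }

  private void load() {
    loadedMtime = mappingFile.lastModified();
    hostToRack = new HashMap<>();
    // parse "host rack" lines from mappingFile into hostToRack (omitted here)
  }
}
{code}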






[jira] [Resolved] (HADOOP-8628) TableMapping init sets initialized flag prematurely

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8628.

Resolution: Won't Fix

> TableMapping init sets initialized flag prematurely
> ---
>
> Key: HADOOP-8628
> URL: https://issues.apache.org/jira/browse/HADOOP-8628
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Steve Loughran
>Priority: Minor
>
> As reported in HADOOP-7030; the TableMapping class sets the initialized flag 
> to true before attempting to load the table. This means that it is set even 
> if the load failed, so preventing other attempts to load the file from 
> working without restarting the service.






[jira] [Resolved] (HADOOP-7382) hadoop 0.20.203.0 Eclipse Plugin does not work with Eclipse

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-7382.

Resolution: Won't Fix

> hadoop 0.20.203.0 Eclipse Plugin does not work with Eclipse
> ---
>
> Key: HADOOP-7382
> URL: https://issues.apache.org/jira/browse/HADOOP-7382
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: contrib/eclipse-plugin
>Affects Versions: 0.20.203.0
> Environment: window 7 ; eclipse3.6-rc2
>Reporter: shanlin
>Priority: Major
>  Labels: hadoop
>
> The hadoop 0.20.203.0 Eclipse plugin does not work with Eclipse: while adding DFS 
> Locations, the Advanced parameters didn't have a hadoop.job.ugi option.






[jira] [Resolved] (HADOOP-9706) Provide Hadoop Karaf support

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9706.

Resolution: Won't Fix

> Provide Hadoop Karaf support
> 
>
> Key: HADOOP-9706
> URL: https://issues.apache.org/jira/browse/HADOOP-9706
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools
>Reporter: Jean-Baptiste Onofré
>Priority: Major
> Attachments: HADOOP-9706.patch, Karaf-HDFS-client.pdf
>
>
> To follow the discussion about OSGi, and in order to move forward, I propose 
> the following hadoop-karaf bundle.






[jira] [Resolved] (HADOOP-8574) Enable starting hadoop services from inside OSGi

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8574.

Resolution: Won't Fix

> Enable starting hadoop services from inside OSGi
> 
>
> Key: HADOOP-8574
> URL: https://issues.apache.org/jira/browse/HADOOP-8574
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Guillaume Nodet
>Priority: Major
>
> This JIRA captures the needed things in order to start hadoop services in 
> OSGi.
> The main idea I used so far consists in:
>   * using the OSGi ConfigAdmin to store the hadoop configuration
>   * in that configuration, use a few boolean properties to determine which 
> services should be started (nameNode, dataNode ...)
>   * expose a configured url handler so that the whole OSGi runtime can use 
> urls in hdfs:/xxx
>   * the use of an OSGi ManagedService means that when the configuration 
> changes, the services will be stopped and restarted with the new configuration






[jira] [Resolved] (HADOOP-6484) OSGI headers in jar manifest

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-6484.

Resolution: Won't Fix

> OSGI headers in jar manifest
> 
>
> Key: HADOOP-6484
> URL: https://issues.apache.org/jira/browse/HADOOP-6484
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Leen Toelen
>Priority: Major
> Attachments: HADOOP-6484.patch
>
>
> When using hadoop inside an OSGI environment one needs to change the 
> META-INF/MANIFEST.MF file to include OSGI headers (version and symbolic 
> name). It would be convenient to do this in the default build.xml. 
> There are no runtime dependencies.
> An easy way of doing this is to use the bnd ant task: 
> http://www.aqute.biz/Code/Bnd
>  
> (snippet mangled in the archive: it defined the bnd task from bnd.jar and invoked 
> it with classpath="src", eclipse="true", failok="false", exceptions="true", 
> files="test.bnd")
>  






[jira] [Resolved] (HADOOP-7977) Allow Hadoop clients and services to run in an OSGi container

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-7977.

Resolution: Won't Fix

> Allow Hadoop clients and services to run in an OSGi container
> -
>
> Key: HADOOP-7977
> URL: https://issues.apache.org/jira/browse/HADOOP-7977
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 0.24.0
> Environment: OSGi client runtime (Spring ), possibly service 
> runtime (e.g. Apache Karaf)
>Reporter: Steve Loughran
>Priority: Minor
>
> There's been past discussion on running Hadoop client and service code in 
> OSGi. This JIRA issue exists to wrap up the needs and issues. 
> # client-side use of public Hadoop APIs would seem most important.
> # service-side deployments could offer benefits. The non-standard Hadoop Java 
> security configuration may interfere with this goal.
> # testing would all be functional with dependencies on external services, to 
> make things harder.






[jira] [Resolved] (HADOOP-14201) Some 2.8.0 unit tests are failing on windows

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14201.
-
Resolution: Won't Fix

> Some 2.8.0 unit tests are failing on windows
> 
>
> Key: HADOOP-14201
> URL: https://issues.apache.org/jira/browse/HADOOP-14201
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0
> Environment: Windows Server 2012.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14201-001.patch
>
>
> Some of the 2.8.0 tests are failing locally, without much in the way of 
> diagnostics. They may be false alarms related to system, VM setup, 
> performance, or they may be a sign of a problem.






[jira] [Resolved] (HADOOP-15894) getFileChecksum() needs to adopt S3Guard

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15894.
-
Resolution: Won't Fix

> getFileChecksum() needs to adopt S3Guard
> 
>
> Key: HADOOP-15894
> URL: https://issues.apache.org/jira/browse/HADOOP-15894
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
>
> Encountered a 404 failure in 
> {{ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted}}; newly 
> created file wasn't seen. Even with S3Guard enabled, that method isn't doing 
> anything to query the store for the file's existence.






[jira] [Resolved] (HADOOP-15722) regression: Hadoop 2.7.7 release breaks spark submit

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15722.
-
Resolution: Won't Fix

> regression: Hadoop 2.7.7 release breaks spark submit
> 
>
> Key: HADOOP-15722
> URL: https://issues.apache.org/jira/browse/HADOOP-15722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, conf, security
>Affects Versions: 2.7.7
>Reporter: Steve Loughran
>Priority: Major
>
> SPARK-25330 highlights that upgrading spark to hadoop 2.7.7 is causing a 
> regression in client setup, with things only working when 
> {{Configuration.getRestrictParserDefault(Object resource)}} = false.






[jira] [Resolved] (HADOOP-15485) reduce/tune read failure fault injection on inconsistent client

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15485.
-
Resolution: Won't Fix

S3 is now consistent; wontfix.

> reduce/tune read failure fault injection on inconsistent client
> ---
>
> Key: HADOOP-15485
> URL: https://issues.apache.org/jira/browse/HADOOP-15485
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> If you crank up the s3guard directory inconsistency rate to stress test the 
> directory listings, then the read failure rate can go up high enough that 
> read IO fails.
> Maybe that read injection should only happen for the first few seconds of a 
> stream being created, to better model delayed consistency, or at least limit the 
> number of times it can surface in a stream. (This would imply some kind of 
> stream-specific binding.)
> Otherwise: provide a way to explicitly set it, including disabling it.






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-01-21 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.ipc.TestRPC 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-compile-javac-root.txt
  [456K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [208K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [284K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/184/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [108K]
   

[jira] [Resolved] (HADOOP-14766) Cloudup: an object store high performance dfs put command

2021-01-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14766.
-
Resolution: Won't Fix

This lives here: https://github.com/steveloughran/cloudstore

> Cloudup: an object store high performance dfs put command
> -
>
> Key: HADOOP-14766
> URL: https://issues.apache.org/jira/browse/HADOOP-14766
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14766-001.patch, HADOOP-14766-002.patch
>
>
> {{hdfs put local s3a://path}} is suboptimal as it treewalks down the 
> source tree and then, sequentially, copies each file up by reading its 
> contents (opened as a stream) into a buffer, writing that to the dest file, and 
> repeating.
> For S3A that hurts because
> * it's doing the upload inefficiently: the file can be uploaded just by 
> handing the pathname to the AWS transfer manager
> * it is doing it sequentially, when some parallelised upload would work. 
> * as the ordering of the files to upload is a recursive treewalk, it doesn't 
> spread the upload across multiple shards. 
> Better (see the sketch below):
> * build the list of files to upload
> * upload in parallel, picking entries from the list at random and spreading 
> across a pool of uploaders
> * upload straight from the local file (copyFromLocalFile())
> * track IO load (files created/second) to estimate the risk of throttling.
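
A rough sketch of the parallel upload step under the assumptions above; the helper is 
illustrative and skips the listing, shuffling and throttling-tracking parts:
{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ParallelPutSketch {
  static void upload(FileSystem destFs, List<Path> localFiles, Path destDir, int threads)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (Path src : localFiles) {
      pool.submit(() -> {
        // upload straight from the local file; no user-level buffer copy
        destFs.copyFromLocalFile(false, true, src, new Path(destDir, src.getName()));
        return null;
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }
}
{code}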






[jira] [Created] (HADOOP-17483) magic committer to be enabled for all S3 buckets

2021-01-21 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17483:
---

 Summary: magic committer to be enabled for all S3 buckets
 Key: HADOOP-17483
 URL: https://issues.apache.org/jira/browse/HADOOP-17483
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


Now that S3 is consistent, there is no need to disable the magic committer for 
safety.

Remove the option to enable the magic committer (fs.s3a.committer.magic.enabled) and 
the associated checks/probes through the code.

We may want to retain the constants and probes just for completeness/API/CLI 
consistency. 
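
For reference, the probe under discussion is a plain boolean configuration switch, 
along these lines (a sketch, not the S3A source):
{code}
import org.apache.hadoop.conf.Configuration;

class MagicCommitterFlagSketch {
  static final String MAGIC_COMMITTER_ENABLED = "fs.s3a.committer.magic.enabled";

  /** The proposal is to drop this check and treat the committer as always allowed. */
  static boolean magicEnabled(Configuration conf, boolean defaultValue) {
    return conf.getBoolean(MAGIC_COMMITTER_ENABLED, defaultValue);
  }
}
{code}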






[jira] [Reopened] (HADOOP-17371) Bump Jetty to the latest version 9.4.35

2021-01-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HADOOP-17371:
--

> Bump Jetty to the latest version 9.4.35
> ---
>
> Key: HADOOP-17371
> URL: https://issues.apache.org/jira/browse/HADOOP-17371
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
> 9.4.34






[jira] [Resolved] (HADOOP-17371) Bump Jetty to the latest version 9.4.35

2021-01-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17371.
--
Resolution: Fixed

> Bump Jetty to the latest version 9.4.35
> ---
>
> Key: HADOOP-17371
> URL: https://issues.apache.org/jira/browse/HADOOP-17371
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
> 9.4.34


