[jira] [Created] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.
Chris Nauroth created HADOOP-13727:
--
Summary: S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.
Key: HADOOP-13727
URL: https://issues.apache.org/jira/browse/HADOOP-13727
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Reporter: Rajesh Balamohan
Assignee: Chris Nauroth

When running in an EC2 VM, S3A can make use of {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials from the EC2 Instance Metadata Service. We have observed that for a highly multi-threaded application, this may generate a high number of calls to the Instance Metadata Service. The service may throttle the client by replying with an HTTP 429 response or forcibly closing connections. We can greatly reduce the number of calls to the service by enforcing that all threads use a single shared instance of {{InstanceProfileCredentialsProvider}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
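The fix described in the issue — all threads sharing one provider instead of each creating their own — can be sketched in plain Java. This is a minimal illustration only: `SharedCredentialsProvider` and `fetchFromMetadataService` are hypothetical names standing in for the AWS SDK's `InstanceProfileCredentialsProvider` and its HTTP call to the metadata service.

```java
// Hypothetical sketch of the single-shared-instance approach: every thread
// reuses one provider, so there is only one path to the metadata service
// and at most one in-flight fetch at a time.
final class SharedCredentialsProvider {

    // Eagerly created singleton; the JVM guarantees safe publication
    // of static final fields before any thread can observe the class.
    private static final SharedCredentialsProvider INSTANCE =
            new SharedCredentialsProvider();

    private String cachedCredentials;

    private SharedCredentialsProvider() { }

    static SharedCredentialsProvider getInstance() {
        return INSTANCE;
    }

    // Synchronized so concurrent callers trigger at most one fetch;
    // subsequent callers reuse the cached result.
    synchronized String getCredentials() {
        if (cachedCredentials == null) {
            cachedCredentials = fetchFromMetadataService();
        }
        return cachedCredentials;
    }

    // Stand-in for the HTTP request to the EC2 Instance Metadata Service.
    private String fetchFromMetadataService() {
        return "role-credentials";
    }
}
```

With one process-wide instance, a thousand worker threads still produce only one credential fetch until the cached value needs refreshing, instead of a thousand independent connections.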
[jira] [Created] (HADOOP-13728) S3A can support short user-friendly aliases for configuration of credential providers.
Chris Nauroth created HADOOP-13728:
--
Summary: S3A can support short user-friendly aliases for configuration of credential providers.
Key: HADOOP-13728
URL: https://issues.apache.org/jira/browse/HADOOP-13728
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Reporter: Chris Nauroth
Priority: Minor

This issue proposes to support configuration of the S3A credential provider chain using short aliases to refer to the common credential providers in addition to allowing full class names. Supporting short aliases would provide a simpler operations experience for the most common cases.
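Alias resolution of the kind proposed above is a simple lookup with pass-through for unrecognized values. The alias names below ("instance", "environment") are hypothetical examples, not ones any Hadoop release is known to ship:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of alias resolution for a credential-provider
// configuration key. Alias names and the mapping are illustrative only.
class CredentialProviderAliases {

    private static final Map<String, String> ALIASES = new HashMap<>();
    static {
        ALIASES.put("instance",
            "com.amazonaws.auth.InstanceProfileCredentialsProvider");
        ALIASES.put("environment",
            "com.amazonaws.auth.EnvironmentVariableCredentialsProvider");
    }

    // A short alias resolves to its full class name; anything else is
    // assumed to already be a fully qualified class name and passes through,
    // preserving backward compatibility with existing configurations.
    static String resolve(String configured) {
        return ALIASES.getOrDefault(configured, configured);
    }
}
```

The pass-through branch is what keeps existing deployments working: operators who already configure full class names see no behavior change.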
[jira] [Resolved] (HADOOP-8190) Eclipse plugin fails to access remote cluster
[ https://issues.apache.org/jira/browse/HADOOP-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang resolved HADOOP-8190.
-
Resolution: Not A Problem

This JIRA is 4 years old, so I'm going to resolve it. Please reopen if you intend to work on it.

> Eclipse plugin fails to access remote cluster
> -
>
> Key: HADOOP-8190
> URL: https://issues.apache.org/jira/browse/HADOOP-8190
> Project: Hadoop Common
> Issue Type: Bug
> Components: contrib/eclipse-plugin
> Affects Versions: 0.20.205.0
> Environment: Windows and Linux (all)
> Reporter: Ambud Sharma
> Priority: Critical
> Labels: gsoc2012
> Original Estimate: 12h
> Remaining Estimate: 12h
>
> Eclipse plugin fails to access remote file system.
[jira] [Resolved] (HADOOP-9280) HADOOP-7101 was never merged from 0.23.x to the 1.x branch
[ https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang resolved HADOOP-9280.
-
Resolution: Won't Fix

Old JIRA for branch-1, resolving.

> HADOOP-7101 was never merged from 0.23.x to the 1.x branch
> --
>
> Key: HADOOP-9280
> URL: https://issues.apache.org/jira/browse/HADOOP-9280
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 1.0.4
> Reporter: Claus Ibsen
> Assignee: Suresh Srinivas
> Priority: Critical
>
> See HADOOP-7101.
> This code fix went into the 0.23 branch, but was never merged into the 1.x
> branch, which is causing problems for people upgrading from 0.23 to 1.0.
Chrome extension to collapse JIRA comments
Hi folks,

Sorry for the widespread email, but I thought you would find this useful.

My colleague, Peter, put together this Chrome extension to collapse comments from certain users (HadoopQA, Githubbot), which makes tracking conversations in JIRAs much easier.

Cheers!
Karthik
Re: Chrome extension to collapse JIRA comments
Never included the link :) https://github.com/gezapeti/jira-comment-collapser

On Mon, Oct 17, 2016 at 6:46 PM, Karthik Kambatla wrote:
> Hi folks
>
> Sorry for the widespread email, but thought you would find this useful.
>
> My colleague, Peter, had put together this chrome extension to collapse
> comments from certain users (HadoopQA, Githubbot) that makes tracking
> conversations in JIRAs much easier.
>
> Cheers!
> Karthik
RE: Updated 2.8.0-SNAPSHOT artifact
Hi Vinod,

Any plan for the first RC for branch-2.8? I think it has been a long time.

--Brahma Reddy Battula

-Original Message-
From: Vinod Kumar Vavilapalli [mailto:vino...@apache.org]
Sent: 20 August 2016 00:56
To: Jonathan Eagles
Cc: common-dev@hadoop.apache.org
Subject: Re: Updated 2.8.0-SNAPSHOT artifact

Jon,

That is around the time when I branched 2.8, so I guess you were getting SNAPSHOT artifacts till then from the branch-2 nightly builds.

If you need it, we can set up SNAPSHOT builds. Or just wait for the first RC, which is around the corner.

+Vinod

> On Jul 28, 2016, at 4:27 PM, Jonathan Eagles wrote:
>
> Latest snapshot is uploaded in Nov 2015, but checkins are still coming
> in quite frequently.
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-yarn-api/
>
> Are there any plans to start producing updated SNAPSHOT artifacts for
> current hadoop development lines?
[jira] [Created] (HADOOP-13726) Enforce that FileSystem initializes only a single instance of the requested FileSystem.
Chris Nauroth created HADOOP-13726:
--
Summary: Enforce that FileSystem initializes only a single instance of the requested FileSystem.
Key: HADOOP-13726
URL: https://issues.apache.org/jira/browse/HADOOP-13726
Project: Hadoop Common
Issue Type: Improvement
Components: fs
Reporter: Chris Nauroth

The {{FileSystem}} cache is intended to guarantee reuse of instances by multiple call sites or multiple threads. The current implementation does provide this guarantee, but there is a brief race condition window during which multiple threads could perform redundant initialization. If the file system implementation has expensive initialization logic, then this is wasteful. This issue proposes to eliminate that race condition and guarantee initialization of only a single instance.
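One standard way to close a check-then-create race like the one described above is `ConcurrentHashMap.computeIfAbsent`, which runs the initializer atomically, at most once per key. The sketch below is illustrative only; the real `FileSystem` cache is keyed by scheme, authority, and UGI rather than a plain string, and the actual patch may take a different approach.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: computeIfAbsent guarantees the (possibly expensive)
// initializer runs at most once per key, so two threads racing on the same
// key can never each build their own instance.
class FileSystemCache {

    // Counts initializer runs, standing in for "expensive initialization".
    static final AtomicInteger initCount = new AtomicInteger();

    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    Object get(String key) {
        // Atomic check-and-create: concurrent callers for the same key
        // block until the single initializer finishes, then share its result.
        return cache.computeIfAbsent(key, k -> {
            initCount.incrementAndGet();
            return new Object();  // stand-in for the FileSystem instance
        });
    }
}
```

By contrast, the racy pattern `if (!cache.containsKey(k)) cache.put(k, init())` lets two threads both observe the missing key and both run `init()`, which is exactly the redundant initialization the issue targets.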
[jira] [Created] (HADOOP-13725) Open MapFile for append
VITALIY SAVCHENKO created HADOOP-13725:
--
Summary: Open MapFile for append
Key: HADOOP-13725
URL: https://issues.apache.org/jira/browse/HADOOP-13725
Project: Hadoop Common
Issue Type: New Feature
Reporter: VITALIY SAVCHENKO

I think it is possible to open a MapFile for appending. SequenceFile supports this (option SequenceFile.Writer.appendIfExists(true), HADOOP-7139). It almost works now, but when SequenceFile.Writer.appendIfExists(true) is used with MapFile.Writer, the writer does not read the last existing key and does not validate new keys against it. Because of this, the MapFile can become corrupted.
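The missing safety check described above — recovering the last existing key on open so appended keys can be validated — could look roughly like the following. This is a plain-Java sketch with hypothetical names (`SortedAppender`), not the actual `MapFile.Writer` API; it only illustrates the ordering invariant a MapFile appender would need to enforce.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a MapFile-style writer must keep keys sorted. An
// append mode therefore has to read the last key of the existing data and
// reject out-of-order appends; skipping that check (the bug described in
// the issue) lets appends silently break the sort order.
class SortedAppender {

    private final List<String> keys;
    private String lastKey;

    SortedAppender(List<String> existingKeys) {
        this.keys = new ArrayList<>(existingKeys);
        // Recover the last key on open, as a correct append mode must.
        this.lastKey = keys.isEmpty() ? null : keys.get(keys.size() - 1);
    }

    void append(String key) {
        // Enforce strictly increasing key order across the append boundary.
        if (lastKey != null && key.compareTo(lastKey) <= 0) {
            throw new IllegalArgumentException(
                "key " + key + " is not greater than last key " + lastKey);
        }
        keys.add(key);
        lastKey = key;
    }
}
```

Without the constructor's last-key recovery, the first appended key would be accepted unconditionally, which is how the reported corruption arises.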
[jira] [Resolved] (HADOOP-13310) S3A reporting of file group as empty is harmful to compatibility for the shell.
[ https://issues.apache.org/jira/browse/HADOOP-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-13310.
-
Resolution: Duplicate

> S3A reporting of file group as empty is harmful to compatibility for the
> shell.
> ---
>
> Key: HADOOP-13310
> URL: https://issues.apache.org/jira/browse/HADOOP-13310
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Chris Nauroth
> Priority: Minor
>
> S3A does not persist group information in file metadata. Instead, it stubs
> the value of the group to an empty string. Although the JavaDocs for
> {{FileStatus#getGroup}} indicate that empty string is a possible return
> value, this is likely to cause compatibility problems. Most notably, shell
> scripts that expect to be able to perform positional parsing on the output of
> things like {{hadoop fs -ls}} will stop working if retargeted from HDFS to
> S3A.
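The positional-parsing hazard is easy to demonstrate with ordinary whitespace splitting of listing-style output. The sample lines below are fabricated for illustration, not real `hadoop fs -ls` output:

```shell
# Illustrative only: fake ls-style lines, not captured command output.
# An HDFS-style listing line has 8 whitespace-separated fields, so scripts
# often grab the path positionally as field 8:
#   perms  repl  owner  group  size  date  time  path
hdfs_line='-rw-r--r--   3 alice hadoop   1048576 2016-10-18 10:00 /data/file'
echo "$hdfs_line" | awk '{print $8}'    # field 8 is the path

# If the group renders as an empty string, that column vanishes and every
# later field shifts left: the same script now reads an empty field 8,
# because the path has become field 7.
s3a_line='-rw-r--r--   1 alice   1048576 2016-10-18 10:00 s3a://bucket/file'
echo "$s3a_line" | awk '{print $8}'
echo "$s3a_line" | awk '{print $7}'     # the path landed here instead
```

Any script doing `awk '{print $8}'` (or equivalent `cut`/`read` parsing) on listing output would thus silently read the wrong column when retargeted from HDFS to a store that omits the group.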
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs :

       module:hadoop-common-project/hadoop-kms
       Exception is caught when Exception is not thrown in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:[line 169]
       Exception is caught when Exception is not thrown in org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, String, int) At KMS.java:[line 501]

    Failed junit tests :

       hadoop.ha.TestZKFailoverController
       hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
       hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
       hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
       hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
       hadoop.yarn.server.TestContainerManagerSecurity

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-compile-javac-root.txt [168K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-checkstyle-root.txt [16M]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-patch-pylint.txt [16K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-patch-shelldocs.txt [16K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/whitespace-eol.txt [11M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/whitespace-tabs.txt [1.3M]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [120K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [148K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [40K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [124K]

   asflicense:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT
http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/

[Oct 17, 2016 12:04:49 PM] (weichiu) HADOOP-13661. Upgrade HTrace version. Contributed by Sean Mackrory.

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :

       hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
       hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
       hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
       hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
       hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
       hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
       hadoop.yarn.server.timeline.TestRollingLevelDB
       hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
       hadoop.yarn.server.timeline.TestTimelineDataManager
       hadoop.yarn.server.timeline.TestLeveldbTimelineStore
       hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
       hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
       hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
       hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
       hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
       hadoop.yarn.server.timelineservice.storage.common.TestSeparator
       hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
       hadoop.yarn.server.resourcemanager.TestRMRestart
       hadoop.yarn.server.resourcemanager.TestResourceTrackerService
       hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
       hadoop.yarn.server.TestContainerManagerSecurity
       hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
       hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
       hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
       hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage
       hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
       hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
       hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
       hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
       hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
       hadoop.yarn.applications.distributedshell.TestDistributedShell
       hadoop.mapred.TestShuffleHandler
       hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests :

       org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
       org.apache.hadoop.mapred.TestMRIntermediateDataEncryption

   compile:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-compile-root.txt [312K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-compile-root.txt [312K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-compile-root.txt [312K]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [196K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [52K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt [20K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [72K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt [28K]
Planning for 3.0.0-alpha2
Hi folks,

It's been a month since 3.0.0-alpha1, and we've been incorporating fixes based on downstream feedback. Thus, it's getting to be time for 3.0.0-alpha2. I'm using this JIRA query to track open issues:

https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20MAPREDUCE%2C%20YARN)%20AND%20%22Target%20Version%2Fs%22%20in%20(3.0.0-alpha2%2C%203.0.0-beta1%2C%202.8.0)%20AND%20statusCategory%20not%20in%20(Complete)%20ORDER%20BY%20priority

If alpha2 goes well, we can declare feature freeze, cut branch-3, and move onto beta1. My plan for the 3.0.0 release timeline looks like this:

* alpha2 in early November
* beta1 in early Jan
* GA in early March

I'd appreciate everyone's help in resolving blocker and critical issues on the above JIRA search.

Thanks,
Andrew