+1 (binding)
- Built from source
- Brought up a non-secure virtual cluster w/ NN, 1 DN, RM, AHS, JHS, and 3 NMs
- Validated inter- and intra-queue preemption
- Validated exclusive node labels
Thanks a lot Chao for your diligence and hard work on this release.
Eric
On
+1 (binding)
- Built from source
- Brought up cluster
- Tested streaming and sleep jobs
-Eric Payne
On Wednesday, December 9, 2020, 11:01:38 AM CST, Xiaoqiao He
wrote:
Hi folks,
The release candidate (RC4) for Hadoop-3.2.2 is available now.
There are 10 commits[1] differences between RC4
Congratulations, Lisheng Sun!
Masatake,
Thank you for the good work on creating this release!
+1
I downloaded and built the source. I ran a one-node cluster with 6 NMs.
I manually ran apps in the Capacity Scheduler to test labels and capacity
assignments.
-Eric
On Monday, September 14, 2020, 12:59:17 PM CDT, Masatake
Eric Payne created HDFS-14758:
-
Summary: Decrease lease hard limit
Key: HDFS-14758
URL: https://issues.apache.org/jira/browse/HDFS-14758
Project: Hadoop HDFS
Issue Type: Improvement
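For context on what this improvement involved: the lease hard limit was previously a hard-coded constant in the NameNode. As best I recall the committed patch, it became configurable with a smaller default; the property name and value below are recollections, not quotes from this issue, so treat them as assumptions:

```xml
<!-- hdfs-site.xml sketch; property name/default are assumptions
     based on memory of the committed change, not this summary. -->
<property>
  <name>dfs.namenode.lease-hard-limit-sec</name>
  <value>1200</value> <!-- 20 minutes, down from the old 1-hour hard limit -->
</property>
```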
Congratulations Tao! Well deserved!
On Monday, July 15, 2019, 4:54:10 AM CDT, Weiwei Yang
wrote:
Hi Dear Apache Hadoop Community
It's my pleasure to announce that Tao Yang has been elected as an Apache
Hadoop committer, this is to recognize his contributions to Apache Hadoop
YARN
It is my pleasure to announce that Eric Badger has accepted an invitation to
become a Hadoop Core committer.
Congratulations, Eric! This is well-deserved!
-Eric Payne
+1 (binding)
- RM refresh updates values as expected
- Streaming jobs complete successfully
- Moving apps between queues succeeds
- Inter-queue preemption works as expected
- Successfully ran selected yarn unit tests.
===
Eric Payne
===
On Tuesday, January 8, 2019, 5:42:46 AM CST
a blocker for
release. I'm on the fence.
Thanks,
-Eric
On Wednesday, November 28, 2018, 4:58:50 PM CST, Eric Payne
wrote:
Sunil,
So, the basic symptoms are that if preemption is enabled on any queue, the
preemption is disabled after a 'yarn rmadmin -refreshQueues'. In addition, all of
the
CST, Eric Payne
wrote:
Sunil, thanks for all of the hard work on this release.
I have discovered that queue refresh doesn't work in some cases. For example,
when I change yarn.scheduler.capacity.root.default.disable_preemption, it
doesn't take effect unless I restart the RM.
I am still
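For anyone wanting to reproduce the refresh problem described above, a minimal sketch (file location and queue name are illustrative; the property name is taken from the report):

```shell
# Toggle preemption for the default queue in
# $HADOOP_CONF_DIR/capacity-scheduler.xml:
#   <property>
#     <name>yarn.scheduler.capacity.root.default.disable_preemption</name>
#     <value>true</value>
#   </property>
# Then refresh the queues without restarting the RM:
yarn rmadmin -refreshQueues
# Per the report, the change does not take effect until the RM is restarted.
```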
+1 (binding)
-- Built from source
-- Installed on 6-node pseudo cluster
-- Tested intra- and inter-queue preemption, user weights
-- Ran streaming jobs, word count, and teragen/terasort tests
Thanks Akira for all of the hard work.
-Eric Payne
On Tuesday, November 13, 2018, 7:02:51 PM CST, Akira
Thanks a lot Junping!
+1 (binding)
Tested the following:
- Built from source
- Installed on a 7 node, multi-tenant, insecure pseudo cluster, running YARN
capacity scheduler
- Added a queue via refresh
- Verified various GUI pages
- Streaming jobs
- Cross-queue (Inter) preemption
- In-queue
Thanks Wangda for creating this release.
+1 (binding)
Tested:
- Built from source
- Deployed to 6-node, multi-tenant, unsecured pseudo cluster with hierarchical
queue structure (CS)
- Refreshed queue (CS) properties
- Intra-queue preemption (CS)
- inter-queue preemption (CS)
- User weights (CS)
Sorry, Yongjun. My +1 is also binding.
+1 (binding)
-Eric Payne
On Friday, June 1, 2018, 12:25:36 PM CDT, Eric Payne
wrote:
Thanks a lot, Yongjun, for your hard work on this release.
+1
- Built from source
- Installed on 6 node pseudo cluster
Tested the following in the Capacity
Thanks a lot, Yongjun, for your hard work on this release.
+1
- Built from source
- Installed on 6 node pseudo cluster
Tested the following in the Capacity Scheduler:
- Verified that running apps in labelled queues restricts tasks to the labelled
nodes.
- Verified that various queue config
anches if the branch is cut too early.
My 2 cents...
Thanks,
-Eric Payne
On Monday, May 7, 2018, 12:09:00 AM CDT, Yongjun Zhang <yjzhan...@apache.org>
wrote:
Hi All,
>
We have released Apache Hadoop 3.0.2 in April of this year [1]. Since then,
there are quite some commits don
line
- intra-queue preemption
- inter-queue preemption
- verified preemption properties are refreshable
Thanks,
Eric Payne
On Wednesday, April 25, 2018, 12:12:24 AM CDT, Chen, Sammi
<sammi.c...@intel.com> wrote:
Paste the links here,
The artifacts are available here:
, with and without size-based
weight
- tested user weights
Thanks!
Eric Payne
On Monday, April 16, 2018, 7:00:03 PM CDT, Lei Xu <l...@apache.org> wrote:
Hi, All
I've created release candidate RC-1 for Apache Hadoop 3.0.2, to
address missing source jars in the maven repository in RC-0.
weights with FifoOrderingPolicy to ensure that weights were
assigned to users as expected.
Eric Payne
On Friday, April 6, 2018, 1:17:10 PM CDT, Lei Xu <l...@apache.org> wrote:
Hi, All
I've created release candidate RC-0 for Apache Hadoop 3.0.2.
Please note: this is an ame
- Tested simple inter-queue preemption
- Tested priority first intra-queue preemption
- Tested userlimit first intra-queue preemption
Thanks,
Eric Payne
===
On Thursday, March 29, 2018, 11:15:51 PM CDT, Wangda Tan
<wheele...@gmail.
Thanks for working on this release!
+1 (binding)
I tested the following:
- yarn distributed shell job
- yarn streaming job
- inter-queue preemption
- compared behavior of fair and fifo ordering policy
- both userlimit_first mode and priority_first mode of intra-queue preemption
Eric Payne
Thanks for the hard work on this release, Konstantin.
+1 (binding)
- Built from source
- Verified that refreshing of queues works as expected.
- Verified can run multiple users in a single queue
- Ran terasort test
- Verified that cross-queue preemption works as expected
Thanks. Eric Payne
.
Eric Payne
From: Junping Du <j...@hortonworks.com>
To: "common-...@hadoop.apache.org" <common-...@hadoop.apache.org>;
"hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>;
"mapreduce-...@hadoop.apache.org" <mapreduce-...@hado
this feature.
Huge thanks to everyone who helped with reviews, commits, guidance, and
technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
Rohith Sharma K S, Eric Payne .
[1] :
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%3CCACYiTuhKhF1JCtR7ZFuZSEKQ4s
Thanks Sunil for the great work on this feature.
I looked through the design document, reviewed the code, and tested out branch
YARN-5881. The design makes sense and the code looks like it is implementing
the design in a sensible way. However, I have encountered a couple of bugs. I
opened
during In-queue preemption
o Users with different weights are assigned resources proportional to their
weights.
o User weights are refreshable, and in-queue preemption works to honor the
post-refresh weights
Thanks,
-Eric Payne
From: Andrew Wang <andrew.w...@cloudera.com>
To: &
the command line
o Users with different weights are assigned resources proportional to their
weights.
Thanks,
-Eric Payne
From: Arun Suresh <asur...@apache.org>
To: yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; Hadoop Common
<common-...@hadoop.apache.org>; Hdfs-d
+1 (binding)
Thanks a lot, Junping!
I built and installed the source on a 6-node pseudo cluster. I ran simple sleep
and streaming jobs that exercised intra-queue and inter-queue preemption, and used
user weights.
-Eric
From: Junping Du
To:
Eric Payne created HDFS-12625:
-
Summary: Reduce expense of deleting large directories
Key: HDFS-12625
URL: https://issues.apache.org/jira/browse/HDFS-12625
Project: Hadoop HDFS
Issue Type: Bug
) preemption, both USERFIRST and PRIORITYFIRST
o User weights not equal to 1
o User weights in conjunction with in-queue preemption.
-Eric Payne
From: Andrew Wang <andrew.w...@cloudera.com>
To: "common-...@hadoop.apache.org" <common-...@hadoop.apache.org>;
"hdfs-dev@h
+1 for this branching proposal.
-Eric
From: Andrew Wang
To: "common-...@hadoop.apache.org" ;
"mapreduce-...@hadoop.apache.org" ;
"hdfs-dev@hadoop.apache.org" ;
around under there and see if the blocks still contain
your data. Depending on how big your data was and how much other data you have
in the filesystem, you may be able to piece your deleted data together.
: Eric Payne
From: Wei-Chiu Chuang <weic...@apache.org>
To: panfei <cnwe...@gmail.co
+1 (binding)
Tested the following:
- Application History Server
-- Apps can be observed from UI
-- App and container metadata can be retrieved via REST APIs
- RM UI
-- Can kill an app from the RM UI
- Apps run in different frameworks. Frameworks tested: MR and yarn shell
-- In yarn
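The REST checks above can be done with curl against the Application History Server's v1 web services; a sketch, assuming the default AHS web port (hostname and application ID are placeholders):

```shell
# List applications known to the Application History Server.
curl "http://ahs-host:8188/ws/v1/applicationhistory/apps"
# Retrieve container metadata for one app (app ID is illustrative).
curl "http://ahs-host:8188/ws/v1/applicationhistory/apps/application_1450000000000_0001/appattempts"
```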
Thanks Andrew.
I downloaded the source, built it, and installed it onto a pseudo distributed
4-node cluster.
I ran mapred and streaming test cases, including sleep and wordcount.
+1 (non-binding)
-Eric
From: Andrew Wang
To: "common-...@hadoop.apache.org"
How do we come to a resolution regarding whether or not re-cut branch-2.8 or
release it as it is (after fixing blockers)?
There are some things in branch-2 that I would like to pull back into
branch-2.8, so a resolution to this question will affect how I proceed.
Thanks,
-Eric
From: Karthik
Thank you very much, Andrew.
+1 (non-binding)
- Downloaded source and built native
- Installed on 3-node, non-secure cluster
- Ran sleep jobs
- Ensured preemption works as expected
-Eric Payne
- Original Message -
From: Andrew Wang <andrew.w...@cloudera.com>
To: &
mpted as I thought should have
been, but once the other underserved app began to run, it stopped preempting.
Also, it didn't preempt between 2 queues with the same partition label.
Partition preemption may not be supported in 2.7, so this is probably also okay.
Thanks!
jobs are running
in labelled queues (YARN-4751).
- Ensure that a yarn distributed shell application can be launched and complete
successfully.
Eric Payne
From: Vinod Kumar Vavilapalli <vino...@apache.org>
To: "common-...@hadoop.apache.org" <common-...@hadoop.apa
Vinod, I have opened https://issues.apache.org/jira/browse/YARN-4751 to cover
this issue. It happens in 2.7, not 2.8.
Thanks,
-Eric
From: Vinod Kumar Vavilapalli <vino...@apache.org>
To: hdfs-dev@hadoop.apache.org; Eric Payne <eric.payne1...@yahoo.com>
Cc: Hadoop Co
, which is
the behavior I would expect if no node had the specified label. I will also add
that this procedure works fine in 2.7.
Thanks,
-Eric Payne
From: Junping Du <j...@hortonworks.com>
To: "hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>;
"yarn-.
com>
To: mapreduce-...@hadoop.apache.org; Eric Payne <eric.payne1...@yahoo.com>
Cc: "common-...@hadoop.apache.org" <common-...@hadoop.apache.org>;
"hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>;
"yarn-...@hadoop.apache.org" <yarn-...@hadoop.apach
for that queue in the "Application
Queues" section.
- Tested to make sure application data is preserved in the generic application
history server when the AM crashes during startup (e.g., exceeds max splits).
-Eric Payne
From: Vinod Kumar Vavilap
Eric Payne created HDFS-9634:
Summary: webhdfs client side exceptions don't provide enough
details
Key: HDFS-9634
URL: https://issues.apache.org/jira/browse/HDFS-9634
Project: Hadoop HDFS
Issue
2.7 and branch-2.8 as well, when changes are applicable, so we can
maintain consistency across those releases as well.
Thanks,
-Eric Payne
From: Junping Du <j...@hortonworks.com>
To: "common-...@hadoop.apache.org" <common-...@hadoop.apache.org>;
"yar
Eric Payne created HDFS-9235:
Summary: hdfs-native-client build getting errors when built with
cmake 2.6
Key: HDFS-9235
URL: https://issues.apache.org/jira/browse/HDFS-9235
Project: Hadoop HDFS
Eric Payne created HDFS-9216:
Summary: Fix RAT licensing issues
Key: HDFS-9216
URL: https://issues.apache.org/jira/browse/HDFS-9216
Project: Hadoop HDFS
Issue Type: Bug
Components
[
https://issues.apache.org/jira/browse/HDFS-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Eric Payne resolved HDFS-9216.
--
Resolution: Duplicate
> Fix RAT licensing issues
Eric Payne created HDFS-9211:
Summary: branch-2 broken by incorrect version in
hadoop-hdfs-native-client/pom.xml
Key: HDFS-9211
URL: https://issues.apache.org/jira/browse/HDFS-9211
Project: Hadoop HDFS
container, and verified that it would in fact keep the containers running
across AM restarts.
- Successfully ran wordcount jobs.
Thank you,
-Eric Payne
From: Vinod Kumar Vavilapalli <vino...@apache.org>
To: common-...@hadoop.apache.org; yarn-...@hadoop.apache.org;
hdfs-dev@hadoop.apac
+1 (non-binding)
+ Downloaded and built source
+ Installed on one-node cluster
+ Ran simple manual tests
+ Spot-checked unit tests
Thanks, Vinod, for managing this release!
-Eric Payne
From: Vinod Kumar Vavilapalli vino...@apache.org
To: common-...@hadoop.apache.org; hdfs-dev
Thank you Allen and Barbara for organizing the bug bash and for the
post-mortem. I would definitely like to have another one in the fall.
From: Allen Wittenauer a...@altiscale.com
To: common-...@hadoop.apache.org common-...@hadoop.apache.org
Cc: mapreduce-...@hadoop.apache.org
, and
TestWebHdfsFileSystemContract#testGetFileBlockLocations.
Thank you,
-Eric Payne
From: Vinod Kumar Vavilapalli vino...@apache.org
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Cc: vino...@apache.org
Sent: Friday, April 10, 2015 6:44 PM
you,
-Eric Payne
From: Arun C Murthy a...@hortonworks.com
To: common-...@hadoop.apache.org common-...@hadoop.apache.org;
hdfs-dev@hadoop.apache.org hdfs-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org yarn-...@hadoop.apache.org;
mapreduce-...@hadoop.apache.org mapreduce
the containers start.
Enabled the preemption feature and verified containers were preempted and
queues were levelized.
Ran unit tests for hadoop-yarn-server-resourcemanager.
Ran unit tests for hadoop-hdfs.
Thank you,
-Eric Payne
From: Arun C Murthy a...@hortonworks.com
To: common
[
https://issues.apache.org/jira/browse/HDFS-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Eric Payne resolved HDFS-7362.
--
Resolution: Duplicate
This is not exactly a dup of HADOOP-10817, but changes made for HADOOP-10817
Eric Payne created HDFS-7362:
Summary: Proxy user refresh won't modify or remove existing groups
or hosts from super user list
Key: HDFS-7362
URL: https://issues.apache.org/jira/browse/HDFS-7362
Project
Eric Payne created HDFS-7224:
Summary: Allow reuse of NN connections via webhdfs
Key: HDFS-7224
URL: https://issues.apache.org/jira/browse/HDFS-7224
Project: Hadoop HDFS
Issue Type: Bug
Eric Payne created HDFS-7163:
Summary: port read retry logic from 0.23's
WebHdfsFilesystem#WebHdfsInputStream to 2.x
Key: HDFS-7163
URL: https://issues.apache.org/jira/browse/HDFS-7163
Project: Hadoop
the containers start.
Ran unit tests for hadoop-yarn-server-resourcemanager, with one failure:
TestRMWebServicesAppsModification#testSingleAppKill. (YARN-2158)
Ran unit tests for hadoop-hdfs, with one failure:
TestByteRangeInputStream#testPropagatedClose
Thank you,
-Eric Payne
Eric Payne created HDFS-6915:
Summary: hadoop fs -text of zero-length file causes EOFException
Key: HDFS-6915
URL: https://issues.apache.org/jira/browse/HDFS-6915
Project: Hadoop HDFS
Issue Type
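A reproduction sketch for the bug in this summary (path is illustrative; the EOFException behavior is the report, not something verified here):

```shell
# Create a zero-length file in HDFS, then try to print it.
hadoop fs -touchz /tmp/empty.txt
# Per the report, this throws EOFException instead of printing nothing:
hadoop fs -text /tmp/empty.txt
```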
Eric Payne created HDFS-6269:
Summary: NameNode Audit Log should differentiate between webHDFS
open and HDFS open.
Key: HDFS-6269
URL: https://issues.apache.org/jira/browse/HDFS-6269
Project: Hadoop HDFS
Thanks Eli.
I have resolvers=internal in my $HOME/build.properties file. Is that enough,
or should I also put -Dresolvers=internal on the command line?
Thanks,
-Eric
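For reference, both forms were used with the ant/ivy build of that era; a sketch under the assumption that the command-line flag simply overrides the properties file (target name is illustrative):

```shell
# Option 1: persistent setting in $HOME/build.properties
#   resolvers=internal
# Option 2: one-off override on the command line (takes precedence)
ant -Dresolvers=internal mvn-install
```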
-Original Message-
From: Eli Collins [mailto:e...@cloudera.com]
Sent: Friday, August 12, 2011 12:06 PM
To: Eric Payne
[
https://issues.apache.org/jira/browse/HDFS-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Eric Payne resolved HDFS-2171.
--
Resolution: Duplicate
Putting patches for both 0.20.205.0 and 0.23.0 on HDFS-2202.
Changes
Type: Bug
Components: balancer, data-node
Affects Versions: 0.20.205.0
Reporter: Eric Payne
Assignee: Eric Payne
Fix For: 0.20.205.0
Currently in order to change the value of the balancer bandwidth
(dfs.datanode.balance.bandwidthPerSec
Components: balancer, data-node
Affects Versions: 0.20.205.0
Reporter: Eric Payne
Assignee: Eric Payne
Fix For: 0.20.205.0
Currently in order to change the value of the balancer bandwidth
(dfs.datanode.balance.bandwidthPerSec), the datanode daemon
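The runtime-change capability this issue describes (see the HDFS-2202 reference above) surfaced as a dfsadmin subcommand; a hedged sketch of the two ways to set the balancer bandwidth:

```shell
# Runtime change, no datanode restart; value is bytes/sec (10 MB/s here):
hadoop dfsadmin -setBalancerBandwidth 10485760
# Static alternative in hdfs-site.xml (requires a datanode restart):
#   <property>
#     <name>dfs.datanode.balance.bandwidthPerSec</name>
#     <value>10485760</value>
#   </property>
```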
Issue Type: Bug
Components: name-node
Affects Versions: 0.23.0
Reporter: Eric Payne
I am measuring the performance of the namenode by running the
org.apache.hadoop.fs.loadGenerator.LoadGenerator application. This application
shows there is a very large slowdown
:54 AM, Eric Payne er...@yahoo-inc.com wrote:
Thanks Todd.
Yes, the stress test is NN-only. The simulated datanodes (using
MiniDFSCluster) don't read or write actual data, only log the metadata.
So, it sounds like the slowdown on the NN is to be expected, correct?
The
race condition
: Wednesday, July 06, 2011 11:12 AM
To: hdfs-dev@hadoop.apache.org
Subject: Re: HDFS on trunk is now quite slow
On Wed, Jul 6, 2011 at 9:00 AM, Eric Payne er...@yahoo-inc.com wrote:
I will attempt to recreate the tests on 20.203.
Currently, I'm comparing trunk against branches/MR-279
Hi all,
I ran some stress tests on the latest HDFS trunk yesterday, and the performance
is a lot slower (sometimes 10 times slower) when compared with the HDFS in
MR-279. The HDFS in MR-279 is slightly behind trunk. The stability of the
namenode in trunk seems to be better than in MR-279
: Hadoop HDFS
Issue Type: Improvement
Components: test
Affects Versions: 0.22.0
Reporter: Eric Payne
Fix For: 0.23.0
In Jira HDFS-1875, Tanping Wang added the following comment. In order to keep
the scope of HDFS-1875 small, I have created
Components: test
Affects Versions: 0.22.0
Reporter: Eric Payne
Assignee: Eric Payne
Fix For: 0.23.0
When creating RPC addresses that represent the communication sockets for each
simulated DataNode, the MiniDFSCluster class hard-codes the address