+1
On 10/19/09 2:34 PM, Tsz Wo (Nicholas), Sze s29752-hadoop...@yahoo.com
wrote:
DFSClient has a retry mechanism for block acquisition on read. If the number
of retries reaches a certain limit (defined by
dfs.client.max.block.acquire.failures), DFSClient will throw a
BlockMissingException
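The retry-bounded behavior described above can be sketched in plain Java; acquireBlock and the local BlockMissingException class here are illustrative stand-ins, not the actual DFSClient internals:

```java
import java.util.concurrent.Callable;

// Sketch only: bounded retries on block acquisition, giving up with a
// BlockMissingException once the configured limit is reached.
public class RetrySketch {
    static class BlockMissingException extends RuntimeException {
        BlockMissingException(String msg) { super(msg); }
    }

    // "maxBlockAcquireFailures" stands in for the
    // dfs.client.max.block.acquire.failures setting.
    static <T> T acquireBlock(Callable<T> attempt, int maxBlockAcquireFailures) {
        int failures = 0;
        while (true) {
            try {
                return attempt.call();
            } catch (Exception e) {
                failures++;
                if (failures >= maxBlockAcquireFailures) {
                    throw new BlockMissingException(
                        "Could not obtain block after " + failures + " failures");
                }
            }
        }
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        try {
            acquireBlock(() -> { calls[0]++; throw new RuntimeException("no replica"); }, 3);
        } catch (BlockMissingException e) {
            System.out.println("gave up after " + calls[0] + " attempts");
        }
    }
}
```

The real client also refreshes its list of replica locations between attempts; that bookkeeping is omitted here.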
On 2/24/10 12:25 AM, jian yi eyj...@gmail.com wrote:
Hi All,
Running hadoop fsck / will give you a summary of the current HDFS status,
including some useful information:
Minimally replicated blocks: 51224 (100.0 %)
// Number of blocks with number of replicas = minimum number of replicas
triplets[3*i] is used to store the reference of the datanode where the i-th
block replica is stored.
The datanode maintains a linked list of blocks. The next two object entries
are used for storing the next and previous block references of the linked
list for that datanode. This organization is done to reduce
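As a plain-Java illustration of the triplets layout described above (class and method names are stand-ins, not the real BlockInfo code): entry 3*i holds the datanode for replica i, and the two entries after it hold the previous/next links of that datanode's block list.

```java
// Illustrative sketch of the triplets layout; not the real
// org.apache.hadoop.hdfs BlockInfo class.
public class TripletsSketch {
    // For each replica i of this block:
    //   triplets[3*i]   = datanode holding replica i
    //   triplets[3*i+1] = previous block in that datanode's block list
    //   triplets[3*i+2] = next block in that datanode's block list
    final Object[] triplets;

    TripletsSketch(int replication) {
        triplets = new Object[3 * replication];
    }

    void setDatanode(int i, Object dn) { triplets[3 * i] = dn; }
    Object getDatanode(int i)          { return triplets[3 * i]; }
    void setPrevious(int i, Object b)  { triplets[3 * i + 1] = b; }
    void setNext(int i, Object b)      { triplets[3 * i + 2] = b; }
    Object getNext(int i)              { return triplets[3 * i + 2]; }
}
```

Packing the per-replica list pointers into one array on the block itself avoids a separate linked-list node object per (block, datanode) pair, which matters when the namenode keeps tens of millions of blocks in memory.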
I am planning to create a new branch from the trunk for the work related to
HDFS-1052 - HDFS scalability using multiple namenodes/namespaces. Please see
the jira for more details. Doing the development in a separate branch will
reduce the churn on the trunk. The development will be done
Sounds good. We will make sure jiras created have Federation: prefix.
On Fri, Feb 25, 2011 at 2:55 PM, Aaron T. Myers a...@cloudera.com wrote:
+1
Looks like some of the relevant tickets are getting created with a prefix,
but not all. It'd be great if this got standardized.
--
Aaron T.
Having the naming convention that we use helps manage emails better, since
subject lines include HDFS Federation. As far as tagging is concerned,
all federation jiras are subtasks of HDFS-1052. Hence some kind of grouping
can be done based on that.
On Fri, Feb 25, 2011 at 4:28 PM, Konstantin
We have started pushing changes for namenode federation in to the feature
branch HDFS-1052. The work items are created as subtask of the jira HDFS-1052
and are based on the design document published in the same jira. By the end of
this week, we will complete pushing the changes to HDFS-1052
Thanks for starting off the discussion.
This is a huge new feature with 86 jiras already filed, which
substantially increases the complexity of the code base.
These are 86 jiras filed in a feature branch. We decided to make these
changes in smaller increments, instead of as one jumbo patch. This was
That is a different motivation. The document talks about why you should use
federation. I am asking about motivation of supporting the code base while
not using it. At least this is how I understand Allen's question and some of
my colleagues'.
Namenode code is not changed at all. Datanode code
Namenode code is not changed at all.
Want to make sure I qualify this right. The change is not significant, other
than that the notion of a BPID that the NN uses is added.
Doug,
1. Can you please describe the significant advantages this approach has
over a symlink-based approach?
Federation is complementary to the symlink approach. You could choose to
provide an integrated namespace using symlinks. However, client side mount
tables seem a better approach for many
, Sanjay,
Thank you very much for addressing my questions.
Cheers,
Doug
On 04/26/2011 10:29 AM, suresh srinivas wrote:
Doug,
1. Can you please describe the significant advantages this approach
has
over a symlink-based approach?
Federation is complementary
Konstantin,
Could you provide me a link to how this was done for a big feature, like say
append, and how the benchmark info was captured? I am planning to run dfsio
tests, btw.
Regards,
Suresh
On Tue, Apr 26, 2011 at 11:34 PM, suresh srinivas srini30...@gmail.com wrote:
Konstantin,
On Tue, Apr 26
As Eli suggested, I have uploaded a new patch to the jira. Merging new trunk
changes and testing them took several hours! It passes all the tests except
two unit test failures. These failures do not happen on my machine - if these
are real failures we will address them after merging the patch to the
We have been testing federation regularly with MapReduce with yahoo-merge
branches. With trunk we missed the contrib (raid). The dependency with
project splits has been crazy. Not sure how large changes can keep on top of
all these things.
I am working on fixing the raid contrib.
On Mon, May 2,
https://builds.apache.org/job/PreCommit-HDFS-Build/989/console
Hudson did start the builds, but they failed due to test failures.
On Thu, Jul 21, 2011 at 4:31 PM, sravan korumilli
sravan.korumi...@huawei.com wrote:
Hi,
I have submitted the patch twice for the issue
HDFS-2025 but still it did
+1 Todd.
On 2011-10-06 23:37:24, Todd Lipcon wrote:
branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java,
line 28
https://reviews.apache.org/r/2150/diff/2/?file=48551#file48551line28
perhaps should be abstract since it won't ever be
Arun,
Following issues need to be ported from trunk to 0.23.
HADOOP-2298 TestDFSOverAvroRpc fails
Justification: This makes a protocol method name change. Required for
compatibility of 0.23 with future releases.
Risk: low
HDFS-2355 Enable using same configuration file in federation
Under hadoop-hdfs/src I run:
protoc -I=proto proto/file.proto --java_out=main/src
On Thursday, December 8, 2011, Alejandro Abdelnur t...@cloudera.com wrote:
I'm trying to change the build to compile the proto files (instead
checking
in the generated java files)
However, I'm not able to run
Abdelnur t...@cloudera.com wrote:
Suresh,
One proto file at a time works, but doing *.proto fails, complaining about
duplicate definitions.
Thxs.
Alejandro
On Thu, Dec 8, 2011 at 11:46 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Under hadoop-hdfs/src I run:
protoc -I=proto proto
I will fix this bug today.
On Thursday, December 15, 2011, Uma Maheswara Rao G mahesw...@huawei.com
wrote:
Yes Eli, I agree with you.
I think we must take a look at it immediately.
I just debugged the failure related to this class cast exception.
Following is the initial analysis:
Todd, can you please hold off on the merge till Tuesday, so that some of the
other folks working on HA can catch up with some of the recent changes.
On Fri, Feb 3, 2012 at 2:10 PM, Todd Lipcon t...@cloudera.com wrote:
I've got a merge pending of trunk into HDFS-1623 -- it was a bit
I looked at the merge. It looks good. +1.
On Wed, Feb 8, 2012 at 9:08 PM, Todd Lipcon t...@cloudera.com wrote:
The branch developed some new conflicts due to recent changes in trunk
affecting the RPC between the DN and the NN (the StorageReport
stuff). I've done a new merge to address these
I am not sure any of these issues are serious show stoppers for merging
into trunk.
Why not merge into trunk and fix some of these issues?
The reason is, merging is non-trivial with two branches changing
independently. Given that
Jitendra has posted a merge patch, why not do it earlier? Do we
I am +1 on merging this to trunk.
On Feb 29, 2012, at 4:03 PM, Todd Lipcon t...@cloudera.com wrote:
+1 as well.
My latest tests after applying the performance fixes indicate that
there is no statistically significant performance regression between
trunk and HA, even in tests designed to
As I indicated in the earlier thread, I am +1 on merging HDFS-1623 branch to
trunk.
On Mar 1, 2012, at 9:05 PM, Aaron T. Myers a...@cloudera.com wrote:
Hello HDFS devs,
As mentioned in a thread started last week, I'd like to merge the HA branch
to trunk. I had originally intended to do
. Thanks for volunteering!
You'll probably want to merge HDFS-1580, HDFS-1765, HDFS-2158,
HDFS-2188, HDFS-2334, HDFS-2476, HDFS-2477, and HDFS-2495 to branch-23
first as these conflict and the patch will contain a bunch of non-HA
stuff.
Thanks,
Eli
On Fri, Mar 2, 2012 at 6:15 PM, Suresh
I have merged the changes required for merging Namenode HA. I have also
attached a release 23 patch in the jira HDFS-1623. Please take a look at the
attached patch and let me know if it looks good.
Regards,
Suresh
I am sure Jitendra understands what Todd meant, given he was quite involved
in the work. As Jitendra said, I would like to keep the wire type separate
from the implementation type, even for internal protocols. Rolling upgrades
is important.
I understand where Todd is coming from. We did this work for 10
I need a week or so to go over the design and review the code changes. I
will post my comments to the jira directly. Meanwhile any updates made to
the design document would help.
Regards,
Suresh
On Wed, Sep 19, 2012 at 12:53 PM, Todd Lipcon t...@cloudera.com wrote:
Hi all,
Work has been
I am in favor of keeping QJM in HDFS.
QJM is very specific to HDFS and is tightly coupled with HDFS code,
essentially extending the current editlog functionality that writes to
local disk to writing to a separate set of daemons. Clearly there is a need
for this in HDFS. Konstantin, I see your
Todd,
As I indicated in my comments on the jira, I think some of the design
discussions and further simplification of design should happen before the
merge. See -
On Mon, Oct 8, 2012 at 6:20 PM, Todd Lipcon t...@cloudera.com wrote:
On Mon, Oct 8, 2012 at 6:01 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Todd,
As I indicated in my comments on the jira, I think some of the design
discussions and further simplification of design should happen
On Mon, Oct 8, 2012 at 8:03 PM, Andrew Purtell apurt...@apache.org wrote:
Our position on the QJM is we've already taken delivery from the feature
branch and will maintain a private HDFS fork of branch-2 if necessary, i.e.
we don't have a significant stake in this discussion except at a meta
On Mon, Feb 4, 2013 at 10:46 AM, Arun C Murthy a...@hortonworks.com wrote:
On Feb 1, 2013, at 2:34 AM, Tom White wrote:
Whereas Arun is proposing
2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.1.0-alpha, 2.2.0-beta, 2.3.0
and the casual observer might expect there to be a stable 2.0.1
On Mon, Feb 4, 2013 at 1:07 PM, Owen O'Malley omal...@apache.org wrote:
I think that using -(alpha,beta) tags on the release versions is a really
bad idea.
Why? Can you please share some reasons?
I actually think alpha and beta and stable/GA are much better way to set
the expectation
of the
The support for Hadoop on Windows was proposed in
HADOOP-8079https://issues.apache.org/jira/browse/HADOOP-8079 almost
a year ago. The goal was to make Hadoop natively integrated, full-featured,
and performance and scalability tuned on Windows Server or Windows Azure.
We are happy to announce that
Todd,
Some of us have been trying to help test and review the code. However you
might have missed the following, which has resulted in the review not
completing:
02/06/13 - After the intent to merge was sent, I posted a comment saying the
consolidated patch has extraneous changes. That was non-trivial
This was just an error with the consolidated merge patch. Like I said
in the previous email, these patches are just for Jenkins QA to run
on, and I assume that any HDFS committer is able to look at the branch
itself to understand the changes in it. It's easy to accidentally end
up with
On Wed, Feb 20, 2013 at 3:13 PM, Todd Lipcon t...@cloudera.com wrote:
On Wed, Feb 20, 2013 at 3:08 PM, Tsz Wo Sze szets...@yahoo.com wrote:
The reason to keep it around is that HDFS-347 only supports Unix but not
other OSes.
Given that this is an optimization, and we have a ton of
I have to disagree. Nowhere in the jira or the design is it explicitly
stated that the old short circuit functionality is being removed. My
assumption has been that it will not be removed.
I've tried this avenue in the past on other insecurities which were
fixed. Sorry if you were
The patches even going back as far as last September have all removed
the old code path. I sort of assumed that, if you are taking time to
review the patches, you would have noticed this... additionally,
Colin's comments on the JIRA said as much... eg:
Todd, we have different ways of
On Wed, Feb 20, 2013 at 3:40 PM, Todd Lipcon t...@cloudera.com wrote:
On Wed, Feb 20, 2013 at 3:31 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Given that this is an optimization, and we have a ton of optimizations
which don't yet run on Windows, I don't think that should
On Wed, Feb 20, 2013 at 5:12 PM, Aaron T. Myers a...@cloudera.com wrote:
On Wed, Feb 20, 2013 at 4:29 PM, Chris Douglas cdoug...@apache.org
wrote:
Given that HDFS-347 is a strictly better approach, once committed,
there will be ample motivation to add support for other OSes and
remove
ATM's suggestion of removing HDFS-2246 in trunk, but not branch-2, is
a rational compromise: it allows some period for others to adapt, but
not an indefinite one. It's not clear what you're proposing, if
anything.
I am not sure why a release that supports both these is such a bad idea.
As
There's no reason to maintain multiple implementations of the same
feature, that's why per the 2246 jira it was proposed as a good short
term solution till HDFS-347 is completed. Why is ATM's compromise
unacceptable?
We have already discussed this.
Here is the recap:
HDFS-347 does not
I assume you mean in trunk? Given that ATM's proposal is to only
remove HDFS-2246 from branch-2 once (a) we're confident in HDFS-347
and (b) adds Windows support, and we won't be releasing from trunk any
time soon - from a user perspective - HDFS-2246 will only be replaced
with HDFS-347
Suresh, if you're willing to support and maintain HDFS-2246, do you
have cycles to propose a patch to the HDFS-347 branch reintegrating
HDFS-2246 with the simplifications you outlined? In your review, did
you find anything else you'd like to address prior to the merge, or is
this the only
. This is not a blocker for me
because we often rely on individuals and groups to test Hadoop, but I do
think we need to have this discussion before we put it in.
--Bobby
On 2/26/13 4:55 PM, Suresh Srinivas sur...@hortonworks.com wrote:
I had posted a heads up about merging branch-trunk-win
I'm concerned about the above. Personally, I don't have access to any
Windows boxes with development tools, and I know nothing about developing
on Windows. The only Windows I run is an 8GB VM with 1 GB RAM allocated,
for powerpoint :)
If I submit a patch and it gets -1 tests failed on the
Thanks Colin. Will check it out as soon as I can.
On Tue, Mar 5, 2013 at 12:24 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
On Tue, Feb 26, 2013 at 5:09 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Suresh, if you're willing to support and maintain HDFS-2246, do you
have cycles
Thank you all for voting and participating in the discussions.
With 11 +1s from committers (more than the required 3 +1s from
active committers per the Hadoop bylaws), 1 +0, 8 +1s from other
contributors, and no -1s the merge vote passes.
I have committed the consolidated patch from
You are right, in the heartbeat response the namenode sends commands to the
datanode. Commands sent this way include deletion of blocks, replication,
block recovery, secret key updates, etc. Increasing the heartbeat interval
results in the namenode not being able to quickly act on the events in the
cluster and
Adding other mailing lists I missed earlier.
Cos,
There is progress being made on that ticket. Also it has nothing to do with
that.
Please follow the discussion here about why this happened due to an invalid
commit that was reverted -
This does seem related to the inode id change. I will follow up on HDFS-4654.
Sent from a mobile device
On Mar 31, 2013, at 10:12 PM, Harsh J ha...@cloudera.com wrote:
A JIRA was posted by Azuryy for this at
https://issues.apache.org/jira/browse/HDFS-4654.
On Mon, Apr 1, 2013 at 10:40 AM,
Colin,
For the record, the last email in the previous thread ended with the
following comment from Nicholas:
It is great to hear that you agree to keep HDFS-2246. Please as well
address my comments posted on HDFS-347 and let me know once you have posted
a new patch on HDFS-347.
I did not
...@alumni.cmu.edu wrote:
On Mon, Apr 1, 2013 at 6:58 PM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:
On Mon, Apr 1, 2013 at 5:04 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Colin,
For the record, the last email in the previous thread ended with the
following comment from Nicholas
We usually conclude the last VOTE before starting a new one. Otherwise,
people may be confused between the VOTEs. (In case you don't know our
convention, please check with someone before starting a VOTE. Thanks.)
-1
* The previous VOTE started by Colin has not been concluded.
Support for snapshots feature is being worked on in the jira
https://issues.apache.org/jira/browse/HDFS-2802. This is an important and
large feature in HDFS. Please see a brief presentation that describes the
feature at a high level, from the Snapshot discussion meetup we had a while
back -
I think we should take this up on the jira rather than the merge heads-up
thread. Nicholas, please suggest a jira where we can continue the
Some comments inline:
On Wed, Apr 24, 2013 at 1:25 PM, Todd Lipcon t...@cloudera.com wrote:
On Fri, Apr 19, 2013 at 3:36 AM, Aaron T. Myers a...@cloudera.com
Thanks for starting this discussion. I volunteer to do a final review of
protocol changes, so we can avoid incompatible changes to API and wire
protocol post 2.0.5 in Common and HDFS.
We have been working really hard on the following features. I would like to
get them into 2.x and see them reach HDFS
Eli, I will post a more detailed reply soon. But one small correction:
I'm also not sure there's currently consensus on what an incompatible
change is. For example, I think HADOOP-9151 is incompatible because it
broke client/server wire compatibility with previous releases and any
change that
This is a follow up to my earlier heads up about merging Snapshot feature
to trunk - http://markmail.org/message/ixkyku2cebkewnzy. I am happy to
announce that we have completed the development of the feature. It is ready
to be merged into trunk.
Development of snapshot feature is tracked in the
An additional reason: HDFS does not have a limit on the number of files in
a directory. Some clusters had millions of files in a single directory.
Listing such a directory resulted in very large responses, requiring large
contiguous memory allocation in the JVM (for the array) and unpredictable
GC failures.
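The remedy, iterative (paged) listing, can be sketched in plain Java. Note listBatch and the batch size here are hypothetical stand-ins; the real HDFS partial-listing RPC returns a batch plus a start-after cookie.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: page through a huge directory in fixed-size batches instead of
// returning one giant response.
public class PagedListingSketch {
    static final int BATCH = 1000; // cap on entries per response

    // Pretend server-side directory contents:
    static List<String> dir = new ArrayList<>();

    // Hypothetical server call: return at most BATCH entries from "start".
    static List<String> listBatch(int start) {
        int end = Math.min(start + BATCH, dir.size());
        return new ArrayList<>(dir.subList(start, end));
    }

    // Client drains the directory one bounded batch at a time, so no single
    // response needs a huge contiguous allocation.
    static int countAll() {
        int total = 0, start = 0;
        while (true) {
            List<String> batch = listBatch(start);
            if (batch.isEmpty()) break;
            total += batch.size();
            start += batch.size();
        }
        return total;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 2500; i++) dir.add("f" + i);
        System.out.println(countAll()); // prints 2500
    }
}
```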
Konstantin,
I am arguing against invasive and destructive features proposed for the
release.
Your choice of words is deplorable, to say the least.
Can you explain what you mean by *destructive*? Please substantiate your
claim on technical grounds.
So far you have been quiet while we have
. Mind doing that?
--
Aaron T. Myers
Software Engineer, Cloudera
On Wed, May 1, 2013 at 11:54 AM, Suresh Srinivas sur...@hortonworks.com
wrote:
This is a follow up to my earlier heads up about merging Snapshot feature
to trunk - http://markmail.org/message/ixkyku2cebkewnzy. I am happy
, Suresh Srinivas sur...@hortonworks.com wrote:
Aaron, I have created HDFS-4808 to discuss the API issues you brought up.
I agree that this needs to be resolved before merging the code to branch-2.
Thanks for voting.
On Tue, May 7, 2013 at 5:29 PM, Aaron T. Myers a...@cloudera.com wrote:
I'm +1
I still need to look at hadoop-env.sh.
I suspect this is due to not setting Xms (starting java heap size) to the
same value as Xmx (max java heap size). The recent inode id changes create
hash maps sized based on the max heap size, and I have seen this problem
without Xms.
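For illustration, here is roughly how a table sized from the configured max heap behaves (plain-Java sketch; the real sizing lives in the namenode's LightWeightGSet, and the 2% figure is an assumption here): capacity is computed from -Xmx and allocated up-front, even when -Xms leaves the initial heap much smaller.

```java
// Sketch of sizing an internal table from the configured max heap, similar
// in spirit to how the namenode sizes its block/inode maps. The percentage
// and 8-bytes-per-reference figures are simplifying assumptions.
public class HeapSizedTableSketch {
    // entries = (percent of max heap) / 8 bytes per reference,
    // rounded down to a power of two.
    static int computeCapacity(long maxMemoryBytes, double percent) {
        long bytes = (long) (maxMemoryBytes * percent / 100.0);
        long entries = bytes / 8;
        int cap = Integer.highestOneBit((int) Math.min(entries, 1 << 30));
        return Math.max(cap, 1);
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory(); // reflects -Xmx, not current heap
        int capacity = computeCapacity(maxHeap, 2.0);
        // The table is allocated at this capacity immediately, which is why a
        // small -Xms can force aggressive heap growth and GC churn at startup.
        System.out.println("capacity = " + capacity);
    }
}
```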
Sent from a mobile device
On
Can you please run the tests with -Xms2G added?
Sent from a mobile device
On Jun 20, 2013, at 6:02 PM, Roman Shaposhnik r...@apache.org wrote:
On Thu, Jun 20, 2013 at 5:59 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
I still need to look at hadoop-env.sh
Bigtop deployments don't
FYI HDFS committers and developers... This will be a participant-driven,
unconference-style session.
So no formal presentations are planned. Some of the things that we could
discuss are:
- New features recently added to HDFS such as HA, Snapshots, NFS etc.
- Other features that we as a community
available?
Dave
On Mon, Jul 1, 2013 at 3:16 PM, Suresh Srinivas
sur...@hortonworks.com wrote:
Yes this is a known issue.
The HDFS part of this was addressed in
https://issues.apache.org/jira/browse/HDFS-744 for 2.0.2-alpha and is
not
available in 1.x release. I think HBase does not use
On Wed, Jul 3, 2013 at 8:12 AM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
On Mon, Jul 1, 2013 at 8:48 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Dave,
Thanks for the detailed email. Sorry I did not read all the details you
had
sent earlier completely (on my phone). As you said
Vivek,
For information on how to contribute see -
http://wiki.apache.org/hadoop/HowToContribute
Please register an iCLA with ASF - see
http://www.apache.org/licenses/icla.txt
Once this is done, I will add you as a contributor and assign the jira to
you.
Regards,
Suresh
On Mon, Jul 8, 2013 at
registering the iCLA with ASF.
Thanks,
Niranjan Singh
On Mon, Jul 8, 2013 at 8:05 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Vivek,
For information on how to contribute see -
http://wiki.apache.org/hadoop/HowToContribute
Please register an iCLA with ASF - see
http
assign the JIRA to me?
Thank you.
Regards,
Vivek
On Mon, Jul 8, 2013 at 8:05 PM, Suresh Srinivas sur...@hortonworks.com
wrote:
Vivek,
For information on how to contribute see -
http://wiki.apache.org/hadoop/HowToContribute
Please register an iCLA with ASF - see
http
The jira already has these comments. I do not see the necessity of posting
them again to the hdfs-dev mailing list. Please continue the discussion in
the jira.
Sent from phone
On Jul 15, 2013, at 7:04 AM, Vivek Ganesan vi...@vivekganesan.com wrote:
Hi,
We have 3 options (actually viewpoints) laid out for
Please look at some of the work happening in HADOOP-9688, which is adding a
unique UUID (16 bytes) for each RPC request. This is common to all Hadoop
RPC and will be available in HDFS, YARN and MAPREDUCE. Please see the jira
for more details. Reach out to me if you have any questions.
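A plain-Java sketch of what a 16-byte per-client id looks like (ClientIdSketch and newClientId are illustrative names; see HADOOP-9688 for the real implementation):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Sketch: derive a 16-byte unique id from a random UUID, suitable for
// attaching to every RPC request from a client.
public class ClientIdSketch {
    static byte[] newClientId() {
        UUID uuid = UUID.randomUUID();
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] id = newClientId();
        System.out.println("client id length = " + id.length); // 16 bytes
    }
}
```

Combined with a per-call sequence number, such an id lets the server recognize retried requests, which is what an RPC retry cache relies on.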
On Wed, Jul
This is being targeted for release 2.3.
2.1.x release stream is for stabilizing. When it reaches stability, 2.2 GA
will be released. The current features in development will make it to 2.3,
including HDFS-2832.
On Thu, Aug 8, 2013 at 2:04 PM, Matevz Tadel mta...@ucsd.edu wrote:
Thanks Colin,
Vivek,
The current branch where features and bug fixes for release 1.x go is
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1
1.x releases are created from the above branch. A new branch is created and
the release is made from that branch. For example, you can see branch
I agree that this is an important change. However, 2.2.0 GA is getting
ready to roll out in weeks. I am concerned that these changes will not
only add incompatible changes late in the game, but also possibly
instability. Java API incompatibility is something we have avoided for the
most part and I
+1 (binding)
Verified the signatures and hashes for both src and binary tars. Built from
the source, the binary distribution and the documentation. Started a single
node cluster and tested the following:
# Started HDFS cluster, verified hdfs CLI commands such as ls, copying
data back and
(This time copying all the lists)
I am +1 for naming the new branch 2.2.0.
On Tue, Oct 1, 2013 at 4:15 PM, Arun C Murthy a...@hortonworks.com wrote:
Guys,
I took a look at the content in 2.1.2-beta so far, other than the
critical fixes such as HADOOP-9984 (symlinks) and few others in
Milind, please stop this. The topic here is not what Steve's employers
wants to sell or recommend. Please stick to the technical issue. This is
the second time this week where a thread unnecessarily goes beyond
technical issues. If you have an axe to grind, please keep it off this
forum. It is
I posted a comment in the other thread about feature branch merges.
My preference is to make sure the requirements we have for regular patches
be applied to feature branch patches as well (3 +1s is the only exception).
Also
adding details about what functionality is missing (I posted a comment on
No, this is not a bug. This is the new behavior. Please see for details -
https://issues.apache.org/jira/browse/HDFS-1547
On Tue, Oct 29, 2013 at 1:48 AM, lei liu liulei...@gmail.com wrote:
Should the datanode be shut down when it is Decommissioned?
I think if this is a bug, I can fix it.
have many
different clusters that talk to each other.
--Bobby
On 11/4/13 4:15 PM, lohit lohit.vijayar...@gmail.com wrote:
Thanks Suresh!
2013/11/4 Suresh Srinivas sur...@hortonworks.com
Lohit,
The option you have enumerated at the end is the current way to set up
multi cluster
Thanks Haohui for all your hard work in this area. I am +1 on this proposal.
On Tue, Nov 26, 2013 at 12:50 PM, Haohui Mai h...@hortonworks.com wrote:
Hi,
Recently I've been focusing on fixing hftp / hsftp / webhdfs / swebhdfs in
various set ups. Now we have reached the state that all the
bugs that may be
found on trunk as well as add further tests as outlined in the test plan.
The bulk of the design and implementation was done by Suresh Srinivas,
Sanjay Radia, Nicholas Sze, Junping Du and me. Also, thanks to Eric
Sirianni, Chris Nauroth, Steve Loughran, Bikas Saha, Andrew Wang
It is almost a year since a jira proposed deprecating the backup node -
https://issues.apache.org/jira/browse/HDFS-4114.
Maintaining it adds unnecessary work. As an example, when I added support
for the retry cache there were a bunch of code paths related to the backup
node that added unnecessary work. I do not
Konstantin,
On Sun, Dec 8, 2013 at 1:06 PM, Konstantin Shvachko shv.had...@gmail.com wrote:
I explained my reasoning in the jira
https://issues.apache.org/jira/browse/HDFS-4114?focusedCommentId=13841326&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13841326
Stack,
4. Bigtop/Oozie, other Apache projects altogether, have Cloudera references
in their UI (?) so the take away is HWX can do it too, or, because I raise
an issue here -- it is only legit if I do it too in all projects under the
Apache rainbow?
My point was different; these types of
this
feature.
Once the feature is merged into trunk, we will continue to test and
fix any bugs that may be found on trunk.
The bulk of the design and implementation was done by Jing Zhao,
Suresh Srinivas and me. Thanks Kihwal Lee, Todd Lipcon, Colin
Patrick McCabe, Chris Nauroth, and Daryn Sharp
It is handled using the svn rename command. There are times when committers
do not notice this and end up committing the patch as is. This ends up
deleting the contents of renamed files, leaving behind empty files. We had
to clean up such files some time ago.
On Tue, Feb 11, 2014 at 11:18 PM,
the test plan. The design document incorporates feedback
from many community members: Dilli Arumugam, Brandon Li, Haohui Mai, Kevin
Minder, Chris Nauroth, Sanjay Radia, Suresh Srinivas, Tsz Wo (Nicholas),
SZE and Jing Zhao. Code reviewers on individual patches include Arpit
Agarwal, Colin
Arun,
Some of the previously 2.4 targeted features were made available in 2.3:
- Heterogeneous storage support
- Datanode cache
The following are being targeted for 2.4:
- Use protobuf for fsimage (already in)
- ACLs (in trunk. In a week or so, this will be merged to branch-2.4)
- Rolling
.
The other remaining work items are:
- Revise the design doc
- Post a test plan (Haohui is working on it.)
- Execute the manual tests (Haohui and Fengdong will work on it.)
The work was a collective effort of Nathan Roberts, Sanjay Radia, Suresh
Srinivas, Kihwal Lee, Jing Zhao, Arpit Agarwal, Brandon Li
+1 (binding)
Verified the signatures and hashes for both src and binary tars. Built from
the source, the binary distribution and the documentation. Started a single
node cluster and tested the following:
# Started HDFS cluster, verified hdfs CLI commands such as ls, copying
data back and forth,
I have not looked at the development closely. With rolling upgrades feature
support in, are there any incompatible changes with this feature?
Sent from phone
On May 16, 2014, at 10:30 AM, Chris Nauroth cnaur...@hortonworks.com wrote:
+1 for the merge.
I've participated in ongoing design
the above means we should still be able to do a
rolling upgrade between 2.4 and whatever version eventually includes xattr
support.
RPC-wise, HDFS-2006 only adds new RPCs, so I don't think there are any
concerns there.
Thanks,
Andrew
On Fri, May 16, 2014 at 2:56 PM, Suresh Srinivas sur