Some methods have moved from classes in hadoop-hdfs to classes in
hadoop-hdfs-client.
The ClientProtocol.addBlock method gains an extra parameter.
DFSClient.Conf has been moved to a separate file and renamed DFSClientConf.
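To make the signature change concrete, here is a minimal, hypothetical sketch of how a client can resolve a changed method arity reflectively at startup. LegacyProto and NewProto are stand-ins for two Hadoop versions' ClientProtocol, not real Hadoop classes:

```java
import java.lang.reflect.Method;

public class AddBlockCompat {
    // Stand-in for an older-style interface (no extra parameter).
    static class LegacyProto {
        public String addBlock(String src, String clientName) {
            return "legacy:" + src;
        }
    }
    // Stand-in for a newer-style interface (one extra parameter).
    static class NewProto {
        public String addBlock(String src, String clientName, Object flags) {
            return "new:" + src;
        }
    }

    /** Find the addBlock overload on the given class, preferring the widest arity. */
    static Method findAddBlock(Class<?> clazz) {
        Method best = null;
        for (Method m : clazz.getMethods()) {
            if (m.getName().equals("addBlock")
                    && (best == null || m.getParameterCount() > best.getParameterCount())) {
                best = m;
            }
        }
        return best;
    }

    public static void main(String[] args) throws Exception {
        Method legacy = findAddBlock(LegacyProto.class);
        Method modern = findAddBlock(NewProto.class);
        Object[] base = { "/hbase/wal", "client-1" };
        // Pad the argument list with nulls up to the arity the runtime expects.
        Object[] padded = java.util.Arrays.copyOf(base, modern.getParameterCount());
        System.out.println(legacy.invoke(new LegacyProto(), base));
        System.out.println(modern.invoke(new NewProto(), padded));
    }
}
```

The lookup happens once, so the per-version branch is paid at startup rather than on every call.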

Not very hard. I promise I can provide a patch within 3 days of the
hadoop-2.8.0 release.

Thanks.

2016-05-10 16:23 GMT+08:00 张铎 <palomino...@gmail.com>:

> Yeah, the work to push this upstream has already started. See here:
>
> https://issues.apache.org/jira/browse/HADOOP-12910
>
> But it is much harder to push code into HDFS than into HBase. It is the core
> of all Hadoop systems, and I do not have many contacts in the HDFS community...
>
> And it is more convincing if we make it the default, as that means we will
> keep maintaining the code rather than let it go stale and unstable.
>
> As for compatibility across different Hadoop versions, I think I can deal
> with it. Let me check hadoop-2.8.0-SNAPSHOT first.
>
> Thanks.
>
> 2016-05-10 14:59 GMT+08:00 Gary Helmling <ghelml...@gmail.com>:
>
>> Thanks for adding the tests and fixing up AES support.
>>
>> My only real concern is the maintainability of this code as our own private
>> DFS client.  The SASL support, for example, is largely based on reflection
>> and reaches into private fields of @InterfaceAudience.Private Hadoop
>> classes.  This seems bound to break with a future Hadoop release.  I
>> appreciate the parameterized testing wrapped around this, because it doesn't
>> seem like we'll have much else in the way of safety checking.  This is not a
>> knock on the code -- it's a pretty clean reach into the HDFS guts, but a
>> reach it is.  For a component at the core of our data integrity, this seems
>> like a risk.
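For readers unfamiliar with the concern above, reaching into a private field looks roughly like the following hypothetical sketch. SaslState here stands in for an @InterfaceAudience.Private Hadoop class; nothing in it is real Hadoop API:

```java
import java.lang.reflect.Field;

public class PrivateFieldReach {
    // Stand-in for a private Hadoop class holding negotiated SASL state.
    static class SaslState {
        private byte[] negotiatedKey = { 1, 2, 3 };
    }

    static byte[] readNegotiatedKey(Object target) {
        try {
            // The field is looked up by name at runtime, so a rename in a
            // future Hadoop release only fails when this code first runs,
            // not at compile time -- that is the maintainability risk.
            Field f = target.getClass().getDeclaredField("negotiatedKey");
            f.setAccessible(true); // bypass the private modifier
            return (byte[]) f.get(target);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("internal field changed upstream", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readNegotiatedKey(new SaslState()).length);
    }
}
```

This is why parameterized tests against each supported Hadoop version are about the only safety net available.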
>>
>> To me, it seems much safer to actively try to push this upstream into HDFS
>> right now, while still pointing to its optional, non-default use in HBase as
>> a compelling story.  I don't understand why making it the default in 2.0 is
>> necessary for this.  Do you really think it will make that big a difference
>> for upstreaming?  Once it's actually in Hadoop and maintained, it seems like
>> a no-brainer to make it the default.
>>
>> On Mon, May 9, 2016 at 5:09 PM Stack <st...@duboce.net> wrote:
>>
>> > Any other suggestions/objections here? If not, I will make the cutover in
>> > the next day or so.
>> > Thanks,
>> > St.Ack
>> >
>> > On Thu, May 5, 2016 at 10:02 PM, Stack <st...@duboce.net> wrote:
>> >
>> > > On Thu, May 5, 2016 at 7:39 PM, Yu Li <car...@gmail.com> wrote:
>> > >
>> > >> Almost missed the party...
>> > >>
>> > >> bq. Do you think it worth to backport this feature to branch-1 and
>> > >> release it in the next 1.x release? This may introduce a compatibility
>> > >> issue as said in HBASE-14949 that we need HBASE-14949 to make sure that
>> > >> the rolling upgrade does not lose data...
>> > >> From the current perf data I think the effort is worthwhile; we have
>> > >> already started some work here and will run it in production after some
>> > >> careful testing (and of course, only if the perf numbers are confirmed,
>> > >> but I'm somewhat optimistic :-P). Regarding HBASE-14949, I guess a
>> > >> two-step rolling upgrade will make it work, right? (And I guess this
>> > >> will also be a question when we upgrade from 1.x to 2.0 later?)
>> > >>
>> > >>
>> > > Or a clean shutdown and restart? Or a fresh install? I'd think a
>> > > backport would be fine if you have to enable it explicitly, and it has
>> > > warnings and is clear on the circumstances under which there could be
>> > > data loss.
>> > >
>> > > St.Ack
>> > >
>> > >
>> > >
>> > >> btw, I'm +1 on making asyncfswal the default in 2.0 :-)
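For anyone wanting to try it before it becomes the default, the switch is, as far as I can tell, a one-line hbase-site.xml setting; the provider name below is assumed from the asyncfswal work and may differ in released versions:

```xml
<!-- hbase-site.xml: sketch; provider name assumed, verify against your build -->
<property>
  <name>hbase.wal.provider</name>
  <value>asyncfs</value>
</property>
```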
>> > >>
>> > >> Best Regards,
>> > >> Yu
>> > >>
>> > >> On 6 May 2016 at 09:49, Ted Yu <yuzhih...@gmail.com> wrote:
>> > >>
>> > >> > Thanks for your effort, Duo.
>> > >> >
>> > >> > I am in favor of making AsyncWAL the default in the master branch.
>> > >> >
>> > >> > Cheers
>> > >> >
>> > >> > On Thu, May 5, 2016 at 6:03 PM, 张铎 <palomino...@gmail.com> wrote:
>> > >> >
>> > >> > > Some progress.
>> > >> > >
>> > >> > > I have filed HBASE-15743 for the transparent encryption support,
>> > >> > > and HBASE-15754 for the AES encryption UT. Both of them are now
>> > >> resolved.
>> > >> > > Let's resume the discussion here.
>> > >> > >
>> > >> > > Thanks.
>> > >> > >
>> > >> > > 2016-05-03 10:09 GMT+08:00 张铎 <palomino...@gmail.com>:
>> > >> > >
>> > >> > > > Fine, will add the testcase.
>> > >> > > >
>> > >> > > > And for the RPC, we only implement a new client-side DTP here and
>> > >> > > > still use the original RPC.
>> > >> > > >
>> > >> > > > Thanks.
>> > >> > > >
>> > >> > > > 2016-05-03 3:20 GMT+08:00 Gary Helmling <ghelml...@gmail.com>:
>> > >> > > >
>> > >> > > >> On Fri, Apr 29, 2016 at 6:24 PM 张铎 <palomino...@gmail.com>
>> > wrote:
>> > >> > > >>
>> > >> > > >> > Yes, it does. There is a testcase that enumerates all the
>> > >> > > >> > possible protection levels (authentication, integrity, and
>> > >> > > >> > privacy) and encryption algorithms (none, 3des, rc4):
>> > >> > > >> >
>> > >> > > >> >
>> > >> > > >> >
>> > >> > > >>
>> > >> > >
>> > >> >
>> > >>
>> >
>> https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/io/asyncfs/TestSaslFanOutOneBlockAsyncDFSOutput.java
>> > >> > > >> >
>> > >> > > >> > I have also tested it in a secure cluster
>> > >> > > >> > (hbase-2.0.0-SNAPSHOT and hadoop-2.4.0).
>> > >> > > >> >
>> > >> > > >>
>> > >> > > >> Thanks.  Can you add in support for testing with AES
>> > >> > > >> (dfs.encrypt.data.transfer.cipher.suites=AES/CTR/NoPadding)?
>> > >> > > >> This is only available in Hadoop 2.6.0+, but I think it is far
>> > >> > > >> more likely to be used in production than 3des or rc4.
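For reference, enabling that cipher suite is, to my understanding, an hdfs-site.xml setting that must be present on both sides of the data transfer pipeline:

```xml
<!-- hdfs-site.xml: sketch, assuming Hadoop 2.6.0+ on both clients and DataNodes -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
<property>
  <name>dfs.encrypt.data.transfer.cipher.suites</name>
  <value>AES/CTR/NoPadding</value>
</property>
```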
>> > >> > > >
>> > >> > > >
>> > >> > > >> Also, have you been following HADOOP-10768?  That is changing
>> > >> > > >> Hadoop RPC encryption negotiation to support more performant AES
>> > >> > > >> wrapping, similar to what is now supported in the data transfer
>> > >> > > >> pipeline.
>> > >> > > >>
>> > >> > > >
>> > >> > > >
>> > >> > >
>> > >> >
>> > >>
>> > >
>> > >
>> >
>>
>
>
