Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1145

2016-02-27 Thread Apache Jenkins Server
See 



RE: Introduce Apache Kerby to Hadoop

2016-02-27 Thread Zheng, Kai
Thanks Andrew for the update on HBase side!

>> Throughput drops 3-4x, or worse.
Hopefully we can avoid much of the encryption overhead; we're prototyping a 
solution to address it.
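The idea behind such a prototype can be sketched with plain JCE: if the per-message wrap uses an AES mode that HotSpot intrinsifies on AES-NI hardware (AES/GCM here; Chimera would reach OpenSSL outside the JDK), wrap/unwrap becomes a cheap cipher round trip. This is an illustrative sketch only, not code from HADOOP-12725; the class name and payload are invented.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

public class AesGcmWrapSketch {
    public static void main(String[] args) throws Exception {
        // Session key and per-message IV, standing in for the Kerberos session key.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        byte[] payload = "rpc request bytes".getBytes(StandardCharsets.UTF_8);

        // "wrap": encrypt + authenticate (16-byte GCM tag appended).
        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] wrapped = enc.doFinal(payload);

        // "unwrap": decrypt + verify on the receiving side.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] unwrapped = dec.doFinal(wrapped);

        System.out.println(Arrays.equals(payload, unwrapped)); // true
    }
}
```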

Regards,
Kai

-Original Message-
From: Andrew Purtell [mailto:andrew.purt...@gmail.com] 
Sent: Saturday, February 27, 2016 5:35 PM
To: common-dev@hadoop.apache.org
Subject: Re: Introduce Apache Kerby to Hadoop

I get excited thinking about the prospect of better performance with 
auth-conf QoP. HBase RPC is an increasingly distant fork but still close enough 
to Hadoop in that respect. Our bulk data transfer protocol isn't a separate 
thing as in HDFS, where it avoids a SASL-wrapped implementation, so we really 
suffer when auth-conf is negotiated. You'll see the same impact wherever there 
is a high frequency of NameNode RPC calls or similar. Throughput drops 3-4x, 
or worse. 

> On Feb 22, 2016, at 4:56 PM, Zheng, Kai  wrote:
> 
> Thanks for the confirmation and further input, Steve. 
> 
>>> the latter would dramatically reduce the cost of wire-encrypting IPC.
> Yes, optimizing Hadoop IPC/RPC encryption is another opportunity where Kerby 
> can help. It's possible because we can hook Chimera or AES-NI support into the 
> Kerberos layer by leveraging the Kerberos library. As may be noted, 
> HADOOP-12725 is ongoing for this aspect; there should be good results and 
> further updates on it soon.
> 
>>> For now, I'd like to see basic steps: upgrading MiniKDC to Kerby and seeing 
>>> how it works.
> Yes, starting with this initial step of upgrading MiniKDC to use Kerby is the 
> right thing to do. After some interaction with the Kerby project, we should 
> have more ideas about how to proceed with the follow-on work.
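In practice, much of that MiniKDC upgrade would be a dependency swap. A hedged sketch of the Maven change (the kerb-simplekdc coordinates and RC version are an assumption here; check the current Kerby release):

```xml
<!-- Replaces the org.apache.directory.server ApacheDS test dependencies -->
<dependency>
  <groupId>org.apache.kerby</groupId>
  <artifactId>kerb-simplekdc</artifactId>
  <version>1.0.0-RC2</version>
  <scope>test</scope>
</dependency>
```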
> 
>>> Long term, I'd like Hadoop 3 to be Kerby-ized
> This sounds great! With the necessary support from the community, like 
> feedback and patch reviews, we can speed up the related work.
> 
> Regards,
> Kai
> 
> -Original Message-
> From: Steve Loughran [mailto:ste...@hortonworks.com]
> Sent: Monday, February 22, 2016 6:51 PM
> To: common-dev@hadoop.apache.org
> Subject: Re: Introduce Apache Kerby to Hadoop
> 
> 
> 
> I've discussed this offline with Kai, as part of the "let's fix kerberos" 
> project. Not only is it a better Kerberos engine; we can also do more 
> diagnostics, get better algorithms, and ultimately get better APIs for doing 
> Kerberos and SASL. The latter would dramatically reduce the cost of 
> wire-encrypting IPC.
> 
> For now, I'd like to see basic steps: upgrading MiniKDC to Kerby and seeing 
> how it works.
> 
> Long term, I'd like Hadoop 3 to be Kerby-ized
> 
> 
>> On 22 Feb 2016, at 06:41, Zheng, Kai  wrote:
>> 
>> Hi folks,
>> 
>> I'd like to bring Apache Kerby [1], a sub-project of the Apache Directory 
>> project, to the community's attention and propose introducing it to Hadoop.
>> 
>> Apache Kerby is a Kerberos-centric project that aims to provide the first 
>> Java Kerberos library containing both client and server support. The 
>> relevant features include:
>> - full Kerberos encryption types, aligned with both MIT KDC and MS AD;
>> - client APIs that allow login via password, credential cache, keytab 
>> file, etc.;
>> - utilities to generate, operate on, and inspect keytab and credential 
>> cache files;
>> - a simple KDC server that borrows ideas from Hadoop-MiniKDC and can be 
>> used in tests with minimal external dependencies;
>> - a brand-new token mechanism (experimental) with which a JWT token can 
>> be exchanged for a TGT or service ticket;
>> - anonymous PKINIT support (experimental), the first in a Java library 
>> for this major Kerberos extension.
>> 
>> The project stands alone and is guaranteed to depend only on the JRE, for 
>> easier use. The first release (1.0.0-RC1) has been made, and the second 
>> release (RC2) is upcoming.
>> 
>> 
>> As an initial step, this proposal suggests using Apache Kerby to replace the 
>> existing ApacheDS-related code for Kerberos support. The advantages:
>> 
>> 1. The kerby-kerb library is all that is needed; it is pure Java, SLF4J is 
>> its only dependency, and the whole library is rather small;
>> 
>> 2. There is a SimpleKDC in the library for test usage, which borrows 
>> the MiniKDC idea and implements all the support existing in MiniKDC. 
>> We had a POC that rewrote MiniKDC using Kerby SimpleKDC, and it works 
>> fine;
>> 
>> 3. Full Kerberos encryption types (many of them unavailable in the 
>> JRE but supported by major Kerberos vendors) and more functionality, 
>> such as credential cache support;
>> 
>> 4. Perhaps most important: Hadoop MiniKDC and others depend on the old 
>> Kerberos implementation in the Directory Server project, but that 
>> implementation is no longer maintained. The Directory project plans to 
>> replace the implementation with Kerby, and MiniKDC can use Kerby 
>> directly to simplify the dependencies;
>> 
>> 5. 
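One way to see the JRE limitation mentioned in point 3 is a one-line JCE probe: on JREs without the unlimited-strength policy files, AES is capped at 128-bit keys, which rules out enctypes like aes256-cts-hmac-sha1-96. The class name below is invented for illustration.

```java
import javax.crypto.Cipher;

public class JceStrengthProbe {
    public static void main(String[] args) throws Exception {
        // Integer.MAX_VALUE means unrestricted; 128 means the legacy export cap.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println(max >= 256 ? "aes256 enctypes usable" : "capped at " + max);
    }
}
```

On JDK 8u161 and later the unrestricted policy is the default, so this should print "aes256 enctypes usable".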

RE: Introduce Apache Kerby to Hadoop

2016-02-27 Thread Zheng, Kai
Hi Haohui,

I'm glad to learn about GRPC; it sounds cool. Suggesting that Hadoop IPC/RPC 
upgrade to GRPC is a good proposal.

We haven't evaluated GRPC for the question of RPC encryption optimization 
because it's a separate story. It doesn't overlap with the optimization work: 
even if we used GRPC, the RPC protocol messages would still go through the 
SASL/GSSAPI/Kerberos stack. What's desired here is not to re-implement any RPC 
layer, or the stack, but to optimize the stack, possibly by implementing and 
plugging in a new SASL or GSSAPI mechanism. Hope this clarification helps. 
Thanks.
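To make that last point concrete, here is a minimal, hedged sketch of how a new SASL mechanism plugs into the JDK through the JCA provider framework. The X-NULL mechanism and all class names are invented for illustration; a real mechanism would implement Kerberos-derived wrap/unwrap rather than a no-op.

```java
import java.security.Provider;
import java.security.Security;
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.*;

public class PluggableSaslDemo {

    /** A do-nothing mechanism: completes immediately, no security layer. */
    public static class NullClient implements SaslClient {
        private boolean complete = false;
        public String getMechanismName() { return "X-NULL"; }
        public boolean hasInitialResponse() { return true; }
        public byte[] evaluateChallenge(byte[] challenge) {
            complete = true;
            return new byte[0];
        }
        public boolean isComplete() { return complete; }
        public byte[] wrap(byte[] out, int off, int len) { throw new IllegalStateException(); }
        public byte[] unwrap(byte[] in, int off, int len) { throw new IllegalStateException(); }
        public Object getNegotiatedProperty(String propName) { return null; }
        public void dispose() {}
    }

    public static class NullClientFactory implements SaslClientFactory {
        public SaslClient createSaslClient(String[] mechs, String authzid, String protocol,
                String serverName, Map<String, ?> props, CallbackHandler cbh) {
            for (String m : mechs) if (m.equals("X-NULL")) return new NullClient();
            return null;
        }
        public String[] getMechanismNames(Map<String, ?> props) {
            return new String[] { "X-NULL" };
        }
    }

    public static class DemoProvider extends Provider {
        DemoProvider() {
            super("DemoSasl", "1.0", "demo SASL mechanism provider");
            // Registering under this key makes Sasl.createSaslClient find the factory.
            put("SaslClientFactory.X-NULL", NullClientFactory.class.getName());
        }
    }

    public static void main(String[] args) throws Exception {
        Security.addProvider(new DemoProvider());
        SaslClient c = Sasl.createSaslClient(new String[] { "X-NULL" },
                null, "rpc", "localhost", null, null);
        System.out.println(c.getMechanismName()); // X-NULL
    }
}
```

Hadoop's RPC layer would then select the new mechanism at negotiation time, leaving the IPC framing untouched.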

Regards,
Kai

-Original Message-
From: Haohui Mai [mailto:ricet...@gmail.com] 
Sent: Sunday, February 28, 2016 3:02 AM
To: common-dev@hadoop.apache.org
Subject: Re: Introduce Apache Kerby to Hadoop

Have we evaluated GRPC? A robust RPC implementation requires significant 
effort; migrating to GRPC could save us a lot of headaches.

Haohui

Build failed in Jenkins: Hadoop-Common-trunk #2441

2016-02-27 Thread Apache Jenkins Server
See 

Changes:

[gtcarrera9] HADOOP-12831. LocalFS/FSOutputSummer NPEs in constructor if bytes 
per

--
[...truncated 5095 lines...]
All tests in the visible excerpt passed: the "Tests run" lines for test 
classes under org.apache.hadoop.metrics2, org.apache.hadoop.conf, 
org.apache.hadoop.test, org.apache.hadoop.metrics, and org.apache.hadoop.net 
all report Failures: 0, Errors: 0, Skipped: 0. The log cuts off while running 
org.apache.hadoop.net.TestStaticMapping.

Re: Introduce Apache Kerby to Hadoop

2016-02-27 Thread Andrew Purtell
I get excited thinking about the prospect of better performance with 
auth-conf QoP. HBase RPC is an increasingly distant fork but still close enough 
to Hadoop in that respect. Our bulk data transfer protocol isn't a separate 
thing as in HDFS, where it avoids a SASL-wrapped implementation, so we really 
suffer when auth-conf is negotiated. You'll see the same impact wherever there 
is a high frequency of NameNode RPC calls or similar. Throughput drops 3-4x, 
or worse. 
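The wrap/unwrap cost being described can be reproduced with nothing but the JDK. The sketch below negotiates qop=auth-conf over DIGEST-MD5 (chosen only because it runs self-contained, unlike GSSAPI, which needs a KDC) and shows every payload passing through SaslClient.wrap()/SaslServer.unwrap(). All names and credentials are invented for the demo.

```java
import javax.security.auth.callback.*;
import javax.security.sasl.*;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class AuthConfWrapDemo {
    public static void main(String[] args) throws Exception {
        Map<String, String> props = new HashMap<>();
        props.put(Sasl.QOP, "auth-conf");   // request confidentiality protection

        // Server-side handler: supplies the stored password and authorizes the user.
        CallbackHandler serverCb = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof PasswordCallback)
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                else if (cb instanceof RealmCallback)
                    ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
                else if (cb instanceof AuthorizeCallback)
                    ((AuthorizeCallback) cb).setAuthorized(true);
            }
        };
        // Client-side handler: supplies the user's name and password.
        CallbackHandler clientCb = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) ((NameCallback) cb).setName("alice");
                else if (cb instanceof PasswordCallback)
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                else if (cb instanceof RealmCallback)
                    ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
            }
        };

        SaslServer server = Sasl.createSaslServer("DIGEST-MD5", "rpc", "localhost",
                props, serverCb);
        SaslClient client = Sasl.createSaslClient(new String[] { "DIGEST-MD5" }, null,
                "rpc", "localhost", props, clientCb);

        // Drive the challenge/response handshake to completion.
        byte[] challenge = server.evaluateResponse(new byte[0]);
        byte[] response = client.evaluateChallenge(challenge);
        while (!client.isComplete() || !server.isComplete()) {
            challenge = server.evaluateResponse(response);
            response = (challenge != null && !client.isComplete())
                    ? client.evaluateChallenge(challenge) : new byte[0];
        }

        // With auth-conf, every payload is encrypted and integrity-protected here.
        byte[] payload = "namenode rpc payload".getBytes(StandardCharsets.UTF_8);
        byte[] onWire = client.wrap(payload, 0, payload.length);
        byte[] received = server.unwrap(onWire, 0, onWire.length);
        System.out.println("roundtrip ok = "
                + new String(received, StandardCharsets.UTF_8).equals("namenode rpc payload"));
    }
}
```

With qop=auth (no privacy), the wrap/unwrap step disappears, which is where the 3-4x throughput gap comes from.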

> On Feb 22, 2016, at 4:56 PM, Zheng, Kai  wrote:
>> 5. Extensively tested with all kinds of unit tests, and already in use 
>> for some time (e.g., at PSU), even in production environments;
>> 
>> 6. Actively developed, and can be fixed and released promptly when 
>> necessary, separately and independently from other components in the 
>> Apache Directory project. By actively developing Apache Kerby and now 
>> applying it to Hadoop, we wish to make the 

Re: Introduce Apache Kerby to Hadoop

2016-02-27 Thread Haohui Mai
Have we evaluated GRPC? A robust RPC implementation requires significant
effort; migrating to GRPC could save us a lot of headaches.

Haohui