Re: Exception in SASL negotiation

2019-09-18 Thread Ruslan Dautkhanov
This might be a race condition similar to, or the same as,
https://issues.apache.org/jira/browse/HIVE-19785
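
For context, the check that throws this error can be sketched as follows. This is a simplified toy model, not Livy's actual implementation: the server registers an expected client ID (a UUID secret handed to the launched driver) and rejects any handshake whose ID is no longer registered. A race between registration cleanup and a slow client connection would produce exactly this symptom:

```python
# Toy model (not Livy's actual code) of the client-ID check behind
# "Unexpected client ID ... in SASL handshake".
import uuid

class RpcServerSketch:
    def __init__(self):
        self.expected_clients = {}   # client_id -> secret

    def register_client(self, client_id, secret):
        self.expected_clients[client_id] = secret

    def unregister_client(self, client_id):
        # If a timeout/cleanup path races with a slow client connection,
        # the entry disappears before the handshake arrives ...
        self.expected_clients.pop(client_id, None)

    def handle_handshake(self, client_id):
        if client_id not in self.expected_clients:
            # ... and the server rejects the now-unknown ID.
            raise ValueError(
                f"Unexpected client ID '{client_id}' in SASL handshake.")
        return self.expected_clients[client_id]

server = RpcServerSketch()
cid = str(uuid.uuid4())
server.register_client(cid, "secret")
server.unregister_client(cid)          # cleanup fired first (the race)
try:
    server.handle_handshake(cid)
except ValueError as e:
    print(e)
```

Under this model, anything that delays the client's connection past the server's cleanup window (WSL's networking stack is a plausible suspect) triggers the rejection.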


-- 
Ruslan Dautkhanov


On Tue, Sep 17, 2019 at 10:14 AM César Tenganán  wrote:

> Hi,
>
> We have been working to configure Apache livy-0.5.0-incubating with Spark
> on Windows, in this case using WSL (Windows Subsystem for Linux) with the
> Ubuntu-18.04 distribution.
> Both Livy and Spark have been configured on the Linux subsystem, but Livy
> is throwing an error when creating the Spark session that says:
>
> 19/09/17 09:51:41 INFO RpcServer$SaslServerHandler: Exception in SASL
> negotiation.
> java.lang.IllegalArgumentException: Unexpected client ID
> 'fca3ee25-ae81-42a7-b07d-9613238bd820' in SASL handshake.
> at org.apache.livy.rsc.Utils.checkArgument(Utils.java:40)
> at
> org.apache.livy.rsc.rpc.RpcServer$SaslServerHandler.update(RpcServer.java:288)
> at org.apache.livy.rsc.rpc.SaslHandler.channelRead0(SaslHandler.java:61)
> at org.apache.livy.rsc.rpc.SaslHandler.channelRead0(SaslHandler.java:36)
> at
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
> at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
> at
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
> at
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
> at
> io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
> at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321)
> at
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1280)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
> at
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:890)
> at
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:564)
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:505)
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:419)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:391)
> at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
> at java.lang.Thread.run(Thread.java:748)
> 19/09/17 09:51:41 WARN DefaultChannelPipeline: An exceptionCaught() event
> was fired, and it reached at the tail of the pipeline. It usually means the
> last handler in the pipeline did not handle the exception.
> java.lang.IllegalArgumentException: Unexpected client ID
> 'fca3ee25-ae81-42a7-b07d-9613238bd820' in SASL handshake.
> at org.apache.livy.rsc.Utils.checkArgument(Utils.java:40)
> at
> org.apache.livy.rsc.rpc.RpcServer$SaslServerHandler.update(RpcServer.java:288)
> at org.apache.livy.rsc.rpc.SaslHandler.channelRead0(SaslHandler.java:61)
> at org.apache.livy.rsc.rpc.SaslHandler.channelRead0(SaslHandler.java:36)
> at
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>
> We have tested the same configuration directly on Ubuntu, CentOS, and macOS
> machines and it worked correctly; the error only appears when we run on the
> Windows Subsystem for Linux / Ubuntu-18.04 distro.
>
> Could you please help us understand what this error means, or how we can
> validate that the system meets the requirements to perform the SASL
> negotiation correctly?
>
> I have attached a file with more details from the trace log.
>
> Thanks for your help!
>
> --
> Julio César Tenganán Daza
> Software Engineer
>


Re: [ANNOUNCE] Apache Livy 0.6.0-incubating released

2019-04-02 Thread Ruslan Dautkhanov
Thanks a lot Marcelo !

Ruslan



On Tue, Apr 2, 2019 at 12:24 PM Marcelo Vanzin  wrote:

> The Apache Livy team is proud to announce the release of Apache Livy
> 0.6.0-incubating.
>
> Livy is a web service that exposes a REST interface for managing
> long-running Apache Spark contexts in your cluster. Livy enables
> programmatic, fault-tolerant, multi-tenant submission of Spark jobs
> from web/mobile apps (no Spark client needed). So, multiple users can
> interact with your Spark cluster concurrently and reliably.
>
> Download Apache Livy 0.6.0-incubating:
> http://livy.incubator.apache.org/download/
>
> Release Notes:
> http://livy.incubator.apache.org/history/
>
> For more about Livy check our website:
> http://livy.incubator.apache.org/
>
> We would like to thank the contributors that made the release possible!
>
>
> --
> Marcelo
>
-- 
Ruslan Dautkhanov
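
For readers new to Livy, the REST workflow described in the announcement can be exercised with the standard library alone. This is a minimal sketch assuming a Livy server on the default port 8998; endpoint paths follow the Livy REST API docs, and nothing here is taken from the announcement itself:

```python
# Sketch of Livy's REST interface: create a session, then submit a statement.
import json
from urllib import request

LIVY = "http://localhost:8998"   # assumed default server address

def create_session_request(kind="pyspark"):
    """Build the POST /sessions request that starts a new Spark context."""
    body = json.dumps({"kind": kind}).encode("utf-8")
    return request.Request(LIVY + "/sessions", data=body,
                           headers={"Content-Type": "application/json"})

def run_statement_request(session_id, code):
    """Build the POST /sessions/{id}/statements request that submits code."""
    body = json.dumps({"code": code}).encode("utf-8")
    return request.Request(f"{LIVY}/sessions/{session_id}/statements",
                           data=body,
                           headers={"Content-Type": "application/json"})

# Actually sending these requires a running Livy server, e.g.:
#   resp = request.urlopen(create_session_request())
req = run_statement_request(0, "1 + 1")
print(req.full_url)
```

The first request starts a Spark context; a submitted statement returns an ID that can then be polled via GET /sessions/{id}/statements/{statementId} until the result is available.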


Re: Livy-0.6 release?

2019-03-14 Thread Ruslan Dautkhanov
Thank you guys!


-- 
Ruslan Dautkhanov


On Tue, Mar 12, 2019 at 9:55 AM Marcelo Vanzin  wrote:

> I think we can build the thrift module. It's disabled by default
> anyway, and if anyone wants to play with it, it will be there.
>
> On Tue, Mar 12, 2019 at 5:42 AM Saisai Shao 
> wrote:
> >
> > I can also help to release a new version. My only concern is how
> mature the thrift module is; shall we enable it by default or leave it
> disabled?
> >
> > Thanks
> > Saisai
> >
> > Jeff Zhang wrote on Tue, Mar 12, 2019 at 10:54 AM:
> >>
> >> Thanks Marcelo, I can help to test it on the Zeppelin side, which uses
> Livy as one of its interpreters.
> >>
> >> Marcelo Vanzin wrote on Tue, Mar 12, 2019 at 7:25 AM:
> >>>
> >>> Since there isn't much activity going on from the project committers,
> >>> I guess I could spend some time to create a release.
> >>>
> >>> The main problem from my side is that I haven't actually used Livy in
> >>> a long time. So personally I have no idea of how stable the current
> >>> master is, and the most I can do is just run the built-in integration
> >>> tests. So there would be a release (assuming other PPMC members are
> >>> still around), but I wouldn't really be able to attest to its
> >>> stability. If people are ok with that...
> >>>
> >>> On Sat, Mar 2, 2019 at 6:04 AM kant kodali  wrote:
> >>> >
> >>> > Any rough timeline on 0.6? If Livy doesn't allow choosing a higher
> Spark version, I guess that will be a blocker for a lot of people who want to
> leverage new features from Spark. Any good solution to fix this?
> >>> >
> >>> > On Mon, Feb 11, 2019 at 3:46 PM Ruslan Dautkhanov <
> dautkha...@gmail.com> wrote:
> >>> >>
> >>> >> Got it. Thanks Marcelo.
> >>> >>
> >>> >> I see LIVY-551 is now part of master. Hoping to see Livy 0.6
> released soon.
> >>> >>
> >>> >>
> >>> >> Thank you!
> >>> >> Ruslan Dautkhanov
> >>> >>
> >>> >>
> >>> >> On Tue, Feb 5, 2019 at 12:38 PM Marcelo Vanzin 
> wrote:
> >>> >>>
> >>> >>> I think LIVY-551 is the current blocker. Unfortunately I don't
> think
> >>> >>> we're really tracking things in jira that well, as far as releases
> go.
> >>> >>> At least I'm not.
> >>> >>>
> >>> >>> On Mon, Feb 4, 2019 at 6:32 PM Ruslan Dautkhanov <
> dautkha...@gmail.com> wrote:
> >>> >>> >
> >>> >>> > +1 for 0.6 release so folks can upgrade to Spark 2.4.
> >>> >>> >
> >>> >>> > Marcelo, what particular patches are blocking Livy 0.6 release?
> >>> >>> >
> >>> >>> > I see 3 JIRAs with 0.6 as Fix Version - not sure if that's the
> correct way to find blockers.
> >>> >>> > https://goo.gl/9axfsw
> >>> >>> >
> >>> >>> >
> >>> >>> > Thank you!
> >>> >>> > Ruslan Dautkhanov
> >>> >>> >
> >>> >>> >
> >>> >>> > On Mon, Jan 28, 2019 at 2:24 PM Marcelo Vanzin <
> van...@cloudera.com> wrote:
> >>> >>> >>
> >>> >>> >> There are a couple of patches under review that are currently
> blocking
> >>> >>> >> the release.
> >>> >>> >>
> >>> >>> >> Once those are done, we can work on releasing 0.6.
> >>> >>> >>
> >>> >>> >> On Mon, Jan 28, 2019 at 11:18 AM Roger Liu <
> liu.ro...@microsoft.com> wrote:
> >>> >>> >> >
> >>> >>> >> > Hey there,
> >>> >>> >> >
> >>> >>> >> >
> >>> >>> >> >
> >>> >>> >> > I’m wondering if we have a timeline for releasing Livy-0.6?
> It's been a year since the last release and there are features like
> Spark-2.4 support that are not incorporated in the livy-0.5 package.
> >>> >>> >> >
> >>> >>> >> >
> >>> >>> >> >
> >>> >>> >> > Thanks,
> >>> >>> >> >
> >>> >>> >> > Roger Liu
> >>> >>> >>
> >>> >>> >>
> >>> >>> >>
> >>> >>> >> --
> >>> >>> >> Marcelo
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>> --
> >>> >>> Marcelo
> >>>
> >>>
> >>>
> >>> --
> >>> Marcelo
> >>
> >>
> >>
> >> --
> >> Best Regards
> >>
> >> Jeff Zhang
>
>
>
> --
> Marcelo
>


Re: Livy-0.6 release?

2019-02-11 Thread Ruslan Dautkhanov
Got it. Thanks Marcelo.

I see LIVY-551 is now part of master. Hoping to see Livy 0.6 released soon.


Thank you!
Ruslan Dautkhanov


On Tue, Feb 5, 2019 at 12:38 PM Marcelo Vanzin  wrote:

> I think LIVY-551 is the current blocker. Unfortunately I don't think
> we're really tracking things in jira that well, as far as releases go.
> At least I'm not.
>
> On Mon, Feb 4, 2019 at 6:32 PM Ruslan Dautkhanov 
> wrote:
> >
> > +1 for 0.6 release so folks can upgrade to Spark 2.4.
> >
> > Marcelo, what particular patches are blocking Livy 0.6 release?
> >
> > I see 3 JIRAs with 0.6 as Fix Version - not sure if that's the correct
> way to find blockers.
> > https://goo.gl/9axfsw
> >
> >
> > Thank you!
> > Ruslan Dautkhanov
> >
> >
> > On Mon, Jan 28, 2019 at 2:24 PM Marcelo Vanzin 
> wrote:
> >>
> >> There are a couple of patches under review that are currently blocking
> >> the release.
> >>
> >> Once those are done, we can work on releasing 0.6.
> >>
> >> On Mon, Jan 28, 2019 at 11:18 AM Roger Liu 
> wrote:
> >> >
> >> > Hey there,
> >> >
> >> >
> >> >
> >> > I’m wondering if we have a timeline for releasing Livy-0.6? It's been
> a year since the last release and there are features like Spark-2.4 support
> that are not incorporated in the livy-0.5 package.
> >> >
> >> >
> >> >
> >> > Thanks,
> >> >
> >> > Roger Liu
> >>
> >>
> >>
> >> --
> >> Marcelo
>
>
>
> --
> Marcelo
>


Re: Livy-0.6 release?

2019-02-04 Thread Ruslan Dautkhanov
+1 for 0.6 release so folks can upgrade to Spark 2.4.

Marcelo, what particular patches are blocking Livy 0.6 release?

I see 3 JIRAs with 0.6 as Fix Version - not sure if that's the correct way
to find blockers.
https://goo.gl/9axfsw


Thank you!
Ruslan Dautkhanov


On Mon, Jan 28, 2019 at 2:24 PM Marcelo Vanzin  wrote:

> There are a couple of patches under review that are currently blocking
> the release.
>
> Once those are done, we can work on releasing 0.6.
>
> On Mon, Jan 28, 2019 at 11:18 AM Roger Liu 
> wrote:
> >
> > Hey there,
> >
> >
> >
> > I’m wondering if we have a timeline for releasing Livy-0.6? It's been a
> year since the last release and there are features like Spark-2.4 support
> that are not incorporated in the livy-0.5 package.
> >
> >
> >
> > Thanks,
> >
> > Roger Liu
>
>
>
> --
> Marcelo
>


Re: appending @realm to usernames

2019-01-17 Thread Ruslan Dautkhanov
Hi Kevin,

The Hortonworks link you posted doesn't say the realm is optional.

Have you tried auth_to_local for usernames coming from Livy over to Hadoop?
If the username doesn't have a realm, did auth_to_local map it to a short
name?

Actually, the Hadoop code says the opposite: there is an explicit check, and
if the realm is empty, auth_to_local rules are not applied:

https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java#L376

Rule application starts further down, on line 383, so the code never reaches
the rule-transformation loop if the realm is empty.

One could argue that this might be a Hadoop bug, as the Kerberos C library
states that an empty realm is possible:

https://github.com/krb5/krb5/blob/krb5-1.17-final/src/lib/krb5/os/localauth_rule.c#L38

Although in the same place it says this can be dangerous:

which can be *dangerous in multi-realm environments*, but is our historical
> behavior


So we can now say that this "bug" is actually a security feature, and Hadoop's
auth_to_local implementation left this "historical behavior" out for a good
reason.

I think the only way to enable auth_to_local for proxy authentication, as in
Livy's case, is to have a config setting in Livy to append a realm, as
explained in
https://issues.apache.org/jira/browse/LIVY-548
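
To make the behavior concrete, here is a toy model of the discussion above. It is not Hadoop's actual code: `DEFAULT_REALM` and both function names are illustrative, but the short-circuit mirrors what KerberosName.java does when a principal carries no realm.

```python
# Toy model of auth_to_local: realm-less names bypass the rules entirely.
import re

DEFAULT_REALM = "EXAMPLE.COM"   # placeholder realm for illustration

def to_short_name(principal):
    # Match primary[/instance]@realm, like a Kerberos principal.
    m = re.match(r"^([^/@]+)(/[^@]+)?@(.+)$", principal)
    if m is None:
        # No '@realm' part: like Hadoop, skip the rules loop entirely,
        # so the name passes through unmapped.
        return principal
    return m.group(1)            # crude stand-in for a RULE like [1:$1]

def append_realm(username, realm=DEFAULT_REALM):
    # What LIVY-548 proposes: append a realm before handing off to Hadoop.
    return username if "@" in username else f"{username}@{realm}"

print(to_short_name("alice"))                  # rules skipped (no realm)
print(to_short_name(append_realm("alice")))    # rules applied -> short name
print(to_short_name("bob/host@EXAMPLE.COM"))   # instance stripped too
```

With the realm appended first, the same input finally flows through the mapping path instead of short-circuiting.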


Thank you,
Ruslan Dautkhanov


On Thu, Jan 17, 2019 at 9:51 AM Kevin Risden  wrote:

> I don't think I follow your statement that @realm is mandatory. Auth
> to local is basically just a regex.
>
>
> https://community.hortonworks.com/articles/14463/auth-to-local-rules-syntax.html
>
> I don't know why you want to append the realm back, since
> usually the username is what you are after.
>
> Kevin Risden
>
> On Tue, Jan 15, 2019 at 12:36 PM Ruslan Dautkhanov 
> wrote:
> >
> > We'd like Hadoop to map user names to short names.
> >
> > For auth_to_local to work, the @realm part is mandatory.
> >
> > For example, if Apache Knox authenticates users using LDAP
> > and then sends requests over to Livy, it doesn't append a realm.
> > Obviously LDAP, PAM, etc. authentication doesn't involve Kerberos
> > realms.
> >
> > Is there a way to append the realm in Livy before it sends
> > those requests over to Spark / Hadoop?
> >
> > It seems we could duplicate rules from Hadoop's auth_to_local
> > using `livy.server.auth.kerberos.name_rules` but it doesn't work
> > for the same reason (Kerberos rules require the realm to be present).
> >
> > Also created https://issues.apache.org/jira/browse/LIVY-548
> >
> > Thank you for any ideas.
> >
> > --
> > Ruslan Dautkhanov
>


appending @realm to usernames

2019-01-15 Thread Ruslan Dautkhanov
We'd like Hadoop to map user names to short names.

For auth_to_local to work, the @realm part is mandatory.

For example, if Apache Knox authenticates users using LDAP
and then sends requests over to Livy, it doesn't append a realm.
Obviously LDAP, PAM, etc. authentication doesn't involve Kerberos
realms.

Is there a way to append the realm in Livy before it sends
those requests over to Spark / Hadoop?

It seems we could duplicate rules from Hadoop's auth_to_local
using `livy.server.auth.kerberos.name_rules`, but it doesn't work
for the same reason (Kerberos rules require the realm to be present).
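
For reference, a typical auth_to_local configuration looks like the fragment below (syntax per Hadoop's core-site.xml docs; EXAMPLE.COM is a placeholder realm). The RULE patterns only match principals that carry a realm, which is exactly why realm-less names fall through unmapped:

```xml
<!-- core-site.xml: names without '@REALM' never match a RULE -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    RULE:[2:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    DEFAULT
  </value>
</property>
```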

Also created https://issues.apache.org/jira/browse/LIVY-548

Thank you for any ideas.

-- 
Ruslan Dautkhanov