[ 
https://issues.apache.org/jira/browse/ATLAS-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nixon Rodrigues updated ATLAS-3779:
-----------------------------------
    Description: 
Spark uses Kafka as source and sink in secure cluster. The test creates a JAAS 
file like this:
{code:java}
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  debug=true
  useKeyTab=true
  storeKey=true
  keyTab="/xxx/keytabs/systest.keytab"
  useTicketCache=false
  serviceName="kafka"
  principal="[email protected]";
};
{code}
As one can see, serviceName is set properly.

Then the test passes the JAAS file to both Spark's driver and executors:
{code:java}
"--conf 
\"spark.driver.extraJavaOptions=-Djava.security.auth.login.config=./kafka_source_jaas.conf..."
"--conf 
\"spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./kafka_source_jaas.conf..."
{code}
Later on, SAC (Spark Atlas Connector) and Atlas modify the JVM-wide JAAS 
configuration in the background. As a result, Spark is unable to create a 
consumer for processing data:
{code:java}
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either 
JAAS or Kafka config
{code}
When I turned off SAC, the problem went away.
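A possible client-side workaround (not part of the original report) is to pass the service name through Kafka's standard {{sasl.kerberos.service.name}} client property, which Kafka consults when the active JAAS entry carries no serviceName. The broker address below is a placeholder:
{code:java}
import java.util.Properties;

public class KafkaServiceNameWorkaround {
    // Builds consumer properties that name the Kerberos service explicitly.
    // "broker:9092" is a placeholder; security.protocol depends on the cluster.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        // Kafka falls back to this property when the JAAS entry defines no
        // serviceName, which sidesteps the IllegalArgumentException above.
        props.put("sasl.kerberos.service.name", "kafka");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("sasl.kerberos.service.name"));
    }
}
{code}
This only helps clients that can change their Kafka properties; it does not fix the underlying precedence problem described below.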

Atlas replaces the JVM's global JAAS configuration with InMemoryJAASConfiguration 
once the Atlas configuration is initialized. InMemoryJAASConfiguration keeps the 
old JAAS config as its "parent", but the Atlas config takes precedence, which is 
unexpected.
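The problematic precedence can be sketched roughly like this (a simplified sketch with made-up class and field names; the real InMemoryJAASConfiguration differs in detail). The in-memory entries answer first, and the "parent" configuration is only a fallback, so a JAAS name defined in both places resolves to the Atlas side:
{code:java}
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Hypothetical sketch of an Atlas-style in-memory JAAS configuration.
class InMemoryJaasSketch extends Configuration {
    private final Configuration parent;                      // the JVM config that was replaced
    private final Map<String, AppConfigurationEntry[]> own;  // Atlas-side entries

    InMemoryJaasSketch(Configuration parent, Map<String, AppConfigurationEntry[]> own) {
        this.parent = parent;
        this.own = own;
    }

    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        AppConfigurationEntry[] entries = own.get(name);
        if (entries != null) {
            return entries;  // Atlas entries win on a conflict ...
        }
        // ... and the parent is consulted only when Atlas has no entry.
        return parent != null ? parent.getAppConfigurationEntry(name) : null;
    }
}
{code}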

We never want Atlas to overwrite an existing JAAS configuration when there is a 
conflict. (I believe most endpoints that use the Atlas client as a library would 
agree with this.) This could be achieved by swapping the precedence of "parent" 
vs. "Atlas config" in InMemoryJAASConfiguration, but I don't know whether that 
change would be safe on the Atlas side. In any case, Atlas should at least 
provide a config option that lets the "parent" take precedence on conflict.
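The opt-in behaviour could look roughly like this (a sketch only; the class name and the flag are made up, e.g. a hypothetical "atlas.jaas.parent.takes.precedence" property, and the real InMemoryJAASConfiguration differs). When the flag is set, any name the pre-existing configuration already resolves is returned untouched, so a client-supplied JAAS file survives:
{code:java}
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Hypothetical sketch: optionally let the pre-existing ("parent") JAAS
// configuration win over the Atlas-provided entries.
class ParentFirstJaasSketch extends Configuration {
    private final Configuration parent;
    private final Map<String, AppConfigurationEntry[]> atlas;
    private final boolean parentTakesPrecedence;  // would be read from a config
                                                  // property such as the made-up
                                                  // "atlas.jaas.parent.takes.precedence"

    ParentFirstJaasSketch(Configuration parent,
                          Map<String, AppConfigurationEntry[]> atlas,
                          boolean parentTakesPrecedence) {
        this.parent = parent;
        this.atlas = atlas;
        this.parentTakesPrecedence = parentTakesPrecedence;
    }

    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        if (parentTakesPrecedence && parent != null) {
            AppConfigurationEntry[] fromParent = parent.getAppConfigurationEntry(name);
            if (fromParent != null) {
                return fromParent;  // the existing JAAS file wins on a conflict
            }
        }
        AppConfigurationEntry[] fromAtlas = atlas.get(name);
        if (fromAtlas != null) {
            return fromAtlas;
        }
        return parent != null ? parent.getAppConfigurationEntry(name) : null;
    }
}
{code}
With the flag off, this behaves like today's Atlas-first lookup, so it would be backward compatible.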

 

  was:
Spark uses Kafka as source and sink in secure cluster. The test creates a JAAS 
file like this:
{code:java}
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  debug=true
  useKeyTab=true
  storeKey=true
  keyTab="/cdep/keytabs/systest.keytab"
  useTicketCache=false
  serviceName="kafka"
  principal="[email protected]";
};
{code}
As one can see serviceName is set properly.

Then the test pass the JAAS file to Spark's driver + executor as well:
{code:java}
"--conf 
\"spark.driver.extraJavaOptions=-Djava.security.auth.login.config=./kafka_source_jaas.conf..."
"--conf 
\"spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./kafka_source_jaas.conf..."
{code}
Later on SAC + atlas makes some magic in the background with the Jvm JAAS 
configuration. As a result Spark is not able to create consumer for processing 
data:
{code:java}
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either 
JAAS or Kafka config
{code}
When I've turned off SAC then all the problem gone away.

Atlas replaces the JVM global JAAS configuration with InMemoryJAASConfiguration 
once Atlas configuration is initialized. InMemoryJAASConfiguration has an old 
JAAS config as "parent" but Atlas config takes precedence which is unexpected.

We never want to let Atlas to overwrite existing JAAS configuration if there's 
a conflict. (I believe most endpoints using Atlas client as a library would 
agree with this.) This may be achieved via swapping precedence for "parent" vs 
"Atlas config" in InMemoryJAASConfiguration, but I have no idea the change 
would be safe to Atlas side. In any way, Atlas should at least provide a config 
to let "parent" take precedence for the conflict.

 


> Inmemory JAASConfig issue in Atlas
> ----------------------------------
>
>                 Key: ATLAS-3779
>                 URL: https://issues.apache.org/jira/browse/ATLAS-3779
>             Project: Atlas
>          Issue Type: Bug
>            Reporter: Mayank Jain
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
