[jira] [Updated] (ATLAS-2287) Include lucene libraries when building atlas distribution with Janus profile

2017-11-28 Thread Sarath Subramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarath Subramanian updated ATLAS-2287:
--
Description: 
When Atlas is built using the -Pdist profile, Lucene jars are excluded during 
packaging of the war file. Since we are not shading the graphdb module for the 
Janus profile, these jars are needed as runtime dependencies.

Titan's shaded jar includes the Lucene libraries, and hence they were excluded 
during packaging of the war to avoid duplicate dependencies.

  was:
When Atlas is built using the -Pdist profile, Lucene jars are excluded during 
packaging of the war file. Since we are not shading the graphdb module for the 
Janus profile, these jars need to be included as runtime dependencies.

Titan's shaded jar already included the Lucene libraries, and hence they were 
excluded during packaging of the war to avoid duplicate dependencies.


> Include lucene libraries when building atlas distribution with Janus profile
> 
>
> Key: ATLAS-2287
> URL: https://issues.apache.org/jira/browse/ATLAS-2287
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0
>Reporter: Sarath Subramanian
>Assignee: Sarath Subramanian
> Fix For: 1.0.0
>
> Attachments: ATLAS-2287.1.patch
>
>
> When Atlas is built using the -Pdist profile, Lucene jars are excluded during 
> packaging of the war file. Since we are not shading the graphdb module for the 
> Janus profile, these jars are needed as runtime dependencies.
> Titan's shaded jar includes the Lucene libraries, and hence they were excluded 
> during packaging of the war to avoid duplicate dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (ATLAS-2287) Include lucene libraries when building atlas distribution with Janus profile

2017-11-28 Thread Sarath Subramanian (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269972#comment-16269972
 ] 

Sarath Subramanian commented on ATLAS-2287:
---

Code Review: https://reviews.apache.org/r/64141/

> Include lucene libraries when building atlas distribution with Janus profile
> 
>
> Key: ATLAS-2287
> URL: https://issues.apache.org/jira/browse/ATLAS-2287
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0
>Reporter: Sarath Subramanian
>Assignee: Sarath Subramanian
> Fix For: 1.0.0
>
> Attachments: ATLAS-2287.1.patch
>
>
> When Atlas is built using the -Pdist profile, Lucene jars are excluded during 
> packaging of the war file. Since we are not shading the graphdb module for the 
> Janus profile, these jars are needed as runtime dependencies.
> Titan's shaded jar includes the Lucene libraries, and hence they were excluded 
> during packaging of the war to avoid duplicate dependencies.





Review Request 64141: [ATLAS-2287]: Include lucene libraries when building atlas distribution with Janus profile

2017-11-28 Thread Sarath Subramanian

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64141/
---

Review request for atlas, Apoorv Naik, Ashutosh Mestry, and Madhan Neethiraj.


Bugs: ATLAS-2287
https://issues.apache.org/jira/browse/ATLAS-2287


Repository: atlas


Description
---

When Atlas is built using the -Pdist profile, Lucene jars are excluded during 
packaging of the war file. Since we are not shading the graphdb module for the 
Janus profile, these jars are needed as runtime dependencies.
Titan's shaded jar includes the Lucene libraries, and hence they were excluded 
during packaging of the war to avoid duplicate dependencies.
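Because the Janus graphdb module is not shaded into a single jar, the Lucene libraries have to be pulled in as explicit runtime dependencies of the webapp. A minimal sketch of what such an entry might look like in webapp/pom.xml (the actual change is in the diff under review; the coordinates and property name below are assumptions, not taken from the patch):

```xml
<!-- Sketch only: groupId/artifactId/version-property are illustrative. -->
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>${lucene.version}</version>
    <!-- runtime scope: needed in WEB-INF/lib at run time, not for compilation -->
    <scope>runtime</scope>
</dependency>
```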


Diffs
-

  distro/pom.xml eea256d8 
  pom.xml 3720c1f5 
  webapp/pom.xml b4a96d36 


Diff: https://reviews.apache.org/r/64141/diff/1/


Testing
---

Validated building the Atlas distribution using both the janus and titan0 
profiles. Atlas starts fine and basic functionality works.

mvn clean install -DskipTests -Pdist,embedded-hbase-solr
mvn clean install -DskipTests -Pdist,embedded-hbase-solr -DGRAPH-PROVIDER=titan0
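To verify the fix, one can also list the packaged war's bundled jars and confirm the Lucene libraries are present. A small sketch of that check (the demo below builds a synthetic war; the real war path and the jar names/versions are assumptions, not taken from the build):

```python
import zipfile

def lucene_jars_in_war(war_path):
    """List the Lucene jars bundled under WEB-INF/lib of a war file."""
    with zipfile.ZipFile(war_path) as war:
        return [name for name in war.namelist()
                if name.startswith("WEB-INF/lib/") and "lucene" in name.lower()]

# Demo against a synthetic war; for a real build you would pass the packaged
# atlas-webapp war instead (its path depends on your build layout).
with zipfile.ZipFile("demo.war", "w") as war:
    war.writestr("WEB-INF/lib/lucene-core-5.5.0.jar", b"")  # illustrative version
    war.writestr("WEB-INF/lib/guava-18.0.jar", b"")

print(lucene_jars_in_war("demo.war"))  # → ['WEB-INF/lib/lucene-core-5.5.0.jar']
```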


Thanks,

Sarath Subramanian



[jira] [Created] (ATLAS-2287) Include lucene libraries when building atlas distribution with Janus profile

2017-11-28 Thread Sarath Subramanian (JIRA)
Sarath Subramanian created ATLAS-2287:
-

 Summary: Include lucene libraries when building atlas distribution 
with Janus profile
 Key: ATLAS-2287
 URL: https://issues.apache.org/jira/browse/ATLAS-2287
 Project: Atlas
  Issue Type: Bug
  Components:  atlas-core
Affects Versions: 1.0.0
Reporter: Sarath Subramanian
Assignee: Sarath Subramanian
 Fix For: 1.0.0


When Atlas is built using the -Pdist profile, Lucene jars are excluded during 
packaging of the war file. Since we are not shading the graphdb module for the 
Janus profile, these jars need to be included as runtime dependencies.

Titan's shaded jar already included the Lucene libraries, and hence they were 
excluded during packaging of the war to avoid duplicate dependencies.





[jira] [Updated] (ATLAS-2287) Include lucene libraries when building atlas distribution with Janus profile

2017-11-28 Thread Sarath Subramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarath Subramanian updated ATLAS-2287:
--
Attachment: ATLAS-2287.1.patch

> Include lucene libraries when building atlas distribution with Janus profile
> 
>
> Key: ATLAS-2287
> URL: https://issues.apache.org/jira/browse/ATLAS-2287
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0
>Reporter: Sarath Subramanian
>Assignee: Sarath Subramanian
> Fix For: 1.0.0
>
> Attachments: ATLAS-2287.1.patch
>
>
> When Atlas is built using the -Pdist profile, Lucene jars are excluded during 
> packaging of the war file. Since we are not shading the graphdb module for the 
> Janus profile, these jars need to be included as runtime dependencies.
> Titan's shaded jar already included the Lucene libraries, and hence they were 
> excluded during packaging of the war to avoid duplicate dependencies.





Jenkins build is back to normal : Atlas-0.8-UnitTests #139

2017-11-28 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Atlas-0.8-IntegrationTests #139

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2286: fix attribute kafka_topic.topic to set isUnique=false

--
[...truncated 529.61 KB...]
127.0.0.1 - - [29/Nov/2017:01:49:30 +] "GET 
/api/atlas/v2/types/classificationdef/guid/blah HTTP/1.1" 404 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:30 +] "GET 
/api/atlas/v2/types/entitydef/name/blah HTTP/1.1" 404 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:30 +] "GET 
/api/atlas/v2/types/entitydef/guid/blah HTTP/1.1" 404 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:30 +] "POST /api/atlas/v2/types/typedefs 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:30 +] "GET 
/api/atlas/v2/types/typedefs?supertype=AgGabY1Ykxf&type=CLASS HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:30 +] "GET 
/api/atlas/v2/types/typedefs?supertype=AgGabY1Ykxf&type=CLASS&notsupertype=Bl7FoEt9p3a
 HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:30 +] "POST /api/atlas/v2/types/typedefs 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "PUT /api/atlas/v2/types/typedefs 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedefs?type=ENTITY HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/classificationdef/name/fetl HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/entitydef/name/database HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/entitydef/name/table HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/tableType HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/serdeType HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/hive_db_v2 HTTP/1.1" 404 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/hive_column_v2 HTTP/1.1" 404 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/hive_table_v2 HTTP/1.1" 404 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/hive_process_v2 HTTP/1.1" 404 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/classification HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/pii_Tag HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/phi_Tag HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/pci_Tag HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/sox_Tag HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/sec_Tag HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "GET 
/api/atlas/v2/types/typedef/name/finance_Tag HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:31 +] "POST /api/atlas/v2/types/typedefs 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:32 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:32 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:33 +] "GET 
/api/atlas/v2/entity/guid/ec4bf4c0-20af-4d50-942d-45bd1543ae23 HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:33 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:33 +] "POST 
/api/atlas/v2/entity/guid/random/classifications HTTP/1.1" 404 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:33 +] "POST /api/atlas/v2/entity/bulk 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:33 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:33 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:33 +] "DELETE 
/api/atlas/v2/entity/bulk?guid=c8218e1d-8de7-4d6b-b126-6fba19bba2c7&guid=f67e4523-c5b1-41f7-bdb8-9d19975f51bb
 HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:34 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [29/Nov/2017:01:49:34 +] "DELETE 
/api/atlas/v2/entity/uniqueAttribute/type/hive_db_v2?attr:n

Build failed in Jenkins: Atlas-master-IntegrationTests #196

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2286: fix attribute kafka_topic.topic to set isUnique=false

--
[...truncated 385.68 KB...]
at org.eclipse.jetty.webapp.WebAppContext.startWebapp 
(WebAppContext.java:1404)
at org.eclipse.jetty.maven.plugin.JettyWebAppContext.startWebapp 
(JettyWebAppContext.java:323)
at org.eclipse.jetty.webapp.WebAppContext.startContext 
(WebAppContext.java:1366)
at org.eclipse.jetty.server.handler.ContextHandler.doStart 
(ContextHandler.java:778)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart 
(ServletContextHandler.java:262)
at org.eclipse.jetty.webapp.WebAppContext.doStart (WebAppContext.java:520)
at org.eclipse.jetty.maven.plugin.JettyWebAppContext.doStart 
(JettyWebAppContext.java:398)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start 
(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart 
(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart 
(AbstractHandler.java:61)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart 
(ContextHandlerCollection.java:161)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start 
(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart 
(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart 
(AbstractHandler.java:61)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start 
(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start (Server.java:422)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart 
(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart 
(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart (Server.java:389)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.maven.plugin.AbstractJettyMojo.startJetty 
(AbstractJettyMojo.java:460)
at org.eclipse.jetty.maven.plugin.AbstractJettyMojo.execute 
(AbstractJettyMojo.java:328)
at org.eclipse.jetty.maven.plugin.JettyRunWarMojo.execute 
(JettyRunWarMojo.java:64)
at org.eclipse.jetty.maven.plugin.JettyDeployWar.execute 
(JettyDeployWar.java:65)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:309)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:194)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:107)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:955)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:356)
[INFO] Started ServerConnector@1b3786d{HTTP/1.1,[http/1.1]}{0.0.0.0:31000}
[INFO] Started @313463ms
[INFO] Started Jetty Server
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20.1:integration-test (integration-test) @ 
atlas-webapp ---
[WARNING] useSystemClassloader setting has no effect when not forking
[INFO] Running TestSuite
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Fail

Build failed in Jenkins: Atlas-master-UnitTests #178

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2286: fix attribute kafka_topic.topic to set isUnique=false

--
[...truncated 15.48 KB...]
[INFO] Exclude: **/data/**
[INFO] Exclude: **/maven-eclipse.xml
[INFO] Exclude: **/.externalToolBuilders/**
[INFO] Exclude: **/build.log
[INFO] Exclude: **/.bowerrc
[INFO] Exclude: *.json
[INFO] Exclude: **/overlays/**
[INFO] Exclude: dev-support/**
[INFO] Exclude: **/users-credentials.properties
[INFO] Exclude: **/public/css/animate.min.css
[INFO] Exclude: **/public/css/bootstrap-sidebar.css
[INFO] Exclude: **/public/js/external_lib/**
[INFO] Exclude: **/node_modules/**
[INFO] Exclude: **/public/js/libs/**
[INFO] Exclude: **/atlas.data/**
[INFO] Exclude: **/${sys:atlas.data}/**
[INFO] Exclude: **/policy-store.txt
[INFO] Exclude: **/*rebel*.xml
[INFO] Exclude: **/*rebel*.xml.bak
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ atlas-intg ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
atlas-intg ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 2 resources to META-INF
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ atlas-intg 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 83 source files to 

[INFO] 
:
 Some input files use or override a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
atlas-intg ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 7 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:testCompile (default-testCompile) @ 
atlas-intg ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 33 source files to 

[INFO] 
:
 Some input files use or override a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-surefire-plugin:2.20.1:test (default-test) @ atlas-intg ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.atlas.model.typedef.TestAtlasStructDef
[INFO] Running org.apache.atlas.type.TestAtlasByteType
[INFO] Running org.apache.atlas.type.TestAtlasTypeRegistry
[INFO] Running org.apache.atlas.type.TestAtlasBigIntegerType
[INFO] Running org.apache.atlas.type.TestAtlasArrayType
[INFO] Running org.apache.atlas.type.TestAtlasRelationshipType
[INFO] Running org.apache.atlas.TestUtilsV2
[INFO] Running org.apache.atlas.type.TestAtlasShortType
[INFO] Running org.apache.atlas.type.TestAtlasLongType
[INFO] Running org.apache.atlas.model.instance.TestAtlasClassification
[INFO] Running 
org.apache.atlas.security.InMemoryJAASConfigurationTicketBasedKafkaClientTest
[INFO] Running org.apache.atlas.type.TestAtlasBuiltInTypesFloatDouble
[INFO] Running org.apache.atlas.model.typedef.TestAtlasEntityDef
[INFO] Running org.apache.atlas.model.typedef.TestAtlasClassificationDef
[INFO] Running org.apache.atlas.type.TestAtlasDateType
[INFO] Running org.apache.atlas.ApplicationPropertiesTest
[INFO] Running org.apache.atlas.type.TestAtlasEntityType
[INFO] Running org.apache.atlas.type.TestAtlasStringType
[INFO] Running org.apache.atlas.security.InMemoryJAASConfigurationTest
[INFO] Running org.apache.atlas.type.TestAtlasStructType
[INFO] Running org.apache.atlas.type.TestAtlasIntType
[INFO] Running org.apache.atlas.TestRelationshipUtilsV2
[INFO] Running org.apache.atlas.type.TestAtlasClassificationType
[INFO] Running org.apache.atlas.type.TestAtlasObjectIdType
[INFO] Running org.apache.atlas.type.TestAtlasBooleanType
[INFO] Running org.apache.atlas.model.instance.TestAtlasEntity
[INFO] Running org.apache.atlas.model.typedef.TestAtlasEnumDef
[INFO] Running org.apach

[jira] [Commented] (ATLAS-2286) Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique

2017-11-28 Thread Madhan Neethiraj (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269727#comment-16269727
 ] 

Madhan Neethiraj commented on ATLAS-2286:
-

+1 for the patch. Thanks [~ashutoshm].

> Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique
> ---
>
> Key: ATLAS-2286
> URL: https://issues.apache.org/jira/browse/ATLAS-2286
> Project: Atlas
>  Issue Type: Bug
>  Components: atlas-intg
>Affects Versions: trunk, 0.8.1
>Reporter: Ashutosh Mestry
>Assignee: Ashutosh Mestry
> Fix For: trunk, 0.8.1
>
> Attachments: ATLAS-2286-branch-08.diff, ATLAS-2286-master.diff
>
>
> Currently, 'kafka_topic' defines its 'topic' attribute as a unique attribute. 
> This negatively affects use cases that require managing multiple Kafka 
> clusters in a single Atlas instance. Even if the user specifies a different 
> cluster name in the 'kafka_topic' qualified name, Atlas seems to use 'topic' 
> to identify an entity.
> For example, when sending a POST request to the Atlas entity API to create an 
> entity, if there is an existing 'kafka_topic' with the same 'topic', Atlas 
> updates the existing entity instead of creating a new one.





[jira] [Updated] (ATLAS-2286) Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique

2017-11-28 Thread Ashutosh Mestry (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Mestry updated ATLAS-2286:
---
Attachment: (was: kafka_topic-branch08.patch)

> Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique
> ---
>
> Key: ATLAS-2286
> URL: https://issues.apache.org/jira/browse/ATLAS-2286
> Project: Atlas
>  Issue Type: Bug
>  Components: atlas-intg
>Affects Versions: trunk, 0.8.1
>Reporter: Ashutosh Mestry
>Assignee: Ashutosh Mestry
> Fix For: trunk, 0.8.1
>
> Attachments: ATLAS-2286-branch-08.diff, ATLAS-2286-master.diff
>
>
> Currently, 'kafka_topic' defines its 'topic' attribute as a unique attribute. 
> This negatively affects use cases that require managing multiple Kafka 
> clusters in a single Atlas instance. Even if the user specifies a different 
> cluster name in the 'kafka_topic' qualified name, Atlas seems to use 'topic' 
> to identify an entity.
> For example, when sending a POST request to the Atlas entity API to create an 
> entity, if there is an existing 'kafka_topic' with the same 'topic', Atlas 
> updates the existing entity instead of creating a new one.





[jira] [Updated] (ATLAS-2286) Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique

2017-11-28 Thread Ashutosh Mestry (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Mestry updated ATLAS-2286:
---
Attachment: ATLAS-2286-branch-08.diff
ATLAS-2286-master.diff

> Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique
> ---
>
> Key: ATLAS-2286
> URL: https://issues.apache.org/jira/browse/ATLAS-2286
> Project: Atlas
>  Issue Type: Bug
>  Components: atlas-intg
>Affects Versions: trunk, 0.8.1
>Reporter: Ashutosh Mestry
>Assignee: Ashutosh Mestry
> Fix For: trunk, 0.8.1
>
> Attachments: ATLAS-2286-branch-08.diff, ATLAS-2286-master.diff
>
>
> Currently, 'kafka_topic' defines its 'topic' attribute as a unique attribute. 
> This negatively affects use cases that require managing multiple Kafka 
> clusters in a single Atlas instance. Even if the user specifies a different 
> cluster name in the 'kafka_topic' qualified name, Atlas seems to use 'topic' 
> to identify an entity.
> For example, when sending a POST request to the Atlas entity API to create an 
> entity, if there is an existing 'kafka_topic' with the same 'topic', Atlas 
> updates the existing entity instead of creating a new one.





[jira] [Updated] (ATLAS-2286) Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique

2017-11-28 Thread Ashutosh Mestry (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Mestry updated ATLAS-2286:
---
Attachment: (was: kafka_topic-master.patch)

> Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique
> ---
>
> Key: ATLAS-2286
> URL: https://issues.apache.org/jira/browse/ATLAS-2286
> Project: Atlas
>  Issue Type: Bug
>  Components: atlas-intg
>Affects Versions: trunk, 0.8.1
>Reporter: Ashutosh Mestry
>Assignee: Ashutosh Mestry
> Fix For: trunk, 0.8.1
>
> Attachments: ATLAS-2286-branch-08.diff, ATLAS-2286-master.diff
>
>
> Currently, 'kafka_topic' defines its 'topic' attribute as a unique attribute. 
> This negatively affects use cases that require managing multiple Kafka 
> clusters in a single Atlas instance. Even if the user specifies a different 
> cluster name in the 'kafka_topic' qualified name, Atlas seems to use 'topic' 
> to identify an entity.
> For example, when sending a POST request to the Atlas entity API to create an 
> entity, if there is an existing 'kafka_topic' with the same 'topic', Atlas 
> updates the existing entity instead of creating a new one.





[jira] [Updated] (ATLAS-2286) Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique

2017-11-28 Thread Ashutosh Mestry (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Mestry updated ATLAS-2286:
---
Attachment: kafka_topic-branch08.patch
kafka_topic-master.patch

> Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique
> ---
>
> Key: ATLAS-2286
> URL: https://issues.apache.org/jira/browse/ATLAS-2286
> Project: Atlas
>  Issue Type: Bug
>  Components: atlas-intg
>Affects Versions: trunk, 0.8.1
>Reporter: Ashutosh Mestry
>Assignee: Ashutosh Mestry
> Fix For: trunk, 0.8.1
>
> Attachments: kafka_topic-branch08.patch, kafka_topic-master.patch
>
>
> Currently, 'kafka_topic' defines its 'topic' attribute as a unique attribute. 
> This negatively affects use cases that require managing multiple Kafka 
> clusters in a single Atlas instance. Even if the user specifies a different 
> cluster name in the 'kafka_topic' qualified name, Atlas seems to use 'topic' 
> to identify an entity.
> For example, when sending a POST request to the Atlas entity API to create an 
> entity, if there is an existing 'kafka_topic' with the same 'topic', Atlas 
> updates the existing entity instead of creating a new one.





[jira] [Created] (ATLAS-2286) Pre-built type 'kafka_topic' should not declare 'topic' attribute as unique

2017-11-28 Thread Ashutosh Mestry (JIRA)
Ashutosh Mestry created ATLAS-2286:
--

 Summary: Pre-built type 'kafka_topic' should not declare 'topic' 
attribute as unique
 Key: ATLAS-2286
 URL: https://issues.apache.org/jira/browse/ATLAS-2286
 Project: Atlas
  Issue Type: Bug
  Components: atlas-intg
Affects Versions: 0.8.1, trunk
Reporter: Ashutosh Mestry
Assignee: Ashutosh Mestry
 Fix For: trunk, 0.8.1


Currently, 'kafka_topic' defines its 'topic' attribute as a unique attribute. 
This negatively affects use cases that require managing multiple Kafka 
clusters in a single Atlas instance. Even if the user specifies a different 
cluster name in the 'kafka_topic' qualified name, Atlas seems to use 'topic' 
to identify an entity.

For example, when sending a POST request to the Atlas entity API to create an 
entity, if there is an existing 'kafka_topic' with the same 'topic', Atlas 
updates the existing entity instead of creating a new one.
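In Atlas v2 type definitions, uniqueness is controlled by the attribute's isUnique flag. A sketch of how the corrected 'topic' attribute definition might look in the kafka_topic entity typedef (field values here are illustrative assumptions, not the actual patch content):

```json
{
  "name": "topic",
  "typeName": "string",
  "isOptional": false,
  "isUnique": false,
  "isIndexable": true
}
```

With isUnique set to false, qualifiedName (which can include the cluster name) remains the attribute that distinguishes topics across clusters.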





Build failed in Jenkins: Atlas-0.8-IntegrationTests #138

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2276: update Hive hook to add an option to retain 
case-sensitivity

--
[...truncated 57.01 MB...]
at org.apache.atlas.hive.HiveITBase.runCommand(HiveITBase.java:105)
at 
org.apache.atlas.hive.hook.HiveHookIT.testLoadLocalPathIntoPartition(HiveHookIT.java:439)

testTraitsPreservedOnColumnRename(org.apache.atlas.hive.hook.HiveHookIT)  Time 
elapsed: 6.543 sec  <<< FAILURE!
java.lang.AssertionError: Assertions failed. Failing after waiting for timeout 
1000 msecs
at 
org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:387)
at 
org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:320)
at 
org.apache.atlas.AtlasBaseClient.callAPIWithRetries(AtlasBaseClient.java:471)
at org.apache.atlas.AtlasClient.callAPIWithRetries(AtlasClient.java:967)
at org.apache.atlas.AtlasClient.getEntity(AtlasClient.java:651)
at org.apache.atlas.hive.HiveITBase$1.evaluate(HiveITBase.java:161)
at org.apache.atlas.hive.HiveITBase.waitFor(HiveITBase.java:198)
at 
org.apache.atlas.hive.HiveITBase.assertEntityIsRegistered(HiveITBase.java:158)
at 
org.apache.atlas.hive.hook.HiveHookIT.assertColumnIsRegistered(HiveHookIT.java:283)
at 
org.apache.atlas.hive.hook.HiveHookIT.assertColumnIsRegistered(HiveHookIT.java:278)
at 
org.apache.atlas.hive.hook.HiveHookIT.testTraitsPreservedOnColumnRename(HiveHookIT.java:1255)

testTruncateTable(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 6.065 
sec  <<< FAILURE!
java.lang.AssertionError: Assertions failed. Failing after waiting for timeout 
1000 msecs
at 
org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:387)
at 
org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:320)
at 
org.apache.atlas.AtlasBaseClient.callAPIWithRetries(AtlasBaseClient.java:471)
at org.apache.atlas.AtlasClient.callAPIWithRetries(AtlasClient.java:967)
at org.apache.atlas.AtlasClient.getEntity(AtlasClient.java:651)
at org.apache.atlas.hive.HiveITBase$1.evaluate(HiveITBase.java:161)
at org.apache.atlas.hive.HiveITBase.waitFor(HiveITBase.java:198)
at 
org.apache.atlas.hive.HiveITBase.assertEntityIsRegistered(HiveITBase.java:158)
at 
org.apache.atlas.hive.HiveITBase.assertTableIsRegistered(HiveITBase.java:152)
at 
org.apache.atlas.hive.HiveITBase.assertTableIsRegistered(HiveITBase.java:146)
at 
org.apache.atlas.hive.hook.HiveHookIT.testTruncateTable(HiveHookIT.java:1197)

testUpdateProcess(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 7.794 
sec  <<< FAILURE!
java.lang.AssertionError: Assertions failed. Failing after waiting for timeout 
1000 msecs
at 
org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:387)
at 
org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:320)
at 
org.apache.atlas.AtlasBaseClient.callAPIWithRetries(AtlasBaseClient.java:471)
at org.apache.atlas.AtlasClient.callAPIWithRetries(AtlasClient.java:967)
at org.apache.atlas.AtlasClient.getEntity(AtlasClient.java:651)
at org.apache.atlas.hive.HiveITBase$1.evaluate(HiveITBase.java:161)
at org.apache.atlas.hive.HiveITBase.waitFor(HiveITBase.java:198)
at 
org.apache.atlas.hive.HiveITBase.assertEntityIsRegistered(HiveITBase.java:158)
at 
org.apache.atlas.hive.hook.HiveHookIT.assertProcessIsRegistered(HiveHookIT.java:1735)
at 
org.apache.atlas.hive.hook.HiveHookIT.validateProcess(HiveHookIT.java:487)
at 
org.apache.atlas.hive.hook.HiveHookIT.validateProcess(HiveHookIT.java:507)
at 
org.apache.atlas.hive.hook.HiveHookIT.testUpdateProcess(HiveHookIT.java:590)


Results :

Failed tests: 
  HiveHookIT.testAlterTableChangeColumn:1018->getColumns:928->HiveITBase.assertTableIsRegistered:146->HiveITBase.assertTableIsRegistered:152->HiveITBase.assertEntityIsRegistered:168 » AtlasService
  HiveHookIT.testAlterTableLocation:1305->HiveITBase.validateHDFSPaths:222 expected [50731dc7-2090-4a6c-8272-2e0455a84382] but found [00d66c97-4c7e-40de-b276-77bc724f1a7e]
  HiveHookIT.testCreateView:368->assertProcessIsRegistered:1710->HiveITBase.assertEntityIsRegistered:158->HiveITBase.waitFor:202 Assertions failed. Failing after waiting for timeout 1000 msecs
  HiveHookIT.testDropDatabaseWithCascade:1470->assertDBIsNotRegistered:1798->assertEntityIsNotRegistered:1806->HiveITBase.waitFor:202 Assertions failed. Failing after waiting for timeout 1000 msecs
  HiveHookIT.testDropTable:1416->HiveITBase.assertTableIsRegistered:146->HiveITBase.assertTableIsRegistered:152->HiveITBase.assertEntityIsRegistered:158->HiveITBase.waitFor:202 Assertions failed. Failing after waiting for timeout 1000 msecs
  HiveHook

Build failed in Jenkins: Atlas-0.8-UnitTests #138

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2276: update Hive hook to add an option to retain 
case-sensitivity

--
[...truncated 194.16 KB...]
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.454 sec - in org.apache.atlas.repository.store.graph.v1.InverseReferenceUpdateV1Test
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.689 sec - in org.apache.atlas.util.CompiledQueryCacheKeyTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.914 sec - in org.apache.atlas.query.QueryProcessorTest
Running org.apache.atlas.repository.graph.GraphBackedSearchIndexerTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.atlas.repository.graph.GraphRepoMapperScaleTest
Running org.apache.atlas.repository.graph.GraphHelperTest
Running org.apache.atlas.repository.graph.ReverseReferenceUpdateHardDeleteTest
Running org.apache.atlas.repository.graph.ReverseReferenceUpdateSoftDeleteTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.633 sec - in org.apache.atlas.query.ParserTest
Running org.apache.atlas.repository.graph.Gremlin3QueryOptimizerTest
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.442 sec - in org.apache.atlas.query.BaseGremlinTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.atlas.repository.graph.GraphBackedRepositoryHardDeleteTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.722 sec - in org.apache.atlas.query.ExpressionTest
Running org.apache.atlas.repository.graph.GraphBackedRepositorySoftDeleteTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.atlas.repository.graph.GraphBackedSearchIndexerMockTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.279 sec - in org.apache.atlas.services.MetricsServiceTest
Running org.apache.atlas.repository.graph.GraphHelperMockTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.352 sec - in org.apache.atlas.discovery.EntityDiscoveryServiceTest
Running org.apache.atlas.repository.graph.GraphBackedMetadataRepositoryTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.934 sec - in org.apache.atlas.discovery.DataSetLineageServiceTest
Running org.apache.atlas.repository.userprofile.UserProfileServiceTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.atlas.repository.impexp.ImportTransformerJSONTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.213 sec - in org.apache.atlas.RepositoryServiceLoadingTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.atlas.repository.impexp.ImportTransformsTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.218 sec - in org.apache.atlas.repository.store.graph.v1.AtlasEntityDefStoreV1Test
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.036 sec - in org.apache.atlas.repository.typestore.GraphBackedTypeStoreTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.atlas.repository.impexp.ImportServiceReportingTest
Running org.apache.atlas.repository.impexp.ExportServiceTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.238 sec - in org.apache.atlas.repository.typestore.StoreBackedTypeCacheTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.atlas.repository.impexp.UniqueListTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.176 sec - in org.apache.atlas.service.StoreBackedTypeCacheMetadataServiceTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.267 sec - in org.apache.atlas.repository.graph.AbstractGremlinQueryOptimizerTest
Running org.apache.atlas.repository.impexp.AtlasImportRequestTest
Java

Build failed in Jenkins: Atlas-master-IntegrationTests #195

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2276: update Hive hook to add an option to retain 
case-sensitivity

--
[...truncated 1.18 MB...]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:61)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart (ContextHandlerCollection.java:161)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:61)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start (Server.java:422)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart (Server.java:389)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:68)
at org.eclipse.jetty.maven.plugin.AbstractJettyMojo.startJetty (AbstractJettyMojo.java:460)
at org.eclipse.jetty.maven.plugin.AbstractJettyMojo.execute (AbstractJettyMojo.java:328)
at org.eclipse.jetty.maven.plugin.JettyRunWarMojo.execute (JettyRunWarMojo.java:64)
at org.eclipse.jetty.maven.plugin.JettyDeployWar.execute (JettyDeployWar.java:65)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:309)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:194)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:107)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:955)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
[INFO] Started ServerConnector@3ca55b6{HTTP/1.1,[http/1.1]}{0.0.0.0:31000}
[INFO] Started @403904ms
[INFO] Started Jetty Server
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20.1:integration-test (integration-test) @ atlas-webapp ---
[WARNING] useSystemClassloader setting has no effect when not forking
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-testng/2.20.1/surefire-testng-2.20.1.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-testng/2.20.1/surefire-testng-2.20.1.pom (2.4 kB at 141 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-testng-utils/2.20.1/surefire-testng-utils-2.20.1.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/surefire/surefire-testng-utils/2.20.1/

Jenkins build is still unstable: Atlas-master-UnitTests #177

2017-11-28 Thread Apache Jenkins Server
See 




Re: Review Request 64061: ATLAS-2276 : Path value for hdfs_path type entity is set to lower case from hive-bridge.

2017-11-28 Thread Madhan Neethiraj

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64061/#review192056
---


Ship it!




Ship It!

- Madhan Neethiraj


On Nov. 28, 2017, 7:49 p.m., Nixon Rodrigues wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/64061/
> ---
> 
> (Updated Nov. 28, 2017, 7:49 p.m.)
> 
> 
> Review request for atlas, Apoorv Naik, Ashutosh Mestry, Madhan Neethiraj, and 
> Sarath Subramanian.
> 
> 
> Bugs: ATLAS-2276
> https://issues.apache.org/jira/browse/ATLAS-2276
> 
> 
> Repository: atlas
> 
> 
> Description
> ---
> 
> This patch includes a fix for the Path value of hdfs_path type entities, 
> which was being set to lower case by hive-bridge.
> Added a new property *atlas.hook.hive.hdfs_path.allowlowercase*: if set to 
> true, the path will be set to lowercase; otherwise, by default, the path 
> value will be the same as the HDFS filename.
> 
> 
> Diffs
> -
> 
>   
> addons/hive-bridge/src/main/java/org/apache/atlas/hive/bridge/HiveMetaStoreBridge.java
>  ab0094b8 
>   addons/hive-bridge/src/main/java/org/apache/atlas/hive/hook/HiveHook.java 
> 57f5efb5 
> 
> 
> Diff: https://reviews.apache.org/r/64061/diff/3/
> 
> 
> Testing
> ---
> 
> HiveBridge unit test cases run properly.
> Created a table and verified the hive/hdfs_path entities in Atlas.
> Tested the import-hive script and verified the hive/hdfs_path entities in Atlas.
> 
> 
> Thanks,
> 
> Nixon Rodrigues
> 
>



Re: Review Request 64061: ATLAS-2276 : Path value for hdfs_path type entity is set to lower case from hive-bridge.

2017-11-28 Thread Nixon Rodrigues

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64061/
---

(Updated Nov. 28, 2017, 7:49 p.m.)


Review request for atlas, Apoorv Naik, Ashutosh Mestry, Madhan Neethiraj, and 
Sarath Subramanian.


Changes
---

This patch includes changes to address review comments.

Testing: created Atlas entities from the import-hive script and the Atlas hive 
hook with atlas.hook.hive.hdfs_path.convert_to_lowercase=true|false.


Bugs: ATLAS-2276
https://issues.apache.org/jira/browse/ATLAS-2276


Repository: atlas


Description
---

This patch includes a fix for the Path value of hdfs_path type entities, which 
was being set to lower case by hive-bridge.
Added a new property *atlas.hook.hive.hdfs_path.allowlowercase*: if set to true, 
the path will be set to lowercase; otherwise, by default, the path value will 
be the same as the HDFS filename.


Diffs (updated)
-

  
addons/hive-bridge/src/main/java/org/apache/atlas/hive/bridge/HiveMetaStoreBridge.java
 ab0094b8 
  addons/hive-bridge/src/main/java/org/apache/atlas/hive/hook/HiveHook.java 
57f5efb5 


Diff: https://reviews.apache.org/r/64061/diff/3/

Changes: https://reviews.apache.org/r/64061/diff/2-3/


Testing
---

HiveBridge unit test cases run properly.
Created a table and verified the hive/hdfs_path entities in Atlas.
Tested the import-hive script and verified the hive/hdfs_path entities in Atlas.


Thanks,

Nixon Rodrigues



[jira] [Comment Edited] (ATLAS-2259) Add Janus Graph Cassandra support to the default build

2017-11-28 Thread Pierre Padovani (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269278#comment-16269278
 ] 

Pierre Padovani edited comment on ATLAS-2259 at 11/28/17 7:10 PM:
--

Here are the pom.xml dependency additions currently required.

{code:java}
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-cassandra</artifactId>
    <version>${janus.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.codehaus.jettison</groupId>
            <artifactId>jettison</artifactId>
        </exclusion>
        <exclusion>
            <groupId>commons-lang</groupId>
            <artifactId>commons-lang</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-cql</artifactId>
    <version>${janus.version}</version>
</dependency>
{code}

This does not support Elasticsearch as there is a separate issue with that, as 
being discussed here: 
[ATLAS-2270|https://issues.apache.org/jira/browse/ATLAS-2270]


was (Author: ppadovani):
Here are the pom.xml dependency additions currently required.

{code:java}
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-cassandra</artifactId>
    <version>${janus.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.codehaus.jettison</groupId>
            <artifactId>jettison</artifactId>
        </exclusion>
        <exclusion>
            <groupId>commons-lang</groupId>
            <artifactId>commons-lang</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-cql</artifactId>
    <version>${janus.version}</version>
</dependency>
{code}


> Add Janus Graph Cassandra support to the default build
> --
>
> Key: ATLAS-2259
> URL: https://issues.apache.org/jira/browse/ATLAS-2259
> Project: Atlas
>  Issue Type: Improvement
>  Components:  atlas-core
>Affects Versions: 1.0.0
>Reporter: Pierre Padovani
> Fix For: 1.0.0
>
>
> Atlas should have support for Cassandra as a backend for Janus available by 
> default. If someone wants this type of configuration, they have to modify the 
> pom.xml and rebuild Atlas. Here is the pom.xml modification required to 
> enable this support:
> {code:java}
> <dependency>
>     <groupId>org.janusgraph</groupId>
>     <artifactId>janusgraph-cassandra</artifactId>
>     <version>${janus.version}</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.codehaus.jettison</groupId>
>             <artifactId>jettison</artifactId>
>         </exclusion>
>         <exclusion>
>             <groupId>commons-lang</groupId>
>             <artifactId>commons-lang</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Sarath Subramanian
Hi Graham,

Could you share your modified webapp/pom.xml

I’m trying to follow the instructions and build for Janus 0.2.0 but I’m facing 
a class not found exception during atlas startup - 
java.lang.ClassNotFoundException: org.apache.lucene.analysis.TokenStream
 
Thanks,

Sarath Subramanian



On 11/28/17, 10:40 AM, "Graham Wallis"  wrote:

modified the webapp/pom.xml



[jira] [Commented] (ATLAS-2259) Add Janus Graph Cassandra support to the default build

2017-11-28 Thread Pierre Padovani (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269278#comment-16269278
 ] 

Pierre Padovani commented on ATLAS-2259:


Here are the pom.xml dependency additions currently required.

{code:java}
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-cassandra</artifactId>
    <version>${janus.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.codehaus.jettison</groupId>
            <artifactId>jettison</artifactId>
        </exclusion>
        <exclusion>
            <groupId>commons-lang</groupId>
            <artifactId>commons-lang</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-cql</artifactId>
    <version>${janus.version}</version>
</dependency>
{code}
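
The rebuild step after these pom changes is not shown in the comment. A minimal dry-run sketch, reusing the dist profile and flags quoted elsewhere in this thread (the packaged-jar path in the trailing comment is an assumption about the distro layout):

```shell
# Sketch: after adding the janusgraph-cassandra/janusgraph-cql dependencies
# to the pom, rebuild the Atlas distribution. Printed as a dry run; run the
# command itself to actually build.
BUILD_CMD="mvn clean install -DskipTests -Pdist"
echo "$BUILD_CMD"
# After a real build, verify the new jars were packaged, e.g. (path assumed):
# ls distro/target/apache-atlas-*-bin/apache-atlas-*/server/webapp/atlas/WEB-INF/lib | grep janusgraph
```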


> Add Janus Graph Cassandra support to the default build
> --
>
> Key: ATLAS-2259
> URL: https://issues.apache.org/jira/browse/ATLAS-2259
> Project: Atlas
>  Issue Type: Improvement
>  Components:  atlas-core
>Affects Versions: 1.0.0
>Reporter: Pierre Padovani
> Fix For: 1.0.0
>
>
> Atlas should have support for Cassandra as a backend for Janus available by 
> default. If someone wants this type of configuration, they have to modify the 
> pom.xml and rebuild Atlas. Here is the pom.xml modification required to 
> enable this support:
> {code:java}
> <dependency>
>     <groupId>org.janusgraph</groupId>
>     <artifactId>janusgraph-cassandra</artifactId>
>     <version>${janus.version}</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.codehaus.jettison</groupId>
>             <artifactId>jettison</artifactId>
>         </exclusion>
>         <exclusion>
>             <groupId>commons-lang</groupId>
>             <artifactId>commons-lang</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread Pierre Padovani (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269246#comment-16269246
 ] 

Pierre Padovani edited comment on ATLAS-2270 at 11/28/17 6:48 PM:
--

Hi [~grahamwallis]

IMHO - In terms of the Janus-ES topology support:
* Fix the janus pom.xml and include the es rest client dependency and ship that 
with the atlas distribution. Otherwise implementors will have to copy that jar 
into the lib of Atlas before starting.
* Require anyone wanting to use a janus-es topology to stand up their own es 
cluster and supply the hostnames in the atlas properties. This isn't all that 
hard: download ES then start it (for a single node config).
* Include a docker file (I'll gladly contribute mine as an alternative to the 
current one) that provides this topology for development/example purposes. (We 
exclusively use this docker for dev/debug at this point; it is just a single 
Atlas node.)

We are in the middle of our development cycle in terms of metadata support, and 
our infrastructure lacks any Hadoop support, and will likely not contain it in 
the future. As we evaluated possible metadata products, Atlas stood out; 
however, using Titan was not something we wanted. Seeing the move to Janus was 
huge for us; we intend to deploy Atlas Janus 0.2.0 on Cassandra + ES.

*EDIT* To be clear I implemented all of the above changes, and have built and 
deployed an Atlas docker container running Janus 0.2.0 using Cassandra and 
Elasticsearch. As far as we can tell it works very well.

Thanks!
Pierre


was (Author: ppadovani):
Hi [~grahamwallis]

IMHO - In terms of the Janus-ES topology support:
* Fix the janus pom.xml and include the es rest client dependency and ship that 
with the atlas distribution. Otherwise implementors will have to copy that jar 
into the lib of Atlas before starting.
* Require anyone wanting to use a janus-es topology to stand up their own es 
cluster and supply the hostnames in the atlas properties. This isn't all that 
hard: download ES then start it (for a single node config).
* Include a docker file (I'll gladly contribute mine as an alternative to the 
current one) that provides this topology for development/example purposes. (We 
exclusively use this docker for dev/debug at this point; it is just a single 
Atlas node.)

We are in the middle of our development cycle in terms of metadata support, and 
our infrastructure lacks any Hadoop support, and will likely not contain it in 
the future. As we evaluated possible metadata products, Atlas stood out; 
however, using Titan was not something we wanted. Seeing the move to Janus was 
huge for us; we intend to deploy Atlas Janus 0.2.0 on Cassandra + ES.

Thanks!
Pierre

> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0  solr  es
> 
> berkeley   0  1
> hbase   1  0
> cassandra  0  0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr  es
> 
> berkeley   ?  1
> hbase   1  ?
> cassandra  ?  ?
> }}
> It is suggested that the combinations marked with '1' should be continued and 
> the remaining 4 combinations, marked with '?', should be considered. There 
> seems to be evidence of people using all 4 of these combinations, although 
> not necessarily with Atlas.
> Depending on the decision made above, we need to ensure that it is possible 
> to build Atlas as a standalone package with any of the combinations - i.e. 
> that they are mutually exclusive and do not interfere with one another. They 
> currently interfere which makes it impossible to build Atlas with 
> -Pdist,berkeley-elasticsearch because the 'dist' profile will exclude jars 
> that are needed by the berkeley-elasticsearch profile - which leads to class 
> not found exceptions when the Atlas server is started. The solution to this 
> could be very simple, or slightly more sophisticated, depending on how many 
> of the combinations we choose to support.

[jira] [Commented] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread Pierre Padovani (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269246#comment-16269246
 ] 

Pierre Padovani commented on ATLAS-2270:


Hi [~grahamwallis]

IMHO - In terms of the Janus-ES topology support:
* Fix the janus pom.xml and include the es rest client dependency and ship that 
with the atlas distribution. Otherwise implementors will have to copy that jar 
into the lib of Atlas before starting.
* Require anyone wanting to use a janus-es topology to stand up their own es 
cluster and supply the hostnames in the atlas properties. This isn't all that 
hard: download ES then start it (for a single node config).
* Include a docker file (I'll gladly contribute mine as an alternative to the 
current one) that provides this topology for development/example purposes. (We 
exclusively use this docker for dev/debug at this point; it is just a single 
Atlas node.)

We are in the middle of our development cycle in terms of metadata support, and 
our infrastructure lacks any Hadoop support, and will likely not contain it in 
the future. As we evaluated possible metadata products, Atlas stood out; 
however, using Titan was not something we wanted. Seeing the move to Janus was 
huge for us; we intend to deploy Atlas Janus 0.2.0 on Cassandra + ES.

Thanks!
Pierre
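
The exact property keys for such a deployment aren't spelled out in the thread. Below is a minimal sketch of appending them to atlas-application.properties, assuming the `atlas.graph.*` prefix Atlas uses to pass storage and index-search settings through to the graph provider; the key names, backend values, and hostnames are assumptions to verify against your Atlas and JanusGraph versions:

```shell
# Hypothetical sketch: point Atlas at an external Cassandra store and an
# external Elasticsearch index. Property names assume Atlas's atlas.graph.*
# pass-through to JanusGraph configuration; check before use.
CONF="${CONF:-./atlas-application.properties}"   # config file path is an assumption
cat >> "$CONF" <<'EOF'
atlas.graph.storage.backend=cql
atlas.graph.storage.hostname=cassandra-host1,cassandra-host2
atlas.graph.index.search.backend=elasticsearch
atlas.graph.index.search.hostname=es-host1:9200
EOF
```

Atlas would then need the external Cassandra and Elasticsearch services to be up before startup, as Pierre notes above.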

> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0  solr  es
> 
> berkeley   0  1
> hbase   1  0
> cassandra  0  0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr  es
> 
> berkeley   ?  1
> hbase   1  ?
> cassandra  ?  ?
> }}
> It is suggested that the combinations marked with '1' should be continued and 
> the remaining 4 combinations, marked with '?', should be considered. There 
> seems to be evidence of people using all 4 of these combinations, although 
> not necessarily with Atlas.
> Depending on the decision made above, we need to ensure that it is possible 
> to build Atlas as a standalone package with any of the combinations - i.e. 
> that they are mutually exclusive and do not interfere with one another. They 
> currently interfere which makes it impossible to build Atlas with 
> -Pdist,berkeley-elasticsearch because the 'dist' profile will exclude jars 
> that are needed by the berkeley-elasticsearch profile - which leads to class 
> not found exceptions when the Atlas server is started. The solution to this 
> could be very simple, or slightly more sophisticated, depending on how many 
> of the combinations we choose to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Graham Wallis
Thanks Sarath,

I have now got an Atlas server started using a variant on Ashutosh's 
instructions. The key difference is that I am using JanusGraph 0.2.0. I 
cannot reproduce Ashutosh, Madhan and Sarath's result with Titan 0.5.4.

My environment is like this:
I am on Windows and have the latest rat plugin (0.12 vs. 0.7) to cope with 
the downloaded solr archive. You don't seem to need that on other 
platforms.
I am running with a modified webapp/pom.xml, but that will not affect 
this combination of profiles.
Built using 'mvn clean install -DskipTests -Pdist,embedded-hbase-solr'
set MANAGE_LOCAL_HBASE=true
set MANAGE_LOCAL_SOLR=true
Checked the atlas-application.properties looks OK - but made no changes to 
it as it defaults to AtlasJanusGraphDatabase anyway.
cd <atlas_root>/bin, where 'atlas_root' is 
c:/dev/atlas/atlas/distro/target/apache-atlas-1.0.0-SNAPSHOT-bin/apache-atlas-1.0.0-SNAPSHOT/
atlas_start.py

hbase.cmd launches in a separate window and appears to survive - when I do 
this with Titan 0.5.4 it does not.

Atlas took a *very* long time to start - ages in fact, but it did get 
there eventually. I have not investigated what it was doing during this 
time.

I could then connect to localhost:21000.

I have not tried to run quick_start yet.
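
The steps above can be sketched as a unix shell session (Graham's session was on Windows; the distro path and SNAPSHOT version are taken from his message, everything else is unchanged):

```shell
# Dry-run sketch of the build-and-start sequence described above. The
# distro path and version are assumptions from this thread; pipe the output
# to sh, or run the lines by hand, to actually execute them.
ATLAS_DIST=distro/target/apache-atlas-1.0.0-SNAPSHOT-bin/apache-atlas-1.0.0-SNAPSHOT
cat <<EOF
mvn clean install -DskipTests -Pdist,embedded-hbase-solr
export MANAGE_LOCAL_HBASE=true
export MANAGE_LOCAL_SOLR=true
cd $ATLAS_DIST/bin
python atlas_start.py
EOF
# once startup completes, browse to http://localhost:21000
```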

Best regards,
  Graham

Graham Wallis
IBM Analytics Emerging Technology Center
Internet: graham_wal...@uk.ibm.com 
IBM Laboratories, Hursley Park, Hursley, Hampshire SO21 2JN
Tel: +44-1962-815356   Tie: 7-245356




From:   Sarath Subramanian 
To: "dev@atlas.apache.org" 
Cc: Ashutosh Mestry 
Date:   28/11/2017 17:56
Subject:Re: Instructions to build & run Atlas in dev environment



Thanks for the instructions Ashutosh and Madhan. I’m able to bring up 
atlas fine and run Quickstart. 

I initially got a ‘connection refused’ to zookeeper and then I figured I 
need to set my JAVA_HOME environment variable (which is needed for 
embedded HBase server startup/shutdown).

 
Thanks,

Sarath Subramanian



On 11/28/17, 12:19 AM, "Madhan Neethiraj"  wrote:

 
atlas.graphdb.backend=org.apache.atlas.repository.graphdb.titan0.Titan0GraphDatabase



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



[jira] [Commented] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread Graham Wallis (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269195#comment-16269195
 ] 

Graham Wallis commented on ATLAS-2270:
--

Hi [~ppadovani]

As you said JanusGraph fully supports Cassandra and ElasticSearch. 

I think there may be two things that we need to fix to get that combination 
working in Atlas - in addition to your pom changes. 
* I hadn't realised that there was an issue with the '.' character - but until 
now I guess Atlas has only been using ES 1.x.  
* The other issue is that from JanusGraph 0.2.0 onwards JanusGraph expects to 
communicate with ElasticSearch using the REST API. Until now Atlas has relied 
on the graphDB (whether titan or janus) to bring up an embedded ES index 
server; but from JanusGraph 0.2.0 onwards we need to spawn a separate ES server 
process that JanusGraph can connect to as a REST client, as described here 
[http://docs.janusgraph.org/latest/upgrade.html]

That change to the internal Janus-ES topology was one of the things that 
triggered this JIRA - as I wanted to assess the level of interest in ES rather 
than just assume that Atlas should continue with it. Many uses of Atlas are 
based on hadoop and I think would understandably have a bias toward Solr. So 
with regard to my (rather poorly formatted :-) ) tables in this JIRA I am 
starting to think that the interesting combinations might be HBase-Solr, 
Cassandra-ES and _possibly_ BDB with either Solr or ES (but maybe not both). 
The latter is IMHO quite useful for dev/debug purposes as it is very easy to 
stop Atlas and configure a gremlin-console to look at the BDB graph on the 
filesystem. I wonder whether that set of combinations may provide reasonable 
coverage at least for now - i.e. it would be minimal but sufficient. Please 
(anyone) comment if you agree or disagree.

BTW: Are you currently working with JanusGraph 0.1.1 or 0.2.0? 

Many thanks 
  Graham


> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0  solr  es
> 
> berkeley   0  1
> hbase   1  0
> cassandra  0  0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr  es
> 
> berkeley   ?  1
> hbase   1  ?
> cassandra  ?  ?
> }}
> It is suggested that the combinations marked with '1' should be continued and 
> the remaining 4 combinations, marked with '?', should be considered. There 
> seems to be evidence of people using all 4 of these combinations, although 
> not necessarily with Atlas.
> Depending on the decision made above, we need to ensure that it is possible 
> to build Atlas as a standalone package with any of the combinations - i.e. 
> that they are mutually exclusive and do not interfere with one another. They 
> currently interfere which makes it impossible to build Atlas with 
> -Pdist,berkeley-elasticsearch because the 'dist' profile will exclude jars 
> that are needed by the berkeley-elasticsearch profile - which leads to class 
> not found exceptions when the Atlas server is started. The solution to this 
> could be very simple, or slightly more sophisticated, depending on how many 
> of the combinations we choose to support.
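To make the profile interference concrete, here is a minimal sketch of the build invocation being discussed. The profile names come from this JIRA; the goal and `-DskipTests` flag are assumptions:

```shell
# Sketch only: the profile combination that currently breaks, per ATLAS-2270.
# Profile names are from this thread; other flags are assumptions.
BUILD_CMD="mvn clean package -DskipTests -Pdist,berkeley-elasticsearch"
echo "$BUILD_CMD"
```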



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Sarath Subramanian
Thanks for the instructions Ashutosh and Madhan. I’m able to bring up atlas 
fine and run Quickstart. 

I initially got a ‘connection refused’ from ZooKeeper, and then figured out that 
I needed to set my JAVA_HOME environment variable (which is required for embedded 
HBase server startup/shutdown).
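For anyone hitting the same thing, a minimal sketch of the environment setup. The JDK path and the MANAGE_LOCAL_HBASE variable name are assumptions for illustration:

```shell
# Sketch: environment needed before starting Atlas with an embedded HBase.
# The JDK path is an example; adjust for your machine.
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java-8-openjdk}"
export MANAGE_LOCAL_HBASE=true   # assumed variable name for illustration
echo "JAVA_HOME=$JAVA_HOME"
```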

 
Thanks,

Sarath Subramanian



On 11/28/17, 12:19 AM, "Madhan Neethiraj"  wrote:


atlas.graphdb.backend=org.apache.atlas.repository.graphdb.titan0.Titan0GraphDatabase
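For context, the backend switch lives in atlas-application.properties. A hedged sketch of the two settings being compared; the janus class name is an assumption based on the atlas-graphdb-janus module naming in this thread:

```properties
# titan0 backend (as quoted above)
atlas.graphdb.backend=org.apache.atlas.repository.graphdb.titan0.Titan0GraphDatabase

# janus backend (assumed class name; verify against the atlas-graphdb-janus module)
#atlas.graphdb.backend=org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase
```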



Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Graham Wallis
Hi Nigel,

You need to prevent webapp from excluding the sleepycat jar. It's the mod 
to webapp/pom.xml that I sent you.
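For illustration, the change Graham describes amounts to no longer excluding the sleepycat jar when the war is packaged. A hedged sketch only; the plugin configuration and exclusion pattern are assumptions, not the actual patch:

```xml
<!-- webapp/pom.xml (sketch, assumed configuration): stop excluding the
     sleepycat (BerkeleyDB JE) jar from the packaged war so the berkeley
     profiles can find it at runtime. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <!-- remove any je-*.jar pattern from this list -->
    <packagingExcludes>WEB-INF/lib/titan-*.jar</packagingExcludes>
  </configuration>
</plugin>
```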

Best regards,
  Graham

Graham Wallis
IBM Analytics Emerging Technology Center
Internet: graham_wal...@uk.ibm.com 
IBM Laboratories, Hursley Park, Hursley, Hampshire SO21 2JN
Tel: +44-1962-815356    Tie: 7-245356




From:   "Nigel Jones" 
To: 
Date:   28/11/2017 17:17
Subject:Re: Instructions to build & run Atlas in dev environment





On 2017-11-28 08:19, Madhan Neethiraj  wrote: 

> Please follow the instructions at the end of this email to build & run 
Apache Atlas in your local dev environment.

Thanks Madhan,
 I gave this script a go under macOS with mvn 3.5.0 & java 8 (152, oracle), 
but I fall over getting the berkeleydb library:

[INFO] Apache Atlas JanusGraph DB Impl  FAILURE [ 
18.174 s]
...
[INFO] BUILD FAILURE
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process 
(default) on project atlas-graphdb-janus: Error resolving project 
artifact: Could not transfer artifact com.sleepycat:je:pom:7.3.7 from/to 
java.net-Public (https://maven.java.net/content/groups/public/): Failed to 
transfer file: 
https://maven.java.net/content/groups/public/com/sleepycat/je/7.3.7/je-7.3.7.pom. 
Return code is: 500, ReasonPhrase: Internal Server Error. for project 
com.sleepycat:je:jar:7.3.7 -> [Help 1]

Perhaps a temporary server glitch, but I don't see that sleepycat 
package. ?



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Andrew Hulbert

Nigel,

I was able to add the "oracle" repo to solve the sleepycat problem:

https://github.com/jahhulbert-ccri/graphy-stuff/blob/master/atlas/doc/setup.md#update-settingsxml

I didn't end up needing the jitpack one so ignore that.
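For reference, the kind of settings.xml addition Andrew links to looks roughly like the sketch below. The repository URL is an assumption (Oracle has hosted com.sleepycat:je on its own Maven repo); verify against the linked setup doc before use:

```xml
<!-- ~/.m2/settings.xml (sketch): add a repository that hosts com.sleepycat:je.
     URL is an assumption; verify before use. -->
<profile>
  <id>sleepycat</id>
  <repositories>
    <repository>
      <id>oracle</id>
      <url>https://download.oracle.com/maven</url>
    </repository>
  </repositories>
</profile>
<!-- and activate it, e.g. <activeProfiles><activeProfile>sleepycat</activeProfile></activeProfiles> -->
```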


On 11/28/2017 12:17 PM, Nigel Jones wrote:


On 2017-11-28 08:19, Madhan Neethiraj  wrote:


Please follow the instructions at the end of this email to build & run Apache 
Atlas in your local dev environment.

Thanks Madhan,
  I gave this script a go under macOS with mvn 3.5.0 & java 8 (152, oracle), but 
I fall over getting the berkeleydb library:

[INFO] Apache Atlas JanusGraph DB Impl  FAILURE [ 18.174 s]
...
[INFO] BUILD FAILURE
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project atlas-graphdb-janus: Error resolving project artifact: Could not transfer 
artifact com.sleepycat:je:pom:7.3.7 from/to java.net-Public 
(https://maven.java.net/content/groups/public/): Failed to transfer file: 
https://maven.java.net/content/groups/public/com/sleepycat/je/7.3.7/je-7.3.7.pom. 
Return code is: 500, ReasonPhrase: Internal Server Error. for project 
com.sleepycat:je:jar:7.3.7 -> [Help 1]

Perhaps a temporary server glitch, but I don't see that sleepycat package. ?





[jira] [Comment Edited] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread Pierre Padovani (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269001#comment-16269001
 ] 

Pierre Padovani edited comment on ATLAS-2270 at 11/28/17 5:24 PM:
--

[~davidrad] Per the JanusGraph documentation, Cassandra and Elasticsearch are 
fully supported. 

After some debugging, it seems that the main issue with supporting 
Elasticsearch has to do with two lines in 
org.apache.atlas.repository.Constants.java:

{code:java}
public static final String VERTEX_TYPE_PROPERTY_KEY = INTERNAL_PROPERTY_KEY_PREFIX + "type";
public static final String TYPENAME_PROPERTY_KEY    = INTERNAL_PROPERTY_KEY_PREFIX + "type.name";
{code}

Those constants are used in this code:

{code:java}
private void createTypeStoreIndexes(AtlasGraphManagement management) {
    // Create unique index on typeName
    createIndexes(management, Constants.TYPENAME_PROPERTY_KEY, String.class,
                  true, AtlasCardinality.SINGLE, true, true);

    // Create index on vertex type
    createIndexes(management, Constants.VERTEX_TYPE_PROPERTY_KEY, String.class,
                  false, AtlasCardinality.SINGLE, true, true);
}
{code}

In Elasticsearch 2.x and above, creating two fields named as above is 
disallowed. The reason for this has to do with the treatment of a '.' in the 
name as a path within a sub object. So the above code would attempt to create 
the following:

1) { "type" : { "name" : "string"}}
2) { "type": "string"}

This fails because the update is attempting to change an object to a field 
index. The simple fix would be to change the '.' to '_', however this may cause 
backward incompatibilities for existing systems. 

*Update:* - Changing the '.' to '_' solves the issues, and I have a fully 
operational Atlas with Cassandra and Elasticsearch.
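To make the mapping conflict concrete, a small self-contained sketch; the `__` prefix value for INTERNAL_PROPERTY_KEY_PREFIX is an assumption for illustration:

```java
// Sketch: why "__type" and "__type.name" collide in Elasticsearch 2.x+.
// ES treats '.' as a path separator, so "__type.name" forces "__type" to be
// an object mapping, while "__type" alone must be a leaf field.
public class FieldNameConflictDemo {
    static final String PREFIX = "__";                          // assumed prefix value
    static final String VERTEX_TYPE = PREFIX + "type";          // leaf field "__type"
    static final String TYPENAME = PREFIX + "type.name";        // object path "__type" -> "name"
    static final String TYPENAME_FIXED = PREFIX + "type_name";  // proposed '.' -> '_' fix

    public static void main(String[] args) {
        // The root of the dotted name is exactly the other field's full name.
        String root = TYPENAME.substring(0, TYPENAME.indexOf('.'));
        System.out.println("conflict: " + root.equals(VERTEX_TYPE));           // prints: conflict: true
        System.out.println("fixed has dot: " + (TYPENAME_FIXED.indexOf('.') >= 0)); // prints: fixed has dot: false
    }
}
```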


was (Author: ppadovani):
[~davidrad] Per the JanusGraph documentation, Cassandra and Elasticsearch are 
fully supported. 

After some debugging, it seems that the main issue with supporting 
Elasticsearch has to do with two lines in 
org.apache.atlas.repository.Constants.java:

{code:java}
public static final String VERTEX_TYPE_PROPERTY_KEY = INTERNAL_PROPERTY_KEY_PREFIX + "type";
public static final String TYPENAME_PROPERTY_KEY    = INTERNAL_PROPERTY_KEY_PREFIX + "type.name";
{code}

Those constants are used in this code:

{code:java}
private void createTypeStoreIndexes(AtlasGraphManagement management) {
    // Create unique index on typeName
    createIndexes(management, Constants.TYPENAME_PROPERTY_KEY, String.class,
                  true, AtlasCardinality.SINGLE, true, true);

    // Create index on vertex type
    createIndexes(management, Constants.VERTEX_TYPE_PROPERTY_KEY, String.class,
                  false, AtlasCardinality.SINGLE, true, true);
}
{code}

In Elasticsearch 2.x and above, creating two fields named as above is 
disallowed. The reason for this has to do with the treatment of a '.' in the 
name as a path within a sub object. So the above code would attempt to create 
the following:

1) { "type" : { "name" : "string"}}
2) { "type": "string"}

This fails because the update is attempting to change an object to a field 
index. The simple fix would be to change the '.' to '_', however this may cause 
backward incompatibilities for existing systems. 

> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0       solr   es
> ----------------------
> berkeley       0     1
> hbase          1     0
> cassandra      0     0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr   es
> ----------------------
> berkeley       ?     1

Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Nigel Jones


On 2017-11-28 08:19, Madhan Neethiraj  wrote: 

> Please follow the instructions at the end of this email to build & run Apache 
> Atlas in your local dev environment.

Thanks Madhan,
 I gave this script a go under macOS with mvn 3.5.0 & java 8 (152, oracle), but 
I fall over getting the berkeleydb library:

[INFO] Apache Atlas JanusGraph DB Impl  FAILURE [ 18.174 s]
...
[INFO] BUILD FAILURE
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project atlas-graphdb-janus: Error resolving project artifact: Could not 
transfer artifact com.sleepycat:je:pom:7.3.7 from/to java.net-Public 
(https://maven.java.net/content/groups/public/): Failed to transfer file: 
https://maven.java.net/content/groups/public/com/sleepycat/je/7.3.7/je-7.3.7.pom.
 Return code is: 500, ReasonPhrase: Internal Server Error. for project 
com.sleepycat:je:jar:7.3.7 -> [Help 1]

Perhaps a temporary server glitch, but I don't see that sleepycat package. ?



[jira] [Commented] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread Pierre Padovani (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269001#comment-16269001
 ] 

Pierre Padovani commented on ATLAS-2270:


[~davidrad] Per the JanusGraph documentation, Cassandra and Elasticsearch are 
fully supported. 

After some debugging, it seems that the main issue with supporting 
Elasticsearch has to do with two lines in 
org.apache.atlas.repository.Constants.java:

{code:java}
public static final String VERTEX_TYPE_PROPERTY_KEY = INTERNAL_PROPERTY_KEY_PREFIX + "type";
public static final String TYPENAME_PROPERTY_KEY    = INTERNAL_PROPERTY_KEY_PREFIX + "type.name";
{code}

Those constants are used in this code:

{code:java}
private void createTypeStoreIndexes(AtlasGraphManagement management) {
    // Create unique index on typeName
    createIndexes(management, Constants.TYPENAME_PROPERTY_KEY, String.class,
                  true, AtlasCardinality.SINGLE, true, true);

    // Create index on vertex type
    createIndexes(management, Constants.VERTEX_TYPE_PROPERTY_KEY, String.class,
                  false, AtlasCardinality.SINGLE, true, true);
}
{code}

In Elasticsearch 2.x and above, creating two fields named as above is 
disallowed. The reason for this has to do with the treatment of a '.' in the 
name as a path within a sub object. So the above code would attempt to create 
the following:

1) { "type" : { "name" : "string"}}
2) { "type": "string"}

This fails because the update is attempting to change an object to a field 
index. The simple fix would be to change the '.' to '_', however this may cause 
backward incompatibilities for existing systems. 

> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0       solr   es
> ----------------------
> berkeley       0     1
> hbase          1     0
> cassandra      0     0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr   es
> ----------------------
> berkeley       ?     1
> hbase          1     ?
> cassandra      ?     ?
> }}
> It is suggested that the combinations marked with '1' should be continued and 
> the remaining 4 combinations, marked with '?', should be considered. There 
> seems to be evidence of people using all 4 of these combinations, although 
> not necessarily with Atlas.
> Depending on the decision made above, we need to ensure that it is possible 
> to build Atlas as a standalone package with any of the combinations - i.e. 
> that they are mutually exclusive and do not interfere with one another. They 
> currently interfere which makes it impossible to build Atlas with 
> -Pdist,berkeley-elasticsearch because the 'dist' profile will exclude jars 
> that are needed by the berkeley-elasticsearch profile - which leads to class 
> not found exceptions when the Atlas server is started. The solution to this 
> could be very simple, or slightly more sophisticated, depending on how many 
> of the combinations we choose to support.





[jira] [Updated] (ATLAS-2284) Import hive fails with "java.lang.NoClassDefFoundError: com/sun/jersey/multipart/BodyPart"

2017-11-28 Thread Madhan Neethiraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Neethiraj updated ATLAS-2284:

Affects Version/s: (was: 0.8.1)
   0.8.2

> Import hive fails with "java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart"
> --
>
> Key: ATLAS-2284
> URL: https://issues.apache.org/jira/browse/ATLAS-2284
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0, 0.8.2
>Reporter: Sharmadha Sainath
>Assignee: Nixon Rodrigues
>Priority: Blocker
> Fix For: 1.0.0, 0.8.2
>
> Attachments: ATLAS-2284-branch-0.8.patch, ATLAS-2284-master.patch
>
>
> Import hive fails with following error :
> {code}
> java.io.FileNotFoundException: 
> /usr/hdp/current/atlas-server/logs/import-hive.log (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
>   at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
>   at 
> org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
>   at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseCategory(DOMConfigurator.java:436)
>   at org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1004)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:872)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:778)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at 
> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:301)
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.<clinit>(HiveMetaStoreBridge.java:97)
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:634)
> Caused by: java.lang.ClassNotFoundException: com.sun.jersey.multipart.BodyPart
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 1 more
> Failed to import Hive Data Model!!
> {code}





Build failed in Jenkins: Atlas-0.8-IntegrationTests #137

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2284: added jersey-multipart jar to Hive hook packaging

--
[...truncated 528.92 KB...]
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "GET 
/api/atlas/v2/entity/guid/48da6acb-eef7-4c96-befe-b48c989f6912 HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "GET 
/api/atlas/v2/entity/guid/d12a2617-6a68-47da-969d-d6e24f1b7ebb HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "GET 
/api/atlas/v2/entity/guid/48146e55-a6bb-4274-853b-89c069234e8f HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "GET 
/api/atlas/v2/entity/guid/1dbf96c6-7d4f-48a9-8097-f3955bdd39e1 HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "GET 
/api/atlas/v2/entity/guid/23dc4347-7f45-41f6-9de7-fbf3b22dc91d HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "POST /api/atlas/v2/entity HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "GET 
/api/atlas/v2/entity/guid/56cc4ea7-288c-47e1-a65d-b4c479c13d45 HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:36 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=from+DB&limit=10 HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:37 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB&limit=10 HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:38 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+where+name%3D%22Reporting%22&limit=10
 HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:39 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+where+DB.name%3D%22Reporting%22&limit=10
 HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:39 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+name+%3D+%22Reporting%22&limit=10 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:40 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+DB.name+%3D+%22Reporting%22&limit=10 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:41 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+where+name%3D%22Reporting%22+select+name,+owner&limit=10
 HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:42 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+where+DB.name%3D%22Reporting%22+select+name,+owner&limit=10
 HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:43 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+has+name&limit=10 HTTP/1.1" 200 - 
"-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:44 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+where+DB+has+name&limit=10 HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:45 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB,+Table&limit=10 HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:45 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=DB+is+JdbcAccess&limit=10 HTTP/1.1" 200 
- "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:46 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=from+Table&limit=10 HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:47 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=Table&limit=10 HTTP/1.1" 200 - "-" 
"Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:47 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=Table+is+Dimension&limit=10 HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:48 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=Column+where+Column+isa+PII&limit=10 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:49 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=View+is+Dimension&limit=10 HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:49 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=Column+select+Column.name&limit=10 
HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:50 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=Column+select+name&limit=10 HTTP/1.1" 
200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:51 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=Column+where+Column.name%3D%22customer_id%22&limit=10
 HTTP/1.1" 200 - "-" "Java/1.8.0_152"
127.0.0.1 - - [28/Nov/2017:15:48:52 +] "GET 
/api/atlas/v2/search/dsl?offset=0&query=from+Table+select+Table.

Build failed in Jenkins: Atlas-master-IntegrationTests #194

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2284: added jersey-multipart jar to Hive hook packaging

--
[...truncated 442.57 KB...]
at org.eclipse.jetty.servlet.ServletContextHandler.startContext 
(ServletContextHandler.java:345)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp 
(WebAppContext.java:1404)
at org.eclipse.jetty.maven.plugin.JettyWebAppContext.startWebapp 
(JettyWebAppContext.java:323)
at org.eclipse.jetty.webapp.WebAppContext.startContext 
(WebAppContext.java:1366)
at org.eclipse.jetty.server.handler.ContextHandler.doStart 
(ContextHandler.java:778)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart 
(ServletContextHandler.java:262)
at org.eclipse.jetty.webapp.WebAppContext.doStart (WebAppContext.java:520)
at org.eclipse.jetty.maven.plugin.JettyWebAppContext.doStart 
(JettyWebAppContext.java:398)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start 
(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart 
(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart 
(AbstractHandler.java:61)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart 
(ContextHandlerCollection.java:161)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start 
(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart 
(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart 
(AbstractHandler.java:61)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start 
(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start (Server.java:422)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart 
(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart 
(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart (Server.java:389)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start 
(AbstractLifeCycle.java:68)
at org.eclipse.jetty.maven.plugin.AbstractJettyMojo.startJetty 
(AbstractJettyMojo.java:460)
at org.eclipse.jetty.maven.plugin.AbstractJettyMojo.execute 
(AbstractJettyMojo.java:328)
at org.eclipse.jetty.maven.plugin.JettyRunWarMojo.execute 
(JettyRunWarMojo.java:64)
at org.eclipse.jetty.maven.plugin.JettyDeployWar.execute 
(JettyDeployWar.java:65)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:309)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:194)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:107)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:955)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:356)
[INFO] Started ServerConnector@26f83ac5{HTTP/1.1,[http/1.1]}{0.0.0.0:31000}
[INFO] Started @434916ms
[INFO] Started Jetty Server
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20.1:integration-test (integration-test) @ 
atlas-webapp ---
[WARNING] useSystemClassloader setting has no effect 

Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Graham Wallis
Hi David,

The cause of the original exception you saw is the set of exclusions in the 
webapp; if you restore the titan packages you will get further than you 
did. However, the build process still does not seem to result in a viable build. 


There are a number of minor niggles:
- To build this combination of profiles on Windows you need to upgrade the 
  version of rat; I am now using 0.12. 
- The build should not explicitly specify the graph-provider-titan0 profile, 
  but that is not what causes the problems. 
- During the build, even with skipTests (which I dislike) there are failures 
  due to the jetty integration tests trying to bring up ElasticSearch - this 
  is a by-product of the mismatch between UT/IT and distribution profiles 
  that we discussed earlier. Messy, but not the real issue.

More importantly, if you skip over the above failures and let the 
packaging step complete (it's using the correct indexing backend), it does 
not result in a working build:
- If I configure the MANAGE_LOCAL_xxx environment variables and the 
  appropriate GraphDatabase class in the properties file and try to start it, 
  the result is repeated ZK session timeouts, which are retried up to a 
  threshold (looks like 30 retries), then ZK gives up. 
- Atlas eventually (after an infeasibly long delay - we're talking multiple 
  rows of dots) claims to have started, but it hasn't really, and the logs 
  are full of ZK errors.
- If you try to connect to it you will get an ECONNREF.
- After the test: 
  - "stopping" atlas is slow because you have to wait for lots of ZK timeouts
  - you will need to terminate the dead ZK process in order to clean the 
    distro tree if you want to rebuild.


Best regards,
  Graham

Graham Wallis
IBM Analytics Emerging Technology Center
Internet: graham_wal...@uk.ibm.com 
IBM Laboratories, Hursley Park, Hursley, Hampshire SO21 2JN
Tel: +44-1962-815356    Tie: 7-245356




From:   Madhan Neethiraj 
To: "dev@atlas.apache.org" 
Date:   28/11/2017 15:06
Subject:Re: Instructions to build & run Atlas in dev environment



David,

There is no other Solr running in my env. Do you see any other error in 
logs/application.log file?

Madhan




On 11/28/17, 7:03 AM, "David Radley"  wrote:

Hi Madhan,
I get 



Error 503

HTTP ERROR: 503
Problem accessing /api/atlas/v2/entity. Reason:

    Service Unavailable

Powered by Jetty:// 9.3.14.v20161028 (http://eclipse.org/jetty)



 
I wonder if you are picking up a solr instance you have running in your 
environment, allowing you to run. All the best, David. 
 
 
 
From:   Madhan Neethiraj 
To: "dev@atlas.apache.org" 
Date:   28/11/2017 14:54
Subject:Re: Instructions to build & run Atlas in dev 
environment
 
 
 
David,
 
Did your REST API call return failure? I see the exception in my env 
too, 
but didn’t see any issues in using Atlas.
 
Thanks,
Madhan
 
 
 
On 11/28/17, 4:37 AM, "David Radley"  wrote:
 
Hi Madhan,
Thanks for sharing this. Unfortunately I am still getting errors. 
Any 
thoughts ? 
 
I get this in the application log when I issue a rest call. 
 
2017-11-28 12:32:11,055 INFO  - [main:] ~ Not running setup per 
configuration atlas.server.run.setup.on.start. 
(SetupSteps$SetupRequired:189)
2017-11-28 12:32:18,390 WARN  - [main:] ~ Failed to load class 
class 
com.thinkaurelius.titan.diskstorage.solr.SolrIndex or its 
referenced 
types; this usually indicates a broken classpath/classloader 
(ReflectiveConfigOptionLoader:229)
java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    com/thinkaurelius/titan/diskstorage/solr/SolrIndex.<init>(Lcom/thinkaurelius/titan/diskstorage/configuration/Configuration;)V @162: putfield
  Reason:
Type 'org/apache/solr/client/solrj/impl/CloudSolrServer' 
(current 
frame, stack[1]) is not assignable to 
'org/apache/solr/client/solrj/SolrServer'
  Current Frame:
bci: @162
flags: { }
locals: { 
'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'com/thinkaurelius/titan/diskstorage/configuration/Configuration', 

'java/lang/String', 
'org/apache/solr/client/solrj/impl/CloudSolrServer' }
stack: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 

'org/apache/solr/client/solrj/impl/CloudSolrServer' }
  Bytecode:
0x000: 2ab7 008b 2bc6 0007 04a7 0004 03b8 0093
0x010: 2a2b b500 952a 2bb2 0097 03bd 0099 b900

Build failed in Jenkins: Atlas-0.8-UnitTests #137

2017-11-28 Thread Apache Jenkins Server
See 


Changes:

[madhan] ATLAS-2284: added jersey-multipart jar to Hive hook packaging

--
[...truncated 123.41 KB...]
[INFO] Including commons-io:commons-io:jar:2.4 in the shaded jar.
[INFO] Including commons-lang:commons-lang:jar:2.6 in the shaded jar.
[INFO] Including commons-logging:commons-logging:jar:1.2 in the shaded jar.
[INFO] Including com.google.guava:guava:jar:12.0.1 in the shaded jar.
[INFO] Including com.google.code.findbugs:jsr305:jar:1.3.9 in the shaded jar.
[INFO] Including com.google.protobuf:protobuf-java:jar:2.5.0 in the shaded jar.
[INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
[INFO] Including org.apache.zookeeper:zookeeper:jar:3.4.6 in the shaded jar.
[INFO] Including org.apache.htrace:htrace-core:jar:3.1.0-incubating in the 
shaded jar.
[INFO] Including org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13 in the 
shaded jar.
[INFO] Including org.codehaus.jackson:jackson-core-asl:jar:1.9.13 in the shaded 
jar.
[INFO] Including org.jruby.jcodings:jcodings:jar:1.0.8 in the shaded jar.
[INFO] Including org.jruby.joni:joni:jar:2.1.2 in the shaded jar.
[INFO] Including com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1 
in the shaded jar.
[INFO] Excluding org.slf4j:slf4j-api:jar:1.7.21 from the shaded jar.
[INFO] Excluding org.slf4j:slf4j-log4j12:jar:1.7.21 from the shaded jar.
[INFO] Including log4j:log4j:jar:1.2.17 in the shaded jar.
[INFO] Excluding org.slf4j:jul-to-slf4j:jar:1.7.21 from the shaded jar.
[INFO] Including cglib:cglib:jar:2.2.2 in the shaded jar.
[INFO] Including asm:asm:jar:3.3.1 in the shaded jar.
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing 

 with 

[INFO] Dependency-reduced POM written at: 

[INFO] 
[INFO] 
[INFO] Building Apache Atlas Titan 0.5.4 Graph DB Impl 0.8.2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ atlas-graphdb-titan0 
---
[INFO] Deleting  (includes = [**/*.pyc], excludes = [])
[INFO] 
[INFO] --- buildnumber-maven-plugin:1.4:create (default) @ atlas-graphdb-titan0 
---
[INFO] Executing: /bin/sh -c cd 
' && 
'git' 'rev-parse' '--verify' 'HEAD'
[INFO] Working directory: 

[INFO] Storing buildNumber: 7f52e13c4b580a38296096e85aedde1b4c39518e at 
timestamp: 1511883368811
[WARNING] Cannot get the branch information from the git repository: 
Detecting the current branch failed: fatal: ref HEAD is not a symbolic ref

[INFO] Executing: /bin/sh -c cd 
' && 
'git' 'rev-parse' '--verify' 'HEAD'
[INFO] Working directory: 

[INFO] Storing buildScmBranch: UNKNOWN
[INFO] 
[INFO] --- apache-rat-plugin:0.7:check (rat-check) @ atlas-graphdb-titan0 ---
[INFO] Exclude: **/dependency-reduced-pom.xml
[INFO] Exclude: **/javax.script.ScriptEngineFactory
[INFO] Exclude: .reviewboardrc
[INFO] Exclude: 3party-licenses/**
[INFO] Exclude: **/.cache
[INFO] Exclude: **/.cache-main
[INFO] Exclude: **/.cache-tests
[INFO] Exclude: **/.checkstyle
[INFO] Exclude: *.txt
[INFO] Exclude: **/*.json
[INFO] Exclude: .pc/**
[INFO] Exclude: debian/**
[INFO] Exclude: .svn/**
[INFO] Exclude: .git/**
[INFO] Exclude: .gitignore
[INFO] Exclude: **/.idea/**
[INFO] Exclude: **/*.twiki
[INFO] Exclude: **/*.iml
[INFO] Exclude: **/*.json
[INFO] Exclude: **/*.log
[INFO] Exclude: **/target/**
[INFO] Exclude: **/target*/**
[INFO] Exclude: **/build/**
[INFO] Exclude: **/*.patch
[INFO] Exclude: derby.log
[INFO] Exclude: **/logs/**
[INFO] Exclude: **/.classpath
[INFO] Exclude: **/.project
[INFO] Exclude: **/.settings/**
[INFO] Exclude: **/test-output/**
[INFO] Exclude: **/mock/**
[INFO] Exclude: **/data/**
[INFO] Exclude: **/maven-eclipse.xml
[INFO] Exclude: **/.externalToolBuilders/**
[INFO] Exclude: **/build.log
[INFO] Exclude: **/.bowerrc
[INFO] Exclude: *.json
[INFO] Exclude: **/overlays/**
[INFO] Exclude: dev-support/**
[INFO] Exclude: **/users-credentials.properties
[INFO] Exclude: **/public/css/animate.min.css
[INFO] Exclude: **/public/css/bootstrap-sidebar.css
[INFO] Exclude: **/public/js/external_lib/**
[INFO] Exclude: **/no

Jenkins build is still unstable: Atlas-master-UnitTests #176

2017-11-28 Thread Apache Jenkins Server
See 




[jira] [Issue Comment Deleted] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread Pierre Padovani (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Padovani updated ATLAS-2270:
---
Comment: was deleted

(was: [~davidrad] The Janus documentation indicates full support for Cassandra 
and Elasticsearch; it just requires particular setup steps. 

In terms of Atlas, from what I can tell the only issue is a bug in the Atlas 
support for Elasticsearch. It looks like the mapping for Elasticsearch is 
created in one form, and a later attempt to update it in a way that is 
incompatible with the original mapping causes the bean creation lifecycle to fail. 

I'll use the ticket I referenced above to put a full patch forward that:
* Adds Cassandra support
* Fixes Elasticsearch support
* Adds a new Dockerfile that creates a standalone Cassandra + Elasticsearch 
configuration.
)

> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0      solr  es
> berkeley     0    1
> hbase        1    0
> cassandra    0    0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr  es
> berkeley      ?    1
> hbase         1    ?
> cassandra     ?    ?
> }}
> It is suggested that the combinations marked with '1' should be continued and 
> the remaining 4 combinations, marked with '?', should be considered. There 
> seems to be evidence of people using all 4 of these combinations, although 
> not necessarily with Atlas.
> Depending on the decision made above, we need to ensure that it is possible 
> to build Atlas as a standalone package with any of the combinations - i.e. 
> that they are mutually exclusive and do not interfere with one another. They 
> currently interfere which makes it impossible to build Atlas with 
> -Pdist,berkeley-elasticsearch because the 'dist' profile will exclude jars 
> that are needed by the berkeley-elasticsearch profile - which leads to class 
> not found exceptions when the Atlas server is started. The solution to this 
> could be very simple, or slightly more sophisticated, depending on how many 
> of the combinations we choose to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread Pierre Padovani (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268883#comment-16268883
 ] 

Pierre Padovani commented on ATLAS-2270:


[~davidrad] The Janus documentation indicates full support for Cassandra and 
Elasticsearch; it just requires particular setup steps. 

In terms of Atlas, from what I can tell the only issue is a bug in the Atlas 
support for Elasticsearch. It looks like the mapping for Elasticsearch is 
created in one form, and a later attempt to update it in a way that is 
incompatible with the original mapping causes the bean creation lifecycle to fail. 

I'll use the ticket I referenced above to put a full patch forward that:
* Adds Cassandra support
* Fixes Elasticsearch support
* Adds a new Dockerfile that creates a standalone Cassandra + Elasticsearch 
configuration.


> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0      solr  es
> berkeley     0    1
> hbase        1    0
> cassandra    0    0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr  es
> berkeley      ?    1
> hbase         1    ?
> cassandra     ?    ?
> }}
> It is suggested that the combinations marked with '1' should be continued and 
> the remaining 4 combinations, marked with '?', should be considered. There 
> seems to be evidence of people using all 4 of these combinations, although 
> not necessarily with Atlas.
> Depending on the decision made above, we need to ensure that it is possible 
> to build Atlas as a standalone package with any of the combinations - i.e. 
> that they are mutually exclusive and do not interfere with one another. They 
> currently interfere which makes it impossible to build Atlas with 
> -Pdist,berkeley-elasticsearch because the 'dist' profile will exclude jars 
> that are needed by the berkeley-elasticsearch profile - which leads to class 
> not found exceptions when the Atlas server is started. The solution to this 
> could be very simple, or slightly more sophisticated, depending on how many 
> of the combinations we choose to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (ATLAS-2284) Import hive fails with "java.lang.NoClassDefFoundError: com/sun/jersey/multipart/BodyPart"

2017-11-28 Thread Madhan Neethiraj (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268868#comment-16268868
 ] 

Madhan Neethiraj commented on ATLAS-2284:
-

+1 for the patch. Thanks [~nixonrodrigues].

> Import hive fails with "java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart"
> --
>
> Key: ATLAS-2284
> URL: https://issues.apache.org/jira/browse/ATLAS-2284
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0, 0.8.1
>Reporter: Sharmadha Sainath
>Assignee: Nixon Rodrigues
>Priority: Blocker
> Fix For: trunk, 0.8.2
>
> Attachments: ATLAS-2284-branch-0.8.patch, ATLAS-2284-master.patch
>
>
> Import hive fails with the following error:
> {code}
> java.io.FileNotFoundException: 
> /usr/hdp/current/atlas-server/logs/import-hive.log (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
>   at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
>   at 
> org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
>   at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseCategory(DOMConfigurator.java:436)
>   at org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1004)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:872)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:778)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at 
> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:301)
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.<clinit>(HiveMetaStoreBridge.java:97)
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:634)
> Caused by: java.lang.ClassNotFoundException: com.sun.jersey.multipart.BodyPart
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 1 more
> Failed to import Hive Data Model!!
> {code}
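[Editor's note] The failure above ends in a ClassNotFoundException for com.sun.jersey.multipart.BodyPart, i.e. the jersey-multipart jar was not on the runtime classpath of the import-hive script. As a rough sketch (the ProbeClass helper below is hypothetical, not part of Atlas), the presence of a class can be checked directly on whatever classpath the script uses:

```java
// Sketch: probe whether a fully-qualified class name is resolvable on the
// current classpath. Useful for confirming that the jersey-multipart jar
// (providing com.sun.jersey.multipart.BodyPart) is actually packaged.
public class ProbeClass {
    public static boolean isPresent(String className) {
        try {
            // initialize=false: we only care about resolvability, not static init
            Class.forName(className, false, ProbeClass.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String name = args.length > 0 ? args[0] : "com.sun.jersey.multipart.BodyPart";
        System.out.println(name + " present: " + isPresent(name));
    }
}
```

Run with the same classpath as the import-hive script to verify that the packaging fix (the jersey-multipart jar added in ATLAS-2284) took effect.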



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Madhan Neethiraj
David,

There is no other Solr running in my env. Do you see any other error in 
logs/application.log file?

Madhan




On 11/28/17, 7:03 AM, "David Radley"  wrote:

Hi Madhan,
I get 



Error 503

HTTP ERROR: 503
Problem accessing /api/atlas/v2/entity. Reason:

    Service Unavailable

Powered by Jetty (http://eclipse.org/jetty) 9.3.14.v20161028




I wonder if you are picking up a Solr instance you have running in your 
environment, allowing you to run. All the best, David. 



From:   Madhan Neethiraj 
To: "dev@atlas.apache.org" 
Date:   28/11/2017 14:54
Subject:Re: Instructions to build & run Atlas in dev environment



David,

Did your REST API call return failure? I see the exception in my env too, 
but didn’t see any issues in using Atlas.

Thanks,
Madhan



On 11/28/17, 4:37 AM, "David Radley"  wrote:

Hi Madhan,
Thanks for sharing this. Unfortunately I am still getting errors. Any 
thoughts ? 
 
I get this in the application log when I issue a rest call. 
 
2017-11-28 12:32:11,055 INFO  - [main:] ~ Not running setup per 
configuration atlas.server.run.setup.on.start. 
(SetupSteps$SetupRequired:189)
2017-11-28 12:32:18,390 WARN  - [main:] ~ Failed to load class class 
com.thinkaurelius.titan.diskstorage.solr.SolrIndex or its referenced 
types; this usually indicates a broken classpath/classloader 
(ReflectiveConfigOptionLoader:229)
java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    com/thinkaurelius/titan/diskstorage/solr/SolrIndex.<init>(Lcom/thinkaurelius/titan/diskstorage/configuration/Configuration;)V @162: putfield
  Reason:
Type 'org/apache/solr/client/solrj/impl/CloudSolrServer' (current 
frame, stack[1]) is not assignable to 
'org/apache/solr/client/solrj/SolrServer'
  Current Frame:
bci: @162
flags: { }
locals: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'com/thinkaurelius/titan/diskstorage/configuration/Configuration', 
'java/lang/String', 
'org/apache/solr/client/solrj/impl/CloudSolrServer' }
stack: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'org/apache/solr/client/solrj/impl/CloudSolrServer' }
  Bytecode:
0x000: 2ab7 008b 2bc6 0007 04a7 0004 03b8 0093
0x010: 2a2b b500 952a 2bb2 0097 03bd 0099 b900
0x020: 9d03 00c0 0099 b800 a1b5 00a3 2a2b b200
0x030: a503 bd00 99b9 009d 0300 c000 a7b6 00ab
0x040: b500 ad2a 2a2b b700 b1b5 00b3 2a2b b200
0x050: b803 bd00 99b9 009d 0300 c000 bab6 00be
0x060: b500 c02a 2bb2 00c2 03bd 0099 b900 9d03
0x070: 00c0 0099 b500 c42a b400 a3b2 00c7 a600
0x080: 2a2b b200 c903 bd00 99b9 009d 0300 c000
0x090: 994d bb00 cb59 2c04 b700 ce4e 2db6 00d1
0x0a0: 2a2d b500 d3a7 0057 2ab4 00a3 b200 d6a6
0x0b0: 002f bb00 1559 2a2b b700 d9b8 00df 4d2a
0x0c0: bb00 e159 2c2b b200 e303 bd00 99b9 009d
0x0d0: 0300 c000 e5b7 00e8 b500 d3a7 0021 bb00
0x0e0: ea59 bb00 ec59 b700 ed12 efb6 00f3 2ab4
0x0f0: 00a3 b600 f6b6 00fa b700 fdbf a700 0d4d
0x100: bb00 ff59 2cb7 0102 bfb1
  Exception Handler Table:
bci [119, 252] => handler: 255
  Stackmap Table:
full_frame(@12,{Object[#2],Object[#141]},{})
same_locals_1_stack_item_frame(@13,Integer)
same_frame_extended(@168)
same_frame(@222)
same_frame(@252)
same_locals_1_stack_item_frame(@255,Object[#136])
same_frame(@265)
 
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
at java.lang.Class.getDeclaredFields(Class.java:1916)
at 
 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadSingleClassUnsafe(ReflectiveConfigOptionLoader.java:270)
at 
 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAllClassesUnsafe(ReflectiveConfigOptionLoader.java:227)
at 
 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.load(ReflectiveConfigOptionLoader.java:194)
at 
 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAll(ReflectiveConfigOptionLoader.java:86)
at 
 

com.thinkaurelius

Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread David Radley
Hi Madhan,
I get 



Error 503

HTTP ERROR: 503
Problem accessing /api/atlas/v2/entity. Reason:

    Service Unavailable

Powered by Jetty (http://eclipse.org/jetty) 9.3.14.v20161028




I wonder if you are picking up a Solr instance you have running in your 
environment, allowing you to run. All the best, David. 



From:   Madhan Neethiraj 
To: "dev@atlas.apache.org" 
Date:   28/11/2017 14:54
Subject:Re: Instructions to build & run Atlas in dev environment



David,

Did your REST API call return failure? I see the exception in my env too, 
but didn’t see any issues in using Atlas.

Thanks,
Madhan



On 11/28/17, 4:37 AM, "David Radley"  wrote:

Hi Madhan,
Thanks for sharing this. Unfortunately I am still getting errors. Any 
thoughts ? 
 
I get this in the application log when I issue a rest call. 
 
2017-11-28 12:32:11,055 INFO  - [main:] ~ Not running setup per 
configuration atlas.server.run.setup.on.start. 
(SetupSteps$SetupRequired:189)
2017-11-28 12:32:18,390 WARN  - [main:] ~ Failed to load class class 
com.thinkaurelius.titan.diskstorage.solr.SolrIndex or its referenced 
types; this usually indicates a broken classpath/classloader 
(ReflectiveConfigOptionLoader:229)
java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    com/thinkaurelius/titan/diskstorage/solr/SolrIndex.<init>(Lcom/thinkaurelius/titan/diskstorage/configuration/Configuration;)V @162: putfield
  Reason:
Type 'org/apache/solr/client/solrj/impl/CloudSolrServer' (current 
frame, stack[1]) is not assignable to 
'org/apache/solr/client/solrj/SolrServer'
  Current Frame:
bci: @162
flags: { }
locals: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'com/thinkaurelius/titan/diskstorage/configuration/Configuration', 
'java/lang/String', 
'org/apache/solr/client/solrj/impl/CloudSolrServer' }
stack: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'org/apache/solr/client/solrj/impl/CloudSolrServer' }
  Bytecode:
0x000: 2ab7 008b 2bc6 0007 04a7 0004 03b8 0093
0x010: 2a2b b500 952a 2bb2 0097 03bd 0099 b900
0x020: 9d03 00c0 0099 b800 a1b5 00a3 2a2b b200
0x030: a503 bd00 99b9 009d 0300 c000 a7b6 00ab
0x040: b500 ad2a 2a2b b700 b1b5 00b3 2a2b b200
0x050: b803 bd00 99b9 009d 0300 c000 bab6 00be
0x060: b500 c02a 2bb2 00c2 03bd 0099 b900 9d03
0x070: 00c0 0099 b500 c42a b400 a3b2 00c7 a600
0x080: 2a2b b200 c903 bd00 99b9 009d 0300 c000
0x090: 994d bb00 cb59 2c04 b700 ce4e 2db6 00d1
0x0a0: 2a2d b500 d3a7 0057 2ab4 00a3 b200 d6a6
0x0b0: 002f bb00 1559 2a2b b700 d9b8 00df 4d2a
0x0c0: bb00 e159 2c2b b200 e303 bd00 99b9 009d
0x0d0: 0300 c000 e5b7 00e8 b500 d3a7 0021 bb00
0x0e0: ea59 bb00 ec59 b700 ed12 efb6 00f3 2ab4
0x0f0: 00a3 b600 f6b6 00fa b700 fdbf a700 0d4d
0x100: bb00 ff59 2cb7 0102 bfb1
  Exception Handler Table:
bci [119, 252] => handler: 255
  Stackmap Table:
full_frame(@12,{Object[#2],Object[#141]},{})
same_locals_1_stack_item_frame(@13,Integer)
same_frame_extended(@168)
same_frame(@222)
same_frame(@252)
same_locals_1_stack_item_frame(@255,Object[#136])
same_frame(@265)
 
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
at java.lang.Class.getDeclaredFields(Class.java:1916)
at 
 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadSingleClassUnsafe(ReflectiveConfigOptionLoader.java:270)
at 
 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAllClassesUnsafe(ReflectiveConfigOptionLoader.java:227)
at 
 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.load(ReflectiveConfigOptionLoader.java:194)
at 
 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAll(ReflectiveConfigOptionLoader.java:86)
at 
 
com.thinkaurelius.titan.diskstorage.configuration.ConfigNamespace.getChild(ConfigNamespace.java:71)
at 
 
com.thinkaurelius.titan.diskstorage.configuration.ConfigElement.parse(ConfigElement.java:169)
at 
 
com.thinkaurelius.titan.diskstorage.configuration.BasicConfiguration.getAll(BasicConfiguration.java:80)
at 
 
com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1344)
at 
com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
 
 
all the best, David. 
 
 
 
 
From:   Madhan Neethiraj 
To: "dev@atlas.apache.org" 
Cc: Ashutosh Mestry 
   

Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread Madhan Neethiraj
David,

Did your REST API call return failure? I see the exception in my env too, but 
didn’t see any issues in using Atlas.

Thanks,
Madhan



On 11/28/17, 4:37 AM, "David Radley"  wrote:

Hi Madhan,
Thanks for sharing this. Unfortunately I am still getting errors. Any 
thoughts ? 

I get this in the application log when I issue a rest call. 

2017-11-28 12:32:11,055 INFO  - [main:] ~ Not running setup per 
configuration atlas.server.run.setup.on.start. 
(SetupSteps$SetupRequired:189)
2017-11-28 12:32:18,390 WARN  - [main:] ~ Failed to load class class 
com.thinkaurelius.titan.diskstorage.solr.SolrIndex or its referenced 
types; this usually indicates a broken classpath/classloader 
(ReflectiveConfigOptionLoader:229)
java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    com/thinkaurelius/titan/diskstorage/solr/SolrIndex.<init>(Lcom/thinkaurelius/titan/diskstorage/configuration/Configuration;)V @162: putfield
  Reason:
Type 'org/apache/solr/client/solrj/impl/CloudSolrServer' (current 
frame, stack[1]) is not assignable to 
'org/apache/solr/client/solrj/SolrServer'
  Current Frame:
bci: @162
flags: { }
locals: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'com/thinkaurelius/titan/diskstorage/configuration/Configuration', 
'java/lang/String', 'org/apache/solr/client/solrj/impl/CloudSolrServer' }
stack: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'org/apache/solr/client/solrj/impl/CloudSolrServer' }
  Bytecode:
0x000: 2ab7 008b 2bc6 0007 04a7 0004 03b8 0093
0x010: 2a2b b500 952a 2bb2 0097 03bd 0099 b900
0x020: 9d03 00c0 0099 b800 a1b5 00a3 2a2b b200
0x030: a503 bd00 99b9 009d 0300 c000 a7b6 00ab
0x040: b500 ad2a 2a2b b700 b1b5 00b3 2a2b b200
0x050: b803 bd00 99b9 009d 0300 c000 bab6 00be
0x060: b500 c02a 2bb2 00c2 03bd 0099 b900 9d03
0x070: 00c0 0099 b500 c42a b400 a3b2 00c7 a600
0x080: 2a2b b200 c903 bd00 99b9 009d 0300 c000
0x090: 994d bb00 cb59 2c04 b700 ce4e 2db6 00d1
0x0a0: 2a2d b500 d3a7 0057 2ab4 00a3 b200 d6a6
0x0b0: 002f bb00 1559 2a2b b700 d9b8 00df 4d2a
0x0c0: bb00 e159 2c2b b200 e303 bd00 99b9 009d
0x0d0: 0300 c000 e5b7 00e8 b500 d3a7 0021 bb00
0x0e0: ea59 bb00 ec59 b700 ed12 efb6 00f3 2ab4
0x0f0: 00a3 b600 f6b6 00fa b700 fdbf a700 0d4d
0x100: bb00 ff59 2cb7 0102 bfb1
  Exception Handler Table:
bci [119, 252] => handler: 255
  Stackmap Table:
full_frame(@12,{Object[#2],Object[#141]},{})
same_locals_1_stack_item_frame(@13,Integer)
same_frame_extended(@168)
same_frame(@222)
same_frame(@252)
same_locals_1_stack_item_frame(@255,Object[#136])
same_frame(@265)

at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
at java.lang.Class.getDeclaredFields(Class.java:1916)
at 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadSingleClassUnsafe(ReflectiveConfigOptionLoader.java:270)
at 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAllClassesUnsafe(ReflectiveConfigOptionLoader.java:227)
at 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.load(ReflectiveConfigOptionLoader.java:194)
at 

com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAll(ReflectiveConfigOptionLoader.java:86)
at 

com.thinkaurelius.titan.diskstorage.configuration.ConfigNamespace.getChild(ConfigNamespace.java:71)
at 

com.thinkaurelius.titan.diskstorage.configuration.ConfigElement.parse(ConfigElement.java:169)
at 

com.thinkaurelius.titan.diskstorage.configuration.BasicConfiguration.getAll(BasicConfiguration.java:80)
at 

com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1344)
at 
com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)


all the best, David. 
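[Editor's note] A VerifyError like the one above typically means two incompatible versions of a class ended up on the classpath: here the verifier found a SolrServer definition that CloudSolrServer is not assignable to, which suggests two different solr-solrj jars. A small sketch (the WhichJar class name is made up for illustration) that reports where a class is actually loaded from can help narrow this down:

```java
// Sketch: report the classpath location (jar or directory) that supplies a
// given class, to diagnose duplicate-jar conflicts such as the
// CloudSolrServer/SolrServer VerifyError above.
import java.net.URL;

public class WhichJar {
    /** Returns the URL of the .class resource, or null if not found. */
    public static URL locate(String className) {
        String resource = className.replace('.', '/') + ".class";
        ClassLoader cl = WhichJar.class.getClassLoader();
        // A null loader means the bootstrap loader; fall back to the system one.
        return cl != null ? cl.getResource(resource)
                          : ClassLoader.getSystemResource(resource);
    }

    public static void main(String[] args) {
        String name = args.length > 0 ? args[0]
                : "org.apache.solr.client.solrj.SolrServer";
        URL where = locate(name);
        System.out.println(name + " -> "
                + (where == null ? "not found on classpath" : where));
    }
}
```

Running it on the Atlas server classpath for both org/apache/solr/client/solrj/SolrServer and org/apache/solr/client/solrj/impl/CloudSolrServer should reveal whether the two classes come from different solr-solrj jars.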




From:   Madhan Neethiraj 
To: "dev@atlas.apache.org" 
Cc: Ashutosh Mestry 
Date:   28/11/2017 08:20
Subject:Instructions to build & run Atlas in dev environment



Atlas dev,

 

Please follow the instructions at the end of this email to build & run 
Apache Atlas in your local dev environment.

 

This setup uses the following configuration:

  - Graph DB: titan-0.5.4

  - Embedded HBase

  - Embedded Solr

 

Thanks to Ashutosh for compiling these instructions.

 

Hope thi

[jira] [Updated] (ATLAS-2284) Import hive fails with "java.lang.NoClassDefFoundError: com/sun/jersey/multipart/BodyPart"

2017-11-28 Thread Nixon Rodrigues (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nixon Rodrigues updated ATLAS-2284:
---
Fix Version/s: 0.8.2
   trunk

> Import hive fails with "java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart"
> --
>
> Key: ATLAS-2284
> URL: https://issues.apache.org/jira/browse/ATLAS-2284
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0, 0.8.1
>Reporter: Sharmadha Sainath
>Assignee: Nixon Rodrigues
>Priority: Blocker
> Fix For: trunk, 0.8.2
>
> Attachments: ATLAS-2284-branch-0.8.patch, ATLAS-2284-master.patch
>
>
> Import hive fails with the following error:
> {code}
> java.io.FileNotFoundException: 
> /usr/hdp/current/atlas-server/logs/import-hive.log (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
>   at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
>   at 
> org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
>   at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseCategory(DOMConfigurator.java:436)
>   at org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1004)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:872)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:778)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at 
> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:301)
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.<clinit>(HiveMetaStoreBridge.java:97)
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:634)
> Caused by: java.lang.ClassNotFoundException: com.sun.jersey.multipart.BodyPart
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 1 more
> Failed to import Hive Data Model!!
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (ATLAS-2284) Import hive fails with "java.lang.NoClassDefFoundError: com/sun/jersey/multipart/BodyPart"

2017-11-28 Thread Nixon Rodrigues (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nixon Rodrigues updated ATLAS-2284:
---
Affects Version/s: (was: 0.8.2)
   0.8.1

> Import hive fails with "java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart"
> --
>
> Key: ATLAS-2284
> URL: https://issues.apache.org/jira/browse/ATLAS-2284
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0, 0.8.1
>Reporter: Sharmadha Sainath
>Assignee: Nixon Rodrigues
>Priority: Blocker
> Fix For: trunk, 0.8.2
>
> Attachments: ATLAS-2284-branch-0.8.patch, ATLAS-2284-master.patch
>
>
> Import hive fails with the following error:
> {code}
> java.io.FileNotFoundException: 
> /usr/hdp/current/atlas-server/logs/import-hive.log (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
>   at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
>   at 
> org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
>   at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseCategory(DOMConfigurator.java:436)
>   at org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1004)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:872)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:778)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at 
> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:301)
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.<clinit>(HiveMetaStoreBridge.java:97)
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:634)
> Caused by: java.lang.ClassNotFoundException: com.sun.jersey.multipart.BodyPart
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 1 more
> Failed to import Hive Data Model!!
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Instructions to build & run Atlas in dev environment

2017-11-28 Thread David Radley
Hi Madhan,
Thanks for sharing this. Unfortunately I am still getting errors. Any 
thoughts?

I get this in the application log when I issue a REST call.

2017-11-28 12:32:11,055 INFO  - [main:] ~ Not running setup per 
configuration atlas.server.run.setup.on.start. 
(SetupSteps$SetupRequired:189)
2017-11-28 12:32:18,390 WARN  - [main:] ~ Failed to load class class 
com.thinkaurelius.titan.diskstorage.solr.SolrIndex or its referenced 
types; this usually indicates a broken classpath/classloader 
(ReflectiveConfigOptionLoader:229)
java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
 
com/thinkaurelius/titan/diskstorage/solr/SolrIndex.<init>(Lcom/thinkaurelius/titan/diskstorage/configuration/Configuration;)V
 
@162: putfield
  Reason:
Type 'org/apache/solr/client/solrj/impl/CloudSolrServer' (current 
frame, stack[1]) is not assignable to 
'org/apache/solr/client/solrj/SolrServer'
  Current Frame:
bci: @162
flags: { }
locals: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'com/thinkaurelius/titan/diskstorage/configuration/Configuration', 
'java/lang/String', 'org/apache/solr/client/solrj/impl/CloudSolrServer' }
stack: { 'com/thinkaurelius/titan/diskstorage/solr/SolrIndex', 
'org/apache/solr/client/solrj/impl/CloudSolrServer' }
  Bytecode:
0x000: 2ab7 008b 2bc6 0007 04a7 0004 03b8 0093
0x010: 2a2b b500 952a 2bb2 0097 03bd 0099 b900
0x020: 9d03 00c0 0099 b800 a1b5 00a3 2a2b b200
0x030: a503 bd00 99b9 009d 0300 c000 a7b6 00ab
0x040: b500 ad2a 2a2b b700 b1b5 00b3 2a2b b200
0x050: b803 bd00 99b9 009d 0300 c000 bab6 00be
0x060: b500 c02a 2bb2 00c2 03bd 0099 b900 9d03
0x070: 00c0 0099 b500 c42a b400 a3b2 00c7 a600
0x080: 2a2b b200 c903 bd00 99b9 009d 0300 c000
0x090: 994d bb00 cb59 2c04 b700 ce4e 2db6 00d1
0x0a0: 2a2d b500 d3a7 0057 2ab4 00a3 b200 d6a6
0x0b0: 002f bb00 1559 2a2b b700 d9b8 00df 4d2a
0x0c0: bb00 e159 2c2b b200 e303 bd00 99b9 009d
0x0d0: 0300 c000 e5b7 00e8 b500 d3a7 0021 bb00
0x0e0: ea59 bb00 ec59 b700 ed12 efb6 00f3 2ab4
0x0f0: 00a3 b600 f6b6 00fa b700 fdbf a700 0d4d
0x100: bb00 ff59 2cb7 0102 bfb1
  Exception Handler Table:
bci [119, 252] => handler: 255
  Stackmap Table:
full_frame(@12,{Object[#2],Object[#141]},{})
same_locals_1_stack_item_frame(@13,Integer)
same_frame_extended(@168)
same_frame(@222)
same_frame(@252)
same_locals_1_stack_item_frame(@255,Object[#136])
same_frame(@265)

at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
at java.lang.Class.getDeclaredFields(Class.java:1916)
at 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadSingleClassUnsafe(ReflectiveConfigOptionLoader.java:270)
at 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAllClassesUnsafe(ReflectiveConfigOptionLoader.java:227)
at 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.load(ReflectiveConfigOptionLoader.java:194)
at 
com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader.loadAll(ReflectiveConfigOptionLoader.java:86)
at 
com.thinkaurelius.titan.diskstorage.configuration.ConfigNamespace.getChild(ConfigNamespace.java:71)
at 
com.thinkaurelius.titan.diskstorage.configuration.ConfigElement.parse(ConfigElement.java:169)
at 
com.thinkaurelius.titan.diskstorage.configuration.BasicConfiguration.getAll(BasicConfiguration.java:80)
at 
com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1344)
at 
com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
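
The VerifyError above ("CloudSolrServer ... is not assignable to SolrServer") 
typically indicates mixed SolrJ versions on the classpath: CloudSolrServer 
extends SolrServer in SolrJ 4.x but not in 5.x. A hedged sketch for spotting 
duplicate SolrJ jars by filename (the jar paths below are illustrative, not 
from the thread):

```shell
#!/bin/sh
# Hedged helper: print the distinct SolrJ versions among the given jar paths.
# Two or more versions usually explains a VerifyError like the one above.
solrj_versions() {
  printf '%s\n' "$@" | sed -n 's/.*solr-solrj-\([0-9][0-9.]*\)\.jar$/\1/p' | sort -u
}

# Illustrative paths only; point this at the Atlas webapp's WEB-INF/lib.
solrj_versions lib/solr-solrj-4.8.1.jar lib/solr-solrj-5.5.1.jar
```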


all the best, David. 




From:   Madhan Neethiraj 
To: "dev@atlas.apache.org" 
Cc: Ashutosh Mestry 
Date:   28/11/2017 08:20
Subject:Instructions to build & run Atlas in dev environment



Atlas dev,

 

Please follow the instructions at the end of this email to build & run 
Apache Atlas in your local dev environment.

 

This setup uses the following configuration:

  - Graph DB: titan-0.5.4

  - Embedded HBase

  - Embedded Solr

 

Thanks to Ashutosh for compiling these instructions.

 

Hope this helps.

 

Madhan

 

 

Add the following to a script file, like build-start-atlas.sh, and execute 
the script.

 

 

ATLAS_SOURCE_DIR=/tmp/atlas-source

ATLAS_HOME=/tmp/atlas-bin

 

# Clone Apache Atlas sources

mkdir -p ${ATLAS_SOURCE_DIR}

cd ${ATLAS_SOURCE_DIR}

git clone https://github.com/apache/atlas.git -b master

 

# build Apache Atlas

cd atlas

mvn clean -DskipTests -DGRAPH_PROVIDER=titan0 install 
-Pdist,embedded-hbase-solr,graph-provider-titan0

[jira] [Updated] (ATLAS-2284) Import hive fails with "java.lang.NoClassDefFoundError: com/sun/jersey/multipart/BodyPart"

2017-11-28 Thread Nixon Rodrigues (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nixon Rodrigues updated ATLAS-2284:
---
Attachment: ATLAS-2284-branch-0.8.patch
ATLAS-2284-master.patch

> Import hive fails with "java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart"
> --
>
> Key: ATLAS-2284
> URL: https://issues.apache.org/jira/browse/ATLAS-2284
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0
>Reporter: Sharmadha Sainath
>Assignee: Nixon Rodrigues
>Priority: Blocker
> Attachments: ATLAS-2284-branch-0.8.patch, ATLAS-2284-master.patch
>
>
> Import hive fails with the following error:
> {code}
> java.io.FileNotFoundException: 
> /usr/hdp/current/atlas-server/logs/import-hive.log (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
>   at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
>   at 
> org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
>   at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseCategory(DOMConfigurator.java:436)
>   at org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1004)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:872)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:778)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at 
> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:301)
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.<clinit>(HiveMetaStoreBridge.java:97)
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:634)
> Caused by: java.lang.ClassNotFoundException: com.sun.jersey.multipart.BodyPart
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 1 more
> Failed to import Hive Data Model!!
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (ATLAS-2284) Import hive fails with "java.lang.NoClassDefFoundError: com/sun/jersey/multipart/BodyPart"

2017-11-28 Thread Nixon Rodrigues (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nixon Rodrigues reassigned ATLAS-2284:
--

Assignee: Nixon Rodrigues

> Import hive fails with "java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart"
> --
>
> Key: ATLAS-2284
> URL: https://issues.apache.org/jira/browse/ATLAS-2284
> Project: Atlas
>  Issue Type: Bug
>  Components:  atlas-core
>Affects Versions: 1.0.0
>Reporter: Sharmadha Sainath
>Assignee: Nixon Rodrigues
>Priority: Blocker
>
> Import hive fails with the following error:
> {code}
> java.io.FileNotFoundException: 
> /usr/hdp/current/atlas-server/logs/import-hive.log (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
>   at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
>   at 
> org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
>   at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:295)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
>   at 
> org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:191)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:523)
>   at 
> org.apache.log4j.xml.DOMConfigurator.parseCategory(DOMConfigurator.java:436)
>   at org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:1004)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:872)
>   at 
> org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:778)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at 
> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:301)
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.<clinit>(HiveMetaStoreBridge.java:97)
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/jersey/multipart/BodyPart
>   at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:634)
> Caused by: java.lang.ClassNotFoundException: com.sun.jersey.multipart.BodyPart
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 1 more
> Failed to import Hive Data Model!!
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (ATLAS-2285) UI : Renamed saved search with date attribute.

2017-11-28 Thread Sharmadha Sainath (JIRA)
Sharmadha Sainath created ATLAS-2285:


 Summary: UI : Renamed saved search with date attribute.
 Key: ATLAS-2285
 URL: https://issues.apache.org/jira/browse/ATLAS-2285
 Project: Atlas
  Issue Type: Bug
  Components: atlas-webui
Affects Versions: 1.0.0
Reporter: Sharmadha Sainath
Priority: Critical
 Attachments: saved_search_with_date_renamed.mov

1. Fired a basic search with typeName = hive_table and filter CreateTime = 08/02/2017 
9:10 PM, and saved the search as "date_search".
2. Clicked on the saved search "date_search" and verified the filters; the filter 
CreateTime = 08/02/2017 9:10 PM was rightly set.
3. Clicked on "date_search" and renamed it to "date_search_renamed" without 
modifying the filters.
4. The PUT REST call to update the search had the correct filter value.
5. Refreshed the page and clicked on "date_search_renamed". Now the date filter 
was modified to CreateTime = 01/01/1970 5:30 AM.
6. The /api/atlas/v2/search/saved GET response had the 08/02/2017 9:10 PM filter 
value for "date_search_renamed", but the UI shows 01/01/1970 5:30 AM.

Attached the video recording.

CC : [~kevalbhatt] [~praik24mac]
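
Worth noting (an observation, not from the report itself): 01/01/1970 5:30 AM 
is Unix epoch 0 rendered in IST (UTC+5:30), which suggests the UI loses the 
stored timestamp after the rename and falls back to 0. A quick check with GNU 
date:

```shell
# Epoch 0 rendered in the Asia/Kolkata timezone (GNU date syntax):
LC_ALL=C TZ=Asia/Kolkata date -d @0 '+%d/%m/%Y %I:%M %p'
```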





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (ATLAS-2270) Supported combinations of persistent store and index backend

2017-11-28 Thread David Radley (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268443#comment-16268443
 ] 

David Radley commented on ATLAS-2270:
-

[~ppadovani] Thanks for your response. Some thoughts:

From my understanding:
- Janus 0.2 only works with external Solr and HBase. 

You will need to make Atlas code changes in order to run your configuration. 
The reasons Cassandra won't run at the moment include:
- the web app pom exclusions 
- at Janus 0.2, the indexing engine is communicated with over REST and runs 
separately from Janus, so a separate Solr indexing engine needs to be started. 
I guess you will need to work with an external Elasticsearch engine. 
- you will need to ensure the created configuration matches the build. 

If you have questions/proposals regarding the current poms, we would be happy 
to give feedback. 

There has been a JIRA about creating Docker images as build output. It would be 
great if you could change the Atlas build to create Docker images for your 
configuration. 

I suggest doing this Cassandra work in a separate JIRA. 

As you think about how to add Elasticsearch, it would be worth looking at the 
way the graph providers are specified in the build; in a similar way, my 
suggestion is that the choice of index engine and backing store should also be 
cleanly specified, with mutual exclusivity, in the poms.

Another approach might be to re-add Janus 0.1.1 support, which was working for 
you. 
 

> Supported combinations of persistent store and index backend
> 
>
> Key: ATLAS-2270
> URL: https://issues.apache.org/jira/browse/ATLAS-2270
> Project: Atlas
>  Issue Type: Bug
>Reporter: Graham Wallis
>
> We need to discuss and decide which combinations of persistent store and 
> indexing backend Atlas 1.0.0 (master) should support. This includes 
> building/running Atlas as a standalone package and running UTs/ITs as part of 
> the Atlas build. 
> This JIRA focusses on titan0 and janusgraph 0.2.0, as they are the graph 
> databases that will be supported in master/1.0.0. This JIRA deliberately 
> ignores titan1 and janusgraph 0.1.1 as the former should be 
> deprecated/removed and the other is a transient state as we get to janusgraph 
> 0.2.0. 
> With titan0 as the graph provider, Atlas has supported the following 
> combinations of persistent store and indexer. It is suggested that this set 
> is kept unchanged:
> {{
> titan0       solr   es
> ---
> berkeley      0     1
> hbase         1     0
> cassandra     0     0
> }}
> With janusgraph (0.2.0) as the graph provider, Atlas *could* support 
> additional combinations. Cassandra is included in this discussion pending 
> response to ATLAS-2259.
> {{
> janus 0.2.0  solr   es
> ---
> berkeley      ?     1
> hbase         1     ?
> cassandra     ?     ?
> }}
> It is suggested that the combinations marked with '1' should be continued and 
> the remaining 4 combinations, marked with '?', should be considered. There 
> seems to be evidence of people using all 4 of these combinations, although 
> not necessarily with Atlas.
> Depending on the decision made above, we need to ensure that it is possible 
> to build Atlas as a standalone package with any of the combinations - i.e. 
> that they are mutually exclusive and do not interfere with one another. They 
> currently interfere which makes it impossible to build Atlas with 
> -Pdist,berkeley-elasticsearch because the 'dist' profile will exclude jars 
> that are needed by the berkeley-elasticsearch profile - which leads to class 
> not found exceptions when the Atlas server is started. The solution to this 
> could be very simple, or slightly more sophisticated, depending on how many 
> of the combinations we choose to support.
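
To make the combinations above concrete, a hedged sketch of how the build 
invocations might be composed; dist, embedded-hbase-solr, 
graph-provider-titan0 and berkeley-elasticsearch are the profiles mentioned in 
this thread, and any other names would be assumptions:

```shell
#!/bin/sh
# Compose the Maven invocation for a given graph provider and profile list.
atlas_build_cmd() {
  provider=$1
  profiles=$2
  echo "mvn clean -DskipTests -DGRAPH_PROVIDER=${provider} install -P${profiles}"
}

# Examples, per the combinations discussed above:
atlas_build_cmd titan0 "dist,embedded-hbase-solr,graph-provider-titan0"
atlas_build_cmd titan0 "dist,berkeley-elasticsearch"
```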



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Instructions to build & run Atlas in dev environment

2017-11-28 Thread Madhan Neethiraj
Atlas dev,

 

Please follow the instructions at the end of this email to build & run Apache 
Atlas in your local dev environment.

 

This setup uses the following configuration:

  - Graph DB: titan-0.5.4

  - Embedded HBase

  - Embedded Solr

 

Thanks to Ashutosh for compiling these instructions.

 

Hope this helps.

 

Madhan

 

 

Add the following to a script file, like build-start-atlas.sh, and execute the 
script.

 

 

ATLAS_SOURCE_DIR=/tmp/atlas-source

ATLAS_HOME=/tmp/atlas-bin

 

# Clone Apache Atlas sources

mkdir -p ${ATLAS_SOURCE_DIR}

cd ${ATLAS_SOURCE_DIR}

git clone https://github.com/apache/atlas.git -b master

 

# build Apache Atlas

cd atlas

mvn clean -DskipTests -DGRAPH_PROVIDER=titan0 install 
-Pdist,embedded-hbase-solr,graph-provider-titan0

 

# Install Apache Atlas

mkdir -p ${ATLAS_HOME}

tar xfz distro/target/apache-atlas-*bin.tar.gz --strip-components 1 -C 
${ATLAS_HOME}

 

# Setup environment and configuration

export MANAGE_LOCAL_HBASE=true

export MANAGE_LOCAL_SOLR=true

export PATH=${PATH}:/tmp/atlas-bin/bin

echo atlas.graphdb.backend=org.apache.atlas.repository.graphdb.titan0.Titan0GraphDatabase >> ${ATLAS_HOME}/conf/atlas-application.properties

 

# Start Apache Atlas

atlas_start.py

 

# Access Apache Atlas at http://localhost:21000
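
The server can take a while after atlas_start.py returns; a hedged sketch of a 
readiness check (the status endpoint path and the default admin/admin 
credentials are assumptions, not from the instructions above):

```shell
#!/bin/sh
# URL of the (assumed) Atlas admin status endpoint.
atlas_status_url() {
  echo "http://localhost:21000/api/atlas/admin/status"
}

# Poll until Atlas answers 200, for up to ~5 minutes.
wait_for_atlas() {
  for _ in $(seq 1 30); do
    code=$(curl -s -o /dev/null -w '%{http_code}' -u admin:admin "$(atlas_status_url)")
    [ "$code" = "200" ] && { echo "Atlas is up"; return 0; }
    sleep 10
  done
  echo "Atlas did not come up" >&2
  return 1
}
```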

 

 



Atlas snapshot libraries

2017-11-28 Thread Madhan Neethiraj
Atlas dev,

 

As APIs/features are added in Apache Atlas, it is useful to publish snapshot 
libraries regularly to make the latest APIs available to components that 
integrate with Apache Atlas, like Apache Ranger. Sarath updated the Apache 
Atlas build to publish snapshot libraries at regular intervals (daily for now) 
from the master and branch-0.8 branches.

 

Any component that needs to refer to Apache Atlas snapshot libraries can do so 
by:

 

1. adding the following repository in pom.xml:

    

    <repository>
        <id>apache.snapshots.https</id>
        <name>Apache Development Snapshot Repository</name>
        <url>https://repository.apache.org/content/repositories/snapshots</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>

 

2. using Atlas version ‘0.8.2-SNAPSHOT’ or ‘1.0.0-SNAPSHOT’, e.g.:

    <atlas.version>0.8.2-SNAPSHOT</atlas.version>

 

For an example of the above, please look at the Apache Ranger pom.xml (in the 
ranger-0.7 branch), which references an Apache Atlas snapshot.

 

Hope this helps.

 

Madhan