[jira] [Updated] (ATLAS-448) Hive IllegalArgumentException with Atlas hook enabled on SHOW TRANSACTIONS AND SHOW COMPACTIONS

2016-01-21 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-448:
--
Attachment: ATLAS-448-v2.patch

> Hive IllegalArgumentException with Atlas hook enabled on SHOW TRANSACTIONS 
> AND SHOW COMPACTIONS
> ---
>
> Key: ATLAS-448
> URL: https://issues.apache.org/jira/browse/ATLAS-448
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.5-incubating
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: trunk
>
> Attachments: ATLAS-448-v2.patch, ATLAS-448.patch
>
>
> SHOW TRANSACTIONS;
> SHOW COMPACTIONS;
> Both statements fail with an exception like the following:
> {noformat}
> 2015-10-10 00:50:49,683 ERROR [HiveServer2-Background-Pool: Thread-273]: 
> ql.Driver (SessionState.java:printError(960)) - FAILED: Hi
> ve Internal Error: java.lang.IllegalArgumentException(No enum constant 
> org.apache.hadoop.hive.ql.plan.HiveOperation.SHOW TRANSACTIO
> NS)
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.hive.ql.plan.HiveOperation.SHOW TRANSACTIONS
> at java.lang.Enum.valueOf(Enum.java:238)
> at 
> org.apache.hadoop.hive.ql.plan.HiveOperation.valueOf(HiveOperation.java:23)
> at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:151)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1522)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
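The stack trace shows the hook calling HiveOperation.valueOf with the literal operation name "SHOW TRANSACTIONS"; since enum constants cannot contain spaces, valueOf throws IllegalArgumentException. A minimal sketch of the failure mode and a defensive parse (the enum subset and class name here are illustrative, not the real HiveOperation or hook code):

```java
public class HookSketch {
    // Illustrative subset of operation constants; the real enum lives in
    // org.apache.hadoop.hive.ql.plan.HiveOperation.
    enum HiveOperation { SHOWDATABASES, SHOWTABLES }

    // Defensive parse: Enum.valueOf throws IllegalArgumentException for any
    // name that matches no constant, e.g. "SHOW TRANSACTIONS" (with a space).
    // Returning null lets a hook skip unsupported operations instead of
    // failing the whole query.
    static HiveOperation parseOperation(String name) {
        try {
            return HiveOperation.valueOf(name);
        } catch (IllegalArgumentException e) {
            return null;
        }
    }
}
```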



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ATLAS-439) Investigate apache build failures

2016-01-21 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-439:
--
Attachment: (was: ATLAS-448-v2.patch)

> Investigate apache build failures
> -
>
> Key: ATLAS-439
> URL: https://issues.apache.org/jira/browse/ATLAS-439
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
> Fix For: trunk
>
> Attachments: ATLAS-439.patch
>
>
> The latest code builds fine on the local machine. However, the builds have 
> been failing continuously at apache builds - 
> https://builds.apache.org/job/apache-atlas-nightly/159/ 





[jira] [Updated] (ATLAS-439) Investigate apache build failures

2016-01-21 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-439:
--
Attachment: ATLAS-439.patch

> Investigate apache build failures
> -
>
> Key: ATLAS-439
> URL: https://issues.apache.org/jira/browse/ATLAS-439
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
> Fix For: trunk
>
> Attachments: ATLAS-439.patch
>
>
> The latest code builds fine on the local machine. However, the builds have 
> been failing continuously at apache builds - 
> https://builds.apache.org/job/apache-atlas-nightly/159/ 





[jira] [Commented] (ATLAS-396) Creating an entity with non-existing type results in "Unable to deserialize json" error

2016-01-19 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15108000#comment-15108000
 ] 

Shwetha G S commented on ATLAS-396:
---

[~dkantor], I have added [~guptaneeru] as contributor and have assigned this 
jira to her. She is all set. Thanks

> Creating an entity with non-existing type results in "Unable to deserialize 
> json" error
> ---
>
> Key: ATLAS-396
> URL: https://issues.apache.org/jira/browse/ATLAS-396
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Ayub Khan
>Assignee: Neeru Gupta
>Priority: Minor
>  Labels: regression
> Fix For: trunk
>
>
> Creating an entity with a non-existing type results in an "Unable to deserialize 
> json" error. This error message should have been "Unknown datatype: ".  
> This is a regression issue.
> Logs
> {noformat}
> Configuring TestNG with: TestNG652Configurator
> 2015-12-17 14:21:59,381 INFO  - [main:] ~ Request Url: 
> http://os-r6-atlas-erie-tp-testing-2.novalocal:21000/api/atlas/types?user.name=apathan
>  (BaseRequest:164)
> 2015-12-17 14:21:59,383 INFO  - [main:] ~ Request Method: POST 
> (BaseRequest:165)
> 2015-12-17 14:21:59,384 INFO  - [main:] ~ Request Header: Name=Content-Type 
> Value=application/json; charset=UTF-8 (BaseRequest:168)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Status: HTTP/1.1 201 
> Created (BaseRequest:195)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: Name=Date 
> Value=Thu, 17 Dec 2015 08:51:59 GMT (BaseRequest:197)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: Name=Content-Type 
> Value=application/json; charset=UTF-8 (BaseRequest:197)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: 
> Name=Transfer-Encoding Value=chunked (BaseRequest:197)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: Name=Server 
> Value=Jetty(9.2.12.v20150709) (BaseRequest:197)
> 2015-12-17 14:22:00,254 INFO  - [main:] ~ 
> 
>  (TestNGListener:36)
> 2015-12-17 14:22:00,254 INFO  - [main:] ~ Testing going to start for: 
> org.apache.atlas.regression.tests.EntityResourceTest.createEntityForNonExistantType([])
>  (TestNGListener:37)
> 2015-12-17 14:22:00,678 INFO  - [main:createEntityForNonExistantType] ~ 
> Request body is :{
>   
> "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference",
>   "id":{
> "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id",
> "id":"-145034232025579",
> "version":0,
> "typeName":"createEntityForNonExistantTypeo88sflmm7k"
>   },
>   "typeName":"createEntityForNonExistantTypeo88sflmm7k",
>   "values":{
> 
>   },
>   "traitNames":[
> 
>   ],
>   "traits":{
> 
>   }
> } (EntityResourceTest:394)
> 2015-12-17 14:22:00,680 INFO  - [main:createEntityForNonExistantType] ~ 
> Request Url: 
> http://os-r6-atlas-erie-tp-testing-2.novalocal:21000/api/atlas/entities?user.name=apathan
>  (BaseRequest:164)
> 2015-12-17 14:22:00,681 INFO  - [main:createEntityForNonExistantType] ~ 
> Request Method: POST (BaseRequest:165)
> 2015-12-17 14:22:00,681 INFO  - [main:createEntityForNonExistantType] ~ 
> Request Header: Name=Content-Type Value=application/json; charset=UTF-8 
> (BaseRequest:168)
> 2015-12-17 14:22:01,309 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Status: HTTP/1.1 400 Bad Request (BaseRequest:195)
> 2015-12-17 14:22:01,310 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Date Value=Thu, 17 Dec 2015 08:52:01 GMT 
> (BaseRequest:197)
> 2015-12-17 14:22:01,312 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Content-Type Value=application/json; charset=UTF-8 
> (BaseRequest:197)
> 2015-12-17 14:22:01,312 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Transfer-Encoding Value=chunked (BaseRequest:197)
> 2015-12-17 14:22:01,312 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Server Value=Jetty(9.2.12.v20150709) (BaseRequest:197)
> 2015-12-17 14:22:01,314 INFO  - [main:createEntityForNonExistantType] ~ 
> Response is: {"error":"Unable to deserialize 
> json","stackTrace":"java.lang.IllegalArgumentException: Unable to deserialize 
> json\n\tat 
> org.apache.atlas.services.DefaultMetadataService.deserializeClassInstances(DefaultMetadataService.java:315)\n\tat
>  
> org.apache.atlas.services.DefaultMetadataService.createEntities(DefaultMetadataService.java:280)\n\tat
>  
> org.apache.atlas.web.resources.EntityResource.submit(EntityResource.java:114)\n\tat
>  sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)\n\tat 
> 

[jira] [Updated] (ATLAS-396) Creating an entity with non-existing type results in "Unable to deserialize json" error

2016-01-19 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-396:
--
Assignee: Neeru Gupta

> Creating an entity with non-existing type results in "Unable to deserialize 
> json" error
> ---
>
> Key: ATLAS-396
> URL: https://issues.apache.org/jira/browse/ATLAS-396
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Ayub Khan
>Assignee: Neeru Gupta
>Priority: Minor
>  Labels: regression
> Fix For: trunk
>
>
> Creating an entity with a non-existing type results in an "Unable to deserialize 
> json" error. This error message should have been "Unknown datatype: ".  
> This is a regression issue.
> Logs
> {noformat}
> Configuring TestNG with: TestNG652Configurator
> 2015-12-17 14:21:59,381 INFO  - [main:] ~ Request Url: 
> http://os-r6-atlas-erie-tp-testing-2.novalocal:21000/api/atlas/types?user.name=apathan
>  (BaseRequest:164)
> 2015-12-17 14:21:59,383 INFO  - [main:] ~ Request Method: POST 
> (BaseRequest:165)
> 2015-12-17 14:21:59,384 INFO  - [main:] ~ Request Header: Name=Content-Type 
> Value=application/json; charset=UTF-8 (BaseRequest:168)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Status: HTTP/1.1 201 
> Created (BaseRequest:195)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: Name=Date 
> Value=Thu, 17 Dec 2015 08:51:59 GMT (BaseRequest:197)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: Name=Content-Type 
> Value=application/json; charset=UTF-8 (BaseRequest:197)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: 
> Name=Transfer-Encoding Value=chunked (BaseRequest:197)
> 2015-12-17 14:22:00,238 INFO  - [main:] ~ Response Header: Name=Server 
> Value=Jetty(9.2.12.v20150709) (BaseRequest:197)
> 2015-12-17 14:22:00,254 INFO  - [main:] ~ 
> 
>  (TestNGListener:36)
> 2015-12-17 14:22:00,254 INFO  - [main:] ~ Testing going to start for: 
> org.apache.atlas.regression.tests.EntityResourceTest.createEntityForNonExistantType([])
>  (TestNGListener:37)
> 2015-12-17 14:22:00,678 INFO  - [main:createEntityForNonExistantType] ~ 
> Request body is :{
>   
> "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference",
>   "id":{
> "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id",
> "id":"-145034232025579",
> "version":0,
> "typeName":"createEntityForNonExistantTypeo88sflmm7k"
>   },
>   "typeName":"createEntityForNonExistantTypeo88sflmm7k",
>   "values":{
> 
>   },
>   "traitNames":[
> 
>   ],
>   "traits":{
> 
>   }
> } (EntityResourceTest:394)
> 2015-12-17 14:22:00,680 INFO  - [main:createEntityForNonExistantType] ~ 
> Request Url: 
> http://os-r6-atlas-erie-tp-testing-2.novalocal:21000/api/atlas/entities?user.name=apathan
>  (BaseRequest:164)
> 2015-12-17 14:22:00,681 INFO  - [main:createEntityForNonExistantType] ~ 
> Request Method: POST (BaseRequest:165)
> 2015-12-17 14:22:00,681 INFO  - [main:createEntityForNonExistantType] ~ 
> Request Header: Name=Content-Type Value=application/json; charset=UTF-8 
> (BaseRequest:168)
> 2015-12-17 14:22:01,309 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Status: HTTP/1.1 400 Bad Request (BaseRequest:195)
> 2015-12-17 14:22:01,310 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Date Value=Thu, 17 Dec 2015 08:52:01 GMT 
> (BaseRequest:197)
> 2015-12-17 14:22:01,312 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Content-Type Value=application/json; charset=UTF-8 
> (BaseRequest:197)
> 2015-12-17 14:22:01,312 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Transfer-Encoding Value=chunked (BaseRequest:197)
> 2015-12-17 14:22:01,312 INFO  - [main:createEntityForNonExistantType] ~ 
> Response Header: Name=Server Value=Jetty(9.2.12.v20150709) (BaseRequest:197)
> 2015-12-17 14:22:01,314 INFO  - [main:createEntityForNonExistantType] ~ 
> Response is: {"error":"Unable to deserialize 
> json","stackTrace":"java.lang.IllegalArgumentException: Unable to deserialize 
> json\n\tat 
> org.apache.atlas.services.DefaultMetadataService.deserializeClassInstances(DefaultMetadataService.java:315)\n\tat
>  
> org.apache.atlas.services.DefaultMetadataService.createEntities(DefaultMetadataService.java:280)\n\tat
>  
> org.apache.atlas.web.resources.EntityResource.submit(EntityResource.java:114)\n\tat
>  sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)\n\tat 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat
>  java.lang.reflect.Method.invoke(Method.java:497)\n\tat 
> 
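The logs above show the server answering 400 with the generic "Unable to deserialize json" when the posted typeName is unregistered. A sketch of the error-message fix the issue asks for: validate the type name before attempting deserialization, so the caller sees "Unknown datatype: <name>". Class and method names here are illustrative stand-ins, not the Atlas API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class TypeCheckSketch {
    // Stand-in for the type registry; the real lookup goes through the
    // Atlas type system.
    static final Set<String> registeredTypes =
            new HashSet<>(Arrays.asList("hive_table", "hive_db"));

    // Fail fast with a specific message instead of letting JSON
    // deserialization produce the generic "Unable to deserialize json".
    static void validateType(String typeName) {
        if (!registeredTypes.contains(typeName)) {
            throw new IllegalArgumentException("Unknown datatype: " + typeName);
        }
    }
}
```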

[jira] [Updated] (ATLAS-106) Store createTimestamp and modified timestamp separately for an entity

2016-01-19 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-106:
--
Attachment: ATLAS-106-v8.patch

Attaching the patch from reviewboard

> Store createTimestamp and modified timestamp separately for an entity
> -
>
> Key: ATLAS-106
> URL: https://issues.apache.org/jira/browse/ATLAS-106
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Suma Shivaprasad
>Assignee: David Kantor
> Attachments: ATLAS-106-v7.patch, ATLAS-106-v8.patch
>
>
> Currently we store only the create timestamp in Atlas. It would be better to 
> track create and modified times separately, for cases where we want to 
> support search queries such as "give all entities modified in the past 1 day".
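A minimal sketch of the proposal: keep create and modified timestamps as separate fields so "modified since" style queries become possible. Field and method names are illustrative, not Atlas system attributes:

```java
public class EntityTimestampsSketch {
    // Set once at creation and never changed.
    final long createTime = System.currentTimeMillis();
    // Updated on every modification; starts equal to createTime.
    volatile long modifiedTime = createTime;

    void markModified() {
        modifiedTime = System.currentTimeMillis();
    }

    // Enables queries like "all entities modified in the past day".
    boolean modifiedSince(long sinceMillis) {
        return modifiedTime >= sinceMillis;
    }
}
```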





[jira] [Resolved] (ATLAS-446) IndexWriter.updateDocument on FSDirectory does not work Ver.5.4.0

2016-01-19 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S resolved ATLAS-446.
---
Resolution: Invalid

This issue doesn't appear to belong to the Atlas project.

> IndexWriter.updateDocument on FSDirectory does not work Ver.5.4.0
> -
>
> Key: ATLAS-446
> URL: https://issues.apache.org/jira/browse/ATLAS-446
> Project: Atlas
>  Issue Type: Bug
>Reporter: uygar yuzsuren
>






[jira] [Commented] (ATLAS-455) Build failure in Hive integration tests

2016-01-24 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15114764#comment-15114764
 ] 

Shwetha G S commented on ATLAS-455:
---

Can you attach the application.log? It should be in webapp/target/logs or 
addons/hive-bridge/target/logs.

> Build failure in Hive integration tests
> ---
>
> Key: ATLAS-455
> URL: https://issues.apache.org/jira/browse/ATLAS-455
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
>Reporter: Nigel Jones
>
> With the latest git pull from ~1400 UTC, plus the ATLAS-439 patch to fix a 
> webapp build failure, I am now seeing the Hive integration tests fail. This 
> is in a pretty clean environment, CentOS 7.1.
> Tests run: 12, Failures: 5, Errors: 0, Skipped: 0, Time elapsed: 177.625 sec 
> <<< FAILURE! - in org.apache.atlas.hive.hook
> .HiveHookIT
> testAlterTableRename(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 
> 27.741 sec  <<< FAILURE!
> java.lang.Exception: Waiting timed out after 2000 msec
> at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:427)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:342)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertTableIsRegistered(HiveHookIT.java:320)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.testAlterTableRename(HiveHookIT.java:269)
> testAlterViewRename(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 
> 10.27 sec  <<< FAILURE!
> java.lang.Exception: Waiting timed out after 2000 msec
> at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:427)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:342)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertTableIsRegistered(HiveHookIT.java:320)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.testAlterViewRename(HiveHookIT.java:284)
> testCTAS(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 13.085 sec  
> <<< FAILURE!
> java.lang.Exception: Waiting timed out after 2000 msec
> at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:427)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:342)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertProcessIsRegistered(HiveHookIT.java:297)
> at org.apache.atlas.hive.hook.HiveHookIT.testCTAS(HiveHookIT.java:171)
> testCreateView(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 4.736 
> sec  <<< FAILURE!
> java.lang.Exception: Waiting timed out after 2000 msec
> at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:427)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:342)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertProcessIsRegistered(HiveHookIT.java:297)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.testCreateView(HiveHookIT.java:182)
> testInsert(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 10.654 sec  
> <<< FAILURE!
> java.lang.Exception: Waiting timed out after 2000 msec
> at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:427)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:342)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.assertProcessIsRegistered(HiveHookIT.java:297)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.testInsert(HiveHookIT.java:206)
> Results :
> Failed tests:
>   
> HiveHookIT.testAlterTableRename:269->assertTableIsRegistered:320->assertEntityIsRegistered:342->waitFor:427
>   
> HiveHookIT.testAlterViewRename:284->assertTableIsRegistered:320->assertEntityIsRegistered:342->waitFor:427
>   
> HiveHookIT.testCTAS:171->assertProcessIsRegistered:297->assertEntityIsRegistered:342->waitFor:427
>   
> HiveHookIT.testCreateView:182->assertProcessIsRegistered:297->assertEntityIsRegistered:342->waitFor:427
>   
> HiveHookIT.testInsert:206->assertProcessIsRegistered:297->assertEntityIsRegistered:342->waitFor:427
> Tests run: 12, Failures: 5, Errors: 0, Skipped: 0
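All five failures bottom out in the same helper: each test polls for an entity registration and gives up with "Waiting timed out after 2000 msec". An illustrative reconstruction of such a polling helper (assumed signature and poll interval, not the actual HiveHookIT source):

```java
public class WaitForSketch {
    // Condition to re-evaluate until it holds or the deadline passes.
    interface Predicate {
        boolean evaluate() throws Exception;
    }

    static void waitFor(long timeoutMsec, Predicate predicate) throws Exception {
        long mustEnd = System.currentTimeMillis() + timeoutMsec;
        while (true) {
            if (predicate.evaluate()) {
                return; // condition met, e.g. entity found in Atlas
            }
            if (System.currentTimeMillis() >= mustEnd) {
                throw new Exception("Waiting timed out after " + timeoutMsec + " msec");
            }
            Thread.sleep(100); // poll interval; the real value is an assumption
        }
    }
}
```

A too-short timeout makes such tests fail on slow or loaded machines even when the hook eventually delivers the entity.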





[jira] [Commented] (ATLAS-451) Doc: Fix few broken links due to Wiki words in Atlas documentation

2016-01-24 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15114792#comment-15114792
 ] 

Shwetha G S commented on ATLAS-451:
---

Whenever we need links, we specify them explicitly. Instead of escaping every 
WikiWord individually, can we disable WikiWord links globally? There appear to 
be some plugins available for this.

> Doc: Fix few broken links due to Wiki words in Atlas documentation
> --
>
> Key: ATLAS-451
> URL: https://issues.apache.org/jira/browse/ATLAS-451
> Project: Atlas
>  Issue Type: Bug
>Reporter: Hemanth Yamijala
>Assignee: Sharmadha Sainath
>Priority: Trivial
>  Labels: newbie
> Attachments: ATLAS-451-v1.patch
>
>
> There are a few broken links in the Atlas documentation, caused by unescaped 
> Wiki words. This is a ticket to clean them up.
> http://atlas.incubator.apache.org/SingleQuery.html
> http://atlas.incubator.apache.org/JdbcAccess.html
> http://atlas.incubator.apache.org/WithPath.html
> http://atlas.incubator.apache.org/SingleQueries.html





[jira] [Updated] (ATLAS-439) Investigate apache build failures - EntityJerseyResourceIT.

2016-01-24 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-439:
--
Summary: Investigate apache build failures - EntityJerseyResourceIT.  (was: 
Investigate apache build failures - )

> Investigate apache build failures - EntityJerseyResourceIT.
> ---
>
> Key: ATLAS-439
> URL: https://issues.apache.org/jira/browse/ATLAS-439
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
> Fix For: trunk
>
> Attachments: ATLAS-439.patch
>
>
> The latest code builds fine on the local machine. However, the builds have 
> been failing continuously at apache builds - 
> https://builds.apache.org/job/apache-atlas-nightly/159/ 





[jira] [Updated] (ATLAS-452) Exceptions while running HiveHookIT#testAlterTableRename

2016-01-24 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-452:
--
Summary: Exceptions while running HiveHookIT#testAlterTableRename  (was: 
Exceptions while running HiveHookIT#testRenameTable)

> Exceptions while running HiveHookIT#testAlterTableRename
> 
>
> Key: ATLAS-452
> URL: https://issues.apache.org/jira/browse/ATLAS-452
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>
> {noformat}
> 127.0.0.1 - - [22/Jan/2016:06:08:54 +] "OPTIONS 
> /api/atlas/discovery/search?query=hive_table+as+t+where+tableName+%3D+'tablehdj2zjk9bz',+db+where+name+%3D+'default'+and+clusterName+%3D+'test'+select+t=sshivalingamurthy
>  HTTP/1.1" 200 - "-" "Java/1.7.0_79"
> 2016-01-22 11:38:54.720:WARN:oejs.ServletHandler:qtp127815766-153: 
> /api/atlas/entities
> java.lang.IllegalArgumentException: Input String cannot be null cannot be null
>   at org.apache.atlas.utils.ParamChecker.notNull(ParamChecker.java:40)
>   at 
> org.apache.atlas.web.util.Servlets.escapeJsonString(Servlets.java:146)
>   at 
> org.apache.atlas.web.util.Servlets.getErrorResponse(Servlets.java:125)
>   at 
> org.apache.atlas.web.util.Servlets.getErrorResponse(Servlets.java:107)
>   at 
> org.apache.atlas.web.resources.EntityResource.updateEntities(EntityResource.java:177)
> {noformat}
> {noformat}
> 2016-01-22 11:47:08,614 ERROR - [qtp1582368608-323 - 
> ddb67cab-8e11-4434-9cb3-53d95fccc10a:] ~ Unable to persist entity instance 
> (EntityResource:176)
> java.lang.NullPointerException
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.updateClassEdge(TypedInstanceToGraphMapper.java:567)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.addOrUpdateClassVertex(TypedInstanceToGraphMapper.java:513)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.addOrUpdateCollectionEntry(TypedInstanceToGraphMapper.java:470)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.mapArrayCollectionToVertex(TypedInstanceToGraphMapper.java:370)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.mapAttributesToVertex(TypedInstanceToGraphMapper.java:208)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.mapInstanceToVertex(TypedInstanceToGraphMapper.java:183)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.addOrUpdateAttributesAndTraits(TypedInstanceToGraphMapper.java:163)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.addOrUpdateAttributesAndTraits(TypedInstanceToGraphMapper.java:139)
> at 
> org.apache.atlas.repository.graph.TypedInstanceToGraphMapper.mapTypedInstanceToGraph(TypedInstanceToGraphMapper.java:105)
> at 
> org.apache.atlas.repository.graph.GraphBackedMetadataRepository.updateEntities(GraphBackedMetadataRepository.java:298)
> at 
> org.apache.atlas.GraphTransactionInterceptor.invoke(GraphTransactionInterceptor.java:42)
> at 
> org.apache.atlas.services.DefaultMetadataService.updateEntities(DefaultMetadataService.java:384)
> {noformat}
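The first trace's doubled message, "Input String cannot be null cannot be null", is itself telling: it is what you get when the checker appends its own " cannot be null" suffix to a caller-supplied message that already ends with those words. A hedged reconstruction (names are illustrative, not the actual ParamChecker source):

```java
public class ParamCheckerSketch {
    // The checker unconditionally appends " cannot be null" to the
    // caller-supplied name/message.
    static <T> T notNull(T obj, String name) {
        if (obj == null) {
            throw new IllegalArgumentException(name + " cannot be null");
        }
        return obj;
    }
}
```

If a call site passes "Input String cannot be null" as the name, the resulting exception message duplicates the suffix exactly as seen in the log above.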





[jira] [Updated] (ATLAS-181) Integrate storm topology metadata into Atlas

2016-01-19 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-181:
--
Attachment: ATLAS-181-2.patch

Attaching the latest patch from reviewboard

> Integrate storm topology metadata into Atlas
> 
>
> Key: ATLAS-181
> URL: https://issues.apache.org/jira/browse/ATLAS-181
> Project: Atlas
>  Issue Type: Improvement
>Affects Versions: 0.6-incubating
>Reporter: Venkatesh Seetharam
>Assignee: Hemanth Yamijala
> Fix For: trunk
>
> Attachments: ATLAS-181-1.patch, ATLAS-181-2.patch, ATLAS-181.patch, 
> ApacheStormIntegrationWithApacheAtlas.pdf
>
>






[jira] [Assigned] (ATLAS-439) Investigate apache build failures

2016-01-20 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S reassigned ATLAS-439:
-

Assignee: Shwetha G S

> Investigate apache build failures
> -
>
> Key: ATLAS-439
> URL: https://issues.apache.org/jira/browse/ATLAS-439
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
>
> The latest code builds fine on the local machine. However, the builds have 
> been failing continuously at apache builds - 
> https://builds.apache.org/job/apache-atlas-nightly/159/ 





[jira] [Commented] (ATLAS-476) Update type attribute with Reserved characters updated the original type as unknown

2016-02-17 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150044#comment-15150044
 ] 

Shwetha G S commented on ATLAS-476:
---

[~yhemanth]
We can't have a transaction across different systems. We knew this was an 
issue, but there is no easy way to do the right thing. We can't share the same 
transaction across the 2nd and 3rd steps, hence we left the indexes as-is and 
reverse the 1st step in case of failures.
How is this related to the error mentioned above?
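The approach described, reversing the first step when a later step fails because no transaction spans the systems involved, is a compensation pattern. A toy sketch under those assumptions (step contents and names are stand-ins, not the Atlas code):

```java
import java.util.ArrayList;
import java.util.List;

public class TypeUpdateCompensationSketch {
    final List<String> typeRegistry = new ArrayList<>();

    void updateType(String typeName, boolean laterStepFails) {
        typeRegistry.add(typeName);            // step 1: register the type
        try {
            // stand-in for steps 2-3 (e.g. graph/index updates), which
            // cannot share a transaction with step 1
            if (laterStepFails) {
                throw new RuntimeException("index update failed");
            }
        } catch (RuntimeException e) {
            typeRegistry.remove(typeName);     // compensate: reverse step 1
            throw e;
        }
    }
}
```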




> Update type attribute with Reserved characters updated the original type as 
> unknown
> ---
>
> Key: ATLAS-476
> URL: https://issues.apache.org/jira/browse/ATLAS-476
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: sandbox
>Reporter: Chethana
>Assignee: Hemanth Yamijala
>Priority: Blocker
> Fix For: 0.7-incubating
>
> Attachments: 1.log
>
>
> Steps:
> 1. Create a type with a required attribute.
> 2. GET the created type - the type data is returned.
> 3. Update the type by adding an attribute whose name contains a reserved 
> character, e.g. test$ - this throws an exception.
> 4. Use a GET call to fetch the previously created type.
> Expected:
> The type should not be updated.
> Actual:
> "error": "Unknown datatype: className_update_vsvrbzqaqg",
> "stackTrace": "org.apache.atlas.typesystem.exception.TypeNotFoundException: 
> Unknown datatype: className_update_vsvrbzqaqg\n\tat 





[jira] [Commented] (ATLAS-398) Delete trait that exists but not linked to entity results in "400 Bad request". It should result "404 not found"

2016-02-17 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150237#comment-15150237
 ] 

Shwetha G S commented on ATLAS-398:
---

[~ndjouhr], in review requests, please always add 'atlas' to the groups.

> Delete trait that exists but not linked to entity results in "400 Bad 
> request". It should result "404 not found"
> 
>
> Key: ATLAS-398
> URL: https://issues.apache.org/jira/browse/ATLAS-398
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Ayub Khan
>Assignee: Naima Djouhri
> Fix For: trunk
>
> Attachments: ATLAS-356-398-V1.patch, ATLAS-356-ATLAS-398-V2.patch, 
> Atlas-356-Atlas-398-V0.patch
>
>
> Delete trait that exists but not linked to entity results in "400 Bad 
> request". It should result "404 not found"
> {noformat}
> curl -v -X DELETE 
> http://os-r6-atlas-erie-tp-testing-2.novalocal:21000/api/atlas/entities/c4d364e5-c5d0-4971-8b35-9a661128a5d9/traits/deleteTraitThatExistsButNotLinkedToEntitywzrfuznlno?user.name=apathan
> * About to connect() to os-r6-atlas-erie-tp-testing-2.novalocal port 21000 
> (#0)
> *   Trying 172.22.100.121... connected
> * Connected to os-r6-atlas-erie-tp-testing-2.novalocal (172.22.100.121) port 
> 21000 (#0)
> > DELETE 
> > /api/atlas/entities/c4d364e5-c5d0-4971-8b35-9a661128a5d9/traits/deleteTraitThatExistsButNotLinkedToEntitywzrfuznlno?user.name=apathan
> >  HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: os-r6-atlas-erie-tp-testing-2.novalocal:21000
> > Accept: */*
> >
> < HTTP/1.1 400 Bad Request
> < Date: Thu, 17 Dec 2015 17:57:02 GMT
> < Content-Type: application/json; charset=UTF-8
> < Transfer-Encoding: chunked
> < Server: Jetty(9.2.12.v20150709)
> <
> {"error":"org.apache.atlas.typesystem.exception.EntityNotFoundException: 
> Could not find trait=deleteTraitThatExistsButNotLinkedToEntitywzrfuznlno in 
> the repository for entity: 
> c4d364e5-c5d0-4971-8b35-9a661128a5d9","stackTrace":"org.apache.atlas.repository.RepositoryException:
>  org.apache.atlas.typesystem.exception.EntityNotFoundException: Could not 
> find trait=deleteTraitThatExistsButNotLinkedToEntitywzrfuznlno in the 
> repository for entity: c4d364e5-c5d0-4971-8b35-9a661128a5d9\n\tat 
> org.apache.atlas.repository.graph.GraphBackedMetadataRepository.deleteTrait(GraphBackedMetadataRepository.java:266)\n\tat
>  
> org.apache.atlas.GraphTransactionInterceptor.invoke(GraphTransactionInterceptor.java:42)\n\tat
>  
> org.apache.atlas.services.DefaultMetadataService.deleteTrait(DefaultMetadataService.java:607)\n\tat
>  
> org.apache.atlas.web.resources.EntityResource.deleteTrait(EntityResource.java:523)\n\tat
>  sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)\n\tat 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat
>  java.lang.reflect.Method.invoke(Method.java:497)\n\tat 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)\n\tat
>  
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)\n\tat
>  
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)\n\tat
>  
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)\n\tat
>  
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)\n\tat
>  
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)\n\tat
>  
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)\n\tat
>  
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)\n\tat
>  
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)\n\tat
>  
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)\n\tat
>  
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)\n\tat
>  
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)\n\tat
>  
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)\n\tat
>  
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)\n\tat
>  
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)\n\tat
>  javax.servlet.http.HttpServlet.service(HttpServlet.java:790)\n\tat 
> 

[jira] [Updated] (ATLAS-349) SSL - Atlas SSL connection has weak/unsafe Ciphers suites

2016-02-18 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-349:
--
Attachment: ATLAS-349-v1.patch

Attaching the latest patch from reviewboard

> SSL - Atlas SSL connection has weak/unsafe Ciphers suites
> -
>
> Key: ATLAS-349
> URL: https://issues.apache.org/jira/browse/ATLAS-349
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Naima Djouhri
>Assignee: Naima Djouhri
> Fix For: trunk
>
> Attachments: ATLAS-349-V0.patch, ATLAS-349-v1.patch
>
>
> After establishing an Atlas SSL connection, I wanted to see the cipher suites of 
> the Atlas server.
> Run the following:
> nmap -Pn --script ssl-cert,ssl-enum-ciphers -p 21443 localhost
> Got the following results:
> ssl-enum-ciphers:
>TLSv1.0:
>  ciphers:
>TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (dh 1024) - E
>TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 1024) - C
>TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp160k1) - E
>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp160k1) - C
>TLS_ECDHE_RSA_WITH_RC4_128_SHA (secp160k1) - C
>TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 512) - E
>TLS_RSA_WITH_AES_128_CBC_SHA (rsa 512) - C
>TLS_RSA_WITH_RC4_128_MD5 (rsa 512) - C
>TLS_RSA_WITH_RC4_128_SHA (rsa 512) - C
>  compressors:
>NULL
>  cipher preference: client
>  warnings:
>Ciphersuite uses MD5 for message integrity
>Weak certificate signature: SHA1
> |_  least strength: E
> MAC Address: 00:00:00:41:47:4E (Xerox)
> Nmap done: 1 IP address (1 host up) scanned in 8.75 seconds
> The unsafe ciphers need to be excluded.
> Per the Jetty Configuring SSL/TLS documentation, section "Disabling/Enabling 
> specific cipher suites": 
> http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html
> ExcludeCipherSuites needs to be set. 
> Since Atlas has an embedded Jetty, this property needs to be set there to 
> exclude the weak/unsafe cipher suites.
> The Open Web Application Security Project (OWASP) has a good set of tools for 
> testing for weak SSL/TLS ciphers: 
> https://www.owasp.org/index.php/Testing_for_Weak_SSL/TLS_Ciphers,_Insufficient_Transport_Layer_Protection_%28OTG-CRYPST-001%29#Tools
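As a minimal, stdlib-only sketch of the idea (not the actual Atlas/Jetty change), excluding the weak suites amounts to filtering the enabled cipher list against a deny pattern covering the RC4, 3DES and MD5 suites flagged by nmap, before handing the list to the server:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class CipherFilter {
    // Deny-list pattern covering the weak suites flagged by nmap above.
    static final Pattern WEAK = Pattern.compile(".*(RC4|3DES|MD5).*");

    static List<String> excludeWeak(List<String> suites) {
        return suites.stream()
                .filter(s -> !WEAK.matcher(s).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> enabled = Arrays.asList(
                "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
                "TLS_ECDHE_RSA_WITH_RC4_128_SHA",
                "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
                "TLS_RSA_WITH_RC4_128_MD5");
        // prints [TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
        System.out.println(excludeWeak(enabled));
    }
}
```

With embedded Jetty, the equivalent patterns would be passed to the SslContextFactory's exclude-cipher-suites setting as the linked documentation describes.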



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (ATLAS-484) Disconnect uni-directional references to deletion candidates

2016-02-18 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S resolved ATLAS-484.
---
Resolution: Duplicate

> Disconnect uni-directional references to deletion candidates
> 
>
> Key: ATLAS-484
> URL: https://issues.apache.org/jira/browse/ATLAS-484
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: David Kantor
>Assignee: David Kantor
>
> When deleting entities, uni-directional references from other entities to the 
> deletion candidates should be disconnected.





[jira] [Commented] (ATLAS-476) Update type attribute with Reserved characters updated the original type as unknown

2016-02-18 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151993#comment-15151993
 ] 

Shwetha G S commented on ATLAS-476:
---

TransientTypeSystem is a wrapper around TypeSystem and is used to validate the 
new types. Every request uses a new TransientTypeSystem. Instead of updating 
types as part of TransientTypeSystem.defineTypes(), we should commit the types 
after step 3. We will also need locking on types to handle concurrent 
updates (there is already another bug for that).
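A hypothetical sketch of the validate-then-commit pattern being suggested (the class and method names here are illustrative, not the actual Atlas code): new definitions are staged and validated in a transient copy, and only merged into the shared registry, under a write lock, once every definition is valid.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TypeRegistry {
    private final Map<String, String> types = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void defineTypes(Map<String, String> newTypes) {
        // Validate against a transient copy; nothing is published yet,
        // so a failed request cannot leave the registry half-updated.
        Map<String, String> transientView = new HashMap<>(types);
        for (Map.Entry<String, String> e : newTypes.entrySet()) {
            if (e.getKey().contains("$")) { // reserved-character check
                throw new IllegalArgumentException("Reserved char in: " + e.getKey());
            }
            transientView.put(e.getKey(), e.getValue());
        }
        // Commit only after validation succeeded; the write lock
        // serializes concurrent type updates.
        lock.writeLock().lock();
        try {
            types.putAll(newTypes);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public String getType(String name) {
        lock.readLock().lock();
        try {
            return types.get(name);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

With this shape, an update containing a name like test$ throws before commit, so the previously created type remains intact and readable.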

> Update type attribute with Reserved characters updated the original type as 
> unknown
> ---
>
> Key: ATLAS-476
> URL: https://issues.apache.org/jira/browse/ATLAS-476
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: sandbox
>Reporter: Chethana
>Assignee: Hemanth Yamijala
>Priority: Blocker
> Fix For: 0.7-incubating
>
> Attachments: 1.log
>
>
> create a type with a required attribute
> try to get the created type - the type data is returned
> try to update this type by adding an attribute whose name contains a 
> reserved character, e.g. test$
> this throws an exception
> now use the get call to fetch the previously created type
> Expected:
> The type should not be updated.
> Actual:
> "error": "Unknown datatype: className_update_vsvrbzqaqg",
> "stackTrace": "org.apache.atlas.typesystem.exception.TypeNotFoundException: 
> Unknown datatype: className_update_vsvrbzqaqg\n\tat 





[jira] [Commented] (ATLAS-483) Rename client.properties to atlas-client.properties

2016-02-11 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143172#comment-15143172
 ] 

Shwetha G S commented on ATLAS-483:
---

There are some configs which are common to both application and client 
properties, and it's confusing to decide which config should go where. So, should 
we just get rid of client.properties?

> Rename client.properties to atlas-client.properties
> ---
>
> Key: ATLAS-483
> URL: https://issues.apache.org/jira/browse/ATLAS-483
> Project: Atlas
>  Issue Type: Bug
>Reporter: Tom Beerbower
>Assignee: Tom Beerbower
>
> Atlas conf needs to be included in other components for hooks. The name 
> resolution can conflict, and hence the properties filename should be 
> atlas-specific.





[jira] [Updated] (ATLAS-349) SSL - Atlas SSL connection has weak/unsafe Ciphers suites

2016-02-22 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-349:
--
Attachment: ATLAS-349-v2.patch

The patch committed

> SSL - Atlas SSL connection has weak/unsafe Ciphers suites
> -
>
> Key: ATLAS-349
> URL: https://issues.apache.org/jira/browse/ATLAS-349
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
>Reporter: Naima Djouhri
>Assignee: Naima Djouhri
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-349-V0.patch, ATLAS-349-v1.patch, 
> ATLAS-349-v2.patch
>
>
> After establishing an Atlas SSL connection, I wanted to see the cipher suites of 
> the Atlas server.
> Run the following:
> nmap -Pn --script ssl-cert,ssl-enum-ciphers -p 21443 localhost
> Got the following results:
> ssl-enum-ciphers:
>TLSv1.0:
>  ciphers:
>TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (dh 1024) - E
>TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 1024) - C
>TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp160k1) - E
>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp160k1) - C
>TLS_ECDHE_RSA_WITH_RC4_128_SHA (secp160k1) - C
>TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 512) - E
>TLS_RSA_WITH_AES_128_CBC_SHA (rsa 512) - C
>TLS_RSA_WITH_RC4_128_MD5 (rsa 512) - C
>TLS_RSA_WITH_RC4_128_SHA (rsa 512) - C
>  compressors:
>NULL
>  cipher preference: client
>  warnings:
>Ciphersuite uses MD5 for message integrity
>Weak certificate signature: SHA1
> |_  least strength: E
> MAC Address: 00:00:00:41:47:4E (Xerox)
> Nmap done: 1 IP address (1 host up) scanned in 8.75 seconds
> The unsafe ciphers need to be excluded.
> Per the Jetty Configuring SSL/TLS documentation, section "Disabling/Enabling 
> specific cipher suites": 
> http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html
> ExcludeCipherSuites needs to be set. 
> Since Atlas has an embedded Jetty, this property needs to be set there to 
> exclude the weak/unsafe cipher suites.
> The Open Web Application Security Project (OWASP) has a good set of tools for 
> testing for weak SSL/TLS ciphers: 
> https://www.owasp.org/index.php/Testing_for_Weak_SSL/TLS_Ciphers,_Insufficient_Transport_Layer_Protection_%28OTG-CRYPST-001%29#Tools





[jira] [Assigned] (ATLAS-539) Store for entity update audit

2016-02-29 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S reassigned ATLAS-539:
-

Assignee: Shwetha G S

> Store for entity update audit
> -
>
> Key: ATLAS-539
> URL: https://issues.apache.org/jira/browse/ATLAS-539
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
>
> We need to store the entity update events in some store. The supported search 
> should return all events for a given entity id within some time range.
> Two choices are:
> 1. Existing graph db - We can create a vertex for every update with 
> properties for entity id, timestamp, action and details. This will create 
> disjoint vertices. The direct gremlin search is enough to retrieve all events 
> for the entity. 
> Pros - We already have configurations for graph and utilities to store/get 
> from graph
> Cons - It will create extra data and doesn't fit the graph model
> 2. HBase - Store events with key = entity id + timestamp and columns for 
> action and details. The table scan supports the required search
> Pros - Fits the data model
> Cons - We will need the configurations and code to read and write from hbase
> In either case, we should expose an interface so that alternative 
> implementations can be added
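A sketch of option 2's key layout (illustrative only, not a committed design): storing events under entity id plus an inverted fixed-width timestamp keeps all events for an entity adjacent, and makes a plain prefix scan return them newest-first.

```java
public class AuditKey {
    // Row key: entity id, then (Long.MAX_VALUE - timestamp) zero-padded to
    // 19 digits, so lexicographic row order within an entity's prefix is
    // reverse-chronological.
    static String rowKey(String entityId, long timestampMillis) {
        return String.format("%s:%019d", entityId, Long.MAX_VALUE - timestampMillis);
    }

    public static void main(String[] args) {
        String older = rowKey("guid-1", 1000L);
        String newer = rowKey("guid-1", 2000L);
        // The newer event sorts before the older one in a scan.
        System.out.println(newer.compareTo(older) < 0); // prints true
    }
}
```

The action and details would live in columns under that row key; the interface in front of this could equally be implemented by the disjoint-vertex approach from option 1.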





[jira] [Updated] (ATLAS-536) Falcon hook loads incorrect configuration when -Datlas.conf is not given when falcon server startup

2016-02-29 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-536:
--
Summary: Falcon hook loads incorrect configuration when -Datlas.conf is not 
given when falcon server startup  (was: Falcon hook loads incorrect 
configuration when -Datlas.conf is not given when falcon server startup. this 
needs to be documented.)

> Falcon hook loads incorrect configuration when -Datlas.conf is not given when 
> falcon server startup
> ---
>
> Key: ATLAS-536
> URL: https://issues.apache.org/jira/browse/ATLAS-536
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
>Reporter: Ayub Khan
>Assignee: Ayub Khan
>Priority: Critical
>  Labels: patch
> Fix For: trunk
>
> Attachments: ATLAS-536-v4.patch, 
> Adding_atlas_conf_path_to_falcon_server_opts-2.patch, 
> Adding_atlas_conf_path_to_falcon_server_opts-3.patch, 
> Adding_atlas_conf_path_to_falcon_server_opts-4.patch, 
> Adding_atlas_conf_path_to_falcon_server_opts.patch
>
>
> Problem:
> Falcon server startup expects "-Datlas.conf=" to read 
> the atlas configuration file (application.properties); if this is not 
> provided, then the falcon hook reads the default values from the 
> atlas-typesystem jar file. 
> *Basically the falcon hook depends on the "atlas.rest.address" config option 
> present in application.properties to post entity creates/updates to atlas. But 
> in the above scenario, when the "atlas.conf" option is not provided at falcon 
> server startup, it reads atlas.rest.address as localhost, which is 
> incorrect, and all subsequent falcon hook updates will fail.*
> This also needs to be documented.
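As an illustration of the workaround (the exact environment variable is an assumption about Falcon's packaging; falcon-env.sh may use a different name), the Falcon server JVM needs the flag passed at startup:

```shell
# Point the Falcon server JVM at the real Atlas config directory so the hook
# reads application.properties from /etc/atlas/conf instead of the defaults
# bundled in the atlas-typesystem jar.
export FALCON_SERVER_OPTS="$FALCON_SERVER_OPTS -Datlas.conf=/etc/atlas/conf"
```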
> Falcon application log showing the conf file is read from the jar file, which 
> has default values
> {noformat}
> 2016-02-25 08:29:13,131 INFO  - [main:] ~ Initiated backend operations thread 
> pool of size 2 (Backend:148)
> 2016-02-25 08:29:13,510 INFO  - [main:] ~ Indexes already exist for graph 
> (MetadataMappingService:143)
> 2016-02-25 08:29:13,510 INFO  - [main:] ~ Initialized graph db: 
> titangraph[berkeleyje:/grid/0/hadoop/falcon/data/lineage/graphdb] 
> (MetadataMappingService:88)
> 2016-02-25 08:29:13,511 INFO  - [main:] ~ Init vertex property keys: [name, 
> type, version, timestamp] (MetadataMappingService:91)
> 2016-02-25 08:29:13,511 INFO  - [main:] ~ Init edge property keys: [name] 
> (MetadataMappingService:94)
> 2016-02-25 08:29:13,515 INFO  - [main:] ~ Service initialized: 
> org.apache.falcon.metadata.MetadataMappingService (ServiceInitializer:52)
> 2016-02-25 08:29:13,517 INFO  - [main:] ~ Initializing service: 
> org.apache.falcon.atlas.service.AtlasService (ServiceInitializer:45)
> 2016-02-25 08:29:13,522 INFO  - [main:] ~ Loading application.properties from 
> jar:file:/grid/0/hdp/2.3.99.0-196/falcon/webapp/falcon/WEB-INF/lib/atlas-typesystem-0.6.0.2.3.99.0-196.jar!/application.properties
>  (ApplicationProperties:62)
> 2016-02-25 08:29:13,539 INFO  - [main:] ~ Loading client.properties from 
> file:/grid/0/hdp/2.3.99.0-196/falcon/webapp/falcon/WEB-INF/classes/client.properties
>  (ApplicationProperties:62)
> 2016-02-25 08:29:13,914 INFO  - [main:] ~ Real User: falcon (auth:SIMPLE), is 
> from ticket cache? false (SecureClientUtils:90)
> 2016-02-25 08:29:13,915 INFO  - [main:] ~ doAsUser: falcon 
> (SecureClientUtils:93)
> 2016-02-25 08:29:14,780 INFO  - [main:] ~ Created Atlas Hook for Falcon 
> (FalconHook:144)
> 2016-02-25 08:29:14,780 INFO  - [main:] ~ Service initialized: 
> org.apache.falcon.atlas.service.AtlasService (ServiceInitializer:52)
> 2016-02-25 08:29:14,782 INFO  - [main:] ~ FalconAuditFilter initialization 
> started (FalconAuditFilter:49)
> 2016-02-25 08:29:14,785 INFO  - [main:] ~ FalconAuthenticationFilter 
> initialization started (FalconAuthenticationFilter:83)
> 2016-02-25 08:29:14,802 INFO  - [main:] ~ Falcon is running with 
> authorization enabled (FalconAuthorizationFilter:62)
> {noformat}
> Falcon application log showing conf file read from /etc/atlas/conf when 
> -Datlas.conf is provided
> {noformat}
> 2016-02-25 11:36:52,605 INFO  - [main:] ~ Initiated backend operations thread 
> pool of size 2 (Backend:148)
> 2016-02-25 11:36:53,119 INFO  - [main:] ~ Indexes already exist for graph 
> (MetadataMappingService:143)
> 2016-02-25 11:36:53,120 INFO  - [main:] ~ Initialized graph db: 
> titangraph[berkeleyje:/grid/0/hadoop/falcon/data/lineage/graphdb] 
> (MetadataMappingService:88)
> 2016-02-25 11:36:53,122 INFO  - [main:] ~ Init vertex property keys: [name, 
> type, version, timestamp] (MetadataMappingService:91)
> 2016-02-25 11:36:53,123 INFO  - [main:] ~ Init edge property keys: [name] 
> (MetadataMappingService:94)
> 2016-02-25 11:36:53,128 INFO  - [main:] 

[jira] [Commented] (ATLAS-479) Add description for different types during create time

2016-02-29 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173293#comment-15173293
 ] 

Shwetha G S commented on ATLAS-479:
---

Tests failed for me with the latest patch. Updated review board. Can you check?

> Add description for different types during create time
> --
>
> Key: ATLAS-479
> URL: https://issues.apache.org/jira/browse/ATLAS-479
> Project: Atlas
>  Issue Type: Sub-task
>Affects Versions: 0.6-incubating
>Reporter: Neeru Gupta
>Assignee: Neeru Gupta
> Fix For: 0.7-incubating
>
> Attachments: rb43531(5).patch
>
>
> Ability to specify description while creating different types like Struct, 
> Enum, Class and Trait type.





[jira] [Commented] (ATLAS-536) Falcon hook loads incorrect configuration when -Datlas.conf is not given when falcon server startup. this needs to be documented.

2016-02-29 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173306#comment-15173306
 ] 

Shwetha G S commented on ATLAS-536:
---

+1

> Falcon hook loads incorrect configuration when -Datlas.conf is not given when 
> falcon server startup. this needs to be documented.
> -
>
> Key: ATLAS-536
> URL: https://issues.apache.org/jira/browse/ATLAS-536
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
>Reporter: Ayub Khan
>Assignee: Ayub Khan
>Priority: Critical
>  Labels: patch
> Fix For: trunk
>
> Attachments: Adding_atlas_conf_path_to_falcon_server_opts-2.patch, 
> Adding_atlas_conf_path_to_falcon_server_opts-3.patch, 
> Adding_atlas_conf_path_to_falcon_server_opts-4.patch, 
> Adding_atlas_conf_path_to_falcon_server_opts.patch
>
>
> Problem:
> Falcon server startup expects "-Datlas.conf=" to read 
> the atlas configuration file (application.properties); if this is not 
> provided, then the falcon hook reads the default values from the 
> atlas-typesystem jar file. 
> *Basically the falcon hook depends on the "atlas.rest.address" config option 
> present in application.properties to post entity creates/updates to atlas. But 
> in the above scenario, when the "atlas.conf" option is not provided at falcon 
> server startup, it reads atlas.rest.address as localhost, which is 
> incorrect, and all subsequent falcon hook updates will fail.*
> This also needs to be documented.
> Falcon application log showing the conf file is read from the jar file, which 
> has default values
> {noformat}
> 2016-02-25 08:29:13,131 INFO  - [main:] ~ Initiated backend operations thread 
> pool of size 2 (Backend:148)
> 2016-02-25 08:29:13,510 INFO  - [main:] ~ Indexes already exist for graph 
> (MetadataMappingService:143)
> 2016-02-25 08:29:13,510 INFO  - [main:] ~ Initialized graph db: 
> titangraph[berkeleyje:/grid/0/hadoop/falcon/data/lineage/graphdb] 
> (MetadataMappingService:88)
> 2016-02-25 08:29:13,511 INFO  - [main:] ~ Init vertex property keys: [name, 
> type, version, timestamp] (MetadataMappingService:91)
> 2016-02-25 08:29:13,511 INFO  - [main:] ~ Init edge property keys: [name] 
> (MetadataMappingService:94)
> 2016-02-25 08:29:13,515 INFO  - [main:] ~ Service initialized: 
> org.apache.falcon.metadata.MetadataMappingService (ServiceInitializer:52)
> 2016-02-25 08:29:13,517 INFO  - [main:] ~ Initializing service: 
> org.apache.falcon.atlas.service.AtlasService (ServiceInitializer:45)
> 2016-02-25 08:29:13,522 INFO  - [main:] ~ Loading application.properties from 
> jar:file:/grid/0/hdp/2.3.99.0-196/falcon/webapp/falcon/WEB-INF/lib/atlas-typesystem-0.6.0.2.3.99.0-196.jar!/application.properties
>  (ApplicationProperties:62)
> 2016-02-25 08:29:13,539 INFO  - [main:] ~ Loading client.properties from 
> file:/grid/0/hdp/2.3.99.0-196/falcon/webapp/falcon/WEB-INF/classes/client.properties
>  (ApplicationProperties:62)
> 2016-02-25 08:29:13,914 INFO  - [main:] ~ Real User: falcon (auth:SIMPLE), is 
> from ticket cache? false (SecureClientUtils:90)
> 2016-02-25 08:29:13,915 INFO  - [main:] ~ doAsUser: falcon 
> (SecureClientUtils:93)
> 2016-02-25 08:29:14,780 INFO  - [main:] ~ Created Atlas Hook for Falcon 
> (FalconHook:144)
> 2016-02-25 08:29:14,780 INFO  - [main:] ~ Service initialized: 
> org.apache.falcon.atlas.service.AtlasService (ServiceInitializer:52)
> 2016-02-25 08:29:14,782 INFO  - [main:] ~ FalconAuditFilter initialization 
> started (FalconAuditFilter:49)
> 2016-02-25 08:29:14,785 INFO  - [main:] ~ FalconAuthenticationFilter 
> initialization started (FalconAuthenticationFilter:83)
> 2016-02-25 08:29:14,802 INFO  - [main:] ~ Falcon is running with 
> authorization enabled (FalconAuthorizationFilter:62)
> {noformat}
> Falcon application log showing conf file read from /etc/atlas/conf when 
> -Datlas.conf is provided
> {noformat}
> 2016-02-25 11:36:52,605 INFO  - [main:] ~ Initiated backend operations thread 
> pool of size 2 (Backend:148)
> 2016-02-25 11:36:53,119 INFO  - [main:] ~ Indexes already exist for graph 
> (MetadataMappingService:143)
> 2016-02-25 11:36:53,120 INFO  - [main:] ~ Initialized graph db: 
> titangraph[berkeleyje:/grid/0/hadoop/falcon/data/lineage/graphdb] 
> (MetadataMappingService:88)
> 2016-02-25 11:36:53,122 INFO  - [main:] ~ Init vertex property keys: [name, 
> type, version, timestamp] (MetadataMappingService:91)
> 2016-02-25 11:36:53,123 INFO  - [main:] ~ Init edge property keys: [name] 
> (MetadataMappingService:94)
> 2016-02-25 11:36:53,128 INFO  - [main:] ~ Service initialized: 
> org.apache.falcon.metadata.MetadataMappingService (ServiceInitializer:52)
> 2016-02-25 11:36:53,130 INFO  - [main:] ~ Initializing service: 
> 

[jira] [Created] (ATLAS-558) Build failure - HiveHookIT.testCTAS

2016-03-09 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-558:
-

 Summary: Build failure - HiveHookIT.testCTAS
 Key: ATLAS-558
 URL: https://issues.apache.org/jira/browse/ATLAS-558
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


https://builds.apache.org/job/apache-atlas-nightly/214/console

{noformat}
testCTAS(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 6.976 sec  <<< 
FAILURE!
java.lang.Exception: Waiting timed out after 2000 msec
at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:528)
at 
org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:443)
at 
org.apache.atlas.hive.hook.HiveHookIT.assertProcessIsRegistered(HiveHookIT.java:390)
at org.apache.atlas.hive.hook.HiveHookIT.testCTAS(HiveHookIT.java:184)


Results :

Failed tests: 
  
HiveHookIT.testCTAS:184->assertProcessIsRegistered:390->assertEntityIsRegistered:443->waitFor:528
 

Tests run: 15, Failures: 1, Errors: 0, Skipped: 0
{noformat}

The entity message was sent by the hook at 2016-03-09 21:04:51,601, but was read 
by the atlas server only at 2016-03-09 21:05:30,628. The test failed waiting for 
the table to be created. The number of hook consumer threads is 1 by default; we 
should try increasing this
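The failure mode is generic: a fixed 2000 ms poll deadline loses whenever consumption lags behind. A sketch of the kind of waitFor helper involved (names and structure are illustrative, not the test's exact code):

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Polls a predicate until it holds or the deadline passes; mirrors the
    // HiveHookIT-style wait that timed out at 2000 msec in this failure.
    static void waitFor(long timeoutMsec, BooleanSupplier predicate) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMsec;
        while (System.currentTimeMillis() < deadline) {
            if (predicate.getAsBoolean()) {
                return; // condition met within the deadline
            }
            Thread.sleep(100); // poll interval
        }
        throw new Exception("Waiting timed out after " + timeoutMsec + " msec");
    }
}
```

When the Kafka consumer is ~40 seconds behind the producer, as in the log above, no reasonable poll interval saves a 2-second deadline; more consumer threads (or a longer test timeout) is the fix.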

{noformat}
2016-03-09 21:04:51,601 DEBUG - [main:] ~ Sending message for topic ATLAS_HOOK: 
{"entities":[{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-7437970183512903","version":0,"typeName":"hive_process"},"typeName":"hive_process","values":{"queryId":"jenkins_20160309210447_4599708b-b9cd-42fc-9c67-3ce0429166de","name":"create
 table table5qkzzepo7s as select * from 
table7uv0dffbbk","startTime":1457557487604,"queryPlan":"{\"STAGE 
PLANS\":{\"Stage-8\":{\"Create Table Operator:\":{\"Create 
Table\":{\"columns:\":[\"id int\",\"name string\",\"dt 
string\"],\"name:\":\"default.table5QkZZepo7s\",\"input 
format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"serde 
name:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"output 
format:\":\"org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat\"}}},\"Stage-7\":{\"Conditional
 Operator\":{}},\"Stage-2\":{\"Stats-Aggr Operator\":{}},\"Stage-1\":{\"Map 
Reduce\":{\"Map Operator 
Tree:\":[{\"TableScan\":{\"alias:\":\"table7uv0dffbbk\",\"children\":{\"Select 
Operator\":{\"expressions:\":\"id (type: int), name (type: string), dt (type: 
string)\",\"outputColumnNames:\":[\"_col0\",\"_col1\",\"_col2\"],\"children\":{\"File
 Output 
Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
 format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}}}]}},\"Stage-0\":{\"Move
 
Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/table5qkzzepo7s\",\"hdfs
 directory:\":\"true\"}}},\"Stage-6\":{\"Move 
Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/.hive-staging_hive_2016-03-09_21-04-47_604_3373253081212415427-1/-ext-10002\",\"hdfs
 directory:\":\"true\"}}},\"Stage-5\":{\"Map Reduce\":{\"Map Operator 
Tree:\":[{\"TableScan\":{\"children\":{\"File Output 
Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
 format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}]}},\"Stage-4\":{\"Move
 
Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/.hive-staging_hive_2016-03-09_21-04-47_604_3373253081212415427-1/-ext-10002\",\"hdfs
 directory:\":\"true\"}}},\"Stage-3\":{\"Map Reduce\":{\"Map Operator 
Tree:\":[{\"TableScan\":{\"children\":{\"File Output 
Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
 format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}]}}},\"STAGE
 DEPENDENCIES\":{\"Stage-8\":{\"DEPENDENT 
STAGES\":\"Stage-0\"},\"Stage-7\":{\"DEPENDENT 
STAGES\":\"Stage-1\",\"CONDITIONAL CHILD TASKS\":\"Stage-4, Stage-3, 
Stage-5\"},\"Stage-2\":{\"DEPENDENT STAGES\":\"Stage-8\"},\"Stage-1\":{\"ROOT 
STAGE\":\"TRUE\"},\"Stage-0\":{\"DEPENDENT 
STAGES\":\"Stage-4\"},\"Stage-6\":{},\"Stage-5\":{},\"Stage-4\":{\"DEPENDENT 

[jira] [Assigned] (ATLAS-558) Build failure - HiveHookIT.testCTAS

2016-03-09 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S reassigned ATLAS-558:
-

Assignee: Shwetha G S

> Build failure - HiveHookIT.testCTAS
> ---
>
> Key: ATLAS-558
> URL: https://issues.apache.org/jira/browse/ATLAS-558
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Attachments: ATLAS-558.patch
>
>
> https://builds.apache.org/job/apache-atlas-nightly/214/console
> {noformat}
> testCTAS(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 6.976 sec  <<< 
> FAILURE!
> java.lang.Exception: Waiting timed out after 2000 msec
>   at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:528)
>   at 
> org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:443)
>   at 
> org.apache.atlas.hive.hook.HiveHookIT.assertProcessIsRegistered(HiveHookIT.java:390)
>   at org.apache.atlas.hive.hook.HiveHookIT.testCTAS(HiveHookIT.java:184)
> Results :
> Failed tests: 
>   
> HiveHookIT.testCTAS:184->assertProcessIsRegistered:390->assertEntityIsRegistered:443->waitFor:528
>  
> Tests run: 15, Failures: 1, Errors: 0, Skipped: 0
> {noformat}
> The entity message was sent by the hook at 2016-03-09 21:04:51,601, but was 
> read by the atlas server only at 2016-03-09 21:05:30,628. The test failed 
> waiting for the table to be created. The number of hook consumer threads is 1 
> by default; we should try increasing this
> {noformat}
> 2016-03-09 21:04:51,601 DEBUG - [main:] ~ Sending message for topic 
> ATLAS_HOOK: 
> {"entities":[{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-7437970183512903","version":0,"typeName":"hive_process"},"typeName":"hive_process","values":{"queryId":"jenkins_20160309210447_4599708b-b9cd-42fc-9c67-3ce0429166de","name":"create
>  table table5qkzzepo7s as select * from 
> table7uv0dffbbk","startTime":1457557487604,"queryPlan":"{\"STAGE 
> PLANS\":{\"Stage-8\":{\"Create Table Operator:\":{\"Create 
> Table\":{\"columns:\":[\"id int\",\"name string\",\"dt 
> string\"],\"name:\":\"default.table5QkZZepo7s\",\"input 
> format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"serde 
> name:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat\"}}},\"Stage-7\":{\"Conditional
>  Operator\":{}},\"Stage-2\":{\"Stats-Aggr Operator\":{}},\"Stage-1\":{\"Map 
> Reduce\":{\"Map Operator 
> Tree:\":[{\"TableScan\":{\"alias:\":\"table7uv0dffbbk\",\"children\":{\"Select
>  Operator\":{\"expressions:\":\"id (type: int), name (type: string), dt 
> (type: 
> string)\",\"outputColumnNames:\":[\"_col0\",\"_col1\",\"_col2\"],\"children\":{\"File
>  Output 
> Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
>  format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}}}]}},\"Stage-0\":{\"Move
>  
> Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/table5qkzzepo7s\",\"hdfs
>  directory:\":\"true\"}}},\"Stage-6\":{\"Move 
> Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/.hive-staging_hive_2016-03-09_21-04-47_604_3373253081212415427-1/-ext-10002\",\"hdfs
>  directory:\":\"true\"}}},\"Stage-5\":{\"Map Reduce\":{\"Map Operator 
> Tree:\":[{\"TableScan\":{\"children\":{\"File Output 
> Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
>  format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}]}},\"Stage-4\":{\"Move
>  
> Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/.hive-staging_hive_2016-03-09_21-04-47_604_3373253081212415427-1/-ext-10002\",\"hdfs
>  directory:\":\"true\"}}},\"Stage-3\":{\"Map Reduce\":{\"Map Operator 
> Tree:\":[{\"TableScan\":{\"children\":{\"File Output 
> Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
>  format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}]}}},\"STAGE
>  DEPENDENCIES\":{\"Stage-8\":{\"DEPENDENT 
> STAGES\":\"Stage-0\"},\"Stage-7\":{\"DEPENDENT 
> 

[jira] [Updated] (ATLAS-558) Build failure - HiveHookIT.testCTAS

2016-03-09 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-558:
--
Attachment: ATLAS-558.patch

Minor patch to increase number of hook consumer threads for tests

> Build failure - HiveHookIT.testCTAS
> ---
>
> Key: ATLAS-558
> URL: https://issues.apache.org/jira/browse/ATLAS-558
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
> Attachments: ATLAS-558.patch
>
>
> https://builds.apache.org/job/apache-atlas-nightly/214/console
> {noformat}
> testCTAS(org.apache.atlas.hive.hook.HiveHookIT)  Time elapsed: 6.976 sec  <<< 
> FAILURE!
> java.lang.Exception: Waiting timed out after 2000 msec
>   at org.apache.atlas.hive.hook.HiveHookIT.waitFor(HiveHookIT.java:528)
>   at 
> org.apache.atlas.hive.hook.HiveHookIT.assertEntityIsRegistered(HiveHookIT.java:443)
>   at 
> org.apache.atlas.hive.hook.HiveHookIT.assertProcessIsRegistered(HiveHookIT.java:390)
>   at org.apache.atlas.hive.hook.HiveHookIT.testCTAS(HiveHookIT.java:184)
> Results :
> Failed tests: 
>   
> HiveHookIT.testCTAS:184->assertProcessIsRegistered:390->assertEntityIsRegistered:443->waitFor:528
>  
> Tests run: 15, Failures: 1, Errors: 0, Skipped: 0
> {noformat}
> The entity message was sent at 2016-03-09 21:04:51,601 by the hook, but was 
> only read at 2016-03-09 21:05:30,628 by the Atlas server. The test timed out 
> waiting for the table to be registered. The number of hook consumer threads 
> is 1 by default; we should try increasing it
> {noformat}
> 2016-03-09 21:04:51,601 DEBUG - [main:] ~ Sending message for topic 
> ATLAS_HOOK: 
> {"entities":[{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference","id":{"jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id","id":"-7437970183512903","version":0,"typeName":"hive_process"},"typeName":"hive_process","values":{"queryId":"jenkins_20160309210447_4599708b-b9cd-42fc-9c67-3ce0429166de","name":"create
>  table table5qkzzepo7s as select * from 
> table7uv0dffbbk","startTime":1457557487604,"queryPlan":"{\"STAGE 
> PLANS\":{\"Stage-8\":{\"Create Table Operator:\":{\"Create 
> Table\":{\"columns:\":[\"id int\",\"name string\",\"dt 
> string\"],\"name:\":\"default.table5QkZZepo7s\",\"input 
> format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"serde 
> name:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat\"}}},\"Stage-7\":{\"Conditional
>  Operator\":{}},\"Stage-2\":{\"Stats-Aggr Operator\":{}},\"Stage-1\":{\"Map 
> Reduce\":{\"Map Operator 
> Tree:\":[{\"TableScan\":{\"alias:\":\"table7uv0dffbbk\",\"children\":{\"Select
>  Operator\":{\"expressions:\":\"id (type: int), name (type: string), dt 
> (type: 
> string)\",\"outputColumnNames:\":[\"_col0\",\"_col1\",\"_col2\"],\"children\":{\"File
>  Output 
> Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
>  format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}}}]}},\"Stage-0\":{\"Move
>  
> Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/table5qkzzepo7s\",\"hdfs
>  directory:\":\"true\"}}},\"Stage-6\":{\"Move 
> Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/.hive-staging_hive_2016-03-09_21-04-47_604_3373253081212415427-1/-ext-10002\",\"hdfs
>  directory:\":\"true\"}}},\"Stage-5\":{\"Map Reduce\":{\"Map Operator 
> Tree:\":[{\"TableScan\":{\"children\":{\"File Output 
> Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
>  format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}]}},\"Stage-4\":{\"Move
>  
> Operator\":{\"files:\":{\"destination:\":\"file:/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/target/metastore/.hive-staging_hive_2016-03-09_21-04-47_604_3373253081212415427-1/-ext-10002\",\"hdfs
>  directory:\":\"true\"}}},\"Stage-3\":{\"Map Reduce\":{\"Map Operator 
> Tree:\":[{\"TableScan\":{\"children\":{\"File Output 
> Operator\":{\"compressed:\":\"false\",\"table:\":{\"serde:\":\"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\",\"name:\":\"default.table5QkZZepo7s\",\"input
>  format:\":\"org.apache.hadoop.mapred.TextInputFormat\",\"output 
> format:\":\"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\"}]}}},\"STAGE
>  DEPENDENCIES\":{\"Stage-8\":{\"DEPENDENT 
> STAGES\":\"Stage-0\"},\"Stage-7\":{\"DEPENDENT 
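The attached minor patch raises the consumer-thread count for tests; in a deployment the same tuning is done through configuration. A hedged sketch of the atlas-application.properties entry — the property name is an assumption, so verify it against your Atlas release:

```properties
# Assumed property name - verify against your Atlas release.
# Number of threads consuming hook messages from the ATLAS_HOOK topic (default: 1).
atlas.notification.hook.numthreads=5
```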

[jira] [Updated] (ATLAS-573) Inherited attributes disappear from entities after server restart

2016-03-18 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-573:
--
Priority: Blocker  (was: Major)

> Inherited attributes disappear from entities after server restart
> -
>
> Key: ATLAS-573
> URL: https://issues.apache.org/jira/browse/ATLAS-573
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating, 0.7-incubating
>Reporter: David Kantor
>Assignee: David Kantor
>Priority: Blocker
>
> After server restart, attributes that are inherited from a superclass are not 
> included in the representation of an entity when retrieved from the 
> repository.  It appears to be an issue with how the type system is loaded 
> from the repository at server startup (GraphBackedTypeStore.restore(), such 
> that the field mappings for subclasses do not include the attributes from 
> superclasses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ATLAS-575) jetty-maven-plugin fails with ShutdownMonitorThread already started

2016-03-19 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-575:
-

 Summary: jetty-maven-plugin fails with ShutdownMonitorThread 
already started
 Key: ATLAS-575
 URL: https://issues.apache.org/jira/browse/ATLAS-575
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


{noformat}
[INFO] --- jetty-maven-plugin:9.2.12.v20150709:deploy-war (start-jetty) @ 
falcon-bridge ---
[INFO] Configuring Jetty for project: Apache Atlas Falcon Bridge
[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Atlas UI  SUCCESS [01:04 min]
[INFO] Apache Atlas Web Application ... SUCCESS [03:13 min]
[INFO] Apache Atlas Documentation . SUCCESS [  4.433 s]
[INFO] Apache Atlas Hive Bridge ... SUCCESS [01:31 min]
[INFO] Apache Atlas Falcon Bridge . FAILURE [  3.805 s]
[INFO] Apache Atlas Sqoop Bridge .. SKIPPED
[INFO] Apache Atlas Storm Bridge .. SKIPPED
[INFO] Apache Atlas Distribution .. SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 05:58 min
[INFO] Finished at: 2016-03-18T12:42:32+05:30
[INFO] Final Memory: 188M/687M
[INFO] 
[ERROR] Failed to execute goal 
org.eclipse.jetty:jetty-maven-plugin:9.2.12.v20150709:deploy-war (start-jetty) 
on project falcon-bridge: Failure: ShutdownMonitorThread already started -> 
[Help 1]
{noformat}





[jira] [Commented] (ATLAS-523) Support alter view

2016-03-13 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192743#comment-15192743
 ] 

Shwetha G S commented on ATLAS-523:
---

Minor comment. +1 otherwise. You can fix it and commit

{code}
+//Check if properties dont exist
+if (parameters != null) {
+for (String propKey : expectedProps.keySet()) {
+Assert.assertNull(parameters.get(propKey));
+}
+}
{code}
Should be assertFalse(parameters.containsKey(propKey))?
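The distinction matters because a HashMap happily stores null values: assertNull(parameters.get(propKey)) passes both when the key is absent and when it is present with a null value, whereas containsKey tells the two apart. A small illustrative sketch (class and method names are mine, not from the patch):

```java
import java.util.HashMap;
import java.util.Map;

// Shows why assertNull(map.get(key)) cannot distinguish a missing key from a
// key that is present with a null value, while containsKey(key) can.
public class NullVsAbsent {

    // True only when the key is genuinely absent, not merely mapped to null.
    public static boolean isAbsent(Map<String, String> map, String key) {
        return !map.containsKey(key);
    }

    public static void main(String[] args) {
        Map<String, String> parameters = new HashMap<>();
        parameters.put("present-but-null", null);

        // Both lookups return null, so an assertNull-style check passes...
        System.out.println(parameters.get("present-but-null")); // null
        System.out.println(parameters.get("really-absent"));    // null

        // ...but only containsKey distinguishes the two cases.
        System.out.println(isAbsent(parameters, "present-but-null")); // false
        System.out.println(isAbsent(parameters, "really-absent"));    // true
    }
}
```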

> Support alter view
> --
>
> Key: ATLAS-523
> URL: https://issues.apache.org/jira/browse/ATLAS-523
> Project: Atlas
>  Issue Type: Sub-task
>Affects Versions: 0.7-incubating
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-523.patch
>
>
> support alter view as select , drop, properties - 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/Drop/AlterView





[jira] [Updated] (ATLAS-539) Store for entity audit events

2016-03-14 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-539:
--
Attachment: ATLAS-539-v3.patch

Addressed review comments from reviewboard

> Store for entity audit events
> -
>
> Key: ATLAS-539
> URL: https://issues.apache.org/jira/browse/ATLAS-539
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-539-v2.patch, ATLAS-539-v3.patch, ATLAS-539.patch
>
>
> We need to store entity update events in some store. The supported search 
> should return all events for a given entity id within some time range.
> Two choices are:
> 1. Existing graph DB - we can create a vertex for every update, with 
> properties for entity id, timestamp, action and details. This creates 
> disjoint vertices; a direct Gremlin search is enough to retrieve all events 
> for the entity.
> Pros - we already have the graph configuration and utilities to store to and 
> get from the graph
> Cons - it creates extra data and doesn't fit the graph model
> 2. HBase - store events with key = entity id + timestamp, and columns for 
> action and details. A table scan supports the required search.
> Pros - fits the data model
> Cons - we will need configuration and code to read from and write to HBase
> In either case, we should expose an interface so that alternative 
> implementations can be added
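Option 2's row-key scheme can be sketched in plain Java; the class and key format below are illustrative assumptions, not the actual patch. Inverting the timestamp makes lexicographic key order equal reverse chronological order, so a prefix scan on the entity id returns the newest events first:

```java
// Sketch of the proposed HBase row-key layout: entity id + timestamp.
// Class name and key format are illustrative, not from the Atlas patch.
public class AuditRowKey {

    // Invert the timestamp so that a plain forward scan returns newest first.
    static long invert(long timestamp) {
        return Long.MAX_VALUE - timestamp;
    }

    // Row key: "<entityId>:<inverted timestamp, zero-padded to 19 digits>" so
    // keys for one entity sort together and byte order matches reverse time.
    public static String rowKey(String entityId, long timestamp) {
        return String.format("%s:%019d", entityId, invert(timestamp));
    }

    public static void main(String[] args) {
        String newer = rowKey("entity-1", 2000L);
        String older = rowKey("entity-1", 1000L);
        // Newer events sort before older ones for the same entity.
        System.out.println(newer.compareTo(older) < 0); // true
    }
}
```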





[jira] [Commented] (ATLAS-539) Store for entity audit events

2016-03-14 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193110#comment-15193110
 ] 

Shwetha G S commented on ATLAS-539:
---

Yes, the events will be stored synchronously in create/update entity APIs.

> Store for entity audit events
> -
>
> Key: ATLAS-539
> URL: https://issues.apache.org/jira/browse/ATLAS-539
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-539-v2.patch, ATLAS-539.patch
>
>
> We need to store entity update events in some store. The supported search 
> should return all events for a given entity id within some time range.
> Two choices are:
> 1. Existing graph DB - we can create a vertex for every update, with 
> properties for entity id, timestamp, action and details. This creates 
> disjoint vertices; a direct Gremlin search is enough to retrieve all events 
> for the entity.
> Pros - we already have the graph configuration and utilities to store to and 
> get from the graph
> Cons - it creates extra data and doesn't fit the graph model
> 2. HBase - store events with key = entity id + timestamp, and columns for 
> action and details. A table scan supports the required search.
> Pros - fits the data model
> Cons - we will need configuration and code to read from and write to HBase
> In either case, we should expose an interface so that alternative 
> implementations can be added





[jira] [Commented] (ATLAS-555) Tag creation from UI fails due to missing description attribute

2016-03-08 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15186568#comment-15186568
 ] 

Shwetha G S commented on ATLAS-555:
---

We should fix this issue. But the UI should use the Atlas libraries for type and 
entity serialisation and deserialisation.

> Tag creation from UI fails due to missing description attribute
> ---
>
> Key: ATLAS-555
> URL: https://issues.apache.org/jira/browse/ATLAS-555
> Project: Atlas
>  Issue Type: Bug
>Reporter: Hemanth Yamijala
>Priority: Blocker
> Attachments: application.log
>
>
> I compiled Atlas from the master branch (git id: 
> 5b748aa47b970298a3c6b0c03495b3299079cd3e) and deployed. Ran hive-import 
> (which worked fine). Then tried to create a trait from the UI. This failed. 
> Relevant part of the stack trace: 
> {code} 
> Caused by: org.json4s.package$MappingException: No usable value for 
> typeDescription Did not find value which can be converted into 
> java.lang.String at org.json4s.reflect.package$.fail(package.scala:96) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:462)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:482)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:482)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:105) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$instantiate(Extraction.scala:470)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:515)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:512)
>  at 
> org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:524)
>  at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:512) 
> at org.json4s.Extraction$.extract(Extraction.scala:351) at 
> org.json4s.Extraction$CollectionBuilder$$anonfun$6.apply(Extraction.scala:360)
>  at 
> org.json4s.Extraction$CollectionBuilder$$anonfun$6.apply(Extraction.scala:360)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at scala.collection.immutable.List.foreach(List.scala:318) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:105) at 
> org.json4s.Extraction$CollectionBuilder.mkCollection(Extraction.scala:360) at 
> org.json4s.Extraction$CollectionBuilder.result(Extraction.scala:384) at 
> org.json4s.Extraction$.extract(Extraction.scala:339) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:450)
>  ... 72 more Caused by: org.json4s.package$MappingException: Did not find 
> value which can be converted into java.lang.String at 
> org.json4s.Extraction$.convert(Extraction.scala:603) at 
> org.json4s.Extraction$.extract(Extraction.scala:350) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:450)
>  ... 97 more 
> {code} 
> (Will attach entire stack trace separately)





[jira] [Created] (ATLAS-556) Hive hook fails for select without table

2016-03-08 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-556:
-

 Summary: Hive hook fails for select without table
 Key: ATLAS-556
 URL: https://issues.apache.org/jira/browse/ATLAS-556
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


Command: select 42

{noformat}
2016-03-09 11:30:05,669 WARN  - [main:] ~ Failed to get database 
_dummy_database, returning NoSuchObjectException (ObjectStore:568)
FAILED: Hive Internal Error: java.lang.NullPointerException(null)
java.lang.NullPointerException
at 
org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.java:325)
at 
org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:387)
at org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:224)
at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:182)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1520)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.atlas.hive.hook.HiveHookIT.runCommand(HiveHookIT.java:75)
at 
org.apache.atlas.hive.hook.HiveHookIT.testSelect2(HiveHookIT.java:265)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:673)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:842)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1166)
at 
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.runWorkers(TestRunner.java:1178)
at org.testng.TestRunner.privateRun(TestRunner.java:757)
at org.testng.TestRunner.run(TestRunner.java:608)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1158)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1083)
at org.testng.TestNG.run(TestNG.java:999)
at 
org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:115)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeSingleClass(TestNGDirectoryTestSuite.java:129)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:113)
at 
org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:111)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{noformat}
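The NPE comes from registering lineage for a query that touches no tables. A hedged sketch of the kind of guard that would skip such queries — the method and types are illustrative, not the actual HiveHook fix:

```java
import java.util.Collections;
import java.util.Set;

// Illustrative guard: a query such as "select 42" reads and writes no tables,
// so there is no lineage to register. Names are assumptions, not the real fix.
public class LineageGuard {

    public static boolean shouldRegister(Set<String> inputs, Set<String> outputs) {
        // Defensive null checks: the hook saw nulls for table-less queries.
        if (inputs == null || outputs == null) {
            return false;
        }
        // Register only when the query actually touches at least one table.
        return !inputs.isEmpty() || !outputs.isEmpty();
    }

    public static void main(String[] args) {
        // "select 42" case: no inputs, no outputs.
        System.out.println(shouldRegister(Collections.emptySet(),
                                          Collections.emptySet())); // false
    }
}
```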





[jira] [Updated] (ATLAS-556) Hive hook fails for select without table

2016-03-08 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-556:
--
Description: 
Reported from: 
https://community.hortonworks.com/questions/21766/hive-queries-without-from-clause-seem-to-fail-when.html

Command: select 42

{noformat}
2016-03-09 11:30:05,669 WARN  - [main:] ~ Failed to get database 
_dummy_database, returning NoSuchObjectException (ObjectStore:568)
FAILED: Hive Internal Error: java.lang.NullPointerException(null)
java.lang.NullPointerException
at 
org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.java:325)
at 
org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:387)
at org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:224)
at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:182)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1520)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.atlas.hive.hook.HiveHookIT.runCommand(HiveHookIT.java:75)
at 
org.apache.atlas.hive.hook.HiveHookIT.testSelect2(HiveHookIT.java:265)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:673)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:842)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1166)
at 
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.runWorkers(TestRunner.java:1178)
at org.testng.TestRunner.privateRun(TestRunner.java:757)
at org.testng.TestRunner.run(TestRunner.java:608)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1158)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1083)
at org.testng.TestNG.run(TestNG.java:999)
at 
org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:115)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeSingleClass(TestNGDirectoryTestSuite.java:129)
at 
org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:113)
at 
org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:111)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{noformat}

  was:
Command: select 42


[jira] [Commented] (ATLAS-555) Tag creation from UI fails due to missing description attribute

2016-03-08 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15186599#comment-15186599
 ] 

Shwetha G S commented on ATLAS-555:
---

As I said earlier, we need to fix the issue; there is no question about that. I am 
not sure about the UI technology.

Everywhere in our tests, we use our type serdes for serialisation and 
deserialisation; we don't have any tests that use the JSON directly. There is a 
test that creates types without a description, but as long as it goes through the 
library for serialisation, it works fine. In general, our JSON is verbose and 
expects even empty fields; there is another JIRA to track that and we should 
revisit it

> Tag creation from UI fails due to missing description attribute
> ---
>
> Key: ATLAS-555
> URL: https://issues.apache.org/jira/browse/ATLAS-555
> Project: Atlas
>  Issue Type: Bug
>Reporter: Hemanth Yamijala
>Priority: Blocker
> Attachments: application.log
>
>
> I compiled Atlas from the master branch (git id: 
> 5b748aa47b970298a3c6b0c03495b3299079cd3e) and deployed. Ran hive-import 
> (which worked fine). Then tried to create a trait from the UI. This failed. 
> Relevant part of the stack trace: 
> {code} 
> Caused by: org.json4s.package$MappingException: No usable value for 
> typeDescription Did not find value which can be converted into 
> java.lang.String at org.json4s.reflect.package$.fail(package.scala:96) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:462)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:482)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:482)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:105) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$instantiate(Extraction.scala:470)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:515)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:512)
>  at 
> org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:524)
>  at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:512) 
> at org.json4s.Extraction$.extract(Extraction.scala:351) at 
> org.json4s.Extraction$CollectionBuilder$$anonfun$6.apply(Extraction.scala:360)
>  at 
> org.json4s.Extraction$CollectionBuilder$$anonfun$6.apply(Extraction.scala:360)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at scala.collection.immutable.List.foreach(List.scala:318) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:105) at 
> org.json4s.Extraction$CollectionBuilder.mkCollection(Extraction.scala:360) at 
> org.json4s.Extraction$CollectionBuilder.result(Extraction.scala:384) at 
> org.json4s.Extraction$.extract(Extraction.scala:339) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:450)
>  ... 72 more Caused by: org.json4s.package$MappingException: Did not find 
> value which can be converted into java.lang.String at 
> org.json4s.Extraction$.convert(Extraction.scala:603) at 
> org.json4s.Extraction$.extract(Extraction.scala:350) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:450)
>  ... 97 more 
> {code} 
> (Will attach entire stack trace separately)





[jira] [Commented] (ATLAS-575) jetty-maven-plugin fails with ShutdownMonitorThread already started

2016-03-19 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201100#comment-15201100
 ] 

Shwetha G S commented on ATLAS-575:
---

https://bugs.eclipse.org/bugs/show_bug.cgi?id=412637 - adding stopWait should 
fix the issue
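Per that Eclipse bug, the jetty-maven-plugin needs a stop port/key plus stopWait so a second deploy-war execution in the same reactor waits for the previous instance's shutdown monitor. A sketch of the plugin configuration (the port and key values are illustrative):

```xml
<plugin>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <version>9.2.12.v20150709</version>
  <configuration>
    <stopKey>atlas-stop</stopKey>
    <stopPort>41001</stopPort>
    <!-- Seconds to wait for the previous Jetty instance to confirm shutdown -->
    <stopWait>10</stopWait>
  </configuration>
</plugin>
```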

> jetty-maven-plugin fails with ShutdownMonitorThread already started
> ---
>
> Key: ATLAS-575
> URL: https://issues.apache.org/jira/browse/ATLAS-575
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>
> {noformat}
> [INFO] --- jetty-maven-plugin:9.2.12.v20150709:deploy-war (start-jetty) @ 
> falcon-bridge ---
> [INFO] Configuring Jetty for project: Apache Atlas Falcon Bridge
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Atlas UI  SUCCESS [01:04 
> min]
> [INFO] Apache Atlas Web Application ... SUCCESS [03:13 
> min]
> [INFO] Apache Atlas Documentation . SUCCESS [  4.433 
> s]
> [INFO] Apache Atlas Hive Bridge ... SUCCESS [01:31 
> min]
> [INFO] Apache Atlas Falcon Bridge . FAILURE [  3.805 
> s]
> [INFO] Apache Atlas Sqoop Bridge .. SKIPPED
> [INFO] Apache Atlas Storm Bridge .. SKIPPED
> [INFO] Apache Atlas Distribution .. SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 05:58 min
> [INFO] Finished at: 2016-03-18T12:42:32+05:30
> [INFO] Final Memory: 188M/687M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.eclipse.jetty:jetty-maven-plugin:9.2.12.v20150709:deploy-war 
> (start-jetty) on project falcon-bridge: Failure: ShutdownMonitorThread 
> already started -> [Help 1]
> {noformat}





[jira] [Updated] (ATLAS-539) Store for entity audit events

2016-03-16 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-539:
--
Attachment: ATLAS-539-v4.patch

Latest patch from reviewboard

> Store for entity audit events
> -
>
> Key: ATLAS-539
> URL: https://issues.apache.org/jira/browse/ATLAS-539
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-539-v2.patch, ATLAS-539-v3.patch, 
> ATLAS-539-v4.patch, ATLAS-539.patch
>
>
> We need to store entity update events in some store. The supported search 
> should return all events for a given entity id within some time range.
> Two choices are:
> 1. Existing graph DB - we can create a vertex for every update, with 
> properties for entity id, timestamp, action and details. This creates 
> disjoint vertices; a direct Gremlin search is enough to retrieve all events 
> for the entity.
> Pros - we already have the graph configuration and utilities to store to and 
> get from the graph
> Cons - it creates extra data and doesn't fit the graph model
> 2. HBase - store events with key = entity id + timestamp, and columns for 
> action and details. A table scan supports the required search.
> Pros - fits the data model
> Cons - we will need configuration and code to read from and write to HBase
> In either case, we should expose an interface so that alternative 
> implementations can be added





[jira] [Commented] (ATLAS-551) alter table rename should modify the list of columns and storage descriptor

2016-03-16 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197029#comment-15197029
 ] 

Shwetha G S commented on ATLAS-551:
---

The rename command fires two messages - a full update request on the table, and a 
partial update on the table (containing just the table name change). The full 
update request contains the table and column entities created from the new table 
definition, so the first message already carries the columns and storage 
descriptor with the new qualified name

> alter table rename should modify the list of columns and storage descriptor
> ---
>
> Key: ATLAS-551
> URL: https://issues.apache.org/jira/browse/ATLAS-551
> Project: Atlas
>  Issue Type: Sub-task
>Affects Versions: 0.7-incubating
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-551.1.patch, ATLAS-551.patch
>
>
> Need to modify columns, sd during entity partial updates since column 
> qualified name and sd qualified name will change when the table is renamed.





[jira] [Commented] (ATLAS-555) Tag creation from UI fails due to missing description attribute

2016-03-10 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190469#comment-15190469
 ] 

Shwetha G S commented on ATLAS-555:
---

Let's commit this to unblock the UI, and target ATLAS-559 for the 0.7 release

> Tag creation from UI fails due to missing description attribute
> ---
>
> Key: ATLAS-555
> URL: https://issues.apache.org/jira/browse/ATLAS-555
> Project: Atlas
>  Issue Type: Bug
>Reporter: Hemanth Yamijala
>Assignee: Neeru Gupta
>Priority: Blocker
> Attachments: application.log, rb44588.patch
>
>
> I compiled Atlas from the master branch (git id: 
> 5b748aa47b970298a3c6b0c03495b3299079cd3e) and deployed. Ran hive-import 
> (which worked fine). Then tried to create a trait from the UI. This failed. 
> Relevant part of the stack trace: 
> {code} 
> Caused by: org.json4s.package$MappingException: No usable value for 
> typeDescription Did not find value which can be converted into 
> java.lang.String at org.json4s.reflect.package$.fail(package.scala:96) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:462)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:482)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:482)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:105) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$instantiate(Extraction.scala:470)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:515)
>  at 
> org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:512)
>  at 
> org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:524)
>  at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:512) 
> at org.json4s.Extraction$.extract(Extraction.scala:351) at 
> org.json4s.Extraction$CollectionBuilder$$anonfun$6.apply(Extraction.scala:360)
>  at 
> org.json4s.Extraction$CollectionBuilder$$anonfun$6.apply(Extraction.scala:360)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>  at scala.collection.immutable.List.foreach(List.scala:318) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:105) at 
> org.json4s.Extraction$CollectionBuilder.mkCollection(Extraction.scala:360) at 
> org.json4s.Extraction$CollectionBuilder.result(Extraction.scala:384) at 
> org.json4s.Extraction$.extract(Extraction.scala:339) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:450)
>  ... 72 more Caused by: org.json4s.package$MappingException: Did not find 
> value which can be converted into java.lang.String at 
> org.json4s.Extraction$.convert(Extraction.scala:603) at 
> org.json4s.Extraction$.extract(Extraction.scala:350) at 
> org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:450)
>  ... 97 more 
> {code} 
> (Will attach entire stack trace separately)





[jira] [Created] (ATLAS-560) HiveHook calls to metastore

2016-03-10 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-560:
-

 Summary: HiveHook calls to metastore
 Key: ATLAS-560
 URL: https://issues.apache.org/jira/browse/ATLAS-560
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


HiveHook currently makes calls to the metastore to load the databases and tables 
involved in the command. Hive already loads these for its own execution, but 
since the data it loads is incomplete, the hook loads it again. The metastore 
calls are expensive - they add to hook execution delay and put more load on the 
metastore. We should avoid these extra calls by making the earlier calls load 
all the data, so that extra calls are not required
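Until the earlier calls can be made to load everything, one way to cut repeat round trips is memoizing lookups within a hook invocation. A hypothetical sketch (MetastoreCache and the loader function are illustrative stand-ins, not Atlas code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical memoizing lookup: cache metastore results for the duration
// of one hook invocation so each database/table is fetched at most once.
// The loader function stands in for the real metastore call.
public class MetastoreCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> loader;

    MetastoreCache(Function<String, String> loader) { this.loader = loader; }

    String get(String qualifiedName) {
        // computeIfAbsent invokes the loader only on the first request
        return cache.computeIfAbsent(qualifiedName, loader);
    }
}
```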





[jira] [Updated] (ATLAS-577) Integrate entity audit with DefaultMetadataService

2016-03-31 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-577:
--
Attachment: ATLAS-577-final.patch

Final patch for commit. Thanks Hemanth for reviewing

> Integrate entity audit with DefaultMetadataService
> --
>
> Key: ATLAS-577
> URL: https://issues.apache.org/jira/browse/ATLAS-577
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-577-final.patch, ATLAS-577.patch
>
>






[jira] [Resolved] (ATLAS-72) Use atlas.rest.address on server

2016-03-31 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-72?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S resolved ATLAS-72.
--
Resolution: Invalid

With HA, atlas.rest.address will be a proxy

> Use atlas.rest.address on server
> 
>
> Key: ATLAS-72
> URL: https://issues.apache.org/jira/browse/ATLAS-72
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Shwetha G S
> Fix For: 0.7-incubating
>
>
> On the server, there are separate configs for whether to enable SSL, and for 
> the port. Instead, it could use the client-side config atlas.rest.address to 
> derive the SSL setting, port and host to bind to





[jira] [Updated] (ATLAS-72) Use atlas.rest.address on server

2016-03-31 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-72?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-72:
-
Fix Version/s: 0.7-incubating

> Use atlas.rest.address on server
> 
>
> Key: ATLAS-72
> URL: https://issues.apache.org/jira/browse/ATLAS-72
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Shwetha G S
> Fix For: 0.7-incubating
>
>
> On the server, there are separate configs for whether to enable SSL, and for 
> the port. Instead, it could use the client-side config atlas.rest.address to 
> derive the SSL setting, port and host to bind to





[jira] [Resolved] (ATLAS-52) Merge client.properties and application.properties

2016-03-31 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-52?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S resolved ATLAS-52.
--
Resolution: Fixed

client.properties is removed as part of ATLAS-483

> Merge client.properties and application.properties
> --
>
> Key: ATLAS-52
> URL: https://issues.apache.org/jira/browse/ATLAS-52
> Project: Atlas
>  Issue Type: Task
>Reporter: Shwetha G S
>






[jira] [Created] (ATLAS-608) Hook modules depend on atlas-server-api

2016-03-31 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-608:
-

 Summary: Hook modules depend on atlas-server-api
 Key: ATLAS-608
 URL: https://issues.apache.org/jira/browse/ATLAS-608
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S
 Fix For: 0.7-incubating


All the hook modules currently depend on the atlas-server-api module, which was 
created to contain only server-side logic. We should decouple the client and 
server side modules cleanly





[jira] [Created] (ATLAS-647) entityText property should be prefixed with __

2016-04-06 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-647:
-

 Summary: entityText property should be prefixed with __
 Key: ATLAS-647
 URL: https://issues.apache.org/jira/browse/ATLAS-647
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S
 Fix For: 0.7-incubating


All internal vertex properties in the repository are prefixed with __ to avoid 
conflicts with user-defined attribute names. It looks like entityText was 
missed
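The prefixing convention can be sketched as below; this is an illustrative snippet (InternalProperty is a hypothetical name, not the Atlas class) showing why the `__` prefix keeps internal properties from colliding with user attributes:

```java
// Sketch of the prefixing convention: internal vertex properties carry a
// "__" prefix so they can never collide with user-defined attribute names.
public class InternalProperty {
    static final String PREFIX = "__";

    static String internal(String name) { return PREFIX + name; }

    static boolean isInternal(String name) { return name.startsWith(PREFIX); }
}
```

Under this convention, `entityText` would be stored as `__entityText`, like the other internal properties.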





[jira] [Created] (ATLAS-645) GraphBackedMetadataRepositoryDeleteEntitiesTest.testDisconnectMapReferenceFromClassType results in stack overflow

2016-04-06 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-645:
-

 Summary: 
GraphBackedMetadataRepositoryDeleteEntitiesTest.testDisconnectMapReferenceFromClassType
 results in stack overflow
 Key: ATLAS-645
 URL: https://issues.apache.org/jira/browse/ATLAS-645
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


{noformat}
SLF4J: Failed toString() invocation on an object of type 
[org.apache.atlas.typesystem.persistence.ReferenceableInstance]
java.lang.StackOverflowError
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4556)
at java.util.regex.Pattern$Branch.match(Pattern.java:4502)
at java.util.regex.Pattern$BranchConn.match(Pattern.java:4466)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4615)
at java.util.regex.Pattern$Curly.match0(Pattern.java:4177)
at java.util.regex.Pattern$Curly.match(Pattern.java:4132)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4556)
at java.util.regex.Pattern$Branch.match(Pattern.java:4502)
at java.util.regex.Pattern$Branch.match(Pattern.java:4500)
at java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3715)
at java.util.regex.Pattern$Start.match(Pattern.java:3408)
at java.util.regex.Matcher.search(Matcher.java:1199)
at java.util.regex.Matcher.find(Matcher.java:618)
at java.util.Formatter.parse(Formatter.java:2517)
at java.util.Formatter.format(Formatter.java:2469)
at java.util.Formatter.format(Formatter.java:2423)
at java.lang.String.format(String.java:2792)
at org.apache.atlas.typesystem.persistence.Id.toString(Id.java:98)
at 
org.apache.atlas.typesystem.types.FieldMapping.output(FieldMapping.java:114)
at 
org.apache.atlas.typesystem.persistence.ReferenceableInstance.toString(ReferenceableInstance.java:92)
{noformat}
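The overflow happens while toString() recurses through FieldMapping.output, which suggests mutually-referencing instances (as in the Department/Person model). A sketch of a cycle guard under that assumption - Node and its fields are illustrative, not the Atlas types:

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Sketch of a cycle guard for toString() over mutually-referencing
// instances, the likely cause of the StackOverflowError: track already
// visited objects by identity and stop instead of recursing forever.
public class Node {
    String name;
    Node ref;

    String print(Set<Node> seen) {
        if (!seen.add(this)) return name + "(visited)";  // cycle detected
        return name + (ref == null ? "" : "->" + ref.print(seen));
    }

    @Override public String toString() {
        // identity-based set: two distinct nodes that are equals() must
        // still be tracked separately
        return print(Collections.newSetFromMap(new IdentityHashMap<>()));
    }
}
```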





[jira] [Updated] (ATLAS-543) Entity Instance requests should not require ID element for new Entities

2016-04-13 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-543:
--
Assignee: Harish Jaiprakash

> Entity Instance requests should not require ID element for new Entities
> ---
>
> Key: ATLAS-543
> URL: https://issues.apache.org/jira/browse/ATLAS-543
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.6-incubating
> Environment: Ubuntu 14, OpenJDK 64-Bit 1.7.0_95
>Reporter: Joseph Niemiec
>Assignee: Harish Jaiprakash
>Priority: Minor
>
> When using the REST API to create a new Entity Instance of a given Type, all 
> ID elements for the classes and structs are required; requests without these 
> elements fail, despite the fact that a random GUID will be assigned at 
> instantiation time. 
> #
> Example 1 (Good Entity Posts correctly) 
> #
> {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Reference",
>   "id": {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Id",
>   "id": "-984848",
>   "version": 0,
>   "typeName": "HDFS_RESOURCE"
>   },
>   "typeName": "HDFS_RESOURCE",
>   "values": {
>   "name": "Cluser_A_DevFolder_A",
>   "description": "Fully Public Dev Folder",
>   "resource": {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Reference",
>   "id": {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Id",
>   "id": "-2630837415522",
>   "version": 0,
>   "typeName": "HDFS_OBJECT"
>   },
>   "typeName": "HDFS_OBJECT",
>   "values": {
>   "uri": "/user/dev/a",
>   "isDir" : true
>   },
>   "traitNames": [],
>   "traits": {}
>   }
>   },
>   "traitNames": ["Public"],
>   "traits": {
>   "Public": {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Struct",
>   "typeName": "Public",
>   "values": { }
>   }
>   }
> }
> 
> Example #2 Bad Entity that fails.
> 
> {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Reference",
>   "typeName": "HDFS_RESOURCE",
>   "values": {
>   "name": "Cluser_A_DevFolder_A",
>   "description": "Fully Public Dev Folder",
>   "resource": {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Reference",
>   "typeName": "HDFS_OBJECT",
>   "values": {
>   "uri": "/user/dev/a",
>   "isDir" : true
>   },
>   "traitNames": [],
>   "traits": {}
>   }
>   },
>   "traitNames": ["Public"],
>   "traits": {
>   "Public": {
>   "jsonClass": 
> "org.apache.atlas.typesystem.json.InstanceSerialization$_Struct",
>   "typeName": "Public",
>   "values": { }
>   }
>   }
> }
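If clients must supply an id until the server accepts requests without one, they can generate throwaway negative ids like the `-984848` in Example 1. A hypothetical helper (PlaceholderId is an illustrative name, not Atlas code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical client-side helper: generate unique negative placeholder
// ids so callers need not hand-craft the "id" element for new entities.
// A negative id marks the entity as new; the server assigns the real GUID.
public class PlaceholderId {
    private static final AtomicLong SEQ = new AtomicLong(-1);

    static String next() { return Long.toString(SEQ.getAndDecrement()); }
}
```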





[jira] [Updated] (ATLAS-621) Introduce entity state in Id object

2016-04-09 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-621:
--
Attachment: ATLAS-621-v2.patch

Addressed review comments from review board

> Introduce entity state in Id object
> ---
>
> Key: ATLAS-621
> URL: https://issues.apache.org/jira/browse/ATLAS-621
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-621-v2.patch, ATLAS-621.patch
>
>
> Add entity state with ACTIVE and DELETED. The state should be returned in get 
> entity definition and in search results where entity is returned. In entity 
> create, mark the state as ACTIVE





[jira] [Created] (ATLAS-660) Validate that same entity is part of two composite references

2016-04-12 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-660:
-

 Summary: Validate that same entity is part of two composite 
references
 Key: ATLAS-660
 URL: https://issues.apache.org/jira/browse/ATLAS-660
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


Currently, atlas allows an entity to be part of the composite references of two 
parent entities. For example, hive_column is a composite attribute of 
hive_table, but a hive_column entity can't be part of two different hive_table 
entities (by the definition of composite, the child entity's lifecycle depends 
on the parent entity's lifecycle)
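The proposed validation can be sketched as tracking each composite child's owner and rejecting a second, different parent. An illustrative snippet (CompositeOwnership and register are hypothetical names, not Atlas code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed check: record each composite child's owning
// parent and reject registration under a second, different parent.
public class CompositeOwnership {
    private final Map<String, String> ownerOf = new HashMap<>();

    // Returns true if childId is unowned or already owned by parentId,
    // false if it is owned by a different parent (validation failure).
    boolean register(String childId, String parentId) {
        String existing = ownerOf.putIfAbsent(childId, parentId);
        return existing == null || existing.equals(parentId);
    }
}
```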





[jira] [Updated] (ATLAS-660) Validate that same entity is part of two composite references

2016-04-12 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-660:
--
Fix Version/s: 0.7-incubating

> Validate that same entity is part of two composite references
> -
>
> Key: ATLAS-660
> URL: https://issues.apache.org/jira/browse/ATLAS-660
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
> Fix For: 0.7-incubating
>
>
> Currently, atlas allows an entity to be part of the composite references of two 
> parent entities. For example, hive_column is a composite attribute of 
> hive_table, but a hive_column entity can't be part of two different hive_table 
> entities (by the definition of composite, the child entity's lifecycle depends 
> on the parent entity's lifecycle)





[jira] [Updated] (ATLAS-650) Use Apache logo on the webapp

2016-04-11 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-650:
--
Assignee: Jean-Baptiste Onofré

> Use Apache logo on the webapp
> -
>
> Key: ATLAS-650
> URL: https://issues.apache.org/jira/browse/ATLAS-650
> Project: Atlas
>  Issue Type: Wish
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Trivial
> Fix For: 0.7-incubating
>
>
> My Apache member small heart would love to see the Apache logo on the Atlas 
> webapp ;)





[jira] [Updated] (ATLAS-540) API to retrieve entity version events

2016-04-11 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-540:
--
Attachment: ATLAS-540-v4.patch

Final patch for commit

> API to retrieve entity version events
> -
>
> Key: ATLAS-540
> URL: https://issues.apache.org/jira/browse/ATLAS-540
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-540-v2.patch, ATLAS-540-v3.patch, 
> ATLAS-540-v4.patch, ATLAS-540.patch
>
>
> We will need an API that takes an entity id as input and returns the events 
> in decreasing order of timestamp. The API should return n events along with 
> the timestamp of the next event
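The paging contract described in the issue can be sketched as below; EventPager and its signature are illustrative assumptions, not the patch's API:

```java
import java.util.List;

// Sketch of the proposed paging contract: given events sorted
// newest-first, collect up to n starting at startTs into out, and return
// the timestamp to pass as the start of the next call (-1 when exhausted).
public class EventPager {
    static long page(List<Long> eventsDesc, long startTs, int n, List<Long> out) {
        for (long ts : eventsDesc) {
            if (ts > startTs) continue;      // skip events newer than the cursor
            if (out.size() < n) out.add(ts);
            else return ts;                  // first event of the next page
        }
        return -1;                           // no more events
    }
}
```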





[jira] [Commented] (ATLAS-650) Use Apache logo on the webapp

2016-04-11 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234871#comment-15234871
 ] 

Shwetha G S commented on ATLAS-650:
---

Can you add the patch here?

> Use Apache logo on the webapp
> -
>
> Key: ATLAS-650
> URL: https://issues.apache.org/jira/browse/ATLAS-650
> Project: Atlas
>  Issue Type: Wish
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Trivial
> Fix For: 0.7-incubating
>
>
> My Apache member small heart would love to see the Apache logo on the Atlas 
> webapp ;)





[jira] [Commented] (ATLAS-621) Introduce entity state in Id object

2016-04-10 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233967#comment-15233967
 ] 

Shwetha G S commented on ATLAS-621:
---

Sample DSL search response json:
{noformat}
{  
   "query":"Department",
   "dataType":{  
  "superTypes":[  

  ],
  "hierarchicalMetaTypeName":"org.apache.atlas.typesystem.types.ClassType",
  "typeName":"Department",
  "typeDescription":"Department_description",
  "attributeDefinitions":[  
 {  
"name":"name",
"dataTypeName":"string",
"multiplicity":{  
   "lower":1,
   "upper":1,
   "isUnique":false
},
"isComposite":false,
"isUnique":false,
"isIndexable":true,
"reverseAttributeName":null
 },
 {  
"name":"employees",
"dataTypeName":"array",
"multiplicity":{  
   "lower":1,
   "upper":2147483647,
   "isUnique":false
},
"isComposite":true,
"isUnique":false,
"isIndexable":true,
"reverseAttributeName":"department"
 }
  ]
   },
   "rows":[  
  {  
 "$typeName$":"Department",
 "$id$":{  
"id":"c785891b-2b68-4279-8e5f-ead1921ca8b2",
"$typeName$":"Department",
"version":0,
"state":"ACTIVE"
 },
 "employees":[  
{  
   "$typeName$":"Person",
   "$id$":{  
  "id":"40af8bd1-4512-4466-b5ab-b3477de47dbd",
  "$typeName$":"Person",
  "version":0,
  "state":"ACTIVE"
   },
   "manager":{  
  "id":"19d8c253-75bf-4fd9-9f81-5fcba6cc0185",
  "$typeName$":"Manager",
  "version":0,
  "state":"ACTIVE"
   },
   "orgLevel":null,
   "address":{  
  "$typeName$":"Address",
  "city":"Sunnyvale",
  "street":"Stewart Drive"
   },
   "department":{  
  "id":"c785891b-2b68-4279-8e5f-ead1921ca8b2",
  "$typeName$":"Department",
  "version":0,
  "state":"ACTIVE"
   },
   "name":"John",
   "mentor":{  
  "id":"4d1e27d4-9d13-4f61-a9e4-f426ee538c6d",
  "$typeName$":"Person",
  "version":0,
  "state":"ACTIVE"
   }
},
{  
   "$typeName$":"Manager",
   "$id$":{  
  "id":"b1502645-ed1b-4dae-9f24-31264810d78e",
  "$typeName$":"Manager",
  "version":0,
  "state":"ACTIVE"
   },
   "manager":null,
   "orgLevel":null,
   "address":{  
  "$typeName$":"Address",
  "city":"Newtonville",
  "street":"Madison Ave"
   },
   "subordinates":null,
   "department":{  
  "id":"c785891b-2b68-4279-8e5f-ead1921ca8b2",
  "$typeName$":"Department",
  "version":0,
  "state":"ACTIVE"
   },
   "name":"Julius",
   "mentor":null
},
{  
   "$typeName$":"Person",
   "$id$":{  
  "id":"4d1e27d4-9d13-4f61-a9e4-f426ee538c6d",
  "$typeName$":"Person",
  "version":0,
  "state":"ACTIVE"
   },
   "manager":{  
  "id":"19d8c253-75bf-4fd9-9f81-5fcba6cc0185",
  "$typeName$":"Manager",
  "version":0,
  "state":"ACTIVE"
   },
   "orgLevel":null,
   "address":{  
  "$typeName$":"Address",
  "city":"Newton",
  "street":"Ripley St"
   },
   "department":{  
  "id":"c785891b-2b68-4279-8e5f-ead1921ca8b2",
  "$typeName$":"Department",
  "version":0,
  "state":"ACTIVE"
   },
   "name":"Max",
   "mentor":{  
  "id":"b1502645-ed1b-4dae-9f24-31264810d78e",
  "$typeName$":"Person",
  "version":0,
  "state":"ACTIVE"
   }
}
 ],
 "name":"hr"
  }
   ]
}
{noformat}

> Introduce entity state in Id object
> ---
>
> Key: ATLAS-621
> URL: https://issues.apache.org/jira/browse/ATLAS-621
> Project: 

[jira] [Commented] (ATLAS-657) Packaged hbase should be of version hbase.version property

2016-04-10 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233947#comment-15233947
 ] 

Shwetha G S commented on ATLAS-657:
---

atlas_stop.py prints 'stopping master.'. It should say 'stopping hbase 
master.'



> Packaged hbase should be of version hbase.version property
> --
>
> Key: ATLAS-657
> URL: https://issues.apache.org/jira/browse/ATLAS-657
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>
> The hbase version packaged as part of ATLAS-498 is hard-coded to 
> hbase-1.1.4-bin.tar.gz. It should match the hbase.version property in 
> atlas/pom.xml





[jira] [Commented] (ATLAS-656) Provide a flag to get HBase from a downloaded location for developer convenience

2016-04-10 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233945#comment-15233945
 ] 

Shwetha G S commented on ATLAS-656:
---

Should we move the target directory out of mvn build target so that mvn clean 
doesn't clean it up?

> Provide a flag to get HBase from a downloaded location for developer 
> convenience
> 
>
> Key: ATLAS-656
> URL: https://issues.apache.org/jira/browse/ATLAS-656
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Hemanth Yamijala
>
> With ATLAS-498, Atlas now supports packaging HBase out of the box for all its 
> dependencies. However, this requires the download of HBase whenever we do a 
> clean install or package. Since this is a common operation (needs to be done 
> at least once before every patch submission), it would be nice to be able to 
> point to an existing location to get the HBase tarball instead of 
> downloading. This will help in offline builds, improve build time, etc.





[jira] [Created] (ATLAS-657) Packaged hbase should be of version hbase.version property

2016-04-10 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-657:
-

 Summary: Packaged hbase should be of version hbase.version property
 Key: ATLAS-657
 URL: https://issues.apache.org/jira/browse/ATLAS-657
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


The hbase version packaged as part of ATLAS-498 is hard-coded to 
hbase-1.1.4-bin.tar.gz. It should match the hbase.version property in 
atlas/pom.xml





[jira] [Updated] (ATLAS-540) API to retrieve entity version events

2016-04-10 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-540:
--
Attachment: ATLAS-540-v3.patch

> API to retrieve entity version events
> -
>
> Key: ATLAS-540
> URL: https://issues.apache.org/jira/browse/ATLAS-540
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-540-v2.patch, ATLAS-540-v3.patch, ATLAS-540.patch
>
>
> We will need an API that takes an entity id as input and returns the events 
> in decreasing order of timestamp. The API should return n events along with 
> the timestamp of the next event





[jira] [Commented] (ATLAS-622) Introduce soft delete

2016-04-11 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234767#comment-15234767
 ] 

Shwetha G S commented on ATLAS-622:
---

The review board link has the first cut of the patch. I still need to debug and 
add more tests

> Introduce soft delete
> -
>
> Key: ATLAS-622
> URL: https://issues.apache.org/jira/browse/ATLAS-622
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>
> Currently, the entity delete API deletes the entity and its related 
> (composite) entities, leaving no trace of them in the system. Instead, change 
> delete to mark the affected entities with 
> state=DELETED





[jira] [Assigned] (ATLAS-575) jetty-maven-plugin fails with ShutdownMonitorThread already started

2016-03-19 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S reassigned ATLAS-575:
-

Assignee: Shwetha G S

> jetty-maven-plugin fails with ShutdownMonitorThread already started
> ---
>
> Key: ATLAS-575
> URL: https://issues.apache.org/jira/browse/ATLAS-575
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>
> {noformat}
> [INFO] --- jetty-maven-plugin:9.2.12.v20150709:deploy-war (start-jetty) @ 
> falcon-bridge ---
> [INFO] Configuring Jetty for project: Apache Atlas Falcon Bridge
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Atlas UI  SUCCESS [01:04 
> min]
> [INFO] Apache Atlas Web Application ... SUCCESS [03:13 
> min]
> [INFO] Apache Atlas Documentation . SUCCESS [  4.433 
> s]
> [INFO] Apache Atlas Hive Bridge ... SUCCESS [01:31 
> min]
> [INFO] Apache Atlas Falcon Bridge . FAILURE [  3.805 
> s]
> [INFO] Apache Atlas Sqoop Bridge .. SKIPPED
> [INFO] Apache Atlas Storm Bridge .. SKIPPED
> [INFO] Apache Atlas Distribution .. SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 05:58 min
> [INFO] Finished at: 2016-03-18T12:42:32+05:30
> [INFO] Final Memory: 188M/687M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.eclipse.jetty:jetty-maven-plugin:9.2.12.v20150709:deploy-war 
> (start-jetty) on project falcon-bridge: Failure: ShutdownMonitorThread 
> already started -> [Help 1]
> {noformat}





[jira] [Updated] (ATLAS-576) Build with Maven 3.0.5 fails

2016-03-19 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-576:
--
Assignee: Rajendra Patil

> Build with Maven 3.0.5 fails
> 
>
> Key: ATLAS-576
> URL: https://issues.apache.org/jira/browse/ATLAS-576
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: Maven 3.0.5 on Mac OS X (10.10.5)
>Reporter: Rajendra Patil
>Assignee: Rajendra Patil
>Priority: Trivial
>  Labels: Maven, dashboard, front-end, frontend
> Fix For: trunk
>
> Attachments: ATLAS-576-1.patch
>
>
> *Problem*:
> Atlast dashboard, the subproject fails due to maven frontend plugin version 
> 0.0.23 that needs maven 3.1.0
> --
> INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 2.598s
> [INFO] Finished at: Fri Mar 18 13:18:03 IST 2016
> [INFO] Final Memory: 14M/309M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> com.github.eirslett:frontend-maven-plugin:0.0.23:install-node-and-npm 
> (install node and npm) on project atlas-dashboard: The plugin 
> com.github.eirslett:frontend-maven-plugin:0.0.23 requires Maven version 3.1.0 
> -> [Help 1]
> --
> *Solution/Proposal*: frontend plugin 0.0.22 works fine, and I don't think we 
> have a must-have dependency on 0.0.23 or higher at this 
> time. 





[jira] [Updated] (ATLAS-577) Integrate entity audit with DefaultMetadataService

2016-03-19 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-577:
--
Attachment: ATLAS-577.patch

> Integrate entity audit with DefaultMetadataService
> --
>
> Key: ATLAS-577
> URL: https://issues.apache.org/jira/browse/ATLAS-577
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-577.patch
>
>






[jira] [Commented] (ATLAS-408) UI : Add a close link (x) on the top right when Tag is added

2016-03-20 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203702#comment-15203702
 ] 

Shwetha G S commented on ATLAS-408:
---

Committed to master. Thanks Darshan and Hemanth

> UI : Add a close link (x) on the top right when Tag is added
> 
>
> Key: ATLAS-408
> URL: https://issues.apache.org/jira/browse/ATLAS-408
> Project: Atlas
>  Issue Type: Task
>Reporter: Anilsg
>Assignee: Darshan Kumar
>  Labels: Atlas-UI
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-408.patch
>
>
> Modal pop-up, which is opened on Search Add Tag & From details page Add Tag,
> Need to have the 'X' on top right corner for user to close the modal pop up





[jira] [Reopened] (ATLAS-576) Build with Maven 3.0.5 fails

2016-03-22 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S reopened ATLAS-576:
---

> Build with Maven 3.0.5 fails
> 
>
> Key: ATLAS-576
> URL: https://issues.apache.org/jira/browse/ATLAS-576
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: Maven 3.0.5 on Mac OS X (10.10.5)
>Reporter: Rajendra Patil
>Assignee: Rajendra Patil
>Priority: Trivial
>  Labels: Maven, dashboard, front-end, frontend
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-576-1.patch, ATLAS-576-2.patch
>
>
> *Problem*:
> Atlast dashboard, the subproject fails due to maven frontend plugin version 
> 0.0.23 that needs maven 3.1.0
> --
> INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 2.598s
> [INFO] Finished at: Fri Mar 18 13:18:03 IST 2016
> [INFO] Final Memory: 14M/309M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> com.github.eirslett:frontend-maven-plugin:0.0.23:install-node-and-npm 
> (install node and npm) on project atlas-dashboard: The plugin 
> com.github.eirslett:frontend-maven-plugin:0.0.23 requires Maven version 3.1.0 
> -> [Help 1]
> --
> *Solution/Proposal*: frontend plugin 0.0.22 works fine, and I don't think we 
> have a must-have dependency on 0.0.23 or higher at this 
> time. 





[jira] [Updated] (ATLAS-556) Hive hook fails for select without table

2016-03-21 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-556:
--
Fix Version/s: 0.7-incubating

> Hive hook fails for select without table
> 
>
> Key: ATLAS-556
> URL: https://issues.apache.org/jira/browse/ATLAS-556
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
> Fix For: 0.7-incubating
>
>
> Reported from: From 
> https://community.hortonworks.com/questions/21766/hive-queries-without-from-clause-seem-to-fail-when.html
> Command: select 42
> {noformat}
> 2016-03-09 11:30:05,669 WARN  - [main:] ~ Failed to get database 
> _dummy_database, returning NoSuchObjectException (ObjectStore:568)
> FAILED: Hive Internal Error: java.lang.NullPointerException(null)
> java.lang.NullPointerException
> at 
> org.apache.atlas.hive.hook.HiveHook.createOrUpdateEntities(HiveHook.java:325)
> at 
> org.apache.atlas.hive.hook.HiveHook.registerProcess(HiveHook.java:387)
> at 
> org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:224)
> at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:182)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1520)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.runCommand(HiveHookIT.java:75)
> at 
> org.apache.atlas.hive.hook.HiveHookIT.testSelect2(HiveHookIT.java:265)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:673)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:842)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1166)
> at 
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
> at org.testng.TestRunner.runWorkers(TestRunner.java:1178)
> at org.testng.TestRunner.privateRun(TestRunner.java:757)
> at org.testng.TestRunner.run(TestRunner.java:608)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
> at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
> at org.testng.SuiteRunner.run(SuiteRunner.java:240)
> at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> at org.testng.TestNG.runSuitesSequentially(TestNG.java:1158)
> at org.testng.TestNG.runSuitesLocally(TestNG.java:1083)
> at org.testng.TestNG.run(TestNG.java:999)
> at 
> org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:115)
> at 
> org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeSingleClass(TestNGDirectoryTestSuite.java:129)
> at 
> org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:113)
> at 
> org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:111)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {noformat}





[jira] [Commented] (ATLAS-576) Build with Maven 3.0.5 fails

2016-03-22 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205997#comment-15205997
 ] 

Shwetha G S commented on ATLAS-576:
---

I have mvn 3.2.5, and frontend-maven-plugin 0.0.22 worked for me. Will check.






[jira] [Commented] (ATLAS-576) Build with Maven 3.0.5 fails

2016-03-22 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206020#comment-15206020
 ] 

Shwetha G S commented on ATLAS-576:
---

This looks like a bug in mvn 3.3.1 and 3.3.3 and affects other plugins as well 
- https://issues.apache.org/jira/browse/MNG-5787. It's fixed in mvn 3.3.9, but 
frontend-maven-plugin (with any version) didn't work with mvn 3.3.9. So, until 
we need some specific feature from a higher version of frontend-maven-plugin, 
should we recommend a maximum mvn version of 3.2.x?
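If the project settles on a supported Maven range, that recommendation can be made enforceable at build time with maven-enforcer-plugin. A hedged config sketch — the range `[3.0.5,3.3.0)` is only one way to express "up to 3.2.x" and is not a project decision:

```xml
<!-- Hypothetical sketch: fail fast with a clear message when an
     unsupported Maven version is used, instead of a plugin error. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-maven-version</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireMavenVersion>
            <!-- 3.0.5 inclusive up to, but excluding, 3.3.0 -->
            <version>[3.0.5,3.3.0)</version>
          </requireMavenVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```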






[jira] [Commented] (ATLAS-576) Build with Maven 3.0.5 fails

2016-03-22 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206111#comment-15206111
 ] 

Shwetha G S commented on ATLAS-576:
---

The dashboard is failing on my box with any version; it looks like an issue with my 
environment, so I can't verify whether this patch works with mvn 3.3.9. Reverting 
to unblock development. Will check with mvn 3.3.9 later and commit.






[jira] [Updated] (ATLAS-575) jetty-maven-plugin fails with ShutdownMonitorThread already started

2016-03-22 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-575:
--
Attachment: ATLAS-575-v2.patch

> jetty-maven-plugin fails with ShutdownMonitorThread already started
> ---
>
> Key: ATLAS-575
> URL: https://issues.apache.org/jira/browse/ATLAS-575
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-575-v2.patch, ATLAS-575.patch
>
>
> {noformat}
> [INFO] --- jetty-maven-plugin:9.2.12.v20150709:deploy-war (start-jetty) @ 
> falcon-bridge ---
> [INFO] Configuring Jetty for project: Apache Atlas Falcon Bridge
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Atlas UI  SUCCESS [01:04 
> min]
> [INFO] Apache Atlas Web Application ... SUCCESS [03:13 
> min]
> [INFO] Apache Atlas Documentation . SUCCESS [  4.433 
> s]
> [INFO] Apache Atlas Hive Bridge ... SUCCESS [01:31 
> min]
> [INFO] Apache Atlas Falcon Bridge . FAILURE [  3.805 
> s]
> [INFO] Apache Atlas Sqoop Bridge .. SKIPPED
> [INFO] Apache Atlas Storm Bridge .. SKIPPED
> [INFO] Apache Atlas Distribution .. SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 05:58 min
> [INFO] Finished at: 2016-03-18T12:42:32+05:30
> [INFO] Final Memory: 188M/687M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.eclipse.jetty:jetty-maven-plugin:9.2.12.v20150709:deploy-war 
> (start-jetty) on project falcon-bridge: Failure: ShutdownMonitorThread 
> already started -> [Help 1]
> {noformat}





[jira] [Comment Edited] (ATLAS-588) import-hive.sh fails while importing partitions for a non-partitioned table

2016-03-23 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209677#comment-15209677
 ] 

Shwetha G S edited comment on ATLAS-588 at 3/24/16 3:36 AM:


{code}
+LOG.info("Skipping partition for table {} since partition 
values are null", getTableQualifiedName(clusterName, table.getDbName(), 
table.getTableName()));
{code}
Add partition.values to the log statement. 
Create a variable for "82e06b34-9151-4023-aa9d-b82103a50e77" in the test.


+1 otherwise. You can fix it and commit. Thanks


was (Author: shwethags):
{code}
+LOG.info("Skipping partition for table {} since partition 
values are null", getTableQualifiedName(clusterName, table.getDbName(), 
table.getTableName()));
{code}
add partition.values in the log statement. 

+1 otherwise. You can fix it and commit. Thanks
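A hypothetical sketch of the suggested fix — getTableQualifiedName is a simplified stand-in for the HiveMetaStoreBridge helper, and the cluster/db/table values are illustrative, not from the actual patch:

```java
// Sketch of the review suggestion: include the partition values in the
// "skipping partition" message so the skipped partition is identifiable.
import java.util.Collections;
import java.util.List;

public class SkipPartitionLogSketch {
    // Simplified stand-in for the bridge's qualified-name helper.
    static String getTableQualifiedName(String clusterName, String dbName, String tableName) {
        return String.format("%s.%s@%s", dbName.toLowerCase(), tableName.toLowerCase(), clusterName);
    }

    public static void main(String[] args) {
        List<String> partitionValues = Collections.emptyList();  // null/empty values trigger the skip
        String msg = String.format(
                "Skipping partition for table %s since partition values are %s",
                getTableQualifiedName("primary", "default", "sales"), partitionValues);
        System.out.println(msg);
    }
}
```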

> import-hive.sh fails while importing partitions for a non-partitioned table
> ---
>
> Key: ATLAS-588
> URL: https://issues.apache.org/jira/browse/ATLAS-588
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.7-incubating
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-588.patch
>
>
> {noformat}
> [root@sneethiraj-ambari-atlas-job-160323-0215-1 atlas-server]# export 
> HIVE_CONF_DIR=/etc/hive/conf
> [root@sneethiraj-ambari-atlas-job-160323-0215-1 atlas-server]# 
> bin/import-hive.sh
> Using Hive configuration directory [/etc/hive/conf]
> Log file for import is /var/log/atlas/import-hive.log
> Exception in thread "main" org.codehaus.jettison.json.JSONException: 
> JSONArray[0] not found.
> at org.codehaus.jettison.json.JSONArray.get(JSONArray.java:193)
> at org.codehaus.jettison.json.JSONArray.getString(JSONArray.java:316)
> at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerInstance(HiveMetaStoreBridge.java:191)
> at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerPartition(HiveMetaStoreBridge.java:452)
> at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerPartitions(HiveMetaStoreBridge.java:438)
> at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importTables(HiveMetaStoreBridge.java:256)
> at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:122)
> at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(HiveMetaStoreBridge.java:114)
> at 
> org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:608)
> {noformat}





[jira] [Commented] (ATLAS-588) import-hive.sh fails while importing partitions for a non-partitioned table

2016-03-23 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209677#comment-15209677
 ] 

Shwetha G S commented on ATLAS-588:
---

{code}
+LOG.info("Skipping partition for table {} since partition 
values are null", getTableQualifiedName(clusterName, table.getDbName(), 
table.getTableName()));
{code}
> Add partition.values to the log statement. 

+1 otherwise. You can fix it and commit. Thanks






[jira] [Created] (ATLAS-592) Queue for failed hook messages

2016-03-24 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-592:
-

 Summary: Queue for failed hook messages
 Key: ATLAS-592
 URL: https://issues.apache.org/jira/browse/ATLAS-592
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S
 Fix For: 0.7-incubating


If NotificationHookConsumer fails to process a message, it's just logged and the 
message is lost. Instead, it should be added to a failed-messages queue, possibly 
for manual intervention.
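The idea can be sketched as a minimal dead-letter queue. The consumer and message handling below are hypothetical stand-ins, not Atlas's NotificationHookConsumer API:

```java
// Minimal dead-letter-queue sketch: failed messages are parked for manual
// replay instead of being logged and dropped.
import java.util.ArrayDeque;
import java.util.Deque;

public class FailedMessageQueueSketch {
    private final Deque<String> failedQueue = new ArrayDeque<>();

    void process(String message) {
        try {
            handle(message);
        } catch (Exception e) {
            // Instead of only logging, park the message for later intervention.
            failedQueue.add(message);
        }
    }

    // Stand-in for real message processing; fails on a marker for the demo.
    void handle(String message) throws Exception {
        if (message.contains("bad")) {
            throw new Exception("processing failed");
        }
    }

    int failedCount() {
        return failedQueue.size();
    }

    public static void main(String[] args) {
        FailedMessageQueueSketch consumer = new FailedMessageQueueSketch();
        consumer.process("good entity event");
        consumer.process("bad entity event");
        System.out.println("failed=" + consumer.failedCount());  // failed=1
    }
}
```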





[jira] [Commented] (ATLAS-553) Entity mutation - Fix issue with reordering of elements in array with composite references

2016-03-24 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210644#comment-15210644
 ] 

Shwetha G S commented on ATLAS-553:
---

+1

> Entity mutation - Fix issue with reordering of elements in array with 
> composite references
> -
>
> Key: ATLAS-553
> URL: https://issues.apache.org/jira/browse/ATLAS-553
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: 0.7-incubating
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-553.1.patch, ATLAS-553.patch
>
>
> Currently, if an entity update reorders the elements of an array attribute, it 
> is not handled correctly: the entity mutation logic thinks the element was 
> deleted, so a deleted vertex is referenced later and the entity mutation 
> fails.





[jira] [Created] (ATLAS-593) importing hive metadata for specific database/table

2016-03-24 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-593:
-

 Summary: importing hive metadata for specific database/table
 Key: ATLAS-593
 URL: https://issues.apache.org/jira/browse/ATLAS-593
 Project: Atlas
  Issue Type: Improvement
Reporter: Shwetha G S


import-hive.sh currently imports the whole Hive metadata. It would be useful to 
allow importing a specific database/table.





[jira] [Created] (ATLAS-590) HBaseBasedAuditRepository.start() fails

2016-03-24 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-590:
-

 Summary: HBaseBasedAuditRepository.start() fails
 Key: ATLAS-590
 URL: https://issues.apache.org/jira/browse/ATLAS-590
 Project: Atlas
  Issue Type: Sub-task
Reporter: Shwetha G S
Assignee: Shwetha G S
 Fix For: 0.7-incubating


The application doesn't fail, but it prints this warning. Need to investigate.
{noformat}
2016-03-23 11:01:45,496 WARN  - [main:] ~ Failed to identify the fs of dir 
hdfs://localhost.localdomain:8020/apps/hbase/data/lib, ignored 
(DynamicClassLoader:106)
java.io.IOException: No FileSystem for scheme: hdfs
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at 
org.apache.hadoop.hbase.util.DynamicClassLoader.(DynamicClassLoader.java:104)
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.(ProtobufUtil.java:241)
at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
at 
org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:635)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
at 
org.apache.atlas.repository.audit.HBaseBasedAuditRepository.start(HBaseBasedAuditRepository.java:257)
at org.apache.atlas.service.Services.start(Services.java:45)
at 
org.apache.atlas.web.listeners.GuiceServletConfig.startServices(GuiceServletConfig.java:130)
at 
org.apache.atlas.web.listeners.GuiceServletConfig.contextInitialized(GuiceServletConfig.java:124)
at 
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:800)
at 
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:444)
at 
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:791)
at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:294)
at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.server.Server.start(Server.java:387)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:354)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:93)
at org.apache.atlas.Atlas.main(Atlas.java:107)
2016-03-23 11:01:45,555 INFO  - [main:] ~ Checking if table 
ATLAS_ENTITY_AUDIT_EVENTS exists (HBaseBasedAuditRepository:232)
2016-03-23 11:01:46,362 INFO  - [main:] ~ Table ATLAS_ENTITY_AUDIT_EVENTS 
exists (HBaseBasedAuditRepository:241)
{noformat}
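"No FileSystem for scheme: hdfs" typically means the hdfs FileSystem implementation was not found, e.g. hadoop-hdfs is missing from the classpath or the META-INF/services registration files were clobbered when building a shaded jar. A common workaround is to map the scheme explicitly via the `fs.hdfs.impl` key; sketched here with a plain map standing in for org.apache.hadoop.conf.Configuration so the snippet is self-contained:

```java
// Sketch of the usual workaround: register the hdfs implementation class
// explicitly instead of relying on ServiceLoader discovery. With a real
// Configuration this would be:
//   conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
import java.util.HashMap;
import java.util.Map;

public class HdfsSchemeWorkaround {
    public static Map<String, String> withExplicitHdfsImpl(Map<String, String> conf) {
        conf.put("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> conf = withExplicitHdfsImpl(new HashMap<>());
        System.out.println(conf.get("fs.hdfs.impl"));
    }
}
```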





[jira] [Created] (ATLAS-589) Document entity audit

2016-03-23 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-589:
-

 Summary: Document entity audit
 Key: ATLAS-589
 URL: https://issues.apache.org/jira/browse/ATLAS-589
 Project: Atlas
  Issue Type: Sub-task
Reporter: Shwetha G S
Assignee: Shwetha G S
 Fix For: 0.7-incubating








[jira] [Created] (ATLAS-591) Atlas client ssl configuration failure

2016-03-24 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-591:
-

 Summary: Atlas client ssl configuration failure
 Key: ATLAS-591
 URL: https://issues.apache.org/jira/browse/ATLAS-591
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S
 Fix For: 0.7-incubating


When the server is not set up with SSL, the client shouldn't configure SSL parameters.
{noformat}
2016-03-23 13:59:35,013 DEBUG [main]: atlas.ApplicationProperties 
(ApplicationProperties.java:logConfiguration(86)) - atlas.enableTLS = false
2016-03-23 13:59:35,018 DEBUG [main]: atlas.ApplicationProperties 
(ApplicationProperties.java:logConfiguration(86)) - atlas.rest.address = 
http://localhost:21000

2016-03-23 13:59:35,420 DEBUG [Atlas Logger 0]: security.SecureClientUtils 
(SecureClientUtils.java:newConnConfigurator(138)) - Cannot load customized ssl 
related configuration. Fallback to system-generic settings.
java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.(FileInputStream.java:138)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:164)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.(ReloadingX509TrustManager.java:81)
at 
org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:209)
at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
at 
org.apache.atlas.security.SecureClientUtils.newSslConnConfigurator(SecureClientUtils.java:150)
at 
org.apache.atlas.security.SecureClientUtils.newConnConfigurator(SecureClientUtils.java:136)
at 
org.apache.atlas.security.SecureClientUtils.getClientConnectionHandler(SecureClientUtils.java:69)
at org.apache.atlas.AtlasClient.(AtlasClient.java:126)
at 
org.apache.atlas.hive.bridge.HiveMetaStoreBridge.(HiveMetaStoreBridge.java:97)
at org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:195)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:62)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:181)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}
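A minimal sketch of the proposed guard, assuming the property names shown in the log above; the real client wiring in SecureClientUtils is more involved:

```java
// Sketch: only configure SSL when atlas.enableTLS is true, so a plain-HTTP
// server never triggers keystore loading on the client side.
import java.util.Properties;

public class SslGuardSketch {
    static boolean shouldConfigureSsl(Properties atlasConf) {
        // Default to false: absent property means no SSL configuration.
        return Boolean.parseBoolean(atlasConf.getProperty("atlas.enableTLS", "false"));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("atlas.enableTLS", "false");
        conf.setProperty("atlas.rest.address", "http://localhost:21000");
        System.out.println(shouldConfigureSsl(conf));  // false
    }
}
```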





[jira] [Commented] (ATLAS-539) Store for entity update audit

2016-03-07 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184578#comment-15184578
 ] 

Shwetha G S commented on ATLAS-539:
---

[~yhemanth], the key used here is entity id + timestamp. To avoid conflicts, we 
need to add the server id (the one used in HA) as well. FYI
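The key scheme in the comment can be sketched as follows; the separator and field ordering are illustrative assumptions, not the committed row-key format:

```java
// Sketch: entity id + timestamp can collide when two HA servers audit the
// same entity at the same millisecond, so append a server id to the row key.
public class AuditKeySketch {
    static String rowKey(String entityId, long timestamp, String serverId) {
        return entityId + ":" + timestamp + ":" + serverId;
    }

    public static void main(String[] args) {
        // Same entity and timestamp on two HA servers now yield distinct keys.
        String k1 = rowKey("e1", 1458796800000L, "server-1");
        String k2 = rowKey("e1", 1458796800000L, "server-2");
        System.out.println(!k1.equals(k2));  // true
    }
}
```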

> Store for entity update audit
> -
>
> Key: ATLAS-539
> URL: https://issues.apache.org/jira/browse/ATLAS-539
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-539.patch
>
>
> We need to store entity update events in some store. The supported search 
> should return all events for a given entity id within some time range.
> Two choices are:
> 1. Existing graph db - We can create a vertex for every update with 
> properties for entity id, timestamp, action and details. This creates 
> disjoint vertices. A direct gremlin search is enough to retrieve all events 
> for the entity. 
> Pros - We already have configuration for the graph and utilities to store/get 
> from the graph
> Cons - It creates extra data and doesn't fit the graph model
> 2. HBase - Store events with key = entity id + timestamp and columns for 
> action and details. A table scan supports the required search
> Pros - Fits the data model
> Cons - We will need configuration and code to read and write from HBase
> In either case, we should expose an interface so that alternative 
> implementations can be added





[jira] [Commented] (ATLAS-474) Server does not start if the type is updated with same super type class information

2016-03-03 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179384#comment-15179384
 ] 

Shwetha G S commented on ATLAS-474:
---

[~dkantor], [~yhemanth] will review

> Server does not start if the type is updated with same super type class 
> information
> ---
>
> Key: ATLAS-474
> URL: https://issues.apache.org/jira/browse/ATLAS-474
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: sandbox
>Reporter: Chethana
>Assignee: David Kantor
>Priority: Blocker
> Attachments: rb44100.patch
>
>
> Create a class with a superType class.
> Use the update API without changing the request used.
> Restart the Atlas server.
> It fails with an exception:
> K":1},"pattern":"static","timestamp":"1454921806183"} stored data: 
> {"version":1,"subscription":{"ATLAS_HOOK":1},"pattern":"static","timestamp":"1454921372384"}
>  (ZkUtils$:68)
> 2016-02-09 00:00:02,149 INFO  - [ZkClient-EventThread-91-localhost:9026:] ~ I 
> wrote this conflicted ephemeral node 
> [{"version":1,"subscription":{"ATLAS_HOOK":1},"pattern":"static","timestamp":"1454921806183"}]
>  at /consumers/atlas/ids/atlas_Chethanas-MBP.local-1454412213224-de1ce8e6 a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (ZkUtils$:68)
> 2016-02-09 00:00:02,554 INFO  - [ProcessThread(sid:0 cport:-1)::] ~ Got 
> user-level KeeperException when processing sessionid:0x152a1b9238e0051 
> type:create cxid:0x3f0bf zxid:0x1c0ec txntype:-1 reqpath:n/a Error 
> Path:/consumers/atlas/ids/atlas_Chethanas-MBP.local-1454412213224-de1ce8e6 
> Error:KeeperErrorCode = NodeExists for 
> /consumers/atlas/ids/atlas_Chethanas-MBP.local-1454412213224-de1ce8e6 
> (PrepRequest...skipping...
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
> at org.eclipse.jetty.server.Server.doStart(Server.java:354)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:88)
> at org.apache.atlas.Atlas.main(Atlas.java:107)
> Caused by: java.lang.RuntimeException: org.apache.atlas.AtlasException: Type 
> classa3ozcd7yra extends superType superClassa3ozcd7yra multiple times
> at 
> org.apache.atlas.services.DefaultMetadataService.restoreTypeSystem(DefaultMetadataService.java:113)
> at 
> org.apache.atlas.services.DefaultMetadataService.(DefaultMetadataService.java:100)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> com.google.inject.internal.DefaultConstructionProxyFactory$2.newInstance(DefaultConstructionProxyFactory.java:86)
> at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:105)





[jira] [Commented] (ATLAS-474) Server does not start if the type is updated with same super type class information

2016-03-03 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179368#comment-15179368
 ] 

Shwetha G S commented on ATLAS-474:
---

Ah ok, I missed that in update. Thanks






[jira] [Commented] (ATLAS-247) hive_column level lineage

2016-03-03 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179418#comment-15179418
 ] 

Shwetha G S commented on ATLAS-247:
---

Can you add asserts on the column-level lineage in HiveHookIT? And can you add 
the documentation from 
https://github.com/hbutani/incubator-atlas/wiki/Column-Level-Lineage to 
Bridge-Hive.twiki?

Looks like Hive publishes snapshot jars. So we can change the hive version to 
2.1.0-SNAPSHOT and commit this after the hive patch is committed, right?

> hive_column level lineage
> 
>
> Key: ATLAS-247
> URL: https://issues.apache.org/jira/browse/ATLAS-247
> Project: Atlas
>  Issue Type: New Feature
>Affects Versions: 0.5-incubating
>Reporter: Herman Yu
>Assignee: Harish Butani
> Attachments: ATLAS-247.2.patch, ATLAS-247.patch
>
>
> hive_column does not inherit from DataSet, so hive_process can't be used to 
> track column-level lineage.
> Is there a specific reason that hive_column does not inherit from DataSet? 





[jira] [Commented] (ATLAS-247) hive_column level lineage

2016-03-03 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179486#comment-15179486
 ] 

Shwetha G S commented on ATLAS-247:
---

Since the hive patch will only be in a future Hive release and we can't expect 
everyone to use the new Hive version, the hook shouldn't fail with old Hive 
releases. We should wrap the column-level lineage step in a catch of Throwable, 
and log and ignore the error. We should document this as well.
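The "catch Throwable, log and ignore" pattern described above can be sketched as follows. This is a minimal, self-contained illustration, not the Atlas hook code; the method names (`registerColumnLineage`, the lineage `Runnable`) are hypothetical stand-ins for the real lineage-registration step.

```java
public class ColumnLineageGuard {

    /**
     * Runs the (hypothetical) column-lineage registration step, swallowing any
     * Throwable so that a hook running against an old Hive release - where the
     * lineage API is missing and linkage errors are thrown - does not fail the
     * query. Returns true if lineage was registered, false if it was skipped.
     */
    public static boolean registerColumnLineage(Runnable lineageStep) {
        try {
            lineageStep.run();
            return true;
        } catch (Throwable t) { // e.g. NoSuchMethodError on old Hive jars
            System.err.println("Column-level lineage unavailable, ignoring: " + t);
            return false;
        }
    }

    public static void main(String[] args) {
        // Old Hive: the lineage step blows up with a linkage error; the hook survives.
        boolean registered = registerColumnLineage(() -> {
            throw new NoSuchMethodError("LineageInfo.getIndex()");
        });
        System.out.println("registered=" + registered);
    }
}
```

Catching `Throwable` (not just `Exception`) matters here because missing APIs surface as `Error` subclasses such as `NoSuchMethodError`, which an `Exception` catch would not intercept.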

> hive_column level lineage
> 
>
> Key: ATLAS-247
> URL: https://issues.apache.org/jira/browse/ATLAS-247
> Project: Atlas
>  Issue Type: New Feature
>Affects Versions: 0.5-incubating
>Reporter: Herman Yu
>Assignee: Harish Butani
> Attachments: ATLAS-247.2.patch, ATLAS-247.patch
>
>
> hive_column does not inherit from DataSet, so hive_process can't be used to 
> track column-level lineage.
> Is there a specific reason that hive_column does not inherit from DataSet? 





[jira] [Updated] (ATLAS-422) JavaDoc NotificationConsumer and NotificationInterface.

2016-03-01 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-422:
--
Attachment: ATLAS-422-additional.patch

Build was broken
{noformat}
[INFO] --- maven-checkstyle-plugin:2.9.1:check (checkstyle-check) @
atlas-notification ---
[INFO] Starting audit...
/home/jenkins/jenkins-slave/workspace/apache-atlas-nightly/notification/src
/main/java/org/apache/atlas/notification/NotificationConsumer.java:25:41:
'>' is followed by an illegal character.
Audit done.
{noformat}

I have committed the fix.
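The checkstyle failure above is the usual symptom of a bare generic type in a Javadoc comment, where `<` and `>` are treated as HTML. A minimal illustration of the fix (the class below is a trivial stand-in, not the actual NotificationConsumer):

```java
import java.util.Arrays;
import java.util.List;

/**
 * Writing a bare generic like List<String> in a Javadoc comment can trip the
 * checkstyle "'&gt;' is followed by an illegal character" audit, because
 * Javadoc interprets the angle brackets as HTML. The escaped form
 * {@code List<String>} passes both Javadoc and checkstyle.
 */
public class JavadocGenericsExample {

    /** Returns sample messages as a {@code List<String>} (note the escaped form). */
    public static List<String> messages() {
        return Arrays.asList("a", "b");
    }

    public static void main(String[] args) {
        System.out.println(messages());
    }
}
```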


> JavaDoc NotificationConsumer and NotificationInterface.
> ---
>
> Key: ATLAS-422
> URL: https://issues.apache.org/jira/browse/ATLAS-422
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Tom Beerbower
>Assignee: Tom Beerbower
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-422-additional.patch, ATLAS-422.patch
>
>
> Missing javadocs.





[jira] [Commented] (ATLAS-479) Add description for different types during create time

2016-03-01 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175014#comment-15175014
 ] 

Shwetha G S commented on ATLAS-479:
---

I added a small test with the description set to null: serialisation works 
fine, but deserialisation fails. You need to debug:
{code}
@Test
public void testCls() throws Exception {
    HierarchicalTypeDefinition<ClassType> clsType = TypesUtil
            .createClassTypeDef("Random", ImmutableList.of(),
                    TypesUtil.createRequiredAttrDef("name", DataTypes.STRING_TYPE));
    TypesDef typesDef = getTypesDef(clsType);
    String json = TypesSerialization.toJson(typesDef);
    System.out.println(json);
    TypesSerialization.fromJson(json);
}
{code}

> Add description for different types during create time
> --
>
> Key: ATLAS-479
> URL: https://issues.apache.org/jira/browse/ATLAS-479
> Project: Atlas
>  Issue Type: Sub-task
>Affects Versions: 0.6-incubating
>Reporter: Neeru Gupta
>Assignee: Neeru Gupta
> Fix For: 0.7-incubating
>
> Attachments: graycol.gif, rb43531(6).patch
>
>
> Ability to specify description while creating different types like Struct, 
> Enum, Class and Trait type.





[jira] [Created] (ATLAS-545) Attribute names with $ don't work

2016-03-01 Thread Shwetha G S (JIRA)
Shwetha G S created ATLAS-545:
-

 Summary: Attribute names with $ don't work
 Key: ATLAS-545
 URL: https://issues.apache.org/jira/browse/ATLAS-545
 Project: Atlas
  Issue Type: Bug
Reporter: Shwetha G S


Index creation fails for attribute names that contain $:
{noformat}
2016-02-10 11:16:21,634 ERROR - [qtp1565844247-232 - 
ad7c63a0-be93-4344-bb33-5dd1f2d1f756:] ~ Error creating index for type 
org.apache.atlas.typesystem.types.ClassType@4c019ea6 
(GraphBackedSearchIndexer:154)
java.lang.IllegalArgumentException: Name can not contains reserved character $: 
className_update_7cyd8o3dnn.multiplicityOptional$
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:119)
at 
com.thinkaurelius.titan.graphdb.types.StandardRelationTypeMaker.checkName(StandardRelationTypeMaker.java:70)
at 
com.thinkaurelius.titan.graphdb.types.StandardRelationTypeMaker.checkGeneralArguments(StandardRelationTypeMaker.java:79)
at 
com.thinkaurelius.titan.graphdb.types.StandardRelationTypeMaker.makeDefinition(StandardRelationTypeMaker.java:113)
at 
com.thinkaurelius.titan.graphdb.types.StandardPropertyKeyMaker.make(StandardPropertyKeyMaker.java:76)
at 
org.apache.atlas.repository.graph.GraphBackedSearchIndexer.createCompositeAndMixedIndex(GraphBackedSearchIndexer.java:297)
at 
org.apache.atlas.repository.graph.GraphBackedSearchIndexer.createIndexForAttribute(GraphBackedSearchIndexer.java:212)
at 
org.apache.atlas.repository.graph.GraphBackedSearchIndexer.createIndexForFields(GraphBackedSearchIndexer.java:202)
at 
org.apache.atlas.repository.graph.GraphBackedSearchIndexer.addIndexForType(GraphBackedSearchIndexer.java:191)
at 
org.apache.atlas.repository.graph.GraphBackedSearchIndexer.onAdd(GraphBackedSearchIndexer.java:151)
at 
org.apache.atlas.repository.graph.GraphBackedSearchIndexer.onChange(GraphBackedSearchIndexer.java:166)
at 
org.apache.atlas.services.DefaultMetadataService.onTypesUpdated(DefaultMetadataService.java:645)
at 
org.apache.atlas.services.DefaultMetadataService.updateType(DefaultMetadataService.java:207)
at 
org.apache.atlas.web.resources.TypesResource.update(TypesResource.java:130)
{noformat}
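The stack trace shows Titan's `Preconditions.checkArgument` rejecting any property key containing the reserved character `$`. One way to avoid hitting that check is to validate (or escape) attribute names before index creation. The sketch below is purely illustrative; the `__dollar__` escape scheme is a hypothetical choice, not Atlas's actual fix.

```java
public class AttributeNameGuard {

    /** Titan rejects property names containing the reserved character '$'. */
    public static boolean isValid(String name) {
        return name != null && !name.contains("$");
    }

    /**
     * Hypothetical escaping so that a name like "multiplicityOptional$"
     * becomes storable as a Titan property key. Any reversible scheme that
     * avoids Titan's reserved characters would do.
     */
    public static String escape(String name) {
        return name.replace("$", "__dollar__");
    }

    public static void main(String[] args) {
        System.out.println(isValid("multiplicityOptional$")); // rejected by Titan
        System.out.println(escape("multiplicityOptional$"));
    }
}
```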





[jira] [Commented] (ATLAS-476) Update type attribute with Reserved characters updated the original type as unknown

2016-03-01 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175141#comment-15175141
 ] 

Shwetha G S commented on ATLAS-476:
---

Created ATLAS-545 to handle attribute names with $

> Update type attribute with Reserved characters updated the original type as 
> unknown
> ---
>
> Key: ATLAS-476
> URL: https://issues.apache.org/jira/browse/ATLAS-476
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: sandbox
>Reporter: Chethana
>Assignee: Hemanth Yamijala
>Priority: Blocker
> Fix For: 0.7-incubating
>
> Attachments: 1.log, ATLAS-476.patch
>
>
> Create a class type with a required attribute.
> Fetch the created type - the type data is returned.
> Try to update this type by adding an attribute whose name contains a reserved 
> character, e.g. test$.
> This throws an exception.
> Now use the GET call to fetch the previously created type.
> Expected:
> The type should not be updated.
> Actual:
> "error": "Unknown datatype: className_update_vsvrbzqaqg",
> "stackTrace": "org.apache.atlas.typesystem.exception.TypeNotFoundException: 
> Unknown datatype: className_update_vsvrbzqaqg\n\tat 





[jira] [Updated] (ATLAS-508) Apache nightly build failure - UnsupportedOperationException: Not a single key: __traitNames

2016-03-02 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-508:
--
Attachment: ATLAS-508.patch

Looks like some graph data was not cleaned up by a previous test. This patch 
uses a new data directory for the failing tests.
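The isolation approach described above - giving each failing test its own graph data directory so state left behind by an earlier test cannot leak into GremlinTest or LineageQueryTest - can be sketched as below. This is a standalone illustration, not the patch itself; how the directory is wired into the graph configuration is left out.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PerTestDataDir {

    /**
     * Creates a unique temporary data directory for a test. Because the
     * directory is fresh on every call, graph state from a previous test run
     * can never be picked up accidentally.
     */
    public static Path freshDataDir(String testName) {
        try {
            return Files.createTempDirectory("atlas-" + testName + "-");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path a = freshDataDir("GremlinTest");
        Path b = freshDataDir("GremlinTest");
        // Distinct directories even for the same test name.
        System.out.println(a.equals(b));
    }
}
```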

> Apache nightly build failure - UnsupportedOperationException: Not a single 
> key: __traitNames
> 
>
> Key: ATLAS-508
> URL: https://issues.apache.org/jira/browse/ATLAS-508
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
> Attachments: ATLAS-508.patch
>
>
> https://builds.apache.org/job/apache-atlas-nightly/184/console
> {noformat}
> Failed tests: 
>   GremlinTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang.Unsu...
>   LineageQueryTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang...
> Tests run: 200, Failures: 2, Errors: 0, Skipped: 28
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] apache-atlas .. SUCCESS [ 21.262 s]
> [INFO] Apache Atlas Common ... SUCCESS [ 58.381 s]
> [INFO] Apache Atlas Typesystem ... SUCCESS [04:51 min]
> [INFO] Apache Atlas Server API ... SUCCESS [ 37.922 s]
> [INFO] Apache Atlas Client ... SUCCESS [01:12 min]
> [INFO] Apache Atlas Notification . SUCCESS [01:25 min]
> [INFO] Apache Atlas Titan  SUCCESS [01:56 min]
> [INFO] Apache Atlas Repository ... FAILURE [10:51 min]
> [INFO] Apache Atlas UI ... SKIPPED
> [INFO] Apache Atlas Web Application .. SKIPPED
> [INFO] Apache Atlas Documentation  SKIPPED
> [INFO] Apache Atlas Hive Bridge .. SKIPPED
> [INFO] Apache Atlas Falcon Bridge  SKIPPED
> [INFO] Apache Atlas Sqoop Bridge . SKIPPED
> [INFO] Apache Atlas Storm Bridge . SKIPPED
> [INFO] Apache Atlas Distribution . SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Running org.apache.atlas.query.LineageQueryTest
> Tests run: 7, Failures: 1, Errors: 0, Skipped: 6, Time elapsed: 16.236 sec 
> <<< FAILURE! - in org.apache.atlas.query.LineageQueryTest
> beforeAll(org.apache.atlas.query.LineageQueryTest)  Time elapsed: 16.159 sec  
> <<< FAILURE!
> javax.script.ScriptException: javax.script.ScriptException: 
> java.lang.UnsupportedOperationException: Not a single key: __traitNames. Use 
> addProperty instead
>   at 
> com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.setProperty(StandardTitanTx.java:755)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:244)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:239)
>   at 
> com.tinkerpop.blueprints.util.wrappers.batch.BatchGraph$BatchVertex.setProperty(BatchGraph.java:492)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONUtility.vertexFromJson(GraphSONUtility.java:136)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:158)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:104)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:88)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
>   at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
>   at 
> org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.invoke(StaticMetaMethodSite.java:43)
>   at 
> org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.call(StaticMetaMethodSite.java:88)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
>   at 
> 

[jira] [Commented] (ATLAS-508) Apache nightly build failure - UnsupportedOperationException: Not a single key: __traitNames

2016-03-02 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177220#comment-15177220
 ] 

Shwetha G S commented on ATLAS-508:
---

Committed to trunk. Thanks, Suma, for reviewing. Will keep the bug open to 
check the Jenkins build.

> Apache nightly build failure - UnsupportedOperationException: Not a single 
> key: __traitNames
> 
>
> Key: ATLAS-508
> URL: https://issues.apache.org/jira/browse/ATLAS-508
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
> Attachments: ATLAS-508.patch
>
>
> https://builds.apache.org/job/apache-atlas-nightly/184/console
> {noformat}
> Failed tests: 
>   GremlinTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang.Unsu...
>   LineageQueryTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang...
> Tests run: 200, Failures: 2, Errors: 0, Skipped: 28
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] apache-atlas .. SUCCESS [ 21.262 s]
> [INFO] Apache Atlas Common ... SUCCESS [ 58.381 s]
> [INFO] Apache Atlas Typesystem ... SUCCESS [04:51 min]
> [INFO] Apache Atlas Server API ... SUCCESS [ 37.922 s]
> [INFO] Apache Atlas Client ... SUCCESS [01:12 min]
> [INFO] Apache Atlas Notification . SUCCESS [01:25 min]
> [INFO] Apache Atlas Titan  SUCCESS [01:56 min]
> [INFO] Apache Atlas Repository ... FAILURE [10:51 min]
> [INFO] Apache Atlas UI ... SKIPPED
> [INFO] Apache Atlas Web Application .. SKIPPED
> [INFO] Apache Atlas Documentation  SKIPPED
> [INFO] Apache Atlas Hive Bridge .. SKIPPED
> [INFO] Apache Atlas Falcon Bridge  SKIPPED
> [INFO] Apache Atlas Sqoop Bridge . SKIPPED
> [INFO] Apache Atlas Storm Bridge . SKIPPED
> [INFO] Apache Atlas Distribution . SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Running org.apache.atlas.query.LineageQueryTest
> Tests run: 7, Failures: 1, Errors: 0, Skipped: 6, Time elapsed: 16.236 sec 
> <<< FAILURE! - in org.apache.atlas.query.LineageQueryTest
> beforeAll(org.apache.atlas.query.LineageQueryTest)  Time elapsed: 16.159 sec  
> <<< FAILURE!
> javax.script.ScriptException: javax.script.ScriptException: 
> java.lang.UnsupportedOperationException: Not a single key: __traitNames. Use 
> addProperty instead
>   at 
> com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.setProperty(StandardTitanTx.java:755)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:244)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:239)
>   at 
> com.tinkerpop.blueprints.util.wrappers.batch.BatchGraph$BatchVertex.setProperty(BatchGraph.java:492)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONUtility.vertexFromJson(GraphSONUtility.java:136)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:158)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:104)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:88)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
>   at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
>   at 
> org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.invoke(StaticMetaMethodSite.java:43)
>   at 
> org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.call(StaticMetaMethodSite.java:88)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
>   

[jira] [Commented] (ATLAS-508) Apache nightly build failure - UnsupportedOperationException: Not a single key: __traitNames

2016-03-02 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177340#comment-15177340
 ] 

Shwetha G S commented on ATLAS-508:
---

I couldn't verify the fix even with the latest build - 
https://builds.apache.org/job/apache-atlas-nightly/209. The builds were failing 
when HiveLineageServiceTest and GraphBackedDiscoveryServiceTest ran before 
GremlinTest, but in the latest build they ran after GremlinTest. In any case, 
since we use a different data directory with this patch, the issue shouldn't 
happen again. Will resolve this for now and re-open if we see the issue again.

> Apache nightly build failure - UnsupportedOperationException: Not a single 
> key: __traitNames
> 
>
> Key: ATLAS-508
> URL: https://issues.apache.org/jira/browse/ATLAS-508
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-508.patch
>
>
> https://builds.apache.org/job/apache-atlas-nightly/184/console
> {noformat}
> Failed tests: 
>   GremlinTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang.Unsu...
>   LineageQueryTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang...
> Tests run: 200, Failures: 2, Errors: 0, Skipped: 28
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] apache-atlas .. SUCCESS [ 21.262 s]
> [INFO] Apache Atlas Common ... SUCCESS [ 58.381 s]
> [INFO] Apache Atlas Typesystem ... SUCCESS [04:51 min]
> [INFO] Apache Atlas Server API ... SUCCESS [ 37.922 s]
> [INFO] Apache Atlas Client ... SUCCESS [01:12 min]
> [INFO] Apache Atlas Notification . SUCCESS [01:25 min]
> [INFO] Apache Atlas Titan  SUCCESS [01:56 min]
> [INFO] Apache Atlas Repository ... FAILURE [10:51 min]
> [INFO] Apache Atlas UI ... SKIPPED
> [INFO] Apache Atlas Web Application .. SKIPPED
> [INFO] Apache Atlas Documentation  SKIPPED
> [INFO] Apache Atlas Hive Bridge .. SKIPPED
> [INFO] Apache Atlas Falcon Bridge  SKIPPED
> [INFO] Apache Atlas Sqoop Bridge . SKIPPED
> [INFO] Apache Atlas Storm Bridge . SKIPPED
> [INFO] Apache Atlas Distribution . SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Running org.apache.atlas.query.LineageQueryTest
> Tests run: 7, Failures: 1, Errors: 0, Skipped: 6, Time elapsed: 16.236 sec 
> <<< FAILURE! - in org.apache.atlas.query.LineageQueryTest
> beforeAll(org.apache.atlas.query.LineageQueryTest)  Time elapsed: 16.159 sec  
> <<< FAILURE!
> javax.script.ScriptException: javax.script.ScriptException: 
> java.lang.UnsupportedOperationException: Not a single key: __traitNames. Use 
> addProperty instead
>   at 
> com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.setProperty(StandardTitanTx.java:755)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:244)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:239)
>   at 
> com.tinkerpop.blueprints.util.wrappers.batch.BatchGraph$BatchVertex.setProperty(BatchGraph.java:492)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONUtility.vertexFromJson(GraphSONUtility.java:136)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:158)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:104)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:88)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
>   at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
>   at 
> 

[jira] [Assigned] (ATLAS-537) Falcon hook failing when tried to submit a process which creates a hive table.

2016-03-02 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S reassigned ATLAS-537:
-

Assignee: Shwetha G S

> Falcon hook failing when tried to submit a process which creates a hive table.
> --
>
> Key: ATLAS-537
> URL: https://issues.apache.org/jira/browse/ATLAS-537
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
>Reporter: Ayub Khan
>Assignee: Shwetha G S
>Priority: Blocker
> Attachments: logs.tar.gz
>
>
> Falcon hook failing when tried to submit a hive process.
> Stack trace from log:
> {noformat}
> 2016-02-25 11:40:38,894 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> PROCESS/Aae61bc53-c2ee37c0123 is published into config store (AUDIT:229)
> 2016-02-25 11:40:38,894 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> Submit successful: (process): Aae61bc53-c2ee37c0123 
> (AbstractEntityManager:417)
> 2016-02-25 11:40:38,895 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> {Action:submit, Dimensions:{colo=NULL, entityType=process}, Status: 
> SUCCEEDED, Time-taken:350536678 ns} (METRIC:38)
> 2016-02-25 11:40:38,896 DEBUG - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:] ~ Audit: hrt_qa/172.22.101.126 
> performed request 
> http://apathan-atlas-erie-tp-testing-3.novalocal:15000/api/entities/submit/process
>  (172.22.101.123) at time 2016-02-25T11:40Z (FalconAuditFilter:86)
> 2016-02-25 11:40:38,896 INFO  - [Atlas Logger 0:] ~ Entered Atlas hook for 
> Falcon hook operation ADD_PROCESS (FalconHook:167)
> 2016-02-25 11:40:39,625 INFO  - [Atlas Logger 0:] ~ 0: Opening raw store with 
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 
> (HiveMetaStore:590)
> 2016-02-25 11:40:39,650 INFO  - [Atlas Logger 0:] ~ ObjectStore, initialize 
> called (ObjectStore:294)
> 2016-02-25 11:40:39,965 INFO  - [Atlas Logger 0:] ~ Property 
> hive.metastore.integral.jdo.pushdown unknown - will be ignored 
> (Persistence:77)
> 2016-02-25 11:40:39,966 INFO  - [Atlas Logger 0:] ~ Property 
> datanucleus.cache.level2 unknown - will be ignored (Persistence:77)
> 2016-02-25 11:40:41,822 WARN  - [Atlas Logger 0:] ~ Retrying creating default 
> database after error: Unexpected exception caught. (HiveMetaStore:623)
> javax.jdo.JDOFatalInternalException: Unexpected exception caught.
>   at 
> javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1193)
>   at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
>   at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:374)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:403)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:296)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:263)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:57)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:594)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:66)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:199)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:74)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 

[jira] [Commented] (ATLAS-537) Falcon hook failing when tried to submit a process which creates a hive table.

2016-03-02 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177348#comment-15177348
 ] 

Shwetha G S commented on ATLAS-537:
---

I didn't see the classpath issue in the environment. However, 
{noformat}
java.lang.NullPointerException
at java.util.ArrayList.addAll(ArrayList.java:577)
at 
org.apache.atlas.falcon.hook.FalconHook.createProcessInstance(FalconHook.java:238)
at 
org.apache.atlas.falcon.hook.FalconHook.createEntities(FalconHook.java:180)
at 
org.apache.atlas.falcon.hook.FalconHook.fireAndForget(FalconHook.java:174)
at 
org.apache.atlas.falcon.hook.FalconHook.access$200(FalconHook.java:68)
at org.apache.atlas.falcon.hook.FalconHook$2.run(FalconHook.java:157)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}
is an issue, and it occurs when the input/output is an HDFS-based feed.
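The NullPointerException above comes from `ArrayList.addAll` being handed a null collection, which per its contract throws NPE. A minimal sketch of the null-guard this calls for, assuming the hook collects process inputs/outputs into one list (`collectEntities` is a hypothetical name, not the FalconHook API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class FalconHookNullGuard {

    /**
     * Combines the process's input and output entities, tolerating null
     * collections: for HDFS-based feeds one side apparently arrives as null,
     * and ArrayList.addAll(null) would throw NullPointerException.
     */
    public static <T> List<T> collectEntities(Collection<T> inputs, Collection<T> outputs) {
        List<T> all = new ArrayList<>();
        if (inputs != null) {   // null for an HDFS-based input feed
            all.addAll(inputs);
        }
        if (outputs != null) {
            all.addAll(outputs);
        }
        return all;
    }

    public static void main(String[] args) {
        // HDFS-based input feed: inputs is null, outputs has one hive table.
        System.out.println(collectEntities(null, Arrays.asList("hive_table_1")));
    }
}
```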

> Falcon hook failing when tried to submit a process which creates a hive table.
> --
>
> Key: ATLAS-537
> URL: https://issues.apache.org/jira/browse/ATLAS-537
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
>Reporter: Ayub Khan
>Assignee: Shwetha G S
>Priority: Blocker
> Attachments: logs.tar.gz
>
>
> Falcon hook failing when tried to submit a hive process.
> Stack trace from log:
> {noformat}
> 2016-02-25 11:40:38,894 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> PROCESS/Aae61bc53-c2ee37c0123 is published into config store (AUDIT:229)
> 2016-02-25 11:40:38,894 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> Submit successful: (process): Aae61bc53-c2ee37c0123 
> (AbstractEntityManager:417)
> 2016-02-25 11:40:38,895 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> {Action:submit, Dimensions:{colo=NULL, entityType=process}, Status: 
> SUCCEEDED, Time-taken:350536678 ns} (METRIC:38)
> 2016-02-25 11:40:38,896 DEBUG - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:] ~ Audit: hrt_qa/172.22.101.126 
> performed request 
> http://apathan-atlas-erie-tp-testing-3.novalocal:15000/api/entities/submit/process
[jira] [Commented] (ATLAS-474) Server does not start if the type is updated with same super type class information

2016-03-03 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177463#comment-15177463
 ] 

Shwetha G S commented on ATLAS-474:
---

It doesn't make sense for superTypes to be a list; it should be a set. Instead 
of fixing the type store, shouldn't we fix the type system?
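
The point about superTypes being a set rather than a list can be sketched as 
follows. This is a hypothetical helper (`SuperTypeCheck` is not an Atlas class); 
it only illustrates collapsing duplicate superType names, which is what the 
startup failure below trips over.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

// Hypothetical sketch: treat declared superTypes with set semantics.
public class SuperTypeCheck {

    // Collapse duplicate superType names while preserving declaration order;
    // a LinkedHashSet gives set semantics without reordering the list.
    static List<String> dedupe(List<String> superTypes) {
        return new ArrayList<>(new LinkedHashSet<>(superTypes));
    }

    public static void main(String[] args) {
        // The duplicate entry is what causes "extends superType ... multiple times".
        List<String> declared =
                Arrays.asList("superClassa3ozcd7yra", "superClassa3ozcd7yra");
        System.out.println(dedupe(declared));  // prints [superClassa3ozcd7yra]
    }
}
```

With set semantics in the type system itself, a repeated update request carrying 
the same superType would be idempotent instead of failing on restart.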

> Server does not start if the type is updated with same super type class 
> information
> ---
>
> Key: ATLAS-474
> URL: https://issues.apache.org/jira/browse/ATLAS-474
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
> Environment: sandbox
>Reporter: Chethana
>Assignee: David Kantor
>Priority: Blocker
> Attachments: rb44100.patch
>
>
> Create a class with a superType class.
> Use the update API and do not change the request used.
> Restart the Atlas server.
> It fails with an exception:
> K":1},"pattern":"static","timestamp":"1454921806183"} stored data: 
> {"version":1,"subscription":{"ATLAS_HOOK":1},"pattern":"static","timestamp":"1454921372384"}
>  (ZkUtils$:68)
> 2016-02-09 00:00:02,149 INFO  - [ZkClient-EventThread-91-localhost:9026:] ~ I 
> wrote this conflicted ephemeral node 
> [{"version":1,"subscription":{"ATLAS_HOOK":1},"pattern":"static","timestamp":"1454921806183"}]
>  at /consumers/atlas/ids/atlas_Chethanas-MBP.local-1454412213224-de1ce8e6 a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (ZkUtils$:68)
> 2016-02-09 00:00:02,554 INFO  - [ProcessThread(sid:0 cport:-1)::] ~ Got 
> user-level KeeperException when processing sessionid:0x152a1b9238e0051 
> type:create cxid:0x3f0bf zxid:0x1c0ec txntype:-1 reqpath:n/a Error 
> Path:/consumers/atlas/ids/atlas_Chethanas-MBP.local-1454412213224-de1ce8e6 
> Error:KeeperErrorCode = NodeExists for 
> /consumers/atlas/ids/atlas_Chethanas-MBP.local-1454412213224-de1ce8e6 
> (PrepRequest...skipping...
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
> at org.eclipse.jetty.server.Server.doStart(Server.java:354)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:88)
> at org.apache.atlas.Atlas.main(Atlas.java:107)
> Caused by: java.lang.RuntimeException: org.apache.atlas.AtlasException: Type 
> classa3ozcd7yra extends superType superClassa3ozcd7yra multiple times
> at 
> org.apache.atlas.services.DefaultMetadataService.restoreTypeSystem(DefaultMetadataService.java:113)
> at 
> org.apache.atlas.services.DefaultMetadataService.&lt;init&gt;(DefaultMetadataService.java:100)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> com.google.inject.internal.DefaultConstructionProxyFactory$2.newInstance(DefaultConstructionProxyFactory.java:86)
> at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:105)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ATLAS-537) Falcon hook failing when tried to submit a process which creates a hive table.

2016-03-03 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-537:
--
Attachment: ATLAS-537.patch

> Falcon hook failing when tried to submit a process which creates a hive table.
> --
>
> Key: ATLAS-537
> URL: https://issues.apache.org/jira/browse/ATLAS-537
> Project: Atlas
>  Issue Type: Bug
>Affects Versions: trunk
>Reporter: Ayub Khan
>Assignee: Shwetha G S
>Priority: Blocker
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-537.patch, logs.tar.gz
>
>
> The Falcon hook fails when trying to submit a Hive process.
> Stack trace from log:
> {noformat}
> 2016-02-25 11:40:38,894 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> PROCESS/Aae61bc53-c2ee37c0123 is published into config store (AUDIT:229)
> 2016-02-25 11:40:38,894 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> Submit successful: (process): Aae61bc53-c2ee37c0123 
> (AbstractEntityManager:417)
> 2016-02-25 11:40:38,895 INFO  - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:hrt_qa:POST//entities/submit/process] ~ 
> {Action:submit, Dimensions:{colo=NULL, entityType=process}, Status: 
> SUCCEEDED, Time-taken:350536678 ns} (METRIC:38)
> 2016-02-25 11:40:38,896 DEBUG - [479730212@qtp-989447607-2 - 
> 9f45adf1-2264-420a-8a4d-9d7a729f34b8:] ~ Audit: hrt_qa/172.22.101.126 
> performed request 
> http://apathan-atlas-erie-tp-testing-3.novalocal:15000/api/entities/submit/process
>  (172.22.101.123) at time 2016-02-25T11:40Z (FalconAuditFilter:86)
> 2016-02-25 11:40:38,896 INFO  - [Atlas Logger 0:] ~ Entered Atlas hook for 
> Falcon hook operation ADD_PROCESS (FalconHook:167)
> 2016-02-25 11:40:39,625 INFO  - [Atlas Logger 0:] ~ 0: Opening raw store with 
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 
> (HiveMetaStore:590)
> 2016-02-25 11:40:39,650 INFO  - [Atlas Logger 0:] ~ ObjectStore, initialize 
> called (ObjectStore:294)
> 2016-02-25 11:40:39,965 INFO  - [Atlas Logger 0:] ~ Property 
> hive.metastore.integral.jdo.pushdown unknown - will be ignored 
> (Persistence:77)
> 2016-02-25 11:40:39,966 INFO  - [Atlas Logger 0:] ~ Property 
> datanucleus.cache.level2 unknown - will be ignored (Persistence:77)
> 2016-02-25 11:40:41,822 WARN  - [Atlas Logger 0:] ~ Retrying creating default 
> database after error: Unexpected exception caught. (HiveMetaStore:623)
> javax.jdo.JDOFatalInternalException: Unexpected exception caught.
>   at 
> javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1193)
>   at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
>   at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:374)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:403)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:296)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:263)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.&lt;init&gt;(RawStoreProxy.java:57)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:594)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.&lt;init&gt;(RetryingHMSHandler.java:66)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.&lt;init&gt;(HiveMetaStoreClient.java:199)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.&lt;init&gt;(SessionHiveMetaStoreClient.java:74)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> 

[jira] [Updated] (ATLAS-539) Store for entity update audit

2016-03-07 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-539:
--
Attachment: ATLAS-539.patch

> Store for entity update audit
> -
>
> Key: ATLAS-539
> URL: https://issues.apache.org/jira/browse/ATLAS-539
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-539.patch
>
>
> We need to store the entity update events in some store. The supported search 
> should return all events for a given entity id within some time range.
> Two choices are:
> 1. Existing graph db - We can create a vertex for every update with 
> properties for entity id, timestamp, action and details. This will create 
> disjoint vertices. The direct gremlin search is enough to retrieve all events 
> for the entity. 
> Pros - We already have configurations for graph and utilities to store/get 
> from graph
> Cons - It will create extra data and doesn't fit the graph model
> 2. HBase - Store events with key = entity id + timestamp and columns for 
> action and details. The table scan supports the required search
> Pros - Fits the data model
> Cons - We will need the configurations and code to read and write from hbase
> In either case, we should expose an interface so that alternative 
> implementations can be added
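
The HBase option above hinges on the row-key layout: key = entity id + 
timestamp, so a range scan over the key prefix returns all events for one 
entity ordered by time. The sketch below mimics that with an in-memory sorted 
map; all names (`EntityAuditSketch`, `AuditEvent`) are illustrative, not the 
actual Atlas interface.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical sketch of the proposed audit store, using a TreeMap as a
// stand-in for an HBase table to demonstrate the key design.
public class EntityAuditSketch {

    static class AuditEvent {
        final String entityId;
        final long timestamp;
        final String action;
        AuditEvent(String entityId, long timestamp, String action) {
            this.entityId = entityId;
            this.timestamp = timestamp;
            this.action = action;
        }
    }

    // Sorted structure playing the role of the HBase table.
    private final NavigableMap<String, AuditEvent> store = new TreeMap<>();

    // Row key: entity id + zero-padded timestamp, so lexicographic order of
    // keys matches numeric order of timestamps within one entity.
    private String key(String entityId, long ts) {
        return entityId + ":" + String.format("%019d", ts);
    }

    void record(AuditEvent e) {
        store.put(key(e.entityId, e.timestamp), e);
    }

    // Scan analogue: all events for an entity with from <= timestamp < to.
    List<AuditEvent> query(String entityId, long from, long to) {
        return new ArrayList<>(
                store.subMap(key(entityId, from), key(entityId, to)).values());
    }
}
```

The zero-padding matters: without it, key "e1:9" would sort after "e1:10" and 
the range scan would miss events, which is exactly the kind of detail the 
proposed interface should hide from callers.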



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ATLAS-479) Add description for different types during create time

2016-03-01 Thread Shwetha G S (JIRA)

[ 
https://issues.apache.org/jira/browse/ATLAS-479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173889#comment-15173889
 ] 

Shwetha G S commented on ATLAS-479:
---

I too ran 'mvn clean install' on my Mac laptop

> Add description for different types during create time
> --
>
> Key: ATLAS-479
> URL: https://issues.apache.org/jira/browse/ATLAS-479
> Project: Atlas
>  Issue Type: Sub-task
>Affects Versions: 0.6-incubating
>Reporter: Neeru Gupta
>Assignee: Neeru Gupta
> Fix For: 0.7-incubating
>
> Attachments: graycol.gif, rb43531(5).patch
>
>
> Ability to specify description while creating different types like Struct, 
> Enum, Class and Trait type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (ATLAS-508) Apache nightly build failure - UnsupportedOperationException: Not a single key: __traitNames

2016-03-02 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S reopened ATLAS-508:
---
  Assignee: Shwetha G S  (was: David Kantor)

The builds are still failing with the same error. I think the issue is due to 
the ordering of test classes on the build machine. Checking

> Apache nightly build failure - UnsupportedOperationException: Not a single 
> key: __traitNames
> 
>
> Key: ATLAS-508
> URL: https://issues.apache.org/jira/browse/ATLAS-508
> Project: Atlas
>  Issue Type: Bug
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>Priority: Critical
>
> https://builds.apache.org/job/apache-atlas-nightly/184/console
> {noformat}
> Failed tests: 
>   GremlinTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang.Unsu...
>   LineageQueryTest.beforeAll:41 » Script javax.script.ScriptException: 
> java.lang...
> Tests run: 200, Failures: 2, Errors: 0, Skipped: 28
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] apache-atlas .. SUCCESS [ 21.262 s]
> [INFO] Apache Atlas Common ... SUCCESS [ 58.381 s]
> [INFO] Apache Atlas Typesystem ... SUCCESS [04:51 min]
> [INFO] Apache Atlas Server API ... SUCCESS [ 37.922 s]
> [INFO] Apache Atlas Client ... SUCCESS [01:12 min]
> [INFO] Apache Atlas Notification . SUCCESS [01:25 min]
> [INFO] Apache Atlas Titan  SUCCESS [01:56 min]
> [INFO] Apache Atlas Repository ... FAILURE [10:51 min]
> [INFO] Apache Atlas UI ... SKIPPED
> [INFO] Apache Atlas Web Application .. SKIPPED
> [INFO] Apache Atlas Documentation  SKIPPED
> [INFO] Apache Atlas Hive Bridge .. SKIPPED
> [INFO] Apache Atlas Falcon Bridge  SKIPPED
> [INFO] Apache Atlas Sqoop Bridge . SKIPPED
> [INFO] Apache Atlas Storm Bridge . SKIPPED
> [INFO] Apache Atlas Distribution . SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Running org.apache.atlas.query.LineageQueryTest
> Tests run: 7, Failures: 1, Errors: 0, Skipped: 6, Time elapsed: 16.236 sec 
> <<< FAILURE! - in org.apache.atlas.query.LineageQueryTest
> beforeAll(org.apache.atlas.query.LineageQueryTest)  Time elapsed: 16.159 sec  
> <<< FAILURE!
> javax.script.ScriptException: javax.script.ScriptException: 
> java.lang.UnsupportedOperationException: Not a single key: __traitNames. Use 
> addProperty instead
>   at 
> com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.setProperty(StandardTitanTx.java:755)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:244)
>   at 
> com.thinkaurelius.titan.graphdb.vertices.AbstractVertex.setProperty(AbstractVertex.java:239)
>   at 
> com.tinkerpop.blueprints.util.wrappers.batch.BatchGraph$BatchVertex.setProperty(BatchGraph.java:492)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONUtility.vertexFromJson(GraphSONUtility.java:136)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:158)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:104)
>   at 
> com.tinkerpop.blueprints.util.io.graphson.GraphSONReader.inputGraph(GraphSONReader.java:88)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
>   at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
>   at 
> org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.invoke(StaticMetaMethodSite.java:43)
>   at 
> org.codehaus.groovy.runtime.callsite.StaticMetaMethodSite.call(StaticMetaMethodSite.java:88)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
>   at 
> 

[jira] [Updated] (ATLAS-539) Store for entity audit events

2016-03-09 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-539:
--
Summary: Store for entity audit events  (was: Store for entity update audit)

> Store for entity audit events
> -
>
> Key: ATLAS-539
> URL: https://issues.apache.org/jira/browse/ATLAS-539
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 0.7-incubating
>
> Attachments: ATLAS-539-v2.patch, ATLAS-539.patch
>
>
> We need to store the entity update events in some store. The supported search 
> should return all events for a given entity id within some time range.
> Two choices are:
> 1. Existing graph db - We can create a vertex for every update with 
> properties for entity id, timestamp, action and details. This will create 
> disjoint vertices. The direct gremlin search is enough to retrieve all events 
> for the entity. 
> Pros - We already have configurations for graph and utilities to store/get 
> from graph
> Cons - It will create extra data and doesn't fit the graph model
> 2. HBase - Store events with key = entity id + timestamp and columns for 
> action and details. The table scan supports the required search
> Pros - Fits the data model
> Cons - We will need the configurations and code to read and write from hbase
> In either case, we should expose an interface so that alternative 
> implementations can be added



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ATLAS-622) Introduce soft delete

2016-04-04 Thread Shwetha G S (JIRA)

 [ 
https://issues.apache.org/jira/browse/ATLAS-622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shwetha G S updated ATLAS-622:
--
Summary: Introduce soft delete  (was: Modify delete to soft delete)

> Introduce soft delete
> -
>
> Key: ATLAS-622
> URL: https://issues.apache.org/jira/browse/ATLAS-622
> Project: Atlas
>  Issue Type: Sub-task
>Reporter: Shwetha G S
>Assignee: Shwetha G S
>
> Currently, in the entity delete API, the entity and its related 
> entities (composite entities) are deleted and there is no trace of them in 
> the system. Instead, change delete to mark the entities as deleted with 
> state=DELETED
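
The soft-delete idea in the description can be sketched minimally: flip a state 
flag instead of removing the entry, so deleted entities remain queryable. The 
class and method names below are illustrative, not the actual Atlas delete 
handler.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of soft delete: entities are marked state=DELETED
// rather than physically removed, leaving a trace in the system.
public class SoftDeleteSketch {

    enum State { ACTIVE, DELETED }

    private final Map<String, State> entities = new HashMap<>();

    void create(String id) {
        entities.put(id, State.ACTIVE);
    }

    // Soft delete: change the state in place; the entry itself survives.
    void delete(String id) {
        entities.computeIfPresent(id, (k, v) -> State.DELETED);
    }

    State stateOf(String id) {
        return entities.get(id);
    }
}
```

A real implementation would also propagate the DELETED state to composite 
entities and filter them out of default searches, but the core change is the 
same: state transition instead of removal.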



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

