[jira] [Created] (ATLAS-1909) Import/Export : Exception while using "transforms" option during import
Sharmadha Sainath created ATLAS-1909: Summary: Import/Export : Exception while using "transforms" option during import Key: ATLAS-1909 URL: https://issues.apache.org/jira/browse/ATLAS-1909 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Blocker Attachments: Import_transforms_error.txt 1. Created a table t5 in cluster1. 2. Created the export zip file for table t5 by firing the export API on cluster1. 3. Created the t5transforms.json file: {code} { "options": { "transforms": { "hive_table": { "qualifiedName": [ "replace:@cl1:@cl2" ] } } } } {code} 4. Fired the import API on cluster2 using the curl call: {code} curl -v -g -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F request=@t5transforms.json -F data=@t5.zip "http://cluster2:21000/api/atlas/admin/import" {code} The request failed with a 500 Internal Server Error and the following message: {code} {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: java.lang.NullPointerException"} {code} Attached the complete exception stack trace found in cluster2's application logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
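For reference, a "replace:<old>:<new>" transform as used above rewrites matching substrings in the named attribute. A minimal sketch of that behavior (the helper name and exact semantics are illustrative assumptions, not Atlas source code):

```python
# Minimal sketch of how a "replace:<old>:<new>" transform rewrites an
# attribute value. Mirrors the transform spec syntax shown above; the
# helper name and semantics are assumptions, not Atlas source code.
def apply_replace_transform(value, spec):
    # spec looks like "replace:@cl1:@cl2"
    action, old, new = spec.split(":", 2)
    if action != "replace":
        raise ValueError("unsupported transform: " + action)
    return value.replace(old, new)

# Rewriting a qualifiedName from cluster cl1 to cl2:
print(apply_replace_transform("default.t5@cl1", "replace:@cl1:@cl2"))
# -> default.t5@cl2
```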
[jira] [Created] (ATLAS-1911) UI / Search using entity and trait attributes : Specifying no value for one of the filters makes the complete filter null.
Sharmadha Sainath created ATLAS-1911: Summary: UI / Search using entity and trait attributes : Specifying no value for one of the filters makes the complete filter null. Key: ATLAS-1911 URL: https://issues.apache.org/jira/browse/ATLAS-1911 Project: Atlas Issue Type: Bug Components: atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath 1. Created a kafka_topic with name="kt1" and an empty description. 2. Searched for kafka_topic in typeName and applied filters: 1. Name = kt1 2. Description = (empty Description) 3. The body sent in the POST request is: {code} {"entityFilters":null,"tagFilters":null,"query":null,"excludeDeletedEntities":true,"limit":25,"typeName":"kafka_topic","classification":null} {code} entityFilters is null, and the Name filter that was applied is not reflected in the UI or in the request body. The search result was equivalent to typeName='kafka_topic'. The same applies to tag filters. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
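For comparison, a request in which the Name filter survived would carry a populated entityFilters block. The shape below follows the POST bodies shown in ATLAS-1924 later in this thread; it is an illustrative reconstruction, not a captured request:

```python
import json

# Reconstruction of the search request the UI should have sent, with
# the "Name = kt1" filter preserved instead of entityFilters:null.
# Field names follow the POST bodies shown in related reports (e.g.
# ATLAS-1924); this is illustrative, not a captured request.
body = {
    "entityFilters": {
        "condition": "AND",
        "criterion": [
            {"attributeName": "name", "operator": "=", "attributeValue": "kt1"}
        ],
    },
    "tagFilters": None,
    "query": None,
    "excludeDeletedEntities": True,
    "limit": 25,
    "typeName": "kafka_topic",
    "classification": None,
}
print(json.dumps(body))
```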
[jira] [Created] (ATLAS-1914) UI - Search using entity and trait attributes : "Clear" button doesn't clear the filter applied to a type
Sharmadha Sainath created ATLAS-1914: Summary: UI - Search using entity and trait attributes : "Clear" button doesn't clear the filter applied to a type Key: ATLAS-1914 URL: https://issues.apache.org/jira/browse/ATLAS-1914 Project: Atlas Issue Type: Bug Components: atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath 1. Searched type "hive_table" and applied filter "name = table1". 2. Clicked Search. 3. Clicked the Clear button, which cleared the type selected. 4. Searched type "hive_table" again. The search had the same "name" filter that was applied before clearing. Expected the Clear button to clear the filter too. Is this expected? Should there be a Save button instead, to explicitly save the queries? CC : [~kevalbhatt] [~apoorvnaik] [~ashutoshm] [~madhan.neethiraj] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1915) UI - Search using entity and trait attributes - Refresh Button fires the search query (Basic/DSL) again.
Sharmadha Sainath created ATLAS-1915: Summary: UI - Search using entity and trait attributes - Refresh Button fires the search query (Basic/DSL) again. Key: ATLAS-1915 URL: https://issues.apache.org/jira/browse/ATLAS-1915 Project: Atlas Issue Type: Bug Components: atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Fix For: 0.9-incubating Clicking the Refresh button to refresh types (data-id : refreshBtn) makes the following 2 queries: 1. /api/atlas/v2/types/typedefs/headers 2. Search query Before the new search feature was in place, only query no. 1 was made. In the Advanced search tab, when the refresh button is clicked without any type or query, the following invalid query is made in addition to query no. 1: {code} http://localhost:21000/api/atlas/v2/search? {code} which throws a 500 Internal Server Error. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-1915) UI - Search using entity and trait attributes - Refresh Button fires the search query (Basic/DSL) again.
[ https://issues.apache.org/jira/browse/ATLAS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1915: - Description: On clicking the Refresh Button to refresh types (data-id : refreshBtn) , makes the following 2 queries : 1./api/atlas/v2/types/typedefs/headers 2.Search query Before the new search feature in place , only query no.1 was made. In Advanced search tab , when refresh button is clicked without any type or query , following invalid query is made in addition to query no.1 {code} http://localhost:21000/api/atlas/v2/search? {code} which throws 500 Internal server error. CC :[~kevalbhatt] was: On clicking the Refresh Button to refresh types (data-id : refreshBtn) , makes the following 2 queries : 1./api/atlas/v2/types/typedefs/headers 2.Search query Before the new search feature in place , only query no.1 was made. In Advanced search tab , when refresh button is clicked without any type or query , following invalid query is made in addition to query no.1 {code} http://localhost:21000/api/atlas/v2/search? {code} which throws 500 Internal server error. > UI - Search using entity and trait attributes - Refresh Button fires the > search query (Basic/DSL) again. > > > Key: ATLAS-1915 > URL: https://issues.apache.org/jira/browse/ATLAS-1915 > Project: Atlas > Issue Type: Bug > Components: atlas-webui >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath > Fix For: 0.9-incubating > > > On clicking the Refresh Button to refresh types (data-id : refreshBtn) , > makes the following 2 queries : > 1./api/atlas/v2/types/typedefs/headers > 2.Search query > Before the new search feature in place , only query no.1 was made. > In Advanced search tab , when refresh button is clicked without any type or > query , following invalid query is made in addition to query no.1 > {code} > http://localhost:21000/api/atlas/v2/search? > {code} > which throws 500 Internal server error. 
> CC :[~kevalbhatt] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1916) UI - Search using entity and trait attributes - Using Date picker to change only month/year
Sharmadha Sainath created ATLAS-1916: Summary: UI - Search using entity and trait attributes - Using Date picker to change only month/year Key: ATLAS-1916 URL: https://issues.apache.org/jira/browse/ATLAS-1916 Project: Atlas Issue Type: Bug Components: atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Fix For: 0.9-incubating 1. Searched for hive_table and applied filter "Creation Time" with value 07/05/2017 12:00 AM (picked month, year and then day). 2. Then tried to modify only the year in the filter to 2016 using the date picker: selected 2016 for the year and clicked Apply. The new value didn't persist. If the month/year is changed, the date picker expects the user to explicitly pick the day again; once the day is picked, the new year/month is persisted along with the day. However, the user can edit the text directly in the text box to change the year to 2016, and that works fine. CC : [~kevalbhatt] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1917) Search using entity and trait attributes - NOT EQUALS operator (!=) doesn't fetch results for many data types.
Sharmadha Sainath created ATLAS-1917: Summary: Search using entity and trait attributes - NOT EQUALS operator (!=) doesn't fetch results for many data types. Key: ATLAS-1917 URL: https://issues.apache.org/jira/browse/ATLAS-1917 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath 1. Created hive_tables t1, t2, t3. 2. Searched type name = hive_table with filter name = t1; t1 was fetched. 3. Searched type name = hive_table with filter name != t1; no results were found. This happens for data types like string, long and integer, but not boolean. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1918) Search using entity and trait attributes - Using wildcard search in filters
Sharmadha Sainath created ATLAS-1918: Summary: Search using entity and trait attributes - Using wildcard search in filters Key: ATLAS-1918 URL: https://issues.apache.org/jira/browse/ATLAS-1918 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Fix For: 0.9-incubating 1. Created 2 hive tables, hive_table_a and hive_table_b. 2. Searched for: {code} type name = hive_table and query = /hive_table_[ab]/ {code} The above query fetched correct results, i.e. both hive_table_a and hive_table_b. 3. Cleared the search, searched type name = hive_table and added filters: 1. The following fetched both hive_table_a and hive_table_b, which is correct: {code} name = /hive_table_[ab]/ {code} 2. The following filters fetched all hive tables in my Atlas instance: {code} name beginsWith /hive_table_[ab]/ name contains /hive_table_[ab]/ {code} 3. The following threw a 500 Internal Server Error. Attached the exception stack trace. {code} name endsWith /hive_table_[ab]/ {code} 4. No results were found on applying the following filter (filed as part of [ATLAS-1917|https://issues.apache.org/jira/browse/ATLAS-1917]): {code} name != /hive_table_[ab]/ {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
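Outside Atlas, the /hive_table_[ab]/ wildcard is an ordinary character-class pattern, so the expected result set for every operator above is just hive_table_a and hive_table_b. A plain-regex sketch of that expectation:

```python
import re

# The /hive_table_[ab]/ wildcard is a character-class pattern: it
# matches exactly hive_table_a and hive_table_b, nothing else.
pattern = re.compile(r"hive_table_[ab]")
tables = ["hive_table_a", "hive_table_b", "hive_table_c"]
matches = [t for t in tables if pattern.fullmatch(t)]
print(matches)  # -> ['hive_table_a', 'hive_table_b']
```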
[jira] [Updated] (ATLAS-1918) Search using entity and trait attributes - Using wildcard search in filters
[ https://issues.apache.org/jira/browse/ATLAS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1918: - Attachment: RegExSearchError.txt > Search using entity and trait attributes - Using wildcard search in filters > --- > > Key: ATLAS-1918 > URL: https://issues.apache.org/jira/browse/ATLAS-1918 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath > Fix For: 0.9-incubating > > Attachments: RegExSearchError.txt > > > 1. Created 2 hive tables hive_table_a and hive_table_b > 2. Searched for : > {code} > type name = hive_table and query = /hive_table_[ab]/ . > {code} > The above query fetched correct results ,i.e both hive_table_a and > hive_table_b. > 3.Cleared search. Searched type name = hive_table and added filters : > > 1. The following fetched both hive_table_a and hive_table_b which is correct > {code} > name = /hive_table_[ab]/ > {code} > > 2. The following filters fetched all hive tables in my Atlas instance > {code} > name beginsWith /hive_table_[ab]/ > name contains /hive_table_[ab]/ > {code} > > 3. The following threw 500 Internal Server error. Attached the exception > stack trace. > {code} > name endsWith /hive_table_[ab]/ > {code} > 4. No results were found on applying the following filter .(filed as part of > [ATLAS-1917|https://issues.apache.org/jira/browse/ATLAS-1917]) > {code} > name != /hive_table_[ab]/ > {code} > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1920) UI : Search using entity and trait attributes - Operators for enum and boolean data type filters.
Sharmadha Sainath created ATLAS-1920: Summary: UI : Search using entity and trait attributes - Operators for enum and boolean data type filters. Key: ATLAS-1920 URL: https://issues.apache.org/jira/browse/ATLAS-1920 Project: Atlas Issue Type: Bug Components: atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Minor Fix For: 0.9-incubating Attribute filters for the enum and boolean data types offer the same operators as the numerical data types (<,>,<=,>=,!=,=). For such data types, the EQUALS and NOT EQUALS operators would be sufficient. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1921) UI : Search using entity and trait attributes : UI doesn't perform range check and allows providing out of bounds values for integral and float data types.
Sharmadha Sainath created ATLAS-1921: Summary: UI : Search using entity and trait attributes : UI doesn't perform range check and allows providing out of bounds values for integral and float data types. Key: ATLAS-1921 URL: https://issues.apache.org/jira/browse/ATLAS-1921 Project: Atlas Issue Type: Bug Components: atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Minor When applying a filter for integral/float data types, the number picker (the up and down buttons) doesn't do a range check according to the data type, and lets the user go up and down and set any value. Hence, when the filter is applied and the search is made, a 500 Internal Server Error is thrown. Ex: the maximum value for the integer data type is 2 ^31^ - 1, but the number picker for integer allows the user to go to 2147483648, which throws an Invalid number exception when searched. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
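A client-side guard of the kind the number picker is missing can be sketched as follows (the bounds are the standard 32-bit signed integer limits; the function itself is illustrative, not Atlas UI code):

```python
# Client-side range check of the kind the number picker lacks; the
# bounds are the standard 32-bit signed integer limits.
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def in_int_range(value):
    return INT_MIN <= value <= INT_MAX

print(in_int_range(2147483647))   # True: largest valid int32
print(in_int_range(2147483648))   # False: one past the max, should be rejected
```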
[jira] [Updated] (ATLAS-1922) Search using entity and trait attributes - On applying contains/ends with/begins with filter on a string with '/' (forward slash) in it, search results are empty.
[ https://issues.apache.org/jira/browse/ATLAS-1922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1922: - Description: Created a hive_table employee. The hive_storagedesc for the table had the location value "hdfs://localhost:8020/apps/hive/warehouse/employee" . Selected type as hive_storagedesc , and applied filter {code} Location = hdfs\://localhost\:8020/apps/hive/warehouse/employee {code} which listed the storage descriptor of employee table Selected type as hive_storagedesc and applied "contains" filter {code} Location contains /apps/hive/warehouse/employee {code} The above query didn't fetch any result .The same applies for ends with , begins with operators. Hence "/" works with "=" operator but not with others. Application logs for the search with contains operator: {code} Converted query string with 3 replacements: [v."__typeName": (hive_storagedesc) AND ( v."hive_storagedesc.location": (*/apps/hive/warehouse/employee*) ) AND v."__state":ACTIVE] => [iyt_t: (hive_storagedesc) AND ( 9hc5_t: (*/apps/hive/warehouse/employee*) ) AND b2d_t:ACTIVE] (IndexSerializer:648) {code} was: Created a hive_table employee. The hive_storagedesc for the table had the location value "hdfs://localhost:8020/apps/hive/warehouse/employee" . Selected type as hive_storagedesc , and applied filter {code} Location = hdfs\://localhost:\8020/apps/hive/warehouse/employee {code} which listed the storage descriptor of employee table Selected type as hive_storagedesc and applied "contains" filter {code} Location contains /apps/hive/warehouse/employee {code} The above query didn't fetch any result .The same applies for ends with , begins with operators. Hence "/" works with "=" operator but not with others. 
Application logs for the search with contains operator: {code} Converted query string with 3 replacements: [v."__typeName": (hive_storagedesc) AND ( v."hive_storagedesc.location": (*/apps/hive/warehouse/employee*) ) AND v."__state":ACTIVE] => [iyt_t: (hive_storagedesc) AND ( 9hc5_t: (*/apps/hive/warehouse/employee*) ) AND b2d_t:ACTIVE] (IndexSerializer:648) {code} > Search using entity and trait attributes - On applying contains/ends > with/begins with filter on a string with '/' (forward slash) in it, search > results are empty. > -- > > Key: ATLAS-1922 > URL: https://issues.apache.org/jira/browse/ATLAS-1922 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath > Fix For: 0.9-incubating > > > Created a hive_table employee. The hive_storagedesc for the table had the > location value "hdfs://localhost:8020/apps/hive/warehouse/employee" . > Selected type as hive_storagedesc , and applied filter > {code} > Location = hdfs\://localhost\:8020/apps/hive/warehouse/employee > {code} > which listed the storage descriptor of employee table > Selected type as hive_storagedesc and applied "contains" filter > {code} > Location contains /apps/hive/warehouse/employee > {code} > The above query didn't fetch any result .The same applies for ends with , > begins with operators. > Hence "/" works with "=" operator but not with others. > Application logs for the search with contains operator: > {code} > Converted query string with 3 replacements: [v."__typeName": > (hive_storagedesc) AND ( v."hive_storagedesc.location": > (*/apps/hive/warehouse/employee*) ) AND v."__state":ACTIVE] => [iyt_t: > (hive_storagedesc) AND ( 9hc5_t: (*/apps/hive/warehouse/employee*) ) AND > b2d_t:ACTIVE] (IndexSerializer:648) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1922) Search using entity and trait attributes - On applying contains/ends with/begins with filter on a string with '/' (forward slash) in it, search results are empty.
Sharmadha Sainath created ATLAS-1922: Summary: Search using entity and trait attributes - On applying contains/ends with/begins with filter on a string with '/' (forward slash) in it, search results are empty. Key: ATLAS-1922 URL: https://issues.apache.org/jira/browse/ATLAS-1922 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Fix For: 0.9-incubating Created a hive_table employee. The hive_storagedesc for the table had the location value "hdfs://localhost:8020/apps/hive/warehouse/employee". Selected type as hive_storagedesc and applied filter {code} Location = hdfs\://localhost\:8020/apps/hive/warehouse/employee {code} which listed the storage descriptor of the employee table. Selected type as hive_storagedesc and applied the "contains" filter {code} Location contains /apps/hive/warehouse/employee {code} The above query didn't fetch any result. The same applies for the ends with and begins with operators. Hence "/" works with the "=" operator but not with the others. Application logs for the search with the contains operator: {code} Converted query string with 3 replacements: [v."__typeName": (hive_storagedesc) AND ( v."hive_storagedesc.location": (*/apps/hive/warehouse/employee*) ) AND v."__state":ACTIVE] => [iyt_t: (hive_storagedesc) AND ( 9hc5_t: (*/apps/hive/warehouse/employee*) ) AND b2d_t:ACTIVE] (IndexSerializer:648) {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
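The converted query in the log is an index query, and in Lucene-style query syntax characters such as '/' and ':' are special and normally need backslash-escaping. A generic escaping helper, assuming the index backend follows Lucene's documented special-character set (the helper is illustrative, not Atlas code):

```python
# Characters treated as special by Lucene-style query parsers. Values
# embedded in index queries generally need these escaped; the list is
# Lucene's documented set and the helper is illustrative, not Atlas code.
LUCENE_SPECIAL = set('+-&|!(){}[]^"~*?:\\/')

def escape_for_index_query(text):
    return "".join("\\" + c if c in LUCENE_SPECIAL else c for c in text)

print(escape_for_index_query("/apps/hive/warehouse/employee"))
# -> \/apps\/hive\/warehouse\/employee
```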
[jira] [Created] (ATLAS-1924) Search using entity and trait attributes - Filtering integral and float data types with != (NOT EQUALS) on negative number throws exception
Sharmadha Sainath created ATLAS-1924: Summary: Search using entity and trait attributes - Filtering integral and float data types with != (NOT EQUALS) on negative number throws exception Key: ATLAS-1924 URL: https://issues.apache.org/jira/browse/ATLAS-1924 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Fix For: 0.9-incubating Attachments: NotEqualsSearchWithNegative.txt Selected type = hive_table and applied filter {code} Retention != -1 {code} The above threw a 500 Internal Server Error. Attached the application logs. POST request body: {code} { "entityFilters":{ "condition":"AND", "criterion":[ { "attributeName":"retention", "operator":"!=", "attributeValue":"-1" } ] }, "tagFilters":null, "query":null, "excludeDeletedEntities":true, "limit":25, "typeName":"hive_table", "classification":null } {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-1924) Search using entity and trait attributes - Filtering integral and float data types with != (NOT EQUALS) on negative number throws exception
[ https://issues.apache.org/jira/browse/ATLAS-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1924: - Description: Selected type = hive_table and applied filter {code} Retention != -1 {code} other filters like <,>,>=,<=,= on negative numbers work as expected. This happens for all integral and floating point data types . The above threw 500 internal server error. Attached the application logs . POST request body : {code} { "entityFilters":{ "condition":"AND", "criterion":[ { "attributeName":"retention", "operator":"!=", "attributeValue":"-1" } ] }, "tagFilters":null, "query":null, "excludeDeletedEntities":true, "limit":25, "typeName":"hive_table", "classification":null } {code} was: Selected type = hive_table and applied filter {code} Retention != -1 {code} The above threw 500 internal server error. Attached the application logs . POST request body : {code} { "entityFilters":{ "condition":"AND", "criterion":[ { "attributeName":"retention", "operator":"!=", "attributeValue":"-1" } ] }, "tagFilters":null, "query":null, "excludeDeletedEntities":true, "limit":25, "typeName":"hive_table", "classification":null } {code} > Search using entity and trait attributes - Filtering integral and float data > types with != (NOT EQUALS) on negative number throws exception > --- > > Key: ATLAS-1924 > URL: https://issues.apache.org/jira/browse/ATLAS-1924 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath > Fix For: 0.9-incubating > > Attachments: NotEqualsSearchWithNegative.txt > > > Selected type = hive_table and applied filter > {code} > Retention != -1 > {code} > other filters like <,>,>=,<=,= on negative numbers work as expected. > This happens for all integral and floating point data types . > The above threw 500 internal server error. Attached the application logs . 
> POST request body : > {code} > { >"entityFilters":{ > "condition":"AND", > "criterion":[ > { > "attributeName":"retention", > "operator":"!=", > "attributeValue":"-1" > } > ] >}, >"tagFilters":null, >"query":null, >"excludeDeletedEntities":true, >"limit":25, >"typeName":"hive_table", >"classification":null > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-1924) Search using entity and trait attributes - Filtering integral and float data types with != (NOT EQUALS) on negative number throws exception
[ https://issues.apache.org/jira/browse/ATLAS-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1924: - Description: Selected type = hive_table and applied filter {code} Retention != -1 {code} The above throws 500 internal server error. Attached the application logs . Other filters like <,>,>=,<=,= on negative numbers work as expected. This happens for all integral and floating point data types POST request body : {code} { "entityFilters":{ "condition":"AND", "criterion":[ { "attributeName":"retention", "operator":"!=", "attributeValue":"-1" } ] }, "tagFilters":null, "query":null, "excludeDeletedEntities":true, "limit":25, "typeName":"hive_table", "classification":null } {code} was: Selected type = hive_table and applied filter {code} Retention != -1 {code} other filters like <,>,>=,<=,= on negative numbers work as expected. This happens for all integral and floating point data types . The above threw 500 internal server error. Attached the application logs . POST request body : {code} { "entityFilters":{ "condition":"AND", "criterion":[ { "attributeName":"retention", "operator":"!=", "attributeValue":"-1" } ] }, "tagFilters":null, "query":null, "excludeDeletedEntities":true, "limit":25, "typeName":"hive_table", "classification":null } {code} > Search using entity and trait attributes - Filtering integral and float data > types with != (NOT EQUALS) on negative number throws exception > --- > > Key: ATLAS-1924 > URL: https://issues.apache.org/jira/browse/ATLAS-1924 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath > Fix For: 0.9-incubating > > Attachments: NotEqualsSearchWithNegative.txt > > > Selected type = hive_table and applied filter > {code} > Retention != -1 > {code} > The above throws 500 internal server error. Attached the application logs . > Other filters like <,>,>=,<=,= on negative numbers work as expected. 
> This happens for all integral and floating point data types > POST request body : > {code} > { >"entityFilters":{ > "condition":"AND", > "criterion":[ > { > "attributeName":"retention", > "operator":"!=", > "attributeValue":"-1" > } > ] >}, >"tagFilters":null, >"query":null, >"excludeDeletedEntities":true, >"limit":25, >"typeName":"hive_table", >"classification":null > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1926) Search using entity and trait attributes - Explicit escaping by user if string has space,colon and other special characters.
Sharmadha Sainath created ATLAS-1926: Summary: Search using entity and trait attributes - Explicit escaping by user if string has space,colon and other special characters. Key: ATLAS-1926 URL: https://issues.apache.org/jira/browse/ATLAS-1926 Project: Atlas Issue Type: Bug Components: atlas-core, atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Minor Fix For: 0.9-incubating Provided a filter on the description of an entity {code} Description = topic or message queue {code} which threw a 500 Internal Server Error. On escaping the spaces with "\", like {code} Description = topic\ or\ message\ queue {code} the expected entity was fetched. This happens with other special characters too. Requiring users to explicitly escape characters is not very user friendly. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1932) Search using entity and trait attributes - Filter Attribute window keeps buffering forever on filtering a type which has same attribute that of its super type
Sharmadha Sainath created ATLAS-1932: Summary: Search using entity and trait attributes - Filter Attribute window keeps buffering forever on filtering a type which has same attribute that of its super type Key: ATLAS-1932 URL: https://issues.apache.org/jira/browse/ATLAS-1932 Project: Atlas Issue Type: Bug Components: atlas-core, atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Minor Fix For: 0.9-incubating 1. Created a type "super_type" with attributes a, b. 2. Created a type "child_type" with attributes a, c and superType as super_type. 3. Searched "child_type"; on clicking the filter button, the Filter window keeps loading forever. The following exception is seen in the Console tab: {code} Uncaught Error: Filter "a" already defined at Object.k.error (http://172.27.52.210:21000/js/libs/jQueryQueryBuilder/js/query-builder.standalone.min.js:7:11517) at g. (http://172.27.52.210:21000/js/libs/jQueryQueryBuilder/js/query-builder.standalone.min.js:6:12273) at Array.forEach (native) at g.checkFilters (http://172.27.52.210:21000/js/libs/jQueryQueryBuilder/js/query-builder.standalone.min.js:6:12171) at new g (http://172.27.52.210:21000/js/libs/jQueryQueryBuilder/js/query-builder.standalone.min.js:6:6589) at n.fn.init.$.fn.queryBuilder (http://172.27.52.210:21000/js/libs/jQueryQueryBuilder/js/query-builder.standalone.min.js:7:12735) at n.onRender (http://172.27.52.210:21000/js/views/search/QueryBuilderView.js:149:37) at http://172.27.52.210:21000/js/libs/backbone-marionette/backbone.marionette.min.js:20:7823 at n.triggerMethod (http://172.27.52.210:21000/js/libs/backbone-marionette/backbone.marionette.min.js:20:20703) at n.render (http://172.27.52.210:21000/js/libs/backbone-marionette/backbone.marionette.min.js:20:21699) {code} Marking this issue as minor because: 1. Creating a child type with the same attribute as its parent type is quite a rare case. 2. During tag creation/update, the Atlas UI doesn't allow adding attributes to a child tag that duplicate those of its parent tag, but it is possible via REST. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-1874) V2 DSL search query does not support for count() but v1 do.
[ https://issues.apache.org/jira/browse/ATLAS-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16081686#comment-16081686 ] Sharmadha Sainath commented on ATLAS-1874: -- [~grahamwallis] , _=1497434602692 is the timestamp at which the query was fired. > V2 DSL search query does not support for count() but v1 do. > --- > > Key: ATLAS-1874 > URL: https://issues.apache.org/jira/browse/ATLAS-1874 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.8.1-incubating >Reporter: Ayub Pathan > Fix For: 0.9-incubating, 0.8.1-incubating > > > v1 DSL query supports count() > {noformat} > curl -u admin:admin > 'http://ctr-e133-1493418528701-113468-01-02.hwx.site:21000/api/atlas/discovery/search/dsl?query=hive_table%20where%20db.name%3D%22default%22%20select%20count()%20as%20%27count%27' > | python -m json.tool > % Total% Received % Xferd Average Speed TimeTime Time > Current > Dload Upload Total SpentLeft Speed > 100 5370 5370 0339 0 --:--:-- 0:00:01 --:--:-- 339 > { > "count": 1, > "dataType": { > "attributeDefinitions": [ > { > "dataTypeName": "long", > "isComposite": false, > "isIndexable": false, > "isUnique": false, > "multiplicity": { > "isUnique": false, > "lower": 0, > "upper": 1 > }, > "name": "count", > "reverseAttributeName": null > } > ], > "typeDescription": null, > "typeName": "__tempQueryResultStruct286", > "typeVersion": "1.0" > }, > "query": "hive_table where db.name=\"default\" select count() as 'count'", > "queryType": "dsl", > "requestId": "pool-2-thread-9 - 47389e3f-bbf1-4209-8e50-8a3235a7e5a9", > "results": [ > { > "$typeName$": "__tempQueryResultStruct286", > "count": 68 > } > ] > } > {noformat} > v2 DSL search query does not > {noformat} > curl -u admin:admin > 'http://ctr-e133-1493418528701-113468-01-02.hwx.site:21000/api/atlas/v2/search/dsl?limit=25&excludeDeletedEntities=true&query=where+db.name%3D%22default%22+select+count()+as+%27count%27&typeName=hive_table&_=1497434602692' > | 
python -m json.tool > { > "queryText": "`hive_table` where db.name=\"default\" select count() as > 'count'", > "queryType": "DSL" > } > {noformat} > *From the initial analysis, it seems like, V2 API response does not have the > count attribute which is cause for this failure.* might want to consider > adding the count in V2? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
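For reference, the count in the v1 response above sits inside results[0], so a client can extract it like this (response trimmed to the relevant fields):

```python
import json

# Extracting the count from the v1 DSL response shown above: the value
# lives in results[0]["count"]. Response trimmed to the relevant fields.
v1_response = json.loads("""
{
  "query": "hive_table where db.name=\\"default\\" select count() as 'count'",
  "queryType": "dsl",
  "results": [ { "$typeName$": "__tempQueryResultStruct286", "count": 68 } ]
}
""")
print(v1_response["results"][0]["count"])  # -> 68
```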
[jira] [Created] (ATLAS-1939) Export/Import Regression : NPE during import
Sharmadha Sainath created ATLAS-1939: Summary: Export/Import Regression : NPE during import Key: ATLAS-1939 URL: https://issues.apache.org/jira/browse/ATLAS-1939 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Blocker Attachments: Import_NPE.txt Exported a hive_table, created the zip file t5.zip and tried to import it into another cluster using {code} curl -v -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" --data-binary @t5.zip "http://host2:21000/api/atlas/admin/import" {code} The import request failed with a 500 Internal Server Error, with an NPE in the application logs. Attached the exception stack trace. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16082337#comment-16082337 ] Sharmadha Sainath commented on ATLAS-1939: -- [~ashutoshm] I am trying a basic import without any options. {code} curl -v -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F data=@t5.zip "http://localhost:21000/api/atlas/admin/import" {code} It still fails with an NPE. [^Import_error_NPE2.txt] But when used with the options file, the import works fine: {code} curl -v -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F data=@t5.zip "http://localhost:21000/api/atlas/admin/import" -F request=@t5transform.json {code} > Export/Import Regression : NPE during import > > > Key: ATLAS-1939 > URL: https://issues.apache.org/jira/browse/ATLAS-1939 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Priority: Blocker > Attachments: Import_error_NPE2.txt, Import_NPE.txt > > > Exported a hive_table and created zip file t5.zip and tried to import in into > another cluster using > {code} > curl -v -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H > "Cache-Control: no-cache" --data-binary @t5.zip > "http://host2:21000/api/atlas/admin/import"; > {code} > Import Request failed with 500 internal server error with NPE in application > logs. > Attached the exception stack trace. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1939: - Attachment: Import_error_NPE2.txt -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1942) Export/Import - Transforms option doesn't work for datatypes other than string
Sharmadha Sainath created ATLAS-1942: Summary: Export/Import - Transforms option doesn't work for datatypes other than string Key: ATLAS-1942 URL: https://issues.apache.org/jira/browse/ATLAS-1942 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Fix For: 0.9-incubating Import with the transforms option on any string attribute works and the string attribute is updated. Attributes of other data types like int, date, boolean etc. are not updated. transforms.json file : The following works: {code} { "options": { "transforms": "{ \"hive_table\": { \"qualifiedName\": [ \"replace:@cl1:@cl2\" ] }}" } } {code} The following doesn't work: {code} { "options": { "transforms": "{ \"hive_table\": { \"retention\": [ \"replace:0:1\" ] }}" } } {code} In both cases the import is successful and no exceptions are found in the application logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
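The reported behavior is consistent with a transform that performs plain string replacement and silently skips non-string values. A minimal Python sketch of that interpretation (hypothetical illustration only, not the Atlas implementation, which is in Java):

```python
# Hypothetical sketch of a "replace:old:new" attribute transform that only
# rewrites string values. Ints, dates and booleans pass through untouched,
# matching the reported behavior: qualifiedName is updated, retention is not.
def apply_replace(value, old, new):
    if isinstance(value, str):
        return value.replace(old, new)
    return value  # non-string attribute: the transform silently no-ops

print(apply_replace("default.t5@cl1", "@cl1", "@cl2"))  # string: updated
print(apply_replace(0, "0", "1"))                       # int (retention): unchanged
```

Under this reading, "replace:0:1" never even compares the int value 0 against the string "0", which would explain why the import succeeds with no exception yet leaves the attribute untouched.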
[jira] [Created] (ATLAS-1948) Importing hive_table in a database which is a CTAS of another table in different database throws exception due to export order.
Sharmadha Sainath created ATLAS-1948: Summary: Importing hive_table in a database which is a CTAS of another table in different database throws exception due to export order. Key: ATLAS-1948 URL: https://issues.apache.org/jira/browse/ATLAS-1948 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Critical Fix For: 0.9-incubating Attachments: ImportTransformsErrorOnCTASonDiffDB.txt 1. Created 2 databases db1, db2 in cluster1 2. Created 2 tables: 1. db1.t1 2. db2.t2 as select * from db1.t1 3. Exported db1.t1 into a zip file. 4. Imported the zip file into cluster2 with the transforms option : {code} { "options": { "transforms": "{ \"hive_column\": { \"qualifiedName\": [ \"replace:cl1:cl2\" ]} }" } } {code} 5. Import fails with {code} {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: ObjectId is not valid AtlasObjectId{guid='51c77c1e-265e-46ab-bbb5-5316cf80a53c', typeName='hive_column', uniqueAttributes={}}"} {code} Only db1.t1 is imported into Atlas, without any lineage. Attached the exception stack trace. After this, exporting db2.t2 and importing it completes successfully. That is, the first import of either db1.t1 or db2.t2 fails with the exception; the next import succeeds. The exception *doesn't* happen and the tables are successfully imported if both tables are in a single database. The export order when the tables are in the same db is 1. table1, 2. db, 3. table2, 4. hive_process, 5. hive_column_lineage. If the tables are in different dbs, the order is 1. table1, 2. db1, 3. hive_process, 4. hive_column_lineage, 5. ctas table, 6. db2, which is possibly causing the issue. 
When cluster2 starts importing, it imports table1 and db1; when it comes to hive_column_lineage, it finds that the column specified in hive_column_lineage is not in cluster2 yet, since the ctas table comes after hive_column_lineage in the import order, and it throws "ObjectId is not valid AtlasObjectId{guid='51c77c1e-265e-46ab-bbb5-5316cf80a53c', typeName='hive_column' ". Thanks [~ayubkhan] for the analysis. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
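One general way to make import robust against such orderings is to resolve entity dependencies before creation, e.g. a topological sort over referenced ObjectIds. A hypothetical Python sketch (the entity names and dependency edges below are illustrative, taken from the failing cross-database case described above; this is not how Atlas is implemented):

```python
from collections import defaultdict, deque

# Hypothetical sketch: order entities so each is imported only after the
# entities it references already exist (Kahn's topological sort).
def import_order(entities, refs):
    # refs maps an entity to the entities it depends on
    indegree = {e: 0 for e in entities}
    dependents = defaultdict(list)
    for e, deps in refs.items():
        for d in deps:
            dependents[d].append(e)
            indegree[e] += 1
    queue = deque(e for e in entities if indegree[e] == 0)
    order = []
    while queue:
        e = queue.popleft()
        order.append(e)
        for nxt in dependents[e]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order

# The failing cross-database export order from the report:
entities = ["table1", "db1", "hive_process", "hive_column_lineage", "ctas_table", "db2"]
refs = {
    "table1": ["db1"],
    "ctas_table": ["db2"],
    "hive_process": ["table1", "ctas_table"],
    "hive_column_lineage": ["hive_process"],
}
print(import_order(entities, refs))
```

With this ordering the ctas table (and its columns) would be created before hive_column_lineage references them, avoiding the "ObjectId is not valid" failure.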
[jira] [Updated] (ATLAS-1948) Importing hive_table in a database which is a CTAS of another table in different database throws exception due to export order.
[ https://issues.apache.org/jira/browse/ATLAS-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1948: - Attachment: db1tb1.zip -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-1948) Importing hive_table in a database which is a CTAS of another table in different database throws exception due to export order.
[ https://issues.apache.org/jira/browse/ATLAS-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086851#comment-16086851 ] Sharmadha Sainath commented on ATLAS-1948: -- [~ashutoshm], hive commands : > create database database1; > create database database2; > create table database1.table1(id int,name string); > create table database2.table2 as select * from database1.table1; Export command : {code} curl -v -X POST -u admin:admin -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d @db1tb1.json "http://host1:21000/api/atlas/admin/export" > db1tb1.zip {code} db1tb1.json contents : {code} { "itemsToExport": [ { "typeName": "hive_table", "uniqueAttributes": { "qualifiedName": "database1.table1@cl1" } } ], "options":{ "fetchType":"full" } } {code} Zip file after export : [^db1tb1.zip] Import command : {code} curl -v -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F data=@db1tb1.zip -F request=@tabletransform.json "http://host2:21000/api/atlas/admin/import" {code} tabletransform.json file : {code} { "options": { "transforms": "{ \"hive_column\": { \"qualifiedName\": [ \"replace:cl1:cl2\" ] }}" } } {code} Result : {code} {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: ObjectId is not valid AtlasObjectId{guid='a6954b5a-a5c4-4e30-bdf5-7b3408842bfa', typeName='hive_column', uniqueAttributes={}}"} {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1950) Import Transform option using supertype instead of a specific type
Sharmadha Sainath created ATLAS-1950: Summary: Import Transform option using supertype instead of a specific type Key: ATLAS-1950 URL: https://issues.apache.org/jira/browse/ATLAS-1950 Project: Atlas Issue Type: Improvement Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Assignee: Ashutosh Mestry Users can provide a transforms option during import to replace @cl1 with @cl2 in hive_table using the following JSON. {code} { "options": { "transforms": "{ \"hive_table\": { \"qualifiedName\": [ \"replace:@cl1:@cl2\" ] } }" } } {code} It would be convenient to specify a supertype like 'Asset' so that all types in the exported items which inherit from the supertype have "@cl1" replaced with "@cl2", like {code} { "options": { "transforms": "{ \"Asset\": { \"qualifiedName\": [ \"replace:@cl1:@cl2\" ] } }" } } {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
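The proposed supertype matching could be sketched as looking up transforms keyed by either the entity's type or any of its supertypes. A hypothetical Python illustration (the type-hierarchy fragment and helper name are assumptions for the sketch, not Atlas APIs):

```python
# Hypothetical sketch: select transforms whose key is the entity's type
# or one of its supertypes, so a transform on "Asset" also applies to
# hive_table, hive_db, etc.
SUPERTYPES = {  # illustrative fragment of a type hierarchy
    "hive_table": ["DataSet", "Asset", "Referenceable"],
    "hive_db": ["Asset", "Referenceable"],
}

def transforms_for(type_name, transforms):
    applicable = {}
    for key in [type_name] + SUPERTYPES.get(type_name, []):
        if key in transforms:
            applicable.update(transforms[key])
    return applicable

transforms = {"Asset": {"qualifiedName": ["replace:@cl1:@cl2"]}}
print(transforms_for("hive_table", transforms))  # inherited from Asset
```

A single "Asset" entry would then cover every Asset subtype in the export, instead of one transforms entry per concrete type.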
[jira] [Updated] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1939: - Attachment: ATLAS-1939.patch -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087925#comment-16087925 ] Sharmadha Sainath commented on ATLAS-1939: -- Attached a patch that avoids the NPE when no JSON is provided for the request parameter. With the patch, the following request now succeeds : {code} curl -v -X POST -u admin:admin -H "Content-Type: multipart/form-data" -H "Cache-Control: no-cache" -F data=@t5.zip "http://localhost:21000/api/atlas/admin/import" {code} In the patch, an empty JSON is added to the request object if the request is null or empty. CC : [~ashutoshm] [~mad...@apache.org] [~ayubkhan] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
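The defaulting behavior described in the comment can be sketched as follows (a Python illustration of the idea; the actual patch is Java, and the function name here is hypothetical):

```python
import json

# Sketch of the fix: treat a missing or empty "request" multipart part as
# an empty import request instead of passing null downstream (the NPE).
def parse_import_request(raw):
    if raw is None or not raw.strip():
        return {"options": {}}  # default: no transforms, no options
    return json.loads(raw)

print(parse_import_request(None))                               # defaulted
print(parse_import_request('{"options": {"transforms": "{}"}}'))  # parsed as-is
```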
[jira] [Commented] (ATLAS-1939) Export/Import Regression : NPE during import
[ https://issues.apache.org/jira/browse/ATLAS-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16088547#comment-16088547 ] Sharmadha Sainath commented on ATLAS-1939: -- [~ashutoshm] , Created review request : https://reviews.apache.org/r/60895/ -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ATLAS-1833) (Create Tag) - If a Tag is created without description then in update Description tag name appears.
[ https://issues.apache.org/jira/browse/ATLAS-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath reassigned ATLAS-1833: Assignee: Sharmadha Sainath (was: Nixon Rodrigues) > (Create Tag) - If a Tag is created without description then in update > Description tag name appears. > --- > > Key: ATLAS-1833 > URL: https://issues.apache.org/jira/browse/ATLAS-1833 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Reporter: Kalyani Kashikar >Assignee: Sharmadha Sainath > > requestJSON - > {code} > { > "classificationDefs": [{ > "name": "test2", > "description": "", > "superTypes": [], > "attributeDefs": [] > }], > "entityDefs": [], > "enumDefs": [], > "structDefs": [] > } > {code} > ResponseJSON : > {code} > { > "enumDefs": [], > "structDefs": [], > "classificationDefs": [{ > "category": "CLASSIFICATION", > "guid": "cb2af955-d2de-4754-bcdf-58f76763c293", > "createTime": 1495795905288, > "updateTime": 1495795905288, > "version": 1, > "name": "test2", > "description": "test2", > "typeVersion": "1.0", > "attributeDefs": [], > "superTypes": [] > }], > "entityDefs": [] > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1960) Import command fired on passive server throws Exception
Sharmadha Sainath created ATLAS-1960: Summary: Import command fired on passive server throws Exception Key: ATLAS-1960 URL: https://issues.apache.org/jira/browse/ATLAS-1960 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Critical Attachments: ImportOnPassiveHost.txt 1. Fired the import command on the ACTIVE host, which resulted in a successful import. 2. On firing the import command with a zip file containing the exported items of a hive_table on the PASSIVE host, the following exception is thrown : {code} {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: org.apache.atlas.typesystem.exception.TypeNotFoundException: Unknown datatype: hive_table"} {code} Expected a 30X response with a redirection URL. Attached the complete exception stack trace found in the PASSIVE host's application logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1968) Export/Import - Exception thrown when Import fired with zip file found on server
Sharmadha Sainath created ATLAS-1968: Summary: Export/Import - Exception thrown when Import fired with zip file found on server Key: ATLAS-1968 URL: https://issues.apache.org/jira/browse/ATLAS-1968 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating, 0.8.1-incubating Reporter: Sharmadha Sainath Priority: Blocker Attachments: ImportExceptionWhenZipFileOnServer.txt 1. Exported an entity on cluster1, created the zip file and placed it in the /tmp location of cluster2. 2. Fired the following import command against cluster2 : {code} curl -v -X POST -u admin:admin "http://cluster2:21000/api/atlas/admin/importFile?FILENAME=/tmp/entity.zip" {code} The import command failed with a 500 internal server error. Attached the exception stack trace found in cluster2's application logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-1970) Export/Import - When updateTypeDefinition set to false , new types are not imported
Sharmadha Sainath created ATLAS-1970: Summary: Export/Import - When updateTypeDefinition set to false , new types are not imported Key: ATLAS-1970 URL: https://issues.apache.org/jira/browse/ATLAS-1970 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating, 0.8.1-incubating Reporter: Sharmadha Sainath Priority: Critical Attachments: ImportFailureDueToUnknownType.txt Import has an option "updateTypeDefinition" which, when set to true, updates the type definitions in the backup cluster (the cluster on which import is done). The default is true. When its value is set to false, types in the backup cluster are not updated with the types present in the exported zip file. This works fine when, say, a type type1 is present in both clusters: an entity of type type1 is exported and imported into the backup cluster with updateTypeDefinition set to false - the import succeeds and the type is not updated. When the zip file contains type5 *which is not present in the backup cluster* and the import is fired, the import fails with the following exception: {code} {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: Type ENTITY with name type5 does not exist"} {code} Attached the complete exception stack trace found in the backup cluster's application logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ATLAS-1970) Export/Import - When updateTypeDefinition set to false , new types are not imported
[ https://issues.apache.org/jira/browse/ATLAS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath reassigned ATLAS-1970: Assignee: Sharmadha Sainath -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-1970) Export/Import - When updateTypeDefinition set to false , new types are not imported
[ https://issues.apache.org/jira/browse/ATLAS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1970: - Attachment: ATLAS-1970.patch Added a patch with a fix to allow creation of types even when updateTypeDefinition is set to false. As there can be many types in the exported zip file (which may or may not be present in the backup cluster), it would be a little overhead to check whether each type is already present and update it if so. Also, updateTypeDefinition's default value is true, so if updateTypeDefinition is set to true or not set at all, Atlas would still go ahead and create/update the types. Please let me know/re-assign if there could be a better way of fixing this. Tested the patch with the following : 1. updateTypeDefinition set to false 2. updateTypeDefinition set to true 3. updateTypeDefinition not specified 4. updateTypeDefinition true/false with other options such as transforms -- This message was sent by Atlassian JIRA (v6.4.14#64029)
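The behavior proposed in the attached patch can be sketched as: always create types that are missing from the target cluster, and update existing types only when the flag is set. A hypothetical Python illustration (the actual change is in Atlas's Java type-registry code; names below are illustrative):

```python
# Hypothetical sketch of the proposed updateTypeDefinition semantics:
# types absent from the target cluster are created regardless of the flag,
# while existing types are updated only when the flag is true.
def process_typedefs(incoming, existing, update_type_definition):
    to_create = [t for t in incoming if t not in existing]
    to_update = [t for t in incoming if t in existing] if update_type_definition else []
    return to_create, to_update

# type5 is unknown to the backup cluster, so it is created either way:
print(process_typedefs(["type1", "type5"], {"type1"}, False))  # (['type5'], [])
print(process_typedefs(["type1", "type5"], {"type1"}, True))   # (['type5'], ['type1'])
```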
[jira] [Comment Edited] (ATLAS-1970) Export/Import - When updateTypeDefinition set to false , new types are not imported
[ https://issues.apache.org/jira/browse/ATLAS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094679#comment-16094679 ] Sharmadha Sainath edited comment on ATLAS-1970 at 7/20/17 1:30 PM: --- Added a patch with a fix to allow creation of types even when updateTypeDefinition is set to false. As there can be many types in the exported zip file (which may or may not be present in the backup cluster), it would be a little overhead to check whether each type is already present and update it if so. Also, updateTypeDefinition's default value is true, so if updateTypeDefinition is set to true or not set at all, Atlas would still go ahead and create/update the types. Please let me know/re-assign if there could be a better way of fixing this. Tested the patch with the following : 1. updateTypeDefinition set to false 2. updateTypeDefinition set to true 3. updateTypeDefinition not specified 4. updateTypeDefinition true/false with other options such as transforms CC : [~ashutoshm] [~mad...@apache.org] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Issue Comment Deleted] (ATLAS-1970) Export/Import - When updateTypeDefinition set to false , new types are not imported
[ https://issues.apache.org/jira/browse/ATLAS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1970: - Comment: was deleted -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-1970) Export/Import - When updateTypeDefinition set to false , new types are not imported
[ https://issues.apache.org/jira/browse/ATLAS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-1970: - Attachment: (was: ATLAS-1970.patch) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (ATLAS-1970) Export/Import - When updateTypeDefinition is set to false, new types are not imported
[ https://issues.apache.org/jira/browse/ATLAS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath resolved ATLAS-1970. -- Resolution: Not A Bug > Export/Import - When updateTypeDefinition is set to false, new types are not > imported > --- > > Key: ATLAS-1970 > URL: https://issues.apache.org/jira/browse/ATLAS-1970 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating, 0.8.1-incubating >Reporter: Sharmadha Sainath >Assignee: Sharmadha Sainath >Priority: Critical > Attachments: ImportFailureDueToUnknownType.txt > > > Import has an option "updateTypeDefinition" which, when set to true (the > default), updates the type definitions in the backup cluster (the cluster on > which import is done). When its value is set to false, types in the backup > cluster are not updated with the types present in the exported zip file. > This works fine when, say, a type type1 is present in both clusters and an > entity of type type1 is exported and imported into the backup cluster with > updateTypeDefinition set to false - the import succeeds and the type is not > updated. > When the zip file contains type5 *which is not present in the backup cluster* > and import is fired, the import fails with the following exception: > {code} > {"errorCode":"ATLAS-500-00-001","errorMessage":"org.apache.atlas.exception.AtlasBaseException: > Type ENTITY with name type5 does not exist"} > {code} > Attached is the complete exception stack trace found in the backup cluster's > application logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
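The "Not A Bug" resolution implies that updateTypeDefinition only governs whether *existing* type definitions on the target cluster are overwritten; a type missing from the target still causes the import to fail when updates are disabled. A minimal sketch of those semantics, with hypothetical names (this is not Atlas's actual Java code):

```python
def plan_typedef_import(zip_types, cluster_types, update_type_definition):
    """Decide what happens to each type definition found in an export zip.

    Hypothetical reconstruction of the behavior discussed in ATLAS-1970:
    - existing types are updated only when update_type_definition is True;
    - unknown types are created when updates are enabled, but with
      updateTypeDefinition=false the import fails with
      "Type ENTITY with name ... does not exist".
    """
    plan = {}
    for type_name in zip_types:
        if type_name in cluster_types:
            plan[type_name] = "update" if update_type_definition else "skip"
        elif update_type_definition:
            plan[type_name] = "create"
        else:
            plan[type_name] = "fail"  # intended behavior, per the resolution
    return plan
```

Under this reading, importing an entity of an unknown type with updateTypeDefinition=false is expected to fail, which matches the exception in the report.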
[jira] [Created] (ATLAS-1985) Regression: Basic/DSL query fired on PASSIVE server redirects to ACTIVE server, adding an extra "amp;" to the parameters
Sharmadha Sainath created ATLAS-1985: Summary: Regression: Basic/DSL query fired on PASSIVE server redirects to ACTIVE server, adding an extra "amp;" to the parameters Key: ATLAS-1985 URL: https://issues.apache.org/jira/browse/ATLAS-1985 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating, 0.8.1-incubating Reporter: Sharmadha Sainath Priority: Critical 1. Fired the query {code} http://PassiveHost:21000/api/atlas/discovery/search/fulltext?limit=100&query=hive_table {code} The query failed with {code} { error: "dslQuery cannot be null cannot be null" } {code} The redirected URL is: {code} http://activehost:21000/api/atlas/discovery/search/fulltext?limit=100&query=hive_table {code} Redirection adds an extra "amp;" and ignores the rest of the query string after the first parameter. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
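One plausible mechanism for the stray "amp;" (not confirmed from the Atlas source) is the redirect filter HTML-escaping the target URL, turning each "&" into "&amp;" so the second parameter's name becomes "amp;query" and the real query parameter is lost. A small sketch of that corruption:

```python
from html import escape
from urllib.parse import urlsplit

# The URL fired against the PASSIVE server (from the report):
original = ("http://activehost:21000/api/atlas/discovery/search/fulltext"
            "?limit=100&query=hive_table")

# If the redirect filter HTML-escapes the Location URL, "&" becomes "&amp;":
redirected = escape(original)

# Parsing the redirected URL, the second parameter is now named "amp;query",
# so the ACTIVE server never receives "query" -- consistent with the
# "dslQuery cannot be null" failure in the report.
query = urlsplit(redirected).query
params = dict(pair.split("=", 1) for pair in query.split("&"))
```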
[jira] [Created] (ATLAS-2007) Regression: Storm model not registered in Atlas due to recent changes
Sharmadha Sainath created ATLAS-2007: Summary: Regression: Storm model not registered in Atlas due to recent changes Key: ATLAS-2007 URL: https://issues.apache.org/jira/browse/ATLAS-2007 Project: Atlas Issue Type: Bug Components: atlas-intg Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Blocker Attachments: Storm_model_registration_exception.txt On startup, Atlas throws the following exception while registering the Storm model: {code} org.apache.atlas.exception.AtlasBaseException: AGGREGATION relationshipDef storm_topology_nodes creation attempted without an end specifying isContainer {code} This issue is a regression, possibly caused by [ATLAS-1979|https://issues.apache.org/jira/browse/ATLAS-1979]. Attached is the complete exception stack trace. CC : [~sarath.ku...@gmail.com] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2009) Any non-admin user in users-credentials.properties is able to access /api/atlas/admin path
Sharmadha Sainath created ATLAS-2009: Summary: Any non-admin user in users-credentials.properties is able to access /api/atlas/admin path Key: ATLAS-2009 URL: https://issues.apache.org/jira/browse/ATLAS-2009 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Priority: Critical Any non-admin user (ex: rangertagsync) specified in conf/users-credentials.properties is able to access the /api/atlas/admin path. Is this expected? One affected use case is the Export and Import APIs, which should be executable only by an admin user, yet any user is able to execute them. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
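The report amounts to a missing path-based authorization check. An illustrative sketch of the guard the reporter expects (Atlas's real filter chain, e.g. AtlasAuthorizationFilter, is far richer; the user set here is a hypothetical stand-in for a role store):

```python
ADMIN_PATH_PREFIX = "/api/atlas/admin"
ADMIN_USERS = {"admin"}  # hypothetical role store, not Atlas config

def is_request_allowed(user, path):
    """Deny /api/atlas/admin/* to non-admin users; allow other paths.

    Illustrative only: this is the check the report says is effectively
    missing, letting users like rangertagsync call export/import.
    """
    if path.startswith(ADMIN_PATH_PREFIX):
        return user in ADMIN_USERS
    return True
```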
[jira] [Created] (ATLAS-2697) V2 Notifications: "Propagated classification added" audit message is present instead of "classification added"
Sharmadha Sainath created ATLAS-2697: Summary: V2 Notifications: "Propagated classification added" audit message is present instead of "classification added" Key: ATLAS-2697 URL: https://issues.apache.org/jira/browse/ATLAS-2697 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 1.0.0 Reporter: Sharmadha Sainath With V2 notifications, when a tag is added to an entity, a "Propagated classification added" audit message is recorded instead of the "classification added" audit. CC : [~sarath.ku...@gmail.com] [~nixonrodrigues] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-2826) Saving Search fails with 403 auth error after the new relationship auth changes
Sharmadha Sainath created ATLAS-2826: Summary: Saving Search fails with 403 auth error after the new relationship auth changes Key: ATLAS-2826 URL: https://issues.apache.org/jira/browse/ATLAS-2826 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 1.1.0 Reporter: Sharmadha Sainath Attachments: SaveSearchLog.txt Saving a search as any user fails with 403 after the new fine-grained authorization support was added for relationships. Attached is the complete stack trace. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-2895) "$", null characters as replicatedTo value on export
Sharmadha Sainath created ATLAS-2895: Summary: "$", null characters as replicatedTo value on export Key: ATLAS-2895 URL: https://issues.apache.org/jira/browse/ATLAS-2895 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath 1. When replicatedTo is set to a value that contains "$", the cluster name is set incorrectly. Example: data_centre$1SFO$cl2 On exporting, the target cluster name is created as 1SFO. 2. When replicatedTo is set to "$cl2", an "Array out of bounds exception" is seen. The export happens successfully and 200 is returned, but the Export/Import audits are not created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-2899) Regression: When export "options" is null, export fails with NPE at getSkipLineageOptionValue
Sharmadha Sainath created ATLAS-2899: Summary: Regression: When export "options" is null, export fails with NPE at getSkipLineageOptionValue Key: ATLAS-2899 URL: https://issues.apache.org/jira/browse/ATLAS-2899 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath With the following export "options", export fails with an NPE at getSkipLineageOptionValue(AtlasExportRequest.java:97): {code} { "itemsToExport":[ { "typeName":"hdfs_path", "uniqueAttributes":{ "qualifiedName":"/atlas" } } ], "options":null } {code} whereas, when no options key is specified or some valid options are provided, export succeeds. Complete exception stack trace: {code} 2018-09-26 20:23:30,967 ERROR - [pool-2-thread-18 - 76af7fbc-cdc6-4196-9273-bc8b40b62d15:] ~ Error handling a request: eca69d150a7a5b5c (ExceptionMapperUtil:32) java.lang.NullPointerException at org.apache.atlas.model.impexp.AtlasExportRequest.getSkipLineageOptionValue(AtlasExportRequest.java:97) at org.apache.atlas.repository.impexp.ExportService$ExportContext.(ExportService.java:633) at org.apache.atlas.repository.impexp.ExportService.run(ExportService.java:97) at org.apache.atlas.web.resources.AdminResource.export(AdminResource.java:342) at sun.reflect.GeneratedMethodAccessor200.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507) at org.apache.atlas.web.filters.AuditFilter.doFilter(AuditFilter.java:76) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1495) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) at org.apache.atlas.web.filters.AtlasAuthorizationFilter.doFilter(AtlasAuthorizationFilter.java:157) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) at 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137) at org.springframework.security.web.FilterChainProxy$Virtu
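The trace points at option lookup assuming a non-null options map. A null-safe guard, sketched in Python with names mirroring the JSON in the report (not Atlas's Java API), would avoid the NPE:

```python
def get_skip_lineage(request):
    """Null-safe lookup of the skipLineage export option.

    Sketch of the guard implied by ATLAS-2899: a request carrying
    "options": null must be treated the same as a request with no
    options at all, rather than dereferencing a null map.
    """
    options = request.get("options") or {}  # null and missing collapse to {}
    return str(options.get("skipLineage", "false")).lower() == "true"
```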
[jira] [Created] (ATLAS-2900) Regression: Import fails with "Error converting file to JSON"
Sharmadha Sainath created ATLAS-2900: Summary: Regression : Import fails with Error converting file to JSON Key: ATLAS-2900 URL: https://issues.apache.org/jira/browse/ATLAS-2900 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Attachments: uPqvN.zip On Importing the attached zip file , request fails with {code} 2018-09-26 23:02:50,705 INFO - [pool-2-thread-18 - 84ea8d84-ead8-428d-a4db-4451812dd241:] ~ ==> import(user=admin, from=10.22.16.153, request=AtlasImportRequest{options={}}) (ImportService:89) 2018-09-26 23:02:50,864 INFO - [pool-2-thread-18 - 84ea8d84-ead8-428d-a4db-4451812dd241:] ~ bulkImport(): progress: 14% (of 7) - entity:last-imported:hive_table:[1]:(d7773c32-ce56-4d8a-96c1-5a4aa97e982e) (AtlasEntityStoreV1:146) 2018-09-26 23:02:50,865 ERROR - [pool-2-thread-18 - 84ea8d84-ead8-428d-a4db-4451812dd241:] ~ getNextEntityWithExtInfo (ZipSource:213) org.apache.atlas.exception.AtlasBaseException: Error converting file to JSON. at org.apache.atlas.repository.impexp.ZipSource.convertFromJson(ZipSource.java:177) at org.apache.atlas.repository.impexp.ZipSource.getEntityWithExtInfo(ZipSource.java:139) at org.apache.atlas.repository.impexp.ZipSource.getNextEntityWithExtInfo(ZipSource.java:211) at org.apache.atlas.repository.store.graph.v1.BulkImporterImpl$EntityImportStreamWithResidualList.getNextEntityWithExtInfo(BulkImporterImpl.java:183) at org.apache.atlas.repository.store.graph.v1.BulkImporterImpl.bulkImport(BulkImporterImpl.java:72) at org.apache.atlas.repository.impexp.ImportService.processEntities(ImportService.java:213) at org.apache.atlas.repository.impexp.ImportService.run(ImportService.java:100) at org.apache.atlas.web.resources.AdminResource.importData(AdminResource.java:390) at sun.reflect.GeneratedMethodAccessor227.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507) at org.apache.atlas.web.filters.AuditFilter.doFilter(AuditFilter.java:76) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1495) at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) at org.apache.atlas.web.filters.AtlasAuthorizationFilter.doFilter(AtlasAuthorizationFilter.java:157) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) at org.springframework.security.web.FilterChain
[jira] [Created] (ATLAS-3068) Regression: Removal of ownedRef/inverseRef constraints in hive types causes related entities to be missing from the response's "attributes"
Sharmadha Sainath created ATLAS-3068: Summary: Regression: Removal of ownedRef/inverseRef constraints in hive types causes related entities to be missing from the response's "attributes" Key: ATLAS-3068 URL: https://issues.apache.org/jira/browse/ATLAS-3068 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 1.2.0, 2.0.0 Reporter: Sharmadha Sainath Fix For: 1.2.0, 2.0.0 In the GET entity response of hive_table, "attributes" no longer has the table and column attributes, due to the recent change [ATLAS-3067|https://issues.apache.org/jira/browse/ATLAS-3067]. Example: {code} attributes: { owner: "admin", temporary: false, lastAccessTime: 769588200, aliases: null, replicatedTo: null, replicatedFrom: null, qualifiedName: "default.hive_table_nogsc_9@cl1", description: null, viewExpandedText: null, tableType: "MANAGED_TABLE", createTime: 1552306685000, name: "hive_table_nogsc_9", comment: null, parameters: { totalSize: "0", numRows: "0", rawDataSize: "0", transactional_properties: "default", COLUMN_STATS_ACCURATE: "{"BASIC_STATS":"true","COLUMN_STATS":{"id":"true","name":"true"}}", numFiles: "0", transient_lastDdlTime: "1552306685", bucketing_version: "2", transactional: "true" }, retention: 1, viewOriginalText: null }, {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-3086) Regression: Atlas Hive Hook doesn't capture "insert into table values()" queries
Sharmadha Sainath created ATLAS-3086: Summary: Regression: Atlas Hive Hook doesn't capture "insert into table values()" queries Key: ATLAS-3086 URL: https://issues.apache.org/jira/browse/ATLAS-3086 Project: Atlas Issue Type: Bug Components: atlas-intg Affects Versions: 1.0.0 Reporter: Sharmadha Sainath Fix For: 2.0.0 The Atlas Hive Hook doesn't capture the query "insert into table values()". This is a regression, as this event was captured in 0.8. For other queries, like "from table1 a insert overwrite table bkp_table select a.id" and "insert into tableB select c1,c2 from tableA", the information is captured and lineage is created in 1.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-3101) UI, Regression: Unable to search by _CLASSIFIED
Sharmadha Sainath created ATLAS-3101: Summary: UI, Regression: Unable to search by _CLASSIFIED Key: ATLAS-3101 URL: https://issues.apache.org/jira/browse/ATLAS-3101 Project: Atlas Issue Type: Bug Components: atlas-webui Reporter: Sharmadha Sainath On selecting _CLASSIFIED in the classifications dropdown, the "Search" button is not enabled. After selecting _CLASSIFIED in the classifications dropdown, selecting a type (say hive_table) from the types list clears _CLASSIFIED in classifications. CC : [~kevalbhatt] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (ATLAS-3108) Unable search terms at a subcategory glossary
[ https://issues.apache.org/jira/browse/ATLAS-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16806400#comment-16806400 ] Sharmadha Sainath commented on ATLAS-3108: -- [~Raymond_C] , >> After creating category and subcategory via AtlasUI >> (http://sandbox-hdp.hortonworks.com:21000/), at the one subcategory, unable >> to search any terms. In the attached screenshot, I see that there is no sub-category created yet for the 2nd "deliveryLeadTime". Here, "Time" is the Glossary and the 2nd "deliveryLeadTime" is the category. I see you're trying to search "deliveryLeadTime", which is a Glossary. So, if you want to create a term under Time, switch to "Terms" in the toggle and create a term under the Glossary. Now, to assign the term to a category, you can either do that in "Terms" or toggle back to "Category". By your example, toggle to "Category" and you will see the "Terms +" (Terms Assign) button. Click on that and search for the term you want to assign to the category. Please let me know if you still find issues. Also, refer to http://atlas.apache.org/Glossary.html for more detailed information. > Unable search terms at a subcategory glossary > - > > Key: ATLAS-3108 > URL: https://issues.apache.org/jira/browse/ATLAS-3108 > Project: Atlas > Issue Type: Bug > Components: atlas-webui >Affects Versions: 1.1.0 >Reporter: Raymond C >Priority: Major > Labels: beginner > Attachments: image-2019-03-31-12-51-55-042.png > > > 1 Launching HDP Sandbox 3.0. > 2 After creating category and subcategory via AtlasUI > ([http://sandbox-hdp.hortonworks.com:21000/]), at the one subcategory, unable > to search any terms. > 3 Typed one existing term, but did not show any term on the search bar. Also > the Assign button is invalid. > !image-2019-03-31-12-51-55-042.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (ATLAS-3108) Unable search terms at a subcategory glossary
[ https://issues.apache.org/jira/browse/ATLAS-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16806400#comment-16806400 ] Sharmadha Sainath edited comment on ATLAS-3108 at 4/1/19 5:42 AM: -- [~Raymond_C] , >> After creating category and subcategory via AtlasUI >> (http://sandbox-hdp.hortonworks.com:21000/), at the one subcategory, unable >> to search any terms. In the attached screenshot ,I see that there is no sub category created yet for "2nd deliveryLeadTime". Here ,Time is the Glossary and "2nd deliveryLeadTime" is the category. I see you're trying to search "deliveryLeadTime" - which is a Glossary. So , if you want to create a term under Time , switch to "Terms" in the toggle , Create a term under the Glossary. Now to assign Term to Category, you can either do that in "Terms" or toggle back to "Category". By your example , toggle to "Category" , you will see "Terms +" (Terms Assign) button . Click on that and search the term you want to assign the category to. Please let me know if you still find issues. Also , refer to http://atlas.apache.org/Glossary.html for more detailed information. was (Author: ssainath): [~Raymond_C] , >> After creating category and subcategory via AtlasUI >> (http://sandbox-hdp.hortonworks.com:21000/), at the one subcategory, unable >> to search any terms. In the attached screenshot ,I see that there is no sub category created yet for "2nd deliveryLeadTime". Here ,Time is the Glossary and 2nd "deliveryLeadTime" is the category. I see you're trying to search "deliveryLeadTime" - which is a Glossary. So , if you want to create a term under Time , switch to "Terms" in the toggle , Create a term under the Glossary. Now to assign Term to Category, you can either do that in "Terms" or toggle back to "Category". By your example , toggle to "Category" , you will see "Terms +" (Terms Assign) button . Click on that and search the term you want to assign the category to. 
Please let me know if you still find issues. Also , refer to http://atlas.apache.org/Glossary.html for more detailed information. > Unable search terms at a subcategory glossary > - > > Key: ATLAS-3108 > URL: https://issues.apache.org/jira/browse/ATLAS-3108 > Project: Atlas > Issue Type: Bug > Components: atlas-webui >Affects Versions: 1.1.0 >Reporter: Raymond C >Priority: Major > Labels: beginner > Attachments: image-2019-03-31-12-51-55-042.png > > > 1 Launching HDP Sandbox 3.0. > 2 After creating category and subcategory via AtlasUI > ([http://sandbox-hdp.hortonworks.com:21000/]), at the one subcategory, unable > to search any terms. > 3 Typed one existing term, but did not show any term on the search bar. Also > the Assign button is invalid. > !image-2019-03-31-12-51-55-042.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-3176) Regression, Export: Export zip doesn't contain all entities listed in atlas-export-order.json
Sharmadha Sainath created ATLAS-3176: Summary: Regression, Export: Export zip doesn't contain all entities listed in atlas-export-order.json Key: ATLAS-3176 URL: https://issues.apache.org/jira/browse/ATLAS-3176 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 1.1.0, 2.0.0 Reporter: Sharmadha Sainath Fix For: 2.0.0 Attachments: export.zip In previous versions, all entities listed in atlas-export-order.json are present in the zip as .json files. In recent versions, the hive_column_lineage entities are missing. This doesn't affect import, though: lineage is created between the columns without any issue, since the hive_column_lineage information is present in the entity definitions of hive_table and hive_process, from which hive_column_lineage is created. Attaching the zip file. CC : [~ashutoshm] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
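The symptom can be checked mechanically by comparing the export order file against the archive contents. A small sketch, assuming the layout described in the report (atlas-export-order.json holding a list of guids plus one `<guid>.json` per entity; the exact file naming is an assumption, not taken from the Atlas source):

```python
import json
import zipfile

def entities_missing_from_zip(export_zip):
    """List entries of atlas-export-order.json that have no matching
    <guid>.json inside the archive -- the symptom in ATLAS-3176.
    `export_zip` may be a path or a file-like object."""
    with zipfile.ZipFile(export_zip) as zf:
        order = json.loads(zf.read("atlas-export-order.json"))
        names = set(zf.namelist())
        return [guid for guid in order if f"{guid}.json" not in names]
```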
[jira] [Created] (ATLAS-3177) Regression, Export: changeMarker is not set right - set to an older value compared to the entity's lastModifiedTime
Sharmadha Sainath created ATLAS-3177: Summary: Regression, Export: changeMarker is not set right - set to an older value compared to the entity's lastModifiedTime Key: ATLAS-3177 URL: https://issues.apache.org/jira/browse/ATLAS-3177 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 1.1.0 Reporter: Sharmadha Sainath Fix For: 2.0.0, 1.1.0 Attachments: db_axyz.zip, db_def.json Export request body: {code} {"itemsToExport": [{"typeName": "hive_db", "uniqueAttributes": {"qualifiedName": "database_rbltx@mycluster0"}}], "options": {"fetchType": "incremental", "changeMarker": 0, "replicatedTo": "mycluster1"}} {code} The db's createTime is 1556276114134 (Friday, April 26, 2019 4:25:14.134 PM GMT+05:30) and its updatedTime is 1556276125588 (Friday, April 26, 2019 4:25:25.588 PM GMT+05:30). The changeMarker in the exported zip file is 1556187028462 (Thursday, April 25, 2019 3:40:28.462 PM GMT+05:30). Because of this, unmodified entities are also exported every time during incremental export. Attaching the entity definition of the db entity and the exported zip. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-3187) Regression, Relationship updates: In the GET entity definition of a deleted table, columns, sd, etc. are empty
Sharmadha Sainath created ATLAS-3187: Summary: Regression, Relationship updates: In the GET entity definition of a deleted table, columns, sd, etc. are empty Key: ATLAS-3187 URL: https://issues.apache.org/jira/browse/ATLAS-3187 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 2.0.0 Reporter: Sharmadha Sainath Fix For: 2.0.0 Attachments: table_entity_def.json Adding the table_entity_def["entity"]["attributes"] here: the table is deleted, but the columns array is [] and sd is null. The expected behavior is that all the related entities are returned in DELETED state. Attaching the complete table definition: {code} attributes: { owner: "hrt_qa", temporary: false, lastAccessTime: 1556788686000, aliases: null, replicatedTo: null, replicatedFrom: null, qualifiedName: "default.table1@cl1", columns: [ ], description: null, viewExpandedText: null, tableType: "MANAGED_TABLE", sd: null, createTime: 1556788686000, name: "table1", comment: null, partitionKeys: [ ], parameters: { totalSize: "0", numRows: "0", rawDataSize: "0", transactional_properties: "default", COLUMN_STATS_ACCURATE: "{"BASIC_STATS":"true","COLUMN_STATS":{"id":"true"}}", numFiles: "0", transient_lastDdlTime: "1556788686", bucketing_version: "2", transactional: "true" }, retention: 0, viewOriginalText: null, db: { guid: "ab02dd3b-1d1c-4522-8c2c-0fa60d82fcbe", typeName: "hive_db" } } {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-3232) Relationships, Export & Import: Updates to relationshipDef are not honored during import
Sharmadha Sainath created ATLAS-3232: Summary: Relationships, Export & Import: Updates to relationshipDef are not honored during import Key: ATLAS-3232 URL: https://issues.apache.org/jira/browse/ATLAS-3232 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Fix For: 2.0.0 1. In the source cluster, updated the relationshipDef "hive_db_columns" propagateTags from NONE to "ONE_TO_TWO" and exported. 2. In the export zip, atlas-typesdef.json has the "ONE_TO_TWO" propagateTags value for "hive_db_columns". 3. But in the import cluster, the propagateTags value for hive_db_columns is still NONE, even though updateTypeDefinition was set to true in the import options. CC : [~ashutoshm] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (ATLAS-3232) Relationships, Export & Import: Updates to relationshipDef are not honored during import
[ https://issues.apache.org/jira/browse/ATLAS-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-3232: - Description: 1. In source cluster , updated relationshipDef "hive_table_db" propagateTags to "ONE_TO_TWO" from NONE and exported. 2.In export zip , atlas-typesdef.json has "ONE_TO_TWO" propagateTags value for "hive_table_db" . 3. But in the import cluster, propagateTags value for hive_table_db is NONE still. In the import options , provided updateTypeDefinition to True too. CC : [~ashutoshm] was: 1. In source cluster , updated relationshipDef "hive_db_columns" propagateTags to "ONE_TO_TWO" from NONE and exported. 2.In export zip , atlas-typesdef.json has "ONE_TO_TWO" propagateTags value for "hive_db_columns" . 3. But in the import cluster, propagateTags value for hive_db_columns is NONE still. In the import options , provided updateTypeDefinition to True too. CC : [~ashutoshm] > Relationships, Export & Import : Updates to relationshipDef are not honored > during import > - > > Key: ATLAS-3232 > URL: https://issues.apache.org/jira/browse/ATLAS-3232 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Reporter: Sharmadha Sainath >Priority: Major > Fix For: 2.0.0 > > > 1. In source cluster , updated relationshipDef "hive_table_db" propagateTags > to "ONE_TO_TWO" from NONE and exported. > 2.In export zip , atlas-typesdef.json has "ONE_TO_TWO" propagateTags value > for "hive_table_db" . > 3. But in the import cluster, propagateTags value for hive_table_db is NONE > still. In the import options , provided updateTypeDefinition to True too. > CC : [~ashutoshm] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-3233) Relationships, Export & Import: Issue with ACTIVE and DELETED relationship-def instances
Sharmadha Sainath created ATLAS-3233: Summary: Relationships, Export & Import: Issue with ACTIVE and DELETED relationship-def instances Key: ATLAS-3233 URL: https://issues.apache.org/jira/browse/ATLAS-3233 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 1.0.0 Reporter: Sharmadha Sainath Fix For: 2.0.0 1. In the source cluster, create 2 types and a relationship-def between them. 2. Create 2 entities and a relationship instance between them (relationship_guid_1 is created and is ACTIVE). 3. Export & import entity1. 4. On the target cluster, relationship_guid_1 is imported and is ACTIVE. 5. Now, on the source cluster, delete relationship_guid_1 and create a new relationship between entity1 and entity2. 6. In the source cluster, relationship_guid_1 is DELETED and the new relationship_guid_2 is ACTIVE. 7. Export and import entity1 to the target cluster. 8. In the target cluster, relationship_guid_1 is still ACTIVE and relationship_guid_2 is not imported at all (GET on relationship_guid_2 throws 404 on the target cluster). 9. In the exported zip, there is no information about relationship_guid_1, only about relationship_guid_2. CC : [~ashutoshm] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
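The steps above boil down to a reconciliation problem: a re-import should not leave stale relationship guids ACTIVE on the target. One possible reconciliation consistent with the reporter's expectation, sketched with hypothetical names and assuming both maps cover only the exported entity's relationships (this is not Atlas's import logic):

```python
def sync_relationship_states(target_state, imported_state):
    """Reconcile relationship states on the target cluster after import.

    Illustrative sketch per ATLAS-3233: guids present in the export take
    their imported status; previously imported guids absent from the
    export are marked DELETED instead of being left ACTIVE.
    """
    merged = {guid: "DELETED" for guid in target_state}  # stale until re-seen
    merged.update(imported_state)                        # imported guids win
    return merged
```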
[jira] [Commented] (ATLAS-3177) Regression, Export: changeMarker is not set right - set to an older value compared to the entity's lastModifiedTime
[ https://issues.apache.org/jira/browse/ATLAS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16850538#comment-16850538 ] Sharmadha Sainath commented on ATLAS-3177: -- [~ashutoshm] , But changeMarker is always set to a value even older than the entity's creation timestamp. I tried quite a few times. Because of this, incremental export is not adding any value. > Regression, Export: changeMarker is not set right - set to an older value > compared to the entity's lastModifiedTime > - > > Key: ATLAS-3177 > URL: https://issues.apache.org/jira/browse/ATLAS-3177 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 1.1.0 >Reporter: Sharmadha Sainath >Priority: Major > Fix For: 1.1.0, trunk > > Attachments: db_axyz.zip, db_def.json > > > Export request body: > {code} > {"itemsToExport": [{"typeName": "hive_db", "uniqueAttributes": > {"qualifiedName": "database_rbltx@mycluster0"}}], "options": {"fetchType": > "incremental", "changeMarker": 0, "replicatedTo": "mycluster1"}} > {code} > The db's createTime is 1556276114134 (Friday, April 26, 2019 4:25:14.134 PM > GMT+05:30) > and its updatedTime is 1556276125588 (Friday, April 26, 2019 4:25:25.588 PM GMT+05:30). > The changeMarker in the exported zip file is 1556187028462 (Thursday, April 25, > 2019 3:40:28.462 PM GMT+05:30). > Because of this, unmodified entities are also exported every time during > incremental export. > Attaching the entity definition of the db entity and the exported zip. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (ATLAS-3177) Regression , Export : changeMarker is not set right - set to older value compared to entity's lastModifedTime
[ https://issues.apache.org/jira/browse/ATLAS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16850538#comment-16850538 ] Sharmadha Sainath edited comment on ATLAS-3177 at 5/29/19 6:31 AM: --- [~ashutoshm] , But changeMarker is always set to value even older than the entity's creation timestamp.I tried quite many times. Due to this , incremental export is not adding any value. Also could you please explain what is request timestamp ? Is it the latest time stamp the entity was accessed / requested ? was (Author: ssainath): [~ashutoshm] , But changeMarker is always set to value even older than the entity's creation timestamp.I tried quite many times. Due to this , incremental export is not adding any value. > Regression , Export : changeMarker is not set right - set to older value > compared to entity's lastModifedTime > - > > Key: ATLAS-3177 > URL: https://issues.apache.org/jira/browse/ATLAS-3177 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 1.1.0 >Reporter: Sharmadha Sainath >Priority: Major > Fix For: 1.1.0, trunk > > Attachments: db_axyz.zip, db_def.json > > > Export request body : > {code} > {"itemsToExport": [{"typeName": "hive_db", "uniqueAttributes": > {"qualifiedName": "database_rbltx@mycluster0"}}], "options": {"fetchType": > "incremental", "changeMarker": 0, "replicatedTo": "mycluster1"}} > {code} > The db's createTime is 1556276114134 (Friday, April 26, 2019 4:25:14.134 PM > GMT+05:30) > updatedTime is 1556276125588 (Friday, April 26, 2019 4:25:25.588 PM GMT+05:30) > The changeMarker in exported zip file is 1556187028462 (Thursday, April 25, > 2019 3:40:28.462 PM GMT+05:30) > due to this unmodified entities are also exported everytime due to > incremental export. > Attaching the entity definition of db entity and the exported zip. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
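For context, the incremental-export contract the reporter is relying on can be sketched as follows (a minimal model, assuming an entity is re-exported when its update time exceeds the changeMarker; the timestamps are the ones from the report):

```python
# Sketch of the incremental-export contract (hypothetical, not Atlas code):
# an entity is re-exported when its update time is newer than the changeMarker.

def needs_export(entity_update_time_ms, change_marker_ms):
    return entity_update_time_ms > change_marker_ms

create_time = 1556276114134    # from the report: Apr 26 2019, 4:25:14 PM IST
update_time = 1556276125588    # Apr 26 2019, 4:25:25 PM IST
change_marker = 1556187028462  # Apr 25 2019, 3:40:28 PM IST - before createTime

# Because the marker predates even the entity's creation, the unmodified
# entity is exported on every incremental run - the regression reported here.
reexported = needs_export(update_time, change_marker)

# A marker at least as new as the last-seen update time would skip it:
correct_marker = update_time
skipped = not needs_export(update_time, correct_marker)
```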
[jira] [Created] (ATLAS-3310) Relationships : After updating a bigint attribute , any operation on relationship instance,entity throws 500 internal server exception.
Sharmadha Sainath created ATLAS-3310: Summary: Relationships : After updating a bigint attribute , any operation on relationship instance,entity throws 500 internal server exception. Key: ATLAS-3310 URL: https://issues.apache.org/jira/browse/ATLAS-3310 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath After updating bigint attribute , any operation (example : GET) on the relationship fails throwing 500 internal server exception with the following stack trace. Stack trace : {code:java} 2019-05-20 06:19:28,592 ERROR - [pool-2-thread-3 - 74ed4949-00ff-4f6d-87cb-e287684b55b8:] ~ Error handling a request: aceb2d520cb74b37 (ExceptionMapperUtil:32) java.lang.ArrayIndexOutOfBoundsException: Required size [1] exceeds actual remaining size [0] at org.janusgraph.diskstorage.util.StaticArrayBuffer.require(StaticArrayBuffer.java:94) at org.janusgraph.diskstorage.util.StaticArrayBuffer.getByte(StaticArrayBuffer.java:170) at org.janusgraph.diskstorage.util.StaticArrayBuffer.getBytes(StaticArrayBuffer.java:253) at org.janusgraph.diskstorage.util.ReadArrayBuffer.getBytes(ReadArrayBuffer.java:120) at org.janusgraph.graphdb.database.serialize.attribute.ByteArraySerializer.read(ByteArraySerializer.java:46) at org.apache.atlas.repository.graphdb.janus.serializer.BigIntegerSerializer.read(BigIntegerSerializer.java:36) at org.apache.atlas.repository.graphdb.janus.serializer.BigIntegerSerializer.read(BigIntegerSerializer.java:30) at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObjectNotNullInternal(StandardSerializer.java:265) at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObjectInternal(StandardSerializer.java:255) at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObject(StandardSerializer.java:235) at org.janusgraph.graphdb.database.EdgeSerializer.readPropertyValue(EdgeSerializer.java:205) at org.janusgraph.graphdb.database.EdgeSerializer.readInline(EdgeSerializer.java:191) at 
org.janusgraph.graphdb.database.EdgeSerializer.parseRelation(EdgeSerializer.java:162) at org.janusgraph.graphdb.database.EdgeSerializer.readRelation(EdgeSerializer.java:73) at org.janusgraph.graphdb.transaction.RelationConstructor.readRelationCache(RelationConstructor.java:41) at org.janusgraph.graphdb.relations.CacheEdge.getPropertyMap(CacheEdge.java:101) at org.janusgraph.graphdb.relations.CacheEdge.getValueDirect(CacheEdge.java:108) at org.janusgraph.graphdb.relations.AbstractTypedRelation.lambda$properties$1(AbstractTypedRelation.java:166) at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174) at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:958) at java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:294) at java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206) at java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161) at java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300) at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681) at org.apache.tinkerpop.gremlin.structure.Element.property(Element.java:79) at org.apache.atlas.repository.graphdb.janus.AtlasJanusElement.getProperty(AtlasJanusElement.java:65) at org.apache.atlas.repository.store.graph.v2.AtlasGraphUtilsV2.getTypeName(AtlasGraphUtilsV2.java:126) at org.apache.atlas.repository.store.graph.v2.AtlasRelationshipStoreV2.update(AtlasRelationshipStoreV2.java:148) at org.apache.atlas.repository.store.graph.v2.AtlasRelationshipStoreV2$$FastClassBySpringCGLIB$$a8165974.invoke() at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:736) at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.apache.atlas.GraphTransactionInterceptor.invoke(GraphTransactionInterceptor.java:80) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:671) at org.apache.atlas.repository.store.graph.v2.AtlasRelationshipStoreV2$$EnhancerBySpringCGLIB$$a12f9384.update() at org.apac
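The root of the trace is StaticArrayBuffer.require failing because the serializer asks for more bytes than the stored property value contains. A minimal Python model of that failure shape (hypothetical, not the JanusGraph code):

```python
# Minimal model of the StaticArrayBuffer failure in the trace above
# (hypothetical illustration, not JanusGraph code): reading a value back
# with a serializer that expects more bytes than were written raises the
# same "required size exceeds remaining size" error.

class StaticBuffer:
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0

    def require(self, size: int):
        remaining = len(self.data) - self.pos
        if size > remaining:
            raise IndexError(
                f"Required size [{size}] exceeds actual remaining size [{remaining}]")

    def read(self, size: int) -> bytes:
        self.require(size)
        out = self.data[self.pos:self.pos + size]
        self.pos += size
        return out

# A property written with zero payload bytes but read back expecting one
# byte reproduces the shape of the error message in the stack trace.
buf = StaticBuffer(b"")
try:
    buf.read(1)
    failed = False
    message = ""
except IndexError as e:
    failed = True
    message = str(e)
```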
[jira] [Created] (ATLAS-2294) Extra parameter "description" added when creating a type
Sharmadha Sainath created ATLAS-2294: Summary: Extra parameter "description" added when creating a type Key: ATLAS-2294 URL: https://issues.apache.org/jira/browse/ATLAS-2294 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 1.0.0 Reporter: Sharmadha Sainath When creating a type, the "description" parameter of the attributes in the type definition is handled inconsistently. For example: 1. Posting with no description parameter adds "description":"" to the created type. 2. Posting with "description":"null", no description parameter is found in the created type. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
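A sketch of the consistent behavior one would expect (an assumption about the fix, not the Atlas implementation: absent, empty, and the literal string "null" descriptions should all normalize the same way on create):

```python
# Hypothetical normalization for the "description" attribute parameter:
# treat a missing key, None, and the literal string "null" uniformly,
# so the created type always carries the same shape.

def normalize_description(attr_def: dict) -> str:
    desc = attr_def.get("description")
    if desc is None or desc == "null":
        return ""
    return desc

# The two inconsistent cases from the report now behave identically:
missing = normalize_description({"name": "col1"})
literal_null = normalize_description({"name": "col1", "description": "null"})
regular = normalize_description({"name": "col1", "description": "a column"})
```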
[jira] [Created] (ATLAS-2301) Concurrent modification exception when saving the search
Sharmadha Sainath created ATLAS-2301: Summary: Concurrent modification exception when saving the search Key: ATLAS-2301 URL: https://issues.apache.org/jira/browse/ATLAS-2301 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Priority: Critical Attachments: ConModExceptionSaveSearch.txt The Atlas instance has 626 types and 332 saved searches (the search queries are large). After 332 saved searches, when trying to save a query, a ConcurrentModificationException is thrown with {code} audit record too long: entityType=__AtlasUserProfile, guid=8b9f8e79-30e5-4bfe-afca-cc3adc2a97c6, size=1048743; maxSize=1048576. entity attribute values not stored in audit (EntityAuditListener:156) {code} Attached the complete exception stack trace. Other functionality - creating tags and entities, Hive hook operations, tag association, tag deletion, basic search, faceted basic search - works well. On another Atlas instance with fewer types, 350+ saved searches were created successfully (the saved-search contents differ between the first and second instance; the former has more filters). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
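The quoted log line describes a size guard in the audit writer: when the serialized record exceeds maxSize, the attribute values are omitted from the audit entry. A rough sketch of that guard (hypothetical code; the sizes are the ones from the log message):

```python
# Sketch of the audit-size guard implied by the log line (hypothetical,
# not the EntityAuditListener code): oversized records are stored without
# their entity attribute values rather than failing outright.

MAX_AUDIT_SIZE = 1048576  # 1 MiB, from the log message

def shorten_audit(record_size: int, record: dict) -> dict:
    if record_size > MAX_AUDIT_SIZE:
        trimmed = dict(record)
        trimmed.pop("attributes", None)  # drop attribute values, keep the rest
        return trimmed
    return record

record = {"entityType": "__AtlasUserProfile", "attributes": {"searches": "..."}}
trimmed = shorten_audit(1048743, record)  # the size from the report
kept = shorten_audit(512, record)
```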
[jira] [Created] (ATLAS-2318) UI : Clicking on child tag twice , parent tag is selected
Sharmadha Sainath created ATLAS-2318: Summary: UI : Clicking on child tag twice , parent tag is selected Key: ATLAS-2318 URL: https://issues.apache.org/jira/browse/ATLAS-2318 Project: Atlas Issue Type: Bug Components: atlas-webui Reporter: Sharmadha Sainath 1. Created tags super_tag and child_tag, with super_tag as the supertype. 2. Navigated to child_tag in the tree structure. 3. Clicked on child_tag; child_tag was selected. 4. Clicked on child_tag again; now its parent tag (super_tag) was selected. 5. Now any operation performed on super_tag (like deleting the tag) is applied to child_tag, which is confusing. 6. But if super_tag itself is selected, operations are applied to super_tag only. CC : [~kevalbhatt] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2319) UI : Deleting a tag at position 25+ in the tag list needs a refresh to remove the tag from the list in both Flat and Tree structure.
Sharmadha Sainath created ATLAS-2319: Summary: UI : Deleting a tag at position 25+ in the tag list needs a refresh to remove the tag from the list in both Flat and Tree structure. Key: ATLAS-2319 URL: https://issues.apache.org/jira/browse/ATLAS-2319 Project: Atlas Issue Type: Bug Components: atlas-webui Reporter: Sharmadha Sainath Created a tag tag26, which is the 26th tag in the list. On deleting tag26 from the UI, tag26 is successfully deleted, but its entry is still present in the list in both the Flat and Tree list structures. After clicking the refresh button for tags, tag26 is removed. This doesn't happen with tags at position 25 or less. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2320) classification "*" with query throws 500 Internal server exception.
Sharmadha Sainath created ATLAS-2320: Summary: classification "*" with query throws 500 Internal server exception. Key: ATLAS-2320 URL: https://issues.apache.org/jira/browse/ATLAS-2320 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Attachments: query_and_classification_asterisk.txt The following basic search POST request throws a 500 internal server exception: {code} { "query":"hive_table", "classification":"*" } {code} Exception: {code} Request to collection fulltext_index failed due to (400) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://172.27.58.128:8886/solr/fulltext_index: org.apache.solr.search.SyntaxError: Cannot parse 'sg5_t:(hive_table AND ())': Encountered " ")" ") "" at line 1, column 16. Was expecting one of: ... "+" ... "-" ... ... "(" ... "*" ... ... ... ... ... ... "[" ... "{" ... ... "filter(" ... ... ... "*" ... , retry? 0 (CloudSolrClient:903) {code} Attached the complete exception stack trace. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
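The Solr SyntaxError shows the generated query embeds an empty group, 'sg5_t:(hive_table AND ())', presumably because classification "*" expands to no concrete terms yet the parenthesised clause is still emitted. A sketch of a guarded query builder (hypothetical, not the Atlas implementation):

```python
# Sketch of the failure mode visible in the Solr error (hypothetical query
# builder): expanding classification "*" to an empty term list and still
# emitting the parenthesised group yields 'field:(query AND ())', which
# Solr rejects as a syntax error. The guard skips the empty group.

def build_fulltext_query(field, query, classification_terms):
    clauses = [query]
    if classification_terms:  # guard: omit the group when there are no terms
        clauses.append("(" + " OR ".join(classification_terms) + ")")
    return f"{field}:({' AND '.join(clauses)})"

buggy = "sg5_t:(hive_table AND ())"  # what the stack trace shows
fixed = build_fulltext_query("sg5_t", "hive_table", [])
with_tags = build_fulltext_query("sg5_t", "hive_table", ["PII", "EXPIRES"])
```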
[jira] [Created] (ATLAS-2324) Regression : Entity update by PUT using V1 APIs throws 500 Internal server exception
Sharmadha Sainath created ATLAS-2324: Summary: Regression : Entity update by PUT using V1 APIs throws 500 Internal server exception Key: ATLAS-2324 URL: https://issues.apache.org/jira/browse/ATLAS-2324 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Attachments: NPE_during_PUT_update.txt, ent_v1, put_v1, type_v1 Steps to repro: 1. Create a type using JSON type_v1 by POSTing to /api/atlas/types 2. Create an entity for the type using JSON ent_v1 by POSTing to /api/atlas/entities 3. Update the entity using JSON put_v1 (the guid needs to be changed in the JSON) by PUTting to /api/atlas/entities. Update using PUT with the V1 APIs throws an NPE. Attached the complete exception stack trace. In V2 APIs, full entity update is done using POST and not PUT. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2325) Creating entity containing attribute of type set with duplicate values
Sharmadha Sainath created ATLAS-2325: Summary: Creating entity containing attribute of type set with duplicate values Key: ATLAS-2325 URL: https://issues.apache.org/jira/browse/ATLAS-2325 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Creating an entity whose set-typed attribute contains duplicate values is expected to store only the unique values, but the duplicates are retained. Example: POSTing [1,2,3,1] is stored as [1,2,3,1]; expected is [1,2,3]. Repro: 1. POST the following JSON to /api/atlas/v2/types/typedefs {code} { "enumDefs":[ ], "structDefs":[ ], "classificationDefs":[ ], "entityDefs":[ { "superTypes":[ ], "attributeDefs":[ { "name":"type_set", "typeName":"array", "isOptional":true, "cardinality":"SET", "valuesMinCount":-1, "valuesMaxCount":-1, "isUnique":false, "isIndexable":false } ], "category":"ENTITY", "guid":"kcdnvdsvsdvidnvidsonvosid", "createdBy":"USER", "updatedBy":"USER", "createTime":12345, "updateTime":12345, "version":12345, "name":"simple_entity_type_set", "description":"simple_entity_type_set", "typeVersion":"0.1" } ] } {code} 2. Create an entity of the type {code} { "referredEntities":{}, "entity":{ "typeName":"simple_entity_type_set", "attributes":{ "type_set":["a","a","b"] }, "guid":"-1", "status":"ACTIVE", "createdBy":"admin", "updatedBy":"admin", "createTime":1489585008165, "updateTime":1489585008801, "version":0, "classifications":[], "superTypes":[] } } {code} 3. The entity is created with the type_set value [a,a,b] instead of [a,b]. This is a regression in the V1 APIs and is seen in the V2 APIs too. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
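The expected SET-cardinality behavior can be sketched as order-preserving deduplication (an assumption about what the store should do on write, not the actual Atlas fix):

```python
# Sketch of the expected SET-cardinality semantics: duplicates are dropped
# on write while the first-seen order of the remaining values is preserved.

def dedupe_set_attribute(values):
    seen = set()
    out = []
    for v in values:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

stored = dedupe_set_attribute(["a", "a", "b"])  # the report's repro values
numbers = dedupe_set_attribute([1, 2, 3, 1])    # the report's first example
```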
[jira] [Created] (ATLAS-2326) V1 API /api/atlas/entities/ throws 500 Internal exception
Sharmadha Sainath created ATLAS-2326: Summary: V1 API /api/atlas/entities/ throws 500 Internal exception Key: ATLAS-2326 URL: https://issues.apache.org/jira/browse/ATLAS-2326 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Attachments: invalid_guid_exception.txt V1 API /api/atlas/entities/ throws a 500 Internal server exception, but with the expected error message: {code} { error: "Given instance guid invalidGUIDkqDFDFNyb2zs5 is invalid\/not found" } {code} From the application logs, we could see that the AtlasBaseException is thrown first, but later a WebApplicationException is thrown. V2 API /api/atlas/v2/entity/ throws 404. Attached the complete stack trace. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2327) Regression : Creating entity using V1 APIs with invalid values for attribute types succeeds
Sharmadha Sainath created ATLAS-2327: Summary: Regression : Creating entity using V1 APIs with invalid values for attribute types succeeds Key: ATLAS-2327 URL: https://issues.apache.org/jira/browse/ATLAS-2327 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Creating an entity with invalid values using V1 APIs succeeds with 201. Expected is 400 Bad Request. This doesn't happen with V2 APIs. Example : 1.Create type by posting JSON to /api/atlas/types 2.Create entity with random string value for float attribute by posting JSON to /api/atlas/entities 3. Request succeeds with 201. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2328) Regression : Request body with invalid JSON gets accepted by Atlas
Sharmadha Sainath created ATLAS-2328: Summary: Regression : Request body with invalid JSON gets accepted by Atlas Key: ATLAS-2328 URL: https://issues.apache.org/jira/browse/ATLAS-2328 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Sharmadha Sainath Priority: Minor {code} { "tagFilters": null, "classification": "*", "entityFilters": null, "excludeDeletedEntities": true, "typeName": "hdfs_path", "limit": 100, "offset": 0, "includeClassificationAttributes": false, "attributes": [] }} {code} The above is invalid JSON, with an extra closing brace at the end. Atlas accepts such JSON and returns a 20X code. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
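A strict parser rejects this body outright, which suggests the server is parsing only a leading prefix of the request. A minimal illustration using Python's json module (an illustrative model of strict validation, not the Atlas code path):

```python
import json

# Strict request-body validation rejects the trailing '}' that Atlas
# currently accepts: json.loads requires the whole string to be one
# well-formed JSON document, not just a valid prefix.

body = '{"typeName": "hdfs_path", "limit": 100}}'  # extra closing brace

try:
    json.loads(body)
    accepted = True
except json.JSONDecodeError:
    accepted = False

# Dropping the stray brace makes the body parse normally.
valid = json.loads(body[:-1])
```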
[jira] [Commented] (ATLAS-2324) Regression : Entity update by PUT using V1 APIs throws 500 Internal server exception
[ https://issues.apache.org/jira/browse/ATLAS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308043#comment-16308043 ] Sharmadha Sainath commented on ATLAS-2324: -- [~grahamwallis] , while creating an entity using REST, we provide some negative value for the GUID. After creation, a positive GUID is assigned to the entity by Atlas, and while updating that entity, the corresponding GUID should be used. In my Atlas instance, GUID "406a4202-90cc-4887-a8e5-e446389e14cb" (the GUID mentioned in put_v1) was generated, so POSTing the same put_v1 JSON on another Atlas instance will not work; there, some other GUID will be created during entity creation, and that GUID should be used while updating the entity. I meant for this change to be noted in the JIRA description. Please let me know if the explanation is clear. > Regression : Entity update by PUT using V1 APIs throws 500 Internal server > exception > > > Key: ATLAS-2324 > URL: https://issues.apache.org/jira/browse/ATLAS-2324 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 1.0.0 >Reporter: Sharmadha Sainath >Assignee: Madhan Neethiraj > Fix For: 1.0.0 > > Attachments: ATLAS-2324.patch, NPE_during_PUT_update.txt, ent_v1, > put_v1, type_v1 > > > Steps to repro : > 1. Create type using JSON type_v1 by POSTing to /api/atlas/types > 2. Create entity for the type using JSON ent_v1 by POSTing to > /api/atlas/entities > 3. Update the entity using JSON put_v1 (guid to be changed in the JSON) by > PUTting to /api/atlas/entities. > Update using PUT using V1 APIs throws NPE. > Attached the complete exception stack trace . > in V2 APIs , full entity update is done using POST and not PUT. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2423) UI : Min values for Double and Float require modification
Sharmadha Sainath created ATLAS-2423: Summary: UI : Min values for Double and Float require modification Key: ATLAS-2423 URL: https://issues.apache.org/jira/browse/ATLAS-2423 Project: Atlas Issue Type: Bug Components: atlas-webui Reporter: Sharmadha Sainath https://github.com/apache/atlas/blob/master/dashboardv2/public/js/utils/Enums.js has hard-coded min and max values for double and float: {code} "float": { min: 1.4E-45, max: 3.4028235E38 }, "double": { min: 4.9E-324, max: 1.7976931348623157E308 } {code} Hence the UI doesn't allow negative values at all in faceted search. The min values should be -3.4028235E38 for float and -1.7976931348623157E308 for double. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
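The corrected bounds can be sketched as follows. Note that 1.4E-45 and 4.9E-324 are the smallest positive subnormal magnitudes of IEEE-754 float and double, not minimum values, so the minimum should be the negated maximum (the validator function is a hypothetical illustration, not the Enums.js code):

```python
# Sketch of the corrected UI bounds: the minimum of a signed floating-point
# range is the negation of its maximum magnitude, not the smallest positive
# subnormal (which is what the buggy Enums.js values are).

RANGES = {
    "float":  {"min": -3.4028235e38,           "max": 3.4028235e38},
    "double": {"min": -1.7976931348623157e308, "max": 1.7976931348623157e308},
}

def in_range(type_name, value):
    r = RANGES[type_name]
    return r["min"] <= value <= r["max"]

# With the buggy min of 1.4E-45, any negative float input was rejected;
# with the corrected bounds it validates.
negative_float_ok = in_range("float", -123.45)
negative_double_ok = in_range("double", -1e100)
out_of_float_range = in_range("float", -1e39)  # below float's minimum
```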
[jira] [Updated] (ATLAS-2423) UI : Min values for Double and Float require modification
[ https://issues.apache.org/jira/browse/ATLAS-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2423: - Attachment: ATLAS-2423.patch > UI : Min values for Double and Float require modification > - > > Key: ATLAS-2423 > URL: https://issues.apache.org/jira/browse/ATLAS-2423 > Project: Atlas > Issue Type: Bug > Components: atlas-webui >Reporter: Sharmadha Sainath >Priority: Major > Attachments: ATLAS-2423.patch > > > https://github.com/apache/atlas/blob/master/dashboardv2/public/js/utils/Enums.js > has hard coded values for min and max values for double and float : > {code} > "float": { > min: 1.4E-45, > max: 3.4028235E38 > }, > "double": { > min: 4.9E-324, > max: 1.7976931348623157E308 > } > {code} > Hence UI doesn't allow negative values at all in faceted search. > Values should be -3.4028235E38 and -1.7976931348623157E308 for min value of > float and double respectively. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (ATLAS-2423) UI : Min values for Double and Float require modification
[ https://issues.apache.org/jira/browse/ATLAS-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344503#comment-16344503 ] Sharmadha Sainath commented on ATLAS-2423: -- Review request : https://reviews.apache.org/r/65417/ > UI : Min values for Double and Float require modification > - > > Key: ATLAS-2423 > URL: https://issues.apache.org/jira/browse/ATLAS-2423 > Project: Atlas > Issue Type: Bug > Components: atlas-webui >Reporter: Sharmadha Sainath >Priority: Major > Attachments: ATLAS-2423.patch > > > https://github.com/apache/atlas/blob/master/dashboardv2/public/js/utils/Enums.js > has hard coded values for min and max values for double and float : > {code} > "float": { > min: 1.4E-45, > max: 3.4028235E38 > }, > "double": { > min: 4.9E-324, > max: 1.7976931348623157E308 > } > {code} > Hence UI doesn't allow negative values at all in faceted search. > Values should be -3.4028235E38 and -1.7976931348623157E308 for min value of > float and double respectively. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-2452) HBase Atlas Hook : isReadOnly hbase_table parameter is always false
Sharmadha Sainath created ATLAS-2452: Summary: HBase Atlas Hook : isReadOnly hbase_table parameter is always false Key: ATLAS-2452 URL: https://issues.apache.org/jira/browse/ATLAS-2452 Project: Atlas Issue Type: Bug Components: atlas-intg Reporter: Sharmadha Sainath isReadOnly hbase_table attribute is always set as -1 because of : table.setAttribute(ATTR_TABLE_ISREADONLY, htableDescriptor.getMaxFileSize()); (https://github.com/apache/atlas/blob/master/addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/bridge/HBaseAtlasHook.java#L453) Change needed to use htableDescriptor.isReadOnly() instead of htableDescriptor.getMaxFileSize(). CC : [~rmani] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (ATLAS-2452) HBase Atlas Hook : isReadOnly hbase_table parameter is always false
[ https://issues.apache.org/jira/browse/ATLAS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2452: - Attachment: ATLAS-2452.patch > HBase Atlas Hook : isReadOnly hbase_table parameter is always false > --- > > Key: ATLAS-2452 > URL: https://issues.apache.org/jira/browse/ATLAS-2452 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Reporter: Sharmadha Sainath >Priority: Major > Attachments: ATLAS-2452.patch > > > isReadOnly hbase_table attribute is always set as -1 because of : > table.setAttribute(ATTR_TABLE_ISREADONLY, htableDescriptor.getMaxFileSize()); > (https://github.com/apache/atlas/blob/master/addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/bridge/HBaseAtlasHook.java#L453) > Change needed to use htableDescriptor.isReadOnly() instead of > htableDescriptor.getMaxFileSize(). > CC : [~rmani] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (ATLAS-2452) HBase Atlas Hook : isReadOnly hbase_table parameter is always false
[ https://issues.apache.org/jira/browse/ATLAS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367305#comment-16367305 ] Sharmadha Sainath commented on ATLAS-2452: -- Review request : https://reviews.apache.org/r/65684/ > HBase Atlas Hook : isReadOnly hbase_table parameter is always false > --- > > Key: ATLAS-2452 > URL: https://issues.apache.org/jira/browse/ATLAS-2452 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Reporter: Sharmadha Sainath >Priority: Major > Attachments: ATLAS-2452.patch > > > isReadOnly hbase_table attribute is always set as -1 because of : > table.setAttribute(ATTR_TABLE_ISREADONLY, htableDescriptor.getMaxFileSize()); > (https://github.com/apache/atlas/blob/master/addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/bridge/HBaseAtlasHook.java#L453) > Change needed to use htableDescriptor.isReadOnly() instead of > htableDescriptor.getMaxFileSize(). > CC : [~rmani] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-2453) Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer
Sharmadha Sainath created ATLAS-2453: Summary: Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer Key: ATLAS-2453 URL: https://issues.apache.org/jira/browse/ATLAS-2453 Project: Atlas Issue Type: Bug Components: atlas-intg Reporter: Sharmadha Sainath The maxFileSize attribute of hbase_table should be of type long. Currently it is an Integer; hence, when creating/updating an hbase_table with a maxFileSize value beyond the integer range, the data between HBase and Atlas becomes inconsistent. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
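Why an Integer-typed attribute corrupts the value can be illustrated by emulating Java's narrowing conversion from long to int (a hypothetical illustration; the 10 GB figure is an example value, not from the report):

```python
# Illustration of the inconsistency: HBase reports maxFileSize as a 64-bit
# long, and narrowing it into a signed 32-bit int (as an Integer-typed
# attribute would) wraps any value beyond 2^31 - 1.

def to_int32(value: int) -> int:
    # Emulate Java's narrowing primitive conversion from long to int:
    # keep the low 32 bits and reinterpret as signed.
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value >= 0x80000000 else value

max_file_size = 10 * 1024 ** 3   # 10 GiB, an example HBase setting
as_int = to_int32(max_file_size)  # what an Integer attribute would store
as_long = max_file_size           # what a long attribute stores
```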
[jira] [Updated] (ATLAS-2453) Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer
[ https://issues.apache.org/jira/browse/ATLAS-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2453: - Attachment: ATLAS-2453.patch > Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer > > > Key: ATLAS-2453 > URL: https://issues.apache.org/jira/browse/ATLAS-2453 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Reporter: Sharmadha Sainath >Priority: Major > Attachments: ATLAS-2453.patch > > > maxFileSize attribute of hbase_table should be of type long. > Currently it is an Integer , hence , when creating/updating hbase_table with > maxFileSize value more than integer's range , data between HBase and Atlas is > inconsistent. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (ATLAS-2453) Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer
[ https://issues.apache.org/jira/browse/ATLAS-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367478#comment-16367478 ] Sharmadha Sainath commented on ATLAS-2453: -- Review Request : https://reviews.apache.org/r/65688/ > Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer > > > Key: ATLAS-2453 > URL: https://issues.apache.org/jira/browse/ATLAS-2453 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Reporter: Sharmadha Sainath >Priority: Major > Attachments: ATLAS-2453.patch > > > maxFileSize attribute of hbase_table should be of type long. > Currently it is an Integer , hence , when creating/updating hbase_table with > maxFileSize value more than integer's range , data between HBase and Atlas is > inconsistent. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (ATLAS-2453) Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer
[ https://issues.apache.org/jira/browse/ATLAS-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367478#comment-16367478 ] Sharmadha Sainath edited comment on ATLAS-2453 at 2/16/18 3:38 PM: --- Review Request : https://reviews.apache.org/r/65688/ CC : [~rmani] was (Author: ssainath): Review Request : https://reviews.apache.org/r/65688/ > Atlas HBase Hook : Datatype of maxFileSize should be Long instead of Integer > > > Key: ATLAS-2453 > URL: https://issues.apache.org/jira/browse/ATLAS-2453 > Project: Atlas > Issue Type: Bug > Components: atlas-intg >Reporter: Sharmadha Sainath >Priority: Major > Attachments: ATLAS-2453.patch > > > maxFileSize attribute of hbase_table should be of type long. > Currently it is an Integer , hence , when creating/updating hbase_table with > maxFileSize value more than integer's range , data between HBase and Atlas is > inconsistent. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (ATLAS-2475) HBase Atlas import utility : importing using -t default:table fails when namespace is default
Sharmadha Sainath created ATLAS-2475: Summary: HBase Atlas import utility : importing using -t default:table fails when namespace is default Key: ATLAS-2475 URL: https://issues.apache.org/jira/browse/ATLAS-2475 Project: Atlas Issue Type: Bug Components: atlas-intg Affects Versions: 1.0.0 Reporter: Sharmadha Sainath Fix For: 1.0.0 Running the following scripts works: ./import-hbase.sh -t ns1:table5 ./import-hbase.sh -t table2 But the following import script, which specifies the default namespace explicitly, doesn't work: ./import-hbase.sh -t default:table2 and throws the following exception: {code} Exception in thread "main" org.apache.atlas.hook.AtlasHookException: ImportHBaseEntities failed. at org.apache.atlas.hbase.util.ImportHBaseEntities.main(ImportHBaseEntities.java:57) Caused by: java.lang.NullPointerException at org.apache.atlas.hbase.util.ImportHBaseEntities.importTable(ImportHBaseEntities.java:115) at org.apache.atlas.hbase.util.ImportHBaseEntities.execute(ImportHBaseEntities.java:88) at org.apache.atlas.hbase.util.ImportHBaseEntities.main(ImportHBaseEntities.java:54) Failed to import HBase Data Model!!! {code} In the HBase shell, it is possible to create a table in the default namespace by specifying the namespace explicitly: > create 'default:table2','cf1' So the expectation is that importing with the default namespace specified explicitly should also work. CC : [~rmani] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
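A parser that resolves all three spellings from the report consistently might look like this (a hypothetical sketch of the expected behavior, not the ImportHBaseEntities code):

```python
# Sketch of table-name parsing that treats "default:table" the same way the
# HBase shell does: a bare name falls back to the default namespace, and an
# explicit "default:" prefix resolves to the same (namespace, table) pair.

def parse_table_name(name: str, default_ns: str = "default"):
    ns, sep, table = name.partition(":")
    if not sep:  # bare table name -> default namespace
        return default_ns, name
    return ns, table

# All three spellings from the report resolve cleanly:
cases = {n: parse_table_name(n)
         for n in ("ns1:table5", "table2", "default:table2")}
```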
[jira] [Created] (ATLAS-2019) Search using entity and trait attributes - Equals comparison using filter key when filter key is present between special characters
Sharmadha Sainath created ATLAS-2019: Summary: Search using entity and trait attributes - Equals comparison using filter key when filter key is present between special characters Key: ATLAS-2019 URL: https://issues.apache.org/jira/browse/ATLAS-2019 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath 1. Created an hdfs_path entity with name = "user", path = "/user" 2. Created another hdfs_path with name = "dir1", path = "/user/dir1" 3. Searched for hdfs_path with the filter Path = /user 4. The search listed both hdfs_path entities - user and dir1. Expected that only the "user" entity would be returned. Here the filter key is "/user": whenever the filter key appears between special characters in an attribute value ("/" in this case), all such entities are returned. Note that the applied filter is "=", not "contains". Created a few random hdfs_path entities for testing. For the query type = "hdfs_path" and path = "/user", entities with the following path attribute values are returned: * user * /user * user/ * /user/dir1 * @user@dir1 * /user@dir1 Entities with the following path values are *NOT* returned: * /userdir1 * rootuser/dir2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
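The returned/not-returned lists are consistent with the "=" filter being evaluated against a tokenized text index rather than the raw value. A sketch of that hypothesis (the tokenizer split on non-alphanumeric characters is an assumption, not confirmed Atlas behavior):

```python
import re

# Hypothesis for the observed matches: "=" is evaluated against tokens
# (split on special characters), so "/user" matches any value whose token
# set contains "user". Exact matching on the raw value gives the expected
# single result.

def tokens(value):
    # Split on non-alphanumeric characters, as a text tokenizer would.
    return [t for t in re.split(r"[^A-Za-z0-9]+", value) if t]

def tokenized_equals(value, key):  # models the observed (buggy) behavior
    return set(tokens(key)) <= set(tokens(value))

def exact_equals(value, key):      # models the expected behavior
    return value == key

paths = ["user", "/user", "user/", "/user/dir1", "@user@dir1",
         "/user@dir1", "/userdir1", "rootuser/dir2"]
matched = [p for p in paths if tokenized_equals(p, "/user")]
exact = [p for p in paths if exact_equals(p, "/user")]
```

The tokenized predicate reproduces exactly the returned/not-returned split listed in the report.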
[jira] [Created] (ATLAS-2020) Result Table Column Filter : Filtering using Columns in Result table always sets excludeDeletedEntities to True
Sharmadha Sainath created ATLAS-2020: Summary: Result Table Column Filter : Filtering using Columns in Result table always sets excludeDeletedEntities to True Key: ATLAS-2020 URL: https://issues.apache.org/jira/browse/ATLAS-2020 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath 1. Searched hive_table in basic search. 2. Checked the "Include historical entities" check box - the result table had DELETED entities. 3. Added the Columns filter "columns" - this fired a new query with excludeDeletedEntities set to True, fetched only the ACTIVE entities, and reset (unchecked) "Include historical entities". Expected that the current value of excludeDeletedEntities would be used when column filtering. After column filtering, "Include historical entities" had to be checked again explicitly to list the DELETED entities. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2021) UI Regression : Tags Tab in Entity details page not loaded.
Sharmadha Sainath created ATLAS-2021: Summary: UI Regression : Tags Tab in Entity details page not loaded. Key: ATLAS-2021 URL: https://issues.apache.org/jira/browse/ATLAS-2021 Project: Atlas Issue Type: Bug Components: atlas-webui Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Critical 1. Assigned a tag to an entity. 2. On clicking the Tags tab in the detailsPage of that entity, the content of the Tags tab is not displayed. The following exception is seen in the Console tab : {code} TagDetailTableLayoutView.js:138 Uncaught TypeError: Cannot read property 'toJSON' of undefined at Object.fromRaw (TagDetailTableLayoutView.js:138) at n.render (Overrides.js:103) at n.render (backgrid.js:1986) at n.render (backgrid.js:2511) at n.render (backgrid.js:2923) at constructor.show (backbone.marionette.min.js:20) at n.renderTable (TableLayout.js:241) at n.onRender (TableLayout.js:210) at backbone.marionette.min.js:20 at n.triggerMethod (backbone.marionette.min.js:20) {code} The issue is seen when more than 25 tags are created and a tag at position > 25 in the tag list is assigned to an entity. Thanks [~kevalbhatt] for helping to reproduce the issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2022) Regression : Empty results fetched from GET Basic search query
Sharmadha Sainath created ATLAS-2022: Summary: Regression : Empty results fetched from GET Basic search query Key: ATLAS-2022 URL: https://issues.apache.org/jira/browse/ATLAS-2022 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Blocker A basic search query fired as a POST request with attribute and tag filters fetches correct results. But the same basic query fired as a GET request with the query params encoded in the URL returns empty results. For example, the basic query typeName = "hive_table", query = "employee": http://localhost:21000/api/atlas/v2/search/basic?typeName=hive_table&query=employee returns {code} { queryType: "BASIC", searchParameters: { query: "employee", typeName: "hive_table", excludeDeletedEntities: false, limit: 0, offset: 0 }, queryText: "employee" } {code} A few commits back, the following was the response: {code} { queryType: "BASIC", type: "hive_table", entities: [ { typeName: "hive_table", attributes: { owner: "admin", qualifiedName: "default.employee@cl1", name: "employee", description: null }, guid: "253aa208-0415-4e86-8611-3858fad78ede", status: "ACTIVE", displayText: "employee", classificationNames: [ ] } ] } {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ATLAS-2022) Regression : Empty results fetched from GET Basic search query
[ https://issues.apache.org/jira/browse/ATLAS-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16114236#comment-16114236 ] Sharmadha Sainath commented on ATLAS-2022: -- The reason for the empty results is that limit is set to 0 by default. When the limit parameter is added to the query, the response is correct. > Regression : Empty results fetched from GET Basic search query > -- > > Key: ATLAS-2022 > URL: https://issues.apache.org/jira/browse/ATLAS-2022 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Priority: Blocker > > Basic search query fired as a POST request with attribute , tag filters > fetches correct results. > But Basic query fired as a GET request with query params encoded in the URL , > returns empty results. > For example : Basic query : typeName = "hive_table" , query = "employee" : > http://localhost:21000/api/atlas/v2/search/basic?typeName=hive_table&query=employee > returns > {code} > { > queryType: "BASIC", > searchParameters: { > query: "employee", > typeName: "hive_table", > excludeDeletedEntities: false, > limit: 0, > offset: 0 > }, > queryText: "employee" > } > {code} > Few commits back , following was the response : > {code} > { > queryType: "BASIC", > type: "hive_table", > entities: [ > { > typeName: "hive_table", > attributes: { > owner: "admin", > qualifiedName: "default.employee@cl1", > name: "employee", > description: null > }, > guid: "253aa208-0415-4e86-8611-3858fad78ede", > status: "ACTIVE", > displayText: "employee", > classificationNames: [ ] > } > ] > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2022) Regression : Empty results fetched from GET Basic search query
[ https://issues.apache.org/jira/browse/ATLAS-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2022: - Attachment: ATLAS-2022.patch Commit https://github.com/apache/atlas/commit/9b72de98072f4b4adcc3179399342fd63494043b introduced using atlasDiscoveryService.searchWithParameters() instead of atlasDiscoveryService.searchUsingBasicQuery(). The latter had the limit and offset check, whereas the former didn't. The patch adds the check. CC : [~apoorvnaik] [~madhan.neethiraj] [~ayubkhan] > Regression : Empty results fetched from GET Basic search query > -- > > Key: ATLAS-2022 > URL: https://issues.apache.org/jira/browse/ATLAS-2022 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Priority: Blocker > Attachments: ATLAS-2022.patch > > > Basic search query fired as a POST request with attribute , tag filters > fetches correct results. > But Basic query fired as a GET request with query params encoded in the URL , > returns empty results. > For example : Basic query : typeName = "hive_table" , query = "employee" : > http://localhost:21000/api/atlas/v2/search/basic?typeName=hive_table&query=employee > returns > {code} > { > queryType: "BASIC", > searchParameters: { > query: "employee", > typeName: "hive_table", > excludeDeletedEntities: false, > limit: 0, > offset: 0 > }, > queryText: "employee" > } > {code} > Few commits back , following was the response : > {code} > { > queryType: "BASIC", > type: "hive_table", > entities: [ > { > typeName: "hive_table", > attributes: { > owner: "admin", > qualifiedName: "default.employee@cl1", > name: "employee", > description: null > }, > guid: "253aa208-0415-4e86-8611-3858fad78ede", > status: "ACTIVE", > displayText: "employee", > classificationNames: [ ] > } > ] > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
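The missing check described in the update amounts to normalizing limit/offset before the query runs, so a GET request that omits ?limit= does not execute with limit=0 and return zero rows. A minimal sketch; the constant values are assumptions for illustration, not Atlas's actual defaults:

```python
DEFAULT_SEARCH_LIMIT = 100    # assumed default page size
MAX_SEARCH_LIMIT = 10000      # assumed upper bound

def normalize_limit_offset(limit, offset):
    # Fall back to sane values when the caller omits limit/offset, so a GET
    # request without ?limit= no longer runs the search with limit=0 and
    # returns an empty result set.
    if limit is None or limit <= 0 or limit > MAX_SEARCH_LIMIT:
        limit = DEFAULT_SEARCH_LIMIT
    if offset is None or offset < 0:
        offset = 0
    return limit, offset
```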
[jira] [Commented] (ATLAS-2021) UI Regression : Tags Tab in Entity details page not loaded.
[ https://issues.apache.org/jira/browse/ATLAS-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16114305#comment-16114305 ] Sharmadha Sainath commented on ATLAS-2021: -- Tested the patch on Master. +1 > UI Regression : Tags Tab in Entity details page not loaded. > --- > > Key: ATLAS-2021 > URL: https://issues.apache.org/jira/browse/ATLAS-2021 > Project: Atlas > Issue Type: Bug > Components: atlas-webui >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Assignee: Keval Bhatt >Priority: Critical > Fix For: 0.8-incubating, 0.9-incubating > > Attachments: ATLAS-2021.patch > > > 1.Assigned a tag to an entity . > 2.On clicking the Tags tab in the detailsPage of that entity , content of > Tags tab is not displayed. > Following exception is seen in Console tab : > {code} > TagDetailTableLayoutView.js:138 Uncaught TypeError: Cannot read property > 'toJSON' of undefined > at Object.fromRaw (TagDetailTableLayoutView.js:138) > at n.render (Overrides.js:103) > at n.render (backgrid.js:1986) > at n.render (backgrid.js:2511) > at n.render (backgrid.js:2923) > at constructor.show (backbone.marionette.min.js:20) > at n.renderTable (TableLayout.js:241) > at n.onRender (TableLayout.js:210) > at backbone.marionette.min.js:20 > at n.triggerMethod (backbone.marionette.min.js:20) > {code} > The issue is seen when more than 25 tags are created and tag at location > 25 > in tag list is tagged to an entity.Thanks [~kevalbhatt] for helping to > reproduce the issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2023) Default value set as "null" for newly added tag attributes.
Sharmadha Sainath created ATLAS-2023: Summary: Default value set as "null" for newly added tag attributes. Key: ATLAS-2023 URL: https://issues.apache.org/jira/browse/ATLAS-2023 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath 1. Created a tag 'tag1' with attributes int1, date1 2. Associated the tag 'tag1' to an entity 'entity1' without specifying values for the attributes. Default values are set for the tag attributes: (i.e.) 0 for int1 and Fri Aug 04 2017 00:00:00 GMT+0530 (the current date) for date1 3. Added 2 more attributes to 'tag1' - int2, date2. 4. Expected that default values would be set for the new tag attributes of 'entity1'. But null is set for int2 and date2, which the UI shows as '-' for int2 and 'invalid date' for date2. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
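The behaviour expected in step 4 - new attributes receiving the same type defaults that step 2 shows being applied at association time - can be modeled as a backfill over an existing association. The helper functions and the default table are hypothetical sketches, not Atlas code:

```python
from datetime import datetime

def default_for(attr_type):
    # Per-type defaults mirroring what step 2 reports being applied at
    # association time: 0 for int, the current date for date (assumption).
    if attr_type == "int":
        return 0
    if attr_type == "date":
        return datetime.now()
    return None

def backfill_new_attributes(association, attr_types):
    # Give newly added tag attributes their type default on an existing
    # association instead of leaving them null.
    for name, attr_type in attr_types.items():
        if association.get(name) is None:
            association[name] = default_for(attr_type)
    return association
```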
[jira] [Created] (ATLAS-2025) Basic query with entity and tag filters : Providing invalid tag name for classification returns all entities matching typename/query
Sharmadha Sainath created ATLAS-2025: Summary: Basic query with entity and tag filters : Providing invalid tag name for classification returns all entities matching typename/query Key: ATLAS-2025 URL: https://issues.apache.org/jira/browse/ATLAS-2025 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating, 0.8.1-incubating Reporter: Sharmadha Sainath Priority: Critical Fired a basic query with the following POST request body containing a non-existent tag, using curl (because the UI only lets the user select from existing tags): {code} { "entityFilters":null, "tagFilters":null, "attributes":null, "query":null, "excludeDeletedEntities":true, "limit":25, "typeName":"kafka_topic", "classification":"non_existing_tag" } {code} A GET basic search request with the classification parameter also lists all the kafka_topic entities: {code} http://localhost:21000/api/atlas/v2/search/basic?typeName=kafka_topic&classification=non_existing_tag&limit=50 {code} Expected a 40x response code saying that the tag doesn't exist. But the response listed all the kafka_topic entities. This gives the illusion that all the returned kafka_topic entities are tagged with 'non_existing_tag'. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
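The 40x behaviour expected here would follow from validating the classification against the set of known tags before the search runs. The exception class and function are stand-ins for illustration, not Atlas's actual API:

```python
class UnknownClassificationError(Exception):
    # Stand-in for an error the API could map to a 404 response.
    pass

def validate_classification(classification, known_tags):
    # Reject a search whose classification does not exist, instead of
    # silently dropping the constraint and returning every entity of the type.
    if classification is not None and classification not in known_tags:
        raise UnknownClassificationError(
            "classification not found: " + classification)

validate_classification("PII", {"PII", "tag1"})   # existing tag: no error
```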
[jira] [Created] (ATLAS-2026) Basic/DSL Query with invalid typeName/classification name throws 500 Internal server error
Sharmadha Sainath created ATLAS-2026: Summary: Basic/DSL Query with invalid typeName/classification name throws 500 Internal server error Key: ATLAS-2026 URL: https://issues.apache.org/jira/browse/ATLAS-2026 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating, 0.8.1-incubating Reporter: Sharmadha Sainath The following DSL/Basic queries with an invalid classification name and type name throw a 500 Internal server error: {code} http://localhost:21000/api/atlas/v2/search/basic?typeName=unknown_type&classification=unknown_tag http://localhost:21000/api/atlas/v2/search/dsl?typeName=unknown_type&classification=unknown_tag {code} It would be more appropriate to throw a 40x error code instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
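Returning a 40x here amounts to translating the failed type lookup into a client error before it can surface as a server-side failure. A minimal sketch with hypothetical names, not Atlas's actual code:

```python
def resolve_search_type(type_name, type_registry):
    # Look the type up explicitly and return a 40x status for unknown names,
    # rather than letting a missing type propagate into a 500 further down.
    entity_type = type_registry.get(type_name)
    if entity_type is None:
        return 404, "Unknown typeName: " + type_name
    return 200, entity_type
```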
[jira] [Updated] (ATLAS-2026) Basic/DSL Query with invalid typeName/classification name throws 500 Internal server error
[ https://issues.apache.org/jira/browse/ATLAS-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2026: - Attachment: ATLAS-2026.patch > Basic/DSL Query with invalid typeName/classification name throws 500 Internal > server error > -- > > Key: ATLAS-2026 > URL: https://issues.apache.org/jira/browse/ATLAS-2026 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating, 0.8.1-incubating >Reporter: Sharmadha Sainath > Attachments: ATLAS-2026.patch > > > Following DSL/Basic queries with invalid classification name and type name > throw 500 Internal server error > {code} > http://localhost:21000/api/atlas/v2/search/basic?typeName=unknown_type&classification=unknown_tag > http://localhost:21000/api/atlas/v2/search/dsl?typeName=unknown_type&classification=unknown_tag > {code} > It would be appropriate to throw 40X Error code instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ATLAS-2026) Basic/DSL Query with invalid typeName/classification name throws 500 Internal server error
[ https://issues.apache.org/jira/browse/ATLAS-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2026: - Affects Version/s: (was: 0.8.1-incubating) > Basic/DSL Query with invalid typeName/classification name throws 500 Internal > server error > -- > > Key: ATLAS-2026 > URL: https://issues.apache.org/jira/browse/ATLAS-2026 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath > Attachments: ATLAS-2026.patch > > > Following DSL/Basic queries with invalid classification name and type name > throw 500 Internal server error > {code} > http://localhost:21000/api/atlas/v2/search/basic?typeName=unknown_type&classification=unknown_tag > http://localhost:21000/api/atlas/v2/search/dsl?typeName=unknown_type&classification=unknown_tag > {code} > It would be appropriate to throw 40X Error code instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2028) Basic query with entity and tag filters : Invalid filter keys for type and tags attributes are ignored and fetches all entities of the type / associated to the tag
Sharmadha Sainath created ATLAS-2028: Summary: Basic query with entity and tag filters : Invalid filter keys for type and tags attributes are ignored and fetches all entities of the type / associated to the tag Key: ATLAS-2028 URL: https://issues.apache.org/jira/browse/ATLAS-2028 Project: Atlas Issue Type: Bug Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Basic search request with POST body fired using curl: {code} { "entityFilters":{"condition":"AND","criterion":[{"attributeName":"invalid_attr","operator":"=","attributeValue":"userdir"}]}, "tagFilters":null, "attributes":null, "query":null, "excludeDeletedEntities":true, "limit":25, "typeName":"hdfs_path", "classification":null } {code} All the hdfs_path entities are fetched because invalid filter keys are ignored: {code} Converted query string with 2 replacements: [v."__typeName":(hdfs_path) AND v."__state":ACTIVE] => [iyt_t:(hdfs_path) AND b2d_t:ACTIVE] (IndexSerializer:648) {code} The same happens with invalid tag attribute filters. This gives the illusion that all the returned hdfs_path entities satisfy the filter condition. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
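Failing fast here means checking each filter's attributeName against the attributes the type actually declares, rather than dropping unknown keys from the converted query. A sketch with a hypothetical helper; the criterion shape follows the POST body above:

```python
def unknown_filter_keys(criterion, declared_attributes):
    # Return the filter keys the type does not declare, so the API can
    # reject the request instead of silently ignoring those filters.
    return [c.get("attributeName") for c in criterion
            if c.get("attributeName") not in declared_attributes]

bad = unknown_filter_keys(
    [{"attributeName": "invalid_attr", "operator": "=", "attributeValue": "userdir"}],
    {"name", "path", "qualifiedName", "owner"},  # assumed hdfs_path attributes
)
```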
[jira] [Updated] (ATLAS-2026) Basic/DSL Query with invalid typeName/classification name throws 500 Internal server error
[ https://issues.apache.org/jira/browse/ATLAS-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2026: - Attachment: ATLAS-2026.2.patch > Basic/DSL Query with invalid typeName/classification name throws 500 Internal > server error > -- > > Key: ATLAS-2026 > URL: https://issues.apache.org/jira/browse/ATLAS-2026 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath > Attachments: ATLAS-2026.2.patch, ATLAS-2026.patch > > > Following DSL/Basic queries with invalid classification name and type name > throw 500 Internal server error > {code} > http://localhost:21000/api/atlas/v2/search/basic?typeName=unknown_type&classification=unknown_tag > http://localhost:21000/api/atlas/v2/search/dsl?typeName=unknown_type&classification=unknown_tag > {code} > It would be appropriate to throw 40X Error code instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ATLAS-2035) Search using entity and trait attributes - Issue with Case insensitive search in entity attributes and tag
Sharmadha Sainath created ATLAS-2035: Summary: Search using entity and trait attributes - Issue with Case insensitive search in entity attributes and tag Key: ATLAS-2035 URL: https://issues.apache.org/jira/browse/ATLAS-2035 Project: Atlas Issue Type: Bug Components: atlas-core Affects Versions: 0.9-incubating Reporter: Sharmadha Sainath Priority: Critical 1. Created an hdfs_path entity with description "hdfs_path" 2. Created a tag "tag1" and associated the tag to the hdfs_path entity 3. In Basic Search: a) Typename = hdfs_path, filter: description = hdfs_path returned the hdfs_path entity b) Typename = hdfs_path, filter: description = HDFS_PATH returned the hdfs_path entity (to verify case insensitivity) c) Typename = hdfs_path, filter: description = hdfs_path, tag = tag1 returned the hdfs_path entity. d) But Typename = hdfs_path, filter: description = HDFS_PATH, tag = tag1 did not fetch the entity. Therefore, any search combining a tag with a case-insensitive entity-attribute filter does not fetch the expected results. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
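Cases b) and d) differing suggests the tag-constrained code path compares the attribute value case-sensitively while the plain path normalizes it. The normalization both paths would need can be sketched as follows (an illustration, not Atlas's implementation):

```python
def matches_eq_ignore_case(attribute_value, filter_value):
    # Case-insensitive "=" comparison; applying the same normalization in
    # the tag-filtered path would make cases b) and d) behave alike.
    return (attribute_value is not None
            and attribute_value.lower() == filter_value.lower())
```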
[jira] [Updated] (ATLAS-2035) Search using entity and trait attributes - Issue with Case insensitive search in entity attributes and tag
[ https://issues.apache.org/jira/browse/ATLAS-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharmadha Sainath updated ATLAS-2035: - Description: 1. Created an hdfs_path entity with description "hdfs_path" 2. Created a tag "tag1" and associated the tag to hdfs_path entity 3. In Basic Search a) Typename = hdfs_path , filter : description = hdfs_path returned the hdfs_path entity b) Typename = hdfs_path , filter : description = HDFS_PATH returned the hdfs_path entity (to verify case insensitivity) c) Typename = hdfs_path , filter : description = hdfs_path , tag = tag1 returned the hdfs_path entity. d) But , Typename = hdfs_path , filter : description = HDFS_PATH , tag = tag1 did not fetch the entity. Therefore , any search with tag and case insensitive search in entityFilters does not fetch expected results. Logs from EntitySearchProcessor: {code} 2017-08-09 12:30:06,761 DEBUG - [pool-2-thread-10 - 56d0fb40-683d-48bb-8cf7-8d2dee690379:] ~ ==> EntitySearchProcessor.execute(searchParameters={query='null', typeName='hdfs_path', classification='tag1', excludeDeletedEntities=true, limit=25, offset=0, entityFilters={attributeName='null', operator=null, attributeValue='null', condition=AND, criterion=[{attributeName='description', operator=eq, attributeValue='HDFS_PATH', condition=null, criterion=null}]}, tagFilters=null, attributes=null}) (EntitySearchProcessor:129) 2017-08-09 12:30:06,788 DEBUG - [pool-2-thread-10 - 56d0fb40-683d-48bb-8cf7-8d2dee690379:] ~ <== EntitySearchProcessor.execute(searchParameters={query='null', typeName='hdfs_path', classification='tag1', excludeDeletedEntities=true, limit=25, offset=0, entityFilters={attributeName='null', operator=null, attributeValue='null', condition=AND, criterion=[{attributeName='description', operator=eq, attributeValue='HDFS_PATH', condition=null, criterion=null}]}, tagFilters=null, attributes=null}): ret.size()=0 (EntitySearchProcessor:213) {code} was: 1. 
Created an hdfs_path entity with description "hdfs_path" 2. Created a tag "tag1" and associated the tag to hdfs_path entity 3. In Basic Search a) Typename = hdfs_path , filter : description = hdfs_path returned the hdfs_path entity b) Typename = hdfs_path , filter : description = HDFS_PATH returned the hdfs_path entity (to verify case insensitivity) c) Typename = hdfs_path , filter : description = hdfs_path , tag = tag1 returned the hdfs_path entity. d) But , Typename = hdfs_path , filter : description = HDFS_PATH , tag = tag1 did not fetch the entity. Therefore , any search with tag and case insensitive search in entityFilters does not fetch expected results. > Search using entity and trait attributes - Issue with Case insensitive search > in entity attributes and tag > --- > > Key: ATLAS-2035 > URL: https://issues.apache.org/jira/browse/ATLAS-2035 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Affects Versions: 0.9-incubating >Reporter: Sharmadha Sainath >Priority: Critical > > 1. Created an hdfs_path entity with description "hdfs_path" > 2. Created a tag "tag1" and associated the tag to hdfs_path entity > 3. In Basic Search > a) Typename = hdfs_path , filter : description = hdfs_path returned the > hdfs_path entity > b) Typename = hdfs_path , filter : description = HDFS_PATH returned the > hdfs_path entity (to verify case insensitivity) >c) Typename = hdfs_path , filter : description = hdfs_path , tag = tag1 > returned the hdfs_path entity. > d) But , Typename = hdfs_path , filter : description = HDFS_PATH , tag = > tag1 did not fetch the entity. > Therefore , any search with tag and case insensitive search in entityFilters > does not fetch expected results. 
> Logs from EntitySearchProcessor: > {code} > 2017-08-09 12:30:06,761 DEBUG - [pool-2-thread-10 - > 56d0fb40-683d-48bb-8cf7-8d2dee690379:] ~ ==> > EntitySearchProcessor.execute(searchParameters={query='null', > typeName='hdfs_path', classification='tag1', excludeDeletedEntities=true, > limit=25, offset=0, entityFilters={attributeName='null', operator=null, > attributeValue='null', condition=AND, > criterion=[{attributeName='description', operator=eq, > attributeValue='HDFS_PATH', condition=null, criterion=null}]}, > tagFilters=null, attributes=null}) (EntitySearchProcessor:129) > 2017-08-09 12:30:06,788 DEBUG - [pool-2-thread-10 - > 56d0fb40-683d-48bb-8cf7-8d2dee690379:] ~ <== > EntitySearchProcessor.execute(searchParameters={query='null', > typeName='hdfs_path', classification='tag1', excludeDeletedEntities=true, > limit=25, offset=0, entityFilters={attributeName='null', operator=null, > attributeValue='null', condition=AND, > criterion=[{attributeName='description', operator=eq,