[jira] [Updated] (ATLAS-4646) [Lineage Improvements][Regression]On a single standalone table, lineage information is missing on the latest bits
[ https://issues.apache.org/jira/browse/ATLAS-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4646:
----------------------------------------------
Summary: [Lineage Improvements][Regression] On a single standalone table, lineage information is missing on the latest bits  (was: [Lineage Improvements] On a single standalone table, lineage information is missing on the latest bits)

> [Lineage Improvements][Regression] On a single standalone table, lineage
> information is missing on the latest bits
> ------------------------------------------------------------------------
>
>                 Key: ATLAS-4646
>                 URL: https://issues.apache.org/jira/browse/ATLAS-4646
>             Project: Atlas
>          Issue Type: Bug
>          Components: atlas-core
>            Reporter: Dharshana M Krishnamoorthy
>            Assignee: Dharshana M Krishnamoorthy
>            Priority: Major
>         Attachments: Screenshot 2022-07-23 at 1.06.26 AM.png
>
> Just create a table, e.g.:
> {code:java}
> create table table_only(name string, e_id int, contact_no int);
> {code}
> This should show just the table as lineage info, but with the latest changes it displays
> {code:java}
> No lineage data found
> {code}
> Screenshot attached.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (ATLAS-4646) [Lineage Improvements] On a single standalone table, lineage information is missing on the latest bits
Dharshana M Krishnamoorthy created ATLAS-4646:
----------------------------------------------

             Summary: [Lineage Improvements] On a single standalone table, lineage information is missing on the latest bits
                 Key: ATLAS-4646
                 URL: https://issues.apache.org/jira/browse/ATLAS-4646
             Project: Atlas
          Issue Type: Bug
          Components: atlas-core
            Reporter: Dharshana M Krishnamoorthy
            Assignee: Dharshana M Krishnamoorthy
         Attachments: Screenshot 2022-07-23 at 1.06.26 AM.png

Just create a table, e.g.:
{code:java}
create table table_only(name string, e_id int, contact_no int);
{code}
This should show just the table as lineage info, but with the latest changes it displays
{code:java}
No lineage data found
{code}
Screenshot attached.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
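The expectation above can be stated as a small invariant: even with no upstream or downstream processes, the lineage response should contain the base entity itself rather than an empty graph. A minimal Python sketch (not Atlas code; the field names follow the AtlasLineageInfo JSON shown in the neighbouring tickets, and the mock responses are hypothetical):

```python
# Hypothetical invariant check against mock Atlas lineage responses.
def has_base_entity(lineage: dict) -> bool:
    """A standalone table should still appear as a single-node lineage graph:
    the base entity must be present in guidEntityMap even with no relations."""
    return lineage["baseEntityGuid"] in lineage.get("guidEntityMap", {})

# Expected response shape for a table with no lineage processes.
ok = {"baseEntityGuid": "g1",
      "guidEntityMap": {"g1": {"typeName": "hive_table"}},
      "relations": []}
# Regressed response observed on the latest bits: empty graph,
# which the UI renders as "No lineage data found".
bad = {"baseEntityGuid": "g1", "guidEntityMap": {}, "relations": []}

assert has_base_entity(ok)
assert not has_base_entity(bad)
```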
[jira] [Created] (ATLAS-4645) [Lineage Improvements] When inputRelationsLimit or outputRelationsLimit is 0, then it is replaced with the default value of node count
Dharshana M Krishnamoorthy created ATLAS-4645:
----------------------------------------------

             Summary: [Lineage Improvements] When inputRelationsLimit or outputRelationsLimit is 0, then it is replaced with the default value of node count
                 Key: ATLAS-4645
                 URL: https://issues.apache.org/jira/browse/ATLAS-4645
             Project: Atlas
          Issue Type: Bug
            Reporter: Dharshana M Krishnamoorthy

Input setup: Enable lineage improvements: *atlas.lineage.on.demand.enabled=true*

Run the following to repro the issue
{code:java}
create database scenario_1;
use scenario_1;
create table table_1(fname string, lname string, mname string, e_id int, contact_no int);
create table table_11 as select * from table_1;
create table table_12 as select * from table_1;
create table table_13 as select * from table_1;
create table table_14 as select * from table_1;
{code}
With the following payload
{code:java}
{
  "": {
    "direction": "BOTH",
    "inputRelationsLimit": 3,
    "outputRelationsLimit": 0
  }
}{code}
the 'lineageOnDemandPayload' in the response should be the same as the input, but here it is replaced with the default node-count value 3. Check "outputRelationsLimit" in the response below:
{code:java}
{
  "baseEntityGuid": "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3",
  "lineageDirection": "BOTH",
  "lineageDepth": 3,
{code}
{color:#009100}...{color}
{code:java}
  "lineageOnDemandPayload": {
    "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3": {
      "direction": "BOTH",
      "inputRelationsLimit": 3,
      "outputRelationsLimit": 3,
      "depth": 3
    }
  }
}
{code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
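The symptom looks like classic falsy-zero handling, where an explicit 0 is treated the same as a missing value. A hedged Python sketch of the intended defaulting logic (not the actual Atlas implementation; the default of 3 is assumed from the response above):

```python
# Assumed default relations limit, taken from the response in this ticket.
DEFAULT_RELATIONS_LIMIT = 3

def resolve_limit(requested):
    """Only substitute the default when the limit is absent (None).

    A buggy `requested or DEFAULT_RELATIONS_LIMIT` would treat an explicit 0
    as falsy and silently replace it, which matches the reported behaviour.
    """
    return DEFAULT_RELATIONS_LIMIT if requested is None else requested

assert resolve_limit(None) == 3   # missing limit -> default applied
assert resolve_limit(0) == 0      # explicit 0 must be preserved
assert resolve_limit(7) == 7      # any explicit value passes through
```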
[jira] [Created] (ATLAS-4644) [Lineage Improvements] When direction is 'INPUT' for a lineage which does not have inputs, the response does not contain guidEntityMap
Dharshana M Krishnamoorthy created ATLAS-4644:
----------------------------------------------

             Summary: [Lineage Improvements] When direction is 'INPUT' for a lineage which does not have inputs, the response does not contain guidEntityMap
                 Key: ATLAS-4644
                 URL: https://issues.apache.org/jira/browse/ATLAS-4644
             Project: Atlas
          Issue Type: Bug
            Reporter: Dharshana M Krishnamoorthy

Input setup: Enable lineage improvements: *atlas.lineage.on.demand.enabled=true*

Run the following to repro the issue
{code:java}
create database scenario_1;
use scenario_1;
create table table_1(fname string, lname string, mname string, e_id int, contact_no int);
create table table_11 as select * from table_1;
create table table_12 as select * from table_1;
create table table_13 as select * from table_1;
create table table_14 as select * from table_1;
{code}
Payload
{code:java}
{
  "": {
    "direction": "INPUT",
    "inputRelationsLimit": 1
  }
}{code}
Response:
{code:java}
{
  "baseEntityGuid": "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3",
  "lineageDirection": "INPUT",
  "lineageDepth": 3,
  "guidEntityMap": {},
  "relations": [],
  "relationsOnDemand": {},
  "lineageOnDemandPayload": {
    "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3": {
      "direction": "INPUT",
      "inputRelationsLimit": 1,
      "outputRelationsLimit": 3,
      "depth": 3
    }
  }
}
{code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
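The response's `lineageOnDemandPayload` suggests the server normalises the request: the empty-string key stands for the base entity, and missing limits/depth are filled with defaults. A hedged sketch of that normalisation as inferred from the response above (function and defaults are assumptions, not Atlas API):

```python
# Assumed defaults, inferred from "outputRelationsLimit": 3 and "depth": 3
# appearing in the echoed lineageOnDemandPayload.
DEFAULTS = {"inputRelationsLimit": 3, "outputRelationsLimit": 3, "depth": 3}

def normalise_payload(payload: dict, base_guid: str) -> dict:
    """Replace the "" key with the base entity guid and fill in defaults."""
    out = {}
    for guid, constraints in payload.items():
        key = base_guid if guid == "" else guid
        out[key] = {**DEFAULTS, **constraints}  # caller values win over defaults
    return out

req = {"": {"direction": "INPUT", "inputRelationsLimit": 1}}
norm = normalise_payload(req, "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3")
assert norm["f92a6057-f6c8-4c0e-a3a0-50dba5f507d3"] == {
    "direction": "INPUT",
    "inputRelationsLimit": 1,
    "outputRelationsLimit": 3,
    "depth": 3,
}
```

Even after this normalisation, the bug remains that `guidEntityMap` comes back empty instead of containing at least the base entity.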
[jira] [Created] (ATLAS-4643) [Lineage Improvements] Incorrect response when inputRelationsLimit and outputRelationsLimit is 0
Dharshana M Krishnamoorthy created ATLAS-4643:
----------------------------------------------

             Summary: [Lineage Improvements] Incorrect response when inputRelationsLimit and outputRelationsLimit is 0
                 Key: ATLAS-4643
                 URL: https://issues.apache.org/jira/browse/ATLAS-4643
             Project: Atlas
          Issue Type: Bug
            Reporter: Dharshana M Krishnamoorthy
         Attachments: Screenshot 2022-07-22 at 7.08.16 PM.png

Input setup: Enable lineage improvements: *atlas.lineage.on.demand.enabled=true*

Run the following to repro the issue
{code:java}
create database scenario_1;
use scenario_1;
create table table_1(fname string, lname string, mname string, e_id int, contact_no int);
create table table_11 as select * from table_1;
create table table_12 as select * from table_1;
create table table_13 as select * from table_1;
create table table_14 as select * from table_1;
{code}
Payload
{code:java}
{
  "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3": {
    "direction": "BOTH",
    "inputRelationsLimit": 0,
    "outputRelationsLimit": 0
  }
}{code}
With the following payload
{code:java}
{
  "": {
    "direction": "BOTH",
    "inputRelationsLimit": 0,
    "outputRelationsLimit": 0
  }
}{code}
We expect guidEntityMap to contain only the current entity.
But the current result mismatches:
{code:java}
{
  "baseEntityGuid": "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3",
  "lineageDirection": "BOTH",
  "lineageDepth": 3,
  "guidEntityMap": {
    "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3": {
      "typeName": "hive_table",
      "attributes": {
        "owner": "hrt_qa",
        "createTime": 165849599,
        "qualifiedName": "scenario_1.table_1@cm",
        "name": "table_1",
        "description": ""
      },
      "guid": "f92a6057-f6c8-4c0e-a3a0-50dba5f507d3",
      "status": "ACTIVE",
      "displayText": "table_1",
      "classificationNames": [],
      "meaningNames": [],
      "meanings": [],
      "isIncomplete": false,
      "labels": []
    },
    "008977e7-9860-4b67-adb9-8c55635d937b": {
      "typeName": "hive_table",
      "attributes": {
        "owner": "hrt_qa",
        "createTime": 1658495994000,
        "qualifiedName": "scenario_1.table_11@cm",
        "name": "table_11",
        "description": ""
      },
      "guid": "008977e7-9860-4b67-adb9-8c55635d937b",
      "status": "ACTIVE",
      "displayText": "table_11",
      "classificationNames": [],
      "meaningNames": [],
      "meanings": [],
      "isIncomplete": false,
      "labels": []
    },
    "e017a7de-803b-4f8b-8d2a-7568f930316e": {
      "typeName": "hive_table",
      "attributes": {
        "owner": "hrt_qa",
        "createTime": 1658496001000,
        "qualifiedName": "scenario_1.table_14@cm",
        "name": "table_14",
        "description": ""
      },
      "guid": "e017a7de-803b-4f8b-8d2a-7568f930316e",
      "status": "ACTIVE",
      "displayText": "table_14",
      "classificationNames": [],
      "meaningNames": [],
      "meanings": [],
      "isIncomplete": false,
      "labels": []
    },
    "e3244d74-25d7-4622-8365-d828802d0aa3": {
      "typeName": "hive_process",
      "attributes": {
        "owner": "",
        "qualifiedName": "scenario_1.table_12@cm:1658495996000",
        "name": "scenario_1.table_12@cm:1658495996000",
        "description": ""
      },
      "guid": "e3244d74-25d7-4622-8365-d828802d0aa3",
      "status": "ACTIVE",
      "displayText": "scenario_1.table_12
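The expectation can be expressed as a check over the response: with both relation limits set to 0, the returned graph should contain only the base entity and no relations. A minimal sketch against hypothetical mock responses (not Atlas code):

```python
# Hypothetical check of the expected behaviour for zero relation limits.
def only_base_entity(resp: dict) -> bool:
    """True when the lineage graph contains exactly the base entity."""
    return (set(resp["guidEntityMap"]) == {resp["baseEntityGuid"]}
            and resp.get("relations", []) == [])

expected = {"baseEntityGuid": "g1", "guidEntityMap": {"g1": {}}, "relations": []}
# Mock of the observed bug: neighbouring tables/processes leak into the map.
observed = {"baseEntityGuid": "g1",
            "guidEntityMap": {"g1": {}, "g2": {}, "g3": {}},
            "relations": []}

assert only_base_entity(expected)
assert not only_base_entity(observed)
```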
[jira] [Updated] (ATLAS-4641) [Lineage Improvements] The expand button is broken on the UI, but on clicking it, the actual purpose is served
[ https://issues.apache.org/jira/browse/ATLAS-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4641:
----------------------------------------------
Description:
The expand button is broken on the UI when on-demand lineage is enabled. Screenshot attached. No errors are seen in the network tab either.

  was: The expand button is broken on the UI, when on demand lineage is enabled

> [Lineage Improvements] The expand button is broken on the UI, but on clicking
> it, the actual purpose is served
> ------------------------------------------------------------------------------
>
>                 Key: ATLAS-4641
>                 URL: https://issues.apache.org/jira/browse/ATLAS-4641
>             Project: Atlas
>          Issue Type: Bug
>          Components: atlas-webui
>            Reporter: Dharshana M Krishnamoorthy
>            Priority: Major
>         Attachments: Screenshot 2022-07-20 at 11.42.13 PM.png
>
> The expand button is broken on the UI when on-demand lineage is enabled.
> Screenshot attached. No errors are seen in the network tab either.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (ATLAS-4641) [Lineage Improvements] The expand button is broken on the UI, but on clicking it, the actual purpose is served
[ https://issues.apache.org/jira/browse/ATLAS-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4641:
----------------------------------------------
Description: The expand button is broken on the UI, when on demand lineage is enabled

> [Lineage Improvements] The expand button is broken on the UI, but on clicking
> it, the actual purpose is served
> ------------------------------------------------------------------------------
>
>                 Key: ATLAS-4641
>                 URL: https://issues.apache.org/jira/browse/ATLAS-4641
>             Project: Atlas
>          Issue Type: Bug
>          Components: atlas-webui
>            Reporter: Dharshana M Krishnamoorthy
>            Priority: Major
>         Attachments: Screenshot 2022-07-20 at 11.42.13 PM.png
>
> The expand button is broken on the UI, when on demand lineage is enabled

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (ATLAS-4641) [Lineage Improvements] The expand button is broken on the UI, but on clicking it, the actual purpose is served
[ https://issues.apache.org/jira/browse/ATLAS-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4641:
----------------------------------------------
Component/s: atlas-webui
             (was: atlas-core)

> [Lineage Improvements] The expand button is broken on the UI, but on clicking
> it, the actual purpose is served
> ------------------------------------------------------------------------------
>
>                 Key: ATLAS-4641
>                 URL: https://issues.apache.org/jira/browse/ATLAS-4641
>             Project: Atlas
>          Issue Type: Bug
>          Components: atlas-webui
>            Reporter: Dharshana M Krishnamoorthy
>            Priority: Major
>         Attachments: Screenshot 2022-07-20 at 11.42.13 PM.png
>

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (ATLAS-4641) [Lineage Improvements] The expand button is broken on the UI, but on clicking it, the actual purpose is served
[ https://issues.apache.org/jira/browse/ATLAS-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4641:
----------------------------------------------
Attachment: Screenshot 2022-07-20 at 11.42.13 PM.png

> [Lineage Improvements] The expand button is broken on the UI, but on clicking
> it, the actual purpose is served
> ------------------------------------------------------------------------------
>
>                 Key: ATLAS-4641
>                 URL: https://issues.apache.org/jira/browse/ATLAS-4641
>             Project: Atlas
>          Issue Type: Bug
>          Components: atlas-webui
>            Reporter: Dharshana M Krishnamoorthy
>            Priority: Major
>         Attachments: Screenshot 2022-07-20 at 11.42.13 PM.png
>
> The expand button is broken on the UI, when on demand lineage is enabled

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (ATLAS-4641) [Lineage Improvements] The expand button is broken on the UI, but on clicking it, the actual purpose is served
Dharshana M Krishnamoorthy created ATLAS-4641:
----------------------------------------------

             Summary: [Lineage Improvements] The expand button is broken on the UI, but on clicking it, the actual purpose is served
                 Key: ATLAS-4641
                 URL: https://issues.apache.org/jira/browse/ATLAS-4641
             Project: Atlas
          Issue Type: Bug
          Components: atlas-core
            Reporter: Dharshana M Krishnamoorthy

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (ATLAS-4640) "Apply Validity Period" in tag association needs to have more details in documents
Dharshana M Krishnamoorthy created ATLAS-4640:
----------------------------------------------

             Summary: "Apply Validity Period" in tag association needs to have more details in documents
                 Key: ATLAS-4640
                 URL: https://issues.apache.org/jira/browse/ATLAS-4640
             Project: Atlas
          Issue Type: Bug
          Components: atlas-webui
            Reporter: Dharshana M Krishnamoorthy

"Apply Validity Period" means that the tag is active only while the validity period is active, and that the permission applied based on the tag is removed once the period ends. The customer misunderstood it to mean that when the validity period expires, the tag gets disassociated from the entity. This needs to be clearly documented.

There are more details on tag propagation, but not on the tag validity period: https://atlas.apache.org/1.2.0/ClassificationPropagation.html

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
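The distinction to document is between a tag being *inactive* and being *disassociated*. A hedged Python sketch of the semantics (the period dict shape and the date format are assumptions for illustration, not the exact Atlas TimeBoundary contract):

```python
from datetime import datetime

def tag_is_active(validity_periods, now: datetime) -> bool:
    """A tag with validity periods is 'active' only inside one of them.

    Outside the periods the tag is inactive (so period-based permissions
    lapse), but the tag association itself is NOT removed from the entity.
    """
    if not validity_periods:        # no period configured => always active
        return True
    fmt = "%Y/%m/%d %H:%M:%S"       # assumed timestamp format for this sketch
    return any(datetime.strptime(p["startTime"], fmt) <= now
               <= datetime.strptime(p["endTime"], fmt)
               for p in validity_periods)

periods = [{"startTime": "2022/01/01 00:00:00",
            "endTime": "2022/12/31 23:59:59"}]
assert tag_is_active(periods, datetime(2022, 6, 1))      # inside: active
assert not tag_is_active(periods, datetime(2023, 6, 1))  # expired: inactive,
# yet the classification remains associated with the entity.
```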
[jira] [Created] (ATLAS-4636) [Regression] When a Kafka console consumer group is run, more than one update audit is seen
Dharshana M Krishnamoorthy created ATLAS-4636:
----------------------------------------------

             Summary: [Regression] When a Kafka console consumer group is run, more than one update audit is seen
                 Key: ATLAS-4636
                 URL: https://issues.apache.org/jira/browse/ATLAS-4636
             Project: Atlas
          Issue Type: Bug
          Components: atlas-core
            Reporter: Dharshana M Krishnamoorthy
            Assignee: Dharshana M Krishnamoorthy
         Attachments: Screenshot 2022-06-17 at 12.03.50 PM.png

* Run the console consumer with a consumer group
* Verify the consumer group entity is created
* Verify the metrics and notifications for the consumer group and topic

On performing the above operations we expect one 'ENTITY_CREATE' audit and one 'ENTITY_UPDATE' audit, but more than one ENTITY_UPDATE audit is seen.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
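The audit expectation can be checked mechanically by counting audit actions for the consumer-group entity. A minimal sketch over mock audit entries (the `action` field mirrors the audit actions named above; the data and helper are hypothetical, not the Atlas audit API):

```python
from collections import Counter

def audit_counts(audits):
    """Tally audit entries by their action name."""
    return Counter(a["action"] for a in audits)

# Expected sequence for a freshly created consumer-group entity.
expected = [{"action": "ENTITY_CREATE"}, {"action": "ENTITY_UPDATE"}]
# Mock of the regression: duplicate ENTITY_UPDATE audits appear.
observed = [{"action": "ENTITY_CREATE"},
            {"action": "ENTITY_UPDATE"},
            {"action": "ENTITY_UPDATE"},
            {"action": "ENTITY_UPDATE"}]

assert audit_counts(expected)["ENTITY_UPDATE"] == 1
assert audit_counts(observed)["ENTITY_UPDATE"] > 1   # the reported bug
```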
[jira] [Updated] (ATLAS-4635) Source information is missing in the Hook message of Spark entries
[ https://issues.apache.org/jira/browse/ATLAS-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4635:
----------------------------------------------
Description:
Source information is missing for Spark messages
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657759634684,"spooled":false,"message":...}
{code}
This is a message from ATLAS_SPARK_HOOK; the source info of the message is missing here.

Expected: "source":\{"source": "spark", "version":"2.1.0.7.1.8.0-723"}, but source is missing.

It is available for other services:

HMS:
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723","source":"hive_metastore"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657700529795,"spooled":false,"message":{...}
{code}
HBase:
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723","source":"hbase"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657700897665,"spooled":false,"message":{...}
{code}
Impala:
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723","source":"impala"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657695780022,"spooled":false,"message":{"...}}{code}
was: Source information is missing for impala messages {code:java} {"source":{"version":"2.1.0.7.1.8.0-723"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657759634684,"spooled":false,"message":...} {code} This is a message from ATLAS_SPARK_HOOK, the source infor of the message is missing here Expected: "source":\{"source": "spark", "version":"2.1.0.7.1.8.0-723"}, but source is missing It is available for other services: HMS: {code:java} {"source":{"version":"2.1.0.7.1.8.0-723","source":"hive_metastore"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657700529795,"spooled":false,"message":{ {code} {color:#009100}...{color} {code:java} }{code} HBase: {code:java} {"source":{"version":"2.1.0.7.1.8.0-723","source":"hbase"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657700897665,"spooled":false,"message":{ {code} {color:#009100}...{color} {code:java} }{code} Impala: {code:java} {"source":{"version":"2.1.0.7.1.8.0-723","source":"impala"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657695780022,"spooled":false,"message":{"...}}{code} h4. > Source information is missing
[jira] [Updated] (ATLAS-4635) Source information is missing in the Hook message of Spark entries
[ https://issues.apache.org/jira/browse/ATLAS-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4635:
----------------------------------------------
Description:
Source information is missing for Spark messages
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657759634684,"spooled":false,"message":...}
{code}
This is a message from ATLAS_SPARK_HOOK; the source info of the message is missing here.

Expected: "source":\{"source": "spark", "version":"2.1.0.7.1.8.0-723"}, but source is missing.

It is available for other services:

HMS:
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723","source":"hive_metastore"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657700529795,"spooled":false,"message":{
{code}
{color:#009100}...{color}
{code:java}
}{code}
HBase:
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723","source":"hbase"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657700897665,"spooled":false,"message":{
{code}
{color:#009100}...{color}
{code:java}
}{code}
Impala:
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723","source":"impala"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"...","msgCreatedBy":"...","msgCreationTime":1657695780022,"spooled":false,"message":{"...}}{code}
was: Source information is missing for impala messages {code:java} {"source":{"version":"2.1.0.7.1.8.0-723"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.134.1","msgCreatedBy":"hrt_qa","msgCreationTime":1657759634684,"spooled":false,"message":{"type":"ENTITY_CREATE_V2","user":"hrt...@root.hwx.site","entities":{"entities":[{"typeName":"spark_column_lineage","attributes":{"outputs":[{"typeName":"hive_column","uniqueAttributes":{"qualifiedName":"default.tab_1_2235274282_ctas.col1@cm"}}],"qualifiedName":"default.tab_1_2235274282_ctas@cm:1657759634668:col1","inputs":[{"typeName":"hive_column","uniqueAttributes":{"qualifiedName":"default.tab_1_2235274282.col1@cm"}}],"name":"default.tab_1_2235274282_ctas@cm:1657759634668:col1"},"guid":"-215346008271268","isIncomplete":false,"provenanceType":0,"version":0,"relationshipAttributes":{"process":{"typeName":"spark_process","uniqueAttributes":{"qualifiedName":"application_1657738076117_0064-execution-12"}}},"proxy":false}]}}} {code} This is a message from ATLAS_SPARK_HOOK, the source infor of the message is missing here Expected: "source":\{"source": "spark", "version":"2.1.0.7.1.8.0-723"}, but source is missing It is available for other services: HMS: {code:java} {"source":{"version":"2.1.0.7.1.8.0-723","source":"hive_metastore"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.10.134","msgCreatedBy":"hive","msgCreationTime":1657
[jira] [Created] (ATLAS-4635) Source information is missing in the Hook message of Spark entries
Dharshana M Krishnamoorthy created ATLAS-4635:
----------------------------------------------

             Summary: Source information is missing in the Hook message of Spark entries
                 Key: ATLAS-4635
                 URL: https://issues.apache.org/jira/browse/ATLAS-4635
             Project: Atlas
          Issue Type: Bug
          Components: atlas-core
            Reporter: Dharshana M Krishnamoorthy

Source information is missing for Spark messages
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.134.1","msgCreatedBy":"hrt_qa","msgCreationTime":1657759634684,"spooled":false,"message":{"type":"ENTITY_CREATE_V2","user":"hrt...@root.hwx.site","entities":{"entities":[{"typeName":"spark_column_lineage","attributes":{"outputs":[{"typeName":"hive_column","uniqueAttributes":{"qualifiedName":"default.tab_1_2235274282_ctas.col1@cm"}}],"qualifiedName":"default.tab_1_2235274282_ctas@cm:1657759634668:col1","inputs":[{"typeName":"hive_column","uniqueAttributes":{"qualifiedName":"default.tab_1_2235274282.col1@cm"}}],"name":"default.tab_1_2235274282_ctas@cm:1657759634668:col1"},"guid":"-215346008271268","isIncomplete":false,"provenanceType":0,"version":0,"relationshipAttributes":{"process":{"typeName":"spark_process","uniqueAttributes":{"qualifiedName":"application_1657738076117_0064-execution-12"}}},"proxy":false}]}}}
{code}
This is a message from ATLAS_SPARK_HOOK; the source info of the message is missing here.

Expected: "source":\{"source": "spark", "version":"2.1.0.7.1.8.0-723"}, but source is missing.

It is available for other services:

HMS:
{code:java}
{"source":{"version":"2.1.0.7.1.8.0-723","source":"hive_metastore"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.10.134","msgCreatedBy":"hive","msgCreationTime":1657700529795,"spooled":false,"message":{"type":"ENTITY_DELETE_V2","user":"hue","entities":[{"typeName":"hive_db","uniqueAttributes":{"qualifiedName":"cloudera_manager_metastore_canary_test_catalog_hive_hivemetastore_1:default@cm"}}]}}{code} HBase: {code:java} {"source":{"version":"2.1.0.7.1.8.0-723","source":"hbase"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.72.195","msgCreatedBy":"hbase","msgCreationTime":1657700897665,"spooled":false,"message":{"type":"ENTITY_CREATE_V2","user":"hrt_qa","entities":{"entities":[{"typeName":"hbase_namespace","attributes":{"owner":"hrt_qa","modifiedTime":1657700897665,"createTime":1657700897665,"qualifiedName":"ns_zsptr@cm","clusterName":"cm","name":"ns_zsptr","description":"ns_zsptr","parameters":null},"guid":"-3685335942958094","isIncomplete":false,"provenanceType":0,"version":0,"proxy":false}]}}}{code} Impala: {code:java} {"source":{"version":"2.1.0.7.1.8.0-723","source":"impala"},"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.187.129","msgCreatedBy":"impala","msgCreationTime":1657695780022,"spooled":false,"message":{"type":"ENTITY_CREATE_V2","user":"hrt...@qe-infra-ad.cloudera.com","entities":{"referredEntities"
[jira] [Created] (ATLAS-4608) [Export Import] When exported with starting entity as hive table and fetch type is "connected", the exported entities are incorrect
Dharshana M Krishnamoorthy created ATLAS-4608:
----------------------------------------------

             Summary: [Export Import] When exported with starting entity as hive table and fetch type is "connected", the exported entities are incorrect
                 Key: ATLAS-4608
                 URL: https://issues.apache.org/jira/browse/ATLAS-4608
             Project: Atlas
          Issue Type: Bug
            Reporter: Dharshana M Krishnamoorthy

Scenario:
* db_1 has 2 tables, table_1 and table_2; table_2 is a CTAS of table_1
** *db_1 => table_1 => table_2 => CTAS of table_1*
* Perform export with fetch type CONNECTED:
{code:java}
request_body = '{"itemsToExport": [{"typeName": "hive_table", "uniqueAttributes": {"qualifiedName": "db_1.table_1@cm"}}], "options": {"fetchType": "connected"}}'
{code}

*Expectation*: as per https://atlas.apache.org/2.0.0/Export-API.html, only directly connected entities should be exported, but the managed location and external location of the database are also exported.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
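The request body above can be built programmatically rather than hand-edited. A small sketch that constructs the same export request (the helper function is hypothetical; the JSON shape follows the Export API request shown in the ticket):

```python
import json

def export_request(qualified_name: str, fetch_type: str = "connected") -> str:
    """Build an Atlas Export API request body for a single hive_table."""
    body = {
        "itemsToExport": [
            {"typeName": "hive_table",
             "uniqueAttributes": {"qualifiedName": qualified_name}}
        ],
        "options": {"fetchType": fetch_type},
    }
    return json.dumps(body)

req = json.loads(export_request("db_1.table_1@cm"))
assert req["options"]["fetchType"] == "connected"
assert req["itemsToExport"][0]["uniqueAttributes"]["qualifiedName"] == "db_1.table_1@cm"
```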
[jira] [Created] (ATLAS-4601) [Hive import v2] When we provide an old zip file, import fails
Dharshana M Krishnamoorthy created ATLAS-4601:
----------------------------------------------

             Summary: [Hive import v2] When we provide an old zip file, import fails
                 Key: ATLAS-4601
                 URL: https://issues.apache.org/jira/browse/ATLAS-4601
             Project: Atlas
          Issue Type: Bug
          Components: atlas-intg
            Reporter: Dharshana M Krishnamoorthy

When we perform an import using the v2 api with a file that is already present in the file system, the import fails. This is expected behaviour as per discussion with [~sidmishra], and it needs to be documented.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Updated] (ATLAS-4591) [Hive import v2] While importing with hive import V2, seeing exceptions in the log for successful imports
[ https://issues.apache.org/jira/browse/ATLAS-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4591:
----------------------------------------------
Summary: [Hive import v2] While importing with hive import V2, seeing exceptions in the log for successful imports  (was: [Hive import v2] While importing with hive import V2, seeing exceptions in the log)

> [Hive import v2] While importing with hive import V2, seeing exceptions in
> the log for successful imports
> --------------------------------------------------------------------------
>
>                 Key: ATLAS-4591
>                 URL: https://issues.apache.org/jira/browse/ATLAS-4591
>             Project: Atlas
>          Issue Type: Bug
>          Components: atlas-core
>            Reporter: Dharshana M Krishnamoorthy
>            Priority: Major
>
> Seeing this error in the import log
> {code:java}
> 2022-04-27 07:34:02,760 INFO - [main:] ~ method=GET path=api/atlas/v2/entity/uniqueAttribute/type/hive_db/header contentType=application/json; charset=UTF-8 accept=application/json status=404 (AtlasBaseClient:407)
> 2022-04-27 07:34:02,762 WARN - [main:] ~ Failed to get DB guid from Atlas with qualified name db_ecwvo@cm (HiveMetaStoreBridgeV2:740)
> org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasBaseClient$API@69de5bed failed with status 404 (Not Found) Response Body ({"errorCode":"ATLAS-404-00-009","errorMessage":"Instance hive_db with unique attribute {qualifiedName=db_ecwvo@cm} does not exist"})
> at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:447)
> at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:366)
> at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:352)
> at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:256)
> at org.apache.atlas.AtlasClientV2.getEntityHeaderByAttribute(AtlasClientV2.java:413)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.getDBGuidFromAtlas(HiveMetaStoreBridgeV2.java:738)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.getGuid(HiveMetaStoreBridgeV2.java:1019)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.toDbEntity(HiveMetaStoreBridgeV2.java:718)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.toDbEntity(HiveMetaStoreBridgeV2.java:705)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.writeDatabase(HiveMetaStoreBridgeV2.java:645)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.importHiveDatabases(HiveMetaStoreBridgeV2.java:314)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.exportDataToZipAndRunAtlasImport(HiveMetaStoreBridgeV2.java:190)
> at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:180)
> 2022-04-27 07:34:02,813 INFO - [main:] ~ Importing Hive Tables (HiveMetaStoreBridgeV2:405)
> {code}
> Steps followed:
> # Disabled Hive hook
> # Created a database
> # Called the import v2 api by passing -o to import-hive.sh
> # The above step will create the zip and also import the data
> Though the import is successful, the above-mentioned error is seen in the logs. It is not seen when -o is not passed to the import script (the issue is not seen with the old import script).

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Created] (ATLAS-4598) [Hive import v2] deleteNonExisting is not honoured when used with v2 api
Dharshana M Krishnamoorthy created ATLAS-4598:
----------------------------------------------

             Summary: [Hive import v2] deleteNonExisting is not honoured when used with v2 api
                 Key: ATLAS-4598
                 URL: https://issues.apache.org/jira/browse/ATLAS-4598
             Project: Atlas
          Issue Type: Bug
          Components: atlas-core
            Reporter: Dharshana M Krishnamoorthy

Scenario: perform an import using the v2 api with *deleteNonExisting*

Steps:
# Create a db
# Create 2 tables in that db
# Perform an import
# Verify the tables are reflected in atlas
# Drop the table
# Perform an import with *deleteNonExisting*; command: *export JAVA_HOME=/usr/java/default; /opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh -d db_hive_db_jwics -deleteNonExisting -o /tmp/db_wetbq.zip*

Problem: deleteNonExisting is not honoured when used with the v2 api, and the dropped table's status is still active in Atlas.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
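The intended `deleteNonExisting` semantics can be sketched as a reconciliation step: after an import with the flag, tables present in Atlas but absent from the source database should be marked DELETED rather than left ACTIVE. A hypothetical illustration (not the bridge's actual code):

```python
def reconcile(atlas_tables: dict, hive_tables: set) -> dict:
    """Mark Atlas tables DELETED when they no longer exist in Hive."""
    return {name: ("ACTIVE" if name in hive_tables else "DELETED")
            for name in atlas_tables}

atlas_state = {"table_1": "ACTIVE", "table_2": "ACTIVE"}  # before re-import
hive_now = {"table_1"}                                    # table_2 was dropped

after = reconcile(atlas_state, hive_now)
assert after == {"table_1": "ACTIVE", "table_2": "DELETED"}
# Reported bug: with the v2 api, table_2 stays ACTIVE despite -deleteNonExisting.
```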
[jira] [Updated] (ATLAS-4595) [Hive import v2] When using file name to import via v2 api, the entities are not reflected in atlas though the import is successful
[ https://issues.apache.org/jira/browse/ATLAS-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4595:
----------------------------------------------
Description:
Scenario: use --filename in the import script along with --output so that the v2 api is invoked, e.g.:
{code:java}
export JAVA_HOME=/usr/java/default; /opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh --filename /tmp/file_tejqc.txt --output /tmp/db_okgbi.zip
{code}
Steps:
# Create 2 databases db_1 and db_2
# Create 2 tables under each db
# Run import using a filename file that has database db_1's name

The import was successful, but the entities are not reflected in atlas
{code:java}
2022-04-28 10:50:52,693|INFO|MainThread|machine.py:185 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|RUNNING: ssh -l root -i /tmp/hw-qe-keypair.pem -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null quasar-jagdkt-5.quasar-jagdkt.root.hwx.site "sudo -u root sh -c 'export JAVA_HOME=/usr/java/default; /opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh --filename /tmp/file_tejqc.txt --output /tmp/db_okgbi.zip'"
2022-04-28 10:50:52,957|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|Using Hive configuration directory [/etc/hive/conf]
2022-04-28 10:50:53,152|INFO|MainThread|machine.py:200 -
run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|/etc/hive/conf:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop-yarn/./:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/lib/hadoop/libexec/../../hadoop-yarn/.//* 2022-04-28 10:50:53,152|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|Log file for import is /var/log/atlas/import-hive.log 2022-04-28 10:50:55,328|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout. 2022-04-28 10:50:55,329|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.PatternLayout. 
2022-04-28 10:51:18,889|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|WARNING: An illegal reflective access operation has occurred 2022-04-28 10:51:18,890|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|WARNING: Illegal reflective access by org.apache.hadoop.hive.common.StringInternUtils (file:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.25947682/jars/hive-exec-3.1.3000.7.1.8.0-581.jar) to field java.net.URI.string 2022-04-28 10:51:18,890|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hive.common.StringInternUtils 2022-04-28 10:51:18,890|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations 2022-04-28 10:51:18,891|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|WARNING: All illegal access operations will be denied in a future release 2022-04-28 10:51:20,824|INFO|MainThread|machine.py:200 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|Hive Meta Data imported successfully! 
2022-04-28 10:51:20,850|INFO|MainThread|machine.py:227 - run()||GUID=003c431b-4087-4990-9d50-d763ef06c51a|Exit Code: 0 {code} Additional details: file_tejqc.txt file content {code:java} cat /tmp/file_tejqc.txt db_hive_db_dumeh {code} Tables in the db: {code:java} 0: jdbc:hive2://quasar-jagdkt-1.quasar-jagdkt> use db_hive_db_dumeh; INFO : Compiling command(queryId=hive_20220428115112_876a9b0b-a19c-4ee6-b827-c777e4398463): use db_hive_db_dumeh INFO : Semantic Analysis Completed (retrial = false) INFO : Created Hive schema: Schema(fieldSchemas:null, properties:null) INFO : Completed compiling command(queryId=hive_20220428115112_876a9b0b-a19c-4ee6-b827-c777e4398463); Time taken: 0.016 seconds INFO : Executing command(queryId=hive_20220428115112_876a9b0b-a19c-4ee6-b827-c777e4398463): use db_hive_db_dumeh INFO : Starting task [Stage-0:DDL] in serial mode INFO : Completed executing co
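The --filename input shown above (file_tejqc.txt) carries the import targets, apparently one Hive database name per line. A minimal parsing sketch under that assumption (the exact file format accepted by import-hive.sh is not spelled out in the report):

```python
def read_import_targets(text):
    """Parse the contents of a --filename input file for import-hive.sh.

    Assumes one Hive database name per line, as in file_tejqc.txt above;
    blank lines and surrounding whitespace are ignored.
    """
    return [line.strip() for line in text.splitlines() if line.strip()]
```

For the file in the report, this yields the single target `db_hive_db_dumeh`.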
[jira] [Updated] (ATLAS-4596) [Hive import v2] When using v2 api to import, there are additional audits present for an entity
[ https://issues.apache.org/jira/browse/ATLAS-4596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4596: -- Description: When v2 is used for import, there are additional audits present in the entities created, which is equal to the number of tables that were imported. Eg: # Create a database # Perform an import This creates no additional audits # Create a database and create 2 tables within the database # Perform an import this creates 2 additional audits dharsh_test_db_111_v2 : Has no tables, so no additional audits dharsh_test_db_222_v2: Has 1 table, so contains 1 additional audit dharsh_test_db_333_v2: Has 2 tables, so contains, 2 additional audits was: When v2 is used, there are additional audits present in the entities created, which is equal to the number of tables that were imported. Eg: # Create a database # Perform an import This creates no additional audits # Create a database and create 2 tables within the database # Perform an import this creates 2 additional audits dharsh_test_db_111_v2 : Has no tables, so no additional audits dharsh_test_db_222_v2: Has 1 table, so contains 1 additional audit dharsh_test_db_333_v2: Has 2 tables, so contains, 2 additional audits > [Hive import v2] When using v2 api to import, there are additional audits > present for an entity > --- > > Key: ATLAS-4596 > URL: https://issues.apache.org/jira/browse/ATLAS-4596 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: Screenshot 2022-04-28 at 3.13.23 PM.png, Screenshot > 2022-04-28 at 3.13.56 PM.png, Screenshot 2022-04-28 at 3.14.21 PM.png > > > When v2 is used for import, there are additional audits present in the > entities created, which is equal to the number of tables that were imported. 
> Eg: > # Create a database > # Perform an import > This creates no additional audits > # Create a database and create 2 tables within the database > # Perform an import > this creates 2 additional audits > dharsh_test_db_111_v2 : Has no tables, so no additional audits > dharsh_test_db_222_v2: Has 1 table, so contains 1 additional audit > dharsh_test_db_333_v2: Has 2 tables, so contains, 2 additional audits -- This message was sent by Atlassian Jira (v8.20.7#820007)
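The pattern above (extra audits equal to the number of imported tables) can be checked programmatically against the audit list Atlas returns for an entity. A minimal sketch, assuming each audit event is a dict with an "action" field such as ENTITY_CREATE or ENTITY_UPDATE (field and action names are assumptions based on the Atlas entity-audit API, not taken from this report):

```python
def count_extra_audits(audit_events):
    """Count audit entries on an entity beyond the initial create.

    `audit_events` is assumed to be the parsed JSON list from
    GET /api/atlas/v2/entity/{guid}/audit, where each entry carries
    an "action" such as ENTITY_CREATE or ENTITY_UPDATE.
    """
    return sum(1 for e in audit_events if e.get("action") != "ENTITY_CREATE")
```

With the reported behavior, a database with 2 imported tables would show a count of 2 here where 0 is expected.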
[jira] [Updated] (ATLAS-4595) [Hive import v2] [Performance] When using file name to import via v2 api, there is some delay before the entities are reflected in atlas

[ https://issues.apache.org/jira/browse/ATLAS-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4595: -- Summary: [Hive import v2] [Performance]When using file name to import via v2 api, there is some delay before the entities are reflected in atlas (was: [Hive import v2] When using file name to import via v2 api, the tables of the imported db are not reflected) > [Hive import v2] [Performance]When using file name to import via v2 api, > there is some delay before the entities are reflected in atlas > --- > > Key: ATLAS-4595 > URL: https://issues.apache.org/jira/browse/ATLAS-4595 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > > Scenario: > use --filename in the import script in along with --output so that v2 api is > invoked > Eg: > {code:java} > '/opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh --filename > /tmp/file_hqavs.txt --output /tmp/db_axmqv.zip {code} > There is some delay (few seconds) before it reflects in atlas. > Steps: > # Create 2 databases db_1 and db_2 > # Run import using filename that has tables belonging to database1 > When a search is performed immediately after the import, the data is not > reflected in atlas, if we wait for 5 seconds and then search again, data is > reflected. > This does not happen in the following scenarios: > # when v1 api is used > # when v2 api is used with database name > # when v2 api is used with table name > *It happens only when v2 api is used along with file name* > This is not a blocker bug as the data reflects in atlas. > But creating to find the reason why this happens only while using file name > in v2 api. > > -- This message was sent by Atlassian Jira (v8.20.7#820007)
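The few-seconds delay described above is why a fixed wait (or an immediate search) is unreliable in test code; polling until the entities appear is more robust. A minimal sketch, assuming the caller wraps an Atlas basic-search call in a zero-argument callable (the Atlas call itself is not shown):

```python
import time

def wait_for(condition, timeout_s=30.0, interval_s=1.0):
    """Poll `condition` (a zero-argument callable) until it returns a
    truthy value or `timeout_s` elapses; returns the last result.

    In the reported scenario, `condition` would be an Atlas basic-search
    call checking that the imported db's tables are visible.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        result = condition()
        if result or time.monotonic() >= deadline:
            return result
        time.sleep(interval_s)
```

This turns "wait 5 seconds and search again" into "search repeatedly for up to N seconds", which also keeps working if the delay varies.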
[jira] [Updated] (ATLAS-4595) [Hive import v2] When using file name to import via v2 api, the tables of the imported db are not reflected
[ https://issues.apache.org/jira/browse/ATLAS-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4595: -- Summary: [Hive import v2] When using file name to import via v2 api, the tables of the imported db are not reflected (was: [Hive import v2] When using file name to import via v2 api, the tables of the imported ) > [Hive import v2] When using file name to import via v2 api, the tables of the > imported db are not reflected > --- > > Key: ATLAS-4595 > URL: https://issues.apache.org/jira/browse/ATLAS-4595 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > > Scenario: > use --filename in the import script in along with --output so that v2 api is > invoked > Eg: > {code:java} > '/opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh --filename > /tmp/file_hqavs.txt --output /tmp/db_axmqv.zip {code} > There is some delay (few seconds) before it reflects in atlas. > Steps: > # Create 2 databases db_1 and db_2 > # Run import using filename that has tables belonging to database1 > When a search is performed immediately after the import, the data is not > reflected in atlas, if we wait for 5 seconds and then search again, data is > reflected. > This does not happen in the following scenarios: > # when v1 api is used > # when v2 api is used with database name > # when v2 api is used with table name > *It happens only when v2 api is used along with file name* > This is not a blocker bug as the data reflects in atlas. > But creating to find the reason why this happens only while using file name > in v2 api. > > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (ATLAS-4595) [Hive import v2] When using file name to import via v2 api, the tables of the imported
[ https://issues.apache.org/jira/browse/ATLAS-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4595: -- Summary: [Hive import v2] When using file name to import via v2 api, the tables of the imported (was: [Hive import v2] [Performance]When using file name to import via v2 api, there is some delay before the entities are reflected in atlas) > [Hive import v2] When using file name to import via v2 api, the tables of the > imported > --- > > Key: ATLAS-4595 > URL: https://issues.apache.org/jira/browse/ATLAS-4595 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > > Scenario: > use --filename in the import script in along with --output so that v2 api is > invoked > Eg: > {code:java} > '/opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh --filename > /tmp/file_hqavs.txt --output /tmp/db_axmqv.zip {code} > There is some delay (few seconds) before it reflects in atlas. > Steps: > # Create 2 databases db_1 and db_2 > # Run import using filename that has tables belonging to database1 > When a search is performed immediately after the import, the data is not > reflected in atlas, if we wait for 5 seconds and then search again, data is > reflected. > This does not happen in the following scenarios: > # when v1 api is used > # when v2 api is used with database name > # when v2 api is used with table name > *It happens only when v2 api is used along with file name* > This is not a blocker bug as the data reflects in atlas. > But creating to find the reason why this happens only while using file name > in v2 api. > > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (ATLAS-4595) [Hive import v2] [Performance] When using file name to import via v2 api, there is some delay before the entities are reflected in atlas
Dharshana M Krishnamoorthy created ATLAS-4595: - Summary: [Hive import v2] [Performance]When using file name to import via v2 api, there is some delay before the entities are reflected in atlas Key: ATLAS-4595 URL: https://issues.apache.org/jira/browse/ATLAS-4595 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Scenario: use --filename in the import script in along with --output so that v2 api is invoked Eg: {code:java} '/opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh --filename /tmp/file_hqavs.txt --output /tmp/db_axmqv.zip {code} There is some delay (few seconds) before it reflects in atlas. Steps: # Create 2 databases db_1 and db_2 # Run import using filename that has tables belonging to database1 When a search is performed immediately after the import, the data is not reflected in atlas, if we wait for 5 seconds and then search again, data is reflected. This does not happen in the following scenarios: # when v1 api is used # when v2 api is used with database name # when v2 api is used with table name *It happens only when v2 api is used along with file name* This is not a blocker bug as the data reflects in atlas. But creating to find the reason why this happens only while using file name in v2 api. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (ATLAS-4594) [Hive import v2] When importing table with large columns via v2, java.net.SocketTimeoutException is thrown
Dharshana M Krishnamoorthy created ATLAS-4594: - Summary: [Hive import v2] When importing table with large columns via v2, java.net.SocketTimeoutException is thrown Key: ATLAS-4594 URL: https://issues.apache.org/jira/browse/ATLAS-4594 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Steps followed: # Create a table with 1000 columns # set atlas config *atlas.client.readTimeoutMSecs=60* # import the same by passing -o so v2 api is called # Eg: *"/opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh -d db_hive_db_qaonp -t table_1 -o /tmp/db_isfrz.zip"* On following the above steps, we get the following exception This same case works fine, when invoked via v1 api {code:java} 1@cm (HiveMetaStoreBridgeV2:543) 2022-04-28 02:39:41,023 ERROR - [main:] ~ Import Failed (HiveMetaStoreBridge:200) com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155) at com.sun.jersey.api.client.Client.handle(Client.java:652) at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) at com.sun.jersey.api.client.WebResource$Builder.method(WebResource.java:634) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:402) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:366) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:352) at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:228) at org.apache.atlas.AtlasBaseClient.performImportData(AtlasBaseClient.java:550) at org.apache.atlas.AtlasBaseClient.importData(AtlasBaseClient.java:536) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.runAtlasImport(HiveMetaStoreBridgeV2.java:558) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.exportDataToZipAndRunAtlasImport(HiveMetaStoreBridgeV2.java:200) 
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:180) Caused by: java.net.SocketTimeoutException: Read timed out at java.base/java.net.SocketInputStream.socketRead0(Native Method) at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:448) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:68) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1104) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:823) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:292) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351) at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:746) at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:689) at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1604) at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509) at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527) at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:329) at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253) at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153) {code} -- This message was sent by Atlassian Jira (v8.20.7#820007)
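One detail worth noticing in the steps above: atlas.client.readTimeoutMSecs is in milliseconds, so the value 60 gives the client only 0.06 s to read the response, which a 1000-column import cannot satisfy. A small sketch making the unit conversion explicit (the 60000 ms fallback is an assumed client default, not taken from this report):

```python
def read_timeout_seconds(properties_text, default_ms=60000):
    """Extract atlas.client.readTimeoutMSecs from java-style properties
    text and return it in seconds. `default_ms` is the assumed Atlas
    client default when the property is absent.
    """
    ms = default_ms
    for line in properties_text.splitlines():
        line = line.strip()
        if line.startswith("atlas.client.readTimeoutMSecs="):
            ms = int(line.split("=", 1)[1])
    return ms / 1000.0
```

With the configured value from step 2, the effective read timeout is 0.06 s, so a SocketTimeoutException on a large payload is expected; the open question is why the v1 path does not hit it.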
[jira] [Created] (ATLAS-4592) [Hive import v2] While importing the zip file created via import without passing the import options, null pointer exception is thrown
Dharshana M Krishnamoorthy created ATLAS-4592: - Summary: [Hive import v2] While importing the zip file created via import without passing the import options, null pointer exception is thrown Key: ATLAS-4592 URL: https://issues.apache.org/jira/browse/ATLAS-4592 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Attachments: entity_with_tag.zip, ibeuj.zip *Scenario:* Import the zip file created via hive import v2 without passing the import options [^ibeuj.zip] {code:java} curl -k --negotiate -u hrt_qa:Password@123 -X POST 'https://quasar-jagdkt-5.quasar-jagdkt.root.hwx.site:31443/api/atlas/admin/import' --form 'data=@/tmp/ibeuj.zip'{code} *Exception:* {code:java} 2022-04-27 10:52:53,459 WARN - [etp1477637771-266 - 58b0fdc5-e96c-470a-b748-1f8e81ac8b6e:hrt_qa:POST/api/atlas/admin/import] ~ Could not fetch requested contents of file: atlas-export-order (ZipSource:214) 2022-04-27 10:52:53,459 ERROR - [etp1477637771-266 - 58b0fdc5-e96c-470a-b748-1f8e81ac8b6e:hrt_qa:POST/api/atlas/admin/import] ~ Error converting file to JSON. 
(ZipSource:201) 2022-04-27 10:52:53,459 ERROR - [etp1477637771-266 - 58b0fdc5-e96c-470a-b748-1f8e81ac8b6e:hrt_qa:POST/api/atlas/admin/import] ~ importData(binary) failed (AdminResource:505) java.lang.NullPointerException at org.apache.atlas.repository.impexp.ZipSource.setCreationOrder(ZipSource.java:125) at org.apache.atlas.repository.impexp.ZipSource.(ZipSource.java:71) at org.apache.atlas.repository.impexp.ZipSource.(ZipSource.java:58) at org.apache.atlas.repository.impexp.ImportService.createZipSource(ImportService.java:264) at org.apache.atlas.repository.impexp.ImportService.run(ImportService.java:95) at org.apache.atlas.web.resources.AdminResource.importData(AdminResource.java:492) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1452) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626) at org.apache.atlas.web.filters.AuditFilter.doFilter(AuditFilter.java:106) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:327) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:115) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:81) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336
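Since the NPE above is triggered by importing without any import options, one workaround sketch is to always send an explicit (even empty) AtlasImportRequest alongside the zip. The multipart part names below ('request' and 'data') follow the Atlas admin-import REST convention but should be treated as assumptions:

```python
import json

def build_import_form(zip_bytes, options=None):
    """Build multipart parts for POST /api/atlas/admin/import, in the
    (name, content, content-type) shape accepted by requests' files=.

    Always includes a 'request' part carrying AtlasImportRequest JSON,
    even when no options are given, so the server never sees a missing
    options payload. Sketch only; part names are assumptions.
    """
    request_json = json.dumps({"options": options or {}})
    return {
        "request": ("request", request_json, "application/json"),
        "data": ("import.zip", zip_bytes, "application/octet-stream"),
    }
```

Regardless of the workaround, ZipSource.setCreationOrder should handle a missing atlas-export-order entry gracefully instead of throwing a NullPointerException.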
[jira] [Created] (ATLAS-4591) [Hive import v2] While importing with hive import V2, seeing exceptions in the log
Dharshana M Krishnamoorthy created ATLAS-4591: - Summary: [Hive import v2] While importing with hive import V2, seeing exceptions in the log Key: ATLAS-4591 URL: https://issues.apache.org/jira/browse/ATLAS-4591 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Seeing this error in import log {code:java} 2022-04-27 07:34:02,760 INFO - [main:] ~ method=GET path=api/atlas/v2/entity/uniqueAttribute/type/hive_db/header contentType=application/json; charset=UTF-8 accept=application/json status=404 (AtlasBaseClient:407) 2022-04-27 07:34:02,762 WARN - [main:] ~ Failed to get DB guid from Atlas with qualified name db_ecwvo@cm (HiveMetaStoreBridgeV2:740) org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasBaseClient$API@69de5bed failed with status 404 (Not Found) Response Body ({"errorCode":"ATLAS-404-00-009","errorMessage":"Instance hive_db with unique attribute {qualifiedName=db_ecwvo@cm} does not exist"}) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:447) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:366) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:352) at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:256) at org.apache.atlas.AtlasClientV2.getEntityHeaderByAttribute(AtlasClientV2.java:413) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.getDBGuidFromAtlas(HiveMetaStoreBridgeV2.java:738) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.getGuid(HiveMetaStoreBridgeV2.java:1019) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.toDbEntity(HiveMetaStoreBridgeV2.java:718) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.toDbEntity(HiveMetaStoreBridgeV2.java:705) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.writeDatabase(HiveMetaStoreBridgeV2.java:645) at org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.importHiveDatabases(HiveMetaStoreBridgeV2.java:314) at 
org.apache.atlas.hive.bridge.HiveMetaStoreBridgeV2.exportDataToZipAndRunAtlasImport(HiveMetaStoreBridgeV2.java:190) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:180) 2022-04-27 07:34:02,813 INFO - [main:] ~ Importing Hive Tables (HiveMetaStoreBridgeV2:405) {code} Steps followed: # Disabled Hive hook # Created a database # Called the import v2 api by passing -o to import-hive.sh # The above step will create the zip and also import the data Though the import is successful, seeing the above mentioned error in the logs. This is not seen when -o is not passed to the import script (Not seeing issue in old import script) -- This message was sent by Atlassian Jira (v8.20.7#820007)
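The warning above comes from a lookup-before-create pattern: on a first import the hive_db genuinely does not exist yet, so a 404 from the header lookup is the expected case and should map to "no guid" rather than surface as an error. A minimal sketch of that handling (the exception type here is a stand-in; the real fix would live in HiveMetaStoreBridgeV2.getDBGuidFromAtlas):

```python
class AtlasServiceError(Exception):
    """Stand-in for AtlasServiceException carrying the HTTP status."""
    def __init__(self, status):
        super().__init__(f"Atlas API failed with status {status}")
        self.status = status

def lookup_guid(fetch):
    """Call `fetch` (which raises AtlasServiceError on non-2xx) and map
    a 404 to None, since 'entity does not exist yet' is expected on a
    first import; any other failure is re-raised.
    """
    try:
        return fetch()
    except AtlasServiceError as e:
        if e.status == 404:
            return None
        raise
```

With this shape, the first import would log nothing alarming and simply proceed to create the entity.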
[jira] [Created] (ATLAS-4574) When TLS is enabled the steps differ for the AtlasRepairIndex tool; the doc does not capture it

Dharshana M Krishnamoorthy created ATLAS-4574: - Summary: When tls is enabled the steps differ for AtlasRepairIndex tool, the doc does not capture it Key: ATLAS-4574 URL: https://issues.apache.org/jira/browse/ATLAS-4574 Project: Atlas Issue Type: Bug Components: atlas-webui Reporter: Dharshana M Krishnamoorthy [https://atlas.apache.org/2.0.0/AtlasRepairIndex.html] It has the steps for a regular cluster. While following the same, the following error is seen https://issues.apache.org/jira/browse/ATLAS-4558 captures the code changes that are required, created this for DOC update {code:java} WARNING: Illegal reflective access by org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase (file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.22612204/jars/atlas-graphdb-janus-2.1.0.7.1.7.1000-102.jar) to field java.lang.reflect.Field.modifiers WARNING: Please consider reporting this to the maintainers of org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release ERROR StatusLogger Reconfiguration failed: No configuration found for '46fbb2c1' at 'null' in 'null' Graph Initialized! 
Restoring: vertex_index Exception in thread "Thread-11" org.janusgraph.core.JanusGraphException: Could not restore Solr index at org.janusgraph.graphdb.olap.job.IndexRepairJob.workerIterationEnd(IndexRepairJob.java:239) at org.janusgraph.graphdb.olap.VertexJobConverter.workerIterationEnd(VertexJobConverter.java:90) at org.janusgraph.diskstorage.keycolumnvalue.scan.StandardScannerExecutor$Processor.run(StandardScannerExecutor.java:263) Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not restore Solr index at org.janusgraph.diskstorage.solr.Solr6Index.restore(Solr6Index.java:596) at org.janusgraph.diskstorage.indexing.IndexTransaction.restore(IndexTransaction.java:134) at org.janusgraph.graphdb.olap.job.IndexRepairJob.workerIterationEnd(IndexRepairJob.java:234) ... 2 more Caused by: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException occurred when talking to server at: https://quasar-tiuhte-5.quasar-tiuhte.root.hwx.site:8995/solr/vertex_index_shard1_replica_n1 at org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:125) at org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:46) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.directUpdate(BaseCloudSolrClient.java:559) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1046) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:906) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:838) at org.janusgraph.diskstorage.solr.Solr6Index.commitChanges(Solr6Index.java:633) at org.janusgraph.diskstorage.solr.Solr6Index.restore(Solr6Index.java:593) ... 
4 more Caused by: org.apache.solr.client.solrj.SolrServerException: IOException occurred when talking to server at: https://quasar-tiuhte-5.quasar-tiuhte.root.hwx.site:8995/solr/vertex_index_shard1_replica_n1 at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:682) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248) at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368) at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.lambda$directUpdate$0(BaseCloudSolrClient.java:533) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264) at
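The PKIX failure above means the repair tool's JVM does not trust the Solr HTTPS certificate, so on a TLS-enabled cluster the documented steps need the cluster truststore passed to the JVM. A hypothetical config sketch (the truststore path and password are placeholders, not from the report; JAVA_TOOL_OPTIONS is honored by any HotSpot JVM):

```shell
# Hypothetical sketch for a TLS-enabled cluster: make the repair tool's
# JVM trust the Solr HTTPS endpoint before running the documented steps.
# Replace the truststore path and password with the cluster's values.
export JAVA_TOOL_OPTIONS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
# ...then invoke AtlasRepairIndex exactly as the doc describes.
```

ATLAS-4558 tracks the code-side change; this sketch only illustrates what the doc update would need to mention.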
[jira] [Updated] (ATLAS-4558) When TLS is enabled the steps differ for the AtlasRepairIndex tool
[ https://issues.apache.org/jira/browse/ATLAS-4558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4558: -- Description: [https://atlas.apache.org/2.0.0/AtlasRepairIndex.html] It has the steps for a regular cluster. While following the same, the following error is seen {code:java} WARNING: Illegal reflective access by org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase (file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.22612204/jars/atlas-graphdb-janus-2.1.0.7.1.7.1000-102.jar) to field java.lang.reflect.Field.modifiers WARNING: Please consider reporting this to the maintainers of org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release ERROR StatusLogger Reconfiguration failed: No configuration found for '46fbb2c1' at 'null' in 'null' Graph Initialized! Restoring: vertex_index Exception in thread "Thread-11" org.janusgraph.core.JanusGraphException: Could not restore Solr index at org.janusgraph.graphdb.olap.job.IndexRepairJob.workerIterationEnd(IndexRepairJob.java:239) at org.janusgraph.graphdb.olap.VertexJobConverter.workerIterationEnd(VertexJobConverter.java:90) at org.janusgraph.diskstorage.keycolumnvalue.scan.StandardScannerExecutor$Processor.run(StandardScannerExecutor.java:263) Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not restore Solr index at org.janusgraph.diskstorage.solr.Solr6Index.restore(Solr6Index.java:596) at org.janusgraph.diskstorage.indexing.IndexTransaction.restore(IndexTransaction.java:134) at org.janusgraph.graphdb.olap.job.IndexRepairJob.workerIterationEnd(IndexRepairJob.java:234) ... 
2 more Caused by: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException occurred when talking to server at: https://quasar-tiuhte-5.quasar-tiuhte.root.hwx.site:8995/solr/vertex_index_shard1_replica_n1 at org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:125) at org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:46) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.directUpdate(BaseCloudSolrClient.java:559) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1046) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:906) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:838) at org.janusgraph.diskstorage.solr.Solr6Index.commitChanges(Solr6Index.java:633) at org.janusgraph.diskstorage.solr.Solr6Index.restore(Solr6Index.java:593) ... 4 more Caused by: org.apache.solr.client.solrj.SolrServerException: IOException occurred when talking to server at: https://quasar-tiuhte-5.quasar-tiuhte.root.hwx.site:8995/solr/vertex_index_shard1_replica_n1 at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:682) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248) at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368) at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.lambda$directUpdate$0(BaseCloudSolrClient.java:533) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264) at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259) at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:642) at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:461) at
[jira] [Updated] (ATLAS-4573) [Relationships] Updating legacyAttribute from False to True resets the initially created relationshipAttributes values
[ https://issues.apache.org/jira/browse/ATLAS-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4573: -- Description: Steps followed: Create types, entities, and a relationship with *is_legacy_attribute* initially set to *False*. Update the relationshipDef to set *is_legacy_attribute* to *True*. For the entities that were created before updating *is_legacy_attribute* to *True*, *relationshipAttributes* is now reset. Initial relationship def: [https://quasar-uytlae-4.quasar-uytlae.root.hwx.site:31443/api/atlas/v2/types/relationshipdef/name/ASSOCIATION_5YEDIO] (This will now fetch the updated def) {code:java} { category: "RELATIONSHIP", guid: "-294437519020", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 1648474404121, updateTime: 1648474404121, version: 1, name: "ASSOCIATION_5YEDIO", description: "default relationshipDef description with name: ASSOCIATION_5YEDIO", typeVersion: "1.0", attributeDefs: [], relationshipCategory: "ASSOCIATION", propagateTags: "NONE", endDef1: { type: "type_1_ASSOCIATION_O6FR7Q", name: "rel_attribute", isContainer: false, cardinality: "SINGLE", isLegacyAttribute: false, description: "default relationshipEndDef description with name: rel_attribute" }, endDef2: { type: "type_2_ASSOCIATION_XP3JPH", name: "rel_attribute", isContainer: false, cardinality: "SINGLE", isLegacyAttribute: false, description: "default relationshipEndDef description with name: rel_attribute" } } {code} Entity1 def before update to True: {code:java} { referredEntities: {}, entity: { typeName: "type_1_ASSOCIATION_O6FR7Q", attributes: { name: "entity_1_PP8ULL" }, guid: "daa724fe-1e14-4734-ab95-85c4a5aafee4", isIncomplete: false, status: "ACTIVE", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 1648474464955, updateTime: 1648474464955, version: 0, relationshipAttributes: { rel_attribute: { guid: "5271-ecdb-4792-8962-4bc6a68df3a2", typeName: "type_2_ASSOCIATION_XP3JPH", entityStatus: "ACTIVE", 
displayText: "entity_2_L47P2H", relationshipType: "ASSOCIATION_5YEDIO", relationshipGuid: "8e390507-cdfc-4f83-bded-16862498ac0c", relationshipStatus: "ACTIVE", relationshipAttributes: { typeName: "ASSOCIATION_5YEDIO" } } }, labels: [] } } {code} Entity2 def before Update to True: {code:java} { referredEntities: {}, entity: { typeName: "type_2_ASSOCIATION_XP3JPH", attributes: { name: "entity_2_L47P2H" }, guid: "5271-ecdb-4792-8962-4bc6a68df3a2", isIncomplete: false, status: "ACTIVE", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 1648474464955, updateTime: 1648474464955, version: 0, relationshipAttributes: { rel_attribute: { guid: "daa724fe-1e14-4734-ab95-85c4a5aafee4", typeName: "type_1_ASSOCIATION_O6FR7Q", entityStatus: "ACTIVE", displayText: "entity_1_PP8ULL", relationshipType: "ASSOCIATION_5YEDIO", relationshipGuid: "8e390507-cdfc-4f83-bded-16862498ac0c", relationshipStatus: "ACTIVE", relationshipAttributes: { typeName: "ASSOCIATION_5YEDIO" } } }, labels: [] } } {code} Updated relationship def: {code:java} { category: "RELATIONSHIP", guid: "-294437519020", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 1648474404121, updateTime: 1648474705804, version: 2, name: "ASSOCIATION_5YEDIO", description: "default relationshipDef description with name: ASSOCIATION_5YEDIO", typeVersion: "1.0", attributeDefs: [], relationshipCategory: "ASSOCIATION", propagateTags: "NONE", endDef1: { type: "type_1_ASSOCIATION_O6FR7Q", name: "rel_attribute",
[jira] [Created] (ATLAS-4573) [Relationships] Updating legacyAttribute from False to True resets the initially created relationshipAttributes values
Dharshana M Krishnamoorthy created ATLAS-4573: - Summary: [Relationships] Updating legacyAttribute from False to True resets the initially created relationshipAttributes values Key: ATLAS-4573 URL: https://issues.apache.org/jira/browse/ATLAS-4573 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Attachments: 1_rel_legacy_false.png, 2_entity1_legacy_false.png, 3_entity_2_legacy_false.png, 4_rel_legacy_true.png, 5_entity_1_legacy_true.png, 6_entity_2_legacy_true.png, 7_entity_3_legacy_true.png, 8_entity_4_legacy_true.png Steps followed: Create types, entities, and a relationship with *is_legacy_attribute* initially set to *False*. Update the relationshipDef to set *is_legacy_attribute* to *True*. For the entities that were created before updating *is_legacy_attribute* to *True*, *relationshipAttributes* is now reset. Initial relationship def: {code:java} { category: "RELATIONSHIP", guid: "-294437519020", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 1648474404121, updateTime: 1648474404121, version: 1, name: "ASSOCIATION_5YEDIO", description: "default relationshipDef description with name: ASSOCIATION_5YEDIO", typeVersion: "1.0", attributeDefs: [], relationshipCategory: "ASSOCIATION", propagateTags: "NONE", endDef1: { type: "type_1_ASSOCIATION_O6FR7Q", name: "rel_attribute", isContainer: false, cardinality: "SINGLE", isLegacyAttribute: false, description: "default relationshipEndDef description with name: rel_attribute" }, endDef2: { type: "type_2_ASSOCIATION_XP3JPH", name: "rel_attribute", isContainer: false, cardinality: "SINGLE", isLegacyAttribute: false, description: "default relationshipEndDef description with name: rel_attribute" } } {code} Entity1 def before update to True: {code:java} { referredEntities: {}, entity: { typeName: "type_1_ASSOCIATION_O6FR7Q", attributes: { name: "entity_1_PP8ULL" }, guid: "daa724fe-1e14-4734-ab95-85c4a5aafee4", isIncomplete: false, status: "ACTIVE", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 
1648474464955, updateTime: 1648474464955, version: 0, relationshipAttributes: { rel_attribute: { guid: "5271-ecdb-4792-8962-4bc6a68df3a2", typeName: "type_2_ASSOCIATION_XP3JPH", entityStatus: "ACTIVE", displayText: "entity_2_L47P2H", relationshipType: "ASSOCIATION_5YEDIO", relationshipGuid: "8e390507-cdfc-4f83-bded-16862498ac0c", relationshipStatus: "ACTIVE", relationshipAttributes: { typeName: "ASSOCIATION_5YEDIO" } } }, labels: [] } } {code} Entity2 def before Update to True: {code:java} { referredEntities: {}, entity: { typeName: "type_2_ASSOCIATION_XP3JPH", attributes: { name: "entity_2_L47P2H" }, guid: "5271-ecdb-4792-8962-4bc6a68df3a2", isIncomplete: false, status: "ACTIVE", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 1648474464955, updateTime: 1648474464955, version: 0, relationshipAttributes: { rel_attribute: { guid: "daa724fe-1e14-4734-ab95-85c4a5aafee4", typeName: "type_1_ASSOCIATION_O6FR7Q", entityStatus: "ACTIVE", displayText: "entity_1_PP8ULL", relationshipType: "ASSOCIATION_5YEDIO", relationshipGuid: "8e390507-cdfc-4f83-bded-16862498ac0c", relationshipStatus: "ACTIVE", relationshipAttributes: { typeName: "ASSOCIATION_5YEDIO" } } }, labels: [] } } {code} Updated relationship def: {code:java} { category: "RELATIONSHIP", guid: "-294437519020", createdBy: "hrt_qa", updatedBy: "hrt_qa", createTime: 1648474404121, updateTime: 1648474705804, version: 2, name: "ASSOCIATION_5YEDIO", description: "default relationshipDef description with
[jira] [Created] (ATLAS-4569) Import kafka is failing with NoSuchMethod error
Dharshana M Krishnamoorthy created ATLAS-4569: - Summary: Import kafka is failing with NoSuchMethod error Key: ATLAS-4569 URL: https://issues.apache.org/jira/browse/ATLAS-4569 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Unable to perform kafka import as it is failing with the following exception. Log from console {code:java} >>>>> /opt/cloudera/parcels/CDH/lib/atlas Using Kafka configuration directory [/etc/kafka/conf] Log file for import is /var/log/atlas/import-kafka.log SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.23291269/jars/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.23291269/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Exception in thread "main" java.lang.NoSuchMethodError: org.apache.log4j.helpers.OptionConverter.convertLevel(Ljava/lang/String;Lorg/apache/logging/log4j/Level;)Lorg/apache/logging/log4j/Level; at org.apache.log4j.xml.XmlConfiguration.parseLevel(XmlConfiguration.java:646) at org.apache.log4j.xml.XmlConfiguration.lambda$parseChildrenOfLoggerElement$4(XmlConfiguration.java:563) at org.apache.log4j.xml.XmlConfiguration.forEachElement(XmlConfiguration.java:762) at org.apache.log4j.xml.XmlConfiguration.parseChildrenOfLoggerElement(XmlConfiguration.java:548) at org.apache.log4j.xml.XmlConfiguration.parseCategory(XmlConfiguration.java:530) at org.apache.log4j.xml.XmlConfiguration.lambda$parse$6(XmlConfiguration.java:721) at org.apache.log4j.xml.XmlConfiguration.forEachElement(XmlConfiguration.java:762) at org.apache.log4j.xml.XmlConfiguration.parse(XmlConfiguration.java:717) at org.apache.log4j.xml.XmlConfiguration.doConfigure(XmlConfiguration.java:166) 
at org.apache.log4j.xml.XmlConfiguration.doConfigure(XmlConfiguration.java:141) at org.apache.log4j.config.Log4j1Configuration.initialize(Log4j1Configuration.java:60) at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:293) at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:626) at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:699) at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:716) at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:270) at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:155) at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:47) at org.apache.logging.log4j.LogManager.getContext(LogManager.java:196) at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:137) at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:55) at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:47) at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:33) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:363) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388) at org.apache.atlas.kafka.bridge.KafkaBridge.(KafkaBridge.java:56) Failed to import Kafka Data Model!!! {code} Issue is seen on 7.1.8 build: *23291269* -- This message was sent by Atlassian Jira (v8.20.1#820001)
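A `NoSuchMethodError` on `OptionConverter.convertLevel` during log4j initialization typically indicates a classpath that mixes original log4j 1.x classes with the log4j2 compatibility layer, or carries two SLF4J bindings (the SLF4J warnings above already flag the latter). As a minimal diagnostic sketch, one might scan the parcel's jar directory for the usual suspects; the jar-name patterns below are illustrative assumptions, not taken from this report:

```python
import os
import re

# Jar-name patterns for logging artifacts that commonly conflict when mixed:
# log4j 1.x alongside the log4j2 1.2 bridge, or duplicate SLF4J bindings.
CONFLICT_PATTERNS = [
    r"^log4j-1\.2\.[0-9.]+\.jar$",   # original log4j 1.x
    r"^log4j-1\.2-api-.*\.jar$",     # log4j2's 1.2 compatibility bridge
    r"^slf4j-log4j12-.*\.jar$",      # SLF4J -> log4j 1.x binding
    r"^log4j-slf4j-impl-.*\.jar$",   # SLF4J -> log4j2 binding
]

def find_logging_jars(jar_dir):
    """Return jar names in jar_dir that match known-conflicting logging artifacts."""
    hits = []
    for name in sorted(os.listdir(jar_dir)):
        if any(re.match(p, name) for p in CONFLICT_PATTERNS):
            hits.append(name)
    return hits
```

If the scan reports both an SLF4J-to-log4j1 binding and an SLF4J-to-log4j2 binding (as the console output above suggests), removing one of the two from the import script's classpath is the usual remedy.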
[jira] [Created] (ATLAS-4568) Basic search with tag filter gives approximateCount as -1 where there is no match and is 0 otherwise
Dharshana M Krishnamoorthy created ATLAS-4568: - Summary: Basic search with tag filter gives approximateCount as -1 where there is no match and is 0 otherwise Key: ATLAS-4568 URL: https://issues.apache.org/jira/browse/ATLAS-4568 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy *Scenario 1: faceted search with both tag filter and entity filter* Payload with entity filter and tag filter {code:java} { "excludeDeletedEntities": true, "includeSubClassifications": true, "includeSubTypes": true, "includeClassificationAttributes": true, "entityFilters": { "condition": "AND", "criterion": [{ "attributeName": "name", "operator": "contains", "attributeValue": "zfmrp" }, { "attributeName": "clusterName", "operator": "contains", "attributeValue": "@cm" }, { "attributeName": "owner", "operator": "neq", "attributeValue": "hrt_qa" }, { "attributeName": "fileSize", "operator": "gt", "attributeValue": "-2" }] }, "tagFilters": { "condition": "AND", "criterion": [{ "attributeName": "string", "operator": "neq", "attributeValue": "str5" }] }, "attributes": ["clusterName", "fileSize"], "limit": 25, "offset": 0, "typeName": "hdfs_path", "classification": "tag_piakb_1", "termName": null }{code} Posting a basic search with the above payload, we get the following response {code:java} { "queryType": "BASIC", "searchParameters": { "typeName": "hdfs_path", "classification": "tag_piakb_1", "excludeDeletedEntities": true, "includeClassificationAttributes": true, "includeSubTypes": true, "includeSubClassifications": true, "limit": 25, "offset": 0, "entityFilters": { "condition": "AND", "criterion": [{ "attributeName": "name", "operator": "contains", "attributeValue": "zfmrp" }, { "attributeName": "clusterName", "operator": "contains", "attributeValue": "@cm" }, { "attributeName": "owner", "operator": "!=", "attributeValue": "hrt_qa" }, { "attributeName": "fileSize", "operator": ">", "attributeValue": "-2" }] }, "tagFilters": { "condition": "AND", 
"criterion": [{ "attributeName": "string", "operator": "!=", "attributeValue": "str5" }] }, "attributes": ["fileSize", "clusterName"] }, "approximateCount": -1 } {code} Here we can see that *approximateCount* is {*}-1{*}: with both an entity filter and a tag filter present and no match, the response gives *-1*. *Scenario 2: faceted search with only entity filter* Payload with only entity filter {code:java} { "excludeDeletedEntities": true, "includeSubClassifications": true, "includeSubTypes": true, "includeClassificationAttributes": true, "entityFilters": { "condition": "AND", "criterion": [{ "attributeName": "name", "operator": "contains", "attributeValue": "zfmrp" }, { "attributeName": "clusterName", "op
[jira] [Updated] (ATLAS-4559) Basic search with GET method gives incorrect result when single quote is used in the query
[ https://issues.apache.org/jira/browse/ATLAS-4559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4559: -- Summary: Basic search with GET method gives incorrect result when single quote is used in the query (was: Basic search with GET api gives incorrect result when single quote is used in the query) > Basic search with GET method gives incorrect result when single quote is used > in the query > -- > > Key: ATLAS-4559 > URL: https://issues.apache.org/jira/browse/ATLAS-4559 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: image-2022-02-25-15-28-27-518.png, > image-2022-02-25-15-30-31-001.png > > > Query: > [https://quasar-tiuhte-1.quasar-tiuhte.root.hwx.site:31443/api/atlas/v2/search/basic?typeName=hive_table=qualifiedName='default.avbzi_random_table@cm'|https://quasar-tiuhte-1.quasar-tiuhte.root.hwx.site:31443/api/atlas/v2/search/basic?typeName=hive_table=qualifiedName=%27default.avbzi_random_table@cm%27] > Results: > {code:java} > {queryType: "BASIC",…}approximateCount: 22entities: [{typeName: > "hive_table",…}, {typeName: "hive_table",…},…]queryText: > "qualifiedName='default.avbzi_random_table@cm'"queryType: > "BASIC"searchParameters: {query: > "qualifiedName='default.avbzi_random_table@cm'", typeName: "hive_table",…} > {code} > !image-2022-02-25-15-30-31-001.png|width=1189,height=370! 
> But when double quotes are used, it gives the right output: > Query: > https://quasar-tiuhte-1.quasar-tiuhte.root.hwx.site:31443/api/atlas/v2/search/basic?typeName=hive_table=qualifiedName="default.avbzi_random_table@cm; > {code:java} > {queryType: "BASIC",…}approximateCount: 1entities: [{typeName: > "hive_table",…}]queryText: > "qualifiedName=\"default.avbzi_random_table@cm\""queryType: > "BASIC"searchParameters: {query: > "qualifiedName="default.avbzi_random_table@cm"", typeName: "hive_table",…} > {code} > !image-2022-02-25-15-28-27-518.png|width=869,height=284! -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (ATLAS-4559) Basic search with GET api gives incorrect result when single quote is used in the query
Dharshana M Krishnamoorthy created ATLAS-4559: - Summary: Basic search with GET api gives incorrect result when single quote is used in the query Key: ATLAS-4559 URL: https://issues.apache.org/jira/browse/ATLAS-4559 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Attachments: image-2022-02-25-15-28-27-518.png, image-2022-02-25-15-30-31-001.png Query: [https://quasar-tiuhte-1.quasar-tiuhte.root.hwx.site:31443/api/atlas/v2/search/basic?typeName=hive_table=qualifiedName='default.avbzi_random_table@cm'|https://quasar-tiuhte-1.quasar-tiuhte.root.hwx.site:31443/api/atlas/v2/search/basic?typeName=hive_table=qualifiedName=%27default.avbzi_random_table@cm%27] Results: {code:java} {queryType: "BASIC",…}approximateCount: 22entities: [{typeName: "hive_table",…}, {typeName: "hive_table",…},…]queryText: "qualifiedName='default.avbzi_random_table@cm'"queryType: "BASIC"searchParameters: {query: "qualifiedName='default.avbzi_random_table@cm'", typeName: "hive_table",…} {code} !image-2022-02-25-15-30-31-001.png|width=1189,height=370! But when double quotes are used, it gives the right output: Query: https://quasar-tiuhte-1.quasar-tiuhte.root.hwx.site:31443/api/atlas/v2/search/basic?typeName=hive_table=qualifiedName="default.avbzi_random_table@cm; {code:java} {queryType: "BASIC",…}approximateCount: 1entities: [{typeName: "hive_table",…}]queryText: "qualifiedName=\"default.avbzi_random_table@cm\""queryType: "BASIC"searchParameters: {query: "qualifiedName="default.avbzi_random_table@cm"", typeName: "hive_table",…} {code} !image-2022-02-25-15-28-27-518.png|width=869,height=284! -- This message was sent by Atlassian Jira (v8.20.1#820001)
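The query URLs above appear to have lost their `&` separators in transit, which makes the exact request hard to reproduce. When exercising this endpoint, percent-encoding the query parameter ensures the quote characters reach the server intact. A small sketch, assuming the Atlas v2 basic-search endpoint and a placeholder host:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Build a percent-encoded basic-search GET URL. The host below is a
# placeholder; the endpoint path and parameter names follow the Atlas v2 API.
def basic_search_url(base, type_name, query):
    params = {"typeName": type_name, "query": query}
    return base + "/api/atlas/v2/search/basic?" + urlencode(params)

url = basic_search_url(
    "https://atlas.example.com:31443",
    "hive_table",
    'qualifiedName="default.avbzi_random_table@cm"',
)
# Decoding the URL back recovers the exact query, quotes included.
decoded = parse_qs(urlparse(url).query)
```

Reproducing the single-quote and double-quote variants this way rules out client-side encoding as the cause, isolating the difference to how the server parses the two quote styles.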
[jira] [Created] (ATLAS-4526) [Regression] When a kafka topic is created the Atlas response does not contain replicationFactor
Dharshana M Krishnamoorthy created ATLAS-4526: - Summary: [Regression] When a kafka topic is created the Atlas response does not contain replicationFactor Key: ATLAS-4526 URL: https://issues.apache.org/jira/browse/ATLAS-4526 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy When a kafka topic is created and is reflected in Atlas, the response does not contain 'replicationFactor' Request: [https://quasar-cgfwjs-1.quasar-cgfwjs.root.hwx.site:31443/api/atlas/v2/entity/guid/eef4216f-0dc8-485d-9cc3-8ec96cfabf54] Response: {code:java} { "referredEntities": {}, "entity": { "typeName": "kafka_topic", "attributes": { "partitionCountLocal": 0, "desiredRetentionInHrs": 0, "retentionBytesNational": 0, "contactInfo": null, "replicatedFrom": null, "displayName": null, "numberOfEventsPerDay": 0, "maxThroughputPerSec": 0, "description": "topic_rhgua", "retentiontimeLocalInHrs": 0, "type": null, "avroSchema": [], "partitionCount": 1, "owner": null, "replicatedTo": null, "userDescription": null, "qualifiedName": "topic_rhgua@cm", "segmentBytesNational": 0, "partitionCountNational": 0, "segmentBytesLocal": 0, "uri": "topic_rhgua", "replicationFactorNational": 0, "avgMessageSizeInBytes": 0, "replicationFactorLocal": 0, "retentiontimeNationalInHrs": 0, "retentionBytesLocal": 0, "name": "topic_rhgua", "topic": "topic_rhgua", "keyClassname": null }, "guid": "eef4216f-0dc8-485d-9cc3-8ec96cfabf54", "isIncomplete": false, "status": "ACTIVE", "createdBy": "kafka", "updatedBy": "kafka", "createTime": 1641491509705, "updateTime": 1641491509705, "version": 0, "relationshipAttributes": { "inputToProcesses": [], "pipeline": null, "schema": [], "kafkaConsumerLineage": null, "model": null, "kafkaProducerLineage": null, "avroSchema": [], "meanings": [], "outputFromProcesses": [] }, "labels": [] } }{code} It has *replicationFactorLocal* and *replicationFactorNational* but does not contain *replicationFactor* It was working fine in the previous successful 
build: -- This message was sent by Atlassian Jira (v8.20.1#820001)
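A regression like this (an expected attribute silently dropped from an entity response) is easy to catch with a small response check. This is a hedged sketch: the helper and the expected-attribute list are illustrative, not part of Atlas or the full kafka_topic typedef:

```python
# Given an Atlas v2 entity response, list which expected attribute names
# are absent from entity.attributes. The trimmed response below mirrors
# the shape of the report's example.
def missing_attributes(entity_response, expected):
    attrs = entity_response.get("entity", {}).get("attributes", {})
    return [name for name in expected if name not in attrs]

response = {
    "entity": {
        "typeName": "kafka_topic",
        "attributes": {
            "replicationFactorLocal": 0,
            "replicationFactorNational": 0,
            "partitionCount": 1,
        },
    }
}
gaps = missing_attributes(
    response, ["replicationFactor", "replicationFactorLocal", "partitionCount"]
)
```

Run against the response in the report, such a check would flag only *replicationFactor*, matching the observation that the Local and National variants survived the regression.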
[jira] [Created] (ATLAS-4525) SchemaViolationException should be handled gracefully.
Dharshana M Krishnamoorthy created ATLAS-4525: - Summary: SchemaViolationException should be handled gracefully. Key: ATLAS-4525 URL: https://issues.apache.org/jira/browse/ATLAS-4525 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy SchemaViolationException should be handled gracefully. As this is a user error, it should return a 400 instead of a 500 error. *Repro steps:* Test SchemaViolationException while modifying the unique attribute of an entity 1) Create an entity type with an attribute (say attrib_1) that has isUnique set to true 2) Create 2 entities with this type 3) These 2 entities will now have the attribute attrib_1 4) Update the value of attrib_1 to attrib_1_value in entity 1 (this will succeed) 5) Update the value of attrib_1 to attrib_1_value in entity 2 (this will fail with SchemaViolationException and error code 500) Since this is a user error, we need to handle it gracefully and return a bad request with the right reason instead of a server exception with a 500 error code -- This message was sent by Atlassian Jira (v8.20.1#820001)
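The repro setup and the requested fix can be sketched as follows. The type and attribute names are illustrative, and `status_for` only models the suggested exception-to-status mapping, not actual Atlas code:

```python
# Step 1 of the repro: an entityDef whose attribute attrib_1 is marked
# unique, so a second entity reusing the same value must be rejected.
def unique_attr_entity_def(type_name, attr_name):
    return {
        "category": "ENTITY",
        "name": type_name,
        "attributeDefs": [{
            "name": attr_name,
            "typeName": "string",
            "isUnique": True,       # duplicate values across entities are invalid
            "isOptional": True,
            "cardinality": "SINGLE",
        }],
    }

# The mapping this report asks for: a unique-constraint violation is a
# client error (400), not an unclassified server error (500).
def status_for(exception_name):
    if exception_name == "SchemaViolationException":
        return 400  # Bad Request: caller supplied a duplicate unique value
    return 500      # anything unclassified stays a server error
```

The point of the mapping is that step 5 is a predictable consequence of client input, so the API should report it as such rather than surface the storage-layer exception.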
[jira] [Created] (ATLAS-4524) HiveMetaStoreBridge.registerTable() fails when importing hive tables with large columns
Dharshana M Krishnamoorthy created ATLAS-4524: - Summary: HiveMetaStoreBridge.registerTable() fails when importing hive tables with large columns Key: ATLAS-4524 URL: https://issues.apache.org/jira/browse/ATLAS-4524 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy *Steps to repro:* # Disable hive hook # Create a table with 1000 columns # Import the table using hive-import command {code:java} 2022-01-06 13:51:52,329 ERROR - [main:] ~ Import failed for hive_table hive_table_tmwos (HiveMetaStoreBridge:503) org.apache.atlas.hook.AtlasHookException: HiveMetaStoreBridge.registerTable() failed. at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerTable(HiveMetaStoreBridge.java:559) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importTable(HiveMetaStoreBridge.java:448) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importTables(HiveMetaStoreBridge.java:426) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:395) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDataDirectlyToAtlas(HiveMetaStoreBridge.java:352) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:186) Caused by: com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155) at com.sun.jersey.api.client.Client.handle(Client.java:652) at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) at com.sun.jersey.api.client.WebResource$Builder.method(WebResource.java:634) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:402) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:366) at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:352) at 
org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:228) at org.apache.atlas.AtlasClientV2.createEntity(AtlasClientV2.java:431) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerInstance(HiveMetaStoreBridge.java:575) at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerTable(HiveMetaStoreBridge.java:548) ... 5 more Caused by: java.net.SocketTimeoutException: Read timed out at java.base/java.net.SocketInputStream.socketRead0(Native Method) at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168) at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:448) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:68) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1104) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:823) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:292) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351) at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:746) at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:689) at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1604) at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509) at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527) at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:329) at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253) at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153) ... 16 more {code} Sample query to create a table with 1000 columns : {code:java} "create database hive_db_vxofz; use hive_db_vxofz; create table hive_table_tmwos(hive_column_bhngv_1 bigint, hive_column_bhngv_2 boolean, hive_column_bhngv_3 varchar(25), hive_column_bhngv_4 smallint, hive_column_bhngv_5 bigint, hive_column_bhngv_6 int, hive_column_bhngv_7 int, hive_column_bhngv_8 timestamp, hive_column_bhngv_9 float, hive_column_bhngv_10 float, hive_column_bhngv_11 float, hive_column_bhngv_12 boolean, hive_column_bhngv_13 timestamp, hive_column_bhngv_14 boolean, hive_column_bhngv_15 bigint, hive_column_bhngv_16 de
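The sample query above is truncated, but a table of the same shape can be generated programmatically when reproducing the timeout. A minimal sketch; the table and column names are illustrative, and the types simply cycle through a few Hive primitives as in the sample:

```python
import itertools

# Generate a CREATE TABLE statement with many columns, equivalent in
# spirit to the report's truncated 1000-column sample query.
def wide_create_table(table, n_columns):
    types = itertools.cycle(
        ["bigint", "boolean", "varchar(25)", "smallint", "int", "timestamp", "float"]
    )
    cols = ", ".join(
        f"col_{i} {t}" for i, t in zip(range(1, n_columns + 1), types)
    )
    return f"create table {table}({cols});"

ddl = wide_create_table("hive_table_wide", 1000)
```

Feeding the generated DDL to beeline and then running hive-import reproduces the scenario without hand-writing a thousand column definitions.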
[jira] [Resolved] (ATLAS-4510) [Regression] CTAS table creation is failing with "org.apache.hive.service.cli.HiveSQLException" with "Processor has no capabilities, cannot create an ACID table"
[ https://issues.apache.org/jira/browse/ATLAS-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy resolved ATLAS-4510. Resolution: Invalid. This is a Hive issue.
> [Regression] CTAS table creation is failing with "org.apache.hive.service.cli.HiveSQLException" with "Processor has no capabilities, cannot create an ACID table"
> Key: ATLAS-4510
> URL: https://issues.apache.org/jira/browse/ATLAS-4510
> Project: Atlas
> Issue Type: Bug
> Reporter: Dharshana M Krishnamoorthy
> Priority: Major
[jira] [Updated] (ATLAS-4510) [Regression] CTAS table creation is failing with "org.apache.hive.service.cli.HiveSQLException" with "Processor has no capabilities, cannot create an ACID table"
[ https://issues.apache.org/jira/browse/ATLAS-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4510. Component/s: (was: atlas-core)
> [Regression] CTAS table creation is failing with "org.apache.hive.service.cli.HiveSQLException" with "Processor has no capabilities, cannot create an ACID table"
> Key: ATLAS-4510
> URL: https://issues.apache.org/jira/browse/ATLAS-4510
> Project: Atlas
> Issue Type: Bug
> Reporter: Dharshana M Krishnamoorthy
> Priority: Major
[jira] [Created] (ATLAS-4510) [Regression] CTAS table creation is failing with "org.apache.hive.service.cli.HiveSQLException" with "Processor has no capabilities, cannot create an ACID table"
Dharshana M Krishnamoorthy created ATLAS-4510:
Summary: [Regression] CTAS table creation is failing with "org.apache.hive.service.cli.HiveSQLException" with "Processor has no capabilities, cannot create an ACID table"
Key: ATLAS-4510
URL: https://issues.apache.org/jira/browse/ATLAS-4510
Project: Atlas
Issue Type: Bug
Components: atlas-core
Reporter: Dharshana M Krishnamoorthy

Seeing the exception below while creating Hive CTAS tables:
Error while compiling statement: FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Processor has no capabilities, cannot create an ACID table.)
{code:java}
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Processor has no capabilities, cannot create an ACID table.)
    at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:356)
    at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:342)
    at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:324)
    at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:265)
    at org.apache.hive.jdbc.HiveStatement.executeUpdate(HiveStatement.java:511)
    at org.apache.atlas.regression.hive.HiveUtils.executeDDL(HiveUtils.java:84)
    at org.apache.atlas.regression.tests.HiveIntegrationCreateTableTest.createTableAsSelect(HiveIntegrationCreateTableTest.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:85)
    at org.testng.internal.Invoker.invokeMethod(Invoker.java:659)
    at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:845)
    at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1153)
    at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
    at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
    at org.testng.TestRunner.privateRun(TestRunner.java:771)
    at org.testng.TestRunner.run(TestRunner.java:621)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:357)
    at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:352)
    at org.testng.SuiteRunner.privateRun(SuiteRunner.java:310)
    at org.testng.SuiteRunner.run(SuiteRunner.java:259)
    at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
    at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
    at org.testng.TestNG.runSuitesSequentially(TestNG.java:1199)
    at org.testng.TestNG.runSuitesLocally(TestNG.java:1124)
    at org.testng.TestNG.run(TestNG.java:1032)
    at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:295)
    at org.apache.maven.surefire.testng.TestNGXmlTestSuite.execute(TestNGXmlTestSuite.java:84)
    at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:90)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Processor has no capabilities, cannot create an ACID table.)
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:362)
    at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:207)
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:261)
    at org.apache.hive.service.cli.operation.Operation.run(Operation.java:274)
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:549)
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:535)
    at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315)
    at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:567)
    at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1550)
    at org.apache.hive.service.rp
[jira] [Updated] (ATLAS-4509) After restart, Atlas service does not come up
[ https://issues.apache.org/jira/browse/ATLAS-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4509. Attachment: Screen Shot 2021-12-16 at 11.34.23 AM.png
> After restart, Atlas service does not come up
> Key: ATLAS-4509
> URL: https://issues.apache.org/jira/browse/ATLAS-4509
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Reporter: Dharshana M Krishnamoorthy
> Priority: Major
> Attachments: Screen Shot 2021-12-16 at 11.34.23 AM.png
[jira] [Created] (ATLAS-4509) After restart, Atlas service does not come up
Dharshana M Krishnamoorthy created ATLAS-4509:
Summary: After restart, Atlas service does not come up
Key: ATLAS-4509
URL: https://issues.apache.org/jira/browse/ATLAS-4509
Project: Atlas
Issue Type: Bug
Components: atlas-core
Reporter: Dharshana M Krishnamoorthy

After a config update or any other change, when we perform a restart of the cluster or of the service, the restart succeeds but Atlas does not come up. We can see the following exception:
{code:java}
2021-12-16 06:04:05,667 ERROR - [etp187457031-125:] ~ Could not retrieve active server address as it is null. Cannot redirect request /favicon.ico (ActiveServerFilter:103)
2021-12-16 06:04:16,162 ERROR - [etp187457031-118:] ~ URL not supported in HA mode: /api/atlas/admin/metrics (ActiveServerFilter:121)
2021-12-16 06:04:16,164 ERROR - [etp187457031-118:] ~ Error getting active server address (ActiveInstanceState:142)
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atlas/active_server_info
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:2131)
    at org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:327)
    at org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:316)
    at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)
    at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81)
    at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:313)
    at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:304)
    at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:35)
    at org.apache.atlas.web.service.ActiveInstanceState.getActiveServerAddress(ActiveInstanceState.java:139)
    at org.apache.atlas.web.filters.ActiveServerFilter.doFilter(ActiveServerFilter.java:101)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
    at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:149)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
    at org.apache.atlas.web.filters.AtlasKnoxSSOAuthenticationFilter.doFilter(AtlasKnoxSSOAuthenticationFilter.java:142)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
    at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:218)
    at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:212)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
    at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:103)
    at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:89)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
    at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
    at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
    at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:110)
    at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:80)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)
    at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:55)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
    at org.springframework.security.web.FilterChainProxy
[jira] [Created] (ATLAS-4480) [API] Observing slowness in rest api response
Dharshana M Krishnamoorthy created ATLAS-4480:
Summary: [API] Observing slowness in rest api response
Key: ATLAS-4480
URL: https://issues.apache.org/jira/browse/ATLAS-4480
Project: Atlas
Issue Type: Bug
Components: atlas-core
Reporter: Dharshana M Krishnamoorthy
Attachments: Screenshot 2021-11-15 at 3.51.10 PM.png

Observing slowness in the API response. Example scenario:
# Create an entity
# Soft delete it
# Purge the entity
# Verify the purge audit
Here we fire an admin API call immediately after the entity is purged, but the admin API response is empty []. If we query after some time, we can see the audit is present for the purge call.
!Screenshot 2021-11-15 at 3.51.10 PM.png|width=377,height=180!
Logs showing the above steps:
{code:java}
2021-11-12 10:34:17,702|INFO|MainThread|atlasv2.py:659 - get_entity_def()|https://quasar-gmxrxz-2.quasar-gmxrxz.root.hwx.site:31443/api/atlas/v2/entity/guid/e15aeb8e-cfb2-4b9d-96f1-b9bce94586e4
2021-11-12 10:34:17,703|INFO|MainThread|atlas.py:1135 - http_get_request()|https://quasar-gmxrxz-2.quasar-gmxrxz.root.hwx.site:31443/api/atlas/v2/entity/guid/e15aeb8e-cfb2-4b9d-96f1-b9bce94586e4
2021-11-12 10:34:17,703|INFO|MainThread|atlas.py:1202 - http_request()|https://quasar-gmxrxz-2.quasar-gmxrxz.root.hwx.site:31443/api/atlas/v2/entity/guid/e15aeb8e-cfb2-4b9d-96f1-b9bce94586e4
2021-11-12 10:34:17,704|INFO|MainThread|atlas.py:1209 - http_request()|HTTP Method: GET, Body: None
2021-11-12 10:34:17,705|INFO|MainThread|atlas.py:1238 - http_request()|Making HTTP requests via Kerberos auth
2021-11-12 10:34:17,787|INFO|MainThread|atlas.py:1257 - http_request()|HTTP response code: 200
2021-11-12 10:34:17,790|INFO|MainThread|atlas.py:807 - set_base_url()|Base url: https://quasar-gmxrxz-2.quasar-gmxrxz.root.hwx.site:31443
2021-11-12 10:34:17,791|INFO|MainThread|atlas.py:1202 - http_request()|https://quasar-gmxrxz-2.quasar-gmxrxz.root.hwx.site:31443/api/atlas/admin/purge
2021-11-12 10:34:17,791|INFO|MainThread|atlas.py:1209 - http_request()|HTTP Method: PUT,
Body: ["e15aeb8e-cfb2-4b9d-96f1-b9bce94586e4"] 2021-11-12 10:34:17,792|INFO|MainThread|atlas.py:1238 - http_request()|Making HTTP requests via Kerberos auth 2021-11-12 10:34:18,184|INFO|MainThread|atlas.py:1257 - http_request()|HTTP response code: 200 2021-11-12 10:34:18,186|INFO|MainThread|utils.py:18 - purge_entity_and_verify_status()|{u'mutatedEntities': {u'PURGE': [{u'status': u'DELETED', u'isIncomplete': False, u'guid': u'9c9d8fc0-3f81-4d24-987f-60172e776d84', u'classifications': [], u'labels': [], u'typeName': u'hive_column', u'meaningNames': [], u'displayText': u'id', u'meanings': [], u'attributes': {u'owner': u'hrt_qa', u'qualifiedName': u'default.table_lxjgr.id@cm', u'name': u'id'}, u'classificationNames': []}, {u'status': u'DELETED', u'isIncomplete': False, u'guid': u'6c7b8038-3c46-4d25-9ef3-c4bdb7ddb5a3', u'classifications': [], u'labels': [], u'typeName': u'hive_column', u'meaningNames': [], u'displayText': u'name', u'meanings': [], u'attributes': {u'owner': u'hrt_qa', u'qualifiedName': u'default.table_lxjgr.name@cm', u'name': u'name'}, u'classificationNames': []}, {u'status': u'DELETED', u'isIncomplete': False, u'guid': u'e15aeb8e-cfb2-4b9d-96f1-b9bce94586e4', u'classifications': [], u'labels': [], u'typeName': u'hive_table', u'meaningNames': [], u'displayText': u'table_lxjgr', u'meanings': [], u'attributes': {u'owner': u'hrt_qa', u'qualifiedName': u'default.table_lxjgr@cm', u'createTime': 1636713248000, u'name': u'table_lxjgr'}, u'classificationNames': []}, {u'status': u'DELETED', u'isIncomplete': False, u'guid': u'6132f1b4-6efb-45af-8ec2-70f197aa16c2', u'classifications': [], u'labels': [], u'typeName': u'hive_storagedesc', u'meaningNames': [], u'displayText': u'default.table_lxjgr@cm_storage', u'meanings': [], u'attributes': {u'qualifiedName': u'default.table_lxjgr@cm_storage'}, u'classificationNames': []}, {u'status': u'DELETED', u'isIncomplete': False, u'guid': u'0354d287-c598-4325-b0ec-d48dfaf6fd9a', u'classifications': [], u'labels': [], 
u'typeName': u'hive_table_ddl', u'meaningNames': [], u'displayText': u'default.table_lxjgr@cm:1636713248787', u'meanings': [], u'attributes': {u'qualifiedName': u'default.table_lxjgr@cm:1636713248787'}, u'classificationNames': []}]}} 2021-11-12 10:34:18,186|INFO|MainThread|utils.py:72 - purge_async_deleted_entities()|{u'mutatedEntities': {u'PURGE': [{u'status': u'DELETED', u'isIncomplete': False, u'guid': u'9c9d8fc0-3f81-4d24-987f-60172e776d84', u'classifications': [], u'labels': [], u'typeName': u'hive_column', u'meaningNames': [], u'displayText': u'id', u'meanings': [], u'attributes': {u'owner': u'hrt_qa', u'qualifiedName': u'default.table_lxjgr.id@cm', u'name': u'id'}, u'classificationNames': []}, {u'status': u'DELETED', u'isIncomplete': False, u'guid': u'6c7b8038-3c46
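Since the purge audit shows up only after a delay, a test can poll for it instead of asserting immediately after the purge call. A minimal sketch; the `fetch_audits` callable standing in for the real request to the admin audits endpoint is hypothetical, not part of any Atlas client library:

```python
import time

def wait_for_audit(fetch_audits, timeout_s=120, interval_s=5):
    """Poll fetch_audits() until it returns a non-empty list or time runs out.

    fetch_audits: zero-argument callable returning the parsed JSON list from
    the admin audit endpoint (a stand-in here; wire it to your own HTTP call).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        audits = fetch_audits()
        if audits:  # the server has caught up; audit entries are visible
            return audits
        time.sleep(interval_s)
    raise TimeoutError("audit entries not visible within %s seconds" % timeout_s)
```

In the scenario above, the verification step would call `wait_for_audit(lambda: get_admin_audits(...))` rather than reading the audit list once right after the purge returns.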
[jira] [Created] (ATLAS-4421) [Atlas: Hive Import] When import-hive is run with incorrect input the information is not conveyed to the user
Dharshana M Krishnamoorthy created ATLAS-4421:
Summary: [Atlas: Hive Import] When import-hive is run with incorrect input the information is not conveyed to the user
Key: ATLAS-4421
URL: https://issues.apache.org/jira/browse/ATLAS-4421
Project: Atlas
Issue Type: Bug
Reporter: Dharshana M Krishnamoorthy

{code:java}
[root@quasar-cxwzxp-3 bin]# /opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh -f /tmp/file2.txt
...
Log file for import is /var/log/atlas/import-hive.log
...
Hive Meta Data imported successfully!!!
{code}
In the above example, the file */tmp/file2.txt* does not exist. The log file is also not created in the expected location /var/log/atlas/import-hive.log, which is tracked by a separate Jira.
This gives the user the impression that the import succeeded while nothing has actually happened. It would be good to convey that no data was imported in such cases.
This is also true when the database name/pattern provided to -d or the table name/pattern provided to -t is incorrect.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
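Until the script itself reports the problem, a wrapper can validate the `-f` argument before invoking import-hive.sh. A sketch only; the function name and the idea of a pre-check wrapper are illustrative, not part of Atlas:

```python
import os

def validate_import_file(path):
    """Return None if path is a readable, non-empty file, else an error string.

    Intended as a pre-check before invoking import-hive.sh -f <path>, so a
    missing or empty input file fails loudly instead of reporting
    "Hive Meta Data imported successfully!!!" on a no-op run.
    """
    if not os.path.isfile(path):
        return "input file does not exist: %s" % path
    if os.path.getsize(path) == 0:
        return "input file is empty, nothing to import: %s" % path
    return None
```

A wrapper script would call this first and exit non-zero with the returned message before ever launching the import.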
[jira] [Created] (ATLAS-4420) [Atlas: Hive Import] The hive import logs are not generated
Dharshana M Krishnamoorthy created ATLAS-4420:
Summary: [Atlas: Hive Import] The hive import logs are not generated
Key: ATLAS-4420
URL: https://issues.apache.org/jira/browse/ATLAS-4420
Project: Atlas
Issue Type: Bug
Components: atlas-core
Reporter: Dharshana M Krishnamoorthy

{code:java}
[root@quasar-cxwzxp-3 bin]# /opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh -f /tmp/file2.txt
Using Hive configuration directory [/etc/hive/conf]
...
Log file for import is /var/log/atlas/import-hive.log
SLF4J: Class path contains multiple SLF4J bindings.
...
Hive Meta Data imported successfully!!!
{code}
As seen above, the import command says the import log file will be generated at */var/log/atlas/import-hive.log*, but no such file is created.
[jira] [Created] (ATLAS-4419) [Atlas: Hive Import] Suggestion suggests entity that is deleted
Dharshana M Krishnamoorthy created ATLAS-4419:
Summary: [Atlas: Hive Import] Suggestion suggests entity that is deleted
Key: ATLAS-4419
URL: https://issues.apache.org/jira/browse/ATLAS-4419
Project: Atlas
Issue Type: Bug
Components: atlas-core
Reporter: Dharshana M Krishnamoorthy

Scenario:
# Create a database: say dharsh_db
# Create a few tables: say table_1, table_2, table_3
# Disable the hive hook
# Delete table table_1 from dharsh_db
# Run the import-hive command as "/opt/cloudera/parcels/CDH/lib/atlas/hook-bin/import-hive.sh -deleteNonExisting"
# Now search with the string "dharsh_db"
Expectation: Suggestions should list only entities that are still active in Atlas: table_2, table_3
Observation: Suggestions list table_1 of dharsh_db, which is now a deleted entity
[^Screen Recording 2021-09-09 at 4.28.34 PM.mov]
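Until the suggestion index excludes deleted entities, a client can guard against them itself. A sketch assuming the usual Atlas entity-header shape, where each header carries a `status` of `ACTIVE` or `DELETED`; the payload below is illustrative:

```python
def active_only(entity_headers):
    """Keep only entity headers whose status is ACTIVE, dropping DELETED ones.

    entity_headers: list of dicts shaped like Atlas search/suggestion results
    (assumed shape; verify against the response of your Atlas version).
    """
    return [e for e in entity_headers if e.get("status") == "ACTIVE"]
```

Applied to the scenario above, a suggestion response containing table_1 (DELETED) alongside table_2 and table_3 (ACTIVE) would be reduced to the two active tables before display.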
[jira] [Created] (ATLAS-4375) [Atlas: Debug Metrics] Debug metrics is empty on cluster with custom principal
Dharshana M Krishnamoorthy created ATLAS-4375:
Summary: [Atlas: Debug Metrics] Debug metrics is empty on cluster with custom principal
Key: ATLAS-4375
URL: https://issues.apache.org/jira/browse/ATLAS-4375
Project: Atlas
Issue Type: Bug
Components: atlas-core
Reporter: Dharshana M Krishnamoorthy

Debug metrics are fetched 30 seconds after an operation is performed. Occasionally, the debug metrics are not updated even after 5 minutes. A restart sometimes resolves the issue.
[jira] [Created] (ATLAS-4374) [Send lineage only information] When a ctas table is created, there is an additional notification sent by hive
Dharshana M Krishnamoorthy created ATLAS-4374:
Summary: [Send lineage only information] When a ctas table is created, there is an additional notification sent by hive
Key: ATLAS-4374
URL: https://issues.apache.org/jira/browse/ATLAS-4374
Project: Atlas
Issue Type: Bug
Components: atlas-core
Reporter: Dharshana M Krishnamoorthy
Attachments: Screenshot 2021-07-27 at 3.06.59 PM.png, Screenshot 2021-07-27 at 3.22.47 PM.png

*Scenario*: create a table and create a CTAS table from it
*Expectation*: There should be only one notification from hive, with type *ENTITY_CREATE_V2*
*Observation*: There are two notifications, one with *ENTITY_CREATE_V2* and the other with *ENTITY_PARTIAL_UPDATE_V2*
{code:java}
{"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.202.195","msgCreatedBy":"hive","msgCreationTime":1627372412791,"spooled":false,"message":{"type":"ENTITY_PARTIAL_UPDATE_V2","user":"hive","entityId":{"typeName":"hive_table","uniqueAttributes":{"qualifiedName":"db_mtkns.ctas_table_vwsci@cm"}},"entity":{"entity":{"typeName":"hive_table","attributes":{"parameters":{"numRows":"0","rawDataSize":"0","transient_lastDdlTime":"1627372412","bucketing_version":"2","numFilesErasureCoded":"0","totalSize":"0","transactional_properties":"default","COLUMN_STATS_ACCURATE":"{\"BASIC_STATS\":\"true\"}","numFiles":"0","transactional":"true"}},"isIncomplete":false,"provenanceType":0,"version":0,"proxy":false
{"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"172.27.202.195","msgCreatedBy":"hive","msgCreationTime":1627372412906,"spooled":false,"message":{"type":"ENTITY_CREATE_V2","user":"hrt_qa","entities":{"referredEntities":{},"entities":[{"typeName":"hive_table_ddl","attributes":{"serviceType":"hive","qualifiedName":"db_mtkns.ctas_table_vwsci@cm:1627372386732","execTime":1627372386732,"queryText":"create table db_mtkns.ctas_table_vwsci as 
select * from db_mtkns.table_gmqnw","name":"create table db_mtkns.ctas_table_vwsci as select * from db_mtkns.table_gmqnw","userName":"hrt_qa"},"guid":"-7790679096914344","isIncomplete":false,"provenanceType":0,"version":0,"relationshipAttributes":{"table":{"typeName":"hive_table","uniqueAttributes":{"qualifiedName":"db_mtkns.ctas_table_vwsci@cm"},"relationshipType":"hive_table_ddl_queries"}},"proxy":false},{"typeName":"hive_process","attributes":{"recentQueries":["create table db_mtkns.ctas_table_vwsci as select * from db_mtkns.table_gmqnw"],"qualifiedName":"db_mtkns.ctas_table_vwsci@cm:1627372412000","clusterName":"cm","name":"db_mtkns.ctas_table_vwsci@cm:1627372412000","queryText":"","operationType":"CREATETABLE_AS_SELECT","startTime":1627372412905,"queryPlan":"Not Supported","endTime":1627372412905,"userName":"","queryId":""},"guid":"-7790679096914345","isIncomplete":false,"provenanceType":0,"version":0,"relationshipAttributes":{"outputs":[{"typeName":"hive_table","uniqueAttributes":{"qualifiedName":"db_mtkns.ctas_table_vwsci@cm"},"relationshipType":"process_dataset_outputs"}],"inputs":[{"typeName":"hive_table","uniqueAttributes":{"qualifiedName":"db_mtkns.table_gmqnw@cm"},"relationshipType":"dataset_process_inputs"}]},"proxy":false},{"typeName":"
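The duplicate can be spotted mechanically by tallying the `message.type` field of each notification payload. A small sketch over already-parsed JSON strings; the sample messages in the test mirror the two quoted above, trimmed to the relevant field:

```python
import json
from collections import Counter

def count_message_types(raw_messages):
    """Tally the message.type field across raw JSON notification strings.

    raw_messages: iterable of JSON strings shaped like Atlas hook
    notifications, i.e. containing a top-level "message" object with a
    "type" key (ENTITY_CREATE_V2, ENTITY_PARTIAL_UPDATE_V2, ...).
    """
    return Counter(json.loads(m)["message"]["type"] for m in raw_messages)
```

For the CTAS scenario described above, the expected tally is one ENTITY_CREATE_V2 and zero ENTITY_PARTIAL_UPDATE_V2; the observed tally has one of each.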
[jira] [Updated] (ATLAS-4374) [Send lineage only information] When a ctas table is created, there is an additional notification sent by hive
[ https://issues.apache.org/jira/browse/ATLAS-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4374. Attachment: Screenshot 2021-07-27 at 3.06.59 PM.png
> [Send lineage only information] When a ctas table is created, there is an additional notification sent by hive
> Key: ATLAS-4374
> URL: https://issues.apache.org/jira/browse/ATLAS-4374
> Project: Atlas
> Issue Type: Bug
> Components: atlas-core
> Reporter: Dharshana M Krishnamoorthy
> Priority: Major
> Attachments: Screenshot 2021-07-27 at 3.06.59 PM.png, Screenshot 2021-07-27 at 3.22.47 PM.png
[jira] [Created] (ATLAS-4325) [Atlas: Glossary Term Bulk Import] [UI] Unable to perform bulk import glossary term via UI
Dharshana M Krishnamoorthy created ATLAS-4325: - Summary: [Atlas: Glossary Term Bulk Import] [UI] Unable to perform bulk import glossary term via UI Key: ATLAS-4325 URL: https://issues.apache.org/jira/browse/ATLAS-4325 Project: Atlas Issue Type: Bug Components: atlas-webui Reporter: Dharshana M Krishnamoorthy Attachments: Screenshot 2021-06-03 at 5.41.57 PM.png Bulk import is failing with "{"msgDesc":"Missing header or invalid Header value for CSRF Vulnerability Protection"}" !Screenshot 2021-06-03 at 5.41.57 PM.png|width=443,height=273! NOTE: the same operation succeeds via the API -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4312) [Atlas: Debug Metrics] When an entity is created via hive EntityREST_createOrUpdate metrics does not capture it
Dharshana M Krishnamoorthy created ATLAS-4312: - Summary: [Atlas: Debug Metrics] When an entity is created via hive EntityREST_createOrUpdate metrics does not capture it Key: ATLAS-4312 URL: https://issues.apache.org/jira/browse/ATLAS-4312 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy When debug metrics is enabled, all debug data is captured. When an entity is created by hitting the /entity endpoint, the *EntityREST_createOrUpdate* entry is created/updated. But if a hive table is created, this entry is not created/updated. *Steps to repro:* # Restart Atlas # Create a hive table after the restart # Fetch debug metrics Expectation: EntityREST_createOrUpdate should be created in debug_audit_response Observation: The response is empty -- This message was sent by Atlassian Jira (v8.3.4#803005)
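A sketch of the check behind these repro steps, assuming a metrics response shaped like the JSON quoted later in this thread (the sample dict below is hypothetical, and this is plain Python, not Atlas code):

```python
# Hypothetical debug-metrics response after creating a table via hive:
# other REST metrics were captured, but EntityREST_createOrUpdate is absent,
# which is the gap described in the report.
debug_metrics = {
    "TypesREST_getAllTypeDefs": {"numops": 44, "avgTime": 0.25},
}

def has_metric(metrics, name):
    """True if the named REST metric was captured in the response."""
    return name in metrics

print(has_metric(debug_metrics, "EntityREST_createOrUpdate"))  # prints False
```

A check like this, run against the fetched debug metrics after step 3, makes the expectation/observation gap explicit.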
[jira] [Updated] (ATLAS-4298) [Atlas: Debug Metrics][UI] When REST API Metric page has more than 25 entries, the page does not load unless manually refreshed
[ https://issues.apache.org/jira/browse/ATLAS-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4298: -- Component/s: (was: atlas-core) atlas-webui > [Atlas: Debug Metrics][UI] When REST API Metric page has more than 25 > entries, the page does not load unless manually refreshed > --- > > Key: ATLAS-4298 > URL: https://issues.apache.org/jira/browse/ATLAS-4298 > Project: Atlas > Issue Type: Bug > Components: atlas-webui > Reporter: Dharshana M Krishnamoorthy >Priority: Major > > When debug metrics is enabled by setting *atlas.debug.metrics.enabled=true* > Debug metrics url page: :31443/index.html#!/debugMetrics displays the > REST API Metrics. > When the result page entry count crosses 25, the page does not load, unless > refresh button is hit > There are no error logs found in the cluster but console displays the > following > {code:java} > backbone.paginator.min.js?bust=1621440039818:1 > Uncaught TypeError: Cannot read property 'add' of undefined > at e (backbone.paginator.min.js?bust=1621440039818:1) > at Object. (backbone.paginator.min.js?bust=1621440039818:1) > at s (backbone-min.js?bust=1621440039818:1) > at r (backbone-min.js?bust=1621440039818:1) > at m (backbone-min.js?bust=1621440039818:1) > at N.d.k.trigger (backbone-min.js?bust=1621440039818:1) > at N.d._onModelEvent (backbone-min.js?bust=1621440039818:1) > at s (backbone-min.js?bust=1621440039818:1) > at r (backbone-min.js?bust=1621440039818:1) > at m (backbone-min.js?bust=1621440039818:1){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4299) [Atlas: Debug Metrics] Several UI Issues are seen when the total count crossed 25
[ https://issues.apache.org/jira/browse/ATLAS-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4299: -- Description: A list of UI issues are seen when the total issues crosses 25 # The total number of data as per response is 29, but UI says 30 # The data displayed are only 29 but one of them is a blank entry (Highlighted in screenshot) # Among the 29 entries present in api response, one of them is missing in the UI in this case [*TypesREST_getTypeDefHeaders*] # When all the data are made to display in same page, the UI shows page 1 and 2 # When the selection is made to display only 25, it still displays all the data [25+ entries are displayed] # Sorting does not work without refresh Response Data: {code:java} { "TypesREST_getTypeDefHeaders": { "name": "TypesREST_getTypeDefHeaders", "numops": 13, "minTime": 2, "maxTime": 37, "stdDevTime": 0, "avgTime": 2 }, "GlossaryREST_createGlossaryTerm": { "name": "GlossaryREST_createGlossaryTerm", "numops": 11, "minTime": 218, "maxTime": 308, "stdDevTime": 29.160475, "avgTime": 235.3 }, "GlossaryREST_deleteGlossaryCategory": { "name": "GlossaryREST_deleteGlossaryCategory", "numops": 2, "minTime": 197, "maxTime": 219, "stdDevTime": 0, "avgTime": 197 }, "GlossaryREST_deleteGlossary": { "name": "GlossaryREST_deleteGlossary", "numops": 3, "minTime": 165, "maxTime": 521, "stdDevTime": 251.73001, "avgTime": 343 }, "EntityREST_addClassifications": { "name": "EntityREST_addClassifications", "numops": 339, "minTime": 64, "maxTime": 430, "stdDevTime": 2.5495098, "avgTime": 101.5 }, "DiscoveryREST_getSavedSearches": { "name": "DiscoveryREST_getSavedSearches", "numops": 13, "minTime": 3, "maxTime": 38, "stdDevTime": 0, "avgTime": 3 }, "TypesREST_getClassificationDefByName": { "name": "TypesREST_getClassificationDefByName", "numops": 11, "minTime": 2, "maxTime": 2, "stdDevTime": 0, "avgTime": 1 }, "GlossaryREST_createGlossary": { "name": "GlossaryREST_createGlossary", 
"numops": 4, "minTime": 167, "maxTime": 197, "stdDevTime": 0, "avgTime": 179 }, "EntityREST_createOrUpdate": { "name": "EntityREST_createOrUpdate", "numops": 2, "minTime": 76, "maxTime": 133, "stdDevTime": 40.305088, "avgTime": 104.5 }, "GlossaryREST_getGlossaryCategory": { "name": "GlossaryREST_getGlossaryCategory", "numops": 14, "minTime": 5, "maxTime": 47, "stdDevTime": 18.384777, "avgTime": 25 }, "GlossaryREST_updateGlossaryTerm": { "name": "GlossaryREST_updateGlossaryTerm", "numops": 5, "minTime": 62, "maxTime": 559, "stdDevTime": 340.11835, "avgTime": 62 }, "DiscoveryREST_searchUsingBasic": { "name": "DiscoveryREST_searchUsingBasic", "numops": 2, "minTime": 25, "maxTime": 451, "stdDevTime": 0, "avgTime": 25 }, "EntityREST_getById": { "name": "EntityREST_getById", "numops": 5, "minTime": 37, "maxTime": 84, "stdDevTime": 20.067387, "avgTime": 52.8 }, "GlossaryREST_getGlossaries": { "name": "GlossaryREST_getGlossaries", "numops": 34, "minTime": 10, "maxTime": 291, "stdDevTime": 48.294235, "avgTime": 180 }, "TypesREST_getAllTypeDefs": { "name": "TypesREST_getAllTypeDefs", "numops": 44, "minTime": 0, "maxTime": 32, "stdDevTime": 0.5, "avgTime": 0.25 }, "DiscoveryREST_searchUsingDSL": { "name": "DiscoveryREST_searchUsingDSL", "numops": 4, &
[jira] [Updated] (ATLAS-4299) [Atlas: Debug Metrics] Several UI Issues are seen when the total count crossed 25
[ https://issues.apache.org/jira/browse/ATLAS-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4299: -- Description: A list of UI issues are seen when the total issues crosses 25 # The total number of data as per response is 29, but UI says 30 # The data displayed are only 29 but one of them is a blank entry (Highlighted in screenshot) # Among the 29 entries present in api response, one of them is missing in the UI in this case [*TypesREST_getTypeDefHeaders*] # When all the data are made to display in same page, the UI shows page 1 and 2 # When the selection is made to display only 25, it still displays all the data [25+ entries are displayed] # Sorting does not work # Response Data: {code:java} { "TypesREST_getTypeDefHeaders": { "name": "TypesREST_getTypeDefHeaders", "numops": 13, "minTime": 2, "maxTime": 37, "stdDevTime": 0, "avgTime": 2 }, "GlossaryREST_createGlossaryTerm": { "name": "GlossaryREST_createGlossaryTerm", "numops": 11, "minTime": 218, "maxTime": 308, "stdDevTime": 29.160475, "avgTime": 235.3 }, "GlossaryREST_deleteGlossaryCategory": { "name": "GlossaryREST_deleteGlossaryCategory", "numops": 2, "minTime": 197, "maxTime": 219, "stdDevTime": 0, "avgTime": 197 }, "GlossaryREST_deleteGlossary": { "name": "GlossaryREST_deleteGlossary", "numops": 3, "minTime": 165, "maxTime": 521, "stdDevTime": 251.73001, "avgTime": 343 }, "EntityREST_addClassifications": { "name": "EntityREST_addClassifications", "numops": 339, "minTime": 64, "maxTime": 430, "stdDevTime": 2.5495098, "avgTime": 101.5 }, "DiscoveryREST_getSavedSearches": { "name": "DiscoveryREST_getSavedSearches", "numops": 13, "minTime": 3, "maxTime": 38, "stdDevTime": 0, "avgTime": 3 }, "TypesREST_getClassificationDefByName": { "name": "TypesREST_getClassificationDefByName", "numops": 11, "minTime": 2, "maxTime": 2, "stdDevTime": 0, "avgTime": 1 }, "GlossaryREST_createGlossary": { "name": "GlossaryREST_createGlossary", "numops": 4, 
"minTime": 167, "maxTime": 197, "stdDevTime": 0, "avgTime": 179 }, "EntityREST_createOrUpdate": { "name": "EntityREST_createOrUpdate", "numops": 2, "minTime": 76, "maxTime": 133, "stdDevTime": 40.305088, "avgTime": 104.5 }, "GlossaryREST_getGlossaryCategory": { "name": "GlossaryREST_getGlossaryCategory", "numops": 14, "minTime": 5, "maxTime": 47, "stdDevTime": 18.384777, "avgTime": 25 }, "GlossaryREST_updateGlossaryTerm": { "name": "GlossaryREST_updateGlossaryTerm", "numops": 5, "minTime": 62, "maxTime": 559, "stdDevTime": 340.11835, "avgTime": 62 }, "DiscoveryREST_searchUsingBasic": { "name": "DiscoveryREST_searchUsingBasic", "numops": 2, "minTime": 25, "maxTime": 451, "stdDevTime": 0, "avgTime": 25 }, "EntityREST_getById": { "name": "EntityREST_getById", "numops": 5, "minTime": 37, "maxTime": 84, "stdDevTime": 20.067387, "avgTime": 52.8 }, "GlossaryREST_getGlossaries": { "name": "GlossaryREST_getGlossaries", "numops": 34, "minTime": 10, "maxTime": 291, "stdDevTime": 48.294235, "avgTime": 180 }, "TypesREST_getAllTypeDefs": { "name": "TypesREST_getAllTypeDefs", "numops": 44, "minTime": 0, "maxTime": 32, "stdDevTime": 0.5, "avgTime": 0.25 }, "DiscoveryREST_searchUsingDSL": { "name": "DiscoveryREST_searchUsingDSL", "numops": 4, &
[jira] [Updated] (ATLAS-4299) [Atlas: Debug Metrics] Several UI Issues are seen when the total count crossed 25
[ https://issues.apache.org/jira/browse/ATLAS-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4299: -- Attachment: Screenshot 2021-05-24 at 4.40.15 PM.png > [Atlas: Debug Metrics] Several UI Issues are seen when the total count > crossed 25 > - > > Key: ATLAS-4299 > URL: https://issues.apache.org/jira/browse/ATLAS-4299 > Project: Atlas > Issue Type: Bug > Components: atlas-webui > Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: Screenshot 2021-05-24 at 4.13.08 PM.png, Screenshot > 2021-05-24 at 4.40.15 PM.png > > > A list of UI issues are seen when the total issues crosses 25 > # > !Screenshot 2021-05-24 at 4.13.08 PM.png|width=478,height=270! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4299) [Atlas: Debug Metrics] Several UI Issues are seen when the total count crossed 25
Dharshana M Krishnamoorthy created ATLAS-4299: - Summary: [Atlas: Debug Metrics] Several UI Issues are seen when the total count crossed 25 Key: ATLAS-4299 URL: https://issues.apache.org/jira/browse/ATLAS-4299 Project: Atlas Issue Type: Bug Components: atlas-webui Reporter: Dharshana M Krishnamoorthy Attachments: Screenshot 2021-05-24 at 4.13.08 PM.png A list of UI issues is seen when the total count crosses 25 # !Screenshot 2021-05-24 at 4.13.08 PM.png|width=478,height=270! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4298) [Atlas: Debug Metrics][UI] When REST API Metric page has more than 25 entries, the page does not load unless manually refreshed
Dharshana M Krishnamoorthy created ATLAS-4298: - Summary: [Atlas: Debug Metrics][UI] When REST API Metric page has more than 25 entries, the page does not load unless manually refreshed Key: ATLAS-4298 URL: https://issues.apache.org/jira/browse/ATLAS-4298 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy When debug metrics is enabled by setting *atlas.debug.metrics.enabled=true*, the debug metrics URL page :31443/index.html#!/debugMetrics displays the REST API Metrics. When the result page entry count crosses 25, the page does not load unless the refresh button is hit. There are no error logs in the cluster, but the browser console displays the following {code:java} backbone.paginator.min.js?bust=1621440039818:1 Uncaught TypeError: Cannot read property 'add' of undefined at e (backbone.paginator.min.js?bust=1621440039818:1) at Object. (backbone.paginator.min.js?bust=1621440039818:1) at s (backbone-min.js?bust=1621440039818:1) at r (backbone-min.js?bust=1621440039818:1) at m (backbone-min.js?bust=1621440039818:1) at N.d.k.trigger (backbone-min.js?bust=1621440039818:1) at N.d._onModelEvent (backbone-min.js?bust=1621440039818:1) at s (backbone-min.js?bust=1621440039818:1) at r (backbone-min.js?bust=1621440039818:1) at m (backbone-min.js?bust=1621440039818:1){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4297) [Atlas: Debug Metrics] Old data is lost when the feature is re-enabled
Dharshana M Krishnamoorthy created ATLAS-4297: - Summary: [Atlas: Debug Metrics] Old data is lost when the feature is re-enabled Key: ATLAS-4297 URL: https://issues.apache.org/jira/browse/ATLAS-4297 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Enable the debug metrics feature by setting *atlas.debug.metrics.enabled=true*. Data will be collected. Now turn the feature off by setting *atlas.debug.metrics.enabled=false* and re-enable it with *atlas.debug.metrics.enabled=true*. Data collection will start fresh and all the old data collected will be lost -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4297) [Atlas: Debug Metrics] Old data is lost when the feature is re-enabled
[ https://issues.apache.org/jira/browse/ATLAS-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4297: -- Description: Enable the feature debug metrics by setting *atlas.debug.metrics.enabled=true* Data will be collected Now turn the feature off by setting *atlas.debug.metrics.enabled=false* Re-enable the feature *atlas.debug.metrics.enabled=true* This will be the data collected fresh and all the old data collected will be lost was: Enable the feature debug metrics by setting *atlas.debug.metrics.enabled=true* Data will be collected Now turn the feature off by setting *atlas.debug.metrics.enabled=false* Re-enable the feature *atlas.debug.metrics.enabled=true* This will be the data collect fresh and all the old data collected will be lost > [Atlas: Debug Metrics] Old data is lost when the feature is re-enabled > -- > > Key: ATLAS-4297 > URL: https://issues.apache.org/jira/browse/ATLAS-4297 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > > Enable the feature debug metrics by setting *atlas.debug.metrics.enabled=true* > Data will be collected > Now turn the feature off by setting *atlas.debug.metrics.enabled=false* > Re-enable the feature *atlas.debug.metrics.enabled=true* > This will be the data collected fresh and all the old data collected will be > lost > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4296) [Atlas: Debug Metrics] Min Time , Max Time and Average Time in UI are not matching the api response values
[ https://issues.apache.org/jira/browse/ATLAS-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4296: -- Description: Scenario: Enable debug Metrics !Screenshot 2021-05-21 at 4.18.34 PM.png|width=646,height=322! Eg: Consider the above highlighted example. *GlossaryREST_deleteGlossary* Here Min Time : 1.920 (seconds) Max Time : 5.530 (seconds) and Average Time : 5.530 (seconds) *Api response:* {code:java} "GlossaryREST_deleteGlossary": { "name": "GlossaryREST_deleteGlossary", "numops": 6, "minTime": 192, "maxTime": 553, "stdDevTime": 21.556128, "avgTime": 532 } {code} 1.92 (seconds) has to be 1920 if the value is response is stored in milliseconds but it appears as 192 was: Scenario: Enable debug Metrics !Screenshot 2021-05-21 at 4.18.34 PM.png|width=596,height=297! Eg: Consider the above highlighted example. *GlossaryREST_deleteGlossary* Here Min Time : 1.920 (seconds) Max Time : 5.530 (seconds) and Average Time : 5.530 (seconds) *Api response:* {code:java} "GlossaryREST_deleteGlossary": { "name": "GlossaryREST_deleteGlossary", "numops": 6, "minTime": 192, "maxTime": 553, "stdDevTime": 21.556128, "avgTime": 532 } {code} 1.92 (seconds) has to be 1920 if the value is response is stored in milliseconds but it appears as 192 > [Atlas: Debug Metrics] Min Time , Max Time and Average Time in UI are not > matching the api response values > -- > > Key: ATLAS-4296 > URL: https://issues.apache.org/jira/browse/ATLAS-4296 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: Screenshot 2021-05-21 at 4.18.34 PM.png, Screenshot > 2021-05-21 at 4.18.34 PM.png > > > Scenario: Enable debug Metrics > !Screenshot 2021-05-21 at 4.18.34 PM.png|width=646,height=322! > Eg: Consider the above highlighted example. 
*GlossaryREST_deleteGlossary* > Here > Min Time : 1.920 (seconds) > Max Time : 5.530 (seconds) and > Average Time : 5.530 (seconds) > *Api response:* > {code:java} > "GlossaryREST_deleteGlossary": { > "name": "GlossaryREST_deleteGlossary", > "numops": 6, > "minTime": 192, > "maxTime": 553, > "stdDevTime": 21.556128, > "avgTime": 532 > } {code} > > 1.920 (seconds) would have to be 1920 if the value in the response is stored in milliseconds, but it appears as 192 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4296) [Atlas: Debug Metrics] Min Time , Max Time and Average Time in UI are not matching the api response values
[ https://issues.apache.org/jira/browse/ATLAS-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4296: -- Attachment: Screenshot 2021-05-21 at 4.18.34 PM.png > [Atlas: Debug Metrics] Min Time , Max Time and Average Time in UI are not > matching the api response values > -- > > Key: ATLAS-4296 > URL: https://issues.apache.org/jira/browse/ATLAS-4296 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: Screenshot 2021-05-21 at 4.18.34 PM.png, Screenshot > 2021-05-21 at 4.18.34 PM.png > > > Scenario: Enable debug Metrics > !Screenshot 2021-05-21 at 4.18.34 PM.png|width=596,height=297! > Eg: Consider the above highlighted example. *GlossaryREST_deleteGlossary* > Here > Min Time : 1.920 (seconds) > Max Time : 5.530 (seconds) and > Average Time : 5.530 (seconds) > *Api response:* > {code:java} > "GlossaryREST_deleteGlossary": { > "name": "GlossaryREST_deleteGlossary", > "numops": 6, > "minTime": 192, > "maxTime": 553, > "stdDevTime": 21.556128, > "avgTime": 532 > } {code} > > 1.92 (seconds) has to be 1920 if the value is response is stored in > milliseconds but it appears as 192 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4296) [Atlas: Debug Metrics] Min Time , Max Time and Average Time in UI are not matching the api response values
Dharshana M Krishnamoorthy created ATLAS-4296: - Summary: [Atlas: Debug Metrics] Min Time , Max Time and Average Time in UI are not matching the api response values Key: ATLAS-4296 URL: https://issues.apache.org/jira/browse/ATLAS-4296 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Attachments: Screenshot 2021-05-21 at 4.18.34 PM.png Scenario: Enable debug Metrics !Screenshot 2021-05-21 at 4.18.34 PM.png|width=596,height=297! Eg: Consider the above highlighted example. *GlossaryREST_deleteGlossary* Here Min Time : 1.920 (seconds) Max Time : 5.530 (seconds) and Average Time : 5.530 (seconds) *Api response:* {code:java} "GlossaryREST_deleteGlossary": { "name": "GlossaryREST_deleteGlossary", "numops": 6, "minTime": 192, "maxTime": 553, "stdDevTime": 21.556128, "avgTime": 532 } {code} 1.920 (seconds) would have to be 1920 if the value in the response is stored in milliseconds, but it appears as 192 -- This message was sent by Atlassian Jira (v8.3.4#803005)
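The mismatch described here is a unit question: if the API stores milliseconds, the minTime of 192 should render as 0.192 s, not the 1.920 s the UI shows. A sketch of the expected conversion (plain Python, not the Atlas UI code):

```python
def ms_to_seconds(ms):
    """Convert a metric time stored in milliseconds to display seconds."""
    return ms / 1000.0

api_min_time_ms = 192  # minTime from the GlossaryREST_deleteGlossary response
ui_min_time_s = 1.920  # what the UI displayed for Min Time

# 192 ms converts to 0.192 s, which disagrees with the UI's 1.920 s;
# the two agree only if the stored value were actually 1920 ms.
print(ms_to_seconds(api_min_time_ms))
```

Either the UI conversion or the stored unit is off by a factor of ten; the check above pins down which display value the API actually implies.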
[jira] [Created] (ATLAS-4292) [Atlas: Debug Metrics] com.sun.jersey.api.MessageException thrown while fetching debug metrics via browser
Dharshana M Krishnamoorthy created ATLAS-4292: - Summary: [Atlas: Debug Metrics] com.sun.jersey.api.MessageException thrown while fetching debug metrics via browser Key: ATLAS-4292 URL: https://issues.apache.org/jira/browse/ATLAS-4292 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Attachments: Screenshot 2021-05-20 at 7.44.34 PM.png, Screenshot 2021-05-20 at 8.03.07 PM.png While fetching the data via browser, the following exception is thrown {code:java} 2021-05-20 13:47:32,150 INFO - [etp522553046-44:HTTP:GET/api/atlas/admin/metrics] ~ Request from authenticated user: HTTP, URL=/api/atlas/admin/metrics (AtlasAuthenticationFilter$KerberosFilterChainWrapper:739) 2021-05-20 13:47:50,657 ERROR - [etp522553046-269 - 41f0e041-f379-4389-b998-62bb43cafd88:] ~ Error handling a request: 58173f6d7e3447db (ExceptionMapperUtil:32)at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:284) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1510) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1452) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626) at org.apache.atlas.web.filters.AuditFilter.doFilter(AuditFilter.java:106) at 
org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.apache.atlas.web.filters.AtlasCSRFPreventionFilter$ServletFilterHttpInteraction.proceed(AtlasCSRFPreventionFilter.java:235) at org.apache.atlas.web.filters.AtlasCSRFPreventionFilter.handleHttpInteraction(AtlasCSRFPreventionFilter.java:177) at org.apache.atlas.web.filters.AtlasCSRFPreventionFilter.doFilter(AtlasCSRFPreventionFilter.java:190) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.apache.atlas.web.filters.AtlasAuthenticationFilter.doFilter(AtlasAuthenticationFilter.java:358) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at 
org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.apache.atlas.web.filters.StaleTransactionCleanupFilter.doFilter(StaleTransactionCleanupFilter.java:55
[jira] [Created] (ATLAS-4291) [Atlas: Debug Metrics] The Average time calculated for some endpoints/methods is incorrect
Dharshana M Krishnamoorthy created ATLAS-4291: - Summary: [Atlas: Debug Metrics] The Average time calculated for some endpoints/methods is incorrect Key: ATLAS-4291 URL: https://issues.apache.org/jira/browse/ATLAS-4291 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Attachments: Screenshot 2021-05-20 at 6.58.22 PM.png The average time calculated for some of the requests is incorrect !Screenshot 2021-05-20 at 6.58.22 PM.png|width=461,height=223! *Repro steps:* Enable debug metrics by setting *atlas.debug.metrics.enabled=true* Perform some operations: create/delete a glossary/entity, or any other operation Open the REST API metrics page -- This message was sent by Atlassian Jira (v8.3.4#803005)
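One quick consistency check for the reported averages: with exactly two samples, the average must be the midpoint of min and max. The sample values below are taken from the metrics JSON quoted earlier in this thread; the check itself is an illustration, not Atlas code.

```python
def two_op_avg_ok(metric, tolerance=1.0):
    """With exactly two samples, avgTime must equal (minTime + maxTime) / 2."""
    assert metric["numops"] == 2
    expected = (metric["minTime"] + metric["maxTime"]) / 2.0
    return abs(metric["avgTime"] - expected) <= tolerance

# From the DiscoveryREST_searchUsingBasic entry quoted earlier:
# two ops, min 25, max 451, yet the reported average is 25 instead of ~238.
suspect = {"name": "DiscoveryREST_searchUsingBasic",
           "numops": 2, "minTime": 25, "maxTime": 451, "avgTime": 25}

print(two_op_avg_ok(suspect))  # prints False
```

Checks like this (and min <= avg <= max for larger numops) make it easy to flag which endpoints have a miscalculated average.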
[jira] [Assigned] (ATLAS-4289) [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input when it refers to the term created as a part of the same import
[ https://issues.apache.org/jira/browse/ATLAS-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy reassigned ATLAS-4289: - Assignee: Dharshana M Krishnamoorthy > [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for > xls/xlsx input when it refers to the term created as a part of the same import > > > Key: ATLAS-4289 > URL: https://issues.apache.org/jira/browse/ATLAS-4289 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy > Assignee: Dharshana M Krishnamoorthy >Priority: Major > Attachments: xls_input.xls, xlsx_input.xlsx > > > [^xls_input.xls] [^xlsx_input.xlsx] > While performing import with the provided input, there should be no failure > and all data should have proper relationship. > This was working fine before. > Currently, seeing a failure while importing xlsx: > {code:java} > { > "failedImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "FAILED", > "remarks": "The provided Reference 0%@termAttribute does not exist at > Atlas referred at record with TermName : term_2 and GlossaryName : > xlsx_input_glossary_2" > } > ], > "successImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_1", > "childObjectName": "term_1", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"19b8fec2-9061-4234-9504-e716da735275\",\"qualifiedName\":\"term_1@xlsx_input_glossary_1\"}" > }, > { > "parentObjectName": "xlsx_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"7bf0dfb2-15d8-49ab-9e4f-d07f4ecb8dc6\",\"qualifiedName\":\"term_2@xlsx_input_glossary_2\"}" > } > ] > } {code} > Importing xls: > {code:java} > { > "failedImportInfoList": [ > { > "parentObjectName": "xls_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "FAILED", > "remarks": "The provided Reference 0%@termAttribute does not exist at > Atlas 
referred at record with TermName : term_2 and GlossaryName : > xls_input_glossary_2" > } > ], > "successImportInfoList": [ > { > "parentObjectName": "xls_input_glossary_1", > "childObjectName": "term_1", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"c6487c38-f45e-4897-9a86-ea84d3b706a6\",\"qualifiedName\":\"term_1@xls_input_glossary_1\"}" > }, > { > "parentObjectName": "xls_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"44c6838b-cfe6-43ac-998f-c609bbec53ee\",\"qualifiedName\":\"term_2@xls_input_glossary_2\"}" > } > ] > } {code} > Also, please note the 0%0%@termAttribute in the remarks of the failed message -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (ATLAS-4289) [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input when it refers to the term created as a part of the same import
[ https://issues.apache.org/jira/browse/ATLAS-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy resolved ATLAS-4289. --- Resolution: Invalid Used an incorrect xls input which created the chaos. Closing this. > [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for > xls/xlsx input when it refers to the term created as a part of the same import > > > Key: ATLAS-4289 > URL: https://issues.apache.org/jira/browse/ATLAS-4289 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: xls_input.xls, xlsx_input.xlsx > > > [^xls_input.xls] [^xlsx_input.xlsx] > While performing import with the provided input, there should be no failure > and all data should have proper relationship. > This was working fine before. > Currently, seeing a failure while importing xlsx: > {code:java} > { > "failedImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "FAILED", > "remarks": "The provided Reference 0%@termAttribute does not exist at > Atlas referred at record with TermName : term_2 and GlossaryName : > xlsx_input_glossary_2" > } > ], > "successImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_1", > "childObjectName": "term_1", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"19b8fec2-9061-4234-9504-e716da735275\",\"qualifiedName\":\"term_1@xlsx_input_glossary_1\"}" > }, > { > "parentObjectName": "xlsx_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"7bf0dfb2-15d8-49ab-9e4f-d07f4ecb8dc6\",\"qualifiedName\":\"term_2@xlsx_input_glossary_2\"}" > } > ] > } {code} > Importing xls: > {code:java} > { > "failedImportInfoList": [ > { > "parentObjectName": "xls_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "FAILED", > "remarks": "The provided Reference 0%@termAttribute does not exist 
at > Atlas referred at record with TermName : term_2 and GlossaryName : > xls_input_glossary_2" > } > ], > "successImportInfoList": [ > { > "parentObjectName": "xls_input_glossary_1", > "childObjectName": "term_1", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"c6487c38-f45e-4897-9a86-ea84d3b706a6\",\"qualifiedName\":\"term_1@xls_input_glossary_1\"}" > }, > { > "parentObjectName": "xls_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"44c6838b-cfe6-43ac-998f-c609bbec53ee\",\"qualifiedName\":\"term_2@xls_input_glossary_2\"}" > } > ] > } {code} > Also, please note the 0%0%@termAttribute in the remarks of the failed message -- This message was sent by Atlassian Jira (v8.3.4#803005)
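The `0%@termAttribute` reference in the failed remark, together with the eventual "incorrect xls input" resolution, is consistent with a CSV/spreadsheet quoting pitfall: per RFC 4180, a doubled quote inside a quoted field is an escaped quote, not a cell boundary, so a row fragment that drops the comma between two quoted cells (e.g. `"glossary:100%""glossaryBulkImport_1:termBulkImport_1"`, as seen in the sample rows) collapses into one merged field. The sketch below is a minimal, hypothetical field splitter to illustrate that behavior; it is not the parser Atlas or the spreadsheet tooling actually uses.

```java
import java.util.ArrayList;
import java.util.List;

public class CsvQuoting {
    // Minimal RFC 4180-style field splitter: inside a quoted field, a
    // doubled quote ("") is an escaped quote character, not a boundary.
    static List<String> splitFields(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (inQuotes) {
                if (c == '"') {
                    if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        cur.append('"'); // escaped quote, stay inside the field
                        i++;
                    } else {
                        inQuotes = false; // closing quote
                    }
                } else {
                    cur.append(c);
                }
            } else if (c == '"') {
                inQuotes = true;
            } else if (c == ',') {
                fields.add(cur.toString());
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        fields.add(cur.toString());
        return fields;
    }
}
```

With the comma restored between the two quoted cells, the same splitter yields two separate fields, which is presumably the shape the import expects.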
[jira] [Updated] (ATLAS-4288) [Atlas: Glossary Term Bulk Import] With all the data populated, while performing bulk import, PreferredToTerms relationship alone is not created
[ https://issues.apache.org/jira/browse/ATLAS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4288: -- Component/s: atlas-core > [Atlas: Glossary Term Bulk Import] Will all the data populated, while > performing bulk import, PreferredToTerms relationship alone is not created > > > Key: ATLAS-4288 > URL: https://issues.apache.org/jira/browse/ATLAS-4288 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > Attachments: ATLAS-4288.patch, image-2021-05-17-16-43-31-487.png > > > Consider the following input, here all the relations are established except > the preferredToTerms (term_2) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > dharshmk_11,term_1,"short desc","long description", "Example", "G1", "Usage", > "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" > dharshmk_11,term_2,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", > dharshmk_11,term_3,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, > dharshmk_11,term_4,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, > dharshmk_11,term_5,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" > dharshmk_11,term_6,"short desc","long 
description", "Example", "G1", > "Usage", > "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", > dharshmk_11,term_7,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, > dharshmk_11,term_8,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, > dharshmk_11,term_9,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" > dharshmk_11,term_10,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", > dharshmk_11,term_11,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, > dharshmk_11,term_12,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, > dharshmk_11,term_13,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" > {co
[jira] [Updated] (ATLAS-4287) [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, bulk import is broken
[ https://issues.apache.org/jira/browse/ATLAS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4287: -- Component/s: atlas-core > [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, > bulk import is broken > --- > > Key: ATLAS-4287 > URL: https://issues.apache.org/jira/browse/ATLAS-4287 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > > This was working on the previous build before that latest fix, but broken now. > Scenario 1: Self-reference and Failure (2 failures expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2",{code} > Scenario 2: Self-reference and Success (1 failure expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2", > glossary_1,term_2 {code} > Scenatio 3: Only self reference (1 failures expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1", {code} > In all these cases we expect 
a failure message that mentions the self > reference; this is currently broken. > What makes this important is that, in the self-reference-and-success scenario, > the relationship is not established. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4277) [Atlas: Glossary Term Bulk Import] [Regression] Unable to create term term_1 under glossary glossary_1 via bulk import
[ https://issues.apache.org/jira/browse/ATLAS-4277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4277: -- Component/s: atlas-core > [Atlas: Glossary Term Bulk Import] [Regression] Unable to create term term_1 > under glossary glossary_1 via bulk import > -- > > Key: ATLAS-4277 > URL: https://issues.apache.org/jira/browse/ATLAS-4277 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > > Unable to create term *term_1* under glossary *glossary_1* the cluster with > latest bits > Import input: > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1 > glossary_1,term_2{code} > Current result: > {code:java} > { > "failedImportInfoList":[ > { > "parentObjectName":"glossary_1", > "childObjectName":"term_1", > "importStatus":"FAILED", > "remarks":"Glossary term with qualifiedName term_1@glossary_1 already > exists" > } > ], > "successImportInfoList":[ > { > "parentObjectName":"glossary_1", > "childObjectName":"term_2", > "importStatus":"SUCCESS", > > "remarks":"{\"termGuid\":\"8fe9a26a-aa14-4ed4-9a37-ef6db69ec29b\",\"qualifiedName\":\"term_2@glossary_1\"}" > } > ] > } {code} > Even though there is no glossary with name glossary_1 and you are creating it > for the first time, this error is thrown. > This was working fine on the older bits -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4276) [Atlas: Glossary Term Bulk Import] Unhandled exception java.lang.ArrayIndexOutOfBoundsException thrown, when related terms are not provided in the right format
[ https://issues.apache.org/jira/browse/ATLAS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4276: -- Component/s: atlas-core > [Atlas: Glossary Term Bulk Import] Unhandled exception > java.lang.ArrayIndexOutOfBoundsException thrown, when related terms are not > provided in the right format > --- > > Key: ATLAS-4276 > URL: https://issues.apache.org/jira/browse/ATLAS-4276 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > Attachments: ATLAS-4274_2.patch > > > Input: > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > dharsh,term_1,"short desc","long description", "Example", "G1", "Usage", > "glossary:100%""abcd", {code} > Here term is provided as "abcd" instead of glossary@term format > The following exception is thrown, which is an Internal server error > {code:java} > 2021-05-07 06:31:39,080 ERROR - [etp1770642014-187 - > 11bce930-c6b9-4eec-a353-a553450425bb:] ~ Error handling a request: > a2e852df89225e3a (ExceptionMapperUtil:32)2021-05-07 06:31:39,080 ERROR - > [etp1770642014-187 - 11bce930-c6b9-4eec-a353-a553450425bb:] ~ Error handling > a request: a2e852df89225e3a > (ExceptionMapperUtil:32)java.lang.ArrayIndexOutOfBoundsException: 1 at > org.apache.atlas.glossary.GlossaryTermUtils.getAtlasRelatedTermHeaderSet(GlossaryTermUtils.java:718) > at > org.apache.atlas.glossary.GlossaryTermUtils.populateGlossaryTermObject(GlossaryTermUtils.java:778) > at > org.apache.atlas.glossary.GlossaryTermUtils.getGlossaryTermDataWithRelations(GlossaryTermUtils.java:610) > at > org.apache.atlas.glossary.GlossaryService.importGlossaryData(GlossaryService.java:1134) 
> at > org.apache.atlas.glossary.GlossaryService$$FastClassBySpringCGLIB$$e1f893e0.invoke() > at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) > at > org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:668) > at > org.apache.atlas.glossary.GlossaryService$$EnhancerBySpringCGLIB$$3c19046e.importGlossaryData() > at > org.apache.atlas.web.rest.GlossaryREST.importGlossaryData(GlossaryREST.java:1021) > at > org.apache.atlas.web.rest.GlossaryREST$$FastClassBySpringCGLIB$$29dc059.invoke() > at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) > at > org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:737) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) > at > org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:84) > at > org.apache.atlas.web.service.TimedAspectInterceptor.timerAdvice(TimedAspectInterceptor.java:46) > at sun.reflect.GeneratedMethodAccessor208.invoke(Unknown Source) at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:627) > at > org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:616) > at > org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:168) > at > org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:672) > at > org.apache.atlas.web.rest.GlossaryREST$$EnhancerBySpringCGLIB$$d1d409d6.importGlossaryData()
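The trace bottoms out in `GlossaryTermUtils.getAtlasRelatedTermHeaderSet`, which apparently indexes the second half of a related-term token without first checking that a separator is present, so a bare value like `abcd` surfaces as an unhandled `ArrayIndexOutOfBoundsException` (an internal server error) instead of a validation message. Below is a minimal sketch of the failure mode and a defensive alternative, assuming the colon-separated `glossaryName:termName` form used in the sample inputs; the helper names are hypothetical, not the actual Atlas code.

```java
import java.util.Optional;

public class RelatedTermParser {

    // Naive parse: "abcd".split(":") yields a one-element array, so
    // parts[1] throws ArrayIndexOutOfBoundsException -- the unhandled
    // internal error reported in this issue.
    static String termNameNaive(String reference) {
        String[] parts = reference.split(":");
        return parts[1];
    }

    // Defensive parse: validate the glossaryName:termName shape and let
    // the caller report a clear "invalid reference" failure instead of
    // an internal server error.
    static Optional<String> termName(String reference) {
        if (reference == null) {
            return Optional.empty();
        }
        String[] parts = reference.split(":");
        if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
            return Optional.empty();
        }
        return Optional.of(parts[1]);
    }
}
```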
[jira] [Updated] (ATLAS-4290) [Atlas: Glossary Term Bulk Import] There is not much info available in logs while importing terms in bulk
[ https://issues.apache.org/jira/browse/ATLAS-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4290: -- Component/s: atlas-core > [Atlas: Glossary Term Bulk Import] There is not much info available in logs > while importing terms in bulk > --- > > Key: ATLAS-4290 > URL: https://issues.apache.org/jira/browse/ATLAS-4290 > Project: Atlas > Issue Type: Improvement > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > > While creating a term, there is a statement "GraphTransaction intercept for > org.apache.atlas.glossary.GlossaryService.createTerm" in the logs, but while > performing bulk import, no informational logs are available. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4290) [Atlas: Glossary Term Bulk Import] There is not much info available in logs while importing terms in bulk
Dharshana M Krishnamoorthy created ATLAS-4290: - Summary: [Atlas: Glossary Term Bulk Import] There is not much info available in logs while importing terms in bulk Key: ATLAS-4290 URL: https://issues.apache.org/jira/browse/ATLAS-4290 Project: Atlas Issue Type: Improvement Reporter: Dharshana M Krishnamoorthy While creating a term, there is a statement "GraphTransaction intercept for org.apache.atlas.glossary.GlossaryService.createTerm" in the logs, but while performing bulk import, no informational logs are available. -- This message was sent by Atlassian Jira (v8.3.4#803005)
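One way to address the gap is an explicit entry-point log in the bulk-import path, mirroring the "GraphTransaction intercept for ...createTerm" line that single-term creation already emits. The sketch below is purely illustrative: the method name, message format, and the use of `java.util.logging` (rather than Atlas's actual logging setup) are all assumptions.

```java
import java.util.List;
import java.util.logging.Logger;

public class BulkImportLogging {
    private static final Logger LOG = Logger.getLogger(BulkImportLogging.class.getName());

    // Hypothetical helper: emit a single entry-point log line per bulk
    // import so operators can see in the logs that a run started and how
    // many records it covers. Returns the message for inspection.
    static String logImportStart(String fileName, List<String> rows) {
        String msg = String.format("importGlossaryData: file=%s, records=%d", fileName, rows.size());
        LOG.info(msg);
        return msg;
    }
}
```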
[jira] [Updated] (ATLAS-4289) [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input when it refers to the term created as a part of the same import
[ https://issues.apache.org/jira/browse/ATLAS-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4289: -- Summary: [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input when it refers to the term created as a part of the same import (was: [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input ) > [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for > xls/xlsx input when it refers to the term created as a part of the same import > > > Key: ATLAS-4289 > URL: https://issues.apache.org/jira/browse/ATLAS-4289 > Project: Atlas > Issue Type: Bug > Components: atlas-core > Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: xls_input.xls, xlsx_input.xlsx > > > [^xls_input.xls] [^xlsx_input.xlsx] > While performing import with the provided input, there should be no failure > and all data should have proper relationship. > This was working fine before. 
> Currently, seeing a failure while importing xlsx: > {code:java} > { > "failedImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "FAILED", > "remarks": "The provided Reference 0%@termAttribute does not exist at > Atlas referred at record with TermName : term_2 and GlossaryName : > xlsx_input_glossary_2" > } > ], > "successImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_1", > "childObjectName": "term_1", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"19b8fec2-9061-4234-9504-e716da735275\",\"qualifiedName\":\"term_1@xlsx_input_glossary_1\"}" > }, > { > "parentObjectName": "xlsx_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"7bf0dfb2-15d8-49ab-9e4f-d07f4ecb8dc6\",\"qualifiedName\":\"term_2@xlsx_input_glossary_2\"}" > } > ] > } {code} > Importing xls: > {code:java} > { > "failedImportInfoList": [ > { > "parentObjectName": "xls_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "FAILED", > "remarks": "The provided Reference 0%@termAttribute does not exist at > Atlas referred at record with TermName : term_2 and GlossaryName : > xls_input_glossary_2" > } > ], > "successImportInfoList": [ > { > "parentObjectName": "xls_input_glossary_1", > "childObjectName": "term_1", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"c6487c38-f45e-4897-9a86-ea84d3b706a6\",\"qualifiedName\":\"term_1@xls_input_glossary_1\"}" > }, > { > "parentObjectName": "xls_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"44c6838b-cfe6-43ac-998f-c609bbec53ee\",\"qualifiedName\":\"term_2@xls_input_glossary_2\"}" > } > ] > } {code} > Also, please note the 0%0%@termAttribute in the remarks of the failed message -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4289) [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input
[ https://issues.apache.org/jira/browse/ATLAS-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4289: -- Description: [^xls_input.xls] [^xlsx_input.xlsx] While performing import with the provided input, there should be no failure and all data should have proper relationship. This was working fine before. Currently, seeing a failure while importing xlsx: {code:java} { "failedImportInfoList": [ { "parentObjectName": "xlsx_input_glossary_2", "childObjectName": "term_2", "importStatus": "FAILED", "remarks": "The provided Reference 0%@termAttribute does not exist at Atlas referred at record with TermName : term_2 and GlossaryName : xlsx_input_glossary_2" } ], "successImportInfoList": [ { "parentObjectName": "xlsx_input_glossary_1", "childObjectName": "term_1", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"19b8fec2-9061-4234-9504-e716da735275\",\"qualifiedName\":\"term_1@xlsx_input_glossary_1\"}" }, { "parentObjectName": "xlsx_input_glossary_2", "childObjectName": "term_2", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"7bf0dfb2-15d8-49ab-9e4f-d07f4ecb8dc6\",\"qualifiedName\":\"term_2@xlsx_input_glossary_2\"}" } ] } {code} Importing xls: {code:java} { "failedImportInfoList": [ { "parentObjectName": "xls_input_glossary_2", "childObjectName": "term_2", "importStatus": "FAILED", "remarks": "The provided Reference 0%@termAttribute does not exist at Atlas referred at record with TermName : term_2 and GlossaryName : xls_input_glossary_2" } ], "successImportInfoList": [ { "parentObjectName": "xls_input_glossary_1", "childObjectName": "term_1", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"c6487c38-f45e-4897-9a86-ea84d3b706a6\",\"qualifiedName\":\"term_1@xls_input_glossary_1\"}" }, { "parentObjectName": "xls_input_glossary_2", "childObjectName": "term_2", "importStatus": "SUCCESS", "remarks": 
"{\"termGuid\":\"44c6838b-cfe6-43ac-998f-c609bbec53ee\",\"qualifiedName\":\"term_2@xls_input_glossary_2\"}" } ] } {code} Also, please note the 0%0%@termAttribute in the remarks of the failed message was: [^xls_input.xls] [^xlsx_input.xlsx] While performing import with the provided input, there should be no failure and all data should have proper relationship. This was working fine before. > [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for > xls/xlsx input > - > > Key: ATLAS-4289 > URL: https://issues.apache.org/jira/browse/ATLAS-4289 > Project: Atlas > Issue Type: Bug > Components: atlas-core >Reporter: Dharshana M Krishnamoorthy >Priority: Major > Attachments: xls_input.xls, xlsx_input.xlsx > > > [^xls_input.xls] [^xlsx_input.xlsx] > While performing import with the provided input, there should be no failure > and all data should have proper relationship. > This was working fine before. > Currently, seeing a failure while importing xlsx: > {code:java} > { > "failedImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_2", > "childObjectName": "term_2", > "importStatus": "FAILED", > "remarks": "The provided Reference 0%@termAttribute does not exist at > Atlas referred at record with TermName : term_2 and GlossaryName : > xlsx_input_glossary_2" > } > ], > "successImportInfoList": [ > { > "parentObjectName": "xlsx_input_glossary_1", > "childObjectName": "term_1", > "importStatus": "SUCCESS", > "remarks": > "{\"termGuid\":\"19b8fec2-9061-4234-9504-e716da735275\",\"qualifiedName\":\"
[jira] [Created] (ATLAS-4289) [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input
Dharshana M Krishnamoorthy created ATLAS-4289: - Summary: [Atlas: Glossary Term Bulk Import] [regression]Bulk import broken for xls/xlsx input Key: ATLAS-4289 URL: https://issues.apache.org/jira/browse/ATLAS-4289 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Attachments: xls_input.xls, xlsx_input.xlsx [^xls_input.xls] [^xlsx_input.xlsx] While performing import with the provided input, there should be no failure and all data should have proper relationship. This was working fine before. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (ATLAS-4287) [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, bulk import is broken
[ https://issues.apache.org/jira/browse/ATLAS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy resolved ATLAS-4287. --- Resolution: Duplicate Discussed with [~sidharthkmishra], and the root cause is the same. Hence closing this as a duplicate > [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, > bulk import is broken > --- > > Key: ATLAS-4287 > URL: https://issues.apache.org/jira/browse/ATLAS-4287 > Project: Atlas > Issue Type: Bug > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > > This was working on the previous build before that latest fix, but broken now. > Scenario 1: Self-reference and Failure (2 failures expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2",{code} > Scenario 2: Self-reference and Success (1 failure expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2", > glossary_1,term_2 {code} > Scenatio 3: Only self reference (1 failures expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > 
glossary_1,term_1,,"glossary_1:term_1", {code} > In all these cases we expect a failure message that mentions the self > reference; this is currently broken. > What makes this important is that, in the self-reference-and-success scenario, > the relationship is not established. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
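The expectation in all three scenarios is that a token referring back to the row's own term is detected up front and reported in `failedImportInfoList` with an explicit self-reference message. A minimal sketch of such a check, under the assumption that related-term cells hold `|`-separated `glossaryName:termName` tokens (hypothetical helper, not the actual Atlas implementation):

```java
public class SelfReferenceCheck {
    // A related-term cell holds '|'-separated "glossaryName:termName"
    // tokens. Returns true if any token refers back to the row's own
    // term, which the import should report as a self-reference failure
    // instead of silently dropping the relationship.
    static boolean hasSelfReference(String glossaryName, String termName, String relatedCell) {
        if (relatedCell == null || relatedCell.isEmpty()) {
            return false;
        }
        String self = glossaryName + ":" + termName;
        for (String token : relatedCell.split("\\|")) {
            if (token.trim().equals(self)) {
                return true;
            }
        }
        return false;
    }
}
```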
[jira] [Commented] (ATLAS-4288) [Atlas: Glossary Term Bulk Import] Will all the data populated, while performing bulk import, PreferredToTerms relationship alone is not created
[ https://issues.apache.org/jira/browse/ATLAS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17346594#comment-17346594 ] Dharshana M Krishnamoorthy commented on ATLAS-4288: --- [~sidharthkmishra]This is not a self-reference one Please check > [Atlas: Glossary Term Bulk Import] Will all the data populated, while > performing bulk import, PreferredToTerms relationship alone is not created > > > Key: ATLAS-4288 > URL: https://issues.apache.org/jira/browse/ATLAS-4288 > Project: Atlas > Issue Type: Bug > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > Attachments: ATLAS-4288.patch, image-2021-05-17-16-43-31-487.png > > > Consider the following input, here all the relations are established except > the preferredToTerms (term_2) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > dharshmk_11,term_1,"short desc","long description", "Example", "G1", "Usage", > "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" > dharshmk_11,term_2,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", > dharshmk_11,term_3,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, > dharshmk_11,term_4,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, > dharshmk_11,term_5,"short desc","long description", "Example", "G1", > "Usage", > 
"glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" > dharshmk_11,term_6,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", > dharshmk_11,term_7,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, > dharshmk_11,term_8,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, > dharshmk_11,term_9,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" > dharshmk_11,term_10,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", > dharshmk_11,term_11,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, > dharshmk_11,term_12,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, > dharshmk_11,term_13,"short desc","long description", "Example", "G1", > "Usage", > "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBul
[jira] [Updated] (ATLAS-4288) [Atlas: Glossary Term Bulk Import] With all the data populated, while performing bulk import, PreferredToTerms relationship alone is not created
[ https://issues.apache.org/jira/browse/ATLAS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4288: -- Description: Consider the following input, here all the relations are established except the preferredToTerms (term_2) {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms dharshmk_11,term_1,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" dharshmk_11,term_2,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", dharshmk_11,term_3,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, dharshmk_11,term_4,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, dharshmk_11,term_5,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" dharshmk_11,term_6,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", dharshmk_11,term_7,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, dharshmk_11,term_8,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, 
dharshmk_11,term_9,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" dharshmk_11,term_10,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2", dharshmk_11,term_11,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",, dharshmk_11,term_12,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,, dharshmk_11,term_13,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2" {code} Before the above import happens, please do the initial import of the related terms with the following input {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms glossaryBulkImport_1,termBulkImport_1 glossaryBulkImport_1,termBulkImport_2 glossaryBulkImport_1,termBulkImport_3 glossaryBulkImport_1,termBulkImport_4 glossaryBulkImport_1,termBulkImport_5 glossaryBulkImport_2,termBulkImport_1 glossaryBulkImport_2,termBulkImport_2 glossaryBulkImport_2,termBulkImport_3 glossaryBulkImport_2,termBulkImport_4 glossaryBulkImport_2,termBulkImport_5 glossaryBulkImport_3,termBulkImport_1 glossaryBulkImport_3,termBulkImport_2 glossaryBulkImport_3,termBulkImport_3 glossaryBulkImport_3,termBulkImport_4 glossaryBulkImport_3,termBulkImport_5 glossaryBulkImport_4,
[jira] [Commented] (ATLAS-4287) [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, bulk import is broken
[ https://issues.apache.org/jira/browse/ATLAS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17346592#comment-17346592 ] Dharshana M Krishnamoorthy commented on ATLAS-4287: --- Hi [~sidharthkmishra], the current issue is due to a self-reference being present; there is no need to perform any pre-setup. Eg: {code:java} glossary_1,term_1,,"glossary_1:term_1", {code} In the above input, term_1 of glossary_1 is created and the same term is referred to in PreferredToTerms. No pre-setup is required, and this is not a duplicate of ATLAS-4288, where "preferredToTerms" will not be assigned as required. Hence re-opening and removing the duplicate link. > [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, > bulk import is broken > --- > > Key: ATLAS-4287 > URL: https://issues.apache.org/jira/browse/ATLAS-4287 > Project: Atlas > Issue Type: Bug > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > > This was working on the previous build before the latest fix, but is broken now. 
> Scenario 1: Self-reference and Failure (2 failures expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2",{code} > Scenario 2: Self-reference and Success (1 failure expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2", > glossary_1,term_2 {code} > Scenario 3: Only self-reference (1 failure expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1", {code} > In all these cases we expect a failure message that mentions the self > reference; this is currently broken. > What makes this important is that, in the self-reference-and-success scenario, > the relationship is not established > -- This message was sent by Atlassian Jira (v8.3.4#803005)
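The self-reference check that the three scenarios above expect can be sketched as follows. This is a hypothetical illustration, not the actual Atlas GlossaryTermUtils code, and it covers only the self-reference part of the expected failures (Scenario 1 would additionally fail on the not-yet-existing term_2):

```python
# Hypothetical sketch (not the actual Atlas implementation) of the
# self-reference validation the scenarios above expect: a row whose
# related-terms column lists the row's own glossary:term should yield a
# failedImportInfoList entry instead of being silently dropped.

def find_self_references(rows):
    """rows: (glossary, term, related) tuples, where related is a
    pipe-separated list of 'glossary:term' references (may be empty)."""
    failures = []
    for glossary, term, related in rows:
        for ref in filter(None, (r.strip() for r in related.split("|"))):
            if ref == f"{glossary}:{term}":
                failures.append(
                    f"Self-reference for term {term} in glossary {glossary}"
                )
    return failures

# Scenario 3 above: only a self-reference -> exactly one expected failure.
print(find_self_references([("glossary_1", "term_1", "glossary_1:term_1")]))
```

A row with an empty related-terms cell produces no failure, matching the successful creation of glossary_1,term_2 in Scenario 2.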
[jira] [Reopened] (ATLAS-4287) [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, bulk import is broken
[ https://issues.apache.org/jira/browse/ATLAS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy reopened ATLAS-4287: --- > [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, > bulk import is broken > --- > > Key: ATLAS-4287 > URL: https://issues.apache.org/jira/browse/ATLAS-4287 > Project: Atlas > Issue Type: Bug > Reporter: Dharshana M Krishnamoorthy >Assignee: Sidharth Kumar Mishra >Priority: Major > > This was working on the previous build before the latest fix, but is broken now. > Scenario 1: Self-reference and Failure (2 failures expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2",{code} > Scenario 2: Self-reference and Success (1 failure expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2", > glossary_1,term_2 {code} > Scenario 3: Only self-reference (1 failure expected at failedinfolist) > {code:java} > GlossaryName, TermName, ShortDescription, LongDescription, Examples, > Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, > Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, > TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms > glossary_1,term_1,,"glossary_1:term_1", {code} > In all these cases we expect a failure message that mentions the self > reference; this is currently broken. > What makes this important is that, in the self-reference-and-success scenario, > the relationship is not established > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4288) [Atlas: Glossary Term Bulk Import] With all the data populated, while performing bulk import, the PreferredToTerms relationship alone is not created
[ https://issues.apache.org/jira/browse/ATLAS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4288: -- Description: Consider the following input; here all the relations are established except the preferredToTerms (term_2) {code:java}
GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms
dharshmk_11,term_1,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
dharshmk_11,term_2,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",
dharshmk_11,term_3,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,
dharshmk_11,term_4,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,,
dharshmk_11,term_5,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
dharshmk_11,term_6,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",
dharshmk_11,term_7,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,
dharshmk_11,term_8,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,,
dharshmk_11,term_9,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
dharshmk_11,term_10,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",
dharshmk_11,term_11,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,
dharshmk_11,term_12,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,,
dharshmk_11,term_13,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
{code} Before the above import happens, please do the initial import of the related terms with the following input {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms glossaryBulkImport_1,termBulkImport_1 glossaryBulkImport_1,termBulkImport_2 glossaryBulkImport_1,termBulkImport_3 glossaryBulkImport_1,termBulkImport_4 glossaryBulkImport_1,termBulkImport_5 glossaryBulkImport_2,termBulkImport_1 glossaryBulkImport_2,termBulkImport_2 glossaryBulkImport_2,termBulkImport_3 glossaryBulkImport_2,termB
[jira] [Created] (ATLAS-4288) [Atlas: Glossary Term Bulk Import] With all the data populated, while performing bulk import, the PreferredToTerms relationship alone is not created
Dharshana M Krishnamoorthy created ATLAS-4288: - Summary: [Atlas: Glossary Term Bulk Import] With all the data populated, while performing bulk import, the PreferredToTerms relationship alone is not created Key: ATLAS-4288 URL: https://issues.apache.org/jira/browse/ATLAS-4288 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Consider the following input; here all the relations are established except the preferredToTerms (term_2) {code:java}
GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms
dharshmk_11,term_1,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
dharshmk_11,term_2,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",
dharshmk_11,term_3,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,
dharshmk_11,term_4,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,,
dharshmk_11,term_5,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
dharshmk_11,term_6,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",
dharshmk_11,term_7,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,
dharshmk_11,term_8,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,,
dharshmk_11,term_9,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
dharshmk_11,term_10,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",
dharshmk_11,term_11,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,
dharshmk_11,term_12,"short desc","long description", "Example", "G1", "Usage", "glossary:100%",,"glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2",,,
dharshmk_11,term_13,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","glossaryBulkImport_1:termBulkImport_1|glossaryBulkImport_2:termBulkImport_2"
{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4287) [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, bulk import is broken
Dharshana M Krishnamoorthy created ATLAS-4287: - Summary: [Atlas: Glossary Term Bulk Import] When there is self-reference in the input, bulk import is broken Key: ATLAS-4287 URL: https://issues.apache.org/jira/browse/ATLAS-4287 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy This was working on the previous build before the latest fix, but is broken now. Scenario 1: Self-reference and Failure (2 failures expected at failedinfolist) {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2",{code} Scenario 2: Self-reference and Success (1 failure expected at failedinfolist) {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms glossary_1,term_1,,"glossary_1:term_1|glossary_1:term_2", glossary_1,term_2 {code} Scenario 3: Only self-reference (1 failure expected at failedinfolist) {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms glossary_1,term_1,,"glossary_1:term_1", {code} In all these cases we expect a failure message that mentions the self-reference; this is currently broken. What makes this important is that, in the self-reference-and-success scenario, the relationship is not established -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4277) [Atlas: Glossary Term Bulk Import] [Regression] Unable to create term term_1 under glossary glossary_1 via bulk import
Dharshana M Krishnamoorthy created ATLAS-4277: - Summary: [Atlas: Glossary Term Bulk Import] [Regression] Unable to create term term_1 under glossary glossary_1 via bulk import Key: ATLAS-4277 URL: https://issues.apache.org/jira/browse/ATLAS-4277 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Unable to create term *term_1* under glossary *glossary_1* on a cluster with the latest bits Import input: {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms glossary_1,term_1 glossary_1,term_2{code} Current result: {code:java} { "failedImportInfoList":[ { "parentObjectName":"glossary_1", "childObjectName":"term_1", "importStatus":"FAILED", "remarks":"Glossary term with qualifiedName term_1@glossary_1 already exists" } ], "successImportInfoList":[ { "parentObjectName":"glossary_1", "childObjectName":"term_2", "importStatus":"SUCCESS", "remarks":"{\"termGuid\":\"8fe9a26a-aa14-4ed4-9a37-ef6db69ec29b\",\"qualifiedName\":\"term_2@glossary_1\"}" } ] } {code} Even though there is no glossary named glossary_1 and it is being created for the first time, this error is thrown. This was working fine on the older bits -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4276) [Atlas: Glossary Term Bulk Import] Unhandled exception java.lang.ArrayIndexOutOfBoundsException thrown, when related terms are not provided in the right format
Dharshana M Krishnamoorthy created ATLAS-4276: - Summary: [Atlas: Glossary Term Bulk Import] Unhandled exception java.lang.ArrayIndexOutOfBoundsException thrown, when related terms are not provided in the right format Key: ATLAS-4276 URL: https://issues.apache.org/jira/browse/ATLAS-4276 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Input: {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms dharsh,term_1,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""abcd", {code} Here the term is provided as "abcd" instead of in the glossary:term format. The following exception is thrown, which is an internal server error {code:java} 2021-05-07 06:31:39,080 ERROR - [etp1770642014-187 - 11bce930-c6b9-4eec-a353-a553450425bb:] ~ Error handling a request: a2e852df89225e3a (ExceptionMapperUtil:32)2021-05-07 06:31:39,080 ERROR - [etp1770642014-187 - 11bce930-c6b9-4eec-a353-a553450425bb:] ~ Error handling a request: a2e852df89225e3a (ExceptionMapperUtil:32)java.lang.ArrayIndexOutOfBoundsException: 1 at org.apache.atlas.glossary.GlossaryTermUtils.getAtlasRelatedTermHeaderSet(GlossaryTermUtils.java:718) at org.apache.atlas.glossary.GlossaryTermUtils.populateGlossaryTermObject(GlossaryTermUtils.java:778) at org.apache.atlas.glossary.GlossaryTermUtils.getGlossaryTermDataWithRelations(GlossaryTermUtils.java:610) at org.apache.atlas.glossary.GlossaryService.importGlossaryData(GlossaryService.java:1134) at org.apache.atlas.glossary.GlossaryService$$FastClassBySpringCGLIB$$e1f893e0.invoke() at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:668) at 
org.apache.atlas.glossary.GlossaryService$$EnhancerBySpringCGLIB$$3c19046e.importGlossaryData() at org.apache.atlas.web.rest.GlossaryREST.importGlossaryData(GlossaryREST.java:1021) at org.apache.atlas.web.rest.GlossaryREST$$FastClassBySpringCGLIB$$29dc059.invoke() at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:737) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:84) at org.apache.atlas.web.service.TimedAspectInterceptor.timerAdvice(TimedAspectInterceptor.java:46) at sun.reflect.GeneratedMethodAccessor208.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:627) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:616) at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:168) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:672) at org.apache.atlas.web.rest.GlossaryREST$$EnhancerBySpringCGLIB$$d1d409d6.importGlossaryData() at sun.reflect.GeneratedMethodAccessor331.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.
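The stack trace above points at getAtlasRelatedTermHeaderSet, and the `ArrayIndexOutOfBoundsException: 1` suggests it reads index 1 of a split result without checking its length. A defensive parse of the `glossary:term` reference can be sketched as below; this is an illustrative Python sketch under that assumption, not the actual Atlas Java code:

```python
# Hypothetical sketch of defensively parsing a related-term reference, so a
# malformed value like "abcd" becomes a per-record validation failure instead
# of an unhandled ArrayIndexOutOfBoundsException surfacing as a 500 error.
# Illustrative only; not the actual Atlas GlossaryTermUtils code.

def parse_related_term(ref):
    """Return ((glossary, term), None) for a well-formed 'glossary:term'
    reference, or (None, error_message) for anything else."""
    parts = ref.split(":")
    # Check the length before indexing: this is the guard the reported
    # exception indicates is missing.
    if len(parts) != 2 or not parts[0].strip() or not parts[1].strip():
        return None, f"Invalid related term '{ref}': expected glossary:term"
    return (parts[0].strip(), parts[1].strip()), None

print(parse_related_term("glossaryBulkImport_1:termBulkImport_1"))
print(parse_related_term("abcd"))  # the malformed input from the report above
```

With the guard in place the malformed reference yields an error tuple that the import can report in failedImportInfoList rather than propagating an exception.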
[jira] [Updated] (ATLAS-4275) [Atlas: Glossary Term Bulk Import] When there is incorrect data in the preferred term column, it is not considered while importing
[ https://issues.apache.org/jira/browse/ATLAS-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4275: -- Description: When there is an error in the PreferredTerms column of the input, the error is not reported. Consider the following input. Here the provided term "abcd:efgh" does not exist in the system. The error is thrown when it is provided as input for the "*PreferredToTerms*" column but not for the "*PreferredTerms*" column {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms dharsh,term_1,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","abcd:efgh" dharsh,term_2,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""abcd:efgh", {code} *Expectation:* Two failure messages should be reported, one for term_1 (incorrect PreferredTerms) and one for term_2 (incorrect PreferredToTerms), but only 1 is thrown *Current output:* {code:java} { "failedImportInfoList": [ { "parentObjectName": "dharsh", "childObjectName": "term_2", "importStatus": "FAILED", "remarks": "The provided Reference efgh@abcd does not exist at Atlas referred at record with TermName : term_2 and GlossaryName : dharsh" } ], "successImportInfoList": [ { "parentObjectName": "dharsh", "childObjectName": "term_1", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"284fe9a7-911c-423a-90bf-adf8231afb27\",\"qualifiedName\":\"term_1@dharsh\"}" }, { "parentObjectName": "dharsh", "childObjectName": "term_2", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"29a51c8a-ce92-4988-8fca-feaf683c58dd\",\"qualifiedName\":\"term_2@dharsh\"}" } ] } {code} *Expected output:* {code:java} { "failedImportInfoList": [ { "parentObjectName": "dharsh", "childObjectName": "term_1", "importStatus": "FAILED", "remarks": "The provided Reference efgh@abcd does not exist at Atlas referred at record with TermName : term_1 and GlossaryName : dharsh" }, { "parentObjectName": "dharsh", "childObjectName": "term_2", "importStatus": "FAILED", "remarks": "The provided Reference efgh@abcd does not exist at Atlas referred at record with TermName : term_2 and GlossaryName : dharsh" } ], "successImportInfoList": [ { "parentObjectName": "dharsh", "childObjectName": "term_1", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"284fe9a7-911c-423a-90bf-adf8231afb27\",\"qualifiedName\":\"term_1@dharsh\"}" }, { "parentObjectName": "dharsh", "childObjectName": "term_2", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"29a51c8a-ce92-4988-8fca-feaf683c58dd\",\"qualifiedName\":\"term_2@dharsh\"}" } ] } {code} was: When there is an error in the PreferredTerms of the input, error is not considered Consider the following input. Here the provided term "abcd:efgh" does not exists in the system. The error is thrown when it is provided as an input for "*PreferredToTerms*" column but not for "*PreferredTerms*" column {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms dharsh,term_1,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","abcd:efgh" dharsh,term_2,"short desc","long desc
[jira] [Created] (ATLAS-4275) [Atlas: Glossary Term Bulk Import] When there is incorrect data in the preferred term column, it is not considered while importing
Dharshana M Krishnamoorthy created ATLAS-4275: - Summary: [Atlas: Glossary Term Bulk Import] When there is incorrect data in the preferred term column, it is not considered while importing Key: ATLAS-4275 URL: https://issues.apache.org/jira/browse/ATLAS-4275 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy When there is an error in the PreferredTerms column of the input, the error is not reported. Consider the following input. Here the provided term "abcd:efgh" does not exist in the system. The error is thrown when it is provided as input for the "*PreferredToTerms*" column but not for the "*PreferredTerms*" column {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms dharsh,term_1,"short desc","long description", "Example", "G1", "Usage", "glossary:100%","abcd:efgh" dharsh,term_2,"short desc","long description", "Example", "G1", "Usage", "glossary:100%""abcd:efgh", {code} *Expectation:* Two failure messages should be reported, one for term_1 (incorrect PreferredTerms) and one for term_2 (incorrect PreferredToTerms), but only 1 is thrown *Current output:* {code:java} { "failedImportInfoList": [ { "parentObjectName": "dharsh", "childObjectName": "term_2", "importStatus": "FAILED", "remarks": "The provided Reference efgh@abcd does not exist at Atlas referred at record with TermName : term_2 and GlossaryName : dharsh" } ], "successImportInfoList": [ { "parentObjectName": "dharsh", "childObjectName": "term_1", "importStatus": "SUCCESS", "remarks": "{\"termGuid\":\"284fe9a7-911c-423a-90bf-adf8231afb27\",\"qualifiedName\":\"term_1@dharsh\"}" }, { "parentObjectName": "dharsh", "childObjectName": "term_2", "importStatus": "SUCCESS", "remarks": 
"{\"termGuid\":\"29a51c8a-ce92-4988-8fca-feaf683c58dd\",\"qualifiedName\":\"term_2@dharsh\"}" } ] } {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
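The fix the report above implies is to validate every relation column the same way, so an unknown reference fails for PreferredTerms just as it does for PreferredToTerms. A minimal sketch of such uniform validation, assuming a `term@glossary` qualified-name lookup as in the remarks above (illustrative; not the actual Atlas validation code):

```python
# Hypothetical sketch: run the same existence check over every relation
# column, so term_1 (bad PreferredTerms) fails just like term_2 (bad
# PreferredToTerms). Not the actual Atlas implementation.

RELATION_COLUMNS = ("PreferredToTerms", "PreferredTerms")

def validate_relations(term_name, row, existing_terms):
    """row: column name -> cell value; existing_terms: set of 'term@glossary'
    qualified names already known to the system."""
    failures = []
    for column in RELATION_COLUMNS:
        for ref in filter(None, row.get(column, "").split("|")):
            glossary, _, term = ref.partition(":")
            qualified = f"{term}@{glossary}"
            if qualified not in existing_terms:
                failures.append(
                    f"The provided Reference {qualified} does not exist "
                    f"at Atlas referred at record with TermName : {term_name}"
                )
    return failures

# Both rows from the report should fail, not just the PreferredToTerms one:
print(validate_relations("term_1", {"PreferredTerms": "abcd:efgh"}, set()))
print(validate_relations("term_2", {"PreferredToTerms": "abcd:efgh"}, set()))
```

Iterating one shared column list keeps the two columns from drifting apart, which is the asymmetry this ticket describes.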
[jira] [Created] (ATLAS-4274) [Atlas: Glossary] Non-matching relations are created via bulk import
Dharshana M Krishnamoorthy created ATLAS-4274: - Summary: [Atlas: Glossary] Non-matching relations are created via bulk import Key: ATLAS-4274 URL: https://issues.apache.org/jira/browse/ATLAS-4274 Project: Atlas Issue Type: Bug Reporter: Dharshana M Krishnamoorthy Attachments: Screenshot 2021-05-04 at 3.31.00 PM.png, Screenshot 2021-05-04 at 3.34.03 PM.png, Screenshot 2021-05-04 at 3.34.21 PM.png, Screenshot 2021-05-04 at 3.34.36 PM.png The related terms provided in the input do not match the relations created via import {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms a_glossary_1,term_1,,,"a_glossary_1:term_2" a_glossary_1,term_2,"a_glossary_1:term_3",, a_glossary_1,term_3,,"a_glossary_1:term_1", {code} !Screenshot 2021-05-04 at 3.31.00 PM.png|width=1973,height=127! !Screenshot 2021-05-04 at 3.34.03 PM.png|width=1038,height=578! !Screenshot 2021-05-04 at 3.34.21 PM.png|width=1005,height=563! !Screenshot 2021-05-04 at 3.34.36 PM.png|width=541,height=303! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4273) [Atlas: Glossary Term Bulk Import] When there is only 1 term imported via bulk import and if it fails, no proper reason is mentioned in response
Dharshana M Krishnamoorthy created ATLAS-4273: - Summary: [Atlas: Glossary Term Bulk Import] When there is only 1 term imported via bulk import and it fails, no proper reason is mentioned in the response Key: ATLAS-4273 URL: https://issues.apache.org/jira/browse/ATLAS-4273 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy The following fails, as we do not support @ in term names {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms glossary_l,term_@_1 {code} But the failure is {code:java} {"errorCode":"ATLAS-409-00-011","errorMessage":"Glossary import failed"} {code} The code is 409, which means a conflict, but the actual reason is different -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ATLAS-4272) [Atlas: Glossary Term Bulk Import] Re-importing after deleting a glossary fails
Dharshana M Krishnamoorthy created ATLAS-4272: - Summary: [Atlas: Glossary Term Bulk Import] Re-importing after deleting a glossary fails Key: ATLAS-4272 URL: https://issues.apache.org/jira/browse/ATLAS-4272 Project: Atlas Issue Type: Bug Components: atlas-core Reporter: Dharshana M Krishnamoorthy Scenario: Import a glossary; after the import, delete the glossary and its terms, then re-import the same input {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms abc_glossary,term_1 abc_glossary,term_2 abc_glossary,term_3{code} This throws the following error {code:java} errorCode: "ATLAS-404-00-005", errorMessage: "Given instance guid 6363d630-a96e-4c15-a3ae-d4812dd67aa2 is invalid/not found" {code} {code:java} 2021-05-03 13:33:43,562 ERROR - [etp813603842-30 - 8f83ce44-0c56-4224-9f57-8211b3435b2e:] ~ graph rollback due to exception AtlasBaseException:Given instance guid 45163be7-377a-4b74-b977-75fa5a4b5a2e is invalid/not found (GraphTransactionInterceptor:202) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (ATLAS-4266) [Atlas: Glossary Term Bulk Import] Import fails with success state
[ https://issues.apache.org/jira/browse/ATLAS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dharshana M Krishnamoorthy updated ATLAS-4266: -- Description: The following is a complete failure scenario, where the user inputs blank spaces; the import fails completely, but the status is *200* OK *Scenario*: Import a blank glossary and terms {code:java} GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms " ", " "{code} *Observation:* {code:java} { "failedImportInfoList":[ { "parentObjectName":"", "childObjectName":null, "importStatus":"FAILED", "remarks":"The GlossaryName is blank for the record : [ , ]" }, { "parentObjectName":"", "childObjectName":null, "importStatus":"FAILED", "remarks":"The GlossaryName is blank for the record : [ , ]" } ] }{code} !Screenshot 2021-04-28 at 5.00.33 PM.png|width=597,height=399! There is only a failure report in the response !Screenshot 2021-04-28 at 5.00.33 PM.png|width=854,height=570! 
was:
The following is a complete failure scenario, where the user inputs blank spaces, here the import fails with complete failure, but the status is *200* ok

*Scenario*: Import Blank glossary and terms
{code:java}
GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms
" ", " "{code}
*Observation:*
{code:java}
{
   "failedImportInfoList":[
      {
         "parentObjectName":"",
         "childObjectName":null,
         "importStatus":"FAILED",
         "remarks":"The GlossaryName is blank for the record : [ , ]"
      },
      {
         "parentObjectName":"",
         "childObjectName":null,
         "importStatus":"FAILED",
         "remarks":"The GlossaryName is blank for the record : [ , ]"
      }
   ]
}{code}
!Screenshot 2021-04-28 at 4.34.30 PM.png|width=546,height=217!


> [Atlas: Glossary Term Bulk Import] Import fails with success state
> ------------------------------------------------------------------
>
>                 Key: ATLAS-4266
>                 URL: https://issues.apache.org/jira/browse/ATLAS-4266
>             Project: Atlas
>          Issue Type: Bug
>          Components: atlas-core
>            Reporter: Dharshana M Krishnamoorthy
>            Priority: Major
>         Attachments: Screenshot 2021-04-28 at 4.34.30 PM.png, Screenshot 2021-04-28 at 5.00.33 PM.png, Screenshot 2021-04-28 at 5.00.33 PM.png
>


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
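The update above describes an import where every record fails yet the HTTP status is still 200. Until the server-side fix lands, a client consuming this endpoint can guard against that by inspecting the body rather than trusting the status code. This is a minimal sketch under stated assumptions: `successImportInfoList` is an assumed field name, since the issue only shows `failedImportInfoList`.

```python
import json

def import_succeeded(status_code: int, response_body: str) -> bool:
    """Treat a bulk import as failed when every record landed in
    failedImportInfoList, even though the HTTP status is 200
    (the behaviour reported in ATLAS-4266).
    """
    if status_code != 200:
        return False
    payload = json.loads(response_body)
    failed = payload.get("failedImportInfoList", [])
    # Assumption: a successful import reports records in a field like
    # "successImportInfoList"; the issue only shows the failure list.
    imported = payload.get("successImportInfoList", [])
    return len(imported) > 0 or len(failed) == 0

# The response body reported in this issue (both records failed):
body = json.dumps({
    "failedImportInfoList": [
        {"parentObjectName": "", "childObjectName": None,
         "importStatus": "FAILED",
         "remarks": "The GlossaryName is blank for the record : [ , ]"},
        {"parentObjectName": "", "childObjectName": None,
         "importStatus": "FAILED",
         "remarks": "The GlossaryName is blank for the record : [ , ]"},
    ]
})
print(import_succeeded(200, body))  # False
```

With this check, a 200 response whose records all failed is surfaced as a failure to the caller.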
[jira] [Updated] (ATLAS-4266) [Atlas: Glossary Term Bulk Import] Import fails with success state
[ https://issues.apache.org/jira/browse/ATLAS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4266:
----------------------------------------------
    Attachment: Screenshot 2021-04-28 at 5.00.33 PM.png


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (ATLAS-4266) [Atlas: Glossary Term Bulk Import] Import fails with success state
[ https://issues.apache.org/jira/browse/ATLAS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dharshana M Krishnamoorthy updated ATLAS-4266:
----------------------------------------------
    Attachment: Screenshot 2021-04-28 at 4.34.30 PM.png


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (ATLAS-4266) [Atlas: Glossary Term Bulk Import] Import fails with success state
Dharshana M Krishnamoorthy created ATLAS-4266:
-------------------------------------------------

             Summary: [Atlas: Glossary Term Bulk Import] Import fails with success state
                 Key: ATLAS-4266
                 URL: https://issues.apache.org/jira/browse/ATLAS-4266
             Project: Atlas
          Issue Type: Bug
          Components: atlas-core
            Reporter: Dharshana M Krishnamoorthy
         Attachments: Screenshot 2021-04-28 at 4.34.30 PM.png


This is a complete-failure scenario: the user inputs only blank spaces, so every record in the import fails, yet the response status is *200 OK*.

*Scenario*: Import a blank glossary and terms
{code:java}
GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms
" ", " "{code}
*Observation:*
{code:java}
{
   "failedImportInfoList":[
      {
         "parentObjectName":"",
         "childObjectName":null,
         "importStatus":"FAILED",
         "remarks":"The GlossaryName is blank for the record : [ , ]"
      },
      {
         "parentObjectName":"",
         "childObjectName":null,
         "importStatus":"FAILED",
         "remarks":"The GlossaryName is blank for the record : [ , ]"
      }
   ]
}{code}
!Screenshot 2021-04-28 at 4.34.30 PM.png|width=546,height=217!


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (ATLAS-4265) [Atlas: Glossary Term Bulk Import]: Error message is displayed twice for an error in bulk import
Dharshana M Krishnamoorthy created ATLAS-4265:
-------------------------------------------------

             Summary: [Atlas: Glossary Term Bulk Import]: Error message is displayed twice for an error in bulk import
                 Key: ATLAS-4265
                 URL: https://issues.apache.org/jira/browse/ATLAS-4265
             Project: Atlas
          Issue Type: Bug
          Components: atlas-core, atlas-webui
            Reporter: Dharshana M Krishnamoorthy
         Attachments: Screenshot 2021-04-28 at 4.34.30 PM.png


Scenario: Import a blank glossary and terms
{code:java}
GlossaryName, TermName, ShortDescription, LongDescription, Examples, Abbreviation, Usage, AdditionalAttributes, TranslationTerms, ValidValuesFor, Synonyms, ReplacedBy, ValidValues, ReplacementTerms, SeeAlso, TranslatedTerms, IsA, Antonyms, Classifies, PreferredToTerms, PreferredTerms
" ", " "{code}
*Observation:*
{code:java}
{
   "failedImportInfoList":[
      {
         "parentObjectName":"",
         "childObjectName":null,
         "importStatus":"FAILED",
         "remarks":"The GlossaryName is blank for the record : [ , ]"
      },
      {
         "parentObjectName":"",
         "childObjectName":null,
         "importStatus":"FAILED",
         "remarks":"The GlossaryName is blank for the record : [ , ]"
      }
   ]
}{code}
!Screenshot 2021-04-28 at 4.34.30 PM.png|width=546,height=217!


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
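Because the same failure remark is reported once per failed record (and, per this issue, then rendered twice in the UI), a display layer could collapse duplicate remarks before showing them. This is a hypothetical sketch of such a deduplication step, not the Atlas UI's actual code; it assumes the `failedImportInfoList` shape shown above.

```python
def unique_remarks(failed_import_info_list):
    """Collapse duplicate failure remarks so each distinct error is shown once.

    Preserves first-seen order, which matters when errors are displayed
    in the order the records appeared in the CSV.
    """
    seen = []
    for info in failed_import_info_list:
        remark = info.get("remarks")
        if remark and remark not in seen:
            seen.append(remark)
    return seen

# The duplicated failure entries from this issue's response:
failures = [
    {"importStatus": "FAILED",
     "remarks": "The GlossaryName is blank for the record : [ , ]"},
    {"importStatus": "FAILED",
     "remarks": "The GlossaryName is blank for the record : [ , ]"},
]
print(unique_remarks(failures))  # ['The GlossaryName is blank for the record : [ , ]']
```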
[jira] [Created] (ATLAS-4262) [Atlas: Glossary Term Bulk Import] Use Bulk import feature to create relationships on existing terms
Dharshana M Krishnamoorthy created ATLAS-4262:
-------------------------------------------------

             Summary: [Atlas: Glossary Term Bulk Import] Use Bulk import feature to create relationships on existing terms
                 Key: ATLAS-4262
                 URL: https://issues.apache.org/jira/browse/ATLAS-4262
             Project: Atlas
          Issue Type: Improvement
            Reporter: Dharshana M Krishnamoorthy


*Scenario*: Initial setup: say I already have a glossary and 3 terms
{code:java}
glossary_1
  term_1
  term_2
  term_3
{code}
Create a few more terms via bulk import, with relations:
{code:java}
glossary_1,term_11,,,"glossary_1:term_14|glossary_1:term_15|glossary_1:term_13"
glossary_1,term_12,,,"glossary_1:term_13"
glossary_1,term_13,,,"glossary_1:term_14|glossary_1:term_15"
glossary_1,term_14,,,"glossary_1:term_15"
glossary_1,term_15
{code}
Now, if a user wants to relate the new terms to the existing terms, it is currently not possible. Consider the following input:
{code:java}
glossary_1,term_1,,,"glossary_1:term_14|glossary_1:term_15|glossary_1:term_13"
{code}
This results in an error:
{code:java}
Glossary term with qualifiedName term_1@glossary_1 already exists"
{code}
*Improvement:* It would be great if bulk import instead supported creating these relationships on terms that already exist in the glossary.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
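The relationship cell in these CSV rows follows a `glossaryName:termName` format, pipe-separated and quoted. A small helper can generate such rows programmatically; this is a hypothetical sketch whose column positions are taken from the example rows above (glossary, term, two blank columns, then the relationship cell), not from the full header.

```python
def related_terms_cell(glossary, terms):
    """Format the pipe-separated 'glossaryName:termName|...' relationship cell
    used by the bulk-import CSV rows shown in this issue."""
    return "|".join("%s:%s" % (glossary, t) for t in terms)

def import_row(glossary, term, related=()):
    """Build one bulk-import CSV row. Column positions follow the example
    rows in this issue; the intervening description columns are left blank."""
    cells = [glossary, term, "", ""]
    if related:
        cells.append('"%s"' % related_terms_cell(glossary, related))
    return ",".join(cells)

print(import_row("glossary_1", "term_11", ["term_14", "term_15", "term_13"]))
# glossary_1,term_11,,,"glossary_1:term_14|glossary_1:term_15|glossary_1:term_13"
```

Note that real CSV output should also escape any quotes or commas inside term names; the sketch assumes plain identifiers like those in the example.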