[GitHub] [atlas] martin-g commented on pull request #128: ATLAS-4159 Update dependencies to fix the build on arm64
martin-g commented on pull request #128: URL: https://github.com/apache/atlas/pull/128#issuecomment-908271317 Could someone please schedule one more PreCommit for this PR? Thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@atlas.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [atlas] nixonrodrigues commented on pull request #128: ATLAS-4159 Update dependencies to fix the build on arm64
nixonrodrigues commented on pull request #128: URL: https://github.com/apache/atlas/pull/128#issuecomment-908298157 > Could someone please schedule one more PreCommit for this PR? Thank you! https://ci-builds.apache.org/job/Atlas/job/PreCommit-ATLAS-Build-Test/835/console
[GitHub] [atlas] martin-g commented on pull request #128: ATLAS-4159 Update dependencies to fix the build on arm64
martin-g commented on pull request #128: URL: https://github.com/apache/atlas/pull/128#issuecomment-908413491 OK! I was able to reproduce the issue on a fresh setup/checkout of Atlas. The problem was that Solr 8.6.3 depended on ZooKeeper 3.5.7, and there is an API break in ZooKeeper 3.6.2. Upgrading ZooKeeper to 3.7.0 fixes the issue.
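The kind of dependency fix discussed in this thread can be sketched as a Maven `dependencyManagement` override that pins the transitive ZooKeeper dependency. This fragment is purely illustrative of the technique; the PR's actual POM changes may pin different artifacts or versions:

```xml
<!-- Sketch: force a single ZooKeeper version across all modules so the
     transitive version pulled in by Solr does not hit the 3.5.x/3.6.x
     API break. The version shown follows the comment above; verify it
     against the actual PR. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.7.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With this in the parent POM, `mvn dependency:tree -Dincludes=org.apache.zookeeper` should show the pinned version everywhere.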
Re: Review Request 73536: ATLAS-4379 :- Atlas Filter changes for user inactivity on Atlas UI
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/73536/ --- (Updated Aug. 30, 2021, 7:35 p.m.)

Review request for atlas, Ashutosh Mestry and Jayendra Parab.

Changes
---
Handled review comment by adding an isDebugLog check for the logging statement.

Bugs: ATLAS-4379
https://issues.apache.org/jira/browse/ATLAS-4379

Repository: atlas

Description
---
Atlas server filter changes to support user inactivity on the Atlas UI and log out the user, invalidating the user session. This integration is required for Atlas with Knox proxy.

Diffs (updated)
-
intg/src/main/java/org/apache/atlas/AtlasConfiguration.java 2f2c8a540
webapp/src/main/java/org/apache/atlas/web/filters/AtlasAuthenticationFilter.java d9b1c82b1
webapp/src/main/java/org/apache/atlas/web/filters/RestUtil.java PRE-CREATION
webapp/src/main/java/org/apache/atlas/web/resources/AdminResource.java 01fb8ec02
webapp/src/main/java/org/apache/atlas/web/security/AtlasAuthenticationSuccessHandler.java e7a5d668c

Diff: https://reviews.apache.org/r/73536/diff/3/
Changes: https://reviews.apache.org/r/73536/diff/2-3/

Testing
---
Tested the Atlas UI flow on Kerberos with trusted proxy and with the simple authentication flow.

PC
https://ci-builds.apache.org/job/Atlas/job/PreCommit-ATLAS-Build-Test/819/console

Thanks,
Nixon Rodrigues
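As a rough illustration of the inactivity check such a filter performs, here is a minimal, self-contained sketch. All names are hypothetical; the real logic lives in AtlasAuthenticationFilter and RestUtil in the diffs above:

```java
// Minimal sketch of a session-inactivity check, independent of the
// Servlet API. Class and method names are illustrative, not the
// actual Atlas classes.
public final class SessionInactivityCheck {

    private SessionInactivityCheck() {}

    /**
     * Returns true when the gap between the last request and the current
     * request exceeds the configured inactivity timeout, i.e. the session
     * should be invalidated and the user logged out on the UI.
     */
    public static boolean isExpired(long lastAccessMillis,
                                    long nowMillis,
                                    long timeoutMillis) {
        return (nowMillis - lastAccessMillis) > timeoutMillis;
    }

    public static void main(String[] args) {
        long timeout = 15 * 60 * 1000L; // e.g. a 15-minute inactivity window
        System.out.println(isExpired(0L, timeout + 1, timeout)); // expired
        System.out.println(isExpired(0L, timeout - 1, timeout)); // still active
    }
}
```

In a real filter this check would run on every request, with the last-access timestamp read from the session and the session invalidated when the check returns true.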
Re: Review Request 73536: ATLAS-4379 :- Atlas Filter changes for user inactivity on Atlas UI
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/73536/#review223428 ---

Ship it!

- Ashutosh Mestry
[jira] [Updated] (ATLAS-4408) Dynamic handling of failure in updating index
[ https://issues.apache.org/jira/browse/ATLAS-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Radhika Kundam updated ATLAS-4408:
--
Description:
*Index failure resilience:* dynamic handling of failure in updating the index (i.e. the HBase commit succeeds but the index commit fails).
To support this feature, the *tx.log-tx* property needs to be enabled, which starts storing write-ahead logs. *With this approach we need to maintain more data related to write-ahead transaction logs.* But weighing proactive index recovery against re-indexing the entire data set in case of secondary persistence failures, the feature is worth the overhead of maintaining the extra data.
Design details:
# Start a new service, IndexRecoveryService, at Atlas startup.
## Continuously monitor Solr (index client) health every retryTime milliseconds.
### If Solr is healthy and a recovery start time is available:
Start transaction recovery with the available recovery start time (noted when Solr became unhealthy).
Persist the current recovery time as the previous one; it can later be passed as a custom recovery time to start index recovery if required.
Reset the current recovery start time.
Continue with the Solr health check.
### If Solr is unhealthy and no recovery start time is available:
Shut down the existing transaction recovery process.
Note down the time, which should be the next recovery start time, and persist it in the graph.
Continue with the Solr health check.
Configuration properties used for this feature:
1. To enable or disable index recovery (by default, index recovery is enabled on Atlas startup): *atlas.graph.enable.index.recovery=true*
2. To configure how frequently the Solr health check runs: *atlas.graph.index.search.solr.status.retry.interval=*
3. To start index recovery from a custom, user-provided recovery time: *atlas.graph.index.search.solr.recovery.start.time=1630086622*

was: (previous revision of the description: the same design details and configuration properties, without the write-ahead log rationale)

> Dynamic handling of failure in updating index
> -
>
> Key: ATLAS-4408
> URL: https://issues.apache.org/jira/browse/ATLAS-4408
> Project: Atlas
> Issue Type: New Feature
> Components: atlas-core
> Reporter: Radhika Kundam
> Assignee: Radhika Kundam
> Priority: Major
[jira] [Updated] (ATLAS-4408) Dynamic handling of failure in updating index
[ https://issues.apache.org/jira/browse/ATLAS-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Radhika Kundam updated ATLAS-4408:
--
Description:
*Index failure resilience:* dynamic handling of failure in updating the index (i.e. the HBase commit succeeds but the index commit fails).
In case of a secondary persistence failure, the indexes become inconsistent for all transactions that failed at Solr. The existing repair option is re-indexing all the data, which is time-consuming because it indexes the entire database. To recover such inconsistencies we can use the *transaction write-ahead log option*. With the write-ahead log (tx.log-tx) enabled, JanusGraph maintains transaction log data that can be used to recover indices after failures. Maintaining log data for every transaction is extra overhead, but it makes the system more resilient and proactive, so the advantages outweigh the overhead.
Design details:
# Start a new service, IndexRecoveryService, at Atlas startup.
## Continuously monitor Solr (index client) health every retryTime milliseconds.
### If Solr is healthy and a recovery start time is available:
Start transaction recovery with the available recovery start time (noted when Solr became unhealthy).
Persist the current recovery time as the previous one; it can later be passed as a custom recovery time to start index recovery if required.
Reset the current recovery start time.
Continue with the Solr health check.
### If Solr is unhealthy and no recovery start time is available:
Shut down the existing transaction recovery process.
Note down the time, which should be the next recovery start time, and persist it in the graph.
Continue with the Solr health check.
Configuration properties used for this feature:
1. To enable or disable index recovery (by default, index recovery is enabled on Atlas startup): *atlas.graph.enable.index.recovery=true*
2. To configure how frequently the Solr health check runs: *atlas.graph.index.search.solr.status.retry.interval=*
3. To start index recovery from a custom, user-provided recovery time: *atlas.graph.index.search.solr.recovery.start.time=1630086622*

was: (previous revision of the description: the same design details and configuration properties, with an earlier wording of the write-ahead log rationale)

> Dynamic handling of failure in updating index
> -
>
> Key: ATLAS-4408
> URL: https://issues.apache.org/jira/browse/ATLAS-4408
> Project: Atlas
> Issue Type: New Feature
> Components: atlas-core
> Reporter: Radhika Kundam
> Assignee: Radhika Kundam
> Priority: Major
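The monitor loop in the design above boils down to a small decision per iteration. The following is a sketch of that decision only, with illustrative names; the actual IndexRecoveryService implementation in Atlas differs:

```java
// Sketch of one iteration of the IndexRecoveryService monitor loop
// described in the JIRA design notes. Names here are illustrative.
public final class IndexRecoveryDecision {

    public enum Action { START_RECOVERY, RECORD_START_TIME, NONE }

    private IndexRecoveryDecision() {}

    /**
     * If Solr is healthy and a recovery start time was recorded while it
     * was down, start transaction recovery from that time. If Solr is
     * unhealthy and no start time has been recorded yet, note the current
     * time as the next recovery start time (to be persisted in the graph).
     * Otherwise just keep polling.
     */
    public static Action step(boolean solrHealthy, Long recoveryStartTime) {
        if (solrHealthy && recoveryStartTime != null) {
            return Action.START_RECOVERY;
        }
        if (!solrHealthy && recoveryStartTime == null) {
            return Action.RECORD_START_TIME;
        }
        return Action.NONE;
    }
}
```

A driver would call `step` every retry interval, clear the recorded start time after `START_RECOVERY`, and persist the noted time after `RECORD_START_TIME`, matching the reset/persist steps in the design.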
[GitHub] [atlas] jiutianfeiwu opened a new pull request #145: pull20210831
jiutianfeiwu opened a new pull request #145: URL: https://github.com/apache/atlas/pull/145 pull20210831
[GitHub] [atlas] jiutianfeiwu closed pull request #145: pull20210831
jiutianfeiwu closed pull request #145: URL: https://github.com/apache/atlas/pull/145