Re: Performance tuning of ServiceComb Saga Pack
I totally agree with @bismy; we should be very careful with these things.

2018-08-20 11:22 GMT+08:00 bismy:
> I think async is the problem I mentioned. When the transaction is in
> progress, the transaction log is not actually persistent, and transaction
> recovery may rely on these logs.
>
> However, I think this is an implementation problem, and complicated
> algorithms may be needed to make sure the really important data is
> persistent.
Re: Performance tuning of ServiceComb Saga Pack
I think async is the problem I mentioned. When the transaction is in progress, the transaction log is not actually persistent, and transaction recovery may rely on these logs.

However, I think this is an implementation problem, and complicated algorithms may be needed to make sure the really important data is persistent.

-- Original message --
From: "willem.jiang"
Sent: Mon, Aug 20, 2018, 11:15 AM
To: "dev"
Subject: Re: Performance tuning of ServiceComb Saga Pack

We could use the async writer to store the events to the database when using Redis. The Redis cluster has persistent storage at the same time.

Willem Jiang

Twitter: willemjiang
Weibo: 姜宁willem
Re: Performance tuning of ServiceComb Saga Pack
We could use the async writer to store the events to the database when using Redis. The Redis cluster has persistent storage at the same time.

Willem Jiang

Twitter: willemjiang
Weibo: 姜宁willem

On Mon, Aug 20, 2018 at 11:08 AM, bismy wrote:
> One doubt about redis, maybe not correct.
>
> Persistence is very important for transactions; if we use Redis, the
> transactions may lose the persistence property. Even if you provide a
> retry mechanism, what if the Redis instance is restarted and the in-memory
> data gets lost?
>
> Do you mean using Redis clusters, syncing in-memory data between cluster
> nodes, and assuming the cluster is highly available?
Re: Performance tuning of ServiceComb Saga Pack
We cannot guarantee that Redis and the DB are updated at the same time; we can just do the retry in our code.

Willem Jiang

Twitter: willemjiang
Weibo: 姜宁willem

On Thu, Aug 16, 2018 at 10:29 AM, fu chengeng wrote:
> Hi Willem,
> Why not just store 'finished transaction' data to the DB in an async way?
> Can we guarantee that updates on both the DB and Redis succeed at the same
> time when the transaction is aborted?
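Willem's "just do the retry in our code" point could be sketched as follows. This is an illustrative sketch only, not the project's actual code: in-memory dicts stand in for the Redis client and the DB table, and all function names are hypothetical.

```python
import time

# Hypothetical in-memory stand-ins for the real stores; the actual code
# would call the Redis client and the database repository.
redis_store = {"tx-1": "PENDING"}
db_store = {"tx-1": "PENDING"}

def update_with_retry(store, tx_id, status, max_attempts=5, delay=0.01):
    """Keep trying to update one store until it succeeds or we give up."""
    for attempt in range(max_attempts):
        try:
            store[tx_id] = status  # real code: Redis SET / SQL UPDATE
            return True
        except Exception:
            time.sleep(delay * (2 ** attempt))  # simple backoff between tries
    return False

def abort_transaction(tx_id):
    # Update the two stores independently and retry each on failure,
    # rather than expecting an atomic cross-store update.
    ok_db = update_with_retry(db_store, tx_id, "ABORTED")
    ok_redis = update_with_retry(redis_store, tx_id, "ABORTED")
    return ok_db and ok_redis

abort_transaction("tx-1")
```

Note that retrying independently only converges both stores eventually; between retries a reader can still observe the abort in one store but not the other, which is the consistency gap discussed later in the thread.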
Re: Performance tuning of ServiceComb Saga Pack
Hi Willem,
Why not just store 'finished transaction' data to the DB in an async way? Can we guarantee that updates on both the DB and Redis succeed at the same time when the transaction is aborted?

From: Willem Jiang
Sent: Thu, Aug 16, 2018, 9:40 AM
Subject: Re: Performance tuning of ServiceComb Saga Pack
To: dev@servicecomb.apache.org

Hi Amos,

Alpha sends a response to the Omega once the message is updated into Redis; then we just store the transaction events into the database in an async way (we don't change the states here). The current Redis cluster provides persistent storage, which could save us a lot of effort. Now we just use Redis as a smaller table for tracking all the unfinished transaction statuses to get better performance. If the transaction is aborted, we can update the transaction in the DB and Redis at the same time; if either of those calls fails, I think we just keep trying to update the status.

Willem Jiang

Twitter: willemjiang
Weibo: 姜宁willem
Re: Performance tuning of ServiceComb Saga Pack
Thanks Willem, it looks like the Redis cluster is the better choice.

2018-08-16 9:57 GMT+08:00 Lance Ju:
> Here is an abstracted design of how Redis works:
> https://raw.githubusercontent.com/crystaldust/saga-performance-optimization/master/saga-redis.jpg
Re: Performance tuning of ServiceComb Saga Pack
Here is an abstracted design of how Redis works:
https://raw.githubusercontent.com/crystaldust/saga-performance-optimization/master/saga-redis.jpg

Willem Jiang wrote on Thu, Aug 16, 2018 at 9:40 AM:
> Hi Amos,
>
> Alpha sends a response to the Omega once the message is updated into
> Redis; then we just store the transaction events into the database in an
> async way (we don't change the states here).
> The current Redis cluster provides persistent storage, which could save us
> a lot of effort.
> Now we just use Redis as a smaller table for tracking all the unfinished
> transaction statuses to get better performance.
> If the transaction is aborted, we can update the transaction in the DB and
> Redis at the same time; if either of those calls fails, I think we just
> keep trying to update the status.
Re: Performance tuning of ServiceComb Saga Pack
Hi Amos,

Alpha sends a response to the Omega once the message is updated into Redis; then we just store the transaction events into the database in an async way (we don't change the states here). The current Redis cluster provides persistent storage, which could save us a lot of effort. Now we just use Redis as a smaller table for tracking all the unfinished transaction statuses to get better performance. If the transaction is aborted, we can update the transaction in the DB and Redis at the same time; if either of those calls fails, I think we just keep trying to update the status.

Willem Jiang

Twitter: willemjiang
Weibo: 姜宁willem

On Wed, Aug 15, 2018 at 10:48 PM, Zheng Feng wrote:
> Hi Willem,
>
> It makes sense to use Redis to store the pending transactions (I assume
> that you mean these are the "HOT" ones). But we should be very careful
> about "writing" the transaction status, and it should be stored in the
> database in the end. So I think we must make sure the transaction status
> in Redis and the DB is consistent, and we SHOULD NOT lose any status of
> the transaction.
>
> How will you use Redis and the database when storing the status of a
> transaction?
> 1. Write to Redis and let Redis sync to the database later; if that fails,
> roll back the transaction.
> 2. Write to both Redis and the database; if either of them fails, roll
> back the transaction.
>
> We need more detail :)
>
> Amos
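The flow Willem describes (acknowledge the Omega as soon as the hot store is updated, persist events to the DB asynchronously, and drop finished transactions from the hot store) could be sketched roughly like this. All names are hypothetical, and in-memory structures stand in for Redis and the database:

```python
import queue
import threading

redis_hot = {}             # stand-in for Redis: unfinished transactions only
db_events = []             # stand-in for the DB event table (cold/audit data)
event_queue = queue.Queue()

def persist_events_async():
    # Background writer: drains events into the DB without blocking the
    # request path. A None sentinel would stop it on shutdown.
    while True:
        event = event_queue.get()
        if event is None:
            break
        db_events.append(event)      # real code: batched INSERT
        event_queue.task_done()

writer = threading.Thread(target=persist_events_async, daemon=True)
writer.start()

def on_tx_event(tx_id, event_type):
    """Alpha's receive path: ack once the hot store reflects the event."""
    if event_type == "TxStartedEvent":
        redis_hot[tx_id] = "PENDING"
    elif event_type in ("TxEndedEvent", "SagaEndedEvent"):
        redis_hot.pop(tx_id, None)       # finished: no longer hot
    event_queue.put((tx_id, event_type))  # DB write happens later, async
    return "ACK"                          # Omega is unblocked here

on_tx_event("tx-1", "TxStartedEvent")
on_tx_event("tx-1", "TxEndedEvent")
event_queue.join()  # wait for the async writer to catch up
```

The design trade-off is exactly what the thread debates: the caller is unblocked before the DB write, so a crash between the ack and the async flush can lose events unless the hot store itself is durable.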
Re: Performance tuning of ServiceComb Saga Pack
Does it have the risk of losing data? What if we write the data to Redis and it fails to persist the data, e.g. because of a crash? What is the AOF mode of Redis? Does it guarantee data consistency?

2018-08-16 9:28 GMT+08:00 fu chengeng:
> Hi,
> Can we use Redis in a write-back way? Only write data to Redis, and let
> Redis use the AOF mode to persist the data. Alpha uses an asynchronous
> thread to put this 'cold data' into the DB and delete it from Redis.
> Alpha only reads and writes with Redis.
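For reference on the AOF question: Redis's append-only-file persistence is configured in redis.conf. With appendfsync everysec (the common setting), a crash can still lose up to roughly the last second of acknowledged writes, so AOF narrows the durability gap but does not close it completely.

```conf
# redis.conf -- enable append-only-file persistence
appendonly yes            # log every write to the AOF, replayed on restart
appendfsync everysec      # fsync once per second: bounded (~1s) data loss
# appendfsync always      # fsync on every write: most durable, slowest
# appendfsync no          # let the OS decide when to fsync: fastest, weakest
```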
Re: Performance tuning of ServiceComb Saga Pack
Hi
Can we use the write-back way with Redis? Only write data to Redis, and let Redis use the AOF mode to persist the data. Alpha uses an asynchronous thread to put the 'cold data' into the DB and delete it from Redis; Alpha only reads and writes with Redis.

Get Outlook for Android<https://aka.ms/ghei36>

From: Zheng Feng
Sent: Wednesday, August 15, 2018 10:48:34 PM
To: dev@servicecomb.apache.org
Subject: Re: Performance tuning of ServiceComb Saga Pack

Hi Willem,

It makes sense to use Redis to store the pending transactions (I assume you mean these are the "HOT" ones). But we have to be very careful when we "write" the transaction status, and it should be stored in the database in the end. So I think we must make sure the transaction status in Redis and in the DB is consistent, and we SHOULD NOT lose any status of the transaction.

How will you use Redis and the database when storing the status of a transaction?
1. Write to Redis, and Redis syncs to the database later; if that fails, roll back the transaction.
2. Write to both Redis and the database; if either of them fails, roll back the transaction.

We need more detail :)

Amos
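The write-back idea in the message above can be sketched as follows. This is a minimal sketch with hypothetical names (`WriteBackSketch` is not part of the actual Alpha code); plain maps stand in for Redis and the DB, and the event names mirror the Saga event types:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the write-back proposal: Alpha reads and writes only the "hot"
// store (standing in for Redis), while a background step moves finished
// ("cold") transactions to the audit store (standing in for the DB) and
// removes them from the hot store.
public class WriteBackSketch {
    static final Map<String, String> hotStore = new ConcurrentHashMap<>();   // Redis stand-in
    static final Map<String, String> coldStore = new ConcurrentHashMap<>();  // DB stand-in

    // Alpha records every event in the hot store only.
    static void recordEvent(String txId, String status) {
        hotStore.put(txId, status);
    }

    // Asynchronous pass: persist ended transactions to the DB and evict them
    // from Redis, keeping the hot set small.
    static void flushFinished() {
        hotStore.forEach((txId, status) -> {
            if (status.equals("TxEndedEvent") || status.equals("TxAbortedEvent")) {
                coldStore.put(txId, status);  // write-back to durable storage
                hotStore.remove(txId);        // pending transactions stay hot
            }
        });
    }

    public static void main(String[] args) {
        recordEvent("tx-1", "TxStartedEvent");
        recordEvent("tx-2", "TxEndedEvent");
        flushFinished();
        System.out.println(hotStore.containsKey("tx-1"));   // true: still pending
        System.out.println(coldStore.containsKey("tx-2"));  // true: moved to the DB
    }
}
```

The open question raised in the thread remains visible here: between the event landing in `hotStore` and `flushFinished()` running, the durable copy does not exist yet.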
Re: Performance tuning of ServiceComb Saga Pack
Hi Willem,

It makes sense to use Redis to store the pending transactions (I assume you mean these are the "HOT" ones). But we have to be very careful when we "write" the transaction status, and it should be stored in the database in the end. So I think we must make sure the transaction status in Redis and in the DB is consistent, and we SHOULD NOT lose any status of the transaction.

How will you use Redis and the database when storing the status of a transaction?
1. Write to Redis, and Redis syncs to the database later; if that fails, roll back the transaction.
2. Write to both Redis and the database; if either of them fails, roll back the transaction.

We need more detail :)

Amos

2018-08-15 8:48 GMT+08:00 Willem Jiang :
> Hi,
>
> With the help of JuZheng[1][2], we managed to deploy the saga-spring-demo[3]
> into K8s and started the JMeter tests for it. After running the test for a
> while, the DB CPU usage is very high and the response time is up to 2~3
> seconds per call.
>
> It looks like all the events are stored into the same database table
> and never cleaned.
> Now we are thinking of using Redis to store the hot data (the saga
> transactions which are not closed yet), and putting the cold data (which
> is used for auditing) into the database. In this way it keeps the event
> data smaller, and the event scanner[4] can just go through the unfinished
> saga transactions to fire the timeout event or the compensation event.
>
> Any thoughts?
>
> [1] https://github.com/apache/incubator-servicecomb-saga/pull/250
> [2] https://github.com/apache/incubator-servicecomb-saga/pull/252
> [3] https://github.com/apache/incubator-servicecomb-saga/tree/master/saga-demo/saga-spring-demo
> [4] https://github.com/apache/incubator-servicecomb-saga/blob/44491f1dcbb9353792cb44d0be60946e0e4d7a1a/alpha/alpha-core/src/main/java/org/apache/servicecomb/saga/alpha/core/EventScanner.java
>
> Willem Jiang
>
> Twitter: willemjiang
> Weibo: 姜宁willem
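Amos's option 2 can be sketched like this. It is a hypothetical `DualWriteSketch`, not the actual implementation: maps stand in for Redis and the DB, and a boolean flag simulates a DB failure so the compensating Redis write can be shown:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "write to both Redis and the database; if either fails, roll
// back": undo the write on whichever side succeeded when the other side
// fails, so the two stores never disagree.
public class DualWriteSketch {
    static final Map<String, String> redis = new HashMap<>(); // Redis stand-in
    static final Map<String, String> db = new HashMap<>();    // DB stand-in
    static boolean dbAvailable = true;                        // simulated DB failure switch

    static boolean writeStatus(String txId, String status) {
        String previous = redis.put(txId, status); // 1. write to Redis first
        if (!dbAvailable) {
            // 2. DB write failed: restore the old Redis value so both stores agree
            if (previous == null) {
                redis.remove(txId);
            } else {
                redis.put(txId, previous);
            }
            return false; // caller should abort or retry the transaction
        }
        db.put(txId, status);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(writeStatus("tx-1", "COMMITTED")); // true: both stores updated
        dbAvailable = false;
        System.out.println(writeStatus("tx-2", "COMMITTED")); // false: Redis write rolled back
        System.out.println(redis.containsKey("tx-2"));        // false: stores still agree
    }
}
```

Note that with a real Redis and a real DB this is still not atomic (the process can die between the two writes), which is exactly the concern raised later in the thread about retrying until both updates succeed.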
Performance tuning of ServiceComb Saga Pack
Hi,

With the help of JuZheng[1][2], we managed to deploy the saga-spring-demo[3] into K8s and started the JMeter tests for it. After running the test for a while, the DB CPU usage is very high and the response time is up to 2~3 seconds per call.

It looks like all the events are stored into the same database table and never cleaned.
Now we are thinking of using Redis to store the hot data (the saga transactions which are not closed yet), and putting the cold data (which is used for auditing) into the database. In this way it keeps the event data smaller, and the event scanner[4] can just go through the unfinished saga transactions to fire the timeout event or the compensation event.

Any thoughts?

[1] https://github.com/apache/incubator-servicecomb-saga/pull/250
[2] https://github.com/apache/incubator-servicecomb-saga/pull/252
[3] https://github.com/apache/incubator-servicecomb-saga/tree/master/saga-demo/saga-spring-demo
[4] https://github.com/apache/incubator-servicecomb-saga/blob/44491f1dcbb9353792cb44d0be60946e0e4d7a1a/alpha/alpha-core/src/main/java/org/apache/servicecomb/saga/alpha/core/EventScanner.java

Willem Jiang

Twitter: willemjiang
Weibo: 姜宁willem
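The hot/cold split proposed above can be sketched as follows. This hypothetical `TimeoutScannerSketch` is not how the real EventScanner works; it only illustrates why walking a small pending set is cheaper than scanning the full, ever-growing event table:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "smaller hot table" idea: the scanner walks only the
// unfinished transactions kept in the hot store, so its cost grows with the
// number of pending sagas rather than with the full event history.
public class TimeoutScannerSketch {
    static class PendingTx {
        final Instant deadline;
        PendingTx(Instant deadline) { this.deadline = deadline; }
    }

    // Hot store: unfinished saga transactions only; finished ones are
    // moved out to the audit table and never scanned again.
    static final Map<String, PendingTx> pending = new HashMap<>();

    // Return ids of transactions whose deadline has passed; the real scanner
    // would fire a timeout or compensation event for each of them.
    static List<String> findTimedOut(Instant now) {
        List<String> timedOut = new ArrayList<>();
        pending.forEach((id, tx) -> {
            if (tx.deadline.isBefore(now)) {
                timedOut.add(id);
            }
        });
        return timedOut;
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        pending.put("tx-slow", new PendingTx(now.minusSeconds(5))); // already past its deadline
        pending.put("tx-ok", new PendingTx(now.plusSeconds(60)));   // still within its deadline
        System.out.println(findTimedOut(now)); // [tx-slow]
    }
}
```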