It makes sense.

zhaojun <[email protected]> 于2019年5月30日周四 下午5:32写道:

> We have set the scope to 'provided', so it will not be included in the
> release.
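>
> For reference, a minimal sketch of such a declaration in the pom (the
> narayana-jta coordinates and version property here are illustrative):
>
>     <dependency>
>         <groupId>org.jboss.narayana.jta</groupId>
>         <artifactId>narayana-jta</artifactId>
>         <version>${narayana.version}</version>
>         <!-- 'provided' keeps the LGPL artifact out of the binary release -->
>         <scope>provided</scope>
>     </dependency>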
>
> ------------------
> Zhao Jun
> Apache Sharding-Sphere & ServiceComb
>
> > On May 30, 2019, at 4:37 PM, Zheng Feng <[email protected]> wrote:
> >
> > Well, it looks good to me, but I have to mention that Narayana is
> > LGPL-licensed, and I think you have to exclude this artifact when doing
> > the release.
> >
> > zhaojun <[email protected]> wrote on Thu, May 30, 2019 at 9:32 AM:
> >
> >> Hi Zheng,
> >>
> >> FYI, I have moved sharding-transaction-xa-narayana to our dev branch.
> >>
> >> You can see here:
> >>
> >>
> >> https://github.com/apache/incubator-shardingsphere/tree/dev/sharding-transaction/sharding-transaction-2pc/sharding-transaction-xa/sharding-transaction-xa-narayana
> >>
> >> ------------------
> >> Zhao Jun
> >> Apache Sharding-Sphere & ServiceComb
> >>
> >>> On May 28, 2019, at 9:49 AM, zhaojun <[email protected]> wrote:
> >>>
> >>> I have just created another issue [1] for extending Atomikos persistence.
> >>>
> >>> [1] https://github.com/apache/incubator-shardingsphere/issues/2455
> >>>
> >>> ------------------
> >>> Zhao Jun
> >>> Apache Sharding-Sphere & ServiceComb
> >>>
> >>>
> >>>> On May 28, 2019, at 9:08 AM, Zhao Jun <[email protected]> wrote:
> >>>>
> >>>> Yeah, for ShardingSphere we are also considering adding group and
> >>>> application id configuration.
> >>>> We plan to implement it in the near future.
> >>>>
> >>>>
> >>>>> On May 27, 2019, at 10:18 PM, Zheng Feng <[email protected]> wrote:
> >>>>>
> >>>>> OK, I understand, and we have tested a similar situation in the
> >>>>> Kubernetes/OpenShift environment. The most important thing is that we
> >>>>> have to make sure the restarted pod has the same nodeIdentifier in the
> >>>>> Narayana configuration.
> >>>>>
> >>>>> zhaojun <[email protected]> wrote on Mon, May 27, 2019 at 9:40 PM:
> >>>>>
> >>>>>> In a cloud-native architecture, if one instance crashes, it can fail
> >>>>>> over to another available instance.
> >>>>>> It is better to make instances stateless, so we should also save the
> >>>>>> transaction log in a database or other shareable storage.
> >>>>>> That is why we are considering developing an SPI for transaction log
> >>>>>> persistence and recovery; Narayana is one implementation of it.
> >>>>>> For Atomikos, we also plan to extend it.
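> >>>>>>
> >>>>>> A rough sketch of the shape such an SPI could take (the interface and
> >>>>>> method names below are hypothetical, not a finalized API):
> >>>>>>
> >>>>>>     // Hypothetical SPI: pluggable storage for XA transaction logs, so a
> >>>>>>     // restarted, stateless instance can recover in-doubt transactions.
> >>>>>>     public interface TransactionLogStorage {
> >>>>>>         void save(String xid, byte[] logRecord);   // persist a log entry
> >>>>>>         byte[] load(String xid);                   // read it back on recovery
> >>>>>>         void remove(String xid);                   // clean up after completion
> >>>>>>     }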
> >>>>>>
> >>>>>> ------------------
> >>>>>> Zhao Jun
> >>>>>> Apache Sharding-Sphere & ServiceComb
> >>>>>>
> >>>>>>
> >>>>>>> On May 27, 2019, at 8:12 PM, Zheng Feng <[email protected]> wrote:
> >>>>>>>
> >>>>>>> Yeah, Narayana can persist the transaction log into the backend
> >>>>>>> database by using the JDBC configuration. I will take a look at the
> >>>>>>> issue. Also, why does ShardingSphere need this feature?
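> >>>>>>>
> >>>>>>> For reference, a minimal sketch of that JDBC object store
> >>>>>>> configuration (the property names are Narayana's; the driver, URL and
> >>>>>>> credentials are placeholders):
> >>>>>>>
> >>>>>>>     ObjectStoreEnvironmentBean.objectStoreType=com.arjuna.ats.internal.arjuna.objectstore.jdbc.JDBCStore
> >>>>>>>     ObjectStoreEnvironmentBean.jdbcAccess=com.arjuna.ats.internal.arjuna.objectstore.jdbc.accessors.DynamicDataSourceJDBCAccess;ClassName=com.mysql.jdbc.Driver;URL=jdbc:mysql://127.0.0.1:3306/txlog;User=root;Password=root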
> >>>>>>>
> >>>>>>> zhaojun <[email protected]> wrote on Mon, May 27, 2019 at 4:08 PM:
> >>>>>>>
> >>>>>>>> Hi Zheng,
> >>>>>>>>
> >>>>>>>> I have heard that the Narayana transaction manager provides a
> >>>>>>>> powerful feature: persisting the transaction log into a database.
> >>>>>>>> So we would prefer that you write an example of using it; you can
> >>>>>>>> reference the Atomikos example here [1].
> >>>>>>>> Also, I have created an issue for that, please see here [2].
> >>>>>>>>
> >>>>>>>> [1] https://github.com/apache/incubator-shardingsphere-example/tree/dev/sharding-jdbc-example/transaction-example/transaction-2pc-xa-example
> >>>>>>>> [2] https://github.com/apache/incubator-shardingsphere-example/issues/143
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> If you have any questions, please feel free to let me know. Thanks.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> ------------------
> >>>>>>>> Zhao Jun
> >>>>>>>> Apache Sharding-Sphere & ServiceComb
> >>>>>>>>
> >>>>>>>>
