Hi Janice,

This idea seems useful to me for detecting the state of the cinder-volume process more quickly, but I see another issue: if the back-end device fails, you still can't keep the cloud highly available or create volumes successfully, because the service is up but the device is down.
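To illustrate the point, here is a minimal sketch (the function names and the 60-second threshold are only illustrative, not Cinder's actual code): a fresh heartbeat proves the process is alive, but not that its back-end can actually serve volumes.

```python
import time

# Illustrative threshold: seconds without a heartbeat before a service
# is considered down (analogous in spirit to Cinder's service_down_time).
SERVICE_DOWN_TIME = 60

def service_alive(last_heartbeat, now):
    """Process-level liveness: the heartbeat was seen recently enough."""
    return (now - last_heartbeat) <= SERVICE_DOWN_TIME

def service_usable(last_heartbeat, backend_ok, now):
    """A scheduler should require both the process and its back-end."""
    return service_alive(last_heartbeat, now) and backend_ok

now = time.time()
# Heartbeat is fresh, but the back-end device has failed:
print(service_alive(now - 10, now))          # True  - the process looks up
print(service_usable(now - 10, False, now))  # False - but volume ops will fail
```

So any improvement to service-state detection still needs the back-end state reported alongside it.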
So what I want to say is: maybe we first need to consider detecting and reporting the device state [1], and then consider improving the service layer if we still need to.

[1] https://review.openstack.org/#/c/252921/

2015-12-28 9:18 GMT+08:00 <[email protected]>:

>> Hmm, I see. There's this spec[1] that was discussed in the past with a
>> similar proposal. There's a spec[2] with some other points on the
>> discussion that I think Janice forgot to mention.
>>
>> Erlon
>>
>> [1] https://review.openstack.org/#/c/176233/
>> [2] https://review.openstack.org/#/c/258968/
>>
>> On Tue, Dec 22, 2015 at 12:16 PM, Michał Dulko <[email protected]>
>> wrote:
>> On 12/22/2015 01:29 PM, Erlon Cruz wrote:
>> > Hi Li,
>> >
>> > Can you give a quick background on service groups (or links to one?
>> > The spec you linked only describes the process for Nova to change from
>> > what they are using to Tooz). Also, what are the use cases and
>> > benefits of using this?
>> >
>> > Erlon
>>
>> This is simply an idea to be able to use something more sophisticated
>> than DB heartbeats to monitor service states. With Tooz implemented for
>> that, we would be able to use, for example, ZooKeeper to know about a
>> service failure in a matter of seconds instead of around a minute. This
>> would shrink the window in which c-sch doesn't yet know that c-vol
>> failed and sends RPC messages to a service that will never answer. I
>> think there are more use cases related to service monitoring and
>> failover.
>>
>> Service groups probably isn't the correct name for the proposed
>> enhancement - we have this concept implemented in some form already,
>> but the proposed idea seems to be about making it pluggable.
>
> Hi Erlon and Michal,
>
> Sorry for responding to you so late.
>
> The Cinder ServiceGroup is used for getting the state of services
> quickly. Use cases include:
>
> 1) As an admin, I want to know each Cinder service's state, so that I
> can take action to keep the cloud highly available if any service is
> down.
> 2) As a user, I want my volumes not to be scheduled to failed
> cinder-volume instances.
>
> My colleague and I have posted the spec[1] for ServiceGroup in Cinder.
>
> Janice
>
> [1] https://review.openstack.org/#/c/258968/
>
>
> From: Erlon Cruz <[email protected]>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <[email protected]>
> Date: 2015/12/23 04:04
> Subject: Re: [openstack-dev] [cinder] [nova] whether the ServiceGroup in
> Cinder is necessary
> ________________________________
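The window Michał describes can be sketched as a toy model (the numbers below are illustrative, not actual Cinder or ZooKeeper defaults, and this is not Tooz's real logic): with DB heartbeats, a dead service is only noticed once its stale timestamp crosses the down-time threshold at the next check, while a session-based backend like ZooKeeper expires the service's ephemeral membership within its much shorter session timeout.

```python
# Illustrative worst-case failure-detection windows (seconds).
DB_SERVICE_DOWN_TIME = 60  # DB heartbeat: stale-timestamp threshold
ZK_SESSION_TIMEOUT = 10    # ZooKeeper: ephemeral session timeout

def detection_delay(died_at, check_times, window):
    """How long the scheduler believed a dead service was still up.

    A service that last heartbeated at `died_at` is only marked down at
    the first check occurring after `died_at + window`.
    """
    for t in sorted(check_times):
        if t > died_at + window:
            return t - died_at
    return None  # never detected within the checks given

# A c-vol dies at t=0; the scheduler re-checks every 10 seconds.
checks = list(range(0, 121, 10))
print(detection_delay(0, checks, DB_SERVICE_DOWN_TIME))  # 70
print(detection_delay(0, checks, ZK_SESSION_TIMEOUT))    # 20
```

The point is the shape of the comparison, not the exact numbers: shrinking the window directly shrinks the time c-sch keeps sending RPC messages to a c-vol that will never answer.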
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
