Yup, makes sense. Thanks for the feedback. I agree that the external
caches are troublesome, and we'll likely be focusing on the internal
ones. Whether that manifests as a memcache-like implementation or
another db view is still an open question.

The other thing I like about in-process caching is the ability to put
it in a common (nova-common?) library where we can easily compute
hit/miss ratios and adjust accordingly.
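Roughly what I have in mind, as a toy sketch only -- nothing here is
real nova code, and the names (CountingCache, loader) are made up for
illustration:

```python
# Hypothetical in-process cache with hit/miss counters, the sort of
# thing a shared (nova-common style) library could expose so we can
# measure ratios and tune. Illustrative sketch, not a real API.
import time


class CountingCache(object):
    def __init__(self, ttl=60):
        self.ttl = ttl
        self._data = {}   # key -> (expires_at, value)
        self.hits = 0
        self.misses = 0

    def get(self, key, loader):
        """Return the cached value, or call loader() (e.g. a db hit)."""
        now = time.time()
        entry = self._data.get(key)
        if entry is not None and entry[0] > now:
            self.hits += 1
            return entry[1]
        self.misses += 1
        value = loader()
        self._data[key] = (now + self.ttl, value)
        return value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / float(total) if total else 0.0
```

Callers would use it like `cache.get('instance:42', lambda:
db_lookup(42))`, and we could log `hit_ratio()` periodically to decide
what is worth caching at all.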

-S


On 03/23/2012 12:02 AM, Mark Washenberger wrote:
> This is precisely my concern.
> 
> It must be brought up that with Rackspace Cloud Servers, nearly
> all client code routinely submits requests with a query parameter
> "cache-busting=<some random string>" just to get around problems with
> cache invalidation. And woe to the client that does not.
> 
> I get the feeling that once trust like this is lost, a project has
> a hard time regaining it. I'm not saying that we can avoid
> inconsistency entirely. Rather, I believe we will have to embrace
> some eventual-consistency models to reach the performance and
> scale we will ultimately need. But my sense is that generic
> caches are really appropriate only for write-once, or at least
> write-rarely, data. So personally I would rule out external
> caches entirely and try to be very judicious in selecting internal
> caches as well.
> 
> "Joshua Harlow" <harlo...@yahoo-inc.com> said:
> 
>> Just from experience.
>>
>> They do a great job. But the killer thing about caching is how you
>> handle cache invalidation.
>>
>> Just caching stuff is easy-peasy; making sure it is invalidated on all
>> servers in all conditions, not so easy...
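(To illustrate Joshua's point with a toy sketch: two app servers, each
with its own in-process cache over a shared store. Nothing here is real
nova code; the names are invented. A write that only invalidates
locally leaves the other server serving stale data.)

```python
# Hypothetical two-server setup: each Server has a private in-process
# cache over one shared backing store. Invalidating on only one server
# is exactly the failure mode being discussed.
store = {'flavor:1': 'm1.small'}


class Server(object):
    def __init__(self):
        self.cache = {}

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = store[key]   # miss: go to the store
        return self.cache[key]

    def write(self, key, value):
        store[key] = value
        self.cache.pop(key, None)          # invalidates *this* server only


a, b = Server(), Server()
a.read('flavor:1')
b.read('flavor:1')                         # both caches are now warm
a.write('flavor:1', 'm1.large')
print(a.read('flavor:1'))  # m1.large
print(b.read('flavor:1'))  # m1.small -- stale until b's entry expires
```

Getting that second server right in all conditions (restarts, network
partitions, races) is the hard part.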
>>
>> On 3/22/12 4:26 PM, "Sandy Walsh" <sandy.wa...@rackspace.com> wrote:
>>
>> We're doing tests to find out where the bottlenecks are; caching is the
>> most obvious solution, but there may be others. Tools like memcache do a
>> really good job of sharing memory across servers, so we don't have to
>> reinvent the wheel or hit the db at all.
>>
>> In addition to looking into caching technologies/approaches, we're
>> gluing together some tools for finding those bottlenecks. Our first
>> step will be finding them, then squashing them, however we can.
>>
>> -S
>>
>> On 03/22/2012 06:25 PM, Mark Washenberger wrote:
>>> What problems are caching strategies supposed to solve?
>>>
>>> On the nova compute side, it seems like streamlining db access and
>>> api-view tables would solve any performance problems caching would
>>> address, while keeping the stale data management problem small.
>>>
>>
> 

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
