Re: get_cache and multiple caches

2013-09-24 Thread Tom Evans
On Mon, Sep 23, 2013 at 5:55 AM, Anssi Kääriäinen
 wrote:
> On 09/20/2013 06:29 PM, Tom Evans wrote:
>>
>> On Fri, Sep 20, 2013 at 4:13 PM, Florian Apolloner
>>  wrote:

>>>> It seems more sensible to hook something that has the lifetime of the
>>>> request to the request, rather than stick it in TLS, keyed to the
>>>> thread serving the request.
>>>
>>>
>>> Jupp, sadly I don't see a sensible way around thread local storage here
>>> :(
>>>
>> All good points, I just have this mental "HMM" when I see TLS as the
>> solution for anything. TLS is already used for things like the
>> language code of the current request, which avoids you having to pass
>> around a state object or passing down lang_code all down the stack,
>> but means that things like URL reversing and resolving benchmarks are
>> slow (O(n²)) with USE_I18N on.
>
>
> Huh? What is the n here? And why would passing the lang_code down the stack
> help?
>
>  - Anssi

n is the number of URLs.

In LocaleRegexProvider, get_language() is called each time the regexp
is requested - not just each time the regexp is compiled.
LocaleRegexProvider is the base class of RegexURLPattern and
RegexURLResolver:

https://github.com/django/django/blob/master/django/core/urlresolvers.py#L160

Each time get_language() is called you do a TLS get. If you have a
large number of URLs, get_language() is called once for each URL
pattern that is checked against the current URL. If you have thousands
of URLs, and the current URL does not match (or matches one of the
last ones listed), then you have thousands of unnecessary TLS fetches
per URL resolve.

The fetches are unnecessary, since the layer above could look up the
current language once and pass it down. That pattern would not allow
the magic for regex() to be hidden behind a @property, so the
expensive TLS lookups happen inside regex() rather than in its caller.
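The per-pattern cost described above can be sketched roughly as follows. This is a hypothetical simplification, not the actual Django source; `get_language`, the class shape, and the pattern list are all stand-ins:

```python
import re
import threading

_active = threading.local()  # stand-in for Django's thread-local language store

def get_language():
    # One TLS lookup per call; cheap alone, but multiplied by every
    # pattern checked during a resolve.
    return getattr(_active, "language", "en")

class LocaleRegexProvider:
    """Hypothetical simplification of the class in urlresolvers.py."""

    def __init__(self, pattern):
        self._raw = pattern
        self._regex_dict = {}  # compiled regex cached per language code

    @property
    def regex(self):
        # get_language() runs on EVERY access, not only on first
        # compilation, so checking N patterns costs N TLS fetches.
        lang = get_language()
        if lang not in self._regex_dict:
            self._regex_dict[lang] = re.compile(self._raw)
        return self._regex_dict[lang]

patterns = [LocaleRegexProvider(r"^page/%d/$" % i) for i in range(1000)]

def resolve(path):
    # A non-matching URL walks all 1000 patterns: 1000 get_language()
    # calls (and thus 1000 TLS reads) for a single resolve.
    for candidate in patterns:
        if candidate.regex.search(path):
            return candidate
    return None
```

Hoisting the single get_language() call into resolve() and passing it down would remove all but one TLS read per resolve, at the cost of the @property interface.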

See also "URL dispatcher slow?" from last October, and:
  http://mindref.blogspot.co.uk/2012/10/python-web-routing-benchmark.html

Cheers

Tom

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-developers+unsubscr...@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at http://groups.google.com/group/django-developers.
For more options, visit https://groups.google.com/groups/opt_out.


Re: get_cache and multiple caches

2013-09-21 Thread Florian Apolloner


On Saturday, September 21, 2013 2:12:31 AM UTC+2, Curtis Maloney wrote:

> Is there anything else?
>

Ain't that enough? :p



Re: get_cache and multiple caches

2013-09-20 Thread Curtis Maloney
OK.  So the goals of this effort are:

1) to avoid resource over-commitment [e.g. too many connections]
2) to help relieve the burden of concurrency from the cache backends.

Issues to avoid are:
a) TLS is "slow" (citation, please?)
b) New API better damn well be worth it!

Is there anything else?

--
Curtis


On 21 September 2013 01:29, Tom Evans  wrote:

> On Fri, Sep 20, 2013 at 4:13 PM, Florian Apolloner
>  wrote:
> >> It seems more sensible to hook something that has the lifetime of the
> >> request to the request, rather than stick it in TLS, keyed to the
> >> thread serving the request.
> >
> >
> > Jupp, sadly I don't see a sensible way around thread local storage here
> :(
> >
>
> All good points, I just have this mental "HMM" when I see TLS as the
> solution for anything. TLS is already used for things like the
> language code of the current request, which avoids you having to pass
> around a state object or passing down lang_code all down the stack,
> but means that things like URL reversing and resolving benchmarks are
> slow (O(n²)) with USE_I18N on.
>
> The problem with tying everything to the request is that you end up
> with code that only works with requests or request-like-objects. The
> problem with not tying everything to the request is that things get
> slower. :(
>
> Cheers
>
> Tom
>



Re: get_cache and multiple caches

2013-09-20 Thread Florian Apolloner
Hi Tom,

On Friday, September 20, 2013 5:04:41 PM UTC+2, Tom Evans wrote:
>
> On the other hand each call to get_cache('foo') now requires access to 
> TLS. TLS is slow. Going through something slow to get to something 
> that is supposed to be a speed up... 
>

You are making a good point, though I do think TLS is still way faster
than anything waiting on IO (Python has both a pure-Python and a C
implementation of threading.local).

> Would it be better to leave the API and semantics of get_cache alone
> and provide a new way of accessing caches through the request object,
> leaving them cached on the request object for the duration of the
> request, and thus avoiding the need for TLS?
>

I would prefer to have one API, but I think it would be okay for cache 
backends themselves to specify that they shouldn't be put into 
thread-local storage (probably via a simple attribute). Does that sound 
like a solution to you? LocMemCache seems to be a candidate for that. 
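Such an attribute-driven lookup could look roughly like this. The attribute name `shareable`, the backend classes, and `get_cache`'s shape are all made up for illustration:

```python
import threading

class BaseCache:
    # Hypothetical flag: backends that are safe to share across threads
    # opt out of thread-local storage by setting this to True.
    shareable = False

class LocMemCache(BaseCache):
    shareable = True   # the candidate mentioned in the discussion

class PyLibMCCache(BaseCache):
    shareable = False  # keep one client per thread

_shared = {}
_local = threading.local()

def get_cache(alias, backend_cls):
    """Sketch of a get_cache that honours the per-backend flag."""
    if backend_cls.shareable:
        # One shared instance per alias for all threads.
        if alias not in _shared:
            _shared[alias] = backend_cls()
        return _shared[alias]
    # One instance per alias *per thread* for everything else.
    per_thread = getattr(_local, "caches", None)
    if per_thread is None:
        per_thread = _local.caches = {}
    if alias not in per_thread:
        per_thread[alias] = backend_cls()
    return per_thread[alias]
```

A shareable backend hands every thread the same object, while a non-shareable one is constructed once per thread on first use.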
  

> Requests aren't shared between threads, and so a 
> per request cache would be inherently thread safe. 
>

The same argument holds true for, say, Django db connections; but I 
doubt it's gonna fly; e.g. passing the request into get_cache as a 
storage object seems somewhat nasty (at least as long as we name it 
request ;))
 

> It seems more sensible to hook something that has the lifetime of the 
> request to the request, rather than stick it in TLS, keyed to the 
> thread serving the request.
>

Jupp, sadly I don't see a sensible way around thread local storage here :(

Cheers,
Florian 



Re: get_cache and multiple caches

2013-09-20 Thread Tom Evans
On Fri, Sep 20, 2013 at 4:13 PM, Florian Apolloner
 wrote:
>> It seems more sensible to hook something that has the lifetime of the
>> request to the request, rather than stick it in TLS, keyed to the
>> thread serving the request.
>
>
> Jupp, sadly I don't see a sensible way around thread local storage here :(
>

All good points, I just have this mental "HMM" when I see TLS as the
solution for anything. TLS is already used for things like the
language code of the current request, which avoids you having to pass
around a state object or passing down lang_code all down the stack,
but means that things like URL reversing and resolving benchmarks are
slow (O(n²)) with USE_I18N on.

The problem with tying everything to the request is that you end up
with code that only works with requests or request-like-objects. The
problem with not tying everything to the request is that things get
slower. :(

Cheers

Tom



Re: get_cache and multiple caches

2013-09-20 Thread Tom Evans
On Fri, Sep 20, 2013 at 3:10 PM, Florian Apolloner
 wrote:
> The main issue here isn't recreating the objects on demand, but the impact
> they have, eg a new memcached connection. Now imagine a complex system where
> each part issues get_cache('something') to get the cache

On the other hand each call to get_cache('foo') now requires access to
TLS. TLS is slow. Going through something slow to get to something
that is supposed to be a speed up...

> and you'll end up
> with a few connections per request instead of one.

Would it be better to leave the API and semantics of get_cache alone
and provide a new way of accessing caches through the request object,
leaving them cached on the request object for the duration of the
request, and thus avoiding the need for TLS?

The reason to use TLS is that requests are commonly served from
threaded servers, and cache clients may not be thread safe, so there
is a desire to stop different requests from simultaneously accessing
the same cache client. Requests aren't shared between threads, and so
a per-request cache would be inherently thread safe.

It seems more sensible to hook something that has the lifetime of the
request to the request, rather than stick it in TLS, keyed to the
thread serving the request.
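A request-scoped accessor along these lines could be sketched like this. Everything here is hypothetical: `Request` and `DictCache` are stand-ins, and `get_request_cache` is not a real Django API:

```python
class Request:
    """Minimal stand-in for an HttpRequest."""

class DictCache(dict):
    """Stand-in for a real cache backend instance."""

def create_cache(alias):
    # In Django this would build the backend configured under
    # CACHES[alias]; stubbed here so the sketch runs.
    return DictCache()

def get_request_cache(request, alias="default"):
    # Instances live exactly as long as the request. No TLS is needed:
    # a request is served by one thread, so the client is never shared.
    instances = request.__dict__.setdefault("_cache_instances", {})
    if alias not in instances:
        instances[alias] = create_cache(alias)
    return instances[alias]
```

Each request gets at most one client per alias; a second request (and hence a second thread) constructs its own, so no locking or thread-local keying is required.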

Cheers

Tom



Re: get_cache and multiple caches

2013-09-20 Thread Florian Apolloner
Hi Tom,

On Friday, September 20, 2013 3:29:03 PM UTC+2, Tom Evans wrote:
>
> Before you 
> go too far down the thread local route, could you verify that 
> retrieving cache objects from a thread local cache is in any way 
> faster than simply recreating them as demanded. 
>

The main issue here isn't recreating the objects on demand, but the 
impact recreating them has, e.g. a new memcached connection. Now 
imagine a complex system where each part issues get_cache('something') 
to get the cache, and you'll end up with several connections per 
request instead of one.
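The multiplication effect can be demonstrated with a stub client whose construction stands in for opening a connection (all names here are illustrative):

```python
connections_opened = 0

class FakeMemcachedClient:
    """Stand-in client: constructing one simulates opening a connection."""
    def __init__(self):
        global connections_opened
        connections_opened += 1

def get_cache(alias):
    # The behaviour under discussion: every call builds a fresh
    # instance, and with it a fresh connection.
    return FakeMemcachedClient()

# Three independent parts of one request each ask for the same cache:
for _ in range(3):
    get_cache("something")
# connections_opened is now 3, where one shared client would have done.
```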



Re: get_cache and multiple caches

2013-09-20 Thread Tom Evans
On Wed, Sep 18, 2013 at 12:29 PM, Curtis Maloney
 wrote:
> I started working on a CacheManager for dealing with thread local cache
> instances, as was suggested on IRC by more than one person.
>

The problem that Florian identified was that recreating cache
instances each time get_cache() was called could be costly. Before you
go too far down the thread local route, could you verify that
retrieving cache objects from a thread local cache is in any way
faster than simply recreating them as demanded.

Cheers

Tom



Re: get_cache and multiple caches

2013-09-20 Thread Curtis Maloney
Yeah... simpler solution is simpler :)

--
C



On 20 September 2013 17:04, Florian Apolloner  wrote:

>
>
> On Friday, September 20, 2013 8:58:25 AM UTC+2, Curtis Maloney wrote:
>>
>> I guess the remaining question to address is :  close()
>>
> Leave it as is I think.
>
>
>> Thinking as I type... it wouldn't hurt, also, to allow a cache backend to
>> provide an interface to a connection pool, so the manager can play friendly
>> with it.  If it doesn't have one, fall back to an instance-per-thread...
>> this would require still hooking request complete, but not so much for
>> "close" as "release".
>>
>
> If it can be added afterwards without too much trouble, I'd prefer to
> leave APIs for connection pools out for now, since it will make the
> patch smaller, which makes it easier to merge.
>
> Florian
>



Re: get_cache and multiple caches

2013-09-20 Thread Florian Apolloner


On Friday, September 20, 2013 8:58:25 AM UTC+2, Curtis Maloney wrote:
>
> I guess the remaining question to address is :  close()
>
Leave it as is I think.
 

> Thinking as I type... it wouldn't hurt, also, to allow a cache backend to 
> provide an interface to a connection pool, so the manager can play friendly 
> with it.  If it doesn't have one, fall back to an instance-per-thread... 
> this would require still hooking request complete, but not so much for 
> "close" as "release".
>

If it can be added afterwards without too much trouble, I'd prefer to 
leave APIs for connection pools out for now, since it will make the 
patch smaller, which makes it easier to merge.

Florian



Re: get_cache and multiple caches

2013-09-20 Thread Curtis Maloney
I guess the remaining question to address is :  close()

It looks like it was added to work around an issue with memcached, which
may or may not still be an issue [comments in tickets suggest it was a
design decision by the memcached authors].

Thinking as I type... it wouldn't hurt, also, to allow a cache backend to
provide an interface to a connection pool, so the manager can play friendly
with it.  If it doesn't have one, fall back to an instance-per-thread...
this would still require hooking request completion, but not so much for
"close" as "release".

--
Curtis



On 19 September 2013 01:33, Florian Apolloner  wrote:

> Hi,
>
>
> On Wednesday, September 18, 2013 1:29:25 PM UTC+2, Curtis Maloney wrote:
>>
>> 1) Can we share "ad-hoc" caches -- that is, ones created by passing more
>> than just the CACHES alias.
>>
> Imo no, you probably have a good reason if you create ad-hoc ones
>
>> 2) What to do about django.core.cache.cache ?
>>
> Has to stay for now, same as django.db.connection
>
>
>> A separate approach is to introduce a new API to provide access to the
>> shared, pre-configured caches, and retain get_cache for the old, ad-hoc,
>> non-shared caches.
>>
> I think it would be sensible if that API would mimic django.db.connections
>
> Florian.
>



Re: get_cache and multiple caches

2013-09-18 Thread Florian Apolloner
Hi,

On Wednesday, September 18, 2013 1:29:25 PM UTC+2, Curtis Maloney wrote:
>
> 1) Can we share "ad-hoc" caches -- that is, ones created by passing more 
> than just the CACHES alias.
>
Imo no, you probably have a good reason if you create ad-hoc ones.

> 2) What to do about django.core.cache.cache ?
>
Has to stay for now, same as django.db.connection
 

> A separate approach is to introduce a new API to provide access to the 
> shared, pre-configured caches, and retain get_cache for the old, ad-hoc, 
> non-shared caches.
>
I think it would be sensible if that API would mimic django.db.connections

Florian.



Re: get_cache and multiple caches

2013-09-18 Thread Curtis Maloney
I started working on a CacheManager for dealing with thread local cache
instances, as was suggested on IRC by more than one person.

Firstly, I propose we remove the schema://backend...  syntax for defining
cache configs, as it's no longer even documented [that I could find
quickly].

Secondly, have get_cache ask the Cache manager, instead of doing all the
lifting itself.

At this point, I could see two non-trivial things to solve:
1) Can we share "ad-hoc" caches -- that is, ones created by passing more
than just the CACHES alias.
2) What to do about django.core.cache.cache ?

As I was explaining this, something occurred to me: a wrapper object that
remembers the backend/location/params used to create the cache instance,
and deals with accessing the right thread-local instance.

A separate approach is to introduce a new API to provide access to the
shared, pre-configured caches, and retain get_cache for the old, ad-hoc,
non-shared caches.
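The wrapper-object idea could be sketched roughly like this. The class name `CacheHandle` and the stand-in `DictCache` backend are made up; the point is remembering the construction params and dispatching to a per-thread instance:

```python
import threading

class CacheHandle:
    """Hypothetical wrapper: remembers the backend class and params used
    to configure a cache, lazily builds one real instance per thread,
    and forwards attribute access to that instance."""

    def __init__(self, backend_cls, **params):
        self._backend_cls = backend_cls
        self._params = params
        self._threadlocal = threading.local()

    def _instance(self):
        inst = getattr(self._threadlocal, "instance", None)
        if inst is None:
            inst = self._threadlocal.instance = self._backend_cls(**self._params)
        return inst

    def __getattr__(self, name):
        # Delegate cache operations (get/set/delete/...) to this
        # thread's private instance.
        return getattr(self._instance(), name)

class DictCache:
    """Stand-in backend: each construction simulates a new connection."""
    def __init__(self, **params):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key, default=None):
        return self._data.get(key, default)

cache = CacheHandle(DictCache, timeout=300)
cache.set("answer", 42)
```

Code can hold the handle in a module-level variable as before; thread locality is handled inside the handle rather than by every caller.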

--
Curtis


On 7 September 2013 22:32, Florian Apolloner  wrote:

> Hi,
>
>
> On Monday, September 2, 2013 6:39:09 AM UTC+2, Curtis Maloney wrote:
>>
>> Whilst it's conceivable  some cache backend will have the smarts to
>> multiplex requests on a single connection, I suspect that's more the
>> exception than the case.
>>
>
> Agreed
>
>
>> Obviously, the default would be one per thread.
>>
>
> This is what the pylibmc backend does already and memcached should do too;
> the db backend is per thread too since it uses Django's db connection. We
> might wanna supply a building block so that not everone has to implement
> their own variant of the threadlocal stuff :)  [Locmem and filebased
> shouldn't cause any issues either way and can stay as they are I think]
>
> Of course, that could be simplified by just always creating a new instance
>> when more than just a name is provided. [or a "force new" keyword is
>> passed].
>>
>
> I wonder how common it is to supply more than just the name to get_cache
> (aside from tests maybe), I am +0 for making get_cache return "cached"
> instances for a single name and construct new ones for the other cases (+ a
> force keyword if someone really wants the old behavior).
>
> Cheers,
> Florian
>



Re: get_cache and multiple caches

2013-09-07 Thread Florian Apolloner
Hi,

On Monday, September 2, 2013 6:39:09 AM UTC+2, Curtis Maloney wrote:
>
> Whilst it's conceivable  some cache backend will have the smarts to 
> multiplex requests on a single connection, I suspect that's more the 
> exception than the case.
>

Agreed
  

> Obviously, the default would be one per thread.
>

This is what the pylibmc backend does already and memcached should do too; 
the db backend is per thread too since it uses Django's db connection. We 
might wanna supply a building block so that not everyone has to implement 
their own variant of the threadlocal stuff :)  [Locmem and filebased 
shouldn't cause any issues either way and can stay as they are I think]

> Of course, that could be simplified by just always creating a new instance 
> when more than just a name is provided. [or a "force new" keyword is 
> passed].
>

I wonder how common it is to supply more than just the name to get_cache 
(aside from tests, maybe). I am +0 for making get_cache return "cached" 
instances for a single name and construct new ones for the other cases (+ a 
force keyword if someone really wants the old behavior).

Cheers,
Florian



Re: get_cache and multiple caches

2013-09-01 Thread Curtis Maloney
Bit of a rambling, thinking-out-loud-ish post...

Whilst it's conceivable some cache backend will have the smarts to
multiplex requests on a single connection, I suspect that's the
exception rather than the rule.

However, that doesn't mean the cache backend can't be left with the
opportunity to manage per-thread connections.  Obviously, the default would
be one per thread.

The situation is complicated by the "ad-hoc config" option.  Otherwise,
each cache backend could have a factory method where the concerns of
thread locality can be managed.  You'd just tell it the name, and it
would take care of the rest.

Of course, that could be simplified by just always creating a new instance
when more than just a name is provided. [or a "force new" keyword is
passed].

--
Curtis



On 1 September 2013 22:24, Florian Apolloner  wrote:

> Hi,
>
>
> On Sunday, September 1, 2013 4:34:54 AM UTC+2, Curtis Maloney wrote:
>>
>> I've a possible solution -
>> https://github.com/funkybob/django/compare/simple_caches
>>
>> Basically, the existing API and behaviours are still available through
>> get_cache, but you can avoid duplicate instances of caches using
>> django.core.cache.caches[name]
>>
>
> As noted on the ticket (https://code.djangoproject.com/ticket/21012), I
> think this needs some more brainstorming (preferably on this ml) before we
> introduce a new public API to access a cache. My main concern is: does it
> even make sense to share the cache connection between threads? eg what
> happens if two threads want to read a value from the cache, will one thread
> block till python-memcached returned the value for the other thread?
>
> Cheers,
> Florian
>



Re: get_cache and multiple caches

2013-09-01 Thread Florian Apolloner
Hi,

On Sunday, September 1, 2013 4:34:54 AM UTC+2, Curtis Maloney wrote:
>
> I've a possible solution - 
> https://github.com/funkybob/django/compare/simple_caches
>
> Basically, the existing API and behaviours are still available through 
> get_cache, but you can avoid duplicate instances of caches using 
> django.core.cache.caches[name]
>

As noted on the ticket (https://code.djangoproject.com/ticket/21012), I 
think this needs some more brainstorming (preferably on this ml) before we 
introduce a new public API to access a cache. My main concern is: does it 
even make sense to share the cache connection between threads? E.g. what 
happens if two threads want to read a value from the cache - will one 
thread block until python-memcached returns the value for the other thread?

Cheers,
Florian 



Re: get_cache and multiple caches

2013-08-31 Thread Curtis Maloney
I've a possible solution -
https://github.com/funkybob/django/compare/simple_caches

Basically, the existing API and behaviours are still available through
get_cache, but you can avoid duplicate instances of caches using
django.core.cache.caches[name]
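One way such a `caches[name]` accessor might be structured is sketched below. This is illustrative only, not the code in the linked branch; the per-thread keying follows the thread-safety concerns raised elsewhere in this thread:

```python
import threading

class CacheHandler:
    """Sketch of a `caches[alias]` accessor mirroring the shape of
    django.db.connections: one backend instance per alias *per thread*.
    All names here are illustrative."""

    def __init__(self):
        self._local = threading.local()

    def __getitem__(self, alias):
        store = getattr(self._local, "instances", None)
        if store is None:
            store = self._local.instances = {}
        if alias not in store:
            store[alias] = self._create(alias)
        return store[alias]

    def _create(self, alias):
        # Stand-in for constructing the backend configured under
        # CACHES[alias]; a plain dict keeps the sketch runnable.
        return {"alias": alias}

caches = CacheHandler()
```

Repeated lookups of the same alias on one thread return the same object, so duplicate instances (and duplicate connections) within a request disappear.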

--
Curtis


On 31 August 2013 15:44, Curtis Maloney  wrote:

> As a simple short-term solution, why not cache calls to get_cache that
> don't pass additional arguments?  That is, ones that only get
> pre-configured caches.
>
> --
> Curtis
>
>
>
> On 25 August 2013 23:26, Florian Apolloner  wrote:
>
>> Hi,
>>
>> so when reviewing https://github.com/django/django/pull/1490/ I once
>> again ran over an issue with our current caching implementation: Namely
>> get_cache creates a new instance every time which is kind of suboptimal if
>> you don't store it as module level variable like we do with the default
>> cache. Are there any objections to make get_cache store those instances in
>> a dict and return those on request? It shouldn't cause to much problems,
>> since the current cache infrastructure expects you that you can share those
>> objects over multiple threads and requests anyways [And for caches which
>> don't support it like pylibmc we use threadlocals…]. Changing how get_cache
>> works could significantly reduce connections to the cache server depending
>> on how your views/templates are written.
>>
>> Thoughts?
>>
>> Cheers,
>> Florian
>>



Re: get_cache and multiple caches

2013-08-30 Thread Curtis Maloney
As a simple short-term solution, why not cache calls to get_cache that
don't pass additional arguments?  That is, ones that only get
pre-configured caches.
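That short-term approach could be as small as this. The `_create_cache` stub and `FakeCache` class are stand-ins for the existing construction machinery:

```python
class FakeCache:
    """Stand-in for a real backend instance."""
    def __init__(self, alias, **options):
        self.alias = alias
        self.options = options

def _create_cache(alias, **options):
    # Stand-in for the existing machinery that parses CACHES[alias]
    # and instantiates the backend.
    return FakeCache(alias, **options)

_instances = {}

def get_cache(alias, **options):
    """Memoize only bare lookups of pre-configured caches; calls that
    pass extra arguments keep the old always-new semantics."""
    if options:
        return _create_cache(alias, **options)
    if alias not in _instances:
        _instances[alias] = _create_cache(alias)
    return _instances[alias]
```

Ad-hoc callers are unaffected, while the common `get_cache('alias')` case stops creating duplicates.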

--
Curtis



On 25 August 2013 23:26, Florian Apolloner  wrote:

> Hi,
>
> so when reviewing https://github.com/django/django/pull/1490/ I once
> again ran over an issue with our current caching implementation: Namely
> get_cache creates a new instance every time, which is kind of suboptimal if
> you don't store it as a module-level variable like we do with the default
> cache. Are there any objections to making get_cache store those instances in
> a dict and return those on request? It shouldn't cause too many problems,
> since the current cache infrastructure expects that you can share those
> objects over multiple threads and requests anyways [And for caches which
> don't support it like pylibmc we use threadlocals…]. Changing how get_cache
> works could significantly reduce connections to the cache server depending
> on how your views/templates are written.
>
> Thoughts?
>
> Cheers,
> Florian
>
