The idea of making a function was to stay DRY: on one hand, create the
dictionary of id representations once... then use the same function to
update it whenever a record gets created and a new representation has to
be added. I could have done what you explain, a plain cache.ram()
assignment and an
It's not clear what you are trying to achieve. If this code is in a model
file, then "id_represent" will never be in globals() before it is called
(each request is executed in an isolated, ephemeral environment, so
subsequent requests will not see the "id_represent" from previous
requests).
I always use cache.ram... The function and the function call are in the
models, so the variable is accessible from every controller file...
With Redis, this works:
```python
from gluon.contrib.redis_cache import RedisCache
cache.redis = RedisCache('localhost:6379', db=None, debug=True,
                         with_lock=False, password=None)
```
This is true for any other cache except cache.ram right?
If so, there is no gain with cache.redis the way I use it...
@Anthony, are you sure about the issue with uwsgi/nginx and cache.ram dict
update?
I guess I should start looking at how to get rid of these global dicts
while not degrading
Yes, this may be an option (updating the whole dict in Redis)... In the
meantime I'll try to get rid of them, if I can manage that...
:)
Thanks Anthony.
Richard
On Thu, Jan 14, 2016 at 12:06 PM, Anthony wrote:
> On Thursday, January 14, 2016 at 11:12:12 AM UTC-5, Richard wrote:
>>
>> This
>
> So, my main issue with both cache.ram and cache.redis is that new id
> representations never get added to the dict "permanently". In the case of
> cache.ram, the issue may come from what Anthony explains, because I use
> uwsgi/nginx. But I have made some tests with Redis and the issue still
>
@Niphold, I just sent a PR with improvements, mainly docstrings and PEP8,
to the Redis cache contrib...
:D
Richard
On Thu, Jan 14, 2016 at 11:12 AM, Richard Vézina <
ml.richard.vez...@gmail.com> wrote:
> This is true for any other cache except cache.ram right?
>
> If so, there is no gain with
On Thursday, January 14, 2016 at 11:12:12 AM UTC-5, Richard wrote:
>
> This is true for any other cache except cache.ram right?
>
Right. cache.ram works because it doesn't have to pickle a Python object
and put it into external storage (and therefore create a fresh copy of the
stored object via
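Anthony's point can be sketched in plain Python (a hypothetical illustration, not web2py code; `ram_store` and `redis_store` are stand-in dicts): cache.ram hands back a reference to the stored object, while a pickling backend hands back a fresh copy on every read, so in-place updates to the copy are lost.

```python
import pickle

# In-memory cache: the store keeps a *reference* to the object, so
# mutating the retrieved dict is visible on every later read within
# the same process (the cache.ram behaviour).
ram_store = {}
ram_store['ids'] = {1: 'one'}
d = ram_store['ids']          # same object, not a copy
d[2] = 'two'                  # in-place update
assert ram_store['ids'] == {1: 'one', 2: 'two'}

# External cache (Redis, disk): the object is pickled on write and a
# *fresh copy* is unpickled on every read, so mutating the copy does
# not change what is stored.
redis_store = {}
redis_store['ids'] = pickle.dumps({1: 'one'})
copy1 = pickle.loads(redis_store['ids'])
copy1[2] = 'two'              # mutates only the local copy
copy2 = pickle.loads(redis_store['ids'])
assert copy2 == {1: 'one'}    # the stored value is unchanged
```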
I don't understand something... I have this:
```python
def set_dict_test():
    if 'dict123' not in globals():
        print 'dict123 not in globals: %s' % str('dict123' not in globals())
        global dict123
        dict123 = cache.ram('dict123', lambda: {1: 1, 2: 2, 3: 3},
                            time_expire=None)
```
>
> If I don't use cache.ram, dict123 is in globals() and the function
> returns the else: part of the function...
>
What do you mean by "if I don't use cache.ram"? Are you saying you are
using cache.disk, or no cache at all? If the latter, how is dict123 in
globals() (i.e., where do you define
Forget about my last message, I was making a mistake in my print statements
On Thu, Jan 14, 2016 at 3:40 PM, Richard Vézina wrote:
> There is something I don't understand... I put a couple of print
> statements to see if my cached vars were in globals() and I discovered
There is something I don't understand... I put a couple of print
statements to see if my cached vars were in globals(), and I discovered
that my var was never there...
I am completely lost... If it passes through my "if", my dict will be
recreated on each request...
:(
Richard
On Thu, Jan 14, 2016 at
Hello Simone,
Thanks for jumping into this thread... I understand what you're saying...
The thing is, I need this dict to be global, and I thought it would be
cleaner to have these dicts created and updated by the same function.
So, my main issue with both cache.ram and cache.redis is that new id
Redis keys stick around forever...
I figured out most of how to use the Niphold contrib, though it seems that
created cache elements stay in Redis forever... I restart the server and
they are still there...
My issue looks like it is still there... There may be something wrong in my
logic...
It's like if
Ahem... cache.ram behaves differently than ANY other backend because it
just stores a reference to the computed value.
That's why you can do a dict.update() without explicitly setting a new
cached value... but you'd need to do so if you see it from a "flow"
perspective.
What you are
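Seen from that "flow" perspective, the pattern with a pickling backend is read, update, write back. A minimal sketch under assumed names (`cache_set`/`cache_get` are hypothetical helpers standing in for a real backend, here backed by a plain dict of pickled blobs):

```python
import pickle

# Stand-in for an external cache backend (Redis, disk, ...).
store = {}

def cache_set(key, value):
    # serialize on write, as a pickling backend would
    store[key] = pickle.dumps(value)

def cache_get(key):
    # every read deserializes a fresh copy
    return pickle.loads(store[key])

cache_set('id_represent', {1: 'Alice'})
d = cache_get('id_represent')   # fresh copy, not the stored object
d[2] = 'Bob'                    # updating the copy alone is not enough...
cache_set('id_represent', d)    # ...the updated dict must be written back
assert cache_get('id_represent') == {1: 'Alice', 2: 'Bob'}
```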
On Wednesday, January 13, 2016 at 3:18:10 PM UTC-5, Richard wrote:
>
> Redis keys stick around forever...
>
> I figured out most of how to use the Niphold contrib, though it seems that
> created cache elements stay in Redis forever... I restart the server and
> they are still there...
>
Are you
Errata corrige on BTW2: on Redis, time_expire=None results in the key being
stored at most one day. You can always do time_expire=30*24*60*60 for 30
days' worth.
Things to know: only cache.disk and cache.ram can effectively cache a value
indefinitely and enable that strange behaviour of retrieving the
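As a quick sanity check on the numbers above:

```python
# Seconds in one day (the default cap Simone describes for
# time_expire=None on the Redis backend)
one_day = 24 * 60 * 60
# The 30-day value suggested above
thirty_days = 30 * 24 * 60 * 60
assert one_day == 86400
assert thirty_days == 2592000 == 30 * one_day
```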
Are you using nginx/uwsgi? If so, I believe cache.ram would not be shared
across the different uwsgi worker processes. You might consider switching
to the Redis cache.
Anthony
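The underlying issue is process isolation rather than web2py itself. A minimal sketch with multiprocessing (an assumed analogy for uwsgi workers, not uwsgi code) shows why a module-level dict mutated in one process never shows up in another:

```python
from multiprocessing import Process, Queue

# Module-level state, analogous to a cache.ram dict in one uwsgi worker.
shared = {'count': 0}

def worker(q):
    # Runs in a child process that has its own copy of `shared`.
    shared['count'] += 1
    q.put(shared['count'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    p.join()
    assert q.get() == 1          # the child saw its own increment...
    assert shared['count'] == 0  # ...but the parent's copy is untouched
```

An out-of-process store such as Redis sidesteps this because every worker talks to the same server.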
On Wednesday, January 13, 2016 at 11:26:16 AM UTC-5, Richard wrote:
>
> Hello,
>
> Still struggle with this. I don't
Hello,
Still struggling with this. I don't understand why the cached dict is not
updated in real time...
It gets updated, but there is a strange delay.
Thanks
Richard
On Mon, Jan 4, 2016 at 4:18 PM, Richard Vézina
wrote:
> UP here!
>
> Any help would be appreciated...
>
>
Ha!!
Yes, nginx/uwsgi...
That's what I was suspecting; it looks like the dict was kind of unique per
user...
I think we should leave a note somewhere in the book about this issue...
I was in the process of exploring Redis cache or memcached just to see if
there was any improvement. I will look into
UP here!
Any help would be appreciated...
Richard
On Mon, Dec 21, 2015 at 10:22 PM, Richard
wrote:
> Hello,
>
> I am still under 2.9.5. I have a simple dict cached in RAM which never
> expires, that I update when new key values are added to the system... Mainly
> the
Hello,
I am still under 2.9.5. I have a simple dict cached in RAM which never
expires, that I update when new key values are added to the system... Mainly
the dict contains ids and their representations...
It works flawlessly in dev, but once I pushed to prod, it seems that the
cached dict takes