It's 20191227.1
On Friday, February 21, 2020 at 12:48:32 AM UTC-3, Massimo Di Pierro wrote:
Can you check which version of pydal you are using?
Need to decide whether to make this the default behavior
On Tuesday, 18 February 2020 03:35:47 UTC-8, Marcello wrote:
Hi Massimo,
Great!
Now it works fine...
Thanks
On Tuesday, February 18, 2020 at 12:22:04 AM UTC-3, Massimo Di Pierro wrote:
sorry. My bad. try:
+self._adapter.reconnect()
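The one-character nature of this fix is easy to miss in plain text: the earlier debugging patch referenced the module-level name `db` inside a method, where it does not exist, while the fixture must reconnect through `self`. A minimal, self-contained sketch of the failure and the fix, using stand-in classes (not the real pydal API):

```python
class FakeAdapter:
    """Stand-in for a pydal adapter; just counts reconnect() calls."""
    def __init__(self):
        self.reconnects = 0

    def reconnect(self):
        self.reconnects += 1


class BuggyDAL:
    def __init__(self):
        self._adapter = FakeAdapter()

    def on_request(self):
        # Looking up the global name `db` here raises NameError,
        # matching the NameError reported in this thread.
        db._adapter.reconnect()


class FixedDAL:
    def __init__(self):
        self._adapter = FakeAdapter()

    def on_request(self):
        # The fix: the fixture reconnects through itself.
        self._adapter.reconnect()


try:
    BuggyDAL().on_request()
except NameError as e:
    print("buggy version:", e)

fixed = FixedDAL()
fixed.on_request()
print("reconnect calls:", fixed._adapter.reconnects)
```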
On Sunday, 16 February 2020 19:17:25 UTC-8, Marcello wrote:
I got this error:
File "/home/parra/py4web/py4web/core.py", line 239, in on_request
db._adapter.reconnect()
NameError: name 'db' is not defined
On Sunday, February 16, 2020 at 10:22:48 PM UTC-3, Massimo Di Pierro wrote:
The db fixture should automatically try to reconnect.
To help me debug, can you edit py4web/core.py and add the line below:

class DAL(pydal.DAL, Fixture):
    def on_request(self):
        threadsafevariable.ThreadSafeVariable.restore(ICECUBE)
+       db._adapter.reconnect()

Does it solve the problem?
I'm on the latest version...
Shouldn't it reconnect to the db on every page load?
If the server goes an hour without serving pages, a reconnection is
needed... or not?
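Reconnecting unconditionally on every request is wasteful; a common middle ground is a cheap liveness check ("ping") that only reconnects when the pooled connection has actually died, e.g. after a long idle period exceeding MySQL's wait_timeout. A hedged sketch with stand-in objects, not the real pydal adapter API:

```python
class StaleConnectionError(Exception):
    pass


class PooledConnection:
    """Stand-in for a DB connection that a server-side timeout can kill."""
    def __init__(self):
        self.alive = True

    def execute(self, sql):
        if not self.alive:
            raise StaleConnectionError("MySQL server has gone away")
        return "ok"


def checked_connection(conn, connect):
    """Ping the connection; reconnect only if it is actually dead."""
    try:
        conn.execute("SELECT 1")  # cheap liveness probe
        return conn
    except StaleConnectionError:
        return connect()  # replace the dead connection


# usage: simulate an overnight idle period killing the connection
conn = PooledConnection()
conn.alive = False  # server closed it after wait_timeout
conn = checked_connection(conn, PooledConnection)
print(conn.execute("SELECT 1"))  # live again
```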
On Saturday, February 15, 2020 at 8:30:58 AM UTC-3, Massimo Di Pierro wrote:
My guess is that there is some timeout in the mysql config.
Can you repro with the latest py4web?
On Friday, 14 February 2020 14:31:58 UTC-8, Marcello wrote:
Hi,
Any information about this?
Is nobody using py4web with mysql?
On Tuesday, February 11, 2020 at 6:58:11 PM UTC-3, Marcello wrote:
>
> Hello,
>
> I'm trying py4web and found something strange.
> I don't know if I'm doing something wrong.
>
> I'm having a "MySQL server has gone away" error if
Yes- I've documented this issue in a new question - please
see: https://groups.google.com/d/msg/web2py/A7P4HoST-Lg/NZ3kmTTwVG0J
On Wednesday, April 17, 2013 3:08:38 AM UTC-4, Niphlod wrote:
everyone with a different unique_key ?
On Wednesday, April 17, 2013 12:43:35 AM UTC+2, Yarin wrote:
If you're storing sessions in the database, an update is made for every
request that changes the session. Can you confirm that this would be
the case (i.e. your app does a lot of session.whatever changes)?
On Tuesday, April 16, 2013 11:09:50 PM UTC+2, Yarin wrote:
We've had our
Even so, 50 seconds spent on a single-row update (that has a PK involved,
because the update is somewhat like

    update session_table
    set locked = False,
        client_ip = blablabla,
        modified_datetime = now,
        session_data = blablablabla,
        unique_key = blablabla
    where id = number

) is a ginormous amount of time. Any
Hi Niphlod
- We do basic session updates on some but not all requests- but as you
point out, that doesn't explain the huge update times or their sporadic
nature, especially with our light load of under 100 requests an hour.
- The db and app server are on the same private rackspace
--
You received this message because you are subscribed to the Google Groups
web2py-users group.
Largest session_data fields were 12 KiB
On Tuesday, April 16, 2013 5:49:45 PM UTC-4, Niphlod wrote:
Other than a large blob/object in the session, I really can't see how an
update so simple (syntax-wise) takes 50 seconds (when the minimum reported
time is 0.8 ms) on the mysql end.
edit: did you
OK, now I've run into some special weirdness- I don't know if it's related,
but it makes no sense to me:
When I run a count of session records grouped by client IP, I get some very
big numbers at the top end:
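The aggregate being described can be reproduced with plain SQL; here is a self-contained sqlite3 sketch of a per-IP session count (table and column names are illustrative, not web2py's actual session schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER PRIMARY KEY, client_ip TEXT)")
rows = [("10.0.0.1",)] * 5 + [("10.0.0.2",)] * 2 + [("10.0.0.3",)]
conn.executemany("INSERT INTO sessions (client_ip) VALUES (?)", rows)

# count session records grouped by client IP, biggest first
for ip, n in conn.execute(
    "SELECT client_ip, COUNT(*) AS n FROM sessions "
    "GROUP BY client_ip ORDER BY n DESC"
):
    print(ip, n)
```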
Massimo,
Accessing revision 1854 results in an Internal Server Error.
Kind regards,
Annet.
Something like this is now in trunk. Just do
db = DAL(...)
please take a look.
On Apr 21, 1:12 am, Igor Gassko gas...@gmail.com wrote:
Could you please post on this thread once there's a built-in solution
for keeping pooled connections alive?
For now, I've seen that you may end up with several dead connections in
the pool, so a slightly better solution might be as follows:

for x in range(10):
    try:
        db =
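The snippet above is cut off in the archive; a complete version of that retry idea might look like the following sketch, with a stand-in connect function since the original DAL(...) call is truncated:

```python
class ConnectionError_(Exception):
    pass


def make_flaky_connect(failures):
    """Stand-in for DAL(...): fails `failures` times, then succeeds."""
    state = {"left": failures}

    def connect():
        if state["left"] > 0:
            state["left"] -= 1
            raise ConnectionError_("dead pooled connection")
        return "db-handle"

    return connect


def connect_with_retries(connect, attempts=10):
    # keep drawing from the pool until we get a live connection
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError_:
            continue
    raise ConnectionError_("no live connection after %d attempts" % attempts)


db = connect_with_retries(make_flaky_connect(failures=3))
print(db)  # prints "db-handle" once a live connection is drawn
```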
Thanks. The workaround works, even when keeping the pool. :)
D.
try:
    db = DAL('mysql://a:b...@localhost/c', pool_size=5)
except:
    db = DAL('mysql://a:b...@localhost/c', pool_size=5)
Hi Massimo, please, were you able to look at it?
I'm getting the same error relatively often, several times a day on a
site with about 1 daily requests.
Thanks :)
David
On Feb 23, 9:31 am, mdipierro mdipie...@cs.depaul.edu wrote:
will look into this.
For now do this:

try:
    db = DAL('mysql://a:b...@localhost/c', pool_size=5)
except:
    db = DAL('mysql://a:b...@localhost/c', pool_size=5)

It should fix the problem. If not, set pool_size=0 in the second call.
On Mar 23, 1:40 pm, David Zejda d...@atlas.cz wrote:
Hi Massimo, please, were you able to
will look into this.
On Feb 23, 12:46 am, Kevin Bowling kevin.bowl...@gmail.com wrote:
Sorry, my connection is very slow...
It cannot be predicted when a connection will fail, so if it fails in the
middle of a transaction, i.e. when serving a request, it will generate a
ticket, but it should do so only once, since that connection will be
discarded (not recycled) and a new one will be placed in the pool.
Yes,
I have an app and did a 5 connection pool. It seems MySQL by default
closes connections every 8 hours. Therefore, if nobody accesses the
app overnight, it dies.
I know other things that use connection pools like Openfire XMPP
server poll it quite regularly to keep it open. I don't know
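The polling approach Kevin describes is straightforward to sketch: issue a trivial query on any pooled connection that has been idle too long, at a period well under the server's wait_timeout (28800 seconds by default on MySQL). Names here are hypothetical stand-ins; real code would use the driver's ping/execute API from a background thread or timer:

```python
import time


class PooledConn:
    """Stand-in connection; records when it was last used."""
    def __init__(self):
        self.last_used = time.monotonic()

    def execute(self, sql):
        self.last_used = time.monotonic()
        return "ok"


def keep_alive(pool, max_idle=3600):
    """Ping any connection idle longer than max_idle seconds.

    Returns the number of connections pinged. Scheduling this more
    often than the server's wait_timeout keeps pooled connections open.
    """
    now = time.monotonic()
    pinged = 0
    for conn in pool:
        if now - conn.last_used > max_idle:
            conn.execute("SELECT 1")
            pinged += 1
    return pinged


# usage: pretend one of five pooled connections idled for two hours
pool = [PooledConn() for _ in range(5)]
pool[0].last_used -= 7200
print(keep_alive(pool))  # pings just that one connection
```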