[PERFORM] Slow UPDATE in logs that's usually fast

2016-11-25 Thread Aldo Sarmiento
Hello,

There's a slow UPDATE query in my logs (~60s). However, when I run it
manually, it's really fast (< 0.5 ms).

2016-11-23 18:13:51.962 GMT [742-25]: bpd_production bpd@web001(40916)
0 Passenger RubyApp: /var/www/bpd/current (production) LOG:  duration:
59876.947 ms  statement: UPDATE "contacts" SET "updated_at" = '2016-11-23
18:12:52.055456' WHERE "contacts"."id" = 2845179

This particular query isn't waiting on any locks that I can see. I actually
found it by following a lock queue up the chain; it bubbled up to this query.
Here is one of the queries that was waiting on process 742:

2016-11-23 18:13:00.095 GMT [314-118]: bpd_production bpd@web001(40547)
0 Passenger RubyApp: /var/www/bpd/current (production) LOG:  process
314 still waiting for ShareLock on transaction 1663649998 after 1000.067 ms
2016-11-23 18:13:00.095 GMT [314-119]: bpd_production bpd@web001(40547)
0 Passenger RubyApp: /var/www/bpd/current (production) DETAIL:  Process
holding the lock: 742. Wait queue: 314.
2016-11-23 18:13:00.095 GMT [314-120]: bpd_production bpd@web001(40547)
0 Passenger RubyApp: /var/www/bpd/current (production) CONTEXT:  while
updating tuple (288387,8) in relation "contacts"
2016-11-23 18:13:00.095 GMT [314-121]: bpd_production bpd@web001(40547)
0 Passenger RubyApp: /var/www/bpd/current (production) STATEMENT:
 UPDATE "contacts" SET "news_items_last_modified" = '2016-11-23
18:12:59.090806' WHERE "contacts"."id" = 2845179
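
In case it helps anyone reproduce the chase live, a query roughly like the
following against pg_locks / pg_stat_activity should show the same
blocker/waiter pairs. It is just a sketch (I was going by the log_lock_waits
output above, not this exact query), simplified to the lock fields that matter
here:

-- Rough sketch: pair up waiters with the sessions holding the lock they need.
-- Simplified; pg_stat_activity column names as of 9.2+.
SELECT waiter.pid          AS waiting_pid,
       waiter_act.query    AS waiting_query,
       holder.pid          AS holding_pid,
       holder_act.query    AS holding_query
FROM pg_locks waiter
JOIN pg_stat_activity waiter_act ON waiter_act.pid = waiter.pid
JOIN pg_locks holder
  ON  holder.locktype      = waiter.locktype
  AND holder.database      IS NOT DISTINCT FROM waiter.database
  AND holder.relation      IS NOT DISTINCT FROM waiter.relation
  AND holder.page          IS NOT DISTINCT FROM waiter.page
  AND holder.tuple         IS NOT DISTINCT FROM waiter.tuple
  AND holder.transactionid IS NOT DISTINCT FROM waiter.transactionid
  AND holder.pid          <> waiter.pid
JOIN pg_stat_activity holder_act ON holder_act.pid = holder.pid
WHERE NOT waiter.granted
  AND holder.granted;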

Anyhow, I've hit a dead end and I'm at a complete loss here.

Thank you!

*Aldo Sarmiento*


Re: [PERFORM] Millions of tables

2016-11-25 Thread Robert Klemme
Greg, sorry for the resend: I had forgotten to include the list.

On Wed, Oct 5, 2016 at 2:34 PM, Greg Spiegelberg  wrote:

> Data is not static.  The 4M tables fall into one of two groups.
>
> Group A contains 2M tables.  INSERT will occur ~100 times/day and maximum
> number of records anticipated will be 200k.  Periodic DELETE's will occur
> removing "old" records.  Age is something the client sets and I have no way
> of saying 1 or 10k records will be removed.

The ~100 INSERTs per day are per table, I assume. Also, I assume the
DELETEs will remove records in batches (because the age criterion
catches several records at once).
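
For concreteness, the kind of cleanup I am picturing looks roughly like this
(a sketch only; the table name, the created_at column and the 30-day cutoff
are invented, since you did not state them):

-- Sketch of the age-based batch cleanup I am assuming (names/interval invented).
DELETE FROM group_a_00001
WHERE created_at < now() - interval '30 days';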

> Group B contains the other 2M tables.  Maximum records ~140k and UPSERT will
> be the only mechanism used to populate and maintain.  Periodic DELETE's may
> run on these tables as well removing "old" records.

So there will be inserts and updates.
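
I.e. per table something roughly like this (again just a sketch with invented
names; ON CONFLICT needs 9.5+ and a unique constraint or index on the key):

-- Sketch of the UPSERT path; table/column names are invented for illustration.
INSERT INTO group_b_00001 (record_key, payload, updated_at)
VALUES ($1, $2, now())
ON CONFLICT (record_key)
DO UPDATE SET payload    = EXCLUDED.payload,
              updated_at = EXCLUDED.updated_at;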

Either I missed it or you did not mention the criteria for placing a
record in one of the 4M buckets. Can you shed light on what the
criteria are? That would obviously suggest what indexing could be
done.

Also, it would be interesting to see the results of your tests with a btree
index on really large tables, as Stephen had suggested. I know it is not the
primary test you want to do, but I would rather first explore a "traditional"
schema before venturing into the unknown of the multi-million dollar, pardon,
table schema.
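
To make clear what I mean by "traditional": roughly one table with the bucket
as a leading key column instead of 4M tables (all names here are invented,
just to show the shape):

-- Sketch of the single-table alternative; all names invented.
CREATE TABLE records (
    bucket_id   integer     NOT NULL,  -- which of the ~4M logical buckets
    record_key  bigint      NOT NULL,
    payload     jsonb,
    created_at  timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (bucket_id, record_key)
);
-- The btree behind the primary key turns per-bucket access into a range scan:
--   SELECT * FROM records WHERE bucket_id = 12345 AND record_key = 67890;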

Kind regards


-- 
[guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can
- without end}
http://blog.rubybestpractices.com/


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance