On Mon, May 10, 2010 at 11:32 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Hello all,
A query run twice in succession performs VERY poorly the first time, as it
iterates through the nested loop. The second time, it rips. Please see SQL,
SLOW PLAN and FAST PLAN below.
On 11/05/10 13:32, Carlo Stonebanks wrote:
Hello all,
A query run twice in succession performs VERY poorly the first time, as
it iterates through the nested loop. The second time, it rips. Please
see SQL, SLOW PLAN and FAST PLAN below.
I haven't looked at the details, but the comment you
Thanks Scott,
This is almost always due to caching. The first time the data aren't in the
cache; the second time they are.
I had assumed that it was caching, but I don't know from where because of
the inexplicable delay. Hardware? O/S (Linux)? DB? From the function, which
is IMMUTABLE?
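One hedged way to see which layer is serving the data (the query below is a stand-in, not the original SQL, and the BUFFERS option requires PostgreSQL 9.0 or later):

```sql
-- Sketch: compare buffer counters between the first and second run.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table WHERE id = 42;
-- In the output, "Buffers: shared hit=N" counts blocks found in
-- PostgreSQL's shared_buffers; "read=M" counts blocks that had to come
-- from the OS page cache or disk.  Mostly reads on the first run and
-- mostly hits on the second would confirm the caching theory.
```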
Hi all!
We moved from MySQL to Postgresql for some of our projects. So far
we're very impressed with the performance (especially INSERTs and
UPDATEs), except for a strange problem with the following bulk delete
query:
DELETE FROM table1 WHERE table2_id = ?
I went through these Wiki pages,
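A common cause of slow bulk deletes (a guess, since the schema isn't shown) is a missing index on the referencing column, so the DELETE has to scan the whole table. A minimal sketch, assuming the names from the query above:

```sql
-- Hypothetical: index the column the DELETE filters on.
CREATE INDEX table1_table2_id_idx ON table1 (table2_id);

-- Also worth checking: if other tables reference table1 via foreign
-- keys, PostgreSQL must look up the referencing rows for every deleted
-- row; those referencing columns need indexes of their own, or each
-- DELETE triggers a sequential scan of the child table.
```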
On Fri, May 7, 2010 at 8:09 PM, Josh Berkus j...@agliodbs.com wrote:
Jignesh, All:
Most of our Solaris users have been, I think, following Jignesh's advice
from his benchmark tests to set the ZFS recordsize to 8K for the data zpool.
However, I've discovered that this is sometimes a serious
Hi Andy!
Thanks a lot for your hints!
Indeed the problem was on my side. Some Hibernate tuning solved the
problem (and I was able to speed up the query using IN). The real
problem was that Hibernate uses unprepared queries if you create a
native query, but prepares the query if you use JP-QL
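To illustrate the unprepared-versus-prepared difference at the SQL level (a sketch; the statement and parameter names here are made up):

```sql
-- An unprepared statement is parsed and planned from scratch on every
-- execution.  A server-side prepared statement is parsed once and can
-- reuse its plan across executions:
PREPARE delete_by_parent (integer) AS
    DELETE FROM table1 WHERE table2_id = $1;

EXECUTE delete_by_parent(42);
EXECUTE delete_by_parent(43);  -- no re-parse, no re-plan from scratch

DEALLOCATE delete_by_parent;
```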
Hi all,
In my database application, I have a table that can reach 10M records,
and insertions can arrive at up to 100 per second at peak times. I
configured Postgres to run autovacuum hourly. I have a frontend GUI
application in CGI which displays the data
venu madhav venutaurus...@gmail.com wrote:
When I try to get the last twenty records from the database, it
takes around 10-15 mins to complete the operation.
Making this a little easier to read (for me, at least) I get this:
select e.cid, timestamp, s.sig_class, s.sig_priority,
Venu,
For starters,
1) You have used e.cid twice in the ORDER BY clause.
2) If you want the last twenty records matching the timestamp criteria,
why do you need the OFFSET?
3) Do you have indexes on the sig_id, signature and timestamp columns?
If you do not get a good response after
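On point 3, a sketch of indexes the query's WHERE and ORDER BY clauses would want (index names are made up; check what already exists first):

```sql
-- Hypothetical index definitions for the event/signature query:
CREATE INDEX event_timestamp_idx ON event (timestamp);
CREATE INDEX event_signature_idx ON event (signature);
-- Only needed if sig_id isn't already signature's primary key:
CREATE INDEX signature_sig_id_idx ON signature (sig_id);

-- For "the last twenty records", ORDER BY timestamp DESC LIMIT 20
-- with the timestamp index avoids reading the whole table, whereas a
-- large OFFSET still scans and discards all the skipped rows.
```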
Sure, but bulk load + random selects is going to *guarantee*
fragmentation on a COW system (like ZFS, BTRFS, etc.) as the selects
start to write out all the hint-bit-dirtied blocks in random order...
i.e. it doesn't take long to make an originally nicely contiguous block
layout random
* select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp
< '1273473180' ORDER BY