On 24/05/11 12:14, panam wrote:
> Hi,
>
> In my DB, there is a large table containing messages and one that contains
> message boxes.
> Messages are assigned to boxes via a child parent link m->b.
> In order to obtain the last message for a specific box, I use the following
> SQL:
>
> SELECT m1.id
As near as I can tell from your test configuration description, you have
JMeter --> J2EE --> Postgres.
Have you ruled out the J2EE server as the problem? The problem may not be in
the database.
I would take a look at your app server's health and look for any potential
issues there before spending too
Hi,
In my DB, there is a large table containing messages and one that contains
message boxes.
Messages are assigned to boxes via a child parent link m->b.
In order to obtain the last message for a specific box, I use the following
SQL:
SELECT m1.id FROM message m1 LEFT JOIN message m2 ON (m1.box_
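The truncated query above is the classic "latest row per group" self-join. A sketch of its usual full shape, assuming a `message(id, box_id)` table where ids increase over time (the column names are inferred from the fragment, not confirmed by it):

```sql
-- Last message per box via a self anti-join: m2 matches any later
-- message in the same box; keeping only rows where no such m2
-- exists leaves the newest message of each box.
SELECT m1.id
FROM message m1
LEFT JOIN message m2
  ON m1.box_id = m2.box_id
 AND m2.id > m1.id
WHERE m2.id IS NULL;

-- On PostgreSQL, DISTINCT ON is often a cheaper equivalent:
SELECT DISTINCT ON (box_id) id
FROM message
ORDER BY box_id, id DESC;
```

With an index on (box_id, id), the DISTINCT ON form can usually avoid scanning the whole table; EXPLAIN ANALYZE will show which plan the optimizer actually picks.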
On Mon, May 23, 2011 at 3:08 PM, Josh Berkus wrote:
>
>> Well, all of that stuff sounds impractically expensive to me... but I
>> just work here.
>
> I'll point out that the simple version, which just checks for hot tables
> and indexes, would improve estimates greatly and be a LOT less
> complicated than these proposals.
On 24/05/2011 5:30 AM, Anibal David Acosta wrote:
So, this is some bug?
Hard to know with the information provided. Please post EXPLAIN ANALYZE
output.
http://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/
--
Sent via p
John Rouillard wrote:
> I seem to be able to provoke this error:
>
> vacuum...ERROR: invalid page header in
> block 2128910 of relation base/16385/21476
What version of PostgreSQL?
-Kevin
Hi all:
Not sure if this is a performance question or a generic admin
question. I have the following script running on a host different from
the database to use pgbench to test the database:
pgbench -i (inital mode)
pgsql vacuum analyze; (and some other code to dump table sizes)
pgbench (mu
I have a strange situation.
I have a detail table with millions of rows and an items table with
thousands of rows.
When I do:
select count(*) from wiz_application_response where application_item_id in
(select id from wiz_application_item where application_id=110)
This query does NOT use the index
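As a sketch (table and column names taken from the query above), the same count can be written as a join or an EXISTS, which sometimes gives the planner more options than IN (subquery):

```sql
-- Equivalent count written as a join:
SELECT count(*)
FROM wiz_application_response r
JOIN wiz_application_item i ON i.id = r.application_item_id
WHERE i.application_id = 110;

-- ...or as a semi-join with EXISTS:
SELECT count(*)
FROM wiz_application_response r
WHERE EXISTS (
    SELECT 1
    FROM wiz_application_item i
    WHERE i.id = r.application_item_id
      AND i.application_id = 110
);
```

Whether either form uses an index on application_item_id depends on the statistics and selectivity; comparing EXPLAIN ANALYZE output for the three variants is the way to tell.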
On 5/23/11 8:54 AM, Dave Johansen wrote:
> I apologize for the multiple posts. I sent this email right after joining
> the list and after it hadn't shown up a day later I figured that it had been
> lost or something and sent the other one.
List moderation took a holiday while all of us were at pgC
On Mon, May 23, 2011 at 5:54 PM, Dave Johansen wrote:
> I apologize for the multiple posts. I sent this email right after joining
> the list and after it hadn't shown up a day later I figured that it had been
> lost or something and sent the other one.
Sorry for the nitpicking but I even see _thr
On 23.5.2011 21:05, Robert Haas wrote:
> On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra wrote:
>> Really? He already has 64 checkpoint segments, which is about 1GB of
>> xlog data. The real problem is that the amount of buffers to write is
> constantly growing. At the beginning there's 62861 buffers (500MB) and
On Tue, May 17, 2011 at 8:45 AM, Andrey Vorobiev wrote:
> 1. How does database size affect insert performance?
Well, if your database gets bigger, then your indexes will become
deeper, requiring more time to update. But I'm not sure that's your
problem here.
> 2. Why does number of written buff
> Well, all of that stuff sounds impractically expensive to me... but I
> just work here.
I'll point out that the simple version, which just checks for hot tables
and indexes, would improve estimates greatly and be a LOT less
complicated than these proposals. Certainly having some form of
block-
On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra wrote:
> Really? He already has 64 checkpoint segments, which is about 1GB of
> xlog data. The real problem is that the amount of buffers to write is
> constantly growing. At the beginning there's 62861 buffers (500MB) and
> at the end there's 137657 b
On 23.5.2011 15:30, Shaun Thomas wrote:
> On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:
>
>> 2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog
>> 2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers
>> (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
Hi,
I am using the contrib/cube code. When I build a GiST index on the cube data
type, it produces a very large log file (nearly 220 MB for only 12k records).
Creating a GiST index on a geometry field, by contrast, gives a 1 KB log file
for 17 lakh (1.7 million) records.
Can someone please tell me how to stop
On 23.5.2011 19:01, Maciek Sakrejda wrote:
>> You're probably reading it wrong. The sort itself takes about 1 ms (just
>> subtract the numbers in "actual=").
>
> I thought it was cost=startup_cost..total_cost. That is not quite the
> same thing, since startup_cost is effectively "cost to produce first
On Mon, May 23, 2011 at 1:01 PM, Maciek Sakrejda wrote:
>> You're probably reading it wrong. The sort itself takes about 1 ms (just
>> subtract the numbers in "actual=").
>
> I thought it was cost=startup_cost..total_cost. That is not quite the
> same thing, since startup_cost is effectively "cost to produce first row"
On Mon, May 16, 2011 at 3:38 PM, Dave Johansen wrote:
> I am using Postgres 8.3 and I have an issue very closely related to the one
> described here:
> http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php
>
> Basically, I have a VIEW which is a UNION ALL of two tables but when I do a
On Tue, May 17, 2011 at 11:10 AM, wrote:
> For Hstore I'm using a GIST index.
I would have thought that GIN would be a better choice for this workload.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
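For reference, a hedged sketch of the two index choices on an hstore column (the table and column names here are hypothetical, not from the thread):

```sql
-- GiST: smaller and faster to build, but lossy, so matching
-- rows must be rechecked against the heap:
CREATE INDEX bench_hstore_gist ON bench_hstore USING gist (attrs);

-- GIN: larger and slower to build, but usually much faster for
-- containment lookups such as  attrs @> 'key=>value':
CREATE INDEX bench_hstore_gin ON bench_hstore USING gin (attrs);
```

For a read-mostly key/value lookup workload, GIN's faster containment searches typically outweigh its higher build and update cost.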
> You're probably reading it wrong. The sort itself takes about 1 ms (just
> subtract the numbers in "actual=").
I thought it was cost=startup_cost..total_cost. That is not quite the
same thing, since startup_cost is effectively "cost to produce first
row", and Sort can't really operate in a "stre
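To make the two readings concrete, an annotated sketch of a plan line (the numbers are invented for illustration):

```sql
-- Hypothetical EXPLAIN ANALYZE line:
--   Sort  (cost=817.39..819.89 rows=1000 width=4)
--         (actual time=3.5..4.5 rows=1000 loops=1)
--
-- cost=startup..total: estimated cost to produce the first row,
-- then to produce all rows. For a Sort the startup cost includes
-- sorting the entire input, because no row can be emitted before
-- the sort finishes; emitting the sorted rows adds only the small
-- difference (819.89 - 817.39).
--
-- actual time=first..last is in milliseconds (per loop): 3.5 ms
-- until the first row, 4.5 ms for all rows. So subtracting the
-- child node's total time from these figures, not just the two
-- numbers from each other, isolates the sort's own cost.
```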
I apologize for the multiple posts. I sent this email right after joining
the list and after it hadn't shown up a day later I figured that it had been
lost or something and sent the other one.
Also, the database I posted this about does not have internet access and so
I'm working on getting it mov
>
> It's always good to hear when these things work out. Thanks for
> reporting back.
>
> Using the set-based nature of relational databases to your advantage,
> writing smarter queries that do more work server-side with fewer
> round-trips, and effective batching can make a huge difference.
>
Gla
On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:
2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog
2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers
(24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=89.196 s, sync=0.029 s, total=89.242 s
Increase
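The advice above is cut off, but the usual knobs for checkpoint behavior in this situation look something like the following postgresql.conf sketch (values are illustrative for an 8.x/9.x-era server, not a recommendation for this workload):

```
# checkpoint_segments was replaced by max_wal_size in 9.5+
checkpoint_segments = 64            # WAL kept before a forced checkpoint
checkpoint_timeout = 15min          # time-based checkpoint spacing
checkpoint_completion_target = 0.9  # spread writes over the interval
log_checkpoints = on                # log what triggered each checkpoint
```

Spreading the writes (completion_target) and spacing checkpoints further apart reduces the I/O spikes, at the price of a longer crash recovery.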
On 05/20/2011 11:47 AM, Lew wrote:
I don't even know why people bother even putting such nonsense into
their emails, let alone Usenet or mailing-list posts.
This may sound like a surprise, but many of us don't. Several companies
use an auto-append on any outgoing message not sent to an intern
Hi Merlin
The analyze command gave the following result:
On the KVP table:
Index Scan using kvpidx on bench_kvp (cost=0.00..8.53 rows=1 width=180)
(actual time=0.037..0.038 rows=1 loops=1)
Index Cond: (bench_id = '20_20'::text)
Total runtime: 0.057 ms
And on the Hstore table:
Bitma