To: pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Query Performance question
On 14 July 2014, 18:02, Magers, James wrote:

Tomas,

Thank you for the recommendation. In this case, the bitmap scan runs quite
quickly; however, in production, where data may or may not be cached and
volumes are higher, I am trying to ensure the process will continue to execute
efficiently and to reduce its impact on other processes.
On 14 July 2014, 16:00, Magers, James wrote:

Tomas,

I would have to agree that the current results do indicate that. However, I
have run this EXPLAIN ANALYZE multiple times and the timing varies from about
4 ms to 35 ms using the Bitmap Heap Scan. Here is an explain plan from
Thursday of last week that shows about 21 ms. Part of the issue…
Magers, James, 14.07.2014 15:18:

Tomas,

Thank you. I executed the query this morning after disabling the scan types.
I am including links to explain.depesz output for each of the three variations
that I executed.

indexscan and bitmapscan off: http://explain.depesz.com/s/sIx
seqscan and bitmapscan off: http://explain.depes…
Magers, James, 14.07.2014 04:20:
> Thank you for your feedback. I am attaching the requested information.
> While I do not think the query is necessarily inefficient, I believe a
> sequential scan would be more efficient.

You can try:

set enable_indexscan = off;
set enable_bitmapscan = off;
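For anyone replaying this experiment, the toggles are session-level settings; a minimal sketch follows (the table name matches the thread, but the predicate and column are illustrative, not taken from James's query):

```sql
-- Compare planner choices for the same query by disabling scan types
-- inside a transaction; SET LOCAL reverts automatically on COMMIT/ROLLBACK.
BEGIN;
SET LOCAL enable_indexscan = off;
SET LOCAL enable_bitmapscan = off;
EXPLAIN ANALYZE
SELECT *
FROM member_subscription_d
WHERE subscription_key = 42;   -- hypothetical predicate for illustration
ROLLBACK;
```

Note that these settings discourage a plan type by inflating its cost rather than forbidding it outright, so a "disabled" scan can still appear when the planner has no alternative.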
Tomas,

Thank you for your feedback. I am attaching the requested information. While
I do not think the query is necessarily inefficient, I believe a sequential
scan would be more efficient.
\d member_subscription_d
Table "public.member_subscription_d"
On 14.7.2014 00:55, Magers, James wrote:

I am using a Pentaho process to access the database and select the appropriate
information to update the DB tables and records. I am trying to select the
previous subscription key in order to update the fact table, so that any
records carrying the previous key get the current subscription key. Th…
Kenneth Marshall wrote:
> Dan,
> Did you try this with 8.3 and its new HOT functionality?
> Ken

I did not. I had to come up with the solution before we were able to move to
8.3. But Tom did mention that HOT might help, and I forgot about that when
writing the prior message. I'm in the…
Dan,
Did you try this with 8.3 and its new HOT functionality?
Ken
On Thu, Jun 05, 2008 at 09:43:06AM -0600, Dan Harris wrote:
> [EMAIL PROTECTED] wrote:
>>
>> 3) Build a table with totals or maybe subtotals, updated by triggers. This
>> requires serious changes in application as well as in data
[EMAIL PROTECTED] wrote:
> 3) Build a table with totals or maybe subtotals, updated by triggers. This
> requires serious changes in application as well as in database, but solves
> issues of 1) and may give you even better results.
>
> Tomas

I have tried this. It's not a magic bullet. We do our bil…
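For reference, the trigger-maintained counter Tomas describes can be sketched like this (table and function names are invented for illustration; note the single-row counter serializes concurrent writers, which is one reason it is no magic bullet):

```sql
-- One-row side table holding the running count of "transactions".
CREATE TABLE transactions_count (n bigint NOT NULL);
INSERT INTO transactions_count SELECT count(*) FROM transactions;

CREATE FUNCTION transactions_count_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE transactions_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE transactions_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER transactions_count_trg
AFTER INSERT OR DELETE ON transactions
FOR EACH ROW EXECUTE PROCEDURE transactions_count_fn();
```

After that, `SELECT n FROM transactions_count` stands in for the slow `count(*)`.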
> Thank you for your reply. I don't really need to count rows in
> transactions table, I just thought this was a good example to show how
> slow the query was.

Usually you're more interested in the performance of the queries you need
to make rather than the ones you don't need to make ;)

But b…
Hi,

Hubert already answered your question: it's expected behavior, since count(*)
has to read all the tuples from the table (even dead ones!). So if you have a
really huge table, it will take a long time to read it.

There are several ways to speed it up, some of them simple (but the speedup i…
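One of the simple ways, when an approximate answer is acceptable, is to read the planner's own row estimate instead of counting (a sketch; the figure is refreshed by VACUUM/ANALYZE, so it can lag behind the true count):

```sql
-- Instant, approximate row count from the system catalog.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'transactions';
```

This returns in milliseconds regardless of table size, because it never touches the table itself.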
Hello Hubert,

Thank you for your reply. I don't really need to count rows in the
transactions table, I just thought this was a good example to show how slow
the query was.

But based on what you wrote, it looks like count(*) is slow in general, so
this seems to be OK, since the table is rather large.
On Tue, Jun 03, 2008 at 09:57:15AM +0200, Marcin Citowicki wrote:
> I'm not a dba so I'm not sure if the time it takes to execute this query
> is OK or not, it just seems a bit long to me.

This is perfectly OK. count(*) on a table is generally slow. There are some
ways to make it faster (dependi…
Hello,
I have a table (transactions) containing 61 414 503 rows. The basic
count query (select count(transid) from transactions) takes 138226
milliseconds.
This is the query analysis output:
Aggregate (cost=2523970.79..2523970.80 rows=1 width=8) (actual
time=268964.088..268964.090 rows=1 lo
You are pulling a fair amount of data from the database and doing a lot of
computation in the SQL. I'm not sure how fast this query could be expected to
run, but I had one idea: if you've inserted into and deleted from this table a
lot, you will need to run VACUUM occasionally.
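A minimal sketch of that advice ('my_table' is a placeholder, not a name from the thread):

```sql
-- Reclaim space held by dead row versions and refresh planner statistics.
VACUUM ANALYZE my_table;

-- Or, to see a report of what the vacuum found and removed:
VACUUM VERBOSE my_table;
```

On a heavily updated table, skipping this lets dead tuples accumulate, which inflates both sequential scans and count(*).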
Hi, I have a performance problem with PostgreSQL version 8.1 installed on
Fedora Core release 4 (Stentz) with kernel version 2.6.11. The machine I am
working on has 512 MB of RAM and a Pentium III 800 MHz CPU. I have only one
table in the database, which consists of 256 columns and 1 rows. Eac…