Hi All;
We have a table with approximately 200 columns. About a dozen columns are text
data types and the rest are a mix of integer, bigint, and double
precision types.
The table has about 25 million rows.
The app wants to run a query like this:
select count(pri_num) from max_xtrv_st_t
where pr
On 06/01/2011 03:38 PM, Kevin Grittner wrote:
CS DBA wrote:
The app wants to run a query like this:
select count(pri_num) from max_xtrv_st_t
where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1
group by tds_cx_ind, cxs_ind_2)
Why not something
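The reply is cut off here, but one common rewrite for this pattern (a sketch of the general idea, not necessarily what Kevin proposed) avoids the IN subquery entirely: since the subquery produces one max(pri_num) per group, counting those rows is the same as counting the groups, provided pri_num values are unique.

```sql
-- Sketch, assuming the max_xtrv_st_t table from the original post
-- and that pri_num is unique: counting "one max per group" is just
-- counting the distinct (tds_cx_ind, cxs_ind_2) groups.
SELECT count(*)
FROM (
    SELECT 1
    FROM max_xtrv_st_t
    GROUP BY tds_cx_ind, cxs_ind_2
) AS g;
```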
On 06/01/2011 03:15 PM, Merlin Moncure wrote:
On Wed, Jun 1, 2011 at 3:14 PM, CS DBA wrote:
Hi All;
We have a table with approximately 200 columns. About a dozen columns are text data
types and the rest are a mix of integer, bigint, and double precision
types.
The table has about 25 million
On 06/02/2011 11:31 AM, Shaun Thomas wrote:
On 06/02/2011 11:15 AM, Kevin Grittner wrote:
They all gave the same result, of course, and they all used a seq
scan.
And they all will. I created a test table with a bunch of
generate_series and emulated 200 unique matches of column1 and
column2
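Shaun's actual test table isn't shown in the excerpt, but a setup along those lines might look like this (a hypothetical reconstruction: the table name, column names, and row count are assumptions, chosen so that column1 and column2 yield 200 distinct combinations):

```sql
-- Emulate ~200 unique (column1, column2) matches over many rows
-- using generate_series, then update planner statistics.
CREATE TABLE test_t AS
SELECT (i % 20) AS column1,
       (i % 10) AS column2,
       i        AS pri_num
FROM generate_series(1, 1000000) AS s(i);

ANALYZE test_t;
```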
Hi all;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id, a.customer_id, a.order_id, a.primary_contact_id,
a.status, a.customer_location_id, a.added_date,
o.agent_id, p.order_location
On 10/11/2011 12:02 PM, Pavel Stehule wrote:
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA:
Hi all;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the quer
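Pavel's request above is the standard first step: EXPLAIN ANALYZE actually runs the query and reports real row counts and per-node timings, unlike plain EXPLAIN, which shows only estimates. A sketch against the poster's tables (the join condition is an assumption; the original query is truncated):

```sql
-- Real execution statistics, not just planner estimates.
EXPLAIN ANALYZE
SELECT a.account_id, a.customer_id, o.agent_id
FROM cust_acct a
JOIN cust_orders o ON o.order_id = a.order_id;  -- join key assumed
```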
On 10/11/2011 12:03 PM, Szymon Guz wrote:
On 11 October 2011 19:52, CS DBA wrote:
Hi all;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the
Hi All;
I have code that drops a table, re-creates it (based on a long set of
joins), and then re-creates the indexes.
It runs via psql in about 10 seconds. I took the code and simply
wrapped it into a plpgsql function and the function version takes almost
60 seconds to run.
I always tho
On 10/27/2011 11:10 PM, Tom Lane wrote:
CS DBA writes:
I have code that drops a table, re-creates it (based on a long set of
joins), and then re-creates the indexes.
It runs via psql in about 10 seconds. I took the code and simply
wrapped it into a plpgsql function and the function ve
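Tom's reply is truncated, but a frequent culprit when the same statements run far slower inside plpgsql is plan caching: a cached plan can reference the old table, and re-planning or invalidation inside the function costs time. Dynamic SQL via EXECUTE is parsed and planned on each call, which sidesteps this. A hypothetical sketch (the function, table, and index names are stand-ins, and the long join is represented by a placeholder query):

```sql
-- EXECUTE avoids cached plans tied to the just-dropped table.
CREATE OR REPLACE FUNCTION rebuild_report_table() RETURNS void AS $$
BEGIN
    EXECUTE 'DROP TABLE IF EXISTS report_t';
    EXECUTE 'CREATE TABLE report_t AS
             SELECT * FROM src_a JOIN src_b USING (id)';  -- stand-in for the long join
    EXECUTE 'CREATE INDEX report_t_idx ON report_t (id)';
END;
$$ LANGUAGE plpgsql;
```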
Hi All;
The contab2 table below contains ~400,000 rows. This query should not
take this long. We've tweaked work_mem up to 50MB, ensured that the
appropriate indexes are in place, etc.
Thoughts?
Thanks in advance
Explain analyze:
SELECT contab2.contacts_tab
FROM contab2
INNER JOIN scta
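One way to test a work_mem change like the 50MB mentioned above is per session, so nothing in postgresql.conf needs to change while experimenting; pairing it with EXPLAIN (ANALYZE, BUFFERS) shows whether a sort or hash still spills to disk. A sketch (the query here is a stand-in, since the original is truncated):

```sql
SET work_mem = '50MB';                 -- session-local; no restart needed
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM contab2;          -- stand-in for the truncated join
RESET work_mem;
```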
On 11/02/2011 02:45 PM, Scott Marlowe wrote:
On Wed, Nov 2, 2011 at 2:21 PM, CS DBA wrote:
Hi All;
The contab2 table below contains ~400,000 rows. This query should not take
this long. We've tweaked work_mem up to 50MB, ensured that the appropriate
indexes are in place, etc.
Tho
On 11/03/2011 09:40 AM, Robert Haas wrote:
On Thu, Nov 3, 2011 at 11:31 AM, Rodrigo Gonzalez
wrote:
El 03/11/11 11:42, Robert Haas escribió:
On Fri, Oct 28, 2011 at 9:39 AM, CS DBA wrote:
No parameters, one of them looks like this:
[ code snippet ]
It's hard to believe this is the
All;
I'm working with a client running PostgreSQL on a Fusion-IO drive.
They have a PostgreSQL setup guide from Fusion recommending the
following settings:
effective_io_concurrency=0
bgwriter_lru_maxpages=0
random_page_cost=0.1
seq_page_cost=0.1

These seem odd to me, effectively turni
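Before adopting a vendor guide's numbers, it helps to compare them against the server's current and built-in defaults; pg_settings exposes both. (The 0.1 page costs and disabled background writer are indeed unusual: more conventional flash-storage tuning keeps random_page_cost close to seq_page_cost but near 1.0, an observation offered here as general practice rather than advice from this thread.)

```sql
-- Current value vs. compiled-in default for the settings
-- the Fusion-io guide touches.
SELECT name, setting, boot_val
FROM pg_settings
WHERE name IN ('effective_io_concurrency',
               'bgwriter_lru_maxpages',
               'random_page_cost',
               'seq_page_cost');
```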
All;
We have a large DB server with 128GB of RAM running complex functions.
With the server configured as follows, we were seeing a somewhat low
hit ratio and lots of temp buffer usage:
shared_buffers = 18GB
work_mem = 75MB
effective_cache_size = 105GB
checkpoint_segments = 128
when we increas
This is on a CentOS 6.5 box running PostgreSQL 9.2
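The hit ratio and temp usage mentioned above can be measured directly from the standard statistics views; on 9.2, pg_stat_database includes temp file counters. A sketch:

```sql
-- Buffer cache hit percentage and temp file volume per database.
SELECT datname,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
           AS hit_pct,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_written
FROM pg_stat_database;
```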
On 11/13/14 4:09 PM, CS DBA wrote:
All;
We have a large DB server with 128GB of RAM running complex functions.
With the server configured as follows, we were seeing a somewhat
low hit ratio and lots of temp buffer usage:
shared_buffers