Hi All;
We have a client that has a table where large blobs (bytea) are stored.
The table has a key column whose values are numbers (like 112362), but
unfortunately it's a varchar column, so the blobs are accessed via
queries like:
select * from bigtable where keycol = '217765'
The primary reason we w
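For illustration, a minimal sketch of the layout being described; the table and column names are assumptions, not the client's actual schema:

-- Hypothetical schema: a large bytea payload keyed by numeric-looking
-- values that are stored in a varchar column.
CREATE TABLE bigtable (
    keycol  varchar PRIMARY KEY,   -- holds values such as '112362'
    payload bytea                  -- the large blob
);

-- Quoting the key keeps the comparison varchar-to-varchar, so the
-- index on keycol can be used:
SELECT * FROM bigtable WHERE keycol = '217765';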
Hi All;
I have a query that wants to update a table based on a join like this:
update test_one
set f_key = t.f_key
from
upd_temp1 t,
test_one t2
where
t.id_number = t2.id_number
upd_temp1 has 248,762 rows
test_one has 248,762 rows
test_one has an index on f_key and an index on id_n
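A minimal sketch of the usual UPDATE ... FROM form, assuming the intent is simply to copy f_key from upd_temp1 into the matching test_one row. It joins upd_temp1 directly to the update target; the extra "test_one t2" alias in the query above is never tied to the target row, so the planner ends up pairing every target row with the whole join result:

UPDATE test_one
SET    f_key = t.f_key
FROM   upd_temp1 t
WHERE  t.id_number = test_one.id_number;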
On Monday 30 August 2010 17:04, bricklen wrote:
> On Mon, Aug 30, 2010 at 3:28 PM, Kevin Kempter
>
> wrote:
> > Hi all ;
> >
> > we have an automated partition creation process that includes the
> > creation of an FK constraint. we have a few other servers with
Hi all;
We have an automated partition creation process that includes the creation of
an FK constraint. We have a few other servers with similar scenarios, and this
is the only server that performs poorly when we create the new partitions.
Anyone have any thoughts on how to debug this? We're running
Hi All;
We have a table partitioned by day. The check constraint on the child tables
looks like this (this is the May 31st partition):
CHECK
(stime >= '2010-05-30 00:00:00+00'::timestamp with time zone
AND stime <= '2010-05-30 23:59:59+00'::timestamp with time zone)
We have a python based
Hi all;
We've found that partition elimination is not happening for a prepared
statement; however, running the same statement manually in psql does give us
partition elimination.
Is this a known issue?
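A hedged sketch of the two cases being compared, assuming a day-partitioned table named events with a timestamptz column stime (both names are placeholders): with a literal in the query the planner knows the value at plan time and can exclude partitions, while a generically planned prepared statement cannot:

-- Literal: constraint exclusion can prune child tables at plan time.
EXPLAIN SELECT count(*) FROM events
WHERE  stime >= '2010-05-30 00:00:00+00' AND stime < '2010-05-31 00:00:00+00';

-- Prepared statement: a generic plan does not know $1/$2 when the plan is
-- built, so every child table stays in the plan.
PREPARE q(timestamptz, timestamptz) AS
    SELECT count(*) FROM events WHERE stime >= $1 AND stime < $2;
EXPLAIN EXECUTE q('2010-05-30 00:00:00+00', '2010-05-31 00:00:00+00');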
On Wednesday 24 February 2010 07:55:36 A. Kretschmer wrote:
> In response to Kevin Kempter :
> > Hi All;
> >
> > I have a table that has daily partitions.
> >
> > The check constraints look like this:
> > CHECK (timezone('EST'::text, insert_dt) >=
Hi All;
I have a table that has daily partitions.
The check constraints look like this:
CHECK (timezone('EST'::text, insert_dt) >= '2010-01-01'::date
AND timezone('EST'::text, insert_dt) < '2010-01-02'::date)
each partition has this index:
"fact_idx1_20100101_on_cust_id" btree (cust_id)
I
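For reference, a hedged sketch of the kind of query that can benefit from constraint exclusion against these children; the parent table name and the cust_id value are placeholders. The WHERE clause has to constrain the same timezone('EST', insert_dt) expression that appears in the CHECK, otherwise the planner cannot prove which children to skip:

SELECT count(*)
FROM   fact_table
WHERE  timezone('EST'::text, insert_dt) >= '2010-01-01'::date
AND    timezone('EST'::text, insert_dt) <  '2010-01-02'::date
AND    cust_id = 12345;   -- can use fact_idx1_20100101_on_cust_id on the child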
On Thursday 07 January 2010 09:57:59 Kevin Grittner wrote:
> Ludwik Dylag wrote:
> > I would suggest:
> > 1. turn off autovacuum
> > 1a. optionally tune db for better performance for this kind of
> > operation (can't help here)
> > 2. restart database
> > 3. drop all indexes
> > 4. update
>
On Wednesday 25 November 2009 05:34:26 Richard Neill wrote:
> Dear All,
>
> Thanks very much for your help so far. My understanding of PG is getting
> a lot better!
>
> I wonder if I've understood analyze properly: I'm not sure I quite
> understand how specific the statistics gathered actually ar
> On Wed, 09 Sep 2009 06:05:22 -0400, Heikki Linnakangas
>
> wrote:
> > Kevin Kempter wrote:
> >> Hi all I have a large table (>2billion rows) that's partitioned by date
> >> based
> >> on an epoch int value. We're running a select max(id) wh
Hi all;
I have a large table (> 2 billion rows) that's partitioned by date based
on an epoch int value. We're running a select max(id) where id is the PK. I
have a PK index on each of the partitions, and no indexes at all on the base
table.
If I hit a partition table directly, I get an index scan as e
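A hedged sketch of the difference being described; the table and partition names are placeholders. On the 8.x planners of that era, min()/max() were not pushed down into inheritance children, so the parent-level aggregate scanned every partition:

-- Hitting one child directly: its PK index satisfies max(id) cheaply.
SELECT max(id) FROM url_hits_200909;

-- Same aggregate on the parent: the planner of that era walks all children.
SELECT max(id) FROM url_hits;

-- A common workaround was to take the max of the per-child maxima:
SELECT max(m) FROM (
    SELECT max(id) AS m FROM url_hits_200908
    UNION ALL
    SELECT max(id) AS m FROM url_hits_200909
) s;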
On Wednesday 02 September 2009 09:19:20 Tom Lane wrote:
> Kevin Kempter writes:
> > I can't figure out why we're scanning all of our partitions.
>
> The example works as expected for me:
>
> regression=# CREATE TABLE url_hits (
> id integer NOT NULL,
On Wednesday 02 September 2009 08:55:38 Kenneth Marshall wrote:
> The planner does not yet work as efficiently as it could
> with child tables. Check the recent mail archives for a
> long discussion of the same.
>
> Regards,
> Ken
>
> On Wed, Sep 02, 2009 at 08:52:30AM -0
On Wednesday 02 September 2009 09:02:27 Scott Marlowe wrote:
> On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter
wrote:
> > Hi all;
> >
> > I can't figure out why we're scanning all of our partitions.
> >
> > We setup our tables like this:
> >
>
Hi all;
I can't figure out why we're scanning all of our partitions.
We set up our tables like this:
Base Table:
CREATE TABLE url_hits (
id integer NOT NULL,
content_type_id integer,
file_extension_id integer,
"time" integer,
bytes integer NOT NULL,
path_id integer,
p
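A hedged sketch of the inheritance-based setup these posts describe; the child name, the epoch bounds, and the monthly granularity are assumptions. Each child needs a CHECK on the partitioning column, and constraint_exclusion must be enabled for the planner to skip children whose range cannot match the query's filter:

CREATE TABLE url_hits_200908 (
    CHECK ("time" >= 1249084800    -- 2009-08-01 00:00:00 UTC
       AND "time" <  1251763200)   -- 2009-09-01 00:00:00 UTC
) INHERITS (url_hits);

SET constraint_exclusion = on;     -- or 'partition' on 8.4 and later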
On Tuesday 01 September 2009 03:26:08 Pierre Frédéric Caillaud wrote:
> > We have a table that's > 2billion rows big and growing fast. We've setup
> > monthly partitions for it. Upon running the first of many select * from
> > bigTable insert into partition statements (330million rows per month) t
Hi all;
We have a table that's > 2 billion rows big and growing fast. We've set up
monthly partitions for it. Upon running the first of many select * from
bigTable insert into partition statements (330 million rows per month), the
entire box eventually goes out to lunch.
Any thoughts/suggestions?
Hi all;
I have a simple query against two very large tables (> 800 million rows in
the url_hits_category_jt table and 9.2 million in the url_hits_klk1 table).
I have indexes on the join columns and I've run an explain.
Also, I've set the default statistics target to 250 for both join columns. I get a
v
On Wednesday 19 August 2009 11:31:30 Nikolas Everett wrote:
> 2009/8/19 Grzegorz Jaśkiewicz
>
> > that seems to be the killer:
> >
> > and time >= extract ('epoch' from timestamp '2009-08-12')
> > and time < extract ('epoch' from timestamp '2009-08-13' )
> >
> > You probably need an index on time/
On Wednesday 19 August 2009 11:17:26 Scott Carey wrote:
> On 8/19/09 9:28 AM, "Kevin Kempter" wrote:
> > Hi all;
> >
> > we've been fighting this query for a few days now. we bumped up the
> > statistics target for the a.id, c.url_hits_id and the b.id c
Hi all;
We've been fighting this query for a few days now. We bumped up the statistics
target for the a.id, c.url_hits_id, and b.id columns below to 250 and ran
an analyze on the relevant tables. We killed it after 8 hours.
Note the url_hits table has > 1.4 billion rows.
Any suggestions?
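A hedged sketch of the statistics-target bump described above. The url_hits and url_hits_category_jt names come from the posts; the third table and the column-to-table mapping are placeholders, and ANALYZE has to be re-run afterwards for the larger sample to take effect:

ALTER TABLE url_hits             ALTER COLUMN id          SET STATISTICS 250;
ALTER TABLE url_hits_category_jt ALTER COLUMN url_hits_id SET STATISTICS 250;
ALTER TABLE category_dim         ALTER COLUMN id          SET STATISTICS 250;

ANALYZE url_hits;
ANALYZE url_hits_category_jt;
ANALYZE category_dim;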
Hi All;
I'm looking for tips / ideas for performance tuning some specific queries.
These are generally large tables on a highly active OLTP system
(100,000 - 200,000 plus queries per day).
First off, any thoughts on tuning inserts into large tables? I have a large
table with an insert like thi
Hi All;
I've installed pg_buffercache and I want to use it to help define the optimal
shared_buffers size.
Currently I run this every 15 minutes via cron:
insert into buffercache_stats select now(), isdirty, count(*) as buffers,
(count(*) * 8192) as memory from pg_buffercache group by 1,2;
and here
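For context, a hypothetical definition of the buffercache_stats collection table the cron job inserts into; the column names are an assumption inferred from the INSERT's select list:

CREATE TABLE buffercache_stats (
    snap_time timestamptz,   -- now() at collection time
    isdirty   boolean,       -- pg_buffercache.isdirty
    buffers   bigint,        -- count(*) of buffers in that state
    memory    bigint         -- buffers * 8192-byte block size
);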
Hi List;
Anyone have any experiences to share with setting up a federated
architecture with PostgreSQL? I wonder if the dblink contrib works
well in a federated scenario, specifically in the setup of the
federated views which equate to a select * from the same table on each
federated serv
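A minimal sketch of the kind of federated view being asked about, assuming the dblink contrib module; the connection strings, table, and column list are placeholders:

CREATE VIEW federated_orders AS
SELECT id, amount FROM orders                    -- local server's copy
UNION ALL
SELECT * FROM dblink('host=server2 dbname=app',
                     'SELECT id, amount FROM orders')
         AS remote2(id integer, amount numeric)
UNION ALL
SELECT * FROM dblink('host=server3 dbname=app',
                     'SELECT id, amount FROM orders')
         AS remote3(id integer, amount numeric);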
Version 8.3.1
On May 20, 2008, at 1:51 PM, kevin kempter wrote:
Hi all;
I have 2 tables where I basically want to delete from the first
table (seg_id_tmp7) any rows where the entire row already exists in
the second table (sl_cd_segment_dim)
I have a query that looks like this (and
Hi all;
I have 2 tables where I basically want to delete from the first table
(seg_id_tmp7) any rows where the entire row already exists in the
second table (sl_cd_segment_dim)
I have a query that looks like this (and it's slow):
delete from seg_id_tmp7
where
customer_srcid::text
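A hedged sketch of one common way to express "delete rows that already exist in the other table" without casting every column to text; only customer_srcid appears in the visible fragment, so the remaining columns are placeholders (IS NOT DISTINCT FROM also treats two NULLs as a match):

DELETE FROM seg_id_tmp7 t7
WHERE EXISTS (
    SELECT 1
    FROM   sl_cd_segment_dim d
    WHERE  d.customer_srcid IS NOT DISTINCT FROM t7.customer_srcid
    -- ...repeat for each remaining column of seg_id_tmp7...
);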
Hi all;
I have a query that does this:
update tab_x set (inactive_dt, last_update_dt) =
((select run_dt from current_run_date), (select run_dt from
current_run_date))
where
cust_id::text || loc_id::text in
(select cust_id::text || loc_id::text from summary_tab);
The current_run_date table h
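A hedged sketch of the same predicate written as a row comparison rather than concatenated text, assuming the intent is simply to match on the (cust_id, loc_id) pair; this avoids the ::text casts and leaves the planner free to use indexes on the individual columns:

UPDATE tab_x
SET    inactive_dt    = (SELECT run_dt FROM current_run_date),
       last_update_dt = (SELECT run_dt FROM current_run_date)
WHERE  (cust_id, loc_id) IN (SELECT cust_id, loc_id FROM summary_tab);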
Simon Riggs wrote:
On Fri, 2008-05-16 at 00:31 -0600, kevin kempter wrote:
I'm running the join shown below and it takes > 10 hours and
eventually runs out of disk space on a 1.4TB file system
Well, running in 10 hours doesn't mean there's a software problem, nor
does runnin
's not a cartesian product join.
On May 16, 2008, at 1:40 AM, Richard Huxton wrote:
kevin kempter wrote:
Hi List;
I have a table with 9,961,914 rows in it (see the describe of
bigtab_stats_fact_tmp14 below)
I also have a table with 7,785 rows in it (see the describe of
xsegment_dim below)
de_id = segdim.episode_srcid
and segdim.segment_srcid is NULL;
On May 16, 2008, at 12:31 AM, kevin kempter wrote:
Hi List;
I have a table with 9,961,914 rows in it (see the describe of
bigtab_stats_fact_tmp14 below)
I also have a table with 7,785 rows in it (see the describe of
xsegment_dim
Also, I'm running version 8.3 on a CentOS box with 2 dual-core CPUs
and 32 GB of RAM.
On May 16, 2008, at 12:58 AM, kevin kempter wrote:
Sorry, I goofed on the query text. Here's the correct query:
select
f14.xpublisher_dim_id,
f14.xtime_dim_id,
f14.xlocation_dim_id,
f14.x
Hi List;
I have a table with 9,961,914 rows in it (see the describe of
bigtab_stats_fact_tmp14 below)
I also have a table with 7,785 rows in it (see the describe of
xsegment_dim below)
I'm running the join shown below and it takes > 10 hours and
eventually runs out of disk space on a 1.
Hi List;
I have a large table (playback_device) with 6 million rows in it. The
aff_id_tmp1 table has 600,000 rows.
I also have this query:
select distinct
tmp1.affiliate_id,
tmp1.name,
tmp1.description,
tmp1.create_dt,
tmp1.playback_device_id,
pf.segment_id
from
aff_id_tmp1 tmp1,
playback_fragm
Hi List;
I'm having some performance issues with a partitioned table. We have a
VERY large table that we've partitioned by day.
Currently we have 17 partitions - each partition table contains >
700 million rows.
One of the things we need to query is the min date from the master
table - we
Hi List;
Sorry if this is a dupe, my first post never showed up...
I'm having some performance issues with a partitioned table. We have a
VERY large table that we've partitioned by day.
Currently we have 17 partitions - each partition table contains >
700 million rows.
One of the things we
Hi List;
We'll be loading a table with beginning & ending IPs - the table will likely
have upwards of 30 million rows. Any thoughts on how to get the best
performance out of queries that want to look for IP ranges, or the use of
between queries? Should these be modeled as integers?
Thanks in
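A hedged sketch of one common way to model the ranges described above, using inet boundary columns (integer representations work the same way); the table and column names are placeholders:

CREATE TABLE ip_ranges (
    range_start inet NOT NULL,
    range_end   inet NOT NULL
);

CREATE INDEX ip_ranges_bounds_idx ON ip_ranges (range_start, range_end);

-- Find the range(s) containing a given address:
SELECT *
FROM   ip_ranges
WHERE  range_start <= '192.0.2.10'::inet
AND    range_end   >= '192.0.2.10'::inet;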
On Tuesday 27 November 2007 09:33:36 cluster wrote:
> I have a query that takes about 7000 ms in average to complete the first
> time it runs. Subsequent runs complete in only 50 ms. That is more than
> a factor of 100 faster! How can I make the query perform well in the first
> run too?
>
> Query and
Hi List;
I'm preparing to create a test suite of very complex queries that can be
profiled in terms of load and performance. The ultimate goal is to define a
load/performance profile during a run of the old application code base and
then again with changes to the application code base.
I suspe
Hi List;
Any suggestions for improving "LIKE '%text%'" queries?
Thanks in advance
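A hedged sketch of the usual answer for leading-wildcard LIKE: a trigram index from the pg_trgm contrib module. Note that wildcard-LIKE index support in pg_trgm arrived in releases newer than the ones discussed in these posts, and the table/column names here are placeholders:

-- CREATE EXTENSION works on 9.1+; older releases load the contrib SQL script.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX docs_body_trgm_idx ON docs USING gin (body gin_trgm_ops);

-- With the index in place, this form can use an index scan instead of a
-- full sequential scan:
SELECT * FROM docs WHERE body LIKE '%text%';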
Hi All;
I'm preparing to fire up WAL archiving on 8 production servers. We will follow
up by implementing a warm standby scenario.
Does anyone have any thoughts on how to maximize performance, yet minimize
the potential for data loss, assuming we were not able to retrieve the final
un-archiv
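For reference, a hedged sketch of the usual postgresql.conf knobs involved; the archive path and copy command are placeholders, not the poster's setup. archive_timeout bounds how much un-archived WAL can accumulate on a mostly idle system:

archive_mode    = on
archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'
archive_timeout = 60        # force a segment switch at least once a minute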
Hi List;
I've recently started cleaning up some Postgres databases which, prior to my
recent arrival, had no DBA to care for them.
I quickly figured out that there were several tables which were grossly full
of dead space. One table in particular had 75G worth of dead pages (or the
equivalent in o
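A hedged sketch of how dead space like that is usually measured and reclaimed; the table and index names are placeholders, and pgstattuple is a contrib module (the CLUSTER ... USING spelling shown is the newer syntax):

-- Report live vs. dead space for the suspect table.
SELECT * FROM pgstattuple('bloated_table');

-- Rewrite the table compactly (takes an exclusive lock for the duration).
CLUSTER bloated_table USING bloated_table_pkey;
ANALYZE bloated_table;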
On Tuesday 04 September 2007 11:27:07 [EMAIL PROTECTED] wrote:
> Hello everyone:
>
> I wanted to ask you about how VACUUM ANALYZE works. Is it possible
> that something can happen in order to reset its effects, forcing us to execute
> the VACUUM ANALYZE command again? I am asking this because I a
Hi List;
I've just inherited multiple postgres database servers in multiple data
centers across the US and Europe via a new contract I've just started.
Each night during batch processing, several of the servers (2 in
particular) slow to a crawl - they are dedicated postgres database
On Monday 27 August 2007 16:04:39 Decibel! wrote:
> On Mon, Aug 27, 2007 at 04:56:33PM -0500, Kevin Grittner wrote:
> > >>> Decibel! <[EMAIL PROTECTED]> 08/27/07 4:00 PM >>>
> > >>>
> > > > > They're running version 8.1.4
> > >
> > > As for your pg_dump idea... why not just do a CREATE TABLE AS SE
On Monday 27 August 2007 15:00:41 you wrote:
> On Fri, Aug 24, 2007 at 04:41:44PM -0400, Bill Moran wrote:
> > In response to Kevin Kempter <[EMAIL PROTECTED]>:
> > > Hi List;
> > >
> > > I've just started working with a client that has been runni
On Monday 27 August 2007 15:56:33 Kevin Grittner wrote:
> >>> Decibel! <[EMAIL PROTECTED]> 08/27/07 4:00 PM >>>
> >>>
> > > > They're running version 8.1.4
> >
> > As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *
> > FROM bloated_table? That would likely be much faster than m
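A hedged sketch of the CREATE TABLE AS swap being suggested in the quoted reply; the names are placeholders, and indexes, constraints, and privileges would have to be recreated on the new table afterwards:

BEGIN;
CREATE TABLE bloated_table_new AS SELECT * FROM bloated_table;
ALTER TABLE bloated_table     RENAME TO bloated_table_old;
ALTER TABLE bloated_table_new RENAME TO bloated_table;
COMMIT;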
On Friday 24 August 2007 15:39:22 Tom Lane wrote:
> Kevin Kempter <[EMAIL PROTECTED]> writes:
> > The development folks that have been here awhile tell me that it seems
> > like when they have a query (not limited to vacuum processes) that has
> > been running for a lon
Hi List;
I've just started working with a client that has been running Postgres (with
no DBA) for a few years. They're running version 8.1.4 on 4-way Dell boxes
with 4 GB of memory on each box, attached to RAID-10 disk arrays.
Some of their key config settings are here:
shared_buffers = 20480
Hi List;
I've just started working with a new client, and among other issues with their
databases they have a particular update that basically locks out users.
The query below was running for over 6 hours this morning, and the CPU load had
climbed to a point where new connections simply hung wa
Hi List(s);
I want to find more details on the following methods. Can someone
explain to me exactly what each of these methods is and how it's implemented in
Postgres, or point me to some docs or READMEs that explain these methods?
Some of them are obviously no-brainers, but I'm writing a postg
Hi List;
I have a client looking to host/co-locate multiple PostgreSQL clusters
(inclusive of PL/pgSQL application code) per server. I did some co-location
work several years back with one of the bigger telcos and remember there
were dire consequences for not carefully evaluating the expected