Mike Charnoky wrote:
Scott Marlowe wrote:
On Mon, Mar 9, 2009 at 8:21 PM, Mike Charnoky n...@nextbus.com wrote:
The random sampling query is normally pretty snappy. It usually takes on
the order of 1 second to sample a few thousand rows of data out of a few
million. The sampling is consistently quick, too
Gregory Stark wrote:
Tom Lane t...@sss.pgh.pa.us writes:
Mike Charnoky n...@nextbus.com writes:
The sampling query which runs really slow on some days looks something
like this:
INSERT INTO sampled_data
(item_name, timestmp, ... )
SELECT item_name, timestmp, ...
FROM raw_data
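The quoted query cuts off before its WHERE clause. A guess at the overall shape of such a sampling query, assuming an ORDER BY random() approach (the timestamp range, LIMIT, and column list beyond the fragment are illustrative, not from the thread):

```sql
-- Hypothetical reconstruction; raw_data, sampled_data, item_name and
-- timestmp come from the quoted fragment, everything else is assumed.
INSERT INTO sampled_data (item_name, timestmp)
SELECT item_name, timestmp
FROM raw_data
WHERE timestmp >= '2009-03-08' AND timestmp < '2009-03-09'
ORDER BY random()
LIMIT 5000;
```

ORDER BY random() forces a sort of every qualifying row, which is one reason a normally snappy sample can slow down badly when the day's row count or the plan changes.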
Hello,
I'm looking for some insight on an intermittent PostgreSQL performance
problem that has been very troublesome. Using PG 8.3.5 on a server
running CentOS 5 2.6.18-8.el5 (Dual Xeon 2.00 GHz, 4 GB RAM, RAID-10
SCSI 600GB array).
The problem in a nutshell is this: on some days, a nightly
of the data (in this case, where item_name=something),
not just between timestamp ranges. Guess I'll just have to try kicking
up the work_mem for that query.
Thanks so much for your input.
Mike
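For the work_mem experiment mentioned above, the setting can be raised for just the sampling session instead of server-wide. A minimal sketch (the 256MB figure is illustrative):

```sql
-- Raise work_mem for this session only, so the sort can stay in memory;
-- RESET afterwards so other sessions keep the server default.
SET work_mem = '256MB';
-- ... run the sampling INSERT ... SELECT here ...
RESET work_mem;
```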
Scott Marlowe wrote:
On Mon, Mar 9, 2009 at 1:55 PM, Mike Charnoky n...@nextbus.com wrote:
Hello
a vacuum running during this time.
Interesting idea to issue the EXPLAIN first... I will see if I can
instrument the sampling program to do this.
Thanks for your help Tom.
Mike
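Instrumenting the sampling program to issue the EXPLAIN first could be as simple as running EXPLAIN with the same predicate immediately before the real query (column and table names here are assumptions carried over from the quoted fragment):

```sql
-- Capture the planner's choice and row estimate before the real run; if the
-- estimate is wildly off on "slow" days, stale statistics are a likely cause.
EXPLAIN
SELECT item_name, timestmp
FROM raw_data
WHERE timestmp >= '2009-03-08' AND timestmp < '2009-03-09';
```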
Tom Lane wrote:
Mike Charnoky n...@nextbus.com writes:
The sampling query which runs really slow on some days looks
Our database schema was designed before postgresql supported arrays and
contains a custom type which is basically a float4 array. I would like
to clean things up and convert the custom datatype to float4[], as this
would obviate the need for us to compile a custom shared object. We are
hitting
to be the missing link to eventually get the data to float4[].
Mike
Alvaro Herrera wrote:
Mike Charnoky wrote:
Our database schema was designed before postgresql supported arrays and
contains a custom type which is basically a float4 array. I would like
to clean things up and convert the custom
Thanks! That did the trick. For posterity, I was able to do the final
conversion using:
alter table mytable alter column mycolumn type float4[]
  using string_to_array(
          trim(both '[]' from textin(nbf4a_out(mycolumn))), ',')::float4[];
Mike
Alvaro Herrera wrote:
Mike Charnoky wrote:
OK, forgive
In this usage scenario, doesn't the new HOT (heap only tuples) feature
of PG8.3 help, in terms of the DB requiring less VACUUM maintenance?
I am similarly performing a huge number of inserts, followed by a huge
number of updates to fill in a few null fields. The data is indexed by
insert time.
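HOT (in 8.3) applies when an UPDATE changes no indexed column, so the pattern above of filling in NULL fields later should qualify as long as those fields are not themselves indexed. A sketch with made-up table and column names:

```sql
-- insert_time is indexed; extra_field is not.
CREATE TABLE events (
    insert_time  timestamptz NOT NULL,
    payload      text,
    extra_field  integer          -- left NULL at insert, filled in later
);
CREATE INDEX events_time_idx ON events (insert_time);

-- This UPDATE touches no indexed column, so 8.3 can do a HOT update:
-- the new tuple version stays on the same heap page and no new index
-- entries are created, reducing bloat and VACUUM work.
UPDATE events SET extra_field = 42 WHERE extra_field IS NULL;
```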
into this table. I have checkpoint_segments set to 64 so
that pg is not constantly thrashing the disk with writes. The
transaction log is on a separate disk.
Mike
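For reference, the corresponding postgresql.conf fragment might look like this (checkpoint_segments = 64 is from the message; the completion target is an illustrative 8.3-era companion setting, not something the poster mentioned):

```
# postgresql.conf fragment (sketch, not the poster's actual config)
checkpoint_segments = 64            # fewer, larger checkpoints
checkpoint_completion_target = 0.7  # spread checkpoint I/O out (8.3+)
```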
Bill Moran wrote:
In response to Mike Charnoky [EMAIL PROTECTED]:
This is strange... count(*) operations over a period of one day's worth
Gregory Stark wrote:
Alban Hertroys [EMAIL PROTECTED] writes:
Mike Charnoky wrote:
With respect to the ALTER TABLE SET STATISTICS... how do I determine a
good value to use? This wasn't really clear in the pg docs. Also, do I
need to run ANALYZE on the table after I change the statistics
: 111298.695 ms
Mike
Gregory Stark wrote:
Mike Charnoky [EMAIL PROTECTED] writes:
I altered the table in question, with set statistics 100 on the
timestamp column, then ran analyze. This seemed to help somewhat. Now,
queries don't seem to hang, but it still takes a long time to do the count
Hi,
I am still having problems performing a count(*) on a large table. This
is a followup from a recent thread:
http://archives.postgresql.org/pgsql-general/2007-09/msg00561.php
Since the last time these problems happened, we have tweaked some
postgresql config parameters (fsm, etc). I also
The autovacuum is turned on. Since this is pg8.1, I don't know when the
table was actually last vacuumed. I *did* run analyze on the table,
though. Also, nothing has been deleted in this table... so vacuum
should have no effect, right?
Mike
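When an exact count(*) is too slow on a table this size, the planner's estimate is often good enough and is nearly free; it is only as fresh as the last ANALYZE. A sketch (the table name is hypothetical):

```sql
-- Approximate row count from the statistics gathered by ANALYZE,
-- read straight out of the system catalog instead of scanning the table.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'my_big_table';
```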
Sean Davis wrote:
Mike Charnoky wrote:
Hi,
I am
[EMAIL PROTECTED] writes:
A. Kretschmer wrote:
On Fri, 28.09.2007 at 11:56:46 -0400, Mike Charnoky wrote the following:
Hi,
I am still having problems performing a count(*) on a large table. This
Now, certain count(*) queries are failing to complete for certain time
ranges (I killed
Thanks, recreating the table solved my problems. Our team is working on
implementing some performance tuning based on other recommendations from
the list (FSM, etc).
Mike
Joshua D. Drake wrote:
At this point, you are in a world of hurt :). If you stop a vacuum you
have created a huge mess of
Hi,
I'm using PostgreSQL 8.1.8 and am having trouble with a table which
contains a large amount of data. Data is constantly being inserted into
the table, roughly a million inserts per hour at peak. The table
currently has about 100 million entries which take up 14G of space (24G
with indices).
I have never heard that stopping a vacuum is problematic... I have had
to do this many times in the past without any adverse effects. Is there
some sort of documentation which elaborates on this issue?
For the record, I did a VACUUM ANALYZE, not FULL. Now that I think
about it, I probably
pg_locks.relation=pg_class.oid and pg_locks.database=
(SELECT datid from pg_stat_database where datname = 'my_db_name');
Mike
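The quoted lines are the tail of a lock-inspection query; a complete query along the same lines might be (the SELECT list is an assumed reconstruction, the join and WHERE conditions match the fragment):

```sql
-- Show which tables in my_db_name have locks on them and whether each
-- lock has actually been granted.
SELECT pg_class.relname, pg_locks.mode, pg_locks.granted
FROM pg_locks
JOIN pg_class ON pg_locks.relation = pg_class.oid
WHERE pg_locks.database =
      (SELECT datid FROM pg_stat_database WHERE datname = 'my_db_name');
```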
Tom Lane wrote:
Mike Charnoky [EMAIL PROTECTED] writes:
First, a question: For a PG8.1 database, is it preferable to use the new
alter table disable|enable trigger
Hi, I'm using PostgreSQL 8.1.4 and am having some problems with the new
disable/enable trigger command.
First, a question: For a PG8.1 database, is it preferable to use the new
alter table disable|enable trigger command as opposed to the old
method of setting pg_class.reltriggers = 0?
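Side by side, the two approaches being compared look like this (the table name is illustrative):

```sql
-- New 8.1 syntax: disable and re-enable triggers on one table.
-- Note: ALL includes FK triggers and requires superuser; USER covers
-- only user-defined triggers.
ALTER TABLE mytable DISABLE TRIGGER ALL;
-- ... bulk load ...
ALTER TABLE mytable ENABLE TRIGGER ALL;

-- Old method: poke the catalog directly (superuser only, not recommended,
-- and easy to leave in an inconsistent state if interrupted).
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'mytable';
```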
I'm
this is the way to support large files). It
is my understanding that ext3 supports file sizes up to 1T. The restore worked
fine when the database was smaller. Any ideas?
Thanks,
Mike Charnoky
-dataPos = 1785996817
BTW, the file size is: 2361910772 bytes
Mike Charnoky
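The reported size points straight at the classic 2 GB boundary: the file is larger than the biggest signed 32-bit integer, so any code tracking the offset in a 32-bit int overflows even though ext3 itself allows far larger files. The arithmetic:

```
2^31 - 1        =  2,147,483,647   (largest signed 32-bit offset)
file size       =  2,361,910,772   > 2^31 - 1, so the offset wraps
2,361,910,772 - 2^32 = -1,933,056,524   (value seen by 32-bit code)
```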
Tom Lane wrote:
Mike Charnoky [EMAIL PROTECTED] writes:
I am currently using PostgreSQL v7.3.4 on a RedHat 8.0 system (2.4.23 kernel)
using the ext3 filesystem. I am experiencing problems when performing a
pg_restore using
treated as an integer somewhere.
Mike Charnoky
Tom Lane wrote:
Mike Charnoky [EMAIL PROTECTED] writes:
I am currently using PostgreSQL v7.3.4 on a RedHat 8.0 system (2.4.23 kernel)
using the ext3 filesystem. I am experiencing problems when performing a
pg_restore using a file which is 2.3G