Hello Gustavo,
Your question seems to say that you suspect a disk
issue, and a few hours later, Simon told me "Sounds like your
disks/layout/something is pretty sick".
To be clear in my own mind about it, I've just
copied (time cp) the "aggregate" table files (4 GB) from one disk to an
Hello Simon,
Sounds like your disks/layout/something is
pretty sick. You don't mention I/O bandwidth, controller or RAID, so you
should look more into those topics.
Well seen! (as we say in France).
As I said to Gustavo, your last suspicion took me
into a simple disk test: I've just
Hello Richard,
Perhaps look into clustering the tables.
Good idea: I will look further into this.
There is no index on the aggregate table since the criteria, their
number, and their scope are freely chosen by the customers.
Hmm... not convinced this is a good idea.
Long days
Oleg Bartunov wrote:
from my notes
http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes
It's useful to see word statistics, for example, to check how well
your dictionaries work or how you configured pg_ts_cfgmap. Also, you
may notice probable stop words relevant for your
On Thu, 24 Mar 2005, Rick Jansen wrote:
Oleg Bartunov wrote:
from my notes
http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes
It's useful to see word statistics, for example, to check how well
your dictionaries work or how you configured pg_ts_cfgmap. Also, you
may notice
Oleg Bartunov wrote:
stat() is indeed a big dog; it was designed for developers' needs,
so we recommend saving the results in a table.
Anyway, here's my pg_ts_cfgmap now (well the relevant bits):
default_english | lhword | {en_ispell,en_stem}
default_english | lpart_hword | {en_ispell,en_stem}
Good day Patrick!
I can help you design your disk layout for better performance and
security. Please tell me how many disks you have (and some specs, like
capacity and RPM).
If you want to know more, there is a very interesting article about
filesystem benchmarking (
* Tom Lane [EMAIL PROTECTED] wrote:
big_snip
BTW: is it possible to explicitly clear the cache for immutable
functions ?
I'd like to use immutable functions for really often lookups like
fetching a username by uid and vice versa. The queried tables
change very rarely, but when they change is
On Thu, Mar 24, 2005 at 02:32:48PM +0100, Enrico Weigelt wrote:
BTW: is it possible to explicitly clear the cache for immutable
functions ?
What cache? There is no caching of function results.
I'd like to use immutable functions for really often lookups like
fetching a username by uid
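The distinction being drawn here — IMMUTABLE permits plan-time constant folding, it is not a result cache — can be sketched as follows. This is an illustration only, not PostgreSQL internals; the function and data are invented:

```python
# Illustration of plan-time constant folding vs. a result cache.
# IMMUTABLE tells the planner it MAY evaluate a call with constant
# arguments once while building the plan; nothing caches results
# across plans, so a new plan re-evaluates the function.

calls = {"n": 0}

def lookup_username(uid):
    """Stand-in for an 'immutable' SQL lookup function."""
    calls["n"] += 1
    return {1: "root", 1000: "enrico"}.get(uid, "unknown")

def plan_query(uid_constant):
    # With a constant argument, the planner may fold the call into a
    # constant at plan time...
    folded = lookup_username(uid_constant)
    def run(rows):
        # ...so execution never calls the function again for this plan.
        return [(row, folded) for row in rows]
    return run

run = plan_query(1000)
run(range(5))
run(range(5))
assert calls["n"] == 1  # evaluated once, at plan time only
```

This is also why marking a function IMMUTABLE when its underlying table can change is risky: a plan built before the change carries the stale folded value.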
* Tom Lane [EMAIL PROTECTED] wrote:
Peter Darley [EMAIL PROTECTED] writes:
I have a question about views: I want to have a fairly wide view (lots
of columns) where most of the columns have some heavyish calculations in
them, but I'm concerned that it will have to calculate every
Great !
I'm not an expert but, as far as I know, my 15
databases are spread over 4 SCSI RAID disks (73 GB, 10K RPM) mounted
under ext3.
I remember that they were provided by Dell under
RAID 5, and I asked my system engineer to switch them to standard SCSI
because I don't care about
* Alvaro Herrera [EMAIL PROTECTED] wrote:
On Thu, Mar 24, 2005 at 02:32:48PM +0100, Enrico Weigelt wrote:
BTW: is it possible to explicitly clear the cache for immutable
functions ?
What cache? There is no caching of function results.
Not? So what's immutable for?
snip
I'd like
Hi!
I have a Dell PowerEdge 2600 with a PERC 4/DI and 4 SCSI disks of
35 GB each. I made a RAID 0+1 array with the 4 disks because it is a
mission-critical application. But in your case you could configure
RAID 0, which is faster than RAID 5 for 4 disks.
Ask your system engineer which distribution
On Thu, 24 Mar 2005, Enrico Weigelt wrote:
* Alvaro Herrera [EMAIL PROTECTED] wrote:
On Thu, Mar 24, 2005 at 02:32:48PM +0100, Enrico Weigelt wrote:
BTW: is it possible to explicitly clear the cache for immutable
functions ?
What cache? There is no caching of function results.
Hello !
I'm running pg_autovacuum on a 1 GHz machine with an 80 GB disk and
512 MB of RAM. The database is about 30 MB tarred. We have about 5
updates/inserts/deletes per day. It runs beautifully for ~4 days. Then
the HDD activity and the Postmaster CPU usage go up a LOT. Even though
I have plenty (?) of FSM (2
I've got a report that is starting to take too long to run. I'm going to
create a lookup table that should speed up the results, but first I've
got to actually build that lookup table.
I honestly don't care how long the query takes to run, I just want to run it
without causing a major performance impact
while you weren't looking, Matthew Nuzum wrote:
select accountid, min(atime) as atime, sessionid from usage_access
group by accountid,sessionid;
Try something along the lines of:
select ua.accountid
, (select atime
from usage_access
where sessionid = ua.sessionid
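The rewrite being suggested is truncated in the quote above. A minimal sketch of one plausible completion, with Python's sqlite3 standing in for PostgreSQL (table and column names come from the thread, the sample rows are invented):

```python
import sqlite3

# Sketch: the original GROUP BY aggregate vs. a correlated subquery
# that can walk a (sessionid, atime) index instead of sorting the
# whole table. sqlite3 stands in for PostgreSQL here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE usage_access (accountid INTEGER, atime INTEGER, sessionid INTEGER);
    INSERT INTO usage_access VALUES (1, 50, 10), (1, 20, 10), (2, 70, 11), (2, 60, 11);
    CREATE INDEX ua_session_atime ON usage_access (sessionid, atime);
""")

# Original form: one sort/hash over the whole table.
agg = conn.execute("""
    SELECT accountid, MIN(atime) AS atime, sessionid
    FROM usage_access GROUP BY accountid, sessionid
""").fetchall()

# Suggested form: per-session probe that an index can satisfy.
sub = conn.execute("""
    SELECT DISTINCT accountid,
           (SELECT atime FROM usage_access u2
             WHERE u2.sessionid = u1.sessionid
             ORDER BY atime LIMIT 1) AS atime,
           sessionid
    FROM usage_access u1
""").fetchall()

assert sorted(agg) == sorted(sub)  # both forms agree on the result
```

On 7.3-era PostgreSQL, min()/max() could not use indexes, which is what makes the ORDER BY ... LIMIT 1 probe attractive.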
Otto Blomqvist [EMAIL PROTECTED] writes:
Over 100'000 Index Rows removed, 300'000 unused item pointers? How could
autovacuum let this happen?
What PG version is this?
(The earlier autovacuum releases had some bugs with large tables, thus
the question...)
regards, tom
Matthew Nuzum [EMAIL PROTECTED] writes:
Here is the query (BTW, there will be a corresponding max version of this
query as well):
INSERT INTO usage_sessions_min (accountid,atime,sessionid)
select accountid, min(atime) as atime, sessionid from usage_access
group by accountid,sessionid;
How
How many rows in usage_access? How many groups do you expect?
(Approximate answers are fine.) What PG version is this, and
what's your sort_mem setting?
regards, tom lane
I believe there are about 40,000,000 rows, I expect there to be about
10,000,000 groups.
How many rows in usage_access?
Oh, I just got my explain analyze:
QUERY PLAN
Matthew Nuzum [EMAIL PROTECTED] writes:
I believe there are about 40,000,000 rows, I expect there to be about
10,000,000 groups. PostgreSQL version is 7.3.2 and the sort_mem is at the
default setting.
Okay. I doubt that the nearby suggestion to convert the min()s to
indexscans will help at
I would strongly suggest doing the min and max calculations together:
select groupid, min(col), max(col) from ...
because if you do them in two separate queries 90% of the effort will be
duplicated.
regards, tom lane
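Tom's one-pass point can be sketched with sqlite3 standing in for PostgreSQL (sample data invented):

```python
import sqlite3

# Computing min and max together in one aggregate pass, instead of
# running two separate queries that each scan the whole table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE usage_access (accountid INTEGER, atime INTEGER, sessionid INTEGER);
    INSERT INTO usage_access VALUES (1, 50, 10), (1, 20, 10), (2, 70, 11), (2, 60, 11);
""")

# One pass over the data produces both aggregates per group...
both = conn.execute("""
    SELECT accountid, sessionid, MIN(atime), MAX(atime)
    FROM usage_access GROUP BY accountid, sessionid
""").fetchall()

# ...where separate min and max queries would each repeat the same
# scan and sort/hash work over 40M rows.
assert sorted(both) == [(1, 10, 20, 50), (2, 11, 60, 70)]
```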
Thanks. Other than avoiding using too much
Matthew Nuzum [EMAIL PROTECTED] writes:
Thanks. Other than avoiding using too much sort mem, is there anything else
I can do to ensure this query doesn't starve other processes for resources?
Not a lot.
Doing the explain analyze only increases my server load by 1 and seems to
readily
On Thu, 2005-03-24 at 10:48 +0100, Patrick Vedrines wrote:
You've got 1.5 GB of shared_buffers and 2 GB of data. In 8.0, the scan
will hardly use the cache at all, nor will it ever, since the data is
bigger than the cache. Notably, the scan of B should NOT spoil the cache
for A
Are you sure
Sorry about that. I'm running 8.0.0 on Red Hat Linux 8.0.
Tom Lane [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Otto Blomqvist [EMAIL PROTECTED] writes:
Over 100'000 Index Rows removed, 300'000 unused item pointers? How could
autovacuum let this happen?
What PG version is
The version that shipped with 8.0 should be fine. The only versions that
had the problem Tom referred to are the early 7.4.x releases.
Did you get my other message about information from the stats system
(I'm not sure why my other post has yet to show up on the performance
list).
Matthew
I would rather keep this on list since other people can chime in.
Otto Blomqvist wrote:
It does not seem to be a Stats collector problem.
oid | relname | relnamespace | relpages | relisshared | reltuples | schemaname | n_tup_ins | n_tup_upd | n_tup_del
v8.0.1 on a Sun v20Z running Gentoo Linux, 1 CPU, 1 GB RAM, one 10K
SCSI disk.
I have a (fairly) newly rebuilt database. In the last month it has
undergone extensive testing, hence thousands of inserts and deletes in
the table in question. After each mass unload/load cycle, I vacuum full
analyze
Karim Nassar [EMAIL PROTECTED] writes:
Here is the statement:
orfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE
id_meas_type IN (SELECT * FROM meas_type_ids);
QUERY PLAN
Tom,
I've got a similar problem with deletes taking a very long time. I know
that there are lots of foreign keys referencing this table, and other
foreign keys referencing those tables, etc. I've been curious, is there
a way to find out how long the foreign key checks take for each
dependent
On Thu, 2005-03-24 at 19:52 -0500, Tom Lane wrote:
Karim Nassar [EMAIL PROTECTED] writes:
Here is the statement:
orfs=# explain analyze DELETE FROM int_sensor_meas_type WHERE
id_meas_type IN (SELECT * FROM meas_type_ids);
QUERY
Karim Nassar [EMAIL PROTECTED] writes:
Look at what your triggers are doing. My private bet is that you have
unindexed foreign keys referencing this table, and so each deletion
forces a seqscan of some other, evidently very large, table(s).
Almost. I have a large table (6.3 million rows)
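Tom's diagnosis can be illustrated with sqlite3 (a hypothetical parent/child schema, not the poster's tables): without an index on the referencing column, the per-delete referential check must scan the child table, and an index turns it into a probe.

```python
import sqlite3

# Unindexed foreign key: each parent deletion must check the child
# table for references, which is a full scan unless the referencing
# column is indexed. Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (parent_id INTEGER REFERENCES parent(id), payload TEXT);
""")

def fk_check_plan():
    # The referential check boils down to this probe on the child.
    row = conn.execute(
        "EXPLAIN QUERY PLAN SELECT 1 FROM child WHERE parent_id = ?", (1,)
    ).fetchone()
    return row[-1]  # plan detail string

before = fk_check_plan()  # "SCAN ..." -- full scan of child
conn.execute("CREATE INDEX child_parent_idx ON child (parent_id)")
after = fk_check_plan()   # "SEARCH ... USING INDEX ..." -- index probe

assert "SCAN" in before and "SEARCH" in after
```

With 6.3 million referencing rows, the difference between those two plans, multiplied by one check per deleted parent row, is the whole story.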
Mark Lewis [EMAIL PROTECTED] writes:
I've got a similar problem with deletes taking a very long time. I know
that there are lots of foreign keys referencing this table, and other
foreign keys referencing those tables, etc. I've been curious, is there
a way to find out how long the foreign
Hmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show
the runtime expended in each trigger when the statement is of a kind
that has triggers. We couldn't break down the time *within* the
triggers, but even this info would help a lot in terms of finger
pointing ...
Seq
In that case there's a datatype mismatch between the referencing and
referenced columns, which prevents the index from being used for the
FK check.
Is creating such a foreign key a WARNING yet?
Chris
Watch your pg_stats_* views before and after the delete and check what
related tables have had lots of seqscans.
Chris
Mark Lewis wrote:
Tom,
I've got a similar problem with deletes taking a very long time. I know
that there are lots of foreign keys referencing this table, and other
foreign
On Mar 24, 2005, at 10:38 PM, Christopher Kings-Lynne wrote:
In that case there's a datatype mismatch between the referencing and
referenced columns, which prevents the index from being used for the
FK check.
Is creating such a foreign key a WARNING yet?
I recall getting such a warning when
Karim,
The problem now is that I expect this referencing table to grow to about
110 million rows in the next 2 months, then by 4 million rows per month
thereafter. I expect that the time to recreate the foreign key will grow
linearly with size.
Is this just the kind of thing I need to watch out
On Thu, 24 Mar 2005, Tom Lane wrote:
Mark Lewis [EMAIL PROTECTED] writes:
I've got a similar problem with deletes taking a very long time. I know
that there are lots of foreign keys referencing this table, and other
foreign keys referencing those tables, etc. I've been curious, is there
a way to
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
Hmm ... I wonder how hard it would be to teach EXPLAIN ANALYZE to show
the runtime expended in each trigger when the statement is of a kind
that has triggers.
Could SPI know that an explain analyze is being run and add their
output and
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
In that case there's a datatype mismatch between the referencing and
referenced columns, which prevents the index from being used for the
FK check.
Is creating such a foreign key a WARNING yet?
I believe so as of 8.0. It's a bit tricky since