Hi Tomas,
Tomas wrote:
We've decided to remove unneeded 'old' data, which means removing about
99.999% of rows from tables A, C and D (about 2 GB of data). At the
beginning, the B table (containing data aggregated from A, C and D) was emptied
(dropped and recreated) and refilled with current data.
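After deleting ~99.999% of the rows, dead tuples and stale planner statistics usually dominate query times, so a cleanup pass is worth running. A minimal sketch, using the table names A, C and D from the message; choosing VACUUM FULL (which locks and rewrites each table) assumes a maintenance window is acceptable:

-- Hedged sketch: reclaim space and refresh statistics after the mass delete.
VACUUM FULL VERBOSE A;
VACUUM FULL VERBOSE C;
VACUUM FULL VERBOSE D;
ANALYZE A;
ANALYZE C;
ANALYZE D;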
Hello,
I am having difficulty fetching records from the database.
The database table contains more than 1 GB of data.
Fetching the records takes more than an hour, and that is slowing down performance.
Please provide some help with improving the performance, and how do I run
Hi,
I have attached a file with details about the tables, the queries and
the EXPLAIN ANALYZE plans.
I hope this is helpful in analyzing my problem.
Thanks,
Doron
-----Original Message-----
From: Ragnar [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 09, 2006 2:37 PM
To: Doron Baranes
Subject: RE:
Hi
I'm currently upgrading a PostgreSQL 7.3.2 database to an
8.1.something-good.
I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about the same time. Are
Jesper Krogh wrote:
Hi
I'm currently upgrading a PostgreSQL 7.3.2 database to an
8.1.something-good.
I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about
If they both took the same amount of time, then you are almost certainly
bottlenecked on gzip.
Try a faster CPU or use gzip --fast.
gzip does not seem to be the bottleneck; on restore, psql is the no. 1
consumer of CPU time.
Jesper
Sorry for the double post.
--
Jesper Krogh
I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about the same time. Are there
any tricks I can use to speed this dump+restore process up?
The
Brendan Duddridge wrote:
Now, initially I thought they would just pre-compute these counts, but
the problem is, when you click on any of the above attribute values,
they reduce the remaining possible set of matching products (and set of
possible remaining attributes and attribute values) by
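For reference, the per-value counts being discussed can be computed on the fly with a GROUP BY; a minimal sketch, where the product_attribute(product_id, attribute_id, value_id) table and the ids of the value the user clicked are assumptions for illustration:

-- Hedged sketch: remaining product counts per attribute value, given one selected value.
SELECT pa.attribute_id, pa.value_id,
       count(DISTINCT pa.product_id) AS product_count
FROM product_attribute pa
WHERE pa.product_id IN (
    SELECT product_id FROM product_attribute
    WHERE attribute_id = 42 AND value_id = 7  -- hypothetical ids of the clicked value
)
GROUP BY pa.attribute_id, pa.value_id;

The cost of running this for every click is exactly the trade-off the thread is weighing against pre-computation.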
On Fri, 2006-04-07 at 19:05 -0400, Tom Lane wrote:
It's plausible though that we are seeing contention across members of
the LWLock array, with the semop storm just being a higher-level symptom
of the real hardware-level problem. You might try increasing
LWLOCK_PADDED_SIZE to 64 or even 128,
Simon Riggs wrote:
pSeries cache lines are 128 bytes wide, so I'd go straight to 128.
Hello :)
OK, that line of code is:
#define LWLOCK_PADDED_SIZE (sizeof(LWLock) <= 16 ? 16 : 32)
What should I change this to? I don't understand the syntax of the <= 16
? : stuff...
would a simple
On Mon, 2006-04-10 at 10:30 +0200, Doron Baranes wrote:
I have attached a file with details about the tables, the queries and
the EXPLAIN ANALYZE plans.
I hope this is helpful in analyzing my problem.
first query:
explain analyze SELECT date_trunc('hour'::text,
i.entry_time) AS datetime,
Jesper Krogh [EMAIL PROTECTED] writes:
gzip does not seem to be the bottleneck; on restore, psql is the no. 1
consumer of CPU time.
Hm. We've seen some situations where readline mistakenly decides that
the input is interactive and wastes lots of cycles doing useless
processing (like keeping
Gavin Hamill [EMAIL PROTECTED] writes:
would a simple #define LWLOCK_PADDED_SIZE 128 be sufficient?
Yeah, that's fine.
regards, tom lane
On 4/10/06, Jesper Krogh [EMAIL PROTECTED] wrote:
Hi
I'm currently upgrading a PostgreSQL 7.3.2 database to an
8.1.something-good.
I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database
Sorry for the post; I only saw the other replies after posting.
On 4/10/06, Rajesh Kumar Mallah [EMAIL PROTECTED]
wrote:
On 4/10/06, Jesper Krogh [EMAIL PROTECTED]
wrote:
Hi
I'm currently upgrading a PostgreSQL 7.3.2 database to an
8.1.something-good.
I'd run pg_dump | gzip > sqldump.gz on the old
What is the query? Use LIMIT or a restricting WHERE clause.
regds
mallah.
On 4/10/06, soni de [EMAIL PROTECTED] wrote:
Hello,
I am having difficulty fetching records from the database.
The database table contains more than 1 GB of data.
Fetching the records takes more than an hour, and that's why
4. fsync can also be turned off while loading a huge dataset, but seek others' comments too (and study the docs), as I am not sure about the reliability. I think it can make a lot of difference.
On 4/10/06, Jesper Krogh [EMAIL PROTECTED] wrote:
Rajesh Kumar Mallah wrote:
I'd run pg_dump | gzip
Rajesh Kumar Mallah wrote:
4. fsync can also be turned off while loading a huge dataset,
but seek others' comments too (and study the docs), as I am not sure about the
reliability. I think it can make a lot of difference.
Also be sure to increase maintenance_work_mem so that index creation
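As a rough illustration of those two knobs, a sketch of settings for a one-off bulk restore; the values are assumptions, and fsync = off is only acceptable if losing the whole load on a crash is tolerable:

-- In postgresql.conf for the duration of the restore (needs a reload):
--   fsync = off                    -- unsafe for normal operation; re-enable afterwards
--   maintenance_work_mem = 524288  -- in kB (~512MB), more memory for CREATE INDEX
-- maintenance_work_mem can also be raised just for the restoring session:
SET maintenance_work_mem = 524288;  -- in kB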
Hi, I want to optimize something like this.
- My items table:
code int -- can take one of 100 values
property varchar(250) -- can take one of 5000 values
param01 char(10) -- can take one of 10 values
param02 char(10) -- can take one of 10 values
...
[ 20 similar columns ]
...
parama20 char(10) -- can take one
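For concreteness, a runnable DDL sketch of the table as described; the table name items is an assumption, and the column list simply transcribes the message (parama20 is taken to be a typo for param20):

-- Hedged sketch of the table described above.
CREATE TABLE items (
    code     int,           -- one of ~100 values
    property varchar(250),  -- one of ~5000 values
    param01  char(10),      -- one of ~10 values
    param02  char(10),      -- one of ~10 values
    -- ... param03 through param19, all char(10) with ~10 values each ...
    param20  char(10)       -- one of ~10 values
);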
Rajesh Kumar Mallah wrote:
What is the query?
Use LIMIT or a restricting WHERE clause.
You could also use a cursor.
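If the client really does need to walk a huge result set, a server-side cursor avoids pulling everything into memory at once. A minimal sketch; the table name mytable and the filter column are assumptions for illustration:

-- Hedged sketch: stream a large result set in batches instead of one huge fetch.
BEGIN;
DECLARE big_scan CURSOR FOR
    SELECT * FROM mytable WHERE entry_time >= '2006-01-01';
FETCH 1000 FROM big_scan;  -- repeat until it returns no rows
CLOSE big_scan;
COMMIT;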
Joshua D. Drake
regds
mallah.
On 4/10/06, soni de [EMAIL PROTECTED] wrote:
Hello,
I am having difficulty fetching records from
On Apr 10, 2006, at 3:55 AM, Jesper Krogh wrote:
I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about the same time. Are there
any tricks I can use to
Tom Lane wrote:
This is unfortunately not going to help you as far as getting that
machine into production now (unless you're brave enough to run CVS tip
as production, which I certainly am not). I'm afraid you're most likely
going to have to ship that pSeries back at the end of the month,
Hi Richard (and anyone else who wants to comment!),
I'm not sure it will really work pre-computed. At least not in an
obvious way (for me! :-)) It's fine to display a pre-computed list of
product counts for the initial set of attribute and attribute values,
but we need to be able to
Chris,
Just to make sure the x4100 config is similar to your Linux system, can
you verify the default setting for the disk write cache and make sure it
is the same (enabled or disabled) on both? Here's how to check in Solaris:
As root, run format -e, then pick a disk -> cache -> write_cache -> display.
Dan Harris [EMAIL PROTECTED] writes:
I have a query that is intended to select from multiple small tables
to get a limited subset of incidentid and then join with a very
large table. One of the operations will require a sequential scan, but
the planner is doing the scan on the very large
Tom Lane wrote:
That's very strange --- the estimated cost of the seqscan is high enough
that the planner should have chosen a nestloop with inner indexscan on
the big table. I'm not sure about the join-order point, but the hash
plan for the first join seems wrong in any case.
Um, you do have
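To experiment with the plan choice described here (nestloop with inner indexscan on the big table), one hedged diagnostic is to take the suspect plan types away for a single session and compare; the join shape below is an assumption reconstructed from the column names mentioned in the follow-up (eventactivity.incidentid, k_h.incidentid):

-- Hedged diagnostic sketch: for testing only, never leave these set in production.
BEGIN;
SET LOCAL enable_seqscan = off;
SET LOCAL enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT e.*
FROM k_h
JOIN eventactivity e ON e.incidentid = k_h.incidentid;
ROLLBACK;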
Dan Harris [EMAIL PROTECTED] writes:
Yes, eventactivity.incidentid is indexed. The datatype is varchar(40).
Although, by checking this, I noticed that k_h.incidentid was
varchar(100). Perhaps the difference in length between the keys caused
the planner to not use the fastest method?
No,
I was wondering if anyone on the list has a successful installation of
pgmemcache running
that uses LISTEN/NOTIFY to signal a successfully completed transaction,
i.e., to get around the fact
that TRIGGERS are transaction unaware. Or perhaps any other
information regarding a successful
deployment
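For context, the transaction-awareness being asked about is what NOTIFY already provides: notifications are only delivered to listeners when the issuing transaction commits. A minimal sketch; the products table, the channel name and the statement-level trigger are assumptions for illustration, not pgmemcache's actual API:

-- Hedged sketch: signal cache invalidation only after commit.
-- A separate listener process runs LISTEN cache_invalidation; and flushes
-- the relevant memcached keys whenever a notification arrives.
CREATE OR REPLACE FUNCTION notify_cache_invalidation() RETURNS trigger AS $$
BEGIN
    NOTIFY cache_invalidation;  -- delivered only if this transaction commits
    RETURN NULL;                -- return value is ignored for AFTER statement triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_cache_invalidate
    AFTER INSERT OR UPDATE OR DELETE ON products
    FOR EACH STATEMENT EXECUTE PROCEDURE notify_cache_invalidation();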
I have a slow SQL query:
SELECT * FROM mytable WHERE id IN (1,3,5,7, ...3k values here...);
mytable has about 10k rows.
If I don't use the IN clause, it costs 0.11 seconds; otherwise it
costs 2.x seconds.
I guess pg uses a linear search to handle the IN clause; is there any way
to let pg use another search method?
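One common workaround is to load the ~3000 ids into a temporary table and join against it, so the planner can use a hash or merge join instead of re-checking a long OR-style list per row. A hedged sketch, keeping the mytable/id names from the message:

-- Hedged sketch: replace a 3000-element IN list with a join.
CREATE TEMP TABLE wanted_ids (id int PRIMARY KEY);
-- Load the ids; COPY wanted_ids FROM STDIN is fastest, or one INSERT per id:
INSERT INTO wanted_ids VALUES (1);
INSERT INTO wanted_ids VALUES (3);
-- ... remaining ids ...
ANALYZE wanted_ids;

SELECT m.*
FROM mytable m
JOIN wanted_ids w ON w.id = m.id;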
On Mon, 2006-04-10 at 12:44 +0800, Qingqing Zhou wrote:
[EMAIL PROTECTED] wrote
I have a slow SQL query:
SELECT * FROM mytable WHERE id IN (1,3,5,7, ...3k values here...);
mytable has about 10k rows.
If I don't use the IN clause, it costs 0.11 seconds; otherwise it
costs 2.x seconds.
I guess