Hi,
We have 3 tables:
1. persons -- base table; no data will be inserted into this table.
2. Person1 -- inherited from persons; all data will be inserted into this
table.
3. PersonalInfo -- stores all personal information of the persons
and has the foreign key.
Currently foreign keys don't work that way. You will need your data to be in
table persons if you want the foreign key to work correctly. Otherwise you
could create your own trigger to ensure the foreign key restriction you need.
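For example, a minimal sketch of such a trigger, assuming hypothetical
column names (PersonalInfo.person_id referencing Person1.id) and that
plpgsql is installed:

  -- reject PersonalInfo rows whose person_id has no match in Person1
  CREATE OR REPLACE FUNCTION check_person_exists() RETURNS trigger AS $$
  BEGIN
      IF NOT EXISTS (SELECT 1 FROM person1 WHERE id = NEW.person_id) THEN
          RAISE EXCEPTION 'person_id % not found in person1', NEW.person_id;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER personalinfo_fk_check
      BEFORE INSERT OR UPDATE ON personalinfo
      FOR EACH ROW EXECUTE PROCEDURE check_person_exists();

Note that a real foreign key also guards the other direction; to mimic
that fully you would need matching triggers on person1 for DELETE and
UPDATE.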
I'm no expert at all, and it seems there are non-trivial issues
A good idea here would be to first do a VACUUM FULL and then keep the
autovacuum settings you want.
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/17/07, Jeremy Haile [EMAIL PROTECTED] wrote:
I still keep wondering if this table is bloated with dead tuples. Even
if
Thanks for the great info Chad. I'm learning a lot from this thread!
347434 rows * 156 bytes = 52MB (it could reasonably be held in your
shared buffers, which makes Tom's suggestion very plausible: the
index scan may not be cheaper -- because it is all cached)
Maybe - I tried running the
On 17-Jan-07, at 9:37 AM, Jeremy Haile wrote:
I still keep wondering if this table is bloated with dead tuples. Even
if you vacuum often, if there's a connection with an idle transaction,
the tuples can't be reclaimed and the table would continue to grow.
I used to vacuum once an hour,
How much memory does the box have?
2GB
Yes, it takes up space
Well, I upped max_fsm_pages to 200 because vacuums were failing
with it set to 150. However, I'm now autovacuuming, which might be
keeping my fsm usage lower. I didn't realize that setting it too high had
negative effects, so
On Wed, 2007-01-17 at 08:37, Jeremy Haile wrote:
I still keep wondering if this table is bloated with dead tuples. Even
if you vacuum often, if there's a connection with an idle transaction,
the tuples can't be reclaimed and the table would continue to grow.
I used to vacuum once an hour,
That's about 32% dead rows. Might be worth scheduling a vacuum full,
but it's not as bad as I was afraid it might be. It looks to me like you
could probably use a faster I/O subsystem in that machine though.
I'll try to schedule a full vacuum tonight. As far as I/O - it's using
SAN over fiber.
Hello again Tom,
We have our upgrade to 8.2.1 scheduled for this weekend, and we noticed
your message regarding the vacuum patch being applied to 8.2 and
back-patched. I expect I know the answer to this next question :) but I
was wondering if the patch referenced below has also been bundled
On 1/17/07, Jeremy Haile [EMAIL PROTECTED] wrote:
Maybe - I tried running the same query for an older time range that is
less likely to be cached. The index scan took longer than my previous
example, but still only took 16 seconds, compared to the 87 seconds
required to seqscan the table.
Hi,
how can I monitor the size of the transaction log files using SQL Statements?
Best Regards
Silvio
Silvio Ziegelwanger
Research Development
Fabalabs Software GmbH
Honauerstraße 4
4020 Linz
Austria
Tel: [43] (732) 60 61 62
Fax: [43] (732) 60 61 62-609
E-Mail: [EMAIL PROTECTED]
Ziegelwanger, Silvio wrote:
Hi,
how can I monitor the size of the transaction log files using SQL Statements?
You can't. You would have to write a custom function to check the size of
the xlog directory.
Sincerely,
Joshua D. Drake
Best Regards
Silvio
archive_timeout (new in 8.2) might help you control how frequently xlog
files are switched and archived.
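For instance (value is illustrative, not from the thread):

  archive_timeout = 300    # seconds; forces an xlog segment switch at least this often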
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/17/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Ziegelwanger, Silvio wrote:
Hi,
how can I monitor the size of the transaction
Joshua D. Drake wrote:
Ziegelwanger, Silvio wrote:
Hi,
how can I monitor the size of the transaction log files using SQL Statements?
You can't. You would have to write a custom function to check the size of
the xlog directory.
Well, in recent versions of pg it should be pretty easy to do.
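For instance, a sketch using pg_ls_dir() and pg_stat_file() (both
available since 8.1; superuser only):

  -- total size of the WAL segment files in pg_xlog
  SELECT sum((pg_stat_file('pg_xlog/' || f)).size) AS xlog_bytes
  FROM pg_ls_dir('pg_xlog') AS f
  WHERE f ~ '^[0-9A-F]{24}$';   -- match only WAL segment file names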
On Wed, 2007-01-17 at 10:28, Jeremy Haile wrote:
That's about 32% dead rows. Might be worth scheduling a vacuum full,
but it's not as bad as I was afraid it might be. It looks to me like you
could probably use a faster I/O subsystem in that machine though.
I'll try to schedule a full vacuum
It would be nice if the database could
learn to estimate these values, as newer versions of Oracle do.
That would be really nice since it would take some of the guess work out
of it.
Yes, CLUSTER would rebuild the table for you. I wouldn't do anything too
intrusive; run with the
Also, look at the thread going by about index bloat by 4x. You'll
likely want to reindex after a vacuum full since vacuum full doesn't
reclaim space in indexes and in fact often bloats indexes.
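A sketch of both maintenance options, with hypothetical table and index
names:

  -- rebuild all indexes on a table after a VACUUM FULL:
  REINDEX TABLE my_table;

  -- or rewrite the table in index order, which rebuilds table and
  -- indexes in one pass (pre-8.3 CLUSTER syntax):
  CLUSTER my_table_pkey ON my_table;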
Thanks for the pointer. That thread might indeed apply to my situation.
I'm going to reindex the
Assuming the table's NOT bloated, you may do well to increase the
effective_cache_size, which doesn't allocate anything,
snip
try setting it to something like 512MB or so.
It's currently set to 1000MB.
If your table is bloating, and you don't have idle transactions hanging
off the
That's about 32% dead rows. Might be worth scheduling a vacuum full,
but it's not as bad as I was afraid it might be. It looks to me like you
could probably use a faster I/O subsystem in that machine though.
If the random page cost being lower fixes your issues, then I'd just run
with it lower
Interesting - I haven't seen that tool before. I'll have to check it
out when I get a chance. Thanks!
On Wed, 17 Jan 2007 20:32:37 +0100, Tomas Vondra [EMAIL PROTECTED] said:
That's about 32% dead rows. Might be worth scheduling a vacuum full,
but it's not as bad as I was afraid it might be.
Hey there;
I've been lurking on this list awhile, and I've been working with postgres
for a number of years so I'm not exactly new to this. But I'm still
having trouble getting a good balance of settings and I'd like to see what
other people think. We may also be willing to hire a
Doesn't sound like you want postgres at all. Try mysql.
-----Original Message-----
From: Steve [EMAIL PROTECTED]
To: pgsql-performance@postgresql.org
Sent: 1/17/2007 2:41 PM
Subject: [PERFORM] Configuration Advice
Hey there;
I've been lurking on this list awhile, and I've been working
Adam Rich wrote:
Doesn't sound like you want postgres at all. Try mysql.
Could you explain your reason for suggesting mysql? I'm simply curious
why you would offer that as a solution.
And finally, any ideas on planner constants? Here's what I'm using:
seq_page_cost = 0.5 # measured on an arbitrary scale
random_page_cost = 1.0 # same scale as above
cpu_tuple_cost = 0.001 # same scale as above
cpu_index_tuple_cost =
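One way to trial planner constants like these is per-session before
committing them to postgresql.conf; a sketch with a hypothetical query
(seq_page_cost is settable from 8.2 on):

  SET seq_page_cost = 0.5;
  SET random_page_cost = 1.0;
  EXPLAIN ANALYZE SELECT count(*) FROM transaction_facts;  -- hypothetical table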
On 1/17/07, Steve [EMAIL PROTECTED] wrote:
However, each night we load data from a legacy cobol system into the SQL
system and then we summarize that data to make the reports faster. This
load process is intensely insert/update driven but also has a hefty
amount of selects as well. This load
Steve wrote:
SO ... our goal here is to make this load process take less time. It
seems the big part is building the big summary table; this big summary
table is currently 9 million rows big. Every night, we drop the table,
re-create it, build the 9 million rows of data (we use COPY to put
1) What is the size of the whole database? Does that fit in your memory?
That's the first thing I'd like to know and I can't find it in your
post.
Current on-disk size is about 51 gig. I'm not sure if there's a different
size I should be looking at instead, but that's what du tells me
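For a cross-check from inside the database, a sketch using
pg_database_size() (available since 8.1; database name hypothetical):

  SELECT pg_size_pretty(pg_database_size('mydb'));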
On Wed, 17 Jan 2007, Benjamin Minshall wrote:
Building these indexes takes forever!
Anyway -- ANYTHING we can do to make this go faster is appreciated :)
Here's some vital statistics:
- Machine is a 16 GB, 4 actual CPU dual-core opteron system using SCSI
discs. The disc configuration
Building these indexes takes forever!
Anyway -- ANYTHING we can do to make this go faster is appreciated :)
Here's some vital statistics:
- Machine is a 16 GB, 4 actual CPU dual-core opteron system using SCSI
discs. The disc configuration seems to be a good one, it's the best of
all the
Bricklen Anderson wrote:
Adam Rich wrote:
Doesn't sound like you want postgres at all. Try mysql.
Could you explain your reason for suggesting mysql? I'm simply curious
why you would offer that as a solution.
He sounds a little trollish to me. I would refer to the other actually
helpful
From: Steve [EMAIL PROTECTED]
To: pgsql-performance@postgresql.org
Sent: 1/17/2007 2:41 PM
Subject: [PERFORM] Configuration Advice
SO ... our goal here is to make this load process take less time. It
seems the big part is building the big summary table; this big summary
table is
Hello List,
Not sure to which list I should post (gray lines, and all that), so
point me in the right direction if'n it's a problem.
I am in the process of learning some of the art/science of benchmarking.
Given novnov's recent post about the comparison of MS SQL vs
PostgreSQL, I felt it
Joshua D. Drake [EMAIL PROTECTED] writes:
Ziegelwanger, Silvio wrote:
how can I monitor the size of the transaction log files using SQL Statements?
You can't. You would have to write a custom function to check the size of
the xlog directory.
Perhaps more to the point, why do you think you
On Wed, 2007-01-17 at 15:58, Steve wrote:
On Wed, 17 Jan 2007, Benjamin Minshall wrote:
Building these indexes takes forever!
Anyway -- ANYTHING we can do to make this go faster is appreciated :)
Here's some vital statistics:
- Machine is a 16 GB, 4 actual CPU dual-core opteron
How many rows do you typically load each night? If it is, say, less than 10%
of the total rows, then perhaps the suggestion in the next paragraph is
reasonable.
Hrm. It's very, very variable. I'd say it's more than 10% on
average, and it can actually be pretty close to 50-100% on certain
Sorry if this came off sounding trollish. All databases have their
strengths and weaknesses, and I feel the responsible thing to do is
exploit
those strengths where possible, rather than expend significant time and
effort coaxing one database to do something it wasn't designed to.
There's just
On 17-Jan-07, at 3:41 PM, Steve wrote:
Hey there;
I've been lurking on this list awhile, and I've been working with
postgres for a number of years so I'm not exactly new to this. But
I'm still having trouble getting a good balance of settings and I'd
like to see what other people
Would it be possible to just update the summary table, instead of recreating
it from scratch every night?
Hrm, I believe it's probably less work for the computer to do if
it's rebuilt. Any number of rows may be changed during an update, not
including additions, so I'd have to pull out
Note that you only need to have the ASC and DESC versions of opclasses when
you are going to use multicolumn indexes with some columns in ASC order and
some in DESC order. For columns used by themselves in an index, you don't
need to do this, no matter which order you are sorting on.
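For illustration (using the inline ASC/DESC syntax that 8.3 adds to
CREATE INDEX; on 8.2 a mixed-order index needs custom reverse-sort
operator classes instead; names are hypothetical):

  -- single column: a btree can be scanned in either direction,
  -- so no DESC opclass is ever needed here
  CREATE INDEX orders_date_idx ON orders (order_date);

  -- mixed-order multicolumn index: the only case needing the extra opclasses
  CREATE INDEX orders_cust_date_idx ON orders (customer_id ASC, order_date DESC);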
Yeah, I
On 1/17/07, Steve [EMAIL PROTECTED] wrote:
However, I will look into this and see if I can figure out this
average value. This may be a valid idea, and I'll look some more at it.
It must be, Oracle sells it pretty heavily as a data warehousing feature
;). Oracle calls it a
[EMAIL PROTECTED] (Steve) writes:
I'm wondering what we can do to make
this better if anything; would it be better to leave the indexes on?
It doesn't seem to be.
Definitely NOT. Generating an index via a bulk sort is a LOT faster
than loading data into an index one tuple at a time.
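A sketch of that drop/load/rebuild pattern, with hypothetical names and
path:

  DROP INDEX IF EXISTS summary_val_idx;               -- drop before the bulk load
  COPY summary FROM '/path/to/nightly_extract.copy';  -- server-side bulk load
  CREATE INDEX summary_val_idx ON summary (val);      -- rebuilt via one bulk sort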
We saw
Generally speaking, once you've gotten to the point of swapping, even a
little, you've gone too far. A better approach is to pick some
conservative number, like 10-25% of your ram for shared_buffers, and 1
gig or so for maintenance_work_mem, and then increase them while
exercising the system,
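On the 16 GB machine described earlier, that guideline might translate
to something like the following (illustrative values, not from the
thread):

  shared_buffers = 2GB            # roughly 12% of RAM as a starting point
  maintenance_work_mem = 1GB      # speeds up CREATE INDEX and VACUUM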
This would probably also be another last ditch option. :) Our stuff is
designed to pretty much work on any DB but there are postgres specific
things in there... not to mention ramp up time on MySQL. I mean, I know
MySQL from a user point of view and in a very limited way
administratively,
The thought:
- Load the big chunk of data into a new table
- Generate some minimal set of indices on the new table
- Generate four queries that compare old to new:
q1 - See which tuples are unchanged from yesterday to today
q2 - See which tuples have been deleted from yesterday to today
q3
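A minimal sketch of the q2 comparison, assuming hypothetical
old_summary/new_summary tables keyed on id:

  -- tuples present yesterday but missing today, i.e. deleted:
  SELECT o.id
  FROM old_summary o
  LEFT JOIN new_summary n ON n.id = o.id
  WHERE n.id IS NULL;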
Hi List,
Can anybody suggest some comprehensive tests for a version change from
8.1.3 to 8.2?
--
Thanks in advance
Gauri
On Jan 18, 2007, at 13:43 , Gauri Kanekar wrote:
Can anybody suggest some comprehensive tests for a version change from
8.1.3 to 8.2?
http://www.postgresql.org/docs/8.2/interactive/release.html
Michael Glaesemann
grzm seespotcode net
On Jan 18, 2007, at 13:56 , Michael Glaesemann wrote:
On Jan 18, 2007, at 13:43 , Gauri Kanekar wrote:
Can anybody suggest some comprehensive tests for a version change
from 8.1.3 to 8.2?
http://www.postgresql.org/docs/8.2/interactive/release.html
Sorry, I misread your request as a list of
Please reply to the list so that others may contribute and benefit
from the discussion.
On Jan 18, 2007, at 14:19 , Gauri Kanekar wrote:
I want some comprehensive tests, to identify whether shifting
from 8.1.3 to 8.2 will be advantageous.
I think it depends on your installation and use