On Tue, May 24, 2005 at 21:39:15 -0500,
John A Meinel <[EMAIL PROTECTED]> wrote:
>
> By the way, I think doing:
>
> CREATE DATABASE tempdb WITH TEMPLATE = originaldb;
>
> Is a much faster way of doing dump and load. I *think* it would recreate
> indexes, etc. If it just does a copy it may not show the dump/restore
> improvement.
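A hedged sketch of the template-copy approach being suggested (database names are from the thread; note that CREATE DATABASE will refuse to copy a template database while other sessions are connected to it):

```sql
-- File-level copy of the whole database: tables and indexes are cloned
-- as-is, so this does NOT rebuild bloated indexes the way dump/restore does.
CREATE DATABASE tempdb WITH TEMPLATE = originaldb;

-- Then connect to tempdb and re-run the slow query, e.g.:
--   \c tempdb
--   EXPLAIN ANALYZE SELECT ...;
```

Because it is a file-level copy, comparing timings against the original is a quick way to check whether the dump/reload itself (rather than freshly built indexes) accounts for the speedup.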
SpaceBallOne wrote:
>> What version of postgres?
> 8.0.2 ... but I think I've seen this before on 7.3 ...
There are a few possibilities. If you are having a lot of updates to the
table, you can get index bloat. And vacuum doesn't fix indexes. You have
to "REINDEX" to do that. Though REINDEX takes the same heavy lock that
VACUUM FULL does.
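A hedged sketch of diagnosing and fixing index bloat (table and index names are hypothetical):

```sql
-- Compare index size to table size; an index on a heavily updated table
-- that keeps growing even after VACUUM is a bloat suspect.
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('mytable', 'mytable_pkey');

-- Rebuild the table's indexes from scratch. This takes an exclusive
-- lock, so run it during a quiet period.
REINDEX TABLE mytable;
```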
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
>> If I pg_dump that database then create a new database (e.g. "tempdb")
>> and upload the dump file (thus making a duplicate) then the same query
>> only takes 190ms !!
>> Vacuum, vacuum analyse, and vacuum full analyse do not seem to have an
>> impact on these times.
John A Meinel <[EMAIL PROTECTED]> writes:
> By the way, I think doing:
> CREATE DATABASE tempdb WITH TEMPLATE = originaldb;
> Is a much faster way of doing dump and load. I *think* it would recreate
> indexes, etc. If it just does a copy it may not show the dump/restore
> improvement.
CREATE DAT…
Would CLUSTER / REINDEX still have an effect if our queries were done
via sequential scan?
SELECTS don't write to the database, so they have no effect at all on
vacuuming/analyzing. You only need to worry about that with writes.
This is an old database (as in, built by me when I was just starting out …
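For sequential scans it is dead space in the table itself, not the indexes, that matters; a hedged sketch of checking and compacting it (names hypothetical, 8.0-era CLUSTER syntax):

```sql
-- A sequential scan reads every page, dead row versions included, so a
-- bloated table slows it down even though indexes are never touched.
VACUUM VERBOSE mytable;   -- reports page counts and removable row versions

-- CLUSTER rewrites the table in index order, which also compacts it:
CLUSTER mytable_pkey ON mytable;
```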
SpaceBallOne wrote:
> Wondering if someone could explain a peculiarity for me:
> We have a database which takes 1000ms to perform a certain query on.
> If I pg_dump that database then create a new database (e.g. "tempdb")
> and upload the dump file (thus making a duplicate) then the same query
> only takes 190ms !!
> Can anyone explain why this may be occurring and how I might be able to
> keep the original database running at the same speed as "tempdb"?
You're not vacuuming anywhere near often enough. Read up the database
maintenance section of the manual. Then, set up contrib/pg_autovacuum
to vacuum your database automatically.
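A hedged sketch of the routine maintenance being recommended (8.0-era setup, where autovacuum is a contrib daemon rather than built in):

```sql
-- Manual baseline: reclaim dead row versions and refresh the planner's
-- statistics in one pass.
VACUUM ANALYZE;
```

With 8.0, contrib/pg_autovacuum runs as a separate daemon alongside the server and issues VACUUM and ANALYZE for you based on per-table update activity, so the slow-creep bloat described in this thread never builds up.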
> If I pg_dump that database then create a new database (e.g. "tempdb")
> and upload the dump file (thus making a duplicate) then the same query
> only takes 190ms !!
> Vacuum, vacuum analyse, and vacuum full analyse do not seem to have an
> impact on these times.
Damn, for some reason I didn't read t…
Wondering if someone could explain a peculiarity for me: We have a database
which takes 1000ms to perform a certain query on. If I pg_dump that database
then create a new database (e.g. "tempdb") and upload the dump file (thus
making a duplicate) then the same query only takes 190ms !!
Vacuum, vacuum analyse, and vacuum full analyse do not seem to have an
impact on these times.
--As Chris pointed out, how real-world is this test?
His point is valid. The database we're planning will
have a lot of rows and require a lot of summarization
(hence my attempt at a "test"), but we shouldn't be
pulling a million rows at a time.
If you want to do lots of aggregate analysis, I suggest …
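One common approach to heavy aggregate analysis is to maintain a summary table rather than pulling a million rows per query; a hedged sketch (the schema is hypothetical):

```sql
-- Precompute daily totals once, then point reports at the small table.
CREATE TABLE sales_daily AS
SELECT sale_date, SUM(amount) AS total, COUNT(*) AS n
FROM sales
GROUP BY sale_date;

CREATE INDEX sales_daily_date ON sales_daily (sale_date);

-- Reporting queries now touch a handful of rows instead of millions:
SELECT total FROM sales_daily WHERE sale_date = '2005-05-24';
```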
Michael Stone wrote:
On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
> Pretty much. There has been discussion about allowing index-only
> access to "frozen" tables, i.e. archive partitions. But it all sort
> of hinges on someone implementing it and testing …
Is there any way to expose the planner estimates …
> Pretty much. There has been discussion about allowing index-only access to
> "frozen" tables, i.e. archive partitions. But it all sort of hinges on
> someone implementing it and testing …
Would be interesting as a parameter to set at index creation (i.e. if you
know this table will have a …
Folks,
> > This gets brought up a lot. The problem is that the index doesn't include
> > information about whether the current transaction can see the referenced
> > row. Putting this information in the index will add significant overhead
> > to every update, and the opinion of the developers …
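A hedged illustration of why the table visit is required under MVCC (table and values hypothetical): two concurrent transactions can see different "current" rows behind the same index entry, and only the row versions in the table carry that visibility information.

```sql
-- Session A:
BEGIN;
UPDATE accounts SET balance = 0 WHERE id = 1;  -- creates a new row version

-- Session B (before A commits):
SELECT balance FROM accounts WHERE id = 1;
-- B must still see the old balance. The index alone cannot tell it which
-- of the two row versions for id = 1 is visible to B's snapshot, so the
-- row headers in the table have to be consulted.
```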
Until you start worrying about MVCC - we have had problems with the
MSSQL implementation of read consistency because of this 'feature'.
Alex Turner
NetEconomist

On 5/24/05, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
On Tue, May 24, 2005 at 08:36:36 -0700, mark durrant <[EMAIL PROTECTED]> wrote:
Amit V Shah wrote:
Hi Josh,
Thanks for the prompt reply !! Actually migration is inevitable. We have a
totally messed up schema, not normalized and stuff like that. So the goal of
the migration is to get a new and better normalized schema. That part is
done already. Now the decision point is, should we go with postgres …
It's common knowledge, it seems, that MySQL without transactions will be
a lot faster than Postgres on Inserts. And on Updates too, that is, unless
you have more than a few concurrent connections, at which point
the MySQL full table lock will just kill everything. And you don't …
I'm far from an expert, so this may be off-base... but
perhaps a suggestion would be to allow a hint to be
sent to the optimizer if the user doesn't care that
the result is "approximate" maybe then this wouldn't
require adding more overhead to the indexes.
MSSQL has something like this with (NOLOCK) …
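For reference, a hedged sketch of the MSSQL hint being alluded to (table name hypothetical); it reads without taking shared locks, so it can return uncommitted data:

```sql
-- T-SQL: dirty read, trading consistency for speed
SELECT COUNT(*) FROM orders WITH (NOLOCK);
```

This is exactly the read-consistency trade-off complained about elsewhere in the thread.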
Amit,
> I took a look at this. I have a few concerns with bizgres though -- I am
> using jetspeed portal engine and Hibernate as my O/R Mapping layer. I know
> for sure that they don't support bizgres. Now the question is what
> difference is there between bizgres and postgres ... I guess I will try …
On Tue, May 24, 2005 at 01:56:54PM -0400, Amit V Shah wrote:
> I took a look at this. I have a few concerns with bizgres though -- I am
> using jetspeed portal engine and Hibernate as my O/R Mapping layer.
If you have problems with performance, you might want to look into using JDBC
directly instead …
> - Most of the DB usage is Selects. We would have some inserts but that
> would be like a nightly or a monthly process
So transaction integrity is not a real concern? This sounds like a data
warehouse; wanna try Bizgres? (www.bizgres.org)
I took a look at this. I have a few concerns with bizgres …
Amit,
> - We have a lot of foreign keys between the tables
Do you need these keys to be enforced? Last I checked, MySQL was still
having trouble with foreign keys.
> - Most of the DB usage is Selects. We would have some inserts but that
> would be like a nightly or a monthly process
So transaction integrity is not a real concern? This sounds like a data
warehouse; wanna try Bizgres? (www.bizgres.org)
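A hedged sketch of the foreign-key concern (circa-2005 MySQL; table names hypothetical): MyISAM tables parse but silently ignore FOREIGN KEY clauses, while InnoDB actually enforces them.

```sql
-- MySQL: the constraint is only enforced if both tables use InnoDB
CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT,
    FOREIGN KEY (customer_id) REFERENCES customers (id)
) ENGINE=InnoDB;
```

In PostgreSQL the same REFERENCES clause is always enforced, which is one point in its favor for a schema with many foreign keys.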
I took an unbiased look and did some tests.
Objectively for me MYSQL was not an improvement for speed.
I had read the benchmarks in pcmagazine from a while back as well.
I did some tests using ODBC and .NET connections, and also used aqua studios
(hooks up to both databases) and found postgres a …
On Tue, May 24, 2005 at 08:36:36 -0700,
mark durrant <[EMAIL PROTECTED]> wrote:
>
> --MSSQL's ability to hit the index only and not having
> to go to the table itself results in a _big_
> performance/efficiency gain. If someone who's in
> development wants to pass this along, it would be a
> nice …
I am not trying to start a mysql vs postgres war, so please don't
misunderstand me. I tried to look around for mysql vs postgres articles,
but most of them said mysql is better in speed. However those articles were
very old, so I don't know about the recent state. Please comment !!!
It is my experience …
Hi all,
From whatever reading and surfing I have done, I have found that postgres is
good. Actually I myself am a fan of postgres as compared to mysql. However I
want to have some frank opinions before I decide something. Following are
some of the aspects of my schema, and our concerns --
- We …
First, thanks for all the helpful replies. I've
listened to the suggestions and done some more digging
and have results:
I did show_plan_all in MSSQL and found that it was
doing an Index Scan. I've read someplace that if the
data you need is all in the index, then MSSQL has a
feature/hack where it can answer the query from the index alone, without
visiting the table.
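A hedged sketch of the covering-index effect being described (names hypothetical): if every column the query touches is in the index, MSSQL can satisfy it with an index scan alone.

```sql
-- Composite index covering both the filter column and the selected column:
CREATE INDEX ix_orders_cover ON orders (customer_id, amount);

-- MSSQL can answer this from ix_orders_cover without touching the table;
-- PostgreSQL (as of 8.0) must still visit the table to check row visibility.
SELECT amount FROM orders WHERE customer_id = 42;
```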
Mark,
> MSSQL Machine:
> That "Explain Analyze" command doesn't work for MSSQL,
try this:
set showplan_all on
go
select ...
go
Harald
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/doc