Hi all
I'm wondering if and how the size of a table affects speed of inserts
into it? What if the table has indexes, does that alter the answer?
Thanks
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Praveen Raja:
I think the size of a table doesn't affect the speed of inserts
into it, because PostgreSQL just does something like an append on the data files.
But indexes do slow down inserts, because PostgreSQL has to
maintain each index when doing inserts.
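A quick way to see the index-maintenance cost yourself (a sketch; the table, column, and index names are made up, and `\timing` is a psql feature):

```sql
-- In psql, with \timing on, compare insert speed with and without an index.
CREATE TABLE t (id int, payload text);
INSERT INTO t SELECT g, 'x' FROM generate_series(1, 100000) g;  -- baseline append

CREATE INDEX t_id_idx ON t (id);
INSERT INTO t SELECT g, 'x' FROM generate_series(1, 100000) g;  -- each row now also updates the index
```

With several indexes, each insert pays the maintenance cost once per index.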
Hi,
At 13:24 27/06/2005, Praveen Raja wrote:
I'm wondering if and how the size of a table affects speed of inserts
into it? What if the table has indexes, does that alter the answer?
Many parameters will affect the result:
- whether there are any indexes (including the primary key, unique
Just to clear things up a bit, the scenario that I'm interested in is a
table with a large number of indexes on it (maybe 7-8). In this scenario
other than the overhead of having to maintain the indexes (which I'm
guessing is the same regardless of the size of the table), does the size
of the
Hi,
At 13:50 27/06/2005, Praveen Raja wrote:
Just to clear things up a bit, the scenario that I'm interested in is a
table with a large number of indexes on it (maybe 7-8).
If you're after performance you'll want to carefully consider which indexes
are really useful and/or redesign your
There are some immediate questions from our engineers about
performance
- Oracle has one particular performance enhancement that Postgres is
missing. If you do a select that returns 100,000 rows in a given
order,
and all you want are rows 99101 to 99200, then Oracle can do that very
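For reference, the Postgres way to fetch just that window is LIMIT/OFFSET (a sketch with invented names); note that Postgres still has to produce and discard the first 99100 rows of the sorted result, which is why a large OFFSET stays expensive:

```sql
SELECT *
FROM orders
ORDER BY order_date
LIMIT 100 OFFSET 99100;  -- rows 99101..99200 of the ordered result
```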
Hi !
My company is evaluating making our system (developed in
C++) compatible with PostgreSQL.
Our programmer ran a lot of tests and informed me that the
performance using ODBC is very similar to using libpq, even with a big
number of simultaneous connections/queries. Of course that
I would take a peek at the psqlodbc-8.0 drivers.
I wouldn't battle with other versions you might find, such as the unixODBC
ones.
-elz
-Original Message-
From: [EMAIL PROTECTED] On Behalf Of Merlin Moncure
Sent: 27 June 2005 10:29
To: grupos
[moved to pgsql-performance]
Currently I want to run a TPC-H test on postgresql-8.0.2. I have
downloaded the DBGEN and QGEN tools from the homepage of the TPC. But I
encountered many problems which forced me to request some help.
1. How do I load the data from the flat files generated by the dbgen tool?
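dbgen writes pipe-delimited `.tbl` files with a trailing `|` on every line, which COPY rejects; a common fix is to strip that trailing delimiter first (the file names and sample rows below are stand-ins, not real dbgen output):

```shell
# Simulate two rows of dbgen-style output (real files come from running dbgen)
printf '1|foo|2|\n2|bar|3|\n' > /tmp/lineitem.tbl
# Strip the trailing delimiter so COPY accepts the rows
sed 's/|$//' /tmp/lineitem.tbl > /tmp/lineitem.clean
cat /tmp/lineitem.clean
```

The cleaned file can then be loaded with something like `psql -d tpch -c "\copy lineitem FROM '/tmp/lineitem.clean' WITH DELIMITER '|'"` (the table name is assumed from the TPC-H schema).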
Dennis Bjorklund wrote:
On Thu, 23 Jun 2005, Bricklen Anderson wrote:
iii. UNIQUE constraint on table t1. This didn't seem to perform too
badly with fewer rows (preliminary tests), but as you'd expect, on error
the whole transaction would roll back. Is it possible to skip a row if
it causes
On Tue, Jun 28, 2005 at 01:54:08AM +, Karl O. Pinc wrote:
On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:
See timeofday().
That only gives you the time at the start of the transaction,
so you get no indication of how long anything in the
transaction takes.
Did you read the
On 06/27/2005 05:37:41 PM, Josh Berkus wrote:
Karl,
Seems to me that when there's a constant value in the query
and an = comparison it will always be faster to use the (b-tree)
index that's ordered first by the constant value, as then all
further blocks are guaranteed to have a higher
On 06/27/2005 08:34:19 PM, Michael Fuhr wrote:
On Tue, Jun 28, 2005 at 01:54:08AM +, Karl O. Pinc wrote:
On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:
See timeofday().
That only gives you the time at the start of the transaction,
so you get no indication of how long anything in the
On 06/27/2005 09:36:51 PM, Karl O. Pinc wrote:
I'm doing this in a torture test script, loading data.
Every Fibonacci number of rows * 100 I run VACUUM ANALYZE.
So, 100, 200, 300, 500, 800, etc.
(And of course disconnect my client and re-connect so
as to use the new statistics. sure would be
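The schedule described above (VACUUM ANALYZE at every Fibonacci number of rows times 100) can be generated like this; a small sketch in Python, with the function name invented:

```python
def vacuum_points(count):
    """Row counts at which to run VACUUM ANALYZE: Fibonacci numbers * 100."""
    points, a, b = [], 1, 2
    while len(points) < count:
        points.append(a * 100)
        a, b = b, a + b
    return points

print(vacuum_points(6))  # [100, 200, 300, 500, 800, 1300]
```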
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction (pending_trans) in one go (that is,
we either
On Tue, Jun 28, 2005 at 03:03:06AM +, Karl O. Pinc wrote:
For all your work a documentation patch is appended that
I think is easier to read and might avoid this problem
in the future.
Patches should go to the pgsql-patches list -- the people who review
and apply patches might not be
What's wrong with Slony?
David Mitchell wrote:
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction
Christopher Kings-Lynne wrote:
What's wrong with Slony?
Because it's not multi-master. Our mirroring package is.
--
David Mitchell
Software Engineer
Telogis
What's wrong with Slony?
Because it's not multi-master. Our mirroring package is.
I'm curious - how did you write a multi-master replication package in
pgsql, when pgsql doesn't have 2 phase commits or any kind of
distributed syncing or conflict resolution in a release version?
Chris
David Mitchell wrote:
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction (pending_trans) in one go
Christopher Kings-Lynne wrote:
I'm curious - how did you write a multi-master replication package in
pgsql, when pgsql doesn't have 2 phase commits or any kind of
distributed syncing or conflict resolution in a release version?
We didn't write it entirely in pgsql, there is a worker process
Hi Keith,
Unfortunately, we must have those sorts. The statements within a
transaction must be executed on the slave in the same order as they were
on the master, and similarly, transactions must also go in the same
order. As for aliasing the tables, that is just a remnant from previous
Merge the two select statements like this and try,
SELECT t.trans_id AS id, s.id, s.transaction_id, s.table_name, s.op, s.data
FROM pending_trans AS t
JOIN dbmirror.pending_statement AS s ON (s.transaction_id = t.id)
WHERE t.fetched = false
ORDER BY t.trans_id, s.id
LIMIT 100;
If the above
Hello. I believe in earlier versions, a query of the
form
select attrib from ttt where attrib like 'foo%' would
be able to take advantage of an index. I have seen
this in the past. Currently I am using v8.0.3. From
what I can see, the execution plan seems to
use a seq scan and to totally
Hi Gnanavel,
Thanks, but that will only return at most 100 statements. If there is a
transaction with 110 statements then this will not return all the
statements for that transaction. We need to make sure that the function
returns all the statements for a transaction.
Cheers
David
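One way to express the requirement (never split a transaction's statements across batches) is to trim the trailing partial transaction from each fetch; a sketch in Python over hypothetical `(trans_id, stmt_id)` rows already sorted by transaction and statement:

```python
def whole_transaction_batch(rows, limit):
    """Take up to `limit` rows, but drop a trailing transaction that the
    limit cuts off, unless it is the only transaction in the batch."""
    batch = list(rows[:limit])
    if len(rows) > limit and rows[limit][0] == batch[-1][0]:
        trimmed = [r for r in batch if r[0] != batch[-1][0]]
        if trimmed:                      # keep at least one whole transaction
            batch = trimmed
    return batch

rows = [(1, 'a'), (1, 'b'), (2, 'c'), (2, 'd'), (3, 'e')]
print(whole_transaction_batch(rows, 3))  # [(1, 'a'), (1, 'b')] -- tx 2 would be incomplete
```

The same idea can be pushed into the SQL by limiting on distinct transaction ids rather than on statement rows.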
Hey, all. I've bounced this around in #postgres for an hour or so, and
it was suggested that I post it here as well. Hopefully someone can
help me out.
I have three machines. All have 512MB of ram.
Machine A is a 2.0ghz celeron, running debian, pg version 7.4.6.
Machine B is a 1.8ghz
Hi,
Consider the where-clauses:
WHERE lower(col) LIKE 'abc';
WHERE lower(col) LIKE 'abc%';
these will both use a b-tree functional index on lower(col) if one exists.
The clause
WHERE lower(col) LIKE '%abc';
can't use the index as you would expect, because of the wildcard at the
front (as
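The reason `LIKE 'abc%'` can use a b-tree index is that the prefix becomes a range scan from 'abc' up to (but not including) 'abd'; here is a sketch of that idea over a sorted Python list standing in for the index (function and variable names invented):

```python
import bisect

def prefix_scan(sorted_keys, prefix):
    """Range-scan a sorted list the way a b-tree serves LIKE 'prefix%'."""
    lo = bisect.bisect_left(sorted_keys, prefix)
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)  # 'abc' -> 'abd'
    hi = bisect.bisect_left(sorted_keys, upper)
    return sorted_keys[lo:hi]

keys = sorted(['abc', 'abcde', 'abd', 'xabc', 'zzz'])
print(prefix_scan(keys, 'abc'))  # ['abc', 'abcde'] -- a leading-wildcard '%abc' has no such range
```

Note that in Postgres, under a non-C locale, a plain b-tree index may not be usable for LIKE at all; the text_pattern_ops operator class exists for that case.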
Hi, I need some help from some experts
here. Currently we have a function that is used together
with a temp table; it calls a search result function, and every time
this function is called, it goes through
some filters before coming out with a result.
Now we have a major problem: the first time
the
Hi,
I have a table with two indices on the same column, one of which is a partial index. I would like the query planner to use the partial index whenever the query condition lies in the range of the partial index, as it would yield better performance. Is there any way to enforce the ordering for
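A sketch of the situation (table, column, and index names invented): the planner will only consider a partial index when it can prove the query's WHERE clause implies the index predicate, so phrasing the condition to match the predicate helps:

```sql
CREATE INDEX t_col_idx    ON t (col);                       -- full index
CREATE INDEX t_col_recent ON t (col) WHERE col > 1000000;   -- partial index

SELECT * FROM t WHERE col > 2000000;  -- implies the predicate: partial index is a candidate
SELECT * FROM t WHERE col > 500;      -- does not: only the full index qualifies
```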
Hello!
I use FreeBSD 4.11 with PostgreSQL 7.3.8.
I got a huge database with roughly 15 million records. There is just one
table, with a time field, a few ints and a few strings.
table: test
fields: time (timestamp), source (string), destination (string), p1 (int), p2 (int)
I have run VACUUM