Hi,
I am just wondering: by default, autocommit is enabled for every client
connection. The documentation states that we should use BEGIN
and END (or COMMIT) so as to increase performance by not using autocommit.
My question is, when we use the BEGIN and END statements, is autocommit
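For illustration, a rough sketch of the difference (the table and column names here are made up):

```sql
-- Under autocommit, each statement is its own transaction and
-- pays a commit (fsync) cost individually. Grouping statements
-- in one explicit transaction pays that cost once:
BEGIN;
INSERT INTO mytable (field) VALUES ('a');
INSERT INTO mytable (field) VALUES ('b');
INSERT INTO mytable (field) VALUES ('c');
COMMIT;  -- END is an alias for COMMIT in PostgreSQL
```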
On Mon, Feb 14, 2005 at 04:01:20PM +0800, Hasnul Fadhly bin Hasan wrote:
I am just wondering: by default, autocommit is enabled for every client
connection. The documentation states that we should use BEGIN
and END (or COMMIT) so as to increase performance by not using autocommit.
My
On Mon, Feb 14, 2005 at 04:58:31PM +0800, Hasnul Fadhly bin Hasan wrote:
Thanks for the reply. I am using the postgres 7.4.5 client. There's one
that is using the 7.4.1 client; I'm not sure if there would be any difference.
When I use psql and check the status of autocommit, it is
enabled.
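Note that in recent releases autocommit is handled client-side in psql via a variable; a quick way to check and change it:

```sql
-- AUTOCOMMIT is a psql variable, not a server setting:
\echo :AUTOCOMMIT    -- 'on' by default
\set AUTOCOMMIT off  -- psql then issues an implicit BEGIN before
                     -- the first statement; you must COMMIT
                     -- (or ROLLBACK) yourself
```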
In article [EMAIL PROTECTED],
Greg Stark [EMAIL PROTECTED] writes:
Christopher Browne [EMAIL PROTECTED] writes:
After all, the cost of a computer system to run the transactions is
likely to be composed of some combination of software licenses and
hardware costs. Even if the software is
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Jaime Casanova)
belched out:
On Sun, 13 Feb 2005 13:41:09 -0800, Josh Berkus josh@agliodbs.com wrote:
Jaime,
Why is this query using a seq scan rather than a index scan?
Because it thinks a seq scan will be faster.
I will
Normally you shouldn't have to do anything; it should just work:
select field from table where field like 'abc%'
CREATE INDEX ... ON table( field );
That's all.
If it does not use the index, I saw on the mailing list that the locale
could be an issue.
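Spelled out (table and column names are invented for the example): a plain b-tree index can serve a left-anchored LIKE, but, as noted, only when the database locale cooperates.

```sql
-- A plain b-tree index; the planner can use it for
-- LIKE 'abc%' only if the database was initdb'ed with
-- the C locale:
CREATE INDEX idx_field ON mytable (field);
SELECT field FROM mytable WHERE field LIKE 'abc%';
```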
Stephan Szabo wrote:
You can also create an index using a typename_pattern_ops operator
class which should be usable even with other collations.
Could you give me an example for this, or point me to the relevant
documentation?
On Mon, 14 Feb 2005, Ivan Voras wrote:
Stephan Szabo wrote:
You can also create an index using a typename_pattern_ops operator
class which should be usable even with other collations.
Could you give me an example for this, or point me to the relevant
documentation?
Basically, you could
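A sketch of what that looks like (names are made up; use varchar_pattern_ops for a varchar column):

```sql
-- Under a non-C locale, an index built with a pattern_ops
-- operator class lets the planner use it for left-anchored
-- LIKE patterns:
CREATE INDEX idx_field_pattern ON mytable (field text_pattern_ops);
SELECT field FROM mytable WHERE field LIKE 'abc%';
```

Note that such an index serves LIKE but not ordinary ordering comparisons, so you may want it in addition to, not instead of, a plain index on the column.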
Hi All,
I have boiled my situation down to the following simple case: (postgres
version 7.3)
* Query 1 is doing a sequential scan over a table (courtesy of field
ILIKE 'foo%') and index joins to a few others
* Query 2 is doing a functional index scan over the same table
(lower(field) LIKE
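For reference, the functional-index approach behind Query 2 looks roughly like this (table and column names are invented):

```sql
-- ILIKE 'foo%' cannot use a plain index; an index on
-- lower(field), matched by lower() in the query, can:
CREATE INDEX idx_field_lower ON mytable (lower(field));
SELECT * FROM mytable WHERE lower(field) LIKE 'foo%';
```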
Hi,
I have 3 tables in the database with 80G of data; one of
them is almost 40G and the remaining 2 tables have 20G each.
We use this database mainly for querying; updating is done only quarterly,
and the database performs well. My problem
is after updating and then running VACUUM FULL
But I need to do a full vacuum because I
deleted some of the fields that are not used anymore and I also added
other fields. Is there
another way to speed up full
vacuum?
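One alternative sometimes suggested in this situation (names below are made up): CLUSTER rewrites the whole table, which reclaims all dead space in one pass and can be faster than VACUUM FULL on a heavily-modified table, at the cost of an exclusive lock and roughly the table's size in free disk space.

```sql
-- 7.x syntax: CLUSTER indexname ON tablename
CLUSTER mytable_pkey ON mytable;
ANALYZE mytable;  -- CLUSTER does not update planner statistics
```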
From: Iain
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 15, 2005
9:52 AM
To: Michael Ryan S. Puncia;
On Tue, 2005-02-15 at 09:34 +0800, Michael Ryan S. Puncia wrote:
Hi,
I have 3 tables in the database with 80G of data, one of them is
almost 40G and the remaining 2 tables has 20G each.
We use this database mainly for query and updating is done only
quarterly and the database perform
My concern is that this kind of testing has very little relevance to the
real world of multiuser processing where contention for the cache becomes an
issue. It may be that, at least in the current situation, postgres is
giving too much weight to seq scans based on single user, straight
Iain [EMAIL PROTECTED] writes:
another way to speed up full vacuum?
Hmmm... a full vacuum may help to re-organize the structure of modified
tables, but whether this is significant or not is another matter.
Actually, VACUUM FULL is designed to work nicely for the situation where
a table has
The world rejoiced as [EMAIL PROTECTED] (Mark Aufflick) wrote:
Hi All,
I have boiled my situation down to the following simple case:
(postgres version 7.3)
* Query 1 is doing a sequential scan over a table (courtesy of field
ILIKE 'foo%') and index joins to a few others
* Query 2 is doing
Hi Rod,
Any solution fixing buffers should probably not take into consideration
the method being performed (do you really want to skip caching a
sequential scan of a 2 tuple table because it didn't use an index) but
the volume of data involved as compared to the size of the