Kevin Brown <[EMAIL PROTECTED]> writes:
> Ouch. Is this really a reasonable assumption? I figured the primary
> use of a cursor was to fetch small amounts of data at a time from a
> large table, so 10% seems extremely high as an average fetch size. Or
> is the optimization based on the number
Greg Stark <[EMAIL PROTECTED]> writes:
> 8.0, on the other hand, has a new algorithm that specifically tries to
> protect against the shared buffers being blown out by a sequential
> scan. But that will only help if it's the shared buffers being
> thrashed that's hurting you, not the entire OS file cache.
Mark Aufflick <[EMAIL PROTECTED]> writes:
> Obviously Q2 is faster than Q1,
That's not really obvious at all. If there are lots of records being returned
the index might not be faster than a sequential scan.
> My assumption is that the sequential scan is blowing the index from any cache
> it might be in.
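One way to settle that for a particular query is to time both plan shapes
with EXPLAIN ANALYZE rather than guessing; a minimal sketch, with purely
illustrative table and column names:

    -- seq scan shape: ILIKE on the raw column cannot use a plain btree index
    EXPLAIN ANALYZE SELECT * FROM mytable WHERE field ILIKE 'foo%';

    -- functional-index shape
    EXPLAIN ANALYZE SELECT * FROM mytable WHERE lower(field) LIKE 'foo%';

Running each a couple of times also shows how much of the difference comes
from caching, which is what the rest of this thread is about.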
Hi Rod,

> Any solution fixing buffers should probably not take into consideration
> the method being performed (do you really want to skip caching a
> sequential scan of a 2 tuple table because it didn't use an index) but
> the volume of data involved as compared to the size of the cache.
OK, that's interesting. So the original assumption that vacuum full was
needed was completely wrong anyway.

If table re-organisation isn't required, a plain vacuum would be fastest. I
will take a guess that the next best alternative is to do the "create table
newtable as select ..." approach.
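For reference, a rough sketch of that rebuild approach (illustrative names;
indexes, constraints and permissions on the old table have to be recreated
afterwards):

    BEGIN;
    CREATE TABLE newtable AS SELECT * FROM oldtable;
    DROP TABLE oldtable;
    ALTER TABLE newtable RENAME TO oldtable;
    COMMIT;
    -- recreate indexes here, then ANALYZE the rebuilt table

This writes a compact copy of the live rows in a single pass, which is why it
can beat VACUUM FULL when most of the rows would have to be moved anyway.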
The world rejoiced as [EMAIL PROTECTED] (Mark Aufflick) wrote:
> Hi All,
>
> I have boiled my situation down to the following simple case:
> (postgres version 7.3)
>
> * Query 1 is doing a sequential scan over a table (courtesy of field
> ILIKE 'foo%') and index joins to a few others
> * Query 2 is doing a functional index scan over the same table
>   (lower(field) LIKE 'foo%')
"Iain" <[EMAIL PROTECTED]> writes:
>> another way to speed up full vacuum?
> Hmmm... a full vacuum may help to re-organize the structure of modified
> tables, but whether this is significant or not is another matter.
Actually, VACUUM FULL is designed to work nicely for the situation where
a table has accumulated a moderate amount of wasted space that you want
compressed out.
> My concern is that this kind of testing has very little relevance to the
> real world of multiuser processing where contention for the cache becomes an
> issue. It may be that, at least in the current situation, postgres is
> giving too much weight to seq scans based on single user, straight
Hi,
I think there was some discussion about seq scans messing up the cache, and
talk about doing something about it, but I don't think it has been addressed
yet. Maybe worth a troll through the archives.
It is certainly true that in many situations, a seq scan is preferable to
using an index. I
>> But I need to do full vacuum because I deleted some of the fields that are
>> not used anymore and I also added some other fields. Is there another way
>> to speed up full vacuum?
Hmmm... a full vacuum may help to re-organize the structure of modified
tables, but whether this is significant or not is another matter.
On Tue, 2005-02-15 at 09:34 +0800, Michael Ryan S. Puncia wrote:
> Hi,
>
> I have 3 tables in the database with 80G of data; one of them is
> almost 40G and the remaining 2 tables have 20G each.
>
> We use this database mainly for queries, and updating is done only
> quarterly, and the database performs well.
Hi,
just make sure that your freespace map is big enough and then do a vacuum
analyse without the full option. I can imagine that database performance
might not be as good as it would be after a vacuum full, though I expect
that it wouldn't make much difference.
regards
Iain
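For what it's worth, the relevant knobs on 7.x/8.0 are the two free space map
settings in postgresql.conf; the values below are purely illustrative and
should really be sized from the summary printed at the end of a database-wide
VACUUM VERBOSE:

    # postgresql.conf (illustrative values; a restart is needed to change them)
    max_fsm_pages = 500000      # page slots available to track reclaimable space
    max_fsm_relations = 1000    # number of tables and indexes the map can track

With the map sized generously, a plain VACUUM ANALYZE run after each quarterly
update should keep the tables from bloating without the cost of a full vacuum.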
-
But I need to do full vacuum because I deleted some of the fields that are
not used anymore and I also added some other fields. Is there another way
to speed up full vacuum?
From: Iain [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 15, 2005 9:52 AM
To: Michael Ryan S. Puncia
Hi,
I have 3 tables in the database with 80G of data; one of them is almost 40G
and the remaining 2 tables have 20G each.
We use this database mainly for queries, and updating is done only quarterly,
and the database performs well. My problem is that after updating and then
running VACUUM FULL ANALYZE ...
Tom Lane wrote:
> "Joost Kraaijeveld" <[EMAIL PROTECTED]> writes:
> > I cannot change the query (it is generated by a tool called Clarion) but it
> > is something like (from the psqlodbc_xxx.log):
> > "...
> > declare SQL_CUR01 cursor for
> > SELECT A.ordernummer, B.klantnummer FROM "orders" A LEFT OUTER JOIN ...
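For context, the pattern under discussion looks roughly like the sketch below
(the join target and fetch size are made up for illustration; the real query
comes from the ODBC log above):

    BEGIN;
    DECLARE SQL_CUR01 CURSOR FOR
        SELECT A.ordernummer, B.klantnummer
        FROM orders A LEFT OUTER JOIN klanten B ON A.klantnummer = B.klantnummer;
    FETCH 100 FROM SQL_CUR01;   -- planner assumes only a fraction of the result
                                -- (roughly 10% in this era) will ever be fetched
    CLOSE SQL_CUR01;
    COMMIT;

Because of that assumption the cursor form can get a fast-start plan, which
turns out to be much slower if the client ends up fetching the whole result.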
Hi All,
I have boiled my situation down to the following simple case: (postgres
version 7.3)
* Query 1 is doing a sequential scan over a table (courtesy of field
ILIKE 'foo%') and index joins to a few others
* Query 2 is doing a functional index scan over the same table
(lower(field) LIKE 'foo%')
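For readers skimming the thread, a minimal sketch of the two shapes being
compared (names are illustrative; note that on 7.3 a left-anchored LIKE can
only use such an index under the C locale or with a pattern_ops operator
class):

    -- functional index used by Query 2
    CREATE INDEX mytable_lower_field_idx ON mytable (lower(field));
    SELECT * FROM mytable WHERE lower(field) LIKE 'foo%';

    -- Query 1 shape: ILIKE on the raw column, which falls back to a seq scan
    SELECT * FROM mytable WHERE field ILIKE 'foo%';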
On Mon, 14 Feb 2005, Ivan Voras wrote:
> Stephan Szabo wrote:
> > On Mon, 14 Feb 2005, Ivan Voras wrote:
>
> >>Could you give me an example for this, or point me to the relevant
> >>documentation?
>
> >
> > http://www.postgresql.org/docs/8.0/interactive/indexes-opclass.html
>
> Thanks! I didn't know this and I certainly didn't think it would be that
> easy :)
Stephan Szabo wrote:
> On Mon, 14 Feb 2005, Ivan Voras wrote:
>> Could you give me an example for this, or point me to the relevant
>> documentation?
> http://www.postgresql.org/docs/8.0/interactive/indexes-opclass.html
Thanks! I didn't know this and I certainly didn't think it would be that
easy :)
-
On Mon, 14 Feb 2005, Ivan Voras wrote:
> Stephan Szabo wrote:
>
> > You can also create an index using a _pattern_ops operator
> > class which should be usable even with other collations.
>
> Could you give me an example for this, or point me to the relevant
> documentation?
Basically, you could create the index using one of the *_pattern_ops
operator classes.
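A minimal sketch of what that might look like (table and column names are
illustrative; text_pattern_ops is for text columns, varchar_pattern_ops for
varchar):

    CREATE INDEX mytable_field_pattern_idx ON mytable (field text_pattern_ops);
    -- a left-anchored LIKE can now use the index even with a non-C locale
    SELECT field FROM mytable WHERE field LIKE 'abc%';

One caveat: an index built with a pattern_ops class helps LIKE and regex
matching but not ordinary <, <=, >, >= comparisons, so a second plain index
may still be wanted for sorting and range queries.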
On Mon, 14 Feb 2005, Ivan Voras wrote:
> PFC wrote:
> >
> > normally you shouldn't have to do anything, it should just work :
> >
> >> select field from table where field like 'abc%'
>
> > If it does not use the index, I saw on the mailing list that the
> > locale could be an issue.
>
> Oh yes, I forgot about that :( I do have LC_COLLATE (on latin2)...
Stephan Szabo wrote:
> You can also create an index using a _pattern_ops operator
> class which should be usable even with other collations.
Could you give me an example for this, or point me to the relevant
documentation?
PFC wrote:
> normally you shouldn't have to do anything, it should just work :
> select field from table where field like 'abc%'
> If it does not use the index, I saw on the mailing list that the
> locale could be an issue.
Oh yes, I forgot about that :( I do have LC_COLLATE (on latin2)...
It's
normally you shouldn't have to do anything, it should just work :
select field from table where field like 'abc%'
CREATE INDEX ... ON table( field );
that's all
If it does not use the index, I saw on the mailing list that the locale
could be an issue.
Is there a way to use indexes for queries like:
select field from table where field like 'abc%'
(i.e. filter for string fields that begin with something) ?
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Jaime Casanova)
belched out:
> On Sun, 13 Feb 2005 13:41:09 -0800, Josh Berkus wrote:
>> Jaime,
>>
>> > Why is this query using a seq scan rather than a index scan?
>>
>> Because it thinks a seq scan will be faster.
>>
> I will sugge
In article <[EMAIL PROTECTED]>,
Greg Stark <[EMAIL PROTECTED]> writes:
> Christopher Browne <[EMAIL PROTECTED]> writes:
>> After all, the cost of a computer system to run the transactions is
>> likely to be comprised of some combination of software licenses and
>> hardware costs. Even if the soft
On Mon, Feb 14, 2005 at 04:58:31PM +0800, Hasnul Fadhly bin Hasan wrote:
> Thanks for the reply. I am using the postgres 7.4.5 client. There's one
> that is using the 7.4.1 client. I'm not sure if there would be any difference.
> When I use psql and check the status of autocommit, it is set to
> enabled.
Hi Michael,
Thanks for the reply. I am using the postgres 7.4.5 client. There's one
that is using the 7.4.1 client. I'm not sure if there would be any
difference.
When I use psql and check the status of autocommit, it is set to
enabled. I'm not sure if libpq and psql use the same defaults.
Thanks
On Mon, Feb 14, 2005 at 04:01:20PM +0800, Hasnul Fadhly bin Hasan wrote:
>
> I am just wondering: by default, autocommit is enabled for every client
> connection. The documentation states that we have to use BEGIN
> and END or COMMIT to increase performance by not using autocommit.
> My question is, when we use the BEGIN and END statements, is autocommit
> unset/disabled?
Hi,
I am just wondering: by default, autocommit is enabled for every client
connection. The documentation states that we have to use BEGIN
and END or COMMIT to increase performance by not using autocommit.
My question is, when we use the BEGIN and END statements, is autocommit
unset/disabled?
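To make the performance point concrete, here is a minimal sketch (table name
is made up) of batching work in one explicit transaction instead of relying
on autocommit:

    -- autocommit style: each statement is its own transaction and its own commit
    INSERT INTO mytable VALUES (1, 'a');
    INSERT INTO mytable VALUES (2, 'b');

    -- explicit transaction: one commit (and one fsync) for the whole batch
    BEGIN;
    INSERT INTO mytable VALUES (3, 'c');
    INSERT INTO mytable VALUES (4, 'd');
    COMMIT;

BEGIN simply opens a transaction block; autocommit isn't so much switched off
as made irrelevant, because nothing inside the block is committed until the
explicit COMMIT (or discarded by ROLLBACK).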