Creating an index and using lower(column) does not change the explain
plan estimates.
It seems that it is not using an index for LIKE or ILIKE queries,
irrespective of whether the pattern contains a pattern-matching
character or not. (using PostgreSQL 7.3.3)
On googling I found this thread
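For reference, the usual workarounds look roughly like this; the table and column names below are hypothetical, and the text_pattern_ops operator class only became available in PostgreSQL 7.4:

```sql
-- Hypothetical table/column for illustration.
-- A plain btree index serves LIKE only for left-anchored patterns
-- ('smith%'), and in a non-C locale it needs a pattern opclass
-- (text_pattern_ops, PostgreSQL 7.4 and later):
CREATE INDEX idx_customers_name
    ON customers (name text_pattern_ops);

-- For case-insensitive (ILIKE-style) prefix matching, index the
-- lowered value and write the query against the same expression so
-- the planner can match it:
CREATE INDEX idx_customers_lower_name ON customers (lower(name));
SELECT * FROM customers WHERE lower(name) LIKE 'smith%';

-- A leading wildcard ('%smith%') defeats any btree index and forces
-- a sequential scan (later releases offer contrib/pg_trgm trigram
-- indexes for that case).
```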
On Mon, 2005-01-24 at 15:45 -0800, Josh Berkus wrote:
[about keeping open DB connections between web-client connections]
[I wrote:]
No. You can only count on web-server-process == connection, but not
web-user == connection, unless you can guarantee that the same user
client always connects to
On Tue, 25 Jan 2005 07:23 pm, Antony Paul wrote:
Creating an index and using lower(column) does not change the explain
plan estimates.
It seems that it is not using an index for LIKE or ILIKE queries,
irrespective of whether the pattern contains a pattern-matching
character or not. (using PostgreSQL 7.3.3)
Actually the query is created like this: the user enters the query
through a user interface and can type any character in the query
criteria, i.e. % and _ can appear at any place. The user is also free
to choose the query columns. The query is against a single table.
rgds
Antony Paul
On Tue, 25 Jan
Hi all,
I am running PostgreSQL 7.3.3 on a RHL 7.0 box with a PIII and 512
MB RAM. Recently I upgraded the kernel from 2.2.16 to 2.4.28. Now the
problem is that Postgres is using only half of the memory, while before
upgrading the kernel it was using the full memory plus swap. If Postgres
uses the full
One fine day (Monday, 24 January 2005, 11:52+0900), Tatsuo Ishii
wrote:
Tatsuo Ishii [EMAIL PROTECTED] writes:
Probably VACUUM works well for small to medium size tables, but not
for huge ones. I'm considering implementing on-the-spot salvaging of
dead tuples.
That's
One fine day (Sunday, 23 January 2005, 15:40-0500), Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Changing the idea slightly might be better: if a row update would cause
a block split, then if there is more than one row version then we vacuum
the whole block first, then
One fine day (Thursday, 20 January 2005, 16:00+0100), Hervé Piedvache
wrote:
Both will do what you want. Replicator is easier to set up, but
Slony is free.
No ... as I have said ... how will I manage a database with a table of
maybe 250,000,000 records? I'll need incredible
One fine day (Thursday, 20 January 2005, 11:02-0500), Rod Taylor
wrote:
Slony has some other issues with databases 200GB in size as well
(well, it hates long-running transactions -- and pg_dump is a regular
long-running transaction)
IIRC it hates pg_dump mainly on the master. If you
On Tue, 25 Jan 2005, Antony Paul wrote:
Creating an index and using lower(column) does not change the explain
plan estimates.
It seems that it is not using an index for LIKE or ILIKE queries,
irrespective of whether the pattern contains a pattern-matching
character or not. (using PostgreSQL 7.3.3)
I
Tatsuo Ishii [EMAIL PROTECTED] writes:
Probably VACUUM works well for small to medium size tables, but not
for huge ones. I'm considering implementing on-the-spot salvaging of
dead tuples.
That's impossible on its face, except for the special case where the
same
My db server is running under high load recently and the number of
connections during the morning hours is actually very high.
This morning I found that postgres was not running, and the following in my log file:
DETAIL: The postmaster has commanded this server process to roll back
the current
Hannu Krosing [EMAIL PROTECTED] writes:
Why is removing index entries essential ?
Because once you re-use the tuple slot, any leftover index entries would
be pointing to the wrong rows.
regards, tom lane
Don Drake [EMAIL PROTECTED] writes:
This morning I found that postgres was not running, and the following
in my log file:
2005-01-25 01:38:22 FATAL: could not create shared memory segment:
Cannot allocate memory
DETAIL: Failed system call was shmget(key=5432001, size=273383424, 03600).
HINT:
Josh,
Please excuse how my client quotes things...
Are there ones that you use which might use several different connections to
send a series of queries from a single web-user, less than 5 seconds apart?
Using Apache/Perl I often have a situation where we're sending several
Peter, Ragnar,
Are there ones that you use which might use several different connections
to send a series of queries from a single web-user, less than 5 seconds
apart?
Using Apache/Perl I often have a situation where we're sending several
queries from the same user (web client)
[EMAIL PROTECTED] (Tom Lane) wrote in pgsql.performance:
Randolf Richardson [EMAIL PROTECTED] writes:
Ron Mayer [EMAIL PROTECTED] wrote in pgsql.performance:
Randolf Richardson wrote:
While this doesn't exactly answer your question, I use this little
tidbit of information when selling
Hi all,
We are developing an application that works with the DB over JDBC. We used
MSSQL before and are trying to migrate to PostgreSQL now. Unfortunately,
performance problems have been found. MSSQL with the default configuration
looks much faster than PostgreSQL on the same hardware (PostgreSQL8
Ron Mayer [EMAIL PROTECTED] wrote in pgsql.performance:
I sometimes also think it's fun to point out that Postgresql has
bigger companies supporting its software - like this one:
http://www.fastware.com.au/docs/FujitsuSupportedPostgreSQLWhitePaper.pdf
with $43 billion revenue -- instead of
Randolf Richardson [EMAIL PROTECTED] writes:
... the problem is that I have
to create a separate account and password for each .ORG internet domain
name now and can't just use one master account and password for all of
them,
This is a registrar issue; if you don't like the user-interface
On Fri, Jan 21, 2005 at 04:35:38PM +, Randolf Richardson wrote:
Yes, indeed, that will be. My feeling is that Network Solutions
actually manages the .NET and .COM registries far better than anyone else
does, and when .ORG was switched away I didn't like the lack of flexibility
Alexander Dolgin [EMAIL PROTECTED] writes:
Does anybody have an idea about the reasons for such results?
Try converting the MAX() functions to queries that will use indexes.
See FAQ entry 4.7 My queries are slow or don't make use of the
indexes. Why?
regards, tom lane
with about 8000 rows. For this table query:
SELECT MAX(MsgSeqNum),MAX(LogTimestamp) FROM ELT_tcli_MessageLog
WHERE LogTimestamp > '0' AND IsFromCounterParty = 'Y' AND
IsOutOfSequence = 'N'
AND ConnectionName = 'DB_BENCHMARK'
AND LogTimestamp IN (SELECT MAX(LogTimestamp)
Hi,
First it will be good if you supply some EXPLAIN ANALYZE results from
your query.
Second, did you create indexes that can be used with the WHERE conditions?
And third, AFAIK MAX() doesn't use an index. If you only need the max, you
can try ORDER BY ... DESC and LIMIT 1. But you can't use this
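The ORDER BY/LIMIT rewrite suggested above can be sketched against the query from earlier in the thread; the index itself is an assumption, and only the LogTimestamp maximum is shown:

```sql
-- Assumed index; old releases cannot use it for MAX(), but they can
-- walk it backwards for ORDER BY ... DESC LIMIT 1:
CREATE INDEX idx_msglog_ts ON ELT_tcli_MessageLog (LogTimestamp);

SELECT LogTimestamp
FROM ELT_tcli_MessageLog
WHERE IsFromCounterParty = 'Y'
  AND IsOutOfSequence = 'N'
  AND ConnectionName = 'DB_BENCHMARK'
ORDER BY LogTimestamp DESC
LIMIT 1;
```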
Hi gang,
I just inherited a FreeBSD box, and it is horribly sick. So we moved
everything to a new machine (power supply failures) and finally got
stuff running again.
Ok, for two days (rimshot)
Here are the two problems, and for the life of me I cannot find any
documentation on either:
1)
Thanks,
Then how do you explain relpages
(size_kb in the returned result)?
SELECT relname, relpages * 8 as size_kb,
relfilenode, reltuples
FROM pg_class c1
WHERE relkind = 'r'
AND relname = 'my_tab';
relname | size_kb | relfilenode | reltuples
Litao Wu [EMAIL PROTECTED] writes:
Then how do you explain relpages
(size_kb in the returned result)?
relpages should be accurate in either case, since we get that by asking
the kernel (lseek).
regards, tom lane
Litao Wu [EMAIL PROTECTED] writes:
reasonable size. But I do not understand
why analyze bloats the table so
much??
ANALYZE won't bloat anything. I suppose you have other processes
inserting or updating data in the table meanwhile.
regards, tom lane
I know it is accurate.
My question is why the table takes
2023024 KB after being analyzed?
And why does it not shrink to 30088 KB after being vacuumed?
I know vacuum full verbose
will force it to shrink to a
reasonable size. But I do not understand
why analyze bloats the table so
much??
Please note all above
Believe it or not,
the above command output is my screen snapshot.
I believe it is most probably a PG bug!
--- Tom Lane [EMAIL PROTECTED] wrote:
Litao Wu [EMAIL PROTECTED] writes:
reasonable size. But I do not understand
why analyze bloats the table so
much??
ANALYZE won't bloat
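A minimal sketch of the size check under discussion, reusing the `my_tab` query from earlier in the thread; note that plain VACUUM only marks dead space reusable, while VACUUM FULL actually rewrites the table and shrinks relpages:

```sql
-- Report on-disk size from pg_class (relpages are 8 KB blocks):
SELECT relname, relpages * 8 AS size_kb, reltuples
FROM pg_class
WHERE relname = 'my_tab';

-- Plain VACUUM reclaims dead tuples for reuse but rarely returns
-- space to the OS, so relpages usually stays large:
VACUUM ANALYZE my_tab;

-- VACUUM FULL compacts the table; relpages shrinks afterwards:
VACUUM FULL VERBOSE my_tab;
```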
I help manage Linux servers for a 100-employee animal hospital. I am
new to database setup and tuning, and I was hoping I could get some
direction on setting up a drive array we're considering moving our
database to.
They're currently on a two-disk Adaptec RAID1 with Postgresql 7.4.2.
The drive
Mitch Pirtle wrote:
1) freebsd will only let PostgreSQL have 38 connections at a time,
regardless of kernel settings or postgresql.conf settings. Where
exactly (and how, exactly) does one remedy that problem?
What version of FreeBSD is the box running?
Generally you need to change semaphores
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Steve Atkins) would
write:
As a bit of obPostgresql, though... While the registry for .org is
run on Postgresql, the actual DNS is run on Oracle. That choice was
driven by the availability of multi-master replication.
Like many of
Folks,
I'm using PostgreSQL 7.4.1 on Linux, and I'm trying to figure out
whether a
query I have is going to be slow when I have more information in my tables.
Both tables involved will likely have ~500K rows within a year or so.
Specifically, I can't tell if I'm causing myself
Alex Turner [EMAIL PROTECTED] writes:
I am also very interested in this question. Is there any way to
declare a persistent cursor that remains open between PG sessions?
This would be better than a temp table because you would not have to
do the initial select and insert into a fresh
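Not between sessions, but within one session a WITH HOLD cursor (available since PostgreSQL 7.4) comes close to what is asked: it survives the transaction that declared it. A sketch with a hypothetical table:

```sql
BEGIN;
-- WITH HOLD materializes the result at COMMIT, so the cursor can be
-- fetched from after the transaction ends (but not after the
-- connection closes):
DECLARE big_scan CURSOR WITH HOLD FOR
    SELECT * FROM some_table ORDER BY id;
COMMIT;

FETCH 100 FROM big_scan;   -- first batch
FETCH 100 FROM big_scan;   -- next batch
CLOSE big_scan;            -- release the held result set
```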
Steve,
I help manage Linux servers for a 100-employee animal hospital. I am
new to database setup and tuning, and I was hoping I could get some
direction on setting up a drive array we're considering moving our
database to.
Check what I have to say at http://www.powerpostgresql.com/PerfList