Should PHP4's persistent connections be used at all? I was going back to
the mailing list archive and there were discussions about the use of
persistent connections and interference of one script with another
between BEGIN/COMMITs. Any clarifications?
Was that issue with PHP not reusing
How does PostgreSQL deal with locking when one has a large select query
running on a 700,000-row table while there are inserts pending for that table?
I have an application that does a _lot_ of inserts, and a frontend that
makes large long laborious select queries on the same tables.
MySQL has
Just an FYI.
FWIW, I emailed the PHP maintainer of the Postgres piece, and he replied
with the following. (I installed the patch, recompiled, and everything has
been running fine for about a week or so.)
Michael Fork - CCNA - MCP - A+
Network Support - Toledo Internet Access - Toledo
Ack! I just timed it at 74 seconds.
Added two indexes, here's the query plan... it doesn't seem to be using the
indexes at all. I'm sure I'm doing something wrong here...
NOTICE: QUERY PLAN:
Sort (cost=6707.62..6707.62 rows=10596 width=170)
  -> Merge Join (cost=1.34..5492.29 rows=10596 width=170)
Below is what I'd like:
select * from table natural join (select tablename from table where
table_id = 'rid');
select tablename from table where table_id = 'rid'; returns 'table2'
So, the equivalent of what I'd like in this case is: select * from
table natural join table2;
I could do
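One way to spell that out, sketched in two steps ("master" stands in for
the real lookup table, since the table name has to be known before the
query is planned):

    -- step 1: find out which table to join against
    SELECT tablename FROM master WHERE table_id = 'rid';   -- returns 'table2'
    -- step 2: issue the real query with that name spliced in
    SELECT * FROM master NATURAL JOIN table2;

Doing it in a single server-side call needs dynamic SQL, e.g. a PL/pgSQL
function that assembles the statement and runs it with EXECUTE.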
Simon Attwell writes:
How does PostgreSQL deal with locking when one has a large select query
running on a 700,000-row table while there are inserts pending for that table?
The insert and select can happen in parallel. See
http://www.postgresql.org/users-lounge/docs/7.0/postgres/mvcc.htm
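Roughly what that means in practice, with a made-up table name:

    -- Session A (the reporting frontend):
    BEGIN;
    SELECT count(*) FROM big_table;    -- long scan over ~700,000 rows

    -- Session B (the insert-heavy application), run while A's query is going:
    INSERT INTO big_table (some_col) VALUES ('new row');   -- does not wait for A

    -- A's query works from the snapshot it took and simply never sees B's
    -- new row; neither side blocks the other.  Only explicit exclusive
    -- locks (LOCK TABLE and the like) would change that.
    COMMIT;    -- ends session A's transaction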
Simon Attwell [EMAIL PROTECTED] writes:
How does PostgreSQL deal with locking when one has a large select
query running on a 700,000-row table while there are inserts pending
for that table?
I have an application that does a _lot_ of inserts, and a frontend that
makes large long laborious
Dave Edmondson writes:
Added two indexes, here's the query plan... it doesn't seem to be using the
indexes at all. I'm sure I'm doing something wrong here...
NOTICE: QUERY PLAN:
Sort (cost=6707.62..6707.62 rows=10596 width=170)
  -> Merge Join (cost=1.34..5492.29 rows=10596 width=170)
On Mon, Feb 19, 2001 at 12:22:11PM -0500, Tom Lane wrote:
Dave Edmondson [EMAIL PROTECTED] writes:
Ack! I just timed it at 74 seconds.
Added two indexes, here's the query plan... it doesn't seem to be using the
indexes at all. I'm sure I'm doing something wrong here...
Have you done a
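For reference, the usual sequence here (the index name is made up) is to
create the index, refresh the statistics, and then look at the plan again:

    CREATE INDEX data_conf_id_idx ON data (conf_id);
    VACUUM ANALYZE data;       -- gives the planner up-to-date row statistics
    EXPLAIN SELECT * FROM data WHERE conf_id = 4;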
Dave Edmondson [EMAIL PROTECTED] writes:
Yes. I ran VACUUM ANALYZE after creating the indices. (Actually, I VACUUM
the database twice a day.) The data table literally has 145972 rows, and
145971 will match conf_id 4...
Hm. In that case the seqscan on data looks pretty reasonable ... not
sure if you can improve on this much,
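A quick way to see why, using the column from the thread: check how the
values are distributed, and compare the plan for a rare value.

    SELECT conf_id, count(*) FROM data GROUP BY conf_id;
    -- With 145,971 of 145,972 rows matching conf_id = 4, an index scan would
    -- visit nearly every row anyway, so the sequential scan is the cheaper plan.
    EXPLAIN SELECT * FROM data WHERE conf_id = 5;   -- a (hypothetical) rare value;
                                                    -- this one should use the index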
You can reconstruct the information out of the triggers
that are created in pg_trigger. It's not easy to
parse, however.
There are three triggers created for each fk constraint,
one on the fk table, two on the pk table. You can
get the tables, columns constrained and match type from
the
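As a rough sketch, the RI triggers can at least be located and tied back
to the two tables from the catalogs (old-style join syntax; the layout of
pg_trigger differs a little between releases, so treat this as a starting
point):

    SELECT t.tgname,
           rel.relname   AS trigger_table,
           other.relname AS other_table,
           p.proname     AS ri_function
      FROM pg_trigger t, pg_class rel, pg_class other, pg_proc p
     WHERE rel.oid   = t.tgrelid
       AND other.oid = t.tgconstrrelid
       AND p.oid     = t.tgfoid
       AND p.proname LIKE 'RI_FKey%';
    -- tgargs then holds the constraint name, fk table, pk table, match type
    -- and the column pairs as NUL-separated strings, which is the part that
    -- is painful to parse.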
On Mon, 19 Feb 2001, Mark Cowlishaw wrote:
I noticed that running queries that do -not- use declared table aliases in
the 'where' clause seem to run a hell-of-a-lot slower than when aliases are
used. Is there a valid reason for this? It started out as a typo but now I'm
curious.
Well,
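One frequent culprit, though only a guess here: if an alias is declared in
FROM but the bare table name is used in WHERE, the parser quietly adds the
table to the FROM list a second time and the query turns into a self-join.
With a made-up table:

    SELECT o.id FROM orders o WHERE orders.customer_id = 42;   -- "orders" is added
                                                                -- twice: a self-join
    SELECT o.id FROM orders o WHERE o.customer_id = 42;        -- single scan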
I've noticed that select max(the_primary_key) from some_table
does a table scan. Is there any plan to implement max/min calculations
using index lookups if the appropriate index exists?
Thanks,
Alex.
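The usual workaround, assuming an index on the_primary_key exists, is to
rewrite it as a query the planner can answer with a single index probe:

    SELECT the_primary_key
      FROM some_table
     ORDER BY the_primary_key DESC
     LIMIT 1;

min() gets the same treatment with ORDER BY ... ASC LIMIT 1.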
I have a table/class filled with records/instances in which I accidentally
set the field's/attribute's data type to text rather than boolean. I now have
75,000 records with 't' and 'f'. Now I want to change the attribute to
bool. I can't find anything on changing data types in the integrated
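A sketch of the usual route (table and column names are made up; older
releases have no way to alter a column's type in place, so the table gets
rebuilt):

    BEGIN;
    CREATE TABLE mytable_new AS
      SELECT id, other_col, (flag = 't') AS flag   -- the comparison yields a bool
        FROM mytable;
    -- recreate indexes and constraints on mytable_new here
    ALTER TABLE mytable     RENAME TO mytable_old;
    ALTER TABLE mytable_new RENAME TO mytable;
    COMMIT;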
A friend asked me to figure out how to access PostgreSQL from Tcl via
ODBC. For posterity, here's the step-by-step "how I did it" that I
emailed to him. I don't know Tcl; this was just about getting the
compile options correct and doing the proper sysadminning to make
things work.
Comments,
Was there something wrong with dbf2sql? :-) It used to be in contrib/, I
think, but it was very simple code that did the job. At the time I was
working with 1.x and 6.0x versions of Postgres and made a few mods to it so
that I could rename the tables, change field types, just export the data
At 10:09 AM 19-02-2001 -0500, Bruce Momjian wrote:
We still need to reset any 'SET' commands used, but that can't be done
for 7.1.
Would you have to do an unlisten *?
Are there other per connection things as well?
Cheerio,
Link.
In 7.0 it is rather obvious. In 7.1 they moved to a number-based
scheme. I suggested they make a symlink with the name of the table
pointing at the numbered file, but I don't think anyone paid attention.
The info is in one of the pg_ tables; I don't know which one
offhand.
Louis-David
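For what it's worth, in 7.1 the number in the file name is the relation's
relfilenode, and pg_class is the catalog that carries it (table and
database names below are made up):

    SELECT relfilenode FROM pg_class WHERE relname = 'mytable';
    SELECT oid FROM pg_database WHERE datname = 'mydb';  -- the per-database directory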
Stephan Szabo wrote:
Do you have a value that is not null that is very common?
It's estimating that there will be 10113 rows that match
nomsession='xxx' which makes a seq scan a much less bad plan.
Err, why? There is an index, isn't there? Shouldn't the index allow
postgres to quickly
At 10:09 AM 19-02-2001 -0500, Bruce Momjian wrote:
We still need to reset any 'SET' commands used, but that can't be done
for 7.1.
Would you have to do an unlisten *?
Are there other per connection things as well?
Wow, yes, there are probably many things we will need to do to reset
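A sketch of the sort of per-connection cleanup a pooling client might
issue before handing the connection to the next script (not an exhaustive
list):

    ROLLBACK;          -- abandon any transaction a script left open
    UNLISTEN *;        -- drop any LISTEN registrations
    RESET datestyle;   -- undo SET commands, one variable at a time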
On Mon, 19 Feb 2001, Joseph Shraibman wrote:
Stephan Szabo wrote:
Do you have a value that is not null that is very common?
It's estimating that there will be 10113 rows that match
nomsession='xxx' which makes a seq scan a much less bad plan.
Err, why? There is an index, isn't
Joseph Shraibman [EMAIL PROTECTED] writes:
Stephan Szabo wrote:
Do you have a value that is not null that is very common?
It's estimating that there will be 10113 rows that match
nomsession='xxx' which makes a seq scan a much less bad plan.
Err, why? There is an index, isn't there?
Joseph Shraibman wrote:
Can't postgres do the index lookup first and find out there are only a
few tuples that might match?
Actually it looks like postgres is doing this:
o=# explain select * from usertable where p = 33;
NOTICE: QUERY PLAN:
Seq Scan on usertable (cost=0.00..30.54
On Mon, 19 Feb 2001, Joseph Shraibman wrote:
Of course, if the 10113-match estimate is wildly off (as it was in this
case), then the wrong plan may be chosen. But it IS NOT CORRECT to
suppose that indexscans always beat seqscans. The planner's job would
be a lot easier if that were
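When the estimate is that far off, the first thing to check is whether the
statistics are current (the table and column are pulled from different
messages in the thread, so treat the names as placeholders):

    EXPLAIN SELECT * FROM usertable WHERE nomsession = 'xxx';  -- note the rows= estimate
    SELECT count(*) FROM usertable WHERE nomsession = 'xxx';   -- the actual count
    VACUUM ANALYZE usertable;                                  -- refresh the statistics
    EXPLAIN SELECT * FROM usertable WHERE nomsession = 'xxx';  -- see if the plan changes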