Hagop,

Do you have a pastebin of your code, or a reasonably equivalent snippet?
Please include the corresponding SQL CREATE TABLE and INSERT statements.

Joseph Armbruster

On 4/21/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Today's Topics:

   1. pygresql connection speeds issue (Hagop Narsesian)
   2. Re: pygresql connection speeds issue (Christoph Zwerschke)


----------------------------------------------------------------------

Message: 1
Date: Sat, 21 Apr 2007 03:29:33 -0700 (PDT)
From: Hagop Narsesian <[EMAIL PROTECTED]>
Subject: [PyGreSQL] pygresql connection speeds issue
To: [email protected]
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain; charset=iso-8859-1

Hello all,
I'm quite new to pygresql and very new to the list.

I am using PyGreSQL with Python 2.4 to connect to my
PostgreSQL 8.2 database. I normally open a connection only
for the duration of the action I need, and then close it
again. Inserts and deletes are quick, but a select query
can take up to 6 or 7 seconds, even against tables that are
no larger than 60 or 70 rows and that are indexed.
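
(For illustration only, a minimal sketch of this open-query-close
pattern, assuming the DB-API 2.0 pgdb module and a hypothetical
"items" table; the connection details are placeholders, not the
actual code in question:)

import pgdb

def fetch_items():
    # Open a connection just for this one action, then close it again.
    con = pgdb.connect(database='mydb', user='me')  # placeholder credentials
    try:
        cur = con.cursor()
        cur.execute("SELECT * FROM items")  # small, indexed table (60-70 rows)
        rows = cur.fetchall()
        cur.close()
    finally:
        con.close()
    return rows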

Initially I thought this was due to the connection
overhead, but in that case, inserts should also take a long
time, shouldn't they?

I carry out some basic validation and formatting after
fetching the results, but I'm pretty sure that isn't what
takes the time: it remains slow even when I fetch a single
row, and it doesn't seem to matter much whether it's 1 row
or 150, and I rarely fetch more than about 150.

Anyone have any suggestions? Should I be keeping
connections open all the time?
Hagop


------------------------------

Message: 2
Date: Sat, 21 Apr 2007 13:14:11 +0200
From: Christoph Zwerschke <[EMAIL PROTECTED]>
Subject: Re: [PyGreSQL] pygresql connection speeds issue
To: PyGreSQL Development <[email protected]>
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi Hagop,

> But when I have to carry out a select query,
> it can take up to 6 or 7 seconds, even for queries
> from tables that are no larger than 60 or 70 rows
> and that are indexed.
> ...
> Initially I thought this was because of the overheads
> of the connection, but in this event, inserts should
> also take very long, shouldn't they?

You're right, creating a connection does carry some overhead, but it
should not be THAT much ;-) It is also important to VACUUM your
database tables, but for a table with 60 rows that can't be the reason
either (plus PostgreSQL 8.2 has autovacuum). So something in your
database setup seems to be amiss. Have a look at the database server
logs. What happens if you run the same queries with pgAdmin? Does it
happen with both the pg and pgdb adapters?
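
(As a rough sketch of how to narrow this down, one can time the
connect and the query separately with each adapter; the database and
table names below are placeholders:)

import time
import pg    # classic PyGreSQL interface
import pgdb  # DB-API 2.0 interface

# Time connect and query separately with the classic pg module.
t0 = time.time()
db = pg.connect(dbname='mydb')           # placeholder database name
t1 = time.time()
res = db.query("SELECT * FROM items")    # placeholder small table
t2 = time.time()
db.close()
print "pg:   connect %.3fs  query %.3fs" % (t1 - t0, t2 - t1)

# The same measurement with pgdb.
t0 = time.time()
con = pgdb.connect(database='mydb')
t1 = time.time()
cur = con.cursor()
cur.execute("SELECT * FROM items")
rows = cur.fetchall()
t2 = time.time()
con.close()
print "pgdb: connect %.3fs  query %.3fs" % (t1 - t0, t2 - t1)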

If you're worried about connection overhead, you can use a connection
pool (e.g. http://www.webwareforpython.org/DBUtils), but that overhead
is certainly not what is causing the delays you are seeing here.
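
(A minimal sketch of such a pool with pgdb, assuming DBUtils'
PooledDB class from the URL above; the pool size and connection
parameters are placeholders:)

import pgdb
from DBUtils.PooledDB import PooledDB

# Keep a few connections open in a pool instead of reconnecting each time.
pool = PooledDB(pgdb, mincached=5, database='mydb')  # placeholder settings

con = pool.connection()  # borrow a connection from the pool
cur = con.cursor()
cur.execute("SELECT * FROM items")
rows = cur.fetchall()
cur.close()
con.close()              # returns the connection to the pool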

-- Chris


------------------------------

_______________________________________________
PyGreSQL mailing list
[email protected]
http://mailman.vex.net/mailman/listinfo/pygresql


End of PyGreSQL Digest, Vol 46, Issue 3
***************************************
