On Thu, Jul 9, 2009 at 10:35 PM, Alex wrote:
> Forgot to add:
>
> postg...@ec2-75-101-128-4:~$ psql --version
> psql (PostgreSQL) 8.3.5
How is the index sl_city_etc defined?
--
greg
http://mit.edu/~gsstark/resume.pdf
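The definition can be read straight out of the catalog. A minimal sketch, assuming only the index name from the question (the owning table is not named in the thread):

  -- pg_indexes stores the reconstructed CREATE INDEX statement:
  SELECT tablename, indexdef
  FROM pg_indexes
  WHERE indexname = 'sl_city_etc';

Inside psql, \d on the owning table lists the same information.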
On Mon, Jul 13, 2009 at 3:31 PM, Wayne Conrad wrote:
> Howdy. Some months back, when advised on one of these lists that it
> should not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit
> this nightly "maintenance" practice. We've been very happy to not
> have to do that, since it locked…
On Mon, Jul 13, 2009 at 5:23 AM, Suvankar Roy wrote:
>
> Hi,
>
> I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB as
> well as a Greenplum DB.
>
> The primary key is a composite one comprising 2 columns (so_no,
> serial_no).
>
> The execution of the following query takes…
Hi,
I am porting an application from DB2 9.1 to PostgreSQL 8.2.4.
CLI was used to connect to DB2, and ODBC is used to connect to PostgreSQL.
The query statement is as follows:
SELECT void,nameId,tag FROM (SELECT void,nameId,tag FROM Attr
WHERE attributeof IN (SELECT oid_ FROM…
Hi,
I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB
as well as a Greenplum DB.
The primary key is a composite one comprising 2 columns (so_no,
serial_no).
The execution of the following query takes 8214.016 ms in Greenplum but
only 729.134 ms in Postgres.
select * …
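Comparing the plans on the two systems is the usual first step for a gap like this. A minimal sketch; the full query is truncated above, so the ORDER BY here is only a hypothetical stand-in built from the composite key:

  -- Run on both PostgreSQL and Greenplum and compare plans and timings:
  EXPLAIN ANALYZE
  SELECT *
  FROM OBSERVATION_ALL
  ORDER BY so_no, serial_no;  -- hypothetical; substitute the real query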
Howdy. Some months back, when advised on one of these lists that it
should not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit
this nightly "maintenance" practice. We've been very happy to not
have to do that, since it locked the database all night. Since then,
however, our database…
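For reference, the non-locking replacement for the old nightly job is plain VACUUM plus autovacuum. A minimal sketch, assuming 8.2 or later for the per-table vacuum timestamps:

  -- Plain VACUUM ANALYZE reclaims space without the exclusive lock
  -- that VACUUM FULL takes:
  VACUUM ANALYZE;
  -- Check that autovacuum has been visiting the busiest tables:
  SELECT relname, last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
  ORDER BY last_autovacuum DESC;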
When the user count in a Postgres database reaches 500,000, the
'CREATE USER' command slows down to 5-10 seconds per user.
What could be the reason for this problem, and is there any solution to
avoid it?
Each user can have up to 10 associated roles, with grants to sy…
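CREATE USER itself does very little work, so the usual suspect is the size of the shared role catalogs it must update. A minimal diagnostic sketch (the user name is hypothetical):

  -- How many roles exist, and how big are the catalogs behind them?
  SELECT count(*) FROM pg_roles;
  SELECT pg_relation_size('pg_authid')       AS authid_bytes,
         pg_relation_size('pg_auth_members') AS members_bytes;
  -- In psql, time a single creation:
  \timing
  CREATE USER test_user_500001;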
Below is a query that takes 16 seconds on the first run. I am having
generally poor performance for queries in uncached areas of the data
and often mediocre (500ms-2s+) performance generally, although
sometimes it's very fast. All the queries are pretty similar and use
the indexes this way.
I've…
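Running the same statement twice separates the cold-cache cost from the cost of the query itself. A minimal sketch with hypothetical table and column names, since the query is not shown in full:

  -- First run reads mostly from disk; the second mostly from cache.
  -- The difference between the two "Total runtime" figures is the I/O cost.
  EXPLAIN ANALYZE SELECT * FROM big_table WHERE indexed_col = 42;
  EXPLAIN ANALYZE SELECT * FROM big_table WHERE indexed_col = 42;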
Forgot to add:
postg...@ec2-75-101-128-4:~$ psql --version
psql (PostgreSQL) 8.3.5
Hi,
I'm using both the IN and ANY() operators extensively in my application. Can
anybody answer the following questions:
1) Does either operator have a performance advantage over the other?
2) If I've indexed these columns, will both operators make use of index
scanning?
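For a constant list the two forms normally end up with the same plan: since 8.2 the planner rewrites IN (value list) as = ANY (array), and either form can use an index on the column. A minimal sketch with hypothetical names:

  -- Both predicates can be satisfied by an index on status:
  EXPLAIN SELECT * FROM orders WHERE status IN ('new', 'open');
  EXPLAIN SELECT * FROM orders WHERE status = ANY (ARRAY['new', 'open']);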
Hi All,
I would like to know: is there a limitation on the size of an XML
document that can be stored in a table?
Do you think inserting a large XML document, say with 100 elements, will
be a problem?
Waiting for your reply.
Franclin.
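A single field value is capped at 1 GB in PostgreSQL, and large values are compressed and moved out of line automatically via TOAST, so a document with ~100 elements is far below any limit. A minimal sketch, assuming the built-in xml type (8.3 and later):

  -- Hypothetical table:
  CREATE TABLE docs (id serial PRIMARY KEY, body xml);
  INSERT INTO docs (body)
  VALUES (XMLPARSE(DOCUMENT '<root><item>one</item><item>two</item></root>'));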
2009/5/28 Eddy Ernesto Baños Fernández:
> Try Cybercluster
I looked into that. There is one piece of documentation, and it is less
than ten pages long. There is no users group, no listserv, no
community that I can discern.
Do you have experience with it, and if so, could you please share?
Thanks…
Eric Haszlakiewicz wrote:
>> The log is misleading; the first statement is not really executed,
>> it is only prepared (parsed). If you set the log level to DEBUG2, it
>> will look like:
>
> Yes, but it's still incurring the overhead of sending the message to the
> server, isn't it?
Yes.
>> Maybe…
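The SQL-level analogue of the parse/execute split being discussed looks like this. A minimal sketch with hypothetical names; as noted above, DEBUG2 makes the server log show the two phases separately:

  -- Raising the log level (superuser only) distinguishes parse from execute:
  SET log_min_messages = debug2;
  -- PREPARE only parses and plans; nothing is executed until EXECUTE:
  PREPARE q (int) AS SELECT * FROM items WHERE id = $1;
  EXECUTE q(42);
  DEALLOCATE q;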
On Tuesday 14 July 2009 10:23:25, Richard Huxton wrote:
> Marc Cousin wrote:
> > Temporarily I pushed the problem out to somewhat larger batch sizes by
> > changing random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel
> > like a sorcerer's apprentice with this, as I told PostgreSQL that fetching…
On Tuesday 14 July 2009 10:15:21, you wrote:
> Marc Cousin wrote:
> >> Your effective_cache_size is really small for the system you seem to
> >> have - it's the size of the IO caching your OS is doing and uses no
> >> resources itself. And 800MB of that on a system with that amount of data seems…
Marc Cousin wrote:
Temporarily I pushed the problem out to somewhat larger batch sizes by changing
random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel like a
sorcerer's apprentice with this, as I told PostgreSQL that fetching rows from
disk is much cheaper than it really is. These values are…
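For scale: the defaults are seq_page_cost = 1.0 and random_page_cost = 4.0, so 0.01/0.02 tell the planner that page fetches are nearly free, which distorts every other cost estimate. A minimal sketch of experimenting per-session instead of globally (the query and the value 2.0 are hypothetical):

  SET random_page_cost = 2.0;  -- plausible for a well-cached database
  EXPLAIN SELECT * FROM batch_table WHERE batch_id = 1;
  RESET random_page_cost;      -- back to the server-wide setting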
Marc Cousin wrote:
Your effective_cache_size is really small for the system you seem to have -
it's the size of the IO caching your OS is doing and uses no resources itself.
And 800MB of that on a system with that amount of data seems a bit unlikely
;-)
Using `free` you can see the amount of IO cache…
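Since effective_cache_size is only a planner hint, it can be raised safely; nothing is allocated. A minimal sketch, where the value is a hypothetical reading of the 'cached' column from `free`:

  SHOW effective_cache_size;
  SET effective_cache_size = '6GB';  -- hypothetical; use the cached figure from free
  -- Once a good value is found, make it permanent in postgresql.conf.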