Jean-Max Reymond <[EMAIL PROTECTED]> writes:
> I have a stored procedure written in perl and I doubt that perl's
> garbage collector is working :-(
> after a lot of work, postmaster has a size of 1100 Mb and I think
> that the keyword "undef" has no effect.
Check the PG list archives --- there's
[EMAIL PROTECTED] - Tue at 08:33:58PM +0200]
> I use FreeBSD 4.11 with PostgreSQL 7.3.8.
(...)
> database=> explain select date_trunc('hour', time),count(*) as total from
> test where p1=53 and time > now() - interval '24 hours' group by
> date_trunc order by date_trunc ;
I haven't looked through
Meetesh Karia <[EMAIL PROTECTED]> writes:
> According to section 8.3 of the doc:
> "Long values are also stored in background tables so they do not interfere
> with rapid access to the shorter column values."
> So, how long does a value have to be to be considered "long"?
Several kilobytes.
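For illustration (not from the thread itself): out-of-line TOAST storage only kicks in once a whole row exceeds roughly 2 kB, and the per-column storage strategy can be adjusted. A minimal sketch with a hypothetical table and column:
-- Hypothetical table; values are only moved out of line once the row grows past ~2 kB.
CREATE TABLE docs (id integer PRIMARY KEY, body text);
-- Keep 'body' out of line and uncompressed whenever toasting does happen.
ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;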
Hi there,
I'm doing an update of ~30,000 rows and it takes about 15 minutes on
pretty good hardware, even just after a vacuum analyze.
I was hoping some kind soul could offer some performance advice. Do I
just have too many indexes? Or am I missing some trick with the nulls?
MY QUERY
updat
Hi,
I have a table with two indices on the same column, one of which is a partial index. I would like the query planner to use the partial index whenever the query condition lies in the range of the partial index as it would yield better performance. Is there any way to enforce the ordering for
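For illustration, a sketch of the situation (table, column and predicate are hypothetical): the planner can only pick the partial index when the query's WHERE clause provably falls inside the index predicate.
-- A full index and a partial index on the same column.
CREATE INDEX orders_amount_idx ON orders (amount);
CREATE INDEX orders_amount_big_idx ON orders (amount) WHERE amount > 1000;
-- The partial index is a candidate only when the condition implies its predicate:
SELECT * FROM orders WHERE amount > 5000;  -- partial index usable
SELECT * FROM orders WHERE amount > 500;   -- only the full index qualifies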
Hello!
I use FreeBSD 4.11 with PostgreSQL 7.3.8.
I got a huge database with roughly 15 million records. There is just one
table, with a time field, a few ints and a few strings.
table test
fields time (timestamp), source (string), destination (string), p1 (int),
p2 (int)
I have run VACUUM ANA
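For the query quoted earlier in this thread (select date_trunc('hour', time), count(*) ... where p1=53 and time > now() - interval '24 hours' ...), one thing usually worth trying (a sketch, not advice given in the thread) is a two-column index with the equality column first, then checking the plan:
-- Sketch: matches "WHERE p1 = 53 AND time > now() - interval '24 hours'".
CREATE INDEX test_p1_time ON test (p1, "time");
EXPLAIN ANALYZE
SELECT date_trunc('hour', "time"), count(*) AS total
FROM test
WHERE p1 = 53 AND "time" > now() - interval '24 hours'
GROUP BY date_trunc('hour', "time")
ORDER BY date_trunc('hour', "time");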
> Hi, need some help from some experts here.
> Currently we have a function that is used together with a temp table; it
> calls a search-result function, and every time this function is called it
> goes through some filters before coming out with a result.
> Now we have a major problem: the first time the
Hi,
Consider the where-clauses:
WHERE lower(col) LIKE 'abc';
WHERE lower(col) LIKE 'abc%';
these will both use a b-tree functional index on lower(col) if one exists.
The clause
WHERE lower(col) LIKE '%abc';
can't use the index as you would expect, because of the wildcard at the
front (as me
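For illustration, a minimal sketch of the setup being described (table and column names are hypothetical):
-- Expression index usable by anchored LIKE patterns on lower(col).
CREATE INDEX people_lower_name_idx ON people (lower(name));
SELECT * FROM people WHERE lower(name) LIKE 'abc';   -- can use the index
SELECT * FROM people WHERE lower(name) LIKE 'abc%';  -- can use the index
SELECT * FROM people WHERE lower(name) LIKE '%abc';  -- cannot: leading wildcard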
Hey, all. I've bounced this around in #postgres for an hour or so, and
it was suggested that I post it here as well. Hopefully someone can
help me out.
I have three machines. All have 512MB of ram.
Machine A is a 2.0GHz Celeron, running Debian, pg version 7.4.6.
Machine B is a 1.8GHz Celeron
Hi all,
I'm running PG 8.0.3 on WinXP and I'm coming across some performance
issues related to text columns. Basically, it appears as though
PG is storing the text data inline with the rest of the row data as
queries that don't touch the text column are slower when there is data
in the text colum
Hi Gnanavel,
Thanks, but that will only return at most 100 statements. If there is a
transaction with 110 statements then this will not return all the
statements for that transaction. We need to make sure that the function
returns all the statements for a transaction.
Cheers
David
Gnanavel
Hello. I believe in earlier versions, a query of the
form
select attrib from ttt where attrib like 'foo%' would
be able to take advantage of an index. I have seen
this in the past. Currently I am using v8.0.3. From
what I can see, the execution plan seems to
use a seq scan and to totally i
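A common cause of this on 8.0 (an aside, not taken from the message above) is a database initialized with a non-C locale, in which case a plain b-tree index cannot be used for LIKE 'foo%'; an index built with the pattern operator class is the usual workaround. A sketch reusing the names from the example query:
-- Usable for anchored LIKE patterns even under a non-C locale.
CREATE INDEX ttt_attrib_pattern_idx ON ttt (attrib text_pattern_ops);
EXPLAIN ANALYZE SELECT attrib FROM ttt WHERE attrib LIKE 'foo%';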
Merge the two select statements like this and try,
SELECT t.trans_id as ID,s.id, s.transaction_id, s.table_name, s.op, s.data
FROM pending_trans AS t join dbmirror.pending_statement AS s
on (s.transaction_id=t.id)
WHERE t.fetched = false order by t.trans_id,s.id limit 100;
If the above que
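Since the reply earlier in this thread points out that LIMIT 100 applies to statements and can cut a transaction in half, one possible reshaping (a sketch only, assuming the same tables and columns) is to limit whole transactions in a subquery and then fetch every statement belonging to them:
SELECT t.trans_id AS id, s.id, s.transaction_id, s.table_name, s.op, s.data
FROM (SELECT trans_id, id
      FROM pending_trans
      WHERE fetched = false
      ORDER BY trans_id
      LIMIT 100) AS t
JOIN dbmirror.pending_statement AS s ON s.transaction_id = t.id
ORDER BY t.trans_id, s.id;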
Hi Keith,
Unfortunately, we must have those sorts. The statements within a
transaction must be executed on the slave in the same order as they were
on the master, and similarly, transactions must also go in the same
order. As for aliasing the tables, that is just a remnant from previous
versi
Christopher Kings-Lynne wrote:
I'm curious - how did you write a multi-master replication package in
pgsql, when pgsql doesn't have 2 phase commits or any kind of
distributed syncing or conflict resolution in a release version?
We didn't write it entirely in pgsql, there is a worker process
David Mitchell wrote:
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction (pending_trans) in one go (th
What's wrong with Slony?
Because it's not multi-master. Our mirroring package is.
I'm curious - how did you write a multi-master replication package in
pgsql, when pgsql doesn't have 2 phase commits or any kind of
distributed syncing or conflict resolution in a release version?
Chris
---
Christopher Kings-Lynne wrote:
What's wrong with Slony?
Because it's not multi-master. Our mirroring package is.
--
David Mitchell
Software Engineer
Telogis
What's wrong with Slony?
David Mitchell wrote:
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction (pe
On Tue, Jun 28, 2005 at 03:03:06AM +, Karl O. Pinc wrote:
>
> For all your work a documentation patch is appended that
> I think is easier to read and might avoid this problem
> in the future.
Patches should go to the pgsql-patches list -- the people who review
and apply patches might not be
We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction (pending_trans) in one go (that is,
we either selec
On 06/27/2005 09:36:51 PM, Karl O. Pinc wrote:
I'm doing this in a torture test script, loading data.
Every Fibonacci number of rows * 100 I VACUUM ANALYZE.
So, 100, 200, 300, 500, 800, etc.
(And of course disconnect my client and re-connect so
as to use the new statistics. sure would be nice
On 06/27/2005 08:34:19 PM, Michael Fuhr wrote:
On Tue, Jun 28, 2005 at 01:54:08AM +, Karl O. Pinc wrote:
> On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:
>
> >See timeofday().
>
> That only gives you the time at the start of the transaction,
> so you get no indication of how long anything in
On 06/27/2005 05:37:41 PM, Josh Berkus wrote:
Karl,
> Seems to me that when there's a constant value in the query
> and an = comparison it will always be faster to use the (b-tree)
> index that's ordered first by the constant value, as then all further
> blocks are guaranteed to have a higher
On Tue, Jun 28, 2005 at 01:54:08AM +, Karl O. Pinc wrote:
> On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:
>
> >See timeofday().
>
> That only gives you the time at the start of the transaction,
> so you get no indication of how long anything in the
> transaction takes.
Did you read the docum
Hi all,
My company currently runs a number of both web-based and more
transactional projects on a PostgreSQL 7.3 server, and we're looking to
upgrade to a new machine running 8.0 to boost performance and handle
data growth into the future.
Right now I'm looking at a Sun Fire V40z server in
On Jun 28, 2005, at 10:54 AM, Karl O. Pinc wrote:
On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:
On Mon, Jun 27, 2005 at 11:30:45PM +, Karl O. Pinc wrote:
>
> Short of that I think I'm going to be reduced to
> writing a C function that returns the real
> system time so I can spatter my c
On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:
On Mon, Jun 27, 2005 at 11:30:45PM +, Karl O. Pinc wrote:
>
> Short of that I think I'm going to be reduced to
> writing a C function that returns the real
> system time so I can spatter my code with
> RAISE statements that indicate actual execu
On Mon, Jun 27, 2005 at 11:30:45PM +, Karl O. Pinc wrote:
>
> Short of that I think I'm going to be reduced to
> writing a C function that returns the real
> system time so I can spatter my code with
> RAISE statements that indicate actual execution
> time.
See timeofday().
http://www.postgr
Hi,
I'm having a hard time finding the poorly performing
statements in my plpgsql procedures, many of which
are triggers. Am I missing something?
I can get the query plans by starting up a new
connection and doing:
SET DEBUG_PRINT_PLAN TO TRUE;
SET CLIENT_MIN_MESSAGES TO DEBUG1;
And then runnin
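The timeofday() suggestion elsewhere in this thread can be turned into a crude statement-level timer; a sketch (function and table names are hypothetical), noting that timeofday() returns the wall-clock time as text and keeps advancing inside a transaction:
CREATE OR REPLACE FUNCTION timing_demo() RETURNS void AS $$
BEGIN
    RAISE NOTICE 'before query: %', timeofday();
    PERFORM count(*) FROM some_big_table;  -- stand-in for the slow statement
    RAISE NOTICE 'after query: %', timeofday();
END;
$$ LANGUAGE plpgsql;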
Hi,
At 01:16 28/06/2005, Karl O. Pinc wrote:
http://www.postgresql.org/docs/8.0/static/indexes-examine.html
Says:
"If you do not succeed in adjusting the costs to be more
appropriate, then you may have to resort to forcing index
usage explicitly."
Is there a way to force a query to use a part
Karl,
> Seems to me that when there's a constant value in the query
> and an = comparison it will always be faster to use the (b-tree)
> index that's ordered first by the constant value, as then all further
> blocks are guaranteed to have a higher relevant information
> density. At least when co
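For illustration, the kind of index this argument calls for, using the names from the census example in this same thread (a sketch, not a quote from the thread):
-- Equality column first, range column second.
CREATE INDEX census_sname_date ON census (sname, date);
EXPLAIN ANALYZE
SELECT 1 FROM census
WHERE sname = 'RAD' AND date < '1975-9-21' AND status != 'A'
LIMIT 1;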
http://www.postgresql.org/docs/8.0/static/indexes-examine.html
Says:
"If you do not succeed in adjusting the costs to be more
appropriate, then you may have to resort to forcing index
usage explicitly."
Is there a way to force a query to use a particular index?
If not, what does this sentence m
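What that sentence refers to (an aside, not an answer taken from the thread): there is no way to name a specific index in a query, but the planner's methods can be discouraged per session, e.g. by disabling sequential scans so an index scan becomes the cheapest remaining plan. A sketch reusing the query from this thread (quoted just below):
-- Session-local; this rules out seq scans, it does not pick between competing indexes.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT 1 FROM census
WHERE date < '1975-9-21' AND sname = 'RAD' AND status != 'A' LIMIT 1;
SET enable_seqscan = on;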
Postgresql 8.0.3
Hi,
I have a query
select 1
from census
where date < '1975-9-21' and sname = 'RAD' and status != 'A'
limit 1;
Explain analyze says it always uses the index made by:
CREATE INDEX census_date_sname ON census (date, sname);
this is even after I made the index:
CREATE IN
On Jun 27, 2005, at 4:46 PM, Jean-Max Reymond wrote:
Hi,
I have a stored procedure written in perl and I doubt that perl's
garbage collector is working :-(
after a lot of work, postmaster has a size of 1100 Mb and I think
> that the keyword "undef" has no effect.
Before tuning my procedure, doe
Hi,
I have a stored procedure written in perl and I doubt that perl's
garbage collector is working :-(
after a lot of work, postmaster has a size of 1100 Mb and I think
that the keyword "undef" has no effect.
Before tuning my procedure, is there a known issue or a workaround?
--
Jean-Max R
Dennis Bjorklund wrote:
> On Thu, 23 Jun 2005, Bricklen Anderson wrote:
>
>
>>iii. UNIQUE constraint on table "t1". This didn't seem to perform too
>>badly with fewer rows (preliminary tests), but as you'd expect, on error
>>the whole transaction would roll back. Is it possible to skip a row if
>
[moved to pgsql-performance]
> > Currently I want to take a TPC-H test on postgresql-8.0.2. I have
> > downloaded the DBGEN and QGEN from the homepage of TPC. But I encountered
> > many problems which forced me to request some help. 1. How to load the data
> > from flat file generated by db
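On the loading question, the usual route (a sketch; the path is a placeholder and the table DDL must already exist) is COPY from the .tbl files that dbgen writes. They are pipe-delimited with a trailing '|', which generally has to be stripped first:
-- e.g. strip the trailing delimiter beforehand:  sed 's/|$//' lineitem.tbl > lineitem.dat
COPY lineitem FROM '/path/to/lineitem.dat' WITH DELIMITER '|';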
I would take a peek at the psqlodbc-8.0 drivers.
I wouldn't battle with other versions you might find, such as the unixODBC
ones.
-elz
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Merlin Moncure
> Sent: 27 June 2005 10:29
> To: grupos
> Cc: pgsql-
> Hi !
>
> My company is evaluating making our system (developed in
> C++) compatible with PostgreSQL.
>
> Our programmer made a lot of tests and he informed me that the
> performance using ODBC is very similar to using libpq, even with a big
> number of simultaneous connections/queries. Of cour
Hi !
My company is evaluating making our system (developed in
C++) compatible with PostgreSQL.
Our programmer made a lot of tests and he informed me that the
performance using ODBC is very similar to using libpq, even with a big
number of simultaneous connections/queries. Of course that for
> There are some immediate questions from our engineers about performance
>
> "- Oracle has one particular performance enhancement that Postgres is
> missing. If you do a select that returns 100,000 rows in a given order,
> and all you want are rows 99101 to 99200, then Oracle can do that very
>
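The Postgres counterpart of the Oracle behaviour described above is LIMIT/OFFSET, with the caveat that the rows before the offset still have to be produced and skipped. A sketch with hypothetical names:
-- Rows 99101..99200 of the ordered result.
SELECT *
FROM big_table
ORDER BY some_column
LIMIT 100 OFFSET 99100;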
Hi,
At 13:50 27/06/2005, Praveen Raja wrote:
Just to clear things up a bit, the scenario that I'm interested in is a
table with a large number of indexes on it (maybe 7-8).
If you're after performance you'll want to carefully consider which indexes
are really useful and/or redesign your schem
Just to clear things up a bit, the scenario that I'm interested in is a
table with a large number of indexes on it (maybe 7-8). In this scenario
other than the overhead of having to maintain the indexes (which I'm
guessing is the same regardless of the size of the table), does the size
of the table
Hi,
At 13:24 27/06/2005, Praveen Raja wrote:
I'm wondering if and how the size of a table affects speed of inserts
into it? What if the table has indexes, does that alter the answer?
Many parameters will affect the result:
- whether there are any indexes (including the primary key, unique
con
Praveen Raja:
I think the size of a table doesn't affect the speed of inserts
into it, because PostgreSQL just does something like an "append" on the data files.
But the indexes do slow down the inserts, because PostgreSQL has to
maintain the indexes when doing inserts.
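One common way to keep index maintenance out of the way during large bulk inserts (a sketch only; table, index and file names are hypothetical, and whether it pays off depends on the batch size):
-- Drop secondary indexes, load, then rebuild and re-analyze.
DROP INDEX items_col1_idx;
COPY items FROM '/path/to/items.dat' WITH DELIMITER ',';
CREATE INDEX items_col1_idx ON items (col1);
ANALYZE items;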
Hi all
I'm wondering if and how the size of a table affects speed of inserts
into it? What if the table has indexes, does that alter the answer?
Thanks