> Now of course this isn't Nirvana, you must pay somewhere ;-) and our
> weak spot is the need for VACUUM. But you have no need to fear large
> individual transactions.
No need to fear long running transactions other than their ability to
stop VACUUM from doing what it's supposed to be doing, thu
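For illustration, a rough sketch of the effect (the table name is made up);
in the 7.x/8.0 releases discussed here an open transaction holds back VACUUM
even while it just sits idle:

    -- session 1: open a transaction and leave it idle
    BEGIN;
    SELECT count(*) FROM some_table;

    -- session 2: delete rows, then try to reclaim the space
    DELETE FROM some_table WHERE id < 1000;
    VACUUM VERBOSE some_table;
    -- the deleted rows stay behind as dead tuples until session 1 commits
    -- or rolls back, because VACUUM must assume session 1 can still see them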
On Thu, Jun 16, 2005 at 07:15:08PM -0700, Todd Landfried wrote:
> Thanks for the link. I'll look into those.
>
> I'm going only on what my engineers are telling me, but they say
> upgrading breaks a lot of source code with some SQL commands that are
> a pain to hunt down and kill. Not sure if
Veikko Mäkinen <[EMAIL PROTECTED]> writes:
> How does Postgres (8.0.x) buffer changes to a database within a
> transaction? I need to insert/update more than a thousand rows (maybe
> even more than 1 rows, ~100 bytes/row) in a table but the changes
> must not be visible to
ken shaw <[EMAIL PROTECTED]> writes:
> It looks to me as if the only way to determine whether to issue a
> VACUUM (on a non-clustered table) or a CLUSTER (on a clustered table)
> is to query the table "pg_index", much like view "pg_indexes" does,
> for the column "indisclustered". Is this right?
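A query along those lines might look like this (the table name is only an
example):

    SELECT c.relname  AS table_name,
           ci.relname AS index_name,
           i.indisclustered
      FROM pg_index i
      JOIN pg_class c  ON c.oid  = i.indrelid
      JOIN pg_class ci ON ci.oid = i.indexrelid
     WHERE c.relname = 'my_table';

    -- indisclustered = true means the table was last clustered on that index,
    -- so CLUSTER applies; otherwise plain VACUUM is the usual choice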
Thanks for the link. I'll look into those.
I'm going only on what my engineers are telling me, but they say
upgrading breaks a lot of source code with some SQL commands that are
a pain to hunt down and kill. Not sure if that's true, but that's
what I'm told.
Todd
On Jun 16, 2005, at 10:0
Veikko,
> One way of doing this that I thought of was start a
> transaction, delete everything and then just dump new data in (copy
> perhaps). The old data would be usable to other transactions until I
> commit my insert. This would be the fastest way, but how much memory
> would this use?
Start
I have 6 Windows PC
in a test environment accessing a very small Postgres DB on a 2003 Server.
The PCs access the database with a COBOL app via ODBC. 3 of the PCs
operate very efficiently and quickly. 3 of them do not. The 3 that
do not are all new Dell XP Pro with SP2. They all produ
transaction, delete everything and then just dump new data in (copy
perhaps). The old data would be usable to other transactions until I
commit my insert. This would be the fastest way, but how much memory
would this use? Will this cause performance issues on a heavily loaded
server with too lit
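The pattern under discussion, as a sketch (table and file names are made up):

    BEGIN;
    DELETE FROM items;            -- old rows stay visible to other sessions
    COPY items FROM '/path/to/new_data.csv' WITH CSV;
    COMMIT;                       -- the new contents become visible atomically
    VACUUM ANALYZE items;         -- afterwards, reclaim the deleted rows

The changed rows are not buffered in memory for the whole transaction; they
are written out through the shared buffers and WAL as they are inserted, so
the size of the transaction is not limited by available RAM.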
Veikko Mäkinen wrote:
Hey,
How does Postgres (8.0.x) buffer changes to a database within a
transaction? I need to insert/update more than a thousand rows (maybe
even more than 1 rows, ~100 bytes/row) in a table but the changes
must not be visible to other users/transactions before every
Hey,
How does Postgres (8.0.x) buffer changes to a database within a
transaction? I need to insert/update more than a thousand rows (maybe
even more than 1 rows, ~100 bytes/row) in a table but the changes
must not be visible to other users/transactions before every row is
updated. One way
Hi All,
I have an app that updates a PostgreSQL db in a batch fashion. After each batch (or several batches), it issues VACUUM and ANALYZE calls on the updated tables. Now I want to cluster some tables for better performance. I understand that doing a VACUUM and a CLUSTER on a table is wasteful as
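If it helps, a rough sketch of the maintenance step (table and index names
are made up). CLUSTER rewrites the table, so a separate VACUUM right after
it is largely redundant, but ANALYZE is still needed because CLUSTER does
not refresh the planner statistics:

    -- table that has been clustered on an index
    CLUSTER my_index ON my_table;
    ANALYZE my_table;

    -- table that is not clustered
    VACUUM ANALYZE my_table;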
On Thu, Jun 16, 2005 at 07:46:45 -0700,
Todd Landfried <[EMAIL PROTECTED]> wrote:
> Yes, it is 7.2. Why? because an older version of our software runs on
> RH7.3 and that was the latest supported release of PostgreSQL for
> RH7.3 (that we can find). We're currently ported to 8, but we still
We run the RPMs for RH 7.3 on our 7.2 install base with no problems.
RPMs as recent as PostgreSQL 7.4.2 are available here:
ftp://ftp10.us.postgresql.org/pub/postgresql/binary/v7.4.2/redhat/redhat-7.3/
Or you can always compile from source. There isn't any such thing as a
'supported' packag
Yes, it is 7.2. Why? because an older version of our software runs on
RH7.3 and that was the latest supported release of PostgreSQL for
RH7.3 (that we can find). We're currently ported to 8, but we still
have a large installed base with the other version.
On Jun 15, 2005, at 7:18 AM, Tom L