I am going to migrate my production DB from PostgreSQL 8.1 to 9.0.1.
Could anyone please tell me what important things I have to look out for in
this migration?
Thank you all.
On 8 November 2010 10:08, AI Rumman rumman...@gmail.com wrote:
I am going to migrate my production DB from PostgreSQL 8.1 to 9.0.1.
Could anyone please tell me what important things I have to look out for in
this migration?
Thank you all.
Implicit casting might bite you since that was removed in
2010/11/8 AI Rumman rumman...@gmail.com:
I am going to migrate my production DB from PostgreSQL 8.1 to 9.0.1.
Could anyone please tell me what important things I have to look out for in
this migration?
Thank you all.
You MUST read the Release Notes for each major version in between to see
what changed and
Andreas maps...@gmx.net writes:
I can find the problematic rows.
How could I delete every char in a string that can't be converted to
WIN1252?
http://tapoueh.org/articles/blog/_Getting_out_of_SQL_ASCII,_part_1.html
http://tapoueh.org/articles/blog/_Getting_out_of_SQL_ASCII,_part_2.html
On Mon, Nov 8, 2010 at 5:23 AM, Thom Brown t...@linux.com wrote:
Implicit casting might bite you since that was removed in 8.3.
Also if you use bytea fields to store binary data, the encoding format
on return of the data is different. Make sure your client library
handles that for you (or
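A concrete instance of the 8.3 casting change (a minimal sketch; the values are just illustrative):

```sql
-- Pre-8.3 this worked through an implicit integer -> text cast;
-- on 8.3 and later it raises "function length(integer) does not exist":
SELECT length(1234);

-- The fix is an explicit cast:
SELECT length(1234::text);  -- returns 4
```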
Hello,
I'm having this table filled with data:
\d pref_users;
Table "public.pref_users"
   Column   |         Type          | Modifiers
------------+-----------------------+-----------
 id         | character varying(32) | not null
 first_name | character
2010/11/8 Vick Khera vi...@khera.org:
On Mon, Nov 8, 2010 at 5:23 AM, Thom Brown t...@linux.com wrote:
Implicit casting might bite you since that was removed in 8.3.
Also if you use bytea fields to store binary data, the encoding format
on return of the data is different. Make sure your
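For reference, 9.0's new default bytea format can be switched back per session or server-wide if an old client library can't parse hex (a sketch, not specific to the OP's setup):

```sql
SHOW bytea_output;              -- 'hex' is the 9.0 default
SELECT 'abc'::bytea;            -- \x616263

-- Revert to the pre-9.0 escape format for clients that need it:
SET bytea_output = 'escape';
SELECT 'abc'::bytea;            -- abc
```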
Alexander Farber, 08.11.2010 15:50:
And then I realized that I actually want
medals smallint default 0 check (medals >= 0)
So I've dropped the old constraint with
alter table pref_users drop constraint pref_users_medals_check;
but how can I add the new constraint please? I'm trying:
On 08/11/2010 14:50, Alexander Farber wrote:
alter table pref_users add constraint pref_users_medals_check (medals >= 0);
ERROR: syntax error at or near "("
LINE 1: ...pref_users add constraint pref_users_medals_check (medals >=...
^
and
Thank you,
alter table pref_users add constraint pref_users_medals_check check
(medals >= 0);
has worked!
I do not use pgAdmin, because I see in the logs of my 2 web servers
that attackers look for it all the time. But I'll install it on my
development VM at home now.
Regards
Alex
On 08/11/2010 16:18, Alexander Farber wrote:
Thank you,
alter table pref_users add constraint pref_users_medals_check check
(medals >= 0);
has worked!
I do not use pgAdmin, because I see in the logs of my 2 web servers
that attackers look for it all the time. But I'll install it at
Oh right, I meant phpPgAdmin
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi all,
I've collected some interesting results during my experiments, but I couldn't
figure out the reason behind them and need your assistance.
I'm running PostgreSQL 9.0 on a quad-core machine with a two-level on-chip
cache hierarchy. PostgreSQL has a large and warmed-up buffer
cache
On Mon, Nov 8, 2010 at 8:33 AM, umut orhan umut_angelf...@yahoo.com wrote:
Hi all,
I've collected some interesting results during my experiments, but I
couldn't figure out the reason behind them and need your assistance.
I'm running PostgreSQL 9.0 on a quad-core machine with a two-level
On 11/08/10 7:33 AM, umut orhan wrote:
Hi all,
I've collected some interesting results during my experiments, but I
couldn't figure out the reason behind them and need your assistance.
I'm running PostgreSQL 9.0 on a quad-core machine with a two-level
on-chip cache hierarchy. PostgreSQL
On 8 Nov 2010, at 16:18, Alexander Farber wrote:
Thank you,
alter table pref_users add constraint pref_users_medals_check check
(medals >= 0);
has worked!
To clarify a bit on this; if you add a constraint, you specify its name and
what type of constraint it is, before specifying the
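Spelled out, the working statement breaks down as:

```sql
ALTER TABLE pref_users
  ADD CONSTRAINT pref_users_medals_check  -- the constraint's name...
  CHECK (medals >= 0);                    -- ...then its type and expression
```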
Alban Hertroys dal...@solfertje.student.utwente.nl writes:
On 8 Nov 2010, at 16:18, Alexander Farber wrote:
alter table pref_users add constraint pref_users_medals_check check
(medals >= 0);
has worked!
To clarify a bit on this; if you add a constraint, you specify its name and
what type
Hi,
we have several instances of the following error in the server log:
2010-11-08 18:44:18 CET 5177 1 @ ERROR: out of memory
2010-11-08 18:44:18 CET 5177 2 @ DETAIL: Failed on request of size 16384.
It's always the first log message from the backend. We're trying to
trace it down. Whether it's
Hi All -
Can you please share your thoughts and help me?
1. I have 4 tables (T1, T2, T3, T4) where I have the data from
a transactional system
2. I have created one more table D1 to denormalize the data from
the 4 tables (T1, T2, T3, T4)
3. I
On Mon, Nov 08, 2010 at 07:19:43PM +0100, Jakub Ouhrabka wrote:
Hi,
we have several instances of the following error in the server log:
2010-11-08 18:44:18 CET 5177 1 @ ERROR: out of memory
2010-11-08 18:44:18 CET 5177 2 @ DETAIL: Failed on request of size 16384.
It's always the first log
On Mon, Nov 08, 2010 at 01:45:49PM -0500, akp geek wrote:
Hi All -
Can you please share your thoughts and help me?
1. I have 4 tables (T1, T2, T3, T4) where I have the data from
a transactional system
2. I have created one more table D1 to denormalize
Hi all,
I'm doing some testing of Postgres 9.0 archiving and streaming replication
between a couple of Solaris 10 servers. Recently I was trying to test how
well the standby server catches up after an outage, and a question arose.
It seems that if the standby is uncontactable by the primary
is it 32bit or 64bit machine?
64bit
what's the work_mem?
64MB
Kuba
On 8.11.2010 19:52, hubert depesz lubaczewski wrote:
On Mon, Nov 08, 2010 at 07:19:43PM +0100, Jakub Ouhrabka wrote:
Hi,
we have several instances of the following error in the server log:
2010-11-08 18:44:18 CET 5177 1 @
Replying to my own mail. Maybe we've found the root cause:
In one database there was a table with 200k records where each record
contained a 15kB bytea field. Auto-ANALYZE was running on that table
continuously (with statistics target 500). When we avoid the
auto-ANALYZE via UPDATE table set
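One way to tame that (a sketch with hypothetical names; the thread doesn't say exactly which knob was used) is to drop the per-column statistics target for the wide bytea column, so ANALYZE keeps a much smaller sample:

```sql
-- Hypothetical table/column names:
ALTER TABLE wide_table ALTER COLUMN payload SET STATISTICS 10;
ANALYZE wide_table;  -- now samples far less data for that column
```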
On Mon, Nov 08, 2010 at 08:04:32PM +0100, Jakub Ouhrabka wrote:
is it 32bit or 64bit machine?
64bit
what's the work_mem?
64MB
that's *way* too much with 24GB of ram and 1k connections. please
lower it to 32MB or even less.
Best regards,
depesz
--
Linkedin:
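work_mem can also be lowered without a restart, per session or per role (the role name below is hypothetical):

```sql
SET work_mem = '32MB';                        -- this session only
ALTER ROLE batch_user SET work_mem = '16MB';  -- hypothetical role
```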
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a reindex, and
then dump a nightly backup.
Is this optimal with regards to performance? autovacuum is set to
what's the work_mem?
64MB
that's *way* too much with 24GB of ram and 1k connections. please
lower it to 32MB or even less.
Thanks for your reply. You are generally right. But in our case most of
the backends are only waiting for NOTIFY, so they're not consuming any work_mem.
The server is not
Date: Mon, 8 Nov 2010 20:05:23 +0100
From: k...@comgate.cz
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ERROR: Out of memory - when connecting to database
Replying to my own mail. Maybe we've found the root cause:
In one database there was a table with 200k records where
I have listed the functions, triggers, tables and views for your reference.
Thanks for helping me out
Regards
CREATE OR REPLACE FUNCTION fnc_loadDenormdata()
RETURNS trigger AS
$BODY$
DECLARE
v_transactionid numeric;
v_startdate text;
v_enddate text;
v_statuscode character varying(10);
2010/11/8 Jakub Ouhrabka k...@comgate.cz:
Replying to my own mail. Maybe we've found the root cause:
In one database there was a table with 200k records where each record
contained a 15kB bytea field. Auto-ANALYZE was running on that table
continuously (with statistics target 500). When we
On Monday 8. November 2010 20.06.13 Jason Long wrote:
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a reindex, and
then dump a nightly backup.
Is
I get
DETAIL: Process 24749 waits for ShareLock on transaction 113443492;
blocked by process 25199. Process 25199 waits for ShareLock on
transaction 113442820; blocked by process 24749.
I would like to know both statements that caused the ShareLock
problem.
This is a long-running transaction.
Jakub Ouhrabka k...@comgate.cz writes:
Could it be that the failed connections were issued by autovacuum?
They clearly were: notice the reference to Autovacuum context in the
memory map. I think you are right to suspect that auto-analyze was
getting blown out by the wide bytea columns. Did you
They clearly were: notice the reference to Autovacuum context in the
memory map. I think you are right to suspect that auto-analyze was
getting blown out by the wide bytea columns. Did you have any
expression indexes involving those columns?
Yes, there are two unique btree indexes:
(col1,
We've got over 250GB of files in a pgsql_tmp directory, some with modification
timestamps going back to August 2010 when the server was last restarted.
select pg_postmaster_start_time();
    pg_postmaster_start_time
--------------------------------
 2010-08-08 22:53:31.999804-04
(1 row)
I'm not
Ivan Sergio Borgonovo m...@webthatworks.it writes:
I get
DETAIL: Process 24749 waits for ShareLock on transaction 113443492;
blocked by process 25199. Process 25199 waits for ShareLock on
transaction 113442820; blocked by process 24749.
I would like to know both statements that caused the
Jakub Ouhrabka k...@comgate.cz writes:
They clearly were: notice the reference to Autovacuum context in the
memory map. I think you are right to suspect that auto-analyze was
getting blown out by the wide bytea columns. Did you have any
expression indexes involving those columns?
Yes,
Replying to my own mail. Maybe we've found the root cause:
In one database there was a table with 200k records where each record
contained a 15kB bytea field. Auto-ANALYZE was running on that table
continuously (with statistics target 500). When we avoid the
auto-ANALYZE via UPDATE table set
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a reindex, and
then dump a nightly backup.
Is this optimal with regards to performance? autovacuum is set to
Greetings all,
I am trying to optimize SELECT queries on a large table (10M rows and
more) by using temporary tables that are subsets of my main table, thus
narrowing the search space to a more manageable size.
Is it possible to transfer indices (or at least use the information from
existing
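Indexes are not inherited by a temporary table built with CREATE TABLE AS; they have to be re-created by hand. A sketch with hypothetical names:

```sql
CREATE TEMP TABLE recent_rows AS
  SELECT * FROM big_table WHERE created_at > now() - interval '7 days';

CREATE INDEX recent_rows_cust_idx ON recent_rows (customer_id);
ANALYZE recent_rows;  -- give the planner statistics for the subset
```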
Michael Glaesemann michael.glaesem...@myyearbook.com writes:
We've got over 250GB of files in a pgsql_tmp directory, some with
modification timestamps going back to August 2010 when the server was last
restarted.
That's very peculiar. Do you keep query logs? It would be useful to
try to
On Mon, 08 Nov 2010 15:45:12 -0500
Tom Lane t...@sss.pgh.pa.us wrote:
Ivan Sergio Borgonovo m...@webthatworks.it writes:
I get
DETAIL: Process 24749 waits for ShareLock on transaction
113443492; blocked by process 25199. Process 25199 waits for
ShareLock on transaction 113442820;
On Mon, Nov 8, 2010 at 2:18 PM, Ivan Sergio Borgonovo
m...@webthatworks.it wrote:
On Mon, 08 Nov 2010 15:45:12 -0500
Tom Lane t...@sss.pgh.pa.us wrote:
Ivan Sergio Borgonovo m...@webthatworks.it writes:
I get
DETAIL: Process 24749 waits for ShareLock on transaction
113443492; blocked by
On Mon, Nov 8, 2010 at 12:15 PM, Matthieu Huin matthieu.h...@wallix.com wrote:
Greetings all,
I am trying to optimize SELECT queries on a large table (10M rows and more)
by using temporary tables that are subsets of my main table, thus narrowing
the search space to a more manageable size.
Is
On 11/08/10 10:50 AM, Jason Long wrote:
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a reindex, and
then dump a nightly backup.
Is this optimal with
On Mon, 8 Nov 2010 14:22:16 -0700
Scott Marlowe scott.marl...@gmail.com wrote:
Don't know how much it helps here, but this page:
http://wiki.postgresql.org/wiki/Lock_Monitoring
is priceless when you're having issues midday with a lock that
won't go away.
I was thinking of reinventing the wheel
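In the same spirit as the wiki page, a blocked/blocking query using the pre-9.2 column names (pg_stat_activity.procpid and current_query were later renamed):

```sql
SELECT blocked.pid       AS blocked_pid,
       blocking.pid      AS blocking_pid,
       act.current_query AS blocking_query
FROM pg_locks blocked
JOIN pg_locks blocking
  ON  blocking.transactionid = blocked.transactionid
  AND blocking.granted
  AND NOT blocked.granted
JOIN pg_stat_activity act ON act.procpid = blocking.pid;
```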
On Mon, Nov 8, 2010 at 11:50 AM, Jason Long ja...@octgsoftware.com wrote:
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a reindex,
One question: why?
On Mon, 2010-11-08 at 13:28 -0800, John R Pierce wrote:
On 11/08/10 10:50 AM, Jason Long wrote:
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a
On Mon, 2010-11-08 at 13:28 -0800, John R Pierce wrote:
On 11/08/10 10:50 AM, Jason Long wrote:
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a
I'm interested in playing with some of the features in the Alpha 2.
Is there an ETA for the release of the one-click installer?
--
Regards,
Richard Broersma Jr.
Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug
On Nov 8, 2010, at 16:03 , Tom Lane wrote:
Michael Glaesemann michael.glaesem...@myyearbook.com writes:
We've got over 250GB of files in a pgsql_tmp directory, some with
modification timestamps going back to August 2010 when the server was last
restarted.
That's very peculiar. Do you
First and foremost, I would highly recommend that you use the Sun
compiler to build it.
...
Try:
CC=/your/path/to/suncc CFLAGS=-I/your/non-standard/include
-L/your/non-standard/lib -R/your/non-standard/lib ... \
./configure ...
This did the trick! Thank you *very* much.
(Sorry for
On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
On Mon, Nov 8, 2010 at 11:50 AM, Jason Long ja...@octgsoftware.com wrote:
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no
On Mon, Nov 8, 2010 at 3:42 PM, Jason Long ja...@octgsoftware.com wrote:
On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
On Mon, Nov 8, 2010 at 11:50 AM, Jason Long ja...@octgsoftware.com wrote:
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small,
On Mon, 2010-11-08 at 16:23 -0700, Scott Marlowe wrote:
On Mon, Nov 8, 2010 at 3:42 PM, Jason Long ja...@octgsoftware.com wrote:
On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
On Mon, Nov 8, 2010 at 11:50 AM, Jason Long ja...@octgsoftware.com wrote:
I currently have Postgres 9.0
On Mon, Nov 8, 2010 at 4:41 PM, Jason Long
mailing.li...@octgsoftware.com wrote:
On Mon, 2010-11-08 at 16:23 -0700, Scott Marlowe wrote:
On Mon, Nov 8, 2010 at 3:42 PM, Jason Long ja...@octgsoftware.com wrote:
On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
On Mon, Nov 8, 2010 at
Hi,
Thanks for the reply.
I have written the transaction, but I have some doubts... If in this example
the UPDATE executes successfully and the function does not, what
happens? Does the UPDATE automatically roll back?
Example:
[code]
Begin;
update aae_anuncios
On Mon, Nov 8, 2010 at 5:39 PM, Andre Lopes lopes80an...@gmail.com wrote:
Hi,
Thanks for the reply.
I have written the transaction, but I have some doubts... If in this example
the UPDATE executes successfully and the function does not, what
happens? Does the UPDATE automatically roll back?
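A minimal sketch of the answer (hypothetical column and function names): everything between BEGIN and COMMIT is atomic, so an error in the function aborts the whole transaction and the UPDATE is undone:

```sql
BEGIN;
UPDATE aae_anuncios SET estado = 'activo' WHERE id = 1;  -- hypothetical columns
SELECT process_anuncio(1);  -- hypothetical function; if this raises an error...
COMMIT;  -- ...the server reports ROLLBACK here and the UPDATE is undone
```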
Michael Glaesemann michael.glaesem...@myyearbook.com writes:
On Nov 8, 2010, at 16:03 , Tom Lane wrote:
That's very peculiar. Do you keep query logs? It would be useful to
try to correlate the temp files' PIDs and timestamps with the specific
queries that must have created them.
We don't
Hi,
I'm currently in the process of moving the data from the Windows server to
the new Linux box but facing some problems with the encoding.
Additional configuration information: Windows is running PG 8.3 and the new
Linux box is PG 8.4.
Windows dump command:
pg_dump -U postgres -Fc -v -f
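A hedged sketch of the full round trip (the database name and file names are assumptions; the original command is truncated above). Creating the target database with an explicit encoding on the 8.4 side usually avoids the mismatch:

```shell
pg_dump -U postgres -Fc -v -f mydb.dump mydb   # on the 8.3/Windows side
createdb -U postgres -E UTF8 mydb              # on the 8.4/Linux side
pg_restore -U postgres -d mydb -v mydb.dump
```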
Excerpts from Tom Lane's message of Mon Nov 08 22:29:28 -0300 2010:
Hmm. If you look at FileClose() in fd.c, you'll discover that that
temporary file log message is emitted immediately before unlink'ing
the file. It looks pretty safe ... but, scratching around, I notice
that there's a
On Mon, Nov 8, 2010 at 3:06 PM, Richard Broersma
richard.broer...@gmail.com wrote:
I'm interested in playing with some of the features in the Alpha 2.
Is there an ETA for the release of the one-click installer?
I'm not sure if I'll get time to build them this time. Unfortunately
the release of
Alvaro Herrera alvhe...@commandprompt.com writes:
Excerpts from Tom Lane's message of Mon Nov 08 22:29:28 -0300 2010:
I think we need to re-order the operations there to ensure that the
unlink will still happen if the ereport gets interrupted.
Would it work to put the removal inside a
How is COLLEEN not there and there at the same time?
-
NOTICE: did not = 11K = 42
CONTEXT: PL/pgSQL function get_word line 37 at perform
NOTICE: value = COLLEEN
CONTEXT: PL/pgSQL function get_word
Hi,
Disclaimer: I'm not a DBA, nor a DB guy, so please excuse any ignorance
in the below.
*1. Background*
We have a MS Access 2003 database that we are using to manage registration
and workshop/accommodation allocation for a conference. The database isn't
particularly complicated (around
There was an interesting post today on highscalability -
http://highscalability.com/blog/2010/11/4/facebook-at-13-million-queries-per-second-recommends-minimiz.html
The discussion/comments touched upon why MySQL is a better idea for Facebook
than Postgres. Here's an interesting one
One is that
On Tue, Nov 9, 2010 at 3:22 PM, Victor Hooi victorh...@yahoo.com wrote:
*4. MS Access to Postgres*
Hmm have you tried Kettle (Pentaho) http://kettle.pentaho.com/
Any particularly good books here that you'd recommend?
http://www.2ndquadrant.com/books/
--
Shoaib Mir
On Mon, Nov 8, 2010 at 6:21 PM, Dave Page dp...@pgadmin.org wrote:
To make matters
worse, the pgAdmin build has changed somewhat on Windows and requires
some effort to update the installers to work again.
Totally understandable. I thank you for all of your effort with the
one-click
On Mon, Nov 8, 2010 at 8:24 PM, Sandeep Srinivasa s...@clearsenses.com wrote:
I wonder if anyone can comment on this - especially the part that PG doesn't
scale as well as MySQL on multiple cores?
Sorry Sandeep, there may be some that love to re-re-re-hash these
subjects. I myself am
On Tue, Nov 9, 2010 at 10:31 AM, Richard Broersma
richard.broer...@gmail.com wrote:
The following link contains hundreds of comments that you may be
interested in, some that address issues that are much more interesting
and well established:
On Mon, Nov 8, 2010 at 10:47 PM, Sandeep Srinivasa s...@clearsenses.com wrote:
I did actually try to search for topics on multiple cores vs MySQL, but I
wasn't able to find anything of much use. Elsewhere (on Hacker News, for
example), I have indeed come across statements that PG scales better
Hi Victor
On 9/11/2010 5:22, Victor Hooi wrote:
Has anybody had any experience doing a similar port (Access 2007 to
Postgres) recently? What tools did you use, were there any gotchas you hit,
etc.? Or just any general advice at all here?
We recently migrated from MSAccess 2000 to
On 9 Nov 2010, at 5:11, Ralph Smith wrote:
How is COLLEEN not there and there at the same time?
Not really sure what your point is (don't have time to look closely), but...
IF LENGTH(Word) > 0 THEN
Word2 := substring(Word from 1 for 50);
-- PERFORM SELECT COUNT(*) FROM