On 25.01.2007 05:57, bala wrote:
But if I run the script in a console, it creates the file with
content.
Define $PATH or use /usr/local/bin/pg_dumpall or wherever it is.
--
Regards,
Hannes Dorbath
---(end of broadcast)---
TIP 9: In versions
Hi,
When I try to install PostgreSQL 8.2 I get this error message:
Incompatible version of OpenSSL detected in system path. When you remove
libeay32.dll and ssleay32.dll from your
system path, Postgres will install a newer version...
I've looked in my PATH but I can't seem to find it, can anyone
Check your system, system32 and winnt directories.
/Magnus
--- Original message ---
From: Steven De Vriendt [EMAIL PROTECTED]
Sent: 1-25-'07, 11:37
Hi,
When I try to install PostgreSQL 8.2 I get this error message:
Incompatible version of OpenSSL detected in system path. When
solved :-)
thx
On 1/25/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Check your system, system32 and winnt directories.
/Magnus
--- Original message ---
From: Steven De Vriendt [EMAIL PROTECTED]
Sent: 1-25-'07, 11:37
Hi,
When I try to install PostgreSQL 8.2 I get this error
Hi,
I was just wondering: if a 32-bit client connected to a 64-bit server,
would it be possible for the 64-bit server to return an OID that was over
4 billion to the 32-bit
client, and possibly cause a range error if the OID value was used in an
unsigned 32-bit integer var?
Thanks,
--
Tony
Tony Caduto wrote:
I was just wondering: if a 32-bit client connected to a 64-bit server,
would it be possible for the 64-bit server to return an OID that was
over 4 billion to the 32-bit
client, and possibly cause a range error if the OID value was used in
an unsigned 32-bit integer var?
OIDs are
Hello,
We tried upgrading a 7.4 database to 8.2 and found many issues with the
triggers. What are the main changes in the PL/pgSQL syntax or constraint
checking between these two versions?
Thanks,
On Thursday 25 January 2007 10:02 am, Louis-David Mitterrand
[EMAIL PROTECTED] thus communicated:
Hello,
We tried upgrading a 7.4 database to 8.2 and found many issues with the
triggers. What are the main changes in the PL/pgSQL syntax or constraint
checking between these two versions?
[Update: the post didn't make it to the list, probably due to the attachment, so
I resend it inlined... and I was not able to trigger the same behavior on 8.2,
so it might have been already fixed.]
[snip]
Well, if you can show a reproducible test case, I'd like to look at it.
OK, I have a test
Hello Folks
Have a look at this Table:
CREATE TABLE foo(
id serial,
a_name text,
CONSTRAINT un_name UNIQUE (a_name));
Obviously, inserting a string twice results in an error (as one would
expect). But: is there any known possibility to ignore an erroneous
INSERT, like SQLite's conflict
Hi,
I'm trying to create a table with 20,000 columns of type int2, but I
keep getting the error message that the limit is 1600. According to
this message http://archives.postgresql.org/pgsql-admin/2001-01/msg00199.php
it can be increased, but only up to about 6400. Can anyone tell me
how to get
On Tue, Jan 23, 2007 at 07:47:26AM -0800, Subramaniam Aiylam wrote:
Hello all,
I have a setup in which four client machines access
a Postgres database (8.1.1) running on a Linux box.
So, there are connections from each machine to the
database; hence, the Linux box has about 2 postgres
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 01/25/07 09:30, Inoqulath wrote:
Hello Folks
Have a look at this Table:
CREATE TABLE foo(
id serial,
a_name text,
CONSTRAINT un_name UNIQUE (a_name));
Obviously, inserting a string twice results in an error (as one would
expect). But:
Hi,
when I fire the following query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089);
I will get the same results in the same order as in the next query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in
On 01/25/07 09:34, Isaac Ben wrote:
Hi,
I'm trying to create a table with 20,000 columns of type int2, but I
keep getting the error message that the limit is 1600. According to
this message
Ron Johnson wrote:
On 01/25/07 09:30, Inoqulath wrote:
Hello Folks
Have a look at this Table:
CREATE TABLE foo(
id serial,
a_name text,
CONSTRAINT un_name UNIQUE (a_name));
Obviously, inserting a string twice results in an error ...is there
On 01/25/07 09:45, Thorsten Körner wrote:
Hi,
when I fire the following query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089);
I will get the same results in the same order as in
On 01/25/07 09:54, [EMAIL PROTECTED] wrote:
Ron Johnson wrote:
On 01/25/07 09:30, Inoqulath wrote:
[snip]
I think he is not asking "How do I insert duplicate rows into a
unique-constrained column?", but rather that he wants to have the insert
Good hint. I think that should work for me.
Thanks
(At last, now I know what unique means ;-) )
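On 8.x (which has no SQLite-style INSERT OR IGNORE), the hint presumably amounts to one of two common idioms; a minimal sketch against the foo table from the question:

```sql
-- Idiom 1: conditional insert. Simple, but racy if two sessions may
-- try to insert the same name concurrently.
INSERT INTO foo (a_name)
SELECT 'alice'
WHERE NOT EXISTS (SELECT 1 FROM foo WHERE a_name = 'alice');

-- Idiom 2: trap the unique_violation in PL/pgSQL; safe under concurrency.
CREATE OR REPLACE FUNCTION insert_ignore_name(p_name text) RETURNS void AS $$
BEGIN
    INSERT INTO foo (a_name) VALUES (p_name);
EXCEPTION WHEN unique_violation THEN
    NULL;  -- duplicate: silently ignore, like SQLite's INSERT OR IGNORE
END;
$$ LANGUAGE plpgsql;
```

Calling insert_ignore_name('alice') twice would then succeed both times while storing only one row.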
Thorsten Körner [EMAIL PROTECTED] writes:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089);
I wonder how it is possible to retrieve the results in the same order as
queried in the list.
You could rewrite the
On Thu, 25.01.2007 at 16:45:23 +0100, Thorsten Körner wrote:
Hi,
when I fire the following query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089);
I will get the same results in the same order as in the next
Tom,
Did this information shed any light on what the problem might be? Any
solution or workaround?
Thanks!
Jeremy Haile
On Wed, 24 Jan 2007 14:19:05 -0500, Jeremy Haile [EMAIL PROTECTED]
said:
pgstat.stat was last updated 1/22 12:25pm - there is no pgstat.tmp.
Coincidentally (I think
guys,
i inserted 1 record into my database (default
nextval('sequencename'::regclass), with start 1 increment 1). then i
tried to insert 1 other record twice, but both those inserts failed
because of a domain check (ERROR: value too long for type character
varying(X)). when i was finally able to
Jeremy Haile [EMAIL PROTECTED] writes:
Did this information shed any light on what the problem might be?
It seems to buttress Magnus' theory that the intermittent (or not so
intermittent) stats-test buildfarm failures we've been seeing have to
do with the stats collector actually freezing up,
Unfortunately I don't have any debugging tools installed that would work
against postgres - although I'd be glad to do something if you could
tell me the steps involved. I can reproduce the issue quite easily on
two different Windows machines (one is XP, the other is 2003).
Please let me know if
Hi,
Sorry, I forgot to post back to the list instead of just replying
individual responders.
The data is gene expression data with 20,000 dimensions. Part of the
project I'm working on is to discover what dimensions are truly
independent. But to start with I need to have
all of the data
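Rather than raising the 1600-column limit, data of this shape is often stored long rather than wide: one row per (sample, dimension) instead of 20,000 columns. A sketch, with hypothetical table and column names:

```sql
-- One row per (sample, dimension); no column-count limit applies.
CREATE TABLE expression (
    sample_id  integer  NOT NULL,
    dim_id     integer  NOT NULL,   -- 1 .. 20000
    value      int2     NOT NULL,
    PRIMARY KEY (sample_id, dim_id)
);

-- Fetch one sample's full vector, ordered by dimension:
SELECT dim_id, value
FROM expression
WHERE sample_id = 42
ORDER BY dim_id;
```

Per-dimension statistics (for the independence analysis) then become ordinary GROUP BY dim_id aggregates.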
Jeremy Haile [EMAIL PROTECTED] writes:
Unfortunately I don't have any debugging tools installed that would work
against postgres - although I'd be glad to do something if you could
tell me the steps involved. I can reproduce the issue quite easily on
two different Windows machines (one is XP,
On Thu, Jan 25, 2007 at 08:34:08 -0700,
Isaac Ben [EMAIL PROTECTED] wrote:
Hi,
I'm trying to create a table with 20,000 columns of type int2, but I
keep getting the error message that the limit is 1600. According to
this message
John Smith [EMAIL PROTECTED] writes:
i had insert errors yesterday (ERROR: invalid input syntax for
integer ERROR: column 'columnname' is of type date but expression is
of type integer) but they didn't cause any increment jumps. and when
i insert a record now the sequence increments just
On Thu, Jan 25, 2007 at 12:33:51 -0500,
John Smith [EMAIL PROTECTED] wrote:
guys,
i inserted 1 record into my database (default
nextval('sequencename'::regclass) where (start 1 increment 1)). then i
tried to insert 1 other record twice but both those inserts failed
because of a domain check
Hi Fillip,
thanks for your hint. I have tested it on a development database, and it
worked well.
Are there any experiences how this will affect performance on a large
database, with very high traffic?
Is it recommended to use temp tables in such an environment?
THX in advance
Thorsten
Am
On 1/25/07, John Smith [EMAIL PROTECTED] wrote:
guys,
i inserted 1 record into my database (default
nextval('sequencename'::regclass) where (start 1 increment 1)). then i
tried to insert 1 other record twice but both those inserts failed
because of a domain check (ERROR: value too long for type
Tom Lane wrote:
Jeremy Haile [EMAIL PROTECTED] writes:
Unfortunately I don't have any debugging tools installed that would work
against postgres - although I'd be glad to do something if you could
tell me the steps involved. I can reproduce the issue quite easily on
two different Windows
Magnus Hagander [EMAIL PROTECTED] writes:
Jeremy Haile [EMAIL PROTECTED] writes:
Do you know of any workaround other than restarting the whole server?
Can the collector be restarted individually?
You can use pg_ctl to send the int signal. If it's completely hung, that
may not work. In that
Then just pick it up in Task Manager or Process Explorer or whatever and
kill it off. Just make sure you pick the right process.
I mentioned earlier that killing off the collector didn't work - however
I was wrong. I just wasn't giving it enough time. If I kill the
postgres.exe -forkcol
The question I'd ask before offering a solution is: "Does the order of the
id data matter, or is it a question of having all the results for a given id
together before proceeding to the next id?" The answer to this will
determine whether or not adding either a GROUP BY clause or an ORDER BY
On Thursday 25 January 2007 09:53, Douglas McNaught wrote:
Nature of the beast. Sequence increments aren't rolled back on
transaction abort (for performance and concurrency reasons), so you
should expect gaps.
Behavior long ago noted and accounted for. But I've always wondered why this
was
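The non-rollback behavior is easy to see by hand; a minimal illustration:

```sql
CREATE SEQUENCE s;

BEGIN;
SELECT nextval('s');   -- returns 1
ROLLBACK;              -- the sequence increment is NOT undone

SELECT nextval('s');   -- returns 2: the value 1 is consumed for good,
                       -- which is exactly the gap you see after an abort
```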
On Tuesday 23 January 2007 13:55, Carlos wrote:
What would be the faster way to convert a 7.4.x database into an 8.x
database? A dump of the database takes over 20 hours, so we want to convert
the database without having to do a dump and restore.
You've probably already accounted for this,
Benjamin Smith [EMAIL PROTECTED] writes:
On Thursday 25 January 2007 09:53, Douglas McNaught wrote:
Nature of the beast. Sequence increments aren't rolled back on
transaction abort (for performance and concurrency reasons), so you
should expect gaps.
Behavior long ago noted and accounted
I'll try to put together a test case for hackers, although I'm not sure
what exactly causes it.
Basically, when I fire up PostgreSQL - after about a minute the stats
collector runs once (pgstat.stat is updated, autovacuum fires up, etc.)
- and then the collector seems to hang. If I watch its
Perhaps my understanding of the 'encode' function is incorrect, but I
was under the impression that I could do something like:
SELECT lower(encode(bytes, 'escape')) FROM mytable;
as it sounded like (from the manual) that 'encode' would return valid
ASCII, with all the non-ascii bytes hex
Douglas McNaught wrote:
Benjamin Smith [EMAIL PROTECTED] writes:
On Thursday 25 January 2007 09:53, Douglas McNaught wrote:
Nature of the beast. Sequence increments aren't rolled back on
transaction abort (for performance and concurrency reasons), so you
should expect gaps.
Tom Lane wrote:
Thorsten Körner [EMAIL PROTECTED] writes:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089);
You could rewrite the query as
select ... from tablename where m_id = 26250
union all
select ... from
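An alternative that also forces the IN-list order, without splitting the query into a UNION ALL, is an ORDER BY over a CASE; a sketch using the ids from the question:

```sql
-- Rank each id explicitly; rows come back in IN-list order.
SELECT m_id, m_u_id, m_title, m_rating
FROM tablename
WHERE m_id IN (26250, 11042, 16279, 42197, 672089)
ORDER BY CASE m_id
    WHEN 26250  THEN 1
    WHEN 11042  THEN 2
    WHEN 16279  THEN 3
    WHEN 42197  THEN 4
    WHEN 672089 THEN 5
END;
```

Without some such ORDER BY, the order of rows from a plain IN query is never guaranteed.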
Hello,
How can I loop over a PL/pgSQL recordset variable? An example:
DECLARE
v_tmp_regi RECORD;
v_tmp RECORD;
BEGIN
SELECT * INTO v_tmp_regi FROM sulyozas_futamido sf WHERE
sf.termekfajta_id=
a_termekfajta_id AND sf.marka_id=a_marka_id;
DELETE FROM
Hi guys. I've inherited a system that I'm looking to add replication to.
It already has some custom replication code, but it's being nice to say
that code is less than good. I'm hoping there's an existing project out there
that will work much better. Unfortunately, I'm not seeing anything that
Have you read the 8.2 documentation about this:
http://www.postgresql.org/docs/8.2/static/high-availability.html
---
Ben wrote:
Hi guys. I've inherited a system that I'm looking to add replication to.
It already
On 1/25/07, Furesz Peter [EMAIL PROTECTED] wrote:
How can I loop over a PL/pgSQL recordset variable? The example:
DECLARE
v_tmp_regi RECORD;
v_tmp RECORD;
BEGIN
SELECT * INTO v_tmp_regi FROM sulyozas_futamido sf WHERE
sf.termekfajta_id=
a_termekfajta_id AND
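A RECORD variable holds only one row at a time, so to walk a whole result set in PL/pgSQL one normally loops over the query itself. A sketch reusing the names from the example:

```sql
DECLARE
    v_tmp RECORD;
BEGIN
    -- Iterate the query directly; v_tmp holds one row per iteration.
    FOR v_tmp IN
        SELECT * FROM sulyozas_futamido sf
        WHERE sf.termekfajta_id = a_termekfajta_id
          AND sf.marka_id = a_marka_id
    LOOP
        RAISE NOTICE 'marka_id: %', v_tmp.marka_id;
    END LOOP;
END;
```

SELECT INTO, by contrast, only captures the first row of the result.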
Hello,
I was wondering if anyone could point me to any documentation on
setting up PostgreSQL in a web hosting environment. Things like
account management, access host management and privilege management
for users that are resellers and will need to administer their own
users in
Jeremy Haile wrote:
I'll try to put together a test case for hackers, although I'm not sure
what exactly causes it.
Basically, when I fire up PostgreSQL - after about a minute the stats
collector runs once (pgstat.stat is updated, autovacuum fires up, etc.)
- and then the collector seems
Hello --
I'm having a problem loading a recent pg_dump of our production database.
In our environment we take a monthly snapshot of our production server and
copy that to our development server so that we have a recent batch of data
to work with.
However, when trying to load the file for this
Alvaro Herrera [EMAIL PROTECTED] writes:
Jeremy Haile wrote:
If anyone else is experiencing similar problems, please post your
situation.
All the Windows buildfarm machines are, apparently.
Can't anyone with a debugger duplicate this and get a stack trace for
us? If the stats collector is
Hello,
I'm using PostgreSQL to log web traffic leaving our network. This
results in the database growing to a fairly large size. This machine
will be left unattended and basically unmaintained for long stretches of
time so I need a way to limit the
AFAIR (Magnus can surely confirm) there were some other tables that
weren't showing stats as all zeros -- but there's no way to know whether
those numbers were put there before the collector had frozen (if
that's really what's happening).
Yeah - I have numbers that updated before the stats
Yes, but unless I'm missing something, it doesn't look like any of those
options perfectly fit my situation, except perhaps Slony, which is why I'm
leaning that direction now despite my concerns.
Is there a section of this page I should be re-reading?
On Thu, 25 Jan 2007, Bruce Momjian wrote:
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Jeremy Haile wrote:
If anyone else is experiencing similar problems, please post your
situation.
All the Windows buildfarm machines are, apparently.
Can't anyone with a debugger duplicate this and get a stack trace for
us? If the
In response to Mark Drago [EMAIL PROTECTED]:
I'm using PostgreSQL to log web traffic leaving our network. This
results in the database growing to a fairly large size. This machine
will be left unattended and basically unmaintained for long stretches of
time so I need a way to limit the
On Thu, Jan 25, 2007 at 10:47:50AM -0700, Isaac Ben wrote:
The data is gene expression data with 20,000 dimensions. Part of the
project I'm working on is to discover what dimensions are truly
independent. But to start with I need to have
all of the data available in a master table to do
I got a duplicate key violation when the following query was performed:
INSERT INTO category_product_visible (category_id, product_id)
SELECT cp.category_id, cp.product_id
FROM category_product cp
WHERE cp.product_id = $1 AND
On 01/25/07 15:43, Bill Moran wrote:
In response to Mark Drago [EMAIL PROTECTED]:
[snip]
I don't think either of those are good ideas, because they both
rely on disk limits to trigger drastic changes in database size,
which will then require
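One time-based alternative to disk-limit triggers is to rotate the log into per-period tables and drop the oldest, so space is reclaimed immediately with no VACUUM needed on the unattended machine. A sketch, with hypothetical table names:

```sql
-- One table per month of traffic.
CREATE TABLE weblog_2007_01 (
    ts      timestamptz NOT NULL,
    src_ip  inet        NOT NULL,
    url     text        NOT NULL
);

-- Monthly maintenance job: open the next period, drop the oldest.
CREATE TABLE weblog_2007_02 (LIKE weblog_2007_01);
DROP TABLE weblog_2006_08;   -- keeps roughly six months of history
```

DROP TABLE frees the space at once, whereas DELETE on one big table would leave dead rows to be vacuumed.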
Brian Wipf [EMAIL PROTECTED] writes:
I got a duplicate key violation when the following query was performed:
INSERT INTO category_product_visible (category_id, product_id)
SELECT cp.category_id, cp.product_id
FROM category_product cp
WHERE
Hello, I have a design question:
I have a table representing Families, and a table representing Persons.
The Family table has a column family_id as its primary key.
The Person table has a column person_id as its primary key and also
contains a column family_id.
As you can understand, the column family_id in a
Hi all,
Is there any way that I can synchronize a table in Postgres on Linux with
another table in Ms Access?
The requirement of the assignment is as following:
In postgres, there is a table called message_received. Whenever we insert,
update or edit this table, the table in Ms Access
Go in the other direction... Convert your table in MS Access to use a
pass-through query to the PostgreSQL table. Connect your MS Access
pass-through table to PostgreSQL using ODBC.
Even better: Drop MS Access completely and just use PostgreSQL. Access is a
totally inferior technology.
codeWarrior wrote:
Go in the other direction... Convert your table in MS Access to use a
pass-through query to the PostgreSQL table. Connect your MS Access
pass-through table to PostgreSQL using ODBC.
Even better: Drop MS Access completely and just use PostgreSQL. Access is a
totally
Sounds like you'll either need an explicit LOCK TABLE
command, set your transaction isolation to serializable,
or use advisory locking.
http://www.postgresql.org/docs/8.2/interactive/explicit-locking.html#LOCKING-TABLES
http://www.postgresql.org/docs/8.2/interactive/transaction-iso.html#XACT
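If the duplicate key comes from two sessions racing through the same INSERT ... SELECT, the advisory-lock option could look like this on 8.2; a sketch, with the product id hard-coded for illustration:

```sql
-- Serialize writers for one product_id (42 here) with a session-level
-- advisory lock (pg_advisory_lock/pg_advisory_unlock are new in 8.2).
SELECT pg_advisory_lock(42);

INSERT INTO category_product_visible (category_id, product_id)
SELECT cp.category_id, cp.product_id
FROM category_product cp
WHERE cp.product_id = 42
  AND NOT EXISTS (
      SELECT 1
      FROM category_product_visible v
      WHERE v.category_id = cp.category_id
        AND v.product_id  = cp.product_id);

SELECT pg_advisory_unlock(42);
```

The lock keys the critical section on the product id, so unrelated products stay fully concurrent.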
I have to store binary data in a table, ranging from 512K to 1M. I am getting
very poor performance when inserting this data.
create table my_stuff (data bytea);
I then try to insert 10 1M blobs into this table using PQexecParams from C. It
takes ~10 seconds to insert the 10 records.
The
Mason Hale [EMAIL PROTECTED] writes:
I'm having a problem loading a recent pg_dump of our production database.
However, when trying to load the file for this month's snapshot, we are (for
the first time) seeing a slew of errors, such as:
invalid command \N
invalid command \N
ERROR: syntax
hello all,
I'd like to know what you think about using dblink to construct serious
synchronous and asynchronous replication.
I'm working with this idea only for testing, and I think this is possible or
almost possible, because I don't know the performance over long distances,
but in the same network, like
brian stone [EMAIL PROTECTED] writes:
I have to store binary data in a table, ranging from 512K - 1M. I am getting
very poor performance when inserting this data.
create table my_stuff (data bytea);
I then try to insert 10 1M blobs into this table using PQexecParams from C.
It takes ~10
I have not tried profiling yet; I am no pro at that.
output of SELECT version()
PostgreSQL 8.2rc1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.6
20060404 (Red Hat 3.4.6-3)
This is the test program. I run it on the same machine as the postmaster. I
am not sure, but I would assume that
So there is no confusion as to why my output has 10 lines that say "Error:",
the pg error printf line should read:
printf("Error: %s\n", PQresultErrorMessage(res));
skye
brian stone [EMAIL PROTECTED] wrote: I have not tried profiling yet; I am no
pro at that.
output of SELECT version()
Iannsp [EMAIL PROTECTED] writes:
I'd like to know what you think about using dblink to construct serious
synchronous and asynchronous replication.
I think it'd be a lot of work and at the end of the day you'd pretty
much have reinvented Slony-I ... why not just use slony?
On Jan 25, 2007, at 12:47 PM, Benjamin Smith wrote:
On Tuesday 23 January 2007 13:55, Carlos wrote:
What would be the faster way to convert a 7.4.x database into an 8.x
database? A dump of the database takes over 20 hours so we want
to convert
the database without having to do a dump and
Csaba Nagy [EMAIL PROTECTED] writes:
Well, if you can show a reproducible test case, I'd like to look at it.
OK, I have a test case which has a ~90% success rate in triggering the
issue on my box. It is written in Java; I hope you can run it. In any case
you'll get the idea of how to reproduce the