That's what I thought the problem was, but I created a table afterwards without
inheritance. Could it have something to do with the max size of the schema
or OIDs?
-- Original message --
Date: Sat, 19 Mar 2005 14:55:50 -0800 (PST)
From: Stephan Szabo [EMAIL PROTECTED]
To: [EMAIL
On Sun, 20 Mar 2005 [EMAIL PROTECTED] wrote:
That's what I thought the problem was, but I created a table afterwards
without
inheritance. Could it have something to do with the max size of the schema
or OIDs?
I can't think of a reason it would, so can you send a self-contained
full
Hi there Tom, thanks for your reply.
pg_dump: socket not open
pg_dump: SQL command to dump the contents of table activity_log
failed: PQendcopy() failed.
pg_dump: Error message from server: socket not open
pg_dump: The command was: COPY public.activity_log (bunch of columns
TO stdout
Is this
Sorry it took me so long to respond. I've been out for a couple days.
While certain things may be permissible in a language, I think it is also
important to look at the context in which the language is applied and
determine whether it will practically turn up in relevant code. If the answer
Michael Ben-Nes wrote:
I recommend you compile PG from source so you can use the new 8.0.1
PostgreSQL 8.0.1 is available in the Debian experimental suite, package
name postgresql-8.0.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
---(end of
Tony Caduto wrote:
Hi,
I read in an article/interview on http://madpenguin.org/cms/html/62/3677.html
that work was being done on improving/adding support for sql standard
compliant stored procs/functions
Does anyone know exactly what that means?
Does it mean that Postgres will have
Hi Alvaro, thanks for your reply!
Alvaro Herrera wrote:
psql:db_backup.sql:1548: ERROR: invalid byte sequence for encoding
UNICODE: 0xe12020
CONTEXT: COPY country, line 5, column namespanish:
Canad?
Hmm. The sequence looks like latin1 interpreted as utf8. This seems
Hi Stephan,
I figured out what happened:
The master table contained duplicates, but the insert statement seems to
be smart enough to select just the unique ones.
Peter
-- Original message --
Date: Sun, 20 Mar 2005 01:46:19 -0800 (PST)
From: Stephan Szabo [EMAIL PROTECTED]
To: [EMAIL
On Sun, 20 Mar 2005, Bruce Momjian wrote:
Tony Caduto wrote:
Hi,
I read in an article/interview on http://madpenguin.org/cms/html/62/3677.html
that work was being done on improving/adding support for sql standard
compliant stored procs/functions
Does anyone know exactly what that means?
Does it
Oleg Bartunov wrote:
On Sun, 20 Mar 2005, Bruce Momjian wrote:
Tony Caduto wrote:
Hi,
I read in an article/interview on
http://madpenguin.org/cms/html/62/3677.html
that work was being done on improving/adding support for sql standard
compliant stored procs/functions
Does anyone know exactly what
Sarah Ewen [EMAIL PROTECTED] writes:
Is this repeatable? What shows up in the postmaster's log when it
happens? What platform is this on, and what version of Postgres?
This is postgresql-7.4.6-1.FC2.2 running on RedHat Fedora Core 2.
The logs don't reveal anything, and it happens
On Sun, 20 Mar 2005, Joshua D. Drake wrote:
Oleg Bartunov wrote:
On Sun, 20 Mar 2005, Bruce Momjian wrote:
Tony Caduto wrote:
Hi,
I read in an article/interview on
http://madpenguin.org/cms/html/62/3677.html
that work was being done on improving/adding support for sql standard
compliant stored
hi,
I'm having a bit of trouble with my SQL query. It takes about 26h to
run on a 3Ghz PC. I'd really like to speed this up.
I put this query in a loop to iterate over 20 tables (each table
including summary has 400k records), each time the table name changes.
In this case it's m_alal. Each
My understanding is that 8.1 will have a much more mature
implementation of
stored procedures versus UDFs (Which we have had forever).
What's the difference between UDF and stored procedure ?
Here are a couple of GGIYF references:
http://builder.com.com/5100-6388-1045463.html
On Sun, 20 Mar 2005, Joshua D. Drake wrote:
My understanding is that 8.1 will have a much more mature implementation
of
stored procedures versus UDFs (Which we have had forever).
What's the difference between UDF and stored procedure ?
Here are a couple of GGIYF references:
Oleg Bartunov oleg@sai.msu.su writes:
Hmm, the only real difference I see is that SPs are precompiled.
I think we should clearly outline what an SP is and what a UDF is, and
whether we are working on SPs or just improving and extending our functions.
AFAIR, the only person who's actually stated any
Oleg Bartunov wrote:
On Sun, 20 Mar 2005, Joshua D. Drake wrote:
My understanding is that 8.1 will have a much more mature
implementation of
stored procedures versus UDFs (Which we have had forever).
What's the difference between UDF and stored procedure ?
Here are a couple of GGIYF
[I've tried to send this message to pgsql-general several times now,
but even though I'm subscribed to it I never saw the message show up
in the mailing list, so I'm trying to send it from a different account
now. If you get several copies of this message, I apologize.]
I'm working on an
On Sun, Mar 20, 2005 at 10:10:14AM -0800, Jason Leach wrote:
hi,
I'm having a bit of trouble with my SQL query. It takes about 26h to
run on a 3Ghz PC. I'd really like to speed this up.
I put this query in a loop to iterate over 20 tables (each table
including summary has 400k records),
Hi there folks,
I've just had pg_dump fail on me for the first time ever, and I'm not sure why.
It generates 24MB of dump before bombing out with:
pg_dump: socket not open
pg_dump: SQL command to dump the contents of table activity_log
failed: PQendcopy() failed.
pg_dump: Error message from
On Sat, Mar 19, 2005 at 06:36:27PM +0100, [EMAIL PROTECTED] wrote:
1. I've a master table containing about 4 records. A count(*) provides
me the exact number.
2. I've created a table based on the master. I copied a fraction from
the master into the new table using a WHERE
tm [EMAIL PROTECTED] wrote in
news:[EMAIL PROTECTED]:
Woodchuck Bill [EMAIL PROTECTED] wrote:
The proponent certainly left a bad taste in my mouth after his
little ...
Too much information.
LOL. Get your mind out of the gutter. ;-)
--
Bill
FIRST CALL FOR VOTES (of 2)
unmoderated group comp.databases.postgresql
Newsgroups line:
comp.databases.postgresql PGSQL Relational Database Management System.
Votes must be received by 23:59:59 UTC, 9 Apr 2005.
This vote is being conducted by a neutral
Mean is just sum(col)/count(col)
You can also just use avg(col).
Either way, be careful because nulls may not be treated as you want for
such calculations.
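The NULL caveat above is easy to demonstrate. A minimal sketch using sqlite3 as a stand-in (the aggregate semantics shown here match Postgres for this case; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col REAL)")
conn.executemany("INSERT INTO t (col) VALUES (?)", [(10,), (20,), (None,)])

# avg() and count(col) both skip NULLs, so these two agree:
avg = conn.execute("SELECT avg(col) FROM t").fetchone()[0]
ratio = conn.execute("SELECT sum(col) * 1.0 / count(col) FROM t").fetchone()[0]
print(avg, ratio)   # both 15.0 -- the NULL row is ignored

# Dividing by count(*) instead silently counts the NULL row:
wrong = conn.execute("SELECT sum(col) * 1.0 / count(*) FROM t").fetchone()[0]
print(wrong)        # 10.0 -- probably not what you wanted
```

The practical takeaway: sum(col)/count(col) and avg(col) agree, but mixing in count(*) changes the denominator whenever NULLs are present.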
The stats package R can access Postgres databases, and can be used for
robust statistical analyses of the data.
See:
The short question is why does this:
select to_tsvector('default', coalesce(name, '') ||' '||
coalesce(description, '') ||' '|| coalesce(keywords,'')) from link_items;
give different results than this:
update link_items set linksfti=to_tsvector('default', coalesce(name, '')
||' '||
Hello all.
I'm in the tail end of converting an app from MySQL to psql. I have this code:
snip
IF(days_complete <= -120, job_price, 0) AS Days_120,
IF(days_complete BETWEEN -119 AND -90, job_price, 0) AS Days_90,
IF(days_complete BETWEEN -89 AND -60, job_price, 0) AS Days_60,
IF(days_complete BETWEEN -59
Folks,
I'm kind of a newbie with Linux-related stuff. I'm trying to install PG on a FC3 box using "postgresql-8.0.1-2PGDG.i686.rpm", but after issuing rpm -ivh with the rpm file I got a message telling me "error: Failed dependencies: libpq.so.3 is needed". I took a look at the documentation and it
Vern [EMAIL PROTECTED] wrote in news:[EMAIL PROTECTED]:
Marc G. Fournier wrote in Msg [EMAIL PROTECTED]:
it can't *hurt* to have the group ...
I respectfully disagree with you, Marc. :)
The PGSQL* hierarchy is now well distributed, and there is no need
for a comp.* group. If
Hello
I need help on ODBC driver testing. We have written an ODBC
driver for our new
SQL engine. I have to test the ODBC driver in Linux only. I need
some web links
where I can get some free code in Linux for ODBC testing.
Can you refer me to
any sites or weblinks where I can get some ODBC driver test
and a bunch of clients that are disconnected most of the time, need to
maintain a local copy of the central database. The client databases are
based on One$DB since it has to be lightweight. The client does not
access the
Michael Fuhr wrote:
On Tue, Mar 15, 2005 at 10:46:09PM +, Paul Moore wrote:
The long and short of it is that I believe you just use \n to delimit
lines on Windows, just like anywhere else.
Many thanks -- your test results contain the info we've been seeking.
Thanks a lot Paul.
Michael, you
I use AS Tcl 8.4.9
on Windows and I would like to use Pg8 (since it is now native to Windows). What
do I need on the Tcl side of things?
Robert Hicks
Northrop Grumman Mission Systems
Defense Mission Systems
Systems Administrator (LIMS)
304.264.7939 (Office)
304.264.2664
Is it possible to compress traffic between the server and client while the
server returns query results?
This is very relevant for dial-up users.
What is the solution?
Hi,
I need to be able to insert a byte[] of size up to 25MB.
With heap size up to 512m this is failing with a java.lang.OutOfMemoryError
Any help resolving this issue will be greatly appreciated.
Thanks,
Suma
Hi all.
We have a setup with Zope and a remote Postgresql server. We're storing
blobs in largeobject files.
What we need is to be able to transfer blobs between
Zope and postgres. I thought it was possible to use the lo_* functions, by
creating a largeobject, and then sending the
Mark Rae wrote:
I would say that doing the concurrency tests is probably the most
important factor in comparing other databases against MySQL, as
MySQL will almost always win in single-user tests.
E.g. here are some performance figures from tests I have done in the past.
This is with a 6GB database
In article [EMAIL PROTECTED],
Rick Schumeyer [EMAIL PROTECTED] writes:
These results are for a single process populating a table with 934k rows,
and then performing some selects. I also compared the effect of creating
indexes on some of the columns.
I have not yet done any testing of
I have a table containing different types of documents (type A, B and C).
Each document type must have separate sequential ID starting at 1
ID of first inserted record of type A must be set to 1
ID of first inserted record of type B must be also set to 1
ID of second record of type A
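A common way to meet a requirement like the one above is a per-type counter table bumped in the same transaction as the insert. A minimal sketch, using sqlite3 as a stand-in for Postgres (all table, column, and function names here are made up for illustration; in a multi-user setup the UPDATE's row lock serializes concurrent inserts of the same type):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (doc_type TEXT PRIMARY KEY, last_id INTEGER)")
conn.execute("CREATE TABLE documents (doc_type TEXT, doc_id INTEGER, body TEXT)")

def insert_document(doc_type, body):
    """Bump (or create) the per-type counter and use the new value as the ID."""
    with conn:  # one transaction: counter bump and insert commit together
        conn.execute("INSERT OR IGNORE INTO counters (doc_type, last_id) "
                     "VALUES (?, 0)", (doc_type,))
        conn.execute("UPDATE counters SET last_id = last_id + 1 "
                     "WHERE doc_type = ?", (doc_type,))
        new_id = conn.execute("SELECT last_id FROM counters WHERE doc_type = ?",
                              (doc_type,)).fetchone()[0]
        conn.execute("INSERT INTO documents VALUES (?, ?, ?)",
                     (doc_type, new_id, body))
    return new_id

print(insert_document("A", "first A"))   # 1
print(insert_document("B", "first B"))   # 1
print(insert_document("A", "second A"))  # 2
```

Each type gets its own gapless sequence starting at 1, which a plain SQL sequence per table cannot give you.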
Hi,
I dumped my database on server1 with pg_dump -Fc ..., copied the dump to
server2; both run the same pgsql version, 7.4.6.
pg_restore says
pg_restore: [custom archiver] could not uncompress data: incorrect data check
But it seems that almost all the data was restored.
What does this error mean? I
Hello,
This issue is resolved.
I was using the wrong struct.
Peter
Tom Lane wrote:
peter Willis [EMAIL PROTECTED] writes:
I have a trigger function written in C.
...
Since the trigger is called after each row update the actual row data
should be available in some way to the trigger.
I use a bash script (similar to the following example) to update tables.
psql -v passed_in_var=\'some_value\' -f script_name
Is it possible to pass a value back from psql to the bash script?
Thanks,
Paul Cunningham
Hello,
I've dumped the content of MS-Access 2002 SP3 tables on a PC with Windows XP
Pro in French localization. Then I COPY these files on the same PC, which
hosts a PostgreSQL 8.0.1 database. I have problems with the accents!? Why?
What kind of encoding must I use to create the PG
database under
Hello,
I resolved this issue already.
The trigger now works fine.
I was looking at the wrong structure.
Thanks,
Peter
Michael Fuhr wrote:
On Tue, Mar 08, 2005 at 11:37:14AM -0800, peter Willis wrote:
I have a trigger function written in C.
The trigger function is called via:
CREATE TRIGGER
Stanislaw Tristan wrote:
Is it possible to compress traffic between the server and client while the
server returns query results?
This is very relevant for dial-up users.
What is the solution?
No, unless SSL compresses automatically.
--
Bruce Momjian | http://candle.pha.pa.us
Bruce Momjian wrote:
Stanislaw Tristan wrote:
Is it possible to compress traffic between the server and client while the server returns query results?
There are a couple of solutions.
1. Mammoth PostgreSQL supports this for libpq, and jdbc based clients.
2. You can use a web services model that
Stanislaw Tristan wrote:
Is it possible to compress traffic between the server and client while the
server returns query results?
This is very relevant for dial-up users.
What is the solution?
You could use an SSH tunnel with compression to achieve this.
-Neil
---(end of
To fetch all updates since the last synchronization, the client would
calculate a value for $lastrevision by running this query on its local
database:
SELECT max(revision) AS lastrevision FROM codes;
It would then fetch all updated rows by running this query against the
server:
SELECT * FROM
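The scheme described above can be sketched end to end. A minimal stand-in using sqlite3 for both sides (the codes/revision names follow the quoted queries; the sample data is made up):

```python
import sqlite3

# Server side: the central database, with one row the client hasn't seen yet.
server = sqlite3.connect(":memory:")
server.execute("CREATE TABLE codes (id INTEGER PRIMARY KEY, revision INTEGER, data TEXT)")
server.executemany("INSERT INTO codes VALUES (?, ?, ?)",
                   [(1, 1, "alpha"), (2, 2, "beta"), (3, 5, "gamma")])

# Client side: the disconnected local copy, lagging behind.
client = sqlite3.connect(":memory:")
client.execute("CREATE TABLE codes (id INTEGER PRIMARY KEY, revision INTEGER, data TEXT)")
client.executemany("INSERT INTO codes VALUES (?, ?, ?)",
                   [(1, 1, "alpha"), (2, 2, "beta")])

# Step 1: client computes its high-water mark locally.
lastrevision = client.execute("SELECT max(revision) FROM codes").fetchone()[0]

# Step 2: client asks the server for everything newer than that.
updates = server.execute("SELECT * FROM codes WHERE revision > ?",
                         (lastrevision,)).fetchall()
print(updates)  # [(3, 5, 'gamma')]
```

Note this sketch ignores the race condition raised later in the thread: a transaction can commit a lower revision number after a higher one has already been read, so rows can be skipped.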
On Tue, 15 Mar 2005, Suma Bhat wrote:
I need to be able to insert a byte[] of size up to 25MB.
With heap size up to 512m this is failing with a java.lang.OutOfMemoryError
You need to use an 8.0 JDBC driver and a 7.4 or 8.0 server.
Kris Jurka
Alex Adriaanse [EMAIL PROTECTED] writes
This seems to work, except there exists a race condition. Consider the
following series of events (in chronological order):
1. Initially, in the codes table there's a row with id=1, revision=1,
and a row with id=2, revision=2
2. Client A
On Mon, 21 Mar 2005 02:50 pm, Bruce Momjian wrote:
Stanislaw Tristan wrote:
Is it possible to compress traffic between the server and client while the
server returns query results?
This is very relevant for dial-up users.
What is the solution?
There is always the possibility of using SSH to tunnel
On Tue, 15 Mar 2005 08:39 pm, Andrus wrote:
I have a table containing different types of documents (type A, B and C).
Each document type must have separate sequential ID starting at 1
ID of first inserted record of type A must be set to 1
ID of first inserted record of type B must
I don't remember such a problem. What's your tsearch2 setup?
Oleg
On Thu, 17 Mar 2005, Justin L. Kennedy wrote:
The short question is why does this:
select to_tsvector('default', coalesce(name, '') ||' '||
coalesce(description, '') ||' '|| coalesce(keywords,'')) from link_items;
give different
The number of lines depends merely on where you place your line breaks.
IF(days_complete = 120, job_price, 0) AS Days_120
could be written as:
CASE WHEN days_complete = 120 THEN job_price ELSE 0 END AS Days_120
There might be somewhat less syntactic sugar, but this is not a five
line expression
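The IF-to-CASE translation above can be checked with a quick stand-in. CASE WHEN is standard SQL, so the same expression runs unchanged in Postgres; sqlite3 is used here only for a self-contained demo, and the sample thresholds and data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (days_complete INTEGER, job_price REAL)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [(-130, 100.0), (-95, 200.0), (-10, 300.0)])

# MySQL's IF(cond, a, b) becomes the standard CASE WHEN cond THEN a ELSE b END.
rows = conn.execute("""
    SELECT CASE WHEN days_complete <= -120 THEN job_price ELSE 0 END AS Days_120,
           CASE WHEN days_complete BETWEEN -119 AND -90 THEN job_price ELSE 0 END AS Days_90
    FROM jobs
""").fetchall()
print(rows)  # [(100.0, 0), (0, 200.0), (0, 0)]
```

Each job's price lands in exactly one aging bucket, just as the MySQL IF() chain did.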
Hi,
I had installed a previous version of postgresql, 8.x. I uninstalled it.
Then I removed the superuser in My Computer (right click) -- Properties
-- Advanced tab -- Settings button. I tried to install postgresql
8.0.1. In service configuration, I checked the install-as-service
checkbox. Filled
Test
On Mon, Mar 21, 2005 at 12:35:22AM -0600, Thomas F.O'Connell wrote:
The number of lines depends merely on where you place your line breaks.
IF(days_complete = 120, job_price, 0)AS Days_120
could be written as:
CASE WHEN days_complete = 120 THEN job_price ELSE 0 END AS Days_120
There
Hi!
Could you please tell me whether PostgreSQL v7.4.7 (on the x86 platform)
is compatible with FreeBSD v5.3, or is it safer to use FreeBSD v4.11?
Excuse me for my English. Thank you in advance!