Hello there

I am using Postgres 6.5.3 on SuSE Linux 6.4.

The problem is that when using rules on a view, they only work for
insert and delete, not update. Even for insert and delete to work, the
user must be granted read and write permission on the underlying table.
This does somewhat negate the purpose …
Have you tried this under v7.0.3 and/or 7.1, to see if it's long since
been fixed?
On Sun, 14 Jan 2001, miss wrote:
> Hello there
>
> I am using Postgres 6.5.3 on SuSE Linux 6.4
>
> The problem is that on using rules on a view it will only work for
> insert and delete - not update. Even for insert …
Hi all,
After a year or so of programming with it, I've become a serious Pg fan.
However, I have one quibble / query.
Many of my applications use large objects to store images, text chunks,
etc. This works very well, and certainly has adequate performance for
anything I've ever tried to do …
Hello to all,

I'm writing because I have noticed a problem with the JDBC driver: the
JDBC specification says that "ResultSet.absolute( 1 )" is equivalent to
"ResultSet.first()". With the PostgreSQL JDBC driver this is not true.
I suppose that this problem happens because the Vector implementation of
ResultSet …
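The equivalence being reported against can be checked without a database.
Below is a minimal in-memory sketch of the contract the JDBC spec imposes
(MiniResultSet is an invented name; this is not the PostgreSQL driver's
code): per the spec, absolute(1) must position the cursor exactly as
first() does.

```java
import java.util.List;

// Minimal in-memory model of the JDBC cursor contract relevant to the
// report above: absolute(1) must behave exactly like first().
// MiniResultSet is an invented name; negative offsets (counting from
// the end of the result set) are omitted for brevity.
class MiniResultSet {
    private final List<String> rows;
    private int current = 0; // 0 = before first row; rows.size()+1 = after last

    MiniResultSet(List<String> rows) { this.rows = rows; }

    // Move to the n-th row, 1-based; return true if we landed on a row.
    boolean absolute(int n) {
        if (n >= 1 && n <= rows.size()) { current = n; return true; }
        current = (n < 1) ? 0 : rows.size() + 1;
        return false;
    }

    // The spec defines first() as positioning on row 1, i.e. absolute(1).
    boolean first() { return absolute(1); }

    String getString() { return rows.get(current - 1); }
}
```

A driver that backs its rows with a Vector still has to honor this
1-based contract; an off-by-one in the absolute() index arithmetic would
produce exactly the mismatch described.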
> The 7.1 Release notes include the statement re WAL below.
>
> "Write-ahead Log (WAL)
> To maintain database consistency in case of an operating system crash,
> previous releases of PostgreSQL have forced all data modifications to
> disk before each transaction commit. With WAL, only one log file …
Does this provide true "point of failure" recovery? This sounds like no
more than a cold backup, which does not provide "point of failure"
recovery. I think the original question is very valid. Postgres does
not, to my knowledge, support transaction logging, which is necessary
for this style of recovery.
Yes, the first method is not practical because you cannot just take down
your system every time you need to do a backup, and it's risky to stop
and start a large server application that manages data. The pg_dump
method would be better.
I have worked extensively with Oracle database apps.
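The pg_dump route can also be scripted. As a sketch, here is how one
might assemble the command line from Java: the database name and output
path below are placeholders, -f (write the dump to a file) is a real
pg_dump option, and handing the list to ProcessBuilder would actually
run it.

```java
import java.util.List;

// Sketch: assemble a pg_dump invocation for an online backup. The
// database name and output file are placeholders; -f is a real pg_dump
// option. Passing the returned list to new ProcessBuilder(cmd).start()
// would run the backup without stopping the server.
class DumpCommand {
    static List<String> build(String db, String outFile) {
        return List.of("pg_dump", "-f", outFile, db);
    }
}
```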
Hi,
Using Phorum (www.phorum.org), which is based on PHP, I am getting the
following error when trying to insert a message into the PostgreSQL
6.5.3 database:

ProcessQuery
ERROR: Tuple is too big: size 12440
AbortCurrentTransaction

(obtained from the log). Can anybody tell me where to find proper
documentation …
> > But When I am trying to forward engineer that to my Remote PostgreSQL db I
> > am getting ODBC error saying:
> > 'Could not connect to the Server'
> > 'Could not connect to remote socket'
Oh, and I do recognize those error messages as the same ones that occur
when you are configured for SSH port forwarding …
> I've searched the documentation and the mailing lists archive but I'm
> unable to find info on setting checkpoints and performing roll-forward
> operations with the transaction log. I'd appreciate it if
> someone could point me to the right docs.
WAL stuff is not documented yet.

Anyway, you should …
Hi,
We are using PostgreSQL 7 on Linux. We got a pg_dump error when we
tried to back up a database using pg_dump. The error message is
"PQgetvalue: ERROR! tuple number 0 is out of range 0..-1 Segmentation
fault". We could dump this database when it was created. Our database
server can …
Hi,
I have read that PostgreSQL has a row size limitation of 8 or 32 KB. Is
this true? Is there any way of surpassing this limitation? Isn't that a
serious limitation for a database?
Halford Dace <[EMAIL PROTECTED]> writes:
> However, what rather terrifies me is that pg_dump seems only to store the
> oid field in the dump, and not the content of the LO itself.
You're quite right: pg_dump does NOT back up large objects.
(7.1's pg_dump will, but 7.0 and before don't.)
There is …
On Thu, Jan 11, 2001 at 08:57:26AM -0600, Tim White wrote:
> In Oracle, you restore the data files from a previous backup and then
> re-apply the transaction (archive) logs, a process called "rolling
> forward"; then you can open the database for use, and it is in the
> state just prior to the failure.
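The quoted roll-forward procedure boils down to: restore a backup image
of the data, then re-apply every logged change in its original order. A
toy sketch of that replay step follows; the record format and all names
are invented for illustration and reflect neither Oracle's nor
PostgreSQL's actual log formats.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy "rolling forward": start from the backed-up state and replay
// logged changes in commit order, ending at the state just prior to
// the failure. LogRecord and RollForward are invented names.
class RollForward {
    record LogRecord(String key, String value) {}

    static Map<String, String> recover(Map<String, String> backup,
                                       List<LogRecord> log) {
        Map<String, String> state = new LinkedHashMap<>(backup);
        for (LogRecord r : log) {
            state.put(r.key(), r.value()); // re-apply each change in order
        }
        return state;
    }
}
```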
> Hello to all,
> I'm writing because I have noticed a problem with the JDBC driver: the
> JDBC specification says that "ResultSet.absolute( 1 )" is equivalent to
> "ResultSet.first()". With the PostgreSQL JDBC driver this is not true.
> I suppose that this problem happens because the Vector implementation …
We have had a number of fixes in the 7.1 beta JDBC driver. Could you
test that and let us know if it is still broken? You can get the JAR
file from the JDBC website at http://www.retep.org.uk.
> Hello to all,
> I'm writing because I have noticed a problem with the JDBC driver: the
> JDBC specification …
On Thu, Jan 11, 2001 at 08:57:26AM -0600, Tim White wrote:
> Does this provide true "point of failure" recovery? This sounds like no
> more than a cold backup,
> which does not provide "point of failure" recovery.
Yes, this is only for regular backup (but it doesn't require a long
downtime for your …
Hi,
Yes, it's true, but it's not so serious. Many other people and I work
with PostgreSQL and have never felt this limit. So if you want to
insert some large data in a row, for example a picture, you should use
large objects (see the manuals).

And now the good news: the upcoming PostgreSQL 7.1 …
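For completeness: the other workaround people used before 7.1, besides
large objects, was to split one big value into pieces small enough to
store one per row, keyed by an id and a sequence number. Here is a
sketch of just the splitting step; the table layout around it, and the
name Chunker, are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Workaround sketch for the pre-7.1 row-size limit: split one large
// byte array into chunks no bigger than maxChunk, so each chunk fits
// in its own row. Chunker is an invented name.
class Chunker {
    static List<byte[]> split(byte[] data, int maxChunk) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += maxChunk) {
            int len = Math.min(maxChunk, data.length - off);
            byte[] piece = new byte[len];
            System.arraycopy(data, off, piece, 0, len);
            chunks.add(piece);
        }
        return chunks;
    }
}
```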