branch id: Branch Identifier. Every RM involved in the global
transaction is given a *different* branch id.
Hm, I am confused then -- the XA spec definitely talks about enlisting
multiple RMs in a single transaction branch. Can you explain?
I oversimplified a bit. The TM *can* enlist
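The gtrid/bqual split described above can be sketched with a minimal Xid
implementation (a hypothetical illustration, not PostgreSQL driver code):
every branch carries the same global transaction id, while each enlisted
RM gets its own branch qualifier.

```java
import javax.transaction.xa.Xid;
import java.util.Arrays;

// Hypothetical sketch, not driver code: a global transaction id (gtrid)
// shared by all branches, plus a per-branch qualifier (bqual).
public class SimpleXid implements Xid {
    private final byte[] gtrid;
    private final byte[] bqual;

    public SimpleXid(byte[] gtrid, byte[] bqual) {
        this.gtrid = gtrid.clone();
        this.bqual = bqual.clone();
    }

    @Override public int getFormatId() { return 0x4242; } // arbitrary format id
    @Override public byte[] getGlobalTransactionId() { return gtrid.clone(); }
    @Override public byte[] getBranchQualifier() { return bqual.clone(); }

    public static void main(String[] args) {
        byte[] gtrid = {1, 2, 3};                          // one global transaction
        Xid branch1 = new SimpleXid(gtrid, new byte[]{1}); // RM #1's branch
        Xid branch2 = new SimpleXid(gtrid, new byte[]{2}); // RM #2's branch
        // Same global transaction, different branches:
        System.out.println(Arrays.equals(branch1.getGlobalTransactionId(),
                                         branch2.getGlobalTransactionId())); // true
        System.out.println(Arrays.equals(branch1.getBranchQualifier(),
                                         branch2.getBranchQualifier()));     // false
    }
}
```

Whether a TM hands two RMs the same bqual (one shared branch) or distinct
bquals (separate branches) is the TM's choice, which is what the "the TM
*can* enlist" caveat above is getting at.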
On Oracle 9.2 you get 0, 0, 0, and 2 rows.
--Barry
SQL> create table tab (col integer);
Table created.
SQL> select 1 from tab having 1=0;
no rows selected
SQL> select 1 from tab having 1=1;
no rows selected
SQL> insert into tab values (1);
1 row created.
SQL> insert into tab values (2);
1 row created.
Tom,
Your patch works for my test cases. Thanks to both you and Oliver for
getting this fixed.
--Barry
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 28, 2004 2:23 PM
To: Oliver Jowett
Cc: Barry Lind; [EMAIL PROTECTED]
Subject: Re: [HACKERS] [JDBC
and fixed it in 8.0 of the server. Any chance of
getting a backport? Or is my only option to run with protocolVersion=2
on the jdbc connection.
Thanks,
--Barry
-Original Message-
From: Oliver Jowett [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 24, 2004 1:38 AM
To: Barry Lind
)
java.sql.SQLException: ERROR: unrecognized node type: 0
Location: File: clauses.c, Routine: expression_tree_mutator, Line: 3220
Server SQLState: XX000
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 7:10 AM
To: Barry Lind
Cc: [EMAIL PROTECTED]; [EMAIL
to anyone.
Thanks,
--Barry
-Original Message-
From: Barry Lind
Sent: Friday, November 19, 2004 5:40 PM
To: Kris Jurka
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: [HACKERS] [JDBC] Strange server error with current 8.0beta
driver
Kris,
Environment #1: WinXP 8.0beta4 server, 8.0jdbc
working on.
Thanks,
--Barry
-Original Message-
From: Barry Lind
Sent: Monday, November 22, 2004 7:48 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: FW: [HACKERS] [JDBC] Strange server error with current 8.0beta
driver
I have been unable to come up with a simple test case
To: Barry Lind
Cc: [EMAIL PROTECTED]
Subject: Re: [JDBC] Strange server error with current 8.0beta driver
On Fri, 19 Nov 2004, Barry Lind wrote:
During my testing with the 8.0 driver, I am occasionally getting
failures. The strange thing is that a test will only fail 1 out of 10
times
Alvaro,
My proposal would be:
1. Begin main transaction: BEGIN { TRANSACTION | WORK }
2. Commit main (all) transaction: COMMIT { TRANSACTION | WORK }
3. Rollback main (all) transaction: ROLLBACK { TRANSACTION }
4. Begin inner transaction: BEGIN NESTED { TRANSACTION | WORK }
5. Commit inner
Am I the only one who has a hard time understanding why COMMIT is allowed
in the case of an error? Since nothing is actually committed (everything
was in fact rolled back), isn't it misleading to allow a commit under
these circumstances?
Then to further extend the commit syntax
Kris,
Thank you. I objected to having the jdbc code moved out of the base
product cvs tree for some of the reasons being discussed in this thread:
how are people going to find the jdbc driver, how will they get
documentation for it, etc.
I think the core problem is that some people view
Denis,
This is more appropriate for the jdbc mail list.
--Barry
Denis Khabas wrote:
Hi everyone:
I am using Postgresql 7.3.4 and found a problem inserting Timestamp objects through
JDBC Prepared Statements when the time zone is set to Canada/Newfoundland (3 hours and
30 minutes from GMT). I
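The half-hour offset is what makes this zone a good stress test. A quick
java.time check (a modern API used here only for illustration, not what a
7.3-era driver used) shows it:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class NewfoundlandOffset {
    // Canada/Newfoundland is one of the few zones with a half-hour UTC
    // offset, exactly the kind of value that breaks timestamp-handling
    // code that assumes whole-hour offsets.
    static ZoneOffset standardOffset() {
        // Pick a mid-winter instant so daylight saving time is not in effect.
        return ZoneId.of("Canada/Newfoundland").getRules()
                     .getOffset(Instant.parse("2004-01-15T00:00:00Z"));
    }

    public static void main(String[] args) {
        System.out.println(standardOffset()); // -03:30
    }
}
```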
Oliver Elphick wrote:
On Wed, 2004-03-03 at 04:59, Tom Lane wrote:
What might make sense is some sort of marker file in a
tablespace directory that links back to the owning $PGDATA directory.
CREATE TABLESPACE should create this, or reject if it already exists.
It will not be enough for the
Gavin,
After creating a tablespace what (if any) changes can be done to it.
Can you DROP a tablespace, or once created will it always exist? Can
you RENAME a tablespace? Can you change the location of a tablespace
(i.e you did a disk reorg and move the contents to a different location
and
Jan,
In Oracle a call from sql into java (be it trigger, stored procedure or
function), is required to be a call to a static method. Thus in Oracle
all the work is left for the programmer to manage object instances and
operate on the correct ones. While I don't like this limitation in
Tom Lane wrote:
Using tableoid instead of tablename avoids renaming problems, but makes
the names horribly opaque IMNSHO.
Agreed. I think using the OIDs would be a horrible choice.
As a point of reference Oracle uses a naming convention of 'C<n>' where
<n> is a sequence generated unique
Gmane,
I just checked in a fix to the jdbc driver for this. The problem was
that the connection termination message was being passed the wrong
length, which really didn't have any other adverse side effect than this
message in the log, since the connection was no longer being used.
thanks,
Chris,
SQL_ASCII means that the data could be anything. It could be Latin1,
UTF-8, Latin9, whatever the code inserting data sends to the server. In
general the server accepts anything as SQL_ASCII. In general this
doesn't cause any problems as long as all the clients have a common
Barry Lind wrote:
I'm a bit puzzled about the versions of the JDBC driver floating around.
I initially downloaded the release for 7.3 from jdbc.postgresql.org
Now I have seen that the JDBC driver which is included e.g. in the
RPM's
wrote:
On Thursday 05 June 2003 11:39, Barry Lind wrote:
Does anyone know why apparently the 7.3beta1 version of the jdbc drivers
are what is included in the 7.3.3 rpms?
The pg73b1jdbc3.jar file is very old (it is the 7.3 beta 1 version).
What RPMs are you using? You should contact whoever
Davide Romanini wrote:
Barry Lind wrote:
The charSet= option will no longer work with the 7.3 driver talking to
a 7.3 server, since character set translation is now performed by the
server (for performance reasons) in that scenario.
The correct solution here is to convert the database
Andreas,
From the JDBC side it really doesn't make that much difference. The
JDBC code needs to support both ways of doing it (explicit begin/commits
for 7.2 and earlier servers, and set autocommit for 7.3 servers), so
however it ends up for 7.4 it shouldn't be too much work to adopt. As
Tom,
From the jdbc driver perspective I prefer the GUC variable approach,
but either can be used. Each has limitations.
In 7.2 and earlier jdbc code the driver handled the transaction
semantics by adding begin/commit/rollback in appropriate places. And
that code is still in the 7.3 driver
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
One addition I would personally like to see (it comes up in my apps
code) is the ability to detect whether the server is big endian or
little endian. When using binary cursors this is necessary in order to
read int data.
Actually, my
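The reason endianness matters for binary cursors: the same four bytes
decode to different ints depending on the byte order assumed. A
self-contained illustration (not driver code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    // Decode the same four bytes as a big-endian and a little-endian int.
    // A client reading binary cursor data has to know which convention
    // the server used, or it reconstructs the wrong value.
    static int decode(byte[] raw, ByteOrder order) {
        return ByteBuffer.wrap(raw).order(order).getInt();
    }

    public static void main(String[] args) {
        byte[] raw = {0, 0, 0, 1};
        System.out.println(decode(raw, ByteOrder.BIG_ENDIAN));    // 1
        System.out.println(decode(raw, ByteOrder.LITTLE_ENDIAN)); // 16777216
    }
}
```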
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
AFAICS the only context where this could make sense is binary
transmission of parameters for a previously-prepared statement. We do
have all the pieces for that on the roadmap.
Actually it is the select of binary data that I was referring
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
Tom Lane wrote:
See binary cursors ...
Generally that is not an option. It either requires users to code to
postgresql specific sql syntax, or requires the driver to do it
magically for them.
Fair enough. I don't see anything much
Dave Page wrote:
I don't know about JDBC, but ODBC could use it, and it would save a heck
of a lot of pain in apps like pgAdmin that need to figure out if a column
in an arbitrary resultset might be updateable.
At the moment there is some nasty code in pgAdmin II that attempts to
parse the SQL
I don't see any jdbc specific requirements here, other than the fact
that jdbc assumes that the following conversions are done correctly:
dbcharset <-> utf8 <-> java/utf16
where the dbcharset to/from utf8 conversion is done by the backend and
the utf8 to/from java/utf16 is done in the jdbc driver.
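The driver-side half of that chain (UTF-8 wire bytes to Java's internal
UTF-16 strings and back) boils down to this; a simplified sketch, not the
actual driver code:

```java
import java.nio.charset.StandardCharsets;

public class CharsetChain {
    // The backend hands the driver UTF-8 bytes; the driver turns them
    // into Java's UTF-16 String representation (and back on the way out).
    static String fromServer(byte[] utf8Bytes) {
        return new String(utf8Bytes, StandardCharsets.UTF_8);
    }

    static byte[] toServer(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String s = "caf\u00e9";                 // non-ASCII round-trip
        byte[] wire = toServer(s);
        System.out.println(wire.length);        // 5: 'é' takes two UTF-8 bytes
        System.out.println(fromServer(wire).equals(s)); // true
    }
}
```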
Jeremy,
This appears to be a bug in the database. I have been able to
reproduce. It appears that the new 'autocommit' functionality in 7.3
has a problem.
The jdbc driver is essentially issuing the following sql in your example:
set autocommit = off; -- result of the setAutoCommit(false)
Mats,
Patch applied. (I also fixed the 'length' problem you reported as well).
thanks,
--Barry
Mats Lofkvist wrote:
(I posted this on the bugs and jdbc newsgroups last week
but have seen no response. Imho, this really needs to
be fixed since the bug makes it impossible to use the
driver in a
Forwarding to hackers a discussion that has been happening off list.
--Barry
Original Message
Subject: Re: [HACKERS] PG functions in Java: maybe use gcj?
Date: 01 Nov 2002 19:13:39 +
From: Oliver Elphick [EMAIL PROTECTED]
To: Barry Lind [EMAIL PROTECTED]
References: [EMAIL
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
In either case I am concerned about licensing issues. gcj is not under
a BSD style license. Depending on what you need you are either dealing
with regular GPL, LGPL, or LGPL with a special java exception.
I believe (without giving
I am not sure I follow. Are you suggesting:
1) create function takes java source and then calls gcj to compile it
to native and build a .so from it that would get called at runtime?
or
2) create function takes java source and just compiles to java .class
files and the runtime invokes the
After turning autocommit off on my test database, my cron scripts that
vacuum the database are now failing.
This can be easily reproduced, turn autocommit off in your
postgresql.conf, then launch psql and run a vacuum.
[blind@blind databases]$ psql files
Welcome to psql 7.3b2, the PostgreSQL
I was spending some time investigating how to fix the jdbc driver to
deal with the autocommit functionality in 7.3. I am trying to come up
with a way of using 'set autocommit = on/off' as a way of implementing
the jdbc semantics for autocommit. The current code just inserts a
'begin' after
create table test (col_a bigint);
update test set col_a = nullif('200', -1);
The above works fine on 7.2 but the update fails on 7.3b2 with the
following error:
ERROR: column col_a is of type bigint but expression is of type text
You will need to rewrite or cast the expression
Is this
Tom Lane wrote:
I would say that that is a very bad decision in the JDBC driver and
should be reverted ... especially if the driver is not bright enough
to notice the context in which the parameter is being used. Consider
for example
...
You are trying to mask a server problem in the
I am waiting for this thread to conclude before deciding exactly what to
do for the jdbc driver for 7.3. While using the 'set autocommit true'
syntax is nice when talking to a 7.3 server, the jdbc driver also needs
to be backwardly compatible with 7.2 and 7.1 servers. So it may just be
easier
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 09, 2002 2:54 PM
To: Bruce Momjian
Cc: Barry Lind; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: [JDBC] [HACKERS] problem with new autocommit config
parameter and jdbc
Bruce Momjian [EMAIL PROTECTED] writes:
Barry Lind wrote:
Haris,
You can't use jdbc (and probably most other postgres clients) with
autocommit in postgresql.conf turned off.
Hackers,
How should client interfaces handle this new autocommit feature? Is it
best to just issue a set at the beginning of the connection to ensure
In testing the new 7.3 prepared statement functionality I have come
across some findings that I cannot explain. I was testing using PREPARE
for a fairly complex sql statement that gets used frequently in my
application. I used the timing information from:
show_parser_stats = true
Haris,
You can't use jdbc (and probably most other postgres clients) with
autocommit in postgresql.conf turned off.
Hackers,
How should client interfaces handle this new autocommit feature? Is it
best to just issue a set at the beginning of the connection to ensure
that it is always on?
Wouldn't it make sense to implement autovacuum information in a structure
like the FSM, a Dirty Space Map (DSM)? As blocks are dirtied by
transactions they can be added to the DSM. Then vacuum can give
priority processing to those blocks only. The reason I suggest this is
that in many usage
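A toy model of the proposed DSM (the class and method names here are my
own invention, nothing below is from PostgreSQL): one bit per heap block,
set when a transaction dirties the block, consulted and cleared by vacuum.

```java
import java.util.BitSet;

public class DirtySpaceMap {
    // One bit per heap block: set when a transaction dirties the block,
    // cleared once vacuum has processed it.
    private final BitSet dirty;

    DirtySpaceMap(int nBlocks) { dirty = new BitSet(nBlocks); }

    void markDirty(int block)    { dirty.set(block); }
    void markVacuumed(int block) { dirty.clear(block); }

    // Vacuum visits only the dirtied blocks instead of scanning them all.
    int[] blocksNeedingVacuum() {
        return dirty.stream().toArray();
    }

    public static void main(String[] args) {
        DirtySpaceMap dsm = new DirtySpaceMap(1_000_000);
        dsm.markDirty(7);
        dsm.markDirty(42);
        System.out.println(dsm.blocksNeedingVacuum().length); // 2
    }
}
```

The appeal is the same as the FSM's: a small, fixed-size side structure
that lets vacuum skip the (usually large) clean majority of the table.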
Then shouldn't this appear on the Open 7.3 issues list that has been
circulating around? This seems like an open issue to me, that needs to
be addressed before 7.3 ships.
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
You can no longer insert large values into a bigint
It is certainly possibly. We have added that type of functionality to
our inhouse CVS system. Below is an example. We include at the bottom
of the checkin mail a link to the webcvs diff page so you can quickly
see what changed for a particular checkin.
--Barry
wfs checkin by barry
I was just testing my product running on a 7.3 snapshot from a few days
ago. And I ran into the following change in behavior that I consider a
bug. You can no longer insert large values into a bigint column without a
cast. Small values (in the int range) work fine though.
On 7.3 I get:
Tom Lane wrote:
Also, for Mario and Barry: does this test case look anything like what
your real applications do? In particular, do you ever do a SELECT FOR
UPDATE in a transaction that commits some changes, but does not update
or delete the locked-for-update row? If not, it's possible
Neil Conway wrote:
On Sat, Jul 20, 2002 at 10:00:01PM -0400, Tom Lane wrote:
AFAICT, the syntax we are setting up with actual SQL following the
PREPARE keyword is *not* valid SQL92 nor SQL99. It would be a good
idea to look and see whether any other DBMSes implement syntax that
is
When trying to perform a full vacuum I am getting the following error:
ERROR: No one parent tuple was found
Plain vacuum works fine. Thinking it might be a problem with the
indexes I have rebuilt them but still get the error. What does this
error indicate and what are my options to solve
Tom,
It was not compiled with debug. I will do that now and see if this
happens again in the future. If and when it happens again what would
you like me to do? I am willing provide you access if you need it.
thanks,
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
When
1004  0 Jul03 ?  00:00:00 postgres: stats buffer process
postgres  1070  1069  0 Jul03 ?  00:00:00 postgres: stats collector proces
I then reconnected via psql and reran the vacuum full getting the same
error.
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes
Tom,
No. Restarting the postmaster does not resolve the problem. I am going
to put the debug build in place and see if I can still reproduce.
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
The only postgres processes running are:
[root@cvs root]# ps -ef | grep
application.
I need the app up and running, but I did shut it down and created a backup of the
entire directory as you suggested.
thanks,
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
No. Restarting the postmaster does not resolve the problem.
Now you've got my
I know that in Oracle there are 'alter database begin backup' and 'alter
database end backup' commands that allow you to script your hot backups
through a cron job by calling the begin backup command first, then using
disk backup method of choice and then finally call the end backup command.
This
Hannu Krosing wrote:
DELETE relation_expr FROM relation_expr [ , table_ref [ , ... ] ]
[ WHERE bool_expr ]
This in some ways is similar to Oracle where the FROM is optional in a
DELETE (ie. DELETE foo WHERE ...). By omitting the first FROM, the
syntax ends up mirroring the
It means you are running a jdbc driver from 7.2 (perhaps 7.1, but I
think 7.2) against a 6.5 database. While we try to make the jdbc driver
backwardly compatible, we don't go back that far. You really should
consider upgrading your database to something remotely current.
thanks,
--Barry
I just did a fresh build from current cvs and found the following
regression from 7.2:
create table test (cola bigint);
update test set cola = 100;
In 7.3 the update results in the following error:
ERROR: column cola is of type 'bigint' but expression is of type
'double precision'
The problem with this is that the existing functionality of LOs allows
you to share a single LO across multiple tables. There may not be a
single source, but multiple. Since LOs just use an OID as a FK to the
LO, you can store that OID in multiple different tables.
--Barry
Mario Weilguni
, but gives the following error in 7.3:
ERROR: JOIN/ON clause refers to x1, which is not part of JOIN
thanks,
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
In testing Neil's PREPARE/EXECUTE patch on my test query, I found the
parser complains that this query is not valid when
Curt Sampson wrote:
On Thu, 11 Apr 2002, Barry Lind wrote:
I'm not sure that JDBC would use this feature directly. When a
PreparedStatement is created in JDBC there is nothing that indicates
how many times this statement is going to be used. Many (most IMHO)
will be used only once
In benchmarks that I have done in the past comparing performance of
Oracle and Postgres in our web application, I found that I got ~140
requests/sec on Oracle and ~50 requests/sec on postgres.
The code path in my benchmark only issues one sql statement. Since I
know that Oracle caches query
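The kind of plan caching being alluded to can be modeled like this (a toy
sketch under my own naming, not Oracle's or PostgreSQL's actual
mechanism): plan once per distinct SQL text, reuse on every later
execution.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PlanCache {
    private int planCount = 0;

    // Capacity-bounded LRU map from SQL text to its cached "plan",
    // as a real server-side cache would be.
    private final Map<String, String> cache =
        new LinkedHashMap<>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                return size() > 100;
            }
        };

    String planFor(String sql) {
        return cache.computeIfAbsent(sql, s -> {
            planCount++;               // stands in for the expensive parse/plan step
            return "plan#" + planCount;
        });
    }

    int timesPlanned() { return planCount; }

    public static void main(String[] args) {
        PlanCache pc = new PlanCache();
        for (int i = 0; i < 1000; i++) pc.planFor("SELECT * FROM orders WHERE id = ?");
        System.out.println(pc.timesPlanned()); // 1: planned once, reused 999 times
    }
}
```

In a benchmark that issues the same statement repeatedly, skipping 999 of
1000 parse/plan steps is exactly where a cached-plan server gains ground.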
worked in 7.1 and 7.2). Is
this a bug?
thanks,
--Barry
PS. I forgot to mention that the below performance numbers were done on
7.2 (not current sources).
Barry Lind wrote:
In benchmarks that I have done in the past comparing performance of
Oracle and Postgres in our web application, I
Neil Conway wrote:
I would suggest using it any time you're executing the same query
plan a large number of times. In my experience, this is very common.
There are already hooks for this in many client interfaces: e.g.
PreparedStatement in JDBC and $dbh->prepare() in Perl DBI.
I'm not
Tom Lane wrote:
Yes, that is the part that was my sticking point last time around.
(1) Because shared memory cannot be extended on-the-fly, I think it is
a very bad idea to put data structures in there without some well
thought out way of predicting/limiting their size. (2) How the heck do
a
related question. Your summary was: Bottom line: feeding huge strings
through the lexer is slow.
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
In looking at some performance issues (I was trying to look at the
overhead of toast) I found that large insert statements were very
Neil Conway wrote:
On Thu, 11 Apr 2002 16:25:24 +1000
Ashley Cambrell [EMAIL PROTECTED] wrote:
What are the chances that the BE/FE will be altered to take advantage of
prepare / execute? Or is it something that will never happen?
Is there a need for this? The current patch I'm working
Tom,
My feeling is that this change as currently scoped will break a lot of
existing apps. Especially the case where people are using where clauses
of the form: bigintcolumn = '999' to get a query to use the index on
a column of type bigint.
thanks,
--Barry
Tom Lane wrote:
Awhile back
Neil,
Will this allow you to pass bytea data as binary data in the parameters
section (ability to bind values to parameters) or will this still
require that the data be passed as a text string that the parser needs
to parse. When passing bytea data that is on the order of Megs in size
(thus
OK. My mistake. In looking at the regression failures in your post, I
thought I saw errors being reported of this type. My bad.
--Barry
Tom Lane wrote:
Barry Lind [EMAIL PROTECTED] writes:
My feeling is that this change as currently scoped will break a lot of
existing apps. Especially
Tom Lane wrote:
Note: I am now pretty well convinced that we *must* fix SET to roll back
to start-of-transaction settings on transaction abort. If we do that,
at least some of the difficulty disappears for JDBC to handle one-shot
timeouts by issuing SETs before and after the target query
As far as I know Oracle doesn't have any shortcut (along the lines of
what is being discussed in this thread) for this operation. However
Oracle is more efficient in providing the answer than postgres currently
is. While postgres needs to perform a full scan on the table, Oracle
will only
Since both the JDBC and ODBC specs have essentially the same semantics
for this, I would hope this is done in the backend instead of both
interfaces.
--Barry
Jessica Perry Hekman wrote:
On Mon, 1 Apr 2002, Tom Lane wrote:
On the other hand, we do not have anything in the backend now that
Jessica,
My reading of the JDBC spec would indicate that this is a statement
level property (aka query level) since the method to enable this is on
the Statement object and is named setQueryTimeout(). There is nothing I
can find that would indicate that this would apply to the transaction in
The spec isn't clear on that point, but my interpretation is that it
would apply to all types of statements not just queries.
--Barry
Peter Eisentraut wrote:
Barry Lind writes:
My reading of the JDBC spec would indicate that this is a statement
level property (aka query level) since
Jason,
BLOBs as you have correctly inferred do not get automatically deleted.
You can add triggers to your tables to delete them automatically if you
so desire.
However 'bytea' is the datatype that is most appropriate for your needs.
It has been around for a long time, but not well
Also note that an uncommitted select statement will lock the table and
prevent vacuum from running. It isn't just inserts/updates that will
lock and cause vacuum to block, but selects as well. This got me in the
past. (Of course this is all fixed in 7.2 with the new vacuum
functionality
Chris,
Current sources for the jdbc driver does support the bytea type.
However the driver for 7.1 does not.
thanks,
--Barry
Chris Bitmead wrote:
Use bytea, it's for 0-255 binary data. When your client
library does not support it, then base64 it in client side
and later decode() into
Thomas,
Can you explain more how this functionality has changed? I know that in
the JDBC driver fractional seconds are assumed to be two decimal places.
If this is no longer true, I need to understand the new semantics so
that the JDBC parsing routines can be changed. Other interfaces may
Haller,
The way I have handled this in the past is to attempt the following
insert, followed by an update if the insert doesn't insert any rows:
insert into foo (fooPK, foo2)
select 'valuePK', 'value2'
where not exists
(select 'x' from foo
where fooPK = 'valuePK')
if number of rows
Kovacs,
A 'union all' will be much faster than 'union'. 'union all' returns all
results from both queries, whereas 'union' will return all distinct
records. The 'union' requires a sort and a merge to remove the
duplicate values. Below are explain output for a union query and a
union all
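The semantic difference is easy to mimic outside SQL; a toy Java
illustration (not how the server implements it):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UnionDemo {
    // UNION ALL is plain concatenation; UNION additionally removes
    // duplicate rows, which is why the server must sort (or hash) the
    // combined result before returning it.
    static List<Integer> unionAll(List<Integer> a, List<Integer> b) {
        return Stream.concat(a.stream(), b.stream())
                     .collect(Collectors.toList());
    }

    static List<Integer> union(List<Integer> a, List<Integer> b) {
        return Stream.concat(a.stream(), b.stream())
                     .distinct()
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> a = List.of(1, 2, 3), b = List.of(2, 3, 4);
        System.out.println(unionAll(a, b)); // [1, 2, 3, 2, 3, 4]
        System.out.println(union(a, b));    // [1, 2, 3, 4]
    }
}
```

The extra `distinct()` step is the analogue of the sort-and-merge the
'union' query pays for and the 'union all' query avoids.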
Rene,
Since the FE/BE protocol deals only with string representations of
values, the protocol doesn't have too much to do with it directly. It
is what happens on the client and server sides that is important here.
Under the covers the server stores all timestamp values as GMT. When a
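The "stored as GMT" point can be seen with java.time (a modern API used
here purely for illustration): one stored instant, two client zones,
different wall-clock renderings of the same underlying value.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class GmtStorage {
    // The same stored GMT instant rendered in a client time zone: the
    // zone affects only the display, not the value the server keeps.
    static ZonedDateTime render(Instant stored, String zone) {
        return stored.atZone(ZoneId.of(zone));
    }

    public static void main(String[] args) {
        Instant stored = Instant.parse("2004-11-28T12:00:00Z");
        ZonedDateTime ny = render(stored, "America/New_York");
        ZonedDateTime tokyo = render(stored, "Asia/Tokyo");
        System.out.println(ny.getHour());    // 7  (EST, UTC-5)
        System.out.println(tokyo.getHour()); // 21 (JST, UTC+9)
        // Different wall-clock times, same stored instant:
        System.out.println(ny.toInstant().equals(tokyo.toInstant())); // true
    }
}
```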
is the proper encoding for your database
For completeness, I quote the answer Barry Lind gave yesterday.
[the driver] asks the server what character set is being used
for the database. Unfortunately the server only knows about
character sets if multibyte support is compiled
I agree with Hannu, that:
* make SQL changes to allow PREPARE/EXECUTE in main session, not only
in SPI
is an important feature to expose out to the client. My primary reason
is a perfomance one. Allowing the client to parse a SQL statement once
and then supplying bind values for
Can this be added to the TODO list? (actually put back on the TODO list)
Along with this email thread?
I feel that it is very important to have BLOB support in postgres that
is similar to what the commercial databases provide. This could either
mean fixing the current implementation or
I was going through the Todo list looking at the items that are planned
for 7.2 (i.e. those starting with a '-'). I was doing this to see if
any might impact the jdbc driver. The only one that I thought might
have an impact on the jdbc code is the item:
* -Make binary/file in/out interface
My problem is that my two outer joined tables have columns that have the
same names. Therefore when my select list tries to reference the
columns they are ambiguously defined. Looking at the doc I see the way
to deal with this is by using the following syntax:
table as alias (column1alias,
Peter,
Yes I had the same problem, but for me the reason was that I forgot to
start the ipc-daemon before running initdb. Last night I had no
problems installing beta4 on WinNT 4.0.
thanks,
--Barry
Peter T Mount wrote:
Quoting Tom Lane [EMAIL PROTECTED]:
This doesn't make any sense,
Not knowing much about WAL, but understanding a good deal about Oracle's
logs, I read the WAL documentation below. While it is good, after
reading it I am still left with a couple of questions and therefore
believe the doc could be improved a bit.
The two questions I am left with after reading
I meant to ask this the last time this came up on the list, but now is a
good time. Given what Tom describes below as the behavior in 7.1
(initdb stores the locale info), how do you determine what locale a
database is running in in 7.1 after initdb? Is there some file to look
at? Is there some
In researching a problem I have uncovered the following bug in index
scans when Locale support is enabled.
Given a 7.0.3 postgres installation built with Locale support enabled
and a default US RedHat 7.0 Linux installation (meaning that the LANG
environment variable is set to en_US) to enable