Re: [GENERAL] What's faster: value of 0 or NULL with index

2000-12-11 Thread GH

On Mon, Dec 11, 2000 at 04:28:24AM +0100, some SMTP stream spewed forth: 
 Hi,
 
 I'm thinking about, what might be faster on SELECTs: a column with index
 which is NOT NULL and takes the value of 0 or a column which can take
 the NULL value instead of 0, also with index.
 
 My feeling says that 0 and NOT NULL should be a lot faster, but
 perhaps that's not true?

If your SELECT uses the index on the NOT NULL column, then yes, the 
indexed 0 should be faster. I think it takes less space as well.(?)
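
A minimal sketch of the comparison (table and column names are made up);
note that PostgreSQL of this vintage cannot use a btree index for an
IS NULL test, which is another point in favor of the indexed 0:

create table t (val int4 not null default 0);
create index t_val_idx on t (val);

select * from t where val = 0;  -- can use t_val_idx
-- with a nullable column, "where val is null" could not use the index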

gh

 
 
 bye
   Alvar
 
 
 -- 
 Alvar C.H. Freude  |  [EMAIL PROTECTED]
 
 Demo: http://www.online-demonstration.org/  |  Mach mit!
 Blast-DE: http://www.assoziations-blaster.de/   |  Blast-Dich-Fit
 Blast-EN: http://www.a-blast.org/   |  Blast/english



[GENERAL] function that return multiple fields and rows

2000-12-11 Thread Wooi K

Hi All, 
I would like to write a function that returns multiple rows and multiple 
fields, something like this.  I know the syntax is probably wrong; any help 
would be appreciated.

CREATE FUNCTION GetGuestbookMessages (int4) RETURNS SETOF VARCHAR AS
'
SELECT guestname, guestemail FROM GuestbookMessages WHERE UserId = $1
' LANGUAGE 'sql';

notice that I want to return more than one field and more than one record in 
the above function.  Thanks.

wooi.



Re: [GENERAL] Problems with starting Postgres

2000-12-11 Thread Alessio Bragadini

Tom Lane wrote:

 A plain kill (SIGTERM) should remove the socket file on its way out.
 The only thing that wouldn't is kill -9 ... if you're doing it that
 way, don't.

The problem is an unexpected crash/reboot of the machine (which
shouldn't happen, but...) that leaves socket files in /tmp and requires
manual tweaking after the machine is up again. Maybe it's our
installation, but we need a more reliable pg_ctl.

-- 
Alessio F. Bragadini[EMAIL PROTECTED]
APL Financial Services  http://village.albourne.com
Nicosia, Cyprus phone: +357-2-755750

"It is more complicated than you think"
-- The Eighth Networking Truth from RFC 1925



Re: [GENERAL] overhead of small large objects

2000-12-11 Thread Denis Perchine

  Is there significant overhead involved in using large objects that
  aren't very large?

 Yes, since each large object is a separate table in 7.0.* and before.
 The allocation unit for table space is 8K, so your 10K objects chew up
 16K of table space.  What's worse, each LO table has a btree index, and
 the minimum size of a btree index is 16K --- so your objects take 32K
 apiece.

 That accounts for a factor of 3.  I'm not sure where the other 8K went.
 Each LO table will require entries in pg_class, pg_attribute, pg_type,
 and pg_index, plus the indexes on those tables, but that doesn't seem
 like it'd amount to anything close to 8K per LO.

 7.1 avoids this problem by keeping all LOs in one big table.

Or you can use my patch for the same functionality in 7.0.x.
You can get it at: http://www.perchine.com/dyp/pg/

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[GENERAL] mysql issues

2000-12-11 Thread George Johnson



Hi,

My mailer crashed, but before I had to delete all the messages I saw a
thread regarding making it easier for mysql users to port over to
postgresql. Well guys, you've gone and dug a hole for yourselves,
ESPECIALLY by adding the limitless row lengths in 7.1. With the
performance gains, reliability, history, and now freedom from the 8k
row-length limit, you're going to have a scary number of "how do I do
this?" questions from converted mysql users (myself being new as of last
week). I'll hold off telling my 10,000 friends about how great postgresql
is until you decide how you're going to deal with your success
(kidding). :-0

I've been able to convert my interface applications from mysql sql to
postgresql specifics pretty easily. There are some things that required
some tricky workarounds, but come to think of it, while the majority of
sql statements had to be carefully examined, most did NOT need to be
reworked. But then again, I tried to stick to SQL standard stuff rather
than mysql specifics. I am willing to bet many mysql users aren't in the
same boat I am -- and possibly their SQL savvy isn't at a "been there,
done that a few times" level.

Some things that created problems: 


"drop table if exists tablename"
[oh boy, is this a great one. imagine not 
even having to worry if the table is there or not]
"alter table tablename drop column 
columnname"
"replace into tablename (col1,col2,col3) values 
(val1, val2, val3)"
[ replace is the combo of a delete and insert, 
sorta like a cover yur butt update
 statement ]
"show tables" -- a great way of just getting a 
listing of the tables in ur database.
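
For what it's worth, a rough sketch of how the last two might be
approximated in PostgreSQL (table and column names here are made up;
there is no built-in REPLACE or SHOW TABLES):

-- approximate "replace into" with a delete-then-insert in one transaction
begin;
delete from tablename where col1 = 'val1';
insert into tablename (col1, col2, col3) values ('val1', 'val2', 'val3');
commit;

-- approximate "show tables" by querying the system catalogs
select relname from pg_class
where relkind = 'r' and relname not like 'pg_%';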

Some things that were problems, but should not be 
changed:

postgresql requires stricter syntax with GROUP BY, and with combining
grouping functions and non-grouping columns. That's just mysql letting
crappy SQL slide, which it should not. Mysql has some JOINs and other
add-ons that are used because of its lack of subselects. I AM WILLING TO
BET everyone who is using the more exotic joins and functions started out
beating their heads because mysql had no subselects, which would have made
things oh-so-much easier. You end up creating a crazy amount of temporary
tables to get around the lack of subselects, doing crazy optimizations to
deal with having so many temporary tables, etc.

Some things that were problems, but should 
change:

Mysql has a richer set of date and time functions and easy-to-use
conversions. PostgreSQL should have that at some point.

I don't know if postgresql has set and enumeration data types ... or if
Array covers that. But as far as storing data, nothing beats an automatic
conversion of 2 million city names into a set of 150 city names, with only
the bits needed to store the city name per row and a cross-reference to
the enumeration or set value. I'm sure there are better examples of that.

Other than these things, the only big-time differences are some different
function definitions (which I'd assume can be easily implemented) and just
who's gotten farther along in implementing SQL92 or whichever one of those
standards everybody strives toward.

The big problem is going to come from users of MySQL who are not
experienced SQL DBAs and who relied heavily on the snappy features of
MySQL (a la web administration stuff).

I hope this helps a little bit in your decision about whether to expend
energy toward making MySQL Friends & Family.

George Johnson



RE: [GENERAL] Re: Re: Why PostgreSQL is not that popular as MySQL ?

2000-12-11 Thread Matthew

 "Brett W. McCoy" [EMAIL PROTECTED] writes:
  On Sun, 10 Dec 2000, Matthew wrote:
  [Matthew]  Would it make sense for postgre to have a mysql
  compatibility module?  An add-on package (perhaps in contrib) that
  would add many of the functions that mysql has that postgre does not.
 
  I think it would be wasted effort.  I would rather the developers focus on
  PostgreSQL, not MySQL, or Access, or whatever.
 
 I agree that the key developers shouldn't spend much time on such a
 thing, but on the other hand this isn't a project that requires a key
 developer to get done.  If Matthew or someone else feels like spending
 time on it, I wouldn't object...
 
[Matthew]  I agree also, I didn't mean to imply that a core
developer work on it.  I was just asking if this was a project that would
interest people.  If it has enough demand I would get started on it.  I
don't know a whole lot about mysql, but getting a function list and
comparing it to postgre shouldn't be too hard.



RE: [GENERAL] Re: Re: Why PostgreSQL is not that popular as MySQL ?

2000-12-11 Thread Brett W. McCoy

On Mon, 11 Dec 2000, Matthew wrote:

  I agree that the key developers shouldn't spend much time on such a
  thing, but on the other hand this isn't a project that requires a key
  developer to get done.  If Matthew or someone else feels like spending
  time on it, I wouldn't object...
 
   [Matthew]  I agree also, I didn't mean to imply that a core
 developer work on it.  I was just asking if this was a project that would
 interest people.  If it has enough demand I would get started on it.  I
 don't know a whole lot about mysql, but getting a function list and
 comparing it to postgre shouldn't be too hard.

Ah, ok, I misunderstood your intention.

You know what also would be good?  A guide to porting from MySQL to
PostgreSQL, like a mini-HOWTO or something along those lines.  I've done
some MySQL development (although I was using PostgreSQL even before I knew
MySQL existed, believe it or not), so I would be willing to help in this
endeavour.

-- Brett
 http://www.chapelperilous.net/~bmccoy/
---
Get forgiveness now -- tomorrow you may no longer feel guilty.




Re: [GENERAL] Regular expression question

2000-12-11 Thread Tom Lane

Steve Heaven [EMAIL PROTECTED] writes:
 Does the regular expression parser have anything equivalent to Perl's \b
 word-boundary metacharacter?

src/backend/regex/re_format.7 contains the whole scoop (for some reason
this page doesn't seem to get installed with the rest of the
documentation).  In particular:

There are two special cases of bracket expressions:
the bracket expressions `[[:<:]]' and `[[:>:]]' match the null
string at the beginning and end of a word respectively.
A word is defined as a sequence of word characters
which is neither preceded nor followed by word characters.
A word character is an alnum character (as defined by ctype(3))
or an underscore.  This is an extension, compatible with but not
specified by POSIX 1003.2, and should be used with caution in
software intended to be portable to other systems.

...

BUGS

The syntax for word boundaries is incredibly ugly.

POSIX bracket expressions are pretty ugly anyway, and this is no worse
than the rest.  However, if you prefer Perl or Tcl, I'd recommend that
you just *use* Perl or Tcl ;-).  plperl and pltcl make great
implementation languages for text-mashing functions...
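
For example (hypothetical table and column), the bracket expressions
above give a word-boundary match directly in a query:

-- matches rows where body contains "cat" as a whole word,
-- but not "catalog" or "scat"
select * from docs where body ~ '[[:<:]]cat[[:>:]]';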

regards, tom lane



RE: [GENERAL] Regular expression question

2000-12-11 Thread Michael Ansley

Yes, that's right :-0 Sorry!


-Original Message-
From: Steve Heaven [mailto:[EMAIL PROTECTED]]
Sent: 11 December 2000 15:09
To: Michael Ansley; [EMAIL PROTECTED]
Subject: RE: [GENERAL] Regular expression question



At 14:58 11/12/00 -0000, Michael Ansley wrote:
 Hmmm, what I proposed earlier (CREATE INDEX foo ON employees
 (UPPER(firstname));) seems to work fine in 7.1 but not 7.0.2. Cheers...
 MikeA 


I think you're answering the wrong question.
It was David Hancock [EMAIL PROTECTED] who was asking the upper/lower
case question. Mine was about a metacharacter for word boundaries, cf.
Perl's \b.


Steve



-- 
thorNET - Internet Consultancy, Services & Training
Phone: 01454 854413
Fax: 01454 854412
http://www.thornet.co.uk 







Re: [GENERAL] Simple Question: Case sensitivity

2000-12-11 Thread Tom Lane

Tomas Berndtsson [EMAIL PROTECTED] writes:
 Related to this, is there any way to make an index for a table
 case-insensitive? If you have an index, but use upper() in the select,
 the index is not used.

Sure, make a functional index:

play=> create table foo (f1 text);
CREATE
play=> create index fooi on foo (upper(f1));
CREATE

This index will be considered for queries like:

play=> explain select * from foo where upper(f1) = 'z';
NOTICE:  QUERY PLAN:

Index Scan using fooi on foo  (cost=0.00..8.16 rows=10 width=12)

EXPLAIN
play=> explain select * from foo where upper(f1) > 'a' and upper(f1) < 'z';
NOTICE:  QUERY PLAN:

Index Scan using fooi on foo  (cost=0.00..8.21 rows=10 width=12)

EXPLAIN

You can use the same sort of ploy for lower() or any other simple
function of the table's columns.  Don't go overboard with a ton of
indexes though; remember each index costs time when updating the
table...

regards, tom lane



Re: [GENERAL] Simple Question: Case sensitivity - Performance?

2000-12-11 Thread Tim Uckun

At 03:43 PM 12/11/2000 +0000, [EMAIL PROTECTED] wrote:

As an aside, in DB2 there is the concept of a shared-weight index which,
depending on locale, lumps lower/upper case characters together so that
you don't have to include an UPPER in the SQL - and it will use the
index. Perhaps postgres can also work this way?

MS-SQL server and sybase both have an option which allows you to do case 
insensitive queries as a database setting. I know that this is not standard 
SQL behavior, but considering that MS Access also treats all of its queries 
in a case-insensitive manner, I think it makes sense to try and accommodate 
people migrating from SQL Server/Sybase or upsizing from Access.

I have thought of doing one of the following.

1) Overload the = operator for varchar, text, bpchar etc. so that it 
compares insensitively (a rough sketch follows the list below). Would this 
seriously disrupt the database? How would it affect GROUP BYs and ORDER BYs?

2) Dig into the code and change the varlena functions so that they do an 
upper before the strcmp. Again, how seriously would this disrupt the rest 
of the system?

3) Create a case-insensitive locale. This certainly seems the cleanest 
approach and is unlikely to break the database in any way. I am surprised 
nobody has done this yet. Is there a document which describes how to create 
locales?
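
A very rough sketch of option (1), defining a new case-insensitive
operator instead of replacing = itself (the operator and function names
here are made up):

create function texticeq(text, text) returns bool as '
    select upper($1) = upper($2);
' language 'sql';

create operator ~== (
    leftarg = text,
    rightarg = text,
    procedure = texticeq
);

-- usage: select * from employees where firstname ~== 'smith';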

:wq
Tim Uckun
Due Diligence Inc. http://www.diligence.com/   Americas Background 
Investigation Expert.
If your company isn't doing background checks, maybe you haven't considered 
the risks of a bad hire.




[GENERAL] query time in psql

2000-12-11 Thread Juriy Goloveshkin

is it possible to shop query time in psql frontend?
(like in mysql)

-- 
Bye
Juriy Goloveshkin



[GENERAL] Great Bridge PostgreSQL products and services

2000-12-11 Thread Ned Lilly

Hello all,

Great Bridge formally announced its first product and service offerings 
today.  Here are the highlights:

* QA-tested distribution of PostgreSQL 7.0.3 for Linux (free, source 
  and binaries available at http://www.greatbridge.com/download)
* Automated graphical installer (free, source and binaries available 
  at http://www.greatbridge.org/project/gbinstaller/)
* 500+ pages of documentation (free, available at 
  http://www.greatbridge.com/docs)
* professional support offerings ranging all the way up to 24 
  hours/7 days
* consulting services ranging from planning and design to porting 
  and implementation
  
I'd be happy to answer any questions on- or off-list.  Or of course you 
can talk to John Rickman, our VP Sales, [EMAIL PROTECTED]

Here's a link to the announcement:

http://www.greatbridge.com/about/press.php?content_id=23

Regards,
Ned

-- 

Ned Lilly                      e: [EMAIL PROTECTED]
Vice President                 w: www.greatbridge.com
Evangelism / Hacker Relations  v: 757.233.5523
Great Bridge, LLC              f: 757.233.




Re: [GENERAL] Postgres demographics?

2000-12-11 Thread Jeff MacDonald

Hi,

We have some demographics available at http://www.pgsql.com/user_gallery

Jeff

On Thu, 7 Dec 2000, GH wrote:

 
 Has anybody collected information about who uses Postgres, how, and to do
 what? I think it would be interesting to see where our Collective 
 got its roots.
 
 Personally, I came from a PHP background using MySQL. An eCommerce (oh, I
 hate that word) project stretched the limits of MySQL and Postgres fit
 the bill very nicely. I was somewhat hesitant due to the massive
 anti-Postgres propaganda spread by just about everybody, but I am glad I
 made the switch and would not consider using MySQL for any but the
 simplest and least likely to grow project.
 
 Hats off to -core and to other developers as well as to the community.
 We have a Good Thing.
 
 gh
 

Jeff MacDonald,

-
PostgreSQL Inc  | Hub.Org Networking Services
[EMAIL PROTECTED]  | [EMAIL PROTECTED]
www.pgsql.com   | www.hub.org
1-902-542-0713  | 1-902-542-3657
-
Facsimile : 1 902 542 5386
IRC Nick  : bignose
PGP Public Key : http://bignose.hub.org/public.txt




Re: [GENERAL] query time in psql

2000-12-11 Thread Martin A. Marques

On Monday 11 December 2000 15:56, Juriy Goloveshkin wrote:
 is it possible to shop query time in psql frontend?

What do you mean by "shop query time"?


-- 
System Administration: It's a dirty job, 
but someone told I had to do it.
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentinahttp://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



[GENERAL] Large files on linux

2000-12-11 Thread Fernan Aguero

Dear all:

I am having trouble with large files on a Linux box (RH 6.2). I know there
is a limit of 2 GB on the file size, but do not know if this is kernel
related, filesystem related or both. 

I am researching the issue (having some trouble finding info) and found
that there are several alternatives. Can you give me advice for/against the
ones you think are best/worst? Also, if you happen to have links to other
places which might have information on this issue, please pass them along.

Here are my options:

i) patch the kernel. 
I didn't find much information about this one. I don't know yet whether
this is a kernel related limitation or filesystem-related or both. I have
also read that programs had to be patched to use this 'large_file' patch.
Will probably avoid this one.

ii) try a new kernel, 2.4-test
I have to do some research into this yet, but apparently the new linux
kernel will support large files. My understanding right now is that ext2fs
does not support files this large, so this option will probably go together
with the next one.

iii) try a new filesystem. 
Several filesystems appear to support large files (but not ext2fs, and
apparently not ext3fs)
(http://www.linuxgazette.com/issue55/florido.html).

iv) go for another distro or unix
SuSE 7.0 reportedly has support for large files (> 2GB) although it uses a
pre-2.4 kernel. I don't know how SuSE does it (patches like the ones
mentioned in (i) or a different filesystem?).
Another option is to go for a different unix, like FreeBSD, which has had
support for large files for a long while.

Thanks in advance,

Fernan




Re: [GENERAL] Postgres demographics?

2000-12-11 Thread Tim Barnard

snip
 
 We have some demographics available at http://www.pgsql.com/user_gallery
 
snip

I believe what was intended was:

http://www.pgsql.com/register

Tim





Re: [GENERAL] query time in psql

2000-12-11 Thread Juriy Goloveshkin

On Mon, Dec 11, 2000 at 05:09:40PM -0300, Martin A. Marques wrote:
 On Monday 11 December 2000 15:56, Juriy Goloveshkin wrote:
  is it possible to shop query time in psql frontend?
 What do you mean with: shop query time?
oops... s/shop/show/
I want to know how much time the query took to execute, like in mysql:
---
13 rows in set (0.15 sec)
---

set show_query_stats is too noisy...

-- 
Bye
Juriy Goloveshkin



Re: [GENERAL] Large files on linux

2000-12-11 Thread Trond Eivind Glomsrød

Fernan Aguero [EMAIL PROTECTED] writes:

 I am having trouble with large files on a Linux box (RH 6.2). I know there
 is a limit of 2 GB on the file size, but do not know if this is kernel
 related, filesystem related or both. 
 
  I am researching the issue (having some trouble finding info) and found
  that there are several alternatives. Can you give me advice for/against the
  ones you think are best/worst? Also, if you happen to have links to other
  places which might have information on this issue, please pass them along.
 
 Here are my options:
 
 i) patch the kernel. 
 I didn't find much information about this one. I don't know yet whether
 this is a kernel related limitation or filesystem-related or both. I have
 also read that programs had to be patched to use this 'large_file' patch.
 Will probably avoid this one.
 
 ii) try a new kernel, 2.4-test
 I have to do some research into this yet, but apparently the new linux kernel
 will support large files. My understanding right now is that ext2fs does not
 support files this large, 

It does.
 
 iii) try a new filesystem. 
 several filesystems appear to support large files (but not ext2fs, and
 apparently not ext3fs)

On 2.2, it's VFS related. ext3/ext2 doesn't have such problems.

Red Hat Linux 7.0 includes a kernel (the "enterprise" one) with the
LFS patches included and tested. It's also ready for 2.4.

-- 
Trond Eivind Glomsrød
Red Hat, Inc.



Re: [GENERAL] Large files on linux

2000-12-11 Thread Alfred Perlstein

* Fernan Aguero [EMAIL PROTECTED] [001211 12:45] wrote:
 Dear all:
 
 I am having trouble with large files on a Linux box (RH 6.2). I know there
 is a limit of 2 GB on the file size, but do not know if this is kernel
 related, filesystem related or both. 

Afaik it's both.

Honestly Postgresql should be able to deal with this limitation by
using more than one file per table.

But if you really want to support large files on a free UNIX,
I'd try FreeBSD.

best of luck,
-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [GENERAL] Large files on linux

2000-12-11 Thread Peter Eisentraut

Fernan Aguero writes:

 I am having trouble with large files on a Linux box (RH 6.2). I know there
 is a limit of 2 GB on the file size,

...but that doesn't affect table size, database size, or whatever you're
thinking of.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [GENERAL] Large files on linux

2000-12-11 Thread Hans-Jürgen Schönig

Alfred Perlstein wrote:

 * Fernan Aguero [EMAIL PROTECTED] [001211 12:45] wrote:
  Dear all:
 
  I am having trouble with large files on a Linux box (RH 6.2). I know there
  is a limit of 2 GB on the file size, but do not know if this is kernel
  related, filesystem related or both.

 Afaik it's both.

 Honestly Postgresql should be able to deal with this limitation by
 using more than one file per table.

 But if you really want to support large files on a free UNIX,
 I'd try FreeBSD.

 best of luck,
 --
 -Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
 "I have the heart of a child; I keep it in a jar on my desk."

Kernel >= 2.4 can manage large files. As far as I have tried this, it works
perfectly well.

Hans





Re: [GENERAL] Unanswered questions about Postgre

2000-12-11 Thread Joe Kislo

 What I think we _really_ need is a large object interface to TOAST data.
 We already have a nice API, and even psql local large object handling.
 
 If I have a file that I want loaded in/out of a TOAST column, we really
 should make a set of functions to do it, just like we do with large
 objects.
 
 This is an obvious way to load files in/out of TOAST columns, and I am not
 sure why it has not been done yet.  I am afraid we are going to get
 criticized if we don't have it soon.

Okay, let me criticize you now then :)  (just kidding)  Over the past
month I've been trying out postgre for two reasons.  I've posted a
number of questions to this mailing list, and the postgre community has
been extremely responsive and helpful.  Kudos to everybody working on
postgre.  Most of my questions have been along the line of asking why a
particular feature works differently than in other databases, or why
postgre seemed to act in an illogical fashion (such as corrupting my
database).

First, I was evaluating Postgre for a medium-scale application I will be
working on for my current employer.  Technically this is re-architecting
a current application built on MySQL and Python.  I plan to move the
application to java servlets and some database other than MySQL,
preferably opensource.  Postgre, obviously with its reputation, was at the
top of this short list of databases to look at.  Unfortunately I
quickly discovered the lack of BLOB support.  I understand that the C
API can read/write -files- off the server's filesystem and load them
into the database.  Unfortunately we would absolutely require true
over-the-wire blob support through JDBC.  AFAIK, even with these "toast"
columns, it still wouldn't fill that need.  The need here is to load
binary data from the client, transfer it over the JDBC wire, and store
it in the database.  Some people before suggested a shared NFS
partition, then having the server use the existing BLOB support to load
the files off disk.  That's really not an acceptable solution.  So as
for using postgre in this upcoming application, it's really a no-go at
this point without that ability.  I actually suspect a number of people
also have a need to store BLOBs in a database, but maybe it's not as
important as I think.

The second reason why I've been working with Postgre is I'm about to
release into the open source a java based object database abstraction
layer.  This layer maps java objects to a relational database by storing
their primitives in database primitives, and using java reflection to
reconstitute objects from the database.  This allows you to perform
complex joins and such in the -database- then map to the actual java
objects.
When you attach a particular class to a database, you choose the
appropriate database adapter (such as one for oracle or postgre).  These
DBAdapters take care of all the DB specific things, such as native
column types, handling auto incrementing columns (generators or "serial
columns"), creating tables, altering tables when class definitions
change, database independent indexing, and blobs.  Programmers mostly
work at the object layer, and don't really worry about the particulars
of the underlying database.  (although they can execute raw SQL if they
really need to).  So this truly allows an application to be written
independent of any particular underlying database (and to my dismay,
there appear to be very big differences between these databases!).  This
allows you to change your underlying database easily, which means you
can choose the database server on it's merits, and not because it's been
grandfathered into your application :)

Anyway, when implementing the Postgre DBAdapter, I found postgre to be
quite a nice database (and pretty fast too).  But there were two issues
which keep the postgre DBAdapter from supporting the full feature
set.

1) No blob support.  As I described above, it needs to be possible to
insert an arbitrarily large (or at least up to, say, 5 megabytes) binary
object into the database, and have it accessible by a particular column
name in a table.  AFAIK, this is not currently possible in postgre.

2) Postgre does not record rollback segments.  Which means transactions
get ABORTed and rolled back for some odd reasons when they don't
normally need to be.  For example, if you just send the SQL server some
garbage SQL (eg: ASDF;), your transaction gets aborted and rolled back,
even though your garbage SQL didn't touch any rows.
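
A sketch of the behavior being described (hypothetical table t):

begin;
insert into t values (1);
asdf;                      -- garbage SQL: parse error
insert into t values (2);  -- rejected; the transaction is now aborted
commit;                    -- ends the transaction, but nothing is saved
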
At the object layer in the aforementioned database layer, if you try to
insert an object into the database and doing so would violate a unique key
(such as the primary key), a DuplicateKeyException will be thrown.  No
other database adapters I've implemented, such as MySQL, interbase or
oracle, will *also* abort the transaction.
So if at the object layer, a DuplicateKeyException is supposed to
happen in that case, I would have to before every object is 

Re: [GENERAL] Unanswered questions about Postgre

2000-12-11 Thread Bruce Momjian

  What I think we _really_ need is a large object interface to TOAST data.
  We already have a nice API, and even psql local large object handling.
  
  If I have a file that I want loaded in/out of a TOAST column, we really
  should make a set of functions to do it, just like we do with large
  objects.
  
  This is an obvious way to load files in/out of TOAST columns, and I am not
  sure why it has not been done yet.  I am afraid we are going to get
  criticized if we don't have it soon.
 
   Okay, let me criticize you now then :)  (just kidding)  Over the past
 month I've been trying out postgre for two reasons.  I've posted a
 number of questions to this mailing list, and the postgre community has
 been extremely responsive and helpful.  Kudos to everybody working on
 postgre.  Most of my questions have been along the line of asking why a
 particular feature works differently than in other databases, or why
 postgre seemed to act in an illogical fashion (such as corrupting my
 database).  

Yes, this was my point.  We now have TOAST, but by not going the extra
mile to enable storage of binary files, we really aren't taking full
advantage of our new TOAST feature.

I can see people saying, "Wow, you can store rows of unlimited length
now.  Let me store this jpeg.  Oh, I can't because it is binary!"

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] Unanswered questions about Postgre

2000-12-11 Thread Peter Eisentraut

Joe Kislo writes:

   First, I was evaluating Postgre for a medium scale application I will

I'm just wondering what this "Postgre" thing is you keep talking about...
;-)

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [GENERAL] Large files on linux

2000-12-11 Thread Lamar Owen

Peter Eisentraut wrote:
 Fernan Aguero writes:
  I am having trouble with large files on a Linux box (RH 6.2). I know there
  is a limit of 2 GB on the file size,
 
 ...but that doesn't affect table size, database size, or whatever you're
 thinking of.

Nope, PostgreSQL segments tables nicely.

But, unless you do chunking, it _does_ affect dumpfile size.  Someone
posted a while back a script that did dump chunking.  Should be in the
archives.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



[GENERAL] Re: Unanswered questions about Postgre

2000-12-11 Thread Tim Kientzle

 Of course, people really shouldn't be inserting
 objects which already exist, ...

On the contrary, the best way to test whether
something already exists is to just try the
INSERT and let the database tell you if
it's already there.  Both faster and more
reliable than doing SELECT then INSERT.
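
For instance (hypothetical table), relying on the unique index to
report the duplicate:

create table guests (name text primary key);

insert into guests values ('alice');  -- succeeds
insert into guests values ('alice');  -- fails with a duplicate-key error,
                                      -- which the application catches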

- Tim



Re: [GENERAL] Unanswered questions about Postgre

2000-12-11 Thread Tom Lane

Joe Kislo [EMAIL PROTECTED] writes:
 ... this lack of BLOB support.  I understand that the C
 API can read/write -files- off the server's filesystem and load them
 into the database.  Unfortunately we would absolutely require true
 over-the-wire blob support through JDBC.  AFAIK, even with these "toast"
 columns, it still wouldn't fill that need.

This is a misunderstanding.  You can still use the old-style large
objects (in fact 7.1 has an improved implementation of them too),
and there's always been support for either over-the-wire or
server-filesystem read and write of large objects.  In fact the former
is the preferred way; the latter is deprecated because of security
issues.  In a standard installation you can't do the server-filesystem
bit at all unless you are superuser.

The JDBC support for over-the-wire access to large objects used to
have some bugs, but AFAIK those are cleaned up in current sources
(right Peter?)

Adding a similar feature for TOAST columns will certainly be a
notational improvement, but it won't add any fundamental capability
that isn't there already.

   2) Postgre does not record rollback segments.

We know this is needed.  But it will not happen for 7.1, and there's
no point in complaining about that; 7.1 is overdue already.

regards, tom lane



Re: [GENERAL] Unanswered questions about Postgre

2000-12-11 Thread Sandeep Joshi

What is the tentative date for the 7.1 release?
What is the release date for replication?

sandeep




Re: [GENERAL] Unanswered questions about Postgre

2000-12-11 Thread Bruce Momjian

 This is a misunderstanding.  You can still use the old-style large
 objects (in fact 7.1 has an improved implementation of them too),
 and there's always been support for either over-the-wire or
 server-filesystem read and write of large objects.  In fact the former
 is the preferred way; the latter is deprecated because of security
 issues.  In a standard installation you can't do the server-filesystem
 bit at all unless you are superuser.

I know we haven't talked about the TOAST/binary interface, but one idea
I had was to load the binary into the large object interface, then
automatically somehow transfer it to the TOAST column.  Same for
extracting large objects.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[GENERAL] Messages not going through

2000-12-11 Thread Soma Interesting

I've sent a message of mine to this list twice now, and it never comes 
through. Odd.


-  -  -  -  -  -  -  -  -  -  -  -
WARNING: Some experts believe that use of any keyboard may cause serious
injury. Consult Users Guide.
[EMAIL PROTECTED]




Re: [GENERAL] deletion of records before commit doesn't work

2000-12-11 Thread Jens Hartwig

Hello all,

I just reproduced the same phenomenon on my installation (PostgreSQL
7.0.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66) and it seems
to me that maybe the index is not correctly updated while inserting
the record.  It seems that any (!) update on blah (before executing the
delete) will work around the problem:

...
insert into blah (subdiv_code) values ('VG');
delete from blah where subdiv_code='VG';
...

= ERROR

...
insert into blah (subdiv_code) values ('VG');
update blah set subdiv_code='VG' where subdiv_code='VG';
delete from blah where subdiv_code='VG';
...

= OK

...
insert into blah (subdiv_code) values ('VG');
update blah set subdiv_code=subdiv_code;
delete from blah where subdiv_code='VG';
...

= OK

...
insert into blah (subdiv_code) values ('VG');
update blah set id=id;
delete from blah where subdiv_code='VG';
...

= OK

Best regards, Jens

Ashley Clark wrote:
 
 I've come up with this example and I want to know why it does what it
 does.
 
 -- snip --
 You are now connected to database template1.
 CREATE DATABASE
 You are now connected to database testing.
 psql:test2.sql:11: NOTICE:  CREATE TABLE/UNIQUE will create implicit
 index 'subdivs_name_key' for table 'subdivs'
 psql:test2.sql:11: NOTICE:  CREATE TABLE/PRIMARY KEY will create
 implicit index 'subdivs_pkey' for table 'subdivs'
 CREATE
 psql:test2.sql:20: NOTICE:  CREATE TABLE will create implicit sequence
 'blah_id_seq' for SERIAL column 'blah.id'
 psql:test2.sql:20: NOTICE:  CREATE TABLE/PRIMARY KEY will create
 implicit index 'blah_pkey' for table 'blah'
 psql:test2.sql:20: NOTICE:  CREATE TABLE will create implicit
 trigger(s) for FOREIGN KEY check(s)
 CREATE
 INSERT 218198 1
 BEGIN
 INSERT 218199 1
 psql:test2.sql:29: ERROR:  triggered data change violation on relation
 "blah"
 ROLLBACK
 BEGIN
 INSERT 218200 1
 UPDATE 1
 DELETE 1
 ROLLBACK
 -- snip --
 
 and the test file is attached.
 
 --
 hackers ally
 
   
 
[Attachment: test2.sql (text/plain)]
[Attachment: Part 1.2 (application/pgp-signature)]

=
Jens Hartwig
-
debis Systemhaus GEI mbH
10875 Berlin
Tel. : +49 (0)30 2554-3282
Fax  : +49 (0)30 2554-3187
Mobil: +49 (0)170 167-2648
E-Mail   : [EMAIL PROTECTED]
=



[GENERAL] one other big mysql-postgresql item

2000-12-11 Thread George Johnson



Hi,

Forgot one other biggy:

0000-00-00 00:00:00

is legal for a default value of '' of a datetime column defined as not
null.

create table test (
    funkydate datetime not null
);

insert into test values ('');

select * from test where funkydate = '0000-00-00 00:00:00';

all those work in MySQL, and I'm willing to bet a LOT of users have code
reflecting that.

George Johnson