Re: [GENERAL] Proper relational database?

2016-04-22 Thread David Goodenough
On Thursday 21 April 2016 13:36:54 Guyren Howe wrote:
> Anyone familiar with the issue would have to say that the tech world would
> be a significantly better place if IBM had developed a real relational
> database with an elegant query language rather than the awful camel of a
> thing that is SQL.
> 
> If I had a few $million to spend in a philanthropical manner, I would hire
> some of the best PG devs to develop a proper relational database server.
> Probably a query language that expressed the relational algebra in a
> scheme-like syntax, and the storage model would be properly relational (eg
> no duplicate rows).
> 
> It's an enormous tragedy that all the development effort that has gone into
> NoSQL database has pretty much all gotten it wrong: by all means throw out
> SQL, but not the relational model with it. They're all just rehashing the
> debate over hierarchical storage from the 70s. Comp Sci courses should
> feature a history class.
> 
> It's a bit odd to me that someone isn't working on such a thing.
> 
> Just curious what folks here have to say…
Well, when IBM was first developing relational databases there were two
different teams.  One in California produced System R, which became
what we now know as DB2 and spawned SQL; the other, in Peterlee in
the UK, was called PRTV (the Peterlee Relational Test Vehicle).  PRTV
rather died, but bits of it survived.  In particular, it was the first system
to include a relational optimiser.  You can find some details on the PRTV
page on Wikipedia.

It was written in PL/1, although it also used some modified microcode 
and therefore some assembler.  

It never appeared as a product, but there was a geographical system
built on top of it which was, if I recall correctly, used by the Greater
London Council and Central Region Scotland, and which did something of
what postgis does for PostgreSQL.

According to the Wikipedia page it did have a language (ISBL) but, from what
I recall (and it was nearly 40 years ago), there was a series of PL/1
function calls we used rather than encoding the request as a string
as SQL systems require.

The IBM centre in Peterlee was closed, and the lab moved to Winchester
where I think it still resides.

David


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tool to create database diagrams in postgreSQL

2012-12-28 Thread David Goodenough
On Friday 28 Dec 2012, nevillekb wrote:
 Hi,
 
 Can anyone tell me which free GUI based tools are available for creating
 database diagrams for my database in postgresql.
 
 Thanks,
 
 Neville.
 
 
 
While there have (as mentioned elsewhere) been answers to this before, one
that does not seem to have been mentioned is Eclipse.  There are at least
two suitable tools available as plugins.  While they do Postgresql very well,
they also cover other RDBs.

David


Re: [GENERAL] A thought about other open source projects

2010-06-21 Thread David Goodenough
On Monday 21 June 2010, Lew wrote:
> Sim Zacks wrote:
> > database agnostic code is theoretically a great idea. However, you lose
> > most of the advantages of the chosen database engine. For example, if
> > you support an engine that does not support relational integrity you
> > cannot use delete cascades.
> > The most efficient way is to have a separate backend module per database
> > (or db version) supported. The quickest way is to write code that will
> > work on any db but won't take advantage of db-specific features.
>
> David Goodenough wrote:
> > This is what I am trying to encourage.  I am asking about the best
> > way to encourage it.
>
> You want to encourage the use of databases that don't support relational
> integrity?

No, I want to encourage "the quickest way": to write code that will
work on any db but won't take advantage of db-specific features.

David

> Really?
>
> I think that is a simply terrible idea.
 




Re: [GENERAL] A thought about other open source projects

2010-06-20 Thread David Goodenough
On Sunday 20 June 2010, Peter Eisentraut wrote:
> On lör, 2010-06-19 at 22:56 +0100, David Goodenough wrote:
> > These projects need help to realise that adding Postgresql is not a big
> > job, especially for those using JDBC which can already connect to all
> > DBs.  It strikes me that if the project could write a few pages gleaned
> > from other porting operations, then whenever a project like this is found
> > they can be pointed to these pages and shown how easy it is to do.
>
> http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL

Excellent, I had not realised this existed.  I will point any projects I meet
which have not found Postgresql goodness at this page.  Thank you.

I don't suppose anyone has written a "how to write database agnostic
code" guide?  That way it's not a matter of porting, more a matter of
starting off right.

David



Re: [GENERAL] A thought about other open source projects

2010-06-20 Thread David Goodenough
On Sunday 20 June 2010, Sim Zacks wrote:
> database agnostic code is theoretically a great idea. However, you lose
> most of the advantages of the chosen database engine. For example, if
> you support an engine that does not support relational integrity you
> cannot use delete cascades.
> The most efficient way is to have a separate backend module per database
> (or db version) supported. The quickest way is to write code that will
> work on any db but won't take advantage of db-specific features.

This is what I am trying to encourage.  I am asking about the best
way to encourage it.

David
 
> On 6/20/2010 12:08 PM, David Goodenough wrote:
> > On Sunday 20 June 2010, Peter Eisentraut wrote:
> > > On lör, 2010-06-19 at 22:56 +0100, David Goodenough wrote:
> > > > These projects need help to realise that adding Postgresql is not a
> > > > big job, especially for those using JDBC which can already connect
> > > > to all DBs.  It strikes me that if the project could write a few
> > > > pages gleaned from other porting operations, then whenever a project
> > > > like this is found they can be pointed to these pages and shown how
> > > > easy it is to do.
> > >
> > > http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL
> >
> > Excellent, I had not realised this existed.  I will point any projects I
> > meet which have not found Postgresql goodness at this page.  Thank you.
> >
> > I don't suppose anyone has written a "how to write database agnostic
> > code" guide?  That way it's not a matter of porting, more a matter of
> > starting off right.
> >
> > David




[GENERAL] A thought about other open source projects

2010-06-19 Thread David Goodenough
I happened across (yet) another open source project which supports
MySql and Derby (it's a Java app) and is thinking about supporting
Oracle (they have actually bought a licence) but does not support
Postgresql.  This particular project is onehippo.org, but there are many
others.  Another, perhaps more important, project is the Akonadi
project in KDE, which is only gradually getting around to Postgresql.

These projects need help to realise that adding Postgresql is not a big
job, especially for those using JDBC which can already connect to all
DBs.  It strikes me that if the project could write a few pages gleaned
from other porting operations, then whenever a project like this is found
they can be pointed to these pages and shown how easy it is to do.

Then if someone spots a project that might use Postgresql they can 
simply point them at the pages.

David



Re: [GENERAL] Custom Fields Database Architecture

2009-06-15 Thread David Goodenough
On Monday 15 June 2009, Gnanam wrote:
 Hi,

 I'm designing a database schema in which I should allow user to create
 custom fields at the application level.  My application is a web-based
 system and it has multiple companies in a  single database.  So this means
 that each company can create their own custom fields.  A  custom field
 created in a company should not be visible to the other company.  Also, we
 don't want to restrict the number of fields a company is allowed to create.

 I also read some article which talks about the type of patterns:
 1. Meta-database
 2. Mutating
 3. Fixed
 4. LOB

 My question here is, what is the best approach to define the architecture
 for custom fields. Performance should not be compromised.

 Thank you in advance.

 Regards,
 Gnanam.


It depends a bit on how you want to use the data.  If you are not wedded to
the RDBMS model, you might look at CouchDB, which is a schema-less DB.  But
do not expect to run SQL against it - it takes a rather different approach.
There are others around, some of them proprietary; Lotus Notes/Domino
is probably the best known of these.

David



[GENERAL] problem converting database to UTF-8

2009-01-22 Thread David Goodenough
I have a database which was created as LATIN1 (the machine had the
wrong locales installed when I set up PG).  It is running 8.3.

So I found various places which said the way to do this was to do
a pg_dumpall -f dumpfile, get rid of the entire cluster, initdb -E UTF-8,
and then psql -f dumpfile.

But the psql fails saying:-

psql:dumpfile:49: ERROR:  encoding LATIN1 does not match server's locale 
en_GB.UTF-8
DETAIL:  The server's LC_CTYPE setting requires encoding UTF8.

I have en_GB.UTF-8 now as my primary locale, and en_GB.ISO8859-1 is
also generated.

So I looked around again and found people saying I needed to use iconv,
but that does not help, I get the same error.

Is there a definitive HOWTO that I can follow?  If not, does someone
have a set of instructions that will work?

If it matters I am running under Debian.

David



Re: [GENERAL] problem converting database to UTF-8

2009-01-22 Thread David Goodenough
On Thursday 22 January 2009, Vladimir Konrad wrote:
  Is there a definitive HOWTO that I can follow?  If not, does someone
  have a set of instructions that will work?

 What about running iconv command on the dumped .sql file and transform
 it to the utf8?

 Vlad

 PS: man iconv for manual

iconv does not change the database encodings embedded in the file
(and it is quite large).

Is there no automated procedure?

David



Re: [GENERAL] problem converting database to UTF-8

2009-01-22 Thread David Goodenough
On Thursday 22 January 2009, Vladimir Konrad wrote:
  iconv does not change the database encodings embedded in the file
  (and it is quite large).

 Have you read the manual?

file   A pathname of an input file.  If no file operands are
       specified, or if a file operand is '-', the standard input
       shall be used.


 cat the-source-dump.sql | iconv -t utf8 - > my-converted.sql

 Size should not matter in this case...

 V

You have not understood what I said.  I ran iconv, and it changes the
encoding of the data, but not the ENCODING= statements that are
embedded in the datastream.  Yes, I can change those with sed, but
I do not know what else I need to change.  There must be an easier
way.
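
For the record, the whole procedure being felt toward in this thread can be
sketched as a small pipeline.  Everything below is a sketch with stand-in
names: the printf line merely fakes a one-line dump so the example is
self-contained, and the sed pattern assumes the only quoted occurrences of
LATIN1 in the dump are the encoding declarations.

```shell
# Stand-in for the real pg_dumpall output (one declaration line).
printf "SET client_encoding = 'LATIN1';\n" > dumpfile

# iconv re-encodes the data bytes; sed rewrites the embedded
# encoding declarations, which iconv does not touch.
iconv -f LATIN1 -t UTF-8 dumpfile \
  | sed "s/'LATIN1'/'UTF8'/g" > dumpfile.utf8

cat dumpfile.utf8   # SET client_encoding = 'UTF8';
```

The real dump would then be restored with psql -f dumpfile.utf8 into a
cluster initialised with initdb -E UTF-8.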

David



Re: [GENERAL] [Q] DNS(bind) ER model

2008-08-15 Thread David Goodenough
On Friday 15 August 2008, Roderick A. Anderson wrote:
 Anyone aware of an ER model for holding name server records?

 Working on the zone file data and I am getting close but keep running
 into the differences between MX records (with a priority) and the others
 that can hold either a domain/sub-domain/host name or an IP address
 depending on whether it is an A, TXT, PTR, etc. or a CNAME.

 Much of the database will be populated and changed automagically so the
 controller for the application will do the right thing but humans will
 get involved every so often.  I hope I can get the database to make the
 right thing easy and the wrong thing impossible for them.

 Any suggestions?


 Rod
 --

Have you looked at mydns?  It is a database-driven DNS server - and it
works just fine with Postgresql.

David



[GENERAL] Attaching information about users

2007-11-01 Thread David Goodenough
What is the proper way to attach additional information about users of
a database?  That is to say, I am using their DB login as their application
ID, and I need to have one or more tables which remember preferences and
other application-level things, keyed on their current userid, and
I want to be sure that I only have entries for people who are currently
users in this DB.  I suppose what I want to do is to use foreign key
constraints, but that would be to a postgresql-specific table (I guess -
or is the table that holds the list of IDs, and its field names, common
across DBs?).
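
(For anyone reading this in the archives: PostgreSQL does not allow a foreign
key against the system catalogs, and the catalog holding logins is in any
case cluster-wide and PostgreSQL-specific.  A common pattern is an ordinary
table keyed on the login name; the sketch below uses invented names and
relies on the application or a trigger, not a constraint, to stay in step
with real logins.)

```sql
-- Sketch only; table and column names are illustrative.
CREATE TABLE user_prefs (
    username   text NOT NULL DEFAULT current_user,
    pref_name  text NOT NULL,
    pref_value text,
    PRIMARY KEY (username, pref_name)
);
```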

David

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Regular express question

2007-06-22 Thread David Goodenough
I have a table that consists of a set of regular expressions, a priority
and a result.  I need to be able to match a field in another table against
the set of regular expressions (ordered by priority) and use the first result.

Reading the documentation I can see how to put the regular expression
into an SQL statement (that is, as text or a ? which my code provides) but
I cannot see how to get the expression from the table.

David



Re: [GENERAL] Regular express question

2007-06-22 Thread David Goodenough
On Friday 22 June 2007, David Goodenough wrote:
 I have a table that consists of a set of regular expressions, a priority
 and a result.  I need to be able to match field in another table against
 the set of regular expressions (ordered by priority) and use the first
 result.

 Reading the documentation I can see how to put the regular expression
 into an SQL statement (that is as text or a ? which my code provides) but
 I can not see how to get the expression from the table.

 David


OK, I worked it out for myself.  Of course I can put a field name on the
right-hand side of the SIMILAR TO and the ? on the left-hand side, and then it
works just as you would expect.  So:-

select result from rules where ? similar to rule order by priority limit 1

gives me the answer I want.
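
Spelled out with an illustrative schema (the names here are invented,
not from my actual application), the pattern is:

```sql
-- Hypothetical schema matching the description above.
CREATE TABLE rules (
    rule     text,      -- SQL regular expression
    priority integer,
    result   text
);

-- The parameter (? via JDBC) goes on the left of SIMILAR TO,
-- the stored pattern on the right; the lowest priority wins.
SELECT result
  FROM rules
 WHERE ? SIMILAR TO rule
 ORDER BY priority
 LIMIT 1;
```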

David



[GENERAL] Is there an equivalent of the W3c HTML checker for SQL?

2007-01-22 Thread David Goodenough
This may seem like a question unrelated to Postgresql, but I have recently
noticed a project that is having a discussion about how their code should
be developed.  They are (unfortunately) developing first with MySQL, because
that is what they are familiar with (I assume), but that inevitably leads
to having to backfit changes when they later come to support other DBs (like
Postgresql).

The W3C checker gives a quick check to catch as many as possible of the
browser dependencies, so that they can be avoided during the development
cycle rather than after it.  I was wondering if something similar exists
in the SQL world.

It would be great if I could persuade them to move to Postgresql as their
development platform, but that is unlikely.  Such a checker would mean
that support of other DBs would be much easier rather than being a big
effort.

If it makes life easier, this project is a Java one, using JDBC.

David



[GENERAL] SELECT INTO TEMPORARY problem

2007-01-17 Thread David Goodenough
I have a servlet which gets its data through a DataSource (Tomcat 5.5)
and starts each request as a new SQL transaction and either commits
the transaction or rolls it back at the end of each request.

In one of the requests I do a SELECT ... INTO TEMPORARY t1 ..., which
works just fine the first time I use it, but from then on it objects, saying
that t1 already exists.  When I read the documentation (8.1, as that is
what I am using) I thought I understood that the table would disappear
at the end of the transaction.  Other than deleting it, is there something
else I need to do, or have I misunderstood what INTO TEMPORARY does?
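
(For anyone finding this in the archives: a temporary table persists for the
whole session by default - ON COMMIT PRESERVE ROWS - which matches the
symptom above when connections are pooled.  A sketch of the
transaction-scoped form, with invented table and column names:)

```sql
-- CREATE TEMPORARY TABLE ... AS accepts an ON COMMIT clause;
-- ON COMMIT DROP removes the table when the transaction ends.
CREATE TEMPORARY TABLE t1 ON COMMIT DROP AS
  SELECT id, total FROM orders WHERE total > 100;
```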

David 



[GENERAL] MySQL drops support for most distributions

2006-12-13 Thread David Goodenough
http://developers.slashdot.org/article.pl?sid=06/12/13/1515217&from=rss

MySQL quietly deprecated support for most Linux distributions on October 16, 
when its 'MySQL Network' support plan was replaced by 'MySQL Enterprise.' 
MySQL now supports only two Linux distributions — Red Hat Enterprise Linux 
and SUSE Linux Enterprise Server. We learned of this when MySQL declined to 
sell us support for some new Debian-based servers. Our sales rep 'found out 
from engineering that the current Enterprise offering is no longer supported 
on Debian OS.' We were told that 'Generic Linux' in MySQL's list of supported 
platforms means 'generic versions of the implementations listed above'; not 
support for Linux in general.



[GENERAL] Performance figures from DbMail list

2006-12-07 Thread David Goodenough
The following appeared this afternoon on the DbMail list.  As someone
replied, the MySql version used is old and the newer one is faster, but then
8.2 is faster than the older Postgresql versions too.

This was posted by:- Justin McAleer [EMAIL PROTECTED]

I figured I would go ahead and toss this out for anybody
that may be interested, since I was so shocked by the
results. I have two servers set up for testing, one running
postfix/dbmail and one running the database servers. The
database machine is a dual core AMD (4400+ I believe) with
4 gigs of memory, with the database files living on a fiber
connected Apple SAN (XRaid). I have dbmail compiled with
mysql and pgsql, so all I need to do to switch between the
two is change the driver in the conf file and restart. I'm
using dbmail-lmtpd running on a unix socket. Finally, I
have the postfix delivery concurrency set to 5.

For mysql, I'm using a 4GB InnoDB sample config that comes
in the CentOS rpm (increased the buffer pool to 2.5 gigs
though). Version is 4.1.20. 

For postgres, I'm using the default variables except for
increasing the shared buffers to 256MB, setting effective
cache size to 3 GB, and random page cost to 2. Version is
8.1.4.

I've sent a good amount of real mail to each setup as well,
but for quantifiable results I have a perl script that
sends gibberish of a configurable size (3kb here) to a
single recipient. Since we're inserting into a DB, the
recipient of the messages should have no bearing on
delivery performance, barring postfix concurrency. 

For the test, I sent one batch of mail through so postfix
would already have a full lmtp connection pool when I began
the real test. I had 10 perl processes each sending 100
messages as fast as postfix would accept them, for a total
of 1000 3KB messages. Results...

Mysql: 95 seconds to deliver all 1000 messages. Both cores
on the DB server were effectively peaked during delivery.

Postgres: 10 seconds to deliver all 1000 messages. DBMail
was really close to being able to deliver as fast as
postfix could queue to local disk (within a second or two
for 1000th message). The cores on the DB server looked to
average around 45%/30% usage during delivery. 

The CPU usage is just based on watching top output, so keep
that in mind... however with such a huge variance, even
eyeballing it I'm confident in reporting it.



[GENERAL] How to speed up Exclusive Locking

2006-12-05 Thread David Goodenough
I have an application running on a Tomcat cluster talking to a cluster of
Postgresql DBs using HA-JDBC.  If one of the members drop out of the cluster
it is necessary to get that member back into sync with the rest of the 
cluster, and I have an application specific piece of code that does that.
All the records have an updated timestamp in them which makes life easier.

The first bits of the sync are done without locking the source tables, and
I do these until I find less than some suitable threshold of records needing
to be updated.  Then I lock the source tables and do the final sync.

The statement issued to lock each table is:-
 LOCK TABLE table IN EXCLUSIVE MODE; SELECT 1 FROM table;

(I am not quite sure why the SELECT 1 FROM table is there, it came with
HA-JDBC as the code for the Postgresql dialect).

I notice that this seems to take a time that is dependent on the size
of the table, which seems odd - almost as though it is locking each row
rather than the whole table at once.  I am using 8.1, by the way, just in
case this is something that has changed in 8.2.

Taking locks on the 7 tables takes over five minutes, which is much longer
than I would have hoped.  Is there anything I can do to speed this up?

The rest of the application components never do explicit locking; they select
for read only, or select for update, insert, update and delete, all inside a
transaction.

Regards

David
 



Re: [GENERAL] How to speed up Exclusive Locking

2006-12-05 Thread David Goodenough
On Tuesday 05 December 2006 10:57, Bernd Helmle wrote:
 On Tue, 5 Dec 2006 10:18:21 +, David Goodenough
 [EMAIL PROTECTED] wrote:

 [...]

  The first bits of the sync are done without locking the source tables,
  and I do these until I find less than some suitable threshold of records
  needing
  to be updated.  Then I lock the source tables and do the final sync.
 
  The statements issued to lock each table is:-
   LOCK TABLE table IN EXCLUSIVE MODE; SELECT 1 FROM table;

 So why selecting '1' for each row after locking the relation before? I
 don't know HA-JDBC but this looks really useless. Remove the SELECT and use
 the LOCK TABLE command within the transaction which does the sync for you.
I will give it a try.

 Bernd




Re: [GENERAL] How to speed up Exclusive Locking

2006-12-05 Thread David Goodenough
On Tuesday 05 December 2006 12:03, Richard Huxton wrote:
 David Goodenough wrote:
  On Tuesday 05 December 2006 10:57, Bernd Helmle wrote:
  On Tue, 5 Dec 2006 10:18:21 +, David Goodenough
 
  [EMAIL PROTECTED] wrote:
  The statements issued to lock each table is:-
   LOCK TABLE table IN EXCLUSIVE MODE; SELECT 1 FROM table;
 
  So why selecting '1' for each row after locking the relation before? I
  don't know HA-JDBC but this looks really useless. Remove the SELECT and
  use the LOCK TABLE command within the transaction which does the sync
  for you.
 
  I will give it a try.

 It could be that the HA-JDBC code expects some selected value back. In
 which case a simple SELECT 1 should be fine. I have to agree with
 Bernd that selecting all rows and then throwing away the results strikes
 me as a particularly braindead behaviour from the library; presumably it
 makes some sort of sense when locking a limited number of rows.
HA-JDBC only ever locks a whole table.  As far as I can see it does not
use the ResultSet (and in JDBC large ResultSets are never a good idea), so I
have asked on its forum why it is there.

Testing with psql a simple LOCK seems pleasantly fast.  
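
The pared-down form suggested in the thread would look something like this
(t1, t2, t3 stand in for the real seven tables):

```sql
BEGIN;
-- Take all the table locks up front, without the trailing SELECT;
-- EXCLUSIVE MODE still admits plain readers but blocks writers.
LOCK TABLE t1, t2, t3 IN EXCLUSIVE MODE;
-- ... final sync statements ...
COMMIT;
```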

David



[GENERAL] deadlock detected messages

2006-10-29 Thread David Goodenough
I have a process that is hitting deadlocks.  The message I get talks about
relation and database numbers, not names.  How do I map the numbers back 
into names?
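
(In case it helps anyone searching the archives: assuming the numbers in the
message are OIDs, which is how the server reports them, they can be looked up
in the catalogs.  The literal values below are placeholders:)

```sql
-- Map a relation number to a table name.
SELECT relname FROM pg_class WHERE oid = 16385;
-- Map a database number to a database name.
SELECT datname FROM pg_database WHERE oid = 16384;
```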

David



[GENERAL] Is anyone using ha-jdbc with the distributable tag

2006-05-09 Thread David Goodenough
I realise this is not strictly a Postgresql problem, but I wondered if anyone
here was using ha-jdbc.  I have tried asking on their mailing list, but
apart from my own two questions there has been no traffic for 24 hours,
and it appears to be a dead list.

I have a single instance of ha-jdbc working, talking to multiple postgresql
backends.  But I need to set this up for a tomcat cluster, and so I want
multiple ha-jdbc instances talking to the same DBs.  According to the docs
this is what the distributable tag is for, but when I try to start the
second tomcat server, rather than joining with the first it complains of
a name clash.

Regards

David



Re: [GENERAL] Is PostgreSQL an easy choice for a large CMS?

2006-04-30 Thread David Goodenough
On Sunday 30 April 2006 12:01, Tony Lausin wrote:
 Hello all,

 I'm working on a CMS which requires an open source database capable of
 handling hundreds of thousands of users simultaneously, with a high
 rate of database writes, and without buckling. We're talking somewhere
 between nerve.com/catch27.com and xanga.com/friendster.com

 PostgreSQL is a personal favorite of mine, and my gut instinct is that
 it's the best choice for a large scale CMS serving many users;
 however, I'm getting antsy. I keep getting suggestions that Postgres
 is really only suited to small and medium projects, and that I should
 be looking at MySQL for a large scale database drive site. I'm not
 really a fan of MySQL, but I'll consider it if it truly is the better
 choice in this case. I just don't understand how it would be. I'm
 thinking this is solely in reference to VACUUM. Even with autovacuum
 suport, I tend to agree there is at least one handicap.

 I could really use some enlightenment on just where PostgreSQL fits in
 a single-server, highly-trafficked web site serving mostly text,
 pictures and possibly streaming media.

 Regards,

 Anthony

Very odd.  I had always heard that MySql (at least originally) was a
quick and dirty database, easy to use, not fully standards-compliant,
and not enterprise grade.  Postgresql, on the other hand, was always
the heavyweight, standards-compliant, enterprise db, which was more
difficult to use and set up but much more resilient.  Postgresql has been
getting more UI support (often seen as a user-friendly bonus) and
things like autovacuum support so that it is easier to use out of the box,
and MySql has been gaining standards compliance and resilience.

Funny how perceptions can differ.

David



[GENERAL] Implicit conversion from string to timestamp

2006-02-17 Thread David Goodenough
I have some generic code to which I pass a series of values to be inserted
into a PostgreSQL table which includes a field defined as a timestamp,
and which I wish to populate with a string of the form
yyyy-MM-dd hh:mm:ss.SSS.  Under pg 8 and before this worked fine,
but now with 8.1 I seem to be getting an exception which reads:-

ERROR: column "created" is of type timestamp without time zone but expression
is of type character varying

All this is done using JDBC (so I suppose it might be a JDBC error).

I know that a number of things were tightened up with 8.1; is this one of
them?  Or should I be asking this on the JDBC list?

I had thought that passing strings into timestamps was acceptable.
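
One workaround, assuming the JDBC driver is now sending the parameter typed
as varchar, is to make the cast explicit in the SQL.  The table and column
names below are illustrative:

```sql
-- Explicit cast so a string parameter is accepted for a timestamp column.
INSERT INTO audit (created) VALUES (CAST(? AS timestamp));
-- Postgres-specific shorthand: VALUES (?::timestamp)
```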

David



[GENERAL] Problem with sequence table

2006-02-08 Thread David Goodenough
I have a DB (PostgreSQL of course) which has in the definition of one of
its tables that the default is nextval('public.rr_id_seq'::text).  When I look
in the sequence I see that the last_value column is 40, but the largest
value in the relevant column is 45.  I tried using the SQL update command
to update this value but it would not let me.  How do I get rr_id_seq back
in step with the data in the table?
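
(For the archives: sequences cannot be changed with UPDATE; setval() is the
supported way.  Something like the following, where the column name id is a
guess from context:)

```sql
-- Re-sync the sequence with the highest value actually in the table.
SELECT setval('public.rr_id_seq', (SELECT max(id) FROM rr));
```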

David 



Re: [GENERAL] Primary keys for companies and people

2006-02-02 Thread David Goodenough
On Thursday 02 February 2006 09:07, Leif B. Kristensen wrote:
 On Thursday 02 February 2006 09:05, Michael Glaesemann wrote:
 For people I'm more or less stumped. I can't think of a combination
 of things that I know I'll be able to get from people that I'll want
 to be able to add to the database. Starting off we'll have at least
 7,000 individuals in the database, and I don't think that just family
 and given names are going to be enough. I don't think we'll be able
 to get telephone numbers for all of them, and definitely aren't going
 to be getting birthdays for all.
 
 I'm very interested to hear what other use in their applications for
 holding people and companies.

 I've been thinking long and hard about the same thing myself, in
 developing my genealogy database. For identification of people, there
 seems to be no realistic alternative to an arbitrary ID number.

 Still, I'm struggling with the basic concept of /identity/, eg. is the
 William Smith born to John Smith and Jane Doe in 1733, the same William
 Smith who marries Mary Jones in the same parish in 1758? You may never
 really know. Still, collecting such disparate facts under the same ID
 number, thus taking the identity more or less for granted, is the modus
 operandi of computer genealogy. Thus, one of the major objectives of
 genealogy research, the assertion of identity, becomes totally hidden
 the moment that you decide to cluster disparate evidence about what may
 actually have been totally different persons, under a single ID number.

 The alternative is of course to collect each cluster of evidence under a
 separate ID, but then the handling of a person becomes a programmer's
 nightmare.

 I have been writing about my genealogy data model here:
 http://solumslekt.org/forays/blue.php  The model has been slightly
 modified since I wrote this; due to what I perceive as 'gotchas' in the
 PostgreSQL implementation of table inheritance, I have dropped the
 'citations' table. Besides, I've dropped some of the surrogate keys,
 and more will follow. I really should update this article soon.

 I should perhaps be posting this under another subject, but I feel that
 beneath the surface, Michael's problem and my own are strongly related.
There is also the problem that a name can change.  People change names
by deed-poll, and also women can adopt a married name or keep their old
one.  All in all an ID is about the only answer.

David



Re: [GENERAL] Anyone use Eclipse?

2005-09-17 Thread David Goodenough
On Friday 16 September 2005 20:37, Josh Berkus wrote:
 People:

 The Eclipse project is interested in having PostgreSQL people contribute to
 their Data Services plug in.   Do we have any java hackers in the
 community using Eclipse?  Anyone interested?

Well, I use Eclipse and PostgreSQL, but probably only the basics, which I 
guess they should not have any problems with.  What kind of help do
they need?

David



Re: [GENERAL] Postgresql replication

2005-08-25 Thread David Goodenough
On Thursday 25 August 2005 13:03, William Yu wrote:
 As far as I know, nobody has a generic solution for multi-master
 replication where servers are not in close proximity. Single master
 replication? Doable. Application specific conflict resolution? Doable.
 Off the shelf package that somehow knows financial transactions on a
 server shouldn't be duplicated on another? Uhh...I'd be wary of trying
 it out myself.

The most obvious one that does exactly this (generic multi-master
replication) is Lotus Domino.  It is not a relational DB, but not sufficiently
far off to stop the analogy.

Domino marks each document with a binary value which identifies the
server (built from a hash of the server name and the time the DB was
created) and a timestamp when it was last modified, and also each document
(record) has an ID (like OIDs).  More recent versions also do this at a field
level to avoid conflicts and speed replication.  When two servers replicate
they look for all documents modified since the last replication time, and 
compare the lists.  Those modified on only one server are copied across
to the other server, replacing the old record and carrying the updated-on
server and timestamp with them.  When a document is deleted in Domino it
does not actually disappear; it is reduced to a deletion stub, and this gets
replicated as it has the same ID as the original record.  Documents that have
been modified on both sides are copied to the other DB, but both records
remain and it is left to the user to resolve the conflict.  Field-level
replication reduces the need for this considerably.  Periodically the
deletion stubs are purged, once all known replicas have replicated.

Domino has absolutely no concept of a master DB.

Obviously this scheme would be difficult to do on a pure relational system.
But with triggers and a few standard fields it would not be impossible to
do for a limited application set.  How the user would resolve conflicts would
also be application-specific, I suspect, and how one would handle having two
versions of a record in the DB when they both have a field which is supposed
to be unique is also a problem that would have to be resolved (Domino does
not have the concept of unique keys).
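As a rough sketch of the "triggers and a few standard fields" idea - every
name here (table, columns, the custom setting) is hypothetical, not an actual
Domino or PostgreSQL scheme:

```sql
-- Hypothetical change-tracking columns and trigger for one replicated table.
ALTER TABLE docs
  ADD COLUMN origin_server text,
  ADD COLUMN modified_at   timestamptz,
  ADD COLUMN deleted       boolean NOT NULL DEFAULT false;  -- "deletion stub" flag

CREATE OR REPLACE FUNCTION stamp_row() RETURNS trigger AS $$
BEGIN
  -- 'myapp.server_id' would be a custom setting naming this server
  NEW.origin_server := current_setting('myapp.server_id');
  NEW.modified_at   := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER docs_stamp BEFORE INSERT OR UPDATE ON docs
  FOR EACH ROW EXECUTE PROCEDURE stamp_row();
```

A replicator could then ship rows whose modified_at is later than the last
exchange, with deleted = true rows acting as stubs until they are purged.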

David
  

 Bohdan Linda wrote:
  I would have a slight offtopic question, this is issue only of pgsql or
  there are some other db solutions which have good performance when doing
  this kind of replication across the world.
 
  Regards,
  Bohdan
 
  On Thu, Aug 25, 2005 at 09:01:49AM +0200, William Yu wrote:
 It provides pseudo relief if all your servers are in the same building.
 Having a front-end pgpool connector pointing to servers across the world
 is not workable -- performance ends up being completely decrepit due to
 the high latency.
 
 Which is the problem we face. Great, you've got multiple servers for
 failover. Too bad it doesn't do much good if your building gets hit by
 fire/earthquake/hurricane/etc.



[GENERAL] Does preparing statements other than selects help performance?

2005-08-05 Thread David Goodenough
I was looking at an application recently which was written in Java and used 
PostgreSQL as its DB.  In it, extensive use had been made of
PreparedStatements, both for SELECTs and for INSERT, UPDATE and DELETE
statements.  Some of the routines had multiple UPDATEs doing much the same
thing but with slightly different parameters.  In the comments it was stated
that it was better to prepare lots of statements in advance rather than build
one on the spot (and then prepare it, if it needed substitution) because of
the optimiser.

This set me thinking (always dangerous).  I can see how a SELECT can be 
helped by preparing the statement, but not really how an INSERT could
or, other than the SELECT implicit in the WHERE clause on an UPDATE or
DELETE, how UPDATE or DELETE statements would be helped.

Can anyone enlighten me please?

David



Re: [GENERAL] Does preparing statements other than selects help performance?

2005-08-05 Thread David Goodenough
On Friday 05 August 2005 11:57, Martijn van Oosterhout wrote:
 On Fri, Aug 05, 2005 at 10:08:42AM +0100, David Goodenough wrote:
  I was looking at an application recently which was written in Java and
  used PostgreSQL as its DB.  In it, extensive use had been made of
  PreparedStatements, both for SELECTs and for INSERT, UPDATE and DELETE
  statements.  Some of the routines had multiple UPDATEs doing much the
  same thing but with slightly different parameters.  In the comments it
  was stated that it was better to prepare lots of statements in advance
  rather than build one on the spot (and then prepare it, if it needed
  substitution) because of the optimiser.

 Which version of PostgreSQL was this built for? Until recently there
 was no support for server side prepared statements so it mattered not
 one wit whether you had one or a thousand prepared queries, it was all
 done by the client anyway.
I am not sure it was originally built for PostgreSQL, but it is all client
side anyway, or rather it is inside Tomcat and thus, from PG's point of view, 
client side.  I presume by server side you mean triggers and functions, or
am I misunderstanding you?

  This set me thinking (always dangerous).  I can see how a SELECT can be
  helped by preparing the statement, but not really how an INSERT could
  or, other than the SELECT implicit in the WHERE clause on an UPDATE or
  DELETE, how UPDATE or DELETE statements would be helped.

 From the executor's point of view, there is no difference between a
 SELECT, INSERT, DELETE or UPDATE. Each is doing a query on the database
 but doing different things with the result. SELECT sends it to the
 client, UPDATE changes some values and writes the new tuple out, DELETE
 marks the rows deleted. INSERT ... VALUES () has a trivial plan but
 INSERT ... SELECT can be complicated.
This particular application is only using INSERT ... VALUES ( ), so this is
in the trivial camp.  I had never thought of DELETE and UPDATE as being 
variants on SELECT, but it makes sense the way you explain it.

 On the client side, prepared statements simplify coding, since they
 separate the actual SQL text from the function it performs. So there
 you should use one statement for each operation you perform, whatever
 that means for your app.

 On the server side, prepared statements are a way of saving the plan of
 a query and using it multiple times. So the benefit is related to how
 many times you use the statement vs how complex the query is (parsing
 and planning time).

 If your INSERT statement is simple, why bother with prepared statements,
 since the planning time will be almost nil anyway. If your hugely
 complicated DELETE is only run once, again, no benefit since you're not
 reusing the plan.

 Only in the case where you have a query which you execute a lot of
 times (10, 100, 1000) is it a noticeable benefit. Accordingly, several
 Postgres frontends support prepared statements, but only actually plan
 them in the server if you use them more than a predefined number of
 times.
Understood.
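The server-side mechanism being discussed can be sketched as follows; the
statement, table and column names are made up for illustration:

```sql
-- Parse and plan once...
PREPARE update_balance (numeric, integer) AS
  UPDATE accounts SET balance = balance + $1 WHERE account_id = $2;

-- ...then reuse the stored plan many times.
EXECUTE update_balance(10.00, 42);
EXECUTE update_balance(-3.50, 99);

DEALLOCATE update_balance;
```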

 Actually, there is one downside with prepared queries. When processing
 each query individually, PostgreSQL can use the statistics for the
 values given to produce the optimal plan for that set. If your values
 are not equally distributed (can't think of a better phrase) then
 that plan might not be optimal for all the other substitutions you
 might do. Something to think about.

 In any case, I hope this has clarified things for you. It's all a
 tradeoff between code clarity, parsing, planning and execution time.

 Hope this helps,
Thanks,

David



[GENERAL] SQL query

2005-02-11 Thread David Goodenough
I realise this is not strictly a PostgreSQL question, but if the best way to
solve it involves using PG extensions, such as the PG procedural languages,
that is fine: I am only going to do this on PG and so I am happy to use them.

I have an address table, with all the normal fields and a customer name
field and an address type.  There is a constraint that means that the
combination of customer and type have to be unique.  Normally the
only record per customer will be of type 'default', but if for instance
the customer wants a different billing address I would add in a second
type='billing' address record.  

I then want to join this table to another table, say an invoice table,
and I want to use the billing address if present, otherwise the default
address.  I do not want to create either two addresses or to put both
addresses on the invoice.

I could do this by doing a select * from addresses where customer = ?
and type = 'billing', looking to see if there is a result row and if not
repeating the query with type = 'default', but that seems inelegant to 
me.  

I thought of using an inner select for the join, using limit 1 to 
get just the one row, and forcing the order by to give me the billing
address by preference, but I am then dependent on the sort order
of the particular type values I am selecting from.
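That approach might be sketched like this (table and column names assumed;
note that it relies on 'billing' happening to sort before 'default'):

```sql
SELECT i.*, a.*
FROM invoices AS i
JOIN addresses AS a
  ON a.customer = i.customer
 AND a.type = (SELECT type
               FROM addresses
               WHERE customer = i.customer
                 AND type IN ('billing', 'default')
               ORDER BY type   -- 'billing' < 'default' alphabetically
               LIMIT 1);
```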

Is there a better way?  I am sure this kind of problem must have
been solved before.  

Thanks in advance for any help you can give

David



Re: [GENERAL] SQL query

2005-02-11 Thread David Goodenough
On Friday 11 February 2005 11:31, Matt K wrote:
 David Goodenough wrote:
 I could do this by doing a select * from addresses where customer = ?
 and type = 'billing', looking to see if there is a result row and if not
 repeating the query with type = 'default', but that seems inelegant to
 me.

 Use NULL to indicate that the customer type is default. Then you can
 query with:

 select * from addresses where customer = ?
 and coalesce(type, 'billing') = 'billing'

 If type is NULL, the comparison will be 'billing' = 'billing' - always
 true. If there's a bunch of non-null type addresses, you'll get the
 'billing' one.

 http://www.postgresql.org/docs/8.0/interactive/functions-conditional.html#A
EN12003

 Matt

Well, coalesce is not something I had come across; you learn something every
day.

But I cannot use this, as the type (together with the customer) forms the
primary key and is therefore not null.  I could do something like:-

  coalesce( nullif( 'default', type), 'billing')

but I think that might be over-egging it a bit.  I will hold this one in
reserve and remember coalesce for the future.

Thanks,

David



Re: [GENERAL] SQL query

2005-02-11 Thread David Goodenough
On Friday 11 February 2005 11:46, Richard Huxton wrote:
 David Goodenough wrote:
  I realise this is not strictly a PostgreSQL question, but if the best
  way to solve it involves using PG extensions, such as the PG procedural
  languages, I am only going to do this on PG and so I am happy to use them.
 
  I have an address table, with all the normal fields and a customer name
  field and an address type.  There is a constraint that means that the
  combination of customer and type have to be unique.  Normally the
  only record per customer will be of type 'default', but if for instance
  the customer wants a different billing address I would add in a second
  type='billing' address record.
 
  I then want to join this table to another table, say an invoice table,
  and I want to use the billing address if present, otherwise the default
  address.  I do not want to create either two addresses or to put both
  addresses on the invoice.

 Not sure whether a schema change is possible for you, but you might want
   to have two tables -
addresses (customer_id*, addr_id*, ...)
addr_usage (customer_id*, addr_type*, addr_id)
 Add a custom trigger that ensures for every customer_id there is a valid
 row in addr_usage for each addr_type (sales, billing, shipping etc).

 That way you can have any mix of addresses you like, and it's explicit
 which address is for which purpose.
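A sketch of that two-table layout, with column types assumed (only the table
and column names come from the suggestion above):

```sql
CREATE TABLE addresses (
    customer_id integer NOT NULL,
    addr_id     integer NOT NULL,
    street      text,
    town        text,
    PRIMARY KEY (customer_id, addr_id)
);

CREATE TABLE addr_usage (
    customer_id integer NOT NULL,
    addr_type   text    NOT NULL,  -- 'sales', 'billing', 'shipping', ...
    addr_id     integer NOT NULL,
    PRIMARY KEY (customer_id, addr_type),
    FOREIGN KEY (customer_id, addr_id) REFERENCES addresses
);
```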
Interesting idea, I will consider this.

Thanks 

David



Re: [GENERAL] SQL query

2005-02-11 Thread David Goodenough
On Friday 11 February 2005 11:41, Janning Vygen wrote:
 Am Freitag, 11. Februar 2005 12:07 schrieb David Goodenough:
  I have an address table, with all the normal fields and a customer name
  field and an address type.  There is a constraint that means that the
  combination of customer and type have to be unique.  Normally the
  only record per customer will be of type 'default', but if for instance
  the customer wants a different billing address I would add in a second
  type='billing' address record.
 
  I then want to join this table to another table, say an invoice table,
  and I want to use the billing address if present, otherwise the default
  address.  I do not want to create either two addresses or to put both
  addresses on the invoice.
 
  I could do this by doing a select * from addresses where customer = ?
  and type = 'billing', looking to see if there is a result row and if not
  repeating the query with type = 'default', but that seems inelegant to
  me.
 
  I thought of using an inner select for the join, and using limit 1 to
  get just the one, and forcing the order by to give me the billing
  address by preference, but I am then dependant on the sort order
  of the particular type values I am selecting from.

 don't think vertical (addresses in rows), think horizontal (addresses in
 columns), like this:

 SELECT
   c.*,
   COALESCE(a2.street, a1.street) AS street,
   COALESCE(a2.zip, a1.zip) AS zip,
   COALESCE(a2.town, a1.town) AS town
 FROM
   customer AS c
   LEFT JOIN adresses AS a1
     ON a1.customer_id = c.customer_id AND a1.type = 'default'
   LEFT JOIN adresses AS a2
     ON a2.customer_id = c.customer_id AND a2.type = 'billing'

 I just typed this in and did not test it.  The trick is to join the
 addresses table multiple times, with each type condition in its ON clause
 (so the LEFT JOINs are not turned into inner joins), and to pick the right
 data with the COALESCE function, which returns its first argument that is
 NOT NULL - here the billing value when present, otherwise the default.

 If you still have difficulties, please send your schema.

 kind regards,
 janning

Lateral thinking always did appeal to me.  I will look into this further.

Thanks 

David



Re: [GENERAL] SQL query

2005-02-11 Thread David Goodenough
On Friday 11 February 2005 13:39, Bruno Wolff III wrote:
 On Fri, Feb 11, 2005 at 11:07:24 +,

   David Goodenough [EMAIL PROTECTED] wrote:
  I thought of using an inner select for the join, and using limit 1 to
  get just the one, and forcing the order by to give me the billing
  address by preference, but I am then dependant on the sort order
  of the particular type values I am selecting from.

 You can order by boolean expressions such as type = 'billing'.
 You can use that with LIMIT or DISTINCT ON to get just the address you
 want.

Tried this, and got a rather unintuitive answer.  If you have two relevant
entries (one billing, the other default) and you:-

 order by type = 'billing' limit 1

you get the default one, if you:-

 order by type != 'billing' limit 1

you get the billing one.

However:-

  order by type = 'billing' DESC limit 1 

does get you the billing one.

It makes sense in that false == 0 and true == 1 in many languages 
and 0 sorts before 1, but it still feels wrong.

I had not realised I could use a comparison like this in order by.
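Bruno also mentioned DISTINCT ON; that variant might look like this (table
and column names assumed from the earlier description):

```sql
-- One row per customer, preferring the billing address when both exist.
SELECT DISTINCT ON (customer) *
FROM addresses
WHERE type IN ('default', 'billing')
ORDER BY customer, (type = 'billing') DESC;
```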

Thanks

David



Re: [GENERAL] postgresql and javascript

2004-12-10 Thread David Goodenough
On Tuesday 07 December 2004 22:42, Chris Smith wrote:
 [EMAIL PROTECTED] wrote:
  Does anyone know how to connect  javascript to a postgresql database

 You can't connect javascript to any sort of database. You need something
 like php, python etc - it can connect to your database and generate
 javascript.

Well, you may not be able to now, but I seem to recall that a future version
of Kexi (the KOffice equivalent of Access), although it is currently scripted
in Python, is intended to allow scripting in ECMAScript (which is near enough
JavaScript), so that will have to provide an interface.  But you will need to
wait until at least next year.

David


 Regards,

 Chris Smith

 Suite 30, 45-51 Huntley St, Alexandria, NSW 2015 Australia

 Ph: +61 2 9517 2505
 Fx: +61 2 9517 1915

 email: [EMAIL PROTECTED]
 web: www.interspire.com


