Re: [GENERAL] power failure....

2007-12-02 Thread Greg Smith

On Sat, 1 Dec 2007, Joshua D. Drake wrote:


change wal_sync_method to open_sync and fsync=on isn't nearly as bad as
it sounds.


Just be warned that there's been one report that some Linux versions have 
bugs that make open_sync problematic:


http://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php
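
For reference, a minimal sketch of the settings being discussed (postgresql.conf
syntax; whether open_sync is actually a win, or even safe, depends on your
platform, per the report above):

# postgresql.conf excerpt -- a sketch, not a blanket recommendation
fsync = on                     # keep crash safety on
wal_sync_method = open_sync    # sync WAL writes with O_SYNC; benchmark and test first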

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] Cluster using tablespaces?

2007-12-02 Thread Alvaro Herrera
Rainer Bauer wrote:
 Alvaro Herrera wrote:

It has been theorized that cluster would be faster in general if, instead
of doing an indexscan, we used a seqscan + sort step.  It would be good to
measure it.
 
 Could a reindex on the clustered index speed up the clustering (when executed
 immediately before the cluster command)? As I understand it, this index is
 used to fetch the table data in the correct order. Or is most of the time
 spent fetching the table data?

I haven't measured it, but my guess is that most of the time is in
fetching heap pages in random order.

 Also, would it make sense to increase shared_buffers for the cluster
 operation? This is set to 32MB here on my Windows box, as was recommended.

Not sure.  In general yes, but on Windows things are different.


  For btree indexes, there is a temporary copy of the index data, which
  will go wherever you have arranged for temp files to go.  (I think that
  easy user control of this may be new for 8.3, though.)
  
  Could you give me a hint where that would be on Windows? I guess this
  might be worth a try since there are a couple of btree indexes in the
  database.
 
 I think Tom is referring to the new temp_tablespaces config variable.
 
 I moved the pgsql_tmp directory to another disk, but that didn't speed up the
 cluster command.

Probably most of the time is going into creating the new table then.

If you are looking for a short-term solution to your problem, maybe the
best is to follow the recommendation on CLUSTER ref page:

There is another way to cluster data. The CLUSTER command
reorders the original table by scanning it using the index you
specify. This can be slow on large tables because the rows are
fetched from the table in index order, and if the table is
disordered, the entries are on random pages, so there is one
disk page retrieved for every row moved. (PostgreSQL has a
cache, but the majority of a big table will not fit in the
cache.) The other way to cluster a table is to use

CREATE TABLE newtable AS
SELECT * FROM table ORDER BY columnlist;

which uses the PostgreSQL sorting code to produce the desired
order; this is usually much faster than an index scan for
disordered data. Then you drop the old table, use ALTER TABLE
... RENAME to rename newtable to the old name, and recreate the
table's indexes. The big disadvantage of this approach is that
it does not preserve OIDs, constraints, foreign key
relationships, granted privileges, and other ancillary
properties of the table — all such items must be manually
recreated. Another disadvantage is that this way requires a sort
temporary file about the same size as the table itself, so peak
disk usage is about three times the table size instead of twice
the table size. 
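
To spell out the manual steps described in that paragraph, a rough sketch
(table, column and index names are placeholders):

BEGIN;
CREATE TABLE newtable AS
    SELECT * FROM oldtable ORDER BY sortcol;
DROP TABLE oldtable;
ALTER TABLE newtable RENAME TO oldtable;
CREATE INDEX oldtable_sortcol_idx ON oldtable (sortcol);  -- recreate each index
COMMIT;
ANALYZE oldtable;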

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
Experience tells us that man has peeled potatoes millions of times, but one
had to admit the possibility that, in one case out of millions, the potatoes
would peel the man (Ijon Tichy)



[GENERAL] relation deleted while in use encountered with Postgresql 8.0.8

2007-12-02 Thread Gautam Sampathkumar
Hi,

I'm using a python script w/ the PyGreSQL library to insert 1 billion rows
into a database table for an experiment (performing a commit every 10K
rows). My script failed at about 170M rows with the following exception:

  File "/usr/lib64/python2.3/site-packages/pgdb.py", line 163, in execute
    self.executemany(operation, (params,))
  File "/usr/lib64/python2.3/site-packages/pgdb.py", line 185, in executemany
    raise DatabaseError, "error '%s' in '%s'" % ( msg, sql )
pg.DatabaseError: error 'ERROR:  relation 184577 deleted while still in use
' in 'INSERT INTO nfs_files_10 (mxid, fhInode, fhGen, fhSnapId, fhFlags,
name, parentInode, parentGen, parentSnapId, parentFlags, extension, type,
atime, mtime, fileSize, owner, generation) VALUES (10, 120, 927370846, 0, 0,
'gummy0.txt', 1204041, 927370729, 0, 0, 'txt', 0, 1112147234, 1112147234,
40960, NULL, 2);'


After this error, my database table no longer exists and appeared to have
been dropped, although my script was doing only INSERT statements. Any ideas
on what might be causing this and/or whether this is a known issue, and any
possible solutions, would be greatly appreciated.

thanks,
Gautam


Re: [GENERAL] certificate based authorization

2007-12-02 Thread Sebastian - Anton PONOVESCU
Hello

I had not thought about this. Anyway, I think it is quite unusable in my
environment. We're talking about 50+ servers (and in the near future 100+
servers) and 500+ users, each of whom will be granted access to a small
number of servers (like 2 or 3). So it is much easier to tell one server who
is allowed to connect than to tell it that the remaining 497 users (actually
certificates) are not allowed, and then tell another server about a different
497 users who are not allowed to connect in order to let in only its
remaining 2 or 3, and so on.

Thank you and best regards,
Sebastian



-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 30, 2007 3:51 AM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] certificate based authorization


Sebastian - Anton PONOVESCU wrote:
 Hello
  
 Is there a way to use certificate based authorization with PostgreSQL? I
 already implemented authentication, but among the people that my CA
 certifies, and whom I trust by the way, I want to distinguish, for a
 particular server, who I grant access to and who I don't, even if they are
 who they claim they are. And this based only on certificates, not user /
 pass or other mechanisms like LDAP / PAM.

Have you tried adding CRLs?  We support those.
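
For reference, a rough sketch of how a CRL is typically wired in on the server
side (assuming an 8.x-era server; the file names below are the defaults the
server looks for in the data directory):

# postgresql.conf
ssl = on
# then place in $PGDATA:
#   root.crt  -- trusted CA certificate(s)
#   root.crl  -- list of certificates revoked by that CA
# the CRL itself is produced by your CA, e.g. with OpenSSL:
#   openssl ca -config ca.conf -gencrl -out root.crl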

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB
http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +




Re: [GENERAL] PostgresSQL vs Ingress

2007-12-02 Thread Paul Boddie
On 30 Nov, 16:12, [EMAIL PROTECTED] (Tom Lane) wrote:


[Quoting a re-telling of the myth of products living happily ever
after under the control of big companies]

 Anyone who thinks that's a reason to feel good is living on some other
 planet than I do.  Consider that if the company *does* decide to abandon
 the product ... which happens all the time, particularly for products
 that aren't market leaders ... you are up the proverbial creek with no
 paddle.  You've never seen the code and never will.

Indeed. I used to work with a database system which had already
changed ownership at least once, and through a succession of
acquisitions not dissimilar to fish being eaten by successively bigger
fish, with each owner slotting the product alongside some very similar
existing products in their portfolio, the product eventually ended up
being owned by a very large company with a lot of other products on
their shelf (or, if you prefer, a very big fish with a lot of smaller
fish in its diet).

Now, fortunately, I haven't had anything to do with the product
concerned for many years, and although the current owner has a
reputation for supporting stuff over long periods of time, one has to
wonder what kind of support you're actually going to get, whether
there's going to be much new development, or whether the cumulative
effect of the rationalisation process (which saw the little fish all
eaten up) is to milk the existing customers for as long as they can
bear sticking with the product and not migrating to anything else. I
think I'd rather have the source code and a Free Software licence than
an account manager and a corporate roadmap.

Paul



Re: [GENERAL] Postgres WarmStandby using ZFS or Snapshot to create Web DB?

2007-12-02 Thread Robert Treat
On Thursday 29 November 2007 16:08, Jennifer Spencer wrote:
 I am looking for suggestions in setting up a large postgres database
 scenario. We are running a science project with a lot of data expected from
 the science instrument. If you have time to comment, any advice is most
 welcome!

 Here's the deal:
 1. We expect to store ~1TB per year of data for 10 years. Mostly this is a
 warehouse situation - not a lot of updates, or deletes, but a lot of
 inserts.
 2. We need to replicate a subset of our postgres data to an international
 science community, and in turn take in some data from them (we plan to do
 this via Slony-1 unless there is a compelling reason not to).
 3. We need to make a copy of our database available to the general internet
 community.
 4. We need to have a Warm Standby available in case of disaster.
 We plan to use PITR with WAL files for this (again, unless there is a
 compelling reason not to).
 5. We need to make regular full tape backups (~weekly) and occasional
 scheduled maintenance (think quarterly maintenance).

 We do not have an endless budget, sadly, so I could use some help as to how
 to go about this. Having gone from a job where my database software
 actually had paid tech support to one that doesn't (PostGres), I am pretty
 concerned about what could go wrong.

 Assume our Primary server (A) is good enough to serve our in-house users,
 and our Warm Standby (B) is a physical duplicate of A. My plan is to copy
 WAL files to B. Make a tape backup from B weekly, keeping it out of
 recovery mode for ~6 hours, or alternatively make a snapshot of B's data
 files at a given time and tape off the snapshot. This takes care of A & B,
 and the backups, but what about the other requirements?


Using ZFS snapshots as a base backup for setting up PITR works extremely well. 
We've found that pushing incremental snapshots to tape is not as efficient as 
one would hope, but you'll probably have less traffic than we do, so it could 
work; in any case, you can probably do full snapshots to tape once a week 
without causing too much trouble.
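
For anyone following along, a hypothetical sketch of the snapshot-as-base-backup
step (pool/filesystem names are placeholders; assumes WAL archiving is already
configured on A):

psql -c "SELECT pg_start_backup('weekly');"
zfs snapshot tank/pgdata@weekly
psql -c "SELECT pg_stop_backup();"
# the snapshot can then be cloned for a new standby, or streamed to tape:
zfs send tank/pgdata@weekly > /backup/pgdata-weekly.zfs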

 How do we get data to our web community w/out fear of hacking to the
 primary? And how do we do that economically? There is one plan in place to
 use a set of snapshot disks from A's data files to act as the web
 database's files. Can we do that? Is that exceptionally stupid? Another
 plan involves using a Solaris 10 ZFS solution to clone the warm standby B's
 files to act as a web database's files (see:
 http://www.lethargy.org/~jesus/archives ... crack.html for more). I am not
 sure either one of the above solutions will work quickly. We'd like a
 turnaround time from A to B to Web of less than 30 minutes for
 newly-created tables, or new data in existing tables.


Using the ZFS method, you can make new snapshot clones in a matter of minutes, 
and you can script it to make it a pretty brain dead operation. 

 Lastly, we plan to pinhole our firewall for trusted Slony-1 science
 customers. People that we already know who have specific IP addresses. We
 have yet to figure out the drag to our Primary (A) due to Slony-1. Any
 experience with that out there?


It shouldn't be too cumbersome... in the single digit percentages I'd think, 
but it will be workload/hardware dependent. 

 My prior work experience involves a 1TB Sybase database, its warm standby
 and regular backups & quarterly maintenance. I am new to Postgres and the
 idea of no tech support phone calls when things break is a scary one! I am
 trying to create a belt-and-suspenders redundant solution so that if
 something breaks, I have time to figure out what went wrong and fix it
 before the users even know there's a problem.


If you really want paid technical support, there are many options available, 
of which OmniTI is one. HTH.

-- 
Robert Treat
Database Architect
http://www.omniti.com/



Re: [GENERAL] postgresql in ramdisk

2007-12-02 Thread Ron Johnson

On 12/02/07 04:43, oruc çimen wrote:
 hi;
 I have tested PostgreSQL on a ramdisk, but it is not faster than on the
 hard disk. Why? If there is some option for PostgreSQL on a ramdisk,
 please help me. I need a very fast DB; if you know another way to get a
 fast DB, please send me a mail.
 thank you

If you explain what kind of task needs such high speeds (lots of
reads, lots of simple writes, busy web server, etc), and what kind
of hardware you have, then we can begin to ask you more detailed
questions.

 (sorry for my bad english:(   )

Bad English?  You should see my Russian!!!

- --
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] One or more tables?

2007-12-02 Thread Ron Johnson

On 12/02/07 07:35, rokj wrote:
 Hi.
 
 For an example let me say that I have a big (over 1 million) user
 base. Then every user does a lot of inserting/updating of data.
 Would it be better to create different tables for insert/updating for
 every user or  would it be better just to have one big table with all
 data (tables would have of course the same columns, ...). How do you
 cope with this kind of things?
 
 1.example (1 enormous table)
 tablename (id, user_id, datetime, some_data)
 
 2. example (a big number of tables)
 tablename_user_id( id, datetime, some_data)

This should help you to decide how to design your tables.  3NF is as
far as you really need to go.

http://en.wikipedia.org/wiki/Data_normalization
http://pubs.logicalexpressions.com/Pub0009/LPMArticle.asp?ID=88

- --
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Simple math statement - problem

2007-12-02 Thread Lew

Postgres User wrote:

The problem turned out to be related to my function..

Given this table:

CREATE TABLE table2 (
  s_val numeric(6,2),
  e_val numeric(6,2)
) WITH OIDS;



I am curious what would happen if you wrote your procedure like this:

declare
   retval numeric(6,2);
   rec table2%ROWTYPE;
begin
   rec.s_val = 100;
   rec.e_val = 101;
   retval = (rec.s_val - rec.e_val) / rec.s_val;

   return retval;
end

Also, one wonders why you need to do the calculation via a row or record at 
all, when it would seem so easy just to plug in the values.


--
Lew



Re: [GENERAL] Simple math statement - problem

2007-12-02 Thread Lew

Lew wrote:

Postgres User wrote:

The problem turned out to be related to my function..

Given this table:

CREATE TABLE table2 (
  s_val numeric(6,2),
  e_val numeric(6,2)
) WITH OIDS;

The following block of code will set retval = NULL:

declare
   retval numeric(6,2);
   rec record;
begin
   SELECT * INTO rec FROM table2 LIMIT 0;
   rec.s_val = 100;
   rec.e_val = 101;
   retval = (rec.s_val - rec.e_val) / rec.s_val;

   return retval;
end

However, if I explicitly typecast, then it returns the proper value:
retval = (rec.s_val::numeric(6,2) - rec.e_val::numeric(6,2)) /
rec.s_val::numeric(6,2);


Sure, because the first way you're doing integer division, and the 
second way you're doing floating point division.  In integer division, 
-1/100 yields zero.


The more I look at this, the more I think I'm wrong.

I'm researching the semantics of the idioms that you used.  I don't know what 
type rec.s_val and rec.e_val end up being after the integer assignments.
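
A quick way to see the distinction being discussed is to compare the two
expressions directly in psql (a sketch; the numbers mirror the example above):

SELECT (100 - 101) / 100;                               -- integer division: 0
SELECT (100 - 101)::numeric(6,2) / 100::numeric(6,2);   -- numeric division: -0.01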


--
Lew



[GENERAL] postgresql in ramdisk

2007-12-02 Thread oruc çimen
hi;
I have tested PostgreSQL on a ramdisk, but it is not faster than on the
hard disk. Why? If there is some option for PostgreSQL on a ramdisk,
please help me. I need a very fast DB; if you know another way to get a
fast DB, please send me a mail.
thank you
(sorry for my bad english:(   )
G.Oruc Cimen


[GENERAL] One or more tables?

2007-12-02 Thread rokj
Hi.

As an example, let me say that I have a big (over 1 million) user
base, and every user does a lot of inserting/updating of data.
Would it be better to create a different table for inserting/updating for
every user, or would it be better just to have one big table with all the
data (the tables would of course have the same columns, ...)? How do you
cope with this kind of thing?

1.example (1 enormous table)
tablename (id, user_id, datetime, some_data)

2. example (a big number of tables)
tablename_user_id( id, datetime, some_data)


Thank you.

Kind regards,

Rok



Re: [GENERAL] log_line_prefix='%t %u %d %h %p %i %l %x ' causes error

2007-12-02 Thread Andrus
 That works fine for me... are you sure log_line_prefix is line 482 in your 
 config file? You might have inadvertently put a superfluous % somewhere 
 else.

I use the config file below. I have added only some lines to the end of the
file; all other content is from the conf file created by the Windows installer.
If I remove the # sign in front of the last line (line 482) and reload the
configuration, I get a syntax error in the log file.
Is this a Postgres bug?

Andrus.


# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.)  White space may be used.  Comments are introduced
# with '#' anywhere on a line.  The complete list of option names and
# allowed values can be found in the PostgreSQL documentation.  The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the server.
#
# Any option can also be given as a command line switch to the server,
# e.g., 'postgres -c log_connections=on'.  Some options can be changed at
# run-time with the 'SET' SQL command.
#
# This file is read on server startup and when the server receives a
# SIGHUP.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use pg_ctl reload. Some
# settings, which are marked below, require a server shutdown and restart
# to take effect.
#
# Memory units:  kB = kilobytes  MB = megabytes  GB = gigabytes
# Time units:    ms = milliseconds  s = seconds  min = minutes  h = hours  d = days


#---
# FILE LOCATIONS
#---

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'  # use data in another directory
 # (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
 # (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
 # (change requires restart)

# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '(none)'  # write an extra PID file
 # (change requires restart)


#---
# CONNECTIONS AND AUTHENTICATION
#---

# - Connection Settings -

listen_addresses = '*'  # what IP address(es) to listen on;
 # comma-separated list of addresses;
 # defaults to 'localhost', '*' = all
 # (change requires restart)
port = 5432# (change requires restart)
max_connections = 100   # (change requires restart)
# Note: increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).  You
# might also need to raise shared_buffers to support more connections.
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directory = ''  # (change requires restart)
#unix_socket_group = ''   # (change requires restart)
#unix_socket_permissions = 0777  # octal
 # (change requires restart)
#bonjour_name = ''   # defaults to the computer name
 # (change requires restart)

# - Security & Authentication -

#authentication_timeout = 1min  # 1s-600s
#ssl = off# (change requires restart)
#password_encryption = on
#db_user_namespace = off

# Kerberos
#krb_server_keyfile = ''  # (change requires restart)
#krb_srvname = 'postgres'  # (change requires restart)
#krb_server_hostname = ''  # empty string matches any keytab entry
 # (change requires restart)
#krb_caseins_users = off  # (change requires restart)

# - TCP Keepalives -
# see 'man 7 tcp' for details

#tcp_keepalives_idle = 0  # TCP_KEEPIDLE, in seconds;
 # 0 selects the system default
#tcp_keepalives_interval = 0  # TCP_KEEPINTVL, in seconds;
 # 0 selects the system default
#tcp_keepalives_count = 0  # TCP_KEEPCNT;
 # 0 selects the system default


#---
# RESOURCE USAGE (except WAL)
#---

# - Memory -

shared_buffers = 32MB   # min 128kB or max_connections*16kB
 # (change requires restart)
#temp_buffers = 8MB   # min 800kB
#max_prepared_transactions = 5  # can be 0 or more
 # (change requires restart)
# Note: increasing max_prepared_transactions costs ~600 bytes of shared 
memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
#work_mem = 1MB# min 64kB
#maintenance_work_mem = 16MB  # min 1MB
#max_stack_depth = 2MB   # min 100kB

# - Free Space Map -

max_fsm_pages = 204800  # min 

Re: [GENERAL] Postgres High Availablity Solution needed for hot-standby and load balancing

2007-12-02 Thread Ragnar Heil
Hi Usama

Yes, currently I am reading a brochure about Continuent uni/cluster for
PostgreSQL. It looks quite interesting.

Another product that sounds promising: Cybercluster from www.postgres.at
English product description:
http://www.postgresql.at/picts/download/dokumentation/documentation_cybercluster.pdf

Has anyone had experience with it?


thanks
Ragnar
 




From: Usama Dar [mailto:[EMAIL PROTECTED] 
Sent: Friday, 30 November 2007 10:35 
To: Ragnar Heil
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgres High Availablity Solution needed
for hot-standby and load balancing




On 11/29/07, Ragnar Heil [EMAIL PROTECTED] wrote: 

Hi

our customer has got the following requirements:

Req1) Master master replication supported, not only
master / slave 
replication with only the master being writable. If you
do have multiple
slave systems they are only useful from a backup and
standby
perspective.  Our Application must have a db-connection
it can write to.

Req2) Replication of schema should also be possible, not
only data

Req3) Not only a hot-standby-solution is needed. Load
Balancing is
wanted for the future.


Currently I am looking at EnterpriseDB but it seems that
they dont 
support multiple master-replication

best regards
Ragnar






Have you looked at pgCluster or Continuent's uni/Cluster?

-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar 





Re: [GENERAL] Cascading Trigger Prevention

2007-12-02 Thread JonXP
On Nov 29, 5:09 pm, [EMAIL PROTECTED] (Martijn van Oosterhout) wrote:
 On Wed, Nov 28, 2007 at 02:00:58PM -0800, JonXP wrote:
  I have a table that contains a nested set (using paths), and I'm
  trying to create a trigger that updates the timestamps of a node and
  all of its parents on a modification.  Unfortunately, this is causing
  infinitely recurring triggers.

  The insert occurs, which fires the trigger, which then updates all of
  the parent rows, which then fires the trigger again for each update,
  which recurses yet again.

 You can disable triggers on a table, but that's definitely not recommended
 (deadlock prone). It seems to me, though, that if the trigger only updates
 its parent when it fires, everything should work, right? As it recurses up
 the tree it eventually reaches the end, surely?

 Have a nice day,
 --
 Martijn van Oosterhout   [EMAIL PROTECTED]  http://svana.org/kleptog/

  Those who make peaceful revolution impossible will make violent revolution 
  inevitable.
   -- John F Kennedy




Yes, that was one of two approaches I am considering taking.  I was
trying to update all of the parent nodes in one query (which is why I
used this particular method of nested sets) but just recursing up the
tree could also work.
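
A minimal sketch of that recurse-one-level approach, assuming a simple
parent_id column (all names here are placeholders, not the poster's schema):

CREATE OR REPLACE FUNCTION touch_parent() RETURNS trigger AS $$
BEGIN
    -- update only the immediate parent; the same trigger then fires on that
    -- row, walking up one level at a time until parent_id IS NULL
    IF NEW.parent_id IS NOT NULL THEN
        UPDATE nodes SET mtime = now() WHERE id = NEW.parent_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER nodes_touch_parent
    AFTER INSERT OR UPDATE ON nodes
    FOR EACH ROW EXECUTE PROCEDURE touch_parent();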



Re: [GENERAL] relation deleted while in use encountered with Postgresql 8.0.8

2007-12-02 Thread Martijn van Oosterhout
On Thu, Nov 29, 2007 at 02:44:25PM -0800, Gautam Sampathkumar wrote:
 Hi,
 
 I'm using a python script w/ the PyGreSQL library to insert 1 billion rows
 into a database table for an experiment (performing a commit every 10K
 rows). My script failed at about 170M rows with the following exception:

You don't indicate your version so it could be XID wraparound, but
you'd have to be running a pretty old version to run into that still...

 After this error, my database table no longer exists and appeared to have
 been dropped, although my script was doing only INSERT statements. Any ideas
 on what might be causing this and/or if this a known issue and possible
 solutions would be greatly appreciated.

Do you run VACUUM regularly?

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] [HACKERS] Stored procedure issue

2007-12-02 Thread Usama Dar
On Dec 2, 2007 7:40 AM, Dragan Zubac [EMAIL PROTECTED] wrote:

 Hello

 I have a stored procedure which does the billing stuff in our system. It
 works OK, but if I put it in production, where there are some 5-10 billing
 events per second, the whole database slows down. It won't even drop some
 test table, reindex, or vacuum, things which were done before in the blink
 of an eye. If I stop the application which calls the procedure, all is back
 to normal.

 We didn't implement any special locking mechanism in the procedure; all is
 default. The procedure is updating the user's balance in table 'users'. On
 the other hand, a couple of 'heavy load' tables have foreign keys pointing
 to table 'users'.

 Is it a matter of concurrency and some locking issue, or maybe the existence
 of all those foreign keys pointing to table 'users', or maybe something else
 which we're not aware of at the moment?


Can you please post your procedure and the explain plan of the SQL which the
procedure uses to do the billing? There can be a zillion reasons for the
performance problems you are seeing, but the email does not provide enough
information.



 Sincerely

 Pera



  
 





-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] One or more tables?

2007-12-02 Thread Usama Dar
On Dec 2, 2007 6:35 PM, rokj [EMAIL PROTECTED] wrote:

 Hi.

 For an example let me say that I have a big (over 1 million) user
 base. Then every user does a lot of inserting/updating of data.
 Would it be better to create different tables for insert/updating for
 every user or  would it be better just to have one big table with all
 data (tables would have of course the same columns, ...). How do you
 cope with this kind of things?

 1.example (1 enormous table)
 tablename (id, user_id, datetime, some_data)

 2. example (a big number of tables)
 tablename_user_id( id, datetime, some_data)


Although there isn't enough information in the email, instead of creating a
separate table for every user you could use one table, partitioned on
user_id. That would, however, add a maintenance overhead whenever you add a
new user.
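
A minimal sketch of that partitioned-table idea (8.x-style inheritance
partitioning; all names and ranges are placeholders):

CREATE TABLE user_data (
    id        bigserial,
    user_id   integer     NOT NULL,
    datetime  timestamptz NOT NULL DEFAULT now(),
    some_data text
);

CREATE TABLE user_data_p0 (
    CHECK (user_id >= 0 AND user_id < 100000)
) INHERITS (user_data);

CREATE TABLE user_data_p1 (
    CHECK (user_id >= 100000 AND user_id < 200000)
) INHERITS (user_data);

-- with constraint_exclusion = on, queries that filter on user_id
-- only scan the matching child table(s)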





 Thank you.

 Kind regards,

 Rok





-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] full_page_writes = off?

2007-12-02 Thread Greg Smith

On Sat, 1 Dec 2007, Tomasz Ostrowski wrote:


You can also use hdparm -I to check this - look for "Write caching" in the
Commands/features section. If it has a "*" in front then it is enabled and
dangerous.


Right; using -I works with most Linux hdparm versions:

# hdparm -V
hdparm v6.6
# hdparm -I /dev/hda | grep "Write cache"
   *    Write cache
# hdparm -W 0 /dev/hda

/dev/hda:
 setting drive write-caching to 0 (off)
# hdparm -I /dev/hda | grep "Write cache"
        Write cache

While being able to check the state with -W only works in very recent 
ones.


The best way to make this change permanent varies depending on your Linux 
distribution.


Also:  nowadays many SATA disks appear as SCSI devices like /dev/sda.  In 
some cases I believe you can use a recent hdparm on them anyway, in others 
I've had to use sdparm instead.  Several of the examples at 
http://sg.torque.net/sg/sdparm.html show how to manipulate the Write Cache 
Enabled (WCE) status similarly to the above on SCSI devices.
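
For example, something along these lines (a sketch; check the sdparm man page
for your version, since flag spellings may differ):

# show whether the write cache is enabled (WCE) on a SCSI/SATA device
sdparm --get=WCE /dev/sda
# turn the write cache off
sdparm --clear=WCE /dev/sda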



I don't know how to check it on BSD.


In FreeBSD I believe you use atacontrol to check the settings, and you can 
make the changes permanent by fiddling with the /boot/device.hints file.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] Cluster using tablespaces?

2007-12-02 Thread Rainer Bauer
Alvaro Herrera wrote:

 Alvaro Herrera wrote:
Probably most of the time is going into creating the new table then.

If you are looking for a short-term solution to your problem, maybe the
best is to follow the recommendation on CLUSTER ref page:

I've read that section before, but I have lots of foreign key relationships
between the tables.

Thanks Alvaro and Tom, but it seems that I will have to live with that
behaviour, until ...

No, ALTER TABLE/CLUSTER ON only defines what index the table will be
clustered on in the future; it doesn't cluster it at that time.
Perhaps it could be improved so that if a table rewrite is going to be
done anyway for some other reason, the rewrite uses the cluster order.
I think it's far from trivial though.

... this has been tried.
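
For reference, the distinction described above in SQL terms (a sketch with
placeholder names):

ALTER TABLE mytable CLUSTER ON mytable_idx;  -- records the index; moves no data
CLUSTER mytable;                             -- this actually rewrites the table in index order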

Rainer



Re: [GENERAL] log_line_prefix='%t %u %d %h %p %i %l %x ' causes error

2007-12-02 Thread Adrian Klaver
On Friday 30 November 2007 2:31 am, Andrus wrote:
  That works fine for me... are you sure log_line_prefix is line 482 in
  your config file? You might have inadvertently put a superfluous %
  somewhere else.

 I use the config file below. I have added only some lines to the end of
 file, all other contents is from windows installer created conf file.
 If I remove # sign in front of last line (line 482), and reload
 configuration, I got syntax error
 in log file.
 Is this Postgres bug ?

 Andrus.





 # - What to Log -

 log_line_prefix = '%t '   # Special values:

 #---------------------------------------------------------------------------
 # CUSTOMIZED OPTIONS
 #---------------------------------------------------------------------------

 #custom_variable_classes = ''  # list of custom variable class names

 listen_addresses = '*'
 log_destination = 'stderr'
 redirect_stderr = on
 stats_start_collector = on
 stats_row_level = on
 autovacuum = on
 shared_buffers= 15000   # if there is 1 GB of RAM

 log_min_duration_statement = 2 # in milliseconds; -1 disables, 0 logs everything
 #log_line_prefix='%t %u %d %h %p %i %l %x '

You have log_line_prefix in two locations, under "What to Log" and under
"CUSTOMIZED OPTIONS". I would suggest keeping only the one under "What to Log".
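
That is, keep a single uncommented setting, e.g. in the "What to Log" section
(this is just the value from the subject line):

log_line_prefix = '%t %u %d %h %p %i %l %x '
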
-- 
Adrian Klaver
[EMAIL PROTECTED]



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-12-02 Thread Greg Smith

On Fri, 30 Nov 2007, Wolfgang Keller wrote:

it was impossible for me to find a similarly priced 
(Linux-/*BSD/Intel/AMD-)equivalent to my PowerMac G5 over here at the 
time when I bought it.


The problem from my perspective is the common complaint that Apple doesn't 
ship an inexpensive desktop product that would be suitable for light-duty 
server work.  Their cheapest system you can add a PCI-X card to is $2200 
USD (I just priced a system out and realized I can downgrade the 
processors from the default), and that only has 4 SATA drive bays 
which doesn't make it much of a serious database server platform.  A 
similarly configured system from Dell runs around $1900, which gives the 
usual (and completely reasonable) Apple tax of around $300.  However, I 
can just as easily pop over to Dell, buy a $500 system, drop an SATA 
RAID+BBC controller in for another $400, and I've got a perfectly 
reasonable little server--one that on write-heavy loads will outperform at 
least double its price in Apple hardware, simply because that's how much 
it costs to get the cheapest system you can put a caching controller in 
from them.


(Don't anyone take that as a recommendation for Dell hardware, which I 
hate, but simply as a reference point; the only thing I like about them is 
that the system building interface on their web site makes it easy to do 
comparisons like this)



For example, if you have an application that needs high
database write throughput, to make that work well with PostgreSQL you
must have a controller with a battery backed cache.


Hmm, what would be the difference compared to plenty of RAM and a UPS (plus 
stand-by backup server)? Looks just like moving the single point of failure 
to a different hardware item, no...?


When you write a WAL record to commit a transaction, if you can cache that 
write it doesn't slow any client down.  If you can't, the database waits 
for a physical write to the disk, which can only happen at a rate that 
depends on your disk's rotation speed.  For a standard 7200RPM drive, that 
tops out a bit less than 120 writes/second for any single client, and 
somewhere around 500 total for larger numbers of simultaneous clients.
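
(That ceiling is just the rotation rate: 7200 RPM / 60 = 120 revolutions per
second, and a synchronously committing client can complete at most one write
per revolution while it waits for the platter to come back around.)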


The only robust way to cache a write involves a battery-backed controller. 
Relying on RAM or the write cache in the drives, even if you have the 
world's greatest UPS, means that the first person who accidentally unplugs 
your system (or the first power supply failure) could corrupt your 
database.  That's really not acceptable for anyone.  But since the 
integrity policy of the good caching controllers is far better than that, 
you can leave that cache on safely, and only expect corruption if there's 
a multi-day power outage.


It's still more rambling than I'd like, but I have the pieces to a full 
discussion of this topic at 
http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm



LSI drivers are not available for MacOS X on PowerMacs? Ouch.


There might be something out there, but I'm not aware of anything from 
them or other vendors targeted at the current Intel Power Macs that looks 
robust; there's just Apple's offering.


Erm, systematic error here: It could also be that the MySQL 
implementation/configuration for the two different OSes was the source 
for the performance difference.


That's possible, but other than the specific fsync write fixes they 
applied for OS X I'm not aware of anything specific to Mac OS that would 
cause this.  When the low-level benchmarks show awful performance doing 
things like creating processes, and performance dives under a heavy load, 
it seems sensible to assume the two are linked until proven otherwise. 
(Appropriate disclaimer: 
http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation )


It's also true that some of the MySQL threading limitations that were 
brought up in a tangent to this discussion could be contributing as well, 
in which case a PostgreSQL test might not show as large of a gap.  Again, 
criticizing the benchmark methods doesn't accomplish anything, you need an 
advocate for the platform to perform ones showing otherwise before the 
current results are disproven.


The point is that cost for installation, configuration and 
administration must be taken into account.


The question you asked about was how Apple Hardware+Mac OS X+PostgreSQL 
stacks up on a performance basis with more common platforms like PC 
hardware+Linux.  All the answers I've seen suggest not very well, and none 
of these other things are relevant when evaluating the platform from a 
performance perspective.  TCO issues are all relative to the administrator 
and tasks anyway--an experienced Linux system administrator may be a 
little slower on some things than one running Apple's GUI tools, but once 
you get to more scriptable changes they could be far more efficient.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD


Re: [GENERAL] relation deleted while in use encountered with Postgresql 8.0.8

2007-12-02 Thread Adrian Klaver
On Thursday 29 November 2007 2:44 pm, Gautam Sampathkumar wrote:
 Hi,

 I'm using a python script w/ the PyGreSQL library to insert 1 billion rows
 into a database table for an experiment (performing a commit every 10K
 rows). My script failed at about 170M rows with the following exception:

    File "/usr/lib64/python2.3/site-packages/pgdb.py", line 163, in execute
      self.executemany(operation, (params,))
    File "/usr/lib64/python2.3/site-packages/pgdb.py", line 185, in executemany
      raise DatabaseError, "error '%s' in '%s'" % ( msg, sql )
 pg.DatabaseError: error 'ERROR:  relation 184577 deleted while still in use
 ' in 'INSERT INTO nfs_files_10 (mxid, fhInode, fhGen, fhSnapId, fhFlags,
 name, parentInode, parentGen, parentSnapId, parentFlags, extension, type,
 atime, mtime, fileSize, owner, generation) VALUES (10, 120, 927370846, 0,
 0, 'gummy0.txt', 1204041, 927370729, 0, 0, 'txt', 0, 1112147234,
 1112147234, 40960, NULL, 2);'


 After this error, my database table no longer exists and appeared to have
 been dropped, although my script was doing only INSERT statements. Any
 ideas on what might be causing this and/or if this a known issue and
 possible solutions would be greatly appreciated.

 thanks,
 Gautam
Is it possible to show the python script? 
Also were you using the logging functions in Postgres? 
If so what does the log file show?
Was another application/person accessing the database at the same time?

-- 
Adrian Klaver
[EMAIL PROTECTED]



Re: [GENERAL] log_line_prefix='%t %u %d %h %p %i %l %x ' causes error

2007-12-02 Thread Tom Lane
Andrus [EMAIL PROTECTED] writes:
 I use the config file below. I have added only some lines to the end of 
 file, all other contents is from windows installer created conf file.
 If I remove # sign in front of last line (line 482), and reload 
 configuration, I got syntax error
 in log file.
 Is this Postgres bug ?

I still can't duplicate that --- the only way I can get that error is
to leave out the first ' mark.

What PG version are you using?  Back in 8.0 there used to be some odd
corner cases if the last line didn't end with a newline character,
though AFAIR that still wouldn't produce exactly this symptom.

regards, tom lane



Re: [GENERAL] [HACKERS] Stored procedure issue

2007-12-02 Thread Dragan Zubac
Hello

Please find in the attachment the stored procedure
(proc_uni.txt), as well as a description of the tables
involved in its calculations.
The idea of the procedure is to find the longest prefix
match for the destination number, look it up in table
'billing' for the particular user, find the price, and
insert the message into the history and inqueue tables,
as well as decrease the user's balance in table 'users'.
Would it help to put all the prefix/price data in some
sort of cache, and let the procedure first try to match
against the cache and only fall back to the table on
disk if it can't find a match?

I'm looking for a solution where this procedure can
operate at higher loads while leaving the rest of the
database as responsive as possible.

Sincerely

Pera

--- Usama Dar [EMAIL PROTECTED] wrote:

 On Dec 2, 2007 7:40 AM, Dragan Zubac
 [EMAIL PROTECTED] wrote:
 
  Hello
 
  I have a stored procedure which does the billing
 stuff
  in our system,it works ok,but if I put in
  production,where there is some 5-10 billing events
 per
  second,the whole database slows down. It won't
 even
  drop some test table,reindex,vacuum,things which
 were
  done before in the blink of an eye. If I stop the
  application which calls the procedure,all is back
 to
  normal.
 
  We didn't implement any special locking mechanism
 in
  the procedure,all is default. The procedure is
  updating user's balance in table 'users'. On the
 other
  hand a couple of 'heavy load' table has foreign
 keys
  pointing to table 'users'.
 
  Is it the matter of concurency and some locking
 issue
  or maybe the existing of all those foreign keys
  pointing to table 'users',or maybe something else
  which we're not aware at the moment ?
 
 
 Can you please post your procedure and explain plan
 of the SQL which the
 procedure uses to do the billing stuff . There can
 be a zillion reasons for
 the performance problems you are seeing, but the
 email does not provide
 enough information.
 
 
 
  Sincerely
 
  Pera
 
 
 
  


 
 
 
 
 
 -- 
 Usama Munir Dar http://linkedin.com/in/usamadar
 Consultant Architect
 Cell:+92 321 5020666
 Skype: usamadar
 


  

create type dajbre as (status int,id bigint);

CREATE OR REPLACE FUNCTION 
proc_uni(integer,integer,inet,text,integer,integer,text,integer,integer,
text,int, int,boolean,text) RETURNS setof dajbre AS '

DECLARE

uid alias for $1; 
pid alias for $2;
ip_i alias for $3;
s_number alias for $4;
s_ton_i alias for $5; 
s_npi_i alias for $6;
d_number alias for $7;
d_ton_i alias for $8;
d_npi_i alias for $9;
mess alias for $10;
dcs_i alias for $11;
esm_i alias for $12;
delivery_i alias for $13;
u_mess_id_i alias for $14; 

r dajbre%rowtype;

prefixfound boolean;
prefixprice billing.price%TYPE;
dest_num_len int;
tmp_dest_number text;
tmp_user_bal numeric;
tmp_returnval int;
novi_status int;
tmp_his_id bigint;
tmp_u_mess_id_i text;


begin


dest_num_len := char_length(d_number);
tmp_dest_number := d_number; 
prefixfound := false;

 
while dest_num_len > 0 loop 

select into prefixprice price from billing 
where u_id=uid and prefix=tmp_dest_number; 

if not found then 
tmp_dest_number := substring (tmp_dest_number from 1 for 
dest_num_len-1);
dest_num_len := char_length(tmp_dest_number);
else 
prefixfound := true; 
exit;
end if;
end loop;


if prefixfound=false then 
tmp_returnval :=11; 
novi_status :=11;
else if prefixprice = 0 then
tmp_returnval :=11;
novi_status :=50; 
 else select into tmp_user_bal maxsms-cursms from users where id=uid; 
if tmp_user_bal < prefixprice then 
tmp_returnval :=11;
novi_status :=51; 
else
tmp_returnval :=0; 
end if;
 end if;
end if;


if tmp_returnval = 0 then 


insert into history 
(ip,source,dest,message,dcs,esm,s_ton,s_npi,d_ton,d_npi,u_id,delivery,price,p_id,u_mess_id)
 values
(ip_i,s_number,d_number,decode(mess,''base64''),dcs_i,esm_i,s_ton_i,s_npi_i,d_ton_i,d_npi_i,uid,delivery_i,prefixprice,pid,u_mess_id_i);

 tmp_his_id := currval(''history_id_seq'');



if pid = 2 then
if u_mess_id_i = 0 then
tmp_u_mess_id_i := ;
else
tmp_u_mess_id_i := u_mess_id_i;
end if;
else if pid = 3 then
  

Re: [GENERAL] One or more tables?

2007-12-02 Thread Ron Johnson

On 12/02/07 14:58, Usama Dar wrote:
 On Dec 2, 2007 6:35 PM, rokj [EMAIL PROTECTED] wrote:
 
 Hi.

 For an example let me say that I have a big (over 1 million) user
 base. Then every user does a lot of inserting/updating of data.
 Would it be better to create different tables for insert/updating for
 every user or  would it be better just to have one big table with all
 data (tables would have of course the same columns, ...). How do you
 cope with this kind of things?

 1.example (1 enormous table)
 tablename (id, user_id, datetime, some_data)

 2. example (a big number of tables)
 tablename_user_id( id, datetime, some_data)
 
 
 Although there isn't enough information in the email, instead of creating a
 separate table for every user you could use one table, partitioned on
 user_id. That would, however, add a maintenance overhead whenever you add a
 new user.

Cluster by *range* of user ids, and preallocate some number of
tablespaces.

- --
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] PostgresSQL vs Ingress

2007-12-02 Thread Harvey, Allan AC

Ow Mun Henq wrote:
 Ingres is also an open source RDBMS (and data warehousing product) and I'm
 wondering if anyone here has anything to say about it. They also offer
 community editions, but I've not gone to see how much it differs from or
 offers compared to PG.
 
 I've tried to DL the community edition, but upon log-in, I only get a
 blank page. (tried on both firefox and opera)

Our business has been using Ingres since 1990 and still does.
It is a top-quality product, and we have followed it as it changed to an open
source product.
http://www.ingres.com/downloads/prod-comm-download.php

What keeps Ingres in favour here (amongst the other developers), for good or
bad, is QBF and Vision.
I favour Postgres because of psql and its ease of use in Bash scripts.
I moved to Postgres originally because the source was available to
cross-compile libpq to OS-9.

Hope this helps.

Allan





Re: [GENERAL] Recommendations for a datasync scenario ?

2007-12-02 Thread Jorge Godoy
On Friday 30 November 2007 05:02:25, Aarni Ruuhimäki wrote:

 I followed the recent thread about 'replication in Postgres' but still any
 info on experience of similar circumstances and pointers / comments /
 recommendations are more than welcome.

Your problem is not one where replication would help so much.  You can design
your solution so that you *always* feed a local database, and this local
database syncs, from time to time, with a central database.

What to use for syncs?  Some script you can write easily.

All entries should have the location as part of their PK, so that there are no 
clashes and you don't need to rely on sequences and the correct value being 
used everywhere.
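
A minimal sketch of that composite-key idea (table and column names are
made up for illustration):

CREATE TABLE sales (
    location_id integer     NOT NULL,
    sale_id     bigint      NOT NULL,  -- e.g. from a per-location sequence
    sold_at     timestamptz NOT NULL DEFAULT now(),
    amount      numeric(12,2),
    PRIMARY KEY (location_id, sale_id)
);
-- rows created offline at different locations can never collide when synced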

It is the same kind of problem that we have in supermarkets with POSs: the POS
has to sell even if the connection to the server is down.


-- 
Jorge Godoy  [EMAIL PROTECTED]




Re: [GENERAL] [HACKERS] Stored procedure issue

2007-12-02 Thread Dragan Zubac
Hello

Here's the stored procedure itself, as well as the
related tables involved in its calculations.
The idea of the procedure is to find the longest prefix
match for the destination number, look it up in table
'billing' for the particular user, find the price, and
insert the message into the history and inqueue tables,
as well as decrease the user's balance in table 'users'.
Would it help to put all the prefix/price data in some
sort of cache, and let the procedure first try to match
against the cache and only fall back to the table on
disk if it can't find a match?

I'm looking for a solution where this procedure can
operate at higher loads while leaving the rest of the
database as responsive as possible.

--Procedure---

create type dajbre as (status int,id bigint);

CREATE OR REPLACE FUNCTION
proc_uni(integer,integer,inet,text,integer,integer,text,integer,integer,
text,int, int,boolean,text) RETURNS setof dajbre AS '

DECLARE

uid alias for $1;
pid alias for $2;
ip_i alias for $3;
s_number alias for $4;
s_ton_i alias for $5;
s_npi_i alias for $6;
d_number alias for $7;
d_ton_i alias for $8;
d_npi_i alias for $9;
mess alias for $10;
dcs_i alias for $11;
esm_i alias for $12;
delivery_i alias for $13;
u_mess_id_i alias for $14;

r dajbre%rowtype;

prefixfound boolean;
prefixprice billing.price%TYPE;
dest_num_len int;
tmp_dest_number text;
tmp_user_bal numeric;
tmp_returnval int;
novi_status int;
tmp_his_id bigint;
tmp_u_mess_id_i text;


begin


dest_num_len := char_length(d_number);
tmp_dest_number := d_number;
prefixfound := false;


while dest_num_len > 0 loop

select into prefixprice price from billing
where u_id=uid and prefix=tmp_dest_number;

if not found then
tmp_dest_number := substring
(tmp_dest_number from 1 for dest_num_len-1);
dest_num_len :=
char_length(tmp_dest_number);
else 
prefixfound := true;
exit;
end if;
end loop;


if prefixfound=false then
tmp_returnval :=11;
novi_status :=11;  
else if prefixprice = 0 then
tmp_returnval :=11; 
novi_status :=50;   
 else select into tmp_user_bal maxsms-cursms from
users where id=uid;
if tmp_user_bal < prefixprice then 
tmp_returnval :=11;
novi_status :=51;  
else
tmp_returnval :=0;
end if;
 end if;   
end if;


if tmp_returnval = 0 then


insert into history
(ip,source,dest,message,dcs,esm,s_ton,s_npi,d_ton,d_npi,u_id,delivery,price,p_id,u_mess_id)
values
(ip_i,s_number,d_number,decode(mess,''base64''),dcs_i,esm_i,s_ton_i,s_npi_i,d_ton_i,d_npi_i,uid,delivery_i,prefixprice,pid,u_mess_id_i);

 tmp_his_id := currval(''history_id_seq'');



if pid = 2 then
if u_mess_id_i = 0 then
tmp_u_mess_id_i := ;
else
tmp_u_mess_id_i := u_mess_id_i;
end if;
else if pid = 3 then
  tmp_u_mess_id_i := tmp_his_id ;
 end if;
end if;

update history set u_mess_id = tmp_u_mess_id_i where
id = tmp_his_id;
update users set cursms=cursms+ prefixprice where
id=uid;


insert into inqueue(id, u_id) values (tmp_his_id,
uid);

r.status := 0;
r.id := tmp_his_id;
return next r;


else


insert into rejected
(ip,source,dest,message,dcs,esm,s_ton,s_npi,d_ton,d_npi,status,u_id,delivery,u_mess_id)
values
(ip_i,s_number,d_number,decode(mess,''base64''),dcs_i,esm_i,s_ton_i,s_npi_i,d_ton_i,d_npi_i,novi_status,uid,delivery_i,u_mess_id_i);

r.status := 11;
r.id := 0;
return next r;

end if;

return;

end;

' language 'plpgsql';
-

---Billing table-
                            Table "public.billing"
   Column   |      Type      |                     Modifiers
------------+----------------+-------------------------------------------------------
 id         | integer        | not null default nextval('billing_id_seq'::regclass)
 u_id       | integer        | not null
 prefix     | text           |
 operator   | integer        |
 price      | numeric(20,10) |
 comment    | text           |
 new_prefix | boolean        | default false
Indexes:
    "billing_pkey" PRIMARY KEY, btree (id)
    "bil_uid" btree (u_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (u_id) REFERENCES users(id)
    "$2" FOREIGN KEY (operator) REFERENCES operators(id)
-
Users table--
   Column   |  Type   |                     Modifiers
------------+---------+----------------------------------------------------
 id         | integer | not null default nextval('users_id_seq'::regclass)
 username   | text    | not null
 password   | text    | not null
 name       | text    |
 email      | text    |
 mobile     | text    |
 phone      | text    |
 company 

Re: [GENERAL] HA and Replication - how to choose among all the available solutions

2007-12-02 Thread Andrew Sullivan
On Fri, Nov 30, 2007 at 10:30:47PM +0100, Pascal Cohen wrote:
 I am facing a probably very common problem. I made a search in the 
 recent archives and could find many posts related to my issue. But I did 
 not get exactly the answer to my question.

No, and I doubt you will.

 But I don't know how to chose a solution. I don't have much experience 
 and can't see precisely the advantages and the drawbacks of each solution.

That's because, quite frankly, it's hard to answer that question in general.
You have to answer it under your circumstances.  Since you're the one who
knows those, you'll have to do the answering, I'm afraid.  This probably
means do some tests.

 And why should I choose Slony instead of log-shipping to update the slaves?

This is a good example (and the only thing in your list I feel comfortable
talking about): if you think that you might find it handy to be able to
query the replicas, use Slony, because you can't do it with log shipping
(yet).

A

