[SQL] detaching triggers

2005-03-27 Thread Enrico Weigelt

Hi folks,

is it somehow possible to detach a trigger, so that the calling transaction
can return immediately, even before the trigger function has
returned ?

I've got to do some quite complex things which may take some time
when clients fill in some data, but the results are quite
uninteresting to those clients - it's important that the client's
query finishes as fast as possible.

The only solution I currently know is to fill something into a queue
table via a rule and have an external daemon look at it every second.
But this doesn't seem very optimal to me.
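For what it's worth, the polling delay in that scheme could be avoided with
LISTEN/NOTIFY; here is a minimal sketch (the table, column, and channel names
are my own invention, not from any existing setup):

```sql
-- A trigger queues the work and wakes the daemon via NOTIFY, so the
-- daemon can LISTEN instead of polling every second.
CREATE TABLE job_queue (
    job_id   serial PRIMARY KEY,
    payload  text NOT NULL
);

CREATE OR REPLACE FUNCTION enqueue_job() RETURNS trigger AS $$
BEGIN
    INSERT INTO job_queue (payload) VALUES (NEW.some_column);
    NOTIFY job_queue_channel;   -- wakes any LISTENing daemon immediately
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- The external daemon runs "LISTEN job_queue_channel;" and processes
-- job_queue rows whenever a notification arrives.
```

The client's transaction still pays for the INSERT into the queue, but the
slow processing happens in the daemon's own session.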

Any better ideas out there ?


thx
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service

  phone: +49 36207 519931   www:   http://www.metux.de/
  fax:   +49 36207 519932   email: [EMAIL PROTECTED]
  cellphone: +49 174 7066481
---------------------------------------------------------------------
 -- DSL from 0 Euro. -- static IP -- UUCP -- hosting -- webshops --
---------------------------------------------------------------------

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[SQL] (non)zero function

2005-03-27 Thread Enrico Weigelt

Hi folks,


is there a function that returns true if its argument is NULL or
zero (just to make some queries more concise) ? Of course it's
trivial to implement as an SQL function, but if PostgreSQL already
provides such a function, I would prefer using it.
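In case it helps, a minimal sketch of such a helper (the function name is my
own invention):

```sql
-- Returns true when the argument is NULL or zero.
CREATE OR REPLACE FUNCTION is_null_or_zero(numeric) RETURNS boolean AS $$
    SELECT $1 IS NULL OR $1 = 0;
$$ LANGUAGE sql IMMUTABLE;

-- Usage:  SELECT * FROM t WHERE is_null_or_zero(amount);
-- The built-in coalesce() covers many of these cases too:
--   WHERE coalesce(amount, 0) = 0
```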


cu


[SQL] asynchronous triggers

2005-03-31 Thread Enrico Weigelt

hi folks,


is it possible somehow to make (AFTER) triggers run in their
own process/thread, so that the calling session can return immediately
and the trigger runs in the background ?


thx


Re: [SQL] Table PARTITION

2005-04-07 Thread Enrico Weigelt
* Richard Huxton  wrote:
> Sean Davis wrote:
> >This is a totally selfish question, but IF someone has a few minutes, 
> >could he/she explain why table partitioning is such an important tool?
> 
> Say you have a large log-table, you could partition it by month. If most 
> queries only search the last month or two, a lot of your partitioned 
> data could be moved to cheaper/slower disks (via tablespaces).

You can solve this problem quite easily with multiple tables and rules.
At that point you can also filter out some unused data (historical
data often requires less information than live data, because only the end
result of certain finished things is interesting for the future, while
many things needed as long as things are open are completely irrelevant
for later usage; e.g. an archive of accounting information for webhosters
won't require details of single HTTP requests).

Let me give an example from one of my customers' projects:

At fXignal - a forex market trading platform - we're maintaining
a long-time archive of all executed orders. An "open" trade (you've bought
some position) has one order, while a "closed" trade (the position is
sold again) has two.
In general we've got two kinds of access to trade information:
a) viewing and manipulating open trades - active trading (must be fast!)
b) only viewing closed trades for reports (account report, etc.)
We've also got some information which is only interesting for open
trades, e.g. limits (points where a trade should be closed automatically).

We've solved this with two tables: one for open trades and one
for archived (closed) trades. When a trade is opened, it goes into the
open-trade table and resides there until it reaches closed state
(by setting a "closed" flag). Once the trade has reached closed state,
it is copied to the archive table and removed from the open-trade table
by a rule (see CREATE RULE).
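A sketch of such a rule, with deliberately simplified, invented table and
column names (the real tables carry many more columns):

```sql
-- open_trade and archived_trade share the columns we want to keep.
CREATE TABLE open_trade (
    trade_id int PRIMARY KEY,
    amount   numeric NOT NULL,
    closed   bool NOT NULL DEFAULT false
);
CREATE TABLE archived_trade (
    trade_id int PRIMARY KEY,
    amount   numeric NOT NULL
);

-- When a trade is flagged closed, copy it to the archive and drop it
-- from the open-trade table.
CREATE RULE trade_close AS
    ON UPDATE TO open_trade
    WHERE NEW.closed AND NOT OLD.closed
    DO ALSO (
        INSERT INTO archived_trade (trade_id, amount)
            VALUES (NEW.trade_id, NEW.amount);
        DELETE FROM open_trade WHERE trade_id = NEW.trade_id;
    );
```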

When archived trades get old (3 months), we need less of the information,
but what remains has to be kept very long (several years). For that we
catch the DELETE on the archive table and copy data to the long-time
archive before it gets removed from the archive table.

For long-time analyses we've got some views which join together the
interesting information from all tables.



Well, this way we've got the same benefits as with partitions, with a
little more coding work, but with better control and with unneeded
stuff filtered out.



cu


[SQL] checking pgsql functions

2005-04-20 Thread Enrico Weigelt

Hi folks,

Is it possible to check plpgsql functions before they're actually
loaded into the database, as is done with SQL functions ?

Often I've got the problem that a function still contains some
syntax errors (well, nobody's perfect), but my application is
quite critical (a single crashing trigger may cost a lot of money).
So I'd like to check in only functions that are syntactically correct
(and also have correct references).

Is it possible somehow ?
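One workaround (not a real static checker, and the function name below is
just an example): plpgsql bodies are only fully checked when executed, so
exercising a new function inside a transaction that is rolled back catches
many errors before it goes live:

```sql
BEGIN;

CREATE OR REPLACE FUNCTION add_one(i int) RETURNS int AS $$
BEGIN
    RETURN i + 1;
END;
$$ LANGUAGE plpgsql;

-- Actually calling the function surfaces runtime errors such as
-- misspelled identifiers, which CREATE FUNCTION alone does not catch.
SELECT add_one(41);

ROLLBACK;  -- nothing is committed, so a broken version never goes live
```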


cu


[SQL] RULE for mtime recording

2005-04-21 Thread Enrico Weigelt

Hi folks,


I'd like to write an update rule which touches an mtime field
(= current_timestamp) on a normal update - when the mtime field isn't
explicitly set. If the update query explicitly sets an mtime value,
that value has to be passed through.

my tables look like:

-- base class
CREATE TABLE inode
(
    inode_id    oid       not null default nextval('inode_id_seq'),
    mtime       timestamp not null default current_timestamp
);

-- example class
CREATE TABLE foo
(
    bar text
) INHERITS ( inode );


Now if I do

UPDATE foo SET bar = 'xyz' WHERE ...

the mtime should be set to the current time, but with

UPDATE foo SET bar = '123', mtime = '2001-09-11' WHERE ...

we should get 2001-09-11 as the mtime.


Is this possible with rules ?


thx


[SQL] trigger/rule question

2005-04-27 Thread Enrico Weigelt

Hi folks,


for database synchronization I'm maintaining an mtime field in
each record, and I'd like to get it updated automatically on
normal writes (insert seems trivial, but update does not), while it
must remain untouched when data is coming in from another node
(to prevent sync loops).

I first tried it with rules on update, but I didn't find any trick
to prevent infinite recursion. If I replaced the update by a delete
and reinsert, I'd probably run into trouble with constraints and
delete rules.

Triggers don't seem to have this problem, but they require a function
call per record, while a rule-based solution would only rewrite the
actual query.

But I still have the unsolved problem of how to decide when to
touch the mtime and when to pass it through untouched. I haven't
found any trick to explicitly bypass specific triggers yet.


Any ideas ?


thx
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service
  phone: +49 36207 519931   www:   http://www.metux.de/
  fax:   +49 36207 519932   email: [EMAIL PROTECTED]
---------------------------------------------------------------------
  Realtime Forex/Stock Exchange trading powered by PostgreSQL :))
        http://www.fxignal.net/
---------------------------------------------------------------------



Re: [SQL] trigger/rule question

2005-05-01 Thread Enrico Weigelt
* Ramakrishnan Muralidharan <[EMAIL PROTECTED]> wrote:

Hi,

> Going through your mail, I assume that you are updating the mtime 
> only after inserting the record. 

A "normal" update (= done by an application or user) should also
update the mtime. But there's a replication subsystem whose writes
should go through untouched.

> It is always possible to check the mtime field value of the inserted 
> record and take action based on it in the trigger. 

yeah, but how do I detect whether the application has explicitly
written it ?

The only option I currently have in mind is to use some session-
dependent data, i.e. the username, or some persistent storage (could
easily be done e.g. with plphp) for this decision. The sync subsystem
would have to do some "special" login (e.g. a separate user, or setting
a session-wide variable) before doing its work.
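A sketch of the session-variable idea using a custom configuration
variable (the "myapp.replication" name is invented, and this assumes the
server is configured to allow a custom variable class):

```sql
-- The sync subsystem marks its own session right after connecting:
SELECT set_config('myapp.replication', 'on', false);

-- A trigger function that touches mtime only for unmarked sessions.
-- (Note: current_setting() raises an error for an unset variable, so
-- normal sessions would need it set to 'off', or the trigger would
-- have to trap that error.)
CREATE OR REPLACE FUNCTION touch_mtime() RETURNS trigger AS $$
BEGIN
    IF current_setting('myapp.replication') <> 'on' THEN
        NEW.mtime := current_timestamp;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```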

I would be happier to let a rule do this, so there's no extra
function call per written row. But all my experiments ran into
infinite recursion trouble.

> Is it possible to send me details about the trigger?
The trigger doesn't exist yet. I'm currently maintaining the mtime
updates within the application, but I want to get away from that. It
would probably be interesting if a normal application couldn't
touch the mtime at all.


cu


Re: [SQL] trigger/rule question

2005-05-01 Thread Enrico Weigelt
* Christoph Haller <[EMAIL PROTECTED]> wrote:

Hi,

> I assume this still refers to 
> [SQL] RULE for mtime recording 
> from last Friday. 

ehm, yeah. I forgot that I'd already asked about this ...
hmmpf. Seems I'm not getting younger ;-)


> I gave it another thought and 
> I am now having something which seems to work. 
> The trick is interpose a view to avoid the 
> rule recursion: 



correct me if I'm wrong:

you don't let the application write to the actual storage table, but
instead to a view, which applies a modified write to the actual storage
table, from which the reads also get their data.

okay, that's really an idea worth thinking about :)
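As I understand it, the trick could look roughly like this (table and
column names are invented for the sketch):

```sql
-- Clients only ever see the view "foo"; the storage table stays hidden.
CREATE TABLE foo_storage (
    id    int PRIMARY KEY,
    bar   text,
    mtime timestamp NOT NULL DEFAULT current_timestamp
);
CREATE VIEW foo AS SELECT id, bar, mtime FROM foo_storage;

-- The rule on the view rewrites the UPDATE against the storage table,
-- so no rule on the view itself can recurse. mtime is touched only
-- when the client didn't change it explicitly.
CREATE RULE foo_upd AS ON UPDATE TO foo DO INSTEAD
    UPDATE foo_storage
       SET bar   = NEW.bar,
           mtime = CASE WHEN NEW.mtime IS DISTINCT FROM OLD.mtime
                        THEN NEW.mtime
                        ELSE current_timestamp END
     WHERE id = OLD.id;
```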

insert should work the same way. but how do we implement delete ?
(the application should only see one table, so in our case the view).
if we use "DO INSTEAD", we won't get anything to delete (AFAIK), so
we cannot intercept here. the only chance seems to be to leave out
"INSTEAD" and live with duplicate data.


Did I miss anything ?


cu


Re: [SQL] pgtop, display PostgreSQL processes in `top' style

2005-05-01 Thread Enrico Weigelt
* Cosimo Streppone <[EMAIL PROTECTED]> wrote:

Hi,

> I'm currently working on something like a `mytop' clone,
> also written in Perl with DBI + DBD::Pg interface.
> 
> mytop is a command line utility that displays mysql
> server status and clients connected modeled after
> `top' unix utility. (http://mytop.sourceforge.net).

Great thing.

I'd like to invest some time in it, but I'd prefer coding
it in Java for several reasons.

Did you set up a mailing list for that project ?
(if not, I could offer to host it - it's just a few keystrokes
away for me)


cu


[SQL] Mutex via database

2005-05-04 Thread Enrico Weigelt

Hi folks,


I've got a dozen servers processing jobs from a database.
Some of these may only run exactly once at a time, otherwise
I'll get jobs done twice.

Is there any way to implement a kind of mutex within the
database which is automatically released if the holding
process/connection dies ?
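Later PostgreSQL releases (8.2 and up) ship advisory locks, which behave
exactly like this; on older servers the contrib userlock module offers
similar functions. A sketch with an arbitrary lock id:

```sql
-- Each worker tries to take the same application-defined lock id
-- (42 here is arbitrary). Exactly one session gets "true"; the lock
-- is released automatically when that session's connection dies.
SELECT pg_try_advisory_lock(42) AS i_am_the_active_worker;

-- ... the winning session processes jobs while connected ...

SELECT pg_advisory_unlock(42);  -- optional: disconnecting releases it anyway
```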


thx


Re: [SQL] Converting from MS Access field aliases

2007-08-08 Thread Enrico Weigelt
* Tom Lane <[EMAIL PROTECTED]> wrote:

Hi,

> This is not an "extension", it is *directly* contrary to both 
> the letter and spirit of the SQL standard. 

at which point does this break the specification ?
What would happen if Postgres allowed this ?

IMHO, supporting aliases in WHERE clauses would make some
queries easier to read. Think of cases where some column
comes from a complex calculation (e.g. many nested
CASEs, etc.) and you need that calculated column
in the WHERE clause. Of course it's good style to encode
those things in views, but is that always performant ?


cu
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
        http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / fixes for dozens of packages in dozens of versions:
        http://patches.metux.de/
---------------------------------------------------------------------



[SQL] Implementing an regex filter

2007-08-08 Thread Enrico Weigelt
ND (NOT seen))
   ->  Hash  (cost=2099.20..2099.20 rows=9127 width=57) (actual time=205.996..205.996 rows=0 loops=1)
         ->  Seq Scan on articles  (cost=0.00..2099.20 rows=9127 width=57) (actual time=0.373..187.468 rows=8662 loops=1)
               Filter: (end_time > ('now'::text)::timestamp(6) with time zone)
 Total runtime: 126921.854 ms


   
I'm not sure what "Total runtime" means. Is it the time the analyze
took, or the time the query will take to execute ?

If it's really the execution time, then the second query would be
much faster (about 2 min vs. 18 min). But I really wonder: why
is processing one huge regex so dramatically slow ?


BTW: in some tables I'm using the username instead of (or parallel
to) the numerical id, to skip joins against the user table. But
I'm not sure whether this is wise for performance.


Any hints for further optimization appreciated :)


thx


[SQL] Performance on writable views

2007-08-08 Thread Enrico Weigelt

Hi folks,


I'm often using writable views as interfaces to clients, so
they only see "virtual" objects and never have to cope with
the actual storage, i.e. to give some client a totally
denormalized view of certain things, containing only the
information required for certain kinds of operations.

This method is nice for creating easy and robust client
interfaces - internal schema changes are not visible to
the client. In situations where many, many clients - often
coded/maintained by different people - have to access a
database which is still under development (typical for
many in-house applications), it helps to circumvent interface
instabilities.

Now I've got the strange feeling that this makes updates
slow, since the database always has to run the whole view query
to fetch the record to be updated (i.e. to get OLD.*).

Could anyone with some deep insight please give me some 
details about that issue ?


cu 


Re: [SQL] Database synchronization

2007-08-08 Thread Enrico Weigelt
* Jyoti Seth <[EMAIL PROTECTED]> wrote:

Hi,

if you can live with a lag of a few seconds, you can use a separate
transfer process which watches the source of your prices and
updates the second DB when some price changes.

Depending on how often prices change, there are two good options:

a) add a timestamp field to the original price table, which is
   maintained by an after-trigger. The transfer process regularly
   (i.e. every few seconds) fetches all records which have
   been changed since the last poll (via the new timestamp field).

b) create a new table for a journal of the price changes.
   This journal is filled by rules on the original table and
   contains exactly what's needed to reproduce the price changes
   in the other DB. The transfer process regularly fetches
   the journal and replays it in the second DB.
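A sketch of option b), with invented table and column names (it assumes a
source table "price" with an "item_id" and a "price" column):

```sql
-- Every price change on the source table is journaled by a rule;
-- the transfer process drains the journal.
CREATE TABLE price_journal (
    journal_id serial PRIMARY KEY,
    item_id    int       NOT NULL,
    new_price  numeric   NOT NULL,
    changed_at timestamp NOT NULL DEFAULT current_timestamp
);

CREATE RULE price_log AS
    ON UPDATE TO price
    WHERE NEW.price <> OLD.price
    DO ALSO
        INSERT INTO price_journal (item_id, new_price)
            VALUES (NEW.item_id, NEW.price);

-- Transfer process, run periodically:
--   SELECT * FROM price_journal ORDER BY journal_id;
--   ... apply the changes to the second DB, then DELETE the fetched rows.
```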
   
I've implemented method b) in a realtime stock (FOREX) trading
application, where trading robots automatically execute
transactions on the customers' broker accounts on signals
provided by a team of professional traders. (The client can
subscribe to several trading-signal channels and define in
which quantities transactions from which channels should be
executed on their broker accounts, and the broking platform
does all the time-critical work.) The robots work entirely on
separate databases (on separate hosts); several transfer
processes feed in robot commands and fetch back results to the
primary database. So time-critical things are separated onto
their own servers. It works quite fine :)


cu


[SQL] Finding broken regex'es

2007-10-02 Thread Enrico Weigelt

Hi folks,


I'm looking for some way to find broken regexes in some column
in order to kick them out. For now I'm regularly fetching all regexes
from a PHP script, trying a preg_match(), and so finding the broken
ones to remove them later.

Is there any way to do this directly within the DB ?
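One way that should work (the function and table names are my own
invention): try each pattern inside plpgsql and trap the error, so the
broken regexes can be found with a single query instead of an external
script.

```sql
CREATE OR REPLACE FUNCTION regex_is_valid(pattern text) RETURNS boolean AS $$
BEGIN
    PERFORM '' ~ pattern;   -- forces the pattern to be compiled
    RETURN true;
EXCEPTION WHEN invalid_regular_expression THEN
    RETURN false;
END;
$$ LANGUAGE plpgsql;

-- Usage, against a hypothetical filters(id, pattern) table:
--   SELECT id, pattern FROM filters WHERE NOT regex_is_valid(pattern);
```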


thx