Re: [HACKERS] storing binary data

2001-10-23 Thread Alex Pilosov

Use bytea. Search archives.

On Sun, 21 Oct 2001, Jason Orendorff wrote:

 Reply-To: sender
 
 Hi.  I was surprised to discover today that postgres's
 character types don't support zero bytes.  That is,
 Postgres isn't 8-bit clean.  Why is that?
 
 More to the point, I need to store about 1k bytes per row
 of varying-length 8-bit binary data.  I have a few options:
 
  + BLOBs.  PostgreSQL BLOBs make me nervous.  I worry about
the BLOB not being deleted when the corresponding row in
the table is deleted.  The documentation is vague.
 
  + What I really need is a binary *short* object type.
I have heard rumors of a legendary bytea type that might
help me, but it doesn't appear to be documented anywhere,
so I hesitate to use it.
 
  + I can base64-encode the data and store it in a text
field.  But postgres is a great big data-storage system;
surely it can store binary data without resorting to
this kind of hack.
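For reference, the base64 workaround mentioned above is straightforward; here is a minimal Python sketch (illustrative only, the function names are made up) showing why it is safe for an 8-bit-unclean text column, at the cost of roughly 33% size overhead:

```python
import base64

def encode_for_text_column(data: bytes) -> str:
    # base64 output is pure ASCII, so it survives any text column
    # that cannot hold zero bytes
    return base64.b64encode(data).decode("ascii")

def decode_from_text_column(stored: str) -> bytes:
    return base64.b64decode(stored)

blob = bytes(range(256))                 # includes zero bytes
stored = encode_for_text_column(blob)
assert "\x00" not in stored              # safe for a text field
assert decode_from_text_column(stored) == blob
```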
 
 What should I do?  Please help.  Thanks!
 
 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Feature Request - PL/PgSQL

2001-10-14 Thread Alex Pilosov

You already can return a cursor.

Support for returning a record set is being worked on.

-alex
On Sun, 14 Oct 2001, Gavin Sherry wrote:

 Hi all,
 
 It would be very nice if PL/PgSQL could return a record set (ie, set of
 tuples). This could be done in two ways as far as I can imagine: either
 PL/PgSQL just returns the rows as a normal query would, or it could return
 a cursor. The former would be very useful, the latter easier to implement
 (especially if INOUT arguments get implemented =)).
 
 Currently, this seems to go against the grain of PL/PgSQL - am I missing
 something?
 
 Gavin
 
 
 
 





Re: [HACKERS] Suitable Driver ?

2001-10-11 Thread Alex Pilosov

On Thu, 11 Oct 2001, Balaji Venkatesan wrote:

 Now I need to install DBD for PgSQL. Is
 this the driver I have to work with for pgsql?
 Or do I have any other option to connect to pgsql
 from perl? Indeed, I've found another way:
 using the Pg driver provided by PGSQL to interface
 perl with pgsql.
You need DBD::Pg, which is a DBD driver for postgres.

 
 I need to know exactly the difference between
 use Pg; and use DBI; I need to know which one
 is the correct direction under what circumstances.
You need use DBI; and use DBD::Pg;
Pg by itself is a slightly lower-level module, similar to the C interface
to postgresql.

 when I tried to install DBD-Pg-0.93.tar.gz under Linux
 i get
 
 Configuring Pg
 Remember to actually read the README file !
 please set environment variables POSTGRES_INCLUDE and POSTGRES_LIB !
 
 I need to know what these varibles POSTGRES_INCLUDE and POSTGRES_LIB
 should point to ...
To the location of your installed postgres includes and libraries.
For example:

export POSTGRES_INCLUDE=/usr/local/pgsql/include
export POSTGRES_LIB=/usr/local/pgsql/lib

-alex





Re: [HACKERS] RFD: access to remote databases: alternate suggestion

2001-10-02 Thread Alex Pilosov

You are attacking two things here:

a) schemas, which should be done in 7.3, making multiple databases on the
same host unnecessary.

b) connections to a remote host's databases, which is partially implemented
already (in an ugly way, but...) see contrib/dblink

What you described is syntactic sugar to implement b), which isn't a bad
idea, but just consider: it is already done. Sorta.

On Wed, 3 Oct 2001 [EMAIL PROTECTED] wrote:

 Hi!
 
 0. I think access to other databases is really important. There was
 a discussion about that. Using a dot operator to specify a
 database (schema) seems to be very standard and elegant.
 But there is another way to implement it. Here is my
 suggestion.
 
 1. First, some syntax:
 
 CREATE [ SHARED ] [ TRUSTED ] CONNECTION conn_name
   USING 'conn_string'
   [ CONNECT ON { LOGIN | USE } ]
   [ DISCONNECT ON { LOGOUT | COMMIT } ];
 
 Description
   Creates a connection definition (Oracle: database link) to
   a remote database.
 
 SHARED
   Means only one instance of connection exists and is accessible
   to all qualified users.
 
 TRUSTED
   Only superusers can use this connection (like TRUSTED modifier
   in CREATE LANGUAGE).
 
 conn_name
   Just an identifier.
 
 'conn_string'
   Connect string in standard form accepted by libpq
   'PQconnectdb' function.
 
 CONNECT ON { LOGIN | USE }
   Defines whether connection should be established when
   user logs in, or when references remote object for the
   first time (default).
 
 DISCONNECT ON { LOGOUT | COMMIT }
   Defines whether connection should be closed when
   user logs out (default), or when transaction is ended (COMMIT,
   ROLLBACK, but also exiting).
 
 2. Additional commands
 
 ALTER CONNECTION conn_name
   USING 'conn_string'
   [ CONNECT ON { LOGIN | USE } ]
   [ DISCONNECT ON { LOGOUT | COMMIT } ];
 
 Description
   Changes behaviour of a defined connection (same parameters
   as for CREATE CONNECTION).
 
 
 DROP CONNECTION conn_name;
 
 Description
   Hmm... drop the connection definition?
 
 
 Also a new privilege CONNECT should be added, so
 GRANT CONNECT ON remote_database TO SCOTT;
 can be processed.
 
 
 3. How to use this?
 
 SELECT local.id, remote.name
   FROM orders local, emp@remote_database remote
   WHERE local.emp_id = remote.id;
 
 SELECT give_a_raise_proc@remote_database(1000);
 
 
 4. Some notes (in random order)
 
 If a 'conn_string' does not contain user/password information, the
 connection is performed using the current user's identity. But for a SHARED
 connection, always use a 'nobody' account (remember to create a
 'nobody' user on the remote database). For security reasons,
 'conn_string' must be stored in encrypted form.
 
 When CONNECT ON LOGIN is used, the connection is established
 only if the user has the CONNECT privilege granted on it. For a TRUSTED
 connection, superuser rights must also be checked.
 
 If the first remote object is accessed within a transaction, a remote
 transaction should be started. When the transaction ends, the remote
 transaction should also be ended the same way (commit or rollback).
 
 A SHARED connection should be established when the first user logs in
 or uses a remote object (depending on the CONNECT ON clause) and
 terminated when the last user ends a transaction or disconnects
 (depending on the DISCONNECT ON clause). Of course, no remote
 transaction can be performed for a SHARED connection.
 
 Of course it would require a lot of work, but it can be split into parts.
 The minimum IMHO could be a SHARED connection with
 CONNECT ON USE and DISCONNECT ON LOGOUT behaviour.
 
 5. Conclusion
 
 I know it is much easier to 'invent' new functionality than
 to implement it. I also realize this proposal is neither complete
 nor coherent. Still, I want to hear/read your opinions about it.
 
 Regards,
 
 Mariusz Czulada
 
 P.S.: Is it planned to add an 'auto_transaction' parameter at the server
 or database level, so events like login, commit or rollback
 automatically start a new transaction without 'BEGIN WORK'
 (like Oracle does)?
 
 
 
 





Re: [HACKERS] Spinlock performance improvement proposal

2001-09-26 Thread Alex Pilosov

On Wed, 26 Sep 2001, D. Hageman wrote:

   Save for the fact that the kernel can switch between threads faster than
   it can switch processes, considering threads share the same address space,
   stack, code, etc.  If need be, sharing data between threads is much
   easier than sharing between processes.
  
  When using a kernel threading model, it's not obvious to me that the
  kernel will switch between threads much faster than it will switch
  between processes.  As far as I can see, the only potential savings is
  not reloading the pointers to the page tables.  That is not nothing,
  but it is also
major snippage
   I can't comment on the isolate data line.  I am still trying to figure 
   that one out.
  
  Sometimes you need data which is specific to a particular thread.
 
 When you need data that is specific to a thread you use a TSD (Thread 
 Specific Data).  
Which, to my knowledge, Linux emphatically does not support.

As a matter of fact, a quote from Linus on the matter was something like
"The solution to slow process switching is fast process switching, not
another kernel abstraction" [referring to threads and TSD]. TSDs make the
implementation of thread switching complex, and fork() complex.

The question about threads boils down to: is there far more data that is
shared than unshared? If yes, threads are better; if not, you'll be
abusing TSD and slowing things down.

I believe that right now postgresql's model of sharing only the things
that need to be shared is pretty damn good. The only slight problem is the
overhead of forking another backend, but it's still _fast_.

IMHO, threads would not bring a large improvement to postgresql.

Actually, if I remember correctly, someone ported postgresql (I think it
was 6.5) to be multithreaded, with major pain, because the requirement
was to integrate with CORBA. I believe that person posted some benchmarks
which were essentially identical to non-threaded postgres...

-alex





Re: [HACKERS] Spinlock performance improvement proposal

2001-09-26 Thread Alex Pilosov

On Wed, 26 Sep 2001, D. Hageman wrote:

 Oh, man ... am I reading stuff into what you are writing or are you 
 reading stuff into what I am writing?  Maybe a little bit of both?  My 
 original contention is that I think that the best way to get the full 
 potential out of SMP machines is to use a threads model.  I didn't say the 
 present way wasn't fast.  
Or, alternatively, that the current inter-process locking is a bit
inefficient. It's possible to have inter-process locks that are as fast as
inter-thread locks.

   Actually, if I remember, there was someone who ported postgresql (I think
  it was 6.5) to be multithreaded with major pain, because the requirement
  was to integrate with CORBA. I believe that person posted some benchmarks
  which were essentially identical to non-threaded postgres...
 
 Actually, it was 7.0.2 and the performance gain was interesting.  The
 posting can be found at:
 
 7.0.2            about 10:52 average completion
 multi-threaded          2:42 average completion
 7.1beta3                1:13 average completion
 
 If the multi-threaded version was 7.0.2 and threads increased performance
 that much, I would have to say that was a bonus.  However, the
 performance increases that the PostgreSQL team implemented later ...
 pushed the regular version ahead again.  That kinda says to me that the
 potential is there.
Alternatively, you could read that as: 7.1 took the wind out of threaded
sails. :) But I guess we won't know until the current version is ported to
threads...

-alex





Re: [HACKERS] Changing data types

2001-09-24 Thread Alex Pilosov

On Mon, 24 Sep 2001, mlw wrote:

 To be honest I am very surprised that MS SQL supports that, but then
 again Microsoft is so used to doing everything so utterly wrong, they
 have to design all their products with the ability to support
 fundamental design error corrections on the fly.
 
 I would be surprised if Oracle, DB2, or other industrial grade
 databases could do this. Needing to change a column from a varchar to
 an integer is a huge change and a major error in design.
Actually, they do. It's not such a big deal; it's the same as adding a
column and dropping a column. If you can do that, you can do a modification
of type.

The sticky thing is dropping a column. There are two options, and the
postgresql developers just can't make up their minds :P)

a) keep the old column data in the database (wasted space, but fast)
b) immediately 'compress' the table, removing the old data (slow, and needs
a lot of space during the compression)

Option a) was implemented once, but it was kludgy and had a few kinks, so it
was removed. Option b) plain sucks :P)

-alex






Re: [HACKERS] Changing data types

2001-09-24 Thread Alex Pilosov

On Mon, 24 Sep 2001, Rod Taylor wrote:

 Out of curiosity how was option a) implemented?  I could envision
 supporting multiple versions of a tuple style to be found within a
 table (each described in pg_attribute).  Gradually these would be
 upgraded through normal use.
Check the archives (look for DROP COLUMN and Hiroshi Inoue, author of
original patch).

 One would expect that keeping multiple versions of a tuple structure
 inside a single table to be slower than normal for selects, but I
 don't think it would require marking the rows themselves -- just base
 it on the max and min transactions in the table at that time.  Vacuum
 would have to push the issue (5k tuples at a time?) of upgrading some
 of the tuples each time it's run, in order to enforce that they were all
 gone before XID wrap.  Background vacuum is ideal for that (if
 implemented).  Drop all constraints, indexes and the name (change to
 $1 or something) of the column immediately.  Vacuum can determine when
 the XID Min in a table is > the XID Max of another version and drop the
 information from pg_attribute.
I think it was done by setting attribute_id to negative, essentially
hiding it from most code, instead of having two tuple versions, but I'm
really not very familiar with it. Check the archives :)

 Obviously affected:
 - pg_attribute, and anything dealing with it (add XID Max, XID Min
 wraps for known ranges)
 - storage mechanism.  On read of a tuple, attempt to make it fit the latest
 version (XID Max is NULL) by ignoring select fields.
 
 I'll have to leave it up to the pros as to whether it can be done,
 should be done, and what else it'll affect.
 
 I suppose this was option a) that was removed due to its kludginess
 :)






Re: [HACKERS] Changing data types

2001-09-24 Thread Alex Pilosov

This is not for -hackers. 

And the answer is no, you can't. Recreate the table with the correct types
and insert the old values into it.
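The recreate-and-copy dance looks roughly like this; it is sketched here against Python's bundled sqlite3 as a stand-in (table and column names are made up), but with PostgreSQL you would run the equivalent SQL through psql, using a cast such as code::integer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, code VARCHAR(9))")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "42"), (2, "7")])

# 1. create a new table with the corrected column type
con.execute("CREATE TABLE t_new (id INTEGER, code INTEGER)")
# 2. copy the data over, casting the old varchar column to integer
con.execute("INSERT INTO t_new SELECT id, CAST(code AS INTEGER) FROM t")
# 3. drop the old table and rename the new one into place
con.execute("DROP TABLE t")
con.execute("ALTER TABLE t_new RENAME TO t")

rows = con.execute("SELECT id, code FROM t ORDER BY id").fetchall()
assert rows == [(1, 42), (2, 7)]   # values are now integers
```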

On Mon, 24 Sep 2001, Gowey, Geoffrey wrote:

 I posted this in my last message, but have not heard anything yet so I'm
 wondering if it was overlooked.  I need to know how to change a column from
 being say a varchar(9) to an integer.  Does anyone know how to change the
 data type?
 
 Geoff
 
 
 





Re: [HACKERS] [SQL] outer joins strangeness

2001-09-24 Thread Alex Pilosov

[moved to hackers]

On Mon, 24 Sep 2001, Stephan Szabo wrote:

  Postgres should understand that left outer join does not constrain join
  order...
 
 But it can.  If your condition was a join between the other table
 and the right side of the left outer join, you'd have the same condition
 as a right outer join and the left side.  The real condition, I think,
 is that you can join a non-explicitly joined table to the x side of an
 x outer join before the outer join, but not to the other side.
Yes yes. Maybe I was imprecise. Right join and left join are the same,
only the ordering is different. Let's call the table that will always be
included in a join a 'complete' table.

Then, joins should not impose join order on the complete table. Of course,
joins against an 'incomplete' table must be done only after the outer join
is done.

Anyone who can actually fix it? :)
-alex






Re: [HACKERS] [SQL] outer joins strangeness

2001-09-23 Thread Alex Pilosov

On Sun, 23 Sep 2001, Stephan Szabo wrote:

 On Sun, 23 Sep 2001, Alex Pilosov wrote:
 
  It may be just me, or I am grossly misunderstanding syntax of outer joins,
  but I see that plans for my queries are different depending on how I place
  join conditions and sometimes even on order of the tables.
  
  Example:
  1:
  explain select * from customers c,orders o left outer join adsl_orders ao
  on ao.order_id=o.order_id
  where c.cust_id=o.cust_id
  and c.cust_id=152
snip
  
  explain select * from customers c join orders o on c.cust_id=o.cust_id
  left outer join adsl_orders ao on ao.order_id=o.order_id
  where c.cust_id=152

 Postgres treats join syntax as an explicit definition of what order to
 do joins in.  So, I'd guess it sees the first as: do the LOJ and then join
 that to the separate table.
Yeah, I figured that's how it sees it, but that's pretty stupid for
performance reasons :P)

It _should_ realize that a left outer join only constrains join order
between the two tables in the outer join, and joins to all other tables
should still be treated normally.

I'm going to CC this to -hackers, maybe someone will shed a light on the
internals of this. 

 And for right outer join (for example), those two queries would not
 be equivalent if I read the ordering correctly.  The former syntax
 would mean outer first and then the inner, whereas the second would
 be inner first then the outer, and that could have different results.
True. But this is not a right outer join, it's a left outer join... :)

Postgres should understand that left outer join does not constrain join
order...
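The equivalence being argued for is easy to check by hand; a sketch with Python's sqlite3 (data and names are made up for the illustration, modeled on the queries quoted above) shows that joining the 'complete' table before or after the left outer join yields the same rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (cust_id INTEGER);
CREATE TABLE orders (order_id INTEGER, cust_id INTEGER);
CREATE TABLE adsl_orders (order_id INTEGER, speed TEXT);
INSERT INTO customers VALUES (152), (200);
INSERT INTO orders VALUES (1, 152), (2, 152), (3, 200);
INSERT INTO adsl_orders VALUES (1, '1.5M');   -- order 2 has no adsl row
""")

# outer join written first, complete table (customers) joined via WHERE
q1 = """SELECT c.cust_id, o.order_id, ao.speed
        FROM customers c, orders o LEFT OUTER JOIN adsl_orders ao
             ON ao.order_id = o.order_id
        WHERE c.cust_id = o.cust_id AND c.cust_id = 152
        ORDER BY o.order_id"""

# inner join to the complete table first, then the outer join
q2 = """SELECT c.cust_id, o.order_id, ao.speed
        FROM customers c JOIN orders o ON c.cust_id = o.cust_id
        LEFT OUTER JOIN adsl_orders ao ON ao.order_id = o.order_id
        WHERE c.cust_id = 152
        ORDER BY o.order_id"""

assert con.execute(q1).fetchall() == con.execute(q2).fetchall()
```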

-alex





Re: [HACKERS] [PATCH] [LARGE] select * from cursor foo

2001-09-21 Thread Alex Pilosov

On Fri, 21 Sep 2001, Tom Lane wrote:
 
 I've looked this over, and I think it's not mature enough to apply at
 this late stage of the 7.2 cycle; we'd better hold it over for more work
 during 7.3.  Major problems:

 1. Insufficient defense against queries that outlive the cursors they
 select from.  For example, I could create a view that selects from a
 cursor.  Yes, you check to see if the cursor name still exists ... but
 what if that name now refers to a new cursor that delivers a completely
 different set of columns?  Instant coredump.
Good point. I'll work on it.

 2. I don't understand the semantics of queries that read cursors
 that've already had some rows fetched from them.  Should we reset the
 cursor to the start, so that all its data is implicitly available?
 That seems strange, but if we don't do it, I think the behavior will be
 quite unpredictable, since some join types are going to result in
 resetting and re-reading the cursor multiple times anyway.  (You've
 punted on this issue by not implementing ExecPortalReScan, but that's
 not acceptable for a production feature.)
Yeah, I couldn't figure out which option was worse, which is why I didn't
implement it. I think that rewinding the cursor on each query is better,
but I wanted to get comments first.

 3. What does it mean to SELECT FOR UPDATE from a cursor?  I don't think
 ignoring the FOR UPDATE spec is acceptable; maybe we just have to raise
 an error.
OK, giving an error makes sense.

 4. Complete lack of documentation, including lack of attention to
 updating the code's internal documentation.  (For instance, you seem
 to have changed some of the conventions described in nodes/relation.h,
 but you didn't fix those comments.)
OK. 

 The work you have done so far on changing RTE etc looks good ... but
 I don't think the patch is ready to go.  Nor am I comfortable with
 applying it now on the assumption that the problems can be fixed during
 beta.

If you want to consider this argument: it won't break anything that's not
using the feature, and it is needed (it's not a 'fringe change' to benefit
a few) (well, at least I think so :).

It is also the basis for my code to do 'select * from func(args)', which is
definitely needed, and is the subject of many flames about postgres not
having 'real' stored procedures (ones that return sets). I was hoping to
get the rest of it into 7.2 so these flames could be put to rest.

Changes to core code are obvious, and all documentation can be taken care
of during beta.

But I understand your apprehension...

 I realize you originally sent this in a month ago, and perhaps you would
 have had time to respond to these concerns if people had reviewed the
 patch promptly.  For myself, I can only apologize for not getting to it
 sooner.  I've had a few distractions over the past month :-(
Can't blame you, completely understandable with GB situation...







Re: [HACKERS] cvsup trouble

2001-09-21 Thread Alex Pilosov

On Fri, 21 Sep 2001, Thomas Lockhart wrote:

  $ ping cvsup.postgresql.org
  PING rs.postgresql.org: 64 byte packets
  64 bytes from 64.39.15.238: icmp_seq=0. time=57. ms
  64 bytes from 64.39.15.238: icmp_seq=1. time=70. ms
  Perhaps there is a routing problem somewhere between you and 64.39.15.238?
  That machine is not physically at hub (looks like it's a Rackspace site)
  so there might be connectivity issues that are different from hub's.
  What do you get from tracerouting to cvsup.postgresql.org?
 
 *slaps forehead*
 
 I didn't catch on to the different network, and 64.x has always been a
 problem on my firewall/masquerading box since I'm also on a 64.x subnet
 and it keeps wanting to put in default routes for a class A network. So
 it is all a problem on my end.
 
 I had done some testing at another location yesterday, when I was
 finding that cvsup connections were being rejected.
 
 Any hints on how to supress this default route when the network is
 configured?

Best suggestion: renumber your internal network into one of the private
networks (10.*, 192.168.*, 172.16.*).

Alternate suggestion: configure the correct netmask for your internal
network (for example, if you choose to be 64.1.1.1 with netmask
255.255.255.0, you will only lose connectivity to 64.1.1.*, not the
entire 64.*).
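The netmask arithmetic is easy to check; here it is illustrated with Python's stdlib ipaddress module (the addresses are the ones from the thread):

```python
import ipaddress

# A /24 mask (255.255.255.0) claims only the 64.1.1.* block...
lan = ipaddress.ip_network("64.1.1.0/24")
assert ipaddress.ip_address("64.1.1.99") in lan
assert ipaddress.ip_address("64.39.15.238") not in lan   # cvsup host unaffected

# ...whereas the old classful /8 default route would swallow all of 64.*
class_a = ipaddress.ip_network("64.0.0.0/8")
assert ipaddress.ip_address("64.39.15.238") in class_a
```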

 -alex





CVS/CVSup problems (was Re: [HACKERS] [PATCH] [LARGE] )

2001-09-20 Thread Alex Pilosov

On Thu, 20 Sep 2001, Alex Pilosov wrote:

 CVS repository also seems broken right now, I'm unable to log in (cvs
 login: authorization failed: server cvs.postgresql.org rejected access
 to /home/projects/pgsql/cvsroot for user anoncvs) in both
 cvs.postgresql.org and anoncvs.postgresql.org with all possible
 anoncvs passwords (empty, anoncvs, postgresql). Has the anoncvs
 password been changed, or did I miss an announcement?
Augh. A minute later, I found the announcement, but I still have a problem;
after logging in, I get:

tick-bash# cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot co pgsql
cvs server: Updating pgsql
cvs server: failed to create lock directory for `/projects/cvsroot/pgsql'
(/projects/cvsroot/pgsql/#cvs.lock): Permission denied
cvs server: failed to obtain dir lock in repository
`/projects/cvsroot/pgsql'
cvs [server aborted]: read lock failed - giving up

A minute later:
cannot create_adm_p /tmp/cvs-serv8577/pgsql

...:(





Re: [HACKERS] [PATCH] [LARGE] select * from cursor foo

2001-09-20 Thread Alex Pilosov

On Mon, 17 Sep 2001, Tom Lane wrote:

 Alex Pilosov [EMAIL PROTECTED] writes:
  Attached patch does the above.
 
 Alex, could we have this resubmitted in diff -c format?  Plain diff
 format is way too risky to apply.
Tom,

The postgresql.org cvsup repository is broken (and, according to my
records, has been for at least the last 4 days). Unfortunately, I can't get
my changes in the correct format unless that gets fixed... So I guess
that'll go in 7.3 :(

(The error I'm getting is this:
Server message: Collection pgsql release cvs is not available here
which leads me to assume a misconfiguration somewhere.)


That is, unless you make an exception for a few days and cvsup gets fixed
in meantime :P)

The CVS repository also seems broken right now; I'm unable to log in (cvs
login: authorization failed: server cvs.postgresql.org rejected access to
/home/projects/pgsql/cvsroot for user anoncvs) in both cvs.postgresql.org
and anoncvs.postgresql.org with all possible anoncvs passwords (empty,
anoncvs, postgresql). Has the anoncvs password been changed, or did I miss
an announcement?

--
Alex Pilosov| http://www.acedsl.com/home.html
CTO - Acecape, Inc. | AceDSL:The best ADSL in the world
325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)
New York, NY 10018  |







Re: [HACKERS] Beta timing

2001-09-11 Thread Alex Pilosov

(cough)

Could someone look at my 'select from cursor foo' patch...?

tnx
On Tue, 11 Sep 2001, Marc G. Fournier wrote:

 
 Wait until everyone is ready/finished with their existing projects ...
 this past week has thrown a lot of turmoil into several lives that wasn't
 entirely unexpected, but is sad nonetheless ...
 
 Let's do a poll on Friday evening for who has stuff outstanding left, give
 it until Monday for ppl to pop their heads up, and then try and deal with
 Monday as a Beta Start ... we aren't in a rush ...
 
 On Mon, 10 Sep 2001, Bruce Momjian wrote:
 
  I have heard from Tom Lane that he will be out of town until Wednesday.
  As with any volunteer project, we can't hold people to dates and
  schedules.  I know he is not done with everything he wants to do before
  beta, so we have to decide whether we should push ahead with beta now or
  wait for him to return.  One compromise would be to push ahead with beta
  now and let Tom slip things in after beta begins.
 
  Comments?
 
  --
Bruce Momjian|  http://candle.pha.pa.us
[EMAIL PROTECTED]   |  (610) 853-3000
+  If your life is a hard drive, |  830 Blythe Avenue
+  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
 
 
 
 
 
 





Re: [HACKERS] Escaping strings for inclusion into SQL queries

2001-08-30 Thread Alex Pilosov

It is. The application is responsible for calling PQescapeString (included
in the patch in question) to escape commands that may contain
user-specified data... This function isn't called automatically.

On Thu, 30 Aug 2001, Mitch Vincent wrote:

 Perhaps I'm not thinking correctly, but isn't it the job of the application
 that's using the libpq library to escape special characters? I guess I don't
 see a down side, though, if it's implemented correctly to check and see if
 characters are already escaped before escaping them (else major breakage of
 existing applications would occur). I didn't see the patch, but I assume that
 someone took a look to make sure before applying it.
 
 
 -Mitch
 
 - Original Message -
 From: Bruce Momjian [EMAIL PROTECTED]
 To: Florian Weimer [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Thursday, August 30, 2001 6:43 PM
 Subject: Re: [HACKERS] Escaping strings for inclusion into SQL queries
 
 
   Florian Weimer [EMAIL PROTECTED] writes:
  
We therefore suggest that a string escaping function is included in a
future version of PostgreSQL and libpq.  A sample implementation is
provided below, along with documentation.
  
   We have now released a description of the problems which occur when a
   string escaping function is not used:
  
   http://cert.uni-stuttgart.de/advisories/apache_auth.php
  
   What further steps are required to make the suggested patch part of
   the official libpq library?
 
  Will be applied soon.  I was waiting for comments before adding it to
  the patch queue.
 
 
 
 
 





Re: [HACKERS] [PATCH] tiny fix for plperlu

2001-08-29 Thread Alex Pilosov

Never mind this patch, then...

On Wed, 29 Aug 2001, Peter Eisentraut wrote:

 Alex Pilosov writes:
 
  Attached patch fixes the following problem: createlang.sh expects one
  handler for each PL. If a handler function for a new PL is found in
  pg_languages, the PL won't be created. So you need to have both
  plperl_call_handler and plperlu_call_handler. This patch just does that.
 
 This is already fixed by allowing handlers to be shared.
 
 





[HACKERS] [PATCH] [LARGE] select * from cursor foo

2001-08-29 Thread Alex Pilosov

Patch not attached; apparently the mail server rejects large files.

Patch can be found on www.formenos.org/pg/cursor.fix1.diff

Notes:
1. Incompatible changes: CURSOR is now a keyword and may not be used as an
identifier (tablename, etc). Otherwise, we get shift-reduce conflicts in
the grammar.

2. Major changes: 

a) RangeTblEntry (RTE for short), instead of having two possibilities,
subquery and non-subquery, now has a rtetype field which can be in one of 3
possible states: RTE_RELATION, RTE_SUBSELECT, RTE_PORTAL. The
type-specific structures are unionized, so where you used to have
rte->relid, now you must do rte->u.rel.relid.

The proper way to check the RTE type is now to check rte->rtetype
instead of checking whether rte->subquery is null.

b) Similarly, RelOptInfo now has a RelOptInfoType, which is an enum with 4
states: REL_PLAIN, REL_SUBQUERY, REL_JOIN, REL_PORTAL. I did not do the
unionization of the type-specific structures. Maybe I should have, if I'm
going to get in a big change anyway.

c) There's a function PortalRun which fetches N records from a portal and
sets the atEnd/atStart values properly. It replaces code duplicated in 2
places.


How to test: 

declare foo cursor for select * from pg_class;

select * from cursor foo;

Documentation updates will be forthcoming ASAP; I just wanted to get this
patch in the queue before the freeze, or at least make sure someone could
look through it before the freeze. :)

The next patch will be one to support SELECT * FROM func(arg1,arg2), which
would work by first creating a special kind of portal for selection from a
function and then setting the query source to be that portal.

-alex





[HACKERS] Re: Toast,bytea, Text -blob all confusing

2001-08-29 Thread Alex Pilosov

On Tue, 28 Aug 2001, Lincoln Yeoh wrote:

 For bytea, follow this rule: to escape a null character, use this:
 '\\0'. To escape a backslash, use this: '\\\\'.
 
 Same idea to unescape data.
 
 Are there other characters that need to be escaped? I suspect there are
 more characters that need to be escaped: ctrl chars? single quotes? Why
 four backslashes for one? Is there definitive documentation anywhere for
 what bytea is _supposed_ to be (not what it might actually be) and how it
 is to be handled?

Yes, sorry for being unclear on this one. Here's a more detailed
explanation: bytea is just a stream of data. On input, it must follow C
escaping conventions; on output, it will be escaped using C escaping
conventions.

However, there's a trap: before things get to the bytea input handler,
they are first processed by the general postgresql parser. Hence, the
string \\0 given from psql will become \0 by the time it reaches the bytea
input handler, and the string \\\\ will become \\. All non-printable
characters must be escaped like this: \\(octal of character); for example,
chr(255) must be presented as \\377. (If you want, you can also use this
as an alternative and more generic way to escape a backslash: \\134.) A
single quote must be escaped either as \\47 or as \'. Note the single
backslash. Why only one? Because the bytea parser doesn't care about
single quotes; you only need to escape it for the postgresql parser...

So, just keep in mind the double-parsing of input and you'll be safe.

 Also why wouldn't escaping stuff like this work with TEXT then? If a null
 is going to be backslash backslash zero, and come out the same way, it sure
 looks like TEXT to me :). OK so there's this thing about storage. So maybe
Because text is null-terminated, it can't contain a null byte.

 I could save a byte by just converting nulls to backslash zero and real
 backslashes to backslash backslash. Tada.  
If you do that, you'll break ordering/comparisons. Bytea is stored in
memory EXACTLY the way the input string was, without any escaping; hence
all comparisons will be correct ('\\0'::bytea is less than '\\1'::bytea).

With your representation, comparisons would fail, because the in-memory
data would be escaped using some escaping convention that you made up.

 OK it's probably not the same, but having to put four backslashes when two
 should be enough to quote one makes me rather puzzled and uneasy. 
Double parsing, hence double escaping.

--
Alex Pilosov| http://www.acedsl.com/home.html
CTO - Acecape, Inc. | AceDSL:The best ADSL in the world
325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)
New York, NY 10018  |


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Upcoming events

2001-08-28 Thread Alex Pilosov

Aiiye. I'm sending a _large_ (60k) patch to add 'select * from cursor foo'
tonight. I'm hoping that it could possibly get included...

-alex

On Mon, 27 Aug 2001, Tom Lane wrote:

 Christopher Kings-Lynne [EMAIL PROTECTED] writes:
  Do we want ADD PRIMARY KEY?
 
 If you can get it done in the next week or so ...
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html
 
 


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] [PATCH] tiny fix for plperlu

2001-08-28 Thread Alex Pilosov

Attached patch fixes following problem: createlang.sh expects one handler
for each PL. If a handler function for a new PL is found in pg_languages,
PL won't be created. So you need to have plperl_call_handler and
plperlu_call_handler. This patch just does that.

-alex


Index: src/bin/scripts/createlang.sh
===
RCS file: /cvs/pgsql/pgsql/src/bin/scripts/createlang.sh,v
retrieving revision 1.29
retrieving revision 1.30
diff -r1.29 -r1.30
212a213
>   lancomp=PL/Perl (untrusted)
214c215
<   handler=plperl_call_handler
---
>   handler=plperlu_call_handler
Index: src/pl/plperl/plperl.c
===
RCS file: /cvs/pgsql/pgsql/src/pl/plperl/plperl.c,v
retrieving revision 1.22
retrieving revision 1.22.2.1
diff -r1.22 -r1.22.2.1
277a281,294
 
 
  
> /*
>  * Alternate handler for unsafe functions
>  */
> PG_FUNCTION_INFO_V1(plperlu_call_handler);
> 
> /* keep non-static */
> Datum
> plperlu_call_handler(PG_FUNCTION_ARGS)
> {
>     return plperl_call_handler(fcinfo);
> }



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Toast,bytea, Text -blob all confusing

2001-08-27 Thread Alex Pilosov

On Thu, 23 Aug 2001 [EMAIL PROTECTED] wrote:

 THIS IS WHAT I CANT SEEM TO FIGURE OUT IN POSTGRESQL
 1. I cant get a clear answer on what kind of data type to use for my large
 text string?  TEXT, ???, ??? or something about TOAST
 I have seen in the e-mail archive but cant find any documentaion?
I would suggest bytea or blob. Blobs are well-documented in normal
documentation and in documentation of your favorite interface, so I'll
just talk about bytea.

 2. I've written my own escape method ,(cant find one for Pgsql) , BUT i
 don't know what 
 to escape and not to escape. So it keeps failing. I cand find any docs. on
 what to escape either?
For bytea, follow this rule: to escape a null character, use this:
'\\0'. To escape a backslash, use this: '\\\\'.

Same idea to unescape data.



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Re: AW: Re: OID wraparound: summary and proposal

2001-08-06 Thread Alex Pilosov

On Mon, 6 Aug 2001, mlw wrote:

 Zeugswetter Andreas SB SD wrote:
  
   It seems to me, I guess and others too, that the OID mechanism should
  be on a
   per table basis. That way OIDs are much more likely to be unique, and
  TRUNCATE
   on a table should reset it's OID counter to zero.
  
  Seems to me, that this would be no different than a performance improved
  version
  of SERIAL.
  If you really need OID, you imho want the systemid tableid tupleid
  combo.
  A lot of people seem to use OID, when they really could use XTID. That
  is
  what I wanted to say.
  
 
 I don't care about having an OID or ROWID, I care that there is a 2^32 limit to
 the current OID strategy and that a quick fix of allowing tables to exist
 without OIDs may break some existing software. I was suggesting the OIDs be
 managed on a per table basis as a better solution.
Again, what existing software demands a per-table OID field? Isn't that
what primary keys are for?

 In reality, a 32 bit OID, even isolated per table, may be too small.
 Databases are getting HUGE. 40G disk drives are less than $100 bucks,
 in a few months 80G drives will be less than $200, one can put
 together 200G RAID systems for about $1000, a terabyte for about
 $5000. A database that would have needed an enterprise level system,
 just 7 years ago, can be run on a $500 desktop today.
If it's too small for you, make a serial8 datatype (or something like it)
and use it for your tables. As for me, I have tables with very few fields,
and I don't want to waste 4 bytes/row (much less 8) on an OID.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: AW: [HACKERS] Re: OID wraparound: summary and proposal

2001-08-06 Thread Alex Pilosov

On Mon, 6 Aug 2001, mlw wrote:

 I think you are focusing too much on ROWID and not enough on OID. The issue
 at hand is OID. It is a PostgreSQL cluster wide limitation. As data storage
 decreases in price, the likelihood of people running into this limitation
 increases. I have run into OID problems in my curent project. Geez, 40G 7200
 RPM drives are $120, amazing.
Possibly you were using OIDs for what they weren't intended  ;)

 Tom has proposed being able to remove the OID from tables, to preserve
 this resource. I originally thought this was a good idea, but there
 are tools and utilities others may want to use in the future that
 require OIDs, thus they would have to be re-written or abandoned
 altogether.
What are these tools?

 It seems to me, I guess and others too, that the OID mechanism should be on a
 per table basis. That way OIDs are much more likely to be unique, and TRUNCATE
 on a table should reset it's OID counter to zero.
I disagree. OID as it stands is a mandatory SERIAL added to every table.
Most tables don't need such a field; those that do can keep it as it is
now (global per-database), or, if you want a per-table sequence, just
create a SERIAL field explicitly.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] pg_depend

2001-07-16 Thread Alex Pilosov

On Tue, 17 Jul 2001, Peter Eisentraut wrote:

 Alex Pilosov writes:
 
   I'm not so convinced on that idea.  Assume you're dropping object foo.
   You look at pg_depend and see that objects 145928, 264792, and 1893723
   depend on it.  Great, what do you do now?
  I believe someone else previously suggested this:
 
  drop type object [RESTRICT | CASCADE]
 
  to make use of dependency info.
 
 That was me.  The point, however, was, given object id 145928, how the
 heck to you know what table this comes from?

Having a view pg_objecttype which is a UNION across all the [relevant]
system tables sounds fine to me, but maybe I'm missing something?




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] pg_depend

2001-07-16 Thread Alex Pilosov

On Mon, 16 Jul 2001, Peter Eisentraut wrote:

 Bruce Momjian writes:
 
  I have found that many TODO items would benefit from a pg_depend table
  that tracks object dependencies.  TODO updated.
 
 I'm not so convinced on that idea.  Assume you're dropping object foo.
 You look at pg_depend and see that objects 145928, 264792, and 1893723
 depend on it.  Great, what do you do now?
I believe someone else previously suggested this:

drop type object [RESTRICT | CASCADE]

to make use of dependency info.

 Every system catalog (except the really badly designed ones) already
 contains dependency information.  What might help is that we make the
 internal API for altering and dropping any kind of object more consistent
 and general so that they can call each other in the dependency case.
 (E.g., make sure none of them require whereToSendOutput or parser state as
 an argument.)
Yes, that's definitely a requirement for implementing the above...


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Rule recompilation

2001-07-12 Thread Alex Pilosov

I remember a while ago someone floated the idea of a dependency view which
would list all objects and the OIDs they have in their plans (i.e. what
they depend on).

I'm definitely no expert in this, but to me one possible implementation
would be to enhance outfuncs to track all OIDs used in a plan and allow
the caller to receive this list and do something with it. This would
actually be very simple, as only _outOidList would need to be modified...
(but then again, I'm known for oversimplifying things :)

Then we can add an ev_depends oidvector to pg_rewrite and store the
dependency information there, and for stored procedures add a prodepends
oidvector to pg_proc.

Then, create a union of pg_rewrite and pg_proc to list dependencies.

Then we would be able to warn when an object is dropped ('The following
objects depend on this: blah blah'), and possibly add an action
alter database fixdepends oid which would recompile everything that
depends on that oid.

How's this sound?

On Thu, 12 Jul 2001, Jan Wieck wrote:

 Hi,
 
 I'd  like  to  add  another column to pg_rewrite, holding the
 string representation of the  rewrite  rule.  A  new  utility
 command  will  then  allow  to recreate the rules (internally
 DROP/CREATE, but that doesn't matter).
 
 This would be a big help in case anything used in a  view  or
 other  rules  get's  dropped  and  recreated (like underlying
 tables). There is of course a difference between the original
 CREATE  RULE/VIEW  statement and the string stored here. This
 is because we cannot rely on the actual query buffer but have
 to parseback the parsetree like done by the utility functions
 used for pg_rules. Thus, changing a column  name  of  a  base
 table will break the view either way.
 
 Anyway,  what's  the preferred syntax for triggering the rule
 recompilation?  I thought about
 
 ALTER RULE {rulename|ALL} RECOMPILE;
 
 Where ALL triggers only those rules where the  user  actually
 has RULE access right on a relation.
 
 
 Jan
 
 --
 
 #==#
 # It's easier to get forgiveness for being wrong than for being right. #
 # Let's break this rule - forgive me.  #
 #== [EMAIL PROTECTED] #
 
 
 
 _
 Do You Yahoo!?
 Get your free @yahoo.com address at http://mail.yahoo.com
 
 
 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
 
 




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Rule recompilation

2001-07-12 Thread Alex Pilosov

On Thu, 12 Jul 2001, Jan Wieck wrote:

 Mikheev, Vadim wrote:
In good world rules (PL functions etc) should be automatically
marked as dirty (ie recompilation required) whenever referenced
objects are changed.
  
   Yepp,  and  it'd  be possible for rules (just not right now).
   But we're not in  a  really  good  world,  so  it'll  not  be
   possible for PL's.
 
  Why is it possible in Oracle' world? -:)
 
 Because of there limited features?
 
 Think  about  a  language like PL/Tcl. At the time you call a
 script for execution, you cannot even be sure  that  the  Tcl
 bytecode  compiler parsed anything, so how will you ever know
 the complete set of objects referenced from this function?
 And PL/pgSQL? We don't prepare all the  statements  into  SPI
 plans  at  compile  time. We wait until the separate branches
 are needed, so how do you know offhand here?
If a plan hasn't been made (the oid has not been referenced), does the
function really depend on the object?

 In the PL/pgSQL case it *might* be possible. But is it  worth
 it?
It'd be possible in general, as long as the PL compilers properly keep
track in pg_proc of what their objects depend on (as in my earlier email).

-alex 


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Rule recompilation

2001-07-12 Thread Alex Pilosov

On Thu, 12 Jul 2001, Jan Wieck wrote:

 Alex Pilosov wrote:
  I remember awhile ago, someone floated the idea of a dependency view which
  would list all objects and what OIDs they have in their plan. (i.e. what
  do they depend on).
 
  I'm definitely no expert in this, but to me, one possible implementation
  would be to enhance outfuncs to provide for creation tracking of all
  OIDs used in plan, and allow caller to receive this list and do something
  with it. This would actually be very simple, as only _outOidList will need
  to be modified...(but then again, I'm known for oversimplifying things :)
 
  Then, we can add  ev_depends/oidvector to pg_rewrite and store the
  dependency there, and for stored procedures, add a prodepends/oidvector to
  pg_proc.
 
  Then, create union of pg_rewrite and pg_proc to list dependencies.
 
  Then, we would be able to provide warning when an object is dropped:
  'The following objects depend on this blah blah', and possibly an action
  alter database fixdepends oid which would recompile everything that
  depends on that oid.
 
  How's this sound?
 
 Er - oversimplified :-)
Yeah, most of my ideas end up like that, however see below ;)
 
 I  remember  it well, because Bruce is mentioning it every so
 often and constantly tries to convince me to start a  project
 about a dependency table.  I just think it's better not to do
 it for 7.2 (didn't we  wanted  to  have  that  released  THIS
 year?).

 Anyway,  there's  alot  more  to  look  at.  Functions can be
 referenced in views, indexes, operators, aggregates and maybe
 more  places.  Views/rules  can reference allmost any object.
 And this only builds the permanent cross reference.

For views, the necessary information (what a view depends on) is in
pg_rewrite anyway, which we can track with my proposal.

For indices/operators/aggregates, the pg_depend view may simply union the
necessary information from the existing tables; no additional tracking is
necessary. (For example, if an index depends on a proc, we already have
that proc's oid as indproc.)

If you mean that tracking nested dependencies is hard, I don't disagree;
it's a pain to do recursive queries in SQL, but the solution is to have a
(non-SQL) function list_deep_depend(oid) which would recurse down
pg_depend and find what depends on an object...

 We have to take a look at runtime information, telling  which
 prepared/saved  SPI plan uses a particular object and trigger
 automatic re-prepare for the plan in case.
This doesn't bother me that much. A restart of the postmaster is an
acceptable way to clear [really strange] things up.

I'm actually not looking for 100% recompilation when an underlying object
is changed; I'm looking for 100% reliable dependency information and a
warning listing all objects that will break if I delete an object.

Your proposal (automatic recompilation for rules) is orthogonal (but
related) to what I'm suggesting. Having an ability to recompile a rule is
great. Having an ability to see what rules depend on a given object is
also great. Having an ability to recompile all rules that depend on a
given object is even better ;) 

Having an ability to recompile _everything_ that depends on a given object
is priceless, but we can take that one step at a time, first tackling
rules...

 For most objects, there is no such recompile possible -  at
 least  not  without  storing  alot more information than now.
 Create a function and based on that  an  operator.  Then  you
 drop  the  function and create another one. Hmmm, pg_operator
 doesn't have the function name and argument  types,  it  only
 knows the old functions oid. How do you find the new function
 from here? So  basically  we'd  need  some  sort  of  pg_dump
 snippet  associated  with  every object and issue an internal
 DROP/CREATE using that string to recompile it.

Which may not be all that hard now, as most of what pg_dump does is
integrated in the backend, and all pg_dump does is call an appropriate
function (a la pg_get_viewdef/pg_get_ruledef). But I am content to leave
that for next time, tackling rules for now.



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



RE: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Alex Pilosov

On Sat, 7 Jul 2001, David Bennett wrote:

 -
 In a nutshell you are recommending:
 -
 
   create table contact_type (
 codeint2,
 typechar(16),
 PRIMARY KEY ( code )
   );
 
   create table contact (
 numberserial,
 name  char(32),
 type  int2,
 PRIMARY KEY ( number ),
 FOREIGN KEY ( type ) REFERENCES contact_type ( code )
   );
 
   create view contact_with_readable_type as (
 select c.number as number,
c.name as name,
t.type as type
 from
contact c,
contact_type t
 where
c.type = t.code
   );
 
 * To build a type lookup table:
 
   1) Select type and code from contact_type
   2) Build UI object which displays type and returns code
Just a 'select distinct' on the view should do just fine.

 * In order to insert a new record with this model:
 
   1) Look up to see if type exists
   2) Insert new type
   3) Get type ID
   4) Insert contact record
This can be encapsulated with ON INSERT rule on a view.

 * The adhoc query user is now faced with
   the task of understanding 3 data tables.
No, only one view. All the logic is encapsulated there.

 
 -
 With recurringchar you could do this easily as:
 -
 
   create table contact (
 numberserial,
 name  char(32),
 type  recurringchar1,
  PRIMARY KEY ( number )
   );
 
 * To build a type lookup table:
 
   1) Select distinct type from contact (optimized access to recurringchar
 dictionary)
   2) Build UI object which displays and returns type.
 
 * In order to insert a new record with this model:
 
   1) Insert contact record
 
 * The adhoc query user has one data table.
 
 -
 
 Granted, changing the value of contact_type.type would require edits
 to the contact records. It may be possible to add simple syntax to
 allow editing of a 'recurringchar dictionary' to get around isolated
 problem which would only exist in certain applications.
 
 Actually, maybe 'dictionary' or 'dictref' would be a better name for
 the datatype.
These things belong in the application or middleware (AKA views/triggers),
not in the database server itself.

There are multiple problems with your implementation. For example,
consider transaction handling. Assume this situation:

Tran A inserts a new contact with new type foo, but does not commit.
The dictionary assigns the value N to 'foo'.

Tran B inserts a new contact with type foo. What value should be entered
in the dictionary? N? A new value?

If a type disappears from the database, does its dictionary ID get reused?

None of these are simple questions, and it's not up to the database to
decide them. Your preferred solution belongs in your triggers/views, not
in the core database.



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-07 Thread Alex Pilosov


On Sat, 7 Jul 2001, Rod Taylor wrote:

 This would be a potential feature of being able to insert into views
 in general.  Reversing the CREATE VIEW statement to accept inserts,
 deletes and updates.
Definitely not a 'potential' feature, but an existing and documented one.
Read up on rules, especially the 'ON INSERT DO INSTEAD' stuff. It's not
automatic, though.

 If true, focus on that.  Theres lots of views that cannot be reversed
 properly -- unions come to mind -- but perhaps this type of simple
 join could be a first step in the package.  I believe this is on the
 TODO list already.
What is on the TODO list is updatable views in the SQL sense of the word
[i.e. automatic updatability of a view which matches certain criteria].

 Different attack, but accomplishes the same thing within SQL standards
 as I seem to recall views are supposed to do this where reasonable.
 
 
 Failing that, implement this type of action the same way as foreign
 keys.  Via the described method with automagically created views,
 tables, etc.  Though I suggest leaving it in contrib for sometime.
 Enum functionality isn't particularly useful to the majority whose
 applications tend to pull out the numbers for states when the
 application is opened (with the assumption they're generally static).

The original suggestion was not for an enum type; it was for a
_dynamically extensible_ data dictionary type.

ENUM is statically defined, and it wouldn't be too hard to implement, with
one exception: one more type-specific field needs to be added to the
pg_attribute table, to store the argument for the type (such as the length
for the char/varchar types, the length/precision for the numeric type, and
the possible values for an enum type).

This just needs a pronouncement that the addition is a good idea, and then
implementing enum is trivial.

-alex


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] selecting from cursor

2001-07-07 Thread Alex Pilosov

On Mon, 2 Jul 2001, Alex Pilosov wrote:

 Erm, forgot to attach the patch. Here it is.
(yow) Don't even bother looking at this patch. The mail server delayed
this message by almost a week, and by now the code has totally changed.

I took Tom's suggestion and made the RTE a union; below is the new
definition of the RTE.

I have most of the portal-related code working; only the executor needs
some more fixes. The code properly makes PortalScan Path entries,
PortalScan Plan nodes, etc. I have added PortalReScan to tell a portal it
needs to rescan itself.

I'll post a correct patch next week. Thank you to everyone, and especially
Tom, for bearing with my often stupid questions.

--cut here--rte definition--
typedef enum RTEType {
RTE_RELATION,
RTE_SUBSELECT,
RTE_PORTAL
} RTEType;

typedef struct RangeTblEntry
{
NodeTag type;
RTEType rtetype;
/*
 * Fields valid in all RTEs:
 */
Attr   *alias;  /* user-written alias clause, if any */
Attr   *eref;   /* expanded reference names */
boolinh;/* inheritance requested? */
boolinFromCl;   /* present in FROM clause */
boolcheckForRead;   /* check rel for read access */
boolcheckForWrite;  /* check rel for write access */
    Oid checkAsUser;/* if not zero, check access as this user */

union {
struct  {
/* Fields for a plain relation RTE (rtetype=RTE_RELATION) */
char   *relname;/* real name of the relation */
Oid relid;  /* OID of the relation */
} rel;
struct {
/* Fields for a subquery RTE (rtetype=RTE_SUBSELECT) */
Query  *subquery;   /* the sub-query */
} sub;
struct {
/* fields for portal RTE (rtetype=RTE_PORTAL) */
char  *portalname;/* portal's name */
} portal;
} u;
} RangeTblEntry;



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



RE: [HACKERS] New SQL Datatype RECURRINGCHAR

2001-07-06 Thread Alex Pilosov

On Fri, 6 Jul 2001, David Bennett wrote:

rest snipped
 In either model you would:
 
   update master_table set status='OPEN_PENDING_SOMETHING' where status='OPEN'
 
 This would not change, in fact, even in a normalized design you
 wouldn't change the lookup table (parent) key.  Perhaps you are
 misunderstanding my initial concept.  The MySQL 'enum' is close.  
 However, it is static and requires you to embed business data (your
 key list) in the DDL.  The idea I have here is to dynamically extend
 this list as needed.  I am not saying that the value can't relate to a
 parent (lookup) table.  It's just not necessary if the value is all
 that is needed.
You are making absolutely no sense. 

Let me break it down:


a) To update a key to a different value, you would need to do the
following:
1) Look up the new value in the entire table to find whether it already
exists.
2) If it exists, good.
3) If it doesn't, pick the next number (out of some sequence, I suppose)
to represent the key.
4) Do the actual update.

Step 1 without an index is a killer. You would also need some 'table' to
map the existing key values to their numerical representations.

How would this 'table' get populated? On startup? On select?

It's one thing to propose an 'enum' datatype, which I wouldn't disagree
with too much. It's another thing to suggest this kind of scheme, which
should really be done with views and rules.

I.e., instead of (as you would have it) table a(..., x recurringchar),
you must have two things:

table a_real (..., x int4)
table lookup (x int4, value varchar)

Then, have a view:
create view a as select ..., value from a_real, lookup where
a_real.x=lookup.x

Then create a rule on insert (syntax may be rusty):
create rule foo
on insert to a
do instead
...whatever magic you need to do the actual insertion, lookup, etc.


 --Dave (Hoping some other SQL developers are monitoring this thread :)



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] force of last XID

2001-05-18 Thread Alex Pilosov

Hi,

I managed to drop a really important table. Fortunately, I had a backup of
the table (the raw file, not an ascii dump). After putting that table back
into a freshly initdb'd database, postgres doesn't see the new
transactions, even though 'vacuum' sees the tuples alright.

So, a question: I'd like to force the XID (of the last committed
transaction) of postgres forward. Is there a good way to do it, or is
hacking the source the only way?


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] operators and indices?

2001-05-17 Thread Alex Pilosov

Hello,

I've noticed that the custom operators of the inet type (such as <<, <<=,
etc.) cannot use an index, even though it is possible to define such an
operation in terms of an index, for example:
X << Y can be translated to X >= network(Y) AND X <= broadcast(Y) (or so)

According to the docs, postgres has hard-coded which ops match which index
types (such as btree for <, >, =, etc. and rtree for @, etc.). Is there a
better way than hardcoding support for inet types into the index-selection
code?

-alex


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] [BUG] views and functions on relations

2001-04-18 Thread Alex Pilosov

On Wed, 18 Apr 2001, Tom Lane wrote:

 The ruleutils.c bug cannot explain this however, since ruleutils won't
 even be invoked.  Can you find a sequence to reproduce it?
Sorry, I was mistaken. The error I get for select is this:
ERROR:  cache lookup for type 0 failed

This is a far harder bug to trigger, and actually it doesn't happen in
this simple case (oops); the only test case I have involves 2 tables and 3
stored procedures. It is not related to views at all; just running the
underlying select causes the problem. Taking out _any_ of the stored
procedures removes the problem.

FWIW, this is what I see in server error log:

ERROR:  cache lookup for type 0 failed
DEBUG:  Last error occured while executing PL/pgSQL function cust_name
DEBUG:  while putting call arguments to local variables

And this is the query:
SELECT
cust_name(a)
FROM customers AS a, addresses AS b
WHERE
b.cust_id=a.cust_id
and b.oid=get_billing_record(a.cust_id)
and cust_balance(a.cust_id) > 0

Removing either the get_billing_record or cust_balance condition, or the
cust_name selection, removes the problem. Unfortunately, each function is
very long and involves lots of tables, so it'd make no sense to post it
all to the list. I'm going to try to narrow the problem down further to
get a good reproducible case, but if the above helps any in diagnosis,
it'd be great ;)



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] [BUG] views and functions on relations

2001-04-18 Thread Alex Pilosov

On Wed, 18 Apr 2001, Alex Pilosov wrote:

 This is a far harder to trigger bug, and actually, it doesn't happen in
 this simple case (oops), and the only test case I have involves 2 tables
 and 3 stored procedures. It is not related to views at all, just doing the
 underlying select causes the problem. Taking out _any_ stored procedure
 from the query removes the problem. 
Oh yes, one thing I forgot: it all worked in 7.0 and only broke after
upgrading to 7.1.

-alex


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [BUG] views and functions on relations

2001-04-18 Thread Alex Pilosov

Here's more info on the bug:

background: function cust_name(customers) returns varchar;
Query in question:

SELECT
cust_name(a)
FROM customers AS a, addresses AS b
WHERE
b.cust_id=a.cust_id
and b.oid=get_billing_record(a.cust_id)
and cust_balance(a.cust_id) > 0

First, my idea of what's happening:

The tuple in question contains a row from the 'customers' table.

Something (when the query is evaluated, before the cust_name function is
called) sets tupdesc->natts to 0; however, everything else in that tupdesc
is right (all the attrs are present and have correct values and
atttypes), and tuple->t_data->t_natts is correct (12).

When SPI_getbinval is called, it checks tuple->t_data->t_natts and works
OK; however, when SPI_gettypeid is called, it checks tupdesc->natts and
returns 0.

Question: should SPI_gettypeid look at tuple->t_data->t_natts (to do
that, it would need to be passed the tuple along with the tupdesc)?
Or should some other code be fixed to properly set tupdesc->natts?

NOTE: when I removed the check in SPI_gettypeid, it _also_ fixed the '\d
viewname' bug, so these two bugs are related (i.e. you cannot see \d
output because natts is set incorrectly). You may have more luck than me
tracing the code which improperly sets natts...

Hoping for proper fix, 


-alex

traceback:
#0  elog (lev=-1, fmt=0x45d4b340 "cache lookup for type %u failed")
at elog.c:119
#1  0x45d4693e in exec_cast_value (value=1791, valtype=0, reqtype=23,
reqinput=0x82bfdb0, reqtypelem=0, reqtypmod=-1, isnull=0xbfffeb6f "")
at pl_exec.c:2682
#2  0x45d45f19 in exec_assign_value (estate=0xbfffec40, target=0x82cdd88,
value=1791, valtype=0, isNull=0xbfffeb6f "") at pl_exec.c:2173
#3  0x45d4687a in exec_move_row (estate=0xbfffec40, rec=0x0,
row=0x82bfcc8,
tup=0x827a170, tupdesc=0x827a130) at pl_exec.c:2629
#4  0x45d43e64 in plpgsql_exec_function (func=0x82b3188, fcinfo=0x828e364)
at pl_exec.c:331
#5  0x45d41f57 in plpgsql_call_handler (fcinfo=0x828e364) at
pl_handler.c:128
#6  0x80b78ad in ExecMakeFunctionResult (fcache=0x828e350,
arguments=0x826eb28, econtext=0x826fc98, isNull=0xbfffed37 "",
isDone=0xbfffed68) at execQual.c:796
#7  0x80b794e in ExecEvalFunc (funcClause=0x826ead8, econtext=0x826fc98,
isNull=0xbfffed37 "", isDone=0xbfffed68) at execQual.c:890
#8  0x80b7d1c in ExecEvalExpr (expression=0x826ead8, econtext=0x826fc98,
isNull=0xbfffed37 "", isDone=0xbfffed68) at execQual.c:1215
#9  0x80b7fbb in ExecTargetList (targetlist=0x826e6a0, nodomains=19,
targettype=0x8284620, values=0x8285100, econtext=0x826fc98,
isDone=0xbfffef08) at execQual.c:1536
#10 0x80b8215 in ExecProject (projInfo=0x82850d8, isDone=0xbfffef08)
at execQual.c:1764
#11 0x80bcd9a in ExecNestLoop (node=0x826e5c0) at nodeNestloop.c:245
#12 0x80b6b76 in ExecProcNode (node=0x826e5c0, parent=0x826e5c0)
at execProcnode.c:297
#13 0x80b5eee in ExecutePlan (estate=0x826f770, plan=0x826e5c0,
operation=CMD_SELECT, numberTuples=0, direction=ForwardScanDirection,
destfunc=0x8285de0) at execMain.c:973
#14 0x80b5463 in ExecutorRun (queryDesc=0x826f758, estate=0x826f770,
feature=3, count=0) at execMain.c:233
#15 0x80f76b3 in ProcessQuery (parsetree=0x82433e8, plan=0x826e5c0,
dest=Remote) at pquery.c:295
#16 0x80f62bb in pg_exec_query_string (
query_string=0x8243090 "select * from outstanding_balances;",
dest=Remote,
parse_context=0x8218730) at postgres.c:810
#17 0x80f71e6 in PostgresMain (argc=4, argv=0xb1e0, real_argc=8,
real_argv=0xbaf4, username=0x81cbf69 "sw") at postgres.c:1908
#18 0x80e14c3 in DoBackend (port=0x81cbd00) at postmaster.c:2111
#19 0x80e10ac in BackendStartup (port=0x81cbd00) at postmaster.c:1894
#20 0x80e0436 in ServerLoop () at postmaster.c:992
#21 0x80dfe63 in PostmasterMain (argc=8, argv=0xbaf4) at
postmaster.c:682
#22 0x80c4055 in main (argc=8, argv=0xbaf4) at main.c:151






---(end of broadcast)---



[HACKERS] [BUG] views and functions on relations

2001-04-17 Thread Alex Pilosov

In latest 7.1 (checked out 2 days ago from CVS), I see following
behaviour:

create table foo(x int4);
create function xx(foo) returns int4 as 'begin return 0; end;' language 'plpgsql';
create view tv2 as select xx(foo) from foo;

users=# \d tv2
ERROR:  cache lookup of attribute 0 in relation 21747 failed

(21747 is table oid for foo)

HOWEVER, 'select * from tv2' succeeds (sometimes). Sometimes it fails with
the same error (cache lookup failed).

I think the above should be enough to reproduce this bug.  Any hints? 

-alex



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



RE: [HACKERS] Proposed WAL changes

2001-03-07 Thread Alex Pilosov

On Wed, 7 Mar 2001, Mikheev, Vadim wrote:

 But what can be done if fsync returns before pages flushed?
No, it won't. When fsync returns, data is promised by the OS to be on
disk.

-alex


---(end of broadcast)---



Re: [HACKERS] Plan for straightening out the include-file mess

2001-02-12 Thread Alex Pilosov
e client side only because libpq-int.h includes them, and we
 want to support client code that includes libpq-int.h.  I am going to look
 at skinnying that list down too.  libpq-fs.h, in particular, looks like
 mostly legacy junk ...
 
 As we discussed, there'll be an additional install target (or RPM) that
 installs these files and everything else from the src/include tree.
 
 Comments?
 
   regards, tom lane
 
 

-- 
--
Alex Pilosov| http://www.acecape.com/dsl
CTO - Acecape, Inc. | AceDSL:The best ADSL in Bell Atlantic area
325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)
New York, NY 10018  |







Re: [HACKERS] Plan for straightening out the include-file mess

2001-02-08 Thread Alex Pilosov

On Thu, 8 Feb 2001, Alex Pilosov wrote:

 Great! :)
 
 It might also clean up something that I've been fighting against for
 a while: when I include the files needed for SPI, they also drag in a lot
 of other garbage, which conflicts with other things (namely, trying to
 get a file to simultaneously include SPI and perl headers is impossible). 
 
 I realise it might be a lot of pain to clean up, but, you may consider
 having a separate top-level include for SPI, which would not define (by
 default) things like DEBUG, USE_LOCALE, union semun, etc. 
 
 IMHO, it should really include only definitions of relevant data
 structures which interface with SPI code...
 
 I realize that a complete split of SPI/modules from the "core backend" might be
 very hard, so a thing to consider would be to have (like linux kernel code
 has) #define IN_CORE (you are welcome to come up with better name), and
 include "core backend"-specific things conditionally on that being
 defined.
 
 
 -alex

 On Thu, 8 Feb 2001, Tom Lane wrote:
 
  I have been looking at making a split between client-side and server-side
  include files as we discussed earlier this week (pghackers thread
  "Include files for SPI are not installed", if you missed it).  It turns
  out that the major issue here is not just divvying up the files; the
  problem is that we have never had a clear concept of such a division
  before, and so the core include files like postgres.h, c.h, config.h
  contain a rather unholy mixture of things that are definitely
  backend-only with things that are relevant to both clients and backends.
  I think we need to start by clarifying the roles of these include files
  and moving their contents around as necessary.
  
  Currently, almost every .c in the distribution starts out by including
  postgres.h, which in turn includes these other files:
  
  postgres.h
  postgres_ext.h
  c.h
  config.h
  os.h
  utils/elog.h
  utils/palloc.h
  
  Now elog.h and palloc.h are server-only facilities and certainly don't
  belong in a client file's include set.  I think what we want to do is
  decree that postgres.h is the primary include file for backend .c files
  only, and that frontend .c files should include something else.
  
  postgres_ext.h would be a candidate to be that something else, except
  that it's included by libpq-fe.h, so anything we add to postgres_ext.h
  represents namespace pollution for libpq clients.  I think we should be
  very wary about adding a lot of stuff to postgres_ext.h.  This suggests
  that we'd best create a new primary include file for client-side .c files,
  say "postgres_fe.h" or "postgres_client.h".  (Anyone have a better naming
  idea?  Does the old 14-character limit still pose a problem anywhere?)
  
  That would leave us with include trees like this:
  
  backend .c file:
  postgres.h
  postgres_ext.h
  c.h
  config.h
  os.h
  utils/elog.h
  utils/palloc.h
  
  frontend .c file:
  postgres_fe.h
  postgres_ext.h
  c.h
  config.h
  os.h
  
  where the include files have these roles:
  
  postgres_ext.h: definitions needed in frontend, backend, *and* by clients;
  by design an extremely small file
  
  postgres.h: backend-wide definitions
  
  postgres_fe.h: definitions common to all client-side interface libraries
  
  c.h: basic typedefs and macros needed by both frontend and backend, but
  not intended to be exported to clients of frontend libraries
  
  config.h: configuration definitions, not intended to be client-visible
  
  os.h: platform-specific configuration hacks, not intended to be
  client-visible (this comes from one of the src/include/port files)
  
  config.h and os.h don't need to change, I think, but I'll go through the
  definitions in the other four files and make sure everything is classified
  reasonably.
  
  It's possible that postgres_fe.h will end up containing nothing except
  the inclusions of postgres_ext.h and c.h, in which case we wouldn't really
  need to invent that file, but I'm still inclined to do so.  I think it's
  good practice to have a single include file that's the basic "must haves"
  for all client-side code.
  
  
  Now, since the intent is that the basic install provide headers needed
  for client-side programming, we'd want to add postgres_fe.h to the
  installed header set.  But the following files can be removed from the
  basic install:
  
  access/attnum.h
  commands/trigger.h
  executor/spi.h
  fmgr.h
  postgres.h
  utils/elog.h
  utils/geo_decls.h
  utils/palloc.h
  
  We might also remove utils/fmgroids.h.  I'm uncertain about this one.
  The function OID macros it contains are potentially useful to clients,
  but do we really want people hard-wiring function OIDs on

Re: [HACKERS] Should I look elsewhere??

2001-02-05 Thread Alex Pilosov

post the rest of the traceback.

0x40* is the address inside some shared library, most likely libc.

full traceback will show what happened before it got to libc

-alex



On Mon, 5 Feb 2001, Mathieu Dube wrote:

 So...
 If after recompiling postgres with -g in the CFLAGS and still getting 0x40050768 in 
?? ()
 in gdb, should I look elsewhere???
 
   Thanks
   -Mat
 




Re: [HACKERS] TODO list: Allow Java server-side programming

2001-02-03 Thread Alex Pilosov

On Sat, 3 Feb 2001, Peter Mount wrote:

 It's been a while since I delved into the backend, but unless it's
 changed from fork() to threading, I don't really see this happening,
 unless someone who knows C that well knows of a portable way of
 communicating between two processes - other than RMI. If that could be
 solved, then you could use JNI to interface the JVM.
There are many ways one can do this:
a) each backend will have a JVM linked in (shared object). This is the
way perl/tcl/ruby is embedded, and it works pretty nicely. But Java's
memory requirements are heavy, therefore this may not be the optimal
way.

 I know some people think this would slow the backend down, but it's
 only the instanciation of the JVM thats slow, hence the other reason
 fork() is holding this back. Ideally you would want the JVM to be
 running with PostMaster, and then each backend can then use the JVM as
 and when necessary.
b) since the JVM is threaded, it may be more efficient to have a dedicated
process running the JVM, accepting some sort of IPC connections from
postgres processes. The biggest problem here is SPI: there isn't a good
way for that JVM to talk back to the database.

c) temporarily, to have quick working code, you can reach Java via hacks
using programming languages already built into postgres. Both TCL (Tcl
Blend) and Perl (JPL, and another hack whose name escapes me) are able to
execute Java code. SPI is possible; I think both of these bindings are
two-way (you can go perl->java->perl->java). Might be worth a quick try?
-alex
 




Re: [HACKERS] pg_shadow.usecatupd attribute

2001-01-23 Thread Alex Pilosov

Just to clarify for stupid me: you want to remove it and forbid catalog
updates or remove it and allow catalog updates? (I hope its latter :)

On Tue, 23 Jan 2001, Tom Lane wrote:

 Peter Eisentraut [EMAIL PROTECTED] writes:
  While I'm at it and before I forget the 76 places one needs to edit to
  add/remove a system catalog column, what are people's feelings about the
  usecatupd column?
 
 Unless someone pops up and says that they're actually using it,
 I'd agree with removing it.  It seems like the sort of thing that
 might be a good idea but isn't actually getting used.
 
   regards, tom lane
 
 




Re: [HACKERS] Nothing larger then int8?

2001-01-17 Thread Alex Pilosov

To answer your question, wouldn't numeric(30,0) be correct?

-alex
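The negative "maxbytes" values in the table quoted below are exactly what you get when a count that exceeds 2^31 passes through an int4 somewhere, which matches Marc's own suspicion about un-cast int4 sums. A quick sanity check of that wraparound (illustrative Python, not anything from this thread):

```python
def as_int32(n):
    # Reinterpret an arbitrary integer as a signed 32-bit value,
    # i.e. the wraparound an overflowing int4 would produce.
    return (n + 2**31) % 2**32 - 2**31

# A true byte count a bit over 2.9 GB wraps to one of the negative
# values seen in the quoted table.
wrapped = as_int32(2902582752)
```

Here `as_int32(2902582752)` reproduces `-1392384544`, one of the values in the quoted output.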
On Thu, 18 Jan 2001, The Hermit Hacker wrote:

 
 hrrmm ... ignore this ... I'm suspecting that what I did was copied in
 sum() data from an old table that had bytes declared as int4, without
 casting it to int8 before storing it to the new table ...
 
 if anyone is interested, here is one days worth of http traffic for the
 main PostgreSQL.Org server ... this doesn't include the traffic that the
 mirror sites absorb:
 
 1160643846 / ( 1024 * 1024 * 1024 )
 1.08gig
 
 
 
 On Thu, 18 Jan 2001, The Hermit Hacker wrote:
 
 
  I'm logging traffic to a database, so that I can do analysis on usage and
  whatnot, and I need something bigger then int8 :(
 
  /tmp/psql.edit.70.79087: 6 lines, 222 characters.
ip   |  maxbytes   | port |runtime
  ---+-+--+
   216.126.84.28 |  2169898055 |   80 | 2001-01-16 00:00:00-05
   216.126.84.28 |   160579228 |  873 | 2001-01-16 00:00:00-05
   216.126.84.28 |  365270 |   20 | 2001-01-16 00:00:00-05
   216.126.84.28 |  196256 |   21 | 2001-01-16 00:00:00-05
   216.126.84.28 |  195238 |   22 | 2001-01-16 00:00:00-05
   216.126.84.28 |  182492 | 1024 | 2001-01-16 00:00:00-05
   216.126.84.28 |  171155 |  143 | 2001-01-16 00:00:00-05
   216.126.84.28 | -1392384544 |   80 | 2001-01-13 00:00:00-05
   216.126.84.28 | -1392384544 |   80 | 2001-01-04 00:00:00-05
   216.126.84.28 | -1392384544 |   80 | 2001-01-05 00:00:00-05
   216.126.84.28 | -1392384544 |   80 | 2001-01-06 00:00:00-05
   216.126.84.28 | -1392384544 |   80 | 2001-01-07 00:00:00-05
   216.126.84.28 | -1392384544 |   80 | 2001-01-08 00:00:00-05
   216.126.84.28 | -1392384544 |   80 | 2001-01-14 00:00:00-05
   216.126.84.28 | -1452855018 |   80 | 2001-01-15 00:00:00-05
   216.126.84.28 | -1452855018 |   80 | 2001-01-10 00:00:00-05
   216.126.84.28 | -1452855018 |   80 | 2001-01-09 00:00:00-05
   216.126.84.28 | -1513325492 |   80 | 2001-01-03 00:00:00-05
   216.126.84.28 | -1694736914 |   80 | 2001-01-12 00:00:00-05
   216.126.84.28 | -1815677862 |   80 | 2001-01-11 00:00:00-05
 
  hub_traf_stats=# \d daily_stats
 Table "daily_stats"
   Attribute |   Type| Modifier
  ---+---+--
   ip| inet  |
   port  | integer   |
   bytes | bigint|
   runtime   | timestamp |
 
  do we have anything larger to work with?  I've checked docs, but that
  looks like about it :(
 
  Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
  Systems Administrator @ hub.org
  primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org
 
 
 
 Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
 Systems Administrator @ hub.org
 primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org
 
 




[HACKERS] renaming indices?

2001-01-16 Thread Alex Pilosov

I'm running into a problem where I have to create an index with a name
that doesn't conflict with any existing index.  

Currently, it's not possible to do this in postgres. 

It'd be nice if any of these 3 were implemented:
1) alter index to rename it

2) alter table would rename index with some option(?)

3) the index namespace could be restricted to the table on which the index
is defined, since no commands to my knowledge manipulate an index without
also specifying the table. I.e. that way, I could have index a on
table foo and index a on table bar without a conflict.

Is there a specification of the current postgres behaviour anywhere in the
SQL standard? (i.e. the index namespace being global?)

-alex




Re: [HACKERS] GNU readline and BSD license

2000-12-30 Thread Alex Pilosov

On Sat, 30 Dec 2000, Peter Bierman wrote:

 At 7:15 PM -0500 12/29/00, Tom Lane wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  Rasmus Lerdorf warned one of you guys that simply linking to GNU
  readline can contaminate code with the GPL.
 
  Readline isn't LGPL which permits linking without lincense issues,
  it is GPL which means that if you link to it, you must be GPL as
  well.
 
 I do not believe that.  In fact, I'll go further and say "Horsepucky!"
 The GPL applies to works that "contain or are derived from" a GPL'd
 program.  Linking to a separately distributed library does not cause
 psql either to contain or to be derived from libreadline.
 
 
 Some very highly paid lawyers disagree with you.
 
 That doesn't make them right, but keep in mind that no one has defined
 "derivative work" in a court of law. And RMS isn't a lawyer.
 
 I agree readline doesn't taint PG, but IMHO, the more distance between
 the GPL and PG, the better.
OK. For the last time, here's the story about linking, as agreed upon by
almost damn everyone:

a) dynamic linking is kosher, as of GPL2
b) static linking is OK, but you may NOT redistribute resulting libraries.

I hope the above will put the discussion about readline to an end, as
Postgres does not distribute statically linked binaries.


-alex




Re: [HACKERS] GNU readline and BSD license

2000-12-29 Thread Alex Pilosov

On 29 Dec 2000, Michael Alan Dorman wrote:

 Peter Eisentraut [EMAIL PROTECTED] writes:
  If there is something functionally wrong with Readline then let's talk
  about it, but let's not replace it with something because some PHP dude
  said that RMS said something.
 
 ncftp used to be for non-commercial use only and had hooks to be
 linked against readline.  RMS threatened legal action, which caused
 the developer to change the license to GPL, which was what RMS wanted.
The problem with ncftp was the developers distributing binaries
commercially which were linked against libreadline.

As I said before, postgres doesn't have this problem since neither RPMs
nor other binaries do that.




Re: [HACKERS] GNU readline and BSD license

2000-12-23 Thread Alex Pilosov

On Sat, 23 Dec 2000, Bruce Momjian wrote:

 Rasmus Lerdorf, the big PHP developer, told me that the existence of GNU
 readline hooks in our source tree could cause RMS/GNU to force us to a
 GNU license.
 
 Obviously, we could remove readline hooks and ship a BSD line editing
 library, but does this make any sense to you?  It doesn't make sense to
 me, but he was quite certain.
Unfortunately he's right, since GPL software is incompatible with any
non-GPL software. Stallman publicly admitted that he intentionally
released readline under GPL, not LGPL, to force more people into GPLing
their code. 




Re: [HACKERS] GNU readline and BSD license

2000-12-23 Thread Alex Pilosov

On Sat, 23 Dec 2000, Bruce Momjian wrote:

 OK, but does shipping our code with hooks obligate us?  We don't ship
 readline.
Oh, oops. I didn't know readline wasn't in the postgres tree. Then,
obviously, distribution of .tar.gz does not obligate postgres to anything,
HOWEVER, the problem arises with distribution of binaries (.rpm and
others) which are linked against libreadline _statically_ (basically, we
can't do it). Our RPM distrib is linked dynamically, but I don't know
about other binaries...

From my understanding of GPL, if it is linked dynamically, we are exempt
since it does not constitute a 'derived package'.

-alex




Re: [HACKERS] CIDR output format

2000-12-21 Thread Alex Pilosov

On Thu, 21 Dec 2000, Tom Lane wrote:

 Value Default output  text()  abbrev()
 
 '127.0.0.1/32'::inet  127.0.0.1   127.0.0.1/32127.0.0.1
 '127.0.0.1/32'::cidr  127.0.0.1/32127.0.0.1/32127.0.0.1/32
 '127/8'::cidr 127.0.0.0/8 127.0.0.0/8 127/8
 
 This would be a little bit inconsistent, because the default output
 format would match text() for CIDR values but abbrev() for INET values.
 But that seems like the most useful behavior to me.  Possibly others
 will disagree ;-)
I think it makes sense.




Re: [HACKERS] performance modality in 7.1 for large text attributes?

2000-12-19 Thread Alex Pilosov

Paul,

1) Have you run vacuum analyze after all these inserts to update the
database statistics? :) Without vacuum, pgsql will opt for a table scan
even when there's an index.

2) I'm not sure if you are executing pgcat 70k times or executing inner
loop in pgcat 70k times. Postgres connection establishment is expensive.

3) Postgres INSERT is not very efficient if you are doing a bulk load of
data (it has to reparse the statement every time). If you want to delete
everything and load new data, use "COPY", which is about 5 times faster.
Also, there's a patch by someone to do the following: INSERT INTO (fields...)
VALUES (...), (...), (...), which results in parsing the statement only
once.

Oh... And since I have your attention, could you please resolve a
long-standing discussion between me and Tom Lane? :) 

The question is whether the proper (standard/most-commonly-used) format for
printing a CIDR network address is 10/8 or 10.0.0.0/8 (i.e. should all
octets be printed even if they are 0). After a search of the RFCs, there's
nothing that specifies the standard, but 10.0.0.0/8 is used more often in
examples than the 10/8 form.

Postgres uses 10/8 form, and I'm saying that 10.0.0.0/8 is more accepted
by everyone else. (I.E. all software can deal with that, but not all
software accepts 10/8).

-alex

On Mon, 18 Dec 2000, Paul A Vixie wrote:

 (plz cc me on your replies, i'm not on pgsql-hackers for some reason.)
 
 http://www.vix.com/~vixie/results-psql.png shows a gnuplot of the wall time
 of 70K executions of "pgcat" (shown below) using a CIDR key and TEXT value.
 (this is for storing the MAPS RSS, which we presently have in flat files.)
 
 i've benchmarked this against a flat directory with IP addresses as filenames,
 and against a deep directory with squid/netnews style hashing (127/0/0/1.txt)
 and while it's way more predictable than either of those, there's nothing in
 my test framework which explains the 1.5s mode shown in the above *.png file.
 
 anybody know what i could be doing wrong?  (i'm also wondering why SELECT
 takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless
 TOAST is doing a LOT better than i think.)
 
 furthermore, are there any plans to offer a better libpq interface to INSERT?
 the things i'm doing now to quote the text, and the extra copy i'm maintaining,
 are painful.  arbitrary-sized "text" attributes are a huge boon -- we would
 never have considered using postgres for MAPS RSS (or RBL) with "large
 objects".  (kudos to all who were involved, with both WAL and TOAST!)
 
 here's the test jig -- please don't redistribute it yet since there's no man
 page and i want to try binary cursors and other things to try to speed it up
 or clean it up or both.  but if someone can look at my code (which i'm running
 against the 7.1 bits at the head of the pgsql cvs tree) and at the *.png file
 and help me enumerate the sources of my stupidity, i will be forever grateful.









Re: [HACKERS] beta testing version

2000-11-30 Thread Alex Pilosov

On Thu, 30 Nov 2000, Nathan Myers wrote:

 After a power outage on an active database, you may have corruption
 at low levels of the system, and unless you have enormous redundancy
 (and actually use it to verify everything) the corruption may go 
 undetected and result in (subtly) wrong answers at any future time.
Nathan, why are you so hostile against postgres? Is there an ax to grind?

The conditions under which WAL will completely recover your database:
1) OS guarantees complete ordering of fsync()'d writes. (i.e. having two
blocks A and B, A is fsync'd before B, it could NOT happen that B is on
disk but A is not).
2) on boot recovery, OS must not corrupt anything that was fsync'd.

Rule 1) is met by all unixish OSes in existence. Rule 2) is met by some
filesystems, such as reiserfs, tux2, and softupdates. 
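The ordering rule above is the heart of write-ahead logging: the log record (block A) is made durable with fsync before the data write (block B) is even issued. A minimal sketch of that discipline, in Python purely for illustration (filenames and record contents are made up):

```python
import os
import tempfile

d = tempfile.mkdtemp()

# Block A: the WAL record. It must be on disk before B is written.
log = open(os.path.join(d, "wal.log"), "wb")
log.write(b"INSERT tuple 42\n")
log.flush()
os.fsync(log.fileno())  # A is durable here, before B is issued
log.close()

# Block B: the data page. If we crash now, recovery replays the log.
data = open(os.path.join(d, "table.dat"), "wb")
data.write(b"tuple 42\n")
data.flush()
os.fsync(data.fileno())
data.close()
```

If the machine loses power between the two fsyncs, the log record alone is enough to redo the change on restart; that is why recovery only needs rules 1) and 2) from the OS.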

 The logging in 7.1 protects transactions against many sources of 
 database crash, but not necessarily against OS crash, and certainly
 not against power failure.  (You might get lucky, or you might just 
 think you were lucky.)  This is the same as for most databases; an
 embedded database that talks directly to the hardware might be able
 to do better.  




Re: [HACKERS] Summary: what to do about INET/CIDR

2000-11-03 Thread Alex Pilosov

Agreed with all of it, but how about incorporating conversion from inet
to int8? (first octet*256*256*256+second octet*256*256+third
octet*256+fourth octet). 

This will allow doing a lot of magic with addresses using plain math.
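The proposed octet arithmetic can be sketched like this (illustrative Python; the function names mirror the request and are not actual Postgres functions):

```python
def inet_to_int8(addr):
    # Proposed conversion: first*256^3 + second*256^2 + third*256 + fourth.
    a, b, c, d = (int(o) for o in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int8_to_inet(n):
    # Inverse: unpack the integer back into dotted-quad text.
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))
```

With this, range comparisons and offsets on addresses become plain integer math.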

Also, I'd still like netmask_length, the length of the netmask in bits.

-alex
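As an illustrative cross-check of the broadcast()/network()/netmask() semantics in the proposal quoted below (points 5-7), here is the same bit arithmetic done with Python's ipaddress module — a much later tool, used here only to show the operations, not the actual Postgres implementation:

```python
import ipaddress

# '127.1.2.3/16' with all octets written out.
iface = ipaddress.ip_interface("127.1.2.3/16")

# network(): zero the bits to the right of the netmask, keep the masklen.
network = iface.network                      # 127.1.0.0/16

# broadcast(): force the bits to the right of the netmask to 1.
broadcast = iface.network.broadcast_address  # 127.1.255.255

# netmask(): 1s inside the mask, 0s to the right.
netmask = iface.netmask                      # 255.255.0.0
```

Note one difference from the proposal: ipaddress drops the masklen on the broadcast address, whereas the quoted text suggests broadcast('127.1/16') keep it, yielding '127.1.255.255/16'.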

On Fri, 3 Nov 2000, Tom Lane wrote:

 5. The function broadcast(inet) will now return inet not text.  It
will take the given address octets and force the bits to the right
of the netmask to 1.  The display type will be set to inet.  I am
inclined to have it return the same masklength as the input, so for
example broadcast('127.1/16') would yield '127.1.255.255/16'::inet.
If you want the broadcast address displayed without a netmask
notation, you'd need to write host(broadcast(foo)).  Alternatively,
we could say that broadcast() always returns masklen 32, but I think
this loses valuable functionality.
 
 6. The function network(inet) will now return cidr not text.  The result
has the same masklen as the input, with bits to the right of the mask
zeroed to ensure it is a valid cidr value.  The display type will be
set to cidr.  For example, network('127.1.2.3/16') will yield
'127.1/16'::cidr.  To get this result displayed in a different
format, write host(network(foo)) or text(network(foo)).
 
 7. The function netmask(inet) will now return inet not text.  It will
return octets with 1s in the input's netmask, 0s to the right, and
output display type and masklen set to inet and 32.  For example,
netmask('127.1/16') = '255.255.0.0/32'::inet which will display as
'255.255.0.0'.  (I suppose a really anal definition would keep the
input masklen, forcing you to write host(netmask(foo)) to get a
display without "/n".  But I don't see any value in that for
netmasks.)
 
 8. Because we still consider inet and cidr to be binary-equivalent types,
all of these functions can be applied to either inet or cidr columns.
 
 Comments?
 
   regards, tom lane
 
 




RE: [HACKERS] relation ### modified while in use

2000-11-02 Thread Alex Pilosov

On Fri, 3 Nov 2000, Hiroshi Inoue wrote:

 PL/pgSQL already prepares a plan at the first execution
 time and executes the plan repeatedly after that.
 We would have general PREPARE/EXECUTE feature in the
 near future. IMHO another mechanism to detect plan
 invalidation is needed.
Excellent point. While now I don't consider it too inconvenient to reload
all my stored procedures after I change the database structure, in the
future I'd love it to be handled by postgres itself.

Possibly, plpgsql (or postgresql itself) could have a 'dependency' list of
objects that the current object depends on?

This would additionally help dump/restore (the old one, I'm not talking
about the newfangled way to do it), since, for restore, you need to dump
the objects in the order of their dependency, and a plpgsql procedure can
potentially depend on an object that has a higher OID...

-alex





Re: [HACKERS] Summary: what to do about INET/CIDR

2000-10-27 Thread Alex Pilosov

Please read below if the whole thing with inet/cidr doesn't make you puke
yet ;) The semi-longish proposal is at the bottom.

On Fri, 27 Oct 2000, Tom Lane wrote:

 Alex Pilosov [EMAIL PROTECTED] writes:
  We should have as much error-checking as possible.
 
 How so?  Without a netmask you have no way to know if it's a broadcast
 address or not. 10.0.0.255/32 might be a perfectly valid host address
 in, say, 10.0/16.  But 10.0.0.255/24 is recognizably the broadcast
 address for 10.0.0/24 (and not for any other network...)
Right, that's what I'm trying to say: It shouldn't allow you to use
10.0.0.255/24 as a host address, but it should allow you to use 
10.0.0.255/16 

  (ie. broadcast() function must return a value with /32 mask)
 
 I don't disagree with that part, but that's only because I see
 broadcast() as mainly a display convenience.  If we had a larger and
 more thoroughly worked out set of inet/cidr operators, I'd be inclined
 to argue that broadcast('10.0.0.0/24') should yield 10.0.0.255/24 for
 computational convenience.  Then we'd need to offer a separate function
 that would let you strip off the netmask for display purposes (actually
 host() would do for that...)


Actually, now that I think longer about the whole scheme in terms of
actual IP experience, here are my ideas:
a) inet is a crock. I don't know anyone who would need to _care_ about the
netmask of a host who wouldn't have a lookup table of networks/masks.
(Think /etc/hosts and /etc/netmasks.)

Storing the netmask of a network in an inet actually violates relational
constraints: a netmask is not a property of an IP address, it's a property
of a network.

99% of people who would be storing IP addresses into postgres database
really do not know nor care what is a netmask on that IP. Only people who
would care are ones who store their _internal_ addresses (read: addresses
used on networks they manage).  There is usually a very limited number of
such networks (<1000). 

It makes no sense to have in the database both 10.0.0.1/24 and 10.0.0.2/16.
None whatsoever.

This does NOT apply to CIDR datatype, as there are real applications (such
as storing routing tables) where you would care about netmask, but won't
care about a host part. 

What I am suggesting is we do the following:
a) inet will NOT have a netmask

b) all the fancy comparison functions on inet should be deleted. 
(leave only < <= = >= >)

c) the only things you can do on inet is to convert it to 4 octets (of
int1), to a int8, and to retrieve its network from a table of networks.

d) have a table, 'networks' (or any other name, maybe pg_networks?) which
would have one column 'network', with type cidr.
create table networks (network cidr not null primary key)

e) have a function network(inet) which would look up the address in a
table of networks using longest-prefix-match. I.E. something similar to:

select network from networks 
where $1 << network 
order by network_prefix(network)
desc limit 1;
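The same longest-prefix-match lookup can be sketched in Python (the table contents here are made up purely for illustration, and ipaddress stands in for the proposed cidr type):

```python
import ipaddress

# Stand-in for the proposed 'networks' table.
networks = [ipaddress.ip_network(n)
            for n in ("10.0.0.0/8", "10.1.0.0/16", "10.1.2.0/24")]

def lookup_network(host):
    # Among the networks containing the host, pick the longest prefix --
    # the analogue of ORDER BY network_prefix(network) DESC LIMIT 1.
    addr = ipaddress.ip_address(host)
    candidates = [n for n in networks if addr in n]
    return max(candidates, key=lambda n: n.prefixlen, default=None)
```

So a host inside all three networks resolves to the /24, a host only inside the first two resolves to the /16, and an address outside every network returns no row.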


I realise that this sounds a little bit strange after all the arguments
about inet, but if you think about it, this is the only sane way to deal
with these datatypes. 

Right now, the datatypes we have look and sound pretty but are pretty much
useless in practice. Yes, it is nice to be able to store a netmask with
every IP address, but in reality it goes unused. (Yes, please, someone
tell me if you are using inet with netmasks and you actually like it.)


I'd especially like to get Marc's input on this, as he's both a core team
member and has an actual networking background... Oh yeah, if Marc can
comment on whether 10/8 or 10.0.0.0/8 is the proper way to represent a
network, that'd be great too :)






Re: [HACKERS] Summary: what to do about INET/CIDR

2000-10-27 Thread Alex Pilosov

one more small request:

int8_inet(inet) and inet_int8(int8): functions to convert an inet to an
int8 and back. (not an int4, since postgres int4s are signed)

This allows me to do some additional manipulations on values. E.g. given a
host, determine its default gateway: for us it is always the first host on
that network, which could be implemented as
inet_int8(int8_inet(network(x))+1). Or split a cidr into two halves, etc.

-alex
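The two manipulations mentioned above can be sketched as follows (illustrative Python using the ipaddress module; int8_inet/inet_int8 are the requested functions, not existing ones, and the example networks are made up):

```python
import ipaddress

def default_gateway(network):
    # First host on the network, i.e. inet_int8(int8_inet(network(x)) + 1)
    # in the proposed notation.
    net = ipaddress.ip_network(network)
    return str(net.network_address + 1)

def split_in_half(network):
    # Split a cidr into its two equal halves (prefix length + 1).
    halves = ipaddress.ip_network(network).subnets(prefixlen_diff=1)
    return [str(s) for s in halves]
```

For example, the gateway of 10.1.2.0/24 would come out as 10.1.2.1, and splitting 10.0.0.0/8 yields the two /9 halves.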









Re: [HACKERS] Re: [GENERAL] A rare error

2000-10-27 Thread Alex Pilosov

 (SELECT * INTO newtable FROM table1) UNION (SELECT * FROM table2);
Possibly a silly (and definitely not standards-conformant) suggestion:

Maybe grammar should be amended to allow for
(SELECT * FROM table1) UNION (SELECT * FROM table2) INTO newtable

i.e. 

union_expr:
 (select_expr) union (union_expr) [into into_table]

 Notice the INTO?  Doesn't this seem like an odd place for it, in what
 appears to be a subordinate query?  Where else would it go?  How would
 it grab you in an expression with five or more levels of parens?
 How about five levels of parens and a complicated targetlist before
 you get to the INTO?
 
 What I'm suggesting is that the parens be allowed only on the right
 hand side of the set operations.  How does that strike you?
 
  
  regards, tom lane
 
 




Re: [HACKERS] Summary: what to do about INET/CIDR

2000-10-27 Thread Alex Pilosov

On Fri, 27 Oct 2000, Tom Lane wrote:

 Larry Rosenman [EMAIL PROTECTED] writes:
  OK, what I really meant was a way to coerce a CIDR entity to INET so 
  that host() can work with a CIDR type to print all 4 octets. 
 
 Hm.  I don't see any really good reason why host() rejects CIDR input
 in the first place.  What's wrong with producing the host address
 that corresponds to extending the CIDR network address with zeroes?
_maybe_ because this is an invalid address (an address cannot have an
all-zeros or all-ones host part). On the other hand, postgres doesn't enforce
that in inet_in, so it's inconsistent to enforce it here...

  Currently you can't coerce a CIDR type to INET. 
 
 Well you can, but it doesn't *do* anything.  One of the peculiarities
 of these two types is that the cidr-vs-inet flag is actually stored
 in the data value.  The type-system differentiation between CIDR and
 INET is a complete no-op for everything except initial entry of a value
 (ie, conversion of a text string to CIDR or INET); all the operators
 that care (which is darn few ... in fact it looks like host() is the
 only one!) look right at the value to see which type they've been given.
 So applying a type coercion may make the type system happy, but it
 doesn't do a darn thing to the bits, and thus not to the behavior of
 subsequent operators either.  I have not yet figured out if that's a
 good thing or a bad thing ...
Probably cidr_inet should make a copy instead of just "blessing" the
original value?

-alex




Re: [HACKERS] Summary: what to do about INET/CIDR

2000-10-27 Thread Alex Pilosov

On Fri, 27 Oct 2000, Tom Lane wrote:

 The way I'm visualizing this, INET is a generalized type that will store
 any 4-octet address plus any netmask width from 1 to 32.  This includes
 not only host addresses, but network specs and broadcast addresses.
 CIDR is a subset type that only accepts valid network specs (ie, no
 nonzero address bits to the right of the netmask).  There is no subset

I really don't think it should. We should have as much error-checking as
possible. A broadcast address does _not_ have a netmask, i.e. 10.0.0.255/24
does not make sense as inet; it should be 10.0.0.255/32

(i.e. the broadcast() function must return a value with a /32 mask)
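The point that a broadcast address is a single host, not a network, can be seen in Python's ipaddress module, which applies exactly this error-checking (an illustration of the semantics being argued for, not of postgres itself):

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/24")
bcast = net.broadcast_address   # a bare address, no netmask attached
print(bcast)                    # 10.0.0.255

# Modeling it as a network requires /32; /24 is rejected outright
# because the value has nonzero host bits:
ok = ipaddress.ip_network("10.0.0.255/32")
try:
    ipaddress.ip_network("10.0.0.255/24")
except ValueError as e:
    print(e)  # explains that the value has host bits set
```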

 type that corresponds to "valid host addresses only" --- if there were,
 it would be a subset of INET but would have no valid values in common
 with CIDR.  We could make such a type but I dunno if it's worth the
 trouble.




Re: [HACKERS] Summary: what to do about INET/CIDR

2000-10-27 Thread Alex Pilosov

On Fri, 27 Oct 2000, Tom Lane wrote:

 A more interesting question is whether the system needs to provide any
 assisting functions that aren't there now.  The lookup function you guys
 are postulating seems like it would be (in the simple cases)
   create function my_network(inet) returns cidr as
   'select network from my_networks where ???'
as in my mail:
select network from my_network where network >> $1 order by
network_prefix(network) desc limit 1;

(i.e. if many networks cover the IP address, pick the one with the longest
prefix). The only hard question here is how to properly index this table.
This sounds like a perfect application for a user-defined index method.
I need to look up the documentation on how they work...


However, this probably won't pose a major problem in production: the
networks table will be relatively small. 

 Maybe it's too late at night, but I'm having a hard time visualizing
 what the ??? condition is and whether any additional system-level
 functions are needed to make it simple/efficient.

Actually, you can scratch my proposal. I realise it could be inconvenient
for some people.

I'll probably be putting all my hosts in as inet xxx/32, using the above
lookup function to get the real network, and doing operations on that.






Re: [HACKERS] Summary: what to do about INET/CIDR

2000-10-27 Thread Alex Pilosov


On Fri, 27 Oct 2000, Tom Lane wrote:

 BTW, does it strike anyone else as peculiar that the host(),
 broadcast(), network(), and netmask() functions yield results
 of type text, rather than type inet?  Seems like it'd be considerably
 more useful if they returned values of type inet with masklen = 32
 (except for network(), which would keep the original masklen while
 coercing bits to its right to 0).
I absolutely agree, except for network(), which should return cidr.
(after all, this is the network).

As I mentioned in another email, should the inet datatype really care whether
the host part is all-ones or all-zeros and reject that? It would make sense to
me (10.0.0.0/8::inet is not a valid address, but 10.0.0.0/8::cidr is), but
it would break some people's scripts...

I'm talking here from a perspective of a network provider with P
knowledge...I'm sure Marc can chime in here...

 -alex






[HACKERS] relation ### modified while in use

2000-10-22 Thread Alex Pilosov

I'm having the error 'relation number modified while in use' fairly
often. It is always the same relation that gives the problem. Usually
after all currently-running backends die away with that error, the error
disappears. If I shut down, run ipcclean, and start postgres back up, it also
disappears.


What causes this? I have a feeling that it has to do with referential
integrity (the table in question is referenced by almost every other
table), and possibly with a leak of reference counts.

This is all with pg7.0.2 on i386.

-alex




Re: [HACKERS] relation ### modified while in use

2000-10-22 Thread Alex Pilosov

On Mon, 23 Oct 2000, Tom Lane wrote:

 when done, but it will deadlock if SELECT does not release that lock.
 
 That's annoying but I see no way around it, if we are to allow
 concurrent transactions to do schema modifications of tables that other
 transactions are using.

I might be in over my head, but maybe it is time for yet another type
of lock? A "do-not-modify-this-table-under-me" lock, which would persist
until the transaction commits, and would conflict only with the alter table
lock/AccessExclusiveLock?

I realise we already have many lock types, but this seems to be the proper
solution to me...

In a related vein: is there a way to see who (at least the process id) is
holding locks on tables?




Re: [HACKERS] time stops within transaction

2000-10-18 Thread Alex Pilosov

On Wed, 18 Oct 2000, Tom Lane wrote:

 I think what you are advocating is recomputing now() at each statement
 boundary within a transaction, but that's not as simple as it looks
 either.  Consider statement boundaries in an SQL function --- the
 function is probably being called from some outer statement, so
 advancing now() within the function would violate the spec constraint
 with respect to the outer statement.
Postgres doesn't have an idea of what a 'top-level' statement is, i.e. a
statement as submitted by a client (libpq)?

-alex




[HACKERS] bug: alter table/FK

2000-10-17 Thread Alex Pilosov

I don't know if this has been fixed or not, but alter table will not
adjust RI/FK triggers on the table. 

I.E:

create table foo (a int4 primary key)
create table bar (b int4 references foo)
alter table foo rename to foo2

now, updates to foo2 will either crash or hang postgres.


What needs to be done: on alter table, update tgargs in pg_trigger table

-alex




[HACKERS] time stops within transaction

2000-10-17 Thread Alex Pilosov

I just ran into the strangest thing: within a transaction, select now() will
always return the time when the transaction started. The same happens with
select 'now'::timestamp.

This is with 7.0. I have not tested it with CVS.

I am not sure what causes this. I assume that the result of now() is cached
by fmgr. Is there a way to declare a function 'never cacheable'? If there
is, it should be applied to now().
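The semantics being described, a clock frozen at transaction start rather than re-read per call, can be sketched as a toy model (this is an illustration of the observed behavior, not of the fmgr internals):

```python
import time

class Transaction:
    """Toy model of the observed behavior: now() is captured once at
    BEGIN and every later call inside the transaction returns it."""
    def __init__(self):
        self.start = time.time()  # captured once, at transaction start

    def now(self):
        return self.start         # same value for the whole transaction

txn = Transaction()
t1 = txn.now()
time.sleep(0.01)                  # wall-clock time advances...
t2 = txn.now()
print(t1 == t2)                   # ...but now() does not: prints True
```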

-alex




[HACKERS] time stops/workaround

2000-10-17 Thread Alex Pilosov

Strangely, the same thing does not happen when I use timenow() instead of
now(). This is very counter-intuitive; if this is the way it is supposed
to work, the docs should at least say so.

Also, I checked, and it's probably not the fmgr cache, since now() is set
to be noncacheable...

-alex




Re: [HACKERS] What's happening with pgsql-committers?

2000-10-07 Thread Alex Pilosov

On Sat, 7 Oct 2000, The Hermit Hacker wrote:

 On Tue, 3 Oct 2000, Tom Lane wrote:
 
  Peter Eisentraut [EMAIL PROTECTED] writes:
   Has anybody been getting pgsql-committers messages the last few days?
  
  Coming through fine for me (at least when hub.org isn't wedged
  completely, which it was several times over the weekend...)
 
 We hit a corruption of -general again this past week (I've really gotta
 dive into getting the postgresql backend working for majordomo instead of
 Berkeley DB :( ) ... it was causing a perl process of 900+Meg to run,
 which caused the load avg to climb unreasonably :(  
Oooo, ooo, how about instead writing an ndbm/gdbm/whatever-db wrapper for
postgres?

Configurable by some file in /etc, which would establish a correspondence
between the "file" argument to dbopen() and a postgres view. This would rock
for transparently converting applications which only support dbm mappings
(such as sendmail) to use postgres.

The only problem I see with it is the db->fd() function, which returns a
file descriptor for the file, and which some apps rely on to do flock/fcntl.
On the other hand, locking of that kind will be unnecessary for postgres, so
it can return the fd of some file in /tmp...
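A minimal sketch of such a dbm-style wrapper, with sqlite3 standing in for postgres and the class/table names invented for illustration (the db->fd() workaround from above included):

```python
import os
import sqlite3
import tempfile

class DbmOverSql:
    """Sketch of the proposed wrapper: dbopen()'s "file" argument maps
    to a table name; sqlite3 stands in for postgres here."""
    def __init__(self, table):
        self.table = table
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} (k TEXT PRIMARY KEY, v TEXT)")

    def __setitem__(self, key, value):
        self.conn.execute(
            f"INSERT OR REPLACE INTO {self.table} VALUES (?, ?)", (key, value))

    def __getitem__(self, key):
        row = self.conn.execute(
            f"SELECT v FROM {self.table} WHERE k = ?", (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

    def fd(self):
        # The db->fd() workaround: hand back an fd on an anonymous
        # scratch file so callers can flock()/fcntl() it harmlessly.
        fd, path = tempfile.mkstemp()
        os.unlink(path)
        return fd

db = DbmOverSql("aliases")
db["postmaster"] = "alex"
print(db["postmaster"])  # alex
```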

There's an attempt to do this at http://www.is.kiruna.se/~goran/ldap/arkiv/
but it's very raw and not configurable.

-alex