Re: [HACKERS] Python 3.1 support

2009-11-20 Thread Tino Wildenhain

On 19.11.2009 18:01, James Pye wrote:

On Nov 19, 2009, at 3:12 AM, Peter Eisentraut wrote:

The other approach, which is what James Pye's
new implementation proposes (as I understand it), is to convert
PostgreSQL types into specially made Python objects, such as
Postgres.types.record or Postgres.types.timestamp.


Convert is not a good word choice. The Datum of the parameter is stored inside a new
Python object (that only holds a Datum). So it is more like copied into Python
memory and associated with its respective type. Wrapped in a Python object?


Yes, wrapped is the term commonly used for that. And I must say I like
it, and I have used plpy where I could.



One cool thing about doing it this way, is that if you just pass parameters 
forward to a prepared statement, there's no type I/O overhead. Not a huge 
performance win for common cases, but if someone were passing larger arrays 
around, it could be quite beneficial.


Exactly and you have all the meta information about the original 
postgres type. IIRC there were some thoughts of having something like 
that in a DBAPI interface as well (similar for example to cx_Oracle).
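
The zero-conversion pass-through described above can be sketched in plain Python. All names here are invented for illustration; this is not James Pye's actual API, just the idea that type I/O only happens when a value is inspected, not when it is forwarded:

```python
# Hypothetical sketch: a wrapper holding an opaque server-side Datum.
# The type's output conversion runs lazily, on first real use only.
class Datum:
    def __init__(self, raw, type_output):
        self._raw = raw                  # opaque representation
        self._type_output = type_output  # conversion routine, run lazily
        self.conversions = 0

    def value(self):
        # Type I/O happens only here, on demand.
        self.conversions += 1
        return self._type_output(self._raw)

def pass_through(datum):
    # Forwarding the wrapper (e.g. to a prepared statement) costs nothing.
    return datum

d = pass_through(Datum(b"12345", lambda raw: int(raw)))
assert d.conversions == 0   # no type I/O done just by passing it along
assert d.value() == 12345   # conversion happens when actually needed
```

For a large array parameter that is merely handed to another query, this saves the full output/input round-trip entirely.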


Tino



smime.p7s
Description: S/MIME Cryptographic Signature


Re: [HACKERS] MySQL Compatibility WAS: 8.5 release timetable, again

2009-08-27 Thread Tino Wildenhain

Tom Lane wrote:

Greg Stark gsst...@mit.edu writes:

Actually it always bothered me that we don't have implicit casts from
integer to boolean. I can't see any ambiguity or unintentional effects
this would cause problems with. Am I missing something?


Personally, as an old Pascal-lover, I always thought that C's failure
to distinguish between int and boolean was the single biggest design
mistake in the language.  I'm very glad that SQL doesn't make that
mistake, and I don't want to go against the standard to introduce it.


Then you should love Python, where everything non-empty is regarded as True
in a boolean context ;-)
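
A quick illustration of Python's truth rules:

```python
# In Python's boolean context, empty containers, empty strings, zero and
# None are falsy; non-empty and non-zero values are truthy.
truthy = [bool(x) for x in ([1], "x", 42, {0: 1})]
falsy = [bool(x) for x in ([], "", 0, None)]
assert all(truthy)
assert not any(falsy)
```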

Cheers
Tino


smime.p7s
Description: S/MIME Cryptographic Signature


Re: [HACKERS] [GENERAL] Performance of full outer join in 8.3

2009-04-18 Thread Tino Wildenhain

Tom Lane wrote:

Greg Stark st...@enterprisedb.com writes:

...

I suppose if we had explain-to-a-table then we could run explain and
then run an sql query to verify the specific properties we were
looking for.



A similar thing could be done with xml if we had powerful enough xml
predicates but we have a lot more sql skills in-house than xml.


Yeah, I suspect the only really good answers involve the ability to
apply programmable checks to the EXPLAIN output.  A SQL-based solution
shouldn't need any external moving parts, whereas analyzing XML output
presumably would.


If explain-to-a-table were one of the available options, and not the
only option, that would be great. The big O only has this option, and it
totally sucks if you want to explain a query in a production
environment where you can't just create tables here and there.

Tino

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] string_to_array with empty input

2009-04-02 Thread Tino Wildenhain

Robert Haas wrote:

On Wed, Apr 1, 2009 at 3:49 PM, Sam Mason s...@samason.me.uk wrote:

On Wed, Apr 01, 2009 at 03:19:23PM -0400, Robert Haas wrote:

On Wed, Apr 1, 2009 at 12:52 PM, David E. Wheeler da...@kineticode.com wrote:

Well, I'd just point out that the return value of string_to_array() is
text[]. Thus, this is not a problem with string_to_array(), but a casting
problem from text[] to int[]. Making string_to_array() return a NULL for
this case to make casting simpler is addressing the problem in the wrong
place, IMHO. If I want to do this in Perl, for example, I'd do something
like this:

my @ints = grep { defined $_ && $_ ne '' } split ',', $string;

I've written code that looks a whole lot like this myself, but there's
no easy way to do that in SQL.  SQL, in particular, lacks closures, so
grep {} and map {} don't exist.  I really, really wish they did, but

I don't grok Perl so I'd appreciate an explanation of what the above
does, at a guess it looks a lot like the function I wrote up thread[1]
called array_filter_blanks and using it would look like:

 SELECT array_filter_blanks(string_to_array(arr,',')) AS ints;


map { closure } @list applies closure to each element of list and
makes a new list out of the results.
grep { closure } @list applies closure to each element of list and
returns the list elements for which the closure returns true.


Ah, so that's equal to

 [map_closure(i) for i in thelist if grep_closure(i)]

in Python.
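
The correspondence can be checked directly in Python; the filtering step is exactly what string_to_array callers would want before casting to int[]:

```python
# Perl:   my @ints = grep { defined $_ && $_ ne '' } split ',', $string;
# Python: a list comprehension combines map and grep in one construct.
string = "1,,2,,,3"
parts = string.split(",")          # like split ',', $string
ints = [int(p) for p in parts if p != ""]  # grep out empty fields, then map
assert ints == [1, 2, 3]
```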




I
believe that our type system is too woefully pathetic to be up to the
job.

This has very little to do with PG's type system.  You either want
functions to be first class objects or support for closures, blaming the
type system is not correct.


I'm speaking primarily of functions as first-class objects, though
closures would be nice too.   But consider an operation like

UPDATE rel SET col1 = MAP ( f OVER col2 )

We need to be able to determine whether this is well-typed, just as we
do now for any other SQL query.  Specifically, we need to check that f
is a one argument function whose argument type is that of col2 and
whose return type is that of col1.  My understanding is that right now
types are represented as 32-bit OIDs.  I think they'd need to be some
sort of more complex structure in order to handle cases like this.


Would the above query not be written as

UPDATE rel SET col1 = f(col2);

anyway, or am I missing something?

IMHO, having the generic tuple tables we already have in INSERT INTO (...)
VALUES (...),(...),(...)

be usable in all places like a real table would be helpful in
many cases.

But this might be completely unrelated :)

Regards
Tino




Re: [HACKERS] cross-compiling plpython

2009-03-16 Thread Tino Wildenhain

Nikhil Sontakke wrote:
So, I realized that there does not seem to be an easy way for cross 
compiling plpython. The configure checks for the locally installed 
python include locations and eventually the plpython makefile ends up 
picking all the local includes too. 

Does anyone have any experiences on cross-compiling plpython before? 
Obviously I can hack the Makefile to point to the host environment 
headers, but is there a neater way? Also I suspect the plpython.so so 
generated might not load on the host later too because its looking for 
the libpython.so somewhere else too.


Did you check the Debian source packages? At least I believe they do not
compile everything on the respective target system, so there must be some
hooks for cross-compiling.

Regards
Tino




Re: [HACKERS] Simple postgresql.conf wizard

2008-11-03 Thread Tino Wildenhain

Greg Smith wrote:
One of the long-terms goals I'm working toward is wrapping a wizard 
interface around the tuning guidelines described by 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server now that 
those have gone through a fair amount of peer review.  Attached is a 
first simple version of such a wizard, implemented in Python.  Right now 
what it does is look the amount of memory in your system and adjust 
shared_buffers and effective_cache_size.  So if we started with this 
stock configuration:


That's really nice. I have been playing with that idea too (and in
Python), but never really had time to do more than some simple tests.
So it's nice to see we have something to base development on.

Thanks for starting!
Regards
Tino




Re: [HACKERS] Keeping creation time of objects

2008-09-10 Thread Tino Wildenhain

Hi,

Devrim GÜNDÜZ wrote:

On Tue, 2008-09-09 at 15:36 -0400, Tom Lane wrote:

why would creation time (as opposed to any other time, eg last schema
modification, last data modification, yadda yadda) be especially
significant? 

Hmm, those would be cool, too.


Maybe except last data modification.

But for audit reasons it's really helpful to see whether
someone has had their hands on objects since they were
created. So if it would not cost us an arm and a leg,
I'm all for having created/changed timestamps for
all objects.

Regards
Tino





Re: [HACKERS] XML / XSL rendering in PostgreSQL server?

2008-08-15 Thread Tino Wildenhain

Hi,

Peter Sampson wrote:

Hi,

I would like to use one of the XML rendering functions like 
table_to_xml_and_xmlschema OR table_to_xml and render the output via 
XSL, preferably in one query.


What do you think would be the benefit of doing that?

I've searched the mailing lists, site, docs and Google -- to no avail.
Also, I see that xslt_process (from xml2) will be deprecated going
forward. We're building an app that will be used for a long time into
the future, and I'd like to keep it future-compatible.


I've also searched online for SQL/XML docs or tutorials, but haven't
found any that directly address using XSL in queries themselves.


You could try with one of the pl/*u languages.

Regards
Tino




Re: [HACKERS] PL/PythonU

2008-08-05 Thread Tino Wildenhain

Hannu Krosing wrote:

On Mon, 2008-08-04 at 13:08 -0400, David Blewett wrote:

Hi All:

This is an off-shoot of the "Do we really want to migrate plproxy and
citext into PG core distribution?" thread.

On the way home from PyOhio, I had a conversation with a few people
that use Zope a lot. I happened to mention that Postgres doesn't have
an untrusted version of pl/python and they were curious as to why.


Personally I'm also constantly mentioning it :-)


They directed me to Zope's Restricted Python implementation [1][2]. In
doing some research, I found the "Pl/Python -- current maintainer?"
[3] thread from 2006. I also found this [4] thread on the python-dev
mailing list.

Hannu: You had mentioned bringing pl/python up to the level of some of
the other pl's. Have you thought any more about pl/pythonu?


My recollection of old times (about python v. 1.6) was that the
restricted sandboxes had some fatal flaws. I have not followed zope's
RestrictedPython enough to have an opinion on its safety.


Yes, the old sandbox (restricted execution and Bastion) used a
relatively naive approach of basically limiting only imports and, IIRC,
some file access objects.
That turned out not to be bulletproof, so those modules were
removed. This should not be confused with the different approach
RestrictedPython uses, which has proven successful to date.
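
The classic hole in the old import-only restriction can be shown in a couple of lines of ordinary Python: attribute access alone walks from any literal back to `object`, and from there to every loaded class, so blocking `import` never closed the door:

```python
# With every import blocked, plain attribute traversal still reaches
# the object base class from an innocent tuple literal...
base = ().__class__.__bases__[0]
assert base is object

# ...and from object, every loaded subclass (including I/O types) is
# enumerable. A real sandbox like RestrictedPython has to restrict
# attribute access itself, not just the import statement.
subclasses = base.__subclasses__()
assert len(subclasses) > 0
```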

Regards
Tino





Re: [HACKERS] Identifier case folding notes

2008-07-09 Thread Tino Wildenhain

Hi,

Peter Eisentraut wrote:
...
4. Compare the name data type in a case-insensitive manner.  This would 
probably address most problem cases.  Again, you can't have objects with names 
different in case only.  One condition to implementing this would be that this 
behavior would have be tied down globally at initdb, because it affects system 
indexes and shared catalogs.  That might be impractical for some, because 
you'd need different instances for different behaviors, especially when you 
want to host multiple applications or want to port an affected application to 
the native PostgreSQL behavior over time.


The whole point, as I understand it, is to fix the behavior of applications
that create objects without quotes and then access them with quoted uppercase names?


Would a small script that fixes the schema using RENAME not solve this for
many applications?
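
A minimal sketch of such a script, with invented table names: it emits the RENAMEs needed so the folded lowercase objects match the quoted uppercase names an affected application asks for:

```python
# Hypothetical helper: for each affected table, emit a RENAME so the
# lowercase-folded name matches the quoted uppercase name the
# application uses. Table names here are made up for illustration.
def rename_statements(tables):
    stmts = []
    for t in tables:
        stmts.append('ALTER TABLE %s RENAME TO "%s";' % (t, t.upper()))
    return stmts

stmts = rename_statements(["customer", "order_item"])
assert stmts[0] == 'ALTER TABLE customer RENAME TO "CUSTOMER";'
```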

T.




Re: [HACKERS] Posting to hackers and patches lists

2008-06-27 Thread Tino Wildenhain

Bruce Momjian wrote:

Tom Lane wrote:

...


* no permanent archive of the submitted patch

* reviewer won't know if the submitter changes the patch after he
downloads a copy, and in fact nobody will ever know unless the submitter
takes the time to compare the eventual commit to what he thinks the
patch is


This requires the patch submitter to send an email every time they
update the URL.  The problem with no archive is a problem though.  It
works for me because I am around to supply versions but I see your
point --- perhaps we could make the system have a stable URL but allow
for versioning access.  Maybe email is a fine interface, of course.


What about having tickets? Trac, for example, or something like that:
a submitter with an itch to scratch just uploads the patch to a
ticket. This way you know the reason for a patch, and you can even have
a little discussion there, as well as a link to the revision where it
got incorporated. Couldn't be cleaner, I think.
The link to the ticket is also rather stable, and you can
refer to it on the mailing list.

Cheers
Tino





Re: [HACKERS] Git Repository for WITH RECURSIVE and others

2008-06-24 Thread Tino Wildenhain

David Fetter wrote:

Folks,

With lots of help from Greg Sabino Mullane, I've set up a git
repository for the WITH RECURSIVE patches on
http://git.postgresql.org/.

What other patches would people like to try maintaining this way until
commitfest?

It looks like gitosis is a good way to grant write access to git
repositories, but it's not yet packaged for FreeBSD.  Any ideas about
how to handle this?


Isn't the whole point of git not to require write access? If you want
centralized development, then Subversion/CVS can do the job quite well.

Unless I'm completely wrong on this :-)

Cheers
Tino




Re: [HACKERS] Case-Insensitve Text Comparison

2008-06-02 Thread Tino Wildenhain

Andrew Sullivan wrote:
...

I think if you want some special treatment of text for some users, it
should be explicit. 


Yes.  Also, not just text.  Think of currency, numeric separators, etc.


Which, IMHO, should not really be the business of the type interface,
but rather something for to_char() and to_{type} to handle.

Tino




Re: [HACKERS] Setting a pre-existing index as a primary key

2008-05-11 Thread Tino Wildenhain

Joshua D. Drake wrote:

Tom Lane wrote:

Well it should be optional but it would be nice if we had the option 
to have it renamed per the default... meaning the same output if I 
were to do this:


If you want that, you can rename the index (either before or afterwards).
I don't see any reason to clutter the make-constraint-from-index command
with questions of renaming.


As a counter point, I don't see any reason to make the DBA's life 
harder. Sure it is just one step but it is a human step, prone to error 
and taking more time than it should. Why not just make it easy? 
Especially when the easy isn't sacrificing data integrity or quality of 
product?


Well, the name is by no means a functional problem; it's merely cosmetic.
So if you want, propose that a warning be issued suggesting a saner name.

That should be sufficient, I think.

T.




Re: [HACKERS] Auto-updated fields

2008-05-07 Thread Tino Wildenhain

David Fetter wrote:

Folks,

A co-worker pointed out to me that MySQL has a feature that, properly
implemented and maybe extended, could be handy, namely what MySQL
calls a timestamp field, so here's a proposal:

1.  Create a generic (possibly overloaded) trigger function, bundled
with PostgreSQL, which sets a field to some value.  For example, a
timestamptz version might set the field to now().

2.  Have some kind of pre-processing of CREATE and ALTER statements on
tables which would attach the above function to the field at hand,
something like:

CREATE TABLE foo(
last_updated TIMESTAMPTZ_UPDATED(),
...
);

which would turn last_updated into a TIMESTAMPTZ with the expected
behavior on UPDATEs.

What do folks think of this idea?


Having predefined triggers at hand could be useful, especially
for people who don't write triggers often enough to be used to them, but I'm
really not happy with the idea of magic preprocessing.

I guess this is commonly wanted for timestamp fields, so why not
include a recipe in the docs, under the timestamp examples, which
shows how to create and use such a trigger?
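
Roughly what such a recipe boils down to, here simulated in Python rather than PL/pgSQL (column and function names are illustrative): a BEFORE UPDATE trigger that overwrites the column on its NEW row, regardless of what the statement supplied:

```python
# Simulation of: NEW.last_updated := now(); RETURN NEW;
# (the body of a BEFORE UPDATE trigger function)
import datetime

def touch_last_updated(new_row):
    new_row["last_updated"] = datetime.datetime.now()
    return new_row

row = {"id": 1, "last_updated": None}
updated = touch_last_updated(dict(row))
assert updated["last_updated"] is not None
assert row["last_updated"] is None  # trigger acts on NEW, original untouched
```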

I may be wrong, but my feeling is: not too much weirdness in the core,
please :) (I guess MySQL had it because it lacked triggers and the like
for a long time?)

T.





Re: [HACKERS] Protection from SQL injection

2008-04-30 Thread Tino Wildenhain

Andrew Dunstan wrote:



Tino Wildenhain wrote:

Hi,


In C the best practice is to use #define for constants. In C++ you
have 'const', in Java 'static final'. Unfortunately the 'named
constant' concept doesn't exist in SQL. I think that's a mistake. I
suggest to support CREATE CONSTANT ... VALUE ... and DROP CONSTANT
..., example: CREATE CONSTANT STATE_ACTIVE VALUE 'active'.


of course you mean:

CREATE CONSTANT state_active TEXT VALUE 'active'; ? ;)


Why does he mean that? Manifest constants are not typed in plenty of 
languages.


Well, but in this case we want them to prevent easy SQL injection, and
therefore arbitrary macro expansion, as in many of those languages,
does not seem like a good idea to me.

Cheers
Tino



Re: [HACKERS] Protection from SQL injection

2008-04-29 Thread Tino Wildenhain

Hi,


In C the best practice is to use #define for constants. In C++ you
have 'const', in Java 'static final'. Unfortunately the 'named
constant' concept doesn't exist in SQL. I think that's a mistake. I
suggest to support CREATE CONSTANT ... VALUE ... and DROP CONSTANT
..., example: CREATE CONSTANT STATE_ACTIVE VALUE 'active'.


of course you mean:

CREATE CONSTANT state_active TEXT VALUE 'active'; ? ;)

An interesting idea. Would that mean PG complains about queries like

SELECT state_active FROM sometable ...

because state_active is already defined as a constant?

What about local session variables? Useful as well...

I think this is really a big effort :-)

Greets
Tino



Re: [HACKERS] TODO, FAQs to Wiki?

2008-04-21 Thread Tino Wildenhain

Alvaro Herrera wrote:

Bruce Momjian wrote:

Alvaro Herrera wrote:



As far as I know, what the doc translators do is translate the SGML
files directly, which is as difficult and cumbersome as you can possibly
get.  I am in no way suggesting we do that for the FAQ.

What can we do to help people translate the docs?


I suggest we start an experiment with the FAQ in XML Docbook, which is
amenable to automatic processing, and move from there.


Well... or reStructuredText, which has the advantage of being human-
editable? (without a specialized editor, that is)

Greets
Tino



Re: [HACKERS] TODO, FAQs to Wiki?

2008-04-21 Thread Tino Wildenhain

Greg Smith wrote:

On Mon, 21 Apr 2008, Tino Wildenhain wrote:


Alvaro Herrera wrote:

I suggest we start an experiment with the FAQ in XML Docbook, which is
amenable to automatic processing, and move from there.


Well... or reStructuredText, which has the advantage of being human-
editable? (without a specialized editor, that is)


reST is a reasonable tool for building small documents, I don't use it 
because it really doesn't scale well to handle larger ones.  Given that 
the rest of the project is already committed to using Docbook for those 
larger documents, I think it's hard to justify the additional toolchain 
needed for reST processing just to make the FAQ a little easier to edit.


Haha, yes, that's fine, no problem. I just thought we could throw around
ideas, and so I did.

Greets
Tino



Re: [HACKERS] TODO, FAQs to Wiki?

2008-04-21 Thread Tino Wildenhain

Joshua D. Drake wrote:

On Mon, 21 Apr 2008 19:06:53 +0200
Tino Wildenhain [EMAIL PROTECTED] wrote:
 

Well... or reStructuredText, which has the advantage of being human-
editable? (without a specialized editor, that is)


Huh? How is XML not human editable... didn't you ever create webpages
in vi? :)


You know, I used a butterfly... there is even an Emacs macro for it :-)

Cheers
Tino



Re: [HACKERS] modules

2008-04-06 Thread Tino Wildenhain

Hi,

D'Arcy J.M. Cain wrote:
...

Yes but what I am suggesting goes beyond that.  My idea is that there
is a modules directory that contains a file for each installable
module.  This file would contain all the information about the module
such as name, version, where to get the actual package, an MD5 checksum
of the package, minimum and maximum PostgreSQL versions required, etc.


I'd suggest the approach taken by Debian's apt rather than pkgsrc:
instead of maintaining a whole directory structure on the client side,
have a couple of files as the database (I guess even using the database
itself would work) and RDP (basically XML over HTTP), which would be
different from the apt approach, but we are dealing with far fewer modules.

The most important thing we could learn from apt is to use cryptography
to secure installed modules, instead of just maintaining package
integrity with MD5. After all, a database module can do almost
anything, so I'd rather know whether I trust the packager.
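
The integrity half of that idea is only a few lines with hashlib; as noted above, real trust additionally needs a signature over the metadata itself, which this sketch does not provide:

```python
# Verify a downloaded module against the checksum published in the
# repository metadata. This proves the bytes were not corrupted;
# trusting the packager requires signing the metadata, as apt does.
import hashlib

def verify(payload: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(payload).hexdigest() == expected_sha256

pkg = b"CREATE FUNCTION ..."  # pretend module payload
good = hashlib.sha256(pkg).hexdigest()
assert verify(pkg, good)
assert not verify(pkg + b"tampered", good)
```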

Just my 0.2c
Tino



Re: [HACKERS] Surfacing qualifiers

2008-03-28 Thread Tino Wildenhain

Tom Lane wrote:

David Fetter [EMAIL PROTECTED] writes:

You mentioned in an earlier mail that the information exposed was
inadequate.  Could you sketch out what information would really be
needed and where to find it?


The main problem with what you suggest is that it'll fail utterly
on join queries.

AFAICS any real improvement in the situation will require exposing
remote tables as a concept understood by the planner, complete
with ways to obtain index and statistical information at plan time.
After suitable decisions about join strategy and so forth, we'd
wind up with a plan containing a RemoteTableScan node which


I'd like to point out that Remote* might be a bit too narrow, because
the same potential exists in general for SRF functions (e.g., any virtual
table construction). It would certainly be nice to have as general an
approach as possible.

Cheers
Tino



Re: [HACKERS] Permanent settings

2008-02-22 Thread Tino Wildenhain

Hi,

Magnus Hagander wrote:
...

Can you explain why this wouldn't be usable?


Because you will end up with an ever-growing file, that will be a PITA to
deal with. Consider it after 10k+ changes. (yes, I can see that happening.
You know how some people use GUIs) Or 100k. The problem does not happen at
100 lines...


I don't think even 100k lines would be a problem. And there should be
some penalty if someone automates writing settings
permanently.


I can see the solution with a single file with them all in, but it needs to
be able to overwrite them IMHO.


Why? If you want to strip down the file, you can just postprocess it.
Either a tool or even a little sed script would do, and you would
keep the records for reference.

+1 for Aidan's idea.
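
The "postprocess it" step could be as small as this sketch, which keeps only the last assignment of each setting from an append-only file:

```python
# Scan an append-only settings file; for each key keep only its most
# recent assignment (keys stay in first-seen order).
def strip_down(lines):
    last = {}
    for line in lines:
        if "=" in line and not line.lstrip().startswith("#"):
            key = line.split("=", 1)[0].strip()
            last[key] = line
    return list(last.values())

log = [
    "shared_buffers = 32MB",
    "work_mem = 4MB",
    "shared_buffers = 128MB",  # later change wins
]
assert strip_down(log) == ["shared_buffers = 128MB", "work_mem = 4MB"]
```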

Regards
Tino

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] PostgreSQL 8.4 development plan

2008-02-08 Thread Tino Wildenhain

Tom Lane wrote:

Dimitri Fontaine [EMAIL PROTECTED] writes:

On Wednesday 06 February 2008 21:35:54, Peter Eisentraut wrote:
Yes, I feel we could use a group writeable patch queue of some sort. 
Perhaps an IMAP server setup could do the job.



I've read some developers appreciating the way review board works:
  http://review-board.org/
  http://code.google.com/p/reviewboard/
  http://code.google.com/p/reviewboard/wiki/UserBasics


Hmm, the info on that last page might be out of date, but what it says is
that the only SCMS they really support 100% is SVN.  The other ones they
claim support for don't work [well/at all] with the post-review tool.


Btw, wasn't a group already playing with Trac/svn? That one also has
something like the above: http://trac-hacks.org/wiki/PeerReviewPlugin

And a lot more nice features, as well as Postgres backend support :)

Greets
Tino



Re: [HACKERS] plperl vs. bytea

2007-05-07 Thread Tino Wildenhain

Martijn van Oosterhout wrote:
...
 I do have one problem though: for bytea/integers/floats Perl has
 appropriate internel representations. But what about other user-defined
 types? Say the user-defined UUID type, it should probably also passed
 by a byte string, yet how could Perl know that. That would imply that
 user-defined types need to be able to specify how they are passed to
 PLs, to *any* PL.

Yes, exactly. One way could be to pass the type as a binary Datum and
provide a hull class for the PL languages, which then calls the type's
input/output routines at the string boundaries unless overridden by a
user implementation. So default handling could be done via the string
representation of the type, whatever that is, and for a defined set
of types every PL language could implement special treatment, such as
mapping to natural types.

This handling can be done independently for every PL implementation,
since for most types it would just move the current type treatment
a bit closer to the user code instead of doing all of it
in the call handler.

The second problem is the language interface for scripting outside the
database. Efficient and lossless type handling there would improve some
situations too; maybe a similar approach could be taken here.
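
A toy Python version of the proposed hull class, with all names invented: string representation by default, plus an optional per-type natural mapping a PL can register:

```python
# Hypothetical sketch of the "hull class" idea: values default to the
# type's string (input/output) representation; a PL may register a
# natural converter per type name and bypass the string round-trip.
class TypeHull:
    natural = {}  # type name -> converter registered by the PL

    def __init__(self, type_name, text_repr):
        self.type_name = type_name
        self.text_repr = text_repr

    def to_pl(self):
        conv = self.natural.get(self.type_name)
        if conv is not None:
            return conv(self.text_repr)  # special treatment, e.g. int
        return self.text_repr            # default: string representation

TypeHull.natural["int4"] = int          # PL opts in for integers
assert TypeHull("int4", "42").to_pl() == 42
assert TypeHull("uuid", "some-uuid-text").to_pl() == "some-uuid-text"
```

User-defined types like UUID fall through to the string default until someone registers something better, which is the point of the proposal.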

Regards
Tino



Re: [HACKERS] plperl vs. bytea

2007-05-07 Thread Tino Wildenhain

Andrew Dunstan wrote:



Tino Wildenhain wrote:

Martijn van Oosterhout wrote:
...
 I do have one problem though: for bytea/integers/floats Perl has
 appropriate internel representations. But what about other user-defined
 types? Say the user-defined UUID type, it should probably also passed
 by a byte string, yet how could Perl know that. That would imply that
 user-defined types need to be able to specify how they are passed to
 PLs, to *any* PL.

Yes, exactly. One way could be to pass the type as a binary Datum and
provide a hull class for the PL languages, which then calls the type's
input/output routines at the string boundaries unless overridden by a
user implementation. So default handling could be done via the string
representation of the type, whatever that is, and for a defined set
of types every PL language could implement special treatment, such as
mapping to natural types.

This handling can be done independently for every PL implementation,
since for most types it would just move the current type treatment
a bit closer to the user code instead of doing all of it
in the call handler.

The second problem is the language interface for scripting outside the
database. Efficient and lossless type handling there would improve some
situations too; maybe a similar approach could be taken here.




This seems like an elaborate piece of scaffolding for a relatively small 
problem.


This does not need to be over-engineered, IMNSHO.


Well, could you explain where it appears over-engineered?
All I was proposing is to move the rather hard-coded
type mapping to a softer approach, where the language
is able to support it.

Is there any insufficiency in Perl which makes it harder to
do in a clean way?

Regards
Tino




Re: [HACKERS] XML export

2007-02-11 Thread Tino Wildenhain

Peter Eisentraut wrote:

The issue of XML export has been discussed a few times throughout
history.  Right now you've got the HTML output in psql.  A few
people have proposed real XML output formats in psql or elsewhere.

I dug out some old code today that implements what SQL/XML has to say
on the matter and fitted the code to work with the current XML support
in the backend.

Below are examples of what it can do.  I'm thinking about hosting this
on PgFoundry, but if the crowd thinks this should be somewhere else,
short of the moon, let me know.


regression=# select table_to_xml('select * from emp');
 table_to_xml
---
 <table xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'>

   <row>
     <name>sharon</name>
     <age>25</age>
     <location>(15,12)</location>
     <salary>1000</salary>
     <manager>sam</manager>
   </row>

...

   <row>
     <name>linda</name>
     <age>19</age>
     <location>(0.9,6.1)</location>
     <salary>100</salary>
     <manager xsi:nil='true'/>
   </row>

 </table>

(1 row)

As a use case of sorts, I've got an XSLT stylesheet that can convert
this to HTML tables.

regression=# select table_to_xmlschema('select * from emp');
   table_to_xmlschema
-
 <xsd:schema
     xmlns:xsd='http://www.w3.org/2001/XMLSchema'
     xmlns:sqlxml='http://standards.iso.org/iso/9075/2003/sqlxml'>

   <xsd:import
       namespace='http://standards.iso.org/iso/9075/2003/sqlxml'
       schemaLocation='http://standards.iso.org/iso/9075/2003/sqlxml.xsd'/>

   <xsd:simpleType name='X-PostgreSQL.regression.pg_catalog.text'>
     <xsd:restriction base='xsd:string'>
       <xsd:maxLength value='MLIT'/>
     </xsd:restriction>
   </xsd:simpleType>

   <xsd:simpleType name='INTEGER'>
     <xsd:restriction base='xsd:int'>
       <xsd:maxInclusive value='2147483647'/>
       <xsd:minInclusive value='-2147483648'/>
     </xsd:restriction>
   </xsd:simpleType>

   <xsd:simpleType name='X-PostgreSQL.regression.pg_catalog.point'></xsd:simpleType>

   <xsd:simpleType name='X-PostgreSQL.regression.pg_catalog.name'></xsd:simpleType>

   <xsd:complexType name='RowType'>
     <xsd:sequence>
       <xsd:element name='name' type='X-PostgreSQL.regression.pg_catalog.text' nillable='true'></xsd:element>
       <xsd:element name='age' type='INTEGER' nillable='true'></xsd:element>
       <xsd:element name='location' type='X-PostgreSQL.regression.pg_catalog.point' nillable='true'></xsd:element>
       <xsd:element name='salary' type='INTEGER' nillable='true'></xsd:element>
       <xsd:element name='manager' type='X-PostgreSQL.regression.pg_catalog.name' nillable='true'></xsd:element>
     </xsd:sequence>
   </xsd:complexType>

   <xsd:complexType name='TableType'>
     <xsd:sequence>
       <xsd:element name='row' type='RowType' minOccurs='0' maxOccurs='unbounded'/>
     </xsd:sequence>
   </xsd:complexType>

   <xsd:element name='table' type='TableType'/>

 </xsd:schema>
(1 row)


I also have a table function which can convert both of these back into
a table, so that would be XML import.  But that doesn't work quite yet.



How would you express null in the values above?

Regards
Tino



Re: [HACKERS] May, can, might

2007-02-01 Thread Tino Wildenhain

Bruce Momjian schrieb:

I have made these adjustments to the documentation.  Do people want the
error message strings also updated?  It will probably make the
translation easier/clearer in the future, but it does involve some error
message wording churn.  CVS HEAD only, of course.


I still think logging localized error messages is a bad idea anyway.
Nothing wrong with a frontend client responding with localized
messages, but logfiles with localized errors are hard or next to
impossible to parse. (Let alone quoting them on mailing lists.)

So changes to the wording could break such applications anyway,
but not unexpectedly :-)

Regards
Tino




Re: [HACKERS] pgsql: Fix for plpython functions; return true/false for boolean,

2007-01-31 Thread Tino Wildenhain

Bruce Momjian schrieb:

Hannu Krosing wrote:

Officially by who ?

2.3 was the first version to introduce bool as a subtype of int, in
2.2.3 True and False were introduced as two variables pointing to
integers 1 and 0.

So to make your patch ok on all python versions, just make it
conditional on python version being 2.3 or bigger, and return int for
pre-2.3.


I thought about suggesting that, but do we want plpython to have
different result behavior based on the version of python used?  I didn't
think so.


Why not? Python 2.2 is rarely in use anymore, and its users would get
the same behavior as before. Users of Python 2.3 and up would get the
additionally cleaned-up boolean interface - as would users who go the
from __future__ import ... route. That's how Python works and develops,
and we should not work against that from the Postgres side.

So I'm indeed +1 for the conditional approach.
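A minimal sketch of what the conditional approach could look like (hypothetical helper, not the actual plpython patch):

```python
import sys

def to_pg_bool(value):
    # Hypothetical sketch of the conditional approach: return a real
    # bool on Python >= 2.3 (where bool is a subtype of int), and a
    # plain int 0/1 on older interpreters.
    if sys.version_info >= (2, 3):
        return bool(value)
    return 1 if value else 0
```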

Regards
Tino



Re: [HACKERS] STOP all user access except for admin for a few minutes?

2007-01-23 Thread Tino Wildenhain

[EMAIL PROTECTED] schrieb:

Thx Russel,
I want to control it from software; changing network access via pg_hba 
with software doesn't feel right.


 possible case
Say I have a Group called Normal_Rights and one called Zero_Rights.

So dB runs as... Normal_Rights(User A, User B, User C, User D)
Then via sql, superuser REVOKEs those user rights and GRANTs them
Zero_Rights(User A, User B, User C, User D)... i.e. make users members of 
the ZERO rights group.


Then hopefully Postgres kicks them out gracefully?

Then software makes changes and switches them back to their Normal_Rights 
group.


 or  more general case
RECORD all the SQL for all user rights...
REVOKE everything except needed software superusers (postgres, and 
program superuser).

make changes via software.
PLAY BACK all the rights SQL script.

What do you think: will PG kill connections, let them go gracefully, 
or stop after the current transaction?


 maybe I'm in the wrong tree


Yes I'm thinking that too:

Is it possible to make quick structural changes to postgres, with user 
activity?


of course.

Maybe start a transaction that changes structure... wonder if that will 
stop or hold user activity?


Usually not - all your DDL is done in a transaction just like any other
access users would make. So it only fails (but as a whole) if you want
to modify locked tables and such. But you would not end up w/ a partly
changed database in any case. Just make sure you do everything in
a transaction. No need to suspend user accounts for that.


Regards
Tino





Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Tino Wildenhain

Tom Lane schrieb:

Dave Page [EMAIL PROTECTED] writes:

As far as I can see, adding a -f option to pg_dumpall should be straight
forward, the only issue being that we'd need to pass pg_dump an
additional (undocumented?) option to tell it to append to the output
file instead of writing it as normal.


Wouldn't it be easier/better to re-point stdout at the -f file, and not
touch pg_dump at all?


Yeah, and maybe have a modifier like %n or something, which would instead
write different files, replacing %n with the name of the database...
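A sketch of the suggested %n substitution (hypothetical; no such modifier exists in pg_dumpall):

```python
def dump_filename(template, dbname):
    # Hypothetical %n modifier for pg_dumpall -f: each database gets
    # its own output file, with %n replaced by the database name.
    return template.replace('%n', dbname)
```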

Regards
Tino



Re: [HACKERS] ideas for auto-processing patches

2007-01-04 Thread Tino Wildenhain

[EMAIL PROTECTED] schrieb:

On 1/4/07, Gavin Sherry [EMAIL PROTECTED] wrote:

On Thu, 4 Jan 2007, Andrew Dunstan wrote:

...

Pulling branches from
anonvcvs regularly might be burdensome bandwidth-wise. So, like you 
say, a

local mirror would be beneficial for patch testing.


Right some sort of local mirror would definitely speed things up.


An easier speedup in this regard would be using Subversion instead
of CVS. It transfers only diffs to your working copy (or rather,
relative to your last checkout), so it really saves on bandwidth.

Regards
Tino



Re: BugTracker (Was: Re: [HACKERS] 8.2 features status)

2006-08-17 Thread Tino Wildenhain

Josh Berkus schrieb:

Greg,


In short, it's just a tool to solve a problem we actually have (having a
convenient archive of data about current and past bugs) without inventing
problems to solve with extra process that we aren't already doing 
anyways.


RT can be set up similarly but I'm not sure how much work it would 
take to
make it as seamless. Debbugs has the advantage of working that way 
pretty much

out of the box.


Debbugs would be good too.  I'll quiz the Debian folks here at the 
conference about what issues there are with the system.


FWIW, MySQL is pretty proud of their bug tracker, and Marten offered to 
open source it for us.  ;-)


What is wrong with for example trac? (trac.edgewall.com) which actually
runs on postgres just fine...

Regards
Tino



Re: [HACKERS] plpython sets

2006-07-17 Thread Tino Wildenhain
Matteo Bertini wrote:
 Hello all,
 I'm working with pl/python and I'd like to use the set returning
 function feature.
 
 I'm not working in a debug python, so the iterator bug is not a problem for me.
 
 Can someone point me to some plpython.c setof enabled sources?
 
 Hint to build them in an ubuntu dapper environment are welcome too :-P !
 
 Thanks a lot every developer involved in postgres!
 PL/setyourlanguagehere is fantastic!


http://python.projects.postgresql.org/

This works very well for me - although it needs some more
finish (docs and so on); maybe if more people use it,
it can get better.

SRF - even lazy ones (e.g. generators) work nicely there.
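For illustration, a lazy set-returning function body might look like this in plain Python (sketch only, outside any PL harness):

```python
def fib_rows(n):
    # Sketch of a lazy SRF body: in a PL like the one above, returning
    # a generator streams rows one at a time instead of materializing
    # the whole result set in memory.
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b
```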


Regards
Tino Wildenhain



Re: [HACKERS] COPY (query) TO file

2006-06-03 Thread Tino Wildenhain
Mark Woodward wrote:
 Mark Woodward wrote:
...

 This runs completely in the background and can serve as a running
 backup.
 And are you sure it would be much faster than a server-local running
 psql just dumping the result of a query?
 
 No I can't be sure of that at all, but  The COPY command has a
 specific use that is understood and an operation that is separate from the
 normal query mechanism.

Unless you change it to actually execute a query ;)

 (And you could more easy avoid raceconditions in contrast to several
 remote clients trying to trigger your above backup )
 
 Again, the examples may not have been precise in presenting why, the
 focus was mostly what so it could be discussed. As a generic feature it
 has many potential uses. Trying to debate and defend a specific use limits
 the potential scope of the feature.

That's why I'm asking. I'm still wondering which use case actually
justifies integrating the result-set formatter into the backend
vs. just doing it in the frontend (in both places there are already
some routines which could be used to implement it).

 Why have COPY anyway? Why not just use SELECT * FROM TABLE?

Because the special SELECT * FROM TABLE can apparently be optimized.

Ah yes, and if the usual result fetch requires storing the result set
in server RAM, it would be nicer to change that if possible.
I think we run SELECT ... much more often than COPY ;-)
(And I hope nobody comes up with the idea, if COPY were extended
to execute queries, of generally using COPY instead of SELECT for
large result sets in applications. Goodbye portability...)



Re: [HACKERS] COPY (query) TO file

2006-06-02 Thread Tino Wildenhain
Mark Woodward wrote:
 Tom had posted a question about file compression with copy. I thought
 about it, and I want to throw this out and see if anyone thinks it is a
 good idea.
 
 Currently, the COPY command only copies a table, what if it could operate
 with a query, as:
 
 COPY (select * from mytable where foo='bar') as BAR TO stdout
 
 I have no idea if it is doable, but I can see uses for replication

I doubt it would be really useful (apart from maybe saving some work
coding a client app), but did you actually test it with

create table as select ...; followed by a copy of that table,
to see if it really is faster than just the usual select & fetch?

Regards
Tino



Re: [HACKERS] COPY (query) TO file

2006-06-02 Thread Tino Wildenhain
Mark Woodward wrote:
 Mark Woodward wrote:
 Tom had posted a question about file compression with copy. I thought
 about it, and I want to throw this out and see if anyone thinks it is
 a good idea.

 Currently, the COPY command only copies a table, what if it could
 operate
 with a query, as:

 COPY (select * from mytable where foo='bar') as BAR TO stdout

 I have no idea if it is doable, but I can see uses for replication
 I doubt it would be really useful (apart from maybe saving some work
 coding a client app) but did you actually test it with

 create table as select ...; followed by a copy of that table,
 to see if it really is faster than just the usual select & fetch?
 
 Why create table?

Just to simulate and time the proposal.
SELECT ... already works over the network, and if COPY from a
select (which would basically work like yet another wire
protocol) isn't significantly faster, why bother?

 The idea is that you would have one or more redundent databases and use
 the COPY TO/FROM to keep them up to date.

Well, if you have databases you would have regular tables - and
can use copy as it is now :-)

Regards
Tino




Re: [HACKERS] COPY (query) TO file

2006-06-02 Thread Tino Wildenhain
Mark Woodward wrote:
...
 create table as select ...; followed by a copy of that table,
 to see if it really is faster than just the usual select & fetch?
 Why create table?
 Just to simulate and time the proposal.
 SELECT ... already works over the network, and if COPY from a
 select (which would basically work like yet another wire
 protocol) isn't significantly faster, why bother?
 
 Because the format of COPY is a common transmitter/receiver for PostgreSQL,
 like this:
 
 pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
 
 With a more selective copy, you can use pretty much this mechanism to
 limit a copy to a subset of the records in a table.

Ok, but why not just implement this in pg_dump or psql?
Why bother the backend with that functionality?

For example if you copy numbers, an int4 (4 bytes)
gets expanded to up to 11 bytes of text. Of course you
can get the same wire load if you use to_char()
with a regular select.
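The size difference is easy to check; a sketch comparing the fixed 4-byte binary form of an int4 against its text form:

```python
import struct

# An int4 on the binary wire is a fixed 4 bytes; its text form grows
# to as many as 11 bytes (a minus sign plus ten digits).
v = -2147483648  # smallest int4
binary_len = len(struct.pack('!i', v))  # network-order 32-bit int
text_len = len(str(v))
```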

 The idea is that you would have one or more redundent databases and use
 the COPY TO/FROM to keep them up to date.
 Well, if you have databases you would have regular tables - and
 can use copy as it is now :-)
 
 But COPY copies all the records, not some of the records.

yes, that would be the point in having them up to date,
and not partially, maybe, somewhat up to date ;)

COPY is fine for importing data, but for export
I think it should be implemented in the frontend.

Regards
Tino



Re: [HACKERS] COPY (query) TO file

2006-06-02 Thread Tino Wildenhain
Tom Lane wrote:
 Tino Wildenhain [EMAIL PROTECTED] writes:
 Ok, but why not just implement this into pg_dump or psql?
 Why bother the backend with that functionality?
 
 You're not seriously suggesting we reimplement evaluation of WHERE clauses
 on the client side, are you?

no, did I? But what is wrong with something like:

\COPY 'SELECT foo,bar,baz FROM footable WHERE baz=5 ORDER BY foo' TO
file|stdout

which would just run the query (in the backend, of course) and
format the output just like copy would...

I mean, ok, when it's in the backend, that's fine too (besides the
data expansion if you don't implement compression...),
but I thought implementing it in the frontend would be easier...
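A sketch of what such frontend-side formatting amounts to (hypothetical helper; tab-separated values with \N for NULL, roughly COPY's text format):

```python
import csv
import io

def copy_format(rows):
    # Hypothetical client-side COPY-style formatter: tab-separated
    # text, \N for NULL, one line per row -- roughly what the proposed
    # \COPY 'SELECT ...' TO file would emit.
    out = io.StringIO()
    w = csv.writer(out, delimiter='\t', lineterminator='\n')
    for row in rows:
        w.writerow([r'\N' if v is None else v for v in row])
    return out.getvalue()
```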

Regards
Tino



Re: [HACKERS] COPY (query) TO file

2006-06-02 Thread Tino Wildenhain
Mark Woodward wrote:
...

 pg_dump -t mytable | psql -h target -c COPY mytable FROM STDIN

 With a more selective copy, you can use pretty much this mechanism to
 limit a copy to a subset of the records in a table.
 Ok, but why not just implement this into pg_dump or psql?
 Why bother the backend with that functionality?
 
 Because COPY runs on the back end, not the front end, and the front end
 may not even be in the same city as the backend. When you issue a COPY,
 the file it reads or writes is local to the backend. True, the examples I
 gave may not show how that is important, but consider this:


We were talking about COPY to stdout :-) COPY to file is another
issue :-) COPY to a (server-fs) file has so many limitations I don't see
wide use for it. (Of course there are use cases.)

 psql -h remote masterdb -c COPY (select * from mytable where ID >
 xxlastxx) as mytable TO '/replicate_backup/mytable-060602.pgc'
 
 This runs completely in the background and can serve as a running backup.

And are you sure it would be much faster than a server-local running
psql just dumping the result of a query?
(And you could more easily avoid race conditions, in contrast to several
remote clients trying to trigger your above backup.)

But what do I know... I was just asking :-)

Regards
Tino





Re: [HACKERS] [OT] MySQL is bad, but THIS bad?

2006-05-20 Thread Tino Wildenhain
Lukas Kahwe Smith wrote:
...
 apt-get install postgresql-8.1 postgresql-contrib-8.1

 Voila! Tsearch installed at your fingertips. What else were you
 expecting?
 
 I expect this to be one package and I expect this to be what is pushed
 as the default package on all platforms. If someone just sat in a pgsql
 talk (or even a talk that mentions pgsql), has read an article, or picked
 up a book... this is what he should be downloading and installing.

...
 If PostgreSQL pushes FooSQL as its packaged solution at all
 opportunities I am sure it would quickly get into the heads of people
 and if done in a concerted effort along with the corporate sponsors it
 could provide for a huge marketing opportunity and a slew of articles
 from the press. But that is a topic for another list.
 

maybe the package should read: postgresql-heavy,
postgresql-complete, or even, as you seem to suggest, postgresql -
where the other parts are postgresql-clients, postgresql-server,
postgresql-contrib and so on.

Beware, however: if complete means with GUI clients, not
all people would be happy if you pull X and friends onto their
Unix servers :-) So whatever is in complete should depend
on the target platform.

I think the naming scheme of the Debian packages goes in the
right direction - maybe this can be harmonized across the
distributions?

Regards
Tino Wildenhain



Re: [HACKERS] Regrading TODO item alerting pg_hba.conf from SQL

2006-04-16 Thread Tino Wildenhain
...
 2. What do we think about the SQL command to be. Would it be like the
 following or another syntax. 

 GRANT 
  CONNECTION [LOCAL | HOST | HOSTSSL | HOSTNOSSL ] 
  ON [ ALL | mydatabase1 ]
  TO [ ALL | user1,user2,user3 ]
  FROM 127.0.0.1/32
  METHOD [ TRUST | REJECT | MD5 .. ]
 
 Apart from the complaint that this makes no attempt to take care of the
 fact that entries in pg_hba.conf are order sensitive. Where is that
 found in this syntax? What about pg_ident.conf?

there is actually no proof that the current order dependency is really
a good idea. Other access lists work without that constraint.

 3. Could someone clarify the design decisions regarding pg_hba.conf
 file? Why was it done the why it is today? (Tom? Bruce?)
 
 Not sure if there was a design. It was created at some point and
 evolved.

Maybe now we can do a real design? No need to continue on the wrong
path (if it is wrong).

 Now, to just suggest something I've been thinking of. Maybe a way of
 thinking about it is similar to firewall chains in linux. You keep
 pg_hba.conf but allow it to refer to a new auth type chain blah. Then

not that chains are the only or the best solution to firewall rules
out there :-)

 you layer your above grant syntax into those chains. This allow people
 to switch between different auth methods quickly by switching files,
 while allowing people who want to do everything in the database can do
 so too.

Even with in-database rules only you can do the switches - you remove
all entries, keeping your current connection, and then bring them
back when you are ready. Just a matter of some SQL commands in a script.

Kind regards
Tino



Re: [HACKERS] About pg_hba.conf

2006-04-06 Thread Tino Wildenhain

Gevik Babakhani schrieb:

Hello Folks,

This may be a dumb question but please bear a moment with me.
About the TODO item “%Allow pg_hba.conf settings to be controlled via
SQL”: If in the future we could configure the settings by SQL commands,
assuming the settings are saved in an internal table, what would be the
need for a pg_hba.conf file anymore? (except for backward
compatibility, of course)


No, you need the ability to override the settings with external
options to get access to a misconfigured database.

(Well, of course you could run postgres in single-user mode
to get that too, but it would be a little inconvenient...)

Regards
Tino



Re: [HACKERS] control pg_hba.conf via SQL

2006-03-30 Thread Tino Wildenhain

Andrew Dunstan wrote:

Tino Wildenhain wrote:

...

I don't think it has to be ordered in the first place. Since we are
dealing with subnets and stuff - the ordering already lies
in the data - just like routing tables work: most specific
matches first.

I could think of a solution where pg_hba.conf just
overrides the database table (so you have a starting
point with an empty table and/or a re-entry point in case of a
mistake)

...


We don't have the luxury of being able just to throw out old stuff 
because we think it might be neater to do it another way.  The current 
rules for HBA are order dependent. The issue raised as I understood it 
was not to invent a new scheme but to be able to manage it from inside a 
postgres session.


Not sure about the luxury - IIRC there was some change in the format
of pg_hba.conf anyway over time, and besides pgadmin3 I don't see
many tools to edit this file (apart from the usual text editor ;)

So I don't see a strong reason to keep it the way it is now just for
some legacy nobody depends on anyway. Alternatively there could
be something like security.conf or the like which deprecates
pg_hba.conf - so if pg_hba.conf is there and has any active
entry in it, things would be like they are now;
if not, then security.conf and the system table would
work as designed, with security.conf read before the table.

A pg_securitydump or the like could be useful to dump the table
to a file in the security.conf format.

Regards
Tino



Re: [HACKERS] control pg_hba.conf via SQL

2006-03-29 Thread Tino Wildenhain
Andrew Dunstan wrote:
 
 ISTM that the first requirement is for a sane API that will handle the
 fact that HBA lines are ordered. Persistence in itself shouldn't be a
 big problem - we already do that with some shared tables, iirc.
 
 so we might have some functions like:
 
  insert_hba_rule(at_position int, connection_type text, username text,
 dbname text, cidr_host text, method text)
  move_hba_rule(from_position int, to_position int)
  delete_hba_rule(at_position int)
 
 Inventing new SQL syntax might make things a bit tougher.
 

I don't think it has to be ordered in the first place. Since we are
dealing with subnets and stuff - the ordering already lies
in the data - just like routing tables work: most specific
matches first.

I could think of a solution where pg_hba.conf just
overrides the database table (so you have a starting
point with an empty table and/or a re-entry point in case of a
mistake)
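A sketch of order-independent, most-specific-first matching (hypothetical rule format, not a real pg_hba implementation):

```python
import ipaddress

def match_rule(rules, client):
    # Like a routing table: the most specific (longest-prefix) network
    # that contains the client address wins, so the order in which the
    # rules are listed does not matter.
    ip = ipaddress.ip_address(client)
    hits = [(ipaddress.ip_network(net), method) for net, method in rules
            if ip in ipaddress.ip_network(net)]
    if not hits:
        return None
    return max(hits, key=lambda h: h[0].prefixlen)[1]

rules = [('0.0.0.0/0', 'reject'), ('10.0.0.0/8', 'md5'),
         ('10.1.2.0/24', 'trust')]
```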

regards
Tino



Re: [HACKERS] [SQL] Interval subtracting

2006-03-01 Thread Tino Wildenhain
Hannu Krosing schrieb:
On a fine day, Wednesday, 2006-03-01 at 14:36, Scott Marlowe wrote:
 
...
 But we do say both quarter past three (3 hours 15 min) and quarter to
 four (4 hours -15 min) when talking about time.
 
but luckily we don't write it ;)
Some people say it this way: quarter past 3, half past 3, three quarters
past 3, 4. Which seems more logical. :-)

But saying it would be a job for to_char, not for internal storage,
which should _always_ be canonical.
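A toy illustration of that split between canonical storage and display formatting (hypothetical formatter, standing in for to_char):

```python
def spoken(hours, minutes):
    # Display-side formatting only: the stored value stays canonical
    # (e.g. 3 h 45 min) and merely gets *rendered* as "quarter to 4".
    if minutes == 15:
        return 'quarter past %d' % hours
    if minutes == 30:
        return 'half past %d' % hours
    if minutes == 45:
        return 'quarter to %d' % (hours + 1)
    return '%d:%02d' % (hours, minutes)
```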

Regards
Tino



Re: [HACKERS] Pl/Python -- current maintainer?

2006-02-26 Thread Tino Wildenhain
James William Pye schrieb:
 On Sat, Feb 25, 2006 at 01:21:34PM -0700, I wrote:
 
From what I have seen of zope's restricted python, it does, or can, force its
restrictions by checking bytecode. I imagine a simple PL sitting on top of the
untrusted variant that merely implements a custom validator that checks the
bytecode produced by the untrusted PL's validator. The language handler would
remain the same:
 
 [ugh, Correcting my assumptions...]
 
 Zope's RestrictedPython is a custom bytecode generator that compiles Python
 code specially, as opposed to a bytecode processor that validates against some
 rule set as I had thought for some (wishful? ;) reason. The bytecode then 
 needs

The key point is: it replaces dangerous elements while it compiles the
bytecode - in theory you could also walk the tree after the Python
bytecode compiler (not sure if it even works this way).

For example eval(), open(), file(), import, ... are/can be replaced in this
step.
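A toy sketch of the idea in modern Python, walking the AST rather than bytecode (and rejecting rather than rewriting, unlike the real RestrictedPython):

```python
import ast

FORBIDDEN = {'eval', 'exec', 'open', 'file', '__import__'}

def restricted_compile(source):
    # Toy validator: walk the parse tree and refuse references to
    # dangerous builtins before emitting bytecode. Zope's
    # RestrictedPython actually *rewrites* such constructs instead.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in FORBIDDEN:
            raise ValueError('forbidden name: ' + node.id)
    return compile(tree, '<restricted>', 'exec')
```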

 to be executed in a special environment that then imposes some specified
 restrictions at runtime(I'm not really clear on all the details here as I
 am having a very difficult time finding documentation).

The special environment is there only for the fine-grained security
zope would need in this case. (It protects object attributes individually
while maintaining acquisition and all that stuff.)

 This doesn't mean that it couldn't be used. However, it does mean that some
 munging of the handler would be required(Something that I desired to avoid).

You should be able to use most of that technique in the stage where you
create the bytecode - just the step pl/pgsql does too.

Regards
Tino



Re: [HACKERS] Pl/Python -- current maintainer?

2006-02-25 Thread Tino Wildenhain
James Robinson schrieb:
 I see neilc has hacked on it very recently to reduce memory leaks. I 
 take that as both good and bad signs.
 
 We're a [ small ] python shop, and would be most interested in being 
 able to simplify our life through doing some things in plpython  instead
 of pl/pgsql where appropriate. Keeping our constants and so  forth in
 the appropriate python module would make things ever so much  simpler
 here and there at the very least.
 
 But we've never hacked on the backend, nor at the C python API level. 
 But I see no reason why not to start now -- lurked here for many a 
 year. For example, I see that plpython functions cannot be declared  to
 return void. That can't be too tough to remedy. Implementing the  DBI
 2.0 API interface to SPI can wait another day.

Also have a look at http://python.projects.postgresql.org/ -
it needs some more love too, but has high potential.
Maybe it can become the next-generation pl/pythonu? Would be nice.
And with even more love the restricted Python from Zope could
be ported, so there could be a trusted pl/python again :-)

Ok, just hallucinating ;)

Tino



Re: [HACKERS] Updated email signature

2006-02-18 Thread Tino Wildenhain
Joshua D. Drake schrieb:
 


 Anyone able to beat that?


 Sorry, I was still in Junior High in '82 :(  Man, you are *old* :)

 
 At Marc hands himself a foot gun... I was 9 years old in 82.

cool. You too? :-) 1973 must have been a great year :-)

Tino



Re: [HACKERS] pg_hba.conf alternative

2006-02-13 Thread Tino Wildenhain

Q Beukes schrieb:

how? is there some kernel patch to enable you to completely deny
access to root?
Tino Wildenhain pointed out SELinux has a feature like that.


I still don't get your problem (apart from that you can always
google for SELinux).

Why aren't the other admins trustworthy? And why do you
have many of them? If they only check logs and create users,
why do they have to be admins? They could use carefully
configured sudo as well to fulfill their tasks without full
access to the system.

I'd say, grep your problem at the root (literally speaking)

Regards
Tino



Re: [HACKERS] pg_hba.conf alternative

2006-02-08 Thread Tino Wildenhain

Q Beukes schrieb:

Well,

I am not looking for 100% security. I know that full access is full access,
and that even if you were to encrypt the system through Postgres, the
determined person WILL always be able to get it out if they have
system-level access.

All I wanted to do was to prevent the basic SQL/Linux literate user from
accessing
the databases. At the moment it is very easy for them to access the data.

I trust that they won't go as far as overwriting the system with a custom
compiled version, or copying the data and so forth. It's just that we
would feel much better if we knew the data wasn't as open as it is now -
with a simple pg restart it is all open.

Can this only be done by maybe modifying the source to make pg_hba
fields statically
compiled into the executable?


Instead, you might want to read about SELinux.
You can protect files even from root (unless they
reboot ;) but really you should only have trusted
people holding admin accounts. How come you have
somebody untrusted as admin?

Regards
Tino



Re: [HACKERS] Passing arguments to views

2006-02-02 Thread Tino Wildenhain
Chris Campbell schrieb:
...
 That was a very simplistic example and didn't illustrate my point --  I
 apologize. I was trying to think of something succinct and  illustrative
 for a quick mailing list post but came up short.
 
 Maybe a better example would be a situation where you want to do 
 substitutions in places other than the WHERE clause? There's no way  to
 push that out to the calling query. But even in this simple  case, the
 easier-to-grok syntax of making a view look like a function  (and
 codifying the options for restricting the results as arguments  to the
 view) is a nice win in terms of readability and maintainability.

Well, if the view does not suit your needs, why don't you use a
set-returning function instead? Inside it you can do all the magic
you want and still use it much like a table or view.

Regards
Tino



Re: [HACKERS] Multiple logical databases

2006-02-02 Thread Tino Wildenhain
Mark Woodward schrieb:
...
 Unless you can tell me how to insert live data and indexes into a cluster
 without having to reload the data and recreate the indexes, then I hardly
 think I am misinformed. The ad hominem attack wasn't necessary.

I see you had a use case for something like pg_diff and pg_patch ;)
...
 If no one sees a way to manage multiple physical database clusters as one
 logical cluster as something worth doing, then so be it. I have a
 practical example of a valid reason how this would make PostgreSQL easier
 to work with. Yes there are work arounds. Yes it is not currently
 unworkable.

I don't see your problem, really ;)

1) if you have very big and heavily loaded databases, you often have
them on different physical boxes anyway
2) you can run any number of postmasters on the same box - just have
   them listen on different ip:port combinations.

Now to the management - you say cddb and geodb are managed off-host.
So they are not managed on the live server, and so you don't need to
switch your psql console to them.

And yeah, it's really not a problem to quit psql and connect
to a different server anyway :-)

If you don't like to type -p otherport, you can either create
aliases with all the arguments or use something like pgadmin3,
which lets you easily switch from database to database and
from host to host as you like.

Now is there any use case I have missed which you would still
like to have addressed?

Kind regards
Tino Wildenhain



Re: [HACKERS] New project launched : PostgreSQL GUI Installer for

2006-01-31 Thread Tino Wildenhain

Dave Page schrieb:
 

...

As was said, a gui to produce postgresql.conf files (off host)
can be of value. 



pgAdmin?


Well, strictly speaking a GUI text editor is a GUI... but I rather
had in mind something guided, with buttons, select boxes and such,
and references to documentation, calculations and the like.

 :-)

Tino



Re: [HACKERS] New project launched : PostgreSQL GUI Installer for

2006-01-31 Thread Tino Wildenhain

Dave Page schrieb:
...


Well, strictly speaking a GUI text editor is a GUI... but I rather
had in mind something guided, with buttons, select boxes and such,
and references to documentation, calculations and the like.

 :-)



Err, yes. pgAdmin? It's somewhat more than a simple text editor.


Ah, right ;) I hadn't seen it in action before :-) Now that I actually
load a postgresql.conf file I see what you mean. Nice job :-)

Figuring out the correct values for some of the buffers and costs
is still a bit tough. OTOH, I guess there is no easy way to predict
all these.

Regards
Tino




Re: [HACKERS] New project launched : PostgreSQL GUI Installer for

2006-01-30 Thread Tino Wildenhain

Devrim GUNDUZ wrote:

Hi,

As you know, many databases that run on Linux / Unix systems have a GUI
installer which make installation easier and more attractive for some
people.


If you think of the *racle GUI installer, most people find that it
s*cks ;)

Our Windows Installer is very attractive, for example. 


Now, I and Burcu Guzel, who is a Senior Programmer, decided to launch a
new project: pgnixinstaller : 


http://pgfoundry.org/projects/pgnixinstaller/

We are actively looking for developers for the project. Please drop me
an e-mail if you want to join this project. We will use Python, so you
need to be a Python guy to join the project. We are in planning phase,
if you join us earlier, we will be able to share more ideas. 


Might be fun of course. But on Unix you usually have some kind
of package system anyway - how is the installer supposed to
play nicely with it?

Regards
Tino



Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI Installer for

2006-01-30 Thread Tino Wildenhain

Devrim GUNDUZ wrote:

Hi,


...

Are you going to work with the underlying system's package manager, or
put everything in /usr/local?



We'll work with the package manager -- I'm an RPM guy ;)


RPM isn't the only packaging system out there ;)



Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI

2006-01-30 Thread Tino Wildenhain

Jonah H. Harris wrote:
I had to deal with an installer written in python and several in Java... 
IMHO, Java would be a better language for this and you could build off 
some nice OSS installers that already exist (such as IzPack).  Just my 2 
cents :)


Yes! Use Java for the ultimate suckiness of the installer ;) I love
installing all of X11, Java and whatnot on a server just to be able
to install a package about 1/10 their size ;)

SCNR
Tino



Re: [HACKERS] New project launched : PostgreSQL GUI Installer for

2006-01-30 Thread Tino Wildenhain

Joshua D. Drake wrote:
...
As more and more people come on board people are going to want to 
download a .exe (a metaphor),
double click and have it open an installer, they will then want to click 
next, next, continue, finish.


You don't get that with apt-get install.


Well, you can use a frontend and search and click as well. I see no
problem - and it really works, as opposed to:

There is a reason that even Oracle has a graphical installer on Linux, 
because most people installing

the software:

A. Don't know how to use it
B. Probably don't know how to use Linux
C. Don't want to.

Hehehe. Did you actually use this installer? I did! And let me tell
you, you don't get by without any Linux/Unix knowledge.

Regards
Tino



Re: [HACKERS] New project launched : PostgreSQL GUI Installer for

2006-01-30 Thread Tino Wildenhain

Rick Gigger wrote:
I don't see why anyone has a problem with this.  I am certainly never  
going to use it but if it helps someone who isn't a linux person to  use 
it on a project when they would have used something else (like  mysql) 
or if it convinces someone to run postgres on linux instead of  windows 
because they now have a graphical installer on linux then it  seems like 
a good thing to me.  More users = bigger community =  larger potential 
pool of people to help out.  Even if people can't  code they can answer 
newbie (or advanced) questions on the mailing  lists or write 
documentation or even just tell their dba friends  about it.


The more people using postgres the better.  If this will help then  I'm 
all for it.  Just because I would rather do a ./configure make  make 
install doesn't mean that thats the best route for everyone.


As was said, a GUI to produce postgresql.conf files (off host)
can be of value. A package builder can also be useful for the
tuning folks.

For other people - if they don't learn a bit about the package system
on their chosen platform - they will run into other problems sooner
or later.




Re: [HACKERS] Offer for PG Developers/Hackers

2006-01-24 Thread Tino Wildenhain

Tony Caduto wrote:

Hi,
I want to give something back(I would give a donation but sales are poor 
:-( ,so I am offering to any verified Postgresql developer(by verified I 
mean your name shows up on this list  a LOT ) a free copy of PG 
Lightning Admin.


Does this mean a Postgres developer who develops Postgres, or one
who uses Postgres? :-)

Regards
Tino



Re: [HACKERS] Commands per transaction

2006-01-21 Thread Tino Wildenhain
Rod Taylor wrote:
 On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
 
Rod Taylor wrote:

Is there any way of bumping this limit or am I stuck breaking up the
transaction?

Wow, I never heard of anyone reaching the limit :-(  Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
 
...

As a quick fix (as in quickly fixed and quick running ;)
you could load your update data into a temp table via
COPY and then update the large table in one sweep.
I guess you don't have 2^31 tables to update? ;)



Re: [HACKERS] psql readline win32

2006-01-04 Thread Tino Wildenhain

John DeSoi wrote:


On Jan 2, 2006, at 4:00 AM, Magnus Hagander wrote:


Me, I'm not fully happy with psql on win32. I want my tab completion!
(which the gui tools don't do either, from what I can tell. At least
pgadmin doesn't. Yet.)



Mine has tab completion adapted from psql :). There are also commands  
for specific completion types, e.g. complete table, complete  function, 
etc.


I hope to have a beta out soon with 8.1 psql and updated tab  completion 
for the new commands (roles, etc).


Great! I once experimented with dropdowns in a textarea too, but
lost track of it a bit.



Re: [HACKERS] Why don't we allow DNS names in pg_hba.conf?

2006-01-03 Thread Tino Wildenhain
Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
 
One thing that bothers me slightly is that we would need to look up each 
name (at least until we found a match) for each connection. If you had 
lots of names in your pg_hba.conf that could be quite a hit.
 
 
 A possible answer to that is to *not* look up the names from
 pg_hba.conf, but instead restrict the feature to matching the
 reverse-DNS name of the client.  This limits the cost to one lookup per
 connection instead of N (and it'd be essentially free if you have
 log_hostnames turned on, since we already do that lookup in that case).

Or alternatively, (documented) scan and resolve the names
only on restart or SIGHUP. This would limit the overhead,
confine the changes to the config-file scanner, and at least
enable symbolic names in the config files.
(Of course without any wildcards - that would be the drawback.)



Re: [HACKERS] Different length lines in COPY CSV

2005-12-12 Thread Tino Wildenhain
On Monday, 12.12.2005, at 15:08 -0500, Andrew Dunstan wrote:
 
 Tom Lane wrote:
...
 
 You are probably right. The biggest wrinkle will be dealing with various 
 encodings, I suspect. That at least is one thing that doing CSV within 
 the backend bought us fairly painlessly. Perl's Text::CSV_XS module for 
 example simply handles this by declaring that only [\x09\x20-\x7f] are 
 valid in its non-binary mode, and in either mode appears to be MBCS 
 unaware. We should try to do better than that.

Are there any test data files available in a repository?
I think I could give it a shot.

If not, maybe we could set up something like that.

Regards
Tino




Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Tino Wildenhain
On Sunday, 11.12.2005, at 17:55 -0500, Carlos Moreno wrote:
...
 I'm interested in adding additional hash functions -- PG supports, as part
 of the built-in SQL functions, MD5 hashing.  So, for instance, I can simply
 type, at a psql console, the following:
 
 select md5('abc');
 
 My feature request (which again, I'd like to implement it myself) would
 be the ability to do:
 
 select sha1('xyz'), sha256('etc');
 
 (At least these two -- maybe for completeness it would be good to have
 sha224, 384, and 512, but I guess SHA1 and SHA-256 would be a very good
 and sound starting point)
 
 So, can you offer some advice or pointers on how to go about that?

You might want to check out contrib/pgcrypto

more often than not, if you want something, it's
already done ;)

Not sure if this will ever be included in the core,
since not many people need these advanced hash functions.

HTH
Tino Wildenhain




Re: [HACKERS] SERIAL type feature request

2005-12-04 Thread Tino Wildenhain
On Sunday, 04.12.2005, at 08:52 +0100, Zoltan Boszormenyi wrote:
 OK, I admit I haven't read the SQL standards on this matter.
 
 Tino Wildenhain wrote:
 
...
 A SERIAL type has the assumption that its value starts at a low value 
 (1) and
 is increasing. Or is there a type modifier keyword that makes it work 
 backwards?
 A start value would also work here, decreasing from there.

There is no serial type ;) serial is only a macro which boils down
to int4/int8 and a default value of nextval('some_sequence').

This is a little bit kludgy, but I don't know how much you would
gain from a true type.

   
 
 2. Upon INSERTing to a serial column, explicitly given 0 value or 
 'default' keyword
 or omitted field (implicit default) should be interchangeable.
 
 default and omit are these. 0 would be an error. -1 on this too.
   
 
 Why? A sequence in PostgreSQL won't give you 0 even in wraparound mode.
 I just checked it:

This does not mean we should magically translate values to something
else. We aren't MySQL. We are ACID.

 3. When a serial field value is given in an INSERT or UPDATE statement
 and the value is larger the the current value of the sequence then the 
 sequence
 should be modified accordingly.
 
 sideeffects, raceconditions. -1 on this.
   
 
 This event doesn't (shouldn't) occur often, e.g. you have an invoice 
 table, invoice No.
 contains the year, too. It's somewhat natural to handle it with the 
 serial field, so
 it gives out 20051 ... values. At the beginning of the next year, 
 you modify
 the sequence to start at 20061. What I mean is that there may be two 
 paths

Well, you can use setval() for this. Why would you want to do this
inbound? The whole point of sequences is not to set a value
explicitly. Who sets it first? And why and when
should it fail?

After all, if you want a year in the number, use a year,
e.g. prepend your serials with to_char(now(), 'YYYY')
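To illustrate keeping the year out of the sequence entirely, a quick Python sketch (hypothetical helper, names made up - the sequence stays a plain counter and the year is just formatting):

```python
import datetime

# Hypothetical sketch: the sequence stays a plain counter and the year
# is prepended only when formatting the invoice number.
def invoice_number(seq_value, year=None):
    if year is None:
        year = datetime.date.today().year
    return "%d%06d" % (year, seq_value)

print(invoice_number(1, 2005))   # '2005000001'
print(invoice_number(42, 2006))  # '2006000042'
```

That way the sequence never needs to be reset at year boundaries.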

...
 Sounds like this informix is seriously broken ;)
   
 
 
 Yes, and slow, too. :-( That's why I would like to port the company's 
 software to PostgreSQL
 but there way too many places where Informixism were used.

Maybe you can translate these Informixisms to the way Postgres
works. It is always some work to migrate from one DB to another.
It's quite popular for MySQL-to-Postgres, but I think you should
get by with Informix as well. There just aren't so many howtos
on that matter yet.

If you have specific issues you need to solve, just ask on the
list for ideas. But I really doubt there is a point in modifying
Postgres to work the way a slow and sucky database does ;-)

++Tino




Re: [HACKERS] SERIAL type feature request

2005-12-03 Thread Tino Wildenhain
On Saturday, 03.12.2005, at 22:23 +0100, Zoltan Boszormenyi wrote:
 Hi!
 
 I would like to add an entry to PostgreSQL 8.2 TODO:
 - Extend SERIAL to a full-featured auto-incrementer type.
 
 To achieve this, the following three requirements should be fulfilled:
 
 1. The statement parser should be able to handle this:
 
 create table x (
 id serial(N),
 ...
 );
 
 and behind the scenes this would translate into the create sequence ... 
 start N
 before creating the table.

Why isn't N max_id? Or the increment?
Sounds inconsistent. -1 on this.

 2. Upon INSERTing to a serial column, explicitly given 0 value or 
 'default' keyword
 or omitted field (implicit default) should be interchangeable.

DEFAULT and omitting the field already cover this. 0 would be an error. -1 on this too.

 3. When a serial field value is given in an INSERT or UPDATE statement
 and the value is larger the the current value of the sequence then the 
 sequence
 should be modified accordingly.

Side effects, race conditions. -1 on this.

 This is the way Informix handles its serial type, although it doesn't seem
 to have a visible sequence bound to the serial column.

Sounds like this Informix is seriously broken ;)


 Is it feasible in the 8.2 timeframe?

I hope not ;)






Re: [HACKERS] BIN()

2005-11-29 Thread Tino Wildenhain
On Wednesday, 30.11.2005, at 10:15 +0800, Christopher
Kings-Lynne wrote:
 Hi guys,
 
 How would I go about implementing MySQL's BIN() function easily in PL/SQL.
 
 mysql SELECT BIN(12);
  - '1100'
 
 Basically it converts a bigint to a string containing 1's and 0's.
 
 I've tried messing about with bit() types, but those types lack casts to 
 text, etc.  And they are left padded with many zeros.

In Python, I usually go like this:

def trans(value, base="01"):
    value, r = divmod(value, len(base))
    if value:
        return trans(value, base) + base[r]
    return base[r]

Since base above defaults to "01", it renders binary:

trans(10)
-> '1010'

but you can use any base you want:

trans(10, "0123456789abcdef")
-> 'a'

and so on.

If you want it easy, just put the above code
into a PL/Python function.

Or rewrite it in C or PL/pgSQL or something.
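(Aside: later Python versions grew built-ins for the binary and hex cases, so the recursion is only needed for unusual alphabets - a quick sketch:)

```python
# Built-in base conversion (Python 2.6+): no recursion needed for
# the common binary/hex cases.
n = 12
print(format(n, 'b'))  # '1100' - binary digits, no prefix
print(format(n, 'x'))  # 'c'    - hexadecimal
print(bin(n))          # '0b1100' - with the '0b' prefix
```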






Re: [HACKERS] BIN()

2005-11-29 Thread Tino Wildenhain
On Wednesday, 30.11.2005, at 00:03 -0700, Michael Fuhr wrote:
 On Wed, Nov 30, 2005 at 07:42:36AM +0100, Tino Wildenhain wrote:
  In python, I usually go like this:
 
 In Ruby (and therefore in PL/Ruby) you could do this:
 
 10.to_s(2)
 = 1010
 
 10.to_s(16)
 = a

is there a 1000.to_s("abcdefghijk") too? :-)
or 212312321.to_s(range(256)) ?






Re: [HACKERS] Tablespace on ramdisk

2005-11-21 Thread Tino Wildenhain

[EMAIL PROTECTED] wrote:

Hi,
does anyone have experiences about putting a tablespace on ramdisk? Does it
work (and keep working after a restart of the server)?
Thanks in advance for any insight.


Yes, it does work - as long as you don't restart your server.
Postgres does not appreciate disappearing cluster data.

What are you trying to solve, btw?

++Tino



Re: [HACKERS] plpython and bytea

2005-11-21 Thread Tino Wildenhain

Hannu Krosing wrote:

On Mon, 2005-11-21 at 08:37 +0100, Tino Wildenhain wrote:


On Monday, 21.11.2005, at 02:11 +0200, Hannu Krosing wrote:


Hi

It seems that plpython is unable to return bytea string when it contains
NUL bytes:



...


Did you also try:

http://python.projects.postgresql.org/project/be.html

? AFAICT it works a little differently.




The project seems quite interesting, will surely take a deeper look

It looks quite alpha, seems to mess too deeply with the backend, and
the cvs checkout of module be does not build, so I will probably not
be able to use it in production for quite a while :(


Well, I had it running and it was very impressive. However,
there seems to be no easy install yet - I tried a CVS head with
similar problems. I hope the author reads this here.
At least I met him on freenode #postgresql.



Re: [HACKERS] Tablespace on ramdisk

2005-11-21 Thread Tino Wildenhain

[EMAIL PROTECTED] wrote:

I'd like to mimic MySQL's in-memory tables (engine=memory), which structure
survives a server restart (data lost of course).
I suspected that a server restart would be a problem in this case.
Thank you anyway.


You could use temp tables... but usually it isn't worth the
trouble. Adjust your cache memory settings and frequently used
data will be in memory automatically.

HTH
Tino



Re: [HACKERS] plpython and bytea

2005-11-21 Thread Tino Wildenhain
On Monday, 21.11.2005, at 09:08 -0700, James William Pye wrote:
 On Mon, 2005-11-21 at 15:18 +0200, Hannu Krosing wrote:
  The project seems quite interesting, will surely take a deeper look
  
...
 
 The 'layout' package needs to be installed first.
 
 See this quick start section:
 http://python.projects.postgresql.org/quick.html#Fetch+and+Install+the
 +Backend
 ('be' depends on 'lo' and 'ex')

There is:

cvs -d :pserver:anonymous:@cvs.pgfoundry.org co lo ex be

which should be:

cvs -d :pserver:anonymous:@cvs.pgfoundry.org:/cvsroot/python co lo ex be

to work.






Re: [HACKERS] plpython and bytea

2005-11-20 Thread Tino Wildenhain
On Monday, 21.11.2005, at 02:11 +0200, Hannu Krosing wrote:
 Hi
 
 It seems that plpython is unable to return bytea string when it contains
 NUL bytes:
 
 hannu=# CREATE OR REPLACE FUNCTION get_bytea_with_nul() RETURNS bytea AS
 '
 return ''aa\\0bb''
 ' LANGUAGE plpythonu SECURITY DEFINER;
 
 hannu=# select get_bytea_with_nul();
  get_bytea_with_nul
 
  aa
 (1 row)
 
 
 probably related to plpythons way of generating return value via
 converting python objcet to its string representation and then letting
 postgres's input func to convert it back.

Did you also try:

http://python.projects.postgresql.org/project/be.html

? AFAICT it works a little differently.

 
 Btw, does anyone know where Andrew Bosma (the original author of
 plpython) is ? 
 
 I would probably have a paid job improving an opensource project for
 him :)
 
 




Re: [HACKERS] Improving count(*)

2005-11-18 Thread Tino Wildenhain

Zeugswetter Andreas DCP SD wrote:
The instant someone touches a block it would no longer be marked as
frozen (vacuum or analyze or other is not required) and count(*)
would visit the tuples in the block making the correct decision at
that time.

Hmm, so the idea would be that if a block no longer contained any
tuples hidden from any active transaction, you could store the count
and skip reading that page.



I like the approach of informix and maxdb, that can tell the count(*)
instantly without looking at index leaf or data pages.

Imho we could do that with a central storage of count(*) even with mvcc.
The idea is a base value for count(*) and corrective values per open
xid.
To tell the count you add all corrective values whose xid is visible in
snapshot.
Each backend is responsible for compacting xid counters below min open
xid.
Periodically (e.g. at checkpoint time) you compact (aggregate committed
xid counters 
into the base value) and persist the count.
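The base-plus-corrective-deltas scheme can be sketched as a toy in Python (grossly simplified: visibility is just set membership here, and all names are made up):

```python
# Toy model of the proposed count(*) scheme: a persisted base value
# plus per-xid corrective deltas, summed according to what the
# snapshot can see.
base_count = 1000                      # compacted, persisted count
deltas = {101: +5, 102: -2, 103: +7}   # xid -> rows added/removed

def count_star(visible_xids):
    # Add only the deltas of transactions visible in this snapshot.
    return base_count + sum(d for xid, d in deltas.items()
                            if xid in visible_xids)

print(count_star({101, 102}))  # 1003 (1000 + 5 - 2)
print(count_star(set()))       # 1000 (base value only)
```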


Since that costs, I guess I would make it optional and combine it with
materialized 
views that are automatically used at runtime, and can at the same time
answer other 
aggregates or aggregates for groups. 
create materialized view xx_agg enable query rewrite as select count(*),

sum (col1) from xx
[group by col2];



I wonder how many times you really need a count(*) without a WHERE
clause. If I understand you correctly, you are trying to optimize
just this one case?

Regards
Tino



Re: [HACKERS] Improving count(*)

2005-11-18 Thread Tino Wildenhain

Zeugswetter Andreas DCP SD wrote:

Since that costs, I guess I would make it optional and combine it with
materialized views that are automatically used at runtime, and can at
the same time answer other aggregates or aggregates for groups.
create materialized view xx_agg enable query rewrite as select
count(*), sum (col1) from xx [group by col2];

I wonder how many times you really need a count(*) w/o where clause.
If I understand you correctly you are trying to optimize just this
one case?

I guess you have not read to the end. A materialized view with a group
by as indicated in the example is able to answer all sorts of queries
with or without where clauses (e.g. ... where col2 = 'x').


But wouldn't that mean I need a materialized view (do we have
that now, or do I need to play the usual games with triggers?)
for every possible where condition?



Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Tino Wildenhain

Pollard, Mike wrote:

Richard Huxton wrote:

Pollard, Mike wrote:

Firstly, if you just want a count, what's wrong with count(1) or
count(*).

Because unless the column does not allow nulls, they will not return
the same value.

Ah, but in the example given the column was being matched against a
value, so nulls were already excluded.

--



Details, details.  But there is a valid general question here, and
changing the semantics of the query will not address it.  When doing a
count(col), why convert col into a string just so you can determine if
it is null or not?  This isn't a problem on a small amount of data, but


Why convert? A null is always null, no matter which datatype.


it seems like a waste, especially if you are counting millions of
records.  Is there some way to convert this to have the caller convert
nulls to zero and non-nulls to 1, and then just pass an int?  So
logically the backend does:

Select count(case col when null then 0 else 1) from table


Which would be totally silly :-) no matter if it's 0 or 1,
it counts as 1. Do you mean sum() maybe?
Even then you don't need coalesce to convert null to 0,
because sum() just ignores nulls.



And count just adds the number to the running tally.


Which number here?



Mike Pollard
SUPRA Server SQL Engineering and Support

strange...


Cincom Systems, Inc.





Re: [HACKERS] question about count(b) where b is a custom type

2005-11-16 Thread Tino Wildenhain

Pollard, Mike wrote:

If count(col) convert col to a string (an assumption that Martijn
has cast into doubt, or perhaps shredded), then rather than convert all
non-nulls that are not a string into a string, I was proposing
converting the values into an int with the values 0 or 1 (0 means that
row was null for that column, 1 means that row was not null;, since
count(col) means count the non-null rows in col).


I'm not getting how you got the idea of count() doing any conversion.
It does not, and there is nothing in the docs which would lead to this.


Anyway, to make a short story long.  The idea is rather than convert the
column into a string, convert it into a value indicating whether the
column was null or not null (which is all count cares about).  In any
case, it's moot idea since it appears Postgres already does that.


No, count() does not convert. It just counts all non-null values.
If you want to count rows, just use count(*).



[HACKERS] functions marked STABLE not allowed to do INSERT

2005-11-14 Thread Tino Wildenhain

New in 8.1: it seems functions marked STABLE are
not allowed to have any INSERT statement in them.

However in this particular case, the insert does not
violate the rule:

STABLE indicates that within a single table scan the function will 
consistently return the same result for the same argument values, but 
that its result could change across SQL statements.


It basically looks up a value by a foreign key
and builds a surrogate key on demand.
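The pattern looks roughly like this as an in-memory toy (Python; a hypothetical stand-in for the real table plus sequence, just to show why repeated calls yield a stable result):

```python
# Toy stand-in for a surrogate-key table: look the key up, and create
# a new surrogate id on demand (this is where the INSERT would happen).
surrogates = {}                 # natural key -> surrogate id
_next_id = iter(range(1, 1000))

def get_or_create(natural_key):
    if natural_key not in surrogates:
        surrogates[natural_key] = next(_next_id)  # the "INSERT" part
    return surrogates[natural_key]

print(get_or_create('foo'))  # 1 - created on first call
print(get_or_create('foo'))  # 1 - subsequent calls are pure lookups
```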

I know I could make it VOLATILE, but on the other hand I really want
the optimizer to optimize calls away where possible.

Now, what to do besides a private revert of the
patch?

Regards
Tino



Re: [HACKERS] functions marked STABLE not allowed to do INSERT

2005-11-14 Thread Tino Wildenhain
Am Montag, den 14.11.2005, 13:29 -0500 schrieb Robert Treat:
 On Monday 14 November 2005 10:02, Tino Wildenhain wrote:
  New in 8.1 it seems functions marked STABLE are
  not allowed to have any INSERT statement in them.
 
 
 Try hiding your inserts in seperate volitle sql function that you can select 
 inside your stable function.  I think the planner won't be smart enough to 
 realize what your doing to it. 


Now this is really a bug:

=# CREATE OR REPLACE function foo(int) RETURNS int as $$
$# DECLARE f ALIAS FOR $1;
$# BEGIN
$# RETURN (random()*f)::int;
$# END;
$# $$ LANGUAGE plpgsql STABLE;

=# SELECT foo(10);
 foo
-
   6
(1 row)

Instead of screaming here, where I use a VOLATILE
function in my STABLE function - which could really
be dangerous - it just works.

And the other example, where I do my insert on purpose,
fully knowing what I do, gets refused.

Is this a shortcoming of the function compiler?
I don't think so - it retrieves the OIDs of the used
functions anyway, so the check for stability
would be easy - and then the silly scan
for INSERT could be skipped instead.

Regards
Tino




Re: [HACKERS] functions marked STABLE not allowed to do INSERT

2005-11-14 Thread Tino Wildenhain
On Monday, 14.11.2005, at 14:45 -0500, Jaime Casanova wrote:
 On 11/14/05, Tino Wildenhain [EMAIL PROTECTED] wrote:
  On Monday, 14.11.2005, at 13:29 -0500, Robert Treat wrote:
   On Monday 14 November 2005 10:02, Tino Wildenhain wrote:
New in 8.1 it seems functions marked STABLE are
not allowed to have any INSERT statement in them.
   
  
   Try hiding your inserts in seperate volitle sql function that you can 
   select
   inside your stable function.  I think the planner won't be smart enough to
   realize what your doing to it.
 
 
  Now this is really a bug:
 
  =# CREATE OR REPLACE function foo(int) RETURNS int as $$
  $# DECLARE f ALIAS FOR $1;
  $# BEGIN
  $# RETURN (random()*f)::int;
  $# END;
  $# $$ LANGUAGE plpgsql STABLE;
 
  =# SELECT foo(10);
   foo
  -
6
  (1 row)
 
  Instead of screaming here, where I use a VOLATILE
  function in my STABLE function which could really
  be dangerous, it just works.
 
 
 stable functions must show an stable image of the database, but if you
 start to do insertions, deletions and so how stable the image is?

No, the definition is:
STABLE indicates that within a single table scan the function will
consistently return the same result for the same argument values, but
that its result could change across SQL statements.

And I'm not speaking of DELETE. My common use case is
looking up a key in a surrogate-key table and generating
one if not found. If it broke on DELETE
I'd understand it, but it breaks on INSERT, which isn't
acceptable imho.

 now, i don't like the behaviour of letting call volatile functions
 inside immutable/stable ones... but some people use it to do what they
 think is good...

Now we are forcing people not to use INSERT in a STABLE
function, but we happily allow them to use VOLATILE
functions, where the real danger lives. Doesn't sound
very logical to me.

 if you know you can call volatile functions from stable ones maybe you
 asked enough or read enough to actually know what you are doing...

That's the point. I know what I'm doing with my INSERT
but am not allowed to do it; yet if I didn't know what I was doing
and used a volatile function, I could happily do so.

 but if you simply put inserts in your stable functions and expect to
 work, maybe you are not reading enough... you can ask to yourself, am
 i reading enough to actually know what am i doing?

Yes I do.
 
 conclusion: think in it as a netsafe for novices, if you think you are
 expert enough take the net off (calling the volatile functions)

Yes, sure, but since the change does not really prevent noobs
from doing bad things [tm], it should be reverted or at least
made consistent - which would mean banning volatile
functions too.

(IMHO only calling volatile functions should be banned.)




Re: [HACKERS] functions marked STABLE not allowed to do INSERT

2005-11-14 Thread Tino Wildenhain
On Monday, 14.11.2005, at 15:06 -0500, Tom Lane wrote:
 Tino Wildenhain [EMAIL PROTECTED] writes:
  Now this is really a bug:
 
 That's in the eye of the beholder (and one who wasn't paying attention
 to previous discussion of this point, evidently).

Yes I was, but only to the fact that it is not usable for
caching, and that there are some cases (like random())
for which STABLE would be a bad thing [tm].

 The reason why the no-data-change rule is now enforced, not only
 recommended, is that a stable/immutable function now actually would
 not see any changes it did make.  Consider code like
 
   INSERT INTO foo VALUES (42, ...);
   SELECT * INTO rec FROM foo WHERE key = 42;
   IF NOT FOUND THEN
   RAISE EXCEPTION 'where did my row go?';
 
 If this were allowed in stable/immutable functions, the RAISE would
 in fact be reached in 8.1, because the SELECT will be done with the
 snapshot of the query that called the function.  This is a feature,

Ah, this was the missing bit. I thought this would only be true
for IMMUTABLE.

Thanks for the explanation. I'm now fine with it.

Regards
Tino




Re: [HACKERS] Getting table name/tuple from OID

2005-11-07 Thread Tino Wildenhain

Paresh Bafna wrote:

Is there any way to retrieve table name and/or tuple values from OID of
table/tuple?


Yes.



Re: [HACKERS] 'a' == 'a ' (Was: RE: [pgsql-advocacy] [GENERAL]

2005-10-20 Thread Tino Wildenhain
On Wednesday, 19.10.2005 at 22:04 +0200, Tino Wildenhain wrote:
 On Wednesday, 19.10.2005 at 16:29 -0300, Marc G. Fournier wrote:
  I'm CC'ng this over to -hackers ... Tom?  Comments?
  
 ...

   Then we are broken too :)
  
   # select 'a ' = 'a  ';
 ?column?
   --
 f
   (1 row)
 
 
 experiment=# SELECT 'a '::char = 'a  '::char;
  ?column?
 --
  t
 
Sorry, copied wrong line :)

experiment=# SELECT 'a '::char(10) = 'a  '::char(10);
 ?column?
--
 t

and:

SELECT '|' || 'foo  '::char(10) || '|';
 ?column?
--
 |foo|


vs.


SELECT '|' || 'foo  ' || '|';
 ?column?
--
 |foo  |






Re: [HACKERS] 'a' == 'a ' (Was: RE: [pgsql-advocacy] [GENERAL] Oracle buys

2005-10-19 Thread Tino Wildenhain
On Wednesday, 19.10.2005 at 16:29 -0300, Marc G. Fournier wrote:
 I'm CC'ng this over to -hackers ... Tom?  Comments?
 
 On Wed, 19 Oct 2005, Dann Corbit wrote:
 
  Yes, clearly that is the wrong result according to the SQL standard.
 
  Here is a SQL*Server query:
  select 1 where 'a' = 'a ' AND 'a' = 'a  ' AND 'a ' = 'a '
 
  It returns (correctly): 1
 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:pgsql-general-
  [EMAIL PROTECTED] On Behalf Of Marc G. Fournier
  Sent: Wednesday, October 19, 2005 11:41 AM
  To: [EMAIL PROTECTED]
  Cc: pgsql-general@postgresql.org
  Subject: Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase
 
  On Wed, 19 Oct 2005, [EMAIL PROTECTED] wrote:
 
  I was referring to trailing blanks, but did not explicitly say it,
  though showed it in the examples.  I am pretty sure that the SQL
  standard says that trailing whitespace is insignificant in string
  comparison.
 
  Then we are broken too :)
 
  # select 'a ' = 'a  ';
?column?
  --
f
  (1 row)


experiment=# SELECT 'a '::char = 'a  '::char;
 ?column?
--
 t





Re: [HACKERS] pg_dump option to dump only functions

2005-10-09 Thread Tino Wildenhain
On Saturday, 08.10.2005 at 18:03 -0400, Tom Lane wrote:
 Josh Berkus josh@agliodbs.com writes:
  I was wonderring, because I create a lot of server side utility functions,
  whether adding an option to pg_dump to just dump functions has been
  considered. I did a quick perusal of the code, and noted that there is a
  separate section within pg_dump to get the functions, but it is not able to
  be triggered separately from schema and data. Any reason why this wouldn't
  be a good(tm) idea?
 
  It would be an *excellent* idea, along with options to dump specific 
  functions, and both specific and all views/types/operators.  Go for it.
 
 I kinda thought we had a TODO entry for that already, but I see we
 don't.
 
 Another thing you'd find yourself wanting very quickly is an option to
 follow dependencies, ie dump these objects plus everything they depend
 on.  Otherwise you'd have to find the dependencies manually, which
 would get real tedious in any complex schema.
 
 Proposed TODO entries for pg_dump:
 
 * Allow selection of individual object(s) of all types, not just tables
 * In a selective dump, allow dumping of all dependencies of the objects

May I suggest implementing -l / -L options like pg_restore has?
That way you could produce the list of objects to dump and
manipulate it in the same way, and the dependency-tracking
option could be added to pg_restore as well.
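For illustration, the list-file half of that workflow can be tried without a database; the TOC below is only a hand-made approximation of what `pg_restore -l` prints (the entry numbers and layout are invented), filtered down to a functions-only selection:

```shell
# Fake archive TOC in roughly the "pg_restore -l" shape (invented entries).
cat > toc.list <<'EOF'
;
; Archive listing (illustrative)
;
21; 1255 16400 FUNCTION public add_one(integer) owner
22; 1259 16401 TABLE public foo owner
23; 1255 16402 FUNCTION public log_change() owner
EOF

# Keep only the function entries, as a "dump just the functions" run would.
grep ' FUNCTION ' toc.list > functions.list
cat functions.list
```

Against a real archive, the edited list would then go back in via something like `pg_restore -L functions.list archive.fc`.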

Regards
Tino




Re: [HACKERS] postgresql clustering

2005-09-29 Thread Tino Wildenhain

Daniel Duvall wrote:

While clustering in some circles may be an open-ended buzzword --
mainly the commercial DB marketing crowd -- there are concepts beneath
the bull that are even inherent in the name.  However, I understand
your point.


From what I've researched, the concepts and practices seem to fall
under one of two abstract categorizations: fail-over (ok...
high-availability), and parallel execution (high-performance... sure).


Well, I don't know why many people believe parallel execution
automatically means high performance. Actually, most of the time
the performance is much worse this way.
If your dataset remains static and you only do read-only
requests, you get higher performance through load balancing.
If however you make changes to the data, each change has to
be propagated to all nodes, which in fact costs performance.
This depends heavily on the link speed between the nodes.


While some consider the implementation of only one of these to qualify
as a cluster, others seem to demand that a true cluster must
implement both.

What I'm really after is a DB setup that does fail-over and parallel
execution.  Your setup sounds like it would gracefully handle the
former, but cannot achieve the latter.  Perhaps I'm simply asking too
much of a free software setup.


Commercial vendors aren't much better here - they just don't tell you :-)
There are pgpool or SQLRelay, for example, if you want to parallelize
requests; you can combine them with the various replication mechanisms
also available for PG and get what you want - and, most importantly,
get what's possible. Nobody can trick the math :-)


Greets
Tino



Re: [HACKERS] postgresql clustering

2005-09-29 Thread Tino Wildenhain

Jonah H. Harris wrote:
On 9/29/05, Tino Wildenhain [EMAIL PROTECTED] wrote:


Well, I dont know why many people believe parallel execution
automatically means high performance. Actually most of the time
the performance is much worser this way.
If your dataset remains statically and you do only read-only
requets, you get higher performance thru load-balancing.
If howewer you do some changes to the data, the change has to
be propagated to all nodes - which in fact costs performance.
This highly depends on the link speed between the nodes. 



I think you should clarify that the type of clustering you're discussing
is the shared-nothing model, which is most prevalent in open-source
databases.  Shared-disk and shared-memory clustered systems do not have 
the propagation issue but do have others (distributed lock manager, 
etc).  Don't make blind statements.  If you want more information about 
real-world clustering, read the research for DB2 (Mainframe) and 
Oracle RAC.


No, that's not a blind statement ;) It does not matter how the
information is technically shared - shared memory must be
copied or accessed over network links once you have more than
one independent system. Locks are information too - thus the
same constraints apply.

So no matter how you label the problem, the basic constraints of
communication and synchronisation overhead will remain.

Custom solutions can circumvent some of the problems if you
can shift the problem area (e.g. have some read-only areas,
some seldom-write areas, and some high-write or seldom-read
data that need not be propagated immediately).




Re: [HACKERS] Materialized Views in PostgreSQL

2005-09-12 Thread Tino Wildenhain

Jean-Michel Pouré wrote:

Dear friends and all,

Jonathan Gardner did a wonderful job on materialized views:
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html

Are there any plans to integrate materialized views written in plain C in the
future? This could save a lot of time in Web applications.


I fail to see how C would improve the situation here, because those
functions just send queries to the backend. You can use
the recipes on that page just as they are. If you feel like it,
make them a contrib module that can easily be applied to a situation.

I'm not sure what an in-database implementation could look like.
Maybe the SQL standard reserves something for materialized views;
I don't know.

Regards
Tino



Re: [HACKERS] FAQ/HTML standard?

2005-09-10 Thread Tino Wildenhain
On Saturday, 10.09.2005 at 12:59 -0500, Bruno Wolff III wrote:
 On Sat, Sep 10, 2005 at 12:10:19 -0400,
   Andrew Dunstan [EMAIL PROTECTED] wrote:
  
  Is there an HTML standard that we try to follow in our HTML docs such as 
  FAQs?
  
  If there isn't an explicit standard, may I suggest that we adopt XHTML 
  1.0 as the standard?
 
 I ran across an article a few weeks ago that suggested that this wasn't
 all that great of an idea. Using HTML 4.01 should be just as useful.

Well, you can find articles for or against everything. What made you believe
this one was a reasonable one? ;)

Regards
Tino




Re: [HACKERS] typo? was: Version number in psql banner

2005-09-01 Thread Tino Wildenhain
On Thursday, 01.09.2005 at 23:34 -0500, Jim C. Nasby wrote:
 On Thu, Sep 01, 2005 at 11:18:25PM +0100, Simon Riggs wrote:

 As a side note, there's a typo in \?:
 
   \c[onnect] [DBNAME|- [USER]]
 
 Note the | where there should be a ].

No ;) The | stands for the alternative.
The whole bracketed block is optional; inside it
you type either a dbname or a hyphen
for the database name, the hyphen letting you
just switch the user.





Re: [HACKERS] [ANNOUNCE] Welcome Core Team member Dave Page

2005-08-26 Thread Tino Wildenhain

Gavin M. Roy wrote:

Congrats Dave!



Yes, congrats!




Re: [HACKERS] pl/Ruby, deprecating plPython and Core

2005-08-17 Thread Tino Wildenhain

Josh Berkus wrote:

People:

How about we draft some criteria for inclusion of a PL in the main distro?

Suggestions:

1) The PL must be stable (that is, not capable of crashing the backend)
2) The PL must be buildable only using --with-{lang} and createlang 
(assuming that the user has the correct libraries)
3) There must be a regression test included, which tests both creating the 
lang and creating+executing a small function in it.
4) The PL must have at least one maintainer who subscribes to 
pgsql-hackers.
5) It must be possible to build the PL without changing the licensing of 
PostgreSQL (this excludes PL/R, unfortunately).


Controversial Criterion:
6) The PL should be buildable in trusted mode.  (I vote no on this one)

I, myself, do not think that either popularity or inclusion of the language 
in Linux distros should be a criterion.   If PL/Haskell or PL/Smalltalk 
catches on with *our* community it should be good enough for us.  Heck, 
were it not for the licensing and build issues, I'd be advocating strongly 
for PL/R.


+1 on all of this from me



Re: [HACKERS] How to secure PostgreSQL Data for distribute?

2005-08-17 Thread Tino Wildenhain
On Thursday, 18.08.2005 at 09:56 +0700, Premsun Choltanwanich wrote:
 Dear All,
  
  I need to distribute my application, which uses PostgreSQL as its
 database, to my customer. But I still have some questions in my mind about
 database security. I understand that everybody who gets my application
 database will have full control permissions on my database in case
 PostgreSQL is already installed on their computer and they are an
 administrator on PostgreSQL. So that means the data, structure and any
 ideas contained in the database will not be secure at that point. Is my
 understanding correct?
  
  What is the good way to make it all secure? Please advise.

PostgreSQL is secure in that it guarantees your data integrity (as long
as the underlying OS plays well). It is also secure in the sense of
protecting against unauthorized access by third-party users where the
DBA does not permit it (the usual accounts and access controls).

However, if you deliver software to a customer, the software
is the product and the customer can technically do whatever
she wants with it. 

Your copyright on the application protects you legally from
someone using exactly your table layout etc. in another application
to sell to another customer (provided the model isn't too simple
or so state of the art that anybody would trivially arrive at the
same solution).

All the other ideas of IP (intellectual property) seem very
silly when you talk about software. Personally, I have yet to see
a data model worth any thought about 'protecting' or obfuscating
(that is, something that not just any talented database designer
could create when faced with the same problem).





Re: [HACKERS] Testing of MVCC

2005-08-16 Thread Tino Wildenhain

Tom Lane wrote:

Greg Stark [EMAIL PROTECTED] writes:


So why bother with driving multiple invocations of psql under
Expect. Just use DBD::Pg to open as many connections as you want and
issue whatever queries you want.



The bit that I think is missing in DBI is issue a command and don't
wait for the result just yet.  Without that, you cannot for instance
stack up several waiters for the same lock, as you might wish to do to
verify that they get released in the correct order once the original
lock holder goes away.  Or stack up some conflicting waiters and check
to see if deadlock is detected when it should be ... or contrariwise,
not signalled when it should not be.  There's lots of stuff you can
do that isn't exactly probing for race conditions, yet would be awfully
nice to check for in a routine test suite.

I might be wrong though, not being exactly a DBI guru ... can this
sort of thing be done?


I wonder whether there isn't a wrapper around libpq you could use like that?
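Whatever driver eventually grows an asynchronous-submit call, the shape of the test Tom describes - stack up several waiters on a lock and verify none of them proceeds until the holder releases - can be sketched with plain Python stdlib threads standing in for database sessions (nothing here is DBI-, Expect-, or libpq-specific):

```python
# Toy version of the "stacked waiters" test, with threads as sessions.
import threading
import time

lock = threading.Lock()
acquired = []                  # which waiters got the lock, in order

def waiter(name):
    with lock:                 # blocks until the original holder releases
        acquired.append(name)

lock.acquire()                 # the "original lock holder"
threads = [threading.Thread(target=waiter, args=("w%d" % i,)) for i in range(3)]
for t in threads:
    t.start()

time.sleep(0.1)                # give all three waiters time to block
assert acquired == []          # nobody proceeds while the lock is held

lock.release()                 # the holder goes away ...
for t in threads:
    t.join()
assert sorted(acquired) == ["w0", "w1", "w2"]   # ... and every waiter ran
```

A real harness would of course open database connections and issue LOCK/COMMIT instead, which is exactly where the missing issue-and-don't-wait primitive is needed.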



Re: [HACKERS] Testing of MVCC

2005-08-16 Thread Tino Wildenhain

Tom Lane wrote:

Tino Wildenhain [EMAIL PROTECTED] writes:


Tom Lane wrote:


The bit that I think is missing in DBI is issue a command and don't
wait for the result just yet. ...
I might be wrong though, not being exactly a DBI guru ... can this
sort of thing be done?



I wonder if you dont have a wrapper around libpq you can use like that?



Sure, it wouldn't take much to create a minimal C+libpq program that
would do the basics.  But the history of testing tools teaches that


Well no, no. I was just thinking Perl might have something similar to
Python's pyPgSQL module, which has both a DB-API 2 interface and
low-level access to libpq - all nicely accessible from the
scripting language. I'm using it for NOTIFY/LISTEN, for example.


you soon find yourself wanting a whole lot more functionality, like
conditional tests, looping, etc, in the test-driver mechanism.
That's the wheel that I don't want to re-invent.  And it's a big part
of the reason why stuff like Expect and the Perl Test modules have
become so popular: you have a full scripting language right there at
your command.


Sure, see above :)


Maybe the right answer is just to hack up Pg.pm or DBD::Pg to provide
the needed asynchronous-command-submission facility, and go forward
from there using the Perl Test framework.


Is there nothing on CPAN, or whatever it's called?




Re: [HACKERS] pl/Ruby, deprecating plPython and Core

2005-08-15 Thread Tino Wildenhain
On Monday, 15.08.2005 at 10:30 -0700, Joshua D. Drake wrote:
 Hello,
 
 I have negotiated with the author of pl/Ruby to release plRuby under the 
 PostgreSQL license. The reason I did this is the following:
 
 1. I felt we needed a truly OO language in core.
 2. plPython isn't really moving forward and has the whole 
 trusted/untrusted issue.
 
 Now anyone who knows me, knows that I love Python which means this is 
 not a language argument as much as a functionality argument.
 
 Ruby for good or bad is gaining a large following and has become a very 
 active language in a short period of time. It can also be trusted and 
 untrusted.
 
 I believe that unless plPython can either be fixed or is going to 
 continue to move forward as a pl language that we should consider 
 deprecating it and even removing it in 8.2 or 8.3.

There is plpy, which right now works better than plpythonu
(it has support for generators, for example).

See http://python.projects.postgresql.org/quick.html

The author is currently also working on the trusted-language issue.

So maybe, when the time comes, one option would be to replace
pl/python with this one.

 As far as a PL language plruby seems to have some really good stuff. 
 Here is the docs:
 
 http://moulon.inra.fr/ruby/plruby.html
 
 What does everybody think?
 
 Sincerely,
 
 Joshua D. Drake
 
 
 
 
 
 




Re: [HACKERS] US Census database (Tiger 2004FE)

2005-08-04 Thread Tino Wildenhain
On Thursday, 04.08.2005 at 08:40 -0400, Mark Woodward wrote:
  * Mark Woodward ([EMAIL PROTECTED]) wrote:
  I just finished converting and loading the US census data into
  PostgreSQL
  would anyone be interested in it for testing purposes?
 
  It's a *LOT* of data (about 40+ Gig in PostgreSQL)
 
  How big dumped & compressed?  I may be able to host it depending on how
  big it ends up being...
 
 It's been running for about an hour now, and it is up to 3.3G.
 
 pg_dump tiger | gzip > tiger.pgz
 
 I'll let you know. Hopefully, it will fit on a DVD.
 
 You know, ... maybe pg_dump needs a progress bar? (How would it do that, I
 wonder?)
pg_dump -v maybe? ;) *hint hint*

-- 
Tino Wildenhain [EMAIL PROTECTED]




Re: [HACKERS] pg_dump -- data and schema only?

2005-08-04 Thread Tino Wildenhain
On Thursday, 04.08.2005 at 10:26 -0400, Mark Woodward wrote:
 I haven't seen this option; does anyone think it is a good idea?
 
 An option for pg_dump (and maybe pg_dumpall) that dumps only the table
 declarations and the data. No owners, tablespaces, nothing.
 
 This, I think, would allow more generic PostgreSQL data transfers.

pg_dump -s maybe?

See man pg_dump: 

   -s

   --schema-only
  Dump only the object definitions (schema), not data.

Usually one dumps the database with -Fc and then constructs
SQL for data and DDL via pg_restore from this binary dump.
You can then use pg_restore -l, edit the list (for example via sed),
and use it with -L to generate SQL for only those objects.

-- 
Tino Wildenhain [EMAIL PROTECTED]



