[GENERAL] Executing SQL expression from C-functions

2011-01-28 Thread Jorge Arévalo
Hello,

I need to write a C-function to extend my PostgreSQL server
functionality. One of the arguments of that function is a string
representing any PostgreSQL valid expression returning a number. For
example:

3+5*cos(7.4)/8
7
2+2
log(34)
degrees(0.5)
power(9.0, 3.0)
case when 8 > 2 then 1 when 7*5 > 43 then 2 else 3 end

and so on

The point is that my function first replaces some values in the expression
(variable names) with numbers, and then it should execute the expression
and return the result (a number). With PL/pgSQL I can use EXECUTE
expr INTO val, but how could I do it in a C function?
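For reference, the PL/pgSQL technique mentioned above can be sketched as follows (the function name is illustrative, and the expression is interpolated verbatim, so the input must be trusted):

```sql
CREATE OR REPLACE FUNCTION eval_expr(expr text) RETURNS numeric AS $$
DECLARE
    val numeric;
BEGIN
    -- EXECUTE builds and runs the statement dynamically
    EXECUTE 'SELECT (' || expr || ')::numeric' INTO val;
    RETURN val;
END;
$$ LANGUAGE plpgsql;
```

Calling SELECT eval_expr('3+5*cos(7.4)/8'); would then return the computed value.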

Many thanks in advance, and best regards

-- 
Jorge Arévalo
Internet & Mobility Division, DEIMOS
jorge.arev...@deimos-space.com
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Executing SQL expression from C-functions

2011-01-28 Thread Pavel Stehule
Hello

see SPI interface

http://www.postgresql.org/docs/8.4/interactive/spi-examples.html

Regards

Pavel Stehule

2011/1/28 Jorge Arévalo jorge.arev...@deimos-space.com:
 Hello,

 I need to write a C-function to extend my PostgreSQL server
 functionality. One of the arguments of that function is a string
 representing any PostgreSQL valid expression returning a number. For
 example:

 3+5*cos(7.4)/8
 7
 2+2
 log(34)
 degrees(0.5)
 power(9.0, 3.0)
 case when 8 > 2 then 1 when 7*5 > 43 then 2 else 3 end

 and so on

 The point is my function first replace some values in the expression
 (variable names) for numbers and then it should execute the expression
 and returns the result (a number). With PL/pgSQL I can use EXECUTE
 expr INTO val, but how could I do it in a C function?

 Many thanks in advance, and best regards

 --
 Jorge Arévalo
 Internet  Mobilty Division, DEIMOS
 jorge.arev...@deimos-space.com
 http://mobility.grupodeimos.com/
 http://gis4free.wordpress.com
 http://geohash.org/ezjqgrgzz0g

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
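For completeness, the SPI route Pavel points at could look roughly like the following C sketch (untested, 8.4-era API; the function name is illustrative, error handling is minimal, and the expression is interpolated verbatim, so the input must be trusted):

```c
#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;
PG_FUNCTION_INFO_V1(eval_numeric_expr);

/*
 * Evaluate a textual expression such as '3+5*cos(7.4)/8' through SPI
 * and return the result as float8.
 */
Datum
eval_numeric_expr(PG_FUNCTION_ARGS)
{
    char   *expr = text_to_cstring(PG_GETARG_TEXT_P(0));
    char   *sql = palloc(strlen(expr) + 32);
    bool    isnull = true;
    float8  result;

    snprintf(sql, strlen(expr) + 32, "SELECT (%s)::float8", expr);

    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");
    if (SPI_execute(sql, true, 1) != SPI_OK_SELECT || SPI_processed != 1)
        elog(ERROR, "could not evaluate expression: %s", expr);

    /* Pull the single value out before releasing SPI memory */
    result = DatumGetFloat8(SPI_getbinval(SPI_tuptable->vals[0],
                                          SPI_tuptable->tupdesc,
                                          1, &isnull));
    SPI_finish();

    if (isnull)
        elog(ERROR, "expression returned NULL: %s", expr);

    PG_RETURN_FLOAT8(result);
}
```

On the SQL level it would then be declared along the usual lines of CREATE FUNCTION eval_numeric_expr(text) RETURNS float8 AS '$libdir/yourmodule', 'eval_numeric_expr' LANGUAGE C STRICT; (module name illustrative).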




[GENERAL] How best to load modules?

2011-01-28 Thread Steve White
Hello, all!

What are best practices regarding the loading of postgresql modules, say
from the contrib/ directory; specifically, with regard to portability?

I would like to distribute an SQL script which loads a module, and works
with as little further fiddling as possible.

known options
=

Within a session, or in a script, one can use
\i <explicit file path>
But within a script this has the weakness that the file path varies from
one system distribution to another.

One can start psql with
psql ... -f <explicit file path>
but that's a measure taken outside the script, to be done either per
session or by a further measure such as a shell script.

Ideally, the location of the default modules directory (or installation
directory) should be available within a session in some variable or from
some function call.

There are some pre-defined variables, listed in a session by
show all;
but I don't see anything like a directory path there.
Maybe a built-in function returning this directory?  Searched to no avail:
http://www.postgresql.org/docs/8.2/interactive/functions.html

There has been talk about a bigger solution on
http://wiki.postgresql.org/wiki/Module_Manager
but little seems to have happened there in some years.

An environment variable
$libdir, 
is mentioned
http://www.postgresql.org/docs/8.2/static/runtime-config-client.html
but this seems not to be present within a session.
It seems to be expanded within the LANGUAGE C environment, for instance in
tablefunc.sql
 -
CREATE OR REPLACE FUNCTION crosstab2(text)
RETURNS setof tablefunc_crosstab_2
AS '$libdir/tablefunc','crosstab'
LANGUAGE C STABLE STRICT;
 -

Thanks!

-- 
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Steve White +49(331)7499-202
| E-ScienceZi. 27  Villa Turbulenz 
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Astrophysikalisches Institut Potsdam (AIP)
| An der Sternwarte 16, D-14482 Potsdam
|
| Vorstand: Prof. Dr. Matthias Steinmetz, Peter A. Stolz
|
| Stiftung privaten Rechts, Stiftungsverzeichnis Brandenburg: III/7-71-026
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -



Re: [GENERAL] Executing SQL expression from C-functions

2011-01-28 Thread Jorge Arévalo
2011/1/28 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 see SPI interface

 http://www.postgresql.org/docs/8.4/interactive/spi-examples.html

 Regards

 Pavel Stehule


Hi Pavel,

Thanks a lot!

Best regards,
-- 
Jorge Arévalo
Internet & Mobility Division, DEIMOS
jorge.arev...@deimos-space.com
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g

 2011/1/28 Jorge Arévalo jorge.arev...@deimos-space.com:
 Hello,

 I need to write a C-function to extend my PostgreSQL server
 functionality. One of the arguments of that function is a string
 representing any PostgreSQL valid expression returning a number. For
 example:

 3+5*cos(7.4)/8
 7
 2+2
 log(34)
 degrees(0.5)
 power(9.0, 3.0)
 case when 8 > 2 then 1 when 7*5 > 43 then 2 else 3 end

 and so on

 The point is my function first replace some values in the expression
 (variable names) for numbers and then it should execute the expression
 and returns the result (a number). With PL/pgSQL I can use EXECUTE
 expr INTO val, but how could I do it in a C function?

 Many thanks in advance, and best regards

 --
 Jorge Arévalo
 Internet  Mobilty Division, DEIMOS
 jorge.arev...@deimos-space.com
 http://mobility.grupodeimos.com/
 http://gis4free.wordpress.com
 http://geohash.org/ezjqgrgzz0g

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general





Re: [GENERAL] How best to load modules?

2011-01-28 Thread Dimitri Fontaine
Steve White swh...@aip.de writes:
 What are best practices regarding the loading of postgresql modules, say
 from the contrib/ directory; specifically, with regard to portability?

 I would like to distribute an SQL script which loads a module, and works
 with as little further fiddling as possible.

See about PGXS.

  http://www.postgresql.org/docs/9.0/static/xfunc-c.html#XFUNC-C-PGXS

 known options
 =

 Within a session, or in a script, one can use 
   \i explicit file path
 But within a script this has the weakness that the file path varies from
 one system distribution to another.

 One can start psql with 
   psql ... -f explicit file path
 but that's a measure taken outside the script, to done either with session,
 or else be done by a further measure such as a shell script.

 Ideally, the location of the default modules directory (or installation
 directory) should be available within a session in some variable or from
 some function call.

You can use pg_config to get this PATH, and in recent versions of
PostgreSQL you can use $libdir as the module directory name.

  select name, setting from pg_settings where name ~ 'dynamic_library_path';

 There has been talk about a bigger solution on
   http://wiki.postgresql.org/wiki/Module_Manager
 but little seems to have happened there in some years.

It seemed stalled for a long time because the harder part of this
development was to get an agreement among hackers about what to develop
exactly.  We've been slowly reaching that between developer meetings in
2009 and 2010, and the result should hit the official source tree before
the next developer meeting in 2011 :)

  https://commitfest.postgresql.org/action/patch_view?id=471
  https://commitfest.postgresql.org/action/patch_view?id=472

If you want to see more details about the expected-to-be-commited-soon
development work, have a look there:

  http://pgsql.tapoueh.org/extensions/doc/html/extend-extension.html

If you have enough time and interest into the feature, you can even
clone the git repository where the development occurs (branches named
extension and upgrade) and try it for yourself, then maybe send a
mail about your findings (we call that a review):

  http://git.postgresql.org/gitweb?p=postgresql-extension.git;a=summary
  http://wiki.postgresql.org/wiki/Reviewing_a_Patch

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] Select query ignores index on large table

2011-01-28 Thread Michael Kemanetzis
Thanks, it did help. Now queries run in almost zero time.
I had thought of doing that, but since the same configuration was working
OK on MSSQL I thought it should also work here.
Now with that index the server query times are a lot faster than MSSQL
without it.

Since it is working I will leave it like that, at least until I know more
about how Postgres behaves (and databases in general) with indexes.

Thanks again!

On Thu, Jan 27, 2011 at 4:12 PM, hubert depesz lubaczewski 
dep...@depesz.com wrote:

 On Thu, Jan 27, 2011 at 04:04:02PM +0200, Michael Kemanetzis wrote:
  Hello, I'm experiencing a strange behavior running a simple select query
 on
  a table that has about 12 million rows. Specifically, changing the
 LIMIT
  value seems to change the execution plan but the result in one of the
 cases
  is unjustifiably slow, as if it ignores all indexes.
 
  The table structure, the queries used and the results are here:
  http://pastebin.com/fn36BuKs
 
  Is there anything I can do to improve the speed of this query?

 create index qq on vehicleevent (veh_id, event_id)
 could help.

 Best regards,

 depesz

 --
 Linkedin: http://www.linkedin.com/in/depesz  /  blog:
 http://www.depesz.com/
 jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Select query ignores index on large table

2011-01-28 Thread Michael Kemanetzis
I am answering just for the sake of answering your questions.
What hubert depesz lubaczewski suggested fixed the problem I had.


I have other queries that need event_id to be the clustered index.
veh_id is spread all over the table (for veh_id 3 there are no records).
Due to the spread of records all over the table, the second plan, which
scans the whole table to get enough records, takes ages.
A planner should probably predict this (the fragmentation of veh_id) and
mostly consider the second solution.
Maybe it does that... who knows. All I know is that my problem is fixed
with the second, two-field index.

Thank you for your effort to help.

On Thu, Jan 27, 2011 at 8:14 PM, Alban Hertroys 
dal...@solfertje.student.utwente.nl wrote:

 On 27 Jan 2011, at 15:04, Michael Kemanetzis wrote:

  Hello, I'm experiencing a strange behavior running a simple select query
 on a table that has about 12 million rows. Specifically, changing the
 LIMIT value seems to change the execution plan but the result in one of
 the cases is unjustifiably slow, as if it ignores all indexes.
 
  The table structure, the queries used and the results are here:
 http://pastebin.com/fn36BuKs
 
  Is there anything I can do to improve the speed of this query?

 What does explain analyse say about query B?

 According to the query plan there are about 30k rows with veh_id = 3. From
 the amount of disk I/O you describe it would appear that the rows
 corresponding to that ID are all over the place. I expect that clustering
 that table on the veh_id index will help in that case.

 It does seem a bit strange that the planner is choosing an index scan for
 30k records, I'd expect a sequential scan to be more efficient. That seems
 to be another indication that your records are very fragmented with regards
 to the veh_id.

 That, or you are running out of memory (the setting to look at is work_mem
 I think). Did you change any settings from the defaults?


 BTW, 12M records isn't considered a large table by many Postgres users.
 It's certainly not small though, I suppose average fits the bill.

 Alban Hertroys

 --
 If you can't see the forest for the trees,
 cut the trees and you'll see there is no forest.







Re: [GENERAL] Store base64 in database. Use bytea or text?

2011-01-28 Thread Andre Lopes
Hi,

Another question about this subject.

Is it possible to cache these images from the database? In the future I
will need to cache the pictures.

If you have any knowledge about this, please give me a clue.

Best Regards,




On Wed, Jan 26, 2011 at 2:09 PM, Bill Moran wmo...@potentialtech.com wrote:
 In response to Dmitriy Igrishin dmit...@gmail.com:

 2011/1/26 Andre Lopes lopes80an...@gmail.com

  Thanks for the reply.
 
  I will mainly store files with 100kb to 250kb not bigger than this.
 
  PQescapeByteaConn is not available in a default installation of
  PostgreSQL? My hosting account hava a standard installation of
  PostgreSQL. There are other options for escaping binary files?
 
  Best Regards,
 
 PQescapeByteaConn is a function of libpq, the native C client library.
 In your case (PHP) you should use its functions to encode binary
 data before including it into a query (e.g., in base64) and use
 the built-in decode() function of Postgres:
 -- Pseudo-code
 INSERT INTO img (dat) VALUES (decode(BASE64_INPUT_FROM_PHP, 'base64'));

 where dat is a column of table img of type bytea.

 More specifically:

 $query = "INSERT INTO image_data (bytea_field) VALUES ('" .
         pg_escape_bytea($binary_data) . "')";
 pg_query($query);

 And to get it back out:
 $query = "SELECT bytea_field FROM image_data";
 $rs = pg_query($query);
 $row = pg_fetch_assoc($rs);
 $binary_data = pg_unescape_bytea($row['bytea_field']);

 (note that I may have omitted some steps for clarity)

 DO NOT use parametrized queries with PHP and bytea (I hate to say that,
 because parametrized fields are usually a very good idea).  PHP has a
 bug that mangles bytea data when pushed through parametrized fields.

 PHP bug #35800

 --
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/




Re: [GENERAL] Store base64 in database. Use bytea or text?

2011-01-28 Thread rsmogura

Hi,

By means of the database alone, it is impossible. If you want to cache,
add a version or last-modified column, then ask for changes and cache the
data locally.


Kind regards,
Radosław Smogura
http://softperience.eu
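That suggestion could be sketched like this (table and column names are illustrative):

```sql
ALTER TABLE image_data
    ADD COLUMN last_modified timestamptz NOT NULL DEFAULT now();

-- The client keeps (id, last_modified, bytes) in its local cache and
-- re-fetches a row only when the stored timestamp is newer:
SELECT bytea_field, last_modified
FROM image_data
WHERE id = $1
  AND last_modified > $2;   -- $2 = timestamp of the cached copy
```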

On Fri, 28 Jan 2011 13:32:31 +, Andre Lopes wrote:

Hi,

Another question about this subject.

It is possible to cache this images from the database? In the future I
will need to cache the pictures.

If you have any knowledge about this, please give me a clue.

Best Regards,




On Wed, Jan 26, 2011 at 2:09 PM, Bill Moran 
wmo...@potentialtech.com wrote:

In response to Dmitriy Igrishin dmit...@gmail.com:


2011/1/26 Andre Lopes lopes80an...@gmail.com

 Thanks for the reply.

 I will mainly store files with 100kb to 250kb not bigger than 
this.


 PQescapeByteaConn is not available in a default installation of
 PostgreSQL? My hosting account hava a standard installation of
 PostgreSQL. There are other options for escaping binary files?

 Best Regards,

PQescapeByteConn is a function of libpq - native C client library.
In you case (PHP) you should use its functions to encode binary
data before including it into a query (e.g., in base64) and use
built-in decode() function of Postgres:
-- Pseudo-code
INSERT INTO img (dat) VALUES (decode(BASE64_INPUT_FROM_PHP, 
'base64'));


where dat column of table img of type bytea.


More specifically:

$query = "INSERT INTO image_data (bytea_field) VALUES ('" .
        pg_escape_bytea($binary_data) . "')";
pg_query($query);

And to get it back out:
$query = "SELECT bytea_field FROM image_data";
$rs = pg_query($query);
$row = pg_fetch_assoc($rs);
$binary_data = pg_unescape_bytea($row['bytea_field']);

(note that I may have omitted some steps for clarity)

DO NOT use parametrized queries with PHP and bytea (I hate to say 
that,
because parametrized fields are usually a very good idea).  PHP has 
a

bug that mangles bytea data when pushed through parametrized fields.

PHP bug #35800

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/






Re: [GENERAL] How best to load modules?

2011-01-28 Thread Steve White
Hi Dimitri!

PGXS is interesting, but a bigger solution than I was looking for:
ideally, some simple commands for loading the module from my .sql script.

pg_config seems to be in yet another package, postgresql84-devel.  It is a
shell utility for getting such information.  This is again far removed from
the postgresql session, and more bother for the user to install.
Maybe I could run it from the script, regex it for the SHAREDIR key, and
construct from that '$SHAREDIR/contrib'.
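That idea can be sketched in shell. Here the pg_config output is simulated so the parsing step is visible (pg_config with no arguments prints `NAME = value` lines; newer versions also accept `--sharedir` directly):

```shell
# Simulated `pg_config` output; in practice: output=$(pg_config)
output="SHAREDIR = /usr/share/pgsql"

# Extract the SHAREDIR value and build the contrib path
sharedir=$(printf '%s\n' "$output" | sed -n 's/^SHAREDIR = //p')
contrib="$sharedir/contrib"
echo "$contrib"    # /usr/share/pgsql/contrib

# Then, hypothetically: psql -d mydb -f "$contrib/tablefunc.sql"
```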

A simple variable or function returning the library path would have
solved my present problem.  Perhaps we should make a feature request.

A proper notion of a module (something like the Python import command)
would be really nice, of course, and should already have been there a long
time ago, and it seems to be the aim of the Module_Manager proposal.

Cheers!

On 28.01.11, Dimitri Fontaine wrote:
 Steve White swh...@aip.de writes:
  What are best practices regarding the loading of postgresql modules, say
  from the contrib/ directory; specifically, with regard to portability?
 
  I would like to distribute an SQL script which loads a module, and works
  with as little further fiddling as possible.
 
 See about PGXS.
 
   http://www.postgresql.org/docs/9.0/static/xfunc-c.html#XFUNC-C-PGXS
 
  known options
  =
 
  Within a session, or in a script, one can use 
  \i explicit file path
  But within a script this has the weakness that the file path varies from
  one system distribution to another.
 
  One can start psql with 
  psql ... -f explicit file path
  but that's a measure taken outside the script, to done either with session,
  or else be done by a further measure such as a shell script.
 
  Ideally, the location of the default modules directory (or installation
  directory) should be available within a session in some variable or from
  some function call.
 
 You can use pg_config to get this PATH, and in recent versions of
 PostgreSQL you can use $libdir as the module directory name.
 
   select name, setting from pg_settings where name ~ 'dynamic_library_path';
 
  There has been talk about a bigger solution on
  http://wiki.postgresql.org/wiki/Module_Manager
  but little seems to have happened there in some years.
 
 It seemed stalled for a long time because the harder part of this
 development was to get an agreement among hackers about what to develop
 exactly.  We've been slowly reaching that between developer meetings in
 2009 and 2010, and the result should hit the official source tree before
 the next developer meeting in 2011 :)
 
   https://commitfest.postgresql.org/action/patch_view?id=471
   https://commitfest.postgresql.org/action/patch_view?id=472
 
 If you want to see more details about the expected-to-be-commited-soon
 development work, have a look there:
 
   http://pgsql.tapoueh.org/extensions/doc/html/extend-extension.html
 
 If you have enough time and interest into the feature, you can even
 clone the git repository where the development occurs (branches named
 extension and upgrade) and try it for yourself, then maybe send a
 mail about your findings (we call that a review):
 
   http://git.postgresql.org/gitweb?p=postgresql-extension.git;a=summary
   http://wiki.postgresql.org/wiki/Reviewing_a_Patch
 
 Regards,
 -- 
 Dimitri Fontaine
 http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
 

-- 
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Steve White +49(331)7499-202
| E-ScienceZi. 27  Villa Turbulenz 
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Astrophysikalisches Institut Potsdam (AIP)
| An der Sternwarte 16, D-14482 Potsdam
|
| Vorstand: Prof. Dr. Matthias Steinmetz, Peter A. Stolz
|
| Stiftung privaten Rechts, Stiftungsverzeichnis Brandenburg: III/7-71-026
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -



Re: [GENERAL] Dumpall without OID

2011-01-28 Thread Girts Laudaks
Well, it seems that everything is OK. There are no OIDs used from the 
application side, but they still appear in the database tables; this was 
what caused the confusion.


Thanks,
G.

On 2011.01.27. 21:33, Adrian Klaver wrote:

On 01/27/2011 04:52 AM, Girts Laudaks wrote:

Hi,

What could be the possible damage if a database is migrated without the
-o (OID) option? Integrity of data?


User OIDs are deprecated, so the main problem is whether you are 
currently using them as foreign keys. If you do not specify the -o 
switch and have OIDs specifically defined for a table (not the default), 
then new ones will be generated when the data is restored. If your 
schema/app depends on the old numbers then it would be a problem.



What are the options to solve this problem if it is migrated this way?
Shouldn't pg_dumpall work like an exact copy of db?
What could have gone wrong if only these commands were used?

pg_dumpall -v -U postgres | gzip > /var/lib/pgsql/dump.data.gz
gunzip -c /var/lib/pgsql/dump.data.gz | psql -U postgres > /tmp/import.log


They were used to migrate from PG8.4 to PG9

Regards,
Girts








Re: [GENERAL] Adding ddl audit trigger

2011-01-28 Thread Kenneth Buckler
You just need to log DDL, correct?

Why not just edit postgres.conf and set log_statement to 'ddl'.

See http://www.postgresql.org/docs/9.0/static/runtime-config-logging.html

If you need to include username, database, etc, take a look at
log_line_prefix on the same page.
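Concretely, the suggestion amounts to a postgresql.conf fragment like this (the prefix format is only an example):

```
# postgresql.conf
log_statement = 'ddl'                       # log CREATE/ALTER/DROP etc.
log_line_prefix = '%t [%p] user=%u db=%d '  # timestamp, pid, user, database
```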

Ken

On Wed, Jan 26, 2011 at 4:30 PM, El Co lc...@yahoo.com wrote:
 Trying to get some DDL auditing in a development environment by adding
 triggers to pg_proc, pg_class,pg_type,pg_trigger and getting the following
 error:



 ERROR: permission denied: pg_proc is a system catalog

 SQL state: 42501



 Is there any way to achieve DDL auditing in Postgres and trace any
 new/dropped/changed object into a table?

 All I need is to at least be able and save a userid(current_user),
 timestamp, action, and the name of the object and this could be done easily
 by adding triggers to these pg catalogs.




Re: [GENERAL] Store base64 in database. Use bytea or text?

2011-01-28 Thread Dmitriy Igrishin
2011/1/28 Andre Lopes lopes80an...@gmail.com

 Hi,

 Another question about this subject.

 It is possible to cache this images from the database? In the future I
 will need to cache the pictures.

 If you have any knowledge about this, please give me a clue.

 Best Regards,

How would you like to cache them? On the file system of the client (e.g.,
a web server)?
Why would you like to cache them?





 On Wed, Jan 26, 2011 at 2:09 PM, Bill Moran wmo...@potentialtech.com
 wrote:
  In response to Dmitriy Igrishin dmit...@gmail.com:
 
  2011/1/26 Andre Lopes lopes80an...@gmail.com
 
   Thanks for the reply.
  
   I will mainly store files with 100kb to 250kb not bigger than this.
  
   PQescapeByteaConn is not available in a default installation of
   PostgreSQL? My hosting account hava a standard installation of
   PostgreSQL. There are other options for escaping binary files?
  
   Best Regards,
  
  PQescapeByteConn is a function of libpq - native C client library.
  In you case (PHP) you should use its functions to encode binary
  data before including it into a query (e.g., in base64) and use
  built-in decode() function of Postgres:
  -- Pseudo-code
  INSERT INTO img (dat) VALUES (decode(BASE64_INPUT_FROM_PHP, 'base64'));
 
  where dat column of table img of type bytea.
 
  More specifically:
 
  $query = "INSERT INTO image_data (bytea_field) VALUES ('" .
  pg_escape_bytea($binary_data) . "')";
  pg_query($query);
 
  And to get it back out:
  $query = "SELECT bytea_field FROM image_data";
  $rs = pg_query($query);
  $row = pg_fetch_assoc($rs);
  $binary_data = pg_unescape_bytea($row['bytea_field']);
 
  (note that I may have omitted some steps for clarity)
 
  DO NOT use parametrized queries with PHP and bytea (I hate to say that,
  because parametrized fields are usually a very good idea).  PHP has a
  bug that mangles bytea data when pushed through parametrized fields.
 
  PHP bug #35800
 
  --
  Bill Moran
  http://www.potentialtech.com
  http://people.collaborativefusion.com/~wmoran/
 




-- 
// Dmitriy.


Re: [GENERAL] Dumpall without OID

2011-01-28 Thread Adrian Klaver
On Friday 28 January 2011 4:54:18 am Girts Laudaks wrote:
 Well, seems that everything is OK. There are no OIDs used from the
 application side but they still appear in the database tables, this was
 what made the confusion.

 Thanks,
 G.


OIDS on user tables have not been on by default since 8.0. Possible reasons why 
they exist:
If the tables have been around since 8.0 or earlier, OIDs will carry on with them. 
The table CREATE statements have the WITH OIDS clause.
In postgresql.conf the default_with_oids setting is set on.

In addition to the point I made yesterday, some old database drivers rely on 
OIDS to determine uniqueness. If you are positive that they are not needed you 
can use the ALTER TABLE table_name SET WITHOUT OIDS to remove the OID column 
from a table 
(http://www.postgresql.org/docs/9.0/interactive/sql-altertable.html)
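To check which user tables still carry the column, something like the following should work against the catalogs of that era (note that pg_class.relhasoids was removed in PostgreSQL 12):

```sql
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relhasoids
  AND c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```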




-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Problem with restoring from backup on 9.0.2

2011-01-28 Thread hubert depesz lubaczewski
On Thu, Jan 27, 2011 at 10:23:52PM +, Thom Brown wrote:
 Depesz, did you ever resolve this?

Robert Treat did some digging. Current status is: slave backup works as
long as you don't enable hot standby.

I will be working on an omnipitr-backup-slave fix, but can't give you an
ETA at the moment.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] temporal period type and select distinct gives equality error

2011-01-28 Thread Jeff Davis
On Thu, 2011-01-27 at 07:32 -0500, Arturo Perez wrote:
 I thought I saw that in CVS but when I checked it out and installed it 
 the error did not go away.
 
 Let me try that again.  Do you think I'd need to reinstall the server 
 itself to insure the proper behavior?

No, reinstalling the extension should fix it.

Also, there is a mailing list for this extension as well: 
http://lists.pgfoundry.org/pipermail/temporal-general/

Regards,
Jeff Davis




Re: [GENERAL] Understanding PG9.0 streaming replication feature

2011-01-28 Thread Ray Stell
On Wed, Jan 26, 2011 at 09:02:24PM -0500, Ray Stell wrote:
 On Wed, Jan 26, 2011 at 02:22:41PM -0800, Dan Birken wrote:
   Can you give some concrete suggestions on what needs to be added?  The
   current documentation is here:


It seems like there is a departure in postgresql.conf/pg_hba.conf handling
with 9.x at standby activation.  I'm running 8.x PITR standbys with the
same conf files on both systems.  This is not possible in 9.x, right?

After the standby is activated, don't you have to restart with a
different config file in order to get a new standby into recovery?

I suggest a statement about actions needed after a standby is activated.
This could be added somewhere in section 25.2.x.

wal_level and max_wal_senders are not dynamic and are not set as needed on
the new primary.  Could this be handled in the activation code?
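For reference, the two settings in question live in postgresql.conf on the (new) primary and, in 9.0, require a restart to change:

```
# postgresql.conf on the new primary
wal_level = hot_standby   # or 'archive' if hot standby is not needed
max_wal_senders = 3       # allow streaming connections from standbys
```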



[GENERAL] Full Text Index Scanning

2011-01-28 Thread Matt Warner
I'm in the process of migrating a project from Oracle to Postgres and have
run into a feature question. I know that Postgres has a full-text search
feature, but it does not allow scanning the index (as opposed to the data).
Specifically, in Oracle you can do select * from table where
contains(colname,'%part_of_word%') > 1. While this isn't terribly efficient,
it's much faster than full-scanning the raw data and is relatively quick.

It doesn't seem that Postgres works this way. Attempting to do this returns
no rows: select * from table where to_tsvector(colname) @@
to_tsquery('%part_of_word%')

The reason I want to do this is that the partial word search does not
involve dictionary words (it's scanning names).

Is this something Postgres can do? Or is there a different way to scan
the index?

TIA,

Matt


Re: [GENERAL] tablespace restore

2011-01-28 Thread shl7c

Vangelis,

Did you find a good way to achieve what you were asking about? I have a
similar desire to migrate a large table and its indices.

Regards,
Sky



[GENERAL] Postgresql-8.4.6, 64bit, Solaris 10u9, dtrace

2011-01-28 Thread dennis jenkins
Hello Everyone,

My goal is to install a 64-bit build of the latest Postgresql 8.4
(not ready for 9.0 yet) onto a Solaris 10u9 server (Intel chips,
X4270), with dtrace support.  Postgresql compiles just fine when
configured with --disable-dtrace.  Attempting to compile when
configured with --enable-dtrace fails.  Specifically, the dtrace
compiler utility fails to compile src/backend/utils/probes.d:

$ ./configure CC='gcc -m64' MAKE=gmake DTRACEFLAGS='-64'
--prefix=/opt/local/x64/postgresql-8.4.6 --enable-dtrace
--with-openssl --with-pam --enable-thread-safety --enable-debug
--with-libraries=/opt/local/x64/openssl-1.0.0c/lib:/opt/local/x64/readline-6.1/lib
--with-includes=/opt/local/x64/openssl-1.0.0c/include:/opt/local/x64/readline-6.1/include

$ gmake
(lots of output omitted)
dtrace: failed to link script utils/probes.d: an error was encountered
while processing access/transam/clog.o
gmake[2]: *** [utils/probes.o] Error 1

  Google reveals very little about this error.  Others reported
problems compiling Postgresql-8.2 with dtrace on Solaris 10u3.  A fix
was promised for 10u4.  I can only surmise that the fix failed.  The
reported issue was that dtrace was unable to create probes to
statically defined functions.

  Someone suggested building postgresql on Solaris Express (what will
become Solaris 11), and then installing the binaries on Solaris 10.
So I tried that.  I built Postgresql without dtrace and installed it
on Solaris 10.  I built a virtual machine with Oracle Solaris Express,
installed the gcc compiler and a few other tools.  I then built and
installed readline, openssl, postgresql and slony following my
build instructions, except that I used --enable-dtrace on postgresql.
It built and installed just fine.  So I tarred up
/opt/local/x64/postgresql-8.4.6, copied the tarball to the
Solaris-10 server, renamed the existing install to
.../no-dtrace-postgresql, and untarred the build with dtrace.  I then
fired up postgresql using my SMF script and it worked GREAT!

   "Awesome", I thought... until I tried to use the dtrace script
provided in section "26.4.3 Using Probes" on the postgresql.org web
site (http://www.postgresql.org/docs/8.4/static/dynamic-trace.html).
I got this error:

# ./script.d 4759
dtrace: failed to compile script ./exp-1.d: line 3: probe description
postgresql4759:::transaction-start does not match any probes

  So I checked to see if there were ANY dtrace providers listed...

# dtrace -l
   ID   PROVIDERMODULE  FUNCTION NAME

  None!  dtrace -l on Solaris-11, by contrast, lists 66,218 of them:

root@solaris-11:~# dtrace -l | wc -l
66218


  My questions are as follows:

1) Has anyone else successfully built Postgresql (v8 or v9) on
Solaris-10 with dtrace support?  If so, what did I do wrong?

2) Has anyone else successfully used dtrace on Postgresql (any build)
on SOlaris-10?

3) Any idea why dtrace itself is borked on Solaris-10?



[GENERAL] Adding more space, and a vacuum question.

2011-01-28 Thread Herouth Maoz
Hello. We have two problems (which may actually be related...)

1. We are running at over 90% capacity of the disk on one of the servers - a 
report/data warehouse system. We have run out of disk space several times. Now 
we need to make some file-archived data available in the database to support 
our legal team. This means two huge tables to be added to the database. The 
only solution I see is to add more space by means of another tablespace. 
The two tables are static - after loading them and creating indexes they will 
not be changed.

The machine has no additional room for internal disks. It is a recent purchase 
and not likely to be replaced any time soon. Now, my position is that the best 
solution would be to add an external hard disk, via USB/firewire, and use it 
for the archive tables. My sysadmin, on the other hand, wants to mount a 
storage machine remotely and use it for the extra tablespace, as the storage 
machine is more reliable hardware. I think that remote-mounted volumes are 
not a proper device for a database, as the network is subject to load and I've 
run into frozen mounts with both NFS and SMB in the past. Never mind being slower.

Which solution would you advise and which one of us is right?
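
Whichever device wins, attaching it to the cluster is straightforward;
a minimal sketch, where the mount point and table definition are
assumptions for illustration only:

```sql
-- Mount point and names are hypothetical; the directory must be
-- owned by the postgres OS user.
CREATE TABLESPACE archive LOCATION '/mnt/archive/pgdata';

CREATE TABLE legal_archive (
    id      bigint PRIMARY KEY,
    payload text
) TABLESPACE archive;
```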

2. That database has a few really huge tables. I think they are not being 
automatically vacuumed properly. In the past few days I've noticed a vacuum 
process on one of them which has been running since January 14th. 
Unfortunately, it never finished, because we were informed of a scheduled power 
down in our building yesterday, and had to shut down the machine. The questions 
are:

a. Is it normal for vacuum processes to take two weeks?
b. What happens if the vacuum process is stopped? Are the tuples partially 
recovered, or are they only recovered if the process completes properly?
c. Is there anything I can do to make vacuums shorter?
d. After restarting the server, all the data in pg_stat_user_tables seem to 
have been reset. What does this mean and how does this affect vacuum scheduling?
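
On (c): vacuum speed is usually throttled by the cost-based delay, so one
common lever is loosening it for the problem tables. A hedged sketch (table
name and values are assumptions, not recommendations):

```sql
-- Per-table: let autovacuum run unthrottled on one huge table.
ALTER TABLE huge_archive SET (autovacuum_vacuum_cost_delay = 0);

-- Or for a manual run in the current session:
SET vacuum_cost_delay = 0;
VACUUM VERBOSE huge_archive;
```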

Thank you in advance,
Herouth


Re: [GENERAL] Full Text Index Scanning

2011-01-28 Thread Oleg Bartunov

Matt, I'd try to use prefix search on original string concatenated with reverse 
string:

Just tried on some spare table

knn=# \d spot_toulouse
    Table "public.spot_toulouse"
   Column   |       Type        | Modifiers
------------+-------------------+-----------
 clean_name | character varying |

1. create index 
knn=# create index clean_name_tlz_idx on spot_toulouse using gin(to_tsvector('french', clean_name || ' ' || reverse(clean_name)));

2.
select clean_name from spot_toulouse
where to_tsvector('french', clean_name || ' ' || reverse(clean_name))
      @@ to_tsquery('french','the:* | et:*');

Select looks cumbersome, but you can always write wrapper functions. 
The only drawback I see for now is that the ranking function will be a bit
confused, since the coordinates of the original and reversed words will not
be the same, but again, it's possible to obtain the tsvector from a custom
function which is aware of the reversing.
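
Note that reverse() is not built in on every release; if your install lacks
it, a minimal SQL sketch of one (marked IMMUTABLE so it can be used in an
expression index):

```sql
-- Minimal reverse() for installs without one.
CREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS $$
  SELECT array_to_string(ARRAY(
    SELECT substr($1, i, 1)
    FROM generate_series(length($1), 1, -1) AS g(i)
  ), '');
$$ LANGUAGE sql IMMUTABLE STRICT;
```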


Good luck, and let me know if this helps you.

Oleg

On Fri, 28 Jan 2011, Matt Warner wrote:


I'm in the process of migrating a project from Oracle to Postgres and have
run into a feature question. I know that Postgres has a full-text search
feature, but it does not allow scanning the index (as opposed to the data).
Specifically, in Oracle you can do select * from table where
contains(colname,'%part_of_word%') > 1. While this isn't terribly efficient,
it's much faster than full-scanning the raw data and is relatively quick.

It doesn't seem that Postgres works this way. Attempting to do this returns
no rows: select * from table where to_tsvector(colname) @@
to_tsquery('%part_of_word%')

The reason I want to do this is that the partial word search does not
involve dictionary words (it's scanning names).

Is this something Postgres can do? Or is there a different way to scan
the index?

TIA,

Matt



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] error while trying to change the database encoding on a database

2011-01-28 Thread Jasen Betts
On 2011-01-24, Geoffrey Myers li...@serioustechnology.com wrote:
 Adrian Klaver wrote:

 Thanks for the suggestion.  As it stands, we are getting different 
 errors for different hex characters, thus the solution we need is the 
 ability to identify the characters that won't convert from SQL_ASCII to 
 UTF8.  Is there a resource that would identify these characters?

use iconv to strip out the invalid characters from the SQL and then
compare before and after.

 I think the iconv command is
 
  iconv -f UTF8 -t UTF8 -c 
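
A self-contained sketch of that strip-and-compare step (file names are
invented; the octal escape \231 stands in for any byte that is not valid
UTF-8):

```shell
# Create a file containing one byte that is not valid UTF-8,
# strip invalid sequences with iconv -c, then compare the two files.
printf 'ok\231bad\n' > /tmp/dump.sql
iconv -f UTF8 -t UTF8 -c < /tmp/dump.sql > /tmp/dump.clean.sql
diff /tmp/dump.sql /tmp/dump.clean.sql || true   # shows the changed line
cat /tmp/dump.clean.sql                          # the 0x99 byte is gone
```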


-- 
⚂⚃ 100% natural



Re: [GENERAL] Dumpall without OID

2011-01-28 Thread Jasen Betts
On 2011-01-27, Girts Laudaks lauda...@gmail.com wrote:
 Hi,

 What could be the possible damage if a database is migrated without the 
 -o (OID) option? Integrity of data?

some things that used OID might fail to work.
Postgres doesn't need them, does your application?

 What are the options to solve this problem if it is migrated this way?
 Shouldn't pg_dumpall work like an exact copy of db?

It does if you use the correct version.

 What could have gone wrong if only these commands were used?

 pg_dumpall -v -U postgres | gzip > /var/lib/pgsql/dump.data.gz
 gunzip -c /var/lib/pgsql/dump.data.gz | psql -U postgres > /tmp/import.log

 They were used to migrate from PG8.4 to PG9

that should work if you use the version 9 pg_dumpall 


-- 
⚂⚃ 100% natural



Re: [GENERAL] How to get TimeZone name?

2011-01-28 Thread Jasen Betts
On 2011-01-18, ar...@esri.com ar...@esri.com wrote:
 Hi,

 How can I get timezone name? 
 I can get timezone offset but I could not find any reference of timezone 
 name. 

Change your datestyle setting; a setting of ISO gives nice portable
offsets that will work the same anywhere in the world.
A setting of SQL gives a locale-dependent rendering that looks nice
but is often ambiguous.

 1-
 sde10=> Select current_timestamp - current_timestamp AT TIME ZONE 'UTC' As 
 "TimeZoneOffSet";
  timezoneoffset
 
  -08:00:00
 (1 row)

What do you think the above is doing?


Is the following of any use to you?

set datestyle to 'sql,mdy';
select current_timestamp;


-- 
⚂⚃ 100% natural



Re: [GENERAL] searching for characters via the hexidecimal value

2011-01-28 Thread Jasen Betts
On 2011-01-24, Geoffrey Myers li...@serioustechnology.com wrote:
 Massa, Harald Armin wrote:

 This does not work for me, but if I convert the hex value to octal this 
 does work:

 select comments from fax where comments ~* E'\231';


you can do hex like this:

select comments from fax where comments ~* E'\x99';

or, if your hex is a Unicode code point:

 select comments from fax where comments ~* U&'\2122';


http://www.postgresql.org/docs/8.4/static/sql-syntax-lexical.html

-- 
⚂⚃ 100% natural



[GENERAL] Re: Separating the ro directory of the DB engine itself from the rw data areas . . .

2011-01-28 Thread Jasen Betts
On 2011-01-24, Albretch Mueller lbrt...@gmail.com wrote:
 ~
  I need to configure postgreSQL in a way that I could run it from a
 directory mounted as read only, with separate rw partitions mounted
 for the data, logs, . . .
 ~
  What would be the steps to follow and the issues to take into consideration?

first read through the main config file

as far as I know postgres doesn't need write access to the main cluster
directory (/var/lib/postgres/8.4/main here - a debian variant),
only the subdirectories off it (data, xlog, etc.)

-- 
⚂⚃ 100% natural



[GENERAL] Complex DBs Ontologies

2011-01-28 Thread Alpha Beta
Hello,

I tried to translate a small database schema to an ontology model (following
some mapping rules).
However, I guess the process would be more complex with a big and more
constrained relational database.
My question is not specific to Postgresql, but I thought many people on this
list are experts about databases :)
So, my question: what are the main features of big relational databases that
will be hard to map into other representations (in my case, ontologies; I
don't want to lose useful information, constraints, and so on).

Regards,
Yitaru


[GENERAL] Re: Separating the ro directory of the DB engine itself from the rw data areas . . .

2011-01-28 Thread Jasen Betts
On 2011-01-24, Albretch Mueller lbrt...@gmail.com wrote:
  ... better yet; is it possible to configure postgreSQL in a way that
 it depends on external variables set  via the OS in the same process
 in which it is started?

Debian manages that somehow. I've got two 8.4 clusters running and
only one copy of the binaries installed,  I think they use a
command-line option to specify the location of the config files.

-- 
⚂⚃ 100% natural



Re: [GENERAL] Store base64 in database. Use bytea or text?

2011-01-28 Thread Jasen Betts
On 2011-01-26, Bill Moran wmo...@potentialtech.com wrote:

 DO NOT use parametrized queries with PHP and bytea (I hate to say that,
 because parametrized fields are usually a very good idea).  PHP has a
 bug that mangles bytea data when pushed through parametrized fields.

 PHP bug #35800

OTOH, pg_insert() and pg_update() work well with bytea data. 

-- 
⚂⚃ 100% natural



Re: [GENERAL] Re: Separating the ro directory of the DB engine itself from the rw data areas . . .

2011-01-28 Thread Andrew Sullivan
On Mon, Jan 24, 2011 at 10:12:28PM +, Albretch Mueller wrote:
  ... better yet; is it possible to configure postgreSQL in a way that
 it depends on external variables set  via the OS in the same process
 in which it is started?

Sure.  Use the -D command-line switch or the $PGDATA environment variable.  

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



[GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
I'm trying to install the Postgres gem on OS/X but getting errors no
matter what I try..  In theory, it should be as simple as gem install
postgres, correct?  Here's what I get:

sudo gem install postgres
Building native extensions.  This could take a while...
ERROR:  Error installing postgres:
ERROR: Failed to build gem native extension.

/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
extconf.rb:73: command not found: pg_config --bindir

===   WARNING   ===

You are building this extension on OS X without setting the
ARCHFLAGS environment variable, and PostgreSQL does not appear
to have been built as a universal binary. If you are seeing this
message, that means that the build will probably fail.

Try setting the environment variable ARCHFLAGS
to '-arch i386' before building.

For example:
(in bash) $ export ARCHFLAGS='-arch i386'
(in tcsh) $ setenv ARCHFLAGS '-arch i386'

Then try building again.

===
extconf.rb:46: command not found: pg_config --includedir
extconf.rb:53: command not found: pg_config --libdir
checking for main() in -lpq... no
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir

--ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
--with-pgsql-dir
--without-pgsql-dir
--with-pgsql-include
--without-pgsql-include=${pgsql-dir}/include
--with-pgsql-lib
--without-pgsql-lib=${pgsql-dir}/lib
--with-pqlib
--without-pqlib
Could not find PostgreSQL build environment (libraries & headers):
Makefile not created


Gem files will remain installed in
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
Results logged to
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out

I have Postgres 9.0 installed at /Library/PostgreSQL/9.0.  Any ideas?

Mike



Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Thom Brown
On 29 January 2011 01:37, Mike Christensen m...@kitchenpc.com wrote:
 I'm trying to install the Postgres gem on OS/X but getting errors no
 matter what I try..  In theory, it should be as simple as gem install
 postgres, correct?  Here's what I get:

sudo gem install postgres
 Building native extensions.  This could take a while...
 ERROR:  Error installing postgres:
        ERROR: Failed to build gem native extension.

 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
 extconf.rb:73: command not found: pg_config --bindir

                ===   WARNING   ===

                You are building this extension on OS X without setting the
                ARCHFLAGS environment variable, and PostgreSQL does not appear
                to have been built as a universal binary. If you are seeing 
 this
                message, that means that the build will probably fail.

                Try setting the environment variable ARCHFLAGS
                to '-arch i386' before building.

                For example:
                (in bash) $ export ARCHFLAGS='-arch i386'
                (in tcsh) $ setenv ARCHFLAGS '-arch i386'

                Then try building again.

                ===
                extconf.rb:46: command not found: pg_config --includedir
 extconf.rb:53: command not found: pg_config --libdir
 checking for main() in -lpq... no
 *** extconf.rb failed ***
 Could not create Makefile due to some reason, probably lack of
 necessary libraries and/or headers.  Check the mkmf.log file for more
 details.  You may need configuration options.

 Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        
 --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        --with-pgsql-dir
        --without-pgsql-dir
        --with-pgsql-include
        --without-pgsql-include=${pgsql-dir}/include
        --with-pgsql-lib
        --without-pgsql-lib=${pgsql-dir}/lib
        --with-pqlib
        --without-pqlib
 Could not find PostgreSQL build environment (libraries  headers):
 Makefile not created


 Gem files will remain installed in
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
 Results logged to
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out

 I have Postgres 9.0 installed at /Library/PostgreSQL/9.0.  Any ideas?

 Mike

Hi Mike,

Well the message does advise you to specify the architecture in the
environment variable.

Try:

export ARCHFLAGS='-arch i386'
sudo -E gem install postgres

-E will allow the environment variable to survive the sudo call.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] resizing a varchar column on 8.3.8

2011-01-28 Thread Jasen Betts
On 2011-01-27, Emi Lu em...@encs.concordia.ca wrote:
 On 01/15/2011 04:22 PM, Jon Hoffman wrote:
 Hi,

 I found a post with some instructions for resizing without locking up
 the table, but would like to get some re-assurance that this is the best
 way:

 http://sniptools.com/databases/resize-a-column-in-a-postgresql-table-without-changing-data

 How does that affect data storage and future updates on existing rows?

Under MVCC, rows are discarded and replaced with new rows when you do updates. 
Also, postgres stores all string types as text.

 Varchar(n) is just text with an added character-length constraint.

There's no worry about old records not satisfying the new format, or 
old records being too small for reuse.

There is rarely a good reason (other than conforming to standards?) to 
prefer varchar over text when creating tables.

 I did not see any feedbacks about this topic.

 I need confirmation that it is safe to do this! Personally, I feel that 
 it is specially useful when there are many view dependencies. Update 
 from data dictionary, all views will be updated automatically, right?

The only problem I can see is someone in a (read committed) transaction
finding strings in the table that are longer than they expected to find.

It feels safe to me, but if you want to be sure, on a test database
insert some long strings, then make the limit shorter than the existing
strings and see if it causes any problems.
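
For reference, the catalog trick the linked post describes boils down to a
single UPDATE. Table and column names here are assumptions, and direct
catalog edits are unsupported, so treat this purely as a sketch and test it
on a copy first:

```sql
-- Widen t.c from varchar(20) to varchar(100) without rewriting the
-- table: atttypmod stores the declared length plus 4 header bytes.
UPDATE pg_attribute
   SET atttypmod = 100 + 4
 WHERE attrelid = 't'::regclass
   AND attname  = 'c';
```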

-- 
⚂⚃ 100% natural



Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
Now I get:

/Library/PostgreSQL/9.0export ARCHFLAGS='-arch i386'
/Library/PostgreSQL/9.0sudo -E gem install postgres
Password:
Building native extensions.  This could take a while...
ERROR:  Error installing postgres:
ERROR: Failed to build gem native extension.

/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
extconf.rb:73: command not found: pg_config --bindir
extconf.rb:46: command not found: pg_config --includedir
extconf.rb:53: command not found: pg_config --libdir
checking for main() in -lpq... no
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir

--ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
--with-pgsql-dir
--without-pgsql-dir
--with-pgsql-include
--without-pgsql-include=${pgsql-dir}/include
--with-pgsql-lib
--without-pgsql-lib=${pgsql-dir}/lib
--with-pqlib
--without-pqlib
Could not find PostgreSQL build environment (libraries & headers):
Makefile not created


Gem files will remain installed in
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
Results logged to
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out
/Library/PostgreSQL/9.0


On Fri, Jan 28, 2011 at 5:46 PM, Thom Brown t...@linux.com wrote:
 On 29 January 2011 01:37, Mike Christensen m...@kitchenpc.com wrote:
 I'm trying to install the Postgres gem on OS/X but getting errors no
 matter what I try..  In theory, it should be as simple as gem install
 postgres, correct?  Here's what I get:

sudo gem install postgres
 Building native extensions.  This could take a while...
 ERROR:  Error installing postgres:
        ERROR: Failed to build gem native extension.

 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby 
 extconf.rb
 extconf.rb:73: command not found: pg_config --bindir

                ===   WARNING   ===

                You are building this extension on OS X without setting the
                ARCHFLAGS environment variable, and PostgreSQL does not appear
                to have been built as a universal binary. If you are seeing 
 this
                message, that means that the build will probably fail.

                Try setting the environment variable ARCHFLAGS
                to '-arch i386' before building.

                For example:
                (in bash) $ export ARCHFLAGS='-arch i386'
                (in tcsh) $ setenv ARCHFLAGS '-arch i386'

                Then try building again.

                ===
                extconf.rb:46: command not found: pg_config --includedir
 extconf.rb:53: command not found: pg_config --libdir
 checking for main() in -lpq... no
 *** extconf.rb failed ***
 Could not create Makefile due to some reason, probably lack of
 necessary libraries and/or headers.  Check the mkmf.log file for more
 details.  You may need configuration options.

 Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        
 --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        --with-pgsql-dir
        --without-pgsql-dir
        --with-pgsql-include
        --without-pgsql-include=${pgsql-dir}/include
        --with-pgsql-lib
        --without-pgsql-lib=${pgsql-dir}/lib
        --with-pqlib
        --without-pqlib
 Could not find PostgreSQL build environment (libraries  headers):
 Makefile not created


 Gem files will remain installed in
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
 Results logged to
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out

 I have Postgres 9.0 installed at /Library/PostgreSQL/9.0.  Any ideas?

 Mike

 Hi Mike,

 Well the message does advise you to specify the architecture in the
 environment variable.

 Try:

 export ARCHFLAGS='-arch i386'
 sudo -E gem install postgres

 -E will allow the environment variable survive the sudo call.

 --
 Thom Brown
 Twitter: @darkixion
 IRC (freenode): dark_ixion
 Registered Linux user: #516935




Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
So I installed the postgres-pg library which, if I understand
correctly, is a pure-Ruby implementation of the adapter (as opposed to
native code that has to be built), and that's working fine..

From what I've read, this adapter is much slower but probably fine for
non-production use (I'm just learning for now)..

However, if anyone has any ideas how to install the native adapter, lemme know!
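
The extconf.rb errors ("command not found: pg_config") suggest the gem build
simply cannot find pg_config. A sketch of one likely fix, assuming the
install prefix mentioned earlier in the thread:

```shell
# Put the PostgreSQL client tools on PATH for the gem build
# (prefix taken from the thread; adjust to your install).
PG_BIN="/Library/PostgreSQL/9.0/bin"
export PATH="$PG_BIN:$PATH"
export ARCHFLAGS='-arch i386'
# then retry the native gem:
#   sudo -E gem install postgres
command -v pg_config || echo "pg_config still not found"
```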

On Fri, Jan 28, 2011 at 5:50 PM, Mike Christensen m...@kitchenpc.com wrote:
 Now I get:

 /Library/PostgreSQL/9.0export ARCHFLAGS='-arch i386'
 /Library/PostgreSQL/9.0sudo -E gem install postgres
 Password:
 Building native extensions.  This could take a while...
 ERROR:  Error installing postgres:
        ERROR: Failed to build gem native extension.

 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
 extconf.rb:73: command not found: pg_config --bindir
 extconf.rb:46: command not found: pg_config --includedir
 extconf.rb:53: command not found: pg_config --libdir
 checking for main() in -lpq... no
 *** extconf.rb failed ***
 Could not create Makefile due to some reason, probably lack of
 necessary libraries and/or headers.  Check the mkmf.log file for more
 details.  You may need configuration options.

 Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        
 --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        --with-pgsql-dir
        --without-pgsql-dir
        --with-pgsql-include
        --without-pgsql-include=${pgsql-dir}/include
        --with-pgsql-lib
        --without-pgsql-lib=${pgsql-dir}/lib
        --with-pqlib
        --without-pqlib
 Could not find PostgreSQL build environment (libraries  headers):
 Makefile not created


 Gem files will remain installed in
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
 Results logged to
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out
 /Library/PostgreSQL/9.0


 On Fri, Jan 28, 2011 at 5:46 PM, Thom Brown t...@linux.com wrote:
 On 29 January 2011 01:37, Mike Christensen m...@kitchenpc.com wrote:
 I'm trying to install the Postgres gem on OS/X but getting errors no
 matter what I try..  In theory, it should be as simple as gem install
 postgres, correct?  Here's what I get:

sudo gem install postgres
 Building native extensions.  This could take a while...
 ERROR:  Error installing postgres:
        ERROR: Failed to build gem native extension.

 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby 
 extconf.rb
 extconf.rb:73: command not found: pg_config --bindir

                ===   WARNING   ===

                You are building this extension on OS X without setting the
                ARCHFLAGS environment variable, and PostgreSQL does not 
 appear
                to have been built as a universal binary. If you are seeing 
 this
                message, that means that the build will probably fail.

                Try setting the environment variable ARCHFLAGS
                to '-arch i386' before building.

                For example:
                (in bash) $ export ARCHFLAGS='-arch i386'
                (in tcsh) $ setenv ARCHFLAGS '-arch i386'

                Then try building again.

                ===
                extconf.rb:46: command not found: pg_config --includedir
 extconf.rb:53: command not found: pg_config --libdir
 checking for main() in -lpq... no
 *** extconf.rb failed ***
 Could not create Makefile due to some reason, probably lack of
 necessary libraries and/or headers.  Check the mkmf.log file for more
 details.  You may need configuration options.

 Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        
 --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        --with-pgsql-dir
        --without-pgsql-dir
        --with-pgsql-include
        --without-pgsql-include=${pgsql-dir}/include
        --with-pgsql-lib
        --without-pgsql-lib=${pgsql-dir}/lib
        --with-pqlib
        --without-pqlib
 Could not find PostgreSQL build environment (libraries & headers):
 Makefile not created


 Gem files will remain installed in
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
 Results logged to
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out

 I have Postgres 9.0 installed at /Library/PostgreSQL/9.0.  Any ideas?

 Mike

 Hi Mike,

 Well the message does advise you to specify the architecture in the
 environment variable.

 Try:

 export ARCHFLAGS='-arch i386'
 sudo -E gem install postgres

 -E will allow the environment variable to survive the sudo call.

Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Thom Brown
On 29 January 2011 01:50, Mike Christensen m...@kitchenpc.com wrote:
 Now I get:

 /Library/PostgreSQL/9.0> export ARCHFLAGS='-arch i386'
 /Library/PostgreSQL/9.0> sudo -E gem install postgres
 Password:
 Building native extensions.  This could take a while...
 ERROR:  Error installing postgres:
        ERROR: Failed to build gem native extension.

 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
 extconf.rb:73: command not found: pg_config --bindir
 extconf.rb:46: command not found: pg_config --includedir
 extconf.rb:53: command not found: pg_config --libdir
 checking for main() in -lpq... no
 *** extconf.rb failed ***
 Could not create Makefile due to some reason, probably lack of
 necessary libraries and/or headers.  Check the mkmf.log file for more
 details.  You may need configuration options.

 Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        
 --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        --with-pgsql-dir
        --without-pgsql-dir
        --with-pgsql-include
        --without-pgsql-include=${pgsql-dir}/include
        --with-pgsql-lib
        --without-pgsql-lib=${pgsql-dir}/lib
        --with-pqlib
        --without-pqlib
 Could not find PostgreSQL build environment (libraries & headers):
 Makefile not created


 Gem files will remain installed in
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
 Results logged to
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out
 /Library/PostgreSQL/9.0


 On Fri, Jan 28, 2011 at 5:46 PM, Thom Brown t...@linux.com wrote:
 On 29 January 2011 01:37, Mike Christensen m...@kitchenpc.com wrote:
 I'm trying to install the Postgres gem on OS/X but getting errors no
 matter what I try..  In theory, it should be as simple as gem install
 postgres, correct?  Here's what I get:

sudo gem install postgres
 Building native extensions.  This could take a while...
 ERROR:  Error installing postgres:
        ERROR: Failed to build gem native extension.

 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby 
 extconf.rb
 extconf.rb:73: command not found: pg_config --bindir

                ===   WARNING   ===

                You are building this extension on OS X without setting the
                ARCHFLAGS environment variable, and PostgreSQL does not 
 appear
                to have been built as a universal binary. If you are seeing 
 this
                message, that means that the build will probably fail.

                Try setting the environment variable ARCHFLAGS
                to '-arch i386' before building.

                For example:
                (in bash) $ export ARCHFLAGS='-arch i386'
                (in tcsh) $ setenv ARCHFLAGS '-arch i386'

                Then try building again.

                ===
                extconf.rb:46: command not found: pg_config --includedir
 extconf.rb:53: command not found: pg_config --libdir
 checking for main() in -lpq... no
 *** extconf.rb failed ***
 Could not create Makefile due to some reason, probably lack of
 necessary libraries and/or headers.  Check the mkmf.log file for more
 details.  You may need configuration options.

 Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        
 --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        --with-pgsql-dir
        --without-pgsql-dir
        --with-pgsql-include
        --without-pgsql-include=${pgsql-dir}/include
        --with-pgsql-lib
        --without-pgsql-lib=${pgsql-dir}/lib
        --with-pqlib
        --without-pqlib
 Could not find PostgreSQL build environment (libraries & headers):
 Makefile not created


 Gem files will remain installed in
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
 Results logged to
 /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out

 I have Postgres 9.0 installed at /Library/PostgreSQL/9.0.  Any ideas?

 Mike

 Hi Mike,

 Well the message does advise you to specify the architecture in the
 environment variable.

 Try:

 export ARCHFLAGS='-arch i386'
 sudo -E gem install postgres

 -E will allow the environment variable to survive the sudo call.

You might have to tell it where the PostgreSQL binaries live first, then:

export PATH=$PATH:/Library/PostgreSQL/9.0/bin
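As a toy illustration of why adding the bindir to PATH fixes the build (a hypothetical helper, not the gem's actual code): extconf.rb shells out to `pg_config --bindir` and friends, which can only succeed when some PATH entry contains that executable.

```ruby
require 'tmpdir'
require 'fileutils'

# Toy model (not the gem's real code) of the lookup extconf.rb relies on:
# it runs `pg_config --bindir` etc., which succeeds only when some PATH
# entry contains an executable named pg_config.
def pg_config_on_path?(path)
  path.split(File::PATH_SEPARATOR).any? do |dir|
    File.executable?(File.join(dir, "pg_config"))
  end
end

# Simulate adding /Library/PostgreSQL/9.0/bin to PATH with a temp dir
# holding a stub pg_config.
Dir.mktmpdir do |fake_bindir|
  exe = File.join(fake_bindir, "pg_config")
  File.write(exe, "#!/bin/sh\n")
  FileUtils.chmod(0o755, exe)
  puts pg_config_on_path?(fake_bindir)  # => true
end
```

Once the real bindir is appended to PATH, the same lookup succeeds and extconf.rb can locate the headers and libraries.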

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Store base64 in database. Use bytea or text?

2011-01-28 Thread Jasen Betts
On 2011-01-28, Dmitriy Igrishin dmit...@gmail.com wrote:

 2011/1/28 Andre Lopes lopes80an...@gmail.com

 Hi,

 Another question about this subject.

 It is possible to cache this images from the database? In the future I
 will need to cache the pictures.


for HTTP caching to work you need to serve the resources with
appropriate HTTP headers, and probably need to honour HEAD requests in
a useful way.

you mention PHP, how does your web server communicate a HEAD request
to your PHP script?

it's beginning to look like disk might be a better place to store the
images.  it's more work to administer, but it lets the server and
cache interact naturally.
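The thread is about PHP, but the conditional-GET logic those headers drive is language-independent; a minimal Ruby sketch (hypothetical helper name, dates chosen arbitrarily) of the decision a script serving DB-stored images has to make:

```ruby
require 'time'

# Hypothetical sketch: the conditional-GET check behind HTTP caching.
# If the client's If-Modified-Since validator is at least as new as the
# image's modification time, answer 304 and skip the body entirely;
# otherwise send 200 with the bytes and a Last-Modified header.
def status_for(if_modified_since, resource_mtime)
  if if_modified_since && Time.httpdate(if_modified_since) >= resource_mtime
    304  # client's cached copy is still fresh
  else
    200  # send the image bytes plus Last-Modified for future requests
  end
end

mtime = Time.utc(2011, 1, 28, 12, 0, 0)
puts status_for(nil, mtime)              # => 200 (no validator sent)
puts status_for(mtime.httpdate, mtime)   # => 304 (cache still valid)
```

Whether the images live in bytea columns or on disk, it is this exchange the server has to implement for caches to work.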

-- 
⚂⚃ 100% natural



Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
 You might have to tell it where the PostgreSQL binaries live first then:

 export PATH=$PATH:/Library/PostgreSQL/9.0/bin

Hey that seems to have fixed it!  Thanks!

Mike



[GENERAL] One last Ruby question for tonight - Regarding UUID type

2011-01-28 Thread Mike Christensen
My goal is to learn Ruby by porting one of my existing PG web
applications over to Rails..  However, my existing data heavily relies
on the UUID data type.  I've noticed when I create a new model with
something like:

guidtest name:string value:uuid

And then do a rake db:migrate, the CREATE TABLE that gets generated looks like:

CREATE TABLE guidtests
(
  id serial NOT NULL,
  name character varying(255),
  created_at timestamp without time zone,
  updated_at timestamp without time zone,
  CONSTRAINT guidtests_pkey PRIMARY KEY (id)
)
...

In other words, it just ignores my uuid type.  However, the views
and stuff do include this column so the page will crash when I load it
since the column doesn't exist in the DB.

Is there some special thing I have to do to use the uuid type in
ActiveRecord?  Thanks!



Re: [GENERAL] One last Ruby question for tonight - Regarding UUID type

2011-01-28 Thread Mike Christensen
 My goal is to learn Ruby by porting one of my existing PG web
 applications over to Rails..  However, my existing data heavily relies
 on the UUID data type.  I've noticed when I create a new model with
 something like:

 guidtest name:string value:uuid

 And then do a rake db:migrate, the CREATE TABLE that gets generated looks like:

 CREATE TABLE guidtests
 (
  id serial NOT NULL,
  name character varying(255),
  created_at timestamp without time zone,
  updated_at timestamp without time zone,
  CONSTRAINT guidtests_pkey PRIMARY KEY (id)
 )
 ...

 In other words, it just ignores my uuid type.  However, the views
 and stuff do include this column so the page will crash when I load it
 since the column doesn't exist in the DB.
 Is there some special thing I have to do to use the uuid type in
 ActiveRecord?  Thanks!

Update: If I manually add the column in using pgAdmin (as a uuid type
of course), the program actually runs (I can create new rows and
display data)..  So RoR does support this type (probably gets
marshalled as a string??) but I guess the ActiveRecord schema
generation stuff just doesn't support uuid.  Hmmm.
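A common workaround when the schema generator doesn't know a type is to fall back to raw DDL in the migration via execute. Sketched below as plain Ruby that builds the statement (hypothetical helper name; table and column taken from the example above):

```ruby
# Hypothetical sketch: since the generator ignores "uuid", a migration
# can add the column with hand-written DDL.  Inside a real migration
# you would call:
#   execute uuid_column_ddl("guidtests", "value")
def uuid_column_ddl(table, column)
  "ALTER TABLE #{table} ADD COLUMN #{column} uuid"
end

puts uuid_column_ddl("guidtests", "value")
# => ALTER TABLE guidtests ADD COLUMN value uuid
```

That gives the same result as adding the column in pgAdmin, but keeps the change inside the migration history.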
