Re: [GENERAL] importing a messy text file

2014-04-30 Thread Rob Sargent

On 04/30/2014 03:50 AM, Willy-Bas Loos wrote:
On Wed, Apr 30, 2014 at 11:06 AM, Rémi Cura <remi.c...@gmail.com> wrote:


With the latest version, 9.3, you can use a COPY FROM with a program as
argument.
You can use the unix | pipe for most of the stuff.

If you have an older version you can use the mkfifo command to
trick postgres into believing it will copy from a file.


Hi, you're right, I can see the new feature in the docs.
But since I am working from the (bash) command line already, I don't
see any use for this.

I already pass the data like this:
cat file | psql -c "copy table from stdin NULL ''" db > file.log 2>&1
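
With 9.3's COPY ... FROM PROGRAM the same load can also be done server-side; a
minimal sketch, assuming a table and an absolute path the server can read
(the table and path names here are placeholders):

copy mytable from program 'cat /path/to/file.txt' with (format text, null '');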


It's especially handy if you want to stay in SQL (transaction), I guess.

Cheers,

--
Willy-Bas Loos
Unless you know this is the only problem (extra/missing tab) I would 
triage the file with a scripting language. One favourite check is


   awk '{a[NF]++} END {print "Field count : Record count"; for (i in
   a) {printf "%11d : %d\n", i, a[i]}}'

If you only have the tab problem you will get two lines of output with 
field count = N, N-1.  Take care in setting the field separator.




Re: [GENERAL] Altering array(composite-types) without breaking code when inserting them and similar questions

2014-04-20 Thread Rob Sargent

Sorry, I should not have top-posted (Dang iPhone).  Continued below:
On 04/20/2014 05:54 PM, Dorian Hoxha wrote:
Because I always query the whole row, and in the other way (many
tables) I will always join + have other indexes.



On Sun, Apr 20, 2014 at 8:56 PM, Rob Sargent <robjsarg...@gmail.com> wrote:


Why do you think you need an array of theType v. a dependent table
of theType? This tack is of course immune to most future type
changes.

Sent from my iPhone

Interesting.  Of course any decent mapper will return the whole row.
And would it be less disk intensive as an array of struct (where
struct is implemented as an array)?  From other threads [1] [2] I've
come to understand the datatype overhead per native type will be applied
per type instance per array element.


[1] 30K floats 
http://postgresql.1045698.n5.nabble.com/Is-it-reasonable-to-store-double-arrays-of-30K-elements-td5790562.html
[2] char array 
http://postgresql.1045698.n5.nabble.com/COPY-v-java-performance-comparison-tc5798389.html
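
For illustration, a minimal sketch of the two layouts being compared (the
type, table and column names are made up):

-- array-of-composite layout: one row, all elements inline
create type the_type as (a integer, b text);
create table parent_arr (
    id    integer primary key,
    items the_type[]
);

-- dependent-table layout: one row per element, immune to most type changes
create table parent (id integer primary key);
create table parent_item (
    parent_id integer references parent(id),
    a         integer,
    b         text
);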


Re: [GENERAL] How to evaluate if a query is correct?

2013-09-19 Thread Rob Sargent

On 09/19/2013 12:13 AM, Juan Daniel Santana Rodés wrote:

Hi my friends...
I wrote a similar question in my last post, but in this post I clarify
the previous question better.
I need to know how to evaluate whether a query is correct without executing
it. When I say a query is correct, I mean that if I ran it, it would not
throw an exception.

For example...

create or replace function is_correct(query text) returns boolean as
$body$
 Declare
 Begin
  -- Here I check if the query is correct, but I can't execute this 
query, because it may make changes in the data base

 End;
$body$
language 'plpgsql';

Greetings
__
Everyone on September 12 with a Yellow Ribbon
END THE INJUSTICE, FREE THEM NOW!!
http://www.antiterroristas.cu http://justiciaparaloscinco.wordpress.com


Is this an academic exercise or are you actually planning on checking
SQL and then executing it if it's ok (according to your function)?  I love
plpgsql but I don't think it's the best option for, just to get started,
parsing the incoming SQL text - that would be one nasty regexp :)
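
For what it's worth, a minimal sketch of one way to hand the checking back to
the server instead of parsing the text yourself: EXPLAIN (without ANALYZE)
parses and plans but does not execute, so it should not change data. It only
covers statements EXPLAIN accepts (SELECT/INSERT/UPDATE/DELETE and friends),
and it still requires the referenced objects to exist:

create or replace function is_correct(query text) returns boolean as
$body$
begin
  -- let the server parse and plan the statement without running it
  execute 'explain ' || query;
  return true;
exception when others then
  return false;
end;
$body$
language plpgsql;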


rjs





Re: [GENERAL] Sum of columns

2013-09-08 Thread Rob Sargent

On 09/08/2013 07:12 PM, jane...@web.de wrote:

Hi,
this is my query:
SELECT user,
sum(CASE WHEN lev >= 50 AND lev < 70 THEN 1 ELSE 0 END) as a,
sum(CASE WHEN lev >= 70 AND lev < 80 THEN 1 ELSE 0 END) as b,
sum(CASE WHEN lev >= 80 AND lev <= 90 THEN 1 ELSE 0 END) as c,
sum(CASE WHEN lev > 90 THEN 1 ELSE 0 END) as d,
(SELECT a + b + a + d) AS matches
FROM t_temp_fts
GROUP BY user
I would like to add up the 4 columns a, b, c and d for every user, but it
doesn't work like this.

Does anyone know a solution?
Janek Sendrowski



How far does this get you?
insert into t_temp_fts values('rob', 51), ('rob', 71), ('rob', 81), 
('rob', 91);
insert into t_temp_fts values('jon', 51), ('jon', 71), ('jon', 81), 
('jon', 91);

SELECT distinct usern,
(select count(*) from t_temp_fts i where o.usern = i.usern and
lev >= 50 AND lev < 70) as a,
(select count(*) from t_temp_fts i where o.usern = i.usern and
lev >= 70 AND lev < 80) as b,
(select count(*) from t_temp_fts i where o.usern = i.usern and
lev >= 80 AND lev <= 90) as c,
(select count(*) from t_temp_fts i where o.usern = i.usern and
lev > 90) as d

from t_temp_fts o
;
  usern | a | b | c | d
---+---+---+---+---
 jon   | 1 | 1 | 1 | 1
 rob   | 1 | 1 | 1 | 1
(2 rows)
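
The usual fix for the original query is to compute the buckets in a derived
table so the aliases can be referenced; a sketch against the same test data
(column named usern as above):

SELECT usern, a, b, c, d, a + b + c + d AS matches
FROM (
  SELECT usern,
         sum(CASE WHEN lev >= 50 AND lev < 70 THEN 1 ELSE 0 END) AS a,
         sum(CASE WHEN lev >= 70 AND lev < 80 THEN 1 ELSE 0 END) AS b,
         sum(CASE WHEN lev >= 80 AND lev <= 90 THEN 1 ELSE 0 END) AS c,
         sum(CASE WHEN lev > 90 THEN 1 ELSE 0 END) AS d
  FROM t_temp_fts
  GROUP BY usern
) s;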



Re: [GENERAL] SQL Path in psql

2013-09-06 Thread Rob Sargent

On 09/06/2013 11:05 AM, miles wrote:

Bobby Dewitt wrote

I come from an Oracle background and I am fairly new to Postgres.
Oracle's command line utility (SQL*Plus) uses an environment variable
called SQLPATH to search the given directories for SQL scripts that the
user calls to execute using a method similar to the \i meta-command in
psql.  This allows users to put SQL scripts in various directories and
execute them no matter which directory is their current working directory
in psql.

Is there a similar environment variable that can be used with psql, or
another way to mimic this behavior?

I'm in the same boat as Bobby. Any feature like this exist? If not, how
would it be requested?






I wonder if this would at least get the full path on-screen for a c/p:
\! for d in $SQLPATH; do find $d -name script-filename; done

That said, I would down-vote this suggestion.  I tend to put sql files 
in amongst my various project dirs and maintaining the envvar isn't 
worth it.


Anything I re-use I make into a function.




Re: [GENERAL] earthdistance

2013-08-10 Thread Rob Sargent
I just recently built the postgis stack (ubuntu 12.04) and had to go to
the trunk (maybe 2.1) for postgis itself to work around a dependency
failure on, iirc, geos.


rjs


On 08/10/2013 04:03 PM, Brent Wood wrote:

Is there not a precompiled Postgis package you can use?

There are a few dependencies, the PROJ.4 libraries you are missing enable 
projection support, and the package tools automatically manage such dependencies. I 
know packages are well supported for Debian, Ubuntu/Mint/etc, Suse & Fedora.

See: http://trac.osgeo.org/postgis/wiki/UsersWikiInstall

Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529

From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] 
on behalf of Olivier Chaussavoine [olivier.chaussavo...@gmail.com]
Sent: Saturday, August 10, 2013 10:17 PM
To: John R Pierce
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] earthdistance

As a simple potential user, I tried to install PostGIS, downloaded all
libraries required: proj-4.8.0, gdal-1.10.0, json-c, postgis-2.0.3,
geos-3.3.8, libxml2-2.9.0, and tried to build the first library with the
simple procedure:

./configure
make
make install

I had a fatal error:

make[2]: Entering directory `/home/olivier/ob/proj-4.8.0/src'
/bin/bash ../libtool --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H -I.
-DPROJ_LIB=\"/usr/local/share/proj\" -DMUTEX_pthread -g -O2 -MT jniproj.lo
-MD -MP -MF .deps/jniproj.Tpo -c -o jniproj.lo jniproj.c
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -DPROJ_LIB=\"/usr/local/share/proj\"
-DMUTEX_pthread -g -O2 -MT jniproj.lo -MD -MP -MF .deps/jniproj.Tpo -c jniproj.c  -fPIC
-DPIC -o .libs/jniproj.o
jniproj.c:52:26: fatal error: org_proj4_PJ.h: No such file or directory
compilation terminated.

The problem is out of the scope of this list, and probably not difficult. Since
I am looking for simple geographic indexing using imprecise lat,long coordinates
that does not deal with precise modeling, and I am wary of a long install
procedure and heavy computations, I also give up.

Spatial MySQL indexing seems to be included in pre-built packages.

What can we do?








2013/8/10 John R Pierce <pie...@hogranch.com>
On 8/9/2013 5:18 PM, Brent Wood wrote:

You might install Postgis to implement very powerful spatial functionality that 
can easily do what you are asking (plus a whole lot more).


indeed, PostGIS is the logical answer, but the OP specifically stated he wanted 
the functionality without 'sophisticated geographic systems'.  so I ignored the 
question.

the alternative would be implementing your own spherical geometry functions, 
and hook them up to GiST indexing, it's not that hard, but by the time you got
all the functionality you need, you'd be half way to PostGIS, so why fight it?
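
For reference, the contrib earthdistance module (on top of cube) packages that
approach already; a minimal sketch, assuming a hypothetical places(lat, lon)
table and distances in meters:

create extension cube;
create extension earthdistance;

create index places_ll_idx on places using gist (ll_to_earth(lat, lon));

-- rows within 10 km of a point: earth_box() is the indexable pre-filter,
-- earth_distance() the exact check
select *
from places
where earth_box(ll_to_earth(48.85, 2.35), 10000) @> ll_to_earth(lat, lon)
  and earth_distance(ll_to_earth(48.85, 2.35), ll_to_earth(lat, lon)) < 10000;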




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Olivier Chaussavoine
--
Please consider the environment before printing this email.
NIWA is the trading name of the National Institute of Water & Atmospheric
Research Ltd.








Re: [GENERAL] regexp_replace failing on 9.0.4

2013-03-18 Thread Rob Sargent



Maybe we're barking up the wrong tree by suspecting the regex itself.
Perhaps the updates were suppressed by a trigger, or the transaction
rolled back instead of committing, or some such?

regards, tom lane
Barking mad, more like it.  I had rolled back the execution of my
function in my dev env, and running it again produces the same result as
experienced in the production system.


Why the update of the text-type field via regexp_replace is failing
still confuses me, but that's pretty much my natural state.  The size of
the text field ranges from 7.5k to 24k chars.



Here is the update (again)
  update cms.segment_data s
  set text = regexp_replace(f.content, 
'(^.*)ns/acres/pathology/dx/1.5(.*$)', E'\\1ns/acres/pathology/dx/1.6\\2')

  from
dm.payload_segment p,
support.fix_pathdx_namespace f
  where s.id  = p.segment_data_id
  and p.item_id = f.item_id
  and p.content_version = f.maxversion
  ;
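
Incidentally, since the pattern is a literal substring swap, plain replace()
is a simpler sketch of the same update, and it sidesteps the fact that the
dots in '1.5' are wildcards in the regex above:

  update cms.segment_data s
  set text = replace(f.content, 'ns/acres/pathology/dx/1.5',
                                'ns/acres/pathology/dx/1.6')
  from
    dm.payload_segment p,
    support.fix_pathdx_namespace f
  where s.id = p.segment_data_id
  and p.item_id = f.item_id
  and p.content_version = f.maxversion
  ;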





Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Rob Sargent

On 01/11/2013 11:32 AM, Kirk Wythers wrote:


On Jan 11, 2013, at 12:18 PM, Szymon Guz <mabew...@gmail.com> wrote:






On 11 January 2013 19:13, Kirk Wythers <kwyth...@umn.edu> wrote:


Can anyone see what I'm missing? I am trying to run a psql copy
from command through a bash script to load a bunch of csv files
into the same table. I'm getting an error about the file infile
not existing?

#!/bin/sh

for infile in /path_to_files/*.csv
do
   cat infile | psql dbname -c "\copy table_name FROM stdin with
delimiter as ',' NULL AS 'NA' CSV HEADER"
done


Thanks in advance




Hi Kirk,
maybe try this:

cat $infile |




Oh my goodness! Thank you.

One more quickie: it seems that I am going to be asked for my
password every time psql loops through the copy statement.


What is considered best practices to handle authentication? I am 
connecting locally, as myself as the user and I'm being asked for my 
user password. I added the -w (no-password) to the psql statement, but 
now assume I need to add a .pgpass file or something.


Suggestions?



Yes, a .pgpass file would work nicely.
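
For reference, a ~/.pgpass entry is one line of the form
hostname:port:database:username:password (with * allowed as a wildcard), and
the file must not be group- or world-readable (chmod 0600). A hypothetical
entry for a local load like the one above:

localhost:5432:dbname:kirk:secret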




Re: [GENERAL] Is EXISTS the most efficient approach for PostgreSql to check for existence of nodes in a tree?

2012-05-17 Thread Rob Sargent

On 05/17/2012 03:06 AM, Seref Arikan wrote:
I have a key-value table in my Postgresql db, which represents
hierarchical data through a parent_feature_mapping_id column that points to
the feature_mapping_id column of the same table.


I need to select root nodes that have children which satisfy various
conditions. The conditions may extend to children of children, so I'm
trying to find roots of trees that contain paths that satisfy the
given constraints.


An example is finding the trees where the root node has type
'COMPOSITION' and the root node's archetypeNodeId attribute has value
'openEHR-EHR-COMPOSITION.discharge.v1'; another constraint is the root node
having a child of type 'CONTENTITEM' that in turn has a child of type
'ITEMSTRUCTURE'.
All nodes in a tree have the same payload id. The fastest query that I
could write so far is given below.



SELECT root.id from path_value as root
WHERE
root.rm_type_name = 'COMPOSITION'
AND
root.feature_name = 'composition'
AND
EXISTS (SELECT 1 from path_value as anodeid
WHERE
anodeId.parent_feature_mapping_id = 
root.feature_mapping_id

AND
anodeId.payload_id = root.payload_id
AND
anodeId.feature_name = 'archetypeNodeId'
AND
anodeId.val_string = 
'openEHR-EHR-COMPOSITION.discharge.v1'

LIMIT 1
)

AND
EXISTS (SELECT 1 from path_value as node1
 WHERE
node1.payload_id = root.payload_id
AND
node1.parent_feature_mapping_id = root.feature_mapping_id
AND
node1.feature_name = 'content'
AND
node1.rm_type_name = 'CONTENTITEM'
AND
EXISTS (SELECT 1 from path_value as node2
WHERE
node2.payload_id = node1.payload_id
AND
node2.parent_feature_mapping_id = 
node1.feature_mapping_id

AND
node2.rm_type_name = 'ITEMSTRUCTURE'
LIMIT 1)
LIMIT 1)

My question is: is this the best approach in terms of performance?
This is an attempt to identify XML payloads that fit certain criteria.
I have also considered using an ltree column that will contain the
tree in a form that I can query, as an alternative to the SQL-based method,
or I can use xpath queries on the XML payload.


The create statement for my table is as follows:

CREATE TABLE public.path_value (
  val_string TEXT,
  feature_mapping_id INTEGER NOT NULL,
  parent_feature_mapping_id INTEGER,
  feature_name TEXT,
  rm_type_name TEXT,
  path INTEGER NOT NULL,
  payload_id INTEGER NOT NULL,
  id INTEGER NOT NULL,
  ehr_id INTEGER,
  CONSTRAINT path_value_pkey PRIMARY KEY(id)
) WITHOUT OIDS;


Best regards
Seref


Any other constraints or indexes on that table?
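
In case it helps: a sketch of indexes that would let each EXISTS probe become
an index lookup (the column choices are guesses from the predicates in the
query above, not a tested recommendation):

create index path_value_parent_idx
    on path_value (parent_feature_mapping_id, payload_id, feature_name);

create index path_value_type_idx
    on path_value (payload_id, rm_type_name, feature_name);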


Re: [GENERAL] Bug? Query plans / EXPLAIN using gigabytes of memory

2012-04-29 Thread Rob Sargent

On 04/29/2012 07:19 PM, Toby Corkindale wrote:

On 27/04/12 09:33, Tom Lane wrote:

Toby Corkindale <toby.corkind...@strategicdata.com.au> writes:

I've created a bit of a test case now.
There's a Perl script here:
http://dryft.net/postgres/


AFAICT, what is happening is that we're repeating the planning of that
messy nest of views for each child table of foo.  For most of the
children the planner eventually decides that the join degenerates to
nothing because of constraint exclusion, but not until it's expended a
fair amount of time and memory space per child.

I looked at whether we could improve that by having inheritance_planner
use a temporary memory context per child, but that doesn't look very
practical: it would add a good deal of extra data-copying overhead,
and some of the data structures involved are not easily copiable.

The general scheme of replanning per child might be questioned as well,
but IMO it's fairly important given the looseness of inheritance
restrictions --- it's not unlikely that you *need* different plans for
different children.  We might be able to reconsider that approach
whenever we invent an explicit concept of partitioned tables, since
presumably the partitions would all be essentially alike.

In the meantime, the best advice I can come up with is to reconsider
whether you need so many partitions.  That mechanism is really designed
for only a dozen or two partitions at most.



Hi Tom,
Thanks for looking into this, I appreciate you spending the time.

The system I've come up with for partitioning this data requires quite 
a lot of partitions - say thirty to seventy - but I didn't realise it 
would cause trouble down the line, so I'll see if it can be reworked 
to reduce the number.


For what it's worth, the actual query that was blowing out to 
gigabytes was only hitting a couple of dozen partitions per table it 
was touching - but it was hitting three such tables, about sixteen 
times (!) each.


I'm still curious about why I can do a SELECT * FROM complexview 
without using much memory, but an UPDATE foo FROM complexview causes 
all the memory to get exhausted?


Thanks,
Toby


Does

UPDATE foo set values
where foo.id in (select id from complexview...)

also swallow the memory?




Re: [GENERAL] Unfamous 'could not read block ... in file ...: read only 0 of 8192 bytes' again

2012-02-20 Thread Rob Sargent

On 02/20/2012 07:46 PM, Maxim Boguk wrote:



On Tue, Feb 21, 2012 at 1:37 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:


Maxim Boguk <maxim.bo...@gmail.com> writes:
 Do you know why the mod date on the file is 2012-02-20 12:04?

 Cron attempted to populate the table once per hour after that problem
 happened.
 And each time it produced the same error.

That's interesting ... is there any possibility that the
insertions were
attempting to insert values that matched a previously-existing primary
key value?  I'm thinking there's no reason for the INSERT per se to be
touching nonexistent blocks, but if for some reason the pkey index
still
had entries pointing at vanished rows (as it seems to) then the errors
could be coming from uniqueness checks attempting to fetch those
rows to
see if they're live.

   regards, tom lane


Hi,

It isn't just a possibility: close to 100% of the newly inserted values
matched a previously-existing primary key value.
The table is a hand-made 'materialized view'-type statistics table which
gets recalculated via cron.


--
Maxim Boguk
Senior Postgresql DBA.




Sorry Maxim,
Trying to follow along here: are you saying the inserted records DO or DO
NOT match previously existing key values?





Re: [GENERAL] Appending a newline to a column value - in a psql cronjob

2012-01-13 Thread Rob Sargent

On 01/13/2012 05:11 AM, Alexander Farber wrote:

Hello!

I'm using PostgreSQL 8.4.9 on CentOS 6.2 and with bash.

The following cronjob works well for me
(trying to send a mail to myself - for moderation):

6   6   *   *   *   psql -c "select
'http://mysite/user.php?id=' ||id, about from pref_rep where
length(about) > 1 and last_rated > now() - interval '1 day'"

but I can't figure out how to append a newline to the
1st value (because otherwise the line is too long
and I have to scroll right in my mail reader):

What I've tried so far:

# history
  1001  psql -c select 'http://mysite/user.php?id=' ||id||'\n', about from 
.
  1002  psql -c select 'http://mysite/user.php?id=' ||id||'\\n', about
from .
  1003  psql -c select 'http://mysite/user.php?id=' ||id|| \\n,
about from .
  1004  psql -c select 'http://mysite/user.php?id=' ||id|| \\\n\,
about from .
  1005  psql -c select 'http://mysite/user.php?id=' ||id|| \\n\,
about from .
  1006  psql -c select 'http://mysite/user.php?id=' ||id|| \'\n\',
about from .

Thank you
Alex



Given that it's a constant, I would just drop the http header :)
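
(For completeness, the escape-string form the attempts above were circling
would look roughly like this; the outer double quotes are for the shell, and
E'\n' is PostgreSQL's escape-string syntax for a newline:)

psql -c "select 'http://mysite/user.php?id=' || id || E'\n', about
         from pref_rep
         where length(about) > 1 and last_rated > now() - interval '1 day'"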




Re: [GENERAL] 9.1.2: Preventing connections / syncing a database

2012-01-01 Thread Rob Sargent

On 01/01/2012 11:51 AM, Jay Levitt wrote:


revoke connect on database rails_dev from public;
select pg_terminate_backend(procpid) from pg_stat_activity where 
datname='rails_dev';


Still, the app can reconnect. (So can psql.)

So...

1. How can I prevent (or redirect, or timeout, or anything) new 
connections? I think superuser roles might be exempt from connection 
limits and privileges.  I could repeatedly terminate backends until 
I'm able to rename the database, but... ick.
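
(One commonly suggested recipe for blocking non-superuser connections, sketched
here untested: set the connection limit to zero, or flip datallowconn, then
terminate whatever is still attached.)

alter database rails_dev connection limit 0;
-- or, more heavy-handed:
update pg_database set datallowconn = false where datname = 'rails_dev';

select pg_terminate_backend(procpid)
from pg_stat_activity
where datname = 'rails_dev';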


2. What's a better way to slave to a master database without being a 
read-only slave?  In other words, we want to use the production 
database as a starting point each morning, but we'll be creating test 
rows and tables that we wouldn't want to propagate to production. Can 
I do this while the database is live through some existing replication 
tool?  The production database is on Ubuntu but the workstations are 
Macs, FWIW.


Jay
More along the lines of what Greg has said.  Not clear to me why an
individual dev box needs to be that current data-wise.  Of course
stopping and starting your app should be easy, especially for the
developers, so maybe that's a better place to start.  Then dev can do it
whenever and however often suits dev best (even cronning 'shutdown app;
reload db' to happen 3am Sundays).




[GENERAL] reading build config.log -- possible bug?

2011-11-27 Thread Rob Sargent
Looking to see where my new data directory is in a recently built 9.1, I
happened upon what could potentially be a bug in configure:


Just after the line which begins configure_args= (line 11068 for me, 
99% of buffer) I find:


datadir='${datarootdir}'
datarootdir='${prefix}/share'

I'm troubled that datadir may be getting set too early since that's the 
first mention of either variable in config.log? I don't see 'datadir' 
anywhere else in the file but that's not abnormal.


Paranoid but maybe they _are_ after me,

rjs





Re: [GENERAL] reading build config.log -- possible bug?

2011-11-27 Thread Rob Sargent

On 11/27/2011 09:31 PM, Tom Lane wrote:

Rob Sargent <robjsarg...@gmail.com> writes:

Looking to see where my new data directory is in recently built 9.1 I
happened upon what could potentially be a bug in configure:
Just after the line which begins configure_args= (line 11068 for me,
99% of buffer) I find:
datadir='${datarootdir}'
datarootdir='${prefix}/share'
I'm troubled that datadir may be getting set too early since that's the
first mention of either variable in config.log? I don't see 'datadir'
anywhere else in the file but that's not abnormal.

I think you're reading the alphabetized dump of variables' final values,
no?

regards, tom lane

Ah, yes... Note to self: Step back, see the larger picture...



[GENERAL] 9.1.1 build failure : postgres link fails

2011-11-20 Thread Rob Sargent
I just got round to updating my laptop to ubuntu-10.4 (32bit), in part 
because I kept hitting snags while trying to configure postgres 9.1.1.


The upgrade did in fact solve the dependency issues (though I was
surprised UUID came along without the ossp-specific impl??) but the
build is having trouble linking the server executable.


Here's my configure line:
./configure --with-pgport=5439 --with-perl --with-python --with-openssl 
--with-ldap --with-ossp-uuid --with-libxml --with-libxslt


and the tail of the configure output:

configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking src/backend/port/tas/dummy.s to 
src/backend/port/tas.s
config.status: linking src/backend/port/dynloader/linux.c to 
src/backend/port/dynloader.c
config.status: linking src/backend/port/sysv_sema.c to 
src/backend/port/pg_sema.c
config.status: linking src/backend/port/sysv_shmem.c to 
src/backend/port/pg_shmem.c
config.status: linking src/backend/port/unix_latch.c to 
src/backend/port/pg_latch.c
config.status: linking src/backend/port/dynloader/linux.h to 
src/include/dynloader.h
config.status: linking src/include/port/linux.h to 
src/include/pg_config_os.h

config.status: linking src/makefiles/Makefile.linux to src/Makefile.port

ubuntu-10.4 is not the latest of course and comes with gnu make 3.81,
but it seems to compile everything then fails to link the postgres executable:


gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wformat-security 
-fno-strict-aliasing -fwrapv -L../../src/port -Wl,--as-needed 
-Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags  -Wl,-E TONS OF 
DOT OHs ../../src/port/libpgport_srv.a -lxslt -lxml2 -lssl -lcrypto 
-lcrypt -ldl -lm -lldap -o postgres

postmaster/postmaster.o: In function `PostmasterMain':
postmaster.c:(.text+0x48d7): undefined reference to `optreset'
tcop/postgres.o: In function `process_postgres_switches':
postgres.c:(.text+0x1312): undefined reference to `optreset'
utils/misc/ps_status.o: In function `set_ps_display':
ps_status.c:(.text+0xd4): undefined reference to `setproctitle'
collect2: ld returned 1 exit status
make[2]: *** [postgres] Error 1
make[2]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src/backend'
make[1]: *** [all-backend-recurse] Error 2
make[1]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src'
make: *** [all-src-recurse] Error 2




Re: [GENERAL] 9.1.1 build failure : postgres link fails

2011-11-20 Thread Rob Sargent

On 11/20/2011 11:05 AM, Tom Lane wrote:

Rob Sargent <robjsarg...@gmail.com> writes:

On 11/20/2011 09:24 AM, Tom Lane wrote:

It appears that on Ubuntu, libbsd defines those symbols, which confuses
configure into supposing that they're provided by libc, and then the
link fails because libbsd isn't actually linked into the postmaster.
The question is what's pulling in libbsd though.  In the previous report
it came via libedit, which you're not using.  I'd try looking in the
config.log file to see what it was linking in the test that decided
setproctitle was available, and then using ldd on each of those
libraries to see which one(s) require libbsd.

Will do.  Then there's always trying Ubuntu-11?

Couldn't say.  But re-reading this, I wonder if maybe you *are* using
libedit.  Have you got readline installed?  If not, configure will try
libedit as second choice ... so maybe the best fix is to install
readline (and don't forget readline-devel or local equivalent).

regards, tom lane
I built readline-6.2 from source and got past linking the 'postgres'
executable.  Progress!  Now chasing down libperl:


gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wformat-security 
-fno-strict-aliasing -fwrapv -fpic -shared -o plperl.so plperl.o SPI.o 
Util.o -L../../../src/port -Wl,--as-needed 
-Wl,-rpath,'/usr/lib/perl/5.10/CORE',--enable-new-dtags  
-fstack-protector -L/usr/local/lib  -L/usr/lib/perl/5.10/CORE -lperl 
-ldl -lm -lpthread -lc -lcrypt

/usr/bin/ld: cannot find -lperl
collect2: ld returned 1 exit status
make[3]: *** [plperl.so] Error 1
make[3]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src/pl/plperl'
make[2]: *** [all-plperl-recurse] Error 2
make[2]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src/pl'
make[1]: *** [all-pl-recurse] Error 2
make[1]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src'
make: *** [all-src-recurse] Error 2

I had to make a symlink 'ln -s /usr/lib/libperl.so.5.10
/usr/lib/libperl.so', which I thought was supposed to happen automagically...


BUT
make[1]: Leaving directory `/home/rob/tools/postgresql-9.1.1/config'
All of PostgreSQL successfully made. Ready to install.
PostgreSQL installation complete.

Thank you Tom for the readline tip.  Sort of thing that would take me 
forever to track down.



