Re: [HACKERS] MVCC overheads

2016-07-08 Thread Pete Stevenson
Good info, thanks for the note. Agreed that it is difficult to pull things 
apart to isolate these features for offload — so actually running experiments 
with offload is not possible, as you point out (and for other reasons).

Maybe I could figure out the lines of code that add versions into a table and 
then those that collect old versions (they do get collected, right?). Anyway, 
thought being I could profile while running TPC-C or similar. I was hoping that 
someone might be able to jump on this with a response that they already did 
something similar. I know that Stonebraker has done some analysis along these 
lines, but I’m looking for an independent result that confirms (or not) his 
work.

Thank you,
Pete Stevenson


> On Jul 7, 2016, at 3:43 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> 
> On 7 July 2016 at 20:50, Pete Stevenson <etep.nosnev...@gmail.com> wrote:
> Hi Simon -
> 
> Thanks for the note. I think it's fair to say that I didn't provide enough 
> context, so let me try and elaborate on my question.
> 
> I agree, MVCC is a benefit. The research angle here is about enabling MVCC 
> with hardware offload. Since I didn't explicitly mention it, the offload I 
> refer to will respect all consistency guarantees also.
> 
> It is the case that for the database to implement MVCC it must provide 
> consistent read to multiple different versions of data, i.e. depending on the 
> version used at transaction start. I'm not an expert on postgresql internals, 
> but this must have some cost. I think the cost related to MVCC guarantees can 
> roughly be categorized as: creating new versions (linking them in), version 
> checking on read, garbage collecting old versions, and then there is an 
> additional cost that I am interested in (again not claiming it is unnecessary 
> in any sense) but there is a cost to generating the log.
> 
> Thanks, by the way, for the warning about lab vs. reality. That's why I'm 
> asking this question here. I want to keep the hypothetical tagged as such, 
> but find defensible and realistic metrics where those exist, i.e. in this 
> instance, we do have a database that can use MVCC. It should be possible to 
> figure out how much work goes into maintaining that property.
> 
> PostgreSQL uses a no overwrite storage mechanism, so any additional row 
> versions are maintained in the same table alongside other rows. The MVCC 
> actions are mostly mixed in with other aspects of the storage, so not 
> isolated much for offload.
> 
> Oracle has a different mechanism that does isolate changed row versions into 
> a separate data structure, so would be much more amenable to offload than 
> PostgreSQL, in its current form.
> 
> Maybe look at SLRUs (clog etc) as a place to offload something?
> 
> -- 
> Simon Riggs                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] MVCC overheads

2016-07-07 Thread Pete Stevenson
Hi Simon -

Thanks for the note. I think it's fair to say that I didn't provide enough
context, so let me try and elaborate on my question.

I agree, MVCC is a benefit. The research angle here is about enabling MVCC
with hardware offload. Since I didn't explicitly mention it, the offload I
refer to will respect all consistency guarantees also.

It is the case that for the database to implement MVCC it must provide
consistent read to multiple different versions of data, i.e. depending on
the version used at transaction start. I'm not an expert on postgresql
internals, but this must have some cost. I think the cost related to MVCC
guarantees can roughly be categorized as: creating new versions (linking
them in), version checking on read, garbage collecting old versions, and
then there is an additional cost that I am interested in (again not
claiming it is unnecessary in any sense) but there is a cost to generating
the log.

Thanks, by the way, for the warning about lab vs. reality. That's why I'm
asking this question here. I want to keep the hypothetical tagged as such,
but find defensible and realistic metrics where those exist, i.e. in this
instance, we do have a database that can use MVCC. It should be possible to
figure out how much work goes into maintaining that property.

Thank you,
Pete



On Thu, Jul 7, 2016 at 11:10 AM, Simon Riggs <si...@2ndquadrant.com> wrote:

> On 7 July 2016 at 17:45, Pete Stevenson <etep.nosnev...@gmail.com> wrote:
>
>> Hi postgresql hackers -
>>
>> I would like to find some analysis (published work, blog posts) on the
>> overheads affiliated with the guarantees provided by MVCC isolation. More
>> specifically, assuming the current workload is CPU bound (as opposed to IO)
>> what is the CPU overhead of generating the WAL, the overhead of version
>> checking and version creation, and of garbage collecting old and
>> unnecessary versions? For what it’s worth, I am working on a research
>> project where it is envisioned that some of this work can be offloaded.
>>
>
> MVCC is a benefit, not an overhead. To understand that you should compare
> MVCC with a system that performs S2PL.
>
> If you're thinking that somehow consistency isn't important, I'd hope that
> you also consider some way to evaluate the costs associated with
> inconsistent and incorrect results in applications, or other architectural
> restrictions imposed to make that possible. It's easy to make assumptions
> in the lab that don't work in the real world.
>
> --
> Simon Riggs                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


[HACKERS] MVCC overheads

2016-07-07 Thread Pete Stevenson
Hi postgresql hackers -

I would like to find some analysis (published work, blog posts) on the 
overheads affiliated with the guarantees provided by MVCC isolation. More 
specifically, assuming the current workload is CPU bound (as opposed to IO) 
what is the CPU overhead of generating the WAL, the overhead of version 
checking and version creation, and of garbage collecting old and unnecessary 
versions? For what it’s worth, I am working on a research project where it is 
envisioned that some of this work can be offloaded.

Thank you,
Pete Stevenson





Re: [HACKERS] [PATCH] Add EXPLAIN (ALL) shorthand

2016-05-21 Thread Pete Hollobon
On 21 May 2016 16:07, "David G. Johnston" wrote:
> And most of the time the choice of options is totally arbitrary based
upon the mood and experience of the user, so what's it matter if they saved
a few keystrokes and set the GUC in the .psqlrc file?
>
>> I'm predicting users that will have
>> trouble while using EXPLAIN if someone changes the suggested GUC. It also
>> breaks clients/applications that parse EXPLAIN.
>
>
> Pretty much the same argument as above.
>
> I would not expect a DBA to set this value globally - but shame on them
if they do.  I'd expect either ALTER ROLE or SET usage, in .psqlrc if
applicable, to be the dominant usage for setting the value to a non-empty
string.  There is UI to consider here but I don't see any fundamental
problems.

A GUC seems like overkill for psql. I have the following in my .psqlrc:

\set expall 'EXPLAIN (analyze, buffers, costs, timing, verbose) '

That lets you type

:expall select a, b from whatever;

For GUI tools like pgadmin you've often got built in explain tools anyway.

> David J.
>


Re: [HACKERS] Buildfarm issues on specific machines

2005-07-17 Thread Pete St. Onge
First off, thanks for looking into this, Tom, and thanks to Andrew for
all the stellar work on the buildfarm, I'm glad to be a part of it.

Perhaps this will help in the diagnosis of why REL7_2_STABLE fails on
arbor (aka caribou). Please let me know if there is anything I can try
on this side, or if there is any other info you could use.

Thanks,

Pete

arbor:~# whereis flex
flex: /usr/bin/flex /usr/share/man/man1/flex.1.gz
arbor:~# whereis gcc
gcc: /usr/bin/gcc /usr/share/man/man1/gcc.1.gz

[EMAIL PROTECTED]:~$ flex -V
flex 2.5.31

arbor:~# crontab -l
30 2 * * * PATH=/bin:/usr/bin:/home/pete/bin:/home/pete/projects/pg_build_farm/build-farm-2.05 &&
 cd ~/projects/pg_build_farm/build-farm-2.05 && for foo in HEAD REL7_2_STABLE 
REL7_3_STABLE REL7_4_STABLE REL8_0_STABLE REL8_1_STABLE; do ./run_build.pl 
--verbose $foo; done


On Sat, Jul 16, 2005 at 11:17:29PM -0400, Tom Lane wrote:
[Clip]

 caribou [7.2]: no flex installed
 
 This looks like pilot error as well, though again I don't understand why the
 later branches seem to work.  Are we sure the same PATH is being used for
 every branch here?  Why doesn't the buildfarm report for 7.2 show the PATH?
 
   regards, tom lane

-- 
Peter D. St. Onge,  http://pete.economics.utoronto.ca/
IT Admin, Department of Economics   [EMAIL PROTECTED]
SB01, 150 St. George, University of Toronto   416-946-3594



[HACKERS] feature request... case sensitivity without double quotes

2004-01-18 Thread Pete
Hi,

I'm not sure if this is the correct place to make a feature request. If 
not hopefully I can be kindly pointed in that direction.

I have several projects that use MySQL and I would like to port them to 
PostgreSQL; unfortunately they use a naming convention which uses upper 
case and lower case letters.
Example:
SELECT AccountID FROM Account

I am aware that if you enclose those table and column names with double 
quotes then postgresql will take the case into consideration. The only 
problem is most people who have a current MySQL project have not written 
their statements with double quotes (the MySQL parser uses no quotes or 
the ` back tick) and it would take considerable man power to convert each 
SQL statement.

Perhaps a feature, which is not set by default so it doesn't break 
current functionality, can be set so that when creating the database you 
can set a flag that will let postgresql know to parse column and table 
names that don't have double quotes and still keep the case information.

I'm sure this will help spur more adoption of PostgreSQL, because 
people with serious database concerns can't use MySQL, and for real 
production and large scale projects I've purchased commercial databases 
because the migration from MySQL to them was easier because of the 
different approach to parsing.

Thanks for your time,
Pete


Re: [HACKERS] Hosed PostGreSQL Installation

2002-09-23 Thread Pete St. Onge

Just following up on Tom Lane's email - 

A couple of things that I hadn't mentioned: After bringing up the
machine, the first thing I did before mucking about with PostGreSQL was
to tarball $PGDATA so that I'd have a second chance if I messed up. I
then ran pg_resetxlog -f the first time, as Tom surmised, with the
unwanted results. 

That done, I sent out the email, and followed Tom's instructions (yay
backups!) and did it properly.

On Sat, Sep 21, 2002 at 11:13:44AM -0400, Tom Lane wrote:
 Pete St. Onge [EMAIL PROTECTED] writes:
 
 That should not have been a catastrophic mistake in any version >= 7.1.
 I suspect you had disk problems or other problems.
 We did, but these were on a different disk according to the logs,
AFAIK. 

 These numbers are suspiciously small for an installation that's been
 in production awhile.  I suspect you have not told us the whole story;
 in particular I suspect you already tried pg_resetxlog -f, which was
 probably not a good idea.
 *raises hand* Yep.

Here's the contents of the pg_xlog directory. PGSQL has only been used
here for approximately 4 months of fairly light use, so perhaps the
numbers aren't as strange as they could be (this is from the backup).

-rw---1 postgres postgres 16777216 Sep 19 22:09 0002007E


 Yeah, your xlog positions should be a great deal higher than they are,
 if segment 2/7E was previously in use.
 
 It is likely that you can recover (with some uncertainty about integrity
 of recent transactions) if you proceed as follows:
 
 1. Get contrib/pg_resetxlog/pg_resetxlog.c from the 7.2.2 release ...
[Chomp]

The compile worked without a hitch after doing ./configure in the
top-level directory. I just downloaded the src for both trees, made the
changes manually, copied the file into the 7.1.3 tree and compiled it
there. 

 2. Run the hacked-up pg_resetxlog like this:
 
   pg_resetxlog -l 2 127 -x 1000000000 $PGDATA
 
 (the -l position is next beyond what we see in pg_xlog, the 1-billion
 XID is just a guess at something past where you were.  Actually, can
 you give us the size of pg_log, ie, $PGDATA/global/1269?  That would
 allow computing a correct next-XID to use.  Figure 4 XIDs per byte,
 thus if pg_log is 1 million bytes you need -x at least 4 million.)

 -rw---1 postgres postgres 11870208 Sep 19 17:00 1269

 This gives a min WAL starting location of 47480832. I used
4750.


 3. The postmaster should start now.
 I had to use pg_resetxlog's force option, but yeah, it worked like
you said it would.

 4. *Immediately* attempt to do a pg_dumpall.  Do not pass GO, do not
 collect $200, do not let in any interactive clients until you've done
 it. (I'd suggest tweaking pg_hba.conf to disable all logins but your
 own.)
 I did not pass go, I did not collect $200. I *did* do a pg_dumpall
right there and then, and was able to dump everything I needed. One
of the projects uses large objects - image files and html files (don't
ask, I've already tried to dissuade the Powers-That-Be) - and these
didn't come out. However, since this stuff is entered via script, the
project leader was fine with re-running the scripts tomorrow.


 5. If pg_dumpall succeeds and produces sane-looking output, then you've
 survived.  initdb, reload the dump file, re-open for business, go have
 a beer.  (Recommended: install 7.2.2 and reload into that, not 7.1.*.)
 You will probably still need to check for partially-applied recent
 transactions, but for the most part you should be OK.
 rpm -Uvh'ed the 7.2.2 RPMs, initdb'd and reloaded data into the new
installation. Pretty painless. I've just sent out an email to folks here
to let them know the situation, and we should know in the next day or so
what is up.


 6. If pg_dumpall fails then let us know what the symptoms are, and we'll
 see if we can figure out a workaround for whatever the corruption is.
 I've kept the tarball with the corrupted data. I'll hold onto it
for a bit, in case, but will likely expunge it in the next week or so.
If this can have a use for the project (whatever it may be), let me know
and I can burn it to DVD.

 Of course, without your help, Tom, there would be a lot of Very
Unhappy People here, me only being one of them. Many thanks for your
help and advice!

Cheers,

Pete 


-- 
Pete St. Onge
Research Associate, Computational Biologist, UNIX Admin
Banting and Best Institute of Medical Research
Program in Bioinformatics and Proteomics
University of Toronto
http://www.utoronto.ca/emililab/




[HACKERS] Hosed PostGreSQL Installation

2002-09-20 Thread Pete St. Onge

As a result of some disk errors on another drive, an admin in our group
brought down the server hosting our pgsql databases with a kill -KILL
after having gone to runlevel 1 and finding the postmaster process still
running. No surprise, our installation was hosed in the process. 

After talking on #postgresql with klamath for about an hour or so to
work through the issue (many thanks!), it was suggested that I send
the info to this list.

Currently, PostGreSQL will no longer start, and gives this error.

bash-2.05$ /usr/bin/pg_ctl  -D $PGDATA -p /usr/bin/postmaster start
postmaster successfully started
bash-2.05$ DEBUG:  database system shutdown was interrupted at
2002-09-19 22:59:54 EDT
DEBUG:  open(logfile 0 seg 0) failed: No such file or directory
DEBUG:  Invalid primary checkPoint record
DEBUG:  open(logfile 0 seg 0) failed: No such file or directory
DEBUG:  Invalid secondary checkPoint record
FATAL 2:  Unable to locate a valid CheckPoint record
/usr/bin/postmaster: Startup proc 11735 exited with status 512 - abort


Our setup is vanilla Red Hat 7.2, having pretty much all of the
postgresql-*-7.1.3-2 packages installed. Klamath asked if I had disabled
fsync in postgresql.conf, and the only non-default (read: non-commented)
setting in the file is: `tcpip_socket = true`


Klamath suggested that I run pg_controldata:

bash-2.05$ ./pg_controldata 
pg_control version number:71
Catalog version number:   200101061
Database state:   SHUTDOWNING
pg_control last modified: Thu Sep 19 22:59:54 2002
Current log file id:  0
Next log file segment:1
Latest checkpoint location:   0/1739A0
Prior checkpoint location:0/1718F0
Latest checkpoint's REDO location:0/1739A0
Latest checkpoint's UNDO location:0/0
Latest checkpoint's StartUpID:21
Latest checkpoint's NextXID:  615
Latest checkpoint's NextOID:  18720
Time of latest checkpoint:Thu Sep 19 22:49:42 2002
Database block size:  8192
Blocks per segment of large relation: 131072
LC_COLLATE:   en_US
LC_CTYPE: en_US


If I look into the pg_xlog directory, I see this:

sh-2.05$ cd pg_xlog/
bash-2.05$ ls -l
total 32808
-rw---1 postgres postgres 16777216 Sep 20 23:13 0002
-rw---1 postgres postgres 16777216 Sep 19 22:09 0002007E


There is one caveat. The installation resides on a partition of its own:
/dev/hda3   17259308   6531140   9851424  40% /var/lib/pgsql/data

fdisk did not report errors for this partition at boot time after the
forced shutdown, however.

This installation serves a university research project, and although
most of the code / schemas are in development (and should be in cvs by
rights), I can't confirm that all projects have indeed done that. So any
advice, ideas or suggestions on how the data and / or schemas can be
recovered would be greatly appreciated.

Many thanks!

-- pete

P.S.: I've been using pgsql for about four years now, and it played a
big role during my grad work. In fact, the availability of pgsql was one
of the reasons why I was able to complete and graduate. Many thanks for
such a great database!


-- 
Pete St. Onge
Research Associate, Computational Biologist, UNIX Admin
Banting and Best Institute of Medical Research
Program in Bioinformatics and Proteomics
University of Toronto
http://www.utoronto.ca/emililab/   [EMAIL PROTECTED]




Re: [HACKERS] bad performance on irix

2002-03-25 Thread Pete Forman

[EMAIL PROTECTED] (Luis Alberto Amigo Navarro) writes:

  we're running on sgi powerchallenge, 8 r1 4-way smp, and we're
  getting bad performance from postgres, throughput increases from 1
  to 5 streams, but from 5 and above there is no further increase,
  performance analysis show high sleep waiting for resources

IIRC there is a bottleneck on calls to sleep() or similar on IRIX
SMP.  All requests are dealt with on just one of the CPUs.  I don't
recollect whether there is a way to work around that or whether
programs need to be rewritten.
-- 
Pete Forman-./\.-  Disclaimer: This post is originated
WesternGeco  -./\.-   by myself and does not represent
[EMAIL PROTECTED]-./\.-   opinion of Schlumberger, Baker
http://petef.port5.com  (new)-./\.-   Hughes or their divisions.




[HACKERS] Re: Final call for platform testing

2001-04-04 Thread Pete Forman

  Solaris 2.7-8 Sparc   7.1   2001-03-22, Marc Fournier

I've reported Solaris 2.6 Sparc as working on a post-RC1 snapshot.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.




Re: [HACKERS] RC1 core dumps in initdb on Solaris 2.6

2001-03-28 Thread Pete Forman

Tom Lane writes:
  Pete Forman [EMAIL PROTECTED] writes:
   The regression test is failing early on, during initdb.  The core
   file indicates that there is a SIGBUS.  Hopefully the bugs fixed
   as a result of the "More bogus alignment assumptions" thread will
   sort things out.
  
  Sure looks like this is the same issue reported (and fixed) before.
  Would you try it with current CVS or last night's snapshot tarball?

Yes, it is fixed in the snapshot of 2001-03-27, as is the earlier bug
in pg_backup_null.c.  All tests passed.



I've registered the result via the web form though the report is not
accurate.  Can somebody update the Remarks or Version field to
indicate that I was using a snapshot rather than RC1 OOTB.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.




Re: [HACKERS] RC1 core dumps in initdb on Solaris 2.6

2001-03-28 Thread Pete Forman

Pete Forman writes:
  I've registered the result via the web form though the report is
  not accurate.  Can somebody update the Remarks or Version field to
  indicate that I was using a snapshot rather than RC1 OOTB.

Further to that request, please ignore/delete the existing Remarks.

I have looked at 12 Solaris systems running 2.5, 2.5.1, 2.6, 7, and 8.
It looks as if my machine only has had a non-standard symlink made
from libld.so.2 to libld.so in /usr/lib.  This will be removed.  As
far as I can tell, libld.so.2 is used in a special way by ld rather
than being a normal shared library.
-- 
Pete Forman   http://www.bedford.waii.com/wsdev/petef/PeteF_links.html
WesternGeco   http://www.crosswinds.net/~petef
Manton Lane, Bedford,   mailto:[EMAIL PROTECTED]
MK41 7PA, UK  tel:+44-1234-224798  fax:+44-1234-224804




[HACKERS] RC1 core dumps in initdb on Solaris 2.6

2001-03-27 Thread Pete Forman

The regression test is failing early on, during initdb.  The core file
indicates that there is a SIGBUS.  Hopefully the bugs fixed as a
result of the "More bogus alignment assumptions" thread will sort
things out.  Here are initdb.log and the stack trace.  If needs be I
can recompile with -g but unless I hear otherwise I'll wait for RC2.



Running with noclean mode on. Mistakes will not be cleaned up.
This database system will be initialized with username "gsez020".
This user will own all the data files and must also own the server process.

Creating directory 
/SRC/freeware/solaris_2.6/postgresql-7.1RC1/src/test/regress/./tmp_check/data
Creating directory 
/SRC/freeware/solaris_2.6/postgresql-7.1RC1/src/test/regress/./tmp_check/data/base
Creating directory 
/SRC/freeware/solaris_2.6/postgresql-7.1RC1/src/test/regress/./tmp_check/data/global
Creating directory 
/SRC/freeware/solaris_2.6/postgresql-7.1RC1/src/test/regress/./tmp_check/data/pg_xlog
Creating template1 database in 
/SRC/freeware/solaris_2.6/postgresql-7.1RC1/src/test/regress/./tmp_check/data/base/1
DEBUG:  database system was shut down at 2001-03-27 19:10:18 BST
DEBUG:  CheckPoint record at (0, 8)
DEBUG:  Redo record at (0, 8); Undo record at (0, 8); Shutdown TRUE
DEBUG:  NextTransactionId: 514; NextOid: 16384
DEBUG:  database system is in production state
Creating global relations in 
/SRC/freeware/solaris_2.6/postgresql-7.1RC1/src/test/regress/./tmp_check/data/global
DEBUG:  database system was shut down at 2001-03-27 19:10:26 BST
DEBUG:  CheckPoint record at (0, 112)
DEBUG:  Redo record at (0, 112); Undo record at (0, 0); Shutdown TRUE
DEBUG:  NextTransactionId: 514; NextOid: 17199
DEBUG:  database system is in production state
Initializing pg_shadow.
Enabling unlimited row width for system tables.
Creating system views.
Loading pg_description.
Setting lastsysoid.
Vacuuming database.
Bus Error - core dumped

initdb failed.
Data directory 
/SRC/freeware/solaris_2.6/postgresql-7.1RC1/src/test/regress/./tmp_check/data will not 
be removed at user's request.


Reading src/test/regress/tmp_check/install/usr/local/pgsql/bin/postmaster
core file header read successfully
Reading /usr/lib/ld.so.1
Reading /usr/lib/libresolv.so.2
Reading /usr/lib/libnsl.so.1
Reading /usr/lib/libsocket.so.1
Reading /usr/lib/libdl.so.1
Reading /usr/lib/libc.so.1
Reading /usr/lib/libmp.so.2
Reading /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1
program terminated by signal BUS (invalid address alignment)
(dbx) where
=[1] toast_save_datum(0x3efb38, 0x491a, 0x7, 0x4345d8, 0x67677265, 0xefff9b6d), at 
0x56104
  [2] toast_insert_or_update(0x3efb38, 0x44b450, 0x0, 0x298, 0x449618, 0x44b450), at 
0x54ff0
  [3] heap_tuple_toast_attrs(0x3efb38, 0x4495d0, 0x0, 0xefffe838, 0x449604, 0x0), at 
0x535ac
  [4] heap_insert(0x3efb38, 0x4495d0, 0xefffe838, 0x, 0xefffe8a8, 0x0), at 
0x4ea50
  [5] update_attstats(0x42a2, 0x7, 0x429228, 0x429df4, 0x0, 0x429d50), at 0xc25e4
  [6] analyze_rel(0x42a2, 0x0, 0xfffe, 0x3990d0, 0x219fa0, 0x0), at 0xc1310
  [7] vac_vacuum(0x0, 0x1, 0x0, 0x2000, 0x80, 0x0), at 0xbb074
  [8] vacuum(0x0, 0x0, 0x1, 0x0, 0x0, 0x0), at 0xbaf60
  [9] ProcessUtility(0x39d0a0, 0x1, 0x0, 0x0, 0x0, 0x0), at 0x160e08
  [10] pg_exec_query_string(0x39cf10, 0x1, 0x3990d0, 0x0, 0x227818, 0x56), at 0x15da58
  [11] PostgresMain(0x7, 0xefffecfc, 0x7, 0xefffecfc, 0x286930, 0x626f6f74), at 
0x15f448
  [12] main(0x7, 0xefffecfc, 0xefffed1c, 0x245000, 0x0, 0x0), at 0xf5494
(dbx)



-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.






[HACKERS] IANA registration

2001-03-26 Thread Pete Forman

PostgreSQL typically uses port 5432 for client-server communications.
It would be a good idea to register this with IANA.  This will help to
avoid a clash with other services that might try to use the port.
DB2, Interbase, MS SQL, MySQL, Oracle, Sybase, etc. are already
registered.

Might someone with a reasonable grasp of the low level messages in
PostgreSQL care to submit a registration?

http://www.iana.org/
http://www.iana.org/cgi-bin/usr-port-number.pl
http://www.isi.edu/in-notes/iana/assignments/port-numbers

-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.




[HACKERS] Re: elog with automatic file, line, and function

2001-03-20 Thread Pete Forman

Larry Rosenman writes:
  * Tom Lane [EMAIL PROTECTED] [010319 18:58]:
   However, if the C99 spec has such a concept, they didn't use that name
   for it ...
  My C99 compiler (SCO, UDK FS 7.1.1b), defines the following:
  Predefined names
  
  The following identifiers are predefined as object-like macros: 
  
  
  __LINE__
  The current line number as a decimal constant. 
  
  __FILE__
  A string literal representing the name of the file being compiled. 
  
  __DATE__
  The date of compilation as a string literal in the form ``Mmm dd
  .'' 
  
  __TIME__
  The time of compilation, as a string literal in the form
  ``hh:mm:ss.'' 
  
  __STDC__
  The constant 1 under compilation mode -Xc, otherwise 0. 
  
  __USLC__
  A positive integer constant; its definition signifies a USL C
  compilation system. 
  
  Nothing for function that I can find.

It is called __func__ in C99 but it is not an object-like macro.  The
difference is that it behaves as if it were declared thus.

static const char __func__[] = "function-name";

Those other identifiers can be used in this sort of way (note that
__FILE__ is a string literal and can be pasted, while __LINE__ is an
integer constant).

printf("Error in " __FILE__ " at line %d\n", __LINE__);

But you've got to do something like this for __func__.

printf("Error in %s\n", __func__);

-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.




RE: [HACKERS] beta5 ...

2001-02-22 Thread Pete Forman

Vince Vielhaber writes:
  On Thu, 22 Feb 2001, Christopher Kings-Lynne wrote:
  
   What about adding a field where they paste the output of 'uname
   -a' on their system...?
  
  Got this and Justin's changes along with compiler version.  Anyone
  think of anything else?

Architecture.  IRIX, Solaris and AIX allow applications and libraries
to be built 32 or 64 bit.

You may also like to add a field for configure options used.  Or is
this just for results OOTB?
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: [HACKERS] floating point representation

2001-02-20 Thread Pete Forman

Tom Lane writes:
  Hiroshi Inoue [EMAIL PROTECTED] writes:
   Tom Lane wrote:
   The defaults
   would be "%.7g" and "%.17g" (or thereabouts, not sure what number of
   digits we are currently using).
  
   Wouldn't changing current '%.6g','%.15g'(on many platforms)
   cause the regression test failure ? 
  
  I didn't check my numbers.  If the current behavior is '%.6g','%.15g'
  then we should stay with that as the default.
  
  Hmm, on looking at the code, this might mean we need some configure
  pushups to extract FLT_DIG and DBL_DIG and put those into the default
  strings.  Do we support any platforms where these are not 6 & 15?

Please remind me what we are trying to do.  6 & 15 are values to
suppress trailing digits at the end of a number in a standard printf.
For example, 0.1 prints as 0.10000000000000001 at %.17g but as 0.1 at
%.16g.  However those shorter formats are less precise.  There are
several other doubles that will also print the same result.  A round
trip of printf/scanf will not generally preserve the number.

Printing for display purposes may not be adequate for dumping with a
view to restoring.  Are we talking about display or dump?

The ideal is to print just enough digits to be able to read the number
back.  There should be no redundant digits at the end.  Printf is
unable to do this by itself.  The reason is that the correct number of
decimal digits for a %.*g is a function of the number being printed.

There are algorithms to do the right thing but they can be expensive.
I play with some in a program at the URI below.  There is a minor typo
in the usage and a missing (optional) file.  I'll correct those when
the site allows uploads again.  The files' contents are currently
available at http://petef.8k.com/.
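
To make the round-trip point concrete, here is a small test of my own
(just a sketch, not code from the tree): printing with DBL_DIG digits
can silently merge neighbouring doubles, while 17 digits preserves the
value exactly on IEEE 754 systems.

#include <float.h>
#include <stdio.h>
#include <stdlib.h>

/* Print x with the given precision, read it back with strtod, and say
   whether the round trip gave back the identical double. */
static void round_trip(double x, int digits)
{
    char buf[64];
    double y;

    snprintf(buf, sizeof(buf), "%.*g", digits, x);
    y = strtod(buf, NULL);
    printf("%%.%dg: %-22s -> %s\n", digits, buf,
           (y == x) ? "same value" : "DIFFERENT value");
}

int main(void)
{
    double x = 0.1 + DBL_EPSILON;   /* a near neighbour of 0.1 */

    round_trip(x, DBL_DIG);    /* 15 digits: prints "0.1", round trip loses it */
    round_trip(x, 17);         /* 17 digits: round trip is exact */
    return 0;
}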
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: [HACKERS] beta3 Solaris 7 (SPARC) port report

2001-01-29 Thread Pete Forman

Ross J. Reedstrom writes:
  Hmm, multiple processors, and lots of IPC:
  [snip]
  Since it's just you and the sysadmin: any chance you could bring
  the system up uniprocessor (I don't even know if this is _possible_
  with Sun hardware, let alone how hard) and run the regressions some
  more?  If that makes it go away, I'd say it pretty well points
  straight into the Solaris kernel.

My observations of Solaris UNIX domain socket problems were on single
processor machines.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: [HACKERS] beta3 Solaris 7 (SPARC) port report [ Was: Lookingfor . . . ]

2001-01-26 Thread Pete Forman

Peter Eisentraut writes:
  Frank Joerdens writes:
  
I have experienced before that Unix sockets will cause random
connection abortions on Solaris [ . . . ]
  
   Isn't that _really_ bad? Random connection abortions when going
   over Unix sockets?? My app does _all_ the connecting over Unix
   sockets?!
  
  That's bad, for sure.  Maybe you can check for odd conditions
  surrounding the /tmp directory, like is it on NFS, permission
  problems, mount options.  Or is there something odd in the kernel
  configuration?  If I'm counting correctly this is the third
  independent report of this problem, which is scary.

I'm not sure if you counted me.  I also observed that Unix sockets
cause the parallel tests to fail in random places on Solaris.


We had a similar problem porting a product that uses a lot of IPC to
Solaris.  There were failures involving the overloading of the Unix
domain sockets.  We took the code to Sun and they were unable to
resolve the problems.  It should have been possible to tune the kernel
to provide more resources.  However it turns out that some of the
parameters that we wanted to tune were ignored in favour of hard coded
values.  In the end we rewrote our code to use Internet domain sockets
(AF_INET).



BTW, owing to a DNS error email to me has bounced over the last couple
of days.  It should be okay now if anything needs to be resent.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



[HACKERS] Re: postgres memory management

2001-01-23 Thread Pete Forman

Justin Clift writes:
  I found the solution to this being to edit the ipcclean script and
  change the "ps x | grep -s 'postmaster'" part to "ps -e | grep -s
  'postmaster'".  This then works correctly with Mandrake 7.2.

A standard way of finding a process by name without the grep itself
appearing is use something like "grep '[p]ostmaster'".
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: AW: AW: AW: [HACKERS] Re: tinterval - operator problems on AIX

2001-01-22 Thread Pete Forman

Peter Eisentraut writes:
  What if someone has a binary PostgreSQL package installed, then
  updates his time library to something supposedly binary compatible
  and finds out that PostgreSQL still doesn't use the enhanced
  capabilities?

You are too generous.  If someone downloads a binary package it should
not be expected to be able to take advantage of non standard features
of the platform.  It is reasonable that they should compile it from
source to get the most from it.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: [HACKERS] FW: Postgresql on win32

2001-01-22 Thread Pete Forman

Tom Lane writes:
  That'd be OK with me.  I don't suppose Win32 has "sed" though :-(

Cygwin does.  We can worry about native support for PostgreSQL later ;-) 
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: AW: AW: AW: [HACKERS] Re: tinterval - operator problems on AIX

2001-01-18 Thread Pete Forman

Zeugswetter Andreas SB writes:
  The down side is, that I did not do a configure test, and did not
  incooperate IRIX, since I didn't know what define to check.
  
  The correct thing to do instead of the #if defined (_AIX) would be
  to use something like #ifdef NO_NEGATIVE_MKTIME and set that with a
  configure.

I agree that configure is the way to go.  What if someone has
installed a third party library to provide a better mktime() and
localtime()?

But to answer your question, #if defined (__sgi) is a good test for
IRIX, at least with the native compiler.  I can't speak for gcc.
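
For illustration only, the shape Andreas' suggestion would take in the
source (a sketch of my own; NO_NEGATIVE_MKTIME is the macro he
proposes, and the hard-coded platform tests here just stand in for a
real configure probe):

#include <time.h>

/* Stand-in for a configure probe: ideally configure would try mktime()
   on a pre-1970 date and define NO_NEGATIVE_MKTIME if it fails, rather
   than the source hard-coding platform macros like these. */
#if defined(_AIX) || defined(__sgi)
#define NO_NEGATIVE_MKTIME
#endif

int dst_for(struct tm *tm)
{
    tm->tm_isdst = -1;              /* let the library decide if it can */
#ifdef NO_NEGATIVE_MKTIME
    if (tm->tm_year + 1900 < 1970)
        return -1;                  /* mktime() would just return -1 here */
#endif
    if (mktime(tm) == (time_t) -1)
        return -1;
    return tm->tm_isdst;
}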
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: AW: [HACKERS] Re: tinterval - operator problems on AIX

2001-01-12 Thread Pete Forman

Tom Lane writes:
  Pete Forman [EMAIL PROTECTED] writes:
   Thinking about that a bit more, I think that tm_isdst should not
   be written into.
  
  IIRC, setting isdst to -1 was necessary to get the right behavior
  across DST boundaries on more-mainstream systems.  I do not think
  it's acceptable to do worse on systems with good time libraries in
  order to improve behavior on fundamentally broken ones.

A footnote in the C89 (and C99) standard says:

Thus, a positive or zero value for  tm_isdst  causes  the
mktime function to presume initially that Daylight Saving
Time, respectively, is  or  is  not  in  effect  for  the
specified time.  A negative value causes it to attempt to
determine whether Daylight Saving Time is in  effect  for
the specified time.

So tm_isdst being input as 0 or 1 is not forcing the choice of what it
will be on output.  It can be important at the end of DST when local
times repeat and the only way to distinguish them is the setting of
this flag.

That is borne out by my observations.

Setting tm_isdst to -1 before calling mktime can make a difference to
the result when the input and result have different DST flags.

It is fairly arbitrary what the answer to this question is: if six
months is subtracted from a to give b, should a.local.hour =
b.local.hour or should a.utc.hour = b.utc.hour?  If you want the
former then set tm_isdst = -1 before calling mktime.  I'm out of time
now but I'll try and look for some guidance in the SQL standards.
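
To make that concrete, a small experiment along the lines I have been
running (my own sketch, arbitrary date and timezone): build a
wall-clock time that falls in the repeated hour at the end of DST and
see how the tm_isdst hint changes what mktime returns.

#include <stdio.h>
#include <string.h>
#include <time.h>

/* Build 2001-10-28 01:30:00 local time (a repeated hour in many
   northern-hemisphere zones) with a given tm_isdst hint and show what
   mktime() makes of it. */
static void try_isdst(int isdst)
{
    struct tm tm;
    time_t t;

    memset(&tm, 0, sizeof(tm));
    tm.tm_year = 2001 - 1900;
    tm.tm_mon = 10 - 1;
    tm.tm_mday = 28;
    tm.tm_hour = 1;
    tm.tm_min = 30;
    tm.tm_isdst = isdst;

    t = mktime(&tm);
    printf("isdst in %2d: time_t %ld, isdst out %d\n",
           isdst, (long) t, tm.tm_isdst);
}

int main(void)
{
    try_isdst(-1);   /* let the library decide */
    try_isdst(0);    /* presume standard time */
    try_isdst(1);    /* presume daylight saving time */
    return 0;
}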
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: AW: AW: [HACKERS] Re: tinterval - operator problems on AIX

2001-01-11 Thread Pete Forman

Zeugswetter Andreas SB writes:
  Try the attachment with negative values, and tell us whether mktime
  returns anything other than -1. Do you have an idea how else we
  could determine daylight savings time ?

mktime always returns -1 for tm values that would correspond to a
negative time_t.  In those cases the tm is not normalized and
tm_isdst is set to -1.  When mktime returns zero or a positive value
then tm is normalized and tm_isdst is set to 0 or 1.

localtime sets all the fields of tm correctly, including tm_isdst, for
all values of time_t, including negative ones.  When I say correctly,
there is the usual limitation that the rules to specify when DST is in
force cannot express a variation from year to year.  (You can specify
e.g. the last Sunday in a month.)

My observations were consistent across AIX 4.1.5, 4.2.1, and 4.3.3.


If you have a time_t, then you can use localtime to determine DST.  If
you have a tm then you cannot work out DST for dates before the epoch.
One workaround would be to add 4*n to tm_year and subtract (365*4+1)
*24*60*60*n from the time_t returned.  (All leap years are multiples
of 4 in the range 1901 to 2038.  If tm_wday is wanted, that will need
to be adjusted as well.)  But don't you do time interval arithmetic
using PostgreSQL date types rather than accepting the limitations of
POSIX/UNIX?
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: AW: AW: [HACKERS] Re: tinterval - operator problems on AIX

2001-01-11 Thread Pete Forman

Pete Forman writes:
  One workaround would be to add 4*n to tm_year and subtract (365*4+1)
  *24*60*60*n from the time_t returned.  (All leap years are multiples
  of 4 in the range 1901 to 2038.  If tm_wday is wanted, that will need
  to be adjusted as well.)

FWIW, that should be to add 28*n to tm_year and subtract (365*4+1)*7
*24*60*60*n from the time_t returned.  That calculates tm_wday
correctly.

Also I should have been more explicit that this applies only to AIX
and IRIX.  Those return -1 from mktime(year < 1970) and do not allow
DST rules to vary from year to year.  Linux and Solaris have more
capable date libraries.
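
Spelled out as code, the workaround would look roughly like this (my
own untested sketch; note that on a 32-bit time_t the arithmetic can
still overflow near the ends of the 1901-2038 range):

#include <time.h>

/* mktime() substitute for platforms (e.g. AIX, IRIX) that return -1
   for dates before 1970: shift the year forward by a multiple of 28
   years (which preserves leap years and weekdays in 1901-2038), call
   mktime(), then shift the result back by the same number of seconds. */
#define SECS_PER_28_YEARS ((time_t) (365 * 4 + 1) * 7 * 24 * 60 * 60)

time_t mktime_pre1970(struct tm *tm)
{
    int shifts = 0;
    time_t t;

    while (tm->tm_year + 1900 < 1970)
    {
        tm->tm_year += 28;
        shifts++;
    }
    t = mktime(tm);
    if (t == (time_t) -1)
        return t;
    tm->tm_year -= 28 * shifts;          /* restore the caller's year */
    return t - (time_t) shifts * SECS_PER_28_YEARS;
}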
-- 
Pete Forman   http://www.bedford.waii.com/wsdev/petef/PeteF_links.html
WesternGeco   http://www.crosswinds.net/~petef
Manton Lane, Bedford,   mailto:[EMAIL PROTECTED]
MK41 7PA, UK  tel:+44-1234-224798  fax:+44-1234-224804



Re: AW: [HACKERS] Re: tinterval - operator problems on AIX

2001-01-10 Thread Pete Forman

Thomas Lockhart writes:
   On AIX mktime(3) leaves tm_isdst at -1 if it does not have timezone
   info for that particular year and returns -1.
   The following code then makes savings time out of the -1.
 tz = (tm->tm_isdst ? (timezone - 3600) : timezone);
  
  Hmm. That description is consistant with what I see in the Linux
  man page. So I should check for (tm->tm_isdst > 0) rather than
  checking for non-zero?
  
  Would you like to test that on your machine? I'll try it here, and
  if successful will consider this a bug report and a necessary fix
  for 7.1.

I have machines running AIX 4.1.5, 4.2.1, and 4.3.3 if you would like
to send me your test programs.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



Re: [HACKERS] pg_dump return status..

2001-01-08 Thread Pete Forman

Nathan Myers writes:
  On Fri, Jan 05, 2001 at 11:20:43AM -0500, Tom Lane wrote:
   Philip Warner [EMAIL PROTECTED] writes:
how do I
check for a failed write in a way that works on all Unixes? Is the
following OK:
   
- fwrite: ok if return value equals item count
- fprintf: ok if return value >= 0.
- fputc: ok if != EOF
   
   Probably fprintf() >= 0 --- according to my specs, it returns the number
   of chars emitted, or a negative value on error.  The other two are
   correct.
  
  An fprintf returning 0 is a suspicious event; it's easy to imagine 
  cases where it makes sense, but I don't think I have ever coded one.
  Probably >= N (where N is the smallest reasonable output, defaulting 
  to 1) may be a better test in real code.
  
  As I recall, on SunOS 4 the printf()s don't return the number of 
  characters written.  I don't recall what they do instead, and have
  no access to such machines any more.
  
  Other old BSD-derived systems are likely to have have wonky return 
  values/types on the printf()s.  Looking at the list of supported 
  platforms, none jump out as likely candidates, but in the "unsupported" 
  list, Ultrix and NextStep do.  (Do we care?)
  
  If SunOS 4 is to remain a supported platform, the printf checks may 
  need to be special-cased for it.

Current Solaris is liable to problems still, though these are not
relevant to this thread.  printf() and fprintf() have always returned
the number of characters transmitted, or EOF for failure.  It is
sprintf() that has problems.

There are two versions of sprintf() available in SunOS 4 - 8.  The
standard one (ANSI C) in libc returns an int, the number of characters
written (excluding '\0').  The BSD version returns a char* which
points to the target.  If you have a -lbsd on your link line then you
get the BSD version.  There are no compiler errors, just run time
errors if you rely on the return from sprintf() being the number of
characters.  The workaround is to put an extra -lc on the link line
before the -lbsd if your code needs both standard sprintf() and some
other BSD function.

Ultrix is documented as having the same behaviour as Solaris.  I don't
know about NeXTSTEP/OPENSTEP/GNUStep.
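
For reference, the checks Tom lists upthread come out roughly like this
in code (a sketch of mine, not pg_dump's actual write path):

#include <stdio.h>

/* Write one chunk and a trailing record, reporting failure the way the
   checks upthread describe: fwrite by item count, fprintf by a
   non-negative return, fputc by comparison against EOF. */
static int write_record(FILE *fp, const char *buf, size_t len, int seq)
{
    if (fwrite(buf, 1, len, fp) != len)
        return -1;                          /* short write */
    if (fprintf(fp, "# record %d\n", seq) < 0)
        return -1;                          /* fprintf reports error as < 0 */
    if (fputc('\n', fp) == EOF)
        return -1;
    return 0;
}

int main(void)
{
    const char data[] = "hello";

    if (write_record(stdout, data, sizeof(data) - 1, 1) != 0)
    {
        perror("write_record");
        return 1;
    }
    return 0;
}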
-- 
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef  -./\.-  Hughes or their divisions.



[HACKERS] Re: Add support for xti.h

2000-11-23 Thread Pete Forman

Tom Lane writes:
  Pete Forman wrote:
   The basic problem is that netinet/tcp.h is a BSD header.  The
   correct header for TCP internals such as TCP_NODELAY on a UNIX
   system is xti.h.  By UNIX I mean UNIX95 (aka XPG4v2 or SUSv1)
   or later.  The 2 files which conditionally include
   netinet/tcp.h need also to conditionally include xti.h.

I've done bit more research.  xti.h was the correct place to find
TCP_NODELAY in UNIX98/SUSv2.  However in the Austin Group draft of the
next version of POSIX and UNIX0x/SUSv3, XTI has been dropped and
netinet/tcp.h  officially included.

  I have never heard of xti.h before and am rather dubious that it
  should be considered more standard than tcp.h.  However, if we
  are going to include it then it evidently must be *mutually
  exclusive* with including tcp.h.  The $64 question is, which one
  ought to be included when both are available?  I'd tend to go for
  tcp.h on the grounds of "don't fix what wasn't broken".
  
  Actually, given your description of the problem, I'm half inclined
  to revert the whole patch and instead make configure's test for
  availability of netinet/tcp.h first include netinet/in.h, so
  that that configure test will succeed on IRIX etc.  Do you know any
  platforms where tcp.h doesn't exist at all?

I agree with this.  Back out the patch and update configure.in.  I
might have done that myself but I do not have enough experience with
autoconf.

The only platform I know of without netinet/tcp.h is Cygwin B20.1.
There is a workaround in place for that.  The current Cygwin 1.1 does
have the header.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.



Re: [HACKERS] (download ANSI SQL benchmark?) Re: Postgres article

2000-11-21 Thread Pete Forman

Don Baccus writes:
  I also hope that the PG crew, and Great Bridge, never stoop so low
  as to ship benchmarks wired to "prove" PG's superiority.

I thought that Great Bridge's August benchmarks were rather skewed.
They only used one particular test from the AS3AP suite.  That was the
basis for their headline figure of 4-5 times the performance of the
competition.

I was however impressed by the TPC-C results.  MySQL and Interbase
were unable to complete them.  PostgreSQL showed almost identical
performance over a range of loads to Proprietary 1 (version 8.1.5, on
Linux) and Proprietary 2 (version 7.0, on NT).
-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.



Re: [HACKERS] problems with configure

2000-11-09 Thread pete . forman

Tom Lane writes:
  If socklen_t exists, it's presumably the right thing to use, so if
  we just hardwire "void - socklen_t", I think it'd be OK.  If we're
  wrong, we'll hear about it...

Ah, if only life were that simple ;-/

Depending on the version of Solaris and the compiler flags the third
argument can be a pointer to socklen_t, void, size_t or int.

For Solaris 7 & 8 the impression I get is that accept() is an XPG4v2
thing and so the compile flags should include one of the following
sets of flags.  The first specifies XPG4v2 (UNIX95), the second XPG5
(UNIX98).  Using either will make the third argument socklen_t*.

   -D_XOPEN_SOURCE -D_XOPEN_SOURCE_EXTENDED
or
   -D_XOPEN_SOURCE=500


Solaris 2.6 only groks the first of those.  Setting the flags for
XPG4v2 will use size_t* for arg3, otherwise it will be int*.  The
underlying types are the same width, size_t is unsigned.  I'd expect
that the program would work with either, give or take warnings about
the signedness.

The only choice of arg3 on Solaris 2.5 is int*.


My bottom line is that flags for XPG4v2 should be set on Solaris.
I've successfully run configure from the current CVS sources on
Solaris 7 with the following workaround.  I presume that there is a
better place to apply the change.

CPPFLAGS="-D_XOPEN_SOURCE -D_XOPEN_SOURCE_EXTENDED" configure


-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.
***==  My old email address [EMAIL PROTECTED] will ==***
***==  not be operational from Fri 10 to Tue 14 Nov 2000.==***



Re: [HACKERS] problems with configure

2000-11-09 Thread Pete Forman

Peter Eisentraut writes:
  [EMAIL PROTECTED] writes:
  
   Depending on the version of Solaris and the compiler flags the
   third argument can be a pointer to socklen_t, void, size_t or
   int.
  
  The argument in question cannot possibly be of a different width
  than int, unless someone is *really* on drugs at Sun.  Therefore,
  if the third argument to accept() is "void *" then we just take
  "int".  Evidently there will not be a compiler problem if you pass
  an "int *" where a "void *" is expected.  The fact that int may be
  signed differently than the actual argument should not be a
  problem, since evidently the true argument type varies with
  compiler options, but surely the BSD socket layer does not.

Unless there is more than one library that implements accept, or if
accept is mapped as a macro to another function.

Whatever, I'd be happier if "void *" were mapped to "unsigned int*" as
that is what the Solaris 7 library is expecting.  But it's no big deal
if you want to go with signed.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.
***==  My old email address [EMAIL PROTECTED] will ==***
***==  not be operational from Fri 10 to Tue 14 Nov 2000.==***



Re: [HACKERS] v7.0.3 *pre-release* ...

2000-11-07 Thread Pete Forman

The Hermit Hacker writes:
  
   ftp://ftp.postgresql.org/pub/source/v7.0.3
  
  Please take a minute to download and test these out, so that when
  we release, we don't get a bunch of "oops, you forgot this"
  messages :)

I've tried it on a couple of platforms:

IRIX 6.5.5m, MIPSpro 7.3.

  Configure detection of accept() is working.

  Several regression tests fail.  Two patches that I'd submitted to
  fix these on IRIX have not been applied:

2000-08-18: Update tests/regress/resultmap for IRIX 6
2000-10-12: Regression tests - expected file for IRIX geometry test

AIX 4.3.2, xlc 3.6.6.

  Same regression test failures as 7.0.2.
  The nasty failures are triggers, misc, and plpgsql which
  consistently give "pqReadData() -- backend closed the channel
  unexpectedly." at the same point.  Also the sanity_check hangs
  during a VACUUM.  Killing the backend was the only way to continue.

-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.



Re: AW: [HACKERS] v7.0.3 *pre-release* ...

2000-11-07 Thread Pete Forman

Pete Forman writes:
  The only remaining failure is geometry.  The results I got were
  nearly identical to geometry-powerpc-aix4.out.  The only
  differences were the order of rows returned by three of the tables.
  I'll submit the results file to pgsql-patches.

I've submitted a one line patch on resultmap.

There was an oddity: in that one runtest on 7.0.3, geometry.out had
the rows in a different order for three of the select statements.
Repeating the runtest five times passed consistently (with the new
resultmap). Now I realise that in an RDB the set of results has no
intrinsic order, but I find it a bit surprising that the regression
tests are not consistent.
-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.



AW: [HACKERS] v7.0.3 *pre-release* ...

2000-11-07 Thread Pete Forman

Zeugswetter Andreas SB writes:
  
   AIX 4.3.2, xlc 3.6.6.
   
 Same regression test failures as 7.0.2.
 The nasty failures are triggers, misc, and plpgsql which
 consistently give "pqReadData() -- backend closed the channel
 unexpectedly." at the same point.  Also the sanity_check hangs
 during a VACUUM.  Killing the backend was the only way to continue.
  
  This should not be so. Your setup should definitely work without
  regression failures. There is something wrong with your dynamic
  loading of shared libs. Can you give more details, e.g. did you add
  optimization which does not work yet ?

No extra flags were added by me.  The only build warnings were
about duplicate symbols.  (There were a couple of warnings about
0.0/0.0 used to represent a NaN.)  Here are a couple of extracts from
my build log.

  xlc -I../include -I../backend -qmaxmem=16384 -qhalt=w -qsrcmsg
-qlanglvl=extended -qlonglong -o postgres access/SUBSYS.o
bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o
executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o
parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o
postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o
storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../utils/version.o
-lPW -lcrypt -lld -lnsl -ldl -lm -lcurses
  Making postgres.imp
  ../backend/port/aix/mkldexport.sh postgres /usr/local/pgsql/bin > postgres.imp
  xlc -Wl,-bE:../backend/postgres.imp -o postgres access/SUBSYS.o
bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o
executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o
parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o
postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o
storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../utils/version.o
-lPW -lcrypt -lld -lnsl -ldl -lm -lcurses


  xlc -I../../include -I../../backend -qmaxmem=16384 -qhalt=w
-qsrcmsg -qlanglvl=extended -qlonglong -DFRONTEND -c pqsignal.c -o
pqsignal.o
  ar crs libpq.a fe-auth.o fe-connect.o fe-exec.o fe-misc.o
fe-print.o fe-lobj.o pqexpbuffer.o dllist.o pqsignal.o
  touch libpq.a
  ../../backend/port/aix/mkldexport.sh libpq.a /usr/local/pgsql/lib > libpq.exp
  ld -H512 -bM:SRE -bI:../../backend/postgres.imp -bE:libpq.exp -o
libpq.so libpq.a -lPW -lcrypt -lld -lnsl -ldl -lm -lcurses -lcrypt
-lc
  ld: 0711-224 WARNING: Duplicate symbol: .crypt
  ld: 0711-224 WARNING: Duplicate symbol: crypt
  ld: 0711-224 WARNING: Duplicate symbol: .strlen
  ld: 0711-224 WARNING: Duplicate symbol: strlen
  ld: 0711-224 WARNING: Duplicate symbol: .PQuntrace
  ld: 0711-224 WARNING: Duplicate symbol: .PQtrace
  ld: 0711-224 WARNING: Duplicate symbol: .setsockopt
  ld: 0711-224 WARNING: Duplicate symbol: setsockopt
[and others]

-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.



Re: [HACKERS] 7.0.3 branded

2000-11-03 Thread Pete Forman

Bruce Momjian writes:
  I have marked 7.0.3 release tree.  The new 7.0.3 items are listed
  below.

So have Jason's patches to build on Cygwin not made it in?


On a related note, what tag should I give to cvs to get code from the
7.0.3 branch?  Is it REL7_0_PATCHES?
-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.



Re: [HACKERS] Re: Add support for xti.h

2000-10-25 Thread Pete Forman

Tom Lane writes:
   This is an IRIX bug but I think that we need to work around it.
  
   Roger, will do.
  
  I have changed configure in the CVS repository to test for
  netinet/tcp.h per your recommendation.  At your convenience, please
  verify that it really does do the right thing on IRIX.

Yes, that works.


There is a separate problem running the configure script on AIX.  It
hangs while testing for flex.  The two processes that I killed to
allow configure to continue were

/usr/ccs/bin/lex --version
/usr/bin/lex --version

The problem is that lex is waiting for input from stdin.  This patch
should fix it.   I've only tested modification of the configure file
directly.

*** config/programs.m4.orig Mon Aug 28 12:53:13 2000
--- config/programs.m4  Wed Oct 25 10:20:31 2000
***************
*** 22,28 ****
    for pgac_prog in flex lex; do
      pgac_candidate="$pgac_dir/$pgac_prog"
      if test -f "$pgac_candidate" \
!        && $pgac_candidate --version >/dev/null 2>&1
      then
        echo '%%' > conftest.l
        if $pgac_candidate -t conftest.l 2>/dev/null | grep FLEX_SCANNER >/dev/null 2>&1; then
--- 22,28 ----
    for pgac_prog in flex lex; do
      pgac_candidate="$pgac_dir/$pgac_prog"
      if test -f "$pgac_candidate" \
!        && $pgac_candidate --version </dev/null >/dev/null 2>&1
      then
        echo '%%' > conftest.l
        if $pgac_candidate -t conftest.l 2>/dev/null | grep FLEX_SCANNER >/dev/null 2>&1; then



-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.



Re: [HACKERS] Re: Add support for xti.h

2000-10-24 Thread Pete Forman

Tom Lane writes:
   Actually, given your description of the problem, I'm half
   inclined to revert the whole patch and instead make configure's
   test for availability of netinet/tcp.h first include
   netinet/in.h, so that that configure test will succeed on IRIX
   etc.
  
  Pete,
After looking at this I'm confused again.  The configure test
  consists of seeing whether cpp will process
  
   #include <netinet/tcp.h>
  
  without complaint.  I can well believe that the full C compilation
  process will generate errors if netinet/tcp.h is included without
  also including netinet/in.h, but it's a little harder to believe
  that cpp alone will complain.  Could you double-check this?
  
  It would be useful to look at the config.log file generated by the
  configure run that's reporting tcp.h isn't found.  It should
  contain the error messages generated by failed tests.

On IRIX 6.5.5m I get the following error.  The header standards.h is
included by (nearly!) all of the standard headers.  It is the IRIX
equivalent of config.h if you will.

In order to preprocess this test on IRIX a system header such as
stdio.h must precede netinet/tcp.h.  The logical choice of header
to use is netinet/in.h as tcp.h is supplying values for levels
defined in in.h.

This is an IRIX bug but I think that we need to work around it.


configure:4349: checking for netinet/tcp.h
configure:4359: cc -E conftest.c >/dev/null 2>conftest.out
cc-1035 cc: WARNING File = /usr/include/sys/endian.h, Line = 32
  #error directive:  "standards.h must be included before sys/endian.h."

  #error "standards.h must be included before sys/endian.h."
   ^
configure: failed program was:
#line 4354 "configure"
#include "confdefs.h"
#include <netinet/tcp.h>



-- 
Pete Forman -./\.- Disclaimer: This post is originated
Western Geophysical   -./\.-  by myself and does not represent
[EMAIL PROTECTED] -./\.-  the opinion of Baker Hughes or
http://www.crosswinds.net/~petef  -./\.-  its divisions.