FW: [GENERAL] a plpgsql programming question

2001-09-10 Thread Jefim Matskin



I have the following problem:
The client of the DB issues a lot of SQL commands over a network; the
statements are of the simple form - select an integer from some table(s)
using some criteria.
I want to create a function that accepts a string containing all the
statements to be executed and a string containing the length of those
statements, and returns a string containing all the integer values
retrieved.

And the questions are:
1. How do I execute an SQL statement that is contained in a text variable
inside the function?
2. How do I get the result of that statement into a variable?
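
What I have in mind is roughly the following - an untested sketch, since I
don't know whether PL/pgSQL's EXECUTE can be used this way:

CREATE FUNCTION run_statements(text) RETURNS text AS '
DECLARE
    stmts   ALIAS FOR $1;   -- e.g. one SELECT returning an integer aliased as val
    rec     RECORD;
    result  text := '''';
BEGIN
    -- EXECUTE runs a statement held in a string; FOR ... IN EXECUTE
    -- loops over the rows that the dynamic statement returns
    FOR rec IN EXECUTE stmts LOOP
        result := result || rec.val || '','';
    END LOOP;
    RETURN result;
END;
' LANGUAGE 'plpgsql';

-- called as, for example:
-- SELECT run_statements('SELECT id AS val FROM some_table WHERE x = 1');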

Thanks.
 Jefim Matskin
 -
 Senior SW engineer
 Sphera Corporation
 Tel: +972.3.613.2424 Ext:104
 mailto:[EMAIL PROTECTED]
 http://www.sphera.com/
 

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] Extra files required during backup...

2001-09-10 Thread Ashley T. Howes Ph.D.

I have reviewed the online documentation for PostgreSQL 7.1 for
backup/recovery.  This covers the backup of the tables and their contents.
For my backup I wish to include as much as possible to limit the
consequences of a complete failure.  From reviewing the archives I have
found the following should also be saved:

data/pg_hba.conf

Does the WAL require backup as well?  Are there other config files which
require backup?


Thanks

Ashley



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] user

2001-09-10 Thread Daniel Åkerud

Why is this very commonly used word reserved?
Is it some kind of #define so you can easily recompile 
PostgreSQL? If so, please guide me to the place. This is 
truly annoying.

Thanks!

Daniel Åkerud


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [GENERAL] user

2001-09-10 Thread Daniel Åkerud


Forgot to say that I'm trying to create a table named
"user".

 Why is this very commonly used word reserved?
 Is it some kind of #define so you can easily recompile 
 PostgreSQL? If so, please guide me to the place. This is 
 truly annoying.
 
 Thanks!
 
 Daniel Åkerud
 



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] Performance question

2001-09-10 Thread Tille, Andreas

Hello,

I have ported a database from MS SQL Server to PostgreSQL.  The database has
40 tables and 117 indexes which I defined for the same fields as in MS SQL.
I converted the data using some SQL output from MS SQL server and inserted
it with psql.

Now I started some performance comparisons and did the following statement:

SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz
FROM Hauptdaten_Fall
WHERE (((Hauptdaten_Fall.IstAktuell)=20))
GROUP BY Hauptdaten_Fall.MeldeKategorie
ORDER BY Hauptdaten_Fall.MeldeKategorie;

(sorry for the German names used here).

The MS SQL server returns the result immediately - you just don't notice
any delay.  If I do it on the PostgreSQL server it takes 30s on comparable
hardware :-(((.

I really have no idea what causes this big difference in speed, which would
forbid the use of PostgreSQL for our application.  How can I check
whether the indexes I created are really used?  What could be other
reasons for such a big difference in speed?

I'm using PostgreSQL 7.1.2 on Debian GNU/Linux (testing).

Any help appreciated

Andreas.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] Performance question

2001-09-10 Thread Tille, Andreas

On Mon, 10 Sep 2001 [EMAIL PROTECTED] wrote:

 Use explain. Explain tells you the query plan of the optimizer.

 explain SELECT ...;
Thanks, I just found the thread "Index usage question" and tried to make
some profit from it:

explain SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz FROM 
Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20)) GROUP BY 
Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;

NOTICE:  QUERY PLAN:

Aggregate  (cost=35267.33..36154.62 rows=17746 width=16)
  ->  Group  (cost=35267.33..35710.98 rows=177458 width=16)
        ->  Sort  (cost=35267.33..35267.33 rows=177458 width=16)
              ->  Seq Scan on hauptdaten_fall  (cost=0.00..15024.12 rows=177458 width=16)


Now I tried

Vacuum analyze;

but nothing changed :-(

Aggregate  (cost=35267.33..36154.62 rows=17746 width=16)
  ->  Group  (cost=35267.33..35710.98 rows=177458 width=16)
        ->  Sort  (cost=35267.33..35267.33 rows=177458 width=16)
              ->  Seq Scan on hauptdaten_fall  (cost=0.00..15024.12 rows=177458 width=16)

I have nearly no experience with query optimizing but the great difference
in speed tells me that something is wrong here.  There were some hints in
the "Index usage question" thread about some fields which might be
interpreted as strings.  Could this be a reason, and how can I check this?
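
If I understood that thread correctly, the trick is to write the constant
as a string (or cast it) so it matches the column's type - something like
this, which I have not tried yet:

-- a bare 20 is typed as int4; if IstAktuell is int2 or int8 the index
-- may be skipped unless the constant is quoted or cast:
explain SELECT MeldeKategorie, Count(ID) AS Anz FROM Hauptdaten_Fall
WHERE IstAktuell = '20'
GROUP BY MeldeKategorie ORDER BY MeldeKategorie;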

Kind regards

   Andreas.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] Performance question

2001-09-10 Thread Roderick A. Anderson

On Mon, 10 Sep 2001, Tille, Andreas wrote:

 Hello,
 
 Now I started some performance comparisons and did the following statement:

 The MS SQL server returns the result immediately - you just don't notice
 any delay.  If I do it on the PostgreSQL server it takes 30s on comparable
 hardware :-(((.

Did you do a VACUUM ANALYZE after loading the data?



Rod
-- 
  A small clue and no money
  will get you further than
  lots of money and no clue.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[GENERAL] install only pg_dump

2001-09-10 Thread Ben-Nes Michael

Hi All

I need to do dumps for backups from a redhat6.2 machine.

I don't want to install the whole server for just one utility, nor could I
find a proper RPM.

Any suggestions?

The 7.1.3 server is on redhat 7.1.

--
Canaan Surfing Ltd.
Internet Service Providers
Ben-Nes Michael - Manager
Tel: 972-4-6991122
http://sites.canaan.co.il
--



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] Performance question

2001-09-10 Thread Stephan Szabo


On Mon, 10 Sep 2001, Tille, Andreas wrote:

 On Mon, 10 Sep 2001 [EMAIL PROTECTED] wrote:
 
  Use explain. Explain tells you the query plan of the optimizer.
 
  explain SELECT ...;
 Thanks, I just found the thread "Index usage question" and tried to make
 some profit from it:
 
 explain SELECT Hauptdaten_Fall.MeldeKategorie,
 Count(Hauptdaten_Fall.ID) AS Anz FROM Hauptdaten_Fall WHERE
 (((Hauptdaten_Fall.IstAktuell)=20)) GROUP BY
 Hauptdaten_Fall.MeldeKategorie ORDER BY
 Hauptdaten_Fall.MeldeKategorie;
 
 Aggregate  (cost=35267.33..36154.62 rows=17746 width=16)
   ->  Group  (cost=35267.33..35710.98 rows=177458 width=16)
         ->  Sort  (cost=35267.33..35267.33 rows=177458 width=16)
               ->  Seq Scan on hauptdaten_fall  (cost=0.00..15024.12 rows=177458 width=16)
 
 I have nearly no experience with query optimizing but the great difference
 in speed tells me that something is wrong here.  There were some hints in
 the "Index usage question" thread about some fields which might be
 interpreted as strings.  Could this be a reason, and how can I check this?

What's the schema for the table? How many rows are in the table?  How many
rows actually have IstAktuell=20 (is 177458 a reasonable estimate?).  If
not, is there a common, non-NULL value that is much more common than other
values?
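
A quick way to check the planner's estimate against reality:

SELECT count(*) FROM hauptdaten_fall WHERE IstAktuell = 20;
-- compare the result with the rows=177458 guess in the plan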



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] install only pg_dump

2001-09-10 Thread Lamar Owen

On Monday 10 September 2001 10:24 am, Ben-Nes Michael wrote:
 I need to do dumps for backups from a redhat6.2 machine.

 I don't want to install the whole server for just one utility, nor could I
 find a proper RPM.

I currently am not building Red Hat 6.2 RPMs.  If you can get RH 6.2 binary 
RPMs, all you would need installed would be the main postgresql RPM and the 
postgresql-libs RPM.  You would not need the postgresql-server RPM installed 
to do a pg_dump.
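
With the client pieces installed, the dump can then be taken across the
network from the 7.1.3 server, along the lines of (hostname invented):

pg_dump -h your.71.server mydb > mydb.dump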
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [GENERAL] Performance question

2001-09-10 Thread Herbert.Liechti

On Mon, 10 Sep 2001, Tille, Andreas wrote:

 Hello,

 I have ported a database from MS SQL Server to PostgreSQL.  The database has
 40 tables and 117 indexes which I defined for the same fields as in MS SQL.
 I converted the data using some SQL output from MS SQL server and inserted
 it with psql.

 Now I started some performance comparisons and did the following statement:

 SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz
 FROM Hauptdaten_Fall
 WHERE (((Hauptdaten_Fall.IstAktuell)=20))
 GROUP BY Hauptdaten_Fall.MeldeKategorie
 ORDER BY Hauptdaten_Fall.MeldeKategorie;

 (sorry for the German names used here).

 The MS SQL server returns the result immediately - you just don't notice
 any delay.  If I do it on the PostgreSQL server it takes 30s on comparable
 hardware :-(((.

 I really have no idea what causes this big difference in speed, which would
 forbid the use of PostgreSQL for our application.  How can I check
 whether the indexes I created are really used?  What could be other
 reasons for such a big difference in speed?

Use explain. Explain tells you the query plan of the optimizer.

explain SELECT ...;

Best regards
Herbie
-- 

Herbert Liechti  http://www.thinx.ch
ThinX networked business servicesAdlergasse 5, CH-4500 Solothurn




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] Performance question

2001-09-10 Thread Einar Karttunen

On Mon, Sep 10, 2001 at 02:34:25PM +0200, Tille, Andreas wrote:
 On Mon, 10 Sep 2001 [EMAIL PROTECTED] wrote:
 
  Use explain. Explain tells you the query plan of the optimizer.
 
  explain SELECT ...;
 Thanks, I just found the thread "Index usage question" and tried to make
 some profit from it:
 
 explain SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz FROM 
Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20)) GROUP BY 
Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;
 
 NOTICE:  QUERY PLAN:
 
 Aggregate  (cost=35267.33..36154.62 rows=17746 width=16)
   ->  Group  (cost=35267.33..35710.98 rows=177458 width=16)
         ->  Sort  (cost=35267.33..35267.33 rows=177458 width=16)
               ->  Seq Scan on hauptdaten_fall  (cost=0.00..15024.12 rows=177458 width=16)
 
 
There must be a problem with your indexes. I tried the following:
temp=# CREATE TABLE Hauptdaten_Fall (
temp(# MeldeKategorie text,
temp(# ID integer,
temp(# IstAktuell integer);
CREATE
temp=# explain SELECT MeldeKategorie,count(ID) FROM Hauptdaten_Fall
temp-#  WHERE IstAktuell=20 GROUP BY MeldeKategorie
temp-#  ORDER BY MeldeKategorie;
NOTICE:  QUERY PLAN:

Aggregate  (cost=22.67..22.72 rows=1 width=16)
  ->  Group  (cost=22.67..22.69 rows=10 width=16)
        ->  Sort  (cost=22.67..22.67 rows=10 width=16)
              ->  Seq Scan on hauptdaten_fall  (cost=0.00..22.50 rows=10 width=16)

EXPLAIN
temp=# CREATE INDEX hfia ON Hauptdaten_Fall (IstAktuell);
CREATE
temp=# explain SELECT MeldeKategorie,count(ID) FROM Hauptdaten_Fall
temp-#  WHERE IstAktuell=20 GROUP BY MeldeKategorie
temp-#  ORDER BY MeldeKategorie;
NOTICE:  QUERY PLAN:

Aggregate  (cost=8.30..8.35 rows=1 width=16)
  ->  Group  (cost=8.30..8.33 rows=10 width=16)
        ->  Sort  (cost=8.30..8.30 rows=10 width=16)
              ->  Index Scan using hfia on hauptdaten_fall  (cost=0.00..8.14 rows=10 width=16)

EXPLAIN
temp=#

This shows quite clearly that an index scan will improve the situation
drastically - even more so for you, because the table seems to have very
many rows in it.

- Einar Karttunen

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] SQL Loader?

2001-09-10 Thread Guy Fraser

Joel Pang wrote:
 
 Hi Fraser,
 
 I've read your mail on the postgresql web site saying that you have a
 procedure that will do bulk loading of records into the database.  I've
 been looking for a utility just like the sql loader of Oracle for the
 Postgres database.  I was wondering if you would share your procedure with
 me, or give me some URL links that I can use as a reference on how to
 write my own sql loader procedure?
 
 Thanks in advance for your help.
 
 Cheers,
 Joel

Please note: the script below uses a special program to convert radius
accounting files for import as an ordered, tab-delimited file.

The script also shows a way to import into an intermediate table,
separate the distinct unique records into their own tables, and then move
the duplicate entries to a table for data integrity.

This also demonstrates a method of using tables determined from the data
being imported.

I wrote this software a long time ago and have spent little time
patching or rewriting. Since the script was initially written, I now
know some better ways of performing some of these tasks. I don't have
time to rewrite the script and it has worked for over 3 years so it is
low priority.

Guy Fraser

 begin radimport 
#!/bin/bash
ORIG=`pwd`
WORKDIR='/usr/local/pgsql'

cd $WORKDIR
echo "Start collecting files"
wget -P $WORKDIR/ -t 10 -T 30 -c \
ftp://username:[EMAIL PROTECTED]/path/*.acct.gz   # modified for security
echo "Finished collecting files"
echo "Start decompressing files"
gunzip -v *.acct.gz
echo "Finished decompressing files"

for fname in `ls *.acct`;
do {
[ -f $fname ] && {
date
echo "Start inputing $fname"
# parserad - convert radius detail file to tab delimited format.
/usr/local/sbin/parserad -i $fname \
| /usr/local/pgsql/bin/psql -c \
"copy detail from stdin;" radius
echo "Finished inputing $fname"
echo "Start compressing $fname"
gzip -v $fname
echo "Finished compressing $fname"
#
# Clean up detail
#
/usr/local/bin/psql -c "vacuum detail;" radius
#
# If more than one month, determine Current and Previous, Month and Year.
#
MINTIME=`/usr/local/pgsql/bin/psql -c \
"select date_trunc('month',min(\"Time-Stamp\")::abstime) from detail;" \
 -A -t radius`
MINMON=`echo $MINTIME | awk '{print $2}' -`
MINYEAR=`echo $MINTIME | awk '{print $5}' -`
MAXTIME=`/usr/local/pgsql/bin/psql -c \
"select date_trunc('month',max(\"Time-Stamp\")::abstime) from detail;" \
 -A -t radius`
MAXMON=`echo $MAXTIME | awk '{print $2}' -`
MAXYEAR=`echo $MAXTIME | awk '{print $5}' -`
[ "$MAXYEAR" = "" ] && (echo "Exiting: No Data in detail table." || exit 1)
echo "Moving $fname"
mv $fname.gz /mnt/sdb3/done/$MAXYEAR
echo "Start processing data from $fname"
#
# Process records in detail file and create a unique record table called
# radius.
#
echo "Creating lookup table"
/usr/local/bin/psql -c \
"select min(oid) as recnum,max(\"Acct-Session-Time\"), \
\"Acct-Status-Type\",\"Acct-Session-Id\",\"NAS-IP-Address\",\"NAS-Port-Id\", \
\"User-Name\",\"Realm\",\"Framed-IP-Address\",\"Calling-Station-Id\" \
into radius \
from detail \
group by \"Acct-Session-Id\",\"NAS-IP-Address\",\"NAS-Port-Id\",\"User-Name\", \
\"Realm\",\"Framed-IP-Address\",\"Calling-Station-Id\",\"Acct-Status-Type\";" \
 radius
/usr/local/bin/psql -c "vacuum radius;" radius

#
# Move stop records to stop table
#
echo "Creating unique stop record table"
/usr/local/bin/psql -c \
"select recnum into radstop from radius where \"Acct-Status-Type\" = 'Stop';" \
 radius
echo "Filtering stop records"
/usr/local/bin/psql -c \
"select a.* into stop from detail as a,radstop as b where a.oid = b.recnum;" \
 radius
/usr/local/bin/psql -c "select count(*) as \"Filtered\" from stop;" radius
echo "Cleaning stop records"
/usr/local/bin/psql -c \
"delete from detail where oid = radstop.recnum;" \
 radius
echo "Cleaning temporary tables"
/usr/local/bin/psql -c "drop table radstop;" radius
/usr/local/bin/psql -c "vacuum detail;" radius

#
# Move start and alive records to start table
#
echo "Creating unique start record table"
/usr/local/bin/psql -c \
"select recnum into radstart from radius where \"Acct-Status-Type\" != 'Stop';" \
 radius
echo "Filtering start records"
/usr/local/bin/psql -c \
"select a.* into start from detail as a,radstart as b where a.oid = b.recnum;" \
 radius
/usr/local/bin/psql -c "select count(*) as \"Filtered\" from start;" radius
echo "Cleaning start records"
/usr/local/bin/psql -c \
"delete from detail where oid = radstart.recnum;" \
 radius
echo "Cleaning temporary tables"
/usr/local/bin/psql -c "drop table radstart;" radius
/usr/local/bin/psql -c "drop table radius;" radius
/usr/local/bin/psql -c "vacuum detail;" radius

#
# Move rest of records to dups table
#
echo "Filtering duplicate records"

[GENERAL] unicode in 7.1

2001-09-10 Thread Culley Harrelson

Hello,

My ISP recently upgraded from PostgreSQL 7.0 to 7.1.  It went pretty well,
but I just discovered that non-English characters are now in the database
as question marks - inserting non-English characters produces a ? as
well.  Any idea what has gone wrong and what we need to do to fix this?

culley

__
Do You Yahoo!?
Get email alerts & NEW webcam video instant messaging with Yahoo! Messenger
http://im.yahoo.com

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] COPY command with timestamp and NULLs

2001-09-10 Thread Dwayne Miller

I recently upgraded to 7.1.3.  I was experimenting with a script to 
export data from FoxPro into an SQL file and multiple data files.  The 
SQL file creates the tables, indexes, foreign keys, etc, and calls the 
COPY command to load the data from the appropriate data files.

It appears, and I could easily be mistaken, that the COPY command does 
not allow NULLs into a timestamp field, even though the field is defined 
to accept nulls.  Actually, it appears that the behavior of the COPY 
command changed as I believe it would accept nulls in the prior release 
7.1.2.

In any case, I'm using the COPY command WITH NULL AS '^N'. And the 
datafile contains ^N in timestamp fields that could be NULL, but the 
command fails with an invalid timestamp error, referencing the first 
line that contains the '^N' null sequence.

Any thoughts?

Thanks,
Dwayne


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] Questions about tuning on FreeBSD...

2001-09-10 Thread Philip Hallstrom

Hi all -
I have some questions about tuning the various kernel parameters
on FreeBSD.  I've read Bruce's article a couple of times and the various
parts of the documentation, but I still have some questions and was hoping
people here could confirm/deny my assumptions.

The machine in question has 512 megs of ram and doesn't do anything else
significant other than postgresql.


As I understand there are two major tuneable parameters:

- shared buffer cache: which I can think of as a sort of RAM-based
disk cache of recently accessed tables (or parts of tables).  Ideally this
would be large enough to hold the entire database.  The goal is to make
this large enough to hold the most commonly accessed tables.

- sort memory batch size: this is the amount of memory that *each backend*
uses to do its sorts/merges/joins.  If the backend needs more than this
then it writes to temporary files.  Again the goal would be to make all
your sorts/merges/joins fit in this size.

The overall goal is to give enough memory to postgresql so that it writes
to disk as little as possible while making sure that the OS as a whole
doesn't have to start swapping.  So as a starting point could one install
the OS, let it run for a bit, see how much memory it's using, see how much
memory is left and assign most of that to postgresql?


Regarding the KERNEL parameters.  I can follow the general rules mentioned
in the docs for configuring things no problem.  However is there any
danger in doing that?  Can they be too big?  Or can they only be too big
in conjunction with giving postgresql too much memory?  In otherwords if I
set them to something, but don't run postgresql have I affected how the OS
will run by itself?

Is there a way to determine the maximum number of backends that can be run
given a given amount of RAM?

Also, if anyone has a FreeBSD box with 512 ram, what are your kernel
settings?


Thanks!

-philip




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] unicode in 7.1

2001-09-10 Thread Tatsuo Ishii

 My ISP recently upgraded from PostgreSQL 7.0 to 7.1.  It went pretty well,
 but I just discovered that non-English characters are now in the database
 as question marks - inserting non-English characters produces a ? as
 well.  Any idea what has gone wrong and what we need to do to fix this?

Hard to tell without knowing what the configuration option was and
what kind of API you are using... 
--
Tatsuo Ishii

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] user

2001-09-10 Thread Karen Ellrick

 Forgot to say that I'm trying to create a table named
 "user".

  Why is this very commonly used word reserved?
  Is it some kind of #define so you can easily recompile
  PostgreSQL? If so, please guide me to the place. This is
  truly annoying.

Rather than trying to tear apart a database system that was carefully
designed with "user" as a word with meaning to the system, is there any
reason why you can't use a slightly different name for your table?  I tried
the same thing once, by the way, and when I realized I couldn't name my
table "user", I called it "users" - after all, there will be more than one
user! :-)  Other ideas are "usr", "db_user" (replace "db" with something
meaningful to you), "user_info", etc.

Just a thought.


Karen Ellrick
S & C Technology, Inc.
1-21-35 Kusatsu-shinmachi
Hiroshima  733-0834  Japan
(from U.S. 011-81, from Japan 0) 82-293-2838



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] Idea: jobs.postgresql.org

2001-09-10 Thread Justin Clift

Sorry Vince,

My mistake.  I must have worded that badly.

What I mean is that some Agencies find it difficult to believe that we
would actually assist them in looking for candidates, especially
directly to a targeted audience, without demanding to be paid for the
effort.

Once they understand we're doing this for the benefit of the PostgreSQL
Community, they're cool with it.

:-)

Regards and best wishes,

Justin Clift


Vince Vielhaber wrote:
 
 On Sun, 9 Sep 2001, Justin Clift wrote:
 
  I definitely believe we should allow agencies to list their jobs at no
  charge.  Just because they're into business doesn't mean we have to take
  a cut of it.  This should also mean they feel more comfortable about
  posting PostgreSQL jobs, and thereby benefits our community.
 
 My comment was because you said they were uncomfortable with no charge
 for posting.  If it makes them feel better or will keep them from posting,
 let them donate.
 
 Vince.
 --
 ==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory    http://www.camping-usa.com
Online Giftshop Superstore    http://www.cloudninegifts.com
 ==

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
 - Indira Gandhi


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] Idea: jobs.postgresql.org

2001-09-10 Thread Vince Vielhaber

On Mon, 10 Sep 2001, Justin Clift wrote:

 Sorry Vince,

 My mistake.  I must have worded that badly.

 What I mean is that some Agencies find it difficult to believe that we
 would actually assist them in looking for candidates, especially
 directly to a targeted audience, without demanding to be paid for the
 effort.

 Once they understand we're doing this for the benefit of the PostgreSQL
 Community, they're cool with it.

OOhhh!

Now that makes more sense!

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory    http://www.camping-usa.com
Online Giftshop Superstore    http://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] unicode in 7.1

2001-09-10 Thread Karen Ellrick

 My ISP recently upgraded from PostgreSQL 7.0 to 7.1.  It went pretty well,
 but I just discovered that non-English characters are now in the database
 as question marks - inserting non-English characters produces a ? as
 well.  Any idea what has gone wrong and what we need to do to fix this?

Without any more info it's only a guess, but did the ISP folks forget to use
the --enable-multibyte=favorite_charset during the configure step of
installation?  If they had multibyte enabled before and didn't this time,
that could explain the problem.

Regards,


Karen Ellrick
S & C Technology, Inc.
1-21-35 Kusatsu-shinmachi
Hiroshima  733-0834  Japan
(from U.S. 011-81, from Japan 0) 82-293-2838



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Fwd: Re: [GENERAL] unicode in 7.1

2001-09-10 Thread Culley Harrelson

The data was corrupted in the process of the upgrade.

Is there some way to tell what the configuration options were when it was 
installed?  I am assuming by API you mean how am I accessing Postgres?  JDBC.

Culley


X-Apparently-To: [EMAIL PROTECTED] via web9605; 10 Sep 2001 
18:19:26 -0700 (PDT)
X-Track: 1: 40
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] unicode in 7.1
X-Mailer: Mew version 1.94.2 on Emacs 20.7 / Mule 4.1
  =?iso-2022-jp?B?KBskQjAqGyhCKQ==?=
Date: Tue, 11 Sep 2001 10:19:00 +0900
From: Tatsuo Ishii [EMAIL PROTECTED]
X-Dispatcher: imput version 2228(IM140)
Lines: 9

  My ISP recently upgraded from PostgreSQL 7.0 to 7.1.  It went pretty well,
  but I just discovered that non-English characters are now in the database
  as question marks - inserting non-English characters produces a ? as
  well.  Any idea what has gone wrong and what we need to do to fix this?

Hard to tell without knowing what the configuration option was and
what kind of API you are using...
--
Tatsuo Ishii


_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Dr. Evil

 As far as disks go.  You cannot have too many spindles.  But the number of
 spindles you have available depends on which pieces of postgres get split
 onto dedicated spindles.

But if you have enough RAM that the entire DB can be held in RAM, and
if the DB access pattern is 90% read, 10% write (which is usually the
case), then do you really need to worry about disk performance?

It seems like for most applications, it is possible to hold all the
data in RAM, and RAM is cheap these days.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: Fwd: Re: [GENERAL] unicode in 7.1

2001-09-10 Thread Tatsuo Ishii

 The data was corrupted in the process of the upgrade.
 
 Is there some way to tell what the configuration options were when it was 
 installed?

pg_config --configure

 I am assuming by API you mean how am I accessing Postgres?  JDBC.

7.1's JDBC driver has been slightly enhanced from 7.0 in the sense of
encoding handling. Modify your Java applications to meet the new JDBC
driver's requirements (do not ask me how, because I'm not a Java
programmer) or stay with the old driver.
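
Also, assuming multibyte was compiled in, you can check what encoding the
database was created with from psql:

SELECT getdatabaseencoding();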
--
Tatsuo Ishii

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] Questions about tuning on FreeBSD...

2001-09-10 Thread postgres

On Monday 10 September 2001 18:52, Philip Hallstrom wrote:
 Hi all -
   I have some questions about tuning the various kernel parameters
 on FreeBSD.  I've read Bruce's article a couple of times and the various
 parts of the documentation, but I still have some questions and was hoping
 people here could confirm/deny my assumptions.

 The machine in question has 512 megs of ram and doesn't do anything else
 significant other than postgresql.

Depending on the size of your DB, this should do OK...

 As I understand there are two major tuneable parameters:

 - shared buffer cache: which I can think of as a sort of RAM-based
 disk cache of recently accessed tables (or parts of tables).  Ideally this
 would be large enough to hold the entire database.  The goal is to make
 this large enough to hold the most commonly accessed tables.
I run with shared buffers = 5120

 - sort memory batch size: this is the amount of memory that *each backend*
 uses to do its sorts/merges/joins.  If the backend needs more than this
 then it writes to temporary files.  Again the goal would be to make all
 your sorts/merges/joins fit in this size.

Sort Mem = 4096
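
In postgresql.conf terms (7.1 syntax) those look something like:

shared_buffers = 5120        # 8 kB pages, so roughly 40 MB
sort_mem = 4096              # in kB, per sort/merge step in each backend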

 The overall goal is to give enough memory to postgresql so that it writes
 to disk as little as possible while making sure that the OS as a whole
 doesn't have to start swapping.  So as a starting point could one install
 the OS, let it run for a bit, see how much memory it's using, see how much
 memory is left and assign most of that to postgresql?


 Regarding the KERNEL parameters.  I can follow the general rules mentioned
 in the docs for configuring things no problem.  However is there any
 danger in doing that?  Can they be too big?  Or can they only be too big
 in conjunction with giving postgresql too much memory?  In otherwords if I
 set them to something, but don't run postgresql have I affected how the OS
 will run by itself?

 Is there a way to determine the maximum number of backends that can be run
 given a given amount of RAM?

 Also, if anyone has a FreeBSD box with 512 ram, what are your 
 kernel settings?

# SYSV stuff
options SYSVSHM #SYSV-style shared memory
options SHMMAXPGS=12288
options SHMMAX=(SHMMAXPGS*PAGE_SIZE+1)
options SHMSEG=256
options SHMMNI=512
options SHMMIN=1
 
options SYSVMSG #SYSV-style message queues
 
options SYSVSEM #SYSV-style semaphores
options SEMMNI=256
options SEMMNS=512
options SEMMNU=256
options SEMMAP=256

Note:  Some of these might be WAY TOO high!
I could not find enough docs to tell me, so I just added extra
everywhere...  Maybe some of the experts can pick this apart...


 Thanks!

 -philip


On that note:  Could some of the PostgreSQL experts take a look at my
number of buffers and the kernel config and tell me if I'm running too
much of anything?

GB

-- 
GB Clark II | Roaming FreeBSD Admin
[EMAIL PROTECTED] | General Geek 
   CTHULU for President - Why choose the lesser of two evils?

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] how to get the md5 result of a string ?

2001-09-10 Thread Feite Brekeveld

Hi,

Is there a function like:

select md5(attribute_name) from ...

Thanks,


--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Jim Buttafuoco

I am currently working on a patch to 7.2 that will allow data/indexes to
be in different locations.  I am also looking at replacing the current
LOCATION code with a table driven (pg_locations) that will allow not
only the default data/indexes locations to be set but also each
table/index to have its own location.  The first part is finished.  I
will be making a patch tomorrow after some more testing.  On my app
(about 400G of table data and 350G of index data) it really makes a
difference...

Jim


  - Hardware:  dual / quad Intel class
 
 OK, but remember I/O is key for databases.  The more spindles the
 better.  See my performance article on techdocs.
 
  - OS:  Prolly FreeBSD (FFS is your friend (what with syncs and all)
and it
  can do multi proc support
 
 I would recommend soft updates be enabled.
 
 Good points.
 
  - Disk:  SCSI Raid 1+0
 
 Not sure about that.  Is that optimal for I/O?
 
 From my experience it is.  As long as you have a raid controler that
 can do 2 level RAID abstraction.  First you need mirrored pairs and
 then you stripe over them.  It costs a lot in disk,  but is stupid
fast
 with the right raid controller.  With some Suns / FC / EMC,  we were
 getting ~100M/s+ with that setup for our Oracle server.
 
  - Ram:  Not really sure here.  Is there math somewhere for ram
needs for
  pgsql? I imagine is has something to do with # connections,  db
size,
  etc.
 
 Again, see article.
 
 Thanks.
 
 


  b. palmer,  [EMAIL PROTECTED]   
pgp:crimelabs.net/bpalmer.pgp5
 
 
 
 ---(end of
broadcast)---
 TIP 1: subscribe and unsubscribe commands go to
[EMAIL PROTECTED]
 
 



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Trond Eivind Glomsrød

bpalmer [EMAIL PROTECTED] writes:

 I'm looking to make a DB server for a project I'm working on (using pgsql)
 and am wondering if people have suggestions?
 
 Thoughts:
 
 - Hardware:  dual / quad Intel class

Athlon gives more bang for the buck - the dual Athlons are _really_
nice, and have proven stable as well. 

 - Disk:  SCSI Raid 1+0

I'd probably go for a 3ware RAID instead... IDE disks are so much
cheaper nowadays than SCSI, and the premium isn't really justifiable
anymore.

 - OS:  Prolly FreeBSD (FFS is your friend (what with syncs and all) and it
 can do multi proc support

I'd recommend Linux, which has more mature MP support and scales
better, but I'm obviously biased :). It's of course very important to
keep what you are familiar with - a good sysadmin makes a world of
difference no matter what you're using.

 - Ram:  Not really sure here.  Is there math somewhere for ram needs for
 pgsql? I imagine it has something to do with # connections,  db size,
 etc.

More is better. RAM is cheap. Avoid RAMBUS. 

-- 
Trond Eivind Glomsrød
Red Hat, Inc.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] how to get the md5 result of a string ?

2001-09-10 Thread Marko Kreen

On Mon, Sep 10, 2001 at 11:40:27PM +0200, Feite Brekeveld wrote:
 Is there a function like:
 
 select md5(attribute_name) from ...

digest(field, 'md5')

If you want in hex:

encode(digest(field, 'md5'), 'hex')

Look into contrib/pgcrypto in PostgreSQL source.
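
For example, once contrib/pgcrypto is built and loaded into your
database ("yourtable" here is just a placeholder):

select encode(digest(attribute_name, 'md5'), 'hex') from yourtable;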

-- 
marko


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Lamar Owen

  - Hardware:  dual / quad Intel class

The ultimate would be an IBM S/390 mainframe running some distribution of 
Linux S/390.  I/O bandwidth on mainframe DASD is incredible, the memory is 
robust and fast, and the CPU is tremendous.

The price is also tremendous.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[GENERAL] Articles from Computerworld

2001-09-10 Thread Bruce Momjian

Here are two good articles, one on open source databases, the other about
the Great Bridge closure, both from Computerworld:

http://www.computerworld.com/cwi/story/0,1199,NAV47_STO63629,00.html

http://www.computerworld.com/storyba/0,4125,NAV47_STO63600,00.html

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] user

2001-09-10 Thread Peter Eisentraut

Daniel Åkerud writes:

 Why is this very commonly used word reserved?

Because SQL says so.  More specifically, because USER is a special
function.

 Is that some kind of #define so you easily can recompile
 PostgreSQL?

Surely not.

 If so, please guide me to the place.

You can hack around in src/backend/parser/gram.y, remove the USER
expansion of the c_expr nonterminal, and move the USER expansion of
ColLabel up to TokenId.
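
A less invasive alternative, if the goal is just a table by that name, is
a quoted identifier - then the reserved word never comes into play:

CREATE TABLE "user" (id integer, name text);
SELECT * FROM "user";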

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] how to get the md5 result of a string ?

2001-09-10 Thread Bruce Momjian


Yes, see contrib/pgcrypto.

 Hi,
 
 Is there a function like:
 
  select md5(attribute_name) from ...
 
 Thanks,
 
 
 --
 Feite Brekeveld
 [EMAIL PROTECTED]
 http://www.osiris-it.nl
 
 
 
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Steve Wolfe

 I'd probably go for a 3ware RAID instead... IDE disks are so much
 cheaper nowadays than SCSI, and the premium isn't really justifiable
 anymore.

   Having used IDE and SCSI disks, when I'm serious about performance, IDE
doesn't even enter my mind.  Also, over on the XFS list, there are a few
people that have been using 3ware cards, and it sounds like there are
still some serious caveats/bugs to them.  Myself, I much prefer a good
SCSI RAID card that's going to work, and going to work well.

  (As an aside, one person was in a heated argument about how much cheaper
IDE was than SCSI.  I got on pricewatch, found some prices, and would have
been able to put together a very fast SCSI system for the same price as
his IDE array.)

steve



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Steve Wolfe

 I'm not sure how much a 2+ way system will help.  Most of the db work
 will be done in one long serialized processes and those can't be spread
 out over multiple processors (with current day postgresql).

  That's assuming that only one query will ever be executed at once.  As a
new backend is spawned for each connection, extra CPU's are very helpful
if the database will see more than occasional use.

  Also, even if there's only one query, dual-CPU machines are generally
much more responsive, especially under load, as one CPU can be handling
interrupts, kernel code, and other system processes while the other sits
there doing your task.

 To really eke out as much speed as possible here, you'll want 10k RPM
 Ultra-160 Fibre Channel SCSI drives with a dedicated hardware raid
 controller.  If you have more reads than writes, you may want to use Raid 5
 instead.

 Why 5?  1+0 is far better and faster.  I was planning on doing a
 hardware RAID controller (just need to find the one that FBSD likes the
 best).

  If you have enough RAM, disk speed isn't terribly important, so RAID 5
gives you the redundancy without as many disks.  Throw in an extra gig of
RAM for your disk cache, turn off fsync(), and you're likely to see a lot
bigger speed-up than any disk upgrade will give you.  There are cases
where that isn't the case (such as updating every row in a multi-gigabyte
table), but unless you're involved in those specialized cases, it's not as
important.

  So, why did I say that I don't use IDE for high-performance machines?
IDE has limitations.  For example, say I wanted 10 drives in my array.
Finding a 5-channel IDE RAID controller is probably not as easy (and not
as cheap) as a dual-channel SCSI RAID controller.  Also, SCSI buses are
much better about sharing bandwidth than IDE, as IDE doesn't have some of
the nifty features that SCSI does.  And to go one further, hot-swappable
SCA bays are pretty common in server chassis.  I simply plugged the RAID
controller into the SCA backplanes, and was done.  Had I gone IDE, there
would have been additional cost in obtaining the hot-swap IDE bays.

  As an aside, if you do go with a 3ware card, you might NOT want to use
RAID 5.  The processors on the card are not up to the computing demands of
RAID 5, you might want to take a look at:

http://marc.theaimsgroup.com/?l=linux-xfs&m=99970690219042&w=2


 No reason not to go 2GB.  Ram is cheap these days, and you can always
 increase shared buffers and caches to actually fill the server memory
 up with as much quick-fetch info as possible.

 But then why not 4G?  I would love some real numbers rather than 'a
 lot'.  With oracle,  you can plug in some numbers and a real estimate
 will be spit out.  I've worked with DB servers w/ 14G of ram that were
 killing that,  so 'get a lot' isn't really good enough.

  We run 1.5 gigs, and that's plenty for us.  I increased the shared
buffers until it didn't help any more, then doubled it; I believe that it
came out to around 128 megs.  I did the same with sort memory, which came
out to around 64 megs.  The machine right now uses about 860 megs of disk
cache, but took a few months to get that high.  It hasn't used swap at all.
If it ever hits swap, we'll add more.  Luckily, with the 4-way interleaved
memory, it'll take up to 16 gigs, and with 16 slots, there's a lot of room
to add more. : )

steve



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] COPY command with timestamp and NULLs

2001-09-10 Thread Jeff Eckermann

COPY expects NULL to be represented by \N by default.
Probably easiest to explicitly specify on the command line what you want
COPY to recognize as a NULL: copy table from wherever with null as
'something';
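
One more thing to check: the null string must match the data
byte-for-byte.  '^N' in the command means the two literal characters
caret and N; if the FoxPro export wrote an actual Ctrl-N control
character instead, the match fails and COPY tries to parse the field as a
timestamp.  For example (table and file names invented):

COPY mytable FROM '/tmp/mytable.dat';                    -- default NULL is \N
COPY mytable FROM '/tmp/mytable.dat' WITH NULL AS '^N';  -- caret + N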

- Original Message -
From: Dwayne Miller [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, September 10, 2001 2:51 PM
Subject: [GENERAL] COPY command with timestamp and NULLs


 I recently upgraded to 7.1.3.  I was experimenting with a script to
 export data from FoxPro into an SQL file and multiple data files.  The
 SQL file creates the tables, indexes, foreign keys, etc, and calls the
 COPY command to load the data from the appropriate data files.

 It appears, and I could easily be mistaken, that the COPY command does
 not allow NULLs into a timestamp field, even though the field is defined
 to accept nulls.  Actually, it appears that the behavior of the COPY
 command changed as I believe it would accept nulls in the prior release
 7.1.2.

 In any case, I'm using the COPY command WITH NULL AS '^N'. And the
 datafile contains ^N in timestamp fields that could be NULL, but the
 command fails with an invalid timestamp error, referencing the first
 line that contains the '^N' null sequence.

 Any thoughts?

 Thanks,
 Dwayne


 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?

 http://www.postgresql.org/users-lounge/docs/faq.html




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Randy Hall

I second the RAM point.  Not only is more better, you also need to configure
postgres to use it.  You will need to take the average size of your
backends, the number of concurrent connections you expect and the amount of
RAM you want postgres to leave for the OS and all the other processes.  This
will let you back into the proper buffer setting to best use the available
RAM without letting the postmaster go to swap space.  Buffers are not the
only variable for memory.  You need to allow space in ram for sort memory as
well.
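
Purely as an illustration of the arithmetic (every number here is
invented):

  512 MB RAM - 128 MB reserved for OS and other processes = 384 MB
   64 backends x 4 MB sort memory (worst case)            = 256 MB
  384 MB - 256 MB = 128 MB for buffers, at 8 kB each      = 16384 buffers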

As far as disks go.  You cannot have too many spindles.  But the number of
spindles you have available depends on which pieces of postgres get split
onto dedicated spindles.

A note on SMP.  Postgres is not a threaded application.  The best you can
hope for is that multiple connections get spread over multiple processors.
Each individual connection lives on a single CPU.

Tuning is somewhat of a black art to get the right balance.  If you have to
make a choice, buy fewer processors, faster disks, and as much RAM as the
board will handle.

--
Randy Hall - Red Hat Certified Engineer - Ex-Great Bridge PostgreSQL Expert
Resume: http://members.home.net/rthall3

- Original Message -
From: Trond Eivind Glomsrød [EMAIL PROTECTED]
To: bpalmer [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, September 10, 2001 4:54 PM
Subject: Re: [GENERAL] How to make a REALLY FAST db server?


 bpalmer [EMAIL PROTECTED] writes:

  I'm looking to make a DB server for a project I'm working on (using
pgsql)
  and am wondering if people have suggestions?
 
  Thoughts:
 
  - Hardware:  dual / quad Intel class

 Athlon gives more bang for the buck - the dual Athlons are _really_
 nice, and have proven stable as well.

  - Disk:  SCSI Raid 1+0

 I'd probably go for a 3ware RAID instead... IDE disks are so much
 cheaper nowadays than SCSI, and the premium isn't really justifiable
 anymore.

  - OS:  Prolly FreeBSD (FFS is your friend (what with syncs and all) and
it
  can do multi proc support

 I'd recommend Linux, which has more mature MP support and scales
 better, but I'm obviously biased :). It's of course very important to
 keep what you are familiar with - a good sysadmin makes a world of
 difference no matter what you're using.

  - Ram:  Not really sure here.  Is there math somewhere for ram needs for
  pgsql? I imagine it has something to do with # connections,  db size,
  etc.

 More is better. RAM is cheap. Avoid RAMBUS.

 --
 Trond Eivind Glomsrød
 Red Hat, Inc.

 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?

 http://www.postgresql.org/users-lounge/docs/faq.html



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] MacOS X Build Fails with 7.1.3

2001-09-10 Thread Hunter Hillegas

My postgres build on MacOS X is failing:

cc -traditional-cpp -g -O2 -Wall -Wmissing-prototypes -Wmissing-declarations
-bundle -undefined suppress -bundle -undefined suppress fe-auth.o
fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o dllist.o
pqsignal.o   -o libpq.so.2.1
/usr/bin/ld: -undefined error must be used when -twolevel_namespace is in
effect
make[3]: *** [libpq.so.2.1] Error 1
make[2]: *** [all] Error 2
make[1]: *** [all] Error 2
make: *** [all] Error 2

Any ideas?

Hunter


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [GENERAL] MacOS X Build Fails with 7.1.3

2001-09-10 Thread Justin Clift

Hi,

You could try either of :

http://techdocs.postgresql.org/installguides.php#macosx

or

http://techdocs.postgresql.org/oresources.php#macosx

:-)

Regards and best wishes,

Justin Clift


Hunter Hillegas wrote:
 
 My postgres build on MacOS X is failing:
 
 cc -traditional-cpp -g -O2 -Wall -Wmissing-prototypes -Wmissing-declarations
 -bundle -undefined suppress -bundle -undefined suppress fe-auth.o
 fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o dllist.o
 pqsignal.o   -o libpq.so.2.1
 /usr/bin/ld: -undefined error must be used when -twolevel_namespace is in
 effect
 make[3]: *** [libpq.so.2.1] Error 1
 make[2]: *** [all] Error 2
 make[1]: *** [all] Error 2
 make: *** [all] Error 2
 
 Any ideas?
 
 Hunter
 
 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
 - Indira Gandhi

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Supporting PostgreSQL infrastructure

2001-09-10 Thread Bruce Momjian

As you know, PostgreSQL has had a first-class infrastructure since we
started five years ago.  This is all thanks to Marc Fournier who has
created and managed this infrastructure over the years.

Quality infrastructure is very important to an open-source project.  It
prevents disruptions that can greatly delay forward movement of the
project.

Of course, this infrastructure is not cheap.  There are a variety of
monthly and one-time expenses that are required to keep it going.  If
you or your company has money it can donate to help fund this effort,
please go to the following URL and make a donation:

http://www.pgsql.com/pg_goodies/

Although the web page mentions PostgreSQL, Inc, the contributions item
is solely to support the PostgreSQL project and does not fund any
specific company.  If you prefer, you can also send a check to the
contact address.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread bpalmer

I'm looking to make a DB server for a project I'm working on (using pgsql)
and am wondering if people have suggestions?

Thoughts:

- Hardware:  dual / quad Intel class
- OS:  Prolly FreeBSD (FFS is your friend (what with syncs and all) and it
can do multi proc support
- Disk:  SCSI Raid 1+0
- Ram:  Not really sure here.  Is there math somewhere for ram needs for
pgsql? I imagine it has something to do with # connections,  db size,
etc.

Any people have any comments?

- Brandon



 b. palmer,  [EMAIL PROTECTED]pgp:crimelabs.net/bpalmer.pgp5


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Shaun Thomas

On Mon, 10 Sep 2001, bpalmer wrote:

 - Hardware:  dual / quad Intel class

Fairly easy to obtain.  If all you want is a dual, you can use
desktop-class motherboards from such makers as Asus, Abit, and
IWill.  If you're going for speed, stick to the DDR or SDRAM
capable boards.

 - Disk:  SCSI Raid 1+0

To really eke out as much speed as possible here, you'll want 10k RPM
Ultra-160 Fibre Channel SCSI drives with a dedicated hardware raid
controller.  If you have more reads than writes, you may want to use Raid 5
instead.

Postgres won't let you separate indexes from the database they represent,
so you can't make separate raid clusters for indexes and data; no
optimization there.  Maybe in the next version that implements
schemas?  What you can do if you use multiple DB's in your app design,
is put different DB's on different raid clusters.  That'll help parallel
execution times.  If you do this, make sure template1 and template0 are
separated from the rest of the databases, this will allow fast responses
from the system tables and make sure no application database IO affects
them adversely.
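
The machinery for putting a database on its own volume is initlocation
plus CREATE DATABASE ... WITH LOCATION (paths invented; note that in 7.1
a raw absolute path only works if the server was compiled with
ALLOW_ABSOLUTE_DBPATHS, so the environment-variable form is safer):

$ export PGDATA2=/raid2/pgdata
$ initlocation PGDATA2
=> CREATE DATABASE appdb WITH LOCATION = 'PGDATA2';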

 - Ram:  Not really sure here.  Is there math somewhere for ram needs for
 pgsql? I imagine it has something to do with # connections,  db size,
 etc.

No reason not to go 2GB.  Ram is cheap these days, and you can always
increase shared buffers and caches to actually fill the server memory
up with as much quick-fetch info as possible.

All in all, if you're making a DB machine, do whatever you can to get
rid of hits caused by disk IO.  Parallelize as much as possible between
your databases, and if you have a DB capable of separating indexes from
the mix, do that too.  Don't run any other services on it, and make
sure it has a nice wide 100MBit or 1GBit pipe so it doesn't saturate when
servicing multiple hosts.

Hope that helps.

-- 
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. ThomasINN Database Programmer  |
| Phone: (309) 743-0812  Fax  : (309) 743-0830|
| Email: [EMAIL PROTECTED]AIM  : trifthen  |
| Web  : hamster.lee.net  |
| |
| Most of our lives are about proving something, either to   |
|  ourselves or to someone else. |
|   -- Anonymous  |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] SQL Loader?

2001-09-10 Thread Jeff Eckermann

COPY is the command used for bulk loading in PostgreSQL (check the SQL
Commands in the Reference Manual part of the docs).
COPY is not intelligent about interpreting varying data formats.  Your
flexibility is limited to specifying the data delimiter and whatever string
you want to represent NULL.  If your data is not in the correct format to
match the PostgreSQL data type for that column in the table, COPY will error
out.
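
In other words, the full extent of the tunable surface is roughly this
(table and path invented):

COPY mytable FROM '/usr/local/data/mytable.txt'
     USING DELIMITERS '|' WITH NULL AS '';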
If you have data that doesn't meet these criteria, you have two choices:
1. Write a script to edit your data into the required form (which is what
people generally do), or
2. Go to the source code for COPY, and develop it into a more
intelligent/flexible form.
Requests for no. 2 don't seem to be frequent, so I am guessing that
development of COPY is not high on the developers' todo list.

- Original Message -
From: Guy Fraser [EMAIL PROTECTED]
To: Joel Pang [EMAIL PROTECTED]; PostgreSQL general
[EMAIL PROTECTED]
Sent: Monday, September 10, 2001 12:10 PM
Subject: Re: [GENERAL] SQL Loader?


 Joel Pang wrote:
 
  Hi Fraser,
 
  I've read your mail on the postgresql web site saying that you have a
procedure that will do bulk loading of records into the database.  I've
been looking for a utility just like the sql loader of Oracle for the
Postgres database.  I was wondering if you would share your procedure with
me, or give me some URL links that I can use as a reference on how to
write my own sql loader procedure?
 
  Thanks in advance for your help.
 
  Cheers,
  Joel

 Please note: the script below uses a special program to convert radius
 accounting files for import as an ordered, tab-delimited file.

 The script also shows a way to import into an intermediate table,
 separate the distinct unique records into their own tables, and then
 move the duplicate entries to a table for data integrity.

 This also demonstrates a method of using tables determined from the data
 being imported.

 I wrote this software a long time ago and have spent little time
 patching or rewriting. Since the script was initially written, I now
 know some better ways of performing some of these tasks. I don't have
 time to rewrite the script and it has worked for over 3 years so it is
 low priority.

 Guy Fraser

 [rest of quoted radimport script snipped; see Guy Fraser's message earlier]