Re: [GENERAL] postgres on redhat 7.0

2000-10-31 Thread Trond Eivind Glomsrød

"Adam Lang" [EMAIL PROTECTED] writes:

 But it don't help if you downloaded the OS. ;)

You can of course find it on the ftp site as well, in the RPMS directory.

-- 
Trond Eivind Glomsrød
Red Hat, Inc.



Re: Re: [GENERAL] OT: List confirmation

2000-10-31 Thread Frank Bax

Frank asked: What's this 'CONFIRM' message all about anyway?

Scrappy replied:   It was just me attempting to fix people's
subscription settings after the upgrade, without realizing
it would email out to everyone ...  sorry about that :(

Frank: Does this mean we don't actually do the CONFIRM's?  Or should we?

Scrappy replied:   please do :)

The web interface worked fine for me.


At 10:18 AM 10/31/00 -0500, you wrote:
On Tue, 31 Oct 2000 [EMAIL PROTECTED] wrote:

 I tried the links and got an error that said postgresql.org didn't exist
or something of that nature. Anybody else get that and what do you do to
correct it? Thanks.


It was easiest to respond by mail so I never even tried the web interface.

Vince.



 Rick
 Vince Vielhaber [EMAIL PROTECTED] wrote:
  On Tue, 31 Oct 2000, Adam Lang wrote:

  Scrappy from postgresql wants me to reply to subscription messages for
each
  of the lists I am subscribed to... do I need to, or should I ignore the
  messages?

 I got it too and figure since it only took a few seconds each it wasn't
 any big deal, and it'd be a bigger pain in the ass to have to resub to all
 the lists again.

 Vince.


-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
        Online Campground Directory    http://www.camping-usa.com
       Online Giftshop Superstore    http://www.cloudninegifts.com
==








Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread Poul L. Christiansen

Strange. 

I did a test on Redhat Linux 7.0 with fsync() turned off, and my
batch file of inserts ran 10 times as fast. But that was on a slow PC
with a slow IDE hard drive.

What Linux distribution and kernel version are you running?

Poul L. Christiansen

"Gary Howland (During daytime)" wrote:
 
 Hi,
 
 Just a quickie - I heard that linux does not have a working fsync() call
 (it has no concept of raw devices).  In other words, fsync cannot be used
 to ensure that data is on disk (try it with a floppy - it returns IMMEDIATELY!!! - 
long before the write finishes)
 
 This is for ALL linuxes.
 
 Does this mean that postgres is unreliable (or non ACID-conforming) under linux?
 
 Gary
 
 --
 Friends help you move.
 Real friends help you move bodies.
 



Re: [GENERAL] postgres on redhat 7.0

2000-10-31 Thread Lamar Owen

Adam Lang wrote:
 
 But it don't help if you downloaded the OS. ;)

If you downloaded the RedHat CD ISO images, OR the ftp dirs, you got the
postgresql RPM's, unless you specifically excluded them.

And you can certainly get them from RedHat's ftp site, as busy as it is.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [GENERAL] --enable-syslog in rh7 ?

2000-10-31 Thread Lamar Owen

"Sergio A. Kessler" wrote:
 I was reading in the docs that for syslog
 (i.e. syslog = 2 in pg_options) to work effectively, you need to build
 postgres with --enable-syslog

PostgreSQL 7.0.2 has no --enable-syslog switch.  7.1 (and, apparently
7.0.3) will have such a configure switch (it exists both in CURRENT and
REL7_0_PATCHES in CVS).
 
 so the question is: if I put syslog = 2 in pg_options, it will work
 in a rh7 with postgres out of the box ?

Yes, it should.  The patch was necessary for 7.0.2, but not for later
than that.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



[GENERAL] Converting to PostgreSQL and have some questions.

2000-10-31 Thread Craig N. Caroon

Hello,

My company is thinking about converting from Sybase to PostgreSQL. We have
PostgreSQL 7.0.2 installed and are trying to convert our tables, triggers and
stored procedures to it.
We are having the most trouble with triggers and stored procedures (functions).
If anyone could answer our questions, we'd really appreciate it.

1.  How do I retrieve multiple values from a function?
In Sybase the following procedure will return 3 values:
---
create procedure get_Person(@id numeric(12))
as
begin
select name, nickName, address
from Person
where id = @id
end
---
All I have to do is add another field name to the select statement
and the procedure will then return it too.

The closest things I can find in PostgreSQL are:
a. SETOF, which is mentioned in the docs, but no examples of its usage are
   given. An example would be helpful.
b. returning a row, such as:
   
   create function get_person(int4)
   returns person
   as
   'select * from person where id = $1;'
   language 'sql';
   
   
   which really isn't that helpful because I still have to make multiple 
   calls to get_person() to get multiple values. Correct?

   mydb= select name(get_person(1));
   mydb= select nickName(get_person(1));
   mydb= select address(get_person(1));
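
   For reference, the closest I can get to a sketch of SETOF (untested
   against 7.0.2; names are made up) is something like the following,
   though it returns a set of rows of a single type rather than several
   columns from one call:

   create function person_names() returns setof text
   as 'select name from person;'
   language 'sql';

   mydb= select person_names();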


2.  Can a trigger return values?
   
Again, in Sybase I can do this:
---
create trigger
Person_insert
on Person
for insert
as
begin
select id
from inserted
end
---
Every time I insert a record into Person, its id is returned to the caller.

As far as I can tell from the docs, all a PostgreSQL trigger can do is validate 
and change the row being inserted/updated/deleted. 
Is that correct?
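
(One workaround, sketched here untested and assuming the id column is a
SERIAL backed by the default sequence name person_id_seq, is to read the
generated id back with currval() in the same session:

   insert into Person (name, nickName, address)
       values ('Jim', 'Jimbo', '123 Main St.');
   select currval('person_id_seq');
)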

   
3.  What does the OPAQUE return type mean?
I understand that functions used as triggers must return the OPAQUE data
type. What exactly is the OPAQUE data type?
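
(For illustration only, a minimal trigger function sketch, assuming the
plpgsql language has been installed with createlang; OPAQUE is essentially
a placeholder return type that marks the function as callable only through
the trigger machinery rather than with ordinary arguments:

   create function person_ins_trig() returns opaque as '
   begin
       -- NEW holds the row being inserted; returning it lets the insert proceed
       return new;
   end;
   ' language 'plpgsql';

   create trigger person_insert before insert on Person
       for each row execute procedure person_ins_trig();
)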



Thanks for any help.
--craig
+---+
| Craig N. Caroon   | Nando Media   |
| Senior Developer  | http://www.nandomedia.com |
| [EMAIL PROTECTED] | The Nando Times   |
| (919) 836-2885| http://www.nandotimes.com |
+---+



Re: [GENERAL] Postgres 7.1 and the 8k tuple / row limit

2000-10-31 Thread Douglas

Where can I download the 7.1 snapshot?
Thanks
Douglas

On 9 Oct 2000 23:46:29 -0400, [EMAIL PROTECTED] (Tom Lane) wrote:

"Ally" [EMAIL PROTECTED] writes:
 However, I've read that Postgres 7.1 / 7.2 will accommodate larger row sizes
 and was wondering if there was a ballpark ETA / release date for them. And
 does anybody know which version will have this feature?

7.1 for sure; the feature has been working in development sources for
months.

7.1 release date is not set yet, but you could reasonably do development
work with a current snapshot.  Wouldn't want to trust a snapshot with
mission-critical data, but for devel work it's plenty solid enough.

   regards, tom lane




Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread Kevin O'Gorman

I must have missed that one.  How do you turn off fsync()??
I have lots of batch loading to do during which ACID is of no use.

++ kevin



"Poul L. Christiansen" wrote:
 
 Strange.
 
 I did a test on Redhat Linux 7.0 with fsync() turned off and my
 batchfile of inserts ran 10 times as fast. But that was on an slow PC
 with a slow IDE harddrive.
 
 What Linux distribution and kernel version are you running?
 
 Poul L. Christiansen
 
 "Gary Howland (During daytime)" wrote:
 
  Hi,
 
  Just a quickie - I heard that linux does not have a working fsync() call
  (it has no concept of raw devices).  In other words, fsync cannot be used
  to ensure that data is on disk (try it with a floppy - it returns IMMEDIATELY!!! - 
long before the write finishes)
 
  This is for ALL linuxes.
 
   Does this mean that postgres is unreliable (or non ACID-conforming) under linux?
 
  Gary
 
  --
  Friends help you move.
  Real friends help you move bodies.
 

-- 
Kevin O'Gorman  (805) 650-6274  mailto:[EMAIL PROTECTED]
Permanent e-mail forwarder:  mailto:Kevin.O'[EMAIL PROTECTED]
At school: mailto:[EMAIL PROTECTED]
Web: http://www.cs.ucsb.edu/~kogorman/index.html
Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html

"There is a freedom lying beyond circumstance,
derived from the direct intuition that life can
be grounded upon its absorption in what is
changeless amid change" 
   -- Alfred North Whitehead



Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread Poul L. Christiansen

Use the "-F" option.

I start PostgreSQL with this line:
su -l postgres -c "/usr/bin/pg_ctl  -D $PGDATA -p /usr/bin/postmaster -o
'-i -B 4096 -o -F' start >/dev/null 2>&1" < /dev/null

Poul L. Christiansen

Kevin O'Gorman wrote:
 
 I must have missed that one.  How do you turn off fsync()??
 I have lots of batch loading to do during which ACID is of no use.
 
 ++ kevin
 
 "Poul L. Christiansen" wrote:
 
  Strange.
 
  I did a test on Redhat Linux 7.0 with fsync() turned off and my
  batchfile of inserts ran 10 times as fast. But that was on an slow PC
  with a slow IDE harddrive.
 
  What Linux distribution and kernel version are you running?
 
  Poul L. Christiansen
 
  "Gary Howland (During daytime)" wrote:
  
   Hi,
  
   Just a quickie - I heard that linux does not have a working fsync() call
   (it has no concept of raw devices).  In other words, fsync cannot be used
   to ensure that data is on disk (try it with a floppy - it returns IMMEDIATELY!!! 
- long before the write finishes)
  
   This is for ALL linuxes.
  
    Does this mean that postgres is unreliable (or non ACID-conforming) under linux?
  
   Gary
  
   --
   Friends help you move.
   Real friends help you move bodies.
  
 
 --
 Kevin O'Gorman  (805) 650-6274  mailto:[EMAIL PROTECTED]
 Permanent e-mail forwarder:  mailto:Kevin.O'[EMAIL PROTECTED]
 At school: mailto:[EMAIL PROTECTED]
 Web: http://www.cs.ucsb.edu/~kogorman/index.html
 Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html
 
 "There is a freedom lying beyond circumstance,
 derived from the direct intuition that life can
 be grounded upon its absorption in what is
 changeless amid change"
-- Alfred North Whitehead



Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread Steve Wolfe


 Use the "-F" option.

 I start PostgreSQL with this line:
 su -l postgres -c "/usr/bin/pg_ctl  -D $PGDATA -p /usr/bin/postmaster -o
 '-i -B 4096 -o -F' start >/dev/null 2>&1" < /dev/null

  Interesting... trying to start postgres 7.0.2 with the "-F" simply returns
a list of options to me, as if it can't understand it.  It sure does sound
appealing to me, though.  With a few hundred megs of disk cache (between the
kernel and the RAID controller), I've always wished that PostgreSQL would
take advantage of it when writing. : )

steve






Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread Dominic J. Eidson

On Tue, 31 Oct 2000, Steve Wolfe wrote:

   Interesting... trying to start postgres 7.0.2 with the "-F" simply returns
 a list of options to me, as if it can't understand it.  It sure does sound
 appealing to me, though.  With a few hundred megs of disk cache (between the
 kernel and the RAID controller), I've always wished that PostgreSQL would
 take advantage of it when writing. : )

postmaster -o -F -D /your/datadirectory 

The -o tells postmaster to pass -F as a backend option.

man postmaster and man postgres for more information.

-- 
Dominic J. Eidson
"Baruk Khazad! Khazad ai-menu!" - Gimli
---
http://www.the-infinite.org/  http://www.the-infinite.org/~dominic/




[GENERAL] I tried to increase the block size

2000-10-31 Thread Mark Lane

I tried to increase the block size from 8K to 32K and received an IPC error.
Now IPC is compiled into the kernel, so why would I get this error? I
switched it back to 8K and it runs fine.


Mark




[GENERAL] pg_check 0.1.3 is now Available

2000-10-31 Thread Bryan White

I sent this to the 'announce' list earlier but it did not show up.  My guess
is that list is moderated so I will post again here.
--

pg_check is a command line tool I have written to aid in diagnosing and
recovering from PostgreSQL table corruptions.  See the ReadMe file for more
information.

It is available here http://www.arcamax.com/pg_check/

I am looking for suggestions as to how to make it more useful so please look
it over.

Bryan White, ArcaMax.com, VP of Technology
You can't deny that it is not impossible, can you.






Re: [GENERAL] I tried to increase the block size

2000-10-31 Thread Bryan White

 I tried to increase the block size from 8K to 32K and received a IPC
error.
 Now IPC is compiled into the Kernel so why would I get this error. I
 switched it back to 8K and it runs fine.

Did you dump your database(s) before the change and initdb/reload them
after?  I presume this is needed and could wreak all sorts of havoc if you
don't.  Also, there is a script 'ipcclean' that is supposed to reinitialize
some of the shared IPC state.  (Yes, that is vague, but it's all I know.)




Re: [GENERAL] pg_check 0.1.3 is now Available

2000-10-31 Thread Bryan White

 Fascinating.  Looks like a possible framework for building a standalone
 dumping utility for migration.

It could be turned into that.  It already does all the parsing, you would
just have to change the output functions for the desired format.




Re: [GENERAL] pg_check 0.1.3 is now Available

2000-10-31 Thread Lamar Owen

Bryan White wrote:
 pg_check is a command line tool I have written to aid in diagnosing and
 recovering from PostgreSQL table corruptions.  See the ReadMe file for more
 information.

Fascinating.  Looks like a possible framework for building a standalone
dumping utility for migration.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [GENERAL] how good is PostgreSQL

2000-10-31 Thread markw

Arnold Gamboa wrote:

 Hi,

 For users of large PostgreSQL and PostgreSQL builders, this is for you.

 I'm having a terrible time deciding now. :(

 We're about to build a "huge" website now.  I got tied up in signing the
 contract without really getting enough information about PgSQL, since this is
 what we plan to implement with PHP (normally we use mySQL but I guess it
 does not fit for huge databases like that).

 Here's my problem: we're about to build a site like hitbox.com where a
 large amount of database work is required. If, say, there are 100,000 users with
 1000 page hits per day each, and everything will be logged, you can
 imagine how huge this will be.  I'm just so "nervous" (really, that's the
 term) that we will implement this and later on experience a slowdown or, worse
 than that, a crash in the server.

That is a LOT of work for any system. That is over 1100 page views a second, or
under 900us each.  A standard Pentium III system serving static pages would
have problems with that.

If you look at search engines, to get that performance with readonly data, they
usually cluster multiple systems and load balance across them. You may need to
segment your data and have multiple SQL servers perform different functions.

Also, that 1100 page views per second assumes an even distribution of
traffic, which does not happen on a web server. If you average that much,
chances are there will be periods of twice that.

Look into a "local director," "Alteon," or even LVS.



 My questions are:
 1. What is the limit for number of records in a table WITHOUT SUFFERING SLOW
 DOWN.

 2. limit in number of tables per database
 3. ... limit in number of database.

There are a couple of factors involved; it is more complex than a simple response can cover.

Use multiple databases and put each on a separate disk, with its own
controller. Better yet, have multiple load balanced web boxes do a lot of
processing in PHP and offload much of the CPU bound SQL work to the "cheap" web
boxes, and have multiple SQL databases in the back handling various independent
tasks.

In a web site I worked on, we had multiple front end web servers, load balanced
with an Alteon. Each web server had its own SQL database which provided SQL
access to "static" data which was updated each week.  We had an additional
single SQL database backend which all the Web servers accessed for synchronized
dynamic data.

If you are serious about the load you expect to put on this system you must be
careful:
Do not create any indexes you do not need.
Do not use the "foreign key" constraint as it forces a trigger for each insert.

Make sure you index the keys by which you will access data.
Avoid searching by strings, try to use keys.
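
As a sketch only (the table and column names below are made up), that means
creating an index on the lookup key and querying by it rather than by a
text column:

    create index person_id_idx on person (id);
    select name from person where id = 42;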

Even after that, you have a long way to go before you will hit 1000
transactions per second from any SQL database.

If you are betting your business on this implementation, you have a lot of
homework to do.



 Thanks for you comments.  I would really appreciate every comment that I'll
 receive regarding this.

 Arnold






Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread markw

"Gary Howland (During daytime)" wrote:

 Hi,

 Just a quickie - I heard that linux does not have a working fsync() call
 (it has no concept of raw devices).  In other words, fsync cannot be used
 to ensure that data is on disk (try it with a floppy - it returns IMMEDIATELY!!! - 
long before the write finishes)

Are you sure? Where is this information?


 This is for ALL linuxes.

 Does this mean that postgres is unreliable (or non ACID-conforming) under linux?

 Gary

 --
 Friends help you move.
 Real friends help you move bodies.







Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread shawn everett

I may be horribly out of my element here, but I'm going to jump in anyway
:)

If you mount a floppy and copy a large file to it, you will get a prompt
back fairly quickly.

If you type: sync right after

The sync command writes everything to the floppy as expected.

Shawn

On Tue, 31 Oct 2000, markw wrote:

 "Gary Howland (During daytime)" wrote:
 
  Hi,
 
  Just a quickie - I heard that linux does not have a working fsync() call
  (it has no concept of raw devices).  In other words, fsync cannot be used
  to ensure that data is on disk (try it with a floppy - it returns IMMEDIATELY!!! - 
long before the write finishes)
 
 Are you sure? Where is this information?
 
 
  This is for ALL linuxes.
 
   Does this mean that postgres is unreliable (or non ACID-conforming) under linux?
 
  Gary
 
  --
  Friends help you move.
  Real friends help you move bodies.
 
 
 
 




Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread bmccoy

"Gary Howland (During daytime)" wrote:

 Just a quickie - I heard that linux does not have a working fsync()
 call (it has no concept of raw devices).  In other words, fsync cannot
 be used to ensure that data is on disk (try it with a floppy - it
 returns IMMEDIATELY!!! - long before the write finishes)

That's because the mounted drive is buffered in memory, so when you copy a
file, you are copying it to memory (which is very fast), and the kernel
implicitly syncs with the media when the buffer fills or when an explicit
sync is done or when it is unmounted.

If there is no working fsync (and fdatasync), why do the Linux man pages
claim it to be POSIX.1b compliant?

Can you back up your claim with actual documentation?  Saying 'I heard'
puts it in the realm of urban legendry...

Brett W. McCoy
  http://www.chapelperilous.net
---
This is the LAST time I take travel suggestions from Ray Bradbury!




Re: [GENERAL] how good is PostgreSQL

2000-10-31 Thread Steve Wolfe

 Or they could buy a single IBM S/390, run Linux/390 and PostgreSQL on
 that. Probably would cost less, and be more reliable.  And they can
 always load another Linux/390 VM -- an S/390 can run something like
 41,000 virtual machines each running Linux/390 and Apache.

  Yeah, I'm very optimistic about IBM's new chips that are coming out
next year.  Each "processor module" will have 4 processors, but each
processor will have 2 cores - so in effect, each "processor module" has 8
processors on it.  All processors will have copper interconnects, and
depending on the source, will debut at anywhere from 1.3 to 2 gigahertz.  I
think that will certainly help them get a larger share of the high-end
market!

steve





Re: [GENERAL] how good is PostgreSQL

2000-10-31 Thread Steve Wolfe


 Even after that, you have a long way to go before you will hit 1000
 transactions per second from any SQL database.

 Since my last post probably wasn't too useful, here's some information
that might be a little more help.  It's a little long, I know, but hopefully
it will be of use to someone.

 As programmers, we naturally want to throw things into databases for
three reasons.  First, it's easy to get data in.  Second, it's easy to get
relevant data out.  And third, it's "cool".  We don't want to work with flat
files, now do we?  ; )

 However, in some cases, using the database to get data out ends up
costing us a lot of time and money.  Sometimes we do the same nasty query so
often that we end up purchasing bigger hardware to make the system work
reasonably.  Why?  Because it was easier for us to write a program that did:

GetDataFromDatabase();
PrepareData();
PrintData();

 Each time, the database server does the work.  But it doesn't
necessarily have to be that way.  In our company, we've found two trends
that have enabled us to save a LOT of processing power on our machines.
(read:  Increase the capacity of our servers by 30% or more, with fairly
minor changes)

 The first case is that of rarely-changing data.  Some of our datasets
probably have around 50,000 to 1,000,000 views (selects) for each update
(insert/delete).  Having the database repeat the query every time is a
waste.  So, we began writing our programs such that they will grab the data
from the database once, and generate the HTML for every page, and the
indexes.  Then, when an update is made to the database (via the
administrative tools), it simply rewrites *the relevant HTML files*, and
changes the indexes pointing to them.  (There are also some other very large
advantages to this sort of thing, but I'm not allowed to say them. ; )  )

   The second case is that of often-repeated queries.  One of the
offerings on our site is an online directory, which gets a pretty fair
amount of traffic.  Unfortunately, it uses a proprietary program that was
purchased by management before they spoke with us.  Grr   It was the
most utterly inefficient program I've ever seen.  It would *not* allow the
database to do joins, it would grab entire tables, then try to do the joins
itself, in Perl.

   We rewrote the program to let PostgreSQL do the joins, and that sped
it up.   Then we realized that a very small number of queries (those for the
first one or two levels of pages) accounted for a huge portion of the
usage.  So, we replaced the front page with a static HTML page (the front
page doesn't change...), and saw another terrific drop in our system loads.


   Overall, by only modifying a couple of our more heavily-used programs,
our server loads dropped by about 30%-40%.  If we went to the trouble to
modify some others, it would drop even more.  But we're going to rewrite
them completely for other reasons. : )


   In any event, there are ways like this to save a LOT of CPU and disk I/O.
Most web servers can serve out several hundred static pages with the
resources that would otherwise deliver one dynamically-created,
database-driven page.  It also allows you to cluster the web servers with
cheap commodity hardware, instead of using big-iron on the database.  And if
you have a big-iron machine running the back-end, this can severely lighten
the load on it, keeping you from dropping a few hundred grand on the next
step up. ; )


   (Incidentally, we've toyed around with developing a query-caching system
that would sit between PostgreSQL and our DB libraries.  However, it seems
like it could be done *much* more efficiently in PostgreSQL itself, as it
would be much easier to keep track of which tables have changed, etc..
Anybody know if this sort of functionality is planned?  It would be terrific
to simply give the machine another 256 megs of RAM, and tell it to use it as
a DB cache...)

steve






Re: [GENERAL] True ACID under linux (no fsync)?

2000-10-31 Thread markw

shawn everett wrote:

 I may be horriblly out of my element here but, I'm going to jump in anyway
 :)

 If you mount a floppy and copy a large file to it, you will get a prompt
 back fairly quickly.

 If you type: sync right after

 The sync command writes everything to the floppy as expected.

Yes, but does cp call fsync()? That is the question. I would suspect that it does not.
A non-functional fsync call is a serious error, and one would think more people
would mention it. This is the first I have heard of it.



 Shawn

 On Tue, 31 Oct 2000, markw wrote:

  "Gary Howland (During daytime)" wrote:
 
   Hi,
  
   Just a quickie - I heard that linux does not have a working fsync() call
   (it has no concept of raw devices).  In other words, fsync cannot be used
   to ensure that data is on disk (try it with a floppy - it returns IMMEDIATELY!!! 
- long before the write finishes)
 
  Are you sure? Where is this information?
 
 
   This is for ALL linuxes.
  
    Does this mean that postgres is unreliable (or non ACID-conforming) under linux?
  
   Gary
  
   --
   Friends help you move.
   Real friends help you move bodies.
  
 
 
 




Re: [GENERAL] how good is PostgreSQL

2000-10-31 Thread KuroiNeko

  As programmers, we naturally want to throw things into databases for
 three reasons. First, it's easy to get  data in. Second, it's easy to get
 relevant data  out. And third,  it's "cool". We  don't want to  work with
 flat files, now do we? ; )

 Kiddin', eh? :)  Actually, the third reason seems to dominate the younger
developers' minds. People often tend to keep everything in a poor DBMS until
it begins to kick back.  And this has an impact on the customers.  Does your
system use a database?  No, why should it? You mean you'll keep our dearly
beloved banner ads as flat files?  Yes, this is where they belong. Sorry,
we'll look for someone more advanced. Good luck.
 Of course, hardware vendors jump for joy :) Maybe I don't get it, but
IMHO there's no reason to put into the DB something that can't be indexed and
used in a where clause.

 It would *not* allow the
 database to  do joins, it  would grab entire tables,  then try to  do the
 joins
 itself, in Perl.

 Umh... Yeah... Well... To keep compatibility with other Open Source
Databases and ESR/RMS, you know :)

(Incidentally, we've toyed around with developing a query-caching
 system that would sit between PostgreSQL and our DB libraries.

 Sounds amazing, but requires some research, I guess. However, in many
cases one would be more than happy with cached connections. Of course,
cached query results can be naturally added to that, but just connections
are OK to start with. Security


--

 contaminated fish and microchips
  huge supertankers on Arabian trips
 oily propaganda from the leaders' lips
  all about the future
 there's people over here, people over there
  everybody's looking for a little more air
 crossing all the borders just to take their share
  planning for the future

 Rainbow, Difficult to Cure



[GENERAL] Easy way to find out allowed data?

2000-10-31 Thread Andrew Sullivan

Hi,

Is there any convenient way to get what input qualifies as valid data
for a CHECK constraint?  I'm trying to generalise an interface which
allows data entry through the Web.  I'd like to be able to find out
what data is allowed in a field, and then generate a pick-list based
on that.  It looks like relevant information is in pg_relcheck, but
I'm not sure I understand what I'd have to do to get it out (more
like, "I'm sure I don't understand. . ."), because there's a lot of
other stuff in there.
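
(A hedged sketch of what that might look like against 7.0 -- 'mytable' is a
placeholder, and turning the rcsrc text into a pick-list would still be up
to the application:

   select c.relname, r.rcname, r.rcsrc
     from pg_relcheck r, pg_class c
    where r.rcrelid = c.oid
      and c.relname = 'mytable';
)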

Thanks,
A
-- 
Andrew Sullivan  Computer Services
[EMAIL PROTECTED]    Burlington Public Library
+1 905 639 3611 x158   2331 New Street
   Burlington, Ontario, Canada L7R 1J4



Re: [GENERAL] postgres on redhat 7.0

2000-10-31 Thread Adam Lang

Yeah I know.  I mentioned the redhat website when I first replied to the
post.

Which leads me to a question.  Why do so many people ask the list where the
most current RPMS for Redhat are located?  Not trying to start trouble, but
doesn't it seem obvious to check the Redhat site?  Just weird that so many
people ask where to find them...

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
- Original Message -
From: "Lamar Owen" [EMAIL PROTECTED]
To: "Adam Lang" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, October 31, 2000 10:54 AM
Subject: Re: [GENERAL] postgres on redhat 7.0


 Adam Lang wrote:
 
  But it don't help if you downloaded the OS. ;)

 If you downloaded the RedHat CD ISO images, OR the ftp dirs, you got the
 postgresql RPM's, unless you specifically excluded them.

 And you can certainly get them from RedHat's ftp site, as busy as it is.
 --
 Lamar Owen
 WGCR Internet Radio
 1 Peter 4:11




Re: [GENERAL] postgres on redhat 7.0

2000-10-31 Thread Lamar Owen

Adam Lang wrote:
 Which leads me to a question.  Why do so many people ask the list where the
 most current RPMS for Redhat are located?  Not trying to start trouble, but
 doesn't it seem obvious to check the Redhat site?  Just wierd that so many
 people ask where to find them...

Well, we maintain RPM's ourselves (or should I say, I maintain RPM's
myself) -- but due to the lack of a RedHat 7 machine for me to build on
(at this time) the best place to get RedHat 7 RPM's is with RedHat 7.

I just announced the newest set -- there will be more announcements
later this week or early next week as I build binaries on other
architectures (I'm even working on building Hercules so I can
build/rebuild for S/390 on my Intel box -- Hercules is an ESA/390
emulator -- slow as Christmas, but will run Linux/390 well enough to
build/rebuild RPM's.  My goal is posting working S/390 binaries on
ftp.postgresql.org -- the coolness factor alone is worth the work) and
other distributions (such as TurboLinux, Caldera eServer, and SuSE --
although SuSE has a 7.0.2 RPMset of their own that is substantially
different from ours, even though it is based on ours).

Unless someone wants to offer me a shell and build privileges on a
Linux/390 VM on their real S/390 :-).

In fact, I'm requesting that people running something other than RedHat
6.2 on Intel try to --rebuild the src.rpm I just posted so that I can
get binaries for multiple architectures.  If you need to patch the spec
file or whatever else to get it to run, send the patches my way so we
can make it just a simple --rebuild for most folks.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



[GENERAL] Query caching

2000-10-31 Thread Steve Wolfe


 (Incidentally, we've toyed around with developing a query-caching
  system that would sit between PostgreSQL and our DB libraries.

  Sounds amazing, but requires some research, I guess. However, in many
 cases one would be more than happy with cached connections. Of course,
 cached query results can be naturally added to that, but just connections
 are OK to start with. Security

To me, it doesn't sound like it would be that difficult of a project, at
least not for the likes of the PostgreSQL developers.  It also doesn't seem
like it would really introduce any security problems, not if it were done
inside of PostgreSQL.  Long ago, I got sidetracked from my endeavors in C,
and so I don't feel that I'm qualified to do it.  (otherwise, I would have
done it already. : ) )   If you wanted it done in Perl or Object Pascal, I
could help. : )

Here's a simple design that I was tossing back and forth.  Please
understand that I'm not saying this is the best way to do it, or even a good
way to do it.  Just a possible way to do it.  I haven't been able to give it
as much thought as I would like to.  Here goes.


Implementation

Upon starting, the PostgreSQL engine could allocate a chunk of memory,
sized according to the administrator's desire.  That chunk would be used
solely for query caching.

When a query came in that was not cached (say, the first query), the
database engine would process it as normal.  It would then return it to the
user, and add it to the cache.  "Adding it to the cache" would mean that it
would enter the query itself, the result set, and a list of which tables the
query relied upon.  The query that is stored could be either the query
coming from the user, or the query after it goes through the optimizer.
Each has pros and cons, I would probably favor using the query that comes
from the user.

 When another query comes along, the caching engine would quickly look
in the hash table, and see if it already had the cached results of the
query.  If so, it returns them, and wham.  You've just avoided all of the
work of optimizing, parsing, and executing, not to mention the disk I/O.  A
hash lookup seems extremely cheap compared to the work of actually
processing a query.

 When an update/delete/insert comes along, the engine would analyze
which tables were affected, and clear the cache entries that relied upon
those tables.

-
Cache Clearing

 Cache clearing would be achieved via an LRU-based algorithm, which
would also take into account the amount of RAM used by each query in the
cache.
-
Performance Impact

   The potential performance differences range from a minuscule decrease to
a tremendous increase.  And it's a lot cheaper to throw an extra half gig of
RAM in a machine than to upgrade processors and disk subsystems!

--
Possible Changes

 One potential drawback is that when a table is modified, the queries
that rely upon it would be discarded.  Where a table is updated frequently,
that could greatly reduce the performance benefit.  One possible alternative
is to store the query cost with each query in the cache.  When a table is
updated, those queries are marked as "dirty".  If the system load is below a
certain amount, or the system has been idle, it could then re-execute those
queries and update the cache.  Which queries it re-executed would be
determined by a combination of query cost and how frequently those cache entries
were used.
---

   The reason I would prefer it done in the PostgreSQL engine (as opposed to
in a middleware application) is that the caching engine needs to know (a)
which tables a query relies upon, and (b) which tables get changed.  It
seems that it would significantly reduce overhead to do those inside of
PostgreSQL (which is already doing the query parsing and analysis).

This could certainly give PostgreSQL a huge advantage over other
database systems, too.   It could save administrators a very large chunk of
cash that they would otherwise have to spend on large systems.  And it would
just be cool.   ; )

steve





Re: [GENERAL] postgres on redhat 7.0

2000-10-31 Thread os390 ibmos




I was the original poster. Really, my confusion stemmed
from the fact that when upgrading from RH6.0 to RH7.0, 7.0
complained (during boot) that my Postgres version was outdated
and I needed to upgrade. This threw me off.

In general I am pretty pissed at RH's attitude to system
upgrades; if I were working in a Production environment,
I would pretty much have to hire them and not try anything myself,
which kind of contradicts the whole Linux philosophy.


Yeah I know.  I mentioned the redhat website when I first replied to the
post.

Which leads me to a question.  Why do so many people ask the list where the
most current RPMS for Redhat are located?  Not trying to start trouble, but
doesn't it seem obvious to check the Redhat site?  Just wierd that so many
people ask where to find them...

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
- Original Message -
From: "Lamar Owen" [EMAIL PROTECTED]
To: "Adam Lang" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, October 31, 2000 10:54 AM
Subject: Re: [GENERAL] postgres on redhat 7.0


  Adam Lang wrote:
  
   But it don't help if you downloaded the OS. ;)
 
  If you downloaded the RedHat CD ISO images, OR the ftp dirs, you got the
  postgresql RPM's, unless you specifically excluded them.
 
  And you can certainly get them from RedHat's ftp site, as busy as it is.
  --
  Lamar Owen
  WGCR Internet Radio
  1 Peter 4:11


_
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.

Share information about yourself, create your own public profile at 
http://profiles.msn.com.




Re: [GENERAL] how good is PostgreSQL

2000-10-31 Thread Bryan White

  Whenever a query is executed (not found in cache, etc.), the caching
 system would simply store the query, the results, and a list of tables
 queried.   When a new query came in, it would do a quick lookup in the query
 hash to see if it already had the results.  If so, whammo. Whenever an
 insert/delete/update was sensed, it would look at the tables being affected,
 and the caching mechanism would clear out the entries depending on those
 tables.

It seems to me that tracking the list of cached queries and watching for
queries that might invalidate them adds a lot of complexity to the back end,
and the front end still has to establish the connection and wait to transfer
the data over the socket.

On a more practical level, a backend solution would require someone with
fairly detailed knowledge of the internals of the backend.  A front end
solution is more likely to be implemented by someone not as knowledgeable.

One of the big advantages of your technique is there is no code change at
the application level.  This means less database lock-in.  Maybe that is a
disadvantage too. ;-)





Re: [GENERAL] SQL question - problem with INTERSECT

2000-10-31 Thread Igor Roboul

On Mon, Oct 30, 2000 at 07:59:06PM -0500, Keith L. Musser wrote:
 "(SELECT messages.msgid FROM messages, subject_index WHERE
 ((subject_index.word='Hello' or subject_index.word='There') and
 (subject_index.msgid = messages.msgid))
 GROUP BY messages.msgid HAVING count(messages.msgid)=2)
 INTERSECT
 (SELECT messages.msgid FROM messages, author_index WHERE
 ((author_index.word='Jim' or author_index.word='Jones') and
 (author_index.msgid = messages.msgid))
 GROUP BY messages.msgid HAVING count(messages.msgid)=2);"
Try removing the first "GROUP BY messages.msgid HAVING count(messages.msgid)=2)"
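
That is, something like this (a sketch of the same statement with that
clause dropped from the first branch):

(SELECT messages.msgid FROM messages, subject_index WHERE
((subject_index.word='Hello' or subject_index.word='There') and
(subject_index.msgid = messages.msgid)))
INTERSECT
(SELECT messages.msgid FROM messages, author_index WHERE
((author_index.word='Jim' or author_index.word='Jones') and
(author_index.msgid = messages.msgid))
GROUP BY messages.msgid HAVING count(messages.msgid)=2);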

-- 
Igor Roboul, Unix System Administrator  Programmer @ sanatorium "Raduga", 
Sochi, Russia
http://www.brainbench.com/transcript.jsp?pid=304744



[GENERAL] Case insensitive LIKE queries

2000-10-31 Thread Yann Ramin

Hello,

What would be the most effective way of performing a case-insensitive LIKE 
query?  I've noticed that MySQL somehow ALWAYS does it case-insensitively, 
which is not very smart, but I would expect this feature to exist without 
unreasonable pain on the programmer's end?
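
(Two approaches that should work here, sketched with made-up table and
column names: lower-case both sides of the LIKE, or use the
case-insensitive regex operator ~*:

   select * from users where lower(name) like lower('%smith%');
   select * from users where name ~* 'smith';
)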

Yann

-- 


Yann Ramin  [EMAIL PROTECTED]
Atrus Trivalie Productions  www.redshift.com/~yramin
AIM oddatrus
Marina, CA  http://profiles.yahoo.com/theatrus

IRM Developer   Network Toaster Developer
SNTS Developer  KLevel Developer
Electronics Hobbyistperson who loves toys

Build a man a fire, and he's warm for a day.
Set a man on fire, and he'll be warm for the rest of his life.

"I'm prepared for all emergencies but totally unprepared for everyday
life." 












[GENERAL] How to configure pgsql to store chinese?

2000-10-31 Thread Dave

Hi all,

I have my pgsql 7.0.2 built with this configure:
--enable-locale --enable-multibyte=EUC_TW
and I created a db with "createdb -E EUC_TW testdb";
however, I got "¦w" instead of "¦W" in the Chinese display.
Any ideas?

Thanks
Dave



Re: [GENERAL] how good is PostgreSQL

2000-10-31 Thread Steve Wolfe


 Even after that, you have a long way to go before you will hit 1000
 transactions per second from any SQL database.

   I guess they could always buy a few Sun E1's on the backend, and a
large room of rack-mountable PC's for web/CGI serving.  Nothing like
plopping down ten or twenty million dollars on hardware. : )

steve





Re: [GENERAL] how good is PostgreSQL

2000-10-31 Thread Lamar Owen

Steve Wolfe wrote:
 
  Even after that, you have a long way to go before you will hit 1000
  transactions per second from any SQL database.

I guess they could always buy a few Sun E1's on the backend, and a
 large room of rack-mountable PC's for web/CGI serving.  Nothing like
 plopping down ten or twenty million dollars on hardware. : )

Or they could buy a single IBM S/390, run Linux/390 and PostgreSQL on
that. Probably would cost less, and be more reliable.  And they can
always load another Linux/390 VM -- an S/390 can run something like
41,000 virtual machines each running Linux/390 and Apache.

However, if you want to see the architecture of a _large_
database-backed website, see the story behind Digital City at
www.aolserver.com.  While they're using Sybase instead of PostgreSQL,
the architecture is the same.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11