Re: [GENERAL] State of Beta (2)

2003-09-18 Thread Sander Steffann
Hi,

 Command Prompt will set up an escrow account online at www.escrow.com.
 When the Escrow account totals 2000.00 and is released, Command Prompt
 will dedicate a programmer for one month to debugging, documenting,
 reviewing, digging, crying, screaming, begging and bleeding with the
 code. At the end of the month, and probably during it depending on how
 everything goes, Command Prompt will release its findings.  The findings
 will include a project plan on moving forward over the next 5 months
 (if that is what it takes) to produce the first functional pg_upgrade.

 If the project is deemed to be moving in the right direction by the
 community members, and specifically the core members, we will set up
 milestone payments for the project.

 What does everyone think?

Sounds good. It provides a safe way for people to fund this development. I
can't promise anything yet on behalf of my company, but I'll donate at least
$50,- personally.

Sander.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Where are PL/pgSQL functions stored?

2003-09-18 Thread David Shadovitz
Thanks to Alvaro H. and Joshua D. for pointing me to pg_proc.

The function I was looking for is an overloaded one, so in my query of
pg_proc I had to specify the signature of the one that I wanted.

My favorite page, for today:
http://www.postgresql.org/docs/7.3/static/catalogs.html

-David

 Where are PL/pgSQL functions stored?  I want to retrieve the text of a
 function that I've created.



Re: [GENERAL] psql and blob

2003-09-18 Thread CoL
hi,
you are right: I put it the wrong way. psql is a client program, but he
wants to load the file from a place other than where psql runs.

He says:
 but i want to execute this script from the client and so my blob-data
is on the client and lo_import fails (the server doesn't have this file).
That's why he needs a program, an application, or something that can
communicate with the client. Alternatively, he can run psql on the client
machine where the file is and connect to the postgres server from there. :)

Sorry for my ambiguous letter.

C.

Jonathan Bartlett wrote, On 9/17/2003 9:48 PM:

 if you, then write a program for that. psql is a database server, not a
 client program.

No, psql is a client program.  postmaster is the database server.

Jon




Re: [GENERAL] Why does adding SUM and GROUP BY destroy performance?

2003-09-18 Thread Christopher Browne
In the last exciting episode, [EMAIL PROTECTED] (David Link) wrote:
 Why does adding SUM and GROUP BY destroy performance?

When you use SUM (or other aggregates), there are no short cuts to
walking through each and every tuple specified by the WHERE clause.

On some systems there are statistics mechanisms that can short-circuit
that.  On PostgreSQL, the use of MVCC to let new data almost
magically appear :-) has the demerit, in the case of aggregates, of
not leaving much opening for short cuts.

There are some cases where you CAN do much better than the aggregates
do.

  SELECT MAX(field) FROM table WHERE a = 'THIS' AND b = 'THAT';

may be replaced with the likely-to-be-faster:

  SELECT field FROM table WHERE a = 'THIS' AND b = 'THAT'
  ORDER BY field DESC LIMIT 1;

MIN() admits a similar rewriting.  If there is an index on FIELD, this
will likely be _way_ faster than using MIN()/MAX().

In a sense, it's not that aggregates destroy performance; just that
there are no magical shortcuts to make them incredibly fast.
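For instance, with an index whose leading columns match the WHERE clause, the rewritten form can stop after reading a single index entry. A sketch (table, column, and index names here are illustrative, not from the original post):

```sql
-- Hypothetical table t(a, b, field). MAX() must visit every tuple
-- matching the WHERE clause; the ORDER BY ... DESC LIMIT 1 form can
-- walk this index backward and stop at the first matching entry.
CREATE INDEX t_a_b_field_idx ON t (a, b, field);

SELECT field FROM t
 WHERE a = 'THIS' AND b = 'THAT'
 ORDER BY field DESC
 LIMIT 1;
```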
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').
http://www.ntlug.org/~cbbrowne/multiplexor.html
And  1.1.81 is  officially BugFree(tm),  so  if you  receive any  bug
reports on it, you know they are just evil lies. -- Linus Torvalds



Re: [GENERAL] State of Beta (2)

2003-09-18 Thread Andrew Rawnsley
Sounds good to me. I can throw in $500 to start.

On Wednesday, September 17, 2003, at 12:06 PM, Joshua D. Drake wrote:

Hello,

 O.k. here are my thoughts on how this could work:

 Command Prompt will set up an escrow account online at www.escrow.com.
 When the Escrow account totals 2000.00 and is released, Command 
Prompt will dedicate a
 programmer for one month to debugging, documenting, reviewing, 
digging, crying,
 screaming, begging and bleeding with the code. At the end of the 
month and probably during
 depending on how everything goes Command Prompt will release its 
findings.  The findings
 will include a project plan on moving forward over the next 5 months 
(if that is what it takes) to
 produce the first functional pg_upgrade.

 If the project is deemed to be moving in the right direction by the
community members, and specifically
 the core members, we will set up milestone payments for the project.

  What does everyone think?

  Sincerely,

  Joshua D. Drake

Dennis Gearon wrote:

I had already committed $50/mo.

Robert Creager wrote:

Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
Dennis Gearon [EMAIL PROTECTED] uttered something amazingly 
similar to:


Robert Creager wrote:


Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
Joshua D. Drake [EMAIL PROTECTED] uttered something 
amazingly similar
to:




If someone is willing to pony up 2000.00 per month for a period 
of at
Well, if you're willing to set up some sort of escrow, I'll put in 
$100.  I

Is that $100 once, or $100 x 6 months of anticipated development time?



That's $100 once.  And last I looked, there are well over 1800 
subscribers on
this list alone.  On the astronomically small chance every one of
them did
what I'm doing, it would cover more than 6 months of development 
time ;-)  This
strikes me as like supporting public radio.  The individuals do 
some, and the
corporations do a bunch.

I'm just putting my money toward a great product, rather than 
complaining that
it's not done.  Just like Joshua is doing.  You cannot hire a 
competent
programmer for $24k a year, so he is putting up some money on this 
also.

There have been a couple of other bites from small businesses, so
who knows!

You game?

Cheers,
Rob

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.





Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [GENERAL] psql and blob

2003-09-18 Thread Daniel Schuchardt
Yes, that's it.

Ok - one last question.
My script looks like this, and currently I can run it only on the server (so I have
to copy all my data to the server each time I want to update my blobs):

 INSERT INTO tablexy (BLOBFIELD) VALUES (lo_import('BLOBFILE'));

Now we know that if I want to upload a client file I have to use \lo_import, BUT I
can't use this inside the script:

 INSERT INTO tablexy (BLOBFIELD) VALUES (\lo_import('BLOBFILE')); is not possible
because \lo_import is an unknown command for the server.

So I have to do 2 steps manually in psql:

 \lo_import(ClientFile) - returns an OID

 INSERT INTO tablexy (BLOBFIELD) VALUES (returned OID)

Is there a way to do this automatically? That is, my client-side script should
upload a local file (from the same computer where the script is executed) to the
server and insert it into a particular table automatically.

Thanks


Am Mi, 2003-09-17 um 22.42 schrieb Doug McNaught:
 Daniel Schuchardt [EMAIL PROTECTED] writes:
 
  Hi @ all,
  
  i'm sure there was a psql-function to transfer my Blob-Data to the
  server but I can't remember.
 
 The psql function to use is \lo_import--this reads the file from the
 client side.
 
 -Doug
 






Re: [GENERAL] High-volume shop uses PostgreSQL

2003-09-18 Thread Gaetano Mendola
Ron Johnson wrote:
On Thu, 2003-09-18 at 03:23, Gaetano Mendola wrote:

Ron Johnson wrote:

PostgreSQL does not do horizontal scaling at all, since the postmaster
can only run on 1 CPU, but it's good at vertical scaling, since it
can make use of all of the CPUs in a box.  (Well, there's sure to
be a point at which there is so much activity that the postmaster
can't handle it all...)
I've seen some PCI cards that make it possible to share memory
between multiple boxes. I'd like to know how much effort would be required
to let postgres run on two or more machines with the shared memory shared
between the boxes.


HPaq/DEC has a hardware/software product called MemoryChannel, which
does that for you.  Of course, it only works with Tru64 and OpenVMS.


I knew about the existence of this hardware; my concern is about making the
postmaster aware that another postmaster is running on another machine
and that the underlying shared memory is shared between two or more boxes.
Regards
Gaetano Mendola


[GENERAL] Cannot Delete

2003-09-18 Thread Alex
Hi,
I have a table with 2.5 million records which I'm trying to delete. I have
several constraints on it too.
I tried to delete the records using DELETE but it does not seem to work;
the delete runs forever. Hours...
I cannot TRUNCATE it as it complains about foreign keys.

What is the problem ?

Thanks
Alex




Re: [GENERAL] Cannot Delete

2003-09-18 Thread Tim McAuley
Not sure about 2.5 million records but try running VACUUM ANALYSE 
before the delete and during (every now and then).

Had the same problem with 100,000 records and it did the trick nicely.

Hi,
I have a table with 2.5 million records which I'm trying to delete. I have
several constraints on it too.
I tried to delete the records using DELETE but it does not seem to work;
the delete runs forever. Hours...
I cannot TRUNCATE it as it complains about foreign keys.

What is the problem ?

Thanks
Alex




Re: [GENERAL] psql and blob

2003-09-18 Thread Tom Lane
Daniel Schuchardt [EMAIL PROTECTED] writes:
 So I have to do 2 steps manually in psql:
  \lo_import(ClientFile) - Returns OID
  INSERT INTO tablexy (BLOBFIELD) VALUES (Returned OID)

 Is there a way to do this automatically?

See psql's :LASTOID (I think that's the name) variable.
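:LASTOID is indeed the variable. A minimal psql script along those lines (the file and table names are illustrative):

```sql
-- \lo_import reads the file on the *client* side; psql then sets the
-- variable :LASTOID to the OID of the newly created large object,
-- which the next statement can interpolate.
\lo_import 'blobfile.dat'
INSERT INTO tablexy (blobfield) VALUES (:LASTOID);
```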

regards, tom lane



Re: [GENERAL] Cannot Delete

2003-09-18 Thread Tom Lane
Alex [EMAIL PROTECTED] writes:
 I have a table with 2.5 million records which I'm trying to delete. I have
 several constraints on it too.
 I tried to delete the records using DELETE but it does not seem to work;
 the delete runs forever. Hours...
 I cannot TRUNCATE it as it complains about foreign keys.

It's a good bet that you need to create indexes on the columns that
reference this table via foreign keys.  Without such indexes, updates
and deletes on the referenced table will be really slow.
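As a sketch (table and column names here are hypothetical, not from the original post):

```sql
-- Suppose orders.customer_id has a foreign key referencing customer(id).
-- Without an index on the referencing column, every row deleted from
-- customer forces a sequential scan of orders to check for references.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- The delete can now verify each foreign-key check via the index.
DELETE FROM customer WHERE obsolete;
```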

regards, tom lane



Re: [GENERAL] State of Beta 2

2003-09-18 Thread Lamar Owen
Marc G. Fournier wrote:
And that has nothing to do with user need as a whole, since the care
level I mentioned is predicated by the developer interest level.
While I know, Marc, how the whole project got started (I have read the
first posts), and I appreciate that you, Bruce, Thomas, and Vadim
started the original core team because you were and are users of
PostgreSQL, I sincerely believe that in this instance you are out of
touch with this need of many of today's userbase.

Huh?  I have no disagreement that upgrading is a key feature that we are
lacking ... but, if there are any *on disk* changes between releases, how
do you propose 'in place upgrades'?
RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if 
eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a 
7.3); that question has yet to be answered.  If it can do this, then I 
would be a much happier camper.  I would be happy for a migration tool 
that could read the old format _without_a_running_old_backend_ and 
convert it to the new format _without_a_running_backend_.  That's always 
been my beef, that the new backend is powerless to recover the old data. 
 OS upgrades where PostgreSQL is part of the OS, FreeBSD ports upgrades 
(according to a user report on the lists a few months back), and RPM 
upgrades are absolutely horrid at this point. *You* might be able to stand
it; some cannot.

  Granted, if it's just changes to the
system catalogs and such, pg_upgrade should be able to be taught to handle
it .. I haven't seen anyone step up to do so, and for someone spending so
much time pushing for an upgrade path, I haven't seen you pony up the time
I believe I pony up quite a bit of time already, Marc.  Not as much as 
some, by any means, but I am not making one red cent doing what I do for 
the project.  And one time I was supposed to have gotten paid for a 
related project, I didn't.  I did get paid by Great Bridge for RPM work 
as a one-shot deal, though.

The time I've already spent on this is too much.  I've probably put 
several hundred hours of my time into this issue in one form or another; 
what I don't have time to do is climb the steep slope Tom mentioned 
earlier.  I actually need to feed my family, and my employer has more 
for me to do than something that should have already been done.

Just curious here ... but, with all the time you've spent pushing for an
easy upgrade path, have you looked at the other RDBMSs and how they deal
with upgrades?  I think it's going to be a sort of apples-to-oranges thing,
since I imagine that most of the 'big ones' don't change their disk
formats anymore ...
I don't use the others; thus I don't care how they do it; only how we do 
it.  But even MySQL has a better system than we -- they allow you to 
migrate table by table, gaining the new features of the new format when 
you migrate.  Tom and I pretty much reached consensus that the reason we 
have a problem with this is the integration of features in the system 
catalogs, and the lack of separation between 'system' information in the 
catalogs and 'feature' or 'user' information in the catalogs.  It's all 
in the archives that nobody seems willing to read over again.  Why do we
even have archives if they're not going to be used?

If bugfixes were consistently backported, and support was provided for 
older versions running on newer OS's, then this wouldn't be as much of a 
problem.  But we orphan our code after one version cycle; 7.0.x is
completely unsupported, for instance, while even 7.2.x is virtually
unsupported.  My hat's off to Red Hat for backporting the buffer
overflow fixes to all their supported versions; we certainly wouldn't
have done it.  And 7.3.x will be unsupported once we get past 7.4
release, right? So in order to get critical bug fixes, users must 
upgrade to a later codebase, and go through the pain of upgrading their 
data.

K, looking back through that it almost sounds like a ramble ... hopefully
you understand what I'm asking ...
*I* should complain about a ramble? :-)
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
Formerly of WGCR Internet Radio, and the PostgreSQL RPM maintainer since 
1999.





Re: [GENERAL] State of Beta 2

2003-09-18 Thread Marc G. Fournier

On Thu, 18 Sep 2003, Lamar Owen wrote:

  Huh?  I have no disagreement that upgrading is a key feature that we are
  lacking ... but, if there are any *on disk* changes between releases, how
  do you propose 'in place upgrades'?

 RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if
 eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
 7.3); that question has yet to be answered.

'K, I had already answered it as part of this thread when I suggested
doing exactly that ... in response to which several people questioned the
feasibility of setting up a duplicate system with 1TB of disk space to do
the replication over to ...

See: http://archives.postgresql.org/pgsql-general/2003-09/msg00886.php



Re: [GENERAL] State of Beta 2

2003-09-18 Thread Andrew Rawnsley
On Thursday, September 18, 2003, at 12:11 PM, Lamar Owen wrote:

RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice 
if eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 
onto a 7.3); that question has yet to be answered.  If it can do this, 
then I would be a much happier camper.  I would be happy for a 
migration tool that could read the old format 
_without_a_running_old_backend_ and convert it to the new format 
_without_a_running_backend_.  That's always been my beef, that the new 
backend is powerless to recover the old data.  OS upgrades where 
PostgreSQL is part of the OS, FreeBSD ports upgrades (according to a 
user report on the lists a few months back), and RPM upgrades are 
absolutely horrid at this point. *You* might can stand it; some  cannot.

eRserver should be able to migrate the data. If you make heavy use of 
sequences, schemas and other such things it won't help you for those.

It's not a bad idea to do it that way, if you aren't dealing with large
or very complex databases. The first thing it's going to do when you add
a slave is do a dump/restore to create the replication target. If you 
can afford the disk space and time, that will migrate the data. By 
itself that isn't any different than doing that by hand. Where eRserver 
may help is keeping the data in sync while you work the other things 
out.

Sequences and schemas are the two things it doesn't handle at the 
moment. I've created a patch and some new client apps to manage the 
schema part, but I haven't had the chance to send them off to someone 
to see if they'll fit in. Sequences are on my list of things to do 
next. Time time time time.

Using eRserver may help you work around the problem, given certain 
conditions. It doesn't solve it. I think if we can get Mr. Drake's 
initiative off the ground we may at least figure out if there is a 
solution.



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [GENERAL] Why does adding SUM and GROUP BY destroy performance?

2003-09-18 Thread David Link
Thanks Ron, Thanks Christopher for your excellent feedback.

I guess it's back to the drawing board.  This is a very late hour
business requirement change.  And we need quick real-time results.

Two things are being considered:

   1. loading the non-aggregate query entirely into memory (using Perl
CGI currently, but looking at the possibility of doing this in the
database with either PL/Perl or PL/pgSQL, though I don't know what
would be gained by doing it that way) and handling the summing and
then the sort ourselves in the program, -- or --

   2. preprocessing a lot of the data at predetermined times,
essentially doubling the size of our database.
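A sketch of what option 2 might look like as a summary table rebuilt on a schedule (all table and column names are illustrative):

```sql
-- Rebuilt from cron at predetermined times; the interactive query
-- then becomes an indexed lookup instead of a SUM/GROUP BY pass over
-- every matching row.
DROP TABLE sales_summary;
CREATE TABLE sales_summary AS
  SELECT region, product, SUM(amount) AS total_amount
    FROM sales
   GROUP BY region, product;
CREATE INDEX sales_summary_idx ON sales_summary (region, product);

-- Fast real-time query:
SELECT total_amount FROM sales_summary
 WHERE region = 'THIS' AND product = 'THAT';
```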

I'd be open to any other suggestions.

Thanks again. very much.
-David






Re: [GENERAL] State of Beta 2

2003-09-18 Thread Dennis Gearon
Andrew Rawnsley wrote:

eRserver should be able to migrate the data. If you make heavy use of 
sequences, schemas and other such things it won't help you for those.

snip

Using eRserver may help you work around the problem, given certain 
conditions. It doesn't solve it. I think if we can get Mr. Drake's 
initiative off the ground we may at least figure out if there is a 
solution.


So a replication application IS a method to migrate, OR CAN BE MADE to do
it somewhat, AND is a RELATED project to the migration tool.
Again, I wonder what on the TODO's or any other roadmap is related and 
should be part of a comprehensive plan to drain the swamp and not just 
club alligators over the head?



Re: [GENERAL] psql and blob

2003-09-18 Thread Darko Prenosil

- Original Message -
From: Daniel Schuchardt [EMAIL PROTECTED]
To: Doug McNaught [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, September 18, 2003 4:44 PM
Subject: Re: [GENERAL] psql and blob


 Yes thats it.

 Ok - one last question.
 My Script looks like this and actually i can run it only on the server (so
i have to copy all my data to the server each time i want to update my
blobs):

  INSERT INTO tablexy (BLOBFIELD) VALUES (lo_import('BLOBFILE')).

 Now we know if I want to upload a clientfile I have to use \lo_import BUT
i cant use this inside the script.

  INSERT INTO  tablexy (BLOBFIELD) VALUES (\lo_import('BLOBFILE')). is not
possible because \lo_import is a unknown command for the server.

 So I have to do 2 steps manually in psql:

  \lo_import(ClientFile) - Returns OID

  INSERT INTO tablexy (BLOBFIELD) VALUES (Returned OID)

 Is there a way to do this automatically?
 Means my Clientside script should upload a local file (from the same
computer where the script is executed) to the server and insert this file in
a special table automatically.

Maybe your problem is only to find the last inserted OID?
See: http://www.postgresql.org/docs/7.3/interactive/app-psql.html and look
for LASTOID.
Part from docs:

LASTOID
The value of the last affected OID, as returned from an INSERT or lo_insert
command. This variable is only guaranteed to be valid until after the result
of the next SQL command has been displayed.


So the solution might be:

  \lo_import 'ClientFile'
  INSERT INTO tablexy (BLOBFIELD) VALUES (:LASTOID)

I'm not sure what exactly you want to accomplish, but this might work.

I repeat: lo_read/lo_write from libpq are the only true client-side way I
know. You can write a small C program that reads the file from the local
filesystem and writes it directly to the SQL server, with no uploads or
anything like that.
Look at: http://www.postgresql.org/docs/7.3/interactive/lo-libpq.html
Note that the \lo_import and \lo_export PSQL INSTRUCTIONS use the same
technique, and they act differently from the server-side lo_import() and
lo_export() SERVER SIDE FUNCTIONS. See:
http://www.postgresql.org/docs/7.3/interactive/app-psql.html
Hope this helps.

Regards !





Re: [GENERAL] State of Beta 2

2003-09-18 Thread Joshua D. Drake

If bugfixes were consistently backported, and support was provided for 
older versions running on newer OS's, then this wouldn't be as much of 
a problem.  But we orphan our code afte one version cycle; 7.0.x is 
completely unsupported, for instance, while even 7.2.x is virtually 
unsupported.  My hat's off to Red Hat for backporting the buffer 
overflow fixes to all their supported versions; we certainly wouldn't 
have don it.  And 7.3.x will be unsupported once we get past 7.4 
release, right? So in order to get critical bug fixes, users must 
upgrade to a later codebase, and go through the pain of upgrading 
their data.


Command Prompt is supporting the 7.3 series until 2005 and that includes 
backporting certain features and bug fixes. The reality is that most 
(with the exception of the Linux kernel and maybe Apache) open source 
projects don't support back releases. That is the point of commercial 
releases such as RedHat DB and Mammoth. We will support the older
releases for some time.

If you want to have continued support for an older rev, purchase a 
commercial version. I am not trying to push my product here, but frankly 
I think your argument is weak. There is zero reason for the community to 
support previous versions of the code. Maybe until 7.4 reaches 7.4.1 or
something, but longer? Why? The community should be focusing on
generating new, better, faster, cleaner code.

That is just my .02.

Joshua Drake





K, looking back through that it almost sounds like a ramble ... 
hopefully
you understand what I'm asking ...


*I* should complain about a ramble? :-)


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.




Re: [GENERAL] how can i use SELECT to find a substring of a

2003-09-18 Thread Jim Crate
Restrictions in the Mac OS X implementation of shmget limit Postgres to 2MB
of shared buffers. This could be a problem for large databases and/or heavy
activity.

http://www.postgresql.org/docs/7.3/interactive/kernel-resources.html

These directions are sufficient to increase shared buffers significantly beyond
2MB on MacOS 10.2.6.

-- 
Jim Crate
Deep Sky Technologies, Inc.



[GENERAL] Scalability (both vertical and horizontal)?

2003-09-18 Thread Duffey, Kevin
We are looking for information regarding the capabilities of PostgreSQL with
regard to scalability. Ideally we want to be able to scale in both directions.
What sort of solutions are out there for either or both directions of
scalability? Open-source solutions would be preferred, but commercial
applications are fine as well.

Thank you.
 




Re: [GENERAL] State of Beta 2

2003-09-18 Thread Tom Lane
Joshua D. Drake [EMAIL PROTECTED] writes:
 If you want to have continued support for an older rev, purchase a 
 commercial version. I am not trying to push my product here, but frankly 
 I think your argument is weak. There is zero reason for the community to 
 support previous version of code. Maybe until 7.4 reaches 7.4.1 or 
 something but longer? Why? The community should be focusing on 
 generating new, better, faster, cleaner code.

I tend to agree on this point.  Red Hat is also in the business of
supporting back-releases of PG, and I believe PG Inc, SRA, and others
will happily do it too.  I don't think it's the development community's
job to do that.

[ This does not, however, really bear on the primary issue, which is how
can we make upgrading less unpleasant for people with large databases.
We do need to address that somehow. ]

regards, tom lane



[GENERAL] I dont know the error with updtae

2003-09-18 Thread Edwin Quijada
Hi!
I have a trigger using an UPDATE, but the UPDATE doesn't work inside the
trigger, although if I execute it from pgAdmin it works perfectly.

This is the code:

select into registro * from t_pagos_comisiones
  where f_wholetipoagnivel = who and f_fecha = $4 for update;
IF FOUND THEN
  IF ($5 <> 0) THEN
    t1 := registro.f_montopagado + $7;
    t2 := registro.f_montocomision + $5;
    raise notice '' hacer el update t1 y t2 %, %'', t1, t2;
    -- f_montocomision = t2
    UPDATE t_pagos_comisiones
      SET f_montopagado = t1, f_montocomision = t2, f_nivel = 90
      WHERE f_wholetipoagnivel = who AND f_fecha = vfecha;
  END IF;
END IF;

*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-747-2787
*  Si deseas lograr cosas excepcionales debes de hacer cosas fuera de lo 
comun
*---*



[GENERAL] Image data type equivalent in postgresql

2003-09-18 Thread Josué Maldonado
Hello list,

That's the question: what is the PostgreSQL equivalent of the MS SQL image
data type?

TIA,

--
Josué Maldonado.


[GENERAL] PostgreSQL versus MySQL

2003-09-18 Thread Joshua D. Drake
Hello,

  I think the below just about says it all:

http://www.commandprompt.com/images/mammoth_versus_dolphin_500.jpg

Sincerely,

Joshua Drake

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.




Re: [GENERAL] PostgreSQL versus MySQL

2003-09-18 Thread Mike Mascari
Joshua D. Drake wrote:

 Hello,
 
   I think the below just about says it all:
 
 http://www.commandprompt.com/images/mammoth_versus_dolphin_500.jpg
 
 Sincerely,
 
 Joshua Drake

Too bad the symbol of Oracle Corp. isn't a peanut...

Mike Mascari
[EMAIL PROTECTED]





Re: [GENERAL] Image data type equivalent in postgresql

2003-09-18 Thread Mike Mascari
Josué Maldonado wrote:

 Hello list,
 
 That's the question: what is the equivalent data type of the MS SQL 'image'
 data type?

You can use 'bytea' for binary data. You can use 'text' and base64
encode the image before insertion. You can, of course, create your own
 'image' type using CREATE TYPE.  You could also alias the 'bytea'
type via CREATE DOMAIN.
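
For instance, a minimal sketch of the first and last options (the table,
column, and domain names here are made up for illustration):

```sql
-- Option 1: store the raw image bytes in a bytea column.
CREATE TABLE images (
    id    serial PRIMARY KEY,
    name  text,
    data  bytea
);

-- Option 2: alias bytea under a domain, so the schema reads like MS SQL's
-- image type (the domain name 'image' is our choice, not a built-in).
CREATE DOMAIN image AS bytea;

CREATE TABLE images_via_domain (
    id    serial PRIMARY KEY,
    data  image
);
```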

HTH,

Mike Mascari
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Scalability (both vertical and horizontal)?

2003-09-18 Thread scott.marlowe
On Thu, 18 Sep 2003, Duffey, Kevin wrote:

 We are looking for information regarding any capabilities of PostgreSQL 
 in regards to scalability. Ideally we want to be able to scale in both 
 directions. What sort of solutions are out there for either or both 
 directions of scalability? Specifically, open-source solutions would be 
 most in need, but commercial applications are fine as well.

The most important resource as regards vertical scalability is probably 
the performance mailing list.  I.e. Postgresql can already take advantage 
of multiple processors and spare memory and fast I/O subsystems, but 
things like 64 way Sun E10ks are not commonly tested with Postgresql, so 
if you're the first guy on the block to buy one, you might find some 
issues that need ironing out with that large of a piece of iron.

As for horizontal scaling, you could look at ERserver for that.  Set up one 
master writer and a bunch of slave boxes to handle the majority of the 
queries.  There's not been a bunch of work into horizontal scaling really, 
with most of the clustering software for postgresql aiming at failover / 
high availability, not massive parallelization of read and / or writes.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostgreSQL versus MySQL

2003-09-18 Thread Sean Chittenden
   I think the below just about says it all:
 
 http://www.commandprompt.com/images/mammoth_versus_dolphin_500.jpg

Hey Josh, what about some t-shirts with this on the back and some
snappy verbiage above/below the image?  Just a thought, but maybe
that's something the advocacy team could run with depending on who
owns the copyright to the image.  -sc

-- 
Sean Chittenden

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Sequences

2003-09-18 Thread Tom Lane
Andrew Rawnsley [EMAIL PROTECTED] writes:
 Sequence information is stored in their individual 1-row tables, with 
 an entry in pg_class of relkind 'S'. So you can't
 really get a list of sequences with last_value in a single query 

Nope, you can't ... and I surely hope you weren't expecting that the
last_values would all be simultaneously valid ...

What I'd try for this is

select relname, get_last_value(relname) from pg_class where relkind = 'S';

where get_last_value() is a plpgsql function that does an EXECUTE.
But you must realize that there will be time skew between the
last_values.
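
A sketch of such a function, assuming PostgreSQL 7.3-era PL/pgSQL (no dollar
quoting yet, so internal quotes are doubled; the function name and bigint
return type are our choices, and EXECUTE results are read with FOR ... IN
EXECUTE since EXECUTE ... INTO is not available):

```sql
-- Hypothetical helper: reads last_value from the sequence named by its
-- argument, using a dynamically built query.
CREATE OR REPLACE FUNCTION get_last_value(name) RETURNS bigint AS '
DECLARE
    seq ALIAS FOR $1;
    r RECORD;
BEGIN
    FOR r IN EXECUTE ''SELECT last_value FROM '' || quote_ident(seq) LOOP
        RETURN r.last_value;
    END LOOP;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';
```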

What is the problem you really want to solve?

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] This mail list and its policies

2003-09-18 Thread Bruno Wolff III
On Thu, Sep 18, 2003 at 20:59:53 -0700,
  expect [EMAIL PROTECTED] wrote:
 
 I had no idea that my address was being broadcast to the world via
 comp.databases.postgresql.general  I have no problem with having messages sent
 to the list go to the group.  I do have a problem with my address out there in
 the free and clear.  Shouldn't the initial subscription notice let new sub-
 scribers know that their address will be broadcast over the planet?

Not really, as addresses on technical lists generally are available on
the web archives.

 What's the logic and/or justification for doing this? I can handle the spam
 using the delete key but I really don't like the additional burden that it
 puts on my ISP.  I would have posted via usenet had I known about this policy.
 As a matter of fact that's what I intend to do from now on.   Well this
 address was good for some time and now it's tainted.  The list owner
 should send out as part of the pre-subscription message a warning that
 the address they use will show up on usenet and the www. 

To make it easier to communicate with people.

 Things that should be done as a responsible list:

One option for you is to use the list address in the from header when
posting to the list. That will hide your address and not break replies.
Most likely the list checks the envelope sender address to see whether
or not the message needs moderator approval. So you should be able to
have your messages go through right away if you keep the envelope sender
address the same as your subscription address.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL versus MySQL

2003-09-18 Thread Tom Lane
scott.marlowe [EMAIL PROTECTED] writes:
 ... Being honest and fair will win 
 hearts and minds, and when they need the Saturn 4 instead of the Estes
 rocket, they'll remember who to come to.

I like this analogy, though maybe you've overstretched.  Perhaps:

MySQL = Estes.  Put in InnoDB, and you have a D engine ... but it's
still a model rocket.

Postgres = Titan II.  Can boost LEO missions or small interplanetary
probes.  Never mind its ICBM heritage ;-)

Oracle = Saturn IV.  Can take you to the moon ... if you can afford
the price tag.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Sequences

2003-09-18 Thread Andrew Rawnsley
Oh, it's a sticky problem, to be sure. Have to get something working at 
some point, though...

Your point about approximate solutions is well taken.

On Friday, September 19, 2003, at 01:11 AM, Tom Lane wrote:

Andrew Rawnsley [EMAIL PROTECTED] writes:
On Friday, September 19, 2003, at 12:21 AM, Tom Lane wrote:
What is the problem you really want to solve?

Hacking some semblance of sequence support into eRserver.
Hmm.  I don't see a lot of value in an approximate solution.  Either 
the
sequence is up to date at the slave, or it is not.  What's the point of
almost up to date?  You'd still have to take some action along the
lines of select setval('seq', (select max(col) from tab)) during any
failover.  If you have to do that, it doesn't matter what the sequence
value was.

Perhaps sequence increments could be broadcast to slaves as-is, using
some variant of the existing erserver protocol that understands that
these things happen outside transaction control.
			regards, tom lane



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Sequences

2003-09-18 Thread Andrew Rawnsley
On Friday, September 19, 2003, at 12:21 AM, Tom Lane wrote:

Andrew Rawnsley [EMAIL PROTECTED] writes:
Sequence information is stored in their individual 1-row tables, with
an entry in pg_class of relkind 'S'. So you can't
really get a list of sequences with last_value in a single query
Nope, you can't ... and I surely hope you weren't expecting that the
last_values would all be simultaneously valid ...
No, I don't expect that. I'm shooting for 'best I can do'.

What I'd try for this is

select relname, get_last_value(relname) from pg_class where relkind = 
'S';

where get_last_value() is a plpgsql function that does an EXECUTE.
But you must realize that there will be time skew between the
last_values.
What is the problem you really want to solve?

Hacking some semblance of sequence support into eRserver. The possible 
skew when gathering last_values
doesn't bother me too much - the replicated system is out of sync to 
start with, and the hope is that everything will stay pretty
much caught up.  It's certainly possible for me to be caught with my 
pants down and have a sequence lagging the column it's
supposed to represent at time of failure. But a) it's not likely, and b) 
I don't have much choice anyway. Something is better than
nothing. Pays your money, you takes your chances.

I'll poke around with the function idea. Certainly better than doing it 
all from the client side...

			regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com

---(end of broadcast)---
TIP 8: explain analyze is your friend