Hi,
I have a Solaris 2.8 machine on which PostgreSQL 7.2.4 is listening on
port 5432. This works fine.
I have a user who needs a newer version and has asked specifically for
7.3. I have compiled and installed 7.3.4
successfully, using
./configure --prefix=/usr/local/p_sql734
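For a side-by-side install next to the existing 7.2.4 on port 5432, a build along these lines should work. This is a sketch: the port, make invocation, and data-directory paths are assumptions, not taken from the post.

```shell
# Build 7.3.4 under its own prefix and give it a non-default port so it can
# coexist with the 7.2.4 cluster already on 5432 (port/paths are assumptions).
./configure --prefix=/usr/local/p_sql734 --with-pgport=5433
make
make install

# The new cluster needs its own data directory and its own postmaster:
/usr/local/p_sql734/bin/initdb -D /usr/local/p_sql734/data
/usr/local/p_sql734/bin/pg_ctl -D /usr/local/p_sql734/data \
    -l /usr/local/p_sql734/postmaster.log start
```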
> This is extremely hard to believe. I can see no way that pg_dump will
> do that unless you explicitly ask for it (-d or -D switch, or one of the
> long variants of same).
I know a lot of people who confuse the -d switch with an option
indicating that the database name follows. It is also
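For context, in the 7.x pg_dump the database name is a positional argument, and -d/-D change the dump format instead. A sketch with a made-up database name:

```shell
# "mydb" is a hypothetical database name; note it is positional, not after -d.
pg_dump mydb > mydb.sql       # default dump, data as COPY blocks
pg_dump -d mydb > mydb.sql    # same database, but data dumped as INSERTs
pg_dump -D mydb > mydb.sql    # INSERTs with explicit column names
```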
- Original Message -
Will there be a 7.3.5 version released, containing relevant patches,
before the release of the 7.4 version?
That seems unlikely. There will probably be such a release, but most
likely only after the 7.4 release.
Yes. I'd expect we'd wait
Hi all
I was asking for help a week ago. Performance tests took me some more
time because of other things that I had to do.
The problem was: tune PostgreSQL to work with 10,000 databases.
Tom Lane (thanks) suggested a solution: one database and 10,000 schemas.
From now on I will write switch database
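A minimal sketch of the suggested one-database/many-schemas layout; the database, user, and schema names below are invented for illustration:

```shell
# One shared database; each hosted user gets a schema of the same name.
psql -d hosting -c "CREATE USER tenant_0001 PASSWORD 'secret';"
psql -d hosting -c "CREATE SCHEMA tenant_0001 AUTHORIZATION tenant_0001;"
# A user's session then resolves unqualified table names in its own schema:
psql -d hosting -U tenant_0001 -c "SET search_path TO tenant_0001; SELECT 1;"
```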
On 05 Nov 2003 14:33:33 +0100
Marek Florianczyk [EMAIL PROTECTED] wrote:
During this test I was changing some parameters in postgres and sending
kill -HUP (pg_ctl reload). I still don't know which settings will be
best for me, apart from shared buffers and some kernel and shell
settings.
skennedy [EMAIL PROTECTED] writes:
However, when I try to create a user I get:
bash-2.03$ createuser idip_734
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) n
psql: FATAL 1: user p_sql734 does not exist
Looks to me
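One thing worth checking when two clusters share a machine: the client may be connecting to the wrong postmaster, where the connecting user does not exist. A sketch, assuming the new cluster sits on port 5433:

```shell
# If createuser silently connects to the old 5432 cluster, the OS account
# may have no database user there. Target the new cluster explicitly:
/usr/local/p_sql734/bin/createuser -p 5433 idip_734
# or set the default port for all libpq clients in this shell:
PGPORT=5433 /usr/local/p_sql734/bin/createuser idip_734
```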
In a message of Wed, 05-11-2003, at 14:48, Jeff wrote:
On 05 Nov 2003 14:33:33 +0100
Marek Florianczyk [EMAIL PROTECTED] wrote:
During this test I was changing some parameters in postgres and sending
kill -HUP (pg_ctl reload). I still don't know which settings will be
best for me, except
Thanks Tom,
PGPORT was unset in the environment.
I set it to 5433 and things started to work.
If I configured the server to use 5433 before building (using
--with-pgport=5433 in configure), shouldn't the clients etc.
automatically look for port 5433?
The executables ARE the ones I compiled
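As a sketch, both the environment variable and the per-command flag do the same thing for libpq clients:

```shell
# Either form points clients at the 5433 postmaster:
export PGPORT=5433
psql -l                 # lists databases on the 5433 cluster
# or, without touching the environment:
psql -p 5433 -l
```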
Donald Fraser [EMAIL PROTECTED] writes:
The fix I am looking for in 7.3.5, which is in the 7.4 branch, is for the
regproc-reverse-conversion problem (the error comes when the planner tries
to convert that string back to OID form).
IIRC there is no solution to that without moving to 7.4 --- the
On Tue, Nov 04, 2003 at 03:40:21PM -0600, Bruno Wolff III wrote:
On Tue, Nov 04, 2003 at 15:27:23 +,
Rob Fielding [EMAIL PROTECTED] wrote:
I keep doing this because I keep forgetting the Reply-To field isn't set
to the pgsql- mailing list.
It's not. You should probably just get
skennedy [EMAIL PROTECTED] writes:
If I configured the server to use 5433 before building (using
--with-pgport=5433 in configure), shouldn't the clients etc.
automatically look for port 5433?
They should, and they do for me (I routinely run several different
versions of PG on different ports
Hello admins
Tom Lane once wrote, if I remember well, that the biggest PostgreSQL DB
he knows of is about 4 TB!!! In Oracle we normally say it is not possible
(= useful) to exp/imp more than, let's say, 20 to 50 GB of data. This is
more or less the same as what we do with pg_dumpall???
Oracle was not
How do I direct logging output from stdout to a file? I'm using PG
7.3.4. Thanks for the help!
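Two common approaches on 7.3, with assumed data-directory and log paths:

```shell
# 1. Let pg_ctl capture the postmaster's stdout/stderr in a file:
pg_ctl start -D /usr/local/pgsql/data -l /var/log/postgresql/postmaster.log

# 2. Or redirect explicitly when starting the postmaster by hand:
postmaster -D /usr/local/pgsql/data \
    >> /var/log/postgresql/postmaster.log 2>&1 &
```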
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
Hi Steve
In my old company we had some shell scripts monitoring all the Oracle
stuff. I plan to transfer this to PG later. It was an easy but very stable
concept: two servers were watching each other with the same
functionality: the master was checking all databases for availability,
free space
Hello Stephen
I am actually working on a concept for these problems: many clusters with
different versions on one server.
It consists of 3 docs and some scripts:
* Optimal Flexible Architecture (OFA) for PostgreSQL
* Flexible Environment for PostgreSQL in a Multi-Cluster-Multi-Database
(MCMD)
On Wed, Nov 05, 2003 at 16:14:59 +0100,
Marek Florianczyk [EMAIL PROTECTED] wrote:
One database with 3,000 schemas works better than 3,000 databases, but
there is a REAL, BIG problem, and I won't be able to use this solution:
every query like \d table or \di takes a very long time.
Users
On Wed, Nov 05, 2003 at 08:48:52AM -0500, Jeff wrote:
As far as I know, -HUP won't make things like shared-buffer changes
take effect. You need a full restart of PG.
It definitely will not. Anything that can only be set on startup
actually means startup.
but your numbers are different... I guess
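The distinction in commands, as a sketch (the data directory path is an assumption):

```shell
# SIGHUP (reload) only re-reads settings that can change at runtime:
pg_ctl -D /usr/local/pgsql/data reload
# Startup-only settings such as shared_buffers need a full stop/start:
pg_ctl -D /usr/local/pgsql/data restart
```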
In a message of Wed, 05-11-2003, at 17:24, Andrew Sullivan wrote:
On Wed, Nov 05, 2003 at 08:48:52AM -0500, Jeff wrote:
As far as I know, -HUP won't make things like shared-buffer changes
take effect. You need a full restart of PG.
It definitely will not. Anything that can only be set on startup
Marek Florianczyk [EMAIL PROTECTED] writes:
Each client was doing:
10 x (connect, select * from table[rand(1-4)] where
number=[rand(1-1000)], disconnect)  -- fetches one row
Seems like this is testing the cost of connect and disconnect to the
exclusion of nearly all else. PG is not designed to
In a message of Wed, 05-11-2003, at 17:18, Bruno Wolff III wrote:
On Wed, Nov 05, 2003 at 16:14:59 +0100,
Marek Florianczyk [EMAIL PROTECTED] wrote:
One database with 3,000 schemas works better than 3,000 databases, but
there is a REAL, BIG problem, and I won't be able to use this solution:
On 05 Nov 2003 19:01:38 +0100
Marek Florianczyk [EMAIL PROTECTED] wrote:
and it works better, but no revelation; when I do \d
schemaname.table it's better. I still have to wait about 10-30 sec., and
now only 100 clients are connected. :(
So it only goes slow with hundred(s) of clients
In a message of Wed, 05-11-2003, at 18:59, Tom Lane wrote:
Marek Florianczyk [EMAIL PROTECTED] writes:
Each client was doing:
10 x (connect, select * from table[rand(1-4)] where
number=[rand(1-1000)], disconnect)  -- fetches one row
Seems like this is testing the cost of connect and disconnect
In a message of Wed, 05-11-2003, at 19:23, Jeff wrote:
On 05 Nov 2003 19:01:38 +0100
Marek Florianczyk [EMAIL PROTECTED] wrote:
and it works better, but no revelation; when I do \d
schemaname.table it's better. I still have to wait about 10-30 sec., and
now only 100 clients are connected. :(
In a message of Wed, 05-11-2003, at 19:34, Tom Lane wrote:
Marek Florianczyk [EMAIL PROTECTED] writes:
But did you do that under some database load, e.g. 100 clients
connected, like in my example? When I do these queries (\d) without any
clients connected and after ANALYZE, it's fast, but only
Marek Florianczyk [EMAIL PROTECTED] writes:
Maybe I reconnect too often, but how to explain that regular queries like
select * from table1 are much faster than \d's? (my post to Jeff)
[ further experimentation... ] Ah-hah, I see the problem in 7.3, though
not in 7.4 which is what I was
In a message of Wed, 05-11-2003, at 19:52, Tom Lane wrote:
Marek Florianczyk [EMAIL PROTECTED] writes:
Maybe I reconnect too often, but how to explain that regular queries like
select * from table1 are much faster than \d's? (my post to Jeff)
[ further experimentation... ] Ah-hah, I see
I'm trying to do a query: select ta.x from ta join tb using (y) where z
= 'somevalue' FOR UPDATE
Why can't this be executed without error in 7.3.x? It worked just fine
in 7.2.x. Thanks for the input
Kris Kiger [EMAIL PROTECTED] writes:
I'm trying to do a query: select ta.x from ta join tb using (y) where z
= 'somevalue' FOR UPDATE
Why can't this be executed without error in 7.3.x? It worked just fine
in 7.2.x. Thanks for the input
Try saying FOR UPDATE OF ta, tb.
I agree there's
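Spelled out against the query from the post (the database name is invented):

```shell
# Naming both base tables in FOR UPDATE OF avoids the 7.3 error:
psql -d mydb -c "SELECT ta.x FROM ta JOIN tb USING (y) \
                 WHERE z = 'somevalue' FOR UPDATE OF ta, tb;"
```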
On 5 Nov 2003, Marek Florianczyk wrote:
In a message of Wed, 05-11-2003, at 19:52, Tom Lane wrote:
Marek Florianczyk [EMAIL PROTECTED] writes:
Maybe I reconnect too often, but how to explain that regular queries like
select * from table1 are much faster than \d's? (my post to Jeff)
kbd wrote:
I have many years experience with several database products: Foxbase,
FoxPro, MS SQL, Access, Postgresql, AS400 DB2. Compared to any of the
above, Access is a toy. Case in point: in my office we have a product
which is multi-user based upon Access. When the users, there are
only
What happened to http://www.postgresql.org ?
When I try to connect to this URL, the download stops.
The only available version of the Postgres website that I can find is
http://www.postgresql.com, which is a commercial site.
Thanks.
On Wed, 5 Nov 2003, Juan Miguel wrote:
Therefore, do you know a better open-source DBMS than Access that is
easy to install and integrate with your applications?
What is your target environment?
We sell a commercial program (http://www.canit.ca) that uses PostgreSQL
internally. For our Red
Thanks Jonathan and Kaolin fire
Technically it is clear to me how it works. But on a Sun E1 it took us
hours to exp/imp 20-30 GB of data. And I do not think that we were able
to exp/imp 1 TB.
If somebody is telling me now, I have to do this several times a year
(how can I sell this to a
works fine for me from here, and they are all on the same set of servers,
so it isn't network related ... browser?
On Wed, 5 Nov 2003, Juan Miguel wrote:
What happened to http://www.postgresql.org ?
When I try to connect to this URL, the download stops.
The only available version of
Juan Miguel wrote:
kbd wrote:
I have many years experience with several database products: Foxbase,
FoxPro, MS SQL, Access, Postgresql, AS400 DB2. Compared to any of the
above, Access is a toy. Case in point: in my office we have a product
which is multi-user based upon Access. When the users,
Hi admin,
I've been receiving a lot of emails about PostgreSQL
administration, and I
don't want to receive them anymore; please remove my account from your
database records.
praveen
On Thu, Nov 06, 2003 at 00:27:03 +0100,
Oli Sennhauser [EMAIL PROTECTED] wrote:
If somebody is telling me now that I have to do this several times a year
(how can I sell this to a customer???)... It is not a problem handling a
Mickey Mouse database. A dump/load of 100 GB I would guess takes me
Hi,
I'm moving a PostgreSQL database onto a co-located server - a Linux
Virtual Server.
Today I got some errors in my logs:
ERROR: _mdfd_getrelnfd: cannot open relation virtualusertable: Cannot
allocate memory
If I reboot, it goes away, but it has been reappearing.
I'd like to try limiting the