On 18 Jul 2003 at 16:58, Sean Mullen wrote:
Other projects I've seen use their app for authentication/security, bypassing the extremely 'useful' security system built into postgresql and building their own security/authentication system instead. I'm wondering if the reason for this is:
A)
On 18 Jul 2003 at 16:46, Ursula Lee wrote:
Hi all,
Any idea on how to set the background color of a JPanel in a Java applet? A few questions here:
And how is this related to postgresql?
Bye
Shridhar
--
All language designers are arrogant. Goes with the territory... (by Larry Wall)
Can two postgresql processes (running in different machines) access and
work with the same database files in a shared storage scenario? Would
there be any problem?
Thanks in advance :)
Hi Daniel,
3. The app server reads the database and makes changes. Problem: the changes the client makes are not committed - the server can't see the changes, or in the worse case the server waits on the client connection.
(transaction isolation and table / record locking)
The app server CAN see
On 18 Jul 2003 at 11:35, Jordi Sánchez López wrote:
Can two postgresql processes (running in different machines) access and
work with the same database files in a shared storage scenario? Would
there be any problem?
No. Don't attempt it. It will cause data corruption.
Bye
Shridhar
--
Does Postgresql 7.2.1 support double byte characters? If yes, then how do I define fields that will contain double byte characters?
Thanks and regards,
Kallol.
How to go about scheduled backup in Postgresql? What are the exact steps to be followed? Does anyone know this?
Thanks and Regards,
Kallol.
On 18 Jul 2003 at 15:58, Kallol Nandi wrote:
How to go about scheduled backup in Postgresql.
You need to use cron and pg_dump. The man pages for both of them will give you what you want.
Bye
Shridhar
--
Cohen's Law:There is no bottom to worse.
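As a minimal sketch of that cron + pg_dump combination (the database name, path, and schedule below are placeholders, not from the thread):

```shell
# Hypothetical crontab entry (edit with `crontab -e` as the postgres user):
# dump the database "mydb" every night at 02:30 into a date-stamped file.
# Note: % is special in crontab and must be escaped as \%.
30 2 * * * pg_dump mydb > /var/backups/mydb-$(date +\%Y\%m\%d).sql
```

See `man 5 crontab` and `man pg_dump` for the exact fields and options.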
In my experience, pg_dump in ver. 7.3 requires interactively entering a password for *custom users*.
Version 7.1 did not have this kind of problem.
Still, in 7.3 you can make it work with cron, but as far as I know only with a script, which might look like this:
<?php
exec("pg_dump -u [other
Hi Viorel,
what are the exact circumstances for this? I'm not experiencing
that behavior. Maybe it depends on your settings in pg_hba.conf?
And are you using -X set-session-authorization and so on?
Regards
Tino Wildenhain
Viorel Dragomir wrote:
As i experienced with pg_dump it looks like in
Honestly, I don't know why this happens, and after a couple of emails received two days ago I stopped searching for the reason.
In any case, I need it to dump custom users' database data from a PHP script.
And now it works, but if I give the same command that runs through exec, pg_dump forces me to enter the
Greetings,
I've been fighting a very strange behaviour found in PostgreSQL 7.1.2 on RedHat 6.2. I have a very simple table called site_site, and I lose its indexes every time I run a vacuum. Do you know why this happens? Is there a way to get around or fix this kind of problem? I put a
Tino,
Thanks for the information.
I already have a database with SQL_ASCII encoding.
Is there any way that I can change the encoding to UNICODE?
Regards,
Kallol.
-Original Message-
From: Tino Wildenhain [mailto:[EMAIL PROTECTED]
Sent: Friday, July 18, 2003 6:49 PM
To: Kallol Nandi
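There is no in-place way to switch an existing database's encoding; one common route, sketched here with placeholder database names, is to dump and reload into a freshly created UNICODE database (the dumped data must actually be valid in the target encoding, which should be checked first):

```shell
# Dump the existing SQL_ASCII database ("mydb" is a placeholder name)
pg_dump mydb > mydb.sql
# Create a new database with UNICODE encoding
createdb -E UNICODE mydb_unicode
# Restore the dump into the new database
psql mydb_unicode < mydb.sql
```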
On Fri, Jul 18, 2003 at 11:35:19AM +0200, Jordi Sánchez López wrote:
Can two postgresql processes (running in different machines) access and
work with the same database files in a shared storage scenario? Would
No.
there be any problem?
Yes. Probably massive database corruption.
A
On Fri, Jul 18, 2003 at 01:39:19PM +0300, Viorel Dragomir wrote:
In my experience, pg_dump in ver. 7.3 requires interactively entering a password for *custom users*.
I don't know what a custom user is, but if you put the password in
~/.pgpass, authentication happens automatically.
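For reference, ~/.pgpass is a plain text config fragment with one connection per line (the values below are placeholders):

```shell
# ~/.pgpass format: hostname:port:database:username:password
localhost:5432:mydb:myuser:secret
# The file must not be world-readable, e.g.: chmod 600 ~/.pgpass
```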
On Fri, Jul 18, 2003 at 08:26:59AM -0300, Vilson farias wrote:
Greetings,
I've been fighting a very strange behaviour found in PostgreSQL 7.1.2 on RedHat 6.2. I have a very simple table called site_site, and I lose its indexes every time I run a vacuum. Do you know why this
- Original Message -
From: Andrew Sullivan [EMAIL PROTECTED]
To: Pgsql-General [EMAIL PROTECTED]
Sent: Friday, July 18, 2003 2:45 PM
Subject: Re: [GENERAL] Scheduled back up
On Fri, Jul 18, 2003 at 01:39:19PM +0300, Viorel Dragomir wrote:
As i experienced with pg_dump it looks like
Kirill Ponazdyr [EMAIL PROTECTED] writes:
It is for a advanced syslog server product we are currently developing.
The very basic idea is to feed all syslog messages into a DB and allow
easy monitoring and even correlation, we use Postgres as our DB Backend,
in big environments the machine
Viorel Dragomir [EMAIL PROTECTED] writes:
I don't know what a custom user is, but if you put the password in
~/.pgpass, authentication happens automatically. That's a new
feature in 7.3.x.
But I can't do that; the users who are granted access to the database don't have any user id on that
- Original Message -
From: Tom Lane [EMAIL PROTECTED]
To: Viorel Dragomir [EMAIL PROTECTED]
Cc: Andrew Sullivan [EMAIL PROTECTED]; Pgsql-General
[EMAIL PROTECTED]
Sent: Friday, July 18, 2003 4:41 PM
Subject: Re: [GENERAL] Scheduled back up
Viorel Dragomir [EMAIL PROTECTED] writes:
I
Has anyone tried PostgreSQL in a MOSIX-like cluster?
Jon
On Fri, 18 Jul 2003, Jordi Sánchez López wrote:
Can two postgresql processes (running in different machines) access and
work with the same database files in a shared storage scenario? Would
there be any problem?
Thanks
On Fri, 2003-07-18 at 15:49, Viorel Dragomir wrote:
No, .pgpass is sought in the home directory of the user running pg_dump
(or any other client program). It's not a server-side file.
In my case the user is apache.
I don't know for sure, but apache doesn't have a home directory.
If you
- Original Message -
From: Csaba Nagy [EMAIL PROTECTED]
To: Viorel Dragomir [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED]; Andrew Sullivan
[EMAIL PROTECTED]; Pgsql-General [EMAIL PROTECTED]
Sent: Friday, July 18, 2003 5:03 PM
Subject: Re: [GENERAL] Scheduled back up
On Fri,
On Fri, Jul 18, 2003 at 07:02:08AM -0700, Jonathan Bartlett wrote:
Has anyone tried PostgreSQL in a MOSIX-like cluster?
It won't work, apparently: MOSIX doesn't (last I checked) have a
mechanism for using SYSV-style shared memory across the cluster, and
PostgreSQL needs it.
A
--
Andrew
On Fri, Jul 18, 2003 at 02:56:53PM +0300, Viorel Dragomir wrote:
I'm sorry about *custom users*.
The project is a kind of cpanel.
A user can create and grant access to his databases.
And thanks to pg_dump he can export and import databases.
So any user who has a db might want to export his data
Ahhh. Glad you got it working. I can't wait for subtransactions to come
along.
Feel free to ask questions, it's tough at first getting used to the way
postgresql does things, but rewarding once you start to get it.
On Fri, 18 Jul 2003, Annabelle Desbois wrote:
Hi,
In fact I forgot the
Sean Chittenden wrote:
I have received a question via the Advocacy site and I am not
knowledgeable enough to answer. Can you help?
The question is: can PostgreSQL handle between 10'000 and 40'000 simultaneous connections? The person asking the question has to choose between Oracle and
There are 1000's of references to postgresql and connection pooling.
http://www.google.com/search?hl=en&ie=UTF-8&oe=UTF-8&q=pooling+postgresql
Maybe something there will work.
Those are all application level connection pooling links. I'm
thinking about something that's done on the database
scott.marlowe [EMAIL PROTECTED] writes:
But I'm sure that with a few tweaks to the code here and there it's
doable, just don't expect it to work out of the box.
I think you'd be sticking your neck out to assume that 10k concurrent
connections would perform well, even after tweaking. I'd worry
Dmitry Tkach [EMAIL PROTECTED] writes:
3) I restart the server manually, and try again
analyze mytable;
... it *works*
4) I let it run for a while, then try again:
analyze mytable;
... it crashes.
Proves nothing, since ANALYZE only touches a random sample of the rows.
If you get that
Hi,
After having moved all of the data to a new database initialized with es_MX as the locale, the postmaster is dying and restarting every time a program tries to read information from these tables:
pg_catalog.pg_class
pg_catalog.pg_namespace
It is important to note that if I do a simple select *
Tom Lane wrote:
Proves nothing, since ANALYZE only touches a random sample of the rows.
Ok, I understand... Thanks.
If you get that behavior with VACUUM, or a full-table SELECT (say,
SELECT count(*) FROM foo), then it'd be interesting.
I never got it with select - only with vacuum and/or
Try EXPLAIN SELECT ...; if it crashes, you most likely have to recompile postgres with that strxfrm fix, and it has nothing to do with your data.
Basically, in my case on SunOS 5.8 (not sure which Solaris version that is, probably 8),
PG was crashing during cost calculation, long before any data
Dmitry Tkach [EMAIL PROTECTED] writes:
Well ... *today* there seem to be files between and 00EC
Is that range supposed to stay the same or does it vary?
It will vary, but not quickly --- each file represents 1 million
transactions.
If the problem is erratic with VACUUM or SELECT COUNT(*),
Tom Lane wrote:
Dmitry Tkach [EMAIL PROTECTED] writes:
Well ... *today* there seem to be files between and 00EC
Is that range supposed to stay the same or does it vary?
It will vary, but not quickly --- each file represents 1 million
transactions.
If the problem is erratic with
Dmitry Tkach [EMAIL PROTECTED] writes:
Any ideas?
Time to get out memtest86 and badblocks.
regards, tom lane
--- Ian Harding [EMAIL PROTECTED] wrote:
There is a switch in the ODBC configuration under
OPTIONS | DATASOURCE | PAGE 2 to use -1 as true. I
think that will make it work, although I have not
tried it.
I unchecked Bools as Char and checked True as -1
in psqlodbc 7.03.01.00. I've transferred
On Friday 18 July 2003 01:28 pm, Sean Chittenden wrote:
There are 1000's of references to postgresql and connection pooling.
http://www.google.com/search?hl=en&ie=UTF-8&oe=UTF-8&q=pooling+postgresql
Maybe something there will work.
Those are all application level connection pooling links.
There are 1000's of references to postgresql and connection pooling.
http://www.google.com/search?hl=en&ie=UTF-8&oe=UTF-8&q=pooling+postgresql
Maybe something there will work.
Those are all application level connection pooling links. I'm
thinking about something that's done on
On Sat, 12 Jul 2003, Daniel Seichter wrote:
Hello,
see news.us.postgresql.org for now, while we deal with some issues locally
... if anyone else wishes to open up a similar mirror, please let us know
and we'll help get it setup ...
all the lists are gatewayed to a
When I started writing Hermes (http://hermesweb.sourceforge.net), I was faced with this problem as well. I wanted to support MySQL (because it is widely supported) while providing a flexible way of supporting some of the more advanced features of PostgreSQL. You might find my project refreshing
On Fri, 18 Jul 2003, Tom Lane wrote:
scott.marlowe [EMAIL PROTECTED] writes:
But I'm sure that with a few tweaks to the code here and there it's
doable, just don't expect it to work out of the box.
I think you'd be sticking your neck out to assume that 10k concurrent
connections would
Sean Chittenden [EMAIL PROTECTED] writes:
Some light weight multi-threaded proxy that
relays active connections to the backend and holds idle connections
more efficiently than PostgreSQL...
What excuse is there for postgres connections being heavyweight to begin with?
The only real resource
But I'm sure that with a few tweaks to the code here and there
it's doable, just don't expect it to work out of the box.
I think you'd be sticking your neck out to assume that 10k
concurrent connections would perform well, even after tweaking.
I'd worry first about whether the OS can
Some light weight multi-threaded proxy that relays active
connections to the backend and holds idle connections more
efficiently than PostgreSQL...
What excuse is there for postgres connections being heavyweight to
begin with? The only real resource they ought to represent is a
single