On Fri, Feb 03, 2006 at 08:05:48AM -0500, Mark Woodward wrote:
> Like I said, in this thread of posts, yes, there are ways of doing this,
> and I've been doing it for years. It is just one of the rough edges that I
> think could be smoother.
>
> (in PHP)
> pg_connect("dbname=geo host=dbserver");
>
Jeremy,
> The immediate use I thought of was being able to have what appeared to
> be multiple databases on the same server with different locale settings,
> which cannot be changed post-initdb.
Again, this is patching the symptoms instead of going after the cause. The
real issue you're trying
On Fri, 3 Feb 2006, Josh Berkus wrote:
> The feature you proposed is a way to make your idiosyncratic setup easier
> to manage, but doesn't apply to anyone else's problems on this list, so
> you're going to have a hard time drumming up enthusiasm.
I am somewhat reluctant to interject into this di
Mark, all:
> > So your databases would listen on 5433, 5434, etc and the proxy would
> > listen on 5432 and route everything properly. If a particular cluster
> > is not up, the proxy could just error out the connection.
> >
> > Hmm, that'd be fun to write if I ever find the time...
>
> It is sim
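The proxy idea above can be approximated with a plain TCP relay. This is a sketch assuming socat is available; it forwards everything to a single backend cluster, because routing by database name would require parsing the PostgreSQL startup packet, which socat cannot do:

```shell
# Listen on the canonical port and relay every connection to the 5433
# cluster (a sketch; real per-database routing would need startup-packet
# inspection).
socat TCP-LISTEN:5432,fork,reuseaddr TCP:localhost:5433
```

With this in place, clients keep connecting to 5432 and never learn which physical cluster answered.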
On Feb 3, 2006, at 12:43, Rick Gigger wrote:
> If he had multiple IPs, couldn't he just make them all listen only
> on one specific IP (instead of '*') and just use the default port?
Yeah, but the main idea here is that you could use ipfw to forward
connections *to other hosts* if you wanted to.
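The ipfw suggestion might look like this on a FreeBSD box. The rule number, address, and ports are invented for illustration, and forwarding to another host additionally assumes the target's replies route back through this machine:

```shell
# Hand connections arriving on local port 5433 to a PostgreSQL cluster
# on another host (FreeBSD ipfw; needs root; numbers are illustrative).
ipfw add 1000 fwd 192.168.0.20,5432 tcp from any to me 5433
```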
[EMAIL PROTECTED] ("Mark Woodward") writes:
> The "port" aspect is troubling; it isn't really self-documenting.
> The application isn't psql; the applications are custom
> code written in PHP and C/C++.
Nonsense. See /etc/services.
> Using the "/etc/hosts" file or DNS to maintain host locations f
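The /etc/services point can be made concrete. The cluster names below (pg-geo, pg-billing) are invented, and to keep the sketch self-contained it works on a scratch copy rather than the real /etc/services:

```shell
# /etc/services-style entries giving each cluster's port a symbolic name.
cat > services.demo <<'EOF'
postgresql      5432/tcp        # default cluster
pg-geo          5433/tcp        # street/geo cluster
pg-billing      5434/tcp        # billing cluster
EOF

# Resolve a service name to its port number, getservbyname-style.
port_of() { awk -v s="$1" '$1 == s { split($2, a, "/"); print a[1] }' services.demo; }

port_of pg-geo        # prints 5433
```

Applications then look up "pg-geo" instead of hard-coding 5433, which makes the port a system-administration concern again rather than a code one.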
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> It is similar to a proxy, yes, but that is just part of it. The setup and
> running of these systems should all be managed.
All that requires is some scripts that wrap pg_ctl and bring the right
instances up and down, perhaps with a web interface on t
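Such wrapper scripts can start very small. The cluster directories below are invented, and the function only prints the pg_ctl commands it would run (drop the echo to execute them for real):

```shell
# One entry point to start/stop/query every cluster on the box.
CLUSTERS="/var/pg/geo /var/pg/billing /var/pg/logs"

cluster_ctl() {    # usage: cluster_ctl start|stop|status
    for dir in $CLUSTERS; do
        echo pg_ctl -D "$dir" "$1"    # echo = dry run
    done
}

cluster_ctl status
```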
Mark Woodward wrote:
> Oh come on, "misinformed?" is that really called for?
Claiming that all databases share the same system tables is misinformed,
with no judgement passed.
> The street database is typically generated and QAed in the lab. It is
> then uploaded to the server. It has many milli
On Feb 3, 2006, at 08:05, Mark Woodward wrote:
> Using the "/etc/hosts" file or DNS to maintain host locations is a
> fairly common and well-known practice, but there is no such mechanism
> for "ports." The problem now becomes a code issue, not a system
> administration issue.
What if you ass
On Fri, Feb 03, 2006 at 08:05:48AM -0500, Mark Woodward wrote:
> Using the "/etc/hosts" file or DNS to maintain host locations is a
> fairly common and well-known practice, but there is no such mechanism for
> "ports." The problem now becomes a code issue, not a system administration
> issue.
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> The point is that I have been working with this sort of "use case" for a
> number of years, and being able to represent multiple physical databases
> as one logical db server would make life easier. It was a brainstorm I had
> while I was setting this
Mark Woodward wrote:
...
> Unless you can tell me how to insert live data and indexes into a cluster
> without having to reload the data and recreate the indexes, then I hardly
> think I am "misinformed." The ad hominem attack wasn't necessary.
I see you had a use case for something like pg_diff an
Mark Woodward wrote:
> From an administration perspective, a single point of admin would
> seem like a logical and valuable objective, no?
I don't understand why you are going out of your way to separate your
databases (for misinformed reasons, it appears) and then want to design
a way to centra
Mark,
> Even though they run on the same machine, run the same version of the
> software, and are used by the same applications, they have NO
> interoperability. For now, let's just accept that they need to be on
> separate physical clusters because some need to be able to be started and
> stopped whi
On Thu, Feb 02, 2006 at 02:05:03PM -0500, Mark Woodward wrote:
> My issue is this, (and this is NOT a slam on PostgreSQL), I have a number
> of physical databases on one machine on ports 5432, 5433, 5434. All
> running the same version and in fact, installation of PostgreSQL.
One way of achieving
On Thu, 2 Feb 2006, Mark Woodward wrote:
> Now, the answer, obviously, is to create multiple PostgreSQL database
> clusters and run a postmaster for each logical group of databases, right?
> That really is a fine idea, but
>
> Say, in psql, I do this: "\c newdb". It will only find the database t
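Concretely, \c can only switch among databases inside the postmaster psql is attached to (in the psql of that era it took just a database and user name, not a host or port), so crossing clusters means reconnecting. The ports are the 5432/5433/5434 from the thread; the database names are examples:

```shell
# Each cluster is a separate postmaster, so reaching a database in
# another cluster means a fresh connection naming the right port.
psql -h localhost -p 5433 geo        # database in the second cluster
psql -h localhost -p 5434 billing    # database in the third cluster
```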
Mark Woodward wrote:
> Seriously? No use at all? You don't see any purpose in controlling and
> managing multiple postgresql postmaster processes from one central point?
I'd rather spend effort in fixing the problems that arise from big
databases; for example Hannu's patch for concurrent vacuum a
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> One of the problems with the current PostgreSQL design is that all the
> databases operated by one postmaster server process are interlinked at
> some core level. They all share the same system tables. If one database
> becomes corrupt because of disk o
On Thu, 2006-02-02 at 10:23 -0500, Mark Woodward wrote:
> If one db is REALLY REALLY huge and doesn't change, and a few
> others are small and change often, pg_dumpall will spend most of its time
> dumping the unchanging data.
>
My usual backup strategy does pg_dumpall -g to get the (tiny) global
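That strategy splits cleanly into one tiny globals dump plus per-database dumps on their own schedules. The ports and database names below are examples, not anyone's real layout:

```shell
# Globals (roles, etc.) once -- pg_dumpall -g touches no table data.
pg_dumpall -p 5432 -g > globals.sql
# Then each database at a frequency matching how fast it changes.
pg_dump -p 5432 streets > streets.sql     # huge, rarely changes: weekly
pg_dump -p 5433 orders  > orders.sql      # small, busy: nightly
```

This way the large, static database no longer dominates every backup run.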