Hey folks,
I have done some googling and found a few things on the matter. But
am looking for some suggestions from the experts out there.
Got any good pointers for reading material to help me get up to speed
on PostgreSQL clustering? What options are available? What are the
issues?
By the way: cross-posting on these lists is generally frowned upon. It
causes problems for people who reply to you but aren't on all of the
lists you sent to. If you're not sure what list something should go on,
just send it to -general rather than cc'ing multiple ones.
Duly noted!
Continuent works (AFAIK) like pgpool clustering: it sends the same
statements to both/all servers in the cluster, but it has no insight into
the servers beyond this. So if server A becomes out of sync with server B
via a direct connection, Continuent is oblivious.
So can the same be said for
Depending on your exact needs, which the terminology you're using only
allows one to guess at, you might enjoy this reading:
http://wiki.postgresql.org/wiki/Image:Moskva_DB_Tools.v3.pdf
Thanks. To be honest I don't even know myself what my needs are yet.
I've only been on the job here for a
Hmmm. Anyone out there have the Continuent solution working with PostgreSQL?
If so, what release? We're at 8.3 right now.
thanks,
-Alan
p.s. I'm continuing the cross-post because that is the way I started
this thread. Future threads will not be cross-posted.
On Thu, May 28, 2009 at 9:34 AM,
Look into the headers of any email message from the list and look for
the List-Unsubscribe line
On Thu, May 28, 2009 at 2:26 PM, Marcelo Giovane <nrhce...@teleon.com.br> wrote:
Please, remove me from the list!
Marcelo Giovane
--
“Mother Nature doesn’t do bailouts.”
- Glenn Prickett
The easiest way to get started is to use your package management
system to install PG for you. If you are on a Fedora/Centos or
similar based system then this is easily accomplished with the 'yum'
command. The minimal packages to add would likely be:
postgresql-server
postgresql
The latter
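For reference, the full sequence on a yum-based system is usually just a couple of commands. Here's a sketch that only prints the plan rather than running it (installation needs root, and the init-script behaviour varies a bit by distro/version; on older RHEL/CentOS the first start runs initdb for you):

```shell
# Dry run: print the typical install/start steps for a yum-based distro.
# (echoed rather than executed; run them as root for real)
plan="yum -y install postgresql-server postgresql
service postgresql start
chkconfig postgresql on"
echo "$plan"
```

The `chkconfig` line just makes the service come back after a reboot.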
There was an interesting presentation at PG Con from a guy at Sun who
did a series of load tests on 8.3 vs 8.4
http://www.pgcon.org/2009/schedule/events/124.en.html
There is a link to the video from that page so you can watch it. But
he found a strange corner case where 8.4 performed way worse.
On Sat, Jun 20, 2009 at 12:45 AM, Uwe C. Schroeder <u...@oss4u.com> wrote:
What I don't get is this: you said your CPU died. To me that's the
processor, though some interpret it as the main board.
So why don't you grab the hard disk from that server and plug it into the
new one?
x 2
Should
What OS are you running?
What exactly is the window saying? If you could take a snapshot of it
and upload it to a photo site and send the URL to the list, that might
be helpful.
Most OSes allow you to snapshot the active window with CTRL-PRT-SCRN.
Then you can use the paste option in your favorite
No other takers on this one?
I'm wondering what exactly direct-attached storage entails.
At PG Con I heard a lot about using only direct-attached storage, and not a SAN.
Are there numbers to back this up?
Does fibre-channel count as direct-attached storage? I'm thinking it would.
What
OK, my DB Admin is on vacation, and 15 minutes of googling didn't get
me the answer :-)
Although in that 15 minutes I could have done all 109 tables manually :-)
I know this command for a single table, and checked the manual but
don't see anything about wildcards
ALTER TABLE tablename OWNER to
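There's no wildcard form of ALTER TABLE, but the usual trick is to generate one statement per table from the catalogs. A sketch, with a made-up table list and owner name (`new_owner`) standing in; on the real database the list would come from pg_tables, and the generated statements would be piped back into psql:

```shell
# Generate one "ALTER TABLE ... OWNER TO ..." statement per table.
# The table list here is a stand-in; on a live system it would come from:
#   psql -Atc "SELECT tablename FROM pg_tables WHERE schemaname='public'" yourdb
tables="users orders invoices"
sql=""
for t in $tables; do
  sql="$sql$(printf 'ALTER TABLE %s OWNER TO new_owner;' "$t")
"
done
printf '%s' "$sql"
# feed back into the database with:  printf '%s' "$sql" | psql yourdb
```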
Why not populate the registry properly?
It is not that difficult to do.
--
“Don't eat anything you've ever seen advertised on TV”
- Michael Pollan, author of In Defense of Food
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your
Hey folks,
I'm installing OTRS/ITSM (and yes, sending the same question to their
list) and it gives me this warning. I cannot find an equivalent
config parameter in Postgres.
Make sure your database accepts packages over 5 MB in size. A MySQL
database for example accepts packages up to 1 MB by
Hey folks,
I realise this is probably more a matter for a kickstart list, but
then again, I have to think that someone else on this list has done
this and can help. So I'll ask here and there.
I'm dragging our company kicking and screaming into the realm of
Kickstart/Anaconda, and trying to get
And within the directory for that repo, I've created a comps.xml file
And of course re-run createrepo ...
Hey folks,
I've got Munin installed on all my systems, so was able to get some
interesting data around the big crash we had last night. We'd
thought it was simply a matter of our DB connections maxing out, but
it looks a bit more complex than that. A good 2 or 3 hours before the
connections
pg_locks? Somebody taking exclusive lock on a widely-used table might
explain that.
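A starting point might be something like the following (a sketch only; the join uses the 8.3-era catalog columns, where pg_stat_activity still has procpid/current_query). Shown here just building the query text, which you'd run through psql on the live system:

```shell
# Build a query listing sessions stuck waiting on a lock, plus what each
# waiting session is trying to run (8.3-era catalog column names assumed).
query="SELECT l.pid, l.relation::regclass AS relation, l.mode, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;"
echo "$query"
# on the live system:  psql -d yourdb -c "$query"
```

Rows coming back from that mean someone is blocked; the relation and mode columns tell you on what and how badly.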
Thanks, I'll check with the SW designers and DB admin.
pg_locks? Somebody taking exclusive lock on a widely-used table might
explain that.
OK, in theory we could do the following, no?
Use our PITR logs to restore a tertiary system to the point when we
were having the problem (we have a pretty wide 2 or 3 hour window to
hit), then query the
OK, looks like the time window is exactly when we run vacuum. That
has been running for a couple of months now with no problem, but the
last 2 weekends we've been doing massive data loads, which could be
complicating things.
Is vacuum a good candidate for what could be locking up the tables?
Here is
How can I take some measurements to understand what bottlenecks will
appear?
For long-term / ongoing monitoring I'm very happy so far with a package
called Munin. Google it and join their mailing list for help setting it up.
But it takes snapshots at 5-minute intervals, and this is not configurable.
For
Is there any way to limit a query to a certain amount of RAM and / or
certain runtime?
i.e. automatically kill it if it exceeds either boundary?
We've finally narrowed down our system crashes and have a smoking gun,
but no way to fix it in the immediate term. This sort of limit would
really
Generally speaking work_mem limits ram used. What are your
non-default postgresql.conf settings?
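PostgreSQL doesn't offer a hard per-query RAM cap beyond work_mem (and note work_mem is per sort/hash operation, not per query), but the runtime half of the question can be handled with statement_timeout, which is available in 8.3. A config sketch (the five-minute value is just an illustration):

```
# postgresql.conf -- abort any statement that runs longer than 5 minutes
statement_timeout = 300000    # milliseconds; 0 (the default) disables it
```

It can also be set per session (SET statement_timeout = '5min') or per role via ALTER ROLE ... SET, so the limit can be confined to the offending application user rather than the whole cluster.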
This cannot be right, because we had queries taking 4G, and I see our
setting is this:
work_mem = 2MB # min 64kB
I'll have to find a copy of the default file to figure out
I'm gonna make a SWAG that you've got 4 to 4.5G shared buffers, and if
you subtract that from DRS you'll find it's using a few hundred to
several hundred megs. Still a lot, but not in the 4G range you're
expecting. What does top say about this?
I've just added this to my cronjob with top -b -n
On Thu, Sep 17, 2009 at 3:35 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
True, but with a work_mem of 2M, I can't imagine having enough sorting
going on to need 4G of RAM. (2,000 sorts? That's a lot.) I'm betting
the OP was looking at top and misunderstanding what the numbers mean,
which
An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
your common queries would help a lot here.
Yes, we are just about to start getting into that sort of thing.
Hey folks,
We have a tool for monitoring our website's performance, and would
like to deploy it in a few places around the world. Mainly India and
Asia Pac at this point.
I know that DBs do not do well in a virtualized environment, but this
one is a fairly light load (as compared to our
OK, I did find this
http://www.postgresql.org/support/professional_hosting_asia
but does anyone have experience with any of them?
Hey folks,
Sorry for the OT - we are most of the way through a Db2-to-PG
migration that is some 18 months in the making so far. We've got
maybe another 3 to 6 months to go before we are complete, and in the
meantime have identified the need for connection pooling in Db2, à la
the excellent
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/conn/c0006170.htm
Yeah, that is Db2 Enterprise, and we have Workgroup Server version.
And the cost of upgrading to that was part of why we decided to move
to PG.
So I should have been more specific - a FREE
On Fri, Jan 15, 2010 at 4:45 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
What language are you running this in again? There might be other
options that are more language oriented (java for instance) than db
oriented. Or maybe some intermediate layer for pooling that's db
agnostic.
Oh,
On Fri, Jan 15, 2010 at 5:36 PM, Joshua D. Drake <j...@commandprompt.com> wrote:
Mod_perl?
That's on our front-end servers, as well as just regular Perl on the back end.
On Thu, Mar 25, 2010 at 4:04 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
There is very little reason to do this. Both Postgres and the
operating system cache frequently used pages in memory already, and
they are pretty smart about it -- this leaves more memory for
temporary demands like
On Thu, Mar 25, 2010 at 4:15 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
These questions always get the first question back, what are you
trying to accomplish? Different objectives will have different
answers.
We have a real-time application that processes data as it comes in.
Doing some
On Fri, Mar 26, 2010 at 10:14 AM, Ozz Nixon <ozzni...@gmail.com> wrote:
I have to ask the obvious question... as we develop solutions which must
process 100,000 queries a second. In those cases, we use a combination of
hash tables and linked lists. There are times when SQL is not the right choice, it
Have you considered using one of these:
http://www.acard.com/english/fb01-product.jsp?idno_no=270&prod_no=ANS-9010&type1_title=Solid%20State%20Drive&type1_idno=13
We did some research which suggested that performance may not be so
great with them because the PG engine is not optimized to utilize