Greg Smith wrote:
Tom Lane wrote:
So has anyone looked at porting MythTV to PG?
Periodically someone hacks together something that works; the last big
effort I'm aware of was in 2006, and then it bit-rots away. I'm sure
we'd get some user uptake on the result--MySQL corruption is one of

Yeb Havinga wrote:
Greg Smith wrote:
Tom Lane wrote:
So has anyone looked at porting MythTV to PG?
Periodically someone hacks together something that works; the last big
effort I'm aware of was in 2006, and then it bit-rots away. I'm sure
we'd get some user uptake on the result--MySQL

Yeb Havinga wrote:
Greg Smith wrote:
MySQL corruption is one of the top ten causes of a MythTV system
crashing.
It would be the same with PG, unless the PG cluster configuration
shipped with MythTV came with a properly configured WAL - I had
corrupted tables (and a personal wiki entry (the

reeds...@rice.edu (Ross J. Reedstrom) writes:
http://www.mythtv.org/wiki/PostgreSQL_Support
That's a pretty hostile presentation...
The page has had two states:
a) In 2008, someone wrote up...
After some bad experiences with MySQL (data loss after a commercial
power failure, very bad

t...@sss.pgh.pa.us (Tom Lane) writes:
Ross J. Reedstrom reeds...@rice.edu writes:
On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:
(I added the "and trust" as an afterthought, because I do have one very
important, 100%-uptime-required MySQL database running. It's my
MythTV

On 03/22/2010 03:21 PM, Tom Lane wrote:
The fundamental reason why you're getting a bad plan choice is the
factor-of-100 estimation error here. I'm not sure you can do a whole
lot about that without rethinking the query --- in particular I would
suggest trying to get rid of the non-constant
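Tom's point about non-constant expressions can be sketched with a purely
hypothetical query (the table, column, and subselect here are invented for
illustration; the original query is not shown in this preview):

```sql
-- When the comparison value is computed at run time, the planner
-- cannot use column statistics and falls back on a default guess,
-- which is where factor-of-100 estimation errors come from:
EXPLAIN SELECT * FROM recordings r
WHERE r.starttime > (SELECT retention_limit FROM settings);

-- Folding the non-constant part into a literal lets the planner
-- estimate selectivity from the column's histogram directly:
EXPLAIN SELECT * FROM recordings r
WHERE r.starttime > '2010-03-01 00:00:00';
```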

On Wed, Mar 17, 2010 at 9:01 PM, Eger, Patrick pe...@automotive.com wrote:
I'm running 8.4.2 and have noticed a similar heavy preference for
sequential scans and hash joins over index scans and nested loops. Our
database can basically fit in cache 100%, so this may not be
applicable to your

PostgreSQL 8.4.3
Linux Redhat 5.0
Question: How much memory do I really need?
From my understanding there are two primary strategies for setting up
PostgreSQL in relation to memory:
1) Rely on Linux to cache the files. In this approach you set the
shared_buffers to a
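The preview cuts off here, but strategy 1 is the "small shared_buffers,
big OS cache" approach. As a rough postgresql.conf sketch (the values are
illustrative assumptions, not recommendations):

```ini
# Strategy 1: keep shared_buffers modest and let the Linux page cache
# do most of the caching; effective_cache_size does not allocate
# anything, it only tells the planner how much cache to assume.
shared_buffers = 512MB           # small relative to total RAM
effective_cache_size = 6GB       # approximate OS cache available
```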

Ok, the wording in the documentation is a bit unclear as to whether it is
the cost for an entire *page* of tuples or for individual tuples. So
something like the following might give better results for a fully-cached
DB?
seq_page_cost = 1.0
random_page_cost = 1.1 # even memory has random access costs,
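For what it's worth, in PostgreSQL's cost model seq_page_cost and
random_page_cost are charged per *page* fetched; per-tuple work is charged
through separate parameters. A fully-cached setup along the lines Patrick
suggests might look like this (illustrative values; the cpu_* lines show
the 8.4 defaults):

```ini
# Page-fetch costs: charged once per 8kB page read.
seq_page_cost = 1.0
random_page_cost = 1.1     # nearly equal when everything is in RAM
# Per-tuple CPU costs are separate parameters:
cpu_tuple_cost = 0.01
cpu_index_tuple_cost = 0.005
```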

On Wed, Mar 24, 2010 at 6:49 PM, Campbell, Lance la...@illinois.edu wrote:
PostgreSQL 8.4.3
Linux Redhat 5.0
Question: How much memory do I really need?
The answer is as much as needed to hold your entire database in
memory and a few gig left over for sorts and backends to play in.
From my
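To put a number on "the entire database in memory", the server can be
asked directly; these are standard catalog functions available in 8.4:

```sql
-- Size of the current database on disk:
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Or per database, for every database in the cluster:
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```

Add headroom on top of that figure for sorts, maintenance operations, and
per-backend work_mem.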

On Wed, Mar 24, 2010 at 8:59 PM, Eger, Patrick pe...@automotive.com wrote:
Ok, the wording in the documentation is a bit unclear as to whether it is
the cost for an entire *page* of tuples or for individual tuples. So
something like the following might give better results for a fully-cached
DB?