[EMAIL PROTECTED] writes:
my database is very slow. The machine is a 2 GHz Xeon with
256 MB of DDR RAM. My configuration file is this:
sort_mem = 131072 # min 64, size in KB
#vacuum_mem = 8192 # min 1024, size in KB
Change it back to 8192, or perhaps
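On a 256 MB box, a sort_mem of 131072 means 128 MB per sort, per backend, which can push the machine straight into swap. A conservative sketch using the 7.x-era parameter names from the post (the specific values are illustrative assumptions, not recommendations from the thread):

```
# postgresql.conf -- conservative sketch for a 256 MB machine (7.x-era names)
shared_buffers = 3000         # ~24 MB; a modest slice of RAM
sort_mem = 8192               # 8 MB per sort, per backend -- not 128 MB
vacuum_mem = 8192             # back to the default
effective_cache_size = 15000  # ~120 MB; roughly what the OS will cache anyway
```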
[EMAIL PROTECTED] (Anderson Boechat Lopes) writes:
I'm new here and I'm not sure if this is the right list to
solve my problem.
This should be OK...
Well, I have a very large database, with many tables and many
records. Every day, a great many operations are performed on that DB,
with
[EMAIL PROTECTED] (James Thornton) writes:
Back in 2001, there was a lengthy thread on the PG Hackers list about
PG and journaling file systems
(http://archives.postgresql.org/pgsql-hackers/2001-05/msg00017.php),
but there was no decisive conclusion regarding what FS to use. At the
time the
[EMAIL PROTECTED] (Richard Huxton) writes:
If you could pin data in the cache it would run quicker, but at the
cost of everything else running slower.
Suggested steps:
1. Read the configuration/tuning guide at:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
2. Post a sample
[EMAIL PROTECTED] (Neil Conway) writes:
Christopher Browne wrote:
One of our sysadmins did all the configuring OS stuff part; I don't
recall offhand if there was a need to twiddle something in order to
get it to have great gobs of shared memory.
FWIW, the section on configuring kernel
[EMAIL PROTECTED] (Dan Harris) writes:
Christopher Browne wrote:
We have a couple of these at work; they're nice and fast, although the
process of compiling things, well, makes me feel a little unclean.
Thanks very much for your detailed reply, Christopher. Would you mind
elaborating on the
[EMAIL PROTECTED] (Simon Riggs) writes:
Well, it's fairly straightforward to auto-generate the UNION ALL view, and
important as well, since it needs to be re-specified each time a new
partition is loaded or an old one is cleared down. The main point is that
the constant placed in front of each
[EMAIL PROTECTED] (Matt Clark) writes:
As for vendor support for Opteron, that sure looks like a
trainwreck... If you're going through IBM, then they won't want to
respond to any issues if you're not running a bog-standard RHAS/RHES
release from Red Hat. And that, on Opteron, is preposterous,
[EMAIL PROTECTED] (Pierre-Frédéric Caillaud) writes:
posix_fadvise(2) may be a candidate. Read/write barriers are another one, as
well as syncing a bunch of data in different files with a single call
(so that the OS can determine the best write order). I can also imagine
some interaction with the
[EMAIL PROTECTED] (Dawid Kuroczko) writes:
ALTER TABLE foo ALTER COLUMN bar SET STATISTICS n; .
I wonder what the implications of using this statement are.
I know that by using, say, n=100, ANALYZE will take more time,
pg_statistic will be bigger, and the planner will take longer,
on the other
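A minimal sketch of the statement under discussion, with a hypothetical table and column (the names and the target of 100 are assumptions for illustration):

```sql
-- Raise the statistics target for one skewed column, then re-sample.
-- The larger sample makes ANALYZE slower and pg_statistic bigger,
-- but gives the planner better selectivity estimates.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```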
[EMAIL PROTECTED] (Dave Held) writes:
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 22, 2005 3:48 PM
To: Greg Stark
Cc: Christopher Browne; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] What about utility to calculate planner cost
[EMAIL PROTECTED] (Joshua D. Drake) writes:
So, my question is this: My server currently works great,
performance wise. I need to add fail-over capability, but I'm
afraid that introducing a stressful task such as replication will
hurt my server's performance. Is there any foundation to my fears?
josh@agliodbs.com (Josh Berkus) writes:
Bill,
What about if an out-of-the-ordinary number of rows
were deleted (say 75% of rows in the table, as opposed
to normal 5%) followed by a 'VACUUM ANALYZE'? Could
things get out of whack because of that situation?
Yes. You'd want to run REINDEX
[EMAIL PROTECTED] (Christopher Petrilli) writes:
On 5/2/05, Tim Terlegård [EMAIL PROTECTED] wrote:
Howdy!
I'm converting an application to be using postgresql instead of
oracle. There seems to be only one issue left, batch inserts in
postgresql seem significant slower than in oracle. I
[EMAIL PROTECTED] writes:
How can I know the capacity of a PG database?
How many records can my table have?
I saw in a message that someone has 50,000 records in a table; is that possible?
(My table has 8 string fields (length 32 chars)).
Thanks for your response.
The capacity is much more
[EMAIL PROTECTED] (Mohan, Ross) writes:
for time-series and insane speed, nothing beats kdB, I believe
www.kx.com
... Which is well and fine if you're prepared to require that all of
the staff that interact with data are skilled APL hackers. Skilled
enough that they're all ready to leap into
[EMAIL PROTECTED] (Stuart Bishop) writes:
I'm putting together a road map on how our systems can scale as our
load increases. As part of this, I need to look into setting up some
fast read only mirrors of our database. We should have more than
enough RAM to fit everything into memory. I would
[EMAIL PROTECTED] (John A Meinel) writes:
I saw a review of a relatively inexpensive RAM disk over at
anandtech.com, the Gigabyte i-RAM
http://www.anandtech.com/storage/showdoc.aspx?i=2480
And the review shows that it's not *all* that valuable for many of the
cases they looked at.
Basically,
[EMAIL PROTECTED] (Jeffrey W. Baker) writes:
I haven't tried this product, but the microbenchmarks seem truly
slow. I think you would get a similar benefit by simply sticking a
1GB or 2GB DIMM -- battery-backed, of course -- in your RAID
controller.
Well, the microbenchmarks were pretty
[EMAIL PROTECTED] (Donald Courtney) writes:
I mean well with this comment -
This whole issue of data caching is a troubling issue with PostgreSQL,
in that even if you ran PostgreSQL on a 64-bit address space
with a larger number of CPUs you won't see much of a scale-up
and possibly even a drop.
[EMAIL PROTECTED] (Michael Stone) writes:
On Tue, Aug 23, 2005 at 12:38:04PM -0700, Josh Berkus wrote:
which have a clear and measurable effect on performance and are
fixable without bloating the PG code. Some of these issues (COPY
path, context switching
Does that include increasing the
tobbe [EMAIL PROTECTED] writes:
Hi Chris.
Thanks for the answer.
Sorry that i was a bit unclear.
1) We update around 20.000 posts per night.
No surprise there; I would have been surprised to see 100/nite or
6M/nite...
2) What i meant was that we suspect that the DBMS called PervasiveSQL
[EMAIL PROTECTED] (Steve Poe) writes:
Chris,
Unless I am wrong, you're making the assumption that the amount of time spent
and ROI is known. Maybe those who've been down this path know how to get
that additional 2-4% in 30 minutes or less?
While each person and business' performance gains (or
[EMAIL PROTECTED] (Ron) writes:
At 03:45 PM 8/25/2005, Josh Berkus wrote:
Ask me sometime about my replacement for GNU sort. It uses the
same sorting algorithm, but it's an order of magnitude faster due
to better I/O strategy. Someday, in my infinite spare time, I
hope to demonstrate
[EMAIL PROTECTED] (Markus Benne) writes:
We have a highly active table that has virtually all
entries updated every 5 minutes. Typical size of the
table is 50,000 entries, and entries have grown fat.
We are currently vacuuming hourly, and towards the end
of the hour we are seeing
[EMAIL PROTECTED] (Rigmor Ukuhe) writes:
-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-performance-
[EMAIL PROTECTED] On Behalf Of Markus Benne
Sent: Wednesday, August 31, 2005 12:14 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] When to do a vacuum for highly
[EMAIL PROTECTED] (wisan watcharinporn) writes:
Please help me,
comment on PostgreSQL (8.x.x) performance on AMD and Intel CPUs,
and why should I use a 32-bit or 64-bit CPU? (What is the performance difference?)
Generally speaking, the width of your I/O bus will be more important
to performance than the
[EMAIL PROTECTED] (Stef) writes:
Bruno Wolff III mentioned :
= If you have a proper FSM setting you shouldn't need to do vacuum fulls
= (unless you have an older version of postgres where index bloat might
= be an issue).
What version of postgres was the last version that had
the index
[EMAIL PROTECTED] (Joshua D. Drake) writes:
There is a huge advantage to software raid on all kinds of
levels. If you have the CPU then I suggest it. However you will
never get the performance out of software raid on the high level
(think 1 gig of cache) that you would on a hardware raid
[EMAIL PROTECTED] (Announce) writes:
I KNOW that I am not going to have anywhere near 32,000+ different
genres in my genre table so why use int4? Would that squeeze a few
more milliseconds of performance out of a LARGE song table query
with a genre lookup?
By the way, I see a lot of queries
[EMAIL PROTECTED] (Announce) writes:
I KNOW that I am not going to have anywhere near 32,000+ different
genres in my genre table so why use int4? Would that squeeze a few
more milliseconds of performance out of a LARGE song table query
with a genre lookup?
If the field is immaterial in terms
[EMAIL PROTECTED] (Dan Harris) writes:
On Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:
I thought this might be interesting, not the least due to the
extremely low
price ($150 + the price of regular DIMMs):
Replying before my other post came through.. It looks like their
benchmarks
[EMAIL PROTECTED] (Jeff Frost) writes:
What's the current status of how much faster the Opteron is compared
to the Xeons? I know the Opterons used to be close to 2x faster,
but is that still the case? I understand much work has been done to
reduce the context switching storms on the Xeon
[EMAIL PROTECTED] ([EMAIL PROTECTED]) writes:
I know in MySQL an index will automatically change after copying data. Of
course, an index will change after inserting a row in PostgreSQL, but
what about copying data?
Do you mean, by this, something like...
Are indexes affected by loading data using the COPY
Michael Riess [EMAIL PROTECTED] writes:
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
we are currently running a postgres server (upgraded to 8.1) which
has one large database with approx. 15,000 tables. Unfortunately
performance suffers from that, because the internal tables
(especially
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs automatically should ever take an exclusive
lock on the entire database, which is
[EMAIL PROTECTED] (Alvaro Herrera) writes:
Chris Browne wrote:
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs
[EMAIL PROTECTED] (Mindaugas) writes:
Even a database-wide vacuum does not take locks on more than one
table. The table locks are acquired and released one by one, as
the operation proceeds.
Has that changed recently? I have always seen vacuumdb or SQL
VACUUM (without table
[EMAIL PROTECTED] (Michael Crozier) writes:
On Wednesday 18 January 2006 08:54 am, Chris Browne wrote:
To the contrary, there is a whole section on what functionality to
*ADD* to VACUUM.
Near but not quite off the topic of VACUUM and new features...
I've been thinking about parsing
[EMAIL PROTECTED] (Tom Lane) writes:
Brad Nicholson [EMAIL PROTECTED] writes:
I'm investigating a potential IO issue. We're running 7.4 on AIX 5.1.
During periods of high activity (reads, writes, and vacuums), we are
seeing iostat reporting 100% disk usage. I have a feeling that the
matthew@zeut.net (Matthew T. O'Connor) writes:
I think the default settings should be designed to minimize the
impact autovacuum has on the system while preventing the system from
ever getting wildly bloated (also protect xid wraparound, but that
doesn't have anything to do with the
Nik [EMAIL PROTECTED] writes:
I have a table that has only a few records in it at a time; they
get deleted every few seconds and new records are inserted. The table never
has more than 5-10 records in it.
However, I noticed a deteriorating performance in deletes and inserts
on it. So I
[EMAIL PROTECTED] (Jamal Ghaffour) writes:
Hi all, I'm using the PostgreSQL database to store cookies. These
cookies become invalid after 30 minutes and have to be deleted. I have
defined a procedure that will delete all invalid cookies, but I
don't know how to call it in a loop (for example
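The usual approach is to keep the deletion inside the database and drive the loop from outside, e.g. with cron. A hedged sketch; the table name, column name, and schedule are assumptions for illustration:

```sql
-- Hypothetical purge function: remove cookies older than 30 minutes
CREATE OR REPLACE FUNCTION purge_cookies() RETURNS void AS $$
BEGIN
    DELETE FROM cookies WHERE created < now() - interval '30 minutes';
END;
$$ LANGUAGE plpgsql;

-- Then invoke it periodically from cron, e.g. every 5 minutes:
--   */5 * * * *  psql -d mydb -c "SELECT purge_cookies();"
```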
[EMAIL PROTECTED] (Jim C. Nasby) writes:
On Thu, Mar 23, 2006 at 09:22:34PM -0500, Christopher Browne wrote:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Scott Marlowe)
wrote:
On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:
Has someone been working on the problem of
[EMAIL PROTECTED] (Luke Lonergan) writes:
Christopher,
On 3/23/06 6:22 PM, Christopher Browne [EMAIL PROTECTED] wrote:
Question: Does the Bizgress/MPP use threading for this concurrency?
Or forking?
If it does so via forking, that's more portable, and less dependent on
specific
[EMAIL PROTECTED] (Michael Stone) writes:
On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
A naive read on this is that you might start with one backend process,
which then spawns 16 more. Each of those backends is scanning through
one of those 16 files; they then throw relevant
[EMAIL PROTECTED] (Craig A. James) writes:
Gorshkov wrote:
/flame on
if you were *that* worried about performance, you wouldn't be using
PHP or *any* interpreted language
/flame off
sorry - couldn't resist it :-)
I hope this was just a joke. You should be sure to clarify - there
might
josh@agliodbs.com (Josh Berkus) writes:
Juan,
When I hit
this pgsql on this laptop with a large query I can see the load spike up
really high on both of my virtual processors. Whatever pgsql is doing,
it looks like both CPUs are being used independently.
Nope, sorry, you're being deceived.
[EMAIL PROTECTED] (Steve Poe) writes:
I have a client who is running Postgresql 7.4.x series database
(required to use 7.4.x). They are planning an upgrade to a new server.
They are insistent on Dell.
Then they're being insistent on poor performance.
If you search for dell postgresql
[EMAIL PROTECTED] writes:
Is it possible to start two instances of postgresql with different port and
directory which run simultaneously?
Certainly. We have one HACMP cluster which hosts 14 PostgreSQL
instances across two physical boxes. (If one went down, they'd all
migrate to the
[EMAIL PROTECTED] (Milen Kulev) writes:
I am pretty excited to see whether XFS will clearly outperform ext3 (no
default setups for either are planned!). I am not sure whether it is
worth including JFS in the comparison too ...
I did some benchmarking about 2 years ago, and found that JFS was a
few
[EMAIL PROTECTED] (Denis Lussier) writes:
I have no personal experience with XFS, but, I've seen numerous
internal edb-postgres test results that show that of all file
systems... OCFS 2.0 seems to be quite good for PG update intensive
apps (especially on 64 bit machines).
I have been curious
[EMAIL PROTECTED] (Graham Davis) writes:
Adding DESC to both columns in the SORT BY did not make the query use
the multikey index. So both
SELECT DISTINCT ON (assetid) assetid, ts
FROM asset_positions ORDER BY assetid, ts DESC;
and
SELECT DISTINCT ON (assetid) assetid, ts
FROM
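For a DISTINCT ON query like the one quoted, the multikey index has to match the ORDER BY direction column-for-column. A sketch, assuming the asset_positions table from the post (note that DESC options in index definitions require PostgreSQL 8.3 or later, which may be the crux here):

```sql
-- Sketch: a two-column index matching the DISTINCT ON ordering
CREATE INDEX asset_positions_assetid_ts_idx
    ON asset_positions (assetid, ts DESC);

-- The query then asks for exactly the ordering the index provides:
SELECT DISTINCT ON (assetid) assetid, ts
  FROM asset_positions
 ORDER BY assetid, ts DESC;
```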
[EMAIL PROTECTED] (Graham Davis) writes:
40 seconds is much too slow for this query to run and I'm assuming
that the use of an index will make it much faster (as seen when I
removed the GROUP BY clause). Any tips?
Assumptions are dangerous things.
An aggregate like this has *got to* scan the
[EMAIL PROTECTED] (Tom Lane) writes:
Another thing we've been beat up about in the past is that loading a
pg_dump script doesn't ANALYZE the data afterward...
Do I misrecall, or were there not plans (circa 7.4...) for pg_dump
to have an option to do an ANALYZE at the end?
I seem to remember
[EMAIL PROTECTED] (Craig A. James) writes:
Mark Kirkwood wrote:
The result? I can't use my function in any WHERE clause that
involves any other conditions or joins. Only by itself. PG will
occasionally decide to use my function as a filter instead of doing
the join or the other WHERE
[EMAIL PROTECTED] (Merlin Moncure) writes:
On 10/17/06, Mario Weilguni [EMAIL PROTECTED] wrote:
Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
[EMAIL PROTECTED] (Alexander Staubo) writes:
On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
[EMAIL PROTECTED] writes:
If anyone knows what may cause this problem, or has any other ideas, I
would be grateful.
Submit the command VACUUM ANALYZE VERBOSE locations; on both
servers, and post the output of that. That might help us tell for
sure whether the table is bloated (and needs VACUUM
[EMAIL PROTECTED] (Daniel van Ham Colchete) writes:
You are right Christopher.
Okay. Let's solve this matter.
What PostgreSQL benchmark software should I use???
pgbench is one option.
There's a TPC-W at pgFoundry
(http://pgfoundry.org/projects/tpc-w-php/).
There's the Open Source
[EMAIL PROTECTED] (Steve) writes:
I'm wondering what we can do to make
this better if anything; would it be better to leave the indexes on?
It doesn't seem to be.
Definitely NOT. Generating an index via a bulk sort is a LOT faster
than loading data into an index one tuple at a time.
We saw
[EMAIL PROTECTED] (Paweł Gruszczyński) writes:
To test I use pgbench with the default database schema, run for 25, 50, and 75
users at one time. Every test I run 5 times to take an average.
Unfortunately my results show that ext2 is fastest; ext3 and JFS are
very similar. I can understand that ext2 without
[EMAIL PROTECTED] (Michael Stone) writes:
On Wed, May 16, 2007 at 12:09:26PM -0400, Alvaro Herrera wrote:
Maybe, but we should also mention that CLUSTER is a likely faster
workaround.
Unless, of course, you don't particularly care about the order of
the items in your table; you might end up
[EMAIL PROTECTED] (Tom Lane) writes:
PS: for the record, there is a hard limit at 1GB of query text, owing
to restrictions built into palloc. But I think you'd hit other
memory limits or performance bottlenecks before that one.
It would be much funnier to set a hard limit of 640K of query
[EMAIL PROTECTED] (Dave Cramer) writes:
On 11-Jul-07, at 10:05 AM, Gregory Stark wrote:
Dave Cramer [EMAIL PROTECTED] writes:
Assuming we have 24 73G drives is it better to make one big
metalun and carve
it up and let the SAN manage where everything is, or is it
better to
specify
[EMAIL PROTECTED] (Jeff Davis) writes:
On Thu, 2007-07-26 at 01:44 -0700, angga erwina wrote:
Hi all,
what are the benefits of replication using Slony in
PostgreSQL??
My office is spread across several different places... it has
about hundreds of branch offices in different
places... so anyone can
[EMAIL PROTECTED] (Mark Makarowsky) writes:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a test'
This took 905641 ms. Isn't that
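A single-statement UPDATE of all 4.9M rows rewrites every row (and every index entry) in one transaction. A common mitigation, sketched here under the assumption that the table has an integer primary key named id (the key column is hypothetical; "valley" and "test" are from the post):

```sql
-- Hypothetical sketch: update in batches by key range, vacuuming between,
-- so each batch can reuse the dead-row space left by the previous one
UPDATE valley SET test = 'this is a test' WHERE id BETWEEN 1 AND 500000;
VACUUM valley;
UPDATE valley SET test = 'this is a test' WHERE id BETWEEN 500001 AND 1000000;
VACUUM valley;
-- ... and so on, rather than one 4.9M-row rewrite in a single transaction
```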
[EMAIL PROTECTED] (Pallav Kalva) writes:
Tom Lane wrote:
Pallav Kalva [EMAIL PROTECTED] writes:
We turned on autovacuums on 8.2 and we have a database which is
read only , it is basically a USPS database used only for address
lookups (only SELECTS, no updates/deletes/inserts).
[EMAIL PROTECTED] (Pallav Kalva) writes:
Mark Lewis wrote:
On Fri, 2007-08-31 at 12:25 -0400, Pallav Kalva wrote:
Can you please correct me if I am wrong, I want to understand how
this works.
Based on what you said, it will run autovacuum again when it passes
200M transactions, as SELECTS
[EMAIL PROTECTED] (Patrice Castet) writes:
I wonder if clustering a table improves performance somehow?
Any examples/ideas about that?
ref : http://www.postgresql.org/docs/8.2/interactive/sql-cluster.html
Sometimes.
1. It compacts the table, which may be of value, particularly if the
table is not
[EMAIL PROTECTED] (Kevin Kempter) writes:
any suggestions for improving LIKE '%text%' queries?
If you know that the 'text' portion of that query won't change, then
you might create a partial index on the boolean condition.
That is,
create index index_foo_text on my_table (tfield) where
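The truncated suggestion might be completed along these lines; the predicate is an assumption, sketching the partial-index idea rather than reproducing the original statement:

```sql
-- Hypothetical completion: index only the rows matching the fixed pattern,
-- so queries with that WHERE clause can use the (small) partial index
CREATE INDEX index_foo_text ON my_table (tfield)
    WHERE tfield LIKE '%text%';
```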
[EMAIL PROTECTED] (Yinan Li) writes:
I am trying to improve the performance of creating an index.
I've set shared_buffers = 1024MB
effective_cache_size = 1024MB
work_mem = 1GB
maintenance_work_mem = 512MB
[EMAIL PROTECTED] (Jean-David Beyer) writes:
But what is the limitation on such a thing? In this case, I am just
populating the database and there are no other users at such a time. I am
willing to lose the whole insert of a file if something goes wrong -- I
would fix whatever went wrong and
[EMAIL PROTECTED] (Jean-David Beyer) writes:
Chris Browne wrote:
[EMAIL PROTECTED] (Jean-David Beyer) writes:
But what is the limitation on such a thing? In this case, I am just
populating the database and there are no other users at such a time. I am
willing to lose the whole insert
[EMAIL PROTECTED] (Rafael Martinez) writes:
Heikki Linnakangas wrote:
On a small table like that you could run VACUUM every few minutes
without much impact on performance. That should keep the table size in
check.
Ok, we run VACUUM ANALYZE only one time a day, every night. But we would
[EMAIL PROTECTED] (Roberts, Jon) writes:
I think it is foolish not to make PostgreSQL as feature-rich when it
comes to security as the competition, because you are idealistic when
it comes to the concept of source code. PostgreSQL is better in
many ways to MS SQL Server and equal to many
[EMAIL PROTECTED] (Scott Marlowe) writes:
On Jan 23, 2008 1:57 PM, Guy Rouillier [EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? They have their
uses, but for most use cases, having plenty of RAM in your server will
be a better way to spend
[EMAIL PROTECTED] (Florian Weimer) writes:
So, that web site seems to list products starting at about 32GB in a
separate rack-mounted box with redundant everything. I'd be more
interested in just putting the WAL on an SSD device, so 500MB or 1GB
would be quite sufficient. Can anyone point me
[EMAIL PROTECTED] (Douglas J Hunley) writes:
Subject about says it all. Should I be more concerned about checkpoints
happening 'frequently' or lasting 'longer'? In other words, is it ok to
checkpoint, say, every 5 minutes, if it only lasts a second or three, or better
to have checkpoints every
[EMAIL PROTECTED] (Thomas Spreng) writes:
On 16.04.2008, at 01:24, PFC wrote:
The queries in question (select's) occasionally take up to 5 mins
even if they take ~2-3 sec under normal conditions, there are no
sequencial scans done in those queries. There are not many users
connected (around
[EMAIL PROTECTED] (Marinos Yannikos) writes:
This helped with our configuration:
bgwriter_delay = 1ms # 10-10000ms between rounds
bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round
FYI, I'd be inclined to reduce both of those numbers, as it should
reduce the
[EMAIL PROTECTED] (Jesper Krogh) writes:
I have this message queue table.. currently with 8m+
records. Picking the top-priority messages seems to take quite
long.. it is just a matter of searching the index.. (just as explain
analyze tells me it does).
Can anyone digest further optimizations
[EMAIL PROTECTED] (A B) writes:
So, it is time to improve performance; it is running too slow.
AFAIK (as a novice) there are a few general areas:
1) hardware
2) rewriting my queries and table structures
3) using more predefined queries
4) tweaking parameters in the db conf files
Of these
[EMAIL PROTECTED] (Gauri Kanekar) writes:
We have a table, table1, which gets inserts and updates daily in high
numbers, because of which its size is increasing and we have to vacuum
it every alternate day. Vacuuming table1 takes almost 30 min and
during that time the site is down. We need to cut down
[EMAIL PROTECTED] (Gauri Kanekar) writes:
Basically we have some background process which updates table1 and
we don't want the application to make any changes to table1 during
vacuum. Vacuum requires an exclusive lock on table1, and if any of
the background processes or the application is on, vacuum doesn't kick
I'm doing some analysis on temporal usages, and was hoping to make use
of OVERLAPS, but it does not appear that it makes use of indices.
Couching this in an example... I created a table, t1, thus:
metadata=# \d t1
Table public.t1
Column | Type
[EMAIL PROTECTED] (Merlin Moncure) writes:
I think the SSD manufacturers made a tactical error chasing the
notebook market when they should have been chasing the server
market...
That's a very good point; I agree totally!
--
output = reverse(moc.enworbbc @ enworbbc)
phoenix.ki...@gmail.com (Phoenix Kiula) writes:
[Posted a similar note to PG General, but I suppose it's more appropriate
on this list. Apologies for cross-posting.]
Hi. Further to my bafflement with the count(*) queries as described
in this thread:
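The standard workaround for slow COUNT(*) in PostgreSQL (an MVCC count has to scan) is a trigger-maintained counter table. A hedged sketch with hypothetical names:

```sql
-- Hypothetical sketch: keep a running row count for table t,
-- so COUNT(*) becomes a one-row lookup instead of a full scan
CREATE TABLE t_counts (n bigint NOT NULL);
INSERT INTO t_counts SELECT count(*) FROM t;

CREATE OR REPLACE FUNCTION t_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE t_counts SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE t_counts SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger; return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_count AFTER INSERT OR DELETE ON t
    FOR EACH ROW EXECUTE PROCEDURE t_count_trig();
```

Note that under heavy concurrent writes the single counter row becomes an update hotspot; that trade-off is part of why this is a workaround rather than a built-in.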
mallah.raj...@gmail.com (Rajesh Kumar Mallah) writes:
why is it not a good idea to give end users control over when they
want to run it?
It's not a particularly good idea to give end users things that they
are likely then to *immediately* use to shoot themselves in the foot.
Turning off
craig_ja...@emolecules.com (Craig James) writes:
Dave Cramer wrote:
So I tried writing directly to the device, gets around 250MB/s,
reads at around 500MB/s
The client is using redhat so xfs is not an option.
I'm using Red Hat and XFS, and have been for years. Why is XFS not an option
with
cl...@uah.es (Angel Alvarez) writes:
"more optimal plan..."
"more optimal configuration..."
We suffer from a 'more optimal' superlative misuse;
there is no such thing as 'more optimal', just a simple 'better'.
I'm not a native English speaker, but I think it still applies.
If I wanted to be pedantic
kelv...@gmail.com (Kelvin Quee) writes:
I will go look at Slony now.
It's worth looking at, but it is not always to be assumed that
replication will necessarily improve scalability of applications; it's
not a magic wand to wave such that presto, it's all faster!
Replication is helpful from a
cr...@postnewspapers.com.au (Craig Ringer) writes:
On 13/03/2010 5:54 AM, Jeff Davis wrote:
On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
of course. You can always explicitly open a transaction on the remote
side over dblink, do work, and commit it at the last possible moment.
reeds...@rice.edu (Ross J. Reedstrom) writes:
http://www.mythtv.org/wiki/PostgreSQL_Support
That's a pretty hostile presentation...
The page has had two states:
a) In 2008, someone wrote up...
After some bad experiences with MySQL (data loss by commercial power
failure, very bad
t...@sss.pgh.pa.us (Tom Lane) writes:
Ross J. Reedstrom reeds...@rice.edu writes:
On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:
(I added the and trust as an after thought, because I do have one very
important 100% uptime required mysql database that is running. Its my
MythTV
swamp...@noao.edu (Steve Wampler) writes:
Or does losing WAL files mandate a new initdb?
Losing WAL would mandate initdb, so I'd think this all fits into the
set of stuff worth putting onto ramfs/tmpfs. Certainly it'll all be
significant to the performance focus.
--
select 'cbbrowne' || '@' ||
g...@2ndquadrant.com (Greg Smith) writes:
Yeb Havinga wrote:
* What filesystem to use on the SSD? To minimize writes and maximize
chance for seeing errors I'd choose ext2 here.
I don't consider there to be any reason to deploy any part of a
PostgreSQL database on ext2. The potential for
j...@commandprompt.com (Joshua D. Drake) writes:
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even
david_l...@boreham.org (David Boreham) writes:
Feels like I fell through a worm hole in space/time, back to inmos in
1987, and a guy from marketing has just
walked in the office going on about there's a customer who wants to
use our massively parallel hardware to speed up databases...
... As
sgend...@ideasculptor.com (Samuel Gendler) writes:
Geez. I wish someone would have written something quite so bold as
'xfs is always faster than ext3' in the standard tuning docs. I
couldn't find anything that made a strong filesystem
recommendation. How does xfs compare to ext4? I wound