Tom Lane wrote:
Chris [EMAIL PROTECTED] writes:
Email admins - Could we add this above or below the random tips that get
appended to every email ?
You mean like these headers that already get added to every list
message (these copied-and-pasted from your own message):
The headers aren't
- Chris
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Andrew Sullivan
Sent: Thursday, May 04, 2006 8:28 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres 7.4 and vacuum_cost_delay.
On Tue, May 02, 2006 at 05:47:15PM -0400
the performance.
The index will have entries like:
CHRIS
BERT
JOE
and so on.
If you run a query like:
select * from table where UPPER(name) = 'CHRIS';
It's an easy match.
If you don't create an UPPER index, it has to do a comparison with each
row - so the index can't be used because postgres has
a solution. (barring some performance testing)
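A minimal sketch of the expression index being described (the table and column names here are illustrative, not from the original thread):

```sql
-- hypothetical table, used only to illustrate the point
CREATE TABLE people (name varchar);

-- the expression index stores UPPER(name), matching the predicate below
CREATE INDEX people_upper_name_idx ON people (UPPER(name));

-- this can now use the index; without it, postgres must compute
-- UPPER(name) for every row, i.e. a sequential scan
SELECT * FROM people WHERE UPPER(name) = 'CHRIS';
```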
The only problem with pg_autovacuum is that it needs pg_statio, which itself reduces performance at all times.
Any suggestions?
Thanks!
- Chris
On 4/29/06, Greg Stumph [EMAIL PROTECTED] wrote:
Well, since I got no response at all to this message, I can only assume that
I've asked the question in an insufficient way, or else that no one has
anything to offer on our problem.
This was my first post to the list, so if there's a better way
On 4/25/06, Arnau [EMAIL PROTECTED] wrote:
Hi all,
I have the following running on postgresql version 7.4.2:
CREATE SEQUENCE agenda_user_group_id_seq
MINVALUE 1
MAXVALUE 9223372036854775807
CYCLE
INCREMENT 1
START 1;
CREATE TABLE AGENDA_USERS_GROUPS
(
AGENDA_USER_GROUP_ID INT8
OK. Stop and think about what you're telling postgresql to do here.
You're telling it to cast the field group_id to int8, then compare it to
9. How can it cast the group_id to int8 without fetching it? That's
right, you're ensuring a seq scan. You need to put the int8 cast on the
other
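In other words (a sketch of the two query shapes; the point is to cast the constant, not the column):

```sql
-- forces a seq scan: the column is cast, so every row must be fetched
-- before the comparison can happen
SELECT * FROM agenda_users_groups WHERE agenda_user_group_id::int8 = 9;

-- index-friendly: the int8 cast goes on the other side, i.e. the constant
SELECT * FROM agenda_users_groups WHERE agenda_user_group_id = 9::int8;
```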
On 4/13/06, Gavin Hamill [EMAIL PROTECTED] wrote:
laterooms=# explain analyze select allocation0_.ID as y1_,
allocation0_.RoomID as y2_, allocation0_.StatusID as y4_,
allocation0_.Price as y3_, allocation0_.Number as y5_,
allocation0_.Date as y6_ from Allocation allocation0_ where
Francisco Reyes wrote:
Doing my first write heavy database.
What settings will help improve inserts?
Only a handfull of connections, but each doing up to 30 inserts/second.
Plan to have 2 to 3 clients which most of the time will not run at the
same time, but occasionally it's possible two of them
Chris,
Just to make sure the x4100 config is similar to your Linux system, can
you verify the default setting for disk write cache and make sure they
are both enabled or disabled. Here's how to check in Solaris.
As root, run format -e - pick a disk - cache - write_cache - display
Ok, so I did a few runs for each of the sync methods, keeping all the
rest constant and got this:
open_datasync 0.7
fdatasync 4.6
fsync 4.5
fsync_writethrough not supported
open_sync 0.6
in arbitrary units - higher is faster.
Quite impressive!
Bye, Chris
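Based on measurements like these, the method can be pinned explicitly in postgresql.conf (a sketch only; pick whichever method benchmarked fastest on your own platform):

```
# postgresql.conf -- candidates tried above: open_datasync, fdatasync,
# fsync, fsync_writethrough, open_sync; here fdatasync scored best
wal_sync_method = fdatasync
```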
josh@agliodbs.com (Josh Berkus) writes:
Juan,
When I hit
this pgsql on this laptop with a large query I can see the load spike up
really high on both of my virtual processors. Whatever, pgsql is doing
it looks like both cpu's are being used indepently.
Nope, sorry, you're being deceived.
comment
wal_sync_method to double check.
To the point: the default wal_sync_method chosen on Solaris 10 appears
to be a very bad one - for me, picking fsync increases performance by a
factor of ~7, all other parameters unchanged!
Would it be a good idea to change this in the default install?
Bye, Chris
PS: yes I did a fresh initdb again to double check ;)
---(end of broadcast)---
TIP 6: explain analyze is your friend
for pg_xlog and base
and will try what you suggest.
(but note the other mail about wal_sync_method = fsync)
bye, chris.
as on Solaris.
I'm happy so far, but I find it very surprising that this single
parameter has such an impact (only on) Solaris 10.
(my test program is a bulk inserts using PQputCopyData in large
transactions - all test were 8.1.3).
Bye, Chris
them there in the first place, anyway). I've this bad
feeling there's a secret turbo switch I can't spot hidden somewhere
in Solaris :/
Bye, Chris.
On 4/2/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Sat, Apr 01, 2006 at 11:23:37AM +1000, chris smith wrote:
On 4/1/06, Brendan Duddridge [EMAIL PROTECTED] wrote:
Hi Jim,
I'm not quite sure what you mean by the correlation of category_id?
It means how many distinct values does
On 4/2/06, chris smith [EMAIL PROTECTED] wrote:
On 4/2/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Sat, Apr 01, 2006 at 11:23:37AM +1000, chris smith wrote:
On 4/1/06, Brendan Duddridge [EMAIL PROTECTED] wrote:
Hi Jim,
I'm not quite sure what you mean by the correlation
On 4/1/06, Brendan Duddridge [EMAIL PROTECTED] wrote:
Hi Jim,
I'm not quite sure what you mean by the correlation of category_id?
It means how many distinct values does it have (at least that's my
understanding of it ;) ).
select category_id, count(*) from category_product group by category_id;
[EMAIL PROTECTED] (Craig A. James) writes:
Gorshkov wrote:
/flame on
if you were *that* worried about performance, you wouldn't be using
PHP or *any* interperted language
/flame off
sorry - couldn't resist it :-)
I hope this was just a joke. You should be sure to clarify - there
might
. It is impossible to provide
an estimate for capacity though without knowing the app in question,
expected query composition, and so forth.
Best Wishes,
Chris Travers
Metatron Technology Consulting
george young wrote:
[PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]
I have a simple join on two tables that takes way too long. Can you help
me understand what's wrong? There are indexes defined on the relevant columns.
I just did a fresh vacuum --full --analyze on the
Ruben Rubio Rey wrote:
Greg Quinn wrote:
The query is,
select * from users
which returns 4 varchar fields, there is no where clause
Yes, I am running the default postgres config. Basically I have been a
MySQL user and thought I would like to check out PostGreSql. So I did
a quick
[EMAIL PROTECTED] (Jim C. Nasby) writes:
On Thu, Mar 23, 2006 at 09:22:34PM -0500, Christopher Browne wrote:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Scott Marlowe)
wrote:
On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:
Has someone been working on the problem of
[EMAIL PROTECTED] (Luke Lonergan) writes:
Christopher,
On 3/23/06 6:22 PM, Christopher Browne [EMAIL PROTECTED] wrote:
Question: Does the Bizgress/MPP use threading for this concurrency?
Or forking?
If it does so via forking, that's more portable, and less dependent on
specific
[EMAIL PROTECTED] (Michael Stone) writes:
On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
A naive read on this is that you might start with one backend process,
which then spawns 16 more. Each of those backends is scanning through
one of those 16 files; they then throw relevant
. But that
would not provide any indication of scalability.
Best Wishes,
Chris Travers
Metatron Technology Consulting
Jan de Visser wrote:
Hello,
After fixing the hanging problems I reported here earlier (by uninstalling
W2K3 SP1), I'm running into another weird one.
After doing a +/- 8hr cycle of updates and inserts (what we call a 'batch'),
the first 'reporting' type query on tables involved in that
Hi all,
I'm trying to work out why my 8.1 system is slower than my 7.4 system
for importing data.
The import is a lot of insert into commands - it's a converted
database from another system so I can't change it to copy commands.
My uncommented config options:
autovacuum = off
Gavin Sherry wrote:
On Tue, 14 Mar 2006, Chris wrote:
Hi all,
I'm trying to work out why my 8.1 system is slower than my 7.4 system
for importing data.
The import is a lot of insert into commands - it's a converted
database from another system so I can't change it to copy commands.
snip
Frank Wiles wrote:
On Tue, 14 Mar 2006 12:24:22 +1100
Chris [EMAIL PROTECTED] wrote:
Gavin Sherry wrote:
On Tue, 14 Mar 2006, Chris wrote:
Hi all,
I'm trying to work out why my 8.1 system is slower than my 7.4
system for importing data.
The import is a lot of insert into commands
Tom Lane wrote:
Chris [EMAIL PROTECTED] writes:
Tons of difference :/
Have you checked that the I/O performance is comparable? It seems
possible that there's something badly misconfigured about the disks
on your new machine. Benchmarking with bonnie or some such would
be useful; also try
David Lang wrote:
On Tue, 14 Mar 2006, Chris wrote:
The only other thing I can see is the old server is ext2:
/dev/hda4 on / type ext2 (rw,errors=remount-ro)
the new one is ext3:
/dev/hda2 on / type ext3 (rw)
this is actually a fairly significant difference.
with ext3 most of your data
Javier Somoza wrote:
I want to test my system's performance when using pgcluster.
I'm using PostgreSQL 8.1.0 and I've downloaded pgcluster-1.5.0rc7
and pgcluster-1.5.0rc7-patch.
Do I need to recompile PostgreSQL with the patch?
Can I use pgcluster-1.5 with
i.v.r. wrote:
Hi everyone,
I'm experimenting with PostgreSQL, but since I'm no expert DBA, I'm
experiencing some performance issues.
Please take a look at the following query:
SELECT
/*groups.name AS t2_r1,
groups.id AS t2_r3,
groups.user_id AS t2_r0,
groups.pretty_url AS t2_r2,
Nik [EMAIL PROTECTED] writes:
I have a table that has only a few records in it at a time; they
get deleted every few seconds and new records are inserted. The table
never has more than 5-10 records in it.
However, I noticed a deteriorating performance in deletes and inserts
on it. So I
[EMAIL PROTECTED] (Jamal Ghaffour) writes:
Hi all, I'm using the PostgreSQL database to store cookies. These
cookies become invalid after 30 minutes and have to be deleted. I have
defined a procedure that will delete all invalid cookies, but I
don't know how to call it in a loop (for example
ryan groth wrote:
I am issing a query like this:
SELECT *
FROM users users
LEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id
LEFT JOIN useraux ON useraux.uid = users.uid;
I'm not sure if postgres would rewrite your query to do the joins
properly, though I guess
PFC wrote:
I'm developing a search engine using the PostgreSQL database. I've
already done some tuning to try to increase performance.
Now, I'd like to do a realistic test with some number X of queries
to see how it performs under load.
What is the correct way to do this?
Indexing the t_name.name field, I can increase speed, but only if I
restrict my search to something like :
select *
from t_name
where t_name.name like 'my_search%'
(In this case it takes generally less than 1 second)
My question : Are there algorithms or tools that can speed up such a
type
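One hedged possibility for the prefix case (an assumption on my part, not something the poster confirmed): in a non-C locale a plain btree index cannot serve LIKE at all, but an operator-class index can:

```sql
-- lets LIKE 'prefix%' use an index even when the database locale is not C
CREATE INDEX t_name_name_pattern_idx ON t_name (name text_pattern_ops);

SELECT * FROM t_name WHERE name LIKE 'my_search%';
```

For substring searches ('%foo%') no btree will help; full-text indexing (tsearch2, in that era) is the usual answer.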
matthew@zeut.net (Matthew T. O'Connor) writes:
I think the default settings should be designed to minimize the
impact autovacuum has on the system while preventing the system from
ever getting wildly bloated (also protect xid wraparound, but that
doesn't have anything to do with the
[EMAIL PROTECTED] (Tom Lane) writes:
Brad Nicholson [EMAIL PROTECTED] writes:
I'm investigating a potential IO issue. We're running 7.4 on AIX 5.1.
During periods of high activity (reads, writes, and vacuums), we are
seeing iostat reporting 100% disk usage. I have a feeling that the
[EMAIL PROTECTED] (Mindaugas) writes:
Even a database-wide vacuum does not take locks on more than one
table. The table locks are acquired and released one by one, as
the operation proceeds.
Has that changed recently? I have always seen vacuumdb or SQL
VACUUM (without table
[EMAIL PROTECTED] (Michael Crozier) writes:
On Wednesday 18 January 2006 08:54 am, Chris Browne wrote:
To the contrary, there is a whole section on what functionality to
*ADD* to VACUUM.
Near but not quite off the topic of VACUUM and new features...
I've been thinking about parsing
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs automatically should ever take an exclusive
lock on the entire database, which is
[EMAIL PROTECTED] (Alvaro Herrera) writes:
Chris Browne wrote:
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs
study:
http://www.postgresql.org/about/casestudies/shannonmedical
Bye, Chris.
in performance?
Obviously, I want more memory, but I have to prove the need to my boss since it raises the cost of the servers a fair amount.
Thanks for any help,
Chris
In PostgreSQL 8.1, is the pg_autovacuum daemon affected by the
vacuum_cost_* variables? I need to make sure that if we turn
autovacuuming on when we upgrade to 8.1, we don't cause any i/o
issues.
Thanks,
Chris
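For reference, a sketch of the 8.1 parameters in question (values illustrative; in 8.1 the autovacuum_* cost settings default to -1, meaning the daemon falls back to the plain vacuum_cost_* values):

```
# postgresql.conf (8.1)
autovacuum = on
autovacuum_vacuum_cost_delay = -1   # -1: use vacuum_cost_delay
autovacuum_vacuum_cost_limit = -1   # -1: use vacuum_cost_limit
```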
Michael Riess [EMAIL PROTECTED] writes:
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
we are currently running a postgres server (upgraded to 8.1) which
has one large database with approx. 15,000 tables. Unfortunately
performance suffers from that, because the internal tables
(especially
[EMAIL PROTECTED] ([EMAIL PROTECTED]) writes:
I know that in MySQL an index is automatically updated after copying
data. Of course an index is updated after inserting a row in PostgreSQL,
but what about copying data?
Do you mean, by this, something like...
Are indexes affected by loading data using the COPY
On Tue, Oct 25, 2005 at 03:44:36PM +0200, Chris Mair wrote:
Is there a better, faster way to do these inserts?
COPY is generally the fastest way to do bulk inserts (see
PQputCopyData).
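The shape of the change being suggested, as plain SQL (table name illustrative; from libpq the same data stream goes through PQputCopyData/PQputCopyEnd):

```sql
-- one COPY replaces many single-row INSERTs
COPY t_name (name) FROM STDIN;
CHRIS
BERT
JOE
\.
```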
Hi,
I've rewritten the testclient now to use COPY, but I'm getting
the exact same results as when doing
getting profiling output but when I look at it using
gprof bin-somewhere/postgres $PGDATA/gmon.out I'm only seeing
what I think are the calls for the server startup. How can I profile
the (forked) process that actually performs all the work on
my connection?
Sorry for the long post :)
Bye,
Chris
Is there a better, faster way to do these inserts?
COPY is generally the fastest way to do bulk inserts (see
PQputCopyData).
Thanks :)
I'll give that I try and report the results here later.
Bye, Chris.
expect?
I'm specifying it as binary (i.e. ones in PQexecPrepared's
format parameter). The stored data is correct.
I'll try copy from stdin with binary tomorrow and see what
I get...
Thanks Bye, Chris.
[EMAIL PROTECTED] (Jeff Frost) writes:
What's the current status of how much faster the Opteron is compared
to the Xeons? I know the Opterons used to be close to 2x faster,
but is that still the case? I understand much work has been done to
reduce the contect switching storms on the Xeon
[EMAIL PROTECTED] (Dan Harris) writes:
On Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:
I thought this might be interesting, not the least due to the
extremely low
price ($150 + the price of regular DIMMs):
Replying before my other post came through.. It looks like their
benchmarks
[EMAIL PROTECTED] (Announce) writes:
I KNOW that I am not going to have anywhere near 32,000+ different
genres in my genre table so why use int4? Would that squeeze a few
more milliseconds of performance out of a LARGE song table query
with a genre lookup?
By the way, I see a lot of queries
[EMAIL PROTECTED] (Announce) writes:
I KNOW that I am not going to have anywhere near 32,000+ different
genres in my genre table so why use int4? Would that squeeze a few
more milliseconds of performance out of a LARGE song table query
with a genre lookup?
If the field is immaterial in terms
[EMAIL PROTECTED] (Joshua D. Drake) writes:
There is a huge advantage to software raid on all kinds of
levels. If you have the CPU then I suggest it. However you will
never get the performance out of software raid on the high level
(think 1 gig of cache) that you would on a software raid
[EMAIL PROTECTED] (Stef) writes:
Bruno Wolff III mentioned :
= If you have a proper FSM setting you shouldn't need to do vacuum fulls
= (unless you have an older version of postgres where index bloat might
= be an issue).
What version of postgres was the last version that had
the index
of the awkwardness in the calculation. That calc gets used
for a lot of different things including column definitions when people want
to see the column on screen.
Thanks,
-Chris
On Wednesday 14 September 2005 05:13 am, Richard Huxton wrote:
Chris Kratz wrote:
Hello All,
We are struggling
) with no
change in the plan.
Any thoughts?
-Chris
[1] explain analyze snippet from larger query
- Nested Loop (cost=0.00..955.70 rows=1 width=204) (actual
time=3096.689..202704.649 rows=17 loops=1)
Join Filter: (inner.nameid = outer.name_id)
- Nested Loop (cost=0.00..112.25 rows=1 width=33
[EMAIL PROTECTED] (wisan watcharinporn) writes:
please help me,
comment on PostgreSQL (8.x.x) performance on AMD and Intel CPUs,
and why I should use a 32-bit or 64-bit CPU (what is the performance difference)?
Generally speaking, the width of your I/O bus will be more important
to performance than the
[EMAIL PROTECTED] (Rigmor Ukuhe) writes:
-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-performance-
[EMAIL PROTECTED] On Behalf Of Markus Benne
Sent: Wednesday, August 31, 2005 12:14 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] When to do a vacuum for highly
responsibility for anyone
trying these and running into trouble.
Best Wishes,
Chris Travers
Metatron Technology Consulting
the planner
that an empty table is expected not to grow? Otherwise, I can see
nightmares in a data warehouse environment where you have an empty
parent table...
Best Wishes,
Chris Travers
Metatron Technology Consulting
in the inheritance process, iirc. However, index
entries are not inherited, which means that index-based unique
constraints don't properly get inherited.
Best Wishes,
Chris Travers
Metatron Technology Consulting
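A short sketch of the limitation being described (hypothetical tables):

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child () INHERITS (parent);

INSERT INTO parent VALUES (1);
-- succeeds: the child has its own (separate) index, so the "unique" id
-- is now duplicated across the inheritance tree
INSERT INTO child VALUES (1);
```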
[EMAIL PROTECTED] (Markus Benne) writes:
We have a highly active table that has virtually all
entries updated every 5 minutes. Typical size of the
table is 50,000 entries, and entries have grown fat.
We are currently vaccuming hourly, and towards the end
of the hour we are seeing
tobbe wrote:
Hi Chris.
Thanks for the answer.
Sorry that i was a bit unclear.
1) We update around 20.000 posts per night.
2) What i meant was that we suspect that the DBMS called PervasiveSQL
that we are using today is much too small. That's why we're looking for
alternatives.
Today we base
Hopefully a quick question.
In 7.3.4, how does the planner execute a query with union alls in it?
Does it execute the unions serially, or does it launch a thread for
each union (or maybe something else entirely).
Thanks,
Chris
Here is an explain from the view I'm thinking about, how does
[EMAIL PROTECTED] (Ron) writes:
At 03:45 PM 8/25/2005, Josh Berkus wrote:
Ask me sometime about my replacement for GNU sort. Â It uses the
same sorting algorithm, but it's an order of magnitude faster due
to better I/O strategy. Â Someday, in my infinite spare time, I
hope to demonstrate
tobbe [EMAIL PROTECTED] writes:
Hi Chris.
Thanks for the answer.
Sorry that i was a bit unclear.
1) We update around 20.000 posts per night.
No surprise there; I would have been surprised to see 100/nite or
6M/nite...
2) What i meant was that we suspect that the DBMS called PervasiveSQL
[EMAIL PROTECTED] (Steve Poe) writes:
Chris,
Unless I am wrong, you're making the assumption that the amount of time spent
and ROI is known. Maybe those who've been down this path know how to get
that additional 2-4% in 30 minutes or less?
While each person and business' performance gains
be appreciated.
Thanks,
Chris
would add an
identifying column to each table so that I can differentiate the data.
Chris
[EMAIL PROTECTED] (Donald Courtney) writes:
I mean well with this comment -
This whole issue of data caching is a troubling issue with postreSQL
in that even if you ran postgreSQL on a 64 bit address space
with larger number of CPUs you won't see much of a scale up
and possibly even a drop.
[EMAIL PROTECTED] (Michael Stone) writes:
On Tue, Aug 23, 2005 at 12:38:04PM -0700, Josh Berkus wrote:
which have a clear and measurable effect on performance and are
fixable without bloating the PG code. Some of these issues (COPY
path, context switching
Does that include increasing the
the psql shell?
Bye, Chris.
towards
the i/o starved side.
Thanks for any insight,
Chris
Sorry, forgot to state that we are still on PG 7.3.4.
On 8/17/05, Chris Hoover [EMAIL PROTECTED] wrote:
I have some questions about tuning my effective_cache_size
I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and
they have HT on). The box has 8GB memory. In my
Does anyone have any suggestions on this? I did not get any response
from the admin list.
Thanks,
Chris
-- Forwarded message --
From: Chris Hoover [EMAIL PROTECTED]
Date: Jul 27, 2005 12:29 PM
Subject: Re: Help with view performance problem
To: pgsql-admin@postgresql.org
I
.
Normally I do:
vacuum analyze;
reindex database ;
Secondly, the project table has *never* had anything in it. So where
are these numbers coming from?
Best Wishes,
Chris Travers
Metatron Technology Consulting
I'm already running at 1.5. It looks like if I drop the
random_page_cost to 1.39, it starts using the indexes. Are there any
unseen issues with dropping the random_page_cost this low?
Thanks,
Chris
On 7/28/05, Dan Harris [EMAIL PROTECTED] wrote:
On Jul 28, 2005, at 8:38 AM, Chris Hoover
performance quite easily.
Chris Travers
Metatron Technology Consulting
)
I am not sure if I want to remove support for the other two tables
yet. However, I wanted to submit this here as a (possibly corner-)
case where the plan seems to be far slower than it needs to be.
Best Wishes,
Chris Travers
Metatron Technology Consulting
my test. I have 1G of RAM, which
is less than we'll be running in production (likely 2G).
-Original Message-
From: John A Meinel [mailto:[EMAIL PROTECTED]
Sent: Monday, July 25, 2005 6:09 PM
To: Chris Isaacson; Postgresql Performance
Subject: Re: [PERFORM] COPY insert performance
Chris
take care of this. I'll
increase work_mem to 512MB and rerun my test. I have 1G of RAM, which
is less than we'll be running in production (likely 2G).
-Chris
-Original Message-
From: John A Meinel [mailto:[EMAIL PROTECTED]
Sent: Monday, July 25, 2005 6:09 PM
To: Chris Isaacson
I need the chunks for each table COPYed within the same transaction
which is why I'm not COPYing concurrently via multiple
threads/processes. I will experiment w/o OID's and decreasing the
shared_buffers and wal_buffers.
Thanks,
Chris
-Original Message-
From: Gavin Sherry [mailto:[EMAIL
[EMAIL PROTECTED] (John A Meinel) writes:
I saw a review of a relatively inexpensive RAM disk over at
anandtech.com, the Gigabyte i-RAM
http://www.anandtech.com/storage/showdoc.aspx?i=2480
And the review shows that it's not *all* that valuable for many of the
cases they looked at.
Basically,
[EMAIL PROTECTED] (Jeffrey W. Baker) writes:
I haven't tried this product, but the microbenchmarks seem truly
slow. I think you would get a similar benefit by simply sticking a
1GB or 2GB DIMM -- battery-backed, of course -- in your RAID
controller.
Well, the microbenchmarks were pretty
I need COPY via
libpqxx to insert millions of rows into two tables. One table has roughly
half as many rows and requires half the storage. In production, the
largest table will grow by ~30M rows/day. To test the COPY performance I
split my transactions into 10,000 rows. I
(autocommit on) with fsync on.
Bye, Chris.
[EMAIL PROTECTED] (Stuart Bishop) writes:
I'm putting together a road map on how our systems can scale as our
load increases. As part of this, I need to look into setting up some
fast read only mirrors of our database. We should have more than
enough RAM to fit everything into memory. I would
[EMAIL PROTECTED] (Mohan, Ross) writes:
for time-series and insane fast, nothing beats kdB, I believe
www.kx.com
... Which is well and fine if you're prepared to require that all of
the staff that interact with data are skilled APL hackers. Skilled
enough that they're all ready to leap into
[EMAIL PROTECTED] writes:
How can I know the capacity of a pg database?
How many records can my table have?
I saw in a message that someone has 50,000 records in a table - is that possible?
(My table has 8 string fields (length 32 chars)).
Thanks for your response.
The capacity is much more
wrong or not doing at all?
Your help is greatly appreciated.
Regards,
Chris.
Mark Kirkwood wrote:
Chris Hebrard wrote:
kern.ipc.shmmax and kern.ipc.shmmin will not stay at what I set them to.
What am I doing wrong or not doing at all?
These need to go in /etc/sysctl.conf. You might need to set shmall as
well.
(This not-very-clear distinction between what is sysctl'abe
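For the record, the shape of the fix being described (FreeBSD; the values are illustrative only, size them for your shared_buffers):

```
# /etc/sysctl.conf
kern.ipc.shmmax=134217728   # max segment size, bytes
kern.ipc.shmall=32768       # total shared memory, pages
```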