Relaxin wrote on Thu, 04.09.2003 at 03:28:
I have a table with 102,384 records in it; each record is 934 bytes.
I created a test database on my Linux (RH9) laptop with a 30GB/4200RPM IDE
drive and a P3-1133MHz, 768MB, populated it with 128000 rows of 930 bytes
each and did
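The population step that got cut off above can be sketched with generate_series; the table and column names below are illustrative, not from the original post:

```sql
-- Hypothetical reconstruction of the test setup: 128000 rows of ~930 bytes each.
CREATE TABLE testdata (
    id      integer PRIMARY KEY,
    payload char(900)            -- fixed-width filler to reach ~930 bytes per row
);

INSERT INTO testdata (id, payload)
SELECT g, repeat('x', 900)
FROM generate_series(1, 128000) AS g;

ANALYZE testdata;
```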
[EMAIL PROTECTED]
Mindaugas Riauba wrote on Wed, 10.09.2003 at 13:21:
router_db=# explain analyze update ifdata set ifspeed=256000,
ifreason='12121', iflastupdate=CURRENT_TIMESTAMP WHERE clientid='#0003904#';
QUERY PLAN
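The plan output is cut off above. As a hedged sketch (the index name is made up), a single-row UPDATE like this is normally only fast when the WHERE column is indexed:

```sql
-- Without an index on clientid, the UPDATE needs a sequential scan of ifdata.
CREATE INDEX ifdata_clientid_idx ON ifdata (clientid);

-- Note: EXPLAIN ANALYZE actually performs the update, so wrap it in a
-- transaction and ROLLBACK if you do not want the change to stick.
BEGIN;
EXPLAIN ANALYZE UPDATE ifdata SET ifspeed=256000,
    ifreason='12121', iflastupdate=CURRENT_TIMESTAMP
WHERE clientid='#0003904#';
ROLLBACK;
```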
Christopher Browne wrote on Wed, 01.10.2003 at 19:21:
The FS-related result appeared surprising, as the stories I had
heard suggested that JFS hadn't been particularly heavily tuned on
Linux, whereas XFS was supposed to be the speed demon.
Gentoo Linux recommends XFS only for SAN+fibre
Christopher Browne wrote on Fri, 03.10.2003 at 00:57:
[EMAIL PROTECTED] (Jean-Luc Lachance) writes:
That's one of the drawbacks of MVCC.
I once suggested that the transaction number and other housekeeping
info be included in the index, but was told to forget it...
It would solve once
Tom Lane wrote on Sat, 04.10.2003 at 19:07:
Hannu Krosing [EMAIL PROTECTED] writes:
Christopher Browne wrote on Fri, 03.10.2003 at 00:57:
A while back I outlined how this would have to be done, and for it to
be done efficiently, it would be anything BUT simple.
Could this be made
Josh Berkus wrote on Thu, 09.10.2003 at 08:36:
Chris,
The need to do a lot of joins would likely hurt performance somewhat,
as well as the way that it greatly increases the number of rows.
Although you could always split it into several tables, one for each
value_type, and UNION them
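The split-into-typed-tables-plus-UNION idea can be sketched like this (all names are hypothetical):

```sql
-- One narrow table per value_type, recombined with UNION ALL in a view.
CREATE TABLE values_int  (entity_id integer, value integer);
CREATE TABLE values_text (entity_id integer, value text);

CREATE VIEW all_values AS
    SELECT entity_id, value::text AS value, 'int'::text  AS value_type FROM values_int
    UNION ALL
    SELECT entity_id, value,                'text'::text AS value_type FROM values_text;
```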
Alexander Priem wrote on Mon, 20.10.2003 at 16:04:
Even better than the four-disk NAS I mentioned earlier is the following:
Promise UltraTrak RM8000. This is a so-called SCSI-to-IDE RAID system.
While you are at it, you could also check out http://www.3ware.com/
I guess one of these with
Rhaoni Chiu Pereira wrote on Mon, 20.10.2003 at 17:13:
Hi List,
I got a P4 1.7GHz, 512MB RAM, HD 7200 RPM, on Red Hat 9 running PostgreSQL
7.3.2-3.
I have a Delphi application that updates the Oracle database using .dbf
file's information ( converting the data from the
Christopher Browne wrote on Thu, 06.11.2003 at 20:23:
Private [EMAIL PROTECTED] writes:
Try this benchmark on PostgreSQL, MySQL, FireBird, Oracle:
http://go.jitbot.com/dbbench-pg-fb-mys-orcl
It looks like a good candidate for adding a PL/pgSQL stored
procedure to get similar speedups
Jeremiah Jahn wrote on Wed, 26.11.2003 at 22:14:
I was wondering if there is something I can do that would act similarly to
an index over more than one table.
I have about 3 million people in my DB at the moment, they all have
roles, and many of them have more than one name.
for example, a
Chadwick, Russell wrote on Sat, 13.12.2003 at 00:40:
Hello everyone.
Can anyone explain why this table, which has never had more than a
couple of rows in it, shows 500k in the query planner even after running
VACUUM FULL? It's terribly slow to return 2 rows of data. The 2 rows
in it are being
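One way to see what the planner believes about such a table is its statistics in pg_class; a hedged sketch with a made-up table name:

```sql
-- reltuples/relpages are the planner's row and page estimates; huge
-- relpages for a 2-row table means dead space (bloat) that repeated
-- updates left behind and plain VACUUM cannot shrink.
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'my_small_table';   -- hypothetical table name
```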
Spiegelberg, Greg wrote on Sun, 11.01.2004 at 18:21:
It would seem we're experiencing something similar with our scratch
volume (JFS mounted with noatime).
Which files/directories do you keep on the scratch volume?
All postgres files or just some (WAL, tmp)?
-
Hannu
Greg Spiegelberg wrote on Mon, 12.01.2004 at 19:03:
Hannu Krosing wrote:
Spiegelberg, Greg wrote on Sun, 11.01.2004 at 18:21:
It would seem we're experiencing something similar with our scratch
volume (JFS mounted with noatime).
Which files/directories do you keep on the scratch volume
Shea,Dan [CIS] wrote on Thu, 22.01.2004 at 22:35:
Something that I do not understand is why if you use valid_time =
'2004-01-22 00:00:00' the query will use the index, but if you do
valid_time > '2004-01-22 00:00:00' it does not use the index?
It probably can't tell if it is selective enough
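The difference can be seen directly with EXPLAIN; a sketch assuming a hypothetical forecasts table with an index on valid_time:

```sql
-- Equality matches few rows, so the index is used:
EXPLAIN SELECT * FROM forecasts WHERE valid_time = '2004-01-22 00:00:00';

-- A one-sided range may be estimated to match a large fraction of the
-- table, in which case the planner prefers a sequential scan:
EXPLAIN SELECT * FROM forecasts WHERE valid_time > '2004-01-22 00:00:00';
```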
Hannu Krosing wrote on Thu, 22.01.2004 at 22:46:
Shea,Dan [CIS] wrote on Thu, 22.01.2004 at 22:35:
Something that I do not understand is why if you use valid_time =
'2004-01-22 00:00:00' the query will use the index, but if you do
valid_time > '2004-01-22 00:00:00' it does not use
Shea,Dan [CIS] wrote on Thu, 22.01.2004 at 23:32:
The end date in the previous example was actually invalid between
'2004-01-12'::date and '2003-01-12'::date;
There have been multiple inserts since I recreated the index but it took
quite some time to complete the following
PWFPM_DEV=#
Saleem Burhani Baloch wrote on Thu, 19.02.2004 at 11:01:
Hi,
Thanks everyone for helping me. I have upgraded to 7.4.1 on
Red Hat 8 (RH 9 requires a lot of libs) and set the configuration
sent by Chris. Now the query runs in 6.3 sec. Wow! I am wondering
why the 7.1 process
On Sun, 2004-12-05 at 21:43, Rodrigo Carvalhaes wrote:
Hi !
Thanks for the lots of tips that I received on this matter.
...
Is there something more that I can try to improve this performance?
Check the speed of your IDE drive; maybe tweak some params with
/sbin/hdparm. Sometimes the
in most cases and thus
avoid lots of disk head movement when updating huge tables in one go.
Hannu Krosing [EMAIL PROTECTED]
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
essential ?
In pg you always have to visit the data page, so finding the wrong tuple
there could just produce the same result as a deleted tuple (which in this
case it actually is). The cleaning of index entries could be left to the
real vacuum.
database in size and so quick access ?
They use lots of boxes and lots of custom software to implement a very
specific kind of cluster.
regards,
are able to run pg_dump
from slave, it should be ok.
On a fine day (Tuesday, 25 January 2005, 10:41 -0500), Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
Why is removing index entries essential ?
Because once you re-use the tuple slot, any leftover index entries would
be pointing to the wrong rows.
That much I understood
at least use a temp table for intermediate steps. This
will at least save WAL traffic.
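A minimal sketch of the temp-table approach, with made-up names; temp table contents are never WAL-logged:

```sql
-- Session-local staging table: writes to it generate no WAL for the data.
CREATE TEMP TABLE staging AS
    SELECT * FROM big_table WHERE needs_processing;

-- ... transform the rows in staging ...

-- Only the final write back to a regular table hits the WAL.
INSERT INTO big_table_processed SELECT * FROM staging;
```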
This could/should/maybe :) possibly be combined with clustering as well.
of read-only table to avoid
OID-wraparound ?
design the system to partition on PK only.
select version();
                                version
---------------------------------------------------------------------------
 PostgreSQL 8.0.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.6 (Debian 1:3.3.6-7)
(1 row)
, it was 1G shared buffers vs. 3G
on a 4G machine). It was probably due to all kinds of index page splits
etc which shuffled index pages back and forth a lot between userspace
and syscache. So this is not entirely read-only thing either.
--
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
Services, Consulting and Training
happen despite all the odds to
maximise the pain and chaos caused.
--
Craig Ringer
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org
systems to allocate in bigger chunks help
here? I know that xfs can be tuned in that regard, but how about other
common file systems like ext3?
On Thu, 2010-03-18 at 16:12 -0400, Justin Pitts wrote:
It seems to me that a separate partition / tablespace would be a much simpler
approach.
Do you mean a separate partition/tablespace for _each_ index built
concurrently?
On Mar 17, 2010, at 5:18 PM, Hannu Krosing wrote:
On Wed, 2010
for that
regards,
Yeb Havinga
as well.
in other
category.
Nah, actually [PERFORM] is the right place to ask.
It's just that most people got the impression that you may be doing unnecessary
REINDEXing, and the best way to speed up unneeded things is not to do
them ;)
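Before REINDEXing, it is worth checking whether an index is actually large or even used; a sketch using the standard statistics views:

```sql
-- idx_scan counts how often the index was used; an index that is never
-- scanned is a candidate for dropping rather than REINDEXing.
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC;
```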
Thank you
Regards
Raghavendra
CREATE INDEX tgrm_deleted_user_index ON telegrams(user_id)
WHERE user_deleted = FALSE;
(if on a live system, use CREATE INDEX CONCURRENTLY ...)
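A query can use such a partial index only when its WHERE clause implies the index predicate, e.g.:

```sql
-- The condition user_deleted = FALSE matches the index predicate,
-- so the small partial index is usable here:
SELECT * FROM telegrams
WHERE user_id = 42 AND user_deleted = FALSE;
```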
.tag_id = 7
and this_.id > 5000
order by this_.id asc
limit 1000;
I'm not sure that the planner considers the above form of plan rewrite, nor
that it would make much sense to do so unless there was a really small
number of rows where x_.company_id > 5000
as it looks, though:
http://it.toolbox.com/blogs/database-soup/admission-control-and-its-discontents-39895
option to use incoming unix sockets,
because they run into the IP socket port number limit (a little above
31k, or more exactly 63k/2).
And unix sockets can be used only on the local host.
On Wed, 2010-07-14 at 08:58 -0500, Kevin Grittner wrote:
Scott Marlowe scott.marl...@gmail.com wrote:
Hannu Krosing ha...@2ndquadrant.com wrote:
One example where you need a separate connection pool is pooling
really large number of connections, which you may want to do on
another host
On Sun, 2010-07-18 at 21:48 +0530, Rajesh Kumar Mallah wrote:
Hi,
Sorry if posting here was not proper instead of starting a new thread
(I am really not sure if it's a bad thing to do).
I would like to share my recent experience with implementing
client-side pooling using pgbouncer. By
On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:
On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
So rather than asking "should core have a connection pool", perhaps
what's needed is to ask "what can an in-core pool do that an external
pool cannot do?"
On Thu, 2010-07-22 at 20:57 -0400, Robert Haas wrote:
On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing ha...@krosing.net wrote:
On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:
On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
So rather than asking
On Fri, 2010-07-23 at 09:52 -0700, Joshua D. Drake wrote:
On Thu, 2010-07-22 at 20:56 +0100, Hannu Krosing wrote:
Let's extend this, shall we:
Avoid adding yet another network hop
PostgreSQL is multi-process, so you either have a separate pooler
process or need to put the pooler
by the direct client. The proxy can decide to pass some of
these from the real client, but it would be a decision made by the proxy, not
mandated by some proxying rules.
is showing that a single SSD is competitive with a small
array on average, but with better worst-case behavior than I'm used to
seeing.
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com www.2ndQuadrant.us
?
Do you have any stats on how much WAL is written for the 8kb and 4kb test
cases?
And for other disk I/O during the tests?
on a
database-wide basis and have it work well for all queries.
Keep trying. The closer you get with your conf to real conditions, the
better choices the optimiser can make ;)
On Wed, 2010-08-04 at 09:14 -0400, Robert Haas wrote:
On Tue, Aug 3, 2010 at 3:03 AM, Hannu Krosing ha...@2ndquadrant.com wrote:
In case of fully cached database it is closer to 1.
In the case of a fully cached database I believe the correct answer
begins with a decimal point.
The number 1
On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
Hannu Krosing ha...@2ndquadrant.com writes:
Of course there are more variables than just *_page_cost, so if you nail
down any other one, you may end up with less than 1 for both page costs.
I have always used seq_page_cost = 1 in my thinking
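For a mostly or fully cached database the usual practice is to bring random_page_cost down close to seq_page_cost; a hedged sketch (exact values are workload-dependent):

```sql
-- With everything in RAM, random and sequential page fetches cost about
-- the same, so the two page costs should be nearly equal:
SET seq_page_cost = 1.0;
SET random_page_cost = 1.0;
-- (set them in postgresql.conf to make this the cluster-wide default)
```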
On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:
On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
regression=# select name, setting from pg_settings where name like '%cost';
         name         | setting
----------------------+---------
 cpu_index_tuple_cost | 0.005
On Wed, 2010-08-04 at 22:03 +0300, Hannu Krosing wrote:
On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:
On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
regression=# select name, setting from pg_settings where name like '%cost';
         name         | setting
On Wed, 2010-08-04 at 15:16 -0400, Greg Smith wrote:
Hannu Krosing wrote:
There was ample space for keeping the indexes in linux cache (it has 1GB
cached currently) though the system may have decided to start writing it
to disk, so I suspect that most of the time was spent copying random
a considerable performance boost
Cheers,
WBL
--
Quality comes from focus and clarity of purpose -- Mark Shuttleworth