On Wednesday 13 October 2010 05:33:28 Mladen Gogala wrote:
> On 10/13/2010 8:12 AM, Greg Smith wrote:
> > The work incorporating a more stable XFS into RHEL started with xfsprogs
> > 3.0.1-6 going into Fedora 11, and 3.1.X would represent a current
> > release. So your Ubuntu kernel is two major
On Wed, 13 Oct 2010, Tom Lane wrote:
Neil Whelchel writes:
That's probably got little to do with caching and everything to do with
setting hint bits on the first SELECT pass.
I concur with Mark's question about whether your UPDATE pushed the table
size across the limit of what would fit in RA
Ants Aasma wrote:
I hit an issue with window aggregate costing while experimenting with
providing a count of the full match alongside a limited result set.
Seems that the window aggregate node doesn't take into account that it
has to consume the whole input before outputting the first row. When
Wow. Thanks so much to all of you for the thoughtful and helpful
responses!
Reuven
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Wed, Oct 13, 2010 at 02:38, Neil Whelchel wrote:
> And the cache helps...
> So, we are right back to within 10ms of where we started after INSERTing the
> data, but it took a VACUUM FULL to accomplish this (by making the table fit in
> RAM).
> This is a big problem on a production machine as t
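For anyone following along, the maintenance step being described was presumably something like this (using the `log` table from the earlier test; note that on 8.x VACUUM FULL holds an exclusive lock on the table for the duration, and its implementation changed in 9.0):

```sql
-- Rewrite the table to reclaim the dead space left by the bulk UPDATE,
-- shrinking it back down so it can fit in RAM again:
VACUUM FULL log;
ANALYZE log;
```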
On Wed, Oct 13, 2010 at 07:49, Tom Lane wrote:
> Neil Whelchel writes:
> I concur with Mark's question about whether your UPDATE pushed the table
> size across the limit of what would fit in RAM.
Yeah, you said you have ~2GB of ram, just counting the bytes and the
number of rows (not including
On 2010-10-13 15:28, Robert Haas wrote:
On Wed, Oct 13, 2010 at 6:16 AM, Neil Whelchel wrote:
I might go as far as to rattle the cage of the developers to see if it makes
any sense to add some column oriented storage capability to Postgres. That
would be the hot ticket to be able to specify
I hit an issue with window aggregate costing while experimenting with
providing a count of the full match alongside a limited result set.
Seems that the window aggregate node doesn't take into account that it
has to consume the whole input before outputting the first row. When
this is combined wit
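For reference, the pattern under discussion — returning the total match count alongside a limited result set in one query — looks roughly like this (table and column names are invented for illustration):

```sql
-- Hypothetical example: fetch one page of matches plus the total
-- number of matching rows, computed once over the whole filtered input.
SELECT id, raw_data,
       count(*) OVER () AS total_matches
FROM log
WHERE raw_data > 0
ORDER BY id
LIMIT 50;
```

The window aggregate must consume its entire input before emitting the first row, which is exactly the property the costing reportedly fails to account for.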
"Reuven M. Lerner" writes:
> All of the database-related logic for this application is in server-side
> functions, written in PL/PgSQL. That is, the application never issues a
> SELECT or INSERT; rather, it invokes a function with parameters, and the
> function handles the query. It's not unusu
On Wed, Oct 13, 2010 at 3:30 AM, Reuven M. Lerner wrote:
> Hi, everyone. I'm working with a client to try to optimize their use of
> PostgreSQL. They're running 8.3 on a Windows platform, packaged as part
> of a physical product that is delivered to customers.
>
> We're planning to upgrade to 9.
Neil Whelchel writes:
> Insert the data into one table:
> crash:~# time psql -U test test -q < log.sql
> real    679m43.678s
> user    1m4.948s
> sys     13m1.893s
> crash:~# echo 3 > /proc/sys/vm/drop_caches
> crash:~# time psql -U test test -c "SELECT count(*) FROM log;"
> count
> ---
Neil Whelchel wrote:
> crash:~# time psql -U test test -c "UPDATE log SET
> raw_data=raw_data+1"
> UPDATE 10050886
>
> real    14m13.802s
> user    0m0.000s
> sys     0m0.000s
>
> crash:~# time psql -U test test -c "SELECT count(*) FROM log;"
> count
> --
> 10050886
> (1 row)
>
On Wed, Oct 13, 2010 at 6:16 AM, Neil Whelchel wrote:
> I might go as far as to rattle the cage of the developers to see if it makes
> any sense to add some column oriented storage capability to Postgres. That
> would be the hot ticket to be able to specify an attribute on a column so that
> the b
On Wed, Oct 13, 2010 at 4:38 AM, Neil Whelchel wrote:
> There seems to be a lot of discussion about VACUUM FULL, and its problems. The
> overall buzz seems to be that VACUUM FULL is a bad idea (I could be wrong
> here). It has been some time since I have read the changelogs, but I seem to
> rememb
Mladen Gogala wrote:
I am afraid that my management will not let me use anything that
doesn't exist as a RPM package in the current Red Hat distribution. No
Ubuntu, no Fedora, no manual linking. There will always be that
ominous question: how many other companies are using XFS? From the
busin
On 10/13/2010 8:12 AM, Greg Smith wrote:
The work incorporating a more stable XFS into RHEL started with xfsprogs
3.0.1-6 going into Fedora 11, and 3.1.X would represent a current
release. So your Ubuntu kernel is two major improvement releases
behind, 3.0 and 3.1 were the upgrades to xfsprogs
Dan Harris wrote:
I'm not sure how to figure out what version of XFS we're on, but it's
Linux kernel 2.6.24-24 x86_64 on Ubuntu Server 8.04.3. Postgres
version 8.3
There's the kernel side support that matches your kernel, as well as the
xfsprogs package. The latter is where a lot of distri
On 13/10/2010 12:38 AM, Jesper Krogh wrote:
If some clever postgres hacker could teach postgres to allocate blocks
using posix_fallocate in quite large batches, say .. something like:
fallocate(min(current_relation_size * 0.1, 1073741824))
There doesn't seem to be any use of posix_fallocate in t
On 12.10.10 21:58, Tom Lane wrote:
I'm less than convinced that that approach will result in a significant
win. It's certainly not going to do anything to convert COUNT(*) into
an O(1) operation, which frankly is what the complainants are expecting.
There's basically no hope of solving the "
On 12.10.10 14:44, Craig Ringer wrote:
in the case where you are doing a count(*) query and the WHERE clause is
on an indexed column, could the search just look at the index + the
visibility map rather than doing a sequential search through the
table?
Nope, because the visibility map,
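The query shape in question is the classic indexed-WHERE count, something like the following (the timestamp column and its index are invented for illustration):

```sql
-- Hypothetical: count rows matching an indexed predicate.
-- As of the versions discussed here, this still visits the heap for
-- every matching row, because tuple visibility cannot be determined
-- from the index entry alone.
SELECT count(*)
FROM log
WHERE created_at >= '2010-10-01';
```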
On Wednesday 13 October 2010 01:50:23 Mark Kirkwood wrote:
> On 13/10/10 21:38, Neil Whelchel wrote:
> > So with our conclusion pile so far we can deduce that if we were to keep
> > all of our data in two column tables (one to link them together, and the
> > other to store one column of data), we s
On 13/10/10 21:38, Neil Whelchel wrote:
So with our conclusion pile so far we can deduce that if we were to keep all
of our data in two column tables (one to link them together, and the other to
store one column of data), we stand a much better chance of making the entire
table to be counted fit
On 10/13/2010 3:19 AM, Mark Kirkwood wrote:
I think that major effect you are seeing here is that the UPDATE has
made the table twice as big on disk (even after VACUUM etc), and it has
gone from fitting in ram to not fitting in ram - so cannot be
effectively cached anymore.
In the real world, t
On 10/13/2010 2:47 AM, Neil Whelchel wrote:
Even with VACUUM database table speed degrades
What the heck is the "database table speed"? Tables don't do anything.
--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
www.vmsinfo.com
On Wednesday 13 October 2010 00:19:26 Mark Kirkwood wrote:
> On 13/10/10 19:47, Neil Whelchel wrote:
> > Nope...
> > So, possible conclusions are:
> > 1. Even with VACUUM database table speed degrades as tables are updated.
> > 2. Time testing on a freshly INSERTed table gives results that are not
On 13/10/2010 3:30 PM, Reuven M. Lerner wrote:
My question is whether this is somehow to be expected. Under what
conditions will SQL functions be slower than PL/PgSQL functions?
The main cases I can think of:
- Where the SQL function is inlined (PL/PgSQL functions can't be
inlined, some SQL
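A minimal sketch of the inlining difference (function and table names are invented):

```sql
-- One-statement SQL function: the planner can inline its body into
-- the calling query, so outer predicates and indexes still apply.
CREATE FUNCTION active_orders() RETURNS SETOF orders AS
$$ SELECT * FROM orders WHERE status = 'active' $$
LANGUAGE sql STABLE;

-- PL/pgSQL version: always executed as an opaque call, so the outer
-- query cannot push conditions down into it.
CREATE FUNCTION active_orders_pl() RETURNS SETOF orders AS
$$ BEGIN RETURN QUERY SELECT * FROM orders WHERE status = 'active'; END $$
LANGUAGE plpgsql STABLE;
```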
I guess I have to comment here again and point out that while I am having
this issue with text searches, I avoid using count(*) in such cases, I just
use next and previous links.
Unfortunately sometimes you have to do an ORDER BY on search results, and
then all the rows have to be read...
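The "next and previous links" approach mentioned here is usually keyset pagination, roughly like this (table and column names are invented):

```sql
-- Hypothetical next-page query: no count(*), no OFFSET.
-- Remember the last (rank, id) pair shown and continue from there;
-- only LIMIT+1 rows need to be read, even deep into the result set.
SELECT id, title, rank
FROM search_results
WHERE (rank, id) < (0.42, 99173)   -- values taken from the previous page
ORDER BY rank DESC, id DESC
LIMIT 26;                          -- one extra row => a "next" link exists
```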
Hi, everyone. I'm working with a client to try to optimize their use of
PostgreSQL. They're running 8.3 on a Windows platform, packaged as part
of a physical product that is delivered to customers.
We're planning to upgrade to 9.0 at some point in the coming months, but
this question is relevant
On 13/10/10 19:47, Neil Whelchel wrote:
Nope...
So, possible conclusions are:
1. Even with VACUUM database table speed degrades as tables are updated.
2. Time testing on a freshly INSERTed table gives results that are not real-
world.
3. Filesystem defragmentation helps (some).
4. Cache only mak
29 matches