On Tue, May 22, 2012 at 11:33 AM, Jeff Janes wrote:
> On Mon, May 21, 2012 at 2:29 PM, Merlin Moncure wrote:
>> See here:
>> http://www.devheads.net/database/postgresql/performance/index-all-necessary-columns-postgres-vs-mssql.htm
>> for an 'in the wild' gripe about not having index scans.
On Mon, May 21, 2012 at 9:37 PM, Simon Riggs wrote:
>>> This is exactly what we do for VACUUM and it works faster there.
>>
>> The reason that's okay for vacuum is that vacuum doesn't care if it
>> visits the same index tuple multiple times. It will not work for real
>> queries, unless you would
On Mon, May 21, 2012 at 2:29 PM, Merlin Moncure wrote:
> On Mon, May 21, 2012 at 4:17 PM, Jeff Janes wrote:
>>
>> For vaguely real life, take your example of pgbench -i -s200 -F 50,
>> and I have 2Gig RAM, which seems to be the same as you do.
>>
>> With select only work load (pgbench -S -M prepa
On Tue, May 22, 2012 at 12:29 AM, Merlin Moncure wrote:
> Generally though the real world wins (although the gains will be
> generally less spectacular) are heavily i/o bound queries where the
> indexed subset of data you want is nicely packed and the (non
> clustered) heap records are all over th
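Merlin's scenario above can be sketched with a toy model: when the matching rows are packed in the index but their (non-clustered) heap tuples are scattered, an ordinary index scan touches roughly one heap page per tuple, while an index-only scan touches none. All constants below are illustrative assumptions, not measurements.

```python
# Toy model: count distinct heap pages an ordinary index scan must fetch
# for scattered vs. packed heap layouts.  An index-only scan skips the
# heap entirely, so neither count applies to it.
import random

TUPLES_PER_PAGE = 100          # assumed heap fill factor
N_MATCHES = 1_000              # rows satisfying the query
HEAP_PAGES = 100_000           # total heap pages

random.seed(42)

# Scattered layout: each matching tuple lives on a random heap page.
scattered = {random.randrange(HEAP_PAGES) for _ in range(N_MATCHES)}

# Packed (clustered) layout: matches are contiguous in the heap.
packed = {tid // TUPLES_PER_PAGE for tid in range(N_MATCHES)}

print(len(scattered))   # close to 1000 distinct pages to fetch
print(len(packed))      # only 10 pages
```

With the scattered layout nearly every match costs a separate page read, which is exactly the case where avoiding the heap pays off most.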
On Mon, May 21, 2012 at 4:17 PM, Jeff Janes wrote:
> On Mon, May 21, 2012 at 1:42 PM, Josh Berkus wrote:
>>
>>> Earlier you said that this should be an ideal setup for IOS. But it
>>> isn't really--the ideal set up is one in which the alternative to an
>>> IOS is a regular index scan which makes
On Mon, May 21, 2012 at 1:42 PM, Josh Berkus wrote:
>
>> Earlier you said that this should be an ideal setup for IOS. But it
>> isn't really--the ideal set up is one in which the alternative to an
>> IOS is a regular index scan which makes many uncached scattered reads
>> into the heap. I don't
On 21 May 2012 16:42, Josh Berkus wrote:
>
>> Earlier you said that this should be an ideal setup for IOS. But it
>> isn't really--the ideal set up is one in which the alternative to an
>> IOS is a regular index scan which makes many uncached scattered reads
>> into the heap. I don't think that
> Earlier you said that this should be an ideal setup for IOS. But it
> isn't really--the ideal set up is one in which the alternative to an
> IOS is a regular index scan which makes many uncached scattered reads
> into the heap. I don't think that that situation can't really be
> engineered wit
On 21 May 2012 16:02, Tom Lane wrote:
> Simon Riggs writes:
>> Surely the way to solve this is by having a new plan node that does a
>> physical SeqScan of the index relation. It means we wouldn't preserve
>> the sort order of the rows from the index, but that is just a plan
>> cost issue.
>
>> T
On Mon, May 21, 2012 at 10:44 AM, Josh Berkus wrote:
>
> Right. So what I'm trying to figure out is why counting an index which
> fits in ram (and I've confirmed via EXPLAIN ( buffers on ) ) is not
> being heap-fetched or read from disk would take 25% as long as counting
> a table which is 80% on
Simon Riggs writes:
> Surely the way to solve this is by having a new plan node that does a
> physical SeqScan of the index relation. It means we wouldn't preserve
> the sort order of the rows from the index, but that is just a plan
> cost issue.
> This is exactly what we do for VACUUM and it wor
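Simon's proposal trades sort order for sequential I/O. A minimal sketch of that trade-off, with index "pages" modeled as plain lists of keys (purely hypothetical data, not PostgreSQL structures):

```python
# Reading leaf pages in physical file order is fast, but page splits
# leave leaves shuffled within the file, so the keys come back unsorted
# and a plan that needs the index's sort order must re-sort.
sorted_keys = list(range(30))

# Logical (leaf-chain) order: pages linked in key order.
logical_pages = [sorted_keys[i:i + 10] for i in range(0, 30, 10)]

# Physical order on disk need not match the leaf chain.
physical_pages = [logical_pages[2], logical_pages[0], logical_pages[1]]

logical_scan = [k for page in logical_pages for k in page]
physical_scan = [k for page in physical_pages for k in page]

print(logical_scan == sorted(logical_scan))    # True: order preserved
print(physical_scan == sorted(physical_scan))  # False: must re-sort
print(sorted(physical_scan) == logical_scan)   # True: same tuples seen
```

This is why Simon frames the lost ordering as "just a plan cost issue": the scan still returns every tuple, it simply can't feed a plan that relies on index order without an explicit sort.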
On 21 May 2012 13:41, Tom Lane wrote:
> Josh Berkus writes:
>> Well, if it's not CPU costs, then something else is eating the time,
>> since I'm seeing per-tuple COUNT counts on indexes being 400% more than
>> on heap.
>
> Well, I'm not: as I said, it looks like about 10% here. Perhaps you're
>
On 5/21/12 10:41 AM, Tom Lane wrote:
> Josh Berkus writes:
>> Well, if it's not CPU costs, then something else is eating the time,
>> since I'm seeing per-tuple COUNT counts on indexes being 400% more than
>> on heap.
>
> Well, I'm not: as I said, it looks like about 10% here. Perhaps you're
> t
Josh Berkus writes:
> Well, if it's not CPU costs, then something else is eating the time,
> since I'm seeing per-tuple COUNT counts on indexes being 400% more than
> on heap.
Well, I'm not: as I said, it looks like about 10% here. Perhaps you're
testing a cassert-enabled build?
> In the airpor
> Anyway, on my machine it seems that the per-tuple CPU costs for SELECT
> COUNT(*) with an index-only scan are something like 10% higher than the
> per-tuple costs with a heap scan. We might get that down to roughly par
> with some hacking, but it's never going to be vastly better. The
> argume
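Tom's ~10% figure can be put into a back-of-the-envelope cost model: with everything cached, the index-only scan's extra per-tuple CPU makes it slightly slower, and its wins come entirely from the pages it does not read. Every constant below is an illustrative assumption except the 10% CPU delta, which is Tom's measurement.

```python
# Simple two-term cost model: per-tuple CPU plus per-page fetch cost.
N_TUPLES = 1_000_000
CPU_PER_TUPLE_HEAP = 1.0        # arbitrary cost units
CPU_PER_TUPLE_IOS = 1.1         # ~10% higher, per Tom's measurement
HEAP_PAGES = 50_000             # pages a seq scan must read (assumed)
INDEX_PAGES = 5_000             # index ~1/10 the heap's size (assumed)
PAGE_COST_CACHED = 0.0          # page already in shared_buffers
PAGE_COST_DISK = 100.0          # page fetched from disk (assumed)

def total_cost(cpu_per_tuple, pages, page_cost):
    return N_TUPLES * cpu_per_tuple + pages * page_cost

# Fully cached: the CPU overhead dominates, so IOS is slightly slower.
heap_cached = total_cost(CPU_PER_TUPLE_HEAP, HEAP_PAGES, PAGE_COST_CACHED)
ios_cached = total_cost(CPU_PER_TUPLE_IOS, INDEX_PAGES, PAGE_COST_CACHED)
print(ios_cached > heap_cached)   # True

# Heap cold, index cached: IOS wins by skipping 50k disk reads.
heap_cold = total_cost(CPU_PER_TUPLE_HEAP, HEAP_PAGES, PAGE_COST_DISK)
ios_warm = total_cost(CPU_PER_TUPLE_IOS, INDEX_PAGES, PAGE_COST_CACHED)
print(ios_warm < heap_cold)       # True
```

Under this model the in-RAM benchmarks in this thread are the worst case for index-only scans, which matches what several posters observed.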
On Sun, May 20, 2012 at 3:24 PM, Tom Lane wrote:
> Another possibility is to try to reduce the costs of index_getnext_tid
> and FunctionCall2Coll, which are basically just trampolines to reach
> btgettuple. It's not immediately obvious how to make that much better
> though.
Hmm... this seems awf
Jeff Janes writes:
> On Thu, May 17, 2012 at 11:35 AM, Joshua Berkus wrote:
>> That's in-RAM speed ... I ran the query twice to make sure the index was
>> cached, and it didn't get any better. And I meant 5X per byte rather than
>> 5X per tuple.
> Ah, OK that makes more sense. I played aroun
On Thu, May 17, 2012 at 11:35 AM, Joshua Berkus wrote:
> Jeff,
>
> That's in-RAM speed ... I ran the query twice to make sure the index was
> cached, and it didn't get any better. And I meant 5X per byte rather than 5X
> per tuple.
Ah, OK that makes more sense. I played around with this, spec
Jeff,
That's in-RAM speed ... I ran the query twice to make sure the index was
cached, and it didn't get any better. And I meant 5X per byte rather than 5X
per tuple.
I talked this over with Haas, and his opinion is that we have a LOT of overhead
in the way we traverse indexes, especially l
On Thu, May 17, 2012 at 5:22 AM, Joshua Berkus wrote:
> Ants,
>
> Well, that's somewhat better, but again hardly the gain in performance I'd
> expect to see ... especially since this is ideal circumstances for index-only
> scan.
>
> bench2=# select count(*) from pgbench_accounts;
> count
>
Ants,
Well, that's somewhat better, but again hardly the gain in performance I'd
expect to see ... especially since this is ideal circumstances for index-only
scan.
bench2=# select count(*) from pgbench_accounts;
count
--
2000
(1 row)
Time: 3827.508 ms
bench2=# set enable_
On Thu, May 17, 2012 at 6:08 AM, Joshua Berkus wrote:
> As you can see, the indexonlyscan version of the query spends 5% as much time
> reading the data as the seq scan version, and doesn't have to read the heap
> at all. Yet it spends 20 seconds doing ... what, exactly?
>
> BTW, kudos on the n
So, I set up a test which should have been ideal setup for index-only scan.
The index was 1/10 the size of the table, and fit in RAM (1G) which the table
does not:
bench2=# select pg_size_pretty(pg_relation_size('pgbench_accounts_pkey'));
pg_size_pretty
428 MB
(1 row)
be
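The sizing in Josh's setup can be checked with quick arithmetic: the pkey index is 428 MB and stated to be 1/10 the table's size, against 1 GB of RAM, so the index is fully cacheable while the heap is not. The table size below is derived from the 1/10 ratio, not measured.

```python
# Sketch of the cache-fit arithmetic from the test setup.
RAM_MB = 1024
index_mb = 428
table_mb = index_mb * 10      # "index was 1/10 the size of the table"

print(index_mb < RAM_MB)      # True: the index fits in RAM
print(table_mb > RAM_MB)      # True: the heap cannot be fully cached
```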