Hi,
In reference to the seq scans roadmap, I have just submitted a patch
that addresses some of the concerns.
The patch does this:
1. for a small relation (smaller than 60% of the buffer pool), use the
current logic
2. for a big relation:
- use a ring buffer in the heap scan
- pin
The patch has no effect on scans that do updates. The
KillAndReadBuffer routine does not force out a buffer if the dirty
bit is set. So updated pages revert to the current performance
characteristics.
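The buffer-handling rules above can be sketched as a toy model. The pool size, ring size, and helper names here are illustrative assumptions; `kill_and_read_buffer` follows the routine named in the message but is not the actual patch code:

```python
# Toy model of the scan policy above: small relations use the normal
# buffer pool; big relations cycle through a fixed ring of slots, and a
# dirty buffer is never forced out of its ring slot.

BUFFER_POOL_PAGES = 100   # assumed pool size for the sketch
RING_PAGES = 16           # assumed ring size

def scan_strategy(relation_pages):
    # Rules 1 and 2: use the ring only for relations bigger than
    # 60% of the buffer pool.
    return "ring" if relation_pages > 0.6 * BUFFER_POOL_PAGES else "normal"

def kill_and_read_buffer(ring, slot, new_page, dirty_pages, pool):
    # Read new_page into a ring slot. If the old buffer's dirty bit is
    # set, it is not forced out: it stays cached in the main pool, so
    # updated pages keep the current performance characteristics.
    old = ring[slot]
    if old is not None and old in dirty_pages:
        pool.add(old)
    ring[slot] = new_page

# One sequential pass over a 40-page relation with page 5 updated:
ring, pool, dirty = [None] * RING_PAGES, set(), {5}
for page in range(40):
    kill_and_read_buffer(ring, page % RING_PAGES, page, dirty, pool)
```

In this model, pages read clean are simply recycled with their slot; only the dirty page migrates to the main pool.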
-cktan
GreenPlum, Inc.
On May 10, 2007, at 5:22 AM, Heikki Linnakangas wrote:
Sorry, a 16 x 8K page ring is indeed too small. The reason we selected 16
is that Greenplum DB runs on a 32K page size, so we are indeed
reading 128K at a time. The number of pages in the ring should be made
relative to the page size, so that you achieve 128K per read.
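That sizing rule reduces to simple arithmetic. A sketch; the 128K target is from the message, the helper name is mine:

```python
TARGET_READ_BYTES = 128 * 1024   # aim for 128K per read, as above

def ring_pages(page_size_bytes):
    # Number of pages in the ring so one pass over it covers 128K.
    return max(1, TARGET_READ_BYTES // page_size_bytes)

# stock Postgres 8K pages -> 16-page ring; 32K pages -> 4-page ring
```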
Also agree that KillAndReadBuffer
Hi All,
COPY/INSERT are also bottlenecked on record-at-a-time insertion into
the heap, and on checking for pre-insert triggers, post-insert triggers,
and constraints.
To speed things up, we really need to special-case insertions without
triggers and constraints, [probably allow for unique
CPU at all. If we could predetermine that there are no
triggers for a relation, inserts into that relation could then
follow a different path that inserts N tuples at a time.
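The proposed fast path might look like this sketch. The `Relation` shape, the `copy_into` name, and the batch size of 1000 are all illustrative assumptions, not Postgres API:

```python
BATCH = 1000   # assumed multi-insert batch size

class Relation:
    def __init__(self, triggers=(), constraints=()):
        self.triggers = list(triggers)
        self.constraints = list(constraints)
        self.heap = []

def copy_into(rel, tuples):
    # Predetermine once whether any triggers or constraints exist.
    if rel.triggers or rel.constraints:
        # current path: record-at-a-time, with per-tuple checks
        for t in tuples:
            rel.heap.append(t)
        return "row-at-a-time"
    # fast path: insert N tuples at a time
    tuples = list(tuples)
    for i in range(0, len(tuples), BATCH):
        rel.heap.extend(tuples[i:i + BATCH])
    return "multi-insert"
```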
Regards,
-cktan
On May 13, 2007, at 4:54 PM, Tom Lane wrote:
CK Tan [EMAIL PROTECTED] writes:
COPY/INSERT
by email to ck...@vitessedata.com .
Thank you for your help.
--
CK Tan
Vitesse Data, Inc.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Thanks,
-cktan
On Oct 17, 2014, at 6:43 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Oct 17, 2014 at 8:14 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Oct 17, 2014 at 7:32 AM, CK Tan ck...@vitessedata.com wrote:
Hi everyone,
Vitesse DB 9.3.5.S is Postgres 9.3.5 with an LLVM
Happy to contribute to that decision :-)
On Fri, Oct 17, 2014 at 11:35 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2014-10-17 13:12:27 -0400, Tom Lane wrote:
Well, that's pretty much cheating: it's too hard to disentangle what's
coming from JIT vs
of Postgres -- the implementation needs to be as
non-invasive as possible.
Regards,
-cktan
On Fri, Oct 17, 2014 at 8:40 PM, David Gould da...@sonic.net wrote:
On Fri, 17 Oct 2014 13:12:27 -0400
Tom Lane t...@sss.pgh.pa.us wrote:
CK Tan ck...@vitessedata.com writes:
The bigint sum, avg, count
Hi Mark,
Vitesse DB won't be open-sourced; otherwise it would have been a contrib
module to Postgres. We should take further discussions off this list.
People should contact me directly if there are any questions.
Thanks,
ck...@vitessedata.com
On Fri, Oct 17, 2014 at 10:55 PM, Mark Kirkwood
Not sure whether they modified/optimized PostgreSQL with respect to the
MONEY data type and/or how much performance that gained, so CCing CK Tan
as well.
Michael
[1]
http://vitesse-timing-on.blogspot.de/2014/10/running-tpch-on-postgresql-part-1.html
[2] http://vitessedata.com/benchmark
http://vldb.org/pvldb/vol5/p1790_andrewlamb_vldb2012.pdf
In sketch:
There is the concept of a Write-Optimized Store (WOS) and a
Read-Optimized Store (ROS), and a TupleMover that moves records from WOS to
ROS (somewhat like vacuum), and from ROS to WOS for updates. It seems to
me that heap is
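A minimal sketch of that WOS/ROS/TupleMover flow, with dict-backed toy stores; none of this mirrors the actual Vertica or Postgres structures:

```python
class ColumnStore:
    # WOS: small, mutable, insert-friendly; ROS: large, scan-friendly.
    def __init__(self):
        self.wos = {}   # row_id -> row
        self.ros = {}   # row_id -> row

    def insert(self, rid, row):
        self.wos[rid] = row            # inserts always land in the WOS

    def update(self, rid, row):
        self.ros.pop(rid, None)        # a ROS row moves back to the WOS
        self.wos[rid] = row

    def tuple_mover(self):
        # somewhat like vacuum: batch-migrate settled WOS rows to ROS
        self.ros.update(self.wos)
        self.wos.clear()
```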
On 14 June 2015 at 23:51, Tomas Vondra tomas.von...@2ndquadrant.com wrote:
The current state, where HashAgg just blows up the memory, is just not
reasonable, and we need to track the memory to fix that problem.
Meh. HashAgg could track its memory usage without loading the entire
system
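One way a HashAgg node could do that self-tracking, sketched with rough per-entry sizes. The byte limit and the spill list are illustrative; real spilling would go to disk:

```python
import sys

class HashAgg:
    # Tracks its own bytes as entries are added, instead of
    # instrumenting the entire memory-allocation system.
    def __init__(self, limit_bytes=4096):
        self.table = {}
        self.used = 0
        self.limit = limit_bytes
        self.spilled = []   # stand-in for spilling groups to disk

    def advance(self, key, value):
        if key not in self.table:
            entry_bytes = sys.getsizeof(key) + sys.getsizeof(value)
            if self.used + entry_bytes > self.limit:
                self.spilled.append((key, value))
                return
            self.used += entry_bytes
            self.table[key] = 0
        self.table[key] += value
```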
You're right. I misread the problem description.
On Tue, May 26, 2015 at 3:13 AM, Petr Jelinek p...@2ndquadrant.com wrote:
On 26/05/15 11:59, CK Tan wrote:
It has to do with the implementation of slot_getattr, which tries to do the
deform on-demand lazily.
If you do select a, b, c, the execution would do slot_getattr(1) and deform
a, and then slot_getattr(2), which reparses the tuple to deform b, and
finally slot_getattr(3), which parses the tuple yet again.
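A toy cost model of the two behaviors in play: `walk_steps` counts attribute-parse work, contrasting re-parsing from the start on every call with a slot that remembers how far it has already deformed. This is a sketch of the trade-off, not actual executor code:

```python
class Slot:
    def __init__(self):
        self.deformed = 0     # attributes already extracted
        self.walk_steps = 0   # total attribute-parse work performed

    def getattr_reparse(self, k):
        # deform attribute k by walking the tuple from the start
        self.walk_steps += k

    def getattr_incremental(self, k):
        # only walk the not-yet-deformed part of the tuple
        if k > self.deformed:
            self.walk_steps += k - self.deformed
            self.deformed = k

reparse, incremental = Slot(), Slot()
for k in (1, 2, 3):              # select a, b, c
    reparse.getattr_reparse(k)
    incremental.getattr_incremental(k)
# re-parsing walks 1 + 2 + 3 = 6 attributes; incremental walks 3
```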
Hi Hackers,
I am looking for some help in creating LEFT/RIGHT/FULL sort-merge-join plans.
Does anyone have a complete and reproducible script that would generate
those plans? Can I find it in the regression test suite? If not, how do you
exercise those code paths for QA purposes?
Thanks!
-cktan
On Mon, Dec 12, 2016 at 6:14 PM, Andres Freund wrote:
>
>
> For Q1 I think the bigger win is JITing the transition function
> invocation in advance_aggregates/transition_function - that's IIRC where
> the biggest bottleneck lies.
>
Yeah, we bundle the agg core into our expr
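The win Andres describes can be illustrated with a toy contrast between generic per-row dispatch and a specialized loop. This is a pure Python model with no LLVM involved, and the function names are mine:

```python
def sum_trans(state, val):
    # a generic transition function, normally invoked per row
    return state + val

def interpreted_agg(rows, transfns, states):
    # advance_aggregates-style: dispatch each transition per row
    for row in rows:
        for i, fn in enumerate(transfns):
            states[i] = fn(states[i], row)
    return states

def jitted_sum(rows, state):
    # what JITing effectively buys: the transition inlined into one
    # tight loop, with no per-row function dispatch
    for row in rows:
        state += row
    return state
```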
Andres,
> dev (no jiting):
> Time: 30343.532 ms
> dev (jiting):
> SET jit_tuple_deforming = on;
> SET jit_expressions = true;
>
> Time: 24439.803 ms
FYI, a ~20% improvement for TPCH Q1 is consistent with what we find when we
only JIT expressions.
Cheers,
-cktan