It looks like it’s having to do a table scan for all the rows above
the id cutoff to see if any meet the filter requirement. “NOT IN” can be very
expensive. An index on that column might help. Have you tried that?
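Something along these lines might be worth trying (the table and column names
are only placeholders, since I don’t have your actual query in front of me):

CREATE INDEX idx_events_id_status ON events (id, status);

-- If the NOT IN list comes from a subquery, rewriting it as NOT EXISTS often
-- helps too: NOT IN has to prove that no match exists and goes sideways on
-- NULLs, while NOT EXISTS can be planned as an anti-join.
SELECT e.*
FROM events e
WHERE e.id > 1000000
  AND NOT EXISTS (SELECT 1 FROM excluded x WHERE x.status = e.status);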
Your rowcounts aren’t high enough to require partitioning or any other changes
to your table that I can see right now.
Mike Sofen (Synthetic Genomics)
From: Mike Sofen Sent: Tuesday, September 27, 2016 8:10 AM
From: Greg Spiegelberg Sent: Monday, September 26, 2016 7:25 AM
I've gotten more responses than anticipated and have answered some questions
and gotten some insight but my challenge again is what should I capture along
the w
can easily do 500
million rows per bucket before approaching anything close to the 30ms max query
time.
Mike Sofen (Synthetic Genomics)
(with guaranteed IOPS), performance against even 100m row tables should still
stay within your requirements.
So Rick’s point about not needing millions of tables is right on. If there’s a
way to create table “clumps”, at least you’ll have a more modest table count.
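To sketch what I mean by a clump (purely hypothetical names): instead of one
physical table per data source, many sources share a table and the index leads
with the source id, so each logical “table” is still a cheap index range:

CREATE TABLE readings_clump_001 (
    source_id   int          NOT NULL,  -- which logical "table" this row belongs to
    recorded_at timestamptz  NOT NULL,
    value       numeric      NOT NULL
);
CREATE INDEX ON readings_clump_001 (source_id, recorded_at);

-- a query against one logical table becomes a plain index range scan
SELECT recorded_at, value
FROM readings_clump_001
WHERE source_id = 42
ORDER BY recorded_at;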
Mike Sofen (Synthetic Genomics)
scale, auto-shard, fault tolerant,
etc…and I’m not a Hadoopie.
I am looking forward to hearing how this all plays out; it will be quite an
adventure! All the best,
Mike Sofen (Synthetic Genomics…on Postgres 9.5x)
ther issues/requirements that are creating other performance
concerns that aren’t obvious in your initial post?
Mike Sofen (Synthetic Genomics)
From: Jim Nasby [mailto:jim.na...@bluetreble.com] Sent: Wednesday, September
07, 2016 12:22 PM
On 9/4/16 7:34 AM, Mike Sofen wrote:
> You raise a good point. However, other disk activities involving
> large data (like backup/restore and pure large table copying), on both
> plat
From: Claudio Freire Sent: Friday, September 02, 2016 1:27 PM
On Thu, Sep 1, 2016 at 11:30 PM, Mike Sofen <mso...@runbox.com> wrote:
> It's obvious the size of the batch exceeded the AWS server memory,
> resulting in a profoundly slower pro
son between Pass 1 and Pass 2: average row lengths were within 7% of
each other (1121 vs 1203) using identical table structures and processing
code; the only difference was the target server.
I'm happy to answer questions about these results.
Mike Sofen (USA)
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tommi K
Sent: Friday, August 26, 2016 7:25 AM
To: Craig James
Cc: andreas kretschmer ;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow query with big tables
Ok, sorry tha
h 20 cores, it can
only support 40 active users?
I come from the SQL Server world where a single 20 core server could support
hundreds/thousands of active users and/or many dozens of background/foreground
data processes. Is there something fundamentally different between the two
platforms rela
> -----Original Message-----
> Thomas Kellerer Wednesday, March 23, 2016 2:51 AM
>
> Jim Nasby schrieb am 11.03.2016 um 17:37:
> > If the blob is in the database then you have nothing extra to do. It's
> > handled
> just like all your other data.
> >
> > If it's a file in a file system then you n
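A tiny sketch of the in-database case (made-up table and column names), where
the blob really is just another column:

-- the blob is a bytea column sitting next to ordinary data
CREATE TABLE document (
    id       bigserial PRIMARY KEY,
    filename text  NOT NULL,
    payload  bytea NOT NULL          -- the blob itself
);

-- reads and writes are plain SQL; the bytes normally arrive as a bind parameter
INSERT INTO document (filename, payload)
VALUES ('report.pdf', '\x255044462d'::bytea);

SELECT filename, octet_length(payload) AS bytes
FROM document
WHERE id = 1;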
Hi Dave,
Database disk performance has to take IOPS into account and, IMO, they matter
more than MB/s, since what counts is the ability of the disk subsystem to write
lots of little bits (usually) versus writing giant globs, especially in direct
attached storage (like yours, versus a SAN). Most db disk benchmarks rev
Dave
On Thu, Mar 17, 2016 at 5:11 PM, Mike Sofen <mso...@runbox.com> wrote: