On Wed, Jul 05, 2017 at 07:31:39PM +1000, AP wrote:
> On Tue, Jul 04, 2017 at 08:23:20PM -0700, Jeff Janes wrote:
> > On Tue, Jul 4, 2017 at 3:57 AM, AP <a...@zip.com.au> wrote:
> > > The data being indexed is BYTEA, (quasi)random and 64 bytes in size.
> > > The table has over 2 billion entries. The data is not unique. There's
> > > an average of 10 duplicates for every unique value.
> >
> > What is the number of duplicates for the most common value?
>
> Damn. Was going to collect this info as I was doing a fresh upload but
> it fell through the cracks of my mind. It'll probably take at least
> half a day to collect (a simple count(*) on the table takes 1.5-1.75
> hours parallelised across 11 processes) so I'll probably have this in
> around 24 hours if all goes well. (and I don't stuff up the SQL :) )
Well...

 num_ids |  count
---------+----------
       1 | 91456442
       2 | 56224976
       4 | 14403515
      16 | 13665967
       3 | 12929363
      17 | 12093367
      15 | 10347006

So the most common case is a value that appears just once (unique),
followed by values that appear twice (a single dupe).

AP.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
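For reference, a histogram like the one above can be computed with a
two-level aggregation: first count occurrences per value, then count how
many values share each occurrence count. This is only a sketch of what
such a query might look like; the table and column names (the_table,
datum) are hypothetical, not taken from the thread:

```
-- Sketch: for each N (num_ids), how many distinct values occur N times.
-- the_table / datum are placeholder names.
SELECT num_ids, count(*)
FROM (
    SELECT datum, count(*) AS num_ids
    FROM the_table
    GROUP BY datum
) AS per_value
GROUP BY num_ids
ORDER BY count(*) DESC;
```

On a table of this size the inner GROUP BY over ~2 billion rows is the
expensive part, which is consistent with the half-day estimate quoted
above.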