On Tue, Jul 2, 2013 at 12:33 AM, Atri Sharma atri.j...@gmail.com wrote:
If you want to get a useful response to your emails, consider
including a statement of what you think the problem is and why you
think your proposed changes will help. Consider offering a test case
that performs badly and
On Tue, Jul 2, 2013 at 5:20 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Jul 2, 2013 at 12:33 AM, Atri Sharma atri.j...@gmail.com wrote:
If you want to get a useful response to your emails, consider
including a statement of what you think the problem is and why you
think your proposed
On Tue, Jul 2, 2013 at 5:04 AM, Atri Sharma atri.j...@gmail.com wrote:
I think if you try it you'll find that we perform quite well on
data sets of this kind - and if you read the code you'll see why.
Right, let me read the code again from that viewpoint.
In my opinion, it would be
On Tue, Jul 2, 2013 at 12:36 PM, Peter Geoghegan p...@heroku.com wrote:
On Tue, Jul 2, 2013 at 5:04 AM, Atri Sharma atri.j...@gmail.com wrote:
I think if you try it you'll find that we perform quite well on
data sets of this kind - and if you read the code you'll see why.
Right, let me
My PostgreSQL (9.2) is crashing after certain load tests. Currently,
PostgreSQL crashes when 800 to 1000 threads are run simultaneously against a
schema with 10 million records. Not sure if we have to tweak some more
Postgres parameters.
--
jasmine
On Mon, Jul 01, 2013 at 04:42:04AM -0700, jasmine wrote:
My PostgreSQL (9.2) is crashing after certain load tests. Currently,
PostgreSQL crashes when 800 to 1000 threads are run simultaneously against a
schema with 10 million records. Not sure if we have to tweak some more
Postgres parameters.
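(As a general note, not from this thread: PostgreSQL forks one backend process per connection, so 800 to 1000 direct connections commonly exhaust memory or file descriptors during load tests. Beyond parameter tweaks, the usual fix is to cap connections and put a pooler in front. A hedged postgresql.conf sketch; the values below are illustrative assumptions, not recommendations for any particular machine:)

```
# postgresql.conf -- illustrative values, assuming a mid-sized server
max_connections = 200     # cap direct backends; run a pooler (e.g. pgbouncer) in front
shared_buffers = 2GB      # shared cache; commonly sized around 25% of RAM
work_mem = 16MB           # per sort/hash, per backend -- keep modest at high concurrency
```

(With a pooler such as pgbouncer, the 800 to 1000 client threads multiplex over far fewer server backends. The server log, and dmesg in case of the OOM killer, would confirm the actual failure mode.)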
On Mon, Jul 1, 2013 at 10:02 PM, David Fetter da...@fetter.org wrote:
On Mon, Jul 01, 2013 at 04:42:04AM -0700, jasmine wrote:
My PostgreSQL (9.2) is crashing after certain load tests. Currently,
PostgreSQL crashes when 800 to 1000 threads are run simultaneously against a
schema with 10 million records
On Sun, Jun 30, 2013 at 8:30 AM, Atri Sharma atri.j...@gmail.com wrote:
I have been reading the recent discussion and was researching a bit, and I
think that we should really go with the idea of randomising the input data (if
it is not completely presorted), to ensure that we do not get
On Mon, Jul 1, 2013 at 4:32 PM, Robert Haas robertmh...@gmail.com wrote:
This shouldn't be too complex, and should give us a fixed n log n complexity
even for wild data sets, without affecting existing normal data sets that
are present in everyday transactions. I even believe that those data
On Mon, Jul 1, 2013 at 3:54 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Mon, Jul 1, 2013 at 4:32 PM, Robert Haas robertmh...@gmail.com wrote:
This shouldn't be too complex, and should give us a fixed n log n complexity
even for wild data sets, without affecting existing normal data sets
On Mon, Jul 1, 2013 at 5:12 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jul 1, 2013 at 3:54 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Mon, Jul 1, 2013 at 4:32 PM, Robert Haas robertmh...@gmail.com wrote:
This shouldn't be too complex, and should give us a fixed n log n
On Mon, Jul 1, 2013 at 12:32 PM, Robert Haas robertmh...@gmail.com wrote:
I have been reading the recent discussion and was researching a bit, and I
think that we should really go with the idea of randomising the input
data (if it is not completely presorted), to ensure that we do not get
On Mon, Jul 1, 2013 at 4:31 PM, Claudio Freire klaussfre...@gmail.com wrote:
What?
A median of medians algorithm will guarantee floor(N/2) elements on
the smaller partition. That's the definition of the median.
Note that I'm referring to picking the actual median of all tuples,
not just a sample. That's
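(For readers following along: the guarantee Claudio refers to comes from exact selection. If the pivot is the true median of all tuples, each quicksort partition receives floor(N/2) elements, so the recursion depth is bounded. A minimal sketch of median-of-medians selection, in Python for illustration only; PostgreSQL's sort code is C and does not work this way:)

```python
def select(items, k):
    """Return the k-th smallest element (0-based) in worst-case O(n) time."""
    if len(items) <= 5:
        return sorted(items)[k]
    # Split into groups of 5 and take each group's median.
    medians = [sorted(items[i:i + 5])[len(items[i:i + 5]) // 2]
               for i in range(0, len(items), 5)]
    # Recursively pick the median of medians as the pivot; it is
    # guaranteed to fall between the 30th and 70th percentiles.
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in items if x < pivot]
    hi = [x for x in items if x > pivot]
    eq = len(items) - len(lo) - len(hi)   # elements equal to the pivot
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + eq:
        return pivot
    return select(hi, k - len(lo) - eq)

def median(items):
    # The true median yields a floor(N/2) / ceil(N/2) quicksort split.
    return select(items, (len(items) - 1) // 2)
```

(The trade-off being debated upthread is that this exactness costs a full O(n) selection pass per partitioning step, which is why real quicksorts sample instead.)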
On Tue, Jul 2, 2013 at 1:02 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jun 30, 2013 at 8:30 AM, Atri Sharma atri.j...@gmail.com wrote:
I have been reading the recent discussion and was researching a bit, and I
think that we should really go with the idea of randomising the input
Hi all,
I have been reading the recent discussion and was researching a bit, and I
think that we should really go with the idea of randomising the input data (if
it is not completely presorted), to ensure that we do not get quadratic
complexity.
One easy way to do that could be to take a
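(The randomisation idea in this proposal can be sketched in a few lines: permute the input once before quicksorting, so that no fixed input ordering can deterministically trigger the quadratic worst case. A hedged Python illustration, not the PostgreSQL implementation:)

```python
import random

def randomized_sort(items, seed=None):
    """Shuffle, then quicksort: expected O(n log n) for any input order."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)  # an adversary can no longer pick a bad ordering

    def quicksort(xs):
        if len(xs) <= 1:
            return xs
        pivot = xs[0]  # first element is safe once the input is shuffled
        lo = [x for x in xs[1:] if x < pivot]
        hi = [x for x in xs[1:] if x >= pivot]
        return quicksort(lo) + [pivot] + quicksort(hi)

    return quicksort(items)
```

(In practice the cheaper equivalent is to randomise only the pivot choice rather than shuffling the whole input, which avoids an extra O(n) pass and the loss of existing presortedness.)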