On Thu, Mar 10, 2011 at 11:32 AM, Merlin Moncure wrote:
> On Thu, Mar 10, 2011 at 9:55 AM, Robert Haas wrote:
>> On Wed, Mar 9, 2011 at 6:01 PM, Jim Nasby wrote:
>>> Unfortunately, I don't think the planner actually has that level of
>>> knowledge.
>>
>> Actually, I don't think it would be that hard to teach the planner
>> about that special case...
On Mar 10, 2011, at 3:48 PM, fork wrote:
[much thoughtfulness]
Steve Atkins (blighty.com) writes:
[also much thoughtfulness]
Steve and fork -- thank you, this is super helpful. I meant to tweak
that exact search before sending this around, sorry if that was
confusing. That was meant to be
Steve Atkins (blighty.com) writes:
>
>
> On Mar 10, 2011, at 1:25 PM, Dan Ancona wrote:
>
> > Hi postgressers -
> >
> > As part of my work with voter file data, I pretty regularly have to join one
> > large-ish (over 500k rows) table to another. Sometimes this is via a text
> > field (countyname) + integer (voter id).
On Thu, Mar 10, 2011 at 17:40, fork wrote:
> The data is not particularly sensitive; if something happened and it rolled
> back, that wouldn't be the end of the world. So I don't know if I can use
> "dangerous" settings for WAL, checkpoints, etc. There also aren't a lot of
> concurrent hits on
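If losing the most recently committed transactions after a crash really is acceptable, these are the usual 9.0-era knobs. This is a sketch of a postgresql.conf fragment, not a tuned recommendation, and the values are illustrative only:

```ini
# postgresql.conf (PostgreSQL 9.0-era parameter names; values illustrative)
synchronous_commit = off           # a crash can lose recent commits, never corrupts data
wal_buffers = 16MB                 # larger WAL buffer for write-heavy workloads
checkpoint_segments = 32           # fewer, larger checkpoints during bulk writes
checkpoint_completion_target = 0.9 # spread checkpoint I/O out over the interval
```

fsync = off is the truly "dangerous" setting, since a crash can corrupt the cluster; synchronous_commit = off is usually the better trade.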
On Mar 10, 2011, at 1:25 PM, Dan Ancona wrote:
> Hi postgressers -
>
> As part of my work with voter file data, I pretty regularly have to join one
> large-ish (over 500k rows) table to another. Sometimes this is via a text
> field (countyname) + integer (voter id). I've noticed sometimes this
Hi postgressers -
As part of my work with voter file data, I pretty regularly have to
join one large-ish (over 500k rows) table to another. Sometimes this
is via a text field (countyname) + integer (voter id). I've noticed
sometimes this converges and sometimes it doesn't, seemingly
regardless
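For a join like the one described, a composite index on the two join columns is usually the first thing to try. A minimal sketch, with hypothetical table and column names (nothing below comes from the thread):

```sql
-- Hypothetical schema: voters joined to vote_history on (countyname, voter_id)
CREATE INDEX vote_history_county_voter_idx
    ON vote_history (countyname, voter_id);

ANALYZE vote_history;  -- refresh planner statistics after the build

EXPLAIN ANALYZE
SELECT v.*
FROM voters v
JOIN vote_history h
  ON h.countyname = v.countyname
 AND h.voter_id   = v.voter_id;
```

Whether the planner picks a hash join, a merge join, or an index nested loop depends mostly on the table sizes and work_mem; comparing EXPLAIN ANALYZE output across runs is the way to see which plan it converges on.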
On Thu, Mar 10, 2011 at 10:26 AM, Julius Tuskenis wrote:
> Hello, list
>
> Our company is creating a ticketing system. Of course the performance issues
> are very important to us (as to all of you I guess). To increase speed of
> some queries stable functions are used, but somehow they don't act exactly
> as I expect
Merlin Moncure (gmail.com) writes:
> > I am loathe to create a new table from a select, since the indexes
> > themselves
> > take a really long time to build.
>
> you are aware that updating the field for the entire table, especially
> if there is an index on it (or any field being updated), will
On Thu, Mar 10, 2011 at 3:12 AM, runner wrote:
>
> I'm setting up my first PostgreSQL server to replace an existing MySQL
> server. I've been reading Gregory Smith's book Postgres 9.0 High
> Performance and also Riggs/Krosing's PostgreSQL 9 Administration Cookbook.
> While both of these books are excellent, I am completely new to PostgreSQL
Hello, list
Our company is creating a ticketing system. Of course the performance
issues are very important to us (as to all of you I guess). To increase
speed of some queries stable functions are used, but somehow they don't
act exactly as I expect, so would you please explain what I am doing wrong?
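For reference, a common source of surprise here is that STABLE is a promise to the planner, not a result cache: the function may still be evaluated once per row unless the call can be folded or inlined. A minimal sketch with hypothetical names:

```sql
-- Hypothetical lookup function for a ticketing schema
CREATE OR REPLACE FUNCTION event_price(p_event_id integer)
RETURNS numeric AS $$
    SELECT base_price FROM events WHERE event_id = p_event_id;
$$ LANGUAGE sql STABLE;

-- With a constant argument the planner may evaluate this once;
-- inside a per-row WHERE clause it can still run for every row.
SELECT * FROM tickets WHERE price >= event_price(42);
```

Comparing EXPLAIN ANALYZE output for the function call versus the equivalent inline subquery usually shows where the time actually goes.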
I'm setting up my first PostgreSQL server to replace an existing MySQL server.
I've been reading Gregory Smith's book Postgres 9.0 High Performance and also
Riggs/Krosing's PostgreSQL 9 Administration Cookbook. While both of these
books are excellent, I am completely new to PostgreSQL and
On Thu, Mar 10, 2011 at 9:40 AM, fork wrote:
> Given that doing a massive UPDATE SET foo = bar || ' ' || baz; on a 12 million
> row table (with about 100 columns -- the US Census PUMS for the 2005-2009 ACS)
> is never going to be that fast, what should one do to make it faster?
>
> I set work_mem to 2048MB, but it currently is only using a little bit
On Thu, Mar 10, 2011 at 9:55 AM, Robert Haas wrote:
> On Wed, Mar 9, 2011 at 6:01 PM, Jim Nasby wrote:
>> Unfortunately, I don't think the planner actually has that level of
>> knowledge.
>
> Actually, I don't think it would be that hard to teach the planner
> about that special case...
>
>> A more reasonable fix might be to teach the executor that it can do 2
Given that doing a massive UPDATE SET foo = bar || ' ' || baz; on a 12 million
row table (with about 100 columns -- the US Census PUMS for the 2005-2009 ACS)
is never going to be that fast, what should one do to make it faster?
I set work_mem to 2048MB, but it currently is only using a little bit
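One pattern that often helps with an update touching every row, sketched here with placeholder names (pums, foo, bar, baz stand in for the real table and columns):

```sql
-- Dropping the index on the updated column first avoids maintaining it
-- for 12 million individual row versions; rebuilding it once afterwards
-- is far cheaper. maintenance_work_mem governs the rebuild speed.
SET maintenance_work_mem = '1GB';

BEGIN;
DROP INDEX IF EXISTS pums_foo_idx;
UPDATE pums SET foo = bar || ' ' || baz;
CREATE INDEX pums_foo_idx ON pums (foo);
COMMIT;
```

Note that work_mem only affects sorts and hashes inside queries; it does nothing for the UPDATE itself, which is why raising it to 2048MB shows no effect here.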
On Wed, Mar 9, 2011 at 6:01 PM, Jim Nasby wrote:
> Unfortunately, I don't think the planner actually has that level of knowledge.
Actually, I don't think it would be that hard to teach the planner
about that special case...
> A more reasonable fix might be to teach the executor that it can do 2
Hi, all. I've done some further analysis and found a form that works if I
split things over two separate queries (B1 and B2, below), but I still have
trouble when combining them (B, below).
This is the full pseudo-query: SELECT FROM A UNION SELECT FROM B ORDER BY
dateTime DESC LIMIT 50
In that pseudo-query:
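One shape that often lets the planner use an index on dateTime in each branch is to give every branch its own ORDER BY/LIMIT, so the outer sort sees at most 100 rows. A sketch (A, B, and the column list are placeholders, and UNION ALL is only equivalent to UNION if the branches cannot produce duplicates):

```sql
SELECT *
FROM (
    (SELECT dateTime, payload FROM A ORDER BY dateTime DESC LIMIT 50)
    UNION ALL
    (SELECT dateTime, payload FROM B ORDER BY dateTime DESC LIMIT 50)
) AS merged
ORDER BY dateTime DESC
LIMIT 50;
```

This is essentially the B1/B2 split done inside one statement; if real duplicate elimination is needed, keep UNION but expect the extra sort or hash step it requires.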
Hi Jim, thanks for your answer.
The database model is something like this:
Measure (Id, numbering, Date, crcCorrect, sensorId), plus a SimpleMeasure
(Id, doubleValue) and a GenericMeasure (Id, BlobValue, numberOfElements);
in the UML model, SimpleMeasure and GenericMeasure inherit from the
Measure class.
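If this hierarchy is being mapped onto PostgreSQL, table inheritance is one direct translation. A sketch of the DDL implied by the description (column types are guessed from the attribute names):

```sql
CREATE TABLE measure (
    id          serial PRIMARY KEY,
    numbering   integer,
    date        timestamp,
    crc_correct boolean,
    sensor_id   integer
);

-- Child tables inherit measure's columns and add their own.
CREATE TABLE simple_measure (
    double_value double precision
) INHERITS (measure);

CREATE TABLE generic_measure (
    blob_value          bytea,
    number_of_elements  integer
) INHERITS (measure);
```

Caveat: indexes and unique constraints are not inherited in PostgreSQL, so each child table needs its own, and a query against measure scans all children unless constrained.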