Hi,
I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number
of sensors. In order to get good performance when querying by timestamp on
each sensor, I partitioned my measures table per sensor. Thus I created
a lot of tables.
I simulated a large sensor network with 3000 nodes
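A minimal sketch of that per-sensor layout, assuming the inheritance-based partitioning that 8.3 offered (declarative partitioning came much later); the table and column names here are made up:

CREATE TABLE measures (
    sensor_id   integer     NOT NULL,
    measured_at timestamptz NOT NULL,
    value       double precision
);

-- One child table per sensor; the CHECK constraint lets
-- constraint_exclusion skip irrelevant partitions.
CREATE TABLE measures_sensor_1 (
    CHECK (sensor_id = 1)
) INHERITS (measures);

CREATE INDEX measures_sensor_1_ts_idx ON measures_sensor_1 (measured_at);

SET constraint_exclusion = on;  -- off by default in 8.3, needed for pruning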
"Bill Moran" <[EMAIL PROTECTED]> writes:
> "Fernando Hevia" <[EMAIL PROTECTED]> wrote:
>>
>> Hi list.
>>
>> I have a table with over 30 million rows. Performance was dropping steadily
>> so I moved old data not needed online to an historic table. Now the table
>> has about 14 million rows. I don
"Fernando Hevia" <[EMAIL PROTECTED]> wrote:
>
> Hi list.
>
> I have a table with over 30 million rows. Performance was dropping steadily
> so I moved old data not needed online to an historic table. Now the table
> has about 14 million rows. I don't need the disk space returned to the OS
> but I
>>> "Fernando Hevia" <[EMAIL PROTECTED]> wrote:
> I have a table with over 30 million rows. Performance was dropping steadily
> so I moved old data not needed online to an historic table. Now the table
> has about 14 million rows. I don't need the disk space returned to the OS
> but I do need to
Hi list.
I have a table with over 30 million rows. Performance was dropping steadily
so I moved old data not needed online to an historic table. Now the table
has about 14 million rows. I don't need the disk space returned to the OS
but I do need to improve performance. Will a plain vacuum do or
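Worth spelling out the usual answer to that question: plain VACUUM only marks the dead rows' space as reusable within the table's files, which is exactly what is wanted when the space does not need to go back to the OS. A sketch, with a hypothetical table name:

-- Plain VACUUM: reclaims dead-row space for reuse, takes no exclusive
-- lock, and refreshes planner statistics when combined with ANALYZE.
VACUUM ANALYZE my_table;

-- VACUUM FULL would compact the table and return space to the OS,
-- but it takes an exclusive lock; not needed in this case.
-- VACUUM FULL my_table;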
On Thu, 21 Aug 2008, Tom Lane wrote:
> I think you need to raise from_collapse_limit and/or
> join_collapse_limit.
Aha, that was it... a much simpler solution than I was fearing.
I had already re-written the queries to get around it, but ran into
another snag with that method, so this was good t
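For reference, the settings Tom is pointing at can be raised per session or in postgresql.conf; the value below is only illustrative (the default for both is 8):

-- Let the planner collapse larger FROM/JOIN lists into one search space
SET from_collapse_limit = 20;
SET join_collapse_limit = 20;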
On Fri, Aug 22, 2008 at 2:31 AM, Moritz Onken <[EMAIL PROTECTED]> wrote:
> Am 21.08.2008 um 19:08 schrieb Merlin Moncure:
>
>> On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Am 21.08.2008 um 16:39 schrieb Scott Carey:
>>>
It looks to me like the work_mem d
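Since the thread turns on work_mem: it applies per sort or hash operation, not per connection, so a session-level bump is usually safer than a global one. An illustrative value:

SET work_mem = '128MB';  -- per sort/hash node, so size it cautiously
SHOW work_mem;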
Gregory Stark wrote:
André Volpato <[EMAIL PROTECTED]> writes:
I think we have almost reached the tuning limit without changing the schema.
It's hard to tell from the plan you posted (and with only a brief look) but it
looks to me like your query with that function is basi
On Aug 20, 2008, at 1:18 PM, Tom Lane wrote:
If you're worried about storage space, I wouldn't go for arrays of
composite :-(. The tuple header overhead is horrendous, almost
certainly a lot worse than hstore.
Oh holy cow, I didn't realize we had a big header in there. Is that
to allow for
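The overhead Tom mentions is easy to see directly; a quick check (exact numbers vary by version and alignment):

-- A composite value carries a full tuple header on top of its data
SELECT pg_column_size(ROW(1::int, 2::int));              -- header + 8 bytes of data
SELECT pg_column_size(1::int) + pg_column_size(2::int);  -- the bare 8 bytes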
Hello,
I'm having trouble with a Nested Loop being selected for a rather
complex query; it turns out this is a pretty bad plan as the nested
loop's row estimates are quite off (1 estimated / 1207881 actual). If
I disable enable_nestloop, the query executes much faster (42 seconds
instead o
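Two session-level experiments that usually come up in threads like this one; the table and column names are hypothetical:

-- Diagnostic only: force the planner away from nested loops and compare
SET enable_nestloop = off;

-- The more durable fix is repairing the row estimate itself, e.g. by
-- raising the statistics target on the misestimated column and re-analyzing
ALTER TABLE big_table ALTER COLUMN fk_col SET STATISTICS 1000;
ANALYZE big_table;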
Hi Mathias,
On Aug 22, 2008, at 8:35 AM, Mathias Stjernström wrote:
I agree with Robert, but I have never heard of Cybercluster before.
Does anyone have any experience with Cybercluster? It sounds really
interesting!
Some months ago I took a look at cybercluster. At that point
cybercluster wa
[for the purpose of this post, 'blocking' refers to an I/O operation
taking a long time for reasons other than the amount of work the I/O
operation itself actually implies; not to use of blocking I/O calls or
anything like that]
Hello,
I have a situation in which deterministic latency is a lot mo
André Volpato <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>>> We are guessing that a dual core 3.0GHz will beat a quad core 2.2,
>>> at least in this environment with fewer than 4 concurrent queries.
>>
>> The most you could hope for from that is less than a 50% speedup. I'd
>> suggest in
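(Presumably the bound comes from the clock ratio: with fewer than 4 concurrent queries each backend is single-threaded, so the best case is roughly 3.0 GHz / 2.2 GHz ≈ 1.36, i.e. about a 36% speedup, comfortably under 50%.)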
Dan Harris wrote:
> My desire would be to have a parallel server that could act as a hot
> standby system with automatic fail over in a multi-master role.
I will add my "me too" for DRBD + Heartbeat.
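For anyone new to that stack: DRBD mirrors a block device synchronously between two nodes, and Heartbeat moves the service over on failure. A minimal, purely illustrative DRBD 8-style resource for the PostgreSQL data directory; hostnames, disks and addresses are made up:

resource pgdata {
    protocol C;                  # fully synchronous replication
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sda3;     # backing device holding the data directory
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}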