Hi,
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
Both the current state and all historical records need to be kept.
I'm trying to decide between these two designs:
1) create a table for current data,
Ivan Voras wrote:
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
Both the current state and all historical records need to be kept.
I'm trying to decide between these two designs:
1) create a table for
Option 1 is somewhere between 2 and 3 times more work for the database
than option 2.
Do you need every sensor update to hit the database? In a situation like
this I'd be tempted to keep the current values in the application itself and
then sweep them all into the database periodically.
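The keep-it-in-the-application-and-sweep-periodically idea above can be sketched like this; the class and method names are illustrative, not from the thread:

```python
import time

class SensorBuffer:
    """Keep the latest value per sensor in memory and batch the
    historical rows for a periodic bulk insert into the database."""

    def __init__(self):
        self.current = {}   # sensor_id -> latest reading
        self.pending = []   # (sensor_id, value, timestamp) rows awaiting flush

    def update(self, sensor_id, value, ts=None):
        ts = ts if ts is not None else time.time()
        self.current[sensor_id] = value
        self.pending.append((sensor_id, value, ts))

    def flush(self):
        """Hand back all buffered rows for one bulk INSERT (or COPY)
        and clear the buffer; updates are delayed, never lost."""
        rows, self.pending = self.pending, []
        return rows
```

A sweep thread or cron-style timer would call flush() every few seconds and write the returned rows in a single transaction.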
On Thu, May 28, 2009 at 2:54 PM, Ivan Voras ivo...@freebsd.org wrote:
The volume of sensor data is potentially huge, on the order of 500,000
updates per hour. A sensor reading is a few numeric(15,5) numbers.
The size of that dataset, combined with the apparent simplicity of
your schema and the
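A quick sanity check on those numbers (the bytes-per-row figure below is a rough assumption for illustration, not from the thread):

```python
# Back-of-the-envelope sizing for the stated sensor load.
updates_per_hour = 500_000
updates_per_sec = updates_per_hour / 3600   # ~139 rows/s sustained
rows_per_day = updates_per_hour * 24        # 12,000,000 rows/day

# Assumed row size: a few numeric(15,5) columns plus PostgreSQL tuple
# overhead; 100 bytes is a guess, used only to get an order of magnitude.
bytes_per_row = 100
gb_per_day = rows_per_day * bytes_per_row / 1e9   # ~1.2 GB/day
```

So the insert rate itself is modest; the accumulated volume is what drives the schema and retention decisions.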
2009/5/28 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
Ivan Voras wrote:
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
Both the current state and all historical records need to be kept.
2009/5/28 Nikolas Everett nik9...@gmail.com:
Option 1 is somewhere between 2 and 3 times more work for the database
than option 2.
Yes, for writes.
Do you need every sensor update to hit the database? In a situation like
We can't miss an update - they can be delayed but they all need
On Thu, May 28, 2009 at 04:55:34PM +0200, Ivan Voras wrote:
2009/5/28 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
Ivan Voras wrote:
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
The
2009/5/28 Alexander Staubo a...@bengler.no:
On Thu, May 28, 2009 at 2:54 PM, Ivan Voras ivo...@freebsd.org wrote:
The volume of sensor data is potentially huge, on the order of 500,000
updates per hour. Sensor data is few numeric(15,5) numbers.
The size of that dataset, combined with the
2009/5/28 Kenneth Marshall k...@rice.edu:
One big benefit of partitioning is that you can prune old data with
minimal impact to the running system. Doing a large bulk delete would
be extremely I/O-intensive without partition support. We use this for
a DB log system and it allows us to simply
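The pruning side can be sketched as follows, assuming weekly partitions named like `Data_YYYY_WNN` (a hypothetical naming scheme, not stated in the thread): each expired week becomes a near-instant DROP TABLE instead of an I/O-heavy bulk DELETE.

```python
def expired_partitions(names, cutoff_year, cutoff_week):
    """Return partition tables (named like 'Data_2009_W22') that are
    entirely older than the given ISO year/week cutoff, oldest first.
    These can be dropped wholesale rather than DELETEd row by row."""
    def key(name):
        _, year, week = name.rsplit("_", 2)
        return int(year), int(week.lstrip("W"))
    return sorted((n for n in names if key(n) < (cutoff_year, cutoff_week)),
                  key=key)
```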
On Thu, May 28, 2009 at 5:06 PM, Ivan Voras ivo...@freebsd.org wrote:
If you require precise data with the ability to filter, aggregate and
correlate over multiple dimensions, something like Hadoop -- or one of
the Hadoop-based column database implementations, such as HBase or
Hypertable --
It depends on how soon you need to access the data after it's created.
The way I do it in my systems - I get data from 8 points, a bit less
than you - is to dump it to CSV and import it on the database host
(a separate server).
Now, you could go to BDB or whatever, but that's not the solution.
So,
Hmmm. Anyone out there have the Continuent solution working with PostgreSQL?
If so, what release? We're at 8.3 right now.
thanks,
-Alan
p.s. I'm continuing the cross-post because that is the way I started
this thread. Future threads will not be cross-posted.
On Thu, May 28, 2009 at 9:34 AM,
On Thu, May 28, 2009 at 05:24:33PM +0200, Ivan Voras wrote:
2009/5/28 Kenneth Marshall k...@rice.edu:
One big benefit of partitioning is that you can prune old data with
minimal impact to the running system. Doing a large bulk delete would
be extremely I/O-intensive without partition
Hi.
Has anyone had experience of bad performance with Postgres on a server
with many processors?
I have a server with 4 dual-core CPUs which gives me very good performance,
but I have experienced problems with another server that has 8 quad-core
CPUs (32 cores). The second one only gives me
I currently have a database doing something very similar. I set up partition
tables with predictable names based on the data's timestamp week number
eg: (Data__WI).
I have a trigger on the parent partition table to redirect data to the
correct partition (tablename := 'Data_' ||
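The week-number naming that trigger relies on can be computed like this; the `Data_YYYY_WNN` pattern is my guess at the truncated name above, so treat it as illustrative:

```python
from datetime import date

def partition_name(d, prefix="Data"):
    """Derive the weekly partition name for a row's timestamp using the
    ISO year and week, e.g. 2009-05-28 -> 'Data_2009_W22'."""
    year, week, _ = d.isocalendar()
    return f"{prefix}_{year}_W{week:02d}"
```

Using the ISO year (rather than the calendar year) keeps rows from the days around New Year in a single partition.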
I also forgot to note that I had no problems setting up replication via
londiste (skytools). The cronjob that creates the partition each week for
me also adds the table to the replication set. As simple as:
londiste.py londiste.ini provider add 'public.Data__WI'
londiste.py
On Thu, May 28, 2009 at 11:50 AM, Fabrix fabrix...@gmail.com wrote:
Monitoring (nmon, htop, vmstat) shows that everything is fine (memory, HD,
eth, etc) except that the processors regularly climb to 100%.
What kind of load are you putting the server under when this happens?
I can see that the
On Thu, May 28, 2009 at 12:50 PM, Fabrix fabrix...@gmail.com wrote:
Hi.
Has anyone had experience of bad performance with Postgres on a server
with many processors?
Seems to depend on the processors and chipset a fair bit.
I have a server with 4 dual-core CPUs which gives me very
Thanks David...
2009/5/28 David Rees dree...@gmail.com
On Thu, May 28, 2009 at 11:50 AM, Fabrix fabrix...@gmail.com wrote:
Monitoring (nmon, htop, vmstat) shows that everything is fine (memory, HD,
eth, etc) except that the processors regularly climb to 100%.
What kind of load are you putting
On Thu, May 28, 2009 at 4:53 PM, Fabrix fabrix...@gmail.com wrote:
Wow, that's some serious context-switching right there - 300k context
switches a second mean that the processors are spending a lot of their
time fighting for CPU time instead of doing any real work.
There is a bug in
On Thu, May 28, 2009 at 2:53 PM, Fabrix fabrix...@gmail.com wrote:
Yes, I have max_connections = 5000.
I can lower it, but I need at least 3500 connections.
Whoa, that's a lot. Can you look into connection pooling of some sort?
--
Sent via pgsql-performance mailing list
Thanks Scott
2009/5/28 Scott Marlowe scott.marl...@gmail.com
On Thu, May 28, 2009 at 12:50 PM, Fabrix fabrix...@gmail.com wrote:
Hi.
Has anyone had experience of bad performance with Postgres on a server
with many processors?
Seems to depend on the processors and chipset a
2009/5/28 Scott Mead scott.li...@enterprisedb.com
On Thu, May 28, 2009 at 4:53 PM, Fabrix fabrix...@gmail.com wrote:
Wow, that's some serious context-switching right there - 300k context
switches a second mean that the processors are spending a lot of their
time fighting for CPU time
Hi,
We have one query which has a left join. If we run this query without
the left join, it runs slower than with the left join.
-query with the left join:
EXPLAIN ANALYZE
SELECT
artifact.id AS id,
artifact.priority AS priority,
item.title AS title,
item.name AS name,
field_value2.value AS
- Alan McKay alan.mc...@gmail.com wrote:
Hmmm. Anyone out there have the Continuent solution working with PostgreSQL?
If so, what release? We're at 8.3 right now.
I have tested Sequoia 2.10.10 with a high-transaction-rate database on good
servers with plenty of memory. Since that's
- Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, May 28, 2009 at 12:50 PM, Fabrix fabrix...@gmail.com wrote:
Hi.
Has anyone had experience of bad performance with Postgres on a server
with many processors?
I had.
but I have experienced problems with another
2009/5/28 Flavio Henrique Araque Gurgel fla...@4linux.com.br
- Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, May 28, 2009 at 12:50 PM, Fabrix fabrix...@gmail.com wrote:
Hi.
Has anyone had experience of bad performance with Postgres on a server
with many
I would ask for your kernel version. uname -a please?
Sure, and thanks for your answer, Flavio...
uname -a
Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
x86_64 x86_64 GNU/Linux
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.2
On Thu, May 28, 2009 at 7:04 PM, Fabrix fabrix...@gmail.com wrote:
I would ask for your kernel version. uname -a please?
Sure, and thanks for your answer, Flavio...
uname -a
Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
x86_64 x86_64 GNU/Linux
cat