On Tue, Nov 3, 2009 at 10:12 PM, Jay Manni jma...@fireeye.com wrote:
Hi:
I have an application wherein a process needs to read data from a stream and
store the records for further analysis and reporting. The data in the stream
is in the form of variable length records with clearly defined fields – so
it can be stored in a database or in a file. The only caveat is that the rate
of records coming in the stream could be several 1000 records a second.

There are a few limits to consider here.
On Tue, Nov 3, 2009 at 7:12 PM, Jay Manni jma...@fireeye.com wrote:
Hi:
I have an application wherein a process needs to read data from a stream and
store the records for further analysis and reporting.
Where is the stream coming from? What happens if the process reading
the stream fails?

Merlin Moncure wrote:
Postgres can handle several thousand inserts/sec, but your hardware most
likely can't handle several thousand transactions/sec if fsync is on.
commit_delay or async commit should help a lot there.
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
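
For illustration, a minimal sketch of that approach (the records table and
its columns are made up for the example): disable synchronous commit for the
loading session and batch many rows into each transaction, so each fsync is
amortized over a whole batch instead of a single row.

SET synchronous_commit TO OFF;  -- 8.3+; a crash may lose the last few
                                -- commits, but the data stays consistent

BEGIN;
INSERT INTO records (recorded_at, payload) VALUES
    ('2009-11-03 22:12:00', 'first record'),
    ('2009-11-03 22:12:01', 'second record');  -- ...hundreds more per batch
COMMIT;

With a few hundred rows per commit, the transaction rate drops to something
the disk can keep up with even before you touch the commit settings.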
On Tue, Nov 3, 2009 at 8:12 PM, Jay Manni jma...@fireeye.com wrote:
The only caveat is that the rate of records coming in the stream
could be several 1000 records a second.

So, are there periods when there are no/few records coming in? Do the
records/data/files really need to be persisted?
The following statement makes me think you should go the flat file route:
The advantage of running complex queries to mine the data
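
If you do go that route, a rough sketch (the file path, table, and columns
are hypothetical): append incoming records to a flat file as they arrive,
then bulk-load the file with COPY before running the analysis queries, since
COPY is far cheaper per row than individual INSERTs.

-- Bulk-load the spooled flat file in one pass; COPY is the fastest
-- way to get large batches into Postgres.
COPY records (recorded_at, payload)
FROM '/var/spool/stream/records.csv'
WITH CSV;

That also leaves the raw files around for replay if the loader ever dies
partway through, and the database only ever sees large, cheap batches.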