Having implemented just such a graph, but without said advanced
paging, I have to say there is a wall of performance issues. Rendering
the data takes a significant amount of memory as well as rendering
time. If you find a way around it, be sure to let me know! This has
been a major issue in deploying the product I'm working on. We opted
to do incremental loading based on the area selected, and then reduce
the data set to N points, where N is the maximum number of points we
have been able to sustain without slowdowns. Of course, we are
rendering more than 3 series at a time, with over 500 data points per
series, and processing some of the data in Flex itself.

Our data set is stored as one reading per row in a Postgres database.
I would assume you could do the same in MySQL or the like, but that
depends on the indexing and storage limitations of the DBMS. The query
returns very quickly, but the transfer time is rather high. AMF
encoding (instead of XML) would likely help with that. Something to
keep in mind.
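
To make the "reduce to N points" step concrete, here is a rough sketch
in Python of the kind of bucket-averaging we do (names like
`downsample` and `max_points` are just illustrative, not our actual
code; a real version would run server-side before the data ever
reaches Flex):

```python
def downsample(readings, max_points):
    """Reduce a list of (timestamp, value) pairs to at most max_points
    by averaging each fixed-size bucket of consecutive readings."""
    if len(readings) <= max_points:
        return list(readings)
    # Ceiling division: how many consecutive readings go into one bucket.
    bucket_size = -(-len(readings) // max_points)
    out = []
    for i in range(0, len(readings), bucket_size):
        bucket = readings[i:i + bucket_size]
        avg_t = sum(t for t, _ in bucket) / len(bucket)
        avg_v = sum(v for _, v in bucket) / len(bucket)
        out.append((avg_t, avg_v))
    return out

# 2000 raw readings squeezed down to at most 500 chart points.
sample = [(t, t * 0.5) for t in range(2000)]
print(len(downsample(sample, 500)))  # 500
```

Averaging smooths the series, which is fine for overview charts; if
you need to preserve spikes, keeping the min/max of each bucket
instead is a common variation.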

If you find anything interesting, be sure to post it, as I'm certain
there are quite a few developers trying to do the very same thing. I'm
still waiting for paged charting in Flex 4. ;)

- William


--- In [email protected], "Peter Passaro" <[EMAIL PROTECTED]> wrote:
>
> I have recently been using Brendan Meutzner's excellent Flex charting
> clone of Google Finance to display scientific time series data. Now I
> am trying to find a good server-side solution for filling it with
> data. Specifically, I want to be able to store a very large data set
> (as file refs or BLOBs) in a database, query the database for a
> specific time chunk within the set, package that for sending to the
> Flex client, and do paging if the chunk is large.
> 
> So I am looking for suggestions on how to implement this, what
> server/database combos to use, etc. Would Flex (LiveCycle) Data
> Services be useful for something like this? So far I have been
> looking at RRDTool <http://oss.oetiker.ch/rrdtool/> or using MySQL
> and just writing custom code to crawl a data set on the server side
> to produce the desired chunk and wrap it in XML. For RRDTool I would
> also have to modify it to use much smaller time steps.
> 
> This is a pilot project, but we hope it will eventually become a
> large repository for neuroscience data, so I am keeping scalability
> in mind as well.
> 
> Any help would be greatly appreciated.
> 
> Peter
>

