Jerker Hammarberg wrote:
> Thank you Daniel and Mateusz for your input! I have spent the last 
> few days getting acquainted with the source code, and currently my 
> theory is that most of the time is spent reading attribute data - our
>  test data also has over 50 attribute fields. But since we primarily
>  want to render the objects on the screen, we are not interested in 
> loading the attribute data for all objects. So I'll try to remove the
>  attribute loading and see if it makes a big difference.

I'm sure that in 90% of cases the slowness is caused by reading attributes.
In OGR, reading attributes can be a processing bottleneck.
I wish we had a way to request a subset of attributes without issuing
an OGR SQL query, but by specifying a "static filter" of field indexes
(pseudo-code):

OGRLayer* lyr = ds->GetLayer(0);
lyr->SetAttributesLimit( { 0, 4, 8 } );
OGRFeature* f = lyr->GetNextFeature();

so that "f" contains only those three attributes.

>> It should be efficient enough. However, there may be performance 
>> difference between accessing built-in flash memory and memory card.
>>  I'd suggest to try to read data from memory and from card and 
>> compare results.
> 
> You're right in that our data is on a storage card, but it has to be
>  that way.

Yes, I understand. In most cases data are accessed from a memory card.
I'm just suggesting running some tests with built-in memory and a memory
card, as well as with memory cards of different speeds (I recommend the
latest SanDisk Extreme III cards, the fastest on the market),
and comparing the results.
This way you may find whether there is a bottleneck, and where.

> I also believe that disk access is the main problem here, so the
> solution should be to minimize disk access.

Generally, there are two approaches:

- read all data into memory at once and access it from memory
afterwards; AFAIR Manifold software uses this approach.

- read data directly from the datastore (file) - the ArcView/ArcGIS approach

In my previous company, we developed a Shapefile-based application that
followed the former approach, and I can tell you it isn't ideal. The user
always has to wait a couple of seconds (sometimes even 15-20 sec) for data
to load after adding a new layer. Also, the user cannot compose a map of
many layers.

A combination of those two options is also possible:
- load data into memory in subsequent steps based on the requested spatial
extent, probably in a separate thread

The degenerate case would load everything at once if the user requests the
full extent, but the application could control/limit this by never
allowing a full-extent view.

Yet another option is data preparation and generalization. Every layer
could be generalized and assigned to a particular zoom level, in order
to avoid loading and rendering e.g. 100 point features which actually
occupy the same screen pixel :-)

Greetings
-- 
Mateusz Loskot
http://mateusz.loskot.net


 