Hi Iain,
The large files first: It sounds like you are doing a Dataset fanout. 

Dataset fanout is a great tool, but it does have one disadvantage:
because there is no guarantee that data will arrive in the correct
dataset order, FME has to cache it ALL to disk and then separate it
out - hence the large files. It's still a good method to use in many
circumstances, and your process does seem well suited to it.

Your process itself is fairly standard. With the Clipper you'd be
vastly better off using the "Clippers First" setting. It's one way of
avoiding high memory usage or large writes to disk. If you aren't
using it - or are unsure how to - then check out the pages on fmepedia
(http://www.fmepedia.com/index.php/Clipper)

As for batching, it's not necessarily faster, because it means FME
gets started and stopped multiple times; in other words, each batch
run is quick, but the sum of their run times is likely greater than
the single-process time.

If you do want to try it, I think an Oracle database as the source is
going to cause problems, because it's harder to split up for batch
processing. The easiest solution is this:

In the navigation pane, right-click each of the source envelope
settings and choose the 'publish' option, then run the translation.
At the top of the log file you'll see the command line used to run
this workspace. Simply copy that line into a .bat file and run the
file from the command line, repeating the line once for each set of
read coordinates you have (editing the coordinates as necessary).
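As a rough sketch of those steps - written here as a Unix shell
loop for illustration, though on Windows it would just be a plain
.bat file with one fme line per tile - the batch file might look
like the following. The workspace name and the parameter names
(MINX/MINY/MAXX/MAXY) are assumptions standing in for whatever you
actually published; copy the real command line from the top of your
log file.

```shell
#!/bin/sh
# Sketch only: prints one fme command line per tile envelope.
# Swap 'echo' for the real call once you've confirmed the exact
# command line from your log file. extract_tiles.fmw is a
# hypothetical workspace name.
WORKSPACE=extract_tiles.fmw

# Reads "minx miny maxx maxy" from stdin, one line per tile, and
# prints the fme invocation for each (a dry run).
run_tiles() {
  while read MINX MINY MAXX MAXY; do
    echo fme "$WORKSPACE" --MINX "$MINX" --MINY "$MINY" \
         --MAXX "$MAXX" --MAXY "$MAXY"
  done
}

# one line per tile: minx miny maxx maxy
run_tiles <<EOF
0 0 1000 1000
1000 0 2000 1000
EOF
```

Each printed line corresponds to one row you'd put in the .bat file,
with the published envelope parameters edited per tile.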

Again there is info on fmepedia
(http://www.fmepedia.com/index.php/Solving_Common_and_Not-So-Common_Problems_with_FME#Batch_Processing)

Hope this helps

Mark

Mark Ireland, Product Support Engineer
Safe Software Inc. Surrey, BC, CANADA
[EMAIL PROTECTED] http://www.safe.com
Solutions for Spatial Data Translation, Distribution and Access


--- In [email protected], "Iain Thacker" <[EMAIL PROTECTED]> wrote:
>
> Hi,
> 
> We're working with a large(ish) oracle spatial dataset extracting to 
> DGN files.  We're using a custom grid to provide a fanout attribute 
> and the output is tiled.  Basic description of the current process is:
> 
> Database -> Processing -> Sorter -> Clipper (using custom grid) -> 
> Attribute Filter (level) -> outputs
> 
> It seems to me that it would be more efficient to process one 'tile' 
> at a time and write that DGN file rather than the current way it's 
> working - reading the whole database and processing the whole lot in
> one go splitting the features into the relevant DGN file on output.
> 
> Would I be right in thinking the simplest method of organising this 
> would be to create a batch script and use input variables indicating 
> which area of the database to read from?
> 
> Alternatively (as we're not too interested in the speed of this 
> extraction) could I disable the creation of the huge temporary files 
> that are generated?
> 
> Regards,
> 
> Iain Thacker
>






