> I recently tried breaking my doc set into topic-sized fm docs, then
> building the different outputs with the topic fm's as insets to a
> chapter container. It worked great w/ 1 chapter, but when I tested it
> with the first 3 chapters, FM couldn't handle it. (50% CPU usage on a
> 2 GHz CPU w/ 2 GB RAM, and only FM was open. Plenty of hard drive space
> too.) I may be able to compromise, in this case, and inset only those
> whose headings change though.
I've never used text insets for _all_ the content, only for some
percentage (typically, 20-40%) that was being reused. If you want to
completely chunk everything, that's probably best done by putting all
your content chunks into a database and assembling the FM docs from
there.
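If you do go the database route, the assembly step itself is simple: pull a chapter's chunks in order and concatenate them into a source file. Here's a minimal Python sketch of that idea -- the SQLite schema, table, and file names are my own illustrative assumptions, not anything FM-specific:

```python
import sqlite3

def assemble_chapter(db_path, chapter_id):
    """Concatenate a chapter's chunks, in positional order, into one string."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT body FROM chunks WHERE chapter = ? ORDER BY position",
            (chapter_id,),
        ).fetchall()
    return "\n\n".join(body for (body,) in rows)

# Minimal demo data (hypothetical schema: chapter, position, body).
with sqlite3.connect("chunks.db") as conn:
    conn.execute("DROP TABLE IF EXISTS chunks")
    conn.execute(
        "CREATE TABLE chunks (chapter INTEGER, position INTEGER, body TEXT)")
    conn.executemany(
        "INSERT INTO chunks VALUES (?, ?, ?)",
        [(1, 2, "Details text."), (1, 1, "Overview text.")])
    conn.commit()

# Chunks come back in position order regardless of insertion order.
print(assemble_chapter("chunks.db", 1))
```

You'd still need a step that turns the assembled text into something FM can import (MIF, structured XML, or plain text), but the chunk storage and ordering is the easy part.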
But regarding lots of text insets -- are you putting each in a separate
file? A text inset is just a named flow, and a single FM file can
contain any number of text insets (when you want to import one, you
point to the file, and then FM presents a dialog in which you can
specify which flow of that file you want). Opening one file that
contains 20 or 30 text insets presents much less of a burden than
opening 20 or 30 files.
That said, if you're running Vista, 2 GB of RAM is marginal for this kind
of resource-intensive work. More memory would certainly help.
Richard G. Combs
Senior Technical Writer
richardDOTcombs AT polycomDOTcom
rgcombs AT gmailDOTcom