Hi Andrus,

In your example, wouldn't "context" keep growing and eventually exhaust
memory as more Artists are registered in it?  (This would also slow down
each commitChanges() call, since it has more objects to evaluate as the
context grows.)
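
Would the fix be to pair the 4.0 batch iterator with a throwaway context
per batch? A rough, untested sketch (assuming a ServerRuntime named
"runtime" is in scope; classes are from org.apache.cayenne):

try(ResultBatchIterator<Artist> it =
    ObjectSelect.query(Artist.class).batchIterator(context, batchSize)) {

    for(List<Artist> batch : it) {
        // re-register each Artist in a fresh context and commit there,
        // so commitChanges() only ever evaluates one batch worth of objects
        ObjectContext scratch = runtime.newContext();
        for(Artist a : batch) {
            Artist local = scratch.localObject(a);
            // ... modify "local" ...
        }
        scratch.commitChanges();
    }
}

That would at least keep each commit small, though I'm not sure it helps
with the Artists accumulating in the original iteration context.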

mrg


On Fri, May 19, 2017 at 9:49 AM, Andrus Adamchik <[email protected]>
wrote:

> I concur with Mike's suggestion, though I would recommend using the
> vastly improved 4.0 API:
>
> http://cayenne.apache.org/docs/4.0/cayenne-guide/performance-tuning.html#iterated-queries
>
> > As you iterate over your entire record set, you can convert the DataRows
> > into Cayenne objects
>
> In 4.0 you can iterate over objects.
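>
> For instance, something like this (minimal sketch, following the 4.0
> docs pattern):
>
> try(ResultIterator<Artist> it =
>     ObjectSelect.query(Artist.class).iterator(context)) {
>
>     for(Artist a : it) {
>        // "a" is a full Artist object, not a DataRow
>     }
> }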
>
> > Gather up 50 or 100 or 1000
>
> In 4.0 you can use the batch iterator to receive the stream already split
> into batches. The docs example actually has a typo; the batch iterator
> looks like this:
>
> try(ResultBatchIterator<Artist> it =
>     ObjectSelect.query(Artist.class).batchIterator(context, batchSize)) {
>
>     for(List<Artist> list : it) {
>        // ... process this batch of Artists ...
>        context.commitChanges();
>     }
> }
>
> Andrus
>
>
> > On May 19, 2017, at 4:39 PM, Michael Gentry <[email protected]> wrote:
> >
> > Hi Pascal,
> >
> > I suspect you need to use an iterated query:
> >
> > http://cayenne.apache.org/docs/3.1/cayenne-guide/performance-tuning.html#iterated-queries
> >
> > As you iterate over your entire record set, you can convert the DataRows
> > into Cayenne objects (see the section just above the iterated queries
> > section in the documentation linked above) in a *different* DataContext.
> > Gather up 50 or 100 or 1000 (whatever number feels good to you) in that
> > second DataContext, commit them, then throw away that DataContext and
> > create a new one. Repeat. This should keep your memory usage fairly
> > constant and let you process arbitrarily large record sets; see the
> > sketch below.
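> >
> > A rough 3.1-style sketch of the above (untested; assumes "runtime" is
> > your ServerRuntime and that its contexts are DataContexts):
> >
> > SelectQuery query = new SelectQuery(Artist.class);
> > query.setFetchingDataRows(true);
> >
> > ResultIterator it = context.performIteratedQuery(query);
> > try {
> >     DataContext scratch = (DataContext) runtime.newContext();
> >     int count = 0;
> >     while (it.hasNextRow()) {
> >         DataRow row = (DataRow) it.nextRow();
> >         Artist a = scratch.objectFromDataRow(Artist.class, row);
> >         // ... set properties on "a" ...
> >         if (++count % 100 == 0) {
> >             scratch.commitChanges();
> >             scratch = (DataContext) runtime.newContext(); // fresh context
> >         }
> >     }
> >     scratch.commitChanges(); // commit the remainder
> > } finally {
> >     it.close();
> > }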
> >
> > mrg
> >
> >
> > On Fri, May 19, 2017 at 9:27 AM, Pascal Robert <[email protected]> wrote:
> >
> >> Hi,
> >>
> >> I’m still in my FileMaker -> MySQL migration project. This time, I want
> >> to migrate a FileMaker table that has 445,244 records in it. If I fetch
> >> every row into an object entity, I get a Java heap space error, which is
> >> somewhat expected given the size of the result set.
> >>
> >> If I call setFetchLimit() with a limit of 10,000, it works fine. But
> >> FileMaker doesn’t support fetch limits, so I can’t handle this on that
> >> side.
> >>
> >> Any tips?
>
>
