Unfortunately the data model is very thin (it's a database representation of
a very large series of XSD schema documents), with "types" that have
"properties", where each "type" has a "parent", and each "property" has a
"type" as well (among other things).

Big objects and N+1 selects don't bother me too much, but since a given
"type" or "property" may be involved in several relationships, I was hoping
that information about them could be pulled from the cache during lazy-load
joins instead of being re-selected.
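
To illustrate (a sketch only; the mapper statements and property names here
are hypothetical), this is the kind of reuse I was hoping the cache would
give me:

SqlSession session = sqlSessionFactory.openSession();
try {
  // Many "properties" share the same handful of "types", so ideally the
  // repeated lookups below would be served from the session cache rather
  // than re-selected on every lazy-load join.
  List<Property> props = session.selectList("getAllProperties");
  for (Property p : props) {
    MyType t = (MyType) session.selectOne("getTypeById", p.getTypeId());
  }
} finally {
  session.close();
}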

On Mon, Jan 18, 2010 at 11:10 AM, Clinton Begin <clinton.be...@gmail.com> wrote:

> Hmm... it looks to me like your situation is a perfect storm.
>
>  * You're loading lots of objects.
>  * You're loading big objects.
>  * You are guaranteeing an N+1 selects problem with that for loop.
>
> No one of these situations should create a major problem, but all three
> together are a guaranteed disappointment.
>
> That said, I'm surprised that 2000 rows would be that much of a problem...
> how big are these objects really?
>
> Also, memory-wise, there should be almost no difference between using one
> session or multiple.  And no, I would not open multiple as in your second
> example.
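>
> To be clear, by one session I mean something like this sketch (reusing the
> statement names from your example):
>
> SqlSession session = sqlSessionFactory.openSession();
> try {
>   List<MyIdObject> ids = session.selectList("getAll");
>   for (MyIdObject id : ids) {
>     MyObject o = (MyObject) session.selectOne("getOne", id.getActualId());
>     // process o here, then let it go out of scope
>   }
> } finally {
>   session.close();
> }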
>
> As for the serialization error, that might be a bug if LoadPair wasn't
> serializable.
>
> Clinton
>
>
> On Mon, Jan 18, 2010 at 8:24 AM, Dave Rafkind <dave.rafk...@gmail.com> wrote:
>
>> Ok, that's some good information; I understand that you should marshal
>> your objects with care. Unfortunately for me, at some point I will have to
>> marshal every single one of my database rows into an object graph. However,
>> what I took from your advice was not to be so stingy about opening new
>> session objects. So if I do something like this:
>>
>> for (MyIdObject id : ids) {
>>   SqlSession session2 = sqlSessionFactory.openSession();
>>   try {
>>     MyObject o = (MyObject) session2.selectOne("getOne", id.getActualId());
>>   } finally {
>>     session2.close();
>>   }
>> }
>>
>> Now memory usage is pretty good! However, performance is now terrible. In
>> attempting to rectify this I put in a custom cache implementation that
>> logs cache accesses, to see how the cache was performing, and noticed that
>> the cache was never actually being used (cache.putObject was never
>> called). According to this discussion:
>>
>>
>> http://markmail.org/search/?q=select+commit+cache+list%3Aorg.apache.ibatis.user-java#query:select%20commit%20cache%20list%3Aorg.apache.ibatis.user-java+page:1+mid:4mmwki3dnp57gweu+state:results
>>
>> ...the cache isn't populated until a transaction is committed. So for a
>> pure read-only use case of the db the cache won't be very useful, right?
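>>
>> (For reference, the loop body with the commit added looked roughly like
>> this:)
>>
>> SqlSession session2 = sqlSessionFactory.openSession();
>> try {
>>   MyObject o = (MyObject) session2.selectOne("getOne", id.getActualId());
>>   session2.commit();  // added only in hopes of populating the cache
>> } finally {
>>   session2.close();
>> }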
>>
>> That "session2.commit()" before the call to close() causes the following
>> error (which I assume, but don't know for sure, is caused by committing
>> when nothing actually needs to be committed):
>>
>> ### Error committing transaction.  Cause:
>> org.apache.ibatis.cache.CacheException: Error serializing object.  Cause:
>> java.io.NotSerializableException:
>> org.apache.ibatis.executor.loader.ResultLoaderRegistry$LoadPair
>>
>>
>>
>> On Thu, Jan 14, 2010 at 2:44 PM, Clinton Begin <clinton.be...@gmail.com> wrote:
>>
>>> By nested results, yes, I mean collections and associations.  And by
>>> "flattening" I mean avoiding the use of those.
>>>
>>> iBATIS exhibits this behavior, the same way any ORM would, because the
>>> object instances need to be cached to preserve object identity.  So as
>>> your result set is being read through, each unique object is stored.
>>> iBATIS isn't quite as efficient as something like Hibernate can be with
>>> circular references, though, in that depending on how you map it out, you
>>> may end up with multiple instances of the same data (parent/child
>>> relationships mapped with resultMap).
>>>
>>> So with iBATIS, the most memory-efficient approach is to use nested
>>> select associations/collections.  But for query performance, nested
>>> resultMaps are ideal.  I often find I need to use a combination of both to
>>> get the best optimization.  But then I often don't load lists of complete
>>> objects either.  If I'm loading a large list, I'll use a lighter weight
>>> representation, and then only load the complete object graph for individual
>>> instances.  I've never found a case where this wasn't a good idea anyway.
>>> Even when working with something like Rails and its rich domain ORM, I would
>>> often write optimized lightweight queries for large lists of flatter
>>> objects.
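>>>
>>> For example, something like this sketch (the summary statement and class
>>> here are made up):
>>>
>>> // Lightweight pass: just ids and a few display columns, no nested maps.
>>> List<TypeSummary> summaries = session.selectList("getAllSummaries");
>>>
>>> // Load the full object graph only for the instance actually drilled into.
>>> MyObject full = (MyObject) session.selectOne("getOne", chosenId);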
>>>
>>> The memory should not be significantly more than will ultimately be
>>> required to store your final result set.   And any additional memory used
>>> should be released upon the closing of the SqlSession.
>>>
>>> If the memory isn't being released at the end of the session, that's a
>>> different story... but otherwise, this is normal behavior.
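>>>
>>> A crude way to check that (sketch only; System.gc() is just a hint, so
>>> treat the numbers as rough):
>>>
>>> Runtime rt = Runtime.getRuntime();
>>> System.gc();
>>> long before = rt.totalMemory() - rt.freeMemory();
>>>
>>> session.close();
>>>
>>> System.gc();
>>> long after = rt.totalMemory() - rt.freeMemory();
>>> // 'after' should fall back toward your pre-loop baseline if the session
>>> // was what was holding on to the objects.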
>>>
>>> Clinton
>>>
>>>
>>> On Thu, Jan 14, 2010 at 11:20 AM, Dave Rafkind <dave.rafk...@gmail.com> wrote:
>>>
>>>> Thanks for the reply. What do you mean by nested result maps or selects?
>>>> Do you mean collections or associations with their own selects and result
>>>> maps? Why would iBATIS exhibit this behavior in that case?
>>>>
>>>> And by flattening, do you mean the same kind of stuff used to avoid the
>>>> N+1 select problem?
>>>>
>>>> On Thu, Jan 14, 2010 at 1:02 PM, Clinton Begin <clinton.be...@gmail.com> wrote:
>>>>
>>>>> If it uses nested result maps or nested selects, I'm afraid you're out
>>>>> of luck.   You'll need to reduce the query results, or flatten out the
>>>>> results.
>>>>>
>>>>> Clinton
>>>>>
>>>>> On 2010-01-14, Dave Rafkind <dave.rafk...@gmail.com> wrote:
>>>>> > Hi ibatis list, I'm new to ibatis so perhaps this is a noob question.
>>>>> > I'm using Ibatis 3 (ibatis-3-core-3.0.0.216.jar) with a somewhat
>>>>> > complicated schema (plenty of circular links etc).
>>>>> >
>>>>> > I'm doing something like this:
>>>>> >
>>>>> > List<MyIdObject> ids = session.selectList("getAll");
>>>>> >
>>>>> > for (MyIdObject id : ids) {
>>>>> >   MyObject o = (MyObject) session.selectOne("getOne", id.getActualId());
>>>>> > }
>>>>> >
>>>>> > The first query returns a list about 2k entries long, and the second
>>>>> > query in the for loop returns objects that are somewhat large (they
>>>>> > have several collections in them, a discriminator, etc).
>>>>> >
>>>>> > The problem I have is that as the for loop marches on it uses an
>>>>> > ever-increasing amount of memory. I would assume that when the objects
>>>>> > in the body of the for loop go out of scope they can get garbage
>>>>> > collected, but apparently that never happens; is there some weird
>>>>> > interaction with the "first-level cache"? Should I be going about this
>>>>> > a different way?
>>>>> >
>>>>> > Thanks!
>>>>> > Dave
>>>>> >
>>>>>
>>>>> --
>>>>> Sent from my mobile device
>>>>>
>>>>>
>>>>
>>>
>>
>
