Hi,

Thanks for reporting this. elementCleaned() must be called for
each element in the cache on shutdown, so this is a bug.
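A minimal sketch of what a fixed clear() might look like, draining the cache entry by entry through elementCleaned() so pending writes are flushed. Class and method names mirror the LruCache code quoted further down in this thread, but this is a simplified, hypothetical standalone version, not the actual Neo4j source:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: a cache whose clear() flushes every element
// through elementCleaned() instead of just dropping the map.
class FlushingLruCache<K, E>
{
    private final Map<K, E> cache = new LinkedHashMap<>();

    // Counter stands in for "properties written to the store";
    // in the real code elementCleaned() would persist the element.
    int cleanedCount = 0;

    void put( K key, E element )
    {
        cache.put( key, element );
    }

    // Called whenever an element leaves the cache.
    void elementCleaned( E element )
    {
        cleanedCount++;
    }

    public synchronized void clear()
    {
        // Remove entries one by one so each element is flushed via
        // elementCleaned(), rather than calling cache.clear() directly.
        Iterator<Map.Entry<K, E>> itr = cache.entrySet().iterator();
        while ( itr.hasNext() )
        {
            E element = itr.next().getValue();
            itr.remove();
            elementCleaned( element );
        }
    }

    int size()
    {
        return cache.size();
    }
}
```

With this shape, a shutdown that ends in clear() would still write out the last cached elements.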

Regarding performance: the batch inserter implementation of
GraphDatabaseService is there for convenience and will not perform as
well as the normal BatchInserter API. There may be a full
implementation in the future, but right now it only supports basic
insertion and lookup.

Regards,
Johan

On Thu, Jul 22, 2010 at 5:34 PM, Craig Taverner <[email protected]> wrote:
> When I looked at BatchGraphDatabaseImpl, the impression I got was that the
> work to fully support the GraphDatabaseService was only partially completed.
> It seems it is necessary to use the BatchInserter API to get things working
> correctly, and if you use the GraphDatabaseService wrapper, some things
> silently fail.
>
> I would, however, think it should be possible to complete this
> implementation. Perhaps the fake transaction provided by the
> BatchGraphDatabaseImpl.beginTx() should be able to call the elementCleaned()
> method when tx.finish() is called, and flush the properties to disk?
>
> In my opinion, using the GraphDatabaseService wrapper on the BatchInserter
> should merely perform worse than using the real BatchInserter. I do not
> think it should fail to perform some key functions at all. Any opinions on
> this from the core team?
>
> On Thu, Jul 22, 2010 at 12:47 PM, Lagutko, Nikolay <
> [email protected]> wrote:
>
>> Hi to all
>>
>>
>>
>> I found an interesting thing in BatchGraphDatabaseImpl. I tried to load a
>> lot of data using the BatchInserter service and everything seemed OK. But some
>> nodes that were created near the end didn't have any properties. So I
>> looked into the code and found the following:
>>
>>
>>
>> Properties are written to the database only when the
>> LruCache.elementCleaned() method is called. And when we call shutdown()
>> on the service, it calls the clear() method of LruCache. So let's have a
>> look at this method:
>>
>>
>>
>> public synchronized void clear()
>> {
>>     resizeInternal( 0 );
>> }
>>
>> private void resizeInternal( int newMaxSize )
>> {
>>     resizing = true;
>>     try
>>     {
>>         if ( newMaxSize >= size() )
>>         {
>>             maxSize = newMaxSize;
>>         }
>>         else if ( newMaxSize == 0 )
>>         {
>>             cache.clear();
>>         }
>>         else
>>         {
>>             maxSize = newMaxSize;
>>             java.util.Iterator<Map.Entry<K,E>> itr = cache.entrySet()
>>                 .iterator();
>>             while ( itr.hasNext() && cache.size() > maxSize )
>>             {
>>                 E element = itr.next().getValue();
>>                 itr.remove();
>>                 elementCleaned( element );
>>             }
>>         }
>>     }
>>     finally
>>     {
>>         resizing = false;
>>     }
>> }
>>
>>
>>
>> As you can see, if we call clear(), the last changes are never written
>> to the database; the cache is only cleared. Is that the correct behaviour?
>>
>>
>>
>> Nikolay Lagutko
_______________________________________________
Neo4j mailing list
[email protected]
https://lists.neo4j.org/mailman/listinfo/user
