Hi Artem,

As for the exception: I insert vertices not in parallel processes but in 
sequential ones (i.e. process 2 waits for process 1 to finish, and so 
on). This helps to reduce memory consumption, at least on the client side.
Before adding the next 50k locations I load a number of vertices from the DB, 
because edges have to be created between these vertices. So no common 
resources other than the vertices themselves are shared. Maybe because of the 
high memory consumption all these preloaded vertices become detached or 
something like that? That would explain why they have a different version for 
the same RID.
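
In case it matters, the workaround I'm considering is to retry the failed 
update after reloading the stale vertex. A minimal sketch of the retry part 
(plain Java; `RetryOnConflict` is just an illustrative helper of mine, not an 
OrientDB API — in real code the catch would target 
OConcurrentModificationException and reload the record before retrying):

```java
import java.util.concurrent.Callable;

// Illustrative helper, not part of OrientDB: retries an operation a few
// times, the usual pattern when an optimistic version conflict is thrown.
public class RetryOnConflict {
    public static <T> T retry(int maxAttempts, Callable<T> op) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return op.call();
            } catch (Exception e) { // real code: catch OConcurrentModificationException
                last = e;           // and reload the vertex before the next attempt
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated operation that fails twice before succeeding,
        // mimicking two stale-version conflicts in a row.
        final int[] attempts = {0};
        Integer result = retry(3, () -> {
            attempts[0]++;
            if (attempts[0] < 3) throw new IllegalStateException("stale version");
            return attempts[0];
        });
        System.out.println(result); // prints 3
    }
}
```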

As for the memory consumption - indeed, it could be the level-1 cache. 
However, when I run it under VisualVM I can't see that much memory being 
used. I allocated 2GB for the DB and the GC looks fine: 
http://snag.gy/m3ync.jpg 
http://snag.gy/7tM4e.jpg
Nevertheless, the process in Windows consumes all available memory.


Speaking about the level-1 cache - should it be disabled on the server side? 
Is there any cache on the client side?
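
For reference, the knobs I've been experimenting with on my side look like 
this — a hedged sketch for the 1.x line, and the constant names should be 
verified against the 1.7.2 jars before relying on them:

```java
import com.orientechnologies.orient.core.config.OGlobalConfiguration;

// Disable the per-database (level 1) and shared (level 2) record caches
// before opening the database; this trades lookup speed for memory.
// Constant names as I understand them in OrientDB 1.x - please verify.
OGlobalConfiguration.CACHE_LEVEL1_ENABLED.setValue(false);
OGlobalConfiguration.CACHE_LEVEL2_ENABLED.setValue(false);
```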

On Tuesday, July 1, 2014 at 19:11:06 UTC+3, Artem Orobets wrote:
>
> Hi Andrey,
>
> So we've got two issues here: memory consumption and the exception.
>
> That additional 300MB may be the first-level cache. Could you run a profiler 
> to check that?
>
> As for the exception, you said that you do inserts in parallel processes; 
> maybe some of them access a common resource? Note that, for now, if you add 
> an edge between v1 and v2 you change the versions of both v1 and v2. We are 
> going to fix that soon.
>
> Best regards,
> Artem Orobets
>
> *Orient Technologies, the Company behind OrientDB*
>  
>
> 2014-07-01 17:49 GMT+03:00 Андрей Логинов <[email protected]>:
>
>> Hello,
>>
>> I'm trying to perform a massive insertion of data into my database (v. 
>> 1.7.2). 
>> I have a class Location, and one of its properties is locationId. 
>> First I added about 50k vertices of the Location class (let's call them 
>> 'CITIES'). Then I added a unique index for this class:
>> CREATE INDEX Location.locationId ON Location (locationId) unique
>>
>> After that I start adding 600k new Location class vertices (let's call 
>> them ZIP_CODES). As we have rather limited system resources, we decided to 
>> add these postal codes in 50k batches one after another, each in a 
>> separate Java process. Zip codes depend on cities: there should be an edge 
>> between them, which is why the index on locationId is required. Each 50k 
>> Locations also have about 500k dependent vertices and edges linked to them 
>> in total.
>> Each batch makes OrientDB consume an additional 300-400MB of memory, which 
>> it doesn't seem to release :(. 
>>
>> When memory consumption is near its limit (I allocated 3GB for OrientDB), 
>> I start getting a very surprising error (sometimes on the 200-250k batch, 
>> sometimes on the 250-300k one):
>>
>> APPLOG: 2014-07-01 16:07:31,885 ERROR [com.efinancialcareers.locations.export.job.ExportLocationsJob] - <Exception in thread "main" com.orientechnologies.orient.core.exception.OConcurrentModificationException: Cannot UPDATE the record #12:289185 because the version is not the latest. Probably you are updating an old record or it has been modified by another user (db=v2 your=v0)>
>>     at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.updateRecord(OLocalPaginatedStorage.java:818)
>>     at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.commitEntry(OLocalPaginatedStorage.java:2110)
>>     at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.commit(OLocalPaginatedStorage.java:1099)
>>     at com.orientechnologies.orient.core.tx.OTransactionOptimistic.doCommit(OTransactionOptimistic.java:132)
>>     at com.orientechnologies.orient.core.tx.OTransactionOptimistic.commit(OTransactionOptimistic.java:105)
>>
>>
>> *There is no other thread that could change any record. Moreover, I'm 
>> doing all of this within the scope of a single transaction, so I have no 
>> idea why this error appears. Please advise me something... *
>> I've looked through 
>> https://code.google.com/p/orient/wiki/TroubleshootingJava and 
>> https://code.google.com/p/orient/wiki/GraphDatabaseRaw#ConcurrencyGraphDB, 
>> and before accessing the graph I disable MVCC. I'm also trying to reduce 
>> memory consumption and speed up the initialization process with the 
>> following calls:
>>
>>         graph.getRawGraph()
>>                 .setValidationEnabled(false)
>>                 .setRetainRecords(false)
>>                 .setMVCC(false)
>>                 .declareIntent(new OIntentMassiveInsert());
>>
>> However, I can't even be sure that it helps! Should these settings be 
>> applied to the client-side app or to the server itself?
>>
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"OrientDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
