Hi Luca;
I will try the new versions, I promise. I use your approach for creating edges 
and vertices, but I want to know whether my approach is okay: for creating 
edges massively, is it a good idea to use the ON-TRANSACTION API, and are my 
configurations correct for massive insertion?
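
To make the question concrete, this is the kind of batched edge creation over a 
transaction that I have in mind (a rough sketch only; the edge class name and the 
batch size are placeholders, not my real code):

import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.orient.OrientGraph;

public class EdgeBatchInsert {

    private static final int BATCH_SIZE = 1000;

    // Creates one edge per (out, in) vertex pair and commits every BATCH_SIZE
    // edges, so the transaction never grows too large.
    public static void insertEdges(OrientGraph graph, Iterable<Vertex[]> pairs) {
        int count = 0;
        for (Vertex[] pair : pairs) {
            graph.addEdge("class:LoginBy", pair[0], pair[1], "LoginBy");
            if (++count % BATCH_SIZE == 0) {
                graph.commit();   // flush the current batch
            }
        }
        graph.commit();           // commit whatever is left
    }
}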

On Tuesday, 8 March 2016 01:34:02 UTC+2, l.garulli wrote:
>
> Hi,
> First suggestion is to use v2.2.0-beta if you're in development (it will be 
> final in a few weeks, promise) or at least the latest v2.1.12. Both versions 
> are safe if you work with the graph API in NON-TRANSACTION mode. In this way 
> you can go multi-threaded without the risk of broken edges. Please consider 
> that v2.2 is much faster on this, from 3x to 10x!
>
> Do you have properties? Then, if you can, I suggest setting the properties 
> at vertex/edge creation, not with further calls. This makes a lot of 
> difference when you are using the OrientGraphNoTx implementation.
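>
> For example, something along these lines (a rough sketch; the class and property 
> names are placeholders, graph is an OrientGraphNoTx and date a java.util.Date):
>
> // Properties passed at creation time: the vertex record is written once.
> Vertex v = graph.addVertex("class:Login", "loginDate", date, "source", "web");
>
> // Properties set afterwards: each setProperty() updates the record again,
> // which is noticeably slower with the non-transactional graph.
> Vertex v2 = graph.addVertex("class:Login");
> v2.setProperty("loginDate", date);
> v2.setProperty("source", "web");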
>
>
>
> Best Regards,
>
> Luca Garulli
> Founder & CEO
> OrientDB <http://orientdb.com/>
>
>
> On 7 March 2016 at 22:38, kurtuluş yılmaz <[email protected]> wrote:
>
>> Hi Luca;
>> You are so kind. I am developing a framework for Spring + OrientDB, so it takes 
>> time to understand the code. I wrote a transaction manager, so the data access 
>> layer works like Hibernate or similar frameworks: begin, commit and rollback go 
>> through the transaction manager and I don't call these methods explicitly. 
>> Wherever you see an interaction like save edge, save vertex or load, I get the 
>> current connection from a thread local. In the following code the transaction 
>> manager first gets a connection, then the code finds the Cookie, Ip and User 
>> vertices (all indexed) from the DB, then I create a Login vertex, create three 
>> edges, and commit the transaction through the transaction manager. The service 
>> layer method comes first, followed by the implementation of the DAO methods.
>>
>> @Override
>> @Transactional(value = "orientTransactionManager", propagation = Propagation.REQUIRED)
>> public void insertLogins(List<LoginInformationType> loginTypes) {
>>     for (LoginInformationType loginInformationType : loginTypes) {
>>         List<BaseOrientEdgeEntity> edges = new ArrayList<>();
>>
>>         // load the existing Cookie, Ip and User vertices (all indexed)
>>         UserEntity userEntity = userGraphDao.load(loginInformationType.getUserId());
>>         CookieEntity cookieEntity = cookieGraphDao.load(loginInformationType.getCompId());
>>         IpEntity ipEntity = ipGraphDao.load(loginInformationType.getIpNumber());
>>
>>         // create the Login vertex
>>         LoginEntity loginEntity = new LoginEntity(loginInformationType.getLoginDate());
>>         loginGraphDao.save(loginEntity);
>>
>>         // create the three edges and save them in one call
>>         CookieLoginByEdge cookieLoginByEdge = new CookieLoginByEdge(cookieEntity, loginEntity);
>>         edges.add(cookieLoginByEdge);
>>         IpLogInByEdge ipLogInByEdge = new IpLogInByEdge(ipEntity, loginEntity);
>>         edges.add(ipLogInByEdge);
>>         LoginByEdge loginByEdge = new LoginByEdge(loginEntity, userEntity);
>>         edges.add(loginByEdge);
>>         userRelationGraphDao.saveEdges(edges);
>>     }
>> }
>>
>>
>> Saving a vertex is the same for all vertex types. In the service above I use it 
>> once, for the Login vertex.
>>
>>
>> @Override
>> public Vertex createVertex(String className, Map<String, Object> vertexProperties) {
>>
>>     String vertexType = "class:" + className;
>>
>>     // properties are passed at creation time, in a single call
>>     Vertex vertex = orientDBFactory.getCurrentDb().addVertex(vertexType, vertexProperties);
>>
>>     return vertex;
>> }
>>
>>
>> Saving an edge is similar for all edge types. I call the following code three times.
>>
>>
>> @Override
>> public void createOneDirectionalEdge(String className, Vertex from, Vertex to) {
>>
>>     orientDBFactory.getCurrentDb().addEdge("class:" + className, from, to, className);
>> }
>>
>>
>> The load method is similar for all vertex types (Ip, Cookie and User). I call it 
>> three times.
>>
>> public Vertex findOneVertex(String query, Map<String, Object> params) {
>>
>>     Vertex vertex = null;
>>     OCommandRequest command = orientDBFactory.getCurrentDb().command(new OCommandSQL(query));
>>
>>     for (Vertex v : (Iterable<Vertex>) command.execute(params)) {
>>         vertex = v;
>>         break;
>>     }
>>
>>     return vertex;
>> }
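>>
>> A hypothetical call (the class name, field name and query here are only 
>> illustrative, not my real schema):
>>
>> Map<String, Object> params = new HashMap<>();
>> params.put("userId", loginInformationType.getUserId());
>> Vertex userVertex = findOneVertex("SELECT FROM User WHERE userId = :userId", params);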
>>
>>
>>
>> For the DAO operations I need an OrientGraphNoTx object, so I call getCurrentDb(). 
>> This method gets the OrientGraphNoTx object from a thread local. I call it seven 
>> times per login (three vertex lookups, one vertex creation and three edge creations). 
>>
>>
>>
>> public OrientBaseGraph getCurrentDb(){
>>
>>     OrientGraphNoTx tx = (OrientGraphNoTx)OrientBaseGraph.getActiveGraph();
>>     log.debug("orientdbfactory hash" + tx.hashCode());
>>     return tx;
>> }
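>>
>> For context, the transaction manager is what binds the graph to the thread in the 
>> first place, roughly like this (a simplified sketch with a made-up URL and 
>> credentials, not my exact code):
>>
>> // Open a non-transactional graph for this thread; creating it (or calling
>> // makeActive()) registers it as the thread's active graph, which is what
>> // OrientBaseGraph.getActiveGraph() returns later in getCurrentDb().
>> OrientGraphNoTx graph = new OrientGraphNoTx("remote:localhost/mydb", "admin", "admin");
>> graph.makeActive();
>>
>> // ... service and DAO calls run on this thread ...
>>
>> // When the unit of work is finished the transaction manager releases it.
>> graph.shutdown();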
>>
>>
>>
>>
>>
>> Thank you again for your quick response.
>>
>>
>>
>>
>>
>> On Monday, 7 March 2016 20:12:29 UTC+2, kurtuluş yılmaz wrote:
>>
>>> Hi;
>>> I am trying to migrate data from MySQL to OrientDB. I can insert 16 
>>> million vertices per hour with multiple threads, which is very impressive. 
>>> After inserting the vertices I try to insert the edges, but that is very 
>>> slow. I searched the internet but couldn't find any useful information. 
>>> What is the best practice for massive insertion of EDGES? I am sending my 
>>> configuration below. Any help appreciated.
>>>
>>> OrientDB version: 2.1.11
>>> Transaction management = OrientGraphNoTx
>>>
>>> OrientDB 2.1.11 (build 2.1.x@rddb5c0b4761473ae9549c3ac94871ab56ef5af2c; 
>>> 2016-02-15 10:45:12+0000) configuration dump:
>>> - ENVIRONMENT
>>>   + environment.dumpCfgAtStartup = true
>>>   + environment.concurrent = true
>>>   + environment.allowJVMShutdown = true
>>> - SCRIPT
>>>   + script.pool.maxSize = 20
>>> - MEMORY
>>>   + memory.useUnsafe = true
>>>   + memory.directMemory.safeMode = true
>>>   + memory.directMemory.trackMode = false
>>>   + memory.directMemory.onlyAlignedMemoryAccess = true
>>> - JVM
>>>   + jvm.gc.delayForOptimize = 600
>>> - STORAGE
>>>   + storage.diskCache.pinnedPages = 20
>>>   + storage.diskCache.bufferSize = 1256
>>>   + storage.diskCache.writeCachePart = 15
>>>   + storage.diskCache.writeCachePageTTL = 86400
>>>   + storage.diskCache.writeCachePageFlushInterval = 25
>>>   + storage.diskCache.writeCacheFlushInactivityInterval = 60000
>>>   + storage.diskCache.writeCacheFlushLockTimeout = -1
>>>   + storage.diskCache.diskFreeSpaceLimit = 100
>>>   + storage.diskCache.diskFreeSpaceCheckInterval = 5
>>>   + storage.configuration.syncOnUpdate = true
>>>   + storage.compressionMethod = nothing
>>>   + storage.useWAL = false
>>>   + storage.wal.syncOnPageFlush = false
>>>   + storage.wal.cacheSize = 3000
>>>   + storage.wal.maxSegmentSize = 128
>>>   + storage.wal.maxSize = 4096
>>>   + storage.wal.commitTimeout = 1000
>>>   + storage.wal.shutdownTimeout = 10000
>>>   + storage.wal.fuzzyCheckpointInterval = 300
>>>   + storage.wal.reportAfterOperationsDuringRestore = 10000
>>>   + storage.wal.restore.batchSize = 50000
>>>   + storage.wal.readCacheSize = 1000
>>>   + storage.wal.fuzzyCheckpointShutdownWait = 600
>>>   + storage.wal.fullCheckpointShutdownTimeout = 600
>>>   + storage.wal.path = null
>>>   + storage.makeFullCheckpointAfterCreate = true
>>>   + storage.makeFullCheckpointAfterOpen = true
>>>   + storage.makeFullCheckpointAfterClusterCreate = true
>>>   + storage.diskCache.pageSize = 64
>>>   + storage.lowestFreeListBound = 16
>>>   + storage.cluster.usecrc32 = false
>>>   + storage.lockTimeout = 0
>>>   + storage.record.lockTimeout = 2000
>>>   + storage.useTombstones = false
>>> - RECORD
>>>   + record.downsizing.enabled = true
>>> - OBJECT
>>>   + object.saveOnlyDirty = false
>>> - DB
>>>   + db.pool.min = 1
>>>   + db.pool.max = 50
>>>   + db.pool.idleTimeout = 0
>>>   + db.pool.idleCheckDelay = 0
>>>   + db.mvcc.throwfast = false
>>>   + db.validation = true
>>> - NONTX
>>>   + nonTX.recordUpdate.synch = false
>>>   + nonTX.clusters.sync.immediately = manindex
>>> - TX
>>>   + tx.trackAtomicOperations = false
>>> - INDEX
>>>   + index.embeddedToSbtreeBonsaiThreshold = 40
>>>   + index.sbtreeBonsaiToEmbeddedThreshold = -1
>>> - HASHTABLE
>>>   + hashTable.slitBucketsBuffer.length = 1500
>>> - INDEX
>>>   + index.auto.synchronousAutoRebuild = true
>>>   + index.auto.lazyUpdates = 10000
>>>   + index.flushAfterCreate = true
>>>   + index.manual.lazyUpdates = 1
>>>   + index.durableInNonTxMode = false
>>>   + index.txMode = FULL
>>>   + index.cursor.prefetchSize = 500000
>>> - SBTREE
>>>   + sbtree.maxDepth = 64
>>>   + sbtree.maxKeySize = 10240
>>>   + sbtree.maxEmbeddedValueSize = 40960
>>> - SBTREEBONSAI
>>>   + sbtreebonsai.bucketSize = 2
>>>   + sbtreebonsai.linkBagCache.size = 100000
>>>   + sbtreebonsai.linkBagCache.evictionSize = 1000
>>>   + sbtreebonsai.freeSpaceReuseTrigger = 0.5
>>> - RIDBAG
>>>   + ridBag.embeddedDefaultSize = 4
>>>   + ridBag.embeddedToSbtreeBonsaiThreshold = -1
>>>   + ridBag.sbtreeBonsaiToEmbeddedToThreshold = -1
>>> - COLLECTIONS
>>>   + collections.preferSBTreeSet = false
>>> - FILE
>>>   + file.trackFileClose = false
>>>   + file.lock = true
>>>   + file.deleteDelay = 10
>>>   + file.deleteRetry = 50
>>> - JNA
>>>   + jna.disable.system.library = true
>>> - NETWORK
>>>   + network.maxConcurrentSessions = 1000
>>>   + network.socketBufferSize = 32768
>>>   + network.lockTimeout = 15000
>>>   + network.socketTimeout = 15000
>>>   + network.requestTimeout = 3600000
>>>   + network.retry = 5
>>>   + network.retryDelay = 500
>>>   + network.binary.loadBalancing.enabled = false
>>>   + network.binary.loadBalancing.timeout = 2000
>>>   + network.binary.maxLength = 32736
>>>   + network.binary.readResponse.maxTimes = 20
>>>   + network.binary.debug = false
>>>   + network.http.maxLength = 1000000
>>>   + network.http.charset = utf-8
>>>   + network.http.jsonResponseError = true
>>>   + network.http.jsonp = false
>>> - OAUTH2
>>>   + oauth2.secretkey = 
>>> - NETWORK
>>>   + network.http.sessionExpireTimeout = 300
>>>   + network.http.useToken = false
>>>   + network.token.secretyKey = 
>>>   + network.token.encriptionAlgorithm = HmacSHA256
>>>   + network.token.expireTimeout = 60
>>> - PROFILER
>>>   + profiler.enabled = true
>>>   + profiler.config = null
>>>   + profiler.autoDump.interval = 0
>>>   + profiler.maxValues = 200
>>> - LOG
>>>   + log.console.level = info
>>>   + log.file.level = fine
>>> - COMMAND
>>>   + command.timeout = 0
>>> - QUERY
>>>   + query.scanThresholdTip = 50000
>>>   + query.limitThresholdTip = 10000
>>> - SQL
>>>   + sql.graphConsistencyMode = notx_async_repair
>>> - CLIENT
>>>   + client.channel.maxPool = 100
>>>   + client.connectionPool.waitTimeout = 5000
>>>   + client.channel.dbReleaseWaitTimeout = 10000
>>>   + client.ssl.enabled = false
>>>   + client.ssl.keyStore = null
>>>   + client.ssl.keyStorePass = null
>>>   + client.ssl.trustStore = null
>>>   + client.ssl.trustStorePass = null
>>>   + client.session.tokenBased = false
>>> - SERVER
>>>   + server.channel.cleanDelay = 5000
>>>   + server.cache.staticFile = false
>>>   + server.log.dumpClientExceptionLevel = FINE
>>>   + server.log.dumpClientExceptionFullStackTrace = false
>>> - DISTRIBUTED
>>>   + distributed.crudTaskTimeout = 3000
>>>   + distributed.commandTaskTimeout = 10000
>>>   + distributed.commandLongTaskTimeout = 86400000
>>>   + distributed.deployDbTaskTimeout = 1200000
>>>   + distributed.deployChunkTaskTimeout = 15000
>>>   + distributed.deployDbTaskCompression = 7
>>>   + distributed.queueTimeout = 5000
>>>   + distributed.asynchQueueSize = 0
>>>   + distributed.asynchResponsesTimeout = 15000
>>>   + distributed.purgeResponsesTimerDelay = 15000
>>>   + distributed.queueMaxSize = 10000
>>>   + distributed.backupDirectory = ../backup/databases
>>>   + distributed.concurrentTxMaxAutoRetry = 10
>>>   + distributed.concurrentTxAutoRetryDelay = 100
>>> - DB
>>>   + db.makeFullCheckpointOnIndexChange = true
>>>   + db.makeFullCheckpointOnSchemaChange = true
>>>   + db.document.serializer = ORecordSerializerBinary
>>> - LAZYSET
>>>   + lazyset.workOnStream = true
>>> - DB
>>>   + db.mvcc = true
>>>   + db.use.distributedVersion = false
>>> - MVRBTREE
>>>   + mvrbtree.timeout = 0
>>>   + mvrbtree.nodePageSize = 256
>>>   + mvrbtree.loadFactor = 0.7
>>>   + mvrbtree.optimizeThreshold = 100000
>>>   + mvrbtree.entryPoints = 64
>>>   + mvrbtree.optimizeEntryPointsFactor = 1.0
>>>   + mvrbtree.entryKeysInMemory = false
>>>   + mvrbtree.entryValuesInMemory = false
>>>   + mvrbtree.ridBinaryThreshold = -1
>>>   + mvrbtree.ridNodePageSize = 64
>>>   + mvrbtree.ridNodeSaveMemory = false
>>> - TX
>>>   + tx.commit.synch = false
>>>   + tx.autoRetry = 1
>>>   + tx.log.fileType = classic
>>>   + tx.log.synch = false
>>>   + tx.useLog = false
>>> - INDEX
>>>   + index.auto.rebuildAfterNotSoftClose = true
>>> - CLIENT
>>>   + client.channel.minPool = 1
>>> - STORAGE
>>>   + storage.keepOpen = false
>>> - CACHE
>>>   + cache.local.enabled = false
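>>>
>>> (For reference, these settings can also be changed programmatically before the 
>>> database is opened; a minimal sketch touching two of the keys from the dump above:)
>>>
>>> import com.orientechnologies.orient.core.config.OGlobalConfiguration;
>>>
>>> public class BulkLoadConfig {
>>>     public static void apply() {
>>>         // Disable the WAL and the tx log only for the bulk load; they are
>>>         // normally re-enabled afterwards since they protect against data loss.
>>>         OGlobalConfiguration.findByKey("storage.useWAL").setValue(false);
>>>         OGlobalConfiguration.findByKey("tx.useLog").setValue(false);
>>>     }
>>> }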
>>>
>>>
>>>
>
>
