Hi Pieter,
All operations in a single transaction are currently accumulated in memory and
then committed to disk.
If you have such huge transactions in production, you should split them into
batches. If you want to pre-fill/import data, you can use the noTx version of
the graph database.
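For example, a batched version of your insert loop could commit every few
thousand operations so the in-memory transaction stays bounded (a fragment in
terms of the g, one and NUMBER_TO_ITER from your test; the batch size of 10000
is only an illustrative value to tune against your heap):

    int batchSize = 10000; // illustrative; tune to your heap size
    for (int i = 0; i < NUMBER_TO_ITER; i++) {
        Vertex many = g.addVertex(null);
        many.setProperty("many", "2");
        g.addEdge(null, one, many, "toMany");
        if (i % batchSize == batchSize - 1) {
            g.commit(); // flush the accumulated operations to disk
        }
    }
    g.commit(); // flush the remainder

And for one-off imports, a minimal sketch of the non-transactional route,
assuming the OrientGraphNoTx class from the Blueprints module (the URL and
counts are illustrative):

    import com.tinkerpop.blueprints.Vertex;
    import com.tinkerpop.blueprints.impls.orient.OrientGraphNoTx;

    public class BulkImport {
        public static void main(String[] args) {
            OrientGraphNoTx g = new OrientGraphNoTx("local:/tmp/test-orientdb-notx");
            try {
                Vertex one = g.addVertex(null);
                one.setProperty("one", "1");
                for (int i = 0; i < 10000000; i++) {
                    Vertex many = g.addVertex(null);
                    many.setProperty("many", "2");
                    g.addEdge(null, one, many, "toMany");
                    // no commit() needed: each operation is written through,
                    // so nothing accumulates in memory
                }
            } finally {
                g.shutdown();
            }
        }
    }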
Hallo,

Is this of interest?
Is committing every 100000 too coarse?

Thanks
Pieter


On 30/12/2013 23:09, Pieter Martin wrote:

> Hi,
>
> I am trying to run the following test on OrientDB, using Blueprints
> version 2.5.0-SNAPSHOT.
>
>     @Test
>     public void testSpeedDude() throws IOException {
>         String url = "/tmp/test-orientdb-blueprints";
>         File dir = new File(url);
>         FileUtils.deleteDirectory(dir);
>         TransactionalGraph g = new OrientGraph("local:" + dir.getAbsolutePath());
>         try {
>
>             int NUMBER_TO_ITER = 10000000;
>
>             StopWatch stopWatch = new StopWatch();
>             stopWatch.start();
>
>             Vertex one = g.addVertex(null);
>             one.setProperty("one", "1");
>             long previousSplitTime = 0;
>             for (int i = 0; i < NUMBER_TO_ITER; i++) {
>                 Vertex many = g.addVertex(null);
>                 many.setProperty("many", "2");
>                 g.addEdge(null, one, many, "toMany");
>
>                 if (i != 0 && i % 100000 == 0) {
>                     stopWatch.split();
>                     long splitTime = stopWatch.getSplitTime();
>                     System.out.println(i + " " + stopWatch.toString() + " 100000 in " + (splitTime - previousSplitTime));
>                     previousSplitTime = splitTime;
>                     g.commit();
>                 }
>             }
>             g.commit();
>             stopWatch.stop();
>             System.out.println("write " + NUMBER_TO_ITER + " = " + stopWatch.toString());
>
>             stopWatch.reset();
>             stopWatch.start();
>             int count = 1;
>             Vertex startV = g.getVertex(one.getId());
>             for (Vertex v : startV.getVertices(Direction.OUT)) {
>                 v.getProperty("many");
>                 if (count++ % 1000000 == 0) {
>                     System.out.println("read 1000000 vertex, id = " + v.getId());
>                 }
>             }
>             stopWatch.stop();
>             System.out.println("read " + NUMBER_TO_ITER + " = " + stopWatch.toString());
>         } finally {
>             g.shutdown();
>         }
>     }
>
> After inserting about 4 900 000 records I get an OutOfMemoryError.
> I first ran it with no JVM settings, but even with -Xms1024m -Xmx6144m it
> still gets an OutOfMemoryError.
>
> Does this point to a memory leak somewhere?
>
> The output was as follows:
>
> 4000000 0:04:03.495 100000 in 6660
> 4100000 0:04:10.210 100000 in 6715
> 4200000 0:04:24.434 100000 in 14224
> 4300000 0:04:31.830 100000 in 7396
> 4400000 0:04:38.905 100000 in 7075
> 4500000 0:04:46.019 100000 in 7114
> 4600000 0:04:53.246 100000 in 7227
> 4700000 0:05:20.028 100000 in 26782
> 4800000 0:05:45.045 100000 in 25017
> 4900000 0:06:40.781 100000 in 55736
> ...OutOfMemoryError
>
> Thanks
> Pieter
>
