Hi,
It seems that the issue still reproduces.
The microkernels may fail on a local non-clustered mongod instance too, not
only in a cluster.
Here is what I did; I hope you'll get the exception on your machine too:
- I used the test attached to this email and triggered it 3 times from my IDE.
- The test uses a "database barrier" ( dbWriter.syncMongos(writersNumber,
"syncOAK") ) and waits for all the Java processes before starting to write.
- I put a breakpoint on "e.printStackTrace();" just to catch the exception.
I was able to reproduce it multiple times with an Oak build from Friday (29
March) on both my Ubuntu station and a Windows station (on the latter I had
to wait longer).
Can you please try again with the attached test and see if it reproduces on
your station?
Tudor
On 03/27/2013 04:42 PM, Jukka Zitting wrote:
Hi,
On Wed, Mar 27, 2013 at 4:28 PM, Tudor Rogoz <[email protected]> wrote:
I just made a pull, and it is not reproducing on my side either. It was
probably fixed somehow in the latest commits.
OK, good to know!
BR,
Jukka Zitting
package org.apache.jackrabbit.oak.run;

import java.util.Random;

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.mongomk.impl.MongoConnection;
import org.apache.jackrabbit.mongomk.prototype.MongoMK;
import org.apache.jackrabbit.oak.jcr.Jcr;
import org.junit.Before;
import org.junit.Test;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;

public class ConcurrencyTest {

    Repository repo;
    DBWriter dbWriter;
    String clusterNodeId;
    String database = "dataBase16";
    int writersNumber = 3;

    @Before
    public void before() throws Exception {
        Random randomGenerator = new Random();
        repo = new Jcr(new MongoMK(new MongoConnection("localhost", 27017,
                database).getDB(), 1)).createRepository();
        // Random id so concurrently started processes write distinct node names.
        clusterNodeId = Integer.toString(randomGenerator.nextInt(1000));
        dbWriter = new DBWriter(clusterNodeId, new MongoConnection(
                "localhost", 27017, database).getDB());
    }

    @Test
    public void testException() {
        Session adminSession = null;
        try {
            adminSession = repo.login(new SimpleCredentials("admin",
                    "admin".toCharArray()));
            // Register this process, then wait until all writer processes
            // have registered before starting to write.
            dbWriter.initialCommit("syncOAK");
            dbWriter.syncMongos(writersNumber, "syncOAK");
            Node root = adminSession.getRootNode();
            for (int k = 0; k < 10; k++) {
                Node nk = root.addNode("nodeR" + clusterNodeId + k + "R",
                        "nt:folder");
                for (int j = 0; j < 50; j++) {
                    Node nj = nk.addNode("nodeX" + clusterNodeId + j + "X",
                            "nt:folder");
                    for (int i = 0; i < 1000; i++) {
                        nj.addNode("nodeY" + clusterNodeId + i + "Y",
                                "nt:folder");
                    }
                    adminSession.save();
                    System.out.println("ClusterNode:" + clusterNodeId);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (adminSession != null) {
                adminSession.logout();
            }
        }
    }
}

class DBWriter {

    String clusterNodeId;
    DB database;

    public DBWriter(String clusterNodeId, DB database) {
        this.clusterNodeId = clusterNodeId;
        this.database = database;
    }

    /** Inserts one document so the collection count reflects this writer. */
    public String initialCommit(String collectionName) {
        BasicDBObject document = new BasicDBObject();
        document.put(clusterNodeId, "1");
        database.getCollection(collectionName).insert(document);
        return clusterNodeId;
    }

    /** "Database barrier": poll until all writers have registered. */
    public void syncMongos(int mongosNumber, String collectionName)
            throws InterruptedException {
        while (database.getCollection(collectionName).count() != mongosNumber) {
            Thread.sleep(1000);
        }
    }
}
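For what it's worth, the "database barrier" above (initialCommit + syncMongos polling a collection count) is the cross-process equivalent of an in-process rendezvous. A minimal single-JVM sketch of the same idea, using threads and a CountDownLatch instead of MongoDB (the class name and writer count are illustrative, not part of the attached test):

```java
import java.util.concurrent.CountDownLatch;

public class BarrierSketch {
    public static void main(String[] args) {
        int writers = 3;
        // Each writer "registers" by counting down, then waits for the rest,
        // mirroring initialCommit() followed by syncMongos().
        CountDownLatch barrier = new CountDownLatch(writers);
        for (int i = 0; i < writers; i++) {
            final int id = i;
            new Thread(() -> {
                barrier.countDown();          // register this writer
                try {
                    barrier.await();          // wait for all writers
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                // All writers pass this point together and start writing.
                System.out.println("writer " + id + " released");
            }).start();
        }
    }
}
```

The MongoDB version does the same rendezvous across separate JVM processes, which is why the test has to poll a shared collection instead of sharing a latch.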