I am using Couchbase 2.2.0 Community Edition. I am testing with a simple Java
program: 4 threads, each with its own CouchbaseClient, writing continuously to
one bucket (512 MB RAM quota, 15 GB HDD space).

    public void run() {
        List<URI> hosts = Arrays.asList(URI.create("http://localhost:8091/pools"));
        String bucket = "mybucket";
        String password = "";
        CouchbaseClient client = null;
        try {
            client = new CouchbaseClient(hosts, bucket, password);
        } catch (IOException e1) {
            e1.printStackTrace();
            return; // no point continuing without a client
        }

        OperationFuture<Boolean> addOp;
        int tmp = 0;
        while (true) {
            tmp = count.incrementAndGet(); // count: AtomicInteger shared by the 4 threads
            if (tmp > 1833) break;
            try {
                BufferedReader br = new BufferedReader(new FileReader("d:\\file" + tmp));
                String line;
                while ((line = br.readLine()) != null) {
                    // add(key, expiry, value): each input line becomes a key, no expiry
                    addOp = client.add(line, 0, 1);
                    if (!addOp.get()) {
                        System.out.println(tmp + "---" + line);
                    }
                }
                br.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }



At about 6 million inserts I started getting the errors below, and all
subsequent inserts fail. What should I do?


*[19:22:05]* - Hard Out Of Memory Error. Bucket "mybucket" on node 
127.0.0.1 is full. All memory allocated to this bucket is used for metadata.
*[19:22:05]* - Metadata overhead warning. Over 68% of RAM allocated to 
bucket "mybucket" on node "127.0.0.1" is taken up by keys and metadata.
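My own back-of-the-envelope estimate, assuming roughly 56 bytes of per-item metadata (the figure I have seen quoted for Couchbase 2.x; check the docs for your exact version) and an assumed average key length of 30 bytes (my keys are lines from the input files, so this is a guess), suggests keys and metadata alone approach the 512 MB quota at ~6 million items:

```java
// Rough estimate of bucket RAM consumed by keys + metadata alone.
// Both the ~56 bytes/item metadata figure and the 30-byte average key
// length are illustrative assumptions, not measured values.
public class MetadataOverheadEstimate {
    static long overheadBytes(long items, int avgKeyBytes, int metaBytesPerItem) {
        return items * (avgKeyBytes + metaBytesPerItem);
    }

    public static void main(String[] args) {
        long items = 6_000_000L;              // ~6 million inserts so far
        long quotaBytes = 512L * 1024 * 1024; // 512 MB bucket RAM quota
        long used = overheadBytes(items, 30, 56);
        System.out.printf("keys+metadata: %d MB (%.0f%% of quota)%n",
                used / (1024 * 1024), 100.0 * used / quotaBytes);
    }
}
```

which would explain the "all memory allocated to this bucket is used for metadata" message, since Couchbase keeps every key and its metadata resident in RAM regardless of how many values are ejected to disk.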

