reddycharan commented on a change in pull request #1233: Reduce running time for testLedgerCreateAdvWithLedgerIdInLoop

 File path: 
 @@ -579,45 +583,64 @@ public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) {
     public void testLedgerCreateAdvWithLedgerIdInLoop() throws Exception {
-        long ledgerId;
         int ledgerCount = 40;
-        List<List<byte[]>> entryList = new ArrayList<List<byte[]>>();
-        LedgerHandle[] lhArray = new LedgerHandle[ledgerCount];
-        List<byte[]> tmpEntry;
-        for (int lc = 0; lc < ledgerCount; lc++) {
-            tmpEntry = new ArrayList<byte[]>();
-            ledgerId = rng.nextLong();
-            ledgerId &= Long.MAX_VALUE;
-            if (!baseConf.getLedgerManagerFactoryClass().equals(LongHierarchicalLedgerManagerFactory.class)) {
-                // since LongHierarchicalLedgerManager supports ledgerIds of decimal length upto 19 digits but other
-                // LedgerManagers only upto 10 decimals
-                ledgerId %= 9999999999L;
-            }
-            LOG.info("Iteration: {}  LedgerId: {}", lc, ledgerId);
-            lh = bkc.createLedgerAdv(ledgerId, 5, 3, 2, digestType, ledgerPassword, null);
-            lhArray[lc] = lh;
-            for (int i = 0; i < numEntriesToWrite; i++) {
-                ByteBuffer entry = ByteBuffer.allocate(4);
-                entry.putInt(rng.nextInt(maxInt));
-                entry.position(0);
-                tmpEntry.add(entry.array());
-                lh.addEntry(i, entry.array());
-            }
-            entryList.add(tmpEntry);
-        }
-        for (int lc = 0; lc < ledgerCount; lc++) {
-            // Read and verify
-            long lid = lhArray[lc].getId();
-            LOG.info("readEntries for lc: {} ledgerId: {} ", lc, lid);
-            readEntries(lhArray[lc], entryList.get(lc));
-            lhArray[lc].close();
-            bkc.deleteLedger(lid);
+        long maxId = 9999999999L;
+        if (baseConf.getLedgerManagerFactoryClass().equals(LongHierarchicalLedgerManagerFactory.class)) {
+            // since LongHierarchicalLedgerManager supports ledgerIds of decimal length upto 19 digits but other
+            // LedgerManagers only upto 10 decimals
+            maxId = Long.MAX_VALUE;
+        }
+        rng.longs(ledgerCount, 0, maxId) // generate a stream of ledger ids
+            .mapToObj(ledgerId -> { // create a ledger for each ledger id
+                LOG.info("Creating adv ledger with id {}", ledgerId);
+                    return bkc.newCreateLedgerOp()
+                        .execute()
+                        .thenApply(writer -> { // Add entries to ledger when created
+                                LOG.info("Writing stream of {} entries to {}",
+                                         numEntriesToWrite, ledgerId);
+                                List<ByteBuf> entries = rng.ints(numEntriesToWrite, 0, maxInt)
+                                    .mapToObj(i -> {
+                                            ByteBuf entry = Unpooled.buffer(4);
+                                            entry.retain();
 Review comment:
  the way ByteBuf instances are used in this test case makes me question the specifications and expectations of the WriteAdvHandle.write(long entryId, ByteBuf data) API.
   1) Regarding the refcount of ByteBuf: from what I understand, an API method which takes a ByteBuf as an argument shouldn't increase or decrease the refcount of the 'ReferenceCounted' object.
   I mean
   myApplicationMethod() {
       ByteBuf byteBuf = createNewInstanceOfByteBuf();
   Here, when I create byteBuf, the refCount would typically be 1. By passing this refcounted object to a standard API method, I don't expect the API method to decrease (modify) the refCount; it is ultimately the responsibility of the caller (application) method to release it by calling release() explicitly when it is done with the byteBuf.
   But in our case WriteAdvHandle.write(long entryId, ByteBuf data) decreases the refcount. Hence @ivankelly is calling an explicit retain() immediately after creating the ByteBuf.
   **Que:** Why are we doing this? What is the general practice with regard to the usage of refcounted objects in API methods? Have we mentioned the expected behavior in detail in our API docs?
   2) Regarding the updates to the readerIndex/writerIndex of a ByteBuf by an API method: AFAIU, if the API method performs a read operation from the byteBuf, then after calling the API method with a ByteBuf argument I wouldn't expect the readerIndex of the byteBuf to still be at 0.
   I mean
   myApplicationMethod() {
       ByteBuf byteBuf = createNewInstanceOfByteBuf();
       // add data to byteBuf
       // here I would expect the readerIndex and writerIndex of byteBuf to be the same,
       // since APIReadMethod has consumed/read all of the data in the byteBuf
   But in our case, WriteAdvHandle.write(long entryId, ByteBuf data) doesn't advance the readerIndex of the data ByteBuf. I would expect it to behave in the same way that other core Java APIs deal with ByteBuffer - for e.g. FileChannel.write(ByteBuffer), where the position of the ByteBuffer (the analogue of a ByteBuf's readerIndex) is advanced.
   **Que:** Are we following general practices of Buffers with our API? Have we mentioned the expected behavior in detail in our API docs?
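   The core-Java behavior referred to here can be checked in a few lines of java.nio: consuming data from a ByteBuffer advances its position, which is the counterpart of a ByteBuf's readerIndex. A minimal sketch:

   ```java
   import java.nio.ByteBuffer;

   public class BufferPositionDemo {
       public static void main(String[] args) {
           ByteBuffer buf = ByteBuffer.allocate(4);
           buf.putInt(42);                // writing advances position to 4
           buf.flip();                    // position back to 0, limit 4: ready to read
           int before = buf.position();   // 0
           int value = buf.getInt();      // reading (consuming) advances position
           int after = buf.position();    // 4
           System.out.println(value + " read, position " + before + " -> " + after);
           // -> 42 read, position 0 -> 4
       }
   }
   ```

   An API that fully consumes the buffer would be expected to leave the read cursor at the end in the same way, rather than at 0.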
   3) In this test case @ivankelly is explicitly calling retain() immediately after creation of the ByteBuf, but the corresponding release() call isn't there. I wonder: is it ok to be lenient here? Are we making the corresponding release() calls in all the places where a ByteBuf is created? The ReferenceCounted javadoc says that the initial count is 1 when a ReferenceCounted object is created.
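   One discipline that makes a missing release() easy to audit is pairing every creation with a release in a finally block. A sketch of that pattern, again with a hypothetical toy class (`ToyBuf2`) standing in for a Netty-style refcounted buffer:

   ```java
   // Hypothetical toy refcounted buffer: a new instance starts at refCnt == 1,
   // matching the ReferenceCounted contract mentioned above.
   final class ToyBuf2 {
       private int refCnt = 1;
       int refCnt() { return refCnt; }
       void release() {
           if (refCnt <= 0) {
               throw new IllegalStateException("already released");
           }
           refCnt--;
       }
   }

   public class ReleaseDisciplineDemo {
       public static void main(String[] args) {
           ToyBuf2 entry = new ToyBuf2();      // refCnt == 1, creator owns it
           try {
               // ... use entry (fill it, hand it to an API, etc.) ...
           } finally {
               entry.release();                // refCnt == 0: no leak, even on exceptions
           }
           System.out.println("refCnt after use = " + entry.refCnt());
           // -> refCnt after use = 0
       }
   }
   ```

   Note that with a real ByteBuf this pattern only works if the callee does not also consume a reference, which is again why the write() refcount semantics need to be spelled out in the API docs.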
   My bad, I wasn't paying attention to the new API and the netty buffer changes. But the way this test case handles ByteBuf made me delve into this. IIRC @merlimat and @kishorekasi were making changes to the netty buffers and @eolivelli introduced the new client API.
   @sijie @jvrao @dlg99  FYI

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

With regards,
Apache Git Services
