[ https://issues.apache.org/jira/browse/HBASE-469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12584224#action_12584224 ]

stack commented on HBASE-469:
-----------------------------

Thanks for the detailed explanation.

Patch has lots of nice cleanup.

Here are a couple of comments:


Why catch the exception?  Why not just let it out?

{code}
+    if (fs == null) {
+      try {
+        this.fs = FileSystem.get(conf);
+      } catch (IOException e) {
+        LOG.fatal("error getting file system", e);
+        throw e;
+      }
{code}
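
Something like the below is all I mean (just a sketch; it assumes the enclosing 
method already declares throws IOException so the exception can simply propagate):

{code}
    // Sketch only -- let the IOException propagate to the caller instead of
    // logging and rethrowing it here.
    if (fs == null) {
      this.fs = FileSystem.get(conf);
    }
{code}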

Regarding the below:

{code}
+  protected void deleteLocal(File f) {
+    if (f.exists()) {
+      if (f.isDirectory()) {
+        File[] children = f.listFiles();
+        if (children != null && children.length > 0) {
+          for (int i = 0; i < children.length; i++) {
+            deleteLocal(children[i]);
+          }
+        }
+      }
+      f.delete();
+    }
+  } 
{code}

It's called deleteLocal -- does it only work against the local filesystem?  
Should this method be in file utils instead?  Is there a method that already does 
this up in Hadoop?  What if the test is run on DFS?
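
For example, something along these lines might do instead (just a sketch; I 
haven't checked exactly which of these are available in the Hadoop version we 
build against):

{code}
    // Local filesystem: Hadoop already has a recursive delete for java.io.File.
    FileUtil.fullyDelete(new File(jobConf.get("hadoop.tmp.dir")));

    // Any FileSystem (including DFS): go through the FileSystem API instead.
    fs.delete(new Path(jobConf.get("hadoop.tmp.dir")), true);
{code}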

Should the System.out.* logging be changed in 
src/test/org/apache/hadoop/hbase/TestHBaseCluster.java?

What's happening with TestMemcache?  Is there a renaming going on?  Is it 
different from TestHMemcache?

Why do the below:

{code}
+    } catch (IOException e) {
+      e.printStackTrace();
+      throw e;
{code}

Doesn't a stack trace come out anyway?

Log instead? +    System.err.println("setup completed.");  Change all the 
System.out.* calls to use a log?
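
i.e. the usual commons-logging pattern (a sketch; the helper method is just 
illustrative, only the LOG declaration and call matter):

{code}
  // Sketch only -- give the test a commons-logging Log and use it in place of
  // the System.out/System.err calls.
  private static final Log LOG = LogFactory.getLog(TestHBaseCluster.class);

  private void logSetupDone() {
    LOG.info("setup completed.");   // was: System.err.println("setup completed.");
  }
{code}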

Just let the below out?

{code}
+    try {
+      r.flushcache();
+    } catch (IOException e) {
+      e.printStackTrace();
+      throw e;
+    } 
+    try {
+      r.compactStores();
+    } catch (IOException e) {
+      e.printStackTrace();
+      throw e;
+    }
{code}

And here:

{code}
+    try {
+      r.flushcache();
+      r.compactStores();
+    } catch (IOException e) {
+      e.printStackTrace();
+      throw e;
+    }
{code}
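
i.e. just the below (a sketch; assumes the enclosing test method is declared to 
throw IOException or Exception, in which case junit prints the stack trace on 
failure anyway):

{code}
    // Sketch only -- let the IOException out; the test framework reports it.
    r.flushcache();
    r.compactStores();
{code}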

Should this run in tearDown?

{code}
     } finally {
       mrCluster.shutdown();
+      if (jobConf != null) {
+        deleteLocal(new File(jobConf.get("hadoop.tmp.dir")));
+      }
{code}
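
For example (just a sketch; the field names mrCluster and jobConf are the ones 
from the patch, and I'm assuming the usual junit tearDown override):

{code}
  // Sketch only -- do the cleanup in tearDown so it runs whether or not the
  // test body completes.
  @Override
  public void tearDown() throws Exception {
    try {
      if (mrCluster != null) {
        mrCluster.shutdown();
      }
    } finally {
      if (jobConf != null) {
        deleteLocal(new File(jobConf.get("hadoop.tmp.dir")));
      }
    }
    super.tearDown();
  }
{code}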

Why is this necessary?

{code}
+  void close() {
+    this.lock.writeLock().lock();
+    try {
+      this.memcache.clear();
+      this.snapshot.clear();
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+  }
{code}

It's just busy work and it will mask real memory retention issues -- e.g. if the 
host of this MemCache is being held on to, then this clearing will make that 
condition harder to detect (if the host is not being referenced, the memcache 
and snapshot have no path to root and GC will clean them up anyway).  I see that 
we used to do this elsewhere... and you're just tidying stuff.

Can this be final? +  protected Memcache memcache = new Memcache();

Why remove the final in below?

-  final FileSystem fs;
-  private final HBaseConfiguration conf;
+  FileSystem fs;
+  private HBaseConfiguration conf;

Because not set on construction?

This is a nice addition -> +  private volatile long storeSize;

I think one of my patches yesterday might make it so this part doesn't apply:

{code}
@@ -208,29 +220,18 @@
     if(LOG.isDebugEnabled()) {
       LOG.debug("starting " + storeName +
           ((reconstructionLog == null || !fs.exists(reconstructionLog)) ?
-          " (no reconstruction log)" :
-            " with reconstruction log: " + reconstructionLog.toString()));
+              " (no reconstruction log)" :
+                " with reconstruction log: " + reconstructionLog.toString()));
{code}

What do you think our policy on data member access should be?  Sometimes we make 
things default access.  Other times we add accessors:

{code}
+
+  HColumnDescriptor getFamily() {
+    return this.family;
   }
{code}

Should we log debug something here just in case things are not working as we'd 
hope?

{code}
+      } catch (IOException e) {
+        // If the HSTORE_LOGINFOFILE doesn't contain a number, just ignore it.
+        // That means it was built prior to the previous run of HStore, and so
+        // it cannot contain any updates also contained in the log.
+      }
{code}

Just suggesting...
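
e.g. something like (a sketch; the constant name is just the one from the comment):

{code}
      } catch (IOException e) {
        // If the HSTORE_LOGINFOFILE doesn't contain a number, just ignore it.
        // That means it was built prior to the previous run of HStore, and so
        // it cannot contain any updates also contained in the log.
        if (LOG.isDebugEnabled()) {
          LOG.debug("Ignoring unparseable " + HSTORE_LOGINFOFILE + ": " +
              e.getMessage());
        }
      }
{code}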

You want to remove this?

{code}
+//      this.basedir = null;
+//      this.info = null;
+//      this.family = null;
+//      this.fs = null;
{code}

Would suggest this log is not necessary:

{code}
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("Not compacting " + this.storeName +
+                  " because no store files to compact.");
+            }
{code}

Let's log events and changes in state -- not the fact that they did not happen 
(there are a few other logs like this after this one).  Would suggest that, as 
part of logging compactions, you add the rationale.
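
e.g. something along these lines (just a sketch; filesToCompact and 
compactionThreshold are illustrative names, not necessarily what's in the patch):

{code}
    // Sketch only -- log the compaction that is happening plus the reason for it.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Compacting " + filesToCompact.size() + " file(s) in " +
          this.storeName + ": store file count exceeds threshold of " +
          compactionThreshold);
    }
{code}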

Should the below check be done inside the checkSplit method for the sake of 
better coherence?

{code}
+      if (storeSize > this.desiredMaxFileSize) {
+        return checkSplit();
+      }
{code}
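
i.e. something like the below (a sketch only; the return type and the 
split-point helper are illustrative):

{code}
  // Sketch only -- move the size guard inside checkSplit so callers can call it
  // unconditionally; getSplitPoint() stands in for the existing split logic.
  Text checkSplit() {
    if (this.storeSize <= this.desiredMaxFileSize) {
      return null;
    }
    return getSplitPoint();
  }
{code}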

Is this intentional (moving where reportOpen happens)?

{code}
-      reportOpen(regionInfo); 
     }   
+    reportOpen(regionInfo);  
{code}

I don't think the HRegion cleanup you've added helps (see the above note on how 
I think it could make memory leaks harder to detect).





> Streamline HStore startup and compactions
> -----------------------------------------
>
>                 Key: HBASE-469
>                 URL: https://issues.apache.org/jira/browse/HBASE-469
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: regionserver
>    Affects Versions: 0.2.0
>            Reporter: Jim Kellerman
>            Assignee: Jim Kellerman
>             Fix For: 0.2.0
>
>         Attachments: patch.txt, patch.txt
>
>
> Several useful patches that streamline HStore startup and compactions that 
> were a part of the abandoned changes in HBASE-69 should be incorporated.

