[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

2012-01-22 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5235:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.90 and trunk.
Thanks for the review, Ted.

> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> --
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
> Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, 
> HBASE-5235_0.90_2.patch, HBASE-5235_trunk.patch
>
>
> Please find the analysis below; correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any writer 
> thread hits an exception, we hold it in an atomic variable:
> {code}
>   private void writerThreadError(Throwable t) {
> thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>   for (WriterThread t: writerThreads) {
> try {
>   t.join();
> } catch (InterruptedException ie) {
>   throw new IOException(ie);
> }
> checkForErrors();
>   }
>   LOG.info("Split writers finished");
>   
>   return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
> Throwable thrown = this.thrown.get();
> if (thrown == null) return;
> if (thrown instanceof IOException) {
>   throw (IOException)thrown;
> } else {
>   throw new RuntimeException(thrown);
> }
>   }
> {code}
> So once we throw that exception, the output streams never get closed and 
> their DFSClient DataStreamer threads are left running.
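For reference, here is a minimal sketch (not the committed patch) of the shape of fix this analysis points to: join the writer threads and rethrow any recorded error, but do the stream close in a finally block so it also runs on the error path. The class and method names (SplitWriterCloser, finishWritingAndClose) and the plain Thread/Closeable lists are illustrative assumptions, not HLogSplitter's actual fields.

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch only -- illustrates the close-in-finally idea, not the
// committed HBASE-5235 patch.
final class SplitWriterCloser {

  private final AtomicReference<Throwable> thrown = new AtomicReference<Throwable>();

  // Same idea as writerThreadError() quoted above: remember only the first error.
  void writerThreadError(Throwable t) {
    thrown.compareAndSet(null, t);
  }

  // Same idea as checkForErrors() quoted above: rethrow the stored error, if any.
  private void checkForErrors() throws IOException {
    Throwable t = thrown.get();
    if (t == null) return;
    if (t instanceof IOException) throw (IOException) t;
    throw new RuntimeException(t);
  }

  // Join the writer threads and rethrow any stored error, but close every
  // stream in a finally block so the close also happens on the error path.
  void finishWritingAndClose(List<Thread> writerThreads,
                             List<? extends Closeable> streams) throws IOException {
    try {
      for (Thread t : writerThreads) {
        try {
          t.join();
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException(ie);
        }
        checkForErrors(); // may rethrow the error recorded by a writer thread
      }
    } finally {
      // This is what the current flow skips: when checkForErrors() throws,
      // control leaves splitLog() before closeStreams() runs, leaking the DFS
      // output streams and their DataStreamer threads.
      for (Closeable s : streams) {
        try {
          s.close();
        } catch (IOException e) {
          // Secondary failure during cleanup; a real implementation would log it.
        }
      }
    }
  }
}
{code}

With this ordering the streams are closed whether or not a writer thread failed, so the DataStreamer threads cannot be left running after log splitting gives up.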





[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

2012-01-22 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5235:
--

Attachment: HBASE-5235_0.90_2.patch

Updated patch addressing Ted's comments for 0.90. The trunk patch already 
incorporates Ted's comments.

> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> --
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
> Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, 
> HBASE-5235_0.90_2.patch, HBASE-5235_trunk.patch
>
>
> Please find the analysis below; correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any writer 
> thread hits an exception, we hold it in an atomic variable:
> {code}
>   private void writerThreadError(Throwable t) {
> thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>   for (WriterThread t: writerThreads) {
> try {
>   t.join();
> } catch (InterruptedException ie) {
>   throw new IOException(ie);
> }
> checkForErrors();
>   }
>   LOG.info("Split writers finished");
>   
>   return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
> Throwable thrown = this.thrown.get();
> if (thrown == null) return;
> if (thrown instanceof IOException) {
>   throw (IOException)thrown;
> } else {
>   throw new RuntimeException(thrown);
> }
>   }
> {code}
> So once we throw that exception, the output streams never get closed and 
> their DFSClient DataStreamer threads are left running.





[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

2012-01-21 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5235:
--

Attachment: HBASE-5235_trunk.patch

Patch for trunk.  

> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> --
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
> Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, 
> HBASE-5235_trunk.patch
>
>
> Please find the analysis below; correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any writer 
> thread hits an exception, we hold it in an atomic variable:
> {code}
>   private void writerThreadError(Throwable t) {
> thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>   for (WriterThread t: writerThreads) {
> try {
>   t.join();
> } catch (InterruptedException ie) {
>   throw new IOException(ie);
> }
> checkForErrors();
>   }
>   LOG.info("Split writers finished");
>   
>   return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
> Throwable thrown = this.thrown.get();
> if (thrown == null) return;
> if (thrown instanceof IOException) {
>   throw (IOException)thrown;
> } else {
>   throw new RuntimeException(thrown);
> }
>   }
> {code}
> So once we throw that exception, the output streams never get closed and 
> their DFSClient DataStreamer threads are left running.





[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

2012-01-21 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5235:
--

Status: Patch Available  (was: Open)

> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> --
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.90.5, 0.92.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
> Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, 
> HBASE-5235_trunk.patch
>
>
> Please find the analysis below; correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any writer 
> thread hits an exception, we hold it in an atomic variable:
> {code}
>   private void writerThreadError(Throwable t) {
> thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>   for (WriterThread t: writerThreads) {
> try {
>   t.join();
> } catch (InterruptedException ie) {
>   throw new IOException(ie);
> }
> checkForErrors();
>   }
>   LOG.info("Split writers finished");
>   
>   return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
> Throwable thrown = this.thrown.get();
> if (thrown == null) return;
> if (thrown instanceof IOException) {
>   throw (IOException)thrown;
> } else {
>   throw new RuntimeException(thrown);
> }
>   }
> {code}
> So once we throw that exception, the output streams never get closed and 
> their DFSClient DataStreamer threads are left running.





[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

2012-01-21 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5235:
--

Attachment: HBASE-5235_0.90_1.patch

Addressing Ted's comments.  Note that logWriter.values() will be iterated twice here.

> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> --
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
> Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch
>
>
> Please find the analysis below; correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any writer 
> thread hits an exception, we hold it in an atomic variable:
> {code}
>   private void writerThreadError(Throwable t) {
> thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>   for (WriterThread t: writerThreads) {
> try {
>   t.join();
> } catch (InterruptedException ie) {
>   throw new IOException(ie);
> }
> checkForErrors();
>   }
>   LOG.info("Split writers finished");
>   
>   return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
> Throwable thrown = this.thrown.get();
> if (thrown == null) return;
> if (thrown instanceof IOException) {
>   throw (IOException)thrown;
> } else {
>   throw new RuntimeException(thrown);
> }
>   }
> {code}
> So once we throw that exception, the output streams never get closed and 
> their DFSClient DataStreamer threads are left running.





[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

2012-01-21 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5235:
--

Attachment: HBASE-5235_0.90.patch

Patch for 0.90.  If this patch is fine, I will prepare a similar patch for 0.92.

> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> --
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
> Attachments: HBASE-5235_0.90.patch
>
>
> Please find the analysis below; correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any writer 
> thread hits an exception, we hold it in an atomic variable:
> {code}
>   private void writerThreadError(Throwable t) {
> thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>   for (WriterThread t: writerThreads) {
> try {
>   t.join();
> } catch (InterruptedException ie) {
>   throw new IOException(ie);
> }
> checkForErrors();
>   }
>   LOG.info("Split writers finished");
>   
>   return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
> Throwable thrown = this.thrown.get();
> if (thrown == null) return;
> if (thrown instanceof IOException) {
>   throw (IOException)thrown;
> } else {
>   throw new RuntimeException(thrown);
> }
>   }
> {code}
> So once we throw that exception, the output streams never get closed and 
> their DFSClient DataStreamer threads are left running.





[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

2012-01-20 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5235:
--

Summary: HLogSplitter writer thread's streams not getting closed when any 
of the writer threads has exceptions.  (was: HLogSplitter writer threads not 
getting closed when any of the writer threads has exceptions.)

> HLogSplitter writer thread's streams not getting closed when any of the 
> writer threads has exceptions.
> --
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
>
> Please find the analysis below; correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got 
> while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any writer 
> thread hits an exception, we hold it in an atomic variable:
> {code}
>   private void writerThreadError(Throwable t) {
> thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>   for (WriterThread t: writerThreads) {
> try {
>   t.join();
> } catch (InterruptedException ie) {
>   throw new IOException(ie);
> }
> checkForErrors();
>   }
>   LOG.info("Split writers finished");
>   
>   return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
> Throwable thrown = this.thrown.get();
> if (thrown == null) return;
> if (thrown instanceof IOException) {
>   throw (IOException)thrown;
> } else {
>   throw new RuntimeException(thrown);
> }
>   }
> {code}
> So once we throw that exception, the output streams never get closed and 
> their DFSClient DataStreamer threads are left running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira