I wanted to use SocketTeeWriter without going through the steps of
writing to HDFS.
It seems that PipelineStageWriter is designed to take any number of
PipelineableWriter stages, optionally followed by a SeqFileWriter.
So I changed my collector config:

  <property>
    <name>chukwaCollector.writerClass</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter</value>
  </property>

  <property>
    <name>chukwaCollector.pipeline</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter</value>
  </property>
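With that config, consumers can pull chunks straight off the tee socket instead of reading HDFS. As a rough sketch of what such a consumer might look like: the port (9094) and the "RAW all" subscribe command are my reading of the SocketTeeWriter docs, and the length-prefixed framing in readChunk is an assumption; verify all three against the SocketTeeWriter source in your checkout.

```java
import java.io.*;
import java.net.Socket;

// Hypothetical tee client (not part of Chukwa): subscribes to the collector's
// SocketTeeWriter and dumps every chunk to stdout.
public class TeeTail {
    // Read one chunk, assuming each chunk arrives as a 32-bit length prefix
    // followed by that many bytes of raw data.
    static byte[] readChunk(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] data = new byte[len];
        in.readFully(data);
        return data;
    }

    public static void main(String[] args) throws IOException {
        String host = args.length > 0 ? args[0] : "localhost";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 9094; // assumed tee port
        try (Socket sock = new Socket(host, port)) {
            OutputStream out = sock.getOutputStream();
            out.write("RAW all\n".getBytes("UTF-8")); // assumed subscribe command
            out.flush();
            DataInputStream in = new DataInputStream(sock.getInputStream());
            while (true) {
                System.out.write(readChunk(in));
                System.out.flush();
            }
        }
    }
}
```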

After making one minor change to SocketTeeWriter, I was able to get this
to work. The advantage is that I no longer need to set up HDFS.

Please let me know whether this is something worth patching; if so, I
will submit the patch.

SocketTeeWriter changes:

[~/hadoop-src/chukwa/trunk] svn diff
Index: src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java
===================================================================
--- src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java	(revision 831608)
+++ src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java	(working copy)
@@ -225,7 +225,9 @@

   @Override
   public CommitStatus add(List<Chunk> chunks) throws WriterException {
-    CommitStatus rv = next.add(chunks); //pass data through
+    CommitStatus rv = ChukwaWriter.COMMIT_OK;
+    if (next != null)
+       rv = next.add(chunks); //pass data through
     synchronized(tees) {
       Iterator<Tee> loop = tees.iterator();
       while(loop.hasNext()) {
[~/hadoop-src/chukwa/trunk]
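The point of the guard is that the tee can now sit at the end of the pipeline, where next is null. In miniature the pattern looks like the following (names here are illustrative, not Chukwa's actual API):

```java
import java.util.*;

// Toy model of a pipeline stage: forwards to `next` when one exists,
// otherwise reports success itself -- the same guard the patch adds.
interface MiniWriter {
    String add(List<String> chunks);
}

class TeeStage implements MiniWriter {
    private final MiniWriter next;               // null when this is the last stage
    private final List<String> teed = new ArrayList<>();

    TeeStage(MiniWriter next) { this.next = next; }

    public String add(List<String> chunks) {
        String rv = "COMMIT_OK";
        if (next != null)
            rv = next.add(chunks);               // pass data through, as in the patch
        teed.addAll(chunks);                     // tee a copy regardless
        return rv;
    }

    List<String> teed() { return teed; }
}
```

With the guard, `new TeeStage(null).add(...)` returns the OK status instead of throwing a NullPointerException, which is exactly what lets SocketTeeWriter run without a SeqFileWriter (and thus without HDFS) behind it.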
