I am trying to understand the flow of data inside HDFS as it's processed by the
data processor script.
I see that archive.sh and demux.sh are run, which launch ArchiveManager and
DemuxManager respectively. Just from reading the code, it appears that both
are looking at the data sink (default /chukwa/logs).
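To make my mental model concrete, here is a hypothetical Python sketch of what the code seems to do: two independent managers each polling the same sink directory, rather than one feeding the other. (The class names come from Chukwa, but the polling logic and the `.done` file names here are my guesses, not the actual implementation.)

```python
import os
import tempfile

# Stand-in for the HDFS data sink (/chukwa/logs in real Chukwa).
SINK = tempfile.mkdtemp()

# Fake completed sink files; the ".done" suffix is illustrative only.
for name in ("chunk1.done", "chunk2.done"):
    open(os.path.join(SINK, name), "w").close()

def archive_scan(sink):
    """What ArchiveManager appears to do: scan the sink for finished files."""
    return sorted(f for f in os.listdir(sink) if f.endswith(".done"))

def demux_scan(sink):
    """DemuxManager appears to scan the SAME sink, not the archiver's output."""
    return sorted(f for f in os.listdir(sink) if f.endswith(".done"))

# Both scans see the identical input set -- which is what surprised me,
# since I expected demux to consume the archiver's output instead.
print(archive_scan(SINK))
print(demux_scan(SINK))
```

If that sketch is right, the two managers run in parallel over the same input rather than as a pipeline, which is the behavior I'd like confirmed.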

Can someone shed some light on how ArchiveManager and DemuxManager interact?  
E.g., I was under the impression that data flowed through the archiving
process first, and was fed into demuxing only after the .arc files had been
created.
