Author: yhemanth
Date: Wed Jul  9 07:45:42 2008
New Revision: 675194

URL: http://svn.apache.org/viewvc?rev=675194&view=rev
Log:
HADOOP-3695. Provide an ability to start multiple workers per node. Contributed 
by Vinod Kumar Vavilapalli

Modified:
    hadoop/core/trunk/docs/changes.html
    hadoop/core/trunk/docs/hod_config_guide.html
    hadoop/core/trunk/docs/hod_config_guide.pdf
    hadoop/core/trunk/src/contrib/hod/CHANGES.txt
    hadoop/core/trunk/src/contrib/hod/bin/hod
    hadoop/core/trunk/src/contrib/hod/bin/ringmaster
    hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/hdfs.py
    hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/mapred.py
    hadoop/core/trunk/src/contrib/hod/hodlib/RingMaster/ringMaster.py
    hadoop/core/trunk/src/contrib/hod/support/logcondense.py
    hadoop/core/trunk/src/contrib/hod/testing/testRingmasterRPCs.py
    hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hod_config_guide.xml

Modified: hadoop/core/trunk/docs/changes.html
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/docs/changes.html?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/docs/changes.html (original)
+++ hadoop/core/trunk/docs/changes.html Wed Jul  9 07:45:42 2008
@@ -95,7 +95,7 @@
     </ol>
   </li>
   <li><a 
href="javascript:toggleList('trunk_(unreleased_changes)_._improvements_')">  
IMPROVEMENTS
-</a>&nbsp;&nbsp;&nbsp;(6)
+</a>&nbsp;&nbsp;&nbsp;(7)
     <ol id="trunk_(unreleased_changes)_._improvements_">
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3577">HADOOP-3577</a>. Tools 
to inject blocks into name node and simulated
 data nodes for testing.<br />(Sanjay Radia via hairong)</li>
@@ -106,6 +106,8 @@
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3543">HADOOP-3543</a>. Update 
the copyright year to 2008.<br />(cdouglas via omalley)</li>
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3587">HADOOP-3587</a>. Add a 
unit test for the contrib/data_join framework.<br />(cdouglas)</li>
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3402">HADOOP-3402</a>. Add 
terasort example program<br />(omalley)</li>
+      <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3660">HADOOP-3660</a>. Add 
replication factor for injecting blocks in simulated
+datanodes.<br />(Sanjay Radia via cdouglas)</li>
     </ol>
   </li>
   <li><a 
href="javascript:toggleList('trunk_(unreleased_changes)_._optimizations_')">  
OPTIMIZATIONS
@@ -130,7 +132,7 @@
 </a></h2>
 <ul id="release_0.18.0_-_unreleased_">
   <li><a 
href="javascript:toggleList('release_0.18.0_-_unreleased_._incompatible_changes_')">
  INCOMPATIBLE CHANGES
-</a>&nbsp;&nbsp;&nbsp;(22)
+</a>&nbsp;&nbsp;&nbsp;(23)
     <ol id="release_0.18.0_-_unreleased_._incompatible_changes_">
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-2703">HADOOP-2703</a>.  The 
default options to fsck skip checking files
 that are being written to. The output of fsck is incompatible
@@ -202,6 +204,9 @@
 the superclass compare will throw a NullPointerException. Also define
 a RawComparator for NullWritable and permit it to be written as a key
 to SequenceFiles.<br />(cdouglas)</li>
+      <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3673">HADOOP-3673</a>. Avoid 
deadlock caused by DataNode RPC recoverBlock().
+(Tsz Wo (Nicholas), SZE via rangadi)
+</li>
     </ol>
   </li>
   <li><a 
href="javascript:toggleList('release_0.18.0_-_unreleased_._new_features_')">  
NEW FEATURES
@@ -378,7 +383,7 @@
     </ol>
   </li>
   <li><a 
href="javascript:toggleList('release_0.18.0_-_unreleased_._bug_fixes_')">  BUG 
FIXES
-</a>&nbsp;&nbsp;&nbsp;(115)
+</a>&nbsp;&nbsp;&nbsp;(117)
     <ol id="release_0.18.0_-_unreleased_._bug_fixes_">
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-2905">HADOOP-2905</a>. 'fsck 
-move' triggers NPE in NameNode.<br />(Lohit Vijayarenu via rangadi)</li>
       <li>Increment ClientProtocol.versionID missed by <a 
href="http://issues.apache.org/jira/browse/HADOOP-2585">HADOOP-2585</a>.<br 
/>(shv)</li>
@@ -605,33 +610,50 @@
 classpath jars.<br />(Brice Arnould via nigel)</li>
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3692">HADOOP-3692</a>. Fix 
documentation for Cluster setup and Quick start guides.<br />(Amareshwari 
Sriramadasu via ddas)</li>
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3691">HADOOP-3691</a>. Fix 
streaming and tutorial docs.<br />(Jothi Padmanabhan via ddas)</li>
+      <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3630">HADOOP-3630</a>. Fix 
NullPointerException in CompositeRecordReader from empty
+sources<br />(cdouglas)</li>
+      <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3706">HADOOP-3706</a>. Fix a 
ClassLoader issue in the mapred.join Parser that
+prevents it from loading user-specified InputFormats.<br />(Jingkei Ly via 
cdouglas)</li>
     </ol>
   </li>
 </ul>
 <h2><a href="javascript:toggleList('older')">Older Releases</a></h2>
 <ul id="older">
-<h3><a href="javascript:toggleList('release_0.17.1_-_unreleased_')">Release 
0.17.1 - Unreleased
+<h3><a href="javascript:toggleList('release_0.17.2_-_unreleased_')">Release 
0.17.2 - Unreleased
+</a></h3>
+<ul id="release_0.17.2_-_unreleased_">
+  <li><a 
href="javascript:toggleList('release_0.17.2_-_unreleased_._bug_fixes_')">  BUG 
FIXES
+</a>&nbsp;&nbsp;&nbsp;(3)
+    <ol id="release_0.17.2_-_unreleased_._bug_fixes_">
+      <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3681">HADOOP-3681</a>. 
DFSClient can get into an infinite loop while closing
+a file if there are some errors.<br />(Lohit Vijayarenu via rangadi)</li>
+      <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3002">HADOOP-3002</a>. Hold 
off block removal while in safe mode.<br />(shv)</li>
+      <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3685">HADOOP-3685</a>. 
Unbalanced replication target.<br />(hairong)</li>
+    </ol>
+  </li>
+</ul>
+<h3><a href="javascript:toggleList('release_0.17.1_-_2008-06-23_')">Release 
0.17.1 - 2008-06-23
 </a></h3>
-<ul id="release_0.17.1_-_unreleased_">
-  <li><a 
href="javascript:toggleList('release_0.17.1_-_unreleased_._incompatible_changes_')">
  INCOMPATIBLE CHANGES
+<ul id="release_0.17.1_-_2008-06-23_">
+  <li><a 
href="javascript:toggleList('release_0.17.1_-_2008-06-23_._incompatible_changes_')">
  INCOMPATIBLE CHANGES
 </a>&nbsp;&nbsp;&nbsp;(1)
-    <ol id="release_0.17.1_-_unreleased_._incompatible_changes_">
+    <ol id="release_0.17.1_-_2008-06-23_._incompatible_changes_">
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3565">HADOOP-3565</a>. Fix 
the Java serialization, which is not enabled by
 default, to clear the state of the serializer between objects.<br />(tomwhite 
via omalley)</li>
     </ol>
   </li>
-  <li><a 
href="javascript:toggleList('release_0.17.1_-_unreleased_._improvements_')">  
IMPROVEMENTS
+  <li><a 
href="javascript:toggleList('release_0.17.1_-_2008-06-23_._improvements_')">  
IMPROVEMENTS
 </a>&nbsp;&nbsp;&nbsp;(2)
-    <ol id="release_0.17.1_-_unreleased_._improvements_">
+    <ol id="release_0.17.1_-_2008-06-23_._improvements_">
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3522">HADOOP-3522</a>. 
Improve documentation on reduce pointing out that
 input keys and values will be reused.<br />(omalley)</li>
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3487">HADOOP-3487</a>. 
Balancer uses thread pools for managing its threads;
 therefore provides better resource management.<br />(hairong)</li>
     </ol>
   </li>
-  <li><a 
href="javascript:toggleList('release_0.17.1_-_unreleased_._bug_fixes_')">  BUG 
FIXES
+  <li><a 
href="javascript:toggleList('release_0.17.1_-_2008-06-23_._bug_fixes_')">  BUG 
FIXES
 </a>&nbsp;&nbsp;&nbsp;(14)
-    <ol id="release_0.17.1_-_unreleased_._bug_fixes_">
+    <ol id="release_0.17.1_-_2008-06-23_._bug_fixes_">
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-2159">HADOOP-2159</a> 
Namenode stuck in safemode. The counter blockSafe should
 not be decremented for invalid blocks.<br />(hairong)</li>
       <li><a 
href="http://issues.apache.org/jira/browse/HADOOP-3472">HADOOP-3472</a> 
MapFile.Reader getClosest() function returns incorrect results

Modified: hadoop/core/trunk/docs/hod_config_guide.html
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/docs/hod_config_guide.html?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/docs/hod_config_guide.html (original)
+++ hadoop/core/trunk/docs/hod_config_guide.html Wed Jul  9 07:45:42 2008
@@ -384,7 +384,8 @@
           
 <li>work-dirs: Comma-separated list of paths that will serve
                        as the root for directories that HOD generates and 
passes
-                       to Hadoop for use to store DFS and Map/Reduce data. For 
e.g.
+                       to Hadoop for use to store DFS and Map/Reduce data. For
+                       example,
                        this is where DFS data blocks will be stored. Typically,
                        as many paths are specified as there are disks available
                        to ensure all disks are being utilized. The restrictions
@@ -406,9 +407,23 @@
                        successful allocation even in the presence of a few bad
                        nodes in the cluster.
                        </li>
+          
+<li>workers_per_ring: Number of workers per service per HodRing.
+                       By default this is set to 1. If this configuration
+                       variable is set to a value 'n', the HodRing will run
+                       'n' instances of the workers (TaskTrackers or DataNodes)
+                       on each node acting as a slave. This can be used to run
+                       multiple workers per HodRing, so that the total number 
of
+                       workers  in a HOD cluster is not limited by the total
+                       number of nodes requested during allocation. However, 
note
+                       that this will mean each worker should be configured to 
use
+                       only a proportional fraction of the capacity of the 
+                       resources on the node. In general, this feature is only
+                       useful for testing and simulation purposes, and not for
+                       production use.</li>
         
 </ul>
-<a name="N100A5"></a><a name="3.5+gridservice-hdfs+options"></a>
+<a name="N100A8"></a><a name="3.5+gridservice-hdfs+options"></a>
 <h3 class="h4">3.5 gridservice-hdfs options</h3>
 <ul>
           
@@ -449,7 +464,7 @@
 <li>final-server-params: Same as above, except they will be marked final.</li>
         
 </ul>
-<a name="N100C4"></a><a name="3.6+gridservice-mapred+options"></a>
+<a name="N100C7"></a><a name="3.6+gridservice-mapred+options"></a>
 <h3 class="h4">3.6 gridservice-mapred options</h3>
 <ul>
           
@@ -482,7 +497,7 @@
 <li>final-server-params: Same as above, except they will be marked final.</li>
         
 </ul>
-<a name="N100E3"></a><a name="3.7+hodring+options"></a>
+<a name="N100E6"></a><a name="3.7+hodring+options"></a>
 <h3 class="h4">3.7 hodring options</h3>
 <ul>
           

Modified: hadoop/core/trunk/docs/hod_config_guide.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/docs/hod_config_guide.pdf?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/docs/hod_config_guide.pdf (original)
+++ hadoop/core/trunk/docs/hod_config_guide.pdf Wed Jul  9 07:45:42 2008
@@ -5,10 +5,10 @@
 /Producer (FOP 0.20.5) >>
 endobj
 5 0 obj
-<< /Length 723 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 725 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-[ASCII85-encoded PDF stream data omitted]
+[ASCII85-encoded PDF stream data omitted]
 endstream
 endobj
 6 0 obj
@@ -182,10 +182,10 @@
 >>
 endobj
 34 0 obj
-<< /Length 2543 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 2806 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-[ASCII85-encoded PDF stream data omitted]
+[ASCII85-encoded PDF stream data omitted]
 endstream
 endobj
 35 0 obj
@@ -197,10 +197,10 @@
 >>
 endobj
 36 0 obj
-<< /Length 1946 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 2231 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
-[ASCII85-encoded PDF stream data omitted]
+[ASCII85-encoded PDF stream data omitted]
 endstream
 endobj
 37 0 obj
@@ -394,19 +394,19 @@
 23 0 obj
 <<
 /S /GoTo
-/D [35 0 R /XYZ 85.0 468.2 null]
+/D [35 0 R /XYZ 85.0 362.6 null]
 >>
 endobj
 25 0 obj
 <<
 /S /GoTo
-/D [35 0 R /XYZ 85.0 220.947 null]
+/D [37 0 R /XYZ 85.0 659.0 null]
 >>
 endobj
 27 0 obj
 <<
 /S /GoTo
-/D [37 0 R /XYZ 85.0 573.8 null]
+/D [37 0 R /XYZ 85.0 477.747 null]
 >>
 endobj
 38 0 obj
@@ -417,60 +417,60 @@
 xref
 0 55
 0000000000 65535 f 
-0000015888 00000 n 
-0000015974 00000 n 
-0000016066 00000 n 
+0000016438 00000 n 
+0000016524 00000 n 
+0000016616 00000 n 
 0000000015 00000 n 
 0000000071 00000 n 
-0000000885 00000 n 
-0000001005 00000 n 
-0000001093 00000 n 
-0000016200 00000 n 
-0000001228 00000 n 
-0000016263 00000 n 
-0000001365 00000 n 
+0000000887 00000 n 
+0000001007 00000 n 
+0000001095 00000 n 
+0000016750 00000 n 
+0000001230 00000 n 
+0000016813 00000 n 
+0000001367 00000 n 
+0000016879 00000 n 
+0000001504 00000 n 
+0000016945 00000 n 
+0000001641 00000 n 
+0000017011 00000 n 
+0000001777 00000 n 
+0000017075 00000 n 
+0000001912 00000 n 
+0000017141 00000 n 
+0000002049 00000 n 
+0000017207 00000 n 
+0000002186 00000 n 
+0000017271 00000 n 
+0000002322 00000 n 
+0000017335 00000 n 
+0000002459 00000 n 
+0000004719 00000 n 
+0000004827 00000 n 
+0000007501 00000 n 
+0000007624 00000 n 
+0000007651 00000 n 
+0000007905 00000 n 
+0000010804 00000 n 
+0000010912 00000 n 
+0000013236 00000 n 
+0000017401 00000 n 
+0000013344 00000 n 
+0000013522 00000 n 
+0000013691 00000 n 
+0000013986 00000 n 
+0000014274 00000 n 
+0000014475 00000 n 
+0000014754 00000 n 
+0000014997 00000 n 
+0000015275 00000 n 
+0000015565 00000 n 
+0000015776 00000 n 
+0000015889 00000 n 
+0000015999 00000 n 
+0000016107 00000 n 
+0000016213 00000 n 
 0000016329 00000 n 
-0000001502 00000 n 
-0000016395 00000 n 
-0000001639 00000 n 
-0000016461 00000 n 
-0000001775 00000 n 
-0000016525 00000 n 
-0000001910 00000 n 
-0000016591 00000 n 
-0000002047 00000 n 
-0000016657 00000 n 
-0000002184 00000 n 
-0000016721 00000 n 
-0000002320 00000 n 
-0000016787 00000 n 
-0000002457 00000 n 
-0000004717 00000 n 
-0000004825 00000 n 
-0000007499 00000 n 
-0000007622 00000 n 
-0000007649 00000 n 
-0000007903 00000 n 
-0000010539 00000 n 
-0000010647 00000 n 
-0000012686 00000 n 
-0000016851 00000 n 
-0000012794 00000 n 
-0000012972 00000 n 
-0000013141 00000 n 
-0000013436 00000 n 
-0000013724 00000 n 
-0000013925 00000 n 
-0000014204 00000 n 
-0000014447 00000 n 
-0000014725 00000 n 
-0000015015 00000 n 
-0000015226 00000 n 
-0000015339 00000 n 
-0000015449 00000 n 
-0000015557 00000 n 
-0000015663 00000 n 
-0000015779 00000 n 
 trailer
 <<
 /Size 55
@@ -478,5 +478,5 @@
 /Info 4 0 R
 >>
 startxref
-16902
+17452
 %%EOF

Modified: hadoop/core/trunk/src/contrib/hod/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/CHANGES.txt?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/CHANGES.txt (original)
+++ hadoop/core/trunk/src/contrib/hod/CHANGES.txt Wed Jul  9 07:45:42 2008
@@ -6,6 +6,9 @@
 
   NEW FEATURES
 
+    HADOOP-3695. Provide an ability to start multiple workers per node.
+    (Vinod Kumar Vavilapalli via yhemanth)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

Modified: hadoop/core/trunk/src/contrib/hod/bin/hod
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/bin/hod?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/bin/hod (original)
+++ hadoop/core/trunk/src/contrib/hod/bin/hod Wed Jul  9 07:45:42 2008
@@ -229,8 +229,10 @@
 
              ('max-master-failures', 'pos_int', 
               'Defines how many times a master can fail before' \
-              ' failing cluster allocation', False, 5, True, True)),
+              ' failing cluster allocation', False, 5, True, True),
 
+             ('workers_per_ring', 'pos_int', 'Defines number of workers per 
service per hodring',
+             False, 1, False, True)),
 
             'gridservice-mapred' : (
              ('external', 'bool', "Connect to an already running MapRed?",
@@ -517,6 +519,12 @@
           "The log destiniation uri must be of type local:// or hdfs://."))
         sys.exit(1)
   
+    if hodConfig['ringmaster']['workers_per_ring'] < 1:
+      printErrors(hodConfig.var_error('ringmaster', 'workers_per_ring',
+                "ringmaster.workers_per_ring must be a positive integer " +
+                "greater than or equal to 1"))
+      sys.exit(1)
+                        
     ## TODO : end of should move the dependency verification to hodConfig.verif
       
     hodConfig['hod']['base-dir'] = rootDirectory
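
For reference, the new option lives in the ringmaster section of the configuration, defaults to 1, and is rejected below 1 by the check added above. A minimal standalone sketch of that check against a hand-built config dict (the dict shape mirrors the test configuration later in this patch; the value is illustrative):

    # Sketch only: the hodrc 'ringmaster' section as bin/hod would see it
    hodConfig = {'ringmaster': {'workers_per_ring': 2}}
    if hodConfig['ringmaster']['workers_per_ring'] < 1:
        raise SystemExit("ringmaster.workers_per_ring must be >= 1")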

Modified: hadoop/core/trunk/src/contrib/hod/bin/ringmaster
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/bin/ringmaster?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/bin/ringmaster (original)
+++ hadoop/core/trunk/src/contrib/hod/bin/ringmaster Wed Jul  9 07:45:42 2008
@@ -117,7 +117,10 @@
 
              ('max-master-failures', 'pos_int', 
               'Defines how many times a master can fail before' \
-              ' failing cluster allocation', False, 5, True, True)),
+              ' failing cluster allocation', False, 5, True, True),
+
+             ('workers_per_ring', 'pos_int', 'Defines number of workers per 
service per hodring',
+              False, 1, False, True)),
 
             'resource_manager' : (
              ('id', 'string', 'Batch scheduler ID: torque|condor.',

Modified: hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/hdfs.py
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/hdfs.py?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/hdfs.py (original)
+++ hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/hdfs.py Wed Jul  9 
07:45:42 2008
@@ -76,7 +76,8 @@
 class Hdfs(MasterSlave):
 
   def __init__(self, serviceDesc, nodePool, required_node, version, \
-                                        format=True, upgrade=False):
+                                        format=True, upgrade=False,
+                                        workers_per_ring = 1):
     MasterSlave.__init__(self, serviceDesc, nodePool, required_node)
     self.masterNode = None
     self.masterAddr = None
@@ -87,6 +88,7 @@
     self.upgrade = upgrade
     self.workers = []
     self.version = version
+    self.workers_per_ring = workers_per_ring
 
   def getMasterRequest(self):
     req = NodeRequest(1, [], False)
@@ -117,8 +119,11 @@
     return adminCommands
 
   def getWorkerCommands(self, serviceDict):
-    cmdDesc = self._getDataNodeCommand()
-    return [cmdDesc]
+    workerCmds = []
+    for id in range(1, self.workers_per_ring + 1):
+      workerCmds.append(self._getDataNodeCommand(str(id)))
+
+    return workerCmds
 
   def setMasterNodes(self, list):
     node = list[0]
@@ -250,7 +255,7 @@
     cmd = CommandDesc(dict)
     return cmd
  
-  def _getDataNodeCommand(self):
+  def _getDataNodeCommand(self, id):
 
     sd = self.serviceDesc
 
@@ -282,6 +287,14 @@
       # TODO: check for major as well as minor versions
       attrs['dfs.datanode.ipc.address'] = 'fillinhostport'
                     
+    # unique workdirs in case of multiple datanodes per hodring
+    pd = []
+    for dir in parentDirs:
+      dir = dir + "-" + id
+      pd.append(dir)
+    parentDirs = pd
+    # end of unique workdirs
+
     self._setWorkDirs(workDirs, envs, attrs, parentDirs, 'hdfs-dn')
 
     dict = { 'name' : 'datanode' }
@@ -292,7 +305,7 @@
     dict['workdirs'] = workDirs
     dict['final-attrs'] = attrs
     dict['attrs'] = sd.getAttrs()
-
+ 
     cmd = CommandDesc(dict)
     return cmd
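
To make the per-worker directory handling concrete, here is a small standalone sketch of the suffixing loop added to _getDataNodeCommand above; the directory names are invented for illustration:

    # Each DataNode instance gets its own copy of the parent work dirs, suffixed with its id
    parentDirs = ['/grid/0/hod', '/grid/1/hod']   # hypothetical work-dirs roots
    workers_per_ring = 2
    for worker_id in range(1, workers_per_ring + 1):
        suffixed = [d + "-" + str(worker_id) for d in parentDirs]
        print(suffixed)
    # worker 1 -> ['/grid/0/hod-1', '/grid/1/hod-1']
    # worker 2 -> ['/grid/0/hod-2', '/grid/1/hod-2']

The same suffixing is applied to the TaskTracker directories in mapred.py below.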
 

Modified: hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/mapred.py
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/mapred.py?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/mapred.py (original)
+++ hadoop/core/trunk/src/contrib/hod/hodlib/GridServices/mapred.py Wed Jul  9 
07:45:42 2008
@@ -82,7 +82,8 @@
   
 class MapReduce(MasterSlave):
 
-  def __init__(self, serviceDesc, workDirs,required_node, version):
+  def __init__(self, serviceDesc, workDirs,required_node, version,
+                workers_per_ring = 1):
     MasterSlave.__init__(self, serviceDesc, workDirs,required_node)
 
     self.masterNode = None
@@ -91,6 +92,7 @@
     self.workers = []
     self.required_node = required_node
     self.version = version
+    self.workers_per_ring = workers_per_ring
 
   def isLaunchable(self, serviceDict):
     hdfs = serviceDict['hdfs']
@@ -116,8 +118,11 @@
 
     hdfs = serviceDict['hdfs']
 
-    cmdDesc = self._getTaskTrackerCommand(hdfs)
-    return [cmdDesc]
+    workerCmds = []
+    for id in range(1, self.workers_per_ring + 1):
+      workerCmds.append(self._getTaskTrackerCommand(str(id), hdfs))
+      
+    return workerCmds
 
   def setMasterNodes(self, list):
     node = list[0]
@@ -217,7 +222,7 @@
     cmd = CommandDesc(dict)
     return cmd
 
-  def _getTaskTrackerCommand(self, hdfs):
+  def _getTaskTrackerCommand(self, id, hdfs):
 
     sd = self.serviceDesc
 
@@ -245,6 +250,14 @@
       if 'mapred.task.tracker.http.address' not in attrs:
         attrs['mapred.task.tracker.http.address'] = 'fillinhostport'
 
+    # unique parentDirs in case of multiple tasktrackers per hodring
+    pd = []
+    for dir in parentDirs:
+      dir = dir + "-" + id
+      pd.append(dir)
+    parentDirs = pd
+    # end of unique workdirs
+
     self._setWorkDirs(workDirs, envs, attrs, parentDirs, 'mapred-tt')
 
     dict = { 'name' : 'tasktracker' }

Modified: hadoop/core/trunk/src/contrib/hod/hodlib/RingMaster/ringMaster.py
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/hodlib/RingMaster/ringMaster.py?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/hodlib/RingMaster/ringMaster.py (original)
+++ hadoop/core/trunk/src/contrib/hod/hodlib/RingMaster/ringMaster.py Wed Jul  
9 07:45:42 2008
@@ -553,6 +553,8 @@
     self.__isStopped = False # to let main exit
     self.__exitCode = 0 # exit code with which the ringmaster main method 
should return
 
+    self.workers_per_ring = self.cfg['ringmaster']['workers_per_ring']
+
     self.__initialize_signal_handlers()
     
     sdd = self.cfg['servicedesc']
@@ -609,7 +611,8 @@
         hdfs = HdfsExternal(hdfsDesc, workDirs, 
version=int(hadoopVers['minor']))
         hdfs.setMasterParams( self.cfg['gridservice-hdfs'] )
       else:
-        hdfs = Hdfs(hdfsDesc, workDirs, 0, version=int(hadoopVers['minor']))
+        hdfs = Hdfs(hdfsDesc, workDirs, 0, version=int(hadoopVers['minor']),
+                    workers_per_ring = self.workers_per_ring)
 
       self.serviceDict[hdfs.getName()] = hdfs
       
@@ -619,7 +622,8 @@
         mr = MapReduceExternal(mrDesc, workDirs, 
version=int(hadoopVers['minor']))
         mr.setMasterParams( self.cfg['gridservice-mapred'] )
       else:
-        mr = MapReduce(mrDesc, workDirs,1, version=int(hadoopVers['minor']))
+        mr = MapReduce(mrDesc, workDirs,1, version=int(hadoopVers['minor']),
+                       workers_per_ring = self.workers_per_ring)
 
       self.serviceDict[mr.getName()] = mr
     except:

Modified: hadoop/core/trunk/src/contrib/hod/support/logcondense.py
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/support/logcondense.py?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/support/logcondense.py (original)
+++ hadoop/core/trunk/src/contrib/hod/support/logcondense.py Wed Jul  9 
07:45:42 2008
@@ -113,11 +113,10 @@
   # otherwise only JobTracker logs. Likewise, in case of dynamic dfs, we must 
also look for
   # deleting datanode logs
   filteredNames = ['jobtracker']
-  deletedNamePrefixes = ['0-tasktracker-*']
+  deletedNamePrefixes = ['*-tasktracker-*']
   if options.dynamicdfs == 'true':
     filteredNames.append('namenode')
-    deletedNamePrefixes.append('1-tasktracker-*')
-    deletedNamePrefixes.append('0-datanode-*')
+    deletedNamePrefixes.append('*-datanode-*')
 
   filepath = '%s/\*/hod-logs/' % (options.log)
   cmd = getDfsCommand(options, "-lsr " + filepath)
@@ -128,7 +127,7 @@
     try:
       m = re.match("^.*\s(.*)\n$", line)
       filename = m.group(1)
-      # file name format: 
<prefix>/<user>/hod-logs/<jobid>/[0-1]-[jobtracker|tasktracker|datanode|namenode|]-hostname-YYYYMMDDtime-random.tar.gz
+      # file name format: 
<prefix>/<user>/hod-logs/<jobid>/[0-9]*-[jobtracker|tasktracker|datanode|namenode|]-hostname-YYYYMMDDtime-random.tar.gz
       # first strip prefix:
       if filename.startswith(options.log):
         filename = filename.lstrip(options.log)
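
A quick standalone check of the widened log-archive patterns; fnmatch is used here purely for illustration and may not be how logcondense.py actually applies these prefixes:

    import fnmatch
    # hypothetical archive names following the documented format above
    archives = ['0-tasktracker-host1-20080709t1200-1234.tar.gz',
                '3-tasktracker-host1-20080709t1200-5678.tar.gz',
                '1-datanode-host2-20080709t1200-4321.tar.gz',
                '0-jobtracker-host3-20080709t1200-8765.tar.gz']
    for pattern in ['*-tasktracker-*', '*-datanode-*']:
        matches = [a for a in archives if fnmatch.fnmatch(a, pattern)]
        print(pattern + ': ' + str(matches))
    # the old '0-tasktracker-*' prefix would have missed the archive starting with '3-'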

Modified: hadoop/core/trunk/src/contrib/hod/testing/testRingmasterRPCs.py
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/hod/testing/testRingmasterRPCs.py?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- hadoop/core/trunk/src/contrib/hod/testing/testRingmasterRPCs.py (original)
+++ hadoop/core/trunk/src/contrib/hod/testing/testRingmasterRPCs.py Wed Jul  9 
07:45:42 2008
@@ -30,6 +30,28 @@
 from hodlib.Common.desc import ServiceDesc
 from hodlib.RingMaster.ringMaster import _LogMasterSources
 
+configuration = {
+       'hod': {}, 
+      'resource_manager': {
+                            'id': 'torque', 
+                            'batch-home': '/home/y/'
+                          }, 
+       'ringmaster': {
+                      'max-connect' : 2,
+                      'max-master-failures' : 5
+                     }, 
+       'hodring': {
+                  }, 
+       'gridservice-mapred': { 
+                              'id': 'mapred' 
+                             } ,
+       'gridservice-hdfs': { 
+                              'id': 'hdfs' 
+                            }, 
+       'servicedesc' : {} ,
+       'nodepooldesc': {} , 
+       }
+
 # All test-case classes should have the naming convention test_.*
 class test_MINITEST1(unittest.TestCase):
   def setUp(self):
@@ -45,12 +67,49 @@
   def tearDown(self):
     pass
 
-class test_MINITEST2(unittest.TestCase):
+class test_Multiple_Workers(unittest.TestCase):
   def setUp(self):
+    self.config = configuration
+    self.config['ringmaster']['workers_per_ring'] = 2
+
+    hdfsDesc = self.config['servicedesc']['hdfs'] = 
ServiceDesc(self.config['gridservice-hdfs'])
+    mrDesc = self.config['servicedesc']['mapred'] = 
ServiceDesc(self.config['gridservice-mapred'])
+
+    self.hdfs = Hdfs(hdfsDesc, [], 0, 19, workers_per_ring = \
+                                 self.config['ringmaster']['workers_per_ring'])
+    self.mr = MapReduce(mrDesc, [],1, 19, workers_per_ring = \
+                                 self.config['ringmaster']['workers_per_ring'])
+    
+    self.log = logging.getLogger()
     pass
 
   # All testMethods have to have their names start with 'test'
-  def testSuccess(self):
+  def testWorkersCount(self):
+    self.serviceDict = {}
+    self.serviceDict[self.hdfs.getName()] = self.hdfs
+    self.serviceDict[self.mr.getName()] = self.mr
+    self.rpcSet = _LogMasterSources(self.serviceDict, self.config, None, 
self.log, None)
+
+    cmdList = self.rpcSet.getCommand('host1')
+    self.assertEquals(len(cmdList), 2)
+    self.assertEquals(cmdList[0].dict['argv'][0], 'namenode')
+    self.assertEquals(cmdList[1].dict['argv'][0], 'namenode')
+    addParams = ['fs.default.name=host1:51234', 'dfs.http.address=host1:5125' ]
+    self.rpcSet.addMasterParams('host1', addParams)
+    # print "NN is launched"
+
+    cmdList = self.rpcSet.getCommand('host2')
+    self.assertEquals(len(cmdList), 1)
+    self.assertEquals(cmdList[0].dict['argv'][0], 'jobtracker')
+    addParams = ['mapred.job.tracker=host2:51236',
+                 'mapred.job.tracker.http.address=host2:51237']
+    self.rpcSet.addMasterParams('host2', addParams)
+    # print "JT is launched"
+
+    cmdList = self.rpcSet.getCommand('host3')
+    # Verify the workers count per ring : TTs + DNs
+    self.assertEquals(len(cmdList),
+                      self.config['ringmaster']['workers_per_ring'] * 2)
     pass
     
   def testFailure(self):
@@ -61,27 +120,7 @@
 
 class test_GetCommand(unittest.TestCase):
   def setUp(self):
-    self.config = {
-       'hod': {}, 
-      'resource_manager': {
-                            'id': 'torque', 
-                            'batch-home': '/home/y/'
-                          }, 
-       'ringmaster': {
-                      'max-connect' : 2,
-                      'max-master-failures' : 5
-                     }, 
-       'hodring': {
-                  }, 
-       'gridservice-mapred': { 
-                              'id': 'mapred' 
-                             } ,
-       'gridservice-hdfs': { 
-                              'id': 'hdfs' 
-                            }, 
-       'servicedesc' : {} ,
-       'nodepooldesc': {} , 
-       }
+    self.config = configuration
 
     hdfsDesc = self.config['servicedesc']['hdfs'] = 
ServiceDesc(self.config['gridservice-hdfs'])
     mrDesc = self.config['servicedesc']['mapred'] = 
ServiceDesc(self.config['gridservice-mapred'])
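
The expectation encoded in testWorkersCount above reduces to simple arithmetic: a slave HodRing runs workers_per_ring instances of each worker service (DataNodes and TaskTrackers when dynamic DFS is in use). A sketch of the count behind the final assertEquals:

    # Worker commands expected for one slave HodRing (illustrative)
    workers_per_ring = 2
    worker_services = ['datanode', 'tasktracker']   # dynamic dfs + mapred
    assert workers_per_ring * len(worker_services) == 4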

Modified: 
hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hod_config_guide.xml
URL: 
http://svn.apache.org/viewvc/hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hod_config_guide.xml?rev=675194&r1=675193&r2=675194&view=diff
==============================================================================
--- 
hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hod_config_guide.xml 
(original)
+++ 
hadoop/core/trunk/src/docs/src/documentation/content/xdocs/hod_config_guide.xml 
Wed Jul  9 07:45:42 2008
@@ -156,7 +156,8 @@
         <ul>
           <li>work-dirs: Comma-separated list of paths that will serve
                        as the root for directories that HOD generates and 
passes
-                       to Hadoop for use to store DFS and Map/Reduce data. For 
e.g.
+                       to Hadoop for use to store DFS and Map/Reduce data. For
+                       example,
                        this is where DFS data blocks will be stored. Typically,
                        as many paths are specified as there are disks available
                        to ensure all disks are being utilized. The restrictions
@@ -177,6 +178,19 @@
                        successful allocation even in the presence of a few bad
                        nodes in the cluster.
                        </li>
+          <li>workers_per_ring: Number of workers per service per HodRing.
+                       By default this is set to 1. If this configuration
+                       variable is set to a value 'n', the HodRing will run
+                       'n' instances of the workers (TaskTrackers or DataNodes)
+                       on each node acting as a slave. This can be used to run
+                       multiple workers per HodRing, so that the total number 
of
+                       workers  in a HOD cluster is not limited by the total
+                       number of nodes requested during allocation. However, 
note
+                       that this will mean each worker should be configured to 
use
+                       only a proportional fraction of the capacity of the 
+                       resources on the node. In general, this feature is only
+                       useful for testing and simulation purposes, and not for
+                       production use.</li>
         </ul>
       </section>
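
Since the extra workers share a node, the capacity guidance in the new paragraph can be made concrete with a rough split; the slot count and the use of mapred.tasktracker.map.tasks.maximum below are illustrative assumptions, not part of this patch:

    # Hypothetical capacity split for 2 TaskTrackers sharing one node
    node_map_slots = 8                              # assumed per-node map capacity
    workers_per_ring = 2
    slots_per_tasktracker = node_map_slots // workers_per_ring   # -> 4
    # each TaskTracker would then be configured with roughly this many map slots,
    # e.g. via mapred.tasktracker.map.tasks.maximum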
       

