http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/pipeline.md
----------------------------------------------------------------------
diff --git a/6/pipeline.md b/6/pipeline.md
new file mode 100644
index 0000000..4389435
--- /dev/null
+++ b/6/pipeline.md
@@ -0,0 +1,666 @@
+---
+layout: default6
+category: links
+title: The Joshua Pipeline
+---
+
+*Please note that Joshua 6.0.3 included some big changes to the directory
+organization of the pipeline's files.*
+
+This page describes the Joshua pipeline script, which manages the complexity 
of training and
+evaluating machine translation systems.  The pipeline eases the pain of two 
related tasks in
+statistical machine translation (SMT) research:
+
+- Training SMT systems involves a complicated process of interacting steps 
that are
+  time-consuming and prone to failure.
+
+- Developing and testing new techniques requires varying parameters at 
different points in the
+  pipeline. Earlier results (which are often expensive) need not be recomputed.
+
+To facilitate these tasks, the pipeline script:
+
+- Runs the complete SMT pipeline, from corpus normalization and tokenization, 
through alignment,
+  model building, tuning, test-set decoding, and evaluation.
+
+- Caches the results of intermediate steps (using robust SHA-1 checksums on
+  dependencies), so the pipeline can be debugged or shared across similar runs
+  while avoiding the time spent recomputing expensive steps.
+ 
+- Allows you to jump into and out of the pipeline at a set of predefined 
places (e.g., the alignment
+  stage), so long as you provide the missing dependencies.
+
+The Joshua pipeline script is designed in the spirit of Moses' 
`train-model.pl`, and shares
+(and has borrowed) many of its features.  It is not as extensive as Moses'
+[Experiment Management 
System](http://www.statmt.org/moses/?n=FactoredTraining.EMS), which allows
+the user to define arbitrary execution dependency graphs. However, it is 
significantly simpler to
+use, allowing many systems to be built with a single command (that may run for 
days or weeks).
+
+## Dependencies
+
+The pipeline has no *required* external dependencies.  However, it has support 
for a number of
+external packages, some of which are included with Joshua.
+
+-  [GIZA++](http://code.google.com/p/giza-pp/) (included)
+
+   GIZA++ is the default aligner.  It is included with Joshua, and should
+   compile successfully when you type `ant` from the Joshua root directory.
+   It is not required, because you can use the
+   (included) Berkeley aligner (`--aligner berkeley`). We have recently also 
provided support
+   for the [Jacana-XY 
aligner](http://code.google.com/p/jacana-xy/wiki/JacanaXY) (`--aligner
+   jacana`). 
+
+-  [Hadoop](http://hadoop.apache.org/) (included)
+
+   The pipeline uses the [Thrax grammar extractor](thrax.html), which is built 
on Hadoop.  If you
+   have a Hadoop installation, simply ensure that the `$HADOOP` environment 
variable is defined, and
+   the pipeline will use it automatically at the grammar extraction step.  If 
you are going to
+   attempt to extract very large grammars, it is best to have a good-sized 
Hadoop installation.
+   
+   (If you do not have a Hadoop installation, you might consider setting one 
up.  Hadoop can be
+   installed in a
+   
["pseudo-distributed"](http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html#PseudoDistributed)
+   mode that allows it to use just a few machines or a number of processors on 
a single machine.
+   The main issue is to ensure that there are a lot of independent physical 
disks, since in our
+   experience Hadoop starts to exhibit lots of hard-to-trace problems if there 
is too much demand on
+   the disks.)
+   
+   If you don't have a Hadoop installation, there is no need to worry.  The
+   pipeline will unroll a
+   standalone installation and use it to extract your grammar.  This behavior 
will be triggered if
+   `$HADOOP` is undefined.
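+   
+   For example, if your Hadoop installation lives at `/usr/local/hadoop` (an
+   illustrative path, not a required location), you would set:
+   
+       # in bash
+       export HADOOP=/usr/local/hadoop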
+   
+-  [Moses](http://statmt.org/moses/) (not included). Moses is needed
+   if you wish to use its 'kbmira' tuner (`--tuner kbmira`), or if you
+   wish to build phrase-based models.
+   
+-  [SRILM](http://www.speech.sri.com/projects/srilm/) (not included; not 
needed; not recommended)
+
+   By default, the pipeline uses the included 
[KenLM](https://kheafield.com/code/kenlm/) for
+   building (and also querying) language models. Joshua also includes a Java 
program from the
+   [Berkeley LM](http://code.google.com/p/berkeleylm/) package that contains 
code for constructing a
+   Kneser-Ney-smoothed language model in ARPA format from the target side of 
your training data.  
+   There is no need to use SRILM, but if you do wish to use it, you need to do 
the following:
+   
+   1. Install SRILM and set the `$SRILM` environment variable to point to its 
installed location.
+   1. Add the `--lm-gen srilm` flag to your pipeline invocation.
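+   
+   For example (a sketch; the install path is an assumption):
+   
+       # in bash; point $SRILM at your own installation
+       export SRILM=/usr/local/srilm
+       $JOSHUA/bin/pipeline.pl --lm-gen srilm ...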
+   
+   More information on this is available in the [LM building section of the 
pipeline](#lm).  SRILM
+   is not used for representing language models during decoding (and in fact 
is not supported,
+   having been supplanted by [KenLM](http://kheafield.com/code/kenlm/) (the 
default) and
+   BerkeleyLM).
+
+After installing any dependencies, follow the brief instructions on
+the [installation page](install.html), and then you are ready to build
+models. 
+
+## A basic pipeline run
+
+The pipeline takes a set of inputs (training, tuning, and test data), and 
creates a set of
+intermediate files in the *run directory*.  By default, the run directory is 
the current directory,
+but it can be changed with the `--rundir` parameter.
+
+For this quick start, we will be working with the example that can be found in
+`$JOSHUA/examples/training`.  This example contains 1,000 sentences of
+Urdu-English data (the full dataset is available as part of the
+[Indian languages parallel corpora](/indian-parallel-corpora/)), with
+100-sentence tuning and test sets, each with four references.
+
+Running the pipeline requires two main steps: data preparation and invocation.
+
+1. Prepare your data.  The pipeline script needs to be told where to find the 
raw training, tuning,
+   and test data.  A good convention is to place these files in an input/ 
subdirectory of your run's
+   working directory (NOTE: do not use `data/`, since a directory of that name 
is created and used
+   by the pipeline itself for storing processed files).  The expected format 
(for each of training,
+   tuning, and test) is a pair of files that share a common path prefix and 
are distinguished by
+   their extension, e.g.,
+
+       input/
+             train.SOURCE
+             train.TARGET
+             tune.SOURCE
+             tune.TARGET
+             test.SOURCE
+             test.TARGET
+
+   These files should be parallel at the sentence level (with one sentence per 
line), should be in
+   UTF-8, and should be untokenized (tokenization occurs in the pipeline).  
SOURCE and TARGET denote
+   variables that should be replaced with the actual target and source 
language abbreviations (e.g.,
+   "ur" and "en").
+   
+1. Run the pipeline.  The following is the minimal invocation to run the 
complete pipeline:
+
+       $JOSHUA/bin/pipeline.pl  \
+         --rundir .             \
+         --type hiero           \
+         --corpus input/train   \
+         --tune input/tune      \
+         --test input/test      \
+         --source SOURCE        \
+         --target TARGET
+
+   The `--corpus`, `--tune`, and `--test` flags define file prefixes that are
+   concatenated with the
+   language extensions given by `--target` and `--source` (with a "." in 
between).  Note the
+   correspondences with the files defined in the first step above.  The 
prefixes can be either
+   absolute or relative pathnames.  This particular invocation assumes that a 
subdirectory `input/`
+   exists in the current directory, that you are translating from a language
+   identified by the "ur" extension to a language identified by the "en"
+   extension, that the training
data can be found at
+   `input/train.en` and `input/train.ur`, and so on.
+
+*Don't* run the pipeline directly from `$JOSHUA`, or, for that matter, in any 
directory with lots of other files.
+This can cause problems because the pipeline creates lots of files under 
`--rundir` that can clobber existing files.
+You should run experiments in a clean directory.
+For example, if you have Joshua installed in `$HOME/code/joshua`, manage your 
runs in a different location, such as `$HOME/expts/joshua`.
+
+Assuming no problems arise, this command will run the complete pipeline in 
about 20 minutes,
+producing BLEU scores at the end.  As it runs, you will see output that looks 
like the following:
+   
+    [train-copy-en] rebuilding...
+      dep=/Users/post/code/joshua/test/pipeline/input/train.en 
+      dep=data/train/train.en.gz [NOT FOUND]
+      cmd=cat /Users/post/code/joshua/test/pipeline/input/train.en | gzip -9n 
> data/train/train.en.gz
+      took 0 seconds (0s)
+    [train-copy-ur] rebuilding...
+      dep=/Users/post/code/joshua/test/pipeline/input/train.ur 
+      dep=data/train/train.ur.gz [NOT FOUND]
+      cmd=cat /Users/post/code/joshua/test/pipeline/input/train.ur | gzip -9n 
> data/train/train.ur.gz
+      took 0 seconds (0s)
+    ...
+   
+And in the current directory, you will see the following files (among
+other files, including intermediate files
+generated by the individual sub-steps).
+   
+    data/
+        train/
+            corpus.ur
+            corpus.en
+            thrax-input-file
+        tune/
+            corpus.ur -> tune.tok.lc.ur
+            corpus.en -> tune.tok.lc.en
+            grammar.filtered.gz
+            grammar.glue
+        test/
+            corpus.ur -> test.tok.lc.ur
+            corpus.en -> test.tok.lc.en
+            grammar.filtered.gz
+            grammar.glue
+    alignments/
+        0/
+            [giza/berkeley aligner output files]
+        1/
+        ...
+        training.align
+    thrax-hiero.conf
+    thrax.log
+    grammar.gz
+    lm.gz
+    tune/
+         decoder_command
+         model/
+               [model files]
+         params.txt
+         joshua.log
+         mert.log
+         joshua.config.final
+         final-bleu
+    test/
+         model/
+               [model files]
+         output
+         final-bleu
+
+These files will be described in more detail in subsequent sections of this 
tutorial.
+
+Another useful flag is `--rundir DIR`, which chdir()s to the specified
+directory before running the pipeline.  By default the rundir is the
+current directory.
Changing it can be useful
+for organizing related pipeline runs.  In fact, we highly recommend
+that you organize your runs using consecutive integers, also taking a
+minute to pass a short note with the `--readme` flag, which allows you
+to quickly generate reports on [groups of related experiments](#managing).
+Relative paths specified to other flags (e.g., to `--corpus`
+or `--lmfile`) are relative to the directory the pipeline was called *from*, 
not the rundir itself
+(unless they happen to be the same, of course).
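+
+For example, a pair of related runs might be started like this (a sketch; the
+notes are illustrative):
+
+    $JOSHUA/bin/pipeline.pl --rundir 1 --readme "Baseline Hiero run" ...
+    $JOSHUA/bin/pipeline.pl --rundir 2 --readme "Same, but tuned with MIRA" ...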
+
+The complete pipeline comprises many tens of small steps, which can be grouped 
together into a set
+of traditional pipeline tasks:
+   
+1. [Data preparation](#prep)
+1. [Alignment](#alignment)
+1. [Parsing](#parsing) (syntax-based grammars only)
+1. [Grammar extraction](#tm)
+1. [Language model building](#lm)
+1. [Tuning](#tuning)
+1. [Testing](#testing)
+1. [Analysis](#analysis)
+
+These steps are discussed below, after a few intervening sections about 
high-level details of the
+pipeline.
+
+## <a id="managing" /> Managing groups of experiments
+
+The real utility of the pipeline comes when you use it to manage groups of 
experiments. Typically,
+there is a held-out test set, and we want to vary a number of training 
parameters to determine what
+effect this has on BLEU scores or some other metric. Joshua comes with a script
+`$JOSHUA/scripts/training/summarize.pl` that collects information from a group 
of runs and reports
+them to you. This script works so long as you organize your runs as follows:
+
+1. Your runs should be grouped together in a root directory, which I'll call 
`$EXPDIR`.
+
+2. For comparison purposes, the runs should all be evaluated on the same test 
set.
+
+3. Each run in the run group should be in its own numbered directory, shown 
with the files used by
+the summarize script:
+
+       $EXPDIR/
+           1/
+               README.txt
+               test/
+                   final-bleu
+                   final-times
+               [other files]
+           2/
+               README.txt
+               test/
+                   final-bleu
+                   final-times
+               [other files]
+               ...
+               
+You can get such directories using the `--rundir N` flag to the pipeline. 
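+
+With that layout in place, a summary can be generated from the root of the run
+group (a sketch; `summarize.pl` is shown here with no arguments, though it may
+accept options):
+
+    cd $EXPDIR
+    $JOSHUA/scripts/training/summarize.pl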
+
+Run directories can build off each other. For example, `1/` might contain a 
complete baseline
+run. If you want to change just the tuner, you don't need to rerun the 
aligner and model builder,
+so you can reuse the results by supplying the second run with the information 
it needs that was
+computed in step 1:
+
+    $JOSHUA/bin/pipeline.pl \
+      --first-step tune \
+      --grammar 1/grammar.gz \
+      ...
+      
+More details are below.
+
+## Grammar options
+
+Hierarchical Joshua can extract three types of grammars: Hiero,
+GHKM, and SAMT grammars.  As described on the
+[file formats page](file-formats.html), all of them are encoded into
+the same file format, but they differ in terms of the richness of
+their nonterminal sets.
+
+Hiero grammars make use of a single nonterminal, and are extracted by
+computing phrases from
+word-based alignments and then subtracting out phrase differences.  More 
detail can be found in
+[Chiang (2007) 
[PDF]](http://www.mitpressjournals.org/doi/abs/10.1162/coli.2007.33.2.201).
+[GHKM](http://www.isi.edu/%7Emarcu/papers/cr_ghkm_naacl04.pdf) (new with 5.0) 
and
+[SAMT](http://www.cs.cmu.edu/~zollmann/samt/) grammars make use of a source- 
or target-side parse
+tree on the training data, differing in the way they extract rules using these 
trees: GHKM extracts
+synchronous tree substitution grammar rules rooted in a subset of the tree 
constituents, whereas
+SAMT projects constituent labels down onto phrases.  SAMT grammars are usually 
many times larger and
+are much slower to decode with, but sometimes increase BLEU score.  Both 
grammar formats are
+extracted with the [Thrax software](thrax.html).
+
+By default, the Joshua pipeline extracts a Hiero grammar, but this can be
+altered with the `--type
+(ghkm|samt)` flag. For GHKM grammars, the default is to use
+[Michel Galley's 
extractor](http://www-nlp.stanford.edu/~mgalley/software/stanford-ghkm-latest.tar.gz),
+but you can also use Moses' extractor with `--ghkm-extractor moses`. Galley's 
extractor only outputs
+two features, so the scores tend to be significantly lower than those of Moses'.
+
+Joshua (new in version 6) also includes an unlexicalized phrase-based
+decoder. Building a phrase-based model requires you to have Moses
+installed, since its `train-model.perl` script is used to extract the
+phrase table. You can enable this by defining the `$MOSES` environment
+variable and then specifying `--type phrase`.
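+
+For example (a sketch; the Moses path is an assumption):
+
+    export MOSES=/path/to/mosesdecoder
+    $JOSHUA/bin/pipeline.pl --type phrase ...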
+
+## Other high-level options
+
+The following command-line arguments control run-time behavior of multiple 
steps:
+
+- `--threads N` (1)
+
+  This enables multithreaded operation for a number of steps: alignment (with 
GIZA, max two
+  threads), parsing, and decoding (any number of threads).
+  
+- `--jobs N` (1)
+
+  This enables parallel operation over a cluster using the qsub command.  This 
feature is not
+  well-documented at this point, but you will likely want to edit the file
+  `$JOSHUA/scripts/training/parallelize/LocalConfig.pm` to set up your qsub
+  environment, and may also
+  want to pass specific qsub arguments via the `--qsub-args "ARGS"`
+  flag. We suggest you stick to the standard Joshua model, which
+  tries to use as many cores as are available with the `--threads N` option.
+
+## Restarting failed runs
+
+If the pipeline dies, you can restart it with the same command you used the 
first time.  If you
+rerun the pipeline with the exact same invocation as the previous run (or an 
overlapping
+configuration -- one that causes the same set of behaviors), you will see 
slightly different
+output compared to what we saw above:
+
+    [train-copy-en] cached, skipping...
+    [train-copy-ur] cached, skipping...
+    ...
+
+This indicates that the caching module has discovered that the step was 
already computed and thus
+did not need to be rerun.  This feature is quite useful for restarting 
pipeline runs that have
+crashed due to bugs, memory limitations, hardware failures, and the myriad 
other problems that
+plague MT researchers across the world.
+
+Often, a command will die because it was parameterized incorrectly.  For 
example, perhaps the
+decoder ran out of memory.  Caching allows you to adjust the parameter (e.g., 
`--joshua-mem`) and rerun
+the script.  Of course, if you change one of the parameters a step depends on, 
it will trigger a
+rerun, which in turn might trigger further downstream reruns.
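+
+For example, if the decoder died with an out-of-memory error, you might rerun
+the otherwise-identical invocation with a larger allocation (a sketch):
+
+    $JOSHUA/bin/pipeline.pl ... --joshua-mem 16g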
+   
+## <a id="steps" /> Skipping steps, quitting early
+
+You will also find it useful to start the pipeline somewhere other than data 
preparation (for
+example, if you have already-processed data and an alignment, and want to 
begin with building a
+grammar) or to end it prematurely (if, say, you don't have a test set and just 
want to tune a
+model).  This can be accomplished with the `--first-step` and `--last-step` 
flags, which take as
+argument a case-insensitive version of the following steps:
+
+- *FIRST*: Data preparation.  Everything begins with data preparation.  This 
is the default first
+   step, so there is no need to be explicit about it.
+
+- *ALIGN*: Alignment.  You might want to start here if you want to skip data 
preprocessing.
+
+- *PARSE*: Parsing.  This is only relevant for building SAMT grammars (`--type 
samt`), in which case
+   the target side (`--target`) of the training data (`--corpus`) is parsed 
before building a
+   grammar.
+
+- *THRAX*: Grammar extraction [with Thrax](thrax.html).  If you jump to this 
step, you'll need to
+   provide an aligned corpus (`--alignment`) along with your parallel data.  
+
+- *TUNE*: Tuning.  The exact tuning method is determined with `--tuner 
{mert,mira,pro}`.  With this
+   option, you need to specify a grammar (`--grammar`) or separate tune 
(`--tune-grammar`) and test
+   (`--test-grammar`) grammars.  A full grammar (`--grammar`) will be filtered 
against the relevant
+   tuning or test set unless you specify `--no-filter-tm`.  If you want a 
language model built from
+   the target side of your training data, you'll also need to pass in the 
training corpus
+   (`--corpus`).  You can also specify an arbitrary number of additional 
language models with one or
+   more `--lmfile` flags.
+
+- *TEST*: Testing.  If you have a tuned model file, you can test new corpora 
by passing in a test
+   corpus with references (`--test`).  You'll need to provide a run name 
(`--name`) to store the
+   results of this run, which will be placed under `test/NAME`.  You'll also 
need to provide a
+   Joshua configuration file (`--joshua-config`), one or more language models 
(`--lmfile`), and a
+   grammar (`--grammar`); this will be filtered to the test data unless you 
specify
+   `--no-filter-tm` or unless you directly provide a filtered test grammar
+   (`--test-grammar`).
+
+- *LAST*: The last step.  This is the default target of `--last-step`.
+
+We now discuss these steps in more detail.
+
+### <a id="prep" /> 1. DATA PREPARATION
+
+Data preparation involves applying the following steps to each of the training data 
(`--corpus`), tuning data
+(`--tune`), and testing data (`--test`).  Each of these values is an absolute 
or relative path
+prefix.  To each of these prefixes, a "." is appended, followed by each of 
SOURCE (`--source`) and
+TARGET (`--target`), which are file extensions identifying the languages.  The 
SOURCE and TARGET
+files must have the same number of lines.  
+
+For tuning and test data, multiple references are handled automatically.  A 
single reference will
+have the format TUNE.TARGET, while multiple references will have the format 
TUNE.TARGET.NUM, where
+NUM starts at 0 and increments for as many references as there are.
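+
+For example, a tuning set named `tune` with four English references would be
+laid out as follows (the file names are illustrative):
+
+    input/
+          tune.ur
+          tune.en.0
+          tune.en.1
+          tune.en.2
+          tune.en.3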
+
+The following processing steps are applied to each file.
+
+1.  **Copying** the files into `$RUNDIR/data/TYPE`, where TYPE is one of 
"train", "tune", or "test".
+    Multiple `--corpus` files are concatenated in the order they are 
specified.  Multiple `--tune`
+    and `--test` flags are not currently allowed.
+    
+1.  **Normalizing** punctuation and text (e.g., removing extra spaces, 
converting special
+    quotations).  There are a few language-specific options that depend on the 
file extension
+    matching the [two-letter ISO 
639-1](http://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)
+    designation.
+
+1.  **Tokenizing** the data (e.g., separating out punctuation, converting 
brackets).  Again, there
+    are language-specific tokenizations for a few languages (English, German, 
and Greek).
+
+1.  (Training only) **Removing** all parallel sentences with more than 
`--maxlen` tokens on either
+    side.  By default, MAXLEN is 50.  To turn this off, specify `--maxlen 0`.
+
+1.  **Lowercasing**.
+
+This creates a series of intermediate files which are saved for posterity but 
compressed.  For
+example, you might see
+
+    data/
+        train/
+            train.en.gz
+            train.tok.en.gz
+            train.tok.50.en.gz
+            train.tok.50.lc.en
+            corpus.en -> train.tok.50.lc.en
+
+The file "corpus.LANG" is a symbolic link to the last file in the chain.  
+
+## 2. ALIGNMENT <a id="alignment" />
+
+Alignments are computed between the parallel corpora at 
`$RUNDIR/data/train/corpus.{SOURCE,TARGET}`.  To
+prevent the alignment tables from getting too big, the parallel corpora are 
grouped into files of no
+more than ALIGNER\_CHUNK\_SIZE sentence pairs (controlled with a parameter below).  
The last block is folded
+into the penultimate block if it is too small.  These chunked files are all 
created in a
+subdirectory of `$RUNDIR/data/train/splits`, named `corpus.LANG.0`, 
`corpus.LANG.1`, and so on.
+
+The pipeline parameters affecting alignment are:
+
+-   `--aligner ALIGNER` {giza (default), berkeley, jacana}
+
+    Which aligner to use.  The default is 
[GIZA++](http://code.google.com/p/giza-pp/), but
+    [the Berkeley aligner](http://code.google.com/p/berkeleyaligner/) can be 
used instead.  When
+    using the Berkeley aligner, you'll want to pay attention to how much 
memory you allocate to it
+    with `--aligner-mem` (the default is 10g).
+
+-   `--aligner-chunk-size SIZE` (1,000,000)
+
+    The number of sentence pairs to compute alignments over. The training data 
is split into blocks
+    of this size, aligned separately, and then concatenated.
+    
+-   `--alignment FILE`
+
+    If you have an already-computed alignment, you can pass that to the script 
using this flag.
+    Note that, in this case, you will want to skip data preparation and 
alignment using
+    `--first-step thrax` (the first step after alignment) and also to specify 
`--no-prepare` so
+    as not to retokenize the data and mess with your alignments.
+    
+    The alignment file format is the standard format where 0-indexed many-many 
alignment pairs for a
+    sentence are provided on a line, source language first, e.g.,
+
+        0-0 0-1 1-2 1-7 ...
+
+    This value is required if you start at the grammar extraction step.
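+    
+    For example, a sketch of jumping straight to grammar extraction with an
+    existing alignment (the paths are illustrative):
+    
+        $JOSHUA/bin/pipeline.pl \
+          --first-step thrax \
+          --no-prepare \
+          --alignment /path/to/training.align \
+          --corpus input/train \
+          --source ur --target en \
+          ...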
+
+When alignment is complete, the alignment file can be found at 
`$RUNDIR/alignments/training.align`.
+It is parallel to the training corpora.  There are many files in the 
`alignments/` subdirectory that
+contain the output of intermediate steps.
+
+### <a id="parsing" /> 3. PARSING
+
+To build SAMT and GHKM grammars (`--type samt` and `--type ghkm`), the target 
side of the
+training data must be parsed. The pipeline assumes your target side will be 
English, and will parse
+it for you using [the Berkeley 
parser](http://code.google.com/p/berkeleyparser/), which is included.
+If your target-side language is not English, the target 
side of your training
+data (found at CORPUS.TARGET) must already be parsed in PTB format.  The 
pipeline will notice that
+it is parsed and will not reparse it.
+
+Parsing is affected by both the `--threads N` and `--jobs N` options.  The 
former runs the parser in
+multithreaded mode, while the latter distributes the runs across a cluster 
(and requires some
+configuration, not yet documented).  The options are mutually exclusive.
+
+Once the parsing is complete, there will be two parsed files:
+
+- `$RUNDIR/data/train/corpus.en.parsed`: this is the mixed-case file that was 
parsed.
+- `$RUNDIR/data/train/corpus.parsed.en`: this is a leaf-lowercased version of 
the above file used for
+  grammar extraction.
+
+## 4. THRAX (grammar extraction) <a id="tm" />
+
+The grammar extraction step takes three pieces of data: (1) the 
source-language training corpus, (2)
+the target-language training corpus (parsed, if an SAMT grammar is being 
extracted), and (3) the
+alignment file.  From these, it computes a synchronous context-free grammar.  
If you already have a
+grammar and wish to skip this step, you can do so by passing the grammar with the 
`--grammar
+/path/to/grammar` flag.
+
+The main variable in grammar extraction is Hadoop.  If you have a Hadoop 
installation, simply ensure
+that the environment variable `$HADOOP` is defined, and Thrax will seamlessly 
use it.  If you *do
+not* have a Hadoop installation, the pipeline will roll one out for you, 
running Hadoop in
+standalone mode (this mode is triggered when `$HADOOP` is undefined).  
Theoretically, any grammar
+extractable on a full Hadoop cluster should be extractable in standalone mode, 
if you are patient
+enough; in practice, you probably are not patient enough, and will be limited 
to smaller
+datasets. You may also run into problems with disk space; Hadoop uses a lot 
(use `--tmp
+/path/to/tmp` to specify an alternate place for temporary data; we suggest you 
use a local disk
+partition with tens or hundreds of gigabytes free, and not an NFS partition).  
Setting up your own
+Hadoop cluster is not too difficult a chore; in particular, you may find it 
helpful to install a
+[pseudo-distributed version of 
Hadoop](http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html).
+In our experience, this works fine, but you should note the following caveats:
+
+- It is of crucial importance that you have enough physical disks.  We have 
found that having too
+  few, or too slow of disks, results in a whole host of seemingly unrelated 
issues that are hard to
+  resolve, such as timeouts.  
+- NFS filesystems can cause lots of problems.  You should really try to 
install physical disks that
+  are dedicated to Hadoop scratch space.
+
+Here are some flags relevant to Hadoop and grammar extraction with Thrax:
+
+- `--hadoop /path/to/hadoop`
+
+  This sets the location of Hadoop (overriding the environment variable 
`$HADOOP`).
+  
+- `--hadoop-mem MEM` (2g)
+
+  This alters the amount of memory available to Hadoop mappers (passed via the
+  `mapred.child.java.opts` options).
+  
+- `--thrax-conf FILE`
+
+   Use the provided Thrax configuration file instead of the (grammar-specific) 
default.  The Thrax
+   templates are located at 
`$JOSHUA/scripts/training/templates/thrax-TYPE.conf`, where TYPE is one
+   of "hiero" or "samt".
+  
+When the grammar is extracted, it is compressed and placed at 
`$RUNDIR/grammar.gz`.
+
+## <a id="lm" /> 5. Language model
+
+Before tuning can take place, a language model is needed.  A language model is 
always built from the
+target side of the training corpus unless `--no-corpus-lm` is specified.  In 
addition, you can
+provide other language models (any number of them) with the `--lmfile FILE` 
argument.  Other
+arguments are as follows.
+
+-  `--lm` {kenlm (default), berkeleylm}
+
+   This determines the language model code that will be used when decoding.  
These implementations
+   are described in their respective papers (PDFs:
+   [KenLM](http://kheafield.com/professional/avenue/kenlm.pdf),
+   
[BerkeleyLM](http://nlp.cs.berkeley.edu/pubs/Pauls-Klein_2011_LM_paper.pdf)). 
KenLM is written in
+   C++ and is accessed via JNI, but is recommended because it 
supports left-state minimization.
+   
+- `--lmfile FILE`
+
+  Specifies a pre-built language model to use when decoding.  This language
+  model can be in ARPA format, or in the binarized format of KenLM or
+  BerkeleyLM (matching whichever LM implementation you are decoding with).
+
+- `--lm-gen` {kenlm (default), srilm, berkeleylm}, `--buildlm-mem MEM`, 
`--witten-bell`
+
+  At the tuning step, an LM is built from the target side of the training data 
(unless
+  `--no-corpus-lm` is specified).  This controls which code is used to build 
it.  The default is a
+  KenLM's [lmplz](http://kheafield.com/code/kenlm/estimation/), and is 
strongly recommended.
+  
+  If SRILM is used, it is called with the following arguments:
+  
+        $SRILM/bin/i686-m64/ngram-count -interpolate SMOOTHING -order 5 -text 
TRAINING-DATA -unk -lm lm.gz
+        
+  where SMOOTHING is `-kndiscount`, or `-wbdiscount` if `--witten-bell` is 
passed to the pipeline.
+  
+  A [BerkeleyLM Java
+  class](http://code.google.com/p/berkeleylm/source/browse/trunk/src/edu/berkeley/nlp/lm/io/MakeKneserNeyArpaFromText.java)
+  is also available. It computes a Kneser-Ney LM with constant discounting 
(0.75) and no count
+  thresholding.  The flag `--buildlm-mem` can be used to control how much 
memory is allocated to the
+  Java process.  The default is "2g", but you will want to increase it for 
larger language models.
+  
+  A language model built from the target side of the training data is placed 
at `$RUNDIR/lm.gz`.  
+
+## Interlude: decoder arguments
+
+Running the decoder is done in both the tuning stage and the testing stage.  A 
critical point is
+that you have to give the decoder enough memory to run.  Joshua can be very 
memory-intensive, in
+particular when decoding with large grammars and large language models.  The 
default amount of
+memory is 3100m, which is likely not enough (especially if you are decoding 
with an SAMT grammar).  You
+can alter the amount of memory for Joshua using the `--joshua-mem MEM` 
argument, where MEM is a Java
+memory specification (passed to its `-Xmx` flag).
+
+## <a id="tuning" /> 6. TUNING
+
+Two optimizers are provided with Joshua: MERT and PRO (`--tuner {mert,pro}`).  
If Moses is
+installed, you can also use Cherry & Foster's k-best batch MIRA (`--tuner 
mira`, recommended).
+Tuning is run until convergence in the `$RUNDIR/tune` directory.
+
+When tuning is finished, the final configuration file can be found at
+
+    $RUNDIR/tune/joshua.config.final
+
+## <a id="testing" /> 7. Testing 
+
+For each of the tuner runs, Joshua takes the tuner output file and decodes the 
test set.  If you
+like, you can also apply minimum Bayes-risk decoding to the decoder output 
with `--mbr`.  This
+usually yields about 0.3 to 0.5 BLEU points, but is time-consuming.
+
+After decoding the test set with each set of tuned weights, Joshua computes 
the mean BLEU score,
+writes it to `$RUNDIR/test/final-bleu`, and prints it to the terminal. It also writes a file
+`$RUNDIR/test/final-times` containing a summary of runtime information. That's 
the end of the pipeline!
+
+Joshua also supports decoding further test sets.  This is enabled by rerunning 
the pipeline with a
+number of arguments:
+
+-   `--first-step TEST`
+
+    This tells the decoder to start at the test step.
+
+-   `--joshua-config CONFIG`
+
+    A tuned parameter file is required.  This file will be the output of some 
prior tuning run.
+    Necessary pathnames and so on will be adjusted.
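+
+Putting these together, a sketch of such an invocation (the run name and the
+test prefix are illustrative):
+
+    $JOSHUA/bin/pipeline.pl \
+      --first-step test \
+      --name newtest \
+      --test input/newtest \
+      --joshua-config tune/joshua.config.final \
+      --grammar grammar.gz \
+      --lmfile lm.gz \
+      --source ur --target en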
+    
+## <a id="analysis"> 8. ANALYSIS
+
+If you have used the suggested layout, with a number of related runs all 
contained in a common
+directory with sequential numbers, you can use the script 
`$JOSHUA/scripts/training/summarize.pl` to
+display a summary of the mean BLEU scores from all runs, along with the text 
you placed in the run
+README file (using the pipeline's `--readme TEXT` flag).
+
+## COMMON USE CASES AND PITFALLS 
+
+- If the pipeline dies at the "thrax-run" stage with an error like the 
following:
+
+      JOB FAILED (return code 1) 
+      hadoop/bin/hadoop: line 47: 
+      /some/path/to/a/directory/hadoop/bin/hadoop-config.sh: No such file or 
directory 
+      Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/fs/FsShell 
+      Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.fs.FsShell 
+      
+  This occurs if the `$HADOOP` environment variable is set but does not point 
to a working
+  Hadoop installation.  To fix it, make sure to unset the variable:
+  
+      # in bash
+      unset HADOOP
+      
+  and then rerun the pipeline with the same invocation.
+
+- Memory usage is a major consideration in decoding with Joshua and 
hierarchical grammars.  In
+  particular, SAMT grammars often require a large amount of memory.  Many 
steps have been taken to
+  reduce memory usage, including beam settings and test-set- and 
sentence-level filtering of
+  grammars.  However, memory usage can still be in the tens of gigabytes.
+
+  To accommodate this kind of variation, the pipeline script allows you to 
specify both (a) the
+  amount of memory used by the Joshua decoder instance and (b) the amount of 
memory required of
+  nodes obtained by the qsub command.  These are accomplished with the 
`--joshua-mem` MEM and
+  `--qsub-args` ARGS options.  For example,
+
+      pipeline.pl --joshua-mem 32g --qsub-args "-l pvmem=32g -q himem.q" ...
+
+  Also, should Thrax fail, it might be due to a memory restriction. By 
default, Thrax requests 2 GB
+  from the Hadoop server. If more memory is needed, set the memory requirement 
with the
+  `--hadoop-mem` option in the same way as the `--joshua-mem` option is used.
+
+- Other pitfalls and advice will be added as they are discovered.
+
+## FEEDBACK 
+
+Please email [email protected] with problems or suggestions.
+

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/quick-start.md
----------------------------------------------------------------------
diff --git a/6/quick-start.md b/6/quick-start.md
new file mode 100644
index 0000000..53814ae
--- /dev/null
+++ b/6/quick-start.md
@@ -0,0 +1,59 @@
+---
+layout: default6
+title: Quick Start
+---
+
+If you just want to use Joshua to translate data, the quickest way is
+to download a [pre-built model](/language-packs/). 
+
+If no language pack is available, or if you have your own parallel
+data that you want to train the translation engine on, then you have
+to build your own model. This takes a bit more knowledge and effort,
+but is made easier with Joshua's [pipeline script](pipeline.html),
+which runs all the steps of preparing data, aligning it, and
+extracting and tuning component models. 
+
+Detailed information about running the pipeline can be found in
+[the pipeline documentation](pipeline.html), but as a quick
+start, you can build a simple Bengali--English model by following
+these instructions.
+
+*NOTE: We suggest you build models outside the `$JOSHUA` directory*.
+
+First, download the dataset:
+   
+    mkdir -p ~/models/bn-en/
+    cd ~/models/bn-en
+    wget -q -O indian-parallel-corpora-1.0.tar.gz https://github.com/joshua-decoder/indian-parallel-corpora/archive/1.0.tar.gz
+    tar xzf indian-parallel-corpora-1.0.tar.gz
+    ln -s indian-parallel-corpora-1.0 input
+
+Then, train and test a model:
+
+    $JOSHUA/bin/pipeline.pl --source bn --target en \
+        --type hiero \
+        --no-prepare --aligner berkeley \
+        --corpus input/bn-en/tok/training.bn-en \
+        --tune input/bn-en/tok/dev.bn-en \
+        --test input/bn-en/tok/devtest.bn-en
+
+This will align the data with the Berkeley aligner, build a Hiero
+model, tune with MERT, decode the test sets, and report results that
+should correspond with what you find on
+[the Indian Parallel Corpora page](/indian-parallel-corpora/). For
+more details, including information on the many options available with
+the pipeline script, please see [its documentation page](pipeline.html).
+
+Finally, you can export the full model as a language pack:
+
+    ./run-bundler.py \
+      tune/joshua.config.final \
+      language-pack-bn-en \
+      --pack-tm grammar.gz
+      
+(or possibly `tune/1/joshua.config.final` if you're using an older version of
+the pipeline).
+
+This will create a [runnable model](bundle.html) in
+`language-pack-bn-en`. See the `README` file in that directory for
+information on how to run the decoder.

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/server.md
----------------------------------------------------------------------
diff --git a/6/server.md b/6/server.md
new file mode 100644
index 0000000..f3d8da5
--- /dev/null
+++ b/6/server.md
@@ -0,0 +1,30 @@
+---
+layout: default6
+category: links
+title: Server mode
+---
+
+The Joshua decoder can be run as a TCP/IP server instead of a POSIX-style 
command-line tool. Clients can concurrently connect to a socket and receive a 
set of newline-separated outputs for a set of newline-separated inputs.
+
+Threading takes place both within and across requests.  Threads from the 
decoder pool are assigned in a round-robin manner across requests, preventing 
starvation.
+
+
+# Invoking the server
+
+A running server is configured at invocation time. To start in server mode, 
run `joshua-decoder` with the option `-server-port [PORT]`. Additionally, the 
server can be configured in the same way as with the 
command-line functionality.
+
+E.g.,
+
+    $JOSHUA/bin/joshua-decoder -server-port 10101 -mark-oovs false 
-output-format "%s" -threads 10
+
+## Using the server
+
+To test that the server is working, a set of inputs can be sent to the server 
from the command line. 
+
+The server, as configured in the example above, will then respond to requests 
on port 10101.  You can test it out with the `nc` utility:
+
+    wget -qO - http://cs.jhu.edu/~post/files/pg1023.txt | head -132 | tail -11 
| nc localhost 10101
+
+Since no model was loaded, this will just return the text to you as it was sent to 
the server.
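+
+You can also pipe arbitrary newline-separated input directly, one sentence per
+line (an illustration):
+
+    echo -e "first input sentence\nsecond input sentence" | nc localhost 10101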
+
+The `-server-port` option can also be used when creating a [bundled 
configuration](bundle.html) that will be run in server mode.

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/thrax.md
----------------------------------------------------------------------
diff --git a/6/thrax.md b/6/thrax.md
new file mode 100644
index 0000000..dbcc71c
--- /dev/null
+++ b/6/thrax.md
@@ -0,0 +1,14 @@
+---
+layout: default6
+category: advanced
+title: Grammar extraction with Thrax
+---
+
+One day, this will hold Thrax documentation, including how to use Thrax, how 
to do grammar
+filtering, and details on the configuration file options.  It will also 
include details about our
+experience setting up and maintaining Hadoop cluster installations, knowledge 
wrought of hard-fought
+sweat and tears.
+
+In the meantime, please bother [Jonny Weese](http://cs.jhu.edu/~jonny/) if 
there is something you
+need to do that you don't understand.  You might also be able to dig up some 
information [on the old
+Thrax page](http://cs.jhu.edu/~jonny/thrax/).

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/tms.md
----------------------------------------------------------------------
diff --git a/6/tms.md b/6/tms.md
new file mode 100644
index 0000000..7ce5e9d
--- /dev/null
+++ b/6/tms.md
@@ -0,0 +1,106 @@
+---
+layout: default6
+category: advanced
+title: Building Translation Models
+---
+
+# Build a translation model
+
+Extracting a grammar from a large amount of data is a multi-step process. The 
first requirement is parallel data. The Europarl, Call Home, and Fisher corpora 
all contain parallel translations of Spanish and English sentences.
+
+We will copy (or symlink) the parallel source text files into a subdirectory 
called `input/`.
+
+Then, we concatenate all the training files on each side. The pipeline script 
normally does tokenization and normalization, but in this instance we have a 
custom tokenizer we need to apply to the source side, so we have to do it 
manually and then skip that step using the `pipeline.pl` option `--first-step 
alignment`.
+
+* To tokenize the English data, do:
+
+    cat callhome.en europarl.en fisher.en | $JOSHUA/scripts/training/normalize-punctuation.pl en | $JOSHUA/scripts/training/penn-treebank-tokenizer.perl | $JOSHUA/scripts/lowercase.perl > all.norm.tok.lc.en
+
+The same can be done for the Spanish side of the input data:
+
+    cat callhome.es europarl.es fisher.es | $JOSHUA/scripts/training/normalize-punctuation.pl es | $JOSHUA/scripts/training/penn-treebank-tokenizer.perl | $JOSHUA/scripts/lowercase.perl > all.norm.tok.lc.es
+
+By the way, an alternative tokenizer is a Twitter tokenizer found in the 
[Jerboa](http://github.com/vandurme/jerboa) project.
+
+The final step in the training data preparation is to remove all examples in 
which either of the language sides is a blank line.
+
+    paste all.norm.tok.lc.es all.norm.tok.lc.en | grep -Pv "^\t|\t$" \
+      | ./splittabs.pl all.norm.tok.lc.noblanks.es all.norm.tok.lc.noblanks.en
+
+The contents of `splittabs.pl` (by Matt Post):
+
+    #!/usr/bin/perl
+
+    # splits on tab, printing respective chunks to the list of files given
+    # as script arguments
+
+    use FileHandle;
+
+    $| = 1;   # don't buffer output
+
+    if (@ARGV == 0) {
+      print "Usage: splittabs.pl OUTFILE1 [OUTFILE2 ...] < tabbed-file\n";
+      exit;
+    }
+
+    my @fh = map { get_filehandle($_) } @ARGV;
+    @ARGV = ();
+
+    while (my $line = <>) {
+      chomp($line);
+      my (@fields) = split(/\t/,$line,scalar @fh);
+
+      map { print {$fh[$_]} "$fields[$_]\n" } (0..$#fields);
+    }
+
+    sub get_filehandle {
+        my $file = shift;
+
+        if ($file eq "-") {
+            return *STDOUT;
+        } else {
+            local *FH;
+            open FH, ">$file" or die "can't open '$file' for writing";
+            return *FH;
+        }
+    }
+
+Now we can run the pipeline to extract the grammar. Run the following script:
+
+    #!/bin/bash
+
+    # this creates a grammar
+
+    # NEED:
+    # pair
+    # type
+
+    set -u
+
+    pair=es-en
+    type=hiero
+
+    #. ~/.bashrc
+
+    #basedir=$(pwd)
+
+    dir=grammar-$pair-$type
+
+    [[ ! -d $dir ]] && mkdir -p $dir
+    cd $dir
+
+    source=$(echo $pair | cut -d- -f 1)
+    target=$(echo $pair | cut -d- -f 2)
+
+    $JOSHUA/scripts/training/pipeline.pl \
+      --source $source \
+      --target $target \
+      --corpus 
/home/hltcoe/lorland/expts/scale12/model1/input/all.norm.tok.lc.noblanks \
+      --type $type \
+      --joshua-mem 100g \
+      --no-prepare \
+      --first-step align \
+      --last-step thrax \
+      --hadoop $HADOOP \
+      --threads 8

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/tutorial.md
----------------------------------------------------------------------
diff --git a/6/tutorial.md b/6/tutorial.md
new file mode 100644
index 0000000..482162f
--- /dev/null
+++ b/6/tutorial.md
@@ -0,0 +1,187 @@
+---
+layout: default6
+category: links
+title: Pipeline tutorial
+---
+
+This document will walk you through using the pipeline in a variety of 
scenarios. Once you've gained a
+sense for how the pipeline works, you can consult the [pipeline 
page](pipeline.html) for a number of
+other options available in the pipeline.
+
+## Download and Setup
+
+Download and install Joshua as described on the [quick start 
page](index.html), installing it under
+`~/code/`. Once you've done that, you should make sure you have the following 
environment variable set:
+
+    export JOSHUA=$HOME/code/joshua-v{{ site.data.joshua.release_version }}
+    export JAVA_HOME=/usr/java/default
+
+If you have a Hadoop installation, make sure you've set `$HADOOP` to point to 
it. For example, if the `hadoop` command is in `/usr/bin`,
+you should type
+
+    export HADOOP=/usr
+
+Joshua will find the binary and use it to submit to your Hadoop cluster. If 
you don't have one, just
+make sure that `$HADOOP` is unset, and Joshua will roll one out for you and run 
it in
+[standalone 
mode](https://hadoop.apache.org/docs/r1.2.1/single_node_setup.html). 
+
+## A basic pipeline run
+
+For today's experiments, we'll be building a Spanish--English system using 
data included in the
+[Fisher and CALLHOME translation corpus](/data/fisher-callhome-corpus/). This
+data was collected by translating transcribed speech from previous LDC 
releases.
+
+Download the data and install it somewhere:
+
+    cd ~/data
+    wget --no-check-certificate -O fisher-callhome-corpus.zip https://github.com/joshua-decoder/fisher-callhome-corpus/archive/master.zip
+    unzip fisher-callhome-corpus.zip
+
+Then define the environment variable `$FISHER` to point to it:
+
+    cd ~/data/fisher-callhome-corpus-master
+    export FISHER=$(pwd)
+    
+### Preparing the data
+
+Inside the tarball is the Fisher and CALLHOME Spanish--English data, which 
includes Kaldi-provided
+ASR output and English translations of the Fisher and CALLHOME dataset 
transcriptions. Because of
+licensing restrictions, we cannot distribute the Spanish transcripts, but if 
you have an LDC site
+license, a script is provided to build them. You can type:
+
+    ./bin/build_fisher.sh /export/common/data/corpora/LDC/LDC2010T04
+
+where the first argument is the path to your LDC data release. This will 
create the files in `corpus/ldc`.
+
+In `$FISHER/corpus`, there is a set of parallel directories for LDC 
transcripts (`ldc`), ASR output
+(`asr`), oracle ASR output (`oracle`), and ASR lattice output (`plf`). The 
files look like this:
+
+    $ ls corpus/ldc
+    callhome_devtest.en  fisher_dev2.en.2  fisher_dev.en.2   fisher_test.en.2
+    callhome_evltest.en  fisher_dev2.en.3  fisher_dev.en.3   fisher_test.en.3
+    callhome_train.en    fisher_dev2.es    fisher_dev.es     fisher_test.es
+    fisher_dev2.en.0     fisher_dev.en.0   fisher_test.en.0  fisher_train.en
+    fisher_dev2.en.1     fisher_dev.en.1   fisher_test.en.1  fisher_train.es
+
+If you don't have the LDC transcripts, you can use the data in `corpus/asr` 
instead. We will now use
+this data to build our own Spanish--English model using Joshua's pipeline.
+    
+### Run the pipeline
+
+Create an experiments directory to contain your first experiment. *Note: 
it's important that
+this **not** be inside your `$JOSHUA` directory*.
+
+    mkdir -p ~/expts/joshua
+    cd ~/expts/joshua
+    
+We will now create the baseline run, using a particular directory structure 
for experiments that
+will allow us to take advantage of scripts provided with Joshua for displaying 
the results of many
+related experiments. Because this can take quite some time to run, we are 
going to reduce the model
+considerably by restricting the training data: Joshua will only use sentences 
+in the training sets with ten or fewer words on either 
+side (Spanish or English):
+
+    cd ~/expts/joshua
+    $JOSHUA/bin/pipeline.pl           \
+      --rundir 1                      \
+      --readme "Baseline Hiero run"   \
+      --source es                     \
+      --target en                     \
+      --type hiero                    \
+      --corpus $FISHER/corpus/ldc/fisher_train \
+      --tune $FISHER/corpus/ldc/fisher_dev \
+      --test $FISHER/corpus/ldc/fisher_dev2 \
+      --maxlen 10 \
+      --lm-order 3
+      
+This will start the pipeline building a Spanish--English translation system 
constructed from the
+training data, tuned against the dev set, and tested against the devtest set. 
It will use the
+default values for most of the pipeline: 
[GIZA++](https://code.google.com/p/giza-pp/) for alignment,
+KenLM's `lmplz` for building the language model, Z-MERT for tuning, KenLM with 
left-state
+minimization for representing LM state in the decoder, and so on. We change 
the order of the n-gram
+model to 3 (from its default of 5) because there is not enough data to build a 
5-gram LM.
+
+A few notes:
+
+- This will likely take many hours to run, especially if you don't have a 
Hadoop cluster.
+
+- If you are running on Mac OS X, KenLM's `lmplz` will not build due to the 
absence of static
+  libraries. In that case, you should add the flag `--lm-gen srilm` 
(recommended, if SRILM is
+  installed) or `--lm-gen berkeleylm`.
+
+### Variations
+
+Once that is finished, you will have a baseline model. From there, you might 
wish to try variations
+of the baseline model. Here are some examples of what you could vary:
+
+- Build an SAMT model (`--type samt`), GHKM model (`--type ghkm`), or phrasal 
ITG model (`--type phrasal`) 
+   
+- Use the Berkeley aligner instead of GIZA++ (`--aligner berkeley`)
+   
+- Build the language model with BerkeleyLM (`--lm-gen berkeleylm`) instead of KenLM 
(the default)
+
+- Change the order of the LM from the default of 5 (`--lm-order 4`)
+
+- Tune with MIRA instead of MERT (`--tuner mira`). This requires that Moses is 
installed.
+   
+- Decode with a wider beam (`--joshua-args '-pop-limit 200'`) (the default is 
100)
+
+- Add a dictionary or other supplementary corpus to the training data (add another `--corpus` line)
+
+To do this, we will create new runs that partially reuse the results of 
previous runs. This is
+possible by doing three things: (1) incrementing the run directory and providing 
an updated README
+note; (2) telling the pipeline which of the many steps of the pipeline to 
begin at; and (3)
+providing the needed dependencies.
+
+## A second run
+
+Let's begin by changing the tuner, to see what effect that has. To do so, we 
change the run
+directory, tell the pipeline to start at the tuning step, and provide the 
needed dependencies:
+
+    $JOSHUA/bin/pipeline.pl           \
+      --rundir 2                      \
+      --readme "Tuning with MIRA"     \
+      --source es                     \
+      --target en                     \
+      --corpus $FISHER/corpus/ldc/fisher_train \
+      --tune $FISHER/corpus/ldc/fisher_dev \
+      --test $FISHER/corpus/ldc/fisher_dev2 \
+      --first-step tune \
+      --tuner mira \
+      --grammar 1/grammar.gz \
+      --no-corpus-lm \
+      --lmfile 1/lm.gz
+      
+ Here, we have essentially the same invocation, but we have told the pipeline 
to use a different
+ tuner (MIRA), to start with tuning, and have provided it with the language model file 
and grammar it needs
+ to execute the tuning step. 
+ 
+ Note that we have also told it not to build a language model. This is 
necessary because the
+ pipeline always builds an LM on the target side of the training data, if 
provided, but we are
+ supplying the language model that was already built. We could equivalently 
have removed the
+ `--corpus` line.
+ 
+## Changing the model type
+
+Let's compare the Hiero model we've already built to an SAMT model. We have to 
reextract the
+grammar, but can reuse the alignments and the language model:
+
+    $JOSHUA/bin/pipeline.pl           \
+      --rundir 3                      \
+      --readme "Baseline SAMT model"  \
+      --source es                     \
+      --target en                     \
+      --type samt                     \
+      --corpus $FISHER/corpus/ldc/fisher_train \
+      --tune $FISHER/corpus/ldc/fisher_dev \
+      --test $FISHER/corpus/ldc/fisher_dev2 \
+      --alignment 1/alignments/training.align   \
+      --first-step parse \
+      --no-corpus-lm \
+      --lmfile 1/lm.gz
+
+See [the pipeline script page](pipeline.html#steps) for a list of all the 
steps.
+
+## Analyzing the results
+
+We now have three runs, in subdirectories 1, 2, and 3. We can display summary 
results from them
+using the `$JOSHUA/scripts/training/summarize.pl` script.
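+For example (assuming the three runs live under `~/expts/joshua`):
+
+    cd ~/expts/joshua
+    $JOSHUA/scripts/training/summarize.pl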

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/whats-new.md
----------------------------------------------------------------------
diff --git a/6/whats-new.md b/6/whats-new.md
new file mode 100644
index 0000000..c145fd5
--- /dev/null
+++ b/6/whats-new.md
@@ -0,0 +1,12 @@
+---
+layout: default6
+title: What's New
+---
+
+Joshua 6.0 introduces a number of new features and improvements.
+
+- A new phrase-based decoder that is as fast as Moses
+- Significantly faster hierarchical decoding
+- Support for class-based language modeling
+- Reflection-based loading of feature functions for super-easy
+  development of new features

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/6/zmert.md
----------------------------------------------------------------------
diff --git a/6/zmert.md b/6/zmert.md
new file mode 100644
index 0000000..022d0dc
--- /dev/null
+++ b/6/zmert.md
@@ -0,0 +1,83 @@
+---
+layout: default6
+category: advanced
+title: Z-MERT
+---
+
+This document describes how to manually run the ZMERT module.  ZMERT is 
Joshua's minimum error-rate
+training module, written by Omar F. Zaidan.  It is easily adapted to drop in 
different decoders, and
+was also written so as to work with different objective functions (other than 
BLEU).
+
+(Section (1) in `$JOSHUA/examples/ZMERT/README_ZMERT.txt` is an expanded 
version of this section.)
+
+Z-MERT can be used by launching the driver program (`ZMERT.java`), which 
expects a config file as
+its main argument.  This config file can be used to specify any subset of 
Z-MERT's 20-some
+parameters.  For a full list of those parameters, and their default values, 
run ZMERT with a single
+`-h` argument as follows:
+
+    java -cp $JOSHUA/bin joshua.zmert.ZMERT -h
+
+So what does a Z-MERT config file look like?
+
+Examine the file `examples/ZMERT/ZMERT_config_ex2.txt`.  You will find that it
+specifies the following "main" MERT parameters:
+
+    (*) -dir dirPrefix:         working directory
+    (*) -s sourceFile:          source sentences (foreign sentences) of the 
MERT dataset
+    (*) -r refFile:             target sentences (reference translations) of 
the MERT dataset
+    (*) -rps refsPerSen:        number of reference translations per sentence
+    (*) -p paramsFile:          file containing parameter names, initial 
values, and ranges
+    (*) -maxIt maxMERTIts:      maximum number of MERT iterations
+    (*) -ipi initsPerIt:        number of intermediate initial points per 
iteration
+    (*) -cmd commandFile:       name of file containing commands to run the 
decoder
+    (*) -decOut decoderOutFile: name of the output file produced by the decoder
+    (*) -dcfg decConfigFile:    name of decoder config file
+    (*) -N N:                   size of N-best list (per sentence) generated 
in each MERT iteration
+    (*) -v verbosity:           output verbosity level (0-2; higher value => 
more verbose)
+    (*) -seed seed:             seed used to initialize the random number 
generator
+
+(Note that the `-s` parameter is only used if Z-MERT is running Joshua as an
+ internal decoder.  If Joshua is run as an external decoder, as is the case in
+ this README, then this parameter is ignored.)
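+
+Concretely, a Z-MERT config file lists one parameter per line.  A minimal
+sketch with illustrative values (the exact format, including the inline
+comments, should be checked against the bundled `ZMERT_config_ex2.txt`):
+
+    -dir    ZMERT_example        # working directory
+    -r      ref                  # reference translations
+    -rps    4                    # references per sentence
+    -p      params.txt           # parameter file
+    -maxIt  10                   # maximum MERT iterations
+    -ipi    20                   # intermediate initial points per iteration
+    -cmd    decoder_command      # file containing the decoder command
+    -decOut nbest.out            # decoder output file
+    -dcfg   joshua.config        # decoder config file
+    -N      300                  # n-best list size
+    -v      1                    # verbosity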
+
+To test Z-MERT on the 100-sentence test set of example2, provide this config
+file to Z-MERT as follows:
+
+    java -cp bin joshua.zmert.ZMERT -maxMem 500 
examples/ZMERT/ZMERT_config_ex2.txt > examples/ZMERT/ZMERT_example/ZMERT.out
+
+This will run Z-MERT for a couple of iterations on the data from the example2
+folder.  (Notice that we have made copies of the source and reference files
+from example2 and renamed them as src.txt and ref.* in the ZMERT_example folder,
+just to have all the files needed by Z-MERT in one place.)  Once the Z-MERT run
+is complete, you should be able to inspect the log file to see what kinds of
+things it did.  If everything goes well, the run should take a few minutes, of
+which more than 95% is time spent by Z-MERT waiting on Joshua to finish
+decoding the sentences (once per iteration).
+
+The output file you get should be equivalent to `ZMERT.out.verbosity1`.  If you
+rerun the experiment with the verbosity (-v) argument set to 2 instead of 1,
+the output file you get should be equivalent to `ZMERT.out.verbosity2`, which 
has
+more interesting details about what Z-MERT does.
+
+Notice the additional `-maxMem` argument.  It tells Z-MERT not to hold on to memory while the
+decoder is running (during which time Z-MERT would be idle).  The 500 tells Z-MERT that it may use
+at most 500 MB.  For more details on this issue, see section (4) in Z-MERT's README.
+
+A quick note about Z-MERT's interaction with the decoder: if you examine the file
+`decoder_command_ex2.txt`, which is provided as the commandFile (`-cmd`) argument in Z-MERT's
+config file, you'll find it contains the command one would use to run the decoder.  Z-MERT
+launches the commandFile as an external process, and assumes that it will launch the decoder to
+produce translations.  After launching this external process, Z-MERT waits for it to finish, then
+uses the resulting output file for parameter tuning (in addition to the output files from previous
+iterations).  The command file here has only a single command, but yours could have multiple
+lines.  Either way, make sure the command file itself is executable.
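+
+For concreteness, a command file might look like the sketch below.  The decoder invocation shown
+is only a guess at the general shape (it varies across Joshua releases), and the filenames are
+invented; they must match the `-dcfg` and `-decOut` values in Z-MERT's config file:
+
+    #!/bin/bash
+    # Run Joshua on the tuning set's source side, writing an N-best list.
+    # config_ex2.txt, src.txt, and nbest_ex2.out are illustrative names.
+    java -Xmx1g -cp $JOSHUA/bin joshua.decoder.JoshuaDecoder config_ex2.txt src.txt nbest_ex2.out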
+
+Notice that the Z-MERT arguments `decConfigFile` and `decoderOutFile` (`-dcfg` and `-decOut`) must
+match the two Joshua arguments in the commandFile's (`-cmd`) single command.  Also, the Z-MERT
+argument for N must match the value for `top_n` in Joshua's config file, indicated by the Z-MERT
+argument `decConfigFile` (`-dcfg`).
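+
+As an illustration of that correspondence (names and values are invented, and the exact key syntax
+in Joshua's config file depends on your release):
+
+    # In Z-MERT's config file:
+    -dcfg   config_ex2.txt   # Joshua's config file, also named in the -cmd command
+    -decOut nbest_ex2.out    # Joshua's output file, also named in the -cmd command
+    -N      300
+
+    # In config_ex2.txt, the N-best size must agree with -N:
+    top_n = 300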
+
+For more details on Z-MERT, refer to `$JOSHUA/examples/ZMERT/README_ZMERT.txt`.

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/CNAME
----------------------------------------------------------------------
diff --git a/CNAME b/CNAME
new file mode 100644
index 0000000..ba6985f
--- /dev/null
+++ b/CNAME
@@ -0,0 +1 @@
+joshua-decoder.org

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..503d970
--- /dev/null
+++ b/README.md
@@ -0,0 +1,42 @@
+This directory contains the Joshua web site, including
+
+- The Joshua decoder main web page
+- The Indian language corpora (in indian-parallel-corpora/)
+- [Jekyll](https://github.com/mojombo/jekyll/) code for generating the Joshua end user documentation
+
+The main thing you might want to do (assuming you have write access to this repository) is to add
+documentation pages.  This can be done in a few steps:
+
+1. Write your documentation using Github-supported Markdown or HTML.  Create the file in the
+   current directory, using one of the existing files as a template.  The top of the file contains
+   a number of lines specifying metadata.  The metadata looks like this:
+
+    ---
+    layout: default
+    title:  My New Page
+    ---
+    Your content goes here.
+
+   At minimum, you should specify the layout to apply (a file in `_layouts`, probably `default`,
+   which resolves to `_layouts/default.html`) and the page's title.  Everything below the second
+   set of `---` is substituted into the template where `{{ content }}` is found (see the sketch
+   after this list).
+
+1. Edit `_layouts/default.html`, which contains the template file used to host user documentation.
+   You'll want to add a link to your page from the sidebar.
+
+1. If you also want to edit the main documentation page, you can find that in the file `index.md`.
+   This file is transformed by Jekyll and placed in `userdocs/` alongside everything else.
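+
+For reference, here is a minimal layout template (a sketch only, not the actual
+`_layouts/default.html`) showing how the metadata and `{{ content }}` fit together:
+
+    <!DOCTYPE html>
+    <html>
+      <head><title>Joshua | {{ page.title }}</title></head>
+      <body>
+        <!-- everything below the page's second "---" is inserted here -->
+        {{ content }}
+      </body>
+    </html>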
+
+Note that if you're testing on your local machine, you'll need to install Jekyll.  You need to
+have Ruby installed.  Then type:
+
+    gem install jekyll  # you might need to prepend 'sudo'
+
+You can then type:
+
+    jekyll --pygments --safe
+
+to generate the user pages.  Serve the result from a web server and point a recent browser at it.
+You can also run your own minimal web server with Jekyll.
+[This page](http://net.tutsplus.com/tutorials/other/building-static-sites-with-jekyll/) has a good
+Jekyll tutorial.
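+
+Putting it together, a local preview session might look like this (the `--server` flag is a guess
+at the pre-1.0 Jekyll of this era; newer versions use `jekyll build` and `jekyll serve` instead):
+
+    gem install jekyll        # one-time setup; may need sudo
+    jekyll --pygments --safe  # generate the site
+    jekyll --server           # preview at http://localhost:4000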

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/_config.yml
----------------------------------------------------------------------
diff --git a/_config.yml b/_config.yml
new file mode 100644
index 0000000..65aebd9
--- /dev/null
+++ b/_config.yml
@@ -0,0 +1,5 @@
+timezone: US/Eastern
+markdown: kramdown
+
+gems:
+  - jekyll-redirect-from

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/_data/joshua.yaml
----------------------------------------------------------------------
diff --git a/_data/joshua.yaml b/_data/joshua.yaml
new file mode 100644
index 0000000..1358510
--- /dev/null
+++ b/_data/joshua.yaml
@@ -0,0 +1,2 @@
+release_version: 6.0.5
+release_date: November 5, 2015

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/_layouts/default.html
----------------------------------------------------------------------
diff --git a/_layouts/default.html b/_layouts/default.html
new file mode 100644
index 0000000..28b0fcc
--- /dev/null
+++ b/_layouts/default.html
@@ -0,0 +1,169 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <title>Joshua Documentation | {{ page.title }}</title>
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <meta name="description" content="">
+    <meta name="author" content="">
+
+    <!-- Le styles -->
+    <link href="/bootstrap/css/bootstrap.css" rel="stylesheet">
+    <style>
+      body {
+        padding-top: 60px; /* 60px to make the container go all the way to the bottom of the topbar */
+      }
+      #download {
+          background-color: green;
+          font-size: 14pt;
+          font-weight: bold;
+          text-align: center;
+          color: white;
+          border-radius: 5px;
+          padding: 4px;
+      }
+
+      #download a:link {
+          color: white;
+      }
+
+      #download a:hover {
+          color: lightgrey;
+      }
+
+      #download a:visited {
+          color: white;
+      }
+
+      a.pdf {
+          font-variant: small-caps;
+          /* font-weight: bold; */
+          font-size: 10pt;
+          color: white;
+          background: brown;
+          padding: 2px;
+      }
+
+      a.bibtex {
+          font-variant: small-caps;
+          /* font-weight: bold; */
+          font-size: 10pt;
+          color: white;
+          background: orange;
+          padding: 2px;
+      }
+
+      img.sponsor {
+        height: 120px;
+        margin: 5px;
+      }
+    </style>
+    <link href="bootstrap/css/bootstrap-responsive.css" rel="stylesheet">
+
+    <!-- HTML5 shim, for IE6-8 support of HTML5 elements -->
+    <!--[if lt IE 9]>
+      <script src="bootstrap/js/html5shiv.js"></script>
+    <![endif]-->
+
+    <!-- Fav and touch icons -->
+    <link rel="apple-touch-icon-precomposed" sizes="144x144" 
href="bootstrap/ico/apple-touch-icon-144-precomposed.png">
+    <link rel="apple-touch-icon-precomposed" sizes="114x114" 
href="bootstrap/ico/apple-touch-icon-114-precomposed.png">
+      <link rel="apple-touch-icon-precomposed" sizes="72x72" 
href="bootstrap/ico/apple-touch-icon-72-precomposed.png">
+                    <link rel="apple-touch-icon-precomposed" 
href="bootstrap/ico/apple-touch-icon-57-precomposed.png">
+                                   <link rel="shortcut icon" 
href="bootstrap/ico/favicon.png">
+  </head>
+
+  <body>
+
+    <div class="navbar navbar-inverse navbar-fixed-top">
+      <div class="navbar-inner">
+        <div class="container">
+          <button type="button" class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse">
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <a class="brand" href="/">Joshua</a>
+          <div class="nav-collapse collapse">
+            <ul class="nav">
+              <li><a href="index.html">Documentation</a></li>
+              <li><a href="pipeline.html">Pipeline</a></li>
+              <li><a href="tutorial.html">Tutorial</a></li>
+              <li><a href="decoder.html">Decoder</a></li>
+              <li><a href="thrax.html">Thrax</a></li>
+              <li><a href="file-formats.html">File formats</a></li>
+              <!-- <li><a href="advanced.html">Advanced</a></li> -->
+              <li><a href="faq.html">FAQ</a></li>
+            </ul>
+          </div><!--/.nav-collapse -->
+        </div>
+      </div>
+    </div>
+
+    <div class="container">
+
+      <div class="row">
+        <div class="span2">
+          <img src="/images/joshua-logo-small.png" 
+               alt="Joshua logo (picture of a Joshua tree)" />
+        </div>
+        <div class="span10">
+          <h1>Joshua Documentation</h1>
+          <h2>{{ page.title }}</h2>
+          <span id="download">
+            <a href="http://cs.jhu.edu/~post/files/joshua-v5.0.tgz">Download</a>
+          </span>
+          &nbsp; (version 5.0, released 16 August 2013)
+        </div>
+      </div>
+      
+      <hr />
+
+      <div class="row">
+        <div class="span8">
+
+          {{ content }}
+
+        </div>
+      </div>
+    </div> <!-- /container -->
+
+    <!-- Le javascript
+    ================================================== -->
+    <!-- Placed at the end of the document so the pages load faster -->
+    <script src="bootstrap/js/jquery.js"></script>
+    <script src="bootstrap/js/bootstrap-transition.js"></script>
+    <script src="bootstrap/js/bootstrap-alert.js"></script>
+    <script src="bootstrap/js/bootstrap-modal.js"></script>
+    <script src="bootstrap/js/bootstrap-dropdown.js"></script>
+    <script src="bootstrap/js/bootstrap-scrollspy.js"></script>
+    <script src="bootstrap/js/bootstrap-tab.js"></script>
+    <script src="bootstrap/js/bootstrap-tooltip.js"></script>
+    <script src="bootstrap/js/bootstrap-popover.js"></script>
+    <script src="bootstrap/js/bootstrap-button.js"></script>
+    <script src="bootstrap/js/bootstrap-collapse.js"></script>
+    <script src="bootstrap/js/bootstrap-carousel.js"></script>
+    <script src="bootstrap/js/bootstrap-typeahead.js"></script>
+
+    <!-- Start of StatCounter Code for Default Guide -->
+    <script type="text/javascript">
+      var sc_project=8264132; 
+      var sc_invisible=1; 
+      var sc_security="4b97fe2d"; 
+    </script>
+    <script type="text/javascript" src="http://www.statcounter.com/counter/counter.js"></script>
+    <noscript>
+      <div class="statcounter">
+        <a title="hit counter joomla" href="http://statcounter.com/joomla/"
+           target="_blank">
+          <img class="statcounter" src="http://c.statcounter.com/8264132/0/4b97fe2d/1/"
+               alt="hit counter joomla" />
+        </a>
+      </div>
+    </noscript>
+    <!-- End of StatCounter Code for Default Guide -->
+
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/_layouts/default4.html
----------------------------------------------------------------------
diff --git a/_layouts/default4.html b/_layouts/default4.html
new file mode 100644
index 0000000..a9d417b
--- /dev/null
+++ b/_layouts/default4.html
@@ -0,0 +1,94 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
+    <link rel="stylesheet" type="text/css" media="screen,print" 
href="../joshua4.css" />
+    <title>Joshua | {{ page.title }}</title>
+  </head>
+
+  <body>
+
+    <div id="navbar">
+      <a href="http://joshua-decoder.org/">
+        <img src="../images/joshua-logo-small.png" width="130px" alt="Joshua logo (picture of a Joshua tree)" />
+      </a>
+
+      <p class="infobox">
+        <b>Stable version</b><br />
+        4.1<br/><br/>
+        <b>Release date</b><br />
+        2013 January
+      </p>
+
+<!--       <div class="infobox"> -->
+<!--         <b>AUTO LINKS</b><br/> -->
+<!--         <ul> -->
+<!--           {% for page in site.pages %} -->
+<!--           <li> {{ page.title }}</li> -->
+<!--           {% endfor %} -->
+<!--         </ul> -->
+<!--       </div>   -->
+
+      <div class="infobox">
+
+        <b>Links</b><br />
+        <ul>
+          <li> <a href="../index.html">Main</a> </li>
+          <li> <a href="pipeline.html">Pipeline</a> </li>
+          <li> <a href="step-by-step-instructions.html">Manual walkthrough</a> 
</li>
+          <li> <a href="decoder.html">Decoder</a> </li>
+          <li> <a href="server.html">Decoder Server</a> </li>
+          <li> <a href="file-formats.html">File formats</a> </li>
+          <li> <a href="thrax.html">Grammar Extraction</a> </li>
+          <li> <a href="../releases.html">Releases</a> </li>
+        </ul>
+      </div>
+
+      <div class="infobox">
+        <b>Advanced</b><br />
+        <ul>
+<!--          <li> <a href="packing.html">Grammar packing</a> </li> -->
+          <li> <a href="large-lms.html">Building large LMs</a> </li>
+          <li> <a href="zmert.html">Running Z-MERT</a> </li>
+          <li> <a href="lattice.html">Lattices</a> </li>
+          <li> <a href="server.html">TCP/IP server</a> </li>
+          <li> <a href="bundle.html">Bundled configuration</a> </li>
+        </ul>
+      </div>
+
+      <div class="infobox">
+        <b>Help</b><br />
+        <ul>
+          <li> <a href="faq.html">Answers</a> </li>
+          <li> <a href="https://groups.google.com/d/forum/joshua_support">Archive</a> </li>
+        </ul>
+      </div>
+
+      <div class="footer">
+        Last updated on {{ site.time | date: "%B %d, %Y" }}
+      </div>
+
+    </div>
+
+    <div id="main">
+      <div id="title">
+        <h1>{{ page.title }}</h1>
+      </div>
+
+      <div id="content">
+        
+        {{ content }}
+
+      </div>
+    </div>
+
+  </body>
+</html>
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/_layouts/default6.html
----------------------------------------------------------------------
diff --git a/_layouts/default6.html b/_layouts/default6.html
new file mode 100644
index 0000000..d647655
--- /dev/null
+++ b/_layouts/default6.html
@@ -0,0 +1,200 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <meta name="description" content="">
+    <meta name="author" content="">
+    <link rel="icon" href="../../favicon.ico">
+
+    <title>Joshua Documentation | {{ page.title }}</title>
+
+    <!-- Bootstrap core CSS -->
+    <link href="/dist/css/bootstrap.min.css" rel="stylesheet">
+
+    <!-- Custom styles for this template -->
+    <link href="/joshua6.css" rel="stylesheet">
+  </head>
+
+  <body>
+
+    <div class="blog-masthead">
+      <div class="container">
+        <nav class="blog-nav">
+          <!-- <a class="blog-nav-item active" href="#">Joshua</a> -->
+          <a class="blog-nav-item" href="/">Joshua</a>
+          <!-- <a class="blog-nav-item" href="/6.0/whats-new.html">New features</a> -->
+          <a class="blog-nav-item" href="/language-packs/">Language packs</a>
+          <a class="blog-nav-item" href="/data/">Datasets</a>
+          <a class="blog-nav-item" href="/support/">Support</a>
+          <a class="blog-nav-item" href="/contributors.html">Contributors</a>
+        </nav>
+      </div>
+    </div>
+
+    <div class="container">
+
+      <div class="row">
+
+        <div class="col-sm-2">
+          <div class="sidebar-module">
+            <!-- <h4>About</h4> -->
+            <center>
+            <img src="/images/joshua-logo-small.png" />
+            <p>Joshua machine translation toolkit</p>
+            </center>
+          </div>
+          <hr>
+          <center>
+            <a href="/releases/current/" target="_blank"><button 
class="button">Download Joshua {{ site.data.joshua.release_version 
}}</button></a>
+            <br />
+            <a href="/releases/runtime/" target="_blank"><button 
class="button">Runtime only version</button></a>
+            <p>Released {{ site.data.joshua.release_date }}</p>
+          </center>
+          <hr>
+          <!-- <div class="sidebar-module"> -->
+          <!--   <span id="download"> -->
+          <!--     <a href="http://joshua-decoder.org/downloads/joshua-6.0.tgz">Download</a> -->
+          <!--   </span> -->
+          <!-- </div> -->
+          <div class="sidebar-module">
+            <h4>Using Joshua</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/install.html">Installation</a></li>
+              <li><a href="/6.0/quick-start.html">Quick Start</a></li>
+            </ol>
+          </div>
+          <hr>
+          <div class="sidebar-module">
+            <h4>Building new models</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/pipeline.html">Pipeline</a></li>
+              <li><a href="/6.0/tutorial.html">Tutorial</a></li>
+              <li><a href="/6.0/faq.html">FAQ</a></li>
+            </ol>
+          </div>
+<!--
+          <div class="sidebar-module">
+            <h4>Phrase-based</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/phrase.html">Training</a></li>
+            </ol>
+          </div>
+-->
+          <hr>
+          <div class="sidebar-module">
+            <h4>Advanced</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/bundle.html">Building language packs</a></li>
+              <li><a href="/6.0/decoder.html">Decoder options</a></li>
+              <li><a href="/6.0/file-formats.html">File formats</a></li>
+              <li><a href="/6.0/packing.html">Packing TMs</a></li>
+              <li><a href="/6.0/large-lms.html">Building large LMs</a></li>
+            </ol>
+          </div>
+
+          <hr> 
+          <div class="sidebar-module">
+            <h4>Developer</h4>
+            <ol class="list-unstyled">              
+               <li><a href="https://github.com/joshua-decoder/joshua">Github</a></li>
+               <li><a href="http://cs.jhu.edu/~post/joshua-docs">Javadoc</a></li>
+               <li><a href="https://groups.google.com/forum/?fromgroups#!forum/joshua_developers">Mailing list</a></li>
+            </ol>
+          </div>
+
+        </div><!-- /.blog-sidebar -->
+
+        {% if page.twitter %}
+        <div class="col-sm-6 blog-main">
+        {% else %}
+        <div class="col-sm-8 blog-main">
+        {% endif %}
+
+          <div class="blog-title">
+            <h2>{{ page.title }}</h2>
+          </div>
+          
+          <div class="blog-post">
+
+            {{ content }}
+
+
+        </div>
+
+      </div><!-- /.row -->
+
+      {% if page.twitter %}
+      <div style="col-sm-3">
+        <div class="twitter">
+          <a class="twitter-timeline" href="https://twitter.com/joshuadecoder"; 
data-widget-id="367380700124569600">Tweets by @joshuadecoder</a>
+          <script>!function(d,s,id){var 
js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
+        </div>
+      </div>
+      {% endif %}
+        
+    </div><!-- /.container -->
+
+    <!-- Bootstrap core JavaScript
+    ================================================== -->
+    <!-- Placed at the end of the document so the pages load faster -->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
+    <script src="../../dist/js/bootstrap.min.js"></script>
+    <!-- <script src="../../assets/js/docs.min.js"></script> -->
+    <!-- IE10 viewport hack for Surface/desktop Windows 8 bug -->
+    <!-- <script src="../../assets/js/ie10-viewport-bug-workaround.js"></script> -->
+
+    <!-- Start of StatCounter Code for Default Guide -->
+    <script type="text/javascript">
+      var sc_project=8264132; 
+      var sc_invisible=1; 
+      var sc_security="4b97fe2d"; 
+    </script>
+    <script type="text/javascript" src="http://www.statcounter.com/counter/counter.js"></script>
+    <noscript>
+      <div class="statcounter">
+        <a title="hit counter joomla" href="http://statcounter.com/joomla/"
+           target="_blank">
+          <img class="statcounter" src="http://c.statcounter.com/8264132/0/4b97fe2d/1/"
+               alt="hit counter joomla" />
+        </a>
+      </div>
+    </noscript>
+    <!-- End of StatCounter Code for Default Guide -->
+  </body>
+</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/_layouts/documentation.html
----------------------------------------------------------------------
diff --git a/_layouts/documentation.html b/_layouts/documentation.html
new file mode 100644
index 0000000..95b6df2
--- /dev/null
+++ b/_layouts/documentation.html
@@ -0,0 +1,60 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <title>Joshua decoder | {{ page.title }}</title>
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <meta name="description" content="">
+    <meta name="author" content="">
+
+    <!-- Le styles -->
+    <link href="/bootstrap/css/bootstrap.css" rel="stylesheet" />
+    <link href="/joshua.css" rel="stylesheet" />
+    <style>
+      body {
+        padding-top: 60px; /* 60px to make the container go all the way to the bottom of the topbar */
+      }
+    </style>
+    <link href="bootstrap/css/bootstrap-responsive.css" rel="stylesheet">
+
+    <!-- HTML5 shim, for IE6-8 support of HTML5 elements -->
+    <!--[if lt IE 9]>
+      <script src="bootstrap/js/html5shiv.js"></script>
+    <![endif]-->
+  </head>
+
+  <body>
+    <div class="navbar navbar-inverse navbar-fixed-top">
+      <div class="navbar-inner">
+        <div class="container">
+          <button type="button" class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse">
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <a class="brand" href="/">Joshua</a>
+          <div class="nav-collapse collapse">
+            <ul class="nav">
+              <li class="active"><a href="index.html">Indian Languages</a></li>
+              <li class="active"><a 
href="../fisher-callhome-corpus">Fisher-Callhome Spanish</a></li>
+            </ul>
+          </div><!--/.nav-collapse -->
+        </div>
+      </div>
+    </div>
+
+    <div class="container">
+
+      {{ content }}
+
+    </div> <!-- /container -->
+
+    <!-- Le javascript
+    ================================================== -->
+    <!-- Placed at the end of the document so the pages load faster -->
+    <script src="bootstrap/js/jquery.js"></script>
+    <script src="bootstrap/js/bootstrap.js"></script>
+
+  </body>
+</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/ccc92816/blog.css
----------------------------------------------------------------------
diff --git a/blog.css b/blog.css
new file mode 100644
index 0000000..2c756a6
--- /dev/null
+++ b/blog.css
@@ -0,0 +1,171 @@
+/*
+ * Globals
+ */
+
+body {
+  font-family: Georgia, "Times New Roman", Times, serif;
+  color: #555;
+}
+
+h1, .h1,
+h2, .h2,
+h3, .h3,
+h4, .h4,
+h5, .h5,
+h6, .h6 {
+  margin-top: 0;
+  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
+  font-weight: normal;
+  color: #333;
+}
+
+
+/*
+ * Override Bootstrap's default container.
+ */
+
+@media (min-width: 1200px) {
+  .container {
+    /* width: 970px; */
+      width: 100%;
+  }
+}
+
+
+/*
+ * Masthead for nav
+ */
+
+.blog-masthead {
+  background-color: #428bca;
+  -webkit-box-shadow: inset 0 -2px 5px rgba(0,0,0,.1);
+          box-shadow: inset 0 -2px 5px rgba(0,0,0,.1);
+}
+
+/* Nav links */
+.blog-nav-item {
+  position: relative;
+  display: inline-block;
+  padding: 10px;
+  font-weight: 500;
+  color: #cdddeb;
+}
+.blog-nav-item:hover,
+.blog-nav-item:focus {
+  color: #fff;
+  text-decoration: none;
+}
+
+/* Active state gets a caret at the bottom */
+.blog-nav .active {
+  color: #fff;
+}
+.blog-nav .active:after {
+  position: absolute;
+  bottom: 0;
+  left: 50%;
+  width: 0;
+  height: 0;
+  margin-left: -5px;
+  vertical-align: middle;
+  content: " ";
+  border-right: 5px solid transparent;
+  border-bottom: 5px solid;
+  border-left: 5px solid transparent;
+}
+
+
+/*
+ * Blog name and description
+ */
+
+.blog-header {
+  padding-top: 20px;
+  padding-bottom: 20px;
+}
+.blog-title {
+  margin-top: 30px;
+  margin-bottom: 0;
+  font-size: 60px;
+  font-weight: normal;
+}
+.blog-description {
+  font-size: 20px;
+  color: #999;
+}
+
+
+/*
+ * Main column and sidebar layout
+ */
+
+.blog-main {
+  font-size: 18px;
+  line-height: 1.5;
+}
+
+/* Sidebar modules for boxing content */
+.sidebar-module {
+  padding: 3px;
+  margin-top: 5px;
+  margin-left: 0px;
+  margin-right: 5px;
+  margin-bottom: 5px;
+}
+.sidebar-module-inset {
+  padding: 0px;
+  background-color: #f5f5f5;
+  border-radius: 0px;
+}
+.sidebar-module-inset p:last-child,
+.sidebar-module-inset ul:last-child,
+.sidebar-module-inset ol:last-child {
+  margin-bottom: 0;
+}
+
+
+/* Pagination */
+.pager {
+  margin-bottom: 60px;
+  text-align: left;
+}
+.pager > li > a {
+  width: 140px;
+  padding: 10px 20px;
+  text-align: center;
+  border-radius: 30px;
+}
+
+
+/*
+ * Blog posts
+ */
+
+.blog-post {
+  margin-bottom: 60px;
+}
+.blog-post-title {
+  margin-top: 10px;
+  margin-bottom: 5px;
+  font-size: 24px;
+}
+.blog-post-meta {
+  margin-bottom: 20px;
+  color: #999;
+}
+
+
+/*
+ * Footer
+ */
+
+.blog-footer {
+  padding: 40px 0;
+  color: #999;
+  text-align: center;
+  background-color: #f9f9f9;
+  border-top: 1px solid #e5e5e5;
+}
+.blog-footer p:last-child {
+  margin-bottom: 0;
+}
