Added: aurora/site/source/documentation/0.12.0/storage-config.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/storage-config.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/storage-config.md (added)
+++ aurora/site/source/documentation/0.12.0/storage-config.md Fri Mar  4 
02:43:01 2016
@@ -0,0 +1,142 @@
+# Storage Configuration And Maintenance
+
+- [Overview](#overview)
+- [Scheduler storage configuration 
flags](#scheduler-storage-configuration-flags)
+  - [Mesos replicated log configuration 
flags](#mesos-replicated-log-configuration-flags)
+    - [-native_log_quorum_size](#-native_log_quorum_size)
+    - [-native_log_file_path](#-native_log_file_path)
+    - [-native_log_zk_group_path](#-native_log_zk_group_path)
+  - [Backup configuration flags](#backup-configuration-flags)
+    - [-backup_interval](#-backup_interval)
+    - [-backup_dir](#-backup_dir)
+    - [-max_saved_backups](#-max_saved_backups)
+- [Recovering from a scheduler backup](#recovering-from-a-scheduler-backup)
+  - [Summary](#summary)
+  - [Preparation](#preparation)
+  - [Cleanup and re-initialize Mesos replicated 
log](#cleanup-and-re-initialize-mesos-replicated-log)
+  - [Restore from backup](#restore-from-backup)
+  - [Cleanup](#cleanup)
+
+## Overview
+
+This document summarizes Aurora storage configuration and maintenance details 
and is
+intended for use by anyone deploying and/or maintaining Aurora.
+
+For a high level overview of the Aurora storage architecture refer to [this 
document](/documentation/0.12.0/storage/).
+
+## Scheduler storage configuration flags
+
+Below is a summary of scheduler storage configuration flags that either don't 
have default values
+or require attention before deploying in a production environment.
+
+### Mesos replicated log configuration flags
+
+#### -native_log_quorum_size
+Defines the Mesos replicated log quorum size. See
+[the replicated log configuration 
document](/documentation/0.12.0/deploying-aurora-scheduler/#replicated-log-configuration)
+on how to choose the right value.
+
+#### -native_log_file_path
+Location of the Mesos replicated log files. Consider allocating a dedicated 
disk (preferably SSD)
+for Mesos replicated log files to ensure optimal storage performance.
+
+#### -native_log_zk_group_path
+ZooKeeper path used for Mesos replicated log quorum discovery.
+
+See
+[MesosLogStreamModule.java](https://github.com/apache/aurora/blob/#{git_tag}/src/main/java/org/apache/aurora/scheduler/log/mesos/MesosLogStreamModule.java)
+for other available Mesos replicated log configuration options and default values.
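+
+For illustration, a scheduler launch might combine these flags as follows (the values below are
+hypothetical placeholders, not recommendations):
+
+    -native_log_quorum_size=2
+    -native_log_file_path=/var/db/aurora/replicated-log
+    -native_log_zk_group_path=/aurora/replicated-log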
+
+### Backup configuration flags
+
+Configuration options for the Aurora scheduler backup manager.
+
+#### -backup_interval
+The interval on which the scheduler writes local storage backups.  The default 
is every hour.
+
+#### -backup_dir
+Directory to write backups to.
+
+#### -max_saved_backups
+Maximum number of backups to retain before deleting the oldest backup(s).
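+
+For example, a scheduler keeping two days of hourly backups might use the following (the directory
+below is an illustrative assumption):
+
+    -backup_interval=1hrs
+    -backup_dir=/var/db/aurora/backups
+    -max_saved_backups=48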
+
+## Recovering from a scheduler backup
+
+- [Summary](#summary)
+- [Preparation](#preparation)
+- [Cleanup and re-initialize Mesos replicated log](#cleanup-and-re-initialize-mesos-replicated-log)
+- [Restore from backup](#restore-from-backup)
+- [Cleanup](#cleanup)
+
+**Be sure to read the entire page before attempting to restore from a backup, 
as it may have
+unintended consequences.**
+
+### Summary
+
+The restoration procedure replaces the existing (possibly corrupted) Mesos replicated log with an
+earlier, backed-up version and requires all schedulers to be taken down temporarily while
+restoring. Once completed, the scheduler state resets to what it was when the
backup was created.
+This means any jobs/tasks created or updated after the backup are unknown to 
the scheduler and will
+be killed shortly after the cluster restarts. All other tasks continue 
operating as normal.
+
+Usually, it is a bad idea to restore a backup that is not extremely recent 
(i.e. older than a few
+hours). This is because the scheduler will expect the cluster to look exactly 
as the backup does,
+so any tasks that have been rescheduled since the backup was taken will be 
killed.
+
+### Preparation
+
+Follow these steps to prepare the cluster for restoring from a backup:
+
+* Stop all scheduler instances
+
+* Consider blocking external traffic on the port defined in `-http_port` for all schedulers to
+prevent users from interacting with the scheduler during the restoration process. This reduces
+scheduler log noise, which aids troubleshooting, and prevents users from making changes that
+will be erased once the backup snapshot is restored
+
+* The following steps put the scheduler into a partially disabled state in which it can still
+accept storage recovery requests but cannot schedule tasks or change task states. This may be
+accomplished by updating these scheduler configuration options (see the combined sketch after
+this list):
+  * Set `-mesos_master_address` to a non-existent zk address. This will prevent the scheduler
+    from registering with Mesos. E.g.: `-mesos_master_address=zk://localhost:2181`
+  * Set `-max_registration_delay` to a sufficiently long interval to prevent a registration
+    timeout and the resulting scheduler suicide. E.g.: `-max_registration_delay=360mins`
+  * Make sure the `-reconciliation_initial_delay` option is set high enough (e.g.: `365days`) to
+    prevent accidental task GC. This is important as the scheduler will attempt to reconcile the
+    cluster state and will kill all tasks when restarted with an empty Mesos replicated log.
+
+* Restart all schedulers
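+
+Combined, the partially disabled state described above might look like this in the scheduler's
+flags (values are the examples given above, not recommendations):
+
+    -mesos_master_address=zk://localhost:2181
+    -max_registration_delay=360mins
+    -reconciliation_initial_delay=365days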
+
+### Cleanup and re-initialize Mesos replicated log
+
+Get rid of the corrupted files and re-initialize the Mesos replicated log:
+
+* Stop schedulers
+* Delete all files under `-native_log_file_path` on all schedulers
+* Initialize Mesos replica's log file: `mesos-log initialize 
--path=<-native_log_file_path>`
+* Restart schedulers
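+
+For example, if `-native_log_file_path` were the hypothetical `/var/db/aurora/replicated-log`,
+the per-scheduler cleanup would be:
+
+    rm -r /var/db/aurora/replicated-log/*
+    mesos-log initialize --path=/var/db/aurora/replicated-log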
+
+### Restore from backup
+
+At this point the scheduler is ready to rehydrate from the backup:
+
+* Identify the leading scheduler by:
+  * running `aurora_admin get_scheduler <cluster>` - if the scheduler is responsive
+  * examining scheduler logs
+  * or examining Zookeeper registration under the path defined by 
`-zk_endpoints`
+    and `-serverset_path`
+
+* Locate the desired backup file, copy it to the leading scheduler, and stage recovery by running
+the following command on the leader:
+`aurora_admin scheduler_stage_recovery <cluster> scheduler-backup-<yyyy-MM-dd-HH-mm>`
+
+* At this point, the recovery snapshot is staged and available for manual 
verification/modification
+via `aurora_admin scheduler_print_recovery_tasks` and 
`scheduler_delete_recovery_tasks` commands.
+See `aurora_admin help <command>` for usage details.
+
+* Commit recovery. This instructs the scheduler to overwrite the existing Mesos replicated log
+with the provided backup snapshot and initiate a mandatory failover:
+`aurora_admin scheduler_commit_recovery <cluster>`
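+
+Putting it together, a full recovery session on the leader might look like this (the cluster name
+and backup timestamp are hypothetical):
+
+    aurora_admin scheduler_stage_recovery devcluster scheduler-backup-2016-03-01-00-00
+    aurora_admin scheduler_print_recovery_tasks devcluster
+    aurora_admin scheduler_commit_recovery devcluster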
+
+### Cleanup
+Undo any modifications done during the [Preparation](#preparation) sequence.
+

Added: aurora/site/source/documentation/0.12.0/storage.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/storage.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/storage.md (added)
+++ aurora/site/source/documentation/0.12.0/storage.md Fri Mar  4 02:43:01 2016
@@ -0,0 +1,88 @@
+# Aurora Scheduler Storage
+
+- [Overview](#overview)
+- [Reads, writes, modifications](#reads-writes-modifications)
+  - [Read lifecycle](#read-lifecycle)
+  - [Write lifecycle](#write-lifecycle)
+- [Atomicity, consistency and isolation](#atomicity-consistency-and-isolation)
+- [Population on restart](#population-on-restart)
+
+## Overview
+
+The Aurora scheduler maintains data that needs to be persisted to survive failovers and restarts.
+For example:
+
+* Task configurations and scheduled task instances
+* Job update configurations and update progress
+* Production resource quotas
+* Mesos resource offer host attributes
+
+Aurora solves its persistence needs by leveraging the Mesos implementation of 
a Paxos replicated
+log [[1]](https://ramcloud.stanford.edu/~ongaro/userstudy/paxos.pdf)
+[[2]](http://en.wikipedia.org/wiki/State_machine_replication) with a key-value
+[LevelDB](https://github.com/google/leveldb) storage as the persistence medium.
+
+Conceptually, it can be represented by the following major components:
+
+* Volatile storage: in-memory cache of all available data. Implemented via 
in-memory
+[H2 Database](http://www.h2database.com/html/main.html) and accessed via
+[MyBatis](http://mybatis.github.io/mybatis-3/).
+* Log manager: interface between Aurora storage and Mesos replicated log. The 
default schema format
+is [thrift](https://github.com/apache/thrift). Data is stored in serialized 
binary form.
+* Snapshot manager: all data is periodically persisted to the Mesos replicated log in a single
+snapshot. This helps establish periodic recovery checkpoints and speeds up volatile storage
+recovery on restart.
+* Backup manager: as a precaution, snapshots are periodically written out into 
backup files.
+This solves a [disaster recovery 
problem](/documentation/0.12.0/storage-config/#recovering-from-a-scheduler-backup)
+in case of a complete loss or corruption of Mesos log files.
+
+![Storage hierarchy](images/storage_hierarchy.png)
+
+## Reads, writes, modifications
+
+All services in Aurora access data via a set of predefined store interfaces 
(aka stores) logically
+grouped by the type of data they serve. Every interface defines a specific set 
of operations allowed
+on the data thus abstracting out the storage access and the actual persistence 
implementation. The
+latter is especially important in view of a general immutability of persisted 
data. With the Mesos
+replicated log as the underlying persistence solution, data can be read and 
written easily but not
+modified. All modifications are simulated by saving new versions of modified 
objects. This feature
+and general performance considerations justify the existence of the volatile 
in-memory store.
+
+### Read lifecycle
+
+There are two types of reads available in Aurora: consistent and weakly-consistent. The difference
+is explained [below](#atomicity-consistency-and-isolation).
+
+All reads are served from the volatile storage, making reads generally cheap storage operations
+from the performance standpoint. The majority of the volatile stores are 
represented by the
+in-memory H2 database. This allows for rich schema definitions, queries and 
relationships that
+key-value storage is unable to match.
+
+### Write lifecycle
+
+Writes are more involved operations since, in addition to updating the volatile store, data has
+to be appended to the replicated log. Data is not available for reads until fully acknowledged by
+both the replicated log and volatile storage.
+
+## Atomicity, consistency and isolation
+
+Aurora uses [write-ahead 
logging](http://en.wikipedia.org/wiki/Write-ahead_logging) to ensure
+consistency between replicated and volatile storage. In Aurora, data is first 
written into the
+replicated log and only then updated in the volatile store.
+
+Aurora storage uses read-write locks to serialize data mutations and provide a consistent view of
+the available data. The `Storage` interface exposes 3 major types of operations:
+
+* `consistentRead` - access is locked using the reader lock and provides a consistent view on read
+* `weaklyConsistentRead` - access is lock-less. Delivers the best contention performance but may
+result in stale reads
+* `write` - access is fully serialized using the writer lock. Operation success requires both
+volatile and replicated writes to succeed.
+
+The consistency of the volatile store is enforced via H2 transactional 
isolation.
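+
+The following Python sketch models the locking and write-ahead discipline described above. It is
+a simplified illustration, not Aurora's actual (Java) implementation:
+
+```python
+import threading
+
+class ToyStorage(object):
+  """Toy model of consistent/weakly-consistent reads and WAL-ordered writes."""
+
+  def __init__(self, replicated_log):
+    self._lock = threading.RLock()    # stand-in for a reader/writer lock
+    self._volatile = {}               # volatile store (H2 in Aurora)
+    self._log = replicated_log        # append-only replicated log
+
+  def consistent_read(self, key):
+    with self._lock:                  # locked: consistent view of the data
+      return self._volatile.get(key)
+
+  def weakly_consistent_read(self, key):
+    return self._volatile.get(key)    # lock-less: fast, but may be stale
+
+  def write(self, key, value):
+    with self._lock:                  # fully serialized by the writer lock
+      self._log.append((key, value))  # write-ahead: replicated log first
+      self._volatile[key] = value     # then update the volatile store
+```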
+
+## Population on restart
+
+Any time a scheduler restarts, it restores its volatile state from the most 
recent position recorded
+in the replicated log by restoring the snapshot and replaying individual log 
entries on top to fully
+recover the state up to the last write.
+

Added: aurora/site/source/documentation/0.12.0/test-resource-generation.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/test-resource-generation.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/test-resource-generation.md (added)
+++ aurora/site/source/documentation/0.12.0/test-resource-generation.md Fri Mar 
 4 02:43:01 2016
@@ -0,0 +1,24 @@
+# Generating test resources
+
+## Background
+The Aurora source repository and distributions contain several
+[binary files](https://github.com/apache/aurora/blob/#{git_tag}/src/test/resources/org/apache/thermos/root/checkpoints)
+to qualify the backwards-compatibility of thermos with checkpoint data. Since
+thermos persists state to disk, to be read by the thermos observer, it is important that we have
+tests that prevent regressions affecting the ability to parse previously-written data.
+
+## Generating test files
+The files included represent persisted checkpoints that exercise different
+features of thermos. The existing files should not be modified unless
+we are accepting backwards incompatibility, such as with a major release.
+
+It is not practical to write source code to generate these files on the fly,
+as source would be vulnerable to drift (e.g. due to refactoring) in ways
+that would undermine the goal of ensuring backwards compatibility.
+
+The most common reason to add a new checkpoint file would be to provide
+coverage for new thermos features that alter the data format. This is
+accomplished by writing and running a
+[job configuration](/documentation/0.12.0/configuration-reference/) that 
exercises the feature, and
+copying the checkpoint file from the sandbox directory; by default this is
+`/var/run/thermos/checkpoints/<aurora task id>`.
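+
+A hypothetical session, with the job key and file names purely illustrative (the checkpoint file
+name is the Aurora task id):
+
+    aurora job create devcluster/www-data/devel/new_feature new_feature.aurora
+    cp /var/run/thermos/checkpoints/<aurora task id> \
+       src/test/resources/org/apache/thermos/root/checkpoints/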

Added: aurora/site/source/documentation/0.12.0/thrift-deprecation.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/thrift-deprecation.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/thrift-deprecation.md (added)
+++ aurora/site/source/documentation/0.12.0/thrift-deprecation.md Fri Mar  4 
02:43:01 2016
@@ -0,0 +1,50 @@
+# Thrift API Changes
+
+## Overview
+Aurora uses [Apache Thrift](https://thrift.apache.org/) for representing structured data in its
+client/server RPC protocol as well as for internal data storage. While Thrift is capable of
+correctly handling additions and renames of existing members, field removals must be done
+carefully to ensure backwards compatibility and provide a predictable deprecation cycle. This
+document describes general guidelines for making Thrift schema changes to the existing fields in
+[api.thrift](https://github.com/apache/aurora/blob/#{git_tag}/api/src/main/thrift/org/apache/aurora/gen/api.thrift).
+
+It is highly recommended to go through the
+[Thrift: The Missing 
Guide](http://diwakergupta.github.io/thrift-missing-guide/) first to refresh on
+basic Thrift schema concepts.
+
+## Checklist
+Every existing Thrift schema modification is unique in its requirements and 
must be analyzed
+carefully to identify its scope and expected consequences. The following 
checklist may help in that
+analysis:
+* Is this a new field/struct? If yes, go ahead
+* Is this a pure field/struct rename without any type/structure change? If 
yes, go ahead and rename
+* For anything else, read further to make sure your change is properly planned
+
+## Deprecation cycle
+Any time a breaking change (e.g.: field replacement or removal) is required, 
the following cycle
+must be followed:
+
+### vCurrent
+The change is applied in a way that does not prevent a scheduler/client at this version from
+communicating with a scheduler/client from vCurrent-1.
+* Do not remove or rename the old field
+* Add a new field as an eventual replacement of the old one and implement a dual read/write
+anywhere the old field is used (see the sketch after this list)
+* Check
+[storage.thrift](https://github.com/apache/aurora/blob/#{git_tag}/api/src/main/thrift/org/apache/aurora/gen/storage.thrift)
+to see if the affected struct is stored in Aurora scheduler storage. If so, you most likely need
+to backfill existing data to ensure both fields are populated eagerly on startup. See
+[StorageBackfill.java](https://github.com/apache/aurora/blob/#{git_tag}/src/main/java/org/apache/aurora/scheduler/storage/StorageBackfill.java)
+* Add a deprecation jira ticket into the vCurrent+1 release candidate
+* Add a TODO for the deprecated field mentioning the jira ticket
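+
+A minimal Python sketch of the dual read/write pattern (the struct and field names are
+hypothetical, purely for illustration):
+
+```python
+class Task(object):
+  """Hypothetical struct with an old field and its replacement."""
+  def __init__(self):
+    self.task_id = None   # deprecated in vCurrent, removed in vCurrent+1
+    self.task_key = None  # replacement field
+
+def read_task_identifier(task):
+  # Prefer the new field; fall back to the old one, which may be the only one
+  # populated by vCurrent-1 peers or by pre-migration stored data.
+  return task.task_key if task.task_key is not None else task.task_id
+
+def write_task_identifier(task, value):
+  # Populate both fields so vCurrent-1 readers keep working.
+  task.task_key = value
+  task.task_id = value
+```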
+
+### vCurrent+1
+Finalize the change by removing the deprecated fields from the Thrift schema.
+* Drop any dual read/write routines added in the previous version
+* Remove the deprecated Thrift field
+
+## Testing
+It's always advisable to test your changes in the local vagrant environment to build more
+confidence that your change is backwards compatible. It's easy to simulate different
+client/scheduler versions by playing with the `aurorabuild` command. See
+[this document](/documentation/0.12.0/vagrant/)
+for more.
+

Added: aurora/site/source/documentation/0.12.0/tools.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/tools.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/tools.md (added)
+++ aurora/site/source/documentation/0.12.0/tools.md Fri Mar  4 02:43:01 2016
@@ -0,0 +1,16 @@
+# Tools
+
+Various tools integrate with Aurora. Is there a tool missing? Let us know, or 
submit a patch to add it!
+
+* Load-balancing technology used to direct traffic to services running on Aurora
+  - [synapse](https://github.com/airbnb/synapse) based on HAProxy
+  - [aurproxy](https://github.com/tellapart/aurproxy) based on nginx
+  - [jobhopper](https://github.com/benley/aurora-jobhopper) performing HTTP redirects for easy
+    developer and administrator access
+
+* Monitoring
+  - [collectd-aurora](https://github.com/zircote/collectd-aurora) for cluster 
monitoring using collectd
+  - [Prometheus Aurora 
exporter](https://github.com/tommyulfsparre/aurora_exporter) for cluster 
monitoring using Prometheus
+  - [Prometheus service discovery 
integration](http://prometheus.io/docs/operating/configuration/#zookeeper-serverset-sd-configurations-serverset_sd_config)
 for discovering and monitoring services running on Aurora
+
+* Packaging and deployment
+  - [aurora-packaging](https://github.com/apache/aurora-packaging), the source of the official
+    Aurora packages

Added: aurora/site/source/documentation/0.12.0/tutorial.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/tutorial.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/tutorial.md (added)
+++ aurora/site/source/documentation/0.12.0/tutorial.md Fri Mar  4 02:43:01 2016
@@ -0,0 +1,260 @@
+# Aurora Tutorial
+
+This tutorial shows how to use the Aurora scheduler to run (and 
"`printf-debug`")
+a hello world program on Mesos. This is the recommended document for new 
Aurora users
+to start getting up to speed on the system.
+
+- [Prerequisite](#setup-install-aurora)
+- [The Script](#the-script)
+- [Aurora Configuration](#aurora-configuration)
+- [Creating the Job](#creating-the-job)
+- [Watching the Job Run](#watching-the-job-run)
+- [Cleanup](#cleanup)
+- [Next Steps](#next-steps)
+
+
+## Prerequisite
+
+This tutorial assumes you are running [Aurora locally using 
Vagrant](/documentation/0.12.0/vagrant/).
+However, in general the instructions are also applicable to any other
+[Aurora installation](/documentation/0.12.0/installing/).
+
+Unless otherwise stated, all commands are to be run from the root of the aurora
+repository clone.
+
+
+## The Script
+
+Our "hello world" application is a simple Python script that loops
+forever, displaying the time every few seconds. Copy the code below and
+put it in a file named `hello_world.py` in the root of your Aurora repository 
clone
+(Note: this directory is the same as `/vagrant` inside the Vagrant VMs).
+
+The script has an intentional bug, which we will explain later on.
+
+<!-- NOTE: If you are changing this file, be sure to also update 
examples/vagrant/test_tutorial.sh.
+-->
+```python
+import time
+
+def main():
+  SLEEP_DELAY = 10
+  # Python ninjas - ignore this blatant bug.
+  for i in xrang(100):
+    print("Hello world! The time is now: %s. Sleeping for %d secs" % (
+      time.asctime(), SLEEP_DELAY))
+    time.sleep(SLEEP_DELAY)
+
+if __name__ == "__main__":
+  main()
+```
+
+## Aurora Configuration
+
+Once we have our script/program, we need to create a *configuration
+file* that tells Aurora how to manage and launch our Job. Save the below
+code in the file `hello_world.aurora`.
+
+<!-- NOTE: If you are changing this file, be sure to also update 
examples/vagrant/test_tutorial.sh.
+-->
+```python
+pkg_path = '/vagrant/hello_world.py'
+
+# we use a trick here to make the configuration change with
+# the contents of the file, for simplicity.  in a normal setting, packages 
would be
+# versioned, and the version number would be changed in the configuration.
+import hashlib
+with open(pkg_path, 'rb') as f:
+  pkg_checksum = hashlib.md5(f.read()).hexdigest()
+
+# copy hello_world.py into the local sandbox
+install = Process(
+  name = 'fetch_package',
+  cmdline = 'cp %s . && echo %s && chmod +x hello_world.py' % (pkg_path, 
pkg_checksum))
+
+# run the script
+hello_world = Process(
+  name = 'hello_world',
+  cmdline = 'python -u hello_world.py')
+
+# describe the task
+hello_world_task = SequentialTask(
+  processes = [install, hello_world],
+  resources = Resources(cpu = 1, ram = 1*MB, disk=8*MB))
+
+jobs = [
+  Service(cluster = 'devcluster',
+          environment = 'devel',
+          role = 'www-data',
+          name = 'hello_world',
+          task = hello_world_task)
+]
+```
+
+There is a lot going on in that configuration file:
+
+1. From a "big picture" viewpoint, it first defines two
+Processes. Then it defines a Task that runs the two Processes in the
+order specified in the Task definition, as well as specifying what
+computational and memory resources are available for them.  Finally,
+it defines a Job that will schedule the Task on available and suitable
+machines. This Job is the sole member of a list of Jobs; you can
+specify more than one Job in a config file.
+
+2. At the Process level, it specifies how to get your code into the
+local sandbox in which it will run. It then specifies how the code is
+actually run once the second Process starts.
+
+For more about Aurora configuration files, see the [Configuration
+Tutorial](/documentation/0.12.0/configuration-tutorial/) and the [Aurora + 
Thermos
+Reference](/documentation/0.12.0/configuration-reference/) (preferably after 
finishing this
+tutorial).
+
+
+## Creating the Job
+
+We're ready to launch our job! To do so, we use the Aurora Client to
+issue a Job creation request to the Aurora scheduler.
+
+Many Aurora Client commands take a *job key* argument, which uniquely
+identifies a Job. A job key consists of four parts, each separated by a
+"/". The four parts are  `<cluster>/<role>/<environment>/<jobname>`
+in that order:
+
+* Cluster refers to the name of a particular Aurora installation.
+* Role names are user accounts existing on the slave machines. If you
+don't know what accounts are available, contact your sysadmin.
+* Environment names are namespaces; you can count on `test`, `devel`,
+`staging` and `prod` existing.
+* Jobname is the custom name of your job.
+
+When comparing two job keys, if any of the four parts is different from
+its counterpart in the other key, then the two job keys identify two separate
+jobs. If all four values are identical, the job keys identify the same job.
+
+The `clusters.json` [client 
configuration](/documentation/0.12.0/client-cluster-configuration/)
+for the Aurora scheduler defines the available cluster names.
+For Vagrant, from the top-level of your Aurora repository clone, do:
+
+    $ vagrant ssh
+
+Followed by:
+
+    vagrant@aurora:~$ cat /etc/aurora/clusters.json
+
+You'll see something like the following. The `name` value shown here corresponds to a job key's
+cluster value.
+
+```javascript
+[{
+  "name": "devcluster",
+  "zk": "192.168.33.7",
+  "scheduler_zk_path": "/aurora/scheduler",
+  "auth_mechanism": "UNAUTHENTICATED",
+  "slave_run_directory": "latest",
+  "slave_root": "/var/lib/mesos"
+}]
+```
+
+The Aurora Client command that actually runs our Job is `aurora job create`. 
It creates a Job as
+specified by its job key and configuration file arguments and runs it.
+
+    aurora job create <cluster>/<role>/<environment>/<jobname> <config_file>
+
+Or for our example:
+
+    aurora job create devcluster/www-data/devel/hello_world 
/vagrant/hello_world.aurora
+
+After entering our virtual machine using `vagrant ssh`, this returns:
+
+    vagrant@aurora:~$ aurora job create devcluster/www-data/devel/hello_world 
/vagrant/hello_world.aurora
+     INFO] Creating job hello_world
+     INFO] Checking status of devcluster/www-data/devel/hello_world
+    Job create succeeded: job 
url=http://aurora.local:8081/scheduler/www-data/devel/hello_world
+
+
+## Watching the Job Run
+
+Now that our job is running, let's see what it's doing. Access the
+scheduler web interface at `http://$scheduler_hostname:$scheduler_port/scheduler`,
+or, when using `vagrant`, at `http://192.168.33.7:8081/scheduler`.
+First we see what Jobs are scheduled:
+
+![Scheduled Jobs](images/ScheduledJobs.png)
+
+Click on your user name, which in this case was `www-data`, and we see the 
Jobs associated
+with that role:
+
+![Role Jobs](images/RoleJobs.png)
+
+If you click on your `hello_world` Job, you'll see:
+
+![hello_world Job](images/HelloWorldJob.png)
+
+Oops, looks like our first job didn't quite work! The task is temporarily 
throttled for
+having failed on every attempt of the Aurora scheduler to run it. We have to 
figure out
+what is going wrong.
+
+On the Completed tasks tab, we see all past attempts of the Aurora scheduler 
to run our job.
+
+![Completed tasks tab](images/CompletedTasks.png)
+
+We can navigate to the Task page of a failed run by clicking on the host link.
+
+![Task page](images/TaskBreakdown.png)
+
+Once there, we see that the `hello_world` process failed. The Task page
+captures the standard error and standard output streams and makes them 
available.
+Clicking through to `stderr` on the failed `hello_world` process, we see what 
happened.
+
+![stderr page](images/stderr.png)
+
+It looks like we made a typo in our Python script. We wanted `xrange`,
+not `xrang`. Edit the `hello_world.py` script to use the correct function
+and save it as `hello_world_v2.py`. Then update the `hello_world.aurora`
+configuration to the newest version.
+
+In order to try again, we can now instruct the scheduler to update our job:
+
+    vagrant@aurora:~$ aurora update start 
devcluster/www-data/devel/hello_world /vagrant/hello_world.aurora
+     INFO] Starting update for: hello_world
+    Job update has started. View your update progress at 
http://aurora.local:8081/scheduler/www-data/devel/hello_world/update/8ef38017-e60f-400d-a2f2-b5a8b724e95b
+
+This time, the task comes up.
+
+![Running Job](images/RunningJob.png)
+
+By again clicking on the host, we inspect the Task page, and see that the
+`hello_world` process is running.
+
+![Running Task page](images/runningtask.png)
+
+We then inspect the output by clicking on `stdout` and see our process'
+output:
+
+![stdout page](images/stdout.png)
+
+## Cleanup
+
+Now that we're done, we kill the job using the Aurora client:
+
+    vagrant@aurora:~$ aurora job killall devcluster/www-data/devel/hello_world
+     INFO] Killing tasks for job: devcluster/www-data/devel/hello_world
+     INFO] Instances to be killed: [0]
+    Successfully killed instances [0]
+    Job killall succeeded
+
+The job page now shows the `hello_world` tasks as completed.
+
+![Killed Task page](images/killedtask.png)
+
+## Next Steps
+
+Now that you've finished this Tutorial, you should read or do the following:
+
+- [The Aurora Configuration 
Tutorial](/documentation/0.12.0/configuration-tutorial/), which provides more 
examples
+  and best practices for writing Aurora configurations. You should also look at
+  the [Aurora + Thermos Configuration 
Reference](/documentation/0.12.0/configuration-reference/).
+- The [Aurora User Guide](/documentation/0.12.0/user-guide/) provides an 
overview of how Aurora, Mesos, and
+  Thermos work "under the hood".
+- Explore the Aurora Client - use `aurora -h`, and read the
+  [Aurora Client Commands](/documentation/0.12.0/client-commands/) document.

Added: aurora/site/source/documentation/0.12.0/user-guide.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/user-guide.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/user-guide.md (added)
+++ aurora/site/source/documentation/0.12.0/user-guide.md Fri Mar  4 02:43:01 
2016
@@ -0,0 +1,355 @@
+Aurora User Guide
+-----------------
+
+- [Overview](#overview)
+- [Job Lifecycle](#job-lifecycle)
+       - [Life Of A Task](#life-of-a-task)
+       - [PENDING to RUNNING states](#pending-to-running-states)
+       - [Task Updates](#task-updates)
+       - [HTTP Health Checking and Graceful 
Shutdown](#http-health-checking-and-graceful-shutdown)
+               - [Tearing a task down](#tearing-a-task-down)
+       - [Giving Priority to Production Tasks: 
PREEMPTING](#giving-priority-to-production-tasks-preempting)
+       - [Natural Termination: FINISHED, 
FAILED](#natural-termination-finished-failed)
+       - [Forceful Termination: KILLING, 
RESTARTING](#forceful-termination-killing-restarting)
+- [Service Discovery](#service-discovery)
+- [Configuration](#configuration)
+- [Creating Jobs](#creating-jobs)
+- [Interacting With Jobs](#interacting-with-jobs)
+
+Overview
+--------
+
+This document gives an overview of how Aurora works under the hood.
+It assumes you've already worked through the "hello world" example
+job in the [Aurora Tutorial](/documentation/0.12.0/tutorial/). Specifics of 
how to use Aurora are **not**
+ given here, but pointers to documentation about how to use Aurora are
+provided.
+
+Aurora is a Mesos framework used to schedule *jobs* onto Mesos. Mesos
+cares about individual *tasks*, but typical jobs consist of dozens or
+hundreds of task replicas. Aurora provides a layer on top of Mesos with
+its `Job` abstraction. An Aurora `Job` consists of a task template and
+instructions for creating near-identical replicas of that task (modulo
+things like "instance id" or specific port numbers which may differ from
+machine to machine).
+
+How many tasks make up a Job is complicated. On a basic level, a Job consists 
of
+one task template and instructions for creating near-identical replicas of that task
+(otherwise referred to as "instances" or "shards").
+
+However, since Jobs can be updated on the fly, a single Job identifier or *job 
key*
+can have multiple job configurations associated with it.
+
+For example, consider when I have a Job with 4 instances that each
+request 1 core of cpu, 1 GB of RAM, and 1 GB of disk space as specified
+in the configuration file `hello_world.aurora`. I want to
+update it so it requests 2 GB of RAM instead of 1. I create a new
+configuration file to do that called `new_hello_world.aurora` and
+issue an `aurora update start <job_key_value>/0-1 new_hello_world.aurora`
+command.
+
+This results in instances 0 and 1 having 1 cpu, 2 GB of RAM, and 1 GB of disk 
space,
+while instances 2 and 3 have 1 cpu, 1 GB of RAM, and 1 GB of disk space. If 
instance 3
+dies and restarts, it restarts with 1 cpu, 1 GB RAM, and 1 GB disk space.
+
+So that means there are two simultaneous task configurations for the same Job
+at the same time, just valid for different ranges of instances.
+
+This isn't a recommended pattern, but it is valid and supported by the
+Aurora scheduler. This most often manifests in the "canary pattern" where
+instance 0 runs with a different configuration than instances 1-N to test
+different code versions alongside the actual production job.
+
+A task can merely be a single *process* corresponding to a single
+command line, such as `python2.6 my_script.py`. However, a task can also
+consist of many separate processes, which all run within a single
+sandbox. For example, running multiple cooperating agents together,
+such as `logrotate`, `installer`, master, or slave processes. This is
+where Thermos comes in. While Aurora provides a `Job` abstraction on
+top of Mesos `Tasks`, Thermos provides a `Process` abstraction
+underneath Mesos `Task`s and serves as part of the Aurora framework's
+executor.
+
+You define `Job`s, `Task`s, and `Process`es in a configuration file.
+Configuration files are written in Python, and make use of the Pystachio
+templating language. They end in a `.aurora` extension.
+
+Pystachio is a type-checked dictionary templating library.
+
+> TL;DR
+>
+> -   Aurora manages jobs made of tasks.
+> -   Mesos manages tasks made of processes.
+> -   Thermos manages processes.
+> -   All defined in `.aurora` configuration file.
+
+![Aurora hierarchy](images/aurora_hierarchy.png)
+
+Each `Task` has a *sandbox* created when the `Task` starts and garbage
+collected when it finishes. All of a `Task`'s processes run in its
+sandbox, so processes can share state by using a shared current working
+directory.
+
+The sandbox garbage collection policy considers many factors, most
+importantly age and size. It makes a best-effort attempt to keep
+sandboxes around as long as possible post-task in order for service
+owners to inspect data and logs, should the `Task` have completed
+abnormally. But you can't design your applications assuming sandboxes
+will be around forever, so consider building log saving or other
+checkpointing mechanisms directly into your application or into your
+`Job` description.
+
+Job Lifecycle
+-------------
+
+When Aurora reads a configuration file and finds a `Job` definition, it:
+
+1.  Evaluates the `Job` definition.
+2.  Splits the `Job` into its constituent `Task`s.
+3.  Sends those `Task`s to the scheduler.
+4.  The scheduler puts the `Task`s into `PENDING` state, starting each
+    `Task`'s life cycle.
+
+### Life Of A Task
+
+![Life of a task](images/lifeofatask.png)
+
+### PENDING to RUNNING states
+
+When a `Task` is in the `PENDING` state, the scheduler constantly
+searches for machines satisfying that `Task`'s resource request
+requirements (RAM, disk space, CPU time) while maintaining configuration
+constraints such as "a `Task` must run on machines  dedicated  to a
+particular role" or attribute limit constraints such as "at most 2
+`Task`s from the same `Job` may run on each rack". When the scheduler
+finds a suitable match, it assigns the `Task` to a machine and puts the
+`Task` into the `ASSIGNED` state.
+
+From the `ASSIGNED` state, the scheduler sends an RPC to the slave
+machine containing `Task` configuration, which the slave uses to spawn
+an executor responsible for the `Task`'s lifecycle. When the scheduler
+receives an acknowledgement that the machine has accepted the `Task`,
+the `Task` goes into `STARTING` state.
+
+`STARTING` state initializes a `Task` sandbox. When the sandbox is fully
+initialized, Thermos begins to invoke `Process`es. Also, the slave
+machine sends an update to the scheduler that the `Task` is
+in `RUNNING` state.
+
+If a `Task` stays in `ASSIGNED` or `STARTING` for too long, the
+scheduler forces it into `LOST` state, creating a new `Task` in its
+place that's sent into `PENDING` state. This is technically true of any
+active state: if the Mesos core tells the scheduler that a slave has
+become unhealthy (or outright disappeared), the `Task`s assigned to that
+slave go into `LOST` state and new `Task`s are created in their place.
+From `PENDING` state, there is no guarantee a `Task` will be reassigned
+to the same machine unless job constraints explicitly force it there.
+
+If there is a state mismatch (e.g. a machine returns from a `netsplit`
+and the scheduler has marked all its `Task`s `LOST` and rescheduled
+them), a state reconciliation process kills the errant `RUNNING` tasks,
+which may take up to an hour. But to emphasize this point: there is no
+uniqueness guarantee for a single instance of a job in the presence of
+network partitions. If the Task requires that, it should be baked in at
+the application level using a distributed coordination service such as
+Zookeeper.
+
+### Task Updates
+
+`Job` configurations can be updated at any point in their lifecycle.
+Usually updates are done incrementally using a process called a *rolling
+upgrade*, in which Tasks are upgraded in small groups, one group at a
+time.  Updates are done using various Aurora Client commands.
+
+For a configuration update, the Aurora Client calculates required changes
+by examining the current job config state and the new desired job config.
+It then starts a rolling batched update process by going through every batch
+and performing these operations:
+
+- If an instance is present in the scheduler but isn't in the new config,
+  then that instance is killed.
+- If an instance is not present in the scheduler but is present in
+  the new config, then the instance is created.
+- If an instance is present in both the scheduler and the new config, then
+  the client diffs both task configs. If it detects any changes, it
+  performs an instance update by killing the old config instance and adds
+  the new config instance.
+
+The Aurora client continues through the instance list until all tasks are
+updated, in `RUNNING`, and healthy for a configurable amount of time.
+If the client determines the update is not going well (a percentage of health
+checks have failed), it cancels the update.
+
+Update cancellation runs a procedure similar to the update sequence
+described above, but in reverse order. New instance configs are swapped
+with old instance configs and batch updates proceed backwards
+from the point where the update failed. E.g., (0,1,2) (3,4,5) (6,7,
+8-FAIL) results in a rollback in order (8,7,6) (5,4,3) (2,1,0).
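+
+A simplified Python sketch of the per-instance decision described above (an illustrative model,
+not the actual client code):
+
+```python
+# `current` and `desired` map instance id -> task config.
+def plan_instance(instance_id, current, desired):
+  if instance_id in current and instance_id not in desired:
+    return 'kill'     # in the scheduler, absent from the new config
+  if instance_id not in current and instance_id in desired:
+    return 'create'   # absent from the scheduler, in the new config
+  if current[instance_id] != desired[instance_id]:
+    return 'replace'  # configs differ: kill old instance, add new one
+  return 'no-op'      # identical configs: leave the instance running
+```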
+
+### HTTP Health Checking and Graceful Shutdown
+
+The Executor implements a protocol for rudimentary control of a task via HTTP.  Tasks subscribe to
+this protocol by declaring a port named `health`.  Take for example this configuration snippet:
+
+    nginx = Process(
+      name = 'nginx',
+      cmdline = './run_nginx.sh -port {{thermos.ports[http]}}')
+
+When this Process is included in a job, the job will be allocated a port, and 
the command line
+will be replaced with something like:
+
+    ./run_nginx.sh -port 42816
+
+Where 42816 happens to be the allocated port.  Typically, the Executor monitors Processes within
+a task only by liveness of the forked process.  However, when a `health` port is allocated, it will
+also send periodic HTTP health checks.  A task requesting a `health` port must 
handle the following
+requests:
+
+| HTTP request            | Description                             |
+| ------------            | -----------                             |
+| `GET /health`           | Inquires whether the task is healthy.   |
+| `POST /quitquitquit`    | Task should initiate graceful shutdown. |
+| `POST /abortabortabort` | Final warning task is being killed.     |
+
+Please see the
+[configuration 
reference](/documentation/0.12.0/configuration-reference/#healthcheckconfig-objects)
 for
+configuration options for this feature.
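+
+For illustration, a minimal task-side handler for this protocol could look like the Python sketch
+below. This is an assumption for demonstration purposes only, not Aurora executor code; it takes
+the allocated `health` port as its first command-line argument:
+
+```python
+import os
+import sys
+from http.server import BaseHTTPRequestHandler, HTTPServer
+
+class HealthProtocolHandler(BaseHTTPRequestHandler):
+  """Answers the executor's health and teardown requests described above."""
+
+  def _reply(self, body):
+    self.send_response(200)
+    self.end_headers()
+    self.wfile.write(body)
+
+  def do_GET(self):
+    if self.path == '/health':
+      self._reply(b'ok')  # report the task as healthy
+
+  def do_POST(self):
+    if self.path in ('/quitquitquit', '/abortabortabort'):
+      self._reply(b'')
+      self.wfile.flush()
+      os._exit(0)  # a real task would stop work and flush state first
+
+if __name__ == '__main__':
+  port = int(sys.argv[1])  # the allocated `health` port
+  HTTPServer(('0.0.0.0', port), HealthProtocolHandler).serve_forever()
+```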
+
+#### Snoozing Health Checks
+
+If you need to pause your health check, you can do so by touching a file 
inside of your sandbox,
+named `.healthchecksnooze`
+
+As long as that file is present, health checks will be disabled, enabling 
users to gather core dumps
+or other performance measurements without worrying about Aurora's health check 
killing their
+process.
+
+WARNING: Remember to remove this when you are done, otherwise your instance 
will have permanently
+disabled health checks.
+
+#### Tearing a task down
+
+The Executor follows an escalation sequence when killing a running task:
+
+  1. If `health` port is not present, skip to (5)
+  2. POST /quitquitquit
+  3. wait 5 seconds
+  4. POST /abortabortabort
+  5. Send SIGTERM (`kill`)
+  6. Send SIGKILL (`kill -9`)
+
+If the Executor notices that all Processes in a Task have aborted during this 
sequence, it will
+not proceed with subsequent steps.  Note that graceful shutdown is 
best-effort, and due to the many
+inevitable realities of distributed systems, it may not be performed.
+
+### Giving Priority to Production Tasks: PREEMPTING
+
+Sometimes a Task needs to be interrupted, such as when a non-production
+Task's resources are needed by a higher priority production Task. This
+type of interruption is called a *pre-emption*. When this happens in
+Aurora, the non-production Task is killed and moved into
+the `PREEMPTING` state when both the following are true:
+
+- The task being killed is a non-production task.
+- The other task is a `PENDING` production task that hasn't been
+  scheduled due to a lack of resources.
+
+Since production tasks are much more important, Aurora kills off the
+non-production task to free up resources for the production task. The
+scheduler UI shows the non-production task was preempted in favor of the
+production task. At some point, tasks in `PREEMPTING` move to `KILLED`.
+
+Note that non-production tasks consuming many resources are likely to be
+preempted in favor of production tasks.
+
+### Natural Termination: FINISHED, FAILED
+
+A `RUNNING` `Task` can terminate without direct user interaction. For
+example, it may be a finite computation that finishes, even something as
+simple as `echo hello world`. Or it could be an exceptional condition in
+a long-lived service. If the `Task` is successful (its underlying
+processes have succeeded with exit status `0` or finished without
+reaching failure limits) it moves into `FINISHED` state. If it finished
+after reaching a set of failure limits, it goes into `FAILED` state.
+
+### Forceful Termination: KILLING, RESTARTING
+
+You can terminate a `Task` by issuing an `aurora job kill` command, which
+moves it into `KILLING` state. The scheduler then sends the slave a
+request to terminate the `Task`. If the scheduler receives a successful
+response, it moves the Task into `KILLED` state and never restarts it.
+
+The scheduler has access to a non-public `RESTARTING` state. If a `Task`
+is forced into the `RESTARTING` state, the scheduler kills the
+underlying task but in parallel schedules an identical replacement for
+it.
+
+Configuration
+-------------
+
+You define and configure your Jobs (and their Tasks and Processes) in
+Aurora configuration files. Their filenames end with the `.aurora`
+suffix, and you write them in Python making use of the Pystachio
+templating language, along
+with specific Aurora, Mesos, and Thermos commands and methods. See the
+[Configuration Guide and 
Reference](/documentation/0.12.0/configuration-reference/) and
+[Configuration Tutorial](/documentation/0.12.0/configuration-tutorial/).
+
+Service Discovery
+-----------------
+
+It is possible for the Aurora executor to announce tasks into ServerSets for
+the purpose of service discovery.  ServerSets use the Zookeeper [group 
membership 
pattern](http://zookeeper.apache.org/doc/trunk/recipes.html#sc_outOfTheBox)
+of which there are several reference implementations:
+
+  - [C++](https://github.com/apache/mesos/blob/master/src/zookeeper/group.cpp)
+  - 
[Java](https://github.com/twitter/commons/blob/master/src/java/com/twitter/common/zookeeper/ServerSetImpl.java#L221)
+  - 
[Python](https://github.com/twitter/commons/blob/master/src/python/twitter/common/zookeeper/serverset/serverset.py#L51)
+
+These can also be used natively in Finagle using the 
[ZookeeperServerSetCluster](https://github.com/twitter/finagle/blob/master/finagle-serversets/src/main/scala/com/twitter/finagle/zookeeper/ZookeeperServerSetCluster.scala).
+
+For more information about how to configure announcing, see the [Configuration 
Reference](/documentation/0.12.0/configuration-reference/).
+
+Creating Jobs
+-------------
+
+You create and manipulate Aurora Jobs with the Aurora client, which starts all 
its
+command line commands with
+`aurora`. See [Aurora Client Commands](/documentation/0.12.0/client-commands/) 
for details
+about the Aurora Client.
+
+Interacting With Jobs
+---------------------
+
+You interact with Aurora jobs either via:
+
+- Read-only Web UIs
+
+  Part of the output from creating a new Job is a URL for the Job's scheduler 
UI page.
+
+  For example:
+
+      vagrant@precise64:~$ aurora job create devcluster/www-data/prod/hello \
+      /vagrant/examples/jobs/hello_world.aurora
+      INFO] Creating job hello
+      INFO] Response from scheduler: OK (message: 1 new tasks pending for job 
www-data/prod/hello)
+      INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
+
+  The "Job url" goes to the Job's scheduler UI page. To go to the overall 
scheduler UI page,
+  stop at the "scheduler" part of the URL, in this case, 
`http://precise64:8081/scheduler`
+
+  You can also reach the scheduler UI page via the Client command `aurora job 
open`:
+
+      aurora job open [<cluster>[/<role>[/<env>/<job_name>]]]
+
+  If only the cluster is specified, it goes directly to that cluster's 
scheduler main page.
+  If the role is specified, it goes to the top-level role page. If the full 
job key is specified,
+  it goes directly to the job page where you can inspect individual tasks.
+
+  Once you click through to a role page, you see Jobs arranged separately by 
pending jobs, active
+  jobs, and finished jobs. Jobs are arranged by role, typically a service 
account for production
+  jobs and user accounts for test or development jobs.
+
+- The Aurora client
+
+  See [client commands](/documentation/0.12.0/client-commands/).

Added: aurora/site/source/documentation/0.12.0/vagrant.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/0.12.0/vagrant.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/0.12.0/vagrant.md (added)
+++ aurora/site/source/documentation/0.12.0/vagrant.md Fri Mar  4 02:43:01 2016
@@ -0,0 +1,137 @@
+Getting Started
+===============
+
+This document shows you how to configure a complete cluster using a virtual 
machine. This setup
+replicates a real cluster in your development machine as closely as possible. 
After you complete
+the steps outlined here, you will be ready to create and run your first Aurora 
job.
+
+The following sections describe these steps in detail:
+
+1. [Overview](#overview)
+1. [Install VirtualBox and Vagrant](#install-virtualbox-and-vagrant)
+1. [Clone the Aurora repository](#clone-the-aurora-repository)
+1. [Start the local cluster](#start-the-local-cluster)
+1. [Log onto the VM](#log-onto-the-vm)
+1. [Run your first job](#run-your-first-job)
+1. [Rebuild components](#rebuild-components)
+1. [Shut down or delete your local 
cluster](#shut-down-or-delete-your-local-cluster)
+1. [Troubleshooting](#troubleshooting)
+
+
+Overview
+--------
+
+The Aurora distribution includes a set of scripts that enable you to create a 
local cluster in
+your development machine. These scripts use 
[Vagrant](https://www.vagrantup.com/) and
+[VirtualBox](https://www.virtualbox.org/) to run and configure a virtual 
machine. Once the
+virtual machine is running, the scripts install and initialize Aurora and any 
required components
+to create the local cluster.
+
+
+Install VirtualBox and Vagrant
+------------------------------
+
+First, download and install [VirtualBox](https://www.virtualbox.org/) on your 
development machine.
+
+Then download and install [Vagrant](https://www.vagrantup.com/). To verify 
that the installation
+was successful, open a terminal window and type the `vagrant` command. You 
should see a list of
+common commands for this tool.
+
+
+Clone the Aurora repository
+---------------------------
+
+To obtain the Aurora source distribution, clone its Git repository using the 
following command:
+
+     git clone git://git.apache.org/aurora.git
+
+
+Start the local cluster
+-----------------------
+
+Now change into the `aurora/` directory, which contains the Aurora source code 
and
+other scripts and tools:
+
+     cd aurora/
+
+To start the local cluster, type the following command:
+
+     vagrant up
+
+This command uses the configuration scripts in the Aurora distribution to:
+
+* Download a Linux system image.
+* Start a virtual machine (VM) and configure it.
+* Install the required build tools on the VM.
+* Install Aurora's requirements (like [Mesos](http://mesos.apache.org/) and
+[Zookeeper](http://zookeeper.apache.org/)) on the VM.
+* Build and install Aurora from source on the VM.
+* Start Aurora's services on the VM.
+
+This process takes several minutes to complete.
+
+To verify that Aurora is running on the cluster, visit the following URLs:
+
+* Scheduler - http://192.168.33.7:8081
+* Observer - http://192.168.33.7:1338
+* Mesos Master - http://192.168.33.7:5050
+* Mesos Slave - http://192.168.33.7:5051
+
+
+Log onto the VM
+---------------
+
+To SSH into the VM, run the following command in your development machine:
+
+     vagrant ssh
+
+To verify that Aurora is installed in the VM, type the `aurora` command. You 
should see a list
+of arguments and possible commands.
+
+The `/vagrant` directory on the VM is mapped to the `aurora/` local directory
+from which you started the cluster. You can edit files inside this directory 
in your development
+machine and access them from the VM under `/vagrant`.
+
+A pre-installed `clusters.json` file refers to your local cluster as 
`devcluster`, which you
+will use in client commands.
+
+
+Run your first job
+------------------
+
+Now that your cluster is up and running, you are ready to define and run your 
first job in Aurora.
+For more information, see the [Aurora 
Tutorial](/documentation/0.12.0/tutorial/).
+
+
+Rebuild components
+------------------
+
+If you are changing Aurora code and would like to rebuild a component, you can 
use the `aurorabuild`
+command on the VM to build and restart a component.  This is considerably 
faster than destroying
+and rebuilding your VM.
+
+`aurorabuild` accepts a list of components to build and update; invoke it with no arguments to
+get a list of supported components. For example, to rebuild and restart the client:
+
+     vagrant ssh -c 'aurorabuild client'
+
+
+Shut down or delete your local cluster
+--------------------------------------
+
+To shut down your local cluster, run the `vagrant halt` command in your 
development machine. To
+start it again, run the `vagrant up` command.
+
+Once you are finished with your local cluster, or if you would otherwise like 
to start from scratch,
+you can use the command `vagrant destroy` to turn off and delete the virtual 
file system.
+
+
+Troubleshooting
+---------------
+
+Most vagrant-related problems can be fixed by the following steps:
+
+* Destroying the vagrant environment with `vagrant destroy`
+* Killing any orphaned VMs (see AURORA-499) with the VirtualBox UI or the `VBoxManage`
+command-line tool
+* Cleaning the repository of build artifacts and other intermediate output 
with `git clean -fdx`
+* Bringing up the vagrant environment with `vagrant up`

Modified: aurora/site/source/documentation/latest/committers.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/committers.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/committers.md (original)
+++ aurora/site/source/documentation/latest/committers.md Fri Mar  4 02:43:01 
2016
@@ -59,14 +59,16 @@ it run
                ./build-support/release/release-candidate -l m -p
 
 3. Update, if necessary, the draft email created from the `release-candidate` 
script in step #2 and
-send the [VOTE] email to the dev@ and private@ mailing lists. You can verify 
the release signature
-and checksums by running
+send the [VOTE] email to the dev@ mailing list. You can verify the release 
signature and checksums
+by running
 
-                               ./build-support/release/verify-release-candidate
+               ./build-support/release/verify-release-candidate
 
-4. Wait for the vote to complete. If the vote fails address any issues and go 
back to step #1 and
-run again, this time you will use the -r flag to increment the release 
candidate version. This will
-automatically clean up the release candidate rc0 branch and source 
distribution.
+4. Wait for the vote to complete. If the vote fails, close the vote by replying
to the initial [VOTE]
+email sent in step #3 by editing the subject to [RESULT][VOTE] ... and noting 
the failure reason
+(example [here](http://markmail.org/message/d4d6xtvj7vgwi76f)). Now address 
any issues and go back to
+step #1 and run again; this time you will use the -r flag to increment the release candidate
+version. This will automatically clean up the release candidate rc0 branch and 
source distribution.
 
                ./build-support/release/release-candidate -l m -r 1 -p
 
@@ -75,5 +77,5 @@ automatically clean up the release candi
                ./build-support/release/release
 
 6. Update the draft email created from the `release` script in step #5 to
include the Apache ID's for
-all binding votes and send the [RESULT][VOTE] email to the dev@ and private@ 
mailing lists.
+all binding votes and send the [RESULT][VOTE] email to the dev@ mailing list.
 

Modified: aurora/site/source/documentation/latest/configuration-reference.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/configuration-reference.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/configuration-reference.md 
(original)
+++ aurora/site/source/documentation/latest/configuration-reference.md Fri Mar  
4 02:43:01 2016
@@ -148,21 +148,27 @@ schedule.
 
 #### logger
 
-The default behavior of Thermos is to allow stderr/stdout logs to grow 
unbounded. In the event
-that you have large log volume, you may want to configure Thermos to 
automatically rotate logs
+The default behavior of Thermos is to store stderr/stdout logs in files which grow unbounded.
+In the event that you have large log volume, you may want to configure Thermos 
to automatically rotate logs
 after they grow to a certain size, which can prevent your job from using more 
than its allocated
 disk space.
 
-A Logger union consists of a mode enum and a rotation policy. Rotation 
policies only apply to
-loggers whose mode is `rotate`. The acceptable values for the LoggerMode enum 
are `standard`
-and `rotate`. The rotation policy applies to both stderr and stdout.
+A Logger union consists of a destination enum, a mode enum and a rotation policy.
+Use `destination` to set where the process logs should be sent. The default
+option is `file`. It is also possible to specify `console` to send log output
+to stdout/stderr, `none` to suppress any log output, or `both` to send logs to
+files and console output. When using `none` or `console`, rotation attributes are ignored.
+Rotation policies only apply to loggers whose mode is `rotate`. The acceptable 
values
+for the LoggerMode enum are `standard` and `rotate`. The rotation policy 
applies to both
+stderr and stdout.
 
 By default, all processes use the `standard` LoggerMode.
 
-  **Attribute Name**  | **Type**     | **Description**
-  ------------------- | :----------: | ---------------------------------
-   **mode**           | LoggerMode   | Mode of the logger. (Required)
-   **rotate**         | RotatePolicy | An optional rotation policy.
+  **Attribute Name**  | **Type**          | **Description**
+  ------------------- | :---------------: | ---------------------------------
+   **destination**    | LoggerDestination | Destination of logs. (Default: 
`file`)
+   **mode**           | LoggerMode        | Mode of the logger. (Default: 
`standard`)
+   **rotate**         | RotatePolicy      | An optional rotation policy.
 
 A RotatePolicy describes log rotation behavior for when `mode` is set to 
`rotate`. It is ignored
 otherwise.
@@ -177,6 +183,7 @@ An example process configuration is as f
         process = Process(
           name='process',
           logger=Logger(
+            destination=LoggerDestination('both'),
             mode=LoggerMode('rotate'),
             rotate=RotatePolicy(log_size=5*MB, backups=5)
           )
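
For contrast, a minimal sketch (process name and command are illustrative) of a logger that writes only to the console; with `destination` set to `console`, any rotation attributes would be ignored:

        process = Process(
          name='process',
          cmdline='echo hello',
          logger=Logger(
            # Illustrative: console-only logging; no rotation applies.
            destination=LoggerDestination('console')
          )
        )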
@@ -408,7 +415,6 @@ Parameters for controlling the rate and
 | object                       | type     | description
 | ---------------------------- | :------: | ------------
 | ```batch_size```             | Integer  | Maximum number of shards to be 
updated in one iteration (Default: 1)
-| ```restart_threshold```      | Integer  | Maximum number of seconds before a 
shard must move into the ```RUNNING``` state before considered a failure 
(Default: 60)
 | ```watch_secs```             | Integer  | Minimum number of seconds a shard 
must remain in ```RUNNING``` state before considered a success (Default: 45)
 | ```max_per_shard_failures``` | Integer  | Maximum number of restarts per 
shard during update. Increments total failure count when this limit is 
exceeded. (Default: 0)
 | ```max_total_failures```     | Integer  | Maximum number of shard failures 
to be tolerated in total during an update. Cannot be greater than or equal to 
the total number of tasks in a job. (Default: 0)
@@ -424,9 +430,6 @@ Parameters for controlling a task's heal
 
 | param                          | type      | description
 | -------                        | :-------: | --------
-| *```endpoint```*               | String    | HTTP endpoint to check 
(Default: /health) **Deprecated.**
-| *```expected_response```*      | String    | If not empty, fail the HTTP 
health check if the response differs. Case insensitive. (Default: ok) 
**Deprecated.**
-| *```expected_response_code```* | Integer   | If not zero, fail the HTTP 
health check if the response code differs. (Default: 0) **Deprecated.**
 | ```health_checker```           | HealthCheckerConfig | Configure what kind 
of health check to use.
 | ```initial_interval_secs```    | Integer   | Initial delay for performing a 
health check. (Default: 15)
 | ```interval_secs```            | Integer   | Interval on which to check the 
task's health. (Default: 10)
@@ -457,13 +460,15 @@ Parameters for controlling a task's heal
 
 If the `announce` field in the Job configuration is set, each task will be
 registered in the ServerSet `/aurora/role/environment/jobname` in the
-zookeeper ensemble configured by the executor.  If no Announcer object is 
specified,
+zookeeper ensemble configured by the executor (which can optionally be overridden
+by specifying the `zk_path` parameter).  If no Announcer object is specified,
 no announcement will take place.  For more information about ServerSets, see 
the [User Guide](/documentation/latest/user-guide/).
 
 | object                         | type      | description
 | -------                        | :-------: | --------
 | ```primary_port```             | String    | Which named port to register as 
the primary endpoint in the ServerSet (Default: `http`)
 | ```portmap```                  | dict      | A mapping of additional 
endpoints to announced in the ServerSet (Default: `{ 'aurora': 
'{{primary_port}}' }`)
+| ```zk_path```                  | String    | Zookeeper serverset path 
override (executor must be started with the 
--announcer-allow-custom-serverset-path parameter)
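
As an illustration, a hedged sketch of an Announcer using the `zk_path` override; the path here is hypothetical, and the executor must be launched with `--announcer-allow-custom-serverset-path` for it to take effect:

    announce = Announcer(
      primary_port = 'http',
      # Hypothetical custom serverset path; requires the executor flag above.
      zk_path = '/discovery/myservice'
    )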
 
 ### Port aliasing with the Announcer `portmap`
 
@@ -489,7 +494,7 @@ tasks with the same static port allocati
 External constraints such as slave attributes should be used to enforce such
 guarantees should they be needed.
 
-### Container Object
+### Container Objects
 
 *Note: The only container type currently supported is "docker".  Docker 
support is currently EXPERIMENTAL.*
 *Note: In order to correctly execute processes inside a job, the Docker 
container must have python 2.7 installed.*

Modified: aurora/site/source/documentation/latest/deploying-aurora-scheduler.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/deploying-aurora-scheduler.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/deploying-aurora-scheduler.md 
(original)
+++ aurora/site/source/documentation/latest/deploying-aurora-scheduler.md Fri 
Mar  4 02:43:01 2016
@@ -15,7 +15,7 @@ machines.  This guide helps you get the
   - [Considerations for running jobs in 
docker](#considerations-for-running-jobs-in-docker)
   - [Security Considerations](#security-considerations)
   - [Configuring Resource 
Oversubscription](#configuring-resource-oversubscription)
-  - [Process Log Rotation](#process-log-rotation)
+  - [Process Logs](#process-logs)
 - [Running Aurora](#running-aurora)
   - [Maintaining an Aurora Installation](#maintaining-an-aurora-installation)
   - [Monitoring](#monitoring)
@@ -164,15 +164,47 @@ wrapper script and executor are correctl
 script does not access resources outside of the sandbox, as when the script is 
run from within a
 docker container those resources will not exist.
 
+In order to correctly execute processes inside a job, the docker container 
must have python 2.7
+installed.
+
 A scheduler flag, `-global_container_mounts` allows mounting paths from the 
host (i.e., the slave)
 into all containers on that host. The format is a comma separated list of 
host_path:container_path[:mode]
 tuples. For example 
`-global_container_mounts=/opt/secret_keys_dir:/mnt/secret_keys_dir:ro` mounts
 `/opt/secret_keys_dir` from the slaves into all launched containers. Valid 
modes are `ro` and `rw`.
 
-In order to correctly execute processes inside a job, the docker container 
must have python 2.7
-installed.
+If you would like to supply your own parameters to `docker run` when launching 
jobs in docker
+containers, you may use the following flags:
+
+    -allow_docker_parameters
+    -default_docker_parameters
+
+`-allow_docker_parameters` controls whether users may pass their own configuration
+parameters through the job configuration files. If set to `false` (the default), the
+scheduler will reject jobs with custom parameters. *NOTE*: enable this setting with
+caution, as it allows any job owner to specify any parameters they wish, including
+those that may introduce security concerns (`privileged=true`, for example).
+
+`-default_docker_parameters` allows a cluster operator to specify a universal 
set of parameters that
+should be used for every container that does not have parameters explicitly 
configured at the job
+level. The argument accepts a multimap format:
+
+    -default_docker_parameters="read-only=true,tmpfs=/tmp,tmpfs=/run"
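
For illustration, a sketch of job-level Docker parameters as they might appear in a job configuration, assuming `-allow_docker_parameters` is enabled and the `Container`/`Docker`/`Parameter` objects from the configuration reference (the image and parameter values are illustrative):

    container = Container(
      docker = Docker(
        image = 'python:2.7',
        # Illustrative custom parameter passed through to `docker run`.
        parameters = [Parameter(name = 'label', value = 'testing')]
      )
    )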
+
+### Process Logs
+
+#### Log destination
+By default, Thermos will write process stdout/stderr to log files in the sandbox.
+The Process object configuration allows specifying alternate log destinations, such
+as streaming to stdout/stderr or suppressing all log output. The default behavior
+can be configured for the entire cluster with the following flag (through the
+`-thermos_executor_flags` argument to the Aurora scheduler):
+
+    --runner-logger-destination=both
+
+The `both` option sends logs to files and also streams them to the parent
+stdout/stderr outputs.
+
+See [this document](/documentation/latest/configuration-reference/#logger) for 
all destination options.
 
-### Process Log Rotation
+#### Log rotation
 By default, Thermos will not rotate the stdout/stderr logs from child 
processes and they will grow
 without bound. An individual user may change this behavior via configuration 
on the Process object,
 but it may also be desirable to change the default configuration for the 
entire cluster.

Added: aurora/site/source/documentation/latest/design-documents.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/design-documents.md?rev=1733548&view=auto
==============================================================================
--- aurora/site/source/documentation/latest/design-documents.md (added)
+++ aurora/site/source/documentation/latest/design-documents.md Fri Mar  4 
02:43:01 2016
@@ -0,0 +1,17 @@
+# Design Documents
+
+Since Aurora's inception as an Apache project, larger feature additions to the
+code base have been discussed in the form of design documents. Design documents
+remain living documents until a consensus has been reached to implement a feature
+in the proposed form.
+
+Current and past documents:
+
+* [Command Hooks for the Aurora Client](design/command-hooks.md)
+* [Health Checks for 
Updates](https://docs.google.com/document/d/1ZdgW8S4xMhvKW7iQUX99xZm10NXSxEWR0a-21FP5d94/edit)
+* [JobUpdateDiff thrift 
API](https://docs.google.com/document/d/1Fc_YhhV7fc4D9Xv6gJzpfooxbK4YWZcvzw6Bd3qVTL8/edit)
+* [REST API 
RFC](https://docs.google.com/document/d/11_lAsYIRlD5ETRzF2eSd3oa8LXAHYFD8rSetspYXaf4/edit)
+* [Revocable Mesos offers in 
Aurora](https://docs.google.com/document/d/1r1WCHgmPJp5wbrqSZLsgtxPNj3sULfHrSFmxp2GyPTo/edit)
+* [Ubiquitous 
Jobs](https://docs.google.com/document/d/12hr6GnUZU3mc7xsWRzMi3nQILGB-3vyUxvbG-6YmvdE/edit)
+
+Design documents can be found in the Aurora issue tracker via the query 
[`project = AURORA AND text ~ "docs.google.com" ORDER BY 
created`](https://issues.apache.org/jira/browse/AURORA-1528?jql=project%20%3D%20AURORA%20AND%20text%20~%20%22docs.google.com%22%20ORDER%20BY%20created).

Modified: aurora/site/source/documentation/latest/developing-aurora-scheduler.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/developing-aurora-scheduler.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/developing-aurora-scheduler.md 
(original)
+++ aurora/site/source/documentation/latest/developing-aurora-scheduler.md Fri 
Mar  4 02:43:01 2016
@@ -56,7 +56,7 @@ environment:
 In addition, there is an end-to-end test that runs a suite of aurora commands
 using a virtual cluster:
 
-    bash src/test/sh/org/apache/aurora/e2e/test_end_to_end.sh
+    ./src/test/sh/org/apache/aurora/e2e/test_end_to_end.sh
 
 
 

Added: aurora/site/source/documentation/latest/images/CompletedTasks.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/CompletedTasks.png?rev=1733548&view=auto
==============================================================================
Binary file - no diff available.

Propchange: aurora/site/source/documentation/latest/images/CompletedTasks.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Modified: aurora/site/source/documentation/latest/images/HelloWorldJob.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/HelloWorldJob.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Modified: aurora/site/source/documentation/latest/images/RoleJobs.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/RoleJobs.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Added: aurora/site/source/documentation/latest/images/RunningJob.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/RunningJob.png?rev=1733548&view=auto
==============================================================================
Binary file - no diff available.

Propchange: aurora/site/source/documentation/latest/images/RunningJob.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Modified: aurora/site/source/documentation/latest/images/ScheduledJobs.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/ScheduledJobs.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Modified: aurora/site/source/documentation/latest/images/TaskBreakdown.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/TaskBreakdown.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Modified: aurora/site/source/documentation/latest/images/killedtask.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/killedtask.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Added: 
aurora/site/source/documentation/latest/images/presentations/03_07_2015_aurora_mesos_in_practice_at_twitter_thumb.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/presentations/03_07_2015_aurora_mesos_in_practice_at_twitter_thumb.png?rev=1733548&view=auto
==============================================================================
Binary file - no diff available.

Propchange: 
aurora/site/source/documentation/latest/images/presentations/03_07_2015_aurora_mesos_in_practice_at_twitter_thumb.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: 
aurora/site/source/documentation/latest/images/presentations/09_20_2015_shipping_code_with_aurora_thumb.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/presentations/09_20_2015_shipping_code_with_aurora_thumb.png?rev=1733548&view=auto
==============================================================================
Binary file - no diff available.

Propchange: 
aurora/site/source/documentation/latest/images/presentations/09_20_2015_shipping_code_with_aurora_thumb.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: 
aurora/site/source/documentation/latest/images/presentations/09_20_2015_twitter_production_scale_thumb.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/presentations/09_20_2015_twitter_production_scale_thumb.png?rev=1733548&view=auto
==============================================================================
Binary file - no diff available.

Propchange: 
aurora/site/source/documentation/latest/images/presentations/09_20_2015_twitter_production_scale_thumb.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: 
aurora/site/source/documentation/latest/images/presentations/10_08_2015_mesos_aurora_on_a_small_scale_thumb.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/presentations/10_08_2015_mesos_aurora_on_a_small_scale_thumb.png?rev=1733548&view=auto
==============================================================================
Binary file - no diff available.

Propchange: 
aurora/site/source/documentation/latest/images/presentations/10_08_2015_mesos_aurora_on_a_small_scale_thumb.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: 
aurora/site/source/documentation/latest/images/presentations/10_08_2015_sla_aware_maintenance_for_operators_thumb.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/presentations/10_08_2015_sla_aware_maintenance_for_operators_thumb.png?rev=1733548&view=auto
==============================================================================
Binary file - no diff available.

Propchange: 
aurora/site/source/documentation/latest/images/presentations/10_08_2015_sla_aware_maintenance_for_operators_thumb.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Modified: aurora/site/source/documentation/latest/images/runningtask.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/runningtask.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Modified: aurora/site/source/documentation/latest/images/stderr.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/stderr.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Modified: aurora/site/source/documentation/latest/images/stdout.png
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/images/stdout.png?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
Binary files - no diff available.

Modified: aurora/site/source/documentation/latest/index.html.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/index.html.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/index.html.md (original)
+++ aurora/site/source/documentation/latest/index.html.md Fri Mar  4 02:43:01 
2016
@@ -5,7 +5,7 @@ Apache Aurora is a service scheduler tha
  * Operators: For those that wish to manage and fine-tune an Aurora cluster.
  * Developers: All the information you need to start modifying Aurora and 
contributing back to the project.
 
-We encourage you to ask questions on the [Aurora developer 
list](http://aurora.apache.org/community/) or the `#aurora` IRC channel on 
`irc.freenode.net`.
+We encourage you to ask questions on the [Aurora user 
list](http://aurora.apache.org/community/) or the `#aurora` IRC channel on 
`irc.freenode.net`.
 
 ## Users
  * [Install Aurora on virtual machines on your private 
machine](/documentation/latest/vagrant/)
@@ -19,13 +19,12 @@ We encourage you to ask questions on the
 
 ## Operators
  * [Installation](/documentation/latest/installing/)
- * [Deployment and cluster 
configuraiton](/documentation/latest/deploying-aurora-scheduler/)
+ * [Deployment and cluster 
configuration](/documentation/latest/deploying-aurora-scheduler/)
  * [Security](/documentation/latest/security/)
  * [Monitoring](/documentation/latest/monitoring/)
  * [Hooks for Aurora Client API](/documentation/latest/hooks/)
  * [Scheduler Storage](/documentation/latest/storage/)
  * [Scheduler Storage and Maintenance](/documentation/latest/storage-config/)
- * [Scheduler Storage Performance 
Tuning](/documentation/latest/scheduler-storage/)
  * [SLA Measurement](/documentation/latest/sla/)
  * [Resource Isolation and Sizing](/documentation/latest/resources/)
 
@@ -34,6 +33,7 @@ We encourage you to ask questions on the
  * [Developing the Aurora 
Scheduler](/documentation/latest/developing-aurora-scheduler/)
  * [Developing the Aurora 
Client](/documentation/latest/developing-aurora-client/)
  * [Committers Guide](/documentation/latest/committers/)
+ * [Design Documents](/documentation/latest/design-documents/)
  * [Deprecation Guide](/documentation/latest/thrift-deprecation/)
  * [Build System](/documentation/latest/build-system/)
  * [Generating test resources](/documentation/latest/test-resource-generation/)

Modified: aurora/site/source/documentation/latest/installing.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/installing.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/installing.md (original)
+++ aurora/site/source/documentation/latest/installing.md Fri Mar  4 02:43:01 
2016
@@ -260,10 +260,15 @@ are identical for both.
 
 ### Mesos on Ubuntu Trusty
 
+    sudo apt-get update
+    sudo apt-get install -y software-properties-common
     sudo add-apt-repository ppa:openjdk-r/ppa -y
     sudo apt-get update
 
-    sudo apt-get install -y software-properties-common wget libsvn1 libcurl3 
openjdk-8-jre-headless
+    sudo apt-get install -y wget libsvn1 libcurl3 openjdk-8-jre-headless
+
+    # NOTE: This appears to be a missing dependency of the mesos deb package.
+    sudo apt-get install -y libcurl4-nss-dev
 
     wget -c 
http://downloads.mesosphere.io/master/ubuntu/14.04/mesos_0.23.0-1.0.ubuntu1404_amd64.deb
     sudo dpkg -i mesos_0.23.0-1.0.ubuntu1404_amd64.deb

Modified: aurora/site/source/documentation/latest/presentations.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/presentations.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/presentations.md (original)
+++ aurora/site/source/documentation/latest/presentations.md Fri Mar  4 
02:43:01 2016
@@ -4,6 +4,31 @@ Video and slides from presentations and
 _(Listed in date descending order)_
 
 <table>
+
+	<tr>
+		<td><img src="/documentation/latest/images/presentations/10_08_2015_mesos_aurora_on_a_small_scale_thumb.png" alt="Mesos and Aurora on a Small Scale Thumbnail" /></td>
+		<td><strong><a href="https://www.youtube.com/watch?v=q5iIqhaCJ_o">Mesos &amp; Aurora on a Small Scale (Video)</a></strong>
+		<p>Presented by Florian Pfeiffer</p>
+		<p>October 8, 2015 at <a href="http://events.linuxfoundation.org/events/archive/2015/mesoscon-europe">#MesosCon Europe 2015</a></p></td>
+	</tr>
+	<tr>
+		<td><img src="/documentation/latest/images/presentations/10_08_2015_sla_aware_maintenance_for_operators_thumb.png" alt="SLA Aware Maintenance for Operators Thumbnail" /></td>
+		<td><strong><a href="https://www.youtube.com/watch?v=tZ0-SISvCis">SLA Aware Maintenance for Operators (Video)</a></strong>
+		<p>Presented by Joe Smith</p>
+		<p>October 8, 2015 at <a href="http://events.linuxfoundation.org/events/archive/2015/mesoscon-europe">#MesosCon Europe 2015</a></p></td>
+	</tr>
+	<tr>
+		<td><img src="/documentation/latest/images/presentations/09_20_2015_shipping_code_with_aurora_thumb.png" alt="Shipping Code with Aurora Thumbnail" /></td>
+		<td><strong><a href="https://www.youtube.com/watch?v=y1hi7K1lPkk">Shipping Code with Aurora (Video)</a></strong>
+		<p>Presented by Bill Farner</p>
+		<p>August 20, 2015 at <a href="http://events.linuxfoundation.org/events/archive/2015/mesoscon">#MesosCon 2015</a></p></td>
+	</tr>
+	<tr>
+		<td><img src="/documentation/latest/images/presentations/09_20_2015_twitter_production_scale_thumb.png" alt="Twitter Production Scale Thumbnail" /></td>
+		<td><strong><a href="https://www.youtube.com/watch?v=nNrh-gdu9m4">Twitter’s Production Scale: Mesos and Aurora Operations (Video)</a></strong>
+		<p>Presented by Joe Smith</p>
+		<p>August 20, 2015 at <a href="http://events.linuxfoundation.org/events/archive/2015/mesoscon">#MesosCon 2015</a></p></td>
+	</tr>
 	<tr>
 		<td><img src="/documentation/latest/images/presentations/04_30_2015_monolith_to_microservices_thumb.png" alt="From Monolith to Microservices with Aurora Video Thumbnail" /></td>
 		<td><strong><a href="https://www.youtube.com/watch?v=yXkOgnyK4Hw">From Monolith to Microservices w/ Aurora (Video)</a></strong>
@@ -11,6 +36,12 @@ _(Listed in date descending order)_
 		<p>April 30, 2015 at <a href="http://www.meetup.com/Bay-Area-Apache-Aurora-Users-Group/events/221219480/">Bay Area Apache Aurora Users Group</a></p></td>
        </tr>
        <tr>
+		<td><img src="/documentation/latest/images/presentations/03_07_2015_aurora_mesos_in_practice_at_twitter_thumb.png" alt="Aurora + Mesos in Practice at Twitter Thumbnail" /></td>
+		<td><strong><a href="https://www.youtube.com/watch?v=1XYJGX_qZVU">Aurora + Mesos in Practice at Twitter (Video)</a></strong>
+		<p>Presented by Bill Farner</p>
+		<p>March 07, 2015 at <a href="http://www.bigeng.io/aurora-mesos-in-practice-at-twitter">Bigcommerce TechTalk</a></p></td>
+       </tr>
+       <tr>
 		<td><img src="/documentation/latest/images/presentations/02_28_2015_apache_aurora_thumb.png" alt="Apache Auroraの始めかた Slideshow Thumbnail" /></td>
 		<td><strong><a href="http://www.slideshare.net/zembutsu/apache-aurora-introduction-and-tutorial-osc15tk">Apache Auroraの始めかた (Slides)</a></strong>
                <p>Presented by Masahito Zembutsu</p>
@@ -38,8 +69,7 @@ _(Listed in date descending order)_
 		<td><img src="/documentation/latest/images/presentations/08_21_2014_past_present_future_thumb.png" alt="Past, Present, and Future of the Aurora Scheduler Video Thumbnail" /></td>
 		<td><strong><a href="https://www.youtube.com/watch?v=Dsc5CPhKs4o">Past, Present, and Future of the Aurora Scheduler (Video)</a></strong>
 		<p>Presented by Bill Farner</p>
-		<p>August 21, 2014 at <a href="http://events.linuxfoundation.org/events/archive/2014/mesoscon">#MesosCon 2014</a></p>
-</td>
+		<p>August 21, 2014 at <a href="http://events.linuxfoundation.org/events/archive/2014/mesoscon">#MesosCon 2014</a></p></td>
        </tr>
        <tr>
 		<td><img src="/documentation/latest/images/presentations/03_25_2014_introduction_to_aurora_thumb.png" alt="Introduction to Apache Aurora Video Thumbnail" /></td>
@@ -47,4 +77,4 @@ _(Listed in date descending order)_
                <p>Presented by Bill Farner</p>
 		<p>March 25, 2014 at <a href="https://www.eventbrite.com/e/aurora-and-mesosframeworksmeetup-tickets-10850994617">Aurora and Mesos Frameworks Meetup</a></p></td>
        </tr>
-</table>
\ No newline at end of file
+</table>

Modified: aurora/site/source/documentation/latest/security.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/security.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/security.md (original)
+++ aurora/site/source/documentation/latest/security.md Fri Mar  4 02:43:01 2016
@@ -217,6 +217,14 @@ You might find documentation on the Inte
 like `[main]` and `[urls]`. These are not supported by Aurora as it uses a 
different mechanism to configure
 those parts of Shiro. Think of Aurora's `security.ini` as a subset with only 
`[users]` and `[roles]` sections.
 
+## Implementing Delegated Authorization
+
+It is possible to leverage Shiro's `runAs` feature by implementing a custom
+Servlet Filter that provides the capability and passing its fully qualified class
+name to the command line argument `-shiro_after_auth_filter`. The filter is
+registered in the same filter chain as the Shiro auth filters, placed after them,
+which ensures that it is invoked only after the Shiro filters have had a chance to
+authenticate the request.
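
For example (the filter class name here is hypothetical):

    -shiro_after_auth_filter=com.example.security.DelegatedAuthFilter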
+
 # Implementing a Custom Realm
 
 Since Aurora’s security is backed by [Apache 
Shiro](https://shiro.apache.org), you can implement a

Modified: aurora/site/source/documentation/latest/tools.md
URL: 
http://svn.apache.org/viewvc/aurora/site/source/documentation/latest/tools.md?rev=1733548&r1=1733547&r2=1733548&view=diff
==============================================================================
--- aurora/site/source/documentation/latest/tools.md (original)
+++ aurora/site/source/documentation/latest/tools.md Fri Mar  4 02:43:01 2016
@@ -1,6 +1,6 @@
 # Tools
 
-Various tools integrate with Aurora. There is a tool missing? Let us know, or 
submit a patch to add it!
+Various tools integrate with Aurora. Is there a tool missing? Let us know, or 
submit a patch to add it!
 
 * Load-balancing technology used to direct traffic to services running on Aurora
   - [synapse](https://github.com/airbnb/synapse) based on HAProxy

