This is an automated email from the ASF dual-hosted git repository.

wonook pushed a commit to branch gh-pages
in repository https://gitbox.apache.org/repos/asf/incubator-nemo.git

commit ac269ee63c9bbe1c386d733ac2cfbce286c34a4a
Author: Joo Yeon Kim <[email protected]>
AuthorDate: Thu Feb 1 13:43:51 2018 +0900

    onyx -> coral in text
---
 _docs/designs/compiler_design.md          | 14 +++++++-------
 _docs/designs/runtime_design.md           | 14 +++++++-------
 _docs/getting_started.md                  | 26 +++++++++++++-------------
 _docs/index.md                            |  8 ++++----
 _docs/optimization/extending_onyx.md      |  4 ++--
 _docs/optimization/ir.md                  |  6 +++---
 _docs/optimization/passes_and_policies.md | 12 ++++++------
 _pages/downloads.md                       |  4 ++--
 index.html                                | 18 +++++++++---------
 9 files changed, 53 insertions(+), 53 deletions(-)

diff --git a/_docs/designs/compiler_design.md b/_docs/designs/compiler_design.md
index 06a9144..dc35c24 100644
--- a/_docs/designs/compiler_design.md
+++ b/_docs/designs/compiler_design.md
@@ -7,25 +7,25 @@ permalink: /docs/compiler_design/
 
 The compiler takes an arbitrary dataflow program as input, and outputs an optimized physical execution plan to be understood by the execution runtime. The steps are as follows:
 
-1. **Compiler frontend** first translates the logical layer of given dataflow 
program written in high-level languages, like Apache Beam, into an expressive, 
general-purpose [Onyx Intermediate Representation (IR)](../ir).
+1. **Compiler frontend** first translates the logical layer of a given dataflow program written in high-level languages, like Apache Beam, into an expressive, general-purpose [Coral Intermediate Representation (IR)](../ir).
 2. Then using the [optimization pass](../passes_and_policies) interface 
provided by the **Compiler optimizer**, the IR can be flexibly reshaped and 
annotated with a variety of execution properties that configures the underlying 
runtime behaviors.
-3. After being processed by _optimization passes_, the **Compiler backend** 
finally lays out the IR into a physical execution plan, composed of tasks and 
stages, to be carried out by the [Onyx Execution Runtime](../runtime_design).
+3. After being processed by _optimization passes_, the **Compiler backend** 
finally lays out the IR into a physical execution plan, composed of tasks and 
stages, to be carried out by the [Coral Execution Runtime](../runtime_design).
 
 ### Frontend
 
-The frontend of *Onyx Compiler* translates arbitrary high-level dataflow 
languages, like Apache Beam, into our expression of [Onyx IR](../ir) with an 
elementary annotation of default *execution properties*.
+The frontend of *Coral Compiler* translates arbitrary high-level dataflow 
languages, like Apache Beam, into our expression of [Coral IR](../ir) with an 
elementary annotation of default *execution properties*.
 **Frontends** for different languages are designed as visitors that traverse given applications written in high-level dataflow languages in a topological order.
-While traversing the logic, it translates each dataflow operators and edges on 
the way, and appends the translated IR components to the *Onyx IR builder*.
+While traversing the logic, it translates each dataflow operator and edge along the way, and appends the translated IR components to the *Coral IR builder*.
 After completing the traversal, the IR builder builds the logical part of the 
IR after checking its integrity.
 The integrity check verifies a few invariants, such as ensuring that vertices without any incoming edges read source data.
 
 ### Optimizer
 
-After the IR is created with its logical structures set up, we need an [Onyx 
policy](../passes_and_policies) to optimize the application for a specific goal.
-To build Onyx policies safely and correctly, we provide a *policy builder* 
interface, which checks for the integrity while registering series of passes in 
a specific order.
+After the IR is created with its logical structures set up, we need a [Coral policy](../passes_and_policies) to optimize the application for a specific goal.
+To build Coral policies safely and correctly, we provide a *policy builder* interface, which checks for integrity while registering a series of passes in a specific order.
 
 For example, if an annotating pass requires information of specific *execution 
properties* to perform its work, we specify them as *prerequisite execution 
properties*, and check the order and the content of registered passes to ensure 
that the conditions have been met.
-We avoid the cases where circular dependencies occur, through the default 
execution properties that we provide at the initiation of the Onyx IR.
+We avoid cases where circular dependencies occur through the default execution properties that we provide at the initiation of the Coral IR.
 
 Using the policy, the optimizer applies each *optimization pass* one by one in the provided order, and checks the IR integrity after each optimization has been done, to ensure that the [IR](../ir) is not broken.
 
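For readers following the optimizer description in the hunk above, here is a minimal, self-contained sketch of the policy-builder idea: passes are registered in order, and registration fails if a pass's prerequisite execution properties have not already been provided. The names used here (SketchIR, SketchPass, SketchPolicyBuilder) are hypothetical illustrations for this email only, not the project's actual API.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class SketchIR {
  // Stand-in for the IR: only tracks which execution properties are annotated.
  final Set<String> annotatedProperties = new HashSet<>();
}

interface SketchPass {
  Set<String> prerequisiteProperties();  // properties this pass needs to read
  Set<String> annotatedProperties();     // properties this pass writes
  SketchIR apply(SketchIR ir);
}

final class SketchPolicyBuilder {
  private final List<SketchPass> passes = new ArrayList<>();
  private final Set<String> provided = new HashSet<>();

  SketchPolicyBuilder(final Set<String> defaultProperties) {
    provided.addAll(defaultProperties);  // defaults set at IR initiation
  }

  // Registering a pass fails fast if its prerequisites are not yet provided,
  // which is how circular dependencies between passes are avoided.
  SketchPolicyBuilder register(final SketchPass pass) {
    if (!provided.containsAll(pass.prerequisiteProperties())) {
      throw new IllegalArgumentException("Prerequisites not met for pass: " + pass);
    }
    provided.addAll(pass.annotatedProperties());
    passes.add(pass);
    return this;
  }

  // Applies the registered passes one by one, in the provided order.
  SketchIR apply(SketchIR ir) {
    for (final SketchPass pass : passes) {
      ir = pass.apply(ir);
      // A real implementation would re-check IR integrity here after each pass.
    }
    return ir;
  }
}
```

A policy assembled this way rejects any pass whose prerequisites were not annotated by an earlier pass or by the IR's defaults, which is the ordering check the paragraph above refers to.
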
diff --git a/_docs/designs/runtime_design.md b/_docs/designs/runtime_design.md
index 6084954..4d8d3a3 100644
--- a/_docs/designs/runtime_design.md
+++ b/_docs/designs/runtime_design.md
@@ -3,24 +3,24 @@ title: Runtime Design
 permalink: /docs/runtime_design/
 ---
 
-### Receiving a Job from the Onyx Compiler
+### Receiving a Job from the Coral Compiler
 
-After the compiler goes through a set of passes for optimization, the 
optimized Onyx IR is translated into into a 
+After the compiler goes through a set of passes for optimization, the 
optimized Coral IR is translated into a
 physical form for the execution runtime to execute. This involves translations 
like expanding an operator annotated 
-with parallelism in Onyx IR to the desired number of tasks and connecting the 
tasks according to the data communication 
+with parallelism in Coral IR to the desired number of tasks and connecting the 
tasks according to the data communication 
 patterns annotated on the IR edges. The physical execution plan is also in the form of a DAG, with the same values annotated
-for execution properties as the given IR DAG if necessary. Onyx IR DAG and 
physical execution plan can be translated 
+for execution properties as the given IR DAG if necessary. Coral IR DAG and 
physical execution plan can be translated 
 from one another by sharing the identifiers.
 
 ### Runtime Architecture
-The Onyx runtime consists of a _RuntimeMaster_ and multiple _Executors_.
+The Coral runtime consists of a _RuntimeMaster_ and multiple _Executors_.
 _RuntimeMaster_ takes the submitted physical execution plan and schedules each _TaskGroup_ to an _Executor_ for execution.
 
-The figure below shows the Onyx runtime's overall architecture.
+The figure below shows the Coral runtime's overall architecture.
 Our runtime's components can be broken down into two parts, the processing 
backbone and the extensible modules.
 
 The processing backbone, illustrated by the blue double-stroked boxes in the figure below,
-implements the inherent and basic code that must be executed for all Onyx jobs
+implements the inherent and basic code that must be executed for all Coral jobs
 (and potentially all data processing jobs). 
 The code includes references to the flexible and extensible data structures 
 representing our execution properties. 
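To make the IR-to-physical-plan translation described in the hunk above concrete, the following is a conceptual sketch of the expansion step: an operator annotated with a parallelism of N becomes N tasks, and the shared operator identifier is what lets the physical plan be mapped back to the IR DAG. The names (SketchOperator, SketchTask, SketchPlanExpansion) are illustrative assumptions, not the runtime's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

final class SketchOperator {
  final String id;
  final int parallelism;  // an annotated execution property
  SketchOperator(final String id, final int parallelism) {
    this.id = id;
    this.parallelism = parallelism;
  }
}

final class SketchTask {
  final String operatorId;  // shared identifier linking back to the IR vertex
  final int index;
  SketchTask(final String operatorId, final int index) {
    this.operatorId = operatorId;
    this.index = index;
  }
}

final class SketchPlanExpansion {
  // Expand one annotated operator into its parallel tasks.
  static List<SketchTask> expand(final SketchOperator op) {
    final List<SketchTask> tasks = new ArrayList<>();
    for (int i = 0; i < op.parallelism; i++) {
      tasks.add(new SketchTask(op.id, i));
    }
    return tasks;
  }
}
```
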
diff --git a/_docs/getting_started.md b/_docs/getting_started.md
index 3e6a320..a00c2c5 100644
--- a/_docs/getting_started.md
+++ b/_docs/getting_started.md
@@ -26,7 +26,7 @@ permalink: /docs/getting_started/
         * `sudo make install`
     3. To check for a successful installation of version 2.5.0, run `protoc 
--version`
 
-### Installing Onyx 
+### Installing Coral 
 * Run all tests and install: `mvn clean install -T 2C`
 * Run only unit tests and install: `mvn clean install -DskipITs -T 2C`
 
@@ -37,18 +37,18 @@ permalink: /docs/getting_started/
 
 ```bash
 ./bin/run_external_app.sh \
-`pwd`/onyx_app/target/bd17f-1.0-SNAPSHOT.jar \
+`pwd`/coral_app/target/bd17f-1.0-SNAPSHOT.jar \
 -job_id mapreduce \
--executor_json `pwd`/onyx_runtime/config/default.json \
+-executor_json `pwd`/coral_runtime/config/default.json \
 -user_main MapReduce \
--user_args "`pwd`/mr_input_data `pwd`/onyx_output/output_data"
+-user_args "`pwd`/mr_input_data `pwd`/coral_output/output_data"
 ```
 
 ### Configurable options
 * `-job_id`: ID of the Beam job
 * `-user_main`: Canonical name of the Beam application
 * `-user_args`: Arguments that the Beam application accepts
-* `-optimization_policy`: Canonical name of the optimization policy to apply 
to a job DAG in Onyx Compiler
+* `-optimization_policy`: Canonical name of the optimization policy to apply 
to a job DAG in Coral Compiler
 * `-deploy_mode`: `yarn` is supported (default value is `local`)
 
 ### Examples
@@ -56,16 +56,16 @@ permalink: /docs/getting_started/
 ## MapReduce example
 ./bin/run.sh \
   -job_id mr_default \
-  -user_main edu.snu.onyx.examples.beam.MapReduce \
-  -optimization_policy edu.snu.onyx.compiler.optimizer.policy.DefaultPolicy \
+  -user_main edu.snu.coral.examples.beam.MapReduce \
+  -optimization_policy edu.snu.coral.compiler.optimizer.policy.DefaultPolicy \
   -user_args "`pwd`/src/main/resources/sample_input_mr 
`pwd`/src/main/resources/sample_output"
 
 ## YARN cluster example
 ./bin/run.sh \
   -deploy_mode yarn \
   -job_id mr_pado \
-  -user_main edu.snu.onyx.examples.beam.MapReduce \
-  -optimization_policy edu.snu.onyx.compiler.optimizer.policy.PadoPolicy \
+  -user_main edu.snu.coral.examples.beam.MapReduce \
+  -optimization_policy edu.snu.coral.compiler.optimizer.policy.PadoPolicy \
   -user_args "hdfs://v-m:9000/sample_input_mr hdfs://v-m:9000/sample_output_mr"
 ```
 
@@ -103,16 +103,16 @@ This example configuration specifies
 * 1 reserved container with 2 cores and 1024MB memory
 
 ## Monitoring your job using web UI
-Onyx Compiler and Engine can store JSON representation of intermediate DAGs.
+Coral Compiler and Runtime can store JSON representations of intermediate DAGs.
 * `-dag_dir` command line option is used to specify the directory where the 
JSON files are stored. The default directory is `./dag`.
-Using our [online visualizer](https://service.jangho.io/onyx-dag/), you can 
easily visualize a DAG. Just drop the JSON file of the DAG as an input to it.
+Using our [online visualizer](https://service.jangho.io/Coral-dag/), you can 
easily visualize a DAG. Just drop the JSON file of the DAG as an input to it.
 
 ### Examples
 ```bash
 ./bin/run.sh \
   -job_id als \
-  -user_main edu.snu.onyx.examples.beam.AlternatingLeastSquare \
-  -optimization_policy edu.snu.onyx.compiler.optimizer.policy.PadoPolicy \
+  -user_main edu.snu.coral.examples.beam.AlternatingLeastSquare \
+  -optimization_policy edu.snu.coral.compiler.optimizer.policy.PadoPolicy \
   -dag_dir "./dag/als" \
   -user_args "`pwd`/src/main/resources/sample_input_als 10 3"
 ```
diff --git a/_docs/index.md b/_docs/index.md
index 0d41b5b..6bc44d2 100644
--- a/_docs/index.md
+++ b/_docs/index.md
@@ -4,16 +4,16 @@ permalink: /docs/home/
 redirect_from: /docs/index.html
 ---
 
-Onyx aims to optimize data processing for better performance and datacenter 
efficiency, not only in general and common conditions, but also with various 
*deployment characteristics*.
+Coral aims to optimize data processing for better performance and datacenter 
efficiency, not only in general and common conditions, but also with various 
*deployment characteristics*.
 Such characteristics include processing data on *specific resource 
environments*, like transient resources, and running *jobs with specific 
attributes*, like skewed data.
 
 There exist many data processing systems with different designs, each solving the specific problems it targets, but they fail to cover or adapt to unconsidered cases without substantial modification effort.
 The primary reason is that system runtime behaviors are hidden and planted inside the system core to hide the complexity of distributed computing.
 This makes it very hard for a single system to support different *deployment 
characteristics* with different *runtime behaviors* without substantial effort.
 
-To solve this problem and easily modify *runtime behaviors* for different 
*deployment characteristics*, Onyx expresses workloads using the [Onyx 
Intermediate Representation (IR)](../ir), which represents the logical notion 
of data processing applications and its runtime behaviors on separate layers.
-These layers can be easily modified through a set of high-level [graph 
pass](../passes_and_policies) interfaces, exposed by the [Onyx 
Compiler](../compiler_design), enabling users to flexibly modify *runtime 
behaviors* at both compile-time and runtime.
-Works represented this way can be executed by the [Onyx Execution 
Runtime](../runtime_design) through its [modular and 
extensible](../extending_onyx) design.
+To solve this problem and easily modify *runtime behaviors* for different 
*deployment characteristics*, Coral expresses workloads using the [Coral 
Intermediate Representation (IR)](../ir), which represents the logical notion 
of data processing applications and its runtime behaviors on separate layers.
+These layers can be easily modified through a set of high-level [graph 
pass](../passes_and_policies) interfaces, exposed by the [Coral 
Compiler](../compiler_design), enabling users to flexibly modify *runtime 
behaviors* at both compile-time and runtime.
+Workloads represented this way can be executed by the [Coral Execution Runtime](../runtime_design) through its [modular and extensible](../extending_coral) design.
 
 <br>
 <div class="text-center">
diff --git a/_docs/optimization/extending_onyx.md 
b/_docs/optimization/extending_onyx.md
index f74d7ca..1a280d6 100644
--- a/_docs/optimization/extending_onyx.md
+++ b/_docs/optimization/extending_onyx.md
@@ -1,6 +1,6 @@
 ---
-title: Extending Onyx
-permalink: /docs/extending_onyx/
+title: Extending Coral
+permalink: /docs/extending_coral/
 ---
 
 ### Overview
diff --git a/_docs/optimization/ir.md b/_docs/optimization/ir.md
index 2e1b581..f6362d7 100644
--- a/_docs/optimization/ir.md
+++ b/_docs/optimization/ir.md
@@ -1,5 +1,5 @@
 ---
-title: Onyx Intermediate Representation (IR)
+title: Coral Intermediate Representation (IR)
 permalink: /docs/ir/
 ---
 
@@ -12,8 +12,8 @@ On that layer, we can annotate specific execution properties 
related to the IR c
 
 ### IR structure
 
-Onyx IR is composed of vertices, which each represent a data-parallel operator 
that transforms data, and edges between them, which each represents the 
dependency of data flow between the vertices.
-Onyx IR supports four different types of IR vertices:
+Coral IR is composed of vertices, each of which represents a data-parallel operator that transforms data, and edges between them, each of which represents a data-flow dependency between the vertices.
+Coral IR supports four different types of IR vertices:
 
 - **UDF Vertex**: Most commonly used vertex. Each UDF vertex contains a 
transform which determines the actions to take for the given input data. A 
transform can express any kind of data processing operation that high-level 
languages articulate.
 - **Source Vertex**: This produces data by reading from an arbitrary source 
like disks and distributed filesystems.
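As a toy illustration of the vertex-and-edge structure described in the hunk above, the sketch below builds a two-vertex dataflow where the edge carries an annotated communication-pattern property. The names (SketchVertex, SketchEdge, SketchIRDag) and the property value are assumptions made for this example, not the project's actual IR classes.

```java
import java.util.ArrayList;
import java.util.List;

final class SketchVertex {
  final String id;
  SketchVertex(final String id) { this.id = id; }
}

final class SketchEdge {
  final SketchVertex src;
  final SketchVertex dst;
  final String communicationPattern;  // an execution property annotated on the edge
  SketchEdge(final SketchVertex src, final SketchVertex dst, final String communicationPattern) {
    this.src = src;
    this.dst = dst;
    this.communicationPattern = communicationPattern;
  }
}

final class SketchIRDag {
  final List<SketchVertex> vertices = new ArrayList<>();
  final List<SketchEdge> edges = new ArrayList<>();

  public static void main(final String[] args) {
    // A two-vertex dataflow: a source vertex feeding a UDF-style operator vertex.
    final SketchIRDag dag = new SketchIRDag();
    final SketchVertex source = new SketchVertex("source");
    final SketchVertex operator = new SketchVertex("map");
    dag.vertices.add(source);
    dag.vertices.add(operator);
    dag.edges.add(new SketchEdge(source, operator, "OneToOne"));
    System.out.println(dag.vertices.size() + " vertices, " + dag.edges.size() + " edge");
  }
}
```
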
diff --git a/_docs/optimization/passes_and_policies.md 
b/_docs/optimization/passes_and_policies.md
index 05470ef..a99a578 100644
--- a/_docs/optimization/passes_and_policies.md
+++ b/_docs/optimization/passes_and_policies.md
@@ -5,22 +5,22 @@ permalink: /docs/passes_and_policies/
 
 ### Optimization Passes
 
-The [Onyx IR](../ir) can be flexibly modified, both in its logical structure 
and annotations, through an interface called *Onyx optimization pass*.
-An *optimization pass* is basically a function that takes an *Onyx IR* and 
outputs an optimized *Onyx IR*.
+The [Coral IR](../ir) can be flexibly modified, both in its logical structure 
and annotations, through an interface called *Coral optimization pass*.
+An *optimization pass* is basically a function that takes a *Coral IR* and outputs an optimized *Coral IR*.
 
 ##### Compile-time passes
 
 The modification during compile-time can be categorized in different ways:
 
-1. **Reshaping passes** modify the shape of the IR itself by inserting, 
regrouping, or deleting IR vertices and edges on an Onyx IR, such as collecting 
repetitive vertices inside a single loop or inserting metric vertices. This 
modifies the logical notion of data processing applications.
+1. **Reshaping passes** modify the shape of the IR itself by inserting, regrouping, or deleting IR vertices and edges on a Coral IR, such as collecting repetitive vertices inside a single loop or inserting metric vertices. This modifies the logical notion of data processing applications.
 2. **Annotating passes** annotate IR vertices and edges with *execution properties*, using the provided logic, to adjust and run the workload in the fashion that the user wants.
 3. **Composite passes** are collections of passes that are grouped together 
for convenience.
 
 ##### Run-time passes
 
-After the compilation and compile-time optimizations, the *Onyx IR* gets laid 
out as a *physical execution plan* to be submitted to and executed by the *Onyx 
Execution Runtime*.
+After the compilation and compile-time optimizations, the *Coral IR* gets laid 
out as a *physical execution plan* to be submitted to and executed by the 
*Coral Execution Runtime*.
 During execution, a *run-time optimization pass* can be applied to perform dynamic optimizations, like solving data skew, using runtime statistics.
-It takes the old *Onyx IR* and metric data of runtime statistics, and sends 
the newly optimized Onyx IR to execution runtime for the physical plan to be 
updated accordingly.
+It takes the old *Coral IR* and metric data from runtime statistics, and sends the newly optimized Coral IR to the execution runtime for the physical plan to be updated accordingly.
 
 ### Examples
 
@@ -52,4 +52,4 @@ and data flow model pass, that determines the fashion in 
which each computation
 
 Using different optimization policies for specific goals enables users to 
flexibly customize and perform data processing for different deployment 
characteristics.
 This greatly simplifies the work, replacing the exploration and rewriting of system internals to modify runtime behaviors with a simple process of using pluggable policies.
-It also makes it possible for the system to promptly meet new requirements 
through [easy extension of system capabilities](../extending_onyx).
+It also makes it possible for the system to promptly meet new requirements through [easy extension of system capabilities](../extending_coral).
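Since an optimization pass is described in the hunk above as a function from IR to IR, a minimal sketch of an annotating pass might look like the following: it walks the vertices and fills in a default parallelism property where none is set. SketchIRVertex, SketchDefaultParallelismPass, and the property key "Parallelism" are hypothetical names for this sketch, not the project's actual classes.

```java
import java.util.List;
import java.util.Map;

final class SketchIRVertex {
  final String id;
  final Map<String, Integer> executionProperties;  // annotated execution properties
  SketchIRVertex(final String id, final Map<String, Integer> executionProperties) {
    this.id = id;
    this.executionProperties = executionProperties;
  }
}

final class SketchDefaultParallelismPass {
  // Annotate every vertex that has no "Parallelism" property with a default value,
  // leaving already-annotated vertices untouched.
  static List<SketchIRVertex> apply(final List<SketchIRVertex> vertices, final int defaultParallelism) {
    for (final SketchIRVertex v : vertices) {
      v.executionProperties.putIfAbsent("Parallelism", defaultParallelism);
    }
    return vertices;
  }
}
```
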
diff --git a/_pages/downloads.md b/_pages/downloads.md
index 0c8f3af..250ebc1 100644
--- a/_pages/downloads.md
+++ b/_pages/downloads.md
@@ -8,7 +8,7 @@ permalink: /pages/downloads/
 
 | Release version | Download link |
 | --------------- | ------------: |
-| 0.1 | [Source code (zip)](https://github.com/snuspl/onyx/archive/v0.1.zip) / 
[Source code (tar.gz)](Source code (tar.gz)) |
+| 0.1 | [Source code (zip)](https://github.com/snuspl/coral/archive/v0.1.zip) / [Source code (tar.gz)](https://github.com/snuspl/coral/archive/v0.1.tar.gz) |
 
 
 ## Development and Maintenance Branches
@@ -16,5 +16,5 @@ permalink: /pages/downloads/
 If you are interested in working with the newest under-development code or 
contributing, you can clone the master branch from Git:
 
 ```
-$ git clone [email protected]:snuspl/onyx.git
+$ git clone [email protected]:snuspl/coral.git
 ```
diff --git a/index.html b/index.html
index 4d75d95..9c16397 100644
--- a/index.html
+++ b/index.html
@@ -4,7 +4,7 @@ layout: default
 
 <div class="header-container jumbotron">
     <div class="container">
-    <h1>Onyx <small>[ˈäniks]</small> </h1>
+    <h1>Coral</h1>
         <p>A Data Processing System for Flexible Employment With Different 
Deployment Characteristics.</p>
         <p><a class="btn btn-primary btn-lg" href="{{ "/docs/home/" | prepend: 
site.baseurl }}" role="button">Learn more</a></p>
     </div>
@@ -15,13 +15,13 @@ layout: default
 
     <div class="row">
         <div class="col-md-6">
-            <h2 class="header-light regular-pad">What is Onyx? 
<small>[ˈäniks]</small> </h2>
+            <h2 class="header-light regular-pad">What is Coral?</h2>
             <blockquote>
                 <p>
-                    Onyx is a data processing system for flexible employment 
with different execution scenarios for various deployment characteristics on 
clusters.
+                    Coral is a data processing system for flexible employment 
with different execution scenarios for various deployment characteristics on 
clusters.
                     They include processing data on specific resource 
environments, like on transient resources, and running jobs with specific 
attributes, like skewed data.
-                    Onyx decouples the logical notion of data processing 
applications from runtime behaviors and express them on separate layers using 
Onyx Intermediate Representation (IR).
-                    Specifically, through a set of high-level graph pass 
interfaces, Onyx exposes runtime behaviors to be flexibly configured and 
modified at both compile-time and runtime, and the Onyx Runtime executes the 
Onyx IR with its modular and extensible design.
+                    Coral decouples the logical notion of data processing applications from runtime behaviors and expresses them on separate layers using Coral Intermediate Representation (IR).
+                    Specifically, through a set of high-level graph pass 
interfaces, Coral exposes runtime behaviors to be flexibly configured and 
modified at both compile-time and runtime, and the Coral Runtime executes the 
Coral IR with its modular and extensible design.
                 </p>
               <!--<p>Jekyll is a simple, blog-aware, static site generator. It 
takes a template-->
               <!--directory containing raw text files in various formats, runs 
it through-->
@@ -46,7 +46,7 @@ layout: default
             <h1 class="text-center"><i class="fa fa-pencil" 
aria-hidden="true"></i></h1>
             <h3 class="text-center">Flexible</h3>
             <p>
-                Onyx offers flexible adaptation to your desired execution 
environment.
+                Coral offers flexible adaptation to your desired execution 
environment.
                 Examples of such execution environments include using 
transient resources, disaggregation of different computing resources, and 
handling skewed data.
             </p>
         </div>
@@ -54,7 +54,7 @@ layout: default
             <h1 class="text-center"><i class="fa fa-cogs" 
aria-hidden="true"></i></h1>
             <h3 class="text-center">Modular and Extensible</h3>
             <p>
-                Onyx is designed to be modular and extensible for even more 
variety of execution scenarios and deployment characteristics.
+                Coral is designed to be modular and extensible for an even wider variety of execution scenarios and deployment characteristics.
                 Users with specific needs can plug in and out the required 
components and execute their jobs accordingly.
             </p>
         </div>
@@ -62,8 +62,8 @@ layout: default
             <h1 class="text-center"><i class="fa fa-arrows-alt" 
aria-hidden="true"></i></h1>
             <h3 class="text-center">Runs Everywhere</h3>
             <p>
-                Onyx is able to run Apache Beam™ programs using our runtime, 
and Apache Spark™ programs in the near future.
-                Moreover, by using Apache REEF™, Onyx enables data processing 
possible on different resource managers including Apache Hadoop™ YARN or Apache 
Mesos™.
+                Coral is able to run Apache Beam™ programs using our runtime, and will support Apache Spark™ programs in the near future.
+                Moreover, by using Apache REEF™, Coral enables data processing on different resource managers including Apache Hadoop™ YARN and Apache Mesos™.
             </p>
         </div>
     </div>

-- 
To stop receiving notification emails like this one, please contact
[email protected].
