http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/site/markdown/ResourceEstimator.md
----------------------------------------------------------------------
diff --git 
a/hadoop-tools/hadoop-resourceestimator/src/site/markdown/ResourceEstimator.md 
b/hadoop-tools/hadoop-resourceestimator/src/site/markdown/ResourceEstimator.md
new file mode 100644
index 0000000..12f8dd5
--- /dev/null
+++ 
b/hadoop-tools/hadoop-resourceestimator/src/site/markdown/ResourceEstimator.md
@@ -0,0 +1,181 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Resource Estimator Service
+==========================
+
+* [Resource Estimator Service](#Resource_Estimator_Service)
+    * [Overview](#Overview)
+        * [Motivation](#Motivation)
+        * [Goals](#Goals)
+        * [Architecture](#Architecture)
+    * [Usage](#Usage)
+    * [Example](#Example)
+    * [Advanced Configuration](#AdvancedConfig)
+    * [Future work](#Future)
+
+Overview
+--------
+
+### Motivation
+Estimating job resource requirements remains an important and challenging problem for enterprise clusters. This is amplified by the ever-increasing complexity of workloads, from traditional batch jobs to interactive queries, streaming, and, more recently, machine learning jobs. As a result, jobs rely on multiple computation frameworks such as Tez, MapReduce and Spark, and the problem is further compounded by the shared nature of the clusters. The current state-of-the-art solution relies on user expertise to estimate resource requirements for jobs (e.g., the number of reducers or the container memory size), which is both tedious and inefficient.
+
+Based on the analysis of our cluster workloads, we observe that a large portion of jobs (more than 60%) are recurring jobs, giving us the opportunity to automatically estimate job resource requirements based on a job's history runs. It is worth noting that jobs usually come from different computation frameworks, and their versions may change across runs as well. Therefore, we want a framework-agnostic, black-box solution that automatically estimates resource requirements for recurring jobs.
+
+### Goals
+
+*   For a periodic job, analyze its history logs and predict its resource requirements for the new run.
+*   Support various types of job logs.
+*   Scale to terabytes of job logs.
+
+### Architecture
+
+The following figure illustrates the implementation architecture of the 
resource estimator.
+
+![The architecture of the resource 
estimator](images/resourceestimator_arch.png)
+
+Hadoop-resourceestimator mainly consists of three modules: Translator, 
SkylineStore and Estimator.
+
+1. `ResourceSkyline` characterizes a job's resource utilization during its lifespan. More specifically, it uses `RLESparseResourceAllocation` (<https://github.com/apache/hadoop/blob/b6e7d1369690eaf50ce9ea7968f91a72ecb74de0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/RLESparseResourceAllocation.java>) to record the container allocation information. `RecurrenceId` identifies a specific run of a recurring pipeline. A pipeline can consist of multiple jobs, each of which has a `ResourceSkyline` characterizing its resource utilization.
+2. `Translator` parses the job logs, extracts their `ResourceSkylines` and stores them in the SkylineStore. `SingleLineParser` parses one line of the log stream and extracts a `ResourceSkyline`; `LogParser` recursively parses each line of the log stream using `SingleLineParser`. Note that logs can be stored in different formats, so `LogParser` takes a stream of strings as input rather than a File or other format. Since job logs may have various formats and thus require different `SingleLineParser` implementations, `LogParser` instantiates the `SingleLineParser` based on user configuration. Currently Hadoop-resourceestimator provides two `SingleLineParser` implementations: `NativeSingleLineParser`, which supports an optimized native format, and `RMSingleLineParser`, which parses the YARN ResourceManager logs generated by Hadoop, since RM logs are widely available in production deployments.
+3. `SkylineStore` serves as the storage layer for Hadoop-resourceestimator and consists of two parts. `HistorySkylineStore` stores the `ResourceSkylines` extracted by the `Translator`. It supports four actions: addHistory, deleteHistory, updateHistory and getHistory. addHistory appends new `ResourceSkylines` to a recurring pipeline, while updateHistory deletes all the `ResourceSkylines` of a specific recurring pipeline and re-inserts the new ones. `PredictionSkylineStore` stores the predicted `RLESparseResourceAllocation` generated by the Estimator. It supports two actions: addEstimation and getEstimation.
+
+    Currently Hadoop-resourceestimator provides an in-memory implementation of the SkylineStore.
+4. `Estimator` predicts a recurring pipeline's resource requirements based on its history runs, stores the prediction in the `SkylineStore` and makes recurring resource reservations to YARN (YARN-5326). `Solver` reads all the history `ResourceSkylines` of a specific recurring pipeline and predicts its new resource requirements, wrapped in an `RLESparseResourceAllocation`. Currently Hadoop-resourceestimator provides an `LPSOLVER` to make the prediction (the details of the Linear Programming model can be found in the paper). There is also a `BaseSolver` that translates predicted resource requirements into a `ReservationSubmissionRequest`, which is used by the different solver implementations to make recurring resource reservations on YARN.
+5. `ResourceEstimationService` wraps Hadoop-resourceestimator as a micro-service that can be easily deployed in clusters. It provides a set of REST APIs that allow users to parse specified job logs, query a pipeline's history `ResourceSkylines`, query a pipeline's predicted resource requirements (running the `SOLVER` if the prediction does not exist yet), and delete `ResourceSkylines` from the `SkylineStore`.
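+The four `HistorySkylineStore` actions described above can be sketched with a minimal in-memory store. The following is an illustrative Python sketch only (the actual implementation is the Java `InMemoryStore` class; the dict-based records stand in for `ResourceSkyline` objects):

```python
class InMemoryHistorySkylineStore:
    """Illustrative in-memory history store keyed by recurring-pipeline id."""

    def __init__(self):
        # pipeline_id -> list of ResourceSkyline-like records (dicts here)
        self._history = {}

    def add_history(self, pipeline_id, skylines):
        # addHistory appends new ResourceSkylines to the recurring pipeline
        self._history.setdefault(pipeline_id, []).extend(skylines)

    def update_history(self, pipeline_id, skylines):
        # updateHistory deletes all ResourceSkylines of the pipeline and
        # re-inserts the new ones
        self._history[pipeline_id] = list(skylines)

    def get_history(self, pipeline_id):
        # getHistory returns the stored runs for the pipeline (empty if none)
        return self._history.get(pipeline_id, [])

    def delete_history(self, pipeline_id):
        # deleteHistory drops all stored runs for the pipeline
        self._history.pop(pipeline_id, None)
```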
+
+Usage
+-----
+
+This section will guide you through the usage of the resource estimator service.
+
+Here let `$HADOOP_ROOT` represent the Hadoop install directory. If you build Hadoop yourself, `$HADOOP_ROOT` is `hadoop-dist/target/hadoop-$VERSION`. The location of the resource estimator service, `$ResourceEstimatorServiceHome`, is `$HADOOP_ROOT/share/hadoop/tools/resourceestimator`. It contains three folders: `bin`, `conf` and `data`. Note that users can run the resource estimator service with the default configurations.
+
+* `bin` contains the running scripts for the resource estimator service.
+
+* `conf` contains the configuration files for the resource estimator service.
+
+* `data` contains the sample log that is used to run the example of the resource estimator service.
+
+### Step 1: Start the estimator
+
+First of all, copy the configuration file (located in 
`$ResourceEstimatorServiceHome/conf/`) to `$HADOOP_ROOT/etc/hadoop`.
+
+The script to start the estimator is `start-estimator.sh`.
+
+    $ cd $ResourceEstimatorServiceHome
+    $ bin/start-estimator.sh
+
+A web server is started, and users can use the resource estimator service through its REST APIs.
+
+### Step 2: Run the estimator
+
+The URI for the resource estimator service is `http://0.0.0.0`, and the default service port is `9998` (configured in `$ResourceEstimatorServiceHome/conf/resourceestimator-config.xml`). In `$ResourceEstimatorServiceHome/data` there is a sample log file, `resourceEstimatorService.txt`, which contains the logs of two runs of a tpch_q12 query job.
+
+* `parse job logs: POST 
http://URI:port/resourceestimator/translator/LOG_FILE_DIRECTORY`
+
+  Send `POST 
http://0.0.0.0:9998/resourceestimator/translator/data/resourceEstimatorService.txt`.
 The underlying estimator will extract the ResourceSkylines from the log file 
and store them in the jobHistory SkylineStore.
+
+* `query job's history ResourceSkylines: GET 
http://URI:port/resourceestimator/skylinestore/history/{pipelineId}/{runId}`
+
+  Send `GET http://0.0.0.0:9998/resourceestimator/skylinestore/history/*/*`, and the underlying estimator will return all the records in the history SkylineStore. You should be able to see the ResourceSkylines of two runs of tpch_q12: tpch_q12_0 and tpch_q12_1. Note that both the `pipelineId` and `runId` fields support wildcard operations.
+
+* `predict job's resource skyline requirement: GET http://URI:port/resourceestimator/estimator/{pipelineId}`
+
+  Send `GET http://0.0.0.0:9998/resourceestimator/estimator/tpch_q12`, and the underlying estimator will predict the job's resource requirements for the new run based on its history ResourceSkylines, and store the predicted resource requirements in the jobEstimation SkylineStore.
+
+* `query job's estimated resource skylines: GET http://URI:port/resourceestimator/skylinestore/estimation/{pipelineId}`
+
+  Send `GET http://0.0.0.0:9998/resourceestimator/skylinestore/estimation/tpch_q12`, and the underlying estimator will return the resource requirement estimation for the tpch_q12 job. Note that the jobEstimation SkylineStore does not support wildcard operations.
+
+* `delete job's history resource skylines: DELETE http://URI:port/resourceestimator/skylinestore/history/{pipelineId}/{runId}`
+
+  Send `DELETE http://0.0.0.0:9998/resourceestimator/skylinestore/history/tpch_q12/tpch_q12_0`, and the underlying estimator will delete the ResourceSkyline record for tpch_q12_0. Re-send `GET http://0.0.0.0:9998/resourceestimator/skylinestore/history/*/*`, and the underlying estimator will only return the ResourceSkyline for tpch_q12_1.
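+Taken together, the calls above can be driven by a small client script. The following Python sketch is illustrative only; the `url_for` and `run_workflow` helpers are hypothetical names (not part of the service), and it assumes the service is running at `http://0.0.0.0:9998`, the default:

```python
import urllib.request

# Base endpoint of the resource estimator service (default host/port assumed)
BASE = "http://0.0.0.0:9998/resourceestimator"

def url_for(action, *parts):
    # Builds the endpoint URLs used in the steps above.
    paths = {
        "parse": "translator",                 # POST: parse job logs
        "history": "skylinestore/history",     # GET/DELETE: history skylines
        "estimate": "estimator",               # GET: run the prediction
        "estimation": "skylinestore/estimation",  # GET: stored prediction
    }
    return "/".join([BASE, paths[action], *parts])

def run_workflow():
    # 1. Parse the sample log (POST), then 2-4. query history, predict,
    # and fetch the stored prediction (all GET).
    req = urllib.request.Request(
        url_for("parse", "data/resourceEstimatorService.txt"), method="POST")
    urllib.request.urlopen(req)
    for u in (url_for("history", "*", "*"),
              url_for("estimate", "tpch_q12"),
              url_for("estimation", "tpch_q12")):
        print(urllib.request.urlopen(u).read())
```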
+
+### Step 3: Stop the estimator
+The script to stop the estimator is `stop-estimator.sh`.
+
+    $ cd $ResourceEstimatorServiceHome
+    $ bin/stop-estimator.sh
+
+Example
+-------
+
+Here we present an example of using the Resource Estimator Service.
+
+First, we run a tpch_q12 job nine times, and collect the job's resource skyline in each run (note that in this example, we only collect the number of allocated containers).
+
+Then, we run the log parser in the Resource Estimator Service to extract the ResourceSkylines from the logs and store them in the SkylineStore. The job's ResourceSkylines are plotted below for demonstration.
+
+![Tpch job history runs](images/tpch_history.png)
+
+Finally, we run the estimator in the Resource Estimator Service to predict the resource requirements for the new run, wrapped in an RLESparseResourceAllocation (<https://github.com/apache/hadoop/blob/b6e7d1369690eaf50ce9ea7968f91a72ecb74de0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/RLESparseResourceAllocation.java>). The predicted resource requirements are plotted below for demonstration.
+
+![Tpch job history prediction](images/tpch_predict.png)
+
+Advanced Configuration
+----------------------
+
+This section will guide you through the configuration of the Resource Estimator Service. The configuration file is located at `$ResourceEstimatorServiceHome/conf/resourceestimator-config.xml`.
+
+* `resourceestimator.solver.lp.alpha`
+
+  The resource estimator has an integrated Linear Programming solver to make the prediction (refer to <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/10/osdi16-final107.pdf> for more details), and this parameter tunes the tradeoff between resource over-allocation and under-allocation in the Linear Programming model. The parameter varies from 0 to 1, and a larger alpha value means the model tries harder to minimize over-allocation. Default value is 0.1.
+
+* `resourceestimator.solver.lp.beta`
+
+  This parameter controls the generalization of the Linear Programming model. It varies from 0 to 1. Default value is 0.1.
+
+* `resourceestimator.solver.lp.minJobRuns`
+
+  The minimum number of job runs required in order to make the prediction. 
Default value is 2.
+
+* `resourceestimator.timeInterval`
+
+  The time length which is used to discretize job execution into intervals. 
Note that the estimator makes resource allocation prediction for each interval. 
A smaller time interval has more fine-grained granularity for prediction, but 
it also takes longer time and more space for prediction. Default value is 5 
(seconds).
+
+* `resourceestimator.skylinestore.provider`
+
+  The class name of the SkylineStore provider. Default value is `org.apache.hadoop.resourceestimator.skylinestore.impl.InMemoryStore`, which is an in-memory implementation of the SkylineStore. If users want to use their own SkylineStore implementation, they need to change this value accordingly.
+
+* `resourceestimator.translator.provider`
+
+  The class name of the translator provider. Default value is `org.apache.hadoop.resourceestimator.translator.impl.BaseLogParser`, which extracts ResourceSkylines from log streams. If users want to use their own translator implementation, they need to change this value accordingly.
+
+* `resourceestimator.translator.line-parser`
+
+  The class name of the translator single-line parser, which parses a single 
line in the log. Default value is 
`org.apache.hadoop.resourceestimator.translator.impl.NativeSingleLineParser`, 
which can parse one line in the sample log. Note that if users want to parse 
Hadoop Resource Manager 
(<https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html>)
 logs, they need to set the value to be 
`org.apache.hadoop.resourceestimator.translator.impl.RmSingleLineParser`. If 
they want to implement single-line parser to parse their customized log file, 
they need to change this value accordingly.
+
+* `resourceestimator.solver.provider`
+
+  The class name of the solver provider. Default value is 
`org.apache.hadoop.resourceestimator.solver.impl.LpSolver`, which incorporates 
a Linear Programming model to make the prediction. If users want to implement 
their own models, they need to change this value accordingly.
+
+* `resourceestimator.service-port`
+
+  The port which ResourceEstimatorService listens to. The default value is 
9998.
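+For example, overriding a couple of the parameters above in `resourceestimator-config.xml` would look like the following sketch, which uses the standard Hadoop configuration file format (the chosen values are illustrative only):

```xml
<configuration>
  <!-- Penalize over-allocation more heavily than the default of 0.1 -->
  <property>
    <name>resourceestimator.solver.lp.alpha</name>
    <value>0.2</value>
  </property>
  <!-- Parse YARN ResourceManager logs instead of the native sample format -->
  <property>
    <name>resourceestimator.translator.line-parser</name>
    <value>org.apache.hadoop.resourceestimator.translator.impl.RmSingleLineParser</value>
  </property>
</configuration>
```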
+
+Future work
+-----------
+
+1. For the SkylineStore, we plan to provide a persistent store implementation. We are considering HBase to future-proof our scale requirements.
+
+2. For the Translator module, we want to support Timeline Service v2 as the primary source, as we want to rely on a stable API, and logs are flaky at best.
+
+3. Job resource requirements could vary across runs due to skewness, contention, input data or code changes, etc. We want to design a Reprovisioner module, which dynamically monitors job progress at runtime, identifies the performance bottlenecks if progress is slower than expected, and dynamically adjusts the job's resource allocations accordingly using ReservationUpdateRequest.
+
+4. When the Estimator predicts a job's resource requirements, we want to provide a confidence level associated with the prediction, according to the estimation error (a combination of over-allocation and under-allocation), etc.
+
+5. For the Estimator module, we can integrate machine learning techniques such as reinforcement learning to make better predictions. We can also integrate with domain-specific solvers such as PerfOrator to improve prediction quality.
+
+6. For the Estimator module, we want to design an incremental solver, which can incrementally update a job's resource requirements based only on the new logs.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/site/resources/css/site.css
----------------------------------------------------------------------
diff --git 
a/hadoop-tools/hadoop-resourceestimator/src/site/resources/css/site.css 
b/hadoop-tools/hadoop-resourceestimator/src/site/resources/css/site.css
new file mode 100644
index 0000000..7315db3
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/site/resources/css/site.css
@@ -0,0 +1,29 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements.  See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+#banner {
+  height: 93px;
+  background: none;
+}
+
+#bannerLeft img {
+  margin-left: 30px;
+  margin-top: 10px;
+}
+
+#bannerRight img {
+  margin: 17px;
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/api/TestResourceSkyline.java
----------------------------------------------------------------------
diff --git 
a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/api/TestResourceSkyline.java
 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/api/TestResourceSkyline.java
new file mode 100644
index 0000000..65e1c34
--- /dev/null
+++ 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/api/TestResourceSkyline.java
@@ -0,0 +1,128 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.common.api;
+
+import java.util.TreeMap;
+
+import org.apache.hadoop.yarn.api.records.Resource;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test {@link ResourceSkyline} class.
+ */
+public class TestResourceSkyline {
+  /**
+   * Testing variables.
+   */
+  private ResourceSkyline resourceSkyline;
+
+  private Resource resource1;
+  private Resource resource2;
+  private TreeMap<Long, Resource> resourceOverTime;
+  private RLESparseResourceAllocation skylineList;
+
+  @Before public final void setup() {
+    resourceOverTime = new TreeMap<>();
+    skylineList = new RLESparseResourceAllocation(resourceOverTime,
+        new DefaultResourceCalculator());
+    resource1 = Resource.newInstance(1024 * 100, 100);
+    resource2 = Resource.newInstance(1024 * 200, 200);
+  }
+
+  @Test public final void testGetJobId() {
+    Assert.assertNull(resourceSkyline);
+    ReservationInterval riAdd = new ReservationInterval(0, 10);
+    skylineList.addInterval(riAdd, resource1);
+    riAdd = new ReservationInterval(10, 20);
+    skylineList.addInterval(riAdd, resource1);
+    resourceSkyline =
+        new ResourceSkyline("1", 1024.5, 0, 20, resource1, skylineList);
+    Assert.assertEquals("1", resourceSkyline.getJobId());
+  }
+
+  @Test public final void testGetJobSubmissionTime() {
+    Assert.assertNull(resourceSkyline);
+    ReservationInterval riAdd = new ReservationInterval(0, 10);
+    skylineList.addInterval(riAdd, resource1);
+    riAdd = new ReservationInterval(10, 20);
+    skylineList.addInterval(riAdd, resource1);
+    resourceSkyline =
+        new ResourceSkyline("1", 1024.5, 0, 20, resource1, skylineList);
+    Assert.assertEquals(0, resourceSkyline.getJobSubmissionTime());
+  }
+
+  @Test public final void testGetJobFinishTime() {
+    Assert.assertNull(resourceSkyline);
+    ReservationInterval riAdd = new ReservationInterval(0, 10);
+    skylineList.addInterval(riAdd, resource1);
+    riAdd = new ReservationInterval(10, 20);
+    skylineList.addInterval(riAdd, resource1);
+    resourceSkyline =
+        new ResourceSkyline("1", 1024.5, 0, 20, resource1, skylineList);
+    Assert.assertEquals(20, resourceSkyline.getJobFinishTime());
+  }
+
+  @Test public final void testGetKthResource() {
+    Assert.assertNull(resourceSkyline);
+    ReservationInterval riAdd = new ReservationInterval(10, 20);
+    skylineList.addInterval(riAdd, resource1);
+    riAdd = new ReservationInterval(20, 30);
+    skylineList.addInterval(riAdd, resource2);
+    resourceSkyline =
+        new ResourceSkyline("1", 1024.5, 0, 20, resource1, skylineList);
+    final RLESparseResourceAllocation skylineList2 =
+        resourceSkyline.getSkylineList();
+    for (int i = 10; i < 20; i++) {
+      Assert.assertEquals(resource1.getMemorySize(),
+          skylineList2.getCapacityAtTime(i).getMemorySize());
+      Assert.assertEquals(resource1.getVirtualCores(),
+          skylineList2.getCapacityAtTime(i).getVirtualCores());
+    }
+    for (int i = 20; i < 30; i++) {
+      Assert.assertEquals(resource2.getMemorySize(),
+          skylineList2.getCapacityAtTime(i).getMemorySize());
+      Assert.assertEquals(resource2.getVirtualCores(),
+          skylineList2.getCapacityAtTime(i).getVirtualCores());
+    }
+    // test if resourceSkyline automatically extends the skyline with
+    // zero-resource at both ends
+    Assert.assertEquals(0, skylineList2.getCapacityAtTime(9).getMemorySize());
+    Assert.assertEquals(0, 
skylineList2.getCapacityAtTime(9).getVirtualCores());
+    Assert.assertEquals(0, skylineList2.getCapacityAtTime(30).getMemorySize());
+    Assert
+        .assertEquals(0, skylineList2.getCapacityAtTime(30).getVirtualCores());
+  }
+
+  @After public final void cleanUp() {
+    resourceSkyline = null;
+    resource1 = null;
+    resource2 = null;
+    resourceOverTime.clear();
+    resourceOverTime = null;
+    skylineList = null;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestHistorySkylineSerDe.java
----------------------------------------------------------------------
diff --git 
a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestHistorySkylineSerDe.java
 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestHistorySkylineSerDe.java
new file mode 100644
index 0000000..62743aa
--- /dev/null
+++ 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestHistorySkylineSerDe.java
@@ -0,0 +1,134 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.common.serialization;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.resourceestimator.common.api.RecurrenceId;
+import org.apache.hadoop.resourceestimator.common.api.ResourceSkyline;
+import org.apache.hadoop.yarn.api.records.Resource;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import com.google.gson.Gson;
+import com.google.gson.GsonBuilder;
+import com.google.gson.reflect.TypeToken;
+
+/**
+ * Test HistorySkylineSerDe.
+ */
+public class TestHistorySkylineSerDe {
+  /**
+   * Testing variables.
+   */
+  private Gson gson;
+
+  private ResourceSkyline resourceSkyline;
+  private Resource resource;
+  private Resource resource2;
+  private TreeMap<Long, Resource> resourceOverTime;
+  private RLESparseResourceAllocation skylineList;
+
+  @Before public final void setup() {
+    resourceOverTime = new TreeMap<>();
+    skylineList = new RLESparseResourceAllocation(resourceOverTime,
+        new DefaultResourceCalculator());
+    resource = Resource.newInstance(1024 * 100, 100);
+    resource2 = Resource.newInstance(1024 * 200, 200);
+    gson = new GsonBuilder()
+        .registerTypeAdapter(Resource.class, new ResourceSerDe())
+        .registerTypeAdapter(RLESparseResourceAllocation.class,
+            new RLESparseResourceAllocationSerDe())
+        .enableComplexMapKeySerialization().create();
+  }
+
+  @Test public final void testSerialization() {
+    ReservationInterval riAdd = new ReservationInterval(0, 10);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(10, 20);
+    skylineList.addInterval(riAdd, resource2);
+    resourceSkyline =
+        new ResourceSkyline("1", 1024.5, 0, 20, resource, skylineList);
+    RecurrenceId recurrenceId = new RecurrenceId("FraudDetection", "1");
+    List<ResourceSkyline> listSkyline = new ArrayList<>();
+    listSkyline.add(resourceSkyline);
+    Map<RecurrenceId, List<ResourceSkyline>> historySkyline =
+        new HashMap<RecurrenceId, List<ResourceSkyline>>();
+    historySkyline.put(recurrenceId, listSkyline);
+
+    final String json = gson.toJson(historySkyline,
+        new TypeToken<Map<RecurrenceId, List<ResourceSkyline>>>() {
+        }.getType());
+    final Map<RecurrenceId, List<ResourceSkyline>> historySkylineDe =
+        gson.fromJson(json,
+            new TypeToken<Map<RecurrenceId, List<ResourceSkyline>>>() {
+            }.getType());
+    // check if the recurrenceId is correct
+    List<ResourceSkyline> resourceSkylineList =
+        historySkylineDe.get(recurrenceId);
+    Assert.assertNotNull(resourceSkylineList);
+    Assert.assertEquals(1, resourceSkylineList.size());
+
+    // check if the resourceSkyline is correct
+    ResourceSkyline resourceSkylineDe = resourceSkylineList.get(0);
+    Assert
+        .assertEquals(resourceSkylineDe.getJobId(), 
resourceSkyline.getJobId());
+    Assert.assertEquals(resourceSkylineDe.getJobInputDataSize(),
+        resourceSkyline.getJobInputDataSize(), 0);
+    Assert.assertEquals(resourceSkylineDe.getJobSubmissionTime(),
+        resourceSkyline.getJobSubmissionTime());
+    Assert.assertEquals(resourceSkylineDe.getJobFinishTime(),
+        resourceSkyline.getJobFinishTime());
+    Assert.assertEquals(resourceSkylineDe.getContainerSpec().getMemorySize(),
+        resourceSkyline.getContainerSpec().getMemorySize());
+    Assert.assertEquals(resourceSkylineDe.getContainerSpec().getVirtualCores(),
+        resourceSkyline.getContainerSpec().getVirtualCores());
+    final RLESparseResourceAllocation skylineList2 =
+        resourceSkyline.getSkylineList();
+    final RLESparseResourceAllocation skylineListDe =
+        resourceSkylineDe.getSkylineList();
+    for (int i = 0; i < 20; i++) {
+      Assert.assertEquals(skylineList2.getCapacityAtTime(i).getMemorySize(),
+          skylineListDe.getCapacityAtTime(i).getMemorySize());
+      Assert.assertEquals(skylineList2.getCapacityAtTime(i).getVirtualCores(),
+          skylineListDe.getCapacityAtTime(i).getVirtualCores());
+    }
+  }
+
+  @After public final void cleanUp() {
+    gson = null;
+    resourceSkyline = null;
+    resourceOverTime.clear();
+    resourceOverTime = null;
+    resource = null;
+    resource2 = null;
+    skylineList = null;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSerDe.java
----------------------------------------------------------------------
diff --git 
a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSerDe.java
 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSerDe.java
new file mode 100644
index 0000000..8749e59
--- /dev/null
+++ 
b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSerDe.java
@@ -0,0 +1,64 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.common.serialization;
+
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import com.google.gson.Gson;
+import com.google.gson.GsonBuilder;
+import com.google.gson.reflect.TypeToken;
+
+/**
+ * Test ResourceSerDe.
+ */
+public class TestResourceSerDe {
+  /**
+   * Testing variables.
+   */
+  private Gson gson;
+
+  private Resource resource;
+
+  @Before public final void setup() {
+    resource = Resource.newInstance(1024 * 100, 100);
+    gson = new GsonBuilder()
+        .registerTypeAdapter(Resource.class, new ResourceSerDe()).create();
+  }
+
+  @Test public final void testSerialization() {
+    final String json = gson.toJson(resource, new TypeToken<Resource>() {
+    }.getType());
+    final Resource resourceDe = gson.fromJson(json, new TypeToken<Resource>() {
+    }.getType());
+    Assert.assertEquals(resource.getMemorySize(), resourceDe.getMemorySize());
+    Assert
+        .assertEquals(resource.getVirtualCores(), resourceDe.getVirtualCores());
+  }
+
+  @After public final void cleanUp() {
+    resource = null;
+    gson = null;
+  }
+}

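The file above round-trips a YARN `Resource` through a custom Gson adapter and checks that memory and vcores survive. The same round-trip pattern can be sketched without the Hadoop or Gson dependencies; `SimpleResource`, `toJson` and `fromJson` below are illustrative stand-ins, not the actual `Resource`/`ResourceSerDe` API.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal stand-in for the round trip TestResourceSerDe exercises: serialize
// a (memory, vcores) pair to JSON text, parse it back, and compare fields.
public class ResourceRoundTripSketch {

  /** Illustrative stand-in for org.apache.hadoop.yarn.api.records.Resource. */
  static final class SimpleResource {
    final long memorySize;
    final int virtualCores;

    SimpleResource(long memorySize, int virtualCores) {
      this.memorySize = memorySize;
      this.virtualCores = virtualCores;
    }
  }

  /** Serialize to a JSON shape similar to what a type adapter might emit. */
  static String toJson(SimpleResource r) {
    return "{\"memory\":" + r.memorySize + ",\"vcores\":" + r.virtualCores + "}";
  }

  /** Parse exactly the JSON produced by toJson (not a general JSON parser). */
  static SimpleResource fromJson(String json) {
    Matcher m = Pattern.compile("\\{\"memory\":(\\d+),\"vcores\":(\\d+)\\}")
        .matcher(json);
    if (!m.matches()) {
      throw new IllegalArgumentException("unexpected JSON: " + json);
    }
    return new SimpleResource(Long.parseLong(m.group(1)),
        Integer.parseInt(m.group(2)));
  }

  public static void main(String[] args) {
    SimpleResource resource = new SimpleResource(1024 * 100, 100);
    SimpleResource copy = fromJson(toJson(resource));
    // Mirrors the assertions in TestResourceSerDe#testSerialization.
    if (copy.memorySize != resource.memorySize
        || copy.virtualCores != resource.virtualCores) {
      throw new AssertionError("round trip lost data");
    }
    System.out.println("round trip ok: " + toJson(copy));
  }
}
```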
http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSkylineSerDe.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSkylineSerDe.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSkylineSerDe.java
new file mode 100644
index 0000000..859ea1e
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/TestResourceSkylineSerDe.java
@@ -0,0 +1,112 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.common.serialization;
+
+import java.util.TreeMap;
+
+import org.apache.hadoop.resourceestimator.common.api.ResourceSkyline;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import com.google.gson.Gson;
+import com.google.gson.GsonBuilder;
+import com.google.gson.reflect.TypeToken;
+
+/**
+ * Test ResourceSkylineSerDe.
+ */
+public class TestResourceSkylineSerDe {
+  /**
+   * Testing variables.
+   */
+  private Gson gson;
+
+  private ResourceSkyline resourceSkyline;
+  private Resource resource;
+  private Resource resource2;
+  private TreeMap<Long, Resource> resourceOverTime;
+  private RLESparseResourceAllocation skylineList;
+
+  @Before public final void setup() {
+    resourceOverTime = new TreeMap<>();
+    skylineList = new RLESparseResourceAllocation(resourceOverTime,
+        new DefaultResourceCalculator());
+    resource = Resource.newInstance(1024 * 100, 100);
+    resource2 = Resource.newInstance(1024 * 200, 200);
+    gson = new GsonBuilder()
+        .registerTypeAdapter(Resource.class, new ResourceSerDe())
+        .registerTypeAdapter(RLESparseResourceAllocation.class,
+            new RLESparseResourceAllocationSerDe()).create();
+  }
+
+  @Test public final void testSerialization() {
+    ReservationInterval riAdd = new ReservationInterval(0, 10);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(10, 20);
+    skylineList.addInterval(riAdd, resource2);
+    resourceSkyline =
+        new ResourceSkyline("1", 1024.5, 0, 20, resource, skylineList);
+    final String json =
+        gson.toJson(resourceSkyline, new TypeToken<ResourceSkyline>() {
+        }.getType());
+    final ResourceSkyline resourceSkylineDe =
+        gson.fromJson(json, new TypeToken<ResourceSkyline>() {
+        }.getType());
+    Assert
+        .assertEquals(resourceSkylineDe.getJobId(), resourceSkyline.getJobId());
+    Assert.assertEquals(resourceSkylineDe.getJobInputDataSize(),
+        resourceSkyline.getJobInputDataSize(), 0);
+    Assert.assertEquals(resourceSkylineDe.getJobSubmissionTime(),
+        resourceSkyline.getJobSubmissionTime());
+    Assert.assertEquals(resourceSkylineDe.getJobFinishTime(),
+        resourceSkyline.getJobFinishTime());
+    Assert.assertEquals(resourceSkylineDe.getContainerSpec().getMemorySize(),
+        resourceSkyline.getContainerSpec().getMemorySize());
+    Assert.assertEquals(resourceSkylineDe.getContainerSpec().getVirtualCores(),
+        resourceSkyline.getContainerSpec().getVirtualCores());
+    final RLESparseResourceAllocation skylineList2 =
+        resourceSkyline.getSkylineList();
+    final RLESparseResourceAllocation skylineListDe =
+        resourceSkylineDe.getSkylineList();
+    for (int i = 0; i < 20; i++) {
+      Assert.assertEquals(skylineList2.getCapacityAtTime(i).getMemorySize(),
+          skylineListDe.getCapacityAtTime(i).getMemorySize());
+      Assert.assertEquals(skylineList2.getCapacityAtTime(i).getVirtualCores(),
+          skylineListDe.getCapacityAtTime(i).getVirtualCores());
+    }
+  }
+
+  @After public final void cleanUp() {
+    gson = null;
+    resourceSkyline = null;
+    resourceOverTime.clear();
+    resourceOverTime = null;
+    resource = null;
+    resource2 = null;
+    skylineList = null;
+  }
+}

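The skyline checked above is a run-length-encoded allocation: capacities are recorded only at the times where they change, and `getCapacityAtTime` answers with the value at the most recent change point. A minimal sketch of that encoding over a `TreeMap` follows; it is a simplification under assumed semantics, not YARN's actual `RLESparseResourceAllocation` (which also merges overlapping intervals and tracks full `Resource` objects).

```java
import java.util.Map;
import java.util.TreeMap;

// Run-length-encoded "skyline" sketch: one map entry per change point.
public class RleSkylineSketch {
  // time -> memory (MB) in effect from that time onward
  private final TreeMap<Long, Long> changes = new TreeMap<>();

  /** Set the capacity level for [start, end); analogous to two back-to-back
   *  addInterval calls in the test above. */
  void setLevel(long start, long end, long memory) {
    changes.put(start, memory);
    changes.putIfAbsent(end, 0L);  // capacity drops back to 0 after the run
  }

  /** Capacity at time t = value at the latest change point <= t. */
  long capacityAt(long t) {
    Map.Entry<Long, Long> e = changes.floorEntry(t);
    return e == null ? 0L : e.getValue();
  }

  public static void main(String[] args) {
    RleSkylineSketch s = new RleSkylineSketch();
    s.setLevel(0, 10, 1024L * 100);   // like addInterval([0,10), resource)
    s.setLevel(10, 20, 1024L * 200);  // like addInterval([10,20), resource2)
    System.out.println(s.capacityAt(5));   // level of the first run
    System.out.println(s.capacityAt(15));  // level of the second run
    System.out.println(s.capacityAt(25));  // after the last run: 0
  }
}
```

Only two entries per run are stored, which is why the serialized form of a long, flat skyline stays small.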
http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/package-info.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/package-info.java
new file mode 100644
index 0000000..06957b3
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/common/serialization/package-info.java
@@ -0,0 +1,24 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+/**
+ * SkylineStore serialization module.
+ */
+package org.apache.hadoop.resourceestimator.common.serialization;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/GuiceServletConfig.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/GuiceServletConfig.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/GuiceServletConfig.java
new file mode 100644
index 0000000..b72d317
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/GuiceServletConfig.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.resourceestimator.service;
+
+import com.google.inject.Injector;
+import com.google.inject.servlet.GuiceServletContextListener;
+
+/**
+ * GuiceServletConfig is a wrapper class to have a static Injector instance
+ * instead of having the instance inside test classes. This allows us to use
+ * Jersey test framework after 1.13.
+ * Please check test cases to know how to use this class:
+ * e.g. TestRMWithCSRFFilter.java
+ */
+public class GuiceServletConfig extends GuiceServletContextListener {
+
+  private static Injector internalInjector = null;
+
+  @Override protected Injector getInjector() {
+    return internalInjector;
+  }
+
+  public static Injector setInjector(Injector in) {
+    internalInjector = in;
+    return internalInjector;
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
new file mode 100644
index 0000000..d4dde7e
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java
@@ -0,0 +1,282 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.service;
+
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import javax.ws.rs.core.MediaType;
+
+import org.apache.hadoop.resourceestimator.common.api.RecurrenceId;
+import org.apache.hadoop.resourceestimator.common.api.ResourceSkyline;
+import org.apache.hadoop.resourceestimator.common.serialization.RLESparseResourceAllocationSerDe;
+import org.apache.hadoop.resourceestimator.common.serialization.ResourceSerDe;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.gson.Gson;
+import com.google.gson.GsonBuilder;
+import com.google.gson.reflect.TypeToken;
+import com.google.inject.Guice;
+import com.google.inject.servlet.ServletModule;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;
+import com.sun.jersey.test.framework.JerseyTest;
+import com.sun.jersey.test.framework.WebAppDescriptor;
+
+/**
+ * Test ResourceEstimatorService.
+ */
+public class TestResourceEstimatorService extends JerseyTest {
+  private static final Logger LOGGER =
+      LoggerFactory.getLogger(TestResourceEstimatorService.class);
+  private final String parseLogCommand = "resourceestimator/translator/"
+      + "src/test/resources/resourceEstimatorService.txt";
+  private final String getHistorySkylineCommand =
+      "resourceestimator/skylinestore/history/tpch_q12/*";
+  private final String getEstimatedSkylineCommand =
+      "resourceestimator/skylinestore/estimation/tpch_q12";
+  private final String makeEstimationCommand =
+      "resourceestimator/estimator/tpch_q12";
+  private final String deleteHistoryCommand =
+      "resourceestimator/skylinestore/history/tpch_q12/tpch_q12_1";
+  private static boolean setUpDone = false;
+  private Resource containerSpec;
+  private Gson gson;
+  private long containerMemAlloc;
+  private int containerCPUAlloc;
+
+  private static class WebServletModule extends ServletModule {
+    @Override protected void configureServlets() {
+      bind(ResourceEstimatorService.class);
+      serve("/*").with(GuiceContainer.class);
+    }
+  }
+
+  static {
+    GuiceServletConfig
+        .setInjector(Guice.createInjector(new WebServletModule()));
+  }
+
+  public TestResourceEstimatorService() {
+    super(new WebAppDescriptor.Builder(
+        "org.apache.hadoop.resourceestimator.service")
+        .contextListenerClass(GuiceServletConfig.class)
+        .filterClass(com.google.inject.servlet.GuiceFilter.class).build());
+  }
+
+  @Before @Override public void setUp() throws Exception {
+    super.setUp();
+    GuiceServletConfig
+        .setInjector(Guice.createInjector(new WebServletModule()));
+    containerMemAlloc = 1024;
+    containerCPUAlloc = 1;
+    containerSpec = Resource.newInstance(containerMemAlloc, containerCPUAlloc);
+    gson = new GsonBuilder()
+        .registerTypeAdapter(Resource.class, new ResourceSerDe())
+        .registerTypeAdapter(RLESparseResourceAllocation.class,
+            new RLESparseResourceAllocationSerDe())
+        .enableComplexMapKeySerialization().create();
+  }
+
+  private void compareResourceSkyline(final ResourceSkyline skyline1,
+      final ResourceSkyline skyline2) {
+    Assert.assertEquals(skyline1.getJobId(), skyline2.getJobId());
+    Assert.assertEquals(skyline1.getJobInputDataSize(),
+        skyline2.getJobInputDataSize(), 0);
+    Assert.assertEquals(skyline1.getJobSubmissionTime(),
+        skyline2.getJobSubmissionTime());
+    Assert
+        .assertEquals(skyline1.getJobFinishTime(), skyline2.getJobFinishTime());
+    Assert.assertEquals(skyline1.getContainerSpec().getMemorySize(),
+        skyline2.getContainerSpec().getMemorySize());
+    Assert.assertEquals(skyline1.getContainerSpec().getVirtualCores(),
+        skyline2.getContainerSpec().getVirtualCores());
+    final RLESparseResourceAllocation skylineList1 = skyline1.getSkylineList();
+    final RLESparseResourceAllocation skylineList2 = skyline2.getSkylineList();
+    for (int i = (int) skylineList1.getEarliestStartTime();
+         i < skylineList1.getLatestNonNullTime(); i++) {
+      Assert.assertEquals(skylineList1.getCapacityAtTime(i).getMemorySize(),
+          skylineList2.getCapacityAtTime(i).getMemorySize());
+      Assert.assertEquals(skylineList1.getCapacityAtTime(i).getVirtualCores(),
+          skylineList2.getCapacityAtTime(i).getVirtualCores());
+    }
+  }
+
+  private ResourceSkyline getSkyline1() {
+    final TreeMap<Long, Resource> resourceOverTime = new TreeMap<>();
+    ReservationInterval riAdd;
+    final RLESparseResourceAllocation skylineList =
+        new RLESparseResourceAllocation(resourceOverTime,
+            new DefaultResourceCalculator());
+    riAdd = new ReservationInterval(0, 10);
+    Resource resource =
+        Resource.newInstance(containerMemAlloc, containerCPUAlloc);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(10, 15);
+    resource = Resource
+        .newInstance(containerMemAlloc * 1074, containerCPUAlloc * 1074);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(15, 20);
+    resource = Resource
+        .newInstance(containerMemAlloc * 2538, containerCPUAlloc * 2538);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(20, 25);
+    resource = Resource
+        .newInstance(containerMemAlloc * 2468, containerCPUAlloc * 2468);
+    skylineList.addInterval(riAdd, resource);
+    final ResourceSkyline resourceSkyline1 =
+        new ResourceSkyline("tpch_q12_0", 0, 0, 25, containerSpec, skylineList);
+
+    return resourceSkyline1;
+  }
+
+  private ResourceSkyline getSkyline2() {
+    final TreeMap<Long, Resource> resourceOverTime = new TreeMap<>();
+    ReservationInterval riAdd;
+    final RLESparseResourceAllocation skylineList =
+        new RLESparseResourceAllocation(resourceOverTime,
+            new DefaultResourceCalculator());
+    riAdd = new ReservationInterval(0, 10);
+    Resource resource =
+        Resource.newInstance(containerMemAlloc, containerCPUAlloc);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(10, 15);
+    resource =
+        Resource.newInstance(containerMemAlloc * 794, containerCPUAlloc * 794);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(15, 20);
+    resource = Resource
+        .newInstance(containerMemAlloc * 2517, containerCPUAlloc * 2517);
+    skylineList.addInterval(riAdd, resource);
+    riAdd = new ReservationInterval(20, 25);
+    resource = Resource
+        .newInstance(containerMemAlloc * 2484, containerCPUAlloc * 2484);
+    skylineList.addInterval(riAdd, resource);
+    final ResourceSkyline resourceSkyline2 =
+        new ResourceSkyline("tpch_q12_1", 0, 0, 25, containerSpec, skylineList);
+
+    return resourceSkyline2;
+  }
+
+  private void checkResult(final String jobId,
+      final Map<RecurrenceId, List<ResourceSkyline>> jobHistory) {
+    switch (jobId) {
+    case "tpch_q12_0": {
+      final RecurrenceId recurrenceId =
+          new RecurrenceId("tpch_q12", "tpch_q12_0");
+      Assert.assertEquals(1, jobHistory.get(recurrenceId).size());
+      ResourceSkyline skylineReceive = jobHistory.get(recurrenceId).get(0);
+      compareResourceSkyline(skylineReceive, getSkyline1());
+      break;
+    }
+    case "tpch_q12_1": {
+      final RecurrenceId recurrenceId =
+          new RecurrenceId("tpch_q12", "tpch_q12_1");
+      Assert.assertEquals(1, jobHistory.get(recurrenceId).size());
+      ResourceSkyline skylineReceive = jobHistory.get(recurrenceId).get(0);
+      compareResourceSkyline(skylineReceive, getSkyline2());
+      break;
+    }
+    default:
+      break;
+    }
+  }
+
+  private void compareRLESparseResourceAllocation(
+      final RLESparseResourceAllocation rle1,
+      final RLESparseResourceAllocation rle2) {
+    for (int i = (int) rle1.getEarliestStartTime();
+         i < rle1.getLatestNonNullTime(); i++) {
+      Assert.assertEquals(rle1.getCapacityAtTime(i), rle2.getCapacityAtTime(i));
+    }
+  }
+
+  @Test public void testGetPrediction() {
+    // first, parse the log
+    final String logFile = "resourceEstimatorService.txt";
+    WebResource webResource = resource();
+    webResource.path(parseLogCommand).type(MediaType.APPLICATION_XML_TYPE)
+        .post(logFile);
+    webResource = resource().path(getHistorySkylineCommand);
+    String response = webResource.get(String.class);
+    Map<RecurrenceId, List<ResourceSkyline>> jobHistory =
+        gson.fromJson(response,
+            new TypeToken<Map<RecurrenceId, List<ResourceSkyline>>>() {
+            }.getType());
+    checkResult("tpch_q12_0", jobHistory);
+    checkResult("tpch_q12_1", jobHistory);
+    // then, try to get estimated resource allocation from skyline store
+    webResource = resource().path(getEstimatedSkylineCommand);
+    response = webResource.get(String.class);
+    Assert.assertEquals("null", response);
+    // then, we call estimator module to make the prediction
+    webResource = resource().path(makeEstimationCommand);
+    response = webResource.get(String.class);
+    RLESparseResourceAllocation skylineList =
+        gson.fromJson(response, new TypeToken<RLESparseResourceAllocation>() {
+        }.getType());
+    Assert.assertEquals(1,
+        skylineList.getCapacityAtTime(0).getMemorySize() / containerMemAlloc);
+    Assert.assertEquals(1058,
+        skylineList.getCapacityAtTime(10).getMemorySize() / containerMemAlloc);
+    Assert.assertEquals(2538,
+        skylineList.getCapacityAtTime(15).getMemorySize() / containerMemAlloc);
+    Assert.assertEquals(2484,
+        skylineList.getCapacityAtTime(20).getMemorySize() / containerMemAlloc);
+    // then, we get estimated resource allocation for tpch_q12
+    webResource = resource().path(getEstimatedSkylineCommand);
+    response = webResource.get(String.class);
+    final RLESparseResourceAllocation skylineList2 =
+        gson.fromJson(response, new TypeToken<RLESparseResourceAllocation>() {
+        }.getType());
+    compareRLESparseResourceAllocation(skylineList, skylineList2);
+    // then, we call estimator module again to directly get estimated resource
+    // allocation from skyline store
+    webResource = resource().path(makeEstimationCommand);
+    response = webResource.get(String.class);
+    final RLESparseResourceAllocation skylineList3 =
+        gson.fromJson(response, new TypeToken<RLESparseResourceAllocation>() {
+        }.getType());
+    compareRLESparseResourceAllocation(skylineList, skylineList3);
+    // finally, test delete
+    webResource = resource().path(deleteHistoryCommand);
+    webResource.delete();
+    webResource = resource().path(getHistorySkylineCommand);
+    response = webResource.get(String.class);
+    jobHistory = gson.fromJson(response,
+        new TypeToken<Map<RecurrenceId, List<ResourceSkyline>>>() {
+        }.getType());
+    // jobHistory should only have info for tpch_q12_0
+    Assert.assertEquals(1, jobHistory.size());
+    final String pipelineId =
+        ((RecurrenceId) jobHistory.keySet().toArray()[0]).getRunId();
+    Assert.assertEquals("tpch_q12_0", pipelineId);
+  }
+}

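The test above queries the skyline store through paths like `skylinestore/history/tpch_q12/*`, where the trailing `*` asks for every run of a pipeline. That wildcard lookup over `(pipelineId, runId)` keys can be sketched in a few lines; the class and method names below are hypothetical, and the matching rules are inferred from the assertions in these tests (a `*` pipeline returns everything, regardless of runId), not from the store's actual implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the wildcard history lookup exercised above:
// a key is (pipelineId, runId); "*" matches everything in that position.
public class WildcardHistorySketch {
  record RecurrenceKey(String pipelineId, String runId) {}

  private final Map<RecurrenceKey, List<String>> history = new HashMap<>();

  void add(String pipelineId, String runId, String skyline) {
    history.computeIfAbsent(new RecurrenceKey(pipelineId, runId),
        k -> new ArrayList<>()).add(skyline);
  }

  /** history/&lt;pipelineId&gt;/&lt;runId&gt;, where either part may be "*". */
  Map<RecurrenceKey, List<String>> get(String pipelineId, String runId) {
    Map<RecurrenceKey, List<String>> out = new HashMap<>();
    for (Map.Entry<RecurrenceKey, List<String>> e : history.entrySet()) {
      boolean pipelineOk = pipelineId.equals("*")
          || e.getKey().pipelineId().equals(pipelineId);
      // With pipelineId == "*" the runId is ignored, matching the
      // {*, "some random runId"} case in TestSkylineStore.
      boolean runOk = pipelineId.equals("*") || runId.equals("*")
          || e.getKey().runId().equals(runId);
      if (pipelineOk && runOk) {
        out.put(e.getKey(), e.getValue());
      }
    }
    return out;
  }

  public static void main(String[] args) {
    WildcardHistorySketch store = new WildcardHistorySketch();
    store.add("tpch_q12", "tpch_q12_0", "skyline0");
    store.add("tpch_q12", "tpch_q12_1", "skyline1");
    System.out.println(store.get("tpch_q12", "*").size());          // 2 runs
    System.out.println(store.get("tpch_q12", "tpch_q12_1").size()); // 1 run
  }
}
```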
http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestInMemoryStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestInMemoryStore.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestInMemoryStore.java
new file mode 100644
index 0000000..4aefab4
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestInMemoryStore.java
@@ -0,0 +1,32 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.skylinestore.impl;
+
+import org.apache.hadoop.resourceestimator.skylinestore.api.SkylineStore;
+
+/**
+ * Test {@link InMemoryStore} class.
+ */
+public class TestInMemoryStore extends TestSkylineStore {
+  @Override public final SkylineStore createSkylineStore() {
+    return new InMemoryStore();
+  }
+}

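`TestInMemoryStore` above is a thin subclass: the abstract `TestSkylineStore` suite defines every test case against the `createSkylineStore()` factory hook, so each `SkylineStore` implementation gets the full suite by overriding one method. A self-contained sketch of that template-method test pattern, with illustrative names and a `Map` standing in for the store:

```java
import java.util.HashMap;
import java.util.Map;

// Template-method test pattern: the abstract suite owns the test logic,
// concrete subclasses supply the implementation under test.
abstract class AbstractStoreSuite {
  /** Factory hook, analogous to TestSkylineStore#createSkylineStore(). */
  abstract Map<String, String> createStore();

  void testPutGet() {
    Map<String, String> store = createStore();
    store.put("k", "v");
    if (!"v".equals(store.get("k"))) {
      throw new AssertionError("store must return what was put");
    }
  }
}

public class HashMapStoreSuite extends AbstractStoreSuite {
  @Override Map<String, String> createStore() {
    return new HashMap<>();  // stand-in for new InMemoryStore()
  }

  public static void main(String[] args) {
    new HashMapStoreSuite().testPutGet();
    System.out.println("suite passed");
  }
}
```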
http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestSkylineStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestSkylineStore.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestSkylineStore.java
new file mode 100644
index 0000000..e333e3f
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/skylinestore/impl/TestSkylineStore.java
@@ -0,0 +1,464 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.skylinestore.impl;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.resourceestimator.common.api.RecurrenceId;
+import org.apache.hadoop.resourceestimator.common.api.ResourceSkyline;
+import org.apache.hadoop.resourceestimator.skylinestore.api.SkylineStore;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.DuplicateRecurrenceIdException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.EmptyResourceSkylineException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.NullPipelineIdException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.NullRLESparseResourceAllocationException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.NullRecurrenceIdException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.NullResourceSkylineException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.RecurrenceIdNotFoundException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.SkylineStoreException;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test {@link SkylineStore} class.
+ */
+public abstract class TestSkylineStore {
+  /**
+   * Testing variables.
+   */
+  private SkylineStore skylineStore;
+
+  private TreeMap<Long, Resource> resourceOverTime;
+  private RLESparseResourceAllocation skylineList;
+  private ReservationInterval riAdd;
+  private Resource resource;
+
+  protected abstract SkylineStore createSkylineStore();
+
+  @Before public final void setup() {
+    skylineStore = createSkylineStore();
+    resourceOverTime = new TreeMap<>();
+    resource = Resource.newInstance(1024 * 100, 100);
+  }
+
+  private void compare(final ResourceSkyline skyline1,
+      final ResourceSkyline skyline2) {
+    Assert.assertEquals(skyline1.getJobId(), skyline2.getJobId());
+    Assert.assertEquals(skyline1.getJobInputDataSize(),
+        skyline2.getJobInputDataSize(), 0);
+    Assert.assertEquals(skyline1.getJobSubmissionTime(),
+        skyline2.getJobSubmissionTime());
+    Assert
+        .assertEquals(skyline1.getJobFinishTime(), skyline2.getJobFinishTime());
+    Assert.assertEquals(skyline1.getContainerSpec().getMemorySize(),
+        skyline2.getContainerSpec().getMemorySize());
+    Assert.assertEquals(skyline1.getContainerSpec().getVirtualCores(),
+        skyline2.getContainerSpec().getVirtualCores());
+    Assert.assertEquals(true,
+        skyline2.getSkylineList().equals(skyline1.getSkylineList()));
+  }
+
+  private void addToStore(final RecurrenceId recurrenceId,
+      final ResourceSkyline resourceSkyline) throws SkylineStoreException {
+    final List<ResourceSkyline> resourceSkylines = new ArrayList<>();
+    resourceSkylines.add(resourceSkyline);
+    skylineStore.addHistory(recurrenceId, resourceSkylines);
+    final List<ResourceSkyline> resourceSkylinesGet =
+        skylineStore.getHistory(recurrenceId).get(recurrenceId);
+    Assert.assertTrue(resourceSkylinesGet.contains(resourceSkyline));
+  }
+
+  private ResourceSkyline getSkyline(final int n) {
+    skylineList = new RLESparseResourceAllocation(resourceOverTime,
+        new DefaultResourceCalculator());
+    for (int i = 0; i < n; i++) {
+      riAdd = new ReservationInterval(i * 10, (i + 1) * 10);
+      skylineList.addInterval(riAdd, resource);
+    }
+    final ResourceSkyline resourceSkyline =
+        new ResourceSkyline(Integer.toString(n), 1024.5, 0, 20, resource,
+            skylineList);
+
+    return resourceSkyline;
+  }
+
+  @Test public final void testGetHistory() throws SkylineStoreException {
+    // addHistory first recurring pipeline
+    final RecurrenceId recurrenceId1 =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    addToStore(recurrenceId1, resourceSkyline1);
+    final ResourceSkyline resourceSkyline2 = getSkyline(2);
+    addToStore(recurrenceId1, resourceSkyline2);
+    final RecurrenceId recurrenceId2 =
+        new RecurrenceId("FraudDetection", "17/06/21 00:00:00");
+    final ResourceSkyline resourceSkyline3 = getSkyline(3);
+    addToStore(recurrenceId2, resourceSkyline3);
+    final ResourceSkyline resourceSkyline4 = getSkyline(4);
+    addToStore(recurrenceId2, resourceSkyline4);
+    // addHistory second recurring pipeline
+    final RecurrenceId recurrenceId3 =
+        new RecurrenceId("Random", "17/06/20 00:00:00");
+    addToStore(recurrenceId3, resourceSkyline1);
+    addToStore(recurrenceId3, resourceSkyline2);
+    // test getHistory {pipelineId, runId}
+    Map<RecurrenceId, List<ResourceSkyline>> jobHistory =
+        skylineStore.getHistory(recurrenceId1);
+    Assert.assertEquals(1, jobHistory.size());
+    for (final Map.Entry<RecurrenceId, List<ResourceSkyline>> entry : jobHistory
+        .entrySet()) {
+      Assert.assertEquals(recurrenceId1, entry.getKey());
+      final List<ResourceSkyline> getSkylines = entry.getValue();
+      Assert.assertEquals(2, getSkylines.size());
+      compare(resourceSkyline1, getSkylines.get(0));
+      compare(resourceSkyline2, getSkylines.get(1));
+    }
+    // test getHistory {pipelineId, *}
+    RecurrenceId recurrenceIdTest = new RecurrenceId("FraudDetection", "*");
+    jobHistory = skylineStore.getHistory(recurrenceIdTest);
+    Assert.assertEquals(2, jobHistory.size());
+    for (final Map.Entry<RecurrenceId, List<ResourceSkyline>> entry : jobHistory
+        .entrySet()) {
+      Assert.assertEquals(recurrenceId1.getPipelineId(),
+          entry.getKey().getPipelineId());
+      final List<ResourceSkyline> getSkylines = entry.getValue();
+      if (entry.getKey().getRunId().equals("17/06/20 00:00:00")) {
+        Assert.assertEquals(2, getSkylines.size());
+        compare(resourceSkyline1, getSkylines.get(0));
+        compare(resourceSkyline2, getSkylines.get(1));
+      } else {
+        Assert.assertEquals("17/06/21 00:00:00", entry.getKey().getRunId());
+        Assert.assertEquals(2, getSkylines.size());
+        compare(resourceSkyline3, getSkylines.get(0));
+        compare(resourceSkyline4, getSkylines.get(1));
+      }
+    }
+    // test getHistory {*, runId}
+    recurrenceIdTest = new RecurrenceId("*", "some random runId");
+    jobHistory = skylineStore.getHistory(recurrenceIdTest);
+    Assert.assertEquals(3, jobHistory.size());
+    for (final Map.Entry<RecurrenceId, List<ResourceSkyline>> entry : jobHistory
+        .entrySet()) {
+      if (entry.getKey().getPipelineId().equals("FraudDetection")) {
+        final List<ResourceSkyline> getSkylines = entry.getValue();
+        if (entry.getKey().getRunId().equals("17/06/20 00:00:00")) {
+          Assert.assertEquals(2, getSkylines.size());
+          compare(resourceSkyline1, getSkylines.get(0));
+          compare(resourceSkyline2, getSkylines.get(1));
+        } else {
+          Assert.assertEquals("17/06/21 00:00:00", entry.getKey().getRunId());
+          Assert.assertEquals(2, getSkylines.size());
+          compare(resourceSkyline3, getSkylines.get(0));
+          compare(resourceSkyline4, getSkylines.get(1));
+        }
+      } else {
+        Assert.assertEquals("Random", entry.getKey().getPipelineId());
+        Assert.assertEquals("17/06/20 00:00:00", entry.getKey().getRunId());
+        final List<ResourceSkyline> getSkylines = entry.getValue();
+        Assert.assertEquals(2, getSkylines.size());
+        compare(resourceSkyline1, getSkylines.get(0));
+        compare(resourceSkyline2, getSkylines.get(1));
+      }
+    }
+    // test getHistory with wrong RecurrenceId
+    recurrenceIdTest =
+        new RecurrenceId("some random pipelineId", "some random runId");
+    Assert.assertNull(skylineStore.getHistory(recurrenceIdTest));
+  }
+
+  @Test public final void testGetEstimation() throws SkylineStoreException {
+    // first, add estimation to the skyline store
+    final RLESparseResourceAllocation skylineList2 =
+        new RLESparseResourceAllocation(resourceOverTime,
+            new DefaultResourceCalculator());
+    for (int i = 0; i < 5; i++) {
+      riAdd = new ReservationInterval(i * 10, (i + 1) * 10);
+      skylineList2.addInterval(riAdd, resource);
+    }
+    skylineStore.addEstimation("FraudDetection", skylineList2);
+    // then, try to get the estimation
+    final RLESparseResourceAllocation estimation =
+        skylineStore.getEstimation("FraudDetection");
+    for (int i = 0; i < 50; i++) {
+      Assert.assertEquals(skylineList2.getCapacityAtTime(i),
+          estimation.getCapacityAtTime(i));
+    }
+  }
+
+  @Test(expected = NullRecurrenceIdException.class)
+  public final void testGetNullRecurrenceId()
+      throws SkylineStoreException {
+    // addHistory first recurring pipeline
+    final RecurrenceId recurrenceId1 =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    addToStore(recurrenceId1, resourceSkyline1);
+    final ResourceSkyline resourceSkyline2 = getSkyline(2);
+    addToStore(recurrenceId1, resourceSkyline2);
+    final RecurrenceId recurrenceId2 =
+        new RecurrenceId("FraudDetection", "17/06/21 00:00:00");
+    final ResourceSkyline resourceSkyline3 = getSkyline(3);
+    addToStore(recurrenceId2, resourceSkyline3);
+    final ResourceSkyline resourceSkyline4 = getSkyline(4);
+    addToStore(recurrenceId2, resourceSkyline4);
+    // addHistory second recurring pipeline
+    final RecurrenceId recurrenceId3 =
+        new RecurrenceId("Random", "17/06/20 00:00:00");
+    addToStore(recurrenceId3, resourceSkyline1);
+    addToStore(recurrenceId3, resourceSkyline2);
+    // try to getHistory with null recurringId
+    skylineStore.getHistory(null);
+  }
+
+  @Test(expected = NullPipelineIdException.class)
+  public final void testGetNullPipelineIdException()
+      throws SkylineStoreException {
+    skylineStore.getEstimation(null);
+  }
+
+  @Test public final void testAddNormal() throws SkylineStoreException {
+    // addHistory resource skylines to the in-memory store
+    final RecurrenceId recurrenceId =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    addToStore(recurrenceId, resourceSkyline1);
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    // the resource skylines to be added contain null
+    resourceSkylines.add(null);
+    final ResourceSkyline resourceSkyline2 = getSkyline(2);
+    resourceSkylines.add(resourceSkyline2);
+    skylineStore.addHistory(recurrenceId, resourceSkylines);
+    // query the in-memory store
+    final Map<RecurrenceId, List<ResourceSkyline>> jobHistory =
+        skylineStore.getHistory(recurrenceId);
+    Assert.assertEquals(1, jobHistory.size());
+    for (final Map.Entry<RecurrenceId, List<ResourceSkyline>> entry : jobHistory
+        .entrySet()) {
+      Assert.assertEquals(recurrenceId, entry.getKey());
+      final List<ResourceSkyline> getSkylines = entry.getValue();
+      Assert.assertEquals(2, getSkylines.size());
+      compare(resourceSkyline1, getSkylines.get(0));
+      compare(resourceSkyline2, getSkylines.get(1));
+    }
+  }
+
+  @Test(expected = NullRecurrenceIdException.class)
+  public final void testAddNullRecurrenceId()
+      throws SkylineStoreException {
+    // recurrenceId is null
+    final RecurrenceId recurrenceIdNull = null;
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    resourceSkylines.add(resourceSkyline1);
+    skylineStore.addHistory(recurrenceIdNull, resourceSkylines);
+  }
+
+  @Test(expected = NullResourceSkylineException.class)
+  public final void testAddNullResourceSkyline()
+      throws SkylineStoreException {
+    final RecurrenceId recurrenceId =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    resourceSkylines.add(resourceSkyline1);
+    // resourceSkylines is null
+    skylineStore.addHistory(recurrenceId, null);
+  }
+
+  @Test(expected = DuplicateRecurrenceIdException.class)
+  public final void testAddDuplicateRecurrenceId()
+      throws SkylineStoreException {
+    final RecurrenceId recurrenceId =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    resourceSkylines.add(resourceSkyline1);
+    // trying to addHistory duplicate resource skylines
+    skylineStore.addHistory(recurrenceId, resourceSkylines);
+    skylineStore.addHistory(recurrenceId, resourceSkylines);
+  }
+
+  @Test(expected = NullPipelineIdException.class)
+  public final void testAddNullPipelineIdException()
+      throws SkylineStoreException {
+    final RLESparseResourceAllocation skylineList2 =
+        new RLESparseResourceAllocation(resourceOverTime,
+            new DefaultResourceCalculator());
+    for (int i = 0; i < 5; i++) {
+      riAdd = new ReservationInterval(i * 10, (i + 1) * 10);
+      skylineList2.addInterval(riAdd, resource);
+    }
+    skylineStore.addEstimation(null, skylineList2);
+  }
+
+  @Test(expected = NullRLESparseResourceAllocationException.class)
+  public final void testAddNullRLESparseResourceAllocationExceptionException()
+      throws SkylineStoreException {
+    skylineStore.addEstimation("FraudDetection", null);
+  }
+
+  @Test public final void testDeleteNormal() throws SkylineStoreException {
+    // addHistory first recurring pipeline
+    final RecurrenceId recurrenceId1 =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    addToStore(recurrenceId1, resourceSkyline1);
+    final ResourceSkyline resourceSkyline2 = getSkyline(2);
+    addToStore(recurrenceId1, resourceSkyline2);
+    // test deleteHistory function of the in-memory store
+    skylineStore.deleteHistory(recurrenceId1);
+  }
+
+  @Test(expected = NullRecurrenceIdException.class)
+  public final void testDeleteNullRecurrenceId()
+      throws SkylineStoreException {
+    final RecurrenceId recurrenceId1 =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    addToStore(recurrenceId1, resourceSkyline1);
+    // try to deleteHistory with null recurringId
+    skylineStore.deleteHistory(null);
+  }
+
+  @Test(expected = RecurrenceIdNotFoundException.class)
+  public final void testDeleteRecurrenceIdNotFound()
+      throws SkylineStoreException {
+    final RecurrenceId recurrenceId1 =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    addToStore(recurrenceId1, resourceSkyline1);
+    final RecurrenceId recurrenceIdInvalid =
+        new RecurrenceId("Some random pipelineId", "Some random runId");
+    // try to deleteHistory non-existing recurringId
+    skylineStore.deleteHistory(recurrenceIdInvalid);
+  }
+
+  @Test public final void testUpdateNormal() throws SkylineStoreException {
+    // addHistory first recurring pipeline
+    final RecurrenceId recurrenceId1 =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    addToStore(recurrenceId1, resourceSkyline1);
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline2 = getSkyline(2);
+    resourceSkylines.add(resourceSkyline1);
+    resourceSkylines.add(resourceSkyline2);
+    skylineStore.updateHistory(recurrenceId1, resourceSkylines);
+    // query the in-memory store
+    final Map<RecurrenceId, List<ResourceSkyline>> jobHistory =
+        skylineStore.getHistory(recurrenceId1);
+    Assert.assertEquals(1, jobHistory.size());
+    for (final Map.Entry<RecurrenceId, List<ResourceSkyline>> entry : jobHistory
+        .entrySet()) {
+      Assert.assertEquals(recurrenceId1, entry.getKey());
+      final List<ResourceSkyline> getSkylines = entry.getValue();
+      Assert.assertEquals(2, getSkylines.size());
+      compare(resourceSkyline1, getSkylines.get(0));
+      compare(resourceSkyline2, getSkylines.get(1));
+    }
+  }
+
+  @Test(expected = NullRecurrenceIdException.class)
+  public final void testUpdateNullRecurrenceId()
+      throws SkylineStoreException {
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    resourceSkylines.add(resourceSkyline1);
+    final ArrayList<ResourceSkyline> resourceSkylinesInvalid =
+        new ArrayList<ResourceSkyline>();
+    resourceSkylinesInvalid.add(null);
+    // try to updateHistory with null recurringId
+    skylineStore.updateHistory(null, resourceSkylines);
+  }
+
+  @Test(expected = NullResourceSkylineException.class)
+  public final void testUpdateNullResourceSkyline()
+      throws SkylineStoreException {
+    final RecurrenceId recurrenceId =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    resourceSkylines.add(resourceSkyline1);
+    final ArrayList<ResourceSkyline> resourceSkylinesInvalid =
+        new ArrayList<ResourceSkyline>();
+    resourceSkylinesInvalid.add(null);
+    // try to updateHistory with null resourceSkylines
+    skylineStore.addHistory(recurrenceId, resourceSkylines);
+    skylineStore.updateHistory(recurrenceId, null);
+  }
+
+  @Test(expected = EmptyResourceSkylineException.class)
+  public final void testUpdateEmptyRecurrenceId()
+      throws SkylineStoreException {
+    final RecurrenceId recurrenceId =
+        new RecurrenceId("FraudDetection", "17/06/20 00:00:00");
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    resourceSkylines.add(resourceSkyline1);
+    final ArrayList<ResourceSkyline> resourceSkylinesInvalid =
+        new ArrayList<ResourceSkyline>();
+    resourceSkylinesInvalid.add(null);
+    skylineStore.addHistory(recurrenceId, resourceSkylines);
+    // try to updateHistory with empty resourceSkyline
+    skylineStore.updateHistory(recurrenceId, resourceSkylinesInvalid);
+  }
+
+  @Test(expected = RecurrenceIdNotFoundException.class)
+  public final void testUpdateRecurrenceIdNotFound()
+      throws SkylineStoreException {
+    final ArrayList<ResourceSkyline> resourceSkylines =
+        new ArrayList<ResourceSkyline>();
+    final ResourceSkyline resourceSkyline1 = getSkyline(1);
+    resourceSkylines.add(resourceSkyline1);
+    final RecurrenceId recurrenceIdInvalid =
+        new RecurrenceId("Some random pipelineId", "Some random runId");
+    final ArrayList<ResourceSkyline> resourceSkylinesInvalid =
+        new ArrayList<ResourceSkyline>();
+    resourceSkylinesInvalid.add(null);
+    // try to updateHistory with non-existing recurringId
+    skylineStore.updateHistory(recurrenceIdInvalid, resourceSkylines);
+  }
+
+  @After public final void cleanUp() {
+    skylineStore = null;
+    resourceOverTime.clear();
+    resourceOverTime = null;
+    skylineList = null;
+    riAdd = null;
+    resource = null;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestLpSolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestLpSolver.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestLpSolver.java
new file mode 100644
index 0000000..d32f7c3
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestLpSolver.java
@@ -0,0 +1,112 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.solver.impl;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.resourceestimator.common.api.RecurrenceId;
+import org.apache.hadoop.resourceestimator.common.api.ResourceSkyline;
+import org.apache.hadoop.resourceestimator.common.config.ResourceEstimatorConfiguration;
+import org.apache.hadoop.resourceestimator.common.exception.ResourceEstimatorException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.SkylineStoreException;
+import org.apache.hadoop.resourceestimator.skylinestore.impl.InMemoryStore;
+import org.apache.hadoop.resourceestimator.solver.api.Solver;
+import org.apache.hadoop.resourceestimator.solver.exceptions.SolverException;
+import org.apache.hadoop.resourceestimator.translator.api.LogParser;
+import org.apache.hadoop.resourceestimator.translator.exceptions.DataFieldNotFoundException;
+import org.apache.hadoop.resourceestimator.translator.impl.BaseLogParser;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import org.junit.Test;
+
+import java.io.BufferedReader;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.Reader;
+import java.nio.charset.Charset;
+import java.text.ParseException;
+import java.util.List;
+import java.util.Map;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test cases for {@link LpSolver}, which performs resource estimation with a
+ * linear programming model solved by the ojAlgo solver.
+ */
+public class TestLpSolver extends TestSolver {
+  private static final LogParser SAMPLEPARSER = new BaseLogParser();
+  private Solver solver;
+  private InMemoryStore skylineStore = new InMemoryStore();
+
+  private void parseLog(final String inputLog)
+      throws SolverException, IOException, SkylineStoreException,
+      DataFieldNotFoundException, ParseException {
+    final InputStream logs = new FileInputStream(inputLog);
+    SAMPLEPARSER.parseStream(logs);
+  }
+
+  @Override protected Solver createSolver() throws ResourceEstimatorException {
+    solver = new LpSolver();
+    Configuration config = new Configuration();
+    config.addResource(ResourceEstimatorConfiguration.CONFIG_FILE);
+    solver.init(config, skylineStore);
+    SAMPLEPARSER.init(config, skylineStore);
+    return solver;
+  }
+
+  @Test public void testSolve()
+      throws IOException, SkylineStoreException, SolverException,
+      ResourceEstimatorException, DataFieldNotFoundException, ParseException {
+    parseLog("src/test/resources/lp/tpch_q12.txt");
+    RecurrenceId recurrenceId = new RecurrenceId("tpch_q12", "*");
+    final Map<RecurrenceId, List<ResourceSkyline>> jobHistory =
+        skylineStore.getHistory(recurrenceId);
+    solver = createSolver();
+    RLESparseResourceAllocation result = solver.solve(jobHistory);
+    String file = "src/test/resources/lp/answer.txt";
+    Reader fileReader = new InputStreamReader(new FileInputStream(file),
+        Charset.forName("UTF-8"));
+    BufferedReader bufferedReader = new BufferedReader(fileReader);
+    String line = bufferedReader.readLine();
+    Configuration config = new Configuration();
+    config.addResource(new org.apache.hadoop.fs.Path(
+        ResourceEstimatorConfiguration.CONFIG_FILE));
+    int timeInterval =
+        config.getInt(ResourceEstimatorConfiguration.TIME_INTERVAL_KEY, 5);
+    final long containerMemAlloc =
+        jobHistory.entrySet().iterator().next().getValue().get(0)
+            .getContainerSpec().getMemorySize();
+    int count = 0;
+    int numContainer = 0;
+    while (line != null) {
+      numContainer =
+          (int) (result.getCapacityAtTime(count * timeInterval).getMemorySize()
+              / containerMemAlloc);
+      assertEquals(Integer.parseInt(line), numContainer,
+          0.1 * Integer.parseInt(line));
+      line = bufferedReader.readLine();
+      count++;
+    }
+    // closing the BufferedReader also closes the underlying fileReader
+    bufferedReader.close();
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625039ef/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestSolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestSolver.java b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestSolver.java
new file mode 100644
index 0000000..11df6cd
--- /dev/null
+++ b/hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/solver/impl/TestSolver.java
@@ -0,0 +1,73 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.resourceestimator.solver.impl;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.resourceestimator.common.api.RecurrenceId;
+import org.apache.hadoop.resourceestimator.common.api.ResourceSkyline;
+import org.apache.hadoop.resourceestimator.common.exception.ResourceEstimatorException;
+import org.apache.hadoop.resourceestimator.skylinestore.exceptions.SkylineStoreException;
+import org.apache.hadoop.resourceestimator.solver.api.Solver;
+import org.apache.hadoop.resourceestimator.solver.exceptions.InvalidInputException;
+import org.apache.hadoop.resourceestimator.solver.exceptions.SolverException;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Common test cases for {@link Solver} implementations; concrete subclasses
+ * supply the solver under test via {@link #createSolver()}.
+ */
+public abstract class TestSolver {
+  private Solver solver;
+
+  protected abstract Solver createSolver() throws ResourceEstimatorException;
+
+  @Before public void setup()
+      throws SolverException, IOException, SkylineStoreException,
+      ResourceEstimatorException {
+    solver = createSolver();
+  }
+
+  @Test(expected = InvalidInputException.class)
+  public void testNullJobHistory()
+      throws SolverException, SkylineStoreException {
+    // try to solve with null jobHistory
+    solver.solve(null);
+  }
+
+  @Test(expected = InvalidInputException.class)
+  public void testEmptyJobHistory()
+      throws SolverException, SkylineStoreException {
+    Map<RecurrenceId, List<ResourceSkyline>> jobHistoryInvalid =
+        new HashMap<RecurrenceId, List<ResourceSkyline>>();
+    // try to solve with empty jobHistory
+    solver.solve(jobHistoryInvalid);
+  }
+
+  @After public final void cleanUp() {
+    solver.close();
+    solver = null;
+  }
+}

