This is an automated email from the ASF dual-hosted git repository.

vinoth pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hudi-site.git

commit ada1ba23399ce8566279580ff84f94f1999a3fb0
Author: Vinoth Chandar <[email protected]>
AuthorDate: Fri Feb 15 20:53:49 2019 -0800

    Dockerized doc build
    
     - Fixed Dockerfile with Ruby 2.6+
     - Tested along with docker-compose
     - Documented docker commands
     - Tweaks to json version
     - Update gitignore & remove .DS_Store
---
 docs/.DS_Store          | Bin 8196 -> 0 bytes
 docs/.gitignore         |   1 +
 docs/Dockerfile         |  12 ++++++++----
 docs/Gemfile.lock       |   2 +-
 docs/README.md          |  10 +++++++++-
 docs/community.md       |   4 ++--
 docs/docker-compose.yml |  13 +++++++++++++
 docs/index.md           |   3 +--
 8 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/docs/.DS_Store b/docs/.DS_Store
deleted file mode 100644
index 83b1f95..0000000
Binary files a/docs/.DS_Store and /dev/null differ
diff --git a/docs/.gitignore b/docs/.gitignore
index f380cae..e52ea46 100644
--- a/docs/.gitignore
+++ b/docs/.gitignore
@@ -2,3 +2,4 @@ _site
 .sass-cache
 .jekyll-metadata
 .ruby-version
+.DS_Store
diff --git a/docs/Dockerfile b/docs/Dockerfile
index b1fa52c..89cc254 100644
--- a/docs/Dockerfile
+++ b/docs/Dockerfile
@@ -1,5 +1,4 @@
-FROM ruby:2.1
-MAINTAINER [email protected]
+FROM ruby:2.6
 
 RUN apt-get clean \
   && mv /var/lib/apt/lists /var/lib/apt/lists.broke \
@@ -8,7 +7,7 @@ RUN apt-get clean \
 RUN apt-get update
 
 RUN apt-get install -y \
-    node \
+    nodejs \
     python-pygments \
   && apt-get clean \
   && rm -rf /var/lib/apt/lists/
@@ -16,11 +15,16 @@ RUN apt-get install -y \
 WORKDIR /tmp
 ADD Gemfile /tmp/
 ADD Gemfile.lock /tmp/
+
+RUN gem install bundler
+RUN gem install jekyll
 RUN bundle install
+RUN bundle update --bundler
+ 
 
 VOLUME /src
 EXPOSE 4000
 
 WORKDIR /src
-ENTRYPOINT ["jekyll"]
+ENTRYPOINT ["bundle", "exec", "jekyll", "serve", "--force_polling", "-H", "0.0.0.0", "-P", "4000"]
 
diff --git a/docs/Gemfile.lock b/docs/Gemfile.lock
index 8359d57..b72b9b1 100644
--- a/docs/Gemfile.lock
+++ b/docs/Gemfile.lock
@@ -103,7 +103,7 @@ GEM
       gemoji (~> 2.0)
       html-pipeline (~> 2.2)
       jekyll (>= 3.0)
-    json (1.8.6)
+    json (2.1.0)
     kramdown (1.11.1)
     liquid (3.0.6)
     listen (3.0.6)
diff --git a/docs/README.md b/docs/README.md
index 6c5030d..0995250 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -7,11 +7,19 @@ This folder contains resources that build the [Apache Hudi website](https://hudi
 
 The site is based on a [Jekyll](https://jekyllrb.com/) theme hosted [here](idratherbewriting.com/documentation-theme-jekyll/) with detailed instructions.
 
-To build the docs, first you need to install
+#### Docker
+
+Simply run `docker-compose build --no-cache && docker-compose up` from the `docs` folder and the site should be up & running at `http://localhost:4000`
+
+
+#### Host OS
+
+To build directly on host OS (\*nix), first you need to install
 
 - gem, ruby (using apt-get/brew)
 - bundler (`gem install bundler`)
 - jekyll (`gem install jekyll`)
+- Update bundler `bundle update --bundler`
 
 and then run the following from `docs` folder to serve a local site
 
diff --git a/docs/community.md b/docs/community.md
index d2f207e..c508191 100644
--- a/docs/community.md
+++ b/docs/community.md
@@ -13,10 +13,10 @@ issues or pull requests against this repo. Before you do so, please sign the
 Also, be sure to write unit tests for your bug fix or feature to show that it works as expected.
 If the reviewer feels this contributions needs to be in the release notes, please add it to CHANGELOG.md as well.
 
-If you want to participate in day-day conversations, please join our [slack group](https://hoodielib.slack.com/x-147852474016-157730502112/signup).
+If you want to participate in day-day conversations, please join our [slack group](https://join.slack.com/t/apache-hudi/signup).
 If you are from select pre-listed email domains, you can self signup. Others, please subscribe to [email protected]
 
-## Becoming a Committer 
+## Becoming a Committer
 
 Hoodie has adopted a lot of guidelines set forth in [Google Chromium project](https://www.chromium.org/getting-involved/become-a-committer), to determine committership proposals. However, given this is a much younger project, we would have the contribution bar to be 10-15 non-trivial patches instead.
 Additionally, we expect active engagement with the community over a few months, in terms of conference/meetup talks, helping out with issues/questions on slack/github.
diff --git a/docs/docker-compose.yml b/docs/docker-compose.yml
new file mode 100644
index 0000000..7c454e3
--- /dev/null
+++ b/docs/docker-compose.yml
@@ -0,0 +1,13 @@
+version: '3.3'
+services:
+  server:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    image: hudi_docs/latest
+    ports:
+      - '4000:4000'
+    volumes:
+      - ".:/src"
+networks:
+  default:
diff --git a/docs/index.md b/docs/index.md
index 8fa056e..c198431 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -10,7 +10,7 @@ summary: "Hudi lowers data latency across the board, while simultaneously achiev
 
 
 
-Hudi (pronounced “Hoodie”) manages storage of large analytical datasets on [HDFS](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) and serve them out via two types of tables
+Hudi (pronounced “Hoodie”) manages storage of large analytical datasets on [HDFS](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) or cloud and serves them out via two types of tables
 
  * **Read Optimized Table** - Provides excellent query performance via purely columnar storage (e.g. [Parquet](https://parquet.apache.org/))
  * **Near-Real time Table** - Provides queries on real-time data, using a combination of columnar & row based storage (e.g Parquet + [Avro](http://avro.apache.org/docs/current/mr.html))
@@ -23,4 +23,3 @@ By carefully managing how data is laid out in storage & how it’s exposed to qu
 Hudi broadly consists of a self contained Spark library to build datasets and integrations with existing query engines for data access.
 
 {% include callout.html content="Hudi is a new project. Near Real-Time  Table implementation is currently underway. Get involved [here](https://github.com/uber/hoodie/projects/1)" type="info" %}
-
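For quick reference, the Docker workflow that the README and docker-compose changes above introduce amounts to the following commands; this is a sketch assembled from the diff (ports, paths, and the ENTRYPOINT arguments are all taken from it), not an additional part of the commit:

```shell
# Run from the docs/ folder of the site repo, where the new
# Dockerfile and docker-compose.yml live (per the diff above).
cd docs

# Rebuild the image from scratch and start the Jekyll container.
docker-compose build --no-cache
docker-compose up

# The compose file maps port 4000:4000 and mounts docs/ at /src,
# so the site is served at http://localhost:4000. Inside the
# container, the new ENTRYPOINT runs:
#   bundle exec jekyll serve --force_polling -H 0.0.0.0 -P 4000
# (--force_polling makes Jekyll pick up edits on the mounted volume)
```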
