This is an automated email from the ASF dual-hosted git repository.

gian pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git

commit d5c4477f00e1f1b794b44a40b6618909f834c25b
Author: Vadim Ogievetsky <[email protected]>
AuthorDate: Wed Jun 12 21:52:31 2019 -0700

    removed mentions of druid.io (#10)
---
 README.md                                          |  2 +-
 _config.yml                                        |  2 +-
 _posts/2011-05-20-druid-part-deux.md               | 22 ++++-----
 ...98-right-cardinality-estimation-for-big-data.md | 10 ++--
 _posts/2013-08-30-loading-data.md                  |  6 +--
 _posts/2013-11-04-querying-your-data.md            | 12 ++---
 _posts/2014-02-03-rdruid-and-twitterstream.md      | 54 +++++++++++-----------
 ...rloglog-optimizations-for-real-world-systems.md |  2 +-
 _posts/2014-03-12-batch-ingestion.md               | 24 +++++-----
 _posts/2014-04-15-intro-to-pydruid.md              |  6 +--
 ...-off-on-the-rise-of-the-real-time-data-stack.md |  2 +-
 _posts/2015-11-03-seeking-new-committers.md        |  2 +-
 _posts/2016-06-28-druid-0-9-1.md                   |  2 +-
 _posts/2016-12-01-druid-0-9-2.md                   |  2 +-
 _posts/2017-04-18-druid-0-10-0.md                  |  2 +-
 _posts/2017-08-22-druid-0-10-1.md                  |  2 +-
 _posts/2017-12-04-druid-0-11-0.md                  |  2 +-
 _posts/2018-03-08-druid-0-12-0.md                  |  2 +-
 _posts/2018-06-08-druid-0-12-1.md                  |  2 +-
 community/index.md                                 |  6 +--
 .../ingestion/hadoop-vs-native-batch.md            |  4 +-
 .../ingestion/hadoop-vs-native-batch.md            |  4 +-
 .../ingestion/hadoop-vs-native-batch.md            |  4 +-
 docs/latest/ingestion/hadoop-vs-native-batch.md    |  4 +-
 downloads.md                                       |  2 +-
 feed/index.xml                                     | 25 ----------
 robots.txt                                         |  2 +-
 technology.md                                      | 10 ++--
 28 files changed, 95 insertions(+), 124 deletions(-)

diff --git a/README.md b/README.md
index 44ce4b9..897ce4a 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 Druid Project Website
 =====================
 
-http://druid.io/
+http://apache.druid.org/
 
 ## Building
 
diff --git a/_config.yml b/_config.yml
index 699f27d..29761c3 100644
--- a/_config.yml
+++ b/_config.yml
@@ -41,7 +41,7 @@ gems:
 
 prose:
   metadata:
-    siteurl: 'http://druid.io'
+    siteurl: 'https://druid.apache.org'
 
     _posts:
       - name: "author"
diff --git a/_posts/2011-05-20-druid-part-deux.md b/_posts/2011-05-20-druid-part-deux.md
index c0660c8..a64af09 100644
--- a/_posts/2011-05-20-druid-part-deux.md
+++ b/_posts/2011-05-20-druid-part-deux.md
@@ -7,22 +7,22 @@ layout: post
 ---
 
 In a [previous blog
-post](http://druid.io/blog/2011/04/30/introducing-druid.html) we introduced the
+post](/blog/2011/04/30/introducing-druid.html) we introduced the
 distributed indexing and query processing infrastructure we call Druid. In that
 post, we characterized the performance and scaling challenges that motivated us
 to build this system in the first place. Here, we discuss three design
 principles underpinning its architecture.
 
-**1. Partial Aggregates + In-Memory + Indexes => Fast Queries** 
+**1. Partial Aggregates + In-Memory + Indexes => Fast Queries**
 
 We work with two representations of our data: *alpha* represents the raw,
 unaggregated event logs, while *beta* is its partially aggregated derivative.
 This *beta* is the basis against which all further queries are evaluated:
 
-    2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male    USA  1800  25 
 15.70 
-    2011-01-01T01:00:00Z  bieberfever.com    google.com  Male    USA  2912  42 
 29.18 
-    2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male    UK   1953  17 
 17.31 
-    2011-01-01T02:00:00Z  bieberfever.com    google.com  Male    UK   3194  
170 34.01 
+    2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male    USA  1800  25 
 15.70
+    2011-01-01T01:00:00Z  bieberfever.com    google.com  Male    USA  2912  42 
 29.18
+    2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male    UK   1953  17 
 17.31
+    2011-01-01T02:00:00Z  bieberfever.com    google.com  Male    UK   3194  
170 34.01
 
 This is the most compact representation that preserves the finest grain of 
data,
 while enabling on-the-fly computation of all O(2^n) possible dimensional
@@ -38,7 +38,7 @@ calculation (using AND & OR operations) of rows matching a search query. The
 inverted index enables us to scan a limited subset of rows to compute final
 query results – and these scans are themselves distributed, as we discuss next.
 
-**2. Distributed Data + Parallelizable Queries => Horizontal Scalability** 
+**2. Distributed Data + Parallelizable Queries => Horizontal Scalability**
 
 Druid’s performance depends on having memory — lots of it. We achieve the 
requisite
 memory scale by dynamically distributing data across a cluster of nodes. As the
@@ -69,14 +69,14 @@ This architecture provides a number of extra benefits:
 
 * Segments are read-only, so they can simultaneously serve multiple servers. If
   we have a hotspot in a particular index, we can replicate that index to
-multiple servers and load balance across them.  
+multiple servers and load balance across them.
 * We can provide tiered classes of service for our data, with servers occupying
-  different points in the “query latency vs. data size” spectrum 
+  different points in the “query latency vs. data size” spectrum
 * Our clusters can span data center boundaries
 
 
 
-**3. Real-Time Analytics: Immutable Past, Append-Only Future** 
+**3. Real-Time Analytics: Immutable Past, Append-Only Future**
 
 Our system for real-time analytics is centered, naturally, on time. Because 
past events
 happen once and never change, they need not be re-writable. We need only be
@@ -96,7 +96,7 @@ provides access to long-range data while maintaining the high-performance that
 our customers expect for near-term data.
 
 
-##Summary## 
+##Summary##
 Druid’s power resides in providing users fast, arbitrarily deep
 exploration of large-scale transaction data. Queries over billions of rows,
 that previously took minutes or hours to run, can now be investigated directly
diff --git a/_posts/2012-05-04-fast-cheap-and-98-right-cardinality-estimation-for-big-data.md b/_posts/2012-05-04-fast-cheap-and-98-right-cardinality-estimation-for-big-data.md
index 7f30692..20822f7 100644
--- a/_posts/2012-05-04-fast-cheap-and-98-right-cardinality-estimation-for-big-data.md
+++ b/_posts/2012-05-04-fast-cheap-and-98-right-cardinality-estimation-for-big-data.md
@@ -51,10 +51,10 @@ extract a probability distribution for the likelihood of a specific
 phenomenon.  The phenomenon we care about is the maximum index of a 1 bit. 
 Specifically, we expect the following to be true:
 
-50% of hashed values will look like this: 1xxxxxxx…x  
-25% of hashed values will look like this: 01xxxxxx…x  
-12.5% of hashed values will look like this: 001xxxxxxxx…x  
-6.25% of hashed values will look like this: 0001xxxxxxxx…x  
+50% of hashed values will look like this: 1xxxxxxx…x
+25% of hashed values will look like this: 01xxxxxx…x
+12.5% of hashed values will look like this: 001xxxxxxxx…x
+6.25% of hashed values will look like this: 0001xxxxxxxx…x
 …
 
 So, naively speaking, we expect that if we were to hash 8 unique things, one of
@@ -94,7 +94,7 @@ ability to compute cardinalities.  We wanted to be able to take advantage of
 the space savings and row reduction of summarization while still being able to
 compute cardinalities:  this is where HyperLogLog comes in.
 
-In [Druid](http://druid.io/), our summarization process applies the hash
+In [Druid](/), our summarization process applies the hash
 function ([Murmur 128](http://sites.google.com/site/murmurhash/)) and computes
 the intermediate HyperLogLog format (i.e. the list of buckets of
 `max(index of 1)`) and stores that in a column.  Thus, for every row in our
diff --git a/_posts/2013-08-30-loading-data.md b/_posts/2013-08-30-loading-data.md
index 12cff7e..a0bca7f 100644
--- a/_posts/2013-08-30-loading-data.md
+++ b/_posts/2013-08-30-loading-data.md
@@ -11,7 +11,7 @@ In our last post, we got a realtime node working with example Twitter data. Now
 ## About Druid ##
 Druid is a rockin' exploratory analytical data store capable of offering 
interactive query of big data in realtime - as data is ingested. Druid drives 
10's of billions of events per day for the 
[Metamarkets](http://www.metamarkets.com) platform, and Metamarkets is 
committed to building Druid in open source.
 
-To learn more check out the first blog in this series [Understanding Druid Via 
Twitter Data](http://druid.io/blog/2013/08/06/twitter-tutorial.html)
+To learn more check out the first blog in this series [Understanding Druid Via 
Twitter Data](/blog/2013/08/06/twitter-tutorial.html)
 
 Checkout Druid at XLDB on Sept 9th 
[XLDB](https://conf-slac.stanford.edu/xldb-2013/tutorials#amC)
 
@@ -139,7 +139,7 @@ In a new console, launch the kafka console producer (so you can type in JSON kaf
     {"utcdt": "2010-01-01T01:01:03", "wp": 3000, "gender": "male", "age": 20}
     {"utcdt": "2010-01-01T01:01:04", "wp": 4000, "gender": "female", "age": 30}
     {"utcdt": "2010-01-01T01:01:05", "wp": 5000, "gender": "male", "age": 40}
-    
+
 **Watch the events as they are ingested** in the Druid realtime node console
 
     ...
@@ -181,4 +181,4 @@ In a new console, edit a file called query.body:
       }
     } ]
 
-Congratulations, you've queried the data we just loaded! In our next post, 
we'll move on to Querying our Data.
\ No newline at end of file
+Congratulations, you've queried the data we just loaded! In our next post, 
we'll move on to Querying our Data.
diff --git a/_posts/2013-11-04-querying-your-data.md b/_posts/2013-11-04-querying-your-data.md
index 0e5d5ba..2e3eb3f 100644
--- a/_posts/2013-11-04-querying-your-data.md
+++ b/_posts/2013-11-04-querying-your-data.md
@@ -57,7 +57,7 @@ com.metamx.druid.http.ComputeMain
 
 # Querying Your Data #
 
-Now that we have a complete cluster setup on localhost, we need to load data. 
To do so, refer to [Loading Your 
Data](http://druid.io/blog/2013/08/30/loading-data.html). Having done that, its 
time to query our data!
+Now that we have a complete cluster setup on localhost, we need to load data. 
To do so, refer to [Loading Your Data](/blog/2013/08/30/loading-data.html). 
Having done that, its time to query our data!
 
 ## Querying Different Nodes ##
 
@@ -153,7 +153,7 @@ Now that we know what nodes can be queried (although you should usually use the
 
 ## Querying Against the realtime.spec ##
 
-How are we to know what queries we can run? Although 
[Querying](http://druid.io/docs/latest/Querying.html) is a helpful index, to 
get a handle on querying our data we need to look at our Realtime node's 
realtime.spec file:
+How are we to know what queries we can run? Although 
[Querying](/docs/latest/Querying.html) is a helpful index, to get a handle on 
querying our data we need to look at our Realtime node's realtime.spec file:
 
 ```json
 [{
@@ -195,7 +195,7 @@ Our dataSource tells us the name of the relation/table, or 'source of data', to
 
 ### aggregations ###
 
-Note the [aggregations](http://druid.io/docs/latest/Aggregations.html) in our 
query:
+Note the [aggregations](/docs/latest/Aggregations.html) in our query:
 
 ```json
     "aggregations": [
@@ -214,7 +214,7 @@ this matches up to the aggregators in the schema of our realtime.spec!
 
 ### dimensions ###
 
-Lets look back at our actual records (from [Loading Your 
Data](http://druid.io/blog/2013/08/30/loading-data.html):
+Lets look back at our actual records (from [Loading Your 
Data](/blog/2013/08/30/loading-data.html):
 
 ```json
 {"utcdt": "2010-01-01T01:01:01", "wp": 1000, "gender": "male", "age": 100}
@@ -329,8 +329,8 @@ Which gets us just people aged 40:
 } ]
 ```
 
-Check out [Filters](http://druid.io/docs/latest/Filters.html) for more.
+Check out [Filters](/docs/latest/Filters.html) for more.
 
 ## Learn More ##
 
-Finally, you can learn more about querying at 
[Querying](http://druid.io/docs/latest/Querying.html)!
+Finally, you can learn more about querying at 
[Querying](/docs/latest/Querying.html)!
diff --git a/_posts/2014-02-03-rdruid-and-twitterstream.md b/_posts/2014-02-03-rdruid-and-twitterstream.md
index 4fce8dd..29016b3 100644
--- a/_posts/2014-02-03-rdruid-and-twitterstream.md
+++ b/_posts/2014-02-03-rdruid-and-twitterstream.md
@@ -6,7 +6,7 @@ author: Igal Levy
 tags: #R #druid #analytics #querying #bigdata, #datastore
 ---
 
-What if you could combine a statistical analysis language with the power of an 
analytics database for instant insights into realtime data? You'd be able to 
draw conclusions from analyzing data streams at the speed of now. That's what 
combining the prowess of a [Druid database](http://druid.io) with the power of 
[R](http://www.r-project.org) can do.
+What if you could combine a statistical analysis language with the power of an 
analytics database for instant insights into realtime data? You'd be able to 
draw conclusions from analyzing data streams at the speed of now. That's what 
combining the prowess of a [Druid database]() with the power of 
[R](http://www.r-project.org) can do.
 
 In this blog, we'll look at how to bring streamed realtime data into R using 
nothing more than a laptop, an Internet connection, and open-source 
applications. And we'll do it with *only one* Druid node.
 
@@ -16,13 +16,13 @@ You'll need to download and unpack [Druid](http://static.druid.io/artifacts/rele
 
 Get the [R application](http://www.r-project.org/) for your platform.
 We also recommend using [RStudio](http://www.rstudio.com/) as the R IDE, which 
is what we used to run R.
-    
+
 You'll also need a free Twitter account to be able to get a sample of streamed 
Twitter data.
-    
+
 
 ## Set Up the Twitterstream
 
-First, register with the Twitter API. Log in at the [Twitter developer's 
site](https://dev.twitter.com/apps/new) (you can use your normal Twitter 
credentials) and fill out the form for creating an application; use any website 
and callback URL to complete the form. 
+First, register with the Twitter API. Log in at the [Twitter developer's 
site](https://dev.twitter.com/apps/new) (you can use your normal Twitter 
credentials) and fill out the form for creating an application; use any website 
and callback URL to complete the form.
 
 Make note of the API credentials that are then generated. Later you'll need to 
enter them when prompted by the Twitter-example startup script, or save them in 
a `twitter4j.properties` file (nicer if you ever restart the server). If using 
a properties file, save it under `$DRUID_HOME/examples/twitter`. The file 
should contains the following (using your real keys):
 
@@ -39,8 +39,8 @@ oauth.accessTokenSecret=<yourTwitterAccessTokenSecret>
 From the Druid home directory, start the Druid Realtime node:
 
     $DRUID_HOME/run_example_server.sh
-    
-When prompted, you'll choose the "twitter" example. If you're using the 
properties file, the server should start right up. Otherwise, you'll have to 
answer the prompts with the credentials you obtained from Twitter. 
+
+When prompted, you'll choose the "twitter" example. If you're using the 
properties file, the server should start right up. Otherwise, you'll have to 
answer the prompts with the credentials you obtained from Twitter.
 
 After the Realtime node starts successfully, you should see 
"Connected_to_Twitter" printed, as well as messages similar to the following:
 
@@ -73,7 +73,7 @@ druid <- druid.url("localhost:8083")
 
 ## Querying the Realtime Node
 
-[Druid queries](http://druid.io/docs/latest/Tutorial:-All-About-Queries.html) 
are in the format of JSON objects, but in R they'll have a different format. 
Let's look at this with a simple query that will give the time range of the 
Twitter data currently in our Druid node:
+[Druid queries](/docs/latest/Tutorial:-All-About-Queries.html) are in the 
format of JSON objects, but in R they'll have a different format. Let's look at 
this with a simple query that will give the time range of the Twitter data 
currently in our Druid node:
 
 ```
 > druid.query.timeBoundary(druid, dataSource="twitterstream", 
 > intervals=interval(ymd(20140101), ymd(20141231)), verbose="true")
@@ -120,10 +120,10 @@ Content-Length: 151
 < Transfer-Encoding: chunked
 * Server Jetty(8.1.11.v20130520) is not blacklisted
 < Server: Jetty(8.1.11.v20130520)
-< 
+<
 * Connection #2 to host localhost left intact
-                  minTime                   maxTime 
-"2014-01-25 00:52:00 UTC" "2014-01-25 01:35:00 UTC" 
+                  minTime                   maxTime
+"2014-01-25 00:52:00 UTC" "2014-01-25 01:35:00 UTC"
 ```
 
 At the very end comes the response to our query, a minTime and maxTime, the 
boundaries to our data set.
@@ -132,11 +132,11 @@ At the very end comes the response to our query, a minTime and maxTime, the boun
 Now lets look at some real Twitter data. Say we are interested in the number 
of tweets per language during that time period. We need to do an aggregation 
via a groupBy query (see RDruid help in RStudio):
 
 ```
-druid.query.groupBy(druid, dataSource="twitterstream", 
-                    interval(ymd("2014-01-01"), ymd("2015-01-01")), 
-                    granularity=granularity("P1D"), 
-                    aggregations = (tweets = sum(metric("tweets"))), 
-                    dimensions = "lang", 
+druid.query.groupBy(druid, dataSource="twitterstream",
+                    interval(ymd("2014-01-01"), ymd("2015-01-01")),
+                    granularity=granularity("P1D"),
+                    aggregations = (tweets = sum(metric("tweets"))),
+                    dimensions = "lang",
                     verbose="true")
 ```
 
@@ -198,7 +198,7 @@ Content-Length: 489
 < Transfer-Encoding: chunked
 * Server Jetty(8.1.11.v20130520) is not blacklisted
 < Server: Jetty(8.1.11.v20130520)
-< 
+<
 * Connection #3 to host localhost left intact
     timestamp tweets  lang
 1  2014-01-25   6476    ar
@@ -256,12 +256,12 @@ Then create the chart:
 You can refine this query with more aggregations and post aggregations (math 
within the results). For example, to find out how many rows in Druid the data 
for each of those languages takes, use:
 
 ```
-druid.query.groupBy(druid, dataSource="twitterstream", 
-                    interval(ymd("2014-01-01"), ymd("2015-01-01")), 
-                    granularity=granularity("all"), 
-                    aggregations = list(rows = druid.count(), 
-                                        tweets = sum(metric("tweets"))), 
-                    dimensions = "lang", 
+druid.query.groupBy(druid, dataSource="twitterstream",
+                    interval(ymd("2014-01-01"), ymd("2015-01-01")),
+                    granularity=granularity("all"),
+                    aggregations = list(rows = druid.count(),
+                                        tweets = sum(metric("tweets"))),
+                    dimensions = "lang",
                     verbose="true")
 ```
 
@@ -277,10 +277,10 @@ How do you find out what metrics and dimensions are available to query? You can
 Some interesting analyses on current events could be done using these 
dimensions and metrics. For example, you could filter on specific hashtags for 
events that happen to be spiking at the time:
 
 ```
-druid.query.groupBy(druid, dataSource="twitterstream", 
-                interval(ymd("2014-01-01"), ymd("2015-01-01")), 
-                granularity=granularity("P1D"), 
-                aggregations = (tweets = sum(metric("tweets"))), 
+druid.query.groupBy(druid, dataSource="twitterstream",
+                interval(ymd("2014-01-01"), ymd("2015-01-01")),
+                granularity=granularity("P1D"),
+                aggregations = (tweets = sum(metric("tweets"))),
                 filter =
                     dimension("first_hashtag") %~% "academyawards" |
                     dimension("first_hashtag") %~% "oscars",
@@ -289,4 +289,4 @@ druid.query.groupBy(druid, dataSource="twitterstream",
 
 See the [RDruid wiki](https://github.com/metamx/RDruid/wiki/Examples) for more 
examples.
 
-The point to remember is that this data is being streamed into Druid and 
brought into R via RDruid in realtime. For example, with an R script the data 
could be continuously queried, updated, and analyzed. 
+The point to remember is that this data is being streamed into Druid and 
brought into R via RDruid in realtime. For example, with an R script the data 
could be continuously queried, updated, and analyzed.
diff --git a/_posts/2014-02-18-hyperloglog-optimizations-for-real-world-systems.md b/_posts/2014-02-18-hyperloglog-optimizations-for-real-world-systems.md
index 4c7f681..d83814a 100644
--- a/_posts/2014-02-18-hyperloglog-optimizations-for-real-world-systems.md
+++ b/_posts/2014-02-18-hyperloglog-optimizations-for-real-world-systems.md
@@ -184,6 +184,6 @@ Martin][image-credits].
 [durand-thesis]: http://algo.inria.fr/durand/Articles/these.ps
 [google-40671]: 
http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40671.pdf
 [strata-talk]: http://strataconf.com/stratany2013/public/schedule/detail/30045
-[druid-part-deux]: http://druid.io/blog/2011/05/20/druid-part-deux.html
+[druid-part-deux]: /blog/2011/05/20/druid-part-deux.html
 [hamming-weight]: http://en.wikipedia.org/wiki/Hamming_weight
 [image-credits]: 
http://donasdays.blogspot.com/2012/10/are-you-sprinter-or-long-distance-runner.html
diff --git a/_posts/2014-03-12-batch-ingestion.md b/_posts/2014-03-12-batch-ingestion.md
index 0dd3a97..018d72d 100644
--- a/_posts/2014-03-12-batch-ingestion.md
+++ b/_posts/2014-03-12-batch-ingestion.md
@@ -1,7 +1,7 @@
 ---
 title: Batch-Loading Sensor Data into Druid
 published: true
-layout: post 
+layout: post
 author: Igal Levy
 tags: #sensors #usgs #druid #analytics #querying #bigdata, #datastore
 ---
@@ -18,7 +18,7 @@ We used this map to get the sensor info for the Napa River in Napa County, Calif
 We decided to first import the data into [R (the statistical programming 
language)](http://www.r-project.org/) for two reasons:
 
 * The R package `waterData` from USGS. This package allows us to retrieve and 
analyze hydrologic data from USGS. We can then export that data from within the 
R environment, then set up Druid to ingest it.
-* The R package `RDruid` which we've [blogged about 
before](http://druid.io/blog/2014/02/03/rdruid-and-twitterstream.html). This 
package allows us to query Druid from within the R environment.
+* The R package `RDruid` which we've [blogged about 
before](/blog/2014/02/03/rdruid-and-twitterstream.html). This package allows us 
to query Druid from within the R environment.
 
 ## Extracting the Streamflow Data
 In R, load the waterData package, then run `importDVs()`:
@@ -35,7 +35,7 @@ The last line uses the function `waterData.importDVs()` to get sensor (or "strea
 * staid, or site identification number, which is entered as a string due to 
the fact that some IDs have leading 0s. This value was obtained from the 
interactive map discussed above.
 * code, which specifies the type of sensor data we're interested in (if 
available). Our chosen code specifies measurement of discharge, in cubic feet 
per second. You can learn about codes at the [USGS Water Resources 
site](http://nwis.waterdata.usgs.gov/usa/nwis/pmcodes).
 * stat, which specifies the type of statistic we're looking for&mdash;in this 
case, the mean daily flow (mean is the default statistic). The USGS provides [a 
page summarizing various types of codes and 
parameters](http://help.waterdata.usgs.gov/codes-and-parameters).
-* start and end dates. 
+* start and end dates.
 
 The information on the specific site and sensor should provide information on 
the type of data available and the start-end dates for the full historical 
record.
 
@@ -83,7 +83,7 @@ write.table(napa_flow_subset, file="~/napa-flow.tsv", sep="\t", col.names = F, r
 And here's our file:
 
 ```bash
-$ head ~/napa-flow.tsv 
+$ head ~/napa-flow.tsv
 "11458000"     90      1963-01-01
 "11458000"     87      1963-01-02
 "11458000"     85      1963-01-03
@@ -100,7 +100,7 @@ $ head ~/napa-flow.tsv
 Loading the data into Druid involves setting up Druid's indexing service to 
ingest the data into the Druid cluster, where specialized nodes will manage it.
 
 ### Configure the Indexing Task
-Druid has an indexing service that can load data. Since there's a relatively 
small amount of data to ingest, we're going to use the [basic Druid indexing 
service](http://druid.io/docs/latest/Batch-ingestion.html) to ingest it. 
(Another option to ingest data uses a Hadoop cluster and is set up in a similar 
way, but that is more than we need for this job.) We must create a task (in 
JSON format) that specifies the work the indexing service will do:
+Druid has an indexing service that can load data. Since there's a relatively 
small amount of data to ingest, we're going to use the [basic Druid indexing 
service](/docs/latest/Batch-ingestion.html) to ingest it. (Another option to 
ingest data uses a Hadoop cluster and is set up in a similar way, but that is 
more than we need for this job.) We must create a task (in JSON format) that 
specifies the work the indexing service will do:
 
 ```json
 {
@@ -135,20 +135,20 @@ Druid has an indexing service that can load data. Since there's a relatively sma
     }
   }
 }
-``` 
+```
 
 The taks is saved to a file, `usgs_index_task.json`. Note a few things about 
this task:
 
-* granularitySpec sets 
[segment](http://druid.io/docs/latest/Concepts-and-Terminology.html) 
granularity to MONTH, rather than using the default DAY, even though each row 
of our data is a daily reading. We do this to avoid having Druid create a 
segment per row of data. That's a lot of extra work (note the interval is 
"1963-01-01/2013-12-31"), and we simply don't need that much granularity to 
make sense of this data for a broad view. Setting the granularity to MONTH 
causes Druid to roll up [...]
+* granularitySpec sets [segment](/docs/latest/Concepts-and-Terminology.html) 
granularity to MONTH, rather than using the default DAY, even though each row 
of our data is a daily reading. We do this to avoid having Druid create a 
segment per row of data. That's a lot of extra work (note the interval is 
"1963-01-01/2013-12-31"), and we simply don't need that much granularity to 
make sense of this data for a broad view. Setting the granularity to MONTH 
causes Druid to roll up data into mont [...]
 
-    A different granularity setting for the data itself 
([indexGranularity](http://druid.io/docs/latest/Tasks.html)) controls how the 
data is rolled up before it is chunked into segments. This granularity, which 
defaults to "MINUTE", won't affect our data, which consists of daily values.
+    A different granularity setting for the data itself 
([indexGranularity](/docs/latest/Tasks.html)) controls how the data is rolled 
up before it is chunked into segments. This granularity, which defaults to 
"MINUTE", won't affect our data, which consists of daily values.
 * We specify aggregators that Druid will use as *metrics* to summarize the 
data. "count" is a built-in metric that counts the raw number of rows on 
ingestion, and the Druid rows (after rollups) after processing. We've added a 
metric to summarize "val" from our water data.
 * The firehose section specifies out data source, which in this case is a 
file. If our data existed in multiple files, we could have set "filter" to 
"*.tsv".
 * We have to specify the timestamp column so Druid knows.
 * We also specify the format of the data ("tsv"), what the columns are, and 
which to treat as dimensions. Dimensions are the values that describe our data.
 
 ## Start a Druid Cluster and Post the Task
-Before submitting this task, we must start a small Druid cluster consisting of 
the indexing service, a Coordinator node, and a Historical node. Instructions 
on how to set up and start a Druid cluster are in the [Druid 
documentation](http://druid.io/docs/latest/Tutorial:-Loading-Your-Data-Part-1.html).
+Before submitting this task, we must start a small Druid cluster consisting of 
the indexing service, a Coordinator node, and a Historical node. Instructions 
on how to set up and start a Druid cluster are in the [Druid 
documentation](/docs/latest/Tutorial:-Loading-Your-Data-Part-1.html).
 
 Once the cluster is ready, the task is submitted to the indexer's REST service 
(showing the relative path to the task file):
 
@@ -171,7 +171,7 @@ We can also verify the data by querying Druid. Here's a simple time-boundary que
 
 ```json
 {
-    "queryType": "timeBoundary", 
+    "queryType": "timeBoundary",
     "dataSource": "usgs"
 }
 ```
@@ -194,8 +194,8 @@ The response should be:
 } ]
 ```
 
-You can learn about submitting more complex queries in the [Druid 
documentation](http://druid.io/docs/latest/Tutorial:-All-About-Queries.html).
- 
+You can learn about submitting more complex queries in the [Druid 
documentation](/docs/latest/Tutorial:-All-About-Queries.html).
+
 ## What to Try Next: Something More Akin to a Production System
 For the purposes of demonstration, we've cobbled together a simple system for 
manually fetching, mutating, loading, analyzing, storing, and then querying 
(for yet more analysis) data. But this would hardly be anyone's idea of a 
production system.
 
diff --git a/_posts/2014-04-15-intro-to-pydruid.md b/_posts/2014-04-15-intro-to-pydruid.md
index c4e4e92..c165069 100644
--- a/_posts/2014-04-15-intro-to-pydruid.md
+++ b/_posts/2014-04-15-intro-to-pydruid.md
@@ -6,7 +6,7 @@ layout: post
 tags: #druid #analytics #querying #python #pandas #scipi #matplotlib
 ---
 
-We've already written about pairing [R with 
RDruid](http://druid.io/blog/2014/02/03/rdruid-and-twitterstream.html), but 
Python has powerful and free open-source analysis tools too. Collectively, 
these are often referred to as the [SciPy 
Stack](http://www.scipy.org/stackspec.html). To pair SciPy's analytic power 
with the advantages of querying time-series data in Druid, we created the 
pydruid connector. This allows Python users to query Druid&mdash;and export the 
results to useful formats [...]
+We've already written about pairing [R with 
RDruid](/blog/2014/02/03/rdruid-and-twitterstream.html), but Python has 
powerful and free open-source analysis tools too. Collectively, these are often 
referred to as the [SciPy Stack](http://www.scipy.org/stackspec.html). To pair 
SciPy's analytic power with the advantages of querying time-series data in 
Druid, we created the pydruid connector. This allows Python users to query 
Druid&mdash;and export the results to useful formats&mdash;in a way [...]
 
 ## Getting Started
 pydruid should run with Python 2.x, and is known to run with Python 2.7.5.
@@ -26,7 +26,7 @@ pip install pandas
 When you import pydruid into your example, it will try to load Pandas as well.
 
 ## Run the Druid Wikipedia Example
-[Download Druid](http://druid.io/downloads.html) and unpack Druid. If you are 
not familiar with Druid, see this [introductory 
tutorial](http://druid.io/docs/latest/Tutorial:-A-First-Look-at-Druid.html).
+[Download Druid](/downloads.html) and unpack Druid. If you are not familiar 
with Druid, see this [introductory 
tutorial](/docs/latest/Tutorial:-A-First-Look-at-Druid.html).
 
 From the Druid home directory, start the Druid Realtime node:
 
@@ -37,7 +37,7 @@ $DRUID_HOME/run_example_server.sh
 When prompted, choose the "wikipedia" example. After the Druid realtime node 
is done starting up, messages should appear that start with the following:
 
     2014-04-03 18:01:32,852 INFO [wikipedia-incremental-persist] ...
-    
+
 These messages confirm that the realtime node is ingesting data from the 
Wikipedia edit stream, and that data can be queried.
 
 ## Write, Execute, and Submit a pydruid Query
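The pydruid post's diff ends at the query-building step. As a hedged sketch (not taken from the post itself), this is roughly the JSON body that a pydruid timeseries query against the wikipedia example resolves to; the realtime node's endpoint port (8083) and the aggregator/output names are assumptions, not something the post confirms:

```python
import json

# Illustrative Druid timeseries query for the "wikipedia" example datasource.
# pydruid builds a body like this from its Python keyword arguments and
# POSTs it to the node's /druid/v2/ endpoint.
query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",
    "granularity": "minute",
    "intervals": ["2014-04-03/p1d"],
    "aggregations": [
        {"type": "doubleSum", "name": "edits", "fieldName": "count"}
    ],
}

body = json.dumps(query, indent=2)
print(body)
# POST to e.g. http://localhost:8083/druid/v2/ with
# Content-Type: application/json (port 8083 is an assumption for the
# realtime node in this era's examples).
```

pydruid wraps exactly this round trip, so its results can flow straight into Pandas as the post goes on to show.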
diff --git 
a/_posts/2014-05-07-open-source-leaders-sound-off-on-the-rise-of-the-real-time-data-stack.md
 
b/_posts/2014-05-07-open-source-leaders-sound-off-on-the-rise-of-the-real-time-data-stack.md
index bb11987..a5aa049 100644
--- 
a/_posts/2014-05-07-open-source-leaders-sound-off-on-the-rise-of-the-real-time-data-stack.md
+++ 
b/_posts/2014-05-07-open-source-leaders-sound-off-on-the-rise-of-the-real-time-data-stack.md
@@ -16,7 +16,7 @@ organized a panel that same night to continue the 
conversation.
 
 The discussion featured key contributors to several open source technologies:
 Andy Feng ([Storm](http://storm.incubator.apache.org/)), Eric Tschetter
-([Druid](http://druid.io/)), Jun Rao ([Kafka](http://kafka.apache.org/)), and
+([Druid](/)), Jun Rao ([Kafka](http://kafka.apache.org/)), and
 Matei Zaharia ([Spark](http://spark.apache.org/)). It was moderated by
 VentureBeat Staff Writer Jordan Novet and hosted by Zack Bogue of the [Founders
 Den](http://www.foundersden.com/) and [Data Collective](http://dcvc.com/).
diff --git a/_posts/2015-11-03-seeking-new-committers.md 
b/_posts/2015-11-03-seeking-new-committers.md
index f3bdcfb..b174564 100644
--- a/_posts/2015-11-03-seeking-new-committers.md
+++ b/_posts/2015-11-03-seeking-new-committers.md
@@ -23,7 +23,7 @@ committers from diverse organizations. If you are a Druid 
user who is
passionate about Druid and wants to get involved more, then please send your
 pull requests to improve documentation, bug fixes, tests and proposed/accepted
 features. Also, feel free to let us know about your interest by contacting
-[existing committers](http://druid.io/community/) or post in the [development
+[existing committers](/community/) or post in the [development
 list](https://groups.google.com/forum/#!forum/druid-development).
 
 To get started developing on Druid, we’ve created and tagged a set of [beginner
diff --git a/_posts/2016-06-28-druid-0-9-1.md b/_posts/2016-06-28-druid-0-9-1.md
index c4fc968..16dc4af 100644
--- a/_posts/2016-06-28-druid-0-9-1.md
+++ b/_posts/2016-06-28-druid-0-9-1.md
@@ -13,7 +13,7 @@ over the previous 0.9.0 release, from over 30 contributors. 
Major new features i
 experimental Kafka indexing service to support exactly-once consumption from 
Apache Kafka, support
 for cluster-wide query-time lookups (QTL), and an improved segment balancing 
algorithm.
 
-You can download the release here: 
[http://druid.io/downloads.html](http://druid.io/downloads.html)
+You can download the release here: [/downloads.html](/downloads.html)
 
 The full release notes are here: 
[https://github.com/apache/incubator-druid/releases/druid-0.9.1.1](https://github.com/apache/incubator-druid/releases/druid-0.9.1.1)
 
diff --git a/_posts/2016-12-01-druid-0-9-2.md b/_posts/2016-12-01-druid-0-9-2.md
index 6e8f371..cc61a85 100644
--- a/_posts/2016-12-01-druid-0-9-2.md
+++ b/_posts/2016-12-01-druid-0-9-2.md
@@ -15,7 +15,7 @@ performance improvements for HyperUnique and DataSketches, a 
query cache impleme
 Caffeine, a new lookup extension exposing fine grained caching strategies, 
support for reading ORC
 files, and new aggregators for variance and standard deviation.
 
-You can download the release here: 
[http://druid.io/downloads.html](/downloads.html)
+You can download the release here: [/downloads.html](/downloads.html)
 
 The full release notes are here:
 
[https://github.com/apache/incubator-druid/releases/druid-0.9.2](https://github.com/apache/incubator-druid/releases/druid-0.9.2)
diff --git a/_posts/2017-04-18-druid-0-10-0.md 
b/_posts/2017-04-18-druid-0-10-0.md
index 9bcaac8..a6e9953 100644
--- a/_posts/2017-04-18-druid-0-10-0.md
+++ b/_posts/2017-04-18-druid-0-10-0.md
@@ -15,7 +15,7 @@ support, a revamp of the "index" task, a new "like" filter, 
large columns,
 ability to run the coordinator and overlord as a single service, better
 performing defaults, and eight new extensions.
 
-You can download the release here: 
[http://druid.io/downloads.html](/downloads.html)
+You can download the release here: [/downloads.html](/downloads.html)
 
 The full release notes are here:
 
[https://github.com/apache/incubator-druid/releases/druid-0.10.0](https://github.com/apache/incubator-druid/releases/druid-0.10.0)
diff --git a/_posts/2017-08-22-druid-0-10-1.md 
b/_posts/2017-08-22-druid-0-10-1.md
index fc3e22f..888897d 100644
--- a/_posts/2017-08-22-druid-0-10-1.md
+++ b/_posts/2017-08-22-druid-0-10-1.md
@@ -23,7 +23,7 @@ Druid 0.10.1 contains hundreds of performance improvements, 
stability improvemen
 - Various improvements to Druid SQL
 
 
-You can download the release here: 
[http://druid.io/downloads.html](/downloads.html)
+You can download the release here: [/downloads.html](/downloads.html)
 
 The full release notes are here:
 
[https://github.com/apache/incubator-druid/releases/druid-0.10.1](https://github.com/apache/incubator-druid/releases/druid-0.10.1)
diff --git a/_posts/2017-12-04-druid-0-11-0.md 
b/_posts/2017-12-04-druid-0-11-0.md
index fe349ee..5f40f2d 100644
--- a/_posts/2017-12-04-druid-0-11-0.md
+++ b/_posts/2017-12-04-druid-0-11-0.md
@@ -21,7 +21,7 @@ Major new features include:
 - GroupBy performance improvements
 - Various improvements to Druid SQL
 
-You can download the release here: 
[http://druid.io/downloads.html](/downloads.html)
+You can download the release here: [/downloads.html](/downloads.html)
 
 The full release notes are here:
 
[https://github.com/apache/incubator-druid/releases/druid-0.11.0](https://github.com/apache/incubator-druid/releases/druid-0.11.0)
diff --git a/_posts/2018-03-08-druid-0-12-0.md 
b/_posts/2018-03-08-druid-0-12-0.md
index 58894ac..74c54de 100644
--- a/_posts/2018-03-08-druid-0-12-0.md
+++ b/_posts/2018-03-08-druid-0-12-0.md
@@ -23,7 +23,7 @@ Major new features include:
 - Various performance improvements
 - Various improvements to Druid SQL
 
-You can download the release here: 
[http://druid.io/downloads.html](/downloads.html)
+You can download the release here: [/downloads.html](/downloads.html)
 
 The full release notes are here:
 
[https://github.com/apache/incubator-druid/releases/druid-0.12.0](https://github.com/apache/incubator-druid/releases/druid-0.12.0)
diff --git a/_posts/2018-06-08-druid-0-12-1.md 
b/_posts/2018-06-08-druid-0-12-1.md
index a91919a..ef4a8df 100644
--- a/_posts/2018-06-08-druid-0-12-1.md
+++ b/_posts/2018-06-08-druid-0-12-1.md
@@ -20,7 +20,7 @@ Major improvements include:
 - Support HTTP OPTIONS request
 - Fix a bug of different segments of the same segment id in Kafka indexing
 
-You can download the release here: 
[http://druid.io/downloads.html](/downloads.html)
+You can download the release here: [/downloads.html](/downloads.html)
 
 The full release notes are here:
 
[https://github.com/apache/incubator-druid/releases/druid-0.12.1](https://github.com/apache/incubator-druid/releases/druid-0.12.1)
diff --git a/community/index.md b/community/index.md
index 99869b7..8808b4f 100644
--- a/community/index.md
+++ b/community/index.md
@@ -8,13 +8,9 @@ layout: simple_page
 
 Most discussion about Druid happens over email and GitHub.
 
-The Druid community is in the process of migrating to Apache by way of the 
Apache Incubator. As we proceed
-along this path, our site will move from http://druid.io/ to 
https://druid.apache.org/, and our mailing lists
-and Git repositories will be migrated as well.
-
 * **User mailing list** 
[[email protected]](https://groups.google.com/forum/#!forum/druid-user)
 for general discussion
 * **Development mailing list** 
[[email protected]](https://lists.apache.org/[email protected])
 for discussion about project development
-* **GitHub** [druid-io/druid](https://github.com/apache/druid) issues and pull 
requests (watch to subscribe)
+* **GitHub** [apache/druid](https://github.com/apache/druid) issues and pull 
requests (watch to subscribe)
 * **Meetups** [Druid meetups](https://www.meetup.com/topics/apache-druid/) for 
different meetup groups around the world.
 * **IRC** `#druid-dev` on irc.freenode.net
 
diff --git a/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md 
b/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md
index 85373a0..326309a 100644
--- a/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md
+++ b/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md
@@ -35,8 +35,8 @@ ingestion method.
 | Parallel indexing | Always parallel | Parallel if firehose is splittable | 
Always sequential |
 | Supported indexing modes | Replacing mode | Both appending and replacing 
modes | Both appending and replacing modes |
 | External dependency | Hadoop (it internally submits Hadoop jobs) | No 
dependency | No dependency |
-| Supported [rollup 
modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
-| Supported partitioning methods | [Both Hash-based and range 
partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification)
 | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
+| Supported [rollup modes](/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
+| Supported partitioning methods | [Both Hash-based and range 
partitioning](/docs/latest/ingestion/hadoop.html#partitioning-specification) | 
N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
 | Supported input locations | All locations accessible via HDFS client or 
Druid dataSource | All implemented [firehoses](./firehose.html) | All 
implemented [firehoses](./firehose.html) |
 | Supported file formats | All implemented Hadoop InputFormats | Currently 
text file formats (CSV, TSV, JSON) by default. Additional formats can be added 
through a [custom extension](../development/modules.html) implementing 
[`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java)
 | Currently text file formats (CSV, TSV, JSON) by default. Additional formats 
can be added through a [custom exten [...]
 | Saving parse exceptions in ingestion report | Currently not supported | 
Currently not supported | Supported |
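The table above ties hash-based partitioning in the simple `index` task to `forceGuaranteedRollup = true`. As a minimal sketch of what that looks like inside a task's `tuningConfig` (the surrounding spec is omitted; `numShards` being required in this mode, and the other field values, are assumptions to verify against the docs for your Druid version):

```python
import json

# Hypothetical tuningConfig fragment for the native "index" task.
# forceGuaranteedRollup = true selects perfect rollup, which per the
# table above implies hash-based partitioning into a fixed shard count.
tuning_config = {
    "type": "index",
    "forceGuaranteedRollup": True,
    "numShards": 2,            # fixed hash-partition count (assumed required)
    "maxRowsInMemory": 75000,  # illustrative value, not a recommendation
}

print(json.dumps({"tuningConfig": tuning_config}, indent=2))
```

With best-effort rollup (`forceGuaranteedRollup` left false), the same task instead falls in the table's "best-effort" column and no shard count needs to be fixed up front.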
diff --git a/docs/0.14.1-incubating/ingestion/hadoop-vs-native-batch.md 
b/docs/0.14.1-incubating/ingestion/hadoop-vs-native-batch.md
index 85373a0..326309a 100644
--- a/docs/0.14.1-incubating/ingestion/hadoop-vs-native-batch.md
+++ b/docs/0.14.1-incubating/ingestion/hadoop-vs-native-batch.md
@@ -35,8 +35,8 @@ ingestion method.
 | Parallel indexing | Always parallel | Parallel if firehose is splittable | 
Always sequential |
 | Supported indexing modes | Replacing mode | Both appending and replacing 
modes | Both appending and replacing modes |
 | External dependency | Hadoop (it internally submits Hadoop jobs) | No 
dependency | No dependency |
-| Supported [rollup 
modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
-| Supported partitioning methods | [Both Hash-based and range 
partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification)
 | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
+| Supported [rollup modes](/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
+| Supported partitioning methods | [Both Hash-based and range 
partitioning](/docs/latest/ingestion/hadoop.html#partitioning-specification) | 
N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
 | Supported input locations | All locations accessible via HDFS client or 
Druid dataSource | All implemented [firehoses](./firehose.html) | All 
implemented [firehoses](./firehose.html) |
 | Supported file formats | All implemented Hadoop InputFormats | Currently 
text file formats (CSV, TSV, JSON) by default. Additional formats can be added 
through a [custom extension](../development/modules.html) implementing 
[`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java)
 | Currently text file formats (CSV, TSV, JSON) by default. Additional formats 
can be added through a [custom exten [...]
 | Saving parse exceptions in ingestion report | Currently not supported | 
Currently not supported | Supported |
diff --git a/docs/0.14.2-incubating/ingestion/hadoop-vs-native-batch.md 
b/docs/0.14.2-incubating/ingestion/hadoop-vs-native-batch.md
index 85373a0..326309a 100644
--- a/docs/0.14.2-incubating/ingestion/hadoop-vs-native-batch.md
+++ b/docs/0.14.2-incubating/ingestion/hadoop-vs-native-batch.md
@@ -35,8 +35,8 @@ ingestion method.
 | Parallel indexing | Always parallel | Parallel if firehose is splittable | 
Always sequential |
 | Supported indexing modes | Replacing mode | Both appending and replacing 
modes | Both appending and replacing modes |
 | External dependency | Hadoop (it internally submits Hadoop jobs) | No 
dependency | No dependency |
-| Supported [rollup 
modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
-| Supported partitioning methods | [Both Hash-based and range 
partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification)
 | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
+| Supported [rollup modes](/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
+| Supported partitioning methods | [Both Hash-based and range 
partitioning](/docs/latest/ingestion/hadoop.html#partitioning-specification) | 
N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
 | Supported input locations | All locations accessible via HDFS client or 
Druid dataSource | All implemented [firehoses](./firehose.html) | All 
implemented [firehoses](./firehose.html) |
 | Supported file formats | All implemented Hadoop InputFormats | Currently 
text file formats (CSV, TSV, JSON) by default. Additional formats can be added 
through a [custom extension](../development/modules.html) implementing 
[`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java)
 | Currently text file formats (CSV, TSV, JSON) by default. Additional formats 
can be added through a [custom exten [...]
 | Saving parse exceptions in ingestion report | Currently not supported | 
Currently not supported | Supported |
diff --git a/docs/latest/ingestion/hadoop-vs-native-batch.md 
b/docs/latest/ingestion/hadoop-vs-native-batch.md
index 85373a0..326309a 100644
--- a/docs/latest/ingestion/hadoop-vs-native-batch.md
+++ b/docs/latest/ingestion/hadoop-vs-native-batch.md
@@ -35,8 +35,8 @@ ingestion method.
 | Parallel indexing | Always parallel | Parallel if firehose is splittable | 
Always sequential |
 | Supported indexing modes | Replacing mode | Both appending and replacing 
modes | Both appending and replacing modes |
 | External dependency | Hadoop (it internally submits Hadoop jobs) | No 
dependency | No dependency |
-| Supported [rollup 
modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
-| Supported partitioning methods | [Both Hash-based and range 
partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification)
 | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
+| Supported [rollup modes](/docs/latest/ingestion/index.html#roll-up-modes) | 
Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
+| Supported partitioning methods | [Both Hash-based and range 
partitioning](/docs/latest/ingestion/hadoop.html#partitioning-specification) | 
N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
 | Supported input locations | All locations accessible via HDFS client or 
Druid dataSource | All implemented [firehoses](./firehose.html) | All 
implemented [firehoses](./firehose.html) |
 | Supported file formats | All implemented Hadoop InputFormats | Currently 
text file formats (CSV, TSV, JSON) by default. Additional formats can be added 
through a [custom extension](../development/modules.html) implementing 
[`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java)
 | Currently text file formats (CSV, TSV, JSON) by default. Additional formats 
can be added through a [custom exten [...]
 | Saving parse exceptions in ingestion report | Currently not supported | 
Currently not supported | Supported |
diff --git a/downloads.md b/downloads.md
index 45cd8a6..69cafdf 100644
--- a/downloads.md
+++ b/downloads.md
@@ -2,7 +2,7 @@
 title: Download
 layout: simple_page
 sectionid: download
-canonical: 'http://druid.io/downloads.html'
+canonical: 'https://druid.apache.org/downloads.html'
 ---
 
 ## Latest release
diff --git a/feed/index.xml b/feed/index.xml
deleted file mode 100644
index 6a0b1a0..0000000
--- a/feed/index.xml
+++ /dev/null
@@ -1,25 +0,0 @@
----
-layout: null
----
-<?xml version="1.0" encoding="utf-8"?>
-<feed xmlns="http://www.w3.org/2005/Atom">
-       
-  <title type="text" xml:lang="en">{{ site.title }}</title>
-  <subtitle>{{ site.description }}</subtitle>
-  <link type="application/atom+xml" href="http://druid.io/feed/" rel="self"/>
-  <link type="text/html" href="http://druid.io/" rel="alternate"/>
-       <updated>{{ site.time | date_to_xmlschema }}</updated>
-        <id>http://druid.io/</id>
-       
-       {% for post in site.posts limit:20 %}
-       <entry>
-               <title>{{ post.title }}</title>
-               <link href="http://druid.io{{ post.url }}"/>
-               <updated>{{ post.date | date_to_xmlschema }}</updated>
-                <id>http://druid.io{{ post.id }}</id>
-                <author><name>{{ post.author }}</name></author>
-                <summary type="html">{{ post.excerpt | xml_escape }}</summary>
-               <content type="html">{{ post.content | xml_escape }}</content>
-       </entry>
-       {% endfor %}
-</feed>
diff --git a/robots.txt b/robots.txt
index a958ccd..f024d6b 100644
--- a/robots.txt
+++ b/robots.txt
@@ -1,4 +1,4 @@
-# robots.txt for http://druid.io
+# robots.txt for https://druid.apache.org
 
 # Keep robots from crawling old Druid doc versions
 
diff --git a/technology.md b/technology.md
index 817383b..ee78f49 100644
--- a/technology.md
+++ b/technology.md
@@ -94,7 +94,7 @@ Druid converts raw data stored in a source to a more 
read-optimized format (call
   <img src="img/diagram-4.png" style="max-width: 580px;">
 </div>
 
-For more information, please visit [our docs 
page](http://druid.io/docs/latest/ingestion/index.html).
+For more information, please visit [our docs 
page](/docs/latest/ingestion/index.html).
 
 ## Storage
 
@@ -112,7 +112,7 @@ This pre-aggregation step is known as 
[rollup](/docs/latest/tutorials/tutorial-r
   <img src="img/diagram-5.png" style="max-width: 800px;">
 </div>
 
-For more information, please visit [our docs 
page](http://druid.io/docs/latest/design/segments.html).
+For more information, please visit [our docs 
page](/docs/latest/design/segments.html).
 
 ## Querying
 
@@ -123,7 +123,7 @@ In addition to standard SQL operators, Druid supports 
unique operators that leve
   <img src="img/diagram-6.png" style="max-width: 580px;">
 </div>
 
-For more information, please visit [our docs 
page](http://druid.io/docs/latest/querying/querying.html).
+For more information, please visit [our docs 
page](/docs/latest/querying/querying.html).
 
 ## Architecture
 
@@ -139,7 +139,7 @@ Druid processes can independently fail without impacting 
the operations of other
   <img src="img/diagram-7.png" style="max-width: 620px;">
 </div>
 
-For more information, please visit [our docs 
page](http://druid.io/docs/latest/design/index.html).
+For more information, please visit [our docs 
page](/docs/latest/design/index.html).
 
 ## Operations
 
@@ -181,4 +181,4 @@ As such, Druid possesses several features to ensure uptime 
and no data loss.
   </div>
 </div>
 
-For more information, please visit [our docs 
page](http://druid.io/docs/latest/operations/recommendations.html).
+For more information, please visit [our docs 
page](/docs/latest/operations/recommendations.html).
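The technology.md hunks above walk through ingestion, storage, and querying; since the querying section points at Druid SQL alongside the native query types, here is a hedged sketch of the payload the SQL HTTP endpoint (`/druid/v2/sql`) accepts — the `wikipedia` datasource and the query text are illustrative, not from the page itself:

```python
import json

# Minimal Druid SQL request body: a JSON object with a "query" key,
# POSTed to /druid/v2/sql. __time is Druid's built-in timestamp column.
payload = {
    "query": (
        "SELECT page, COUNT(*) AS edits "
        "FROM wikipedia "
        "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY "
        "GROUP BY page "
        "ORDER BY edits DESC "
        "LIMIT 5"
    )
}

print(json.dumps(payload))
```

The same statement exercises the read-optimized segment layout described in the Storage section: only the `page` and `__time` columns are scanned, regardless of how wide the datasource is.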


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
