This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 9c124f2cde Add a configurable bufferPeriod between when a segment is marked unused and when it is deleted by the KillUnusedSegments duty (#12599)
9c124f2cde is described below

commit 9c124f2cde074d268f95ccf5989f548e495237b0
Author: Lucas Capistrant <[email protected]>
AuthorDate: Thu Aug 17 19:32:51 2023 -0500

    Add a configurable bufferPeriod between when a segment is marked unused and when it is deleted by the KillUnusedSegments duty (#12599)
    
    * Add a new configurable buffer period to create a gap between marking a segment unused and killing it
    
    * Changes after testing
    
    * fixes and improvements
    
    * changes after initial self review
    
    * self review changes
    
    * update sql statement that was lacking last_used
    
    * shore up some code in SqlMetadataConnector after self review
    
    * fix derby compatibility and improve testing/docs
    
    * fix checkstyle violations
    
    * Fixes post merge with master
    
    * add some unit tests to improve coverage
    
    * ignore test coverage on new UpdateTables cli tool
    
    * another attempt to ignore UpdateTables in coverage check
    
    * change column name to used_flag_last_updated
    
    * fix a method signature after column name switch
    
    * update docs spelling
    
    * Update spelling dictionary
    
    * Fixing up docs/spelling and integrating altering tasks table with my 
alteration code
    
    * Update NULL values for used_flag_last_updated in the background
    
    * Remove logic to allow segs with null used_flag_last_updated to be killed 
regardless of bufferPeriod
    
    * remove unneeded things now that the new column is automatically updated
    
    * Test new background row updater method
    
    * fix broken tests
    
    * fix create table statement
    
    * cleanup DDL formatting
    
    * Revert adding columns to entry table by default
    
    * fix compilation issues after merge with master
    
    * discovered and fixed metastore inserts that were breaking integration 
tests
    
    * fixup forgotten insert by using pattern of sharing now timestamp across 
columns
    
    * fix issue introduced by merge
    
    * fixup after merge with master
    
    * add some directions to docs in the case of segment table validation issues
---
 docs/configuration/index.md                        |   1 +
 docs/design/metadata-storage.md                    |   4 +-
 docs/operations/upgrade-prep.md                    |  71 ++++++++
 .../test-data/high-availability-sample-data.sql    |  10 +-
 .../docker/test-data/ldap-security-sample-data.sql |   2 +-
 .../docker/test-data/query-error-sample-data.sql   |  10 +-
 .../docker/test-data/query-retry-sample-data.sql   |  10 +-
 .../docker/test-data/query-sample-data.sql         |  10 +-
 .../docker/test-data/security-sample-data.sql      |   2 +-
 pom.xml                                            |   3 +
 .../druid/metadata/MetadataStorageConnector.java   |   8 +
 .../metadata/TestMetadataStorageConnector.java     |   6 +
 .../SQLMetadataStorageUpdaterJobHandler.java       |  11 +-
 .../IndexerSQLMetadataStorageCoordinator.java      |  10 +-
 .../druid/metadata/SQLMetadataConnector.java       | 180 ++++++++++++++++-----
 .../metadata/SQLMetadataSegmentPublisher.java      |  14 +-
 .../druid/metadata/SegmentsMetadataManager.java    |  20 ++-
 .../druid/metadata/SqlSegmentsMetadataManager.java | 124 +++++++++++++-
 .../druid/metadata/SqlSegmentsMetadataQuery.java   |  11 +-
 .../metadata/storage/derby/DerbyConnector.java     |  58 ++++---
 .../druid/server/coordinator/DruidCoordinator.java |   2 +
 .../server/coordinator/DruidCoordinatorConfig.java |   4 +
 .../coordinator/duty/KillUnusedSegments.java       |   7 +-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  |  11 +-
 .../druid/metadata/SQLMetadataConnectorTest.java   |  44 ++++-
 .../metadata/SqlSegmentsMetadataManagerTest.java   | 118 +++++++++++++-
 .../apache/druid/metadata/TestDerbyConnector.java  |  26 +++
 .../coordinator/TestDruidCoordinatorConfig.java    |  22 ++-
 .../coordinator/duty/KillUnusedSegmentsTest.java   |   7 +-
 .../simulate/TestSegmentsMetadataManager.java      |  12 +-
 .../src/main/java/org/apache/druid/cli/Main.java   |   3 +-
 .../java/org/apache/druid/cli/UpdateTables.java    | 134 +++++++++++++++
 website/.spelling                                  |   1 +
 33 files changed, 832 insertions(+), 124 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 4924fb478f..deb1e7c541 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -858,6 +858,7 @@ These Coordinator static configurations can be defined in 
the `coordinator/runti
 |`druid.coordinator.kill.period`|How often to send kill tasks to the indexing 
service. Value must be greater than `druid.coordinator.period.indexingPeriod`. 
Only applies if kill is turned on.|P1D (1 Day)|
 |`druid.coordinator.kill.durationToRetain`|Only applies if you set `druid.coordinator.kill.on` to `true`. This value is ignored if `druid.coordinator.kill.ignoreDurationToRetain` is `true`. Valid configurations must be an ISO8601 period. Druid will not kill unused segments whose interval end date is beyond `now - durationToRetain`. `durationToRetain` can be a negative ISO8601 period, which would result in `now - durationToRetain` being in the future.<br /><br />Note that the `durationToRe [...]
 |`druid.coordinator.kill.ignoreDurationToRetain`|A way to override 
`druid.coordinator.kill.durationToRetain` and tell the coordinator that you do 
not care about the end date of unused segment intervals when it comes to 
killing them. If true, the coordinator considers all unused segments as 
eligible to be killed.|false|
+|`druid.coordinator.kill.bufferPeriod`|The amount of time that a segment must be unused before it can be permanently removed from metadata and deep storage. This serves as a buffer period to prevent data loss if the data turns out to be needed after the segment was marked unused.|`P30D`|
 |`druid.coordinator.kill.maxSegments`|The number of unused segments to kill 
per kill task. This number must be greater than 0. This only applies when 
`druid.coordinator.kill.on=true`.|100|
 |`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy 
for the coordinator to use to distribute segments among the historicals. 
`cachingCost` is logically equivalent to `cost` but is more CPU-efficient on 
large clusters. `diskNormalized` weights the costs according to the servers' 
disk usage ratios - there are known issues with this strategy distributing 
segments unevenly across the cluster. `random` distributes segments among 
services randomly.|`cost`|
 |`druid.coordinator.balancer.cachingCost.awaitInitialization`|Whether to wait for segment view initialization before creating the `cachingCost` balancing strategy. This property is enabled only when `druid.coordinator.balancer.strategy` is `cachingCost`. If set to 'true', the Coordinator will not start to assign segments until the segment view is initialized. If set to 'false', the Coordinator will fall back to using the `cost` balancing strategy only if the segment view is not initialized [...]
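
To make the interaction between `durationToRetain` and the new `bufferPeriod` concrete, here is a minimal sketch of the eligibility check described above, using Joda-Time periods as Druid does. This is illustrative only: the class and method names are hypothetical, not the actual `KillUnusedSegments` implementation.

```java
import org.joda.time.DateTime;
import org.joda.time.Period;

public class KillEligibilitySketch
{
  /**
   * A segment may be killed only if its interval ended before (now - durationToRetain)
   * AND it has been unused for at least bufferPeriod.
   */
  public static boolean isEligibleForKill(
      DateTime intervalEnd,
      DateTime usedFlagLastUpdated,  // value of the new used_flag_last_updated column
      Period durationToRetain,       // druid.coordinator.kill.durationToRetain
      Period bufferPeriod            // druid.coordinator.kill.bufferPeriod, default P30D
  )
  {
    final DateTime now = DateTime.now();
    final boolean oldEnough = intervalEnd.isBefore(now.minus(durationToRetain));
    final boolean unusedLongEnough = usedFlagLastUpdated.isBefore(now.minus(bufferPeriod));
    return oldEnough && unusedLongEnough;
  }

  public static void main(String[] args)
  {
    // A segment marked unused yesterday is not eligible under the default P30D buffer,
    // no matter how old its interval is.
    System.out.println(isEligibleForKill(
        DateTime.parse("2013-01-02T00:00:00.000Z"),
        DateTime.now().minusDays(1),
        Period.parse("P90D"),
        Period.parse("P30D")
    )); // prints false
  }
}
```
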
diff --git a/docs/design/metadata-storage.md b/docs/design/metadata-storage.md
index a2a6996861..fc741f9796 100644
--- a/docs/design/metadata-storage.md
+++ b/docs/design/metadata-storage.md
@@ -103,7 +103,9 @@ system. The table has two main functional columns, the 
other columns are for ind
 Value 1 in the `used` column means that the segment should be "used" by the 
cluster (i.e., it should be loaded and
 available for requests). Value 0 means that the segment should not be loaded 
into the cluster. We do this as a means of
 unloading segments from the cluster without actually removing their metadata 
(which allows for simpler rolling back if
-that is ever an issue).
+that is ever an issue). The `used` column has a corresponding `used_flag_last_updated` column that records the time
+at which the segment's `used` status was last updated. The coordinator can use this information to determine whether
+a segment is a candidate for deletion (if automated segment killing is enabled).
 
 The `payload` column stores a JSON blob that has all of the metadata for the 
segment.
 Some of the data in the `payload` column intentionally duplicates data from 
other columns in the segments table.
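
As an illustration of how the two functional columns work together, below is a sketch of a plain-JDBC query for segments that have been unused longer than the default 30-day buffer. It is not code from this commit: the connection details are placeholders, the default `druid_segments` table name is assumed, and it relies on the fact that the ISO8601 UTC timestamp strings Druid stores compare lexicographically in chronological order.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.joda.time.DateTime;
import org.joda.time.Period;

public class UnusedSegmentListingSketch
{
  public static void main(String[] args) throws Exception
  {
    // Fixed-format ISO8601 UTC strings sort lexicographically in time order,
    // so a plain string comparison against the cutoff works.
    final String cutoff = DateTime.now().minus(Period.parse("P30D")).toString();
    try (Connection conn = DriverManager.getConnection(
             "jdbc:mysql://metadata-host:3306/druid", "druid", "diurd");
         PreparedStatement stmt = conn.prepareStatement(
             "SELECT id FROM druid_segments WHERE used = ? AND used_flag_last_updated < ?")) {
      stmt.setBoolean(1, false);
      stmt.setString(2, cutoff);
      try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
          System.out.println(rs.getString("id"));  // segments unused for over 30 days
        }
      }
    }
  }
}
```
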
diff --git a/docs/operations/upgrade-prep.md b/docs/operations/upgrade-prep.md
new file mode 100644
index 0000000000..03fc24c9de
--- /dev/null
+++ b/docs/operations/upgrade-prep.md
@@ -0,0 +1,71 @@
+---
+id: upgrade-prep
+title: "Upgrade Prep"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+  
+## Upgrade to `0.24+` from `0.23` and earlier
+
+### Altering segments table
+
+**If you have set `druid.metadata.storage.connector.createTables` to `true` (which is the default) and your metadata connector user has DDL privileges, you can disregard this section.**
+
+**The coordinator and overlord services will fail if you do not execute this change prior to the upgrade.**
+
+A new column, `used_flag_last_updated`, is needed in the segments table to 
support new
+segment killing functionality. You can manually alter the table, or you can use
+a CLI tool to perform the update.
+
+#### CLI tool
+
+Druid provides a `metadata-update` tool for updating Druid's metadata tables.
+
+In the example commands below:
+
+- `lib` is the Druid lib directory
+- `extensions` is the Druid extensions directory
+- `base` corresponds to the value of `druid.metadata.storage.tables.base` in 
the configuration, `druid` by default.
+- The `--connectURI` parameter corresponds to the value of 
`druid.metadata.storage.connector.connectURI`.
+- The `--user` parameter corresponds to the value of 
`druid.metadata.storage.connector.user`.
+- The `--password` parameter corresponds to the value of 
`druid.metadata.storage.connector.password`.
+- The `--action` parameter corresponds to the update action you are executing. In this case it is: `add-used-flag-last-updated-to-segments`
+
+##### MySQL
+
+```bash
+cd ${DRUID_ROOT}
+java -classpath "lib/*" 
-Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml 
-Ddruid.extensions.directory="extensions" 
-Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] 
-Ddruid.metadata.storage.type=mysql org.apache.druid.cli.Main tools 
metadata-update --connectURI="<mysql-uri>" --user <user> --password <pass> 
--base druid --action add-used-flag-last-updated-to-segments
+```
+
+##### PostgreSQL
+
+```bash
+cd ${DRUID_ROOT}
+java -classpath "lib/*" 
-Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml 
-Ddruid.extensions.directory="extensions" 
-Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] 
-Ddruid.metadata.storage.type=postgresql org.apache.druid.cli.Main tools 
metadata-update --connectURI="<postgresql-uri>" --user <user> --password <pass> 
--base druid --action add-used-flag-last-updated-to-segments
+```
+
+
+#### Manual ALTER TABLE
+
+```SQL
+ALTER TABLE druid_segments
+ADD used_flag_last_updated varchar(255);
+```
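
For operators scripting the manual path, here is a minimal JDBC sketch of the same alteration with an existence check before altering. This is independent of the `metadata-update` tool shown above; the connection details are placeholders, so substitute your `druid.metadata.storage.connector.{connectURI,user,password}` values.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AddUsedFlagLastUpdatedSketch
{
  public static void main(String[] args) throws Exception
  {
    // Placeholder connection details; use your cluster's connector settings.
    try (Connection conn = DriverManager.getConnection(
        "jdbc:mysql://metadata-host:3306/druid", "druid", "diurd")) {
      final DatabaseMetaData meta = conn.getMetaData();
      // Skip the ALTER if the column is already present.
      try (ResultSet columns = meta.getColumns(null, null, "druid_segments", "used_flag_last_updated")) {
        if (columns.next()) {
          System.out.println("Column already present; nothing to do.");
          return;
        }
      }
      try (Statement stmt = conn.createStatement()) {
        stmt.execute("ALTER TABLE druid_segments ADD used_flag_last_updated varchar(255)");
      }
      System.out.println("Added used_flag_last_updated to druid_segments.");
    }
  }
}
```
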
diff --git 
a/integration-tests/docker/test-data/high-availability-sample-data.sql 
b/integration-tests/docker/test-data/high-availability-sample-data.sql
index 18ab48ad55..93293cc2af 100644
--- a/integration-tests/docker/test-data/high-availability-sample-data.sql
+++ b/integration-tests/docker/test-data/high-availability-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_flag_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48
 [...]
diff --git a/integration-tests/docker/test-data/ldap-security-sample-data.sql 
b/integration-tests/docker/test-data/ldap-security-sample-data.sql
index f9edf756de..5ae57750fd 100644
--- a/integration-tests/docker/test-data/ldap-security-sample-data.sql
+++ b/integration-tests/docker/test-data/ldap-security-sample-data.sql
@@ -14,4 +14,4 @@
 -- limitations under the License.
 
 INSERT INTO druid_tasks (id, created_date, datasource, payload, 
status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', 
'2030-04-30T01:13:31.893Z', 'auth_test', 
'{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}',
 
'{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}',
 0);
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"l
 [...]
diff --git a/integration-tests/docker/test-data/query-error-sample-data.sql 
b/integration-tests/docker/test-data/query-error-sample-data.sql
index 18ab48ad55..93293cc2af 100644
--- a/integration-tests/docker/test-data/query-error-sample-data.sql
+++ b/integration-tests/docker/test-data/query-error-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_flag_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48
 [...]
diff --git a/integration-tests/docker/test-data/query-retry-sample-data.sql 
b/integration-tests/docker/test-data/query-retry-sample-data.sql
index 18ab48ad55..93293cc2af 100644
--- a/integration-tests/docker/test-data/query-retry-sample-data.sql
+++ b/integration-tests/docker/test-data/query-retry-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_flag_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48
 [...]
diff --git a/integration-tests/docker/test-data/query-sample-data.sql 
b/integration-tests/docker/test-data/query-sample-data.sql
index 18ab48ad55..93293cc2af 100644
--- a/integration-tests/docker/test-data/query-sample-data.sql
+++ b/integration-tests/docker/test-data/query-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.980Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.791Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.590Z_v9\",\"loadSpec\":{
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_flag_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48
 [...]
diff --git a/integration-tests/docker/test-data/security-sample-data.sql 
b/integration-tests/docker/test-data/security-sample-data.sql
index f9edf756de..5ae57750fd 100644
--- a/integration-tests/docker/test-data/security-sample-data.sql
+++ b/integration-tests/docker/test-data/security-sample-data.sql
@@ -14,4 +14,4 @@
 -- limitations under the License.
 
 INSERT INTO druid_tasks (id, created_date, datasource, payload, 
status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', 
'2030-04-30T01:13:31.893Z', 'auth_test', 
'{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}',
 
'{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}',
 0);
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload) VALUES 
('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"loadSpec\":{\"type\":\"s
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"l
 [...]
diff --git a/pom.xml b/pom.xml
index 33f29b1683..345bddd5d3 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1301,6 +1301,9 @@
                         
<exclude>org/apache/druid/server/initialization/jetty/*Module*</exclude>
                         <exclude>org/apache/druid/guice/http/*</exclude>
 
+                        <!-- Ignore cli tools -->
+                        <exclude>org/apache/druid/cli/UpdateTables*</exclude>
+
                         <!-- Ignore generated code -->
                         
<exclude>org/apache/druid/math/expr/antlr/Expr*</exclude>  <!-- core -->
                         
<exclude>org/apache/druid/**/generated/*Benchmark*</exclude>  <!-- benchmarks 
-->
diff --git 
a/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java
 
b/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java
index 45fb663908..a4a8ec4839 100644
--- 
a/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java
+++ 
b/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java
@@ -88,4 +88,12 @@ public interface MetadataStorageConnector
   void createSupervisorsTable();
 
   void deleteAllRecords(String tableName);
+
+  /**
+   * Upgrade Compatibility Method.
+   *
+   * A new column, used_flag_last_updated, has been added to the druid_segments table. This method alters the table
+   * to add the column, making a cluster's metadata store tables compatible with the updated Druid codebase in 0.24.x+.
+   */
+  void alterSegmentTableAddUsedFlagLastUpdated();
 }
diff --git 
a/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java
 
b/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java
index 028a9d5cc0..560e724944 100644
--- 
a/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java
+++ 
b/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java
@@ -89,4 +89,10 @@ public class TestMetadataStorageConnector implements 
MetadataStorageConnector
   {
     throw new UnsupportedOperationException();
   }
+
+  @Override
+  public void alterSegmentTableAddUsedFlagLastUpdated()
+  {
+    throw new UnsupportedOperationException();
+  }
 }
diff --git 
a/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java
 
b/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java
index 71c2cadb10..d0bf41bd61 100644
--- 
a/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java
+++ 
b/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java
@@ -59,29 +59,28 @@ public class SQLMetadataStorageUpdaterJobHandler implements 
MetadataStorageUpdat
           {
             final PreparedBatch batch = handle.prepareBatch(
                 StringUtils.format(
-                    "INSERT INTO %1$s (id, dataSource, created_date, start, 
%2$send%2$s, partitioned, version, used, payload) "
-                    + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload)",
+                    "INSERT INTO %1$s (id, dataSource, created_date, start, 
%2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
+                    + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload, :used_flag_last_updated)",
                     tableName, connector.getQuoteString()
                 )
             );
             for (final DataSegment segment : segments) {
-
+              String now = DateTimes.nowUtc().toString();
               batch.add(
                   new ImmutableMap.Builder<String, Object>()
                       .put("id", segment.getId().toString())
                       .put("dataSource", segment.getDataSource())
-                      .put("created_date", DateTimes.nowUtc().toString())
+                      .put("created_date", now)
                       .put("start", 
segment.getInterval().getStart().toString())
                       .put("end", segment.getInterval().getEnd().toString())
                       .put("partitioned", (segment.getShardSpec() instanceof 
NoneShardSpec) ? false : true)
                       .put("version", segment.getVersion())
                       .put("used", true)
                       .put("payload", mapper.writeValueAsBytes(segment))
+                      .put("used_flag_last_updated", now)
                       .build()
               );
-
               log.info("Published %s", segment.getId());
-
             }
             batch.execute();
 
diff --git 
a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
 
b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
index bbf7a7f7bf..38bed0d7f8 100644
--- 
a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
+++ 
b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
@@ -1419,8 +1419,8 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
 
       PreparedBatch preparedBatch = handle.prepareBatch(
           StringUtils.format(
-              "INSERT INTO %1$s (id, dataSource, created_date, start, 
%2$send%2$s, partitioned, version, used, payload) "
-                  + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload)",
+              "INSERT INTO %1$s (id, dataSource, created_date, start, 
%2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
+                  + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload, :used_flag_last_updated)",
               dbTables.getSegmentsTable(),
               connector.getQuoteString()
           )
@@ -1428,16 +1428,18 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
 
       for (List<DataSegment> partition : partitionedSegments) {
         for (DataSegment segment : partition) {
+          String now = DateTimes.nowUtc().toString();
           preparedBatch.add()
               .bind("id", segment.getId().toString())
               .bind("dataSource", segment.getDataSource())
-              .bind("created_date", DateTimes.nowUtc().toString())
+              .bind("created_date", now)
               .bind("start", segment.getInterval().getStart().toString())
               .bind("end", segment.getInterval().getEnd().toString())
               .bind("partitioned", (segment.getShardSpec() instanceof 
NoneShardSpec) ? false : true)
               .bind("version", segment.getVersion())
               .bind("used", usedSegments.contains(segment))
-              .bind("payload", jsonMapper.writeValueAsBytes(segment));
+              .bind("payload", jsonMapper.writeValueAsBytes(segment))
+              .bind("used_flag_last_updated", now);
         }
         final int[] affectedRows = preparedBatch.execute();
         final boolean succeeded = 
Arrays.stream(affectedRows).allMatch(eachAffectedRows -> eachAffectedRows == 1);
diff --git 
a/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java 
b/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java
index a6e0dbe396..e31b74d300 100644
--- a/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java
+++ b/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java
@@ -227,6 +227,41 @@ public abstract class SQLMetadataConnector implements 
MetadataStorageConnector
     }
   }
 
+  /**
+   * Execute the given ALTER statements on the given table.
+   *
+   * @param tableName The name of the table being altered
+   * @param sql ALTER statements to be executed
+   */
+  private void alterTable(final String tableName, final Iterable<String> sql)
+  {
+    try {
+      retryWithHandle(
+          new HandleCallback<Void>()
+          {
+            @Override
+            public Void withHandle(Handle handle)
+            {
+              if (tableExists(handle, tableName)) {
+                final Batch batch = handle.createBatch();
+                for (String s : sql) {
+                  log.info("Altering table[%s], with command: %s", tableName, 
s);
+                  batch.add(s);
+                }
+                batch.execute();
+              } else {
+                log.info("Table[%s] doesn't exist", tableName);
+              }
+              return null;
+            }
+          }
+      );
+    }
+    catch (Exception e) {
+      log.warn(e, "Exception Altering table[%s]", tableName);
+    }
+  }
+
   public void createPendingSegmentsTable(final String tableName)
   {
     createTable(
@@ -296,6 +331,7 @@ public abstract class SQLMetadataConnector implements 
MetadataStorageConnector
                 + "  version VARCHAR(255) NOT NULL,\n"
                 + "  used BOOLEAN NOT NULL,\n"
                 + "  payload %2$s NOT NULL,\n"
+                + "  used_flag_last_updated VARCHAR(255) NOT NULL,\n"
                 + "  PRIMARY KEY (id)\n"
                 + ")",
                 tableName, getPayloadType(), getQuoteString(), getCollation()
@@ -346,23 +382,11 @@ public abstract class SQLMetadataConnector implements 
MetadataStorageConnector
         )
     );
   }
-  
-  public boolean tableContainsColumn(Handle handle, String table, String 
column)
-  {
-    try {
-      DatabaseMetaData databaseMetaData = handle.getConnection().getMetaData();
-      ResultSet columns = databaseMetaData.getColumns(null, null, table, 
column);
-      return columns.next();
-    }
-    catch (SQLException e) {
-      return false;
-    }
-  }
-  
+
   public void prepareTaskEntryTable(final String tableName)
   {
     createEntryTable(tableName);
-    alterEntryTable(tableName);
+    alterEntryTableAddTypeAndGroupId(tableName);
   }
 
   public void createEntryTable(final String tableName)
@@ -399,32 +423,23 @@ public abstract class SQLMetadataConnector implements 
MetadataStorageConnector
     );
   }
 
-  private void alterEntryTable(final String tableName)
+  private void alterEntryTableAddTypeAndGroupId(final String tableName)
   {
-    try {
-      retryWithHandle(
-          new HandleCallback<Void>()
-          {
-            @Override
-            public Void withHandle(Handle handle)
-            {
-              final Batch batch = handle.createBatch();
-              if (!tableContainsColumn(handle, tableName, "type")) {
-                log.info("Adding column: type to table[%s]", tableName);
-                batch.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type 
VARCHAR(255)", tableName));
-              }
-              if (!tableContainsColumn(handle, tableName, "group_id")) {
-                log.info("Adding column: group_id to table[%s]", tableName);
-                batch.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN 
group_id VARCHAR(255)", tableName));
-              }
-              batch.execute();
-              return null;
-            }
-          }
-      );
+    ArrayList<String> statements = new ArrayList<>();
+    if (!tableHasColumn(tableName, "type")) {
+      log.info("Adding 'type' column to %s", tableName);
+      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type 
VARCHAR(255)", tableName));
+    } else {
+      log.info("%s already has 'type' column", tableName);
     }
-    catch (Exception e) {
-      log.warn(e, "Exception altering table");
+    if (!tableHasColumn(tableName, "group_id")) {
+      log.info("Adding 'group_id' column to %s", tableName);
+      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN group_id 
VARCHAR(255)", tableName));
+    } else {
+      log.info("%s already has 'group_id' column", tableName);
+    }
+    if (!statements.isEmpty()) {
+      alterTable(tableName, statements);
     }
   }
 
@@ -486,6 +501,32 @@ public abstract class SQLMetadataConnector implements 
MetadataStorageConnector
     );
   }
 
+  /**
+   * Adds the used_flag_last_updated column to the Druid segment table.
+   *
+   * This is public to allow the UpdateTables CLI tool to use it for upgrade prep.
+   */
+  @Override
+  public void alterSegmentTableAddUsedFlagLastUpdated()
+  {
+    String tableName = tablesConfigSupplier.get().getSegmentsTable();
+    if (!tableHasColumn(tableName, "used_flag_last_updated")) {
+      log.info("Adding 'used_flag_last_updated' column to %s", tableName);
+      alterTable(
+          tableName,
+          ImmutableList.of(
+              StringUtils.format(
+                  "ALTER TABLE %1$s \n"
+                  + "ADD used_flag_last_updated varchar(255)",
+                  tableName
+              )
+          )
+      );
+    } else {
+      log.info("%s already has used_flag_last_updated column", tableName);
+    }
+  }
+
   @Override
   public Void insertOrUpdate(
       final String tableName,
@@ -631,7 +672,11 @@ public abstract class SQLMetadataConnector implements 
MetadataStorageConnector
   {
     if (config.get().isCreateTables()) {
       createSegmentTable(tablesConfigSupplier.get().getSegmentsTable());
+      alterSegmentTableAddUsedFlagLastUpdated();
     }
+    // Called outside of the above conditional because we want to validate the 
table
+    // regardless of cluster configuration for creating tables.
+    validateSegmentTable();
   }
 
   @Override
@@ -942,4 +987,63 @@ public abstract class SQLMetadataConnector implements 
MetadataStorageConnector
       log.error(e, StringUtils.format("Exception while creating index on table 
[%s]", tableName));
     }
   }
+
+  /**
+   * Interrogate table metadata and return true or false depending on the existence of the indicated column.
+   *
+   * Public visibility because DerbyConnector needs to override it: Derby's uppercase table and column names
+   * invalidate this implementation.
+   *
+   * @param tableName The table being interrogated
+   * @param columnName The column being looked for
+   * @return boolean indicating the existence of the column in question
+   */
+  public boolean tableHasColumn(String tableName, String columnName)
+  {
+    return getDBI().withHandle(
+        new HandleCallback<Boolean>()
+        {
+          @Override
+          public Boolean withHandle(Handle handle)
+          {
+            try {
+              if (tableExists(handle, tableName)) {
+                DatabaseMetaData dbMetaData = 
handle.getConnection().getMetaData();
+                ResultSet columns = dbMetaData.getColumns(
+                    null,
+                    null,
+                    tableName,
+                    columnName
+                );
+                return columns.next();
+              } else {
+                return false;
+              }
+            }
+            catch (SQLException e) {
+              return false;
+            }
+          }
+        }
+    );
+  }
+
+  /**
+   * Ensure that the segment table has the schema required to run Druid properly.
+   *
+   * Throws RuntimeException if the column does not exist. There is no recovering from an invalid schema;
+   * the program should crash.
+   *
+   * See <a href="https://druid.apache.org/docs/latest/operations/upgrade-prep.html">upgrade-prep docs</a> for info
+   * on manually preparing your segment table.
+   */
+  private void validateSegmentTable()
+  {
+    if (tableHasColumn(tablesConfigSupplier.get().getSegmentsTable(), 
"used_flag_last_updated")) {
+      return;
+    } else {
+      throw new RuntimeException("Invalid Segment Table Schema! No 
used_flag_last_updated column!" +
+              " See 
https://druid.apache.org/docs/latest/operations/upgrade-prep.html for more info 
on remediation");
+    }
+  }
 }
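
The `tableHasColumn` javadoc above notes that Derby invalidates the generic implementation: Derby stores unquoted identifiers in upper case, so `DatabaseMetaData.getColumns` finds nothing when given the lower-case names Druid uses. A plausible fix is sketched below; the actual `DerbyConnector` override is not shown in this excerpt, so treat this as an assumption about its approach.

```java
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Locale;

public class DerbyColumnCheckSketch
{
  // Derby stores unquoted identifiers in upper case, so getColumns() matches nothing
  // when given lower-case names. Upper-casing the names before the lookup makes the
  // existence check work against Derby.
  public static boolean tableHasColumn(DatabaseMetaData meta, String tableName, String columnName)
      throws SQLException
  {
    try (ResultSet columns = meta.getColumns(
        null,
        null,
        tableName.toUpperCase(Locale.ENGLISH),   // e.g. "DRUID_SEGMENTS"
        columnName.toUpperCase(Locale.ENGLISH))) // e.g. "USED_FLAG_LAST_UPDATED"
    {
      return columns.next();
    }
  }
}
```
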
diff --git 
a/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java
 
b/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java
index 6c33396de6..0ad1d607ed 100644
--- 
a/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java
+++ 
b/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java
@@ -55,8 +55,8 @@ public class SQLMetadataSegmentPublisher implements 
MetadataSegmentPublisher
     this.config = config;
     this.connector = connector;
     this.statement = StringUtils.format(
-        "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, 
partitioned, version, used, payload) "
-        + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload)",
+        "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, 
partitioned, version, used, payload, used_flag_last_updated) "
+        + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload, :used_flag_last_updated)",
         config.getSegmentsTable(), connector.getQuoteString()
     );
   }
@@ -64,16 +64,18 @@ public class SQLMetadataSegmentPublisher implements 
MetadataSegmentPublisher
   @Override
   public void publishSegment(final DataSegment segment) throws IOException
   {
+    String now = DateTimes.nowUtc().toString();
     publishSegment(
         segment.getId().toString(),
         segment.getDataSource(),
-        DateTimes.nowUtc().toString(),
+        now,
         segment.getInterval().getStart().toString(),
         segment.getInterval().getEnd().toString(),
         (segment.getShardSpec() instanceof NoneShardSpec) ? false : true,
         segment.getVersion(),
         true,
-        jsonMapper.writeValueAsBytes(segment)
+        jsonMapper.writeValueAsBytes(segment),
+        now
     );
   }
 
@@ -87,7 +89,8 @@ public class SQLMetadataSegmentPublisher implements 
MetadataSegmentPublisher
       final boolean partitioned,
       final String version,
       final boolean used,
-      final byte[] payload
+      final byte[] payload,
+      final String usedFlagLastUpdated
   )
   {
     try {
@@ -128,6 +131,7 @@ public class SQLMetadataSegmentPublisher implements 
MetadataSegmentPublisher
                     .bind("version", version)
                     .bind("used", used)
                     .bind("payload", payload)
+                    .bind("used_flag_last_updated", usedFlagLastUpdated)
                     .execute();
 
               return null;
diff --git 
a/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java 
b/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java
index 9b41f61f90..7fd832a6ec 100644
--- 
a/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java
+++ 
b/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java
@@ -139,11 +139,25 @@ public interface SegmentsMetadataManager
   Set<String> retrieveAllDataSourceNames();
 
   /**
-   * Returns top N unused segment intervals with the end time no later than 
the specified maxEndTime when ordered by
-   * segment start time, end time.
+   * Returns the top N unused segment intervals with end time no later than the specified maxEndTime and
+   * used_flag_last_updated time no later than the specified maxUsedFlagLastUpdatedTime, ordered by segment
+   * start time, then end time. For any segment with no used_flag_last_updated time (e.g. because it predates
+   * the upgrade from legacy Druid), maxUsedFlagLastUpdatedTime is ignored.
    */
-  List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime 
maxEndTime, int limit);
+  List<Interval> getUnusedSegmentIntervals(
+      String dataSource,
+      DateTime maxEndTime,
+      int limit,
+      DateTime maxUsedFlagLastUpdatedTime
+  );
 
   @VisibleForTesting
   void poll();
+
+  /**
+   * Populates the used_flag_last_updated column in the segments table, iterating until no segments remain with a
+   * NULL value for that column.
+   */
+  void populateUsedFlagLastUpdatedAsync();
+
+  void stopAsyncUsedFlagLastUpdatedUpdate();
 }
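
Note on usage: the new fourth argument is just a cutoff timestamp derived from the buffer period. A minimal
sketch of a caller of the new signature (the datasource name, limit, and bufferPeriod variable here are
illustrative, not part of the patch):

    // Only segments whose used flag last changed at least bufferPeriod ago are eligible.
    DateTime maxUsedFlagLastUpdatedTime = DateTimes.nowUtc().minus(bufferPeriod);
    List<Interval> intervals = segmentsMetadataManager.getUnusedSegmentIntervals(
        "wikipedia",          // dataSource (illustrative)
        DateTimes.nowUtc(),   // maxEndTime
        100,                  // limit
        maxUsedFlagLastUpdatedTime
    );
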
diff --git a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java
index 46db26a56d..93e2ae189c 100644
--- a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java
+++ b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java
@@ -55,11 +55,13 @@ import org.joda.time.DateTime;
 import org.joda.time.Duration;
 import org.joda.time.Interval;
 import org.skife.jdbi.v2.BaseResultSetMapper;
+import org.skife.jdbi.v2.Batch;
 import org.skife.jdbi.v2.FoldController;
 import org.skife.jdbi.v2.Handle;
 import org.skife.jdbi.v2.StatementContext;
 import org.skife.jdbi.v2.TransactionCallback;
 import org.skife.jdbi.v2.TransactionStatus;
+import org.skife.jdbi.v2.tweak.HandleCallback;
 import org.skife.jdbi.v2.tweak.ResultSetMapper;
 
 import javax.annotation.Nullable;
@@ -76,6 +78,8 @@ import java.util.Map;
 import java.util.Objects;
 import java.util.Set;
 import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
@@ -229,6 +233,8 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   @GuardedBy("startStopPollLock")
   private @Nullable ScheduledExecutorService exec = null;
 
+  private Future<?> usedFlagLastUpdatedPopulationFuture;
+
   @Inject
   public SqlSegmentsMetadataManager(
       ObjectMapper jsonMapper,
@@ -313,6 +319,110 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
     }
   }
 
+  @Override
+  public void stopAsyncUsedFlagLastUpdatedUpdate()
+  {
+    if (usedFlagLastUpdatedPopulationFuture != null
+        && !usedFlagLastUpdatedPopulationFuture.isDone()
+        && !usedFlagLastUpdatedPopulationFuture.isCancelled()) {
+      usedFlagLastUpdatedPopulationFuture.cancel(true);
+    }
+  }
+
+  @Override
+  public void populateUsedFlagLastUpdatedAsync()
+  {
+    ExecutorService executorService = Executors.newSingleThreadExecutor();
+    usedFlagLastUpdatedPopulationFuture = executorService.submit(this::populateUsedFlagLastUpdated);
+    // Let the worker thread exit once the one-shot population task completes.
+    executorService.shutdown();
+  }
+
+  /**
+   * Populate used_flag_last_updated for unused segments where the column is currently NULL.
+   *
+   * The updates are made incrementally, in batches, with a pause between batches.
+   */
+  @VisibleForTesting
+  void populateUsedFlagLastUpdated()
+  {
+    String segmentsTable = getSegmentsTable();
+    log.info(
+        "Populating used_flag_last_updated with non-NULL values for unused 
segments in [%s]",
+        segmentsTable
+    );
+
+    int limit = 100;
+    int totalUpdatedEntries = 0;
+
+    while (true) {
+      List<String> segmentsToUpdate = new ArrayList<>(100);
+      try {
+        connector.retryWithHandle(
+            new HandleCallback<Void>()
+            {
+              @Override
+              public Void withHandle(Handle handle)
+              {
+                segmentsToUpdate.addAll(handle.createQuery(
+                    StringUtils.format(
+                        "SELECT id FROM %1$s WHERE used_flag_last_updated IS 
NULL and used = :used %2$s",
+                        segmentsTable,
+                        connector.limitClause(limit)
+                    )
+                ).bind("used", false).mapTo(String.class).list());
+                return null;
+              }
+            }
+        );
+
+        if (segmentsToUpdate.isEmpty()) {
+          // We have no segments to process
+          break;
+        }
+
+        connector.retryWithHandle(
+            new HandleCallback<Void>()
+            {
+              @Override
+              public Void withHandle(Handle handle)
+              {
+                Batch updateBatch = handle.createBatch();
+                String sql = "UPDATE %1$s SET used_flag_last_updated = '%2$s' 
WHERE id = '%3$s'";
+                String now = DateTimes.nowUtc().toString();
+                for (String id : segmentsToUpdate) {
+                  updateBatch.add(StringUtils.format(sql, segmentsTable, now, 
id));
+                }
+                updateBatch.execute();
+                return null;
+              }
+            }
+        );
+      }
+      catch (Exception e) {
+        log.warn(e, "Population of used_flag_last_updated in [%s] has failed. 
There may be unused segments with"
+                    + " NULL values for used_flag_last_updated that won't be 
killed!", segmentsTable);
+        return;
+      }
+
+      totalUpdatedEntries += segmentsToUpdate.size();
+      log.info("Updated a batch of %d rows in [%s] with a valid 
used_flag_last_updated date",
+               segmentsToUpdate.size(),
+               segmentsTable
+      );
+      try {
+        Thread.sleep(10000);
+      }
+      catch (InterruptedException e) {
+        log.info("Interrupted, exiting!");
+        Thread.currentThread().interrupt();
+        // Stop iterating; with the interrupt flag set, further sleeps would throw immediately.
+        return;
+      }
+    }
+    log.info(
+        "Finished updating [%s] with a valid used_flag_last_updated date. %d 
rows updated",
+        segmentsTable,
+        totalUpdatedEntries
+    );
+  }
+
   private Runnable createPollTaskForStartOrder(long startOrder, PeriodicDatabasePoll periodicDatabasePoll)
   {
     return () -> {
@@ -520,8 +630,9 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
     try {
       int numUpdatedDatabaseEntries = connector.getDBI().withHandle(
           (Handle handle) -> handle
-              .createStatement(StringUtils.format("UPDATE %s SET used=true 
WHERE id = :id", getSegmentsTable()))
+              .createStatement(StringUtils.format("UPDATE %s SET used=true, 
used_flag_last_updated = :used_flag_last_updated WHERE id = :id", 
getSegmentsTable()))
               .bind("id", segmentId)
+              .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
               .execute()
       );
       // Unlike bulk markAsUsed methods: markAsUsedAllNonOvershadowedSegmentsInDataSource(),
@@ -975,8 +1086,14 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   }
 
   @Override
-  public List<Interval> getUnusedSegmentIntervals(final String dataSource, final DateTime maxEndTime, final int limit)
+  public List<Interval> getUnusedSegmentIntervals(
+      final String dataSource,
+      final DateTime maxEndTime,
+      final int limit,
+      final DateTime maxUsedFlagLastUpdatedTime
+  )
   {
+    // Note that rows where used_flag_last_updated IS NULL are excluded here, to allow a smooth transition to a
+    // Druid version that uses the used_flag_last_updated column.
     return connector.inReadOnlyTransaction(
         new TransactionCallback<List<Interval>>()
         {
@@ -987,7 +1104,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
                 .createQuery(
                     StringUtils.format(
                         "SELECT start, %2$send%2$s FROM %1$s WHERE dataSource 
= :dataSource AND "
-                        + "%2$send%2$s <= :end AND used = false ORDER BY 
start, %2$send%2$s",
+                        + "%2$send%2$s <= :end AND used = false AND 
used_flag_last_updated IS NOT NULL AND used_flag_last_updated <= 
:used_flag_last_updated ORDER BY start, %2$send%2$s",
                         getSegmentsTable(),
                         connector.getQuoteString()
                     )
@@ -996,6 +1113,7 @@ public class SqlSegmentsMetadataManager implements 
SegmentsMetadataManager
                 .setMaxRows(limit)
                 .bind("dataSource", dataSource)
                 .bind("end", maxEndTime.toString())
+                .bind("used_flag_last_updated", 
maxUsedFlagLastUpdatedTime.toString())
                 .map(
                     new BaseResultSetMapper<Interval>()
                     {
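
Taken together, the unused-interval query now effectively has the shape sketched below (a sketch only;
quoting of the end column follows the connector, and druid_segments is the assumed default table name):

    String sql =
        "SELECT start, \"end\" FROM druid_segments "
        + "WHERE dataSource = :dataSource AND \"end\" <= :end AND used = false "
        + "AND used_flag_last_updated IS NOT NULL "  // NULL rows (pre-upgrade) are never eligible
        + "AND used_flag_last_updated <= :used_flag_last_updated "
        + "ORDER BY start, \"end\"";
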
diff --git a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java
index b35116584f..7e4b00b3c7 100644
--- a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java
+++ b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java
@@ -23,6 +23,7 @@ import com.fasterxml.jackson.databind.ObjectMapper;
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.Iterators;
 import org.apache.druid.java.util.common.CloseableIterators;
+import org.apache.druid.java.util.common.DateTimes;
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.Intervals;
 import org.apache.druid.java.util.common.StringUtils;
@@ -148,13 +149,13 @@ public class SqlSegmentsMetadataQuery
     final PreparedBatch batch =
         handle.prepareBatch(
             StringUtils.format(
-                "UPDATE %s SET used = ? WHERE datasource = ? AND id = ?",
+                "UPDATE %s SET used = ?, used_flag_last_updated = ? WHERE 
datasource = ? AND id = ?",
                 dbTables.getSegmentsTable()
             )
         );
 
     for (SegmentId segmentId : segmentIds) {
-      batch.add(used, dataSource, segmentId.toString());
+      batch.add(used, DateTimes.nowUtc().toString(), dataSource, segmentId.toString());
     }
 
     final int[] segmentChanges = batch.execute();
@@ -175,12 +176,13 @@ public class SqlSegmentsMetadataQuery
       return handle
           .createStatement(
               StringUtils.format(
-                  "UPDATE %s SET used=:used WHERE dataSource = :dataSource",
+                  "UPDATE %s SET used=:used, used_flag_last_updated = 
:used_flag_last_updated WHERE dataSource = :dataSource",
                   dbTables.getSegmentsTable()
               )
           )
           .bind("dataSource", dataSource)
           .bind("used", false)
+          .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
           .execute();
     } else if (Intervals.canCompareEndpointsAsStrings(interval)
                && interval.getStart().getYear() == interval.getEnd().getYear()) {
@@ -190,7 +192,7 @@ public class SqlSegmentsMetadataQuery
       return handle
           .createStatement(
               StringUtils.format(
-                  "UPDATE %s SET used=:used WHERE dataSource = :dataSource AND 
%s",
+                  "UPDATE %s SET used=:used, used_flag_last_updated = 
:used_flag_last_updated WHERE dataSource = :dataSource AND %s",
                   dbTables.getSegmentsTable(),
                   
IntervalMode.CONTAINS.makeSqlCondition(connector.getQuoteString(), ":start", 
":end")
               )
@@ -199,6 +201,7 @@ public class SqlSegmentsMetadataQuery
           .bind("used", false)
           .bind("start", interval.getStart().toString())
           .bind("end", interval.getEnd().toString())
+          .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
           .execute();
     } else {
       // Retrieve, then drop, since we can't write a WHERE clause directly.
diff --git a/server/src/main/java/org/apache/druid/metadata/storage/derby/DerbyConnector.java b/server/src/main/java/org/apache/druid/metadata/storage/derby/DerbyConnector.java
index a84b1e975c..dbb8087148 100644
--- a/server/src/main/java/org/apache/druid/metadata/storage/derby/DerbyConnector.java
+++ b/server/src/main/java/org/apache/druid/metadata/storage/derby/DerbyConnector.java
@@ -88,24 +88,6 @@ public class DerbyConnector extends SQLMetadataConnector
                   .isEmpty();
   }
 
-  @Override
-  public boolean tableContainsColumn(Handle handle, String table, String column)
-  {
-    try {
-      DatabaseMetaData databaseMetaData = handle.getConnection().getMetaData();
-      ResultSet columns = databaseMetaData.getColumns(
-          null,
-          null,
-          table.toUpperCase(Locale.ENGLISH),
-          column.toUpperCase(Locale.ENGLISH)
-      );
-      return columns.next();
-    }
-    catch (SQLException e) {
-      return false;
-    }
-  }
-
   @Override
   public String getSerialType()
   {
@@ -186,6 +168,46 @@ public class DerbyConnector extends SQLMetadataConnector
         false
     );
   }
+
+  /**
+   * Interrogates table metadata and returns true or false depending on the existence of the indicated column.
+   *
+   * Overridden here because Derby stores table and column names in upper case, so both names must be upper-cased
+   * before the metadata lookup.
+   *
+   * @param tableName  The table being interrogated
+   * @param columnName The column being looked for
+   * @return boolean indicating the existence of the column in question
+   */
+  @Override
+  public boolean tableHasColumn(String tableName, String columnName)
+  {
+    return getDBI().withHandle(
+        new HandleCallback<Boolean>()
+        {
+          @Override
+          public Boolean withHandle(Handle handle)
+          {
+            try {
+              if (tableExists(handle, tableName)) {
+                DatabaseMetaData dbMetaData = handle.getConnection().getMetaData();
+                ResultSet columns = dbMetaData.getColumns(
+                    null,
+                    null,
+                    tableName.toUpperCase(Locale.ENGLISH),
+                    columnName.toUpperCase(Locale.ENGLISH)
+                );
+                return columns.next();
+              } else {
+                return false;
+              }
+            }
+            catch (SQLException e) {
+              return false;
+            }
+          }
+        }
+    );
+  }
+
   @LifecycleStart
   public void start()
   {
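
Because Derby's metadata lookups are case-sensitive over upper-cased identifiers, this override lets callers
keep passing the configured lower-case names. A minimal sketch (druid_segments is the assumed default table
name):

    // True on Derby even though the stored identifiers are DRUID_SEGMENTS / USED_FLAG_LAST_UPDATED.
    boolean hasColumn = connector.tableHasColumn("druid_segments", "used_flag_last_updated");
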
diff --git a/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java b/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
index fd017dd5ed..a7b8417351 100644
--- a/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
+++ b/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
@@ -440,6 +440,7 @@ public class DruidCoordinator
       );
 
       segmentsMetadataManager.startPollingDatabasePeriodically();
+      segmentsMetadataManager.populateUsedFlagLastUpdatedAsync();
       metadataRuleManager.start();
       lookupCoordinatorManager.start();
       serviceAnnouncer.announce(self);
@@ -534,6 +535,7 @@ public class DruidCoordinator
       lookupCoordinatorManager.stop();
       metadataRuleManager.stop();
       segmentsMetadataManager.stopPollingDatabasePeriodically();
+      segmentsMetadataManager.stopAsyncUsedFlagLastUpdatedUpdate();
 
       if (balancerExec != null) {
         balancerExec.shutdownNow();
diff --git a/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java b/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java
index c28da1f439..4b5610f911 100644
--- a/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java
+++ b/server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinatorConfig.java
@@ -55,6 +55,10 @@ public abstract class DruidCoordinatorConfig
   @Default("false")
   public abstract boolean getCoordinatorKillIgnoreDurationToRetain();
 
+  @Config("druid.coordinator.kill.bufferPeriod")
+  @Default("P30D")
+  public abstract Duration getCoordinatorKillBufferPeriod();
+
   @Config("druid.coordinator.kill.maxSegments")
   @Default("100")
   public abstract int getCoordinatorKillMaxSegments();
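
The new property accepts an ISO-8601 period and defaults to P30D. A minimal sketch of how a configured value
becomes the kill cutoff (Period.parse and the variable names are illustrative; KillUnusedSegments below does
the equivalent via config.getCoordinatorKillBufferPeriod()):

    // runtime.properties: druid.coordinator.kill.bufferPeriod=P30D
    Duration bufferPeriod = Period.parse("P30D").toStandardDuration();
    // A segment is only eligible for kill once its used flag has been false for at least this long.
    DateTime maxUsedFlagLastUpdatedTime = DateTimes.nowUtc().minus(bufferPeriod);
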
diff --git a/server/src/main/java/org/apache/druid/server/coordinator/duty/KillUnusedSegments.java b/server/src/main/java/org/apache/druid/server/coordinator/duty/KillUnusedSegments.java
index bbd58bfe63..fa1fcac260 100644
--- a/server/src/main/java/org/apache/druid/server/coordinator/duty/KillUnusedSegments.java
+++ b/server/src/main/java/org/apache/druid/server/coordinator/duty/KillUnusedSegments.java
@@ -68,6 +68,7 @@ public class KillUnusedSegments implements CoordinatorDuty
   private final boolean ignoreRetainDuration;
   private final int maxSegmentsToKill;
   private long lastKillTime = 0;
+  private final long bufferPeriod;
 
   private final SegmentsMetadataManager segmentsMetadataManager;
   private final OverlordClient overlordClient;
@@ -94,14 +95,16 @@ public class KillUnusedSegments implements CoordinatorDuty
           this.retainDuration
       );
     }
+    this.bufferPeriod = config.getCoordinatorKillBufferPeriod().getMillis();
 
     this.maxSegmentsToKill = config.getCoordinatorKillMaxSegments();
     Preconditions.checkArgument(this.maxSegmentsToKill > 0, "coordinator kill maxSegments must be > 0");
 
     log.info(
-        "Kill Task scheduling enabled with period [%s], retainDuration [%s], 
maxSegmentsToKill [%s]",
+        "Kill Task scheduling enabled with period [%s], retainDuration [%s], 
bufferPeriod [%s], maxSegmentsToKill [%s]",
         this.period,
         this.ignoreRetainDuration ? "IGNORING" : this.retainDuration,
+        this.bufferPeriod,
         this.maxSegmentsToKill
     );
 
@@ -230,7 +233,7 @@ public class KillUnusedSegments implements CoordinatorDuty
                                 : DateTimes.nowUtc().minus(retainDuration);
 
     List<Interval> unusedSegmentIntervals = segmentsMetadataManager
-        .getUnusedSegmentIntervals(dataSource, maxEndTime, maxSegmentsToKill);
+        .getUnusedSegmentIntervals(dataSource, maxEndTime, maxSegmentsToKill, DateTimes.nowUtc().minus(bufferPeriod));
 
     if (CollectionUtils.isNullOrEmpty(unusedSegmentIntervals)) {
       return null;
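
Concretely, with the default 30-day buffer period, a segment whose used flag last flipped to false on
2023-08-01 only becomes eligible for a kill task on or after 2023-08-31, independent of the durationToRetain
check on its data interval. A sketch of that eligibility test (dates illustrative):

    DateTime markedUnusedAt = DateTimes.of("2023-08-01");
    Duration bufferPeriod = Period.parse("P30D").toStandardDuration();
    // Eligible once the mark-unused timestamp falls behind now minus the buffer period.
    boolean eligible = !markedUnusedAt.isAfter(DateTimes.nowUtc().minus(bufferPeriod));
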
diff --git a/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java b/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java
index 444e159411..92b0d22ff0 100644
--- a/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java
+++ b/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java
@@ -385,10 +385,10 @@ public class IndexerSQLMetadataStorageCoordinatorTest
           (int) derbyConnector.getDBI().<Integer>withHandle(
               handle -> {
                 String request = StringUtils.format(
-                    "UPDATE %s SET used = false WHERE id = :id",
+                    "UPDATE %s SET used = false, used_flag_last_updated = 
:used_flag_last_updated WHERE id = :id",
                     
derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable()
                 );
-                return handle.createStatement(request).bind("id", 
segment.getId().toString()).execute();
+                return handle.createStatement(request).bind("id", 
segment.getId().toString()).bind("used_flag_last_updated", 
DateTimes.nowUtc().toString()).execute();
               }
           )
       );
@@ -433,8 +433,8 @@ public class IndexerSQLMetadataStorageCoordinatorTest
         handle -> {
           PreparedBatch preparedBatch = handle.prepareBatch(
               StringUtils.format(
-                  "INSERT INTO %1$s (id, dataSource, created_date, start, 
%2$send%2$s, partitioned, version, used, payload) "
-                  + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload)",
+                  "INSERT INTO %1$s (id, dataSource, created_date, start, 
%2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
+                  + "VALUES (:id, :dataSource, :created_date, :start, :end, 
:partitioned, :version, :used, :payload, :used_flag_last_updated)",
                   table,
                   derbyConnector.getQuoteString()
               )
@@ -449,7 +449,8 @@ public class IndexerSQLMetadataStorageCoordinatorTest
                          .bind("partitioned", !(segment.getShardSpec() 
instanceof NoneShardSpec))
                          .bind("version", segment.getVersion())
                          .bind("used", true)
-                         .bind("payload", mapper.writeValueAsBytes(segment));
+                         .bind("payload", mapper.writeValueAsBytes(segment))
+                         .bind("used_flag_last_updated", 
DateTimes.nowUtc().toString());
           }
 
           final int[] affectedRows = preparedBatch.execute();
diff --git a/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java b/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java
index 4dba35ca84..030123168d 100644
--- a/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java
+++ b/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java
@@ -30,11 +30,13 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
+import org.skife.jdbi.v2.Batch;
 import org.skife.jdbi.v2.DBI;
 import org.skife.jdbi.v2.Handle;
 import org.skife.jdbi.v2.exceptions.CallbackFailedException;
 import org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException;
 import org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException;
+import org.skife.jdbi.v2.tweak.HandleCallback;
 
 import java.sql.SQLException;
 import java.sql.SQLRecoverableException;
@@ -45,6 +47,7 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Locale;
 import java.util.Map;
 import java.util.Set;
 import java.util.stream.Collectors;
@@ -99,7 +102,7 @@ public class SQLMetadataConnectorTest
           for (String column : Arrays.asList("type", "group_id")) {
             Assert.assertTrue(
                 StringUtils.format("Tasks table column %s was not created!", 
column),
-                connector.tableContainsColumn(handle, taskTable, column)
+                connector.tableHasColumn(taskTable, column)
             );
           }
 
@@ -164,6 +167,45 @@ public class SQLMetadataConnectorTest
     }
   }
 
+  /**
+   * Tests the upgrade path where a cluster is coming from a version that did not have used_flag_last_updated
+   * in the segments table.
+   */
+  @Test
+  public void testAlterSegmentTableAddLastUsed()
+  {
+    connector.createSegmentTable();
+
+    // Drop the used_flag_last_updated column to bring the table in line with its pre-upgrade state
+    derbyConnectorRule.getConnector().retryWithHandle(
+        new HandleCallback<Void>()
+        {
+          @Override
+          public Void withHandle(Handle handle)
+          {
+            final Batch batch = handle.createBatch();
+            batch.add(
+                StringUtils.format(
+                    "ALTER TABLE %1$s DROP COLUMN USED_FLAG_LAST_UPDATED",
+                    derbyConnectorRule.metadataTablesConfigSupplier()
+                                      .get()
+                                      .getSegmentsTable()
+                                      .toUpperCase(Locale.ENGLISH)
+                )
+            );
+            batch.execute();
+            return null;
+          }
+        }
+    );
+
+    connector.alterSegmentTableAddUsedFlagLastUpdated();
+    Assert.assertTrue(
+        connector.tableHasColumn(
+            derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable(),
+            "USED_FLAG_LAST_UPDATED"
+        )
+    );
+  }
+
   @Test
   public void testInsertOrUpdate()
   {
diff --git a/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java b/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java
index 6dad542644..9e289263bb 100644
--- a/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java
+++ b/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java
@@ -37,6 +37,8 @@ import org.apache.druid.server.metrics.NoopServiceEmitter;
 import org.apache.druid.timeline.DataSegment;
 import org.apache.druid.timeline.SegmentId;
 import org.apache.druid.timeline.partition.NoneShardSpec;
+import org.joda.time.DateTime;
+import org.joda.time.Duration;
 import org.joda.time.Interval;
 import org.joda.time.Period;
 import org.junit.After;
@@ -44,8 +46,13 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
+import org.skife.jdbi.v2.Handle;
+import org.skife.jdbi.v2.tweak.HandleCallback;
 
 import java.io.IOException;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
 import java.util.Set;
 import java.util.stream.Collectors;
 
@@ -101,8 +108,18 @@ public class SqlSegmentsMetadataManagerTest
   );
 
   private void publish(DataSegment segment, boolean used) throws IOException
+  {
+    publish(segment, used, DateTimes.nowUtc());
+  }
+
+  private void publish(DataSegment segment, boolean used, DateTime usedFlagLastUpdated) throws IOException
   {
     boolean partitioned = !(segment.getShardSpec() instanceof NoneShardSpec);
+
+    String usedFlagLastUpdatedStr = null;
+    if (null != usedFlagLastUpdated) {
+      usedFlagLastUpdatedStr = usedFlagLastUpdated.toString();
+    }
     publisher.publishSegment(
         segment.getId().toString(),
         segment.getDataSource(),
@@ -112,7 +129,8 @@ public class SqlSegmentsMetadataManagerTest
         partitioned,
         segment.getVersion(),
         used,
-        jsonMapper.writeValueAsBytes(segment)
+        jsonMapper.writeValueAsBytes(segment),
+        usedFlagLastUpdatedStr
     );
   }
 
@@ -350,7 +368,8 @@ public class SqlSegmentsMetadataManagerTest
         true,
         "corrupt-version",
         true,
-        StringUtils.toUtf8("corrupt-payload")
+        StringUtils.toUtf8("corrupt-payload"),
+        "corrupt-last-used-date"
     );
 
     EmittingLogger.registerEmitter(new NoopServiceEmitter());
@@ -364,28 +383,76 @@ public class SqlSegmentsMetadataManagerTest
   }
 
   @Test
-  public void testGetUnusedSegmentIntervals()
+  public void testGetUnusedSegmentIntervals() throws IOException
   {
     sqlSegmentsMetadataManager.startPollingDatabasePeriodically();
     sqlSegmentsMetadataManager.poll();
+
+    // Alter the segment table to allow a nullable used_flag_last_updated, in order to test compatibility
+    // during a Druid upgrade from a version without the column.
+    derbyConnectorRule.allowUsedFlagLastUpdatedToBeNullable();
+
     Assert.assertTrue(sqlSegmentsMetadataManager.isPollingDatabasePeriodically());
     int numChangedSegments = sqlSegmentsMetadataManager.markAsUnusedAllSegmentsInDataSource("wikipedia");
     Assert.assertEquals(2, numChangedSegments);
 
+    String newDs = "newDataSource";
+    final DataSegment newSegment = createSegment(
+        newDs,
+        "2017-10-15T00:00:00.000/2017-10-16T00:00:00.000",
+        "2017-10-15T20:19:12.565Z",
+        "wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
+        0
+    );
+    publish(newSegment, false, DateTimes.nowUtc().minus(Duration.parse("PT7200S").getMillis()));
+
+    final DataSegment newSegment2 = createSegment(
+        newDs,
+        "2017-10-16T00:00:00.000/2017-10-17T00:00:00.000",
+        "2017-10-15T20:19:12.565Z",
+        "wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
+        0
+    );
+    publish(newSegment2, false, DateTimes.nowUtc().minus(Duration.parse("PT172800S").getMillis()));
+
+    final DataSegment newSegment3 = createSegment(
+        newDs,
+        "2017-10-17T00:00:00.000/2017-10-18T00:00:00.000",
+        "2017-10-15T20:19:12.565Z",
+        "wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
+        0
+    );
+    publish(newSegment3, false, null);
+
     Assert.assertEquals(
         ImmutableList.of(segment2.getInterval()),
-        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 1)
+        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 1, DateTimes.COMPARE_DATE_AS_STRING_MAX)
     );
 
     // Test the DateTime maxEndTime argument of getUnusedSegmentIntervals
     Assert.assertEquals(
         ImmutableList.of(segment2.getInterval()),
-        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of(2012, 1, 7, 0, 0), 1)
+        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of(2012, 1, 7, 0, 0), 1, DateTimes.COMPARE_DATE_AS_STRING_MAX)
     );
 
     Assert.assertEquals(
         ImmutableList.of(segment2.getInterval(), segment1.getInterval()),
-        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5)
+        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5, DateTimes.COMPARE_DATE_AS_STRING_MAX)
+    );
+
+    // Test a buffer period that should exclude some segments.
+
+    // The wikipedia datasource has segments whose used_flag_last_updated is roughly the time of the test run,
+    // so none of them should be selected with a buffer period of 1 day.
+    Assert.assertEquals(
+        ImmutableList.of(),
+        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
+    );
+
+    // Of the 3 segments in newDs: one has a NULL used_flag_last_updated, so getUnusedSegmentIntervals never
+    // returns it; one has a used_flag_last_updated older than 1 day, so it should be returned; and the last
+    // was updated less than a day ago, so it should not be returned.
+    Assert.assertEquals(
+        ImmutableList.of(newSegment2.getInterval()),
+        sqlSegmentsMetadataManager.getUnusedSegmentIntervals(newDs, DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
     );
   }
 
@@ -870,4 +937,43 @@ public class SqlSegmentsMetadataManagerTest
     Assert.assertTrue(dataSegmentSet.contains(newSegment2));
   }
 
+  @Test
+  public void testPopulateUsedFlagLastUpdated() throws IOException
+  {
+    derbyConnectorRule.allowUsedFlagLastUpdatedToBeNullable();
+    final DataSegment newSegment = createSegment(
+        "dummyDS",
+        "2017-10-17T00:00:00.000/2017-10-18T00:00:00.000",
+        "2017-10-15T20:19:12.565Z",
+        "wikipedia2/index/y=2017/m=10/d=15/2017-10-16T20:19:12.565Z/0/index.zip",
+        0
+    );
+    publish(newSegment, false, null);
+    Assert.assertTrue(getCountOfRowsWithLastUsedNull() > 0);
+    sqlSegmentsMetadataManager.populateUsedFlagLastUpdated();
+    Assert.assertEquals(0, getCountOfRowsWithLastUsedNull());
+  }
+
+  private int getCountOfRowsWithLastUsedNull()
+  {
+    return derbyConnectorRule.getConnector().retryWithHandle(
+        new HandleCallback<Integer>()
+        {
+          @Override
+          public Integer withHandle(Handle handle)
+          {
+            List<Map<String, Object>> lst = handle.select(
+                StringUtils.format(
+                    "SELECT * FROM %1$s WHERE USED_FLAG_LAST_UPDATED IS NULL",
+                    derbyConnectorRule.metadataTablesConfigSupplier()
+                                      .get()
+                                      .getSegmentsTable()
+                                      .toUpperCase(Locale.ENGLISH)
+                )
+            );
+            return lst.size();
+          }
+        }
+    );
+  }
 }
diff --git a/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java b/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java
index e5460ce402..6fff6f62ad 100644
--- a/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java
+++ b/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java
@@ -25,10 +25,14 @@ import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.metadata.storage.derby.DerbyConnector;
 import org.junit.Assert;
 import org.junit.rules.ExternalResource;
+import org.skife.jdbi.v2.Batch;
 import org.skife.jdbi.v2.DBI;
+import org.skife.jdbi.v2.Handle;
 import org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException;
+import org.skife.jdbi.v2.tweak.HandleCallback;
 
 import java.sql.SQLException;
+import java.util.Locale;
 import java.util.UUID;
 
 public class TestDerbyConnector extends DerbyConnector
@@ -135,5 +139,27 @@ public class TestDerbyConnector extends DerbyConnector
     {
       return dbTables;
     }
+
+    public void allowUsedFlagLastUpdatedToBeNullable()
+    {
+      connector.retryWithHandle(
+          new HandleCallback<Void>()
+          {
+            @Override
+            public Void withHandle(Handle handle)
+            {
+              final Batch batch = handle.createBatch();
+              batch.add(
+                  StringUtils.format(
+                      "ALTER TABLE %1$s ALTER COLUMN USED_FLAG_LAST_UPDATED 
NULL",
+                      
dbTables.get().getSegmentsTable().toUpperCase(Locale.ENGLISH)
+                  )
+              );
+              batch.execute();
+              return null;
+            }
+          }
+      );
+    }
   }
 }
diff --git a/server/src/test/java/org/apache/druid/server/coordinator/TestDruidCoordinatorConfig.java b/server/src/test/java/org/apache/druid/server/coordinator/TestDruidCoordinatorConfig.java
index d6089557c3..93b1246af1 100644
--- a/server/src/test/java/org/apache/druid/server/coordinator/TestDruidCoordinatorConfig.java
+++ b/server/src/test/java/org/apache/druid/server/coordinator/TestDruidCoordinatorConfig.java
@@ -46,6 +46,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
   private final int curatorLoadQueuePeonNumCallbackThreads;
   private final Duration httpLoadQueuePeonHostTimeout;
   private final int httpLoadQueuePeonBatchSize;
+  private final Duration coordinatorKillBufferPeriod;
 
   public TestDruidCoordinatorConfig(
       Duration coordinatorStartDelay,
@@ -70,7 +71,8 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
       Duration httpLoadQueuePeonRepeatDelay,
       Duration httpLoadQueuePeonHostTimeout,
       int httpLoadQueuePeonBatchSize,
-      int curatorLoadQueuePeonNumCallbackThreads
+      int curatorLoadQueuePeonNumCallbackThreads,
+      Duration coordinatorKillBufferPeriod
   )
   {
     this.coordinatorStartDelay = coordinatorStartDelay;
@@ -96,6 +98,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
     this.httpLoadQueuePeonHostTimeout = httpLoadQueuePeonHostTimeout;
     this.httpLoadQueuePeonBatchSize = httpLoadQueuePeonBatchSize;
     this.curatorLoadQueuePeonNumCallbackThreads = curatorLoadQueuePeonNumCallbackThreads;
+    this.coordinatorKillBufferPeriod = coordinatorKillBufferPeriod;
   }
 
   @Override
@@ -236,6 +239,12 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
     return httpLoadQueuePeonBatchSize;
   }
 
+  @Override
+  public Duration getCoordinatorKillBufferPeriod()
+  {
+    return coordinatorKillBufferPeriod;
+  }
+
   public static class Builder
   {
     private static final Duration DEFAULT_COORDINATOR_START_DELAY = new Duration("PT300s");
@@ -261,6 +270,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
     private static final int DEFAULT_HTTP_LOAD_QUEUE_PEON_BATCH_SIZE = 1;
     private static final Duration DEFAULT_COORDINATOR_AUDIT_KILL_PERIOD = new Duration("PT86400s");
     private static final Duration DEFAULT_COORDINATOR_AUTIT_KILL_DURATION_TO_RETAIN = new Duration("PT7776000s");
+    private static final Duration DEFAULT_COORDINATOR_KILL_BUFFER_PERIOD = new Duration("PT86400s");
 
 
     private Duration coordinatorStartDelay;
@@ -286,6 +296,7 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
     private Integer httpLoadQueuePeonBatchSize;
     private Duration coordinatorAuditKillPeriod;
     private Duration coordinatorAuditKillDurationToRetain;
+    private Duration coordinatorKillBufferPeriod;
 
     public Builder()
     {
@@ -429,6 +440,12 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
       return this;
     }
 
+    public Builder withCoordinatorKillBufferPeriod(Duration coordinatorKillBufferPeriod)
+    {
+      this.coordinatorKillBufferPeriod = coordinatorKillBufferPeriod;
+      return this;
+    }
+
     public TestDruidCoordinatorConfig build()
     {
       return new TestDruidCoordinatorConfig(
@@ -455,7 +472,8 @@ public class TestDruidCoordinatorConfig extends DruidCoordinatorConfig
           httpLoadQueuePeonHostTimeout == null ? DEFAULT_HTTP_LOAD_QUEUE_PEON_HOST_TIMEOUT : httpLoadQueuePeonHostTimeout,
           httpLoadQueuePeonBatchSize == null ? DEFAULT_HTTP_LOAD_QUEUE_PEON_BATCH_SIZE : httpLoadQueuePeonBatchSize,
           curatorLoadQueuePeonNumCallbackThreads == null ? DEFAULT_CURATOR_LOAD_QUEUE_PEON_NUM_CALLBACK_THREADS
-                                                         : curatorLoadQueuePeonNumCallbackThreads
+                                                         : curatorLoadQueuePeonNumCallbackThreads,
+          coordinatorKillBufferPeriod == null ? DEFAULT_COORDINATOR_KILL_BUFFER_PERIOD : coordinatorKillBufferPeriod
       );
     }
 
diff --git a/server/src/test/java/org/apache/druid/server/coordinator/duty/KillUnusedSegmentsTest.java b/server/src/test/java/org/apache/druid/server/coordinator/duty/KillUnusedSegmentsTest.java
index 5d0c81385c..72a4d115f9 100644
--- a/server/src/test/java/org/apache/druid/server/coordinator/duty/KillUnusedSegmentsTest.java
+++ b/server/src/test/java/org/apache/druid/server/coordinator/duty/KillUnusedSegmentsTest.java
@@ -103,6 +103,7 @@ public class KillUnusedSegmentsTest
     Mockito.doReturn(DURATION_TO_RETAIN).when(config).getCoordinatorKillDurationToRetain();
     Mockito.doReturn(INDEXING_PERIOD).when(config).getCoordinatorIndexingPeriod();
     Mockito.doReturn(MAX_SEGMENTS_TO_KILL).when(config).getCoordinatorKillMaxSegments();
+    Mockito.doReturn(Duration.parse("PT3154000000S")).when(config).getCoordinatorKillBufferPeriod();
 
     Mockito.doReturn(Collections.singleton("DS1"))
            .when(coordinatorDynamicConfig).getSpecificDataSourcesToKillUnusedSegmentsIn();
@@ -129,7 +130,8 @@ public class KillUnusedSegmentsTest
         segmentsMetadataManager.getUnusedSegmentIntervals(
             ArgumentMatchers.anyString(),
             ArgumentMatchers.any(),
-            ArgumentMatchers.anyInt()
+            ArgumentMatchers.anyInt(),
+            ArgumentMatchers.any()
         )
     ).thenAnswer(invocation -> {
       DateTime maxEndTime = invocation.getArgument(1);
@@ -153,7 +155,8 @@ public class KillUnusedSegmentsTest
     Mockito.doReturn(null).when(segmentsMetadataManager).getUnusedSegmentIntervals(
         ArgumentMatchers.anyString(),
         ArgumentMatchers.any(),
-        ArgumentMatchers.anyInt()
+        ArgumentMatchers.anyInt(),
+        ArgumentMatchers.any()
     );
 
     mockTaskSlotUsage(1.0, Integer.MAX_VALUE, 1, 10);
diff --git a/server/src/test/java/org/apache/druid/server/coordinator/simulate/TestSegmentsMetadataManager.java b/server/src/test/java/org/apache/druid/server/coordinator/simulate/TestSegmentsMetadataManager.java
index e0d1d887ef..77711ce906 100644
--- a/server/src/test/java/org/apache/druid/server/coordinator/simulate/TestSegmentsMetadataManager.java
+++ b/server/src/test/java/org/apache/druid/server/coordinator/simulate/TestSegmentsMetadataManager.java
@@ -199,7 +199,7 @@ public class TestSegmentsMetadataManager implements SegmentsMetadataManager
   }
 
   @Override
-  public List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime maxEndTime, int limit)
+  public List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime maxEndTime, int limit, DateTime maxUsedFlagLastUpdatedTime)
   {
     return null;
   }
@@ -209,4 +209,14 @@ public class TestSegmentsMetadataManager implements SegmentsMetadataManager
   {
 
   }
+
+  @Override
+  public void populateUsedFlagLastUpdatedAsync()
+  {
+  }
+
+  @Override
+  public void stopAsyncUsedFlagLastUpdatedUpdate()
+  {
+  }
 }
diff --git a/services/src/main/java/org/apache/druid/cli/Main.java b/services/src/main/java/org/apache/druid/cli/Main.java
index ffd60410b0..8f3754b5ce 100644
--- a/services/src/main/java/org/apache/druid/cli/Main.java
+++ b/services/src/main/java/org/apache/druid/cli/Main.java
@@ -76,7 +76,8 @@ public class Main
         DumpSegment.class,
         ResetCluster.class,
         ValidateSegments.class,
-        ExportMetadata.class
+        ExportMetadata.class,
+        UpdateTables.class
     );
     builder.withGroup("tools")
            .withDescription("Various tools for working with Druid")
diff --git a/services/src/main/java/org/apache/druid/cli/UpdateTables.java b/services/src/main/java/org/apache/druid/cli/UpdateTables.java
new file mode 100644
index 0000000000..9a5ba9bc39
--- /dev/null
+++ b/services/src/main/java/org/apache/druid/cli/UpdateTables.java
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.cli;
+
+import com.github.rvesse.airline.annotations.Command;
+import com.github.rvesse.airline.annotations.Option;
+import com.github.rvesse.airline.annotations.restrictions.Required;
+import com.google.common.collect.ImmutableList;
+import com.google.inject.Injector;
+import com.google.inject.Key;
+import com.google.inject.Module;
+import org.apache.druid.guice.DruidProcessingModule;
+import org.apache.druid.guice.JsonConfigProvider;
+import org.apache.druid.guice.QueryRunnerFactoryModule;
+import org.apache.druid.guice.QueryableModule;
+import org.apache.druid.guice.annotations.Self;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.metadata.MetadataStorageConnector;
+import org.apache.druid.metadata.MetadataStorageConnectorConfig;
+import org.apache.druid.metadata.MetadataStorageTablesConfig;
+import org.apache.druid.server.DruidNode;
+
+import java.util.List;
+
+@Command(
+    name = "metadata-update",
+    description = "Controlled update of metadata storage"
+)
+public class UpdateTables extends GuiceRunnable
+{
+  private static final String SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED = "add-used-flag-last-updated-to-segments";
+
+  @Option(name = "--connectURI", description = "Database JDBC connection string")
+  @Required
+  private String connectURI;
+
+  @Option(name = "--user", description = "Database username")
+  @Required
+  private String user;
+
+  @Option(name = "--password", description = "Database password")
+  @Required
+  private String password;
+
+  @Option(name = "--base", description = "Base table name")
+  private String base;
+
+  @Option(name = "--action", description = "Action Name")
+  private String action_name;
+
+  private static final Logger log = new Logger(UpdateTables.class);
+
+  public UpdateTables()
+  {
+    super(log);
+  }
+
+  @Override
+  protected List<? extends Module> getModules()
+  {
+    return ImmutableList.of(
+        // It's unknown why these modules are required in CreateTables, or whether all of them are needed.
+        // Maybe some of those modules could be removed.
+        // See https://github.com/apache/druid/pull/4429#discussion_r123602930
+        new DruidProcessingModule(),
+        new QueryableModule(),
+        new QueryRunnerFactoryModule(),
+        binder -> {
+          JsonConfigProvider.bindInstance(
+              binder,
+              Key.get(MetadataStorageConnectorConfig.class),
+              new MetadataStorageConnectorConfig()
+              {
+                @Override
+                public String getConnectURI()
+                {
+                  return connectURI;
+                }
+
+                @Override
+                public String getUser()
+                {
+                  return user;
+                }
+
+                @Override
+                public String getPassword()
+                {
+                  return password;
+                }
+              }
+          );
+          JsonConfigProvider.bindInstance(
+              binder,
+              Key.get(MetadataStorageTablesConfig.class),
+              MetadataStorageTablesConfig.fromBase(base)
+          );
+          JsonConfigProvider.bindInstance(
+              binder,
+              Key.get(DruidNode.class, Self.class),
+              new DruidNode("tools", "localhost", false, -1, null, true, false)
+          );
+        }
+    );
+  }
+
+  @Override
+  public void run()
+  {
+    final Injector injector = makeInjector();
+    MetadataStorageConnector dbConnector = injector.getInstance(MetadataStorageConnector.class);
+    if (SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED.equals(action_name)) {
+      dbConnector.alterSegmentTableAddUsedFlagLastUpdated();
+    }
+  }
+}
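
Operators who want to run the schema change by hand can drive it through this tool. Based on the options and
command registration above, an invocation would look something like the following (classpath and credentials
are placeholders; the action string is the constant defined at the top of the class):

    java -classpath <druid classpath> org.apache.druid.cli.Main tools metadata-update \
        --connectURI <jdbc uri> --user <user> --password <password> --base druid \
        --action add-used-flag-last-updated-to-segments
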
diff --git a/website/.spelling b/website/.spelling
index b34ad9c6e8..200aec5914 100644
--- a/website/.spelling
+++ b/website/.spelling
@@ -53,6 +53,7 @@ Ceph
 CloudWatch
 ColumnDescriptor
 Corretto
+CLI
 DDL
 DML
 DNS


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
