This is an automated email from the ASF dual-hosted git repository.

kfaraz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 097b645005 Clean up after add kill bufferPeriod (#14868)
097b645005 is described below

commit 097b6450057426d21fbcaa8319ae582b33976690
Author: Kashif Faraz <[email protected]>
AuthorDate: Sat Aug 19 00:00:04 2023 +0530

    Clean up after add kill bufferPeriod (#14868)
    
    Follow-up changes to #12599
    
    Changes:
    - Rename column `used_flag_last_updated` to `used_status_last_updated`
    - Remove the new CLI tool `UpdateTables`.
      - We already have a `CreateTables` tool with similar functionality, which 
      should be able to handle update cases too.
      - Any user running the cluster for the first time should either just have 
      `connector.createTables` enabled or run `CreateTables`, which should create 
      tables at the latest version.
      - For instance, the `UpdateTables` tool would be inadequate when a new 
      metadata table has been added to Druid, and users would have to run 
      `CreateTables` anyway.
    - Remove `upgrade-prep.md` and include that info in `metadata-init.md`.
    - Fix log messages to adhere to Druid style
    - Use lambdas
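
    For reference, the column rename above corresponds to a DDL change along 
    these lines (a sketch only; exact syntax varies by metadata store, and per 
    the docs changes in this commit the `metadata-init`/`CreateTables` path 
    applies any required ALTERs automatically):

    ```sql
    -- Hypothetical manual equivalent of the rename (valid on MySQL 8+ and
    -- PostgreSQL); normally the metadata-init tooling handles this for you.
    ALTER TABLE druid_segments
    RENAME COLUMN used_flag_last_updated TO used_status_last_updated;
    ```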
---
 docs/design/metadata-storage.md                    |   4 +-
 docs/operations/metadata-migration.md              |   6 +
 docs/operations/upgrade-prep.md                    |  71 --------
 .../test-data/high-availability-sample-data.sql    |  10 +-
 .../docker/test-data/ldap-security-sample-data.sql |   2 +-
 .../docker/test-data/query-error-sample-data.sql   |  10 +-
 .../docker/test-data/query-retry-sample-data.sql   |  10 +-
 .../docker/test-data/query-sample-data.sql         |  10 +-
 .../docker/test-data/security-sample-data.sql      |   2 +-
 .../druid/metadata/MetadataStorageConnector.java   |   8 -
 .../metadata/TestMetadataStorageConnector.java     |   6 -
 .../SQLMetadataStorageUpdaterJobHandler.java       |   6 +-
 .../IndexerSQLMetadataStorageCoordinator.java      |   6 +-
 .../druid/metadata/SQLMetadataConnector.java       | 192 +++++++++------------
 .../metadata/SQLMetadataSegmentPublisher.java      |   6 +-
 .../druid/metadata/SegmentsMetadataManager.java    |   6 +-
 .../druid/metadata/SqlSegmentsMetadataManager.java |  26 +--
 .../druid/metadata/SqlSegmentsMetadataQuery.java   |  10 +-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  |  10 +-
 .../druid/metadata/SQLMetadataConnectorTest.java   |   8 +-
 .../metadata/SqlSegmentsMetadataManagerTest.java   |  10 +-
 .../apache/druid/metadata/TestDerbyConnector.java  |   2 +-
 .../src/main/java/org/apache/druid/cli/Main.java   |   3 +-
 .../java/org/apache/druid/cli/UpdateTables.java    | 134 --------------
 24 files changed, 154 insertions(+), 404 deletions(-)

diff --git a/docs/design/metadata-storage.md b/docs/design/metadata-storage.md
index fc741f9796..20439aa135 100644
--- a/docs/design/metadata-storage.md
+++ b/docs/design/metadata-storage.md
@@ -103,8 +103,8 @@ system. The table has two main functional columns, the 
other columns are for ind
 Value 1 in the `used` column means that the segment should be "used" by the 
cluster (i.e., it should be loaded and
 available for requests). Value 0 means that the segment should not be loaded 
into the cluster. We do this as a means of
 unloading segments from the cluster without actually removing their metadata 
(which allows for simpler rolling back if
-that is ever an issue). The `used` column has a corresponding 
`used_flag_last_updated` column that indicates the date at the instant
-that the `used` status of the segment was last updated. This information can 
be used by the coordinator to determine if
+that is ever an issue). The `used` column has a corresponding 
`used_status_last_updated` column, which denotes the time
+when the `used` status of the segment was last updated. This information can 
be used by the Coordinator to determine if
 a segment is a candidate for deletion (if automated segment killing is 
enabled).
 
 The `payload` column stores a JSON blob that has all of the metadata for the 
segment.
diff --git a/docs/operations/metadata-migration.md 
b/docs/operations/metadata-migration.md
index b38972caf4..ea3596784a 100644
--- a/docs/operations/metadata-migration.md
+++ b/docs/operations/metadata-migration.md
@@ -57,6 +57,8 @@ Update your Druid runtime properties with the new metadata 
configuration.
 
 ### Create Druid tables
 
+**If you have set `druid.metadata.storage.connector.createTables` to `true` 
(which is the default), and your metadata connect user has DDL privileges, you 
can disregard this section, as Druid will create metadata tables automatically 
on startup.**
+
 Druid provides a `metadata-init` tool for creating Druid's metadata tables. 
After initializing the Druid database, you can run the commands shown below 
from the root of the Druid package to initialize the tables.
 
 In the example commands below:
@@ -82,6 +84,10 @@ cd ${DRUID_ROOT}
 java -classpath "lib/*" 
-Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml 
-Ddruid.extensions.directory="extensions" 
-Ddruid.extensions.loadList="[\"postgresql-metadata-storage\"]" 
-Ddruid.metadata.storage.type=postgresql -Ddruid.node.type=metadata-init 
org.apache.druid.cli.Main tools metadata-init --connectURI="<postgresql-uri>" 
--user <user> --password <pass> --base druid
 ```
 
+### Update Druid tables to latest compatible schema
+
+The same command as above can be used to update Druid metadata tables to the 
latest version. If a table already exists, it is not created again, but any 
ALTER statements that may be required are still executed.
+
 ### Import metadata
 
 After initializing the tables, please refer to the [import 
commands](../operations/export-metadata.md#importing-metadata) for your target 
database.
diff --git a/docs/operations/upgrade-prep.md b/docs/operations/upgrade-prep.md
deleted file mode 100644
index 03fc24c9de..0000000000
--- a/docs/operations/upgrade-prep.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-id: upgrade-prep
-title: "Upgrade Prep"
----
-
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~   http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing,
-  ~ software distributed under the License is distributed on an
-  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  ~ KIND, either express or implied.  See the License for the
-  ~ specific language governing permissions and limitations
-  ~ under the License.
-  -->
-  
-## Upgrade to `0.24+` from `0.23` and earlier
-
-### Altering segments table
-
-**If you have set `druid.metadata.storage.connector.createTables` to `true` 
(which is the default), and your metadata connect user has DDL privileges, you 
can disregard this section.**
-
-**The coordinator and overlord services will fail if you do not execute this 
change prior to the upgrade**
-
-A new column, `used_flag_last_updated`, is needed in the segments table to 
support new
-segment killing functionality. You can manually alter the table, or you can use
-a CLI tool to perform the update.
-
-#### CLI tool
-
-Druid provides a `metadata-update` tool for updating Druid's metadata tables.
-
-In the example commands below:
-
-- `lib` is the Druid lib directory
-- `extensions` is the Druid extensions directory
-- `base` corresponds to the value of `druid.metadata.storage.tables.base` in 
the configuration, `druid` by default.
-- The `--connectURI` parameter corresponds to the value of 
`druid.metadata.storage.connector.connectURI`.
-- The `--user` parameter corresponds to the value of 
`druid.metadata.storage.connector.user`.
-- The `--password` parameter corresponds to the value of 
`druid.metadata.storage.connector.password`.
-- The `--action` parameter corresponds to the update action you are executing. 
In this case it is: `add-last-used-to-segments`
-
-##### MySQL
-
-```bash
-cd ${DRUID_ROOT}
-java -classpath "lib/*" 
-Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml 
-Ddruid.extensions.directory="extensions" 
-Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] 
-Ddruid.metadata.storage.type=mysql org.apache.druid.cli.Main tools 
metadata-update --connectURI="<mysql-uri>" --user <user> --password <pass> 
--base druid --action add-used-flag-last-updated-to-segments
-```
-
-##### PostgreSQL
-
-```bash
-cd ${DRUID_ROOT}
-java -classpath "lib/*" 
-Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml 
-Ddruid.extensions.directory="extensions" 
-Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] 
-Ddruid.metadata.storage.type=postgresql org.apache.druid.cli.Main tools 
metadata-update --connectURI="<postgresql-uri>" --user <user> --password <pass> 
--base druid --action add-used-flag-last-updated-to-segments
-```
-
-
-#### Manual ALTER TABLE
-
-```SQL
-ALTER TABLE druid_segments
-ADD used_flag_last_updated varchar(255);
-```
diff --git 
a/integration-tests/docker/test-data/high-availability-sample-data.sql 
b/integration-tests/docker/test-data/high-availability-sample-data.sql
index 93293cc2af..abe0f11518 100644
--- a/integration-tests/docker/test-data/high-availability-sample-data.sql
+++ b/integration-tests/docker/test-data/high-availability-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_flag_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":
 [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_status_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:
 [...]
diff --git a/integration-tests/docker/test-data/ldap-security-sample-data.sql 
b/integration-tests/docker/test-data/ldap-security-sample-data.sql
index 5ae57750fd..732cc55d4a 100644
--- a/integration-tests/docker/test-data/ldap-security-sample-data.sql
+++ b/integration-tests/docker/test-data/ldap-security-sample-data.sql
@@ -14,4 +14,4 @@
 -- limitations under the License.
 
 INSERT INTO druid_tasks (id, created_date, datasource, payload, 
status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', 
'2030-04-30T01:13:31.893Z', 'auth_test', 
'{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}',
 
'{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}',
 0);
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"l
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\
 [...]
diff --git a/integration-tests/docker/test-data/query-error-sample-data.sql 
b/integration-tests/docker/test-data/query-error-sample-data.sql
index 93293cc2af..abe0f11518 100644
--- a/integration-tests/docker/test-data/query-error-sample-data.sql
+++ b/integration-tests/docker/test-data/query-error-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_flag_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":
 [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_status_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:
 [...]
diff --git a/integration-tests/docker/test-data/query-retry-sample-data.sql 
b/integration-tests/docker/test-data/query-retry-sample-data.sql
index 93293cc2af..abe0f11518 100644
--- a/integration-tests/docker/test-data/query-retry-sample-data.sql
+++ b/integration-tests/docker/test-data/query-retry-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_flag_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13
 [...]
+INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":
 [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, 
partitioned, version, used, payload,used_status_last_updated) VALUES 
('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z',
 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', 
'2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', 
'{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:
 [...]
diff --git a/integration-tests/docker/test-data/query-sample-data.sql 
b/integration-tests/docker/test-data/query-sample-data.sql
index 93293cc2af..abe0f11518 100644
--- a/integration-tests/docker/test-data/query-sample-data.sql
+++ b/integration-tests/docker/test-data/query-sample-data.sql
@@ -13,8 +13,8 @@
 -- See the License for the specific language governing permissions and
 -- limitations under the License.
 
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41.9
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58.7
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13.5
 [...]
-INSERT INTO druid_segments 
(id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated)
 VALUES 
('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"
 [...]
-INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_flag_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48 [...]
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-01T00:00:00.000Z_2013-01-02T00:00:00.000Z_2013-01-02T04:13:41.980Z_v9','twitterstream','2013-05-13T01:08:18.192Z','2013-01-01T00:00:00.000Z','2013-01-02T00:00:00.000Z',0,'2013-01-02T04:13:41.980Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-01T00:00:00.000Z/2013-01-02T00:00:00.000Z\",\"version\":\"2013-01-02T04:13:41 [...]
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-02T00:00:00.000Z_2013-01-03T00:00:00.000Z_2013-01-03T03:44:58.791Z_v9','twitterstream','2013-05-13T00:03:28.640Z','2013-01-02T00:00:00.000Z','2013-01-03T00:00:00.000Z',0,'2013-01-03T03:44:58.791Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-02T00:00:00.000Z/2013-01-03T00:00:00.000Z\",\"version\":\"2013-01-03T03:44:58 [...]
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('twitterstream_2013-01-03T00:00:00.000Z_2013-01-04T00:00:00.000Z_2013-01-04T04:09:13.590Z_v9','twitterstream','2013-05-13T00:03:48.807Z','2013-01-03T00:00:00.000Z','2013-01-04T00:00:00.000Z',0,'2013-01-04T04:09:13.590Z_v9',1,'{\"dataSource\":\"twitterstream\",\"interval\":\"2013-01-03T00:00:00.000Z/2013-01-04T00:00:00.000Z\",\"version\":\"2013-01-04T04:09:13 [...]
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','wikipedia_editstream','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"wikipedia_editstream\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\": [...]
+INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload,used_status_last_updated) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22: [...]
diff --git a/integration-tests/docker/test-data/security-sample-data.sql b/integration-tests/docker/test-data/security-sample-data.sql
index 5ae57750fd..732cc55d4a 100644
--- a/integration-tests/docker/test-data/security-sample-data.sql
+++ b/integration-tests/docker/test-data/security-sample-data.sql
@@ -14,4 +14,4 @@
 -- limitations under the License.
 
 INSERT INTO druid_tasks (id, created_date, datasource, payload, status_payload, active) VALUES ('index_auth_test_2030-04-30T01:13:31.893Z', '2030-04-30T01:13:31.893Z', 'auth_test', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"created_date\":\"2030-04-30T01:13:31.893Z\",\"datasource\":\"auth_test\",\"active\":0}', '{\"id\":\"index_auth_test_2030-04-30T01:13:31.893Z\",\"status\":\"SUCCESS\",\"duration\":1}', 0);
-INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_flag_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\"l [...]
+INSERT INTO druid_segments (id,dataSource,created_date,start,end,partitioned,version,used,payload,used_status_last_updated) VALUES ('auth_test_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9','auth_test','2013-03-15T20:49:52.348Z','2012-12-29T00:00:00.000Z','2013-01-10T08:00:00.000Z',0,'2013-01-10T08:13:47.830Z_v9',1,'{\"dataSource\":\"auth_test\",\"interval\":\"2012-12-29T00:00:00.000Z/2013-01-10T08:00:00.000Z\",\"version\":\"2013-01-10T08:13:47.830Z_v9\",\ [...]
diff --git a/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java b/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java
index a4a8ec4839..45fb663908 100644
--- a/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java
+++ b/processing/src/main/java/org/apache/druid/metadata/MetadataStorageConnector.java
@@ -88,12 +88,4 @@ public interface MetadataStorageConnector
   void createSupervisorsTable();
 
   void deleteAllRecords(String tableName);
-
-  /**
-   * Upgrade Compatibility Method.
-   *
-   * A new column, used_flag_last_updated, is added to druid_segments table. This method alters the table to add the column to make
-   * a cluster's metastore tables compatible with the updated Druid codebase in 0.24.x+
-   */
-  void alterSegmentTableAddUsedFlagLastUpdated();
 }
diff --git a/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java b/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java
index 560e724944..028a9d5cc0 100644
--- a/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java
+++ b/processing/src/test/java/org/apache/druid/metadata/TestMetadataStorageConnector.java
@@ -89,10 +89,4 @@ public class TestMetadataStorageConnector implements MetadataStorageConnector
   {
     throw new UnsupportedOperationException();
   }
-
-  @Override
-  public void alterSegmentTableAddUsedFlagLastUpdated()
-  {
-    throw new UnsupportedOperationException();
-  }
 }
diff --git a/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java b/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java
index d0bf41bd61..e1c9df1175 100644
--- a/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java
+++ b/server/src/main/java/org/apache/druid/indexer/SQLMetadataStorageUpdaterJobHandler.java
@@ -59,8 +59,8 @@ public class SQLMetadataStorageUpdaterJobHandler implements MetadataStorageUpdat
           {
             final PreparedBatch batch = handle.prepareBatch(
                 StringUtils.format(
-                    "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-                    + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+                    "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+                    + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
                     tableName, connector.getQuoteString()
                 )
             );
@@ -77,7 +77,7 @@ public class SQLMetadataStorageUpdaterJobHandler implements MetadataStorageUpdat
                       .put("version", segment.getVersion())
                       .put("used", true)
                       .put("payload", mapper.writeValueAsBytes(segment))
-                      .put("used_flag_last_updated", now)
+                      .put("used_status_last_updated", now)
                       .build()
               );
               log.info("Published %s", segment.getId());
diff --git a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
index 38bed0d7f8..3773aa466d 100644
--- a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
+++ b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
@@ -1419,8 +1419,8 @@ public class IndexerSQLMetadataStorageCoordinator implements IndexerMetadataStor
 
       PreparedBatch preparedBatch = handle.prepareBatch(
           StringUtils.format(
-              "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-                  + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+              "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+                  + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
               dbTables.getSegmentsTable(),
               connector.getQuoteString()
           )
@@ -1439,7 +1439,7 @@ public class IndexerSQLMetadataStorageCoordinator implements IndexerMetadataStor
               .bind("version", segment.getVersion())
               .bind("used", usedSegments.contains(segment))
               .bind("payload", jsonMapper.writeValueAsBytes(segment))
-              .bind("used_flag_last_updated", now);
+              .bind("used_status_last_updated", now);
         }
         final int[] affectedRows = preparedBatch.execute();
        final boolean succeeded = Arrays.stream(affectedRows).allMatch(eachAffectedRows -> eachAffectedRows == 1);
diff --git a/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java b/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java
index e31b74d300..35f13baae6 100644
--- a/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java
+++ b/server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java
@@ -198,29 +198,25 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
     return false;
   }
 
+  /**
+   * Creates the given table and indexes if the table doesn't already exist.
+   */
   public void createTable(final String tableName, final Iterable<String> sql)
   {
     try {
-      retryWithHandle(
-          new HandleCallback<Void>()
-          {
-            @Override
-            public Void withHandle(Handle handle)
-            {
-              if (!tableExists(handle, tableName)) {
-                log.info("Creating table [%s]", tableName);
-                final Batch batch = handle.createBatch();
-                for (String s : sql) {
-                  batch.add(s);
-                }
-                batch.execute();
-              } else {
-                log.info("Table [%s] already exists", tableName);
-              }
-              return null;
-            }
+      retryWithHandle(handle -> {
+        if (tableExists(handle, tableName)) {
+          log.info("Table[%s] already exists", tableName);
+        } else {
+          log.info("Creating table[%s]", tableName);
+          final Batch batch = handle.createBatch();
+          for (String s : sql) {
+            batch.add(s);
           }
-      );
+          batch.execute();
+        }
+        return null;
+      });
     }
     catch (Exception e) {
       log.warn(e, "Exception creating table");
@@ -236,26 +232,19 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   private void alterTable(final String tableName, final Iterable<String> sql)
   {
     try {
-      retryWithHandle(
-          new HandleCallback<Void>()
-          {
-            @Override
-            public Void withHandle(Handle handle)
-            {
-              if (tableExists(handle, tableName)) {
-                final Batch batch = handle.createBatch();
-                for (String s : sql) {
-                  log.info("Altering table[%s], with command: %s", tableName, s);
-                  batch.add(s);
-                }
-                batch.execute();
-              } else {
-                log.info("Table[%s] doesn't exist", tableName);
-              }
-              return null;
-            }
+      retryWithHandle(handle -> {
+        if (tableExists(handle, tableName)) {
+          final Batch batch = handle.createBatch();
+          for (String s : sql) {
+            log.info("Altering table[%s], with command: %s", tableName, s);
+            batch.add(s);
           }
-      );
+          batch.execute();
+        } else {
+          log.info("Table[%s] doesn't exist.", tableName);
+        }
+        return null;
+      });
     }
     catch (Exception e) {
       log.warn(e, "Exception Altering table[%s]", tableName);
@@ -331,7 +320,7 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
                 + "  version VARCHAR(255) NOT NULL,\n"
                 + "  used BOOLEAN NOT NULL,\n"
                 + "  payload %2$s NOT NULL,\n"
-                + "  used_flag_last_updated VARCHAR(255) NOT NULL,\n"
+                + "  used_status_last_updated VARCHAR(255) NOT NULL,\n"
                 + "  PRIMARY KEY (id)\n"
                 + ")",
                 tableName, getPayloadType(), getQuoteString(), getCollation()
@@ -425,18 +414,18 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
 
   private void alterEntryTableAddTypeAndGroupId(final String tableName)
   {
-    ArrayList<String> statements = new ArrayList<>();
-    if (!tableHasColumn(tableName, "type")) {
-      log.info("Adding 'type' column to %s", tableName);
-      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type VARCHAR(255)", tableName));
+    List<String> statements = new ArrayList<>();
+    if (tableHasColumn(tableName, "type")) {
+      log.info("Table[%s] already has column[type].", tableName);
     } else {
-      log.info("%s already has 'type' column", tableName);
+      log.info("Adding column[type] to table[%s].", tableName);
+      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN type VARCHAR(255)", tableName));
     }
-    if (!tableHasColumn(tableName, "group_id")) {
-      log.info("Adding 'group_id' column to %s", tableName);
-      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN group_id VARCHAR(255)", tableName));
+    if (tableHasColumn(tableName, "group_id")) {
+      log.info("Table[%s] already has column[group_id].", tableName);
     } else {
-      log.info("%s already has 'group_id' column", tableName);
+      log.info("Adding column[group_id] to table[%s].", tableName);
+      statements.add(StringUtils.format("ALTER TABLE %1$s ADD COLUMN group_id VARCHAR(255)", tableName));
     }
     if (!statements.isEmpty()) {
       alterTable(tableName, statements);
@@ -502,28 +491,24 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   }
 
   /**
-   * Adds the used_flag_last_updated column to the Druid segment table.
-   *
-   * This is public due to allow the UpdateTables cli tool to use for upgrade prep.
+   * Adds the used_status_last_updated column to the "segments" table.
    */
-  @Override
-  public void alterSegmentTableAddUsedFlagLastUpdated()
+  protected void alterSegmentTableAddUsedFlagLastUpdated()
   {
-    String tableName = tablesConfigSupplier.get().getSegmentsTable();
-    if (!tableHasColumn(tableName, "used_flag_last_updated")) {
-      log.info("Adding 'used_flag_last_updated' column to %s", tableName);
+    final String tableName = tablesConfigSupplier.get().getSegmentsTable();
+    if (tableHasColumn(tableName, "used_status_last_updated")) {
+      log.info("Table[%s] already has column[used_status_last_updated].", tableName);
+    } else {
+      log.info("Adding column[used_status_last_updated] to table[%s].", tableName);
       alterTable(
           tableName,
           ImmutableList.of(
               StringUtils.format(
-                  "ALTER TABLE %1$s \n"
-                  + "ADD used_flag_last_updated varchar(255)",
+                  "ALTER TABLE %1$s ADD used_status_last_updated varchar(255)",
                   tableName
               )
           )
       );
-    } else {
-      log.info("%s already has used_flag_last_updated column", tableName);
     }
   }
 
@@ -676,7 +661,7 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
     }
    // Called outside of the above conditional because we want to validate the table
     // regardless of cluster configuration for creating tables.
-    validateSegmentTable();
+    validateSegmentsTable();
   }
 
   @Override
@@ -724,14 +709,7 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   )
   {
     return getDBI().withHandle(
-        new HandleCallback<byte[]>()
-        {
-          @Override
-          public byte[] withHandle(Handle handle)
-          {
-            return lookupWithHandle(handle, tableName, keyColumn, valueColumn, key);
-          }
-        }
+        handle -> lookupWithHandle(handle, tableName, keyColumn, valueColumn, key)
     );
   }
 
@@ -989,61 +967,47 @@ public abstract class SQLMetadataConnector implements MetadataStorageConnector
   }
 
   /**
-   * Interrogate table metadata and return true or false depending on the existance of the indicated column
-   *
-   * public visibility because DerbyConnector needs to override thanks to uppercase table and column names invalidating
-   * this implementation.
+   * Checks table metadata to determine if the given column exists in the table.
    *
-   * @param tableName The table being interrogated
-   * @param columnName The column being looked for
-   * @return boolean indicating the existence of the column in question
+   * @return true if the column exists in the table, false otherwise
    */
-  public boolean tableHasColumn(String tableName, String columnName)
-  {
-    return getDBI().withHandle(
-        new HandleCallback<Boolean>()
-        {
-          @Override
-          public Boolean withHandle(Handle handle)
-          {
-            try {
-              if (tableExists(handle, tableName)) {
-                DatabaseMetaData dbMetaData = handle.getConnection().getMetaData();
-                ResultSet columns = dbMetaData.getColumns(
-                    null,
-                    null,
-                    tableName,
-                    columnName
-                );
-                return columns.next();
-              } else {
-                return false;
-              }
-            }
-            catch (SQLException e) {
-              return false;
-            }
-          }
+  protected boolean tableHasColumn(String tableName, String columnName)
+  {
+    return getDBI().withHandle(handle -> {
+      try {
+        if (tableExists(handle, tableName)) {
+          DatabaseMetaData dbMetaData = handle.getConnection().getMetaData();
+          ResultSet columns = dbMetaData.getColumns(null, null, tableName, columnName);
+          return columns.next();
+        } else {
+          return false;
         }
-    );
+      }
+      catch (SQLException e) {
+        return false;
+      }
+    });
   }
 
   /**
-   * Ensure that the segment table has the proper schema required to run Druid properly.
+   * Ensures that the "segments" table has a schema compatible with the current version of Druid.
    *
-   * Throws RuntimeException if the column does not exist. There is no recovering from an invalid schema,
-   * the program should crash.
-   *
-   * See <a href="https://druid.apache.org/docs/latest/operations/upgrade-prep.html">upgrade-prep docs</a> for info
-   * on manually preparing your segment table.
+   * @throws RuntimeException if the "segments" table has an incompatible schema.
+   *                          There is no recovering from an invalid schema, the program should crash.
+   * @see <a href="https://druid.apache.org/docs/latest/operations/metadata-migration/">Metadata migration</a> for info
+   * on manually preparing the "segments" table.
    */
-  private void validateSegmentTable()
+  private void validateSegmentsTable()
   {
-    if (tableHasColumn(tablesConfigSupplier.get().getSegmentsTable(), "used_flag_last_updated")) {
-      return;
+    if (tableHasColumn(tablesConfigSupplier.get().getSegmentsTable(), "used_status_last_updated")) {
+      // do nothing
     } else {
-      throw new RuntimeException("Invalid Segment Table Schema! No used_flag_last_updated column!" +
-              " See https://druid.apache.org/docs/latest/operations/upgrade-prep.html for more info on remediation");
+      throw new ISE(
+          "Cannot start Druid as table[%s] has an incompatible schema."
+          + " Reason: Column [used_status_last_updated] does not exist in table."
+          + " See https://druid.apache.org/docs/latest/operations/metadata-migration/ for more info on remediation.",
+          tablesConfigSupplier.get().getSegmentsTable()
+      );
     }
   }
 }
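The HandleCallback-to-lambda cleanups in this file all follow the same pattern: a single-abstract-method callback can be replaced by a lambda with no behavior change. A self-contained sketch of the idea, with simplified stand-ins for JDBI's `HandleCallback` and the connector's `retryWithHandle` (these are not Druid's actual signatures):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RetryWithHandleSketch
{
  // Simplified stand-in for JDBI's HandleCallback<T>: one abstract method,
  // so callers can supply a lambda instead of an anonymous class.
  @FunctionalInterface
  interface HandleCallback<T>
  {
    T withHandle(String handle) throws Exception;
  }

  // Simplified stand-in for SQLMetadataConnector.retryWithHandle():
  // retries the callback up to three times before giving up.
  static <T> T retryWithHandle(HandleCallback<T> callback)
  {
    Exception lastError = null;
    for (int attempt = 0; attempt < 3; attempt++) {
      try {
        return callback.withHandle("handle-" + attempt);
      }
      catch (Exception e) {
        lastError = e;
      }
    }
    throw new RuntimeException(lastError);
  }

  // Fails twice, then succeeds on the third attempt.
  static String demo()
  {
    final AtomicInteger remainingFailures = new AtomicInteger(2);
    // Lambda form, mirroring the cleaned-up createTable()/alterTable().
    return retryWithHandle(handle -> {
      if (remainingFailures.getAndDecrement() > 0) {
        throw new Exception("transient failure");
      }
      return "succeeded with " + handle;
    });
  }

  public static void main(String[] args)
  {
    System.out.println(demo());
  }
}
```

The anonymous-class form and the lambda form compile to the same call; the lambda simply drops the boilerplate, which is why the diff above can swap them freely.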
diff --git a/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java b/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java
index 0ad1d607ed..b69f15edb6 100644
--- a/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java
+++ b/server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentPublisher.java
@@ -55,8 +55,8 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
     this.config = config;
     this.connector = connector;
     this.statement = StringUtils.format(
-        "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-        + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+        "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+        + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
         config.getSegmentsTable(), connector.getQuoteString()
     );
   }
@@ -131,7 +131,7 @@ public class SQLMetadataSegmentPublisher implements MetadataSegmentPublisher
                     .bind("version", version)
                     .bind("used", used)
                     .bind("payload", payload)
-                    .bind("used_flag_last_updated", usedFlagLastUpdated)
+                    .bind("used_status_last_updated", usedFlagLastUpdated)
                     .execute();
 
               return null;
diff --git a/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java b/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java
index 7fd832a6ec..0b1468d0f1 100644
--- a/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java
+++ b/server/src/main/java/org/apache/druid/metadata/SegmentsMetadataManager.java
@@ -140,8 +140,8 @@ public interface SegmentsMetadataManager
 
   /**
    * Returns top N unused segment intervals with the end time no later than the specified maxEndTime and
-   * used_flag_last_updated time no later than maxLastUsedTime when ordered by segment start time, end time. Any segment having no
-   * used_flag_last_updated time due to upgrade from legacy Druid means maxUsedFlagLastUpdatedTime is ignored for that segment.
+   * used_status_last_updated time no later than maxLastUsedTime when ordered by segment start time, end time. Any segment having no
+   * used_status_last_updated time due to upgrade from legacy Druid means maxUsedFlagLastUpdatedTime is ignored for that segment.
    */
   List<Interval> getUnusedSegmentIntervals(
       String dataSource,
@@ -154,7 +154,7 @@ public interface SegmentsMetadataManager
   void poll();
 
   /**
-   * Populates used_flag_last_updated column in the segments table iteratively until there are no segments with a NULL
+   * Populates used_status_last_updated column in the segments table iteratively until there are no segments with a NULL
    * value for that column.
    */
   void populateUsedFlagLastUpdatedAsync();
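`populateUsedFlagLastUpdatedAsync()` above describes an incremental backfill: select a bounded batch of rows whose `used_status_last_updated` is NULL, stamp them, and repeat until a batch comes back empty. A minimal in-memory sketch of that loop, with a map standing in for the segments table (names are illustrative, not Druid's API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BackfillSketch
{
  // Stamps `now` onto every entry whose value is null, working in batches
  // of at most batchSize, and returns the total number of rows updated.
  static int populateUsedStatusLastUpdated(Map<String, String> table, String now, int batchSize)
  {
    int totalUpdated = 0;
    while (true) {
      // SELECT id FROM segments WHERE used_status_last_updated IS NULL LIMIT batchSize
      List<String> batch = new ArrayList<>();
      for (Map.Entry<String, String> entry : table.entrySet()) {
        if (entry.getValue() == null) {
          batch.add(entry.getKey());
          if (batch.size() >= batchSize) {
            break;
          }
        }
      }
      if (batch.isEmpty()) {
        return totalUpdated;  // no NULL rows left, backfill is done
      }
      // UPDATE segments SET used_status_last_updated = now WHERE id = ...
      for (String id : batch) {
        table.put(id, now);
      }
      totalUpdated += batch.size();
    }
  }

  public static void main(String[] args)
  {
    Map<String, String> table = new HashMap<>();
    table.put("seg-1", null);
    table.put("seg-2", "2013-01-01T00:00:00.000Z");
    table.put("seg-3", null);
    System.out.println(populateUsedStatusLastUpdated(table, "2023-08-18T00:00:00.000Z", 1));
  }
}
```

Bounding each batch keeps individual transactions small, which is the same reason the real implementation uses a `limitClause` in its SELECT.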
diff --git a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java
index 93e2ae189c..dd20600913 100644
--- a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java
+++ b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataManager.java
@@ -337,7 +337,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   }
 
   /**
-   * Populate used_flag_last_updated for unused segments whose current value for said column is NULL
+   * Populate used_status_last_updated for unused segments whose current value for said column is NULL
    *
    * The updates are made incrementally.
    */
@@ -346,7 +346,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
   {
     String segmentsTable = getSegmentsTable();
     log.info(
-        "Populating used_flag_last_updated with non-NULL values for unused segments in [%s]",
+        "Populating used_status_last_updated with non-NULL values for unused segments in [%s]",
         segmentsTable
     );
 
@@ -364,7 +364,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
               {
                 segmentsToUpdate.addAll(handle.createQuery(
                     StringUtils.format(
-                        "SELECT id FROM %1$s WHERE used_flag_last_updated IS NULL and used = :used %2$s",
+                        "SELECT id FROM %1$s WHERE used_status_last_updated IS NULL and used = :used %2$s",
                         segmentsTable,
                         connector.limitClause(limit)
                     )
@@ -386,7 +386,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
               public Void withHandle(Handle handle)
               {
                 Batch updateBatch = handle.createBatch();
-                String sql = "UPDATE %1$s SET used_flag_last_updated = '%2$s' WHERE id = '%3$s'";
+                String sql = "UPDATE %1$s SET used_status_last_updated = '%2$s' WHERE id = '%3$s'";
                 String now = DateTimes.nowUtc().toString();
                 for (String id : segmentsToUpdate) {
                  updateBatch.add(StringUtils.format(sql, segmentsTable, now, id));
@@ -398,13 +398,13 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
         );
       }
       catch (Exception e) {
-        log.warn(e, "Population of used_flag_last_updated in [%s] has failed. There may be unused segments with"
-                    + " NULL values for used_flag_last_updated that won't be killed!", segmentsTable);
+        log.warn(e, "Population of used_status_last_updated in [%s] has failed. There may be unused segments with"
+                    + " NULL values for used_status_last_updated that won't be killed!", segmentsTable);
         return;
       }
 
       totalUpdatedEntries += segmentsToUpdate.size();
-      log.info("Updated a batch of %d rows in [%s] with a valid used_flag_last_updated date",
+      log.info("Updated a batch of %d rows in [%s] with a valid used_status_last_updated date",
                segmentsToUpdate.size(),
                segmentsTable
       );
@@ -417,7 +417,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
       }
     }
     log.info(
-        "Finished updating [%s] with a valid used_flag_last_updated date. %d rows updated",
+        "Finished updating [%s] with a valid used_status_last_updated date. %d rows updated",
         segmentsTable,
         totalUpdatedEntries
     );
@@ -630,9 +630,9 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
     try {
       int numUpdatedDatabaseEntries = connector.getDBI().withHandle(
           (Handle handle) -> handle
-              .createStatement(StringUtils.format("UPDATE %s SET used=true, used_flag_last_updated = :used_flag_last_updated WHERE id = :id", getSegmentsTable()))
+              .createStatement(StringUtils.format("UPDATE %s SET used=true, used_status_last_updated = :used_status_last_updated WHERE id = :id", getSegmentsTable()))
               .bind("id", segmentId)
-              .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
+              .bind("used_status_last_updated", DateTimes.nowUtc().toString())
               .execute()
       );
      // Unlike bulk markAsUsed methods: markAsUsedAllNonOvershadowedSegmentsInDataSource(),
@@ -1093,7 +1093,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
       DateTime maxUsedFlagLastUpdatedTime
   )
   {
-    // Note that we handle the case where used_flag_last_updated IS NULL here to allow smooth transition to Druid version that uses used_flag_last_updated column
+    // Note that we handle the case where used_status_last_updated IS NULL here to allow smooth transition to Druid version that uses used_status_last_updated column
     return connector.inReadOnlyTransaction(
         new TransactionCallback<List<Interval>>()
         {
@@ -1104,7 +1104,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
                 .createQuery(
                     StringUtils.format(
                        "SELECT start, %2$send%2$s FROM %1$s WHERE dataSource = :dataSource AND "
-                        + "%2$send%2$s <= :end AND used = false AND used_flag_last_updated IS NOT NULL AND used_flag_last_updated <= :used_flag_last_updated ORDER BY start, %2$send%2$s",
+                        + "%2$send%2$s <= :end AND used = false AND used_status_last_updated IS NOT NULL AND used_status_last_updated <= :used_status_last_updated ORDER BY start, %2$send%2$s",
                         getSegmentsTable(),
                         connector.getQuoteString()
                     )
@@ -1113,7 +1113,7 @@ public class SqlSegmentsMetadataManager implements SegmentsMetadataManager
                 .setMaxRows(limit)
                 .bind("dataSource", dataSource)
                 .bind("end", maxEndTime.toString())
-                .bind("used_flag_last_updated", maxUsedFlagLastUpdatedTime.toString())
+                .bind("used_status_last_updated", maxUsedFlagLastUpdatedTime.toString())
                 .map(
                     new BaseResultSetMapper<Interval>()
                     {
diff --git a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java
index 7e4b00b3c7..01b110516f 100644
--- a/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java
+++ b/server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java
@@ -149,7 +149,7 @@ public class SqlSegmentsMetadataQuery
     final PreparedBatch batch =
         handle.prepareBatch(
             StringUtils.format(
-                "UPDATE %s SET used = ?, used_flag_last_updated = ? WHERE datasource = ? AND id = ?",
+                "UPDATE %s SET used = ?, used_status_last_updated = ? WHERE datasource = ? AND id = ?",
                 dbTables.getSegmentsTable()
             )
         );
@@ -176,13 +176,13 @@ public class SqlSegmentsMetadataQuery
       return handle
           .createStatement(
               StringUtils.format(
-                  "UPDATE %s SET used=:used, used_flag_last_updated = :used_flag_last_updated WHERE dataSource = :dataSource",
+                  "UPDATE %s SET used=:used, used_status_last_updated = :used_status_last_updated WHERE dataSource = :dataSource",
                   dbTables.getSegmentsTable()
               )
           )
           .bind("dataSource", dataSource)
           .bind("used", false)
-          .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
+          .bind("used_status_last_updated", DateTimes.nowUtc().toString())
           .execute();
     } else if (Intervals.canCompareEndpointsAsStrings(interval)
               && interval.getStart().getYear() == interval.getEnd().getYear()) {
@@ -192,7 +192,7 @@ public class SqlSegmentsMetadataQuery
       return handle
           .createStatement(
               StringUtils.format(
-                  "UPDATE %s SET used=:used, used_flag_last_updated = :used_flag_last_updated WHERE dataSource = :dataSource AND %s",
+                  "UPDATE %s SET used=:used, used_status_last_updated = :used_status_last_updated WHERE dataSource = :dataSource AND %s",
                  dbTables.getSegmentsTable(),
                  IntervalMode.CONTAINS.makeSqlCondition(connector.getQuoteString(), ":start", ":end")
               )
@@ -201,7 +201,7 @@ public class SqlSegmentsMetadataQuery
           .bind("used", false)
           .bind("start", interval.getStart().toString())
           .bind("end", interval.getEnd().toString())
-          .bind("used_flag_last_updated", DateTimes.nowUtc().toString())
+          .bind("used_status_last_updated", DateTimes.nowUtc().toString())
           .execute();
     } else {
       // Retrieve, then drop, since we can't write a WHERE clause directly.
diff --git a/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java b/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java
index 92b0d22ff0..9fadb8b106 100644
--- a/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java
+++ b/server/src/test/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinatorTest.java
@@ -385,10 +385,10 @@ public class IndexerSQLMetadataStorageCoordinatorTest
           (int) derbyConnector.getDBI().<Integer>withHandle(
               handle -> {
                 String request = StringUtils.format(
-                    "UPDATE %s SET used = false, used_flag_last_updated = :used_flag_last_updated WHERE id = :id",
+                    "UPDATE %s SET used = false, used_status_last_updated = :used_status_last_updated WHERE id = :id",
                    derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable()
                 );
-                return handle.createStatement(request).bind("id", segment.getId().toString()).bind("used_flag_last_updated", DateTimes.nowUtc().toString()).execute();
+                return handle.createStatement(request).bind("id", segment.getId().toString()).bind("used_status_last_updated", DateTimes.nowUtc().toString()).execute();
               }
           )
       );
@@ -433,8 +433,8 @@ public class IndexerSQLMetadataStorageCoordinatorTest
         handle -> {
           PreparedBatch preparedBatch = handle.prepareBatch(
               StringUtils.format(
-                  "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_flag_last_updated) "
-                  + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_flag_last_updated)",
+                  "INSERT INTO %1$s (id, dataSource, created_date, start, %2$send%2$s, partitioned, version, used, payload, used_status_last_updated) "
+                  + "VALUES (:id, :dataSource, :created_date, :start, :end, :partitioned, :version, :used, :payload, :used_status_last_updated)",
                   table,
                   derbyConnector.getQuoteString()
               )
@@ -450,7 +450,7 @@ public class IndexerSQLMetadataStorageCoordinatorTest
                          .bind("version", segment.getVersion())
                          .bind("used", true)
                          .bind("payload", mapper.writeValueAsBytes(segment))
-                         .bind("used_flag_last_updated", DateTimes.nowUtc().toString());
+                         .bind("used_status_last_updated", DateTimes.nowUtc().toString());
           }
 
           final int[] affectedRows = preparedBatch.execute();
diff --git a/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java b/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java
index 030123168d..1c8f6493e2 100644
--- a/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java
+++ b/server/src/test/java/org/apache/druid/metadata/SQLMetadataConnectorTest.java
@@ -168,7 +168,7 @@ public class SQLMetadataConnectorTest
   }
 
   /**
-   * This is a test for the upgrade path where a cluster is upgrading from a version that did not have used_flag_last_updated
+   * This is a test for the upgrade path where a cluster is upgrading from a version that did not have used_status_last_updated
    * in the segments table.
    */
   @Test
@@ -176,7 +176,7 @@ public class SQLMetadataConnectorTest
   {
     connector.createSegmentTable();
 
-    // Drop column used_flag_last_updated to bring us in line with pre-upgrade state
+    // Drop column used_status_last_updated to bring us in line with pre-upgrade state
     derbyConnectorRule.getConnector().retryWithHandle(
         new HandleCallback<Void>()
         {
@@ -186,7 +186,7 @@ public class SQLMetadataConnectorTest
             final Batch batch = handle.createBatch();
             batch.add(
                 StringUtils.format(
-                    "ALTER TABLE %1$s DROP COLUMN USED_FLAG_LAST_UPDATED",
+                    "ALTER TABLE %1$s DROP COLUMN USED_STATUS_LAST_UPDATED",
                     derbyConnectorRule.metadataTablesConfigSupplier()
                                       .get()
                                       .getSegmentsTable()
@@ -202,7 +202,7 @@ public class SQLMetadataConnectorTest
     connector.alterSegmentTableAddUsedFlagLastUpdated();
    connector.tableHasColumn(
        derbyConnectorRule.metadataTablesConfigSupplier().get().getSegmentsTable(),
-        "USED_FLAG_LAST_UPDATED"
+        "USED_STATUS_LAST_UPDATED"
     );
   }
 
diff --git a/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java b/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java
index 9e289263bb..4995e69d72 100644
--- a/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java
+++ b/server/src/test/java/org/apache/druid/metadata/SqlSegmentsMetadataManagerTest.java
@@ -388,7 +388,7 @@ public class SqlSegmentsMetadataManagerTest
     sqlSegmentsMetadataManager.startPollingDatabasePeriodically();
     sqlSegmentsMetadataManager.poll();
 
-    // We alter the segment table to allow nullable used_flag_last_updated in order to test compatibility during druid upgrade from version without used_flag_last_updated.
+    // We alter the segment table to allow nullable used_status_last_updated in order to test compatibility during druid upgrade from version without used_status_last_updated.
    derbyConnectorRule.allowUsedFlagLastUpdatedToBeNullable();

    Assert.assertTrue(sqlSegmentsMetadataManager.isPollingDatabasePeriodically());
@@ -447,9 +447,9 @@ public class SqlSegmentsMetadataManagerTest
        sqlSegmentsMetadataManager.getUnusedSegmentIntervals("wikipedia", DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
     );
 
-    // One of the 3 segments in newDs has a null used_flag_last_updated which should mean getUnusedSegmentIntervals never returns it
-    // One of the 3 segments in newDs has a used_flag_last_updated older than 1 day which means it should also be returned
-    // The last of the 3 segments in newDs has a used_flag_last_updated date less than one day and should not be returned
+    // One of the 3 segments in newDs has a null used_status_last_updated which should mean getUnusedSegmentIntervals never returns it
+    // One of the 3 segments in newDs has a used_status_last_updated older than 1 day which means it should also be returned
+    // The last of the 3 segments in newDs has a used_status_last_updated date less than one day and should not be returned
     Assert.assertEquals(
         ImmutableList.of(newSegment2.getInterval()),
        sqlSegmentsMetadataManager.getUnusedSegmentIntervals(newDs, DateTimes.of("3000"), 5, DateTimes.nowUtc().minus(Duration.parse("PT86400S")))
@@ -964,7 +964,7 @@ public class SqlSegmentsMetadataManagerTest
           {
             List<Map<String, Object>> lst = handle.select(
                 StringUtils.format(
-                    "SELECT * FROM %1$s WHERE USED_FLAG_LAST_UPDATED IS NULL",
+                    "SELECT * FROM %1$s WHERE USED_STATUS_LAST_UPDATED IS NULL",
                     derbyConnectorRule.metadataTablesConfigSupplier()
                                       .get()
                                       .getSegmentsTable()
diff --git a/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java b/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java
index 6fff6f62ad..d0d8357837 100644
--- a/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java
+++ b/server/src/test/java/org/apache/druid/metadata/TestDerbyConnector.java
@@ -151,7 +151,7 @@ public class TestDerbyConnector extends DerbyConnector
               final Batch batch = handle.createBatch();
               batch.add(
                   StringUtils.format(
-                      "ALTER TABLE %1$s ALTER COLUMN USED_FLAG_LAST_UPDATED NULL",
+                      "ALTER TABLE %1$s ALTER COLUMN USED_STATUS_LAST_UPDATED NULL",
                      dbTables.get().getSegmentsTable().toUpperCase(Locale.ENGLISH)
                   )
               );
diff --git a/services/src/main/java/org/apache/druid/cli/Main.java b/services/src/main/java/org/apache/druid/cli/Main.java
index 8f3754b5ce..ffd60410b0 100644
--- a/services/src/main/java/org/apache/druid/cli/Main.java
+++ b/services/src/main/java/org/apache/druid/cli/Main.java
@@ -76,8 +76,7 @@ public class Main
         DumpSegment.class,
         ResetCluster.class,
         ValidateSegments.class,
-        ExportMetadata.class,
-        UpdateTables.class
+        ExportMetadata.class
     );
     builder.withGroup("tools")
            .withDescription("Various tools for working with Druid")
diff --git a/services/src/main/java/org/apache/druid/cli/UpdateTables.java b/services/src/main/java/org/apache/druid/cli/UpdateTables.java
deleted file mode 100644
index 9a5ba9bc39..0000000000
--- a/services/src/main/java/org/apache/druid/cli/UpdateTables.java
+++ /dev/null
@@ -1,134 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package org.apache.druid.cli;
-
-import com.github.rvesse.airline.annotations.Command;
-import com.github.rvesse.airline.annotations.Option;
-import com.github.rvesse.airline.annotations.restrictions.Required;
-import com.google.common.collect.ImmutableList;
-import com.google.inject.Injector;
-import com.google.inject.Key;
-import com.google.inject.Module;
-import org.apache.druid.guice.DruidProcessingModule;
-import org.apache.druid.guice.JsonConfigProvider;
-import org.apache.druid.guice.QueryRunnerFactoryModule;
-import org.apache.druid.guice.QueryableModule;
-import org.apache.druid.guice.annotations.Self;
-import org.apache.druid.java.util.common.logger.Logger;
-import org.apache.druid.metadata.MetadataStorageConnector;
-import org.apache.druid.metadata.MetadataStorageConnectorConfig;
-import org.apache.druid.metadata.MetadataStorageTablesConfig;
-import org.apache.druid.server.DruidNode;
-
-import java.util.List;
-
-@Command(
-    name = "metadata-update",
-    description = "Controlled update of metadata storage"
-)
-
-public class UpdateTables extends GuiceRunnable
-{
-  private static final String SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED = "add-used-flag-last-updated-to-segments";
-
-  @Option(name = "--connectURI", description = "Database JDBC connection string")
-  @Required
-  private String connectURI;
-
-  @Option(name = "--user", description = "Database username")
-  @Required
-  private String user;
-
-  @Option(name = "--password", description = "Database password")
-  @Required
-  private String password;
-
-  @Option(name = "--base", description = "Base table name")
-  private String base;
-
-  @Option(name = "--action", description = "Action Name")
-  private String action_name;
-
-  private static final Logger log = new Logger(CreateTables.class);
-
-  public UpdateTables()
-  {
-    super(log);
-  }
-
-  @Override
-  protected List<? extends Module> getModules()
-  {
-    return ImmutableList.of(
-        // It's unknown why those modules are required in CreateTables, and if all of those modules are required or not.
-        // Maybe some of those modules could be removed.
-        // See https://github.com/apache/druid/pull/4429#discussion_r123602930
-        new DruidProcessingModule(),
-        new QueryableModule(),
-        new QueryRunnerFactoryModule(),
-        binder -> {
-          JsonConfigProvider.bindInstance(
-              binder,
-              Key.get(MetadataStorageConnectorConfig.class),
-              new MetadataStorageConnectorConfig()
-              {
-                @Override
-                public String getConnectURI()
-                {
-                  return connectURI;
-                }
-
-                @Override
-                public String getUser()
-                {
-                  return user;
-                }
-
-                @Override
-                public String getPassword()
-                {
-                  return password;
-                }
-              }
-          );
-          JsonConfigProvider.bindInstance(
-              binder,
-              Key.get(MetadataStorageTablesConfig.class),
-              MetadataStorageTablesConfig.fromBase(base)
-          );
-          JsonConfigProvider.bindInstance(
-              binder,
-              Key.get(DruidNode.class, Self.class),
-              new DruidNode("tools", "localhost", false, -1, null, true, false)
-          );
-        }
-    );
-  }
-
-  @Override
-  public void run()
-  {
-    final Injector injector = makeInjector();
-    MetadataStorageConnector dbConnector = injector.getInstance(MetadataStorageConnector.class);
-    if (SEGMENTS_TABLE_ADD_USED_FLAG_LAST_UPDATED.equals(action_name)) {
-      dbConnector.alterSegmentTableAddUsedFlagLastUpdated();
-    }
-  }
-}


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
