vineethvp opened a new issue, #12660:
URL: https://github.com/apache/pinot/issues/12660
I have multiple realtime tables in my cluster that still contain old segments. Below is the configuration for one of them. It looks like the retention manager is not running for these tables; looking at the controller logs, it seems the retention manager runs for only one of the dimension tables.
I have verified that the time column for this table contains proper epoch-millisecond values in the database.
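(A minimal sketch of that kind of check, assuming the standard broker `/query/sql` endpoint; the broker address below is just a placeholder for my environment.)

```python
import requests

# Placeholder broker address for my environment
BROKER = "http://localhost:8099"

# Pull the newest values of the time column; they should be current epoch-millisecond values
resp = requests.post(
    f"{BROKER}/query/sql",
    json={"sql": "SELECT upload_ts_millis FROM events ORDER BY upload_ts_millis DESC LIMIT 10"},
)
resp.raise_for_status()
print(resp.json()["resultTable"]["rows"])
```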
Below is the table config for which retention is not happening, along with the relevant lines from the controller log.
```json
{
  "tableName": "events",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "upload_ts_millis",
    "timeType": "MILLISECONDS",
    "schemaName": "events",
    "retentionTimeUnit": "HOURS",
    "retentionTimeValue": "6",
    "replication": "2"
  },
  "tenants": {},
  "ingestionConfig": {
    "ingestionConfig": {
      "indexableExtrasField": "extras"
    },
    "transformConfigs": [
      {"columnName": "upload_timestamp", "transformFunction": "Groovy({timestamp}, timestamp)"},
      {"columnName": "upload_date", "transformFunction": "toEpochDays(upload_timestamp)"},
      {"columnName": "upload_ts_millis", "transformFunction": "FromEpochSeconds(upload_timestamp)"},
      {"columnName": "sensor_temperature", "transformFunction": "JSONPATHSTRING(data, '$.sensor_temperature')"},
      {"columnName": "data_source", "transformFunction": "JSONPATHSTRING(data, '$.data_source')"}
    ]
  },
  "instanceAssignmentConfigMap": {
    "CONSUMING": {
      "tagPoolConfig": {
        "tag": "DefaultTenant_REALTIME"
      },
      "replicaGroupPartitionConfig": {
        "numInstances": 2
      }
    }
  },
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "jsonIndexConfigs": {
      "data": {
        "maxLevels": 2,
        "excludeArray": false,
        "disableCrossArrayUnnest": true,
        "includePaths": null,
        "excludePaths": null
      }
    },
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "derived-measurements-data",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.broker.list": "b-2.agsdevus3msk.xlcy7b.c6.kafka.us-east-2.amazonaws.com:9092,b-1.agsdevus3msk.xlcy7b.c6.kafka.us-east-2.amazonaws.com:9092,b-3.agsdevus3msk.xlcy7b.c6.kafka.us-east-2.amazonaws.com:9092",
      "realtime.segment.flush.threshold.rows": "0",
      "realtime.segment.flush.threshold.time": "24h",
      "realtime.segment.flush.threshold.segment.size": "300M",
      "stream.kafka.consumer.prop.auto.offset.reset": "largest"
    }
  },
  "metadata": {
    "customConfigs": {}
  }
}
```
_**Controller log**_

```
Starting RetentionManager with running frequency of 21600 seconds.
[TaskRequestId: auto] Start running task: RetentionManager
Processing 1 tables in task: RetentionManager
Start managing retention for table: account_hierarchy_OFFLINE
Segment push type is not APPEND for table: account_hierarchy_OFFLINE, skip managing retention
Segment lineage metadata clean-up is successfully processed for table: account_hierarchy_OFFLINE
Removing aged deleted segments for all tables
Finish processing 1/1 tables in task: RetentionManager
[TaskRequestId: auto] Finish running task: RetentionManager in 9ms
```
Do I need any additional configs to get the retention manager running for all tables?
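In case it helps narrow things down, here is a sketch of manually triggering the task against one of the affected tables (this assumes the controller in my version exposes the `/periodictask/run` endpoint; the controller address is a placeholder):

```python
import requests

# Placeholder controller address for my environment
CONTROLLER = "http://localhost:9000"

# Ask the controller to run the RetentionManager task for the affected table,
# assuming the /periodictask/run endpoint is available in this Pinot version
resp = requests.get(
    f"{CONTROLLER}/periodictask/run",
    params={"taskname": "RetentionManager", "tableName": "events_REALTIME"},
)
resp.raise_for_status()
print(resp.json())
```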