hlteoh37 commented on PR #22901:
URL: https://github.com/apache/flink/pull/22901#issuecomment-1617830035
## Performance test on the `/checkpoints` API
We ran the following three tests:
1. Control test with no changes to the Flink REST API, using the existing
ExecutionGraph cache of 30s
2. Checkpoint cache with a 3s TTL
3. No checkpoint cache
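One way to picture the checkpoint cache in test 2 is a time-based (TTL) cache in front of the checkpoint-statistics computation. Below is a minimal sketch with hypothetical names (`TtlCache`, `get_or_compute`) — it illustrates the caching behaviour under test, not the PR's actual implementation:

```python
import time


class TtlCache:
    """Minimal time-based cache: entries are recomputed once they are
    older than the TTL (e.g. 3s for the checkpoint-stats cache)."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}    # key -> (value, stored_at)

    def get_or_compute(self, key, compute):
        now = self.clock()
        entry = self._store.get(key)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]          # still fresh: serve cached value
        value = compute()            # expired or missing: recompute
        self._store[key] = (value, now)
        return value
```

With a 3s TTL and checkpoints completing every ~2s, repeated `/checkpoints` calls within the TTL window are served from the cache rather than recomputing the stats each time.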
Across the three tests, we see no significant differences in the access
pattern or latency profile of the `/checkpoints` REST API.
We also found the following nuances:
- A `500` is periodically returned by `/checkpoints` on the first call after a
job restart (likely because there are no "Quantile" statistics after only one
checkpoint). **This is consistent across all 3 tests, meaning it is an
independent issue.**
- The p95 access pattern is spiky: it averages around 15ms but occasionally
rises to 40-50ms. **This pattern is consistent across all tests, so it is not
a problem introduced by the latest change.**
### Test configuration
- Cluster setup
- Single JM with Zookeeper HA
- Running on K8s
- JM resources (vCPU: 200m, Memory: 6083Mi)
- ResourceManager, JobMaster, BlobServer all running on the same JM
container.
- Test setup
- TPS: 10 req/s
- Endpoint `/jobs/:jobid/checkpoints`
- Simple Flink job reading from KDS and writing to KDS
- Checkpoint interval 1s, min pause between checkpoints 1s
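The load profile above (a fixed 10 req/s against the endpoint, recording per-request latency) can be sketched as a small fixed-rate driver. This is a hedged sketch of the test harness shape, not the actual tool we used; `request_fn` stands in for an HTTP GET to `/jobs/:jobid/checkpoints`:

```python
import time


def run_fixed_rate(request_fn, tps, duration_s,
                   clock=time.monotonic, sleep=time.sleep):
    """Call request_fn at a fixed rate for duration_s seconds and
    return the observed per-request latencies in milliseconds."""
    interval = 1.0 / tps
    total = int(round(tps * duration_s))
    latencies = []
    t0 = clock()
    for i in range(total):
        # Wait until this request's scheduled fire time (open-loop pacing,
        # so slow responses do not reduce the offered rate).
        delay = t0 + i * interval - clock()
        if delay > 0:
            sleep(delay)
        start = clock()
        request_fn()  # e.g. GET /jobs/:jobid/checkpoints
        latencies.append((clock() - start) * 1000.0)
    return latencies
```

From the returned latencies one can then compute the mean and p95 reported above.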
### Control Test 1 (no changes to Flink REST API): ExecutionGraph cache of
3s, Job restarting constantly

JM CPU/memory use
```
NAME                               CPU(cores)   MEMORY(bytes)
flink-jobmanager-dc5d8b5df-875x6   51m          1271Mi
```

### Test 2: 3s checkpoint cache, Job restarting constantly

JM CPU/memory use
```
NAME                               CPU(cores)   MEMORY(bytes)
flink-jobmanager-6797f4cdbd-xll86  70m          1237Mi
```

### Test 3: No checkpoint cache, Job restarting constantly

JM CPU/memory use
```
NAME                               CPU(cores)   MEMORY(bytes)
flink-jobmanager-79f5768fdc-2t9hd  50m          1272Mi
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]