craigcondit commented on code in PR #757:
URL: https://github.com/apache/yunikorn-core/pull/757#discussion_r1521789360
##########
pkg/webservice/handlers.go:
##########
@@ -1216,3 +1227,53 @@ func getStream(w http.ResponseWriter, r *http.Request) {
}
}
}
+
+func checkHeader(h http.Header, key string, value string) bool {
+ values := h.Values(key)
+ for _, v := range values {
+ v2 := strings.Split(v, ",")
+ for _, item := range v2 {
+ item = strings.TrimSpace(item)
+ if item == value {
+ return true
+ }
+ }
+ }
+ return false
+}
+
+func compress(w http.ResponseWriter, data any) {
+ response, err := json.Marshal(data)
+ if err != nil {
+ buildJSONErrorResponse(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+
+ // don't compress the data if it is smaller than MTU size
+ if len(response) < 1500 {
Review Comment:
This threshold is really arbitrary. Not every interface uses an MTU of 1500
(and MTU only applies at the Ethernet level anyway); practically speaking, 64K
is the maximum packet size for IPv4. We should focus on the endpoints that can
produce large outputs rather than speculatively encoding first.

We should also be compressing the stream, rather than writing the JSON data to
a byte array first: that buffer can get very large on big results and use
excessive RAM on the server. Instead, we should create a helper function that
decides based on Accept-Encoding, takes the result stream, and passes back a
wrapped, compressed stream. That helper can then easily be called from any
endpoint we want to compress.

I think we should skip endpoints that only produce a single object as output
(single app, summary endpoints, etc.), as these won't be large enough to
benefit from compression anyway.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]