Hi users,

We are using the KubernetesPodOperator to isolate some code which writes data into a VERY OLD Elasticsearch 2.X cluster. Please don't make fun of me for this!!!

We are wondering: does a recommended practice exist for processing (JSON) data within the KubernetesPodOperator? So far we've experimented with passing varying volumes of JSON string data to the KubernetesPodOperator 'arguments' parameter. This works for reasonably small batches (hundreds of records) but fails once we get into the tens of thousands of records.

Should we be using a custom XCom backend so the container pulls the data itself, rather than pushing it in via 'arguments'?
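For context, here is a rough sketch of what we are doing today. The import path depends on your cncf.kubernetes provider version, and the image, namespace, and script names below are just placeholders:

import json
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

# Hundreds of records serialize fine; tens of thousands do not.
records = [{"id": i, "value": f"doc-{i}"} for i in range(100)]

with DAG("es2_indexing", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    index_to_es2 = KubernetesPodOperator(
        task_id="index_to_es2",
        name="index-to-es2",
        namespace="default",                      # placeholder namespace
        image="example.org/es2-indexer:latest",   # hypothetical indexer image
        cmds=["python", "index.py"],              # hypothetical entrypoint script
        # The whole JSON payload is serialized straight into the container
        # arguments; this is the part that breaks down for large batches.
        arguments=[json.dumps(records)],
    )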
Thank you,
lewismc
--
http://home.apache.org/~lewismc/
http://people.apache.org/keys/committer/lewismc
