Re: Custom Operator Placement for Kubernetes

2023-05-05 Thread Shammon FY
Hi Chris,

I don't think there is an existing method that lets you control the
placement of an operator on a specific node.

By the way, I think "predicting the optimal parallelism" of Flink jobs is
interesting. Flink currently supports an autoscaling mechanism; you can find
detailed information in this FLIP [1].

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-271%3A+Autoscaling

Best,
Shammon FY
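As a rough illustration of the FLIP-271 autoscaler mentioned above: it is exposed through the Flink Kubernetes Operator via job configuration on a FlinkDeployment. The key names and prefixes below vary by operator version (newer releases shorten the `kubernetes.operator.` prefix), so treat this as a sketch to check against your operator's documentation, not authoritative configuration:

```yaml
# Sketch: enabling the FLIP-271 autoscaler on a FlinkDeployment.
# Config key names are version-dependent; verify against your operator docs.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-job            # hypothetical deployment name
spec:
  flinkConfiguration:
    kubernetes.operator.job.autoscaler.enabled: "true"
    # Fraction of measured processing capacity the autoscaler targets.
    kubernetes.operator.job.autoscaler.target.utilization: "0.7"
```

Note that this scales the parallelism of job vertices based on observed throughput; it does not control which physical node an operator runs on.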




Re: Custom Operator Placement for Kubernetes

2023-05-05 Thread John Gerassimou
Sorry for the mix-up; I misread your message. Please ignore my last
reply.



Re: Custom Operator Placement for Kubernetes

2023-05-05 Thread John Gerassimou
Hi Chris,

You should be able to do this using nodeSelector or taints and tolerations.

https://github.com/apache/flink-kubernetes-operator/blob/main/helm/flink-kubernetes-operator/templates/flink-operator.yaml#L55:L61

Thanks
John
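For illustration, the operator pod's placement can be steered through the chart's Helm values, which feed the template linked above. A minimal sketch, assuming a node label and matching taint named `dedicated=flink-operator` (both hypothetical; the exact values keys should be checked against your chart version):

```yaml
# Sketch: Helm values pinning the flink-kubernetes-operator pod to
# dedicated nodes. Keys depend on the chart version; see the linked template.
operatorPod:
  nodeSelector:
    dedicated: flink-operator        # hypothetical node label
  tolerations:
    - key: dedicated
      operator: Equal
      value: flink-operator
      effect: NoSchedule
```

This places the operator's own pod; it does not affect where Flink job operators (tasks) are scheduled.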



Custom Operator Placement for Kubernetes

2023-05-05 Thread chris-weiland
Hi,

Is there a way to manually define which node an operator should be placed
on, using Kubernetes?

To give a bit more context: for my master's thesis, I'm looking into
predicting the optimal parallelism degree for a node. To do so, we use a
zero-shot model, which predicts the latency and throughput for a given
query. To increase performance, we need to manually place operators on
different nodes in the network and incorporate other learning methods to
find the best configuration.

Regards,
Chris