Hi, I am running a TensorFlow inference task on my cluster, but I find it takes a long time to get a response, because it is a BERT model and I am running it on CPU machines. My company has a GPU Kubernetes cluster, and I have read this document:
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/advanced/external_resources/
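Based on that page, a minimal sketch of the TaskManager configuration for requesting GPUs looks like the following (the key names are from the linked external-resources doc; `nvidia.com/gpu` assumes the standard NVIDIA device plugin on the Kubernetes cluster):

```yaml
# flink-conf.yaml (sketch, per the external resources doc linked above)
external-resources: gpu
# number of GPUs requested per TaskManager
external-resource.gpu.amount: 1
# built-in GPU driver that discovers the allocated device indices
external-resource.gpu.driver-factory.class: org.apache.flink.externalresource.gpu.GPUDriverFactory
# on Kubernetes, map the resource to the device-plugin resource name
external-resource.gpu.kubernetes.config-key: nvidia.com/gpu
```

Inside an operator, the allocated device index can then be read via `getRuntimeContext().getExternalResourceInfos("gpu")` and passed to the inference runtime.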
Could you give me a demo, including TF inference on GPU and training on GPU?

I use Alink in some of my tasks; is there a demo for Alink on GPU?


This is part of the answer:
https://issues.apache.org/jira/browse/FLINK-22205
