Hey everyone! Many users today have pipelines that choose a single model
for inference from hundreds or thousands of models based on properties of
the data. Unfortunately, RunInference does not currently support this use
case. I put together a proposal for RunInference that would allow a single
keyed RunInference transform to serve a different model for each key. I'd
appreciate any thoughts or comments!
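To make the idea concrete, here is a minimal, framework-free sketch of the per-key routing the proposal describes. This is not the actual Beam API (the class and method names here, like `MultiModelRouter`, are made up for illustration); it just shows how keyed elements could be dispatched to a different model per key.

```python
class MultiModelRouter:
    """Illustrative only: maps each key to its own model and runs
    inference element by element. Not part of the Beam API."""

    def __init__(self, models):
        # models: dict mapping key -> callable "model"
        self._models = models

    def run_inference(self, keyed_examples):
        # keyed_examples: iterable of (key, example) pairs
        for key, example in keyed_examples:
            model = self._models[key]  # select the model for this key
            yield key, model(example)


# Two toy "models", selected by key.
router = MultiModelRouter({
    "doubler": lambda x: x * 2,
    "negator": lambda x: -x,
})
results = list(router.run_inference([("doubler", 3), ("negator", 5)]))
# results == [("doubler", 6), ("negator", -5)]
```

In a real pipeline this routing would happen inside the keyed RunInference transform itself, so users would only supply the key-to-model mapping.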

https://docs.google.com/document/d/1kj3FyWRbJu1KhViX07Z0Gk0MU0842jhYRhI-DMhhcv4/edit?usp=sharing

Thanks,
Danny
