kubraaksux commented on PR #2430:
URL: https://github.com/apache/systemds/pull/2430#issuecomment-3910312636

   > Hi @kubraaksux , thanks for the contribution! I have a concern about the 
current approach: I’m not sure moving LLM inference into Python is the right 
direction, especially since most calls still go through Python wrapper 
functions and there’s additional overhead from using Py4J. Also, as implemented 
now, it seems we’re bypassing SystemDS’s core functionality entirely. Looping in 
@mboehm7 .
   
   Hi @e-strauss, thanks for the feedback. Both points are valid.
   
   I redesigned the approach: instead of the Py4J bridge, llmPredict is now a 
native parameterized built-in. A DML call to it goes through the full 
compilation pipeline (parser → hops → lops → CP instruction), and the 
instruction issues its HTTP requests directly via java.net.HttpURLConnection, 
so the call path no longer depends on Py4J.
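   
   A minimal sketch of that direct-call pattern (the endpoint, payload shape, 
and class name below are placeholders for illustration, not the actual code in 
this PR):
   
   ```java
   // Rough sketch: issuing the model request from Java with plain
   // java.net.HttpURLConnection, no Py4J bridge involved.
   import java.io.BufferedReader;
   import java.io.InputStreamReader;
   import java.io.OutputStream;
   import java.net.HttpURLConnection;
   import java.net.URL;
   import java.nio.charset.StandardCharsets;
   
   public class LlmHttpCallSketch {
       static String post(String endpoint, String jsonBody) throws Exception {
           HttpURLConnection conn =
               (HttpURLConnection) new URL(endpoint).openConnection();
           conn.setRequestMethod("POST");
           conn.setRequestProperty("Content-Type", "application/json");
           conn.setDoOutput(true);
           // write the request payload (prompt, model parameters, etc.)
           try (OutputStream os = conn.getOutputStream()) {
               os.write(jsonBody.getBytes(StandardCharsets.UTF_8));
           }
           // read the response body, to be wrapped into the instruction's output
           StringBuilder sb = new StringBuilder();
           try (BufferedReader br = new BufferedReader(
                   new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
               String line;
               while ((line = br.readLine()) != null) {
                   sb.append(line);
               }
           }
           conn.disconnect();
           return sb.toString();
       }
   }
   ```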
   
   Thanks again for catching this early.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
