MonkeyCanCode commented on issue #50:
URL: https://github.com/apache/polaris-tools/issues/50#issuecomment-3624214213

   > [@MonkeyCanCode](https://github.com/MonkeyCanCode) - I love all your work 
on the MCP Server! We are really shaping this thing up to production quality! 🔥 
We have not started a Dev ML thread for this yet; however, I think there are two 
items here:
   > 
   > 1. Version compatibility: How the client & the server can negotiate which 
APIs are supported based on the server version.
   > 2. Context engineering: How to inject server-specific context into the tool.
   > 
   > For version compatibility, I don't believe we have a good story yet. IMO, 
this is a problem for all of our clients - Spark Plugin, MCP Server, & the UI. 
Eventually, we should start this discussion on the Dev ML, because I do think 
that a unified approach to client-server version compatibility is necessary 
for production use cases.
   > 
   > For context engineering, I think we have a few options:
   > 
   > 1. Embed schema definitions in the tool description. In the table.py case, 
do this in input_schema().
   > 2. Use @mcp.resource to expose the OpenAPI YAML. This might work well 
for us.
   > 3. We could include example payloads in our hints.
   > 4. We could decorate some functions with @mcp.prompt, which is a sort of 
RAG approach.
   > 
   > Does this work?
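
   For concreteness, options 1 and 2 might look roughly like this, assuming the 
FastMCP Python SDK (`mcp.server.fastmcp`). This is a sketch only; the 
`create_table` tool, the resource URI, and the spec path are made-up 
placeholders, not the actual polaris-tools code:

   ```python
   from pathlib import Path

   from mcp.server.fastmcp import FastMCP

   mcp = FastMCP("polaris")

   # Option 1: embed the expected schema shape in the tool description, so the
   # model sees it at tool-selection time.
   @mcp.tool(description=(
       "Create an Iceberg table. `table_schema` must be an Iceberg schema JSON "
       'object, e.g. {"type": "struct", "fields": [{"id": 1, "name": "id", '
       '"required": true, "type": "long"}]}.'
   ))
   def create_table(namespace: str, name: str, table_schema: dict) -> str:
       # Placeholder body; the real tool would call the Polaris REST API.
       return f"created {namespace}.{name}"

   # Option 2: expose the OpenAPI spec as a resource the client can read on
   # demand, instead of inlining it into every tool description.
   @mcp.resource("openapi://polaris-catalog")
   def catalog_openapi() -> str:
       """Polaris catalog OpenAPI spec, for clients that need request details."""
       return Path("spec/polaris-catalog-service.yaml").read_text()
   ```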
   
   Not sure if there is an easy way to determine how effective a change like 
this is for context engineering. E.g., I tried exposing the OpenAPI spec files 
we have in the Polaris server via `@mcp.resource` (much like the option 2 
sketch above), but it is really hard to tell whether it actually helps. Is 
there a standard or tooling for measuring the effectiveness of MCP context?
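
   Absent dedicated tooling, the best I can come up with is a crude A/B 
comparison: run the same task set against the server with and without the 
extra context, and compare how often the model makes the right tool calls. A 
minimal sketch, where `run_task` is a hypothetical stand-in for driving an LLM 
client against the MCP server and checking the resulting tool calls:

   ```python
   from typing import Callable

   def success_rate(run_task: Callable[[str], bool], tasks: list[str]) -> float:
       # Fraction of tasks the model completed correctly under one server config.
       return sum(run_task(t) for t in tasks) / len(tasks)

   def compare(baseline: Callable[[str], bool],
               with_spec: Callable[[str], bool],
               tasks: list[str]) -> None:
       a = success_rate(baseline, tasks)
       b = success_rate(with_spec, tasks)
       print(f"baseline: {a:.0%}  with OpenAPI resource: {b:.0%}  delta: {b - a:+.0%}")
   ```

   Even a small fixed task set would at least make regressions visible across 
changes, but the results are noisy and the grading is manual.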

