Kimahriman commented on PR #735:
URL: https://github.com/apache/incubator-sedona/pull/735#issuecomment-1356792821

   Yeah, I think that mostly covers it. We can keep a shaded jar for Spark 
and/or Flink for people who need it. That could either be the python-adapter 
module or a new, specifically shaded module, since there's no reason the 
python-adapter _needs_ to be shaded, and it might be cleaner to have an explicit 
module named "shaded". For people using dependency management systems (either by 
building custom jars or using `--packages` in Spark), which should be what most 
people are doing and the common case, we use properly compile-scoped 
dependencies where possible, while still providing whatever shaded artifacts we 
need for the edge cases where users don't have the network access to resolve 
all the dependencies automatically.
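   For illustration, a dedicated "shaded" module could carry the relocation logic in its own pom while the other modules keep plain compile-scoped dependencies. This is just a minimal sketch of that idea with the maven-shade-plugin; the relocation pattern and package names are hypothetical and not taken from this PR:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <!-- Build the fat jar during the package phase -->
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <relocations>
              <!-- Hypothetical example: relocate a conflict-prone
                   dependency under a project-private package -->
              <relocation>
                <pattern>com.google.common</pattern>
                <shadedPattern>org.apache.sedona.shaded.com.google.common</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

   Keeping this configuration in one module means the published poms of the other modules declare their real dependencies, which is what `--packages` and custom-jar builds rely on.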
   
   Definitely will need to do a lot of test publishing with this to verify the 
resulting poms, but I hope it will start to make the dependencies easier to 
manage and understand over time, and make us less likely to hit issues with 
conflicting Scala versions and the like.
   
   I am _not_ a pom or Maven expert by any means (I learned a lot of this as I 
went), so I'm definitely looking for any feedback. This is how I've seen a lot 
of other projects structured, and I tried to pull a lot of ideas from how the 
Spark poms themselves are structured.

