xushiyan commented on code in PR #13902:
URL: https://github.com/apache/hudi/pull/13902#discussion_r2356259166
##########
docker/hoodie/hadoop/spark_base/Dockerfile:
##########
@@ -15,8 +15,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-ARG HADOOP_VERSION=2.8.4
-ARG HIVE_VERSION=2.3.3
+ARG HADOOP_VERSION=3.3.4
+ARG HIVE_VERSION=3.1.3
Review Comment:
so to confirm, this pr is the first step, which makes these changes:
- keep the same docker stack as the current docker demo, and make sure it
works for both amd64 and arm64
- upgrade hadoop to 3.3.4 and hive to 3.1.3
- users still follow the same steps to run the demo apps: first build the
bundle jars, then point docker compose at this yml
The next step is to add a notebook container to the stack so users can work
with it through a UI. After that we should avoid building jars and simplify
the hadoop and other setup to keep the demo lightweight. @deepakpanda93
@rangareddy please confirm this is the plan to go with.
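The "build bundle jars first" prerequisite mentioned above could be made explicit with a small pre-flight check before `docker compose` is invoked. This is only a sketch; the function name and the jar glob are illustrative assumptions, not part of the PR:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check for the docker demo: fail fast with a
# helpful message if the bundle jars have not been built yet.
check_bundles() {
  local dir="$1" jars
  # Glob pattern is an assumption about the bundle jar naming convention.
  jars=$(ls "$dir"/hudi-*-bundle*.jar 2>/dev/null || true)
  if [ -z "$jars" ]; then
    echo "no bundle jars found in $dir; build them first (e.g. mvn package)" >&2
    return 1
  fi
  echo "found: $jars"
}
```

A wrapper script could call this before `docker compose -f <yml> up`, which would also make the "users still follow the same steps" expectation self-documenting.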
##########
hudi-sync/hudi-hive-sync/run_sync_tool.sh:
##########
@@ -46,7 +46,7 @@ HIVE_JDBC=`ls ${HIVE_HOME}/lib/hive-jdbc-*.jar | tr '\n' ':'`
if [ -z "${HIVE_JDBC}" ]; then
HIVE_JDBC=`ls ${HIVE_HOME}/lib/hive-jdbc-*.jar | grep -v handler | tr '\n' ':'`
fi
-HIVE_JARS=$HIVE_METASTORE:$HIVE_SERVICE:$HIVE_EXEC:$HIVE_JDBC
+HIVE_JARS=$HIVE_METASTORE:$HIVE_SERVICE:$HIVE_EXEC:$HIVE_JDBC:${HIVE_HOME}/lib/calcite-core-1.16.0.jar:${HIVE_HOME}/lib/libfb303-0.9.3.jar
Review Comment:
this could break if the jars are not available. how do we ensure the jars
are there when a user runs this script? you're fixing this because of the
docker setup change, right? but this script is not only intended for the
docker demo, so you'll need to figure out some way to decouple this
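One possible way to decouple this, sketched here as a suggestion rather than the fix the PR should take: only append the extra jars when they actually exist, and glob by prefix instead of hardcoding versions like `calcite-core-1.16.0.jar`. The helper name is hypothetical:

```shell
#!/usr/bin/env bash
# Append each jar that actually exists to a ':'-separated classpath,
# so the script degrades gracefully outside the docker image.
append_if_present() {
  local cp="$1"; shift
  local jar
  for jar in "$@"; do
    [ -f "$jar" ] && cp="$cp:$jar"
  done
  echo "$cp"
}
```

In `run_sync_tool.sh` this could then be used as, e.g., `HIVE_JARS=$(append_if_present "$HIVE_JARS" "${HIVE_HOME}"/lib/calcite-core-*.jar "${HIVE_HOME}"/lib/libfb303-*.jar)`, keeping the existing behavior when those jars are absent.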
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]