spark-shell is not on your PATH. Give the full path to it.
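For instance, invoking the shell by its full path looks like this (the install location below is a hypothetical example, not taken from the original message; adjust it to wherever you extracted the archive):

```shell
:: Windows cmd sketch; this path is an assumption
C:\spark\spark-3.1.1-bin-hadoop2.7\bin\spark-shell
```

Once this works, adding the bin directory to PATH makes the bare `spark-shell` command available everywhere.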
On Tue, May 11, 2021 at 4:10 PM Talha Javed wrote:
> Hello Team!
> Hope you are doing well.
>
> I have downloaded Apache Spark (spark-3.1.1-bin-hadoop2.7), and I have
> downloaded the winutils file from GitHub as well.
> Python version: Python 3.9.4
Hello Team!
Hope you are doing well.
I have downloaded Apache Spark (spark-3.1.1-bin-hadoop2.7), and I have
downloaded the winutils file from GitHub as well.
Python version: Python 3.9.4
Java version: java version "1.8.0_291"
Java(TM) SE Runtime Environment (build 1.8.0_291-b10)
Java HotSpot(TM)
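As a setup sketch for this kind of Windows installation (every path below is an assumption for illustration, not taken from the message): the usual arrangement is to point SPARK_HOME at the extracted archive, point HADOOP_HOME at a directory whose bin folder contains winutils.exe, and put both bin directories on PATH:

```shell
:: Windows cmd sketch; all paths here are hypothetical
set SPARK_HOME=C:\spark\spark-3.1.1-bin-hadoop2.7
set HADOOP_HOME=C:\hadoop
:: winutils.exe is expected to live in %HADOOP_HOME%\bin
set PATH=%SPARK_HOME%\bin;%HADOOP_HOME%\bin;%PATH%
:: then launch the shell
%SPARK_HOME%\bin\spark-shell
```

`set` only affects the current console; use the System Properties dialog (or `setx`) to make the variables permanent.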
Thanks everyone for the input. Yes, it makes sense that metadata
backup/restore should be done outside Spark. We will provide customers
with documentation on how that can be done and leave the
implementation to them.
Thanks,
Tianchen
On Tue, May 11, 2021 at 1:14 AM Mich Talebzadeh
wrote:
From my experience dealing with metadata for other applications such as
Hive, an external database for Spark metadata would be useful if needed.
However, the maintenance and upgrade of that database should be external to
Spark (left to the user), and access should, as usual, go through some form
of reliable API or JDBC connection.
That's my expectation as well. Spark needs a reliable catalog;
backup/restore is just an implementation detail of how you make your
catalog reliable, and it should be transparent to Spark.
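To make the "outside Spark, transparent to Spark" point concrete: if the external metadata database were, say, PostgreSQL (an assumption for illustration; the database name below is hypothetical), backup and restore would use the database's own tooling, with Spark never involved:

```shell
# Hypothetical external metadata database named spark_meta.
# Back it up with PostgreSQL's native tool, entirely outside Spark:
pg_dump --format=custom --file=spark_meta.backup spark_meta

# Restore it later, again with no Spark involvement:
pg_restore --clean --dbname=spark_meta spark_meta.backup
```

Any other database (MySQL, Oracle, etc.) has equivalent native utilities; the point is that durability of the catalog is the user's operational concern, not Spark's.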
On Sat, May 8, 2021 at 6:54 AM ayan guha wrote:
> Just a consideration:
>
> Is there a value in backup/restore