Re: Naming files while saving a Dataframe

2021-08-12 Thread Eric Beabes
This doesn't work as described here (
https://stackoverflow.com/questions/36107581/change-output-filename-prefix-for-dataframe-write)
but the answer there suggests using the FileOutputFormat class. Will try that.
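
In the meantime, one workaround that keeps the directory (and therefore the
external table location) unchanged is to let Spark write the part files as
usual and then rename them afterwards with the Hadoop FileSystem API. This is
only a rough sketch, not tested code: 'spark' and the prefix are placeholders,
and with partitionBy you would have to recurse into the partition
sub-directories.

import org.apache.hadoop.fs.{FileSystem, Path}

// Sketch: rename part files in place so each carries a per-job prefix.
// The directory itself does not change, so the external table is unaffected.
val outputDir = new Path(someDirectory)          // same path passed to save()
val jobPrefix = spark.sparkContext.appName       // or any per-job identifier

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.listStatus(outputDir)
  .filter(f => f.isFile && f.getPath.getName.startsWith("part-"))
  .foreach { f =>
    val p = f.getPath
    fs.rename(p, new Path(p.getParent, s"$jobPrefix-${p.getName}"))
  }
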
Thanks. Regards.

On Sun, Jul 18, 2021 at 12:44 AM Jörn Franke  wrote:

> Spark heavily depends on Hadoop writing files. You can try to set the
> Hadoop property: mapreduce.output.basename
>
>
> https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkContext.html#hadoopConfiguration--
>
>
> Am 18.07.2021 um 01:15 schrieb Eric Beabes :
>
> 
> Mich - You're suggesting changing the "Path". The problem is that we have
> an EXTERNAL table created on top of this path, so the "Path" CANNOT change.
> If we could change it, this problem would be easy to solve. My question is
> about changing the "Filename".
>
> As Ayan pointed out, Spark doesn't seem to allow "prefixes" for the
> filenames!
>
> On Sat, Jul 17, 2021 at 1:58 PM Mich Talebzadeh 
> wrote:
>
>> Using this
>>
>> df.write.mode("overwrite").format("parquet").saveAsTable("test.ABCD")
>>
>> That will create a parquet table in the database test, which is
>> essentially a path under the Hive warehouse in the format
>>
>> /user/hive/warehouse/test.db/abcd/00_0
>>
>>
>>view my Linkedin profile
>> 
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Sat, 17 Jul 2021 at 20:45, Eric Beabes 
>> wrote:
>>
>>> I am not sure if you've understood the question. Here's how we're saving
>>> the DataFrame:
>>>
>>> df
>>>   .coalesce(numFiles)
>>>   .write
>>>   .partitionBy(partitionDate)
>>>   .mode("overwrite")
>>>   .format("parquet")
>>>   .save(someDirectory)
>>>
>>>
>>> Now where would I add a 'prefix' in this one?
>>>
>>>
>>> On Sat, Jul 17, 2021 at 10:54 AM Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
 try it and see if it works

 fullyQualifiedTableName = appName+'_'+tableName



view my Linkedin profile
 



 *Disclaimer:* Use it at your own risk. Any and all responsibility for
 any loss, damage or destruction of data or any other property which may
 arise from relying on this email's technical content is explicitly
 disclaimed. The author will in no case be liable for any monetary damages
 arising from such loss, damage or destruction.




 On Sat, 17 Jul 2021 at 18:02, Eric Beabes 
 wrote:

> I don't think Spark allows adding a 'prefix' to the file name, does
> it? If it does, please tell me how. Thanks.
>
> On Sat, Jul 17, 2021 at 9:47 AM Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Jobs have names in Spark. You could prefix the job name to the file name
>> when writing to the directory, I guess.
>>
>>  val sparkConf = new SparkConf().
>>setAppName(sparkAppName).
>>
>>
>>
>>
>>view my Linkedin profile
>> 
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility
>> for any loss, damage or destruction of data or any other property which 
>> may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Sat, 17 Jul 2021 at 17:40, Eric Beabes 
>> wrote:
>>
>>> The reason we have two jobs writing to the same directory is that the data
>>> is partitioned by 'day' (mmdd) but the job runs hourly. Maybe the only way
>>> to do this is to create an hourly partition (/mmdd/hh). Is that the only
>>> way to solve this?
>>>
>>> On Fri, Jul 16, 2021 at 5:45 PM ayan guha 
>>> wrote:
>>>
 IMHO - this is a bad idea esp in failure scenarios.

 How about creating a subfolder each for the jobs?

 On Sat, 17 Jul 2021 at 9:11 am, Eric Beabes <
 mailinglist...@gmail.com> wrote:

> We've two (or more) jobs that write data into the same directory via a
> Dataframe.save method. We need to be able to figure out which job wrote
> which file. Maybe provide a 'prefix' to the file names. I was wondering if
> there's any 'option' that allows us to do this. Googling didn't come up
> with any solution, so I thought of asking the Spark experts on this
> mailing list.
>
> Thanks in advance.
>
 --

Replacing BroadcastNestedLoopJoin

2021-08-12 Thread Eric Beabes
We’ve two datasets that look like this:

Dataset A: App specific data that contains (among other fields): ip_address


Dataset B: Location data that contains start_ip_address_int,
end_ip_address_int, latitude, longitude

We’re (left) joining these two datasets as: A.ip_address >=
B.start_ip_address_int AND A.ip_address <= B.end_ip_address_int. When
there's a match, we pick latitude & longitude from Dataset B.

This works fine but it takes a LONG time (over 20 minutes) to complete for
SMALL datasets.

Dataset A => Usually contains 110,000 rows.

Dataset B => Contains 12.5 Million rows. This is “static” data. Hasn’t
changed since August 2020.

When we looked at the DAG, it seems a BroadcastNestedLoopJoin is being used,
which is reportedly very slow. Spark seems to select it by default when the
join has inequality (non-equi) conditions such as “greater than” or “less than”.

What’s the best way to speed up this process? Thanks in advance.
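
One idea we are considering is to turn the range join into an equi-join by
bucketing the IP space, so the planner can use a shuffle/sort-merge join
instead of BroadcastNestedLoopJoin. A rough sketch only (dsA/dsB and the 2^16
bucket width are assumptions; column names as above; sequence() needs Spark
2.4+):

import org.apache.spark.sql.functions._

val bucketWidth = 65536L  // assumed: group IPs by their /16 prefix

// Explode each range in B into the buckets it spans (most ranges span few buckets).
val bBucketed = dsB.withColumn(
  "bucket",
  explode(sequence((col("start_ip_address_int") / bucketWidth).cast("long"),
                   (col("end_ip_address_int") / bucketWidth).cast("long"))))

// Each row in A falls in exactly one bucket, so the join cannot duplicate A rows.
val aBucketed = dsA.withColumn("bucket", (col("ip_address") / bucketWidth).cast("long"))

val joined = aBucketed.join(
    bBucketed,
    aBucketed("bucket") === bBucketed("bucket") &&
      aBucketed("ip_address").between(col("start_ip_address_int"), col("end_ip_address_int")),
    "left")
  .drop(bBucketed("bucket"))

The equality on bucket is what lets Spark pick an equi-join strategy; the
range predicate then only filters rows within a bucket.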


Re: K8S submit client vs. cluster

2021-08-12 Thread Mich Talebzadeh
OK, Amazon EKS is not much different from Google Kubernetes Engine (GKE).

When you submit a job, you need a reasonably powerful compute server to
submit it from. That is a separate host; you cannot submit from the K8s
cluster nodes themselves (at least, I am not aware that one can).

Anyway, you submit something like the below:

 spark-submit --verbose \
   --properties-file ${property_file} \
   --master k8s://https://$KUBERNETES_MASTER_IP:443 \
   --deploy-mode cluster \
   --name pytest \
   --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./pyspark_venv/bin/python \
   --py-files $CODE_DIRECTORY/DSBQ.zip \
   --conf spark.kubernetes.namespace=$NAMESPACE \
   --conf spark.executor.memory=5000m \
   --conf spark.network.timeout=300 \
   --conf spark.executor.instances=3 \
   --conf spark.kubernetes.driver.limit.cores=1 \
   --conf spark.driver.cores=1 \
   --conf spark.executor.cores=1 \
   --conf spark.executor.memory=2000m \
   --conf spark.kubernetes.driver.docker.image=${IMAGEGCP} \
   --conf spark.kubernetes.executor.docker.image=${IMAGEGCP} \
   --conf spark.kubernetes.container.image=${IMAGEGCP} \
   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-bq \
   --conf spark.driver.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true" \
   --conf spark.executor.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true" \
   --conf spark.sql.execution.arrow.pyspark.enabled="true" \
   $CODE_DIRECTORY/${APPLICATION}

This is a PySpark job and I have told Spark to run it in cluster mode. The
Docker image I built uses Spark 3.1.1 with Java 8; Java 11 would not work.


However, under the bonnet it runs in client mode:


+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")

+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.64.0.88 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner gs://axial-glow-224522-spark-on-k8s/codes/RandomDataBigQuery.py


So regardless, it runs in client mode. You can see this behaviour with the
--verbose switch:


 spark-submit --verbose


HTH


   view my Linkedin profile




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 12 Aug 2021 at 17:29, Bode, Meikel, NMA-CFD <
meikel.b...@bertelsmann.de> wrote:

> On EKS…
>
>
>
> *From:* Mich Talebzadeh 
> *Sent:* Donnerstag, 12. August 2021 15:47
> *To:* Bode, Meikel, NMA-CFD 
> *Cc:* user@spark.apache.org
> *Subject:* Re: K8S submit client vs. cluster
>
>
>
> Ok
>
>
>
> As I see it with PySpark even if it is submitted as cluster, it will be
> converted to client mode anyway
>
>
> Are you running this on AWS or GCP?
>
>
>
>view my Linkedin profile
> 
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
>
>
>
> On Thu, 12 Aug 2021 at 12:42, Bode, Meikel, NMA-CFD <
> meikel.b...@bertelsmann.de> wrote:
>
> Hi Mich,
>
>
>
> All PySpark.
>
>
>
> Best,
>
> Meikel
>
>
>
> *From:* Mich Talebzadeh 
> *Sent:* Donnerstag, 12. August 2021 13:41
> *To:* Bode, Meikel, NMA-CFD 
> *Cc:* user@spark.apache.org
> *Subject:* Re: K8S submit client vs. cluster
>
>
>
> Is this Spark or PySpark?
>
>
>
>
>
>
>view my Linkedin profile
> 
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from 

RE: K8S submit client vs. cluster

2021-08-12 Thread Bode, Meikel, NMA-CFD
On EKS...

From: Mich Talebzadeh 
Sent: Donnerstag, 12. August 2021 15:47
To: Bode, Meikel, NMA-CFD 
Cc: user@spark.apache.org
Subject: Re: K8S submit client vs. cluster

Ok

As I see it with PySpark even if it is submitted as cluster, it will be 
converted to client mode anyway


Are you running this on AWS or GCP?


 
   view my Linkedin profile



Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
damage or destruction of data or any other property which may arise from 
relying on this email's technical content is explicitly disclaimed. The author 
will in no case be liable for any monetary damages arising from such loss, 
damage or destruction.




On Thu, 12 Aug 2021 at 12:42, Bode, Meikel, NMA-CFD 
mailto:meikel.b...@bertelsmann.de>> wrote:
Hi Mich,

All PySpark.

Best,
Meikel

From: Mich Talebzadeh 
mailto:mich.talebza...@gmail.com>>
Sent: Donnerstag, 12. August 2021 13:41
To: Bode, Meikel, NMA-CFD 
mailto:meikel.b...@bertelsmann.de>>
Cc: user@spark.apache.org
Subject: Re: K8S submit client vs. cluster

Is this Spark or PySpark?





 
   view my Linkedin profile



Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
damage or destruction of data or any other property which may arise from 
relying on this email's technical content is explicitly disclaimed. The author 
will in no case be liable for any monetary damages arising from such loss, 
damage or destruction.




On Thu, 12 Aug 2021 at 12:35, Bode, Meikel, NMA-CFD 
mailto:meikel.b...@bertelsmann.de>> wrote:
Hi all,

If we schedule a spark job on k8s, how are volume mappings handled?

In client mode I would expect that the driver's volumes have to be mapped manually
in the pod template, while executor volumes are attached dynamically based on
submit parameters. Right...?

In cluster mode I would expect that volumes for drivers/executors are taken from
the submit command and attached to the pods accordingly. Right...?

Any hints appreciated,

Best,
Meikel


Re: [EXTERNAL] [Marketing Mail] Reading SPARK 3.1.x generated parquet in SPARK 2.4.x

2021-08-12 Thread Gourav Sengupta
Hi Saurabh,

a very big note of thanks from Gourav :)

Regards,
Gourav Sengupta

On Thu, Aug 12, 2021 at 4:16 PM Saurabh Gulati
 wrote:

> We had issues with this migration mainly because of changes in spark date
> calendars. See
> 
> We got this working by setting the below params:
>
> ("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "LEGACY"),
> ("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED"),
> ("spark.sql.legacy.parquet.int96RebaseModeInRead", "LEGACY"),
> ("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")
>
>
>
> But otherwise, it's a change for good. Performance seems better.
> Also, there were bugs in 3.0.1 which have been addressed in 3.1.1.
> --
> *From:* Gourav Sengupta 
> *Sent:* 05 August 2021 10:17
> *To:* user @spark 
> *Subject:* [EXTERNAL] [Marketing Mail] Reading SPARK 3.1.x generated
> parquet in SPARK 2.4.x
>
> *Caution! This email originated outside of FedEx. Please do not open
> attachments or click links from an unknown or suspicious origin*.
> Hi,
>
> we are trying to migrate some of the data lake pipelines to run in SPARK
> 3.x, whereas the dependent pipelines using those tables will still be
> running in SPARK 2.4.x for some time to come.
>
> Does anyone know of any issues that can happen:
> 1. when reading Parquet files written in 3.1.x in SPARK 2.4
> 2. when in the data lake some partitions have parquet files written in
> SPARK 2.4.x and some are in SPARK 3.1.x.
>
> Please note that there are no changes in schema, but later on we might end
> up adding or removing some columns.
>
> I will be really grateful for your kind help on this.
>
> Regards,
> Gourav Sengupta
>


Re: [EXTERNAL] [Marketing Mail] Reading SPARK 3.1.x generated parquet in SPARK 2.4.x

2021-08-12 Thread Saurabh Gulati
We had issues with this migration mainly because of changes in spark date 
calendars. 
See
We got this working by setting the below params:

("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "LEGACY"),
("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED"),
("spark.sql.legacy.parquet.int96RebaseModeInRead", "LEGACY"),
("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")


But otherwise, it's a change for good. Performance seems better.
Also, there were bugs in 3.0.1 which have been addressed in 3.1.1.
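
For reference, a sketch of how these can be applied when the session is built
(builder usage here is illustrative; the keys and values are exactly the ones
above):

import org.apache.spark.sql.SparkSession

// Sketch: apply the rebase settings at session creation time.
val spark = SparkSession.builder()
  .config("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "LEGACY")
  .config("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED")
  .config("spark.sql.legacy.parquet.int96RebaseModeInRead", "LEGACY")
  .config("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")
  .getOrCreate()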

From: Gourav Sengupta 
Sent: 05 August 2021 10:17
To: user @spark 
Subject: [EXTERNAL] [Marketing Mail] Reading SPARK 3.1.x generated parquet in 
SPARK 2.4.x

Caution! This email originated outside of FedEx. Please do not open attachments 
or click links from an unknown or suspicious origin.

Hi,

we are trying to migrate some of the data lake pipelines to run in SPARK 3.x,
whereas the dependent pipelines using those tables will still be running in
SPARK 2.4.x for some time to come.

Does anyone know of any issues that can happen:
1. when reading Parquet files written in 3.1.x in SPARK 2.4
2. when in the data lake some partitions have parquet files written in SPARK 
2.4.x and some are in SPARK 3.1.x.

Please note that there are no changes in schema, but later on we might end up 
adding or removing some columns.

I will be really grateful for your kind help on this.

Regards,
Gourav Sengupta


Re: K8S submit client vs. cluster

2021-08-12 Thread Mich Talebzadeh
Ok

As I see it with PySpark even if it is submitted as cluster, it will be
converted to client mode anyway

Are you running this on AWS or GCP?


   view my Linkedin profile




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 12 Aug 2021 at 12:42, Bode, Meikel, NMA-CFD <
meikel.b...@bertelsmann.de> wrote:

> Hi Mich,
>
>
>
> All PySpark.
>
>
>
> Best,
>
> Meikel
>
>
>
> *From:* Mich Talebzadeh 
> *Sent:* Donnerstag, 12. August 2021 13:41
> *To:* Bode, Meikel, NMA-CFD 
> *Cc:* user@spark.apache.org
> *Subject:* Re: K8S submit client vs. cluster
>
>
>
> Is this Spark or PySpark?
>
>
>
>
>
>
>view my Linkedin profile
> 
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
>
>
>
> On Thu, 12 Aug 2021 at 12:35, Bode, Meikel, NMA-CFD <
> meikel.b...@bertelsmann.de> wrote:
>
> Hi all,
>
>
>
> If we schedule a spark job on k8s, how are volume mappings handled?
>
>
>
> In client mode I would expect that the driver's volumes have to be mapped
> manually in the pod template, while executor volumes are attached dynamically
> based on submit parameters. Right…?
>
>
>
> In cluster mode I would expect that volumes for drivers/executors are taken
> from the submit command and attached to the pods accordingly. Right…?
>
>
>
> Any hints appreciated,
>
>
>
> Best,
>
> Meikel
>
>


RE: K8S submit client vs. cluster

2021-08-12 Thread Bode, Meikel, NMA-CFD
Hi Mich,

All PySpark.

Best,
Meikel

From: Mich Talebzadeh 
Sent: Donnerstag, 12. August 2021 13:41
To: Bode, Meikel, NMA-CFD 
Cc: user@spark.apache.org
Subject: Re: K8S submit client vs. cluster

Is this Spark or PySpark?





 
   view my Linkedin profile



Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
damage or destruction of data or any other property which may arise from 
relying on this email's technical content is explicitly disclaimed. The author 
will in no case be liable for any monetary damages arising from such loss, 
damage or destruction.




On Thu, 12 Aug 2021 at 12:35, Bode, Meikel, NMA-CFD 
mailto:meikel.b...@bertelsmann.de>> wrote:
Hi all,

If we schedule a spark job on k8s, how are volume mappings handled?

In client mode I would expect that the driver's volumes have to be mapped manually
in the pod template, while executor volumes are attached dynamically based on
submit parameters. Right...?

In cluster mode I would expect that volumes for drivers/executors are taken from
the submit command and attached to the pods accordingly. Right...?

Any hints appreciated,

Best,
Meikel


Re: K8S submit client vs. cluster

2021-08-12 Thread Mich Talebzadeh
Is this Spark or PySpark?



   view my Linkedin profile




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 12 Aug 2021 at 12:35, Bode, Meikel, NMA-CFD <
meikel.b...@bertelsmann.de> wrote:

> Hi all,
>
>
>
> If we schedule a spark job on k8s, how are volume mappings handled?
>
>
>
> In client mode I would expect that the driver's volumes have to be mapped
> manually in the pod template, while executor volumes are attached dynamically
> based on submit parameters. Right…?
>
>
>
> In cluster mode I would expect that volumes for drivers/executors are taken
> from the submit command and attached to the pods accordingly. Right…?
>
>
>
> Any hints appreciated,
>
>
>
> Best,
>
> Meikel
>


K8S submit client vs. cluster

2021-08-12 Thread Bode, Meikel, NMA-CFD
Hi all,

If we schedule a spark job on k8s, how are volume mappings handled?

In client mode I would expect that the driver's volumes have to be mapped manually
in the pod template, while executor volumes are attached dynamically based on
submit parameters. Right...?

In cluster mode I would expect that volumes for drivers/executors are taken from
the submit command and attached to the pods accordingly. Right...?
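
By submit parameters I mean the spark.kubernetes.*.volumes.* properties, e.g.
(a made-up example; the volume name "data", claim "my-pvc" and mount path
"/data" are placeholders):

spark-submit ... \
  --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.data.options.claimName=my-pvc \
  --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.data.mount.path=/data \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName=my-pvc \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path=/data \
  ...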

Any hints appreciated,

Best,
Meikel


Re: How can I config hive.metastore.warehouse.dir

2021-08-12 Thread eab...@163.com
 Hi,
I think you should set hive-site.xml before initializing the SparkSession;
Spark will then connect to the metastore and log something like this:
==
2021-08-12 09:21:21 INFO  HiveUtils:54 - Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
2021-08-12 09:21:22 INFO  metastore:376 - Trying to connect to metastore with URI thrift://hadoop001:9083
2021-08-12 09:21:22 WARN  UserGroupInformation:1535 - No groups available for user hdfs
2021-08-12 09:21:22 INFO  metastore:472 - Connected to metastore.
2021-08-12 09:21:22 INFO  SessionState:641 - Created local directory: /tmp/8bc342dd-aa0b-407b-b9ad-ff7ed3cd4076_resources
2021-08-12 09:21:22 INFO  SessionState:641 - Created HDFS directory: /tmp/hive/hdfs/8bc342dd-aa0b-407b-b9ad-ff7ed3cd4076
2021-08-12 09:21:22 INFO  SessionState:641 - Created local directory: /tmp/tempo/8bc342dd-aa0b-407b-b9ad-ff7ed3cd4076
2021-08-12 09:21:22 INFO  SessionState:641 - Created HDFS directory: /tmp/hive/hdfs/8bc342dd-aa0b-407b-b9ad-ff7ed3cd4076/_tmp_space.db
===
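
If you prefer not to ship a hive-site.xml, a rough sketch of passing the same
settings on the builder before the session exists (the URI and warehouse path
are copied from your mail; the app and table names are assumed):

import org.apache.spark.sql.SparkSession

// Sketch only: warehouse/metastore settings are read when the first
// SparkSession is created; sqlContext.setConf afterwards is too late,
// which is why your log shows SharedState falling back to a local path.
val spark = SparkSession.builder()
  .appName("write-to-hive")                                   // assumed name
  .config("hive.metastore.uris", "thrift://bigdser1:9083")    // from your mail
  .config("spark.sql.warehouse.dir", "/user/hive/warehouse")  // from your mail
  .enableHiveSupport()
  .getOrCreate()

// Then insert into the existing Hive table, e.g.:
// DF.write.mode(mode).insertInto("hivetest.chinese_part1")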

eab...@163.com
 
From: igyu
Date: 2021-08-12 11:33
To: user
Subject: How can I config hive.metastore.warehouse.dir
I need to write data to Hive with Spark:

val proper = new Properties
proper.setProperty("fs.defaultFS", "hdfs://nameservice1")
proper.setProperty("dfs.nameservices", "nameservice1")
proper.setProperty("dfs.ha.namenodes.nameservice1", "namenode337,namenode369")
proper.setProperty("dfs.namenode.rpc-address.nameservice1.namenode337", "bigdser1:8020")
proper.setProperty("dfs.namenode.rpc-address.nameservice1.namenode369", "bigdser5:8020")
proper.setProperty("dfs.client.failover.proxy.provider.nameservice1", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
proper.setProperty("hadoop.security.authentication", "Kerberos")
proper.setProperty("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem")
proper.setProperty("spark.sql.warehouse.dir", "/user/hive/warehouse")
proper.setProperty("hive.metastore.warehouse.dir", "/user/hive/warehouse")
proper.setProperty("hive.metastore.uris", "thrift://bigdser1:9083")
sparkSession.sqlContext.setConf(proper)
sparkSession.sqlContext.setConf("hive.exec.dynamic.partition", "true")
sparkSession.sqlContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

DF.write.format("jdbc")
  .option("timestampFormat", "/MM/dd HH:mm:ss ZZ")
  .options(cfg)
//  .partitionBy(partitions:_*)
  .mode(mode)
  .insertInto(table)
but I get an error:

21/08/12 11:25:07 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/D:/file/code/Java/jztsynctools/spark-warehouse/').
21/08/12 11:25:07 INFO SharedState: Warehouse path is 'file:/D:/file/code/Java/jztsynctools/spark-warehouse/'.
21/08/12 11:25:08 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
21/08/12 11:25:08 INFO Version: Elasticsearch Hadoop v7.10.2 [f53f4b7b2b]
21/08/12 11:25:08 INFO Utils: Supplied authorities: tidb4ser:11000
21/08/12 11:25:08 INFO Utils: Resolved authority: tidb4ser:11000
21/08/12 11:25:16 INFO UserGroupInformation: Login successful for user jztwk/had...@join.com using keytab file D:\file\jztwk.keytab. Keytab auto renewal enabled : false
login user: jztwk/had...@join.com (auth:KERBEROS)
21/08/12 11:25:25 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
Exception in thread "main" org.apache.spark.sql.AnalysisException: Table or view not found: hivetest.chinese_part1, the database hivetest doesn't exist.;
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:47)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:737)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:710)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:708)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at