Hello!

You can set the DB2 JDBC driver options in the JDBC connection string:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_rjvdsprp.html
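
For example, driver properties are appended to the URL after the database
name, introduced by a colon and each terminated with a semicolon (host, port,
database and credentials below are placeholders):

    # Placeholder connection details; each property=value pair ends with ";"
    jdbc_url = "jdbc:db2://db2host:50000/MYDB:user=dbuser;password=secret;"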


The DB2 JDBC driver has an option called "defaultIsolationLevel":
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_r0052038.html
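
For Read Uncommitted, a minimal PySpark sketch could look like this. One
assumption to verify against the IBM page above: the property takes the
java.sql.Connection isolation constants, so 1 means
TRANSACTION_READ_UNCOMMITTED, i.e. DB2 UR. Connection details are again
placeholders:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Assumed: 1 = java.sql.Connection.TRANSACTION_READ_UNCOMMITTED ("WITH UR")
    url = "jdbc:db2://db2host:50000/MYDB:defaultIsolationLevel=1;"

    input_df = (
        spark.read.format("jdbc")
        .option("url", url)
        .option("dbtable", "MYSCHEMA.MYTABLE")
        .option("user", "dbuser")
        .option("password", "secret")
        .load()
    )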

You might also want to try the "concurrentAccessResolution" option:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_r0052607.html
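
It can be combined with defaultIsolationLevel in the same URL. The numeric
values below are assumptions to double-check against the IBM page (1 should
select "use currently committed"):

    # Assumed values, please verify against the IBM documentation:
    # defaultIsolationLevel=2 (cursor stability / read committed),
    # concurrentAccessResolution=1 ("use currently committed")
    url = ("jdbc:db2://db2host:50000/MYDB:"
           "defaultIsolationLevel=2;concurrentAccessResolution=1;")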

See also
https://www.ibm.com/support/pages/how-set-isolation-level-db2-jdbc-database-connections
and https://www.idug.org/p/fo/et/thread=45083
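
One more remark on the predicates attempt in your code below: "predicates" is
not a .option() of the generic JDBC reader, which is why it is silently
ignored. It is a parameter of spark.read.jdbc() and takes a list of
WHERE-clause fragments, one per partition, so "with UR" does not belong there
either; the isolation level is better left to the driver URL. A minimal
sketch of the intended call shape (connection details are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Each list element becomes the WHERE clause of one partition's query.
    input_df = spark.read.jdbc(
        url="jdbc:db2://db2host:50000/MYDB:defaultIsolationLevel=1;",
        table="MYSCHEMA.MYTABLE",
        predicates=["PART_NR != '0'"],
        properties={"user": "dbuser", "password": "secret"},
    )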

Regards

Jörg Strebel


On 02.09.20 at 16:34, Filipa Sousa wrote:
>     Hello,
>
>     We are trying to read from an IBM DB2 database using a pyspark job.
>     We have a requirement to add an isolation level, Read Uncommitted (WITH 
> UR), to the JDBC queries when reading DB2 data.
>     We found the "isolationLevel" parameter in the Spark documentation, but 
> it apparently only applies to writing 
> (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html). Do you 
> know if there is a similar one for reading?
>
>     isolationLevel - The transaction isolation level, which applies to 
> current connection. It can be one of NONE, READ_COMMITTED, READ_UNCOMMITTED, 
> REPEATABLE_READ, or SERIALIZABLE, corresponding to standard transaction 
> isolation levels defined by JDBC's Connection object, with default of 
> READ_UNCOMMITTED. This option applies only to writing. Please refer the 
> documentation in java.sql.Connection.
>
>     Also, we tried putting "WITH UR" directly in the query, but since the 
> isolation clause must always be at the outermost level of the query, and 
> Spark always parenthesizes the query 
> (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html), it throws 
> an error.
>
>     The last thing we tried was to add this predicates option when reading 
> with Spark, but it is being ignored.
>     predicates = "PART_NR != '0' with UR"
>     input_df = (
>         self.spark.read.format("jdbc")
>         .option("url", self.db_settings["jdbc_url"])
>         .option("dbtable", db_table)
>         .option("user", self.db_settings["db_username"])
>         .option("password", self.db_settings["db_password"])
>         .option("predicates", predicates)
>         .option("fetchsize", self.fetch_size)
>     )
>
>     Do you have any advice on how we can do this?
>
>
> Best regards,
> Filipa Sousa
>
-- 
Jörg Strebel
Aachener Straße 2
80804 München


---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
