Hi, Another related question to this: has anyone tried transactions using
Oracle JDBC and Spark? How do you do it, given that the code will be
distributed across workers? Do I need to combine certain queries to make
sure they don't get distributed?
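
The pattern I've been considering, just a sketch and not something I've
verified: run each partition's statements on a single connection inside
foreachPartition, so a transaction never has to span workers. target_table
and its id column are made up here; jdbcDF and URL are from the code quoted
below.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.spark.api.java.function.ForeachPartitionFunction;
import org.apache.spark.sql.Row;

// One local transaction per partition: each executor opens its own
// connection, so statements that must commit together stay on one worker.
jdbcDF.foreachPartition((ForeachPartitionFunction<Row>) rows -> {
    Connection conn = DriverManager.getConnection(URL, "some_user", "some_pwd");
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO target_table (id) VALUES (?)")) {  // hypothetical table
        conn.setAutoCommit(false);  // one transaction for the whole partition
        while (rows.hasNext()) {
            ps.setString(1, rows.next().getString(0));
            ps.addBatch();
        }
        ps.executeBatch();
        conn.commit();    // all rows in this partition commit, or none do
    } catch (Exception e) {
        conn.rollback();  // undo this partition's statements before rethrowing
        throw e;
    } finally {
        conn.close();
    }
});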

Regards,
Leena

On Fri, Jul 21, 2017 at 1:50 PM, Cassa L <lcas...@gmail.com> wrote:

> Hi Xiao,
> I am trying the JSON sample table provided with Oracle 12c. It is from the
> documentation: https://docs.oracle.com/database/121/ADXDB/json.htm#ADXDB6371
>
> CREATE TABLE j_purchaseorder
>    (id          RAW (16) NOT NULL,
>     date_loaded TIMESTAMP WITH TIME ZONE,
>     po_document CLOB
>     CONSTRAINT ensure_json CHECK (po_document IS JSON));
>
> The data I inserted was:
>
> { "PONumber"             : 1600,
>   "Reference"            : "ABULL-20140421",
>   "Requestor"            : "Alexis Bull",
>   "User"                 : "ABULL",
>   "CostCenter"           : "A50",
>   "ShippingInstructions" : { "name"   : "Alexis Bull",
>                              "Address": { "street"  : "200 Sporting Green",
>                                           "city"    : "South San Francisco",
>                                           "state"   : "CA",
>                                           "zipCode" : 99236,
>                                           "country" : "United States of 
> America" },
>                              "Phone" : [ { "type" : "Office", "number" : 
> "909-555-7307 <(909)%20555-7307>" },
>                                          { "type" : "Mobile", "number" : 
> "415-555-1234 <(415)%20555-1234>" } ] },
>   "Special Instructions" : null,
>   "AllowPartialShipment" : false,
>   "LineItems"            : [ { "ItemNumber" : 1,
>                                "Part"       : { "Description" : "One Magic 
> Christmas",
>                                                 "UnitPrice"   : 19.95,
>                                                 "UPCCode"     : 13131092899 },
>                                "Quantity"   : 9.0 },
>                              { "ItemNumber" : 2,
>                                "Part"       : { "Description" : "Lethal 
> Weapon",
>                                                 "UnitPrice"   : 19.95,
>                                                 "UPCCode"     : 85391628927 },
>                                "Quantity"   : 5.0 } ] }
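>
> (For reference, a rough sketch of how a row like this can be inserted; I'm
> omitting my exact statement, so SYS_GUID(), SYSTIMESTAMP, and the poJson
> variable are placeholders:)
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.PreparedStatement;
>
> // Hypothetical loader: id from SYS_GUID(), load time from SYSTIMESTAMP,
> // and the JSON document above bound as the CLOB value.
> try (Connection conn = DriverManager.getConnection(URL, "some_user", "some_pwd");
>      PreparedStatement ps = conn.prepareStatement(
>              "INSERT INTO j_purchaseorder (id, date_loaded, po_document)"
>              + " VALUES (SYS_GUID(), SYSTIMESTAMP, ?)")) {
>     ps.setString(1, poJson);  // poJson holds the JSON document shown above
>     ps.executeUpdate();
> }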
>
>
> On Fri, Jul 21, 2017 at 10:12 AM, Xiao Li <gatorsm...@gmail.com> wrote:
>
>> Could you share the schema of your Oracle table and open a JIRA?
>>
>> Thanks!
>>
>> Xiao
>>
>>
>> 2017-07-21 9:40 GMT-07:00 Cassa L <lcas...@gmail.com>:
>>
>>> I am using 2.2.0. I resolved the problem by removing SELECT * and listing
>>> the column names in the SELECT statement. That works, but I'm wondering why
>>> SELECT * does not.
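>>>
>>> (My best guess from the schema above: JDBC type -101 is Oracle's TIMESTAMP
>>> WITH TIME ZONE (OracleTypes.TIMESTAMPTZ), which Spark's JDBC source has no
>>> mapping for, so SELECT * trips over date_loaded. A sketch of the
>>> workaround; the cast is my assumption:)
>>>
>>> // Name the columns explicitly and cast the TIMESTAMP WITH TIME ZONE
>>> // column (JDBC type -101) to plain TIMESTAMP, which Spark can map.
>>> final String dbTable =
>>>         "(select id, cast(date_loaded as timestamp) date_loaded,"
>>>         + " po_document from j_purchaseorder)";
>>>
>>> Dataset<Row> jdbcDF = spark.read().jdbc(URL, dbTable, connectionProperties);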
>>>
>>> Regards,
>>> Leena
>>>
>>> On Fri, Jul 21, 2017 at 8:21 AM, Xiao Li <gatorsm...@gmail.com> wrote:
>>>
>>>> Could you try 2.2? We fixed multiple Oracle-related issues in the
>>>> latest release.
>>>>
>>>> Thanks
>>>>
>>>> Xiao
>>>>
>>>>
>>>> On Wed, 19 Jul 2017 at 11:10 PM Cassa L <lcas...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> I am trying to use Spark 2.0 to read from an Oracle (12.1) table. The
>>>>> table has JSON data. I am getting the exception below in my code. Any
>>>>> clue?
>>>>>
>>>>> >>>>>
>>>>> java.sql.SQLException: Unsupported type -101
>>>>>
>>>>> at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getCatalystType(JdbcUtils.scala:233)
>>>>> at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:290)
>>>>> at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:290)
>>>>> at scala.Option.getOrElse(Option.scala:121)
>>>>> at ...
>>>>>
>>>>> ==========
>>>>> My code is very simple.
>>>>>
>>>>> SparkSession spark = SparkSession
>>>>>         .builder()
>>>>>         .appName("Oracle Example")
>>>>>         .master("local[4]")
>>>>>         .getOrCreate();
>>>>>
>>>>> final Properties connectionProperties = new Properties();
>>>>> connectionProperties.put("user", "some_user");
>>>>> connectionProperties.put("password", "some_pwd");
>>>>>
>>>>> // The subquery is sent to Oracle as the JDBC "table"
>>>>> final String dbTable =
>>>>>         "(select * from MySampleTable)";
>>>>>
>>>>> Dataset<Row> jdbcDF = spark.read().jdbc(URL, dbTable,
>>>>> connectionProperties);
>>>>>
>>>>>
>>>
>>
>
