Hi Ted,
I have moved to Spark 1.6.1, but the same issue is still outstanding:
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_25)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
SQL context available as sqlContext.
scala> sql("describe formatted test.t14").collect.foreach(println)
16/03/26 08:51:23 ERROR Hive: Table test not found: default.test table not found
16/03/26 08:51:23 ERROR Hive: Table test not found: default.test table not found
[# col_name data_type comment ]
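
Incidentally, once the rows are collected, the CreateTime line can be pulled out programmatically rather than read by eye. Below is a minimal sketch in plain Scala (it runs without Spark; `DescribeParser` and `fieldValue` are hypothetical names of mine, and the sample rows just mimic the DESCRIBE FORMATTED output quoted further down):

```scala
object DescribeParser {
  // Each row printed by sql("describe formatted db.table").collect is a
  // single string such as "CreateTime:   Fri Mar 25 22:13:44 GMT 2016".
  // Return the value of the first row that starts with the given key.
  def fieldValue(rows: Seq[String], key: String): Option[String] =
    rows.collectFirst {
      case row if row.trim.startsWith(key) => row.trim.stripPrefix(key).trim
    }

  def main(args: Array[String]): Unit = {
    // Sample rows shaped like the DESCRIBE FORMATTED output in this thread.
    val rows = Seq(
      "# Detailed Table Information",
      "Database:            test",
      "CreateTime:          Fri Mar 25 22:13:44 GMT 2016",
      "LastAccessTime:      UNKNOWN"
    )
    // prints Fri Mar 25 22:13:44 GMT 2016
    println(fieldValue(rows, "CreateTime:").getOrElse("not found"))
  }
}
```

In the shell you would feed it the collected rows, e.g. `DescribeParser.fieldValue(sql("describe formatted test.t14").collect.map(_.mkString), "CreateTime:")`.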
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
On 25 March 2016 at 22:49, Mich Talebzadeh <[email protected]>
wrote:
> Mine is version 1.5.2, Ted.
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 25 March 2016 at 22:45, Ted Yu <[email protected]> wrote:
>
>> Strange: the JIRAs below were marked Fixed in 1.5.0
>>
>> On Fri, Mar 25, 2016 at 3:43 PM, Mich Talebzadeh <
>> [email protected]> wrote:
>>
>>> Is this 1.6, Ted?
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 25 March 2016 at 22:40, Ted Yu <[email protected]> wrote:
>>>
>>>> Looks like database support was fixed by:
>>>>
>>>> [SPARK-7943] [SPARK-8105] [SPARK-8435] [SPARK-8714] [SPARK-8561] Fixes
>>>> multi-database support
>>>>
>>>> On Fri, Mar 25, 2016 at 3:35 PM, Ashok Kumar <[email protected]>
>>>> wrote:
>>>>
>>>>> 1.5.2 Ted.
>>>>>
>>>>> I don't know where those two lines come from. It finds and returns the
>>>>> table info OK.
>>>>>
>>>>> HTH
>>>>>
>>>>>
>>>>> On Friday, 25 March 2016, 22:32, Ted Yu <[email protected]> wrote:
>>>>>
>>>>>
>>>>> Which release of Spark do you use, Mich ?
>>>>>
>>>>> In the master branch, the message is more accurate
>>>>> (sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/NoSuchItemException.scala):
>>>>>
>>>>> override def getMessage: String = s"Table $table not found in
>>>>> database $db"
>>>>>
>>>>>
>>>>> On Fri, Mar 25, 2016 at 3:21 PM, Mich Talebzadeh <
>>>>> [email protected]> wrote:
>>>>>
>>>>> You can use DESCRIBE FORMATTED <DATABASE>.<TABLE_NAME> to get that
>>>>> info.
>>>>>
>>>>> This is based on the same command in Hive; however, it throws two
>>>>> erroneous error lines as shown below (I don't see them in Hive's own
>>>>> DESCRIBE).
>>>>>
>>>>> Example
>>>>>
>>>>> scala> sql("describe formatted test.t14").collect.foreach(println)
>>>>> 16/03/25 22:32:38 ERROR Hive: Table test not found: test.test table not found
>>>>> 16/03/25 22:32:38 ERROR Hive: Table test not found: test.test table not found
>>>>> [# col_name data_type comment ]
>>>>> [ ]
>>>>> [invoicenumber int ]
>>>>> [paymentdate date ]
>>>>> [net decimal(20,2) ]
>>>>> [vat decimal(20,2) ]
>>>>> [total decimal(20,2) ]
>>>>> [ ]
>>>>> [# Detailed Table Information ]
>>>>> [Database: test ]
>>>>> [Owner: hduser ]
>>>>> [CreateTime:          Fri Mar 25 22:13:44 GMT 2016 ]
>>>>> [LastAccessTime:      UNKNOWN ]
>>>>> [Protect Mode:        None ]
>>>>> [Retention:           0 ]
>>>>> [Location:            hdfs://rhes564:9000/user/hive/warehouse/test.db/t14 ]
>>>>> [Table Type: MANAGED_TABLE ]
>>>>> [Table Parameters: ]
>>>>> [ COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"}]
>>>>> [ comment from csv file from excel sheet]
>>>>> [ numFiles 2 ]
>>>>> [ orc.compress ZLIB ]
>>>>> [ totalSize 1090 ]
>>>>> [ transient_lastDdlTime 1458944025 ]
>>>>> [ ]
>>>>> [# Storage Information ]
>>>>> [SerDe Library:       org.apache.hadoop.hive.ql.io.orc.OrcSerde ]
>>>>> [InputFormat:         org.apache.hadoop.hive.ql.io.orc.OrcInputFormat ]
>>>>> [OutputFormat:        org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat ]
>>>>> [Compressed: No ]
>>>>> [Num Buckets: -1 ]
>>>>> [Bucket Columns: [] ]
>>>>> [Sort Columns: [] ]
>>>>> [Storage Desc Params: ]
>>>>> [ serialization.format 1 ]
>>>>>
>>>>> HTH
>>>>>
>>>>> Dr Mich Talebzadeh
>>>>>
>>>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>>
>>>>> http://talebzadehmich.wordpress.com
>>>>>
>>>>>
>>>>> On 25 March 2016 at 22:12, Ashok Kumar <[email protected]>
>>>>> wrote:
>>>>>
>>>>> Experts,
>>>>>
>>>>> How can I find out when a table was created in a Hive database, using
>>>>> the Spark shell?
>>>>>
>>>>> Thanks
>>>>>