Hi,
So, is this a bug, or something I need to fix? If it's our issue, how can
we fix it? Please help.
Best,
On Sun, Feb 11, 2018 at 3:49 AM, Shmuel Blitz wrote:
Your table is missing a "PARTITIONED BY" section.
Spark 2.x saves the partition information in the TBLPROPERTIES section.
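To make the distinction concrete, here is a hypothetical sketch (table and column names are my own examples, not from this thread): a table Hive can read and partition-prune carries an explicit PARTITIONED BY clause in its DDL, while a table written by Spark's saveAsTable may keep that information only in Spark-specific table properties.

```sql
-- Hive-native DDL: the partition column is visible to Hive itself.
CREATE TABLE sales (id INT, amount DECIMAL(10,2))
PARTITIONED BY (dt STRING)
STORED AS PARQUET;

-- By contrast, SHOW CREATE TABLE on a Spark-written table may show no
-- PARTITIONED BY clause at all, only Spark-specific properties such as:
-- TBLPROPERTIES (
--   'spark.sql.sources.provider' = 'parquet',
--   'spark.sql.sources.schema.numPartCols' = '1',
--   ...
-- )
-- which Hive does not interpret.
```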
On Sun, Feb 11, 2018 at 10:41 AM, Deepak Sharma wrote:
I can see it's trying to read the parquet and failing while decompressing
using snappy:
parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
So the table looks good, but this needs to be fixed before you can query the
data in hive.
Thanks
Deepak
On Sun, Feb 11, 2018 at 1:
When I do that and then do a select, it is full of errors. I think Hive is
failing to read the table.
select * from mine;
OK
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
Why did it matter?
So, are you saying Spark SQL cannot create tables in HIVE?
If I need to use HiveContext, how should I change my code?
Best,
On Sun, Feb 11, 2018 at 3:09 AM, Deepak Sharma wrote:
There was a typo. Instead of:
alter table mine set locations "hdfs://localhost:8020/user/hive/warehouse/mine";
use:
alter table mine set location "hdfs://localhost:8020/user/hive/warehouse/mine";
On Sun, Feb 11, 2018 at 1:38 PM, Deepak Sharma wrote:
Sorry Mich. I did not create it using an explicit create statement. Instead I
used the below:
// Created a data frame loading from MySQL
passion_df.write.saveAsTable("default.mine")
After logging into HIVE, HIVE shows the table, but I cannot select the data.
On Sun, Feb 11, 2018 at 3:08 AM, ☼ R Nair (रविश
I think this is the problem here.
You created the table using the Spark SQL context and not the Hive SQL context.
Thanks
Deepak
On Sun, Feb 11, 2018 at 1:36 PM, Mich Talebzadeh wrote:
Try this in hive:
alter table mine set locations "hdfs://localhost:8020/user/hive/warehouse/mine";
Thanks
Deepak
On Sun, Feb 11, 2018 at 1:24 PM, ☼ R Nair (रविशंकर नायर) <ravishankar.n...@gmail.com> wrote:
I have created it using Spark SQL. Then I want to retrieve it from HIVE.
That's where the issue is. I can still retrieve from Spark, no problems. Why
is HIVE not giving me the data?
On Sun, Feb 11, 2018 at 3:06 AM, Mich Talebzadeh wrote:
Simple question: have you created the table through Spark SQL or Hive?
I recall similar issues a while back.
val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
//val sqlContext = new HiveContext(sc)
println ("\nStarted at"); spark.sql("SELECT FROM_unixtime(unix_timestamp(),
'dd/MM/yyy
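As an aside, in Spark 2.x the HiveContext used above is deprecated; the equivalent is a SparkSession built with Hive support. A sketch, assuming hive-site.xml is on the classpath as described elsewhere in the thread:

```scala
import org.apache.spark.sql.SparkSession

// enableHiveSupport() wires the session to the Hive metastore; the plain
// spark-shell does this automatically when hive-site.xml is in conf/.
val spark = SparkSession.builder()
  .appName("hive-interop-check")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("SHOW TABLES").show()
```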
Hi,
Here you go:
hive> show create table mine;
OK
CREATE TABLE `mine`(
`policyid` int,
`statecode` string,
`socialid` string,
`county` string,
`eq_site_limit` decimal(10,2),
`hu_site_limit` decimal(10,2),
`fl_site_limit` decimal(10,2),
`fr_site_limit` decimal(10,2),
`tiv_2014` de
Please run the following command and paste the result:
SHOW CREATE TABLE <table_name>
On Sun, Feb 11, 2018 at 7:56 AM, ☼ R Nair (रविशंकर नायर) <ravishankar.n...@gmail.com> wrote:
No, No luck.
Thanks
On Sun, Feb 11, 2018 at 12:48 AM, Deepak Sharma wrote:
In hive cli:
msck repair table <table_name>;
Thanks
Deepak
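For readers unfamiliar with the command: MSCK REPAIR TABLE asks Hive to scan the table's HDFS location and register any partition directories that exist on disk but are missing from the metastore, which is a common gap when partitions were written by an external tool such as Spark. A sketch of the round trip (the table name is a placeholder):

```sql
-- Partitions written straight to HDFS are invisible to Hive until the
-- metastore learns about them.
SHOW PARTITIONS my_table;      -- may come back empty

-- Scan the table location and add unregistered partition directories.
MSCK REPAIR TABLE my_table;

SHOW PARTITIONS my_table;      -- should now list the discovered partitions
```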
On Feb 11, 2018 11:14, "☼ R Nair (रविशंकर नायर)" wrote:
No, can you please explain the command? Let me try now.
Best,
On Sun, Feb 11, 2018 at 12:40 AM, Deepak Sharma wrote:
I am not sure about the exact issue, but I see you are partitioning while
writing from spark.
Did you try msck repair on the table before reading it in hive?
Thanks
Deepak
On Feb 11, 2018 11:06, "☼ R Nair (रविशंकर नायर)" wrote:
All,
Thanks for the inputs. Again I am not successful. I think we need to
resolve this, as this is a very common requirement.
Please go through my complete code:
STEP 1: Started Spark shell as spark-shell --master yarn
STEP 2: The following code is given as input to the Spark shell
import org.ap
It's possible that the format of your table is not compatible with your
version of hive, so Spark saved it in a way such that only Spark can read
it. When this happens it prints out a very visible warning letting you know
this has happened.
We've seen it most frequently when trying to save a parque
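One workaround consistent with this explanation (a sketch under assumptions: the column names are borrowed from the SHOW CREATE TABLE output elsewhere in the thread, the table name mine_hive is hypothetical, and df stands for the DataFrame being saved) is to create the table with Hive-compatible DDL first and insert into it, rather than letting saveAsTable choose a Spark-specific layout:

```scala
// Assumes a Hive-enabled SparkSession `spark` and a DataFrame `df`.
spark.sql("""
  CREATE TABLE IF NOT EXISTS default.mine_hive (
    policyid INT,
    statecode STRING
  ) STORED AS PARQUET
""")

// insertInto writes into the existing Hive-format table by column
// position, so the selected columns must line up with the DDL above.
df.select("policyid", "statecode").write.insertInto("default.mine_hive")
```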
Ravi,
Can you send the result of
Show create table your_table_name
Thanks
Prakash
On Feb 9, 2018 8:20 PM, "☼ R Nair (रविशंकर नायर)" <ravishankar.n...@gmail.com> wrote:
but create a separate dataframe object
customer_df_orc = spark.read.orc("/data/tpch/customer.orc")
# reference the newly created dataframe object and create a tempView for QA purposes
customer_df_orc.createOrReplaceTempView("customer")
# reference the sparkSession class and SQL method in order to issue SQL statements to the materialized view
spark.sql("SELECT * FROM customer LIMIT 10").show()
From: "☼ R Nair (रविशंकर नायर)"
Date: Friday, February 9, 2018 at 7:03 AM
To: "user
An update (sorry, I missed this):
When I do
passion_df.createOrReplaceTempView("sampleview")
spark.sql("create table sampletable as select * from sampleview")
now I can see the table and can query it as well.
So why does this work from Spark while the other method discussed below does not?
Thanks
On Fri, Feb
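A plausible reading of the difference described above (my interpretation, not stated outright in the thread): a plain CREATE TABLE ... AS SELECT issued through spark.sql in a Hive-enabled session creates a Hive serde table, while DataFrameWriter.saveAsTable defaults to a Spark datasource table whose layout Hive may not understand. Side by side, using the names from the messages above:

```scala
// Path that worked: CTAS through SQL yields a Hive-compatible table.
passion_df.createOrReplaceTempView("sampleview")
spark.sql("CREATE TABLE sampletable AS SELECT * FROM sampleview")

// Path that failed in Hive: saveAsTable defaults to a Spark datasource
// table, with the format recorded in TBLPROPERTIES rather than in
// Hive serde metadata.
passion_df.write.saveAsTable("default.mine")
```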
All,
It has been three days continuously that I am on this issue. Not getting any
clue.
Environment: Spark 2.2.x, all configurations are correct. hive-site.xml is
in Spark's conf.
1) Step 1: I created a data frame DF1 reading a csv file.
2) Did manipulations on DF1. The resulting frame is passion_df.
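The first two steps can be sketched as follows (the file path and reader options are placeholders; the original message does not show them):

```scala
// Step 1 sketch: read the CSV into DF1.
val df1 = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/path/to/input.csv")

// Step 2 sketch: apply manipulations; passion_df is the resulting frame,
// which is then persisted with saveAsTable as shown earlier in the thread.
val passion_df = df1
passion_df.write.saveAsTable("default.mine")
```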