[ https://issues.apache.org/jira/browse/SPARK-20394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-20394.
------------------------------------
    Resolution: Workaround

Great. I doubt we'll fix this in 1.6 at this point, and I believe this is fixed 
in 2.x, so let's close this.

> Replication factor value not changing properly
> ----------------------------------------------
>
>                 Key: SPARK-20394
>                 URL: https://issues.apache.org/jira/browse/SPARK-20394
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, Spark Submit
>    Affects Versions: 1.6.0
>            Reporter: Kannan Subramanian
>
> I am saving a SparkSQL DataFrame to a persistent Hive table using the steps below
> (a code sketch of these steps follows at the end of this description):
> a) registerTempTable on the DataFrame to expose it as a temp table
> b) CREATE TABLE <table name> (cols ...) PARTITIONED BY (col1, col2) STORED AS
> PARQUET
> c) INSERT INTO <table name> PARTITION (col1, col2) SELECT * FROM tempTable
> I have set dfs.replication to "1" on the HiveContext object, but it does not take
> effect consistently: the replication factor is 1 for about 80% of the generated
> Parquet files on HDFS, while the remaining 20% keep the default replication of 3.
> I am not sure why the setting is not applied to all of the generated Parquet
> files. Please let me know if you have any suggestions or solutions.
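>
> A minimal Scala sketch of steps a-c above (Spark 1.6 HiveContext API, e.g. in
> spark-shell, where sc is the SparkContext). The DataFrame df, the table name
> target_table, and the column names are placeholders, not taken from the actual
> job:
>
> import org.apache.spark.sql.hive.HiveContext
>
> val hiveContext = new HiveContext(sc)
> // The setting from the report: ask HDFS for single-copy writes.
> hiveContext.setConf("dfs.replication", "1")
> // Dynamic-partition inserts typically also need these Hive settings.
> hiveContext.setConf("hive.exec.dynamic.partition", "true")
> hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
>
> // a) register the DataFrame as a temporary table
> df.registerTempTable("tempTable")
>
> // b) create the partitioned, Parquet-backed target table
> hiveContext.sql(
>   """CREATE TABLE target_table (value STRING)
>     |PARTITIONED BY (col1 STRING, col2 STRING)
>     |STORED AS PARQUET""".stripMargin)
>
> // c) insert into the target partitions from the temp table
> // (partition columns must come last in the SELECT list)
> hiveContext.sql(
>   """INSERT INTO TABLE target_table PARTITION (col1, col2)
>     |SELECT value, col1, col2 FROM tempTable""".stripMargin)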


