Harish commented on SPARK-17908:

Yes, your code structure is the same as mine, but I have 70M records with 1000 
columns. It works with simple joins like the one above; the problem appears when 
you modify the DF multiple times. I have been seeing this error since 1.6.0 but 
didn't raise it because I couldn't prove it with a working use case. It happens 
frequently with my code, so I tried the rename workaround.

Here are my steps:
df = df.select('key1', 'key2', 'key3', 'val', 'total')   # ~70 million records
df = df.withColumn('key2', func.lit('ABC'))
df1 = df.groupBy('key1', 'key2', 'key3').agg(func.count(func.col('val')).alias('total'))
df1 = df1.withColumnRenamed('key2', 'key2')   # rename to the same name (the workaround)
df3 = df.join(df1, ['key1', 'key2', 'key3'])\
        .withColumn('newcol', func.col('val') / func.col('total'))
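
For reference, a minimal self-contained sketch of the same pattern on a tiny made-up DataFrame (written against the 2.0 SparkSession API; the data and column values are placeholders, and unlike my real table this df does not already carry a 'total' column, matching the structure in the original report below):

from pyspark.sql import SparkSession
import pyspark.sql.functions as func

spark = SparkSession.builder.master('local[*]').appName('SPARK-17908-repro').getOrCreate()

# Tiny stand-in for the real 70M-row, 1000-column DataFrame.
df = spark.createDataFrame(
    [('a', 'x', 1, 10.0), ('a', 'x', 2, 20.0), ('b', 'y', 1, 30.0)],
    ['key1', 'key2', 'key3', 'val'])

# Overwrite key2 with a constant, as in the steps above.
df = df.withColumn('key2', func.lit('ABC'))

# Aggregate, then rename key2 to the same name before the join (the workaround).
df1 = df.groupBy('key1', 'key2', 'key3').agg(func.count(func.col('val')).alias('total'))
df1 = df1.withColumnRenamed('key2', 'key2')

df3 = (df.join(df1, ['key1', 'key2', 'key3'])
         .withColumn('newcol', func.col('val') / func.col('total')))
df3.show()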

I just wanted to see if anyone else has observed this behavior. I will try to find 
a code sample to prove this issue; if I can't within another 1-2 days, I will mark 
it as not reproducible.

> Column names corrupted in PySpark DataFrame groupBy
> ---------------------------------------------------
>                 Key: SPARK-17908
>                 URL: https://issues.apache.org/jira/browse/SPARK-17908
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.0, 1.6.1, 1.6.2, 2.0.0, 2.0.1
>            Reporter: Harish
>            Priority: Minor
> I have a DF, say df:
> df1 = df.groupBy('key1', 'key2',
> 'key3').agg(func.count(func.col('val')).alias('total'))
> df3 = df.join(df1, ['key1', 'key2', 'key3'])\
>              .withColumn('newcol', func.col('val')/func.col('total'))
> I am getting "key2 is not present in df1", which is not true because
> df1.show() displays the data with key2.
> Then I added this code before the join -- df1 = df1.withColumnRenamed('key2', 'key2') --
> renaming with the same name. Then it works.
> The stack trace says the column is missing, but it is not.
