Sean Owen commented on SPARK-17908:

You must be doing something different from what you show, because what you show 
doesn't even quite compile. Here I adapted your example from the Python example 
in the docs and ran it successfully on 2.0.1:

# Run in the pyspark shell, where `spark` (a SparkSession) is predefined
import pyspark.sql.functions as func
from pyspark.sql.types import *

sc = spark.sparkContext
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: (p[0], p[1].strip()))

# Build the schema programmatically, as in the docs example
schemaString = "name age"
fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)
df = spark.createDataFrame(people, schema)

# Reproduce the reported groupBy + join + withColumn sequence
df1 = df.groupBy('name', 'age').agg(func.count(func.col('age')).alias('total'))
df3 = df.join(df1, ['name', 'age']).withColumn('newcol', func.col('age') / func.col('total'))
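
For anyone rerunning this outside the shell, here is a self-contained variant; the local-master session and the inline rows (standing in for people.txt) are my own additions for illustration:

from pyspark.sql import SparkSession
import pyspark.sql.functions as func

# Standalone session; the snippet above assumes the pyspark shell's predefined `spark`
spark = SparkSession.builder.master("local[2]").appName("spark-17908").getOrCreate()

# Made-up rows standing in for people.txt
df = spark.createDataFrame([("Michael", "29"), ("Andy", "30"), ("Justin", "19")],
                           ["name", "age"])

df1 = df.groupBy('name', 'age').agg(func.count(func.col('age')).alias('total'))
df3 = df.join(df1, ['name', 'age']).withColumn('newcol', func.col('age') / func.col('total'))
df3.show()  # join and withColumn complete without a missing-column error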

> Column names corrupted in PySpark dataframe groupBy
> ---------------------------------------------------
>                 Key: SPARK-17908
>                 URL: https://issues.apache.org/jira/browse/SPARK-17908
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.0, 1.6.1, 1.6.2, 2.0.0, 2.0.1
>            Reporter: Harish
>            Priority: Minor
> I have a DataFrame, say df:
> df1 = df.groupBy('key1', 'key2',
> 'key3').agg(func.count(func.col('val')).alias('total'))
> df3 = df.join(df1, ['key1', 'key2', 'key3'])\
>              .withColumn('newcol', func.col('val')/func.col('total'))
> I am getting "key2 is not present in df1", which is not true, because df1.show()
> displays the data with key2.
> Then I added this code before the join -- df1 = df1.withColumnRenamed('key2', 'key2'),
> i.e. renamed with the same name. Then it works.
> The stack trace says the column is missing, but it is not.
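
For reference, a runnable sketch of the reporter's no-op rename workaround; the column names key1/key2/key3/val come from the report, while the data and session setup are made up for illustration:

from pyspark.sql import SparkSession
import pyspark.sql.functions as func

spark = SparkSession.builder.master("local[2]").appName("rename-workaround").getOrCreate()

# Made-up rows using the column names from the report
df = spark.createDataFrame([("a", "x", "p", 1.0), ("a", "x", "p", 2.0), ("b", "y", "q", 3.0)],
                           ["key1", "key2", "key3", "val"])

df1 = df.groupBy('key1', 'key2', 'key3').agg(func.count(func.col('val')).alias('total'))
df1 = df1.withColumnRenamed('key2', 'key2')  # the reporter's no-op rename workaround
df3 = df.join(df1, ['key1', 'key2', 'key3'])\
        .withColumn('newcol', func.col('val') / func.col('total'))
df3.show()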
