In earlier versions you should be able to use callUdf or callUDF (depending
on the version) to call the Hive function "concat".
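
For Spark 1.4.x a rough, untested sketch would be something like the below
(assuming the DataFrame comes from a HiveContext so the Hive "concat" UDF
resolves; column names taken from the mails further down):

import org.apache.spark.sql.functions.{callUdf, col, lit}

// callUdf("name", cols...) in 1.4.x; it was renamed callUDF in 1.5+
val new_df = old_df.select(callUdf("concat", col("String_Column"), lit("00:00:000")))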

On Sun, Dec 13, 2015 at 3:05 AM, Yanbo Liang <yblia...@gmail.com> wrote:

> Sorry, it was added in 1.5.0.
>
> 2015-12-13 2:07 GMT+08:00 Satish <jsatishchan...@gmail.com>:
>
>> Hi,
>> Will the below mentioned snippet work for Spark 1.4.0
>>
>> Thanks for your inputs
>>
>> Regards,
>> Satish
>> ------------------------------
>> From: Yanbo Liang <yblia...@gmail.com>
>> Sent: 12-12-2015 20:54
>> To: satish chandra j <jsatishchan...@gmail.com>
>> Cc: user <user@spark.apache.org>
>> Subject: Re: Concatenate a string to a Column of type string in DataFrame
>>
>> Hi Satish,
>>
>> You can refer the following code snippet:
>> df.select(concat(col("String_Column"), lit("00:00:000")))
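>>
>> For completeness, a rough sketch with the needed import (the alias at the
>> end is just illustrative, assuming you want to keep the column name):
>>
>> import org.apache.spark.sql.functions.{concat, col, lit}
>>
>> val new_df = df.select(concat(col("String_Column"), lit("00:00:000")).as("String_Column"))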
>>
>> Yanbo
>>
>> 2015-12-12 16:01 GMT+08:00 satish chandra j <jsatishchan...@gmail.com>:
>>
>>> HI,
>>> I am trying to update a column value in a DataFrame. When incrementing a
>>> column of integer data type, the below code works:
>>>
>>> val new_df=old_df.select(df("Int_Column")+10)
>>>
>>> If I apply a similar approach to append a string to a column of string
>>> datatype as below, it does not error out but returns only "null" values:
>>>
>>> val new_df=old_df.select(df("String_Column")+"00:00:000")
>>>                          OR
>>> val dt ="00:00:000"
>>> val new_df=old_df.select(df("String_Column")+toString(dt))
>>>
>>> Please suggest an approach to update a column value of datatype String.
>>> Ex: a column value of '20-10-2015' should become '20-10-201500:00:000'
>>> after the update.
>>>
>>> Note: the transformation has to create a new DataFrame from the old
>>> DataFrame.
>>>
>>> Regards,
>>> Satish Chandra
>>>
>>>
>>>
>>>
>>
>>
>
