That’s right. Griffin relies on Spark SQL operations to transform one data 
frame into another, but for operations that Spark SQL cannot cover, the 
pre-defined “df-ops” can help. Users can also implement their own “df-ops” 
for such specific operations.
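
For example, a df-ops step and a spark-sql step can be chained by dataframe 
name. Below is a minimal sketch of such a chain in a measure.json, assuming 
the rules sit in a "rules" list; the dataframe names and the SQL text are 
placeholders, and only the field names mentioned in this thread are taken 
from Griffin itself:

  {
    "rules": [
      {
        "dsl.type": "df-ops",
        "rule": "from_json",
        "in.dataframe.name": "kafka_source",
        "out.dataframe.name": "parsed"
      },
      {
        "dsl.type": "spark-sql",
        "rule": "SELECT id, age FROM parsed WHERE age IS NOT NULL",
        "out.dataframe.name": "checked"
      }
    ]
  }

Here the df-ops step parses raw JSON into the "parsed" data frame, and the 
spark-sql step picks it up by name, which is the data-frame-to-data-frame 
chain described above.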

Thanks
Lionel, Liu

From: Nick Sokolov
Sent: February 8, 2019 0:47
To: [email protected]
Cc: [email protected]
Subject: Re: Measure creation with DSL Type as "DF-OPS"

I did not see any documentation on it, but from the source code it applies a 
pre-defined transformation selected by the "rule" parameter (from_json, clear, 
accuracy), with in.dataframe.name as the input and out.dataframe.name as the 
output.

The transformations themselves are defined in DataFrameOps.scala, and the 
transformation is picked in DataFrameOpsTransformStep.scala#L36. From the 
contexts where df-ops is mentioned, it looks like it’s mostly useful for 
reading JSON from Kafka topics or flat files, or for defining empty RDDs in a 
DQ job context.
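
As a rough sketch of how those fields fit together for the "clear" rule 
(which appears to be the empty-RDD case), something like the fragment below; 
whether "clear" actually takes both dataframe names is a guess from the 
parameter names above, not a verified sample:

  {
    "dsl.type": "df-ops",
    "rule": "clear",
    "in.dataframe.name": "source",
    "out.dataframe.name": "empty_source"
  }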

On Tue, Feb 5, 2019 at 11:58 PM Vikram Jain <[email protected]> wrote:
Hi,
Can someone please explain the process of creating a measure with DSL type 
“DF-OPS”? A sample measure.json with an explanation of the fields associated 
with df-ops would be highly appreciated. I could not find any resources on 
cwiki or GitHub that explain the process.
 
Thanks in advance.
Vikram 
