I did not see any documentation on it either, but from the source code, it
applies a pre-defined transformation selected by the "rule" parameter
(from_json, clear, accuracy), using in.dataframe.name as input and
out.dataframe.name as output.

The transformations themselves are defined in DataFrameOps.scala
<https://github.com/apache/griffin/blob/master/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOps.scala>,
and the transformation is selected in DataFrameOpsTransformStep.scala#L36
<https://github.com/apache/griffin/blob/master/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOpsTransformStep.scala#L36>.
From the contexts where df-ops is mentioned, it looks like it is mostly
useful for reading JSON from Kafka topics or flat files, or for defining
empty RDDs in a DQ job context.

On Tue, Feb 5, 2019 at 11:58 PM Vikram Jain <[email protected]> wrote:

> Hi,
>
> Can someone please explain the process of creating a measure with DSL type
> as “DF-OPS”. A sample measure.json with explanation of associated fields
> with df-ops would be highly appreciated. I could not find any resources on
> cwiki or github that explains the process.
>
>
>
> Thanks in advance.
>
> Vikram
>
