Hi Vinod,

We're glad to receive your email. There are some other Griffin documents
listed below:
wiki: https://cwiki.apache.org/confluence/display/GRIFFIN/Apache+Griffin
github: https://github.com/apache/incubator-griffin/tree/master/griffin-doc
You can also follow
https://github.com/apache/incubator-griffin/blob/master/griffin-doc/docker/griffin-docker-guide.md
to try the Griffin docker image.

Here are my answers to your questions:

*1. What is the usage of accuracy metric? In what situations, it will be
useful?*

Accuracy measures the match percentage between two data sources, which we
call "source" and "target": "source" is the data source you trust, and
"target" is the data source you want to check.
For example, if "source" is [1, 2, 3, 4, 5] and "target" is [1, 3, 5, 7, 9],
the accuracy is #(target items matched in source) / #(all target items) =
3/5 = 60%. Actually, "exact match" is a narrow concept; for accuracy we only
require that items "pass the match rule", and users can define their own
match rule, such as "source.age <= target.age AND upper(source.city) =
upper(target.city)", instead of an exact match.
When we have a data source we trust, we let it be the "source"; then we can
measure the accuracy of another data source, the "target", to figure out how
much we can trust it.
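
To make the idea concrete, here is a rough sketch of the same computation in
plain PySpark. It is not Griffin's implementation, just the idea; the values
and the column name "id" are only for illustration:

    # Rough sketch of the accuracy idea, not Griffin's implementation.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("accuracy-sketch").getOrCreate()

    source = spark.createDataFrame([(1,), (2,), (3,), (4,), (5,)], ["id"])
    target = spark.createDataFrame([(1,), (3,), (5,), (7,), (9,)], ["id"])

    # The "match rule" here is simply source.id = target.id, but it could be
    # any boolean expression over source and target columns.
    matched = target.join(source, target["id"] == source["id"], "left_semi")

    accuracy = matched.count() / target.count()  # 3 matched / 5 target items = 60%
    print(accuracy)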

There's a standard use case:
In our data pipeline, when we get users' data from the site, we persist it as
table T1, which we trust as the source of truth. On the other hand, a copy of
the users' data is pushed into some streaming or batch processes; after some
steps, the processed data is persisted as table T2, and we want to know how
correct it is, or how much we can trust it.
Setting T1 as "source" and T2 as "target", we can get the accuracy of T2,
with the wrong items from T2 persisted.
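
In Spark terms, the "wrong items" are just the T2 records that fail the match
rule against T1. A sketch of that idea, again only illustrative (Griffin does
this for you, and the table and column names here are made up):

    # Illustration only: how the wrong items from T2 could be picked out.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    t1 = spark.table("T1")   # trusted source of truth ("source")
    t2 = spark.table("T2")   # processed copy we want to check ("target")

    # Example match rule: same user id and same city (case-insensitive).
    match_rule = (t2["user_id"] == t1["user_id"]) & \
                 (F.upper(t2["city"]) == F.upper(t1["city"]))

    accuracy = t2.join(t1, match_rule, "left_semi").count() / t2.count()
    wrong_items = t2.join(t1, match_rule, "left_anti")  # T2 rows failing the rule
    wrong_items.write.mode("overwrite").parquet("/tmp/t2_wrong_items")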

And here is another, more specific use case:
We have a streaming data processing system, which consumes data from an input
and produces data to an output. Each output item also contains the key of the
corresponding input item, and we want to know how much of the data is
successfully processed.
Setting output as "source" and input as "target", we can get the accuracy of
the input, and the missing items from the input will be persisted.
Strictly speaking, this case measures the completeness of the output, but it
works like a reversed accuracy, so we can use accuracy in this way.

However, in the Griffin measure configuration, the concepts of source and
target are based on the code implementation, which differs from the business
concept above: in the measure configuration documents, we are measuring the
accuracy of "source".
We are planning to modify the code implementation to align with the business
concept later; when that happens, we'll highlight it in the release notes.


*2. Can we run other metrics using command-line? (or) Is only accuracy
metric supported at the moment?*

Yes, you can run the Griffin measure module from the command line directly,
like this:
https://github.com/bhlx3lyx7/griffin-docker/blob/master/svc_msr_new/prep/measure/start-accu.sh
.
At the moment, the Griffin UI module doesn't support all the dimensions, but
the measure module supports accuracy, profiling, timeliness and uniqueness;
you can find a description of them here:
https://github.com/apache/incubator-griffin/blob/master/griffin-doc/measure/dsl-guide.md#griffin-dsl-translation-to-sql
.
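
Essentially, that script submits the measure jar to Spark with two JSON
files: an environment config and a DQ (rule) config. Here is a rough sketch
of the shape of the call, wrapped in Python; the jar name, main class, master
and config file names below are illustrative, so please take the linked
script and your own build as the reference:

    # Rough sketch of what start-accu.sh does; names below are illustrative.
    import subprocess

    subprocess.run([
        "spark-submit",
        "--class", "org.apache.griffin.measure.Application",  # measure entry point (may differ by version)
        "--master", "yarn-client",
        "griffin-measure.jar",   # the jar built from the measure module
        "env.json",              # environment config
        "config-accu.json",      # the DQ config with the accuracy rule
    ], check=True)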


*3. Project roadmap for features?*

The old project roadmap was out of date; we've updated it here:
https://cwiki.apache.org/confluence/display/GRIFFIN/0.+Roadmap
Some new features we're planning in the short term:
- streaming measure job scheduling.
- support for more data quality dimensions, such as completeness,
consistency and validity.
In the longer term, the plan may include:
- support for more data sources, such as RDBs and Elasticsearch.
- anomaly detection support.
- Spark 2 support.


*4. Can we create custom Rules and profile existing data?*

Yes, you can create custom rules for your data, following these documents:
https://github.com/apache/incubator-griffin/blob/master/griffin-doc/measure/measure-configuration-guide.md
and
https://github.com/apache/incubator-griffin/blob/master/griffin-doc/measure/measure-batch-sample.md
.
The profiling rule supports simple spark-sql syntax directly, as described
here:
https://github.com/apache/incubator-griffin/blob/master/griffin-doc/measure/dsl-guide.md#profiling
.
If you want to use spark-sql itself, you can also define the rules like this:
https://github.com/apache/incubator-griffin/blob/master/griffin-doc/measure/dsl-guide.md#spark-sql
.
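
To give a concrete idea of what a profiling rule expresses: it is essentially
a SQL statement over the registered data source. In Griffin the SQL text goes
into the rule definition of the measure config, but the equivalent in plain
PySpark would look something like this (the table and column names are made
up for illustration):

    # Illustration only: a profiling rule is essentially SQL over the source.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Register the data to be profiled under the name "source".
    spark.table("users").createOrReplaceTempView("source")

    profile = spark.sql("""
        SELECT count(*)             AS total,
               count(DISTINCT city) AS distinct_cities,
               max(age)             AS max_age,
               count(age)           AS non_null_age
        FROM source
    """)
    profile.show()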


*5. Postgresql and mysql -- both listed in Prerequisites. We have MySQL, Is
that enough?*

In fact, you can choose either postgresql or mysql.
We used mysql for the measure and job schedule persistence before, but due to
a license issue with the Apache release, we have switched to postgresql
recently.
If you want to use mysql, you need to modify some dependencies in the service
module (typically the JDBC driver) and the datasource settings in the
application.properties file, and then rebuild service.jar.
We are going to add a document to help users set things up with mysql or
other databases.


Hope this helps. Please feel free to ask if you have any questions.

Thanks,
Lionel

On Tue, Apr 3, 2018 at 1:41 PM, Vinod Raina <vinod.ra...@tavant.com> wrote:

> Hi Griffin team,
> In our team, We are looking to create a Data Quality model for your EDL
> Ingestion and are exploring Apache Griffin for it. We have gone through the
> documentation. The documentation is still not complete but we understand
> that the project is in incubation and there might be other reasons as well.
> It would be really helpful if there is any other source of information
> (other than the apache portal  and the git hub readme ) which can help us
> to understand the usage of this framework.
> Also ,we have below few question and would really if you can help us with
> the answers :
>
> 1. What is the usage of accuracy metric? In what situations, it will be
> useful?
> 2. Can we run other metrics using command-line? (or) Is only accuracy
> metric supported at the moment?
> 3. Project roadmap for features?
> 4. Can we use create custom Rules and profile existing data?
> 5. Postgresql and mysql -- both listed in Prerequisites. We have MySQL, Is
> that enough?
>
>
>
>
> Regards
> Vinod Raina | vinod.ra...@tavant.com<mailto:vinod.ra...@tavant.com>
> Associate Technical Architect
> M: +91 9711022965
> Tavant Technologies | www.tavant.com<http://www.tavant.com/>
> Okaya Centre, Tower 1, 5th Floor,B-5, Sector 62, Noida, UP 201 309
>
>
