Hi teammates,
I just watched a video about "user behavior analytics", which I think is pretty
cool and fits our use case.
It gives a deep analysis of "machine learning and user behavior analytics";
I hope you find it interesting.
Here is the link:
This is so exciting! Great to have Spark support!
Regards,
Da
> On Aug 4, 2016, at 10:45 PM, Hao Chen wrote:
>
> During an offline meetup, an engineering team located in Shanghai expressed
> strong interest in contributing to the Eagle alert engine by supporting Spark
> Streaming instead
Hi, Caoyu,
Recently I tested Eagle 0.4.0 on Cloudera (CDH 5.5.2) and wrote a doc on
deploying Apache Eagle on CDH, available here:
https://github.com/DadanielZ/eaglemonitoring.github.io/blob/master/cloudera-integration.md
@eagle-team
Also I opened a pull request for this doc to repository:
Hi guys,
I have one question about the match-criteria field "timestamp" in policy
settings: why does Eagle expect users to input time in milliseconds?
Take "hdfsAudit" for example: the HDFS audit log parser converts the
human-readable date to milliseconds and then constructs a new message for the
Siddhi engine to
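As a side note on the unit itself, this is the kind of conversion involved; the timestamp and format below are arbitrary examples, not Eagle's actual parser pattern (requires GNU date):

```shell
# Convert a human-readable UTC timestamp to epoch milliseconds,
# the unit the Siddhi engine compares the timestamp field against.
date -u -d "2016-02-23 18:43:00" +%s%3N
# -> 1456252980000
```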
Ramesh <sram...@dataguise.com>
Subject: Re: [Announce] New Apache Eagle Committer: Daniel Zhou
Congrats Daniel! Thanks for your contribution to Eagle!
BTW: Your account will be created shortly; please feel free to let me know if
you have any problems :)
- Hao
On Tue, Jun 14, 2016 at 1:42 PM,
Ramesh <sram...@dataguise.com>
Subject: Re: [Announce] New Apache Eagle Committer: Daniel Zhou
Hi Daniel,
Welcome to Eagle community, congratulations!
Thanks
Edward
On 6/13/16, 9:33, "Daniel Zhou" <daniel.z...@dataguise.com> wrote:
>
>Hi Eagle Community ,
>
>Thank y
rote:
Eagle Community,
The Podling Project Management Committee (PPMC) for Apache Eagle has asked Daniel Zhou
to become a committer and we are pleased to announce that he has accepted.
Daniel has continuously participated in discussions, bug reporting, and fixing
in the Eagle community, and is also active
Thank you! XD
MapR has a different command set, which is used in its audit log.
Check this:
http://doc.mapr.com/display/MapR/Auditing+of+Filesystem+Operations+and+Table+Operations
I've tested it and can see alerts shown in UI.
Regards,
Daniel Zhou
-Original Message-
From
existing features.
Thanks
Edward
On 4/13/16, 14:39, "Daniel Zhou" <daniel.z...@dataguise.com> wrote:
>Just found out that :
>1. The column name in table "alertdef_alertdef" has changed.
>"dataSource" has been replaced by "application".
I'm using the master branch, and I think there is a bug in this configuration.
It won't work if I update the configuration in the UI in JSON format:
{"web.fs.defaultFS":"hdfs://something.com:8020"}
From "eagle-topology-init.sh" I saw this configuration is set as a string, not
JSON.
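To illustrate the mismatch (the key and host are taken from the message above; the exact flat-string syntax is my assumption, not verified against the script):

```
# JSON form entered in the UI:
{"web.fs.defaultFS": "hdfs://something.com:8020"}

# Flat string form, as eagle-topology-init.sh appears to set it:
web.fs.defaultFS=hdfs://something.com:8020
```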
Hi guys,
Now I'm able to generate "HDFSAuditLogObject" based on MapR's hdfs audit log
message, but I hit a problem with the "userID" part.
Like I mentioned before, MapR's audit log message contains IDs (e.g.
"2050.36.131336" is the file ID of "/tmp/file.w", and "5098" is the ID of user
"daniel") but
head limit exceeded
Hi Daniel, can you check/increase the memory settings in idea.vmoptions under
INTELLIJ_DIR/bin/ and try to rebuild? Let me know if this helps.
--Senthil
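For anyone hitting the same thing, a typical idea.vmoptions with a larger heap might look like the sketch below; the values are illustrative guesses, not recommendations from this thread:

```
-Xms512m
-Xmx2048m
-XX:ReservedCodeCacheSize=512m
```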
On Thu, Feb 25, 2016 at 6:26 AM, Daniel Zhou <daniel.z...@dataguise.com>
wrote:
> I just build it successfu
[mailto:suliang...@gmail.com]
Sent: Tuesday, February 23, 2016 6:43 PM
To: dev@eagle.incubator.apache.org
Subject: Re: build error: GC overhead limit exceeded
Didn't see this before; based on the GC error, can you add more memory to the
Maven JVM?
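One common way to do that is via MAVEN_OPTS; the 2g figure below is an arbitrary assumption, tune it to your machine:

```shell
# Give the Maven JVM a larger heap before rebuilding.
# 2g is an illustrative value, not a project recommendation.
export MAVEN_OPTS="-Xmx2g"
# then rerun the build, e.g.: mvn clean install -DskipTests
```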
On Wed, Feb 24, 2016 at 9:34 AM, Daniel Zhou
Hi, all
I encountered the following error when building Eagle, any opinions?
[INFO] Building jar:
/Users/yan/GithubProject/DaZhou/incubator-eagle/eagle-topology-assembly/target/eagle-topology-0.3.0-assembly.jar
Cleaning up unclosed ZipFile for archive
Hi, Nagalkar,
To kill a topology you can log in to the Storm UI:
localhost:8744
and click the "kill" button for that topology, then rerun Eagle's service.
Regards,
Daniel
From: Nagalkar, Sanjay Contractor [mailto:sanjay.nagal...@ssa.gov]
Sent: Tuesday, February 23, 2016 1:35 PM
To: Daniel Zhou
ou have a proper application.conf (pointing to the correct eagle
> service in #1). You can use run or debug as you wish.
>
> Ralph
>
>
> On Sat, Feb 20, 2016 at 7:01 AM, Daniel Zhou <daniel.z...@dataguise.com>
> wrote:
>
>> I'm trying to set up Intellij IDE to run Ea
Hi, all,
I'm trying to test Eagle on MapR; the problem is that MapR's hdfs audit log
format is quite different from what Eagle supports.
What I'm trying to do now is to get the audit log messages from MapR and convert
them, in real time, to the format that Eagle currently supports. I came up with
I'm trying to set up the IntelliJ IDE to run Eagle, but do not know where to start.
Can someone tell me, in detail, how to run the Eagle project from IntelliJ?
Thanks and Regards,
Daniel
-Original Message-
From: yonzhang [mailto:g...@git.apache.org]
Sent: Thursday, February 18, 2016 3:53 PM
Hi, all,
Eagle doesn't seem to allow users to change/modify Data Sources once the site
has been created, right?
I tried to add "userProfile" to an existing site, but after I clicked the
"update" button in the site configuration, nothing happened.
Regards,
Daniel
Hi, all,
I'm trying to figure out how the User Profile works; my questions are:
1. Once I put new training data into the corresponding folder, will Eagle
detect it automatically, or do I need to run the ML training program from the
Eagle UI manually?
2. Does Eagle provide APIs for user
Hi, Nagalkar
Could you check whether your Kafka works? Use these steps:
1. Create a topic:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1
--partitions 1 --topic test
2. Check if topic "test" is created:
bin/kafka-topics.sh --list --zookeeper
@eagle.incubator.apache.org
Subject: Re: Policy based on sensitive types stops working if there are too
many sensitive items
Can you please show one policy? I thought it might be because the policy itself
is too complicated for the engine to parse and evaluate.
Thanks
Edward
On 2/5/16, 15:46, "Daniel
Hi all,
In the Eagle UI's Sensitivity Browser, when I set multiple values for the
sensitivity type, I put them in the format "Type1|Type2|Type3" to represent 3
different types.
But Eagle treats that as a single sensitivity type name.
So how do I set a multi-value attribute for file sensitivity?
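For what it's worth, the membership check a multi-value match would need can be sketched in shell; the type names are just the example from above, and this is not how Eagle actually evaluates the field:

```shell
# Eagle stores the whole string as one sensitivity type name;
# a multi-value match would have to split on '|' first.
stored="Type1|Type2|Type3"
printf '%s\n' "$stored" | tr '|' '\n' | grep -qx "Type2" && echo "match"
# -> match
```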
' or
> sensitivityType == 'sfs' or sensitivityType == 'sdfs')] select *
> insert into outputStream;
>
> Above pasted is what i get form the ui by add more value to
> sensitiveType field. The UI looks like below,
> [cid:ii_ijg8zxfw0_152475f8caf0ef24]
>
>
> Ralph
>
insert everything or just do
incremental updates?
Thanks
Edward
On Tue, Jan 12, 2016 at 7:01 PM, Daniel Zhou <daniel.z...@dataguise.com>
wrote:
> Just a short question:
>
> Now in HBase eagle have stored the info of sensitive files, including
> the file name and sensitive typ
Just a short question:
Now Eagle has stored the info of sensitive files in HBase, including the file
name and the sensitivity type defined by the user.
If these sensitive files in Hadoop get deleted, how would this "delete" action
affect the info stored in HBase? Would their records in HBase also
t; In the last 2 months we have had some increasing participation and
>>>active discussions from the community on various topics.
>>>
>>> I would like to thank Prasad Mujumdar, Ralph Su, Michael Wu, Daniel
>>> Zhou for their contributions to the project.
>>>
>>> Thanks,
>>> Arun
>>>
>
w, it works for me.
http://xxx:xxx/eagle-service/rest/list?query=AlertDefinitionService[@dataSource="userProfile" and @site="sandbox"]{*}&pageSize=100
And your given query just shows an empty result instead of a parse error on my
local setup.
Can you post more details or a stack trace?
Ralph
On Sat, Dec 19
erviceName=FileSensitivityService
[
  {
    "tags": {
      "site": "sandbox",
      "filedir": "testFilePath"
    },
    "sensitivityType": "EMAIL"
  },
  {
    "tags": {
      "site": "sandbox",
      "filedir" :
Hi,
I didn't find the API for marking data as sensitive in the online tutorial;
can someone tell me where to find it?
Thanks,
Daniel
Hi,
I ran into trouble when calling the "create policy" API of Eagle.
In my java code, I put:
String jsonData =
"[{\"tags\":{\"site\":\"Demo\",\"dataSource\":\"hdfsAuditLog\",\"policyId\":
\"testPolicy\",\"alertExecutorId\":\"hdfsAuditLogAlertExecutor\",
\"policyType\":
is mapping with the field 'lastModifiedDate', which is not a necessary field
for data processing. You can try calling the API without the 'lastModifiedDate'
field.
2015-12-16 5:16 GMT+08:00 Daniel Zhou <daniel.z...@dataguise.com>:
> Hi all,
>
> In the UI "Policy List", the policies created using UI
Hi all,
I called the Eagle API to create a policy but ended up with "HTTP Status 403
Access Denied". I'm not sure whether something is wrong with the authorization
part.
Here is how I called the policy definition API:
URL:
Hi all,
I met a problem: after I rebooted my machines, the Kafka service cannot produce
messages.
When I try to send messages with the producer, an error occurs:
"bin/kafka-console-producer.sh --broker-list centos.da2:6667 --topic test1
Message
WARN Error while fetching metadata [{TopicMetadata
is better.
The other, more graceful way is to write a Siddhi plugin that handles comparison
against a multi-value attribute. Let me know your thoughts.
Thanks
Edward
On 12/4/15, 12:25, "Daniel Zhou" <daniel.z...@dataguise.com> wrote:
>Hi all,
>
>Is it possible for me to mark one fil