Sorry for being a PITA.

It works. The pretty=true had to be given after the query.

curl -XGET http://10.16.131.8:9200/system/events/_search?q=ip:"2.2.0.4",pretty=true
{"took":2,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":1,"max_score":0.20050973,"hits":[{"_index":"system","_type":"events","_id":"tol-0wcJR7ipsTPfCnZOOQ","_score":0.20050973,
"_source" :
{
"message":" 2.2.0.4,apsim_00,this is a test 764997288",
 "ip":" 2.2.0.4",
 "name":"apsim_00"
}}]}}#
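
For the record, with a comma the pretty=true just ends up inside the q parameter instead of being a separate URL parameter, which is why the output above is still compact. Joining the two with & and single-quoting the URL (so the shell leaves the ? and the double quotes alone) should return the same hit, indented, e.g.:

curl -XGET 'http://10.16.131.8:9200/system/events/_search?q=ip:"2.2.0.4"&pretty=true'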



On Wed, Jun 19, 2013 at 7:28 PM, Mahesh V <[email protected]> wrote:

> Thanks a lot Radu for helping me with this and for being patient with me.
>
> I created the following template in rsyslog.conf
>
> $template apsimTemplate,"\n{\n\"message\":\"%msg:::json%\",\n \"ip\":\"%msg:F,44:1%\",\n \"name\":\"%msg:F,44:2%\"\n}"
> *.*   action(type="omelasticsearch" template="apsimTemplate" server="10.16.131.8" serverport="9200")
>
> and with syslog entries created as follows:
>
> #include <syslog.h>
>
> int main(void)
> {
>         int i;
>
>         setlogmask (LOG_UPTO (LOG_NOTICE));
>         openlog ("exampleprog", LOG_PID | LOG_NDELAY, LOG_DAEMON);
>
>         for (i = 0; i < 5; i++) {
>                 /* two %d conversions, so two arguments */
>                 syslog (LOG_NOTICE, "2.2.0.%d,apsim_00,this is a test %d",
>                         i, i);
>         }
>         closelog ();
>         return 0;
> }
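>
> For reference, in the template above 44 is the decimal ASCII code for a comma, so %msg:F,44:1% and %msg:F,44:2% should pick out the first and second comma-separated fields of the message, roughly:
>
>   msg:     " 2.2.0.4,apsim_00,this is a test 764997288"
>   field 1: " 2.2.0.4"   -> "ip"
>   field 2: "apsim_00"   -> "name"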
>
> I do get documents back; however, the filter does not seem to be applied
> (I get all the records with the query).
> Is a hierarchical document structure mandatory for filters to work correctly?
>
> [root@localhost rsyslog]# curl -XGET 10.16.131.8:9200/system/events/_search?pretty=true,q="ip":"2.2.0.1"
> {
>   "took" : 1,
>
>   "timed_out" : false,
>   "_shards" : {
>     "total" : 5,
>     "successful" : 5,
>     "failed" : 0
>   },
>   "hits" : {
>     "total" : 5,
>
>     "max_score" : 1.0,
>     "hits" : [ {
>       "_index" : "system",
>       "_type" : "events",
>       "_id" : "D43JsNcTRkyxjoCSRkKbxQ",
>       "_score" : 1.0, "_source" :
> {
> "message":" 2.2.0.4,apsim_00,this is a test 764997288",
>  "ip":" 2.2.0.4",
>  "name":"apsim_00"
>
> }
>     }, {
>       "_index" : "system",
>       "_type" : "events",
>       "_id" : "0GP7SgtPQ0mxokvhwWgC4g",
>       "_score" : 1.0, "_source" :
> {
> "message":" 2.2.0.1,apsim_00,this is a test 764997288",
>  "ip":" 2.2.0.1",
>  "name":"apsim_00"
>
> }
>     }, {
>       "_index" : "system",
>       "_type" : "events",
>       "_id" : "3I8kFSCVQD-Kh2ow-uu4Dw",
>       "_score" : 1.0, "_source" :
> {
> "message":" 2.2.0.3,apsim_00,this is a test 764997288",
>  "ip":" 2.2.0.3",
>  "name":"apsim_00"
>
> }
>     }, {
>       "_index" : "system",
>       "_type" : "events",
>       "_id" : "yLs8_GLPQsWiYKlZP9n3fw",
>       "_score" : 1.0, "_source" :
> {
> "message":" 2.2.0.0,apsim_00,this is a test 0",
>  "ip":" 2.2.0.0",
>  "name":"apsim_00"
>
> }
>     }, {
>       "_index" : "system",
>       "_type" : "events",
>       "_id" : "LXbayzZaQGuwCuPMZ2ZiAQ",
>       "_score" : 1.0, "_source" :
> {
> "message":" 2.2.0.2,apsim_00,this is a test 764997288",
>  "ip":" 2.2.0.2",
>  "name":"apsim_00"
> }
>     } ]
>   }
> }
>
>
>
>
>
> On Wed, Jun 19, 2013 at 6:25 PM, Radu Gheorghe <[email protected]> wrote:
>
>> 2013/6/19 Mahesh V <[email protected]>
>>
>> > Hi ,
>> >
>> > I now understand what you are saying.
>> >
>> > My requirement is something like this
>> >
>> > 1) I will have as many as 16 processes and each process has 23 threads
>> > sending syslog.
>> > 2) My current architecture for logs is file based, where I separate each
>> > process's log into a different file.
>> > 3) I did the same thing with standalone MySQL/SQLite, where I had a single
>> > database and multiple tables for each process.
>> > 4) I start all the processes one by one, they run for a few hours, then I
>> > stop them all at once and collect/examine the logs.
>> > 5) I won't need to store the logs permanently, so I can go ahead and
>> > delete them once the analysis is over.
>> >
>> > Is it possible in rsyslog to
>> > 1) create dynamic indexes for each of the processes, based on name or time
>> > (I think you mentioned that this is possible using time)
>> >
>>
>> Yes, it's up to you to make a template that works. You can put time in
>> there, program name, or anything from your log.
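>>
>> A minimal sketch of what such a template could look like, assuming omelasticsearch's searchIndex/dynSearchIndex parameters and an index name built from the program name plus the date (the "logs-" prefix and the template name are made up):
>>
>> $template indexName,"logs-%programname%-%$YEAR%.%$MONTH%.%$DAY%"
>> *.* action(type="omelasticsearch" server="10.16.131.8" serverport="9200"
>>            template="apsimTemplate" searchIndex="indexName" dynSearchIndex="on")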
>>
>>
>> > 2) Query indexes separately for values (e.g. one process may have logs
>> > from IP addresses 1.1.1.1 to 1.1.1.10 and another process may have logs
>> > from 1.1.1.11 to 1.1.1.20, etc.)
>> >
>>
>> Yes, when querying you can specify one index or multiple indices (even use
>> wildcards). But in your case it might be better to just throw everything in
>> a single index and just add filters by IP addresses, processes, etc.
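>>
>> For example, filtering on one IP across all matching indices could look like this (the logs-* pattern is hypothetical, and a match query is just one way to filter):
>>
>> curl -XGET 'http://10.16.131.8:9200/logs-*/_search?pretty=true' -d '
>> {
>>   "query": { "match": { "ip": "1.1.1.5" } }
>> }'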
>>
>>
>> >
>> > So my index should look something like this
>> >
>> >  "process1"
>> >     "ip" : "x.x.x.x"
>> >     "name": "abcd"
>> >     "log": "test log"
>> >
>>
>> You can start by putting all that info in your log. No need to be
>> hierarchical about it, unless you find later that you need to. Then you can
>> filter by any of those fields.
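>>
>> In other words, a flat document per event is usually enough, e.g. (field names taken from the example above):
>>
>> {
>>   "process": "process1",
>>   "ip": "x.x.x.x",
>>   "name": "abcd",
>>   "log": "test log"
>> }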
>>
>>
>> >
>> > I have been trying out too many things in very little time and hence am a
>> > bit confused. Sorry if I am asking stupid questions.
>> >
>>
>> Heh, I'm admiring you for ingesting all this info in this short time :)
>>
>
>
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad of 
sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you DON'T LIKE 
THAT.
