Sorry for the late reply.

The double-free error went away after I patched my 7.4.1 rsyslog as noted
in the bug report, but the JSON parsing error still exists.

This log line:

host-syslog.log:Jul  9 07:14:41 host items[12540]: @cee:
{"transfer_to_guid":"9210989832985564408","item_type":80297,"quantity":1,"auth_code":"2938749283749823749823","transfer_to_character_guid":"9210989832985564414","metric_basename":"ledger","es-service":"ledger","logged_at":1373354081}

which results in this error:

{
        "request":      {
                "url":  "http://elasticsearch-server:9200/_bulk?";,
                "postdata":     "{\"index\":{\"_index\":
\"ledger-2013-07-09\",\"_type\":\"events\"}}\n\", \"logged_at\": 1373354081
}{ \"transfer_to_guid\": \"9210989832985564408\", \"item_type\": 80297,
\"quantity\": 1, \"auth_code\": \"2938749283749823749823\",
\"transfer_to_character_guid\": \"9210989832985564414\",
\"metric_basename\": \"ledger\", \"es-service\": \"ledger\", \"logged_at\":
1373354081 }\n"
        },
        "reply":        {
                "took": 3,
                "items":        [{
                                "create":       {
                                        "_index":       "ledger-2013-07-09",
                                        "_type":        "events",
                                        "_id":  "Bj6cPf3xSOiPdtzxLJirBA",
                                        "error":
 "MapperParsingException[failed to parse]; nested:
ElasticSearchParseException[Failed to derive xcontent from (offset=0,
length=276): [34, 44, 32, 34, 108, 111, 103, 103, 101, 100, 95, 97, 116,
34, 58, 32, 49, 51, 55, 51, 51, 53, 52, 48, 56, 49, 32, 125, 123, 32, 34,
116, 114, 97, 110, 115, 102, 101, 114, 95, 116, 111, 95, 103, 117, 105,
100, 34, 58, 32, 34, 57, 50, 49, 48, 57, 56, 57, 56, 51, 50, 57, 56, 53,
53, 54, 52, 52, 48, 56, 34, 44, 32, 34, 105, 116, 101, 109, 95, 116, 121,
112, 101, 34, 58, 32, 56, 48, 50, 57, 55, 44, 32, 34, 113, 117, 97, 110,
116, 105, 116, 121, 34, 58, 32, 49, 44, 32, 34, 97, 117, 116, 104, 95, 99,
111, 100, 101, 34, 58, 32, 34, 49, 56, 51, 57, 55, 50, 56, 51, 57, 50, 51,
50, 51, 50, 56, 51, 56, 57, 57, 54, 34, 44, 32, 34, 116, 114, 97, 110, 115,
102, 101, 114, 95, 116, 111, 95, 99, 104, 97, 114, 97, 99, 116, 101, 114,
95, 103, 117, 105, 100, 34, 58, 32, 34, 57, 50, 49, 48, 57, 56, 57, 56, 51,
50, 57, 56, 53, 53, 54, 52, 52, 49, 52, 34, 44, 32, 34, 109, 101, 116, 114,
105, 99, 95, 98, 97, 115, 101, 110, 97, 109, 101, 34, 58, 32, 34, 108, 101,
100, 103, 101, 114, 34, 44, 32, 34, 101, 115, 45, 115, 101, 114, 118, 105,
99, 101, 34, 58, 32, 34, 108, 101, 100, 103, 101, 114, 34, 44, 32, 34, 108,
111, 103, 103, 101, 100, 95, 97, 116, 34, 58, 32, 49, 51, 55, 51, 51, 53,
52, 48, 56, 49, 32, 125]]; "
                                }
                        }]
        }
}

Nothing is missing from that output; the JSON that gets produced really is mangled:

{"index":{"_index": "ledger-2013-07-09","_type":"events"}}
", "logged_at": 1373354081 }{ "transfer_to_guid": "9210989832985564408",
"item_type": 80297, "quantity": 1, "auth_code": "2938749283749823749823",
"transfer_to_character_guid": "9210989832985564414", "metric_basename":
"ledger", "es-service": "ledger", "logged_at": 1373354081 }

Has anyone experienced this error before?  It only happens when I turn on
bulkmode for elasticsearch.  I'm going to try to capture a debug log for
this error.
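
In case it's useful, the rough plan for the debug log (the output path is just an example) is to stop the daemon, run rsyslog in the foreground with debugging enabled, and reproduce the error with the logger command from the earlier mail:

# stop the running service, then run rsyslogd in the foreground with debug output
service rsyslog stop
rsyslogd -dn > /tmp/rsyslog-debug.log 2>&1
# reproduce with logger, then Ctrl-C and restart the service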

Thanks,
Ajay



On Wed, Jul 3, 2013 at 2:57 AM, Radu Gheorghe <[email protected]> wrote:

> Hi Ajay,
>
> The request ends up being wrong. I'm bolding the stuff that shouldn't be
> there:
>
> {\"index\":{\"_index\":
> \"ledger-2013-07-03\",\"_type\
> ":\"events\"}*}*\n*1372805031 }*{
> \"transfer_from_guid\": \"1111\", \"item_type\": 30404, \"quantity\": 840,
> \"auth_code\": \"0000\", \"metric_basename\": \"ledger\", \"es-service\":
> \"ledger\", \"logged_at\": 1372805031 }\n"
>
> A bulk request <http://www.elasticsearch.org/guide/reference/api/bulk/>
> should have the following structure:
> {$JSON-METADATA1}\n
> {$JSON-DOCUMENT1}\n
> {$JSON-METADATA2}\n
> {$JSON-DOCUMENT2}\n
>
> And so on.
>
> I've tested with (roughly) what you sent here and it works for me. I'm
> attaching my conf and in there you can see the command I've used. Do you
> see any significant differences?
>
> BTW, I've tried this with 7.4.0 from the RPM repository.
>
> If it still doesn't work for you, can you paste me a full conf and rsyslog
> version? I'd appreciate it if you could take stuff such as queue settings out
> from the conf you send, to have the bare minimum for reproducing the
> problem and minimize the chance of replies like "it works for me, but there
> might be a minor change that is actually the key" :)
>
> Best regards,
> Radu
>
> 2013/7/3 Ajay Sharma <[email protected]>
>
> > When I turn bulkmode=on for my omelasticsearch setup, I get
> > "MapperParsingException" errors.  Not sure why this is happening other
> than
> > the fact that the JSON sent to elasticsearch doesn't look right.  Maybe
> > someone can help me spot something that I'm not seeing.  First here's my
> > elasticsearch config:
> >
> > $template elasticsearchSchema,"%$!all-json%"
> >
> > action(type="mmjsonparse")
> > if $parsesuccess == "OK" then {
> >   if $!es-service != "" then {
> >     $template elasticsearchIndex,"%$!es-service%-%timereported:1:10:date-rfc3339%"
> >     action(type="omelasticsearch"
> >       template="elasticsearchSchema"
> >       searchIndex="elasticsearchIndex"
> >       dynSearchIndex="on"
> >       server="es-host"
> >       serverport="9200"
> >       errorFile="/var/log/rsyslog-elasticsearch.log"
> >       bulkmode="on"
> >       queue.dequeuebatchsize="200"
> >       queue.type="linkedlist"
> >       queue.filename="dbq"
> >       queue.highwatermark="500000"
> >       queue.lowwatermark="400000"
> >       queue.discardmark="5000000"
> >       queue.timeoutenqueue="0"
> >       queue.maxdiskspace="5g"
> >       queue.size="2000000"
> >       queue.saveonshutdown="on"
> >       action.resumeretrycount="-1"
> >     )
> >   }
> > }
> >
> > I'm creating the index based on a JSON variable so that the developers can
> > control which index logs what.  From another machine, if I run this command
> > a few times, it will trigger an error:
> >
> > # logger '@cee:
> > {"transfer_from_guid":"1111","item_type":30404,"quantity":840,"auth_code":"0000","metric_basename":"ledger","es-service":"ledger","logged_at":1372805031}'
> >
> > The error:
> >
> > {
> >         "request":      {
> >                 "url":  "http://search-v01-ew1.r5internal.com:9200/_bulk
> > ?",
> >                 "postdata":     "{\"index\":{\"_index\":
> > \"ledger-2013-07-03\",\"_type\":\"events\"}}\n1372805031 }{
> > \"transfer_from_guid\": \"1111\", \"item_type\": 30404, \"quantity\":
> 840,
> > \"auth_code\": \"0000\", \"metric_basename\": \"ledger\", \"es-service\":
> > \"ledger\", \"logged_at\": 1372805031 }\n"
> >         },
> >         "reply":        {
> >                 "took": 235,
> >                 "items":        [{
> >                                 "create":       {
> >                                         "_index":
> > "ledger-2013-07-03",
> >                                         "_type":        "events",
> >                                         "_id":  "z2MbQE5IQV6EHyqwiu8ejA",
> >                                         "error":
> >  "MapperParsingException[Malformed content, must start with an object]"
> >                                 }
> >                         }]
> >         }
> > }
> >
> > I've seen other elasticsearch errors like
> >
> > MapperParsingException[failed to parse]; nested:
> > JsonParseException[Unexpected end-of-input in field name\n at [Source:
> > [B@360f5d87; line: 1, column: 261]];
> >
> > and
> >
> > MapperParsingException[failed to parse]; nested:
> > ElasticSearchParseException[Failed to derive xcontent from (offset=59,
> > length=196): [123, 34, 105, 110, 100, 101, 120, 34, 58, 123, 34, 95, 105,
> > 110, 100, 101, 120, 34, 58, 32, 34, 108, 101, 100, 103, 101, 114, 45, 50,
> > 48, 49, 51, 45, 48, 55, 45, 48, 51, 34, 44, 34, 95, 116, 121, 112, 101,
> 34,
> > 58, 34, 101, 118, 101, 110, 116, 115, 34, 125, 125, 10, 34, 44, 32, 34,
> > 108, 111, 103, 103, 101, 100, 95, 97, 116, 34, 58, 32, 49, 51, 55, 50,
> 56,
> > 48, 53, 48, 51, 49, 32, 125, 123, 32, 34, 116, 114, 97, 110, 115, 102,
> 101,
> > 114, 95, 102, 114, 111, 109, 95, 103, 117, 105, 100, 34, 58, 32, 34, 49,
> > 49, 49, 49, 34, 44, 32, 34, 105, 116, 101, 109, 95, 116, 121, 112, 101,
> 34,
> > 58, 32, 51, 48, 52, 48, 52, 44, 32, 34, 113, 117, 97, 110, 116, 105, 116,
> > 121, 34, 58, 32, 56, 52, 48, 44, 32, 34, 97, 117, 116, 104, 95, 99, 111,
> > 100, 101, 34, 58, 32, 34, 48, 48, 48, 48, 34, 44, 32, 34, 109, 101, 116,
> > 114, 105, 99, 95, 98, 97, 115, 101, 110, 97, 109, 101, 34, 58, 32, 34,
> 108,
> > 101, 100, 103, 101, 114, 34, 44, 32, 34, 101, 115, 45, 115, 101, 114,
> 118,
> > 105, 99, 101, 34, 58, 32, 34, 108, 101, 100, 103, 101, 114, 34, 44, 32,
> 34,
> > 108, 111, 103, 103, 101, 100, 95, 97, 116, 34, 58, 32, 49, 51, 55, 50,
> 56,
> > 48, 53, 48, 51, 49, 32, 125, 10]];
> >
> > I did file a bug and included the debug output as an attachment:
> >
> > http://bugzilla.adiscon.com/show_bug.cgi?id=462
> >
> > If someone can provide any insight, I would really appreciate it!!
> >
> > Thanks,
> > Ajay
>
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad of 
sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you DON'T LIKE 
THAT.
