While trying to reproduce this, I had no luck running ES 1.3.4 on Solaris
SPARC. Just like ES 1.2.2, it crashes with SIGBUS, but this time in the
LZF compression codec.

So I will open an issue.
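For context: the LZF codec presumably selects its encoder variant (the
UnsafeChunkEncoderBE seen in the crash below) from the byte order the JVM
reports for the platform. A minimal sketch (class name is mine) of that
check, and of how big-endian reads can be done portably via ByteBuffer:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Endianness {
    public static void main(String[] args) {
        // SPARC reports BIG_ENDIAN here, x86 reports LITTLE_ENDIAN;
        // the codec presumably keys its BE/LE Unsafe encoder choice
        // off this value.
        System.out.println(ByteOrder.nativeOrder());

        // ByteBuffer reads big-endian values correctly on any platform,
        // with no alignment requirement on the source offset.
        ByteBuffer bb = ByteBuffer.wrap(new byte[]{0, 1, 2, 3})
                                  .order(ByteOrder.BIG_ENDIAN);
        System.out.println(Integer.toHexString(bb.getInt())); // 10203
    }
}
```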

Jörg



#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0xa) at pc=0xffffffff7e51d838, pid=26623, tid=42
#
# JRE version: Java(TM) SE Runtime Environment (8.0_11-b12) (build 1.8.0_11-b12)

JavaThread "elasticsearch[Taj Nital][clusterService#updateTask][T#1]" daemon [_thread_in_vm, id=42, stack(0xfffffffedfd00000,0xfffffffedfd40000)]

Stack: [0xfffffffedfd00000,0xfffffffedfd40000],  sp=0xfffffffedfd3e4f0,  free space=249k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xd1d838]  Unsafe_GetInt+0x174
J 2115  sun.misc.Unsafe.getInt(Ljava/lang/Object;J)I (0 bytes) @ 0xffffffff6c51c0e8 [0xffffffff6c51bfa0+0x148]
j  org.elasticsearch.common.compress.lzf.impl.UnsafeChunkEncoderBE._getInt([BI)I+10
j  org.elasticsearch.common.compress.lzf.impl.UnsafeChunkEncoderBE.tryCompress([BII[BI)I+104
j  org.elasticsearch.common.compress.lzf.ChunkEncoder.encodeChunk([BII)Lorg/elasticsearch/common/compress/lzf/LZFChunk;+17
j  org.elasticsearch.common.compress.lzf.LZFEncoder.encode(Lorg/elasticsearch/common/compress/lzf/ChunkEncoder;[BII)[B+17
j  org.elasticsearch.common.compress.lzf.LZFEncoder.encode([BII)[B+9
j  org.elasticsearch.common.compress.lzf.LZFCompressor.compress([BII)[B+3
j  org.elasticsearch.common.compress.CompressedString.<init>(Ljava/lang/String;)V+29
j  org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(Lorg/elasticsearch/cluster/ClusterState;)Lorg/elasticsearch/cluster/ClusterState;+1536
j  org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run()V+77
j  org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run()V+4
j  java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
j  java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5
j  java.lang.Thread.run()V+11
v  ~StubRoutines::call_stub
V  [libjvm.so+0x6fd670]  void JavaCalls::call_helper(JavaValue*,methodHandle*,JavaCallArguments*,Thread*)+0xa58
V  [libjvm.so+0x6fba9c]  void JavaCalls::call_virtual(JavaValue*,KlassHandle,Symbol*,Symbol*,JavaCallArguments*,Thread*)+0x370
V  [libjvm.so+0x6fbd60]  void JavaCalls::call_virtual(JavaValue*,Handle,KlassHandle,Symbol*,Symbol*,Thread*)+0x50
V  [libjvm.so+0x825924]  void thread_entry(JavaThread*,Thread*)+0xdc
V  [libjvm.so+0xce4894]  void JavaThread::thread_main_inner()+0x94
V  [libjvm.so+0xce47e0]  void JavaThread::run()+0x408
V  [libjvm.so+0xb3abc4]  java_start+0x364
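The crash frame is sun.misc.Unsafe.getInt called from
UnsafeChunkEncoderBE._getInt, i.e. a raw 4-byte load at an arbitrary byte
offset into the input array. SPARC faults (SIGBUS) on unaligned loads,
while x86 silently tolerates them, which would explain why this only
shows up there. A hedged sketch (class and method names are mine, not
the codec's) of an alignment-safe equivalent that assembles the
big-endian int byte by byte:

```java
public class SafeGetInt {
    // Reads a 4-byte big-endian int from buf at off, one byte at a
    // time, so no unaligned multi-byte load ever reaches the CPU.
    static int getIntBE(byte[] buf, int off) {
        return ((buf[off]     & 0xFF) << 24)
             | ((buf[off + 1] & 0xFF) << 16)
             | ((buf[off + 2] & 0xFF) << 8)
             |  (buf[off + 3] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] buf = {0x7F, 0x00, 0x01, 0x02, 0x03};
        // An odd (unaligned) offset is fine with byte-wise access.
        System.out.println(Integer.toHexString(getIntBE(buf, 1))); // 10203
    }
}
```

Byte-wise access trades a little speed for portability; that is the
usual fallback the Unsafe-based fast path is meant to optimize over.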


On Mon, Oct 20, 2014 at 2:00 PM, Clinton Gormley <[email protected]>
wrote:

> Hi Abhinav
>
> It would be good to know exactly where this problem is coming from. Is it
> the way that Logstash adds the template, or is it in the Elasticsearch
> layer? Please could you try something:
>
> * Delete the existing template and index in Elasticsearch
> * Take the Logstash template and create it yourself, not using Logstash
> * Then index a document into an index that matches the template pattern
>
> If this works, then we know the problem is in the Logstash layer rather
> than in the Elasticsearch layer.
>
> thanks
>
> On 19 October 2014 16:38, Abhinav Sonkar <[email protected]> wrote:
>
>> Thanks, Jörg.
>>
>> Some more information:
>>
>> The error message keeps repeating until ES is restarted. After restarting
>> ES, the index gets created with some bulk shards message (I am not on the
>> VPN right now, so I can't show the exact message). When the time comes to
>> create the next logstash index, the error starts again.
>>
>> Abhinav
>>
>>
>> On Sunday, 19 October 2014 16:31:07 UTC+2, Jörg Prante wrote:
>>>
>>> Sorry, I overlooked this - yes it is SPARC. I will investigate.
>>>
>>> Jörg
>>>
>>> On Sun, Oct 19, 2014 at 4:30 PM, [email protected] <[email protected]>
>>> wrote:
>>>
>>>> Is this Solaris SPARC?
>>>>
>>>> Looks like a compression / Jackson issue. SPARC is 64-bit big-endian,
>>>> so this is interesting.
>>>>
>>>> Jörg
>>>>
>>>> On Sun, Oct 19, 2014 at 12:25 PM, Abhinav Sonkar <[email protected]>
>>>> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I am using ES 1.3.4 with Logstash 1.4.2 on Solaris SPARC. Every day,
>>>>> when Logstash tries to create a new index, ES stops responding (the
>>>>> process stays active) with the error below:
>>>>>
>>>>> [2014-10-19 12:23:02,264][DEBUG][action.admin.indices.create] [SZ1248 Morpheus] [logstash-2014.10.19] failed to create
>>>>> org.elasticsearch.common.jackson.core.JsonParseException: Unexpected character ('_' (code 95)): was expecting a colon to separate field name and value
>>>>>  at [Source: {"_default_":{"_defa:{"_defled":true},"dynamic_templates":[{"string_fields":[{"stch":"*",""stch":"*ping_fiel":"*","ng",""stch"ga:{"_def":"*","ng","index":"*","yzed":truit_norms":[{"s,"fields":[{"stc:{"_def":"*",""ga:{"_dex":"*","an,"yzed":truitre_abovf":"*",,,,,,,,,roperties":[{"sersionc:{"_def":"*",""ga:{"_dex":"*","an,"yzed"},"geoipc:{"_def":"object",""stcmic":true},"dth":"*","",""stcerties":[{"sertionc:{"_def":"geoipc:nt",,,,}}; line: 1, column: 25]
>>>>>         at org.elasticsearch.common.jackson.core.JsonParser._constructError(JsonParser.java:1419)
>>>>>         at org.elasticsearch.common.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:508)
>>>>>         at org.elasticsearch.common.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:437)
>>>>>         at org.elasticsearch.common.jackson.core.json.ReaderBasedJsonParser._skipColon2(ReaderBasedJsonParser.java:1773)
>>>>>         at org.elasticsearch.common.jackson.core.json.ReaderBasedJsonParser._skipColon(ReaderBasedJsonParser.java:1746)
>>>>>         at org.elasticsearch.common.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:621)
>>>>>         at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:50)
>>>>>         at org.elasticsearch.common.xcontent.support.AbstractXContentParser.readMap(AbstractXContentParser.java:268)
>>>>>         at org.elasticsearch.common.xcontent.support.AbstractXContentParser.readValue(AbstractXContentParser.java:308)
>>>>>         at org.elasticsearch.common.xcontent.support.AbstractXContentParser.readMap(AbstractXContentParser.java:275)
>>>>>         at org.elasticsearch.common.xcontent.support.AbstractXContentParser.readMap(AbstractXContentParser.java:254)
>>>>>         at org.elasticsearch.common.xcontent.support.AbstractXContentParser.map(AbstractXContentParser.java:208)
>>>>>         at org.elasticsearch.common.xcontent.support.AbstractXContentParser.mapAndClose(AbstractXContentParser.java:219)
>>>>>         at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.parseMapping(MetaDataCreateIndexService.java:473)
>>>>>         at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.access$400(MetaDataCreateIndexService.java:89)
>>>>>         at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:260)
>>>>>         at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:328)
>>>>>         at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>>
>>>>> I tried a fresh install of ES and Logstash, but the error starts as
>>>>> soon as Logstash tries to create an index. It seems to be related to
>>>>> mappings, but I am not sure. On Unix it works fine; the problem only
>>>>> occurs on Solaris.
>>>>>
>>>>> Please help.
>>>>>
>>>>> Regards,
>>>>> Abhinav
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "elasticsearch" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to [email protected].
>>>>> To view this discussion on the web visit https://groups.google.com/d/
>>>>> msgid/elasticsearch/c07d4f45-25ec-4c3c-b201-9cd354e7470d%
>>>>> 40googlegroups.com
>>>>> <https://groups.google.com/d/msgid/elasticsearch/c07d4f45-25ec-4c3c-b201-9cd354e7470d%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>>> .
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>
>>>>
>

