2016-04-03 19:13 GMT+02:00 ravi sankar <sankar_rav...@yahoo.co.in.invalid>:
> Hi Romain,
> I have done some tests with JacksonProvider
>
> final Client client = clientBuilder.build().register(new JacksonJsonProvider());
> final ExecutorService es = Executors.newFixedThreadPool(100);
> Json Message - 
> https://raw.githubusercontent.com/Ravisankar-Challa/tomee_embedded/master/src/main/resources/miscellaneousinfo.js
> Heap Memory 64 MB - No Issue
>
>
> final ExecutorService es = Executors.newFixedThreadPool(34);
> This time I had a String in the JSON message of around 3 MB
> Heap Memory 768 MB - No issues
>

Jackson doesn't protect against anything there and leaves the security
concerns to the user/infrastructure.
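Since the limit is left to the application, one way to enforce it yourself is to cap the input before it reaches the parser. A minimal sketch of that idea (BoundedReader is a hypothetical helper, not part of Jackson or Johnzon):

```java
import java.io.FilterReader;
import java.io.IOException;
import java.io.Reader;

// Sketch: a Reader that refuses to deliver more than maxChars characters.
// Wrapping the response stream in something like this is one way to enforce
// a payload limit yourself when the parser does not enforce one.
public class BoundedReader extends FilterReader {
    private final long maxChars;
    private long read;

    public BoundedReader(final Reader in, final long maxChars) {
        super(in);
        this.maxChars = maxChars;
    }

    @Override
    public int read(final char[] cbuf, final int off, final int len) throws IOException {
        final int n = super.read(cbuf, off, len);
        if (n > 0 && (read += n) > maxChars) {
            throw new IOException("Payload exceeds " + maxChars + " characters");
        }
        return n;
    }

    @Override
    public int read() throws IOException {
        final int c = super.read();
        if (c >= 0 && ++read > maxChars) {
            throw new IOException("Payload exceeds " + maxChars + " characters");
        }
        return c;
    }
}
```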

> So Jackson does handle JSON strings of different sizes without any settings.
> If the auto-adjustment causes problems in Johnzon then we can leave it as is.
>

that's more a matter of design than of implementation.

FYI:

import com.example.rest.JsonContentTypeResponseFilter;
import com.example.rest.model.misc.MiscellaneousInfo;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Run {
    public static void main(final String[] args) throws InterruptedException {
        final ExecutorService es = Executors.newFixedThreadPool(128);
        final ClientBuilder clientBuilder = ClientBuilder.newBuilder();
        clientBuilder.property("http.connection.timeout", String.valueOf(30000));
        clientBuilder.property("http.receive.timeout", String.valueOf(30000));
        final Client client = clientBuilder.build().register(new JacksonJsonProvider());
        final AtomicInteger errors = new AtomicInteger();
        for (int i = 0; i < 1000; i++) {
            es.submit(() -> {
                try {
                    if (client.target("https://raw.githubusercontent.com/Ravisankar-Challa/tomee_embedded/master/src/main/resources/miscellaneousinfo.js")
                            .register(JsonContentTypeResponseFilter.class)
                            .request()
                            .get(MiscellaneousInfo.class).getTitles() == null) {
                        throw new IllegalArgumentException();
                    }
                } catch (final Exception e) {
                    e.printStackTrace();
                    errors.incrementAndGet();
                }
            });
        }
        es.shutdown();
        es.awaitTermination(1, TimeUnit.HOURS);
        System.out.println("Errors: " + errors.get());
    }
}

This run also ends with Caused by: java.lang.OutOfMemoryError: GC overhead
limit exceeded. I didn't check whether lower numbers avoid it, but 128
threads and 1000 iterations is not much for a webapp ;)
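The arithmetic behind that OOME is easy to check: with the default 10M-char value buffer and 2 bytes per Java char, the worst case scales linearly with the number of concurrent parsers. A quick sanity check using the numbers from this test (BufferMath is a hypothetical helper for illustration):

```java
// Back-of-the-envelope worst case for the default Johnzon settings:
// each parser can hold a value buffer of max-string-length chars,
// and a Java char is 2 bytes.
public class BufferMath {
    public static long worstCaseBytes(final int threads, final int maxStringLength) {
        return (long) threads * maxStringLength * 2L;
    }

    public static void main(final String[] args) {
        // 128 concurrent parsers x 10M chars x 2 bytes/char = 2560 MiB,
        // far beyond the 768 MB heap used in the test.
        System.out.println(worstCaseBytes(128, 10 * 1024 * 1024) / (1024 * 1024) + " MiB");
    }
}
```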


> Thanks,
> Ravisankar Challa
>
>     On Monday, 4 April 2016 2:30 AM, Romain Manni-Bucau 
> <rmannibu...@gmail.com> wrote:
>
>
> HTTP is particular because you have the content length, so we could maybe
> do a bit better, but even Jackson has this dynamic memory issue.
>
> Romain Manni-Bucau
> @rmannibucau |  Blog | Github | LinkedIn | Tomitriber
>
>
> 2016-04-03 18:26 GMT+02:00 ravi sankar <sankar_rav...@yahoo.co.in.invalid>:
>> Hi Romain,
>> Yes, you are right, it doesn't solve the actual problem. I wonder how Jackson
>> is able to handle payloads of different sizes out of the box.
>>
>> Thanks,
>> Ravi
>>
>>    On Monday, 4 April 2016 12:02 AM, Romain Manni-Bucau 
>> <rmannibu...@gmail.com> wrote:
>>
>>
>>  Hi Ravi,
>>
>> 2016-04-03 14:44 GMT+02:00 ravi sankar <sankar_rav...@yahoo.co.in.invalid>:
>>> Hi Romain,
>>> Scenario: Retrieve the json file at 
>>> https://raw.githubusercontent.com/Ravisankar-Challa/tomee_embedded/master/src/main/resources/miscellaneousinfo.js
>>> using JaxRs client.
>>>
>>> git checkout master
>>>
>>> mvn -f tomee-embedded-shade.xml clean package -Dmaven.test.skip
>>>
>>> java -Xmx768M -jar target/tomeedemo.jar -c
>>>
>>> Using Apache JMeter I tried firing 50 concurrent requests.
>>> (Url: http://localhost:8080/api/client/test)
>>> Result: SEVERE: java.lang.OutOfMemoryError: Java heap space
>>>    javax.ws.rs.ProcessingException: java.lang.OutOfMemoryError: Java heap 
>>> space
>>>    at 
>>> org.apache.cxf.jaxrs.client.WebClient.handleResponse(WebClient.java:1187)
>>>
>>
>> Reproduced with the same JVM settings and:
>>
>> import com.example.rest.JsonContentTypeResponseFilter;
>> import com.example.rest.model.misc.MiscellaneousInfo;
>> import org.apache.johnzon.jaxrs.JohnzonProvider;
>>
>> import javax.ws.rs.client.Client;
>> import javax.ws.rs.client.ClientBuilder;
>> import java.util.concurrent.ExecutorService;
>> import java.util.concurrent.Executors;
>> import java.util.concurrent.TimeUnit;
>> import java.util.concurrent.atomic.AtomicInteger;
>>
>> public class Run {
>>     public static void main(final String[] args) throws InterruptedException {
>>         final ExecutorService es = Executors.newFixedThreadPool(64);
>>         final ClientBuilder clientBuilder = ClientBuilder.newBuilder();
>>         clientBuilder.property("http.connection.timeout", String.valueOf(30000));
>>         clientBuilder.property("http.receive.timeout", String.valueOf(30000));
>>         final Client client = clientBuilder.build().register(new JohnzonProvider<>());
>>         final AtomicInteger errors = new AtomicInteger();
>>         for (int i = 0; i < 100; i++) {
>>             es.submit(() -> {
>>                 try {
>>                     if (client.target("https://raw.githubusercontent.com/Ravisankar-Challa/tomee_embedded/master/src/main/resources/miscellaneousinfo.js")
>>                             .register(JsonContentTypeResponseFilter.class)
>>                             .request()
>>                             .get(MiscellaneousInfo.class).getTitles() == null) {
>>                         throw new IllegalArgumentException();
>>                     }
>>                 } catch (final Exception e) {
>>                     e.printStackTrace();
>>                     errors.incrementAndGet();
>>                 }
>>             });
>>         }
>>         es.shutdown();
>>         es.awaitTermination(1, TimeUnit.HOURS);
>>         System.out.println("Errors: " + errors.get());
>>     }
>> }
>>
>>
>>
>> That said, also setting -Dorg.apache.johnzon.max-string-length=4096
>> solved it - it can be done outside the application, typically in
>> conf/system.properties.
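For reference, that limit is read as a plain JVM system property with a 10M-char default (the constant below mirrors the one in JsonParserFactoryImpl), which is why it can be set via -D, conf/system.properties, or programmatically before the factory is created; a minimal sketch:

```java
// Sketch: Johnzon reads the limit as a JVM system property with a 10M default.
// Setting it before the JsonReaderFactory is created (or via -D, or
// conf/system.properties in TomEE) shrinks the per-parser value buffer.
public class MaxStringLengthDemo {
    // mirrors JsonParserFactoryImpl.MAX_STRING_LENGTH
    public static final String MAX_STRING_LENGTH = "org.apache.johnzon.max-string-length";

    public static void main(final String[] args) {
        System.setProperty(MAX_STRING_LENGTH, "4096");
        // same lookup Johnzon performs for its default
        final int limit = Integer.getInteger(MAX_STRING_LENGTH, 10 * 1024 * 1024);
        System.out.println(limit); // 4096 once the property is set
    }
}
```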
>>
>>> I have analyzed the heap dump using Eclipse MAT.
>>> One instance of "org.apache.johnzon.core.JsonReaderFactoryImpl" loaded by 
>>> "sun.misc.Launcher$AppClassLoader @ 0xc019df10" occupies 717,491,152 
>>> (91.98%) bytes.
>>> The memory is accumulated in one instance of 
>>> "java.util.concurrent.ConcurrentLinkedQueue$Node" loaded by "<system class 
>>> loader>".
>>>
>>> valueBufferProvider of JsonParserFactoryImpl.java is holding all this 
>>> memory.
>>>
>>> Auto-adjust buffer fix:
>>>
>>> JsonParserFactoryImpl.java, line 64:
>>> this.valueBufferProvider = getBufferProvider().newCharProvider(1);  // set initial size to 1
>>>
>>> JsonStreamParserImpl.java, line 70:
>>> private char[] fallBackCopyBuffer;
>>>
>>> JsonStreamParserImpl.java, line 562:
>>> else if ((endOfValueInBuffer - startOfValueInBuffer) > fallBackCopyBuffer.length) {
>>>     System.out.println("Adjusting buffer from " + fallBackCopyBuffer.length + " to " + (endOfValueInBuffer - startOfValueInBuffer));
>>>     fallBackCopyBuffer = new char[endOfValueInBuffer - startOfValueInBuffer];  // auto-adjust to required size
>>> }
>>>
>>
>> This doesn't solve the whole issue. Of course you then have a small
>> buffer, but now let's enrich your .js to contain a string of ~10M (which
>> can happen when you don't control what you call). Then you:
>>
>> 1. still have your issue
>> 2. have no way to bound the memory you can allocate (today you can
>> compute it: threads using the mapper x buffer memory)
>>
>>
>> I'm happy to resize the default (just start a discussion on the
>> dev@johnzon list) to something more reasonable (we have to check
>> Twitter, GitHub, JIRA at least), but I think this auto-adjusting
>> solution is a seductive one that can lead us to crossing our fingers
>> in production, which is probably the worst thing I can see when
>> deploying an application.
>>
>> Wdyt?
>>
>>> git checkout johnzon_buffer_fix
>>> mvn -f tomee-embedded-shade.xml clean package -Dmaven.test.skip
>>> java -Xmx768M -jar target/tomeedemo.jar -c
>>>
>>> Using Apache JMeter I tried firing 50 concurrent requests.
>>> (Url: http://localhost:8080/api/client/test)
>>> Using Eclipse MAT I analyzed the heap dump and didn't find any issues.
>>>
>>> Thanks,
>>> Ravisankar Challa
>>>
>>>    On Friday, 1 April 2016 1:23 AM, Romain Manni-Bucau 
>>> <rmannibu...@gmail.com> wrote:
>>>
>>>
>>>  2016-03-31 15:58 GMT+02:00 ravi sankar <sankar_rav...@yahoo.co.in.invalid>:
>>>> Right now valueBufferProvider is eating up 20 MB by default, even for
>>>> small JSON strings/numbers.
>>>> org.apache.johnzon.max-string-length is there to control the buffer size,
>>>> but how many people will be aware of this option? Everyone expects the
>>>> framework to do its best with default settings :-)
>>>>
>>>
>>> I get it, but this setting is important - this thread is not about that,
>>> so let's not detail it here - and if we use a lower value then at
>>> runtime, after a week of running, you just start to fail and need to
>>> redeploy if you didn't identify it: this is way worse. If we implement
>>> the automatic adjustment, same punishment: it works, it works, OOME,
>>> restart, it works, OOME, restart, ...
>>>
>>> Side note: switching the implementation from char[] to StringBuilder or
>>> equivalent is not an option because it is insanely slow compared to the
>>> current version; the only possible auto-adjustment would really be "oops,
>>> it fails, can I increase?". If you want to give a patch a try
>>> (even without adding a min option, but using min(8k, max) as the min) I
>>> would be more than happy to review it, but please ensure this behavior
>>> is logged explicitly at startup, and at runtime at INFO level in case of
>>> adjustment - stating the buffer was not adapted and should be
>>> configured - each time the buffer is adjusted, and probably use an
>>> ArrayList-like algorithm (the adjustment is not +1 char but size*2,
>>> within the limit of the max value). If your patch doesn't imply any
>>> slowdown or is done thanks to a BufferProvider (= configurable) then
>>> I'm happy to push it upstream.
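The growth policy described above (double, never +1, capped at the configured max) could be sketched like this; grownCapacity is a hypothetical helper for illustration, not Johnzon code:

```java
// Hypothetical sketch of the suggested growth policy: when the value buffer
// is too small, double its size (never just +1 char) but never exceed the
// configured maximum; the caller is expected to log each adjustment at INFO.
public class BufferGrowth {
    public static int grownCapacity(final int current, final int needed, final int max) {
        if (needed > max) {
            // only fail once the configured ceiling is actually exceeded
            throw new IllegalStateException("Maximum string length " + max + " exceeded");
        }
        int size = Math.max(current, 1);
        while (size < needed) {
            size = (int) Math.min((long) size * 2, max); // ArrayList-style doubling, capped
        }
        return size;
    }
}
```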
>>>
>>>
>>>
>>>>
>>>>    On Friday, 1 April 2016 12:32 AM, Romain Manni-Bucau 
>>>> <rmannibu...@gmail.com> wrote:
>>>>
>>>>
>>>>  2016-03-31 15:26 GMT+02:00 ravi sankar 
>>>> <sankar_rav...@yahoo.co.in.invalid>:
>>>>> Hi Romain,
>>>>>> setting a min-max means you'll use max only pretty quickly with auto-adjustment.
>>>>> Let's assume JsonParser has default buffer sizes of min 1 and max 10 * 1024 * 1024
>>>>> and needs to process a message like {"hello":"hai"}.
>>>>> The parser adjusts the buffer size to 5 to process the JSON field "hello". It is
>>>>> never going to hit the maximum value.
>>>>
>>>> What I mean is that you will parse different JSON documents and keep the
>>>> biggest size you visited. So you will likely sit at max, so min is
>>>> useless. You can adjust your max, but any adjustment doesn't bring much
>>>> in practice except some memory instability.
>>>>
>>>>> Thanks,
>>>>> Ravi
>>>>>
>>>>>    On Thursday, 31 March 2016 11:45 PM, Romain Manni-Bucau 
>>>>> <rmannibu...@gmail.com> wrote:
>>>>>
>>>>>
>>>>>  2016-03-31 14:34 GMT+02:00 ravi sankar 
>>>>> <sankar_rav...@yahoo.co.in.invalid>:
>>>>>> Hi Romain,
>>>>>
>>>>> Hello Ravi,
>>>>>
>>>>>>
>>>>>> I noticed the below issue when using JaxRsClient code.
>>>>>>
>>>>>> Looking in to the code of JsonParserFactoryImpl.java
>>>>>>
>>>>>> public static final String MAX_STRING_LENGTH = "org.apache.johnzon.max-string-length";
>>>>>> public static final int DEFAULT_MAX_STRING_LENGTH = Integer.getInteger(MAX_STRING_LENGTH, 10 * 1024 * 1024); //10m
>>>>>>
>>>>>> The valueBufferProvider variable is holding 20 megabytes of data on the heap.
>>>>>>
>>>>>> If my understanding is correct, org.apache.johnzon.max-string-length
>>>>>> determines the max length of a JSON field or value when parsing.
>>>>>>
>>>>>> For general use cases, JSON field names and values are very small
>>>>>> strings, unless we are trying to exchange files as binary data in JSON.
>>>>>>
>>>>>> Allocating 20 megabytes of heap memory for processing field names or
>>>>>> values is too big.
>>>>>>
>>>>>
>>>>> The original value was way smaller, and it is actually not that big:
>>>>> you hit it very easily using public APIs like the GitHub one.
>>>>>
>>>>>> Is there anything we can do to optimize it, like starting with a small
>>>>>> size and dynamically increasing the valueBufferProvider size whenever
>>>>>> required until it reaches a maximum value?
>>>>>>
>>>>>
>>>>> You can customize it for your own reader factory since that's a
>>>>> configuration the factory accepts. If you know you have small strings,
>>>>> just set it lower.
>>>>>
>>>>>> I tried the option -Dorg.apache.johnzon.max-string-length=5 for
>>>>>> parsing the JSON string below
>>>>>> {
>>>>>>  "hello_world":"test"
>>>>>> }
>>>>>>
>>>>>> and got org.apache.johnzon.mapper.MapperException: Too many characters. 
>>>>>> Maximum string/number length of 5 exceeded
>>>>>>
>>>>>> Is it possible to have a min value and a max value, with the exception
>>>>>> thrown only once the max string/number length is reached?
>>>>>
>>>>> That's the current behavior, except we don't auto-adapt. As a side note,
>>>>> you can already wrap the JSON-P API to auto-adapt yourself if you get
>>>>> this exception and you care about buffer allocation. From experience,
>>>>> setting a min-max means you'll use max only pretty quickly with
>>>>> auto-adjustment.
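The "wrap the API and auto-adapt yourself" idea can be sketched generically; parseWithLimit stands in for rebuilding a reader factory with a bigger max-string-length, and the helper below is hypothetical, not part of any library:

```java
import java.util.function.IntFunction;

// Hypothetical sketch of the "wrap and retry" idea: attempt the parse with a
// small limit, and on a too-long-string failure rebuild with a bigger limit
// (doubling) until a configured hard ceiling is hit.
public class AdaptiveParse {
    public static <T> T parseWithGrowingLimit(final IntFunction<T> parseWithLimit,
                                              final int startLimit, final int hardMax) {
        int limit = startLimit;
        while (true) {
            try {
                // e.g. build a JsonReaderFactory configured with this limit and parse
                return parseWithLimit.apply(limit);
            } catch (final RuntimeException tooLong) {
                if (limit >= hardMax) {
                    throw tooLong; // give up at the ceiling instead of growing forever
                }
                limit = (int) Math.min((long) limit * 2, hardMax);
            }
        }
    }
}
```

This keeps the memory bound explicit (hardMax) instead of growing silently, which matches the concern about unbounded auto-adjustment.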
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Ravi
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
