Hi Dan, thanks for your input, I will try this out, although I don't yet understand how it helps. More generally, I don't understand the lifecycle of a view model: do you hold them in the server session, forward them marshalled to the browser and then back to the server, or even both?
Alternatively, I found that different servers (Tomcat, Jetty) have different default limits on the request header size. Jetty's default is 8192 bytes, but it can be adapted. So an alternative would be to pass a higher value to org.apache.isis.WebServer so that it launches the embedded Jetty with a bigger limit (a rough sketch of what I mean follows after the quoted mail below). I will also try this direction and, if successful, will propose a PR.

Thanks,
Vladimir

On 06.10.2017 16:10, "Dan Haywood" <[email protected]> wrote:

> Hi Vladimir,
>
> We hit this issue in Estatio too a while back. I have a solution which,
> for some reason, is in the non-open-source bit of Estatio we have; not sure
> why I put it there, will have to move it to the github side.
>
> Anyway, try adding this:
>
> @DomainService(nature = NatureOfService.DOMAIN)
> public class UrlEncodingUsingBaseEncodingSupportLargeUrls
>         extends UrlEncodingServiceUsingBaseEncoding {
>
>     /**
>      * Strings under this length are not cached, just returned as is.
>      */
>     private static final int MIN_LENGTH_TO_CACHE = 500;
>
>     /**
>      * Used to distinguish which strings represent keys in the cache,
>      * versus those not cached.
>      */
>     private static final String KEY_PREFIX = "______";
>
>     private static final int EXPECTED_SIZE = 1000;
>
>     // this is a naive implementation that will leak memory
>     private final BiMap<String, String> cachedValueByKey =
>             Maps.synchronizedBiMap(HashBiMap.<String, String>create(EXPECTED_SIZE));
>
>     @Override
>     public String encode(final String value) {
>         if (!canCache(value)) {
>             return super.encode(value);
>         }
>
>         synchronized (cachedValueByKey) {
>             String key = cachedValueByKey.inverse().get(value);
>             if (key == null) {
>                 key = newKey();
>                 cachedValueByKey.put(key, value);
>             }
>             return KEY_PREFIX + key;
>         }
>     }
>
>     @Override
>     public String decode(final String key) {
>         if (key == null || !key.startsWith(KEY_PREFIX)) {
>             return super.decode(key);
>         }
>         String keySuffix = key.substring(KEY_PREFIX.length());
>         return cachedValueByKey.get(keySuffix);
>     }
>
>     /**
>      * Factored out to allow easy subclassing.
>      */
>     protected String newKey() {
>         return UUID.randomUUID().toString();
>     }
>
>     private boolean canCache(final String key) {
>         return key != null && key.length() > MIN_LENGTH_TO_CACHE;
>     }
>
> }
>
> This is a bit hacky, and will leak memory, so it is not suitable for a
> high-volume environment. You might therefore want to refactor it to use an
> external cache such as Redis, or alternatively to somehow invalidate
> sessions, e.g. by also storing a handle to the session that creates the key.
>
> If you come up with anything better, please do contribute it back.
>
> HTH
> Dan
>
>
> On Fri, 6 Oct 2017 at 14:34 Vladimir Nišević <[email protected]> wrote:
>
> > Hi, my view models seem to be so large that I now get the exception
> > "Header is too large 8193>8192" when the browser submits e.g. an action
> > from a rendered view model to the server running on Jetty.
> >
> > Is this a Jetty-specific issue, or is it generally related to the fact
> > that the view model is marshalled into a GET parameter?
> >
> > Any idea how to solve this?
> >
> > Thanks
> > Vladimir
> >
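
PS: roughly what I had in mind for the Jetty side. This is only an untested sketch against the plain Jetty 9 API; I have not yet looked at where org.apache.isis.WebServer actually builds its connector, and the port and WAR path below are just placeholders:

import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.webapp.WebAppContext;

public class LargeHeaderJettyLauncher {

    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // raise both limits well above Jetty's 8192-byte default
        HttpConfiguration httpConfig = new HttpConfiguration();
        httpConfig.setRequestHeaderSize(32 * 1024);
        httpConfig.setResponseHeaderSize(32 * 1024);

        ServerConnector connector =
                new ServerConnector(server, new HttpConnectionFactory(httpConfig));
        connector.setPort(8080);                 // placeholder port
        server.addConnector(connector);

        WebAppContext webapp = new WebAppContext();
        webapp.setWar("target/myapp.war");       // placeholder path to the Isis webapp
        webapp.setContextPath("/");
        server.setHandler(webapp);

        server.start();
        server.join();
    }
}

If that works, the remaining question is how to expose the size as an option of WebServer itself; that is the part I still need to figure out before proposing a PR.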

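PPS: regarding the memory leak in the BiMap approach, one rough idea (untested, the names are mine, and not yet wired into your UrlEncodingService subclass) would be to keep the two-way lookup in bounded Guava caches instead of the BiMap, accepting that an entry evicted while a browser page still references it can no longer be decoded:

import java.util.UUID;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class BoundedKeyValueCache {

    // key -> value, evicted after inactivity so memory stays bounded
    private final Cache<String, String> valueByKey = CacheBuilder.newBuilder()
            .maximumSize(10_000)
            .expireAfterAccess(30, TimeUnit.MINUTES)
            .build();

    // value -> key, so that encoding the same value twice reuses the same key
    private final Cache<String, String> keyByValue = CacheBuilder.newBuilder()
            .maximumSize(10_000)
            .expireAfterAccess(30, TimeUnit.MINUTES)
            .build();

    public synchronized String keyFor(final String value) {
        String key = keyByValue.getIfPresent(value);
        if (key == null) {
            key = UUID.randomUUID().toString();
            keyByValue.put(value, key);
            valueByKey.put(key, value);
        }
        return key;
    }

    public String valueFor(final String key) {
        // null if the entry has been evicted; the caller has to cope with that
        return valueByKey.getIfPresent(key);
    }
}

A Redis-backed variant, as you suggest, would replace the two caches with an external store plus TTL, but the bounded in-memory version looks like the smallest step up from the BiMap.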