[ https://issues.apache.org/jira/browse/COUCHDB-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Vatamaniuc resolved COUCHDB-3245.
--------------------------------------
    Resolution: Fixed

> couchjs -S option doesn't have any effect
> -----------------------------------------
>
>                 Key: COUCHDB-3245
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-3245
>             Project: CouchDB
>          Issue Type: Bug
>            Reporter: Nick Vatamaniuc
>
> Currently the -S option of couchjs sets the stack _chunk_ size for JS contexts.
> Reference:
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/JSAPI_reference/JS_NewContext
> The documentation recommends 8K, yet I have seen cases where it was raised to
> 1G+ in production! That doesn't seem right at all, and it probably also kills
> performance and eats memory.
> The docs linked above say:
> > The stackchunksize parameter does not control the JavaScript stack size. 
> > (The JSAPI does not provide a way to adjust the stack depth limit.) Passing 
> > a large number for stackchunksize is a mistake. In a DEBUG build, large 
> > chunk sizes can degrade performance dramatically. The usual value of 8192 
> > is recommended.
> Instead, we should be setting the maximum GC heap size, which is configured on
> the runtime via {{JS_NewRuntime(uint32_t maxbytes)}}.
> Experimentally, a large maxbytes seems to fix the out-of-memory errors caused
> by large views. I suspect it works because it stops GC. At some point we
> probably drop some objects, GC collects them, and we crash...
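> A minimal sketch of the difference between the two settings, using the JSAPI
> signatures cited above (the 64 MB value and the error handling are
> illustrative assumptions, not couchjs's actual code):
> {code:c}
> #include "jsapi.h"
>
> int main(void) {
>     /* The GC heap ceiling is a property of the runtime and is fixed at
>      * creation time; this is the knob that actually bounds memory use. */
>     uint32_t max_bytes = 64u * 1024u * 1024u;  /* illustrative 64 MB */
>     JSRuntime *rt = JS_NewRuntime(max_bytes);
>     if (!rt)
>         return 1;
>
>     /* The second argument to JS_NewContext is only the stack *chunk* size
>      * used for internal allocations. It does not limit JS stack depth, so
>      * the documented 8192 is the sensible value regardless of view size. */
>     JSContext *cx = JS_NewContext(rt, 8192);
>     if (!cx) {
>         JS_DestroyRuntime(rt);
>         return 1;
>     }
>
>     /* ... evaluate map/reduce scripts here ... */
>
>     JS_DestroyContext(cx);
>     JS_DestroyRuntime(rt);
>     JS_ShutDown();
>     return 0;
> }
> {code}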



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
