[ https://issues.apache.org/jira/browse/TAJO-1430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481298#comment-14481298 ]

ASF GitHub Bot commented on TAJO-1430:
--------------------------------------

Github user jihoonson commented on a diff in the pull request:

    https://github.com/apache/tajo/pull/442#discussion_r27803484
  
    --- Diff: tajo-core/src/main/java/org/apache/tajo/master/GlobalEngine.java 
---
    @@ -143,6 +148,18 @@ private QueryContext createQueryContext(Session 
session) {
           newQueryContext.putAll(CommonTestingUtil.getSessionVarsForTest());
         }
     
    +    // Set queryCache in session
    +    if (session.getQueryCache() == null) {
    +      session.setQueryCache(CacheBuilder.newBuilder()
    +        .maximumSize(200)
    --- End diff --
    
    Here, the size means the number of items contained in the cache, so the 
actual memory footprint depends on the size of the cached queries. This could 
potentially exhaust memory. So, it would be better to set a maximum weight 
rather than a maximum size.
    
    In addition, I think that the maximum weight and the expiration period 
should be configurable. ```ConfVar``` would be a good place, since we don't 
need to maintain separate configurations for each user.
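
    As a concrete illustration of the weight-based eviction policy suggested 
above, here is a hypothetical, standalone sketch (not Tajo code, and 
deliberately not using Guava so it is self-contained): an access-ordered 
`LinkedHashMap` that weighs each entry by its SQL text length and evicts 
least-recently-used entries once a configurable budget is exceeded. In the 
actual patch this would correspond to Guava's `CacheBuilder.maximumWeight()` 
together with a `Weigher`, with the budget read from a `ConfVar`.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of weight-bounded caching (the policy Guava's
// CacheBuilder.maximumWeight()/weigher() provide): entries are weighed by
// their SQL text length, and least-recently-used entries are evicted until
// the total weight fits the configured budget.
class WeightBoundedQueryCache {
  private final long maxWeight;   // total character budget, e.g. from a ConfVar
  private long currentWeight = 0L;
  // Access-order LinkedHashMap: iteration visits least-recently-used first.
  private final LinkedHashMap<String, String> entries =
      new LinkedHashMap<>(16, 0.75f, true);

  WeightBoundedQueryCache(long maxWeight) {
    this.maxWeight = maxWeight;
  }

  synchronized void put(String sql, String parseResult) {
    if (entries.remove(sql) != null) {
      currentWeight -= sql.length();  // replacing an existing entry
    }
    entries.put(sql, parseResult);
    currentWeight += sql.length();
    // Evict LRU entries until the weight budget is respected.
    Iterator<Map.Entry<String, String>> it = entries.entrySet().iterator();
    while (currentWeight > maxWeight && it.hasNext()) {
      Map.Entry<String, String> eldest = it.next();
      if (eldest.getKey().equals(sql)) {
        continue;  // never evict the entry just inserted
      }
      currentWeight -= eldest.getKey().length();
      it.remove();
    }
  }

  synchronized String get(String sql) {
    return entries.get(sql);  // also refreshes LRU position
  }

  synchronized int size() {
    return entries.size();
  }
}
```

    With a budget of 10 characters, caching three 5-character queries evicts 
the least-recently-used one, regardless of how many entries that leaves; a 
count-based `maximumSize(200)` would instead keep all three even if each query 
were several MB.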


> Improve SQLAnalyzer by session-based parsing-result caching
> -----------------------------------------------------------
>
>                 Key: TAJO-1430
>                 URL: https://issues.apache.org/jira/browse/TAJO-1430
>             Project: Tajo
>          Issue Type: New Feature
>          Components: parser
>    Affects Versions: 0.10.0
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>             Fix For: 0.10.1
>
>         Attachments: TAJO-1430.patch, long_2times.sql, wide_table.sql
>
>
> There are wide tables with very many columns. Moreover, BI tools generate 
> very complex queries, several MB in size. Although Tajo executes those 
> queries very fast, in a few seconds, the total user-experience time is slow.
> To become the fastest Hadoop DW, we need the following feature. 
> {code:sql}
> tsql -f long_2times.sql
> ...
> (0 rows, 30.641 sec, 0 B selected)
> ...
> (0 rows, 1.707 sec, 0 B selected)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
