[GitHub] incubator-hawq pull request #1216: HAWQ-1429. Do not use AggBridge when WHER...

2017-04-11 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/1216

HAWQ-1429. Do not use AggBridge when WHERE clause specified.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sansanichfb/incubator-hawq HAWQ-1429

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1216.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1216


commit aa3c14ab123e914851adda5313741e6fe24fda56
Author: Oleksandr Diachenko 
Date:   2017-04-12T00:51:40Z

HAWQ-1429. Do not use AggBridge when WHERE clause specified.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (HAWQ-1429) Do not use AggBridge when WHERE clause specified

2017-04-11 Thread Oleksandr Diachenko (JIRA)
Oleksandr Diachenko created HAWQ-1429:
-

 Summary: Do not use AggBridge when WHERE clause specified
 Key: HAWQ-1429
 URL: https://issues.apache.org/jira/browse/HAWQ-1429
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Oleksandr Diachenko
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


Reduce the scope of AggBridge to the case where no WHERE clause is specified.
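
For illustration, a minimal Java sketch of the guard this change implies, assuming hypothetical names (useAggBridge, hasWhereClause) rather than the actual PXF API:

{code}
// Hypothetical sketch of the HAWQ-1429 behavior: only use the
// aggregate-optimized bridge when the query carries no WHERE clause.
// Class and method names are illustrative, not the real PXF code.
public class BridgeSelector {

    /** Decide whether the aggregate-optimized bridge may be used. */
    public static boolean useAggBridge(boolean isAggregateQuery, boolean hasWhereClause) {
        // AggBridge bypasses row-level processing, so any WHERE clause
        // must disable it and route the request through the regular bridge.
        return isAggregateQuery && !hasWhereClause;
    }

    public static void main(String[] args) {
        System.out.println(useAggBridge(true, false)); // true  -> AggBridge
        System.out.println(useAggBridge(true, true));  // false -> regular bridge
    }
}
{code}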




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq pull request #1215: HAWQ-1415. Set the default_value of JAVA_...

2017-04-11 Thread ljainpivotalio
GitHub user ljainpivotalio opened a pull request:

https://github.com/apache/incubator-hawq/pull/1215

HAWQ-1415. Set the default_value of JAVA_HOME for running RPS. Move var definition statement (no logic change)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ljainpivotalio/incubator-hawq HAWQ-1415-4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1215.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1215


commit 0d0064b88cbc8e84f0eee96f2fa70c32b9a9ab85
Author: ljainpivotalio 
Date:   2017-04-11T22:51:25Z

HAWQ-1415. Set the default_value of JAVA_HOME for running RPS. Move var 
definition statement (no logic change)
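
For illustration only, a minimal Java sketch of the defaulting pattern the title describes; the actual change lives in the RPS startup configuration, and the default path used here is an assumption:

{code}
// Illustrative only: resolve JAVA_HOME for the Ranger Plugin Service (RPS),
// falling back to an assumed default when the variable is unset or empty.
public class JavaHomeResolver {

    // Assumed default for the sketch; not taken from the actual pull request.
    private static final String DEFAULT_JAVA_HOME = "/usr/java/default";

    public static String resolveJavaHome() {
        String javaHome = System.getenv("JAVA_HOME");
        return (javaHome == null || javaHome.isEmpty()) ? DEFAULT_JAVA_HOME : javaHome;
    }

    public static void main(String[] args) {
        System.out.println("RPS would use JAVA_HOME=" + resolveJavaHome());
    }
}
{code}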






[jira] [Resolved] (HAWQ-1417) Crashed at ANALYZE after COPY

2017-04-11 Thread Ming LI (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ming LI resolved HAWQ-1417.
---
Resolution: Fixed

> Crashed at ANALYZE after COPY
> -
>
> Key: HAWQ-1417
> URL: https://issues.apache.org/jira/browse/HAWQ-1417
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog
>
>
> This is the backtrace from the master process at the point where the PANIC is reported:
> {code}
> (gdb) bt
> #0  0x7f6d35b0e6ab in raise () from /data/logs/52280/new_panic/packcore-core.postgres.457052/lib64/libpthread.so.0
> #1  0x008c7d79 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, processName=) at elog.c:4519
> #2  
> #3  ResourceOwnerEnlargeRelationRefs (owner=0x0) at resowner.c:708
> #4  0x008b5659 in RelationIncrementReferenceCount (rel=0x1baf500) at relcache.c:1941
> #5  RelationIdGetRelation (relationId=relationId@entry=1259) at relcache.c:1895
> #6  0x004ca664 in relation_open (lockmode=lockmode@entry=1, relationId=relationId@entry=1259) at heapam.c:882
> #7  heap_open (relationId=relationId@entry=1259, lockmode=lockmode@entry=1) at heapam.c:1285
> #8  0x008b0945 in ScanPgRelation (targetRelId=targetRelId@entry=5010, indexOK=indexOK@entry=1 '\001', pg_class_relation=pg_class_relation@entry=0x7ffdf2aed390) at relcache.c:279
> #9  0x008b4302 in RelationBuildDesc (targetRelId=5010, insertIt=) at relcache.c:1209
> #10 0x008b56c7 in RelationIdGetRelation (relationId=relationId@entry=5010) at relcache.c:1918
> #11 0x004ca664 in relation_open (lockmode=, relationId=5010) at heapam.c:882
> #12 heap_open (relationId=5010, lockmode=) at heapam.c:1285
> #13 0x0055d1e6 in caql_basic_fn_all (pcql=0x1d70a58, bLockEntireTable=0 '\000', pCtx=0x7ffdf2aed480, pchn=0xf4b328 ) at caqlanalyze.c:343
> #14 caql_switch (pchn=pchn@entry=0xf4b328 , pCtx=pCtx@entry=0x7ffdf2aed480, pcql=pcql@entry=0x1d70a58) at caqlanalyze.c:229
> #15 0x005636db in caql_getcount (pCtx0=pCtx0@entry=0x0, pcql=0x1d70a58) at caqlaccess.c:367
> #16 0x009ddc47 in rel_is_partitioned (relid=1882211) at cdbpartition.c:232
> #17 rel_part_status (relid=relid@entry=1882211) at cdbpartition.c:484
> #18 0x005e7d43 in calculate_virtual_segment_number (candidateOids=) at analyze.c:833
> #19 analyzeStmt (stmt=stmt@entry=0x2045dd0, relids=relids@entry=0x0, preferred_seg_num=preferred_seg_num@entry=-1) at analyze.c:486
> #20 0x005e89a7 in analyzeStatement (stmt=stmt@entry=0x2045dd0, relids=relids@entry=0x0, preferred_seg_num=preferred_seg_num@entry=-1) at analyze.c:271
> #21 0x0065c25c in vacuum (vacstmt=vacstmt@entry=0x2045bf0, relids=relids@entry=0x0, preferred_seg_num=preferred_seg_num@entry=-1) at vacuum.c:316
> #22 0x007f6012 in ProcessUtility (parsetree=parsetree@entry=0x2045bf0, queryString=0x2045d30 "ANALYZE mis_data_ig_account_details.e_event_1_0_102", params=0x0, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0xf04ba0 , completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at utility.c:1471
> #23 0x007f1ade in PortalRunUtility (portal=portal@entry=0x1bfb490, utilityStmt=utilityStmt@entry=0x2045bf0, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0xf04ba0 , completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at pquery.c:1968
> #24 0x007f32be in PortalRunMulti (portal=portal@entry=0x1bfb490, isTopLevel=isTopLevel@entry=1 '\001', dest=0xf04ba0 , dest@entry=0x1b93f60, altdest=0xf04ba0 , altdest@entry=0x1b93f60, completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at pquery.c:2078
> #25 0x007f5025 in PortalRun (portal=portal@entry=0x1bfb490, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x1b93f60, altdest=altdest@entry=0x1b93f60, completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at pquery.c:1595
> #26 0x007ee098 in exec_execute_message (max_rows=9223372036854775807, portal_name=0x1b93ad0 "") at postgres.c:2782
> #27 PostgresMain (argc=, argv=, argv@entry=0x1a49b40, username=0x1a498f0 "mis_ig") at postgres.c:5170
> #28 0x007a0390 in BackendRun (port=0x1a185f0) at postmaster.c:5915
> #29 BackendStartup (port=0x1a185f0) at postmaster.c:5484
> #30 ServerLoop () at postmaster.c:2163
> #31 0x007a3159 in PostmasterMain (argc=, argv=) at postmaster.c:1454
> #32 0x004a52b9 in main (argc=9, argv=0x1a20d10) at main.c:226
> {code}





[GitHub] incubator-hawq pull request #1212: HAWQ-1396. Add more test cases for queryi...

2017-04-11 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1212#discussion_r110828640
  
--- Diff: src/test/feature/sanity_tests.txt ---
@@ -2,5 +2,5 @@
 #SERIAL=* are the serial tests to run, optional but should not be empty
 #you can have several PARALLEL or SRRIAL
 

-PARALLEL=TestErrorTable.*:TestPreparedStatement.*:TestUDF.*:TestAOSnappy.*:TestAlterOwner.*:TestAlterTable.*:TestCreateTable.*:TestGuc.*:TestType.*:TestDatabase.*:TestParquet.*:TestPartition.*:TestSubplan.*:TestAggregate.*:TestCreateTypeComposite.*:TestGpDistRandom.*:TestInformationSchema.*:TestQueryInsert.*:TestQueryNestedCaseNull.*:TestQueryPolymorphism.*:TestQueryPortal.*:TestQueryPrepare.*:TestQuerySequence.*:TestCommonLib.*:TestToast.*:TestTransaction.*:TestCommand.*:TestCopy.*:TestHawqRegister.TestPartitionTableMultilevel:TestHawqRegister.TestUsage1ExpectSuccessDifferentSchema:TestHawqRegister.TestUsage1ExpectSuccess:TestHawqRegister.TestUsage1SingleHawqFile:TestHawqRegister.TestUsage1SingleHiveFile:TestHawqRegister.TestDataTypes:TestHawqRegister.TestUsage1EofSuccess:TestHawqRegister.TestUsage2Case1Expected:TestHawqRegister.TestUsage2Case2Expected

-SERIAL=TestHawqRanger.*:TestExternalOid.TestExternalOidAll:TestExternalTable.TestExternalTableAll:TestTemp.BasicTest:TestRowTypes.*

+PARALLEL=TestErrorTable.*:TestPreparedStatement.*:TestUDF.*:TestAOSnappy.*:TestAlterOwner.*:TestAlterTable.*:TestCreateTable.*:TestGuc.*:TestType.*:TestDatabase.*:TestParquet.*:TestPartition.*:TestSubplan.*:TestAggregate.*:TestCreateTypeComposite.*:TestGpDistRandom.*:TestInformationSchema.*:TestQueryInsert.*:TestQueryNestedCaseNull.*:TestQueryPolymorphism.*:TestQueryPortal.*:TestQueryPrepare.*:TestQuerySequence.*:TestCommonLib.*:TestToast.*:TestTransaction.*:TestCommand.*:TestCopy.*:TestHawqRegister.TestPartitionTableMultilevel:TestHawqRegister.TestUsage1ExpectSuccessDifferentSchema:TestHawqRegister.TestUsage1ExpectSuccess:TestHawqRegister.TestUsage1SingleHawqFile:TestHawqRegister.TestUsage1SingleHiveFile:TestHawqRegister.TestDataTypes:TestHawqRegister.TestUsage1EofSuccess:TestHawqRegister.TestUsage2Case1Expected:TestHawqRegister.TestUsage2Case2Expected:TestHawqRanger.FallbackTest:TestHawqRanger.PXF*

+SERIAL=TestHawqRanger.BasicTest:TestHawqRanger.Allow*:TestHawqRanger.Resource*:TestHawqRanger.Deny*:TestExternalOid.TestExternalOidAll:TestExternalTable.TestExternalTableAll:TestTemp.BasicTest:TestRowTypes.*
--- End diff --

I'm not sure how this syntax is parsed by the script, but maybe you could check. 
If it turns out to be really complex to handle, you could file a JIRA instead. I 
think this will become an annoying issue as more test cases are added.
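
For illustration, a small Java sketch of how a colon-separated PARALLEL/SERIAL line and a '*' wildcard could be parsed; the real feature-test driver may parse it differently, so every name here is an assumption:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Illustrative parser for a line like "PARALLEL=TestFoo.*:TestBar.Case1":
// split the value on ':' and treat '*' as a glob wildcard when matching
// fully qualified test names. Not the actual HAWQ feature-test script.
public class SanityListParser {

    public static List<String> parsePatterns(String line) {
        String value = line.substring(line.indexOf('=') + 1);
        return Arrays.asList(value.split(":"));
    }

    public static boolean matches(String pattern, String testName) {
        // Quote everything literally except '*', which becomes ".*".
        String regex = Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        return testName.matches(regex);
    }

    public static void main(String[] args) {
        List<String> patterns = parsePatterns("PARALLEL=TestHawqRanger.PXF*:TestCopy.*");
        System.out.println(matches(patterns.get(0), "TestHawqRanger.PXFBasic"));  // true
        System.out.println(matches(patterns.get(1), "TestHawqRanger.BasicTest")); // false
    }
}
{code}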

