[jira] [Updated] (PHOENIX-5189) Index Scrutiny Fails when data table field (type:double) value is null

2019-03-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5189:

Summary: Index Scrutiny Fails when data table field (type:double)  value is 
null  (was: Index Scrutiny Fails when data table field (type:double)  is null)

> Index Scrutiny Fails when data table field (type:double)  value is null
> ---
>
> Key: PHOENIX-5189
> URL: https://issues.apache.org/jira/browse/PHOENIX-5189
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Priority: Minor
>
> Steps to reproduce:
> 1. Create a data table
> {code}
> CREATE TABLE IF NOT EXISTS TEST(k1 CHAR(5) NOT NULL, k2 INTEGER NOT NULL, v1 
> DOUBLE, v2 VARCHAR(1),CONSTRAINT PK PRIMARY KEY(
> k1,
> k2
> ))
> {code}
> 2. Create index table
> {code}
> CREATE INDEX IF NOT EXISTS TEST_INDEX ON TEST (k1,v1) INCLUDE (v2)
> {code}
> 3. Write Data
> {code}
> UPSERT INTO TEST (k1, k2, v1, v2) VALUES ('0', 1, null, 'a' )
> {code}
> 4. Run Index Scrutiny Tool 
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool -dt TEST -it 
> TEST_INDEX -src DATA_TABLE_SOURCE
> {code}
> The MapReduce job logs will contain the following exception:
> {code}
> 2019-03-12 05:10:51,085 INFO  [atcher event handler] impl.TaskAttemptImpl - 
> Diagnostics report from attempt_1550549758736_0287_m_00_1001: Error: 
> org.apache.phoenix.schema.IllegalDataException: java.sql.SQLException: ERROR 
> 201 (22000): Illegal data. DOUBLE may not be null
>   at 
> org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:305)
>   at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:93)
>   at org.apache.phoenix.schema.types.PDouble.toBytes(PDouble.java:86)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.getPkHash(IndexScrutinyMapper.java:370)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.buildTargetStatement(IndexScrutinyMapper.java:250)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.processBatch(IndexScrutinyMapper.java:212)
>   at 
> org.apache.phoenix.mapreduce.index.IndexScrutinyMapper.cleanup(IndexScrutinyMapper.java:185)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:149)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. DOUBLE may 
> not be null
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> {code}
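[Editor's note] For illustration, a self-contained sketch (not Phoenix source; all names here are illustrative) of why hashing the PK fails: a fixed-width DOUBLE has no byte encoding for null, so a serializer like PDouble.toBytes must throw unless the caller special-cases null first, e.g. by emitting zero-length bytes.

```java
// Illustrative sketch only -- not Phoenix code. A fixed-width DOUBLE
// serializer has no encoding for null, so the caller must guard for it.
public class NullDoubleSketch {
    static byte[] toBytes(Double v) {
        if (v == null) {
            // Mirrors "ERROR 201 (22000): Illegal data. DOUBLE may not be null"
            throw new IllegalStateException("DOUBLE may not be null");
        }
        long bits = Double.doubleToLongBits(v);
        byte[] b = new byte[8];
        for (int i = 0; i < 8; i++) {
            b[i] = (byte) (bits >>> (56 - 8 * i));
        }
        return b;
    }

    // A null-safe wrapper: encode null as zero-length bytes instead of throwing.
    static byte[] toBytesNullSafe(Double v) {
        return v == null ? new byte[0] : toBytes(v);
    }

    public static void main(String[] args) {
        System.out.println(toBytesNullSafe(null).length); // 0
        System.out.println(toBytesNullSafe(1.5).length);  // 8
    }
}
```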



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5189) Index Scrutiny Fails when data table field (type:double) is null

2019-03-11 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5189:

Summary: Index Scrutiny Fails when data table field (type:double)  is null  
(was: Index Scrutiny Fails when field of type double is null)






[jira] [Created] (PHOENIX-5189) Index Scrutiny Fails when field of type double is null

2019-03-11 Thread Kiran Kumar Maturi (JIRA)
Kiran Kumar Maturi created PHOENIX-5189:
---

 Summary: Index Scrutiny Fails when field of type double is null
 Key: PHOENIX-5189
 URL: https://issues.apache.org/jira/browse/PHOENIX-5189
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Kiran Kumar Maturi







[jira] [Updated] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-11 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5178:
---
Attachment: PHOENIX-5178_v1-addendum.patch

> SYSTEM schema is not getting cached at MetaData server
> --
>
> Key: PHOENIX-5178
> URL: https://issues.apache.org/jira/browse/PHOENIX-5178
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5178.patch, PHOENIX-5178_v1-addendum.patch, 
> PHOENIX-5178_v1.patch
>
>
> During initialization, the meta connection cannot see the SYSTEM schema,
> because the scanner at the MetaData server runs with a max_timestamp of
> MIN_SYSTEM_TABLE_TIMESTAMP (exclusive); as a result, every new connection
> re-creates the SYSTEM schema metadata.
>  





[jira] [Created] (PHOENIX-5188) IndexedKeyValue should populate KeyValue fields

2019-03-11 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created PHOENIX-5188:


 Summary: IndexedKeyValue should populate KeyValue fields
 Key: PHOENIX-5188
 URL: https://issues.apache.org/jira/browse/PHOENIX-5188
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


IndexedKeyValue subclasses the HBase KeyValue class, which has three primary 
fields: bytes, offset, and length. These fields aren't populated by 
IndexedKeyValue because it's concerned with index mutations, and has its own 
fields that its own methods use. 

However, KeyValue and its Cell interface have quite a few methods that assume 
these fields are populated, and the HBase-level factory methods generally 
ensure they're populated. Phoenix code should do the same, to maintain the 
polymorphic contract. This is important in cases like custom 
ReplicationEndpoints where HBase-level code may be iterating over WALEdits that 
contain both KeyValues and IndexedKeyValues and may need to interrogate their 
contents. 

Since the index mutation has a row key, this is straightforward. 
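[Editor's note] A minimal sketch of the polymorphic contract at stake, using simplified stand-ins rather than the real HBase KeyValue/Cell classes: if a subclass skips populating the superclass's backing fields, any caller that goes through the superclass accessors breaks.

```java
// Illustrative sketch only (simplified stand-ins, not HBase classes): a
// subclass that skips populating its superclass's backing fields breaks
// callers that use the superclass accessors.
public class ContractSketch {
    static class KeyValueLike {
        byte[] bytes;
        int offset;
        int length;

        byte[] getRowArray() { return bytes; } // assumes bytes is populated
    }

    static class IndexedLike extends KeyValueLike {
        final byte[] rowKey;

        IndexedLike(byte[] rowKey, boolean populateSuper) {
            this.rowKey = rowKey;
            if (populateSuper) {
                // Honor the polymorphic contract: fill the superclass fields too.
                this.bytes = rowKey;
                this.offset = 0;
                this.length = rowKey.length;
            }
        }
    }

    public static void main(String[] args) {
        KeyValueLike broken = new IndexedLike(new byte[] {1}, false);
        KeyValueLike fixed = new IndexedLike(new byte[] {1}, true);
        System.out.println(broken.getRowArray() == null); // true: accessor breaks
        System.out.println(fixed.getRowArray().length);   // 1
    }
}
```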





[jira] [Resolved] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-11 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-5178.

   Resolution: Fixed
Fix Version/s: 4.15.0

Committed to master and the 4.x branches. Thanks [~elserj] for the review.






[jira] [Updated] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-11 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5178:
---
Attachment: PHOENIX-5178_v1.patch






[jira] [Updated] (PHOENIX-2787) support IF EXISTS for ALTER TABLE SET options

2019-03-11 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-2787:
---
Attachment: PHOENIX-2787.patch

> support IF EXISTS for ALTER TABLE SET options
> -
>
> Key: PHOENIX-2787
> URL: https://issues.apache.org/jira/browse/PHOENIX-2787
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Vincent Poon
>Assignee: Xinyi Yan
>Priority: Trivial
> Attachments: PHOENIX-2787.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> A nice-to-have improvement to the grammar:
> ALTER TABLE my_table IF EXISTS SET options
> Currently, 'IF EXISTS' only works for dropping or adding a column.





[jira] [Updated] (PHOENIX-2787) support IF EXISTS for ALTER TABLE SET options

2019-03-11 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-2787:
---
Attachment: (was: PHOENIX-2787.patch)






Help us understand the effects of developers' personality on collaboration in OSS development

2019-03-11 Thread collab . uniba
(** Apologies for multiple emails **)

Dear Apache developer,

We are a team of researchers from the Collab research group 
(http://collab.di.uniba.it), in the Department of Computer Science at the 
University of Bari, Italy. We would be grateful if you could help us understand 
the effects of developers' different personalities when they collaborate in the 
development of OSS projects.

Being an Apache developer, you are kindly invited to take a brief personality 
test (the so-called Big Five mini-IPIP test), which only takes *3 minutes* to 
complete (we promise!):
Link: http://collab.di.uniba.it:8000/miniipip/?id=wAfsGk

For more, please read "M.B. Donnellan et al. (2006). The mini-IPIP scales: 
Tiny-yet-effective measures of the Big Five factors of personality. 
Psychological Assessment, 18, 192-203"

All survey responses will be stored on a secure server in our University data 
center. We will openly publish the results so everyone can benefit from them, 
but we will anonymize and/or present them in aggregate so that tracking answers 
back to the respondents will be impossible. By filling in and submitting the 
survey, you are providing implied consent to participate in the study. However, 
if at some point during the survey you want to stop, you are free to do so and 
your partial answers will be discarded. Should you have further privacy 
concerns, please review the details of the ethics protocol for this research 
available at http://collab.di.uniba.it/mini-IPIP/privacy or contact us by email.

We appreciate your participation. Thank you for taking the time to read this 
email.

Kind regards,
The Collab research team

[jira] [Updated] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5187:
-
Attachment: PHOENIX-5187-4.x-HBase-1.3.patch

> Avoid using FileInputStream and FileOutputStream 
> -
>
> Key: PHOENIX-5187
> URL: https://issues.apache.org/jira/browse/PHOENIX-5187
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-5187-4.x-HBase-1.3.patch
>
>
> Avoid using FileInputStream and FileOutputStream because of
> [https://bugs.openjdk.java.net/browse/JDK-8080225]
> This has been resolved in JDK 10.
> A quick workaround is to use Files.newInputStream and Files.newOutputStream.





[jira] [Updated] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5187:
-
Description: 
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

The file objects do not get cleaned up, even if we close them, until a full GC 
happens.

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.

  was:
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

The file objects doesnot get cleaned up even if we close it unless full GC 
happens

This has been resolved in jdk10

A quick workaround is to use File.newInputStream and Files.newOutputStream







[jira] [Updated] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5187:
-
Description: 
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

The file objects do not get cleaned up, even if we close them, unless a full GC 
happens.

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.

  was:
Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.







[jira] [Created] (PHOENIX-5187) Avoid using FileInputStream and FileOutputStream

2019-03-11 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5187:


 Summary: Avoid using FileInputStream and FileOutputStream 
 Key: PHOENIX-5187
 URL: https://issues.apache.org/jira/browse/PHOENIX-5187
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia
Assignee: Aman Poonia


Avoid using FileInputStream and FileOutputStream because of

[https://bugs.openjdk.java.net/browse/JDK-8080225]

This has been resolved in JDK 10.

A quick workaround is to use Files.newInputStream and Files.newOutputStream.
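[Editor's note] A minimal sketch of the suggested workaround. Note the NIO method is Files.newInputStream (java.io.File has no newInputStream); the streams returned by Files.newInputStream/newOutputStream have no finalize() method, so they avoid the GC pressure described in JDK-8080225.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamSketch {
    // Write and re-read a temp file using the finalizer-free NIO stream factories.
    static String roundTrip() throws IOException {
        Path tmp = Files.createTempFile("phoenix-sketch", ".txt");
        try (OutputStream out = Files.newOutputStream(tmp)) {
            out.write("hello".getBytes());
        }
        try (InputStream in = Files.newInputStream(tmp)) {
            return new String(in.readAllBytes());
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // prints "hello"
    }
}
```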





[jira] [Updated] (PHOENIX-5186) Remove redundant check for local in metadata client

2019-03-11 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5186:
-
Attachment: PHOENIX-5186.4.x-HBase-1.3.patch

> Remove redundant check for local in metadata client
> ---
>
> Key: PHOENIX-5186
> URL: https://issues.apache.org/jira/browse/PHOENIX-5186
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5186.4.x-HBase-1.3.patch
>
>
> Remove redundant check for local index type in metadata client
> {code:java}
> if (index.getIndexType() != IndexType.LOCAL) {
>     if (index.getIndexType() != IndexType.LOCAL) {
>         if (table.getType() != PTableType.VIEW) {
>             rowCount += updateStatisticsInternal(index.getPhysicalName(), index,
>                     updateStatisticsStmt.getProps(), true);
>         } else {
>             rowCount += updateStatisticsInternal(table.getPhysicalName(), index,
>                     updateStatisticsStmt.getProps(), true);
>         }
>     }
> }
> {code}
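[Editor's note] A self-contained sketch (simplified stand-in enums, not Phoenix types) showing that dropping the duplicated inner check leaves the behavior unchanged: the outer condition already guarantees the index is not LOCAL.

```java
// Illustrative sketch only: the inner test repeats the outer one, so
// removing it cannot change the result for any input.
public class RedundantCheckSketch {
    enum IndexType { LOCAL, GLOBAL }
    enum PTableType { VIEW, TABLE }

    // Mimics the original nested logic with the redundant inner check.
    static String pickPhysicalNameOriginal(IndexType idx, PTableType tbl) {
        if (idx != IndexType.LOCAL) {
            if (idx != IndexType.LOCAL) { // redundant: always true here
                return tbl != PTableType.VIEW ? "index" : "table";
            }
        }
        return "skip";
    }

    // Equivalent logic with the duplicate check removed.
    static String pickPhysicalNameSimplified(IndexType idx, PTableType tbl) {
        if (idx != IndexType.LOCAL) {
            return tbl != PTableType.VIEW ? "index" : "table";
        }
        return "skip";
    }

    public static void main(String[] args) {
        // Exhaustively confirm both variants agree on every input combination.
        for (IndexType i : IndexType.values()) {
            for (PTableType t : PTableType.values()) {
                if (!pickPhysicalNameOriginal(i, t).equals(pickPhysicalNameSimplified(i, t))) {
                    throw new AssertionError("variants diverge for " + i + ", " + t);
                }
            }
        }
        System.out.println("behavior identical");
    }
}
```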





[jira] [Created] (PHOENIX-5186) Remove redundant check for local in metadata client

2019-03-11 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5186:


 Summary: Remove redundant check for local in metadata client
 Key: PHOENIX-5186
 URL: https://issues.apache.org/jira/browse/PHOENIX-5186
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.1
Reporter: Aman Poonia
Assignee: Aman Poonia







[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: (was: PHOENIX-5171-master-v2.patch)

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by an incorrect next cell hint: we have skipped the rest of the
> slots whose key ranges contain all values (EVERYTHING_RANGE) in the
> ScanUtil.setKey method. The next cell hint in the current case is
> _kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be
> _kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.





[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: PHOENIX-5171-master-v2.patch






[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: PHOENIX-5169-master-v2.patch

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1
>
> Attachments: PHOENIX-5169-master-v2.patch, PHOENIX-5169-master.patch, 
> image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when
> the query logger level is OFF, which significantly hurts throughput under
> concurrent load.
> Below is a jstack snapshot taken under a concurrent query load:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  
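[Editor's note] A hypothetical sketch (illustrative names, not Phoenix's actual API) of the kind of short-circuit that avoids this cost: when the configured level is OFF, hand back one shared no-op instance instead of constructing a fresh logger for every statement.

```java
// Illustrative sketch only: short-circuit logger creation when logging is OFF
// so no per-query allocation or setup work happens on the hot path.
public class QueryLoggerSketch {
    enum LogLevel { OFF, INFO, DEBUG, TRACE }

    // One shared, immutable no-op instance reused by every query.
    static final QueryLoggerSketch NO_OP = new QueryLoggerSketch(LogLevel.OFF);

    final LogLevel level;

    private QueryLoggerSketch(LogLevel level) {
        this.level = level;
    }

    static QueryLoggerSketch create(LogLevel configured) {
        // Short-circuit: skip construction entirely when logging is OFF.
        return configured == LogLevel.OFF ? NO_OP : new QueryLoggerSketch(configured);
    }

    public static void main(String[] args) {
        System.out.println(create(LogLevel.OFF) == create(LogLevel.OFF)); // true: shared instance
        System.out.println(create(LogLevel.INFO) == NO_OP);               // false: real logger
    }
}
```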





[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: (was: PHOENIX-5169-master-2.patch)



