[jira] [Resolved] (PHOENIX-3509) Phoenix upgrade failure in specific timezone.

2016-11-28 Thread Jeongdae Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeongdae Kim resolved PHOENIX-3509.
---
Resolution: Duplicate

> Phoenix upgrade failure in specific timezone.
> -
>
> Key: PHOENIX-3509
> URL: https://issues.apache.org/jira/browse/PHOENIX-3509
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>Priority: Minor
>
> HBase only allows alphanumeric characters ([a-zA-Z_0-9-.]) in snapshot 
> names, and Phoenix builds the snapshot name from the table name, version, 
> and current time (SimpleDateFormat("yyyyMMddHHmmssZ")).
> When the timezone is ahead of GMT (e.g. GMT+9), the snapshot name includes a 
> + (plus), which causes an IllegalArgumentException from HBase like the one below.
> {quote}
> "Illegal character code:43, <+> at 53. Snapshot qualifiers can only contain 
> 'alphanumeric characters': i.e. [a-zA-Z_0-9-.]: 
> SNAPSHOT_SYSTEM.CATALOG_4.8.x_TO_4.9.0_20161129105640+0900"
> {quote}
> I think the timezone is not necessary in the snapshot name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3509) Phoenix upgrade failure in specific timezone.

2016-11-28 Thread Jeongdae Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704527#comment-15704527
 ] 

Jeongdae Kim commented on PHOENIX-3509:
---

I found the same issue, which was already fixed by PHOENIX-3432.

> Phoenix upgrade failure in specific timezone.
> -
>
> Key: PHOENIX-3509
> URL: https://issues.apache.org/jira/browse/PHOENIX-3509
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>Priority: Minor
>
> HBase only allows alphanumeric characters ([a-zA-Z_0-9-.]) in snapshot 
> names, and Phoenix builds the snapshot name from the table name, version, 
> and current time (SimpleDateFormat("yyyyMMddHHmmssZ")).
> When the timezone is ahead of GMT (e.g. GMT+9), the snapshot name includes a 
> + (plus), which causes an IllegalArgumentException from HBase like the one below.
> {quote}
> "Illegal character code:43, <+> at 53. Snapshot qualifiers can only contain 
> 'alphanumeric characters': i.e. [a-zA-Z_0-9-.]: 
> SNAPSHOT_SYSTEM.CATALOG_4.8.x_TO_4.9.0_20161129105640+0900"
> {quote}
> I think the timezone is not necessary in the snapshot name.





[jira] [Updated] (PHOENIX-3509) Phoenix upgrade failure in specific timezone.

2016-11-28 Thread Jeongdae Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeongdae Kim updated PHOENIX-3509:
--
Summary: Phoenix upgrade failure in specific timezone.  (was: Failed to 
create snapshot when upgrading Phoenix in specific timezone.)

> Phoenix upgrade failure in specific timezone.
> -
>
> Key: PHOENIX-3509
> URL: https://issues.apache.org/jira/browse/PHOENIX-3509
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>Priority: Minor
>
> HBase only allows alphanumeric characters ([a-zA-Z_0-9-.]) in snapshot 
> names, and Phoenix builds the snapshot name from the table name, version, 
> and current time (SimpleDateFormat("yyyyMMddHHmmssZ")).
> When the timezone is ahead of GMT (e.g. GMT+9), the snapshot name includes a 
> + (plus), which causes an IllegalArgumentException from HBase like the one below.
> {quote}
> "Illegal character code:43, <+> at 53. Snapshot qualifiers can only contain 
> 'alphanumeric characters': i.e. [a-zA-Z_0-9-.]: 
> SNAPSHOT_SYSTEM.CATALOG_4.8.x_TO_4.9.0_20161129105640+0900"
> {quote}
> I think the timezone is not necessary in the snapshot name.





[jira] [Created] (PHOENIX-3509) Failed to create snapshot when upgrading Phoenix in specific timezone.

2016-11-28 Thread Jeongdae Kim (JIRA)
Jeongdae Kim created PHOENIX-3509:
-

 Summary: Failed to create snapshot when upgrading Phoenix in 
specific timezone.
 Key: PHOENIX-3509
 URL: https://issues.apache.org/jira/browse/PHOENIX-3509
 Project: Phoenix
  Issue Type: Bug
Reporter: Jeongdae Kim
Assignee: Jeongdae Kim
Priority: Minor


HBase only allows alphanumeric characters ([a-zA-Z_0-9-.]) in snapshot names, 
and Phoenix builds the snapshot name from the table name, version, and current 
time (SimpleDateFormat("yyyyMMddHHmmssZ")).

When the timezone is ahead of GMT (e.g. GMT+9), the snapshot name includes a 
+ (plus), which causes an IllegalArgumentException from HBase like the one below.

{quote}
"Illegal character code:43, <+> at 53. Snapshot qualifiers can only contain 
'alphanumeric characters': i.e. [a-zA-Z_0-9-.]: 
SNAPSHOT_SYSTEM.CATALOG_4.8.x_TO_4.9.0_20161129105640+0900"
{quote}

I think the timezone is not necessary in the snapshot name.
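
The failure is reproducible with plain SimpleDateFormat. In the sketch below the class and method names are mine, not Phoenix's; it shows how the Z pattern letter injects the illegal + character into the timestamp, and how dropping Z yields a digits-only value:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SnapshotNameDemo {
    // Format a fixed instant with the given pattern in the given timezone.
    static String timestamp(String pattern, String tz) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        fmt.setTimeZone(TimeZone.getTimeZone(tz));
        // 2016-11-29T01:56:40Z, i.e. 10:56:40 in GMT+9 (the instant from the error above)
        return fmt.format(new Date(1480384600000L));
    }

    public static void main(String[] args) {
        // 'Z' emits an RFC 822 offset such as "+0900", which HBase's
        // [a-zA-Z_0-9-.] snapshot-name check rejects.
        System.out.println(timestamp("yyyyMMddHHmmssZ", "GMT+9")); // 20161129105640+0900
        // Without 'Z' the timestamp is digits only and therefore legal.
        System.out.println(timestamp("yyyyMMddHHmmss", "GMT+9"));  // 20161129105640
    }
}
```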





[jira] [Commented] (PHOENIX-3295) Remove ReplaceArrayColumnWithKeyValueColumnExpressionVisitor

2016-11-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703981#comment-15703981
 ] 

James Taylor commented on PHOENIX-3295:
---

+1. Looks great, [~tdsilva].

> Remove ReplaceArrayColumnWithKeyValueColumnExpressionVisitor 
> -
>
> Key: PHOENIX-3295
> URL: https://issues.apache.org/jira/browse/PHOENIX-3295
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-3295-v2.patch, PHOENIX-3295-v3.patch, 
> PHOENIX-3295.patch
>
>
> ReplaceArrayColumnWithKeyValueColumnExpressionVisitor is only used in one 
> place in IndexUtil.generateIndexData because we use a ValueGetter to get the 
> value of the data table column using the original data table column 
> reference. This is also why ArrayColumnExpression needs to keep track of the 
> original key value column expression. 
> If we don't replace the array column expression with the original column 
> expression, then the lookup of the column by its qualifier won't find it. 
> {code}
> ValueGetter valueGetter = new ValueGetter() {
>
>     @Override
>     public byte[] getRowKey() {
>         return dataMutation.getRow();
>     }
>
>     @Override
>     public ImmutableBytesWritable getLatestValue(ColumnReference ref) {
>         // Always return null for our empty key value, as this will cause the index
>         // maintainer to always treat this Put as a new row.
>         if (isEmptyKeyValue(table, ref)) {
>             return null;
>         }
>         byte[] family = ref.getFamily();
>         byte[] qualifier = ref.getQualifier();
>         RowMutationState rowMutationState = valuesMap.get(ptr);
>         PColumn column = null;
>         try {
>             column = table.getColumnFamily(family).getPColumnForColumnQualifier(qualifier);
>         } catch (ColumnNotFoundException e) {
>         } catch (ColumnFamilyNotFoundException e) {
>         }
>         if (rowMutationState != null && column != null) {
>             byte[] value = rowMutationState.getColumnValues().get(column);
>             ImmutableBytesPtr ptr = new ImmutableBytesPtr();
>             ptr.set(value == null ? ByteUtil.EMPTY_BYTE_ARRAY : value);
>             SchemaUtil.padData(table.getName().getString(), column, ptr);
>             return ptr;
>         }
>         return null;
>     }
> };
> {code}





[jira] [Comment Edited] (PHOENIX-3442) Support null when columns have default values for immutable tables with encoding scheme COLUMNS_STORED_IN_SINGLE_CELL

2016-11-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703943#comment-15703943
 ] 

James Taylor edited comment on PHOENIX-3442 at 11/29/16 2:37 AM:
-

Not sure I understand this logic:
{code}
+public static boolean useShortForOffsetArray(int maxoffset, int[] offsetPos) {
+    // if any of the offsets could be negative we can't use a short array
+    if (offsetPos != null) {
+        for (int i = offsetPos.length - 1; i >= 0; --i) {
+            if (offsetPos[i] < 0) {
+                return false;
+            }
+        }
+    }
+    // If the max offset is less than Short.MAX_VALUE then offset array can use short
+    if (maxoffset <= (2 * Short.MAX_VALUE)) { return true; }
     // else offset array can use Int
     return false;
 }
{code}
We currently subtract Short.MAX_VALUE from the offset so we can use all 16 bits 
of the short, but I was thinking that we'd *not* do this for immutable 
encoding. Instead we could just store the offsets as short values if the 
maxoffset <= Short.MAX_VALUE and maxoffset >= Short.MIN_VALUE without 
subtracting Short.MAX_VALUE. We'd essentially lose the one extra bit we were 
gaining before because now the sign would have significance.

If the code is shared with the array encoding, we might need to use a different 
value as the last byte (i.e. the byte reserved for the encoding format). See 
ARRAY_SERIALIZATION_VERSION and PArrayDataType.serializeHeaderInfoIntoStream(). 
In this case, we could conditionalize the code based on the encoding format 
byte.

We can also not write the separator bytes to save more space (conditionally 
based on the encoding format byte) which would tweak this code (plus probably 
code that appends/inserts an array element):
{code}
private byte[] createArrayBytes(TrustedByteArrayOutputStream byteStream, DataOutputStream oStream,
        PhoenixArray array, int noOfElements, PDataType baseType, SortOrder sortOrder,
        boolean rowKeyOrderOptimizable) {
    try {
        if (!baseType.isFixedWidth()) {
            int[] offsetPos = new int[noOfElements];
            int nulls = 0;
            for (int i = 0; i < noOfElements; i++) {
                byte[] bytes = array.toBytes(i);
                if (bytes.length == 0) {
                    offsetPos[i] = byteStream.size();
                    nulls++;
                } else {
                    nulls = serializeNulls(oStream, nulls);
                    offsetPos[i] = byteStream.size();
                    if (sortOrder == SortOrder.DESC) {
                        SortOrder.invert(bytes, 0, bytes, 0, bytes.length);
                    }
                    oStream.write(bytes, 0, bytes.length);
                    oStream.write(getSeparatorByte(rowKeyOrderOptimizable, sortOrder));
                }
{code}


was (Author: jamestaylor):
Not sure I understand this logic:
{code}
+public static boolean useShortForOffsetArray(int maxoffset, int[] offsetPos) {
+    // if any of the offsets could be negative we can't use a short array
+    if (offsetPos != null) {
+        for (int i = offsetPos.length - 1; i >= 0; --i) {
+            if (offsetPos[i] < 0) {
+                return false;
+            }
+        }
+    }
+    // If the max offset is less than Short.MAX_VALUE then offset array can use short
+    if (maxoffset <= (2 * Short.MAX_VALUE)) { return true; }
     // else offset array can use Int
     return false;
 }
{code}
We currently subtract Short.MAX_VALUE from the offset so we can use all 16 bits 
of the short, but I was thinking that we'd *not* do this for immutable 
encoding. Instead we could just store the offsets as short values if the 
maxoffset <= Short.MAX_VALUE and maxoffset >= Short.MIN_VALUE without 
subtracting Short.MAX_VALUE. We'd essentially lose the one extra bit 
we were gaining before because now the sign would have significance.

If the code is shared with the array encoding, we might need to use a different 
value as the last byte (i.e. the byte reserved for the encoding format). See 
ARRAY_SERIALIZATION_VERSION and PArrayDataType.serializeHeaderInfoIntoStream().

We can also not write the separator bytes to save more space (conditionally 
based on the encoding format byte) which would tweak this code (plus probably 
code that appends/inserts an array element):
{code}
private byte[] createArrayBytes(TrustedByteArrayOutputStream byteStream, DataOutputStream oStream,
        PhoenixArray array, int noOfElements, PDataType baseType, SortOrder sortOrder,
        boolean rowKeyOrderOptimizable) {
    try {
        if (!baseType.isFixedWidth()) {
            int[] offsetPos = new int[noOfElements];
{code}

[jira] [Commented] (PHOENIX-3442) Support null when columns have default values for immutable tables with encoding scheme COLUMNS_STORED_IN_SINGLE_CELL

2016-11-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703943#comment-15703943
 ] 

James Taylor commented on PHOENIX-3442:
---

Not sure I understand this logic:
{code}
+public static boolean useShortForOffsetArray(int maxoffset, int[] offsetPos) {
+    // if any of the offsets could be negative we can't use a short array
+    if (offsetPos != null) {
+        for (int i = offsetPos.length - 1; i >= 0; --i) {
+            if (offsetPos[i] < 0) {
+                return false;
+            }
+        }
+    }
+    // If the max offset is less than Short.MAX_VALUE then offset array can use short
+    if (maxoffset <= (2 * Short.MAX_VALUE)) { return true; }
     // else offset array can use Int
     return false;
 }
{code}
We currently subtract Short.MAX_VALUE from the offset so we can use all 16 bits 
of the short, but I was thinking that we'd *not* do this for immutable 
encoding. Instead we could just store the offsets as short values if the 
maxoffset <= Short.MAX_VALUE and maxoffset >= Short.MIN_VALUE without 
subtracting Short.MAX_VALUE. We'd essentially lose the one extra bit 
we were gaining before because now the sign would have significance.

If the code is shared with the array encoding, we might need to use a different 
value as the last byte (i.e. the byte reserved for the encoding format). See 
ARRAY_SERIALIZATION_VERSION and PArrayDataType.serializeHeaderInfoIntoStream().

We can also not write the separator bytes to save more space (conditionally 
based on the encoding format byte) which would tweak this code (plus probably 
code that appends/inserts an array element):
{code}
private byte[] createArrayBytes(TrustedByteArrayOutputStream byteStream, DataOutputStream oStream,
        PhoenixArray array, int noOfElements, PDataType baseType, SortOrder sortOrder,
        boolean rowKeyOrderOptimizable) {
    try {
        if (!baseType.isFixedWidth()) {
            int[] offsetPos = new int[noOfElements];
            int nulls = 0;
            for (int i = 0; i < noOfElements; i++) {
                byte[] bytes = array.toBytes(i);
                if (bytes.length == 0) {
                    offsetPos[i] = byteStream.size();
                    nulls++;
                } else {
                    nulls = serializeNulls(oStream, nulls);
                    offsetPos[i] = byteStream.size();
                    if (sortOrder == SortOrder.DESC) {
                        SortOrder.invert(bytes, 0, bytes, 0, bytes.length);
                    }
                    oStream.write(bytes, 0, bytes.length);
                    oStream.write(getSeparatorByte(rowKeyOrderOptimizable, sortOrder));
                }
{code}
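
The rule proposed above (use shorts only when every offset, including the negative offsets reserved for nulls with default values, fits the short range with no Short.MAX_VALUE subtraction) can be sketched as follows; the class and method names are hypothetical, not Phoenix API:

```java
public class OffsetWidthDemo {
    // Sketch: offsets may be negative (a negative offset could mark a null
    // with a default value), so shorts are usable only when every offset
    // lies within [Short.MIN_VALUE, Short.MAX_VALUE] as-is.
    static boolean useShortForOffsets(int minOffset, int maxOffset) {
        return minOffset >= Short.MIN_VALUE && maxOffset <= Short.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(useShortForOffsets(-10, 30000)); // true: both bounds fit a short
        System.out.println(useShortForOffsets(0, 40000));   // false: 40000 > Short.MAX_VALUE (32767)
    }
}
```

The cost of this scheme is losing the extra bit the Short.MAX_VALUE subtraction used to buy, since the sign now carries meaning.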

> Support null when columns have default values  for immutable tables with 
> encoding scheme COLUMNS_STORED_IN_SINGLE_CELL
> --
>
> Key: PHOENIX-3442
> URL: https://issues.apache.org/jira/browse/PHOENIX-3442
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-3442.patch
>
>
> Comments from [~jamestaylor]: 
> The way we differentiate a null value now is by the value being an empty byte 
> array (explicitly set to null) versus not being present (in which case we use 
> the default value).
> This is encapsulated in the DefaultValueExpression.
> We'll need to tweak our encoding for this.
> One way would be to use a negative number for the offset.





[jira] [Comment Edited] (PHOENIX-3479) Modulo support for non integer-types

2016-11-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703577#comment-15703577
 ] 

James Taylor edited comment on PHOENIX-3479 at 11/28/16 11:42 PM:
--

[~kliew] - thanks for the patch. Do you have a particular user/use case that 
would need this? I've never seen someone ask for this. Most databases 
(Postgres, SQL Server) don't support modulo for doubles and floats (because it 
doesn't make much sense). I guess it'd be fine for decimal.

Also, never add/remove enum values in the middle of the ExpressionType enum as 
it will wreak havoc when an older/existing Phoenix client connects to a new 
Phoenix server since the expression type is communicated through their ordinal 
value in the ExpressionType enum.


was (Author: jamestaylor):
[~kliew] - thanks for the patch. Do you have a particular user/use case that 
would need this? I've never seen someone ask for this. Most databases 
(Postgres, SQL Server) don't support modulo for doubles and floats (because it 
doesn't make much sense). I guess it'd be fine for decimal.

> Modulo support for non integer-types
> 
>
> Key: PHOENIX-3479
> URL: https://issues.apache.org/jira/browse/PHOENIX-3479
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
> Attachments: PHOENIX-3479.2.patch, PHOENIX-3479.3.patch, 
> PHOENIX-3479.patch
>
>
> MOD should be applicable to all numeric types





[jira] [Commented] (PHOENIX-3479) Modulo support for non integer-types

2016-11-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703577#comment-15703577
 ] 

James Taylor commented on PHOENIX-3479:
---

[~kliew] - thanks for the patch. Do you have a particular user/use case that 
would need this? I've never seen someone ask for this. Most databases 
(Postgres, SQL Server) don't support modulo for doubles and floats (because it 
doesn't make much sense). I guess it'd be fine for decimal.
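
For the DECIMAL case, java.math.BigDecimal already provides an exact modulo via remainder(), which is presumably what such support would delegate to; this is just an illustration of the semantics, not Phoenix code:

```java
import java.math.BigDecimal;

public class DecimalModDemo {
    public static void main(String[] args) {
        // remainder() is exact for decimals, unlike % on double, which is
        // subject to binary floating-point rounding.
        BigDecimal r = new BigDecimal("10.5").remainder(new BigDecimal("3"));
        System.out.println(r); // 1.5
    }
}
```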

> Modulo support for non integer-types
> 
>
> Key: PHOENIX-3479
> URL: https://issues.apache.org/jira/browse/PHOENIX-3479
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Minor
> Attachments: PHOENIX-3479.2.patch, PHOENIX-3479.3.patch, 
> PHOENIX-3479.patch
>
>
> MOD should be applicable to all numeric types





[jira] [Issue Comment Deleted] (PHOENIX-541) Make mutable batch size bytes-based instead of row-based

2016-11-28 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated PHOENIX-541:
---
Comment: was deleted

(was: Is it possible to configure batches in terms of bytes and not row counts?)

> Make mutable batch size bytes-based instead of row-based
> 
>
> Key: PHOENIX-541
> URL: https://issues.apache.org/jira/browse/PHOENIX-541
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 3.0-Release
>Reporter: mujtaba
>  Labels: newbie
>
> With the current configuration of row-count-based mutable batch size, the 
> ideal batch size is around 800 rather than the current 15k when creating 
> indexes, based on memory consumption, CPU and GC (data size: key ~60 bytes, 
> 14 integer columns in separate CFs)





[jira] [Commented] (PHOENIX-541) Make mutable batch size bytes-based instead of row-based

2016-11-28 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703440#comment-15703440
 ] 

churro morales commented on PHOENIX-541:


Is it possible to do this in terms of bytes and not row counts?  

> Make mutable batch size bytes-based instead of row-based
> 
>
> Key: PHOENIX-541
> URL: https://issues.apache.org/jira/browse/PHOENIX-541
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 3.0-Release
>Reporter: mujtaba
>  Labels: newbie
>
> With the current configuration of row-count-based mutable batch size, the 
> ideal batch size is around 800 rather than the current 15k when creating 
> indexes, based on memory consumption, CPU and GC (data size: key ~60 bytes, 
> 14 integer columns in separate CFs)





[jira] [Comment Edited] (PHOENIX-541) Make mutable batch size bytes-based instead of row-based

2016-11-28 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703440#comment-15703440
 ] 

churro morales edited comment on PHOENIX-541 at 11/28/16 11:08 PM:
---

Is it possible to configure batches in terms of bytes and not row counts?


was (Author: churromorales):
Is it possible to do this in terms of bytes and not row counts?  

> Make mutable batch size bytes-based instead of row-based
> 
>
> Key: PHOENIX-541
> URL: https://issues.apache.org/jira/browse/PHOENIX-541
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 3.0-Release
>Reporter: mujtaba
>  Labels: newbie
>
> With the current configuration of row-count-based mutable batch size, the 
> ideal batch size is around 800 rather than the current 15k when creating 
> indexes, based on memory consumption, CPU and GC (data size: key ~60 bytes, 
> 14 integer columns in separate CFs)
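
A bytes-based batch size like the one requested here could work as in the following sketch (names and threshold are illustrative, not Phoenix configuration): accumulate per-row byte sizes and flush whenever the pending payload crosses the threshold.

```java
import java.util.ArrayList;
import java.util.List;

public class ByteBatchDemo {
    // Return the number of rows sent in each flush when batching by bytes.
    static List<Integer> batchSizes(int[] rowBytes, long maxBatchBytes) {
        List<Integer> flushes = new ArrayList<>();
        long pendingBytes = 0;
        int pendingRows = 0;
        for (int size : rowBytes) {
            pendingBytes += size;
            pendingRows++;
            if (pendingBytes >= maxBatchBytes) { // threshold reached: flush
                flushes.add(pendingRows);
                pendingBytes = 0;
                pendingRows = 0;
            }
        }
        if (pendingRows > 0) flushes.add(pendingRows); // final partial batch
        return flushes;
    }

    public static void main(String[] args) {
        // Rows of varying size, 400-byte batch limit: flushes after 3 rows, then 2.
        System.out.println(batchSizes(new int[]{100, 100, 300, 50, 50}, 400)); // [3, 2]
    }
}
```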





[jira] [Commented] (PHOENIX-3488) Support COUNT(DISTINCT x) in Phoenix-Calcite Integration

2016-11-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703360#comment-15703360
 ] 

ASF GitHub Bot commented on PHOENIX-3488:
-

GitHub user lomoree opened a pull request:

https://github.com/apache/phoenix/pull/223

PHOENIX-3488 Support COUNT(DISTINCT x) in Phoenix-Calcite Integration



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/phoenix countdistinct

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/223.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #223


commit f3feb0ac314b36a99faf3684b43c7e2fde0d44d8
Author: Eric 
Date:   2016-11-16T19:48:18Z

Support COUNT(DISTINCT x)




> Support COUNT(DISTINCT x) in Phoenix-Calcite Integration
> 
>
> Key: PHOENIX-3488
> URL: https://issues.apache.org/jira/browse/PHOENIX-3488
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Eric Lomore
>Assignee: Eric Lomore
>  Labels: calcite
>






[GitHub] phoenix pull request #223: PHOENIX-3488 Support COUNT(DISTINCT x) in Phoenix...

2016-11-28 Thread lomoree
GitHub user lomoree opened a pull request:

https://github.com/apache/phoenix/pull/223

PHOENIX-3488 Support COUNT(DISTINCT x) in Phoenix-Calcite Integration



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/phoenix countdistinct

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/223.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #223


commit f3feb0ac314b36a99faf3684b43c7e2fde0d44d8
Author: Eric 
Date:   2016-11-16T19:48:18Z

Support COUNT(DISTINCT x)






[jira] [Assigned] (PHOENIX-3445) Add a CREATE IMMUTABLE TABLE construct to make immutable tables more explicit

2016-11-28 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-3445:
---

Assignee: Thomas D'Silva

> Add a CREATE IMMUTABLE TABLE construct to make immutable tables more explicit
> -
>
> Key: PHOENIX-3445
> URL: https://issues.apache.org/jira/browse/PHOENIX-3445
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Samarth Jain
>Assignee: Thomas D'Silva
>



