[jira] [Created] (PHOENIX-6956) HBase regionserver process with Phoenix crashes (JVM SIGSEGV)

2023-05-15 Thread Jepson (Jira)
Jepson created PHOENIX-6956:
---

 Summary: HBase regionserver process with Phoenix crashes (JVM SIGSEGV)
 Key: PHOENIX-6956
 URL: https://issues.apache.org/jira/browse/PHOENIX-6956
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.2
 Environment: hbase: 2.2.2

phoenix: hbase-2.2-phoenix-5.1.2
Reporter: Jepson


The HBase regionserver process with the Phoenix coprocessor crashes with a JVM fatal error (SIGSEGV).

 

[hbase@hadoop62 ~]$ *more hs_err_pid97203.log* 
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7fc60858ad3e, pid=97203, tid=0x7fbd8291a700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_241-b07) (build 1.8.0_241-b07)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.241-b07 mixed mode linux-amd64)
# Problematic frame:
{color:#ff8b00}*# V  [libjvm.so+0x7ddd3e]*{color}
#
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#

---  T H R E A D  ---

Current thread (0x7fc60329a000):  JavaThread "RpcServer.default.RWQ.Fifo.write.handler=72,queue=0,port=16020" daemon [_thread_in_vm, id=98127, stack(0x7fbd8281a000,0x7fbd8291b000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 2 (SEGV_ACCERR), si_addr: 0x7fbcc3a9

Registers:
RAX=0x7fbcc3ad11de, RBX=0x7fc60329a000, RCX=0x7fbe36e17c40, RDX=0x7dc4
RSP=0x7fbd82918f88, RBP=0x7fbd82918fd0, RSI=0x, RDI=0x7fbcc39d11e6
R8 =0x0008, R9 =0x00f00018, R10=0x7fc5f16d63a7, R11=0x7fc5f16d6358
R12=0x0010, R13=0x7fbd82919000, R14=0x00f00018, R15=0x
RIP=0x7fc60858ad3e, EFLAGS=0x00010282, CSGSFS=0x0033, ERR=0x0004
  TRAPNO=0x000e

Instructions: (pc=0x7fc60858ad3e)
0x7fc60858ad2e:   f0 48 89 74 d1 f0 48 8b 74 d0 f8 48 89 74 d1 f8
0x7fc60858ad3e:   48 8b 34 d0 48 89 34 d1 48 83 c2 04 7e d4 48 83
0x7fc60858ad4e:   ea 04 7c 93 eb a1 49 f7 c0 01 00 00 00 74 0c 66

Register to memory mapping:

RAX=0x7fbcc3ad11de is pointing into the stack for thread: 0x7fc603d65000
RBX=0x7fc60329a000 is a thread
RCX=
[error occurred during error reporting (printing register info), id 0xb]

Stack: [0x7fbd8281a000,0x7fbd8291b000],  sp=0x7fbd82918f88,  free space=1019k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x7ddd3e]
J 3060  sun.misc.Unsafe.*{color:#ff8b00}copyMemory{color}*(Ljava/lang/Object;JLjava/lang/Object;JJ)V (0 bytes) @ 0x7fc5f16d6421 [0x7fc5f16d6340+0xe1]
j  org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36
j  org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V+69
J 20259 C2 org.apache.phoenix.coprocessor.GlobalIndexRegionScanner.apply(Lorg/apache/hadoop/hbase/client/Put;Lorg/apache/hadoop/hbase/client/Put;)V (95 bytes) @ 0x7fc5f4539414 [0x7fc5f4538d00+0x714]
J 32073 C2 org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(Lorg/apache/hadoop/hbase/coprocessor/ObserverContext;Lorg/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress;)V (31 bytes) @ 0x7fc5f5e8c8ac [0x7fc5f5e8a6e0+0x21cc]
J 31164 C2 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(Lorg/apache/hadoop/hbase/regionserver/HRegion$BatchOperation;)V (500 bytes) @ 0x7fc5f4ec0e58 [0x7fc5f4ec0640+0x818]
J 31154 C2 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(Lorg/apache/hadoop/hbase/regionserver/HRegion$BatchOperation;)[Lorg/apache/hadoop/hbase/regionserver/OperationStatus; (171 bytes) @ 0x7fc5f5968218 [0x7fc5f5967ea0+0x378]
J 31155 C2 org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;Lorg/apache/hadoop/hbase/regionserver/HRegion;Lorg/apache/hadoop/hbase/quotas/OperationQuota;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;Lorg/apache/hadoop/hbase/quotas/ActivePolicyEnforcement;Z)V (646 bytes) @ 0x7fc5f53fa1d4 [0x7fc5f53f9620+0xbb4]
J 20397 C2 org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(Lorg/apache/hadoop/hbase/regionserver/HRegion;Lorg/apache/hadoop/hbase/quotas/OperationQuota;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionAction;Lorg/apache/hadoop/hbase/CellScanner;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;Ljava/util/List;JLorg/apache/hadoop/hbas
[jira] [Created] (PHOENIX-5191) commit throws java.util.ConcurrentModificationException

2019-03-12 Thread Jepson (JIRA)
Jepson created PHOENIX-5191:
---

 Summary: commit throws java.util.ConcurrentModificationException
 Key: PHOENIX-5191
 URL: https://issues.apache.org/jira/browse/PHOENIX-5191
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0, 4.12.0, 4.11.0, 4.10.0
Reporter: Jepson


{code:java}
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
at org.apache.phoenix.execute.MutationState.generateMutations(MutationState.java:645)
at org.apache.phoenix.execute.MutationState.addRowMutations(MutationState.java:519)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1005)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1514)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1337)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:683)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:679)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:679)
at com.jiuye.mcp.agent.runner.impl.PhoenixTargetRunnerImpl.cronScheduleMessageBatchCommit(PhoenixTargetRunnerImpl.java:414)
at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:65)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call$$$capture(Executors.java:511)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-03-13 10:15:15.060 [pool-2-thread-1] INFO org.apache.phoenix.execute.MutationState - Abort successful
2019-03-13 10:15:15.061 [pool-2-thread-1] ERROR c.j.mcp.agent.runner.impl.PhoenixTargetRunnerImpl - cronScheduleMessageBatchCommit:java.util.ConcurrentModificationException{code}
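
For context, a minimal sketch of the failure mode this trace suggests (the JDBC URL and table are hypothetical, not from this report): PhoenixConnection is not thread-safe, so a scheduled commit that runs while another thread is still upserting through the same connection can hit MutationState's internal HashMap mid-iteration.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SharedConnectionCommitSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical quorum and table, for illustration only.
        Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181:/hbase");
        conn.setAutoCommit(false);

        Thread writer = new Thread(() -> {
            try (PreparedStatement ps =
                         conn.prepareStatement("UPSERT INTO T(ID, NAME) VALUES(?, ?)")) {
                for (int i = 0; i < 1_000_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.executeUpdate(); // adds rows to MutationState's internal map
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        writer.start();

        // Cron-style batch commit on a second thread, like
        // cronScheduleMessageBatchCommit above: commit() iterates the same map
        // the writer is still mutating, which can throw
        // java.util.ConcurrentModificationException.
        while (writer.isAlive()) {
            Thread.sleep(1000);
            conn.commit();
        }
    }
}
{code}

Serializing commit and upsert on one connection, or giving each thread its own connection, avoids the race.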





[jira] [Commented] (PHOENIX-1718) Unable to find cached index metadata during the stability test with phoenix

2018-06-28 Thread Jepson (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526114#comment-16526114
 ] 

Jepson commented on PHOENIX-1718:
-

[~jamestaylor] Can you share the detailed configuration?
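
For later readers: the knob usually associated with this error is the regionserver-side cache TTL. A hedged hbase-site.xml sketch (the 10-minute value is illustrative, not from this thread):

{code:java}
<!-- How long a regionserver keeps cached index metadata alive; raising it
     helps when long-running upsert batches outlive the default TTL. -->
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>600000</value>
</property>
{code}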

> Unable to find cached index metadata during the stability test with phoenix
> --
>
> Key: PHOENIX-1718
> URL: https://issues.apache.org/jira/browse/PHOENIX-1718
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
> Environment: linux os ( 128G ram,48T disk,24 cores) * 8
> Hadoop 2.5.1
> HBase 0.98.7
> Phoenix 4.2.1
>Reporter: wuchengzhi
>Priority: Critical
> Attachments: hbase-hadoop-regionserver-cluster-node134 .zip
>
>
> I am running a stability test with Phoenix 4.2.1, but the regionserver became 
> very slow after 4 hours, and I found some error logs in the regionserver log 
> file.
> In this scenario the cluster has 8 machines (128G ram, 24 cores, 48T disk); I 
> set up 2 regionservers on each machine (16 regionservers in total).
> 1. create 8 tables, each table contains an index from TEST_USER0 to 
> TEST_USER7.
> create table TEST_USER0 (id varchar primary key , attr1 varchar, attr2 
> varchar,attr3 varchar,attr4 varchar,attr5 varchar,attr6 integer,attr7 
> integer,attr8 integer,attr9 integer,attr10 integer )  
> DATA_BLOCK_ENCODING='FAST_DIFF',VERSIONS=1,BLOOMFILTER='ROW',COMPRESSION='LZ4',BLOCKSIZE
>  = '65536',SALT_BUCKETS=32;
> create local index TEST_USER_INDEX0 on 
> TEST5.TEST_USER0(attr1,attr2,attr3,attr4,attr5,attr6,attr7,attr8,attr9,attr10);
> 
> 2. Deploy a Phoenix client on each machine to upsert data into the tables 
> (client 1 upserts into TEST_USER0, client 2 upserts into TEST_USER1, and so on).
> Each Phoenix client starts 6 threads; each thread upserts 10,000 rows per 
> batch (see the sketch after the quoted log below) and will upsert 
> 500,000,000 rows in total.
> All 8 clients ran at the same time.
> The log is below. Running 4 hours later, there were about 1,000,000,000 rows 
> in HBase; the error occurred frequently at about 4 hours 50 minutes of 
> running, and the rps became very slow, less than 10,000 (7, in normal).
> 2015-03-09 19:15:13,337 ERROR 
> [B.DefaultRpcServer.handler=2,queue=2,port=60022] parallel.BaseTaskRunner: 
> Found a failed task because: org.apache.hadoop.hbase.DoNotRetryIOException: 
> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. 
>  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
> at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
> at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
> at org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
> at org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
> at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
> at org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
> at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:881)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1522)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1597)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1554)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:877)
> at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2476)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2263)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2215)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2219)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4376)
> at
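
A minimal Java sketch of the batched-upsert client the report describes (schema from the DDL above; 10,000 rows per commit as stated; the JDBC URL is hypothetical):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchUpsertClient {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zk-host:2181:/hbase")) {
            conn.setAutoCommit(false); // commit in batches by hand
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPSERT INTO TEST5.TEST_USER0(ID, ATTR1, ATTR6) VALUES(?, ?, ?)")) {
                for (long i = 0; i < 500_000_000L; i++) {
                    ps.setString(1, "id-" + i);
                    ps.setString(2, "attr-" + i);
                    ps.setInt(3, (int) (i % 1000));
                    ps.executeUpdate();
                    if (i % 10_000 == 0) {
                        conn.commit(); // one batch of 10,000 rows per commit
                    }
                }
                conn.commit(); // flush the final partial batch
            }
        }
    }
}
{code}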

[jira] [Commented] (PHOENIX-4649) Phoenix UPSERT..SELECT is not working for a long-running query, but the same query with a LIMIT clause works fine

2018-03-14 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399778#comment-16399778
 ] 

Jepson commented on PHOENIX-4649:
-

*hdfs-site.xml:*
{code:java}
<property>
  <name>dfs.client.socket-timeout</name>
  <value>180</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <value>180</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>180</value>
</property>
{code}

Try it.



 

> Phoenix UPSERT..SELECT is not working for a long-running query, but the same 
> query with a LIMIT clause works fine
> -
>
> Key: PHOENIX-4649
> URL: https://issues.apache.org/jira/browse/PHOENIX-4649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Nitin D Chunke
>Priority: Major
>  Labels: patch
> Fix For: 5.0.0-alpha
>
> Attachments: image-2018-03-11-00-16-14-968.png, 
> image-2018-03-11-00-22-32-613.png
>
>
> We have about 3 million records in table A, and from this table we upsert 
> data into table B without a LIMIT clause.
> (Note: we already export the HBase conf path and set all the Phoenix 
> properties to the mentioned values, and both tables are salted.)
> Please find the following hbase-site.xml screenshot:
> !image-2018-03-11-00-22-32-613.png!
> But the HBase conf was not picked up when we launched sqlline/psql.
> Please refer to the following screenshot:
> !image-2018-03-11-00-16-14-968.png!
> And when we ran the same query with a LIMIT clause set higher than the total 
> number of records present in table A, it ran absolutely fine without giving 
> any error.
>  
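
Regarding the quoted "hbase conf was not picked up" point: a hedged Java sketch for checking which values the client JVM actually resolves (assumes the *-site.xml files are on the classpath, e.g. via HBASE_CONF_DIR; the property names are the ones from the comment above):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfCheck {
    public static void main(String[] args) {
        // HBaseConfiguration.create() loads hbase-default.xml and hbase-site.xml
        // from the classpath; a null below means the client never saw the property.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("dfs.client.socket-timeout = "
                + conf.get("dfs.client.socket-timeout"));
        System.out.println("dfs.datanode.socket.write.timeout = "
                + conf.get("dfs.datanode.socket.write.timeout"));
    }
}
{code}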





[jira] [Updated] (PHOENIX-4652) Dropping a column that is part of an index drops the whole index table

2018-03-12 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4652:

Description: 
1. When a column that is part of an index is dropped, the whole index table is 
dropped.
I would expect only the column to be dropped from the index table, not the 
index table itself.
{code:java}
CREATE TABLE JYDW.TEST(
ID INTEGER primary key,
NAME VARCHAR(128),
AGE INTEGER,
CREDATE DATE,
CRETIME TIMESTAMP
)SALT_BUCKETS = 12, COMPRESSION='SNAPPY';


CREATE INDEX TEST_IDX ON
 JYDW.TEST(
    NAME,
    AGE
 );

alter table JYDW.TEST drop column name;
alter table JYDW.TEST add name varchar(256);
{code}
*When the NAME column is dropped, the whole TEST_IDX index table is dropped.*
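
Until that behavior changes, a hedged workaround sketch (JDBC URL hypothetical; table and index taken from the example above): drop and rebuild the index explicitly around the column change, so nothing disappears by surprise.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RecreateIndexWorkaround {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zk-host:2181:/hbase");
             Statement st = conn.createStatement()) {
            st.execute("DROP INDEX TEST_IDX ON JYDW.TEST");       // drop the index on purpose
            st.execute("ALTER TABLE JYDW.TEST DROP COLUMN NAME"); // now only the column goes
            st.execute("ALTER TABLE JYDW.TEST ADD NAME VARCHAR(256)");
            st.execute("CREATE INDEX TEST_IDX ON JYDW.TEST(NAME, AGE)"); // rebuild the index
        }
    }
}
{code}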

2.Phoenix log:

18/03/13 10:44:51 INFO client.HBaseAdmin: Started disable of JYDW:TEST_IDX
 18/03/13 10:44:54 INFO client.HBaseAdmin: Disabled JYDW:TEST_IDX
 18/03/13 10:45:42 INFO client.HBaseAdmin: Deleted JYDW:TEST_IDX
 18/03/13 10:46:03 INFO ConnectionPool.RawMaxwellConnectionPool: 
RawMaxwellConnectionPool: Destroyed connection

3. I want to modify the NAME column, but the ALTER ... MODIFY SQL syntax is not 
supported, so I have to drop the column first and then add it back.

4. Reference:
 https://issues.apache.org/jira/browse/PHOENIX-4651





[jira] [Created] (PHOENIX-4652) Dropping a column that is part of an index drops the whole index table

2018-03-12 Thread Jepson (JIRA)
Jepson created PHOENIX-4652:
---

 Summary: Dropping a column that is part of an index drops the whole index table
 Key: PHOENIX-4652
 URL: https://issues.apache.org/jira/browse/PHOENIX-4652
 Project: Phoenix
  Issue Type: Wish
Affects Versions: 4.10.0
Reporter: Jepson




[jira] [Updated] (PHOENIX-4651) alter table ... modify column is not supported

2018-03-12 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4651:

Description: 
Modifying a column's type length is very inconvenient: you must drop the column 
first, then add it back.

Such as:

alter table jydw.test drop column name;
alter table jydw.test add name varchar(256);

The ALTER TABLE ... MODIFY COLUMN SQL syntax is not supported.

 

 





[jira] [Created] (PHOENIX-4651) alter table ... modify column is not supported

2018-03-12 Thread Jepson (JIRA)
Jepson created PHOENIX-4651:
---

 Summary: alter table ... modify column is not supported
 Key: PHOENIX-4651
 URL: https://issues.apache.org/jira/browse/PHOENIX-4651
 Project: Phoenix
  Issue Type: New Feature
Affects Versions: 4.10.0
Reporter: Jepson




[jira] [Updated] (PHOENIX-4629) date/datetime/timestamp with timezone issue

2018-03-12 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4629:

Summary: date/datetime/timestamp with timezone issue  (was: timestamp with 
timezone issue)

> date/datetime/timestamp with timezone issue
> ---
>
> Key: PHOENIX-4629
> URL: https://issues.apache.org/jira/browse/PHOENIX-4629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10-hbase1.2
>Reporter: Jepson
>Priority: Major
>  Labels: patch
> Fix For: 4.10.0
>
> Attachments: Phoenix-4629-v2.patch, Phoenix-4629.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> *1.Create timezonetest table:*
> {code:java}
> CREATE TABLE JYDW.timezonetest (
> id bigint(11) not null primary key,
> date_c date ,
> datetime_c timestamp ,
> timestamp_c timestamp
> )SALT_BUCKETS = 12, COMPRESSION='SNAPPY';{code}
> *2.Create TimestampTest.java*
> {code:java}
> package org.apache.phoenix.jdbc;
> import org.apache.phoenix.query.BaseConnectionlessQueryTest;
> import org.apache.phoenix.query.QueryServices;
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.util.Properties;
> /**
>  * Created by Jepson on 2017/11/2.
>  *
>  CREATE TABLE JYDW.timezonetest (
>  id bigint(11) not null primary key,
>  date_c date ,
>  datetime_c timestamp ,
>  timestamp_c timestamp
>  )SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
>  */
> public class TimestampTest extends BaseConnectionlessQueryTest {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         // props.setProperty(QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB, "Asia/Shanghai");
>         String url = "jdbc:phoenix:192.168.117.137,192.168.117.138,192.168.117.140,192.168.117.141,192.168.117.142:2181:/hbase";
>         // Connection conn = DriverManager.getConnection(url, props);
>         Connection conn = DriverManager.getConnection(url);
>         conn.createStatement().execute("UPSERT INTO jydw.TIMEZONETEST(id,date_c,datetime_c,timestamp_c) \n" +
>                 "values(101,'2018-02-25','2018-02-25 00:00:00','2018-02-25 10:00:00')");
>         conn.commit();
>         ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM TIMEZONETEST");
>         while (rs.next()) {
>             System.out.println(rs.getString("id") + " : " + rs.getString("date_c") + " : "
>                     + rs.getString("datetime_c") + " : " + rs.getString("timestamp_c"));
>         }
>         rs.close();
>         conn.close();
>     }
> }
> {code}
> *3. Run TimestampTest.java; the console prints:*
>  *id : date_c : datetime_c : timestamp_c*
>  101 : 2018-02-24 16:00:00.000 : 2018-02-24 16:00:00.000 : 2018-02-25 02:00:00.000
>  100 : 2018-02-24 16:00:00.000 : 2018-02-24 16:00:00.000 : 2018-02-25 02:00:00.000
> *{color:#ff0000}The values are shifted back by 8 hours, which is wrong.{color}*
> *4. Referenced these, but they did not work:*
> https://issues.apache.org/jira/browse/PHOENIX-997
> https://issues.apache.org/jira/browse/PHOENIX-1485
> 5. Modify DateUtil.java:
> {code:java}
> public static final String DEFAULT_TIME_ZONE_ID = "GMT";
> public static final String LOCAL_TIME_ZONE_ID = "LOCAL";{code}
> *Changed:*
> {code:java}
> public static final String DEFAULT_TIME_ZONE_ID = "Asia/Shanghai";
> public static final String LOCAL_TIME_ZONE_ID = "Asia/Shanghai";
> {code}
> -
> {code:java}
> private final DateTimeFormatter formatter = 
> ISO_DATE_TIME_FORMATTER.withZone(DateTimeZone.forID("UTC"));{code}
> *Changed:*
> {code:java}
> private final DateTimeFormatter formatter = 
> ISO_DATE_TIME_FORMATTER.withZone(DateTimeZone.forID("Asia/Shanghai"));
> {code}
>  
> 6. Run *TimestampTest.java* again; the result is correct:
>  *id : date_c : datetime_c : timestamp_c*
>  101 : 2018-02-25 00:00:00.000 : 2018-02-25 00:00:00.000 : 2018-02-25 10:00:00.000
>  100 : 2018-02-25 00:00:00.000 : 2018-02-25 00:00:00.000 : 2018-02-25 10:00:00.000





[jira] [Commented] (PHOENIX-4629) timestamp with timezone issue

2018-03-12 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395049#comment-16395049
 ] 

Jepson commented on PHOENIX-4629:
-

*1. Phoenix-4629.patch:*

Changes the time zone to the fixed value "Asia/Shanghai".

*2. Phoenix-4629-v2.patch:*

2.1. Phoenix SELECT SQL with time zone: uses the parameter 
"phoenix.query.dateFormatTimeZone".

2.2. Phoenix UPSERT SQL with time zone: the code uses 
"DateTimeZone.getDefault()", whose value comes from the system time zone.

CentOS 6/7:

[root@hadoop38 ~]# ll /etc/localtime 
lrwxrwxrwx 1 root root 33 May 25 2017 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai
[root@hadoop38 ~]#
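
For comparison, the per-connection route from 2.1 (the line the test in this issue leaves commented out) would look like this hedged sketch; the quorum string is illustrative:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeZoneConnectionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB resolves to this key:
        props.setProperty("phoenix.query.dateFormatTimeZone", "Asia/Shanghai");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:zk-host:2181:/hbase", props)) {
            // Date parsing/formatting on this connection now uses Asia/Shanghai
            // instead of the GMT default, with no patched jar required.
        }
    }
}
{code}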

 

 

 

 



[jira] [Updated] (PHOENIX-4629) timestamp with timezone issue

2018-03-12 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4629:

Attachment: Phoenix-4629-v2.patch



[jira] [Comment Edited] (PHOENIX-4319) Zookeeper connection should be closed immediately

2018-03-06 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387124#comment-16387124
 ] 

Jepson edited comment on PHOENIX-4319 at 3/6/18 8:32 AM:
-

*Reference:* https://issues.apache.org/jira/browse/PHOENIX-4489

I have compiled and tested it, and it works.


was (Author: 1028344...@qq.com):
*Reference:* https://issues.apache.org/jira/browse/PHOENIX-4489

> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>Priority: Major
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> *Zookeeper connections:*
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]





[jira] [Commented] (PHOENIX-4489) HBase Connection leak in Phoenix MR Jobs

2018-03-06 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387441#comment-16387441
 ] 

Jepson commented on PHOENIX-4489:
-

[~karanmehta93] Very nice, I have compiled and tested it, and it works.

> HBase Connection leak in Phoenix MR Jobs
> 
>
> Key: PHOENIX-4489
> URL: https://issues.apache.org/jira/browse/PHOENIX-4489
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4489.001.patch, PHOENIX-4489.002.patch, 
> PHOENIX-4489.4.x-HBase-0.98.001.patch
>
>
> Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
> splits and the parallelism of the work. The class directly opens an HBase 
> connection, which is not closed after use. Independently running MR jobs 
> should not be affected, but jobs that run through Phoenix-Spark can leak 
> connections if it is left unclosed (since those jobs run as part of the 
> same JVM). 
> Apart from this, the connection should be instantiated with 
> {{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. 
> It can be useful if a separate client is trying to run jobs and wants to 
> provide a custom implementation of {{HConnection}}. 
> [~jmahonin] Any ideas?
> [~jamestaylor] [~vincentpoon] Any concerns around this?
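
A hedged sketch of the close-the-connection pattern described above (not the actual patch; the helper is hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class SplitCalculationSketch {
    // Count a table's regions, closing the HBase connection (and with it the
    // ZooKeeper session) when done instead of leaking it.
    static int countRegions(String table) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
            return locator.getAllRegionLocations().size();
        }
    }
}
{code}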





[jira] [Commented] (PHOENIX-4319) Zookeeper connection should be closed immediately

2018-03-05 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387124#comment-16387124
 ] 

Jepson commented on PHOENIX-4319:
-

*Reference:* https://issues.apache.org/jira/browse/PHOENIX-4489



[jira] [Commented] (PHOENIX-4629) timestamp with timezone issue

2018-02-26 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377979#comment-16377979
 ] 

Jepson commented on PHOENIX-4629:
-

Adding the parameter to hbase-site.xml does not work:
{code:java}
<property>
  <name>phoenix.query.dateFormatTimeZone</name>
  <value>Asia/Shanghai</value>
</property>
{code}




[jira] [Commented] (PHOENIX-4629) timestamp with timezone issue

2018-02-26 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377969#comment-16377969
 ] 

Jepson commented on PHOENIX-4629:
-

Compiled the jar; the test is also OK.



[jira] [Updated] (PHOENIX-4629) timestamp with timezone issue

2018-02-26 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4629:

Attachment: Phoenix-4629.patch



[jira] [Updated] (PHOENIX-4629) timestamp with timezone issue

2018-02-26 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4629:

External issue ID: PHOENIX-3221 PHOENIX-997  /PHOENIX-1485  (was: 
PHOENIX-3221 )



[jira] [Updated] (PHOENIX-4629) timestamp with timezone issue

2018-02-26 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4629:

External issue ID: PHOENIX-3221 PHOENIX-997  PHOENIX-1485  (was: 
PHOENIX-3221 PHOENIX-997  /PHOENIX-1485)



[jira] [Updated] (PHOENIX-4629) timestamp with timezone issue

2018-02-26 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4629:

External issue ID: PHOENIX-3221   (was: 3221)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4629) timestamp with timezone issue

2018-02-26 Thread Jepson (JIRA)
Jepson created PHOENIX-4629:
---

 Summary: timestamp with timezone issue
 Key: PHOENIX-4629
 URL: https://issues.apache.org/jira/browse/PHOENIX-4629
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
 Environment: phoenix4.10-hbase1.2
Reporter: Jepson


*1.Create timezonetest table:*
{code:java}
CREATE TABLE JYDW.timezonetest (
id bigint(11) not null primary key,
date_c date ,
datetime_c timestamp ,
timestamp_c timestamp
)SALT_BUCKETS = 12, COMPRESSION='SNAPPY';{code}
*2.Create TimestampTest.java*
{code:java}
package org.apache.phoenix.jdbc;

import org.apache.phoenix.query.BaseConnectionlessQueryTest;
import org.apache.phoenix.query.QueryServices;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

/**
 * Created by Jepson on 2017/11/2.
 *
 * CREATE TABLE JYDW.timezonetest (
 *   id bigint(11) not null primary key,
 *   date_c date,
 *   datetime_c timestamp,
 *   timestamp_c timestamp
 * ) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
 */
public class TimestampTest extends BaseConnectionlessQueryTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // props.setProperty(QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB, "Asia/Shanghai");
        String url = "jdbc:phoenix:192.168.117.137,192.168.117.138,192.168.117.140,192.168.117.141,192.168.117.142:2181:/hbase";
        // Connection conn = DriverManager.getConnection(url, props);
        Connection conn = DriverManager.getConnection(url);

        // Upsert one row whose date/time literals are given in local (Asia/Shanghai) time.
        conn.createStatement().execute("UPSERT INTO jydw.TIMEZONETEST(id,date_c,datetime_c,timestamp_c) \n" +
                "values(101,'2018-02-25','2018-02-25 00:00:00','2018-02-25 10:00:00')");
        conn.commit();

        // Read the values back as strings to show how the client formats them.
        ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM TIMEZONETEST");
        while (rs.next()) {
            System.out.println(rs.getString("id") + " : " + rs.getString("date_c") + " : "
                    + rs.getString("datetime_c") + " : " + rs.getString("timestamp_c"));
        }
        rs.close();
        conn.close();
    }
}
{code}
*3.Run TimestampTest.java; the console prints:*
 *id : date_c : datetime_c : timestamp_c*
 101 : 2018-02-24 16:00:00.000 : 2018-02-24 16:00:00.000 : 2018-02-25 
02:00:00.000
 100 : 2018-02-24 16:00:00.000 : 2018-02-24 16:00:00.000 : 2018-02-25 
02:00:00.000

*{color:#ff}The values are shifted back 8 hours, which is wrong.{color}*

*4.Referenced these issues; the suggestions there did not help:*

https://issues.apache.org/jira/browse/PHOENIX-997

https://issues.apache.org/jira/browse/PHOENIX-1485


5.Modify DateUtil.java
{code:java}
public static final String DEFAULT_TIME_ZONE_ID = "GMT";
public static final String LOCAL_TIME_ZONE_ID = "LOCAL";{code}
*Changed:*
{code:java}
public static final String DEFAULT_TIME_ZONE_ID = "Asia/Shanghai";
public static final String LOCAL_TIME_ZONE_ID = "Asia/Shanghai";
{code}
-
{code:java}
private final DateTimeFormatter formatter = 
ISO_DATE_TIME_FORMATTER.withZone(DateTimeZone.forID("UTC"));{code}
*Changed:*
{code:java}
private final DateTimeFormatter formatter = 
ISO_DATE_TIME_FORMATTER.withZone(DateTimeZone.forID("Asia/Shanghai"));
{code}
 

6.Run *TimestampTest.java* again; *the result is now correct:*
 *id : date_c : datetime_c : timestamp_c*
 101 : 2018-02-25 00:00:00.000 : 2018-02-25 00:00:00.000 : 2018-02-25 
10:00:00.000
 100 : 2018-02-25 00:00:00.000 : 2018-02-25 00:00:00.000 : 2018-02-25 
10:00:00.000
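
A less invasive route than hard-coding "Asia/Shanghai" into DateUtil, sketched below under stated assumptions, is the client property that the commented-out line in TimestampTest already points at: QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB (phoenix.query.dateFormatTimeZone). It controls how the client parses and formats date/time strings; whether it fully removes the 8-hour shift on this 4.10 build is not verified here.

{code:java}
// Hedged sketch, not a confirmed fix: set the client time zone per
// connection instead of patching the DateUtil constants.
import org.apache.phoenix.query.QueryServices;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

public class TimezoneProperty {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Client-side zone used when parsing/formatting date, time and
        // timestamp strings (the commented-out line above, enabled).
        props.setProperty(QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB, "Asia/Shanghai");
        String url = "jdbc:phoenix:192.168.117.137:2181:/hbase"; // shortened quorum, assumed reachable
        try (Connection conn = DriverManager.getConnection(url, props);
             ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM TIMEZONETEST")) {
            while (rs.next()) {
                // getTimestamp avoids the string-formatting path entirely.
                System.out.println(rs.getLong("id") + " : " + rs.getTimestamp("timestamp_c"));
            }
        }
    }
}
{code}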



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2018-02-06 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355096#comment-16355096
 ] 

Jepson commented on PHOENIX-4056:
-

[~stepson] Thanks for the reply.

> java.lang.IllegalArgumentException: Can not create a Path from an empty string
> --
>
> Key: PHOENIX-4056
> URL: https://issues.apache.org/jira/browse/PHOENIX-4056
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: CDH5.12
> Phoenix:4.11
> HBase:1.2
> Spark: 2.2.0
> phoenix-spark.version:4.11.0-HBase-1.2
>Reporter: Jepson
>Priority: Major
>  Labels: features, patch, test
> Attachments: PHOENIX-4056.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> 1.Use this configuration on both server and client (Scala project):
> {code:java}
> <property>
>   <name>phoenix.schema.isNamespaceMappingEnabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>phoenix.schema.mapSystemTablesToNamespace</name>
>   <value>true</value>
> </property>
> {code}
> 2.The Code:
> {code:java}
> resultDF.write
>  .format("org.apache.phoenix.spark")
>  .mode(SaveMode.Overwrite)
>  .option("table", "JYDW.ADDRESS_ORDERCOUNT")
>  .option("zkUrl","192.168.1.40,192.168.1.41,192.168.1.42:2181")
>  .save()
> {code}
> 3.This error is thrown; help fixing it would be appreciated:
> 7/08/02 01:07:25 INFO DAGScheduler: Job 6 finished: runJob at 
> SparkHadoopMapReduceWriter.scala:88, took 7.990715 s
> 17/08/02 01:07:25 ERROR SparkHadoopMapReduceWriter: Aborting job 
> job_20170802010717_0079.
> {color:#59afe1}*java.lang.IllegalArgumentException: Can not create a Path 
> from an empty string*{color}
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
>   at org.apache.hadoop.fs.Path.(Path.java:134)
>   at org.apache.hadoop.fs.Path.(Path.java:88)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
>   at 
> org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
>   at 
> org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
>   at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
>   at 
> 
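
The stack trace above is truncated in the archive. While the phoenix-spark writer fails in Spark's commit step, a hedged workaround sketch (an alternative write path, not the fix for this bug; the column names below are assumptions) is to upsert through plain Phoenix JDBC, which never touches the Hadoop committer:

{code:java}
// Hedged sketch: write rows over Phoenix JDBC instead of the Spark datasource.
// ADDRESS and ORDER_COUNT are assumed column names for JYDW.ADDRESS_ORDERCOUNT.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class JdbcUpsertFallback {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:192.168.1.40,192.168.1.41,192.168.1.42:2181";
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false); // batch mutations client-side
            String sql = "UPSERT INTO JYDW.ADDRESS_ORDERCOUNT(ADDRESS, ORDER_COUNT) VALUES(?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "some-address"); // assumed value
                ps.setLong(2, 42L);
                ps.executeUpdate();
            }
            conn.commit(); // buffered mutations are flushed to HBase here
        }
    }
}
{code}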

[jira] [Updated] (PHOENIX-4520) Strange Phenomenon : Data appear repeat and get back to normal

2018-01-04 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4520:

Description: 
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}



*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

The query returns duplicated data: 8 identical rows.
!https://issues.apache.org/jira/secure/attachment/12904715/sql1-2018-01-04.png!

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}

The data is back to normal: 1 correct row.
!https://issues.apache.org/jira/secure/attachment/12904714/sql2-2018-01-04.png!

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

Now the query returns one row; the data is back to normal.
!https://issues.apache.org/jira/secure/attachment/12904713/sql1-2018-01-05.png!



*Question:*
Why did SQL1 return 8 identical rows?


  was:
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*The query returns duplicated data: 8 identical rows.*
!https://issues.apache.org/jira/secure/attachment/12904715/sql1-2018-01-04.png!

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}

*The data is back to normal: 1 correct row.*


*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*Now the query returns one row; the data is back to normal.*
!https://issues.apache.org/jira/secure/attachment/12904713/sql1-2018-01-05.png!

*Question:*
Why did SQL1 return 8 identical rows?




[jira] [Updated] (PHOENIX-4520) Strange Phenomenon : Data appear repeat and get back to normal

2018-01-04 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4520:

Description: 
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*The query returns duplicated data: 8 identical rows.*
!https://issues.apache.org/jira/secure/attachment/12904715/sql1-2018-01-04.png!

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}

*The data is back to normal: 1 correct row.*


*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*Now the query returns one row; the data is back to normal.*
!https://issues.apache.org/jira/secure/attachment/12904713/sql1-2018-01-05.png!

*Question:*
Why did SQL1 return 8 identical rows?


  was:
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*The query returns duplicated data: 8 identical rows.*
!https://issues.apache.org/jira/secure/attachment/12904715/sql1-2018-01-04.png!

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}

*The data is back to normal: 1 correct row.*
!https://issues.apache.org/jira/secure/attachment/12904713/sql1-2018-01-05.png!

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?





--
This 

[jira] [Updated] (PHOENIX-4520) Strange Phenomenon : Data appear repeat and get back to normal

2018-01-04 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4520:

Description: 
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*The query returns duplicated data: 8 identical rows.*
!https://issues.apache.org/jira/secure/attachment/12904715/sql1-2018-01-04.png!

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}

*The data is back to normal: 1 correct row.*
!https://issues.apache.org/jira/secure/attachment/12904713/sql1-2018-01-05.png!

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?


  was:
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*The query returns duplicated data: 8 identical rows.*
!https://issues.apache.org/jira/secure/attachment/12904715/sql1-2018-01-04.png!

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}

*The data is back to normal: 1 correct row.*

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4520) Strange Phenomenon : Data appear repeat and get back to normal

2018-01-04 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4520:

Description: 
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}

*The query returns duplicated data: 8 identical rows.*
!https://issues.apache.org/jira/secure/attachment/12904715/sql1-2018-01-04.png!

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}

*The data is back to normal: 1 correct row.*

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?


  was:
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*The query returns duplicated data: 8 identical rows.*

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}


*The data is back to normal: 1 correct row.*

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4520) Strange Phenomenon : Data appear repeat and get back to normal

2018-01-04 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4520:

Description: 
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*The query returns duplicated data: 8 identical rows.*

*2.Select SQL2:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;
{code}


*The data is back to normal: 1 correct row.*

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*

{code:java}
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;
{code}


*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?


  was:
Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;

*The query returns duplicated data: 8 identical rows.*

*2.Select SQL2:*
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;

*The data is back to normal: 1 correct row.*

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;

*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4520) Strange Phenomenon : Data appear repeat and get back to normal

2018-01-04 Thread Jepson (JIRA)
Jepson created PHOENIX-4520:
---

 Summary: Strange Phenomenon : Data appear repeat and get back to 
normal
 Key: PHOENIX-4520
 URL: https://issues.apache.org/jira/browse/PHOENIX-4520
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
 Environment: phoenix4.10
cdh5.12.0-hbase1.2.0
Reporter: Jepson
 Attachments: sql1-2018-01-04.png, sql1-2018-01-05.png, 
sql2-2018-01-04.png

Strange phenomenon: the data appears duplicated and then gets back to normal.

*Table:*
{code:java}
CREATE TABLE JYDW.bms_biz_outstock_master (
  id bigint(20)  ,
  oms_id varchar(64) ,
  outstock_no varchar(64) ,
  external_no varchar(128) ,
  warehouse_code varchar(64) ,
  warehouse_name varchar(128) ,
  customerid varchar(64) ,
  customer_name varchar(128) ,
  carrier_id varchar(64) ,
  carrier_name varchar(128) ,
  CONSTRAINT pk PRIMARY KEY (id)
) SALT_BUCKETS = 12, COMPRESSION='SNAPPY';
{code}

*I did this on 2018-01-04:*
*1.Select SQL1:*
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;

*The query returns duplicated data: 8 identical rows.*

*2.Select SQL2:*
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE ID=2102527;

*The data is back to normal: 1 correct row.*

*On the second day, 2018-01-05, I did this:*
*3.Select SQL1:*
SELECT * FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER 
  WHERE WAREHOUSE_CODE='B03' 
  AND OUTSTOCK_NO ='Z31164110'
  AND CUSTOMERID='110871' ;

*Now the query returns one row; the data is back to normal.*

*Question:*
Why did SQL1 return 8 identical rows?
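
A hedged way to narrow this down (a diagnostic sketch, not a fix; the quorum in the URL is assumed) is to compare the plan Phoenix chooses for SQL1 against the key multiplicity, and to retry the query with the NO_INDEX hint in case a stale secondary index is being read:

{code:java}
// Hedged diagnostic sketch for the 8-duplicate-rows symptom.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DuplicateRowsCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:192.168.100.40:2181:/hbase"; // assumed quorum
        String where = " FROM JYDW.BMS_BIZ_OUTSTOCK_MASTER"
                + " WHERE WAREHOUSE_CODE='B03' AND OUTSTOCK_NO='Z31164110'"
                + " AND CUSTOMERID='110871'";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement()) {
            // 1) How does Phoenix scan the 12 salt buckets for SQL1?
            try (ResultSet plan = st.executeQuery("EXPLAIN SELECT *" + where)) {
                while (plan.next()) System.out.println(plan.getString(1));
            }
            // 2) 8 rows but 1 distinct ID would point at scan/index duplication
            //    rather than 8 distinct primary keys.
            try (ResultSet rs = st.executeQuery(
                    "SELECT COUNT(*), COUNT(DISTINCT ID)" + where)) {
                if (rs.next()) System.out.println(rs.getLong(1) + " total vs "
                        + rs.getLong(2) + " distinct ids");
            }
            // 3) Force the data table; a differing count implicates an index.
            try (ResultSet rs = st.executeQuery("SELECT /*+ NO_INDEX */ COUNT(*)" + where)) {
                if (rs.next()) System.out.println("no-index count: " + rs.getLong(1));
            }
        }
    }
}
{code}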




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4364) java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.

2017-12-24 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4364:

Priority: Critical  (was: Major)




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4364) java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.

2017-11-09 Thread Jepson (JIRA)
Jepson created PHOENIX-4364:
---

 Summary: java.sql.SQLException: ERROR 2008 (INT10): Unable to find 
cached index metadata. 
 Key: PHOENIX-4364
 URL: https://issues.apache.org/jira/browse/PHOENIX-4364
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
 Environment: phoenix4.10.0
Reporter: Jepson


Using Phoenix JDBC, this error occurs:

{code:java}
java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index 
metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index 
metadata.  key=-1442130476102410039 
region=JYDW:OMS_ORDERINFO,,1509703165591.421fdfea168d20112be0d74b27cdf23a.host=hadoop52,60020,1510212373872
 Index update failed
{code}
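
ERROR 2008 (INT10) usually means the server-side index metadata cache entry expired or was evicted before the mutation batch that referenced it finished. Two commonly suggested mitigations, neither verified against this exact report, are to commit in smaller batches and to raise phoenix.coprocessor.maxServerCacheTimeToLiveMs in the region servers' hbase-site.xml. A hedged client-side sketch (the table column and quorum are assumptions):

{code:java}
// Hedged sketch: smaller commits keep each batch inside the server cache TTL.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SmallBatchUpsert {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:hadoop52:2181"; // assumed quorum
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            String sql = "UPSERT INTO JYDW.OMS_ORDERINFO(ID) VALUES(?)"; // assumed column
            int rows = 0;
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (long id = 0; id < 10_000; id++) {
                    ps.setLong(1, id);
                    ps.executeUpdate();
                    if (++rows % 500 == 0) {
                        conn.commit(); // flush before the index metadata cache can expire
                    }
                }
            }
            conn.commit();
        }
    }
}
{code}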

   





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218367#comment-16218367
 ] 

Jepson commented on PHOENIX-4319:
-

[https://issues.apache.org/jira/browse/PHOENIX-4247]
[https://issues.apache.org/jira/browse/PHOENIX-4041]
[https://issues.apache.org/jira/browse/PHOENIX-3563]


> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).
> *Zookeeper connections:*
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]
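
The loop in the quoted code opens a fresh HBase/ZooKeeper session per phoenixTableAsDataFrame call, which is what exhausts maxClientCnxns. A hedged illustration of the general pattern (plain Phoenix JDBC, not the phoenix-spark internals; names are illustrative) is to hold one connection for the whole loop and close it deterministically:

{code:java}
// Hedged sketch: one shared connection (one ZooKeeper session) for all
// iterations, released by try-with-resources when the loop ends.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReuseConnectionLoop {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase";
        try (Connection conn = DriverManager.getConnection(url)) {
            for (int i = 1; i <= 100; i++) {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT WAREHOUSE_NO, DO_NO FROM DW.wms_do LIMIT 100")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " " + rs.getString(2));
                    }
                }
            }
        } // the single underlying ZooKeeper session is closed here
    }
}
{code}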



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).

Zookeeper connections:
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).

*Zookeeper connections:*
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).

Zookeeper connections:
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum number of client connections from a single host to be reached (we have maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> !https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> !zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:scala}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = 
> "192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest3")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "JYDW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> !zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)
Jepson created PHOENIX-4319:
---

 Summary: Zookeeper connection should be closed immediately
 Key: PHOENIX-4319
 URL: https://issues.apache.org/jira/browse/PHOENIX-4319
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
 Environment: phoenix4.10 hbase1.2.0
Reporter: Jepson


*Code:*
{code:scala}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4247) Phoenix/Spark/ZK connection

2017-10-24 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216370#comment-16216370
 ] 

Jepson commented on PHOENIX-4247:
-

I have this problem as well.
The ZooKeeper connections are never closed automatically.


> Phoenix/Spark/ZK connection
> ---
>
> Key: PHOENIX-4247
> URL: https://issues.apache.org/jira/browse/PHOENIX-4247
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.2 
> Spark 1.6 
> Phoenix 4.10 
>Reporter: Kumar Palaniappan
>
> After upgrading to CDH 5.9.1/Phoenix 4.10/Spark 1.6 from CDH 5.5.2/Phoenix 
> 4.6/Spark 1.5, streaming jobs that read data from Phoenix no longer release 
> their zookeeper connections, meaning that the number of connections from the 
> driver grow with each batch until the ZooKeeper limit on connections per IP 
> address is reached, at which point the Spark streaming job can no longer read 
> data from Phoenix.
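
One way to verify the per-batch growth described above is to poll ZooKeeper's 
four-letter "cons" command between batches; a rough sketch (host and port are 
illustrative):

{code:scala}
import java.net.Socket
import scala.io.Source

// Sketch: send ZooKeeper's four-letter command "cons" and return the raw
// connection list; its length should stay flat if connections are released.
def zkConnections(host: String, port: Int = 2181): String = {
  val socket = new Socket(host, port)
  try {
    socket.getOutputStream.write("cons".getBytes("UTF-8"))
    socket.getOutputStream.flush()
    Source.fromInputStream(socket.getInputStream).mkString
  } finally socket.close()
}

println(zkConnections("192.168.100.40"))
{code}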



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4093) org.apache.phoenix.exception.PhoenixIOException: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304:

2017-08-17 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4093:

Summary: org.apache.phoenix.exception.PhoenixIOException: 
java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304:  (was: 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:)

> org.apache.phoenix.exception.PhoenixIOException: 
> java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304:
> 
>
> Key: PHOENIX-4093
> URL: https://issues.apache.org/jira/browse/PHOENIX-4093
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: Phoenix4.10
> HBase 1.2  CDH5.12
>Reporter: Jepson
>  Labels: performance
>
> SQL Error [101] [08000]: org.apache.phoenix.exception.PhoenixIOException: 
> Failed after attempts=36, exceptions:
> Thu Aug 17 10:51:48 UTC 2017, null, *java.net.SocketTimeoutException: 
> callTimeout=60000, callDuration=60304*: row '' on table 'DW:OMS_TIO_IDX' at 
> region=DW:OMS_TIO_IDX,,1502808904791.06aa2e941810212e9c8733e5f6bdb9ec., 
> hostname=hadoop44,60020,1502954074181, seqNum=8



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4093) org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

2017-08-17 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4093:

Summary: org.apache.phoenix.exception.PhoenixIOException: Failed after 
attempts=36, exceptions:  (was:  
org.apache.phoenix.exception.PhoenixIOException 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:)

> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
> exceptions:
> --
>
> Key: PHOENIX-4093
> URL: https://issues.apache.org/jira/browse/PHOENIX-4093
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: Phoenix4.10
> HBase 1.2  CDH5.12
>Reporter: Jepson
>  Labels: performance
>
> SQL Error [101] [08000]: org.apache.phoenix.exception.PhoenixIOException: 
> Failed after attempts=36, exceptions:
> Thu Aug 17 10:51:48 UTC 2017, null, *java.net.SocketTimeoutException: 
> callTimeout=60000, callDuration=60304*: row '' on table 'DW:OMS_TIO_IDX' at 
> region=DW:OMS_TIO_IDX,,1502808904791.06aa2e941810212e9c8733e5f6bdb9ec., 
> hostname=hadoop44,60020,1502954074181, seqNum=8



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4093) org.apache.phoenix.exception.PhoenixIOException org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

2017-08-17 Thread Jepson (JIRA)
Jepson created PHOENIX-4093:
---

 Summary:  org.apache.phoenix.exception.PhoenixIOException 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:
 Key: PHOENIX-4093
 URL: https://issues.apache.org/jira/browse/PHOENIX-4093
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
 Environment: Phoenix4.10
HBase 1.2  CDH5.12
Reporter: Jepson


SQL Error [101] [08000]: org.apache.phoenix.exception.PhoenixIOException: 
Failed after attempts=36, exceptions:
Thu Aug 17 10:51:48 UTC 2017, null, *java.net.SocketTimeoutException: 
callTimeout=60000, callDuration=60304*: row '' on table 'DW:OMS_TIO_IDX' at 
region=DW:OMS_TIO_IDX,,1502808904791.06aa2e941810212e9c8733e5f6bdb9ec., 
hostname=hadoop44,60020,1502954074181, seqNum=8
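
The timeout above is governed by client-side HBase/Phoenix settings; a hedged 
sketch of raising them when opening a Phoenix connection (the values and the 
quorum are illustrative, not tuned recommendations):

{code:scala}
import java.sql.DriverManager
import java.util.Properties

// Sketch: these client settings bound how long a scan may run before a
// SocketTimeoutException like the one above is raised.
val props = new Properties()
props.setProperty("phoenix.query.timeoutMs", "600000")
props.setProperty("hbase.rpc.timeout", "600000")
props.setProperty("hbase.client.scanner.timeout.period", "600000")

val conn = DriverManager.getConnection("jdbc:phoenix:hadoop44:2181", props)
{code}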




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2388) Support pooling Phoenix connections

2017-08-10 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122707#comment-16122707
 ] 

Jepson commented on PHOENIX-2388:
-

[~yhxx511] I am ready to apply this patch.
I'll get back with the results later.

> Support pooling Phoenix connections
> ---
>
> Key: PHOENIX-2388
> URL: https://issues.apache.org/jira/browse/PHOENIX-2388
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-2388.patch
>
>
> Frequently user are plugging Phoenix into an ecosystem that pools 
> connections. It would be possible to implement a pooling mechanism for 
> Phoenix by creating a delegate Connection that instantiates a new Phoenix 
> connection when retrieved from the pool and then closes the connection when 
> returning it to the pool.
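
A minimal sketch of the delegate idea quoted above (class and method names 
hypothetical): since Phoenix connections are cheap, checkout opens a real 
connection and check-in simply closes it.

{code:scala}
import java.sql.{Connection, DriverManager}

// Hypothetical pool delegate: "borrow" creates a fresh Phoenix connection,
// "giveBack" closes it instead of caching anything between uses.
class PhoenixConnectionPool(url: String) {
  def borrow(): Connection = DriverManager.getConnection(url)
  def giveBack(conn: Connection): Unit = conn.close()
}
{code}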



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3814) Unable to connect to Phoenix via Spark

2017-08-07 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117705#comment-16117705
 ] 

Jepson commented on PHOENIX-3814:
-

[~wkhattak] I added this code in Phoenix and it is running OK! Thanks for this.
Also, can I have your thoughts on changing line no. 2427 in the class 
org.apache.phoenix.query.ConnectionQueryServicesImpl (phoenix-core) to 
"if (!admin.tableExists(SYSTEM_MUTEX_NAME_BYTES)) createSysMutexTable(admin);"

> Unable to connect to Phoenix via Spark
> --
>
> Key: PHOENIX-3814
> URL: https://issues.apache.org/jira/browse/PHOENIX-3814
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: Ubuntu 16.04.1, Apache Spark 2.1.0, Hbase 1.2.5, Phoenix 
> 4.10.0
>Reporter: Wajid Khattak
>
> Please see 
> http://stackoverflow.com/questions/43640864/apache-phoenix-for-spark-not-working



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3721) CSV bulk load doesn't work well with SYSTEM.MUTEX

2017-08-07 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116474#comment-16116474
 ] 

Jepson commented on PHOENIX-3721:
-

Me too. In distributed mode the same error occurs.

> CSV bulk load doesn't work well with SYSTEM.MUTEX
> -
>
> Key: PHOENIX-3721
> URL: https://issues.apache.org/jira/browse/PHOENIX-3721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Sergey Soldatov
>Priority: Blocker
>
> This is quite strange. I'm using HBase 1.2.4 and current master branch.
> During the running CSV bulk load in the regular way I got the following 
> exception: 
> {noformat}
> Exception in thread "main" java.sql.SQLException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM.MUTEX
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:337)
>   at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:329)
>   at 
> org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:209)
>   at 
> org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:285)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:106)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:58)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
> {noformat}
> Checked the code and it seems that the problem is in the createSysMutexTable 
> function. It expects TableExistsException (and skips it), but in my case the 
> exception is wrapped by RemoteException, so it's not skipped and the init 
> fails. The easy fix is to handle RemoteException and check that it wraps 
> TableExistsException, but it looks a bit ugly.
> [~jamestaylor] [~samarthjain] any thoughts? 
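
For reference, a sketch of the unwrapping suggested above, using Hadoop's 
RemoteException.unwrapRemoteException (the surrounding createSysMutexTable 
call is the hypothetical one named in the issue):

{code:scala}
import org.apache.hadoop.hbase.TableExistsException
import org.apache.hadoop.ipc.RemoteException

// Sketch: treat a RemoteException wrapping TableExistsException the same as
// the bare exception, so a concurrently created SYSTEM.MUTEX is skipped.
try {
  createSysMutexTable(admin)
} catch {
  case _: TableExistsException => // already created; ignore
  case e: RemoteException
      if e.unwrapRemoteException(classOf[TableExistsException])
        .isInstanceOf[TableExistsException] => // wrapped variant; ignore
}
{code}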



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2017-08-02 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson resolved PHOENIX-4056.
-
   Resolution: Fixed
Fix Version/s: 4.11.0

*Downgrading the Spark version from 2.2.0 to 2.1.1 resolves the issue.*
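
For anyone hitting the same incompatibility, a build.sbt sketch pinning the 
combination reported to work (artifact coordinates as published for Phoenix 
4.11.0 on HBase 1.2):

{code:scala}
// build.sbt sketch: Spark 2.1.1 paired with phoenix-spark 4.11.0-HBase-1.2,
// the combination reported above to resolve the empty-Path failure.
libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-sql"     % "2.1.1" % "provided",
  "org.apache.phoenix"  % "phoenix-spark" % "4.11.0-HBase-1.2"
)
{code}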

> java.lang.IllegalArgumentException: Can not create a Path from an empty string
> --
>
> Key: PHOENIX-4056
> URL: https://issues.apache.org/jira/browse/PHOENIX-4056
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: CDH5.12
> Phoenix:4.11
> HBase:1.2
> Spark: 2.2.0
> phoenix-spark.version:4.11.0-HBase-1.2
>Reporter: Jepson
>  Labels: features, patch, test
> Fix For: 4.11.0
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> 1. Use the configuration of server and client (Scala project):
> <property>
>   <name>phoenix.schema.isNamespaceMappingEnabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>phoenix.schema.mapSystemTablesToNamespace</name>
>   <value>true</value>
> </property>
> 2. The Code:
> {code:java}
> resultDF.write
>  .format("org.apache.phoenix.spark")
>  .mode(SaveMode.Overwrite)
>  .option("table", "JYDW.ADDRESS_ORDERCOUNT")
>  .option("zkUrl","192.168.1.40,192.168.1.41,192.168.1.42:2181")
>  .save()
> {code}
> 3. This error is thrown; please help to fix it, thank you:
> 17/08/02 01:07:25 INFO DAGScheduler: Job 6 finished: runJob at 
> SparkHadoopMapReduceWriter.scala:88, took 7.990715 s
> 17/08/02 01:07:25 ERROR SparkHadoopMapReduceWriter: Aborting job 
> job_20170802010717_0079.
> {color:#59afe1}*java.lang.IllegalArgumentException: Can not create a Path 
> from an empty string*{color}
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:134)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:88)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
>   at 
> org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
>   at 
> org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
>   at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
>   at 
> 

[jira] [Comment Edited] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2017-08-02 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112202#comment-16112202
 ] 

Jepson edited comment on PHOENIX-4056 at 8/3/17 5:16 AM:
-

I downgraded the Spark version from 2.2.0 to 2.1.1, and the error is resolved.
So Phoenix 4.11-HBase-1.2 does not work with Spark 2.2.0; the compatibility is 
poor.


was (Author: 1028344...@qq.com):
I downgraded the Spark version from 2.2.0 to 2.1.1, and the error is resolved.

> java.lang.IllegalArgumentException: Can not create a Path from an empty string
> --
>
> Key: PHOENIX-4056
> URL: https://issues.apache.org/jira/browse/PHOENIX-4056
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: CDH5.12
> Phoenix:4.11
> HBase:1.2
> Spark: 2.2.0
> phoenix-spark.version:4.11.0-HBase-1.2
>Reporter: Jepson
>  Labels: features, patch, test
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> 1. Use the configuration of server and client (Scala project):
> <property>
>   <name>phoenix.schema.isNamespaceMappingEnabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>phoenix.schema.mapSystemTablesToNamespace</name>
>   <value>true</value>
> </property>
> 2. The Code:
> {code:java}
> resultDF.write
>  .format("org.apache.phoenix.spark")
>  .mode(SaveMode.Overwrite)
>  .option("table", "JYDW.ADDRESS_ORDERCOUNT")
>  .option("zkUrl","192.168.1.40,192.168.1.41,192.168.1.42:2181")
>  .save()
> {code}
> 3. This error is thrown; please help to fix it, thank you:
> 17/08/02 01:07:25 INFO DAGScheduler: Job 6 finished: runJob at 
> SparkHadoopMapReduceWriter.scala:88, took 7.990715 s
> 17/08/02 01:07:25 ERROR SparkHadoopMapReduceWriter: Aborting job 
> job_20170802010717_0079.
> {color:#59afe1}*java.lang.IllegalArgumentException: Can not create a Path 
> from an empty string*{color}
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:134)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:88)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
>   at 
> org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
>   at 
> org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
>   at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> 

[jira] [Commented] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2017-08-02 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112202#comment-16112202
 ] 

Jepson commented on PHOENIX-4056:
-

I downgraded the Spark version from 2.2.0 to 2.1.1, and the error is resolved.

> java.lang.IllegalArgumentException: Can not create a Path from an empty string
> --
>
> Key: PHOENIX-4056
> URL: https://issues.apache.org/jira/browse/PHOENIX-4056
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: CDH5.12
> Phoenix:4.11
> HBase:1.2
> Spark: 2.2.0
> phoenix-spark.version:4.11.0-HBase-1.2
>Reporter: Jepson
>  Labels: features, patch, test
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> 1. Use the configuration of server and client (Scala project):
> <property>
>   <name>phoenix.schema.isNamespaceMappingEnabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>phoenix.schema.mapSystemTablesToNamespace</name>
>   <value>true</value>
> </property>
> 2. The Code:
> {code:java}
> resultDF.write
>  .format("org.apache.phoenix.spark")
>  .mode(SaveMode.Overwrite)
>  .option("table", "JYDW.ADDRESS_ORDERCOUNT")
>  .option("zkUrl","192.168.1.40,192.168.1.41,192.168.1.42:2181")
>  .save()
> {code}
> 3. This error is thrown; please help to fix it, thank you:
> 17/08/02 01:07:25 INFO DAGScheduler: Job 6 finished: runJob at 
> SparkHadoopMapReduceWriter.scala:88, took 7.990715 s
> 17/08/02 01:07:25 ERROR SparkHadoopMapReduceWriter: Aborting job 
> job_20170802010717_0079.
> {color:#59afe1}*java.lang.IllegalArgumentException: Can not create a Path 
> from an empty string*{color}
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:134)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:88)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
>   at 
> org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
>   at 
> org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
>   at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
>   at 
> 

[jira] [Updated] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2017-08-02 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4056:

Environment: 
CDH5.12
Phoenix:4.11
HBase:1.2
Spark: 2.2.0

phoenix-spark.version:4.11.0-HBase-1.2

  was:
CDH5.12
Phoenix:4.11
HBase:1.2

phoenix-spark.version:4.11.0-HBase-1.2


> java.lang.IllegalArgumentException: Can not create a Path from an empty string
> --
>
> Key: PHOENIX-4056
> URL: https://issues.apache.org/jira/browse/PHOENIX-4056
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: CDH5.12
> Phoenix:4.11
> HBase:1.2
> Spark: 2.2.0
> phoenix-spark.version:4.11.0-HBase-1.2
>Reporter: Jepson
>  Labels: features, patch, test
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> 1. Use the configuration of server and client (Scala project):
> <property>
>   <name>phoenix.schema.isNamespaceMappingEnabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>phoenix.schema.mapSystemTablesToNamespace</name>
>   <value>true</value>
> </property>
> 2. The Code:
> {code:java}
> resultDF.write
>  .format("org.apache.phoenix.spark")
>  .mode(SaveMode.Overwrite)
>  .option("table", "JYDW.ADDRESS_ORDERCOUNT")
>  .option("zkUrl","192.168.1.40,192.168.1.41,192.168.1.42:2181")
>  .save()
> {code}
> 3. This error is thrown; please help to fix it, thank you:
> 17/08/02 01:07:25 INFO DAGScheduler: Job 6 finished: runJob at 
> SparkHadoopMapReduceWriter.scala:88, took 7.990715 s
> 17/08/02 01:07:25 ERROR SparkHadoopMapReduceWriter: Aborting job 
> job_20170802010717_0079.
> {color:#59afe1}*java.lang.IllegalArgumentException: Can not create a Path 
> from an empty string*{color}
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:134)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:88)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
>   at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
>   at 
> org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at 
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
>   at 
> org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
>   at 
> org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
>   at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
>   at 
> 

[jira] [Created] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2017-08-01 Thread Jepson (JIRA)
Jepson created PHOENIX-4056:
---

 Summary: java.lang.IllegalArgumentException: Can not create a Path 
from an empty string
 Key: PHOENIX-4056
 URL: https://issues.apache.org/jira/browse/PHOENIX-4056
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.11.0
 Environment: CDH5.12
Phoenix:4.11
HBase:1.2

phoenix-spark.version:4.11.0-HBase-1.2
Reporter: Jepson


1. Use the configuration of server and client (Scala project):
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>

2. The Code:

{code:java}
resultDF.write
 .format("org.apache.phoenix.spark")
 .mode(SaveMode.Overwrite)
 .option("table", "JYDW.ADDRESS_ORDERCOUNT")
 .option("zkUrl","192.168.1.40,192.168.1.41,192.168.1.42:2181")
 .save()
{code}


3. This error is thrown; please help to fix it, thank you:
17/08/02 01:07:25 INFO DAGScheduler: Job 6 finished: runJob at 
SparkHadoopMapReduceWriter.scala:88, took 7.990715 s
17/08/02 01:07:25 ERROR SparkHadoopMapReduceWriter: Aborting job 
job_20170802010717_0079.
{color:#59afe1}*java.lang.IllegalArgumentException: Can not create a Path from 
an empty string*{color}
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
at org.apache.hadoop.fs.Path.<init>(Path.java:134)
at org.apache.hadoop.fs.Path.<init>(Path.java:88)
at 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
at 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
at 
org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at 
org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at 
org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
at 
org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
at 
org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
at 
org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
at 
org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
at 
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
at 
org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
at 
org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)