[jira] [Created] (PHOENIX-4798) Update encoded col qualifiers on the base table correctly

2018-06-28 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-4798:
---

 Summary: Update encoded col qualifiers on the base table correctly
 Key: PHOENIX-4798
 URL: https://issues.apache.org/jira/browse/PHOENIX-4798
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Thomas D'Silva


For tables that use column qualifier encoding, when a column is added or a view is 
created we update the encoded column qualifier on the base table and increment 
its sequence number. 
Add a check that the base table is at the expected sequence number, and fail 
with a CONCURRENT_TABLE_MUTATION error if it is not.
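
A minimal illustrative sketch of the intended optimistic check, written in Python purely for illustration (Phoenix implements this on the server side in Java; the class and error names here are hypothetical):

{code:python}
class ConcurrentTableMutationError(Exception):
    """Raised when the base table was changed concurrently by another client."""

def add_column(base_table, expected_seq_num, new_column):
    # Fail fast if another client already bumped the base table's sequence number.
    if base_table.sequence_number != expected_seq_num:
        raise ConcurrentTableMutationError(
            "expected sequence number %d, found %d"
            % (expected_seq_num, base_table.sequence_number))
    # Otherwise apply the change and bump the sequence number together.
    base_table.columns.append(new_column)
    base_table.sequence_number += 1
{code}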





[jira] [Comment Edited] (PHOENIX-4688) Add kerberos authentication to python-phoenixdb

2018-06-28 Thread Lev Bronshtein (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525303#comment-16525303
 ] 

Lev Bronshtein edited comment on PHOENIX-4688 at 6/28/18 7:26 PM:
--

Test is working 

2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
CREATING PQS CONNECTION
 2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
[[1, u'admin'], [2, u'user']]

Now I just need to clean it up. 
 * -How do I pass down proxy settings?  Or should I assume no proxy?- Inherited 
from the caller's shell
 * -The Heimdal Kerberos utilities Apple ships do not support the krb5.conf 
format miniKDC ships (minor variations)- Render a custom krb5.conf on macOS
 * Currently a shell script is needed to launch. I tried taking out as much as 
possible, but I still need to run kinit, and it also needs to source the activate 
script (see the sketch after this list)
 * -Do we care if it only works on Linux?- It will work on macOS too
 * -Currently only works with Anaconda, would rather see it with virtualenv,- 
though neither one comes pre-installed on stock macOS
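
A minimal sketch of how the kinit step could be driven from Python instead of the wrapper shell script, assuming MIT Kerberos kinit and a keytab (the principal and keytab path are placeholders):

{code:python}
import subprocess

def kinit_with_keytab(principal, keytab_path):
    """Obtain a Kerberos TGT non-interactively using MIT kinit and a keytab."""
    subprocess.run(["kinit", "-kt", keytab_path, principal], check=True)

kinit_with_keytab("user@EXAMPLE.COM", "/path/to/user.keytab")
{code}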

 

 


was (Author: lbronshtein):
Test is working 

2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
CREATING PQS CONNECTION
 2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
[[1, u'admin'], [2, u'user']]

Now I just need to clean it up. 
 * -How do I pass down proxy settings?  Or should I assume no proxy?- Inherited 
from the caller's shell
 * -The Heimdal Kerberos utilities Apple ships do not support the krb5.conf 
format miniKDC ships (minor variations)- Render a custom krb5.conf on macOS
 * Currently a shell script is needed to launch. I tried taking out as much as 
possible, but I still need to run kinit, and it also needs to source the activate script
 * -Do we care if it only works on Linux?- It will work on macOS too
 * Currently only works with Anaconda, would rather see it with virtualenv, 
though neither one comes pre-installed on stock macOS

 

 

> Add kerberos authentication to python-phoenixdb
> ---
>
> Key: PHOENIX-4688
> URL: https://issues.apache.org/jira/browse/PHOENIX-4688
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Priority: Minor
>
> In its current state python-phoenixdb does not support Kerberos 
> authentication. Using a modern Python HTTP library such as requests or 
> urllib, it would be simple (if not trivial) to add this support.





[jira] [Commented] (PHOENIX-4688) Add kerberos authentication to python-phoenixdb

2018-06-28 Thread Lev Bronshtein (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526692#comment-16526692
 ] 

Lev Bronshtein commented on PHOENIX-4688:
-

This will require a very large patch. If anyone would like to start reviewing 
it, I have opened https://github.com/apache/phoenix/pull/307

> Add kerberos authentication to python-phoenixdb
> ---
>
> Key: PHOENIX-4688
> URL: https://issues.apache.org/jira/browse/PHOENIX-4688
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Priority: Minor
>
> In its current state python-phoenixdb does not support Kerberos 
> authentication. Using a modern Python HTTP library such as requests or 
> urllib, it would be simple (if not trivial) to add this support.





[GitHub] phoenix pull request #307: Phoenix 4688 Kerberize python phoenixdb

2018-06-28 Thread pu239ppy
GitHub user pu239ppy opened a pull request:

https://github.com/apache/phoenix/pull/307

Phoenix 4688 Kerberize python phoenixdb

Let's rip out httplib, replace it with requests, and use requests-kerberos.

 Notes
- This PR mirrors requests-kerberos until such time that the maintainers of 
requests-kerberos can merge 
https://github.com/requests/requests-kerberos/pull/115
- This is trivial compared to the integration test required
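
For reviewers, a minimal sketch of the kind of SPNEGO-authenticated request this change enables, using requests plus requests-kerberos (the PQS URL and the empty request body below are placeholders, not part of the patch):

{code:python}
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# Placeholder Phoenix Query Server endpoint; a real client sends Avatica
# protocol messages as the request body.
pqs_url = "http://pqs.example.com:8765/"

auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
response = requests.post(pqs_url, data=b"{}", auth=auth)
response.raise_for_status()
{code}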

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pu239ppy/phoenix PHOENIX-4688

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/307.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #307


commit 6a5448237fee59c36b167f445bfdb23ce27a308f
Author: Lev Bronshtein 
Date:   2018-06-06T15:06:42Z

moved to a separate subdirectory

commit 6373e0332b24010d7b357065fface0bb7a97f0d5
Author: Lev Bronshtein 
Date:   2018-06-06T15:11:31Z

whoops should have been ls -la

commit c3c86b912f33b964f8cc607b1c43b3eb3ff56b3d
Author: Lev Bronshtein 
Date:   2018-06-06T15:14:35Z

added my fork of requests-kerberos module

commit 03fc4c53b9448704d569fd4af574eecd1ea5536d
Author: Lev Bronshtein 
Date:   2018-06-06T16:00:45Z

Now with KERBEROS

commit 5fb158af09150e5ab35eb9b96fd93550b93629a2
Author: Lev Bronshtein 
Date:   2018-06-06T17:40:08Z

documentation

commit e919c76760f5574477a22bee0028b58d6e1460b2
Author: Lev Bronshtein 
Date:   2018-06-25T22:06:52Z

phoenixdb qualifier

commit 3f23299553673b35aebe0f258142d999e1fe54f9
Author: Lev Bronshtein 
Date:   2018-06-26T02:53:59Z

no need to maintain a separate directory name for forked project

commit 7f2f19c30538d3db54110e7296829fffa87113c4
Author: Lev Bronshtein 
Date:   2018-06-26T13:34:54Z

add test script to run python

commit 0207cc5e9c292eda1859a4ce96e802c4bf3044fd
Author: Lev Bronshtein 
Date:   2018-06-26T13:37:32Z

make excutable

commit 56c7a9a9c07d003d76813cb48243b10598d6cef2
Author: Lev Bronshtein 
Date:   2018-06-26T14:49:30Z

pass command line parameters

commit b2c7c206d830baafc39cdf833153505425187967
Author: Lev Bronshtein 
Date:   2018-06-26T14:54:07Z

phoenix URL

commit 4b6ebc1153643e9e033e84f1cb636f872468dfdd
Author: Lev Bronshtein 
Date:   2018-06-26T21:51:38Z

lets not do heredoc

commit 8203449d342fcee41d0d72227ea80a7b73f62879
Author: Lev Bronshtein 
Date:   2018-06-26T21:52:00Z

get STDOUT/ERR

commit 81dd5b35d18466e0758d3b56960f33ac1d84a365
Author: Lev Bronshtein 
Date:   2018-06-27T11:22:33Z

typo in realms

commit be0f774c10fc790166f6af060d06e7e3b575df07
Author: Lev Bronshtein 
Date:   2018-06-27T11:23:13Z

few safegurds

commit ddfd1e324df83aec7c7bb425adfe435c7c36e11d
Author: Lev Bronshtein 
Date:   2018-06-27T11:42:54Z

Add KDC port to list of params

commit b04a8eed246ea1987a282a34ad832d08d8b390ed
Author: Lev Bronshtein 
Date:   2018-06-27T12:45:12Z

use krb5.conf generated by the MINI KDC

commit 2a4969b9cad2ad178c85804ed458b63f62e0d8dc
Author: Lev Bronshtein 
Date:   2018-06-27T13:00:01Z

use example from README

commit 032879b8abeae71db016ca26a6b4f27000fb504a
Author: Lev Bronshtein 
Date:   2018-06-27T13:00:21Z

comments

commit 6500a024beaa161841b731fdec09523d1c57daf4
Author: Lev Bronshtein 
Date:   2018-06-27T13:01:49Z

lets just hardcode this, what difference doe sit make

commit d7830fcfcc43fac8ff4de176554c39983e718383
Author: Lev Bronshtein 
Date:   2018-06-27T16:54:16Z

avoiding unbound variable mech_oid

commit 7fa5c5d76c74350f91d2373ff452f59810c77da7
Author: Lev Bronshtein 
Date:   2018-06-27T16:55:18Z

have to pass PQS port as it changes on every run

commit e473255835df659386f6b221c31184f6aeabc2c8
Author: Lev Bronshtein 
Date:   2018-06-27T17:00:38Z

pass PQS port to python

commit 7b17feb11cf3d73daaa0584e9359fa64998d2738
Author: Lev Bronshtein 
Date:   2018-06-27T17:53:34Z

OS agnostic path

commit 312bb27c06006b7ff12cf32ff8e422e3940a08f5
Author: Lev Bronshtein 
Date:   2018-06-27T19:20:36Z

shell script inherits proxy settings form caller no need to set, cook up a 
custom heimdal krb5.conf if mac

commit 10021e341d25b530fda7a69f1cd9ef37917a3c14
Author: Lev Bronshtein 
Date:   2018-06-27T19:42:50Z

tell shell script where to find python script

commit 7652edf8a4574f073f2911fe0062ba95ba799167
Author: Lev Bronshtein 
Date:   2018-06-27T21:38:58Z

no longer need to do any cleanup

commit 1aa9147566b6e2c9f5a0eb154f6c09113f143956
Author: Lev Bronshtein 
Date:   2018-06-28T16:55:13Z

call kinit and pass along credentials

commit 933328e01fd02909cda72436104f7a85e0705cb5
Author: Lev Bronshtein 
Date:   2018-06-28T18:47:06Z

stalls while trying to execute kinit, I will leave this for someone else to 
figure out

commit 690b5e9c116121962b585c5a97ca2ff7fe30f992

[jira] [Comment Edited] (PHOENIX-4688) Add kerberos authentication to python-phoenixdb

2018-06-28 Thread Lev Bronshtein (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525303#comment-16525303
 ] 

Lev Bronshtein edited comment on PHOENIX-4688 at 6/28/18 6:57 PM:
--

Test is working 

2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
CREATING PQS CONNECTION
 2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
[[1, u'admin'], [2, u'user']]

Now I just need to clean it up. 
 * -How do I pass down proxy settings?  Or should I assume no proxy?- Inherited 
from the caller's shell
 * -The Heimdal Kerberos utilities Apple ships do not support the krb5.conf 
format miniKDC ships (minor variations)- Render a custom krb5.conf on macOS
 * Currently a shell script is needed to launch. I tried taking out as much as 
possible, but I still need to run kinit, and it also needs to source the activate script
 * -Do we care if it only works on Linux?- It will work on macOS too
 * Currently only works with Anaconda, would rather see it with virtualenv, 
though neither one comes pre-installed on stock macOS

 

 


was (Author: lbronshtein):
Test is working 

2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
CREATING PQS CONNECTION
 2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
[[1, u'admin'], [2, u'user']]

Now I just need to clean it up. 
 * -How do I pass down proxy settings?  Or should I assume no proxy?- Inherited 
from the caller's shell
 * -The Heimdal Kerberos utilities Apple ships do not support the krb5.conf 
format miniKDC ships (minor variations)- Render a custom krb5.conf on macOS
 * Currently a shell script is needed to launch. I tried taking out as much as 
possible, but I still need to run kinit, and it also needs to source the activate script
 * -Do we care if it only works on Linux?- It will work on macOS too
 * Currently only works with Anaconda, would rather see it with virtualenv

 

 

> Add kerberos authentication to python-phoenixdb
> ---
>
> Key: PHOENIX-4688
> URL: https://issues.apache.org/jira/browse/PHOENIX-4688
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Priority: Minor
>
> In its current state python-phoenixdb does not support Kerberos 
> authentication. Using a modern Python HTTP library such as requests or 
> urllib, it would be simple (if not trivial) to add this support.





[jira] [Comment Edited] (PHOENIX-4688) Add kerberos authentication to python-phoenixdb

2018-06-28 Thread Lev Bronshtein (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525303#comment-16525303
 ] 

Lev Bronshtein edited comment on PHOENIX-4688 at 6/28/18 6:47 PM:
--

Test is working 

2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
CREATING PQS CONNECTION
 2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
[[1, u'admin'], [2, u'user']]

Now I just need to clean it up. 
 * -How do I pass down proxy settings?  Or should I assume no proxy?- Inherited 
from the caller's shell
 * -The Heimdal Kerberos utilities Apple ships do not support the krb5.conf 
format miniKDC ships (minor variations)- Render a custom krb5.conf on macOS
 * Currently a shell script is needed to launch. I tried taking out as much as 
possible, but I still need to run kinit, and it also needs to source the activate script
 * -Do we care if it only works on Linux?- It will work on macOS too
 * Currently only works with Anaconda, would rather see it with virtualenv

 

 


was (Author: lbronshtein):
Test is working 

2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
CREATING PQS CONNECTION
 2018-06-27 12:52:15,048 INFO [main] end2end.SecureQueryServerPhoenixDBIT(310): 
[[1, u'admin'], [2, u'user']]

Now I just need to clean it up. 
 * -How do I pass down proxy settings?  Or should I assume no proxy?- Inherited 
from the caller's shell
 * -The Heimdal Kerberos utilities Apple ships do not support the krb5.conf 
format miniKDC ships (minor variations)- Render a custom krb5.conf on macOS
 * Currently a shell script is needed to launch. I tried taking out as much as 
possible, but I still need to run kinit
 * -Do we care if it only works on Linux?- It will work on macOS too
 * Currently only works with Anaconda, would rather see it with virtualenv

 

 

> Add kerberos authentication to python-phoenixdb
> ---
>
> Key: PHOENIX-4688
> URL: https://issues.apache.org/jira/browse/PHOENIX-4688
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Priority: Minor
>
> In its current state python-phoenixdb does not support Kerberos 
> authentication. Using a modern Python HTTP library such as requests or 
> urllib, it would be simple (if not trivial) to add this support.





[jira] [Commented] (PHOENIX-1718) Unable to find cached index metadata during the stability test with phoenix

2018-06-28 Thread Jepson (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526114#comment-16526114
 ] 

Jepson commented on PHOENIX-1718:
-

[~jamestaylor] Can you share the details of that careful configuration?

> Unable to find cached index metadata during the stability test with phoenix
> --
>
> Key: PHOENIX-1718
> URL: https://issues.apache.org/jira/browse/PHOENIX-1718
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
> Environment: linux os ( 128G ram,48T disk,24 cores) * 8
> Hadoop 2.5.1
> HBase 0.98.7
> Phoenix 4.2.1
>Reporter: wuchengzhi
>Priority: Critical
> Attachments: hbase-hadoop-regionserver-cluster-node134 .zip
>
>
> I am running a stability test with Phoenix 4.2.1, but the region server became 
> very slow after 4 hours, and I found some errors in the region server log 
> file.
> In this scenario the cluster has 8 machines (128 GB RAM, 24 cores, 48 TB disk). 
> I set up 2 region servers on each machine (16 region servers in total). 
> 1. Create 8 tables, TEST_USER0 to TEST_USER7, each with a local index:
> create table TEST_USER0 (id varchar primary key , attr1 varchar, attr2 
> varchar,attr3 varchar,attr4 varchar,attr5 varchar,attr6 integer,attr7 
> integer,attr8 integer,attr9 integer,attr10 integer )  
> DATA_BLOCK_ENCODING='FAST_DIFF',VERSIONS=1,BLOOMFILTER='ROW',COMPRESSION='LZ4',BLOCKSIZE
>  = '65536',SALT_BUCKETS=32;
> create local index TEST_USER_INDEX0 on 
> TEST5.TEST_USER0(attr1,attr2,attr3,attr4,attr5,attr6,attr7,attr8,attr9,attr10);
> 
> 2. Deploy a Phoenix client on each machine to upsert data into the tables 
> (client 1 upserts into TEST_USER0, client 2 into TEST_USER1, and so on).
> Each Phoenix client starts 6 threads; each thread upserts 10,000 rows per 
> batch and 500,000,000 rows in total (see the sketch after these steps).
> All 8 clients ran at the same time.
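> 
> A hedged sketch of the batch upsert pattern from step 2, written against the 
> python-phoenixdb DB-API purely for illustration (the PQS URL, column subset, 
> and generated values are assumptions, not the actual test client):
> 
> {code:python}
> import phoenixdb
> 
> # Illustration only: connect through the Phoenix Query Server (URL is a placeholder).
> conn = phoenixdb.connect("http://pqs.example.com:8765/", autocommit=True)
> cursor = conn.cursor()
> 
> BATCH_SIZE = 10000
> TOTAL_ROWS = 500000000
> upsert_sql = "UPSERT INTO TEST_USER0 (ID, ATTR1, ATTR6) VALUES (?, ?, ?)"
> 
> def make_batch(start):
>     # One batch of synthetic rows (placeholder data).
>     return [(str(i), "value", i % 100) for i in range(start, start + BATCH_SIZE)]
> 
> for start in range(0, TOTAL_ROWS, BATCH_SIZE):
>     cursor.executemany(upsert_sql, make_batch(start))
> {code}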
> The log is below. After running for about 4 hours there were about 1,000,000,000 rows 
> in HBase; errors occurred frequently from about 4 hours and 50 
> minutes on, and the RPS became very slow, less than 10,000 (7, in normal).
> 2015-03-09 19:15:13,337 ERROR 
> [B.DefaultRpcServer.handler=2,queue=2,port=60022] parallel.BaseTaskRunner: 
> Found a failed task because: org.apache.hadoop.hbase.DoNotRetryIOException: 
> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. 
>  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
> at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
> at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
> at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
> at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
> at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:881)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1522)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1597)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1554)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2476)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2263)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2215)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2219)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4376)
> at 
> 

[jira] [Updated] (PHOENIX-4797) file not found or file exists exception when creating global index with the -snapshot option

2018-06-28 Thread sailingYang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sailingYang updated PHOENIX-4797:
-
Description: 
when use indextool with -snapshot option and if the mapreduce create multi 
mapper.this will cause the hdfs file not found or  hdfs file exist 
exception。finally the mapreduce task must be failed. because the mapper use the 
same restore work dir.
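
A minimal sketch of the per-mapper-directory idea, written in Python purely for illustration (the actual change would be in the Phoenix Java MapReduce code; the naming scheme is an assumption). The original failure log follows.

{code:python}
import uuid

def restore_dir_for_task(base_restore_dir, task_attempt_id=None):
    # Give each mapper attempt its own restore working directory so that
    # concurrent snapshot restores do not collide on the same HDFS paths.
    suffix = task_attempt_id or uuid.uuid4().hex
    return "%s/%s" % (base_restore_dir, suffix)
{code}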
{code:java}
Error: java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: The specified region already exists on disk: 
hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
at 
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.cloneHdfsRegions(RestoreSnapshotHelper.java:578)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:249)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:171)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:814)
at 
org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
at 
org.apache.phoenix.iterate.TableSnapshotResultIterator.(TableSnapshotResultIterator.java:73)
at 
org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: The 
specified region already exists on disk: 
hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:180)
... 15 more
Caused by: java.io.IOException: The specified region already exists on disk: 
hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
at 
org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:877)
at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6252)
at 
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)
at 
org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)
at 
org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

2018-06-28 15:01:55 70909 [main] INFO org.apache.hadoop.mapreduce.Job - Task Id 
: attempt_1530004808977_0011_m_01_0, Status : FAILED
Error: java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: The specified region already exists on disk: 
hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
at 
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.cloneHdfsRegions(RestoreSnapshotHelper.java:578)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:249)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:171)
at 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:814)
at 
org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
at