Re: ABORTING region server and following HBase cluster "crash"

2018-09-08 Thread Ted Yu
It seems you should deploy HBase with the following fix:

HBASE-21069 NPE in StoreScanner.updateReaders causes RS to crash

1.4.7 was recently released.
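
If you want to confirm which HBase version each region server is actually
running before and after the rolling upgrade, something like this works (a
sketch; the host list is a placeholder and the hbase script is assumed to be
on the PATH):

  # Version of the local HBase installation
  hbase version

  # Spot-check each region server over ssh (prod003/prod006 taken from the logs)
  for h in prod003 prod006; do ssh "$h" 'hbase version | head -1'; done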

FYI

On Sat, Sep 8, 2018 at 3:32 PM Batyrshin Alexander <0x62...@gmail.com>
wrote:

>  Hello,
>
> We got this exception from *prod006* server
>
> Sep 09 00:38:02 prod006 hbase[18907]: 2018-09-09 00:38:02,532 FATAL
> [MemStoreFlusher.1] regionserver.HRegionServer: ABORTING region server
> prod006,60020,1536235102833: Replay of WAL required. Forcing server shutdown
> Sep 09 00:38:02 prod006 hbase[18907]:
> org.apache.hadoop.hbase.DroppedSnapshotException:
> region: 
> KM,c\xEF\xBF\xBD\x16I7\xEF\xBF\xBD\x0A"A\xEF\xBF\xBDd\xEF\xBF\xBD\xEF\xBF\xBD\x19\x07t,1536178245576.60c121ba50e67f2429b9ca2ba2a11bad.
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2645)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2322)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2284)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2170)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2095)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> java.lang.Thread.run(Thread.java:748)
> Sep 09 00:38:02 prod006 hbase[18907]: Caused by:
> java.lang.NullPointerException
> Sep 09 00:38:02 prod006 hbase[18907]: at
> java.util.ArrayList.<init>(ArrayList.java:178)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:863)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1172)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1145)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:122)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2505)
> Sep 09 00:38:02 prod006 hbase[18907]: at
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2600)
> Sep 09 00:38:02 prod006 hbase[18907]: ... 9 more
> Sep 09 00:38:02 prod006 hbase[18907]: 2018-09-09 00:38:02,532 FATAL
> [MemStoreFlusher.1] regionserver.HRegionServer: RegionServer abort: loaded
> coprocessors
> are: [org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator,
> org.apache.phoenix.coprocessor.SequenceRegionObserver, org.apache.phoenix.c
>
> After that we got ABORTING on almost every Region Servers in cluster with
> different reasons:
>
> *prod003*
> Sep 09 01:12:11 prod003 hbase[11552]: 2018-09-09 01:12:11,799 FATAL
> [PostOpenDeployTasks:88bfac1dfd807c4cd1e9c1f31b4f053f]
> regionserver.HRegionServer: ABORTING region
> server prod003,60020,1536444066291: Exception running postOpenDeployTasks;
> region=88bfac1dfd807c4cd1e9c1f31b4f053f
> Sep 09 01:12:11 prod003 hbase[11552]: java.io.InterruptedIOException:
> #139, interrupted. currentNumberOfTask=8
> Sep 09 01:12:11 prod003 hbase[11552]: at
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1853)
> Sep 09 01:12:11 prod003 hbase[11552]: at
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1823)
> Sep 09 01:12:11 prod003 hbase[11552]: at
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1899)
> Sep 09 01:12:11 prod003 hbase[11552]: at
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:250)
> Sep 09 01:12:11 prod003 hbase[11552]: at
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:213)
> Sep 09 01:12:11 prod003 hbase[11552]: at
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1484)
> Sep 09 01:12:11 prod003 hbase[11552]: at
> org.apache.hadoop.hbase.client.HTable.put(HTable.java:1031)
> Sep 09 01:12:11 

Re: local index turn disable when region split

2017-11-03 Thread Ted Yu
Can you give us more information?

Which releases of HBase and Phoenix are you using?

bq. The local index turns disabled

Can you pastebin the related exception(s)?

A snippet from the region server log would also help.
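
If you want to check whether the splits left the index disabled, something
like this should show it (a sketch; the index state lives in SYSTEM.CATALOG
as a single-letter code, e.g. 'a' = active, 'd' = disabled):

  SELECT TABLE_SCHEM, TABLE_NAME, INDEX_STATE
  FROM SYSTEM.CATALOG
  WHERE INDEX_STATE IS NOT NULL;

  -- If the index is disabled, a rebuild brings it back online
  ALTER INDEX test_index_local ON test_table_local REBUILD;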

On Thu, Nov 2, 2017 at 11:31 PM, vergil  wrote:

> Hi,all:
>
>
> Here is my test table.
> create table test_table_local(id varchar primary key, f1 varchar, f2
> varchar) salt_buckets=3;
> Add a local index on it at the start:
> create local index test_index_local on test_table_local(f1);
>
> Then upsert data into it.
> As the data increases, the regions split.
> The local index turns disabled when a region splits.
> The local index data stops increasing and the index no longer works.
>
> Here is my configuration on each master and region server:
> <property>
>   <name>hbase.regionserver.wal.codec</name>
>   <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
> </property>
> <property>
>   <name>hbase.region.server.rpc.scheduler.factory.class</name>
>   <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
> </property>
> <property>
>   <name>hbase.rpc.controllerfactory.class</name>
>   <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
> </property>
>
> I need your help, thx!
> --
> The harder, more fortunate
>
>


Re: Phoenix 4.12 error on HDP 2.6

2017-10-25 Thread Ted Yu
Since you're deploying onto a vendor's platform, I suggest asking this
question on the vendor's forum.

Cheers

On Wed, Oct 25, 2017 at 3:59 AM, Sumanta Gh  wrote:

> Hi,
> I am trying to install phoenix-4.12.0 (HBase-1.1) on HDP 2.6.2.0. As per the
> installation guide, I have copied the phoenix-4.12.0-HBase-1.1-server.jar
> into the HBase lib directory. After restarting HBase using Ambari and
> connecting through SqlLine, I can see the Phoenix system tables getting
> created. I used the HBase shell to check them.
>
> When I try to create a table, the region servers stop with the following
> error. Could anyone please advise on what is wrong here?
>
> thanks
> sumanta
>
>
>
> DDL :
>
> CREATE TABLE V5.USER (
> ADMIN BOOLEAN,
> KEYA VARCHAR,
> KEYB VARCHAR,
> ID INTEGER,
> USERNAME VARCHAR,
> CONSTRAINT PK PRIMARY KEY (KEYA)) COLUMN_ENCODED_BYTES=0;
>
>
> Region Server Error:
>
> 2017-10-25 10:47:12,499 ERROR [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.phoenix.hbase.index.Indexer
> threw org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(
> DefaultMetricsSystem.java:144)
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(
> DefaultMetricsSystem.java:117)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.
> register(MetricsSystemImpl.java:229)
> at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(
> BaseSourceImpl.java:74)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(
> MetricsIndexerSourceImpl.java:49)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(
> MetricsIndexerSourceImpl.java:44)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory.
> create(MetricsIndexerSourceFactory.java:34)
> at org.apache.phoenix.hbase.index.Indexer.start(Indexer.java:251)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$
> Environment.startup(CoprocessorHost.java:415)
>
> 
> 2017-10-25 10:47:12,499 FATAL [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> regionserver.HRegionServer: ABORTING region server 
> ip-172-30-3-197,16020,1508926506368:
> The coprocessor org.apache.phoenix.hbase.index.Indexer threw
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(
> DefaultMetricsSystem.java:144)
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(
> DefaultMetricsSystem.java:117)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.
> register(MetricsSystemImpl.java:229)
> at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(
> BaseSourceImpl.java:74)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(
> MetricsIndexerSourceImpl.java:49)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(
> MetricsIndexerSourceImpl.java:44)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory.
> create(MetricsIndexerSourceFactory.java:34)
> at org.apache.phoenix.hbase.index.Indexer.start(Indexer.java:251)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$
> Environment.startup(CoprocessorHost
>
> 
> 2017-10-25 10:47:12,499 FATAL [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> regionserver.HRegionServer: RegionServer abort: loaded coprocessors are:
> [org.apache.phoenix.coprocessor.MetaDataEndpointImpl, org.apache.phoenix.
> coprocessor.ScanRegionObserver, org.apache.phoenix.coprocessor.
> UngroupedAggregateRegionObserver, org.apache.phoenix.hbase.index.Indexer,
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver,
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl,
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,
> org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2017-
>
> .
> 2017-10-25 10:47:12,511 INFO  [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> regionserver.HRegionServer: STOPPED: The coprocessor
> org.apache.phoenix.hbase.index.Indexer threw 
> org.apache.hadoop.metrics2.MetricsException:
> Metrics source RegionServer,sub=PhoenixIndexer already exists!
> 2017-10-25 10:47:12,511 INFO  [regionserver/ip-172-30-3-197/
> 172.30.3.197:16020] regionserver.SplitLogWorker: Sending interrupt to
> stop the worker thread
> 2017-10-25 10:47:12,511 INFO  [regionserver/ip-172-30-3-197/
> 172.30.3.197:16020] regionserver.HRegionServer: Stopping infoServer
> 2017-10-25 10:47:12,511 INFO  [SplitLogWorker-ip-172-30-3-197:16020]
> regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting.
> 2017-10-25 10:47:12,511 INFO  [SplitLogWorker-ip-172-30-3-197:16020]
> regionserver.SplitLogWorker: 

Re: [ANNOUNCE] New PMC Member: Sergey Soldatov

2017-09-24 Thread Ted Yu
Congratulations, Sergey !

On Sun, Sep 24, 2017 at 1:00 PM, Josh Elser  wrote:

> All,
>
> The Apache Phoenix PMC has recently voted to extend an invitation to
> Sergey to join the PMC in recognition of his continued contributions to the
> community. We are happy to share that he has accepted this offer.
>
> Please join me in congratulating Sergey! Congratulations on a
> well-deserved invitation.
>
> - Josh (on behalf of the entire PMC)
>


Re: Phoenix CSV Bulk Load fails to load a large file

2017-09-06 Thread Ted Yu
bq. hbase.bulkload.retries.retryOnIOException is disabled. Unable to recover

The above is from HBASE-17165.

See if the load can pass after enabling the config.
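
For reference, enabling it is a one-line addition to hbase-site.xml on the
node running the bulk load (property name taken from the error message below):

  <property>
    <name>hbase.bulkload.retries.retryOnIOException</name>
    <value>true</value>
  </property>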

On Wed, Sep 6, 2017 at 3:11 PM, Sriram Nookala  wrote:

> It finally times out with these exceptions
>
> ed Sep 06 21:38:07 UTC 2017, RpcRetryingCaller{globalStartTime=1504731276347,
> pause=100, retries=35}, java.io.IOException: Call to
> ip-10-123-0-60.ec2.internal/10.123.0.60:16020 failed on local exception:
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=77,
> waitTime=60001, operationTimeout=60000 expired.
>
>
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(
> RpcRetryingCaller.java:159)
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.
> tryAtomicRegionLoad(LoadIncrementalHFiles.java:956)
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(
> LoadIncrementalHFiles.java:594)
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(
> LoadIncrementalHFiles.java:590)
>
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
>
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
>
> at java.lang.Thread.run(Thread.java:745)
>
> Caused by: java.io.IOException: Call to ip-10-123-0-60.ec2.internal/10
> .123.0.60:16020 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException:
> Call id=77, waitTime=60001, operationTimeout=60000 expired.
>
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(
> AbstractRpcClient.java:292)
>
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1274)
>
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(
> AbstractRpcClient.java:227)
>
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$
> BlockingRpcChannelImplementation.callBlockingMethod(
> AbstractRpcClient.java:336)
>
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$
> BlockingStub.bulkLoadHFile(ClientProtos.java:35408)
>
> at org.apache.hadoop.hbase.protobuf.ProtobufUtil.
> bulkLoadHFile(ProtobufUtil.java:1676)
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3.call(
> LoadIncrementalHFiles.java:656)
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3.call(
> LoadIncrementalHFiles.java:645)
>
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(
> RpcRetryingCaller.java:137)
>
> ... 7 more
>
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=77,
> waitTime=60001, operationTimeout=60000 expired.
>
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
>
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1248)
>
> ... 14 more
>
> 17/09/06 21:38:07 ERROR mapreduce.LoadIncrementalHFiles:
> hbase.bulkload.retries.retryOnIOException is disabled. Unable to recover
>
> 17/09/06 21:38:07 INFO zookeeper.ZooKeeper: Session: 0x15e58ca21fc004c
> closed
>
> 17/09/06 21:38:07 INFO zookeeper.ClientCnxn: EventThread shut down
>
> Exception in thread "main" java.io.IOException: BulkLoad encountered an
> unrecoverable problem
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(
> LoadIncrementalHFiles.java:614)
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(
> LoadIncrementalHFiles.java:463)
>
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(
> LoadIncrementalHFiles.java:373)
>
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.completebulkload(
> AbstractBulkLoadTool.java:355)
>
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.submitJob(
> AbstractBulkLoadTool.java:332)
>
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(
> AbstractBulkLoadTool.java:270)
>
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(
> AbstractBulkLoadTool.java:183)
>
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>
> at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(
> CsvBulkLoadTool.java:101)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:498)
>
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Failed after attempts=35, exceptions:
>
> Wed Sep 06 20:55:36 UTC 2017, RpcRetryingCaller{globalStartTime=1504731276347,
> pause=100, retries=35}, java.io.IOException: Call to
> ip-10-123-0-60.ec2.internal/10.123.0.60:16020 failed on local exception:
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=9,
> 

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread Ted Yu
> Failed sync-before-close but no outstanding appends; closing
> WAL: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append
> sequenceId=7846, requesting roll of WAL
> 2017-07-06 19:48:39,261 INFO
> [regionserver/ip-10-74-5-153.us-west-2.compute.internal/10.
> 74.5.153:16020.logRoller]
> wal.FSHLog: Rolled WAL
> /user/hbase/WALs/ip-10-74-5-153.us-west-2.compute.
> internal,16020,1499320260501/ip-10-74-5-153.us-west-2.
> compute.internal%2C16020%2C1499320260501.default.1499370518086
> with entries=0, filesize=174 B; new WAL
> /user/hbase/WALs/ip-10-74-5-153.us-west-2.compute.
> internal,16020,1499320260501/ip-10-74-5-153.us-west-2.
> compute.internal%2C16020%2C1499320260501.default.1499370519235
> 2017-07-06 19:48:39,261 INFO
> [regionserver/ip-10-74-5-153.us-west-2.compute.internal/10.
> 74.5.153:16020.logRoller]
> wal.FSHLog: Archiving
> hdfs://ip-10-74-31-169.us-west-2.compute.internal:8020/
> user/hbase/WALs/ip-10-74-5-153.us-west-2.compute.
> internal,16020,1499320260501/ip-10-74-5-153.us-west-2.
> compute.internal%2C16020%2C1499320260501.default.1499370518086
> to
> hdfs://ip-10-74-31-169.us-west-2.compute.internal:8020/
> user/hbase/oldWALs/ip-10-74-5-153.us-west-2.compute.internal%2C16020%
> 2C1499320260501.default.1499370518086
> 2017-07-06 19:48:40,322 WARN
> [regionserver/ip-10-74-5-153.us-west-2.compute.internal/10.
> 74.5.153:16020.append-pool1-t1]
> wal.FSHLog: Append sequenceId=7847, requesting roll of WAL
> java.lang.NullPointerException
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:106)
> at
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(
> FSDataOutputStream.java:60)
> at java.io.DataOutputStream.write(DataOutputStream.java:107)
> at org.apache.hadoop.hbase.KeyValue.oswrite(KeyValue.java:2571)
> at org.apache.hadoop.hbase.KeyValueUtil.oswrite(KeyValueUtil.java:623)
> at
> org.apache.hadoop.hbase.regionserver.wal.WALCellCodec$
> EnsureKvEncoder.write(WALCellCodec.java:338)
> at
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(
> ProtobufLogWriter.java:122)
> at
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$
> RingBufferEventHandler.append(FSHLog.java:1909)
> at
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.
> onEvent(FSHLog.java:1773)
> at
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.
> onEvent(FSHLog.java:1695)
> at
> com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> On Thu, Jul 6, 2017 at 1:55 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > HBASE-16960 mentioned the following :
> >
> > Caused by: java.net.SocketTimeoutException: 2 millis timeout while
> > waiting for channel to be ready for read
> >
> > Do you see a similar line in the region server log?
> >
> > Cheers
> >
> > On Thu, Jul 6, 2017 at 1:48 PM, anil gupta <anilgupt...@gmail.com>
> wrote:
> >
> > > Hi All,
> > >
> > > We are running HBase/Phoenix on EMR5.2(HBase1.2.3 and Phoenix4.7) and
> we
> > running into following exception when we are trying to load data into one
> > of our Phoenix table:
> > > 2017-07-06 19:57:57,507 INFO [hconnection-0x60e5272-shared-
> -pool2-t249]
> > org.apache.hadoop.hbase.client.AsyncProcess: #1, table=DE.CONFIG_DATA,
> > attempt=30/35 failed=38ops, last exception: org.apache.hadoop.hbase.
> > regionserver.wal.DamagedWALException: org.apache.hadoop.hbase.
> > regionserver.wal.DamagedWALException: Append sequenceId=8689, requesting
> > roll of WAL
> > >   at org.apache.hadoop.hbase.regionserver.wal.FSHLog$
> > RingBufferEventHandler.append(FSHLog.java:1921)
> > >   at org.apache.hadoop.hbase.regionserver.wal.FSHLog$
> > RingBufferEventHandler.onEvent(FSHLog.java:1773)
> > >   at org.apache.hadoop.hbase.regionserver.wal.FSHLog$
> > RingBufferEventHandler.onEvent(FSHLog.java:1695)
> > >   at com.lmax.disruptor.BatchEventProcessor.run(
> > BatchEventProcessor.java:128)
> > >   at java.util.concurrent.ThreadPoolExecutor.runWorker(
> > ThreadPoolExecutor.java:1142)
> > >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> > ThreadPoolExecutor.java:617)
> > >   at java.lang.Thread.run(Thread.java:745)
> > >
> > > We are OK with wiping out this table and rebuilding the dataset. We
> > tried to drop the table and recreate the table but it didnt fix it.
> > > Can anyone please let us know how can we get rid of above problem? Are
> > we running into https://issues.apache.org/jira/browse/HBASE-16960?
> > >
> > >
> > > --
> > > Thanks & Regards,
> > > Anil Gupta
> > >
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>


Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread Ted Yu
HBASE-16960 mentioned the following :

Caused by: java.net.SocketTimeoutException: 2 millis timeout while
waiting for channel to be ready for read

Do you see a similar line in the region server log?

Cheers
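
A quick way to look (the log path is a placeholder; adjust for your layout):

  grep -n "SocketTimeoutException" /var/log/hbase/hbase-*-regionserver-*.log*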

On Thu, Jul 6, 2017 at 1:48 PM, anil gupta  wrote:

> Hi All,
>
> We are running HBase/Phoenix on EMR 5.2 (HBase 1.2.3 and Phoenix 4.7) and we are
> running into the following exception when trying to load data into one of
> our Phoenix tables:
> 2017-07-06 19:57:57,507 INFO [hconnection-0x60e5272-shared--pool2-t249] 
> org.apache.hadoop.hbase.client.AsyncProcess: #1, table=DE.CONFIG_DATA, 
> attempt=30/35 failed=38ops, last exception: 
> org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: 
> org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append 
> sequenceId=8689, requesting roll of WAL
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1921)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1773)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1695)
>   at 
> com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>
> We are OK with wiping out this table and rebuilding the dataset. We tried to
> drop and recreate the table, but it didn't fix it.
> Can anyone please let us know how we can get rid of the above problem? Are we
> running into https://issues.apache.org/jira/browse/HBASE-16960?
>
>
> --
> Thanks & Regards,
> Anil Gupta
>


Re: Specify maven dependency

2017-06-29 Thread Ted Yu
Looking at
https://repo.maven.apache.org/maven2/org/apache/phoenix/phoenix/4.10.0-HBase-1.2/
, there is only a test jar.

You need to specify a module under
https://repo.maven.apache.org/maven2/org/apache/phoenix/

e.g.
https://repo.maven.apache.org/maven2/org/apache/phoenix/phoenix-core/4.11.0-HBase-1.2/
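
i.e. something like the following in your pom (pick the version matching your
HBase line):

  <dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>4.10.0-HBase-1.2</version>
  </dependency>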

FYI

On Thu, Jun 29, 2017 at 7:23 AM, Juvenn Woo <mach...@gmail.com> wrote:

> Hi Ted,
>
> Thanks for quick reply. I’ve setup a new app with just phoenix as
> dependency, you can find the  `mvn -X dependency:tree` output here:
>
> http://termbin.com/qm3q
>
> Best,
> --
> Juvenn Woo
> Sent with Sparrow <http://www.sparrowmailapp.com/?sig>
>
> On Thursday, June 29, 2017 at 9:37 PM, Ted Yu wrote:
>
> Can you add -X to mvn command and pastebin the output?
>
> Thanks
>
>  Original message 
> From: Juvenn Woo <mach...@gmail.com>
> Date: 6/29/17 6:26 AM (GMT-08:00)
> To: user@phoenix.apache.org
> Subject: Specify maven dependency
>
> Hi all,
>
> For convenience of deployment, I am trying to specify Phoenix as a Maven
> dependency instead of putting the client jar in the git repo.
>
> While I am able to find phoenix on maven central:
>
> ```
> <dependency>
>   <groupId>org.apache.phoenix</groupId>
>   <artifactId>phoenix</artifactId>
>   <version>4.10.0-HBase-1.2</version>
> </dependency>
> ```
>
> Yet it fails to download, and it complains with the following error:
>
> ```
> Caused by: org.sonatype.aether.resolution.ArtifactResolutionException:
> Could not find artifact org.apache.phoenix:phoenix:jar:4.10.0-HBase-1.2
> in central
> ```
>
> Am I missing anything here? Or is manually bundling the client jar the only
> option?
>
> Thanks!
>
> --
> Juvenn Woo
> Sent with Sparrow <http://www.sparrowmailapp.com/?sig>
>
>
>


Re: Specify maven dependency

2017-06-29 Thread Ted Yu
Can you add -X to mvn command and pastebin the output?
Thanks
-------- Original message --------
From: Juvenn Woo
Date: 6/29/17 6:26 AM (GMT-08:00)
To: user@phoenix.apache.org
Subject: Specify maven dependency


Hi all,

For convenience of deployment, I am trying to specify Phoenix as a Maven
dependency instead of putting the client jar in the git repo.
While I am able to find phoenix on Maven central:
```
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix</artifactId>
  <version>4.10.0-HBase-1.2</version>
</dependency>
```
Yet it fails to download, and it complains with the following error:
```
Caused by: org.sonatype.aether.resolution.ArtifactResolutionException: Could
not find artifact org.apache.phoenix:phoenix:jar:4.10.0-HBase-1.2 in central
```
Am I missing anything here? Or is manually bundling the client jar the only option?
Thanks!

--
Juvenn Woo
Sent with Sparrow



Re: I got a very weird message from user@phoenix.apache.org

2017-02-20 Thread Ted Yu
In case anyone suspects that a certain post is missing, you can always find
the posts using:

http://search-hadoop.com/Phoenix

FYI

On Mon, Feb 20, 2017 at 9:21 AM, Josh Elser <josh.el...@gmail.com> wrote:

> Short answer is (likely) that your mail provider (Gmail) is rejecting
> posts to user@p.a.o which hit its spam trigger but did not hit the ASF's
> spam trigger.
>
> This triggers the mailing list to tell you that a message it tried to send
> you was rejected. So, you get a warning about a message that you never saw
> in the first place.
>
> But, as Ted says, just ignore it.
>
>
> On Feb 20, 2017 09:47, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
> I received such message as well.
>
> You can ignore it.
>
> On Feb 20, 2017, at 5:46 AM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
> Hi! This is the ezmlm program. I'm managing the
> user@phoenix.apache.org mailing list.
>
>
> Messages to you from the user mailing list seem to
> have been bouncing. I've attached a copy of the first bounce
> message I received.
>
> If this message bounces too, I will send you a probe. If the probe bounces,
> I will remove your address from the user mailing list,
> without further notice.
>
>
> I've kept a list of which messages from the user mailing list have
> bounced from your address.
>
>
>


Re: I got a very weird message from user@phoenix.apache.org

2017-02-20 Thread Ted Yu
I received such message as well. 

You can ignore it. 

> On Feb 20, 2017, at 5:46 AM, Cheyenne Forbes 
>  wrote:
> 
> Hi! This is the ezmlm program. I'm managing the
> user@phoenix.apache.org mailing list.
> 
> 
> Messages to you from the user mailing list seem to
> have been bouncing. I've attached a copy of the first bounce
> message I received.
> 
> If this message bounces too, I will send you a probe. If the probe bounces,
> I will remove your address from the user mailing list,
> without further notice.
> 
> 
> I've kept a list of which messages from the user mailing list have
> bounced from your address.


Re: Can I use protobuf2 with Phoenix instead of protobuf3?

2017-02-13 Thread Ted Yu
Phoenix uses protobuf 2.5.
From pom.xml:

    <protobuf-java.version>2.5.0</protobuf-java.version>
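
So you can stay on protobuf 2; if you want to make the version explicit in
your own build, a sketch of pinning it in your pom (this assumes Maven and
the standard com.google.protobuf artifact):

  <dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>2.5.0</version>
  </dependency>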

FYI

On Mon, Feb 13, 2017 at 4:52 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:

> My project depends heavily on protobuf 2. Can I tell Phoenix which version
> of protobuf to read with when I am sending a request?
>


Re: Query TimeOut on Azure HDInsight

2017-02-10 Thread Ted Yu
Sumanta:
bq. at region=TABLE1,,1450429763940.e30cec826e39df2e3b21e0baa6e1d9c0.,

Please check the log of the region server which hosted the above region around
the time of your query.

Which Phoenix / HBase release are you using?

Thanks
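
Also, as Mark suggests below, a covered index would let this query avoid the
full scan; a minimal sketch (assuming the column types permit it):

  CREATE INDEX IDX_TABLE1_COL1 ON TABLE1 (COL1) INCLUDE (COLX);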

On Fri, Feb 10, 2017 at 6:31 AM, Mark Heppner 
wrote:

> Sumanta,
> Doing the full scan over 100 million rows is going to be costly. How many
> region servers do you have? If this is a common query, you could add a
> secondary index on COL1 and INCLUDE(COLX). Otherwise, you'll have to
> increase hbase.rpc.timeout to something higher than 60000 and maybe even
> phoenix.query.timeoutMs. I'm sure there are other optimizations too, but
> I'll let someone else answer that.
>
> On Fri, Feb 10, 2017 at 7:40 AM, Sumanta Gh  wrote:
>
>> Hi,
>> We have a production system on Azure HDInsight.
>> There is a table called TABLE1 which has approx 100 million rows.
>>
>> Recently the following query is always timing out -
>>
>> SELECT DISTINCT COLX FROM TABLE1 WHERE COL1=1 LIMIT 10;
>>
>> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException:
>> org.apache.phoenix.exception.PhoenixIOException: Failed after
>> attempts=36, exceptions:
>> Fri Feb 10 12:06:14 GMT 2017, null, java.net.SocketTimeoutException:
>> callTimeout=60000, callDuration=72705: row '?  ?' on table 'TABLE1' at
>> region=TABLE1,,1450429763940.e30cec826e39df2e3b21e0baa6e1d9c0.,
>> hostname=workernode1.xx.d1.internal.cloudapp.net,60020,1483615853438,
>> seqNum=173240701
>>
>>
>> The explain plan is -
>> +---------------------------------------------------------------+
>> |                             PLAN                              |
>> +---------------------------------------------------------------+
>> | CLIENT 47-CHUNK PARALLEL 47-WAY RANGE SCAN OVER TABLE1 [1]    |
>> | SERVER AGGREGATE INTO DISTINCT ROWS BY [COLX] LIMIT 10 GROUPS |
>> | CLIENT MERGE SORT                                             |
>> | CLIENT 10 ROW LIMIT                                           |
>> +---------------------------------------------------------------+
>>
>>
>> How can we make this above query successful? Kindly reply urgently.
>>
>> Regards
>> Sumanta
>>
>>
>>
>
>
> --
> Mark Heppner
>


Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

2016-10-23 Thread Ted Yu
Looks like the user experience could be improved (by enriching the exception
message) for the case where table "abc" exists but table ABC cannot be found.

Cheers
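
For background, Phoenix upper-cases unquoted identifiers, so the two forms
below refer to different tables (a small illustration):

  CREATE TABLE dummy (PK VARCHAR PRIMARY KEY);    -- stored as DUMMY
  CREATE TABLE "dummy" (PK VARCHAR PRIMARY KEY);  -- stored as dummy

  SELECT * FROM dummy;     -- resolves to DUMMY
  SELECT * FROM "dummy";   -- case preserved, finds the lower-case table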

On Sun, Oct 23, 2016 at 9:26 AM, Mich Talebzadeh 
wrote:

> Thanks gents
>
> I dropped and recreated the table name and columns in UPPERCASE as follows:
>
> create table DUMMY (PK VARCHAR PRIMARY KEY, PRICE_INFO.TICKER VARCHAR,
> PRICE_INFO.TIMECREATED VARCHAR, PRICE_INFO.PRICE VARCHAR);
>
> and used this command as below passing table name in UPPERCASE as well
>
> HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf
> hadoop jar /usr/lib/hbase/lib/phoenix-4.8.1-HBase-1.2-client.jar
> org.apache.phoenix.mapreduce.CsvBulkLoadTool --table DUMMY --input
> /data/prices/2016-10-23/prices.1477228923115
>
> and this worked!
>
> 2016-10-23 17:20:33,089 INFO  [main] mapreduce.AbstractBulkLoadTool:
> Incremental load complete for table=DUMMY
> 2016-10-23 17:20:33,089 INFO  [main] mapreduce.AbstractBulkLoadTool:
> Removing output directory /tmp/261410fb-14d5-49fc-a717-dd0469db1673
>
> It would be helpful if the documentation were updated to reflect this.
>
> So, bottom line: should I create Phoenix table and column names in UPPERCASE
> regardless of the case of the underlying HBase table?
>
> Thanks
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 23 October 2016 at 17:10, anil gupta  wrote:
>
>> Hi Mich,
>>
>> It's recommended to use upper case for table and column names so that you
>> don't have to explicitly quote them.
>>
>> ~Anil
>>
>>
>>
>> On Sun, Oct 23, 2016 at 9:07 AM, Ravi Kiran 
>> wrote:
>>
>>> Sorry, I meant to say table names are case sensitive.
>>>
>>> On Sun, Oct 23, 2016 at 9:06 AM, Ravi Kiran 
>>> wrote:
>>>
 Hi Mich,
Apparently, the tables are case sensitive. Since you have enclosed a
 double quote when creating the table, please pass the same when running the
 bulk load job.

 HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf
 hadoop jar phoenix-4.8.1-HBase-1.2-client.jar
 org.apache.phoenix.mapreduce.CsvBulkLoadTool --table "dummy" --input
 /data/prices/2016-10-23/prices.1477228923115

 Regards


 On Sun, Oct 23, 2016 at 8:39 AM, Mich Talebzadeh <
 mich.talebza...@gmail.com> wrote:

> Not sure whether phoenix-4.8.1-HBase-1.2-client.jar is the correct
> jar file?
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for
> any loss, damage or destruction of data or any other property which may
> arise from relying on this email's technical content is explicitly
> disclaimed. The author will in no case be liable for any monetary damages
> arising from such loss, damage or destruction.
>
>
>
> On 23 October 2016 at 15:39, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Hi,
>>
>> My stack
>>
>> Hbase: hbase-1.2.3
>> Phoenix: apache-phoenix-4.8.1-HBase-1.2-bin
>>
>>
>> As a suggestion I tried to load an Hbase file via
>> org.apache.phoenix.mapreduce.CsvBulkLoadTool
>>
>> So
>>
>> I created a dummy table in Hbase as below
>>
>> create 'dummy', 'price_info'
>>
>> Then in Phoenix I created a table on Hbase table
>>
>>
>> create table "dummy" (PK VARCHAR PRIMARY KEY, "price_info"."ticker"
>> VARCHAR,"price_info"."timecreated" VARCHAR, "price_info"."price"
>> VARCHAR);
>>
>> And then used the following comman to load the csv file
>>
>>  
>> HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf
>> hadoop jar phoenix-4.8.1-HBase-1.2-client.jar
>> org.apache.phoenix.mapreduce.CsvBulkLoadTool --table dummy --input
>> /data/prices/2016-10-23/prices.1477228923115
>>
>> However, it does not seem to find the table dummy!
>>
>> 2016-10-23 14:38:39,442 INFO  [main] metrics.Metrics: Initializing
>> metrics system: phoenix
>> 

Re: Index in Phoenix view on Hbase is not updated

2016-10-22 Thread Ted Yu
The first statement creates index, not view. 

Can you check?

Cheers
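
For comparison, the two statements would look like this (a sketch reusing the
columns from your message):

  -- A view over the existing HBase table
  CREATE VIEW "marketDataHbase" (PK VARCHAR PRIMARY KEY,
    "price_info"."ticker" VARCHAR, "price_info"."price" VARCHAR,
    "price_info"."timecreated" VARCHAR);

  -- A secondary index on that view
  CREATE INDEX marketDataHbase_idx ON "marketDataHbase"
    ("price_info"."ticker", "price_info"."price", "price_info"."timecreated");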

> On Oct 22, 2016, at 1:51 AM, Mich Talebzadeh  
> wrote:
> 
> Hi,
> 
> I have an HBase table that is populated via
> org.apache.hadoop.hbase.mapreduce.ImportTsv
> through bulk load every 15 minutes. This works fine.
> 
> In Phoenix I created a view on this table
> 
> jdbc:phoenix:rhes564:2181> create index marketDataHbase_idx on
> "marketDataHbase" ("price_info"."ticker", "price_info"."price",
> "price_info"."timecreated");
> 
> This also does what it is supposed to do and shows the correct count.
> 
> I then created an index in Phoenix as below
> 
> create index index_dx1 on "marketDataHbase"
> ("price_info"."timecreated","price_info"."ticker", "price_info"."price");
> 
> that showed the records OK at that time. I verified this using explain
> 
> 
> 0: jdbc:phoenix:rhes564:2181> explain select count(1) from
> "marketDataHbase";
> +-+
> |  PLAN   |
> +-+
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER INDEX_DX1  |
> | SERVER FILTER BY FIRST KEY ONLY |
> | SERVER AGGREGATE INTO SINGLE ROW|
> +-+
> 
> Now the issue is that the above does not show new data added to the HBase
> table since the index build, unless I do the following:
> 
> 0: jdbc:phoenix:rhes564:2181> alter index INDEX_DX1 on "marketDataHbase"
> rebuild;
> 
> 
> Which is not what an index should do (The covered index should be
> maintained automatically).
> The simple issue is how to overcome this problem?
> 
> As I understand it, the index in Phoenix is another file independent of the
> original Phoenix view, so I assume that this index file is not updated for
> one reason or another?
> 
> Thanks


Re: How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Ted Yu
Hortonworks does offer support. 

> On Oct 13, 2016, at 5:40 AM, Antonio Murgia  wrote:
> 
> As far as I know, Cloudera lets you install Phoenix through a Parcel, for
> free. But they do not offer support for Phoenix.
> 
>> On 10/13/2016 01:38 PM, Cheyenne Forbes wrote:
>> Thats the question I shouldve asked myself, no
>> 
>> How can I get it done paid?
> 


Re: How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Ted Yu
If there are people who do this for free, would you trust them?

> On Oct 13, 2016, at 4:30 AM, Cheyenne Forbes 
>  wrote:
> 
> Are there people who do this for free?


Re: Creating secondary index on Phoenix view on Hbase table throws error

2016-10-12 Thread Ted Yu
bq. my h-base-site.xml

Seems to be a typo above - did you mean hbase-site.xml?

Have you checked every region server w.r.t. the value
for hbase.regionserver.wal.codec?

Cheers
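
For reference, every region server's hbase-site.xml needs exactly this entry
(as the error message states):

  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>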

On Wed, Oct 12, 2016 at 3:22 PM, Mich Talebzadeh 
wrote:

> Hi,
>
> In the following "marketDataHbase" is a view on Hbase table.
>
> This is my h-base-site.xml (running Hbase on standalone mode)
>
> <property>
>   <name>hbase.defaults.for.version.skip</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.regionserver.wal.codec</name>
>   <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
> </property>
> <property>
>   <name>hbase.region.server.rpc.scheduler.factory.class</name>
>   <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
>   <description>Factory to create the Phoenix RPC Scheduler that uses
>   separate queues for index and metadata updates</description>
> </property>
> <property>
>   <name>hbase.rpc.controllerfactory.class</name>
>   <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
>   <description>Factory to create the Phoenix RPC Scheduler that uses
>   separate queues for index and metadata updates</description>
> </property>
> <property>
>   <name>phoenix.functions.allowUserDefinedFunctions</name>
>   <value>true</value>
>   <description>enable UDF functions</description>
> </property>
>
> and I have restarted HBase but am still getting the below error!
> 0: jdbc:phoenix:thin:url=http://rhes564:8765> create index ticker_index
> on "marketDataHbase" ("ticker");
> Error: Error -1 (0) : Error while executing SQL "create index
> ticker_index on "marketDataHbase" ("ticker")": Remote driver error:
> RuntimeException: java.sql.SQLException: ERROR 1029 (42Y88): Mutable
> secondary indexes must have the hbase.regionserver.wal.codec property set
> to org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec in the
> hbase-sites.xml of every region server. tableName=TICKER_INDEX ->
> SQLException: ERROR 1029 (42Y88): Mutable secondary indexes must have the
> hbase.regionserver.wal.codec property set to org.apache.hadoop.hbase.
> regionserver.wal.IndexedWALEditCodec in the hbase-sites.xml of every
> region server. tableName=TICKER_INDEX (state=0,code=-1)
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>


Re: Accessing phoenix tables in Spark 2

2016-10-07 Thread Ted Yu
JIRA on hbase side:
HBASE-16179

FYI

On Fri, Oct 7, 2016 at 6:07 AM, Josh Mahonin  wrote:

> Hi Mich,
>
> There's an open ticket about this issue here:
> https://issues.apache.org/jira/browse/PHOENIX-
>
> Long story short, Spark changed their API (again), breaking the existing
> integration. I'm not sure the level of effort to get it working with Spark
> 2.0, but based on examples from other projects, it looks like there's a
> fair bit of Maven module work to support both Spark 1.x and Spark 2.x
> concurrently in the same project. Patches are very welcome!
>
> Best,
>
> Josh
>
>
>
> On Fri, Oct 7, 2016 at 8:33 AM, Mich Talebzadeh  > wrote:
>
>> Hi,
>>
>> Has anyone managed to read phoenix table in Spark 2 by any chance please?
>>
>> Thanks
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>
>


Re: where clause on Phoenix view built on Hbase table throws error

2016-10-05 Thread Ted Yu
Please take a look
at phoenix-core/src/it/java/org/apache/phoenix/end2end/ToNumberFunctionIT.java
where to_number() is used.
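
For example, converting the VARCHAR columns from your view (a sketch;
TO_NUMBER returns a DECIMAL):

  SELECT "Date", TO_NUMBER("volume") AS vol
  FROM "tsco"
  WHERE TO_NUMBER("volume") > 40000000;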

On Wed, Oct 5, 2016 at 7:34 AM, Mich Talebzadeh 
wrote:

> Thanks John.
>
> 0: jdbc:phoenix:rhes564:2181> select "Date","volume" from "tsco" where
> "Date" = '1-Apr-08';
> +---+---+
> |   Date|  volume   |
> +---+---+
> | 1-Apr-08  | 49664486  |
> +---+---+
> 1 row selected (0.016 seconds)
>
> BTW, I believe double quotes enclosing Phoenix column names are needed
> for case sensitivity on HBase?
>
>
> Also, does Phoenix have type conversion from VARCHAR to integer, etc.? Is
> there such a document?
>
> Regards
>
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 5 October 2016 at 15:24, John Leach  wrote:
>
> >
> > Remove the double quotes and try single quote.  Double quotes refers to
> an
> > identifier…
> >
> > Cheers,
> > John Leach
> >
> > > On Oct 5, 2016, at 9:21 AM, Mich Talebzadeh  >
> > wrote:
> > >
> > > Hi,
> > >
> > > I have this Hbase table already populated
> > >
> > > create 'tsco','stock_daily'
> > >
> > > and populated using
> > > $HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
> > > -Dimporttsv.separator=',' -Dimporttsv.columns="HBASE_ROW_KEY,
> > > stock_info:stock,stock_info:ticker,stock_daily:Date,stock_
> > daily:open,stock_daily:high,stock_daily:low,stock_daily:
> > close,stock_daily:volume"
> > > tsco hdfs://rhes564:9000/data/stocks/tsco.csv
> > > This works OK. In Hbase I have
> > >
> > > hbase(main):176:0> scan 'tsco', LIMIT => 1
> > > ROW            COLUMN+CELL
> > > TSCO-1-Apr-08  column=stock_daily:Date, timestamp=1475525222488, value=1-Apr-08
> > > TSCO-1-Apr-08  column=stock_daily:close, timestamp=1475525222488, value=405.25
> > > TSCO-1-Apr-08  column=stock_daily:high, timestamp=1475525222488, value=406.75
> > > TSCO-1-Apr-08  column=stock_daily:low, timestamp=1475525222488, value=379.25
> > > TSCO-1-Apr-08  column=stock_daily:open, timestamp=1475525222488, value=380.00
> > > TSCO-1-Apr-08  column=stock_daily:stock, timestamp=1475525222488, value=TESCO PLC
> > > TSCO-1-Apr-08  column=stock_daily:ticker, timestamp=1475525222488, value=TSCO
> > > TSCO-1-Apr-08  column=stock_daily:volume, timestamp=1475525222488, value=49664486
> > >
> > > In Phoenix I have a view "tsco" created on Hbase table as follows:
> > >
> > > 0: jdbc:phoenix:rhes564:2181> create view "tsco" (PK VARCHAR PRIMARY
> KEY,
> > > "stock_daily"."Date" VARCHAR, "stock_daily"."close" VARCHAR,
> > > "stock_daily"."high" VARCHAR, "stock_daily"."low" VARCHAR,
> > > "stock_daily"."open" VARCHAR, "stock_daily"."ticker" VARCHAR,
> > > "stock_daily"."stock" VARCHAR, "stock_daily"."volume" VARCHAR)
> > >
> > > So all good.
> > >
> > > This works
> > >
> > > 0: jdbc:phoenix:rhes564:2181> select "Date","volume" from "tsco" limit
> 2;
> > > +---+---+
> > > |   Date|  volume   |
> > > +---+---+
> > > | 1-Apr-08  | 49664486  |
> > > | 1-Apr-09  | 24877341  |
> > > +---+---+
> > > 2 rows selected (0.011 seconds)
> > >
> > > However, I don't seem to be able to use where clause!
> > >
> > > 0: jdbc:phoenix:rhes564:2181> select "Date","volume" from "tsco" where
> > > "Date" = "1-Apr-08";
> > > Error: ERROR 504 (42703): Undefined column. columnName=1-Apr-08
> > > (state=42703,code=504)
> > > org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703):
> > > Undefined column. columnName=1-Apr-08
> > >
> > > Why does it think a predicate "1-Apr-08" is a column?
> > >
> > > Any ideas?
> > >
> > > Thanks
> > >
> > >
> > >
> > > Dr Mich Talebzadeh
> > >
> > >
> > >
> > > LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > >
> > >
> > >
> > > http://talebzadehmich.wordpress.com
> > >
> > >
> > > *Disclaimer:* Use it at your own risk. Any and all responsibility for
> any
> > > loss, damage or destruction of data or any other property which may
> arise
> > > from relying on this email's technical content is explicitly
> disclaimed.
> > > The author will in no case be liable for any monetary damages arising
> > from
> > > such 

Re: where clause on Phoenix view built on Hbase table throws error

2016-10-05 Thread Ted Yu
I think the Phoenix mailing list is the proper one for this thread.

On Wed, Oct 5, 2016 at 7:24 AM, John Leach  wrote:

>
> Remove the double quotes and try single quote.  Double quotes refers to an
> identifier…
>
> Cheers,
> John Leach
>
> > On Oct 5, 2016, at 9:21 AM, Mich Talebzadeh 
> wrote:
> >
> > Hi,
> >
> > I have this Hbase table already populated
> >
> > create 'tsco','stock_daily'
> >
> > and populated using
> > $HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
> > -Dimporttsv.separator=',' -Dimporttsv.columns="HBASE_ROW_KEY,
> > stock_info:stock,stock_info:ticker,stock_daily:Date,stock_
> daily:open,stock_daily:high,stock_daily:low,stock_daily:
> close,stock_daily:volume"
> > tsco hdfs://rhes564:9000/data/stocks/tsco.csv
> > This works OK. In Hbase I have
> >
> > hbase(main):176:0> scan 'tsco', LIMIT => 1
> > ROW            COLUMN+CELL
> > TSCO-1-Apr-08  column=stock_daily:Date, timestamp=1475525222488, value=1-Apr-08
> > TSCO-1-Apr-08  column=stock_daily:close, timestamp=1475525222488, value=405.25
> > TSCO-1-Apr-08  column=stock_daily:high, timestamp=1475525222488, value=406.75
> > TSCO-1-Apr-08  column=stock_daily:low, timestamp=1475525222488, value=379.25
> > TSCO-1-Apr-08  column=stock_daily:open, timestamp=1475525222488, value=380.00
> > TSCO-1-Apr-08  column=stock_daily:stock, timestamp=1475525222488, value=TESCO PLC
> > TSCO-1-Apr-08  column=stock_daily:ticker, timestamp=1475525222488, value=TSCO
> > TSCO-1-Apr-08  column=stock_daily:volume, timestamp=1475525222488, value=49664486
> >
> > In Phoenix I have a view "tsco" created on Hbase table as follows:
> >
> > 0: jdbc:phoenix:rhes564:2181> create view "tsco" (PK VARCHAR PRIMARY KEY,
> > "stock_daily"."Date" VARCHAR, "stock_daily"."close" VARCHAR,
> > "stock_daily"."high" VARCHAR, "stock_daily"."low" VARCHAR,
> > "stock_daily"."open" VARCHAR, "stock_daily"."ticker" VARCHAR,
> > "stock_daily"."stock" VARCHAR, "stock_daily"."volume" VARCHAR)
> >
> > So all good.
> >
> > This works
> >
> > 0: jdbc:phoenix:rhes564:2181> select "Date","volume" from "tsco" limit 2;
> > +---+---+
> > |   Date|  volume   |
> > +---+---+
> > | 1-Apr-08  | 49664486  |
> > | 1-Apr-09  | 24877341  |
> > +---+---+
> > 2 rows selected (0.011 seconds)
> >
> > However, I don't seem to be able to use where clause!
> >
> > 0: jdbc:phoenix:rhes564:2181> select "Date","volume" from "tsco" where
> > "Date" = "1-Apr-08";
> > Error: ERROR 504 (42703): Undefined column. columnName=1-Apr-08
> > (state=42703,code=504)
> > org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703):
> > Undefined column. columnName=1-Apr-08
> >
> > Why does it think a predicate "1-Apr-08" is a column?
> >
> > Any ideas?
> >
> > Thanks
> >
> >
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
>
>


Re: Cannot select data from a system table

2016-08-31 Thread Ted Yu
Thanks for confirmation, Ankit. 

> On Aug 31, 2016, at 3:36 AM, Ankit Singhal <ankitsingha...@gmail.com> wrote:
> 
> bq. Is this documented somewhere ?
> Not as such; https://phoenix.apache.org/language/index.html#quoted_name is
> generally for case-sensitive identifiers (and to allow some special characters),
> and the same can be used for keywords.
> 
> bq. Looks like tokens in phoenix-core/src/main/antlr3/PhoenixSQL.g would give
> us a good idea.
> Yes Ted, you are right . Phoenix keywords are the tokens in 
> phoenix-core/src/main/antlr3/PhoenixSQL.g 
> 
> 
> 
>> On Sun, Aug 21, 2016 at 8:33 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>> Looks like tokens in phoenix-core/src/main/antlr3/PhoenixSQL.g would give us
>> a good idea.
>> 
>> Experts please correct me if I am wrong.
>> 
>>> On Sun, Aug 21, 2016 at 7:21 AM, Aaron Molitor <amoli...@splicemachine.com> 
>>> wrote:
>>> Thanks, Ankit, that worked. 
>>> 
>>> And on the heels of Ted's question... Are the reserved words documented 
>>> (even if just a list) somewhere, I've been looking at this page: 
>>> http://phoenix.apache.org/language/index.html  -- it feels like where I 
>>> should find a list like that, but I don't see it explicitly called out.  
>>> 
>>> -Aaron
>>>> On Aug 21, 2016, at 09:04, Ted Yu <yuzhih...@gmail.com> wrote:
>>>> 
>>>> Ankit:
>>>> Is this documented somewhere ?
>>>> 
>>>> Thanks
>>>> 
>>>>> On Sun, Aug 21, 2016 at 6:07 AM, Ankit Singhal <ankitsingha...@gmail.com> 
>>>>> wrote:
>>>>> Aaron,
>>>>> 
>>>>> you can escape check for reserved keyword with double quotes ""
>>>>> 
>>>>> SELECT * FROM SYSTEM."FUNCTION"
>>>>> 
>>>>> Regards,
>>>>> Ankit Singhal
>>>>> 
>>>>>> On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor 
>>>>>> <amoli...@splicemachine.com> wrote:
>>>>>> Looks like the SYSTEM.FUNCTION table is names with a reserved word. Is 
>>>>>> this a known bug?
>>>>>> 
>>>>>> 
>>>>>> 0: jdbc:phoenix:stl-colo-srv073.splicemachine> !tables
>>>>>> ++--+-+---+--+++-+--+-+---+---+-++---+
>>>>>> | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   | REMARKS  | 
>>>>>> TYPE_NAME  | SELF_REFERENCING_COL_NAME  | REF_GENERATION  | INDEX_STATE  
>>>>>> | IMMUTABLE_ROWS  | SALT_BUCKETS  | MULTI_TENANT  | VIEW_STATEMENT  | 
>>>>>> VIEW_TYPE  | INDEX_TYP |
>>>>>> ++--+-+---+--+++-+--+-+---+---+-++---+
>>>>>> || SYSTEM   | CATALOG | SYSTEM TABLE  |  |   
>>>>>>  || |  | 
>>>>>> false   | null  | false | |  
>>>>>>   |   |
>>>>>> || SYSTEM   | FUNCTION| SYSTEM TABLE  |  |   
>>>>>>  || |  | 
>>>>>> false   | null  | false | |  
>>>>>>   |   |
>>>>>> || SYSTEM   | SEQUENCE| SYSTEM TABLE  |  |   
>>>>>>  || |  | 
>>>>>> false   | null  | false | |  
>>>>>>   |   |
>>>>>> || SYSTEM   | STATS   | SYSTEM TABLE  |  |   
>>>>>>  || |  | 
>>>>>> false   | null  | false | |  
>>>>>>   |   |
>>>>>> || TPCH | CUSTOMER| TABLE |  |   
>>>>>>  |

Re: Help w/ table that suddenly keeps timing out

2016-08-29 Thread Ted Yu
I searched for "Cannot get all table regions" in the hbase repo - no hit.
Seems to be a Phoenix error.

Anyway, the cause could be the 1 offline region for this table.
Can you retrieve the encoded region name and search for it in the master
log?

Feel free to pastebin snippets of master / region server logs if needed
(with proper redaction).

See if the following shell command works:

  hbase> assign 'REGIONNAME'
  hbase> assign 'ENCODED_REGIONNAME'

Cheers
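
To find the encoded name of the offline region, hbck can also help (run it as
the hbase user; output format varies by release):

  hbase hbck -details <TABLE_NAME>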

On Mon, Aug 29, 2016 at 9:41 AM, Riesland, Zack 
wrote:

> Our cluster recently had some issues related to network outages*.
>
> When all the dust settled, Hbase eventually "healed" itself, and almost
> everything is back to working well, with a couple of exceptions.
>
> In particular, we have one table where almost every (Phoenix) query times
> out - which was never the case before. It's very small compared to most of
> our other tables at around 400 million rows.
>
> I have tried with a raw JDBC connection in Java code as well as with Aqua
> Data Studio, both of which usually work fine.
>
> The specific failure is that after 15 minutes (the set timeout),  I get a
> one-line error that says: “Error 1102 (XCL02): Cannot get all table regions”
>
> When I look at the GUI tools (like http://<server>:16010/master-status#storeStats)
> it shows '1' under "offline regions" for that table (it has 33 total
> regions). Almost all the other tables show '0'.
>
> Can anyone help me troubleshoot this?
>
> Are there Phoenix tables I can clear out that may be confused?
>
> This isn’t an issue with the schema or skew or anything. The same table
> with the same data was lightning fast before these hbase issues.
>
> I know there is a CLI tool for fixing HBase issues. I'm wondering whether
> that "offline region" is the cause of these timeouts.
>
> If not, how I can I figure it out?
>
> Thanks!
>
>
>
> * FWIW, what happened was that DNS stopped working for a while, so HBase
> started referring to all the region servers by IP address, which somewhat
> worked, until the region servers restarted. Then they were hosed until a
> bit of manual intervention.
>
>
>


Re: Cannot select data from a system table

2016-08-21 Thread Ted Yu
Looks like tokens in phoenix-core/src/main/antlr3/PhoenixSQL.g would give
us a good idea.

Experts please correct me if I am wrong.

On Sun, Aug 21, 2016 at 7:21 AM, Aaron Molitor <amoli...@splicemachine.com>
wrote:

> Thanks, Ankit, that worked.
>
> And on the heels of Ted's question... Are the reserved words documented
> (even if just a list) somewhere, I've been looking at this page:
> http://phoenix.apache.org/language/index.html  -- it feels like where I
> should find a list like that, but I don't see it explicitly called out.
>
> -Aaron
>
> On Aug 21, 2016, at 09:04, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Ankit:
> Is this documented somewhere?
>
> Thanks
>
> On Sun, Aug 21, 2016 at 6:07 AM, Ankit Singhal <ankitsingha...@gmail.com>
> wrote:
>
>> Aaron,
>>
>> you can escape check for reserved keyword with double quotes ""
>>
>> SELECT * FROM SYSTEM."FUNCTION"
>>
>> Regards,
>> Ankit Singhal
>>
>> On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor <
>> amoli...@splicemachine.com> wrote:
>>
>>> Looks like the SYSTEM.FUNCTION table is names with a reserved word. Is
>>> this a known bug?
>>>
>>>
>>> 0: jdbc:phoenix:stl-colo-srv073.splicemachine> !tables
>>> +------------+--------------+-------------+---------------+-----------------+---------------+---------------+
>>> | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   | IMMUTABLE_ROWS  | SALT_BUCKETS  | MULTI_TENANT  |
>>> +------------+--------------+-------------+---------------+-----------------+---------------+---------------+
>>> |            | SYSTEM       | CATALOG     | SYSTEM TABLE  | false           | null          | false         |
>>> |            | SYSTEM       | FUNCTION    | SYSTEM TABLE  | false           | null          | false         |
>>> |            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  | false           | null          | false         |
>>> |            | SYSTEM       | STATS       | SYSTEM TABLE  | false           | null          | false         |
>>> |            | TPCH         | CUSTOMER    | TABLE         | false           | null          | false         |
>>> |            | TPCH         | LINEITEM    | TABLE         | false           | null          | false         |
>>> |            | TPCH         | NATION      | TABLE         | false           | null          | false         |
>>> |            | TPCH         | ORDERS      | TABLE         | false           | null          | false         |
>>> |            | TPCH         | PART        | TABLE         | false           | null          | false         |
>>> |            | TPCH

Re: Cannot select data from a system table

2016-08-21 Thread Ted Yu
Ankit:
Is this documented somewhere?

Thanks

On Sun, Aug 21, 2016 at 6:07 AM, Ankit Singhal 
wrote:

> Aaron,
>
> you can escape the check for a reserved keyword by wrapping the name in
> double quotes (""):
>
> SELECT * FROM SYSTEM."FUNCTION"
>
> Regards,
> Ankit Singhal
>
> On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor <
> amoli...@splicemachine.com> wrote:
>
>> Looks like the SYSTEM.FUNCTION table is named with a reserved word. Is
>> this a known bug?
>>
>>
>> 0: jdbc:phoenix:stl-colo-srv073.splicemachine> !tables
>> +------------+--------------+-------------+---------------+-----------------+---------------+---------------+
>> | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   | IMMUTABLE_ROWS  | SALT_BUCKETS  | MULTI_TENANT  |
>> +------------+--------------+-------------+---------------+-----------------+---------------+---------------+
>> |            | SYSTEM       | CATALOG     | SYSTEM TABLE  | false           | null          | false         |
>> |            | SYSTEM       | FUNCTION    | SYSTEM TABLE  | false           | null          | false         |
>> |            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  | false           | null          | false         |
>> |            | SYSTEM       | STATS       | SYSTEM TABLE  | false           | null          | false         |
>> |            | TPCH         | CUSTOMER    | TABLE         | false           | null          | false         |
>> |            | TPCH         | LINEITEM    | TABLE         | false           | null          | false         |
>> |            | TPCH         | NATION      | TABLE         | false           | null          | false         |
>> |            | TPCH         | ORDERS      | TABLE         | false           | null          | false         |
>> |            | TPCH         | PART        | TABLE         | false           | null          | false         |
>> |            | TPCH         | PARTSUPP    | TABLE         | false           | null          | false         |
>> |            | TPCH         | REGION      | TABLE         | false           | null          | false         |
>> |            | TPCH         | SUPPLIER    | TABLE         | false           | null          | false         |
>> +------------+--------------+-------------+---------------+-----------------+---------------+---------------+
>> 0: jdbc:phoenix:stl-colo-srv073.splicemachine> select * from
>> SYSTEM.FUNCTION;
>> Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting
>> "NAME", got "FUNCTION" at line 1, column 22. (state=42P00,code=604)
>> org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00):
>> Syntax error. Mismatched input. Expecting "NAME", got "FUNCTION" at line 1,
>> column 22.
>> at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>> at 

Re: Problems with Phoenix and HBase

2015-05-18 Thread Ted Yu
Sending to Phoenix user mailing list.

Here is the thread:
http://search-hadoop.com/m/YGbbu2WzHtZBkq1

On Mon, May 18, 2015 at 7:20 AM, Asfare aman...@hotmail.com wrote:

 Can someone give some tips?



 --
 View this message in context:
 http://apache-hbase.679495.n3.nabble.com/Problems-with-Phoenix-and-HBase-tp4071362p4071537.html
 Sent from the HBase Developer mailing list archive at Nabble.com.



Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

2015-03-05 Thread Ted Yu
Ani:
You can find Phoenix release artifacts here:
http://archive.apache.org/dist/phoenix/

e.g. for 4.1.0:
http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/

Cheers

On Thu, Mar 5, 2015 at 5:26 PM, anil gupta anilgupt...@gmail.com wrote:

 @James: Could you point me to a place where I can find a tar file of the
 Phoenix-4.0.0-incubating release? All the links on this page are broken:
 http://www.apache.org/dyn/closer.cgi/incubator/phoenix/

 On Thu, Mar 5, 2015 at 5:04 PM, anil gupta anilgupt...@gmail.com wrote:

  I have tried to disable the table, but since none of the RS are coming up,
  I am unable to do it. Am I missing something?
  On the server side, we were using 4.0.0-incubating. It seems like my only
  option is to upgrade the server to 4.1, so that at least the HBase cluster
  comes up. I just want my cluster to come up, and then I will disable the
  table that has a Phoenix view.
  What would be the possible side effects of using Phoenix 4.1 with
  HDP 2.1.5?
  If the problem is not fixed even after updating to Phoenix 4.1, what is
  the next alternative?
 
 
  On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk ndimi...@gmail.com wrote:
 
  Hi Anil,
 
  HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or
  trying out a newer version? As James says, the upgrade must be servers
  first, then client. Also, Phoenix versions tend to be picky about their
  underlying HBase version.
 
  You can also try altering the now-broken phoenix tables via HBase shell,
  removing the phoenix coprocessor. I've tried this in the past with other
  coprocessor-loading woes and had mixed results. Try: disable table,
 alter
  table, enable table. There's still sharp edges around coprocessor-based
  deployment.
 
  Keep us posted, and sorry for the mess.
 
  -n
 
  [0]:
 
 http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
 
  On Thu, Mar 5, 2015 at 4:34 PM, anil gupta anilgupt...@gmail.com
 wrote:
 
  Unfortunately, we ran out of luck on this one because we are not
 running
  the latest version of HBase. This property was introduced recently:
  https://issues.apache.org/jira/browse/HBASE-13044 :(
  Thanks, Vladimir.
 
  On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov 
  vladrodio...@gmail.com wrote:
 
  Try the following:
 
  Update hbase-site.xml config, set
 
  hbase.coprocessor.enabled=false
 
  or:
 
  hbase.coprocessor.user.enabled=false
 
  sync config across cluster.
 
  restart the cluster
 
  then update your table's settings in hbase shell
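  
   For reference, the corresponding hbase-site.xml stanza would look roughly
   like this (as noted above, the switch only exists in releases that
   include HBASE-13044):
  
     <property>
       <name>hbase.coprocessor.enabled</name>
       <value>false</value>
     </property>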
 
  -Vlad
 
 
  On Thu, Mar 5, 2015 at 3:32 PM, anil gupta anilgupt...@gmail.com
  wrote:
 
  Hi All,
 
  I am using HDP 2.1.5; Phoenix 4.0.0 was installed on the RS. I was running
  the Phoenix 4.1 client because I could not find a tar file for
  Phoenix 4.0.0-incubating.
  I tried to create a view on an existing table, and then my entire cluster
  went down (all the RS went down; the Master is still up).
 
 
  This is the exception I am seeing:
 
  2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer:
  ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor
  org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
  java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
      at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
      at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
      at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
      at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
      at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
      at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
      at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
      at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
      at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
      at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
      at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
      at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
      at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
      at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)