Re: Create 2 Index on single and Drop 1 Index makes the view not usable...
Hi James, I have filed JIRA PHOENIX-1482 (https://issues.apache.org/jira/browse/PHOENIX-1482). Thanks, Saravanan.A

On Tue, Nov 25, 2014 at 8:03 AM, James Taylor jamestay...@apache.org wrote:
Please file a JIRA. If you can put together a unit test that repros the issue, that'd be much appreciated. Thanks, James

On Wed, Nov 19, 2014 at 6:02 AM, Saravanan A asarava...@alphaworkz.com wrote:
Hi, I have a use case where I am trying to create two indexes on the same column of the same table, with different index names:
1. Create a view on the HBase table (MainTable).
2. Create index1 on column (A).
3. Create index2 on column (A).
4. Drop index1.
Then select * from MainTable where A='xyz'; gives the following error:
Error: ERROR 1012 (42M03): ERROR 1012 (42M03): Table undefined. tableName=test_status_2 Index not found (state=42M03,code=1012)
This brings down the entire table: I can't run anything against that view, and even dropping the view gives an error. What is the problem? Regards, Saravanan.A
Create 2 Index on single and Drop 1 Index makes the view not usable...
Hi, I have a use case where I am trying to create two indexes on the same column of the same table, with different index names:
1. Create a view on the HBase table (MainTable).
2. Create index1 on column (A).
3. Create index2 on column (A).
4. Drop index1.
Then select * from MainTable where A='xyz'; gives the following error:
Error: ERROR 1012 (42M03): ERROR 1012 (42M03): Table undefined. tableName=test_status_2 Index not found (state=42M03,code=1012)
This brings down the entire table: I can't run anything against that view, and even dropping the view gives an error. What is the problem? Regards, Saravanan.A
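The steps above can be sketched as Phoenix SQL. This is only an illustrative repro outline: the column family, key column, and exact view definition are assumptions, since the original post does not show its DDL.

```sql
-- Hypothetical repro of the reported sequence (names and types are illustrative).
CREATE VIEW "MainTable" (pk VARCHAR PRIMARY KEY, "cf"."A" VARCHAR);
CREATE INDEX index1 ON "MainTable" ("cf"."A");
CREATE INDEX index2 ON "MainTable" ("cf"."A");
DROP INDEX index1 ON "MainTable";
-- After the drop, this query reportedly fails with ERROR 1012 (42M03):
SELECT * FROM "MainTable" WHERE "cf"."A" = 'xyz';
```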
Re: Error when try to create mutable secondary index...
Issue resolved. I added the property

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

in the CDH UI under "RegionServer Configuration Safety Valve for hbase-site.xml". Thanks for your help.

On Mon, Aug 11, 2014 at 11:05 PM, Saravanan A asarava...@alphaworkz.com wrote:
yes..

On Mon, Aug 11, 2014 at 10:59 PM, Jesse Yates jesse.k.ya...@gmail.com wrote:
That seems correct. I'm not sure where the issue is either. It seems like the property isn't in the correct config files (also, you don't need it on the master configs, but it won't hurt). Is the property there when you dump the config from the RS's UI page?
--- Jesse Yates @jesse_yates jyates.github.com

On Mon, Aug 11, 2014 at 10:27 AM, Saravanan A asarava...@alphaworkz.com wrote:
No, I am not sure where the issue is. The procedure I followed for the Phoenix installation:
1. Extracted Phoenix 3.0.
2. Added the Phoenix core jar on all region servers and on the master.
3. Added this property

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

to the hbase-site.xml files on the region servers, on the master, and in the Phoenix bin dir.
4. Restarted HBase.
Is this right, or am I missing anything?

On Mon, Aug 11, 2014 at 10:38 PM, Jesse Yates jesse.k.ya...@gmail.com wrote:
Well now, that is strange. Maybe it's something to do with CDH? Have you talked to those fellas? Or maybe someone from Cloudera has an insight?
Seems like it should work On Aug 11, 2014 9:55 AM, Saravanan A asarava...@alphaworkz.com wrote: *bin/hbase classpath:* */opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../conf:/usr/java/default/lib/tools.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/..:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../hbase-0.94.15-cdh4.7.0-security.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../hbase-0.94.15-cdh4.7.0-security-tests.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../hbase.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/avro-1.7.4.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/cloudera-jets3t-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-daemon-1.0.3.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-el-1.0.jar:/opt/cloudera/p
arcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-io-2.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-lang-2.5.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/core-3.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/gmbal-api-only-3.0.0-b023.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-framework-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-framework-2.1.1-tests.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-http-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-http-server-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-http-servlet-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-rcm-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/guice
Index over View is not getting updated..
Hi, I created a view over an existing HBase table and created an index on a column (at that point my table had 5 records). Later I added 5 more records to the HBase table, and I can see all 10 records in the Phoenix view, but when I checked the index it still has only the first 5 records; the remaining 5 are not picked up. So does an index over a view not update automatically, or do we need to re-create the index?

*Step 1:*
jdbc:phoenix:zk1.alp.com> select * from tab2;
+---------+---------+---------+---------+---------+---------+
| RowKey  | t2_col5 | t2_col3 | t2_col1 | t2_col2 | t2_col6 |
+---------+---------+---------+---------+---------+---------+
| a       | 22      | 11      | 1       | 6       | 27      |
| b       | 23      | 12      | 2       | 7       | 28      |
| c       | 24      | 13      | 3       | 8       | 29      |
| d       | 25      | 14      | 4       | 9       | 30      |
| e       | 26      | 15      | 5       | 10      | vv      |
+---------+---------+---------+---------+---------+---------+

*Step 2:*
create index tab2_t2_col2 on tab2 (t2_col2);
5 rows affected (1.48 seconds)
select * from tab2_t2_col2;
+-------------+---------+
| cf2:t2_col2 | :RowKey |
+-------------+---------+
| 27          | a       |
| 28          | b       |
| 29          | c       |
| 30          | d       |
| vv          | e       |
+-------------+---------+

*Step 3:* Added 5 more records to the HBase table.
select * from tab2;
+---------+---------+---------+---------+---------+---------+
| RowKey  | t2_col5 | t2_col3 | t2_col1 | t2_col2 | t2_col6 |
+---------+---------+---------+---------+---------+---------+
| a       | 22      | 11      | 1       | 6       | 27      |
| b       | 23      | 12      | 2       | 7       | 28      |
| c       | 24      | 13      | 3       | 8       | 29      |
| d       | 25      | 14      | 4       | 9       | 30      |
| e       | 26      | 15      | 5       | 10      | vv      |
| lkiouy  | lbftcv  | lnmbvf  | llkjhj  | liuyb   | lbcfrtg |
| mjhui   | mjyfc   | mvdrt   | mbvty   | mufcgb  | mvcdr   |
| njuhy   | nloids  | ncsrfb  | nhtdcb  | nvdtbn  | noadv   |
| opiygb  | obgvv   | ougfvc  | oufvcb  | onhjvty | ofccv   |
| pnbgfuh | pdfgvew | pdsvbf  | padscv  | padfqv  | fadfdsv |
+---------+---------+---------+---------+---------+---------+

*Step 4:*
select * from tab2_t2_col2;
+-------------+---------+
| cf2:t2_col2 | :RowKey |
+-------------+---------+
| 27          | a       |
| 28          | b       |
| 29          | c       |
| 30          | d       |
| vv          | e       |
+-------------+---------+

At step 4 I expected to see 10 records, but I am only getting 5. My Phoenix version is 3.0 and my HBase version is 0.94.15. Regards, Saravanan.A
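For context, this behavior is expected: Phoenix maintains secondary indexes only for writes that go through Phoenix itself, so rows put directly into the underlying HBase table bypass index maintenance. A sketch of the two options, using the tab2 names from the steps above (column list and syntax availability in this Phoenix version are assumptions):

```sql
-- Writes issued through Phoenix keep the index in sync automatically:
UPSERT INTO tab2 (RowKey, t2_col2) VALUES ('f', '31');

-- After direct HBase puts, the index can be brought up to date by rebuilding
-- it from the data table (if ALTER INDEX ... REBUILD is supported in this
-- version; otherwise, drop and re-create the index):
ALTER INDEX tab2_t2_col2 ON tab2 REBUILD;
```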
Re: Error when try to create mutable secondary index...
doesn't include the HBase config files, so the code executed will correctly tell you that the class exists, but is not configured. Have you tried running bin/hbase classpath to see what your classpath is at RS startup? If it's the same as the -cp argument, it's missing the config files.

On Aug 11, 2014 6:10 AM, Saravanan A asarava...@alphaworkz.com wrote:
*This is the command I ran on the hbase classpath (test1.jar is my jar file):*
hbase -cp .:hadoop-common-2.0.0-cdh4.7.0.jar:commons-logging-1.1.1.jar:hbase-0.94.15-cdh4.7.0-security.jar:com.google.collections.jar:commons-collections-3.2.1.jar:phoenix-core-3.0.0-incubating.jar:com.google.guava_1.6.0.jar:test1.jar FixConfigFile

*The output:*
Found
Not Found

*This is my full code:*

import org.apache.hadoop.conf.Configuration;

public class FixConfigFile {

    public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
        "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
    public static final String WAL_EDIT_CODEC_CLASS_KEY =
        "org.apache.hadoop.hbase.regionserver.wal.codec";

    public static void main(String[] args) {
        Configuration config = new Configuration();
        isWALEditCodecSet(config);
    }

    public static boolean isWALEditCodecSet(Configuration conf) {
        // check to see if the WALEditCodec is installed
        try {
            // Use reflection to load the IndexedWALEditCodec, since it may not
            // load with an older version of HBase
            Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
            System.out.println("Found");
        } catch (Throwable t) {
            System.out.println("Error");
            return false;
        }
        if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
            // its installed, and it can handle compression and non-compression cases
            System.out.println("True");
            return true;
        }
        System.out.println("Not Found");
        return false;
    }
}

I am not sure this is how you wanted me to execute the code; if I am wrong, please guide me.

On Sat, Aug 9, 2014 at 8:32 PM, Jesse Yates jesse.k.ya...@gmail.com wrote:
When you run
$ bin/hbase classpath
What do you get?
Should help illuminate if everything is set up right. If the phoenix jar is there, then check the contents of the jar (http://docs.oracle.com/javase/tutorial/deployment/jar/view.html) and make sure the classes are present.

On Aug 9, 2014 1:03 AM, Saravanan A asarava...@alphaworkz.com wrote:
Hi Jesse, I ran the following code to test the existence of the classes you asked me to check. I initialized the two constants to the following values:

public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
    "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
public static final String WAL_EDIT_CODEC_CLASS_KEY =
    "hbase.regionserver.wal.codec";

Then I ran the following code and got "Not Found" from the equality test:

if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
    // its installed, and it can handle compression and non-compression cases
    System.out.println("True");
    return true;
}
System.out.println("Not Found");

I am not sure if I initialized the values for the constants correctly. If I did, then I think some jars are missing or I have an incorrect version. We use CDH 4.7, which has HBase version 0.94.15 and Phoenix version 3.0. Can you tell me how to make this work? Your assistance is greatly appreciated.
Regards, Saravanan.A

Full code:

public static void main(String[] args) {
    Configuration config = new Configuration();
    isWALEditCodecSet(config);
}

public static boolean isWALEditCodecSet(Configuration conf) {
    // check to see if the WALEditCodec is installed
    try {
        // Use reflection to load the IndexedWALEditCodec, since it may not
        // load with an older version of HBase
        Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
        System.out.println("Found");
    } catch (Throwable t) {
        System.out.println("Error");
        return false;
    }
    if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
        // its installed, and it can handle compression and non-compression cases
        System.out.println("True");
        return true;
    }
    System.out.println("Not Found");
    return false;
}

On Sat, Aug 9, 2014 at 12:02 AM, Jesse Yates jesse.k.ya...@gmail.com wrote:
This error is thrown when, on the server side, the following code returns false (IndexManagementUtil#isWALEditCodecSet):

public static boolean isWALEditCodecSet(Configuration conf) {
    // check to see if the WALEditCodec is installed
    try
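The essence of the check above can be isolated from Hadoop entirely. The sketch below is a hypothetical, self-contained version: a plain Map stands in for org.apache.hadoop.conf.Configuration, and it shows the condition the server ultimately tests, namely that the key "hbase.regionserver.wal.codec" is set to exactly the IndexedWALEditCodec class name (the class-loadability half of the check is omitted since it needs the Phoenix jar on the classpath):

```java
import java.util.HashMap;
import java.util.Map;

public class WalCodecCheck {
    // Values from the thread: the config key and the required codec class name.
    static final String CODEC_CLASS =
        "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
    static final String CODEC_KEY = "hbase.regionserver.wal.codec";

    // Returns true only when the key is present AND set to the indexed codec.
    // A plain Map stands in for Hadoop's Configuration in this sketch.
    static boolean isCodecConfigured(Map<String, String> conf) {
        return CODEC_CLASS.equals(conf.get(CODEC_KEY));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(isCodecConfigured(conf));   // key missing: false
        conf.put(CODEC_KEY, CODEC_CLASS);
        System.out.println(isCodecConfigured(conf));   // set correctly: true
    }
}
```

This also explains Jesse's point above: running the check with -cp but without the HBase conf directory loads an empty Configuration, so the class is found but the key lookup fails, printing "Found" then "Not Found".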
Error when try to create mutable secondary index...
Hi, I have a table in HBase, created a view on it in Phoenix, and tried to create an index on a column of the view, but I got the following error:
Error: ERROR 1029 (42Y88): Mutable secondary indexes must have the hbase.regionserver.wal.codec property set to org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec in the hbase-sites.xml of every region server tableName=tab2_col4 (state=42Y88,code=1029)
But I have added the hbase.regionserver.wal.codec property on all my region servers, and I can create an IMMUTABLE index on it. I am using HBase 0.94.15-cdh4.7.0 and Phoenix 3.0. Am I missing something? Thanks in advance. Regards, Saravanan
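For context, the immutable workaround mentioned in this thread is a property on the view itself, not on the index. A sketch of the two flavors (view and index names here are illustrative, not from the original DDL):

```sql
-- Immutable case: declaring the view's rows immutable avoids the WAL codec
-- requirement, because index updates never need WAL replay support.
CREATE VIEW tab2_view (pk VARCHAR PRIMARY KEY, "cf2"."t2_col4" VARCHAR)
    IMMUTABLE_ROWS = true;
CREATE INDEX idx_t2_col4 ON tab2_view ("cf2"."t2_col4");

-- Mutable case (no IMMUTABLE_ROWS): creating the index requires
-- hbase.regionserver.wal.codec = IndexedWALEditCodec on every region server,
-- which is what ERROR 1029 (42Y88) is complaining about.
```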
Re: Error when try to create mutable secondary index...
This is my hbase-site.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera CM on 2014-06-16T11:10:16.319Z-->
<configuration>
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>
  <property>
    <name>hbase.region.server.rpc.scheduler.factory.class</name>
    <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler that knows to put index updates into index queues</description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://alpmas.alp.com:8020/hbase</value>
  </property>
  <property>
    <name>hbase.client.write.buffer</name>
    <value>2097152</value>
  </property>
  <property>
    <name>hbase.client.pause</name>
    <value>1000</value>
  </property>
  <property>
    <name>hbase.client.retries.number</name>
    <value>10</value>
  </property>
  <property>
    <name>hbase.client.scanner.caching</name>
    <value>1000</value>
  </property>
  <property>
    <name>hbase.client.keyvalue.maxsize</name>
    <value>20971520</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>120</value>
  </property>
  <property>
    <name>hbase.security.authentication</name>
    <value>simple</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>24</value>
  </property>
  <property>
    <name>zookeeper.retries</name>
    <value>5</value>
  </property>
  <property>
    <name>zookeeper.pause</name>
    <value>5000</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.rootserver</name>
    <value>root-region-server</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk3.alp.com,zk2.alp.com,zk1.alp.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

On Fri, Aug 8, 2014 at 2:46 PM, Saravanan A asarava...@alphaworkz.com wrote:
I already included this property in hbase-site.xml on all region servers, but am still getting that error. If I define my view with IMMUTABLE_ROWS = true, then I can create the index, but I want to create an index on a mutable view.
On Fri, Aug 8, 2014 at 2:10 PM, Abhilash L L abhil...@capillarytech.com wrote:
Really sorry, I shared the wrong config:

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

Regards, Abhilash L L
Capillary Technologies
M: 919886208262
abhil...@capillarytech.com | www.capillarytech.com
Email from people at capillarytech.com may not represent official policy of Capillary Technologies unless explicitly stated. Please see our Corporate-Email-Policy http://support.capillary.co.in/policy-public/Corporate-Email-Policy.pdf for details. Contents of this email are confidential. Please contact the Sender if you have received this email in error.

On Fri, Aug 8, 2014 at 1:07 PM, Saravanan A asarava...@alphaworkz.com wrote:
Hi Abhilash, thanks for the reply. I included the above property and restarted the region servers, but I am still getting the same error.

On Fri, Aug 8, 2014 at 12:39 PM, Abhilash L L abhil...@capillarytech.com wrote:
Hi Saravanan, please check the Setup section here: http://phoenix.apache.org/secondary_indexing.html
You will need to add this config to all region servers in hbase-site.xml, as the error says as well (you will need to restart the servers after the change):

<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that knows to put index updates into index queues</description>
</property>

Regards, Abhilash L L
Capillary Technologies
M: 919886208262
abhil...@capillarytech.com | www.capillarytech.com
Contents of this email are confidential.
Please contact the Sender if you have received this email in error.

On Fri, Aug 8, 2014 at 12:22 PM, Saravanan A asarava...@alphaworkz.com wrote:
Hi, I have a table in HBase, created a view on it in Phoenix, and tried to create an index on a column of the view, but I got the following error:
Error: ERROR 1029 (42Y88): Mutable secondary indexes must have the hbase.regionserver.wal.codec property set to org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec in the hbase-sites.xml of every region server tableName=tab2_col4 (state=42Y88,code=1029)
But I have added the hbase.regionserver.wal.codec property on all my region servers, and I can create an IMMUTABLE index on it. I am using HBase 0.94.15-cdh4.7.0 and Phoenix 3.0. Am I