It looks like the user experience could be improved (by enriching the
exception message) for the case where table "abc" can be found but table ABC
cannot.
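
For example (an illustrative sketch, not taken from the thread, assuming a
table created with a quoted lower-case name): Phoenix folds unquoted
identifiers to upper case before the lookup, so

create table "abc" (PK VARCHAR PRIMARY KEY);
select * from abc;   -- looked up as ABC: fails even though "abc" exists
select * from "abc"; -- works

A message along the lines of "Table ABC not found; a table named "abc"
exists, quote the name to preserve case" would point users straight at the
fix.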

Cheers

On Sun, Oct 23, 2016 at 9:26 AM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Thanks gents
>
> I dropped and recreated the table, with the table name and columns in UPPERCASE, as follows:
>
> create table DUMMY (PK VARCHAR PRIMARY KEY, PRICE_INFO.TICKER VARCHAR,
> PRICE_INFO.TIMECREATED VARCHAR, PRICE_INFO.PRICE VARCHAR);
>
> and used the command below, passing the table name in UPPERCASE as well:
>
> HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf
> hadoop jar /usr/lib/hbase/lib/phoenix-4.8.1-HBase-1.2-client.jar
> org.apache.phoenix.mapreduce.CsvBulkLoadTool --table DUMMY --input
> /data/prices/2016-10-23/prices.1477228923115
>
> and this worked!
>
> 2016-10-23 17:20:33,089 INFO  [main] mapreduce.AbstractBulkLoadTool:
> Incremental load complete for table=DUMMY
> 2016-10-23 17:20:33,089 INFO  [main] mapreduce.AbstractBulkLoadTool:
> Removing output directory /tmp/261410fb-14d5-49fc-a717-dd0469db1673
>
> It would be helpful if the documentation were updated to reflect this.
>
> So, bottom line: should I create Phoenix table and column names in UPPERCASE,
> regardless of the case of the underlying HBase table?
>
> Thanks
>
>
> Dr Mich Talebzadeh
>
>
>
>
>
>
> On 23 October 2016 at 17:10, anil gupta <anilgupt...@gmail.com> wrote:
>
>> Hi Mich,
>>
>> It's recommended to use upper case for table and column names so that you
>> don't have to explicitly quote them.
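>>
>> For example (an illustrative sketch with throwaway names FOO and "bar",
>> not from the original thread): unquoted identifiers are folded to upper
>> case, so an unquoted table can be referenced in any case:
>>
>> create table FOO (PK VARCHAR PRIMARY KEY);
>> select * from foo;   -- resolved as FOO, works
>>
>> whereas a table created with a quoted lower-case name must be quoted on
>> every reference:
>>
>> create table "bar" (PK VARCHAR PRIMARY KEY);
>> select * from bar;   -- resolved as BAR, fails
>> select * from "bar"; -- works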
>>
>> ~Anil
>>
>>
>>
>> On Sun, Oct 23, 2016 at 9:07 AM, Ravi Kiran <maghamraviki...@gmail.com>
>> wrote:
>>
>>> Sorry, I meant to say table names are case sensitive.
>>>
>>> On Sun, Oct 23, 2016 at 9:06 AM, Ravi Kiran <maghamraviki...@gmail.com>
>>> wrote:
>>>
>>>> Hi Mich,
>>>>    Apparently, table names are case sensitive. Since you enclosed the name
>>>> in double quotes when creating the table, please pass it the same way when
>>>> running the bulk load job.
>>>>
>>>> HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf
>>>> hadoop jar phoenix-4.8.1-HBase-1.2-client.jar
>>>> org.apache.phoenix.mapreduce.CsvBulkLoadTool --table "dummy" --input
>>>> /data/prices/2016-10-23/prices.1477228923115
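>>>>
>>>> Note that the shell itself strips unescaped double quotes before Phoenix
>>>> sees them, so on bash the quotes may need escaping for the quoted name to
>>>> be passed through literally, e.g. (an untested sketch):
>>>>
>>>> --table '"dummy"'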
>>>>
>>>> Regards
>>>>
>>>>
>>>> On Sun, Oct 23, 2016 at 8:39 AM, Mich Talebzadeh <
>>>> mich.talebza...@gmail.com> wrote:
>>>>
>>>>> I am not sure whether phoenix-4.8.1-HBase-1.2-client.jar is the correct
>>>>> jar file.
>>>>>
>>>>> Thanks
>>>>>
>>>>> Dr Mich Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 23 October 2016 at 15:39, Mich Talebzadeh <
>>>>> mich.talebza...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> My stack
>>>>>>
>>>>>> HBase: hbase-1.2.3
>>>>>> Phoenix: apache-phoenix-4.8.1-HBase-1.2-bin
>>>>>>
>>>>>>
>>>>>> Following a suggestion, I tried to load a CSV file into HBase via
>>>>>> org.apache.phoenix.mapreduce.CsvBulkLoadTool.
>>>>>>
>>>>>> So
>>>>>>
>>>>>> I created a dummy table in HBase as below:
>>>>>>
>>>>>> create 'dummy', 'price_info'
>>>>>>
>>>>>> Then in Phoenix I created a table on top of the HBase table:
>>>>>>
>>>>>>
>>>>>> create table "dummy" (PK VARCHAR PRIMARY KEY, "price_info"."ticker"
>>>>>> VARCHAR,"price_info"."timecreated" VARCHAR, "price_info"."price"
>>>>>> VARCHAR);
>>>>>>
>>>>>> And then used the following command to load the CSV file:
>>>>>>
>>>>>> HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf
>>>>>> hadoop jar phoenix-4.8.1-HBase-1.2-client.jar
>>>>>> org.apache.phoenix.mapreduce.CsvBulkLoadTool --table dummy --input
>>>>>> /data/prices/2016-10-23/prices.1477228923115
>>>>>>
>>>>>> However, it does not seem to find the table dummy!
>>>>>>
>>>>>> 2016-10-23 14:38:39,442 INFO  [main] metrics.Metrics: Initializing
>>>>>> metrics system: phoenix
>>>>>> 2016-10-23 14:38:39,479 INFO  [main] impl.MetricsConfig: loaded
>>>>>> properties from hadoop-metrics2.properties
>>>>>> 2016-10-23 14:38:39,529 INFO  [main] impl.MetricsSystemImpl:
>>>>>> Scheduled snapshot period at 10 second(s).
>>>>>> 2016-10-23 14:38:39,529 INFO  [main] impl.MetricsSystemImpl: phoenix
>>>>>> metrics system started
>>>>>> Exception in thread "main" java.lang.IllegalArgumentException: Table DUMMY not found
>>>>>>         at org.apache.phoenix.util.SchemaUtil.generateColumnInfo(SchemaUtil.java:873)
>>>>>>         at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.buildImportColumns(AbstractBulkLoadTool.java:377)
>>>>>>         at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:214)
>>>>>>         at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
>>>>>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>>>>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>>>>>>         at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:101)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>         at java.lang.reflect.Method.invoke(Method.java:498)
>>>>>>         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>>>>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>>>>>> I tried putting it inside double quotes ("") etc., but no joy I am afraid!
>>>>>>
>>>>>> Dr Mich Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>
