BTW, you can also get the Hive schema and partitions (using the code from #1)
Table table = hiveMetastoreClient.getTable(databaseName, tableName);
List<FieldSchema> schema = hiveMetastoreClient.getSchema(databaseName, tableName);
List<FieldSchema> partitions = table.getPartitionKeys();

The HCat and Hive APIs for the schema differ, but for the task at hand maybe you do not need HCatSchema... just a thought...

On Fri, Jun 13, 2014 at 10:32 AM, Dmitry Vasilenko <dvasi...@gmail.com> wrote:

> Please take a look at
>
> http://stackoverflow.com/questions/22630323/hadoop-java-lang-incompatibleclasschangeerror-found-interface-org-apache-hadoo
>
> On Fri, Jun 13, 2014 at 9:53 AM, Brian Jeltema <brian.jelt...@digitalenvoy.net> wrote:
>
>> Doing this, with the appropriate substitutions for my table, jarClass, etc:
>>
>> 2. To get the table schema... I assume that you are after HCat schema
>>
>> import org.apache.hadoop.conf.Configuration;
>> import org.apache.hadoop.mapreduce.InputSplit;
>> import org.apache.hadoop.mapreduce.Job;
>> import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
>> import org.apache.hcatalog.data.schema.HCatSchemaUtils;
>> import org.apache.hcatalog.mapreduce.HCatInputFormat;
>> import org.apache.hcatalog.mapreduce.HCatSplit;
>> import org.apache.hcatalog.mapreduce.InputJobInfo;
>>
>> Job job = new Job(config);
>> job.setJarByClass(XXXXXX.class); // this will be your class
>> job.setInputFormatClass(HCatInputFormat.class);
>> job.setOutputFormatClass(TextOutputFormat.class);
>> InputJobInfo inputJobInfo = InputJobInfo.create("my_data_base", "my_table", "partition filter");
>> HCatInputFormat.setInput(job, inputJobInfo);
>> HCatSchema s = HCatInputFormat.getTableSchema(job);
>>
>> results in:
>>
>> Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
>>     at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getTableSchema(HCatBaseInputFormat.java:234)
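For completeness, here is a minimal standalone sketch of the metastore approach above. It assumes a reachable Hive metastore (configured via hive-site.xml on the classpath) and the hive-metastore jars; "my_data_base" and "my_table" are placeholder names you would substitute:

```java
import java.util.List;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.metastore.api.Table;

public class ShowSchema {
    public static void main(String[] args) throws Exception {
        // Picks up hive-site.xml from the classpath to locate the metastore
        HiveConf conf = new HiveConf();
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            Table table = client.getTable("my_data_base", "my_table");

            // getSchema() returns the data columns followed by the partition columns
            List<FieldSchema> schema = client.getSchema("my_data_base", "my_table");
            for (FieldSchema f : schema) {
                System.out.println(f.getName() + " : " + f.getType());
            }

            // Partition keys only, straight from the table definition
            for (FieldSchema p : table.getPartitionKeys()) {
                System.out.println("partition key: " + p.getName());
            }
        } finally {
            client.close();
        }
    }
}
```

Note this talks to the metastore directly over Thrift, so it sidesteps the mapreduce JobContext entirely and is not affected by the IncompatibleClassChangeError below (which comes from HCatalog being compiled against a different Hadoop major version than the one on your runtime classpath).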