Yes, I also find it very strange.
Unfortunately, it's not an ideal workaround: with
hive.merge.mapfiles=false we end up with a lot of HDFS blocks that each
hold only a small amount of data (much less than the configured HDFS
block size).
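For anyone hitting the same small-files problem, the merge step can usually be tuned rather than disabled outright. A sketch of the relevant properties (the values below are illustrative, not recommendations for your cluster):

```properties
# Keep the map-file merge step enabled instead of turning it off entirely.
hive.merge.mapfiles=true
# Also merge outputs of map-reduce jobs, if that applies to your queries.
hive.merge.mapredfiles=true
# Run an extra merge job when the average output file is smaller than this (bytes).
hive.merge.smallfiles.avgsize=16000000
# Target size for each merged output file (bytes).
hive.merge.size.per.task=256000000
```

That way the merge job only fires when outputs are genuinely small, which may avoid whatever was triggering the original failure while still keeping files near block size.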
Can you think of any other workarounds?
I just set the Hive log4j config to DEBUG, and here is the error
I'm seeing:
2010-12-07 12:16:50,281 WARN mapred.JobClient
(JobClient.java:configureCommandLineOptions(539)) - Use GenericOptionsParser
for parsing the arguments. Applications should implement Tool for the same.
2010-12-07
Digging even further, here's what I see:
(NOTE: We have a table in Hive called test_table, but this seems to look
for default.test_table?)
2010-12-07 00:52:24,600 ERROR metadata.Hive (Hive.java:getTable(357)) -
NoSuchObjectException(message:default.test_table table not found)
at
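One note on the "default.test_table" part: Hive qualifies unqualified table names with the current database, which is "default" unless you've switched with USE, so that prefix in the error is expected. The real question is whether the metastore actually has the table registered there. A quick check from the Hive CLI (assuming the same metastore the failing job uses):

```sql
-- Hive resolves a bare "test_table" as "<current_db>.test_table",
-- and the current database is "default" unless changed with USE.
USE default;
SHOW TABLES;                  -- does test_table appear in this list?
DESCRIBE default.test_table;  -- should fail the same way if the metastore lacks it
```

If SHOW TABLES doesn't list it, the job and your CLI session are likely pointed at different metastores (e.g. an embedded Derby metastore in the working directory vs. the shared one).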