(bcc impala-dev, [email protected]) Please use the incubator lists for development discussions. Thanks!
On 4 March 2016 at 13:43, Tim Armstrong <[email protected]> wrote:

> It also looks like it got far enough that you should have a bit of data
> loaded - have you been able to start Impala and run queries on some of
> those tables?
>
> We're starting a new release cycle, so I'm actually about to focus on
> upgrading our version of LLVM to 3.7 and getting the Intel support working.
> I think we're going to be putting a bit of effort into reducing LLVM code
> generation time: it seems like LLVM 3.7 is slightly slower in some cases.
>
> We should stay in sync; it would be good to make sure that any changes I
> make will work for your PowerPC work too. If you want to share any patches
> (even if you're not formally contributing them), it would be helpful for me
> to understand what you have already done on this path.
>
> Cheers,
> Tim
>
> On Fri, Mar 4, 2016 at 1:40 PM, Tim Armstrong <[email protected]> wrote:
>
>> Hi Nishidha,
>> It looks like Hive may be missing the native snappy library; I see
>> this in the logs:
>>
>> java.lang.Exception: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
>>     at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
>> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
>>     at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:229)
>>     at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
>>     at org.apache.avro.file.SnappyCodec.compress(SnappyCodec.java:43)
>>     at org.apache.avro.file.DataFileStream$DataBlock.compressUsing(DataFileStream.java:361)
>>     at org.apache.avro.file.DataFileWriter.writeBlock(DataFileWriter.java:394)
>>     at org.apache.avro.file.DataFileWriter.sync(DataFileWriter.java:413)
>>
>> If you want to try making progress without Hive snappy support, I think
>> you could disable some of the file formats by editing
>> testdata/workloads/*/*.csv and removing some of the "snap" file formats.
>> The Impala test suite generates data in many different file formats with
>> different compression settings.
>>
>> On Wed, Mar 2, 2016 at 7:08 AM, nishidha panpaliya <[email protected]> wrote:
>>
>>> Hello,
>>>
>>> After building Impala on ppc64le, I'm trying to run all the tests of
>>> Impala. In the process, I'm getting an error during test data creation.
>>>
>>> Command run:
>>>
>>> ${IMPALA_HOME}/buildall.sh -testdata -format
>>>
>>> Output: attached log (output.txt)
>>>
>>> Also attached the logs named
>>> cluster_logs/data_loading/data-load-functional-exhaustive.log, and hive.log.
>>>
>>> I tried setting the parameters below in hive-site.xml, but to no avail:
>>>
>>> hive.exec.max.dynamic.partitions=100000
>>> hive.exec.max.dynamic.partitions.pernode=100000
>>> hive.exec.parallel=false
>>>
>>> I'd be really thankful if you could provide me some help here.
>>>
>>> Thanks in advance,
>>> Nishidha
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Impala Dev" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].

--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
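Tim's suggestion to prune the snappy ("snap") entries from the workload vector files could be scripted along these lines. This is a sketch only: the `demo/` directory and the sample vector lines are invented for illustration, and the real files under testdata/workloads/ may use a different layout, so back them up before editing.

```shell
# Build a throwaway demo tree standing in for testdata/workloads/.
# The vector-line syntax below is an assumption, not copied from Impala.
mkdir -p demo/workloads/functional
cat > demo/workloads/functional/functional_exhaustive.csv <<'EOF'
file_format:text, dataset:functional, compression_codec:none, compression_type:none
file_format:avro, dataset:functional, compression_codec:snap, compression_type:block
file_format:seq, dataset:functional, compression_codec:gzip, compression_type:block
EOF

# Drop every vector line mentioning "snap", keeping a backup of each file.
for f in demo/workloads/*/*.csv; do
  cp "$f" "$f.bak"                 # preserve the original vector file
  grep -v 'snap' "$f.bak" > "$f"   # remove snappy-compressed variants
done

cat demo/workloads/functional/functional_exhaustive.csv
```

After pruning the real files, re-running `${IMPALA_HOME}/buildall.sh -testdata` should skip the snappy-compressed variants, assuming the data loader honors the vector files as Tim describes.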
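For context on the FAILED_TO_LOAD_NATIVE_LIBRARY error itself: snappy-java ships its JNI library inside the jar, keyed by OS and CPU architecture, and loading fails when no native build is bundled for the current platform (which may be the case for ppc64le in older snappy-java releases). The sketch below only prints the resource path one would expect to inspect; the exact layout is an assumption about snappy-java's packaging, not something confirmed in this thread.

```shell
# Sketch: show the platform key snappy-java would use and where a bundled
# native library would be expected to live inside the jar. The path format
# is an assumption; verify against your snappy-java jar.
arch=$(uname -m)
echo "CPU architecture: $arch"
echo "Expected bundled native path: org/xerial/snappy/native/Linux/$arch/libsnappyjava.so"
# With the jar on hand, one could check for it (path is an assumption):
#   unzip -l snappy-java-*.jar | grep "native/Linux/$arch"
```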
