Re: Write Access to Hive Wiki

2018-02-24 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Madhudeep!

-- Lefty


On Tue, Feb 20, 2018 at 9:24 PM, Madhudeep petwal <
madhudeep11pet...@gmail.com> wrote:

> Hi,
>
> My Jira has been
> merged. Now I need to edit the website for
> documentation.
> Please provide write access.
>
> Confluence UserName - madhudeep.petwal
>
> Thanks
> Madhudeep Petwal
>


Re: Proposal: File based metastore

2018-02-24 Thread Johannes Alberti
Have you looked at 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport? 
Why does this not suit your use case?
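
For reference, a minimal sketch of that flow over JDBC (connection URL, table name, partition spec, and HDFS paths below are just placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveExportImportSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder HiveServer2 URL; adjust host/port/database for your cluster.
    try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement()) {

      // EXPORT writes the table/partition metadata (_metadata) plus the data files
      // to the target directory, which can then be copied or distcp'd elsewhere.
      stmt.execute("EXPORT TABLE sales PARTITION (ds='2018-02-23') TO '/tmp/export/sales'");

      // On the destination metastore, IMPORT recreates the table from that directory.
      stmt.execute("IMPORT TABLE sales_copy FROM '/tmp/export/sales'");
    }
  }
}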

Regards,

Johannes

Sent from my iPhone

> On Feb 23, 2018, at 3:40 PM, Alexander Kolbasov  wrote:
> 
> Would it be useful to have a tool that can save database(s), table(s) and 
> partition(s) metadata in a file and then import this file in another 
> metastore? These files can be stored together with data files or elsewhere.
> 
> This would allow for targeted exchange of metadata between multiple HMS 
> services.
> 
> - Alex


Re: Why the filter push down does not reduce the read data record count

2018-02-24 Thread Sun, Keith
Hi,


My env: Hive 1.2.1 and Parquet 1.8.1


Per my search in the Hive and Parquet source code of version 1.8.1, I did not see 
the parameters mentioned in those slides, but found these here:

https://github.com/apache/parquet-mr/blob/parquet-1.8.x/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetInputFormat.java


For Hive (also verified in my test), the row group filter is applied automatically; 
see ParquetRecordReaderWrapper:

if (filter != null) {
  // Drop row groups whose statistics show they cannot satisfy the predicate.
  filtedBlocks = RowGroupFilter.filterRowGroups(filter, splitGroup, fileMetaData.getSchema());
  if (filtedBlocks.isEmpty()) {
    LOG.debug("All row groups are dropped due to filter predicates");
    return null;
  }
}
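
For a plain MapReduce job, the same kind of filter can be set explicitly through 
ParquetInputFormat; a rough sketch (the column name "myid" is just the one from the 
original question):

import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.hadoop.ParquetInputFormat;

public class ParquetPushdownSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Predicate equivalent to "WHERE myid = 4"; "myid" is a placeholder column name.
    FilterPredicate predicate = FilterApi.eq(FilterApi.longColumn("myid"), 4L);

    // Registers the predicate so ParquetInputFormat can drop whole row groups whose
    // min/max statistics cannot match it, before any records are materialized.
    ParquetInputFormat.setFilterPredicate(conf, predicate);
  }
}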


I will dig into more details of Parquet and do some tests later.


Thanks,

Keith


From: Furcy Pin 
Sent: Friday, February 23, 2018 8:03:34 AM
To: user@hive.apache.org
Subject: Re: Why the filter push down does not reduce the read data record count

And if you come across comprehensive documentation of Parquet configuration, 
please share it!!!

The Parquet documentation says that it can be configured but doesn't explain 
how: 
http://parquet.apache.org/documentation/latest/
and apparently both Tajo (http://tajo.apache.org/docs/0.8.0/table_management/parquet.html) 
and Drill (https://drill.apache.org/docs/parquet-format/) seem to have some 
configuration parameters for Parquet.
If Hive has configuration parameters for Parquet too, I couldn't find them 
documented anywhere.



On 23 February 2018 at 16:48, Sun, Keith wrote:

I got your point, and thanks for the pointer to the slides.


So the Parquet filter is not an easy thing; I will try it following the 
deck.


Thanks !


From: Furcy Pin
Sent: Friday, February 23, 2018 3:37:52 AM
To: user@hive.apache.org
Subject: Re: Why the filter push down does not reduce the read data record count

Hi,

Unless your table is partitioned or bucketed by myid, Hive generally has to read 
through all the records to find the ones that match your predicate.

In other words, Hive tables are generally not indexed for single-record 
retrieval the way you would expect RDBMS or Vertica tables to be indexed to 
allow single-record lookups.
Some file formats like ORC (and maybe Parquet, I'm not sure) allow you to add bloom 
filters on specific columns of a table, which could work as a kind of index.
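
For example, a rough sketch of the ORC variant over JDBC (the table and column names 
are made up; the relevant table properties are orc.bloom.filter.columns and 
orc.bloom.filter.fpp):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class OrcBloomFilterSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement()) {
      // Bloom filters are declared per column at table-creation time via table properties.
      stmt.execute(
          "CREATE TABLE events_orc (myid BIGINT, payload STRING) "
          + "STORED AS ORC "
          + "TBLPROPERTIES ('orc.bloom.filter.columns'='myid', 'orc.bloom.filter.fpp'='0.05')");

      // Point lookups on myid can then skip row groups whose bloom filter rules the value out.
      stmt.execute("SELECT * FROM events_orc WHERE myid = 4");
    }
  }
}
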
Also, depending on the query engine you are using (Hive, Spark-SQL, Impala, 
Presto...) and its version, it may or may not be able to leverage certain 
storage optimizations.
For example, Spark still does not support the Hive bucketed-table optimization, 
but it might come in the upcoming Spark 2.3.


I'm much less familiar with Parquet, so if anyone has links to good 
documentation for Parquet fine-tuning (or, even better, a comparison with ORC 
features), that would be really helpful.
By googling, I found these slides where someone at Netflix seems to have tried 
the same kind of optimization as you in Parquet.





On 23 February 2018 at 12:02, Sun, Keith wrote:

Hi,


Why does Hive still read so many records even with filter pushdown enabled, when 
the returned dataset is very small (4k out of 30 billion records)?


The "RECORDS_IN" 

Re: ODBC-hiveserver2 question

2018-02-24 Thread Jörn Franke
HDFS support depends on the version. For a long time it was not supported.
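
On versions where it is supported, the JDBC path looks roughly like this (a sketch; 
the jar path is the one from the original mail, everything else is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddJarFromHdfsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement()) {
      // Same statement beeline sends; whether an ODBC driver passes it through unchanged
      // depends on the driver and the HiveServer2 version.
      stmt.execute("ADD JAR hdfs:///hive_jars/hive-contrib-2.1.1.jar");

      // LIST JARS should echo back the jars registered in this session.
      stmt.execute("LIST JARS");
    }
  }
}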

> On 23. Feb 2018, at 21:08, Andy Srine  wrote:
> 
> Team,
> 
> Is ADD JAR from HDFS (ADD JAR hdfs:///hive_jars/hive-contrib-2.1.1.jar;) 
> supported in hiveserver2 via an ODBC connection? 
> 
> Some relevant points:
> I am able to do it in Hive 2.1.1 via JDBC (beeline), but not via an ODBC 
> client.
> In Hive 1.2.1, I can add a jar from the local node, but not a JAR on HDFS.
> Some old blogs online say HiveServer2 doesn't support "ADD JAR", period. But 
> that's not what I experience via beeline.
> Let me know your thoughts and experiences.
> 
> Thanks,
> Andy
>