Adding to that:

- Multiple part files in the output directory can be concatenated into one, for
  example:  cat 000000_0 000000_1 000000_2 > final
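
A minimal shell sketch of that step, using the local output directory from the
example below and a glob so the exact part-file names do not matter (the paths
and the final file name here are assumptions, not from the thread):

  # directory written by INSERT OVERWRITE LOCAL DIRECTORY
  cd /usr/home/hadoop/da1/
  # merge every part file in the directory into one local file
  cat * > /usr/home/hadoop/final_result.csv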



________________________________
 From: Raj Hadoop <hadoop...@yahoo.com>
To: "user@hive.apache.org" <user@hive.apache.org>; "matouk.iftis...@ysance.com" 
<matouk.iftis...@ysance.com> 
Sent: Friday, July 5, 2013 12:17 AM
Subject: Re: How Can I store the Hive query result in one file ?
 


 

 hive > set hive.io.output.fileformat=CSVTextFile;
 hive > insert overwrite local directory '/usr/home/hadoop/da1/'
        select * from customers;

*** customers is a Hive table
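
As a sketch, the same two statements can also be kept in a script and run
non-interactively; the script name one_file.q is only an illustration, while
the table name and directory path come from the example above:

  -- one_file.q (hypothetical script name)
  set hive.io.output.fileformat=CSVTextFile;
  insert overwrite local directory '/usr/home/hadoop/da1/'
  select * from customers;

Run it with:  hive -f one_file.q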


________________________________
 From: Edward Capriolo <edlinuxg...@gmail.com>
To: "user@hive.apache.org" <user@hive.apache.org> 
Sent: Friday, July 5, 2013 12:10 AM
Subject: Re: How Can I store the Hive query result in one file ?
 


Normally if you use set mapred.reduce.tasks=1 you get one output file. You can 
also look at hive.merge.mapfiles and hive.merge.mapredfiles, or you can use a 
separate tool: https://github.com/edwardcapriolo/filecrush
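
A minimal sketch of the single-reducer suggestion; note the setting only helps
when the query actually has a reduce stage (a plain SELECT * is map-only), so
the ORDER BY and the column name customer_id below are illustrative
assumptions:

  -- force the final stage to run with one reducer, so it writes one file
  set mapred.reduce.tasks=1;
  insert overwrite local directory '/usr/home/hadoop/da1/'
  select * from customers order by customer_id;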




On Thu, Jul 4, 2013 at 6:38 AM, Nitin Pawar <nitinpawar...@gmail.com> wrote:

Will hive -e "query" > filename  or  hive -f query.q > filename do? (a sketch 
follows after the quoted thread below) 
>
>
>Or do you specifically want it to write to a named file on HDFS only? 
>
>
>
>On Thu, Jul 4, 2013 at 3:12 PM, Matouk IFTISSEN <matouk.iftis...@ysance.com> 
>wrote:
>
>Hello Hive users,
>>Is there a way to store the Hive query result (SELECT * ...) in a single, 
>>specific file (given the file name), something like INSERT OVERWRITE LOCAL 
>>DIRECTORY '/directory_path_name/'?
>>Thanks for your answers
>>
>>
>>
>
>
>
>-- 
>Nitin Pawar
>
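
A sketch of the redirection approach from the quoted reply above; the query,
file names, and HDFS path are assumptions, and by default hive -e prints
tab-separated rows to stdout, so the redirect produces a single local file:

  # inline query, result redirected to one local file
  hive -e "select * from customers" > /usr/home/hadoop/customers.tsv
  # or keep the query in a file
  hive -f query.q > /usr/home/hadoop/customers.tsv
  # if the result is needed on HDFS, the local file can then be uploaded
  hadoop fs -put /usr/home/hadoop/customers.tsv /user/hadoop/customers.tsv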
