hi, I've tested branch 0.5 rev 903141. The SQL runs as expected. Thx. I can't compile the trunk version, so I haven't tested it.
2010/1/12 Ning Zhang <[email protected]>

> HIVE-1039 is created for the bug when inserting to multiple local
> directories.
>
> Thanks,
> Ning
>
> On Jan 11, 2010, at 11:05 AM, Zheng Shao wrote:
>
> > For your second question, currently we can do it with a little extra
> > work:
> > 1. Create an external table on the target directory with the field
> >    delimiter you want;
> > 2. Run the query and INSERT OVERWRITE the target external table.
> >
> > For the first question we can also do a similar thing (create a
> > bunch of external tables and then insert), but I think we should fix
> > the problem.
> >
> > Zheng
> >
> > On Mon, Jan 11, 2010 at 8:31 AM, Anty <[email protected]> wrote:
> >> Hi:
> >> I came across the same problem; there is no data. I have one
> >> more question: can I specify the field delimiter for the output
> >> file, not just the default Ctrl-A field delimiter?
> >>
> >> On Fri, Jan 8, 2010 at 2:23 PM, wd <[email protected]> wrote:
> >>> hi,
> >>>
> >>> I've tried the hive svn version; it seems this bug still exists.
> >>>
> >>> svn st -v
> >>>
> >>> 896805 896744 namit  .
> >>> 896805 894292 namit  eclipse-templates
> >>> 896805 894292 namit  eclipse-templates/.classpath
> >>> 896805 765509 zshao  eclipse-templates/TestHive.launchtemplate
> >>> 896805 765509 zshao  eclipse-templates/TestMTQueries.l
> >>> ..........
> >>>
> >>> svn revision 896805?
> >>>
> >>> The execution log follows:
> >>>
> >>> hive> from test
> >>>     > INSERT OVERWRITE LOCAL DIRECTORY '/home/stefdong/tmp/0' select * where a = 1
> >>>     > INSERT OVERWRITE LOCAL DIRECTORY '/home/stefdong/tmp/1' select * where a = 3;
> >>> Total MapReduce jobs = 1
> >>> Launching Job 1 out of 1
> >>> Number of reduce tasks is set to 0 since there's no reduce operator
> >>> Starting Job = job_201001071716_4691, Tracking URL =
> >>> http://abc.com:50030/jobdetails.jsp?jobid=job_201001071716_4691
> >>> Kill Command = hadoop job -Dmapred.job.tracker=abc.com:9001 -kill job_201001071716_4691
> >>> 2010-01-08 14:14:55,442 Stage-2 map = 0%, reduce = 0%
> >>> 2010-01-08 14:15:00,643 Stage-2 map = 100%, reduce = 0%
> >>> Ended Job = job_201001071716_4691
> >>> Copying data to local directory /home/stefdong/tmp/0
> >>> Copying data to local directory /home/stefdong/tmp/0
> >>> 13 Rows loaded to /home/stefdong/tmp/0
> >>> 9 Rows loaded to /home/stefdong/tmp/1
> >>> OK
> >>> Time taken: 9.409 seconds
> >>>
> >>> thx.
> >>>
> >>> 2010/1/6 wd <[email protected]>
> >>>
> >>>> hi,
> >>>>
> >>>> A single insert can extract data into '/tmp/out/1'. I can even see "xxx rows
> >>>> loaded to '/tmp/out/0', xxx rows loaded to '/tmp/out/1'", etc. in multi
> >>>> inserts, but in fact there is no data.
> >>>>
> >>>> Haven't tried the svn revision, will try it today. thx.
> >>>>
> >>>> 2010/1/5 Zheng Shao <[email protected]>
> >>>>
> >>>>> Looks like a bug.
> >>>>> What is the svn revision of Hive?
> >>>>>
> >>>>> Did you verify that a single insert into '/tmp/out/1' produces non-empty
> >>>>> files?
> >>>>>
> >>>>> Zheng
> >>>>>
> >>>>> On Tue, Jan 5, 2010 at 12:51 AM, wd <[email protected]> wrote:
> >>>>>> In the hive wiki:
> >>>>>>
> >>>>>> Hive extension (multiple inserts):
> >>>>>> FROM from_statement
> >>>>>> INSERT OVERWRITE [LOCAL] DIRECTORY directory1 select_statement1
> >>>>>> [INSERT OVERWRITE [LOCAL] DIRECTORY directory2 select_statement2] ...
> >>>>>>
> >>>>>> I'm trying to use hive multi inserts to extract data from hive to local
> >>>>>> disk. The HQL follows:
> >>>>>>
> >>>>>> from test_tbl
> >>>>>> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/out/0' select * where id%10=0
> >>>>>> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/out/1' select * where id%10=1
> >>>>>> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/out/2' select * where id%10=2
> >>>>>>
> >>>>>> This HQL executes, but only /tmp/out/0 has a data file in it; the other
> >>>>>> directories are empty. Why does this happen? A bug?
> >>>>>
> >>>>> --
> >>>>> Yours,
> >>>>> Zheng
> >>
> >> --
> >> Best Regards
> >> Anty Rao
> >
> > --
> > Yours,
> > Zheng
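[Editor's note] The external-table workaround Zheng describes in the thread (create an external table over the target directory with the delimiter you want, then insert into the table instead of the raw directory) might look roughly like the following HiveQL sketch; the table name, columns, and path are hypothetical, and the syntax should be checked against the Hive version in use:

```
-- Hypothetical external table over the target directory,
-- with a tab delimiter instead of Hive's default Ctrl-A.
CREATE EXTERNAL TABLE export_target (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/tmp/out/0';

-- Insert overwrite the external table instead of the local directory.
INSERT OVERWRITE TABLE export_target
SELECT id, name FROM test_tbl WHERE id % 10 = 0;
```

Because the data lands in the table's LOCATION, the files end up in the same directory the direct INSERT OVERWRITE DIRECTORY would have used, but with the custom field delimiter applied.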
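[Editor's note] The intent of the multi-insert in the thread is a single scan of the source table fanned out to several outputs, each holding the rows matching one `WHERE id % 10 = k` predicate. A minimal Python sketch of that fan-out logic (sample rows and bucket count are made up; `\x01` is Hive's default Ctrl-A field delimiter, which the thread discusses):

```python
from collections import defaultdict

# Hive's default field delimiter is Ctrl-A (\x01).
FIELD_DELIM = "\x01"

def multi_insert(rows, n_buckets):
    """Mimic a Hive multi-insert: one pass over the input, each row
    routed to the output whose predicate (id % n_buckets == k) it
    matches. Returns a dict mapping bucket index -> serialized lines."""
    outputs = defaultdict(list)
    for row in rows:
        bucket = row[0] % n_buckets  # the WHERE id % 10 = k predicate
        outputs[bucket].append(FIELD_DELIM.join(str(col) for col in row))
    return outputs

# Hypothetical sample data standing in for test_tbl.
rows = [(0, "a"), (1, "b"), (10, "c"), (11, "d"), (2, "e")]
out = multi_insert(rows, 10)
```

In a correct multi-insert, every output whose predicate matched at least one row should be non-empty; the bug in the thread was that only the first directory actually received data.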
