Can you try compiling with the hadoop-2.7.2 profile, use that jar, and let us
know if the issue still persists?

"mvn package -DskipTests -Pspark-1.5.2 -Phadoop-2.7.2 -DskipTests"

Regards
Manish Gupta

On Fri, Jan 20, 2017 at 1:30 PM, 彭 <ffp900...@126.com> wrote:

> I built the jar with hadoop2.6, like "mvn package -DskipTests
> -Pspark-1.5.2 -Phadoop-2.6.0 -DskipTests".
> My Spark version is "spark-1.5.2-bin-hadoop2.6",
> but my Hadoop environment is hadoop-2.7.2.
>
>
>
> At 2017-01-20 15:05:56, "manish gupta" <tomanishgupt...@gmail.com> wrote:
> >Hi,
> >
> >Which version of Hadoop are you using to compile the carbondata jar?
> >
> >If you are using hadoop-2.2.0, please go through the link below, which
> >describes an issue with hadoop-2.2.0 when writing a file in append mode.
> >
> >http://stackoverflow.com/questions/21655634/hadoop2-2-0-append-file-occur-alreadybeingcreatedexception
> >
> >Regards
> >Manish Gupta
> >
> >On Fri, Jan 20, 2017 at 8:10 AM, ffpeng90 <ffp900...@126.com> wrote:
> >
> >> I have met the same problem.
> >> I load data three times, and this exception is always thrown on the
> >> third load.
> >> I use the branch-1.0 version from git.
> >>
> >> Table:
> >> cc.sql(s"create table if not exists flightdb15(ID Int, date string,
> >> country string, name string, phonetype string, serialname string,
> >> salary Int) ROW FORMAT SERDE
> >> 'org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe'
> >> STORED BY 'org.apache.carbondata.format'
> >> TBLPROPERTIES ('table_blocksize'='256 MB')")
> >>
> >> Exception:
> >>
> >> <http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/file/n6843/bug1.png>
> >> (screenshot of the AlreadyBeingCreatedException stack trace)
> >>
>
