And why does Kylin store metadata in two different places at the same time?

lec ssmi <[email protected]> wrote on Sun, Nov 15, 2020, 1:29 PM:

> Hi:
>   Recently, while modifying Kylin, I found a problem.
>   My Kylin version is 2.6.2 and the build engine is Spark. When building
> the dimension dictionary, the Kylin server starts a separate shell process
> to launch the Spark program.
> After Spark creates the dictionary for each field, it stores the
> dictionary information on HDFS and then updates the cubename.json file
> under the /cube path on HDFS. Later, during HTable creation, the
> information in tablename.json is read again and then updated. The problem
> is that the HTable is created inside the Kylin server process, so its
> ResourceStore instance is different from the one in the Spark program.
> While creating the dictionary, Spark only updates the metadata on HDFS and
> does not update it anywhere else, such as in MySQL. The ResourceStore
> instance in the Kylin server is JDBC-based, so the contents of the JSON
> file in JDBC and the file on HDFS end up being different. Will this be a
> problem?
>
>  Thanks.
>
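
To make the described divergence concrete, below is a minimal, self-contained
sketch in plain Java. It does not use Kylin's actual ResourceStore API; every
class and method name is hypothetical and only illustrates what happens when
the Spark-side job updates an HDFS-backed copy of the cube metadata while the
server keeps reading its own JDBC-backed copy.

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical stand-ins for the two metadata stores mentioned in the mail:
// one backed by HDFS (used by the Spark dictionary-building job) and one
// backed by JDBC/MySQL (used by the Kylin server process).
public class MetadataDriftSketch {

    // A trivial key/value "resource store": resource path -> JSON string.
    static class SimpleStore {
        private final Map<String, String> resources = new HashMap<>();
        void putResource(String path, String json) { resources.put(path, json); }
        String getResource(String path) { return resources.get(path); }
    }

    public static void main(String[] args) {
        SimpleStore hdfsStore = new SimpleStore(); // what the Spark job writes to
        SimpleStore jdbcStore = new SimpleStore(); // what the Kylin server reads from

        String cubePath = "/cube/my_cube.json";
        String original = "{\"segments\":[],\"dictionaries\":[]}";
        hdfsStore.putResource(cubePath, original);
        jdbcStore.putResource(cubePath, original);

        // The Spark job finishes building a dictionary and updates ONLY the
        // HDFS-backed copy, as described above.
        hdfsStore.putResource(cubePath,
                "{\"segments\":[],\"dictionaries\":[\"DIM_A\"]}");

        // Later, the Kylin server creates the HTable and reads its own
        // JDBC-backed copy, which the Spark job never touched.
        String seenByServer = jdbcStore.getResource(cubePath);
        String seenBySpark  = hdfsStore.getResource(cubePath);

        if (!Objects.equals(seenByServer, seenBySpark)) {
            System.out.println("Metadata drift detected:");
            System.out.println("  JDBC copy: " + seenByServer);
            System.out.println("  HDFS copy: " + seenBySpark);
        }
    }
}

In real Kylin the two copies would be the metadata written to HDFS by the
Spark job and the metadata in the JDBC metastore configured for the server;
the sketch only shows why the two stop agreeing once one side writes without
the other.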
