This is a challenge when dealing with JSON. You can either force the data type 
in the CTAS statement (likely the better option) or handle the data type change 
in the parquet table(s) by using CAST, etc. In the case of zip codes you need to 
consider whether values will be 5 digits or the extended 5-4 format to decide 
whether the data type should be INT or VARCHAR.
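For example, forcing the type in the CTAS itself might look like the sketch 
below (assuming the JSON files live under a workspace path such as 
dfs.tmp.`cities` — adjust to your storage plugin):

```sql
-- Sketch, assuming JSON files with city/zip fields under dfs.tmp.`cities`.
-- Casting zip to VARCHAR up front prevents all-NULL batches from being
-- inferred as INT, which is what causes the schema mismatch downstream.
CREATE TABLE dfs.tmp.`cities_parquet` AS
SELECT city,
       CAST(zip AS VARCHAR(10)) AS zip  -- VARCHAR(10) covers 5-digit and 5-4 zips
FROM dfs.tmp.`cities`;
```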

Also look into the TYPEOF function, which you can combine with CASE to handle 
these kinds of issues.
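A rough sketch of that approach (again assuming the hypothetical 
dfs.tmp.`cities` path; exact TYPEOF output strings can vary by Drill version, 
so verify against your build):

```sql
-- Sketch: use typeof() to detect batches where zip was inferred as INT
-- (because every value was NULL) and normalize them to VARCHAR.
SELECT city,
       CASE WHEN typeof(zip) = 'INT'
            THEN CAST(NULL AS VARCHAR(10))
            ELSE CAST(zip AS VARCHAR(10))
       END AS zip
FROM dfs.tmp.`cities`;
```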

I prefer to deal with data issues as soon as possible in the pipeline, so the 
tables you create are consistent and clean.


On 2/23/18, 12:04 PM, "Lee, David" <> wrote:

    Using Drill's CTAS statements I've run into a schema inconsistency issue 
and I'm not sure how to solve it.
    CREATE TABLE name [ (column list) ] AS query;  
    If I have a directory called Cities which have JSON files which look like:
    { "city":"San Francisco", "zip":"94105"}
    { "city":"San Jose", "zip":"94088"}
    { "city":"Toronto", "zip": null}
    { "city":"Montreal", "zip": null}
    If I create a parquet file out of the Cities directory I will end up with 
files called:
    1_0_0.parquet through 1_5_1.parquet
    Now I have a problem:
    Most of the parquet files have a column type of char for zip.
    Some of the parquet files have a column type of int for zip because the zip 
value for a group of records was NULL.
    This produces schema change errors later when trying to query the parquet 
files.
    Is it possible for Drill to do a better job learning schemas across all 
json files in a directory before creating parquet?
