Only do that if you’re willing to get rid of your entire database.


*From:* Sean Broeder [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 2:41 PM
*To:* [email protected]
*Subject:* RE: fixing/checking corrupted metadata?



You might want to try, from sqlci:

initialize trafodion, drop;
initialize trafodion;
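For reference, a full session might look like the sketch below. Note that "initialize trafodion, drop" removes all Trafodion metadata and user objects, which is why it should only be used when losing the database is acceptable (see the warning at the top of this thread).

  $ sqlci
  >> initialize trafodion, drop;
  >> initialize trafodion;
  >> exit;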







*From:* Eric Owhadi [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* [email protected]
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16) CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
)
SALT USING 2 PARTITIONS
HBASE_OPTIONS
(
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
);



After a long time and apparently 35 retries, it complained that SNAPPY
compression support is not available in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16) CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
)
SALT USING 2 PARTITIONS
HBASE_OPTIONS
(
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop:  COMPRESSION = 'SNAPPY'
);



And this time it took forever and never completed (I waited 20 minutes,
then killed it).



I am assuming the hang on the second attempt is a consequence of the first
failure, which must have left the metadata half updated.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
reaching for the bazooka?
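In case it helps with diagnosis, one thing I could check is whether the failed
create left an orphaned table behind in HBase itself. A sketch, assuming the
default TRAFODION catalog/SEABASE schema naming (so Customer would show up as
TRAFODION.SEABASE.CUSTOMER; the actual schema name may differ):

  $ hbase shell
  # list all HBase tables created by Trafodion
  > list 'TRAFODION.*'
  # if a half-created table is present, it could be removed by hand:
  > disable 'TRAFODION.SEABASE.CUSTOMER'
  > drop 'TRAFODION.SEABASE.CUSTOMER'

Dropping the HBase table by hand would of course still leave any partial rows
in the Trafodion metadata tables, so this is only half of a cleanup.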



Thanks in advance for the help,
Eric
