Hi Eric,


There might be hung transactions. Run “dtmci” and then “status trans” to
check. If there are, you can get rid of them by doing an sqstop followed by
an sqstart, though you may need to run ckillall along with the sqstop,
because sqstop sometimes hangs on hung transactions.

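For example, roughly like this (a sketch; the exact prompts and output will
vary):

  $ dtmci
  DTMCI > status trans
  DTMCI > quit

If that shows transactions stuck in an active state:

  $ sqstop
  $ ckillall   (only if sqstop itself hangs; run it from another window)
  $ sqstart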


There is also likely messed-up metadata. To clean that up, you can do
“cleanup table customer”.

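That is, from sqlci, something like this (assuming the table is in your
default schema):

  $ sqlci
  >> cleanup table customer;
  >> exit;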


Actually, you can run the cleanup without getting rid of the hung
transactions first; those can just stay there.



Dave



*From:* Eric Owhadi [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* [email protected]
*Subject:* fixing/checking corrupted metadata?



I have been playing in my dev environment with this DDL:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16) CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16) CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop   COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes,
then killed it).



I am assuming that the hang on the second attempt is a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric
