May want to add this to our knowledgeware as an FAQ.


   Roberta



*From:* Eric Owhadi [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 4:00 PM
*To:* [email protected]
*Subject:* RE: fixing/checking corrupted metadata?



Awesome, bookmarked :). And yes, it solved the problem.

Thanks all for the precious help,
Eric



*From:* Suresh Subbiah [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 5:09 PM
*To:* [email protected]
*Subject:* Re: fixing/checking corrupted metadata?



Here is the syntax for cleanup.

https://cwiki.apache.org/confluence/display/TRAFODION/Metadata+Cleanup



We need to add this to the manual that Gunnar created. I will file a JIRA
to raise an error and exit early if the requested compression type is not
available.



Thanks

Suresh



On Tue, Feb 2, 2016 at 5:05 PM, Eric Owhadi <[email protected]> wrote:

Great, thanks for the info, very helpful.

You mention Trafodion documentation; in which doc is it described? I looked
for it in the Trafodion Command Interface Guide and the Trafodion SQL
Reference Manual with no luck, and the other doc titles did not look promising.

Eric





*From:* Anoop Sharma [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 4:54 PM


*To:* [email protected]
*Subject:* RE: fixing/checking corrupted metadata?



Dave mentioned ‘cleanup table customer’. You can use that if you know which
table is messed up in metadata.



Or one can use:

  cleanup metadata, check, return details;

to find out all entries which may be corrupt, and then:

  cleanup metadata, return details;
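

Putting those together, a minimal sqlci session might look like this
(customer is just the example table from this thread):

  -- report metadata entries that may be corrupt, without changing anything
  cleanup metadata, check, return details;

  -- remove the corrupt or orphan entries that the check reported
  cleanup metadata, return details;

  -- if you already know which object is damaged, clean up just that one
  cleanup table customer;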



The cleanup command is also documented in the Trafodion documentation,
which is a good place to check.



anoop



*From:* Sean Broeder [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 2:49 PM
*To:* [email protected]
*Subject:* RE: fixing/checking corrupted metadata?



Right. I suggested this only because reinstalling local_hadoop was
mentioned. Reinitializing Trafodion would be quicker, but just as fatal
for existing data.



*From:* Dave Birdsall [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 2:43 PM
*To:* [email protected]
*Subject:* RE: fixing/checking corrupted metadata?



Only do that if you’re willing to get rid of your entire database.



*From:* Sean Broeder [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 2:41 PM
*To:* [email protected]
*Subject:* RE: fixing/checking corrupted metadata?



You might want to try, from sqlci: initialize trafodion, drop; followed by
initialize trafodion;
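
Spelled out as an sqlci session (per Dave's warning above, this drops the
entire database, not just the broken metadata entries):

  -- drop all Trafodion metadata, and with it every user table
  initialize trafodion, drop;

  -- recreate the metadata from scratch
  initialize trafodion;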







*From:* Eric Owhadi [mailto:[email protected]]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* [email protected]
*Subject:* fixing/checking corrupted metadata?



I have been playing in my dev environment with this DDL:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16) CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.
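

A hedged aside: instead of commenting the option out, one could request no
compression explicitly. 'NONE' is HBase's no-compression algorithm; whether
Trafodion passes it through unchanged in HBASE_OPTIONS is an assumption
here, and compression_probe is a hypothetical table name:

  -- hypothetical probe: same options, but explicitly no compression
  create table compression_probe
  ( k int not null, primary key (k) )
  SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'NONE'  -- assumes pass-through to HBase, where NONE is valid
  );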



That’s fine, so I decided to retry with:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16) CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop:  COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes,
then killed it).



I am assuming that the hang on the second attempt is a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/reinstall of local_hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric
