In HIVE-675, Carl posted the relevant ALTER TABLE commands. I tried those out on a MySQL DB and didn't get the error when using the Hive CLI. Can you try something similar to the following on your DB?

  ALTER TABLE DBS MODIFY `DESC` VARCHAR(4000);
  ALTER TABLE DBS ADD COLUMN DB_LOCATION_URI VARCHAR(4000) DEFAULT '' NOT NULL;
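Since your metastore is on Derby rather than MySQL, the syntax differs a bit. I haven't run this against Derby myself, so treat it as a rough sketch, but the equivalent statements should look something like the ones below. The empty-string default is just a placeholder I picked; Derby only lets you add a NOT NULL column when a non-null DEFAULT is supplied, which is exactly the clause the auto-create step is missing. (There's also a quick catalog query at the very end of this message you can use to confirm the column afterwards.)

  -- Derby has no MODIFY; widen the existing column with ALTER COLUMN.
  -- DESC is a reserved word in Derby, so it has to be double-quoted.
  ALTER TABLE DBS ALTER COLUMN "DESC" SET DATA TYPE VARCHAR(4000);

  -- Add the new column with an explicit (placeholder) default so NOT NULL is accepted.
  ALTER TABLE DBS ADD COLUMN DB_LOCATION_URI VARCHAR(4000) DEFAULT '' NOT NULL;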
-----Original Message-----
From: ravi...@gmail.com [mailto:ravi...@gmail.com] On Behalf Of Raviv M-G
Sent: Tuesday, September 28, 2010 1:14 PM
To: Paul Yang; hive-user@hadoop.apache.org
Subject: Re: fix for DB_LOCATION_URI NOT NULL migration error?

I relied on the JDO auto-create setting:

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
  </property>

HIVE-675 apparently changed the column to allows-null="false":
https://issues.apache.org/jira/secure/attachment/12454730/HIVE-675-backport-v6.2.patch.txt

Should I manually alter the Derby table to allow nulls?

Thanks!
-Raviv

On Tue, Sep 28, 2010 at 3:02 PM, Paul Yang <py...@facebook.com> wrote:
> For migration, did you manually alter the column or are you relying on
> JDO to auto-create the schema?
>
> -----Original Message-----
> From: ravi...@gmail.com [mailto:ravi...@gmail.com] On Behalf Of Raviv M-G
> Sent: Monday, September 27, 2010 11:57 PM
> To: hive-user@hadoop.apache.org
> Subject: fix for DB_LOCATION_URI NOT NULL migration error?
>
> Does anyone have a fix for the below error? I can see that it is
> caused by changes made in HIVE-675, but I can't find a patch or
> instructions for migrating the metastore_db that fix the problem.
>
> FAILED: Error in metadata: javax.jdo.JDODataStoreException: Error(s)
> were found while auto-creating/validating the datastore for classes.
> The errors are printed in the log, and are attached to this exception.
> NestedThrowables:
> java.sql.SQLSyntaxErrorException: In an ALTER TABLE statement, the
> column 'DB_LOCATION_URI' has been specified as NOT NULL and either the
> DEFAULT clause was not specified or was specified as DEFAULT NULL.
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
>
> Thanks,
> Raviv
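P.S. If you do go the manual route, a quick way to confirm the change took is to query Derby's system catalogs for the DBS columns. This is only a sanity-check sketch and assumes the default Derby metastore schema.

  -- List the DBS columns and their types; DB_LOCATION_URI should show up
  -- as VARCHAR(4000) NOT NULL after the manual ALTER.
  SELECT c.COLUMNNAME, c.COLUMNDATATYPE
  FROM SYS.SYSCOLUMNS c
  JOIN SYS.SYSTABLES t ON c.REFERENCEID = t.TABLEID
  WHERE t.TABLENAME = 'DBS';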