I was using default settings. Are 'APP' and 'mine' the defaults?

jpox.properties:
javax.jdo.PersistenceManagerFactoryClass=org.jpox.PersistenceManagerFactoryImpl
org.jpox.validateTables=false
org.jpox.validateColumns=false
org.jpox.validateConstraints=false
org.jpox.storeManagerType=rdbms
org.jpox.autoCreateSchema=true
org.jpox.autoStartMechanismMode=checked
org.jpox.transactionIsolation=read_committed
javax.jdo.option.DetachAllOnCommit=true
javax.jdo.option.NontransactionalRead=true
javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.ClientDriver
javax.jdo.option.ConnectionURL=jdbc:derby://hadoop1.jointhegrid.local:1527/metastore_db;create=true
javax.jdo.option.ConnectionUserName=APP
javax.jdo.option.ConnectionPassword=mine
org.jpox.cache.level2=true
org.jpox.cache.level2.type=SOFT

hive-site.xml:

<configuration>
<property>
  <name>hive.metastore.local</name>
  <value>true</value>
  <description>controls whether to connect to a remote metastore server
or open a new metastore server in the Hive Client JVM</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  
<value>jdbc:derby://hadoop1.jointhegrid.local:1527/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.ClientDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
</configuration>
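Since the JDBC driver and URL appear in both jpox.properties and hive-site.xml above, they must agree, or the client and the JPOX layer end up pointed at different metastores. A minimal sanity check, assuming the values quoted above (this is an illustrative sketch, not part of Hive):

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigCheck {
    public static void main(String[] args) throws Exception {
        // The JDBC keys from the jpox.properties block quoted above.
        String jpox =
            "javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.ClientDriver\n" +
            "javax.jdo.option.ConnectionURL=jdbc:derby://hadoop1.jointhegrid.local:1527/metastore_db;create=true\n";
        Properties p = new Properties();
        p.load(new StringReader(jpox));

        // The same two values as they appear in hive-site.xml.
        String hiveDriver = "org.apache.derby.jdbc.ClientDriver";
        String hiveUrl =
            "jdbc:derby://hadoop1.jointhegrid.local:1527/metastore_db;create=true";

        // If these ever diverge, queries and schema operations can hit
        // different Derby databases.
        if (!hiveDriver.equals(p.getProperty("javax.jdo.option.ConnectionDriverName"))
                || !hiveUrl.equals(p.getProperty("javax.jdo.option.ConnectionURL"))) {
            throw new IllegalStateException("jpox.properties and hive-site.xml disagree");
        }
        System.out.println("driver and URL match");
    }
}
```

Note that java.util.Properties treats only the first `=` on a line as the key/value separator, so the `;create=true` suffix survives parsing intact.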


On Fri, Apr 17, 2009 at 11:24 PM, Prasad Chakka <[email protected]> wrote:
> Did you change the password parameter (for the metastore db) from the 
> default? If so, is it empty? Because the branch and trunk are working for me, 
> and that is the only case I think could cause problems; I want to make sure 
> that is the case before doing a checkin.
>
> Prasad
>
>
> ________________________________
> From: Edward Capriolo <[email protected]>
> Reply-To: <[email protected]>
> Date: Fri, 17 Apr 2009 19:40:49 -0700
> To: <[email protected]>
> Subject: Re: null pointer in 3.0
>
> I setup using the local meta store as well as a remote metastore using
> the derby client. Both of them were failing. After applying your patch
> my trunk version is now working. I only tested one query.
>
> I would suggest making a release candidate, because the 3.0 release is
> not functional.
>
>
> On Fri, Apr 17, 2009 at 8:56 PM, Ashish Thusoo <[email protected]> wrote:
>> Should we build a new release candidate for this? Is there a reasonable 
>> workaround?
>>
>> Ashish
>>
>> -----Original Message-----
>> From: Prasad Chakka [mailto:[email protected]]
>> Sent: Friday, April 17, 2009 3:30 PM
>> To: [email protected]
>> Subject: Re: null pointer in 3.0
>>
>> Hi Edward,
>>
>> Can you try to apply this patch 
>> https://issues.apache.org/jira/secure/attachment/12405820/tmp1.patch and let 
>> me know if it works?
>>
>> What type of metastore setup are you using here?
>>
>> Thanks,
>> Prasad
>>
>>
>> ________________________________
>> From: Edward Capriolo <[email protected]>
>> Reply-To: <[email protected]>
>> Date: Fri, 17 Apr 2009 14:40:13 -0700
>> To: <[email protected]>
>> Subject: null pointer in 3.0
>>
>> Hey all,
>> I checked out 3.0 and built it against Hadoop 0.18.3. Technically I am 
>> running against the Cloudera version, which is a patched 0.18.3. However, I 
>> have also checked out the trunk and built against 0.19.0, and I seem to be 
>> having the same issue.
>>
>> I have a two-column table. Straight selects work, but any time I try a 
>> select (distinct), or anything that involves map/reduce, the job fails.
>>
>> 09/04/17 17:05:45 ERROR exec.ExecDriver: Ended Job =
>> job_200904101611_0018 with exception
>> 'java.lang.NullPointerException(null)'
>> java.lang.NullPointerException
>>        at java.util.Hashtable.put(Hashtable.java:394)
>>        at java.util.Properties.setProperty(Properties.java:143)
>>        at org.apache.hadoop.conf.Configuration.set(Configuration.java:300)
>>        at 
>> org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:398)
>>        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:245)
>>        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:174)
>>        at 
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:207)
>>        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:306)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>>        at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>>        at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
>>
>> FAILED: Execution Error, return code 1 from 
>> org.apache.hadoop.hive.ql.exec.ExecDriver
>>
>> Any ideas?
>>
>>
>
>
