RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-16 Thread linkpatrickliu
It seems the thriftServer cannot connect to Zookeeper, so it cannot acquire
the locks.

This is what the log looks like when I run the following statement in SparkSQL:
load data inpath 'kv1.txt' into table src;
log:
14/09/16 14:40:47 INFO Driver: PERFLOG method=acquireReadWriteLocks
14/09/16 14:40:47 INFO ClientCnxn: Opening socket connection to server
SVR4044HW2285.hadoop.lpt.qa.nt.ctripcorp.com/10.2.4.191:2181. Will not
attempt to authenticate using SASL (unknown error)
14/09/16 14:40:47 INFO ClientCnxn: Socket connection established to
SVR4044HW2285.hadoop.lpt.qa.nt.ctripcorp.com/10.2.4.191:2181, initiating
session
14/09/16 14:40:47 INFO ClientCnxn: Session establishment complete on server
SVR4044HW2285.hadoop.lpt.qa.nt.ctripcorp.com/10.2.4.191:2181, sessionid =
0x347c1b1f78d495e, negotiated timeout = 18
14/09/16 14:40:47 INFO Driver: /PERFLOG method=acquireReadWriteLocks
start=1410849647447 end=1410849647457 duration=10

You can see that, between the PERFLOG entries for acquireReadWriteLocks, the
ClientCnxn connects to Zookeeper. Once the connection is established, the
acquireReadWriteLocks phase can finish.

But when I run the same SQL through the ThriftServer, here is the log:
14/09/16 14:40:09 INFO Driver: PERFLOG method=acquireReadWriteLocks

It waits here forever.
So I suspect that the reason DROP or LOAD fails in thriftServer mode is that
the thriftServer cannot connect to Zookeeper.
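
A quick way to verify this from the host running the thriftServer is to poke
the ZooKeeper server directly (a rough sketch; the host/port are taken from
the log above and the zkCli.sh path is an assumption, so adjust both to your
environment):

# ZooKeeper should answer "imok" to the "ruok" four-letter command.
echo ruok | nc SVR4044HW2285.hadoop.lpt.qa.nt.ctripcorp.com 2181

# Or open an interactive session with the ZooKeeper CLI.
/usr/lib/zookeeper/bin/zkCli.sh -server SVR4044HW2285.hadoop.lpt.qa.nt.ctripcorp.com:2181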








RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-16 Thread linkpatrickliu
Hi, Hao Cheng.

I have done some more tests, and the results show that the thriftServer can
connect to Zookeeper.

However, I found something more interesting, and I think I have found a bug!

Test procedure:
Test1:
(0) Use beeline to connect to the thriftServer.
(1) Switch database: use dw_op1; (OK)
The logs show that the thriftServer connected to Zookeeper and acquired the
locks.

(2) Drop a table: drop table src; (Blocked)
The logs show that the thriftServer is stuck in acquireReadWriteLocks.

My suspicion:
The reason I cannot drop table src is that the first SQL (use dw_op1) left
locks in Zookeeper that were never released, so when the second SQL tries to
acquire locks in Zookeeper, it blocks.

Test2:
Restart the thriftServer. Instead of switching to another database, I just
drop tables in the default database:
(0) Restart the thriftServer and use beeline to connect to it.
(1) Drop a table: drop table src; (OK)
Amazing! It succeeds!
(2) Drop again: drop table src2; (Blocked)
Same problem: the thriftServer is blocked in the acquireReadWriteLocks phase.

As you can see, only the first SQL that requires locks succeeds. So I think
the reason is that the thriftServer cannot release locks in Zookeeper
correctly.
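
If that is the case, the stale lock znodes should still be visible in
ZooKeeper after the first statement finishes. A sketch of how to check,
assuming Hive's default lock namespace (hive.zookeeper.namespace =
hive_zookeeper_namespace) and the zkCli.sh shipped with ZooKeeper:

# List Hive's lock znodes on one of the quorum hosts from hive-site.xml.
/usr/lib/zookeeper/bin/zkCli.sh -server machine01:2181 <<'EOF'
ls /hive_zookeeper_namespace
ls /hive_zookeeper_namespace/default
EOF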












RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-16 Thread Cheng, Hao
Thank you for pasting the steps. I will look at this and hopefully come up
with a solution soon.

-Original Message-
From: linkpatrickliu [mailto:linkpatrick...@live.com] 
Sent: Tuesday, September 16, 2014 3:17 PM
To: u...@spark.incubator.apache.org
Subject: RE: SparkSQL 1.1 hang when DROP or LOAD




Re: SparkSQL 1.1 hang when DROP or LOAD

2014-09-16 Thread Yin Huai
Seems https://issues.apache.org/jira/browse/HIVE-5474 is related?

On Tue, Sep 16, 2014 at 4:49 AM, Cheng, Hao hao.ch...@intel.com wrote:

 Thank you for pasting the steps, I will look at this, hopefully come out
 with a solution soon.





Re: SparkSQL 1.1 hang when DROP or LOAD

2014-09-16 Thread Yin Huai
I meant it may be a Hive bug since we also call Hive's drop table
internally.

On Tue, Sep 16, 2014 at 1:44 PM, Yin Huai huaiyin@gmail.com wrote:

 Seems https://issues.apache.org/jira/browse/HIVE-5474 is related?

 On Tue, Sep 16, 2014 at 4:49 AM, Cheng, Hao hao.ch...@intel.com wrote:

 Thank you for pasting the steps, I will look at this, hopefully come out
 with a solution soon.






RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-16 Thread Cheng, Hao
Thank you Yin Huai. This is probably true.

I saw that in the hive-site.xml Liu has changed this entry, whose default
should be false:

  <property>
    <name>hive.support.concurrency</name>
    <description>Enable Hive's Table Lock Manager Service</description>
    <value>true</value>
  </property>

Someone is working on upgrading Hive to 0.13 for SparkSQL
(https://github.com/apache/spark/pull/2241); not sure if you can wait for
that. ☺
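
In the meantime, a quick way to confirm the lock manager is the culprit (a
test only, not a proper fix) might be to override that setting when starting
the thrift server, assuming start-thriftserver.sh forwards --hiveconf options
to HiveServer2 as the Spark SQL docs describe for the port/host settings:

# Start the thrift server with Hive's lock manager disabled, so DROP/LOAD
# no longer need to take ZooKeeper locks.
sbin/start-thriftserver.sh --hiveconf hive.support.concurrency=false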

From: Yin Huai [mailto:huaiyin@gmail.com]
Sent: Wednesday, September 17, 2014 1:50 AM
To: Cheng, Hao
Cc: linkpatrickliu; u...@spark.incubator.apache.org
Subject: Re: SparkSQL 1.1 hang when DROP or LOAD

I meant it may be a Hive bug since we also call Hive's drop table internally.





RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-15 Thread Cheng, Hao
What's your Spark / Hadoop version? And also the hive-site.xml? Most cases
like this are caused by an incompatibility between the Hadoop client jar and
the Hadoop cluster.

-Original Message-
From: linkpatrickliu [mailto:linkpatrick...@live.com] 
Sent: Monday, September 15, 2014 2:35 PM
To: u...@spark.incubator.apache.org
Subject: SparkSQL 1.1 hang when DROP or LOAD

I started sparkSQL thrift server:
sbin/start-thriftserver.sh

Then I use beeline to connect to it:
bin/beeline
!connect jdbc:hive2://localhost:1 op1 op1

I have created a database for user op1.
create database dw_op1;

And grant all privileges to user op1;
grant all on database dw_op1 to user op1;

Then I create a table:
create table src(key int, value string);

Now, I want to load data into this table:
load data inpath 'kv1.txt' into table src; (kv1.txt is located in the
/user/op1 directory in HDFS)

However, the client will hang...

The log in the thrift server:
14/09/15 14:21:25 INFO Driver: PERFLOG method=acquireReadWriteLocks


Then I Ctrl-C to stop the beeline client and restart it.
Now I want to drop the table src in dw_op1:
use dw_op1;
drop table src;

Then the beeline client hangs again.
The log in the thrift server:
14/09/15 14:23:27 INFO Driver: PERFLOG method=acquireReadWriteLocks


Anyone can help on this? Many thanks!
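
For reference, here is a compact scripted version of the repro (a sketch; it
assumes the default HiveServer2 port 10000, that the commands are run from the
Spark directory, and that kv1.txt is already in /user/op1 on HDFS):

# Start the thrift server, then run the statements in one beeline call.
# The LOAD (or a later DROP) is where the client hangs in acquireReadWriteLocks.
sbin/start-thriftserver.sh
bin/beeline -u jdbc:hive2://localhost:10000 -n op1 -p op1 \
  -e "create table src(key int, value string); load data inpath 'kv1.txt' into table src;"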






RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-15 Thread linkpatrickliu
Hi, Hao Cheng,

Here is the Spark/Hadoop version:
Spark version = 1.1.0
Hadoop version = 2.0.0-cdh4.6.0

And hive-site.xml:
<configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://ns</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>ns</value>
  </property>

  <property>
    <name>dfs.ha.namenodes.ns</name>
    <value>machine01,machine02</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.ns.machine01</name>
    <value>machine01:54310</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.machine02</name>
    <value>machine02:54310</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/metastore</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive_user</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive_123</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.autoCreateTables</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.support.concurrency</name>
    <description>Enable Hive's Table Lock Manager Service</description>
    <value>true</value>
  </property>

  <property>
    <name>hive.zookeeper.quorum</name>
    <value>machine01,machine02,machine03</value>
    <description>Zookeeper quorum used by Hive's Table Lock Manager</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>Hive warehouse directory</description>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>machine01:8032</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress.type</name>
    <value>BLOCK</value>
  </property>
  <property>
    <name>hive.exec.show.job.failure.debug.info</name>
    <value>true</value>
    <description>
      If a job fails, whether to provide a link in the CLI to the task with the
      most failures, along with debugging hints if applicable.
    </description>
  </property>
  <property>
    <name>hive.hwi.listen.host</name>
    <value>0.0.0.0</value>
    <description>This is the host address the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.hwi.listen.port</name>
    <value></value>
    <description>This is the port the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.hwi.war.file</name>
    <value>/lib/hive-hwi-0.10.0-cdh4.2.0.war</value>
    <description>This is the WAR file with the jsp content for Hive Web Interface</description>
  </property>
  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///usr/lib/hive/lib/hive-hbase-handler-0.10.0-cdh4.6.0.jar,file:///usr/lib/hbase/hbase-0.94.15-cdh4.6.0-security.jar,file:///usr/lib/zookeeper/zookeeper.jar</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>machine01,machine02,machine03</value>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
    <description>In unsecure mode, setting this property to true will cause
      the metastore to execute DFS operations using the client's reported user
      and group permissions. Note that this property must be set on both the
      client and server sides. Further note that its best effort. If client
      sets its to true and server sets it to false, client setting will be
      ignored.</description>
  </property>
  <property>
    <name>hive.security.authorization.enabled</name>
    <value>true</value>
    <description>enable or disable the hive client authorization</description>
  </property>
  <property>
    <name>hive.metastore.authorization.storage.checks</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.security.authorization.createtable.owner.grants</name>
    <value>ALL</value>
    <description>the privileges automatically granted to the owner whenever
      a table gets created.
      An example like select,drop will grant select and drop privilege to
      the owner of the table</description>
  </property>
</configuration>
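
One note on how this file takes effect for the thrift server: Spark SQL picks
up Hive settings from a hive-site.xml placed in Spark's conf/ directory, so
the lock-manager settings above only apply if the file is there before the
server starts (the source path below is just an example):

# Copy the Hive config into Spark's conf dir and (re)start the thrift server.
cp /etc/hive/conf/hive-site.xml "$SPARK_HOME/conf/hive-site.xml"
"$SPARK_HOME/sbin/start-thriftserver.sh"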





RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-15 Thread Cheng, Hao
The Hadoop client jar should be assembled into the uber-jar, but (I suspect)
it's probably not compatible with your Hadoop cluster.
Can you also paste the Spark uber-jar name? It will usually be under a path
like lib/spark-assembly-1.1.0-xxx-hadoopxxx.jar.


-Original Message-
From: linkpatrickliu [mailto:linkpatrick...@live.com] 
Sent: Tuesday, September 16, 2014 12:14 PM
To: u...@spark.incubator.apache.org
Subject: RE: SparkSQL 1.1 hang when DROP or LOAD

Hi, Hao Cheng,

Here is the Spark/Hadoop version:
Spark version = 1.1.0
Hadoop version = 2.0.0-cdh4.6.0


RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-15 Thread linkpatrickliu
Hi, Hao Cheng,

This is my spark assembly jar name:
spark-assembly-1.1.0-hadoop2.0.0-cdh4.6.0.jar

I compiled Spark 1.1.0 with the following commands:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -Dhadoop.version=2.0.0-cdh4.6.0 -Phive -Pspark-ganglia-lgpl -DskipTests package
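
One way to double-check which Hadoop client actually ended up in that assembly
(a sketch; it assumes Hadoop's common-version-info.properties is packaged at
the root of the uber-jar, as it normally is in hadoop-common):

# Print the Hadoop version info bundled inside the assembly jar.
unzip -p lib/spark-assembly-1.1.0-hadoop2.0.0-cdh4.6.0.jar common-version-info.properties
# If the CDH client is bundled, the version line should show 2.0.0-cdh4.6.0.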







RE: SparkSQL 1.1 hang when DROP or LOAD

2014-09-15 Thread Cheng, Hao
Sorry, I am not able to reproduce that. 

Can you try adding the following entries to the hive-site.xml? I know they
have default values, but let's make them explicit.

hive.server2.thrift.port
hive.server2.thrift.bind.host
hive.server2.authentication (NONE, KERBEROS, LDAP, PAM or CUSTOM)
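
Alternatively, for a quick test these can be passed on the command line when
starting the thrift server (a sketch; the values shown are just the usual
defaults):

# Start the thrift server with the HiveServer2 settings made explicit,
# then connect with beeline against the same host and port.
sbin/start-thriftserver.sh \
  --hiveconf hive.server2.thrift.port=10000 \
  --hiveconf hive.server2.thrift.bind.host=localhost \
  --hiveconf hive.server2.authentication=NONE
bin/beeline -u jdbc:hive2://localhost:10000 -n op1 -p op1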

-Original Message-
From: linkpatrickliu [mailto:linkpatrick...@live.com] 
Sent: Tuesday, September 16, 2014 1:10 PM
To: u...@spark.incubator.apache.org
Subject: RE: SparkSQL 1.1 hang when DROP or LOAD

Besides,

When I use bin/spark-sql, I can load data and drop tables freely.

Only when I use sbin/start-thriftserver.sh and connect with beeline does the
client hang!


