Hi Cheng Lian,
Thanks, printing the stdout/stderr of the forked process is more reasonable.
On 2014/8/19 13:35, Cheng Lian wrote:
The exception indicates that the forked process didn't execute as expected,
thus the test case /should/ fail.
Instead of replacing the exception with a
@rxin With the fixes, I could run it fine on top of branch-1.0
On master, when running on YARN, I am getting another KryoException:
Exception in thread main org.apache.spark.SparkException: Job aborted due
to stage failure: Task 247 in stage 52.0 failed 4 times, most recent
failure: Lost task
Just FYI, thought this might be helpful: I'm refactoring the Hive Thrift server
test suites. These suites also fork new processes and suffer from similar
issues. Stdout and stderr of the forked processes are logged in the new version
of the test suites using utilities from the scala.sys.process package.
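For reference, here is a minimal sketch of how a forked process's stdout and stderr can be captured with scala.sys.process.ProcessLogger. The ForkedOutput wrapper and the echo command are illustrative only, not the actual test-suite code:

```scala
import scala.sys.process._
import scala.collection.mutable.ArrayBuffer

object ForkedOutput {
  // Runs a command, collecting its stdout and stderr lines separately,
  // and returns the exit code along with both streams.
  def run(cmd: Seq[String]): (Int, Seq[String], Seq[String]) = {
    val out = ArrayBuffer.empty[String]
    val err = ArrayBuffer.empty[String]
    val logger = ProcessLogger(line => out += line, line => err += line)
    val exitCode = Process(cmd).!(logger)
    (exitCode, out.toSeq, err.toSeq)
  }

  def main(args: Array[String]): Unit = {
    val (code, out, err) = run(Seq("echo", "hello"))
    println(s"exit=$code stdout=${out.mkString} stderrLines=${err.size}")
  }
}
```

In a test suite, the collected lines can be replayed into the test log when an assertion fails, so a forked-process failure is no longer opaque.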
Has anyone made queries that join different data sources work, especially
joining a Hive table with other data sources?
For example, hql uses HiveContext, and it needs to first call use
database_name, while other data sources use SQLContext. How can SQLContext
know about Hive tables? I follow
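One possible approach, assuming the Spark 1.0/1.1 APIs: HiveContext is a subclass of SQLContext, so a single HiveContext can see both Hive tables and tables registered from other sources, letting HiveQL join across them. The table names, paths, and Visit case class below are hypothetical placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("hive-join-sketch"))
// HiveContext extends SQLContext, so there is no need for a separate
// SQLContext: register non-Hive data against the HiveContext itself.
val hiveContext = new HiveContext(sc)
hiveContext.hql("USE database_name")

// A non-Hive data source: an RDD of case classes loaded from a text file
// (hypothetical schema and path).
case class Visit(userId: Int, url: String)
val visits = sc.textFile("hdfs:///path/to/visits.tsv").map { line =>
  val parts = line.split("\t")
  Visit(parts(0).toInt, parts(1))
}

// The implicit conversion turns the RDD into a SchemaRDD so it can be
// registered as a table visible to HiveQL queries.
import hiveContext.createSchemaRDD
visits.registerAsTable("visits")

// Join the Hive table (hypothetical "users") with the registered RDD.
val joined = hiveContext.hql(
  "SELECT u.name, v.url FROM users u JOIN visits v ON u.id = v.userId")
```

This is only a sketch; it requires a running Spark deployment with Hive support compiled in.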
and even the same process where the data might be cached.
These are the different locality levels:
PROCESS_LOCAL
NODE_LOCAL
RACK_LOCAL
ANY
relevant code:
Hi,
During the 4th ALS iteration, I am noticing that one of the executors gets
disconnected:
14/08/19 23:40:00 ERROR network.ConnectionManager: Corresponding
SendingConnectionManagerId not found
14/08/19 23:40:00 INFO cluster.YarnClientSchedulerBackend: Executor 5
disconnected, so removing it