[GitHub] spark pull request #20642: i m not able to open Spark UI on local using loca...

2018-02-21 Thread AtulKumVerma
Github user AtulKumVerma closed the pull request at:

https://github.com/apache/spark/pull/20642


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark issue #20642: i m not able to open Spark UI on local using localhost:4...

2018-02-21 Thread AtulKumVerma
Github user AtulKumVerma commented on the issue:

https://github.com/apache/spark/pull/20642
  
I resolved this by removing javax.servlet.servlet-api-2.5.jar and using 
servlet-api 3.0 or above.
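
For anyone hitting the same NoSuchMethodError: the servlet 2.5 API jar usually 
leaks onto the classpath through a transitive dependency. A minimal Maven 
sketch of the fix (the `some.group:some-artifact` coordinates are hypothetical; 
substitute whatever `mvn dependency:tree` shows pulling in servlet-api 2.5):

```
<!-- Exclude the servlet 2.5 API leaked by a transitive dependency
     (replace some.group:some-artifact with the real offender). -->
<dependency>
  <groupId>some.group</groupId>
  <artifactId>some-artifact</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Servlet 3.x API, which provides HttpServletRequest.isAsyncStarted() -->
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>javax.servlet-api</artifactId>
  <version>3.1.0</version>
  <scope>provided</scope>
</dependency>
```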


---




[GitHub] spark pull request #20642: i m not able to open Spark UI on local using loca...

2018-02-20 Thread AtulKumVerma
GitHub user AtulKumVerma opened a pull request:

https://github.com/apache/spark/pull/20642

I am not able to open the Spark UI locally using localhost:4040

20-02-18 20:54:24:118 [WARN] org.spark_project.jetty.server.HttpChannel /jobs/ org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:384)
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.isAsyncStarted()Z
	at org.spark_project.jetty.servlets.gzip.GzipHandler.handle(GzipHandler.java:484)
	at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
	at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
	at org.spark_project.jetty.server.Server.handle(Server.java:499)
	at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:311)
	at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
	at org.spark_project.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
	at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
	at org.spark_project.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
	at java.lang.Thread.run(Thread.java:745)
20-02-18 20:54:24:122 [WARN] org.spark_project.jetty.util.thread.QueuedThreadPool org.spark_project.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:610)
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletResponse.getStatus()I
	at org.spark_project.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:112)
	at org.spark_project.jetty.server.Response.sendError(Response.java:597)
	at org.spark_project.jetty.server.HttpChannel.handleException(HttpChannel.java:487)
	at org.spark_project.jetty.server.HttpConnection$HttpChannelOverHttp.handleException(HttpConnection.java:594)
	at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:387)
	at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
	at org.spark_project.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
	at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
	at org.spark_project.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
	at java.lang.Thread.run(Thread.java:745)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/spark branch-2.3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/20642.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #20642


commit 2db523959658d8cd04f83c176e46f6bcdb745335
Author: sethah <shendrickson@...>
Date:   2018-01-10T07:32:47Z

[SPARK-22993][ML] Clarify HasCheckpointInterval param doc

## What changes were proposed in this pull request?

Add a note to the `HasCheckpointInterval` parameter doc that clarifies that 
this setting is ignored when no checkpoint directory has been set on the spark 
context.

## How was this patch tested?

No tests necessary, just a doc update.

Author: sethah <shendrick...@cloudera.com>

Closes #20188 from sethah/als_checkpoint_doc.

(cherry picked from commit 70bcc9d5ae33d6669bb5c97db29087ccead770fb)
Signed-off-by: Felix Cheung <felixche...@apache.org>

commit 60d4d79bb40f13c68773a0224f2003cdca28c138
Author: Josh Rosen <joshrosen@...>
Date:   2018-01-10T08:45:47Z

[SPARK-22997] Add additional defenses against use of freed MemoryBlocks

## What changes were proposed in this pull request?

This patch modifies Spark's `MemoryAllocator` implementations so that 
`free(MemoryBlock)` mutates the passed block to clear pointers (in the off-heap 
case) or null out references to backing `long[]` arrays (in the on-heap case). 
The goal of this change is to add an extra layer of defense against 
use-after-free bugs because currently it's hard to detect corruption caused by 
blind writes to freed memory blocks.
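
The defense can be illustrated with a small self-contained sketch (this is not 
Spark's actual `MemoryAllocator`; the class names here are invented for 
illustration): on `free`, the allocator nulls the block's reference to its 
backing array, so a later blind write fails fast instead of silently 
corrupting memory that may have been reused.

```java
// Illustrative sketch only -- not Spark's MemoryAllocator. Shows the idea of
// mutating a freed block so a use-after-free fails fast.
final class Block {
    long[] data; // backing storage; nulled when the block is freed

    Block(int size) {
        this.data = new long[size];
    }

    void write(int index, long value) {
        if (data == null) {
            throw new IllegalStateException("write to freed block");
        }
        data[index] = value;
    }
}

final class SketchAllocator {
    Block allocate(int size) {
        return new Block(size);
    }

    // Clearing the reference is the "extra layer of defense": code holding a
    // stale Block now throws on write instead of corrupting storage that may
    // have been handed to another consumer.
    void free(Block block) {
        block.data = null;
    }
}
```

With this in place, a stale write surfaces immediately as an exception rather 
than as corruption discovered much later.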

## How was this patch tested?

New unit tests in `PlatformSuite`, including new tests for existing 
functionality because we did not have sufficient mutation coverage of the 
on-heap memory allocator's pooling logic.

Author: Josh Rosen <joshro...@databricks.com>

Closes #20191 from 
JoshRosen/SPARK-22997-add-defenses-against-use-after-free-bugs-in-memory-allocator.

(cherry picked from commit f340b6b3066033d40b7e163fd5fb68e9820adfb1)
Signed-off-by: Josh Rosen <joshro...@databricks.com>

commit 5b5851cb685f395574c94174d45a47c4fbf946c8
Author: Wang Gengliang <ltnwgl@...>
Date:   2018-01-10T17:

[GitHub] spark pull request #20334: How to check registered table name.

2018-01-20 Thread AtulKumVerma
GitHub user AtulKumVerma opened a pull request:

https://github.com/apache/spark/pull/20334

How to check registered table name.

Dear fellows,

I want to know how I can see all of the Datasets or DataFrames registered as 
temporary tables or views in the SQL context.
I read that Catalyst is responsible for maintaining a one-to-one mapping 
between a DataFrame and its temporary table name.
I would just like to list all of those from the catalog.

Your response is highly appreciated.
Thanks to all in advance.
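
For what it's worth, registered temporary views can be listed through the 
catalog, e.g. `spark.catalog.listTables()` from Scala or Python, or, 
equivalently, from SQL:

```
-- Lists the tables and temporary views registered in the current session
SHOW TABLES
```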


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/spark branch-2.3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/20334.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #20334


commit 5244aafc2d7945c11c96398b8d5b752b45fd148c
Author: Xianjin YE <advancedxy@...>
Date:   2018-01-02T15:30:38Z

[SPARK-22897][CORE] Expose stageAttemptId in TaskContext

## What changes were proposed in this pull request?
`stageAttemptId` is added to `TaskContext`, with the corresponding 
construction modifications.

## How was this patch tested?
Added a new test in TaskContextSuite, two cases are tested:
1. Normal case without failure
2. Exception case with resubmitted stages

Link to [SPARK-22897](https://issues.apache.org/jira/browse/SPARK-22897)

Author: Xianjin YE <advance...@gmail.com>

Closes #20082 from advancedxy/SPARK-22897.

(cherry picked from commit a6fc300e91273230e7134ac6db95ccb4436c6f8f)
Signed-off-by: Wenchen Fan <wenc...@databricks.com>

commit b96a2132413937c013e1099be3ec4bc420c947fd
Author: Juliusz Sompolski <julek@...>
Date:   2018-01-03T13:40:51Z

[SPARK-22938] Assert that SQLConf.get is accessed only on the driver.

## What changes were proposed in this pull request?

Assert if code tries to access `SQLConf.get` on an executor.
Doing so can lead to hard-to-detect bugs, where the executor reads 
`fallbackConf` and falls back to default config values, ignoring potentially 
changed non-default configs.
If a config is to be used in executor code, it needs to be read on the 
driver and passed explicitly.

## How was this patch tested?

Check in existing tests.

Author: Juliusz Sompolski <ju...@databricks.com>

Closes #20136 from juliuszsompolski/SPARK-22938.

(cherry picked from commit 247a08939d58405aef39b2a4e7773aa45474ad12)
Signed-off-by: Wenchen Fan <wenc...@databricks.com>

commit a05e85ecb76091567a26a3a14ad0879b4728addc
Author: gatorsmile <gatorsmile@...>
Date:   2018-01-03T14:09:30Z

[SPARK-22934][SQL] Make optional clauses order insensitive for CREATE TABLE 
SQL statement

## What changes were proposed in this pull request?
Currently, our CREATE TABLE syntax requires the EXACT order of clauses. It 
is pretty hard to remember. Thus, this PR makes the optional clauses 
order-insensitive for the `CREATE TABLE` SQL statement.

```
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] [db_name.]table_name
[(col_name1 col_type1 [COMMENT col_comment1], ...)]
USING datasource
[OPTIONS (key1=val1, key2=val2, ...)]
[PARTITIONED BY (col_name1, col_name2, ...)]
[CLUSTERED BY (col_name3, col_name4, ...) INTO num_buckets BUCKETS]
[LOCATION path]
[COMMENT table_comment]
[TBLPROPERTIES (key1=val1, key2=val2, ...)]
[AS select_statement]
```

The proposal is to make the following clauses order insensitive.
```
[OPTIONS (key1=val1, key2=val2, ...)]
[PARTITIONED BY (col_name1, col_name2, ...)]
[CLUSTERED BY (col_name3, col_name4, ...) INTO num_buckets BUCKETS]
[LOCATION path]
[COMMENT table_comment]
[TBLPROPERTIES (key1=val1, key2=val2, ...)]
```

The same idea is also applicable to Create Hive Table.
```
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
[(col_name1[:] col_type1 [COMMENT col_comment1], ...)]
[COMMENT table_comment]
[PARTITIONED BY (col_name2[:] col_type2 [COMMENT col_comment2], ...)]
[ROW FORMAT row_format]
[STORED AS file_format]
[LOCATION path]
[TBLPROPERTIES (key1=val1, key2=val2, ...)]
[AS select_statement]
```

The proposal is to make the following clauses order insensitive.
```
[COMMENT table_comment]
[PARTITIONED BY (col_name2[:] col_type2 [COMMENT col_comment2], ...)]
[ROW FORMAT row_format]
[STORED AS file_format]
[LOCATION path]
[TBLPROPERTIES (key1=val1, key2=val2, ...)]
```
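
For example, with this change the following two statements would be 
equivalent (the table, columns, and property values are made up for 
illustration):

```
CREATE TABLE t (a INT, b STRING)
USING parquet
OPTIONS (compression 'snappy')
PARTITIONED BY (b)
LOCATION '/tmp/t'
COMMENT 'demo table'

CREATE TABLE t (a INT, b STRING)
USING parquet
COMMENT 'demo table'
LOCATION '/tmp/t'
PARTITIONED BY (b)
OPTIONS (compression 'snappy')
```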

## How was this patch tested?
Added test cases

Author: gatorsmile <gato