Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17696#discussion_r112636940
--- Diff: docs/configuration.md ---
@@ -213,6 +213,15 @@ of the most common options to set are:
and typically can have up to 50 characters
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/15078
**In Linux**
mvn -Dtest=none
-DwildcardSuites=org.apache.spark.deploy.rest.StandaloneRestSubmitSuite test
I want to know why? Thank you! @HyukjinKwon
**Report errors
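Written out in full, the single-suite invocation quoted above would look like this (a sketch, assuming it is run from the root of a Spark source checkout, using Spark's bundled `build/mvn` wrapper):

```shell
# -Dtest=none skips the Java (surefire) tests; -DwildcardSuites selects
# only the named ScalaTest suite to run.
./build/mvn -Dtest=none \
  -DwildcardSuites=org.apache.spark.deploy.rest.StandaloneRestSubmitSuite \
  test
```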
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17709
This PR title should add [DOC].
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17696
Please take a screenshot; I want to see how it shows on your computer.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17696
@HyukjinKwon
But I wrote it, and it did not work ...
My computer runs the Windows 7 system.
The browser is Chrome.
![1](https://cloud.githubusercontent.com/assets/26266482/25218223/a72f629a
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17696
[SPARK-20401][DOC]In the spark official configuration document, the
'spark.driver.supervise' configuration parameter specification and default
values are necessary.
## What ch
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17682
Can I use the Jenkins test?
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17682
@srowen
I found 9 places.
They are in the following Scala files:
MesosClusterPage.scala
DriverPage.scala
ApplicationPage.scala
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17682
OK, I have not learned how to upload pictures in a PR; I will learn about
it. Thank you. @HyukjinKwon
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17682
OK, I'm dealing with it right away. @srowen
This function is very important to our large Spark distributed
system, because users care more about this UI than our
devel
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17682
[SPARK-20385][WEB-UI]'Submitted Time' field, the date format needs to be
formatted, in running Drivers table or Completed Drivers table in master web ui.
## What changes were p
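The fix the PR title describes (formatting the raw 'Submitted Time' value) can be sketched in plain Java. The pattern string, field name, and sample value here are assumptions for illustration, not necessarily what the PR uses:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SubmittedTimeFormat {
    public static void main(String[] args) {
        // Hypothetical raw epoch-millis value standing in for a driver's
        // submitted time as stored by the master.
        long submittedMillis = 1492000000000L;

        // Render it as a human-readable date instead of raw Date.toString().
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // fixed zone so output is stable
        System.out.println(fmt.format(new Date(submittedMillis)));
        // prints 2017/04/12 12:26:40
    }
}
```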
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17656
[SPARK-20354][CORE][REST-API] '/api/v1/applications' return sparkUser is
null in REST API.
## What changes were proposed in this pull request?
When I request access to the '
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
@HyukjinKwon
**I just replaced spark-core_2.11-2.2.0-SNAPSHOT.jar.
I successfully started the history server, but when I click the Go button there is
now an error; the error log is as follows
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
@HyukjinKwon
Thank you again; now Windows machines can compile Apache master.
No, my Java version is 1.8.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
@ajbozarth
@HyukjinKwon
OK, thanks. I will close this PR.
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17608
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
@HyukjinKwon
**Now building Apache master gives an error.**
[INFO] Compiling 73 Java sources to
/home/spark_build/spark-2.1.0/common/network-common/target/scala-2.11/classes
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
I am trying hard to test it in apache master.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
@srowen
@jerryshao
I understand, thank you.
I changed the code in my project to keep the program consistent with the
latest example.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
  @Override
  public String call(Tuple2<String, String> tuple2) {
    return tuple2._2();
  }
});
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
![Uploading image.png…]()
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
@ajbozarth @HyukjinKwon I have added the PR description again.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
eg
messages.map(Tuple2::_2) is Scala-style grammar code, but
JavaKafkaWordCount is a Java class.
I think the Java example program should be fully implemented in Java
code
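For the record, `Tuple2::_2` is actually Java 8 method-reference syntax rather than Scala grammar (`Tuple2` is Scala's pair class, but the `::` reference is Java). A minimal self-contained sketch of the two equivalent styles, using `java.util.stream` and `Map.Entry` in place of Spark's `JavaDStream` and `Tuple2` so it runs without Spark:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MapValueExtract {
    public static void main(String[] args) {
        List<Map.Entry<String, String>> pairs = Arrays.asList(
            new SimpleEntry<>("k1", "hello"),
            new SimpleEntry<>("k2", "world"));

        // Anonymous-class style, analogous to
        // new Function<Tuple2<String, String>, String>() { ... call(...) { return tuple2._2(); } }
        Function<Map.Entry<String, String>, String> explicit =
            new Function<Map.Entry<String, String>, String>() {
                @Override
                public String apply(Map.Entry<String, String> e) {
                    return e.getValue();
                }
            };

        // Java 8 method-reference style, analogous to messages.map(Tuple2::_2)
        List<String> viaRef = pairs.stream().map(Map.Entry::getValue).collect(Collectors.toList());
        List<String> viaAnon = pairs.stream().map(explicit).collect(Collectors.toList());

        System.out.println(viaRef.equals(viaAnon)); // prints true
        System.out.println(viaRef);                 // prints [hello, world]
    }
}
```

Both forms compile to the same behavior; the method reference is just the terser Java 8 spelling.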
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
@jerryshao
Excuse me, I would like to ask: why are some of these Java examples
written using Scala? Thank you!
SparkConf sparkConf = new
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
Browser is Chrome.
Spark version is 2.1.0.
It must be the History Server web UI.
My Url:
http://10.43.183.120:18082/history/app-20170411132432-0004/jobs
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
I just downloaded the latest Spark master code to compile and install,
and tested the problem; there are still bugs, and the page is wrong.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
So how do I deal with this PR? @ajbozarth
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
@ajbozarth I am using the latest spark version.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
Is this the only way to encode so that the browser does not escape our
special characters? Then the page will not error.
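The encoding idea discussed here (keeping special characters in the 'Go' button's query string from being mangled) can be sketched with the standard `java.net.URLEncoder`; the sample string is hypothetical, not taken from the PR:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class PageQueryEncode {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Hypothetical job description containing characters that would break
        // a query string if left unescaped.
        String desc = "select * from t where a & b";

        // Percent-encode it for safe use as a URL query parameter value:
        // spaces become '+', '&' becomes %26.
        String encoded = URLEncoder.encode(desc, "UTF-8");
        System.out.println(encoded); // prints select+*+from+t+where+a+%26+b
    }
}
```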
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17608
[SPARK-20293][WEB UI][History]In the page of 'jobs' or 'stages' of history
server web ui, click the 'Go' button, query paging data, the page error
## What chan
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17593
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17580
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17563
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17563
Is nobody dealing with this PR? @srowen
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17593
Can sorting change this? I do not think so.
Even if the sorting only shows the last 200, it does not contradict the
issue I raised.
The last 200 are the concept of a batch of data
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17593
[SPARK-20279][WEB-UI]In web ui,'Only showing 200' should be changed to
'only showing last 200'.
## What changes were proposed in this pull request?
In web
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
@jerryshao
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import
GitHub user guoxiaolongzte reopened a pull request:
https://github.com/apache/spark/pull/17580
[20269][Structured Streaming] add class 'JavaWordCountProducer' to 'provide
java word count producer'.
## What changes were proposed in this pull request?
GitHub user guoxiaolongzte reopened a pull request:
https://github.com/apache/spark/pull/17563
[SPARK-20005][WEB UI]fix 'There is no "Newline" in UI in describtion'.
## What changes were proposed in this pull request?
Before I fix this issue, if the des
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17580
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17563
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17563
@ajbozarth
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17481
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
Because of Kafka's API changes, we do not want to delete it, but to
maintain and modify it.
Although it is strictly Kafka producer code, it is part of Spark
Streaming, it
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
When a user uses Spark to develop a streaming & Kafka application, he first
wants to find and learn from the example program in 'spark \ examples \ src \ main \
java \ org \ apache \ spark \
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
Sorry, Spark's Java code style is different from my project team's
style. Now I know, and it has been fixed.
Use 2-space indentation in general. For function declarations, use 4
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
The title, PR description, and motivation have been modified.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17580
[20269][Structured Streaming][Examples] add JavaWordCountProducer in
streaming examples
## What changes were proposed in this pull request?
run example of streaming kafka,currently
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17579
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17579
[20269][Structured Streaming][Examples] add JavaWordCountProducer in
streaming examples
## What changes were proposed in this pull request?
run example of streaming kafka,currently
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17578
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17578
[SPARK-20269][Structured Streaming][Examples]add JavaWordCountProducer in
streaming examples
## What changes were proposed in this pull request?
run example of streaming kafka
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17563#discussion_r110415759
--- Diff: docs/monitoring.md ---
@@ -299,6 +299,7 @@ can be identified by their `[attempt-id]`. In the API
listed below, when running
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17563
@srowen
Look at the attachment of 'SPARK-20005' and you will know what happened.
Sorry, this is unrelated; my code merge touching docs/monitoring.md was in error.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
@srowen
In the Jobs page of the web UI, choose the 'Show' text box and fill in a number
to show that is greater than or equal to the total number of entries. Click on the
"Go&q
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17563
[SPARK-20005]fix 'There is no "Newline" in UI in describtion'.
## What changes were proposed in this pull request?
Before I fix this issue, if the describt
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17534#discussion_r110303522
--- Diff: docs/monitoring.md ---
@@ -299,12 +299,12 @@ can be identified by their `[attempt-id]`. In the API
listed below, when running
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17534
Added one more
2. '/applications/[app-id]/stages/[stage-id]' in REST API, remove redundant
description '?status=[active|complete|pending|failed] list only stages in th
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17534
Because I just used this API today for development, and only then found the problem.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17534
[SPARK-20218]'/applications/[app-id]/stages' in REST API,add description.
## What changes were proposed in this pull request?
'/applications/[app-id]/stages'
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17507
Please help merge into Spark master. @srowen
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17507
@srowen Please help with code review, thank you!
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17507
[SPARK-20190]'/applications/[app-id]/jobs' in rest api, status should be
[running|succeeded|failed|unknown]
## What changes were proposed in this pu
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17498
@srowen @jerryshao
If a Spark application developer uses event compression, he will not see
from the official Spark documentation that spark.io.compression.codec is the
specified compression
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17498
@jerryshao
This is just an optimization suggestion.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17498
@srowen I added a space
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
@ajbozarth
I now need to constantly switch pageSize to change the paging data. Sometimes I
want to see all the data, but when showing all the data, the paging area is lost. When I
quit, I have to re
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17498#discussion_r109172977
--- Diff: docs/configuration.md ---
@@ -773,14 +774,15 @@ Apart from these, the following properties are also
available, and may be useful
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17498
[SPARK-20177]Document about compression way has some little detail changes.
## What changes were proposed in this pull request?
Document compression way little
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17497
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17497
[SPARK-20177]Document about compression way has some little detail changes.
## What changes were proposed in this pull request?
Document compression way little detail changes.
1
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
@ajbozarth
Sometimes I need to see all the data, and sometimes I need to see the
paged data; if I go back and there is too much cached data, I need a
long time to find things. But if the
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
@srowen
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
Sometimes I need to see all the data, and sometimes I need to see the
paged data; if I go back and there is too much cached data, I need a
long time to find things. But if the solution to this
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
Sorry, I downloaded the master of branch 2.1. My issue also mentioned
removing the code, and it should not be Resolution: Won't Fix.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
spark2.0.2
val shortShuffleMgrNames = Map(
  "sort" -> classOf[org.apache.spark.shuffle.sort.SortShuffleManager].getName,
  "tungsten-sort" -> classOf[org.apache.spark.shuffle.sort.SortShuffleManager].getName)
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
@jerryshao @rxin @srowen
In Spark 2.1.0, "tungsten-sort" ->
classOf[org.apache.spark.shuffle.sort.SortShuffleManager].getName has been
deleted, but you didn't agree w
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17479
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17490
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17490
OK, I know. Thank you very much.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17490
However, the name of the dynamic method cannot be exactly matched.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
Please help with code review, thank you.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17490
[SPARK-20167]In SqlBase.g4,some of the comments is not correct.
## What changes were proposed in this pull request?
In SqlBase.g4,some of the comments is not correct.
eg
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
If you expand the table to equal the total rows, the paging options will
disappear. This is unreasonable. For me, showing a large number of data pages
this way is neither convenient nor reasonable
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
I think this design is not reasonable, because sometimes I need to look at all
the data, and sometimes I need to query by page. I can do it across all
paged tables in the UI.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
Ok, I have fixed the PR description.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17481
[SPARK-20157]In the menu 'Storage' in Web UI, click the Go button, and
shows no paging menu interface.
## What changes were proposed in this pull request?
(Please
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17479#discussion_r108947909
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage-template.html
---
@@ -24,7 +24,7 @@ Summary
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17479#discussion_r108876916
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage-template.html
---
@@ -24,7 +24,7 @@ Summary
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17479#discussion_r108863264
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage-template.html
---
@@ -24,7 +24,7 @@ Summary
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17479
[SPARK-20154][Web UI]In web ui, http://ip:4040/executors/, the title 'Storage
Memory' should modify 'Storage Memory used/total'
## What changes were p
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
Why has HashShuffleManager been deleted?
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
Thanks. I understand this.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
I think I should delete it and update the document at the same time, to
ensure the uniqueness of the function.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
I think with the compatibility mapping, the resulting shuffle manager is not what
I want. Only the parameter value 'sort' really means SortShuffleManager.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
Ok, I have modified the title.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17220
In Spark 1.4.1, you deleted the 'hash' parameter. I think it should be
deleted. The documents indicated on the Spark website should not keep this
logic i
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17220
remove tungsten-sort. Because it is not represent 'org.apache.spark.sh…
JIRA Issue: https://github.com/guoxiaolongzte/spark/tree/SPARK-19862
In SparkEnv.scala,remove tun