[jira] [Created] (SPARK-21120) Increasing the master's metrics is conducive to Spark cluster management system monitoring.

2017-06-16 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-21120:
--

 Summary: Increasing the master's metrics is conducive to Spark 
cluster management system monitoring.
 Key: SPARK-21120
 URL: https://issues.apache.org/jira/browse/SPARK-21120
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21120) Increasing the master's metrics is conducive to Spark cluster management system monitoring.

2017-06-18 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21120:
---
Attachment: 1.png

> Increasing the master's metrics is conducive to Spark cluster management 
> system monitoring.
> --
>
> Key: SPARK-21120
> URL: https://issues.apache.org/jira/browse/SPARK-21120
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png
>
>
> The Master currently exposes very few metrics, which cannot meet the needs 
> of large-scale Spark cluster management systems. So I will add the relevant 
> metrics as completely as possible.






[jira] [Updated] (SPARK-21120) Increasing the master's metrics is conducive to Spark cluster management system monitoring.

2017-06-18 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21120:
---
Description: The Master currently exposes very few metrics, which cannot meet 
the needs of large-scale Spark cluster management systems. So I will add the 
relevant metrics as completely as possible.
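
For reference, the master's metrics are published through Spark's metrics system, which is configured in conf/metrics.properties. A minimal sketch (the sink choice and reporting period here are illustrative, not part of this proposal) that would let a cluster management system collect master metrics:

```
# conf/metrics.properties -- illustrative sketch only
# Report all instances' metrics (including the master's) to CSV every minute.
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
*.sink.csv.period=1
*.sink.csv.unit=minutes
# Also expose JVM metrics for the master instance.
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```

Any new master metrics added under this JIRA would then flow to whatever sink the operator configures.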

> Increasing the master's metrics is conducive to Spark cluster management 
> system monitoring.
> --
>
> Key: SPARK-21120
> URL: https://issues.apache.org/jira/browse/SPARK-21120
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png
>
>
> The Master currently exposes very few metrics, which cannot meet the needs 
> of large-scale Spark cluster management systems. So I will add the relevant 
> metrics as completely as possible.






[jira] [Commented] (SPARK-21120) Increasing the master's metrics is conducive to Spark cluster management system monitoring.

2017-06-18 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053426#comment-16053426
 ] 

guoxiaolongzte commented on SPARK-21120:


Sorry, for the last two or three days I did not deal with my JIRA in time.

> Increasing the master's metrics is conducive to Spark cluster management 
> system monitoring.
> --
>
> Key: SPARK-21120
> URL: https://issues.apache.org/jira/browse/SPARK-21120
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png
>
>
> The Master currently exposes very few metrics, which cannot meet the needs 
> of large-scale Spark cluster management systems. So I will add the relevant 
> metrics as completely as possible.






[jira] [Commented] (SPARK-21200) Spark REST API is not working or Spark documentation is wrong.

2017-06-23 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061737#comment-16061737
 ] 

guoxiaolongzte commented on SPARK-21200:


Please give an example, such as which specific REST API.
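
For anyone hitting this: in Spark 2.x the monitoring REST API is rooted at /api/v1 on the driver UI (port 4040 by default) or on the history server (port 18080). A minimal sketch of querying it; the host names and the sample payload below are hypothetical:

```python
import json

def applications_url(host, port=4040):
    # The v1 monitoring API lives under /api/v1 on the driver UI
    # (or under the same path on the history server, default port 18080).
    return "http://{}:{}/api/v1/applications".format(host, port)

# Shape of a typical /applications response (hypothetical data):
sample = json.loads('[{"id": "app-20170623000000-0001", "name": "TestApp"}]')
app_ids = [app["id"] for app in sample]
```

So, for example, `curl http://localhost:4040/api/v1/applications` while the application is running; if older URLs stopped working, the /api/v1 prefix is the first thing to check.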

> Spark REST API is not working or Spark documentation is wrong.
> --
>
> Key: SPARK-21200
> URL: https://issues.apache.org/jira/browse/SPARK-21200
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.1.1
>Reporter: Srinivasarao Daruna
>
> Unable to access the Spark REST API.
> I was able to access it as documented in an older version of Spark, but with 
> Spark 2.1.1, when I tried to do the same, it did not work.
> Either there is a code bug or the documentation has to be updated.






[jira] [Created] (SPARK-21250) Add a URL to the 'Running Executors' table on the worker page to visit the job page

2017-06-29 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-21250:
--

 Summary: Add a URL to the 'Running Executors' table on the worker 
page to visit the job page
 Key: SPARK-21250
 URL: https://issues.apache.org/jira/browse/SPARK-21250
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Minor


Add a URL to the 'Running Executors' table on the worker page to visit the job page.

When I click the 'Name' URL, the current page jumps to the job page. Of course, 
this applies only to the 'Running Executors' table.

The 'Name' URL does not exist in the 'Finished Executors' table, so clicking 
there does not jump to any page.






[jira] [Created] (SPARK-21297) Add State in the 'Session Statistics' table and counts in the 'JDBC/ODBC Server' page.

2017-07-03 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-21297:
--

 Summary: Add State in the 'Session Statistics' table and counts in 
the 'JDBC/ODBC Server' page.
 Key: SPARK-21297
 URL: https://issues.apache.org/jira/browse/SPARK-21297
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Minor


1. Add State in the 'Session Statistics' table on the 'JDBC/ODBC Server' page. 
The purpose is to identify whether each session is online or offline when there 
is a large number of sessions.

2. Add counts for 'Session Statistics' and 'SQL Statistics' on the 'JDBC/ODBC 
Server' page. The purpose is to present the statistics clearly.






[jira] [Created] (SPARK-21325) The 'spark-submit' options '--jars' and '--files': jars and files can be placed locally or on HDFS.

2017-07-06 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-21325:
--

 Summary: The 'spark-submit' options '--jars' and '--files': 
jars and files can be placed locally or on HDFS.
 Key: SPARK-21325
 URL: https://issues.apache.org/jira/browse/SPARK-21325
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Minor









[jira] [Commented] (SPARK-21325) The 'spark-submit' options '--jars' and '--files': jars and files can be placed locally or on HDFS.

2017-07-06 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076202#comment-16076202
 ] 

guoxiaolongzte commented on SPARK-21325:


I'm going to add details. How does it look now?

> The 'spark-submit' options '--jars' and '--files': jars and files can 
> be placed locally or on HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>







[jira] [Issue Comment Deleted] (SPARK-21325) The 'spark-submit' options '--jars' and '--files': jars and files can be placed locally or on HDFS.

2017-07-06 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21325:
---
Comment: was deleted

(was: I'm going to add details. How does it look now?)

> The 'spark-submit' options '--jars' and '--files': jars and files can 
> be placed locally or on HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>







[jira] [Reopened] (SPARK-21325) The 'spark-submit' options '--jars' and '--files': jars and files can be placed locally or on HDFS.

2017-07-06 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte reopened SPARK-21325:


1. My submit command:
spark-submit --class cn.gxl.TestSql {color:red}--jars 
hdfs://nameservice:/gxl/spark-core_2.11-2.3.0-SNAPSHOT.jar,hdfs://nameservice:/gxl/zookeeper-3.4.6.jar
 --files hdfs://nameservice:/gxl/value1.txt,hdfs://nameservice:/gxl/value2.txt{color} 
hdfs://nameservice:/gxl/spark_2.0.2_project.jar

2. spark-submit help text:
  --jars JARS  {color:red}Comma-separated list of local jars{color} to 
include on the driver and executor classpaths.

  --files FILES  {color:red}Comma-separated list of files{color} to be 
placed in the working directory of each executor. File paths of these files in 
executors can be accessed via SparkFiles.get(fileName).

3. Problem description:
Jars and files can be placed not only locally but also on HDFS.

The description of '--jars' says that jars can only be local. This is wrong.

The description of '--files' does not make clear whether files can be local or 
on HDFS. This is ambiguous and not conducive to developers understanding and 
using the option.

So this is an optimization worth making.

> The 'spark-submit' options '--jars' and '--files': jars and files can 
> be placed locally or on HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>







[jira] [Updated] (SPARK-21325) The 'spark-submit' options '--jars' and '--files': jars and files can be placed locally or on HDFS.

2017-07-06 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21325:
---
Description: 
1. My submit command:
spark-submit --class cn.gxl.TestSql --jars 
hdfs://nameservice:/gxl/spark-core_2.11-2.3.0-SNAPSHOT.jar,hdfs://nameservice:/gxl/zookeeper-3.4.6.jar
 --files hdfs://nameservice:/gxl/value1.txt,hdfs://nameservice:/gxl/value2.txt 
hdfs://nameservice:/gxl/spark_2.0.2_project.jar

2. spark-submit help text:
--jars JARS  Comma-separated list of local jars to include on the driver and 
executor classpaths.
--files FILES  Comma-separated list of files to be placed in the working 
directory of each executor. File paths of these files in executors can be 
accessed via SparkFiles.get(fileName).

3. Problem description:
Jars and files can be placed not only locally but also on HDFS.
The description of '--jars' says that jars can only be local. This is wrong.
The description of '--files' does not make clear whether files can be local or 
on HDFS. This is ambiguous and not conducive to developers understanding and 
using the option.
So this is an optimization worth making.
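
The local-vs-remote distinction the help text glosses over comes down to the URI scheme of each path. A sketch of that rule (the helper name and example paths are illustrative only):

```python
from urllib.parse import urlparse

def is_local_path(path):
    # spark-submit decides by URI scheme: no scheme or file: means a local
    # file, while schemes like hdfs: or http: are fetched from the cluster.
    return urlparse(path).scheme in ("", "file")

paths = [
    "hdfs://nameservice:/gxl/value1.txt",  # remote (HDFS)
    "/tmp/value2.txt",                     # local (no scheme)
]
local_flags = [is_local_path(p) for p in paths]
```

So both kinds of paths can legitimately appear in '--jars' and '--files', which is exactly why the help text's "local jars" wording is misleading.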

> The 'spark-submit' options '--jars' and '--files': jars and files can 
> be placed locally or on HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. My submit command:
> spark-submit --class cn.gxl.TestSql --jars 
> hdfs://nameservice:/gxl/spark-core_2.11-2.3.0-SNAPSHOT.jar,hdfs://nameservice:/gxl/zookeeper-3.4.6.jar
>  --files 
> hdfs://nameservice:/gxl/value1.txt,hdfs://nameservice:/gxl/value2.txt 
> hdfs://nameservice:/gxl/spark_2.0.2_project.jar
> 2. spark-submit help text:
> --jars JARS  Comma-separated list of local jars to include on the driver and 
> executor classpaths.
> --files FILES  Comma-separated list of files to be placed in the working 
> directory of each executor. File paths of these files in executors can be 
> accessed via SparkFiles.get(fileName).
> 3. Problem description:
> Jars and files can be placed not only locally but also on HDFS.
> The description of '--jars' says that jars can only be local. This is wrong.
> The description of '--files' does not make clear whether files can be local 
> or on HDFS. This is ambiguous and not conducive to developers understanding 
> and using the option.
> So this is an optimization worth making.






[jira] [Resolved] (SPARK-21325) The 'spark-submit' options '--jars' and '--files': jars and files can be placed locally or on HDFS.

2017-07-06 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte resolved SPARK-21325.

Resolution: Fixed

This issue has been fixed by jerryshao.
[~jerryshao]

> The 'spark-submit' options '--jars' and '--files': jars and files can 
> be placed locally or on HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. My submit command:
> spark-submit --class cn.gxl.TestSql {color:red}--jars 
> hdfs://nameservice:/gxl/spark-core_2.11-2.3.0-SNAPSHOT.jar,hdfs://nameservice:/gxl/zookeeper-3.4.6.jar
>  --files 
> hdfs://nameservice:/gxl/value1.txt,hdfs://nameservice:/gxl/value2.txt{color} 
> hdfs://nameservice:/gxl/spark_2.0.2_project.jar
> 2. spark-submit help text:
> --jars JARS {color:red}Comma-separated list of local jars{color} to include 
> on the driver and executor classpaths.
> --files FILES {color:red}Comma-separated list of files{color} to be placed in 
> the working directory of each executor. File paths of these files in 
> executors can be accessed via SparkFiles.get(fileName).
> 3. Problem description:
> {color:red}Jars and files can be placed not only locally but also on HDFS.
> The description of '--jars' says that jars can only be local. This is wrong.
> The description of '--files' does not make clear whether files can be local 
> or on HDFS. This is ambiguous and not conducive to developers understanding 
> and using the option.{color}
> So this is an optimization worth making.






[jira] [Comment Edited] (SPARK-21325) The 'spark-submit' options '--jars' and '--files': jars and files can be placed locally or on HDFS.

2017-07-06 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076244#comment-16076244
 ] 

guoxiaolongzte edited comment on SPARK-21325 at 7/6/17 9:36 AM:


This issue has been fixed by jerryshao.
Please link this JIRA to your JIRA.
[~jerryshao]


was (Author: guoxiaolongzte):
This issue has been fixed by jerryshao.
[~jerryshao]

> The 'spark-submit' options '--jars' and '--files': jars and files can 
> be placed locally or on HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. My submit command:
> spark-submit --class cn.gxl.TestSql {color:red}--jars 
> hdfs://nameservice:/gxl/spark-core_2.11-2.3.0-SNAPSHOT.jar,hdfs://nameservice:/gxl/zookeeper-3.4.6.jar
>  --files 
> hdfs://nameservice:/gxl/value1.txt,hdfs://nameservice:/gxl/value2.txt{color} 
> hdfs://nameservice:/gxl/spark_2.0.2_project.jar
> 2. spark-submit help text:
> --jars JARS {color:red}Comma-separated list of local jars{color} to include 
> on the driver and executor classpaths.
> --files FILES {color:red}Comma-separated list of files{color} to be placed in 
> the working directory of each executor. File paths of these files in 
> executors can be accessed via SparkFiles.get(fileName).
> 3. Problem description:
> {color:red}Jars and files can be placed not only locally but also on HDFS.
> The description of '--jars' says that jars can only be local. This is wrong.
> The description of '--files' does not make clear whether files can be local 
> or on HDFS. This is ambiguous and not conducive to developers understanding 
> and using the option.{color}
> So this is an optimization worth making.






[jira] [Updated] (SPARK-21297) Add counts to the 'JDBC/ODBC Server' page.

2017-07-12 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21297:
---
Summary: Add counts to the 'JDBC/ODBC Server' page.  (was: Add State in the 
'Session Statistics' table and counts in the 'JDBC/ODBC Server' page.)

> Add count in 'JDBC/ODBC Server' page.
> -
>
> Key: SPARK-21297
> URL: https://issues.apache.org/jira/browse/SPARK-21297
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. Add a State column to the 'Session Statistics' table and add counts to 
> the 'JDBC/ODBC Server' page, so that online/offline status can be identified 
> when there are many sessions.
> 2. Add counts for 'Session Statistics' and 'SQL Statistics' on the 
> 'JDBC/ODBC Server' page, so the statistics can be read at a glance.






[jira] [Updated] (SPARK-21297) Add count in 'JDBC/ODBC Server' page.

2017-07-12 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21297:
---
Description: 1. Add counts for 'Session Statistics' and 'SQL Statistics' on 
the 'JDBC/ODBC Server' page, so the statistics can be read at a glance.  (was: 
1. Add a State column to the 'Session Statistics' table and add counts to the 
'JDBC/ODBC Server' page, so that online/offline status can be identified when 
there are many sessions.

2. Add counts for 'Session Statistics' and 'SQL Statistics' on the 'JDBC/ODBC 
Server' page, so the statistics can be read at a glance.)

> Add count in 'JDBC/ODBC Server' page.
> -
>
> Key: SPARK-21297
> URL: https://issues.apache.org/jira/browse/SPARK-21297
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. Add counts for 'Session Statistics' and 'SQL Statistics' on the 
> 'JDBC/ODBC Server' page, so the statistics can be read at a glance.






[jira] [Closed] (SPARK-21325) In 'spark-submit', the '--jars' and '--files' options accept jars and files on both the local filesystem and HDFS.

2017-07-27 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte closed SPARK-21325.
--

duplicate

> In 'spark-submit', the '--jars' and '--files' options accept jars and files 
> on both the local filesystem and HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. How I submit the job:
> spark-submit --class cn.gxl.TestSql{color:red} --jars 
> hdfs://nameservice:/gxl/spark-core_2.11-2.3.0-SNAPSHOT.jar,hdfs://nameservice:/gxl/zookeeper-3.4.6.jar
>  --files 
> hdfs://nameservice:/gxl/value1.txt,hdfs://nameservice:/gxl/value2.txt{color} 
> hdfs://nameservice:/gxl/spark_2.0.2_project.jar
> 2. spark-submit help text:
> --jars {color:red}JARS Comma-separated list of local jars{color} to include 
> on the driver and executor classpaths.
> --files{color:red} FILES Comma-separated list of files{color} to be placed in 
> the working directory of each executor. File paths of these files in 
> executors can be accessed via SparkFiles.get(fileName).
> 3. Problem description:
> {color:red}Jars and files can be placed not only on the local filesystem but 
> also on HDFS.
> The help text for '--jars' says that jars can only be local; this is wrong.
> The help text for '--files' does not say whether files may be local or on 
> HDFS; this is ambiguous and makes the option hard for developers to 
> understand and use.{color}
> So this help text deserves to be corrected.






[jira] [Commented] (SPARK-21325) In 'spark-submit', the '--jars' and '--files' options accept jars and files on both the local filesystem and HDFS.

2017-07-27 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104410#comment-16104410
 ] 

guoxiaolongzte commented on SPARK-21325:


https://issues.apache.org/jira/browse/SPARK-21012
Please help me close this JIRA as a duplicate, thanks.

> In 'spark-submit', the '--jars' and '--files' options accept jars and files 
> on both the local filesystem and HDFS.
> 
>
> Key: SPARK-21325
> URL: https://issues.apache.org/jira/browse/SPARK-21325
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. How I submit the job:
> spark-submit --class cn.gxl.TestSql{color:red} --jars 
> hdfs://nameservice:/gxl/spark-core_2.11-2.3.0-SNAPSHOT.jar,hdfs://nameservice:/gxl/zookeeper-3.4.6.jar
>  --files 
> hdfs://nameservice:/gxl/value1.txt,hdfs://nameservice:/gxl/value2.txt{color} 
> hdfs://nameservice:/gxl/spark_2.0.2_project.jar
> 2. spark-submit help text:
> --jars {color:red}JARS Comma-separated list of local jars{color} to include 
> on the driver and executor classpaths.
> --files{color:red} FILES Comma-separated list of files{color} to be placed in 
> the working directory of each executor. File paths of these files in 
> executors can be accessed via SparkFiles.get(fileName).
> 3. Problem description:
> {color:red}Jars and files can be placed not only on the local filesystem but 
> also on HDFS.
> The help text for '--jars' says that jars can only be local; this is wrong.
> The help text for '--files' does not say whether files may be local or on 
> HDFS; this is ambiguous and makes the option hard for developers to 
> understand and use.{color}
> So this help text deserves to be corrected.






[jira] [Created] (SPARK-21600) The description of "this requires spark.shuffle.service.enabled to be set" for the spark.dynamicAllocation.enabled configuration item is not clear

2017-08-01 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-21600:
--

 Summary: The description of "this requires 
spark.shuffle.service.enabled to be set" for the 
spark.dynamicAllocation.enabled configuration item is not clear
 Key: SPARK-21600
 URL: https://issues.apache.org/jira/browse/SPARK-21600
 Project: Spark
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Trivial


The description of "this requires spark.shuffle.service.enabled to be set" for 
the spark.dynamicAllocation.enabled configuration item is not clear. I am not 
sure how to set spark.shuffle.service.enabled is true or false, so that the 
user to guess, resulting in doubts. All i have changed here, stressed that must 
spark.shuffle.service.enabled to be set true.
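A minimal sketch of the pairing the clarified wording refers to, in spark-defaults.conf form (assuming standalone or YARN mode with the external shuffle service available):

```
spark.dynamicAllocation.enabled   true
# Required when dynamic allocation is enabled, so executors can be
# removed without losing their shuffle files:
spark.shuffle.service.enabled     true
```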






[jira] [Commented] (SPARK-21600) The description of "this requires spark.shuffle.service.enabled to be set" for the spark.dynamicAllocation.enabled configuration item is not clear

2017-08-01 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16110095#comment-16110095
 ] 

guoxiaolongzte commented on SPARK-21600:


I will do it.

> The description of "this requires spark.shuffle.service.enabled to be set" 
> for the spark.dynamicAllocation.enabled configuration item is not clear
> --
>
> Key: SPARK-21600
> URL: https://issues.apache.org/jira/browse/SPARK-21600
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Trivial
>
> The description of "this requires spark.shuffle.service.enabled to be set" 
> for the spark.dynamicAllocation.enabled configuration item is not clear. I am 
> not sure how to set spark.shuffle.service.enabled is true or false, so that 
> the user to guess, resulting in doubts. All i have changed here, stressed 
> that must spark.shuffle.service.enabled to be set true.






[jira] [Created] (SPARK-21609) Add a "log directory" display to the Master UI to help users quickly find the log directory path.

2017-08-02 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-21609:
--

 Summary: Add a "log directory" display to the Master UI to help 
users quickly find the log directory path.
 Key: SPARK-21609
 URL: https://issues.apache.org/jira/browse/SPARK-21609
 Project: Spark
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 2.3.0
Reporter: guoxiaolongzte


Add a "log directory" display to the Master UI to help users quickly find the 
log directory path.

During Spark application development we view not only the executor and driver 
logs but also the master and worker logs. The current UI does not show the 
master and worker log paths, so users cannot easily locate them. Therefore I 
add a "log directory" display.






[jira] [Updated] (SPARK-21609) Add a "log directory" display to the Master UI to help users quickly find the log directory path.

2017-08-02 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21609:
---
Component/s: (was: Documentation)
 Web UI

> Add a "log directory" display to the Master UI to help users quickly find 
> the log directory path.
> ---
>
> Key: SPARK-21609
> URL: https://issues.apache.org/jira/browse/SPARK-21609
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>
> Add a "log directory" display to the Master UI to help users quickly find 
> the log directory path.
> During Spark application development we view not only the executor and 
> driver logs but also the master and worker logs. The current UI does not 
> show the master and worker log paths, so users cannot easily locate them. 
> Therefore I add a "log directory" display.






[jira] [Commented] (SPARK-21609) Add a "log directory" display to the Master UI to help users quickly find the log directory path.

2017-08-02 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16110564#comment-16110564
 ] 

guoxiaolongzte commented on SPARK-21609:


yes.

> Add a "log directory" display to the Master UI to help users quickly find 
> the log directory path.
> ---
>
> Key: SPARK-21609
> URL: https://issues.apache.org/jira/browse/SPARK-21609
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> Add a "log directory" display to the Master UI to help users quickly find 
> the log directory path.
> During Spark application development we view not only the executor and 
> driver logs but also the master and worker logs. The current UI does not 
> show the master and worker log paths, so users cannot easily locate them. 
> Therefore I add a "log directory" display.






[jira] [Created] (SPARK-21620) Add metrics in web ui.

2017-08-02 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-21620:
--

 Summary: Add metrics in web ui.
 Key: SPARK-21620
 URL: https://issues.apache.org/jira/browse/SPARK-21620
 Project: Spark
  Issue Type: New Feature
  Components: Web UI
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Trivial


Add metrics in the Spark web UI.
The UIs of several other big-data components, such as Hadoop and HBase, link 
to their metrics, so I think the Spark UI should expose the relevant metrics 
as well.
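As a rough sketch of where such a link could point (the /metrics/json path is an assumption based on Spark's default MetricsServlet sink; it may differ if metrics.properties is customized):

```python
# Hypothetical helper: build the metrics URL a "Metrics" link in the web UI
# could point to. Spark's MetricsServlet serves JSON under /metrics/json by
# default, so we join that path onto the UI's base address.
from urllib.parse import urljoin

def metrics_url(ui_base):
    """Return the JSON metrics endpoint for a given Spark UI address."""
    if not ui_base.endswith("/"):
        ui_base += "/"
    return urljoin(ui_base, "metrics/json")

print(metrics_url("http://localhost:4040"))
```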






[jira] [Updated] (SPARK-21620) Add metrics url in spark web ui.

2017-08-02 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21620:
---
Description: 
Add a metrics URL in the Spark web UI.
The UIs of several other big-data components, such as Hadoop and HBase, link 
to their metrics, so I think the Spark UI should expose the relevant metrics 
as well.

  was:
Add metrics in the Spark web UI.
The UIs of several other big-data components, such as Hadoop and HBase, link 
to their metrics, so I think the Spark UI should expose the relevant metrics 
as well.


> Add metrics url in spark web ui.
> 
>
> Key: SPARK-21620
> URL: https://issues.apache.org/jira/browse/SPARK-21620
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Trivial
>
> Add a metrics URL in the Spark web UI.
> The UIs of several other big-data components, such as Hadoop and HBase, link 
> to their metrics, so I think the Spark UI should expose the relevant metrics 
> as well.






[jira] [Updated] (SPARK-21620) Add metrics url in spark web ui.

2017-08-02 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-21620:
---
Summary: Add metrics url in spark web ui.  (was: Add metrics in web ui.)

> Add metrics url in spark web ui.
> 
>
> Key: SPARK-21620
> URL: https://issues.apache.org/jira/browse/SPARK-21620
> Project: Spark
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Trivial
>
> Add metrics in the Spark web UI.
> The UIs of several other big-data components, such as Hadoop and HBase, link 
> to their metrics, so I think the Spark UI should expose the relevant metrics 
> as well.






[jira] [Created] (SPARK-22311) Stage API: fix the description format, add the version API, compute the duration in real time

2017-10-18 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-22311:
--

 Summary: Stage API: fix the description format, add the version API, 
compute the duration in real time
 Key: SPARK-22311
 URL: https://issues.apache.org/jira/browse/SPARK-22311
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Trivial


Stage API: fix the description format.

 A list of all stages for a given application.
 ?status=[active|complete|pending|failed] lists only the stages 
in the given state.
This content should be included in …

Add the version API doc: '/api/v1/version'.

Compute the duration in real time.
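The REST paths mentioned above can be sketched as follows (the host and application id are hypothetical; a running Spark UI would be needed to actually query them):

```python
# Sketch of the REST endpoints discussed in this issue: the per-application
# stage list with its ?status= filter, and the proposed version endpoint.
BASE = "http://localhost:4040/api/v1"

def stages_url(app_id, status=None):
    """Stage list for an application; status may be
    'active', 'complete', 'pending' or 'failed'."""
    url = f"{BASE}/applications/{app_id}/stages"
    if status is not None:
        url += f"?status={status}"
    return url

VERSION_URL = f"{BASE}/version"  # the version endpoint this issue proposes

print(stages_url("app-20171018-0001", "active"))
print(VERSION_URL)
```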







[jira] [Commented] (SPARK-22365) Spark UI executors empty list with 500 error

2017-10-29 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224284#comment-16224284
 ] 

guoxiaolongzte commented on SPARK-22365:


Please provide a screenshot to help other people understand your reasoning, 
thank you.

> Spark UI executors empty list with 500 error
> 
>
> Key: SPARK-22365
> URL: https://issues.apache.org/jira/browse/SPARK-22365
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Jakub Dubovsky
>
> No data loaded on the "executors" tab in the Spark UI, with the stack trace 
> below. Apart from the exception I have nothing more, but if I can test 
> something to make this easier to resolve, I am happy to help.
> {{java.lang.NullPointerException
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>   at 
> org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
>   at 
> org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
>   at 
> org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.spark_project.jetty.server.Server.handle(Server.java:524)
>   at 
> org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
>   at 
> org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
>   at 
> org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at 
> org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)}}






[jira] [Resolved] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-07-18 Thread guoxiaolongzte (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte resolved SPARK-23357.

Resolution: Won't Fix

> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] 
> 
>
> Key: SPARK-23357
> URL: https://issues.apache.org/jira/browse/SPARK-23357
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png, 3.png, 4.png, 5.png
>
>
> 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field 
> to the output, similar to Hive; and when the partition list is empty, an 
> empty partition field [] also needs to be shown.
> hive:
>  !3.png! 
> sparkSQL non-partitioned table, before the fix:
>  !1.png! 
> sparkSQL partitioned table, before the fix:
>  !2.png! 
> sparkSQL non-partitioned table, after the fix:
>  !4.png! 
> sparkSQL partitioned table, after the fix:
>  !5.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-24701) SparkMaster WebUI allow all appids to be shown in detail on port 4040 rather than different ports per app

2018-07-18 Thread guoxiaolongzte (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-24701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548753#comment-16548753
 ] 

guoxiaolongzte commented on SPARK-24701:


I don't quite catch your meaning. Could you elaborate? A screenshot would 
help.

> SparkMaster WebUI allow all appids to be shown in detail on port 4040 rather 
> than different ports per app
> -
>
> Key: SPARK-24701
> URL: https://issues.apache.org/jira/browse/SPARK-24701
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.1
>Reporter: t oo
>Priority: Major
>  Labels: master, security, ui, web, web-ui
>
> Right now the detail for all application ids are shown on a diff port per app 
> id, ie. 4040, 4041, 4042...etc this is problematic for environments with 
> tight firewall settings. Proposing to allow 4040?appid=1,  4040?appid=2,  
> 4040?appid=3..etc for the master web ui just like what the History Web UI 
> does.






[jira] [Commented] (SPARK-23967) Description add native sql show in SQL page.

2018-07-18 Thread guoxiaolongzte (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-23967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548754#comment-16548754
 ] 

guoxiaolongzte commented on SPARK-23967:


I don't quite catch your meaning. Can you tell me more about it? 

> Description add native sql show in SQL page.
> 
>
> Key: SPARK-23967
> URL: https://issues.apache.org/jira/browse/SPARK-23967
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: JieFang.He
>Priority: Minor
>
> Add the native SQL text to the description shown on the SQL page, for better observation.






[jira] [Created] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-02-08 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23357:
--

 Summary: 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a 
‘Partitioned’ field to the output, similar to Hive; and when the partition 
list is empty, an empty partition field [] also needs to be shown
 Key: SPARK-23357
 URL: https://issues.apache.org/jira/browse/SPARK-23357
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 2.4.0
Reporter: guoxiaolongzte


'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field to 
the output, similar to Hive; and when the partition list is empty, an empty 
partition field [] also needs to be shown.






[jira] [Updated] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23357:
---
Attachment: 3.png

> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] 
> 
>
> Key: SPARK-23357
> URL: https://issues.apache.org/jira/browse/SPARK-23357
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 3.png
>
>
> 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field 
> to the output, similar to Hive; and when the partition list is empty, an 
> empty partition field [] also needs to be shown.






[jira] [Updated] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23357:
---
Attachment: 2.png

> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] 
> 
>
> Key: SPARK-23357
> URL: https://issues.apache.org/jira/browse/SPARK-23357
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png, 3.png
>
>
> 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field 
> to the output, similar to Hive; and when the partition list is empty, an 
> empty partition field [] also needs to be shown.






[jira] [Updated] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23357:
---
Attachment: 1.png

> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] 
> 
>
> Key: SPARK-23357
> URL: https://issues.apache.org/jira/browse/SPARK-23357
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png, 3.png
>
>
> 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field 
> to the output, similar to Hive; and when the partition list is empty, an 
> empty partition field [] also needs to be shown.






[jira] [Updated] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23357:
---
Attachment: 4.png

> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] 
> 
>
> Key: SPARK-23357
> URL: https://issues.apache.org/jira/browse/SPARK-23357
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png, 3.png, 4.png
>
>
> 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field 
> to the output, similar to Hive; and when the partition list is empty, an 
> empty partition field [] also needs to be shown.






[jira] [Updated] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23357:
---
Description: 
'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field to 
the output, similar to Hive; and when the partition list is empty, an empty 
partition field [] also needs to be shown.

hive:
 !3.png! 


sparkSQL non-partitioned table, before the fix:
 !1.png! 

sparkSQL partitioned table, before the fix:
 !2.png! 

sparkSQL non-partitioned table, after the fix:
 !4.png! 

sparkSQL partitioned table, after the fix:
 !5.png! 

  was:'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ 
field to the output, similar to Hive; and when the partition list is empty, an 
empty partition field [] also needs to be shown.


> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] 
> 
>
> Key: SPARK-23357
> URL: https://issues.apache.org/jira/browse/SPARK-23357
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png, 3.png, 4.png, 5.png
>
>
> 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a ‘Partitioned’ field 
> to the output, similar to Hive; and when the partition list is empty, an 
> empty partition field [] also needs to be shown.
> hive:
>  !3.png! 
> sparkSQL non-partitioned table, before the fix:
>  !1.png! 
> sparkSQL partitioned table, before the fix:
>  !2.png! 
> sparkSQL non-partitioned table, after the fix:
>  !4.png! 
> sparkSQL partitioned table, after the fix:
>  !5.png! 






[jira] [Updated] (SPARK-23357) 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar to hive, and partition is empty, also need to show empty partition field []

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23357:
---
Attachment: 5.png

> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] 
> 
>
> Key: SPARK-23357
> URL: https://issues.apache.org/jira/browse/SPARK-23357
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png, 3.png, 4.png, 5.png
>
>
> 'SHOW TABLE EXTENDED LIKE pattern=STRING' add ‘Partitioned’ display similar 
> to hive, and partition is empty, also need to show empty partition field [] .






[jira] [Created] (SPARK-23363) Fix spark-sql bug or improvement

2018-02-08 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23363:
--

 Summary: Fix spark-sql bug or improvement
 Key: SPARK-23363
 URL: https://issues.apache.org/jira/browse/SPARK-23363
 Project: Spark
  Issue Type: Task
  Components: SQL
Affects Versions: 2.4.0
Reporter: guoxiaolongzte









[jira] [Created] (SPARK-23364) desc table add column head display

2018-02-08 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23364:
--

 Summary: desc table add column head display
 Key: SPARK-23364
 URL: https://issues.apache.org/jira/browse/SPARK-23364
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 2.4.0
Reporter: guoxiaolongzte









[jira] [Updated] (SPARK-23364) desc table add column head display

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23364:
---
Priority: Minor  (was: Major)

> desc table add column head display
> --
>
> Key: SPARK-23364
> URL: https://issues.apache.org/jira/browse/SPARK-23364
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
>







[jira] [Updated] (SPARK-23364) desc table add column head display

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23364:
---
Attachment: 2.png
1.png

> desc table add column head display
> --
>
> Key: SPARK-23364
> URL: https://issues.apache.org/jira/browse/SPARK-23364
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>







[jira] [Updated] (SPARK-23364) desc table add column head display

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23364:
---
Description: 
fix before: 
 !2.png! 


fix after:
 !1.png! 

> desc table add column head display
> --
>
> Key: SPARK-23364
> URL: https://issues.apache.org/jira/browse/SPARK-23364
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> fix before: 
>  !2.png! 
> fix after:
>  !1.png! 






[jira] [Commented] (SPARK-23364) desc table add column head display

2018-02-08 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357946#comment-16357946
 ] 

guoxiaolongzte commented on SPARK-23364:


I will open a PR to resolve this, thank you.

> desc table add column head display
> --
>
> Key: SPARK-23364
> URL: https://issues.apache.org/jira/browse/SPARK-23364
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> fix before: 
>  !2.png! 
> fix after:
>  !1.png! 






[jira] [Reopened] (SPARK-23364) desc table add column head display

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte reopened SPARK-23364:


> desc table add column head display
> --
>
> Key: SPARK-23364
> URL: https://issues.apache.org/jira/browse/SPARK-23364
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> fix before: 
>  !2.png! 
> fix after:
>  !1.png! 






[jira] [Updated] (SPARK-23364) 'desc table' command in spark-sql add column head display

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23364:
---
Summary: 'desc table' command in spark-sql add column head display  (was: 
desc table add column head display)

> 'desc table' command in spark-sql add column head display
> -
>
> Key: SPARK-23364
> URL: https://issues.apache.org/jira/browse/SPARK-23364
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> fix before: 
>  !2.png! 
> fix after:
>  !1.png! 






[jira] [Reopened] (SPARK-23363) Fix spark-sql bug or improvement

2018-02-08 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte reopened SPARK-23363:


> Fix spark-sql bug or improvement
> 
>
> Key: SPARK-23363
> URL: https://issues.apache.org/jira/browse/SPARK-23363
> Project: Spark
>  Issue Type: Task
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Major
>







[jira] [Created] (SPARK-23382) Spark Streaming ui about the contents of the form need to have hidden and show features, when the table records very much.

2018-02-10 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23382:
--

 Summary: Spark Streaming ui about the contents of the form need to 
have hidden and show features, when the table records very much.
 Key: SPARK-23382
 URL: https://issues.apache.org/jira/browse/SPARK-23382
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.4.0
Reporter: guoxiaolongzte


The tables in the Spark Streaming UI need hide and show (collapse/expand) features for when a table contains very many records.

For the motivation, please refer to 
https://issues.apache.org/jira/browse/SPARK-23024






[jira] [Created] (SPARK-23384) When it has no incomplete(completed) applications found, the last updated time is not formatted and client local time zone is not show in history server web ui.

2018-02-10 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23384:
--

 Summary: When it has no incomplete(completed) applications found, 
the last updated time is not formatted and client local time zone is not show 
in history server web ui.
 Key: SPARK-23384
 URL: https://issues.apache.org/jira/browse/SPARK-23384
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 2.4.0
Reporter: guoxiaolongzte


When no incomplete (or completed) applications are found, the last updated time 
is not formatted and the client's local time zone is not shown in the history 
server web UI. This is a bug.

fix before:

 

fix after:

 






[jira] [Updated] (SPARK-23384) When it has no incomplete(completed) applications found, the last updated time is not formatted and client local time zone is not show in history server web ui.

2018-02-10 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23384:
---
Attachment: 2.png
1.png

> When it has no incomplete(completed) applications found, the last updated 
> time is not formatted and client local time zone is not show in history 
> server web ui.
> 
>
> Key: SPARK-23384
> URL: https://issues.apache.org/jira/browse/SPARK-23384
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> When it has no incomplete(completed) applications found, the last updated 
> time is not formatted and client local time zone is not show in history 
> server web ui. It is a bug.
> fix before:
>  
> fix after:
>  






[jira] [Updated] (SPARK-23384) When it has no incomplete(completed) applications found, the last updated time is not formatted and client local time zone is not show in history server web ui.

2018-02-10 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23384:
---
Description: 
When no incomplete (or completed) applications are found, the last updated time 
is not formatted and the client's local time zone is not shown in the history 
server web UI. This is a bug.

fix before: !1.png!

fix after:

!2.png!
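A sketch of the intended behaviour, assuming the page formats the raw lastUpdated epoch using the client's time-zone offset (the function name and signature here are hypothetical, not Spark's code):

```python
from datetime import datetime, timezone, timedelta

def format_last_updated(epoch_ms, tz_offset_minutes):
    """Render an epoch timestamp in the client's local time zone,
    mirroring what the history server page should display."""
    tz = timezone(timedelta(minutes=tz_offset_minutes))
    dt = datetime.fromtimestamp(epoch_ms / 1000.0, tz)
    return dt.strftime("%Y-%m-%d %H:%M:%S")

# Epoch for 2018-02-10 00:00:00 UTC, rendered for a UTC+8 client
print(format_last_updated(1518220800000, 480))  # -> 2018-02-10 08:00:00
```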

 

  was:
When it has no incomplete(completed) applications found, the last updated time 
is not formatted and client local time zone is not show in history server web 
ui. It is a bug.

fix before:

 

fix after:

 


> When it has no incomplete(completed) applications found, the last updated 
> time is not formatted and client local time zone is not show in history 
> server web ui.
> 
>
> Key: SPARK-23384
> URL: https://issues.apache.org/jira/browse/SPARK-23384
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> When it has no incomplete(completed) applications found, the last updated 
> time is not formatted and client local time zone is not show in history 
> server web ui. It is a bug.
> fix before: !1.png!
> fix after:
> !2.png!
>  






[jira] [Commented] (SPARK-23382) Spark Streaming ui about the contents of the form need to have hidden and show features, when the table records very much.

2018-02-11 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16360209#comment-16360209
 ] 

guoxiaolongzte commented on SPARK-23382:


In a previous JIRA I modified the job page, stage page, task page, master page, 
worker page, SQL page and other pages, but I forgot the streaming page. Sorry.

> Spark Streaming ui about the contents of the form need to have hidden and 
> show features, when the table records very much.
> --
>
> Key: SPARK-23382
> URL: https://issues.apache.org/jira/browse/SPARK-23382
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> Spark Streaming ui about the contents of the form need to have hidden and 
> show features, when the table records very much.
> Specific reasons, please refer to 
> https://issues.apache.org/jira/browse/SPARK-23024






[jira] [Commented] (SPARK-23492) Application shows up as running in history server even when latest attempt has completed

2018-02-25 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16376458#comment-16376458
 ] 

guoxiaolongzte commented on SPARK-23492:


I tested with the latest environment but could not reproduce your problem. If 
you still think there is a problem, please email srowen instead of creating a 
JIRA. Thanks.

 

[ {
 "id" : "app-20180226150516-0002",
 "name" : "Spark shell",
 "attempts" : [ {
 "startTime" : "2018-02-26T07:05:15.006GMT",
 "endTime" : "1969-12-31T23:59:59.999GMT",
 "lastUpdated" : "2018-02-26T07:05:19.814GMT",
 "duration" : 0,
 "sparkUser" : "root",
 "completed" : false,
 "appSparkVersion" : "2.4.0-SNAPSHOT",
 "startTimeEpoch" : 1519628715006,
 "endTimeEpoch" : -1,
 "lastUpdatedEpoch" : 1519628719814
 } ]
}, {
 "id" : "app-20180212094839-0008",
 "name" : "SparkSQL::10.43.183.120",
 "attempts" : [ {
 "startTime" : "2018-02-12T01:48:38.165GMT",
 "endTime" : "1969-12-31T23:59:59.999GMT",
 "lastUpdated" : "2018-02-26T06:58:00.065GMT",
 "duration" : 0,
 "sparkUser" : "root",
 "completed" : false,
 "appSparkVersion" : "2.4.0-SNAPSHOT",
 "startTimeEpoch" : 1518400118165,
 "endTimeEpoch" : -1,
 "lastUpdatedEpoch" : 1519628280065
 } ]

> Application shows up as running in history server even when latest attempt 
> has completed
> 
>
> Key: SPARK-23492
> URL: https://issues.apache.org/jira/browse/SPARK-23492
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, YARN
>Affects Versions: 2.2.0
>Reporter: Kevin Kim
>Priority: Major
>
> Hi, the Spark history server API "/application?status=running" returns 
> applications whose latest attempts have completed (see example output below). 
> Spark code says that an application is considered "running" when any one of 
> the attempts are not "completed". 
> [https://github.com/apache/spark/blob/1cc34f3e58c92dd06545727e9d931008a1082bbf/core/src/main/scala/org/apache/spark/status/api/v1/ApplicationListResource.scala#L44]
> In my case, attempt 1 is shown as incomplete, but the attempt has finished 
> and is stamped with an endTime. Is this a bug in Spark history server?
> {code:java}
> "attempts" : [ {
> "attemptId" : "2",
> "startTime" : "2018-02-14T23:59:17.785GMT",
> "endTime" : "2018-02-15T08:08:28.927GMT",
> "lastUpdated" : "2018-02-15T08:08:28.949GMT",
> "duration" : 29351142,
> "sparkUser" : {omitted},
> "completed" : true,
> "startTimeEpoch" : 1518652757785,
> "endTimeEpoch" : 1518682108927,
> "lastUpdatedEpoch" : 1518682108949
> }, {
> "attemptId" : "1",
> "startTime" : "2018-02-14T23:53:02.629GMT",
> "endTime" : "2018-02-14T23:59:13.426GMT",
> "lastUpdated" : "2018-02-23T05:03:45.434GMT",
> "duration" : 370797,
> "sparkUser" : {omitted},
> "completed" : false,
> "startTimeEpoch" : 1518652382629,
> "endTimeEpoch" : 1518652753426,
> "lastUpdatedEpoch" : 1519362225434
> } ]
> {code}
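The check implemented at the linked line — an application counts as "running" while any of its attempts is not completed — can be sketched outside Spark (the helper name is hypothetical):

```python
import json

def is_running(app):
    """An application is 'running' if any attempt is not completed,
    mirroring the check described for ApplicationListResource.scala."""
    return any(not attempt["completed"] for attempt in app["attempts"])

app = json.loads("""{
  "id": "app-1",
  "attempts": [
    {"attemptId": "2", "completed": true},
    {"attemptId": "1", "completed": false}
  ]
}""")

# The stale, never-completed attempt 1 keeps the whole app "running"
print(is_running(app))  # -> True
```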






[jira] [Commented] (SPARK-23523) Incorrect result caused by the rule OptimizeMetadataOnlyQuery

2018-03-04 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16385673#comment-16385673
 ] 

guoxiaolongzte commented on SPARK-23523:


What is the correct result? The description does not state the expected result.

[~smilegator]

> Incorrect result caused by the rule OptimizeMetadataOnlyQuery
> -
>
> Key: SPARK-23523
> URL: https://issues.apache.org/jira/browse/SPARK-23523
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.2, 2.2.1, 2.3.0
>Reporter: Xiao Li
>Assignee: Xiao Li
>Priority: Major
> Fix For: 2.4.0
>
>
> {code:scala}
>  val tablePath = new File(s"${path.getCanonicalPath}/cOl3=c/cOl1=a/cOl5=e")
>  Seq(("a", "b", "c", "d", "e")).toDF("cOl1", "cOl2", "cOl3", "cOl4", "cOl5")
>  .write.json(tablePath.getCanonicalPath)
>  val df = spark.read.json(path.getCanonicalPath).select("CoL1", "CoL5", 
> "CoL3").distinct()
>  df.show()
> {code}
> This returns a wrong result 
> {{[c,e,a]}}
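For reference, the expected result resolves the requested columns case-insensitively against the partition values while preserving the requested order; a minimal sketch of that correct resolution (outside Spark, helper name hypothetical):

```python
def select_case_insensitive(row, requested_cols):
    """Look up requested columns case-insensitively while preserving
    the requested order -- the ordering the optimizer must not lose."""
    by_lower = {k.lower(): v for k, v in row.items()}
    return [by_lower[c.lower()] for c in requested_cols]

# Partition values parsed from the path cOl3=c/cOl1=a/cOl5=e
row = {"cOl3": "c", "cOl1": "a", "cOl5": "e"}

# Selecting CoL1, CoL5, CoL3 should yield the values in that order,
# not the [c,e,a] reported above.
print(select_case_insensitive(row, ["CoL1", "CoL5", "CoL3"]))  # -> ['a', 'e', 'c']
```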






[jira] [Issue Comment Deleted] (SPARK-23523) Incorrect result caused by the rule OptimizeMetadataOnlyQuery

2018-03-04 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23523:
---
Comment: was deleted

(was:  What is the correct result? The description did not write the correct 
result.

[~smilegator])

> Incorrect result caused by the rule OptimizeMetadataOnlyQuery
> -
>
> Key: SPARK-23523
> URL: https://issues.apache.org/jira/browse/SPARK-23523
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.2, 2.2.1, 2.3.0
>Reporter: Xiao Li
>Assignee: Xiao Li
>Priority: Major
> Fix For: 2.4.0
>
>
> {code:scala}
>  val tablePath = new File(s"${path.getCanonicalPath}/cOl3=c/cOl1=a/cOl5=e")
>  Seq(("a", "b", "c", "d", "e")).toDF("cOl1", "cOl2", "cOl3", "cOl4", "cOl5")
>  .write.json(tablePath.getCanonicalPath)
>  val df = spark.read.json(path.getCanonicalPath).select("CoL1", "CoL5", 
> "CoL3").distinct()
>  df.show()
> {code}
> This returns a wrong result 
> {{[c,e,a]}}






[jira] [Commented] (SPARK-23433) java.lang.IllegalStateException: more than one active taskSet for stage

2018-03-05 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387247#comment-16387247
 ] 

guoxiaolongzte commented on SPARK-23433:


I have also encountered the same problem. Who can solve it?

> java.lang.IllegalStateException: more than one active taskSet for stage
> ---
>
> Key: SPARK-23433
> URL: https://issues.apache.org/jira/browse/SPARK-23433
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.2.1
>Reporter: Shixiong Zhu
>Priority: Major
>
> This following error thrown by DAGScheduler stopped the cluster:
> {code}
> 18/02/11 13:22:27 ERROR DAGSchedulerEventProcessLoop: 
> DAGSchedulerEventProcessLoop failed; shutting down SparkContext
> java.lang.IllegalStateException: more than one active taskSet for stage 
> 7580621: 7580621.2,7580621.1
>   at 
> org.apache.spark.scheduler.TaskSchedulerImpl.submitTasks(TaskSchedulerImpl.scala:229)
>   at 
> org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1193)
>   at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:1059)
>   at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:900)
>   at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:899)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:899)
>   at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1427)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1929)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1880)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1868)
>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> {code}
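The invariant whose violation is reported here — at most one active (non-zombie) task set per stage — can be sketched as a toy model (this is not Spark's TaskSchedulerImpl, just the shape of the check):

```python
class SchedulerSketch:
    """Toy model of the submitTasks invariant: a stage may have at most
    one active (non-zombie) task set attempt at a time."""

    def __init__(self):
        self.active = {}  # stage_id -> set of active attempt ids

    def submit(self, stage_id, attempt):
        conflicts = self.active.get(stage_id, set())
        if conflicts:
            # This is the condition behind the IllegalStateException above
            raise RuntimeError("more than one active taskSet for stage %s: %s"
                               % (stage_id, sorted(conflicts | {attempt})))
        self.active.setdefault(stage_id, set()).add(attempt)

    def mark_zombie(self, stage_id, attempt):
        self.active[stage_id].discard(attempt)

s = SchedulerSketch()
s.submit(7580621, "7580621.1")
s.mark_zombie(7580621, "7580621.1")  # earlier attempt must be zombied first
s.submit(7580621, "7580621.2")       # only then is a retry attempt legal
```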






[jira] [Created] (SPARK-23675) Title add spark logo, use spark logo image

2018-03-13 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23675:
--

 Summary: Title add spark logo, use spark logo image
 Key: SPARK-23675
 URL: https://issues.apache.org/jira/browse/SPARK-23675
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.4.0
Reporter: guoxiaolongzte


Title add spark logo, use spark logo image






[jira] [Updated] (SPARK-23675) Title add spark logo, use spark logo image

2018-03-13 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23675:
---
Attachment: spark_fix_after.png
flink.png
nifi.png
yarn.png
storm.png
kafka.png
spark_fix_before.png

> Title add spark logo, use spark logo image
> --
>
> Key: SPARK-23675
> URL: https://issues.apache.org/jira/browse/SPARK-23675
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: flink.png, kafka.png, nifi.png, spark_fix_after.png, 
> spark_fix_before.png, storm.png, yarn.png
>
>
> Title add spark logo, use spark logo image






[jira] [Updated] (SPARK-23675) Title add spark logo, use spark logo image

2018-03-13 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23675:
---
Attachment: storm.png

> Title add spark logo, use spark logo image
> --
>
> Key: SPARK-23675
> URL: https://issues.apache.org/jira/browse/SPARK-23675
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: flink.png, kafka.png, nifi.png, spark_fix_after.png, 
> spark_fix_before.png, storm.png, storm.png, yarn.png
>
>
> Title add spark logo, use spark logo image






[jira] [Updated] (SPARK-23675) Title add spark logo, use spark logo image

2018-03-13 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23675:
---
Attachment: yarn.png

> Title add spark logo, use spark logo image
> --
>
> Key: SPARK-23675
> URL: https://issues.apache.org/jira/browse/SPARK-23675
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: flink.png, kafka.png, nifi.png, spark_fix_after.png, 
> spark_fix_before.png, storm.png, storm.png, yarn.png, yarn.png
>
>
> Title add spark logo, use spark logo image






[jira] [Updated] (SPARK-23675) Title add spark logo, use spark logo image

2018-03-13 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23675:
---
Description: 
Add the Spark logo image to the web UI page title. Other big data systems' UIs 
do this, so I think Spark should add it as well.

spark fix before: !spark_fix_before.png!

 

spark fix after: !spark_fix_after.png!

 

reference kafka ui: !kafka.png!

 

reference storm ui: !storm.png!

 

reference yarn ui: !yarn.png!

 

reference nifi ui: !nifi.png!

 

reference flink ui: !flink.png!

 

 

  was:Title add spark logo, use spark logo image


> Title add spark logo, use spark logo image
> --
>
> Key: SPARK-23675
> URL: https://issues.apache.org/jira/browse/SPARK-23675
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: flink.png, kafka.png, nifi.png, spark_fix_after.png, 
> spark_fix_before.png, storm.png, storm.png, yarn.png, yarn.png
>
>
> Title add spark logo, use spark logo image. reference other big data system 
> ui, so i think spark should add it.
> spark fix before: !spark_fix_before.png!
>  
> spark fix after: !spark_fix_after.png!
>  
> reference kafka ui: !kafka.png!
>  
> reference storm ui: !storm.png!
>  
> reference yarn ui: !yarn.png!
>  
> reference nifi ui: !nifi.png!
>  
> reference flink ui: !flink.png!
>  
>  






[jira] [Created] (SPARK-23958) HadoopRdd filters empty files to avoid generating empty tasks that affect the performance of the Spark computing performance.

2018-04-10 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23958:
--

 Summary: HadoopRdd filters empty files to avoid generating empty 
tasks that affect the performance of the Spark computing performance.
 Key: SPARK-23958
 URL: https://issues.apache.org/jira/browse/SPARK-23958
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.4.0
Reporter: guoxiaolongzte


HadoopRDD should filter out empty input files (files whose length is zero) to 
avoid generating empty tasks, which hurt Spark's computing performance.
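The proposed filtering can be sketched as follows (the helper name is hypothetical; the real change would live in HadoopRDD's split computation):

```python
import os
import tempfile

def non_empty_files(paths):
    """Drop zero-length input files so they never become empty tasks."""
    return [p for p in paths if os.path.getsize(p) > 0]

with tempfile.TemporaryDirectory() as d:
    empty = os.path.join(d, "part-00000")
    full = os.path.join(d, "part-00001")
    open(empty, "w").close()          # zero-length file
    with open(full, "w") as f:
        f.write("data\n")
    print(non_empty_files([empty, full]))  # only part-00001 survives
```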






[jira] [Created] (SPARK-22999) 'show databases like command' can remove the like keyword

2018-01-08 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-22999:
--

 Summary: 'show databases like command' can remove the like keyword
 Key: SPARK-22999
 URL: https://issues.apache.org/jira/browse/SPARK-22999
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Trivial


In the grammar 'SHOW DATABASES (LIKE pattern=STRING)?', the LIKE keyword should 
be optional.
Compare with the SHOW TABLES command: both SHOW TABLES 'test*' and SHOW TABLES 
LIKE 'test*' are accepted.
Similarly, both SHOW DATABASES 'test*' and SHOW DATABASES LIKE 'test*' should be accepted.
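Since both spellings carry the same pattern string, the matching logic is shared; a simplified model of that quoted-pattern filtering (not Spark's actual code — it treats '*' as a wildcard and '|' as an alternative separator, case-insensitively):

```python
import re

def filter_pattern(names, pattern):
    """Simplified quoted-pattern matching: '*' is a wildcard and '|'
    separates alternative patterns; matching is case-insensitive."""
    regexes = [re.compile("(?i)" + p.strip().replace("*", ".*") + "$")
               for p in pattern.split("|")]
    return [n for n in names if any(r.match(n) for r in regexes)]

dbs = ["test_db", "testing", "prod"]
# The same call serves SHOW DATABASES 'test*' and SHOW DATABASES LIKE 'test*'
print(filter_pattern(dbs, "test*"))  # -> ['test_db', 'testing']
```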






[jira] [Commented] (SPARK-23002) SparkUI inconsistent driver hostname compare with other executors

2018-01-09 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319807#comment-16319807
 ] 

guoxiaolongzte commented on SPARK-23002:


Good idea. I prefer the IP address, not the host name, so that we can visit the 
Spark UI from any Windows machine without needing to configure the hosts file.

> SparkUI inconsistent driver hostname compare with other executors
> -
>
> Key: SPARK-23002
> URL: https://issues.apache.org/jira/browse/SPARK-23002
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.0
>Reporter: Ran Tao
>Priority: Minor
>
> As the picture shows, driver name is ip address and other executors are 
> machine hostname.
> !https://raw.githubusercontent.com/Lemonjing/issues-assets/master/pics/driver.png!






[jira] [Created] (SPARK-23024) Spark ui about the contents of the form need to have hidden and show features, when the table records very much.

2018-01-10 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23024:
--

 Summary: Spark ui about the contents of the form need to have 
hidden and show features, when the table records very much. 
 Key: SPARK-23024
 URL: https://issues.apache.org/jira/browse/SPARK-23024
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Minor


The tables in the Spark UI need hide and show (collapse/expand) features for 
when a table contains very many records. Sometimes you do not care about a 
table's records and just want to see the next table, but you have to drag the 
scroll bar for a long time to reach it.

For example, we currently have about 500 workers, but I just wanted to see the 
logs in the running applications table; I had to scroll for a long time to 
reach them.

To keep the behavior consistent, I modified the Master Page, Worker Page, Job 
Page, Stage Page, Task Page, Configuration Page, Storage Page, and Pool Page.
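The requested behaviour amounts to a per-table collapsed flag toggled from the table header; a toy sketch of that state (not the actual Web UI code, names hypothetical):

```python
class CollapsibleTable:
    """Toy model of the proposal: each UI table carries a collapsed flag
    that a click on its header toggles, hiding or showing the body."""

    def __init__(self, name, collapsed=False):
        self.name = name
        self.collapsed = collapsed

    def toggle(self):
        self.collapsed = not self.collapsed
        return self

tables = [CollapsibleTable("Workers"), CollapsibleTable("Running Applications")]
tables[0].toggle()  # hide the 500-row Workers table, keep the other visible
print([(t.name, t.collapsed) for t in tables])
# -> [('Workers', True), ('Running Applications', False)]
```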






[jira] [Updated] (SPARK-23024) Spark ui about the contents of the form need to have hidden and show features, when the table records very much.

2018-01-10 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23024:
---
Attachment: 1.png
2.png

> Spark UI tables should support hide and show features when they contain 
> many records. 
> -
>
> Key: SPARK-23024
> URL: https://issues.apache.org/jira/browse/SPARK-23024
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> Tables in the Spark UI should support hide and show features when they 
> contain many records. Sometimes you do not care about a table's records and 
> just want to see the contents of the next table, but you have to scroll for 
> a long time to reach it.
> For example, we currently have about 500 workers, but I just wanted to see 
> the logs in the running applications table; I had to scroll for a long time 
> to reach them.
> To keep the behavior consistent, I modified the Master, Worker, Job, Stage, 
> Task, Configuration, Storage, and Pool pages.






[jira] [Commented] (SPARK-22995) Spark UI stdout/stderr links point to executors internal address

2018-01-11 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16323607#comment-16323607
 ] 

guoxiaolongzte commented on SPARK-22995:


Please close this jira, thank you.

> Spark UI stdout/stderr links point to executors internal address
> 
>
> Key: SPARK-22995
> URL: https://issues.apache.org/jira/browse/SPARK-22995
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.2.1
> Environment: AWS EMR, yarn cluster.
>Reporter: Jhon Cardenas
> Attachments: link.jpeg
>
>
> In the Spark UI, the stdout and stderr links on the Environment and 
> Executors tabs point to the executors' internal addresses, which implies 
> exposing the executors so that the links can be reached. Shouldn't those 
> links point to the master instead, with the master serving as a proxy for 
> these files, rather than exposing the internal machines?






[jira] [Created] (SPARK-23066) Master Page ncrease master start-up time.

2018-01-13 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23066:
--

 Summary: Master Page ncrease master start-up time.
 Key: SPARK-23066
 URL: https://issues.apache.org/jira/browse/SPARK-23066
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.3.0
Reporter: guoxiaolongzte
Priority: Minor


When a Spark system has been running stably for a long time, we do not know 
how long it has actually been running and cannot get its start-up time from 
the UI.

So it is necessary to add the Master's start-up time to the Master page.
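Showing the start-up time usually also means showing an uptime. A minimal 
sketch of formatting it (a hypothetical helper, not the Master page's actual 
code) could look like:

```java
// Hypothetical sketch, not Spark's actual Master page code: given the
// recorded start timestamp, format an uptime string suitable for the UI.
public class MasterUptime {
    public static String format(long startTimeMs, long nowMs) {
        long secs = (nowMs - startTimeMs) / 1000;
        long days = secs / 86400;
        long hours = (secs % 86400) / 3600;
        long minutes = (secs % 3600) / 60;
        return days + " d " + hours + " h " + minutes + " m";
    }

    public static void main(String[] args) {
        // 90061 seconds = 1 day, 1 hour, 1 minute.
        System.out.println(format(0L, 90_061_000L));
    }
}
```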






[jira] [Updated] (SPARK-23066) Master Page increase master start-up time.

2018-01-13 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23066:
---
Summary: Master Page increase master start-up time.  (was: Master Page 
ncrease master start-up time.)

> Master Page increase master start-up time.
> --
>
> Key: SPARK-23066
> URL: https://issues.apache.org/jira/browse/SPARK-23066
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> When a Spark system has been running stably for a long time, we do not know 
> how long it has actually been running and cannot get its start-up time from 
> the UI. So it is necessary to add the Master's start-up time to the Master 
> page.






[jira] [Created] (SPARK-23121) When the Spark Streaming app is running for a period of time, the page is incorrectly reported when accessing '/jobs/' or '/jobs/job/?id=13' and ui can not be a

2018-01-16 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23121:
--

 Summary: When the Spark Streaming app has been running for a period of 
time, an error page is reported when accessing '/jobs/' or '/jobs/job/?id=13' 
and the UI cannot be accessed.
 Key: SPARK-23121
 URL: https://issues.apache.org/jira/browse/SPARK-23121
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 2.4.0
Reporter: guoxiaolongzte


When the Spark Streaming app has been running for a period of time, an error 
page is reported when accessing '/jobs/' or '/jobs/job/?id=13' and the UI 
cannot be accessed.

 

Test command:

./bin/spark-submit --class org.apache.spark.examples.streaming.HdfsWordCount 
./examples/jars/spark-examples_2.11-2.4.0-SNAPSHOT.jar /spark

 

After the app has been running for a period of time, the UI cannot be 
accessed; please see the attachments.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-23121) When the Spark Streaming app is running for a period of time, the page is incorrectly reported when accessing '/jobs/' or '/jobs/job/?id=13' and ui can not be a

2018-01-16 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23121:
---
Attachment: 2.png
1.png

> When the Spark Streaming app has been running for a period of time, an error 
> page is reported when accessing '/jobs/' or '/jobs/job/?id=13' and the UI 
> cannot be accessed.
> -
>
> Key: SPARK-23121
> URL: https://issues.apache.org/jira/browse/SPARK-23121
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Major
> Attachments: 1.png, 2.png
>
>
> When the Spark Streaming app has been running for a period of time, an error 
> page is reported when accessing '/jobs/' or '/jobs/job/?id=13' and the UI 
> cannot be accessed.
>  
> Test command:
> ./bin/spark-submit --class org.apache.spark.examples.streaming.HdfsWordCount 
> ./examples/jars/spark-examples_2.11-2.4.0-SNAPSHOT.jar /spark
>  
> After the app has been running for a period of time, the UI cannot be 
> accessed; please see the attachments.
>  
>  






[jira] [Commented] (SPARK-23121) When the Spark Streaming app is running for a period of time, the page is incorrectly reported when accessing '/jobs/' or '/jobs/job/?id=13' and ui can not be

2018-01-16 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328233#comment-16328233
 ] 

guoxiaolongzte commented on SPARK-23121:


The problem is that the page goes down and never recovers.

> When the Spark Streaming app has been running for a period of time, an error 
> page is reported when accessing '/jobs/' or '/jobs/job/?id=13' and the UI 
> cannot be accessed.
> -
>
> Key: SPARK-23121
> URL: https://issues.apache.org/jira/browse/SPARK-23121
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Major
> Attachments: 1.png, 2.png
>
>
> When the Spark Streaming app has been running for a period of time, an error 
> page is reported when accessing '/jobs/' or '/jobs/job/?id=13' and the UI 
> cannot be accessed.
>  
> Test command:
> ./bin/spark-submit --class org.apache.spark.examples.streaming.HdfsWordCount 
> ./examples/jars/spark-examples_2.11-2.4.0-SNAPSHOT.jar /spark
>  
> After the app has been running for a period of time, the UI cannot be 
> accessed; please see the attachments.
>  
>  






[jira] [Resolved] (SPARK-23066) Master Page increase master start-up time.

2018-01-21 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte resolved SPARK-23066.

Resolution: Won't Fix

> Master Page increase master start-up time.
> --
>
> Key: SPARK-23066
> URL: https://issues.apache.org/jira/browse/SPARK-23066
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> When a Spark system has been running stably for a long time, we do not know 
> how long it has actually been running and cannot get its start-up time from 
> the UI. So it is necessary to add the Master's start-up time to the Master 
> page.






[jira] [Created] (SPARK-23270) FileInputDStream's Streaming UI record count should not be set to the default value of 0; it should be the total number of rows in the new files.

2018-01-30 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-23270:
--

 Summary: FileInputDStream's Streaming UI record count should not be 
set to the default value of 0; it should be the total number of rows in the 
new files.
 Key: SPARK-23270
 URL: https://issues.apache.org/jira/browse/SPARK-23270
 Project: Spark
  Issue Type: Bug
  Components: DStreams
Affects Versions: 2.4.0
Reporter: guoxiaolongzte
 Attachments: 1.png

FileInputDStream's Streaming UI record count should not be set to the default 
value of 0; it should be the total number of rows in the new files.

In FileInputDStream.scala:

val inputInfo = StreamInputInfo(id, 0, metadata)  // numRecords hard-coded to 0
ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

case class StreamInputInfo(
  inputStreamId: Int, numRecords: Long, metadata: Map[String, Any] = Map.empty)

In DirectKafkaInputDStream.scala:

val inputInfo = StreamInputInfo(id, rdd.count, metadata)  // numRecords set to rdd.count
ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

case class StreamInputInfo(
  inputStreamId: Int, numRecords: Long, metadata: Map[String, Any] = Map.empty)

 

Test command:

./bin/spark-submit --class org.apache.spark.examples.streaming.HdfsWordCount 
examples/jars/spark-examples_2.11-2.4.0-SNAPSHOT.jar /spark/tmp/
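The fix the report asks for amounts to counting the rows of the batch's new 
files instead of passing 0. A hedged sketch follows; `NewFileRecords` is a 
hypothetical helper, and the real change would live inside FileInputDStream's 
batch-computation path.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

// Hypothetical sketch: count the total number of lines across the new files
// a batch picked up; this total is what could be reported as numRecords
// instead of the hard-coded 0.
public class NewFileRecords {
    public static long count(List<Path> newFiles) throws IOException {
        long total = 0;
        for (Path p : newFiles) {
            try (Stream<String> lines = Files.lines(p)) {
                total += lines.count();
            }
        }
        return total;
    }
}
```

Counting lines does add an extra pass over each new file, which is the main 
trade-off such a change would need to weigh.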

 

 

 






[jira] [Updated] (SPARK-23270) FileInputDStream's Streaming UI record count should not be set to the default value of 0; it should be the total number of rows in the new files.

2018-01-30 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-23270:
---
Attachment: 1.png

> FileInputDStream's Streaming UI record count should not be set to the 
> default value of 0; it should be the total number of rows in the new files.
> -
>
> Key: SPARK-23270
> URL: https://issues.apache.org/jira/browse/SPARK-23270
> Project: Spark
>  Issue Type: Bug
>  Components: DStreams
>Affects Versions: 2.4.0
>Reporter: guoxiaolongzte
>Priority: Major
> Attachments: 1.png
>
>
> FileInputDStream's Streaming UI record count should not be set to the 
> default value of 0; it should be the total number of rows in the new files.
> In FileInputDStream.scala:
> val inputInfo = StreamInputInfo(id, 0, metadata)  // numRecords hard-coded to 0
> ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)
> case class StreamInputInfo(
>   inputStreamId: Int, numRecords: Long, metadata: Map[String, Any] = Map.empty)
>  
> In DirectKafkaInputDStream.scala:
> val inputInfo = StreamInputInfo(id, rdd.count, metadata)  // numRecords set to rdd.count
> ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)
> case class StreamInputInfo(
>   inputStreamId: Int, numRecords: Long, metadata: Map[String, Any] = Map.empty)
>  
> Test command:
> ./bin/spark-submit --class org.apache.spark.examples.streaming.HdfsWordCount 
> examples/jars/spark-examples_2.11-2.4.0-SNAPSHOT.jar /spark/tmp/
>  
>  
>  






[jira] [Created] (SPARK-20154) In web ui,http://ip:4040/executors/,the title 'Storage Memory' should modify 'Storage Memory used /total'

2017-03-29 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-20154:
--

 Summary: In web ui,http://ip:4040/executors/,the title 'Storage 
Memory' should modify  'Storage Memory used /total'
 Key: SPARK-20154
 URL: https://issues.apache.org/jira/browse/SPARK-20154
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 2.1.0
Reporter: guoxiaolongzte


In the web UI at http://ip:4040/executors/, the title 'Storage Memory' should 
be changed to 'Storage Memory used /total'; this change makes it easier for 
users to understand and observe.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-20154) In web ui,http://ip:4040/executors/,the title 'Storage Memory' should modify 'Storage Memory used /total'

2017-03-29 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20154:
---
Attachment: Before the change.png
After the change.png

> In web ui,http://ip:4040/executors/,the title 'Storage Memory' should modify  
> 'Storage Memory used /total'
> --
>
> Key: SPARK-20154
> URL: https://issues.apache.org/jira/browse/SPARK-20154
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
> Attachments: After the change.png, Before the change.png
>
>
> In the web UI at http://ip:4040/executors/, the title 'Storage Memory' 
> should be changed to 'Storage Memory used /total'; this change makes it 
> easier for users to understand and observe.






[jira] [Updated] (SPARK-20154) In web ui,http://ip:4040/executors/,the title 'Storage Memory' should modify 'Storage Memory used/total'

2017-03-30 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20154:
---
Summary: In web ui,http://ip:4040/executors/,the title 'Storage Memory' 
should modify  'Storage Memory used/total'  (was: In web 
ui,http://ip:4040/executors/,the title 'Storage Memory' should modify  'Storage 
Memory used /total')

> In web ui,http://ip:4040/executors/,the title 'Storage Memory' should modify  
> 'Storage Memory used/total'
> -
>
> Key: SPARK-20154
> URL: https://issues.apache.org/jira/browse/SPARK-20154
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
> Attachments: After the change.png, Before the change.png
>
>
> In the web UI at http://ip:4040/executors/, the title 'Storage Memory' 
> should be changed to 'Storage Memory used/total'; this change makes it 
> easier for users to understand and observe.






[jira] [Updated] (SPARK-20154) In web ui,http://ip:4040/executors/,the title 'Storage Memory' should modify 'Storage Memory used /total'

2017-03-30 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20154:
---
Description: In web ui,http://ip:4040/executors/,the title 'Storage Memory' 
should modify  'Storage Memory used/total',because of this change, easier to 
understand for users and observation  (was: In web 
ui,http://ip:4040/executors/,the title 'Storage Memory' should modify  'Storage 
Memory used /total',because of this change, easier to understand for users and 
observation)

> In web ui,http://ip:4040/executors/,the title 'Storage Memory' should modify  
> 'Storage Memory used /total'
> --
>
> Key: SPARK-20154
> URL: https://issues.apache.org/jira/browse/SPARK-20154
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
> Attachments: After the change.png, Before the change.png
>
>
> In the web UI at http://ip:4040/executors/, the title 'Storage Memory' 
> should be changed to 'Storage Memory used/total'; this change makes it 
> easier for users to understand and observe.






[jira] [Created] (SPARK-20157) In the ‘Storage’ menu in the Web UI, clicking the Go button shows no paging menu.

2017-03-30 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-20157:
--

 Summary: In the ‘Storage’ menu in the Web UI, clicking the Go button 
shows no paging menu.
 Key: SPARK-20157
 URL: https://issues.apache.org/jira/browse/SPARK-20157
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.1.0
Reporter: guoxiaolongzte


In the 'show' text box, enter a number of entries that is greater than or 
equal to the total number of entries, then click the "Go" button. The page 
displays all of the data, but the paging menu disappears; if I then want to 
enter a different number in the "show" text box, I have to leave the page 
and navigate back in through the original link.






[jira] [Updated] (SPARK-20157) In the ‘Storage’ menu in the Web UI, clicking the Go button shows no paging menu.

2017-03-30 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20157:
---
Attachment: After the change.png
Before the change2.png
Before the change1.png

> In the ‘Storage’ menu in the Web UI, clicking the Go button shows no paging 
> menu.
> 
>
> Key: SPARK-20157
> URL: https://issues.apache.org/jira/browse/SPARK-20157
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
> Attachments: After the change.png, Before the change1.png, Before the 
> change2.png
>
>
> In the 'show' text box, enter a number of entries that is greater than or 
> equal to the total number of entries, then click the "Go" button. The page 
> displays all of the data, but the paging menu disappears; if I then want to 
> enter a different number in the "show" text box, I have to leave the page 
> and navigate back in through the original link.






[jira] [Commented] (SPARK-20137) In Spark 1.5 I can run 'cache table as select' many times; in Spark 2.1 it shows TempTableAlreadyExistsException

2017-03-30 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949107#comment-15949107
 ] 

guoxiaolongzte commented on SPARK-20137:


I also want to know why, thank you.

> In Spark 1.5 I can run 'cache table as select' many times; in Spark 2.1 it 
> shows TempTableAlreadyExistsException
> 
>
> Key: SPARK-20137
> URL: https://issues.apache.org/jira/browse/SPARK-20137
> Project: Spark
>  Issue Type: Question
>  Components: SQL
>Affects Versions: 2.1.0
>Reporter: Ruhui Wang
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> About 'cache table as select': in Spark 1.5, I can run this SQL many times:
> cache table t as select * from t1;
> cache table t as select * from t2;
> In Spark 2.1, running the second statement shows "Error in query: Temporary 
> table 't1' already exists;"
> Why doesn't Spark 2.1 support this SQL?






[jira] [Commented] (SPARK-20157) In the ‘Storage’ menu in the Web UI, clicking the Go button shows no paging menu.

2017-03-30 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949117#comment-15949117
 ] 

guoxiaolongzte commented on SPARK-20157:


Help to review the code, thank you

> In the ‘Storage’ menu in the Web UI, clicking the Go button shows no paging 
> menu.
> 
>
> Key: SPARK-20157
> URL: https://issues.apache.org/jira/browse/SPARK-20157
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: After the change.png, Before the change1.png, Before the 
> change2.png
>
>
> In the 'show' text box, enter a number of entries that is greater than or 
> equal to the total number of entries, then click the "Go" button. The page 
> displays all of the data, but the paging menu disappears; if I then want to 
> enter a different number in the "show" text box, I have to leave the page 
> and navigate back in through the original link.






[jira] [Issue Comment Deleted] (SPARK-20157) In the ‘Storage’ menu in the Web UI, clicking the Go button shows no paging menu.

2017-03-30 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20157:
---
Comment: was deleted

(was: Help to review the code, thank you)

> In the ‘Storage’ menu in the Web UI, clicking the Go button shows no paging 
> menu.
> 
>
> Key: SPARK-20157
> URL: https://issues.apache.org/jira/browse/SPARK-20157
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: After the change.png, Before the change1.png, Before the 
> change2.png
>
>
> In the 'show' text box, enter a number of entries that is greater than or 
> equal to the total number of entries, then click the "Go" button. The page 
> displays all of the data, but the paging menu disappears; if I then want to 
> enter a different number in the "show" text box, I have to leave the page 
> and navigate back in through the original link.






[jira] [Created] (SPARK-20167) In SqlBase.g4, some of the comments are not correct.

2017-03-30 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-20167:
--

 Summary: In SqlBase.g4, some of the comments are not correct.
 Key: SPARK-20167
 URL: https://issues.apache.org/jira/browse/SPARK-20167
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 2.1.0
Reporter: guoxiaolongzte
Priority: Minor


In SqlBase.g4, some of the comments (rule labels) are not correct.
e.g.
  | DROP TABLE (IF EXISTS)? tableIdentifier PURGE?  #dropTable
  | DROP VIEW (IF EXISTS)? tableIdentifier          #dropTable

The label for ‘DROP VIEW (IF EXISTS)? tableIdentifier’ should be #dropView.






[jira] [Created] (SPARK-20177) Documentation about compression needs some small detail changes.

2017-03-31 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-20177:
--

 Summary: Documentation about compression needs some small detail 
changes.
 Key: SPARK-20177
 URL: https://issues.apache.org/jira/browse/SPARK-20177
 Project: Spark
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 2.1.0
Reporter: guoxiaolongzte
Priority: Minor


Documentation about compression needs some small detail changes:
1. spark.eventLog.compress: add 'Compression will use spark.io.compression.codec.'
2. spark.broadcast.compress: add 'Compression will use spark.io.compression.codec.'
3. spark.rdd.compress: add 'Compression will use spark.io.compression.codec.'
4. spark.io.compression.codec: add a description of event log compression.
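For context, the settings in these four points relate as follows: the first 
three flags only toggle compression on or off, while spark.io.compression.codec 
chooses the codec they all share. A spark-defaults.conf fragment sketching this 
(values illustrative, not recommendations):

```
spark.eventLog.compress      true
spark.broadcast.compress     true
spark.rdd.compress           true
# The codec used by all three of the settings above.
spark.io.compression.codec   lz4
```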






[jira] [Updated] (SPARK-20177) Documentation about compression needs some small detail changes.

2017-03-31 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20177:
---
Description: 
Documentation about compression needs some small detail changes:
1. spark.eventLog.compress: add 'Compression will use spark.io.compression.codec.'
2. spark.broadcast.compress: add 'Compression will use spark.io.compression.codec.'
3. spark.rdd.compress: add 'Compression will use spark.io.compression.codec.'
4. spark.io.compression.codec: add a description of event log compression.
E.g. from the current documentation, I cannot tell which compression codec the 
event log uses.

  was:
Document compression way little detail changes.
1.spark.eventLog.compress add 'Compression will use spark.io.compression.codec.'
2.spark.broadcast.compress add 'Compression will use 
spark.io.compression.codec.'
3,spark.rdd.compress add 'Compression will use spark.io.compression.codec.'
4.spark.io.compression.codec add 'event log describe'


> Documentation about compression needs some small detail changes.
> --
>
> Key: SPARK-20177
> URL: https://issues.apache.org/jira/browse/SPARK-20177
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> Documentation about compression needs some small detail changes:
> 1. spark.eventLog.compress: add 'Compression will use 
> spark.io.compression.codec.'
> 2. spark.broadcast.compress: add 'Compression will use 
> spark.io.compression.codec.'
> 3. spark.rdd.compress: add 'Compression will use spark.io.compression.codec.'
> 4. spark.io.compression.codec: add a description of event log compression.
> E.g. from the current documentation, I cannot tell which compression codec 
> the event log uses.






[jira] [Created] (SPARK-20190) '/applications/[app-id]/jobs' in rest api,status is [running|succeeded|failed|unknown]

2017-04-01 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-20190:
--

 Summary: '/applications/[app-id]/jobs' in rest api,status is 
[running|succeeded|failed|unknown]
 Key: SPARK-20190
 URL: https://issues.apache.org/jira/browse/SPARK-20190
 Project: Spark
  Issue Type: Bug
  Components: Documentation
Affects Versions: 2.1.0
Reporter: guoxiaolongzte
Priority: Minor


'/applications/[app-id]/jobs' in the REST API: the status values should be 
'[running|succeeded|failed|unknown]', but the documented status values are 
currently '[complete|succeeded|failed]'.

However, '/applications/[app-id]/jobs?status=complete' makes the server return 
'HTTP ERROR 404'.

Added '?status=running' and '?status=unknown'.

Code:
public enum JobExecutionStatus {
  RUNNING,
  SUCCEEDED,
  FAILED,
  UNKNOWN;
}
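The intended behavior can be sketched as a small validation step that maps a 
?status= parameter onto the enum constants and rejects values like 'complete'. 
The `JobStatusParam` class below is a hypothetical helper, not the actual REST 
handler:

```java
// Hypothetical sketch of the validation the REST endpoint implies: a
// ?status= value must match one of the JobExecutionStatus constants;
// anything else yields an error response (the observed HTTP 404).
public class JobStatusParam {
    public enum JobExecutionStatus { RUNNING, SUCCEEDED, FAILED, UNKNOWN }

    // Returns the matching constant, or null for values such as "complete".
    public static JobExecutionStatus parse(String param) {
        for (JobExecutionStatus s : JobExecutionStatus.values()) {
            if (s.name().equalsIgnoreCase(param)) {
                return s;
            }
        }
        return null;
    }
}
```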








[jira] [Updated] (SPARK-20190) '/applications/[app-id]/jobs' in rest api,status should be [running|succeeded|failed|unknown]

2017-04-01 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20190:
---
Summary: '/applications/[app-id]/jobs' in rest api,status should be 
[running|succeeded|failed|unknown]  (was: '/applications/[app-id]/jobs' in rest 
api,status is [running|succeeded|failed|unknown])

> '/applications/[app-id]/jobs' in rest api,status should be 
> [running|succeeded|failed|unknown]
> -
>
> Key: SPARK-20190
> URL: https://issues.apache.org/jira/browse/SPARK-20190
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> '/applications/[app-id]/jobs' in the REST API: the status values should be 
> '[running|succeeded|failed|unknown]', but the documented status values are 
> currently '[complete|succeeded|failed]'.
> However, '/applications/[app-id]/jobs?status=complete' makes the server 
> return 'HTTP ERROR 404'.
> Added '?status=running' and '?status=unknown'.
> Code:
> public enum JobExecutionStatus {
>   RUNNING,
>   SUCCEEDED,
>   FAILED,
>   UNKNOWN;
> }






[jira] [Updated] (SPARK-20190) '/applications/[app-id]/jobs' in rest api,status should be [running|succeeded|failed|unknown]

2017-04-01 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20190:
---
Description: 
The status parameter for '/applications/[app-id]/jobs' in the REST API should be 
'[running|succeeded|failed|unknown]'.
The documentation currently lists '[complete|succeeded|failed]', but 
'/applications/[app-id]/jobs?status=complete' makes the server return 'HTTP 
ERROR 404'.

Added '?status=running' and '?status=unknown'.

code:
public enum JobExecutionStatus {
  RUNNING,
  SUCCEEDED,
  FAILED,
  UNKNOWN;
}



  was:
'/applications/[app-id]/jobs' in rest api.status is 
'[running|succeeded|failed|unknown]'.
now status is '[complete|succeeded|failed]'.

but '/applications/[app-id]/jobs?status=complete' the server return 'HTTP ERROR 
404'.

Added '?status=running' and '?status=unknown'.

code :
public enum JobExecutionStatus {
  RUNNING,
  SUCCEEDED,
  FAILED,
  UNKNOWN;




> '/applications/[app-id]/jobs' in REST API, status should be 
> [running|succeeded|failed|unknown]
> -
>
> Key: SPARK-20190
> URL: https://issues.apache.org/jira/browse/SPARK-20190
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> The status parameter for '/applications/[app-id]/jobs' in the REST API should 
> be '[running|succeeded|failed|unknown]'.
> The documentation currently lists '[complete|succeeded|failed]', but 
> '/applications/[app-id]/jobs?status=complete' makes the server return 'HTTP 
> ERROR 404'.
> Added '?status=running' and '?status=unknown'.
> code:
> public enum JobExecutionStatus {
>   RUNNING,
>   SUCCEEDED,
>   FAILED,
>   UNKNOWN;
> }






[jira] [Created] (SPARK-20218) '/applications/[app-id]/stages' in REST API, add description.

2017-04-04 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-20218:
--

 Summary: '/applications/[app-id]/stages' in REST API, add description.
 Key: SPARK-20218
 URL: https://issues.apache.org/jira/browse/SPARK-20218
 Project: Spark
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 2.1.0
Reporter: guoxiaolongzte
Priority: Minor


'/applications/[app-id]/stages' in the REST API should add the description 
'?status=[active|complete|pending|failed] list only stages in the state.'

Without this description, users of this API do not know that they can filter 
the stage list with the status parameter.

code:
  @GET
  def stageList(@QueryParam("status") statuses: JList[StageStatus]): Seq[StageData] = {
    val listener = ui.jobProgressListener
    val stageAndStatus = AllStagesResource.stagesAndStatus(ui)
    val adjStatuses = {
      if (statuses.isEmpty()) {
        Arrays.asList(StageStatus.values(): _*)
      } else {
        statuses
      }
    }
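The defaulting behavior in the snippet above can be sketched in plain Java, assuming a local stand-in enum (the names `StageStatusFilterSketch` and `adjustStatuses` are made up for illustration): an empty `?status=` query means all statuses, otherwise only the requested ones are kept.

```java
import java.util.Arrays;
import java.util.List;

public class StageStatusFilterSketch {
    // Local stand-in for Spark's StageStatus enum, for illustration only.
    enum StageStatus { ACTIVE, COMPLETE, PENDING, FAILED }

    // Mirrors the adjStatuses logic: no requested statuses means
    // "list stages in every state"; otherwise keep only those requested.
    static List<StageStatus> adjustStatuses(List<StageStatus> requested) {
        if (requested.isEmpty()) {
            return Arrays.asList(StageStatus.values());
        }
        return requested;
    }

    public static void main(String[] args) {
        // Empty request falls back to all four statuses.
        System.out.println(adjustStatuses(Arrays.<StageStatus>asList()).size());       // 4
        // A specific request is passed through unchanged.
        System.out.println(adjustStatuses(Arrays.asList(StageStatus.ACTIVE)).size());  // 1
    }
}
```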






[jira] [Updated] (SPARK-20218) '/applications/[app-id]/stages' in REST API, add description.

2017-04-04 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20218:
---
Description: 
1. '/applications/[app-id]/stages' in the REST API should add the description 
'?status=[active|complete|pending|failed] list only stages in the state.'

Without this description, users of this API do not know that they can filter 
the stage list with the status parameter.

2. '/applications/[app-id]/stages/[stage-id]' in the REST API should remove the 
redundant description ‘?status=[active|complete|pending|failed] list only stages 
in the state.’, because a single stage is already determined by its stage-id.

code:
  @GET
  def stageList(@QueryParam("status") statuses: JList[StageStatus]): Seq[StageData] = {
    val listener = ui.jobProgressListener
    val stageAndStatus = AllStagesResource.stagesAndStatus(ui)
    val adjStatuses = {
      if (statuses.isEmpty()) {
        Arrays.asList(StageStatus.values(): _*)
      } else {
        statuses
      }
    }

  was:
'/applications/[app-id]/stages' in rest api.status should add description 
'?status=[active|complete|pending|failed] list only stages in the state.'

Now the lack of this description, resulting in the use of this api do not know 
the use of the status through the brush stage list.

code:
  @GET
  def stageList(@QueryParam("status") statuses: JList[StageStatus]): 
Seq[StageData] = {
val listener = ui.jobProgressListener
val stageAndStatus = AllStagesResource.stagesAndStatus(ui)
val adjStatuses = {
  if (statuses.isEmpty()) {
Arrays.asList(StageStatus.values(): _*)
  } else {
statuses
  }
};


> '/applications/[app-id]/stages' in REST API, add description.
> 
>
> Key: SPARK-20218
> URL: https://issues.apache.org/jira/browse/SPARK-20218
> Project: Spark
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
>
> 1. '/applications/[app-id]/stages' in the REST API should add the description 
> '?status=[active|complete|pending|failed] list only stages in the state.'
> Without this description, users of this API do not know that they can filter 
> the stage list with the status parameter.
> 2. '/applications/[app-id]/stages/[stage-id]' in the REST API should remove the 
> redundant description ‘?status=[active|complete|pending|failed] list only stages 
> in the state.’, because a single stage is already determined by its stage-id.
> code:
>   @GET
>   def stageList(@QueryParam("status") statuses: JList[StageStatus]): Seq[StageData] = {
>     val listener = ui.jobProgressListener
>     val stageAndStatus = AllStagesResource.stagesAndStatus(ui)
>     val adjStatuses = {
>       if (statuses.isEmpty()) {
>         Arrays.asList(StageStatus.values(): _*)
>       } else {
>         statuses
>       }
>     }






[jira] [Updated] (SPARK-20005) There is no "Newline" in UI in description

2017-04-07 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20005:
---
Attachment: before_fix_of_stages.png
before_fix_jobs.png
after_fix_stages.png
after_fix_jobs.png

> There is no "Newline" in UI in description 
> ---
>
> Key: SPARK-20005
> URL: https://issues.apache.org/jira/browse/SPARK-20005
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: Egor Pahomov
>Priority: Trivial
> Attachments: after_fix_jobs.png, after_fix_stages.png, 
> before_fix_jobs.png, before_fix_of_stages.png
>
>
> There is no "newline" in UI in description: https://ibb.co/bLp2yv






[jira] [Updated] (SPARK-20157) In the menu ‘Storage’ in the Web UI, clicking the Go button shows no paging menu interface.

2017-04-07 Thread guoxiaolongzte (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guoxiaolongzte updated SPARK-20157:
---
Attachment: new.png

> In the menu ‘Storage’ in the Web UI, clicking the Go button shows no paging 
> menu interface.
> 
>
> Key: SPARK-20157
> URL: https://issues.apache.org/jira/browse/SPARK-20157
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: After the change.png, Before the change1.png, Before the 
> change2.png, new.png
>
>
> In the 'Show' text box, enter a number greater than or equal to the total 
> number of entries, then click the 'Go' button. The page displays all of the 
> data, but the paging menu disappears; to change the number of entries shown 
> again, I have to leave the page and click back in through the specific link.






[jira] [Commented] (SPARK-20157) In the menu ‘Storage’ in the Web UI, clicking the Go button shows no paging menu interface.

2017-04-07 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960508#comment-15960508
 ] 

guoxiaolongzte commented on SPARK-20157:


[~srowen]
On the Jobs page in the web UI, if you enter a number greater than or equal to 
the total number of entries in the 'Show' text box and click the 'Go' button, 
the page displays all of the data and the paging menu does not disappear.

On the Storage page in the web UI, doing the same displays all of the data, 
but the paging menu does disappear.

I still feel that my change is the more reasonable behavior.

> In the menu ‘Storage’ in the Web UI, clicking the Go button shows no paging 
> menu interface.
> 
>
> Key: SPARK-20157
> URL: https://issues.apache.org/jira/browse/SPARK-20157
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: After the change.png, Before the change1.png, Before the 
> change2.png, new.png
>
>
> In the 'Show' text box, enter a number greater than or equal to the total 
> number of entries, then click the 'Go' button. The page displays all of the 
> data, but the paging menu disappears; to change the number of entries shown 
> again, I have to leave the page and click back in through the specific link.






[jira] [Comment Edited] (SPARK-20157) In the menu ‘Storage’ in the Web UI, clicking the Go button shows no paging menu interface.

2017-04-07 Thread guoxiaolongzte (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960508#comment-15960508
 ] 

guoxiaolongzte edited comment on SPARK-20157 at 4/7/17 8:57 AM:


[~srowen]
On the Jobs page in the web UI, if you enter a number greater than or equal to 
the total number of entries in the 'Show' text box and click the 'Go' button, 
the page displays all of the data and the paging menu does not disappear.

On the Storage page in the web UI, doing the same displays all of the data, 
but the paging menu does disappear.

I still feel that my change is the more reasonable behavior.

Please refer to the attachment new.png.


was (Author: guoxiaolongzte):
[~srowen]
In  jobs in web ui,Choose 'show' text box, fill in the data to show a number 
greater than or equal to the data to the total number of article. Click on the 
"Go" button, display interface display the total number of the data, but the 
page menu no disappear.

In storage in web ui,
Choose 'show' text box, fill in the data to show a number greater than or equal 
to the data to the total number of article. Click on the "Go" button, display 
interface display the total number of the data, but the page menu disappear.

I still feel that I changed the more reasonable.

> In the menu ‘Storage’ in the Web UI, clicking the Go button shows no paging 
> menu interface.
> 
>
> Key: SPARK-20157
> URL: https://issues.apache.org/jira/browse/SPARK-20157
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.1.0
>Reporter: guoxiaolongzte
>Priority: Minor
> Attachments: After the change.png, Before the change1.png, Before the 
> change2.png, new.png
>
>
> In the 'Show' text box, enter a number greater than or equal to the total 
> number of entries, then click the 'Go' button. The page displays all of the 
> data, but the paging menu disappears; to change the number of entries shown 
> again, I have to leave the page and click back in through the specific link.






[jira] [Created] (SPARK-20269) add JavaWordCountProducer in streaming examples

2017-04-09 Thread guoxiaolongzte (JIRA)
guoxiaolongzte created SPARK-20269:
--

 Summary: add JavaWordCountProducer in streaming examples
 Key: SPARK-20269
 URL: https://issues.apache.org/jira/browse/SPARK-20269
 Project: Spark
  Issue Type: Improvement
  Components: Examples, Structured Streaming
Affects Versions: 2.1.0
Reporter: guoxiaolongzte
Priority: Minor


When running the streaming Kafka examples, a Java word count producer is 
currently missing, which is not conducive to Java developers learning and 
testing.
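For context, the existing Scala `KafkaWordCountProducer` example sends lines of random numeric "words"; a Java counterpart would need to build the same kind of message payload. Below is a minimal, hedged sketch of just that line-building step, with no Kafka client involved; the class and method names here are made up for illustration.

```java
import java.util.Random;
import java.util.StringJoiner;

public class WordLineSketch {
    // Builds one message line of `wordsPerLine` random numeric "words"
    // in [0, maxWord), i.e. the payload a word-count producer would send.
    static String makeLine(Random rng, int wordsPerLine, int maxWord) {
        StringJoiner line = new StringJoiner(" ");
        for (int i = 0; i < wordsPerLine; i++) {
            line.add(Integer.toString(rng.nextInt(maxWord)));
        }
        return line.toString();
    }

    public static void main(String[] args) {
        // Each line has exactly wordsPerLine space-separated tokens.
        String line = makeLine(new Random(), 5, 10);
        System.out.println(line.split(" ").length); // 5
    }
}
```

A full producer would wrap such lines in `ProducerRecord`s and send them to a Kafka topic in a loop.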







