[jira] [Commented] (SPARK-25696) The storage memory displayed on spark Application UI is incorrect.

2018-12-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-25696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16715847#comment-16715847
 ] 

ASF GitHub Bot commented on SPARK-25696:


srowen closed pull request #22683: [SPARK-25696] The storage memory displayed on spark Application UI is…
URL: https://github.com/apache/spark/pull/22683
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/R/pkg/R/context.R b/R/pkg/R/context.R
index e99136723f65b..0207f249f9aa0 100644
--- a/R/pkg/R/context.R
+++ b/R/pkg/R/context.R
@@ -87,7 +87,7 @@ objectFile <- function(sc, path, minPartitions = NULL) {
 #' in the list are split into \code{numSlices} slices and distributed to nodes
 #' in the cluster.
 #'
-#' If size of serialized slices is larger than spark.r.maxAllocationLimit or (200MB), the function
+#' If size of serialized slices is larger than spark.r.maxAllocationLimit or (200MiB), the function
 #' will write it to disk and send the file name to JVM. Also to make sure each slice is not
 #' larger than that limit, number of slices may be increased.
 #'
diff --git a/R/pkg/R/mllib_tree.R b/R/pkg/R/mllib_tree.R
index 0e60842dd44c8..9844061cfd074 100644
--- a/R/pkg/R/mllib_tree.R
+++ b/R/pkg/R/mllib_tree.R
@@ -157,7 +157,7 @@ print.summary.decisionTree <- function(x) {
 #' @param checkpointInterval Param for set checkpoint interval (>= 1) or disable checkpoint (-1).
 #'   Note: this setting will be ignored if the checkpoint directory is not
 #'   set.
-#' @param maxMemoryInMB Maximum memory in MB allocated to histogram aggregation.
+#' @param maxMemoryInMB Maximum memory in MiB allocated to histogram aggregation.
 #' @param cacheNodeIds If FALSE, the algorithm will pass trees to executors to match instances with
 #' nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
 #' can speed up training of deeper trees. Users can set how often should the
@@ -382,7 +382,7 @@ setMethod("write.ml", signature(object = "GBTClassificationModel", path = "chara
 #' @param checkpointInterval Param for set checkpoint interval (>= 1) or disable checkpoint (-1).
 #'   Note: this setting will be ignored if the checkpoint directory is not
 #'   set.
-#' @param maxMemoryInMB Maximum memory in MB allocated to histogram aggregation.
+#' @param maxMemoryInMB Maximum memory in MiB allocated to histogram aggregation.
 #' @param cacheNodeIds If FALSE, the algorithm will pass trees to executors to match instances with
 #' nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
 #' can speed up training of deeper trees. Users can set how often should the
@@ -588,7 +588,7 @@ setMethod("write.ml", signature(object = "RandomForestClassificationModel", path
 #' @param checkpointInterval Param for set checkpoint interval (>= 1) or disable checkpoint (-1).
 #'   Note: this setting will be ignored if the checkpoint directory is not
 #'   set.
-#' @param maxMemoryInMB Maximum memory in MB allocated to histogram aggregation.
+#' @param maxMemoryInMB Maximum memory in MiB allocated to histogram aggregation.
 #' @param cacheNodeIds If FALSE, the algorithm will pass trees to executors to match instances with
 #' nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
 #' can speed up training of deeper trees. Users can set how often should the
diff --git a/core/src/main/resources/org/apache/spark/ui/static/utils.js b/core/src/main/resources/org/apache/spark/ui/static/utils.js
index deeafad4eb5f5..22985e31a7808 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/utils.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/utils.js
@@ -40,9 +40,9 @@ function formatDuration(milliseconds) {
 function formatBytes(bytes, type) {
   if (type !== 'display') return bytes;
   if (bytes == 0) return '0.0 B';
-  var k = 1000;
+  var k = 1024;
   var dm = 1;
-  var sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
+  var sizes = ['B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB'];
   var i = Math.floor(Math.log(bytes) / Math.log(k));
   return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
 }
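For reference, the patched formatter can be exercised on its own. The sketch below restates the post-fix function from the hunk above as a standalone snippet (a reconstruction for illustration, not the full utils.js file):

```javascript
// Post-fix formatBytes: base 1024 with IEC (binary) unit suffixes.
function formatBytes(bytes, type) {
  if (type !== 'display') return bytes;
  if (bytes == 0) return '0.0 B';
  var k = 1024;
  var dm = 1;
  var sizes = ['B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB'];
  var i = Math.floor(Math.log(bytes) / Math.log(k));
  return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
}

console.log(formatBytes(1536, 'display'));    // 1536 bytes -> "1.5 KiB"
console.log(formatBytes(1048576, 'display')); // 1024^2 bytes -> "1 MiB"
```

Note that for any type other than 'display' the raw byte count is returned unchanged, so REST/API consumers are unaffected by the fix.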
diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index 845a3d5f6d6f9..696dafda6d1ec 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/sc

[jira] [Commented] (SPARK-25696) The storage memory displayed on spark Application UI is incorrect.

2018-12-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-25696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16715841#comment-16715841
 ] 

ASF GitHub Bot commented on SPARK-25696:


srowen commented on issue #22683: [SPARK-25696] The storage memory displayed on spark Application UI is…
URL: https://github.com/apache/spark/pull/22683#issuecomment-446026600
 
 
   Merged to master


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> The storage memory displayed on spark Application UI is incorrect.
> --
>
> Key: SPARK-25696
> URL: https://issues.apache.org/jira/browse/SPARK-25696
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, Web UI
>Affects Versions: 2.3.2
>Reporter: hantiantian
>Assignee: hantiantian
>Priority: Major
>  Labels: release-notes
> Fix For: 3.0.0
>
>
> In the reported heartbeat information, the unit of the memory data is bytes, 
> which is converted by the formatBytes() function in the utils.js file before 
> being displayed in the interface. The base of the unit conversion in the 
> formatBytes function is 1000, which should be 1024.
> function formatBytes(bytes, type) {
>    if (type !== 'display') return bytes;
>    if (bytes == 0) return '0.0 B';
>    var k = 1000;
>    var dm = 1;
>    var sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
>    var i = Math.floor(Math.log(bytes) / Math.log(k));
>    return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
> }
>  
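To see the reported effect concretely: the pre-fix formatter divides by powers of 1000 but is fed binary-sized byte counts from the heartbeats. This standalone sketch reproduces the function quoted in the issue and feeds it exactly 1 GiB:

```javascript
// Pre-fix formatBytes: divides by powers of 1000 but labels the
// result KB/MB/GB, so exactly 1 GiB (1073741824 bytes) is shown
// in the UI as "1.1 GB" rather than "1.0 GiB".
function formatBytesOld(bytes, type) {
  if (type !== 'display') return bytes;
  if (bytes == 0) return '0.0 B';
  var k = 1000;
  var dm = 1;
  var sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
  var i = Math.floor(Math.log(bytes) / Math.log(k));
  return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
}

console.log(formatBytesOld(1073741824, 'display')); // "1.1 GB"
```

The overstatement grows with each unit step (about 2.4% per step), which is why the discrepancy is most visible for GB-scale storage memory.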



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-25696) The storage memory displayed on spark Application UI is incorrect.

2018-11-09 Thread Sean Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-25696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681500#comment-16681500
 ] 

Sean Owen commented on SPARK-25696:
---

Per the pull request -- the error is actually slightly different. Yes, 1024 
should be the factor, but then all the units need to be displayed as binary 
units: KiB, MiB, GiB, and so on. Just changing the 1000 is wrong.
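In other words, the base and the labels have to change together. A hypothetical half-fix that switches the base to 1024 but keeps the SI suffixes would print values that look like decimal units but are not (illustrative sketch only, not Spark code):

```javascript
// Generic formatter parameterized by base and unit labels.
function formatWith(bytes, base, units) {
  var i = Math.floor(Math.log(bytes) / Math.log(base));
  return (bytes / Math.pow(base, i)).toFixed(1) + ' ' + units[i];
}

var bytes = 500000;
// Consistent decimal: 500000 bytes really is 500.0 KB.
console.log(formatWith(bytes, 1000, ['B', 'KB', 'MB']));   // "500.0 KB"
// Consistent binary: the same count is 488.3 KiB.
console.log(formatWith(bytes, 1024, ['B', 'KiB', 'MiB'])); // "488.3 KiB"
// Half-fix (base 1024, decimal labels): "488.3 KB" understates
// the true byte count, since 488.3 KB would be only 488300 bytes.
console.log(formatWith(bytes, 1024, ['B', 'KB', 'MB']));   // "488.3 KB"
```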

> The storage memory displayed on spark Application UI is incorrect.
> --
>
> Key: SPARK-25696
> URL: https://issues.apache.org/jira/browse/SPARK-25696
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.2
>Reporter: hantiantian
>Priority: Major
>
> In the reported heartbeat information, the unit of the memory data is bytes, 
> which is converted by the formatBytes() function in the utils.js file before 
> being displayed in the interface. The base of the unit conversion in the 
> formatBytes function is 1000, which should be 1024.
> function formatBytes(bytes, type) {
>    if (type !== 'display') return bytes;
>    if (bytes == 0) return '0.0 B';
>    var k = 1000;
>    var dm = 1;
>    var sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
>    var i = Math.floor(Math.log(bytes) / Math.log(k));
>    return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
> }
>  






[jira] [Commented] (SPARK-25696) The storage memory displayed on spark Application UI is incorrect.

2018-10-09 Thread Apache Spark (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-25696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644527#comment-16644527
 ] 

Apache Spark commented on SPARK-25696:
--

User 'httfighter' has created a pull request for this issue:
https://github.com/apache/spark/pull/22683

> The storage memory displayed on spark Application UI is incorrect.
> --
>
> Key: SPARK-25696
> URL: https://issues.apache.org/jira/browse/SPARK-25696
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.2
>Reporter: hantiantian
>Priority: Major
>
> In the reported heartbeat information, the unit of the memory data is bytes, 
> which is converted by the formatBytes() function in the utils.js file before 
> being displayed in the interface. The base of the unit conversion in the 
> formatBytes function is 1000, which should be 1024.
> function formatBytes(bytes, type) {
>    if (type !== 'display') return bytes;
>    if (bytes == 0) return '0.0 B';
>    var k = 1000;
>    var dm = 1;
>    var sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
>    var i = Math.floor(Math.log(bytes) / Math.log(k));
>    return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
> }
>  


