Github user coderxiang commented on a diff in the pull request:
https://github.com/apache/spark/pull/1135#discussion_r14202953
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/ExecutorTable.scala ---
@@ -67,18 +67,20 @@ private[ui] class ExecutorTable(stageId: Int, parent: JobProgressTab) {
executorIdToSummary match {
case Some(x) =>
x.toSeq.sortBy(_._1).map { case (k, v) => {
+ // scalastyle:off
<tr>
<td>{k}</td>
<td>{executorIdToAddress.getOrElse(k, "CANNOT FIND ADDRESS")}</td>
- <td>{UIUtils.formatDuration(v.taskTime)}</td>
+ <td sorttable_customekey={v.taskTime.toString}>{UIUtils.formatDuration(v.taskTime)}</td>
<td>{v.failedTasks + v.succeededTasks}</td>
<td>{v.failedTasks}</td>
<td>{v.succeededTasks}</td>
- <td>{Utils.bytesToString(v.shuffleRead)}</td>
- <td>{Utils.bytesToString(v.shuffleWrite)}</td>
- <td>{Utils.bytesToString(v.memoryBytesSpilled)}</td>
- <td>{Utils.bytesToString(v.diskBytesSpilled)}</td>
+ <td sorttable_customekey={v.shuffleRead.toString}>{Utils.bytesToString(v.shuffleRead)}</td>
--- End diff --
Strangely, on the RDD page the sorting is also by string, yet it displays
correctly. I haven't gone through the full implementation, so I'm not sure
why this happens. It looks like sorttable_customkey only accepts a string
as the key; a not-so-good workaround would be to use fixed-length strings
padded with leading zeros.
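The zero-padding workaround mentioned above can be sketched as follows; `padKey` is a hypothetical helper name for illustration, not part of this patch:

```scala
// Hypothetical helper sketching the fixed-length-string workaround:
// pad a non-negative Long to 20 digits (enough for any Long) so that
// sorttable's lexicographic string comparison matches numeric order.
// Note: negative values would still sort incorrectly with this scheme.
def padKey(value: Long): String = f"$value%020d"
```

With this, `padKey(5L)` sorts before `padKey(42L)` even under plain string comparison, which a raw `"5"` vs `"42"` would not.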
On Thu, Jun 19, 2014 at 7:40 PM, Guoqiang Li <[email protected]>
wrote:
> In core/src/main/scala/org/apache/spark/ui/jobs/ExecutorTable.scala:
>
> > <td>{v.failedTasks + v.succeededTasks}</td>
> > <td>{v.failedTasks}</td>
> > <td>{v.succeededTasks}</td>
> > - <td>{Utils.bytesToString(v.shuffleRead)}</td>
> > - <td>{Utils.bytesToString(v.shuffleWrite)}</td>
> > - <td>{Utils.bytesToString(v.memoryBytesSpilled)}</td>
> > - <td>{Utils.bytesToString(v.diskBytesSpilled)}</td>
> > + <td sorttable_customekey={v.shuffleRead.toString}>{Utils.bytesToString(v.shuffleRead)}</td>
>
> The key must be a String, or it is a compile error:
>
> [error] /Users/witgo/work/code/java/spark/core/src/main/scala/org/apache/spark/ui/jobs/ExecutorTable.scala:78: overloaded method constructor UnprefixedAttribute with alternatives:
> [error] (key: String,value: Option[Seq[scala.xml.Node]],next: scala.xml.MetaData)scala.xml.UnprefixedAttribute <and>
> [error] (key: String,value: String,next: scala.xml.MetaData)scala.xml.UnprefixedAttribute <and>
> [error] (key: String,value: Seq[scala.xml.Node],next1: scala.xml.MetaData)scala.xml.UnprefixedAttribute
> [error] cannot be applied to (String, Long, scala.xml.MetaData)
> [error] <td sorttable_customekey={v.shuffleRead}>{Utils.bytesToString(v.shuffleRead)}</td>
> [error]
>
> Reply to this email directly or view it on GitHub
> <https://github.com/apache/spark/pull/1135/files#r14005150>.
>
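The compile error quoted above comes from scala.xml: an unprefixed attribute in an XML literal must be a String (or Seq[Node], or an Option of either), never a Long, which is why the patch appends `.toString`. A minimal sketch, assuming scala-xml is on the classpath:

```scala
import scala.xml.Elem

val shuffleRead = 1024L
// Writing sorttable_customekey={shuffleRead} would not compile, because
// no UnprefixedAttribute constructor accepts a Long; converting the value
// to a String first satisfies the (key: String, value: String, ...) overload.
val cell: Elem = <td sorttable_customekey={shuffleRead.toString}>{shuffleRead}</td>
```

The attribute ends up carrying the raw numeric value as a string, which is exactly what sorttable needs for its custom sort key.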