Github user kwmonroe commented on the issue:

    https://github.com/apache/bigtop/pull/153
  
    Charms in the ~bigdata-dev namespace have been refreshed, and terasort has been verified to work on lxd again:
    ```
    results:
      meta:
        composite:
          direction: asc
          units: secs
          value: "385"
        start: 2016-10-25T22:19:00Z
        stop: 2016-10-25T22:25:25Z
      results:
        raw: '{"GC time elapsed (ms)": "19102", "Launched reduce tasks": "1",
          "Shuffled Maps ": "8", "FILE: Number of bytes written": "2081067810",
          "Physical memory (bytes) snapshot": "4834197504",
          "Total megabyte-seconds taken by all reduce tasks": "240579584",
          "Rack-local map tasks": "1",
          "HDFS: Number of large read operations": "0",
          "Failed Shuffles": "0", "Reduce output records": "10000000",
          "Map input records": "10000000",
          "Total vcore-seconds taken by all map tasks": "1682215",
          "WRONG_REDUCE": "0", "Spilled Records": "20000000",
          "Total time spent by all reduces in occupied slots (ms)": "234941",
          "FILE: Number of read operations": "0", "BAD_ID": "0",
          "Input split bytes": "1040", "Reduce input groups": "10000000",
          "Total megabyte-seconds taken by all map tasks": "1722588160",
          "HDFS: Number of read operations": "27",
          "Map output materialized bytes": "1040000048",
          "Bytes Read": "1000000000", "FILE: Number of bytes read": "1040000012",
          "CONNECTION": "0", "Combine output records": "0",
          "Total vcore-seconds taken by all reduce tasks": "234941",
          "Total time spent by all map tasks (ms)": "1682215",
          "CPU time spent (ms)": "179730", "Map output bytes": "1020000000",
          "Bytes Written": "1000000000", "IO_ERROR": "0",
          "Merged Map outputs": "8", "FILE: Number of write operations": "0",
          "Total time spent by all maps in occupied slots (ms)": "1682215",
          "Launched map tasks": "14", "Killed map tasks": "6",
          "Reduce shuffle bytes": "1040000048",
          "HDFS: Number of write operations": "2",
          "Map output records": "10000000",
          "HDFS: Number of bytes written": "1000000000",
          "Combine input records": "0",
          "FILE: Number of large read operations": "0",
          "Data-local map tasks": "13",
          "Total committed heap usage (bytes)": "4217896960",
          "Virtual memory (bytes) snapshot": "25333989376",
          "WRONG_LENGTH": "0", "Reduce input records": "10000000",
          "Total time spent by all reduce tasks (ms)": "234941",
          "HDFS: Number of bytes read": "1000001040", "WRONG_MAP": "0"}'
    status: completed
    timing:
      completed: 2016-10-25 22:25:27 +0000 UTC
      enqueued: 2016-10-25 22:18:47 +0000 UTC
      started: 2016-10-25 22:18:47 +0000 UTC
    ```
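    For reference, the composite value of "385" secs appears to be just the wall-clock delta between the start and stop timestamps in the output above, which is a quick sanity check on the benchmark run (a sketch, not how the charm itself computes it):

    ```python
    from datetime import datetime, timezone

    # Timestamps copied from the action output above
    start = datetime(2016, 10, 25, 22, 19, 0, tzinfo=timezone.utc)
    stop = datetime(2016, 10, 25, 22, 25, 25, tzinfo=timezone.utc)

    # Wall-clock duration in seconds matches the composite value
    elapsed = int((stop - start).total_seconds())
    print(elapsed)  # 385
    ```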

