GitHub wrote:
  Branch: refs/heads/4.0
  Home:   https://github.com/pharo-project/pharo-core
  Commit: 06d05bd822deee4a79736d9f99d4a666ca1637eb
      https://github.com/pharo-project/pharo-core/commit/06d05bd822deee4a79736d9f99d4a666ca1637eb
  Author: Jenkins Build Server <[email protected]>
  Date:   2014-08-11 (Mon, 11 Aug 2014)


13806 Remove ThreadSafeTranscriptPluggableTextMorph
	https://pharo.fogbugz.com/f/cases/13806



For anyone concerned about the performance of writing to Transcript from higher-priority threads: just reporting that altering ThreadSafeTranscript to be safe for Morphic without ThreadSafeTranscriptPluggableTextMorph had the side effect of improving performance by 25x. Here are two runs of the following script...

    Smalltalk garbageCollect.
    Transcript open. "close after each run"
    [ Transcript crShow: (
        [
            | string |
            string := '-'.
            1 to: 2000 do: [ :n |
                string := string , '-' , n printString.
                Transcript show: string ].
            (Delay forMilliseconds: 10) wait.
        ]) timeToRun.
    ] forkAt: 41.



Build 40162 reports timeToRun of 0:00:00:02.483 & 0:00:00:02.451
Build 40165 reports timeToRun of 0:00:00:00.037 & 0:00:00:00.099


Now I had meant to ask... I notice that FLFuelCommandLineHandler installs ThreadSafeTranscript, so I wonder whether it is affected by this change. Can some Fuel experts comment?


Also I am looking for some advice on a minor downside I just noticed.  The whole script above can complete between steps, so the entire output ends up in the PluggableTextMorph without its size being culled, which makes making a selection really slow.  Normally the excess text shown by Transcript is culled in half [1] by PluggableTextMorph>>appendEntry each time #changed: is called.

PluggableTextMorph>>appendEntry
    "Append the text in the model's writeStream to the editable text."
    textMorph asText size > model characterLimit ifTrue:   "<---[0]"
        ["Knock off first half of text"
        self selectInvisiblyFrom: 1 to: textMorph asText size // 2.   "<---[1]"
        self replaceSelectionWith: Text new].
    self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size.
    self replaceSelectionWith: model contents asText.  "<----[2]"
    self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size

That works fine when #appendEntry is called with lots of small changes, but for a single large change the entire change ends up in the PluggableTextMorph via [2]. In this case
    model characterLimit  "--> 20,000"     [0]
    model contents size "--> 5,671,343"    [2]
where model == Transcript.

So what is the behaviour you'd like to see when too much is sent to Transcript?
a. Show all content, however briefly.
b. Only the last 20,000 characters are put into the PluggableTextMorph, and the earlier data is thrown away.

I see a few ways to deal with this:
1. Limit the stream inside Transcript to a maximum of 20,000 characters by basing it on some circular buffer.
2. Have "Transcript contents" return only the last 20,000 characters of its stream.
3. Limit the text sent to #replaceSelectionWith: [2] to 20,000 characters.
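To make option 3 concrete, here is a rough sketch (just an illustration, reusing the method names from the #appendEntry code above; the temporaries are hypothetical) of truncating the appended text to the model's characterLimit before it reaches #replaceSelectionWith::

    | newText limit |
    newText := model contents asText.
    limit := model characterLimit.  "20,000 in the example above"
    newText size > limit ifTrue:
        ["keep only the last <limit> characters"
        newText := newText copyFrom: newText size - limit + 1 to: newText size].
    self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size.
    self replaceSelectionWith: newText

That would keep the morph's cost bounded regardless of how large a single change is, at the price of silently dropping the earlier output (i.e. behaviour b above).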

Thoughts anyone?

cheers -ben
