Well, I have no idea why that line is there :). It's old (so I don't
remember), a tweak (read 'hack') I probably did to make the bootstrap work
at the time, and I don't think it's correct for that line to be there, so
maybe we should remove it? :)


On Tue, Aug 12, 2014 at 10:03 PM, Max Leske <[email protected]> wrote:

>
> On 12.08.2014, at 14:27, Ben Coman <[email protected]> wrote:
>
>  GitHub wrote:
>
>   Branch: refs/heads/4.0
>   Home:   https://github.com/pharo-project/pharo-core
>   Commit: 06d05bd822deee4a79736d9f99d4a666ca1637eb
>       
> https://github.com/pharo-project/pharo-core/commit/06d05bd822deee4a79736d9f99d4a666ca1637eb
>   Author: Jenkins Build Server <[email protected]>
>   Date:   2014-08-11 (Mon, 11 Aug 2014)
>
>
> 13806 Remove ThreadSafeTranscriptPluggableTextMorph
>       https://pharo.fogbugz.com/f/cases/13806
>
>
>
> For anyone concerned about the performance of writing to Transcript from
> higher priority threads: altering ThreadSafeTranscript to be safe for
> Morphic without ThreadSafeTranscriptPluggableTextMorph had the side effect
> of improving performance by 25x. With two runs of the following script...
>
>     Smalltalk garbageCollect.
>     Transcript open. "close after each run"
>     [ Transcript crShow: (
>         [
>             | string |
>             string := '-'.
>             1 to: 2000 do: [ :n |
>                 string := string , '-', n printString.
>                 Transcript show: string ].
>             (Delay forMilliseconds: 10) wait.
>         ]) timeToRun.
>     ] forkAt: 41.
>
>
>
> Build 40162 reports timeToRun of 0:00:00:02.483 & 0:00:00:02.451
> Build 40165 reports timeToRun of 0:00:00:00.037 & 0:00:00:00.099
>
>
> Now I had meant to ask... I notice that FLFuelCommandLineHandler installs
> ThreadSafeTranscript, so I wonder whether it is affected by this change.
> Can some Fuel experts comment?
>
>
> Hi Ben
>
> I tried to figure this out with Mariano. Apparently the Transcript parts
> come from an improvement that was suggested by Pavel Krivanek (see this
> issue: http://code.google.com/p/pharo/issues/detail?id=6428). The code
> presumably was tailored to the needs that Hazel (today called Seed
> http://smalltalkhub.com/#!/~Guille/Seed) had. We couldn’t find any other
> reason why we would do *anything* Transcript related, especially since the
> command line handlers print to stdout anyway.
>
> Mariano suggested I CC Ben and Guille, so I’m doing that. Maybe one of the
> two can shed some light on that method. From my point of view I don’t see
> why we should keep that part of the code.
>
> Cheers,
> Max
>
>
>
> Also I am looking for some advice on a minor downside I just noticed.
> The whole script above can complete between steps, so the entire output
> ends up in the PluggableTextMorph without being culled, which makes
> selecting text really slow. Normally the excess text shown by Transcript
> is culled in half [1] by PluggableTextMorph>>appendEntry each time
> #changed: is called.
>
> PluggableTextMorph>>appendEntry
>     "Append the text in the model's writeStream to the editable text."
>     textMorph asText size > model characterLimit ifTrue:    "<---[0]"
>         ["Knock off first half of text"
>         self selectInvisiblyFrom: 1 to: textMorph asText size // 2.    "<---[1]"
>         self replaceSelectionWith: Text new].
>     self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size.
>     self replaceSelectionWith: model contents asText.    "<---[2]"
>     self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size
>
> That works fine when #appendEntry is being called with lots of small
> changes, but for a single large change the entire change ends up in
> PluggableTextMorph via [2]. In this case
>     model characterLimit   "--> 20,000"       [0]
>     model contents size    "--> 5,671,343"    [2]
> where model == Transcript.
>
> So what is the behaviour you'd like when too much is sent to Transcript?
> a. Show all content, however briefly.
> b. Put only the last 20,000 characters into the PluggableTextMorph and
> throw the earlier data away.
>
> I see a few ways to deal with this:
> 1. Limit the stream inside Transcript to a maximum of 20,000 characters by
> basing it on some circular buffer.
> 2. Have "Transcript contents" return only the last 20,000 characters of
> its stream.
> 3. Limit the text sent to #replaceSelectionWith: [2] to 20,000 characters.
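>
> As a rough sketch of option 2 (the selector #tailContents and the
> hard-coded limit are hypothetical, not an existing Pharo API; it assumes
> #contents answers the full buffered string):
>
>     ThreadSafeTranscript >> tailContents
>         "Answer at most the last 20,000 characters of the buffered
>         contents, discarding anything older. Hypothetical helper."
>         | all limit |
>         all := self contents.
>         limit := 20000.
>         ^ all size <= limit
>             ifTrue: [ all ]
>             ifFalse: [ all copyLast: limit ]
>
> PluggableTextMorph>>appendEntry could then ask the model for
> #tailContents rather than #contents at [2].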
>
> Thoughts anyone?
>
> cheers -ben
>
>
>
