On Wed, Aug 13, 2014 at 8:24 AM, Ben Coman <[email protected]> wrote:

>  Eliot Miranda wrote:
>
>  On Tue, Aug 12, 2014 at 5:27 AM, Ben Coman <[email protected]> wrote:
>
>>  GitHub wrote:
>>
>>   Branch: refs/heads/4.0
>>   Home:   https://github.com/pharo-project/pharo-core
>>   Commit: 06d05bd822deee4a79736d9f99d4a666ca1637eb
>>       
>> https://github.com/pharo-project/pharo-core/commit/06d05bd822deee4a79736d9f99d4a666ca1637eb
>>   Author: Jenkins Build Server <[email protected]>
>>   Date:   2014-08-11 (Mon, 11 Aug 2014)
>>
>>
>> 13806 Remove ThreadSafeTranscriptPluggableTextMorph
>>      https://pharo.fogbugz.com/f/cases/13806
>>
>>
>>
>> For anyone concerned about the performance of writing to Transcript from
>> higher priority threads, just reporting that altering ThreadSafeTranscript
>> to be safe for Morphic without ThreadSafeTranscriptPluggableTextMorph had a
>> side effect of enhancing performance by 25x.   With two runs of the
>> following script...
>>
>>     Smalltalk garbageCollect.
>>     Transcript open. "close after each run"
>>     [ Transcript crShow: (
>>         [ | string |
>>         string := '-'.
>>         1 to: 2000 do: [ :n |
>>             string := string , '-' , n printString.
>>             Transcript show: string ].
>>         (Delay forMilliseconds: 10) wait ]) timeToRun.
>>     ] forkAt: 41.
>>
>>
>>
>> Build 40162 reports timeToRun of 0:00:00:02.483 & 0:00:00:02.451
>> Build 40165 reports timeToRun of 0:00:00:00.037 & 0:00:00:00.099
>>
>>
>> Now I had meant to ask... I notice that FLFuelCommandLineHandler installs
>> ThreadSafeTranscript, so I wonder whether it is affected by this change. Can some
>> Fuel experts comment?
>>
>>
>> Also, I am looking for advice on a minor downside I just noticed.
>> The whole script above can complete between steps, so the entire output
>> ends up in the PluggableTextMorph without being culled, which makes
>> making a selection really slow.  Normally the excess text shown by
>> Transcript is culled in half [1] by PluggableTextMorph>>appendEntry
>> each time #changed: is called.
>>
>> PluggableTextMorph>>appendEntry
>>     "Append the text in the model's writeStream to the editable text."
>>     textMorph asText size > model characterLimit ifTrue:   "<---[0]"
>>         ["Knock off first half of text"
>>         self selectInvisiblyFrom: 1 to: textMorph asText size // 2.   "<---[1]"
>>         self replaceSelectionWith: Text new].
>>     self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size.
>>     self replaceSelectionWith: model contents asText.   "<---[2]"
>>     self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size
>>
>> That works fine when #appendEntry is being called with lots of small
>> changes, but for a single large change the entire change ends up in
>> PluggableTextMorph via [2]. In this case
>>     model characterLimit  "--> 20,000"     [0]
>>     model contents size "--> 5,671,343"    [2]
>> where model == Transcript.
>>
>> So what is the behaviour you'd like when too much is sent to the
>> Transcript?
>> a. Show all content, however briefly.
>> b. Only the last 20,000 characters are put into the PluggableTextMorph,
>> and the earlier data is thrown away.
>> I see a few ways to deal with this:
>> 1. Limit the stream inside Transcript to a maximum 20,000 characters by
>> basing it on some circular buffer.
>> 2. Have "Transcript contents" return only the last 20,000 characters of
>> its stream.
>> 3. Limit the text sent to #replaceSelectionWith: [2] to 20,000 characters.
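>>
>> A minimal sketch of (2.), assuming ThreadSafeTranscript buffers its output
>> in an internal WriteStream held in an instance variable (the ivar name
>> 'stream' and the characterLimit accessor below are assumptions, not the
>> actual implementation):
>>
>>     ThreadSafeTranscript>>contents
>>         "Answer only the tail of the buffered output, so a single huge
>>         append cannot flood the PluggableTextMorph."
>>         | all limit |
>>         all := stream contents.
>>         limit := self characterLimit.
>>         ^ all size <= limit
>>             ifTrue: [ all ]
>>             ifFalse: [ all copyFrom: all size - limit + 1 to: all size ]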
>>
>> Thoughts anyone?
>>
>
>  IMO 20,000 characters is way too little.  I routinely set my limit to
> 200,000.   So b) but with a higher limit.  a) is pointlessly slow.
>
> Thanks Eliot.  It's good to know what is expected from Transcript.
>
>
>  In your implementation list, surely 2. is the only thing that makes
> sense?
>
>
> I agree, (2.) would be efficient.  It just would have been precluded if
> (a) was required.
>
>    If the limit can remain soft then anyone that wants to see more simply
> raises their limit.
>
>
> Do you mean you'd like Transcript characterLimit to be a Preference?
>

I'm not sure.  If it was a preference I'd use it, but few others may find
it useful.  As I say, I need at least 200,000 chars in the transcript to
develop Cog effectively; the VM simulator produces lots of output and
realistically the transcript is the only convenient place to view it.  So
whether it was a preference or not I'd make sure the limit was >= 200k.

> 1. & 3. are implementation details left up to you.
>
>
> I can do (1.) with Steph's new circular list.  Doing (2.) means (3.) won't
> be required for this, but it's probably beneficial so I'll log it as a
> separate issue.
>
>    As long as the transcript is reasonably fast and has the same protocol
> as WriteStream (it needs to support printOn: and storeOn: et al) implement
> it as you will.  BTW, the Squeak transcript is appallingly slow.
>
> --
best,
Eliot
