> Hi
>
> I've been doing some performance work lately in Seaside. Long story
> short, Seaside (and I guess AIDA too) spends most of its rendering time
> in WriteStream (#nextPutAll:, #nextPut:).
>
> The way WriteStream >> #pastEndPut: behaves is not really ideal for
> Seaside. It grows the underlying collection by just enough to
> accommodate the argument collection (or by 20, whichever is bigger). Now
> imagine the following, not very unlikely, scenario: you start with a 4k
> buffer and put on average a 10-element collection (remember, all those
> tags are put individually) until you have a 16k response. You allocate
> more than a thousand intermediary collections to get there.
> What would be better suited for Seaside is doubling the size instead.
> In the worst case that would mean wasting 50% of memory, but it would
> make the overhead of creating intermediary collections logarithmic. In
> the given example it would take us only three allocations to get there.
> Now I do realize there are other applications for Pharo where this
> strategy is not ideal, and this is not a killer for us. I just wanted to
> shed some light on this and ask whether other projects are in a similar
> situation.
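To make the arithmetic in the quoted scenario concrete, here is a small model of the two growth policies (written in Python only so the numbers are easy to check; `by_need` is an approximation of the #pastEndPut: behavior described above, not the actual Pharo code):

```python
def count_reallocations(initial, target, chunk, grow):
    """Model a WriteStream buffer growing from `initial` capacity
    until it holds `target` elements, appended `chunk` at a time.
    Returns how many new backing collections are allocated."""
    capacity, size, allocations = initial, 0, 0
    while size < target:
        if size + chunk > capacity:
            capacity = grow(capacity, size + chunk)
            allocations += 1
        size += chunk
    return allocations

# Approximation of the current policy: grow by just enough, or by 20.
by_need = lambda cap, needed: max(needed, cap + 20)
# Proposed policy: double the capacity until it fits.
doubling = lambda cap, needed: max(needed, cap * 2)

print(count_reallocations(4096, 16384, 10, by_need))   # hundreds of grows
print(count_reallocations(4096, 16384, 10, doubling))  # 3 grows: 8k, 16k, 32k
```

With 10-element puts from a 4k buffer to a 16k response, the conservative policy reallocates hundreds of times while doubling needs only three grows, matching the logarithmic-overhead argument above.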
Thanks for the info.
Do you have an idea in which cases allocating more would be a real problem?
Because in VW there were some patterns:
turning , into #nextPut: to avoid exactly this underlying string allocation,
preallocating with OrderedCollection new: instead of relying on its
growing behavior.
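As a rough illustration of why the first pattern matters (a Python analogue, since the effect is the same in any language with immutable strings: Python's str here plays the role of the String that `,` allocates):

```python
import io

parts = ['<div>', 'hello', '</div>'] * 1000

# Analogue of building the response with `,` : every concatenation
# copies everything accumulated so far into a fresh string.
by_comma = ''
for p in parts:
    by_comma = by_comma + p

# Analogue of #nextPutAll: on a single WriteStream: one growing
# buffer instead of one full copy per concatenation.
stream = io.StringIO()
for p in parts:
    stream.write(p)

assert by_comma == stream.getvalue()
```

The streaming version allocates a handful of growing buffers; the concatenating version allocates a new result on every step.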
Final question:
maybe we should be able to plug in the stream growth behavior we want.
That way the Seaside people can get the speed out of it. I think that
having fast web apps is important.
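A pluggable growth policy could look roughly like this (a Python sketch of the idea only; the class and selector names are made up, not an existing Pharo API):

```python
class PluggableWriteStream:
    """Sketch of a write stream whose growth policy is injected,
    so applications like Seaside can choose doubling while others
    keep a more memory-conservative policy."""

    def __init__(self, initial=32, grow=lambda cap, needed: max(needed, 2 * cap)):
        self._buf = bytearray(initial)
        self._size = 0
        self._grow = grow          # policy: (capacity, needed) -> new capacity
        self.allocations = 0       # count of backing-buffer reallocations

    def next_put_all(self, data):
        needed = self._size + len(data)
        if needed > len(self._buf):
            new_buf = bytearray(self._grow(len(self._buf), needed))
            new_buf[:self._size] = self._buf[:self._size]  # copy old contents
            self._buf = new_buf
            self.allocations += 1
        self._buf[self._size:needed] = data
        self._size = needed

    def contents(self):
        return bytes(self._buf[:self._size])


# Usage: 100 small tag-sized puts with the doubling policy.
s = PluggableWriteStream(initial=4)
for _ in range(100):
    s.next_put_all(b'<b>hi</b>')
print(s.allocations)        # only a handful of grows thanks to doubling
print(len(s.contents()))    # 900
```

Swapping the `grow` argument for a conservative `lambda cap, needed: needed` reproduces the many-small-grows behavior without touching the stream itself, which is the kind of per-application choice being proposed.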
>
> To get a feel for how allocation-limited Seaside is: avoiding one allocation
> of a 16k ByteArray per request can make a difference in throughput
> between 10 Mbyte/s and 30 Mbyte/s (see "[Progress Report] Zinc HTTP
> Components"). If anybody knows a way to make allocation of large young
> space objects faster (Set GC Bias to Grow?, #vmParameterAt:put:?) I'd
> like to hear it.
>
> Cheers
> Philippe