On Tuesday, Jul 29, 2003, at 16:58 Europe/Rome, Hunsberger, Peter wrote:
Stefano Mazzocchi <[EMAIL PROTECTED]> asks:
> > If you do the Google search you'll notice the references to randomized
> > paging algorithms. I didn't chase these very far other than to
> > determine that at least one author shows that they can perform as good
> > as conventional algorithms…
On Monday, Jul 28, 2003, at 17:24 Europe/Rome, Hunsberger, Peter wrote:
Stefano Mazzocchi <[EMAIL PROTECTED]> wrote:
> NOTE: this is a refactoring of an email I wrote 2 1/2 years ago.
Stefano,
had a little more time to read this last week:
Memphis got hit by some 85 mph winds, no power at our house for a week
yet, but I have a printed copy...
(See http://s…
Stefano Mazzocchi <[EMAIL PROTECTED]> writes:
>
> On Friday, Jul 18, 2003, at 13:03 America/Guayaquil, Hunsberger, Peter wrote:
>
> > Let me try this a different way: one of the design decisions driving
> > the use of SAX over DOM is that SAX is more memory efficient.
> > However, if…
On Friday, Jul 18, 2003, at 08:16 America/Guayaquil, Andreas Hochsteger
wrote:
Stefano Mazzocchi wrote:
Exactly! I came to that same exact conclusion from this very point: I
tried to come up with an optimal way to know which object to throw away
when the cache is full, and I found out that we w…
On Friday, Jul 18, 2003, at 13:03 America/Guayaquil, Hunsberger, Peter
wrote:
Let me try this a different way: one of the design decisions driving
the use of SAX over DOM is that SAX is more memory efficient. However,
if you're caching SAX event streams this is no longer true (assuming
the SAX…
On Friday, Jul 18, 2003, at 07:58 America/Guayaquil, Berin Loritsch
wrote:
Geoff Howard wrote:
Well, since Peter's dragged me into this... ;)
Hunsberger, Peter wrote:
Stefano Mazzocchi <[EMAIL PROTECTED]> writes (and writes, and
writes,
and writes):
WARNING: this RT is long! and very dense,
sorry for some delay in the discussion
On Friday, Jul 18, 2003, at 07:28 America/Guayaquil, Geoff Howard wrote:
Well, since Peter's dragged me into this... ;)
Hunsberger, Peter wrote:
Stefano Mazzocchi <[EMAIL PROTECTED]> writes (and writes, and writes,
and writes):
WARNING: this RT is long! an
Berin Loritsch <[EMAIL PROTECTED]> comments:
> > Let me try this a different way: one of the design decisions driving
> > the use of SAX over DOM is that SAX is more memory efficient.
> > However, if you're caching SAX event streams this is no longer true
> > (assuming the SAX data…
Hunsberger, Peter wrote:
Stefano Mazzocchi <[EMAIL PROTECTED]> responds:
I think I really lost you here. What does it mean to "retain the
intermediate results of the parser"? what are you referring to? and
what kind of database do you envision in a push pipe? Sorry, I don't
get it but I sme
Stefano Mazzocchi <[EMAIL PROTECTED]> responds:
> >>
> >> There are three possible ways to generate a resource:
> >>
> >> 1) ---> cache? -(no)--> production --->
> >> 2) ---> cache? -(yes)-> valid? -(no)--> production --> storage -->
> >> 3) ---> cache? -(yes)-> valid? -(yes)-> lookup
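The three-path decision above can be sketched in code. This is a minimal illustration only; `ThreePathCache`, `CacheEntry`, `store()`, and `produce()` are hypothetical names, not Cocoon's actual pipeline API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the three ways to generate a resource described above.
class ThreePathCache {
    static class CacheEntry {
        final String content;
        final long validUntil; // timestamp after which the entry is stale
        CacheEntry(String content, long validUntil) {
            this.content = content;
            this.validUntil = validUntil;
        }
    }

    private final Map<String, CacheEntry> storage = new HashMap<>();

    // Seed or refresh an entry (the "storage" arrow in path 2).
    void store(String key, String content, long validUntil) {
        storage.put(key, new CacheEntry(content, validUntil));
    }

    String get(String key, long now) {
        CacheEntry entry = storage.get(key);
        if (entry == null) {
            // 1) cache? no -> production
            return produce(key);
        }
        if (now > entry.validUntil) {
            // 2) cache? yes, valid? no -> production -> storage
            String fresh = produce(key);
            store(key, fresh, now + 1000);
            return fresh;
        }
        // 3) cache? yes, valid? yes -> lookup
        return entry.content;
    }

    String produce(String key) {
        return "produced:" + key; // stand-in for running the real pipeline
    }
}
```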
Geoff Howard <[EMAIL PROTECTED]> writes:
> > At first it would seem that if there is no way to determine the
> > ergodic period of a fragment there is no reason to cache it! However,
> > there is an alternative method of using the cache (which Geoff Howard
> > has been working on) which…
Stefano Mazzocchi wrote:
> Exactly! I came to that same exact conclusion from this very point: I
> tried to come up with an optimal way to know which object to throw away
> when the cache is full, and I found out that we were heuristically
> trying to estimate which one was saving more resource…
Geoff Howard wrote:
Well, since Peter's dragged me into this... ;)
Hunsberger, Peter wrote:
Stefano Mazzocchi <[EMAIL PROTECTED]> writes (and writes, and writes,
and writes):
WARNING: this RT is long! and very dense, so I suggest you turn on
your printer.
Stefano, I started writing a response back about 5 min…
On Thursday, Jul 17, 2003, at 13:29 America/Guayaquil, Hunsberger,
Peter wrote:
Stefano Mazzocchi <[EMAIL PROTECTED]> writes (and writes, and writes,
and writes):
LOL!
WARNING: this RT is long! and very dense, so I suggest you turn on
your printer.
I don't have time to go through this in de…
On Thursday, Jul 17, 2003, at 12:48 America/Guayaquil, Berin Loritsch
wrote:
Berin Loritsch wrote:
The concept translates over to the adaptive cache approach at almost 1:1.
The actual individual cost sample is not important. It is the windowed
average that is important. For our cache a simple…
Stefano Mazzocchi wrote:
On Thursday, Jul 17, 2003, at 08:03 America/Guayaquil, Berin Loritsch
wrote:
[skipping nice parallel in digital audio]
The concept translates over to the adaptive cache approach at almost 1:1.
The actual individual cost sample is not important. It is the windowed
average…
On Thursday, Jul 17, 2003, at 08:03 America/Guayaquil, Berin Loritsch
wrote:
[skipping nice parallel in digital audio]
The concept translates over to the adaptive cache approach at almost
1:1.
The actual individual cost sample is not important. It is the
windowed average that is important.
We
Berin Loritsch <[EMAIL PROTECTED]> writes:
> For this reason, providing a generic cache that works on
> whole resources is a much more efficient use of time. For
> example, it would make my site run much more efficiently if I
> could use a cache for my database bound objects instead of
> cr
Jason Foster <[EMAIL PROTECTED]> asks:
> Unfortunately even with constant cost savings this is a variant of the
> Knapsack problem, which means it's NP-complete. Stefano's cache would
> then be a packing heuristic :)
I think you're correct for a fully loaded system (which is when the
al…
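Jason's framing can be made concrete: choosing which entries to keep in a full cache, given per-entry estimates of cost saved and size occupied, is exactly 0/1 knapsack, and the classic greedy "value density" heuristic (keep items by descending savedCost/size) is the kind of packing heuristic he alludes to. The class and field names below are illustrative assumptions, not anything from the thread.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Greedy knapsack-style packing heuristic for cache retention:
// keep items with the highest cost-saved-per-byte density that fit.
class PackingHeuristic {
    static class Item {
        final String key;
        final double savedCost; // estimated cost avoided on a cache hit
        final long size;        // bytes the entry occupies
        Item(String key, double savedCost, long size) {
            this.key = key;
            this.savedCost = savedCost;
            this.size = size;
        }
    }

    // Returns the keys to keep, greedily filling 'capacity' bytes
    // in order of descending savedCost/size density.
    static List<String> select(List<Item> items, long capacity) {
        List<Item> sorted = new ArrayList<>(items);
        sorted.sort(Comparator
                .comparingDouble((Item i) -> i.savedCost / i.size)
                .reversed());
        List<String> keep = new ArrayList<>();
        long used = 0;
        for (Item i : sorted) {
            if (used + i.size <= capacity) {
                keep.add(i.key);
                used += i.size;
            }
        }
        return keep;
    }
}
```

The greedy pass is O(n log n) and, unlike the exact NP-complete solution, cheap enough to run on every eviction cycle.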
Stefano Mazzocchi <[EMAIL PROTECTED]> writes (and writes, and writes,
and writes):
> WARNING: this RT is long! and very dense, so I suggest you turn on
> your printer.
I don't have time to go through this in detail yet, but I've had a
couple of fundamental questions that it might be usef…
Berin Loritsch wrote:
The concept translates over to the adaptive cache approach at almost 1:1.
The actual individual cost sample is not important. It is the windowed
average that is important. For our cache a simple mean would best suit
our problem space. The size of the window for our sample…
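The windowed mean Berin describes (individual samples don't matter, only a simple mean over the most recent N) can be sketched in a few lines. The class name and window-size choice here are illustrative, not from the thread.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simple mean over a sliding window of the most recent cost samples.
class WindowedMean {
    private final int window;
    private final Deque<Double> samples = new ArrayDeque<>();
    private double sum = 0.0;

    WindowedMean(int window) {
        this.window = window;
    }

    void add(double sample) {
        samples.addLast(sample);
        sum += sample;
        if (samples.size() > window) {
            sum -= samples.removeFirst(); // drop the oldest sample
        }
    }

    double mean() {
        return samples.isEmpty() ? 0.0 : sum / samples.size();
    }
}
```

Keeping a running sum makes each update O(1), so sampling cost stays negligible next to the pipeline work being measured.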
Stefano Mazzocchi wrote:
The functions as defined by your initial algorithm used the sum notation,
which means we needed to maintain a list of samples for the resource.
I.e. the cost functions identified were a function of the resource and
time of request. The more samples maintained, the less…
On Wednesday, Jul 16, 2003, at 14:40 America/Guayaquil, Berin Loritsch
wrote:
Stefano Mazzocchi wrote:
What you left as an exercise to the reader was the Cost function.
See my email to Marc for more insights.
And my response to that. I think I am starting to "get it". Having
an adaptive cost function with an adaptive cache can efficiently
protect against absolute const…
On Wednesday, Jul 16, 2003, at 15:21 America/Guayaquil, Berin Loritsch
wrote:
Stefano Mazzocchi wrote:
At the end, it might all be an academic exercise that doesn't work
IRL, I don't know. But in case it does, it would be a very different
system that would provide Cocoon with even more adv…
On Wednesday, Jul 16, 2003, at 15:00 America/Guayaquil, Jason Foster
wrote:
You'd get the same thing using a non-linear function instead of using
variable weightings.
If we're thinking non-linear, then how about considering fuzzy
parameters?
Hmmm. This would place even more configuration hassl
Berin Loritsch wrote:
Speaking of academic, anyone have a Java implementation of tanh(x)? The
java.lang.Math class only has tan(x) or atan(x), but no hyperbolic
function.
Trivial, see
http://functions.wolfram.com/ElementaryFunctions/Tanh/02/
(yeah, Wolfram Research is for mathematics on the net…
Hunsberger, Peter wrote:
Berin Loritsch <[EMAIL PROTECTED]> asks:
Speaking of academic, anyone have a Java implementation of
tanh(x)? The java.lang.Math class only has tan(x) or
atan(x), but no hyperbolic function.
Spoke too soon, a Google search digs it up:
http://www.bsdg.org/swag/MATH/0067.PAS.html
Berin Loritsch <[EMAIL PROTECTED]> writes:
> Here is a definition I found after googling:
>
> http://mathworld.wolfram.com/HyperbolicTangent.html
>
> I *think* that translates to:
>
> public double tanh( double z )
> {
>     return (Math.exp(2 * z) - 1) / (Math.exp(2 * z) + 1);
> }
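Berin's translation of the identity is correct: tanh(z) = (e^(2z) - 1) / (e^(2z) + 1). For modern readers, note that java.lang.Math.tanh has been built in since Java 5, which gives an easy way to check the hand-rolled version:

```java
// The MathWorld identity for tanh, as quoted above, made compilable.
class Tanh {
    static double tanh(double z) {
        double e2z = Math.exp(2 * z); // reuse e^(2z) rather than computing it twice
        return (e2z - 1) / (e2z + 1);
    }
}
```

One caveat with this formulation: for large negative z, e^(2z) underflows gracefully to 0 and the result tends to -1, but for very large positive z, e^(2z) overflows to infinity; the built-in Math.tanh handles the full range.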
Hunsberger, Peter wrote:
Berin Loritsch <[EMAIL PROTECTED]> asks:
Speaking of academic, anyone have a Java implementation of
tanh(x)? The java.lang.Math class only has tan(x) or
atan(x), but no hyperbolic function.
Can't recall the details off the top of my head, but I believe you just
need the natural log f…
math in the commons sandbox?
It should be in the utils class as a static method.
On Wed, Jul 16, 2003 at 04:21:24PM -0400, Berin Loritsch wrote:
> Stefano Mazzocchi wrote:
>
> > At the end, it might all be an academic exercise that doesn't work IRL,
> > I don't know. But in case it does, it wou…
Stefano Mazzocchi wrote:
At the end, it might all be an academic exercise that doesn't work IRL,
I don't know. But in case it does, it would be a very different system
that would provide Cocoon with even more advantages over other solutions
(or, at least, that's my goal).
thanks for taking th…
You'd get the same thing using a non-linear function instead of using
variable weightings.
If we're thinking non-linear, then how about considering fuzzy
parameters? This would be an opportunity to include some of Berin's
thoughts on adaptive rules and add in even more math :)
Is it just me,
Berin Loritsch <[EMAIL PROTECTED]> writes:
> Taking the adaptive cache to new levels, we can also explore adaptive
> cost functions--nothing will have to change from the overall
> architecture for us to do that.
>
> For example, if we have a requirement from our hosting service that we can…
On Wednesday, Jul 16, 2003, at 07:38 America/Guayaquil, Berin Loritsch
wrote:
My statements about the limited value (cost/benefit ratio) of partial
pipeline caching have to do with *my* experience. Maybe others have had
different experiences, but all my dynamic information was always encapsulat…
Tony Collen wrote:
Berin Loritsch wrote:
Stefano Mazzocchi wrote:
-snip stuff-
Hmmm, all this is a little over my head, but in my Algorithms class we
talked about amortized cost analysis.. is this the same thing (or close)?
Never having math classes over Trigonometry... I can't say, but it so
Berin Loritsch wrote:
Stefano Mazzocchi wrote:
-snip stuff-
Hmmm, all this is a little over my head, but in my Algorithms class we talked about amortized cost
analysis.. is this the same thing (or close)?
Tony
Stefano Mazzocchi wrote:
A pretty reasonable cost function could be
0.7 * time + 0.2 * memory + 0.1 * disk
that reflects the real-life costs of the hardware used to operate the
machine. In fact, the "cost function" is better the more it mimics real
life economical costs.
Why? well, the above
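Stefano's example cost function is straightforward to write down. In the sketch below, the inputs are assumed to be already normalized to comparable scales (how to normalize is not specified in the thread), and the 0.7/0.2/0.1 weights are taken directly from the quoted example:

```java
// Weighted resource cost as in the example above:
// cost = 0.7 * time + 0.2 * memory + 0.1 * disk
// The weights sum to 1, so the result stays on the same scale as the
// (normalized) inputs and costs of different resources are comparable.
class ResourceCost {
    static final double TIME_WEIGHT = 0.7;
    static final double MEMORY_WEIGHT = 0.2;
    static final double DISK_WEIGHT = 0.1;

    static double cost(double time, double memory, double disk) {
        return TIME_WEIGHT * time + MEMORY_WEIGHT * memory + DISK_WEIGHT * disk;
    }
}
```

The point of the weighting is that it can mirror the real economic cost of the hardware: on a machine where RAM is the scarce resource, the memory weight would rise at the expense of the others.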
On Wednesday, Jul 16, 2003, at 04:31 America/Guayaquil, Marc Portier
wrote:
Stefano Mazzocchi wrote:
Oh god, it seems that I really can't get thru these days.
It's probably time to shut up and let code speak for me.
--
Stefano.
I feel your pain, brother...
yep, language barriers can be more dif
On Tuesday, Jul 15, 2003, at 17:06 America/Guayaquil, Berin Loritsch
wrote:
Berin, a little perturbed by assumptions on assumptions about the
academic level of understanding of the fellow contributors. :(
Dude,
I'm sorry. I'm an asshole. I really am. Please excuse me for having
jumped on you. I…
Stefano Mazzocchi wrote:
On Tuesday, Jul 15, 2003, at 17:06 America/Guayaquil, Berin Loritsch wrote:
Dumbing it down a lot will help immensely. The problem is that it is
already 18+ pages of hard-to-read stuff. Then when we don't
understand it we get verbally flogged.
Berin, look: if you sai…
Stefano Mazzocchi wrote, On 16/07/2003 0.31:
...
let code speak for me.
YAY! :-D
@see
--
Nicola Ken Barozzi, that talks talks talks... and should write more
- verba volant, scripta manent -
(discussions get forgotten, just cod
Stefano Mazzocchi wrote, On 15/07/2003 22.45:
...
Please, don't reply without having understood what I wrote and if there
is something unclear, please ask, don't just assume.
That's what happened last time, when you got no responses ;-P
...
Stefano, a little depressed by the fact that what he con
Stefano Mazzocchi wrote:
Oh god, it seems that I really can't get thru these days.
It's probably time to shut up and let code speak for me.
--
Stefano.
I feel your pain, brother...
as for the topic: printed out already, but somehow all my slack
time gets eaten up at the moment so I haven't pu
On Tuesday, Jul 15, 2003, at 17:06 America/Guayaquil, Berin Loritsch
wrote:
Stefano Mazzocchi wrote:
Stefano, a little depressed by the fact that what he considers the
best idea of his entire life is not even barely understood :-(
Not all of us are super-geniuses.
Guess what, I'm not either.
Stefano Mazzocchi wrote:
On Monday, Jul 14, 2003, at 11:29 America/Guayaquil, Berin Loritsch wrote:
We would have to apply a set of rules that make sense in this instance:
* If resource is already cached, use cached resource.
* If current system load is too great, extend ergodic period.
* If pro
On Monday, Jul 14, 2003, at 11:29 America/Guayaquil, Berin Loritsch
wrote:
The basic underlying issue here is that we want a smart and adaptive
cache.
Yep. Basically, this comes from the fact that I would not know whether
caching a particular resource fragment makes things faster or not.
[snipp
Christoph Gaffga wrote:
Berin Loritsch wrote:
> As to the good enough vs. perfect issue, caching partial pipelines (i.e.
> the results of a generator, each transformer, and the final result) will
> prove to be an inadequate way to improve system performance.
I think caching parts of a pipeline is a very smart way of opt…
Stefano Mazzocchi wrote:
NOTE: this is a refactoring of an email I wrote 2 1/2 years ago. The
original can be found here:
http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=98205774411049&w=2
I re-edited the content to fit the current state of affairs and I'm
resending hoping to trigger some discussion that didn't happen…
56 matches