On Thursday 21 August 2003 21:40, Brent Dax wrote:
# we're already running with a faster opcode dispatch
Man, I wish I had the time to keep up with Parrot development. Though, as
others have pointed out, the core architecture is somewhat solidified by this
point, I thought I'd put in my
On Fri, 7 Dec 2001, Andy Dougherty wrote:
On Fri, 7 Dec 2001, Bryan C. Warnock wrote:
On Friday 07 December 2001 08:43 am, Andy Dougherty wrote:
Funny you should mention that, because Perl's Configure does things in
order determined by 'Dependency-ish rules, a la make'. Configure is
While your point is taken, it's hardly considered C++ anymore. Many
C-compilers have adopted many such useful features.
On Wed, 28 Nov 2001, Andy Dougherty wrote:
diff -r -u parrot-current/classes/perlnum.pmc parrot-andy/classes/perlnum.pmc
void set_integer (PMC* value) {
-//
I've done a bunch of reading, and though I'm not finished, I'm starting to
look towards the following overall algorithm based on the below specified
assumptions. I'm not _necessarily_ looking for comments at this point,
because I haven't finished evaluating the specifics of several algorithms,
On Fri, 19 Oct 2001, Dan Sugalski wrote:
At 01:24 PM 10/19/2001 -0400, Gregor N. Purdy wrote:
James --
Should we have bsr(i|ic, i|ic), that jumps to $1, with the return
address below the $2 arguments? Similarly, should we have ret(i|ic),
that rotates the return address out
On Fri, 19 Oct 2001, Gregor N. Purdy wrote:
Dan --
FWIW, I'd rather not dedicate registers to special uses at the Parrot
level. Jako reserves [INPS]0 for temporaries, but that is at its
discretion, not dictated by Parrot (and I like it that way :). I was
wishing for separate data and
I'm about to start welding in support for multiple interpreters, running
both serially and simultaneously, into Parrot. (With provisions for
starting and coordinating interpreters from other interpreters)
This is just a heads-up, since it probably means platforms without POSIX
thread
At 08:48 PM 10/11/2001 -0400, Ryan O'Neil wrote:
I was playing with Parrot and wanted a basic random numbers
implementation. Just in case anyone else wants it too, here are the
appropriate diffs and test file. It seemed logical for rand to return a
real number between 0 and 1 instead of
On Fri, 12 Oct 2001, Ritz Daniel wrote:
I fixed that, but there's only a jump_i, no jump_ic...
jump Ix now jumps to the absolute address held in a register Ix.
Absolute means
start of the bytecode + Ix.
It can't mean that. This:
set I0, 0
jump I0
should coredump.
On Fri, 12 Oct 2001, Dan Sugalski wrote:
At 01:30 PM 10/12/2001 -0400, Michael Maraist wrote:
I'm of the opinion that you shouldn't just be able to jump into
another code-segment.
[Snip]
And thus be prevented from changing context; that would be relegated
to a subroutine invocation
On Tue, 9 Oct 2001, Dan Sugalski wrote:
At 01:20 PM 10/9/2001 +0200, Paolo Molaro wrote:
On 10/07/01 Bryan C. Warnock wrote:
while (*pc) {
switch (*pc) {
}
}
If anyone wants an official ruling...
DO_OP can contain any code you like as long as it:
*) Some version
Currently, instead of pushing the contents of fixed registers onto the
stack, the registers themselves float on top of the stack. That doesn't
preserve register contents on a push_x, as the registers are now in a new
memory location. After toying with fixed registers per interpreter, I
On Sat, 6 Oct 2001, Simon Cozens wrote:
On Sat, Oct 06, 2001 at 09:01:34AM -0500, Gibbs Tanton - tgibbs wrote:
Also, how will adds of different types be handled? In the above, if pmc2 is
an int and pmc3 is a float, we're going to have to know that and do a switch
or something to convert
On Sat, 6 Oct 2001, Michael Maraist wrote:
My question at this point is if the PMCs are polymorphic like Perl 5's,
or if there is an explicit type tag. Polymorphism can make for very
large vtable sub-arrays. (int, int_float, int_float_string,
int_string, etc).
If PMC-types are bit-masked
Linux/Athlon/gcc.
Why does changing this: (DO_OP loop partially inlined)
while (pc >= code_start && pc < code_end && *pc) {
  do {
    x = z->opcode_funcs; \
    y = x[*w]; \
    w = (y)(w,z); \
  } while (0);
}
to
x = z->opcode_funcs;
while (pc >= code_start && pc < code_end
On Sun, 30 Sep 2001, Hong Zhang wrote:
How does python handle MT?
Honestly? Really, really badly, at least from a performance point of view.
There's a single global lock and anything that might affect shared state
anywhere grabs it.
One way to reduce sync overhead is to make more
I generally divide signals into two groups:
*) Messages from outside (i.e. SIGHUP)
*) Indicators of Horrific Failure (i.e. SIGBUS)
Generally speaking, parrot should probably just up and die for the first
type, and turn the second into events.
I don't know. SIGHUP is useful to
or have entered a mutex,
If they're holding a mutex over a function call without a
_really_ good reason, it's their own fault.
General perl6 code is not going to be able to prevent someone from
calling code that in turn calls XS code. Heck, most of what you do in
perl involves some sort of
and a call to the API would be:
char *label = gettext("This feels strange\n");
Does your idea allow for:
int msgid = txtToMsgid("This feels strange\n");
char *label = msgidToRes(msgid);
In addition to the above, since this affords compile-time optimizations?
I'm not following this thread
All --
I've created a varargs-ish example by making a new op, print_s_v.
This is pretty rough, and I haven't updated the assembler, but it
seems to work.
Um.. I *have* updated the assembler. It's the *dis*assembler I haven't
updated. This is what happens:
* *_v ops list their number
We're talking bytecode. That will indeed be a case of huge arrays of
tightly packed integers.
For bytecode, it's not a big problem, certainly not one I'm worried about.
Machines that want 64-bit ints have, likely speaking, more than enough
memory to handle the larger bytecode.
I'm more
I have a suggestion for allowing parrot implementations to execute
code more efficiently. Add an instruction or other annotation which
denotes what registers are live at some point in the code. The
Does it have to be in the instruction stream to be useful? Why not just
be part of the
On Mon, 24 Sep 2001, Buggs wrote:
On Monday 24 September 2001 03:27, Dan Sugalski wrote:
At 01:47 AM 9/24/2001 +0100, Simon Cozens wrote:
http://astray.com/mandlebrot.pasm
Leon, you're a sick, sick man.
Okay, I think that means we need to weld in bitmap handling opcodes into
the
DS At 12:29 AM 9/21/2001 +0200, Bart Lateur wrote:
Horribly wasteful of memory, definitely, and the final allocation system
will do things better, but this is OK to start.
So to stop it wasting memory, subtract 1 first and add it again later.
DS Nah, it'll still waste
I'm just curious, is there a plan for how closures will work in Parrot? I
think that closures are one of the coolest Perl 5 features around (despite
their memory leak issues :-), and I'd hate to see them go away.
I doubt that there's any limitation. In Java, all they had to do was
supply a
Odds are you'll get per-op event checking if you enable debugging, since
the debugging oploop will really be a generic check event every op loop
that happens to have the pending debugging event bit permanently set.
Dunno whether we want to force this at compile time or consider some way to
Then what about a win/win? We could make the event checking style a
compile-time option.
Odds are you'll get per-op event checking if you enable debugging, since
the debugging oploop will really be a generic check event every op loop
that happens to have the pending debugging event bit
GNU does offer the gettext tools library for just such a purpose. I don't
know how it will translate to the various platforms however, and it likely
is a major overkill for what we are trying to do.
http://www.gnu.org/manual/gettext/html_mono/gettext.html#SEC2 - Purpose
It might make sense
Is it possible for the ops to handle a variable number of arguments? What I
have in mind:
print I1,,,N2,\n
This should be done by create array opcode plus print array opcode.
[1, 2, 3, 4, 5]
I have a minor issue with a proliferation of createArray. In perl5 we
used the Stack for just
Just curious, do we need a dedicated zero register and sink register?
The zero register always reads zero, and cannot be written. The sink
register cannot be read, and writes to it are ignored.
I brain-stormed this idea a while ago, and here's what I came up with.
We're not RISC, so we
I have a minor issue with a proliferation of createArray. In perl5 we
used the Stack for just about everything minus physically setting @x =
(1,2,3). The creation of a dynamic array is a memory hog.
Less of a hog in many ways than using a stack. Worth the times when it's not.
I don't
On Sun, 23 Sep 2001, Bart Lateur wrote:
On Thu, 13 Sep 2001 06:27:27 +0300 [ooh I'm far behind on these lists],
Jarkko Hietaniemi wrote:
I always see this claim (why would you use 64 bits unless you really
need them big, they must be such a waste) being bandied around, without
much hard
On Thu, Sep 20, 2001 at 11:11:42AM -0400, Dan Sugalski wrote:
Actually the ops=C conversion was conceived to do exactly what's being
done now--to abstract out the body of the opcodes so that they could be
turned into a switch, or turned into generated machine code, or TIL'd. If
you're
Question. It seems to me that the current do-loop:
while (code >= x && code < y && *code) { DO_OP }
Is fail-fast and succeed-slow. I know it has a brother:
while(*code), but this could lead to seg-faults, especially if we allow
dynamically modified parsers / compilers.
The first method has an even
On Fri, Sep 21, 2001 at 02:24:43PM -0400, Dan Sugalski wrote:
Doing this by hand with -O3, you can see a speedup of around a factor of 45
over an unoptimised runops loop, so it's definitely worth doing in some
cases...
Cool! Parrot JIT!
But that tells us *just* how time-consuming our
On Thu, 20 Sep 2001, Brent Dax wrote:
Damien Neil:
# RETURN(0); (written exactly like that, no variation permitted)
# is a special case, and terminates the runops loop. The only op
# which uses this is end, and it doesn't actually ever execute.
# Personally, I feel that this special case
Ordered bytecode
Bytecode should be structured in such a way that reading and executing
it can be parallelised.
Are you suggesting a threaded VM? I know that the core is being rewritten,
so it's a possibility. If this is the case, then you'll want to reference
some of the other RFCs
- Original Message -
From: "Perl6 RFC Librarian" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, August 29, 2000 10:20 PM
Subject: RFC 172 (v1) Precompiled Perl scripts.
This and other RFCs are available on the web at
http://dev.perl.org/rfc/
=head1
I don't understand this desire to not want anything to change.
You misread.
I sympathise. There are definite goals and focuses that each language is
built around. Change these too much, and you have a different language,
while at the same time alienating the people that chose that language
I would actually further this sort of activity.
I admire micro-kernel-type systems. C and Java give you no functions out of
the box. Keywords are just that, keywords. I believe python is like this
as well. The idea being that everything has to come from a module.. This
limits how much a new