Hi Geoff,
Thank you for the stack, it is very instructive.
Kind regards
Bernd
Geoff Canyon wrote:
I did something like this:
https://dl.dropboxusercontent.com/u/41182876/foreach.livecode
It shows a field and parses through the field, highlighting each character
in blue as it considers
On Tue, Feb 17, 2015 at 4:33 PM, Alex Tweedly a...@tweedly.net wrote:
Though it would be kinda cool to do a quick LC simulation showing visible
animation of the variable and index as it goes through the loop.
I did something like this:
Mae inglesh nawt so goot. I didn't include the transaction code (or the
creating the database in memory instead of on disk code) in the examples
because it's the same in both and the question is why is the one sample so
much slower than the other.
On Tue, Feb 17, 2015 at 11:06 PM, Dr. Hawkins
That's great - thank you for that !
-- Alex.
On 18/02/2015 20:39, Geoff Canyon wrote:
On Tue, Feb 17, 2015 at 4:33 PM, Alex Tweedly a...@tweedly.net wrote:
Though it would be kinda cool to do a quick LC simulation showing visible
animation of the variable and index as it goes through the
IOC nvm. Looks like that is how revExecuteSQL works. I can see now why I was
befuddled trying to get SQL to work in LC using their built in functions.
Bob S
On Feb 17, 2015, at 12:11 , Mike Kerner
mikeker...@roadrunner.com wrote:
Peter (Brett),
Help me with
On 2/17/2015 4:29 PM, Alex Tweedly wrote:
I am *absolutely* not recommending that anyone should modify the
variable in question within the loop - even if that seems to work in
some cases, it is known to be dangerous, and so should just NOT BE DONE
in real code. But doing it in limited cases
On Tue, Feb 17, 2015 at 2:20 PM, Mike Kerner mikeker...@roadrunner.com
wrote:
I didn't include everything, including the transaction code, or opening the
database in memory instead of on a disk. When we were messing with this,
the piece that became the (58 minute) bottleneck was inside the
On Tue, Feb 17, 2015 at 11:55 AM, Dr. Hawkins doch...@gmail.com wrote:
Won't this be orders of magnitude slower?
Yes.
Given that you have access to lines, items, and words, if possible it would
be better to set the outer loop to work on lines, and then do whatever you
like with items within
On Mon, Feb 16, 2015 at 6:23 PM, Peter M. Brigham pmb...@gmail.com wrote:
No need to change the itemdel in a loop, you can use this instead:
put getItem(pList, pIndex, pDelim) into tItem
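The idea behind a getItem-style helper — pull one item out by index and delimiter without ever touching the loop's itemDelimiter — can be sketched in Python (a sketch only; the function name, 1-based indexing, and defaults just mirror the LiveCode helper, this is not Peter's actual code):

```python
def get_item(p_list, p_index, p_delim=","):
    """Return the pIndex-th item of pList (1-based, like LiveCode),
    using pDelim as the item delimiter; empty string if out of range."""
    items = p_list.split(p_delim)
    if 1 <= p_index <= len(items):
        return items[p_index - 1]
    return ""
```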
Won't this be orders of magnitude slower?
I can see the use in the general case, but I'm usually in
On Tue, Feb 17, 2015 at 9:01 AM, Geoff Canyon gcan...@gmail.com wrote:
Given that you have access to lines, items, and words, if possible it would
be better to set the outer loop to work on lines, and then do whatever you
like with items within the loop.
Most of the time, it is enough to set
Alex,
Thanks for that.
Disbelieving soul that I am (sorry), I puzzled for a while over the results of
these two versions.
on mouseUp
put empty into msg
put "abc" & CR & "def" & CR & "ghi" & CR into t
repeat for each line L in t
put the number of chars in L && L & CR after msg
end repeat
end mouseUp
Then past posts are incorrect on this matter. It was explicitly stated that the
actual memory holding the variable data was “indexed” and that altering the
variable data could relocate the variable in memory resulting in invalid data.
I have seen this myself. The data returned is garbled
The second is slow, because the current position is not tracked. So, for
line 1, it's easy. You grab line 1. For line 2, you count the lines until
you get to line 2. Same for line 3. Or think of it this way: if you have
100 lines, and you are grabbing all of them using the second method, The
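That rescanning is what makes indexed chunk access quadratic. A Python sketch of the two access patterns (illustrative only, not anyone's posted code — the "indexed" version deliberately re-splits the text on every pass, the way "line i of t" rescans from the start):

```python
def grab_lines_indexed(text):
    # mimics "repeat with i = 1 to n ... get line i of t":
    # every access rescans the text from the top, so total work is O(n^2)
    n = text.count("\n") + 1
    out = []
    for i in range(1, n + 1):
        out.append(text.split("\n")[i - 1])  # full rescan each time
    return out

def grab_lines_each(text):
    # mimics "repeat for each line L in t": one forward pass, O(n)
    return [line for line in text.split("\n")]
```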
Yes, speed will be an issue if the data is thousands of lines. I'm usually
dealing with less than a thousand iterations, and taking a half-second or so to
do the job is nearly unnoticeable for the user.
-- Peter
Peter M. Brigham
pmb...@gmail.com
http://home.comcast.net/~pmbrig
On Feb 17,
Peter (Brett),
Help me with the chunking piece, then.
The following is very fast:
put "INSERT INTO sortTest VALUES (:1)" into tSQL
repeat for each line tLine in tDataSet
revExecuteSQL dbid, tSQL, "tLine"
end repeat
The following is very slow:
put "INSERT INTO sortTest VALUES (:1)" into tSQL
put the
It just seems odd to me that the chunking is so slow when done the second
way. Just for the heck of it, I did a SPLIT so as to work on an array, and
sure enough, it is just a little slower than the first technique (even
though there is the overhead involved with splitting the container into an
On 2/17/2015 2:11 PM, Mike Kerner wrote:
Help me with the chunking piece, then.
Put 100 apples in a box.
repeat with x = 1 to 100
pick up one apple
drop it back in the box
pick up one apple
pick up a second apple
drop them both back into the box
pick up one apple
pick up a
On Tue, Feb 17, 2015 at 12:11 PM, Mike Kerner mikeker...@roadrunner.com
wrote:
The following is very fast:
put "INSERT INTO sortTest VALUES (:1)" into tSQL
repeat for each line tLine in tDataSet
revExecuteSQL dbid, tSQL, "tLine"
end repeat
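For comparison, the same pattern — one prepared INSERT, executed once per line, inside a single transaction — in Python's sqlite3 (a sketch; the table name comes from the thread, but the `?` placeholder style and sample data are mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")            # in-memory DB, as in the thread
con.execute("CREATE TABLE sortTest (val TEXT)")

data_set = "line 3\nline 1\nline 2"
with con:                                    # one transaction for all inserts
    for line in data_set.split("\n"):
        con.execute("INSERT INTO sortTest VALUES (?)", (line,))

count = con.execute("SELECT count(*) FROM sortTest").fetchone()[0]
```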
Faster still would probably be to build a command in
I didn't include everything, including the transaction code, or opening the
database in memory instead of on a disk. When we were messing with this,
the piece that became the (58 minute) bottleneck was inside the loop, or,
according to Peter, the way the chunks are accessed in the loop.
On Tue,
On 17/02/2015 15:43, Bob Sneidar wrote:
Then past posts are incorrect on this matter. It was explicitly stated that the
actual memory holding the variable data was “indexed” and that altering the
variable data could relocate the variable in memory resulting in invalid data.
I have seen this
On 17/02/2015 13:25, Dave Cragg wrote:
Alex,
Thanks for that.
Disbelieving soul that I am (sorry), I puzzled for a while over the results of
these two versions.
...
I had to use a pencil and paper to track what was in t and what the engine was referring to after
the x and y inserts. Then it
According to LC, we're dealing with somewhere between n log n and n on the
one side and n^2 on the other, which is about as far apart as we can get.
On Mon, Feb 16, 2015 at 11:09 AM, Bob Sneidar bobsnei...@iotecdigital.com
wrote:
Ah. Because the keys of an array are effectively a system of
Ah. Because the keys of an array are effectively a system of pointers in
themselves. It might actually be slightly quicker, since the array has already
been created, while the For Each will have to create the pointers on the fly at
the start of the loop. I’d be curious to find out how much time
The For Each form is also quite handy as it eliminates the need for stuffing
some variables in the process. As mentioned in past threads, the one downside
is that you *MUST* not change the contents of the source data (and I think the
each variable as well) as doing so will corrupt what ends up
On 15/02/2015 02:28, Mike Kerner wrote:
I just read the dictionary entry (again), and I would say that it is not at
all clear that there would appear to be an ENORMOUS difference. For
starters, you have to read way down to find the mention, it isn't
really called out with a NOTE or
Hi Peter,
you also might want to check your reverse algorithm on 7.x.x
in my testing, Jacque's initial post with little tweaks is as fast as your
code, and faster on 7.x.x (tested on 7.0.2 rc2). In my testing it took only
60% of the time compared to yours on 7.x.x.
Of course Alex Tweedly's ingenious
On Feb 16, 2015, at 1:58 PM, BNig wrote:
Hi Peter,
you also might want to check your reverse algorithm on 7.x.x
Well, I'm still running 5.5.1, since I have a more than full-time job already
taking care of patients and I don't have time to debug 38,000 lines of script
for my practice
On 2/16/2015 3:02 PM, Mike Kerner wrote:
At least
for the case of your squares of integers, I would expect that there is a
crossover where it's going to be faster to build the list, first. I don't
know if that is at 100, 1000, or some bigger number, but n vs. n^2 is a
very big difference.
If
On 16/02/2015 21:15, Peter M. Brigham wrote:
As I now understand it, the really big difference is between the repeat for
n = 1 to… form on the one hand, and the repeat for each… and repeat n times
forms. The latter 2 are not that different, but when the engine has to
count lines/items every
As I now understand it, the really big difference is between the repeat for n =
1 to… form on the one hand, and the repeat for each… and repeat n times forms.
The latter 2 are not that different, but when the engine has to count
lines/items every time, it slows things down a very significant
On 2015-02-16 22:02, Mike Kerner wrote:
I don't think I follow on the first part. Edinburgh says that the
complexity of the two traversals are dramatically different. repeat for
each is somewhere between n log n and n, and repeat with is n^2. At least
for the case of your squares of integers,
I wrote:
I referenced the list and turned the function into a command, saves memory
(possibly speed?) on very large lists.
I just realized that no memory is saved this way because we are building a new
duplicate (reversed) list within the command. So referencing the list has no
advantage.
So, Alex's way of doing it is the fastest pure-LC way (I didn't get into using
the database methods). I referenced the list and turned the function into a
command, saves memory (possibly speed?) on very large lists.
on reverseSort @pList, pDelim
-- reverse-sorts an arbitrary list
--
On 16/02/2015 16:06, Bob Sneidar wrote:
The For Each form is also quite handy as it eliminates the need for stuffing
some variables in the process. As mentioned in past threads, the one downside
is that you *MUST* not change the contents of the source data (and I think the
each variable as
Hi Jacque,
J. Landman Gay wrote
This is getting to be just too much fun. I think Geoff was right, we
don't need to initialize sNum because we don't care if we go into
negative territory. And since the handler isn't actually using the value
of the line, we can omit passing it by removing
Right, just saw that in the dictionary. But I'm still confused on why it
results in less messages. Is it because the engine checks to see if there
is a private handler before sending a message along the message path?
As Richard and Mark mentioned, seems like any handlers in an object's
script
That's very interesting. I've never used private since I had the
impression that the only thing it did was stop the handler from being
called outside of the script it appears in.
But it seems there is a performance benefit too. Why would that be, I
wonder. I understand that the engine only
Richard-
Monday, February 16, 2015, 3:55:54 PM, you wrote:
I think my new habit is to declare everything as private unless I know I
need it available to other scripts.
Yeah, that's my normal MO anyway.
And for just that reason.
--
-Mark Wieder
ahsoftw...@gmail.com
This communication may
Hi Pete,
Peter Haworth wrote
That's very interesting. I've never used private since I had the
impression that the only thing it did was stop the handler from being
called outside of the script it appears in.
But it seems there is a performance benefit too. Why would that be, I
wonder.
Bernd wrote:
funny, nobody likes private
It saves up to 20%, all else equal.
On about 44,000 lines, 5 items on each line:
with private 86 ms, without 106 ms on LC 6.7.2;
with private roughly 180 ms, without 200 ms on LC 7.0.2 rc2 (times vary much
more on LC 7.0.2 rc2 than on LC 6.7.2).
Ooooh, good
You were right first time
if you use a reference, then there is no copy created when you do the
call; and then you build up the output list.
without the reference, there is an initial copy and then you
additionally build the output list.
So using a reference parameter saves the
On February 16, 2015 5:04:40 PM CST, BNig bernd.niggem...@uni-wh.de wrote:
funny, nobody likes private
It saves up to 20%, all else equal.
On about 44000 lines 5 items on each line
with private 86 ms, without 106 ms LC 6.7.2
with private roughly 180 ms, without 200 ms on LC 7.0.2 rc2 (times
On Mon, Feb 16, 2015 at 3:08 PM, Alex Tweedly a...@tweedly.net wrote:
That's not quite correct. It doesn't do a single initial complete scan of
the whole variable and keep all the pointers. What it does is (more like)
keep track of how far it has currently processed, and then when it needs
Right. Back to the original version, a command, with referenced list.
-- Peter
On Feb 16, 2015, at 7:19 PM, Alex Tweedly wrote:
You were right first time
if you use a reference, then there is no copy created when you do
On Feb 16, 2015, at 9:05 PM, Dr. Hawkins wrote:
On Mon, Feb 16, 2015 at 3:08 PM, Alex Tweedly a...@tweedly.net wrote:
That's not quite correct. It doesn't do a single initial complete scan of
the whole variable and keep all the pointers. What it does is (more like)
keep track of how far it
It's important to note that the efficiency is all/mostly in the function
call, not in the execution of the function itself. So for really short
functions that will be called many times, this is significant. For longer
functions, the difference all but vanishes:
on mouseUp
put 1000 into n
--
Good point. Besides being a good general habit, it would be especially
important to make recursive functions private.
.Jerry
On Feb 16, 2015, at 7:18 PM, Geoff Canyon gcan...@gmail.com wrote:
It's important to note that the efficiency is all/mostly in the function
call, not in the execution
RichardG wrote:
I would imagine that a handler in the same
script as the caller would be faster than having it just about any other
place, but to limit its scope trims the execution time by a surprising
amount.
Whoda thunk!
I think my new habit is to declare everything as private
Hey, silly question, but is there a way to do this..
sort lines of tWorking numeric by (--sCount) so that there is no actual
function call? (an in position, decrement sort of thing)
Just curious.
On Mon, Feb 16, 2015 at 8:25 PM, Jerry Jensen j...@jhj.com wrote:
Good point. Besides being
I don't think I follow on the first part. Edinburgh says that the
complexity of the two traversals are dramatically different. repeat for
each is somewhere between n log n and n, and repeat with is n^2. At least
for the case of your squares of integers, I would expect that there is a
crossover
This is getting to be just too much fun. I think Geoff was right, we
don't need to initialize sNum because we don't care if we go into
negative territory. And since the handler isn't actually using the value
of the line, we can omit passing it by removing each. So it could be
even
Peter,
I don’t follow. If I change the repeat portion of your code to use repeat n
times as below, the speed doesn’t change. And the speed scales linearly in both
cases if the size of the data set is increased.
put the keys of pList into indexList
put the number of lines of indexList into i
My mistake. You are correct that the two are equally efficient. It was an error
in my timing test handler.
-- Peter
On Feb 15, 2015, at 7:56 AM, Dave Cragg wrote:
Peter,
I don’t follow. If I change the repeat portion of
Pete, is that a typo, or did you mean to have a semicolon instead of a
colon in front of memory? Does ;memory: work, too, or just :memory:?
AND HOLY CRAP, yes, Pete, you're right, you were doing 100k records, where
the other example was only doing 10k. So doing 100k records with REPEAT
WITH
Typo, should be :memory:.
On Sat Feb 14 2015 at 2:01:45 PM Mike Kerner mikeker...@roadrunner.com
wrote:
Pete, is that a typo, or did you mean to have a semicolon instead of a
colon in front of memory? Does ;memory: work, too, or just :memory:?
AND HOLY CRAP, yes, Pete, you're right, you
Mike Kerner wrote:
...
REPEAT FOR is .129 seconds, and REPEAT WITH is TWENTY SEVEN THOUSAND
TIMES SLOWER (for this operation)??!?!?!?!?!???
Hey, Pete, That's a common technique...WHAT? If it's so common,
and all of this is common knowledge, then how come it isn't
documented, anywhere
The
Richard,
I just read the dictionary entry (again), and I would say that it is not at
all clear that there would appear to be an ENORMOUS difference. For
starters, you have to read way down to find the mention, it isn't
really called out with a NOTE or anything else to draw one's attention
Harking back to the original discussion on reversing a list -- still the
subject of this thread, here's the original example as I saved it in my library.
function reverseSort pList, pDelim
-- reverse sorts an arbitrary list
--ie, item/line -1 - item/line 1, item/line -2 - item/line 2,
I must have missed a thread, somewhere. That would be the thread on how LC
handles loops.
To recap, doing this sort using an sqlite database (insert the values into
a table, then sort the table), was taking me almost 40 seconds. Then Pete
chimed in and had it working in a couple hundred
We both used in memory databases. The filename is ;memory:
On Fri Feb 13 2015 at 2:46:45 PM Bob Sneidar bobsnei...@iotecdigital.com
wrote:
He may also have been using a memory resident database. That is what I
suggested at the first. To do this, use “memory” as the file name.
Bob S
On
He may also have been using a memory resident database. That is what I
suggested at the first. To do this, use “memory” as the file name.
Bob S
On Feb 13, 2015, at 12:40 , Mike Kerner
mikeker...@roadrunner.com wrote:
I must have missed a thread, somewhere.
NO! SORRY! My mixup in the last post - the REPEAT FOR is faster (repeat
for each line...)
On Fri, Feb 13, 2015 at 4:02 PM, Mike Kerner mikeker...@roadrunner.com
wrote:
No, no, it isn't 100,000 lines, it's only 10,000 lines. 0.129 vs 39.0.
So then, just for the heck of it, because if we do
Hi Mike,
Glad you figured out the reason for the speed difference.
Not sure if there's a single thread anywhere that talks about repeat loops
but repeat for each can be orders of magnitude faster than repeat with as
you've discovered. In this case there were about 100k lines in the data
and I think
No, no, it isn't 100,000 lines, it's only 10,000 lines. 0.129 vs 39.0.
So then, just for the heck of it, because if we do the repeat for, we
gain some additional information (the line number we're on), I added put 0
into i before the loop and then add 1 to i inside the loop, at the top.
We
Right, that's a common technique to avoid the timing problem when you need
a numeric index for some reason.
My stack is 100,000 lines Mike, actually 99,913. You're probably getting
mixed up with the stack name which includes 1 because I started
testing that way then increased it to 100,000
Oh thanks. That would have screwed me up if I had tried to use “memory”.
Bob S
On Feb 13, 2015, at 15:34 , Peter Haworth
p...@lcsql.com wrote:
We both used in memory databases. The filename is ;memory:
On 2/12/2015 12:54 PM, Peter Haworth wrote:
Let's say for example you had
a list of 10,000 customers and their email addresses. Most customers do
have an email, a few don't and it just so happens that your first customer
Aardvark, and last customer, Zoe, don't have email. If you read just the
I'm feeling a delimiter argument coming on
On Thu, Feb 12, 2015 at 2:04 PM, J. Landman Gay jac...@hyperactivesw.com
wrote:
On 2/12/2015 12:54 PM, Peter Haworth wrote:
Let's say for example you had
a list of 10,000 customers and their email addresses. Most customers do
have an email, a few
On Wed, Feb 11, 2015 at 6:18 PM, Kay C Lan lan.kc.macm...@gmail.com wrote:
I did a similar test to you using a Valentina DB with and without unicode.
Specifically I used a UTF8 db so the unicode test data had to be passed
through LC's textDecode(dboutput, "utf8") to get the correct results; which
On Fri, Feb 13, 2015 at 3:04 AM, J. Landman Gay jac...@hyperactivesw.com
wrote:
On 2/12/2015 12:54 PM, Peter Haworth wrote:
I haven't run any of the LC scripts to do this but if that's true, then
they don't achieve the original objective of reversing the list.
I don't think it's true.
Hi Mike,
I doubt the sqlite approach will be faster than the other algorithms, but
36 seconds is still way too long to insert 10,000 lines. Could you post
your code?
Pete
lcSQL Software
On Feb 11, 2015 4:52 AM, Mike Kerner mikeker...@roadrunner.com wrote:
With sqlite on my box, doing the
I just tried this with an in memory SQLite DB using 6.6.2 and 7.0.1.
Results were pretty much identical in both cases. Here they are:
Time to open in-memory db and create a table with one text column: zero
milliseconds
Time to load approx 100k rows into the db: 900 milliseconds
Time to
On Thu, Feb 12, 2015 at 8:16 AM, Peter Haworth p...@lcsql.com wrote:
Oh yes, and doesn't matter whether you're using LC 7 or something prior to
that, although that problem appears to be a bug that is already being
fixed.
I think that statement requires a qualifier - 'if you are not dealing
With sqlite on my box, doing the inserts via a transction took the time
down to 36 seconds from 64, still not good enough.
On Tue, Feb 10, 2015 at 11:58 AM, Mike Bonner bonnm...@gmail.com wrote:
You can find an example that uses begin transaction, and commit with a
repeat loop here:
Ali Lloyd wrote:
6.7.1
revers(ta) took 427 ms
qrevers(ta) took 6 ms
krevers(ta) took 412 ms
7.0.2 + bugfix
revers(ta) took 142 ms
qrevers(ta) took 32 ms
krevers(ta) took 258 ms
Very exciting progress, Ali.
Which v7 build is that? Is it one we have or one coming up?
--
Richard
Mike K, are you wrapping the inserts in a begin/commit block? It makes a
HUGE difference in speed. (Otherwise, each is a separate transaction with
all the overhead; if wrapped, it's a single transaction, and so much
faster.)
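The begin/commit point translates to any SQLite binding. A Python sketch of the two loading patterns (illustrative, not Mike B's code; on an on-disk database every autocommitted INSERT is its own transaction with its own sync, which is where the huge slowdown comes from — `:memory:` is used here only to keep the sketch self-contained):

```python
import sqlite3

rows = [("row %d" % i,) for i in range(5000)]

def load(wrap_in_transaction):
    con = sqlite3.connect(":memory:")
    con.isolation_level = None          # autocommit; we manage BEGIN/COMMIT
    con.execute("CREATE TABLE t (x TEXT)")
    if wrap_in_transaction:
        con.execute("BEGIN")            # one transaction for the whole loop
    for r in rows:
        con.execute("INSERT INTO t VALUES (?)", r)
    if wrap_in_transaction:
        con.execute("COMMIT")
    n = con.execute("SELECT count(*) FROM t").fetchone()[0]
    con.close()
    return n
```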
On Tue, Feb 10, 2015 at 7:36 AM, Geoff Canyon gcan...@gmail.com wrote:
Which v7 build is that? Is it one we have or one coming up?
I've just submitted the pull request so once it's reviewed it will be
merged and appear in the next build, so hopefully 7.0.2 RC 3.
Does LC 7 now do character references in constant (albeit a bit slower)
time? Or linear? Or...
In
Thanks Ali!
On Tue, Feb 10, 2015 at 8:10 AM, Mike Bonner bonnm...@gmail.com wrote:
Mike K, are you wrapping the inserts in a begin/commit block? It makes a
HUGE difference in speed. (Otherwise, each is a separate transaction with
all the overhead; if wrapped, it's a single transaction, and
Yay, that's great news. Does LC 7 now do character references in constant
(albeit a bit slower) time? Or linear? Or...
On Tue, Feb 10, 2015 at 4:50 AM, Ali Lloyd a...@runrev.com wrote:
Apologies - hit send too early.
6.7.1
There are 2931 lines in tstart
There are now 14655 lines in
You can find an example that uses begin transaction, and commit with a
repeat loop here:
http://forums.livecode.com/viewtopic.php?f=7&t=14145&hilit=+transaction
On Tue, Feb 10, 2015 at 9:20 AM, Mike Kerner mikeker...@roadrunner.com
wrote:
Mike B, no, I wasn't, proving once again that I don't know
Mike B, no, I wasn't, proving once again that I don't know everything.
Could you come over here, I need to do a mind meld. I'll mess with that in
a minute. I was also going to see if mySQL was any different, but I
haven't done it, yet.
On Tue, Feb 10, 2015 at 10:28 AM, Ali Lloyd
Apologies - hit send too early.
6.7.1
There are 2931 lines in tstart
There are now 14655 lines in tstart
revers(ta) took 427 ms
qrevers(ta) took 6 ms
Output OK
krevers(ta) took 412 ms
Output OK
7.0.2 + bugfix
There are 2931 lines in tstart
There are now 14655 lines in tstart
It's not quite as fast as LC6, but I'm seeing a vast improvement here:
On 10 February 2015 at 04:10, Kay C Lan lan.kc.macm...@gmail.com wrote:
On Tue, Feb 10, 2015 at 11:58 AM, Geoff Canyon gcan...@gmail.com wrote:
It seems that what we've lost with unicode/7 is the speed of character
Well, that answers that question: Just trying to insert the data into the
database takes 64 seconds for 10,000 lines.
On Tue, Feb 10, 2015 at 5:50 AM, Ali Lloyd a...@runrev.com wrote:
Apologies - hit send too early.
6.7.1
There are 2931 lines in tstart
There are now 14655 lines in
On Mon, Feb 9, 2015 at 7:37 AM, Alex Tweedly a...@tweedly.net wrote:
Wow. I can see why LC7 would be expected to be slower for this case - in
the multi-byte Unicode world,
It just doesn't appear to be characters and bytes. I tried a slightly
different approach to Jacque's, using brute force
Then there is the method of storing the data in a memory based sqLite instance
and using SELECT with the ORDER BY DESC ordering term. Might not be faster, but
it should be a lot more flexible.
Bob S
On Feb 8, 2015, at 14:37 , Alex Tweedly
a...@tweedly.net wrote:
If you use SQLite or mySQL, you'd have to do the same thing with the index,
so unless you already have the data structure in place, you'd have to
create the table, populate the table with the values and the indexes, and
then order by the index and read the data back, but all of that is done
with
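Those steps — create the table, populate it with values plus an index, then order by the index and read back — can be sketched in Python's sqlite3 (a sketch only; table and column names are invented for illustration):

```python
import sqlite3

def reverse_via_sqlite(lines):
    # store each line with its original position, then read the lines
    # back with ORDER BY position DESC to get the reversed list
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (pos INTEGER, val TEXT)")
    con.executemany("INSERT INTO t VALUES (?, ?)", list(enumerate(lines)))
    out = [row[0] for row in
           con.execute("SELECT val FROM t ORDER BY pos DESC")]
    con.close()
    return out
```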
and don't get me wrong, it's not ideal to have to kluge this way, just like
it's not ideal to have to kluge around the last item in a container being
empty. I'm not a fan of either behavior. Both should be dealt with, and
this is just another reason why I will be avoiding 7.0 as long as
Yes, but the second way is so much more sophisticated.
Bob S
On Feb 8, 2015, at 13:52 , J. Landman Gay
jac...@hyperactivesw.com wrote:
Just tinkering around on a lazy Sunday, and I thought I'd come up with a neat
way to reverse a list without using the
It seems that what we've lost with unicode/7 is the speed of character
references. In other words, this:
On Sun, Feb 8, 2015 at 4:37 PM, Alex Tweedly a...@tweedly.net wrote:
SO, instead, we can use "put ... into char x to y of ..." - since it uses
char indexing, it takes constant time (i.e. no
Dave-
Monday, February 9, 2015, 2:47:12 PM, you wrote:
In this case, I don't think there's an advantage in repeat for
each as we're iterating through array elements and not chunks.
Are you really seeing it work faster?
I was using a 24519-line list of 555Kb.
(tried to reply earlier but
On Tue, Feb 10, 2015 at 8:53 AM, Mike Kerner mikeker...@roadrunner.com
wrote:
can we come up with a dataset for this test?
I personally find scripting a standard dataset the easiest. Here's a script
that will create identical lines, each with 18 x 5 char words. I've just
added 3 lines at the
On 2/9/2015 8:10 PM, Kay C Lan wrote:
On Tue, Feb 10, 2015 at 8:53 AM, Mike Kerner mikeker...@roadrunner.com
wrote:
can we come up with a dataset for this test?
I personally find scripting a standard dataset the easiest.
I just used the colorNames repeated a number of times, since they
On Tue, Feb 10, 2015 at 11:58 AM, Geoff Canyon gcan...@gmail.com wrote:
It seems that what we've lost with unicode/7 is the speed of character
references.
See Ali Lloyd's earlier response that the LC team have been watching this
thread and it's clear that 'inefficient code' has been revealed.
I like the idea Mike K.
The slow part with the array method is rebuilding the list. Why not just
build the array, grab and reverse numeric sort the keys and use the data
directly from the array? For 80k lines, the split takes 24 ms. Putting the
keys into a variable, and reverse sorting them
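The split-then-use-the-keys idea in Python terms (a sketch, not the poster's code: a dict of index-to-line stands in for LiveCode's split-by-row array, and the data is read straight from the map via the reverse-sorted keys, without rebuilding an intermediate list first):

```python
def reverse_via_keys(text):
    # build an index -> line mapping (like "split t by return"), then
    # reverse-sort the numeric keys and read the data directly from the map
    lines = dict(enumerate(text.split("\n"), start=1))
    keys = sorted(lines, reverse=True)
    return "\n".join(lines[k] for k in keys)
```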
On 2/9/2015 3:46 AM, Kay C Lan wrote:
But
if your last line was "This is the last line" and there was a CR and the
insertion point was sitting on the line below, I would expect a reverse sort
to produce a blank line and the 2nd line would read "This is the last line".
No. Delimiters are
A quick check and this works:
local sCount
function getCount
subtract 1 from sCount
return sCount
end getCount
on mouseUp
sort lines of fld 1 numeric by getCount()
end mouseUp
You don't need to have sCount start at the number of lines of the text to
be sorted. It can start at 0 and go
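The decrementing-counter sort key carries over to Python too (a sketch of the same trick, not anyone's posted code): the sort computes the key once per line in original order, the key just counts down (starting at 0 and going negative, as above), so the content is ignored and the sort reverses the list.

```python
from itertools import count

def reverse_by_counter(lines):
    # emulate "sort lines ... numeric by getCount()": the key function is a
    # counter that counts down (0, -1, -2, ...), so sorting ascending by it
    # yields the lines in reverse of their original order
    counter = count(0, -1)
    return sorted(lines, key=lambda _line: next(counter))
```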
Sent: Mon, Feb 9, 2015 1:31 pm
Subject: Re: Reverse a list
On 2/9/2015 3:46 AM, Kay C Lan wrote:
But
if your last line was "This is the last line" and there was a CR and the
insertion point was sitting on the line below, I would expect a reverse sort
to produce a blank line
On 2/9/2015 12:58 PM, Mike Bonner wrote:
"1,2,3,4," is 4 items (the comma, as you said, acts as a terminator
for the preceding); "1,2,3,4,5" is 5, so the trailing comma is implied.
Yup, that's it.
--
Jacqueline Landman Gay | jac...@hyperactivesw.com
HyperActive Software
dunbarx wrote:
Jacque.
No. Delimiters are terminators, not dividers. They belong to the text
that precedes them.
Hmmm, what an interesting comment. Maybe we should discuss this...
It does seem a frequent enjoyment in our community, but I suspect the
outcome of yet another discussion on