That's great - thank you for that!
-- Alex.
On 18/02/2015 20:39, Geoff Canyon wrote:
On Tue, Feb 17, 2015 at 4:33 PM, Alex Tweedly wrote:
> Though it would be kinda cool to do a quick LC simulation showing visible
> "animation" of the variable and index as it goes through the loop.
I did something like this:
https://dl.dropboxusercontent.com/u/41182876/foreach.livecode
It shows each step along the way, so you can see the difference in steps
taken, and gives an overall time for each simulation.
--
View this message in context:
http://runtime-revolution.278305.n4.nabble.com/Reverse-a-list-tp4688611p4689044.html
Sent from the Revolution - User mailing list archive at Nabble.com.
On Tue, Feb 17, 2015 at 4:33 PM, Alex Tweedly wrote:
> Though it would be kinda cool to do a quick LC simulation showing visible
> "animation" of the variable and index as it goes through the loop.
>
I did something like this:
https://dl.dropboxusercontent.com/u/41182876/foreach.livecode
It shows each step along the way, so you can see the difference in steps
taken, and gives an overall time for each simulation.
Mae inglesh nawt so goot. I didn't include the transaction code (or the
creating the database in memory instead of on disk code) in the examples
because it's the same in both and the question is why is the one sample so
much slower than the other.
On Tue, Feb 17, 2015 at 11:06 PM, Dr. Hawkins wrote:
On Tue, Feb 17, 2015 at 2:20 PM, Mike Kerner
wrote:
> I didn't include everything, including the transaction code, or opening the
> database in memory instead of on a disk. When we were messing with this,
> the piece that became the (58 minute) bottleneck was inside the loop, or,
> according to Peter, the way the chunks are accessed in the loop.
OIC, nvm. Looks like that is how revExecuteSQL works. I can see now why I was
befuddled trying to get SQL to work in LC using their built-in functions.
Bob S
On Feb 17, 2015, at 12:11, Mike Kerner wrote:
Peter (Brett),
Help me with the chunking piece, then.
On 2/17/2015 4:29 PM, Alex Tweedly wrote:
I am *absolutely* not recommending that anyone should modify the
variable in question within the loop - even if that seems to work in
some cases, it is known to be dangerous, and so should just NOT BE DONE
in "real" code. But doing it in limited cases do
On 17/02/2015 13:25, Dave Cragg wrote:
Alex,
Thanks for that.
Disbelieving soul that I am (sorry), I puzzled for a while over the results of
these two versions.
...
I had to use a pencil and paper to track what was in t and what the engine was referring to after
the "x" and "y" inserts. Then
On 17/02/2015 15:43, Bob Sneidar wrote:
Then past posts are incorrect on this matter. It was explicitly stated that the
actual memory holding the variable data was “indexed” and that altering the
variable data could relocate the variable in memory resulting in invalid data.
I have seen this myself. The data returned is garbled nonsense.
I didn't include everything, including the transaction code, or opening the
database in memory instead of on a disk. When we were messing with this,
the piece that became the (58 minute) bottleneck was inside the loop, or,
according to Peter, the way the chunks are accessed in the loop.
On Tue, Feb 17, 2015 at 12:11 PM, Mike Kerner
wrote:
> The following is very fast:
> put "INSERT INTO sortTest VALUES :1" into tSQL
> repeat for each line tLine in tDataSet
> revExecuteSQL dbid, tsql, tline
> end repeat
>
Faster still would probably be to build a command in the loop. Termina
It just seems odd to me that the chunking is so slow when done the second
way. Just for the heck of it, I did a SPLIT so as to work on an array, and
sure enough, it is just a little slower than the first technique (even
though there is the overhead involved with splitting the container into an
arr
On 2/17/2015 2:11 PM, Mike Kerner wrote:
Help me with the chunking piece, then.
Put 100 apples in a box.
repeat with x = 1 to 100:
pick up one apple
drop it back in the box
pick up one apple
pick up a second apple
drop them both back into the box
pick up one apple
pick up a second apple
pick up a third apple
drop all three back into the box
... and so on, until the final pass picks up all 100.
The second is slow, because the current position is not tracked. So, for
line 1, its easy. You grab line 1. For line 2, you count the lines, until
you get to line 2. Same for line 3. Or think of it this way.. If you have
100 lines, and you are grabbing all of them using the second method, The
fi
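Mike's explanation can be modeled outside LiveCode. A small Python sketch (hypothetical names; it only counts "steps") contrasts a single forward scan with re-counting from line 1 on every access:

```python
def steps_for_each(num_lines):
    # "repeat for each line" keeps its place: one visit per line.
    return num_lines

def steps_repeat_with(num_lines):
    # "line i of tData" re-counts delimiters from the start, so
    # fetching line i costs about i steps: 1 + 2 + ... + n overall.
    return sum(range(1, num_lines + 1))

for n in (10, 100, 1000):
    print(n, steps_for_each(n), steps_repeat_with(n))
```

At 1,000 lines that is 1,000 steps versus 500,500 -- the n vs. n^2 gap discussed later in the thread.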
Peter (Brett),
Help me with the chunking piece, then.
The following is very fast:
put "INSERT INTO sortTest VALUES :1" into tSQL
repeat for each line tLine in tDataSet
revExecuteSQL dbid, tsql, tline
end repeat
The following is very slow:
put "INSERT INTO sortTest VALUES :1" into tSQL
put the
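For comparison, a rough Python/sqlite3 analogue of the fast version above (this is not revExecuteSQL; the table name and data are made up for the sketch), with one parameterized INSERT per row inside a single transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sortTest (val TEXT)")

tDataSet = ["line %d" % i for i in range(10000)]

# One transaction around the whole loop; each iteration binds the
# current line to the single placeholder, like ":1" in the LC code.
with conn:
    for tLine in tDataSet:
        conn.execute("INSERT INTO sortTest VALUES (?)", (tLine,))

print(conn.execute("SELECT COUNT(*) FROM sortTest").fetchone()[0])  # 10000
```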
Yes, speed will be an issue if the data is thousands of lines. I'm usually
dealing with less than a thousand iterations, and taking a half-second or so to
do the job is nearly unnoticeable for the user.
-- Peter
Peter M. Brigham
pmb...@gmail.com
http://home.comcast.net/~pmbrig
On Tue, Feb 17, 2015 at 9:01 AM, Geoff Canyon wrote:
> Given that you have access to lines, items, and words, if possible it would
> be better to set the outer loop to work on lines, and then do whatever you
> like with items within the loop.
>
Most of the time, it is enough to set the outer loo
On Tue, Feb 17, 2015 at 11:55 AM, Dr. Hawkins wrote:
> Won't this be orders of magnitude slower?
>
Yes.
Given that you have access to lines, items, and words, if possible it would
be better to set the outer loop to work on lines, and then do whatever you
like with items within the loop.
Or tak
On Mon, Feb 16, 2015 at 6:23 PM, Peter M. Brigham wrote:
> No need to change the itemdel in a loop, you can use this instead:
>
> put getItem(pList, pIndex, pDelim) into tItem
>
Won't this be orders of magnitude slower?
I can see the use in the general case, but I'm usually in nested loops
where
Then past posts are incorrect on this matter. It was explicitly stated that the
actual memory holding the variable data was “indexed” and that altering the
variable data could relocate the variable in memory resulting in invalid data.
I have seen this myself. The data returned is garbled nonsense.
Alex,
Thanks for that.
Disbelieving soul that I am (sorry), I puzzled for a while over the results of
these two versions.
on mouseup
put empty into msg
put "abc" & CR & "def" & CR & "ghi" & CR into t
repeat for each line L in t
put the number of chars in L && L & CR after msg
Hey, silly question, but is there a way to do this..
sort lines of tWorking numeric by (--sCount) so that there is no actual
function call? (an in position, decrement sort of thing)
Just curious.
On Mon, Feb 16, 2015 at 8:25 PM, Jerry Jensen wrote:
> Good point. Besides being a good gener
Good point. Besides being a good general habit, it would be especially
important to make recursive functions private.
.Jerry
> On Feb 16, 2015, at 7:18 PM, Geoff Canyon wrote:
>
> It's important to note that the efficiency is all/mostly in the function
> call, not in the execution of the functi
It's important to note that the efficiency is all/mostly in the function
call, not in the execution of the function itself. So for really short
functions that will be called many times, this is significant. For longer
functions, the difference all but vanishes:
on mouseUp
put 1000 into n
--
RichardG wrote:
>
> I would imagine that a handler in the same
> script as the caller would be faster than having it just about any other
> place, but to limit its scope trims the execution time by a surprising
> amount.
>
Whoda thunk!
> I think my new habit is to declare everything as priva
On Feb 16, 2015, at 9:05 PM, Dr. Hawkins wrote:
> On Mon, Feb 16, 2015 at 3:08 PM, Alex Tweedly wrote:
>
>> That's not quite correct. It doesn't do a single initial complete scan of
>> the whole variable and keep all the pointers. What it does is (more like)
>> keep track of how far it has curre
Right. Back to the original version, a command, with referenced list.
-- Peter
On Feb 16, 2015, at 7:19 PM, Alex Tweedly wrote:
> You were right first time
>
> if you use a reference, then there is no copy created when you
On Mon, Feb 16, 2015 at 3:08 PM, Alex Tweedly wrote:
> That's not quite correct. It doesn't do a single initial complete scan of
> the whole variable and keep all the pointers. What it does is (more like)
> keep track of how far it has currently processed, and then when it needs
> the next line,
You were right first time
if you use a reference, then there is no copy created when you do the
call; and then you build up the output list.
without the reference, there is an initial copy and then you
additionally build the output list.
So using a reference parameter saves the memory
Richard-
Monday, February 16, 2015, 3:55:54 PM, you wrote:
> I think my new habit is to declare everything as private unless I know I
> need it available to other scripts.
Yeah, that's my normal MO anyway.
And for just that reason.
--
-Mark Wieder
ahsoftw...@gmail.com
Bernd wrote:
funny, nobody likes "private"
It saves up to 20% all else equal.
On about 44000 lines, 5 items on each line:
with private 86 ms, without 106 ms on LC 6.7.2
with private roughly 180 ms, without 200 ms on LC 7.0.2 rc2 (times vary much
more on LC 7.0.2 rc2 than on LC 6.7.2)
Ooooh, good
> ... results in a performance gain as fewer messages are passed through
> the message path.
Kind regards
Bernd
>> return pList
>> end reverseText
>>
>> private function reverseSort
>>   subtract 1 from sNum
>>   return sNum
>> end reverseSort
>>
>> private and @ help when the line count is high.
>>
funny, nobody likes "private"
It saves up to 20% all else equal.
On about 44000 lines, 5 items on each line:
with private 86 ms, without 106 ms on LC 6.7.2
with private roughly 180 ms, without 200 ms on LC 7.0.2 rc2 (times vary much
more on LC 7.0.2 rc2 than on LC 6.7.2)
On 16/02/2015 16:06, Bob Sneidar wrote:
The For Each form is also quite handy as it eliminates the need for stuffing
some variables in the process. As mentioned in past threads, the one downside
is that you *MUST* not change the contents of the source data (and I think the
each variable as well)
I wrote:
> I referenced the list and turned the function into a command, saves memory
> (possibly speed?) on very large lists.
I just realized that no memory is saved this way because we are building a new
duplicate (reversed) list within the command. So referencing the list has no
advantage.
So, Alex's way of doing it is the fastest pure-LC way (I didn't get into using
the database methods). I referenced the list and turned the function into a
command, saves memory (possibly speed?) on very large lists.
on reverseSort @pList, pDelim
-- reverse-sorts an arbitrary list
--ie,
On 2015-02-16 22:02, Mike Kerner wrote:
I don't think I follow on the first part. Edinburgh says that the
complexity of the two traversals is dramatically different: repeat for
each is somewhere between n log n and n, and repeat with is n^2. At least
for the case of your squares of integers,
On 16/02/2015 21:15, Peter M. Brigham wrote:
As I now understand it, the really big difference is between the repeat for
n = 1 to… form on the one hand, and the repeat for each… and repeat n times
forms. The latter 2 are not that different, but when the engine has to
count lines/items every time,
___
use-livecode mailing list
use-livecode@lists.runrev.co
On 2/16/2015 3:02 PM, Mike Kerner wrote:
At least
for the case of your squares of integers, I would expect that there is a
crossover where it's going to be faster to build the list, first. I don't
know if that is at 100, 1000, or some bigger number, but n vs. n^2 is a
very big difference.
If y
As I now understand it, the really big difference is between the repeat for n =
1 to… form on the one hand, and the repeat for each… and repeat n times forms.
The latter 2 are not that different, but when the engine has to count
lines/items every time, it slows things down a very significant amount.
I don't think I follow on the first part. Edinburgh says that the
complexity of the two traversals is dramatically different: repeat for
each is somewhere between n log n and n, and repeat with is n^2. At least
for the case of your squares of integers, I would expect that there is a
crossover where it's going to be faster to build the list first. I don't
know if that is at 100, 1000, or some bigger number, but n vs. n^2 is a
very big difference.
On Feb 16, 2015, at 1:58 PM, BNig wrote:
> Hi Peter,
>
> you also might want to check your reverse algorithm on 7.x.x
Well, I'm still running 5.5.1, since I have a more than full-time job already
taking care of patients and I don't have time to debug 38,000 lines of script
for my practice mana
On 15/02/2015 02:28, Mike Kerner wrote:
I just read the dictionary entry (again), and I would say that it is not at
all clear that there would appear to be an ENORMOUS difference. For
starters, you have to read way down to find the mention; it isn't
really called out with a NOTE or anything else to draw one's attention
According to LC, we're dealing with somewhere between n log n and n on the
one side and n^2 on the other, which is about as far apart as we can get.
On Mon, Feb 16, 2015 at 11:09 AM, Bob Sneidar
wrote:
> Ah. Because the keys of an array are effectively a system of pointers in
> themselves. It m
Ah. Because the keys of an array are effectively a system of pointers in
themselves. It might actually be slightly quicker, since the array has already
been created, while the For Each will have to create the pointers on the fly at
the start of the loop. I’d be curious to find out how much time
The For Each form is also quite handy as it eliminates the need for stuffing
some variables in the process. As mentioned in past threads, the one downside
is that you *MUST* not change the contents of the source data (and I think the
each variable as well) as doing so will corrupt what ends up i
My mistake. You are correct that the two are equally efficient. It was an error
in my timing test handler.
-- Peter
On Feb 15, 2015, at 7:56 AM, Dave Cragg wrote:
> Peter,
>
> I don’t follow. If I change the repeat portion of y
Peter,
I don’t follow. If I change the repeat portion of your code to use repeat n
times as below, the speed doesn’t change. And the speed scales linearly in both
cases if the size of the data set is increased.
put the keys of pList into indexList
put the number of lines of indexList into i
put
Harking back to the original discussion on reversing a list -- still the
subject of this thread, here's the original example as I saved it in my library.
function reverseSort pList, pDelim
-- reverse sorts an arbitrary list
--ie, item/line -1 -> item/line 1, item/line -2 -> item/line 2,
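The decrementing-key trick behind reverseSort can be mimicked in Python (a sketch only; it relies on sorted() calling the key function once per element in list order, which CPython does):

```python
from itertools import count

def reverse_sort(lines):
    # Tag each line with a strictly decreasing number as its sort
    # key (0, -1, -2, ...), then sort ascending: the original order
    # comes back reversed -- the same idea as sorting by a handler
    # that does "subtract 1 from sNum" on every call.
    keys = count(0, -1)
    return sorted(lines, key=lambda _line: next(keys))

print(reverse_sort(["aa", "bb", "cc"]))  # ['cc', 'bb', 'aa']
```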
Richard,
I just read the dictionary entry (again), and I would say that it is not at
all clear that there would appear to be an ENORMOUS difference. For
starters, you have to read way down to find the mention; it isn't
really called out with a NOTE or anything else to draw one's attention
Typo, should be ":memory:".
On Sat Feb 14 2015 at 2:01:45 PM Mike Kerner
wrote:
> Pete, is that a typo, or did you mean to have a semicolon instead of a
> colon in front of "memory"? Does ";memory:" work, too, or just ":memory:"?
>
> AND HOLY CRAP, yes, Pete, you're right, you were doing 100k r
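The same convention exists in other SQLite bindings; a Python sqlite3 sketch showing that ":memory:" (colons on both sides) creates a purely in-memory database:

```python
import sqlite3

# ":memory:" is SQLite's magic filename for a private, in-memory
# database -- nothing touches disk, and it vanishes on close.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")
print(conn.execute("SELECT x FROM t").fetchone()[0])  # 42
```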
Mike Kerner wrote:
...
> REPEAT FOR is .129 seconds, and REPEAT WITH is TWENTY SEVEN THOUSAND
> TIMES SLOWER (for this operation)??!?!?!?!?!???
>
> Hey, Pete, "That's a common technique"...WHAT? If it's so common,
> and all of this is common knowledge, then how come it isn't
> documented, anywher
Pete, is that a typo, or did you mean to have a semicolon instead of a
colon in front of "memory"? Does ";memory:" work, too, or just ":memory:"?
AND HOLY CRAP, yes, Pete, you're right, you were doing 100k records, where
the other example was only doing 10k. So doing 100k records with REPEAT
WIT
Oh thanks. That would have screwed me up if I had tried to use “memory”.
Bob S
On Feb 13, 2015, at 15:34, Peter Haworth wrote:
We both used in memory databases. The filename is ";memory:"
We both used in memory databases. The filename is ";memory:"
On Fri Feb 13 2015 at 2:46:45 PM Bob Sneidar
wrote:
> He may also have been using a memory resident database. That is what I
> suggested at the first. To do this, use “memory” as the file name.
>
> Bob S
He may also have been using a memory resident database. That is what I
suggested at the first. To do this, use “memory” as the file name.
Bob S
On Feb 13, 2015, at 12:40, Mike Kerner wrote:
I must have missed a thread, somewhere. That would be the thread o
Right, that's a common technique to avoid the timing problem when you need
a numeric index for some reason.
My stack is 100,000 lines Mike, actually 99,913. You're probably getting
mixed up with the stack name which includes "1" because I started
testing that way then increased it to 100,000
NO! SORRY! My mixup in the last post - the REPEAT FOR is faster (repeat
for each line...)
On Fri, Feb 13, 2015 at 4:02 PM, Mike Kerner
wrote:
> No, no, it isn't 100,000 lines, it's only 10,000 lines. 0.129 vs 39.0.
>
> So then, just for the heck of it, because if we do the "repeat for", we
>
No, no, it isn't 100,000 lines, it's only 10,000 lines. 0.129 vs 39.0.
So then, just for the heck of it, because if we do the "repeat for", we
gain some additional information (the line number we're on), I added "put 0
into i" before the loop and then "add 1 to i" inside the loop, at the top.
We
Hi Mike,
Glad you figured out the reason for the speed difference.
Not sure if there's a single thread anywhere that talks about repeat loops
but "repeat for" can be orders of magnitude faster than "repeat with" as
you've discovered. In this case there were about 100k lines in the data
and I thin
I must have missed a thread, somewhere. That would be the thread on how LC
handles loops.
To recap, doing this sort using an sqlite database (insert the values into
a table, then sort the table), was taking me almost 40 seconds. Then Pete
chimed in and had it working in a couple hundred milliseconds.
On Fri, Feb 13, 2015 at 3:04 AM, J. Landman Gay
wrote:
> On 2/12/2015 12:54 PM, Peter Haworth wrote:
>
>>
>>> I haven't run any of the LC scripts to do this but if that's true, then
>> they don't achieve the original objective of reversing the list.
>>
>
> I don't think it's true. Using LC scrip
I'm feeling a delimiter argument coming on
On Thu, Feb 12, 2015 at 2:04 PM, J. Landman Gay
wrote:
> On 2/12/2015 12:54 PM, Peter Haworth wrote:
>
>> Lets say for example you had
>>> >a list of 10,000 customers and their email addresses. Most customers do
>>> >have an email, a few don't and it ju
On 2/12/2015 12:54 PM, Peter Haworth wrote:
> Lets say for example you had
> a list of 10,000 customers and their email addresses. Most customers do
> have an email, a few don't and it just so happens that your first customer
> Aardvark, and last customer, Zoe, don't have email. If you read just the
On Wed, Feb 11, 2015 at 6:18 PM, Kay C Lan wrote:
> I did a similar test to you using a Valentina DB with and without unicode.
> Specifically I used a UTF8 db so the unicode test data had to be passed
> through LCs textDecode(dboutput,"utf8") to get the correct results; which
> obviously takes ti
On Thu, Feb 12, 2015 at 8:16 AM, Peter Haworth wrote:
>
> Oh yes, and doesn't matter whether you're using LC 7 or something prior to
> that, although that problem appears to be a bug that is already being
> fixed.
>
> I think that statement requires a qualifier - 'if you are not dealing with
unic
Hi Mike,
I doubt the sqlite approach will be faster than the other algorithms, but
36 seconds is still way too long to insert 10,000 lines. Could you post
your code?
Pete
lcSQL Software
On Feb 11, 2015 4:52 AM, "Mike Kerner" wrote:
> With sqlite on my box, doing the inserts via a transaction took
With sqlite on my box, doing the inserts via a transaction took the time
down to 36 seconds from 64, still not good enough.
On Tue, Feb 10, 2015 at 11:58 AM, Mike Bonner wrote:
> You can find an example that uses begin transaction, and commit with a
> repeat loop here:
> http://forums.livecode.com/viewtopic.php?f=7&t=14145&hilit=+transaction
You can find an example that uses begin transaction, and commit with a
repeat loop here:
http://forums.livecode.com/viewtopic.php?f=7&t=14145&hilit=+transaction
On Tue, Feb 10, 2015 at 9:20 AM, Mike Kerner
wrote:
> Mike B, no, I wasn't, proving once again that I don't know everything.
> Could yo
Mike B, no, I wasn't, proving once again that I don't know everything.
Could you come over here, I need to do a mind meld. I'll mess with that in
a minute. I was also going to see if mySQL was any different, but I
haven't done it, yet.
On Tue, Feb 10, 2015 at 10:28 AM, Ali Lloyd wrote:
> > Wh
> Which v7 build is that? Is it one we have or one coming up?
I've just submitted the pull request so once it's reviewed it will be
merged and appear in the next build, so hopefully 7.0.2 RC 3.
> Does LC 7 now do character references in constant (albeit a bit slower)
time? Or linear? Or...
In t
Thanks Ali!
On Tue, Feb 10, 2015 at 8:10 AM, Mike Bonner wrote:
> Mike K, are you wrapping the inserts in a begin/commit block? It makes a
> HUGE difference in speed. (otherwise, each is a separate transaction with
> all the overhead. If wrapped, its a single transaction, and so much
> faster.
Mike K, are you wrapping the inserts in a begin/commit block? It makes a
HUGE difference in speed. (otherwise, each is a separate transaction with
all the overhead. If wrapped, its a single transaction, and so much
faster.
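The effect Mike B describes can be sketched with Python's sqlite3 module (an illustration, not LiveCode's revdb API); isolation_level=None puts the connection in autocommit mode so the BEGIN/COMMIT pair is explicit:

```python
import sqlite3

rows = [("row %d" % i,) for i in range(20000)]

def insert_all(wrap):
    # In autocommit mode, without BEGIN every INSERT is its own
    # transaction; with BEGIN/COMMIT the whole loop is one.
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE t (v TEXT)")
    if wrap:
        conn.execute("BEGIN")
    for r in rows:
        conn.execute("INSERT INTO t VALUES (?)", r)
    if wrap:
        conn.execute("COMMIT")
    n = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return n

print(insert_all(False), insert_all(True))  # 20000 20000
```

On an on-disk database the wrapped version is dramatically faster, because each per-statement commit forces a separate sync to disk.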
On Tue, Feb 10, 2015 at 7:36 AM, Geoff Canyon wrote:
> Yay, that's grea
Yay, that's great news. Does LC 7 now do character references in constant
(albeit a bit slower) time? Or linear? Or...
On Tue, Feb 10, 2015 at 4:50 AM, Ali Lloyd wrote:
> Apologies - hit send too early.
>
>
> 6.7.1
>
> There are 2931 lines in tstart
>
> There are now 14655 lines in tstart
>
> re
Ali Lloyd wrote:
> 6.7.1
> revers(ta) took 427 ms
> qrevers(ta) took 6 ms
> krevers(ta) took 412 ms
>
> 7.0.2 + bugfix
> revers(ta) took 142 ms
> qrevers(ta) took 32 ms
> krevers(ta) took 258 ms
Very exciting progress, Ali.
Which v7 build is that? Is it one we have or one coming up?
--
Richa
Well, that answers that question: Just trying to insert the data into the
database takes 64 seconds for 10,000 lines.
On Tue, Feb 10, 2015 at 5:50 AM, Ali Lloyd wrote:
> Apologies - hit send too early.
>
>
> 6.7.1
>
> There are 2931 lines in tstart
>
> There are now 14655 lines in tstart
>
> re
Apologies - hit send too early.
6.7.1
There are 2931 lines in tstart
There are now 14655 lines in tstart
revers(ta) took 427 ms
qrevers(ta) took 6 ms
Output OK
krevers(ta) took 412 ms
Output OK
7.0.2 + bugfix
There are 2931 lines in tstart
There are now 14655 lines in tstart
revers(ta
It's not quite as fast as LC6, but I'm seeing a vast improvement here:
On 10 February 2015 at 04:10, Kay C Lan wrote:
> On Tue, Feb 10, 2015 at 11:58 AM, Geoff Canyon wrote:
>
> > It seems that what we've lost with unicode/7 is the speed of character
> > references.
> >
>
> See Ali Lloyd's ear
On Tue, Feb 10, 2015 at 11:58 AM, Geoff Canyon wrote:
> It seems that what we've lost with unicode/7 is the speed of character
> references.
>
See Ali Lloyd's earlier response that the LC team have been watching this
thread and it's clear that 'inefficient code' has been revealed. The LC team
are
It seems that what we've lost with unicode/7 is the speed of character
references. In other words, this:
On Sun, Feb 8, 2015 at 4:37 PM, Alex Tweedly wrote:
> SO, instead, we can use "put ... into char x to y of ..." - since it uses
> char indexing, it takes constant time (i.e. no scan, just dir
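Alex's distinction (delimiter scanning versus direct character indexing) can be illustrated in Python: compute the line-start offsets once, after which any line is a direct slice rather than a fresh scan. The names are invented for the sketch:

```python
text = "\n".join("line %d" % i for i in range(1000))

# One pass to record where each line starts.
starts = [0]
for i, ch in enumerate(text):
    if ch == "\n":
        starts.append(i + 1)
starts.append(len(text) + 1)  # sentinel just past the last line

def line_by_offset(k):
    # "char x to y": a direct slice, no scanning.
    return text[starts[k - 1]:starts[k] - 1]

def line_by_scan(k):
    # "line k": conceptually re-scans for delimiters on each call.
    return text.split("\n")[k - 1]

print(line_by_offset(500) == line_by_scan(500))  # True
```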
On 2/9/2015 8:10 PM, Kay C Lan wrote:
On Tue, Feb 10, 2015 at 8:53 AM, Mike Kerner
wrote:
>can we come up with a dataset for this test?
>
I personally find scripting a standard dataset the easiest.
I just used the colorNames repeated a number of times, since they start
out alphabetized and
On Tue, Feb 10, 2015 at 8:53 AM, Mike Kerner
wrote:
> can we come up with a dataset for this test?
>
I personally find scripting a standard dataset the easiest. Here's a script
that will create identical lines, each with 18 x 5 char words. I've just
added 3 lines at the beginning:
aa
bb
cc
an
Dave-
Monday, February 9, 2015, 2:47:12 PM, you wrote:
> In this case, I don't think there's an advantage in "repeat for
> each" as we're iterating through array elements and not chunks.
> Are you really seeing it work faster?
> I was using a 24519-line list of 555Kb.
(tried to reply earlier b
can we come up with a dataset for this test? I was about to go write the
database code to share, but I realized I don't have anything (real) to test
against.
On Mon, Feb 9, 2015 at 5:47 PM, Dave Cragg
wrote:
> In this case, I don’t think there’s an advantage in "repeat for each" as
> we’re iter
In this case, I don’t think there’s an advantage in "repeat for each" as we’re
iterating through array elements and not chunks.
Are you really seeing it work faster?
I was using a 24519-line list of 555Kb.
Dave
> On 9 Feb 2015, at 22:36, Mark Wieder wrote:
>
> Dave Cragg writes:
>
>> Stil
Yay.
But the speed is the same as my original. (On both 6.0.2 and 7.0.1)
> On 9 Feb 2015, at 22:25, Mark Wieder wrote:
>
> Note to self - paste the actual code...
>
> function arevers p
> local t
> local tNumElems
>
> put the number of lines in p into tNumElems
> split p by cr
> pu
Dave Cragg writes:
> Still no data. The problem is here:
Yeah... see my re-corrected version.
Comes out faster due to the "repeat for each" construct.
Even in LC 7.x.
--
Mark Wieder
ahsoftw...@gmail.com
Mark
Still no data. The problem is here:
> repeat for each line l in p
I would have been surprised if that had worked.
I also tried the following, but it give the same speed as my original (not
surprising as it’s doing much the same thing)
function arevers p
put the number of lines in p
Note to self - paste the actual code...
function arevers p
local t
local tNumElems
put the number of lines in p into tNumElems
split p by cr
put empty into t
repeat for each line l in the keys of p
put p[tNumElems] & cr after t
subtract 1 from tNumElems
end repeat
return t
end arevers
Just a thought, untested.
Change the first two executable lines to:
split p by cr
put the number of lines of the keys of p into tNumElems
That way instead of having to spin through the entire data twice counting
lines, once for the number of lines and again for split,
it would only spin all the
least one "layer" down. When you drill into its elements, things start to
come back to life.
-Original Message-
From: Dave Cragg
To: How to use LiveCode
Sent: Mon, Feb 9, 2015 4:54 pm
Subject: Re: Reverse a list
Mark,
It makes it faster, but it doesn’t return any data. :-)
Sorry - got one line out of place. Here ya go.
Still the fastest yet.
function arevers p
local t
local tNumElems
put the number of lines in p into tNumElems
split p by cr
put empty into t
repeat for each line l in p
put p[tNumElems] & cr after t
subtract 1 from tNumElems
end repeat
return t
end arevers
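In Python terms, Mark's split-then-walk-backwards approach looks like this (a sketch; "split p by cr" becomes a list, and the decrementing index plays the role of tNumElems):

```python
def arevers(p):
    lines = p.split("\n")         # like "split p by cr"
    t = []
    num = len(lines)              # like tNumElems
    for _ in range(num):
        t.append(lines[num - 1])  # 0-based list, so element num - 1
        num -= 1                  # "subtract 1 from tNumElems"
    return "\n".join(t)

print(arevers("aa\nbb\ncc"))  # cc, bb, aa on three lines
```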
Stands to reason. ;-)
Bob S
On Feb 9, 2015, at 13:53, Dave Cragg wrote:
Mark,
It makes it faster, but it doesn’t return any data. :-)
Mark,
It makes it faster, but it doesn’t return any data. :-)
The number of lines in p = 0
Cheers
Dave
> On 9 Feb 2015, at 20:13, Mark Wieder wrote:
>
> Dave-
>
> Using 'repeat for each' for the loop makes this faster yet.
>
> function arevers p
> local t
> local tNumElems
>
> split