Someone more knowledgeable than I can probably give a reason why this works, whereas yours didn't:

   {&(_1+i.5)^:(<4) 4
4 3 2 1
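For what it's worth, both monadic spellings seem to agree with the dictionary rule
u^:(<m) ↔ u^:(i.m); it is only the dyadic use with a boxed power, as in the report
quoted below, that blows up. A minimal session check using the same values:

   {&(_1+i.5)^:(i.4) 4
4 3 2 1
   {&(_1+i.5)^:(<4) 4
4 3 2 1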
--------------------------------------------
On Wed, 9/7/16, Xiao-Yong Jin <jinxiaoy...@gmail.com> wrote:

Subject: Re: [Jprogramming] Greatest Increasing Subsequence
To: programm...@jsoftware.com
Date: Wednesday, September 7, 2016, 12:53 PM

I’m trying to rewrite lisM in a readable tacit form but hit something I didn’t understand.
Supplying i.n to ^: works fine, but <n does not work like the dictionary says.
Namely (_1+i.5){~^:(i.4)4 produces 4 3 2 1, but (_1+i.5){~^:(<4)4 exhausts all the memory
with J804 and J805beta.
The dictionary says "If n is boxed it must be an atom, and u^:(<m) y ↔ u^:(i.m) y if m is
a non-negative integer", so the above two should be the same.
What’s going on here?

> On Sep 5, 2016, at 3:17 AM, 'Mike Day' via Programming <programm...@jsoftware.com> wrote:
>
> Thanks Jon
>
> lisM isn't really "my" version - I was just translating the pseudo-code...
> I expect - without checking! - that its advantage over lisl resides in avoiding the
> do-it-yourself binary search by using J's built-in dyadic I. , albeit at the expense of
> an explicit sorted array, which I called mx . I tried limiting the I. search to only the
> first L items of mx, but with no benefit, at least for the sizes I tried.
> The time taken to recover the result is fairly trivial, so I don't suppose my use of
> |. x{~(p{~])^:(i.L) L{m
> is particularly interesting, but it avoided that explicit while loop.
>
> Mike
>
> Please reply to mike_liz....@tiscali.co.uk.
> Sent from my iPad
>
>> On 5 Sep 2016, at 08:07, 'Jon Hough' via Programming <programm...@jsoftware.com> wrote:
>>
>> Just out of interest, I compared the various results.
>> I wrote a fully imperative version, which uses in-place modification of the results array.
>>
>> I also wrote a version that avoids any while / for loops (because I wanted to avoid using
>> them), and ended up putting most of the logic in anonymous verbs, which is probably a big
>> mistake because that gets very slow, and it is barely readable.
>>
>> NB.===========================================================================
>>
>> NB. Jon - bizarre version. Uses anonymous verbs for most of the logic.
>> NB. Slow, and uses huge amounts of memory.
>> appendOrInsert=:(3 : 'insert max' [ 3 : '(max=:<:#lst)[(min=:0)[val=:i{r[i=:y')`(3 : '(lst=:lst,y)[prev=:(_1{lst) y} prev')@.(3 : '(val>({:lst){r)[val=:y{r')
>> insert=:(3 : '(prev=: ((y-1){lst)i}prev)[(lst=: i y }lst)' ] ])@:(3 : 'mid'`(3 : 'min=: mid+1')@.(3 : '(val>(mid{lst){r)[(mid=:<.(min+y)%2)')^:(0&<)^:_)
>>
>> lisJ =: 3 : 0
>> r=:y NB. array
>> [val=:0[max=:0[mid=:0[min=:0 NB. useful numbers
>> lst=:0 NB. list of indices
>> prev=: (#y)#0 NB. previous indices
>> i=:0 NB. current index
>>
>> appendOrInsert"0 >: i.<: # y NB. append the items, or insert them, into lst (and update prev)
>> res=:''
>> ptr=: _1{lst
>> (ptr&buildEx)^:(#lst) res
>> |. res
>> )
>>
>> buildEx =: 4 : 0
>> res=:y,ptr{r
>> ptr=: ptr{prev
>> res
>> )
>>
>>
>> NB.===========================================================================
>>
>>
>> NB. Imperative version, just for benchmarking.
>> lisI =: 3 : 0
>> lst =: 0
>> r=: y
>> parent=: (#y)#0
>> for_i. >:i.<:# r do.
>>   if. ((i{r) > ({: lst){r ) do.
>>     parent=:(_1{lst) i}parent
>>     lst =: lst, i
>>   else.
>>     min=: 0
>>     max =: <:#lst
>>     while. min < max do.
>>       mid =: <. (min + max) % 2
>>       if. (i{r) > ((mid { lst{r)) do. min =: mid + 1
>>       else. max =: mid end.
>>     end.
>>     lst=: (i) max} lst
>>     parent=: ((max-1){lst)(i}) parent
>>   end.
>> end.
>> res=: ''
>> ptr=: _1{lst
>> while. (# res) < # lst do.
>>   res=:res,ptr{r
>>   ptr=:ptr{parent
>> end.
>> |. res
>> )
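As an aside, the min/mid/max loop in lisI is a hand-rolled binary search for the slot
where the new item belongs in a sorted list of tail values. J's dyadic I. (interval
index) does that search directly on a sorted left argument, which is what lisM below
exploits with lo =. mx I. xi. A small illustration, with values of my own choosing
(picked to avoid ties):

   1 3 5 7 I. 4         NB. 4 would slot in after 1 3
2
   1 3 5 7 I. 0 4 8     NB. one slot index per item of the right argument
0 2 4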
>>
>>
>> NB.===========================================================================
>>
>>
>> NB. Mike's version.
>>
>> lisM =: 3 : 0
>> n =. #x =. y
>> if. 2>#x do. x return. end. NB. special case empty or singleton array
>> if. (-:/:~)x do. ~.x return. end. NB. special case sorted arrays
>> if. (-:\:~)x do. {.x return. end.
>> p =. n#0
>> NB. seed m with trailing _1 so that final L (as in wiki algorithm) can be found
>> m =. n 0 } _1#~>:n
>> NB. set up sorted mx, with just the first element of x, so that I. works
>> mx =. ({.x) 1 } (<:<./x), n#>:>./x
>> for_i. i.n do.
>>   xi =. i{x
>>   lo =. mx I. xi
>>   p =. (m{~<:lo) i } p NB. better than appending to short p for larger x
>>   m =. i lo } m
>>   mx =. xi lo } mx
>> end.
>> |. x{~(p{~])^:(i.L) m{~ L =. <: m i. _1
>> )
>>
>> NB.===========================================================================
>>
>> NB. Louis' version.
>> p=: ] , [ ,&.> ] #~ (< {.&>) (= >./)@:* #&>@]
>> e=: }:@>@(#~ (= >./)@:(#&>))@>
>>
>> s=: [: e (<<_) p&.>/@,~ <"0
>> pi=: ] , [ ,&.> ] {~ (< {.&>) (i. >./)@:* #&>@]
>> lisL =: [: e (<<_) pi&.>/@,~ <"0 NB. <------------- this one.
>>
>> f=: ((] >: (-~ >./)) #&>) # ]
>>
>> P=: ] , [ ,&.> ] #~ (< {.&>) (,@[ #^:_1 (= >./)@#) #&>@]
>> sp=: [: e (<<_) ({:@[ f {.@[ P ])&.>/@,~ (,&.> i.@#)
>>
>>
>>
>> NB.===========================================================================
>>
>>
>> NB. Xiao's version with Raul's modifications. Renamed verbs to avoid clashing.
>>
>> NB. dyads
>> exr=: (1 + {:@[ < ]) {. [ ; ,
>> hxr=: [: ; exr&.>
>>
>> NB. monads
>> gxr=: (}.@] ;~ {.@] ; (hxr {.))&>/
>> cxr=: ({::~ [: (i. >./) #@>)@>@{.
>> dxr=: (<_) ; ]
>> lisXR=: ([: cxr gxr^:(#@(>@{:)))@dxr
>>
>> NB.===========================================================================
>>
>> a=: 25 ? 25
>> timespacex 'lisI a'
>> timespacex 'lisJ a'
>> timespacex 'lisM a'
>> timespacex 'lisL a'
>> timespacex 'lisXR a' NB. comment out for larger arrays a.
>>
>> Playing around with various arrays, it seems lisM is the most efficient in time and
>> space (more so than the imperative version).
>> lisM and lisI can handle arrays larger than 10000. The others struggle or give up with them.
>>
>>
>> --------------------------------------------
>> On Sun, 9/4/16, 'Mike Day' via Programming <programm...@jsoftware.com> wrote:
>>
>> Subject: Re: [Jprogramming] Greatest Increasing Subsequence
>> To: programm...@jsoftware.com
>> Date: Sunday, September 4, 2016, 4:30 AM
>>
>> This version of "lis" is a bit more J-like, especially in using dyadic I.
>> instead of the diy binary search, at the expense of a slightly more
>> complicated set-up for the m and mx arrays.
>>
>> lis =: 3 : 0
>> n =. #x =. y
>> if. 2>#x do. x return. end. NB. special case empty or singleton array
>> if. (-:/:~)x do. ~.x return. end. NB. special case sorted arrays
>> if. (-:\:~)x do. {.x return. end.
>> p =. n#0
>> NB. seed m with trailing _1 so that final L (as in wiki algorithm) can be found
>> m =. n 0 } _1#~>:n
>> NB. set up sorted mx, with just the first element of x, so that I. works
>> mx =. ({.x) 1 } (<:<./x), n#>:>./x
>> for_i. i.n do.
>>   xi =. i{x
>>   lo =. mx I. xi
>>   p =. (m{~<:lo) i } p NB. better than appending to short p for larger x
>>   m =. i lo } m
>>   mx =. xi lo } mx
>> end.
>> |. x{~(p{~])^:(i.L) m{~ L =. <: m i. _1
>> )
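The last line is the recovery step Mike described earlier: (p{~])^:(i.L) starts from the
index of the last element of the subsequence and follows the predecessor vector p for
powers 0 1 ... L-1, collecting every step, and |. then restores increasing order. A toy
trace with a made-up predecessor vector pex (purely illustrative, not from the thread):

   pex =. 0 0 1 2            NB. hypothetical predecessor indices
   (pex{~])^:(i.4) 3         NB. 3, 3{pex, (3{pex){pex, ...
3 2 1 0
   |. (pex{~])^:(i.4) 3
0 1 2 3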
>>
>> Mike
>>
>>
>> On 02/09/2016 20:45, 'Mike Day' via Programming wrote:
>>> Well, assuming for now that it does work, here's an attempt at a J
>>> version of the pseudo-code listed at
>>> https://en.wikipedia.org/wiki/Longest_increasing_subsequence#Efficient_algorithms
>>>
>>>
>>> lis =: 3 : 0 NB. longest increasing subsequence
>>> m =. 0#~>:n =. #x =. y
>>> L =. #p =. ''
>>> mx =. m{x NB. added this vector for better look-up of x{~mid{m
>>> for_i. i.n do.
>>>   'lo hi' =. 1, L
>>>   xi =. i{x
>>>   while. lo <: hi do.
>>>     mid =. >.@-: lo + hi
>>>     NB. if. xi > x{~ mid{m do. NB. next line a bit better
>>>     if. xi > mid{mx do.
>>>       lo =. >: mid
>>>     else.
>>>       hi =. <: mid
>>>     end.
>>>   end.
>>>   p =. p, m{~<:lo
>>>   m =. i lo } m
>>>   mx =. xi lo } mx NB. update my additional array
>>>   L =. L >. lo
>>>   NB. smoutput i; (x{.~ >:i); L{.m NB. diagnostic
>>> end.
>>> |. x{~(p{~])^:(i.L) L{m
>>> )
>>>
>>> It's reasonably fast on ?10000#5000 - but possibly wrong! It does
>>> fail on an empty array.
>>> Mike
>>>
>>>
>>>> On 02/09/2016 17:30, Raul Miller wrote:
>>>> It seems to me that the "efficient algorithm" documented on the
>>>> wikipedia page would have an analogous flaw. It is performing binary
>>>> searches on unsorted lists. That said, it does perform correctly for
>>>> this example.
>>>>
>>>> Thanks,
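Coming back to "possibly wrong" and Raul's point about binary searches on unsorted lists:
for small arguments the candidate verbs can be sanity-checked against an exponential
brute-force definition. The sketch below is mine (the name lisBF and the test vector t
are not from the thread); it enumerates every subsequence, keeps the strictly increasing
ones and returns one of maximal length. Since several subsequences can share that length,
comparing lengths rather than the sequences themselves is the safer test.

NB. brute-force longest strictly increasing subsequence - exponential, small y only
lisBF =: 3 : 0
 s =. (#: i. 2 ^ # y) <@#"1 y       NB. every subsequence of y, boxed
 s =. s #~ ([: *./ 2 </\ ])@> s     NB. keep only the strictly increasing ones
 > s {~ (i. >./) #@> s              NB. return one of maximal length
)

   t =. 3 1 4 1 5 9 2 6
   (# lisBF t) = # lisM t           NB. expect 1 if lisM finds a maximal-length answer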