Sorry, if there is a bug, it's somewhere else and doesn't do any harm in the
current libs. I was a bit confused. ;-)

Greets,
Kiste

    On Thursday, December 10, 2020 at 18:35:04 CET, 'Oliver Seitz' via
jallib <[email protected]> wrote:
 
  Hi!
I've added some "if"s, reduced compile time by 15% and found a bug ;-)
It was even present in the current lib:    ...
    elsif LARGE_ARRAY_1_SIZE > 512 then
       if address >= 512 then
          _large_array_1_byte_3c[address - 512] = data
       elsif address >= 256 then
          _large_array_1_byte_2c[address - 256] = data
       else
          _large_array_1_byte_1c[address - 0] = data
       end if
    elsif LARGE_ARRAY_1_SIZE > 256 then
    ...
If the chosen size is a multiple of the chunk size, the highest variable won't
work: if LARGE_ARRAY_1_SIZE is 512, it is not *greater* than 512, so
only the "greater-than-256" part will be compiled.
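The boundary case is easy to model outside JAL. Here is a minimal Python sketch (a hypothetical model, not the real library code) of the compile-time elsif chain, showing how a size that is an exact multiple of the chunk size falls through to the lower branch:

```python
# Hypothetical model of the compile-time "elsif LARGE_ARRAY_1_SIZE > ..."
# chain; the chunk size is 256 bytes, as in the library excerpt above.
def select_dispatcher(size):
    if size > 512:
        return 3   # dispatcher that knows chunks 1c, 2c and 3c
    elif size > 256:
        return 2   # dispatcher that knows chunks 1c and 2c
    else:
        return 1   # single-chunk dispatcher

print(select_dispatcher(513))  # 3
print(select_dispatcher(512))  # 2 -- the exact multiple picks the lower branch
```

Whether the lower branch suffices depends on which branch declares the matching chunk variables; if declaration and dispatch use different comparisons at the boundary, the highest chunk variable is never reached, which is the symptom described above.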

By the way, the "- 256", "- 512"... offsets are unnecessary; they can all be
replaced by byte(address). I think I'll have a look to see whether the change is worth it ;-)
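The equivalence is easy to check: each 256-byte chunk starts at a multiple of 256, so the subtracted offset is always just the low byte of the 16-bit address. A small Python check, modeling JAL's byte() cast with "& 0xFF":

```python
# Each chunk starts at a multiple of 256, so "address - 256", "address - 512",
# ... always yield the low byte of the address. JAL's byte(address) cast
# truncates to that same low byte; here it is modeled with "& 0xFF".
def offset_by_subtraction(address):
    chunk_base = (address // 256) * 256   # start of the chunk for this address
    return address - chunk_base

def offset_by_low_byte(address):
    return address & 0xFF                 # what byte(address) would produce

for address in (0, 255, 256, 511, 512, 700, 14847):
    assert offset_by_subtraction(address) == offset_by_low_byte(address)
```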
Greets,
Kiste
    On Thursday, December 10, 2020 at 07:03:45 CET, Matt Schinkel
<[email protected]> wrote:
 
 Great work Kiste, time to fix those typos. You can add your name to the 
library somewhere.

When I originally created this I didn't think anyone would use it; it was just
a fun lib to create at the time. I guess my bit array library is more useless.

Matt.

Sent from my Android device.
From: 'Oliver Seitz' via jallib <[email protected]>
Sent: Wednesday, December 9, 2020 1:12:04 PM
To: [email protected] <[email protected]>
Subject: Re: [jallib] Large_array update

Hi Rob,
It seems you didn't see that the maximum size of the large arrays is also
configurable. I've set it to 14.5 kB:
max_bytes_pic16=$((58*256))
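For reference, the arithmetic behind that setting (a quick check, not part of the script):

```python
# 58 chunks of 256 bytes each, as in max_bytes_pic16=$((58*256)):
max_bytes = 58 * 256
print(max_bytes)         # 14848
print(max_bytes / 1024)  # 14.5 -- the "14.5 kB" figure quoted above
```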

That way you can use all the memory in a linear fashion without stitching 
together several large_arrays. 
I would, however, keep a minimum of four large_arrays, since that is how it has
been until now.
The only drawback of the greater large_array size is that compilation takes
longer. Over 1700 arrays of 256 bytes each are defined, most of them thrown
out again as unused. That takes a bit of time.
Greets,
Kiste
On Wednesday, December 9, 2020 at 18:50:32 CET, Rob CJ <[email protected]>
wrote:

Hi Kiste,
Very nice. I ran your script and it seems to work perfectly. I have not yet
tested all memory locations.
We could add enough large arrays to Jallib to cover the 13 kB, and we should
mention that they are created with your script. So we would need at least 7
large arrays instead of the current 4.
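The count of 7 follows directly from the sizes involved; a quick check, assuming the 2048-byte maximum per large_array mentioned in this thread:

```python
import math

# Each current large_array holds at most 2048 bytes, so covering 13 kB
# of RAM linearly needs ceil(13 * 1024 / 2048) arrays.
ram_bytes = 13 * 1024            # 13312
bytes_per_array = 2048
arrays_needed = math.ceil(ram_bytes / bytes_per_array)
print(arrays_needed)             # 7
```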
If more memory becomes available we could generate more libraries. We must
then also save your script on GitHub.
What do the others think?
Kind regards,
Rob


From: 'Oliver Seitz' via jallib <[email protected]>
Sent: Wednesday, December 9, 2020 15:29
To: Jallib <[email protected]>
Subject: [jallib] Large_array update

Hi all!
Recently, new PIC controllers were released which break through the 4 kB RAM
boundary, and more have been announced with up to 13 kB.
The compiler has been updated to correctly address all of that memory, thanks 
Rob!
Now, if you want to use all that memory, you'll soon notice that there is no
way to use it in a linear fashion. Arrays can only hold 256 bytes, and the
large_array libs only go up to 2048 bytes. There are four of them, so even
when you use the RAM in four chunks, with the current version of large_array
you can only use 8 kB.

Attached you'll find a bash script which generates the large_array libs, with
the output files adjusted to cover the largest RAM space so far in a single
array.
The generated source code is nearly the same as that of the current
large_array (including the typos in the comments) written by Matt.
Of course everyone is encouraged to test their programs with these libraries,
but as the generated source code is virtually the same as the current one, I
presume there shouldn't be any problems.
Greets,
Kiste


-- 
You received this message because you are subscribed to the Google Groups 
"jallib" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jallib/1896695521.6146523.1607524193216%40mail.yahoo.com.

