Several things here aren't true.
First, this assumes you are always reading all the data; if that were the
case, why have an SQL database in the first place? It also assumes there is
only one table and that other tables aren't impacted.

The second mistake is the claim that randomly seeking through a large file
is free if you have an index. This isn't true. It's MUCH faster than it was
in the days of spinning platters, but it's far from free and very costly on
a mobile device. A memory-mapped file would need caching both in RAM and in
the CPU caches. Both of these are limited resources, and when they are
depleted you get thrashing to flash storage. Every time you change something
in that large file you trigger disk fragmentation, which has a real
performance impact even on an SSD. Yes, modern filesystems are better at
handling this, but writing to a single large file is still far more costly.
The problem is that these costs impact all your queries for the other tables
as well. RAM and CPU cache utilization is far more noticeable on mobile
devices than on the desktop, where the differences might not be noticeable.
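To make the seek cost concrete, here is a minimal, self-contained Java sketch (file name and offsets are made up for illustration) showing why "random access into a memory-mapped file" looks like a cheap array read in code while each scattered access can fault in a whole OS page and compete for the limited RAM and CPU caches discussed above:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSeekDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical demo file; the point is that every random get()
        // below can fault in a whole OS page (typically 4 KB) and evict
        // something else from the page cache.
        Path path = Files.createTempFile("mmap-demo", ".bin");
        byte[] data = new byte[1 << 20]; // 1 MB of predictable bytes
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) i;
        }
        Files.write(path, data);

        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            long sum = 0;
            // Widely scattered offsets: each one reads like a cheap array
            // access but may touch a different page of the file.
            for (int off = 0; off < data.length; off += 4099) { // prime stride
                sum += buf.get(off) & 0xFF;
            }
            // Sanity check that the mapping reflects the file contents.
            if ((buf.get(5) & 0xFF) == 5 && (buf.get(300) & 0xFF) == 44) {
                System.out.println("OK, scattered-read checksum=" + sum);
            }
        }
        Files.delete(path);
    }
}
```

Nothing in the Java source hints that the loop body is doing I/O, which is exactly why this pattern feels "free" on a desktop and then thrashes on a constrained device.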

We'd need to detect the various types of arguments that are submitted and
convert them to native calls for iOS. Passing arrays and other arbitrary
objects to the C layer in the iOS port is painful, so no one got around to
doing it for years. As I said, the demand for this was low, and our
resources are low as well.
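As a rough illustration of the kind of argument-type dispatch described above (this is a hypothetical sketch, not Codename One's actual iOS bridge; the marshal method and its null-delimited flattening are invented for the example), the marshalling layer has to inspect every submitted argument and reduce it to something a C layer can accept:

```java
public class NativeArgSketch {
    // Hypothetical marshaller: primitives and strings pass through,
    // arrays are flattened to a delimited string, anything else is rejected.
    static String marshal(Object arg) {
        if (arg instanceof String || arg instanceof Number || arg instanceof Boolean) {
            return arg.toString();
        }
        if (arg instanceof Object[]) {
            // Arrays are the painful case: there is no direct C equivalent,
            // so this sketch flattens them element by element.
            StringBuilder sb = new StringBuilder();
            for (Object o : (Object[]) arg) {
                if (sb.length() > 0) {
                    sb.append('\u0000'); // arbitrary separator for the demo
                }
                sb.append(marshal(o));
            }
            return sb.toString();
        }
        throw new IllegalArgumentException("Unsupported argument type: " + arg.getClass());
    }

    public static void main(String[] args) {
        System.out.println(marshal(42));
        System.out.println(marshal(new Object[]{"a", "b"}).replace('\u0000', '|'));
    }
}
```

Even in this toy form you can see why it's tedious: every Java type needs an explicit rule, and nested arrays or arbitrary objects multiply the cases the C side must understand.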

You received this message because you are subscribed to the Google Groups 
"CodenameOne Discussions" group.