Several things here aren't true. First, this assumes you are always reading all the data; if that were the case, why have a SQL database in the first place? It also assumes there's only one table and that other tables aren't impacted.
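To make the first point concrete, here's a minimal sketch (the table, column, and index names are made up for illustration) showing that with an index SQLite plans an index search over only the matching rows, rather than scanning all the data:

```python
import sqlite3

# In-memory database with a hypothetical "people" table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
con.executemany("INSERT INTO people (name, age) VALUES (?, ?)",
                [("p%d" % i, i % 90) for i in range(1000)])
con.execute("CREATE INDEX idx_people_age ON people(age)")

# EXPLAIN QUERY PLAN shows SQLite searching via the index,
# not reading the whole table.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM people WHERE age = 30").fetchall()
print(plan)
```

Without the `CREATE INDEX` line the same plan comes back as a full table scan, which is the "reading all the data" case the quoted claim wrongly treats as the norm.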
The second mistake is the assumption that randomly seeking through a large file is free if you have an index. It isn't. It's MUCH faster than it was in the days of spinning platters, but it's far from free, and it's very costly on a mobile device. A memory-mapped file needs caching both in RAM and in the CPU caches. Both of these are limited resources, and depleting them causes thrashing to flash storage. Every time you change something in that large file you trigger fragmentation, which has a real performance impact even on an SSD. Yes, modern filesystems are better at handling this, but writing to a single large file is still far more expensive. The problem is that this will impact all your queries on the other tables as well. RAM and CPU cache utilization is far more noticeable on mobile devices than on the desktop, where the difference might go unnoticed.

We'd need to detect the various types of arguments that are submitted and convert them to native calls for iOS. Passing arrays and other arbitrary objects to the C layer in the iOS port is painful, so no one got around to doing it for years. As I said, the demand for this was low and our resources are limited as well.
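To illustrate the kind of argument-type detection I mean, here's a minimal sketch in pseudo-binding style. None of these function names are Codename One's actual API or SQLite's C signatures; they're hypothetical stand-ins for the per-type native calls (null/integer/real/blob/text) the iOS port would have to route each argument to:

```python
# Hypothetical dispatcher: inspect each bound argument's type and pick
# the matching native call. The bind* names are made up for illustration.
def bind_native(index, arg):
    if arg is None:
        return "bindNull(%d)" % index
    if isinstance(arg, bool):
        # booleans ride along as integers in SQLite-style bindings
        return "bindLong(%d, %d)" % (index, int(arg))
    if isinstance(arg, int):
        return "bindLong(%d, %d)" % (index, arg)
    if isinstance(arg, float):
        return "bindDouble(%d, %s)" % (index, arg)
    if isinstance(arg, (bytes, bytearray)):
        return "bindBlob(%d, %d bytes)" % (index, len(arg))
    if isinstance(arg, str):
        return 'bindText(%d, "%s")' % (index, arg)
    # Arrays/arbitrary objects are exactly the painful case: there is
    # no single native call to hand them to, so they must be rejected
    # or flattened explicitly.
    raise TypeError("unsupported argument type: %r" % type(arg))

for i, p in enumerate([42, 3.14, "hello", None], start=1):
    print(bind_native(i, p))
```

The awkward part on iOS is the last branch: every container or arbitrary object needs its own explicit conversion path through the C layer, which is why this stayed unimplemented for so long.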