> OK, I am quite excited about the future possibilities of 64-bit Pharo.
> So I played a bit more with the current test version [1], trying to push
> the limits. In the past, it was only possible to safely allocate about
> 1.5 GB of memory even though a 32-bit process' limit is theoretically
> 4 GB (the OS and the VM need space too).

The limit for a 32-bit process is 2 GB, apart from some exceptions:
https://en.wikipedia.org/wiki/2_GB_limit
This happens because the OS reserves part of the process's address space (1 GB or more, depending on the OS) for its own use.
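If you want to see where that ceiling actually sits on your machine, you can probe it from inside Pharo itself. Here is a minimal sketch, assuming a stock image evaluated in a playground; the 100 MB chunk size and the variable names are arbitrary choices of mine, not anything prescribed by the VM. It just keeps allocating ByteArrays until the allocation primitive fails, then reports the total:

  | chunks allocated |
  chunks := OrderedCollection new.
  allocated := 0.
  [ [ chunks add: (ByteArray new: 100 * 1024 * 1024).  "grab 100 MB at a time"
      allocated := allocated + 100 ] repeat ]
      on: Error
      do: [ :ex | Transcript show: 'allocation failed after about ',
                allocated printString, ' MB'; cr ].
  chunks := nil.  "drop the references so the GC can reclaim the space"

On a 32-bit VM this should give out somewhere around the 1.5 GB mark mentioned above, depending on the platform; on the 64-bit VM the same loop should go much further.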
You don't want to run out of physical memory. There is a bug in Xcode's file indexing that eats away all of my physical memory, and the whole system comes to a crawl the moment free memory reaches 700 MB; by "crawl" I mean that it takes 10 seconds to move the mouse from point A to point B and another 10 to click. I don't have an SSD; with one it would probably still crawl, but much less badly.

In any case, if you decide to push the computer to its limits, always remember to have backups, because running out of memory is one of the worst things that can happen to an OS. Viruses used to crash computers by filling memory with useless data, which I suspect is the reason why OSes no longer allow a single process to grab all the free memory. But if you have your backups, hack away.

I will play the devil's advocate here and say that Pharo would be OK loading a couple of GB, but processing them would probably send it to snail speeds. That assumption is based on benchmarks for VisualWorks that show it as around 50 times slower than an average C application. But even C applications have a hard time keeping up when you go above Mach 1, a.k.a. 1 GB. The 3D cover I rendered for PBE is around 2 GB, mainly because the ocean is around 4 million polygons, and Blender does not use just the CPU; it also uses the GPU for acceleration.

Nonetheless, it's a big improvement for Pharo. Congratulations are deserved for everyone involved, and of course a thank you :)
