Karl Ma wrote:
> The process is so memory-hungry that it starts swapping after physical RAM maxes out. (To be exact, I lowered the per-process limit to make this possible.)
What did you lower, exactly? If you reduce the maximum resident-set or data-segment size needlessly, you're going to make your program swap more and run much slower.
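(As a side note: if you want to see which per-process limits are actually in effect for your Python process, the stdlib `resource` module will report them directly. A minimal sketch; not every platform exposes every limit, hence the guard:)

```python
import resource

# Query the soft/hard limits most relevant to memory use.
# RLIMIT_DATA: max size of the data segment;
# RLIMIT_RSS:  max resident set size (advisory on some systems);
# RLIMIT_AS:   max total address space (not present on all platforms).
for name in ("RLIMIT_DATA", "RLIMIT_RSS", "RLIMIT_AS"):
    limit = getattr(resource, name, None)
    if limit is None:
        continue  # this platform does not expose this limit
    soft, hard = resource.getrlimit(limit)
    print(name, "soft:", soft, "hard:", hard)
```

`resource.RLIM_INFINITY` in the output means the limit is effectively unset; a small finite soft limit here would explain the early swapping.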
> However, when I use top to monitor it, the STATE of the process stays at "swread" most of the time (instead of RUN, as before it started swapping), its priority has dropped to -20, and the corresponding WCPU drops to around 1%. The total CPU time consumed (for the whole job) increases by only a minute or two even though the process has been running for more than a few hours.
Yes, because the task isn't using much CPU; it's entirely I/O bound.
> In Windows XP, which (I guess?) has fewer per-task resource restrictions, I did successfully complete the task on the same hardware, although it took more than 30 minutes. How can I raise the priority of the whole paging task? How can I allocate more CPU attention to this process? I've tried using "nice", but it does not help.
It won't help: nice only adjusts CPU scheduling priority, and your process is blocked on disk I/O, not waiting for CPU. Add more RAM, or adjust the program to be more clever about its use of memory, possibly by using Numeric/numarray.
The size of your Python process surprises me; Python tends to keep process sizes relatively small even when handling large data sets (i.e., more than 1 GB of data per day)...
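(Numeric/numarray have since been folded into NumPy, so as a rough illustration of the memory argument, and a sketch rather than the poster's actual code: storing numeric data in a packed typed array instead of a plain Python list removes most of the per-element overhead, which is often what makes a "surprisingly large" Python process:)

```python
import sys
import numpy as np  # modern successor to Numeric/numarray

n = 1_000_000

floats_as_list = [0.0] * n     # a list holds n pointers to Python objects;
                               # distinct float objects would each add
                               # roughly another 24+ bytes on top of that
floats_as_array = np.zeros(n)  # packed 8-byte doubles, no per-element objects

print("array data bytes:", floats_as_array.nbytes)          # exactly 8 * n
print("list pointer bytes alone:", sys.getsizeof(floats_as_list))
```

The array stores exactly `8 * n` bytes of data, while the list spends about that much on pointers alone before counting the float objects they reference; for a multi-gigabyte working set, that difference can be what keeps the process under physical RAM.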
--
-Chuck
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"