> After some investigations I stated that count_node_pages() was computing
> mem_data[1].min_pfn = 0, and mem_data[1].max_pfn = 20000 for node 1,
> thus conflicting with the 0-2GB DMA memory range on node 0.
> This is due to the line:
> 	start = ORDERROUNDDOWN(start);
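
(To make the failure mode concrete, here is a toy sketch of that rounddown.
The MAX_ORDER value, macro name, and pfns below are made-up illustrations,
not code from discontig.c:)

    #include <stdio.h>

    /* Toy values; the real MAX_ORDER and macro live in the ia64 tree. */
    #define TOY_MAX_ORDER           18
    #define TOY_ORDER_PAGES         (1UL << (TOY_MAX_ORDER - 1)) /* pfns per block */

    /* Round a pfn down to the start of its MAX_ORDER-sized block. */
    #define TOY_ORDERROUNDDOWN(pfn) ((pfn) & ~(TOY_ORDER_PAGES - 1))

    int main(void)
    {
            unsigned long node1_start_pfn = 0x18000; /* hypothetical first pfn on node 1 */

            /*
             * If node 1 starts partway through a MAX_ORDER block, the
             * rounddown pulls min_pfn back below the node's real start;
             * here all the way to pfn 0, which belongs to node 0's DMA range.
             */
            printf("node 1 min_pfn after rounddown: 0x%lx\n",
                   TOY_ORDERROUNDDOWN(node1_start_pfn));
            return 0;
    }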
There is an assumption here that the memory space on a node doesn't cross a
MAX_ORDER boundary ... and I'm not really sure where to go with that. Your
patch papers over the problem for your specific case, but as you point out it
will just re-appear for someone who picks a bigger MAX_ORDER. Having nodes
that are smaller than MAX_ORDER will cause confusion in the allocator (if all
the memory belonging to two nodes is in a single MAX_ORDER page, the buddy
allocator will give all the memory to one node, and none to the other ...
won't it?).

> This should at least be checked in the count_node_pages() function.

Yes, a check should be made ... but count_node_pages() doesn't have all the
information it needs to do this. It just gets the start/size for the memory
on the node, and it needs to check whether the rounddown of the start address
(or the roundup of the end address) would cause conflicts with memory
belonging to other nodes (rough sketch of such a check at the end of this
mail).

Do we need a "max_order" variable that could be adjusted to some lower value
than MAX_ORDER if we find the memory topology doesn't fit inside the lines?

-Tony
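
P.S. To sketch what I mean by "doesn't have all the information": something
along the lines below would be needed, and the loop over the other nodes'
ranges is the part count_node_pages() can't do today, since it only sees one
node at a time. All of the names here (node_range, nr_nodes, the helper) are
hypothetical, not code from the tree.

    /* Hypothetical sketch; none of these names exist in the ia64 tree. */
    struct node_range {
            unsigned long   start_pfn;
            unsigned long   end_pfn;        /* exclusive */
    };

    /*
     * Return non-zero if rounding node 'nid' out to MAX_ORDER-sized
     * boundaries would make it overlap memory belonging to another node.
     */
    static int rounded_range_overlaps(struct node_range *ranges, int nr_nodes,
                                      int nid, unsigned long order_pages)
    {
            unsigned long start = ranges[nid].start_pfn & ~(order_pages - 1);
            unsigned long end = (ranges[nid].end_pfn + order_pages - 1) &
                                ~(order_pages - 1);
            int i;

            for (i = 0; i < nr_nodes; i++) {
                    if (i == nid)
                            continue;
                    /* standard half-open interval overlap test */
                    if (start < ranges[i].end_pfn && ranges[i].start_pfn < end)
                            return 1;
            }
            return 0;
    }

The "max_order" idea above would amount to lowering order_pages until this
returns 0 for every node.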
