In particular, focus on what your primary bottlenecks are.
If your data set isn't that large but you're wasting a lot of time
getting data off disk, then a large RAM disk (8 gigabytes for two
GC-RAMDISKs combined, at
http://www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180&ProductName=GC-RAMDISK
or 16 gigabytes at
http://www.hyperossystems.co.uk/07042003/products.htm#hyperosHDIIproduct)
can speed things up tremendously. (And yes, you can do some of this
through on-board memory, but sometimes it's easier to work with an
available super-fast drive than to deal with memory management.)
Douglas Roberts wrote:
Owen:
I'm all for practical. But first, show us your requirements. A "step
or two towards higher performance" is a bit vague. ;-}
What's your goal: 16 million agents, simulated at 80X real time?
Or something less. Or something more.
Joking aside,
What are your requirements? How much do you need to scale now, and how
far do you need to scale eventually? How soon do you need to do it?
What are your agent complexities, output requirements, data I/O needs,
and post-processing requirements? What existing designs do you have
now, and what are their limitations? What is the memory footprint of
your existing implementation? What are your current run times? Etc.,
etc., etc...
System requirements should come first; these will lead to suggestions
for SW & HW implementation environments.
--Doug
--
Doug Roberts, RTI International
[EMAIL PROTECTED]
[EMAIL PROTECTED]
505-455-7333 - Office
505-670-8195 - Cell
On 10/7/06, Owen Densmore <[EMAIL PROTECTED]> wrote:
On Oct 7, 2006, at 10:29 AM, Owen Densmore wrote:
> Turns out there is a poll being taken on some mail lists on the topic
> of new parallel hardware and if/how it will be used:
> Parallelism: the next generation -- a small survey
> http://www.nabble.com/A-small-survey-tf2337745.html
>
> -- Owen
OK, so we've had an interesting interchange on Distribution /
Parallelization of ABMs. But what I'm interested in is a bit more
practical:
Given what *we* want to do, and given the recent advances in desktop,
workstation, and server computing, and given our experiences over the
last year with things like the Blender Render Farm .. what would be
the most reasonable way for us to take a step or two toward higher
performance?
- Should we consider buying a fairly high-performance Linux box?
- How about buying a multi-processor/multi-core system?
- Do we want to consider a shared Santa Fe Super Cluster?
- What public computing facilities could we use?
And possibly more to the point:
- What computing architecture are we interested in?
I'll say from my experience, I'm mainly interested in two approaches:
- Unix-based piped systems, where I don't have to consider the
architecture in my programs, only the way I use sh/bash to execute
them so they work well in parallel. In plain words: good
parameter scanning, or piped tasks (model, visualize, render) using
built-in Unix piping mechanisms with parallel execution of the
programs. I've done this in the past with a dramatic reduction in
elapsed time. And it's dead simple.
- Java or similar multi-threaded approaches, where I need a bit of
awareness in my code as to how I approach parallelism, but *the
language supports it*. I'm not very interested in exotic,
difficult-to-maintain grid/cluster architectures; I'm not at all
convinced they make sense for the scale we're approaching. And,
yes, Java is good enough.
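The first approach (sh/bash-driven parallel parameter scanning) can be sketched in a few lines. This is a minimal, hypothetical example: `simulate` is a placeholder shell function standing in for a real model binary, and the parameter values and file names are assumptions, not anything from an actual model.

```shell
#!/bin/sh
# Placeholder for a real model binary: takes a population size,
# prints one result line. (Hypothetical stand-in.)
simulate() { echo "size=$1 result=$(( $1 * 2 ))"; }

# Launch one background job per parameter value -- this is all the
# "parallel architecture" the script ever has to know about.
for size in 100 200 400; do
    simulate "$size" > "out-$size.txt" &
done
wait    # block until every background run has finished

# Collect the per-run outputs into a single results file.
cat out-100.txt out-200.txt out-400.txt > results.txt
```

The same shell machinery handles the piped-tasks pattern: a pipeline like `model | visualize | render` already runs its stages concurrently, since each stage processes data as the previous one emits it.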
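The second approach can be sketched with the standard `java.util.concurrent` package (in the JDK since Java 5), which is the kind of in-language support referred to above. Here `simulate` is again a hypothetical stand-in for a real model step; the thread pool runs the parameter sweep across available cores with only a small amount of parallelism-awareness in the code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSweep {
    // Hypothetical model step: stands in for one simulation run.
    static int simulate(int size) {
        return size * 2;
    }

    public static void main(String[] args) throws Exception {
        int[] sizes = {100, 200, 400};
        // One worker thread per available core.
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Integer>> results = new ArrayList<>();
        for (int size : sizes) {
            // Queue one task per parameter; the pool schedules them.
            results.add(pool.submit(() -> simulate(size)));
        }
        for (int i = 0; i < sizes.length; i++) {
            // get() blocks until that particular run has finished.
            System.out.println("size=" + sizes[i]
                    + " result=" + results.get(i).get());
        }
        pool.shutdown();
    }
}
```

The design choice is the same as in the shell case: the code only decides *what* the units of work are; the executor decides how they map onto hardware, so the program ports unchanged from a dual-core desktop to a many-core server.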
In other words, given Redfish, Commodicast, and other local
scientific computing endeavors, what would be interesting systems for
our scale of computing? I.e. reasonable increase in power with
modest change in architecture.
Owen
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
============================================================