>
> Something as simple as this does not give me any reason to code out the
> thing on two different OS. Why do I need multiple OSes for writing
> parallel algorithms ?

I didn't bring that in - you did! What were you up to with the Windows PVM,
then? That question is best answered by you!

> If you are talking about real benchmarking to see whether you get some
> kind of a lift from distribbing the algo, you might as well find it
> analytically. As to the case of an analytical explanation not being
> confirmed by expt. ones, the problem probably lies in the analysis.
1. If it is purely analytical, it is no longer 'real'.

2. If you are able to identify all the variables involved during a normal
run of code, you are either
          a. a genius with more patience than a job, or
          b. simplifying your situation with assumptions which may be valid
but nevertheless introduce errors.
Experimentation assumes greater importance in distributed systems and in
real-time/multitasking systems with semaphores/monitors in operation.
Results can at best be empirical, as it might be *impossible* to reproduce
the exact series of conditions for every run. You CANNOT predict when
exactly a given sequence of code will finish - a fundamental tenet of all
multitasking systems, i.e. the OSes of today. You have to assume something
and test it out to ensure that the assumptions are valid before you can
analyse further. For example, do try explaining precisely how, and with
what relationship, Apache's performance degrades with load on
multiprocessor systems with more than 4 processors and multiple
interfaces. I am talking of something which is being done today - a link
to such a story appeared in LI recently, IIRC.
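The non-reproducibility point above can be sketched in a few lines (Python
here purely for brevity - the same holds for pthreads or MPI ranks): both
threads always finish, but which one finishes first is the scheduler's
choice, not the program's, so run-to-run conditions are not reproducible.

```python
# Sketch: two threads race to record their name in a shared list.
# The OS scheduler, not the program, decides the final order, so the
# order can differ from run to run even though the code is identical.
import threading

order = []
lock = threading.Lock()

def worker(name):
    # Each append is protected by the lock, but the *order* of the
    # appends is still entirely up to the scheduler.
    with lock:
        order.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both workers always complete; "A" before "B" is NOT guaranteed.
print(sorted(order))
```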

> All this while I have been talking at the algorithmic level, it is a well
> known fact that algos do not scale efficiently and things at the
> application level would be rather different.

Exactly. You do need to experiment to find practical situations.
> However, it still beats me as to how you can even think of benching
> on a uniproc VM. You will definitely get incorrect indications. It might
> even turn out that your parallel implementation is slower than your
> sequential implementation.
The VM was VMware, which distributes the CPU evenly among the different
VMs. If the difference is large, that is a pointer - NOT a cast-iron
result. I am talking of preliminary analysis before actual implementation.
The case you are referring to is only possible with a bad VM
implementation; IMHO VMware does a good job.
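A minimal sketch of the misleading uniproc indication being discussed
(Python for brevity; the workload and thread count are illustrative): the
threaded run adds scheduling overhead without adding compute capacity, so
on a single processor it can easily come out slower than the sequential
one, even though both compute the same answer.

```python
# Sketch: the same reduction done sequentially and with 4 threads.
# On one processor the threads only add scheduling overhead.
import threading
import time

N = 200_000

def partial_sum(lo, hi, out, idx):
    # Each thread sums its own chunk into a private slot.
    out[idx] = sum(range(lo, hi))

# Sequential run.
t0 = time.perf_counter()
seq = sum(range(N))
t_seq = time.perf_counter() - t0

# "Parallel" run.
t0 = time.perf_counter()
results = [0] * 4
chunk = N // 4
threads = [
    threading.Thread(target=partial_sum,
                     args=(i * chunk, (i + 1) * chunk, results, i))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
par = sum(results)
t_par = time.perf_counter() - t0

# The answers must agree; the timings need not favour the threads.
print(f"sequential: {t_seq:.4f}s  threaded: {t_par:.4f}s  equal: {seq == par}")
```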
> >*> Repeating
> >*>myself- if and only if under an Ideal situation - if there is a
> >*>significant performance difference between a MIMD implementation of
> >*>any algorithm and the SIMD one ( IT MAY NOT always be faster, taking
> >*>the underlying h/w into consideration)
>
> Restating my assertion, you can do this analytically on an algo level,
> why even bother to write #include <mpi.h>? Also, please state what you
> mean by an Ideal situation?

Who is? You cannot access a VMware VM through another VM directly. For all
intents and purposes they are independent machines, not plain VMs.

>
> >*>  Then one goes about trying
> >*>to use a Parallel architecture to get a speedup ( I am talking only
> >*>of the need for speed - NOT fault tolerance, which is an entirely
> >*>different story). This is where the price/performance ratio and user
> >*>needs come in. If you have enough cash - go in for a close-coupled
> >*>machine with the attendant *High* cost. If you can get an acceptable
> >*>level of performance ( by user definition) using a loosely coupled
> >*>architecture like the NOW ( Network of workstations ) then go for it..
>
> You started this by justifying the use of a uniproc VMware; now you have
> moved into hardware granularity. Voila!!
You have lost the thread. Did you notice that I had changed the header?
>
> >*>  A Beowulf is a software implementation of a
> >*>closely coupled system on loosely coupled hardware. The end result is
> >*>transparency - NOT a difference in speed with a NOW/POP ( The reason
> >*>I told you to look up the definition is because typical examples
> >*>using named pipes do NOT require a Beowulf to run).
>
>
> Tch, tch... Couldn't be further from the truth. Which "definition" of
> Beowulf are you looking at anyway? Try running a Beowulf cluster on a
> general-purpose network and be ready to be booed down and flamed by the
> Beowulfers. You can definitely run NOWs on gen-purp networks... NOWs are
> NOT optimised for speed, but Beowulfs (or should it be Beowulves??) are.
> It is amazing how you got confused on this point. BASICALLY, NOWs are
> about heterogeneity/COTS in parallel environments, Beowulves are for
> speed.
>
I think you have misread something. There is so much that can be said
w.r.t. this that I suggest we move it off the list and talk about it in
college for an hour or so. No point in spamming the list!

> <OT>
> In fact, replace that "simple network" with a quiet 100Mbps backplane
> and you GET a Beowulf. Definitions can be very misleading. For example,
> could you refer me to a necessary and sufficient description of system
> software so as to distinguish it from application software? Do you STILL
> believe in what the ICSE books taught us about them, huh? Looks easy,
> but as you know you can easily contradict yourself after having provided
> a definition. Non-theoretical computer science is so much based on
> definitions which trod the fringes. E.g., is middleware in the geometric
> middle, actually? (Bad humor)
> </OT>
I came through CBSE, not ISC!!! Have you forgotten the days, years back,
when we used to roam about in the BITM, Bosco Byte, SPHS.... where we
first met!

>
> Who bothers about a __significant__ speedup?? You can get __actual__
> speedups creating an analytical model and substituting parameters like
> processor periodicity and inter-processor latency, or cyclic process
> allocation. Don't even talk about VMs while trying to benchmark. If you
> have multiple processors, get similar types of OS running to create a
> Beowulf. If you are stuck with a single processor, don't even think of
> benchmarking. Stick to heterogeneity tests (which is NOT Beowulf) or at
> the most efficient parallelisation of code (although I haven't the
> faintest idea of how you are going to prove this on a uniproc :) All you
> can prove is the optimality of the surface-to-volume ratio.

1. Define the difference between "significant" and "actual" - I'd like to
hear your definition.
2. Did you think those were the only parameters?
3. Look up threads under Win32 or Digital Unix. While they are not exactly
what I am driving at, they should provide a good beginning. We need to
talk this over, not on the list.
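On the surface-to-volume remark in the quote above, the idea can be made
concrete: in domain decomposition, communication scales with a block's
surface and computation with its volume, so for a cubic sub-domain of side
n the comm-to-comp ratio falls as 6/n. A quick sketch (Python, purely
illustrative numbers):

```python
# Sketch: surface-to-volume ratio for a cubic sub-domain of side n.
# Communication cost ~ surface (6*n^2), computation ~ volume (n^3),
# so the ratio is 6/n: larger blocks per node need relatively fewer
# messages per unit of computation.
def surface_to_volume(n):
    surface = 6 * n * n   # faces exchanged with neighbouring nodes
    volume = n ** 3       # cells computed locally
    return surface / volume

for n in (2, 4, 8, 16):
    print(n, surface_to_volume(n))
```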
> >*>  The entire point of this discussion from my side has
> >*>been directed towards that. Processors will keep getting faster, and
> >*>this means newer MIMD algorithms which need fewer
> >*>interactions/messages to pass are needed.
>
> I am not sure I get you. Are you talking about time complexity as a
> function of processor speed? In that case, I need a revision of my algo
> analysis and design theory... I am not sure I encountered that
> anywhere. Anybody did?
You are now wilfully misunderstanding things. The Beowulf HOWTO contained
some stuff on this. If designing a SIMD or closely coupled MIMD system
were cheap/easy, we would never have needed a Beowulf. Can you honestly
say that the ratio of CPU speed to interconnection-network speed
(regardless of coupling type) has remained static? The performance penalty
has grown heavier over the years: the hit a processor takes while it is
forced to wait for a message, unable to continue the algo, has increased
by leaps and bounds. This is one of the main areas of contention - you
cannot budget for this analytically.
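The widening gap can be put in numbers (hypothetical figures, chosen only
to illustrate the trend): hold the per-message latency fixed and shrink
the compute time per step as CPUs get faster, and the fraction of wall
time a processor spends waiting on the interconnect grows, even though
the algorithm itself is unchanged.

```python
# Sketch with assumed numbers: fixed per-message latency versus
# shrinking compute time per step. The idle fraction grows as the
# CPU gets faster relative to the network.
LATENCY = 1e-4  # seconds per message exchange -- assumed constant

def waiting_fraction(compute_time):
    # One message exchange per compute step.
    return LATENCY / (compute_time + LATENCY)

slow_cpu = waiting_fraction(1e-3)   # compute dominates the step
fast_cpu = waiting_fraction(1e-5)   # latency dominates the step
print(f"slow CPU waits {slow_cpu:.0%} of the time, fast CPU waits {fast_cpu:.0%}")
```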

> >*> Once you have got such a situation - even multiple  733 Mhz m/c
> >*>on such a network will be useful.
>
> I don't get this one. Did you miss a negation somewhere? And why the
> "even"?
Hope you got it now.

Shanker



--
To unsubscribe, send mail to [EMAIL PROTECTED] with the body
"unsubscribe ilug-cal" and an empty subject line.
FAQ: http://www.ilug-cal.org/help/faq_list.html
