Try dense matrix multiplication. You can find GPU matrix code online and
integrate it with MPI. Increasing the matrix size can saturate machines of
any size.
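A minimal sketch of such a benchmark, assuming mpi4py and NumPy are
installed (a GPU array library such as CuPy could stand in for NumPy on GPU
nodes); the matrix size is a command-line argument, so growing it will load
any node count:

from mpi4py import MPI
import numpy as np
import sys, time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# matrix dimension; keep increasing it until every node is saturated
n = int(sys.argv[1]) if len(sys.argv) > 1 else 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.time()
c = a @ b                      # dense GEMM on this rank
dt = time.time() - t0

gflops = 2.0 * n ** 3 / dt / 1e9
print(f"rank {rank}: {n} x {n} GEMM in {dt:.2f} s ({gflops:.1f} GFLOP/s)")

Launched as, say, "mpirun -np 64 python gemm_bench.py 8192" (the script name
is just an example), every rank runs the multiply independently.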
Justin
On Wed, Aug 14, 2019 at 11:14 AM Michael Di Domenico
wrote:
> On Wed, Aug 14, 2019 at 11:13 AM Prentice Bisbal via Beowulf
> wrote:
> >
>
HPC cloud really does not work well for traditional HPC apps. The US National
Science Foundation has built the "configurable" Chameleon cloud, which offers
regular OpenStack VMs and bare-metal instances: https://www.chameleoncloud.org/
Justin
On Mon, Feb 20, 2017 at 9:44 PM, Lev Lafayette
Snapshot restart would only work for you if your application leaves
restart points on disk. Otherwise, restarting from the snapshot is the same
as restarting the program from the beginning.
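A minimal sketch of such application-level restart points, with hypothetical
file and variable names: the loop state is saved to disk periodically, so a
restarted job resumes from the last saved step instead of from zero.

import json, os

CKPT = "restart.json"              # hypothetical checkpoint file

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"step": 0, "total": 0.0}

def save_checkpoint(state):
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:      # write-then-rename keeps the file consistent
        json.dump(state, f)
    os.replace(tmp, CKPT)

state = load_checkpoint()
for step in range(state["step"], 1_000_000):
    state["total"] += step         # stand-in for the real computation
    state["step"] = step + 1
    if step % 10_000 == 0:
        save_checkpoint(state)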
Justin
On Thu, Oct 27, 2016 at 9:57 AM, Michael Di Domenico wrote:
> thanks for the insights.
John's post is really funny! But I would only endorse Gavin's
recommendation, because it solves the problem statistically (and correctly).
Justin
On Wed, Oct 26, 2016 at 12:07 AM, Christopher Samuel
wrote:
> On 26/10/16 14:45, John Hanks wrote:
>
> > I'd suggest making NFS
AM, Justin Y. Shi <s...@temple.edu> wrote:
> Great conversations for starting an exciting weekend!
>
> The critical architecture feature for extreme scale of anything is its
> growth potential. Even for the large data transfer example, the
> architecture does not break down.
certain data rate through it at a particular (lower) error rate.
>
> Jim Lux
> (818)354-2075 (office)
> (818)395-2714 (cell)
>
> From: Justin Y. Shi [mailto:s...@temple.edu]
> Sent: Thursday, March 10, 2016 3:12 PM
> To: Lux,
Great memories indeed. Thanks for C's hard work to keep the list alive!
I happened to remember RGB and Vincent. My last personal web server, a
first-generation dual-processor 386, just retired after nearly 30 years of
"non-stop" service (except for power outages) ... It still runs...
Justin
On
hat
> might be an effective data rate reduction of 30-50% (depending on the
> coding).
>
> Jim Lux
> (818)354-2075 (office)
> (818)395-2714 (cell)
>
> From: Justin Y. Shi [mailto:s...@temple.edu]
> Sent: Thursday, March 10, 2016 1
Not that fast, though. 100% reliability is practically achievable, and we
are enjoying the results every day: I mean the wireless and wired
packet-switching networks.
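A toy illustration of that point (my own sketch, not from the thread): a link
that drops each packet with probability p still delivers every message once
you retransmit until acknowledged, at an average cost of roughly 1/(1-p)
attempts per message.

import random

def send_with_retries(p_drop, rng):
    attempts = 0
    while True:
        attempts += 1
        if rng.random() > p_drop:   # the packet (and its ack) made it through
            return attempts

rng = random.Random(42)
p = 0.3
tries = [send_with_retries(p, rng) for _ in range(100_000)]
print(f"delivered {len(tries)} of {len(tries)} messages")
print(f"mean attempts: {sum(tries)/len(tries):.2f} (theory: {1/(1-p):.2f})")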
The problem is our tendency to draw fast conclusions. The one human-made
architecture that defies the "curse" of component
I will support C's "hater" listing effort just to keep a spotlight on this
important subject.
The question is not whether MPI is efficient or not. Fundamentally, all
electronics will fail in unexpected ways. Bare-metal computing was important
decades ago but is detrimental to large-scale computing. It is
It's also interesting to observe that the API designs follow the XML-style
(key, value) pair in all these startups that want to handle failures ...
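A hedged sketch of why that (key, value) decoupling helps with failures (my
own illustration, not any particular product's API): work is addressed by key
rather than bound to a worker, so a task whose worker dies is simply
re-queued and picked up by any survivor.

import queue

tasks = queue.Queue()
for key in ["t1", "t2", "t3"]:
    tasks.put(key)

failed_once = set()
results = {}                        # key -> value

def worker(key):
    # simulate one transient failure on t2
    if key == "t2" and key not in failed_once:
        failed_once.add(key)
        raise RuntimeError("simulated worker failure")
    return f"result-of-{key}"

while not tasks.empty():
    key = tasks.get()
    try:
        results[key] = worker(key)
    except RuntimeError:
        tasks.put(key)              # re-queue: any surviving worker can retry
print(results)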
Justin
On Mon, Mar 7, 2016 at 8:32 AM, John Hearns wrote:
> Indeed. Some interesting news here:
>
wrote:
> On Mon, Mar 7, 2016 at 5:58 AM, Justin Y. Shi <s...@temple.edu> wrote:
> > Peter:
> >
> > Thanks for the questions.
> >
> > The impossibility was proved theoretically: it is impossible to
> > implement reliable communication in th
n that MPI is cost-ineffective in
> proportion to reliability? If so, why?
>
> Thanks,
> Peter
>
> On Sun, Mar 6, 2016 at 11:10 AM, Justin Y. Shi <s...@temple.edu> wrote:
>
>> Actually my interest in your group is not much between "hate" and &
ment is AnkaCom, which was designed to tackle data-intensive
HPC without scaling limits.
My apologies in advance for my shameless self-advertising. I am looking
for serious collaborators who are interested in breaking this decade-old
barrier.
Justin Y. Shi
s...@temple.edu
SERC 310
Temple University
Thank you for creating the list. I have subscribed.
Justin
On Fri, Mar 4, 2016 at 5:43 AM, C Bergström
wrote:
> Sorry for the shameless self indulgence, but there seems to be a
> growing trend of love/hate around MPI. I'll leave my opinions aside,
> but at the same
ORGANIZERS
Justin Y. Shi
Computer and Information Sciences Department
Temple University
Philadelphia, PA, USA
Phone: +1 (215) 204-6437
Email: s...@temple.edu
Boleslaw K. Szymanski
Department of Computer Science
Rensselaer Polytechnic Institute, Troy, New York, USA
Phone: +1
://sea.ucar.edu/event/statistic-multiplexed-computing-smc-neglected-path-unlimited-application-scalability
I am also helping with InterCloud HPC 2014 in Italy this year. Please
submit your work if interested.
Many thanks in advance!
Justin Y. Shi
s...@temple.edu
On Wed, Mar 5, 2014 at 9:49 PM, Christopher
, 2014
Conference Dates: July 21 - 25, 2014
SYMPOSIUM ORGANIZERS
Justin Y. Shi
Computer and Information Sciences Department
Temple University
Philadelphia, PA, USA
Phone: +1 (215) 204-6437
Email: s...@temple.edu
Boleslaw K
Notification: April 05, 2013
Camera Ready Papers and Registration Due: April 23, 2013
Conference Dates: July 1 – 5, 2013
SYMPOSIUM ORGANIZERS
Justin Y. Shi
Computer and Information Sciences Department