Re: [Beowulf] [External] load testing

2019-08-14 Thread Justin Y. Shi
Try dense matrix multiplication. You can find GPU matrix code online and integrate it with MPI. Increasing the matrix size can saturate machines of any size. Justin On Wed, Aug 14, 2019 at 11:14 AM Michael Di Domenico wrote: > On Wed, Aug 14, 2019 at 11:13 AM Prentice Bisbal via Beowulf > wrote: > > >
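
A minimal sketch of this suggestion (my code, not from the thread): each MPI rank grinds a naive dense matrix multiply, with N as the knob for scaling the load. A GPU BLAS call (e.g. cublasDgemm) could stand in for the inner loops; compile with mpicc and launch one rank per core or node.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024  /* illustrative size; raise it to lengthen the run */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *a = malloc(sizeof(double) * N * N);
    double *b = malloc(sizeof(double) * N * N);
    double *c = calloc((size_t)N * N, sizeof(double));
    for (long i = 0; i < (long)N * N; i++) { a[i] = 1.0; b[i] = 2.0; }

    double t0 = MPI_Wtime();
    for (int i = 0; i < N; i++)                /* naive O(N^3) multiply */
        for (int k = 0; k < N; k++)
            for (int j = 0; j < N; j++)
                c[i * N + j] += a[i * N + k] * b[k * N + j];
    double dt = MPI_Wtime() - t0, slowest;

    /* The slowest rank bounds the whole machine's effective throughput. */
    MPI_Reduce(&dt, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("N=%d slowest rank %.2f s (%.2f GFLOP/s per rank)\n",
               N, slowest, 2.0 * N * N * N / slowest / 1e9);

    MPI_Finalize();
    free(a); free(b); free(c);
    return 0;
}

The work grows as O(N^3) while memory grows only as O(N^2), which is why increasing N can saturate a machine of any size.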

Re: [Beowulf] HPC cloud bursting providers?

2017-02-20 Thread Justin Y. Shi
HPC cloud really does not work well for traditional HPC apps. The US National Science Foundation has built the "configurable" Chameleon cloud, which offers regular OpenStack VMs and bare-metal instances: https://www.chameleoncloud.org/ Justin On Mon, Feb 20, 2017 at 9:44 PM, Lev Lafayette

Re: [Beowulf] non-stop computing

2016-10-27 Thread Justin Y. Shi
Snapshot restart would only work for you if your application leaves restart points on disk. Otherwise, restarting from the snapshot is the same as restarting the program. Justin On Thu, Oct 27, 2016 at 9:57 AM, Michael Di Domenico wrote: > thanks for the insights.
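
For what "restart points on disk" can look like, a minimal application-level checkpointing sketch (file name, state layout, and checkpoint interval are my illustrative assumptions):

#include <stdio.h>

#define CKPT  "state.ckpt"
#define STEPS 1000000L

int main(void) {
    long i = 0;
    double acc = 0.0;

    /* Resume from the last checkpoint if one exists. */
    FILE *f = fopen(CKPT, "rb");
    if (f) {
        if (fread(&i, sizeof i, 1, f) != 1 ||
            fread(&acc, sizeof acc, 1, f) != 1) {
            i = 0;        /* unreadable checkpoint: start over */
            acc = 0.0;
        }
        fclose(f);
    }

    for (; i < STEPS; i++) {
        acc += (double)i;            /* stand-in for the real work */
        if (i % 10000 == 0) {        /* persist progress periodically */
            long next = i + 1;       /* resume at the step after this one */
            f = fopen(CKPT, "wb");
            fwrite(&next, sizeof next, 1, f);
            fwrite(&acc, sizeof acc, 1, f);
            fclose(f);
        }
    }
    printf("done: %f\n", acc);
    return 0;
}

Without such on-disk state, a restarted job has nothing to resume from and begins again at i = 0.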

Re: [Beowulf] non-stop computing

2016-10-26 Thread Justin Y. Shi
John's post is really funny! But I would only endorse Gavin's recommendation, for it solves the problem statistically (and correctly). Justin On Wed, Oct 26, 2016 at 12:07 AM, Christopher Samuel wrote: > On 26/10/16 14:45, John Hanks wrote: > > > I'd suggest making NFS

Re: [Beowulf] MPI, fault handling, etc.

2016-03-12 Thread Justin Y. Shi
AM, Justin Y. Shi <s...@temple.edu> wrote: > Great conversations for starting an exciting weekend! > > The critical architecture feature for extreme scale of anything is its > growth potential. Even for the large data transfer example, the > architecture does not break down.

Re: [Beowulf] MPI, fault handling, etc.

2016-03-12 Thread Justin Y. Shi
certain data rate through it at a particular (lower) error rate. > > > > > > Jim Lux > > (818)354-2075 (office) > > (818)395-2714 (cell) > > > > *From:* Justin Y. Shi [mailto:s...@temple.edu] > *Sent:* Thursday, March 10, 2016 3:12 PM > *To:* Lux,

Re: [Beowulf] [OT] MPI-haters

2016-03-11 Thread Justin Y. Shi
Great memories indeed. Thanks for C's hard work keeping the list alive! I happened to remember RGB and Vincent. My last personal web server, a first-generation dual-processor 386, just retired after nearly 30 years of "non-stop" service (except for power outages) ... It still runs... Justin On

Re: [Beowulf] MPI, fault handling, etc.

2016-03-10 Thread Justin Y. Shi
hat > might be an effective data rate reduction of 30-50% (depending on the > coding). > > > > > > > > Jim Lux > > (818)354-2075 (office) > > (818)395-2714 (cell) > > > > *From:* Justin Y. Shi [mailto:s...@temple.edu] > *Sent:* Thursday, March 10, 2016 1

Re: [Beowulf] MPI, fault handling, etc.

2016-03-10 Thread Justin Y. Shi
Not so fast, though. 100% reliability is practically achievable, and we are enjoying the results every day: the wireless and wired packet-switching networks. The problem is our tendency to draw fast conclusions. The one human-made architecture that defies the "curse" of component
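
The mechanism is retransmission over a lossy channel. A toy stop-and-wait sketch (mine; the 30% drop rate is an arbitrary assumption) shows every packet arriving eventually, with the unreliability surfacing only as extra transmissions, i.e. a lower effective data rate:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Simulated lossy link: a send is delivered (and ACKed) 70% of the time. */
static int lossy_send(void) {
    return (rand() % 100) < 70;
}

int main(void) {
    srand((unsigned)time(NULL));
    long total_tries = 0;
    for (int seq = 0; seq < 1000; seq++) {
        int tries = 0;
        do { tries++; } while (!lossy_send());  /* retransmit on timeout */
        total_tries += tries;
    }
    /* Delivery is 1000/1000 by construction; loss only costs bandwidth. */
    printf("1000/1000 delivered, avg transmissions per packet: %.2f\n",
           total_tries / 1000.0);
    return 0;
}

The expected average is 1/0.7, about 1.43 transmissions per packet: reliability bought with throughput, as the earlier posts about effective data rate describe.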

Re: [Beowulf] MPI, fault handling, etc.

2016-03-10 Thread Justin Y. Shi
I will support C's "hater" list effort just to keep a spotlight on this important subject. The question is not whether MPI is efficient. Fundamentally, all electronics will fail in unexpected ways. Bare-metal computing was important decades ago but is detrimental to large-scale computing. It is
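
One way to make the point concrete (my back-of-the-envelope numbers, assuming independent failures): if each node survives a job with probability p, all n nodes survive with probability p^n, which collapses as n grows.

#include <math.h>
#include <stdio.h>

int main(void) {                      /* link with -lm */
    double p = 0.9999;                /* assumed per-node reliability per job */
    int sizes[] = { 100, 1000, 10000, 100000 };
    for (int i = 0; i < 4; i++)
        printf("n = %6d  P(all nodes up) = %.6f\n",
               sizes[i], pow(p, sizes[i]));
    return 0;
}

At n = 100,000 the survival probability is about 4.5e-5, so treating failure as an exceptional case stops being tenable.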

Re: [Beowulf] Jeff Squayres MPI proposals

2016-03-07 Thread Justin Y. Shi
It's also interesting to observe that the API designs follow the XML-style (key, value) pair pattern in all these startups that want to handle failures ... Justin On Mon, Mar 7, 2016 at 8:32 AM, John Hearns wrote: > Indeed. Some interesting news here: > > >
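
A minimal sketch of the pattern (kv_put/kv_get are hypothetical names, not any particular startup's API): matching producers and consumers by key rather than by a fixed peer lets an unfinished task be re-fetched by any surviving worker.

#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 16

struct entry { char key[32]; char val[64]; };
static struct entry table[MAX_ENTRIES];
static int count = 0;

/* hypothetical put: store a (key, value) pair */
static void kv_put(const char *key, const char *val) {
    if (count < MAX_ENTRIES) {
        strncpy(table[count].key, key, sizeof table[count].key - 1);
        strncpy(table[count].val, val, sizeof table[count].val - 1);
        count++;
    }
}

/* hypothetical get: look up a value by key, NULL if absent */
static const char *kv_get(const char *key) {
    for (int i = 0; i < count; i++)
        if (strcmp(table[i].key, key) == 0)
            return table[i].val;
    return NULL;
}

int main(void) {
    kv_put("task:42", "pending");               /* any producer */
    printf("%s\n", kv_get("task:42"));          /* any consumer */
    return 0;
}

The indirection is the fault-handling hook: the pair outlives the worker that produced it.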

Re: [Beowulf] [OT] MPI-haters

2016-03-06 Thread Justin Y. Shi
wrote: > On Mon, Mar 7, 2016 at 5:58 AM, Justin Y. Shi <s...@temple.edu> wrote: > > Peter: > > > > Thanks for the questions. > > > > It was theoretically proved that it is impossible to > > implement reliable communication in th

Re: [Beowulf] [OT] MPI-haters

2016-03-06 Thread Justin Y. Shi
n that MPI is cost-ineffective in > proportion to reliability? If so, why? > > Thanks, > Peter > > On Sun, Mar 6, 2016 at 11:10 AM, Justin Y. Shi <s...@temple.edu> wrote: > >> Actually my interest in your group is not much between "hate" and &

Re: [Beowulf] [OT] MPI-haters

2016-03-06 Thread Justin Y. Shi
ment is AnkaCom, which was designed to tackle data-intensive HPC without scaling limits. My apologies in advance for my shameless self-advertising. I am looking for serious collaborators who are interested in breaking this decades-old barrier. Justin Y. Shi s...@temple.edu SERC 310 Temple University

Re: [Beowulf] [OT] MPI-haters

2016-03-04 Thread Justin Y. Shi
Thank you for creating the list. I have subscribed. Justin On Fri, Mar 4, 2016 at 5:43 AM, C Bergström wrote: > Sorry for the shameless self indulgence, but there seems to be a > growing trend of love/hate around MPI. I'll leave my opinions aside, > but at the same

[Beowulf] [hpc-announce] InterCloud HPC 2014 Deadline Extension: March 25

2014-03-17 Thread Justin Y. Shi
ORGANIZERS Justin Y. Shi Computer and Information Sciences Department Temple University Philadelphia, PA, USA Phone: +1 (215) 204-6437 Email: s...@temple.edu Boleslaw K. Szymanski Department of Computer Science Rensselaer Polytechnic Institute, Troy, New York, USA Phone: +1

Re: [Beowulf] Exascale by the end of the year?

2014-03-13 Thread Justin Y. Shi
://sea.ucar.edu/event/statistic-multiplexed-computing-smc-neglected-path-unlimited-application-scalability I am also helping with InterCloud HPC 2014 in Italy this year. Please submit your work if interested. Many thanks in advance! Justin Y. Shi s...@temple.edu On Wed, Mar 5, 2014 at 9:49 PM, Christopher

[Beowulf] InterCloud HPC 2014 Deadline Extension: 3/25/2014

2014-03-13 Thread Justin Y. Shi
, 2014 Conference Dates: - July 21 - 25, 2014 SYMPOSIUM ORGANIZERS Justin Y. Shi Computer and Information Sciences Department Temple University Philadelphia, PA, USA Phone: +1 (215) 204-6437 Email: s...@temple.edu Boleslaw K

[Beowulf] [hpc-announce] InterCloud-HPC 2013 Call for Papers

2013-03-16 Thread Justin Y. Shi
: April 05, 2013 Camera Ready Papers and Registration Due: April 23, 2013 Conference Dates: July 1 – 5, 2013 SYMPOSIUM ORGANIZERS Justin Y. Shi Computer and Information Sciences Department

[Beowulf] [hpc-announce] InterCloud-HPC 2013 Call for Papers

2013-03-12 Thread Justin Y. Shi
Notification: April 05, 2013 Camera Ready Papers and Registration Due: April 23, 2013 Conference Dates: July 1 – 5, 2013 SYMPOSIUM ORGANIZERS Justin Y. Shi Computer and Information Sciences Department