09.10.2014, 01:30, "David P Grove" <gro...@us.ibm.com>:

Sorry for the very delayed response;  poor inbox management :(

John MacFrenz <macfr...@yandex.com> wrote on 10/05/2014 06:46:27 AM:
>
> Thanks for your reply. I actually started writing my own classes
> using Rail as backing storage for dynamically distributable arrays.
> Not sure if I'll ever manage to bring this project to finish,
> though, nor is there even a real need to... Related to this, does
> there exist something like C's memmove for Rails in X10? Also, does
> there exist an ordered associative container with unique keys, like
> std::map in C++?


For memmove, there are copy and asyncCopy methods defined on x10.lang.Rail.
(copy is basically a memmove; asyncCopy supports asynchronous copying, between places, of Rails whose elements contain no pointers.)
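A minimal sketch of the copy method (Rail names and sizes here are made up; this assumes the static Rail.copy(src, srcIndex, dst, dstIndex, numElems) signature of recent X10 releases):

```x10
// Copy 100 elements starting at src(10) into dst starting at dst(0);
// roughly the Rail analogue of memmove.
val src = new Rail[Double](1000, (i:Long) => i as Double);
val dst = new Rail[Double](100);
Rail.copy(src, 10, dst, 0, 100);
```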

For the map, it isn't ordered, but X10 has an x10.util.HashMap that is somewhat like Java's java.util.HashMap.
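A quick usage sketch (assuming the put/getOrElse interface of x10.util.HashMap in recent X10 releases; the keys and values are illustrative):

```x10
import x10.util.HashMap;

val m = new HashMap[String, Long]();
m.put("alpha", 1);
m.put("beta", 2);
// getOrElse returns the mapped value, or the supplied default if absent
val v = m.getOrElse("alpha", -1);
```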

Well, I think in my case a doubly linked list or a simple binary search tree will do better.
 
Also, I'd like to know if I have understood this correctly: if I have a Rail that stores instances of a class, the Rail stores just references, and the instances aren't in a contiguous block of memory. If that's the case, is there a way to change this behaviour, since doesn't this cause overhead when accessing the data?
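One relevant detail here: X10 structs are header-less value types that are inlined into their container, so a Rail of structs is stored as one contiguous block, unlike a Rail of class instances. A rough sketch (the struct name and fields are made up for illustration):

```x10
// A struct is a value type; a Rail[Vec2] stores the x,y pairs
// themselves contiguously, rather than references to heap objects.
public struct Vec2 {
    public val x:Double;
    public val y:Double;
    public def this(x:Double, y:Double) { this.x = x; this.y = y; }
}

// Structs may be constructed without 'new'.
val r = new Rail[Vec2](1000, (i:Long) => Vec2(i as Double, 0.0));
```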
 
Anyway, I will probably end up implementing some of the missing containers. I'm not really that good a programmer, but would you still be interested in getting patches?


>
> >    If you really need to be able to dynamically redistribute the longest
> > axis between places after the array is created, you might consider
> > something that was more like an array of arrays in each place to make it
> > easier to redistribute the longest axis while maintaining spatial locality
> > for the shorter axis.
>
> Well, I'm new to programming clusters so I don't know how these
> things are usually done... But let's say that in the cluster I have
> some computers that have 1 core and some that have 16. What would be
> the usual way to distribute the load in such a case?
>

One approach that might work would be to map more X10 places onto the nodes that have more resources (in effect reducing the difference between the machines by treating a 'big' machine as multiple virtual small machines). This may not give the best performance, but it is certainly very simple to do. You can then pretend that all the nodes are roughly the same and distribute the load evenly.
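With the sockets backend this can be sketched with a hostfile that lists the bigger node several times (the host names and program name here are made up; this assumes the X10_NPLACES and X10_HOSTFILE environment variables of the sockets runtime):

```shell
# 4 places on the 16-core node, 1 place on the 1-core node
cat > hosts.txt <<'EOF'
bignode
bignode
bignode
bignode
smallnode
EOF
X10_NPLACES=5 X10_HOSTFILE=hosts.txt ./myprog
```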

 

I thought about that too. However, this would work only for fixed grids; in other cases the imbalance would become too great, I think. Also, since the whole cluster will probably be calculating just one job, computing time would be completely wasted.
_______________________________________________
X10-users mailing list
X10-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/x10-users
