On Wed, May 04, 2005 at 10:23:59AM +0200, Antenore Gatta wrote:
> My goal is to create a distributed system for compiling software on
> demand for not experienced users.

There are distributions that you can download or buy, and they will just
work for inexperienced users.

> In the first stage I'd like to have a web based form where the user
> can choose from a software list and between some different platforms
> (amd, intel, sparc, arm, ..., etc.) after N minutes the user should be
> able to download a tar archive with the platform-optimized compiled
> version.

Before wasting your time and resources you should evaluate how big the
possible improvement in speed is at all. The number of packages that
_could_ make any use of a possible improvement in speed is also small.
And how about "support" of all the binaries with minor differences on
different systems?
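To make that concrete: before building any infrastructure around this, time
one of the packages you have in mind with generic flags against CPU-specific
flags and see whether the difference is worth anything. A rough sketch,
assuming gcc is available and that benchmark.c stands in for whatever
workload actually matters to the user:

import subprocess
import time

# Flag sets to compare; substitute the -march= value for the CPU you are
# actually targeting.
FLAG_SETS = {
    "generic": ["-O2"],
    "tuned":   ["-O2", "-march=pentium4"],
}

def timed_run(binary):
    start = time.perf_counter()
    subprocess.run([binary], check=True)
    return time.perf_counter() - start

def build_and_time(name, flags, runs=3):
    # Build once with the given flags, then take the best of a few runs.
    binary = "./bench_" + name
    subprocess.run(["gcc", *flags, "-o", binary, "benchmark.c"], check=True)
    return min(timed_run(binary) for _ in range(runs))

results = {name: build_and_time(name, flags) for name, flags in FLAG_SETS.items()}
for name, seconds in results.items():
    print("%-8s %.3fs" % (name, seconds))
gain = (results["generic"] - results["tuned"]) / results["generic"] * 100
print("improvement from tuned flags: %.1f%%" % gain)

If that number comes out small for the packages you care about, the rest of
the infrastructure is not worth building.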
> I tried with ICMP and anycast but both don't pass through every router.
> We have obtained some results sending a little image (png) via ssh and
> counting the seconds, but sometimes the routing changes unexpectedly.
> Perhaps using the same distcc socket the routing should be the same
> (except for router/backbone problems).

One of the ideas of TCP/IP is actually that you don't have to do the
routing yourself. I don't know what you are after, but when the connections
of the nodes are so slow/bogus that you experience "routing changes" while
transferring "a little image (png)", you can forget everything. Did you at
least measure the amount of data that is transferred when compiling some
package? (A rough way to check this is sketched below.)

> Please do you have an idea about this?

Yes, please don't waste your time, and don't terrorize the world with
defunct binaries.

If you like compiling and optimizing stuff, you could try to increase the
amount of parallelisation that can be used in some packages when compiling
them.
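For example, a quick way to see how far a package's build parallelises today
is to time a clean build at several -j levels; where the speedup flattens
out is where work on the package's build system would actually pay off. A
rough sketch, assuming an already-configured source tree using GNU make:

import subprocess
import time

def timed_build(jobs):
    # Clean rebuild at a given parallelism level; adjust the clean target
    # if the package uses something else.
    subprocess.run(["make", "clean"], check=True, stdout=subprocess.DEVNULL)
    start = time.perf_counter()
    subprocess.run(["make", "-j%d" % jobs], check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

baseline = timed_build(1)
print("-j1  %.1fs" % baseline)
for jobs in (2, 4, 8):
    elapsed = timed_build(jobs)
    print("-j%d  %.1fs  (%.2fx)" % (jobs, elapsed, baseline / elapsed))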
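And on the question above about how much data one compile actually moves
over the wire: a crude way to get a number is to read the kernel's interface
byte counters around a single build. A Linux-specific sketch; eth0 and the
distcc-enabled make invocation are placeholders, and the counters include
every other connection on that interface, so keep the machine otherwise
quiet while measuring:

import subprocess

IFACE = "eth0"  # placeholder: the interface the distcc traffic goes over

def iface_bytes(iface):
    # Parse /proc/net/dev; after the colon, field 0 is rx_bytes, field 8 tx_bytes.
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError("interface %s not found" % iface)

rx0, tx0 = iface_bytes(IFACE)
# Assumes the tree is already set up to compile through distcc
# (e.g. CC="distcc gcc" and DISTCC_HOSTS pointing at the build hosts).
subprocess.run(["make", "-j8"], check=True)
rx1, tx1 = iface_bytes(IFACE)
print("received %.1f MB, sent %.1f MB" % ((rx1 - rx0) / 1e6, (tx1 - tx0) / 1e6))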
Christian Leber
--
http://www.nosoftwarepatents.com

__
distcc mailing list    http://distcc.samba.org/
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/distcc