retard wrote:

Of course the major issues limiting Web 2.0 adoption are unreliable, high-latency, expensive communication channels. Another is that the technologies have not matured on non-x86/Windows platforms. I bought a new cell phone recently and can't really play any videos on it, even though it definitely has enough CPU power to play even 576p MPEG streams.

Sure, high latency and bandwidth costs are a major limiting factor, but platform isn't. On the browser-based client side, the non-Windows platforms are just as mature as the Windows side (although non-x86 tends to lag somewhat). Server side, non-Windows has always been more mature than Windows. Unix has always been known for 'server operations', and for good reason: it's designed for it and doesn't impose the artificial limitations that Windows does.

On the embedded side of things, a lot of media-based embedded devices have hardware assistance for things like video decoding, but it is definitely a market where I think languages like D could thrive.

Unfortunately D isn't targeting that market (which I think is a mistake). dmd doesn't have back ends capable of targeting it, and gdc's dmd front end is lagging and almost seems to be unmaintained. I haven't used ldc much, so I don't have any real comments on it, but I suspect the situation there is similar to gdc's.

Hopefully after the D2 spec is frozen, gdc and ldc will catch up. I have looked at the possibility of using D for NDS development (although it'd only be homebrew crap). That is one of GCC's biggest strengths: its versatility. It runs everywhere and targets almost everything.

Btw, you can write iPhone apps in .NET languages. Just use Unity.

And server-side, there's also a lot of static language development going on. Often dynamic languages don't scale, and you'll see dynamic languages with performance-intensive parts written in C or C++, or static languages such as Java.

Sure. It's just that not everyone uses them.

Server-side scalability has almost nothing to do with the language in use; the server-side processing time of anything in any language is dwarfed by the network latency to the client.

Scalability and speed are two very different things. Server-side scalability is all about being able to handle concurrency (typically across a bunch of machines). Web-based work in dynamic languages is very simple to model in a task-based way: one page (request) == one task == one process. Even the state-sharing mechanisms are highly scalable with some of the newer database and caching technologies.
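To make the "one request == one task" point concrete, here's a minimal sketch in D using std.concurrency. The Request type and handle function are made up for illustration; a real front end would be reading HTTP off a socket, and in the dynamic-language world the "task" is usually a whole interpreter process rather than a thread.

import core.thread : thread_joinAll;
import std.concurrency : spawn;
import std.stdio : writefln;

// Hypothetical request type; a real server would read these off a socket.
struct Request
{
    int id;
    string path;
}

// Each request runs in its own isolated task, so there's no shared state
// to lock, and adding workers (or machines) scales the same way.
void handle(Request req)
{
    writefln("served #%s %s", req.id, req.path);
}

void main()
{
    foreach (i; 0 .. 4)
        spawn(&handle, Request(i, "/index.html"));
    thread_joinAll(); // wait for the spawned tasks before exiting
}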

Since most state in a web-based application is transient, and reliability isn't really required, a caching system like memcached is often enough to handle the requirements of most server-side applications.
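For what that pattern looks like in code, here's a rough cache-aside sketch in D. The associative array is an in-process stand-in for memcached (no real client library is assumed), and loadFromDatabase is a hypothetical slow path, not a real API; the point is only that a miss falls through to the authoritative store, so losing the cache costs latency, not correctness.

import std.stdio : writeln;

string[string] cache; // in-process stand-in for memcached

// Hypothetical authoritative store; only hit on a cache miss.
string loadFromDatabase(string key)
{
    return "value-for-" ~ key;
}

string get(string key)
{
    if (auto hit = key in cache)
        return *hit;                  // fast path: transient cached copy
    auto val = loadFromDatabase(key); // slow path: authoritative store
    cache[key] = val;                 // losing this later is harmless
    return val;
}

void main()
{
    writeln(get("session:42")); // miss: loads and caches
    writeln(get("session:42")); // hit
}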

Typically, large-scale sites have at least a few good developers who can make things perform as needed. The people who really take the hit from poor code in inefficient languages are the hosting providers: they have to deal with tons of clients running tons of different, poorly written scripts.

I've worked in a data center with 2,000 servers that was pushing only a few hundred megabits per second (web hosting). I have also worked on a cluster of 12 machines that pushed over 5 gigabits per second. The difference in priorities between those two environments is very obvious.

The future of D, to me, is very uncertain. I see some very bright possibilities in the embedded area and the web-cluster area (those are my two areas, so I can't speak to scientific applications). However, the limited targets of the official dmd and the adoption lag in gdc (and possibly ldc) are issues that need to be addressed before I can see the language getting the real attention it deserves. (Of course, with real attention come stupid people, MS FUD, and bad code.)
