MelonMonkey;158486 Wrote: 
> I'm running with a real Squeezebox now. I thanked the Slim support
> team for handling the situation with UPS. UPS says they delivered the
> original package yet I received nothing. They had no signature because
> they said it was left at the front of the house. Nice job. In 2003 my
> car was stolen from the driveway while I slept. They thought they
> should trust the neighborhood with a US$340 package while no one was
> home (that's the price with shipping).
UPS seems to have an "if it is worth stealing, leave it on the step; if it is worthless, leave a yellow card and make them come get it" rule. How they know what is worthless or worthwhile, I never have figured out.

> I also wasn't accessing the web server from the same system. I've
> tested using a third system, a PowerBook G4 (my main workhorse), both
> with Camino and Safari and wired plus wireless connections. I no
> longer run Firefox as it's a huge bloated memory pig and so full of
> bugs that it's practically useless on a Mac.

Well, it has fewer bugs than IE, but yes, it sucks tons of RAM, especially if you leave it running for a while, and even more so if pages auto-refresh. (Even 2.0 seems to leak on those...)

> I have no doubt most of the time spent by the CPU is with the SQL
> queries, and I didn't mean to imply that the blame rested on the
> shoulders of the Perl implementation. But to call Perl fast has to be
> taken in context. Sure, it can be considered adequate in terms of a
> scripting/interpreted language, but look at the broader picture, which
> includes compiled languages like C. Execution times 100x faster for
> the same piece of code would not be uncommon. Whether speeding up
> portions or all of the current Perl modules would help web serving
> performance dramatically is anyone's guess (I suppose anyone touching
> that code can comment with better hypotheses). But having some
> portions compiled would certainly not adversely affect runtime speed.

Be careful: Perl -is- compiled. It just has a very fast compiler. (It is compiled to bytecode, which is actually pretty efficient.) In cases where Perl shines (like pattern matching), it can be very much on par with C, especially since a ton of very strange optimizations go on in libperl: "commonly used functions" are grouped together, for example, so that they have a tendency to stay in the CPU cache.
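The cache layout of libperl is hard to show in a few lines, but the "pattern matching runs at compiled speed" point is easy to see by analogy. Here is a sketch in Python rather than Perl (SlimServer itself is Perl; the log text and both counting functions are invented for illustration): once a pattern is compiled, the matching loop runs inside the engine's C code, while the hand-rolled scan does the same job one interpreted bytecode at a time.

```python
import re

# A fake request log, purely for illustration.
LOG = ("GET /status HTTP/1.0\n" * 5000
       + "GET /stream.mp3 HTTP/1.0\n" * 5000)

def count_with_regex(text):
    # The pattern is compiled once; the scan over `text` then runs
    # entirely inside the regex engine (compiled C code).
    pat = re.compile(r"GET /\S+ HTTP")
    return sum(1 for _ in pat.finditer(text))

def count_by_hand(text):
    # Equivalent logic in interpreted code: every line and character
    # check goes through the interpreter's dispatch loop instead.
    count = 0
    for line in text.splitlines():
        if line.startswith("GET ") and " HTTP" in line:
            count += 1
    return count

# Both agree on the answer; the compiled engine just gets there faster.
assert count_with_regex(LOG) == count_by_hand(LOG) == 10000
```

Timing the two with `timeit` on a large input shows the gap; the point is that "interpreted language" does not mean the hot loops are interpreted.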
The real difference you are seeing is probably the constant resizing of images and the lack of a real multithreaded HTTP server. There has been a lot of cleanup of the number of SQL calls, so they shouldn't be too painful any more.

> So, how does one time the web interface, as was briefly mentioned
> earlier in this topic? I'll try getting some numbers for various
> scenarios.

There are some debugging flags you can turn on for timing HTTP transactions, and you can also use the "Server and Network Health" option under Health; it should show you how long certain operations typically take.

Since you are on Mac OS, you may also want to turn on forking for HTTP on the Performance settings page. This may speed up web clients by handing some of the multithreading off to the OS instead of faking it in Perl. (That is probably the worst thing about the Perl implementation... Perl doesn't deal well with multithreaded applications, so it has to be faked.)

640M shouldn't really be a problem... I have... um... 256M. I'm cheap. The CPU isn't the fastest, but then you have a dual-CPU machine, so that should help a bit, since the 6.5 model gives you at least some fake multithreading with the SQL work moved into mysqld. Turning on forking for HTTP may help use both processors as well.

-- 
snarlydwarf
------------------------------------------------------------------------
snarlydwarf's Profile: http://forums.slimdevices.com/member.php?userid=1179
View this thread: http://forums.slimdevices.com/showthread.php?t=30145
_______________________________________________
discuss mailing list
[email protected]
http://lists.slimdevices.com/lists/listinfo/discuss
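P.S. The "hand the multithreading to the OS" model behind the forking option looks roughly like this. A minimal sketch in Python rather than Perl, with an invented handle_request standing in for the real per-request work (nothing here is SlimServer's actual code): the parent forks one child per request, the kernel schedules the children in parallel across both CPUs, and no threading has to be faked in the interpreter.

```python
import os

def handle_request(req_id):
    # Stand-in for real per-request work (rendering, SQL, resizing).
    return req_id * 2

def serve_forked(request_ids):
    # Fork one child per request; the OS scheduler runs the children
    # concurrently. Each child reports its result via its exit status
    # (which must fit in 0-255, so this is only a toy protocol).
    children = {}
    for rid in request_ids:
        pid = os.fork()
        if pid == 0:                       # child process
            os._exit(handle_request(rid))  # do the work, then vanish
        children[pid] = rid                # parent keeps dispatching
    results = {}
    for pid, rid in children.items():
        _, status = os.waitpid(pid, 0)     # reap each child
        results[rid] = os.waitstatus_to_exitcode(status)
    return results

print(serve_forked([1, 2, 3]))  # {1: 2, 2: 4, 3: 6}
```

A real forking HTTP server forks per connection (or keeps a pre-forked pool) and passes results over sockets rather than exit codes, but the division of labor is the same: concurrency lives in the kernel, not the interpreter.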
