Re: The iGoogle bug
Bernardo Innocenti wrote: In the short term, one quick and dirty fix would be to disable the EXA upload hook. The driver may even become somewhat faster because we avoid the extra copy! Here's a version of amd_drv with this kludge implemented and full debug symbols enabled: http://www.codewiz.org/pub/amd_drv.so.13 - built for Xorg 1.3 http://www.codewiz.org/pub/amd_drv.so.14 - built for Xorg 1.4 Aleph, would you mind installing it in your test environment and running some of your benchmarks before and after the cure? Testing with 1.4 only should be fine... The cairo performance suite is probably the best test. -- // Bernardo Innocenti - http://www.codewiz.org/ \X/ One Laptop Per Child - http://www.laptop.org/ ___ Devel mailing list Devel@lists.laptop.org http://lists.laptop.org/listinfo/devel
Re: #3469 HIGH Trial-3: Human readable file names in the journal
On 9/18/07, Kim Quirk [EMAIL PROTECTED] wrote: Do we have another solution for being able to find a file after the journal has saved it to the school server? How are backups to the school server going to be handled for trial-3? I think the proper long-term solution is the remote datastore stuff Ben has been thinking about (and designed the datastore to support), but clearly that's not going to be ready for trial-3. Marco
Re: The iGoogle bug
This is #3352: https://dev.laptop.org/ticket/3352 What's special about this page is that the clock applet uses two very wide bitmaps of 3200 pixels each to represent the clock arms in all possible positions. (that's over 2MB of RAM wasted, nice!) Wow, 2MB of ram for a clock? That's 32 Commodore 64's: even worse than Sun's Open Look clock tool! Kids these days... From the X-Windows Disaster: http://www.art.net/~hopkins/Don/unix-haters/x-windows/disaster.html X has had its share of $5,000 toilet seats -- like Sun's Open Look clock tool, which gobbles up 1.4 megabytes of real memory! If you sacrificed all the RAM from 22 Commodore 64s to clock tool, it still wouldn't have enough to tell you the time. Even the vanilla X11R4 xclock utility consumed 656K to run. And X's memory usage is increasing. -Don ___ Devel mailing list Devel@lists.laptop.org http://lists.laptop.org/listinfo/devel
Re: The iGoogle bug
On Sep 18, 2007, at 10:45 , Don Hopkins wrote: This is #3352: https://dev.laptop.org/ticket/3352 What's special about this page is that the clock applet uses two very wide bitmaps of 3200 pixels each to represent the clock arms in all possible positions. (that's over 2MB of RAM wasted, nice!) Wow, 2MB of ram for a clock? [...] Well, if the XO's calculator activity can take 20 MB RAM (+10MB shared), then 2 MB for a clock seems small. - Bert -
Re: The iGoogle bug
Ah, X's guilt is no more (in this instance): at the time you wrote that amusing tirade, there was no way to find out the memory consumption in the X server due to stupid clients; now there is (XRes). Would that more tools used it to report memory properly. - Jim On Tue, 2007-09-18 at 10:45 +0200, Don Hopkins wrote: This is #3352: https://dev.laptop.org/ticket/3352 What's special about this page is that the clock applet uses two very wide bitmaps of 3200 pixels each to represent the clock arms in all possible positions. (that's over 2MB of RAM wasted, nice!) [...] -Don -- Jim Gettys One Laptop Per Child
Re: The iGoogle bug
Jim Gettys wrote: Bert, Don't confuse virtual address space used with RAM consumed: most of that is shared memory (glibc, pango, gtk+). That's why memphis was written and is in our build: ps gives very misleading memory usage statistics unless you really understand what it is reporting. It does a better job of accounting for memory used in a way mere mortals can understand. Cool: http://dev.laptop.org/git?p=projects/memphis;a=tree Even so, right now we'll find lots more RAM consumed than we'd like, due to how python loads modules; we have schemes for fixing this using fork and copy on write. See also http://www.pixelbeat.org/scripts/ps_mem.py cheers, Pádraig.
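The kind of per-process accounting memphis and ps_mem.py do can be illustrated by summing the Private_*/Shared_* fields of the kernel's /proc/PID/smaps output: private pages are charged to the process, shared pages (glibc, pango, gtk+) should only be counted once system-wide. This is a toy sketch, not memphis code, and the smaps excerpt below is fabricated for illustration.

```python
# Miniature of the accounting idea behind memphis/ps_mem.py:
# charge Private_* pages to the process, tally Shared_* separately.
# The smaps excerpt is made up for illustration.
SAMPLE_SMAPS = """\
b7e00000-b7f00000 r-xp 00000000 08:01 1234 /lib/libc.so.6
Shared_Clean:      960 kB
Shared_Dirty:        0 kB
Private_Clean:       8 kB
Private_Dirty:      24 kB
b9000000-b9100000 rw-p 00000000 00:00 0 [heap]
Shared_Clean:        0 kB
Shared_Dirty:        0 kB
Private_Clean:       0 kB
Private_Dirty:     512 kB
"""

def accounted_memory(smaps_text):
    """Return (private_kb, shared_kb) summed over all mappings."""
    private = shared = 0
    for line in smaps_text.splitlines():
        if line.startswith("Private_"):
            private += int(line.split()[1])
        elif line.startswith("Shared_"):
            shared += int(line.split()[1])
    return private, shared

private_kb, shared_kb = accounted_memory(SAMPLE_SMAPS)
```

Here the mapping of libc is almost entirely shared (960 kB), so naive tools like ps would charge it to every process using it, while the heap (512 kB private dirty) is the part this process genuinely owns.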
Re: The iGoogle bug
Memphis still says Calculate is taking 10 MB, which compared to the RAM in a physical calculator is quite a lot. Also, e.g. Memphis is attributing 110 MB to Squeak, when most of that is unused virtual address space (which top actually gets right). - Bert - On Sep 18, 2007, at 13:53 , Jim Gettys wrote: [...]
Re: The iGoogle bug
On Sep 18, 2007, at 13:53 , Jim Gettys wrote: Even so, right now we'll find lots more RAM consumed than we'd like, due to how python loads modules; we have schemes for fixing this using fork and copy on write. Last time I talked to Ivan, that's not going to be possible (in the foreseeable future) given the security framework's constraints. Like so much else, sadly. - Bert -
Re: The iGoogle bug
On Sep 18, 2007, at 9:13 AM, Bert Freudenberg wrote: Last time I talked to Ivan that's not going to be possible (in the foreseeable future) given the security framework's constraints. As so much, sadly. It's how memory is normally managed for Python modules that's problematic; security has little to do with it. There are some efforts around making Python's memory management more friendly to embedded(ish) platforms, and we need to see if we stand to benefit from this kind of work. -- Ivan Krstić [EMAIL PROTECTED] | http://radian.org ___ Devel mailing list Devel@lists.laptop.org http://lists.laptop.org/listinfo/devel
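The fork-and-copy-on-write scheme Jim and Ivan discuss can be sketched as follows: a launcher process imports the heavy common modules once, then forks a child per activity; the child inherits the already-initialized module pages and only pays for pages it actually writes to. This is an illustration of the general technique (POSIX-only), not OLPC's launcher; the "heavy" modules here are arbitrary stand-ins.

```python
import os
import sys

# Stand-ins for the heavy common modules an activity launcher would
# preload once before forking (assumption: not the real OLPC set).
import json
import re
import math

def launch(child_main):
    """Fork a child that runs child_main with preloaded modules; return its exit code."""
    pid = os.fork()
    if pid == 0:
        # Child: modules are already in sys.modules, shared copy-on-write
        # with the parent until written to.
        child_main()
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

def activity():
    # The activity uses the preloaded modules without re-importing
    # (no re-parsing, no duplicate module dicts until mutation).
    assert "json" in sys.modules
    assert json.loads("[1, 2]") == [1, 2]

exit_code = launch(activity)
```

The saving comes from the kernel sharing the parent's pages rather than each Python process rebuilding module state from scratch; as Ivan notes, how much is actually shared in practice depends on how much of that state gets written to after the fork.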
Re: The iGoogle bug
On Tue, 2007-09-18 at 15:25 +0200, Bert Freudenberg wrote: Memphis still says Calculate is taking 10 MB, which compared to the RAM in a physical calculator is quite a lot. Also, e.g. Memphis is attributing 110 MB to Squeak, when most of that is unused virtual address space (which top actually gets right). File a bug against the developer's console. Thanks, - Jim -- Jim Gettys One Laptop Per Child
Re: wireless networking
On Mon, 2007-09-17 at 06:55 -0400, Jim Gettys wrote: On Thu, 2007-09-13 at 16:10 -0400, Dan Williams wrote: On Thu, 2007-09-13 at 19:52 +0100, Victor Lazzarini wrote: Thanks, I am now seeing my wireless router on the machines and can ping other machines connected to it. Wired internet seems to work (as far as ifconfig and ping are concerned). Two questions: 1. How should I set my hostname and fix it to work with DHCP? The hostname isn't changed by NM because that breaks Xauth and stops you from being able to launch X apps. I thought this got fixed in X quite a while ago (something like 18 months ago). It certainly used to be a PITA. Certainly not fixed, I just tested it out and it fails miserably. It is a PITA. Somebody who knows X needs to fix Xauth to not rely on hostnames. The xauth cookie depends on the hostname of the machine at the time xinit gets run, so we either need to change our xinit stuff to put 127.0.0.1 there for a hostname, or suck up the fact that changing the hostname kills the ability to start X apps. Dan ___ Devel mailing list Devel@lists.laptop.org http://lists.laptop.org/listinfo/devel
Re: wireless networking
On Tue, 2007-09-18 at 10:27 -0400, Dan Williams wrote: [...] The xauth cookie depends on the hostname of the machine at the time xinit gets run, so we either need to change our xinit stuff to put 127.0.0.1 there for a hostname, or suck up the fact that changing the hostname kills the ability to start X apps. Keithp demo'ed this at a conference over a year ago; I'll try to track him down to figure out what we're doing wrong. - Jim -- Jim Gettys One Laptop Per Child
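The failure mode Dan describes can be modeled with a few lines: the Xauthority file keys the cookie by the hostname recorded when xinit ran, so a DHCP-driven hostname change makes the later lookup fail, while keying by 127.0.0.1/localhost would not. This is a toy model of the lookup logic only, not real Xlib/xauth code; the hostnames are invented.

```python
import secrets

# Toy Xauthority store: cookie keyed by (hostname, display).
xauthority = {}

def xinit(hostname, display=0):
    """Record a fresh cookie under the hostname current at startup."""
    cookie = secrets.token_hex(16)  # MIT-MAGIC-COOKIE-1 is 128 bits
    xauthority[(hostname, display)] = cookie
    return cookie

def connect(hostname, display=0):
    """A client looks the cookie up by the *current* hostname;
    None models 'cannot open display' / failed authorization."""
    return xauthority.get((hostname, display))

xinit("xo-12ab")                                  # hostname at boot
works = connect("xo-12ab") is not None            # same hostname: OK
broken = connect("new-dhcp-name") is None         # renamed by DHCP: auth fails
```

Putting a fixed key like 127.0.0.1 in at xinit time, as Dan suggests, makes the lookup immune to later hostname changes.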
Re: The iGoogle bug
First of all, fix your mail editor. It is broken. On 18/09/07 00:50 -0400, Bernardo Innocenti wrote: - Removing all of the asm wizardry (useless IMHO, maybe even counter-productive) - Implementing access macros for the ring buffer using the normal, plain wrapping policy of all ring buffers - Killing the WRITE_COMMAND32() and WRITE_COMMANDSTRING32() abstractions. - Removing gp_declare_blt(), which needs to be called before starting any blitting operation NAK. What you are suggesting would completely break the entire Cimarron infrastructure, which is not something I am willing to do at this stage. Much time (and by that I mean nearly 4 years) went into writing, verifying and validating this code. We have a bug that needs to be fixed - and that doesn't happen by completely removing the internal workings of the engine. Thank you for reporting this, and I'll look into ways we can make the upload blit behave better. Jordan -- Jordan Crouse Systems Software Development Engineer Advanced Micro Devices, Inc.
Re: radio off guarantee?
Suppose I'm someplace where I don't expect or want to do any mesh networking. How much would turning off the radio help battery life? Last I checked, the effect was very small. There will be occasional scans as the unit hunts around for nearby radios. One could save more by making those scans back off than by providing a UI element. We see 700 to 800 mW consumed by the mesh interface, and as with most WiFi interfaces, receiving consumes as much power as transmitting. While it makes sense to turn off the wireless networking interface on developer machines, we are hesitant to add this to the UI. We are really relying on laptops to extend the mesh away from the school. wad
Re: #3469 HIGH Trial-3: Human readable file names in the journal
Any reason this cannot be made public? - Bert - On Sep 18, 2007, at 17:23 , John Watlington wrote: Marco, I apologize that this was discussed at a school server meeting and we didn't point you at the results. Details are at: http://laptop.org/teamwiki/index.php/Team:SS_Meeting_2007-09-12 [...]
Re: #3469 HIGH Trial-3: Human readable file names in the journal
We should get you access to TeamWiki. We do need some space for common notes that isn't published to the world at large... Here are the minutes:

---
School Server Meeting Minutes, 2007-09-12
Attending: Scott, Michail, Alex, Kim, Wad, Walter

* Jabber needs to be packaged
* Apache is not going to work as a transparent proxy (it can be a proxy and cache). We would have to change the browser config behind a school server. Squid only supports IPv4 — not a good solution, but maybe the only short-term one.
* Can test registration on official build school servers (Kim and SJ have one; we may be able to get one for RH).
  o If you run more than one school server in a location for testing purposes, you should use blinding tables. If you are in 1CC, please also use channel 6 or 11.
  o A Wiki page has info on how to set these tables up (mesh-debug).

There are 3 scenarios when we need to recover/restore from school server backup:
* Update requires a backup, followed by restore of the entire laptop
* Catastrophic failure: restore the entire laptop
* Lost/deleted file: individual file-by-file restore
* In the Trial3 scenario, we get a sharing mechanism for free: individually 'restore' someone else's files

For restore we need to publish the backup directory (schoolserver/share/SN) and this will regenerate the link that includes the nickname. Click on a file and it will download it into the journal.

* We had some good discussions on what our journal and library might look like in the future and how we might access 'published' files/activities and 'example' files. There is still a lot of planning to do for this 3-year vision.
* We also discussed what we can do today for FRS: we need to first define what gets stored, where, and whether it needs to be backed up.
* FRS picture:
  o What currently lives in Lib is textbook and demo stuff; shouldn't be backed up
  o User owns everything in /home and /security
  o Need to maintain a manifest of items downloaded to be able to restore them.
  o There shouldn't be system things in /home/olpc
  o We decided to give Jim the task to document and communicate where things must be stored, what will be preserved over upgrade, and what will be backed up to the school server. This needs widespread communication once it is decided.

ACTION ITEMS:
* Backup doesn't work unless you ssh to the SS first, bug 2974
* Registration should be automatic, and logged
* Backup should be automatic, and logged [Dan Williams?]
* Backup should use user-friendly file names [Marco?]
* Backup dir should be published on SS, Wad/Scott
* Wad will create changes to school server links for user data; need user-friendly nicknames
* Jim will sort out where user data (of all sorts) needs to be stored and backed up depending on backup mechanism.

On Sep 18, 2007, at 11:41 AM, Marco Pesenti Gritti wrote: Looks like I don't have access to that part of teamwiki... Are you planning to make this public? Marco On 9/18/07, John Watlington [EMAIL PROTECTED] wrote: Marco, I apologize that this was discussed at a school server meeting and we didn't point you at the results. Details are at: http://laptop.org/teamwiki/index.php/Team:SS_Meeting_2007-09-12 [...]
Re: Power envelope (was Re: radio off guarantee?)
John Watlington wrote: Suppose I'm someplace where I don't expect or want to do any mesh networking. How much would turning off the radio help battery life? Last I checked, the effect was very small. There will be occasional scans as the unit hunts around for nearby radios. One could save more by making those scans back off rather than provide a UI element. We see 700 to 800 mW consumed by the mesh interface, and as with most WiFi interfaces, receiving consumes as much power as transmitting. Isn't that about 1/2 of our total power budget? .7 to .8 W on a 2W machine? I'd thought the Marvell chip was supposed to be down in the .3W range. Obviously I haven't been following the hardware closely enough. (http://wiki.laptop.org/go/Power_Management is where I got the .3W idea.) Is that .7W something we'll be able to bring down with software to reduce collisions or the like? The University of Toronto's systems group has algorithms that optimise for power savings by reducing collisions, if that would help. From the same page, we'd only last 20 hours at ~.7W draw with a *full* charge (and with hand-charging a full charge likely won't be there, especially at the end of the day), which means that the machines are going to be dead each morning (having drained their batteries keeping up an unused network all night). With a 40 hour period (.3W) we were possibly going to have some juice left in the morning (need to be at 1/4 power when you shut off for the night), but that becomes less likely with a 20 hour period (need to be at 1/2 power when you shut off for the night). While it makes sense to turn off the wireless networking interface on developer machines, we are hesitant to add this to the UI. We are really relying on laptops to extend the mesh away from the school. If we really are spending half our power budget on the mesh network I would imagine that kids will want to be able to turn it off.
Yes, meshing is good, but if you could double your battery life while you're at home reading, that would be very worthwhile too. What about *only* keeping the mesh up (continuously) if you are actually routing packets? That is, wake up the mesh interface every few minutes, check to see if your new routing structure makes you a link that is desirable, and only stay on if that's the case. (I realise I'm talking about long-term projects here.) You'd need the machines to be able to queue up messages to go out so that they could leap at the request and say yes, please stay up, when a linking machine pops on to check for need (inferring from activity requests on the local machine would likely be sufficient, but a simple UI might be workable too). If the network is inactive for X period, go into periodic sleep mode again to save energy. Even if you had to wake up the processor for a few seconds to run the code that decides whether to sleep, it's only about double the power of running the mesh for those seconds, and it could save you the entire power budget for a few minutes. Of course, in the field it may be that the mesh is so densely used that it's never going to go down with that algorithm, but it seems like something we need to investigate if we really are losing that much power to the interface. Just a thought, Mike -- Mike C. Fletcher Designer, VR Plumber, Coder http://www.vrplumber.com http://blog.vrplumber.com
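The 20-hour and 40-hour figures in the thread are simple capacity-over-draw arithmetic; the sketch below reproduces them assuming roughly 14 Wh of usable battery capacity (an assumption — the actual XO pack capacity isn't stated in the thread) and treating the 0.35 W low-power case as the ".3W range" mentioned.

```python
# Back-of-envelope runtime check for the figures quoted in the thread.
# Assumption: ~14 Wh of usable capacity (not stated in the thread).
USABLE_WH = 14.0

def runtime_hours(draw_watts):
    """Hours of runtime at a constant average draw."""
    return USABLE_WH / draw_watts

mesh_on  = runtime_hours(0.70)   # ~20 h, matching the "20 hour period"
mesh_low = runtime_hours(0.35)   # ~40 h, matching the "40 hour period"
```

Under these assumptions a 0.7 W mesh draw alone empties a full charge in about a day, which is Mike's point about laptops being dead each morning.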
Re: radio off guarantee?
moin, On Tue, 18 Sep 2007 14:42:51 +1000 James Cameron [EMAIL PROTECTED] wrote: Do we need a solid way to turn off the RF before it's OK to use an XO on an airplane? Our target market usually won't have this problem. Yes, but developers need a solid way to switch off the RF. The Test XO target market are developers, and sometimes they travel by air. Javier wrote: # killall NetworkManager # iwpriv eth0 radiooff How could we ensure not bringing up the RF at boot time? Which file takes care of this on Fedora systems?
Re: The iGoogle bug
Okay - after some investigation and talking to the original author of the Cimarron code, I have some answers. So the request gets through the amd_drv upload hook, and eventually we reach gp_color_bitmap_to_screen_blt(), whose purpose is to do the actual uploading: The *real* purpose of the gp_color_bitmap_to_screen_blt() function is to allow uploads from system memory with arbitrary ROPs. Since we're only ever doing a straight source copy (0xCC), we really don't need all the additional logic. So Bernie's recommendation that we eliminate the upload() function altogether is the right solution, provided that the default EXA function waits for the command buffer to clear first. Otherwise, we'll need our own simple upload function that calls gp_wait_until_idle() first. The gp_color_bitmap_to_screen_blt() is indeed the way it is because of virtual/physical translation concerns - if we can get around those, then a blt would probably be faster, but it's hard to do in userspace, as we well know. Other code confirms the statement in this comment: GP3_MAX_COMMAND_SIZE is defined to be 8K. However, this limit is arbitrary: I couldn't find anywhere in the databook a reason why the blitter couldn't copy more than 8K of data. The actual limit is 64K of DWORDs. I guess 8KB was just chosen as a reasonable waste of buffer space. The 8K limit in the command buffer was based on the assumption that we wouldn't be handling any pixmaps wider than the widest possible visible line (1920 * 4 = 7680 bytes). We can crank that up if we want to, but it will have a direct effect on how many BLTs we can queue up unless we crank up the amount of command buffer memory, which eats into our video memory, and so on and so forth. If we just move to a straight memcpy() above, then this is no longer a going concern. Moreover, the GPU is well capable of wrapping its command pointer at arbitrary positions, even in the middle of a command. And so should the software. 
I strongly disagree with the claim in the comment that this strategy simplifies anything. This is incorrect. The wrap bit tells the command buffer to wrap at the end of the command, not in the middle of the command. The bottom line is that you absolutely, positively do not want to get into the business of messing with the command buffer functions - unless you want to break a lot of stuff. These functions have been carefully tuned to ensure that wrapping and other intelligence work well. If you think yourself suited to writing your own, there is a 100% chance of pain. If you want to replace the WRITE_COMMAND* macros, feel free - but remember that bitmaps almost always need to be copied line by line - few pixmaps are stored contiguously. So to summarize: - Removing all of the asm wizardry (useless IMHO, maybe even counter-productive) Remove whatever macros you think you need to - but remember, if it ain't broke, don't fix it, and please send it to this list before putting it anywhere near the production code. - Implementing access macros for the ring buffer using the normal, plain wrapping policy of all ring buffers NAK. The ring buffers work; don't change them. - Killing the WRITE_COMMAND32() and WRITE_COMMANDSTRING32() abstractions. If you want, keeping in mind what I said before. - Removing gp_declare_blt(), which needs to be called before starting any blitting operation An utter and absolute NAK - this would break the entire system horribly. - Seeing if we can get the blitter to read source data directly from system memory. I'd be very surprised if there was no way to make it work with virtual memory enabled, because, without such a mechanism, the blitter would be less than fully useful. You can't make it go with virtual memory, so NAK on this one too. Jordan -- Jordan Crouse Systems Software Development Engineer Advanced Micro Devices, Inc.
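The wrap-at-command-boundary policy Jordan describes (the "wrap bit") can be sketched in a few lines: a command that does not fit contiguously in the tail of the buffer is placed at the start instead, so the consumer never sees a command split across the wrap point. This is an illustration of the policy only — not Cimarron code — and it omits the flow control a real driver needs (never advancing past the GPU's read pointer).

```python
# Sketch of a command ring that wraps only at command boundaries.
# Sizes are illustrative; real code must also track the GPU read pointer.
class CommandRing:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.head = 0  # next write offset

    def write_command(self, payload: bytes):
        """Place one whole command contiguously; return its offset."""
        n = len(payload)
        if n > self.size:
            raise ValueError("command larger than the whole buffer")
        if self.head + n > self.size:
            # Doesn't fit in the tail: wrap to the start ("wrap bit"),
            # never split the command across the boundary.
            self.head = 0
        start = self.head
        self.buf[start:start + n] = payload
        self.head += n
        return start

ring = CommandRing(16)
first = ring.write_command(b"AAAAAAAAAA")    # 10 bytes at offset 0
second = ring.write_command(b"BBBBBBBB")     # 8 bytes can't fit in the 6-byte tail
```

Here the second command wraps to offset 0 rather than straddling the end, which is exactly why the buffer size caps the largest queueable command, as Jordan notes for the 8K limit.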
Re: radio off guarantee?
We see 700 to 800 mW consumed by the mesh interface, and as with most WiFi interfaces, receiving consumes as much power as transmitting. Crazy thought dept... How long does it take to turn the receiver on? If all the clocks in a mesh were synchronized, would it make sense to turn all the receivers off during idle periods and turn them on for say 100 ms on the second boundary to see if there was any traffic? (I'm assuming the transmit side would cooperate.) -- These are my opinions, not necessarily my employer's. I hate spam. ___ Devel mailing list Devel@lists.laptop.org http://lists.laptop.org/listinfo/devel
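The saving from this synchronized duty-cycling idea is easy to estimate: with receivers on 100 ms out of every second, average radio draw is a weighted mix of receive and doze power. The 750 mW figure is from the thread; the 10 mW doze draw is purely an assumption for illustration.

```python
# Duty-cycle arithmetic for the synchronized-wakeup idea.
RX_MW = 750     # mid-range of the 700-800 mW receive figure in the thread
SLEEP_MW = 10   # assumed radio draw while dozing (not from the thread)

def average_mw(on_ms, period_ms):
    """Average radio draw for a given on-time per period."""
    duty = on_ms / period_ms
    return duty * RX_MW + (1 - duty) * SLEEP_MW

avg = average_mw(100, 1000)   # 100 ms per second: 0.1*750 + 0.9*10 mW
```

Even with a generous allowance for receiver turn-on time, a 10% duty cycle would cut the radio's share of the power budget by nearly an order of magnitude — if, as the question notes, transmitters can be made to cooperate with the schedule.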
Re: radio off guarantee?
On Tue, 2007-09-18 at 18:52 +0200, [EMAIL PROTECTED] wrote: [...] How could we ensure not bringing up the RF at boot time? Which file takes care of this on Fedora systems? Add a hardware killswitch if you want to be 100% sure. There is no single file that takes care of this on any system, really. But if you remove the kernel modules, the firmware will not get uploaded to the card, and therefore the radio will essentially do nothing AFAIK. Dan
Re: Software status meeting on IRC (today, 21:00 EDT Boston)
On Sep 18, 2007, at 3:24 PM, Jim Gettys wrote: Feedback seemed mostly positive to last weeks IRC meeting; let's try again. irc.freenode.net, #olpc. Correction: #olpc-meeting, like last time, NOT #olpc. Thanks, -- Ivan Krstić [EMAIL PROTECTED] | http://radian.org ___ Devel mailing list Devel@lists.laptop.org http://lists.laptop.org/listinfo/devel
Re: Power envelope (was Re: radio off guarantee?)
On Tue, 2007-09-18 at 12:12 -0400, Mike C. Fletcher wrote: [...] Is that .7W something we'll be able to bring down with software to reduce collisions or the like? The current firmware has not yet been optimized, and is consuming much more power than it should; it will come down once that is done. And future wireless work will also help. - Jim -- Jim Gettys One Laptop Per Child
Software status meeting on IRC (today, 21:00 EDT Boston)
Feedback seemed mostly positive to last week's IRC meeting; let's try again. irc.freenode.net, #olpc.

**Please** go through trac, comment on and close bugs that are done. Please push bugs for first deployment. If you aren't sure, put the bugs into untriaged and we'll get you feedback. Please examine at least the blocker and high priority bugs for Trial-3. You can easily use the query page in trac to see what open issues you have. http://dev.laptop.org has a number of common useful queries.

http://www.timeanddate.com/worldclock/fixedtime.html?month=9&day=18&year=2007&hour=1&min=0&sec=0&p1=43

* Status
* Wireless bugs
* Priorities: activation, registration
* Blockers for Trial-3: https://dev.laptop.org/query?status=assigned&status=new&status=reopened&order=priority&priority=blocker&milestone=Trial-3&col=id&col=summary&col=status&col=owner&col=type&col=component
* Power management: http://dev.laptop.org/query?status=new&status=assigned&status=reopened&keywords=power&order=priority
* Blockers for Trial-3: http://dev.laptop.org/query?status=new&status=assigned&status=reopened&priority=blocker&milestone=Trial-3&order=priority
* School server status

Please send any agenda items. -- Jim Gettys One Laptop Per Child
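The trac query links in this message are just ampersand-joined key/value parameter lists (repeated `status` keys included); for reference, such a query URL can be assembled programmatically. The parameter names below are taken from the links in the message; this is a generic illustration, not an official trac helper.

```python
from urllib.parse import urlencode

# Build a trac query URL like the Trial-3 blocker link above.
# urlencode on a list of tuples preserves order and repeated keys.
params = [
    ("status", "new"),
    ("status", "assigned"),
    ("status", "reopened"),
    ("priority", "blocker"),
    ("milestone", "Trial-3"),
    ("order", "priority"),
]
url = "http://dev.laptop.org/query?" + urlencode(params)
```

Using a list of tuples rather than a dict matters here, since trac treats each repeated `status` parameter as an OR term.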
Re: radio off guarantee?
John Watlington wrote: While it makes sense to turn off the wireless networking interface on developer [...] So there's a slight problem with powering off the wireless interface from an electrical standpoint. You can't. At least not if you want a working system. WLAN_EN controls WLAN_3.3V, and +3.3V is derived from WLAN_3.3V. So if you drop WLAN_3.3V you lose +3.3V, and you also lose: VDDIO on the LX700; pullups on the PCI bus; pullups on the JTAG lines; supply voltages for COREPLL, GLPLL, and DOTPLL on the LX700; power supply to the LX700 therm alarm circuits; power supply to the system clock chip. And lots of other stuff... you get the idea. System no workie. That leaves 3 alternatives: 1. Don't load the wlan firmware. 2. Load the firmware and tell the wlan xmit to shut down. 3. Hold the WLAN module in reset. #3 can be done via the EC, but there's currently no command to _hold_ it in reset. There is a reset-WLAN command which will strobe the reset line for 1ms. I can add enable/disable reset commands if 1 and 2 are not viable. -- Richard Smith [EMAIL PROTECTED] One Laptop Per Child
The gecko engine on the XO
Hi All, I am building an application for the XO and I want to embed a browser in the application. Can somebody tell me the details of the Gecko engine (version and other jazz) that is on the XO? I would like to replicate the XO mozilla engine on my linux box so that I can build my application there, because debugging is easier on my linux box than on the XO. Thanks! Ankita
Re: The gecko engine on the XO
Browser example here:
https://dev.laptop.org/git?p=projects/hulahop;a=blob;f=tests/test-web-view.py;h=63ede6b7e100a4c3dc40c434997a05fa9f7bae64;hb=HEAD

The easiest way to build it on your box is sugar-jhbuild:
http://wiki.laptop.org/go/Sugar_with_sugar-jhbuild

You can probably just build xulrunner and hulahop (in that order) using the buildone command.

Marco

On 9/18/07, ankita prasad [EMAIL PROTECTED] wrote:
> Hi All, I am building an application for the XO and I want to embed a browser in the application. Can somebody tell me the details of the Gecko engine (version and other jazz) that is on the XO? I would like to replicate the XO mozilla engine on my linux box so that I can build my application there, because debugging is easier on my linux box than on the XO. Thanks! Ankita
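The steps Marco describes might look like this at the command line (the repository URL follows the git-clone example later in this thread; treat the exact buildone invocation as an assumption based on the wiki page, not a verified recipe):

```shell
# Fetch sugar-jhbuild, then build just xulrunner and hulahop, in that order
# (buildone syntax is an assumption from the linked wiki page)
git clone git://dev.laptop.org/sugar-jhbuild
cd sugar-jhbuild
./sugar-jhbuild buildone xulrunner
./sugar-jhbuild buildone hulahop
```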
Re: The gecko engine on the XO
git-clone is getting a fatal error and aborting with the message "Network is unreachable". A network trace revealed that dev.laptop.org (crank.laptop.org) is resetting the connection. Is this an intermittent error that should go away when I retry later, or am I doing something wrong?

thanks, Ravi.

[EMAIL PROTECTED] ~]$ git-clone git://dev.laptop.org/sugar-jhbuild sugar-jhbuild
fatal: unable to connect a socket (Network is unreachable)
fetch-pack from 'git://dev.laptop.org/sugar-jhbuild' failed.
[EMAIL PROTECTED] ~]$

12:09:54.108404 IP 10.217.6.55.59099 > crank.laptop.org.9418: S 2797094520:2797094520(0) win 5840 <mss 1460,sackOK,timestamp 3724308108 0,nop,wscale 2>
12:09:54.108603 IP crank.laptop.org.9418 > 10.217.6.55.59099: R 1:1(0) ack 2797094521 win 5840

On 9/18/07, Marco Pesenti Gritti [EMAIL PROTECTED] wrote:
> Browser example here:
> https://dev.laptop.org/git?p=projects/hulahop;a=blob;f=tests/test-web-view.py;h=63ede6b7e100a4c3dc40c434997a05fa9f7bae64;hb=HEAD
>
> The easiest way to build it on your box is sugar-jhbuild:
> http://wiki.laptop.org/go/Sugar_with_sugar-jhbuild
>
> You can probably just build xulrunner and hulahop (in that order) using the buildone command.
>
> Marco
>
> On 9/18/07, ankita prasad [EMAIL PROTECTED] wrote:
>> Hi All, I am building an application for the XO and I want to embed a browser in the application. Can somebody tell me the details of the Gecko engine (version and other jazz) that is on the XO? I would like to replicate the XO mozilla engine on my linux box so that I can build my application there, because debugging is easier on my linux box than on the XO. Thanks! Ankita
Re: The gecko engine on the XO
On 9/18/07, Marco Pesenti Gritti [EMAIL PROTECTED] wrote:
> The easiest way to build it on your box is sugar-jhbuild:

Or, if you have Fedora 7, you can also just install the olpc xulrunner/hulahop rpms on it.

Marco
Re: The gecko engine on the XO
> Or, if you have Fedora 7, you can also just install the olpc xulrunner/hulahop rpms on it.

Would this be the closest approximation of the gecko engine on the XO? Do you know which version of the engine is present on the XO?

Thanks! Ankita
Re: The iGoogle bug
Bernardo Innocenti wrote:
> In the short term, one quick and dirty fix would be to disable the EXA upload hook. The driver may even become somewhat faster because we avoid the extra copy! Here's a version of amd_drv with this kludge implemented and full debug symbols enabled:
> http://www.codewiz.org/pub/amd_drv.so.13 - built for Xorg 1.3
> http://www.codewiz.org/pub/amd_drv.so.14 - built for Xorg 1.4
> Aleph, would you mind installing it in your test environment and running some of your benchmarks before and after the cure?

Results are here:
https://dev.laptop.org/ticket/3352#comment:10
https://dev.laptop.org/ticket/3352#comment:11

Considering Amdahl's law, I'd say this patch *greatly* improves performance in amd_drv. Is it OK to commit, for now?

-- // Bernardo Innocenti - http://www.codewiz.org/ \X/ One Laptop Per Child - http://www.laptop.org/
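As a reminder of why Amdahl's law matters for a fix like this: the overall speedup is bounded by the fraction of runtime the upload hook actually accounts for. A quick illustrative calculation (the 60% fraction and 10x factor below are made-up numbers, not taken from the ticket results):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of runtime is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# If the EXA upload path accounted for 60% of a benchmark's time and
# bypassing it makes that part 10x faster, the whole benchmark only
# improves by about 2.2x:
print(round(amdahl_speedup(0.6, 10.0), 2))  # 2.17
```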
Re: The iGoogle bug
Jordan Crouse wrote:
> NAK. What you are suggesting will completely break the entire Cimarron infrastructure, which is not something I am willing to do at this stage. Much time (and by that I mean nearly 4 years) went into writing, verifying and validating this code. We have a bug that needs to be fixed - and that doesn't happen by completely removing the internal workings of the engine.

You make it seem like this code was the product of 4 years of refinement. In reality, the parts I proposed to refactor are one reason why it has been struggling so much.

Yes, this particular bug *could* be fixed by adding yet another special case to the code. But don't you see there will never be an end to this? This is already the fifth or sixth serious amd_drv bug I've fixed in a short span of time, and the more I look at the code, the more I'm convinced there are several others coming.

I can't even imagine how hard it must have been to write this much code without even enabling compiler warnings, which I did a couple of months ago after spending a day chasing a missing prototype. What you call verified and validated code is actually a very fragile, complex set of ad-hoc checks and magic numbers. The slightest environmental change breaks it badly, as happened multiple times when I upgraded the X server from 1.1 to 1.3. Debugging this class of problems, namely memory corruption, uninitialized values, and missing synchronization, is *extremely* hard and time consuming.

I'm suggesting a way out... in a matter of weeks rather than years.

-- // Bernardo Innocenti - http://www.codewiz.org/ \X/ One Laptop Per Child - http://www.laptop.org/
Re: The iGoogle bug
On 18/09/07 20:09 -0400, Bernardo Innocenti wrote:
> Jordan Crouse wrote:
>> NAK. What you are suggesting will completely break the entire Cimarron infrastructure, which is not something I am willing to do at this stage. Much time (and by that I mean nearly 4 years) went into writing, verifying and validating this code. We have a bug that needs to be fixed - and that doesn't happen by completely removing the internal workings of the engine.
>
> You make it seem like this code was the product of 4 years of refinement. In reality, the parts I proposed to refactor are one reason why it has been struggling so much. Yes, this particular bug *could* be fixed by adding yet another special case to the code. But don't you see there will never be an end to this? This is already the fifth or sixth serious amd_drv bug I've fixed in a short span of time. The more I look at the code, the more I'm convinced there are several others coming. I can't even imagine how hard it must have been to write this much code without even enabling compiler warnings, which I did a couple of months ago after spending a day chasing a missing prototype. What you call verified and validated code is actually a very fragile, complex set of ad-hoc checks and magic numbers. The slightest environmental change breaks it badly, as happened multiple times when I upgraded the X server from 1.1 to 1.3. Debugging this class of problems, namely memory corruption, uninitialized values, and missing synchronization, is *extremely* hard and time consuming. I'm suggesting a way out... in a matter of weeks rather than years.

There hasn't been a single bug that you or anybody else has fixed that was the fault of Cimarron. Not one. Stop mischaracterizing the situation. If you want to rewrite the driver and the engine, then please, be my guest, but you'll be in the middle of it for several years, and it will continue to be buggy long after you have given up and moved on to other hardware.
This is not what an OLPC representative should be proposing weeks before the final images are due.

Jordan

-- Jordan Crouse Systems Software Development Engineer Advanced Micro Devices, Inc.
Re: Power envelope (was Re: radio off guarantee?)
On Tue, 2007-09-18 at 12:12 -0400, Mike C. Fletcher wrote:
> John Watlington wrote:
>>> Suppose I'm someplace where I don't expect or want to do any mesh networking. How much would turning off the radio help battery life?
>>
>> Last I checked, the effect was very small. There will be occasional scans as the unit hunts around for nearby radios. One could save more by making those scans back off rather than provide a UI element. We see 700 to 800 mW consumed by the mesh interface, and as with most WiFi interfaces, receiving consumes as much power as transmitting.
>
> Isn't that about 1/2 of our total power budget? .7 to .8 W on a 2W machine? I'd thought the Marvell chip was supposed to be down in the .3W range. Obviously I haven't been following the hardware closely enough.

The .3W number is applicable to _infrastructure_ mode with the Marvell part, because the device (like most 802.11 devices) can go into powersave poll mode when in a BSS. In ad-hoc situations, of course, there is no central controller buffering frames and sending out the TIM, and therefore you have to have your RX pieces powered on most of the time to receive traffic from other stations, or you start missing frames entirely. You can't use the same powersave algorithms in a mesh (or even an ad-hoc) network as you can in an infrastructure network. Of course, everything in the world right now pretty much uses infrastructure networks, and so the powersave algorithms are tuned for that. But mesh requires new powersave algorithms, or at least an implementation of existing algorithms. There's enough work getting the mesh implemented and working well right now without throwing power algorithms into the mix.

Dan

> (http://wiki.laptop.org/go/Power_Management is where I got the .3W idea.) Is that .7W something we'll be able to bring down with software to reduce collisions or the like? The University of Toronto's systems group has algorithms that optimise for power savings by reducing collisions, if that would help.
> From the same page, we'd only last 20 hours at ~.7W draw with a *full* charge (and with hand-charging a full charge likely won't be there, especially at the end of the day), which means that the machines are going to be dead each morning (having drained their batteries keeping up an unused network all night). With a 40 hour period (.3W) we were possibly going to have some juice left in the morning (you need to be at 1/4 power when you shut off for the night), but that becomes less likely with a 20 hour period (you need to be at 1/2 power when you shut off for the night).
>
>> While it makes sense to turn off the wireless networking interface on developer machines, we are hesitant to add this to the UI. We are really relying on laptops to extend the mesh away from the school.
>
> If we really are spending half our power budget on the mesh network, I would imagine that kids will want to be able to turn it off. Yes, meshing is good, but if you could double your battery life while you're at home reading, that would be very worthwhile too. What about *only* keeping the mesh up (continuously) if you are actually routing packets? That is, wake up the mesh interface every few minutes, check to see if your new routing structure makes you a link that is desirable, and only stay on if that's the case. (I realise I'm talking about long-term projects here.) You'd need the machines to be able to queue up messages to go out so that they could leap at the request and say "yes, please stay up" when a linking machine pops on to check for need (inferring from activity requests on the local machine would likely be sufficient, but a simple UI might be workable too). If the network is inactive for X period, go into periodic sleep mode again to save energy. Even if you had to wake up the processor for a few seconds to run the code that decides whether to sleep, it's only about double the power of running the mesh for those seconds, and it could save you the entire power budget for a few minutes.
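[The runtime figures quoted above can be sanity-checked quickly. The ~14 Wh battery capacity below is an inference from "20 hours at ~.7W", not an official spec from this thread:]

```python
# Inferred capacity: 0.7 W * 20 h = 14 Wh (an assumption, not a quoted spec)
BATTERY_WH = 14.0

def runtime_hours(draw_watts):
    """Hours of runtime from a full charge at a constant average draw."""
    return BATTERY_WH / draw_watts

print(round(runtime_hours(0.7), 1))  # 20.0 -> matches the "20 hours" figure
print(round(runtime_hours(0.3), 1))  # 46.7 -> roughly the "40 hour" ballpark
```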
> Of course, in the field it may be that the mesh is so densely used that it's never going to go down with that algorithm, but it seems like something we need to investigate if we really are losing that much power to the interface. Just a thought, Mike
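[Mike's wake-and-check scheme above can be sketched as a tiny decision function: keep the radio powered only while this laptop is useful to the mesh or has traffic queued, with an idle timeout before dropping back to periodic sleep. All names and the 300-second timeout are made up for illustration; this is not from any OLPC code:]

```python
def radio_should_stay_on(currently_on, routing_for_peers,
                         queued_frames, idle_seconds, idle_limit=300):
    """Decide the radio state for the next wake interval."""
    if queued_frames > 0 or routing_for_peers:
        return True                 # we are useful to the mesh: stay up
    if currently_on and idle_seconds < idle_limit:
        return True                 # recently active: linger a little
    return False                    # nobody needs us: back to periodic sleep

# A laptop that wakes up and finds it is not a desirable link sleeps again:
print(radio_should_stay_on(False, False, 0, 0))   # False
# A laptop relaying traffic for a neighbour stays powered:
print(radio_should_stay_on(False, True, 0, 0))    # True
```

As Mike notes, in a densely used mesh this predicate might simply never return False, so the savings would only show up for isolated laptops.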
utimes()/gid problem & CoW Link-Breaking
Folks, Herbert Poetzl took a look at our utimes()/gid problem and has produced a patch that he thinks will fix the bug:
http://vserver.13thfloor.at/Experimental/delta-cow-fix13.diff

His patch basically confirms that our work-around for updating (rsync'ing twice) is correct; therefore, I feel quite good about proceeding with live updates. However, Andres or I should try to test the patch tomorrow to see if we can merge it, since, while we have a work-around for updates, we're still at risk of uid/gid corruption when running inside a shallow-copied CoW-linked filesystem.

Michael
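The "rsync'ing twice" work-around presumably looks something like the sketch below (the paths and exact flags are assumptions; the point is only that an identical second pass sees just the metadata differences the buggy utimes()/gid path left behind, and repairs them):

```shell
# First pass copies the data; under the utimes()/gid bug some timestamps
# and group IDs come out wrong on CoW-linked files.
rsync -a new-build/ /target/
# An identical second pass transfers no file data, only the metadata fixes.
rsync -a new-build/ /target/
```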