BUG in 2.4.0: dd if=/dev/random of=out.txt bs=10000 count=100
If I do the dd line in the title under 2.4.0 I get an out.txt file of 591 bytes. If I do the same thing from /dev/zero, I get the expected 1,000,000 byte file.

I've shoehorned 2.4.0 onto a fresh Red Hat 7.0 install, which could quite easily be a bad thing; yes, I ripped out their strange gcc and symlinked kgcc->gcc to do the compile. But other than this it seems to be working. (So far...)

dd says it completes happily even when copying from random: 0+100 records in, 0+100 records out. It takes about thirty seconds to finish on the dual gigahertz processor Intel box I'm using to test it, which implies it's actually performing the truly impressive waste of CPU cycles I'm requesting from it. I'm just not getting the data in my file.

Am I doing something wrong? My dd is from fileutils-4.0x-3. (Straight Red Hat 7.0, I think...) Didn't see anything about that in Documentation/Changes... I'll be happy to try one of the prepatches if anybody thinks they've addressed this problem already.

Anybody? Need more debugging info? Want me to wave a dead chicken at something specific? Stick printk's into the kernel?...

Rob

__
Do You Yahoo!?
Get email at your own domain with Yahoo! Mail. http://personal.mail.yahoo.com/
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/
Re: BUG in 2.4.0: dd if=/dev/random of=out.txt bs=10000 count=100
--- David Ford <[EMAIL PROTECTED]> wrote:
> Rob Landley wrote:
> > If I do the dd line in the title under 2.4.0 I get an
> > out.txt file of 591 bytes.
>
> It isn't broken, you have no more entropy. You must have some system
> activity of various sorts before you regain some entropy. Moving the
> mouse around, hitting keys, etc, will slowly add more entropy.
>
> -d

I'd wondered what urandom was for. Thanks.

Rob
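For scripting purposes, the practical difference is that /dev/urandom never blocks waiting for the entropy pool to refill, so a full-sized read always completes. A minimal userspace sketch (my own illustration, not code from this thread; the function name is made up):

```c
#include <fcntl.h>
#include <unistd.h>

/* Read exactly n bytes from /dev/urandom, which never blocks on
 * entropy (unlike the classic /dev/random pool dd was draining).
 * Returns n on success, -1 on error. */
ssize_t read_urandom(unsigned char *buf, size_t n)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0) {
            close(fd);
            return -1;
        }
        got += r;
    }
    close(fd);
    return (ssize_t)got;
}
```

So `dd if=/dev/urandom of=out.txt bs=10000 count=100` would have produced the full 1,000,000 bytes, at the cost of the stronger guarantees the blocking pool was meant to provide.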
qlogicfc.c hard lockups in 2.4.0
I've got a 20 drive raid0 set up off of two Asus fiber channel controllers using the qlogicfc.c driver. (I think it's a variant of ISP2200.) Dual Pentium 866 SMP machine, apparently stable under NT. Half a gig of RAM, booting off of a different drive (hanging off of an LSI1010 SCSI controller not involved in the raid; I doubt that's involved).

The problem is that when I go:

time dd if=/dev/zero of=/root/raid/zerofile bs=100 count=8000

The sucker not only has visible bus stalls (the array has lots of lights on the front), but sometimes the whole machine locks hard (i.e. num-lock no longer affecting the keyboard LEDs). Before it goes bye-bye (and it doesn't always; sometimes it finishes with lower throughput than it should have...) I get rather a lot of the following in /var/log/messages:

Jan 16 03:40:55 dalek kernel: qlogicfc0 : no handle slots, this should not happen.
Jan 16 03:40:55 dalek kernel: hostdata->queued is 4f, in_ptr: 49
Jan 16 03:40:55 dalek kernel: qlogicfc0 : no handle slots, this should not happen.
Jan 16 03:40:55 dalek kernel: hostdata->queued is 4f, in_ptr 53

Repeat the above with in_ptr being 58, 5d, 6c, 76, 7b, 5, f, 19, 20, 2a, 2f, and 39. At that point, the machine achieved lockup and the next thing in the log is syslog restarting from the (hard power off) reboot.

Help! Should I enable any of the debugging macros in qlogicfc.c? I've reproduced this lockup three times so far this morning. It's rather annoying, but making it happen again doesn't seem to be a problem...

Rob
Re: qlogicfc.c hard lockups in 2.4.0
More data: I changed QLOGICFC_REQ_QUEUE_LEN from 127 to 255, and the lockup hasn't happened again yet, in circumstances that fairly reliably reproduced it before. (Could just mean I have to hit it harder.)

BUT: I am still getting those "no handle slots, this should not happen" messages. (Especially when I play with nicing my dd down to -20, although that may be cheating...) These seem to match with noticeable stalls in the drive array light blinkiness, where about every 2 seconds there's a visible pause in the whole array.

If I do a read that's short enough to NOT cause one of these stalls, I get 178 megabytes/second throughput (which is about up towards the theoretical limit of the two qlogic fiber channel cards and the ten drives per card). With the stalls, it gets dragged down into the 150 megabyte per second range. That's right on the border of causing dropped frames capturing from the hdtv card, and stalls playing to it. (I forget exactly what I need, something like 157 megabytes/second... Bigger is better, of course...)

According to top, at least one CPU is pegged during all this, by the way. dd uses 99.9% of available CPU time, with bdflush taking another 32% or so (on the other cpu, one assumes. :) Guess: lots of copying of the zero page to fill up dd's buffers? I'm trying to do an I/O bound test, may have to write my own it seems. Oh well, no biggie... (The real application for this is dma into and out of a mondo ring buffer, not particularly cpu intensive at all, you'd think...)

Fun fun fun... Sun's coming up in an hour, I should probably get some sleep.

Rob
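A hand-rolled I/O-bound test of the kind mused about above could reuse one preallocated buffer instead of pumping everything through dd's read/write loop from /dev/zero. This is a hypothetical userspace sketch (not code from the thread; the function name is mine):

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write `count` buffers of `bufsize` bytes to fd, reusing one
 * zero-filled buffer so almost no CPU is spent generating data.
 * Returns total bytes written, or -1 on error. */
long write_test(int fd, size_t bufsize, int count)
{
    char *buf = malloc(bufsize);
    if (!buf)
        return -1;
    memset(buf, 0, bufsize);
    long total = 0;
    for (int i = 0; i < count; i++) {
        size_t done = 0;
        while (done < bufsize) {        /* handle short writes */
            ssize_t w = write(fd, buf + done, bufsize - done);
            if (w <= 0) {
                free(buf);
                return -1;
            }
            done += w;
        }
        total += (long)done;
    }
    free(buf);
    return total;
}
```

Timing this against a file on the array, with a buffer big enough to keep the controller queue full, would separate the driver's throughput from dd's CPU overhead.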
Re: qlogicfc.c hard lockups in 2.4.0 - solved?
I think the hangs were actually CAUSED by the messages being printked. If I make those go away, it stops getting unhappy. (I suspect repeatedly printk-ing stuff from the middle of the scsi layer with interrupts disabled and other fun stuff occurring is not a good thing. Delays something or other long enough to trigger a watchdog timer and make the system go bye-bye.)

Increasing the handle array thingy to 511 seems to have made the problem go away for me almost entirely. (Still periodic slight bus stalls from overrunning the scsi command queue (the test right before the "should never happen" test), but it recovers pretty quickly.) Throughput's a little over 160 million bytes per second on both writes and reads. That's about 20 million per second below the maximums (due to the stalls), but that's survivable at the moment... Close enough I can go home and get some sleep, anyway.

Unresolved issues:

That handle thingy should probably dynamically scale somehow. (Maybe the "out of handles" behavior could resize the array to the next 2^x-1 bump? I can try to whip up a patch to this effect if nobody thinks this is too crazy.)

Bus stalls take too long to recover from, slowing throughput. (Pure scsi-ness, I may investigate later but haven't a CLUE on this right now. I'm guessing.)

Rob
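The "next 2^x-1 bump" idea, sketched in userspace terms. A real driver patch would use kmalloc under the host lock and would have to cope with commands already in flight, so this is only an illustration of the resize arithmetic (all names here are made up, not qlogicfc.c's):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical handle table whose size is always 2^n - 1,
 * mirroring QLOGICFC_REQ_QUEUE_LEN-style sizing. */
struct handle_table {
    void **slot;
    size_t size;        /* 127, 255, 511, ... */
};

/* Grow to the next 2^n - 1 size when all slots are taken.
 * Returns 0 on success, -1 on allocation failure. */
int handle_table_grow(struct handle_table *t)
{
    size_t newsize = t->size * 2 + 1;   /* 127 -> 255 -> 511 ... */
    void **bigger = realloc(t->slot, newsize * sizeof(*bigger));
    if (!bigger)
        return -1;
    /* zero the newly added slots so they read as free */
    memset(bigger + t->size, 0, (newsize - t->size) * sizeof(*bigger));
    t->slot = bigger;
    t->size = newsize;
    return 0;
}
```

The doubling-plus-one keeps the size a power-of-two-minus-one, so any mask-based index wrap in the ring logic keeps working unchanged.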
New rtl8139 driver prevents ssh from exiting.
The kernel thread the new rtl8139 driver spawns apparently wants to write to stdout, because it counts as an unfinished process that prevents an ssh session from exiting.

I have a script that remotely reconfigures subnets in a vpn, which gets run via an ssh in through eth0 and does, among other things:

ifconfig eth1 down
ifconfig eth1 up

Hence kicking off the thread. I have to run "killall eth1" at the end of the script to make ssh exit. (This doesn't seem to have any negative impact on eth1's functioning.)

Anybody feel like shedding light on this?

Rob
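The usual userspace workaround when ssh hangs on a lingering process is to point the process's stdio at /dev/null, so nothing holds the session's pipes open. A hedged sketch of that idiom (the helper name is mine, not from this thread):

```c
#include <fcntl.h>
#include <unistd.h>

/* Redirect fd (e.g. 1 for stdout, 2 for stderr) to /dev/null, so a
 * long-lived process stops holding an ssh session's output pipe open.
 * Returns 0 on success, -1 on error. */
int point_at_devnull(int fd)
{
    int nul = open("/dev/null", O_RDWR);
    if (nul < 0)
        return -1;
    if (dup2(nul, fd) < 0) {
        close(nul);
        return -1;
    }
    if (nul != fd)
        close(nul);
    return 0;
}
```

The shell equivalent would be redirecting the offending commands' output, e.g. `ifconfig eth1 up >/dev/null 2>&1` — though if the driver's kernel thread really inherits the session's descriptors, only a driver fix closes them properly.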
Re: New rtl8139 driver prevents ssh from exiting.
--- Andrew Morton <[EMAIL PROTECTED]> wrote:
> Rob Landley wrote:
> >
> > The kernel thread the new rtl8139 driver spawns
> > apparently wants to write to stdout, because it counts
> > as an unfinished process that prevents an ssh session
> > from exiting.
>
> Does this help?

Assuming this applies cleanly against 2.2.19 (production box; 2.4.3 wasn't ready and 2.4.4 wasn't available when the first prototypes had to go to test), I'll let you know in about an hour.

Thanks,

Rob
Re: New rtl8139 driver prevents ssh from exiting.
--- Andrew Morton <[EMAIL PROTECTED]> wrote:
> ack. You never said 2.2.19 :(
>
> It won't apply...

No, but this one did. (Never underestimate the power of somebody with source code, a text editor, and the willingness to totally hose their system.) And it fixed the problem.

Thank you.

Rob

patch-2.2.19
Re: Break 2.4 VM in five easy steps
>I realize that assembly is platform-specific. Being
>that I use the IA32 class machine, that's what I
>would write for. Others who use other platforms could
>do the deed for their native language.

Meaning we'd still need a good C implementation anyway for the 75% of platforms nobody's going to get around to writing an assembly implementation for this year, so we might as well do that first, eh?

As for IA32 being everywhere, 16 bit 8086 was everywhere until 1990 or so. And 64 bitness is right around the corner (iTanic is a pointless way of de-optimizing for memory bus bandwidth, which is your real bottleneck and not whatever happens inside a chip you've clock multiplied by a factor of 12 or more. But x86-64 looks seriously cool if AMD would get off their rear and actually implement sledgehammer in silicon within our lifetimes. And that's probably Transmeta's way of going 64 bit eventually too. (And that was obvious even BEFORE the cross licensing agreement was announced.))

And interestingly, an assembly routine optimized for 386 assembly just might get beaten by C code compiled for Athlon optimization. It's not JUST "IA32". Memory management code probably has to know about the PAE addressing extensions, different translation lookaside buffer versions, and interacting with the wonderful wide world of DMA. Luckily in the kernel we just don't do floating point (MMX/3DNow/whatever it was they're so proud of in Pentium 4 whose acronym I've forgotten at the moment. Not SLS, that was a Linux distribution...)

If you're a dyed-in-the-wool assembly hacker, go help the GCC/EGCS folks make a better compiler. They could use you. The kernel isn't the place for assembly optimization.

>Being that most users are on the IA32 platform, I'm
>sure they wouldn't reject an assembly solution to
>this problem.

If it's unreadable to C hackers, so that nobody understands it, so that it's black magic that positively invites subtle bugs from other code that has to interface with it...
Yes they darn well WOULD reject it. Simplicity and clarity are actually slightly MORE important than raw performance, since if you just wait six months the midrange hardware gets 30% faster.

The ONLY assembly that's left in the kernel is the stuff that's unavoidable, like boot sectors and the setup code that bootstraps the first kernel init function in C, or perhaps the occasional driver that's so amazingly timing dependent it's effectively real-time programming at the nanosecond level. (And for most of those, they've either faked a C solution or restricted the assembly to 5 lines in the middle of a bunch of C code. Memo: this is the kind of thing where profanity gets into kernel comments.) And of course there are a few assembly macros for half-dozen line things like spinlocks that either can't be done any other way or are real bottleneck cases where the cost of the extra opacity (which is a major cost, that is definitely taken into consideration) honestly is worth it.

> As for kernel acceptance, that's an
>issue for the political eggheads. Not my forte. :-)

The problem in this case is an O(n^2) or worse algorithm is being used. Converting it to assembly isn't going to fix something whose cost explodes as the input grows; it just means that instead of blowing up at 2 gigs it now blows up at 6 gigs. That's not a long term solution. If eliminating 5 lines of assembly is a good thing, rewriting an entire subsystem in assembly isn't going to happen. Trust us on this one.

Rob
Re: Any limitations on bigmem usage?
On Tuesday 12 June 2001 12:29, Alan Cox wrote:
> If your algorithm can work well with say 2Gb windows on the data and only
> change window every so often (in computing terms) then it should be ok, if
> its access is random you need to look at a 64bit box like an Alpha, Sparc64
> or eventually IA64

No, eventually Sledgehammer. You know that IA64 will never ship, because the performance will always suck. The design is fundamentally flawed.

The real bottleneck is the memory bus. These days they're clock multiplying the inside of the processor by a factor of what, twelve? Fun as long as you're entirely in L1 cache and aren't task switching. But that's basically an embedded system.

CISC instructions are effectively compressed. When you exhaust your cache, your next 64 bit gulp can get more than 2 instructions, you can get more like 5 (which still means you're getting about 1/4 the performance of in-cache execution, although L2 and L3 caches help here, of course...) But optimizing for something that isn't actually your bottleneck is just dumb, and that's exactly what Intel did with the move to VLIW/EPIC/IA64. Three 64 bit instructions is 192 bits, whereas Pentium can fit more than that in a single 64 bit chunk. Brilliant. You need what, a 6x larger cache just to break even with the amount of time you're running in-cache? And of course the compiler is GOING to put NOPs in that because it won't always be able to figure out something for the second and third cores to do this clock, regardless of how good a compiler it is. That's just beautiful.

This is why they were so desperate for RAMBUS. They KNOW the memory bus is killing them, they were grasping at straws to fix it. (Currently they're saying that a future vaporware version of iTanium will fix everything, but it's a fundamental design flaw: the new instruction set they've chosen is WAY less efficient going across the memory bus, and that's the real limiting factor in performance.)
Transmeta at least sucks CISC in from main memory to keep the northbridge bottleneck optimized. And they have a big cache, and they're still using 32 bit instructions so they only need 96 bits per chunk (or atom or whatever they're calling it these days).

Sledgehammer is the interesting x86 64 bit instruction set. You still have the cisc/risc hybrid that made Pentium work. (And, of course, backwards compatibility that's inherent in the design rather than bolted onto the side with duct tape and crazy glue.) Yeah, the circuitry to make it work is probably fairly insane, but at least the Athlon design team made all the racy logic go away so they can migrate the design to new manufacturing processes without four months of redesign work. (And no, making insanely long pipelines in Pentium 4 is not a major improvement there either. Cache exhaustion still kills you, so does branch misprediction. Stalling a longer pipeline is more expensive.)

I saw an 800 mhz iTanium which benchmarked about the same (running 64 bit code) as a Pentium III 300 mhz running 32 bit code. That's just sad. Go ahead and blame the compiler. That's still just sad. And a design flaw in the new instruction set is not something that can be easily optimized away.

Rob
Re: Getting A Patch Into The Kernel
On Tuesday 12 June 2001 18:34, Craig Lyons wrote:
> We have a patch that fixes this and are wondering if it
> is possible to get this patch into the kernel, and if so, how this would be
> done?

Well, you start by reading this:

http://www.linuxhq.com/kernel/v2.4/doc/SubmittingPatches.html

Which basically says that you post it here, with a title along the lines of "[PATCH] promise IDE raid support". Start the body of your email with a brief description of the patch (the above is fine; mentioning that this is an official patch from Promise is nice), and then include the patch itself at the end of the email in plain text. (Linus won't read MIME attachments, although others sometimes do and forward them to him. Sometimes.)

You do know how to make a unified diff using "diff -u", right? (I'm assuming you have an includeable patch already prepared?) Also, try to use an email program that doesn't mangle whitespace. (It's a nit-pick, but it's good hygiene.) Preserving the difference between spaces and tabs is generally considered a good thing.

Rob
Hour long timeout to ssh/telnet/ftp to down host?
I have scripts that ssh into large numbers of boxes, which are sometimes down. The timeout for figuring out that a box is down is over an hour. This is just insane. Telnet and ftp behave similarly, or at least they outlasted the 5 minutes I was willing to wait. Basically anything that calls connect(). If the box doesn't respond in 15 seconds, I want to give up.

Is this a problem with the kernel or with glibc? If it's the kernel, I'd expect a /proc entry where I can set this, but I can't seem to find one. Is there one? What would be involved in writing one?

If it's glibc I'm probably better off writing a wrapper to ping the destination before trying to connect, or killing the connection after a timeout with no traffic. But both of those are really ugly solutions.

Anybody have any light to shed on the situation?

Rob
Re: Hour long timeout to ssh/telnet/ftp to down host?
>You can tune things by setting the tcp-timeout probably..I don't
>know exactly where to set this..

Aha, found it.

/proc/sys/net/ipv4/tcp_syn_retries

I am a victim of the exponential retry falloff, it would seem. syn_retries of 1 takes a few seconds, 3 takes less than half a minute, and 5 takes several minutes. The default value of 10 is what's giving me the problem (something like 20 minutes to time out, according to my earlier tests.) Then the fact that ssh re-attempts the connection four times before actually failing is where I got my hour-and-change timeout. ("ssh -v -v -v" comes in handy...)

Fun. Can we change the default value for this to something more sane, like 5? Exponential falloff is not good when your order of magnitude hits double digits.

> You probably don't want all tcp to time out at 15 seconds anyway, so

Just connection initiation. (If their IP stack hasn't replied to me by then, I doubt it's going to.)

> I'd suggest either using non-blocking connect (if you have the code
> that does the connect), or just set a timer (or use sigalarm) when you
> start the attempt, and fail the attempt if the timer or alarm signal
> goes off.

Except I'm using off-the-shelf ssh. (I asked them about this problem a month ago, and there was some discussion of a workaround on their mailing list, but 2.9 came out and still had the same behavior. Apparently, if it's not a problem in OpenBSD, it's not a problem in OpenSSH...)

> > If it's glibc I'm probably better off writing a wrapper to ping the
> > destination before trying to connect, or killing the connection after a
> > timeout with no traffic. But both of those are really ugly solutions.
>
> Ugly is relative, and don't use ping because there is still a race
> condition (ping worked, but by the time you try tcp, the box is down.)

Yeah, but it would eventually time out and recover. I've got ten threads out querying boxes; that's a really rare race condition. And I already acknowledged it was ugly.
:) So the problem is just that tcp_syn_retries' default value of 10 is way too high due to the exponentially increasing gap between each retry.

Rob
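For code you do control, the non-blocking connect suggested above looks roughly like this. This is my own sketch of the standard idiom (set O_NONBLOCK, connect, select on writability, then check SO_ERROR), not a workaround from the OpenSSH list:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to ip:port, giving up after timeout_sec seconds.
 * Returns a connected blocking fd, or -1 on failure/timeout. */
int connect_with_timeout(const char *ip, int port, int timeout_sec)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
        close(fd);
        return -1;
    }
    int fl = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, fl | O_NONBLOCK);    /* non-blocking connect */
    int rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
    if (rc < 0 && errno != EINPROGRESS) {
        close(fd);
        return -1;
    }
    if (rc < 0) {       /* in progress: wait for writability or timeout */
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        struct timeval tv = { timeout_sec, 0 };
        if (select(fd + 1, NULL, &wfds, NULL, &tv) != 1) {
            close(fd);  /* timed out (or select error) */
            return -1;
        }
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
        if (err) {
            close(fd);
            return -1;
        }
    }
    fcntl(fd, F_SETFL, fl);                 /* restore blocking mode */
    return fd;
}
```

This caps only connection establishment at your chosen 15 seconds; established connections keep the normal TCP timeouts.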
Re: Hour long timeout to ssh/telnet/ftp to down host?
On Wednesday 13 June 2001 05:40, Luigi Genoni wrote:
> On Tue, 12 Jun 2001, Ben Greear wrote:
> > You can tune things by setting the tcp-timeout probably..I don't
> > know exactly where to set this..
>
> /proc/sys/net/ipv4/tcp_fin_timeout
>
> default is 60.

Never got that far. My problem was actually tcp_syn_retries. Remember, I was talking to a host that was unplugged. (I wasn't even getting "host unreachable" messages, the packets were just disappearing.) The default timeout in that case is ridiculous due to the exponentially increasing delays between retries. 10 retries wound up being something like 20 minutes.

I set it to 5 and everything works beautifully now. ssh (which retries the connection 4 times, and used to take over an hour to time out) now takes just over 3 minutes, which I can live with.

Rob
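The arithmetic behind "exponentially increasing delays between retries", assuming the classic scheme of an initial 3 second SYN retransmit timer that doubles after each retry (an idealized model; the real kernel also caps individual intervals, so observed totals come out shorter than this uncapped sum):

```c
/* Total seconds spent before giving up, with an initial 3 s timer
 * doubling after each of `retries` retransmits: 3 + 6 + 12 + ... */
long syn_backoff_total(int retries)
{
    long total = 0, delay = 3;
    for (int i = 0; i < retries; i++) {
        total += delay;
        delay *= 2;
    }
    return total;
}
```

Under these assumptions, 5 retries totals 93 seconds per attempt, while 10 retries totals 3069 seconds; multiply by ssh's four connection attempts and the "over an hour" figure stops being surprising. The point is the shape of the growth: each extra retry roughly doubles the whole timeout.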
Re: [craigl@promise.com: Getting A Patch Into The Kernel] (fwd)
On Wednesday 13 June 2001 03:06, Andre Hedrick wrote:
> No I would not take their code and apply it.
> I might not even want to look at it.

Well, you're maintainer and I'm obviously not, but it's nice to hear you've kept an open mind on this issue. :)

> All I want is the API rules to the signatures and we have them now.
>
> We do not need their driver.

Reinventing the wheel can be fun. Richard Stallman's been doing that for years, since he refuses to take any patch where he can't physically track down the author and make them sign a piece of paper handing the copyright over to the Free Software Foundation. (He's got a file cabinet full of them so he'll have unassailable standing in case he ever has to sue anybody to enforce the GPL on GNU code.) Of course the unfortunate side effect of this is that the GNU project stalled in the late 1980's, and this whole "Linux" thing forked off of it and took over instead, in large part because there was just too much friction getting patches through the maintainer bottleneck, while Linus would accept anything from anybody. (Linus sucked all the developers away from comp.os.minix for the same reason.) But oh well.

> Next insults to linux in this form are unacceptable means of
> communication.
> ... insult omitted
>
> Stating/Implying that Linux Maintainers do not care about "quality".

"Quality" is a loaded word in marketing circles, due to ISO nine zillion and the sickening-sigma stuff and all that. I always think of it in terms of "the most prominent qualities of this product are that it smells bad and tends to explode without warning. Now wrap that up in flowers and make it sound good." And off the marketing droids go...

You are aware that you were speaking to a marketing person from Promise, correct? (He admitted it and everything. We didn't even have to use thumbscrews. Kind of a waste to get them all out and oiled and everything in that case...)
> Oh it gets much worse, but I want to see if the sales for Promise have hit
> hard enough to break their linux-unfriendly attitude.

The dude came hat in hand into the linux-kernel mailing list asking how he could play nice with us (apparently honestly not knowing), and you bit his head off. I don't think sales have hit hard enough to overcome THAT just yet. But I could be wrong...

> Mind you they came begging for help because their sales are off, and I was
> willing to help on the terms of GPL/GNU and mine. But GPL/GNU was too big
> to choke down.

Okay, THERE is the problem. Halfway through the message. Why not start with that next time?

If the problem is that the code will not be made available under the GPL, then of course that IS an insurmountable problem for getting it included in the kernel. But it's entirely possible that our marketing friend didn't know that. It's entirely possible he doesn't know what the GPL -IS-. (If you've been sharing a private conversation with him that hasn't been CC'd to the list, then obviously I could be wrong about this...)

> When the sales hurt enough and they have no choice, I will reconsider.

No, hopefully THEY will reconsider. You couldn't get Linus to accept non-gpl code either.

> Breathe, because you die before I change my position, if you are holding a
> breath.
>
> I do not trust Promise, and three years of their general arrogance is more
> than enough.

I honestly doubt that the suit who just wandered through has a clue what the GPL is. He's not a lawyer, and he doesn't write free software. If he really was trying to help, and he was new to this, wouldn't it be a nicer first impression to clearly say "this licensing issue is blocking the inclusion of your code" so he knows what the problem is, rather than "we're biased against Promise, so we're going to pick on you and call you names"?

> Mind you that at one point I had two people in the San Jose
> office that were friendly but they are now gone.
If you've approached every new person from Promise this way ever since, I'm not exactly surprised you haven't met more like them. (I honestly hope that the previous sentence was a harsh and unfair assessment, and that you haven't been doing that.)

No corporation is truly a monolithic entity. It's just a bunch of disjointed individuals who spend a lot of time in meetings filling out forms. You can deal with them as a faceless professional with a known set of duties, or you can try to deal with them as a human being. (Either way has been known to work, a bit like having two interfaces for the same object in Java. I learned that working at IBM. Plus the concepts of plausible deniability, least expected effort, a sort of judo approach to political infighting, that forgiveness is an order of magnitude easier to get than permission because punishing you takes effort, turning uncertainty to your advantage through the power of procrastination, and that everything I've seen so far in Dilbert is less than 5% off
Re: Configure.help entries for Bluetooth (updated)
Okay, I'll bite. What's HCI stand for? I'm guessing it ends in "Connection Interface", but the H has me stumped. Happy? Hostile? Hysterical? Hippopotamus?

If we're connecting a bluetooth compliant hippopotamus to Linux, I can only hope there's an RFC somewhere explaining how to do it. That's not the kind of thing you want to make up your own interface for. Especially since Alan Cox is entirely likely to get bored enough to try and implement it some day. The pigeons didn't stop him...

Rob

(I can just see Alan now, sneaking up on a hippopotamus with a blue magic marker to color its teeth with, and whatever tiny PDA he's managed to port Linux to by then. A wristwatch, probably. If Telsa is in the area, the question becomes "is it possible to crash a hippopotamus?")

I need to go home now.
Re: Download process for a "split kernel" (was: obsolete code must die)
On Thursday 14 June 2001 08:14, David Luyer wrote:
> Well, I'm actually looking at the 2nd idea I mentioned in my e-mail -- a
> very small "kernel package" which has a config script, a list of config
> options and the files they depend on and an appropriately tagged CVS tree
> which can then be used for a compressed checkout of a version to do a
> build. (Or maybe something more bandwidth-friendly than CVS for the
> initial checkout.)
>
> Maybe I'll find the spare time to do it, maybe I won't, either way I won't
> post any more on the subject until I have something tangible (so far I've
> just done the 'easy bit': written a quick shell script which imported 2.4.x
> into a tagged CVS tree; the 'hard bit', to write a script to analyse each
> kernel rev and determine which files are used by which config options and
> mix that in together with the minimal install for a 'make menuconfig' will
> take somewhat longer).
>
> David.

You might want to float this idea by Eric Raymond. It's POSSIBLE (distant but possible) that the new CML2 stuff might make this sort of thing easier to automate. Correction, it's possible CML2 might make this POSSIBLE to automate. It sounds like it would still be a female dog and a half to implement. But I'm not the one to ask...

Rob
Re: Alan Cox quote? (was: Re: accounting for threads)
On Tuesday 19 June 2001 12:52, Larry McVoy wrote:
> On Tue, Jun 19, 2001 at 05:26:09PM +0100, Matthew Kirkwood wrote:
> > On Tue, 19 Jun 2001, Larry McVoy wrote:
> > > ``Think of it this way: threads are like salt, not like pasta. You
> > > like salt, I like salt, we all like salt. But we eat more pasta.''
> >
> > This is oft-quoted but has, IMO, the property of not actually
> > making any sense.
>
> Hmm. Makes sense to me. All it is saying is that you can't make a good
> dinner out of a pile of salt. It was originally said to some GUI people
> who wanted to use a thread per object, every button, scrollbar, canvas,
> text widget, menu, etc., are all threads. What they didn't figure out is
> that a thread needs a stack, the stack is the dominating space term, and
> they were looking at a typical desktop with around 9,000 objects. Times,
> at that point, an 8K stack. That was 73MB in stacks. Pretty darned stupid
> when you look at it that way, huh?

I've seen worse. Back around '93 or so I took a look at Linux and wound up putting OS/2 on my desktop instead, for 2 reasons.

1) Diamond Stealth video card. It worked in OS/2, didn't work in Linux, and the reason was you guys couldn't get vendor support while IBM could. According to this list, it was entirely possible this sort of problem would NEVER be addressed. (We got around it with reverse engineering and critical mass, but at the time hardware was changing way faster than we could keep up with it.)

2) Not only did Linux not have threads (at all), it didn't plan to have threads, and anybody who brought up the idea of threads was dismissed. Considering this was long before clone, and SMP hardware was starting to come into the high end and looked like it might wind up on the desktop eventually (who knew MS would keep DOS around another ten years, unable to understand two processors, two displays, two mice, two keyboards, and barely able to cope with two hard drives under a 26 letter limit...)
So I wound up working at IBM doing OS/2 development for a couple years. On a project called Feature Install, which was based on a subclassed folder in the workplace shell (object oriented desktop). The idea was you fill up a folder with files, drag and drop them from diskette to desktop, and they install themselves automatically. (Went out the window when a single install object had to span two disks, but oh well. By then, they were stuck with the design. I inherited it after a year or two of scar tissue had accumulated.)

They created THREE threads for every single object, and the objects could nest arbitrarily deep. (Modular installs with sub-components.) The OS had a default of 1024 threads (expandable to 4096, I believe, before they hit a 64k limit somewhere in the coding). When they made up a test object hierarchy for all the components of the entire OS, it created so many threads the system ran out and got completely hosed. I had a command line window open, but couldn't RUN anything, since anything it tried to spawn required a thread to run. (Child of the shell.) One three finger salute later, the desktop comes back up automatically, with this nasty tangle of objects STILL there, and of course hoses itself AGAIN as the desktop instantiates all those objects in memory with their three threads apiece, and runs out again... Those were the days.

If I remember correctly, I replaced ALL those threads with a single thread doing all the work, a mutex-protected linked list of work to do, and an event semaphore to tell it to do it.

> Nobody is arguing that having more than one thread of execution in an
> application is a bad idea. On an SMP machine, having the same number of
> processes/threads as there are CPUs is a requirement to get the scaling
> if that app is all you are running. That's fine. But on a uniprocessor,
> threads are basically stupid.
Sometimes they're an easy way to get asynchronous behavior, and to perform work in the background without the GUI being locked up. But the difference between "processes" and "threads" there is academic. Processes with shared memory and some variant of semaphores to avoid killing each other in it. Same thing. And I think the most I've ever REALLY used is about three for a whole program. (Unless you're talking about a server with a thread pool handling connections. And yeah that could be done with poll and non-blocking I/O, but not if you're shelling out to ssh and such...) > The only place that I know of where it is > necessary is for I/O which blocks. And even there you only need enough > to overlap the I/O such that it streams. And processes will, and do, > work fine for that. And nonblocking GUI elements, and breaking up work for multiple processors, and probably a few other things. But I think the MAIN difference between the two camps is that the people who despise threads consider them to be a lot different from processes, and
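The replacement described above (one worker thread, a mutex-protected list of work, an event semaphore to kick it) might look like this in modern Java. This is a hypothetical reconstruction, not the original OS/2 code; the class and method names are mine, with a lock object standing in for the mutex and wait()/notify() standing in for the event semaphore:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the single-worker pattern: a lock-protected queue plays the
// mutex-protected linked list, and wait()/notify() play the event semaphore.
class WorkerQueue {
    private final Object lock = new Object();       // the "mutex"
    private final Queue<Runnable> work = new ArrayDeque<>();
    private boolean done = false;
    private final Thread worker;

    WorkerQueue() {
        worker = new Thread(() -> {
            while (true) {
                Runnable job;
                synchronized (lock) {
                    while (work.isEmpty() && !done) {
                        try { lock.wait(); }        // block on the "event semaphore"
                        catch (InterruptedException e) { return; }
                    }
                    if (work.isEmpty()) return;     // done flag set and queue drained
                    job = work.remove();
                }
                job.run();                          // do the work outside the lock
            }
        });
        worker.start();
    }

    void submit(Runnable job) {
        synchronized (lock) {
            work.add(job);
            lock.notify();                          // post the "event semaphore"
        }
    }

    void shutdown() throws InterruptedException {
        synchronized (lock) { done = true; lock.notify(); }
        worker.join();                              // wait for remaining work
    }
}
```

The point of the pattern is that thousands of objects share one stack: each object that would have owned three threads instead just calls submit(), so thread count stays constant no matter how deep the install hierarchy nests.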
Re: [OT] Threads, inelegance, and Java
On Wednesday 20 June 2001 07:25, Aaron Lehmann wrote:

> On Wed, Jun 20, 2001 at 09:00:47AM +, Henning P. Schmiedehausen wrote:
> > Just the fact that some people use Java (or any other language) does
> > not mean, that they don't care about "performance, system-design or
> > any elegance whatsoever" [2].
>
> However, the very concept of Java encourages not caring about
> "performance, system-design or any elegance whatsoever".

The same arguments were made 30 years ago about writing the OS in a high level language like C rather than in raw assembly. And back in the days of the sub-1-mhz CPU, that really meant something.

> If you cared
> about any of those things you would compile to native code (it exists
> for a reason). Need run-anywhere support? Distribute sources instead.
> Once they are compiled they won't need to be reinterpreted on every
> run.

I don't know about that. The 8 bit nature of java bytecode means you can suck WAY more instructions in across the memory bus in a given clock cycle, and you can also hold an insane amount of them in cache. These are the real performance limiting factors, since the inside of your processor is clock multiplied into double digits nowadays, and that'll only increase as die sizes shrink, transistor budgets grow, and cache sizes get bigger.

In theory, a 2-core RISC or 3-core VLIW processor can execute an interpretive JVM pretty darn fast. Think a jump-table based version (not quite an array of function pointers dereferenced by the bytecode, instead something that doesn't push anything on the stack since you know where they all return to). The lookup is separate enough that it can be done by another (otherwise idle) processor core. Dunno how that would affect speculative execution. But the REAL advantage of this is, you run straight 8 bit java code that you can fit GOBS of in the L1 and L2 caches.

Or if you like the idea of a JIT, think about transmeta writing a code morphing layer that takes java bytecodes.
Ditch the VM and have the processor do it in-cache.

This doesn't mean java is really likely to outperform native code. But it does mean that the theoretical performance problems aren't really that bad. Most java programs I've seen were written by rabid monkeys, but that's not the fault of the language. [1]

I've done java on a 486dx75 with 24 megabytes of ram (my old butterfly keyboard thinkpad laptop), and I got decent performance out of it. You don't make a lot of objects, you make one object with a lot of variables and methods in it. You don't unnecessarily allocate and discard objects (including strings; the stupid implementation of string+string+string is evil) to bulk up the garbage collector and fragment your memory. (The garbage collector does NOT shrink the heap allocated from the OS; the best you can hope for is that some of it'll be swapped out, which is evil, but an implementation problem rather than a language design problem.)

For a while I was thinking of fixing java's memory problems by implementing a bytecode interpreter based on an object table. (Yeah, one more layer of indirection, but it allowed a 64 bit address space, truly asynchronous garbage collection, and packing the heap asynchronously as well. I was actually trying to get it all to work within hard realtime constraints for a while. (No, you can't allocate a new object within hard realtime. But you could do just about everything else.) But Sun owns java and I didn't feel like boxing myself in with one of their oddball licenses. Maybe I'll look at python's bytecode stuff someday and see if any of the ideas transfer over...)

How many instructions does your average processor really NEED? MIT's first computer had 4 instructions: load, save, add, and test/jump. We only need 32 or 64 bits for the data we're manipulating. 8 bit code is a large part of what allowed people to write early video games in 8k of ram.

By the way: Python's approach to memory management is to completely ignore it.
Have you seen the way it deals with loops? (Hint: allocate an object for each iteration through the loop, and usually this is done up front creating a big array of them that is then iterated through.) And this turns out to be a usable enough approach that people are writing action games in python via SDL bindings.

The largest contributing factor to Java's reputation for bad performance is that most java programmers really really suck. (Considering that most of them were brand new to the language in 1998 and even veterans like me only learned it in 1995, it's not entirely surprising. And that for many of them, it's their first language period...)

So if it's possible to write efficient code in python, it's GOT to be possible to write efficient code in Java, which can at least loop without allocating objects in the process. (Yeah, I know about map, I'm trying to make a point here. :)

Rob

[1] Java 1.1 anyway. 1.0
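For what it's worth, the dispatch loop of the interpretive VM sketched earlier in the thread looks something like this. This is a toy stack machine of my own invention (the opcodes are made up, not real JVM bytecode), and it uses a plain switch where a tuned version would use jump-table or computed dispatch:

```java
// Toy illustration of 8-bit bytecode dispatch: one byte per opcode means
// a whole program fits in a few cache lines. Opcodes are invented here.
class ToyVM {
    static final byte PUSH = 0;   // next byte is a literal operand
    static final byte ADD  = 1;   // pop two, push sum
    static final byte MUL  = 2;   // pop two, push product
    static final byte HALT = 3;   // stop; top of stack is the result

    static int run(byte[] code) {
        int[] stack = new int[16];
        int sp = 0, pc = 0;
        while (true) {
            switch (code[pc++]) {            // the dispatch point
                case PUSH: stack[sp++] = code[pc++]; break;
                case ADD:  stack[sp - 2] += stack[sp - 1]; sp--; break;
                case MUL:  stack[sp - 2] *= stack[sp - 1]; sp--; break;
                case HALT: return stack[sp - 1];
                default:   throw new IllegalStateException("bad opcode");
            }
        }
    }
}
```

Note the density argument: the nine bytes in the test program below encode an expression that would take several times that in native instruction bytes.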
Re: Alan Cox quote? (was: Re: accounting for threads)
On Tuesday 19 June 2001 19:31, Timur Tabi wrote:

> Amen. This is one of the reasons why I also prefer OS/2 over Linux.

Preferred. OS/2's day has come and gone. IBM killed it with a stupid diversion into the power PC version between 1993 and 1995. By the time Windows 95 was released, MS had finally (after 11 years) managed to properly clone the original 1984 macintosh, and 90% of the PC user base was no longer actively looking to replace their OS (Windows 3.1 and/or DOS). The window of opportunity for a proprietary OS to replace Microsoft's had closed, and OS/2 4.0 was just plain too late. (If it had been released two years earlier, things might have been different. But it wasn't.)

But the window to commoditize a proprietary niche never closes. The PC clones didn't care whether digital's minicomputers or IBM's mainframes won out in the marketplace, they were quite happy to replace both. Linux is winning because it's free software, not for whatever transient technical reasons make one version of an OS better than another on a given day.

> Feature Installer is a bad example. That software is a piece of crap for
> lots of reasons, excessive threading being only one, and every OS/2 user
> knew it the day it was released. Why do you think WarpIN was created?

I know, and I'm sorry I rescued FI from the corpse of OS/2 for the Power PC. But it was my job, you know. :) (And I did rewrite half of it from scratch. Before that it didn't work at ALL. There were algorithms in there that scaled (failed to scale) to n^n complexity on the object hierarchy. Unfortunately, IBM wouldn't let me rewrite the other half of it, and most of that was pretty darn obnoxious and evil.)

Isn't it interesting that OS/2 turned into a unix platform? (EMX and gcc, where all the software was coming from? I think I tried to point that out to you when I attempted to recruit you into the Linux world at that gaming session at armadillocon in... 1999?
Glad to see you did eventually come over to the chocolate side of the force after all... :)

> Not quite. What makes OS/2's threads superior is that the OS multitasks
> threads, not processes. So I can create a time-critical thread in my
> process, and it will have priority over ALL threads in ALL processes.

And in Linux you can create a time-critical process that shares large gorps of memory where you keep your global variables, and call it a thread, and it works the same way. My only real gripe with Linux's threads right now (other than the fact they keep trying to get us to use the posix api, which sucks. What IS an event variable. What's wrong with event semaphores?) is that ps and top and such aren't thread aware and don't group them right.

I'm told they added some kind of "threadgroup" field to processes that allows top and ps and such to get the display right. I haven't noticed any upgrades, and haven't had time to go hunting myself. (Ever tried to submit a patch to the FSF? They want you to sign legal documents. That's annoying. I usually just send the bug reports to red hat and let THEM deal with it...)

> A lot of OS/2 software is written with this feature in mind. I know of one
> programmer who absolutely hates Linux because it's just too difficult
> porting software to it, and the lack of decent thread support is part of
> the problem.

Yup. OS/2 is the largest nest of trained, experienced multi-threaded programmers. (And it takes a LOT of training and experience to do threads right.) That's why I've been trying to recruit ex-OS/2 guys over to Linux for years now. (Most followed Java to Linux after Netscape opened up Mozilla, but there used to be several notable holdouts...)

Threading is just another way to look at programming, with both advantages and disadvantages. You get automatic SMP scalability that's quite nice if you keep cache coherency in mind. You get great latency and responsiveness in user interfaces with just one or two extra threads.
(State machines may be superior if implemented perfectly, but like co-operative multitasking (which they are) it's trivially easy to block the suckers and hang the whole GUI. If you hang one thread, you can even program another thread to notice automatically and unwedge it without much overhead at all. Trying to unwedge a state machine is a little more complicated than kill/respawn of a thread. Of course a perfect state machine shouldn't have those problems, but when you interface with other people's code in a large system, you accept imperfection and make the system survive as best it can.) > Exactly. Saying that threads cause bad code is just as dumb as saying that > a kernel debugger will cause bad code because programmers will start using > the debugger instead of proper design. > > Oh wait, never mind . Ah, don't pick on Linus. It's not exactly like the limiting factor to kernel development is a SHORTAGE of patches sent in to the various
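The "second thread notices and unwedges it" idea above can be reduced to a heartbeat check. This is a hedged sketch of my own (the names are invented, not from any OS/2 or Linux API): the worker bumps a counter as it makes progress, and a watchdog compares two successive readings, so the check itself involves no timing assumptions:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a watchdog heartbeat: a worker that is making progress keeps
// incrementing the counter; a monitor that sees the same value twice
// can conclude the worker is wedged and kill/respawn it.
class Heartbeat {
    private final AtomicLong beats = new AtomicLong();

    void beat() { beats.incrementAndGet(); }   // worker calls this in its loop

    long reading() { return beats.get(); }     // monitor samples this

    // Time-free progress check: equal successive readings mean no progress.
    static boolean looksWedged(long before, long after) {
        return before == after;
    }
}
```

In practice the monitor thread would sleep between readings and only declare a wedge after several unchanged samples, but the comparison logic is the whole trick.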
Re: Alan Cox quote? (was: Re: accounting for threads)
On Wednesday 20 June 2001 10:35, Mike Porter wrote:

> > But that foregoes the point that the code is far more complex and harder
> > to make 'obviously correct', a concept that *does* translate well to
> > userspace.
>
> One point is that 'obviously correct' is much harder to 'prove' for
> threads (or processes with shared memory) than you might think.
> With a state machine, you can 'prove' that object accesses won't
> conflict much more easily. With a threaded or process based model,
> you have to spend considerable time thinking about multiple readers
> and writers and locking.
>
> One metric I use to evaluate program complexity is how big of a
> headache I get when trying to prove something "correct".
> Multi-process or multi-threaded code hurts more than a well written
> state machine.

The same applies to security though. There are programmers out there we're unwilling to give the tools to create race conditions, but we expect them to write stuff that won't allow a box on the internet to be own3d in under 24 hours...

Obvious isn't always correct...

Rob

- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
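To make the state-machine point above concrete: in that style all mutable state lives in one place and is only touched from one loop, so "proving" that accesses can't conflict reduces to reading a switch statement. A deliberately tiny made-up example (not from any real program):

```java
// A minimal explicit state machine: every transition is visible in one
// switch, and a single driving loop means no concurrent writers, so
// there are no lock orderings to reason about.
class Protocol {
    enum State { IDLE, READING, DONE }

    private State state = State.IDLE;

    State state() { return state; }

    // Events are plain strings just to keep the sketch short.
    void handle(String event) {
        switch (state) {
            case IDLE:
                if (event.equals("open")) state = State.READING;
                break;
            case READING:
                if (event.equals("close")) state = State.DONE;
                break;
            case DONE:
                break;                       // terminal: ignore everything
        }
    }
}
```

The flip side, as argued elsewhere in the thread, is that one slow handler blocks the whole loop, which is exactly the cooperative-multitasking hazard threads avoid.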
Re: [OT] Threads, inelegance, and Java
On Wednesday 20 June 2001 12:53, Larry McVoy wrote:

> We couldn't believe that Java was really that bad so our GUI guy, Aaron
> Kushner, sat down and rewrote the revision history browser in Java.
> On a 500 node graph, the Java tool was up to 85MB. The tk tool doing
> the same thing was 5MB. Note that we routinely run the tool on files
> with 4000 nodes, we can't even run the Java tool on files that big,
> it crashes.

I can second that. I recently mentioned an OS/2 abomination called Feature Install. Around 1996 I tried to port Feature Install to java 1.0. I got as far as the response file reading code, and reading in a 100k file exhausted available memory on the 32 megabyte machine I was working on.

Remember, every single java object includes a BUNCH of data, including two semaphores (one event, one mutex) and who knows what else. On OS/2 the overhead of a java 1.0 object (new Object();) was 2 kilobytes! In later versions they got that down to around 200 bytes, but it's still just ridiculous. Every single String, every pointless Integer() wrapper, every temporarily created StringBuffer() discarded by a + operation left there littering the heap. Plus I have yet to see a JVM that actually reclaims heap space after a garbage collect and gives it back to the OS. (You have to be able to relocate objects to do this at all reliably...) So if you create a large number of small objects, EVER, (tree, etc,) it's going to explode the heap and it'll never come down until the program exits.

> So, yeah, we have done what you think we haven't done, and we've tried
> the Java way, we aren't making this stuff up. We run into Java fanatics
> all the time and when we start asking "so what toolkits do you use" we
> get back "well, actually, err, umm, none of them are any good so we write
> our own". That's pathetic.

Also true. The Graphics class isn't too bad, and lightweight containers are actually quite nice.
But swing is just insanely bad (I have to understand model/view/controller and select a look-and-feel just to pop up a dialog with an "ok" button?), and the only serious third party challenger to it was from Microsoft...

Rob
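The "+ operation" garbage complained about above is easy to demonstrate. A sketch using a modern JVM (StringBuilder is what later JDKs provide; java 1.0 silently created a StringBuffer and a String per +):

```java
// Each s = s + i allocates a fresh builder and a fresh String every pass,
// leaving O(n) temporaries for the collector and O(n^2) copying; one
// reused builder does the same job with a single growing char array.
class Concat {
    static String slow(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s = s + i;    // allocates every iteration
        return s;
    }

    static String fast(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i); // one buffer, amortized growth
        return sb.toString();
    }
}
```

Both produce identical output; the difference is purely in how much short-lived garbage reaches the heap, which is exactly the "explode the heap and never come down" failure mode described above.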
Re: [OT] Threads, inelegance, and Java
On Wednesday 20 June 2001 15:27, Mike Harrold wrote:

> Martin Dalecki wrote:
> >
> > Blah blah blah. The performance of the Transmeta CPU SUCKS ROCKS. No matter
> > what they try to make you believe. A venerable classical design like
> > the Geode outperforms them in any terms. There is simple significant
>
> [root@mobile1 /root]# cat /proc/cpuinfo
> processor       : 0
> vendor_id       : CyrixInstead
> cpu family      : 5
> model           : 7
> model name      : Cyrix MediaGXtm MMXtm Enhanced
> stepping        : 4
> fdiv_bug        : no
> hlt_bug         : no
> sep_bug         : no
> f00f_bug        : no
> coma_bug        : no
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 2
> wp              : yes
> flags           : fpu msr cx8 cmov 16 mmx cxmmx
> bogomips        : 46.89

Let's just say I haven't exactly been thrilled with the performance of the geode samples we've been using at work. I have a 486 at home that outperforms this sucker. Maybe it's clocking itself down for heat reasons, but it really, really sucks. (Especially since I'm trying to get it to do ssl.) And yes, we're thinking about transmeta as a potential replacement for the next generation hardware. We're also looking around for other (x86 compatible) alternatives...

> > Well the actual paper states that the theoretical performance was "just"
> > 20% worse than a comparable normal design. Well "just 20%" is a half
> > universe diameter for CPU designers.

In the case of transmeta, that's in exchange for a third processor core, which is probably worth something. 20% is only about 3 months of Moore's law. 90% of processor speed improvements over the past few years have been die size shrinks. You could clock a 486 at several hundred mhz with current manufacturing techniques, and get better performance out of it than low end pentiums. (Somebody did it with a bottle of frozen alcohol and got themselves injured, but was managing a quite nice quake frame rate before the bottle exploded.) And that's not counting the fact a pentium has twice as many pins to suck data through...
And I repeat, if you're clocking the processor over 10x the memory bus rate, your cache size and your memory bus become fairly important limiting factors. (Modern processors are much more efficient about using the memory bus, doing bursts and predictive prefetches and all. But that's a separate issue.)

Look at pentium 4. Almost all the work done there was simply so they could clock the sucker higher, because Intel uses racy logic in their designs and had to break everything down into really small pipeline stages to get the timing tolerances into something they could manufacture above 1 ghz. It's AT LEAST 20% slower per clock than a PIII or Athlon. It's all noise compared to manufacturing advances shrinking die sizes and reducing trace lengths and capacitance and all that fun stuff...

> So what? Crusoe isn't designed for use in supercomputers. It's designed
> for use in laptops where the user is running an email reader, a web

Not just that, think "cluster density". 142 processors per 1U, air cooled, running around 600 mhz each. The winner hands down in mips per square foot. (Well, I suppose you could do the same thing with arm, but I haven't seen it on the market yet. I may not have been paying attention...)

> browser, a word processor, and where the user couldn't give a cr*p about
> performance as long as it isn't noticeable (20% *isn't* for those types
> of apps), but where the user does give a cr*p about how long his or her
> battery lasts (ie, the entire business day, and not running out of power
> at lunch time).

Our mobiles aren't (currently) battery powered, but a processor that doesn't clock itself down to 46 bogomips when it's running without a fan is a GOOD thing if you're trying to pump encrypted bandwidth through it at faster than 350 kilobytes per second. (The desktop units are getting 3.5 megs/second running the same code...)
> Yes, it *can* be used in a supercomputer (or more preferably, a cluster
> of Linux machines), or even as a server where performance isn't the
> number one concern and things like power usage (read: anywhere in
> California right now ;-) ), and rack size are important. You can always
> get faster, more efficient hardware, but you'll pay for it.

It's still not power, it's heat. You can run some serious voltage into a rack pretty easily, but it'll melt unless you bury the thing in fluorinert, which is expensive. (Water cooling of an electrical appliance is NOT something you want to be anywhere near when anything goes wrong.)

Processors in a 1U are tied together by a PCI bus or some such. The latency going from one to another is very low. Processors in different racks are tied together by cat 5 or myrinet or some such, and have a much higher latency due to speed of light concerns. A tightly enough coupled cluster can act like NUMA, which can deal with a lot
Re: [OT] Threads, inelegance, and Java
On Wednesday 20 June 2001 15:53, Martin Dalecki wrote:

> Mike Harrold wrote:
> > Well the transmeta cpu isn't cheap actually.

Any processor's cheap once it's got enough volume. That's an effect not a cause.

> And if you talk about
> super computing, hmm what about some PowerPC CPU variant - they're very
> competitive in terms of cost and FPU performance! Transmeta isn't the
> adequate choice here.

You honestly think you can fit 142 PowerPC processors in a single 1U, air cooled? Liquid air cooled, maybe...

> Well both of those concepts fail in terms of optimization due
> to the same reason: much less information is present about
> the structure of the code than during source code compilation.

LESS info? Anybody wanna explain to me how it's possible to do branch prediction and speculative execution at compile time? (Ala iTanium?) I've heard a few attempts at an explanation, but nothing by anybody who was sober at the time... You have less time to work, but you actually have MORE info about how it's actually running...

> Additionally there may be some performance wins due to the
> ability of runtime profiling (any kind thereof), however it still remains
> to be shown that this performs better than statically analyzed code.

Okay, I'll bite. Why couldn't a recompiler (like MetroWerks stuff) do the same static analysis on large code runs that GCC or some such could do if you give it -Oinsane and let it think for five minutes about each file? Obviously the run-time version isn't going to spend the TIME to do that. But don't claim the information to perform these actions isn't available just because your variables no longer have English names...

> > /Mike - who doesn't work for Transmeta, in case anyone was wondering...
> > :-)
>
> /Marcin - who doesn't bet a penny on Transmeta

/Rob, who still owns stock in Intel of all things.
Rob
Re: Alan Cox quote? (was: Re: accounting for threads)
On Wednesday 20 June 2001 17:20, Albert D. Cahalan wrote:

> Rob Landley writes:
> > My only real gripe with Linux's threads right now [...] is
> > that ps and top and such aren't thread aware and don't group them
> > right.
> >
> > I'm told they added some kind of "threadgroup" field to processes
> > that allows top and ps and such to get the display right. I haven't
> > noticed any upgrades, and haven't had time to go hunting myself.
>
> There was a "threadgroup" added just before the 2.4 release.
> Linus said he'd remove it if he didn't get comments on how
> useful it was, examples of usage, etc. So I figured I'd look at
> the code that weekend, but the patch was removed before then!

Can we give him feedback now, asking him to put it back?

> Submit patches to me, under the LGPL please. The FSF isn't likely
> to care. What, did you think this was the GNU system or something?

I've stopped even TRYING to patch bash. Try a for loop calling "echo $$ &"; every single process bash forks off has the PARENT'S value for $$, which is REALLY annoying if you spend an afternoon writing code not knowing that and then wonder why the various processes' temp files are stomping each other...

Oh, and anybody who can explain this is welcome to try:

lines=`ls -l | awk '{print "\""$0"\""}'`
for i in $lines
do
  echo line:$i
done

> How about a filesystem filter to spit out patches, or a filesystem
> interface to version control?

Explain please?

The patches-linus-actually-applies mailing list idea is based on how Linus says he works: he appends patches he likes to a file and then calls patch -p1 < thatfile after a mail reading session. It wouldn't be too much work for somebody to write a toy he could use that lets him work about the same way but forwards the messages to another folder where they can go out on an otherwise read-only list. (No extra work for Linus. This is EXTREMELY important, 'cause otherwise he'll never touch it.)

The advantage of this way is:

1) We know who sent the patches.
(We get the message with the "from" headers intact.)

2) Patch mails have descriptions in them most of the time, at least saying why, if not what they do.

3) This way, we know (more or less in realtime) that Linus has gotten a patch and applied it to his tree. (What version and everything.) It may be backed out again later, but we could give him another tool that can do that and notify the list...

This way, no mucking about with version control, no extra work for Linus, and in fact he doesn't have to worry about keeping track of what patches he's applied and when because he has a place he can go check if he forgets.

Now everybody tell me why this won't work. (Sure, all at once, why not...)

Rob
Re: [OT] Threads, inelegance, and Java
On Wednesday 20 June 2001 18:07, J . A . Magallon wrote:

> On 20010620 Rob Landley wrote:
> What do you worry about caches if every bytecode turns into a jump and more
> code ?

'cause the jump may be overlappable with extra execution cores in RISC and VLIW? I must admit, I've never actually seen somebody try to assembly-level optimize a non-JIT java VM before. JIT always struck me as a bit of a kludge...

> And that native code is not in a one-to-one basis with respect to
> bytecode, but many real machine code instructions to exec a bytecode op ?

Sure. But if they're honestly doing real work, that's ok then. (If they're setting up and tearing down unnecessary state, that's bad. I must admit the friction between register and stack based programming models was pretty bad in the stuff I saw back around JavaOS, which was long enough ago I can't say I remember the details as clearly as I'd like...)

Then again JavaOS was an abortion on top of Slowaris. Why they didn't just make a DPMI DOS port with an SVGA AWT and say "hey, we're done, and it boots off a single floppy", I'll never know. I mean, they were using green threads and a single task for all threads ANYWAY... (Actually, I know exactly why. Sun thinks in terms of Solaris, even when it's totally the wrong tool for the job. Sigh...) Porting half of Solaris to Power PC for JavaOS has got to be one of the most perverse things I've seen in my professional career.

> I have seen school projects with interfaces done in java (to be 'portable')
> and you could go to have a coffee while a menu pulled down.

Yeah, but the slowness there comes from the phrase "school project" and not the phrase "done in java". I've seen menuing interfaces on a 1 mhz commodore 64 that refreshed faster than the screen retrace, and I've WRITTEN java programs that calculated animated mathematical function plots point by point in realtime on a 486.

> Would you implement a search function into a BIG BIG database in java ?
You mean spitting out an SQL request that goes to a backend RDBMS? I've done it. (Via MQSeries to DB2.) Interestingly, a rather large chunk of DB2 itself seems to be implemented in Java. Dunno how much, though. Probably not the most performance critical sections. But it uses a heck of a lot of it...

> No, you give a java interface to C or C++ code.

A large part of this is "not reinventing the wheel". Also, I'd like to point out that neither Java 1.0 nor Java 1.1 had an API to truncate an existing file. (I pointed that out to Mark English at Sun back when I worked at IBM, and apparently nobody'd ever noticed it before me. Fixed in 1.2, I'm told.)

> Until java can be efficiently compiled, it is no more than a toy.

I haven't played with Jikes.

> Then why do you use Java ? If you just write few objects with many methods
> you are writing VisualBasic.

That was below the belt. I'm trying to figure out if you've just violated a corollary of Godwin's law with regards to language discussions, but I'll let it pass and answer seriously.

Because used that way, Java doesn't suck nearly as badly as visual basic does? (My cumulative life experience trying to program in visual basic adds up to about three hours, so I don't consider myself an authority on it. But I've had enough exposure to say it sucks based on actually trying to use it.)

That and it was developed on OS/2 to be deployed on (at least) windows 95, 98, and power macintosh? It still had threading inherent in the language. The graphics() class is actually a fairly nice API for doing simple 2d line drawing stuff in a portable way. (It is too, except for fonts. If you don't use any heavyweight AWT objects, and you're religious about using fontmetrics to measure your text, you can actually get pretty portable displays.) It still had the GOOD bits of C++ syntax without having to worry about conflicting virtual base classes. It's TRULY object oriented, with metaclasses and everything.

> See above.
> Traversing a list of objects to draw is not time consuming,
> implementing a zbuffer or texturing is. Try to implement a zbuffer in java.

I'll top that, I tried to implement "deflate" in java 1.0. (I was porting info-zip to java when java 1.1 came out.) Yeah, the performance sucked. But the performance of IBM's OS/2 java 1.0 jdk sucked compared to anything anybody's using today (even without JIT).

> The problem with java is that people try to use it as a general purpose
> programming language, and it is not efficient. It can be used to organize
> your program and to interface to low-level libraries written in C. But
> do not try to implement any fast path in java.

I once wrote an equation parser that took strings, substituted values for variables via string search and replace, and performed the calculation the string described. It did this for ever
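The substitute-then-evaluate approach just described can be sketched like this. It's a toy of my own with a deliberately tiny grammar (integers joined by '+'), not the original parser; the point is that every evaluation churns through fresh Strings, which is exactly the per-call allocation cost being discussed:

```java
// Hedged toy version of substitute-then-evaluate: replace the variable
// name textually, then compute. Each call allocates a substituted copy
// of the whole expression plus one String per term.
class Subst {
    static int eval(String expr, String var, int value) {
        String substituted = expr.replace(var, Integer.toString(value));
        int sum = 0;
        for (String term : substituted.split("\\+"))
            sum += Integer.parseInt(term.trim());
        return sum;
    }
}
```

Calling this once per plotted point, as in the animated-plot programs mentioned earlier, multiplies that garbage by the number of pixels per frame, which is why a parsed-once expression tree is the usual fix.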
Re: The latest Microsoft FUD. This time from BillG, himself.
On Wednesday 20 June 2001 18:31, Daniel Phillips wrote:

> On Wednesday 20 June 2001 23:33, Rik van Riel wrote:
> > On 20 Jun 2001, Miles Lane wrote:
> > > http://www.zdnet.com/zdnn/stories/news/0,4586,5092935,00.html
> >
> > Yes, he sure knows how to bring Linux to the attention
> > of people ;)
>
> Not to mention the GPL, which I can guarantee you, before today my mom had
> *never* heard of.
>
> --
> Daniel

Ooh, do I get to say "I told you so"? (LinuxToday buried my submission way back under a blurb about caldera, but still...)

http://linuxtoday.com/news_story.php3?ltsn=2001-05-10-002-20-PS

Rob
Re: Alan Cox quote? (was: Re: accounting for threads)
On Wednesday 20 June 2001 20:42, D. Stimits wrote:
> Rob Landley wrote:
> ...snip...
> > > The patches-linus-actually-applies mailing list idea is based on how Linus
> > > says he works: he appends patches he likes to a file and then calls patch
> > > -p1 < thatfile after a mail reading session. It wouldn't be too much
> > > work for somebody to write a toy he could use that lets him work about
> > > the same way but forwards the messages to another folder where they can
> > > go out on an otherwise read-only list. (No extra work for Linus. This
> > > is EXTREMELY important, 'cause otherwise he'll never touch it.)
>
> What if the file he's doing patches from is actually visible on a web page?
> Or better yet, if the patch command itself was modified such that at the
> same time it applies a patch, the source and the results were added to a
> MySQL server which in turn shows as a web page?

His patch file already has a bunch of patches glorped together. In theory they could be separated again by parsing the mail headers. I suppose that would be less work for Linus...

The point is, the difference between the patches WE get and the patches LINUS gets is the granularity. He's constantly exhorting other people to watch the granularity of what they send him (small patches, each doing one thing, with good documentation as to what they do and why, in separate messages), but what WE get is a great big diff about once a week. So what I'm trying to figure out is if we can impose on Linus to cc: a mailing list on the stream of individual patch messages he's applying to his tree, so we can follow it better.

And he IS willing to do a LITTLE work for us. He's issuing changelogs now. This would be significantly less effort than that...

MySQL is overkill, and since these things started as mail messages, why should they be converted into a foreign format? There's threaded archivers for mailing lists and newsgroups and stuff already; why reinvent the wheel?
Rob
Re: [OT] Threads, inelegance, and Java
On Wednesday 20 June 2001 23:13, Pete Zaitcev wrote:
> > Then again JavaOS was an abortion on top of Slowaris. [...]
>
> This is a false statement, Rob. It was an abortion, all right,
> but not related to Solaris in any way at all.

I worked on the sucker for six months at IBM in 1997. I don't know if the code we worked on is the code you're thinking of, but we had a unix kernel (which my coworkers SAID was Solaris) with a JVM running on top of it as the init task. Ported to Power PC by some overseas IBM lab (might have been the Japanese guys). We had two flavors of it to test, one the Power PC build and one the Intel build, and to make the Intel build work first we installed Solaris for Intel.

Other fun little details included that when I left IBM it still couldn't read from the hard drive, so the entire system TFTP'd itself through the network at boot, including a compressed ramdisk image containing Lotus Desktop. The machine required 64 megs of memory to run that (~32 megs or so of which was for the ramdisk, and no, it didn't run executables straight out of the ramdisk, it copied them into MORE ram to run).

> JavaOS existed in two flavours minimum, which had very little
> in common. The historically first of them (Luna), was a home-made
> executive with pretty rudimentary abilities.

I believe that's what we were using, and that home-made executive was a stripped down version of the Solaris kernel.

> Second flavour of JavaOS was made on top of
> Chorus, and, _I think_, used large parts of Luna in the
> JVM department, but it had a decent kernel, with such innovations
> as a device driver interface :)

You haven't lived until you've seen java code, with inline assembly, claiming to be a video framebuffer device driver thing. That sort of thing gives you a real appreciation for the portions of your life where you DON'T have to deal with that kind of thing. I only had to go into there to debug stuff though; mostly I was working on the application end of things.
(Taking third party closed source beta-release nonsense from people who wanted a single point of contact with IBM, who turned out to be someone in Poughkeepsie who had quit the company a couple weeks back. So it didn't work, we didn't have source, the people who supplied it wouldn't talk to us, and when we did trace a problem to the JavaOS code and reported it to Sun they went "we fixed that over a month ago and we've been sending you twice-weekly drops!" But of course our codebase didn't sync with their codebase more than once a month or so because there was so much porting effort involved... And you wonder why I quit...)

> Such a thing existed. I do not remember its proper name,
> but I remember that it booted from hard disk. Floppy
> was too small for it.

Maybe. Wouldn't have been nearly as big as JavaOS, though. FAR better suited to an embedded system...

> > Porting half of Solaris to Power PC for JavaOS has got to be one of the
> > most perverse things I've seen in my professional career.
>
> I never heard of PPC port of either of JavaOSes, although
> Chorus runs on PPC. Perhaps this is what you mean.

It was called "JavaOS for Business"...

http://www.javaworld.com/javaworld/jw-05-1998/jw-05-idgns.javaos.html

And was killed about a year later...

http://news.cnet.com/news/0-1003-200-346375.html?tag=bplst

I worked on it for six months in the second half of 1997.

> Solaris for PPC existed, but never was widespread.
> It did not have JVM bundled.

We mostly used AIX on the PPC systems that weren't directly testing the new code.

> > I'm upset that Red Hat 7.1 won't install on that old laptop because it
> > only has 24 megs of ram and RedHat won't install in that. [...]
>
> You blew adding a swap partition, I suspect...

This was before it let me run fdisk or Disk Druid. I'll try again this weekend...
> -- Pete

Rob
Re: Alan Cox quote? (was: Re: accounting for threads)
On Thursday 21 June 2001 10:02, Jesse Pollard wrote:
> Rob Landley <[EMAIL PROTECTED]>:
> > On Wednesday 20 June 2001 17:20, Albert D. Cahalan wrote:
> > > Rob Landley writes:
> > > > My only real gripe with Linux's threads right now [...] is
> > > > that ps and top and such aren't thread aware and don't group them
> > > > right.
> > > >
> > > > I'm told they added some kind of "threadgroup" field to processes
> > > > that allows top and ps and such to get the display right. I haven't
> > > > noticed any upgrades, and haven't had time to go hunting myself.
> > >
> > > There was a "threadgroup" added just before the 2.4 release.
> > > Linus said he'd remove it if he didn't get comments on how
> > > useful it was, examples of usage, etc. So I figured I'd look at
> > > the code that weekend, but the patch was removed before then!
> >
> > Can we give him feedback now, asking him to put it back?
> >
> > > Submit patches to me, under the LGPL please. The FSF isn't likely
> > > to care. What, did you think this was the GNU system or something?
> >
> > I've stopped even TRYING to patch bash. Try a for loop calling "echo
> > $$ &": every single process bash forks off has the PARENT'S value for $$,
> > which is REALLY annoying if you spend an afternoon writing code not
> > knowing that and then wonder why the various processes' temp files are
> > stomping each other...
>
> Actually - you have an error there. $$ gets evaluated during the parse, not
> during execution of the subprocess.

Well, that would explain it then. (So $$ isn't actually an environment variable, it just looks like one? I had my threads calling a separate function...)

> To get what you are describing it is
> necessary to "sh -c 'echo $$'" to force the delay of evaluation.

Except that this spawns a new child and gives me the PID of the child, which lives for maybe a tenth of a second... Maybe I could do something with eval...?

Either way, I switched the project in question to python.
> That depends on what you are trying to do.

A couple people emailed me and pointed out I had to set IFS=$'\n' (often after first saying "there's no way to do that" and then correcting themselves towards the end of the message...)

My first attempt to do it was to pipe the data into a sub-function to read a line at a time and store the results in an array. (Seemed pretty straightforward.) Except there's no way to get that array back out of the function, and that IS documented at the end of the bash man page. As I said: Python.

> Are you trying to echo the
> entire "ls -l"? or are you trying to convert an "ls -l" into a single
> column based on a field extracted from "ls -l".

I was trying to convert the lines into an array I could then iterate through.

> If the fields don't matter, but you want each line processed in the
> loop do:
>
> ls -l | while read i
> do
>    echo line:$i
> done

Hmmm... If that avoids the need to export the array from one shell instance to another, maybe it'd work. (I'm not really doing echo, I was doing var[$i]="line", with some other misc stuff.) But I got it to work via IFS, and that's good enough for right now.

> If you want such elaborate handling of strings, I suggest using perl.

I came to the same conclusion, but would like for more than one person to be able to work on the same piece of code, so it's python. (Or PHP if Carl's the one who gets around to porting it first. :) But in the meantime, the shell script is now working. Thanks everybody...)

Back to talking about the kernel. :)

Rob
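[Editor's note] The pitfalls traded back and forth above are easy to demonstrate. This is a sketch in modern bash, not code from the original thread; the variable names are invented, and `lines+=()` is syntax that postdates 2001 (back then you'd have written `lines[${#lines[@]}]=$line`):

```shell
#!/bin/bash
# Sketch of the three behaviors discussed above (modern bash syntax).

# 1. $$ is expanded by the parent shell, so every backgrounded child
#    reports the parent's PID:
for i in 1 2 3; do
    echo "child sees \$\$ = $$" &    # always the parent's PID
done
wait

# 2. A pipe runs 'while read' in a subshell, so array changes are lost:
lines=()
ls -l | while read line; do
    lines+=("$line")                 # only populated inside the subshell
done
pipe_count=${#lines[@]}
echo "after the pipe: $pipe_count lines"    # 0 -- the changes vanished

# 3. The IFS=$'\n' workaround: split command output on newlines only,
#    straight into an array in the current shell:
set -f; IFS=$'\n'                    # set -f: avoid accidental globbing
lines=( $(ls -l) )
unset IFS; set +f
echo "via IFS: ${#lines[@]} lines"
```

The subshell in case 2 is exactly why the array "can't be gotten back out": the pipeline's right-hand side runs in a child process, and its variables die with it.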
Idea: Patches-from-linus mailing list? (Was Re: Alan Cox quote? (was: Re: accounting for threads))
On Wednesday 20 June 2001 21:57, D. Stimits wrote:
> MySQL is just a sample. I mention it because it is quite easy to link a
> web server to. Imagine patch running on a large file that is a
> conglomeration of 50 small patches; it could easily summarize this, and
> storing it through MySQL adds a lot of increased web flexibility (such
> as searching and sorting). It is, however, just one example of a way to
> make "patch" become autodocumenting.

Not so much patch as a wrapper around patch. That's a good idea. A small perl script would do it...

Right now, Linus makes a big file by appending mail messages to it. His mailer is, in theory, putting mail headers at the start of each of these messages (from, to, subject, and all that). At the end of the day, he feeds that big file to patch and it applies all the patches he's read through and decided he likes.

It should be fairly easy to make a wrapper around patch that splits out a single mail message, feeds it to patch, and on some measurement of "success" forwards it to an otherwise read-only mailing list. (Which can then have a database-based archiver subscribed to that list, if necessary.)

If Linus used such a beast, we could get the actual mail messages Linus is applying patches from, as they're applied. Including any human-readable documentation in them that patch itself would discard. No more asking "did patch such and such get applied, when, who was it from"...

And we get the extra patch granularity Linus himself is so keen on. Instead of waiting for our weekly pre2-pre3 100k patch, we could follow the individual ones as logically grouped changes, with subject lines saying what the patch is about and everything. And the people hankering to make a CVS tree out of Linux kernel development would then have a much better checkin granularity to work with. :)

The main thing, though, is that done right, it's no extra burden on Linus. (Which is kind of important if we ever hope to get him to use it.
:) Sound like an idea to anybody else?

Rob
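[Editor's note] A rough sketch of the wrapper described above, not from the thread. Everything here is invented for the demo: the mbox contents, the file names, the tree layout. The forward-to-the-list step is left as a comment where a sendmail call would go:

```shell
#!/bin/bash
# Demo: split an mbox-style file of patch mails on the "From " separator,
# apply each one that passes a dry run, and count it as "would be
# forwarded to the read-only list".

workdir=$(mktemp -d)
cd "$workdir"

# A fake source tree and a fake one-message mbox, so the sketch runs:
mkdir tree && printf 'hello\n' > tree/file.txt
cat > patches.mbox <<'EOF'
From linus Mon Jun 25 12:00:00 2001
Subject: [PATCH] file.txt: s/hello/goodbye/

--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+goodbye
EOF

cd tree

# Split the mbox into one file per message ("From " starts each mail):
awk '/^From /{n++} {print > ("../msg." n)}' ../patches.mbox

applied=0
for msg in ../msg.*; do
    # GNU patch skips the mail headers as leading garbage, so each
    # message file can be fed to it whole:
    if patch -p1 --dry-run < "$msg" >/dev/null 2>&1; then
        patch -p1 < "$msg" >/dev/null
        applied=$((applied + 1))
        # A real wrapper would now pipe "$msg" -- headers, description
        # and all -- to sendmail, addressed to the read-only list.
    fi
done
echo "applied $applied patch mail(s); file.txt now says: $(cat file.txt)"
```

The dry run is the "some measurement of success" from the proposal: a message is only forwarded if it applies cleanly, and the human-readable description travels with the patch because the whole mail is forwarded, not just the diff.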
Re: The latest Microsoft FUD. This time from BillG, himself.
On Thursday 21 June 2001 04:37, Henning P. Schmiedehausen wrote:
> Devils' advocate position: If Linux would not be under GPL but under
> BSD license, M$ may have already done so. But consider them porting
> one of their monster applications and release it just to find out that
> they've linked to GNU readline somewhere because of a QM oversight.

I said as much in an article to LinuxToday. (They buried it under a page of commentary about Ransom Love, but they did post it.)

http://linuxtoday.com/news_story.php3?ltsn=2001-05-10-002-20-PS

BSD forked to death in the 80's. Everybody from AT&T to Sun to IBM who saw money in it spun off their own incompatible, proprietary version. If MS was currently facing BSD rather than Linux, they would have "embrace and extend"ed it long ago. Hide half of Office in the system libraries (just like Windows), come out with a closed-source version, loot the open competition for any advances but don't share yours...

> I'd guess, to them, the risk of having their core code base (their
> source of revenue) "infected by the GNU virus" is just too high.

The GPL was designed to block embrace and extend. It embraces and extends right back. And it's torquing Microsoft off big time.

> Hmmm. After all, they're already using FreeBSD. Maybe they will
> release "Windows for FreeBSD" with Office. Now that would be an
> interesting impact on Linux (I would be over there in seconds =:-) )

Just like AT&T did to free Unix in ~1984. How long before it's "Office for BSD incidentally distributed with a closed-source copy of BSD" mutated into "yet another incompatible proprietary operating system, just with lots of unix code"? That wouldn't solve anything. We've been through a few years with Netscape as our only viable web browser on Linux; how much fun was that?

Remember the Ben Franklin quote about exchanging liberty for safety. Buying short-term gains with long-term sacrifices is a dumb idea. Been there. Done that. Came here to recover.
> Regards
> Henning

Rob
Re: The latest Microsoft FUD. This time from BillG, himself.
On Thursday 21 June 2001 04:50, Henning P. Schmiedehausen wrote:
> Rob Landley <[EMAIL PROTECTED]> writes:
> > Ooh, do I get to say "I told you so"? (LinuxToday buried my submission
> > way back under a blurb about caldera, but still...)
>
> And the quote of "stealing the TCP stack from BSD" is still wrong.

Everybody took the BSD TCP stack, including VMS and OS/2. It was the first major lump of code they separated when AT&T started making legal threats around 1983. Did I say stealing? The Berkeley people gave it away for free...

> And the web browser they have today derives from NCSA Mosaic as
> prominently displayed in the "About" box of every single IE version
> out. No TBL here.

You take Microsoft's word for things? Read this:

http://www.businessweek.com/bwdaily/dnflash/january/new0122d.htm

Various other coverage:

http://www.zdnet.com/eweek/news/0120/22aspy.html
http://www4.zdnet.com/anchordesk/story/story_587.html

And two years later, Spyglass still hadn't learned their lesson:

http://www.zdnet.com/eweek/stories/general/0,11011,1014310,00.html

Rob
Re: The latest Microsoft FUD. This time from BillG, himself.
On Thursday 21 June 2001 17:49, Schilling, Richard wrote:
> > -----Original Message-----
> > From: Rob Landley
> > Sent: Thursday, June 21, 2001 9:25 AM
> > [snip]
> > BSD forked to death in the 80's. Everybody from AT&T to Sun
> > to IBM who saw
> > money in it spun off their own incompatible, proprietary version.
>
> Microsoft also had a UNIX variant, but they gave up on the product . . .
> forget why.

Because Paul Allen got leukemia and quit the company around 1983.

Microsoft was founded by two people: Paul Allen (the techie) and Bill Gates (the marketer, whose father was a lawyer. Gates was a bit technical in the 8-bit days, but the last piece of code he personally wrote that shipped in a product was the text editor for the TRS-80.)

In late '79/early '80, they heard the rumors that IBM was pondering a PC, and Paul Allen went "any real computer will run Unix", so they got a license from AT&T and ported the sucker, calling it "Xenix". (MS was a porting house; they made their living porting software (mostly BASIC) from one platform to another in those days, and porting unix was a big thing, so as the name implies: they'd port it anywhere.)

And then IBM dropped the PC's tech specs on them after they signed non-disclosure, and it said "minimum 16k of ram", and they went "okay, we need an embedded OS". So they sent IBM to talk to Gary Kildall at Intergalactic Digital Research (i.e. Kildall's living room) and get CP/M, but the meeting fell through famously. But Allen knew a guy who knew a guy who had reverse engineered CP/M from a store-bought API manual as a summer project (Quick And Dirty Operating System). They got a bank loan for $50k, bought it, and offered it to IBM.

Remember, on the original PC the floppy was optional. DOS 1.0 was only needed if you got the optional floppy; the in-ROM BASIC (which was the real reason IBM was talking to MS, the rest was just gravy) had support for the cassette tape interface built into the original PC.
That was the default interface; floppies were an expensive luxury. But Microsoft had conditionally licensed to IBM the entire rest of their software catalog (from Typing Tutor on up), conditional on having a floppy drive to load them from. They went out and got their own version of CP/M so their application software deal with IBM wouldn't fall through.

And of course IBM had two sources for everything. (As a big evil monopoly, they understood that being on the receiving end, at the mercy of a monopoly supplier, was a bad thing.) They even made Intel license the 8086/8088 design to AMD so they'd have a second source. (And that's how AMD got into the clone business.) DOS 1.0 and CP/M ran EXACTLY the same software; they were two sources for the same thing. At first.

Paul Allen didn't give up on Unix, of course. He knew the PC's memory would grow and someday would be enough to run Unix, so he set about making a migration path from DOS to unix. The DOS 2.0 manuals went out and said that DOS would someday be replaced with Xenix, and in the meantime here's a lot of unix functionality to get you used to it. He added subdirectories (using \ instead of / only because / was already the command line option indicator: "dir /s". In 2.0 they deprecated that and changed it to "dir -s" as the recommended method, to be unixish.) Plus device drivers, pipes and redirects (hacked onto the CP/M base as best they could), and of course file control blocks were replaced with file handles. The DOS 2.0 manual eventually promised they'd give DOS multiple process support (multitasking).

DOS 3.0 was mostly based on adding new hardware support, specifically hard drives, since the XT was coming out. And it's about this time (1983ish) that Allen got sick and took a leave of absence from Microsoft, which he never returned from. And Microsoft's technical side fell apart, but not until after they shipped DOS 3. When Allen left, two things happened.
1) Gates was left with absolute power within Microsoft and started succumbing to it. (He was always a greedy bastard, but so are Steve Jobs, Larry Ellison, the heads of Commodore and Atari, and just about everybody else in the business. Linus has his "I'm a bastard" speech too...)

2) The technical side of Microsoft imploded (at the mercy of marketing). Xenix was unloaded on the Santa Cruz Operation almost immediately, and Gates allowed Microsoft to be led around by the nose by IBM for the next five years or so in place of any in-house technical agenda. (And hence OS/2 1.0)...

Did I mention I'm writing a book on all this? (The history of Linux and the computer industry, going back to World War II...) This makes me the only person I know who's excited about finding ~50 issues of "Compute" and "Compute's Gazette" from the mid 80's at a garage
Re: The latest Microsoft FUD. This time from BillG, himself.
On Thursday 21 June 2001 18:49, Alan Cox wrote:
> > Except that Apple keeps the old code open. Probably because
> > they'll gain nothing from it, and at best, they can appeal to
> > the techies.
>
> A company that seems to write 'you shall not work on open source projects
> in your spare time' into its employment contracts is not what I would call
> friendly or want to work for. I'm sure it's only a small step to 'employees
> shall not snowboard' 'employees shall not go skiing' - all of course argued
> for the same reason as being essential to the company interest

This IS the company that had the "I work 90 hours all the time" club with t-shirts and everything back under Jobs in the early 80's. And far more recently, where at least one employee got in trouble for 'thinking different' with a parody site involving famous serial killers.

The "proprietary frosting" model is fine for leaf-node projects like games. But if the new layer is infrastructure other people are expected to build on top of, then what you're really saying is "I want slaves".

Rob
Re: Controversy over dynamic linking -- how to end the panic
On Thursday 21 June 2001 14:46, Timur Tabi wrote:
> 1. License the Linux kernel under a different license that is effectively
> the GPL but with additional text that clarifies the binary module issue.
> Unfortunately, this license cannot be called the GPL. Politically, this
> would probably be a bad idea.

I thought this was what the LGPL was for?

Unfortunately, it wouldn't be easy to switch from GPL to LGPL for the Linux kernel precisely BECAUSE Linus is not the sole copyright holder. (Note: Richard Stallman insisted anyone who contributed a patch of any size to GNU sign a piece of paper handing their copyright over to the FSF. Unfortunately, this created so much friction around getting patches in that it was a significant factor in GNU stalling and forking to produce Linux.)

Rob
Fair Use (Was Re: Controversy over dynamic linking -- how to end the panic)
On Thursday 21 June 2001 16:34, Craig Milo Rogers wrote:
> The in-core kernel image, including a dynamically-loaded
> driver, is clearly a derived work per copyright law. As above, the
> portion consisting only of the dynamically-loaded driver's binary code
> may or may not be a derived work per the GPL. It doesn't much matter
> under the GPL, anyway, so long as the in-core kernel image isn't
> "copied or distributed".

It would be entirely legal to generate a patch against the copyrighted work which contains no code from the copyrighted work, and which is separately distributed. This falls under fair use. The "Fortify" folks did it for Netscape Navigator. Your instructions for a set of changes to someone else's work are a unique creation as long as they don't actually include any portion of the other person's work. (And things like book reviews or parody may contain small sections of the work being referred to; exactly how much is nebulous and generally decided on a case-by-case basis if it ever gets to court.)

A tarball containing a set of source code for a new, independent module, which does not include any GPLed code, is obviously not a "derivative work" in legal terms. (It may be inspired by, but ideas aren't copyrighted, only the specific expression/implementation. You can paraphrase an encyclopedia article for a school report, etc. Just don't quote verbatim.)

Now the COMPILED version is where the fun begins. Distributing a precompiled binary of this module means that GPL code was used during the compilation (at the very least header files), and that an argument could be made that technically this binary is a derivative work, and distribution of it might trigger licensing terms of the GPL. Considering that the implied use of headers (i.e.
.h files) is to allow their inclusion in other programs and therefore the creation of other programs, it's possible that an argument could be made that header files (originally distributed with a .h extension by the person who issued the license in the first place) are intended to be used to create other programs, and therefore this is fair use and not the creation of a derived work. After all, the copyright holder of the code made a distinction between .c and .h files, when the functional aspects of the language don't demand it. And normally, .h files do not directly result in code in the output (inline macros being an exception that doesn't change the overall nature and intent of the file). But it'd be a coin flip whether the judge would buy that argument.

Now what LINUS seems to have done is offer a nebulous and imprecisely worded second license on at least the header files of Linux, which allows the creation of precompiled binaries as separate items, as long as none of the existing Linux code has to be modified in any way to create these modules. (Allowing this sort of thing while still being GPL-compatible with a larger work is, incidentally, the intent of the LGPL. But he didn't use it.)

Binary modules created under this second license (and a verbal permission can be a legal license, as much as any oral agreement is ever likely to stand up in court) can therefore be distributed on a Linux compilation CD as readily as any other piece of non-GPL code can (such as the binary-only version of Netscape on the Red Hat CDs). Nobody's claiming that such an anthology is creating a derived work, merely a collective distribution mechanism for the separately licensed entities. (Like an FTP site you can take with you.)

Run-time linking of the modules with the Linux code is a separate issue, almost certainly falling under fair use.
You are not distributing the result, and you have a license for both components saying that they are in fact legal copies, therefore what you do with them is your business. (What use a legal copy is put to is NOT subject to copyright law. Contract law as part of the license terms, maybe, but that's a can of worms we won't open just now, especially such fun aspects as "standing" and "informed consent"...)

The only real legal question is whether Linus has the authority to offer the second license allowing the creation of non-GPL binary modules. A case could be made that he does, and not just due to the anthology copyright, but because possession really is 9/10 of the law. The "binary modules" clause has been out there for years now without being challenged. Everybody's known about it; the statute of limitations for objecting to it has probably gone by. Everyone submitting code to Linus for a long time has been implying acceptance of his license terms for distribution of that code.

A parallel could also be drawn between intellectual property law and regular property law, along the lines of homesteading. If you find an abandoned house and live in it long enough, especially if you make significant changes like fixing it up, it's considered to have been
Re: Maintainers master list?
On Friday 22 June 2001 17:19, Timur Tabi wrote:
> ** Reply to message from "Eric S. Raymond" <[EMAIL PROTECTED]> on Fri, 22 Jun
> 2001 17:09:45 -0400
> > What happens now when somebody takes over responsibility for a file
> > or subsystem and the MAINTAINERS file doesn't get patched, either because
> > that person forgets to send a MAINTAINERS update or Linus doesn't
> > happen to take the MAINTAINERS patch for a while?
>
> Wouldn't this whole problem go away if the kernel source were stored in a
> master CVS repository? Maintainers would have write access to their
> respective code, but only Linus and Alan would have delete access.
> Everyone else would have read-only access.

This has been suggested about eight thousand times so far, and the answer is "no". I'm fairly certain there's a FAQ entry on this.

The reason Linus won't do it is it conflicts with the way he works. He approves patches by reading through them in his mail reader (Pine, I think) and appending the ones he likes to a file. Then when he's done reading mail he pipes the whole file to patch and it goes into his tree.

(I'm pondering an idea of sending Linus a perl script that he can use to pipe that file to patch which will split out the individual emails and forward them to an otherwise read-only "patches-linus-has-applied" mailing list. The important part of this idea is it doesn't change the way he works or make him do any extra work at all. And we get the documentation in the emails and a record of what patches got applied when. And this WOULD allow a fairly granular CVS tree to be kept up-to-date by a third party who simply subscribes to the list and automatically feeds the patches into CVS.)

But Linus will NOT allow a line of code into his tree which he hasn't personally approved.
He may apply patches forwarded to him by maintainers without thoroughly reading them first, but he still approves them and knows when they go in, and makes sure they don't conflict with anything else he's applying or already applied. So in a "Linux-kernel CVS tree", only Linus would have the ability to check stuff in, so he considers it a waste of time and just sends us tarballs instead.

The fact Linus does this is a bottleneck, sure. But the fact we've got one guy in charge making decisions and vetoing anything that shouldn't go in is also the main reason we've got a coherent source base. Look at the ongoing fight between Rik and Andrea: even smart people who are generally right can disagree about architectural direction, and if they both make changes without somebody steering, Bad Things will result.

Rob
Re: Microsoft and Xenix.
On Saturday 23 June 2001 13:57, Mike Jagdis wrote:
> > I hope the following adds a more direct perspective on this, as I
> > was a user at the time.
>
> I was _almost_ at university :-). However I do have a first edition
> of the IBM Xenix Software Development Guide from December 1984. It has
> '84 IBM copyright and '83 MS copyright. The SCO stuff I have goes back
> to '83 - MS copyrights on it go back to '81 but that's probably just
> the compiler and DOS compatibility.

Ooh! Ooh! I don't suppose I could borrow that? (Hmm... Driving to London isn't quite something my car's up to. For one thing, there's no gas stations in the middle of the Atlantic.)

The copyright dates back to when they shipped it. I believe Microsoft's license with AT&T was signed in 1979 and actual work started in 1980, but that's in a different notebook...

> Basically Xenix was the first MS/IBM attempt at a "real OS" for the
> PC. MS realised that multiuser/multitasking was less important than
> colour graphics for PC owners and decided to pull out of the Xenix
> business. IBM licensed it under their name to keep their desktop computer
> concept alive while the Xenix team emerged from the shake out to form SCO.

Don't make the mistake of treating IBM -OR- Microsoft as a monolithic entity. IBM had a dozen departments constantly at war with each other: Unix had its pockets of supporters at IBM, some of whom did AIX. At Microsoft, Paul Allen was the big Unix fan. Gates was indifferent to it, and was far more interested in the Xerox PARC perspective.

Both Bell Labs and Xerox PARC totally revolutionized computing. Bell Labs worked from the inside out: how the machine works and what programmers can get it to do. Multitasking, hierarchical filesystem, block and character device drivers, streams, pipes, etc. Xerox PARC worked from the outside in: how the user interacts with the computer and what they experience. WYSIWYG printing, Windows and Icons and Mice in a GUI.
(Xerox also did object oriented programming, and networking, which was related to both the user and system level. Then again Unix spun out of porting a flight simulator to the PDP-7. It's not QUITE that black and white...) In any case, Gates was on the Xerox side and Allen was on the BTL side. When Allen left Microsoft, Xenix followed soon after. (First SCO was "helping", then over the next few years the whole thing was gradually dumped on them and the umbilical severed.) Remember, Xenix hadn't made much of a splash in the PC world before 1984 because the PC simply didn't have the power to run it. YOU try doing anything useful with Unix in -LESS- than 512k of RAM. That doesn't mean it wasn't having a big impact behind the scenes at Microsoft. (Similarly, windowing interfaces were Jobs's passion for 4 or 5 years before the Macintosh launch, whether or not Apple's revenues or customers even knew about it.) Rob
Re: Missing help entries in 2.4.6pre5
On Friday 22 June 2001 10:00, Wichert Akkerman wrote: > In article <[EMAIL PROTECTED]>, > > Eric S. Raymond <[EMAIL PROTECTED]> wrote: > >You're a bit irritated. That's good. I *want* people who don't write > >help entries for their configuration symbols to be a bit irritated. > >That way, they might get around to actually doing what they ought to. > > You mean you actually want people to start ignoring you? There's a really simple solution to that. Eric can just make up his own help file entries that are wildly inaccurate and actively insulting to whoever it is who owns the symbol. Something along the lines of: "Enabling this subsystem may cause your house to burn down and your dog to explode. The prevailing opinion is that Linus was probably blackmailed into including this option by someone with naked pictures of his cat. It's useless and irritating, and just might be removed soon, so don't count on it continuing to be there. Nobody knows how to use it because they didn't provide any documentation for it." Then they're welcome to ignore it. :) Rob (As Mel Brooks said, it's good to be the help file maintainer...)
Re: Microsoft and Xenix.
On Friday 22 June 2001 18:41, Alan Chandler wrote: > I am not subscribed to the list, but I scan the archives and saw the > following. Please cc e-mail me in followups. I've had several requests to start a mailing list on this, actually... Might do so in a bit... > I was working (and still am) for a UK computer systems integrator called > Logica. One of our departments sold and supported Xenix (as distributor > for Microsoft? - all the manuals had Logica on the covers although there > was at least some mention of Microsoft inside) in the UK. At the time it I don't suppose you have any of those manuals still lying around? > It was more like (can't remember exactly when) 1985/1986 that Xenix got > ported to the IBM PC. Sure. Before that the PC didn't have enough RAM. DOS 2.0 was preparing the DOS user base for the day when the PC -would- have enough RAM. Stuff Paul Allen set in motion while he was in charge of the technical side of MS still had some momentum when he left. Initially, Microsoft's partnership with SCO was more along the lines of outsourcing development and partnering with people who knew Unix. But without Allen rooting for it, Xenix gradually stopped being strategic. Gates allowed his company to be led around by the nose by IBM, and sucked into the whole SAA/SNA thing (which DOS was the bottom tier of along with a bunch of IBM big iron, and which OS/2 emerged from as an upgrade path bringing IBM mainframe technology to higher-end PCs.) IBM had a Unix, AIX, which had more or less emerged from the early RISC research (the 701 project? Lemme grab my notebook...) Ok, SAA/SNA was "Systems Application Architecture" and "Systems Network Architecture", which was launched coinciding with the big PS/2 announcement on April 2, 1987. (models 50, 60, and 80.) The SAA/SNA push also extended through the System/370 and AS400 stuff too. (I think 370's the mainframe and AS400 is the minicomputer, but I'd have to look it up. One of them (AS400?) 
had a database built into the OS. Interestingly, this is where SQL originated (my notes say SQL came from the System/370 but I have to double-check that, I thought the AS400 was the one with the built-in database?). In either case, it was first ported to the PC as part of SAA. We also got the acronym "API" from IBM about this time.) DOS 4.0 was new, it added 723 meg disks, EMS bundled into the OS rather than an add-on (the Lotus-Intel-Microsoft Expanded Memory Specification), and "DOSShell" which conformed to the SAA graphical user interface guidelines. (Think an extremely primitive version of Midnight Commander.) The PS/2 model 70/80 (desktop/tower versions of same thing) were IBM's first 386 based PC boxes, which came with either DOS 3.3, DOS 4.0, OS/2 (1.0), or AIX. AIX was NOT fully SAA/SNA compliant, since Unix had its own standards that conflicted with IBM's. Either they'd have a non-standard Unix, or a non-IBM OS. (They kind of wound up with both, actually.) The IBM customers who insisted on Unix wanted it to comply with Unix standards, and the result is that AIX was an outsider in the big IBM cross-platform push of the 80's, and was basically sidelined within IBM as a result. It was its own little world. skip skip skip skip (notes about Boca's early days... The PC was launched in August 1981, list of specs, XT, AT, specs for PS/2 models 25/30, 50, 70/80, and the "PC Convertible" which is a REALLY ugly laptop.) Here's what I'm looking for: AIX was first introduced for the IBM RT/PC in 1986, which came out of the early RISC research. It was ported to PS/2 and S/370 by SAA, and was based on Unix SVR2. (The book didn't specify whether the original version or the version ported to SAA was based on SVR2, I'm guessing both were.) AIX was "not fully compliant" with SAA due to established and conflicting Unix standards it had to be compliant with, and was treated as a second class citizen by IBM because of this. 
It was still fairly hosed according to the rest of the Unix world, but IBM mostly bent standards rather than breaking them. Hmmm... Notes on the history of shareware (PC-Write/Bob Wallace/Quicksoft, PC-File/PC-Calc/Jim Button/Buttonware, PC-Talk/Andrew Flugelman, apparently the chronological order is Andrew-Jim-Bob, and Bob came up with the name "shareware" because "freeware" was a trademark of Headlands Press, Inc...) Notes on the IBM RISC System/6000 launch out of a book by Jim Hoskins (which is where Micro Channel came from, and also had one of the first CD-ROM drives, SCSI based, 380 ms access time, 150k/second, with a caddy.) Notes on the specifications of the 8080 and 8085 processors, plus the Z80. Sorry, that RISC thing was the 801 project led by John Cocke, named after the building it was in and started in 1975. Ah, here's the rest of it: The IBM Personal Computer RT (RISC Technology) was launched in January 1986 running AIX. The engineers (in Austin) went on
Re: Alan Cox quote? (was: Re: accounting for threads)
On Friday 22 June 2001 10:46, Mikulas Patocka wrote: > I did some threaded programming on OS/2 and it was real pain. The main > design flaw in OS/2 API is that thread can be blocked only on one > condition. There is no way thread can wait for more events. For example Sure. But you know what a race condition is, and how to spot one (in potential during coding, or during debugging.) You know how to use semaphores and when and why, and when you DON'T need them. You know about the potential for deadlocks. And most of all, you know just because you got it to run once doesn't mean it's RIGHT... > When OS/2 designers realised this API braindamage, they somewhere randomly > added funtions to unblock threads waiting for variuos events - for example > VioModeUndo or VioSavRedrawUndo - quite insane. OS/2 had a whole raft of problems. The fact half the system calls weren't available if you didn't boot the GUI was my personal favorite annoyance. It was a system created _for_ users instead of _by_ users. Think of the great successes in the computing world: C, Unix, the internet, the web. All of them were developed by people who were just trying to use them, as the tools they used which they modified and extended in response to their needs. This is why C is a better language than pascal, why the internet beat compuserve, and why Unix was better than OS/2. Third parties writing code "for" somebody else (to sell, as a teaching tool, etc) either leave important stuff out or add in stuff people don't want (featuritis). It's the nature of the beast: design may be clever in spurts but evolution never sleeps. (Anybody who doesn't believe that has never studied antibiotic resistant bacteria, or had to deal with cockroaches.) > Programming with select, poll and kqueue on Unix is really much better > than with threads on OS/2. I still consider the difference between threads and processes with shared resources (memory, fds, etc) to be largely semantic. 
> Mikulas Rob
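A concrete footnote to the race-condition and semaphore checklist a couple of messages up. This is my own sketch, not code from the thread, and it's in Python rather than C for brevity: several threads do a read-modify-write on shared state, and holding a lock (the one-slot special case of a semaphore) across each update is what makes the final count deterministic.

```python
import threading

ITERATIONS = 100_000
THREADS = 4

def increment(counter, lock):
    # counter[0] += 1 is a read-modify-write; without the lock, two
    # threads can read the same old value and one update is lost --
    # the classic race. The lock serializes the three steps.
    for _ in range(ITERATIONS):
        with lock:
            counter[0] += 1

def run():
    counter = [0]                  # mutable box shared by all threads
    lock = threading.Lock()
    threads = [threading.Thread(target=increment, args=(counter, lock))
               for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]

if __name__ == "__main__":
    print(run())
```

Getting the unlocked version to run correctly once proves nothing, which is rather the point being made above.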
Re: Alan Cox quote? (was: Re: accounting for threads)
On Sunday 24 June 2001 17:41, J . A . Magallon wrote: > On 20010622 Rob Landley wrote: > >I still consider the difference between threads and processes with shared > >resources (memory, fds, etc) to be largely semantic. > > They should not be the same. Processes are processes, and threads were > designed for situations where processes are too heavy. Other thing is that > in new kernels (for example, Linux) processes are being optimized (ie, vm > fast 'cloning' via copy-on-write) or expanded with new features (Linux' > clone+ CLONE_VM). But they are different beasts. This is a bit like saying that a truck and a train are totally different beasts. If I'm trying to haul cargo from point A to point B, which is served by both, all I care about is how long it takes and how much it costs. I don't care what it was INTENDED to do. A rock makes a decent hammer. So does a crescent wrench. The question is how good a tool is it for the uses we're trying to put it to? > This remembers on other question I read in this thread (I tried to answer > then but I had broke balsa...). Somebody posted some benchmarks of linux > fork()+exec() vs Solaris fork()+exec(). What programs does this make a difference in? These are tools meant to be used. What real-world usage causes them to differ? If a reasonably competent programmer ported a program from one platform to another, is this the behavior they would expect to see (without a major rewrite of the code)? > That is comparing apples and oranges. Show me a real-world program that makes the distinction you're making. Something in actual use. > The clean battle should be linux fork-exec vs vfork-exec in > Solaris, because for in linux is really a vfork in solaris. But the point of threading is it's a fork WITHOUT an exec. So now you're saying the comparison we're making of "using these tools to do goal X" is invalid because you don't like goal X, and you want to do something else with them. 
I'm not going to comment on the difference between fork and vfork because to be honest I've forgotten what the difference is. Something to do with resource sharing but darned if I can remember it. I'm not sure I've ever used vfork, last thing I remember were people arguing over the man page for it... (Sorry if I seem a bit snappish, I've had this kind of discussion too many times with ivory tower types at UT who object to the topic of conversation because it's not what they've focused on until now. Which logical fallacy was this, "begging the question"? Straw man? I forget. I'm tired...) Rob
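For reference, the fork-then-immediately-exec pattern this subthread keeps circling is the exact case vfork() was invented to speed up: the child throws away its copy of the address space at the exec anyway, so vfork() skips duplicating it. Here's a sketch of mine (not code from the thread), in Python on a POSIX system, using the standard `true` and `false` utilities:

```python
import os

def spawn(prog, argv):
    # The classic Unix spawn: fork() a child, which immediately
    # replaces itself with exec(). vfork() optimizes exactly this
    # case by not duplicating the parent's address space at all.
    pid = os.fork()
    if pid == 0:                      # child
        try:
            os.execvp(prog, argv)     # only returns on failure
        finally:
            os._exit(127)             # exec failed; die quietly
    _, status = os.waitpid(pid, 0)    # parent reaps the child
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(spawn("true", ["true"]), spawn("false", ["false"]))
```

The gotcha with real vfork() is that the child borrows the parent's memory until the exec, so doing anything between the two calls "goes really funny," as noted below.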
Re: Microsoft and Xenix.
On Saturday 23 June 2001 20:13, Michael Alan Dorman wrote: > Rob Landley <[EMAIL PROTECTED]> writes: > > That would be the X version of emacs. And there's the explanation > > for the split between GNU and X emacs: it got forked and the > > closed-source version had a vew years of divergent development > > before opening back up, by which point it was very different to > > reconcile the two code bases. > > No, sorry, wrong, for at least a couple of reasons reasons: I've had this pointed out to me by about five people now. Apparently there's more to emacs than I thought... (Considering its kitchen sink icon, this should come as a surprise to no one...) > I refer you to http://www.jwz.org/doc/emacs-timeline.html for > documentation---JWZ was Mr. Lucid Emacs for quite a time. Thanks for the link. I've also been pointed to xemacs.org. Have to check out both next time I plug this laptop in to the net. (And I apparently need to set up a mailing list on this, since the number of people asking me to do so has now hit double digits...) I'll post a thing here when I do that so we can move at least most of this discussion off linux-kernel. > In 1987, there are any number of things that it could have been---I'd > guess either Unipress Emacs or perhaps Gosling Emacs. I sort of know about Gosling's version. (It's mentioned in Stallman's history of emacs on gnu.org...) Interesting how the same people keep popping up as you move from topic to topic. (Licklidder wasn't just a bigwig behind arpanet, he also kicked off project mac at MIT. Doug McIlroy, who was one of the half-dozen figures behind the Unix launch at Bell Labs, came to BTL after working on project whirlwind at Lincoln Labs (I.E. MIT.) 
And of course Ken Olsen, hotshot at Whirlwind behind core memory, creator of the memory test computer that (when donated to Marvin Minsky's computing lab) virtually created the whole "Hacker" phenomenon, who wrote a paper as a graduate student suggesting the use of transistors in computers which convinced IBM to build the first fully transistorized computer (I -THINK-, timeline still a bit fuzzy there to claim "first", may just have been first commercially shipping one), and then of course went off to found Digital after the TX-0... Hmmm... I should probably corner Alan Cox at some event and ask him about his Amiga days. (And I DID track down Commodore guru Jim Butterfield last year, he was living in Canada at the time. Just got back into computing after years with cataracts obstructing his vision, apparently...) Rob
Re: Microsoft and Xenix.
On Saturday 23 June 2001 23:07, Mike Castle wrote: > On Sat, Jun 23, 2001 at 09:41:29PM -0500, [EMAIL PROTECTED] wrote: > > Ah, yes, the RT/PC. That brings back some fond memories. My first > > exposure to Unix was with AIX on the RT. I still have some of those > > weird-sized RT AIX manuals around somewhere... > > We always ran AOS on RT's. Actually, the server was the only RT, the rest > were some other model that was basically a PS/2 (286) that booted DOS, then > booted the other same chip that the RT used that was on a daughter card. > > AOS was basically IBM's version of BSD. Academic Operating System. Now if somebody here could just point me to a decent reference on A/UX - Apple's mid-80's version of Unix (for the early Macintosh, I believe...) A big thing I'm trying to show in my book is that Unix has been, for almost thirty years, the standard against which everything else was compared. Even when it wasn't what people were directly using, it's what the techies were thinking about when they designed their other stuff. (That and the Xerox Parc work...) Let's see, the real earthquakes in the computing world (off the top of my head) are: MIT: Project Whirlwind (which got computing off of vacuum tubes, spawned DEC, and Minsky's hacker lab. Gurus too numerous to mention.) Bell Labs: (the transistor, and 20 years later Unix. Gurus Ken Thompson, Dennis Ritchie, the three transistor guys.) DARPA: (Arpanet (BBN), funded project MAC at MIT, and Multics which brought the MIT stuff to Bell Labs.) Xerox Parc (WIMP interface, WYSIWYG word processing/printing/desktop publishing, object oriented programming). The integrated circuit/microchip (Texas Instruments' manufacturing innovation, which led to the Intel 4004, which eventually led to the Altair, which led to the personal computer. Moore's Law would probably be the theme here...) The whole free software thing (Berkeley in the 70's to early 80's, Stallman and the FSF taking over from there. 
And Andrew Tanenbaum's Minix, which spawned Linux...) Huh, I'd have to mention IBM (forget the PC, how about the Winchester drive?), and of course the AT&T breakup (a negative earthquake, but big anyway, sort of leading to the commercialization of the software side of things, although Gates was trying that already. AT&T just removed a lot of the roadblocks by shattering the opposition for a while.) Alright, I need to sit down and make an outline and a timeline. I admit this... (Collecting the data is the easy part. ORGANIZING this fermenting heap of disconnected facts and observations is the hard part...) > mrc Rob
Re: Microsoft and Xenix.
On Saturday 23 June 2001 20:49, John Adams wrote: > On Saturday 23 June 2001 10:07, Rob Landley wrote: > > Here's what I'm looking for: > > > > AIX was first introduced for the IBM RT/PC in 1986, which came out of the > > early RISC research. It was ported to PS/2 and S/370 by SAA, and was > > based on unix SVR2. (The book didn't specify whether the original > > version or the version ported to SAA was based on SVR2, I'm guessing both > > were.) > > You are partially correct. AIX (Advanced Interactive eXecutive) was built > by the Boston office of Interactive Systems under contract to IBM. We had > a maximum of 17 people in the effort which shipped on the RT in January > 1986. Ah. I got the above out of a book in the UT library. (I have the name written down in my notebook... Um, possibly "IBM PS/2, a business perspective" by Jim Hoskins, or more likely "IBM RISC 6000, a business perspective" also by Jim Hoskins. I have no idea who Jim Hoskins is.) Obviously it's better to have somebody who was actually there. Mind if I bug you offline about this? (Or better yet, convince you to join the mailing list I'll be creating this afternoon...) > Prior to that time, Interactive Systems had produced a port of System III > running on the PC/XT called PC/IX which was sold via IBM. I used PC/IX to > produce the software only floating point code in the first version of AIX. Cool. I know there were several nebulous versions of Unix available for the PC. (I don't know when Coherent was introduced but it was around in '89... And Xenix was always sort of floating around...) Considering that IBM also had access to Xenix (if it wanted it), that's at least three versions of Unix IBM could have put on the PC. What do you want to bet no two of them ran the same binaries? 
:) > johna Rob
Re: Microsoft and Xenix.
On Saturday 23 June 2001 22:41, [EMAIL PROTECTED] wrote: > Ah, yes, the RT/PC. That brings back some fond memories. My first > exposure to Unix was with AIX on the RT. I still have some of those > weird-sized RT AIX manuals around somewhere... > > Wayne Ooh! Old manuals! Would you be willing to part with them? I am collecting old manuals, and old computing magazines. I even pay for postage, with a bit of warning that they're coming... Rob
Re: The Joy of Forking
On Sunday 24 June 2001 09:46, Luigi Genoni wrote: > > > no SMP > > > x86 only (and similar, e.g. Crusoe) > > Is this a joke? > I hope it is. > > Luigi Nah, I think it's an intentional troll. Either that or somebody who's so naive they honestly think that having different "text mode" and "binary mode" attributes of files (the cr/lf thing) can in some strange way actually improve a system. (Justifying it with the way printers work when sent an ASCII text stream, despite the fact that most printers these days receive PostScript or something equally distant from ASCII after the printer drivers get done with it. And that text processing itself is, regrettably, moving to Unicode.) Rob
Re: Microsoft and Xenix.
On Saturday 23 June 2001 22:47, Eric W. Biederman wrote: > Rob Landley <[EMAIL PROTECTED]> writes: > > Ummm... GEM was the Geos stuff? (Yeah I remember it, I haven't > > researched it yet though...) > > GEM was a gui from Digital Research I believe. > Geoworks/Geos was a seperate entity. Ah, the DR-DOS answer to DOSShell/Windows. Cool. (I used DR-DOS but never tried its GUI.) I know GEOS had nothing to do with Digital; it started as a windowing GUI for the Commodore 64, if you can believe that... > It's been a long time since I looked but they both run fine under > dosemu... I don't suppose you've got reference to literature or some such? I'd love to work this into my huge obnoxious data tree I'm building... Rob
Re: Alan Cox quote? (was: Re: accounting for threads)
On Sunday 24 June 2001 18:30, J . A . Magallon wrote: > Take a programmer comming from other system to linux. If he wants multi- > threading and protable code, he will choose pthreads. And you say to him: > do it with 'clone', it is better. Answer: non protable. Again: do it > with fork(), it is fast in linux. Answer: better for linux, but it is a > real pain in other systems. So we should be as bad as other systems? I'm trying to figure out what you're saying here... > And worst, you are allowing people to program based on a tool that will > give VERY diferent performance when ported to other systems. They use > fork(). They port their app to solaris. The performance sucks. It is not > Solaris fault. It is linux fast fork() that makes people not looking for > the correct standard tool for what they want todo. Hang on, the fact the Linux implementation of something is 10x faster than somebody else's brain damaged implementation is Linux's fault? On Linux, there is no real difference between threads and processes. On some other systems, there is unnecessary overhead for processes. "Threads" are from a programming perspective multiple points of execution that share an awful lot of state. Which, basically, processes can be if you tell them to share this state. From a system perspective threads are a hack to avoid a lot of the overhead processes have poisoning caches and setting up and tearing down page tables and such. Linux processes don't have this level of overhead. So -ON LINUX- there isn't much difference between threads and processes. Because the process overhead threads try to avoid has been optimized away, and thus there is a real world implementation of the two concepts which behaves in an extremely similar manner. Thus the conceptual difference between the two is mostly either artificial or irrelevant, certainly in this context. If there is a context without this difference, then in other contexts it would appear to be an artifact of implementation. 
On Solaris there is a difference between threads and processes, but the reason is that Solaris' processes are inefficient (as shown by the benchmarks you don't seem to like). > Instead of trying to enhance linux thread support, you try to move > programmes to use linux 'specialities'. No, I'm saying Linux processes right now have 90% of the features of thread support as is, and the remaining differences are mostly semantic. It would be fairly easy to take an API like the OS/2 threading model and put it on Linux. From what I remember of the OS/2 threading model, we're talking about a couple of hours' worth of work. Adding support for the Posix model is harder, but from what I've seen of the Posix model this is because it's evil and requires a lot of things that don't really have much to do with the concept of threading. I will admit that Linux (or the traditional Unix programming model) has some things that don't mesh well with the abstract thread programming model. Signals, for example. But since the traditional abstract threads model doesn't try to handle signals at ALL that I am aware of (OS/2 didn't, Dunno about NT. Posix has some strange duct-tape solution I'm not entirely familiar with... In Java you can kill or suspend a thread which is IMPLEMENTED with signals but conceptually discrete...) > >> That is comparing apples and oranges. > > > >Show me a real-world program that makes the distinction you're making. > >Something in actual use. > > shell scripting, for example. You've done multi-threaded shell scripting? I tried to use bash with multiple processes and ran into the fact that $$ always gives the root process's PID, and nobody had ever apparently decided that this was dumb behavior... > Or multithreaded web servers. Apache has a thread pool, you know. Keeps the suckers around... > With the above > test (fork() + immediate exec()), you just try to mesaure the speed of > fork(). Say you have a fork'ing server. 
On each connection you fork and > exec the real transfer program. Like a CGI script, you mean? Not using mod_perl or mod_php but shelling out? And you're worried about performance? > There time for fork matters. It can run > very fast in Linux but suck in other systems. And this is Linux's problem, is it? > Just because the programmer > chose fork() instead of vfork(). So "#define fork vfork" would not be part of the "#ifdef slowaris" block then? I'm still a bit unclear about what the behavior differences of them actually are. I believe vfork is the hack that goes really funny if you DON'T exec immediately after calling it. Yet the argument here was about THREADS, which don't DO that. So the entire discussion is a red herring. If you're going to fork and then exec, you're not having multiple points of execution in the same program. Linux can do this efficiently on a process level because its fork is efficient (and it has clone() which allows more control
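The "threads are just processes that share state" argument running through this exchange can be demonstrated directly. Again, a sketch of mine rather than anything from the thread (Python, POSIX-only because of fork()): a thread's write lands in the memory of the process that spawned it, while a forked child writes into its own copy-on-write copy, which vanishes when the child exits.

```python
import os
import threading

def via_thread():
    # A thread shares the address space: its append lands in the
    # same list object the caller sees.
    box = []
    t = threading.Thread(target=box.append, args=(42,))
    t.start()
    t.join()
    return box

def via_fork():
    # A forked child gets a (copy-on-write) copy of the address
    # space: its append changes only the child's copy.
    box = []
    pid = os.fork()
    if pid == 0:          # child
        box.append(42)
        os._exit(0)
    os.waitpid(pid, 0)    # parent: wait, then inspect its own copy
    return box

if __name__ == "__main__":
    print(via_thread(), via_fork())
```

Everything else -- scheduling, page tables, file descriptors -- is the same kind of entity underneath, which is the point being argued about clone() and its flags.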
Re: Microsoft and Xenix.
On Sunday 24 June 2001 21:45, Jeff Dike wrote: > [EMAIL PROTECTED] said: > > Licklidder wasn't just a bigwig behind arpanet, he also kicked off > > project mac at MIT. > > You're right, but you could at least spell his name right - J. C. R. > Licklider. > > Jeff (who was his last undergraduate thesis supervisee at MIT) What can I say, I'm bad with names? This is why I'm so careful to write them down accurately in my notebook, which is at home. (I have some stuff typed into a text file on my laptop, but it's easier to drag out a notebook and jot something down than to wait 30 seconds for my Dell monstrosity's BIOS to boot up, open a window, cd to the appropriate directory, edit a text file, then shut everything down again. I should probably get a Palm Pilot one of these days...) Rob
Re: Alan Cox quote? (was: Re: accounting for threads)
On Sunday 24 June 2001 19:50, Larry McVoy wrote: > On Mon, Jun 25, 2001 at 12:30:02AM +0200, J . A . Magallon wrote: > > They use fork(). > > They port their app to solaris. > > The performance sucks. > > It is not Solaris fault. > > It is linux fast fork() ... > > One for the quotes page, eh? We're terribly sorry, we'll get busy on > adding some delay loops in Linux so it too can be slow. I'm still working that one out myself... Okay, so Linux programmers are supposed to avoid APIs that are either broken, badly implemented, or missing, on some other platform. That's pretty much all of them, isn't it? Rob
Re: Microsoft and Xenix - Now there's a mailing list for this discussion.
On Sunday 24 June 2001 18:41, Chris Meadors wrote: > Okay, I brushed on GEOS, Microsoft, Xenix, and even Linux. So I'm as on > topic as the rest of this thread. I just have never told my story on l-k, > and this seemed a good place to put a little of it in. :) > > -Chris I just created a mailing list for this discussion attached to one of my existing sourceforge projects. It's [EMAIL PROTECTED] This is sort of an abuse of sourceforge, but then again the project I attached it to is to put together a Linux convention in Austin in 2003 and we'll probably have at least one panel on computer history, and most likely a BOF too, so it's SORT of on topic. :) To subscribe, apparently you go here: http://lists.sourceforge.net/lists/listinfo/penguicon-comphist (I've cc'd the people who've emailed me about this topic so far, but haven't subscribed anybody. If you're interested, you have to do it yourself.) Rob
Re: Microsoft and Xenix.
On Sunday 24 June 2001 22:51, [EMAIL PROTECTED] wrote: > Sorry, but I'm hanging on to my old computer manuals. The AIX manuals in > particular have sentimemtal value for me. Entirely understandable. Would you be willing to xerox any "introduction" or "about" sections? > OTOH, I have quite a few old computer magazines (from the 80's) like Byte, > Infoworld, etc. I've been intending to get rid of them for some time now, > but hated just to throw them away. They're in storage in a neighboring > state right now, but my wife probably will be driving there in the next > couple of weeks to pick up a few things. If you're interested, she could > bring back the magazines and I can tell you exactly what I have. You're > welcome to them if you want them. Sure. Let me know what you have and I'll tell you what I don't have... > Wayne Rob
Re: Microsoft and Xenix.
On Monday 25 June 2001 11:13, you wrote: > 1937 claude shannon A Symbolic Analysis of Relay and Switching Circuits," > > 1948 claude shannon A mathematical theory of information. > > without those you're kind of in trouble on the computing front... Yeah, I know I've bumped into that name (and probably taken notes) somewhere. Hmmm... Might be from "Where wizards stay up late", or might have been an article linked from slashdot... (I don't THINK it was mentioned in "Hackers"... Rodents, where was the reference... Crystal fire? That's mostly hardware. Accidental Empires? Doesn't sound right... Can't have been "Fire in the valley" because I haven't read that yet, it's still sitting on the bookshelf. Not soul of a new machine, that's post-Digital Equipment Corporation...) I THINK that's in the set of notes that's on the box that's not hooked up right now... (Shortage of monitors at home.) This was the dude who decided to apply a binary and boolean approach to electronic computation, right? I KNOW I've read some stuff about him... late last year? Now I remember. Slashdot linked to his obituary: http://www.bell-labs.com/news/2001/february/26/1.html Rob The list for this discussion is: http://lists.sourceforge.net/lists/listinfo/penguicon-comphist
Re: Microsoft and Xenix.
On Monday 25 June 2001 13:14, [EMAIL PROTECTED] wrote: > Hi, > > If you're really keen on old mags and manuals I'll go up to attic and look > around. I know there are old SCO Xenix & TCP/IP, as well as Byte and Dr > Dobbs > Ooh! Yes! Very much so. Thanks, Rob The mailing list for this discussion is: http://lists.sourceforge.net/lists/listinfo/penguicon-comphist
Re: [comphist] Re: Microsoft and Xenix.
On Monday 25 June 2001 16:19, [EMAIL PROTECTED] wrote: > Hi again, > > some old brain-cells got excited with the "good-ol-days" and other names > have surfaced like "Superbrain","Sirius" and "Apricot". Sirius was Victor in > the USA. If you go done the so-called IBM compatible route then the nearly > compatible nightmares will arise and haunt you, your lucky if the scars > have faded!! With the spelling cleaned up slightly, I just might want to quote that last sentence in my book. Would you object? I take it that Superbrain, Sirius/Victor, and Apricot were PC clones like the Tandy and Wang that were sort of but not really compatible? > I learnt my computing on a PDP8/E with papertape punch/reader, RALF, > Fortran II, then later 2.4Mb removable cartridges (RK05 I think). toggling > in the bootstrap improved your concentration. Much later you could > get a single chip(?) version of this in a wee knee sized box. "A quarter century of unix" mentions RK05 cartridges several times, but never says much ABOUT them. Okay, so they're 2.4 megabyte removable cartridges? How big? Are they tapes or disk packs? (I.E. can you run off of them or are they just storage?) I know lots of early copies of unix were sent out from Bell Labs on RK05 cartridges signed "love, ken"... What was that big reel to reel tape they always show in movies, anyway? I need a weekend just to collate stuff... > One summer job was working on a PDP15 analog computer alongside an 11/20 > with DECTAPE, trying to compute missile firing angles. [A simple version of > Pres Bush's starwars shield]. Considering that the Mark I was designed to compute tables of artillery firing angles during World War II... It's a distinct trend, innit? And the source of the game "artillery duel", of course... 
> -- > > Andrew Smith in Edinburgh,Scotland > > On 25 Jun 2001, Kai Henningsen wrote: > > [EMAIL PROTECTED] (Rob Landley) wrote on 24.06.01 in <[EMAIL PROTECTED]>: > > > Now if somebody here could just point me to a decent reference on A/UX > > > - Apple's mid-80's version of Unix (for the early macintosh, I > > > believe...) > > > > http://www.google.com/search?q=%22%2ba/ux%22 > > > > Usually a good idea. > > > > Also, you probably want to look at RFC 2235. > > > > MfG Kai
Re: Microsoft and Xenix.
On Monday 25 June 2001 15:23, Kai Henningsen wrote: > The AS/400 is still going strong. It's a virtual machine based on a > relational database (among other things), mostly programmed in COBOL (I > think the C compiler has sizeof(void*) == 16 or something like that, so > you can put a database position in that pointer), it doesn't know the > difference between disk and memory (memory is *really* only a cache), and > these days it's usually running on PowerPC hardware. > > ISTR there's a gcc port for the AS/400. Oh, and it does have normal BSD > Sockets. These days, it's often sold as a web server. > > Main customer base seems to be medium large businesses and banks. The AS400 seems to be based out of Austin. We hear a lot about it around here... > > Lotus-Intel-Microsoft Expanded Memory Specification), and "DOSShell" > > which conformed to the SAA graphical user interface guidelines. > > Nope, the text user interface guidelines, a related but not the same > beast. That's where F1 == Help is from, by the way. Same overall push. I think the distinction there is a bit nit-picking to put in the book, but I'll have to look it up to make sure... > In fact, the user interface part of SAA was (is?) called CUA. And many IBM > text mode interfaces more or less follow it, including OS/400 (the os of > the AS/400). Once upon a time, I had the specs for CUA. When I worked at IBM I had to program for CUA. Ouch. Painful memories... How did any of this relate to the "Common Desktop Environment", by the way? > > The PS/2 model 70/80 (desktop/tower versions of same thing) were IBM's > > first 386 based PC boxes, which came with either DOS 3.3, DOS 4.0, OS/2 > > (1.0), or AIX. > > The first 386 PCs were not from IBM, by the way. Was it Compaq? It was Compaq. The "Desqpro" or some such. That was actually an important turning point, Compaq basically stuck a 32 bit processor in a machine that was otherwise designed for a 16 bit one. 
It had a 16 bit ISA bus, 8 bit 30 pin simms that had been paired off now needed to be used in groups of 4... It was a painful hack from a hardware perspective. IBM was busy trying to upgrade the memory system and bus and stuff to be a better platform for the 386, but they waited too long and Compaq just came out with "a quick hack now", and everybody else started copying Compaq (especially when IBM's alternative was patented and thus not easily clonable...) With the PS/2 IBM succeeded in preventing the clones from copying them. Their mistake was in thinking that this was a good thing. > > AIX was NOT fully SAA/SNA compliant, > > AFAICT, nothing ever was fully SAA compliant, though some systems were > more compliant than others. Yeah, but AIX didn't even pretend to be. And that sidelined it within IBM in the late 80's in a big way. (Up until Gerstner took over and overturned everything.) > > Hmmm... Notes on the history of shareware (pc-write/bob > > wallace/quicksoft, pc-file/pc-calc/jim button/buttonware, pc-talk/andrew > > fluegelman, apparently the chronological order is andrew-jim-bob, and bob > > came up with the name "shareware" because "freeware" was a trademark of > > Headlands Press, Inc...) > > That may be, but I believe the *concept* was invented in 1980 by Bill The "concept" of freeware had been around as public domain software forever. The homebrew club thought that way naturally about micros, and the MIT hackers thought that way also. If you're saying Basham invented shareware... Maybe. I'll have to look into it. I'm just tracing back the origin of the word... > Basham, with the Apple ][ DOS replacement Diversi-DOS (which was the most > popular of many versions to increase disk speed by about a factor of 5). I > still remember discussions how copying this stuff was actually the right > thing to do. Seems he's still in business as "Diversified Software > Research", http://www.divtune.com/. Adding link to link pile... > > running AIX. 
The engineers (in Austin) went on for the second > > generation Risc System 6000 (the RS/6000) with AIX version 3, launched > > February 15 1990. The acronym "POWER" stands for Performance Optimized > > With Enhanced Risc. > > The PowerPC was split off from the POWER architecture, and then the POWER > stuff was turned into the high end above PowerPC (with system prices about > a factor of ten higher as the lower bound). Yeah, I have to research that bit still. I know the vague bits (the IBM/Apple/Motorola alliance to unseat Intel with RISC, conceived before Intel came out with the Pentium, of course...) > IBM developed a version of OS/2 2.0 for the PowerPC, but *never* marketed > it - you could buy it if you knew the right number, but they never spent a > single cent on advertising - by the time it was done, IBM had given up on > OS/2. Most OS/2 fans agreed that it was killed by IBM with extremely bad > marketing. My first job out of college was working at IBM in Boca Raton, Florida on the install
Re: Fwd: Re: Microsoft and Xenix.
On Tuesday 26 June 2001 08:57, Patrick O'Callaghan wrote: > Ah, fame at last :-) You seem to have been inexplicably excluded from "a quarter century of unix" by peter salus. (You're not in the index, anyway.) Haven't read "life with unix" yet... > I'm not on the linux-kernel list but a friend forwarded me this message: > > Subject: Re: Microsoft and Xenix. > > Date: Mon, 25 Jun 2001 18:11:01 +0100 (BST) > > From: <[EMAIL PROTECTED]> > > > > I first used Unix on a PDP11/44 whilst studying for my Computer > > Engineering degree at Heriot-Watt University in Edinburgh. I think > > they and Queen Margaret > > College, London were the first folk running Unix version 6 outside > > Bell Labs. > > It was in fact a PDP-11/45. Unix 5th Edition was first installed by Peter > De Souza around January 1975 (if anyone knows Peter's whereabouts please > mail me; I know he emigrated to the US and I lost track of him). Anyway, > the 11/45 had only 48kb of (core) memory, which was enough to boot the > system and run the Shell but almost nothing else. We had to connect the > machine to a neighbouring 11/20 with Unibus cable and a special bus switch > box built in-house in order to do anything. I'm a little fuzzy on how that would really improve things... It still wouldn't have enough memory to run anything except the shell. (Ummm... You skipped running the shell and booted straight to your app as process 1?) > This quickly improved when we > purchased a 256kb semiconductor memory board from Plessey (the DEC guy > couldn't believe all that memory would fit on only one 19-inch board :-). Sounds cool. Who's Plessey? (What was DEC selling at about that time? I take it they hadn't made the jump to IC dram yet?) Had anybody actually HEARD of Intel at this time? They seem to be a no-name fringe DRAM player until the 4004. Their retroactive history makes it seem quite noble, but I'm still not sure who they actually sold their DRAM -TO-... > It cost 3000 quid. Okay, a quid's a pound? 
(You are referring to the cost of the DRAM here?) I have no idea what the exchange rate was back about then... > We had 2 RK05 removable disks (2.5 Mb each!) and a paper > tape reader. Note that we had no tape drive, and Unix came on a reel of > tape, so we had to trudge around various places in the Edinburgh area doing > media conversions on non-Unix machines. Oh how we laughed. We later bought > an SMC 80Mb removable washing-machine style disk for I think about 15000 > pounds (for which we had to fight off the Control Engineering guys who > wanted to buy a floating-point unit -- yes, the fp was emulated!). How many different departments shared this box? What kind of things were done on it? > This system supported around 10 ASCII terminals via a DZ-11 serial-line > multiplexor. Memory was so tight we couldn't run VI, but I wrote my PhD > thesis on it (in NROFF) using George Coulouris' EM editor from Queen Mary > College (not Queen Margaret). They were the first to run Unix in the UK > along with us. I've never known who was really the first because of course > there was no Internet, not even UUCP mail. Ken Thompson would know, he sent out the tapes. Peter Salus's book just says (page 70), "The United Kingdom, which had received Unix in 1973..." Sigh... > We may even have been the first > in Europe for all I know, though I think Andy Tanenbaum was fairly close > behind. I thought Tanenbaum worked at Bell Labs? (Did he just visit, or did he move to Europe after leaving the BTL?) > Anyway, I'll not rabbit on. Those were the days when men were men, real > programmers wrote assembler, and we didn't need no steenking GUIs (mumbles > into beer). And I'm writing a book about it. :) > > If anyone knows where Patrick O'Callaghan is now (ask > > him). > I'm at Simón Bolívar University in Caracas, Venezuela. My home page is > http://www.ldc.usb.ve/~poc. 
And the mailing list we're discussing computing history on is: http://lists.sourceforge.net/lists/listinfo/penguicon-comphist (Yes, I am recruiting! :) > Cheers > > poc Rob
Re: Microsoft and Xenix.
On Tuesday 26 June 2001 12:15, Daniel Phillips wrote: > On Tuesday 26 June 2001 17:15, Joel Jaeggli wrote: > > On Tue, 26 Jun 2001, Jocelyn Mayer wrote: > > > > you get DR-DOS = Digital Research DOS, then you get Novell DOS, then > > you get Caldera OpenDOS, currently opendos is owned by lineo > > Yes, and the source actually was open for a short time when Caldera had it, > then it snapped back shut like a clam. I wanted to use DrDos for an > industrial project because of less paranoid licensing than MS-Dos, but > after being rebuffed in no uncertain terms when I offered to fix a bug I > ran away shuddering and jumped on the Linux cluetrain. > > > > I think I remember that DR-DOS was the name that Caldera > > > gave to the Digital Research OS, previously known as GEMDOS, After Ransom Love fell for Microsoft's "Stop using the GPL so we can fork your stuff and make a proprietary version" campaign... That pretty much buried the needle on my "cluelessness" meter. As far as I'm concerned, the only thing Caldera could still do that would surprise me would be to come to their senses. Rob
Re: Cosmetic JFFS patch.
On Thursday 28 June 2001 14:36, Miquel van Smoorenburg wrote: > You know what I hate? Debugging stuff like BIOS-e820, zone messages, > dentry|buffer|page-cache hash table entries, CPU: Before vendor init, > CPU: After vendor init, etc etc, PCI: Probing PCI hardware, > ip_conntrack (256 buckets, 2048 max), the complete APIC tables, etc We've got a couple of VA rackmount servers with adaptec scsi controllers that print out several SCREENS worth of information as they probe all the busses and joyfully announce that yes, there are still hard drives plugged into some of them, and even give me a list of the ones that DON'T have hard drives plugged into them. Interestingly, the bios also goes through a very similar ritual earlier in the boot. Rob
Re: The latest Microsoft FUD. This time from BillG, himself.
On Friday 29 June 2001 15:11, Clayton, Mark wrote: > > -Original Message- > > From: Paul Fulghum [mailto:[EMAIL PROTECTED]] > > Sent: Friday, June 29, 2001 4:02 PM > > To: Pavel Machek; [EMAIL PROTECTED]; Schilling, Richard; > > [EMAIL PROTECTED]; Henning P. Schmiedehausen; > > [EMAIL PROTECTED] > > Subject: Re: The latest Microsoft FUD. This time from BillG, himself. > > > > > Is this accurate? I never knew NT was mach-based. I do not think NT > > > 1-3 were actually ever shipped, first was NT 3.5 right? > > > Pavel > > > > NT 3.1 was the 1st to ship. > > I still have my 3.1 package all boxed up in the basement. I remember > impatiently waiting for its arrival. What a disappointment it turned > out to be. > > Mark I already answered this on the comphist list, but I've gotten in the habit of trimming linux-kernel from the replies. NT 3.1 was the first release version to ship, but there had been a "beta 1" in late 1992 and a "beta 2" in 1993. (This is why I said I needed my notebook. :) NT 3.1 was obviously numbered that due to the success of Windows 3.1. It didn't fool anybody, of course. But it DID manage to confuse things enough to delay the release of Windows 4.0 (nee 95) for about two years while they tried to shoehorn NT into the consumer space... http://www.jwntug.or.jp/misc/japanization/history.html The dos death march: Dos 1.0 they didn't mean to do until the CP/M deal fell through. DOS 2.0 was documented as being a transitional product until the PC could run Xenix. Dos 4.0 was going to be replaced by OS/2. Dos 6 was going to be replaced by NT. Dos 7 (in windows 95) was the absolutely last version ever, swear on a stack of printouts. Windows 98 tried to avoid mentioning the word "dos". Bill Gates' evil sidekick winnie-me (You can just see him, shaved head, pinkie in corner of mouth, "I shall call it...") tried very hard to hide the presence of dos, actively denying access to command.com wherever possible. 
What kind of odds are Lloyds of London giving on the presence of DOS in Windows XP at this point? Just curious... And any FURTHER discussion of this belongs on: http://lists.sourceforge.net/lists/listinfo/penguicon-comphist Really. Rob
The SUID bit (was Re: [PATCH] more SAK stuff)
On Thursday 05 July 2001 21:45, Albert D. Cahalan wrote: > Oh, cry me a river. You can set the RUID, EUID, SUID, and FUID > in that same parent process or after you fork(). Okay, I'll bite. The file user ID is fine, the effective user ID is what the suid bit sets to root of course, the saved user id is irrelevant to this (haven't encountered something that actually cares about it yet, and yes I have been checking source code when I bump into a problem). But the actual uid (real user ID) ain't root, and an euid of root doesn't let me change the uid itself to root, or at least I haven't figured out how. (And haven't really tried: there are some things that might conceivably care whether you really are root or not, but the samba change password command isn't one of them. I have a password protected cgi accessed via ssl which allows the manipulation of a limited subset of samba users, and the samba tool will happily let me change anybody's password as suid root. But to add a user, the script has to append an entry to the file manually and then change the password from "racecondition" (which it is) to whatever the user's password should be. I could patch and ship nonstandard samba binaries, but that makes automatic upgrades problematic. (And samba, being a net-accessible server, REALLY needs to be kept up to date.)) Do you have a code example of how a program with euid root can change its actual uid (which several programs check when they should be checking euid; versions of dhcpcd before I complained about it being a case in point)? Some of it's misguided "policy", assuming that the suid bit is on the executable itself instead of its parent process. A check and an error "Thou shalt not set this suid root" is fairly common on things that can be securely run from a daemon running AS root. So apparently, the obvious way to fix it is to relax the security restrictions even MORE, which is silly. 
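[Editor's note: a minimal sketch of the pattern being asked about, in Python rather than C for brevity (the `os` module wraps the same setuid(2) syscall). The function name is hypothetical. Under the POSIX "appropriate privileges" rule, a process whose effective uid is 0 can call setuid(0), which rewrites the real, effective, AND saved uids to 0.]

```python
# Hypothetical sketch: a setuid-root program starts with euid 0 but
# ruid = the invoking user.  Calling setuid(0) while euid is 0 promotes
# the real (and saved) uid to root as well.
import os

def promote_to_root():
    """If euid is 0, make the real uid 0 too; return (ruid, euid) after."""
    if os.geteuid() == 0:
        os.setuid(0)  # with euid 0 this also rewrites ruid and saved uid
    return (os.getuid(), os.geteuid())

# Run unprivileged this is a no-op; run from a setuid-root context the
# real uid comes back as 0.
ruid, euid = promote_to_root()
print("ruid=%d euid=%d" % (ruid, euid))
```

The same logic in C is just `if (geteuid() == 0) setuid(0);` after the usual error checking.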
> Since you didn't set all the UID values, I have to wonder what > else you forgot to do. Maybe you shouldn't be messing with > setuid programming. Ah, the BSD attitude. If you don't already know it, you should die rather than try to learn it. Anybody who isn't perfect should leave us alone, we LIKE our user base small. :) Following this logic, nobody should use Linux because the kernel has repeatedly shipped with holes allowing people to hack root, gaping big holes like the insmod `;rm -rf /` thing last year. Apparently we should all be using an early 90's version of NetWare or some kind of embedded system audited for stack overflows and burned in ROM... Rob (Reference Dilbert: "Here's a quarter kid, go buy yourself a real computer." That's a nice way to recruit new users to help politically support DeCSS or convince video card manufacturers to release source code to their 3d drivers, winmodems, funky encryption in USB audio, slipping registration stuff in the ATA spec...)
2.4.6 APM suspend kills Dell inspiron 3500 sound card, but revives network card.
My devices on my laptop work very strangely with kernel 2.4.6. -- Sound problems: The sound card on my laptop (Dell Inspiron 3500) works fine when the system first boots up, but stops working with the first suspend. Any attempt to write sound to it after that blocks indefinitely. I don't even get console beeps until I reboot. That's under kde with their not-esd daemon (I.E. using noatun). If I do the same from the console it still plays fine before the suspend (using mpg123), and afterwards plays short samples in a loop with "DMA timeout" error messages to the console. -- Network problems. I have 3 pcmcia network cards (10baseT Xircom, 100baseT cardbus thing that isn't with me right now, and a wireless card.) The two 10baseT (pcmcia) ones have about the same behavior, the 100baseT (cardbus?) one's a little different. Under previous kernels (the mutant red hat 7.1 2.4.2), the pcmcia network card would work fine on boot but die when the system suspends. (I didn't mind because I could pop it out and put it back in and it would work again.) Now with 2.4.6 it's exactly the OPPOSITE behavior: the network card doesn't work at all until I suspend and resume, but when the system comes back up after a suspend the card works fine. Before the suspend, popping it out and putting it back in accomplish nothing. Afterwards popping out and putting in work great, re-runs dhcpcd and everything. Back under red hat's 2.4.2, putting the cardbus card in, suspending, resuming, and popping the card out produced a kernel panic. I haven't tried with 2.4.6 (don't have the card with me), but I can try to reproduce this under 2.4.6 if it sounds interesting to anybody... -- Fun little detail: The two cardbus bridges and the sound card are all on IRQ 11, it seems. /proc/pci attached. Rob (P.S. I take it the XFree86 hangs are an XFree86 problem, not kernel? Rat pointer still moves, drive still chugs a bit in the background, so the kernel seems sort of still there... 
Can't get out of the frozen gui though, no ctrl-alt-F1, no ctrl-alt-backspace... Oh well.)

PCI devices found:
  Bus 0, device 0, function 0:
    Host bridge: Intel Corporation 440BX/ZX - 82443BX/ZX Host bridge (rev 2).
      Master Capable. Latency=64.
      Prefetchable 32 bit memory at 0xe000 [0xe3ff].
  Bus 0, device 1, function 0:
    PCI bridge: Intel Corporation 440BX/ZX - 82443BX/ZX AGP bridge (rev 2).
      Master Capable. Latency=128. Min Gnt=140.
  Bus 0, device 4, function 0:
    CardBus bridge: Texas Instruments PCI1220 (rev 2).
      IRQ 11. Master Capable. Latency=168. Min Gnt=192. Max Lat=5.
      Non-prefetchable 32 bit memory at 0x1000 [0x1fff].
  Bus 0, device 4, function 1:
    CardBus bridge: Texas Instruments PCI1220 (#2) (rev 2).
      IRQ 11. Master Capable. Latency=168. Min Gnt=192. Max Lat=5.
      Non-prefetchable 32 bit memory at 0x10001000 [0x10001fff].
  Bus 0, device 7, function 0:
    Bridge: Intel Corporation 82371AB PIIX4 ISA (rev 2).
  Bus 0, device 7, function 1:
    IDE interface: Intel Corporation 82371AB PIIX4 IDE (rev 1).
      Master Capable. Latency=64.
      I/O at 0xfcd0 [0xfcdf].
  Bus 0, device 7, function 2:
    USB Controller: Intel Corporation 82371AB PIIX4 USB (rev 1).
      IRQ 10. Master Capable. Latency=64.
      I/O at 0xfce0 [0xfcff].
  Bus 0, device 7, function 3:
    Bridge: Intel Corporation 82371AB PIIX4 ACPI (rev 2).
      IRQ 9.
  Bus 1, device 0, function 0:
    VGA compatible controller: Neomagic Corporation [MagicMedia 256AV] (rev 18).
      Master Capable. Latency=128. Min Gnt=16. Max Lat=255.
      Prefetchable 32 bit memory at 0xfd00 [0xfdff].
      Non-prefetchable 32 bit memory at 0xfe80 [0xfebf].
      Non-prefetchable 32 bit memory at 0xfec0 [0xfecf].
  Bus 1, device 0, function 1:
    Multimedia audio controller: Neomagic Corporation [MagicMedia 256AV Audio] (rev 18).
      IRQ 11.
      Prefetchable 32 bit memory at 0xfe00 [0xfe3f].
      Non-prefetchable 32 bit memory at 0xfe70 [0xfe7f].

xirc2ps_cs 11808 1
ad1848 17456 1
sound 57728 1 [ad1848]
Re: Alan Cox quote? (was: Re: accounting for threads)
On Wednesday 20 June 2001 11:33, Alexander Viro wrote: > On 20 Jun 2001, Jes Sorensen wrote: > > Not to mention how complex it is to get locking right in an efficient > > manner. Programming threads is not that much different from kernel SMP > > programming, except that in userland you get a core dump and retry, in > > the kernel you get an OOPS and an fsck and retry. > > Arrgh. As long as we have that "SMP makes locking harder" myth floating > around we _will_ get problems. Kernel UP programming is not different > from SMP one. It is multithreaded. And amount of genuine SMP bugs is > very small compared to ones that had been there on UP since way back. > And yes, programming threads is the same thing. No arguments here. Hopefully in 2.5 we'll get the pre-emptive UP patch in that enables the SMP locks on UP and finally clean it all out by exposing the bugs to the main user base. As for multi-threaded programming being hard, people are unfamiliar with it. Any programming is hard the first time. And almost anybody's first attempt at it is going to suck. (Dig out the linux kernel 0.02 sources sometime and compare them with what we have today.) The more experience you get with it, the better you are. Encouraging people to stay away from it rather than learn to do it RIGHT is the wrong attitude. People will figure out that using 1000 threads when you need 3 isn't the best way to go, that locking is an expense but failing to lock is worse, how to spot race conditions (the same old "security" mindset except you don't need a malicious third party to get bitten by it.) And they'll learn it the way I did, and the way everybody does. Do it wrong repeatedly. Make every mistake there is, hard. 
Get burned, rewrite it from scratch four times until you figure out how to design it right, spend long weekends looking for subtle little EVIL bugs you can't reproduce but which bite you the instant you stop looking for them, learn to play "hot potato" with volatile data you have to manipulate atomically... Everybody starts as a bad programmer. Some of us get it out of our systems when we're 12. Others decide programming is lucrative when they're 35 and inflict their "hello world" opus upon their coworkers. Me, I wrote a disk editor and a bunch of bbses in 5th and 6th grade back on the C64 that will never see the light of day. And yes they sucked. But I'm still proud of them. And I wrote awful multithreaded code back on OS/2, and can now write decent threading code because I've learned a large number of things to avoid doing. And I take proper care because I know how hard it is to FIND one of these if you do wind up doing it. I've done it. Once for two consecutive weeks on the same
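[Editor's note: the "subtle little EVIL bugs" described above are mostly lost-update races on shared data. A minimal illustration, hypothetical and in Python rather than kernel C for brevity: several threads doing an unprotected read-modify-write on a shared counter can silently lose increments; holding a lock makes the result deterministic.]

```python
# Hypothetical illustration of the race bugs discussed above: four
# threads hammering one shared counter.  "counter += 1" is a
# read-modify-write; without the lock, two threads can read the same
# old value and one increment vanishes.  With the lock, the result is
# always exactly threads * iterations.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:  # remove this and updates can be silently lost
            counter += 1

threads = [threading.Thread(target=worker, args=(50000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # deterministically 200000 while the lock is held
```

This is the same discipline as a kernel spinlock around shared state: the lock costs something on every increment, but "failing to lock is worse".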
Re: [OT] Re: When the FUD is all around (sniff).
On Tuesday 26 June 2001 11:09, Jonathan Lundell wrote: > >account of the speech didn't mention it. The Fehrenbachers give the > >old-timers' recollections a D. The evidence, the scholars say, > >"suggests that this is a case of reminiscence echoing folklore or > >fiction." I don't feel NEARLY so bad about the ongoing computer history thread being too far off-topic now. :) Rob
Re: For comment: draft BIOS use document for the kernel
On Friday 22 June 2001 12:20, Alan Cox wrote: > int 0x10 service 3 is used during the boot loading sequence to obtain the > cursor position. int 0x10 service 13 is used to display loading messages > as the loading procedure continues. int 0x10 AH=0xE is used to display a > progress bar of '=' characters during the bootstrap I seem to remember '.' characters, not '='... > It then uses int 0x10 AH=0x0E in order to print initial progress banners so > that immediate feedback on the boot status is available. The 0x07 character > is issued as well as printable characters and is expected to generate a > bell. Hmmm... About when during the boot is this? (I get a beep from the bios long before lilo, and another when the pcmcia stuff detects a card, but that's it...) > usable memory data. It also handles older BIOSes that return AX/BX but not > AX/BX data. What does that mean? (Return garbage in AX/BX?) > Having sized memory the kernel moves on to set up peripherals. The BIOS > INT 0x16, AH=0x03 service is invoked in order to set the keyboard repeat > rate and the video BIOS is the called to set up video modes. "then called"... > The kernel tries to identify the video in terms of its generic features. > Initially it invokes INT 0x10 AH=0x12 to test for the presence of EGA/VGA > as oppose to CGA/MGA/HGA hardware. "as opposed to"... > Having completed video set up the hard disk data for hda and hdb is copied > from the low memory BIOS area into the kernel tables. INT 0x13 AH-0x15 is > used to check if a second disk is present. Second disk or second IDE controller? (We already copied hdb from low memory, are we now confirming it?) > The kernel invokes the PCI_BIOS_PRESENT function initially, in order to > test the availability of PCI services in the firmware. 
Assuming this is > found them PCIBIOS_FIND_PCI_DEVICE, PCIBIOS_FIND_PCI_CLASS_CODE, > PCIBIOS_GENERATE_SPECIAL_CYCLE, PCIBIOS_READ/WRITE_CONFIG_BYTE/WORD/DWORD > calls are issued as the PCI service are configured, along with either "services are" or "service is"... > compatibility. One extension the Linux kernel makes to the official rules > for parsing this table, is that in the presence of PCI/ISA machines it will That is a totally gratuitous comma. (Okay, I'm nit-picking. It can stay if you think it can be house-trained, but I'm not feeding it.) > 4.1 Boot Linux on the system > > 4.2 Insert a PCMCIA card, ensure the kernel detects it > > 4.3 Remove the PCMCIA card, ensure the kernel detects the change > > 4.4 Insert a cardbus card, ensure the kernel detects it > > 4.5 Verify the cardbus device is usable > > 4.6 Remove the cardbus device, ensure the kernel detects it I have a 100% reproducible crash on Red Hat 7.1 if I put in a cardbus card, apm suspend, resume the system, then pop the cardbus card out. Kernel panic, every time. (I assumed it had been fixed in newer versions. I've been meaning to look into it, but it works fine with a 16 bit PCMCIA card so I just swapped my 100baseT for a 10baseT and everything's fine. The cardbus card works fine if I put it in and pop it out without suspending, and suspend works fine by itself (Although sound never comes back after an APM suspend. I have to reboot the laptop to get sound back...)) Ah, the joys of the Dell Inspiron 3500. Nice big screen, though... Rob
Re: [PATCH] more SAK stuff
On Monday 02 July 2001 15:10, Hua Zhong wrote: > -> From Alan Cox <[EMAIL PROTECTED]> : > > > (a) It does less, namely will not kill processes with uid 0. > > > Ted, any objections? > > > > That breaks the security guarantee. Suppose I use a setuid app to confuse > > you into doing something ? > > a setuid app only changes euid, doesn't it? Yup. And you'd be amazed how many fun little user mode things were either never tested with the suid bit or obstinately refuse to run for no good reason. (Okay, I made something like a sudo script. It's in a directory that non-root users can't access and I'm being as careful as I know how to be, but I've got a cgi that needs root access to query/set system and network configuration.) Off the top of my head, fun things you can't do suid root: The samba adduser command. (But I CAN edit the smb.passwd file directly, which got me around this.) su without password (understandable, implementation detail. It's always suid, being run by somebody other than root is how it knows when it NEEDS to ask for a password. But when I want to DROP root privileges... Wound up making "suid-to" to do it.) ps (What the...? Worked in Red Hat 7, but not in SuSE 7.1. Huh? "suid-to apache ps ax" works fine, though...) dhcpcd (I patched it and yelled at the maintainer of this months ago, should be fixed now. But a clear case of checking uid when he meant euid, which is outright PERVASIVE...). I keep bumping into more of these all the time. Often it's fun little warnings "you shouldn't have the suid bit on this executable" (which is frustrating 'cause I haven't GOT the suid bit on that executable, it inherited it from its parent process, which DOES explicitly set the $PATH and blank most of the environment variables and other fun stuff...) By the way, anybody who knows why samba goes postal if you change the hostname of the box while it's running, please explain it to me. It's happy once HUPed, but then again, it execs itself. (Not nmbd. smbd. Why does it CARE?
And sshd has the most amazing timeouts if it can't do a reverse dns lookup on the incoming IP, even if I tell it not to log!) Apache has a similar problem, and HUP-ing it interrupts in-progress transfers, which could be very large files, 'cause it execs itself. I made that happy by telling it its host name was a dot notation IP address, although that does mean that logging into a password protected web page using the host name forces you to log in twice (again when it switches you to http://1.2.3.4/blah...) Fun, isn't it? :) Alan's right. We DO need a rant tag. Rob
Is there a Linux trademark issue with sun?
Heads up everybody. Scott McNealy has apparently been calling Solaris Sun's implementation of Linux. Trademark violation time. The article's here: http://linuxtoday.com/news_story.php3?ltsn=2000-12-14-020-04-NW-CY Quick quote: >When asked by a reporter why Sun's new clustering >software was restricted to Solaris and not available >on Linux, McNealy's aggravation seemed to peak. "You >people just don't get it, do you? All Linux >applications run on Solaris, which is our >implementation of Linux. Now ask the question again," Assuming the quote is accurate (which, being ZD, is iffy), this strikes me as a mondo trademark violation, and exactly the sort of thing the Linux trademark was designed to prevent. Solaris is NOT Linux. That's just my opinion, of course, but I wanted to make sure everybody was aware of the situation... Rob (Yes, it finally happened. The Unix idiots have now "protected" the trademark "Unix" to the point where Linux is now a more valuable name to be associated with. But turnabout IS fair play. And they know the rules if they want to participate. Add in the MS profit warning and IBM's billion dollar pledge to our little PBS station and it's been a good week...) __ Do You Yahoo!? Yahoo! Shopping - Thousands of Stores. Millions of Products. http://shopping.yahoo.com/
2.2.18ac14 yamaha debug output removal patch.
The new driver works fine on the box here, produces all sorts of debug gorp to the console though. Most of the unnecessary printk's are commented out, these are the three I've been seeing while playing around with mpg123 and some mp3 files... Other than that, it worked for me... Rob --- linux/drivers/sound/ymfpci.c Mon Oct 2 01:46:48 2000 +++ linux2/drivers/sound/ymfpci.c Mon Oct 2 01:43:03 2000 @@ -668,10 +668,10 @@ /* * Normal end of DMA. */ - printk("ymfpci%d: %d: done: delta %d" - " hwptr %d swptr %d distance %d count %d\n", - codec->inst, voice->number, delta, - dmabuf->hwptr, swptr, distance, dmabuf->count); +// printk("ymfpci%d: %d: done: delta %d" +// " hwptr %d swptr %d distance %d count %d\n", +// codec->inst, voice->number, delta, +// dmabuf->hwptr, swptr, distance, dmabuf->count); } played = dmabuf->count; if (ypcm->running) { @@ -826,8 +826,8 @@ end >>= 1; if (w_16) end >>= 1; -/* P3 */ printk("ymf_pcm_init_voice: %d: Rate %d Format 0x%08x Delta 0x%x End 0x%x\n", - voice->number, rate, format, delta, end); +/* P3 */ // printk("ymf_pcm_init_voice: %d: Rate %d Format 0x%08x Delta 0x%x End 0x%x\n", +// voice->number, rate, format, delta, end); for (nbank = 0; nbank < 2; nbank++) { bank = &voice->bank[nbank]; bank->format = format; @@ -1710,7 +1710,7 @@ case SNDCTL_DSP_SETFRAGMENT: get_user_ret(val, (int *)arg, -EFAULT); /* P3: these frags are for Doom. Amasingly, it sets [2,2**11]. */ - /* P3 */ printk("ymfpci: ioctl SNDCTL_DSP_SETFRAGMENT 0x%x\n", val); + /* P3 */ // printk("ymfpci: ioctl SNDCTL_DSP_SETFRAGMENT 0x%x\n", val); dmabuf->ossfragshift = val & 0xffff; dmabuf->ossmaxfrags = (val >> 16) & 0xffff;
Re: Is there a Linux trademark issue with sun?
--- "Jon 'maddog' Hall, Executive Director, Linux International" <[EMAIL PROTECTED]> wrote: > [Warning: Highly controversial topic ahead. > Messenger does not want to be shot] Aw come on, it's traditional. :) > This does bring up an interesting situation. > > The Linux community keeps saying that "Linux is a > re-implementation of Unix." > > This gets X/Open all pissed off at us, because Linux Understood. Linux is NOT Unix. (Just as Gnu's Not Unix, either. :) We do go to certain lengths so as not to violate their trademark, and when we slip up we acknowledge it, back off, and clarify. This is what SUN needs to do. I think LI or somebody needs to send them a letter informing them that Linux is, in point of fact, a trademark, and that they can't throw it around like a generic term or it will go the way of "asprin". The rules for using that trademark were at least partly defined almost a year ago. The following post was picked up and duplicated at dozens of locations (check google): http://boudicca.tux.org/hypermail/linux-kernel/2000week04/0654.html If Sun's going to start calling Solaris Linux, then I think we need to have somebody official send them a letter asking them not to, or at the VERY least acknowledging the trademark. > Yet there is no real definition for "Linux". A definite point. However, one obvious definition is something that uses the Linux kernel. You can have a Linux-workalike that is not, in point of fact, Linux. (Just as you can have a generic version of acetominaphen that is not, in point of fact, Tylenol. This sort of distinction is what a trademark is FOR.) > Some people (the FSF for instance) say that Linux is > just the kernel, but > there are different kernels, with different patches. And there are many variants of "cheerios" (honey nut, frosted, etc). And there are many cheerio-like toasted oatmeal loop thingy cereals on the market. But they have to have their own name, they can't infringe on somebody else's trademark. 
> There was even a Microkernel version of Linux called > "MKLinux". Good point. But it was fundamentally a port of the Linux kernel to a new environment. It started from the Linux kernel (didn't start as a separate project), and it ended up containing huge quantities of the Linux kernel. Moreover, they had Linus's fairly explicit permission to use the name anyway, which glosses over a lot of sins. :) (Not to mention this was back before we particularly cared about trademark issues, but that's not a good legal argument, is it?) > Others say that Linux is the whole distribution, but > there are lots of > distributions, all different (Red Hat, SuSE, etc.) > There are different > placements of files in the file tree. True. But there we go back to Linus's January post, which DID cover using the name "Linux" for larger projects (like Red Hat Inc.) Intent matters. Sun's intent is clearly to take an existing system and jump on the Linux bandwagon, and confuse people as to what is Linux and what isn't. There's a lot of Linux-like systems out there, and yes most of them predate Linux in some way. Forget the proprietary stuff for a moment, look at BSD. BSD isn't Linux. Linux isn't BSD either. They're functionally equivalent in most respects, but neither project is attempting to take credit for the work of the other. Sun is free to put out a version of Linux. But to call Solaris Linux is, in my opinion, going over the edge here and diluting the trademark. > I know from conversations with Linus that he > anticipates having (perhaps) > radically different kernels on top of "BIG IRON" > machines, where the kernels > (and the distributions) come from the "BIG IRON" > makers. Sure. But they diverge from the same code base. Look at it this way: can the linux-kernel mailing list community take any credit/blame for what goes on in those "Big Iron" kernels? Yes, we can.
We're not ENTIRELY responsible for them (any more than we're responsible for the patched kernels Red Hat puts out), but they are in large part based on/derived from the work done here. Especially the work of Linus Torvalds, Top Banana of this community and personal owner of the Linux trademark. Can we or Linus take credit or blame for Solaris? No. It's not us. We didn't do any of it, we didn't contribute to it or prevent anything from being added to it, we didn't even advise its development. It was and is a totally separate project that has been attempting to converge with a more successful project, live in its shadow, and take credit from it. This is 100% what trademarks are FOR. > The licensing of the Linux trademark has basically > allowed someone to use > the term "Linux" in their own trademark, but has > done nothing to prevent > someone from comparing their accumulation of code > with "Linux", and nothing > to define what Linux actually is. Comparing with Linux, no. Saying Solaris is an implementation of Linux, yes. The fact we haven't done this YET is
Re: Is there a Linux trademark issue with sun?
--- Rik van Riel <[EMAIL PROTECTED]> wrote: > On Thu, 14 Dec 2000, Rob Landley wrote: > > >people just don't get it, do you? All Linux > > >applications run on Solaris, which is our > > >implementation of Linux. Now ask the question > again," > > I wouldn't worry about this. It's only a question > of time > before people will start to ask him why Sun isn't > shipping > the "original Linux" but has their own, strange, > version ;) Sure. But why HAVE a trademark if we don't enforce it? Grassroots support is always a wonderful thing, and educating the public is extremely important. Then again, confusing his customers is exactly what McNealy's trying to do. Enforcing the trademark would therefore serve an educational purpose, wouldn't it? :) > cheers, > > Rik Rob > Hollywood goes for world dumbination, > Trailer at 11. Projector finally fixed, film at 11. Coughing fit strikes city, phlegm at 11. Mad dog on main street, foam at 11... This counts as a genre, doesn't it? Rob ... I'm not an actor, but I play one on TV.
Re: Is there a Linux trademark issue with sun?
--- Larry McVoy <[EMAIL PROTECTED]> wrote: > Yup, that's Scooter (all the Sun old timers call him > Scooter, I dunno where > it came from, I wasn't enough of an old timer). > And, yeah, he does a lot > of marketing. But in many respects, he's the > perfect CEO. He's always > out in public, pushing the message, and he tends to > leave the day to day > stuff to the other folks. I'll take him over Gates > any day of the week. I'm not against them, and I wouldn't make too big a deal out of it. I'm just recommending that somebody official ask them politely to stop doing it. Here's how I see it: Sun feels that their core product, Solaris, is threatened by Linux. They have several options: A) Jump on board and use Linux on their hardware. B) Improve Solaris until it can compete on its own merits. C) Market Solaris better, to make people want Solaris instead of Linux. D) Confuse people into thinking that Linux and Solaris are the same thing. He's gone for D, and he's run straight into the Linux trademark doing so. If everybody wants to abolish the Linux trademark, that's fine. But if we don't defend it here, I really do think it becomes too weak to be useful in other situations. McNealy wants to leverage the growth of Linux to help his company, which is fine, but he's going about it the wrong way. What if IBM had done this sort of thing with AIX or Monterey instead of miraculously acquiring a clue? IBM hasn't, they've respected the Linux trademark very conscientiously. > Scott's only big sin was to dump SunOS for Slowaris. I still dunno WHY that happened (other than gaining threading), but I suspect that should go to email... Rob
Re: [OT] Re: Is there a Linux trademark issue with sun?
--- Dana Lacoste <[EMAIL PROTECTED]> wrote: > I don't think he did that at all : > (Devil's Advocate time :) Always a fun occupation. :) > What he did was say that, while everyone was looking > at Linux as the solution to modern computing > problems, > he didn't need to : he already has Solaris. So > Solaris > is his "Linux". The question he was responding to was why Sun hadn't put out a Linux version of some solaris-only piece of software. His answer was that Solaris was Sun's implementation of Linux. > A matter of grammar, not legal or technical terms : > he > didn't say that Solaris IS linux; he used a metaphor > : > "[Solaris] is our implementation of Linux". (I'm going to resort to a sports analogy. Brace yourself. No, I don't know which sport. Volleyball, possibly.) Yeah, it's a borderline foul. But I still think it deserves a warning from the referee. (It's over, you can relax now.) Point: I really do think somebody official should send the guy a letter asking him to be careful around the trademark. > I'm not saying he's RIGHT : I'm just saying that he > didn't intend to abuse the Linux trademark. He's Somebody asked him a question about why there was no Linux version of a piece of software, and he attacked the validity of the question by saying Solaris is Sun's implementation of Linux. My reading of it is that he didn't answer the question, instead he implied very strongly that the question was invalid, and did so by implying very strongly that Solaris -IS- Linux hence no need for a seperate Linux version. Either he's fundamentally confused, or he's intentionally trying to be confusing. The first calls for clarification, the second calls for defending the infringed trademark. I'm not sure which of the two would be "giving him the benefit of the doubt", neither's particularly flattering. > taken a mix of (B) and (C) from above, claiming that > his Solaris product can accomplish the same product > targets that Linux does. 
But that's not actually what he said, is it? > Why should Sun provide anything for Linux if they > already have Solaris providing all of the > functionality? He could have said that, true. Would have been well within his rights to say it, and a valid commercial strategy (although not necessarily a winning one). But that's not what he said. > Could I say that Wine is my Windows implementation? You could say the sky is green. The more interesting question is, are you the one putting out Wine? Winehq.com doesn't claim Wine is a Windows implementation, does it? It calls it an implementation of the Windows APIs. > Windows is a trademark, but everyone knows what I > mean, right? > Microsoft's not going to be writing me any letters, > right? Actually, I wouldn't be too surprised if they did. They have lawyers on salary just waiting for something to do. The question is, are you big/important/noticeable enough to go after? By the way, have you read the actual "about Wine" page from Wine's site? http://www.winehq.com/about.shtml Trademark acknowledgement is at the bottom (albeit in a rather vague way), and the "about Wine" section is quite clear on what Wine is and what it isn't. > (well, none that I'm going to pay attention to, > right? :) But you're not a corporation, are you? > All just rhetoric, of course. > Advocacy doesn't belong on linux-kernel :) I'm not advocating, I raised a question about the Linux trademark in the venue I thought most appropriate (don't know of a better one), and I'm following up on the replies. I've trimmed the "cc:" list on several occasions. It doesn't noticeably seem to be skewing the overall signal to noise ratio of l-k so far. :) > -- > Dana Lacoste > Linux Developer > Peregrine Systems > Rob
Re: Is there a Linux trademark issue with sun?
> I am not sure it is a big deal. If you read the > comment it was more of an off-the-cuff remark. > > I doubt anyone would testify in court that McNealy > said this. The only way it is something to worry > about is if they used it in a printed format (IANAL) Law isn't an all-or-nothing thing. Obviously this isn't worth a lawsuit. By itself it's not even close. But sending an official letter asking them to respect the trademark counts as "defending the trademark" if it's abused in the future and we DO want to get serious about it. (Neutralizes this as a precedent slimy lawyers can point to of "Linux" being undefendable as a trademark and instead being a generic term.) And if the pattern of behavior WERE to continue/get worse, having gone through the appropriate steps way back when (measured, proportionate response to earlier incidents) makes a much firmer foundation for a lawsuit later. And, because Sun's lawyers know that last point, they're fairly likely to take it seriously enough to let McNealy know that his course of action carries certain risks. (Remember that Meme Hacking talk, at the Fortune 500 it's all about reducing and managing risk.) It's a bit like saying your cat's name when you see them up on the coffee table where they're not supposed to be. It's not the same as actually punishing them, just letting them know you're aware that they're doing it. > Kevin Rob
Re: tighter compression for x86 kernels
>The UPX team owns all copyright in all of UPX and in each part of > UPX. Therefore, the UPX team may choose which license(s), and has > chosen two ... > This permits using UPX to pack a non-GPL executable. Stupid question time: isn't this what the LGPL was designed to do? The Library GPL, so people who compiled stuff with gcc and linked it with glibc wouldn't necessarily be gpl-ing their binary by doing so? (Or the leprosy GPL, or whatever Stallman's renamed it this month. The license text hasn't changed...) Rob
Re: CPRM copy protection for ATA drives
> Its probably very hard to defeat. It also in its current form means > you can throw disk defragmenting tools out. Dead, gone. Welcome to > the United Police State Of America. Doesn't anybody remember the days of "dongle keys" on the Commodore 64? Plug a special circuit into the joystick port in order to use this program? And we all remember how the pirates got around this, don't we? The easy way: crack the program. This is yet another hardware based copy protection tool like floppy disks with strategically placed holes burned into them by lasers (leaving a bad sector you can't reformat away), or cartridge-based programs that tried to overwrite their own memory address ranges. Or forcing people to look up the third word from paragraph two on page ten of the instruction manual (since the manual is, more or less, hardware.) Welcome back to the 1980's, they never learn... There's nothing new under the sun, and the "zero day warez" people never even broke stride dealing with this sort of thing. All it WILL do is annoy people who try to legitimately use the system. And, of course, make a lot more people buy SCSI if they sabotage the ATA spec this way... What are they going to do installing one of these programs on a non-compliant drive? (A modern 74 gig drive is likely to last me a while, you know.) Refuse and limit their potential installed base to only systems manufactured after 2002? Yeah, people do that kind of thing all the time (requires MMX), and the products don't last that long on the shelves, do they? Has anybody brought up the LEVELS of nested stupidity in this particular proposal to the committee? (Committee iq: average intelligence of members, divide by headcount. Nice to see that holds true.) I'm not particularly alarmed by it, though. Disappointed, yes.
But a market that refused to buy micro-channel architecture, refused to buy rambus memory, and outright laughed at Microsoft BOB, isn't likely to let this get shoved down its throat even if it DOES pass as an official spec. And another advantage Open Source has over proprietary software (we provide what the users actually WANT, if only 'cause we're the users. A GPLed program isn't likely to depend on this "feature", is it? Or the Intel CPU ID...). Rob
Re: CPRM copy protection for ATA drives
Andre Hedrick wrote: > > On Tue, 2 Jan 2001, Rob Landley wrote: > > > And we all remember how the pirates got around this, don't we? The easy > > way: crack the program. > > Nope...it is embedded to the vender portion of the media. My point was that using this kind of thing to protect applications is just as pointless as any other kind of copy protection in a debuggable and editable binary. (And if it's not debuggable and editable, it's kind of hard to make it runnable. Even if you encrypt it, it has to decrypt itself in memory to run.) If it's protecting content (DVD css again), the actual PIRATES will do what they did before decss: put their non-interfering interception layer between the program and the media so that the program unlocks the media for them and they snoop and record the data going by as part of normal operation. And if the data goes into the binary in encrypted form, the binary must be able to decrypt it. Then they convert it to whatever other form they like (mp3, mpeg 4, etc) before throwing it on irc and bragging about it to their other 14 year old friends. Before decss, the pirates hacked some software dvd player to do this, or wrote their own video/audio driver that intercepted the data that thought it was going to the display. A combination of the two (interactive game like Dragon's Lair, for example) could be cracked with a combination of the approaches. Might take a month or two, but there honestly are people with nothing better to do with their time. (A decade or so back I actually knew some of them.) So it's not even going to dent ACTUAL piracy. > You were not listening, SCSI/MMC grabbed their ankles already! Missed that part. (Mostly read the zdnet stuff, serves me right.) Read read read... Fun. So are applications expected to bare metal talk to the hardware straight from user space like the days of DOS, or are people going to hack device drivers to emulate and undermine this magic extra storage space?
(Read the read-only data, and then supply the same data for another drive? You don't HAVE to crack if your intention is simply to copy verbatim, or fake a verbatim copy anyway...) Sheesh, you could do that sort of thing with a hacked version of plex86 or something even if they DO read the bare metal. And yes, somebody WILL do that eventually if this actually ships. > > Has anybody brought up the LEVELS of nested stupidity in this particular > > proposal to the committee? (Committee iq: average intelligence of > > members, divide by headcount. Nice to see that holds true.) > > Yes, it is not part of the STANDARD because I successfully stopped it for > now until February. Oh and I sit and vote on that committee. I've since read your message on that, yes. (As I said, the individuals are smarter than the committee. :) > > users. A GPLed program isn't likely to depend on this "feature", is > > it? Or the Intel CPU ID...). > > It requires a licensed HOST/Application like a JAVA-thingy, or a > real-local one. > > If you want to kill it somebody create a GNU-CPRM and open-source it. > License it for FREE. Or, for the pirates, hack somebody else's CPRM into a pseudo-generic tool to read the data, and/or intercept an application's "here's my key, now give me the data" requests. The Register thing IS a bit vague on what this proposal actually DOES. (Okay, read-only key space initialized by manufacturer. Got it. Compliant application required to read (unlock/decrypt?) the data. Okaaay... Is the application providing a decryption key (er, not writing it into the read-only space... Ummm... "Not usually accessed by the end user", oh yeah, like that's going to be true for longer than 15 minutes once it ships...) How does it actually WORK? (Is there some kind of one-time write into one of these slots? What happens when you run out of slots? It says it makes use of the physical location of the info on the drive, but doesn't say HOW... (These slot thingies?
Sector address of the file being read?)) I've read the register's "how it works" section, and I'm not sure I'm much more informed than I started... Here's from The Register article that started this thread: > But for home users, the party's over. CRPM paves the way for > CPRM-compliant audio CDs, and the free exchange of digital > recordings will be limited to non-CPRM media. Play the thing using an "approved" player into an audio driver of my own devising which records the data, then make a normal MP3 out of it. How does the system intend to stop this? (Even if the "approved" player has to be some kind of boom box, stick the headphone jack into the sound card's "in" jack, reducing the "how many mathematicians" light bulb joke to an earlier joke.) I'm still unclear on exactly what they're trying to DO. Either the
Learn from minix: fork ramfs.
(Argh! Linus replies to my post and my cc: to the linux-kernel was to rutgers.edu. Teach me to post on three hours of sleep, it's like getting a hole-in-one with nobody around...) Linus said in Re: Patch (repost): cramfs memory corruption fix > I wonder what to do about this - the limits are obviously useful, > as would the "use swap-space as a backing store" thing be. At the > same time I'd really hate to lose the lean-mean-clean ramfs. So fork ramfs already. Copy the snapshot you like as an educational tool, call it skeletonfs.c or some such, and let the current code evolve into something more useful. Seems to me a dude named Andrew was in a similar situation a decade or so back, and decided to resist all change in the name of having a clear educational example. Patch pressure built up past the "reimplementation from scratch threshold event horizon thingy" (the Tanenbaum-Torvalds barrier), at which point the code forked under its own weight anyway. Saves a lot of bother to do it now, if you ask me. You'll wind up with a new ramfs one way or the other. People will keep writing it as long as it's not there. (The whole "why climb Mount Everest" thing, you know.) I could, of course, be totally wrong about this... Rob
[Fwd: Learn from minix: fork ramfs.] - linus's reply
He replied to my bad cc:, so forwarding this here should be okay... Linus Torvalds wrote: > > On Mon, 8 Jan 2001, Rob Landley wrote: > > > > So fork ramfs already. Copy the snapshot you like as an educational > > tool, call it skeletonfs.c or some such, and let the current code evolve > > into something more useful. > > The thing is, that I'm not sure that even the extended ramfs is really > useful except for very controlled environments (ie initrd-type things > where the contents of the ramdisk is _controlled_, and as such the > addition of limits is not necessarily all that useful a feature). Others > have spoken up on why tmpfs isn't a good thing either, with good > arguments. I've got a use for it. Diskless render nodes. At my day job we do (among other things) rackmounted render farms, and we're thinking about adding diskless dual procs to our line in our Copious Free Time (tm). The plan right now is to throw at least half a gig of ram in each machine (quite possibly more, Maya and other renderers positively suck the stuff down even WITH local disks), boot the sucker up via Intel's PXE (dhcp/bootp/mtftp apparently has a new name since last I saw it, right...), and use a combination of NFS and ramfs for the root filesystem. Ramfs is WAY better than the ram block device 'cause of the auto-scaling. If the client wants to use all the memory for the rendering app, ramfs is only really dealing with logs and such. If they want to copy their dataset over and belch out frame after frame from the local copy to cut down on server bandwidth (which can EASILY get to be a bottleneck here), they can do that too. Best of all, they don't have to come back to US to get any kind of configuration changes, this is all a matter of what their apps choose to do.
And when they delete stuff they get the ram back, which is nice when things like Maya can spit out verbose logs you have to parse to see if something went wrong, or when you have to run some odd tool to convert an entire file from one data format to another to parse out some info, which results in an enormous file existing for only a second or two. Getting that ram back is very, very nice. Tying up half the box's ram with a block device when it's only briefly used is not. > I think the ramfs limit code has a good argument from Alan for embedded > devices, so that probably will make it in. However, even so it's obviously I'd like to second that. The oom killer is better than nothing, but people are way more used to comprehending/diagnosing a full file system rather than their apps crashing. If I can tell it not to let the disk grow big enough to hose the system, this is a Good Thing. :) (We're selling this stuff to art majors. They have an IT department, but I strongly suspect they need all the help they can get.) > not a 2.4.1 issue, AND as shown by the fact that apparently the thing is > buggy and still worked on I wouldn't want the patches right now in the > first place. You mean other than the free-when-delete thing in 2.4.0? Take your time. Enjoy life. Happy birthday. Pet the cat. Pack for LWE. Bask in the adoration of the world at large for surviving yet another development cycle. Teach your daughters to program (the world needs more geekettes. Where Illiad and Eric Raymond find them I'll never know). No rush. I can grab an AC patch to get limits for testing and configuring my config. I'm just not sure I could convince management to ship one, or our customers to accept it. But that's my problem, isn't it? :) (Come to think of it, I could probably just use user-level filesystem quotas in this case. All the apps the node runs should be under one user, and the log files root writes out are pretty much a rounding error... Hmmm... 
Not exactly an elegant solution, and I can only hope the multi-megabyte files that user owns on NFS don't throw off the count. (Cue pooh: Think think think, think-think...))

> Linus

Rob

P.S. Linus responded to my email! Wow! Did it himself and everything. I feel like I've been initiated into something, but I don't know what...
[Fwd: Learn from minix: fork ramfs.] - Linus's ACTUAL reply.
Okay, the sleep situation has not improved. I'll admit that right now. But it's ABOUT to. G'night... Rob On Mon, 8 Jan 2001, Rob Landley wrote: > > So fork ramfs already. Copy the snapshot you like as an educational > tool, call it skeletonfs.c or some such, and let the current code evolve > into something more useful. The thing is, that I'm not sure that even the extended ramfs is really useful except for very controlled environments (ie initrd-type things where the contents of the ramdisk is _controlled_, and as such the addition of limits is not necessarily all that useful a feature). Others have spoken up on why tmpfs isn't a good thing either, with good arguments. So it's not all about teaching. I think the ramfs limit code has a good argument from Alan for embedded devices, so that probably will make it in. However, even so it's obviously not a 2.4.1 issue, AND as shown by the fact that apparently the thing is buggy and still worked on I wouldn't want the patches right now in the first place. Linus
255.255.255.255 won't broadcast to multiple NICs
Under 2.2.16, broadcast packets addressed to 255.255.255.255 do not go out to all interfaces in a machine with multiple network cards. They're getting routed out the default gateway's interface instead. If I ifconfig eth1 down (which has the gateway behind it), I start getting "no route to host", even though the other subnet's still up and the default gateway's cleaned out of the routing tables. Under no circumstances can I get the broadcast packet to go out more than one interface (I hate to say "like it does under windows" but in this case, yes). The packets aren't actually getting sent to the gateway, they're just getting sent out the gateway's interface. They're still broadcast packets. I.E. in a machine with only one NIC, broadcasting 255.255.255.255 works fine. Is there something I can echo into /proc somewhere to make this work, or some magic combination of ifconfig and route that will tell it to actually broadcast out more than one interface? Should I mess around with the ethernet bridging code? I don't know if any of these will work. The problem seems to be conceptual: when one packet goes into the stack, only one packet comes out. Global broadcast means with multiple NICS, multiple packets should come out (one per NIC), and apparently there's no support for that. Ummm... Help? (I have config info and test code if you can't reproduce this. I, unfortunately, have spent the entire afternoon trying NOT to reproduce this. Sigh...) Rob __ Do You Yahoo!? >From homework help to love advice, Yahoo! Experts has your answer. http://experts.yahoo.com/ - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] Please read the FAQ at http://www.tux.org/lkml/
Re: 255.255.255.255 won't broadcast to multiple NICs
--- Jeff Garzik <[EMAIL PROTECTED]> wrote:
> Rob Landley wrote:
> > Under 2.2.16, broadcast packets addressed to 255.255.255.255 do not go out to all interfaces in a machine with multiple network cards. They're getting routed out the default gateway's interface instead.
> Are the network cards on the same network?

Two subnets (both martians: 10.blah and 192.168.blah). Gateway's off of 10.blah (beyond which lives the internet), the 192 thing is the small cluster I'm putting together in my office to test the software. I take it this makes a difference? If there's some kind of "don't do that" here, I might be happy just documenting it. (In theory, I could iterate through the NICs and send out a broadcast packet to each interface's broadcast address (although for reasons that are a bit complicated to go into right now unless you really want to know, that's not easy to do in this case).) But that's just a workaround to cover up the fact that the IP stack isn't doing the obvious with global broadcasts.

So the question is, is the stack's behavior right? If not, what's involved in fixing it, and if so, is it documented anywhere? (I checked google rather a lot before coming here, and linux/Documentation, and even glanced at the route.c source code. ip_route_output_slow has several explicit checks for "0xFFFFFFFF" which are easily searched for, but the upshot is that the packet gets mapped to a single interface anyway. Around line 1641 of my sources there's an #ifdef CONFIG_IP_MROUTE that looked very interesting, but it turns out only to be for multicast addresses and I don't know if IN_DEV_FORWARD is forking the packet or not...) At which point I came here. :)

Rob
Re: 255.255.255.255 won't broadcast to multiple NICs
--- "Richard B. Johnson" <[EMAIL PROTECTED]> wrote: > Using an IP packet of 255.255.255.255 doesn't mean > it's a broadcast > packet. It is going to your default gateway because > it is outside > your netmask, which guarantees that it is not a > broadcast. 1) No, it's still a broadcast packet when it goes out. That's the behavior of the current code. The packet isn't redirected to the gateway, it retains its 255.255.255.255 address when heading out the other interface. (Other computers on the subnet connected to that interface see it, even though their address isn't 255.255.255.255 and it doesn't match the address or broadcast address of any of their interfaces.) I just re-confirmed this. On 192.168.0.3 I moved the default gateway to 192.168.0.99 (a non-existent machine, but told it to still go out eth0), ran the 255.255.255.255 broadcaster on .3, ran a listener on 192.168.0.1, and the listener heard the packets from the broadcaster (and confirmed their source address, 192.168.0.3). The broadcast address on both interfaces (.3 and .1) is 192.168.0.255, with netmask 255.255.255.0. So once again: when sending to 255.255.255.255, Broadcast packets are spit out the gateway's interface (AS broadcast packets), but not out the other interface(s). The behavior I expected is the broadcast packets getting sent out ALL the interfaces this machine had. (Most gateways won't FORWARD broadcast packets, which is why this doesn't flood the whole internet. This is also WHY gateways have to go out of their way not to forward broadcast packets, because there IS a way of specifying a broadcast address larger than a single subnet.) 2) Windows does it. (That's no defense of the practice, but it is at least circumstantial evidence that the fact linux is at least partially supporting it is not just some strange accident.) 3) The support that's in there now has explicit code implementing it. If it has no special meaning, why does linux/net/ipv4/route.c treat "0x" specially? Grep for it. 
Sample code snippet (from route.c):

  if (key.dst == 0xFFFFFFFF)
          res.type = RTN_BROADCAST;

There's a half-dozen or more of those in there (2.2.16 on this box, I could check 2.2.18pre if you like but I don't expect a difference)...

> To use a broadcast of 255.255.255.255, your netmask would have to be 0.0.0.0, which would guarantee that you have no default route.

I'm not making this up. There are a lot of precedents of it being used. Look at cisco: http://www.ieng.com/univercd/cc/td/doc/product/software/ios113ed/113ed_cr/np1_r/1ripadr.htm#xtocid587510 And of course bootp/dhcp use it for their queries when they don't know WHO they are: http://ctdp.tripod.com/os/linux/usersguide/linux_ugdhcp.html

I always thought it was the global broadcast address, propagated to all NICs. That's what it USED to be, but I've been using it so long (and so intermittently) I don't remember what documentation that originally came from. Yes, it could be a bad habit and maybe I should stop. But if so, I'd like to see where that's in writing. (Documented somewhere. Anywhere. Alan Cox saying "this is so" is plenty authoritative enough for me. Documentation counts as resolving the issue...)

> As an example, we 'own' a network from:
>
> 204.178.40.1 to 204.178.47.255.
>
> This means that the network address is 204.178.40.0, the netmask is 255.255.248.0, and the broadcast address is 204.178.47.255
>
> Anything outside the LAN, which means anything that won't 'fit' inside the netmask goes out the default route. That's how it works.

I know how that works. But the fact remains that machines that aren't 255.255.255.255 know to receive that address. Both Linux machines and windows machines. The switching hub propagates it and the stack receives it.
The behavior it has right now DOES come very close to what I expect it to do, and way in the past (on a mixed network of OS/2, windows, and macintoshes, I suspect, don't remember who was server and who was client, it was in java) using 255.255.255.255 did exactly what I wanted it to do. I wasn't network administrator on those boxes though, so I dunno if this was an OS default or the way the NICs were configured. (Come to think of it, I don't remember if any of the boxes I was using HAD more than one NIC, it was a while ago. They probably didn't.) It's just not doing it here. I'm wondering if not doing it here is documented and something the network guys intend to keep, or if the half-support in there now is going to be extended to full support. > When you set a broadcast address, you are simply > plugging in > an IP, within your network, that is not otherwise in > use. That's the subnet broadcast, sure. But I remember something about global broadcast addresses from way back in college. Maybe I imagined it, but I know I got tested on it. > This means that when a HARDWARE address of all bits > set is > being sent out your Ethernet
Re: 255.255.255.255 won't broadcast to multiple NICs
--- Philippe Troin <[EMAIL PROTECTED]> wrote:
> Rob Landley <[EMAIL PROTECTED]> writes:
> > So the question is, is the stack's behavior right? If not, what's involved in fixing it, and if so, is it documented anywhere?
>
> I think historically, BSD stacks were routing 255.255.255.255 to the "primary interface" (whatever that means).

Yeah, that maps with what I've seen. Apparently, the stack is strictly "one packet in, one packet out", and only concerns itself with where to send that packet. So when a packet can go out more than one interface (as in 255.255.255.255), the broadcast nature of it doesn't actually cause the packet to fork in the stack, it's treated like a load balancing situation instead where one interface is selected and the one packet is delivered. (The multiple delivery aspect is left solely to the ethernet layer. Despite being a broadcast packet, the IP layer doesn't replicate the broadcast on an interface level.)

I personally did not expect this behavior, I expected the packet to go out all the interfaces. And I still think the behavior I expected makes more sense. However, if the behavior that's in the kernel now is documented and what IP stacks the world over are expected to do, I can certainly live with it. :) It's entirely possible that the other platforms I worked on before never had more than one NIC in a box I actually used. The fact I wasn't surprised by the behavior I saw elsewhere could simply be because I didn't try stuff on a box where the behavior would be surprising.

That said, I still think 255.255.255.255 "should" go out all interfaces (since gateways don't forward broadcast packets anyway, it's not going to flood the network or anything). I suspect the "it works, leave it alone" principle will apply here, but it'd be nice to attract the attention of a networking guru like Alan Cox, David S. Miller, or Donald Becker for a few seconds to at least get shot down on this issue decisively.
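A small C illustration of the single-route behavior described above: a sendto() to the limited broadcast address needs SO_BROADCAST switched on first, and the kernel then emits exactly one datagram out whichever interface its route lookup selects. The function name and port 9876 are made up for the sketch:

```c
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Send one UDP datagram to 255.255.255.255; the kernel routes it out
 * a single interface, it is not duplicated per NIC. */
int send_limited_broadcast(const char *msg)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    struct sockaddr_in dst;

    if (s < 0)
        return -1;
    /* without this, sendto() to 255.255.255.255 fails with EACCES */
    if (setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on)) < 0) {
        close(s);
        return -1;
    }
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9876);
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
    close(s);
    return 0;
}
```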
:)

> All the code I've encountered which actually needed to perform broadcast on all interfaces was sending subnet-directed broadcasts by hand on all interfaces.

Unfortunately, I literally can't do that in this instance. (Yes, I've tried.) The client and server I'm writing are in Java. There are three problems here.

1) It's for a heterogeneous render farm. Linux boxes, NT/2000 boxes, and SGI boxes at least. Maybe someday macintoshes too, and who knows what else. (That's why I'm writing it in Java in the first place.)

2) I'm trying to have it configure itself as much as possible automatically. A lot of the people using render farms went to art college and liked it. That's why the broadcast packets (so the clients can find their server and vice versa when everybody boots up, without being told). I don't want them to have to tell it anything, I want them to throw it on the machine and run it (quite possibly running all the nodes from a shared mount) and just have it work. I've managed to avoid any node specific configuration so far, and that makes things MUCH easier for my expected user base.

3) Java sucks in many ways. Today's way is that it never occurred to Sun that a machine might have more than one IP address assigned to it, so InetAddress.getLocalHost() returns exactly one address. Unfortunately, just about EVERY machine has two interfaces defined, the other one being loopback on 127.0.0.1, and naturally the loopback is the one that getLocalHost() returns. (Since it's the one that we pretty much already know the address of anyway, and querying it is therefore useless, that's the one it queries. Thank you Sun.) There is no way to query the current machine's interfaces without resorting to native code. Bind to a socket to a local port and query that address you say? Nope, too easy.
The address returned when I query a socket (rather than a connection) is 0.0.0.0 on any machine with multiple interfaces (even loopback), since the socket is bound to that port on ALL the interfaces. Each incoming or outgoing connection does have a valid "from" IP address, but I have to wait for a connection to come in to get that. (Unless I explicitly specify which IP to bind to when I create the socket, but if I knew that I'd already be there.) Nope, making my own connection to a port on the same machine just means 127.0.0.1 is talking to 127.0.0.1. Tried it. Didn't work. Nope, feeding the loopback address to getAllByName() doesn't help either. I tried that too, it just returns a length 1 array containing just the loopback address. Now you know why I'm resorting to 255.255.255.255. I'm sort of faking things: when the server broadcasts to clients they know who it is, and when they broadcast to it, it knows who THEY are (it says in the UDP datagram header info). And the way I've written it, that's all they really need to know (
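The 0.0.0.0 behavior described above isn't specific to Java; it's what getsockname() reports for any socket bound to the wildcard address. A C sketch (the function name is mine) reproducing it:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Bind a listening socket to INADDR_ANY and ask what address it is
 * bound to: the answer is the wildcard, not any real interface. */
unsigned int wildcard_demo(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;
    socklen_t len = sizeof(sin);

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = 0;                 /* let the kernel pick a port */
    bind(s, (struct sockaddr *)&sin, sizeof(sin));
    getsockname(s, (struct sockaddr *)&sin, &len);
    printf("bound to %s\n", inet_ntoa(sin.sin_addr));  /* 0.0.0.0 */
    close(s);
    return sin.sin_addr.s_addr;       /* stays INADDR_ANY until connected */
}
```

Only once a connection exists (or the socket is bound to one specific address up front) does a real local address appear, which is exactly the catch-22 described above.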
Re: 255.255.255.255 won't broadcast to multiple NICs
--- Paul Flinders <[EMAIL PROTECTED]> wrote:
> Rob Landley <[EMAIL PROTECTED]> writes:
> > 3) Java sucks in many ways. Today's way is that ... There is no way to query the current machine's interfaces without resorting to native code.
>
> I faced this problem a while ago - in the end I cheated and put this bit of code in a shell script used to start the application

I've considered it. Counts as "native code", but thanks for the script anyway. :) For my current app, I've pretty much decided that for the boxes where broadcasting 255.255.255.255 doesn't cut it, they have to supply the address on the command line. Maybe I'll have the script supply it on the command line for them since I'll probably offer an RPM or linux-specific tar as an install option. I need to potentially install the JRE for them (assuming licensing issues work out ok there, which I'm 99% certain is fine but want to double check), and should have a shell script to encapsulate the "jre -cp myjar.jar runthisclass" part anyway into "runclient" or "runserver". (Possibly starting from the init scripts, or with a nice Gnome icon. Depends how industrious I feel when I'm done, and/or what my boss wants. :)

The larger question of "should the Linux kernel's IP stack behavior be fixed, documented, or left alone" is what I'm interested in now. If people agree that 255.255.255.255 should go out to multiple interfaces, I'd be willing to try my hand at a patch to route.c (be afraid, be very afraid), but I'm still waiting to hear from on high (higher than me anyway) about whether or not the current behavior is something they're happy with. (My app will NOT require a custom kernel to function properly, that's not an option. :)

> including ${NET_ADDRESSES} in the java command line sets up a set of defines, one per interface.
> For example
>
> -Dethaddr.172.16.1.1=00:00:0A:BC:CD:78
> -Dnetmask.172.16.1.1=255.255.0.0
>
> which you can use via System.getProperty() and System.getProperties()

If I go with a script I'll just have it spit the IP broadcast addresses one after the other to stdout, and then call it from the command line with back quotes as some variant of:

  ./myprog broadcast `./findbroadcasts`

Encased in the platform-specific launch shell script, of course. :) Why on earth would my app need the ethernet address? If the stack didn't abstract that away, there would be a much bigger problem than global broadcasts not really being global...

Rob
Re: 255.255.255.255 won't broadcast to multiple NICs
--- Philippe Troin <[EMAIL PROTECTED]> wrote:
> Rob Landley <[EMAIL PROTECTED]> writes:
> The source IP address (as returned by getsockname()) is only set when the socket is connected... It follows the same logic: for a multihomed machine, we know which interface will be used only when we know who we'll be talking to...

I know, it does make sense. I'm not really blaming Java for that one. It's just yet another thing I tried that didn't work. The lack of a way to query what the current addresses are directly is the only reason I tried that as a potential workaround. (If it's going to return 0 unless you told it what the address was in the first place, why do they have a query address method on a server socket anyway? Does it serve a purpose? Right...)

This isn't the most egregious oversight in java, by the way. The list is pretty long, but my vote goes for being unable to truncate a file before Java 1.2. (There was no API for it. Couldn't be done. Period. getLength() was there, but setLength() wasn't. And I had to email them more than once before they'd admit it was a bug...)

> You could use SIOCGIFCONF (from C) to get the address list.

See "avoidance of native code" under the heading "configuration details I don't want them to have to deal with". When I make platform specific installers (which there will be for both Linux and NT/2000), I'll throw a shell around it and feed info in on the command line. But a better solution is not having to do it at all, which is the case for machines with one NIC.

> I'm not sure if java has the equivalent...

It doesn't.

> Or maybe a very small native method...

Querying the system when it first runs and passing the data on the command line, maybe. Running platform specific native code during execution, no.

> > > Broadcast is ugly anyways, why don't you use multicast ?
Having looked into this question a bit more, I can now answer: Because broadcast is, in this case, a more elegant solution, which requires less configuration (even NOT working the way I expected), and (as far as I can tell) is actually more efficient (in this case) in terms of utilizing network resources, and has no known scalability problems as used here, either. (Especially for the server, which is the most likely bottleneck.) The most common case here is that the server broadcasts to all clients (either during boot or after a period of inactivity). In either case, when this happens all clients are interested in hearing what the server has to say. The only time where this isn't the case is when an individual client is rebooted and has to look for its server. This case is relatively rare (not a scalability issue from a network traffic standpoint, anyway, even Google only reboots around a hundred boxen a day). Only the server would respond to the traffic here, so there's no broadcast storm. (The initial boot case is closer to that, but the response is via TCP, not UDP.) And I fail to see how multicast improves the client reboot case. If multicast ISN'T, underneath it all, doing a broadcast at the MAC level, then how is it supposed to find the server if it doesn't know where it is? So what have we gained? And in any case, as long as the broadcast mechanism's already been implemented for the first paragraph, implementing a gratuitous second mechanism for a fairly rare case is a gratuitous source of complexity and potential bugs for no apparent reason. > Sounds like a good job for multicast... It's fairly > simple to use, > but: I still don't understand why you consider broadcast a bad solution here. (Other than aesthetic reasons.) If multicast requires configuration and the whole point of the broadcast packets in the first place was to AVOID configuration... I missed something. 
> 1) I'm not sure if java gives you access to the required ioctls (there's only five of them).

It does (or at least claims to), but you still haven't explained WHY multicast is a more elegant solution in this case than broadcast. Broadcast is actually the same mechanism as sending targeted UDP packets to hosts to wake them up, except the list I iterate through is one address long. Multicast is a completely different mechanism.

You want to explain how multicast works to me down at the MAC layer? Is the server sending a small number of broadcast MAC packets, or a large number of individually MAC addressed packets to each host that has registered itself as interested in these broadcasts? If the former, what's really changed? If the latter, how did they register themselves as interested in the first place?

How about compatibility? Will it work on the various flavors of windows box? Will it work on an Irix box? Would it conceivably work on a power Mac? I know broadcast is at least theoretically present in every networking stack there is, whereas
Original destination of transparent proxied connections?
Help. I thought transparent proxying would allow some means for the recipient of the proxied connections to find out what their original destination port and socket address were. This does not seem to be the case. The socket structure only has one address and one socket, and those have the source address, not the destination address.

How do I forward connections to a given address range to a user space program that then has the opportunity to bidirectionally munge the data in them and forward them on? Transparent proxying works just fine assuming I only ever want to forward a single port to just one other machine... IPCHAINS isn't up to it. Before I go and upgrade to the 2.4 kernel on production systems that ship Real Soon Now, could somebody give me at least an opinion on whether or not iptables and the 2.4 nat stuff can do this kind of thing without me having to modify the kernel to fill out a larger socket-oid structure? (Is 2.4 iptables documented anywhere yet?)

I've got everything else. If I could just get a destination address and port out of transparently proxied connections I'd be home free. I'm amazed this data isn't there already, I must have missed something stupid. How do sockets bound to multiple interfaces figure out which interface the connection came from?

Rob
Re: Original destination of transparent proxied connections?
Yeah, I found it. While researching replacing the 2.2 kernel with 2.4 to get my proxy-oid to work, I stumbled across the following section in the unofficial NAT-HOWTO (which is not on linuxdoc's website as far as I can tell). At this address: http://netfilter.kernelnotes.org/unreliable-guides/NAT-HOWTO/NAT-HOWTO.linuxdoc-4.html Under section four ("quick translation from 2.0 and 2.2 kernels"), under the heading "Hackers may also notice:", item two in the list:

> The (undocumented) `getsockname' hack, which transparent proxy programs could use to find out the real destinations of connections no longer works.

Ah! A clue! But no idea how to make it work under 2.4, and no mention of what replaces it! (I read the rest of the howto carefully. Never mentioned this topic again.) But there IS a way to get it to work under 2.2, if I can learn an undocumented (but functional) hack. So I jump to the contents page to see who the HOWTO maintainer is to ask rather pointed questions. His email address isn't listed, but I do find out that the netfilter mailing list is at [EMAIL PROTECTED] http://list.samba.org turns out to have a page of hosted lists, with a link that eventually leads to an archive, which is not easily searchable except by date. Fun.

This brings us to google, which can find anything if you just know what to ask for. I search for "lists.samba.org netfilter getsockname". The first hit is just that silly howto again, but the second hit: http://lists.samba.org/pipermail/netfilter/2000-September/005317.html An explanation, complete with example code. From september of last year. And there was much rejoicing.

If I were to perhaps send linuxdoc.org a check or something, might a day come to pass when learning to do seemingly obvious things under linux does NOT require fairly good forensic investigation skills? I ask merely for information. I need to get more caffeine now. I'm going to be up REALLY late coding. :)

Rob
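The answer found above boils down to one getsockopt() call: under 2.4 netfilter, the pre-REDIRECT destination of a transparently proxied connection is fetched from the connection tracking table with SO_ORIGINAL_DST. A hedged sketch (the wrapper function name is mine; the constant comes from the kernel headers):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/netfilter_ipv4.h>   /* defines SO_ORIGINAL_DST */

/* On an accepted, REDIRECTed TCP connection, fill in the address and
 * port the client originally tried to reach. Under 2.2 the same
 * information (minus the port) came out of getsockname(); under 2.4
 * the conntrack entry is queried explicitly. */
int original_dst(int connfd, struct sockaddr_in *dst)
{
    socklen_t len = sizeof(*dst);

    return getsockopt(connfd, SOL_IP, SO_ORIGINAL_DST, dst, &len);
}
```

On success, dst->sin_addr and dst->sin_port hold the destination the client asked for, in network byte order. On a socket that was never redirected there is no conntrack NAT entry to consult, so the call fails.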
SIS 5513/IBM Deskstar HDIO_SET_DMA Operation not permitted?
2.2 allowed me to set DMA on an SIS 5513 using an IBM Deskstar 40 gig IDE. 2.4 goes "Operation not permitted" when I try it. Why? I hit it with ide0=ata66 in lilo, and it sped up from 3 megs/sec to 5 megs/second, but I used to get 12. hdparm /dev/hda still says I'm not using DMA. I realise I'm doing dangerous stuff here to get the performance back. I'm just curious why it's not doing it for me. (Is there a known problem I should be worried about?) Let's see, ide0=dma,ata66... right? Should I be really really worried about my data integrity if I do this? It never had a problem under 2.2... Rob
Re: Original destination of transparent proxied connections?
--- Rusty Russell <[EMAIL PROTECTED]> wrote:
> Summary: you had to use a *search engine* to find an obscure piece of coding information.

Actually, I had to use a search engine to find a tangentially related howto that halfway through mentioned something in passing which gave me a clue of something else to search for that, it turns out, didn't work anyway. (getsockname() in 2.2 returns the original destination ip, but not the original destination port. I had to move to 2.4/netfilter/getsockopt to get that piece of information.) And the reason I didn't ask on the netfilter list is I was originally trying to use 2.2 ipchains, not 2.4 iptables. Didn't think the old stuff was on-topic there.

> Shocked!
> Rusty.

It still requires pretty good forensic investigation skills to make it work...

> Premature optmztion is rt of all evl. --DK

Wouldn't that be "Premtur"? :)

Rob
Repeatable hang in 2.4.3 with 4 ide drives.
I'm trying to make 3 copies of a 40 gig IBM deskstar IDE drive. I've got red hat 7 booted into single user mode, doing the following: cat /dev/hda | count | tee /dev/hdb | tee /dev/hdc > /dev/hdd The copy seems to work fine if I never let the console blank. I copied 2 gigs worth of data (at about 1.5 megabytes/second) by hitting the shift key every few minutes. It never even paused. It also hasn't been having problems copying just hda to hdb. It's adding hdc and hdd into the mix that triggers the hang. I suspect the two IDE controllers are fighting somehow. Unblanking the screen locks it solid semi-reliably, 3 of the 4 times I've let it do that. (Perhaps console blanking/unblanking causes a latency spike? The console unblanks but the red hard drive writing light goes off instantly and the progress display "count" gives you stops moving. The kernel's still there, num-lock changes the keyboard light, but I can't ctrl-c out of the process.) The one time it continued writing after unblanking it hung again a minute or so later, unblocked itself after thirty seconds or so, and then hung again and stayed that way for five minutes until I turned it off. It was not happy. "count" is a ten-line program I wrote to read data from stdin and write it back to stdout with a progress indicator going to stderr. (fprintf(stderr,"%lld bytes\r",bytecount); In a loop. Woo.)) The chipset is sis 5513. The 4 IBM deskstars are ata66 drive with an ata66 cable but the 2.4.3 unconditionally refuses to allow me to switch on DMA on any of them. (hdparm -d1 /dev/hda goes operation not permitted.) I left it off for this test cause I just want to get it to work at the moment. I have tried it with -c1 and without -c1, it hangs either way. Rob (Yes, I'm copying the mounted drive's block device out from under it. It's a bit impolite to the system, but I've done this lots of times. That's why I boot into single user mode and sync beforehand. 
Yes, the new drive fscks on the way back up, but since nothing's actually changing the data it's fine. This is a separate issue, even if I was copying hosed data it still shouldn't be hanging.)

Update: back to the ide0 only master-to-slave copy. "cat /dev/hda | count > /dev/hdb". It's done 3 gigs so far with no complaints, and I let it blank and unblank twice now and it didn't even pause. Also, the single controller copy is going significantly faster (about twice as fast) even though the two drives still share the same cable. Are the two ide controllers sharing some kind of lock? (The data SHOULD be able to go in parallel, but it doesn't look like it was... I thought the google guys had a patch for that back at ALS...?)
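For reference, the whole of the "count" progress filter described above fits in a function like this (the function name, buffer size, and fd-based interface are invented for the sketch; the fprintf to stderr is straight from the message):

```c
#include <stdio.h>
#include <unistd.h>

/* Copy infd to outfd, printing a running byte count on stderr.
 * Returns total bytes copied, or -1 on error. */
long long copy_count(int infd, int outfd)
{
    char buf[65536];
    long long total = 0;
    ssize_t n;

    while ((n = read(infd, buf, sizeof(buf))) > 0) {
        ssize_t done = 0;
        while (done < n) {                  /* handle short writes */
            ssize_t w = write(outfd, buf + done, n - done);
            if (w < 0)
                return -1;
            done += w;
        }
        total += n;
        fprintf(stderr, "%lld bytes\r", total);
    }
    return n < 0 ? -1 : total;
}
```

Wrapped in a main() that passes fds 0 and 1, it drops into a pipeline exactly as used above: cat /dev/hda | count > /dev/hdb.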
How do I make a circular pipe?
How do I do the following:

  .------------------------------------.
  '--> pppd notty | pppoe -I eth1  ----'

I.e., connect the stdout of a process (or chain thereof) to its own stdin? So I wrote a program to do it, along the lines of:

  sixty-nine /bin/sh -c "pppd notty | pppoe -I eth1"

With an executable approximately along the lines of (warning: pseudo-code, the other machine isn't hooked up to the internet at the moment for obvious reasons):

  int main(int argc, char *argv[], char *envp[])
  {
      int fd[2];

      pipe(fd);
      dup2(fd[0],0);
      dup2(fd[0],1);
      execve(argv[1],argv+1,envp);
      fprintf(stderr,"Bad.\n");
      exit(1);
  }

And it didn't work. I made a little test program that writes to stdout, reads from stdin, and reports to stderr, and it gets nothing. Apparently, the pipe fd's evaporate when the process does an execve. What do I do?

(If anybody else knows an easier way to get pppoe working, that would be helpful too.)

Rob

(P.S. WHY does pppd want to talk to a tty by default instead of stdin and stdout? Were the people who wrote it at all familiar with the unix philosophy? Just curious...)
Re: How do I make a circular pipe?
> > Apparently, the pipe
> > fd's evaporate when the process does an execve.
>
> Check out:
>
> #include <unistd.h>
> #include <fcntl.h>
>
> /* ... */
>
> fcntl (fd, F_SETFD, (long) FD_CLOEXEC);
>
> to set/reset the close on exec bit.

Cool. That's EXACTLY what I was looking for. Thanks.

Thanks also to the people who pointed out pppd's "pty" option (yes, I read the docs on that, but they're a bit cryptic). And yes, my pseudo-code example used fd[0] twice. Thanks. You can stop emailing me now. :)

> You might want to check out something like Stevens
> advanced UNIX programming, though it is probably
> somewhat dated :-)

I've got about fifteen books with names like that, but strangely, in real world situations I keep trying to use the man pages instead. Sad, I know...

> At a guess I would say that the reason is you don't
> have as much control with pipes as you do with
> devices. Under the standard termios, you can tell
> the system to not return from the read until either n
> characters have been read, or a given character such
> as a newline has been read. You can also switch to
> alternative line disciplines that are more targeted
> to a given application such as PPP, etc.

Hmmm. And the reason these cool toys aren't available as some kind of wrapper around a normal read-from-fd is...? (Performance?)

> You probably want to check out pseudo tty's (pty's),
> which allow you to create your own terminal.

This occurred to me in the car on the way home after five hours messing with this. (Of course. :)

> Here is the glibc documentation,

Thanks. Info. Never thought to check info. Here I am checking linuxdoc's howtos, man pages, and google... Sigh... I don't suppose there's an info2html tool anywhere? Well, what do you know, there is. (I LIKE Google.)

http://www.cs.dartmouth.edu/~jonh/info2html/

Rob
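For the archives, here is a sketch of the wrapper with the fd[0]/fd[1] mixup fixed: stdout needs the pipe's *write* end, fd[1]. (Pipe fds are not close-on-exec by default, so they do survive execve once the ends are duped correctly.) The make_loop() name and the target-fd parameters are mine, added so the plumbing is separate from the exec:

```c
#include <unistd.h>

/* Create a pipe whose read end becomes fd "in" and whose write end
 * becomes fd "out", so anything written to "out" can be read back
 * from "in": a circular pipe. Returns 0 on success, -1 on error. */
int make_loop(int in, int out)
{
    int fd[2];

    if (pipe(fd) < 0) return -1;
    if (dup2(fd[0], in) < 0 || dup2(fd[1], out) < 0) return -1;

    /* Drop the original descriptors unless they already ARE the targets */
    if (fd[0] != in) close(fd[0]);
    if (fd[1] != out) close(fd[1]);
    return 0;
}
```

The original program would then be make_loop(0, 1) followed by execve(argv[1], argv+1, envp), and the child chain reads its own output.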
Re: [patch 6/8] uml: fix hostfs special perm handling [for 2.6.12]
On Wednesday 30 March 2005 12:34 pm, [EMAIL PROTECTED] wrote:
> From: Paolo 'Blaisorblade' Giarrusso <[EMAIL PROTECTED]>
> CC: Rob Landley <[EMAIL PROTECTED]>
>
> When opening device nodes on hostfs, it does not make sense to call
> access(), since we are not going to open the file on the host.
>
> If the device node is owned by root, the root user in UML should succeed in
> opening it, even if UML won't be able to open the file.
>
> As reported by Rob Landley, UML currently does not follow this, so here's
> an (untested) fix.
>
> Signed-off-by: Paolo 'Blaisorblade' Giarrusso <[EMAIL PROTECTED]>

Not untested, it Worked For Me (tm).

Signed-off-by: Rob Landley <[EMAIL PROTECTED]>

> ---
>
>  linux-2.6.11-paolo/fs/hostfs/hostfs_kern.c |  20 +---
>  1 files changed, 13 insertions(+), 7 deletions(-)
>
> diff -puN fs/hostfs/hostfs_kern.c~uml-fix-hostfs-special-perm-handling fs/hostfs/hostfs_kern.c
> --- linux-2.6.11/fs/hostfs/hostfs_kern.c~uml-fix-hostfs-special-perm-handling	2005-03-22 20:10:07.0 +0100
> +++ linux-2.6.11-paolo/fs/hostfs/hostfs_kern.c	2005-03-22 20:12:45.0 +0100
> @@ -806,15 +806,21 @@ int hostfs_permission(struct inode *ino,
>  	char *name;
>  	int r = 0, w = 0, x = 0, err;
>
> -	if(desired & MAY_READ) r = 1;
> -	if(desired & MAY_WRITE) w = 1;
> -	if(desired & MAY_EXEC) x = 1;
> +	if (desired & MAY_READ) r = 1;
> +	if (desired & MAY_WRITE) w = 1;
> +	if (desired & MAY_EXEC) x = 1;
>  	name = inode_name(ino, 0);
> -	if(name == NULL) return(-ENOMEM);
> -	err = access_file(name, r, w, x);
> +	if (name == NULL) return(-ENOMEM);
> +
> +	if (S_ISCHR(ino->i_mode) || S_ISBLK(ino->i_mode) ||
> +	    S_ISFIFO(ino->i_mode) || S_ISSOCK(ino->i_mode))
> +		err = 0;
> +	else
> +		err = access_file(name, r, w, x);
>  	kfree(name);
> -	if(!err) err = generic_permission(ino, desired, NULL);
> -	return(err);
> +	if(!err)
> +		err = generic_permission(ino, desired, NULL);
> +	return err;
>  }
>
>  int hostfs_setattr(struct dentry *dentry, struct iattr *attr)
> _
Re: [uml-devel] Re: [patch 03/12] uml: export getgid for hostfs
On Thursday 31 March 2005 09:40 am, Christoph Hellwig wrote:
> > Sorry, I wasn't clear... I read *that* answer, but it says "as mentioned
> > in the discussion about ROOT_DEV", and I couldn't find it.
>
> That'd be:
>
> http://marc.theaimsgroup.com/?l=linux-fsdevel=110664428918937=2

As the only user who seems to be crazy enough to regularly run UML with a hostfs root (ala "./linux rootfstype=hostfs rw init=/bin/sh"), I'd just like to say that I'm fairly certain I'm _not_ using the ROOT_DEV special casing. (My root files actually do belong to root; I'm just borrowing the parent's filesystem to avoid the trouble of setting up a whole filesystem under loopback.)

And actually, the ROOT_DEV hack wouldn't help me, because my project is using a dirty trick where I make a loopback-mounted ext2 image (which could easily be ramfs or tmpfs if my project didn't need 500 megs of scratch space), --bind mount all the directories from the parent I need into it, and chroot into it. (Thus I have the parent's binaries and libraries, but the rest is writeable space I can mknod and chown and such in.) This is done with a trivial shell script, the guts of which are:

---
for i in /*
do
  i="${i:1}"
  if [ "$i" != "lost+found" ]
  then
    if [ -h "$i" ]
    then
      # Copy symlinks
      ln -s `readlink "$i"` "$i"
    elif [ -d "/$i" ]
    then
      # Bind mount directories
      mkdir "$i" && mount -n -o bind "/$i" "$i"
    fi
  fi
  if [ $? -ne 0 ]; then exit 1; fi
done

# Don't use system /tmp, use a tmp in workspace.img.
umount tmp

mount -n -t devpts /dev/pts dev/pts
---

With that, the hostfs might as well be read-only. And as you can see, the above will very much NOT work with anything that cares about ROOT_DEV, since ROOT_DEV gets chrooted away fairly quickly...

> > Also, I'd like to know whether there's a correct way to implement this
> > (using something different than root_dev, for instance the init[1] root
> > directory mount device).
> > I understand that with the possibility for
> > multiple mounts the "root device" is more difficult to know (and maybe
> > this is the reason for which ROOT_DEV is bogus, is this?), but at least a
> > check on the param "rootfstype=hostfs" could be done.
>
> personally I think it's a bad misfeature by itself. If you absolutely
> want it make it a mount option so it's explicit at least.

If it's going to have it at all, then yes, it should be done that way. Lots of filesystems have something close to this. (The affs uid= and gid= options aren't that far off, for example.)

I'd like to point out that hostfs's rootflags= parsing needs an update. Right now, it sets the path to the parent directory hostfs is to mount from, period. (If omitted, the default is "rootflags=/".) Appending something like ,rw gets treated as part of the path, so right now turning this feature on would require a remount after UML came up. Unless it's ALWAYS the default that a hostfs mount turns files belonging to the user running UML into files belonging to root. It's possible that this is really the intended behavior, by the way. Whether the mount point is ROOT_DEV or not is probably irrelevant.

Then again, I haven't personally needed this behavior yet. Mounting hostfs when UML is run by a non-root user means I can't mknod or chown, no matter what ownership or permissions the directory I'm in has. And THAT is something that means I have to supplement hostfs with a loopback mount to get real writeable space in which I can get anything major done. Making files look like they belong to root is a purely cosmetic change under those circumstances.

> And yes, the only place where ROOT_DEV makes sense is in the early boot
> process where the first filesystem in the first namespace is mounted,
> that's why I want to get rid of the export to modules for it.
>
> > Ok, this is nice. I'll repost the (updated) patch CC'ing Ingo Molnar
> > (unless there's another Ingo).
>
> Yupp, mingo

Is anyone, anywhere, actually USING this? I'm using hostfs root fairly extensively and _not_ using the funky ownership rewriting feature. Is this something people thought might be needed, or is someone somewhere actually inconvenienced by the lack?

Rob
Re: [uml-devel] [patch] Make User Mode Linux compile in 2.6.11-rc3
On Saturday 05 February 2005 01:00 pm, Frank Sorenson wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Rob Landley wrote:
> | As of yesterday afternoon, the UML build still breaks in
> | sys_call_table.c, ...
>
> This patch for sys_call_table.c was merged into the main tree in this
> changeset:
> http://linux.bkbits.net:8080/linux-2.5/[EMAIL PROTECTED]|ChangeSet@-2d
>
> The patch fixes both the sys_call_table and the pud_alloc breakage, and
> as of 2.6.11-rc3-bk2, the main tree compiles again for UML.

Verified. 2.6.11-rc3-bk2 does indeed build, and the result is chugging through my big compile script. It seems to be working fine, although ye olde display glitch is still there:

binutils-2.14/ld/testsuite/ld-sparc/tlssunbin64.rd
binutils-2.14/ld/testsuite/lde/ld-/ld-sld-spd-spa-sparsparcparc/arc/trc/tlc/tls/tlsstlssulssunssunbsunbiunbinnbin6bin64in64.n64.s64.s4.s.ss
binubinutinutinutilutilstils-ils-2ls-2.s-2.1-2.142.14/.14/l14/ld4/ld//ld/tld/ted/tes/testtestsestsustsuitsuitsuiteuite/ite/lte/lde/ld-/ld-sld-spd-spa-sparsparcparc/tlssunbin64.sd
binutils-2.14/ld/testsuite/ld-sparc/tlssunbin64.td

But that's a purely cosmetic bug.

Thanks,

Rob
Policy question (was Re: [2.6.12-rc1][ACPI][suspend] /proc/acpi/sleep vs /sys/power/state issue - 'standby' on a laptop)
On Wednesday 06 April 2005 05:22 pm, Shawn Starr wrote:
> --- Pavel Machek <[EMAIL PROTECTED]> wrote:
> > Hi!
> >
> > > > So nobody minds if I make this into a CONFIG
> > > > option marked as Deprecated? :)
> >
> > Actually it should probably go through
> > Documentation/feature-removal-schedule.txt
> > ...and give it a *long* timeout, since it is an API
> > change.
> > Pavel

Shouldn't all deprecated features be in feature-removal-schedule.txt? There are four entries in feature-removal-schedule in 2.6.12-rc2, but

  find . -name "Kconfig" | xargs grep -i deprecated

finds eight entries. (And there are more if the grep -i is for "obsolete" instead...)

Just wondering...

Rob
Re: [uml-devel] [PATCH 3/9] UML - "Hardware" random number generator
On Wednesday 09 March 2005 09:15 pm, Jeff Dike wrote:
> This implements a hardware random number generator for UML which attaches
> itself to the host's /dev/random.

Direct use of /dev/random always makes me nervous. I've had a recurring problem with /dev/random blocking, and I generally configure as much as possible to use /dev/urandom instead. It's really easy for a normal user to drain the /dev/random entropy pool on a server (at least one that doesn't have a sound card you can tell it to read white noise from):

  cat /dev/random > /dev/null

I like /dev/urandom because it'll feed you as much entropy as it's got, but won't block, and will presumably round-robin insert real entropy into the streams that multiple users get from /dev/urandom. (I realize this may not be the best place to get gpg keys from.)

I'm just thinking about those UML hosting farms, with several UML instances per machine, on machines which haven't got a keyboard attached constantly feeding entropy into the pool. If just ONE of them is serving ssl connections from its own /dev/urandom, that would drain the /dev/random entropy pool on the host machine almost immediately...

Admittedly, if UML used /dev/urandom instead of /dev/random, it wouldn't know how much "real" randomness it was getting and how much synthetic randomness, but how does that make predicting the numbers it produces any easier?

Rob
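To illustrate the difference being argued above: a loop like the following on /dev/urandom always returns promptly, while the identical loop on /dev/random can stall indefinitely once the entropy pool is drained. (A minimal sketch; the get_urandom name is made up.)

```c
#include <fcntl.h>
#include <unistd.h>

/* Fill buf with n bytes from /dev/urandom. Unlike /dev/random, this
 * never blocks waiting for the entropy pool to refill. Returns the
 * number of bytes read, or -1 on error. */
ssize_t get_urandom(void *buf, size_t n)
{
    int fd = open("/dev/urandom", O_RDONLY);
    ssize_t got, total = 0;

    if (fd < 0) return -1;
    while ((size_t)total < n) {
        /* read() may return short counts, so loop until buf is full */
        got = read(fd, (char *)buf + total, n - total);
        if (got <= 0) { close(fd); return -1; }
        total += got;
    }
    close(fd);
    return total;
}
```

Swap in "/dev/random" and the same code becomes exactly the blocking behavior the dd thread at the top of this digest ran into.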
[patch] Make User Mode Linux compile in 2.6.11-rc3
As of yesterday afternoon, the UML build still breaks in sys_call_table.c; here's the patch I submitted earlier (which got me past the break when I tried it). Last week, this produced what seemed like a working UML.

Now there's a second break in mm/memory.c: the move to four-level page tables conflicts with a stub in our headers. Not quite sure how to fix that. Jeff?

(Yeah, I know Andrew's tree works. But wouldn't it be nice if the kernel.org tree worked too, before the 2.6.11 release.)

Rob

----------  Forwarded Message  ----------

Subject: [uml-devel] [patch] Make User Mode Linux compile in 2.6.11-rc2-bk6.
Date: Saturday 29 January 2005 05:51 am
From: Rob Landley <[EMAIL PROTECTED]>
To: linux-kernel@vger.kernel.org, user-mode-linux-devel@lists.sourceforge.net

User Mode Linux doesn't compile in 2.6.11-rc2-bk6. Here's the change I made to sys_call_table.c to make it compile. (I ran the result and brought up a shell.)

We're really close to finally having a usable UML kernel in mainline. 2.6.9's ARCH=um built but was very unstable, 2.6.10 didn't even build for me, but 2.6.11-rc1-mm2 builds fine unmodified, and ran my tests correctly to completion.

Here's the patch. Nothing fancy, it simply removes or stubs out all the syscalls the compiler complains about.
Rob

Signed-off-by: Rob Landley <[EMAIL PROTECTED]>

--- linux-2.6.10/arch/um/kernel/sys_call_table.c	2005-01-28 21:20:38.0 -0600
+++ linux-2.6.10-um/arch/um/kernel/sys_call_table.c	2005-01-28 21:40:30.735892144 -0600
@@ -20,7 +20,7 @@
 #define NFSSERVCTL sys_ni_syscall
 #endif

-#define LAST_GENERIC_SYSCALL __NR_vperfctr_read
+#define LAST_GENERIC_SYSCALL (NR_syscalls-1)

 #if LAST_GENERIC_SYSCALL > LAST_ARCH_SYSCALL
 #define LAST_SYSCALL LAST_GENERIC_SYSCALL
@@ -52,13 +52,7 @@
 extern syscall_handler_t sys_mbind;
 extern syscall_handler_t sys_get_mempolicy;
 extern syscall_handler_t sys_set_mempolicy;
-extern syscall_handler_t sys_sys_kexec_load;
 extern syscall_handler_t sys_sys_setaltroot;
-extern syscall_handler_t sys_vperfctr_open;
-extern syscall_handler_t sys_vperfctr_control;
-extern syscall_handler_t sys_vperfctr_unlink;
-extern syscall_handler_t sys_vperfctr_iresume;
-extern syscall_handler_t sys_vperfctr_read;

 syscall_handler_t *sys_call_table[] = {
 	[ __NR_restart_syscall ] = (syscall_handler_t *) sys_restart_syscall,
@@ -273,7 +267,7 @@
 	[ __NR_mq_timedreceive ] = (syscall_handler_t *) sys_mq_timedreceive,
 	[ __NR_mq_notify ] = (syscall_handler_t *) sys_mq_notify,
 	[ __NR_mq_getsetattr ] = (syscall_handler_t *) sys_mq_getsetattr,
-	[ __NR_sys_kexec_load ] = (syscall_handler_t *) sys_kexec_load,
+	[ __NR_sys_kexec_load ] = (syscall_handler_t *) sys_ni_syscall,
 	[ __NR_waitid ] = (syscall_handler_t *) sys_waitid,
 #if 0
 	[ __NR_sys_setaltroot ] = (syscall_handler_t *) sys_sys_setaltroot,
@@ -281,11 +275,6 @@
 	[ __NR_add_key ] = (syscall_handler_t *) sys_add_key,
 	[ __NR_request_key ] = (syscall_handler_t *) sys_request_key,
 	[ __NR_keyctl ] = (syscall_handler_t *) sys_keyctl,
-	[ __NR_vperfctr_open ] = (syscall_handler_t *) sys_vperfctr_open,
-	[ __NR_vperfctr_control ] = (syscall_handler_t *) sys_vperfctr_control,
-	[ __NR_vperfctr_unlink ] = (syscall_handler_t *) sys_vperfctr_unlink,
-	[ __NR_vperfctr_iresume ] = (syscall_handler_t *) sys_vperfctr_iresume,
-	[ __NR_vperfctr_read ] = (syscall_handler_t *) sys_vperfctr_read,
 	ARCH_SYSCALLS
 	[ LAST_SYSCALL + 1 ... NR_syscalls ] =