Re: Stupid UDP NAT argument
brk wrote: On Jul 13, 2007, at 8:59 AM, Dan Jenkins wrote: Travis Roy wrote: Don't be a troll. ;-) Uhh, have you ever met Ben in person? (sorry, I couldn't help it) So... Come to the BBQ. Bring Food. Feed the Troll. :-D Just for clarification, do you mean feed as in beer, or feed as in debian is teh sux? Hmm, well I was thinking the former might inhibit the latter, but I was not thinking beer, which might well disinhibit the latter instead. -- Dan Jenkins ([EMAIL PROTECTED]) Rastech Inc., Bedford, NH, USA --- 1-603-206-9951 *** Technical Support Excellence for over a Quarter Century ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
Re: Stupid UDP NAT argument
On Jul 13, 2007, at 8:59 AM, Dan Jenkins wrote: Travis Roy wrote: Don't be a troll. ;-) Uhh, have you ever met Ben in person? (sorry, I couldn't help it) So... Come to the BBQ. Bring Food. Feed the Troll. :-D I've seen Ben eat, no thanks. (sorry, neither could I help it. And I haven't even physically met him yet. Apologies to Ben, but the setup seemed too good to pass up.) BBQ - http://wiki.gnhlug.org/twiki2/bin/view/Www/SummerBBQ2007 -- Dan Jenkins ([EMAIL PROTECTED]) Rastech Inc., Bedford, NH, USA --- 1-603-206-9951 *** Technical Support Excellence for over a Quarter Century
Re: UDP, TCP, and NAT (was: ...)
On 7/12/07, Ben Scott [EMAIL PROTECTED] wrote: Every dynamic NAT implementation I've ever used, ever, did this. Heck, Linux 2.0 could do it, so long as you didn't want a firewall, too. Can you find me any dynamic NAT implementation which *doesn't* handle UDP? Bit of clarification on my terminology here: I'm specifically talking about dynamic one-to-many translation of both addresses and port numbers. Some call this NAPT (Network Address/Port Translation). Any kind of one-to-one translation of addresses (either static or dynamic), will, of course, support UDP and almost everything else. The only things that break down are application protocols which derive return IP addresses from the payload. By your own statement, explain then why NAT routers need to do 'funny things' with very basic UDP based services, like DNS. They don't. I have never had to do application-layer inspection with DNS. Nor NTP. Fire up WireShark and look at the packets if you don't believe me. ... and correlate what you see with WireShark to what the Linux NetFilter source does. Find me any code that does anything beyond port number rewriting just to make DNS work, and I'll gladly eat crow (provided you provide sanitary, cooked crow meat for me to eat). -- Ben
Re: Stupid UDP NAT argument
On Jul 13, 2007, at 8:59 AM, Dan Jenkins wrote: Travis Roy wrote: Don't be a troll. ;-) Uhh, have you ever met Ben in person? (sorry, I couldn't help it) So... Come to the BBQ. Bring Food. Feed the Troll. :-D Just for clarification, do you mean feed as in beer, or feed as in debian is teh sux?
Re: Stupid UDP NAT argument
Travis Roy wrote: Don't be a troll. ;-) Uhh, have you ever met Ben in person? (sorry, I couldn't help it) So... Come to the BBQ. Bring Food. Feed the Troll. :-D (sorry, neither could I help it. And I haven't even physically met him yet. Apologies to Ben, but the setup seemed too good to pass up.) BBQ - http://wiki.gnhlug.org/twiki2/bin/view/Www/SummerBBQ2007 -- Dan Jenkins ([EMAIL PROTECTED]) Rastech Inc., Bedford, NH, USA --- 1-603-206-9951 *** Technical Support Excellence for over a Quarter Century
Re: Stupid UDP NAT argument (was: OpenVPN TCP vs UDP)
Don't be a troll. ;-) Uhh, have you ever met Ben in person? (sorry, I couldn't help it)
Re: Stupid UDP NAT argument
On Jul 13, 2007, at 09:17, Dan Jenkins wrote: but I was not thinking beer, which might well disinhibit the latter instead. Oh, c'mon, now, we don't need to get Ben drunk to find out how much Debian sucks. That's fair game from him straight-sober! Anybody who thinks otherwise ought to sign up for the BBQ today and come defend their rickety distro. -Bill (hey, it's worth a shot) - Bill McGonigle, Owner Work: 603.448.4440 BFC Computing, LLC Home: 603.448.1668 [EMAIL PROTECTED] Cell: 603.252.2606 http://www.bfccomputing.com/ Page: 603.442.1833 Blog: http://blog.bfccomputing.com/ VCard: http://bfccomputing.com/vcard/bill.vcf
Re: [OT] Looking for ancient boot media
On Jul 12, 2007, at 18:14, Ben Scott wrote: Seriously, that does sound like a neat meeting night. It's cute that you think this could get done in a couple hours. :) OK, maybe the debate about who's used the oldest computer could be done in that timeframe. Or I could just bring Doug along and we could skip that. -Bill - Bill McGonigle, Owner Work: 603.448.4440 BFC Computing, LLC Home: 603.448.1668 [EMAIL PROTECTED] Cell: 603.252.2606 http://www.bfccomputing.com/ Page: 603.442.1833 Blog: http://blog.bfccomputing.com/ VCard: http://bfccomputing.com/vcard/bill.vcf
re: office space/colocation
Our company currently occupies an office space of about 1200 square feet. For the three of us it's really way too much space, and all it serves to do is give us more room to hoard computer junk. If any of you are one-person shops that would like an office in Keene, we might be interested in subrenting the space to make it more economical for us. Office Perks Include:
- High Speed Internets Connection
- Your own desk and/or office
- Kitchen/Dinette Area/Private meeting room
- 5 Enormous White Boards
- A conference table
- Projector
- Server Rack Space
- Storage Room
- Limited Access to Brendan (Oracle of computer/programming knowledge)
I think we would be looking at a couple hundred bucks for one person, depending on his/her needs. -- Warren Luebkeman Founder, COO Resara LLC 1.888.357.9195 www.resara.com
Re: Rebuilding RHEL3 libc.so from source
I wrote: it looks like we might have to resort to instrumenting some low-level funcs in libc After flailing about and wondering why something that I'd assumed would be straightforward is so hard, I found this writeup that appears to confirm this *is* surprisingly difficult. No need to read the whole thing but you might find the last section (The Easy Way) entertaining: Warning: the only profanity in the document happens to be in that last section... http://bradthemad.org/tech/notes/patching_rpms.php --M
Re: Rebuilding RHEL3 libc.so from source
On 7/13/07, Michael ODonnell [EMAIL PROTECTED] wrote: http://bradthemad.org/tech/notes/patching_rpms.php That's pretty much the standard drill on how to build a package. While I agree there is some non-trivial work involved, I really don't see how it could be drastically simpler: If you're going to build a package with metadata, you need to have a file that describes the package. Otherwise you've just got a .tar.gz with binaries inside it. I'm more concerned about how this addresses your original post. From what you were saying, you were rebuilding from source but getting a very different result between the official Red Hat binaries and the ones you built yourself. The article you linked to doesn't seem to have anything to do with that. No? -- Ben
Re: Stupid UDP NAT argument
On Fri, 13 Jul 2007, brk wrote: On Jul 13, 2007, at 8:59 AM, Dan Jenkins wrote: Travis Roy wrote: Don't be a troll. ;-) Uhh, have you ever met Ben in person? (sorry, I couldn't help it) So... Come to the BBQ. Bring Food. Feed the Troll. :-D Just for clarification, do you mean feed as in beer, or feed as in debian is teh sux? Yes. -- TARogue (Linux user number 234357) I hope we shall crush in its birth the aristocracy of our monied corporations which dare already to challenge our government to a trial by strength, and bid defiance to the laws of our country. -- Thomas Jefferson
Re: Rebuilding RHEL3 libc.so from source
you were rebuilding from source but getting a very different result between the official Red Hat binaries and the ones you built yourself. The article you linked to doesn't seem to have anything to do with that. No? I've modified my approach somewhat. I had first naively thought I could just explode the source RPM and run a build that would leave all the resultant objs and libs in the build area where I could (A) verify that they were (effectively) the same as the currently installed pre-built versions as a basic sanity check before (B) commencing the classic edit-compile-crash cycle as I inflicted our local instrumentation on the sources, working within that build area. But for whatever reason (probably my unfamiliarity with RPM technology, though nobody I've described my situation to has told me I was doing anything wrong) the libc.so that I ended up with seems to be wildly different from the pre-built version(s), so I did not have confidence that I could start making my local modifications from a known state. My new approach is (will be) inspired by the one outlined in that writeup, involving some sequence or cycle of:
- Explode the RPM
- Make local modifications.
- Generate patch file reflecting those changes.
- Mention that patch file in a local spec file derived from the one originally in the SRPM.
- rpmbuild a new RPM from the SRPM
- Install (or something) from that new RPM
- crash
Re: Rebuilding RHEL3 libc.so from source
On 7/13/07, Michael ODonnell [EMAIL PROTECTED] wrote: I've modified my approach somewhat. I had first naively thought I could just explode the source RPM and run a build that would leave all the resultant objs and libs in the build area ... rpmbuild -ba should do that, more or less. It will also package whatever the spec says to package, but that shouldn't hurt. ... the libc.so that I ended up with seems to be wildly different from the pre-built version(s) ... That's what worries me. It should match the official binaries, more or less. Some things do occur to me, though:
1. Build environment. Binaries are very dependent on their build environment. While RPM lets the packager specify build dependencies and such, that process is as error-prone as any other part of software development. Building the system library is especially tricky. Chicken-and-egg problem: How did Red Hat build the first RHEL3 system library, since there wasn't a RHEL3 yet? It must have been on a non-RHEL3 system. But you're on a RHEL3 system. But... I don't know if the glibc spec file alone is smart enough to take care of all the subtleties that could arise in the above. It may be that Red Hat has some external framework to automate the bootstrap process. Indeed, I'd be surprised if they didn't. Unfortunately, I have no clue as to how to help you find out on this one.
2. Build options. It's possible to specify conditions in a spec file, and define macros on the rpmbuild command line to trigger them, such that the same spec file can build very different binaries. Same idea as with the C pre-processor. But again, I have no clue as to specifics.
3. Symbols. This strikes me as rather unlikely, but: I wonder if strip(1) is being built in to the RPM packaging process, and thus leaving you with unstripped binaries under BUILD, but stripped binaries in the package. I don't think so, but I can't say for sure that isn't what's going on. You can use rpm2cpio(1) and cpio(1) to extract the plain old files from an RPM package to any given directory, and check them that way.
- Explode the RPM
- Make local modifications.
- Generate patch file reflecting those changes.
- Mention that patch file in a local spec file derived from the one originally in the SRPM.
- rpmbuild a new RPM from the SRPM
- Install (or something) from that new RPM
- crash
That's roughly the right idea. One thing you may want to explore is the --root option to rpm(8). You can use it to create a nearly-complete copy of a Red Hat system in a subdirectory, and then use chroot(1) to switch into that system. Kind of like a virtual machine, but without all the separate-partition-and-kernel grief. HTH, -- Ben
Re: Rebuilding RHEL3 libc.so from source
On 7/13/07, Ben Scott [EMAIL PROTECTED] wrote: On 7/13/07, Michael ODonnell [EMAIL PROTECTED] wrote: This strikes me as rather unlikely, but: I wonder if strip(1) is being built-in to the RPM packaging process, and thus leaving you with unstripped binaries under BUILD, but stripped binaries in the package. I don't think so, but I can't say for sure that isn't what's going on. You can use rpm2cpio(1) and cpio(1) to extract the plain old files from an RPM package to any given directory, and check them that way. Don't have time to hunt the past message, but I believe he showed symbol information in both his versions of the file, and the installed binary files from the distro. What concerns me the most is that he stated the symbols IN the files looked dramatically different. -- Thomas
Re: UDP, TCP, and NAT (was...)
On 7/13/07, Ben Scott [EMAIL PROTECTED] wrote: On 7/12/07, Ben Scott [EMAIL PROTECTED] wrote: Every dynamic NAT implementation I've ever used, ever, did this. Heck, Linux 2.0 could do it, so long as you didn't want a firewall, too. Can you find me any dynamic NAT implementation which *doesn't* handle UDP? Bit of clarification on my terminology here: I'm specifically talking about dynamic one-to-many translation of both addresses and port numbers. Some call this NAPT (Network Address/Port Translation). I know that's the case, however, PAT can often cause UDP based protocols to break, as opposed to TCP protocols, where doing PAT is much easier. But you're right, I suppose I presented it as 'This sux0rs!' as opposed to 'It will, however, occasionally not work, depending on the protocol'. Any kind of one-to-one translation of addresses (either static or dynamic), will, of course, support UDP and almost everything else. The only things that break down are application protocols which derive return IP addresses from the payload. Yes, one-to-one maps can for the most part always work, because the external interface is one-to-one mapped directly. By your own statement, explain then why NAT routers need to do 'funny things' with very basic UDP based services, like DNS. They don't. I have never had to do application-layer inspection with DNS. Nor NTP. Fire up WireShark and look at the packets if you don't believe me. ... and correlate what you see with WireShark to what the Linux NetFilter source does. Find me any code that does anything beyond port number rewriting just to make DNS work, and I'll gladly eat crow (provided you provide sanitary, cooked crow meat for me to eat). You are correct, Linux NetFilter *does* operate in the case of DNS as you say. But others do not, and will actually maintain state information based on the serial number of the DNS request contained inside the UDP packet.
-- Thomas
Re: UDP, TCP, and NAT (was Thomas is a doodoo head)
On 7/13/07, Thomas Charron [EMAIL PROTECTED] wrote: By your own statement, explain then why NAT routers need to do 'funny things' with very basic UDP based services, like DNS. They don't. I have never had to do application-layer inspection with DNS. Nor NTP. Fire up WireShark and look at the packets if you don't believe me. ... and correlate what you see with WireShark to what the Linux NetFilter source does. Find me any code that does anything beyond port number rewriting just to make DNS work, and I'll gladly eat crow (provided you provide sanitary, cooked crow meat for me to eat). You are correct, Linux NetFilter *does* operate in the case of DNS as you say. But others do not, and will actually maintain state information based on the serial number of the DNS request contained inside the UDP packet. Clarification. When I sent that, I knew it sounded wrong. I meant NOT the request 'serial number' but the request sequence number. -- Thomas
Re: UDP, TCP, and NAT (was...)
On 7/13/07, Thomas Charron [EMAIL PROTECTED] wrote: I know that's the case, however, PAT can often cause UDP based protocols to break, as opposed to TCP protocols, where doing PAT is much easier. The thing you seem to be missing is that it's not so much a UDP vs TCP distinction, but an application protocol thing. It is true that UDP is inherently connectionless and one-way, while TCP is inherently connection-oriented and two-way. At the same time, almost any application protocol is bi-directional by nature (how often do you really only want to talk one way?). So there's pretty much always going to be a two-way conversation. Now, if you're a programmer or protocol designer, are you going to go to the trouble of rolling your own mechanism for negotiating the port numbers for the return traffic (or using portmapper or some other thing), and making multiple sockets, or are you just going to use the port number and socket which are already there? Most use the latter approach, and thus we end up with many application protocols (TCP and UDP) which work just fine over a simple address/port translation which assumes incoming and outgoing are symmetrical. There are exceptions in all cases. Most of them arise not because of the use of UDP, but because the application protocol expects to send additional packets to *different* port numbers on the way back in. The classic example is FTP. The server listens on TCP/21. The client picks an ephemeral port and connects to that. This is the command channel. When data transfer needs to happen, the client listens on an ephemeral port, and tells the server about this port in the command it sends. The *server* then initiates a connection to the *client*. This is called the data channel. With a dumb NAT implementation, the server's connection will bounce off the NAT boundary, and fail. The solution is a NAT implementation which monitors the FTP command traffic, and dynamically adds the ephemeral ports. (This is all in active mode FTP.
In passive mode, the roles are reversed, but you still have the same problem and solution if the server is behind a NAT boundary.) BitTorrent works similarly. Swarm members register the pieces they have with the tracker by making an outgoing TCP connection. Swarm members *also* make TCP connections to peers which have pieces they need. With a dumb NAT, those will fail. (There is a mechanism to have peers request reverse connections through the tracker, but it can get overloaded, and it's worthless if both peers are behind dumb NAT.) I understand VoIP works similarly, but happens to use UDP rather than TCP. Linux's NetFilter state module calls packets which are part of the same TCP connection or UDP session ESTABLISHED, and packets for things like the FTP data channel RELATED. That's why the firewall rules need to allow both ESTABLISHED and RELATED for FTP to work. You also need to load a conntrack (connection tracking) module specific to each application protocol. There's one for FTP, for example. But you're right, I suppose I presented it as 'This sux0rs!' as opposed to 'It will rarely, however, not work depending on the protocol'. Sure. But the same can be said for TCP applications. See above. You are correct, Linux NetFilter *does* operate in the case of DNS as you say. But others do not, and will actually maintain state information based on the serial number of the DNS request contained inside the UDP packet. That sounds more like a firewall thing than a NAT thing, although I readily admit the line between the two is rather blurry in many cases. In any event, though, the point was that you don't *need* to do that kind of thing for UDP to work through a NAT boundary -- but you may well need it for specific application protocols, on either TCP or UDP. -- Ben
Re: Rebuilding RHEL3 libc.so from source
A couple more points: Ben wrote: http://bradthemad.org/tech/notes/patching_rpms.php That's pretty much the standard drill on how to build a package. I understand that, but it wasn't originally my goal to build an RPM, just to recreate libc.so so we can try to reproduce a certain fault and then instrument it to catch it in the act. I'd thought (hoped) I'd end up with a build area full of extracted sources that I could just mess around in. Also, I forgot to mention something weird that nudged me along my current path: when I just allowed rpmbuild to totally generate an RPM from the SRPM start to finish (using rpmbuild --rebuild blahblah.src.rpm) the outcome was much more as expected, with the resultant libs inside the RPM similar in size to those of the installed versions. So I'm hoping that whatever voodoo made that (appear to) work out will also work to my advantage with the current approach. It's as if different compiler options (or something) get used when you say rpmbuild -bc ... versus --rebuild, though I suspect that's not actually what's happening here...
Re: Rebuilding RHEL3 libc.so from source
On 7/13/07, Michael ODonnell [EMAIL PROTECTED] wrote: That's pretty much the standard drill on how to build a package. I understand that, but it wasn't originally my goal to build an RPM, just to recreate libc.so so we can try to reproduce a certain fault and then instrument it to catch it in the act. I'd thought (hoped) I'd end up with a build area full of extracted sources that I could just mess around in. Well, rpm -ivh foo.srpm will unpack the original tarballs and patches and everything, and then rpmbuild -bp foo.spec will unpack and patch everything. Then you can play around as you like under $RPM/BUILD/foo/, including running ./configure and make or whatever. Assuming that this other weirdness with the build configuration gets figured out, that may help you. ... using rpmbuild --rebuild blahblah.src.rpm) the outcome was much more as expected, with the resultant libs inside the RPM similar in size to those of the installed versions. Hmmm. Curiouser and curiouser. -- Ben
Re: UDP, TCP, and NAT (was...)
On 7/13/07, Ben Scott [EMAIL PROTECTED] wrote: On 7/13/07, Thomas Charron [EMAIL PROTECTED] wrote: I know that's the case, however, PAT can often cause UDP based protocols to break, as opposed to TCP protocols, where doing PAT is much easier. The thing you seem to be missing is that it's not so much a UDP vs TCP distinction, but an application protocol thing. ... Yes. And UDP is by design and specification connectionless datagrams. One way. It is true that UDP is inherently connectionless and one-way, while TCP is inherently connection-oriented and two-way. At the same time, almost any application protocol is bi-directional by nature (how often do you really only want to talk one way?). So there's pretty much always going to be a two-way conversation. ... UDP is by DEFINITION connectionless, and TCP is by DEFINITION connection oriented! And generally utilizing streamed UDP data *IS* a one way conversation. Any secondary streams, such as a command and control stream, are *different* in their protocol. If you need to have a two way conversation, it *generally* is a better idea to use TCP, unless you do not care about data which may be lost, such as with audio and/or video streams. Now, if you're a programmer or protocol designer, are you going to go to the trouble of rolling your own mechanism for negotiating the port numbers for the return traffic (or using portmapper or some other thing), and making multiple sockets, or are you just going to use the port number and socket which are already there? Depends, honestly. But your point is perfectly valid. I'm not saying it's wrong, I'm saying that the automatic UDP port forwarding of NAT boxes isn't standardized anywhere, they just started 'doing it'. Some may argue that it's a really bad idea in general. Most use the latter approach, and thus we end up with many application protocols (TCP and UDP) which work just fine over a simple address/port translation which assumes incoming and outgoing are symmetrical.
Except of course, when the initial UDP packet is coming from the outside. Which is why 'obtuse' ways of discovering and forcing a NAT router to allow traversal of UDP data via STUN and/or ICE exist. If it was so 'easy', these methods would never be required, but they are, and their sole purpose is to 'trick' NAT boxes into letting traffic thru that they don't WANT to let thru. There are exceptions in all cases. Most of them arise not because of the use of UDP, but because the application protocol expects to send additional packets to *different* port numbers on the way back in. See above. There are others. (This is all in active mode FTP. In passive mode, the roles are reversed, but you still have the same problem and solution if the server is behind a NAT boundary.) I snipped the text as it was informative, but irrelevant. FTP works backwards, but mostly because it is a throwback to days when NAT was still short for 'Natalie'. ;-P BitTorrent works similarly. Swarm members register the pieces they have with the tracker by making an outgoing TCP connection. Swarm members *also* make TCP connections to peers which have pieces they need. With a dumb NAT, those will fail. (There is a mechanism to have peers request reverse connections through the tracker, but it can get overloaded, and it's worthless if both peers are behind dumb NAT.) I understand VoIP works similarly, but happens to use UDP rather than TCP. Linux's NetFilter state module calls packets which are part of the same TCP connection or UDP session ESTABLISHED, and packets for things like the FTP data channel RELATED. That's why the firewall rules need to allow both ESTABLISHED and RELATED for FTP to work. You also need to load a conntrack (connection tracking) module specific to each application protocol. There's one for FTP, for example. Ok, I understand your argument now. You are right, and ALL of these applications which utilize TCP don't traverse NATs very well without tricks.
However, the list of applications which utilize UDP and cannot traverse a NAT is much greater than those which have issues with TCP. But you're right, I suppose I presented it as 'This sux0rs!' as opposed to 'It will rarely, however, not work depending on the protocol'. Sure. But the same can be said for TCP applications. See above. Point taken. See paragraph earlier, I see what you were saying now. But here is the core point from MY side. It is generally much easier to deal with multiple incoming connections via TCP. And if you're having a two way conversation, it is much easier, and in my opinion better, to use TCP. Applications which use UDP and have to go thru the hoops of STUN/ICE negotiation probably shouldn't be using UDP. UDP is meant for datagrams. Little snippets of data. Its original intent wasn't for connection oriented communications. You are correct, Linux NetFilter *does* operate in the case of DNS
How to open a device for exclusive access?
I have a device (it happens to be a somewhat exotic serial port) which is managed by a server process. I want my server to detect whether another instance of that server already has that device open. I'm guaranteed that no program other than the one I'm in control of is capable of opening that particular device. I was hoping that I could maybe not have to use either mandatory or advisory file locking. I have tried O_EXCL | O_RDWR | O_NONBLOCK when opening the device node hoping to get back EBUSY or EAGAIN. Does anyone know if I'm SOOL or is there a way to do it? TIA -- Time flies like the wind. Fruit flies like a banana. Stranger things have .0. happened but none stranger than this. Does your driver's license say Organ ..0 Donor?Black holes are where God divided by zero. Listen to me! We are all- 000 individuals! What if this weren't a hypothetical question? steveo at syslang.net
Re: How to open a device for exclusive access?
One common method is for your servers to attempt to open() an appropriately named file under /var/lock/subsystem using O_EXCL|O_CREAT. If that fails you can assume another instance is already running and has already claimed the corresponding /dev entry. Remember to delete your lock files on exit...
Re: Rebuilding RHEL3 libc.so from source
On Jul 13, 2007, at 17:50, Ben Scott wrote: On 7/13/07, Michael ODonnell [EMAIL PROTECTED] wrote: I've modified my approach somewhat. I had first naively thought I could just explode the source RPM and run a build that would leave all the resultant objs and libs in the build area ... rpmbuild -ba should do that, more or less. It will also package whatever the spec says to package, but that shouldn't hurt. While you're playing around, rpmbuild -bb is probably sufficient, as -ba will write out a new srpm every time, which may just be wasted time. I generally prod a spec, do what needs doing, rpmbuild -bb a few times until I've got exactly what I'm after, then do an rpmbuild -bs to immortalize it in a new srpm. Well, actually, most of the time, I'm working from our cvs repo, so I just commit the changes and throw a build at the build system, but then I'm talking about work on some of the official packages that come out of Red Hat... :) (I've been VERY busy with our kernel spec file of late...) ... the libc.so that I ended up with seems to be wildly different from the pre-built version(s) ... That's what worries me. It should match the official binaries, more or less. Some things do occur to me though: 1. Build environment. Binaries are very dependent on their build environment. While RPM lets the packager specify build dependencies and such, that process is as error-prone as any other part of software development is. Building the system library is especially tricky. Chicken-and-egg problem. How did Red Hat build the first RHEL3 system library, since there wasn't a RHEL3 yet? It must have been on a non-RHEL3 system. But you're on a RHEL3 system. But... If you do a diff on the package list between RHEL3 GA and Red Hat Linux 9 GA, you'll find an amazing number of packages with the exact same name-version-release, and in fact, they *are* the same bits... And a lot of RHEL3 was built on Red Hat Linux 9, and/or on the development tree leading up to FC1.
Similar case with RHEL5. During the pre-release RHEL5 development cycle, pre-Fedora Core 6 rawhide and to-be-RHEL5 were built from the exact same bits, the only real difference being the kernel. They were then forked at some point, and Fedora kept on rolling forward with new bits, while RHEL5 started stabilizing the bits we froze on. If there's a big toolchain change (big update to glibc, gcc, etc), everything will get rebuilt with the latest toolchain, but yeah, stuff sometimes gets built against earlier versions of other stuff that winds up being slightly different than what you'd get if you rebuilt against the released stuff. Fun, eh? :) I don't know if the glibc spec file alone is smart enough to take care of all the subtleties that could arise in the above. It may be that Red Hat has some external framework to automate the bootstrap process in the above. Indeed, I'd be surprised if they didn't. Unfortunately, I have no clue as to help you find out on this one. It largely boils down to mock these days. A srpm gets thrown at a build server, which ultimately calls mock to populate a build chroot from a given package tree, based on minimal buildroot specs and the BuildRequires of the srpm, then builds the package. The current public build system for Fedora 7 and rawhide, called koji, the old FC6-extras build system, called plague, and Red Hat's internal build system, called brew, all ultimately make the same calls to mock. (nb: koji and brew are almost identical, but brew as-is wasn't possible to release to the world). 2. Build options. It's possible to specify conditions in a spec file, and define macros on the rpmbuild command line to trigger them, such that the same spec file can build very different binaries. Same idea as with the C pre-processor. But again, I have no clue as to specifics. 
I suspect a combination of maybe not applying all the patches and other possible tweaks from the %prep section of the spec, and then using a different set of compile flags probably were the cause. Most packages are configured using the %configure macro, which auto-expands to a bunch of stuff... (grep around in /usr/lib/rpm to see the full expansion, which includes CFLAGS/CPPFLAGS/LDFLAGS, etc). 3. Symbols This strikes me as rather unlikely, but: I wonder if strip(1) is being built-in to the RPM packaging process, and thus leaving you with unstripped binaries under BUILD, but stripped binaries in the package. I don't think so, but I can't say for sure that isn't what's going on. You can use rpm2cpio(1) and cpio(1) to extract the plain old files from an RPM package to any given directory, and check them that way. Yes, the rpmbuild process automagically strips all binaries as part of its %install section. These bits are pulled out and put into debuginfo packages. Special care is taken to make sure Makefiles and install commands DON'T