Re: Some specific questions about 5.x
On Wed, 26 Mar 2003 10:57:07 +0300 Alex [EMAIL PROTECTED] wrote: Howdy. 1. Is it true that kernel threads are heavier than userspace ones (pthreads), and hence an application with hundreds of threads will work noticeably slower than one using pthreads due to greater switching penalties? AFAIK, not in a hybrid model. Systems that do 1:1 thread mapping (like Gah! Nu/Linux) will suffer from this kind of situation and will also use more kernel memory. In hybrid implementations based on Scheduler Activations, like FreeBSD's KSE and NetBSD's SA, there's a balance between the number of kernel virtual processors available and the number of userland threads; it's an N:M model. Nathan Williams' paper on the subject suggests that context switching is not much slower than in a pure userland implementation. Also, keep in mind that pure userland threading has other problems, like when one thread blocks on I/O. In pure userland threading systems this means the whole process is blocked, whereas in KSE and SA only that thread is stopped. 2. Is it true that even 5.x has no implementation of inter-process semaphores that block only the calling thread, not the whole process, as is usual in FreeBSD? That I don't know; perhaps the local KSE guru, Julian, might have an answer for this. Cheers, -- Miguel Mendez - [EMAIL PROTECTED] GPG Public Key :: http://energyhq.homeip.net/files/pubkey.txt EnergyHQ :: http://www.energyhq.tk Tired of Spam? - http://www.trustic.com ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-hackers To unsubscribe, send any mail to [EMAIL PROTECTED]
Re: Some specific questions about 5.x
Alex wrote: I was so enthusiastic about the kernel threads implemented in 5.x, but some ugly rumors spoiled my dreams :0) So I want to find out whether these rumors are myths or not. 5.x does not implement traditional kernel threads like the ones you appear to be thinking about. Instead, it implements a variation of scheduler activations. Traditional kernel threads have a lot of unnecessary overhead, including problems with CPU affinity and thread group negaffinity, which are necessary for increased single-application concurrency. See the KSE documentation for more information. 1. Is it true that kernel threads are heavier than userspace ones (pthreads), and hence an application with hundreds of threads will work noticeably slower than one using pthreads due to greater switching penalties? Yes and no. See the KSE documentation for more information. 2. Is it true that even 5.x has no implementation of inter-process semaphores that block only the calling thread, not the whole process, as is usual in FreeBSD? No, for values of x > 0. See the KSE documentation for more information. -- Terry
Re: Some specific questions about 5.x
Miguel Mendez wrote: On Wed, 26 Mar 2003 10:57:07 +0300 Alex [EMAIL PROTECTED] wrote: Howdy. 1. Is it true that kernel threads are heavier than userspace ones (pthreads), and hence an application with hundreds of threads will work noticeably slower than one using pthreads due to greater switching penalties? AFAIK, not in a hybrid model. Systems that do 1:1 thread mapping (like Gah! Nu/Linux) will suffer from this kind of situation and will also use more kernel memory. In hybrid implementations based on Scheduler Activations, like FreeBSD's KSE and NetBSD's SA, there's a balance between the number of kernel virtual processors available and the number of userland threads; it's an N:M model. Nathan Williams' paper on the subject suggests that context switching is not much slower than in a pure userland implementation. Also, keep in mind that pure userland threading has other problems, like when one thread blocks on I/O. In pure userland threading systems this means the whole process is blocked, whereas in KSE and SA only that thread is stopped. What about Solaris' migration towards the 1:1 model from the N:M one they had supported for years already? Who is insane: the Solaris folks (moving towards the Linux model) or the Free/NetBSD ones (migrating to the old Solaris behavior)? -- Lev Walkin [EMAIL PROTECTED]
Re: Question about BPF API (PCAP not like for me)
Vladimir Yu. Stepanov wrote: Hello! I have a little question about BPF: how can one determine whether a packet delivered to user level is incoming or outgoing? Does the current API not support this, or am I wrong? Unfortunately, there is no way of determining this fact. However, there is a flag named BIOCSSEESENT that enables you to skip over locally generated packets on the interface in question. See ports/net/ipcad. -- Lev Walkin [EMAIL PROTECTED]
Re: shared mem and panics when out of PV Entries
On Tue, 25 Mar 2003, Andrew Kinney wrote: I'm going to expose my newbness here with respect to BSD memory management, but could the number of files served and filesystem caching have something to do with the PV Entry usage by Apache? We've got around 1.2 million files served by this Apache. Could it be that the extensive PV Entry usage has something to do with that? Obviously, not all are accessed all the time, but it wouldn't take a very large percentage of them being accessed to cause issues if filesystem caching is in any way related to PV Entry usage by Apache. I think most of these PV entries are related to the forked Apache processes. As far as I know, PV entries are used for files only if those files are mmap()ed. I remember reading somewhere (sorry, didn't keep track of the link) that someone running a heavily used Squid proxy had a very similar issue with running out of PV Entries as they got more and more files in the cache. Squid is basically a modified Apache with proxy caching turned on. No, Squid is not a modified Apache. It's a completely different animal. Apache uses a preforked model, while Squid uses select()/poll() in a single process. I think you should try to decrease the memory shared between Apache processes. If you cannot change the scripts, then the only method is to decrease the number of Apache processes while still handling the current workload: 1) disable keepalive if it is enabled; 2) set Apache behind a reverse-proxy server that frees Apache processes as soon as the proxy gets the whole response. We had keepalive set to the default of on (at least the default for this install) with the default keepalive timeout of 15 seconds. Dropping the keepalive timeout down to 3 seconds has dramatically reduced the number of Apache processes required to serve the load. With the new settings, we're averaging 30 to 80 Apache processes, which is much more manageable in terms of memory usage, though we weren't anywhere near running out of physical RAM prior to this. 
We're servicing a little over 1000 requests per minute, which by some standards isn't a huge amount. I strongly recommend disabling keepalive altogether. Keepalive mostly makes sense only for downloading HTML pages with their inline images. With a heavy mod_perl server it's simply a waste of resources - megabytes of physical memory (usually) or PV entries (in your rare case). Furthermore, heavy Apache processes should not handle images or other static files. A heavy Apache should generate dynamic responses only and should sit behind a reverse-proxy (though not mod_proxy). Consider a modem user who gets a 50K response. The FreeBSD 4.4+ default in-kernel TCP send buffer is 32K, so Apache quickly fills it and then has to wait for space to free up for the remaining 18K. A modem user receives that 18K in 6 seconds at 3K/s. So while mod_perl can generate the answer in 0.1 seconds, it needs to wait 6 seconds to transfer that answer. All your megabytes and PV entries are bound up in this transfer. But this long transfer can be done by a reverse-proxy that gets the response from the heavy server in 0.1 seconds or so. Furthermore, if the response is small enough to fit completely in the in-kernel TCP buffer, there is Apache's 2-second lingering-close timeout. Before closing the connection, Apache calls shutdown(SHUT_WR) and then calls select() with a 2-second timeout. A slow client will not close the connection on its side because it's still receiving the response, so Apache waits these 2 seconds and then close()s the connection. So you spend at least 2 seconds on a slow client even if your server can generate responses in a fraction of a second. We're still seeing quite heavy PV Entry usage, though. The reduced number of Apache processes (by more than half) doesn't seem to have appreciably reduced PV Entry usage versus the previous settings, so I suspect I may have been wrong about memory sharing as the culprit for the PV Entry usage. 
This observation may just be coincidence, but the average PV Entry usage seems to have gone up by a couple million entries since the changes to the Apache config. If you have 200M of shared memory, it takes about 50,000 PV entries per process; 20 processes take 1 million PV entries. Time will tell if the PV Entries are still getting hit hard enough to cause panics due to running out of them. They're supposed to get forcibly recycled at 90% utilization from what I see in the kernel code, so if we never get above 90% utilization I guess I could consider the issue resolved. What other things in Apache (besides memory sharing via PHP and/or mod_perl) could generate PV Entry usage on a massive scale? File mmap()s. But those should not be decreased, because they are always shared, as opposed to copy-on-write fork()ed pages. Igor Sysoev http://sysoev.ru/en/
pam_ldap...
Thanks for the answers, but why is pam_ldap in FreeBSD if I can't authenticate against LDAP servers? Sorry, but I can't understand... You did give me solutions with NIS... nis/gateway... where can I find an official howto? The FreeBSD team does not talk about it. The last question: why does FreeBSD not support LDAP authentication (nss_ldap)? files, nis, hesiod??? Do we live in the past? For me, this should have been one of the great things in the 5.0 release! :) Thanks again, and sorry for the English. ---
Re: shared mem and panics when out of PV Entries
On 25 Mar 2003, at 19:28, Terry Lambert wrote: Basically, you don't really care about pv_entry_t's, you care about KVA space, and running out of it. In a previous posting, you suggested increasing KVA_PAGES fixed the problem, but caused a pthreads problem. Will running out of KVA space indirectly cause PV Entries to hit its limit as shown in sysctl vm.zone? To my knowledge, I've never seen a panic on this system directly resulting from running out of KVA space. They've all been traced back to running out of available PV Entries. I'm invariably hitting the panic in pmap_insert_entry() and I only get the panic when I run out of available PV Entries. I've seen nothing to indicate that running out of KVA space is causing the panics, though I'm still learning the ropes of the BSD memory management code and recognize that there are many interactions with different portions of the memory management code that could have unforeseen results. Regarding the other thread you mentioned, increasing KVA_PAGES was just a way to make it possible to squeeze a higher PV Entry limit out of the system because it would allow a higher value for PMAP_SHPGPERPROC while still allowing the system to boot. I have not determined if it fixed the problem because I had to revert to an old kernel when MySQL wigged out on boot, apparently due to the threading issue in 4.7 that shows up with increased KVA_PAGES. I never got a chance to increase PMAP_SHPGPERPROC after increasing KVA_PAGES because MySQL is an important service on this system and I had to get it back up and running. What you meant to say is that it caused a Linux threads kernel module mailbox location problem for the user space Linux threads library. In other words, it's because you are using the Linux threads implementation, that you have this problem, not FreeBSD's pthreads. I may have misspoken in the previous thread about pthreads having a problem when KVA_PAGES was increased. 
I was referencing a previous thread in which the author stated pthreads had a problem when KVA_PAGES was increased and had assumed that the author knew what he was talking about. At any rate, this was apparently patched and included into the RELENG_4 tree after 4.7-RELEASE. I plan on grabbing RELENG_4_8 once it's officially released. That should give me room to play with KVA_PAGES, if necessary, without breaking MySQL. Also worth reiterating is that resource usage by Apache is the source of the panics. The version I'm using is 1.3.27, so it doesn't even make use of threading, at least not like Apache 2.0. I would just switch to Apache 2.0, but it doesn't support all the modules we need yet. Threads were only an issue with MySQL when KVA_PAGES > 256, which doesn't appear to be related to the panics happening while KVA_PAGES=256. In any case, the problem you are having is because the uma_zalloc() (UMA) allocator is feeling KVA space pressure. One way to move this pressure somewhere else, rather than dealing with it in an area which results in a panic on you because the code was not properly retrofit for the limitations of UMA, is to decide to preallocate the UMA region used for the PV ENTRY zone. I haven't read up on that section of the source, but I'll go do so now and determine if the changes you suggested would help in this case. I know in some other posts you're a strong advocate for mapping all physical RAM into KVA right up front rather than messing around with some subset of physical RAM getting mapped into KVA. That approach seems to make sense, at least for large memory systems, if I understand all the dynamics of the situation correctly. 
The way to do this is to modify /usr/src/sys/i386/i386/pmap.c at about line 122, where it says: #define MINPV 2048 If I read the code correctly in pmap.c, MINPV just guarantees that the system will have at least *some* PV Entries available by preallocating the KVA (28 bytes each on my system) for those PV Entries specified by MINPV. See the section of /usr/src/sys/i386/i386/pmap.c labelled "init the pv free list". I'm not certain it makes a lot of sense to preallocate KVA space for 11,113,502 PV Entries when we don't appear to be completely KVA starved. As I understand it (and as you seem to have suggested), increasing MINPV would only be useful if we were running out of KVA due to other KVA consumers (like buffers, cache, mbuf clusters, etc.) before we could get enough PV Entries on the free list. I don't believe that is what's happening here. Here are some pertinent sysctls: vm.zone_kmem_kvaspace: 350126080 vm.kvm_size: 1065353216 vm.kvm_free: 58720256 vm.zone_kmem_kvaspace indicates (if I understand it correctly) that kmem_alloc() allocated about 334MB of KVA at boot. vm.kvm_free indicates that KVM is only pressured after the system has been running awhile. The sysctl's above were read after running for about 90 minutes after a
Re: shared mem and panics when out of PV Entries
On 26 Mar 2003, at 13:29, Igor Sysoev wrote: If you have 200M shared memory it takes about 50,000 PV entries per process. 20 processes takes 1 million PV entries. We've got about 11.1 million PV entries to play with, so I went ahead and made MaxClients 150 just to ensure Apache couldn't panic the system at will. That, combined with minimizing the KeepAliveTimeout, has solved the problem for now, though I'm still not happy about letting all that RAM sit idle. I guess I'll just have to live with it. Eventually, I may be forced to turn off KeepAlive and make use of FreeBSD's accept filters or put in a reverse proxy as you recommend. For now, though, we are serving the whole gamut from this Apache: static pages, images, mod_perl, PHP, Apache::ASP, and most anything else a customer might want or need to serve from a web server. I know it isn't the most efficient way to use Apache, but nobody has any complaints about performance at this point. Sincerely, Andrew Kinney President and Chief Technology Officer Advantagecom Networks, Inc. http://www.advantagecom.net
Re: Lots of kernel core dumps
On Monday 24 March 2003 11:18, Daniela wrote: On Sunday 23 March 2003 20:20, Wes Peters wrote: The reason for creating the 5.0 release is to make it easy for more developers and testers to jump onto the 5.x bandwagon by giving them a known (relatively) good starting point. Quite a number of problems have been fixed since 5.0-RELEASE; CURRENT is now generally much more stable, and nobody is going to spend time updating 5.0 which is essentially an early access release. You have to decide for yourself if this machine is too critical to run CURRENT, in which case it's probably best off running STABLE or the latest 4.x release branch, or if you want to update it to CURRENT, follow the CURRENT mailing list, and update again at known stable development points. It looks like right now is pretty good if you want to jump. At any rate, thanks for your tenacity. We really do appreciate the contributions of everyone. Well, it's just a home server. I don't mind a few crashes, but security is important for me. What do you think, should I go back to -stable? FreeBSD is the world's best OS, I want to see it succeeding and I want to help as much as possible. I have two machines at home and run STABLE on my workstation, which is also our 'group server' for the home. I have current on a crash test box that used to be my workstation 6 years ago, a K6/233 I can't imagine not having. If you're similarly hardware-rich, I'd recommend a similar approach. If you have only the one box, I personally would probably run CURRENT and be careful about when to run CVSup. Good luck! -- Where am I, and what am I doing in this handbasket? Wes Peters [EMAIL PROTECTED]
Re: pam_ldap...
Why FreeBSD do not support ldap authentication? (nss_ldap) files, nis, hesiod??? do we live in the past? One of great things in 5.0 release for me, should be this! :) Wait for FreeBSD 5.1. Does that mean there will be official support for nss_ldap in FBSD 5.1? Is it on the -current right now? I would like to test it. --- Lou
Re: Lots of kernel core dumps
On Tuesday 25 March 2003 08:14, Peter Jeremy wrote: On Mon, Mar 24, 2003 at 08:18:43PM +0100, Daniela wrote: Well, it's just a home server. I don't mind a few crashes, but security is important for me. What do you think, should I go back to -stable? If you're willing to put up with a few crashes _and_ assist with debugging the crashes (eg trying patches) then running -CURRENT would help the Project. One option you could try is to stick with -CURRENT for a month or two and see how it pans out - if you feel it's too painful, downgrade to -STABLE. (I ran -CURRENT on my main workstation for about 3 years - I dropped back to -STABLE midway through last year because -CURRENT happened to strike an extended period of instability and it was causing me too much angst). On the topic of security, you should be aware that -CURRENT is not officially supported and therefore isn't mentioned in security advisories - in general -CURRENT will have security patches applied before -STABLE but you will need to do some detective work if you want to identify the exact time/revision affected. There have also been a couple of instances where security problems only affected -CURRENT. In short, if I keep my eyes open, security isn't bad, right? I'll give -current a try, thanks for your advice. Daniela
Re: shared mem and panics when out of PV Entries
Andrew Kinney wrote: On 25 Mar 2003, at 19:28, Terry Lambert wrote: Basically, you don't really care about pv_entry_t's, you care about KVA space, and running out of it. In a previous posting, you suggested increasing KVA_PAGES fixed the problem, but caused a pthreads problem. Will running out of KVA space indirectly cause PV Entries to hit its limit as shown in sysctl vm.zone? Yes. The UMA code preallocates only a small set of entries in a zone, and then allocates more on an as-needed basis. Previously, the zalloci() code was used (the "i" stands for "interrupt"). This allocator preestablished page mappings (but not necessarily pages) for every object that could be allocated in the zone. zalloci() had the benefit of preallocating mappings, so you could not run out; the new code has the disadvantage that you can now run out unexpectedly, if something else puts pressure on the number of page mappings and runs you out first. The main problem that causes this is that memory allocations in the zone allocation case are type-stable, which means that if they are allocated to be of a particular type, they remain of that type. By giving a larger minimum, you are effectively reverting to the old behaviour of defining a maximum, by allocating higher than any possible usage. Note that you can still run out, if you go over this number, and that running out of some things is fatal. Basically, when the new allocation policies went in, the code they were applied to was not checked for low-end failure cases, so there are some introduced bugs that are slowly being beaten out of old code that never had to deal with an allocation failure under normal conditions before. The code changes I posted only work around the introduced bugs in this one case; I expect that if you push your hernia in with a girdle, it will pop out somewhere else. But at least you will be doing valuable work, identifying where the introduced bugs live. 8-). 
To my knowledge, I've never seen a panic on this system directly resulting from running out of KVA space. They've all been traced back to running out of available PV Entries. But ask yourself "Why did the allocation of new PV Entries fail this time?". The answer is that you ran out of page mappings for the new page you wanted to allocate to contain the new entries. As I said, it's an introduced bug, and a side effect of the change in zone allocation policy implementation. The patch I posted lets you work around it by pushing the number of PV Entries above the number you will ever need, at the expense of maybe running out of page mappings somewhere else. Technically, the code should have been changed to attempt to prereserve all necessary mappings on a fork(), and, if that was not possible, to fail the fork(). Probably this would require counted lists, so you could lock, know how many were free, unlock, attempt to allocate blocks until there were enough more free, relock, verify the count, and then complete the operation and unlock. I'm invariably hitting the panic in pmap_insert_entry() and I only get the panic when I run out of available PV Entries. I've seen nothing to indicate that running out of KVA space is causing the panics, though I'm still learning the ropes of the BSD memory management code and recognize that there are many interactions with different portions of the memory management code that could have unforeseen results. You need to look at the traceback, and the function where the panic is actually called from. Neither pmap_insert_entry() nor get_pv_entry() calls panic directly. Once you understand who is calling whom, you can understand why the panic is called, rather than merely returning an error to the caller. Basically, it boils down to the caller being unable to accept an allocation failure. 
Regarding the other thread you mentioned, increasing KVA_PAGES was just a way to make it possible to squeeze a higher PV Entry limit out of the system, because it would allow a higher value for PMAP_SHPGPERPROC while still allowing the system to boot. I have not determined if it fixed the problem because I had to revert to an old kernel when MySQL wigged out on boot, apparently due to the threading issue in 4.7 that shows up with increased KVA_PAGES. I never got a chance to increase PMAP_SHPGPERPROC after increasing KVA_PAGES because MySQL is an important service on this system and I had to get it back up and running. Yes. I also suggested how to crank up the initial number of pv_entry_t's in the first place, so that the allocation failure won't happen in the code path that calls panic() in the event of an allocation failure it's not expecting. 8-). The MySQL problem is the threads mailbox issue. You can fix it the way I suggested, so you can use a larger KVA_PAGES without the threading issue showing up. I just didn't go into detail on the lines of code to change to do it, but it's conceptually very easy.
RFA: Keeping sysadmin programs resident/available
Hi all, I was wondering if anyone here knows of a way to force specific programs to stay resident (not swap) - specifically, I'm trying to see if there's a way one could keep sshd out of swap, and then execute a shell and some basic sysadmin tools (ps, top, etc.), with the same swap-prevention, so when an end-user inevitably runs a system out of swap certain processes would still be available. (I'm thinking that it'd only need a few MB reserved, so the impact to the system would be minimal.) So far my search has led me to mlock(), which seems to work on address ranges. In my tests I haven't actually been able to prevent pages from swapping, but I was able to cause the RSS of a test binary to grow by modifying rtld to mlock() everything it mmap()s. Obviously I'm grasping at straws there - ideally the solution would work on static executables as well. I've also heard of madvise() - same results. Has anyone here dealt with this issue? Any tips/leads I could follow? - dpk
trouble with DHCP and ipfw -1 fragment rule
Hello. I've got a box which is booting up just fine with an IP address, but as soon as I change my adapter to DHCP and reboot, it hangs during the initial network setup. It seems that my router (a LinkSys 4-port DSL hub) is sending bad packets and triggering the -1 rule. Then the DHCP process just hangs; I can do a CTRL+C to break out and continue setup. I'm not sure how to proceed debugging this bugger. Any suggestions? Clark
Re: [hackers] Re: Realtek
Hi Kirill! That's a real myth about Realtek :0) We've been using them on all our FreeBSD machines for over 5 years without any problems. The driver is working fine. Speed and reliability are OK. So I for one would even suggest choosing them instead of 3COM, as in Russia you can easily get 2 out of 3 "3COM" cards that are not 3COM at all :0) Not so for cheap Realteks :0) Good luck David Gilbert wrote: Kirill == Kirill Ponomarew [EMAIL PROTECTED] writes: Kirill Hi, On Thu, Mar 06, 2003 at 11:46:43AM -0300, Pablo Morales Kirill wrote: Someone said that the Realtek 8029 and 8139 ethernet cards are the worst cards ever made. My boss is planning to make a great buy of these cards for a communication project (the reason is obvious: the cost of these cards). I'm trying to persuade him to buy 3Com ethernet cards, but I need technical information to demonstrate to him that it's not a good investment to buy those kinds of cards. Can someone give a good explanation of that, or at least where I can find information about it? Kirill please read the comments in /usr/src/sys/pci/if_rl.c It's worth noting that there's a scale to all this. The driver comments imply that the card will push 100Mb if you have enough power in the CPU ... defined as 400MHz. Given the price of this card ... and the fact that less-than-400MHz CPUs are rather rare, and that this is only an issue for high bandwidth applications ... the rl cards might fit for you. We use them extensively in workstations... even diskless. The reason being that with modern processors, they perform adequately ... and although they take up extra CPU ... CPUs are rather under-utilized resources in most workstations. Dave.
FreeBSD Installation Problems
Hello, I am trying to install FreeBSD on my personal computer; however, I am receiving error messages during the course of my installation. When I try to install using the floppy method, at the final prompt the installation program requests me to put a floppy disk in drive A and press enter. After putting in the disk and pressing enter, I receive a warning or error message like "Error mounting floppy fd0 (/dev/fd0) on /dist: Invalid argument". When I try to install via FTP, at the end of the installation procedure I receive a message like "Couldn't open FTP connection to ftp.freebsd.org: undefined error : 0". Can someone please help me troubleshoot these problems I have been facing with the installation of FreeBSD? Thanks,