Re: [Asterisk-Users] Hardware for Asterisk
Have you experienced a hardware failure yet that you had to come back from? If you lose a drive, there is a high probability that you will lose the controller. So unless you have an add-on card, or some motherboard with 4 IDE ports, you will corrupt the second drive of a mirror. If the second drive is corrupted, then you are only a hair above not having anything. If you don't trust that, check out the GOOD IDE RAID controllers. You are only allowed to place 1 drive per port, and they only use 1 port on an IDE controller.

Yes, many times. I have _never_ lost a controller when the drive went; the drive failures were all mechanical, not electrical, or not electrical to the point of causing the controller to die as well. If you're using two drives per channel for IDE RAID you're just asking for trouble. One drive per channel. Lose a drive, and it's _very_ unlikely that you will lose the _other_ drive.

I don't buy that any truly redundant RAID system is as fast in software as in hardware on a machine doing anything significant. In RAID 1, you are writing all data to the drives twice or more. In a read environment, it might be able to share the load out to more than one drive and help, but I don't expect it would be much better than a dedicated controller handling the load. Any load from a software RAID solution takes processor time away from the processes it is trying to complete. So take our VoIP application: if I am spending time getting the voice recording to 2 or more drives, plus running the software to get it there, you have significantly reduced the amount of time available to the CPU to handle the VoIP packets in a timely manner. This only gets worse as call volume goes up. If it is hardware RAID, you know it will be a single write and the controller deals with the problems.

I agree that software RAID of any kind adds load to the system. I never stated otherwise.
I _did_ state, however, that if you're speccing a system and the system load is approaching a level where adding software RAID1 gives you an appreciable load increase, you are speccing your systems far too tightly. The additional write to another drive channel for RAID1 is practically inconsequential for most systems, IMO. With RAID5 and a failed drive, yes, you do suffer _significant_ performance loss, but I wasn't talking about software RAID5... :-)

On server hardware, Dell has their own boards. IBM had their own boards. Compaq and HP also produce their own boards. Maybe they don't produce their own boards in the desktop models, but they do in the server-class machines. While you can buy Intel, Tyan, and SuperMicro boards, I wouldn't consider any of the remaining ones you list as truly server class.

I stand corrected; I was under the assumption that Dell was farming their customized motherboards out to a standard OEM.

Maybe not by default, but if you get into the hot-swap PSUs you absolutely are talking quality.

Agreed.

While I'll agree that a complete spare is a good idea, if you are looking for the bargains now, I don't have faith that you would also be the person who would buy 2 and leave the second untouched until failure occurs. I'll admit I couldn't leave a fully functioning machine just laying around not doing something. :-)

I'm not much of a bargain-hunter when it comes to stuff that Must Work(tm) -- the bean-counters will bitch, but when the system goes down for whatever reason and I can have it back up almost immediately, it is worth every penny to them.

Regards, Andrew

___
Asterisk-Users mailing list
[EMAIL PROTECTED]
http://lists.digium.com/mailman/listinfo/asterisk-users
To UNSUBSCRIBE or update options visit: http://lists.digium.com/mailman/listinfo/asterisk-users
Re: [Asterisk-Users] Hardware for Asterisk
On Saturday 17 January 2004 00:31, Chris Albertson wrote: Software RAID vs. Hardware RAID??? Welcome to the 80s. There IS no hardware RAID; it's all software. The difference is only where the software lives: in ROM on the controller card, in the RAID box, or in a Linux driver.

Actually, hardware RAID typically runs something called firmware (which is technically software, though it tends to be a little more difficult to alter) and offloads the task of balancing the data across multiple disks off the CPU. This is the primary difference between software RAID (which, since it uses the CPU, reduces the available CPU for other tasks) and hardware RAID.

For Asterisk all you would need is a simple disk mirror at most.

That's a gross oversimplification without consideration of a particular setup.

-Tilghman
Re: [Asterisk-Users] Hardware for Asterisk
On Friday, 16 January, 2004 12:27, Steven Critchfield wrote: On Fri, 2004-01-16 at 06:47, Andrew Kohlsmith wrote: If you value your data, don't use software RAID. If you value performance, don't use software RAID. If you value uptime/stability, don't use any RAID on IDE.

That's pure bullshit -- I use software RAID *specifically* because I value my data. I don't want to buy two hardware RAID controllers and have one sit on the shelf just in case the first dies... and if the second dies you're SOL, because they've lasted long enough that they're no longer available. Linux software RAID is available on any Linux system, and if the system blows up I can put the drives in another system and *not* worry about it not being detected.

As far as performance goes, I have some bonnie++ tests I've run showing that, at least on the few systems I've tested, software RAID 1 beat out hardware RAID 1 (these systems were IDE, SCSI-2, and Ultra320, with DPT RAID controllers for SCSI on P4 and I think regular Promise IDE RAID controllers on P3) -- not a huge difference in speed, but one that at least tosses your "if you value performance, don't use software raid" argument. Perhaps on a _heavily_ loaded server you might be right, but then again I feel that you're stupid for letting a server get so loaded up that it can't handle the simple mirroring algorithms, in addition to normal file-serving functions, without degrading performance to a noticeable degree.

I used to believe that HW RAID was the only way to go. With RAID5 I still feel that is true to an extent. However, if you're just mirroring there is _no_ significant advantage to choosing hardware RAID over software RAID. Not on IDE, and not on SCSI. In fact, there are advantages to choosing software RAID over hardware RAID, as I've mentioned above.

Have you experienced a hardware failure yet that you had to come back from? If you lose a drive, there is a high probability that you will lose the controller.
So unless you have an add-on card, or some motherboard with 4 IDE ports, you will corrupt the second drive of a mirror. If the second drive is corrupted, then you are only a hair above not having anything. If you don't trust that, check out the GOOD IDE RAID controllers. You are only allowed to place 1 drive per port, and they only use 1 port on an IDE controller.

Now here we are seeing that you must have had a really abnormal, bad experience, or you are not talking from experience at all. I have, in fact, used many software and hardware RAID configurations, and I have had a great many drive failures. For mirroring, I use software RAID precisely because of any given hardware RAID array's reliance on its controller. Although I think it is very far-fetched to assume such a high correlation between drive failure and controller failure (since I have had _far_ more drives fail than controllers), the facts that hardware controllers are both expensive (compared to free software) and rare (compared to any machine's normal IDE ports) culminate in my use of software RAID.

I can stick the good drive of any software-mirrored RAID array into _any_ other system (Linux OR Windows), boot up off my trusty rescue CD with software RAID and networking, and immediately recover data or functionality. Further, this presumes that the machine which housed the failed drive is otherwise in a non-functional state. If this is a false presumption, then because I have RAIDed my boot partition the system boots just fine with only one working drive. Even better, when I get the new drive, I can simply install it and rebuild the array while I am on-line... a feature not all hardware RAID controllers have.

_My_ horror stories are those of single built-like-a-brick-outhouse servers with all sorts of special hardware, failing out in the field with an SCA drive and no SCA backplane/controller within 100 miles.
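For what it's worth, the one-working-drive state described above is easy to watch for from userspace. Here is a minimal sketch of spotting a degraded Linux md mirror; the mdstat text is a canned sample (device names and block counts are hypothetical) so the parsing can be shown end to end. On a live box you would read /proc/mdstat itself, and re-add the replacement disk with mdadm.

```shell
#!/bin/sh
# Sketch: detect a degraded md RAID1 from /proc/mdstat-style output.
# MDSTAT below is a canned sample; on a real system you would use:
#   MDSTAT=$(cat /proc/mdstat)
MDSTAT='md0 : active raid1 hdc1[1] hda1[0](F)
      39061952 blocks [2/1] [_U]'

# "(F)" marks a failed member; "[2/1]" means 2 devices configured
# but only 1 active.
if echo "$MDSTAT" | grep -q '(F)'; then
    echo "array degraded"
else
    echo "array healthy"
fi

# After swapping in the new disk, the online rebuild is roughly:
#   mdadm /dev/md0 --add /dev/hda1    # resync runs while mounted
```

The grep-on-mdstat check is crude but era-appropriate; it is the kind of thing a cron job can mail you about before you find out the hard way.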
Even the large NAS devices that use IDE have the IDE controller built into the sled that holds the drive, and use PCI hotswap technology.

I don't buy that any truly redundant RAID system is as fast in software as in hardware on a machine doing anything significant. In RAID 1, you are writing all data to the drives twice or more. In a read environment, it might be able to share the load out to more than one drive and help, but I don't expect it would be much better than a dedicated controller handling the load. Any load from a software RAID solution takes processor time away from the processes it is trying to complete. So take our VoIP application: if I am spending time getting the voice recording to 2 or more drives, plus running the software to get it there, you have significantly reduced the amount of time available to the CPU to handle the VoIP packets in a timely manner. This only gets worse as call volume goes up. If it is hardware RAID, you know it will be a single write and the controller deals with the problems.
Re: [Asterisk-Users] Hardware for Asterisk
On Fri, 2004-01-16 at 16:55, Robert L Mathews wrote: At 1/16/04 7:25 AM, Andrew Kohlsmith [EMAIL PROTECTED] wrote: That's pure bullshit -- I use software RAID *specifically* because I value my data. I don't want to buy two hardware RAID controllers and have one sit on the shelf just in case the first dies... and if the second dies you're SOL, because they've lasted long enough that they're no longer available. Linux software RAID is available on any Linux system, and if the system blows up I can put the drives in another system and *not* worry about it not being detected.

Yeah, I couldn't agree more. We originally thought hardware RAID was the way to go, and we bought a couple of fully loaded Dell PowerEdge 2550s with SCSI hardware RAID 5 arrays at about $4500 a pop. We also bought a PowerEdge 600SC for around $900 with lots of disk space to use as a network backup machine (backing up the 2550s) with Linux software RAID 5. I've also had a crappy old desktop machine running Linux software RAID 1 for a couple of years. It turns out that the software RAID is just as reliable (more so, in fact -- we have had a number of lockups on the 2550s that appear to be due to the hardware RAID subsystem locking up, and the software RAID machines have never done that, even though the backup server does more disk I/O than the others). The software RAID on the 600SC is faster than the hardware RAID in bonnie tests.

I believe there is a recall option on those machines. So far no one has identified what exactly the problem is there. I was reading the aac-raid list for a while; some people point the finger at the firmware on the disks, and some at the drivers. Either way, there are a few machines that Dell acknowledges trouble with.
Re: [Asterisk-Users] Hardware for Asterisk
If you value your data, don't use software RAID. If you value performance, don't use software RAID. If you value uptime/stability, don't use any RAID on IDE.

That's pure bullshit -- I use software RAID *specifically* because I value my data. I don't want to buy two hardware RAID controllers and have one sit on the shelf just in case the first dies... and if the second dies you're SOL, because they've lasted long enough that they're no longer available. Linux software RAID is available on any Linux system, and if the system blows up I can put the drives in another system and *not* worry about it not being detected.

As far as performance goes, I have some bonnie++ tests I've run showing that, at least on the few systems I've tested, software RAID 1 beat out hardware RAID 1 (these systems were IDE, SCSI-2, and Ultra320, with DPT RAID controllers for SCSI on P4 and I think regular Promise IDE RAID controllers on P3) -- not a huge difference in speed, but one that at least tosses your "if you value performance, don't use software raid" argument. Perhaps on a _heavily_ loaded server you might be right, but then again I feel that you're stupid for letting a server get so loaded up that it can't handle the simple mirroring algorithms, in addition to normal file-serving functions, without degrading performance to a noticeable degree.

I used to believe that HW RAID was the only way to go. With RAID5 I still feel that is true to an extent. However, if you're just mirroring there is _no_ significant advantage to choosing hardware RAID over software RAID. Not on IDE, and not on SCSI. In fact, there are advantages to choosing software RAID over hardware RAID, as I've mentioned above.

What matters as far as the computers being used is that you are unlikely to get your hands on a real server-class motherboard without having bought it in a Dell or Compaq. It also matters as to the supporting hardware.

Again I call bullshit -- Where do you think Dell and Compaq get their motherboards from? (OK, Compaq might actually manufacture them.) I can get server-class motherboards from Asus, Gigabyte, Intel, Tyan, and a host of manufacturers without having to buy into the proprietary nature of anything Name Brand.

If the PSU isn't quality enough, then it doesn't matter what motherboard you use. Dell doesn't want to deal with your system after sales; they will put a few extra dimes into the PSU so it stays in shape for a few more years. The companies you are most likely to purchase a case from will usually expect you not to come after them if the PSU fails, so why would they bother to spend the extra money to make the PSU last longer?

I can also put some extra dimes into the power supply... or fans... or anything. Dell/Compaq/whoever does not mean high quality by default.

Also Dell is more likely to have a part to fix your machine in the mail within hours, instead of you waiting till you can get to the store to purchase your replacement part before RMAing the part and waiting the couple of weeks for the replacement.

This is true. In general, you get what you pay for, and less so when you go bargain hunting. It all comes down to the same old problem of figuring out what your time and downtime are worth.

Agreed. Personally I'd rather have a complete second system on the shelf that I can swap out within 15 minutes than rely on anyone plus a courier, but that's just me.

Regards, Andrew
Re: [Asterisk-Users] Hardware for Asterisk
On Fri, Jan 16, 2004 at 07:47:34AM -0500, Andrew Kohlsmith said: I can also put some extra dimes into the power supply... or fans... or anything. Dell/Compaq/whoever does not mean high quality by default.

In fact, they generally use the CHEAPEST parts they can find to keep costs down. Dell's low price point is low for a reason.

Also Dell is more likely to have a part to fix your machine in the mail within hours, instead of you waiting till you can get to the store to purchase your replacement part before RMAing the part and waiting the couple of weeks for the replacement. This is true.

Well, overnight anyway. But it's also true that if you use standard off-the-shelf items, you can get replacement parts within hours or even minutes, depending how close you are to a computer store. So many companies like HPaq and Dell use custom parts -- you can't just drop a generic replacement in.

In general, you get what you pay for, and less so when you go bargain hunting. It all comes down to the same old problem of figuring out what your time and downtime are worth.

When I build my systems from scratch, I don't generally go bargain hunting; I generally buy the best quality parts I can. This doesn't mean that I buy the most expensive parts, however. When you buy a complete system, you don't know WHAT you will get. eMachines, for example -- but Gateway and Dell have the same issues.
Re: [Asterisk-Users] Hardware for Asterisk
On Fri, 2004-01-16 at 06:47, Andrew Kohlsmith wrote: If you value your data, don't use software RAID. If you value performance, don't use software RAID. If you value uptime/stability, don't use any RAID on IDE.

That's pure bullshit -- I use software RAID *specifically* because I value my data. I don't want to buy two hardware RAID controllers and have one sit on the shelf just in case the first dies... and if the second dies you're SOL, because they've lasted long enough that they're no longer available. Linux software RAID is available on any Linux system, and if the system blows up I can put the drives in another system and *not* worry about it not being detected.

As far as performance goes, I have some bonnie++ tests I've run showing that, at least on the few systems I've tested, software RAID 1 beat out hardware RAID 1 (these systems were IDE, SCSI-2, and Ultra320, with DPT RAID controllers for SCSI on P4 and I think regular Promise IDE RAID controllers on P3) -- not a huge difference in speed, but one that at least tosses your "if you value performance, don't use software raid" argument. Perhaps on a _heavily_ loaded server you might be right, but then again I feel that you're stupid for letting a server get so loaded up that it can't handle the simple mirroring algorithms, in addition to normal file-serving functions, without degrading performance to a noticeable degree.

I used to believe that HW RAID was the only way to go. With RAID5 I still feel that is true to an extent. However, if you're just mirroring there is _no_ significant advantage to choosing hardware RAID over software RAID. Not on IDE, and not on SCSI. In fact, there are advantages to choosing software RAID over hardware RAID, as I've mentioned above.

Have you experienced a hardware failure yet that you had to come back from? If you lose a drive, there is a high probability that you will lose the controller.
So unless you have an add-on card, or some motherboard with 4 IDE ports, you will corrupt the second drive of a mirror. If the second drive is corrupted, then you are only a hair above not having anything. If you don't trust that, check out the GOOD IDE RAID controllers. You are only allowed to place 1 drive per port, and they only use 1 port on an IDE controller. Even the large NAS devices that use IDE have the IDE controller built into the sled that holds the drive, and use PCI hotswap technology.

I don't buy that any truly redundant RAID system is as fast in software as in hardware on a machine doing anything significant. In RAID 1, you are writing all data to the drives twice or more. In a read environment, it might be able to share the load out to more than one drive and help, but I don't expect it would be much better than a dedicated controller handling the load. Any load from a software RAID solution takes processor time away from the processes it is trying to complete. So take our VoIP application: if I am spending time getting the voice recording to 2 or more drives, plus running the software to get it there, you have significantly reduced the amount of time available to the CPU to handle the VoIP packets in a timely manner. This only gets worse as call volume goes up. If it is hardware RAID, you know it will be a single write and the controller deals with the problems.

What matters as far as the computers being used is that you are unlikely to get your hands on a real server-class motherboard without having bought it in a Dell or Compaq. It also matters as to the supporting hardware.

Again I call bullshit -- Where do you think Dell and Compaq get their motherboards from? (OK, Compaq might actually manufacture them.) I can get server-class motherboards from Asus, Gigabyte, Intel, Tyan, and a host of manufacturers without having to buy into the proprietary nature of anything Name Brand.

On server hardware, Dell has their own boards. IBM had their own boards. Compaq and HP also produce their own boards. Maybe they don't produce their own boards in the desktop models, but they do in the server-class machines. While you can buy Intel, Tyan, and SuperMicro boards, I wouldn't consider any of the remaining ones you list as truly server class.

If the PSU isn't quality enough, then it doesn't matter what motherboard you use. Dell doesn't want to deal with your system after sales; they will put a few extra dimes into the PSU so it stays in shape for a few more years. The companies you are most likely to purchase a case from will usually expect you not to come after them if the PSU fails, so why would they bother to spend the extra money to make the PSU last longer?

I can also put some extra dimes into the power supply... or fans... or anything. Dell/Compaq/whoever does not mean high quality by default.

Maybe not by default, but if you get into the hot-swap PSUs you absolutely are talking quality.

Also Dell is more likely to have a part to fix your machine in the mail within hours.
Re: [Asterisk-Users] Hardware for Asterisk
At 1/16/04 7:25 AM, Andrew Kohlsmith [EMAIL PROTECTED] wrote: That's pure bullshit -- I use software RAID *specifically* because I value my data. I don't want to buy two hardware RAID controllers and have one sit on the shelf just in case the first dies... and if the second dies you're SOL, because they've lasted long enough that they're no longer available. Linux software RAID is available on any Linux system, and if the system blows up I can put the drives in another system and *not* worry about it not being detected.

Yeah, I couldn't agree more. We originally thought hardware RAID was the way to go, and we bought a couple of fully loaded Dell PowerEdge 2550s with SCSI hardware RAID 5 arrays at about $4500 a pop. We also bought a PowerEdge 600SC for around $900 with lots of disk space to use as a network backup machine (backing up the 2550s) with Linux software RAID 5. I've also had a crappy old desktop machine running Linux software RAID 1 for a couple of years.

It turns out that the software RAID is just as reliable (more so, in fact -- we have had a number of lockups on the 2550s that appear to be due to the hardware RAID subsystem locking up, and the software RAID machines have never done that, even though the backup server does more disk I/O than the others). The software RAID on the 600SC is faster than the hardware RAID in bonnie tests.

In addition, the Dell PowerEdge mailing lists are full of people with horror stories about their hardware RAID systems -- if that dies on mine, I'm screwed until I can convince Dell to come out and fix it (which they often won't do until they've spent hours on the phone with you trying various things). We should have simply bought 4 600SCs (instead of 2 2550s and a 600SC), using one as a hot standby, and saved ourselves around $6000. In fact, we're planning on moving to that and selling the 2550s on eBay to improve our overall reliability.
If the power supply, motherboard or RAM of a 600SC dies, we can easily move the disks to the spare machine and be back up within a few minutes without relying on anyone else. In the worst case (RAID corruption / machine catches on fire), I'm still going to be okay, because I can restore from backups in a couple of hours.

The key thing to me is that at no point do we have to rely on any other company to get things up and running again, which is far more important than any putative risk of data corruption from software RAID (which I have not seen even under very heavy disk loads, and which I think is pretty much a myth these days; look at the Dell PowerEdge mailing lists if you think hardware RAID is more reliable -- those stories of hardware RAID problems from real users have scared me to the point that I'll never consider buying any sort of proprietary disk subsystem again).

-- Robert L Mathews, Tiger Technologies http://www.tigertech.net/
"I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." -- Charles Babbage
Re: [Asterisk-Users] Hardware for Asterisk
Software RAID vs. Hardware RAID??? There IS no hardware RAID; it's all software. The difference is only where the software lives: in ROM on the controller card, in the RAID box, or in a Linux driver.

If you go top of the line and buy a NetApp network-attached storage box, I think it is just BSD running on Intel hardware, but all closed up so it looks like a turn-key system. Same with Sun: their hardware RAID has a SPARC CPU in the RAID box.

For Asterisk all you would need is a simple disk mirror at most.

= Chris Albertson Home: 310-376-1029 [EMAIL PROTECTED] Cell: 310-990-7550 Office: 310-336-5189 [EMAIL PROTECTED] KG6OMK
[Asterisk-Users] Hardware for Asterisk
I am real close to finalizing my hardware selection for my Asterisk test machine. I am going to use the following hardware:

Dell 400SC w/ Red Hat 9.0
1 - 4-Port TDM40B Card (FXS)
3 - Wildcard X100P Cards (FXO)

Are there any known conflicts using this setup in this machine? I will be occupying all the PCI slots for this configuration. Also, is it worth the trouble to tie Asterisk into our present system, which is a Panasonic D816 Hybrid System, or should I just dump our current Panasonic system altogether?

Thanks, Charles Alvis Internet Technology Group, Inc. Redmond, WA
Re: [Asterisk-Users] Hardware for Asterisk
--- calvis [EMAIL PROTECTED] wrote: I am real close to finalizing my hardware selection for my Asterisk test machine. I am going to use the following hardware: Dell 400SC w/ Red Hat 9.0, 1 - 4-Port TDM40B Card (FXS), 3 - Wildcard X100P Cards (FXO)

It does not matter if the PC is a Dell, a Compaq, or you built it yourself. What matters is the motherboard that Dell is using. Find out if there is a way to assign each Digium card its own interrupt. Don't bother with the RAID controller; it will not work with Linux. But Linux has its own RAID in software.

= Chris Albertson Home: 310-376-1029 [EMAIL PROTECTED] Cell: 310-990-7550 Office: 310-336-5189 [EMAIL PROTECTED] KG6OMK
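On the interrupt question: a quick way to see whether cards ended up sharing an IRQ is to look at /proc/interrupts once the drivers are loaded. A minimal sketch follows; the IRQ numbers and the wcfxo/wctdm driver names are canned sample data (your output will differ), shown so the check itself is concrete. Any line listing two driver names is a shared interrupt, which is what you want to avoid for the Digium cards.

```shell
#!/bin/sh
# Sketch: flag shared IRQs in /proc/interrupts-style output.
# INTERRUPTS is a canned sample; on a real system you would use:
#   INTERRUPTS=$(cat /proc/interrupts)
INTERRUPTS=' 10:     123456    XT-PIC  wcfxo, eth0
 11:     654321    XT-PIC  wctdm'

# A comma in the device column means two drivers share that IRQ.
echo "$INTERRUPTS" | awk '/,/ { print "shared IRQ:", $0 }'
```

If a Digium card shows up on a shared line, moving it to a different PCI slot (or reserving the IRQ in the BIOS) is the usual remedy.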
Re: [Asterisk-Users] Hardware for Asterisk
On Thu, 2004-01-15 at 17:44, Chris Albertson wrote: --- calvis [EMAIL PROTECTED] wrote: I am real close to finalizing my hardware selection for my Asterisk test machine. I am going to use the following hardware: Dell 400SC w/ Red Hat 9.0, 1 - 4-Port TDM40B Card (FXS), 3 - Wildcard X100P Cards (FXO)

It does not matter if the PC is a Dell, a Compaq, or you built it yourself. What matters is the motherboard that Dell is using. Find out if there is a way to assign each Digium card its own interrupt. Don't bother with the RAID controller; it will not work with Linux. But Linux has its own RAID in software.

If you value your data, don't use software RAID. If you value performance, don't use software RAID. If you value uptime/stability, don't use any RAID on IDE.

What matters as far as the computers being used is that you are unlikely to get your hands on a real server-class motherboard without having bought it in a Dell or Compaq. It also matters as to the supporting hardware. If the PSU isn't quality enough, then it doesn't matter what motherboard you use. Dell doesn't want to deal with your system after sales; they will put a few extra dimes into the PSU so it stays in shape for a few more years. The companies you are most likely to purchase a case from will usually expect you not to come after them if the PSU fails, so why would they bother to spend the extra money to make the PSU last longer?

Also, Dell is more likely to have a part to fix your machine in the mail within hours, instead of you waiting till you can get to the store to purchase your replacement part before RMAing the part and waiting the couple of weeks for the replacement.

In general, you get what you pay for, and less so when you go bargain hunting. It all comes down to the same old problem of figuring out what your time and downtime are worth.
[Asterisk-Users] hardware requirements - asterisk
In relation to voice degradation when having 2 or more connections to Asterisk: the comment on the network setup is quite possible. I am not too familiar with Linux. How do I check whether the Asterisk server's NIC is running in full-duplex mode? Does Asterisk use the sound card on the box to do voice processing? I am running X-Lite on 2 PCs and making calls through IAX, FWD, and back to my incoming call menu. Voice degradation happens.

David Kwok
RE: [Asterisk-Users] hardware requirements - asterisk
What is your internet connection speed, up and down? That could be your problem: the traffic.

Jimmy Riley Network Administrator VeriCore 985-626-1701 X1103

-----Original Message----- From: dkwok [mailto:[EMAIL PROTECTED]] Sent: January 15, 2004 1:23 AM To: [EMAIL PROTECTED] Subject: [Asterisk-Users] hardware requirements - asterisk

In relation to voice degradation when having 2 or more connections to Asterisk: the comment on the network setup is quite possible. I am not too familiar with Linux. How do I check whether the Asterisk server's NIC is running in full-duplex mode? Does Asterisk use the sound card on the box to do voice processing? I am running X-Lite on 2 PCs and making calls through IAX, FWD, and back to my incoming call menu. Voice degradation happens. David Kwok
Re: [Asterisk-Users] hardware requirements - asterisk
dkwok wrote: In relation to voice degradation when having 2 or more connections to Asterisk: the comment on the network setup is quite possible. I am not too familiar with Linux. How do I check whether the Asterisk server's NIC is running in full-duplex mode? Does Asterisk use the sound card on the box to do voice processing? I am running X-Lite on 2 PCs and making calls through IAX, FWD, and back to my incoming call menu. Voice degradation happens. David Kwok

David, I too am new to Asterisk. However, I know how to check the NIC: use the mii-tool command.

/glen
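To make that concrete, here is a sketch of deciding the duplex setting from the kind of status line mii-tool prints. The line below is a canned sample, and the interface name eth0 is an assumption; on the real box you would run `mii-tool eth0` as root (or `ethtool eth0` on systems where mii-tool is unavailable).

```shell
#!/bin/sh
# Sketch: decide duplex from a mii-tool-style status line.
# OUT is a canned sample; on a real system: OUT=$(mii-tool eth0)
OUT='eth0: negotiated 100baseTx-FD flow-control, link ok'

# mii-tool tags the media type with -FD (full duplex) or -HD
# (half duplex); a half-duplex mismatch against a full-duplex switch
# port is a classic cause of packet loss and choppy VoIP audio.
case "$OUT" in
    *-FD*) echo "full duplex" ;;
    *-HD*) echo "half duplex" ;;
    *)     echo "duplex unknown" ;;
esac
```

If it reports half duplex against a switch you believe is full duplex, forcing the setting (`mii-tool -F 100baseTx-FD eth0`) or fixing autonegotiation on the switch side is the usual next step.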