Re: freebsd server limits question
Hi there Muhammet,

What are the contents of the following files on your CentOS 6.x shards: /etc/security/limits.conf and /etc/security/limits.d/90-nproc.conf?

What version of MongoDB are you running? Is it from packages (if so, whose), or is it self-compiled?

Have you tried running the MongoDB shards on the most recent CentOS 5.x release? If so, what differences do you note, if any? This could help diagnose the source of your problems.

Also, what is the current stack size of your MongoDB shards (set via the -s parameter)? And lastly, what is the system load like at the heaviest transaction points (vmstat and iostat can help you out there)?

If this is a brand-name server, what is the exact model and hardware configuration? Are you running 32-bit or 64-bit instances of MongoDB on 32-bit or 64-bit CentOS 6.x?

Regards,...
Ross Cameron
eMail : ross.came...@unix.net
Phone : +27 (0)79 491-9954

On Mon, Jan 2, 2012 at 9:12 PM, Muhammet S. AYDIN <whalb...@gmail.com> wrote:

> Hello everyone. My first post here, and I'd like to thank everyone who is involved in the FreeBSD project. We are using FreeBSD on our web servers and we are very happy with it.
>
> We have an online messaging application that is using mongodb. Our members send messages to The Voice (Turkish version) show's contestants. Our two mongodb instances ended up on two CentOS 6 servers. We have failed. So hard. There were announcements and calls made live on TV. We had 30K+/sec visitors to the app. When I looked at the mongodb errors, I had thousands of these: http://pastie.org/private/nd681sndos0bednzjea0g.
>
> You may be wondering why I'm telling you about CentOS. Well, we are making the switch from CentOS to FreeBSD. I would like to know: what are our limits? How can we set things up so our FreeBSD servers can handle at least 20K connections (mongodb's connection limit)? Our two servers have 24-core CPUs and 32 GB of RAM.
>
> We are also very open to suggestions. Please help me out here so we don't fail deadly, again.
>
> ps. This question was asked in the forums as well; however, as someone suggested in the forums, I am posting it here too.
>
> --
> Muhammet S. AYDIN
> http://compector.com
> http://mengu.net

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
RE: freebsd server limits question
-----Original Message-----
From: owner-freebsd-questi...@freebsd.org [mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Muhammet S. AYDIN
Sent: Monday, January 02, 2012 11:13 AM
To: freebsd-questions@freebsd.org
Subject: freebsd server limits question

> [original question quoted in full; trimmed]

We have similar hardware (24-core CPUs, but 48 GB of RAM instead of 32). NOTE: the machine has 2x igb(4) interfaces, and we're negotiating at 1000baseTX Gigabit full-duplex link speed. We had similar problems, but have had zero problems in the past 2 months under high load (read below).
ASIDE: We're using FreeBSD 8.1-RELEASE-p6.

We found that the following tweaks had to be made in /etc/sysctl.conf (values are shown as "old -> new"):

    ### Network Tuning ###
    # Increase TCP maximum segment lifetime
    net.inet.tcp.msl=15000
    # Increase TCP time before keepalive probes again
    net.inet.tcp.keepidle=30
    # Increase maximum number of mbuf clusters allowed (174808 -> 32768)
    kern.ipc.nmbclusters=32768
    # Increase by 8 times the maximum socket buffer size (262144 -> 2097152)
    kern.ipc.maxsockbuf=2097152
    # Increase by 64 times the max pending socket conn. queue size (128 -> 8192)
    kern.ipc.somaxconn=8192
    # Increase by ~8 times the maximum number of [open] files (8232 -> 65536)
    kern.maxfiles=65536
    # Increase by ~4 times the max files allowed open per process (7408 -> 32768)
    kern.maxfilesperproc=32768
    # Disable delay of ACK to try and piggyback it onto a data packet (1 -> 0)
    net.inet.tcp.delayed_ack=0
    # Increase by ~2 times the maximum outgoing TCP datagram size (32768 -> 65535)
    net.inet.tcp.sendspace=65535
    # Increase maximum space for incoming UDP datagrams (41600 -> 65535)
    net.inet.udp.recvspace=65535
    # Increase by ~6 times the maximum outgoing UDP datagram size (9216 -> 57344)
    net.inet.udp.maxdgram=57344
    # Increase by ~8 times the default stream receive space (8192 -> 65535)
    net.local.stream.recvspace=65535
    # Increase by ~8 times the default stream send space (8192 -> 65535)
    net.local.stream.sendspace=65535

Meanwhile, yet more tweaks go into /boot/loader.conf:

    ### Process/Memory Tuning ###
    # Increase by 4 times the maximum data size (536870912 -> 2147483648)
    kern.maxdsiz=2147483648
    # Increase by 4 times the maximum stack size (67108864 -> 268435456)
    kern.maxssiz=268435456

    ### Network Tuning ###
    # Increase maximum outgoing Netgraph datagram size (20480 -> 45000)
    net.graph.maxdgram=45000
    # Increase maximum space for incoming Netgraph datagrams (20480 -> 45000)
    net.graph.recvspace=45000
    # Increase by 128 times the max num of data queue items to allocate (512 -> 65536)
    net.graph.maxdata=65536

With the above tweaks in place for both sysctl.conf(5) and loader.conf(5), all our problems are gone. Your mileage may vary, but I suspect that the above collection of tweaks will work well for you. They should be safe for both 32-bit (regular and PAE) and 64-bit (all tested). However, if you are the cautious type, I would recommend adding one tweak at a time, rebooting after each.

--
Devin
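[Editor's note] Before committing the tweaks above to /etc/sysctl.conf, it can help to see how far your current values are from the proposed ones. A small sketch (the helper name is mine; it assumes FreeBSD's sysctl(8), and on other systems unknown OIDs simply report "n/a"):

```shell
#!/bin/sh
# Compare current kernel limits against a few of the proposed values.
# 'sysctl -n' prints only the value; failures fall back to "n/a".
check_limits() {
    for entry in \
        kern.ipc.somaxconn=8192 \
        kern.maxfiles=65536 \
        kern.maxfilesperproc=32768 \
        kern.ipc.maxsockbuf=2097152
    do
        oid=${entry%%=*}
        want=${entry#*=}
        cur=$(sysctl -n "$oid" 2>/dev/null || echo "n/a")
        printf '%s: current=%s proposed=%s\n' "$oid" "$cur" "$want"
    done
}

check_limits
```

Reading the current values first also makes the one-tweak-at-a-time approach Devin recommends much easier to track.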
Re: freebsd server limits question
Hello... I suppose you are using the 64-bit version of FreeBSD, and at least version 8.2. What happens is that you have exhausted the thread limit of your application: your system is unable to create more threads for that application. The command:

    sysctl -a | grep thread

will show how they are set up in your system. Mine has:

    kern.threads.max_threads_hits: 0
    kern.threads.max_threads_per_proc: 1500
    vm.stats.vm.v_kthreadpages: 0
    vm.stats.vm.v_kthreads: 24
    vfs.nfsrv.minthreads: 4
    vfs.nfsrv.maxthreads: 4
    vfs.nfsrv.threads: 4
    net.isr.numthreads: 1
    net.isr.bindthreads: 0
    net.isr.maxthreads: 1

Note that the number of threads per proc is 1500 here (a notebook). To increase the number of threads, edit the file /etc/sysctl.conf and add a line:

    kern.threads.max_threads_per_proc=9000

and then run:

    /etc/rc.d/sysctl restart

Hope this will help.
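[Editor's note] The advice above can be wrapped in a small sketch that only emits the sysctl.conf line when the current limit is actually below the target (the function name and the 9000 target are illustrative; kern.threads.max_threads_per_proc is a FreeBSD OID, so on other systems the query fails and the current value is treated as 0):

```shell
#!/bin/sh
# Emit the /etc/sysctl.conf line only if the running limit is too low.
suggest_thread_limit() {
    target=$1
    cur=$(sysctl -n kern.threads.max_threads_per_proc 2>/dev/null || echo 0)
    if [ "$cur" -lt "$target" ]; then
        echo "kern.threads.max_threads_per_proc=$target"
    else
        echo "# kern.threads.max_threads_per_proc is already $cur (>= $target)"
    fi
}

suggest_thread_limit 9000
```

Appending the emitted line to /etc/sysctl.conf and running /etc/rc.d/sysctl restart, as described above, applies it without a reboot.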
Re: freebsd server limits question
At 20:12 02/01/2012, Muhammet S. AYDIN wrote:

> [original question quoted in full; trimmed]

Is your app limited by CPU or by I/O? What does vmstat/iostat say about your disk usage? Perhaps mongodb fails to read/write fast enough, and making the process thread pool bigger will only make the problem worse: there will be more threads trying to read/write. Have you already tuned mongodb?

Please post more info: several lines (not the first one) of iostat and vmstat would be a start, plus your disk configuration, RAID, etc.

L
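[Editor's note] A quick sketch of collecting the samples Eduardo asks for (the 1-second interval, 5-sample count, and /tmp paths are arbitrary choices; '|| true' keeps the script going if a tool is unavailable). As he notes, skip the first data row, since it reports averages since boot rather than current load:

```shell
#!/bin/sh
# Grab short vmstat/iostat captures to post alongside the question.
collect_stats() {
    vmstat 1 5 > /tmp/vmstat.out 2>/dev/null || true
    iostat 1 5 > /tmp/iostat.out 2>/dev/null || true
    echo "collected: /tmp/vmstat.out /tmp/iostat.out"
}

collect_stats
```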
Re: freebsd server limits question
To deal with this kind of traffic you will most likely need to set up a MongoDB cluster of more than a few instances... much better. There should be A LOT of info on how to scale mongo to the level you are looking for, but most likely you will find that on ruby forums, NOT on *NIX boards. The OS boards/focus will help you with fine tuning, but all the fine tuning in the world will not solve an app architecture issue.

I have set up MASSIVE mongo/ruby installs for testing that can do this sort of volume with ease. The stack looks something like this:

    Nginx
    Unicorn
    Sinatra
    MongoMapper
    MongoDB

Only one Nginx instance can feed an almost arbitrary number of Unicorn/Sinatra/MongoMapper instances, which can in turn feed a properly configured MongoDB cluster with pre-allocated key distribution so that the incoming inserts are spread evenly across the cluster instances. Even if you do not use ruby, that community will have scads of info on scaling MongoDB.

One more comment related to L's advice - true, you DO NOT want more transactions queued up if your back-end resources cannot handle the TPS; this will just make the issue harder to isolate and potentially make the recovery more difficult. Better to reject the connection at the front end than take it and blow up the app/system. The beauty of the Nginx/Unicorn solution (Unicorn is ruby-specific) is that there is no queue that is fed to the workers: when there are no workers, the request is rejected. The unicorn worker model can be reproduced for any other implementation environment (PHP/Perl/C/etc.) outside of ruby in about 30 minutes. It's simple, and Nginx is very well suited to low-overhead reverse proxying for this kind of setup.

Wishing you the best - if I can be of more help let me know...

RB

On Jan 2, 2012, at 3:38 PM, Eduardo Morras wrote:

> [earlier messages quoted in full; trimmed]
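[Editor's note] A minimal front-end sketch of the reject-rather-than-queue idea RB describes. The upstream addresses, ports, and timeout values here are illustrative assumptions, not from the thread:

```nginx
worker_processes  4;

events {
    worker_connections  8192;
}

http {
    # Hypothetical pool of app-server workers (Unicorn or equivalent).
    upstream app_workers {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    server {
        # backlog caps the pending-connection queue (cf. kern.ipc.somaxconn)
        listen 80 backlog=8192;

        location / {
            proxy_pass http://app_workers;
            # Fail fast instead of queueing when the workers are saturated.
            proxy_connect_timeout 1s;
            proxy_read_timeout    5s;
        }
    }
}
```

The point of the short timeouts is exactly RB's: a saturated back end surfaces as quick, visible rejections at the edge rather than an ever-growing queue that eventually blows up the app.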
Re: freebsd server limits question
On Jan 2, 2012, at 4:21 PM, Robert Boyer wrote:

> To deal with this kind of traffic you will most likely need to set up a MongoDB cluster of more than a few instances... much better. There should be A LOT of info on how to scale mongo to the level you are looking for, but most likely you will find that on ruby forums, NOT on *NIX boards.

Suggest hitting up 10gen as well; they usually have some knowledgeable individuals available to talk mongo...

Cheers,
m
Re: freebsd server limits question
Sorry, one more thought and a clarification...

I have found that it is best to run mongos with each app server instance; most of the mongo interface libraries aren't intelligent about the way they distribute requests to available mongos processes. mongos processes are also relatively lightweight and need no coordination or synchronization with each other, which simplifies things a lot and makes any potential bugs/complexity with app server/MongoDB connection logic just go away.

It's pretty important when configuring shards to take on the write volume that you do your best to pre-allocate chunks and avoid chunk migrations during your traffic floods - not hard to do at all.

There are also about a million different ways to deal with atomicity (if that is a word), and a very mongo-specific way of ensuring writes actually made it to disk somewhere. From your brief description of the app in question, it does not sound too critical to ensure every single solitary piece of data persists no matter what, as I am assuming most of it is irrelevant, and becomes completely irrelevant after the show or some time thereafter. Most of the programming and config examples make the opposite assumption, in that they assume each transaction MUST be completely durable; if you forgo that, you can get screaming TPS out of a mongo shard.

Also, if you do not find what you are looking for via a ruby support group, the JS and node.js community may also be of assistance, but they tend to have a very narrow view of the world... ;-)

RB

On Jan 2, 2012, at 4:21 PM, Robert Boyer wrote:

> [earlier messages quoted in full; trimmed]
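[Editor's note] A rough mongo-shell sketch of the chunk pre-allocation idea RB describes. The database, collection, and shard-key names are hypothetical, and this assumes a sharded cluster reached through mongos, so treat it as illustrative rather than directly runnable:

```javascript
// Run in the mongo shell connected to a mongos of a sharded cluster.
sh.enableSharding("voiceapp");
sh.shardCollection("voiceapp.messages", { userId: 1 });

// Pre-split the key range so the initial write flood is spread across
// shards from the start, instead of landing on one chunk and forcing
// costly migrations mid-flood. Split points are evenly spaced guesses
// over an assumed numeric userId range.
for (var i = 1; i < 10; i++) {
    sh.splitAt("voiceapp.messages", { userId: i * 100000 });
}
```

Whether evenly spaced split points match your real key distribution is the crucial assumption; pick them from the actual shape of your data.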
Re: freebsd server limits question
Just realized that the MongoDB site now has some recipes up for what you really need to do to make sure you can handle a lot of incoming new documents concurrently... Boy, you had to figure this stuff out yourself just last year - I guess the mongo community has come a very long way...

Splitting Shard Chunks - MongoDB

enjoy...

RB

On Jan 2, 2012, at 5:38 PM, Robert Boyer wrote:

> [earlier messages quoted in full; trimmed]