Dear Neha,

1. You need to enter 90 in both fields: “Concurrent Users” under the “Driver” tab and “Loaded for Concurrent Users” under the “Data Servers” tab. The benchmark scales your number by a factor of 100, so you get 90*100 = 9000 users.
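For illustration, here is a minimal sketch of how the two settings relate (the variable names below are mine, not actual Faban/Cloudstone parameters; the factor of 100 is the scaling described above):

# Hypothetical illustration of the scaling rule described above (Python).
SCALE_FACTOR = 100              # each unit of load scale corresponds to 100 users in the database

concurrent_users = 90           # value entered in "Concurrent Users" under the "Driver" tab
loaded_for_concurrent = 90      # value entered in "Loaded for Concurrent Users" under the "Data Servers" tab

# Number of users the database is actually populated with:
populated_users = loaded_for_concurrent * SCALE_FACTOR
print(populated_users)          # 9000

In other words, both fields take the unscaled value (90); the tool applies the x100 factor internally.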
2. There is no threshold for populating the database (except your available storage). You can find the answer here: http://www.mail-archive.com/cloudsuite@listes.epfl.ch/msg00050.html

Best regards,
Nooshin

> ________________________________________
> From: Neha Soni [nehaso...@gmail.com]
> Sent: Friday, March 20, 2015 5:30 AM
> To: Djordje Jevdjic
> Subject: Fwd: Queries on Cloudstone
>
> Hello Djordje,
>
> I am benchmarking web 2.0 applications on IaaS clouds using the Cloudstone
> benchmarking tool. I have a couple of questions, as described in the email
> below. Can you please respond to these questions?
>
> Appreciate your help.
>
> Thank you,
> Neha
>
> ---------- Forwarded message ----------
> From: "Neha Soni" <nehaso...@gmail.com>
> Date: Mar 19, 2015 10:12 PM
> Subject: Queries on Cloudstone
> To: <cloudsuite@listes.epfl.ch>
>
> Hello,
>
> I am benchmarking web 2.0 applications using the Cloudstone benchmark. I have
> a couple of questions:
>
> I entered 90 as the load scale, which means the database will be populated
> with 9000 (90*100) users. While running the benchmark, I am not sure what to
> enter in the ‘Loaded for concurrent users’ parameter under the ‘Data Servers’
> tab. If I enter 90, it does not allow me to run the benchmark for more than
> 90 users. On the other hand, if I enter 9000, it fails with a lot of
> exceptions when I enter more than 90 concurrent users in the ‘Driver’ tab.
>
> Can you please let me know what I should enter in ‘Loaded for concurrent
> users’ under the ‘Data Servers’ tab? 90 or 9000?
>
> Also, if I enter 9000 as the load scale, dbloader.sh never finishes. Can you
> please let me know if there is any threshold for this load scale value?
>
> Thank you,
>
> Neha