Thanks. We already implemented several of these (re Pascal's suggestion about concurrency being turned on: I had assumed that was the default, but was apparently wrong). Concurrency and 4 instances have dramatically reduced the occurrences of long wait times, down to something like 4 out of many thousands of accesses. I will be doing the separate thread/concurrency pool thing first thing in the AM.
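For reference, the "separate thread" idea discussed in this thread, reduced to a minimal plain-Java sketch: the EOF/OSC plumbing is omitted, and the NotificationSender name and the counting loop are illustrative stand-ins for the real APNs send. A single dedicated executor runs the long send so request threads only enqueue work and return immediately.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NotificationSender {
    // One dedicated worker thread; request threads never block on the send.
    private static final ExecutorService SENDER = Executors.newSingleThreadExecutor();
    static final AtomicInteger sent = new AtomicInteger();

    static void enqueueSend(final int deviceCount) {
        SENDER.submit(new Runnable() {
            public void run() {
                // In the real app: open one APNs connection, iterate the
                // devices, send one notification each, close. Here we count.
                for (int i = 0; i < deviceCount; i++) {
                    sent.incrementAndGet();
                }
            }
        });
    }

    public static void main(String[] args) throws Exception {
        enqueueSend(20000); // returns immediately; the send runs in background
        SENDER.shutdown();
        SENDER.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("sent=" + sent.get());
    }
}
```

In the real app the Runnable would receive EOGlobalIDs (not EOs) and use its own editing context on a separate OSC, per the advice elsewhere in the thread (e.g. via Wonder's ERXAbstractTask and its newEditingContext() method).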
Thanks a bunch! I would not have even known where to look without this list.

Sent from my iPad

On Jul 31, 2011, at 5:29 PM, Kieran Kelleher <kelleh...@gmail.com> wrote:

> IMMEDIATE BAND-AID 60 second FIX
> ========================
> 1. Go to Monitor, create a new 'Application' named "Admin" and select the
> current application executable path, launch an instance and use that Admin
> instance for sending notifications. (You don't need to copy the app, rename
> and redeploy to do this as you suggested!)
> 2. Restart your "Public" instance with properties in WOMonitor to (1) provide
> multiple OSCs for regular requests using the Wonder property for the ERXOSC
> pool, and (2) turn on concurrency.
> 3. If your public instance(s) struggle(s) on the next batch of notifications,
> add more instances.
>
>
> MEDIUM, LONG TERM IMPROVEMENT
> =======================
>
> You only have 76 MB of data; do you expect this to grow significantly in the
> next year or two?
>
> Your InnoDB memory allocation is 128MB, so your entire DB fits in memory,
> albeit there are a few settings that can be tweaked even as it is right now.
>
> For high concurrency on this small dataset, you are better off with a few
> CPUs and (if necessary to offset costs) less memory than 16GB. Not sure if
> your server is a physical machine or a VPS that can be reconfigured.
>
> Before I would make recommendations on config settings, you need to decide how
> much memory you want MySQL to use, based on projected peak data size over the
> next 12 months... 256MB, 512MB, 1GB, 2GB?
>
> From what I understand, your current issues are related to high concurrency
> in short periods during and after the times when you send your 20,000
> notifications, and IIRC, you have database/woa and apache on one machine with
> 16GB RAM and 1 CPU.
>
> Here are some recommendations to handle that concurrency better.
> You can do the first 3 right now, after the BAND-AID fix above gives you breathing space:
>
> * Turn on concurrency in your app.
> * Use ERXObjectStoreCoordinatorPool for handling regular request-response
> EOF. Start with 3 OSCs per instance and see how it goes.
> * Use a background thread for your NotificationsSending process with its own
> OSC (easy way: just extend ERXAbstractTask and use its newEditingContext()
> method in your task, or copy the logic in that class into your Runnable
> background task.)
> * If your server is a VPS that can have its configuration changed, consider
> more CPUs (4?) and less total memory, if that offsets the cost of more CPUs,
> with memory based on next-12-month peak traffic and data size expectations
> (4GB?)
>
> Then:
> * Monitor memory for your app instance(s) and adjust as needed.
> * You might find that you handle the peak traffic bursts better by having a
> few instances rather than just one.
> * Monitor, measure and adjust again.
>
> So an ideal config for your scenario right now might be something like this:
>
> MySQL: allocate 512MB, which covers 6x data-size growth, and edit my.cnf to
> make the best use of that 512MB.
> WOA App Instances: 4-6 instances x ???MB each
>
>
> Some other observations:
>
> TABLE INDICES
> -------------------
> * Every index adds time to inserts and updates and adds space to the database
> size, so there is no need for redundant indices. For example, the following
> join table has 3 indices, and all of them (single and compound) begin with
> 'app_dev_Id'; therefore this one, KEY `app_dev_Id` (`app_dev_Id`), is
> redundant. Either of the other two covers that index requirement.
>
> CREATE TABLE IF NOT EXISTS `app_dev_not_type` (
>   `app_dev_Id` bigint(20) NOT NULL default '0',
>   `not_type_id` int(11) NOT NULL default '0',
>   `active` int(11) default '1',
>   `create_date` datetime default NULL,
>   `modify_date` datetime default NULL,
>   PRIMARY KEY (`app_dev_Id`,`not_type_id`),
>   KEY `app_dev_Id` (`app_dev_Id`),
>   KEY `app_dev_Id_3` (`app_dev_Id`,`active`),
>   KEY `idx_not_type_id` (`not_type_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
>
> is the same as:
>
> CREATE TABLE IF NOT EXISTS `app_dev_not_type` (
>   `app_dev_Id` bigint(20) NOT NULL default '0',
>   `not_type_id` int(11) NOT NULL default '0',
>   `active` int(11) default '1',
>   `create_date` datetime default NULL,
>   `modify_date` datetime default NULL,
>   PRIMARY KEY (`app_dev_Id`,`not_type_id`),
>   KEY `app_dev_Id_3` (`app_dev_Id`,`active`),
>   KEY `idx_not_type_id` (`not_type_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
>
>
> * It is good practice to add an index for the reverse relationship in join
> tables, for example:
>
> CREATE TABLE IF NOT EXISTS `device_notification` (
>   `device_id` bigint(20) NOT NULL default '0',
>   `notification_id` bigint(20) NOT NULL default '0',
>   PRIMARY KEY (`device_id`,`notification_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
>
> You should add a compound index for the reverse relationship, so it should be:
>
> CREATE TABLE IF NOT EXISTS `device_notification` (
>   `device_id` bigint(20) NOT NULL default '0',
>   `notification_id` bigint(20) NOT NULL default '0',
>   PRIMARY KEY (`device_id`,`notification_id`),
>   KEY `reverse_rel` (`notification_id`,`device_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
>
>
> SUMMARY
> ==========
> Andrew, it seems your primary issue here is EOF concurrency and app
> concurrency in general during and immediately after you send your large
> group of notifications.
>
> HTH, Kieran
>
> On Jul 29, 2011, at 3:38 PM, Andrew Kinnie wrote:
>
>> OK, thanks.
>>
>> I have implemented a simple task version of my old utilities class to run
>> the sends, but I did not know about EOF being single-threaded or about the
>> single db connection. I will look at the ERXTaskObjectStoreCoordinatorPool.
>>
>> BIG help.
>>
>> I attached the query results and the show variables result.
>>
>> <query_result.csv><variables.csv><my.cnf><apns schema.sql>
>>
>> Note, based on Pascal's suggestion we have bumped the memory on MySQL, but
>> it is not reflected yet, and won't be until we restart it at 3:30 AM.
>> Hopefully there won't be any major breaking news at that time.
>>
>> On Jul 29, 2011, at 2:56 PM, Kieran Kelleher wrote:
>>
>>> Hi Andrew,
>>>
>>> Just to endorse what some have said, after reading this thread:
>>>
>>> 1) Concurrency must be ON.
>>> 2) For your 1-minute task, do it in a background thread and use a different
>>> OSC. Remember EOF is a single-threaded, single-db-connection stack. If you
>>> want high-concurrency performance, you cannot just use the default OSC.
>>> Use an ERXTaskObjectStoreCoordinatorPool just for tasks, even if it is
>>> just a pool of one.
>>>
>>> Also, if I get a few minutes later on or at the weekend, I can eyeball your
>>> setup for possible low-hanging fruit if you send the following:
>>>
>>> A) Send your /etc/my.cnf file to the list, and tell me how much total max
>>> memory you want MySQL to have. I will take a quick look at it to see if it
>>> looks OK.
>>>
>>> B) Send the output of the following SQL statement in a text file:
>>>
>>> select TABLE_SCHEMA, TABLE_NAME, TABLE_ROWS,
>>>     (DATA_LENGTH + INDEX_LENGTH)/1024/1024 as SIZE_IN_MB,
>>>     DATA_LENGTH/1024/1024 as DATA_SIZE_IN_MB,
>>>     INDEX_LENGTH/1024/1024 as INDEX_SIZE_IN_MB
>>> from information_schema.TABLES order by SIZE_IN_MB desc;
>>>
>>> C) Send the output of the following SQL statement in a text file:
>>>
>>> SHOW VARIABLES;
>>>
>>> D) Send the output (allschemas.sql) of the following CLI statement:
>>>
>>> mysqldump --all-databases --opt --no-data > allschemas.sql
>>>
>>> On Jul 29, 2011, at 10:56 AM, Andrew Kinnie wrote:
>>>
>>>> That will help, thanks!
>>>>
>>>> On Jul 29, 2011, at 10:55 AM, Alexis Tual wrote:
>>>>
>>>>> An example of all that John said is available here, thanks to Kieran:
>>>>>
>>>>> https://github.com/projectwonder/wonder/tree/master/Examples/Misc/BackgroundTasks
>>>>>
>>>>> Alex
>>>>>
>>>>> On Jul 29, 2011, at 4:52 PM, Andrew Kinnie wrote:
>>>>>
>>>>>> Thanks. I may give that a try. That was one of the other options I
>>>>>> thought of, but I was hoping to avoid a significant re-write.
>>>>>>
>>>>>> On Jul 29, 2011, at 10:44 AM, John & Kim Larson wrote:
>>>>>>
>>>>>>> Rather than increasing worker threads, why not just spawn a new Java
>>>>>>> thread for sending the notifications? That thread can run in the
>>>>>>> background while you're doing EO stuff, and free your app up to do the
>>>>>>> servicing of requests.
>>>>>>>
>>>>>>> If you go down this path, I always pass EOs to other threads as
>>>>>>> globalIDs to prevent problems. Also, make sure you don't lock the OSC
>>>>>>> for the app during your work, or your app will hang while other
>>>>>>> threads' ECs wait to get it. If this gets bad enough, use a separate
>>>>>>> OSC stack and dispose of it when you're done.
>>>>>>>
>>>>>>> John
>>>>>>>
>>>>>>> Sent from my iPhone
>>>>>>>
>>>>>>> On Jul 29, 2011, at 9:28 AM, Andrew Kinnie <akin...@mac.com> wrote:
>>>>>>>
>>>>>>>> Greetings
>>>>>>>>
>>>>>>>> I have a deployed app which serves as a push notification server for
>>>>>>>> our iOS app. It uses a recent ERRest (post-WOWODC) to provide access
>>>>>>>> to the data, which is located in a MySQL database (using InnoDB). The
>>>>>>>> model has entities for PushApplication (the iOS app),
>>>>>>>> ApplicationDevice (i.e. an iOS device which has our iOS app) and
>>>>>>>> Notification, and has a lookup table for NotificationType (5 rows).
>>>>>>>> Notification is a message, and there is a many-to-many with
>>>>>>>> ApplicationDevice along with a corresponding device_notification
>>>>>>>> table, as well as ApplicationDeviceNotificationType to allow
>>>>>>>> particular devices to have particular types of notifications turned
>>>>>>>> on or off.
>>>>>>>>
>>>>>>>> Our app is connected to by our editorial staff via a ColdFusion app
>>>>>>>> to send out breaking news alerts as push notifications. I then get
>>>>>>>> (via a fetch) all the devices which have that particular notification
>>>>>>>> type (basically 90% of our 20,000+ "installed" applicationDevices),
>>>>>>>> then I pass that array into a method which makes the connection to
>>>>>>>> Apple, iterates through the array sending one notification to each
>>>>>>>> device in turn, then closes the connection.
>>>>>>>>
>>>>>>>> It takes approximately 1 minute to send an alert to all 20,000 devices.
>>>>>>>>
>>>>>>>> While this happens, some of these devices are getting the push from
>>>>>>>> Apple (which is crazy fast about it), some of them are running the
>>>>>>>> app, and the iOS app itself is querying the server for details about
>>>>>>>> the notification and loading it in. However, if this happens while
>>>>>>>> the push is still in the process of sending (i.e.
>>>>>>>> within the 1 minute
>>>>>>>> time frame), the iOS app may be forced to wait for the send process
>>>>>>>> to finish (for as long as 60 seconds, presumably). It doesn't happen
>>>>>>>> all that often, because our app doesn't buzz or make a sound when it
>>>>>>>> receives a notification, but it is not ideal. We anticipate using
>>>>>>>> this same app and server for the Android version and for the upcoming
>>>>>>>> iPhone update, so the number of installed devices could increase
>>>>>>>> pretty dramatically. Currently it is iPad-only.
>>>>>>>>
>>>>>>>> So, we're trying to figure out what to do about it. Currently the app
>>>>>>>> is deployed on a CentOS server (single-core processor) which also
>>>>>>>> houses the db, but nothing else. It has 16 GB of RAM.
>>>>>>>>
>>>>>>>> We were considering:
>>>>>>>>
>>>>>>>> 1. Trying to increase the threads the app can create, but I'm not
>>>>>>>> sure that would fix it as much as mask it.
>>>>>>>> 2. Trying to run an additional copy of the app to send while the
>>>>>>>> other one handles the incoming client requests, but I am not sure how
>>>>>>>> to accomplish this other than copying the whole project, renaming it,
>>>>>>>> then deploying that. I am also not sure this would fix anything if in
>>>>>>>> fact the issue were locking in the database or JDBC or something of
>>>>>>>> that nature.
>>>>>>>> 3. Seeing if there was something easier, more efficient and less
>>>>>>>> kludgy-feeling than either of those. (Assuming either of those would
>>>>>>>> work anyway; we have some difficulty testing without sending out
>>>>>>>> 20,000 push notifications.)
>>>>>>>>
>>>>>>>> Anyone have any insight?
>>>>>>>>
>>>>>>>> Andrew
>>>>>>>> _______________________________________________
>>>>>>>> Do not post admin requests to the list. They will be ignored.
>>>>>>>> Webobjects-dev mailing list (Webobjects-dev@lists.apple.com)
>>>>>>>> Help/Unsubscribe/Update your Subscription:
>>>>>>>> http://lists.apple.com/mailman/options/webobjects-dev/the_larsons%40mac.com
>>>>>>>>
>>>>>>>> This email sent to the_lars...@mac.com
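For reference, the kind of my.cnf tuning discussed in the thread, as a hedged sketch of the InnoDB settings that typically matter at this scale. The values are illustrative, sized to the 512MB figure Kieran suggests; verify the variable names and defaults against your MySQL version before applying.

```ini
[mysqld]
# Buffer pool sized to projected peak data (~6x the current 76 MB dataset)
innodb_buffer_pool_size        = 512M
# Larger redo logs smooth out bursty writes such as 20,000 notification inserts
innodb_log_file_size           = 128M
# 2 trades strict per-commit durability for throughput; use 1 (the default)
# if every commit must survive an OS crash
innodb_flush_log_at_trx_commit = 2
```

Note that on MySQL versions of that era (5.1/5.5), changing innodb_log_file_size requires a clean shutdown and removal of the old ib_logfile* files before restarting.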