Hi Jiri, I see your concerns, but with any automated solution there will be special cases that need separate handling. Our main goal is to relieve our NOC and event management teams from manually starting/stopping monitoring. For the most part the automated unavailability records have been working fine for us, except for a few cases where bulk CIs are impacted by a change ticket. We turn off monitoring for those out of Remedy.
--> In our case specific servers are scheduled to be patched on certain dates, in bulk, one batch at a time. Each patching date gets its own change ticket, i.e. one change ticket per window, and approvals are for that window only. These are usually scheduled over weekends, so the maximum window per change ticket is about a day or two, not weeks. The CAB also gives the application teams a heads-up about any long windows, and the application or implementation teams can change the date/time on the specific unavailability record, so they have the option to fine-tune the timings if they are really concerned about a long window.

--> Most of our critical apps have their own hardware, so this was not a major concern for us. The only shared components I can see are network components, database clusters, or VM clusters, and if those are down they are going to impact all services anyway. There are several kinds of monitors: hardware (CPU/memory/disk I/O), application (performance, transaction), and others. So when the hardware is shared across several services and not all of them are impacted, we relate only the application CIs to the change ticket, which turns off monitoring for those specific applications, not for every application deployed on that server.

On Tuesday, March 27, 2012 3:49:13 AM UTC-5, Jiri Pospisil wrote:
>
> Hi,
>
> I am diverging from the original question, but cannot help but ask about your points 3 and 4 – turning off/on monitoring based on unavailability records.
>
> We have been discussing this here for ages, but there are always concerns, mainly what happens if the implementation window is too large and hence monitoring might be switched off for days (i.e. changes for patching a number of servers that span across many days).
>
> Another concern here is regarding the potential to miss genuine alerts that might not be related to the change work, i.e. if the server supports more than one service and work is being carried out on one service while the other should not be affected.
>
> Would really appreciate it if you could share your experience with how well this works for you.
>
> Thanks
>
> Jiri Pospisil
> Remedy Specialist, IT Production
> Email: [email protected]
>
> From: Action Request System discussion list (ARSList) [mailto:[email protected]] On Behalf Of patchsk
> Sent: 26 March 2012 19:47
> To: [email protected]
> Subject: Re: Bulk Relate with Unavailability - ITSM 7.6.03
>
> We had to do the same thing in our organization.
>
> 1. You can bulk relate multiple CIs to a change ticket out of the box. In the CI search window, search for a CI name pattern, multi-select, and click Relate. It will relate all of them to the change ticket.
>
> 2. Then we have an escalation which creates the unavailability records on the Scheduled Start Date of the change ticket. It does some validation, such as whether the change is approved; if it is approved, it creates the unavailability records once the Scheduled Start Date passes.
>
> 3. To extend this process, we also have an integration with our monitoring tools, so when an unavailability record is created in Remedy it issues commands to turn off monitoring for those CIs.
>
> 4. Once the Scheduled End Date passes, Remedy again issues commands to start monitoring for those CIs.
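To make steps 2-4 above a bit more concrete, the logic boils down to roughly the sketch below. This is only an illustration, not our actual escalation: the fetch_approved_changes_in_window and create_unavailability_record helpers and the "monctl" suppress/resume command are hypothetical placeholders for your own AR System queries and whatever start/stop interface your monitoring tool exposes.

    # Rough sketch (not production code) of the escalation/integration logic in steps 2-4.
    # All helpers and the "monctl" CLI are hypothetical placeholders; substitute your own
    # AR System API calls and your monitoring tool's suppress/resume commands.
    from datetime import datetime
    import subprocess

    def fetch_approved_changes_in_window():
        """Placeholder: return Approved changes whose Scheduled Start Date has passed,
        e.g. [{"change_id": "CRQ000123", "end": datetime(...), "cis": ["srv01", "srv02"]}]."""
        return []

    def create_unavailability_record(change_id, ci_name):
        """Placeholder: create a CI unavailability record related to the change."""
        print(f"Unavailability created for {ci_name} on {change_id}")

    def set_monitoring(ci_name, enabled):
        """Placeholder: call the monitoring tool to suppress or resume alerts for a CI."""
        action = "resume" if enabled else "suppress"
        subprocess.run(["monctl", action, ci_name], check=True)  # hypothetical command

    def run_escalation():
        now = datetime.now()
        for change in fetch_approved_changes_in_window():
            for ci in change["cis"]:
                if now < change["end"]:
                    create_unavailability_record(change["change_id"], ci)  # step 2
                    set_monitoring(ci, enabled=False)                      # step 3
                else:
                    set_monitoring(ci, enabled=True)                       # step 4

    if __name__ == "__main__":
        run_escalation()  # in practice this runs on the escalation's own schedule

The same pass that suppresses monitoring can also resume it once the window closes, which is what keeps the process hands-off for the NOC.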
> There are a few checks and balances you may need to add to the process, but the general idea is as described above.
>
> On Monday, March 26, 2012 3:31:10 AM UTC-5, Kali Obsum wrote:
>
> Hi,
>
> Since it is not possible to select multiple assets and Relate them With Unavailability in one go, has anybody implemented any workaround for this? Our process entails that for some changes, we need to bulk relate hundreds of assets (e.g. patching). Raised this with BMC and they asked us for an RFE.
>
> Regards,
> Kali
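And on Kali's original bulk-relate question quoted above: if the multi-select in the CI search window is not practical for hundreds of assets, the same relate step can be scripted. The sketch below only shows the shape of such a script; find_cis_by_pattern and relate_ci_to_change are placeholders for whatever AR API or web-service calls you would use against your CI and association forms, and the change ID and name pattern are made up.

    # Rough sketch of bulk-relating CIs to a change by name pattern (step 1).
    # Both helpers are placeholders for real AR System / CMDB calls.
    import fnmatch

    def find_cis_by_pattern(pattern):
        """Placeholder: query the CMDB for CI names matching the pattern."""
        all_cis = ["appsrv01", "appsrv02", "dbsrv01"]  # stand-in for a CMDB query
        return [ci for ci in all_cis if fnmatch.fnmatch(ci, pattern)]

    def relate_ci_to_change(change_id, ci_name):
        """Placeholder: create the change-to-CI relationship entry."""
        print(f"Related {ci_name} to {change_id}")

    def bulk_relate(change_id, pattern):
        cis = find_cis_by_pattern(pattern)
        for ci in cis:
            relate_ci_to_change(change_id, ci)
        return len(cis)

    if __name__ == "__main__":
        count = bulk_relate("CRQ000000123", "appsrv*")  # example change ID and pattern
        print(f"Related {count} CIs")

Once the CIs are related this way, the escalation described earlier picks them up and creates the unavailability records as usual.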

