Howdy,
My name is Kenneth and I'm the architect of TDF. I thought I would take a few minutes to clarify a little about TDF. This is my first time doing this.

First, TDF is designed to be much more than an interactive debug tool. It wasn't designed to compete with any existing product. Its primary purpose is to expand the realm of debugging tools beyond the development scope into testing, maintenance and customer problem determination. The first release is primarily about the interactive component because that is the part currently tested to our standards, and we feel it can be used reliably. Consider the example of locked code. TDF is carefully architected so you could interactively debug any locked code except disabled code and code in a CPU lock. One day TDF might actually support this mode, but it would require more code than is currently justified.

TDF is also designed to provide non-interactive data collection. Through panels that define 'traces', you can specify what states and data you want to collect, up to 2K per trace point, and each trace point can be tailored to its own specific needs. So TDF can provide a dynamic trace of any code without any modification.

Now consider this. TDF has a scripting capability where all the commands needed to perform a trace or debug session are recorded. A customer reports a problem. You design a set of traces to collect the needed data and send the script to the customer. Since TDF does not require any code modifications, they start up a batch runtime component that executes the script against a test case. It collects the trace data, which can be shipped back to the product developer for analysis. A fix is prepared and shipped to the customer, and the same script can then be run again to verify the fix. That is what TDF is designed to do. It's an entirely different debug paradigm that expands debugging tools into the realm of maintenance and problem determination.

Second, TDF has no boundaries.
TDF is dynamic and can operate across any number of address spaces, tasks, SRBs, PC routines and RTM exits. It does this by using the TRAP instruction. This instruction can execute in almost all environments; without going into details, it can essentially execute wherever a PC instruction can execute. TDF uses what we call Dynamic Program Intercepts to wrap all contents supervision (LOAD, LINK, XCTL and ATTACH), RTM exit (ESTAE(X), STAE, (E)SPIE and SETFRR) and selected scheduling (such as IEAMSCHD) service calls. This list will grow as demand dictates. It also uses this same technology to wrap user-identified PC routines and common routines. It does this by making a copy of the identified code, thus isolating it from any other callers; in fact, two or more sessions can debug the same PC routine concurrently. A future enhancement (still being tested) will allow the grouping of any number of tasks and address spaces into a debug group. This will become essential for problem determination in complex task or client/server scenarios, as described for the runtime component in the second paragraph.

Third, IBM is both a hardware and a software vendor. It has the luxury of pairing the hardware architecture, z/Architecture, and the software architecture, z/OS, into one of the most powerful, if not the most powerful, operating systems. TDF is designed to exploit both architectures. The TRAP instructions are a simple example of that; the PC screening technology is another. In fact, TDF is architected more on z/Architecture than on z/OS. It requires z/OS to execute, but it is much more reliant on z/Architecture. TDF uses only 3 IBM services in the debugging of a dispatchable unit.

Anyone who has any specific technical questions about TDF simply needs to ask.

Kenneth

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
