Sarah,

Thanks for the phone discussion. As we discussed, most of these issues 
stemmed from my misunderstanding the scope of what I was responsible 
for: I was not considering the Transfer module checkpoint in my design. 
With that in mind, my comments are inline.

Jean

On 03/01/10 04:47 PM, Sarah Jelinek wrote:
> Hi Jean,
>
>
>> On 02/25/10 05:27 PM, Sarah Jelinek wrote:
>>> Hi Jean,
>>>
>>> I reviewed v2.0 of the doc. I tried not to repeat others comments.
>>>
>>>
>>> Section 2:
>>>> The checkpointing module will be available.
>>>
>>> I am not sure what this means exactly. The Transfer Module is a
>>> 'checkpoint'; it will implement the Checkpoint interface. If that's
>>> what you mean, sure, but if you expect checkpointing to be a separate
>>> module that controls everything, that's not how it is currently
>>> architected.
>>
>> To me, the checkpointing module is the code involved in saving data
>> into the data cache and reading it back out so that the pause/resume
>> functionality will be available. It also includes anything else the
>> pause/resume functionality might need.
>>
>
> Each of the Common Application classes, of which Transfer is one, is 
> an implementation of the Checkpoint interface. So, I believe your 
> design must show how this implements the Checkpoint interface. Along 
> with the other classes and interfaces that must be defined in support 
> of providing Transfer functionality.
>
Yes. I'll be reviewing the Caiman Architectural document and adding this 
to my design doc.
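To make the new scope concrete, here is roughly the relationship I now understand, in Python. The names here (Checkpoint, execute, cancel) are placeholders of mine, not the real interface; I'll pull the actual method names from the Caiman Architectural document:

```python
# Rough sketch only: Transfer is one Common Application class, and each
# such class implements the Checkpoint interface. The method names
# execute/cancel are illustrative placeholders, not the real Caiman API.
from abc import ABC, abstractmethod


class Checkpoint(ABC):
    """Interface each Common Application class implements (assumed shape)."""

    @abstractmethod
    def execute(self):
        """Run the checkpoint's work (e.g. perform the transfer)."""

    @abstractmethod
    def cancel(self):
        """Stop work so the engine can pause and later resume."""


class Transfer(Checkpoint):
    """Transfer as one concrete Checkpoint implementation."""

    def execute(self):
        return "transferring"

    def cancel(self):
        return "cancelled"
```

The point is just that Transfer plugs into the same interface as the other Common Application classes; the real signatures will come from the architecture doc.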

>>>
>>>> The data source will exist and be specified by the caller.
>>>
>>> In general, the dependency is that the data source will be available
>>> via the data cache; I think that is a better way to state this. We
>>> are not passing parameters: the checkpoint, in this case the Transfer
>>> module, will query the data cache for the data it needs.
>>
>> By data source I mean the source for the transfer, e.g. if we are
>> transferring from /usr/lib to /a/usr/lib, then /usr/lib is the data
>> source. I'll reword this to make it clearer.
>
> My concern isn't the wording; it is that in all cases we set and get 
> data in the data cache, and nothing is 'specified' by the caller. The 
> ExecutionEngine is the thing that will invoke Transfer when it is 
> time, and all the data required for Transfer to operate will be stored 
> in the Data Cache. So we really are getting the 'data source' from the 
> Data Cache, not from the caller. I want to be sure that anyone who 
> reads this document could take it and implement the interfaces you 
> defined, as well as those it must implement to interact with other 
> components in the architecture. Your document doesn't make the 
> interaction of the Transfer module with the other components clear, 
> and I think it needs to.

With the new scope in mind, I agree. The design for the Transfer 
checkpoint will use the data cache for getting its information.
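In other words, something like this, with the caveat that the DataCache API shown (get/set by key) and the key names are placeholders of mine until I align with the architecture doc:

```python
# Illustrative sketch: the Transfer checkpoint pulls its inputs from
# the data cache instead of taking caller-supplied parameters. The
# DataCache API and the "transfer.*" key names are assumptions.
class DataCache:
    """Minimal stand-in for the shared data cache."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store[key]


class TransferCheckpoint:
    def __init__(self, data_cache):
        self._cache = data_cache

    def execute(self):
        # Everything Transfer needs was stored in the cache by other
        # components before the ExecutionEngine invokes this checkpoint.
        src = self._cache.get("transfer.src")
        dst = self._cache.get("transfer.dst")
        return (src, dst)
```

So for the /usr/lib example, some earlier component would have done `cache.set("transfer.src", "/usr/lib")` and `cache.set("transfer.dst", "/a/usr/lib")`, and Transfer just reads those back.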

>
>>> Section 3.3.1:
>>> Most of the classes have similar methods in common, such as set_type
>>> and set_dst. I think we should define an interface or abstract base
>>> class so the subclasses can share this commonality.
>>
>> Agreed. I'll have to reword the doc a bit to make this clear.
>>>
>>> Section 3.4.1.2.8:
>>>> _logger
>>>> Attribute to hold the logger object to be used for
>>>> logging/progress reporting. It is set during the registration
>>>> of the logger module during the constructor method.
>>>
>>> I wouldn't think this would be set during registration of the logger
>>> module at construction time. The application creates a logger
>>> service and then adds handlers as it needs. Each checkpoint can
>>> either be instantiated with the logger object data from the Engine,
>>> or it can query the Engine for the logger object.
>>
>> I had discussed this with Sanjay, so maybe the confusion is that the
>> design for the logging module is still underway?
>
> The logging service is instantiated by the ExecutionEngine, and when 
> the Transfer component needs to log something it gets a handle to the 
> log and log handlers. Perhaps we store this handle in an attribute on 
> the objects you define; I am OK with that. But the attribute isn't set 
> on your objects when we register the logging service; it is something 
> the Transfer classes must ask the engine for.

After talking with Sanjay, it appears the logging design has changed 
since I last spoke with him. I'll update the doc to match the current 
design.
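So the `_logger` attribute will be filled by asking the engine, not at logger registration. Roughly like this; `get_logger()` and the "install" logger name are placeholders of mine, not the real engine API:

```python
import logging

# Assumed shape: the ExecutionEngine owns the logging service, and the
# Transfer classes ask it for a handle. The get_logger() method name
# and the "install" logger name are illustrative assumptions.
class ExecutionEngine:
    def __init__(self):
        self._logger = logging.getLogger("install")

    def get_logger(self):
        return self._logger


class TransferCheckpoint:
    def __init__(self, engine):
        # _logger is fetched from the engine by the checkpoint itself,
        # not set on us when the logging service is registered.
        self._logger = engine.get_logger()

    def execute(self):
        self._logger.info("transfer starting")
```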

>
>
>>
>>>
>>> I am thinking that each of these transfer classes you have defined is
>>> instantiated within the TransferModule checkpoint, depending on the
>>> type of transfer requested. So whatever data the Transfer checkpoint
>>> gets, either at instantiation by the Engine or by querying the
>>> Engine, it can use to instantiate the objects in support of the
>>> transfer functionality. Is this how you envision this working? If
>>> not, can you describe how/who instantiates the transfer object when
>>> required?
>>
>> Yes. The TM checkpoint will need to instantiate the appropriate
>> transfer object, depending upon the type of transfer desired, and
>> "control" the actual transfer by calling the appropriate transfer
>> methods.
>>
>
> Ok.. this would be good to show in a diagram or pseudo code so we can 
> validate how this fits into the architecture.

Yes.
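As a first cut, the dispatch I have in mind looks roughly like this. The class names, the type keys, and the `transfer()` method are placeholders, not settled design:

```python
# Rough pseudo-code for how the TM checkpoint might pick and drive the
# concrete transfer object. All names here are illustrative placeholders.
class CPIOTransfer:
    def transfer(self):
        return "cpio transfer done"


class IPSTransfer:
    def transfer(self):
        return "ips transfer done"


class SVR4Transfer:
    def transfer(self):
        return "svr4 transfer done"


# Map the requested transfer type to the class that implements it.
_TRANSFER_TYPES = {
    "cpio": CPIOTransfer,
    "ips": IPSTransfer,
    "svr4": SVR4Transfer,
}


def run_transfer(transfer_type):
    # The TM checkpoint instantiates the object for the requested type
    # and controls the actual transfer by calling its methods.
    cls = _TRANSFER_TYPES[transfer_type]
    return cls().transfer()
```

I'll turn this into a proper diagram or pseudo code in the doc so we can validate it against the architecture.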
>
>>>
>>> Section 3.4.2.2.5: _alternate_root
>>> This is the same as dst_mntpt, right? Basically, it is the source
>>> and destination that we have to track for all transfer types. Can we
>>> name them consistently, or put them in a superclass as I mention
>>> above, to keep this consistent?
>> Yes to both.
>>>
>>> I can see in the case of an IPS class where we might need additional
>>> attributes, such as preferred publisher, or mirror. But in general I
>>> think src and dst suffice.
>>
>> Not really. If you're doing a cpio transfer and all you have available
>> are src and dst, you get 100% of the src copied to the dst. In the
>> past this hasn't been flexible enough.
>
> But I thought this was covered via your cpio_args attribute? The 
> source and destination for each transfer type should be able to be 
> modeled the same, right? Package lists, etc. are modeled with other 
> attributes on the class.

We'll have to talk about this tomorrow. I think we're actually on the 
same page but need to make sure.
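For tomorrow, here is how I read your suggestion: src/dst live on a common base, and the type-specific knobs (cpio_args, an SVR4 package list) are extra attributes on the subclasses. Attribute names below are placeholders, and the cpio flag shown is just an example:

```python
# Illustrative only: src/dst modeled once on a common base class, with
# per-type flexibility as subclass attributes. Names are assumptions.
class TransferBase:
    """Common source/destination shared by all transfer types."""

    def __init__(self, src, dst):
        self.src = src
        self.dst = dst


class CPIOTransfer(TransferBase):
    def __init__(self, src, dst, cpio_args=None):
        super().__init__(src, dst)
        # cpio_args lets a cpio transfer copy less than 100% of src.
        self.cpio_args = cpio_args or []


class SVR4Transfer(TransferBase):
    def __init__(self, src, dst, pkg_list=None):
        super().__init__(src, dst)
        # An SVR4 transfer can name an explicit package list.
        self.pkg_list = pkg_list or []
```

If that matches what you had in mind, I think we are on the same page.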

Jean

>
>>
>> With the SVR4 transfer, you'll really want the ability to specify a
>> package list too.
>
>
>
>>>
>>> Section 3.4.3:
>>> I was thinking that for SVR4 packages we would only need pkgadd and
>>> pkgrm. The rest isn't something I can see us making use of. Can you
>>> describe why you think we might use these as part of an installation
>>> process?
>>
>> pkginfo and pkgparam were included in case we need/want to include or
>> exclude packages based upon some package information or parameter. I
>> can easily remove them from the doc if you don't foresee using them in
>> the immediate future.
>
> I don't think we need to get that fancy. I believe for SVR4 we are 
> supporting it only to provide backward compatibility so pkgadd/pkgrm 
> are sufficient.
>
> thanks,
> sarah
> *****
>
>> Jean
>>
>>
