Hi Dominik,

yes, indeed, this can be quite frustrating. I often need to build the same pipeline for multiple similar data sources. Further, I sometimes build a pipeline on a pre-collected data set using the data set adapter; once I have found the best parameters, I apply the pipeline to the streaming data source. This process is time-consuming and error-prone.
So I think it would be good if we could exchange the data source or change the beginning of the pipeline. The main question is how flexible this should be. The reason for the current approach is that the configuration of the downstream algorithms depends on the configuration of the preceding processing element, right? So my questions would be:

* Do we want a completely flexible solution where a user can change anything?
* Should it be possible to replace data streams?

Philipp

On 2021/11/25 17:54:32 Dominik Riemer wrote:
> Hi,
>
> currently, when building pipelines, we use a rather strict validation
> approach where pipeline elements are added one after the other and a new
> connection can only be added if the ancestor element is fully configured.
> The drawback is that it is currently impossible to exchange intermediate
> pipeline elements (e.g., replace a stream while keeping the rest of the
> pipeline).
> My feeling is that this can be frustrating and time-consuming for users in
> case only minor changes are applied to the pipeline structure.
>
> What's your opinion on that? How should a perfect pipeline modeling process
> work from your point of view?
>
> Dominik
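
PS: To make the schema-dependency point above concrete, one option would be to allow a replacement automatically whenever the new stream still provides every field the downstream elements are configured against. A minimal, hypothetical Java sketch of that check (the types and names below are illustrative, not the actual StreamPipes API):

    import java.util.List;
    import java.util.Set;

    public class StreamReplacementCheck {

        // Simplified stand-in for an event schema: just a set of field names.
        record EventSchema(Set<String> fields) {}

        // The replacement is safe if the new stream still provides every field
        // that downstream element configurations currently reference.
        static boolean isCompatible(EventSchema newStream, List<String> requiredFields) {
            return newStream.fields().containsAll(requiredFields);
        }

        public static void main(String[] args) {
            EventSchema replacement = new EventSchema(Set.of("timestamp", "temperature", "machineId"));
            List<String> usedDownstream = List.of("timestamp", "temperature");

            System.out.println(isCompatible(replacement, usedDownstream)
                    ? "Stream can be swapped; downstream configurations stay valid."
                    : "Swap requires re-configuring the affected downstream elements.");
        }
    }

Anything beyond that (e.g., missing or renamed fields) would then only require re-configuring the downstream elements that actually use the affected fields, rather than rebuilding the whole pipeline.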
