Re: Changing the Proton build system to accommodate jni bindings
I would echo some of Rob's points (since he beat me to saying them myself :) ) and add some of my own. I also don't see a need to check out proton-c or proton-j in isolation; if the tests for both of them sit a level up then that's what people should be grabbing, in my mind. Duplicating code sounds fishy to start with, but doing so given the apparent real need to check out the common parent directory anyway seems all the more questionable. One possible adjustment I might suggest (but don't personally see the need for) would be that, if the requirement for Maven to generate the proton-api jar used by the C tree to build the JNI bindings is considered unworkable for some, and if it's just a simple jar, it could also be built with CMake for the C build, leaving Maven to do it for the Java build. I'm not sure how such developers would be planning to run the common test suite that still needed Maven, though. If we are releasing the C and Java components at the same time, and the tests sit at the top level, why does there need to be two source tars? We had this discussion with regard to the various Qpid clients and brokers some time ago, and the agreed (but never fully implemented; we still have subset source tars) outcome was that we should do away with the component-specific source tars and have only the one main source tar which is actually 'the release' in terms of the project, with e.g. Java binaries being separate complementary things. If we did just have a single source artifact to constitute the full release and either we or some third party then wanted to build a C-only source artifact for some reason, that could of course still be done by simply processing the contents of the repository or the single 'the release' tar appropriately. E.g., the 'individual component' source releases in Qpid aren't simple svn exports; they contain different parts of the tree bundled into a tar, which is I guess OK because they are not actually 'the release'.
Robbie On 21 January 2013 12:10, Rob Godfrey rob.j.godf...@gmail.com wrote: This results in something that is quite awkward for the C build. For one thing I'm not sure an svn export of the proton-c directory would be considered releasable under this scheme as it would include the java binding, but not the source code necessary to build it, and apache policy requires releases to include full source code. Regardless it would no longer be a useful/sensible artifact to end-users since they couldn't actually build the java binding. This seems a slightly odd position to take. The artefact doesn't include the entire source to python, ruby, openssl, etc. If the dependencies for these are not present then the relevant parts of the tree are not built. The same is true in this proposal with respect to the java binding... there is a dependency on the Java API being installed in order to build the JNI bindings within the C build. I must admit I remain bemused by the idea that trying to maintain two copies of the Java API in the source tree makes any kind of sense. I think we are contorting ourselves and adding potentially huge complication to our build/development process in order to try to satisfy a number of somewhat arbitrary requirements that are being imposed on the directory structure. Personally I don't perceive there to be an actual need to allow checking out only part of the Proton tree. Indeed I would wish to strongly discourage the sort of siloed attitude that checking out only Java or only C would imply. Moreover, while I see that it is advantageous to be able to release source packages directly as svn exports from points in the tree... I don't find this so compelling that I would break fundamental tenets of how source control is expected to be used. Personally, given that our current plan is to release all of Proton at the same time, I'm not sure what would be wrong with simply shipping a single source tarball of the entire directory structure. 
People who wish to build from source would thus be able to build whatever they so desired. -- Rob
Re: mailing lists and fragmented communication
On 01/21/2013 11:43 AM, Robbie Gemmell wrote: I don't think that list being separate is the main source of most of the confusion with proton. I agree and was not suggesting that it was. I do however think that had past conversations on both the proton and dev lists been more visible then the community as a whole would have a better view of what was happening, and any questions would get asked earlier, forcing them to be dealt with earlier. What prompted this thread was the observation that communication was more fragmented than in my view it needed to be, not that this was the cause or solution to any specific point or issue. That is actually something I have felt for a while, and not at all specifically with regard to proton. Recent email threads somehow just pushed me from thinking about it to voicing my thoughts out loud. People have asked roughly the same basic questions about proton on users@ and proton@ at roughly the same time, which did indeed mean certain discussion with answers might have only gone to one of the lists at a time, but the key point for me was that they had to ask those basic questions on either list in the first place. We are talking about improving communication, and for me the main problem is often that information isn't being written down or sent to any of the lists until someone asks a question requiring it. That question typically gets met with a [large] email explaining the answer, but much of the time it should be possible for the response to just be a link to somewhere the answer is already written down in general, e.g. the website, with perhaps some context-specific additions. Some website update stats would probably be entertaining right about now, for example. I think the website is indeed a problem area for the project. It does tend to get stale and has never been particularly comprehensive. I think the addition of proton (and indeed the move to AMQP 1.0) is a significant enough change that the overall structure needs some thought. 
[...] I think users@ and dev@ should be left as is, and that we potentially just adjust how we use them slightly. That is fine with me. I'm really just hoping to nudge more of the conversation emails onto the user list for wider visibility as I think that will be generally beneficial (while not being a panacea for any specific issue or indeed for the need for better communication in general).
Re: mailing lists and fragmented communication
On 01/21/2013 01:14 PM, Gordon Sim wrote: On 01/21/2013 11:43 AM, Robbie Gemmell wrote: I think users@ and dev@ should be left as is, and that we potentially just adjust how we use them slightly. That is fine with me. I'm really just hoping to nudge more of the conversation emails onto the user list for wider visibility as I think that will be generally beneficial (while not being a panacea for any specific issue or indeed for the need for better communication in general). From the website: The user's list is for discussions that relate to use or questions on Qpid. If you have questions about how a feature works, suggestions on additional requirements, or general questions about Qpid please use this list. and: The developer's list is for discussions that relate to the on going development of Qpid. If you have questions about how a feature is being developed, suggestions on how to implement a new feature, or requests for a new feature this is the list to use. So, I guess being more specific, I'm saying that I think suggestions on how to implement a new feature or questions (and discussions) on new features being developed would actually be better directed to the user list. I certainly don't want to spam user with unwanted emails, but the volume historically has not been that high and I suspect that doing so will give users a greater sense of awareness of what's coming down the line (yes, we should be better at communicating that through more formal roadmaps etc, but this would at least alleviate our current failings) and would result in wider input into design questions.
Re: mailing lists and fragmented communication
On 21 January 2013 12:43, Robbie Gemmell robbie.gemm...@gmail.com wrote: I'm happy enough with the idea of collapsing proton@ given that Proton's scope is in some ways wider than when it started out (where the very specific protocol library made a good case for a separate list), but I don't think that list being separate is the main source of most of the confusion with proton. People have asked roughly the same basic questions about proton on users@ and proton@ at roughly the same time, which did indeed mean certain discussion with answers might have only gone to one of the lists at a time, but the key point for me was that they had to ask those basic questions on either list in the first place. +1 We are talking about improving communication, and for me the main problem is often that information isn't being written down or sent to any of the lists until someone asks a question requiring it. That question typically gets met with a [large] email explaining the answer, but much of the time it should be possible for the response to just be a link to somewhere the answer is already written down in general, e.g. the website, with perhaps some context-specific additions. Some website update stats would probably be entertaining right about now, for example. Completely agreed (and hands up to not personally having updated the website in ages). I think users@ and dev@ should be left as is, and that we potentially just adjust how we use them slightly. These lists have existed for several years, and it's the structure that almost every Apache project works just fine with; I don't think we are all that special in this regard. I also don't think we should subscribe everyone to a bunch of traffic they didn't sign up for. 
That said, this doesn't mean developers actually need to post discussion mails to dev@, the users@ list is always there and I know Gordon at least often posts only to that if it is a user related discussion, and I think that approach works well enough if others were to use it. The dev@ list can continue at least to hold things like the JIRA traffic (I could see ReviewBoard postings going to either list), even if general discussion moves to the users@ list. Personally I'd have JIRAs and ReviewBoards on dev and make sure everything else was on users. However I agree with your main point that it's not the multitude of mailing lists that is necessarily the issue... it's the fact that information isn't available *anywhere* :-) Summarising, I agree we need to be better at communicating, I think a bit of mailing list adjustment would be a good thing where proton@ could go and dev@ should stay in some guise, but that there are other problems with our communication that reducing the number of mailing lists potentially does little to solve. Agreed, Rob Robbie On 18 January 2013 17:21, Gordon Sim g...@redhat.com wrote: I believe that we have too many mailing lists and that we are missing out on valuable collaboration and transparency as a result. Too often in the past topics have been discussed on the dev list without reflecting any of the discussion back to the user list, keeping a large part of the community in the dark. Now that we have a distinct list for proton there is the possibility of yet more fragmentation. I honestly believe that we would be better off with just one list for discussions. I think there will increasingly be issues that cross-cut different components or that would benefit from wider participation. Not all topics will be of interest to all subscribers, but that is always going to be the case. It doesn't seem to me like any of the lists are so high in volume that this would cause significant problems. 
More rigorous use of subject could help people filter if needed. (JIRA and commit notices I think do warrant their own lists, allowing a lot of the 'noise' to be avoided if so desired). Any other thoughts on this? Does anyone have fears of being deluged with unwanted emails?
Re: Changing the Proton build system to accommodate jni bindings
On 21 January 2013 15:11, Rafael Schloming r...@alum.mit.edu wrote: On Mon, Jan 21, 2013 at 7:10 AM, Rob Godfrey rob.j.godf...@gmail.com wrote: This results in something that is quite awkward for the C build. For one thing I'm not sure an svn export of the proton-c directory would be considered releasable under this scheme as it would include the java binding, but not the source code necessary to build it, and apache policy requires releases to include full source code. Regardless it would no longer be a useful/sensible artifact to end-users since they couldn't actually build the java binding. This seems a slightly odd position to take. The artefact doesn't include the entire source to python, ruby, openssl, etc. If the dependencies for these are not present then the relevant parts of the tree are not built. The same is true in this proposal with respect to the java binding... there is a dependency on the Java API being installed in order to build the JNI bindings within the C build. The problem isn't with not including the source code to external dependencies (i.e. Java in your analogy), the problem is with the fact that all of the Java binding (the API and the JNI implementation of it) is developed within the qpid project, and the artifact would not include all of it. The apache release policy is quite clear on this front: The Apache Software Foundation produces open source software. All releases are in the form of the source materials needed to make changes to the software being released. In some cases, binary/bytecode packages are also produced as a convenience to users that might not have the appropriate tools to build a compiled version of the source. In all such cases, the binary/bytecode package must have the same version number as the source release and may only add binary/bytecode files that are the result of compiling that version of the source code release. 
Producing an artifact that has source code for impls, but not source for the interfaces, would quite clearly constitute an artifact that didn't include all the source materials needed to make changes. Ummm... it's a dependency... you're familiar with those, yeah? The same way that the Qpid JMS clients depend on a JMS API jar, for which the source is readily available from another source. The JNI binding would build if the dependency was installed. The same way I believe the SSL code in the core of proton-c builds if the dependency for it is installed. I must admit I remain bemused by the idea that trying to maintain two copies of the Java API in the source tree makes any kind of sense. I think we are contorting ourselves and adding potentially huge complication to our build/development process in order to try to satisfy a number of somewhat arbitrary requirements that are being imposed on the directory structure. You're arguing against a straw man here. Nobody has proposed copying the API the way you keep describing it. The original solution implemented on the JNI branch was to have the API in two places at once via svn externals. This isn't in two places... it's very clearly in one place in the repository, with another place linking to it, albeit in a rather inelegant manner. Having said that, the externals solution is not a particularly pleasant solution and was only put in place because of the requirement to be able to check out from a subdirectory of proton. Having further considered the matter, my feeling is that it is better to re-examine the need to be able to check out just a single subdirectory of the proton tree. This however does violate one of the fundamental tenets of source control as you put it since it fundamentally loses track of what version of the API source goes with what version of the implementation source. Umm... no it doesn't. Again... 
I'm not pushing for svn:externals, but if you insist on the requirement that each subdirectory must be able to be checked out independently then I think svn:externals is a better solution than the copy. The original svn:externals proposal makes it very clear that the version of the Java API code that the JNI binding works with must be the same as that which the Java impl works with. The external points to a sibling directory within the same project. So long as you consider the proton project as a whole then it is never unclear as to which version you should be using. Only in a world where the Java and C versions are not progressed with a common API does this become a problem. If you do not believe the two should have a common API then I think we need to have a wider discussion (since we've been working pretty hard until now to keep the APIs in sync). Branching the API into two places and putting the necessary scripts in place to enforce that the C version of that branch is a read-only copy of the Java version is simply another way to achieve exactly what is currently
Re: mailing lists and fragmented communication
On 21 January 2013 13:14, Gordon Sim gordon.r@gmail.com wrote: On 01/21/2013 11:43 AM, Robbie Gemmell wrote: I don't think that list being separate is the main source of most of the confusion with proton. I agree and was not suggesting that it was. Sorry, I didn't really mean to imply you were suggesting that it was (or that it was part of your motivation), I just felt that it had been suggested. I was just being lazy and replying to / giving my thoughts on the entire thread in one email. I do however think that had past conversations on both the proton and dev lists been more visible then the community as a whole would have a better view of what was happening and any questions would get asked earlier forcing them to be dealt with earlier. Agreed
summary/conclusion (was Re: mailing lists and fragmented communication)
I'm going to suggest that we leave all the lists in place for now, and leave the choice of list to individual discretion. For my part, however, I will be focusing on the user list, which I see as a community-wide list for anyone with an interest in AMQP-related software at Apache. I would encourage people to only use other lists if they are convinced this is too wide an audience for their thread.
Re: Changing the Proton build system to accommodate jni bindings
On Mon, Jan 21, 2013 at 9:33 AM, Rob Godfrey rob.j.godf...@gmail.com wrote: Ummm... it's a dependency... you're familiar with those, yeah? The same way that the Qpid JMS clients depend on a JMS API jar, for which the source is readily available from another source. The JNI binding would build if the dependency was installed. The same way I believe the SSL code in the core of proton-c builds if the dependency for it is installed. That's not really a proper analogy. Again, the JMS interfaces are defined outside of qpid. We don't release them, we depend only on a well-defined version of them, and we don't share a release cycle with them. If the JMS API was something that we developed/defined right alongside the impl and was part of the same release process, we would certainly not be allowed to release without the source. --Rafael
Re: mailing lists and fragmented communication
It's really about architecture and audience and how they interact. The architecture we are currently developing is closely modelled on the existing architecture of the internet. At the lowest layer the TCP stack provides a very general purpose protocol to a very wide range of applications. This is directly the role the protocol engine plays for AMQP. Slightly above that in the software stack the socket API makes it easy (relatively speaking) for your application to speak TCP. Again this is identical to the role that the Messenger API serves. Neither the socket API nor Messenger provide you direct control over every aspect of the protocol details, but they do make it easy to interface to the basic functionality of the respective protocols and they provide you indirect access (via intermediaries) to many more advanced capabilities of the protocol. At the highest layer applications build on top of the protocol. In the case of TCP there are many thousands of applications including very important ones like HTTP, SMTP, etc. For AMQP, we currently have three examples at apache (the cpp/java brokers, and activemq), however I believe there are potentially many many other applications that could build on top of AMQP, perhaps even as many as currently exist on TCP. From this perspective, I would assert that both messenger and the protocol engine have potentially very cross-cutting and broad audiences, whereas the brokers (relatively speaking) have inherently narrower and more domain specific audiences. While I can sympathize with the idea that a single broadcast communication channel might make it easier to explain this picture in the short term, I am deeply concerned that it will lead to distortion of this picture in the longer term as architecture tends to follow audience. 
The users of a piece of software inherently shape its direction, and forcing two pieces of software that need to be quite independent to have a single user group is going to influence and shape that architecture in a way that is contrary to them being independent in the first place. I think this is especially concerning because the dev and users list are already largely established as the cpp/java broker lists. So to answer your question, I don't actually think the arrangement of mailing lists will make all that much difference in the short term; that is something we need to proactively work on through other means. However, I do think it can have a significant influence in the long term. It is my belief that if AMQP is successful the architectural layer represented by the protocol engine + messenger, and the various applications that use it (qpid-cpp, qpid-java, activemq, and more) will ultimately be strongly reflected in their own distinct communities, and it may well be as strange and alien to think of merging the communities/lists as it would be to combine the TCP stack/socket API into a single project with the apache web server. Already it's hard to imagine how the details of implementing ssl support in proton and the details of implementing a transactional persistent message store will significantly benefit from cross communication. All that said, my primary concern is to promote a good understanding and foster development of the architecture outlined above, and this requires good communication beyond just the people who are on any of our mailing lists. If we can't explain proton to users of other qpid components, we certainly can't explain it well to the rest of the world. So if the above picture is well understood and there is still overwhelming consensus that merging lists will help achieve this goal then I won't stand in the way. 
I don't claim to know that we can't evolve to where we need to be through that path, merely that it worries me in some significant ways, and that qpid mailing list communication in general is a very small subset of our overall communication problems, so any action (or inaction) we take with the lists should not fool us into feeling that we have actually done something to solve the problem. --Rafael On Fri, Jan 18, 2013 at 4:15 PM, Ken Giusti kgiu...@redhat.com wrote: Hi Rafi, You raise some good points, but I don't understand how keeping a separate proton list makes it easier to provide a coherent view of the qpid project, especially to newcomers. As you point out: The project goals/identity issue in my mind has very little to do with the lists and more to do with the fact that many people think of qpid == broker, qpid cpp == cpp broker, and qpid java == java broker. While this understanding may have been more or less true at one point, it is now and going forward a misconception, yet we have done nothing to educate our users about this. Agreed, and to that point, I think it would be a very bad precedent to structure the mailing lists into component silos. Wouldn't creating a separate mailing list for, say qpid-cpp-bro...@qpid.apache.org, send exactly the wrong message?
Re: Changing the Proton build system to accommodate jni bindings
On Mon, Jan 21, 2013 at 8:03 AM, Robbie Gemmell robbie.gemm...@gmail.com wrote: I would echo some of Rob's points (since he beat me to saying them myself :) ) and add some of my own. I also don't see a need to check out proton-c or proton-j in isolation; if the tests for both of them sit a level up then that's what people should be grabbing, in my mind. Duplicating code sounds fishy to start with, but doing so given the apparent real need to check out the common parent directory anyway seems all the more questionable. One possible adjustment I might suggest (but don't personally see the need for) would be that, if the requirement for Maven to generate the proton-api jar used by the C tree to build the JNI bindings is considered unworkable for some, and if it's just a simple jar, it could also be built with CMake for the C build, leaving Maven to do it for the Java build. I'm not sure how such developers would be planning to run the common test suite that still needed Maven, though. If we are releasing the C and Java components at the same time, and the tests sit at the top level, why does there need to be two source tars? We had this discussion with regard to the various Qpid clients and brokers some time ago, and the agreed (but never fully implemented; we still have subset source tars) outcome was that we should do away with the component-specific source tars and have only the one main source tar which is actually 'the release' in terms of the project, with e.g. Java binaries being separate complementary things. I'm not sure I can answer this in a way that will be satisfying to you, as the answer is based a lot on C development standards, where source tarballs play a much more active role as a means to distribute software than in the Java world, where everything is distributed via binaries. But I'll try by saying that having a C project where you can't simply untar it and do one of 'src/configure; make' or 'cmake src; make' is a bit like having a Java project that doesn't use Maven or Ant. 
I'm aware we could have CMake at the top level alongside a pom.xml, and some third entry script that invokes both for system tests and the like, and while I would encourage that for proton developers, it is imposing a very complex set of entry points onto our users. I can see that this might impact Java users less as they may care less about src distros, but it is far from an ideal release artifact for C users. As for producing a C tarball by post-processing a large source tarball, it's simply something I would prefer to avoid given that there are alternatives, as having a complex mapping from source control to release artifact is in my experience quite bad for the health of a project. It means developers are more detached from what their users experience out of the box. --Rafael
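[Editor's note: the "build the jar with CMake" adjustment suggested earlier in the thread is at least mechanically plausible with stock CMake, whose UseJava module can compile a plain jar without Maven. The following fragment is an illustrative sketch only; the paths, glob, and target name are guesses at the layout, not the actual proton build files.]

```
# Hypothetical sketch: build proton-api.jar from the C-side CMake build,
# leaving Maven in charge of the full Java build. Paths are assumptions.
find_package(Java REQUIRED)
include(UseJava)

file(GLOB_RECURSE PROTON_API_SOURCES
     "${CMAKE_SOURCE_DIR}/proton-j/proton-api/src/main/java/*.java")

# add_jar compiles the sources with javac and packages them into a jar
add_jar(proton-api SOURCES ${PROTON_API_SOURCES})
```

The JNI binding target could then depend on proton-api the same way it depends on any other generated artifact.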
Re: Changing the Proton build system to accommodate jni bindings
On Sat, Jan 19, 2013 at 5:48 PM, Phil Harvey p...@philharveyonline.com wrote: I worked with Keith on this proposal so I should state up front that I'm not coming to this debate from a neutral standpoint. Hopefully we can find a solution that is acceptable to everyone. To this end, we listed our understanding of the requirements on https://issues.apache.org/jira/browse/PROTON-194. I'm hoping that this discussion will allow us to clarify our requirements, such that the best technical solution naturally follows. I've added some comments in-line below... On 18 January 2013 19:29, Rafael Schloming r...@alum.mit.edu wrote: On Fri, Jan 18, 2013 at 11:17 AM, Keith W keith.w...@gmail.com wrote: We are currently in the process of implementing the proton-jni binding for the proton-c library that implements the Java Proton-API, allowing Java users to choose the C based proton stack if they wish. This work is being performed on the jni-branch under PROTON-192 (for the JNI work) and PROTON-194 (for the build system changes). Currently, Proton has two independent build systems: one for proton-c and its ruby/perl/python/php bindings (based on CMake/Make), and a second, separate build system for proton-j (based on Maven). As proton-jni will cut across both technology areas, non-trivial changes are required to both build systems. The nub of the problem is the sharing of the Java Proton-API between both the proton-c and proton-j trees. Solutions based on svn:externals and a simple tree copy have been considered and discussed at length on conference calls. We have identified drawbacks in both solutions. To be honest I don't think we've sufficiently explored the copy option. While it's true there were a lot of hypothetical issues thrown around on the calls, many of them have quite reasonable solutions that may well be less work than the alternatives. In my experience, maintaining two copies of any code is usually a bad thing. 
However, I try to be open-minded so I agree that it's worth exploring this option. I'd be interested to hear your opinion on (a) the scenarios when it would be acceptable for these two copies to diverge and (b) the mechanism you're envisaging for achieving convergence. I imagine there are both technical and process dimensions to making this work. This is a good question, sorry I missed it with the flurry of other posts. To answer (a), I think on trunk these two things should probably never (or very rarely at least) diverge. On very specific feature development branches I think we've seen it can be convenient to let them diverge a little, but as the whole point of a feature branch is to be able to break things I think that's neither here nor there; either way I would consider non-matching APIs to be a broken state of things. The mechanism I'd propose would be to add a check to the C build system that would cause a build failure if the API as viewed from the JNI binding was any different from the API as it exists in the Java source tree. I believe for most developer scenarios this would achieve *almost* the same thing that svn externals does without the inherent drawbacks. I'll detail the scenarios I've thought of below: 1. Changing the Java API from the Java tree: If the Java developer changes the API, the C build will break due to check failure. If the Java developer changes both to avoid the check, the C build will break due to compile failure. 2. Changing the Java API from the C tree: If the C developer forgets to change the Java API, then the C build will break due to check failure. If the C developer changes both to avoid the check, the Java build will break due to compile failure. I believe the above scenarios enforce pretty much the same thing svn externals does. The only added step is the need to copy changes to both places when you are being a good citizen and bringing forward both at the same time. 
I would hope that this would become less and less of an issue as the API should really stabilize and not change; however, if that is an issue for the near term I would propose adding a sync script on the C side to pull the changes over to a local checkout. This would result in the following process for someone changing both simultaneously: - Make your changes and test on the Java build. When this works, transition over to the C build and see what the impact of the changes has been. The first breakage will be the check failure, and running the sync script will fix this. You can then proceed to see what other build failures there are and how to fix them. I would hope overall this would minimize the impact of the syncing as I would expect API changes to be primarily driven from the Java side. Either way, I think the only development process difference between this setup and the svn externals one is that for the C build you fix the check breakage by running the sync script before proceeding to fix
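[Editor's note: the check-plus-sync mechanism described in the message above can be sketched roughly as follows. The function names, paths, and the simplification of treating the API as a set of .java files compared byte-for-byte are all assumptions for illustration; this is not the actual proton tooling.]

```python
# Hypothetical sketch of the proposed API-consistency check: fail the
# C-side build if its copy of the Java Proton-API differs from the
# canonical copy in the Java tree, plus the proposed "sync script".
import filecmp
import os
import shutil
import sys


def api_files(root):
    """Collect relative paths of all .java files under root."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".java"):
                full = os.path.join(dirpath, name)
                found.add(os.path.relpath(full, root))
    return found


def check_api(java_tree, c_tree):
    """Return a list of mismatches between the two API copies."""
    java_api, c_api = api_files(java_tree), api_files(c_tree)
    problems = [f"only in one tree: {p}" for p in java_api ^ c_api]
    for rel in java_api & c_api:
        if not filecmp.cmp(os.path.join(java_tree, rel),
                           os.path.join(c_tree, rel), shallow=False):
            problems.append(f"content differs: {rel}")
    return problems


def sync_api(java_tree, c_tree):
    """The 'sync script': make the C-side copy match the Java copy."""
    shutil.rmtree(c_tree, ignore_errors=True)
    shutil.copytree(java_tree, c_tree)


if __name__ == "__main__":
    problems = check_api(sys.argv[1], sys.argv[2])
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit breaks the build
```

Wired into the C build as a pre-compile step, a non-zero exit from the check would produce exactly the "check failure" described in scenarios 1 and 2 above, and running the sync function restores the read-only copy.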
Re: mailing lists and fragmented communication
On Mon, Jan 21, 2013 at 1:42 PM, Gordon Sim g...@redhat.com wrote:

On 01/21/2013 05:22 PM, Rafael Schloming wrote: The users of a piece of software inherently shape its direction, and forcing two pieces of software that need to be quite independent to have a single user group is going to influence and shape that architecture in a way that is contrary to them being independent in the first place.

Having a single communication channel for a community is not the same as forcing independent pieces of software to have a single user group. No one is 'forcing' anyone into anything. I don't believe having more conversations in the wider community need have any negative impact on the architecture of independent components.

I think this is especially concerning because the dev and users lists are already largely established as the cpp/java broker lists.

I think the lists are as much about the clients (and indeed management mechanisms) as the brokers. I see them as being places where all the components have been discussed, in combination with each other or in conjunction with software outside the project (RabbitMQ, Mule, etc.). The conversations evolve as the components evolve. Ultimately people talk about what they are interested in and what they are using. None of our lists is particularly high volume at this point, so I am of the opinion that there is more benefit to sharing a channel of communication than there is from segmenting it. [...]

It is my belief that if AMQP is successful, the architectural layer represented by the protocol engine + messenger, and the various applications that use it (qpid-cpp, qpid-java, activemq, and more) will ultimately be strongly reflected in their own distinct communities, and it may well be as strange and alien to think of merging the communities/lists as it would be to combine the TCP stack/socket API into a single project with the Apache web server.

Time will tell of course, but I myself take a different view.
I think the analogy is somewhat flawed. I think AMQP will have a community of interest around it, a community that is specifically driven by the vision of interoperability, of composing systems from many different parts. Not all members of that community will be interested in exactly the same set of components, of course, but I think there will be a lot more common interests than your analogy would suggest. I think there will also be general issues that are relevant to different components (e.g. global addressing). Having an AMQP-focused community at Apache, and having that community discuss various different components with different architectural relationships, seems entirely natural to me.

Calling it an analogy is not really being fair. Getting closer to the level of generality I've described has been one of, if not the, primary design goals behind AMQP 1.0 since its inception, and the exact parallel I've described has motivated many of its fundamental design choices. You can certainly argue that the design is flawed and it is impossible to implement the architecture in such a decoupled manner; however, it's not realistic to simply discount it as a flawed analogy. [...]

Already it's hard to imagine how the details of implementing SSL support in proton and the details of implementing a transactional persistent message store will significantly benefit from cross communication.

In my view that entirely misses the point. Those topics themselves may seem quite distinct, but there are many people who would be interested in both of them. There are also likely some people who might be interested in knowing a bit about them and/or contributing to discussions around them, even if they are at present primarily focused on some other component entirely. Clearly users of proton's messenger API may be interested in communicating with a persistent transactional store.
Users of other APIs might be interested in how the messenger API differs in that use case from whatever API they use. Implementers of such a store may be interested in proton's engine API (as well as some other broker/brokers). So even with these two distinct topics, it seems to me (at the risk of repeating myself to the point you all stop listening and filter my emails to /dev/null!) that there are benefits to sharing a communication channel, and at present no real concerns about excessive traffic (at least none so far expressed).

I don't think I've missed your point. I agree 100% with you that we need more communication about architecture and how components fit together, and that this communication needs to reach a lot of people. Where I disagree with you is that altering the mailing lists will achieve a significant measure of that goal. This communication really needs to be captured in a more permanent form that can be sent (ideally via a small, easy-to-remember URL) to lots of mailing lists, even (perhaps especially) ones outside of qpid.
Re: mailing lists and fragmented communication
On 01/21/2013 07:39 PM, Rafael Schloming wrote: Calling it an analogy is not really being fair. Getting closer to the level of generality I've described has been one of, if not the, primary design goals behind AMQP 1.0 since its inception, and the exact parallel I've described has motivated many of its fundamental design choices. You can certainly argue that the design is flawed and it is impossible to implement the architecture in such a decoupled manner; however, it's not realistic to simply discount it as a flawed analogy.

I'm not arguing that the design is flawed. I'm arguing that comparing the relationship of the TCP stack to the Apache web server as being the same as that of Proton to a specific broker implementation, and drawing from that the conclusion that the communities around them are thus necessarily as distinct, is unconvincing to me. [...]

I agree 100% with you that we need more communication about architecture and how components fit together and that this communication needs to reach a lot of people. Where I disagree with you is that altering the mailing lists will achieve a significant measure of that goal.

I don't believe I ever argued that it would.

This communication really needs to be captured in a more permanent form that can be sent (ideally via a small, easy-to-remember URL) to lots of mailing lists, even (perhaps especially) ones outside of qpid.

Sounds great! Even when that exists though, I still believe a single list on which the community can discuss diverse AMQP and Qpid related topics is a good thing.
Re: mailing lists and fragmented communication
On Mon, Jan 21, 2013 at 3:10 PM, Gordon Sim g...@redhat.com wrote:

On 01/21/2013 07:39 PM, Rafael Schloming wrote: Calling it an analogy is not really being fair. Getting closer to the level of generality I've described has been one of, if not the, primary design goals behind AMQP 1.0 since its inception, and the exact parallel I've described has motivated many of its fundamental design choices. You can certainly argue that the design is flawed and it is impossible to implement the architecture in such a decoupled manner; however, it's not realistic to simply discount it as a flawed analogy.

I'm not arguing that the design is flawed. I'm arguing that comparing the relationship of the TCP stack to the Apache web server as being the same as that of Proton to a specific broker implementation, and drawing from that the conclusion that the communities around them are thus necessarily as distinct, is unconvincing to me.

I certainly wasn't intending to draw such a conclusion, and I apologize for any sloppy wording that may have implied this. I'm merely stating my own beliefs and conjectures about the future. I've conceded that you can do what you like with the lists and I won't stand in the way; however, I can't make myself believe that it is the right choice, and if only for my own cathartic benefit I feel the need to document the minority view. Ultimately the dissent over this issue is more damaging than simply moving forward and making progress. I've pushed it as much as I have in the past because I do have very strong beliefs surrounding it, and I'm sorry if trying to explain my perspective has wasted more time. --Rafael