Trond Norbye wrote:
Hi all,

I am working on a team at Sun Microsystems focusing on Memcached, and we have just completed the integration of Memcached 1.2.2 into OpenSolaris (it will be available in the next build). Our next step will be to try to improve scalability on multicore machines, so we are about to set up our development and test environment. I would therefore like to get some ideas from all of you on how we should do this.

Good to finally hear from you folks! :) We were wondering what happened. Were any patches necessary to integrate 1.2.2 into OpenSolaris?


My first question is about SCM.
We are a team of developers here who are going to be working on the code, and we want to share our work with each other while we're working. I am not a big fan of sending "diff files" around the team and applying them by hand, since tracking the changes will become difficult (at least until we're done and can push a patch back out to the community to get it integrated into the Subversion repository).

How is this being solved by other companies?

I despise SVN. While I do believe in restricting access to the main repository, SVN does not make collaborative development very easy. I personally use git while working on memcached: I import the repo via git-svn, do all of my branch/etc work in native git, then `git-svn dcommit` my changes back to SVN when they're ready.
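Roughly, that loop looks like the sketch below (the SVN URL and branch name are placeholders, not the real locations):

    # one-time import of the SVN repository into a local git clone
    # (the URL below is a placeholder for the real memcached SVN location)
    git svn clone http://svn.example.org/memcached/trunk/server memcached
    cd memcached

    # day-to-day work happens on native git branches
    git checkout -b multicore-scaling
    # ...hack, commit locally as often as you like...
    git commit -a -m "work in progress"

    # when the work is ready, replay it onto the latest SVN trunk and push
    git svn rebase      # pull in new upstream SVN revisions
    git svn dcommit     # commit the finished changes back to SVN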

I'd love to switch memcached _to_ git, but it'll take a little more time and require more buy-in from the developers at large. Git isn't very Windows-friendly, although it's nearly ubiquitous everywhere else.

For your case I would recommend a local "centralized" git repository, or electing one of you to be the local patch integration master. I envision a workflow a bit like this (a rough command-level sketch follows the list):

- Either a central git repo, or a git "patchmaster", is elected. This repo is imported via git-svn, and tracks updates from "us".
- Each worker uses a local clone of that git repo. Commits are done locally, etc.

- If two workers go off on a tangent, they may use local branches and pull code from each other.
- Individual code review is a `git pull` into a local branch.

- In the case of a patch master, they will pull completed sections of code from workers. They'll `git-svn fetch && git-svn rebase` to get the latest changes from upstream, then manually merge the finished code. At that point you may do integration testing.

- In the case of a centralized repo, I'm not too sure how that would work. It would still have to track changes to SVN, or someone will need to pull all of the final work and re-integrate it later.
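To make that concrete, here is a rough sketch of the commands involved; every hostname, path, and branch name below is a placeholder:

    # patchmaster: seed a shared repo from SVN (URL is a placeholder)
    git svn clone http://svn.example.org/memcached/trunk/server memcached

    # each worker: clone the patchmaster's repo and work on local branches
    git clone ssh://patchmaster.example.com/srv/git/memcached
    cd memcached
    git checkout -b lock-splitting
    # ...hack on the code, committing locally, early and often...
    git commit -a -m "split the cache lock"

    # two workers on a tangent can pull branches straight from each other
    git remote add alice ssh://alice.example.com/home/alice/memcached
    git fetch alice
    git checkout -b review-alice alice/lock-splitting   # review/test her work

    # patchmaster: refresh from upstream SVN, then merge finished code
    git svn fetch && git svn rebase
    git pull ssh://worker.example.com/home/worker/memcached lock-splitting
    # ...run integration tests, then eventually `git svn dcommit` upstream...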

It's a lot simpler than I'm making it sound. I figure you guys understand ;) It would certainly make our lives simpler today if I could just add Dustin's work as a remote to my local git clone and pull from that for review/etc. Presently it's a disaster with SVN, git-svn, hg, etc.
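That review step would be nothing more than something like this (the remote URL and branch name are made up for illustration):

    # hypothetical remote URL and branch name
    git remote add dustin git://example.com/dustin/memcached.git
    git fetch dustin
    git log -p master..dustin/some-feature   # read through his changes locally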

I'll also take this time to reiterate that frequently getting patches integrated with upstream (i.e., sending them to the list for review and inclusion) will decrease the amount of pain required to integrate the final product. I know you folks like "releasing" finalized things, but you wouldn't do that to other programmers internally, so it makes little sense to do it to us, either.

Obviously there are exceptions, such as withholding nonworking code, branches that "might not make it", etc.

Test environment:
I want to set up a test suite that covers as much of Memcached as possible. How are people doing this?


`make test` ;)

Memcached presently has a server test suite (and a separate client test suite) in the trunk/server/t directory. If you're adding or changing functionality, please add proper perl tests to cover the changes. There are already a lot of tests in there. More would be fantastic.
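Running them is just this (assuming a checkout of trunk/server; the individual test filename is only an example):

    # from a checkout of trunk/server
    ./configure && make
    make test               # runs the whole perl suite under t/

    # individual tests can be run with prove
    # (the filename here is illustrative; ls t/ for the real list)
    prove t/getset.t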

I'd also personally like to see more standard benchmarks we can use for testing fill rate and running extended load tests to help discover slow-moving bugs or memory leaks.
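The crude sort of thing I have in mind looks like the sketch below (not a real benchmark, just a throwaway fill loop over the text protocol with nc; the port and sizes are arbitrary):

    # start a throwaway instance: 64MB cache on a non-default port
    memcached -d -m 64 -p 11311

    # crude fill-rate check: stream 200k sets through nc and time it
    # (some nc variants need -q 0 or -N to exit once stdin hits EOF)
    time seq 1 200000 \
        | awk '{ printf "set key%d 0 0 8\r\n12345678\r\n", $1 }' \
        | nc localhost 11311 > /dev/null

    # leave a loop like that running for hours and watch RSS and `stats`
    # output to catch slow-moving bugs or leaks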

Thanks!
-Dormando
