I heard from an experienced iOS dev that the best way to interconnect locally is GameKit.
Chris

On Fri, Apr 8, 2011 at 11:22 AM, Andrew Tunnell-Jones <[email protected]> wrote:
> I've been pondering a subset of this problem as I want to implement
> opportunistic replication. I'd like to use DNS Service Discovery
> (often referred to as ZeroConf) and I've already written an Erlang
> interface to Apple's Bonjour DNSSD API[1] which works on OS X and
> Linux via Avahi's compatibility layer. The code should be portable to
> Windows but I've not gone there yet.
>
> Slight digression - Apple Bonjour on OS X and Windows can be
> configured to advertise and browse services over the internet (it'll
> set up a port forward via NAT-PMP or uPnP if needed/available) via any
> DNS server which supports DNS Update and has an appropriately
> configured zone[2], though it works a lot better if the server
> implements Apple's extensions for real-time updates (DNS-LLQ) and
> automatic record expiration (DNS-UL). (I've written a CouchDB-backed
> one that does[3].) Avahi's wide-area support is read-only, though
> their trac install indicates write support is slated for 0.7[4].
>
> Back to CouchDB - I haven't worked out the exact mechanics, but I'd
> like to do something along the lines of starting an SSL mochiweb
> listener and then advertising it using the hash of the SSL cert
> (self-signed or otherwise) as the service name. As part of the service
> advertisement, a user-friendly text identifier would be included that
> can be arbitrarily changed (as the cert hash is the real identifier).
> The SSL transport would require mutual acceptance of certificates,
> which would be configured by POSTing a hash to a particular URL. A
> list of services (hashes) and further information on a given service
> (including the friendly identifier) would be retrievable via other
> URLs. Replication would be configured in a similar manner to
> _replicator, with the scheme being dropped from target/source URLs and
> a peer's hash being used in place of a hostname.
> Continuous replication would be persisted between restarts and then
> set up and torn down as services appear and disappear.
>
> Keeping in mind this is mostly conjecture at this point, any thoughts?
>
> Andrew
>
> 1. https://github.com/andrewtj/dnssd_erlang
> 2. http://dns-sd.org/ServerSetup.html (dated)
> 3. https://github.com/andrewtj/dnsxd
> 4. http://avahi.org/milestone/Avahi%200.7
>
>> From: Ryan Ramage <[email protected]>
>> To: [email protected]
>> Date: Thu, 7 Apr 2011 09:40:48 -0600
>> Subject: Re: Peer-to-Peer Replication
>>
>> I think we are missing the issue. We all agree that couch is great
>> at replicating when it has been wired up with src and dest urls.
>>
>> The issue is more around creating distributed graph management to
>> handle nodes (couches) in a peer-to-peer manner. I don't think this
>> space has really been explored.
>>
>> For a local network, there are lots of service discovery protocols
>> that you could use, something like
>> http://en.wikipedia.org/wiki/Zero_configuration_networking or
>> http://en.wikipedia.org/wiki/Universal_Plug_and_Play
>> but of course this would be outside of what couch does. I think
>> BigCouch may have something like that built in, but someone from
>> Cloudant would have to confirm.
>>
>> For a more p2p system, this is a much harder problem. For one, you
>> mentioned the network ports. If you are imagining average people using
>> this, then you would have to deal with managing port forwarding using
>> the Internet Gateway Device Protocol. At least one couch on the end of
>> a replication would have to be accessible over http through a router.
>> You would want a user-friendly way to do this.
>>
>> Next is managing the graph. This is hard. No help from couch again.
>> Nodes going up and down, etc.
>>
>> It would be fun to see some work done on this.
>> For extra points it would be cool if it were done in erlang and
>> could be contributed into the couch core :)
>>
>> Ryan

--
Chris Anderson
http://jchrisa.net
http://couchbase.com
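[Editor's note: Andrew's proposal above (cert hash as service name, peer hash in place of a hostname in `_replicator`-style docs) can be sketched roughly as follows. This is a Python sketch of conjecture, not working code from the thread: the hash algorithm (SHA-1), the hex encoding, and the field names are all assumptions for illustration, since the messages only say "the hash of the SSL cert".]

```python
import hashlib
import json

def service_name(cert_der: bytes) -> str:
    """Derive a DNS-SD service instance name from a peer's SSL cert.

    The thread only says "the hash of the SSL cert"; SHA-1 and hex
    encoding here are illustrative assumptions.
    """
    return hashlib.sha1(cert_der).hexdigest()

def replication_doc(local_db: str, peer_hash: str, remote_db: str) -> dict:
    """Build a _replicator-style document where the peer's cert hash
    stands in for scheme+hostname, as proposed in the thread. The
    exact document shape is hypothetical."""
    return {
        "source": local_db,
        "target": "%s/%s" % (peer_hash, remote_db),
        # Persisted across restarts; set up and torn down as the
        # advertised service appears and disappears.
        "continuous": True,
    }

# A fake self-signed cert body stands in for real DER bytes.
fake_cert = b"-----FAKE CERT FOR ILLUSTRATION-----"
name = service_name(fake_cert)
doc = replication_doc("notes", name, "notes")
print(json.dumps(doc, indent=2))
```

The friendly text identifier from the service advertisement would live alongside this, but only the hash would ever be used for addressing, so renaming a peer never breaks existing replications.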
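[Editor's note: on the port-forwarding point Ryan and Andrew both raise, NAT-PMP is the simpler of the two protocols mentioned. A minimal sketch of the mapping request, following the packet layout in RFC 6886 (version byte, opcode, 16 reserved bits, internal port, suggested external port, lifetime); actually sending it over UDP to the gateway on port 5351 and parsing the response are omitted, and the port/lifetime values are illustrative.]

```python
import struct

NATPMP_PORT = 5351  # gateway's NAT-PMP listening port (RFC 6886)
OP_MAP_UDP = 1
OP_MAP_TCP = 2

def map_request(internal_port: int, external_port: int,
                lifetime: int, opcode: int = OP_MAP_TCP) -> bytes:
    """Build a NAT-PMP port-mapping request (RFC 6886).

    Layout (network byte order): version (always 0), opcode,
    16 reserved bits (zero), internal port, suggested external
    port, requested mapping lifetime in seconds.
    """
    return struct.pack("!BBHHHI", 0, opcode, 0,
                       internal_port, external_port, lifetime)

# Ask the gateway to expose CouchDB's default port for an hour.
pkt = map_request(5984, 5984, 3600)
print(pkt.hex())  # 12-byte request
```

uPnP's Internet Gateway Device protocol can achieve the same mapping but involves SSDP discovery plus a SOAP call, which is why a user-friendly wrapper, as Ryan suggests, would be needed either way.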
