If you're ready to try mad science, you could check out the
scuttlebutt <https://github.com/dominictarr/scuttlebutt> family.
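To give a feel for the idea without pulling in the library itself, here is a toy sketch of the scuttlebutt-style approach using only plain JavaScript: each peer keeps a log of sequence-stamped updates, and peers converge by exchanging whatever updates the other side hasn't seen yet, so a late joiner can catch up from any peer. This is NOT the real scuttlebutt API — `Model`, `applyUpdate`, and `syncTo` are names I made up for illustration.

```javascript
// Toy scuttlebutt-style model: replicate state via an exchangeable update log.
function Model(id) {
  this.id = id;
  this.seq = 0;    // local logical clock
  this.store = {}; // key -> latest value
  this.log = [];   // list of [key, value, seq, sourceId] updates
}

Model.prototype.set = function (key, value) {
  this.applyUpdate([key, value, ++this.seq, this.id]);
};

Model.prototype.applyUpdate = function (update) {
  this.seq = Math.max(this.seq, update[2]); // keep clocks moving forward
  this.store[update[0]] = update[1];
  this.log.push(update);
};

Model.prototype.get = function (key) {
  return this.store[key];
};

// Send every update the other peer hasn't applied yet.
Model.prototype.syncTo = function (other) {
  this.log.forEach(function (u) {
    var seen = other.log.some(function (v) {
      return v[2] === u[2] && v[3] === u[3];
    });
    if (!seen) other.applyUpdate(u);
  });
};

// A late joiner catches up by syncing with any existing peer:
var a = new Model('a');
var b = new Model('b');
a.set('color', 'red');
a.syncTo(b); // b now knows color = red
b.set('size', 'big');
b.syncTo(a); // a learns size = big
```

The real library does the same exchange over duplex streams (so it composes with sockets), but the catch-up-from-any-peer shape is the point here.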
On Monday, 15 July 2013 15:32:33 UTC+2, Jerry Liu wrote:
>
> Hi, I have a question about a pub-sub implementation using Node.js.
>
> The scenario is that I have a chat room whose members transfer lots of
> data. When a new member joins, all existing data should be sent to him,
> and he should also take part in the ongoing data transmission. This is
> really a whiteboard scenario, and it also looks like pub-sub.
>
> I also considered using Redis or RabbitMQ, but the data is usually not
> small (e.g. 100MB), so I don't think it should be stored in a database or
> anything like that.
>
> P2P would probably be a good solution, but I'm not ready for it yet. For
> now I'm still using a client/server model.
>
> In my implementation, I have 3 ways of doing this.
>
> Ideas:
>
> // v1
> var net = require('net');
> var fs = require('fs');
>
> var server = new net.Server();
> server.clients = []; // member list
>
> server.on('connection', function(client) {
>   var s = this;
>   // save to member list
>   s.clients.push(client);
>
>   // read the saved history and send it to the new client
>   var history = fs.readFileSync('/tmp/somewhere'); // should be an async call, simplified here
>   client.write(history);
>
>   client.on('data', function(data) {
>     // send to all other members
>     for (var i = s.clients.length - 1; i >= 0; i--) {
>       if (s.clients[i] !== client) { // a client doesn't send data to itself
>         s.clients[i].write(data);
>       }
>     }
>
>     // append the data to the history file (writeFile would overwrite it)
>     fs.appendFile('/tmp/somewhere', data, function(err) { if (err) throw err; });
>   });
>
>   // there should be more error handling, plus code dealing with closed
>   // clients; I've simplified that away.
> });
>
> // v2
>
> var server = new net.Server();
> server.clients = []; // member list
>
> server.on('connection', function(client) {
>   var s = this;
>   // save to member list
>   s.clients.push(client);
>
>   // stream the saved history to the new client;
>   // end: false keeps the client socket open after the file ends
>   var rd = fs.createReadStream('/tmp/somewhere');
>   rd.pipe(client, { end: false });
>
>   for (var i = s.clients.length - 1; i >= 0; i--) {
>     if (s.clients[i] !== client) {
>       s.clients[i].pipe(client, { end: false }); // all other members pipe to the new one
>       client.pipe(s.clients[i], { end: false }); // the new client pipes to everybody else
>     }
>   }
>
>   // append incoming data to the history file ('a' so it isn't truncated)
>   var wr = fs.createWriteStream('/tmp/somewhere', { flags: 'a' });
>   client.pipe(wr, { end: false });
>
>   // there should be more error handling and code dealing with closed
>   // clients, plus code to close these two streams; I've simplified that away.
> });
>
>
> // v3
>
> // Well, v3 is different. It's hard to represent with real code, so here
> // I use some pseudocode.
>
> var server = new net.Server();
> server.clients = []; // member list
> server.file = fs.openSync('/tmp/somewhere', 'a+'); // should be an async call, simplified here
>
> function processOne(cli, data, callback) {
>   cli.write(data, callback);
> }
>
> function runPendingJobs(cli, file) {
>   if (cli.pendingList.length < 1) {
>     return;
>   }
>   // fetch one job
>   var firstJob = cli.pendingList.shift();
>   // read its chunk from the file
>   var d = readChunkFromFile(firstJob); // pseudocode
>   // process it, and run the next job only once the write has completed
>   processOne(cli, d, function() {
>     runPendingJobs(cli, file);
>   });
> }
>
> server.on('connection', function(client) {
>   var s = this;
>   // save to member list
>   s.clients.push(client);
>
>   // pendingList holds info about data still to be sent.
>   // It works like a queue, so the client receives data chunks one by
>   // one and won't be flooded.
>   client.pendingList = [];
>
>   for (var i = s.clients.length - 1; i >= 0; i--) {
>     if (s.clients[i] !== client) {
>       s.clients[i].on('data', function(data) {
>         // when any data arrives, write it to the file
>         writeToFile(server.file, data); // pseudocode
>         // record its position
>         var newJob = { start: fileEnds, size: data.length }; // fileEnds: current end of file (pseudocode)
>         // and attach it to the client's pending list
>         client.pendingList.push(newJob);
>         // process the job queue
>         runPendingJobs(client, server.file);
>       });
>     }
>   }
>
>   // there should be more error handling, plus code dealing with closed
>   // clients; I've simplified that away.
> });
>
> ---
>
> Results:
>
> I ran all 3 versions on my server, but found that none of them is good.
>
> v1 is the most straightforward way, but if client speeds are not
> symmetrical, lots of Buffers sit in memory waiting to be sent. The more
> clients, the more memory consumed. The same goes for v2.
>
> v3 stores all data in a file first and only attaches minimal information
> to a job queue. However, this method consumes a lot of CPU, and I have
> no idea why.
>
>
>
> As a beginner network programmer, I wonder which of these ideas is
> right, and whether my results match what you'd expect. Also, what's the
> best practice for this pub-sub problem?
>
> Sorry for this VERY long email; I don't even know whether I expressed it
> clearly. Thanks!
>
>
--
Job Board: http://jobs.nodejs.org/
Posting guidelines:
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en