Hi Ilshat, the graphics make sense. Good work!

Regards,
Vicente.
On Mon, Aug 3, 2015 at 9:46 AM Ilshat Shakirov <[email protected]> wrote:

> Hello!
>
> I've written a couple of scripts for testing only the STrPe mechanism (for now). Here they are:
> https://github.com/ishakirov/p2psp/blob/master/tools/test_strpe.py
> https://github.com/ishakirov/p2psp/blob/master/tools/parse_test_strpe.py
> To run them on non-Mac systems, you have to change the "runStream" function (since it uses a Mac-specific path for VLC). This commit also contains changes to the splitter and the peer for logging some data (buffer correctness, current round, team size).
>
> The first script runs the experiment; it takes 4 parameters:
> n - team size, excluding malicious and trusted peers
> t - number of trusted peers
> m - number of malicious peers
> w - wait time in seconds
>
> The second script parses the result files and produces output which can be copy-pasted into Excel or Google Sheets. It takes these parameters:
> n - team size, including malicious and trusted peers
> m - number of malicious peers
>
> So these scripts can be used like:
>
>> ./test_strpe.py -n 48 -t 2 -m 50 -w 200
>> ./parse_test_strpe.py -n 100 -m 50
>
> And here are some results:
> https://docs.google.com/spreadsheets/d/1BEHWLtKTVnZoqjEXNs-uLeH0sRU7SCXmZHSMuWc_-lo/edit#gid=985076992
>
> Now I am doing the same for the STrPe-DS mechanism. Could you please give me some feedback on these graphs? =) I will write a blog post later, when STrPe-DS is ready.
>
>> Your reputation system, as many others, may produce false positives and false negatives.
>>
>> False positive: a bunch of malicious peers complain about a well-intended one.
>> False negative: a bunch of malicious peers does not complain about a malicious peer.
>>
>> About time on the team: a malicious peer can behave correctly for a long period and then suddenly make an attack.
>>
>> It is not an easy task.
>
> To be honest, I don't see how the splitter can determine the malicious peers if more than 50% of the team is malicious. Could you please suggest some direction?
>
> Thanks!
>
> 2015-07-28 12:36 GMT+05:00 L.G.Casado <[email protected]>:
>
>> Dear Ilshat,
>>
>> Your reputation system, as many others, may produce false positives and false negatives.
>>
>> False positive: a bunch of malicious peers complain about a well-intended one.
>> False negative: a bunch of malicious peers does not complain about a malicious peer.
>>
>> About time on the team: a malicious peer can behave correctly for a long period and then suddenly make an attack.
>>
>> It is not an easy task.
>>
>> Best,
>>
>> Leo
>>
>> On Tue, 28-07-2015 at 12:00 +0500, Ilshat Shakirov wrote:
>>
>> As I said, we can assign a reputation parameter to each peer. Let's say that we exclude peer P from the team if Q > X, where X is some threshold value and
>>
>> Q = sum_i (w_i * I_i),
>>
>> where I_i = 1 if the i-th peer marked peer P as malicious and 0 otherwise. w_i can be assigned according to how long the i-th peer has been in the team: if a peer has been in the team since the start of the streaming and has no complaints from the other peers, it has the maximum reputation value (the w_i parameter). So the question is how to choose the w_i optimally (and the threshold value X accordingly).
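>> For illustration, here is a quick sketch of that rule (a toy version; the names are mine, not actual splitter code):
>>
>> def should_exclude(complaints, weights, threshold):
>>     # complaints[i] = 1 if peer i marked P as malicious, 0 otherwise (I_i)
>>     # weights[i] = reputation of peer i (w_i), e.g. rounds spent in the team
>>     q = sum(weights[i] * complaints[i] for i in weights)
>>     return q > threshold
>>
>> # Example: three long-lived peers (w=10) complain about P, two newcomers (w=1) do not.
>> weights = {1: 10, 2: 10, 3: 10, 4: 1, 5: 1}
>> complaints = {1: 1, 2: 1, 3: 1, 4: 0, 5: 0}
>> # With X set to half of the total reputation, P is excluded:
>> print(should_exclude(complaints, weights, threshold=0.5 * sum(weights.values())))  # True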
>> 2015-07-28 1:08 GMT+05:00 Juan Álvaro Muñoz Naranjo <[email protected]>:
>>
>> Hi again,
>>
>> Also, I wanted to develop a heuristic for excluding malicious peers from the team based on the whole team (not only the trusted peers). Do you have any ideas? I'm thinking about something like: "exclude a peer if more than x% of the team marked it as malicious". Also, we can assign a "reputation" to each peer, so some peers will have more influence on the decision to exclude a peer. What do you think?
>>
>> Yeah, we've been considering it. The problem with the x% solution is that it can easily turn against us. Imagine the attacker controls a high percentage of the nodes in the network (that would be easy: just run a number of peers greater than x% of the team on a small number of machines and you've got it) and starts complaining about valid peers. The valid peers would be expelled by the splitter. That would be an easy DoS.
>>
>> So, to reduce the impact of attackers, let's say that we set x to 50%: the attacker would need to control more than half of the team in order to expel someone. But let's say the attacker controls 45% of the peers. Not enough to expel anyone, but now he can act inversely: he uses that 45% of malicious peers to send corrupted chunks to a set of peers smaller than 50% of the team. Those affected legal peers will not be able to play 45% of the packets, so they will probably abandon the team due to playback quality problems. Again a DoS. And the attackers will not be expelled, since the splitter did not receive complaints from at least 50% of the team!
>>
>> Any idea on this?
>>
>> Juan
>>
>> Thanks! =)
>>
>> 2015-07-23 2:01 GMT+05:00 Juan Álvaro Muñoz Naranjo <[email protected]>:
>>
>> Hi Ilshat,
>>
>> First of all, thanks for your update; it was very interesting. Just one thing: when the DS technique is completed, we'll send the public key in X.509 certificate format. Ideally this certificate should be signed by a trusted certificate authority and contain information about the organization managing the splitter, to offer some degree of trust. The certificate might even be distributed with the software, or be given by the web page if we were in a web player with WebRTC. Otherwise an attacker might send its own public key to the peers, impersonating the splitter. But for now it is OK like that.
>>
>> Now, let's get to the point: how to run the experiments. Vicente already suggested the use of tools/create_a_team.sh in a previous message (thank you, Vicente!). Also, Cristóbal suggests this:
>> https://github.com/cristobalmedinalopez/p2psp-chunk-scheduling/blob/master/tools/run_experiment.sh
>> These solutions are for experiments on one machine, of course, which is enough for us. If you need more peers, you should be able to combine several machines by running one script per machine. Of course, we're interested in seeing how the peers' buffers are filled with chunks, not in video playback: as you can see, both scripts send the video signal to /dev/null.
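>> If it helps, here is a rough sketch of such a one-machine driver (the program names, flags and timings are placeholders, not the actual P2PSP tools):
>>
>> import subprocess, time
>>
>> def run_team(n_peers, n_malicious, duration):
>>     procs = [subprocess.Popen(["./splitter"])]  # placeholder splitter command
>>     time.sleep(1)  # give the splitter time to start listening
>>     for i in range(n_peers):
>>         cmd = ["./peer", "--log", "peer_%d.log" % i]  # hypothetical flags
>>         if i < n_malicious:
>>             cmd.append("--malicious")  # hypothetical flag
>>         # the video signal is discarded; we only keep the logs
>>         procs.append(subprocess.Popen(cmd, stdout=subprocess.DEVNULL))
>>     time.sleep(duration)  # let the experiment run
>>     for p in procs:
>>         p.terminate()
>>
>> run_team(n_peers=100, n_malicious=25, duration=200)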
>> Which experiment to run? We propose the following: we're interested in the average expulsion time of an attacker, and in whether all of them are expelled after a given time. Also, in the average percentage of gaps in the peers' buffers (so we can see whether playback is possible in the presence of attackers, and after how long). I think you should measure time in terms of sending rounds (you know, one round would be the splitter sending one chunk to every member of the team).
>>
>> So, let's say that you have a team of 100 peers. From that team, a percentage of the peers will be malicious: 1%, 10%, 25%, 50%. I imagine a plot in which the X axis is time (number of rounds) and in which we depict: the number of remaining malicious peers in the team (because some of them will be expelled) and the average filling of the peers' buffers. Ideally, as the number of remaining malicious peers decreases, the filling of the buffers should increase.
>>
>> Showing the number of complaints from peers in the first technique would also be interesting.
>>
>> Another thing to measure would be the percentage of bandwidth used for real multimedia data (that is, how many bytes out of the total are really used for transmitting the video). You can compare the baseline (no security measures, just plain video without malicious attackers) against both techniques.
>>
>> So, for running these experiments you'll need to decide which information you want to store from each peer (buffer filling percentage at each iteration, how many malicious peers at each iteration, how many bytes were sent and how many of them were used for video, how many complaints arrived at the splitter in each iteration). Am I forgetting anything?
>>
>> My suggestion is to run the experiment for the first technique and see how it goes. Make sure to run the experiment more than once, say 5 times, and then take the average over all the runs.
>>
>> Good work,
>>
>> Juan
>>
>> 2015-07-21 20:06 GMT+02:00 Vicente Gonzalez <[email protected]>:
>>
>> Hi Ilshat,
>>
>> Did you try tools/create_a_team.sh? (I tested running up to 100 peers on my 8 GB Mac machine.)
>>
>> Regards,
>> Vi.
>>
>> On Sun, Jul 19, 2015 at 8:36 PM Ilshat Shakirov <[email protected]> wrote:
>>
>> Hello!
>>
>> Sorry for the long delay.
>>
>> Here is a status update on the CIS of rules project: http://shakirov-dev.blogspot.ru/2015/07/5-6-7-week.html
>>
>> Also, I need some help with testing big (i.e., 20-peer) P2PSP teams. I want a solution that allows the testing experiments to be reproduced easily, so commenting out lines (to remove the need to run VLC) is not suitable for this. I've written a simple script which runs several peers (on one machine), and here is the result:
>> <https://www.evernote.com/shard/s427/sh/0b070670-8de9-4a61-acec-562035cfc3ef/7403917d3ca736eea6d60da8ba23543b>
>> I think it's quite hard to understand anything from this (or to reproduce it). So, what is the best solution for testing P2PSP teams and gathering some stats?
>>
>> Thanks!
>>
>> 2015-06-25 16:13 GMT+05:00 Vicente Gonzalez <[email protected]>:
>>
>> On Wed, Jun 24, 2015 at 5:48 PM L.G.Casado <[email protected]> wrote:
>>
>> Hi all,
>>
>> On Wed, 24-06-2015 at 16:44 +0500, Ilshat Shakirov wrote:
>>
>> OK; is there any option to run a peer without running a player? I'm going to run all the peers on one local machine; is that right?
>>
>> At this moment, the easiest way to test a lot of peers on one machine is to connect a NetCat client [http://netcat.sourceforge.net/] to each peer. It is not the most efficient solution, but you should be able to run hundreds of peers on an 8 GB machine. However, it is quite simple to avoid sending the stream in each peer: just comment out (temporarily) the code that feeds the player.
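>> For instance, if the peer's main loop looks something like this sketch (not the actual peer code), the last two lines are the ones to disable:
>>
>> def run_peer(team_socket, player_socket, buffer_size=32, feed_player=True):
>>     buffer = [None] * buffer_size
>>     while True:
>>         # chunk number (2 bytes) + payload; the sizes here are hypothetical
>>         data, sender = team_socket.recvfrom(2 + 1024)
>>         number = int.from_bytes(data[:2], "big")
>>         buffer[number % buffer_size] = data[2:]
>>         if feed_player:  # run with feed_player=False to skip the player
>>             player_socket.sendall(data[2:])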
>> Regards,
>> Vi.

--
Vicente González Ruiz
Depto de Informática
Escuela Técnica Superior de Ingeniería
Universidad de Almería

Carretera Sacramento S/N
04120, La Cañada de San Urbano
Almería, España

e-mail: [email protected]
http://www.ual.es/~vruiz
tel: +34 950 015711
fax: +34 950 015486
--
Mailing list: https://launchpad.net/~p2psp
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~p2psp
More help   : https://help.launchpad.net/ListHelp

