Dear Ilshat,

Very nice piece of work. Now we can move on to studying the details of the experimentation. In order to do so, and to facilitate the evaluation, you should include a brief description of the results and the details of how you obtained them. Excuse me, but I am involved in many other things and I need this level of description in order to help with the improvement.
I have many questions. I know I could get the answers from the code, but I would like you to answer them so that I am aware of what is being measured. In your example:

  ./test_strpe.py -n 48 -t 2 -m 50 -w 200

- Why are the values of n different between the two scripts? As far as I understood from the spreadsheet, n is always 100.
- What is the waiting time? Was it the same in all the experiments?
- Just a curiosity: in the first case (m=10), 10 rounds are needed to have the 10 malicious and 100 well-intended peers in the team. Are they added randomly? My concern is whether the initialization affects the results.
- When do the malicious peers start to attack?
- What type of attack is measured?
- How is buffer_correctness calculated? (An average? The best and worst cases would also be good to know.)
- Why do well-intended peers leave the team?

The interesting information will be the summary from the spreadsheet, once the type of attack is fixed. For instance: how is buffer_correctness affected by the number of good and malicious peers? Can we guess it mathematically? What is the number of rounds needed to expel a given number of malicious peers? Does it depend on the number of good/trusted peers? I think there is a direct correlation between/among:

i) buffer_correctness and the number of malicious peers;
ii) the number of rounds to expel the malicious peers, the number of good peers, and the number of trusted peers.

Taking into account the number of trusted peers:

- Comparing -n 100 -m 10 -t 1 with -n 100 -m 10 -t 2, the malicious peers are expelled at rounds 75 and 30, respectively. This seems logical: more trusted peers, earlier expulsion. Out of curiosity, why does one malicious peer remain in the first case and not in the second?
- Comparing -n 100 -m 25 -t 1 with -n 100 -m 25 -t 2, the malicious peers are expelled at rounds 91 and 84, respectively, which is not much of a difference: you save only 7 rounds by adding one trusted peer. This is not logical.
- For the last case, comparing -n 100 -m 25 -t 1 with -n 100 -m 25 -t 2, the malicious peers are expelled at rounds 139 (2 malicious peers remain; why?) and 102, respectively. These runs are not comparable.

In order to compare similar things, please, can you run the experiments until no malicious peers remain in the team? Can we guess the final number of good peers?

I think the attack should start once all the peers are in the team, to hide the initialization phase, and those rounds should not count when determining how many rounds were needed to expel them all. Additionally, due to the non-deterministic behavior, 5 runs should be done, removing the best and the worst in terms of the number of rounds needed to expel all the malicious peers, and reporting the average of the other three.

Regarding:

> To be honest, I don't see how the splitter can determine the malicious
> peers if there are more than 50% of malicious peers in the team. Could
> you please advise some direction?

We have some ideas, but it is better to know how and why the system works first.

I do not know if this can be seen as included in GSoC, but the answers to the previous questions are what we want to know, because from them we can determine which code will finally be included.
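To make the averaging protocol concrete, here is a minimal sketch; the input values below are only illustrative, the real ones would come from your parse script:

    # 5 runs; drop the best and the worst (in rounds needed to expel
    # all malicious peers) and average the remaining three.
    def trimmed_average(rounds_per_run):
        assert len(rounds_per_run) == 5
        middle_three = sorted(rounds_per_run)[1:-1]  # drop min and max
        return sum(middle_three) / float(len(middle_three))

    # Illustrative values only, not measured results:
    print(trimmed_average([69, 75, 78, 81, 102]))  # -> 78.0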
Best regards,

Leo

On Mon, 2015-08-03 at 12:46 +0500, Ilshat Shakirov wrote:
> Hello!
>
> I've written a couple of scripts for testing only the STrPe mechanism (for now).
> Here they are:
> https://github.com/ishakirov/p2psp/blob/master/tools/test_strpe.py
> https://github.com/ishakirov/p2psp/blob/master/tools/parse_test_strpe.py
>
> To run them on non-Mac systems, you have to change the "runStream" function
> (since it runs a Mac-specific path for vlc). This commit also contains
> changes to the splitter and the peer for logging some data (buffer
> correctness, current round, team size).
>
> The first script runs the experiment; it has 4 params:
> n - team size without malicious and trusted peers
> t - number of trusted peers
> m - number of malicious peers
> w - wait time in seconds
>
> The second script parses the result files and produces an output which can
> be copy-pasted into Excel or a Google sheet. It runs with these params:
> n - team size with malicious and trusted peers
> m - number of malicious peers
>
> So these scripts can be used like:
> ./test_strpe.py -n 48 -t 2 -m 50 -w 200
> ./parse_test_strpe.py -n 100 -m 50
>
> And here are some results:
> https://docs.google.com/spreadsheets/d/1BEHWLtKTVnZoqjEXNs-uLeH0sRU7SCXmZHSMuWc_-lo/edit#gid=985076992
>
> Now I am doing the same for the STrPe-DS mechanism. Could you please give me
> some feedback on these graphs? =)
> I will write a blog post later, when STrPe-DS is ready.
>
> > Your reputation system, as many others, may produce false positives and
> > false negatives.
> >
> > False positive: a bunch of malicious peers complain about a well-intended one.
> > False negative: a bunch of malicious peers does not complain about a
> > malicious peer.
> >
> > About time on the team: a malicious peer can perform correctly for a long
> > period and suddenly make an attack.
> >
> > It is not an easy task.
>
> To be honest, I don't see how the splitter can determine the malicious peers
> if there are more than 50% of malicious peers in the team. Could you please
> advise some direction?
>
> Thanks!
>
> 2015-07-28 12:36 GMT+05:00 L.G.Casado <[email protected]>:
> > Dear Ilshat,
> >
> > Your reputation system, as many others, may produce false positives and
> > false negatives.
> >
> > False positive: a bunch of malicious peers complain about a well-intended one.
> > False negative: a bunch of malicious peers does not complain about a
> > malicious peer.
> >
> > About time on the team: a malicious peer can perform correctly for a long
> > period and suddenly make an attack.
> >
> > It is not an easy task.
> >
> > Best,
> >
> > Leo
> >
> > On Tue, 2015-07-28 at 12:00 +0500, Ilshat Shakirov wrote:
> > > As I said, we can assign some reputation parameter to each peer. Let's
> > > say that we exclude peer P from the team if Q > X, where X is some
> > > threshold value and
> > >
> > >   Q = sum_i (w_i * I_i),
> > >
> > > where I_i = 1 if the i-th peer marked peer P as malicious and 0 otherwise.
> > > w_i can be assigned by the time the peer has been in the team. So if a
> > > peer has been in the team from the start of the streaming and has no
> > > complaints from the other peers, it has the maximum reputation value
> > > (the w_i parameter).
> > > So the question is how to choose w_i optimally (and the threshold value
> > > X accordingly).
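A minimal sketch of this weighted-complaint rule, as I read it (normalizing Q by the total weight is an assumption added here, so that the threshold X can be chosen in [0, 1]; the messages above do not fix that detail):

    # Expel peer P if Q > X, with Q = sum_i(w_i * I_i).
    def complaint_score(complaints, weights):
        # complaints: dict peer_id -> True if that peer marked P as malicious
        # weights:    dict peer_id -> reputation w_i (e.g. rounds in the team)
        q = sum(weights[i] for i in complaints if complaints[i])
        return q / float(sum(weights.values()))  # normalization is an assumption

    def should_expel(complaints, weights, threshold_x):
        return complaint_score(complaints, weights) > threshold_x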
> > > 2015-07-28 1:08 GMT+05:00 Juan Álvaro Muñoz Naranjo <[email protected]>:
> > > > Hi again,
> > > >
> > > > > Also, I wanted to develop a heuristic for excluding malicious peers
> > > > > from the team based on the whole team (not only the trusted peers).
> > > > > Do you have any ideas? I think about something like: 'exclude a peer
> > > > > if more than x% of the team marked it as malicious'. Also, we can
> > > > > assign a 'reputation' to each peer, so some peers will have more
> > > > > influence on the decision of excluding a peer. What do you think?
> > > >
> > > > Yeah, we've been considering it. The problem with the x% solution is
> > > > that it can easily turn against us. Imagine the attacker controls a
> > > > high percentage of the nodes in the network (that would be easy: just
> > > > run a huge number of peers, more than x%, on a small number of
> > > > machines and you've got it) and starts complaining about valid peers.
> > > > The valid peers would be expelled by the splitter. That would be an
> > > > easy DoS.
> > > >
> > > > So, to reduce the impact of attackers, let's say that we set x to 50%:
> > > > the attacker would need to control more than half of the team in order
> > > > to expel someone. But let's say the attacker controls 45% of the
> > > > peers. Not enough to expel anyone, but now he can act inversely: he
> > > > uses that 45% of malicious peers to send corrupted chunks to a set of
> > > > peers smaller than 50% of the team. Those affected legitimate peers
> > > > will not be able to play 45% of the packets, so they will probably
> > > > abandon the team due to playback quality problems. Again a DoS. And
> > > > the attackers will not be expelled, since the splitter did not receive
> > > > at least 50% of complaints!
> > > >
> > > > Any idea on this?
> > > >
> > > > Juan
> > > >
> > > > > Thanks! =)
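The two scenarios above reduce to simple arithmetic. A sketch, assuming a plain "more than x% of the team complains" rule with no reputation weighting:

    def coalition_can_expel(team_size, attackers, x):
        # False-positive DoS: the coalition votes a well-intended peer out.
        return attackers > x * team_size

    def attack_stays_undetected(team_size, victims, x):
        # False-negative DoS: the attackers poison chunks for `victims`
        # peers only, so complaints about any attacker come from at most
        # `victims` peers and never exceed the x% threshold.
        return victims <= x * team_size

    print(coalition_can_expel(100, 45, 0.5))      # False: 45 votes are not enough
    print(attack_stays_undetected(100, 49, 0.5))  # True: nobody gets expelled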
> > > > > 2015-07-23 2:01 GMT+05:00 Juan Álvaro Muñoz Naranjo <[email protected]>:
> > > > > > Hi Ilshat,
> > > > > >
> > > > > > First of all, thanks for your update; it was very interesting. Just
> > > > > > one thing: when the DS technique is completed, we'll send the public
> > > > > > key under an X.509 certificate format. Ideally this certificate
> > > > > > should be signed by a trusted certificate authority and contain
> > > > > > information about the organization managing the splitter, to offer
> > > > > > some degree of trust. The certificate might even be distributed with
> > > > > > the software, or be given by the web page if we were in a web player
> > > > > > with WebRTC. Otherwise the attacker might send its own public key to
> > > > > > the peers, impersonating the splitter. But for now it is OK like that.
> > > > > >
> > > > > > Now, let's get to the point: how to run the experiments. Vicente
> > > > > > already suggested the use of tools/create_a_team.sh in a previous
> > > > > > message (thank you, Vicente!). Also, Cristóbal suggests this:
> > > > > > https://github.com/cristobalmedinalopez/p2psp-chunk-scheduling/blob/master/tools/run_experiment.sh
> > > > > >
> > > > > > These solutions are for experiments on one machine, of course, which
> > > > > > is enough for us. If you need more peers, you should be able to
> > > > > > combine several machines by running one script per machine. Of
> > > > > > course, we're interested in seeing how the peers' buffers are filled
> > > > > > with chunks and not in video playback: as you can see, both scripts
> > > > > > send the video signal to /dev/null.
> > > > > >
> > > > > > Which experiment to run? We propose the following: we're interested
> > > > > > in the average expulsion times for an attacker, and whether all of
> > > > > > them are expelled after a given time. Also, the average percentage
> > > > > > of gaps in the peers' buffers (so we can see if playback is possible
> > > > > > in the presence of attackers, and after how long). I think you
> > > > > > should measure time in terms of sending rounds (you know, one round
> > > > > > would be the splitter sending one chunk to every member of the team).
> > > > > >
> > > > > > So, let's say that you have a team of 100 peers. From that team, a
> > > > > > percentage of peers will be malicious: 1%, 10%, 25%, 50%. I imagine
> > > > > > a plot in which the X axis is time (number of rounds) and in which
> > > > > > we depict: the number of remaining malicious peers in the team
> > > > > > (because some of them will be expelled) and the average filling of
> > > > > > the peers' buffers. Ideally, as the number of remaining malicious
> > > > > > peers decreases, the filling of the buffers should increase.
> > > > > >
> > > > > > Showing the number of complaints from peers in the first technique
> > > > > > would also be interesting.
> > > > > >
> > > > > > Another thing to measure would be the percentage of bandwidth used
> > > > > > for real multimedia data (that is, how many bytes of the total are
> > > > > > really used for transmitting the video). You can compare the
> > > > > > baseline (no security measures, just plain video without malicious
> > > > > > attackers) against both techniques.
> > > > > >
> > > > > > So, for running these experiments you'll need to decide which
> > > > > > information you want to store from each peer (buffer filling
> > > > > > percentage at each iteration, how many malicious peers at each
> > > > > > iteration, how many bytes were sent and how many of them were used
> > > > > > for video, how many complaints arrived at the splitter in every
> > > > > > iteration). Am I forgetting anything?
> > > > > >
> > > > > > My suggestion is to run the experiment for the first technique and
> > > > > > see how it goes. Make sure to run the experiment more than once, say
> > > > > > 5 times, and then take the average of them all.
> > > > > >
> > > > > > Good work,
> > > > > >
> > > > > > Juan
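The plot described above could be sketched as follows; the two series are placeholder shapes only, not measured results, standing in for the values that parse_test_strpe.py would extract from the logs:

    # Rounds on the X axis; remaining malicious peers and average buffer
    # filling on the two Y axes.
    import matplotlib.pyplot as plt

    rounds = range(0, 101)
    malicious_left = [max(0, 25 - r // 3) for r in rounds]  # placeholder shape
    buffer_fill = [100.0 - m for m in malicious_left]       # placeholder shape

    fig, ax1 = plt.subplots()
    ax1.plot(rounds, malicious_left, "r-")
    ax1.set_xlabel("round")
    ax1.set_ylabel("remaining malicious peers", color="r")
    ax2 = ax1.twinx()
    ax2.plot(rounds, buffer_fill, "b-")
    ax2.set_ylabel("average buffer filling (%)", color="b")
    fig.savefig("expulsion_vs_buffer.png")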
> > > > > > 2015-07-21 20:06 GMT+02:00 Vicente Gonzalez <[email protected]>:
> > > > > > > Hi Ilshat,
> > > > > > >
> > > > > > > Did you try tools/create_a_team.sh?
> > > > > > > (I tested running up to 100 peers on my 8GB Mac machine.)
> > > > > > >
> > > > > > > Regards,
> > > > > > > Vi.
> > > > > > >
> > > > > > > On Sun, Jul 19, 2015 at 8:36 PM Ilshat Shakirov <[email protected]> wrote:
> > > > > > > > Hello!
> > > > > > > >
> > > > > > > > Sorry for the long delay.
> > > > > > > >
> > > > > > > > Here is a status update on the CIS of rules project:
> > > > > > > > http://shakirov-dev.blogspot.ru/2015/07/5-6-7-week.html
> > > > > > > >
> > > > > > > > Also, I need some help with testing big (i.e., 20 peers) p2psp
> > > > > > > > teams. I want a solution that allows the testing experiments to
> > > > > > > > be reproduced easily, so commenting out lines (to remove the
> > > > > > > > need to run vlc) is not suitable for this.
> > > > > > > >
> > > > > > > > I've written a simple script which runs several peers (on one
> > > > > > > > machine), and here is the result. I think it's quite hard to
> > > > > > > > understand anything in this (and to reproduce it). So, what is
> > > > > > > > the best solution for testing p2psp teams and gathering some
> > > > > > > > stats?
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > >
> > > > > > > > 2015-06-25 16:13 GMT+05:00 Vicente Gonzalez <[email protected]>:
> > > > > > > > > On Wed, Jun 24, 2015 at 5:48 PM L.G.Casado <[email protected]> wrote:
> > > > > > > > > > Hi all,
> > > > > > > > > >
> > > > > > > > > > On Wed, 2015-06-24 at 16:44 +0500, Ilshat Shakirov wrote:
> > > > > > > > > > > OK; is there any option to run a peer without running a
> > > > > > > > > > > player? I'm going to run all the peers on one local
> > > > > > > > > > > machine, is that right?
> > > > > > > > >
> > > > > > > > > At this moment, the easiest way to test a lot of peers on one
> > > > > > > > > machine is to connect a NetCat client to each peer
> > > > > > > > > [http://netcat.sourceforge.net/]. It is not the most efficient
> > > > > > > > > solution, but you should be able to run hundreds of peers on
> > > > > > > > > an 8GB machine. However, it is quite simple to avoid sending
> > > > > > > > > the stream in each peer: just comment out (temporarily) the
> > > > > > > > > code that feeds the player.
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > Vi.
> > > > > > > > > --
> > > > > > > > > Vicente González Ruiz
> > > > > > > > > Depto de Informática
> > > > > > > > > Escuela Técnica Superior de Ingeniería
> > > > > > > > > Universidad de Almería
> > > > > > > > > Carretera Sacramento S/N
> > > > > > > > > 04120, La Cañada de San Urbano
> > > > > > > > > Almería, España
> > > > > > > > > e-mail: [email protected]
> > > > > > > > > http://www.ual.es/~vruiz
> > > > > > > > > tel: +34 950 015711
> > > > > > > > > fax: +34 950 015486
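As a side note, the NetCat trick quoted above can be reproduced in a few lines of Python if a portable fake player is preferred; the port number below is only an example and must match whatever port the peer feeds its player on:

    # Fake player: connect to the peer's player port and discard
    # everything, like `netcat localhost 9999 > /dev/null`.
    import socket

    def drain(port, host="localhost"):
        sock = socket.create_connection((host, port))
        while sock.recv(65536):  # an empty read means the peer closed the socket
            pass
        sock.close()

    drain(9999)  # hypothetical port, just an example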
--
Mailing list: https://launchpad.net/~p2psp
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~p2psp
More help   : https://help.launchpad.net/ListHelp

