[tor-commits] [onionperf/develop] Pass date_filter argument to parent analysis

2020-07-23 Thread karsten
commit e32deb2994576cd1bfb83477da5d36b7991ef4d1
Author: Ana Custura 
Date:   Wed Jul 22 15:10:06 2020 +0100

Pass date_filter argument to parent analysis
---
 onionperf/analysis.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index e269f6a..898b165 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -34,7 +34,7 @@ class OPAnalysis(Analysis):
 return
 
 self.date_filter = date_filter
-super().analyze(do_complete=True)
+super().analyze(do_complete=True, date_filter=self.date_filter)
 torctl_parser = TorCtlParser(date_filter=self.date_filter)
 
 for (filepaths, parser, json_db_key) in [(self.torctl_filepaths, torctl_parser, 'tor')]:
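The one-line fix above forwards the analysis date filter to the parent class instead of silently dropping it. A minimal standalone sketch of the pattern, using hypothetical stub classes rather than the real OnionPerf API:

```python
# Minimal sketch of forwarding a keyword argument to a parent class's
# analyze(), mirroring the one-line fix above. These stub classes are
# hypothetical; the real OnionPerf Analysis class does far more.
class Analysis:
    def analyze(self, do_complete=False, date_filter=None):
        self.do_complete = do_complete
        self.date_filter = date_filter  # the parent now sees the filter


class OPAnalysis(Analysis):
    def analyze(self, date_filter=None):
        self.date_filter = date_filter
        # Before the fix: super().analyze(do_complete=True) -- the parent
        # analyzed all dates. After: the filter is passed through.
        super().analyze(do_complete=True, date_filter=self.date_filter)
```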



___
tor-commits mailing list
tor-commits@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-commits


[tor-commits] [onionperf/develop] Fix unit tests.

2020-07-23 Thread karsten
commit 899dbf2518a191167d779063a85c6c0285a68b7b
Author: Karsten Loesing 
Date:   Thu Jul 23 21:10:22 2020 +0200

Fix unit tests.

Latest TGenTools contain a fix for correctly computing the start time
of failed streams. We have to update our unit tests.

Related to #30362.
---
 onionperf/tests/test_analysis.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/onionperf/tests/test_analysis.py b/onionperf/tests/test_analysis.py
index 82e2043..6a8280b 100644
--- a/onionperf/tests/test_analysis.py
+++ b/onionperf/tests/test_analysis.py
@@ -48,7 +48,7 @@ def test_stream_complete_event_init():
 assert_equals(complete.time_info['usecs-to-proxy-choice'], '348')
 assert_equals(complete.time_info['usecs-to-socket-connect'], '210')
 assert_equals(complete.time_info['usecs-to-socket-create'], '11')
-assert_equals(complete.unix_ts_start, 1555940480.6472511)
+assert_equals(complete.unix_ts_start, 1555940359.2286081)
 
 
 def test_stream_error_event():
@@ -146,7 +146,7 @@ def test_stream_object_end_to_end():
 'now-ts': '5948446579043'
 },
'stream_id': '4:12:localhost:127.0.0.1:46878:dc34og3c3aqdqntblnxkstzfvh7iy7llojd4fi5j23y2po32ock2k7ad.onion:0.0.0.0:8080',
-'unix_ts_start': 1555940480.6472511
+'unix_ts_start': 1555940359.2286081
 })
 
 def test_parsing_parse_error():



[tor-commits] [onionperf/develop] Retrieve error_code from the stream_info dict

2020-07-23 Thread karsten
commit 89c8a8f826ee0dc460feb81f67e6bebfbbb9981f
Author: Ana Custura 
Date:   Thu Jul 23 15:04:57 2020 +0100

Retrieve error_code from the stream_info dict
---
 onionperf/visualization.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 5e7f92b..daa5c3d 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -88,8 +88,8 @@ class TGenVisualization(Visualization):
 s = stream_data["elapsed_seconds"]
 if stream_data["stream_info"]["recvsize"] == "5242880" and "0.2" in s["payload_progress_recv"]:
  stream["mbps"] = 4.194304 / (s["payload_progress_recv"]["0.2"] - s["payload_progress_recv"]["0.1"])
-if "error" in stream_data["transport_info"] and stream_data["transport_info"]["error"] != "NONE":
-error_code = stream_data["transport_info"]["error"]
+if "error" in stream_data["stream_info"] and stream_data["stream_info"]["error"] != "NONE":
+error_code = stream_data["stream_info"]["error"]
 if "local" in stream_data["transport_info"] and len(stream_data["transport_info"]["local"].split(":")) > 2:
 source_port = stream_data["transport_info"]["local"].split(":")[2]
 if "unix_ts_end" in stream_data:
 if "unix_ts_end" in stream_data:





[tor-commits] [onionperf/develop] Rotate error code on y axis tick labels.

2020-07-23 Thread karsten
commit 30aa6dcb52d31403ab366599b13d7fa4db3adb80
Author: Karsten Loesing 
Date:   Thu Jul 23 21:36:10 2020 +0200

Rotate error code on y axis tick labels.

Still part of #34218.
---
 onionperf/visualization.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index daa5c3d..3abcb3f 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -292,6 +292,7 @@ class TGenVisualization(Visualization):
 g.set(title=title, xlabel=xlabel, ylabel=ylabel,
   xlim=(xmin - 0.03 * (xmax - xmin), xmax + 0.03 * (xmax - xmin)))
 plt.xticks(rotation=10)
+plt.yticks(rotation=80)
 sns.despine()
 self.page.savefig()
 plt.close()





[tor-commits] [onionperf/develop] Merge branch 'task-34218' into develop

2020-07-23 Thread karsten
commit dd42ef1d5193bf96a59ee336ff6aff8f53bfd979
Merge: 899dbf2 30aa6dc
Author: Karsten Loesing 
Date:   Thu Jul 23 21:37:11 2020 +0200

Merge branch 'task-34218' into develop

 onionperf/analysis.py  |   6 +++
 onionperf/visualization.py | 108 +++--
 2 files changed, 72 insertions(+), 42 deletions(-)



[tor-commits] [onionperf/develop] Refine error codes into TOR or TGEN errors.

2020-07-23 Thread karsten
commit 4533b39591cc0a6df124e017366ea71343c842bf
Author: Karsten Loesing 
Date:   Fri Jul 17 22:24:43 2020 +0200

Refine error codes into TOR or TGEN errors.

With this change we include more detailed error codes in visualization
output. In order to do so we map TGen transfers/streams to TorCtl
STREAM event details based on source ports and unix_ts_end timestamps.
This code reuses some concepts used in metrics-lib.

Implements tpo/metrics/onionperf#34218.
---
 onionperf/analysis.py  |   6 +++
 onionperf/visualization.py | 107 +++--
 2 files changed, 71 insertions(+), 42 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index e269f6a..8fcc0a6 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -97,6 +97,12 @@ class OPAnalysis(Analysis):
 except:
 return None
 
+def get_tor_streams(self, node):
+try:
+return self.json_db['data'][node]['tor']['streams']
+except:
+return None
+
 @classmethod
  def load(cls, filename="onionperf.analysis.json.xz", input_prefix=os.getcwd()):
 filepath = os.path.abspath(os.path.expanduser("{0}".format(filename)))
diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 80c0781..5e7f92b 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -54,50 +54,57 @@ class TGenVisualization(Visualization):
 streams = []
 for (analyses, label) in self.datasets:
 for analysis in analyses:
-if analysis.json_db['version'] >= '3':
-for client in analysis.get_nodes():
- tgen_streams = analysis.get_tgen_streams(client)
- for stream_id, stream_data in tgen_streams.items():
- stream = {"id": stream_id, "label": label,
- "filesize_bytes": int(stream_data["stream_info"]["recvsize"]),
- "error_code": None}
- stream["server"] = "onion" if ".onion:" in stream_data["transport_info"]["remote"] else "public"
- if "time_info" in stream_data:
- s = stream_data["time_info"]
- if "usecs-to-first-byte-recv" in s:
- stream["time_to_first_byte"] = float(s["usecs-to-first-byte-recv"])/1000000
- if "usecs-to-last-byte-recv" in s:
- stream["time_to_last_byte"] = float(s["usecs-to-last-byte-recv"])/1000000
- if "elapsed_seconds" in stream_data:
- s = stream_data["elapsed_seconds"]
- # Explanation of the math below for computing Mbps: From stream_info/recvsize
- # and payload_progress_recv fields we can compute the number of seconds that
- # have elapsed between receiving bytes 524,288 and 1,048,576, which is a
- # total amount of 524,288 bytes or 4,194,304 bits or 4.194304 megabits.
- # We want the reciprocal of that value with unit megabits per second.
- if stream_data["stream_info"]["recvsize"] == "5242880" and "0.2" in s["payload_progress_recv"]:
-  stream["mbps"] = 4.194304 / (s["payload_progress_recv"]["0.2"] - s["payload_progress_recv"]["0.1"])
- if "error" in stream_data["transport_info"] and stream_data["transport_info"]["error"] != "NONE":
- stream["error_code"] = stream_data["transport_info"]["error"]
- if "unix_ts_start" in stream_data:
- stream["start"] = datetime.datetime.utcfromtimestamp(stream_data["unix_ts_start"])
- streams.append(stream)
-else:
-for client in analysis.get_nodes():
-tgen_transfers = analysis.get_tgen_transfers(client)
-for transfer_id, transfer_data in tgen_transfers.items():
+for client in analysis.get_nodes():
+tor_streams_by_source_port = {
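The comment in the removed hunk explains the throughput math: between the 10% and 20% progress marks of a 5 MiB download, 524,288 bytes (4.194304 megabits) arrive. Worked standalone, with made-up elapsed-seconds values:

```python
# Hypothetical elapsed-seconds progress marks for a 5 MiB (5,242,880 byte)
# download: 10% received after 1.2 s, 20% received after 2.2 s.
payload_progress_recv = {"0.1": 1.2, "0.2": 2.2}
recvsize = "5242880"

mbps = None
if recvsize == "5242880" and "0.2" in payload_progress_recv:
    # 524,288 bytes = 4,194,304 bits = 4.194304 megabits arrived in
    # (2.2 - 1.2) = 1.0 seconds, so throughput is ~4.194304 Mbps.
    mbps = 4.194304 / (payload_progress_recv["0.2"] - payload_progress_recv["0.1"])
```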

[tor-commits] [onionperf/develop] Do some repository housekeeping.

2020-07-24 Thread karsten
commit 7576ce3ede211722db59bde3ccbc7e79ac4ac60c
Author: Karsten Loesing 
Date:   Thu Jul 23 20:18:03 2020 +0200

Do some repository housekeeping.

Fixes #40006.
---
 .gitignore |  4 ++--
 .gitlab-ci.yml | 23 ---
 .gitmodules|  0
 README.md  |  6 ++
 Vagrantfile| 35 ---
 debian/changelog   |  5 -
 debian/compat  |  1 -
 debian/control | 40 
 debian/install |  1 -
 debian/rules   | 15 ---
 debian/source/format   |  1 -
 onionperf/__init__.py  |  1 +
 onionperf/analysis.py  |  1 +
 onionperf/measurement.py   |  1 +
 onionperf/model.py |  1 +
 onionperf/monitor.py   |  1 +
 onionperf/onionperf|  1 +
 onionperf/util.py  |  1 +
 onionperf/visualization.py |  1 +
 run_tests.sh   |  3 ---
 20 files changed, 16 insertions(+), 126 deletions(-)

diff --git a/.gitignore b/.gitignore
index 5e79b0e..e3e7b28 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,8 +1,8 @@
-/.onionperf/*
 onionperf-data
+onionperf-private
 venv
 *.json.xz
 *.pdf
+*.csv
 *.pyc
-onionperf/docs/_build
 .coverage
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
deleted file mode 100644
index aed7769..0000000
--- a/.gitlab-ci.yml
+++ /dev/null
@@ -1,23 +0,0 @@
-variables:
-  GIT_STRATEGY: clone
-
-stages:
- - test
-
-test:
- stage: test
- image: debian:buster
- coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
- script:
-  - apt -y update
-  - apt -y install git cmake make build-essential gcc libigraph0-dev libglib2.0-dev python-dev libxml2-dev python-lxml python-networkx python-scipy python-matplotlib python-numpy libevent-dev libssl-dev python-stem tor python-nose python-cov-core
-  - git clone https://github.com/shadow/tgen.git
-  - mkdir -p tgen/build
-  - pushd tgen/build
-  - cmake ..
-  - make
-  - ln -s `pwd`/tgen /usr/bin/
-  - popd
-  - python setup.py build
-  - python setup.py install
-  - ./run_tests.sh
diff --git a/.gitmodules b/.gitmodules
deleted file mode 100644
index e69de29..0000000
diff --git a/README.md b/README.md
index b9bf7e6..ad53a9e 100644
--- a/README.md
+++ b/README.md
@@ -125,6 +125,12 @@ deactivate
 
However, in order to perform measurements or analyses, the virtual environment needs to be activated first. This will ensure all the paths are found.
 
+If needed, unit tests are run with the following command:
+
+```shell
+cd ~/onionperf/
+python3 -m nose --with-coverage --cover-package=onionperf
+```
 
 ## Measurement
 
diff --git a/Vagrantfile b/Vagrantfile
deleted file mode 100644
index 1fdd10f..0000000
--- a/Vagrantfile
+++ /dev/null
@@ -1,35 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-$setup_onionperf = <

[tor-commits] [onionperf/develop] Add CHANGELOG.md.

2020-07-24 Thread karsten
commit a671736fa1e7328c08e2a1644be280c3c045d29a
Author: Karsten Loesing 
Date:   Thu Jul 23 19:44:48 2020 +0200

Add CHANGELOG.md.
---
 CHANGELOG.md | 67 
 1 file changed, 67 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..54b632b
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,67 @@
+# Changes in version 0.6 - 2020-??-??
+
+ - Update to TGen 1.0.0, use TGenTools for parsing TGen log files, and
+   update analysis results file version to 3.0. Implements #33974.
+ - Remove summaries from analysis results files, and remove the
+   `onionperf analyze -s/--do-simple-parse` switch. Implements #40005.
+ - Add JSON schema for analysis results file format 3.0. Implements
+   #40003.
+
+# Changes in version 0.5 - 2020-07-02
+
+ - Add new graph showing the cumulative distribution function of
+   throughput in Mbps. Implements #33257.
+ - Improve `README.md` to make it more useful to developers and
+   researchers. Implements #40001.
+ - Always include the `error_code` column in visualization CSV output,
+   regardless of whether data contains measurements with an error code
+   or not. Fixes #40004.
+ - Write generated torrc files to disk for debugging purposes.
+   Implements #40002.
+
+# Changes in version 0.4 - 2020-06-16
+
+ - Include all measurements when analyzing log files at midnight as
+   part of `onionperf measure`, not just the ones from the day before.
+   Also add `onionperf analyze -x/--date-prefix` switch to prepend a
+   given date string to an analysis results file. Fixes #29369.
+ - Add `size`, `last_modified`, and `sha256` fields to index.xml.
+   Implements #29365.
+ - Add support for single onion services using the switch `onionperf
+   measure -s/--single-onion`. Implements #29368.
+ - Remove unused `onionperf measure --traffic-model` switch.
+   Implements #29370.
+ - Make `onionperf measure -o/--onion-only` and `onionperf measure
+   -i/--inet-only` switches mutually exclusive. Fixes #34316.
+ - Accept one or more paths to analysis results files or directories
+   of such files per dataset in `onionperf visualize -d/--data` to
+   include all contained measurements in a dataset. Implements #34191.
+
+# Changes in version 0.3 - 2020-05-30
+
+ - Automatically compress logs when rotating them. Fixes #33396.
+ - Update to Python 3. Implements #29367.
+ - Integrate reprocessing mode into analysis mode. Implements #34142.
+ - Record download times of smaller file sizes from partial completion
+   times. Implements #26673.
+ - Stop generating .tpf files. Implements #34141.
+ - Update analysis results file version to 2.0. Implements #34224.
+ - Export visualized data to a CSV file. Implements #33258.
+ - Remove version 2 onion service support. Implements #33434.
+ - Reduce timeout and stallout values. Implements #34024.
+ - Remove 50 KiB and 1 MiB downloads. Implements #34023.
+ - Remove existing Tor control log visualizations. Implements #34214.
+ - Update to Networkx version 2.4. Fixes #34298.
+ - Update time to first/last byte definitions to include the time 
+   between starting a measurement and receiving the first/last byte. 
+   Implements #34215.
+ - Update `requirements.txt` to actual requirements, and switch from 
+   distutils to setuptools. Fixes #30586.
+ - Split visualizations into public and onion service measurements. 
+   Fixes #34216.
+
+# Changes from before 2020-04
+
+ - Changes made before 2020-04 are not listed here. See `git log` for
+   details.
+





[tor-commits] [onionperf/develop] Merge branch 'issue-40006' into develop

2020-07-24 Thread karsten
commit c41a7036852b0d7ae93b6cef8e58ed3c364f518e
Merge: dd42ef1 7576ce3
Author: Karsten Loesing 
Date:   Fri Jul 24 16:31:42 2020 +0200

Merge branch 'issue-40006' into develop

 .gitignore |  4 +--
 .gitlab-ci.yml | 23 
 .gitmodules|  0
 CHANGELOG.md   | 67 ++
 README.md  |  6 +
 Vagrantfile| 35 
 debian/changelog   |  5 
 debian/compat  |  1 -
 debian/control | 40 ---
 debian/install |  1 -
 debian/rules   | 15 ---
 debian/source/format   |  1 -
 onionperf/__init__.py  |  1 +
 onionperf/analysis.py  |  1 +
 onionperf/measurement.py   |  1 +
 onionperf/model.py |  1 +
 onionperf/monitor.py   |  1 +
 onionperf/onionperf|  1 +
 onionperf/util.py  |  1 +
 onionperf/visualization.py |  1 +
 run_tests.sh   |  3 ---
 21 files changed, 83 insertions(+), 126 deletions(-)



[tor-commits] [onionperf/develop] Add change log entries for latest two changes.

2020-07-24 Thread karsten
commit 251fd9c2b5fe9be1685f4d4e4453b617b505a638
Author: Karsten Loesing 
Date:   Fri Jul 24 16:36:34 2020 +0200

Add change log entries for latest two changes.
---
 CHANGELOG.md | 4 
 1 file changed, 4 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 54b632b..0c4c4f2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,10 @@
`onionperf analyze -s/--do-simple-parse` switch. Implements #40005.
  - Add JSON schema for analysis results file format 3.0. Implements
#40003.
+ - Correctly compute the start time of failed streams as part of the
+   update to TGen and TGenTools 1.0.0. Fixes #30362.
+ - Refine error codes shown in visualizations into TOR or TGEN errors.
+   Implements #34218.
 
 # Changes in version 0.5 - 2020-07-02
 



[tor-commits] [collector/master] Update to latest metrics-base.

2020-08-05 Thread karsten
commit 427b2c63cdc54928230cc162a197b597605c857d
Author: Karsten Loesing 
Date:   Wed Aug 5 11:14:01 2020 +0200

Update to latest metrics-base.
---
 src/build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/build b/src/build
index b5e1a2d..92e78ed 160000
--- a/src/build
+++ b/src/build
@@ -1 +1 @@
-Subproject commit b5e1a2d7b29e58cc0645f068a1ebf4377bf9d8b8
+Subproject commit 92e78eda6addb8692efa2fe236810f2b7cc65115





[tor-commits] [collector/master] Prepare for 1.16.0 release.

2020-08-05 Thread karsten
commit b3cc8fa20f3d4efda0545dff4f5b7bd984ff987c
Author: Karsten Loesing 
Date:   Wed Aug 5 11:15:36 2020 +0200

Prepare for 1.16.0 release.
---
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/build.xml b/build.xml
index 6bf561a..7e74b30 100644
--- a/build.xml
+++ b/build.xml
@@ -9,7 +9,7 @@
 
   
   
-  
+  
   
   
   



[tor-commits] [collector/master] Retain ipv6- lines in bridge extra-infos.

2020-08-05 Thread karsten
commit 295e2c69def4808c1ccd53aa347ea053c5c4c324
Author: Karsten Loesing 
Date:   Wed Aug 5 11:08:51 2020 +0200

Retain ipv6- lines in bridge extra-infos.

These lines have been added by proposal 313 and are usually not
included by bridges. But apparently some bridges include them anyway,
probably bridges that have been configured as non-bridge relays
before. We should retain them just like we retain other statistics
lines.
---
 CHANGELOG.md | 5 -
 .../metrics/collector/bridgedescs/SanitizedBridgesWriter.java| 3 +++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index f1dc8ac..c8ffd84 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,7 @@
-# Changes in version 1.??.? - 2020-??-??
+# Changes in version 1.16.0 - 2020-08-05
+
+ * Medium changes
+   - Retain ipv6- lines in bridge extra-info descriptors.
 
 
 # Changes in version 1.15.2 - 2020-05-17
diff --git a/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java b/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java
index c4f783a..b8e7f2d 100644
--- a/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java
+++ b/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java
@@ -1193,11 +1193,14 @@ public class SanitizedBridgesWriter extends CollecTorMain {
  * descriptor. */
 } else if (line.startsWith("write-history ")
 || line.startsWith("read-history ")
+|| line.startsWith("ipv6-write-history ")
+|| line.startsWith("ipv6-read-history ")
 || line.startsWith("geoip-start-time ")
 || line.startsWith("geoip-client-origins ")
 || line.startsWith("geoip-db-digest ")
 || line.startsWith("geoip6-db-digest ")
 || line.startsWith("conn-bi-direct ")
+|| line.startsWith("ipv6-conn-bi-direct ")
 || line.startsWith("bridge-")
 || line.startsWith("dirreq-")
 || line.startsWith("cell-")





[tor-commits] [metrics-tasks/master] Update criteria for partial/full IPv6 support.

2020-08-05 Thread karsten
commit 770082bfee89c9c02945fa5e1a1c34fc69b9ce5c
Author: Karsten Loesing 
Date:   Wed Aug 5 22:38:39 2020 +0200

Update criteria for partial/full IPv6 support.
---
 tpo-metrics-trac-40002/TESTS.txt | 55 
 tpo-metrics-trac-40002/ipv6.py   |  4 +--
 2 files changed, 57 insertions(+), 2 deletions(-)

diff --git a/tpo-metrics-trac-40002/TESTS.txt b/tpo-metrics-trac-40002/TESTS.txt
new file mode 100644
index 0000000..f66646d
--- /dev/null
+++ b/tpo-metrics-trac-40002/TESTS.txt
@@ -0,0 +1,55 @@
+# count relays
+grep -c "^r " 2020-07-01-01-00-00-consensus
+
+# count relays with IPv6 ORPort
+grep -c "^a \[" 2020-07-01-01-00-00-consensus
+
+# compute total consensus weight
+grep "^w " 2020-07-01-01-00-00-consensus | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
+# compute consensus weight of relays with IPv6 ORPort
+grep -A5 "^a \[" 2020-07-01-01-00-00-consensus | grep "^w " | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
+# count relays with IPv6 ORPort and partial IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-consensus | grep "^v Tor 0.4.4.[1-9]" | sort | uniq -c
+
+# count relays with IPv6 ORPort and full IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-consensus | grep -B1 "^pr.* Relay=..3" | grep "^v Tor 0.4.5" | sort | uniq -c
+
+# compute consensus weight of relays with IPv6 ORPort and partial IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-consensus | grep -A2 "^v Tor 0.4.4.[1-9]" | grep "^w " | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
+# compute consensus weight of relays with IPv6 ORPort and full IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-consensus | grep -C1 "^pr.* Relay=..3" | grep -A2 "^v Tor 0.4.5" | grep "^w " | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
+# obtain subset of usable guards
+grep -v "^s.* Exit" 2020-07-01-01-00-00-consensus | grep -B2 -A4 "^s.* Guard" > 2020-07-01-01-00-00-usable-guards
+grep -B2 -A4 "^s.* BadExit.* Guard" 2020-07-01-01-00-00-consensus >> 2020-07-01-01-00-00-usable-guards
+
+# count usable guards
+grep -c "^r " 2020-07-01-01-00-00-usable-guards
+
+# find Wgd value
+grep Wgd 2020-07-01-01-00-00-consensus
+
+# count usable guards with IPv6 ORPort
+grep -c "^a \[" 2020-07-01-01-00-00-usable-guards
+
+# compute usable guards consensus weight
+grep "^w " 2020-07-01-01-00-00-usable-guards | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
+# compute consensus weight of usable guards with IPv6 ORPort
+grep -A5 "^a \[" 2020-07-01-01-00-00-usable-guards | grep "^w " | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
+# count usable guards with IPv6 ORPort and partial IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-usable-guards | grep "^v Tor 0.4.4.[1-9]" | sort | uniq -c
+
+# count usable guards with IPv6 ORPort and full IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-usable-guards | grep -C1 "^pr.* Relay=..3" | grep "^v Tor 0.4.5" | sort | uniq -c
+
+# compute consensus weight of usable guards with IPv6 ORPort and partial IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-usable-guards | grep -A2 "^v Tor 0.4.4.[1-9]" | grep "^w " | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
+# compute consensus weight of usable guards with IPv6 ORPort and full IPv6 reachability checks
+grep -A5 "^a \[" 2020-07-01-01-00-00-usable-guards | grep -C1 "^pr.* Relay=..3" | grep -A2 "^v Tor 0.4.5" | grep "^w " | cut -d"=" -f2 | awk '{s+=$1}END{print s}'
+
diff --git a/tpo-metrics-trac-40002/ipv6.py b/tpo-metrics-trac-40002/ipv6.py
index 497ea26..286692b 100644
--- a/tpo-metrics-trac-40002/ipv6.py
+++ b/tpo-metrics-trac-40002/ipv6.py
@@ -78,7 +78,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 import sys, stem, stem.version, stem.descriptor
 
-partial_support_version = stem.version.Version('0.4.4')
+partial_support_version = stem.version.Version('0.4.4.1')
 full_support_version = stem.version.Version('0.4.5')
 
 def read(consensus_filename):
@@ -120,7 +120,7 @@ def read(consensus_filename):
 has_partial_support = False
 has_full_support = False
 if relay.version:
-if relay.version >= full_support_version:
+if "Relay" in relay.protocols and 3 in relay.protocols["Relay"] 
and relay.version >= full_support_version:
 has_full_support = True
 if relay.version >= partial_support_version:
 has_partial_support = True
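The updated criteria: partial IPv6 reachability checks from Tor 0.4.4.1 on, full checks from 0.4.5 on, but only when the relay also advertises subprotocol Relay=3. A sketch of the classification with plain version tuples standing in for stem's version objects:

```python
# Plain tuples stand in for stem.version.Version objects; the function
# name and signature are hypothetical, mirroring the ipv6.py logic above.
PARTIAL_SUPPORT = (0, 4, 4, 1)
FULL_SUPPORT = (0, 4, 5)


def ipv6_support(version, protocols):
    """Classify a relay's IPv6 reachability-check support.

    version   -- release as a tuple, e.g. (0, 4, 5, 0)
    protocols -- mapping like {"Relay": {1, 2, 3}}
    """
    partial = version >= PARTIAL_SUPPORT
    # Full support additionally requires the Relay=3 subprotocol.
    full = version >= FULL_SUPPORT and 3 in protocols.get("Relay", set())
    return partial, full
```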



[tor-commits] [metrics-lib/master] Parse OnionPerf analysis results format v3.0.

2020-08-07 Thread karsten
commit b44c6bccb46ca1eb9c861a84dfcd7d5d2047dee4
Author: Karsten Loesing 
Date:   Thu Jul 16 21:37:19 2020 +0200

Parse OnionPerf analysis results format v3.0.

Implements tpo/metrics/library#40001.
---
 CHANGELOG.md   |   1 +
 .../onionperf/OnionPerfAnalysisConverter.java  | 235 -
 .../onionperf/ParsedOnionPerfAnalysis.java | 206 ++
 .../onionperf/TorperfResultsBuilder.java   |  38 
 .../onionperf/OnionPerfAnalysisConverterTest.java  |  80 ++-
 ...20-07-13.op-nl-test1.onionperf.analysis.json.xz | Bin 0 -> 1736 bytes
 6 files changed, 499 insertions(+), 61 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7bb22c4..4679688 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,7 @@
 
  * Medium changes
- Extend Torperf results to provide error codes.
+   - Parse OnionPerf analysis results format version 3.0.
 
 
 # Changes in version 2.13.0 - 2020-05-16
diff --git 
a/src/main/java/org/torproject/descriptor/onionperf/OnionPerfAnalysisConverter.java
 
b/src/main/java/org/torproject/descriptor/onionperf/OnionPerfAnalysisConverter.java
index 61fd173..f1b9d11 100644
--- 
a/src/main/java/org/torproject/descriptor/onionperf/OnionPerfAnalysisConverter.java
+++ 
b/src/main/java/org/torproject/descriptor/onionperf/OnionPerfAnalysisConverter.java
@@ -69,25 +69,32 @@ public class OnionPerfAnalysisConverter {
* Torperf results.
*/
   public List asTorperfResults() throws DescriptorParseException {
-ParsedOnionPerfAnalysis parsedOnionPerfAnalysis;
+ParsedOnionPerfAnalysis parsedOnionPerfAnalysis
+= this.parseOnionPerfAnalysis();
+this.verifyDocumentTypeAndVersion(parsedOnionPerfAnalysis);
+StringBuilder formattedTorperfResults
+= this.formatTorperfResults(parsedOnionPerfAnalysis);
+this.parseFormattedTorperfResults(formattedTorperfResults);
+return this.convertedTorperfResults;
+  }
+
+  /**
+   * Parse the OnionPerf analysis JSON document.
+   */
+  private ParsedOnionPerfAnalysis parseOnionPerfAnalysis()
+  throws DescriptorParseException {
 try {
   InputStream compressedInputStream = new ByteArrayInputStream(
   this.rawDescriptorBytes);
   InputStream decompressedInputStream = new XZCompressorInputStream(
   compressedInputStream);
   byte[] decompressedBytes = IOUtils.toByteArray(decompressedInputStream);
-  parsedOnionPerfAnalysis = ParsedOnionPerfAnalysis.fromBytes(
-  decompressedBytes);
+  return ParsedOnionPerfAnalysis.fromBytes(decompressedBytes);
 } catch (IOException ioException) {
   throw new DescriptorParseException("Ran into an I/O error while "
   + "attempting to parse an OnionPerf analysis document.",
   ioException);
 }
-this.verifyDocumentTypeAndVersion(parsedOnionPerfAnalysis);
-StringBuilder formattedTorperfResults
-= this.formatTorperfResults(parsedOnionPerfAnalysis);
-this.parseFormattedTorperfResults(formattedTorperfResults);
-return this.convertedTorperfResults;
   }
 
   /**
@@ -109,9 +116,9 @@ public class OnionPerfAnalysisConverter {
   throw new DescriptorParseException("Parsed OnionPerf analysis file does "
   + "not contain version information.");
 } else if ((parsedOnionPerfAnalysis.version instanceof Double
-&& (double) parsedOnionPerfAnalysis.version > 2.999)
+&& (double) parsedOnionPerfAnalysis.version > 3.999)
 || (parsedOnionPerfAnalysis.version instanceof String
-&& ((String) parsedOnionPerfAnalysis.version).compareTo("3.") >= 0)) {
+&& ((String) parsedOnionPerfAnalysis.version).compareTo("4.") >= 0)) {
   throw new DescriptorParseException("Parsed OnionPerf analysis file "
   + "contains unsupported version " + parsedOnionPerfAnalysis.version
   + ".");
@@ -131,8 +138,7 @@ public class OnionPerfAnalysisConverter {
 : parsedOnionPerfAnalysis.data.entrySet()) {
   String nickname = data.getKey();
   ParsedOnionPerfAnalysis.MeasurementData measurements = data.getValue();
-  if (null == measurements.measurementIp || null == measurements.tgen
-  || null == measurements.tgen.transfers) {
+  if (null == measurements.tgen) {
 continue;
   }
   String measurementIp = measurements.measurementIp;
@@ -153,57 +159,69 @@ public class OnionPerfAnalysisConverter {
   }
 }
   }
-  for (ParsedOnionPerfAnalysis.Transfer transfer
-  : measurements.tgen.transfers.values()) {
-if (null == transfer.endpointLocal) {
-  continue;
-}
-String[] endpointLocalParts = transfer.endpointLocal.split(":");
-if (endpointLocalParts.length < 3) {
-  continue;
-}
-   

[tor-commits] [metrics-lib/master] Parse new ipv6-* lines in extra-info descriptors.

2020-08-07 Thread karsten
commit a7850a2dc0e8be8a7d9bc289c96dbeed6f204b00
Author: Karsten Loesing 
Date:   Fri Aug 7 20:48:57 2020 +0200

Parse new ipv6-* lines in extra-info descriptors.
---
 CHANGELOG.md   |  2 +
 .../torproject/descriptor/ExtraInfoDescriptor.java | 72 
 .../descriptor/impl/ExtraInfoDescriptorImpl.java   | 95 ++
 .../java/org/torproject/descriptor/impl/Key.java   |  3 +
 .../impl/ExtraInfoDescriptorImplTest.java  | 74 +
 5 files changed, 246 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 4679688..bb55312 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -3,6 +3,8 @@
  * Medium changes
- Extend Torperf results to provide error codes.
- Parse OnionPerf analysis results format version 3.0.
+   - Parse new ipv6-{write,read}-history and ipv6-conn-bi-direct
+ lines in extra-info descriptors.
 
 
 # Changes in version 2.13.0 - 2020-05-16
diff --git a/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java 
b/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java
index a2c893b..e094fed 100644
--- a/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java
+++ b/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java
@@ -106,6 +106,22 @@ public interface ExtraInfoDescriptor extends Descriptor {
*/
   BandwidthHistory getWriteHistory();
 
+  /**
+   * Return the server's history of written IPv6 bytes, or {@code null} if the
+   * descriptor does not contain a bandwidth history.
+   *
+   * @since 2.14.0
+   */
+  BandwidthHistory getIpv6WriteHistory();
+
+  /**
+   * Return the server's history of read IPv6 bytes, or {@code null} if the
+   * descriptor does not contain a bandwidth history.
+   *
+   * @since 2.14.0
+   */
+  BandwidthHistory getIpv6ReadHistory();
+
   /**
* Return a SHA-1 digest of the GeoIP database file used by this server
* to resolve client IP addresses to country codes, encoded as 40
@@ -419,6 +435,62 @@ public interface ExtraInfoDescriptor extends Descriptor {
*/
   int getConnBiDirectBoth();
 
+  /**
+   * Return the time in milliseconds since the epoch when the included
+   * statistics on bi-directional IPv6 connection usage ended, or -1 if no such
+   * statistics are included.
+   *
+   * @since 2.14.0
+   */
+  long getIpv6ConnBiDirectStatsEndMillis();
+
+  /**
+   * Return the interval length of the included statistics on
+   * bi-directional IPv6 connection usage in seconds, or -1 if no such
+   * statistics are included.
+   *
+   * @since 2.14.0
+   */
+  long getIpv6ConnBiDirectStatsIntervalLength();
+
+  /**
+   * Return the number of IPv6 connections on which this server read and wrote
+   * less than 2 KiB/s in a 10-second interval, or -1 if no such
+   * statistics are included.
+   *
+   * @since 2.14.0
+   */
+  int getIpv6ConnBiDirectBelow();
+
+  /**
+   * Return the number of IPv6 connections on which this server read and wrote
+   * at least 2 KiB/s in a 10-second interval and at least 10 times more
+   * in read direction than in write direction, or -1 if no such
+   * statistics are included.
+   *
+   * @since 2.14.0
+   */
+  int getIpv6ConnBiDirectRead();
+
+  /**
+   * Return the number of IPv6 connections on which this server read and wrote
+   * at least 2 KiB/s in a 10-second interval and at least 10 times more
+   * in write direction than in read direction, or -1 if no such
+   * statistics are included.
+   *
+   * @since 2.14.0
+   */
+  int getIpv6ConnBiDirectWrite();
+
+  /**
+   * Return the number of IPv6 connections on which this server read and wrote
+   * at least 2 KiB/s in a 10-second interval but not 10 times more in
+   * either direction, or -1 if no such statistics are included.
+   *
+   * @since 2.14.0
+   */
+  int getIpv6ConnBiDirectBoth();
+
   /**
* Return the time in milliseconds since the epoch when the included
* exit statistics interval ended, or -1 if no such statistics are
diff --git a/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java b/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java
index f02b540..5cab6ab 100644
--- a/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java
+++ b/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java
@@ -97,6 +97,12 @@ public abstract class ExtraInfoDescriptorImpl extends DescriptorImpl
 case WRITE_HISTORY:
   this.parseWriteHistoryLine(line, partsNoOpt);
   break;
+case IPV6_READ_HISTORY:
+  this.parseIpv6ReadHistoryLine(line, partsNoOpt);
+  break;
+case IPV6_WRITE_HISTORY:
+  this.parseIpv6WriteHistoryLine(line, partsNoOpt);
+  break;
 case GEOIP_DB_DIGEST:
   this.parseGeoipDbDigestLine(line, partsNoOpt);
   break;
@@ -179,6 +185,9 @@ public abstract class ExtraInfoDescriptorImpl extends DescriptorImpl
 case CONN
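The getters added above expose the new ipv6-conn-bi-direct statistics line. As a hedged illustration only (metrics-lib itself is Java, and the exact line layout is assumed here to follow the existing conn-bi-direct format: an interval end timestamp, an interval length in seconds, and four comma-separated counters), a minimal Python sketch of what such a line encodes:

```python
def parse_ipv6_conn_bi_direct(line):
    """Split an ipv6-conn-bi-direct line into its interval and four counters."""
    parts = line.split()
    stats_end = parts[1] + " " + parts[2]    # end of the stats interval, UTC
    interval = int(parts[3].lstrip("("))     # interval length in seconds
    # BELOW,READ,WRITE,BOTH counters, mirroring the four getters above
    below, read, write, both = (int(n) for n in parts[5].split(","))
    return {"end": stats_end, "interval": interval,
            "below": below, "read": read, "write": write, "both": both}

stats = parse_ipv6_conn_bi_direct(
    "ipv6-conn-bi-direct 2020-08-07 12:00:00 (86400 s) 5,10,20,3")
print(stats)
```

The example line is hypothetical; real values come from a relay's extra-info descriptor.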

[tor-commits] [metrics-lib/master] Prepare for 2.14.0 release.

2020-08-07 Thread karsten
commit 750f6471a42c6cbdbfc3755076ae3a50dc822e67
Author: Karsten Loesing 
Date:   Fri Aug 7 20:54:48 2020 +0200

Prepare for 2.14.0 release.
---
 CHANGELOG.md | 2 +-
 build.xml| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index bb55312..06c9bd7 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,4 @@
-# Changes in version 2.??.? - 2020-??-??
+# Changes in version 2.14.0 - 2020-08-07
 
  * Medium changes
- Extend Torperf results to provide error codes.
diff --git a/build.xml b/build.xml
index 00aba93..9bd68e6 100644
--- a/build.xml
+++ b/build.xml
@@ -7,7 +7,7 @@
 
 
-  
+  
   
   
   



___
tor-commits mailing list
tor-commits@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-commits


[tor-commits] [metrics-lib/master] Bump version to 2.14.0-dev.

2020-08-07 Thread karsten
commit cf4830cac377503697c7e688e995a3f3cea5225e
Author: Karsten Loesing 
Date:   Fri Aug 7 21:01:10 2020 +0200

Bump version to 2.14.0-dev.
---
 CHANGELOG.md | 3 +++
 build.xml| 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 06c9bd7..c6886e8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,6 @@
+# Changes in version 2.??.? - 2020-??-??
+
+
 # Changes in version 2.14.0 - 2020-08-07
 
  * Medium changes
diff --git a/build.xml b/build.xml
index 9bd68e6..72990c6 100644
--- a/build.xml
+++ b/build.xml
@@ -7,7 +7,7 @@
 
 
-  
+  
   
   
   



[tor-commits] [metrics-web/master] Update to latest metrics-lib version 2.14.0.

2020-08-07 Thread karsten
commit 09d94661dbca84eb9a29b2e6863e797a0b18f351
Author: Karsten Loesing 
Date:   Fri Aug 7 21:07:50 2020 +0200

Update to latest metrics-lib version 2.14.0.

As a result, we'll be able to process OnionPerf analysis JSON file
version 3.0.
---
 CHANGELOG.md | 2 +-
 build.xml| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 132f269..f8fe202 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -13,7 +13,7 @@
- Estimate bridge users by country based on requests by country, if
  available, to get more accurate numbers than those obtained from
  unique IP address counts.
-   - Update to metrics-lib 2.13.0 and ExoneraTor 4.4.0.
+   - Update to metrics-lib 2.14.0 and ExoneraTor 4.4.0.
- Switch from processing Torperf .tpf to OnionPerf analysis .json
  files.
 
diff --git a/build.xml b/build.xml
index b44ac38..0d82a81 100644
--- a/build.xml
+++ b/build.xml
@@ -10,7 +10,7 @@
   
   
   
-  
+  
   
   


[tor-commits] [onionperf/master] Update TGen traffic model to use TGen version 1.0.0

2020-08-10 Thread karsten
commit 25bae0dc902e3b486c357e29ea0368cb85fad262
Author: Ana Custura 
Date:   Fri Jun 26 10:40:24 2020 +0100

Update TGen traffic model to use TGen version 1.0.0
---
 onionperf/model.py | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/onionperf/model.py b/onionperf/model.py
index 1f4d9fe..f06f8e0 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -74,7 +74,7 @@ class TorperfModel(GeneratableTGenModel):
 else:
 g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute")
 g.add_node("pause", time="5 minutes")
-g.add_node("transfer5m", type="get", protocol="tcp", size="5 MiB", timeout="270 seconds", stallout="0 seconds")
+g.add_node("stream5m", sendsize="0", recvsize="5 mib", timeout="270 seconds", stallout="0 seconds")

 g.add_edge("start", "pause")

@@ -83,7 +83,7 @@ class TorperfModel(GeneratableTGenModel):
 g.add_edge("pause", "pause")

 # these are chosen with weighted probability, change edge 'weight' attributes to adjust probability
-g.add_edge("pause", "transfer5m")
+g.add_edge("pause", "stream5m")

 return g

@@ -103,10 +103,10 @@ class OneshotModel(GeneratableTGenModel):
 g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute", socksproxy=self.socksproxy)
 else:
 g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute")
-g.add_node("transfer5m", type="get", protocol="tcp", size="5 MiB", timeout="15 seconds", stallout="10 seconds")
+g.add_node("stream5m", sendsize="0", recvsize="5 mib", timeout="270 seconds", stallout="0 seconds")

-g.add_edge("start", "transfer5m")
-g.add_edge("transfer5m", "start")
+g.add_edge("start", "stream5m")
+g.add_edge("stream5m", "start")

 return g
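The diff above replaces the pre-1.0 "transfer" node with a TGen 1.0.0 "stream" node. A hedged sketch of the attribute mapping (this helper is hypothetical, not part of onionperf's model.py; the mapping is read off the diff):

```python
def to_stream_options(transfer_options):
    """Map a pre-1.0 'transfer' node's attributes onto 1.0.0 'stream' attributes."""
    size = transfer_options.get("size", "0")
    return {
        "sendsize": "0",           # a GET becomes a stream that sends nothing
        "recvsize": size.lower(),  # "5 MiB" -> "5 mib" (lowercase 1.0.0 spelling)
        "timeout": transfer_options.get("timeout", "270 seconds"),
        "stallout": transfer_options.get("stallout", "0 seconds"),
    }

old_node = {"type": "get", "protocol": "tcp", "size": "5 MiB",
            "timeout": "270 seconds", "stallout": "0 seconds"}
new_node = to_stream_options(old_node)
print(new_node)
```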
 





[tor-commits] [onionperf/master] Update Analysis and TGenParser classes to use TGenTools

2020-08-10 Thread karsten
commit 0a64de95106fcc3fb389165a74d99200cf4e18ea
Author: Ana Custura 
Date:   Fri Jun 26 11:01:16 2020 +0100

Update Analysis and TGenParser classes to use TGenTools
---
 onionperf/analysis.py | 283 ++
 1 file changed, 8 insertions(+), 275 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index f845dd2..2466aad 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -16,48 +16,28 @@ from stem import CircEvent, CircStatus, CircPurpose, StreamStatus
 from stem.response.events import CircuitEvent, CircMinorEvent, StreamEvent, BandwidthEvent, BuildTimeoutSetEvent
 from stem.response import ControlMessage, convert
 
+# tgentools imports
+from tgentools.analysis import Analysis, TGenParser
+
 # onionperf imports
 from . import util
 
-class Analysis(object):
+class OPAnalysis(Analysis):
 
 def __init__(self, nickname=None, ip_address=None):
-self.nickname = nickname
-self.measurement_ip = ip_address
-self.hostname = gethostname().split('.')[0]
+super().__init__(nickname, ip_address)
 self.json_db = {'type':'onionperf', 'version':'2.0', 'data':{}}
-self.tgen_filepaths = []
 self.torctl_filepaths = []
-self.date_filter = None
-self.did_analysis = False
-
-def add_tgen_file(self, filepath):
-self.tgen_filepaths.append(filepath)
 
 def add_torctl_file(self, filepath):
 self.torctl_filepaths.append(filepath)
 
-def get_nodes(self):
-return list(self.json_db['data'].keys())
-
 def get_tor_bandwidth_summary(self, node, direction):
 try:
 return self.json_db['data'][node]['tor']['bandwidth_summary'][direction]
 except:
 return None
 
-def get_tgen_transfers(self, node):
-try:
-return self.json_db['data'][node]['tgen']['transfers']
-except:
-return None
-
-def get_tgen_transfers_summary(self, node):
-try:
-return self.json_db['data'][node]['tgen']['transfers_summary']
-except:
-return None
-
 def analyze(self, do_complete=False, date_filter=None):
 if self.did_analysis:
 return
@@ -84,17 +64,11 @@ class Analysis(object):
 if self.measurement_ip is None:
 self.measurement_ip = "unknown"
 
-self.json_db['data'].setdefault(self.nickname, {'measurement_ip': self.measurement_ip}).setdefault(json_db_key, parser.get_data())
-
+self.json_db['data'].setdefault(self.nickname, {'measurement_ip' : self.measurement_ip}).setdefault(json_db_key, parser.get_data())
+self.json_db['data'][self.nickname]["tgen"].pop("heartbeats")
+self.json_db['data'][self.nickname]["tgen"].pop("init_ts")
 self.did_analysis = True
 
-def merge(self, analysis):
-for nickname in analysis.json_db['data']:
-if nickname in self.json_db['data']:
-raise Exception("Merge does not yet support multiple Analysis objects from the same node \
-(add multiple files from the same node to the same Analysis object before calling analyze instead)")
-else:
-self.json_db['data'][nickname] = analysis.json_db['data'][nickname]
 
 def save(self, filename=None, output_prefix=os.getcwd(), do_compress=True, date_prefix=None):
 if filename is None:
@@ -147,150 +121,6 @@ class Analysis(object):
 analysis_instance.json_db = db
 return analysis_instance
 
-def subproc_analyze_func(analysis_args):
-signal(SIGINT, SIG_IGN)  # ignore interrupts
-a = analysis_args[0]
-do_complete = analysis_args[1]
-a.analyze(do_complete=do_complete)
-return a
-
-class ParallelAnalysis(Analysis):
-
-def analyze(self, search_path, do_complete=False, nickname=None, tgen_search_expressions=["tgen.*\.log"],
-torctl_search_expressions=["torctl.*\.log"], num_subprocs=cpu_count()):
-
-pathpairs = util.find_file_paths_pairs(search_path, tgen_search_expressions, torctl_search_expressions)
-logging.info("processing input from {0} nodes...".format(len(pathpairs)))
-
-analysis_jobs = []
-for (tgen_filepaths, torctl_filepaths) in pathpairs:
-a = Analysis()
-for tgen_filepath in tgen_filepaths:
-a.add_tgen_file(tgen_filepath)
-for torctl_filepath in torctl_filepaths:
-a.add_torctl_file(torctl_filepath)
-analysis_args = [a, do_complete]
-analysis_jobs.append(analysis_args)
-
-analyses = None
-pool = Pool(num_subprocs if num_subprocs > 0 else cpu_count())
-try:
-mr = pool.map_async(subproc_analyze_func, analysis_jobs)
-pool.close()
-while not mr.ready(): mr.wait(1)
-analyses = mr.get()
-except Keyboa
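The save() and load() signatures kept in this refactoring persist the analysis as xz-compressed JSON. A minimal round-trip sketch under stated assumptions (these helper names and the compression handling are illustrative, not onionperf's actual implementation):

```python
import json
import lzma
import os
import tempfile

def save_analysis(db, filename="onionperf.analysis.json", output_prefix=None):
    """Write the analysis dict as xz-compressed JSON and return the path."""
    path = os.path.join(output_prefix or os.getcwd(), filename + ".xz")
    with lzma.open(path, "wt") as f:
        json.dump(db, f)
    return path

def load_analysis(path):
    """Read an xz-compressed JSON analysis file back into a dict."""
    with lzma.open(path, "rt") as f:
        return json.load(f)

db = {"type": "onionperf", "version": "3.0", "data": {}}
path = save_analysis(db, output_prefix=tempfile.mkdtemp())
roundtrip = load_analysis(path)
print(roundtrip == db)
```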

[tor-commits] [onionperf/master] Update do_simple analysis param to new do_complete tgen semantics

2020-08-10 Thread karsten
commit 05eb9cdf56f6ae275ace65a0bdbdcf2c3b5e1c40
Author: Ana Custura 
Date:   Fri Jun 26 10:52:46 2020 +0100

Update do_simple analysis param to new do_complete tgen semantics
---
 onionperf/analysis.py | 42 +-
 onionperf/onionperf   |  8 
 onionperf/reprocessing.py |  8 
 3 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 20ca354..eaacbb9 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -58,7 +58,7 @@ class Analysis(object):
 except:
 return None
 
-def analyze(self, do_simple=True, date_filter=None):
+def analyze(self, do_complete=False, date_filter=None):
 if self.did_analysis:
 return
 
@@ -70,7 +70,7 @@ class Analysis(object):
 if len(filepaths) > 0:
 for filepath in filepaths:
 logging.info("parsing log file at {0}".format(filepath))
-parser.parse(util.DataSource(filepath), do_simple=do_simple)
+parser.parse(util.DataSource(filepath), do_complete=do_complete)
 
 if self.nickname is None:
 parsed_name = parser.get_name()
@@ -150,13 +150,13 @@ class Analysis(object):
 def subproc_analyze_func(analysis_args):
 signal(SIGINT, SIG_IGN)  # ignore interrupts
 a = analysis_args[0]
-do_simple = analysis_args[1]
-a.analyze(do_simple=do_simple)
+do_complete = analysis_args[1]
+a.analyze(do_complete=do_complete)
 return a
 
 class ParallelAnalysis(Analysis):
 
-def analyze(self, search_path, do_simple=True, nickname=None, tgen_search_expressions=["tgen.*\.log"],
+def analyze(self, search_path, do_complete=False, nickname=None, tgen_search_expressions=["tgen.*\.log"],
 torctl_search_expressions=["torctl.*\.log"], num_subprocs=cpu_count()):

 pathpairs = util.find_file_paths_pairs(search_path, tgen_search_expressions, torctl_search_expressions)
@@ -169,7 +169,7 @@ class ParallelAnalysis(Analysis):
 a.add_tgen_file(tgen_filepath)
 for torctl_filepath in torctl_filepaths:
 a.add_torctl_file(torctl_filepath)
-analysis_args = [a, do_simple]
+analysis_args = [a, do_complete]
 analysis_jobs.append(analysis_args)
 
 analyses = None
@@ -293,7 +293,7 @@ class Transfer(object):
 
 class Parser(object, metaclass=ABCMeta):
 @abstractmethod
-def parse(self, source, do_simple):
+def parse(self, source, do_complete):
 pass
 @abstractmethod
 def get_data(self):
@@ -321,7 +321,7 @@ class TGenParser(Parser):
 # both the filter and the unix timestamp should be in UTC at this point
 return util.do_dates_match(self.date_filter, date_to_check)
 
-def __parse_line(self, line, do_simple):
+def __parse_line(self, line, do_complete):
 if self.name is None and re.search("Initializing traffic generator on host", line) is not None:
 self.name = line.strip().split()[11]
 
@@ -334,7 +334,7 @@ class TGenParser(Parser):
 if not self.__is_date_valid(line_date):
 return True
 
-if not do_simple and re.search("state\sRESPONSE\sto\sstate\sPAYLOAD", line) is not None:
+if do_complete and re.search("state\sRESPONSE\sto\sstate\sPAYLOAD", line) is not None:
 # another run of tgen starts the id over counting up from 1
 # if a prev transfer with the same id did not complete, we can be sure it never will
 parts = line.strip().split()
@@ -343,7 +343,7 @@ class TGenParser(Parser):
 if transfer_id in self.state:
 self.state.pop(transfer_id)
 
-elif not do_simple and re.search("transfer-status", line) is not None:
+elif do_complete and re.search("transfer-status", line) is not None:
 status = TransferStatusEvent(line)
 xfer = self.state.setdefault(status.transfer_id, Transfer(status.transfer_id))
 xfer.add_event(status)
@@ -351,7 +351,7 @@ class TGenParser(Parser):
 elif re.search("transfer-complete", line) is not None:
 complete = TransferSuccessEvent(line)
 
-if not do_simple:
+if do_complete:
 xfer = self.state.setdefault(complete.transfer_id, Transfer(complete.transfer_id))
 xfer.add_event(complete)
 self.transfers[xfer.id] = xfer.get_data()
@@ -369,7 +369,7 @@ class TGenParser(Parser):
 elif re.search("transfer-error", line) is not None:
 error = TransferErrorEvent(line)
 
-if not do_simple:
+if do_complete:
 xfer = self.state.setdefault(error.transfer_id, Transfer(error.transfer_id))
 xfer.add_event(error)
 self.transfers[xfer.id] = xfer.get_data()
@@ -382,12 +382
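The commit inverts the flag's sense: the old default `do_simple=True` skipped per-status detail, and the new default `do_complete=False` keeps the same behavior under a positive name. A hedged sketch (hypothetical helper, not the real parser) of the semantics:

```python
def events_to_record(do_complete=False):
    """Return which TGen log event types a parse run would record."""
    events = ["transfer-complete", "transfer-error"]  # always recorded
    if do_complete:
        # complete mode additionally records per-interval transfer-status events
        events = ["transfer-status"] + events
    return events

default_run = events_to_record()            # matches the old do_simple=True default
complete_run = events_to_record(do_complete=True)
print(complete_run)
```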

[tor-commits] [onionperf/master] Merge branch 'task-40003' into develop

2020-08-10 Thread karsten
commit dff6ed7269e97e03e4d285eda2d8d329c779686a
Merge: a2ef62c 7c1ae34
Author: Karsten Loesing 
Date:   Thu Jul 16 22:10:42 2020 +0200

Merge branch 'task-40003' into develop

 schema/onionperf-3.0.json | 546 ++
 1 file changed, 546 insertions(+)





[tor-commits] [onionperf/master] Fix visualizations.

2020-08-10 Thread karsten
commit b9dbc158666940bc1bb2c7ffccee2b73873c7368
Author: Karsten Loesing 
Date:   Sun Jul 12 23:18:29 2020 +0200

Fix visualizations.

Turns out that stream_info/recvsize contains a string, not an int.
---
 onionperf/visualization.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 52d08fc..80c0781 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -70,12 +70,12 @@ class TGenVisualization(Visualization):
 stream["time_to_last_byte"] = float(s["usecs-to-last-byte-recv"])/100
 if "elapsed_seconds" in stream_data:
 s = stream_data["elapsed_seconds"]
- # Explanation of the math below for computing Mbps: From filesize_bytes
- # and payload_progress fields we can compute the number of seconds that
+ # Explanation of the math below for computing Mbps: From stream_info/recvsize
+ # and payload_progress_recv fields we can compute the number of seconds that
 # have elapsed between receiving bytes 524,288 and 1,048,576, which is a
 # total amount of 524,288 bytes or 4,194,304 bits or 4.194304 megabits.
 # We want the reciprocal of that value with unit megabits per second.
- if stream_data["stream_info"]["recvsize"] == 5242880 and "0.2" in s["payload_progress_recv"]:
+ if stream_data["stream_info"]["recvsize"] == "5242880" and "0.2" in s["payload_progress_recv"]:
  stream["mbps"] = 4.194304 / (s["payload_progress_recv"]["0.2"] - s["payload_progress_recv"]["0.1"])
 if "error" in stream_data["transport_info"] and stream_data["transport_info"]["error"] != "NONE":
 stream["error_code"] = stream_data["transport_info"]["error"]
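The Mbps math in the diff above can be worked through directly: 524,288 bytes (4.194304 megabits) arrive between the 10% and 20% progress marks of a 5 MiB download, so goodput is 4.194304 divided by the elapsed time between those marks. A self-contained sketch mirroring that computation (the sample stream dict is invented; note recvsize is a string in the version 3.0 analysis format, which is exactly the bug this commit fixes):

```python
def compute_mbps(stream_data):
    """Goodput in Mbps between the 10% and 20% marks of a 5 MiB download,
    or None if the stream is not a complete 5 MiB download."""
    s = stream_data.get("elapsed_seconds", {})
    progress = s.get("payload_progress_recv", {})
    # recvsize is a string in the version 3.0 analysis format, hence "5242880"
    if stream_data["stream_info"]["recvsize"] == "5242880" and "0.2" in progress:
        # 524,288 bytes = 4.194304 megabits received between 10% and 20%
        return 4.194304 / (progress["0.2"] - progress["0.1"])
    return None

sample = {"stream_info": {"recvsize": "5242880"},
          "elapsed_seconds": {"payload_progress_recv": {"0.1": 1.0, "0.2": 1.5}}}
mbps = compute_mbps(sample)
print(mbps)
```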





[tor-commits] [onionperf/master] Fix reprocessing submode.

2020-08-10 Thread karsten
commit aea46c56a8bcd3d6f54580868c3b8e72cbcb2d9b
Author: Karsten Loesing 
Date:   Tue Jul 14 09:01:08 2020 +0200

Fix reprocessing submode.
---
 onionperf/reprocessing.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/onionperf/reprocessing.py b/onionperf/reprocessing.py
index ad0308f..8d8dcd4 100644
--- a/onionperf/reprocessing.py
+++ b/onionperf/reprocessing.py
@@ -1,4 +1,4 @@
-from onionperf.analysis import Analysis
+from onionperf.analysis import OPAnalysis
 from onionperf import util
 from functools import partial
 from multiprocessing import Pool, cpu_count
@@ -47,7 +47,7 @@ def match(tgen_logs, tor_logs, date_filter):
 
 
 def analyze_func(prefix, nick, do_complete, pair):
-analysis = Analysis(nickname=nick)
+analysis = OPAnalysis(nickname=nick)
 logging.info('Analysing pair for date {0}'.format(pair[2]))
 analysis.add_tgen_file(pair[0])
 analysis.add_torctl_file(pair[1])





[tor-commits] [onionperf/master] Restore #40004 fix that got lost during rebase.

2020-08-10 Thread karsten
commit 120e425ebf78d36b3e1efd06de1e6cbe134a4a58
Author: Karsten Loesing 
Date:   Sun Jul 12 22:47:58 2020 +0200

Restore #40004 fix that got lost during rebase.
---
 onionperf/visualization.py | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index f85c10f..7cc36d1 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -59,7 +59,8 @@ class TGenVisualization(Visualization):
  tgen_streams = analysis.get_tgen_streams(client)
  for stream_id, stream_data in tgen_streams.items():
  stream = {"id": stream_id, "label": label,
- "filesize_bytes": int(stream_data["stream_info"]["recvsize"])}
+ "filesize_bytes": int(stream_data["stream_info"]["recvsize"]),
+ "error_code": None}
 stream["server"] = "onion" if ".onion:" in stream_data["transport_info"]["remote"] else "public"
  if "time_info" in stream_data:
  s = stream_data["time_info"]
@@ -86,7 +87,8 @@ class TGenVisualization(Visualization):
 tgen_transfers = analysis.get_tgen_transfers(client)
 for transfer_id, transfer_data in 
tgen_transfers.items():
 stream = {"id": transfer_id, "label": label,
-"filesize_bytes": transfer_data["filesize_bytes"]}
+"filesize_bytes": transfer_data["filesize_bytes"],
+"error_code": None}
 stream["server"] = "onion" if ".onion:" in transfer_data["endpoint_remote"] else "public"
 if "elapsed_seconds" in transfer_data:
s = transfer_data["elapsed_seconds"]





[tor-commits] [onionperf/master] Make a few whitespace fixes.

2020-08-10 Thread karsten
commit 2eceb8c7044a770059f4d1ad06241b60216d4b84
Author: Karsten Loesing 
Date:   Sun Jul 12 22:50:41 2020 +0200

Make a few whitespace fixes.
---
 onionperf/analysis.py  |  4 ++--
 onionperf/visualization.py | 12 ++--
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 49d6dba..8506756 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -26,7 +26,7 @@ class OPAnalysis(Analysis):
 
 def __init__(self, nickname=None, ip_address=None):
 super().__init__(nickname, ip_address)
-self.json_db = {'type' : 'onionperf', 'version' : '3.0', 'data' : {}}
+self.json_db = {'type': 'onionperf', 'version': '3.0', 'data': {}}
 self.torctl_filepaths = []
 
 def add_torctl_file(self, filepath):
@@ -64,7 +64,7 @@ class OPAnalysis(Analysis):
 if self.measurement_ip is None:
 self.measurement_ip = "unknown"
 
-self.json_db['data'].setdefault(self.nickname, {'measurement_ip' : self.measurement_ip}).setdefault(json_db_key, parser.get_data())
+self.json_db['data'].setdefault(self.nickname, {'measurement_ip': self.measurement_ip}).setdefault(json_db_key, parser.get_data())
 self.json_db['data'][self.nickname]["tgen"].pop("heartbeats")
 self.json_db['data'][self.nickname]["tgen"].pop("init_ts")
 self.json_db['data'][self.nickname]["tgen"].pop("stream_summary")
diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 7cc36d1..52d08fc 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -68,7 +68,7 @@ class TGenVisualization(Visualization):
 stream["time_to_first_byte"] = float(s["usecs-to-first-byte-recv"])/100
 if "usecs-to-last-byte-recv" in s:
 stream["time_to_last_byte"] = float(s["usecs-to-last-byte-recv"])/100
- if "elapsed_seconds" in stream_data: 
+ if "elapsed_seconds" in stream_data:
  s = stream_data["elapsed_seconds"]
 # Explanation of the math below for computing Mbps: From filesize_bytes
 # and payload_progress fields we can compute the number of seconds that
@@ -93,11 +93,11 @@ class TGenVisualization(Visualization):
 if "elapsed_seconds" in transfer_data:
s = transfer_data["elapsed_seconds"]
if "payload_progress" in s:
-   # Explanation of the math below for computing Mbps: From filesize_bytes
-   # and payload_progress fields we can compute the number of seconds that
-   # have elapsed between receiving bytes 524,288 and 1,048,576, which is a
-   # total amount of 524,288 bytes or 4,194,304 bits or 4.194304 megabits.
-   # We want the reciprocal of that value with unit megabits per second.
+   # Explanation of the math below for computing Mbps: From filesize_bytes
+   # and payload_progress fields we can compute the number of seconds that
+   # have elapsed between receiving bytes 524,288 and 1,048,576, which is a
+   # total amount of 524,288 bytes or 4,194,304 bits or 4.194304 megabits.
+   # We want the reciprocal of that value with unit megabits per second.
 if transfer_data["filesize_bytes"] == 1048576 and "1.0" in s["payload_progress"]:
 stream["mbps"] = 4.194304 / (s["payload_progress"]["1.0"] - s["payload_progress"]["0.5"])
 if transfer_data["filesize_bytes"] == 5242880 and "0.2" in s["payload_progress"]:





[tor-commits] [onionperf/master] Pass date_filter argument to parent analysis

2020-08-10 Thread karsten
commit e32deb2994576cd1bfb83477da5d36b7991ef4d1
Author: Ana Custura 
Date:   Wed Jul 22 15:10:06 2020 +0100

Pass date_filter argument to parent analysis
---
 onionperf/analysis.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index e269f6a..898b165 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -34,7 +34,7 @@ class OPAnalysis(Analysis):
 return
 
 self.date_filter = date_filter
-super().analyze(do_complete=True)
+super().analyze(do_complete=True, date_filter=self.date_filter)
 torctl_parser = TorCtlParser(date_filter=self.date_filter)
 
 for (filepaths, parser, json_db_key) in [(self.torctl_filepaths, torctl_parser, 'tor')]:
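The one-line fix above matters because the subclass override previously dropped the keyword argument on its way to the parent. A minimal sketch of the bug and the fix (these stand-in classes are simplifications, not the real tgentools/onionperf API):

```python
class Analysis:
    """Stand-in for tgentools.analysis.Analysis (simplified)."""
    def analyze(self, do_complete=False, date_filter=None):
        # the parent's TGen parser only filters if the date is passed in
        self.parent_saw_filter = date_filter

class OPAnalysis(Analysis):
    def analyze(self, do_complete=False, date_filter=None):
        self.date_filter = date_filter
        # Before the fix, super().analyze(do_complete=True) silently left the
        # parent's filter as None; forwarding it restores date filtering:
        super().analyze(do_complete=True, date_filter=self.date_filter)

a = OPAnalysis()
a.analyze(date_filter="2020-07-22")
print(a.parent_saw_filter)
```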





[tor-commits] [onionperf/master] Updates visualization code to use new dictionary structure

2020-08-10 Thread karsten
commit 1b6c211ea77c7ba7f96c0cc53e7d392b9f6eb1b7
Author: Ana Custura 
Date:   Thu Jul 9 15:33:39 2020 +0100

Updates visualization code to use new dictionary structure
---
 onionperf/analysis.py  |  7 +++
 onionperf/visualization.py | 44 +---
 2 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 245d9ae..64b4b5b 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -92,6 +92,13 @@ class OPAnalysis(Analysis):
 
 logging.info("done!")
 
+
+def get_tgen_streams(self, node):
+try:
+return self.json_db['data'][node]['tgen']['streams']
+except:
+return None
+
 @classmethod
 def load(cls, filename="onionperf.analysis.json.xz", input_prefix=os.getcwd()):
 filepath = os.path.abspath(os.path.expanduser("{0}".format(filename)))
diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 0a2a9d9..48c837b 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -51,38 +51,36 @@ class TGenVisualization(Visualization):
 self.page.close()
 
 def __extract_data_frame(self):
-transfers = []
+streams = []
 for (analyses, label) in self.datasets:
 for analysis in analyses:
 for client in analysis.get_nodes():
-tgen_transfers = analysis.get_tgen_transfers(client)
-for transfer_id, transfer_data in tgen_transfers.items():
-transfer = {"transfer_id": transfer_id, "label": label,
-"filesize_bytes": transfer_data["filesize_bytes"],
-"error_code": None}
-transfer["server"] = "onion" if ".onion:" in transfer_data["endpoint_remote"] else "public"
-if "elapsed_seconds" in transfer_data:
-s = transfer_data["elapsed_seconds"]
+   tgen_streams = analysis.get_tgen_streams(client)
+for stream_id, stream_data in tgen_streams.items():
+stream = {"stream_id": stream_id, "label": label,
+"filesize_bytes": stream_data["stream_info"]["recvsize"]}
+stream["server"] = "onion" if ".onion:" in stream_data["transport_info"]["remote"] else "public"
+if "time_info" in stream_data:
+s = stream_data["time_info"]
 if "payload_progress" in s:
# Explanation of the math below for computing Mbps: From filesize_bytes
# and payload_progress fields we can compute the number of seconds that
# have elapsed between receiving bytes 524,288 and 1,048,576, which is a
# total amount of 524,288 bytes or 4,194,304 bits or 4.194304 megabits.
# We want the reciprocal of that value with unit megabits per second.
-   if transfer_data["filesize_bytes"] == 1048576 and "1.0" in s["payload_progress"]:
-   transfer["mbps"] = 4.194304 / (s["payload_progress"]["1.0"] - s["payload_progress"]["0.5"])
-   if transfer_data["filesize_bytes"] == 5242880 and "0.2" in s["payload_progress"]:
-   transfer["mbps"] = 4.194304 / (s["payload_progress"]["0.2"] - s["payload_progress"]["0.1"])
-if "first_byte" in s:
-transfer["time_to_first_byte"] = s["first_byte"]
-if "last_byte" in s:
-transfer["time_to_last_byte"] = s["last_byte"]
-if "error_code" in transfer_data and transfer_data["error_code"] != "NONE":
-transfer["error_code"] = transfer_data["error_code"]
-if "unix_ts_start" in transfer_data:
-transfer["start"] = datetime.datetime.utcfromtimestamp(transfer_data["unix_ts_start"])
-transfers.append(transfer)
-self.data = pd.DataFrame.from_records(transfers, index="transfer_id")
+   if stream_data["stream_info"]["recv_size"] == 5242880 and "0.2" in s["elapsed_seconds"]["payload_progress_recv"]:
+   stream["mbps"] = 4.194304 / (s["elapsed_seconds"]["payload_progress_recv"]["0.2"] - s["elapsed_seconds"]["payload_progress_recv"]["0.1"])
+
+if "usecs-to-first-byte-recv" in s:
+stream["time_to_first_byte"] = float(s["usecs-to-first-byte-recv"])/100
+if "usecs-to-last-byte-recv" in s:
+

[tor-commits] [onionperf/master] Update path to TGen 1.0.0 binary in README.md.

2020-08-10 Thread karsten
commit 5d5bff153dfd4a4b98d2bd3632cc4e813498ae0a
Author: Karsten Loesing 
Date:   Sun Jul 12 22:45:01 2020 +0200

Update path to TGen 1.0.0 binary in README.md.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 032dd41..e3df3bf 100644
--- a/README.md
+++ b/README.md
@@ -83,7 +83,7 @@ cmake ..
 make
 ```
 
-The TGen binary will be contained in `~/tgen/build/tgen`, which is also the path that needs to be passed to OnionPerf's `--tgen` parameter when doing measurements.
+The TGen binary will be contained in `~/tgen/build/src/tgen`, which is also the path that needs to be passed to OnionPerf's `--tgen` parameter when doing measurements.
 
 ### OnionPerf
 





[tor-commits] [onionperf/master] Remove README_JSON.md and point to JSON schema.

2020-08-10 Thread karsten
commit a817e5ae5ece1d753f3688ee579e87e93951a779
Author: Karsten Loesing 
Date:   Thu Jul 16 22:14:28 2020 +0200

Remove README_JSON.md and point to JSON schema.
---
 README.md  |   2 +-
 README_JSON.md | 213 -
 2 files changed, 1 insertion(+), 214 deletions(-)

diff --git a/README.md b/README.md
index ecd7c0c..b9bf7e6 100644
--- a/README.md
+++ b/README.md
@@ -236,7 +236,7 @@ For example, the following command analyzes current log files of a running (or s
 onionperf analyze --tgen ~/onionperf-data/tgen-client/onionperf.tgen.log --torctl ~/onionperf-data/tor-client/onionperf.torctl.log
 ```
 
-The output analysis file is written to `onionperf.analysis.json.xz` in the current working directory. The file format is described in more detail in `README_JSON.md`.
+The output analysis file is written to `onionperf.analysis.json.xz` in the current working directory. The file format is described in more detail in `schema/onionperf-3.0.json`.
 
 The same analysis files are written automatically as part of ongoing measurements once per day at UTC midnight and can be found in `onionperf-data/htdocs/`.
 
diff --git a/README_JSON.md b/README_JSON.md
deleted file mode 100644
index f0ad281..000
--- a/README_JSON.md
+++ /dev/null
@@ -1,213 +0,0 @@
-# DB Structure
-
-This document describes the structure of the json database file that gets 
exported in `analysis` mode and gets placed in the www docroot when running in 
`measure` mode.
-
-The structure is given here with variable keys marked as such.
-
-{
-  "data": { # generic keyword
-"phantomtrain": { # nickname of the OnionPerf client, hostname if not 
set
-  "measurement_ip" : "192.168.1.1", # public-facing IP address of the 
machine used for the measurements
-  "tgen": { # to indicate data from TGen
-"transfers": { # the records for transfers TGen attempted
-  "transfer1m:1": { # the id of a single transfer
-"elapsed_seconds": { # timing for various steps in transfer, 
in seconds
-  "checksum": 0.0, # step 12 if using a proxy, else step 8 
(initial GET/PUT)
-  "command": 0.319006, # step 7 if using a proxy, else step 3 
(initial GET/PUT)
-  "first_byte": 0.0, # step 9 if using a proxy, else step 5 
(initial GET/PUT)
-  "last_byte": 0.0, # step 11 if using a proxy, else step 7 
(initial GET/PUT)
-  "payload_bytes": { # similar to payload_progress below
-"10240": 0.0, # number of payload bytes completed : 
seconds to complete it
-"20480": 0.0,
-"51200": 0.0,
-"102400": 0.0,
-"204800": 0.0,
-"512000": 0.0,
-"1048576": 0.0,
-"2097152": 0.0,
-"5242880": 0.0
-  },
-  "payload_progress": { # step 10 if using a proxy, else step 
6 (initial GET/PUT)
-"0.0": 0.0, # percent of payload completed : seconds to 
complete it
-"0.1": 0.0,
-"0.2": 0.0,
-"0.3": 0.0,
-"0.4": 0.0,
-"0.5": 0.0,
-"0.6": 0.0,
-"0.7": 0.0,
-"0.8": 0.0,
-"0.9": 0.0,
-"1.0": 0.0
-  },
-  "proxy_choice": 0.000233, # step 4 if using a proxy, else 
absent
-  "proxy_init": 0.000151, # step 3 if using a proxy, else 
absent
-  "proxy_request": 0.010959, # step 5 if using a proxy, else 
absent
-  "proxy_response": 0.318873, # step 6 if using a proxy, else 
absent
-  "response": 0.0, # step 8 if using a proxy, else step 4 
(initial GET/PUT)
-  "socket_connect": 0.000115, # step 2
-  "socket_create": 2e-06 # step 1
-},
-"endpoint_local": "localhost:127.0.0.1:45416", # tgen client 
socket name:ip:port
-"endpoint_proxy": "localhost:127.0.0.1:27942", # proxy socket 
name:ip:port, if present
-"endpoint_remote": 
"server1.peach-hosting.com:216.17.99.183:", # tgen server hostname:ip:port
"error_code": "READ", # 'NONE' or a code to indicate the type 
of error

[tor-commits] [onionperf/master] Fix a bug introduced by the TGen 1.0.0 update.

2020-08-10 Thread karsten
commit c761ac65aa71452a4ddd85137e4313c639e5fc63
Author: Karsten Loesing 
Date:   Tue Jul 14 10:31:14 2020 +0200

Fix a bug introduced by the TGen 1.0.0 update.

The bug was that we were using the wrong Analysis class in measure
mode:

AttributeError: 'Analysis' object has no attribute 'add_torctl_file'
---
 onionperf/measurement.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/onionperf/measurement.py b/onionperf/measurement.py
index 1540bac..9d84d4e 100644
--- a/onionperf/measurement.py
+++ b/onionperf/measurement.py
@@ -150,7 +150,7 @@ def logrotate_thread_task(writables, tgen_writable, 
torctl_writable, docroot, ni
 public_measurement_ip_guess = util.get_ip_address()
 
 # set up the analysis object with our log files
-anal = analysis.Analysis(nickname=nickname, 
ip_address=public_measurement_ip_guess)
+anal = analysis.OPAnalysis(nickname=nickname, 
ip_address=public_measurement_ip_guess)
 if tgen_writable is not None:
 
anal.add_tgen_file(tgen_writable.rotate_file(filename_datetime=next_midnight))
 if torctl_writable is not None:





[tor-commits] [onionperf/master] Do some repository housekeeping.

2020-08-10 Thread karsten
commit 7576ce3ede211722db59bde3ccbc7e79ac4ac60c
Author: Karsten Loesing 
Date:   Thu Jul 23 20:18:03 2020 +0200

Do some repository housekeeping.

Fixes #40006.
---
 .gitignore |  4 ++--
 .gitlab-ci.yml | 23 ---
 .gitmodules|  0
 README.md  |  6 ++
 Vagrantfile| 35 ---
 debian/changelog   |  5 -
 debian/compat  |  1 -
 debian/control | 40 
 debian/install |  1 -
 debian/rules   | 15 ---
 debian/source/format   |  1 -
 onionperf/__init__.py  |  1 +
 onionperf/analysis.py  |  1 +
 onionperf/measurement.py   |  1 +
 onionperf/model.py |  1 +
 onionperf/monitor.py   |  1 +
 onionperf/onionperf|  1 +
 onionperf/util.py  |  1 +
 onionperf/visualization.py |  1 +
 run_tests.sh   |  3 ---
 20 files changed, 16 insertions(+), 126 deletions(-)

diff --git a/.gitignore b/.gitignore
index 5e79b0e..e3e7b28 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,8 +1,8 @@
-/.onionperf/*
 onionperf-data
+onionperf-private
 venv
 *.json.xz
 *.pdf
+*.csv
 *.pyc
-onionperf/docs/_build
 .coverage
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
deleted file mode 100644
index aed7769..000
--- a/.gitlab-ci.yml
+++ /dev/null
@@ -1,23 +0,0 @@
-variables:
-  GIT_STRATEGY: clone
-
-stages:
- - test
-
-test:
- stage: test
- image: debian:buster
- coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
- script:
-  - apt -y update
-  - apt -y install git cmake make build-essential gcc libigraph0-dev 
libglib2.0-dev python-dev libxml2-dev python-lxml python-networkx python-scipy 
python-matplotlib python-numpy libevent-dev libssl-dev python-stem tor 
python-nose python-cov-core
-  - git clone https://github.com/shadow/tgen.git
-  - mkdir -p tgen/build
-  - pushd tgen/build
-  - cmake ..
-  - make
-  - ln -s `pwd`/tgen /usr/bin/
-  - popd
-  - python setup.py build
-  - python setup.py install
-  - ./run_tests.sh
diff --git a/.gitmodules b/.gitmodules
deleted file mode 100644
index e69de29..000
diff --git a/README.md b/README.md
index b9bf7e6..ad53a9e 100644
--- a/README.md
+++ b/README.md
@@ -125,6 +125,12 @@ deactivate
 
 However, in order to perform measurements or analyses, the virtual environment 
needs to be activated first. This will ensure all the paths are found.
 
+If needed, unit tests are run with the following command:
+
+```shell
+cd ~/onionperf/
+python3 -m nose --with-coverage --cover-package=onionperf
+```
 
 ## Measurement
 
diff --git a/Vagrantfile b/Vagrantfile
deleted file mode 100644
index 1fdd10f..000
--- a/Vagrantfile
+++ /dev/null
@@ -1,35 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-$setup_onionperf = <

[tor-commits] [onionperf/master] Remove now unused imports.

2020-08-10 Thread karsten
commit 0189c3a73339c7b2554aadb1c60b1dc92d534572
Author: Karsten Loesing 
Date:   Sun Jul 12 22:57:11 2020 +0200

Remove now unused imports.
---
 onionperf/analysis.py | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 8506756..19c0192 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -4,11 +4,8 @@
   See LICENSE for licensing information
 '''
 
-import sys, os, re, json, datetime, logging
+import os, re, json, datetime, logging
 
-from multiprocessing import Pool, cpu_count
-from signal import signal, SIGINT, SIG_IGN
-from socket import gethostname
 from abc import ABCMeta, abstractmethod
 
 # stem imports





[tor-commits] [onionperf/master] Remove TGen 0.0.1 reference from README.md.

2020-08-10 Thread karsten
commit 272cd2558ad352936260522de8d49a417e40b501
Author: Karsten Loesing 
Date:   Mon Jul 13 16:53:27 2020 +0200

Remove TGen 0.0.1 reference from README.md.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index e3df3bf..ecd7c0c 100644
--- a/README.md
+++ b/README.md
@@ -70,7 +70,7 @@ In this case the resulting `tor` binary can be found in 
`~/tor/src/app/tor` and
 
 ### TGen
 
-OnionPerf uses TGen to generate traffic on client and server side for its 
measurements. Installing dependencies, cloning TGen to a subdirectory in the 
user's home directory, checking out version 0.0.1, and building TGen is done as 
follows:
+OnionPerf uses TGen to generate traffic on client and server side for its 
measurements. Installing dependencies, cloning TGen to a subdirectory in the 
user's home directory, and building TGen is done as follows:
 
 ```shell
 sudo apt install cmake libglib2.0-dev libigraph0-dev make





[tor-commits] [onionperf/master] Remove stream_summary from analysis results

2020-08-10 Thread karsten
commit 560194bb1f4f25975cde6123fde75e7ff3754ddb
Author: Ana Custura 
Date:   Fri Jul 10 21:03:19 2020 +0100

Remove stream_summary from analysis results
---
 onionperf/analysis.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 64b4b5b..53d879c 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -65,8 +65,9 @@ class OPAnalysis(Analysis):
 self.measurement_ip = "unknown"
 
 self.json_db['data'].setdefault(self.nickname, 
{'measurement_ip' : self.measurement_ip}).setdefault(json_db_key, 
parser.get_data())
-self.json_db['data'][self.nickname]["tgen"].pop("heartbeats")
-self.json_db['data'][self.nickname]["tgen"].pop("init_ts")
+self.json_db['data'][self.nickname]["tgen"].pop("heartbeats")
+self.json_db['data'][self.nickname]["tgen"].pop("init_ts")
+self.json_db['data'][self.nickname]["tgen"].pop("stream_summary")
 self.did_analysis = True
 
 





[tor-commits] [onionperf/master] Update tor analysis class and method names to be tor-specific

2020-08-10 Thread karsten
commit 54911cd9fc2659e0ea4b5cd5a249489e3fed18eb
Author: Ana Custura 
Date:   Fri Jun 26 10:58:57 2020 +0100

Update tor analysis class and method names to be tor-specific
---
 onionperf/analysis.py | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index eaacbb9..f845dd2 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -400,7 +400,7 @@ class TGenParser(Parser):
 def get_name(self):
 return self.name
 
-class Stream(object):
+class TorStream(object):
 def __init__(self, sid):
 self.stream_id = sid
 self.circuit_id = None
@@ -460,7 +460,7 @@ class Stream(object):
' '.join(['%s=%s' % (event, arrived_at)
for (event, arrived_at) in sorted(self.elapsed_seconds, 
key=lambda item: item[1])])))
 
-class Circuit(object):
+class TorCircuit(object):
 def __init__(self, cid):
 self.circuit_id = cid
 self.unix_ts_start = None
@@ -543,7 +543,7 @@ class TorCtlParser(Parser):
 def __handle_circuit(self, event, arrival_dt):
 # first make sure we have a circuit object
 cid = int(event.id)
-circ = self.circuits_state.setdefault(cid, Circuit(cid))
+circ = self.circuits_state.setdefault(cid, TorCircuit(cid))
 is_hs_circ = True if event.purpose in (CircPurpose.HS_CLIENT_INTRO, 
CircPurpose.HS_CLIENT_REND, \
CircPurpose.HS_SERVICE_INTRO, 
CircPurpose.HS_SERVICE_REND) else False
 
@@ -597,7 +597,7 @@ class TorCtlParser(Parser):
 
 def __handle_stream(self, event, arrival_dt):
 sid = int(event.id)
-strm = self.streams_state.setdefault(sid, Stream(sid))
+strm = self.streams_state.setdefault(sid, TorStream(sid))
 
 if event.circ_id is not None:
 strm.set_circ_id(event.circ_id)





[tor-commits] [onionperf/master] Adds support for previous analyses

2020-08-10 Thread karsten
commit 074842a412c2d7ede45a40eb4fb610cd32619c11
Author: Ana Custura 
Date:   Sat Jul 11 11:38:54 2020 +0100

Adds support for previous analyses
---
 onionperf/analysis.py  |  6 
 onionperf/visualization.py | 72 --
 2 files changed, 57 insertions(+), 21 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 53d879c..49d6dba 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -100,6 +100,12 @@ class OPAnalysis(Analysis):
 except:
 return None
 
+def get_tgen_transfers(self, node):
+try:
+return self.json_db['data'][node]['tgen']['transfers']
+except:
+return None
+
 @classmethod
 def load(cls, filename="onionperf.analysis.json.xz", 
input_prefix=os.getcwd()):
 filepath = os.path.abspath(os.path.expanduser("{0}".format(filename)))
diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 48c837b..68ad751 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -54,32 +54,62 @@ class TGenVisualization(Visualization):
 streams = []
 for (analyses, label) in self.datasets:
 for analysis in analyses:
-for client in analysis.get_nodes():
-   tgen_streams = analysis.get_tgen_streams(client)
-for stream_id, stream_data in tgen_streams.items():
-stream = {"stream_id": stream_id, "label": label,
-"filesize_bytes": 
stream_data["stream_info"]["recvsize"]}
-stream["server"] = "onion" if ".onion:" in 
stream_data["transport_info"]["remote"] else "public"
-if "time_info" in stream_data:
-s = stream_data["time_info"]
-if "payload_progress" in s:
+if analysis.json_db['version'] >= '3':
+for client in analysis.get_nodes():
+ tgen_streams = analysis.get_tgen_streams(client)
+ for stream_id, stream_data in tgen_streams.items():
+ stream = {"stream_id": stream_id, "label": label,
+ "filesize_bytes": 
int(stream_data["stream_info"]["recvsize"])}
+ stream["server"] = "onion" if ".onion:" in 
stream_data["transport_info"]["remote"] else "public"
+ if "time_info" in stream_data:
+ s = stream_data["time_info"]
+ if "usecs-to-first-byte-recv" in s:
+ stream["time_to_first_byte"] = 
float(s["usecs-to-first-byte-recv"])/100
+ if "usecs-to-last-byte-recv" in s:
+ stream["time_to_last_byte"] = 
float(s["usecs-to-last-byte-recv"])/100
+ if "elapsed_seconds" in stream_data: 
+ s = stream_data["elapsed_seconds"]
+ # Explanation of the math below for computing 
Mbps: From filesize_bytes
+ # and payload_progress fields we can compute 
the number of seconds that
+ # have elapsed between receiving bytes 
524,288 and 1,048,576, which is a
+ # total amount of 524,288 bytes or 4,194,304 
bits or 4.194304 megabits.
+ # We want the reciprocal of that value with 
unit megabits per second.
+ if stream_data["stream_info"]["recvsize"] == 
5242880 and "0.2" in s["payload_progress_recv"]:
+  stream["mbps"] = 4.194304 / 
(s["payload_progress_recv"]["0.2"] - s["payload_progress_recv"]["0.1"])
+ if "error" in stream_data["transport_info"] and 
stream_data["transport_info"]["error"] != "NONE":
+ stream["error_code"] = 
stream_data["transport_info"]["error"]
+ if "unix_ts_start" in stream_data:
+ stream["start"] = 
datetime.datetime.utcfromtimestamp(stream_data["unix_ts_start"])
+ streams.append(stream)
+else:
+for client in analysis.get_nodes():
+tgen_transfers = analysis.get_tgen_transfers(client)
+for transfer_id, transfer_data in 
tgen_transfers.items():
+stream = {"stream_id": transfer_id, "label": label,
+"filesize_bytes": 
transfer_data["filesize_bytes"]}
+stream["server"] = "onion" if ".onion:" in 
transfer_data["endpoint_remote"] else "public"
+if "elapsed_seconds" in transfer_data:
+  
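As an aside, the Mbps math explained in the comment above can be sketched in a few lines. This is a minimal illustration of the computation with hypothetical input timings, not OnionPerf's actual implementation:

```python
# Sketch of the Mbps math: between the 10% and 20% progress marks of a
# 5 MiB (5,242,880-byte) download, exactly 524,288 bytes (= 4,194,304 bits
# or 4.194304 megabits) are received; the reciprocal of the elapsed time
# between those two marks yields megabits per second.
def compute_mbps(payload_progress_recv, recvsize=5242880):
    # payload_progress_recv maps progress fractions ("0.1", "0.2", ...)
    # to elapsed seconds, as in the analysis JSON.
    if recvsize != 5242880 or "0.2" not in payload_progress_recv:
        return None
    elapsed = payload_progress_recv["0.2"] - payload_progress_recv["0.1"]
    return 4.194304 / elapsed

# Hypothetical timings: 10% received after 1.0 s, 20% after 1.5 s.
print(compute_mbps({"0.1": 1.0, "0.2": 1.5}))  # 8.388608
```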

[tor-commits] [onionperf/master] Retrieve error_code from the stream_info dict

2020-08-10 Thread karsten
commit 89c8a8f826ee0dc460feb81f67e6bebfbbb9981f
Author: Ana Custura 
Date:   Thu Jul 23 15:04:57 2020 +0100

Retrieve error_code from the stream_info dict
---
 onionperf/visualization.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 5e7f92b..daa5c3d 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -88,8 +88,8 @@ class TGenVisualization(Visualization):
 s = stream_data["elapsed_seconds"]
 if stream_data["stream_info"]["recvsize"] == 
"5242880" and "0.2" in s["payload_progress_recv"]:
  stream["mbps"] = 4.194304 / 
(s["payload_progress_recv"]["0.2"] - s["payload_progress_recv"]["0.1"])
-if "error" in stream_data["transport_info"] and 
stream_data["transport_info"]["error"] != "NONE":
-error_code = 
stream_data["transport_info"]["error"]
+if "error" in stream_data["stream_info"] and 
stream_data["stream_info"]["error"] != "NONE":
+error_code = 
stream_data["stream_info"]["error"]
 if "local" in stream_data["transport_info"] and 
len(stream_data["transport_info"]["local"].split(":")) > 2:
 source_port = 
stream_data["transport_info"]["local"].split(":")[2]
 if "unix_ts_end" in stream_data:
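The gist of the fix above: with TGen 1.0.0, stream errors are reported in the `stream_info` dict rather than `transport_info`. A minimal sketch of the corrected lookup (illustrative helper, not OnionPerf's actual code):

```python
# Read the error code from stream_info, where TGen 1.0.0 reports it;
# "NONE" means the stream completed without error.
def get_error_code(stream_data):
    info = stream_data.get("stream_info", {})
    if info.get("error", "NONE") != "NONE":
        return info["error"]
    return None

print(get_error_code({"stream_info": {"error": "READ"}}))  # READ
print(get_error_code({"stream_info": {"error": "NONE"}}))  # None
```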





[tor-commits] [onionperf/master] Merge branch 'task-40005' into develop

2020-08-10 Thread karsten
commit a2ef62c2b7d1406f72ac87769ce0396f543146d8
Merge: c761ac6 c1cd60e
Author: Karsten Loesing 
Date:   Thu Jul 16 09:55:36 2020 +0200

Merge branch 'task-40005' into develop

 onionperf/analysis.py| 76 
 onionperf/measurement.py |  2 +-
 onionperf/onionperf  |  9 +
 onionperf/reprocessing.py|  8 ++--
 onionperf/tests/test_reprocessing.py |  2 +-
 5 files changed, 33 insertions(+), 64 deletions(-)





[tor-commits] [onionperf/master] Add JSON schema for analysis file format 3.0.

2020-08-10 Thread karsten
commit 7c1ae345effc8197e35de86d318893d42285
Author: Karsten Loesing 
Date:   Sun Jul 12 17:43:08 2020 +0200

Add JSON schema for analysis file format 3.0.

This schema can be used to validate an analysis file using version 3.0
by using the following commands:

```
pip3 install jsonschema
unxz final_onionperf.analysis.json.xz
jsonschema -i final_onionperf.analysis.json onionperf-3.0.json
```

Implements tpo/metrics/onionperf#40003.
---
 schema/onionperf-3.0.json | 546 ++
 1 file changed, 546 insertions(+)

diff --git a/schema/onionperf-3.0.json b/schema/onionperf-3.0.json
new file mode 100644
index 000..f670c4c
--- /dev/null
+++ b/schema/onionperf-3.0.json
@@ -0,0 +1,546 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema",
+  "$id": 
"https://gitlab.torproject.org/tpo/metrics/onionperf/-/raw/master/schema/onionperf-3.0.json",
+  "type": "object",
+  "title": "OnionPerf analysis JSON file format 3.0",
+  "required": [
+"data",
+"type",
+"version"
+  ],
+  "properties": {
+"data": {
+  "type": "object",
+  "title": "Measurement data by source name",
+  "propertyNames": {
+"pattern": "^[A-Za-z0-9-]+$"
+  },
+  "additionalProperties": {
+"type": "object",
+"title": "Measurement data from a single source",
+"required": [
+  "measurement_ip",
+  "tgen",
+  "tor"
+],
+"properties": {
+  "measurement_ip": {
+"type": "string",
+"title": "Public IP address of the measuring host."
+  },
+  "tgen": {
+"type": "object",
+"title": "Measurement data obtained from client-side TGen logs",
+"required": [
+  "streams"
+],
+"properties": {
+  "streams": {
+"type": "object",
+"title": "Measurement data, by TGen stream identifier",
+"additionalProperties": {
+  "type": "object",
+  "title": "Information on a single measurement, obtained from 
a single [stream-success] or [stream-error] log message (except for 
elapsed_seconds)",
+  "required": [
+"byte_info",
+"is_complete",
+"is_error",
+"is_success",
+"stream_id",
+"stream_info",
+"time_info",
+"transport_info",
+"unix_ts_end",
+"unix_ts_start"
+  ],
+  "properties": {
+"byte_info": {
+  "type": "object",
+  "title": "Information on sent and received bytes",
+  "required": [
+"payload-bytes-recv",
+"payload-bytes-send",
+"payload-progress-recv",
+"payload-progress-send",
+"total-bytes-recv",
+"total-bytes-send"
+  ],
+  "properties": {
+"payload-bytes-recv": {
+  "type": "string",
+  "pattern": "^[0-9]+$",
+  "title": "Number of payload bytes received"
+},
+"payload-bytes-send": {
+  "type": "string",
+  "pattern": "^[0-9]+$",
+  "title": "Number of payload bytes sent"
+},
+"payload-progress-recv": {
+  "type": "string",
+  "pattern": "^[0-9]+\\.[0-9]+%$",
+  "title": "Progress of receiving payload in percent"
+},
+  
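The CLI invocation from the commit message can also be done programmatically. This is a small sketch using the same `jsonschema` library (assumes `pip3 install jsonschema`, as stated above; the paths are the ones from the commit message):

```python
# Programmatic equivalent of the jsonschema CLI commands shown in the
# commit message: validate a decompressed analysis file against the schema.
import json

import jsonschema  # third-party: pip3 install jsonschema

def validate_analysis(analysis_path, schema_path="schema/onionperf-3.0.json"):
    """Validate an analysis JSON file against the 3.0 schema.

    Raises jsonschema.exceptions.ValidationError on the first violation.
    """
    with open(schema_path) as f:
        schema = json.load(f)
    with open(analysis_path) as f:
        document = json.load(f)
    jsonschema.validate(instance=document, schema=schema)

# Usage, after `unxz final_onionperf.analysis.json.xz`:
# validate_analysis("final_onionperf.analysis.json")
```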

[tor-commits] [onionperf/master] Add TGenTools requirement, update code to use new OPAnalysis class

2020-08-10 Thread karsten
commit ae18e7a7cbf1330f096ecd569eb45e9cfc039dda
Author: Ana Custura 
Date:   Fri Jun 26 11:08:28 2020 +0100

Add TGenTools requirement, update code to use new OPAnalysis class
---
 onionperf/onionperf | 8 
 requirements.txt| 1 +
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/onionperf/onionperf b/onionperf/onionperf
index ddbeaf1..5e2c2fb 100755
--- a/onionperf/onionperf
+++ b/onionperf/onionperf
@@ -381,8 +381,8 @@ def analyze(args):
 logging.warning("No logfile paths were given, nothing will be 
analyzed")
 
 elif (args.tgen_logpath is None or os.path.isfile(args.tgen_logpath)) and 
(args.torctl_logpath is None or os.path.isfile(args.torctl_logpath)):
-from onionperf.analysis import Analysis
-analysis = Analysis(nickname=args.nickname, ip_address=args.ip_address)
+from onionperf.analysis import OPAnalysis
+analysis = OPAnalysis(nickname=args.nickname, 
ip_address=args.ip_address)
 if args.tgen_logpath is not None:
 analysis.add_tgen_file(args.tgen_logpath)
 if args.torctl_logpath is not None:
@@ -403,13 +403,13 @@ def analyze(args):
 
 def visualize(args):
 from onionperf.visualization import TGenVisualization
-from onionperf.analysis import Analysis
+from onionperf.analysis import OPAnalysis
 
 tgen_viz = TGenVisualization()
 for (paths, label) in args.datasets:
 analyses = []
 for path in paths:
-analysis = Analysis.load(filename=path)
+analysis = OPAnalysis.load(filename=path)
 if analysis is not None:
analyses.append(analysis)
 tgen_viz.add_dataset(analyses, label)
diff --git a/requirements.txt b/requirements.txt
index f70e46a..46853f8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8,3 +8,4 @@ pandas
 scipy
 seaborn
 stem >= 1.7.0
+tgentools





[tor-commits] [onionperf/master] Updates analysis version to 3.0

2020-08-10 Thread karsten
commit 5c4c77bb0e0823f6fa9ae8046b86c911915265c9
Author: Ana Custura 
Date:   Thu Jul 9 15:56:40 2020 +0100

Updates analysis version to 3.0
---
 onionperf/analysis.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 2466aad..245d9ae 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -26,7 +26,7 @@ class OPAnalysis(Analysis):
 
 def __init__(self, nickname=None, ip_address=None):
 super().__init__(nickname, ip_address)
-self.json_db = {'type':'onionperf', 'version':'2.0', 'data':{}}
+self.json_db = {'type' : 'onionperf', 'version' : '3.0', 'data' : {}}
 self.torctl_filepaths = []
 
 def add_torctl_file(self, filepath):
@@ -113,7 +113,7 @@ class OPAnalysis(Analysis):
 if 'type' not in db or 'version' not in db:
 logging.warning("'type' or 'version' not present in database")
 return None
-elif db['type'] != 'onionperf' or str(db['version']) >= '3.':
+elif db['type'] != 'onionperf' or str(db['version']) >= '4.':
 logging.warning("type or version not supported (type={0}, 
version={1})".format(db['type'], db['version']))
 return None
 else:
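Note that the gate in `load()` above compares versions as strings, so `'3.0' >= '4.'` is false lexicographically and 1.x-3.x files pass while 4.x files are rejected. A standalone sketch of the same check (illustrative, not OnionPerf's exact code):

```python
# Reproduce the load() version gate: accept onionperf files whose version
# string compares lexicographically below '4.'. This relies on single-digit
# major versions; a hypothetical '10.0' would also compare below '4.' as a
# string and slip through.
def is_supported(db):
    if 'type' not in db or 'version' not in db:
        return False
    return db['type'] == 'onionperf' and str(db['version']) < '4.'

print(is_supported({'type': 'onionperf', 'version': '3.0'}))  # True
print(is_supported({'type': 'onionperf', 'version': '4.0'}))  # False
```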





[tor-commits] [onionperf/master] Refine error codes into TOR or TGEN errors.

2020-08-10 Thread karsten
commit 4533b39591cc0a6df124e017366ea71343c842bf
Author: Karsten Loesing 
Date:   Fri Jul 17 22:24:43 2020 +0200

Refine error codes into TOR or TGEN errors.

With this change we include more detailed error codes in visualization
output. In order to do so we map TGen transfers/streams to TorCtl
STREAM event details based on source ports and unix_ts_end timestamps.
This code reuses some concepts used in metrics-lib.

Implements tpo/metrics/onionperf#34218.
---
 onionperf/analysis.py  |   6 +++
 onionperf/visualization.py | 107 +++--
 2 files changed, 71 insertions(+), 42 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index e269f6a..8fcc0a6 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -97,6 +97,12 @@ class OPAnalysis(Analysis):
 except:
 return None
 
+def get_tor_streams(self, node):
+try:
+return self.json_db['data'][node]['tor']['streams']
+except:
+return None
+
 @classmethod
 def load(cls, filename="onionperf.analysis.json.xz", 
input_prefix=os.getcwd()):
 filepath = os.path.abspath(os.path.expanduser("{0}".format(filename)))
diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 80c0781..5e7f92b 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -54,50 +54,57 @@ class TGenVisualization(Visualization):
 streams = []
 for (analyses, label) in self.datasets:
 for analysis in analyses:
-if analysis.json_db['version'] >= '3':
-for client in analysis.get_nodes():
- tgen_streams = analysis.get_tgen_streams(client)
- for stream_id, stream_data in tgen_streams.items():
- stream = {"id": stream_id, "label": label,
- "filesize_bytes": 
int(stream_data["stream_info"]["recvsize"]),
- "error_code": None}
- stream["server"] = "onion" if ".onion:" in 
stream_data["transport_info"]["remote"] else "public"
- if "time_info" in stream_data:
- s = stream_data["time_info"]
- if "usecs-to-first-byte-recv" in s:
- stream["time_to_first_byte"] = 
float(s["usecs-to-first-byte-recv"])/100
- if "usecs-to-last-byte-recv" in s:
- stream["time_to_last_byte"] = 
float(s["usecs-to-last-byte-recv"])/100
- if "elapsed_seconds" in stream_data:
- s = stream_data["elapsed_seconds"]
- # Explanation of the math below for computing 
Mbps: From stream_info/recvsize
- # and payload_progress_recv fields we can 
compute the number of seconds that
- # have elapsed between receiving bytes 
524,288 and 1,048,576, which is a
- # total amount of 524,288 bytes or 4,194,304 
bits or 4.194304 megabits.
- # We want the reciprocal of that value with 
unit megabits per second.
- if stream_data["stream_info"]["recvsize"] == 
"5242880" and "0.2" in s["payload_progress_recv"]:
-  stream["mbps"] = 4.194304 / 
(s["payload_progress_recv"]["0.2"] - s["payload_progress_recv"]["0.1"])
- if "error" in stream_data["transport_info"] and 
stream_data["transport_info"]["error"] != "NONE":
- stream["error_code"] = 
stream_data["transport_info"]["error"]
- if "unix_ts_start" in stream_data:
- stream["start"] = 
datetime.datetime.utcfromtimestamp(stream_data["unix_ts_start"])
- streams.append(stream)
-else:
-for client in analysis.get_nodes():
-tgen_transfers = analysis.get_tgen_transfers(client)
-for transfer_id, transfer_data in 
tgen_transfers.items():
+for client in analysis.get_nodes():
+tor_streams_by_source_port = {
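The commit message describes mapping TGen streams to TorCtl STREAM events by source port and `unix_ts_end` timestamp; the diff is cut off here. A hedged sketch of that matching idea, with illustrative names and structures rather than OnionPerf's actual code:

```python
# Index tor streams by the local source port, then match a TGen stream to
# the candidate whose end timestamp is closest to the TGen stream's end.
def match_tor_stream(tgen_stream, tor_streams_by_source_port):
    # "localhost:127.0.0.1:45416" -> source port "45416"
    source_port = tgen_stream["local"].split(":")[2]
    candidates = tor_streams_by_source_port.get(source_port, [])
    if not candidates:
        return None
    return min(candidates,
               key=lambda t: abs(t["unix_ts_end"] - tgen_stream["unix_ts_end"]))

tor_streams = {"45416": [{"stream_id": 7, "unix_ts_end": 100.0},
                         {"stream_id": 9, "unix_ts_end": 250.0}]}
tgen = {"local": "localhost:127.0.0.1:45416", "unix_ts_end": 99.0}
print(match_tor_stream(tgen, tor_streams)["stream_id"])  # 7
```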

[tor-commits] [onionperf/master] Removes unused import and do_complete attribute from tor parser

2020-08-10 Thread karsten
commit c1cd60e05e23ef022076df9665a9d23c891561ca
Author: Ana Custura 
Date:   Wed Jul 15 11:56:50 2020 +0100

Removes unused import and do_complete attribute from tor parser
---
 onionperf/analysis.py | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 26b00b2..e269f6a 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -10,7 +10,7 @@ from abc import ABCMeta, abstractmethod
 
 # stem imports
 from stem import CircEvent, CircStatus, CircPurpose, StreamStatus
-from stem.response.events import CircuitEvent, CircMinorEvent, StreamEvent, 
BandwidthEvent, BuildTimeoutSetEvent
+from stem.response.events import CircuitEvent, CircMinorEvent, StreamEvent, 
BuildTimeoutSetEvent
 from stem.response import ControlMessage, convert
 
 # tgentools imports
@@ -34,14 +34,14 @@ class OPAnalysis(Analysis):
 return
 
 self.date_filter = date_filter
-tgen_parser = TGenParser(date_filter=self.date_filter)
+super().analyze(do_complete=True)
 torctl_parser = TorCtlParser(date_filter=self.date_filter)
 
-for (filepaths, parser, json_db_key) in [(self.tgen_filepaths, 
tgen_parser, 'tgen'), (self.torctl_filepaths, torctl_parser, 'tor')]:
+for (filepaths, parser, json_db_key) in [(self.torctl_filepaths, 
torctl_parser, 'tor')]:
 if len(filepaths) > 0:
 for filepath in filepaths:
 logging.info("parsing log file at {0}".format(filepath))
-parser.parse(util.DataSource(filepath), do_complete=True)
+parser.parse(util.DataSource(filepath))
 
 if self.nickname is None:
 parsed_name = parser.get_name()
@@ -128,7 +128,7 @@ class OPAnalysis(Analysis):
 
 class Parser(object, metaclass=ABCMeta):
 @abstractmethod
-def parse(self, source, do_complete=True):
+def parse(self, source):
 pass
 @abstractmethod
 def get_data(self):
@@ -404,7 +404,7 @@ class TorCtlParser(Parser):
 
 return True
 
-def parse(self, source, do_complete=True):
+def parse(self, source):
 source.open(newline='\r\n')
 for line in source:
 # ignore line parsing errors





[tor-commits] [onionperf/master] Renames stream_id to id

2020-08-10 Thread karsten
commit deff90a2b21b4be8e51036dd352d1f4c398836ea
Author: Ana Custura 
Date:   Sun Jul 12 15:04:35 2020 +0100

Renames stream_id to id
---
 onionperf/visualization.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 68ad751..f85c10f 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -58,7 +58,7 @@ class TGenVisualization(Visualization):
 for client in analysis.get_nodes():
  tgen_streams = analysis.get_tgen_streams(client)
  for stream_id, stream_data in tgen_streams.items():
- stream = {"stream_id": stream_id, "label": label,
+ stream = {"id": stream_id, "label": label,
  "filesize_bytes": 
int(stream_data["stream_info"]["recvsize"])}
  stream["server"] = "onion" if ".onion:" in 
stream_data["transport_info"]["remote"] else "public"
  if "time_info" in stream_data:
@@ -85,7 +85,7 @@ class TGenVisualization(Visualization):
 for client in analysis.get_nodes():
 tgen_transfers = analysis.get_tgen_transfers(client)
 for transfer_id, transfer_data in 
tgen_transfers.items():
-stream = {"stream_id": transfer_id, "label": label,
+stream = {"id": transfer_id, "label": label,
 "filesize_bytes": 
transfer_data["filesize_bytes"]}
 stream["server"] = "onion" if ".onion:" in 
transfer_data["endpoint_remote"] else "public"
 if "elapsed_seconds" in transfer_data:
@@ -110,7 +110,7 @@ class TGenVisualization(Visualization):
 stream["start"] = 
datetime.datetime.utcfromtimestamp(transfer_data["unix_ts_start"])
 streams.append(stream)
 
-self.data = pd.DataFrame.from_records(streams, index="stream_id")
+self.data = pd.DataFrame.from_records(streams, index="id")
 
 def __plot_firstbyte_ecdf(self):
 for server in self.data["server"].unique():





[tor-commits] [onionperf/master] Take out Tor summaries and the do_complete switch.

2020-08-10 Thread karsten
commit 95a2f8b482ee882b4ec898a7bbc76018a6d98659
Author: Karsten Loesing 
Date:   Tue Jul 14 10:14:22 2020 +0200

Take out Tor summaries and the do_complete switch.

Implements tpo/metrics/onionperf#40005.
---
 onionperf/analysis.py| 70 
 onionperf/measurement.py |  2 +-
 onionperf/onionperf  |  9 ++---
 onionperf/reprocessing.py|  8 ++---
 onionperf/tests/test_reprocessing.py |  2 +-
 5 files changed, 30 insertions(+), 61 deletions(-)

diff --git a/onionperf/analysis.py b/onionperf/analysis.py
index 19c0192..26b00b2 100644
--- a/onionperf/analysis.py
+++ b/onionperf/analysis.py
@@ -29,13 +29,7 @@ class OPAnalysis(Analysis):
 def add_torctl_file(self, filepath):
 self.torctl_filepaths.append(filepath)
 
-def get_tor_bandwidth_summary(self, node, direction):
-try:
-return 
self.json_db['data'][node]['tor']['bandwidth_summary'][direction]
-except:
-return None
-
-def analyze(self, do_complete=False, date_filter=None):
+def analyze(self, date_filter=None):
 if self.did_analysis:
 return
 
@@ -47,7 +41,7 @@ class OPAnalysis(Analysis):
 if len(filepaths) > 0:
 for filepath in filepaths:
 logging.info("parsing log file at {0}".format(filepath))
-parser.parse(util.DataSource(filepath), 
do_complete=do_complete)
+parser.parse(util.DataSource(filepath), do_complete=True)
 
 if self.nickname is None:
 parsed_name = parser.get_name()
@@ -134,7 +128,7 @@ class OPAnalysis(Analysis):
 
 class Parser(object, metaclass=ABCMeta):
 @abstractmethod
-def parse(self, source, do_complete):
+def parse(self, source, do_complete=True):
 pass
 @abstractmethod
 def get_data(self):
@@ -270,14 +264,10 @@ class TorCtlParser(Parser):
 
 def __init__(self, date_filter=None):
 ''' date_filter should be given in UTC '''
-self.do_complete = False
-self.bandwidth_summary = {'bytes_read':{}, 'bytes_written':{}}
 self.circuits_state = {}
 self.circuits = {}
-self.circuits_summary = {'buildtimes':[], 'lifetimes':[]}
 self.streams_state = {}
 self.streams = {}
-self.streams_summary = {'lifetimes':{}}
 self.name = None
 self.boot_succeeded = False
 self.build_timeout_last = None
@@ -320,15 +310,10 @@ class TorCtlParser(Parser):
 
 data = circ.get_data()
 if data is not None:
-if built is not None and started is not None and len(data['path']) == 3:
-self.circuits_summary['buildtimes'].append(built - started)
-if ended is not None and started is not None:
-self.circuits_summary['lifetimes'].append(ended - started)
-if self.do_complete:
-self.circuits[cid] = data
+self.circuits[cid] = data
 self.circuits_state.pop(cid)
 
-elif self.do_complete and isinstance(event, CircMinorEvent):
+elif isinstance(event, CircMinorEvent):
 if event.purpose != event.old_purpose or event.event != CircEvent.PURPOSE_CHANGED:
 key = "{0}:{1}".format(event.event, event.purpose)
 circ.add_event(key, arrival_dt)
@@ -364,15 +349,9 @@ class TorCtlParser(Parser):
 
 data = strm.get_data()
 if data is not None:
-if self.do_complete:
-self.streams[sid] = data
-self.streams_summary['lifetimes'].setdefault(stream_type, []).append(ended - started)
+self.streams[sid] = data
 self.streams_state.pop(sid)
 
-def __handle_bw(self, event, arrival_dt):
-self.bandwidth_summary['bytes_read'][int(arrival_dt)] = event.read
-self.bandwidth_summary['bytes_written'][int(arrival_dt)] = event.written
-
 def __handle_buildtimeout(self, event, arrival_dt):
 self.build_timeout_last = event.timeout
 self.build_quantile_last = event.quantile
@@ -382,8 +361,6 @@ class TorCtlParser(Parser):
 self.__handle_circuit(event, arrival_dt)
 elif isinstance(event, StreamEvent):
 self.__handle_stream(event, arrival_dt)
-elif isinstance(event, BandwidthEvent):
-self.__handle_bw(event, arrival_dt)
 elif isinstance(event, BuildTimeoutSetEvent):
 self.__handle_buildtimeout(event, arrival_dt)
 
@@ -408,27 +385,26 @@ class TorCtlParser(Parser):
 elif re.search("BOOTSTRAP", line) is not 

[tor-commits] [onionperf/master] Merge branch 'tgen-1.0.0' into develop

2020-08-10 Thread karsten
commit a11531b53fb4443aa3c49eed0bf78ea9d577a500
Merge: adece5d aea46c5
Author: Karsten Loesing 
Date:   Tue Jul 14 09:03:24 2020 +0200

Merge branch 'tgen-1.0.0' into develop

 README.md|   7 +-
 onionperf/analysis.py| 332 +--
 onionperf/model.py   |  10 +-
 onionperf/onionperf  |  16 +-
 onionperf/reprocessing.py|  12 +-
 onionperf/tests/test_analysis.py | 265 +--
 onionperf/visualization.py   |  90 +++
 requirements.txt |   1 +
 8 files changed, 231 insertions(+), 502 deletions(-)





[tor-commits] [onionperf/master] Fix unit tests.

2020-08-10 Thread karsten
commit 3e67cb32257ecbaec632b782603c162aa10a0af6
Author: Karsten Loesing 
Date:   Mon Jul 13 16:55:10 2020 +0200

Fix unit tests.
---
 onionperf/tests/test_analysis.py | 265 ---
 1 file changed, 110 insertions(+), 155 deletions(-)

diff --git a/onionperf/tests/test_analysis.py b/onionperf/tests/test_analysis.py
index 6463458..82e2043 100644
--- a/onionperf/tests/test_analysis.py
+++ b/onionperf/tests/test_analysis.py
@@ -1,7 +1,8 @@
 import os
 import pkg_resources
 from nose.tools import *
-from onionperf import analysis, util
+from onionperf import util
+from tgentools import analysis
 
 
 def absolute_data_path(relative_path=""):
@@ -12,188 +13,142 @@ def absolute_data_path(relative_path=""):
"tests/data/" + relative_path)
 
 DATA_DIR = absolute_data_path()
-LINE_ERROR = '2019-04-22 14:41:20 1555940480.647663 [message] [shd-tgen-transfer.c:1504] [_tgentransfer_log] [transfer-error] transport TCP,12,localhost:127.0.0.1:46878,localhost:127.0.0.1:43735,dc34og3c3aqdqntblnxkstzfvh7iy7llojd4fi5j23y2po32ock2k7ad.onion:0.0.0.0:8080,state=ERROR,error=READ transfer transfer5m,4,cyan,GET,5242880,(null),0,state=ERROR,error=PROXY total-bytes-read=0 total-bytes-write=0 payload-bytes-read=0/5242880 (0.00%) usecs-to-socket-create=11 usecs-to-socket-connect=210 usecs-to-proxy-init=283 usecs-to-proxy-choice=348 usecs-to-proxy-request=412 usecs-to-proxy-response=-1 usecs-to-command=-1 usecs-to-response=-1 usecs-to-first-byte=-1 usecs-to-last-byte=-1 usecs-to-checksum=-1'
-
-NO_PARSE_LINE = '2018-04-14 21:10:04 1523740204.809894 [message] [shd-tgen-transfer.c:803] [_tgentransfer_log] [transfer-error] transport TCP,17,NULL:37.218.247.40:26006,NULL:0.0.0.0:0,146.0.73.4:146.0.73.4:1313,state=SUCCESS,error=NONE transfer (null),26847,op-nl,NONE,0,(null),0,state=ERROR,error=AUTH total-bytes-read=1 total-bytes-write=0 payload-bytes-write=0/0 (-nan%) usecs-to-socket-create=0 usecs-to-socket-connect=8053676879205 usecs-to-proxy-init=-1 usecs-to-proxy-choice=-1 usecs-to-proxy-request=-1 usecs-to-proxy-response=-1 usecs-to-command=-1 usecs-to-response=-1 usecs-to-first-byte=-1 usecs-to-last-byte=-1 usecs-to-checksum=-1'
-
-def test_transfer_status_event():
-transfer = analysis.TransferStatusEvent(LINE_ERROR)
-assert_equals(transfer.is_success, False)
-assert_equals(transfer.is_error, False)
-assert_equals(transfer.is_complete, False)
-assert_equals(transfer.unix_ts_end, 1555940480.647663)
-assert_equals(transfer.endpoint_local, 'localhost:127.0.0.1:46878')
-assert_equals(transfer.endpoint_proxy, 'localhost:127.0.0.1:43735')
-assert_equals(
-transfer.endpoint_remote,
-'dc34og3c3aqdqntblnxkstzfvh7iy7llojd4fi5j23y2po32ock2k7ad.onion:0.0.0.0:8080'
-)
+LINE_ERROR = '2019-04-22 14:41:20 1555940480.647663 [message] [tgen-stream.c:1618] [_tgenstream_log] [stream-error] transport [fd=12,local=localhost:127.0.0.1:46878,proxy=localhost:127.0.0.1:43735,remote=dc34og3c3aqdqntblnxkstzfvh7iy7llojd4fi5j23y2po32ock2k7ad.onion:0.0.0.0:8080,state=ERROR,error=READ] stream [id=4,vertexid=stream5m,name=cyan,peername=(null),sendsize=0,recvsize=5242880,sendstate=SEND_COMMAND,recvstate=RECV_NONE,error=PROXY] bytes [total-bytes-recv=0,total-bytes-send=0,payload-bytes-recv=0,payload-bytes-send=0,payload-progress-recv=0.00%,payload-progress-send=100.00%] times [created-ts=5948325159988,usecs-to-socket-create=11,usecs-to-socket-connect=210,usecs-to-proxy-init=283,usecs-to-proxy-choice=348,usecs-to-proxy-request=412,usecs-to-proxy-response=-1,usecs-to-command=-1,usecs-to-response=-1,usecs-to-first-byte-recv=-1,usecs-to-last-byte-recv=-1,usecs-to-checksum-recv=-1,usecs-to-first-byte-send=-1,usecs-to-last-byte-send=-1,usecs-to-checksum-send=-1,now-ts=5948446579043]'
+
+NO_PARSE_LINE = '2018-04-14 21:10:04 1523740204.809894 [message] [tgen-stream.c:1618] [_tgenstream_log] [stream-error] transport [fd=17,local=localhost:127.0.0.1:46878,proxy=localhost:127.0.0.1:43735,remote=dc34og3c3aqdqntblnxkstzfvh7iy7llojd4fi5j23y2po32ock2k7ad.onion:0.0.0.0:8080,state=SUCCESS,error=NONE] stream [id=4,vertexid=stream5m,name=cyan,peername=(null),sendsize=0,recvsize=5242880,sendstate=SEND_COMMAND,recvstate=RECV_NONE,error=PROXY] bytes [total-bytes-recv=1,total-bytes-send=0,payload-bytes-recv=0,payload-bytes-send=0,payload-progress-recv=0.00%,payload-progress-send=100.00%] times [created-ts=5948325159988,usecs-to-socket-create=0,usecs-to-socket-connect=210,usecs-to-proxy-init=-1,usecs-to-proxy-choice=-1,usecs-to-proxy-request=-1,usecs-to-proxy-response=-1,usecs-to-command=-1,usecs-to-response=-1,usecs-to-first-byte-recv=-1,usecs-to-last-byte-recv=-1,usecs-to-checksum-recv=-1,usecs-to-first-byte-send=-1,usecs-to-last-byte-send=-1,usecs-to-checksum-send=-1,now-ts=5948446579043]'
+
+
+def test_stream_status_event
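The TGen 1.0.0 format in `LINE_ERROR` above groups fields into labeled bracketed sections (`transport [...]`, `stream [...]`, `bytes [...]`, `times [...]`) of comma-separated `key=value` pairs. A rough sketch of pulling those pairs out with the standard library (this is an illustration, not OnionPerf's or TGenTools' actual parser):

```python
import re

def parse_tgen_sections(line):
    # Split a TGen 1.0.0 log line into its labeled bracketed sections,
    # e.g. "times [usecs-to-socket-create=11,...]" -> {"times": {...}}.
    sections = {}
    for label, body in re.findall(r"(\w[\w-]*) \[([^\]]*)\]", line):
        fields = {}
        for item in body.split(","):
            key, sep, value = item.partition("=")
            if sep:  # keep only well-formed key=value pairs
                fields[key] = value
        sections[label] = fields
    return sections

sample = ("transport [fd=12,state=ERROR,error=READ] "
          "times [usecs-to-socket-create=11,usecs-to-socket-connect=210]")
parsed = parse_tgen_sections(sample)
# parsed["times"]["usecs-to-socket-connect"] == "210"
```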

[tor-commits] [onionperf/master] Updates README.md following the update to TGen 1.0.0

2020-08-10 Thread karsten
commit 7282eea4f8349fb8e95a5e0c8c81f574abe8a30d
Author: Ana Custura 
Date:   Sun Jul 12 15:45:29 2020 +0100

Updates README.md following the update to TGen 1.0.0
---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 77384c6..032dd41 100644
--- a/README.md
+++ b/README.md
@@ -77,7 +77,6 @@ sudo apt install cmake libglib2.0-dev libigraph0-dev make
 cd ~/
 git clone https://github.com/shadow/tgen.git
 cd tgen/
-git checkout -b v0.0.1 v0.0.1
 mkdir build
 cd build/
 cmake ..
@@ -281,7 +280,7 @@ The PDF output file contains visualizations of the following metrics:
 
 The CSV output file contains the same data that is visualized in the PDF file. It contains the following columns:
 
-- `transfer_id` is the identifier used in the TGen client logs which may be useful to look up more details about a specific measurement.
+- `id` is the identifier used in the TGen client logs which may be useful to look up more details about a specific measurement.
 - `error_code`  is an optional error code if a measurement did not succeed.
 - `filesize_bytes` is the requested file size in bytes.
- `label` is the data set label as given in the `--data/-d` parameter to the `visualize` mode.
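For consumers of that CSV, the renamed `id` column can be read with the standard library. A minimal sketch — the sample rows, label, and the `TOR/TIMEOUT` error code are invented for illustration; real files are produced by OnionPerf's visualize mode:

```python
import csv
import io

# Made-up sample mirroring the documented columns.
sample = io.StringIO(
    "id,error_code,filesize_bytes,label\n"
    "4,,5242880,onionperf-2020-08\n"
    "7,TOR/TIMEOUT,1048576,onionperf-2020-08\n"
)
rows = list(csv.DictReader(sample))
# error_code is empty for successful measurements, so filtering on it
# selects the failed ones.
failed = [r for r in rows if r["error_code"]]
# failed holds the single row whose id is "7"
```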





[tor-commits] [onionperf/master] Rotate error code on y axis tick labels.

2020-08-10 Thread karsten
commit 30aa6dcb52d31403ab366599b13d7fa4db3adb80
Author: Karsten Loesing 
Date:   Thu Jul 23 21:36:10 2020 +0200

Rotate error code on y axis tick labels.

Still part of #34218.
---
 onionperf/visualization.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index daa5c3d..3abcb3f 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -292,6 +292,7 @@ class TGenVisualization(Visualization):
 g.set(title=title, xlabel=xlabel, ylabel=ylabel,
   xlim=(xmin - 0.03 * (xmax - xmin), xmax + 0.03 * (xmax - xmin)))
 plt.xticks(rotation=10)
+plt.yticks(rotation=80)
 sns.despine()
 self.page.savefig()
 plt.close()





[tor-commits] [onionperf/master] Add change log entries for latest two changes.

2020-08-10 Thread karsten
commit 251fd9c2b5fe9be1685f4d4e4453b617b505a638
Author: Karsten Loesing 
Date:   Fri Jul 24 16:36:34 2020 +0200

Add change log entries for latest two changes.
---
 CHANGELOG.md | 4 
 1 file changed, 4 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 54b632b..0c4c4f2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,10 @@
`onionperf analyze -s/--do-simple-parse` switch. Implements #40005.
  - Add JSON schema for analysis results file format 3.0. Implements
#40003.
+ - Correctly compute the start time of failed streams as part of the
+   update to TGen and TGenTools 1.0.0. Fixes #30362.
+ - Refine error codes shown in visualizations into TOR or TGEN errors.
+   Implements #34218.
 
 # Changes in version 0.5 - 2020-07-02
 





[tor-commits] [onionperf/master] Merge branch 'issue-40006' into develop

2020-08-10 Thread karsten
commit c41a7036852b0d7ae93b6cef8e58ed3c364f518e
Merge: dd42ef1 7576ce3
Author: Karsten Loesing 
Date:   Fri Jul 24 16:31:42 2020 +0200

Merge branch 'issue-40006' into develop

 .gitignore |  4 +--
 .gitlab-ci.yml | 23 
 .gitmodules|  0
 CHANGELOG.md   | 67 ++
 README.md  |  6 +
 Vagrantfile| 35 
 debian/changelog   |  5 
 debian/compat  |  1 -
 debian/control | 40 ---
 debian/install |  1 -
 debian/rules   | 15 ---
 debian/source/format   |  1 -
 onionperf/__init__.py  |  1 +
 onionperf/analysis.py  |  1 +
 onionperf/measurement.py   |  1 +
 onionperf/model.py |  1 +
 onionperf/monitor.py   |  1 +
 onionperf/onionperf|  1 +
 onionperf/util.py  |  1 +
 onionperf/visualization.py |  1 +
 run_tests.sh   |  3 ---
 21 files changed, 83 insertions(+), 126 deletions(-)






[tor-commits] [onionperf/master] Add CHANGELOG.md.

2020-08-10 Thread karsten
commit a671736fa1e7328c08e2a1644be280c3c045d29a
Author: Karsten Loesing 
Date:   Thu Jul 23 19:44:48 2020 +0200

Add CHANGELOG.md.
---
 CHANGELOG.md | 67 
 1 file changed, 67 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..54b632b
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,67 @@
+# Changes in version 0.6 - 2020-??-??
+
+ - Update to TGen 1.0.0, use TGenTools for parsing TGen log files, and
+   update analysis results file version to 3.0. Implements #33974.
+ - Remove summaries from analysis results files, and remove the
+   `onionperf analyze -s/--do-simple-parse` switch. Implements #40005.
+ - Add JSON schema for analysis results file format 3.0. Implements
+   #40003.
+
+# Changes in version 0.5 - 2020-07-02
+
+ - Add new graph showing the cumulative distribution function of
+   throughput in Mbps. Implements #33257.
+ - Improve `README.md` to make it more useful to developers and
+   researchers. Implements #40001.
+ - Always include the `error_code` column in visualization CSV output,
+   regardless of whether data contains measurements with an error code
+   or not. Fixes #40004.
+ - Write generated torrc files to disk for debugging purposes.
+   Implements #40002.
+
+# Changes in version 0.4 - 2020-06-16
+
+ - Include all measurements when analyzing log files at midnight as
+   part of `onionperf measure`, not just the ones from the day before.
+   Also add `onionperf analyze -x/--date-prefix` switch to prepend a
+   given date string to an analysis results file. Fixes #29369.
+ - Add `size`, `last_modified`, and `sha256` fields to index.xml.
+   Implements #29365.
+ - Add support for single onion services using the switch `onionperf
+   measure -s/--single-onion`. Implements #29368.
+ - Remove unused `onionperf measure --traffic-model` switch.
+   Implements #29370.
+ - Make `onionperf measure -o/--onion-only` and `onionperf measure
+   -i/--inet-only` switches mutually exclusive. Fixes #34316.
+ - Accept one or more paths to analysis results files or directories
+   of such files per dataset in `onionperf visualize -d/--data` to
+   include all contained measurements in a dataset. Implements #34191.
+
+# Changes in version 0.3 - 2020-05-30
+
+ - Automatically compress logs when rotating them. Fixes #33396.
+ - Update to Python 3. Implements #29367.
+ - Integrate reprocessing mode into analysis mode. Implements #34142.
+ - Record download times of smaller file sizes from partial completion
+   times. Implements #26673.
+ - Stop generating .tpf files. Implements #34141.
+ - Update analysis results file version to 2.0. Implements #34224.
+ - Export visualized data to a CSV file. Implements #33258.
+ - Remove version 2 onion service support. Implements #33434.
+ - Reduce timeout and stallout values. Implements #34024.
+ - Remove 50 KiB and 1 MiB downloads. Implements #34023.
+ - Remove existing Tor control log visualizations. Implements #34214.
+ - Update to Networkx version 2.4. Fixes #34298.
+ - Update time to first/last byte definitions to include the time 
+   between starting a measurement and receiving the first/last byte. 
+   Implements #34215.
+ - Update `requirements.txt` to actual requirements, and switch from 
+   distutils to setuptools. Fixes #30586.
+ - Split visualizations into public and onion service measurements. 
+   Fixes #34216.
+
+# Changes from before 2020-04
+
+ - Changes made before 2020-04 are not listed here. See `git log` for
+   details.
+





[tor-commits] [onionperf/master] Fix unit tests.

2020-08-10 Thread karsten
commit 899dbf2518a191167d779063a85c6c0285a68b7b
Author: Karsten Loesing 
Date:   Thu Jul 23 21:10:22 2020 +0200

Fix unit tests.

Latest TGenTools contain a fix for correctly computing the start time
of failed streams. We have to update our unit tests.

Related to #30362.
---
 onionperf/tests/test_analysis.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/onionperf/tests/test_analysis.py b/onionperf/tests/test_analysis.py
index 82e2043..6a8280b 100644
--- a/onionperf/tests/test_analysis.py
+++ b/onionperf/tests/test_analysis.py
@@ -48,7 +48,7 @@ def test_stream_complete_event_init():
 assert_equals(complete.time_info['usecs-to-proxy-choice'], '348')
 assert_equals(complete.time_info['usecs-to-socket-connect'], '210')
 assert_equals(complete.time_info['usecs-to-socket-create'], '11')
-assert_equals(complete.unix_ts_start, 1555940480.6472511)
+assert_equals(complete.unix_ts_start, 1555940359.2286081)
 
 
 def test_stream_error_event():
@@ -146,7 +146,7 @@ def test_stream_object_end_to_end():
 'now-ts': '5948446579043'
 },
'stream_id': '4:12:localhost:127.0.0.1:46878:dc34og3c3aqdqntblnxkstzfvh7iy7llojd4fi5j23y2po32ock2k7ad.onion:0.0.0.0:8080',
-'unix_ts_start': 1555940480.6472511
+'unix_ts_start': 1555940359.2286081
 })
 
 def test_parsing_parse_error():
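The corrected `unix_ts_start` expectation follows directly from the timestamps in the test's log line: TGen records a wall-clock end time plus two monotonic microsecond counters, `created-ts` and `now-ts`. A sketch of that arithmetic, reproducing the value from the test data (an illustration of the relationship, not TGenTools' actual code):

```python
import math

def stream_start_time(unix_ts_end, created_ts_usecs, now_ts_usecs):
    # Subtract the elapsed monotonic time (now-ts minus created-ts,
    # in microseconds) from the wall-clock end time to recover the
    # stream's start time.
    elapsed_secs = (now_ts_usecs - created_ts_usecs) / 1_000_000.0
    return unix_ts_end - elapsed_secs

# Values taken from the test data above:
start = stream_start_time(1555940480.647663, 5948325159988, 5948446579043)
assert math.isclose(start, 1555940359.2286081, abs_tol=1e-5)
```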





[tor-commits] [onionperf/master] Merge branch 'task-34218' into develop

2020-08-10 Thread karsten
commit dd42ef1d5193bf96a59ee336ff6aff8f53bfd979
Merge: 899dbf2 30aa6dc
Author: Karsten Loesing 
Date:   Thu Jul 23 21:37:11 2020 +0200

Merge branch 'task-34218' into develop

 onionperf/analysis.py  |   6 +++
 onionperf/visualization.py | 108 +++--
 2 files changed, 72 insertions(+), 42 deletions(-)






[tor-commits] [onionperf/master] Bump version to 0.6.

2020-08-10 Thread karsten
commit e333be25a85952d78f1655e80516d01ab59e50bc
Author: Karsten Loesing 
Date:   Sat Aug 8 14:31:20 2020 +0200

Bump version to 0.6.
---
 CHANGELOG.md | 2 +-
 setup.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0c4c4f2..b8a86ee 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,4 @@
-# Changes in version 0.6 - 2020-??-??
+# Changes in version 0.6 - 2020-08-08
 
  - Update to TGen 1.0.0, use TGenTools for parsing TGen log files, and
update analysis results file version to 3.0. Implements #33974.
diff --git a/setup.py b/setup.py
index d15157f..7fd7f85 100644
--- a/setup.py
+++ b/setup.py
@@ -6,7 +6,7 @@ with open('requirements.txt') as f:
 install_requires = f.readlines()
 
 setup(name='OnionPerf',
-  version='0.5',
+  version='0.6',
  description='A utility to monitor, measure, analyze, and visualize the performance of Tor and Onion Services',
   author='Rob Jansen',
   url='https://github.com/robgjansen/onionperf/',



[tor-commits] [metrics-web/master] Update to latest metrics-lib.

2020-08-16 Thread karsten
commit 884b31988af03cd3fe9a72b9d19d5c694f61d726
Author: Karsten Loesing 
Date:   Sun Aug 16 12:16:05 2020 +0200

Update to latest metrics-lib.
---
 src/submods/metrics-lib | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/submods/metrics-lib b/src/submods/metrics-lib
index 1f3028d..cf4830c 160000
--- a/src/submods/metrics-lib
+++ b/src/submods/metrics-lib
@@ -1 +1 @@
-Subproject commit 1f3028d082befde33f2294231b41c84d21f4c99b
+Subproject commit cf4830cac377503697c7e688e995a3f3cea5225e



[tor-commits] [collector/master] Prepare for 1.16.1 release.

2020-08-16 Thread karsten
commit 63562f1ce1cd8dc5fefd3ef674a3c9d9b1a446e5
Author: Karsten Loesing 
Date:   Sun Aug 16 22:22:54 2020 +0200

Prepare for 1.16.1 release.
---
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/build.xml b/build.xml
index 09c0636..e6e727a 100644
--- a/build.xml
+++ b/build.xml
@@ -9,7 +9,7 @@
 
   
   
-  
+  
   
   
   



[tor-commits] [collector/master] Update to metrics-lib 2.14.0.

2020-08-16 Thread karsten
commit 6e2a8bc1fe35d462b809fa08f587503f52b6b7e0
Author: Karsten Loesing 
Date:   Sun Aug 16 22:21:59 2020 +0200

Update to metrics-lib 2.14.0.
---
 CHANGELOG.md | 6 ++
 build.xml| 2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c8ffd84..ff2e9e7 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,9 @@
+# Changes in version 1.16.1 - 2020-08-16
+
+ * Medium changes
+   - Update to metrics-lib 2.14.0.
+
+
 # Changes in version 1.16.0 - 2020-08-05
 
  * Medium changes
diff --git a/build.xml b/build.xml
index 7e74b30..09c0636 100644
--- a/build.xml
+++ b/build.xml
@@ -12,7 +12,7 @@
   
   
   
-  
+  
   
 
   





[tor-commits] [onionperf/develop] Bump version to 0.6.

2020-08-17 Thread karsten
commit e333be25a85952d78f1655e80516d01ab59e50bc
Author: Karsten Loesing 
Date:   Sat Aug 8 14:31:20 2020 +0200

Bump version to 0.6.
---
 CHANGELOG.md | 2 +-
 setup.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0c4c4f2..b8a86ee 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,4 @@
-# Changes in version 0.6 - 2020-??-??
+# Changes in version 0.6 - 2020-08-08
 
  - Update to TGen 1.0.0, use TGenTools for parsing TGen log files, and
update analysis results file version to 3.0. Implements #33974.
diff --git a/setup.py b/setup.py
index d15157f..7fd7f85 100644
--- a/setup.py
+++ b/setup.py
@@ -6,7 +6,7 @@ with open('requirements.txt') as f:
 install_requires = f.readlines()
 
 setup(name='OnionPerf',
-  version='0.5',
+  version='0.6',
   description='A utility to monitor, measure, analyze, and visualize the 
performance of Tor and Onion Services',
   author='Rob Jansen',
   url='https://github.com/robgjansen/onionperf/',



[tor-commits] [metrics-web/master] Update fallback directories in Relay Search.

2020-08-17 Thread karsten
commit 69a6b2c544dacc92d4f94d6dcabf6c03b2959def
Author: Karsten Loesing 
Date:   Mon Aug 17 10:05:13 2020 +0200

Update fallback directories in Relay Search.

Triggered by tpo/core/fallback-scripts#30974.
---
 src/main/resources/web/js/rs/fallback_dir.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/main/resources/web/js/rs/fallback_dir.js b/src/main/resources/web/js/rs/fallback_dir.js
index ade34c5..4a19a0d 100644
--- a/src/main/resources/web/js/rs/fallback_dir.js
+++ b/src/main/resources/web/js/rs/fallback_dir.js
@@ -8,7 +8,7 @@ To update run:
 ant update-fallback-dir-list
 */
 
-var fallbackDirs = ["001524DD403D729F08F7E5D77813EF12756CFA8D", 
"025B66CEBC070FCB0519D206CF0CF4965C20C96E", 
"0338F9F55111FE8E3570E7DE117EF3AF999CC1D7", 
"0B85617241252517E8ECF2CFC7F4C1A32DCD153F", 
"0C039F35C2E40DCB71CD8A07E97C7FD7787D42D6", 
"113143469021882C3A4B82F084F8125B08EE471E", 
"11DF0017A43AF1F08825CD5D973297F81AB00FF3", 
"1211AC1BBB8A1AF7CBA86BCE8689AA3146B86423", 
"12AD30E5D25AA67F519780E2111E611A455FDC89", 
"12FD624EE73CEF37137C90D38B2406A66F68FAA2", 
"183005F78229D94EE51CE7795A42280070A48D0D", 
"185663B7C12777F052B2C2D23D7A239D8DA88A0F", 
"1938EBACBB1A7BFA888D9623C90061130E63BB3F", 
"1AE039EE0B11DB79E4B4B29CBA9F752864A0259E", 
"1CD17CB202063C51C7DAD3BACEF87ECE81C2350F", 
"1F6ABD086F40B890A33C93CC4606EE68B31C9556", 
"20462CBA5DA4C2D963567D17D0B7249718114A68", 
"204DFD2A2C6A0DC1FA0EACB495218E0B661704FD", 
"230A8B2A8BA861210D9B4BA97745AEC217A94207", 
"2F0F32AB1E5B943CA7D062C03F18960C86E70D94", 
"322C6E3A973BC10FC36DE3037AD27BC89F14723B", 
"32EE911D968BE3E016ECA572BB1ED0A9EE43FC2F", "330CD3DB
 6AD266DC70CDB512B036957D03D9BC59", "361D33C96D0F161275EE67E2C91EE10B276E778B", 
"375DCBB2DBD94E5263BC0C015F0C9E756669617E", 
"39F91959416763AFD34DBEEC05474411B964B2DC", 
"3AFDAAD91A15B4C6A7686A53AA8627CA871FF491", 
"3CA0D15567024D2E0B557DC0CF3E962B37999A79", 
"3CB4193EF4E239FCEDC4DC43468E0B0D6B67ACC3", 
"3E53D3979DB07EFD736661C934A1DED14127B684", 
"3F092986E9B87D3FDA09B71FA3A602378285C77A", 
"4061C553CA88021B8302F0814365070AAE617270", 
"4623A9EC53BFD83155929E56D6F7B55B5E718C24", 
"465D17C6FC297E3857B5C6F152006A1E212944EA", 
"46791D156C9B6C255C2665D4D8393EC7DBAA7798", 
"484A10BA2B8D48A5F0216674C8DD50EF27BC32F3", 
"489D94333DF66D57FFE34D9D59CC2D97E2CB0053", 
"4EB55679FA91363B97372554F8DC7C63F4E5B101", 
"4F0DB7E687FC7C0AE55C8F243DA8B0EB27FBF1F2", 
"509EAB4C5D10C9A9A24B4EA0CE402C047A2D64E6", 
"51E1CF613FD6F9F11FE24743C91D6F9981807D82", 
"547DA56F6B88B6C596B3E3086803CDA4F0EF8F21", 
"557ACEC850F54EEE65839F83CACE2B0825BE811E", 
"5BF17163CBE73D8CD9FDBE030C944EA05707DA93", 
"5E56738E7F97AA81DEEF59AF28494293DFBFC
 CDF", "616081EC829593AF4232550DE6FFAA1D75B37A90", 
"68F175CCABE727AA2D2309BCD8789499CEE36ED7", 
"6A7551EEE18F78A9813096E82BF84F740D32B911", 
"6EF897645B79B6CB35E853B32506375014DE3621", 
"7088D485934E8A403B81531F8C90BDC75FA43C98", 
"70C55A114C0EF3DC5784A4FAEE64388434A3398F", 
"72B2B12A3F60408BDBC98C6DF53988D3A0B3F0EE", 
"742C45F2D9004AADE0077E528A4418A6A81BC2BA", 
"745369332749021C6FAF100D327BC3BF1DF4707B", 
"77131D7E2EC1CA9B8D737502256DA9103599CE51", 
"775B0FAFDE71AADC23FFC8782B7BEB1D5A92733E", 
"79509683AB4C8DDAF90A120C69A4179C6CD5A387", 
"7BB70F8585DFC27E75D692970C0EEB0F22983A63", 
"7BFB908A3AA5B491DA4CA72CCBEE0E1F2A939B55", 
"7E281CD2C315C4F7A84BC7C8721C3BC974DDBFA3", 
"80AAF8D5956A43C197104CEF2550CD42D165C6FB", 
"8101421BEFCCF4C271D5483C5AABCAAD245BBB9D", 
"81B75D534F91BFB7C57AB67DA10BCEF622582AE8", 
"823AA81E277F366505545522CEDC2F529CE4DC3F", 
"844AE9CAD04325E955E2BE1521563B79FE7094B7", 
"8456DFA94161CDD99E480C2A2992C366C6564410", 
"855BC2DABE24C861CD887DB9B2E950424B49FC34", "85A885433E50B1874F11CE
 C9BE98451E24660976", "86C281AD135058238D7A337D546C902BE8505DDE", 
"8C00FA7369A7A308F6A137600F0FA07990D9D451", 
"8D79F73DCD91FC4F5017422FAC70074D6DB8DD81", 
"8FA37B93397015B2BC5A525C908485260BE9F422", 
"90A5D1355C4B5840E950EB61E673863A6AE3ACA1", 
"91D23D8A539B83D2FB56AA67ECD4D75CC093AC55", 
"91E4015E1F82DAF0121D62267E54A1F661AB6DC7", 
"924B24AFA7F075D059E8EEB284CC400B33D3D036", 
"9288B75B5FF8861EFF32A6BE8825CC38A4F9F8C2", 
"935F589545B8A271A722E330445BB99F67DBB058", 
"94C4B7B8C50C86A92B6A2

[tor-commits] [onionperf/develop] Measure static guard nodes.

2020-08-20 Thread karsten
commit e9fd47d95db102b3a7ace36fa412e18d182c5fa4
Author: Karsten Loesing 
Date:   Tue Jun 16 21:30:07 2020 +0200

Measure static guard nodes.

Add --drop-guards parameter to use and drop guards after a given
number of hours.

Implements #33399.
---
 onionperf/measurement.py |  7 ---
 onionperf/monitor.py | 18 +-
 onionperf/onionperf  |  9 -
 3 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/onionperf/measurement.py b/onionperf/measurement.py
index 4a58bc4..899b277 100644
--- a/onionperf/measurement.py
+++ b/onionperf/measurement.py
@@ -172,7 +172,7 @@ def logrotate_thread_task(writables, tgen_writable, torctl_writable, docroot, ni
 
 class Measurement(object):
 
-def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, oneshot, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False):
+def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, oneshot, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False, drop_guards_interval_hours=None):
 self.tor_bin_path = tor_bin_path
 self.tgen_bin_path = tgen_bin_path
 self.datadir_path = datadir_path
@@ -188,6 +188,7 @@ class Measurement(object):
 self.torclient_conf_file = torclient_conf_file
 self.torserver_conf_file = torserver_conf_file
 self.single_onion = single_onion
+self.drop_guards_interval_hours = drop_guards_interval_hours
 
 def run(self, do_onion=True, do_inet=True, client_tgen_listen_port=5, 
client_tgen_connect_ip='0.0.0.0', client_tgen_connect_port=8080, 
client_tor_ctl_port=59050, client_tor_socks_port=59000,
  server_tgen_listen_port=8080, server_tor_ctl_port=59051, 
server_tor_socks_port=59001):
@@ -388,7 +389,7 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 
seconds\nDataDirectory
 tor_config = tor_config + f.read()
 if name == "client" and self.additional_client_conf:
 tor_config += self.additional_client_conf
-if not 'UseEntryGuards' in tor_config and not 'UseBridges' in tor_config:
+if not 'UseEntryGuards' in tor_config and not 'UseBridges' in tor_config and self.drop_guards_interval_hours == 0:
 tor_config += "UseEntryGuards 0\n"
 if name == "server" and self.single_onion:
 tor_config += "HiddenServiceSingleHopMode 1\nHiddenServiceNonAnonymousMode 1\n"
@@ -467,7 +468,7 @@ WarnUnsafeSocks 0\nSafeLogging 0\nMaxCircuitDirtiness 60 
seconds\nDataDirectory
 
 torctl_events = [e for e in monitor.get_supported_torctl_events() if e not in ['DEBUG', 'INFO', 'NOTICE', 'WARN', 'ERR']]
 newnym_interval_seconds = 300
-torctl_args = (control_port, torctl_writable, torctl_events, newnym_interval_seconds, self.done_event)
+torctl_args = (control_port, torctl_writable, torctl_events, newnym_interval_seconds, self.drop_guards_interval_hours, self.done_event)
 torctl_helper = threading.Thread(target=monitor.tor_monitor_run, name="torctl_{0}_helper".format(name), args=torctl_args)
 torctl_helper.start()
 self.threads.append(torctl_helper)
diff --git a/onionperf/monitor.py b/onionperf/monitor.py
index 5387bff..ac6fea9 100644
--- a/onionperf/monitor.py
+++ b/onionperf/monitor.py
@@ -22,7 +22,7 @@ class TorMonitor(object):
 self.writable = writable
 self.events = events
 
-def run(self, newnym_interval_seconds=None, done_ev=None):
+def run(self, newnym_interval_seconds=None, drop_guards_interval_hours=0, done_ev=None):
 with Controller.from_port(port=self.tor_ctl_port) as torctl:
 torctl.authenticate()
 
@@ -54,6 +54,10 @@ class TorMonitor(object):
 # let stem run its threads and log all of the events, until user interrupts
 try:
 interval_count = 0
+if newnym_interval_seconds is not None:
+next_newnym = newnym_interval_seconds
+if drop_guards_interval_hours > 0:
+next_drop_guards = drop_guards_interval_hours * 3600
 while done_ev is None or not done_ev.is_set():
 # if self.filepath != '-' and os.path.exists(self.filepath):
 #with open(self.filepath, 'rb') as sizef:
@@ -61,9 +65,13 @@ class TorMonitor(object):
 #logging.info(msg)
 sleep(1)
 interval_count += 1
-if newnym_interval_seconds is not None and interval_count >= newnym_interval_seconds:
-interval_count = 0
+i
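The monitor diff above (truncated by the list archive) extends the one-second loop to schedule two periodic control-port actions: a NEWNYM signal every `newnym_interval_seconds` and a guard drop every `drop_guards_interval_hours`. A stem-free sketch of that scheduling logic, where the event tuples stand in for the actual control-port calls:

```python
def run_schedule(total_seconds, newnym_interval_seconds=None,
                 drop_guards_interval_hours=0):
    # Simulate the monitor's one-second loop and record when each
    # periodic action would fire. Hypothetical stand-in for the
    # stem-based loop in onionperf/monitor.py.
    events = []
    next_newnym = newnym_interval_seconds
    next_drop = (drop_guards_interval_hours * 3600
                 if drop_guards_interval_hours > 0 else None)
    for tick in range(1, total_seconds + 1):  # one iteration per second
        if next_newnym is not None and tick >= next_newnym:
            events.append(("NEWNYM", tick))      # would send SIGNAL NEWNYM
            next_newnym += newnym_interval_seconds
        if next_drop is not None and tick >= next_drop:
            events.append(("DROPGUARDS", tick))  # would drop and re-pick guards
            next_drop += drop_guards_interval_hours * 3600
    return events

# run_schedule(900, newnym_interval_seconds=300)
# → [("NEWNYM", 300), ("NEWNYM", 600), ("NEWNYM", 900)]
```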

[tor-commits] [onionperf/develop] Merge branch 'task-33399' into develop

2020-08-20 Thread karsten
commit 4ff257c4270c0d1e5fd0f1ef76e640696ec2c514
Merge: e333be2 e9fd47d
Author: Karsten Loesing 
Date:   Thu Aug 20 15:05:23 2020 +0200

Merge branch 'task-33399' into develop

 onionperf/measurement.py |  7 ---
 onionperf/monitor.py | 18 +-
 onionperf/onionperf  |  9 -
 3 files changed, 25 insertions(+), 9 deletions(-)






[tor-commits] [onionperf/develop] Tweak #33399 patch.

2020-08-20 Thread karsten
commit dfec0b8960ac214f63985dfc317ffce750ef5922
Author: Karsten Loesing 
Date:   Thu Aug 20 15:40:25 2020 +0200

Tweak #33399 patch.

 - Add a change log entry.
 - Pick a more sensible default for `drop_guards_interval_hours`,
   also to fix unit tests.
---
 CHANGELOG.md | 5 +
 onionperf/measurement.py | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index b8a86ee..c31a40e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,8 @@
+# Changes in version 0.7 - 2020-??-??
+
+ - Add `onionperf measure --drop-guards` parameter to use and drop
+   guards after a given number of hours. Implements #33399.
+
 # Changes in version 0.6 - 2020-08-08
 
  - Update to TGen 1.0.0, use TGenTools for parsing TGen log files, and
diff --git a/onionperf/measurement.py b/onionperf/measurement.py
index 1a5f3bb..709fbc6 100644
--- a/onionperf/measurement.py
+++ b/onionperf/measurement.py
@@ -173,7 +173,7 @@ def logrotate_thread_task(writables, tgen_writable, torctl_writable, docroot, ni
 
 class Measurement(object):
 
-def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, oneshot, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False, drop_guards_interval_hours=None):
+def __init__(self, tor_bin_path, tgen_bin_path, datadir_path, privatedir_path, nickname, oneshot, additional_client_conf=None, torclient_conf_file=None, torserver_conf_file=None, single_onion=False, drop_guards_interval_hours=0):
 self.tor_bin_path = tor_bin_path
 self.tgen_bin_path = tgen_bin_path
 self.datadir_path = datadir_path



[tor-commits] [metrics-lib/master] Provide microdescriptor digest in hex encoding.

2020-12-11 Thread karsten
commit 664921eb50491a774a5281bf1c12836ab7dedd94
Author: Karsten Loesing 
Date:   Fri Dec 11 10:22:39 2020 +0100

Provide microdescriptor digest in hex encoding.
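The conversion added here — from the unpadded base64 SHA-256 digest to 64 lower-case hex characters, done with commons-codec in the Java patch below — can be sketched in a few lines of Python. The sample digest is the one asserted in the `MicrodescriptorImplTest` change at the end of this patch; note the stored base64 value is unpadded and must be re-padded before decoding.

```python
import base64
import binascii

def digest_base64_to_hex(digest_b64):
    """Convert an unpadded base64 SHA-256 digest to 64 lower-case hex chars."""
    # The digest is stored without base64 padding; restore it first.
    padded = digest_b64 + "=" * (-len(digest_b64) % 4)
    return binascii.hexlify(base64.b64decode(padded)).decode("ascii")

# Digest taken from the MicrodescriptorImplTest change in this patch:
print(digest_base64_to_hex("ER1AC4KqT//o3pJDrqlmej5G2qW1EQYEr/IrMQHNc6I"))
# -> 111d400b82aa4fffe8de9243aea9667a3e46daa5b5110604aff22b3101cd73a2
```

The hex form is filesystem-safe, which is why it can be used as a file name when writing the descriptor to disk.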
---
 CHANGELOG.md |  3 +++
 .../java/org/torproject/descriptor/Microdescriptor.java  |  9 +
 .../torproject/descriptor/impl/MicrodescriptorImpl.java  | 16 
 .../descriptor/impl/MicrodescriptorImplTest.java |  3 +++
 4 files changed, 31 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 828718d..bee22c3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,9 @@
  * Medium changes
- Optimize parsing of large files containing many descriptors.
 
+ * Minor changes
+   - Provide microdescriptor SHA-256 digest in hexadecimal encoding.
+
 
 # Changes in version 2.14.0 - 2020-08-07
 
diff --git a/src/main/java/org/torproject/descriptor/Microdescriptor.java b/src/main/java/org/torproject/descriptor/Microdescriptor.java
index feaf00b..0d329b7 100644
--- a/src/main/java/org/torproject/descriptor/Microdescriptor.java
+++ b/src/main/java/org/torproject/descriptor/Microdescriptor.java
@@ -32,6 +32,15 @@ public interface Microdescriptor extends Descriptor {
*/
   String getDigestSha256Base64();
 
+  /**
+   * Return the SHA-256 descriptor digest, encoded as 64 lower-case hexadecimal
+   * characters, that can be used as file name when writing this descriptor to
+   * disk.
+   *
+   * @since 2.15.0
+   */
+  String getDigestSha256Hex();
+
   /**
* Return the RSA-1024 public key in PEM format used to encrypt CREATE
* cells for this server, or null if the descriptor doesn't contain an
diff --git a/src/main/java/org/torproject/descriptor/impl/MicrodescriptorImpl.java b/src/main/java/org/torproject/descriptor/impl/MicrodescriptorImpl.java
index 87ab7ae..8d4ac1b 100644
--- a/src/main/java/org/torproject/descriptor/impl/MicrodescriptorImpl.java
+++ b/src/main/java/org/torproject/descriptor/impl/MicrodescriptorImpl.java
@@ -6,6 +6,9 @@ package org.torproject.descriptor.impl;
 import org.torproject.descriptor.DescriptorParseException;
 import org.torproject.descriptor.Microdescriptor;
 
+import org.apache.commons.codec.binary.Base64;
+import org.apache.commons.codec.binary.Hex;
+
 import java.io.File;
 import java.util.ArrayList;
 import java.util.Arrays;
@@ -26,6 +29,7 @@ public class MicrodescriptorImpl extends DescriptorImpl
 super(descriptorBytes, offsetAndLength, descriptorFile, false);
 this.parseDescriptorBytes();
 this.calculateDigestSha256Base64(Key.ONION_KEY.keyword + NL);
+this.convertDigestSha256Base64ToHex();
 this.checkExactlyOnceKeys(EnumSet.of(Key.ONION_KEY));
 Set<Key> atMostOnceKeys = EnumSet.of(
 Key.NTOR_ONION_KEY, Key.FAMILY, Key.P, Key.P6, Key.ID);
@@ -212,6 +216,18 @@ public class MicrodescriptorImpl extends DescriptorImpl
 }
   }
 
+  private void convertDigestSha256Base64ToHex() {
+this.digestSha256Hex = Hex.encodeHexString(Base64.decodeBase64(
+this.getDigestSha256Base64()));
+  }
+
+  private String digestSha256Hex;
+
+  @Override
+  public String getDigestSha256Hex() {
+return this.digestSha256Hex;
+  }
+
   private String onionKey;
 
   @Override
diff --git a/src/test/java/org/torproject/descriptor/impl/MicrodescriptorImplTest.java b/src/test/java/org/torproject/descriptor/impl/MicrodescriptorImplTest.java
index 128d39a..fbc2fc9 100644
--- a/src/test/java/org/torproject/descriptor/impl/MicrodescriptorImplTest.java
+++ b/src/test/java/org/torproject/descriptor/impl/MicrodescriptorImplTest.java
@@ -74,6 +74,9 @@ public class MicrodescriptorImplTest {
 Microdescriptor micro = DescriptorBuilder.createWithDefaultLines();
 assertEquals("ER1AC4KqT//o3pJDrqlmej5G2qW1EQYEr/IrMQHNc6I",
 micro.getDigestSha256Base64());
+assertEquals(
+"111d400b82aa4fffe8de9243aea9667a3e46daa5b5110604aff22b3101cd73a2",
+micro.getDigestSha256Hex());
   }
 
   @Test





[tor-commits] [metrics-lib/master] Bump version to 2.15.0-dev.

2020-12-11 Thread karsten
commit 179d47ee259ffe86302e57dad6cbc39fdbb72f0b
Author: Karsten Loesing 
Date:   Fri Dec 11 14:53:56 2020 +0100

Bump version to 2.15.0-dev.
---
 CHANGELOG.md | 3 +++
 build.xml| 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index cb5bce6..ac865a9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,6 @@
+# Changes in version 2.??.? - 2020-??-??
+
+
 # Changes in version 2.15.0 - 2020-12-11
 
  * Medium changes
diff --git a/build.xml b/build.xml
index 1a9e555..7544a32 100644
--- a/build.xml
+++ b/build.xml
@@ -7,7 +7,7 @@
 
 
-  
+  
   
   
   



[tor-commits] [metrics-lib/master] Parse version 3 onion service statistics lines.

2020-12-11 Thread karsten
commit ee7c3eb73824a84e33b6c44b60fa8a32f37ab71e
Author: Karsten Loesing 
Date:   Tue Dec 8 10:40:19 2020 +0100

Parse version 3 onion service statistics lines.

Implements the library part of tpo/metrics/statistics#40002.
---
 CHANGELOG.md   |  6 +-
 .../torproject/descriptor/ExtraInfoDescriptor.java | 55 +
 .../descriptor/impl/ExtraInfoDescriptorImpl.java   | 93 ++
 .../java/org/torproject/descriptor/impl/Key.java   |  3 +
 .../impl/ExtraInfoDescriptorImplTest.java  | 46 +++
 5 files changed, 202 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c6886e8..8ff5723 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,8 @@
-# Changes in version 2.??.? - 2020-??-??
+# Changes in version 2.15.0 - 2020-??-??
+
+ * Medium changes
+   - Parse version 3 onion service statistics contained in extra-info
+ descriptors.
 
 
 # Changes in version 2.14.0 - 2020-08-07
diff --git a/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java b/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java
index e094fed..b553a22 100644
--- a/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java
+++ b/src/main/java/org/torproject/descriptor/ExtraInfoDescriptor.java
@@ -674,6 +674,61 @@ public interface ExtraInfoDescriptor extends Descriptor {
*/
   Map getHidservDirOnionsSeenParameters();
 
+  /**
+   * Return the time in milliseconds since the epoch when the included version 3
+   * onion service statistics interval ended, or -1 if no such statistics are
+   * included.
+   *
+   * @since 2.15.0
+   */
+  long getHidservV3StatsEndMillis();
+
+  /**
+   * Return the interval length of the included version 3 onion service
+   * statistics in seconds, or -1 if no such statistics are included.
+   *
+   * @since 2.15.0
+   */
+  long getHidservV3StatsIntervalLength();
+
+  /**
+   * Return the approximate number of RELAY cells seen in either direction on a
+   * version 3 onion service circuit after receiving and successfully processing
+   * a RENDEZVOUS1 cell, or null if no such statistics are included.
+   *
+   * @since 2.15.0
+   */
+  Double getHidservRendV3RelayedCells();
+
+  /**
+   * Return the obfuscation parameters applied to the original measurement value
+   * of RELAY cells seen in either direction on a version 3 onion service
+   * circuit after receiving and successfully processing a RENDEZVOUS1 cell, or
+   * null if no such statistics are included.
+   *
+   * @since 2.15.0
+   */
+  Map getHidservRendV3RelayedCellsParameters();
+
+  /**
+   * Return the approximate number of unique version 3 onion service identities
+   * seen in descriptors published to and accepted by this onion service
+   * directory, or null if no such statistics are included.
+   *
+   * @since 2.15.0
+   */
+  Double getHidservDirV3OnionsSeen();
+
+  /**
+   * Return the obfuscation parameters applied to the original measurement value
+   * of unique version 3 onion service identities seen in descriptors published
+   * to and accepted by this onion service directory, or null if no such
+   * statistics are included.
+   *
+   * @since 2.15.0
+   */
+  Map getHidservDirV3OnionsSeenParameters();
+
   /**
* Return the time in milliseconds since the epoch when the included
* padding-counts statistics ended, or -1 if no such statistics are included.
diff --git a/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java b/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java
index 5cab6ab..4f87237 100644
--- a/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java
+++ b/src/main/java/org/torproject/descriptor/impl/ExtraInfoDescriptorImpl.java
@@ -225,6 +225,15 @@ public abstract class ExtraInfoDescriptorImpl extends DescriptorImpl
 case HIDSERV_DIR_ONIONS_SEEN:
   this.parseHidservDirOnionsSeenLine(line, partsNoOpt);
   break;
+case HIDSERV_V3_STATS_END:
+  this.parseHidservV3StatsEndLine(line, partsNoOpt);
+  break;
+case HIDSERV_REND_V3_RELAYED_CELLS:
+  this.parseHidservRendV3RelayedCellsLine(line, partsNoOpt);
+  break;
+case HIDSERV_DIR_V3_ONIONS_SEEN:
+  this.parseHidservDirV3OnionsSeenLine(line, partsNoOpt);
+  break;
 case PADDING_COUNTS:
   this.parsePaddingCountsLine(line, partsNoOpt);
   break;
@@ -764,6 +773,46 @@ public abstract class ExtraInfoDescriptorImpl extends DescriptorImpl
 partsNoOpt, 2);
   }
 
+  private void parseHidservV3StatsEndLine(String line,
+  String[] partsNoOpt) throws DescriptorParseException {
+long[] parsedStatsEndData = this.parseStatsEndLine(line, partsNoOpt,
+5);
+this.hidservV3StatsEndMillis = parsedStatsEndData[0];
+this.hidservV3StatsIntervalLength = parsedStatsEndData[1];
+  }
+
+  private void

[tor-commits] [metrics-lib/master] Prepare for 2.15.0 release.

2020-12-11 Thread karsten
commit 07cab7f5604fe943c915c91251b8da322d53036c
Author: Karsten Loesing 
Date:   Fri Dec 11 14:50:18 2020 +0100

Prepare for 2.15.0 release.
---
 CHANGELOG.md | 4 +---
 build.xml| 2 +-
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index bee22c3..cb5bce6 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,10 +1,8 @@
-# Changes in version 2.15.0 - 2020-??-??
+# Changes in version 2.15.0 - 2020-12-11
 
  * Medium changes
- Parse version 3 onion service statistics contained in extra-info
  descriptors.
-
- * Medium changes
- Optimize parsing of large files containing many descriptors.
 
  * Minor changes
diff --git a/build.xml b/build.xml
index 72990c6..1a9e555 100644
--- a/build.xml
+++ b/build.xml
@@ -7,7 +7,7 @@
 
 
-  
+  
   
   
   





[tor-commits] [metrics-lib/master] Optimize parsing large files with many descriptors.

2020-12-11 Thread karsten
commit ff7e36c15626bdc24df54ebd94da5ab58f4de4c4
Author: Karsten Loesing 
Date:   Thu Dec 10 17:54:02 2020 +0100

Optimize parsing large files with many descriptors.

When parsing a large file with many descriptors we would repeatedly
search the remaining file for the sequence "newline + keyword + space"
and then "newline + keyword + newline" to find the start of the next
descriptor. However, if the keyword is always followed by newline, the
first search would always fail.

The optimization here is to search once whether the keyword is
followed by space or newline and avoid unnecessary searches when going
through the file.

In the long term we should use a better parser. But in the short term
this optimization will have a major impact on performance, in
particular with regard to concatenated microdescriptors.
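The idea in this commit — test once, up front, whether each delimiter variant ("keyword + space" and "keyword + newline") occurs in the file at all, and then skip every later search for a variant that can never match — can be sketched in Python. This is an illustrative sketch of the technique, not a translation of the metrics-lib parser; the helper name is hypothetical.

```python
def descriptor_starts(text, keyword):
    """Return the offsets at which descriptors beginning with `keyword` start.

    Checks once whether "keyword + space" or "keyword + newline" appears
    anywhere, so the scan never repeats a search that is guaranteed to fail
    (e.g. concatenated microdescriptors, where "onion-key" is always
    followed by a newline and the space variant never occurs).
    """
    starts = set()
    for sep in (" ", "\n"):
        needle = keyword + sep
        # One up-front check replaces many guaranteed-to-fail searches.
        if not (text.startswith(needle) or ("\n" + needle) in text):
            continue
        if text.startswith(needle):
            starts.add(0)
        pos = text.find("\n" + needle)
        while pos != -1:
            starts.add(pos + 1)
            pos = text.find("\n" + needle, pos + 1)
    return sorted(starts)
```

For a file of concatenated microdescriptors the space-delimited branch is skipped entirely after a single containment check, which is where the performance win comes from.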
---
 CHANGELOG.md   |  3 +++
 .../descriptor/impl/DescriptorParserImpl.java  | 27 ++
 2 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8ff5723..828718d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,9 @@
- Parse version 3 onion service statistics contained in extra-info
  descriptors.
 
+ * Medium changes
+   - Optimize parsing of large files containing many descriptors.
+
 
 # Changes in version 2.14.0 - 2020-08-07
 
diff --git a/src/main/java/org/torproject/descriptor/impl/DescriptorParserImpl.java b/src/main/java/org/torproject/descriptor/impl/DescriptorParserImpl.java
index e008e7a..abe4411 100644
--- a/src/main/java/org/torproject/descriptor/impl/DescriptorParserImpl.java
+++ b/src/main/java/org/torproject/descriptor/impl/DescriptorParserImpl.java
@@ -181,16 +181,25 @@ public class DescriptorParserImpl implements DescriptorParser {
 String ascii = new String(rawDescriptorBytes, StandardCharsets.US_ASCII);
 boolean containsAnnotations = ascii.startsWith("@")
 || ascii.contains(NL + "@");
+boolean containsKeywordSpace = ascii.startsWith(key.keyword + SP)
+|| ascii.contains(NL + key.keyword + SP);
+boolean containsKeywordNewline = ascii.startsWith(key.keyword + NL)
+|| ascii.contains(NL + key.keyword + NL);
 while (startAnnotations < endAllDescriptors) {
-  int startDescriptor;
-  if (startAnnotations == ascii.indexOf(key.keyword + SP,
-  startAnnotations) || startAnnotations == ascii.indexOf(
-  key.keyword + NL)) {
+  int startDescriptor = -1;
+  if ((containsKeywordSpace
+  && startAnnotations == ascii.indexOf(key.keyword + SP,
+  startAnnotations))
+  || (containsKeywordNewline
+  && startAnnotations == ascii.indexOf(key.keyword + NL,
+  startAnnotations))) {
 startDescriptor = startAnnotations;
   } else {
-startDescriptor = ascii.indexOf(NL + key.keyword + SP,
-startAnnotations - 1);
-if (startDescriptor < 0) {
+if (containsKeywordSpace) {
+  startDescriptor = ascii.indexOf(NL + key.keyword + SP,
+  startAnnotations - 1);
+}
+if (startDescriptor < 0 && containsKeywordNewline) {
   startDescriptor = ascii.indexOf(NL + key.keyword + NL,
   startAnnotations - 1);
 }
@@ -204,10 +213,10 @@ public class DescriptorParserImpl implements DescriptorParser {
   if (containsAnnotations) {
 endDescriptor = ascii.indexOf(NL + "@", startDescriptor);
   }
-  if (endDescriptor < 0) {
+  if (endDescriptor < 0 && containsKeywordSpace) {
 endDescriptor = ascii.indexOf(NL + key.keyword + SP, startDescriptor);
   }
-  if (endDescriptor < 0) {
+  if (endDescriptor < 0 && containsKeywordNewline) {
 endDescriptor = ascii.indexOf(NL + key.keyword + NL, startDescriptor);
   }
   if (endDescriptor < 0) {





[tor-commits] [onionperf/develop] Merge branch 'task-40008' into develop

2020-12-13 Thread karsten
commit ce36170f03b540fc4e2c8716a0bddade37196162
Merge: 118d149 0bf86a6
Author: Karsten Loesing 
Date:   Sun Dec 13 22:01:45 2020 +0100

Merge branch 'task-40008' into develop

 onionperf/onionperf | 3 +++
 1 file changed, 3 insertions(+)



[tor-commits] [onionperf/develop] Log version number in each execution.

2020-12-13 Thread karsten
commit 0bf86a60854b567fd792728e1957dcf5107d1229
Author: Karsten Loesing 
Date:   Sun Dec 13 22:01:09 2020 +0100

Log version number in each execution.

Implements #40008.
---
 onionperf/onionperf | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/onionperf/onionperf b/onionperf/onionperf
index 032cd7d..90b5bf6 100755
--- a/onionperf/onionperf
+++ b/onionperf/onionperf
@@ -14,6 +14,8 @@ from socket import gethostname
 import onionperf.util as util
 from onionperf.monitor import get_supported_torctl_events
 
+__version__ = '0.8'
+
 DESC_MAIN = """
 OnionPerf is a utility to monitor, measure, analyze, and visualize the
 performance of Tor and Onion Services.
@@ -379,6 +381,7 @@ files generated by this script will be written""",
 sys.exit(1)
 else:
 args = main_parser.parse_args()
+logging.info("Starting OnionPerf version {0} in {1} mode.".format(__version__, sys.argv[1]))
 args.func(args)
 
 def monitor(args):





[tor-commits] [onionperf/develop] Merge branch 'task-40012' into develop

2020-12-13 Thread karsten
commit f8944174633f3b0fa5e74747086c02761f83ee70
Merge: ce36170 c46deb0
Author: Karsten Loesing 
Date:   Sun Dec 13 22:57:46 2020 +0100

Merge branch 'task-40012' into develop

 CHANGELOG.md   |  5 +
 onionperf/visualization.py | 42 --
 2 files changed, 33 insertions(+), 14 deletions(-)



___
tor-commits mailing list
tor-commits@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-commits


[tor-commits] [onionperf/develop] Move two data.empty checks after data.dropna calls.

2020-12-13 Thread karsten
commit c46deb00967899bdd9c15b9b7ee41ca5c53b
Author: Karsten Loesing 
Date:   Sun Dec 13 22:26:52 2020 +0100

Move two data.empty checks after data.dropna calls.

Suggested by acute, still related to #40012.
---
 onionperf/visualization.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 75ec2b5..a793791 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -298,25 +298,26 @@ class TGenVisualization(Visualization):
 plt.close()
 
 def __draw_countplot(self, x, data, title, xlabel, ylabel, hue=None, hue_name=None):
+data = data.dropna(subset=[x])
 if data.empty:
 return
 plt.figure()
 if hue is not None:
 data = data.rename(columns={hue: hue_name})
-g = sns.countplot(data=data.dropna(subset=[x]), x=x, hue=hue_name)
+g = sns.countplot(data=data, x=x, hue=hue_name)
 g.set(xlabel=xlabel, ylabel=ylabel, title=title)
 sns.despine()
 self.page.savefig()
 plt.close()
 
 def __draw_stripplot(self, x, y, hue, hue_name, data, title, xlabel, ylabel):
+data = data.dropna(subset=[y])
 if data.empty:
 return
 plt.figure()
 data = data.rename(columns={hue: hue_name})
 xmin = data[x].min()
 xmax = data[x].max()
-data = data.dropna(subset=[y])
 g = sns.stripplot(data=data, x=x, y=y, hue=hue_name)
 g.set(title=title, xlabel=xlabel, ylabel=ylabel,
   xlim=(xmin - 0.03 * (xmax - xmin), xmax + 0.03 * (xmax - xmin)))





[tor-commits] [onionperf/develop] Avoid tracebacks when visualizing measurements.

2020-12-13 Thread karsten
commit fe6ff0874dfd9b18bf5eb71c2394431ca44874bc
Author: Karsten Loesing 
Date:   Tue Nov 24 12:17:21 2020 +0100

Avoid tracebacks when visualizing measurements.

Attempting to visualize analysis files containing only unsuccessful
measurements results in various tracebacks.

With this patch we check more carefully whether a data frame is empty
before adding a plot.

Another related change is that we always include
"time_to_{first,last}_byte" and "mbps" columns in the CSV output,
regardless of whether there are any non-null values in the
data. See also #40004 for a previous related change.

And we check whether a Tor stream identifier exists before retrieving
a Tor stream.

Finally, we only include TTFB/TTLB if the usecs value is non-zero.

Fixes #44012.
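The defensive pattern described above — keep the timing columns present in the frame even when every value is missing (so the CSV output stays stable), then drop unusable rows and skip the plot if nothing remains — can be sketched with pandas. The column and dict shapes here are hypothetical, modeled loosely on the analysis output, not the actual OnionPerf data model.

```python
import pandas as pd

def prepare_plot_data(streams, column):
    """Return rows usable for plotting `column`, or None if there are none.

    `streams` is a list of per-measurement dicts; unsuccessful measurements
    carry None for timing columns. Dropping NaNs *before* the `empty` check
    is what prevents handing an empty frame to the plotting code.
    """
    data = pd.DataFrame(streams)
    if column not in data.columns:
        data[column] = None  # column still appears in CSV output
    data = data.dropna(subset=[column])
    return None if data.empty else data
```

With only failed measurements the function returns None, so the caller can simply skip that plot instead of raising a traceback inside seaborn.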
---
 CHANGELOG.md   |  5 +
 onionperf/visualization.py | 37 +
 2 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2c1f5ca..2655e01 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,8 @@
+# Changes in version 0.9 - 2020-1?-??
+
+ - Avoid tracebacks when visualizing analysis files containing only
+   unsuccessful measurements. Fixes #44012.
+
 # Changes in version 0.8 - 2020-09-16
 
  - Add a new `onionperf filter` mode that takes an OnionPerf analysis
diff --git a/onionperf/visualization.py b/onionperf/visualization.py
index 2cd3161..75ec2b5 100644
--- a/onionperf/visualization.py
+++ b/onionperf/visualization.py
@@ -66,6 +66,7 @@ class TGenVisualization(Visualization):
 tgen_streams = analysis.get_tgen_streams(client)
 tgen_transfers = analysis.get_tgen_transfers(client)
 while tgen_streams or tgen_transfers:
+stream = {"time_to_first_byte": None, "time_to_last_byte": None, "error_code": None, "mbps": None}
 error_code = None
 source_port = None
 unix_ts_end = None
@@ -76,15 +77,15 @@ class TGenVisualization(Visualization):
 # that value with unit megabits per second.
 if tgen_streams:
 stream_id, stream_data = tgen_streams.popitem()
-stream = {"id": stream_id, "label": label,
-  "filesize_bytes": int(stream_data["stream_info"]["recvsize"]),
-  "error_code": None}
+stream["id"] = stream_id
+stream["label"] = label
+stream["filesize_bytes"] = int(stream_data["stream_info"]["recvsize"])
 stream["server"] = "onion" if ".onion:" in stream_data["transport_info"]["remote"] else "public"
 if "time_info" in stream_data:
 s = stream_data["time_info"]
-if "usecs-to-first-byte-recv" in s:
+if "usecs-to-first-byte-recv" in s and float(s["usecs-to-first-byte-recv"]) >= 0:
 stream["time_to_first_byte"] = float(s["usecs-to-first-byte-recv"])/100
-if "usecs-to-last-byte-recv" in s:
+if "usecs-to-last-byte-recv" in s and float(s["usecs-to-last-byte-recv"]) >= 0:
 stream["time_to_last_byte"] = float(s["usecs-to-last-byte-recv"])/100
 if "elapsed_seconds" in stream_data:
 s = stream_data["elapsed_seconds"]
@@ -100,9 +101,9 @@ class TGenVisualization(Visualization):
 stream["start"] = datetime.datetime.utcfromtimestamp(stream_data["unix_ts_start"])
 elif tgen_transfers:
-transfer_id, transfer_data = tgen_transfers.popitem()
-stream = {"id": transfer_id, "label": label,
-  "filesize_bytes": transfer_data["filesize_bytes"],
-  "error_code": None}
+stream["id"] = transfer_id
+stream["label"] = label
+stream["filesize_bytes"] = transfer_data["filesize_bytes

[tor-commits] [onionperf/develop] Correct issue number in change log.

2020-12-13 Thread karsten
commit 15ff2beed583768964c6c3e19ea48f5d4319286b
Author: Karsten Loesing 
Date:   Sun Dec 13 22:58:39 2020 +0100

Correct issue number in change log.
---
 CHANGELOG.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2655e01..5bce6f8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,7 +1,7 @@
 # Changes in version 0.9 - 2020-1?-??
 
  - Avoid tracebacks when visualizing analysis files containing only
-   unsuccessful measurements. Fixes #44012.
+   unsuccessful measurements. Fixes #40012.
 
 # Changes in version 0.8 - 2020-09-16
 



[tor-commits] [collector/master] Include microdescriptors when syncing.

2020-12-14 Thread karsten
commit a1b8ebb9545b148524c6e3db2a1601f228784905
Author: Karsten Loesing 
Date:   Wed Dec 9 22:01:21 2020 +0100

Include microdescriptors when syncing.
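The storage layout computed in `calculatePaths` below shards microdescriptors by digest: the first two hex characters of the SHA-256 digest become two extra directory levels, so no single directory accumulates millions of files. A small Python sketch of that layout (for illustration only; directory names are approximated from the constants visible in the diff, and the patch itself is Java):

```python
def microdesc_storage_path(digest_hex, year, month):
    """Sketch of the two-level digest sharding used for microdescriptor paths.

    Mirrors the shape of MicrodescriptorPersistence.calculatePaths in the
    diff below: .../<year>/<month>/micro/<d[0]>/<d[1]>/<digest>.
    """
    return "/".join(["relay-descriptors", "microdesc", year, month, "micro",
                     digest_hex[0:1], digest_hex[1:2], digest_hex])

print(microdesc_storage_path(
    "111d400b82aa4fffe8de9243aea9667a3e46daa5b5110604aff22b3101cd73a2",
    "2020", "12"))
```

Because hex digests are uniformly distributed, the two prefix levels spread files evenly over 256 buckets.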
---
 CHANGELOG.md   |  3 ++
 build.xml  |  2 +-
 .../persist/MicrodescriptorPersistence.java| 36 ++
 .../collector/relaydescs/ArchiveWriter.java| 11 +--
 .../metrics/collector/sync/SyncPersistence.java| 10 ++
 5 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2928937..22d3517 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,9 @@
   - Clean up descriptors written to the `out/` directory by deleting
 files that are older than seven weeks.
   - Correctly index files that are moved away and back.
+  - Include microdescriptors when syncing from another CollecTor
+instance.
+  - Update to metrics-lib 2.15.0.
 
 
 # Changes in version 1.16.1 - 2020-08-16
diff --git a/build.xml b/build.xml
index 5dcb29d..f2bda4e 100644
--- a/build.xml
+++ b/build.xml
@@ -12,7 +12,7 @@
   
   
   
-  
+  
   
 
   
diff --git a/src/main/java/org/torproject/metrics/collector/persist/MicrodescriptorPersistence.java b/src/main/java/org/torproject/metrics/collector/persist/MicrodescriptorPersistence.java
new file mode 100644
index 000..41929af
--- /dev/null
+++ b/src/main/java/org/torproject/metrics/collector/persist/MicrodescriptorPersistence.java
@@ -0,0 +1,36 @@
+/* Copyright 2020 The Tor Project
+ * See LICENSE for licensing information */
+
+package org.torproject.metrics.collector.persist;
+
+import org.torproject.descriptor.Microdescriptor;
+import org.torproject.metrics.collector.conf.Annotation;
+
+import java.nio.file.Paths;
+
+public class MicrodescriptorPersistence
+extends DescriptorPersistence<Microdescriptor> {
+
+  private static final String RELAY_DESCRIPTORS = "relay-descriptors";
+
+  public MicrodescriptorPersistence(Microdescriptor descriptor, long received,
+  String year, String month) {
+super(descriptor, Annotation.Microdescriptor.bytes());
+calculatePaths(received, year, month);
+  }
+
+  private void calculatePaths(long received, String year, String month) {
+String file = PersistenceUtils.dateTime(received);
+this.recentPath = Paths.get(
+RELAY_DESCRIPTORS, MICRODESCS, "micro",
+file + "-micro-" + year + "-" + month).toString();
+String digest = desc.getDigestSha256Hex();
+this.storagePath = Paths.get(
+RELAY_DESCRIPTORS,
+MICRODESC, year, month, "micro",
+digest.substring(0,1),
+digest.substring(1,2),
+digest).toString();
+  }
+}
+
diff --git a/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java b/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java
index 28472f8..5c58f23 100644
--- a/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java
+++ b/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java
@@ -7,6 +7,7 @@ import org.torproject.descriptor.BandwidthFile;
 import org.torproject.descriptor.Descriptor;
 import org.torproject.descriptor.DescriptorParser;
 import org.torproject.descriptor.DescriptorSourceFactory;
+import org.torproject.descriptor.Microdescriptor;
 import org.torproject.descriptor.RelayExtraInfoDescriptor;
 import org.torproject.descriptor.RelayNetworkStatusConsensus;
 import org.torproject.descriptor.RelayNetworkStatusVote;
@@ -109,6 +110,8 @@ public class ArchiveWriter extends CollecTorMain {
 this.mapPathDescriptors.put(
 "recent/relay-descriptors/microdescs/consensus-microdesc",
 RelayNetworkStatusConsensus.class);
+this.mapPathDescriptors.put("recent/relay-descriptors/microdescs/micro/",
+Microdescriptor.class);
 this.mapPathDescriptors.put("recent/relay-descriptors/server-descriptors",
 RelayServerDescriptor.class);
 this.mapPathDescriptors.put("recent/relay-descriptors/extra-infos",
@@ -801,15 +804,17 @@ public class ArchiveWriter extends CollecTorMain {
  * file written in the first call.  However, this method must be
  * called twice to store the same microdescriptor in two different
  * valid-after months. */
-SimpleDateFormat descriptorFormat = new SimpleDateFormat("yyyy/MM/");
+SimpleDateFormat descriptorFormat = new SimpleDateFormat("yyyy-MM");
+String[] yearMonth = descriptorFormat.format(validAfter).split("-");
 File tarballFile = Paths.get(this.outputDirectory, MICRODESC,
-descriptorFormat.format(validAfter), MICRO,
+yearMonth[0], yearMonth[1], MICRO,
 microdescriptorDigest.substring(0, 1),
 microdescriptorDigest.substring(1, 2),
 microdescriptorDigest).toFile();
 boolean tarballFileExistedBefore = tarballFile.exists(

[tor-commits] [collector/master] Add log statement for current memory usage.

2020-12-14 Thread karsten
commit bddc73ce6700a1498cbffb294de135949d02a41d
Author: Karsten Loesing 
Date:   Thu Dec 10 22:08:42 2020 +0100

Add log statement for current memory usage.
---
 .../org/torproject/metrics/collector/indexer/CreateIndexJson.java | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/main/java/org/torproject/metrics/collector/indexer/CreateIndexJson.java b/src/main/java/org/torproject/metrics/collector/indexer/CreateIndexJson.java
index 3d86947..887a3e1 100644
--- a/src/main/java/org/torproject/metrics/collector/indexer/CreateIndexJson.java
+++ b/src/main/java/org/torproject/metrics/collector/indexer/CreateIndexJson.java
@@ -245,6 +245,10 @@ public class CreateIndexJson extends CollecTorMain {
   logger.info("Writing uncompressed and compressed index.json files to "
   + "disk.");
   this.writeIndex(this.index, now);
+  Runtime rt = Runtime.getRuntime();
+  logger.info("Current memory usage is: free = {} B, total = {} B, "
+  + "max = {} B.",
+  rt.freeMemory(), rt.totalMemory(), rt.maxMemory());
   logger.info("Pausing until next index update run.");
 } catch (IOException e) {
  logger.error("I/O error while updating index.json files. Trying again in "





[tor-commits] [collector/master] Only clean up a single time during sync.

2020-12-14 Thread karsten
commit 5e5a7d1dfa30ffaf0a27ea4e246de5bef30d1cbc
Author: Karsten Loesing 
Date:   Fri Dec 11 16:55:58 2020 +0100

Only clean up a single time during sync.
---
 src/main/java/org/torproject/metrics/collector/sync/SyncManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/main/java/org/torproject/metrics/collector/sync/SyncManager.java b/src/main/java/org/torproject/metrics/collector/sync/SyncManager.java
index 1fa1347..8e9c7d3 100644
--- a/src/main/java/org/torproject/metrics/collector/sync/SyncManager.java
+++ b/src/main/java/org/torproject/metrics/collector/sync/SyncManager.java
@@ -108,11 +108,11 @@ public class SyncManager {
 
   persist.storeDesc(desc, collectionDate.getTime());
 }
-persist.cleanDirectory();
 descriptorReader.saveHistoryFile(historyFile);
   }
   logger.info("Done merging {} from {}.", marker, source.getHost());
 }
+persist.cleanDirectory();
   }
 
 }



[tor-commits] [collector/master] Make sure that the DirectoryStream gets closed.

2020-12-14 Thread karsten
commit 83850daa9de893cf2d7f846e67191a469cc64900
Author: Karsten Loesing 
Date:   Sat Dec 5 22:22:22 2020 +0100

Make sure that the DirectoryStream gets closed.

As the docs say, "If timely disposal of file system resources is
required, the try-with-resources construct should be used to ensure
that the stream's close method is invoked after the stream operations
are completed."

Turns out that without closing the stream, the JVM runs out of memory
pretty quickly. Doing this is not optional but mandatory.
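The fix above can be reduced to a self-contained sketch (class and directory names here are illustrative, not CollecTor code). `Files.list` opens a `DirectoryStream` internally, so the returned `Stream<Path>` must be closed, which try-with-resources guarantees even when an exception is thrown:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class ListDirSafely {

    /** Deletes dir if it has no entries; the listing stream is closed
     * either way, so repeated calls do not leak file handles. */
    static boolean deleteIfEmpty(Path dir) throws IOException {
        // try-with-resources runs close() even if findFirst() or
        // delete() throws, releasing the underlying DirectoryStream.
        try (Stream<Path> files = Files.list(dir)) {
            if (!files.findFirst().isPresent()) {
                Files.delete(dir);
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path empty = Files.createTempDirectory("collector-empty");
        Path nonEmpty = Files.createTempDirectory("collector-nonempty");
        Files.createFile(nonEmpty.resolve("descriptor"));
        System.out.println(deleteIfEmpty(empty));     // true: directory removed
        System.out.println(deleteIfEmpty(nonEmpty));  // false: directory kept
    }
}
```

Without the try-with-resources block, each call would leave one directory handle open, which is how the JVM ran out of memory here.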
---
 .../torproject/metrics/collector/persist/PersistenceUtils.java | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/src/main/java/org/torproject/metrics/collector/persist/PersistenceUtils.java b/src/main/java/org/torproject/metrics/collector/persist/PersistenceUtils.java
index 2b7621d..1dc36d6 100644
--- a/src/main/java/org/torproject/metrics/collector/persist/PersistenceUtils.java
+++ b/src/main/java/org/torproject/metrics/collector/persist/PersistenceUtils.java
@@ -20,6 +20,7 @@ import java.nio.file.attribute.BasicFileAttributes;
 import java.text.SimpleDateFormat;
 import java.time.Instant;
 import java.util.Date;
+import java.util.stream.Stream;
 
 public class PersistenceUtils {
 
@@ -132,9 +133,12 @@ public class PersistenceUtils {
   @Override
   public FileVisitResult postVisitDirectory(Path dir, IOException exc)
   throws IOException {
-if (!pathToClean.equals(dir)
-&& !Files.list(dir).findFirst().isPresent()) {
-  Files.delete(dir);
+if (!pathToClean.equals(dir)) {
+  try (Stream<Path> files = Files.list(dir)) {
+if (!files.findFirst().isPresent()) {
+  Files.delete(dir);
+}
+  }
 }
 return FileVisitResult.CONTINUE;
   }





[tor-commits] [collector/master] Fix a few paths for cleaning up.

2020-12-14 Thread karsten
commit 4879cdc219c797e3e0a54226978462fe67d7cf3b
Author: Karsten Loesing 
Date:   Fri Dec 11 16:55:37 2020 +0100

Fix a few paths for cleaning up.
---
 .../metrics/collector/bridgedb/BridgedbMetricsProcessor.java| 6 --
 .../metrics/collector/bridgedescs/SanitizedBridgesWriter.java   | 6 --
 .../metrics/collector/snowflake/SnowflakeStatsDownloader.java   | 6 --
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/src/main/java/org/torproject/metrics/collector/bridgedb/BridgedbMetricsProcessor.java b/src/main/java/org/torproject/metrics/collector/bridgedb/BridgedbMetricsProcessor.java
index d05aa9c..94f697e 100644
--- a/src/main/java/org/torproject/metrics/collector/bridgedb/BridgedbMetricsProcessor.java
+++ b/src/main/java/org/torproject/metrics/collector/bridgedb/BridgedbMetricsProcessor.java
@@ -178,9 +178,11 @@ public class BridgedbMetricsProcessor extends CollecTorMain {
* in the last three days (seven weeks).
*/
   private void cleanUpDirectories() {
-PersistenceUtils.cleanDirectory(Paths.get(this.recentPathName),
+PersistenceUtils.cleanDirectory(
+Paths.get(this.recentPathName).resolve("bridgedb-metrics"),
 Instant.now().minus(3, ChronoUnit.DAYS).toEpochMilli());
-PersistenceUtils.cleanDirectory(Paths.get(this.outputPathName),
+PersistenceUtils.cleanDirectory(
+Paths.get(this.outputPathName).resolve("bridgedb-metrics"),
 Instant.now().minus(49, ChronoUnit.DAYS).toEpochMilli());
   }
 }
diff --git a/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java b/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java
index 7619453..ffafdea 100644
--- a/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java
+++ b/src/main/java/org/torproject/metrics/collector/bridgedescs/SanitizedBridgesWriter.java
@@ -449,9 +449,11 @@ public class SanitizedBridgesWriter extends CollecTorMain {
* in the last three days (seven weeks), and remove the .tmp extension from
* newly written files. */
   private void cleanUpDirectories() {
-PersistenceUtils.cleanDirectory(this.recentDirectory,
+PersistenceUtils.cleanDirectory(
+this.recentDirectory.resolve("bridge-descriptors"),
 Instant.now().minus(3, ChronoUnit.DAYS).toEpochMilli());
-PersistenceUtils.cleanDirectory(this.outputDirectory,
+PersistenceUtils.cleanDirectory(
+this.outputDirectory.resolve("bridge-descriptors"),
 Instant.now().minus(49, ChronoUnit.DAYS).toEpochMilli());
   }
 }
diff --git a/src/main/java/org/torproject/metrics/collector/snowflake/SnowflakeStatsDownloader.java b/src/main/java/org/torproject/metrics/collector/snowflake/SnowflakeStatsDownloader.java
index 93388d5..c9b6b03 100644
--- a/src/main/java/org/torproject/metrics/collector/snowflake/SnowflakeStatsDownloader.java
+++ b/src/main/java/org/torproject/metrics/collector/snowflake/SnowflakeStatsDownloader.java
@@ -156,9 +156,11 @@ public class SnowflakeStatsDownloader extends CollecTorMain {
   /** Delete all files from the rsync (out) directory that have not been
* modified in the last three days (seven weeks). */
   private void cleanUpDirectories() {
-PersistenceUtils.cleanDirectory(Paths.get(this.recentPathName),
+PersistenceUtils.cleanDirectory(
+Paths.get(this.recentPathName, "snowflakes"),
 Instant.now().minus(3, ChronoUnit.DAYS).toEpochMilli());
-PersistenceUtils.cleanDirectory(Paths.get(this.outputPathName),
+PersistenceUtils.cleanDirectory(
+Paths.get(this.outputPathName, "snowflakes"),
 Instant.now().minus(49, ChronoUnit.DAYS).toEpochMilli());
   }
 }





[tor-commits] [collector/master] Include certs when syncing from another instance.

2020-12-14 Thread karsten
commit 9834cec8c0e88c1e810434f543f5d8c4d5cb2de8
Author: Karsten Loesing 
Date:   Mon Dec 14 09:35:35 2020 +0100

Include certs when syncing from another instance.
---
 CHANGELOG.md   |  4 ++--
 .../DirectoryKeyCertificatePersistence.java| 27 ++
 .../collector/relaydescs/ArchiveWriter.java|  7 +-
 .../metrics/collector/sync/SyncPersistence.java|  6 +
 4 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 22d3517..f56f74d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,8 +4,8 @@
   - Clean up descriptors written to the `out/` directory by deleting
 files that are older than seven weeks.
   - Correctly index files that are moved away and back.
-  - Include microdescriptors when syncing from another CollecTor
-instance.
+  - Include microdescriptors and certs when syncing from another
+CollecTor instance.
   - Update to metrics-lib 2.15.0.
 
 
diff --git a/src/main/java/org/torproject/metrics/collector/persist/DirectoryKeyCertificatePersistence.java b/src/main/java/org/torproject/metrics/collector/persist/DirectoryKeyCertificatePersistence.java
new file mode 100644
index 0000000..39f88f3
--- /dev/null
+++ b/src/main/java/org/torproject/metrics/collector/persist/DirectoryKeyCertificatePersistence.java
@@ -0,0 +1,27 @@
+/* Copyright 2020 The Tor Project
+ * See LICENSE for licensing information */
+
+package org.torproject.metrics.collector.persist;
+
+import org.torproject.descriptor.DirectoryKeyCertificate;
+import org.torproject.metrics.collector.conf.Annotation;
+
+import java.nio.file.Paths;
+
+public class DirectoryKeyCertificatePersistence
+extends DescriptorPersistence {
+
+  public DirectoryKeyCertificatePersistence(
+  DirectoryKeyCertificate descriptor) {
+super(descriptor, Annotation.Cert.bytes());
+this.calculatePaths();
+  }
+
+  private void calculatePaths() {
+String fileName = this.desc.getFingerprint().toUpperCase() + "-"
++ PersistenceUtils.dateTime(this.desc.getDirKeyPublishedMillis());
+this.recentPath = Paths.get(RELAYDESCS, "certs", fileName).toString();
+this.storagePath = this.recentPath;
+  }
+}
+
diff --git a/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java b/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java
index 5c58f23..616d7dd 100644
--- a/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java
+++ b/src/main/java/org/torproject/metrics/collector/relaydescs/ArchiveWriter.java
@@ -7,6 +7,7 @@ import org.torproject.descriptor.BandwidthFile;
 import org.torproject.descriptor.Descriptor;
 import org.torproject.descriptor.DescriptorParser;
 import org.torproject.descriptor.DescriptorSourceFactory;
+import org.torproject.descriptor.DirectoryKeyCertificate;
 import org.torproject.descriptor.Microdescriptor;
 import org.torproject.descriptor.RelayExtraInfoDescriptor;
 import org.torproject.descriptor.RelayNetworkStatusConsensus;
@@ -105,6 +106,8 @@ public class ArchiveWriter extends CollecTorMain {
 super(config);
 this.mapPathDescriptors.put("recent/relay-descriptors/votes",
 RelayNetworkStatusVote.class);
+this.mapPathDescriptors.put("recent/relay-descriptors/certs",
+DirectoryKeyCertificate.class);
 this.mapPathDescriptors.put("recent/relay-descriptors/consensuses",
 RelayNetworkStatusConsensus.class);
 this.mapPathDescriptors.put(
@@ -738,7 +741,9 @@ public class ArchiveWriter extends CollecTorMain {
"yyyy-MM-dd-HH-mm-ss");
 File tarballFile = Paths.get(this.outputDirectory, "certs",
 fingerprint + "-" + printFormat.format(new Date(published))).toFile();
-File[] outputFiles = new File[] { tarballFile };
+File rsyncFile = Paths.get(recentPathName, RELAY_DESCRIPTORS, "certs",
+tarballFile.getName()).toFile();
+File[] outputFiles = new File[] { tarballFile, rsyncFile };
 if (this.store(Annotation.Cert.bytes(), data, outputFiles, null)) {
   this.storedCertsCounter++;
 }
diff --git a/src/main/java/org/torproject/metrics/collector/sync/SyncPersistence.java b/src/main/java/org/torproject/metrics/collector/sync/SyncPersistence.java
index e8f780e..ba06bd1 100644
--- a/src/main/java/org/torproject/metrics/collector/sync/SyncPersistence.java
+++ b/src/main/java/org/torproject/metrics/collector/sync/SyncPersistence.java
@@ -10,6 +10,7 @@ import org.torproject.descriptor.BridgePoolAssignment;
 import org.torproject.descriptor.BridgeServerDescriptor;
 import org.torproject.descriptor.BridgedbMetrics;
 import org.torproject.descriptor.Descriptor;
+import org.torproject.descriptor.DirectoryKeyCertificate;
 import org.torproject.descriptor.ExitList;
import org.torproject.descriptor.Microdescriptor;

[tor-commits] [collector/master] Include OnionPerf analysis files when syncing.

2020-12-14 Thread karsten
commit 06c0d78e4a73042c0e7fb6052f079237b8d7ff01
Author: Karsten Loesing 
Date:   Mon Dec 14 23:23:54 2020 +0100

Include OnionPerf analysis files when syncing.
---
 CHANGELOG.md   |  4 +-
 .../collector/onionperf/OnionPerfDownloader.java   |  1 +
 .../collector/persist/OnionPerfPersistence.java| 60 +++---
 src/main/resources/collector.properties|  2 +-
 4 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index f56f74d..7956e55 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,8 +4,8 @@
   - Clean up descriptors written to the `out/` directory by deleting
 files that are older than seven weeks.
   - Correctly index files that are moved away and back.
-  - Include microdescriptors and certs when syncing from another
-CollecTor instance.
+  - Include microdescriptors, certs, and OnionPerf analysis files when
+syncing from another CollecTor instance.
   - Update to metrics-lib 2.15.0.
 
 
diff --git a/src/main/java/org/torproject/metrics/collector/onionperf/OnionPerfDownloader.java b/src/main/java/org/torproject/metrics/collector/onionperf/OnionPerfDownloader.java
index f90bdfe..352d24a 100644
--- a/src/main/java/org/torproject/metrics/collector/onionperf/OnionPerfDownloader.java
+++ b/src/main/java/org/torproject/metrics/collector/onionperf/OnionPerfDownloader.java
@@ -57,6 +57,7 @@ public class OnionPerfDownloader extends CollecTorMain {
   public OnionPerfDownloader(Configuration config) {
 super(config);
 this.mapPathDescriptors.put("recent/torperf", TorperfResult.class);
+this.mapPathDescriptors.put("recent/onionperf", TorperfResult.class);
   }
 
   /** File containing the download history, which is necessary, because
diff --git a/src/main/java/org/torproject/metrics/collector/persist/OnionPerfPersistence.java b/src/main/java/org/torproject/metrics/collector/persist/OnionPerfPersistence.java
index 7ed16a2..8975d80 100644
--- a/src/main/java/org/torproject/metrics/collector/persist/OnionPerfPersistence.java
+++ b/src/main/java/org/torproject/metrics/collector/persist/OnionPerfPersistence.java
@@ -6,12 +6,21 @@ package org.torproject.metrics.collector.persist;
 import org.torproject.descriptor.TorperfResult;
 import org.torproject.metrics.collector.conf.Annotation;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.nio.file.StandardOpenOption;
 
 public class OnionPerfPersistence
 extends DescriptorPersistence {
 
+  private static final Logger logger
+  = LoggerFactory.getLogger(OnionPerfPersistence.class);
+
   private static final String ONIONPERF = "torperf";
 
   public OnionPerfPersistence(TorperfResult desc) {
@@ -32,18 +41,55 @@ public class OnionPerfPersistence
 name).toString();
   }
 
-  /** OnionPerf default storage appends. */
+  /** If the original descriptor file was a .tpf file, append the parsed Torperf
+   * result to the destination .tpf file, but if it was a .json.xz file, just
+   * copy over the entire file, unless it already exists. */
   @Override
-  public boolean storeOut(String outRoot) {
-return super.storeOut(outRoot, StandardOpenOption.APPEND);
+  public boolean storeOut(String outRoot, StandardOpenOption option) {
+if (desc.getDescriptorFile().getName().endsWith(".tpf")) {
+  return super.storeOut(outRoot, StandardOpenOption.APPEND);
+} else {
+  String fileName = desc.getDescriptorFile().getName();
+  String[] dateParts = fileName.split("\\.")[0].split("-");
+  return this.copyIfNotExists(
+  Paths.get(outRoot,
+  "onionperf",
+  dateParts[0], // year
+  dateParts[1], // month
+  dateParts[2], // day
+  fileName));
+}
   }
 
-  /** OnionPerf default storage appends. */
+  /** If the original descriptor file was a .tpf file, append the parsed Torperf
+   * result to the destination .tpf file, but if it was a .json.xz file, just
+   * copy over the entire file, unless it already exists. */
   @Override
-  public boolean storeAll(String recentRoot, String outRoot) {
-return super.storeAll(recentRoot, outRoot, StandardOpenOption.APPEND,
-StandardOpenOption.APPEND);
+  public boolean storeRecent(String recentRoot, StandardOpenOption option) {
+if (desc.getDescriptorFile().getName().endsWith(".tpf")) {
+  return super.storeRecent(recentRoot, StandardOpenOption.APPEND);
+} else {
+  String fileName = desc.getDescriptorFile().getName();
+  return this.copyIfNotExists(
+  Paths.get(recentRoot,
+  "onionperf",
+  fileName));
+}
   }
 
+  private boolean copyIfNotExists(Path destinationFile) {
+if (Files.exists(destinationFile)) {

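The diff is truncated above, but the branching it describes — append parsed .tpf results, copy .json.xz analysis files only if the destination does not yet exist — can be sketched as follows. Method and path names here are illustrative assumptions, not the actual `OnionPerfPersistence` API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OnionPerfStoreSketch {

    /** Appends .tpf payloads to a shared destination file; copies any
     * other source file once and skips it on later sync runs. */
    static String store(Path source, Path destination, byte[] data)
            throws IOException {
        Files.createDirectories(destination.getParent());
        if (source.getFileName().toString().endsWith(".tpf")) {
            Files.write(destination, data,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            return "appended";
        }
        if (Files.exists(destination)) {
            return "skipped";  // already synced earlier, keep first copy
        }
        Files.copy(source, destination);
        return "copied";
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("onionperf");
        Path json = Files.createFile(
                tmp.resolve("2020-12-14.onionperf.analysis.json.xz"));
        Path dest = tmp.resolve("out").resolve(json.getFileName().toString());
        System.out.println(store(json, dest, new byte[0]));  // copied
        System.out.println(store(json, dest, new byte[0]));  // skipped
    }
}
```

The copy-once behavior makes syncing idempotent: re-running against the same upstream never overwrites an analysis file that is already present.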
[tor-commits] [metrics-web/master] Update news.json.

2020-12-17 Thread karsten
commit 5df9d4c9115b4d289ee1cb428f0e433549b88718
Author: Karsten Loesing 
Date:   Thu Dec 17 15:31:35 2020 +0100

Update news.json.
---
 src/main/resources/web/json/news.json | 119 +-
 1 file changed, 118 insertions(+), 1 deletion(-)

diff --git a/src/main/resources/web/json/news.json b/src/main/resources/web/json/news.json
index 901b00c..b788fe5 100644
--- a/src/main/resources/web/json/news.json
+++ b/src/main/resources/web/json/news.json
@@ -1,4 +1,56 @@
 [ {
+  "start" : "2020-12-05",
+  "end" : "2020-12-09",
+  "protocols" : [ "meek", "moat" ],
+  "short_description" : "An update to an Azure CDN edge server TLS certificate causes an outage of meek and Moat",
+  "description" : "An update to an Azure CDN edge server TLS certificate causes an outage of meek and Moat. It is fixed by the release of Tor Browser 10.0.6, which updates the built-in public key pinning of obfs4proxy.",
+  "links" : [ {
+"label" : "ticket",
+"target" : "https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/meek/40001"
+  }, {
+"label" : "blog post",
+"target" : "https://blog.torproject.org/new-release-tor-browser-1006#comments"
+  }, {
+"label" : "meek graph",
+"target" : "https://metrics.torproject.org/userstats-bridge-transport.html?start=2020-11-01&end=2020-12-31&transport=meek"
+  }, {
+"label" : "BridgeDB graph",
+"target" : "https://metrics.torproject.org/bridgedb-distributor.html?start=2020-11-01&end=2020-12-31"
+  }, {
+"label" : "certificate transparency",
+"target" : "https://crt.sh/?q=cd29bc427d93bc4453d129a294cfd5e082eacbf3fe9a19b7718a50422b6e6cc5"
+  } ]
+}, {
+  "start" : "2020-11-08",
+  "end" : "2020-11-09",
+  "protocols" : [ "snowflake" ],
+  "short_description" : "Outage of the snowflake bridge",
+  "description" : "Outage of the snowflake bridge",
+  "links" : [ {
+"label" : "ticket",
+"target" : "https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/40020"
+  } ]
+}, {
+  "start" : "2020-10-27",
+  "ongoing" : true,
+  "places" : [ "tz" ],
+  "protocols" : [ "", "obfs4" ],
+  "short_description" : "Blocking of Tor and default obfs4 bridges during elections in Tanzania.",
+  "description" : "Blocking of Tor and default obfs4 bridges during elections in Tanzania.",
+  "links" : [ {
+"label" : "OONI report",
+"target" : "https://ooni.org/post/2020-tanzania-blocks-social-media-tor-election-day/#blocking-of-tor"
+  } ]
+}, {
+  "start" : "2020-10-05",
+  "protocols" : [ "snowflake" ],
+  "short_description" : "Deployed version 0.4.2 of the Snowflake WebExtension, with better counting of clients.",
+  "description" : "Deployed version 0.4.2 of the Snowflake WebExtension, with better counting of clients.",
+  "links" : [ {
+"label" : "comment",
+"target" : "https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/33157#note_2710701"
+  } ]
+}, {
   "start" : "2020-08-31",
   "protocols" : [ "obfs4" ],
   "short_description" : "Shutdown of default obfs4 bridge frosty.",
@@ -16,6 +68,19 @@
 "label" : "comment",
"target" : "https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/13#note_2705805"
   } ]
+}, {
+  "start" : "2020-08-09",
+  "end" : "2020-08-12",
+  "places" : [ "by" ],
+  "short_description" : "Internet shutdowns in Belarus, during protests following a presidential election.",
+  "description" : "Internet shutdowns in Belarus, during protests following a presidential election.",
+  "links" : [ {
+"label" : "report",
+"target" : "https://ooni.org/post/2020-belarus-internet-outages-website-censorship/#internet-outages-amid-2020-belarusian-presidential-election"
+  }, {
+"label" : "thread",
+"target" : "https://github.com/net4people/bbs/issues/46";
+  } ]
 }, {
   "start" : "2020-07-06"

[tor-commits] [metrics-web/master] Update instructions when javascript is disabled.

2020-12-17 Thread karsten
commit 07f70b1ce967585c2307ea1170f85a9274b04569
Author: Karsten Loesing 
Date:   Thu Dec 17 15:29:11 2020 +0100

Update instructions when javascript is disabled.

Fixes #31714.
---
 src/main/resources/web/jsps/rs.jsp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/main/resources/web/jsps/rs.jsp b/src/main/resources/web/jsps/rs.jsp
index ec635f7..281f932 100644
--- a/src/main/resources/web/jsps/rs.jsp
+++ b/src/main/resources/web/jsps/rs.jsp
@@ -29,7 +29,7 @@
 
   
   JavaScript required
-  Please enable JavaScript to use this service. If you are using Tor Browser on High Security mode, it is possible to enable JavaScript to run only on this page. Click the NoScript icon on your address bar and select "Temporarily allow all on this page". Relay Search only uses JavaScript resources that are hosted by the Tor Metrics team.
+  Please enable JavaScript to use this service. If you are using Tor Browser on Safest mode, you'll have to switch to Safer or Standard mode. Relay Search only uses JavaScript resources that are hosted by the Tor Metrics team.
 
   
  





[tor-commits] [metrics-web/master] Update to metrics-lib 2.15.0.

2020-12-17 Thread karsten
commit a07829a30bb85864f46f8668882689d16382ad6a
Author: Karsten Loesing 
Date:   Thu Dec 17 15:27:02 2020 +0100

Update to metrics-lib 2.15.0.
---
 build.xml   | 2 +-
 src/submods/metrics-lib | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/build.xml b/build.xml
index 0d82a81..8cfc81a 100644
--- a/build.xml
+++ b/build.xml
@@ -10,7 +10,7 @@
   
   
   
-  
+  
   
   


[tor-commits] [metrics-lib/master] Parse new NAT-based Snowflake lines.

2020-12-18 Thread karsten
commit 8976cdd9be1bb70d3457bf2551971e12ee253ce1
Author: Karsten Loesing 
Date:   Fri Dec 18 12:02:55 2020 +0100

Parse new NAT-based Snowflake lines.

Implements #40002.
---
 CHANGELOG.md   |  5 +-
 .../org/torproject/descriptor/SnowflakeStats.java  | 54 
 .../java/org/torproject/descriptor/impl/Key.java   |  5 ++
 .../descriptor/impl/SnowflakeStatsImpl.java| 75 ++
 .../descriptor/impl/SnowflakeStatsImplTest.java| 50 +++
 5 files changed, 188 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index ac865a9..68ad2a2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,7 @@
-# Changes in version 2.??.? - 2020-??-??
+# Changes in version 2.16.0 - 2020-??-??
+
+ * Medium changes
+   - Parse new NAT-based Snowflake lines.
 
 
 # Changes in version 2.15.0 - 2020-12-11
diff --git a/src/main/java/org/torproject/descriptor/SnowflakeStats.java b/src/main/java/org/torproject/descriptor/SnowflakeStats.java
index 2fe78df..967d061 100644
--- a/src/main/java/org/torproject/descriptor/SnowflakeStats.java
+++ b/src/main/java/org/torproject/descriptor/SnowflakeStats.java
@@ -103,6 +103,30 @@ public interface SnowflakeStats extends Descriptor {
*/
  Optional<Long> clientDeniedCount();
 
+  /**
+   * Return a count of the number of times a client with a restricted or unknown
+   * NAT type has requested a proxy from the broker but no proxies were
+   * available, rounded up to the nearest multiple of 8.
+   *
+   * @return Count of the number of times a client with a restricted or unknown
+   * NAT type has requested a proxy from the broker but no proxies were
+   * available, rounded up to the nearest multiple of 8.
+   * @since 2.16.0
+   */
+  Optional<Long> clientRestrictedDeniedCount();
+
+  /**
+   * Return a count of the number of times a client with an unrestricted NAT
+   * type has requested a proxy from the broker but no proxies were available,
+   * rounded up to the nearest multiple of 8.
+   *
+   * @return Count of the number of times a client with an unrestricted NAT type
+   * has requested a proxy from the broker but no proxies were available,
+   * rounded up to the nearest multiple of 8.
+   * @since 2.16.0
+   */
+  Optional<Long> clientUnrestrictedDeniedCount();
+
   /**
* Return a count of the number of times a client successfully received a
* proxy from the broker, rounded up to the nearest multiple of 8.
@@ -112,5 +136,35 @@ public interface SnowflakeStats extends Descriptor {
* @since 2.7.0
*/
  Optional<Long> clientSnowflakeMatchCount();
+
+  /**
+   * Return a count of the total number of unique IP addresses of snowflake
+   * proxies that have a restricted NAT type.
+   *
+   * @return Count of the total number of unique IP addresses of snowflake
+   * proxies that have a restricted NAT type.
+   * @since 2.16.0
+   */
+  Optional<Long> snowflakeIpsNatRestricted();
+
+  /**
+   * Return a count of the total number of unique IP addresses of snowflake
+   * proxies that have an unrestricted NAT type.
+   *
+   * @return Count of the total number of unique IP addresses of snowflake
+   * proxies that have an unrestricted NAT type.
+   * @since 2.16.0
+   */
+  Optional<Long> snowflakeIpsNatUnrestricted();
+
+  /**
+   * Return a count of the total number of unique IP addresses of snowflake
+   * proxies that have an unknown NAT type.
+   *
+   * @return Count of the total number of unique IP addresses of snowflake
+   * proxies that have an unknown NAT type.
+   * @since 2.16.0
+   */
+  Optional<Long> snowflakeIpsNatUnknown();
 }
 
diff --git a/src/main/java/org/torproject/descriptor/impl/Key.java b/src/main/java/org/torproject/descriptor/impl/Key.java
index b02d96e..410cef6 100644
--- a/src/main/java/org/torproject/descriptor/impl/Key.java
+++ b/src/main/java/org/torproject/descriptor/impl/Key.java
@@ -36,7 +36,9 @@ public enum Key {
   CELL_STATS_END("cell-stats-end"),
   CELL_TIME_IN_QUEUE("cell-time-in-queue"),
   CLIENT_DENIED_COUNT("client-denied-count"),
+  CLIENT_RESTRICTED_DENIED_COUNT("client-restricted-denied-count"),
   CLIENT_SNOWFLAKE_MATCH_COUNT("client-snowflake-match-count"),
+  CLIENT_UNRESTRICTED_DENIED_COUNT("client-unrestricted-denied-count"),
   CLIENT_VERSIONS("client-versions"),
   CONN_BI_DIRECT("conn-bi-direct"),
   CONSENSUS_METHOD("consensus-method"),
@@ -149,6 +151,9 @@ public enum Key {
   SNOWFLAKE_IDLE_COUNT("snowflake-idle-count"),
   SNOWFLAKE_IPS("snowflake-ips"),
   SNOWFLAKE_IPS_BADGE("snowflake-ips-badge"),
+  SNOWFLAKE_IPS_NAT_RESTRICTED("snowflake-ips-nat-restricted"),
+  SNOWFLAKE_IPS_NAT_UNKNOWN("snowflake-ips-nat-unknown"),
+  SNOWFLAKE_IPS_NAT_UNRESTRICTED("snowflake-ips-nat-unrestricted"),
  SNOWFLAKE_IPS_STANDALONE("snowflake-ips-standalone"),
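Several counts in this interface are documented as "rounded up to the nearest multiple of 8". The rounding itself is a one-liner; this helper is hypothetical (metrics-lib documents the property but does not expose such a method):

```java
public class RoundToMultipleOf8 {

    /** Rounds a non-negative count up to the nearest multiple of 8,
     * the bucketing applied to the client-*-count lines above. */
    static long roundUpTo8(long count) {
        return (count + 7) / 8 * 8;
    }

    public static void main(String[] args) {
        System.out.println(roundUpTo8(0));   // 0
        System.out.println(roundUpTo8(1));   // 8
        System.out.println(roundUpTo8(8));   // 8
        System.out.println(roundUpTo8(13));  // 16
    }
}
```

Rounding to coarse buckets before publication keeps individual clients from being identifiable in the published statistics.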

[tor-commits] [bridgedb/master] Fix http utf8 issue: Add a content-type header to the HTML.

2011-04-10 Thread karsten
commit c69bc14f88e8b903bcf90538e8e5fac147805748
Author: Christian Fromme 
Date:   Sat Apr 9 12:03:41 2011 +0200

Fix http utf8 issue: Add a content-type header to the HTML.
---
 lib/bridgedb/Server.py |5 -
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/lib/bridgedb/Server.py b/lib/bridgedb/Server.py
index 7b7ff55..2becee8 100644
--- a/lib/bridgedb/Server.py
+++ b/lib/bridgedb/Server.py
@@ -100,7 +100,10 @@ class WebResource(twisted.web.resource.Resource):
 + "".join(("%s"%d for d in self.domains)) + ""
 else:
email_domain_list = "" + t.gettext(I18n.BRIDGEDB_TEXT[8]) + ""
-html_msg = "" \
+html_msg = ""\
+   + "" \
+   + "" \
+ "" + t.gettext(I18n.BRIDGEDB_TEXT[0]) \
+ "" \
+ "%s" \



[tor-commits] [bridgedb/master] Add new translation files for cs, es, pl_PL, sl_SI and zh_CN

2011-04-10 Thread karsten
commit 0512c91c5daada79c3e3630c02c7aec86df11e3b
Author: Christian Fromme 
Date:   Sat Apr 9 20:34:21 2011 +0200

Add new translation files for cs, es, pl_PL, sl_SI and zh_CN
---
 i18n/cs/bridgedb.po|   90 ++
 i18n/es/bridgedb.po|   92 
 i18n/pl_PL/bridgedb.po |   89 ++
 i18n/sl_SI/bridgedb.po |   90 ++
 i18n/zh_CN/bridgedb.po |   77 
 5 files changed, 438 insertions(+), 0 deletions(-)

diff --git a/i18n/cs/bridgedb.po b/i18n/cs/bridgedb.po
new file mode 100644
index 000..b4ee341
--- /dev/null
+++ b/i18n/cs/bridgedb.po
@@ -0,0 +1,90 @@
+# SOME DESCRIPTIVE TITLE.
+# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
+# This file is distributed under the same license as the PACKAGE package.
+# Christian Fromme , 2010
+# 
+msgid ""
+msgstr ""
+"Project-Id-Version: The Tor Project\n"
+"Report-Msgid-Bugs-To: https://trac.torproject.org/projects/tor\n";
+"POT-Creation-Date: 2011-01-01 07:48-0800\n"
+"PO-Revision-Date: 2011-03-22 16:45+\n"
+"Last-Translator: Elisa \n"
+"Language-Team: LANGUAGE \n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Language: cs\n"
+"Plural-Forms: nplurals=3; plural=(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2\n"
+
+#: lib/bridgedb/I18n.py:21
+msgid "Here are your bridge relays: "
+msgstr "Tady jsou vaše přemosťující spojení:"
+
+#: lib/bridgedb/I18n.py:23
+msgid ""
+"Bridge relays (or \"bridges\" for short) are Tor relays that aren't listed\n"
+"in the main directory. Since there is no complete public list of them,\n"
+"even if your ISP is filtering connections to all the known Tor relays,\n"
+"they probably won't be able to block all the bridges."
+msgstr ""
"Přemosťující spojení (nebo zkráceně \"mosty\") jsou Tor síťové uzly, které nejsou zaznamenány\n"
+"v hlavním adresáři Tor uzlů. Protože neexistuje veřejný seznam těchto 
uzlů,\n"
"proto ani váš internet poskytovatel, který by filtroval známé Tor uzly,\n"
+"nebude pravděpodobně schopen blokovat všechny Tor mosty."
+
+#: lib/bridgedb/I18n.py:28
+msgid ""
+"To use the above lines, go to Vidalia's Network settings page, and click\n"
+"\"My ISP blocks connections to the Tor network\". Then add each bridge\n"
+"address one at a time."
+msgstr ""
"Aby jste použít výš zmiňované, jděte do síťového nastavení Vidalia a klikněte na\n"
+"\"Můj ISP blokuje připojení do Tor sítě\". Pak přidejte adresu 
každého\n"
+"mostu, po jednom..."
+
+#: lib/bridgedb/I18n.py:32
+msgid ""
+"Configuring more than one bridge address will make your Tor connection\n"
+"more stable, in case some of the bridges become unreachable."
+msgstr ""
+"Nastavením více než jednoho mostu udělá vaše Tor připojení\n"
+"více stabilní, v případě, že by se staly některé mosty nedostupné."
+
+#: lib/bridgedb/I18n.py:35
+msgid ""
+"Another way to find public bridge addresses is to send mail to\n"
+"brid...@torproject.org with the line \"get bridges\" by itself in the body\n"
+"of the mail. However, so we can make it harder for an attacker to learn\n"
"lots of bridge addresses, you must send this request from an email address at\n"
+"one of the following domains:"
+msgstr ""
+"Jiný způsob najít veřejné adresy mostů je, poslat mail na\n"
+"brid...@torproject.org s řádkem \"get bridges\" v těle zprávy\n"
+"mailu. Ačkoli můžeme ztížit útočníku, aby se\n"
"dověděl mnoho adres mostů, musíte poslat tuto žádost z některých email. adres z\n"
+"těchto domén:"
+
+#: lib/bridgedb/I18n.py:41
+msgid "[This is an automated message; please do not reply.]"
+msgstr "[Tohle je automatická odpověď; prosím, neodpovídejte na ni]"
+
+#: lib/bridgedb/I18n.py:43
+msgid ""
+"Another way to find public bridge addresses is to visit\n"
+"https://bridges.torproject.org/. The answers you get from that page\n"
+"will change every few days, so check back periodically if you need more\n"
+"bridge addresses."
+msgstr ""
+"Další způsob, jak najít adresy veřejných mostů, je jít na\n"
+"https://bridges.torproject.org/. Odpovědi, které naleznete na stránce\n"
"se pravidelně mění, takže stránku pravidelně kontrolujte, pokud potřebujete víc\n"
+"adres mostů."
+
+#: lib/bridgedb/I18n.py:48
+msgid "(no bridges currently available)"
+msgstr "(nejsou momentálně dostupné žádné mosty)"
+
+#: lib/bridgedb/I18n.py:50
+msgid "(e-mail requests not currently supported)"
+msgstr "(e-mail žádosti nejsou momentálně podporovány)"
diff --git a/i18n/es/bridgedb.po b/i18n/es/bridgedb.po
new file mode 100644
index 000..a3f7edf
--- /dev/null
+++ b/i18n/es/bridgedb.po
@@ -0,0 +1,92 @@
+# SOME DESCRIPTIVE TITLE.
+# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
+# This file is distributed under the same license as the PACKAGE package.

[tor-commits] [metrics-web/master] Write "Apr-2011" instead of "Apr-11" in graphs.

2011-04-11 Thread karsten
commit cbd4dcc23b68d1d357895662318793b5aa923979
Author: Karsten Loesing 
Date:   Mon Apr 11 10:54:56 2011 +0200

Write "Apr-2011" instead of "Apr-11" in graphs.
---
 rserve/graphs.R |   78 +-
 1 files changed, 65 insertions(+), 13 deletions(-)
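The diff below selects a date-label format from the plotted range via R's cut() with breaks at 10, 56, 365, 730, and 5000 days. Collapsed to its three effective buckets, the same selection reads as follows in Java (class and method names are illustrative):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;
import java.util.Locale;

public class AxisLabelFormat {

    /** Mirrors the cut() breaks in graphs.R: short ranges get
     * day-month labels, medium ranges month-year, long ranges year. */
    static String patternFor(LocalDate start, LocalDate end) {
        long days = ChronoUnit.DAYS.between(start, end);
        if (days <= 56) {
            return "dd-MMM";    // up to ~8 weeks
        } else if (days <= 730) {
            return "MMM-yyyy";  // up to ~2 years: "Apr-2011", not "Apr-11"
        } else {
            return "yyyy";      // multi-year ranges
        }
    }

    public static void main(String[] args) {
        LocalDate end = LocalDate.of(2011, 4, 11);
        String pattern = patternFor(end.minusDays(90), end);
        System.out.println(pattern);  // MMM-yyyy
        System.out.println(DateTimeFormatter.ofPattern(pattern, Locale.ENGLISH)
                .format(end));        // Apr-2011
    }
}
```

Picking the pattern from the range keeps short plots precise while avoiding the ambiguous two-digit year that prompted this commit.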

diff --git a/rserve/graphs.R b/rserve/graphs.R
index 70ab86f..c3d0ea3 100644
--- a/rserve/graphs.R
+++ b/rserve/graphs.R
@@ -30,7 +30,11 @@ plot_networksize <- function(start, end, path, dpi) {
   ggplot(networksize, aes(x = as.Date(date, "%Y-%m-%d"), y = value,
 colour = variable)) + geom_line(size = 1) +
 scale_x_date(name = paste("\nThe Tor Project - ",
-"https://metrics.torproject.org/", sep = "")) +
+"https://metrics.torproject.org/", sep = ""), format =
+c("%d-%b", "%d-%b", "%b-%Y", "%b-%Y", "%Y", "%Y")[
+cut(as.numeric(max(as.Date(networksize$date, "%Y-%m-%d")) -
+min(as.Date(networksize$date, "%Y-%m-%d"))),
+c(0, 10, 56, 365, 730, 5000, Inf), labels=FALSE)]) +
 scale_y_continuous(name = "", limits = c(0, max(networksize$value,
 na.rm = TRUE))) +
 scale_colour_hue("", breaks = c("relays", "bridges"),
@@ -58,7 +62,11 @@ plot_versions <- function(start, end, path, dpi) {
   colour = version)) +
 geom_line(size = 1) +
 scale_x_date(name = paste("\nThe Tor Project - ",
-"https://metrics.torproject.org/";, sep = "")) +
+"https://metrics.torproject.org/";, sep = ""), format =
+c("%d-%b", "%d-%b", "%b-%Y", "%b-%Y", "%Y", "%Y")[
+cut(as.numeric(max(as.Date(versions$date, "%Y-%m-%d")) -
+min(as.Date(versions$date, "%Y-%m-%d"))),
+c(0, 10, 56, 365, 730, 5000, Inf), labels=FALSE)]) +
 scale_y_continuous(name = "",
   limits = c(0, max(versions$relays, na.rm = TRUE))) +
 scale_colour_hue(name = "Tor version", h.start = 280,
@@ -82,7 +90,11 @@ plot_platforms <- function(start, end, path, dpi) {
   colour = variable)) +
 geom_line(size = 1) +
 scale_x_date(name = paste("\nThe Tor Project - ",
-"https://metrics.torproject.org/";, sep = "")) +
+"https://metrics.torproject.org/";, sep = ""), format =
+c("%d-%b", "%d-%b", "%b-%Y", "%b-%Y", "%Y", "%Y")[
+cut(as.numeric(max(as.Date(platforms$date, "%Y-%m-%d")) -
+min(as.Date(platforms$date, "%Y-%m-%d"))),
+c(0, 10, 56, 365, 730, 5000, Inf), labels=FALSE)]) +
 scale_y_continuous(name = "",
   limits = c(0, max(platforms$value, na.rm = TRUE))) +
 scale_colour_hue(name = "Platform", h.start = 180,
@@ -115,7 +127,11 @@ plot_bandwidth <- function(start, end, path, dpi) {
   colour = variable)) +
 geom_line(size = 1) +
 scale_x_date(name = paste("\nThe Tor Project - ",
-"https://metrics.torproject.org/";, sep = "")) +
+"https://metrics.torproject.org/";, sep = ""), format =
+c("%d-%b", "%d-%b", "%b-%Y", "%b-%Y", "%Y", "%Y")[
+cut(as.numeric(max(as.Date(bandwidth$date, "%Y-%m-%d")) -
+min(as.Date(bandwidth$date, "%Y-%m-%d"))),
+c(0, 10, 56, 365, 730, 5000, Inf), labels=FALSE)]) +
 scale_y_continuous(name="Bandwidth (MiB/s)",
 limits = c(0, max(bandwidth$value, na.rm = TRUE) / 2^20)) +
 scale_colour_hue(name = "", h.start = 90,
@@ -144,7 +160,11 @@ plot_dirbytes <- function(start, end, path, dpi) {
   colour = variable)) +
 geom_line(size = 1) +
 scale_x_date(name = paste("\nThe Tor Project - ",
-"https://metrics.torproject.org/";, sep = "")) +
+"https://metrics.torproject.org/";, sep = ""), format =
+c("%d-%b", "%d-%b", "%b-%Y", "%b-%Y", "%Y", "%Y")[
+cut(as.numeric(max(as.Date(dir$date, "%Y-%m-%d")) -
+min(as.Date(dir$date, "%Y-%m-%d"))),
+c(0, 10, 56, 365, 730, 5000, Inf), labels=FALSE)]) +
 scale_y_continuous(name="Bandwidth (MiB/s)",
 limits = c(0, max(dir$value, na.rm = TRUE) / 2^20)) +
 scale_colour_hue(name = "",
@@ -175,7 +195,11 @@ plot_relayflags <- function(start, end, flags, granularity, path, dpi) {
 ggplot(networksize, aes(x = as.Date(date, "%Y-%m-%d"), y = value,
   colour = varia
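The repeated `scale_x_date` change above picks a date label format from the plotted time span via `cut()` breakpoints: short spans get day-month labels, longer spans month-year, multi-year spans just the year. A minimal Python sketch of the same bucketing rule (hypothetical helper, not part of the repository):

```python
import bisect
from datetime import date

# Upper bounds of the cut() buckets used in graphs.R, in days.
BREAKS = [10, 56, 365, 730, 5000]
# One strftime format per bucket, matching the R format vector.
FORMATS = ["%d-%b", "%d-%b", "%b-%Y", "%b-%Y", "%Y", "%Y"]

def date_label_format(start: date, end: date) -> str:
    """Pick an axis label format for the span [start, end]."""
    span_days = (end - start).days
    # bisect_left maps a span of e.g. 59 days into the (56, 365] bucket.
    return FORMATS[bisect.bisect_left(BREAKS, span_days)]
```

This reproduces the "Apr-2011 instead of Apr-11" behavior for spans between two months and two years.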

[tor-commits] [torperf/master] Update analyze_guards.py to take multiple .extradata files.

2011-04-11 Thread karsten
commit 31dc1db35befd0283b6e069cc33557537b590888
Author: Mike Perry 
Date:   Fri Apr 8 02:19:52 2011 -0700

Update analyze_guards.py to take multiple .extradata files.
---
 analyze_guards.py |   50 +-
 1 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/analyze_guards.py b/analyze_guards.py
index 365d658..8d528f7 100755
--- a/analyze_guards.py
+++ b/analyze_guards.py
@@ -1,20 +1,13 @@
 #!/usr/bin/python
 #
-# This script takes a list of $idhex=nickname OR nickname=idhex lines and
-# prints out min, avg, dev, max statistics based on the current consensus.
+# This script takes a list of extradata files and tells you some statistics
+# about the guard selection used by checking against the current consensus.
 #
-# Be sure to include ratio in the filenames of idhexes that are supposedly
-# chosen by consensus to descriptor ratio values.
+# Use the script like this:
+# ./analyze_guards.py slowratio50kb.extradata slowratio1mb50kb.extradata
 #
-# Here is an example scriptlet for extracting the guards from an extrainfo
-# file:
-#   awk '{ print $3; }' < torperfslowratio-50kb.extradata > slowratio50kb.extra
-#
-# Use this result like this:
-# ./analyze_guards.py slowratio50kb.extra
-#
-# It should then print out ranking stats. Use your brain to determine if these
-# stats make sense for the run you selected.
+# It should then print out ranking stats one per file. Use your brain to
+# determine if these stats make sense for the run you selected.
 
 import sys
 import math
@@ -51,23 +44,29 @@ def analyze_list(router_map, idhex_list):
 
   return (min_rank, avg, math.sqrt(varience/(len(idhex_list)-absent-1)), max_rank, absent)
 
-def main():
-  f = file(sys.argv[1], "r")
+def process_file(router_map, file_name):
+  f = file(file_name, "r")
   idhex_list = f.readlines()
+  guard_list = []
 
   for i in xrange(len(idhex_list)):
-if "~" in idhex_list[i]: char = "~"
-else: char = "="
+line = idhex_list[i].split()
+path = None
+used = False
+for word in line:
+  if word.startswith("PATH="): path = word[5:]
+  if word.startswith("USED_BY"): used = True
+
+if path and used:
+  guard = path.split(",")
+  guard_list.append(guard[0])
 
-split = idhex_list[i].split(char)
+  print "Guard rank stats (min, avg, dev, total, absent): "
+  print file_name + ": " + str(analyze_list(router_map, guard_list))
 
-if split[0][0] == "$":
-  idhex_list[i] = split[0]
-else:
-  idhex_list[i] = "$"+split[1]
 
+def main():
   c = TorCtl.TorCtl.connect(HOST, PORT)
-  nslist = c.get_network_status()
   sorted_rlist = filter(lambda r: r.desc_bw > 0, c.read_routers(c.get_network_status()))
   router_map = {}
   for r in sorted_rlist: router_map["$"+r.idhex] = r
@@ -88,8 +87,9 @@ def main():
 
   for i in xrange(len(sorted_rlist)): sorted_rlist[i].list_rank = i
 
-  print "Guard rank stats (min, avg, dev, total, absent): "
-  print str(analyze_list(router_map, idhex_list))
+  for file_name in sys.argv[1:]:
+process_file(router_map, file_name)
+
 
 if __name__ == '__main__':
   main()
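The new `process_file()` above scans each `.extradata` line for `PATH=` and `USED_BY` words and keeps the first hop of every used path as the guard. The core of that parsing, as a standalone Python sketch (the function name is ours, not from the repository):

```python
def extract_used_guards(lines):
    """Return the first hop (guard) of every path marked USED_BY.

    Mirrors the word-scanning loop in process_file(): a line contributes
    a guard only if it carries both a PATH=... word and a USED_BY word.
    """
    guards = []
    for line in lines:
        path, used = None, False
        for word in line.split():
            if word.startswith("PATH="):
                path = word[5:]          # comma-separated fingerprints
            if word.startswith("USED_BY"):
                used = True
        if path and used:
            guards.append(path.split(",")[0])
    return guards
```

The resulting guard list is what `analyze_list()` ranks against the current consensus.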





[tor-commits] [torperf/master] I hate everything.

2011-04-11 Thread karsten
commit 8c9237a2c15bbd5eada8d3093b317194e8fac581
Author: Mike Perry 
Date:   Sun Apr 10 23:10:57 2011 -0700

I hate everything.

Why didn't pychecker catch this??
---
 entrycons.py |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/entrycons.py b/entrycons.py
index fbbdc3c..c86f38d 100755
--- a/entrycons.py
+++ b/entrycons.py
@@ -19,7 +19,7 @@ class EntryTracker(TorCtl.ConsensusTracker):
 if self.consensus_count < DESCRIPTORS_NEEDED*len(self.ns_map):
   TorUtil.plog("NOTICE",
  "Insufficient routers to choose new guard. Waiting for more..")
-  self.need_guads = True
+  self.need_guards = True
 else:
   self.set_entries()
   self.need_guards = False



[tor-commits] [torperf/master] Wait until we get 99% of all descriptors before choosing guards.

2011-04-11 Thread karsten
commit 69e2e707cae12a7936027a01937dd2ddd767c699
Author: Mike Perry 
Date:   Fri Apr 8 02:21:17 2011 -0700

Wait until we get 99% of all descriptors before choosing guards.
---
 entrycons.py |   46 +++---
 1 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/entrycons.py b/entrycons.py
index fb89f1f..fbbdc3c 100755
--- a/entrycons.py
+++ b/entrycons.py
@@ -7,6 +7,7 @@ import copy
 HOST = "127.0.0.1"
 
 SAMPLE_SIZE = 3
+DESCRIPTORS_NEEDED = 0.99 # 99% of descriptors must be downloaded
 
 class EntryTracker(TorCtl.ConsensusTracker):
   used_entries = []
@@ -14,13 +15,28 @@ class EntryTracker(TorCtl.ConsensusTracker):
   def __init__(self, conn, speed):
 TorCtl.ConsensusTracker.__init__(self, conn, consensus_only=False)
 self.speed = speed
-self.set_entries()
+self.used_entries = []
+if self.consensus_count < DESCRIPTORS_NEEDED*len(self.ns_map):
+  TorUtil.plog("NOTICE",
+ "Insufficient routers to choose new guard. Waiting for more..")
+  self.need_guads = True
+else:
+  self.set_entries()
+  self.need_guards = False
 
   def new_consensus_event(self, n):
 TorCtl.ConsensusTracker.new_consensus_event(self, n)
-TorUtil.plog("INFO", "New consensus arrived. Rejoice!")
-self.used_entries = []
-self.set_entries()
+self.need_guards = True
+
+  def new_desc_event(self, n):
+TorCtl.ConsensusTracker.new_desc_event(self, n)
+if self.need_guards and self.consensus_count >= DESCRIPTORS_NEEDED*len(self.ns_map):
+  TorUtil.plog("INFO", "We have enough routers. Rejoice!")
+  self.used_entries = []
+  self.set_entries()
+  self.need_guards = False
+else:
+  self.need_guards = True
 
   def guard_event(self, event):
 TorCtl.EventHandler.guard_event(self, event)
@@ -29,11 +45,16 @@ class EntryTracker(TorCtl.ConsensusTracker):
   def handle_entry_deaths(self, event):
 state = event.status
 if (state == "DOWN" or state == "BAD" or state == "DROPPED"):
+  if self.consensus_count < DESCRIPTORS_NEEDED*len(self.ns_map):
+self.need_guards = True
+TorUtil.plog("NOTICE",
+   "Insufficient routers to choose new guard. Waiting for more..")
+return
   nodes_tuple = self.c.get_option("EntryNodes")
   nodes_list = nodes_tuple[0][1].split(",")
   try: 
 nodes_list.remove(event.idhex)
-nodes_list.append(self.get_next_router(event.idhex, nodes_list))
+nodes_list.append(self.get_next_guard())
 self.c.set_option("EntryNodes", ",".join(nodes_list))
 TorUtil.plog("NOTICE", "Entry: " + event.nick + ":" + event.idhex +
  " died, and we replaced it with: " + nodes_list[-1] + "!")
@@ -64,15 +85,18 @@ class EntryTracker(TorCtl.ConsensusTracker):
 elif self.speed == "slowratio":
   routers.sort(lambda x,y: ratio_cmp(y,x))
 
-# Print top 5 routers + ratios
-for i in xrange(5):
-  TorUtil.plog("DEBUG", self.speed+" router "+routers[i].nickname+" #"+str(i)+": "
-+str(routers[i].bw)+"/"+str(routers[i].desc_bw)+" = "
-+str(routers[i].bw/float(routers[i].desc_bw)))
+# Print top 3 routers + ratios
+if len(routers) < SAMPLE_SIZE:
+  TorUtil.plog("WARN", "Only "+str(len(routers))+" in our list!")
+else:
+  for i in xrange(SAMPLE_SIZE):
+TorUtil.plog("INFO", self.speed+" router "+routers[i].nickname+" #"+str(i)+": "
+  +str(routers[i].bw)+"/"+str(routers[i].desc_bw)+" = "
+  +str(routers[i].bw/float(routers[i].desc_bw)))
 
 return routers
 
-  def get_next_router(self, event, nodes_list):
+  def get_next_guard(self):
 # XXX: This is inefficient, but if we do it now, we're sure that
 # we're always using the very latest networkstatus and descriptor data
 sorted_routers = self.sort_routers(self.current_consensus().sorted_r)
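The commit's central check, `consensus_count < DESCRIPTORS_NEEDED*len(self.ns_map)`, gates guard selection on having descriptors for (almost) every router in the consensus. Isolated as a Python sketch (names ours, mirroring the EntryTracker checks above):

```python
DESCRIPTORS_NEEDED = 0.99  # fraction of consensus routers whose descriptors we need

def ready_to_pick_guards(consensus_count, ns_map_size):
    """True once at least 99% of the routers listed in the consensus
    have a cached descriptor; until then EntryTracker sets need_guards
    and waits for further new_desc events."""
    return consensus_count >= DESCRIPTORS_NEEDED * ns_map_size
```

Deferring selection this way avoids ranking guards against a half-downloaded descriptor set, which would skew the bandwidth ratios the sort relies on.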





[tor-commits] [metrics-web/master] Fix navigation on consensus-health page.

2011-04-12 Thread karsten
commit 045d6df204a1efa1e3ccf19de8d58b42b4a2e142
Author: Karsten Loesing 
Date:   Tue Apr 12 08:02:07 2011 +0200

Fix navigation on consensus-health page.
---
 .../ernie/cron/ConsensusHealthChecker.java |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/src/org/torproject/ernie/cron/ConsensusHealthChecker.java b/src/org/torproject/ernie/cron/ConsensusHealthChecker.java
index 602d030..4e2adc4 100644
--- a/src/org/torproject/ernie/cron/ConsensusHealthChecker.java
+++ b/src/org/torproject/ernie/cron/ConsensusHealthChecker.java
@@ -476,6 +476,7 @@ public class ConsensusHealthChecker {
   + "Status\n"
   + "\n"
   + "\n"
+  + "  Network Status\n"
   + "  ExoneraTor\n"
   + "  Relay Search\n"
   + "  Consensus Health\n"



[tor-commits] [torperf/master] Add a ChangeLog for version 0.0.1.

2011-04-12 Thread karsten
commit f503c0c1c2b33c467f4b0d5441118820ad4bc155
Author: Karsten Loesing 
Date:   Tue Apr 12 13:55:28 2011 +0200

Add a ChangeLog for version 0.0.1.
---
 ChangeLog |5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/ChangeLog b/ChangeLog
new file mode 100644
index 0000000..e32939b
--- /dev/null
+++ b/ChangeLog
@@ -0,0 +1,5 @@
+Torperf change log:
+
+Changes in version 0.0.1 - 2011-04-12
+  - Initial release
+



[tor-commits] [torperf/master] Prepare change log for next version.

2011-04-12 Thread karsten
commit f152f1e47de3d7b3c3f4671410704dec1a62d647
Author: Karsten Loesing 
Date:   Tue Apr 12 14:24:50 2011 +0200

Prepare change log for next version.
---
 ChangeLog |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index e32939b..d05cd70 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,7 @@
 Torperf change log:
 
+Changes in version 0.0.2 - 2011-??-??
+
 Changes in version 0.0.1 - 2011-04-12
   - Initial release
 



[tor-commits] [metrics-web/master] Add Torperf 0.0.1 tarball.

2011-04-12 Thread karsten
commit 7fc42547a9d586e8afdaa946e357b1c2d432158e
Author: Karsten Loesing 
Date:   Tue Apr 12 14:28:30 2011 +0200

Add Torperf 0.0.1 tarball.
---
 web/WEB-INF/tools.jsp |7 +--
 1 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/web/WEB-INF/tools.jsp b/web/WEB-INF/tools.jsp
index b744463..01e5a84 100644
--- a/web/WEB-INF/tools.jsp
+++ b/web/WEB-INF/tools.jsp
@@ -81,6 +81,9 @@
 download files of various sizes over the Tor network and notes how
 long substeps take.
 
+  Download
+  Torperf 0.0.1
+  (sig)
   Browse the https://gitweb.torproject.org/torperf.git";>Git repository
   git clone git://git.torproject.org/torperf
 
@@ -96,7 +99,7 @@
 
   Download
   ExoneraTor 0.0.2
-  (sig)
+  (sig)
  Browse the https://gitweb.torproject.org/metrics-utils.git/tree/HEAD:/exonerator";>Git repository
   git clone git://git.torproject.org/metrics-utils
 
@@ -111,7 +114,7 @@
 
   Download
   VisiTor 0.0.4
-  (sig)
+  (sig)
  Browse the https://gitweb.torproject.org/metrics-utils.git/tree/HEAD:/visitor";>Git repository
   git clone git://git.torproject.org/metrics-utils
 



[tor-commits] [bridgedb/master] Add first version of bridge-db-spec.txt.

2011-04-12 Thread karsten
commit 6136c48d95d3e6ffb1fef8c9f918038e5bcf6c9b
Author: Karsten Loesing 
Date:   Sun Feb 13 21:24:40 2011 +0100

Add first version of bridge-db-spec.txt.
---
 bridge-db-spec.txt |  106 
 1 files changed, 106 insertions(+), 0 deletions(-)

diff --git a/bridge-db-spec.txt b/bridge-db-spec.txt
new file mode 100644
index 0000000..a9e2c8f
--- /dev/null
+++ b/bridge-db-spec.txt
@@ -0,0 +1,106 @@
+
+   BridgeDB specification
+
+0. Preliminaries
+
+   This document specifies how BridgeDB processes bridge descriptor files
+   to learn about new bridges, maintains persistent assignments of bridges
+   to distributors, and decides which descriptors to give out upon user
+   requests.
+
+1. Importing bridge network statuses and bridge descriptors
+
+   BridgeDB learns about bridges from parsing bridge network statuses and
+   bridge descriptors as specified in Tor's directory protocol.  BridgeDB
+   SHOULD parse one bridge network status file and at least one bridge
+   descriptor file.
+
+1.1. Parsing bridge network statuses
+
+   Bridge network status documents contain the information which bridges
+   are known to the bridge authority at a certain time.  We expect bridge
+   network statuses to contain at least the following two lines for every
+   bridge in the given order:
+
+   "r" SP nickname SP identity SP digest SP publication SP IP SP ORPort SP
+   DirPort NL
+   "s" SP Flags NL
+
+   BridgeDB parses the identity from the "r" line and scans the "s" line
+   for flags Stable and Running.  BridgeDB MUST discard all bridges that
+   do not have the Running flag.  BridgeDB MAY only consider bridges as
+   running that have the Running flag in the most recently parsed bridge
+   network status.  BridgeDB MUST also discard all bridges for which it
+   does not find a bridge descriptor.  BridgeDB memorizes all remaining
+   bridges as the set of running bridges that can be given out to bridge
+   users.
+# I'm not 100% sure if BridgeDB discards (or rather doesn't use) bridges
+# for which it doesn't have a bridge descriptor.  But as far as I can see,
+# it wouldn't learn the bridge's IP and OR port in that case, so we
+# shouldn't use it.  Is this a bug?  -KL
+# What's the reason for parsing bridge descriptors anyway?  Can't we learn
+# a bridge's IP address and OR port from the "r" line, too?  -KL
+
+1.2. Parsing bridge descriptors
+
+   BridgeDB learns about a bridge's most recent IP address and OR port
+   from parsing bridge descriptors.  Bridge descriptor files MAY contain
+   one or more bridge descriptors.  We expect bridge descriptor to contain
+   at least the following lines in the stated order:
+
+   "@purpose" SP purpose NL
+   "router" SP nickname SP IP SP ORPort SP SOCKSPort SP DirPort NL
+   ["opt "] "fingerprint" SP fingerprint NL
+
+   BridgeDB parses the purpose, IP, ORPort, and fingerprint.  BridgeDB
+   MUST discard bridge descriptors if the fingerprint is not contained in
+   the bridge network status(es) parsed in the same execution or if the
+   bridge does not have the Running flag.  BridgeDB MAY discard bridge
+   descriptors which have a different purpose than "bridge".  BridgeDB
+   memorizes the IP addresses and OR ports of the remaining bridges.  If
+   there is more than one bridge descriptor with the same fingerprint,
+   BridgeDB memorizes the IP address and OR port of the most recently
+   parsed bridge descriptor.
+# I think that BridgeDB simply assumes that descriptors in the bridge
+# descriptor files are in chronological order.  If not, it would overwrite
+# a bridge's IP address and OR port with an older descriptor, which would
+# be bad.  The current cached-descriptors* files should write descriptors
+# in chronological order.  But we might change that, e.g., when trying to
+# limit the number of descriptors in Tor.  Should we make the assumption
+# that descriptors are ordered chronologically, or should we specify that
+# we have to check that explicitly?  -KL
+
+2. Assigning bridges to distributors
+
+# In this section I'm planning to write how BridgeDB should decide to
+# which distributor (https, email, unallocated/file bucket) it assigns a
+# new bridge.  I should also write down whether BridgeDB changes
+# assignments of already known bridges (I think it doesn't).  The latter
+# includes cases when we increase/reduce the probability of bridges being
+# assigned to a distributor or even turn off a distributor completely.
+# -KL
+
+3. Selecting bridges to be given out via https
+
+# This section is about the specifics of the https distributor, like which
+# IP addresses get bridges from the same ring, how often the results
+# change, etc.  -KL
+
+4. Selecting bridges to be given out via email
+
+# This section is about the
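Section 1.1 of the spec above prescribes scanning paired "r"/"s" lines and keeping only bridges with the Running flag. A simplified Python sketch of that rule (illustrative only, not BridgeDB's actual parser):

```python
def running_bridge_identities(status_lines):
    """Collect identities of bridges whose "s" line carries Running.

    Follows the two-line pattern from section 1.1:
        "r" SP nickname SP identity SP digest SP publication SP IP SP ORPort SP DirPort
        "s" SP Flags
    """
    running = set()
    identity = None
    for line in status_lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "r" and len(parts) >= 3:
            identity = parts[2]          # identity is the second field after "r"
        elif parts[0] == "s" and identity is not None:
            if "Running" in parts[1:]:
                running.add(identity)
            identity = None
    return running
```

Bridges absent from the returned set would, per the spec, never be handed out to users.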

[tor-commits] [bridgedb/master] Finish first draft of bridge-db-spec.txt.

2011-04-12 Thread karsten
commit 9d7dad7f97a05eba479c7a84a10e6663c79205ee
Author: Karsten Loesing 
Date:   Mon Feb 14 14:52:21 2011 +0100

Finish first draft of bridge-db-spec.txt.
---
 bridge-db-spec.txt |  245 
 1 files changed, 171 insertions(+), 74 deletions(-)

diff --git a/bridge-db-spec.txt b/bridge-db-spec.txt
index a9e2c8f..5c8ca57 100644
--- a/bridge-db-spec.txt
+++ b/bridge-db-spec.txt
@@ -5,98 +5,168 @@
 
This document specifies how BridgeDB processes bridge descriptor files
to learn about new bridges, maintains persistent assignments of bridges
-   to distributors, and decides which descriptors to give out upon user
+   to distributors, and decides which bridges to give out upon user
requests.
 
 1. Importing bridge network statuses and bridge descriptors
 
BridgeDB learns about bridges from parsing bridge network statuses and
-   bridge descriptors as specified in Tor's directory protocol.  BridgeDB
-   SHOULD parse one bridge network status file and at least one bridge
-   descriptor file.
+   bridge descriptors as specified in Tor's directory protocol.
+   BridgeDB SHOULD parse one bridge network status file first and at least
+   one bridge descriptor file afterwards.
 
 1.1. Parsing bridge network statuses
 
Bridge network status documents contain the information which bridges
-   are known to the bridge authority at a certain time.  We expect bridge
-   network statuses to contain at least the following two lines for every
-   bridge in the given order:
-
-   "r" SP nickname SP identity SP digest SP publication SP IP SP ORPort SP
-   DirPort NL
-   "s" SP Flags NL
-
-   BridgeDB parses the identity from the "r" line and scans the "s" line
-   for flags Stable and Running.  BridgeDB MUST discard all bridges that
-   do not have the Running flag.  BridgeDB MAY only consider bridges as
-   running that have the Running flag in the most recently parsed bridge
-   network status.  BridgeDB MUST also discard all bridges for which it
-   does not find a bridge descriptor.  BridgeDB memorizes all remaining
-   bridges as the set of running bridges that can be given out to bridge
-   users.
-# I'm not 100% sure if BridgeDB discards (or rather doesn't use) bridges
-# for which it doesn't have a bridge descriptor.  But as far as I can see,
-# it wouldn't learn the bridge's IP and OR port in that case, so we
-# shouldn't use it.  Is this a bug?  -KL
-# What's the reason for parsing bridge descriptors anyway?  Can't we learn
-# a bridge's IP address and OR port from the "r" line, too?  -KL
+   are known to the bridge authority and which flags the bridge authority
+   assigns to them.
+   We expect bridge network statuses to contain at least the following two
+   lines for every bridge in the given order:
+
+  "r" SP nickname SP identity SP digest SP publication SP IP SP ORPort
+  SP DirPort NL
+  "s" SP Flags NL
+
+   BridgeDB parses the identity from the "r" line and the assigned flags
+   from the "s" line.
+   BridgeDB MUST discard all bridges that do not have the Running flag.
+   BridgeDB memorizes all remaining bridges as the set of running bridges
+   that can be given out to bridge users.
+   BridgeDB SHOULD memorize assigned flags if it wants to ensure that sets
+   of bridges given out SHOULD contain at least a given number of bridges
+   with these flags.
 
 1.2. Parsing bridge descriptors
 
BridgeDB learns about a bridge's most recent IP address and OR port
-   from parsing bridge descriptors.  Bridge descriptor files MAY contain
-   one or more bridge descriptors.  We expect bridge descriptor to contain
-   at least the following lines in the stated order:
-
-   "@purpose" SP purpose NL
-   "router" SP nickname SP IP SP ORPort SP SOCKSPort SP DirPort NL
-   ["opt "] "fingerprint" SP fingerprint NL
-
-   BridgeDB parses the purpose, IP, ORPort, and fingerprint.  BridgeDB
-   MUST discard bridge descriptors if the fingerprint is not contained in
-   the bridge network status(es) parsed in the same execution or if the
-   bridge does not have the Running flag.  BridgeDB MAY discard bridge
-   descriptors which have a different purpose than "bridge".  BridgeDB
-   memorizes the IP addresses and OR ports of the remaining bridges.  If
-   there is more than one bridge descriptor with the same fingerprint,
+   from parsing bridge descriptors.
+   Bridge descriptor files MAY contain one or more bridge descriptors.
+   We expect bridge descriptor to contain at least the following lines in
+   the stated order:
+
+  "@purpose" SP purpose NL
+  "router" SP nickname SP IP SP ORPort SP SOCKSPort SP DirPort NL
+  ["opt" SP] "fingerprint" SP fingerprint NL
+
+   BridgeDB parses th

[tor-commits] [bridgedb/master] Respond to Nick's comments.

2011-04-12 Thread karsten
commit e3b46883f593271829787f42d123d9fc3bc13f33
Author: Karsten Loesing 
Date:   Tue Apr 12 10:19:18 2011 +0200

Respond to Nick's comments.
---
 bridge-db-spec.txt |   82 +---
 1 files changed, 27 insertions(+), 55 deletions(-)

diff --git a/bridge-db-spec.txt b/bridge-db-spec.txt
index 48a9590..89f0e5c 100644
--- a/bridge-db-spec.txt
+++ b/bridge-db-spec.txt
@@ -41,15 +41,8 @@
 
BridgeDB parses the identity from the "r" line and the assigned flags
from the "s" line.
-   BridgeDB MUST discard all bridges that do not have the Running flag.
-# I don't think that "discard" is the right word here: we don't actually
-# seem to "Forget they exist".  Instead, we remember that they are not
-# running.  (See how parseStatusFile yields a flag that says if the bridges
-# are running, and how Main.load sets the bridge's status to running or
-# non-running appropriately before passing it to splitter.  At no point here
-# does a non-running bridge get "discarded", sfaict). -NM
-   BridgeDB memorizes all remaining bridges as the set of running bridges
-   that can be given out to bridge users.
+   BridgeDB memorizes all bridges that have the Running flag as the set of
+   running bridges that can be given out to bridge users.
BridgeDB SHOULD memorize assigned flags if it wants to ensure that sets
of bridges given out SHOULD contain at least a given number of bridges
with these flags.
@@ -58,6 +51,12 @@
 
BridgeDB learns about a bridge's most recent IP address and OR port
from parsing bridge descriptors.
+   In theory, both IP address and OR port of a bridge are also contained
+   in the "r" line of the bridge network status, so there is no mandatory
+   reason for parsing bridge descriptors.  But this functionality is still
+   implemented in case we need information from the bridge descriptor in
+   the future.
+
Bridge descriptor files MAY contain one or more bridge descriptors.
We expect bridge descriptor to contain at least the following lines in
the stated order:
@@ -68,45 +67,20 @@
 
BridgeDB parses the purpose, IP, ORPort, and fingerprint from these
lines.
-   BridgeDB MUST discard bridge descriptors if the fingerprint is not
-   contained in the bridge network status parsed before or if the bridge
-   does not have the Running flag.
-# See comment above -NM
-   BridgeDB MAY discard bridge descriptors which have a different purpose
-   than "bridge".
-# "MAY" isn't good enough; we need to know whether it's safe to give
-# bridgedb a list of non-bridge-purpose descriptors or not.  If it
-# discards them, then you shouldn't give bridgedb non-bridge descriptors
-# if you _do_ want them handed out.  If it doesn't discard them, then you
-# shouldn't give bridgedb non-bridge descriptors _unless_ you want them
-# handed out.
+   BridgeDB skips bridge descriptors if the fingerprint is not contained
+   in the bridge network status parsed before or if the bridge does not
+   have the Running flag.
+   BridgeDB discards bridge descriptors which have a different purpose
+   than "bridge".  BridgeDB can be configured to only accept descriptors
+   with another purpose or not discard descriptors based on purpose at
+   all.
BridgeDB memorizes the IP addresses and OR ports of the remaining
bridges.
If there is more than one bridge descriptor with the same fingerprint,
BridgeDB memorizes the IP address and OR port of the most recently
parsed bridge descriptor.
-# I confirmed that BridgeDB simply assumes that descriptors in the bridge
-# descriptor files are in chronological order and that descriptors in
-# cached-descriptors.new are newer than those in cached-descriptors.  If
-# this is not the case, BridgeDB overwrites a bridge's IP address and OR
-# port with those from an older descriptor!  I think that the current
-# cached-descriptors* files that Tor produces always have descriptors in
-# chronological order.  But what if we change that, e.g., when trying to
-# limit the number of descriptors that Tor memorizes.  Should we make the
-# assumption that descriptors are ordered chronologically, or should we
-# specify that we have to check that explicitly and fix BridgeDB to do
-# that?  We could also look at the bridge descriptor that is referenced
-# from the bridge network status by its descriptor identifier, even though
-# that would require us to calculate the descriptor hash.  -KL
-# We should just look at the 'published' dates in the bridges. Call this a bug,
-# I'd say. -NM
If BridgeDB does not find a bridge descriptor for a bridge contained in
the bridge network status parsed before, it MUST discard that bridge.
-# I confirmed that BridgeDB discards (or at least doesn't use) bridges for
-# which it doesn't have a bridge d
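Section 1.2's descriptor lines (`@purpose`, `router`, and the fingerprint line with its optional `"opt" SP` prefix) can be pulled apart with a few string splits. A hedged Python sketch (our own helper, not BridgeDB code):

```python
def parse_bridge_descriptor(lines):
    """Extract (purpose, ip, orport, fingerprint) from one descriptor.

    Handles the ["opt" SP] prefix allowed before the fingerprint line and
    joins space-separated fingerprint groups into one hex string.
    """
    purpose = ip = orport = fingerprint = None
    for line in lines:
        words = line.split()
        if not words:
            continue
        if words[0] == "opt":            # strip the optional "opt " prefix
            words = words[1:]
        if not words:
            continue
        if words[0] == "@purpose" and len(words) > 1:
            purpose = words[1]
        elif words[0] == "router" and len(words) > 3:
            ip, orport = words[2], int(words[3])
        elif words[0] == "fingerprint":
            fingerprint = "".join(words[1:])
    return purpose, ip, orport, fingerprint
```

Per the spec, a caller would then skip descriptors whose fingerprint is missing from the parsed network status, and (configurably) those whose purpose is not "bridge".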

  1   2   3   4   5   6   7   8   9   10   >