On Sunday, February 23, 2020 at 3:20:10 AM UTC-5, Nadav Har'El wrote:
>
> On Sat, Feb 22, 2020 at 4:20 PM Waldemar Kozaczuk <jwkoz...@gmail.com> wrote:
>
>> This is the third and last batch of the python 3 upgrade changes and
>> focuses on debug (loader.py) and trace-related scripts.
>>
>> The following scripts have either been adapted to version 3 or
>> verified that they do NOT need to be changed, across all patches in the series:
>>
>> ./scripts/export_manifest.py
>>
>
> Please note that most if not all of the scripts that have
> "#!/usr/bin/python" on top (not python2 or python3) have been tested for
> years by different people who had either python2 or python3 as their
> default "python", and should ideally be working for both.
> That's especially true for the scripts used in the regular scripts/build,
> which people would have noticed if they were broken.
> So it's indeed worth testing that this is in fact true, but the last thing
> I want to see is rushing patches to "convert" a script that already works
> on Python3 to Python3.
>
>
>> Signed-off-by: Waldemar Kozaczuk <jwkoz...@gmail.com>
>> ---
>>  scripts/loader.py    |  5 +--
>>  scripts/osv/debug.py |  3 +-
>>  scripts/osv/prof.py  | 16 ++++++---
>>  scripts/osv/trace.py | 45 ++++++++++++++----------
>>  scripts/osv/tree.py  |  4 +--
>>  scripts/trace.py     | 84 ++++++++++++++++++++++++--------------------
>>  6 files changed, 90 insertions(+), 67 deletions(-)
>>
>> diff --git a/scripts/loader.py b/scripts/loader.py
>> index 500d864a..4b82fd4a 100644
>> --- a/scripts/loader.py
>> +++ b/scripts/loader.py
>> @@ -1,4 +1,4 @@
>> -#!/usr/bin/python2
>> +#!/usr/bin/python
>>
>>  import gdb
>>  import re
>> @@ -1034,7 +1034,7 @@ class osv_info_callouts(gdb.Command):
>>              fname = callout['c_fn']
>>
>>              # time
>> -            t = int(callout['c_to_ns'])
>> +            t = int(callout['c_to_ns']['__d']['__r'])
>>
>
> Thanks, this was an already-needed fix (not related to Python 3). It was
> probably caused by a change in the C++ ABI, and while this change is good
> for us, it may not work for people using an older version of the compiler.
> Traditionally we handled this with code like:
>
>         try:
>             return self.map_header['_M_bbegin']
>         except gdb.error:
>             return self.map_header['_M_before_begin']['_M_nxt']
>
> It would be good to do this here too.
> But I'll not block this commit. I'll commit it as-is, and leave it for your
> consideration whether you want to submit a followup patch to make this
> change backward-compatible with older compilers.
>
I am about to send a patch accommodating your request. 
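
Roughly along these lines (an untested sketch - it just tries the new
libstdc++ layout first and falls back to the pre-patch expression, following
the try/except pattern you quoted above):

    # time - handle both the new and the old libstdc++ duration layout
    try:
        t = int(callout['c_to_ns']['__d']['__r'])
    except gdb.error:
        t = int(callout['c_to_ns'])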

I think we have a similar issue with this code (probably the line marked with an arrow below):

def runqueue(cpuid, node=None):
    if node == None:
        cpus = gdb.lookup_global_symbol('sched::cpus').value()
        cpu = cpus['_M_impl']['_M_start'][cpuid]
        rq = cpu['runqueue']
        p = rq['data_']['node_plus_pred_']   # <-- this line
        node = p['header_plus_size_']['header_']['parent_']

    if node:
        offset = gdb.parse_and_eval('(int)&((sched::thread *)0)->_runqueue_link')
        thread = node.cast(gdb.lookup_type('void').pointer()) - offset
        thread = thread.cast(gdb.lookup_type('sched::thread').pointer())

        for x in runqueue(cpuid, node['left_']):
            yield x

        yield thread

        for x in runqueue(cpuid, node['right_']):
            yield x

Trying to run 'osv runqueue' gives this error:

 osv runqueue
CPU 0:
Python Exception <class 'gdb.error'> There is no member or method named data_.:
Error occurred in Python: There is no member or method named data_.

I could not quite figure out how exactly the 'runqueue' type has changed - it
does not seem to have a 'data_' field anymore. Do you have any idea how we
should fix it?
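
In the meantime, one way to see what the field is called now is to ask gdb
for the members directly - just a diagnostic sketch using gdb's Python API,
nothing OSv-specific beyond the symbol names already used in runqueue():

    # run from gdb's python prompt; lists the members of the runqueue object
    # on CPU 0 so we can see what 'data_' has turned into
    cpus = gdb.lookup_global_symbol('sched::cpus').value()
    rq = cpus['_M_impl']['_M_start'][0]['runqueue']
    for field in rq.type.fields():
        print(field.name, field.type)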

>
>
>>              # flags
>>              CALLOUT_ACTIVE = 0x0002
>> @@ -1176,6 +1176,7 @@ def all_traces():
>>          max_trace = ulong(trace_buffer['_size'])
>>
>>          if not trace_log_base:
>> +            print('!!! Could not find any trace data! Make sure 
>> "--trace" option matches some tracepoints.')
>>              raise StopIteration
>>
>>          trace_log = inf.read_memory(trace_log_base, max_trace)
>> diff --git a/scripts/osv/debug.py b/scripts/osv/debug.py
>> index fe42be60..83372ada 100644
>> --- a/scripts/osv/debug.py
>> +++ b/scripts/osv/debug.py
>> @@ -38,7 +38,7 @@ class SymbolResolver(object):
>>          if show_inline:
>>              flags += 'i'
>>          self.addr2line = subprocess.Popen(['addr2line', '-e', 
>> object_path, flags],
>> -            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>> +            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
>>          self.cache = {}
>>
>>      def next_line(self):
>> @@ -82,6 +82,7 @@ class SymbolResolver(object):
>>          if self.show_inline:
>>              self.addr2line.stdin.write('0\n')
>>
>> +        self.addr2line.stdin.flush()
>>          result = self.parse_line(addr, self.next_line())
>>
>>          if self.show_inline:
>> diff --git a/scripts/osv/prof.py b/scripts/osv/prof.py
>> index 95db15b8..3a000013 100644
>> --- a/scripts/osv/prof.py
>> +++ b/scripts/osv/prof.py
>> @@ -51,7 +51,7 @@ time_units = [
>>  ]
>>
>>  def parse_time_as_nanos(text, default_unit='ns'):
>> -    for level, name in sorted(time_units, key=lambda (level, name): 
>> -len(name)):
>> +    for level, name in sorted(time_units, key=lambda level_name: 
>> -len(level_name[1])):
>>
>>          if text.endswith(name):
>>              return float(text.rstrip(name)) * level
>>      for level, name in time_units:
>> @@ -60,7 +60,7 @@ def parse_time_as_nanos(text, default_unit='ns'):
>>      raise Exception('Unknown unit: ' + default_unit)
>>
>>  def format_time(time, format="%.2f %s"):
>> -    for level, name in sorted(time_units, key=lambda (level, name): 
>> -level):
>> +    for level, name in sorted(time_units, key=lambda level_name1: 
>> -level_name1[0]):
>>          if time >= level:
>>              return format % (float(time) / level, name)
>>      return str(time)
>> @@ -207,10 +207,16 @@ class timed_trace_producer(object):
>>          self.last_time = None
>>
>>      def __call__(self, sample):
>> +        if not sample.time:
>> +            return
>> +
>>          if not sample.cpu in self.earliest_trace_per_cpu:
>>              self.earliest_trace_per_cpu[sample.cpu] = sample
>>
>> -        self.last_time = max(self.last_time, sample.time)
>> +        if not self.last_time:
>> +            self.last_time = sample.time
>> +        else:
>> +            self.last_time = max(self.last_time, sample.time)
>>
>>          matcher = self.matcher_by_name.get(sample.name, None)
>>          if not matcher:
>> @@ -239,7 +245,7 @@ class timed_trace_producer(object):
>>              return trace.TimedTrace(entry_trace, duration)
>>
>>      def finish(self):
>> -        for sample in self.open_samples.itervalues():
>> +        for sample in self.open_samples.values():
>>              duration = self.last_time - sample.time
>>              yield trace.TimedTrace(sample, duration)
>>
>> @@ -402,7 +408,7 @@ def print_profile(samples, symbol_resolver, 
>> caller_oriented=False,
>>      if not order:
>>          order = lambda node: (-node.resident_time, -node.hit_count)
>>
>> -    for group, tree_root in sorted(groups.iteritems(), key=lambda 
>> (thread, node): order(node)):
>> +    for group, tree_root in sorted(iter(groups.items()), key=lambda 
>> thread_node: order(thread_node[1])):
>>          collapse_similar(tree_root)
>>
>>          if max_levels:
>> diff --git a/scripts/osv/trace.py b/scripts/osv/trace.py
>> index 2c28582b..636ea0a4 100644
>> --- a/scripts/osv/trace.py
>> +++ b/scripts/osv/trace.py
>> @@ -65,7 +65,12 @@ class TimeRange(object):
>>          return self.end - self.begin
>>
>>      def intersection(self, other):
>> -        begin = max(self.begin, other.begin)
>> +        if not self.begin:
>> +            begin = other.begin
>> +        elif not other.begin:
>> +            begin = self.begin
>> +        else:
>> +            begin = max(self.begin, other.begin)
>>
>>          if self.end is None:
>>              end = other.end
>> @@ -143,11 +148,11 @@ class Trace:
>>  class TimedTrace:
>>      def __init__(self, trace, duration=None):
>>          self.trace = trace
>> -        self.duration = duration
>> +        self.duration_ = duration
>>
>>      @property
>>      def duration(self):
>> -        return self.duration
>> +        return self.duration_
>>
>>      @property
>>      def time(self):
>> @@ -183,6 +188,8 @@ def do_split_format(format_str):
>>
>>  _split_cache = {}
>>  def split_format(format_str):
>> +    if not format_str:
>> +        return []
>>      result = _split_cache.get(format_str, None)
>>      if not result:
>>          result = list(do_split_format(format_str))
>> @@ -190,7 +197,7 @@ def split_format(format_str):
>>      return result
>>
>>  formatters = {
>> -    '*': lambda bytes: '{' + ' '.join('%02x' % ord(b) for b in bytes) + 
>> '}'
>> +    '*': lambda bytes: '{' + ' '.join('%02x' % b for b in bytes) + '}'
>>  }
>>
>>  def get_alignment_of(fmt):
>> @@ -238,16 +245,15 @@ class SlidingUnpacker:
>>                  size = struct.calcsize(fmt)
>>                  val, = struct.unpack_from(fmt, 
>> self.buffer[self.offset:self.offset+size])
>>                  self.offset += size
>> -                values.append(val)
>> +                if fmt.startswith('50p'):
>> +                   values.append(val.decode('utf-8'))
>> +                else:
>> +                   values.append(val)
>>
>>          return tuple(values)
>>
>> -    def __nonzero__(self):
>> -        return self.offset < len(self.buffer)
>> -
>> -    # Python3
>>      def __bool__(self):
>> -        return self.__nonzero__()
>> +        return self.offset < len(self.buffer)
>>
>>  class WritingPacker:
>>      def __init__(self, writer):
>> @@ -270,7 +276,10 @@ class WritingPacker:
>>              if fmt == '*':
>>                  self.pack_blob(arg)
>>              else:
>> -                self.writer(struct.pack(fmt, arg))
>> +                if fmt == '50p':
>> +                    self.writer(struct.pack(fmt, arg.encode('utf-8')))
>> +                else:
>> +                    self.writer(struct.pack(fmt, arg))
>>                  self.offset += struct.calcsize(fmt)
>>
>>      def pack_blob(self, arg):
>> @@ -298,7 +307,7 @@ class TraceDumpReaderBase :
>>          self.endian = '<'
>>          self.file = open(filename, 'rb')
>>          try:
>> -            tag = self.file.read(4)
>> +            tag = self.file.read(4).decode()
>>              if tag == "OSVT":
>>                  endian = '>'
>>              elif tag != "TVSO":
>> @@ -347,7 +356,7 @@ class TraceDumpReaderBase :
>>
>>      def readString(self):
>>          len = self.read('H')
>> -        return self.file.read(len)
>> +        return self.file.read(len).decode()
>>
>>  class TraceDumpReader(TraceDumpReaderBase) :
>>      def __init__(self, filename):
>> @@ -378,7 +387,7 @@ class TraceDumpReader(TraceDumpReaderBase) :
>>              sig = ""
>>              for j in range(0, n_args):
>>                  arg_name = self.readString()
>> -                arg_sig = self.file.read(1)
>> +                arg_sig = self.file.read(1).decode()
>>                  if arg_sig == 'p':
>>                      arg_sig = '50p'
>>                  sig += arg_sig
>> @@ -405,7 +414,7 @@ class TraceDumpReader(TraceDumpReaderBase) :
>>
>>              backtrace = None
>>              if flags & 1:
>> -                backtrace = filter(None, unpacker.unpack('Q' * 
>> self.backtrace_len))
>> +                backtrace = [_f for _f in unpacker.unpack('Q' * 
>> self.backtrace_len) if _f]
>>
>>              data = unpacker.unpack(tp.signature)
>>              unpacker.align_up(8)
>> @@ -414,7 +423,7 @@ class TraceDumpReader(TraceDumpReaderBase) :
>>              yield last_trace
>>
>>      def traces(self):
>> -        iters = map(lambda data: self.oneTrace(data), self.trace_buffers)
>> +        iters = [self.oneTrace(data) for data in self.trace_buffers]
>>          return heapq.merge(*iters)
>>
>>
>> @@ -523,7 +532,7 @@ def read(buffer_view):
>>
>>      while unpacker:
>>          tp_key, thread_ptr, thread_name, time, cpu = 
>> unpacker.unpack('QQ16sQI')
>> -        thread_name = thread_name.rstrip('\0')
>> +        thread_name = thread_name.rstrip(b'\0').decode('utf-8')
>>          tp = tracepoints[tp_key]
>>
>>          backtrace = []
>> @@ -551,7 +560,7 @@ def write(traces, writer):
>>                      trace.time, trace.cpu)
>>
>>          if trace.backtrace:
>> -            for frame in filter(None, trace.backtrace):
>> +            for frame in [_f for _f in trace.backtrace if _f]:
>>                  packer.pack('Q', frame)
>>          packer.pack('Q', 0)
>>
>> diff --git a/scripts/osv/tree.py b/scripts/osv/tree.py
>> index 594b00e2..86345157 100644
>> --- a/scripts/osv/tree.py
>> +++ b/scripts/osv/tree.py
>> @@ -18,11 +18,11 @@ class TreeNode(object):
>>
>>      def squash_child(self):
>>          assert self.has_only_one_child()
>> -        self.children_by_key = 
>> next(self.children_by_key.itervalues()).children_by_key
>> +        self.children_by_key = 
>> next(iter(self.children_by_key.values())).children_by_key
>>
>>      @property
>>      def children(self):
>> -        return self.children_by_key.itervalues()
>> +        return iter(self.children_by_key.values())
>>
>>      def has_only_one_child(self):
>>          return len(self.children_by_key) == 1
>> diff --git a/scripts/trace.py b/scripts/trace.py
>> index 34cfb2ab..1b35e568 100755
>> --- a/scripts/trace.py
>> +++ b/scripts/trace.py
>> @@ -1,4 +1,4 @@
>> -#!/usr/bin/env python2
>> +#!/usr/bin/env python3
>>  import sys
>>  import errno
>>  import argparse
>> @@ -13,6 +13,7 @@ from collections import defaultdict
>>  from osv import trace, debug, prof
>>  from osv.client import Client
>>  import memory_analyzer
>> +from functools import reduce
>>
>>  class InvalidArgumentsException(Exception):
>>      def __init__(self, message):
>> @@ -114,7 +115,7 @@ def list_trace(args):
>>      with get_trace_reader(args) as reader:
>>          for t in reader.get_traces():
>>              if t.time in time_range:
>> -                print t.format(backtrace_formatter, 
>> data_formatter=data_formatter)
>> +                print(t.format(backtrace_formatter, 
>> data_formatter=data_formatter))
>>
>>  def mem_analys(args):
>>      mallocs = {}
>> @@ -276,7 +277,7 @@ def extract(args):
>>              stderr=subprocess.STDOUT)
>>          _stdout, _ = proc.communicate()
>>          if proc.returncode or not os.path.exists(args.tracefile):
>> -            print(_stdout)
>> +            print(_stdout.decode())
>>              sys.exit(1)
>>      else:
>>          print("error: %s not found" % (elf_path))
>> @@ -332,8 +333,10 @@ def write_sample_to_pcap(sample, pcap_writer):
>>          }
>>
>>          pkt = dpkt.ethernet.Ethernet()
>> -        pkt.data = sample.data[1]
>>          pkt.type = eth_types[proto]
>> +        pkt.src = b''
>> +        pkt.dst = b''
>> +        pkt.data = sample.data[1]
>>          pcap_writer.writepkt(pkt, ts=ts)
>>
>>  def format_packet_sample(sample):
>> @@ -343,7 +346,7 @@ def format_packet_sample(sample):
>>      pcap = dpkt.pcap.Writer(proc.stdin)
>>      write_sample_to_pcap(sample, pcap)
>>      pcap.close()
>> -    assert(proc.stdout.readline() == "reading from file -, link-type 
>> EN10MB (Ethernet)\n")
>> +    assert(proc.stdout.readline().decode() == "reading from file -, 
>> link-type EN10MB (Ethernet)\n")
>>      packet_line = proc.stdout.readline().rstrip()
>>      proc.wait()
>>      return packet_line
>> @@ -361,7 +364,7 @@ def pcap_dump(args, target=None):
>>      needs_dpkt()
>>
>>      if not target:
>> -        target = sys.stdout
>> +        target = sys.stdout.buffer
>>
>>      pcap_file = dpkt.pcap.Writer(target)
>>      try:
>> @@ -439,7 +442,10 @@ def print_summary(args, printer=sys.stdout.write):
>>                  else:
>>                      min_time = min(min_time, t.time)
>>
>> -                max_time = max(max_time, t.time)
>> +                if not max_time:
>> +                    max_time = t.time
>> +                else:
>> +                    max_time = max(max_time, t.time)
>>
>>              if args.timed:
>>                  timed = timed_producer(t)
>> @@ -450,42 +456,42 @@ def print_summary(args, printer=sys.stdout.write):
>>          timed_samples.extend((timed_producer.finish()))
>>
>>      if count == 0:
>> -        print "No samples"
>> +        print("No samples")
>>          return
>>
>> -    print "Collected %d samples spanning %s" % (count, 
>> prof.format_time(max_time - min_time))
>> +    print("Collected %d samples spanning %s" % (count, 
>> prof.format_time(max_time - min_time)))
>>
>> -    print "\nTime ranges:\n"
>> -    for cpu, r in sorted(cpu_time_ranges.items(), key=lambda (c, r): 
>> r.min):
>> -        print "  CPU 0x%02d: %s - %s = %10s" % (cpu,
>> +    print("\nTime ranges:\n")
>> +    for cpu, r in sorted(list(cpu_time_ranges.items()), key=lambda c_r: 
>> c_r[1].min):
>> +        print("  CPU 0x%02d: %s - %s = %10s" % (cpu,
>>              trace.format_time(r.min),
>>              trace.format_time(r.max),
>> -            prof.format_time(r.max - r.min))
>> +            prof.format_time(r.max - r.min)))
>>
>> -    max_name_len = reduce(max, map(lambda tp: len(tp.name), 
>> count_per_tp.iterkeys()))
>> +    max_name_len = reduce(max, [len(tp.name) for tp in 
>> iter(count_per_tp.keys())])
>>      format = "  %%-%ds %%8s" % (max_name_len)
>> -    print "\nTracepoint statistics:\n"
>> -    print format % ("name", "count")
>> -    print format % ("----", "-----")
>> +    print("\nTracepoint statistics:\n")
>> +    print(format % ("name", "count"))
>> +    print(format % ("----", "-----"))
>>
>> -    for tp, count in sorted(count_per_tp.iteritems(), key=lambda (tp, 
>> count): tp.name):
>> -        print format % (tp.name, count)
>> +    for tp, count in sorted(iter(count_per_tp.items()), key=lambda 
>> tp_count: tp_count[0].name):
>> +        print(format % (tp.name, count))
>>
>>      if args.timed:
>>          format = "  %-20s %8s %8s %8s %8s %8s %8s %8s %15s"
>> -        print "\nTimed tracepoints [ms]:\n"
>> +        print("\nTimed tracepoints [ms]:\n")
>>
>> -        timed_samples = filter(lambda t: 
>> t.time_range.intersection(time_range), timed_samples)
>> +        timed_samples = [t for t in timed_samples if 
>> t.time_range.intersection(time_range)]
>>
>>          if not timed_samples:
>> -            print "  None"
>> +            print("  None")
>>          else:
>> -            print format % ("name", "count", "min", "50%", "90%", "99%", 
>> "99.9%", "max", "total")
>> -            print format % ("----", "-----", "---", "---", "---", "---", 
>> "-----", "---", "-----")
>> +            print(format % ("name", "count", "min", "50%", "90%", "99%", 
>> "99.9%", "max", "total"))
>> +            print(format % ("----", "-----", "---", "---", "---", "---", 
>> "-----", "---", "-----"))
>>
>> -            for name, traces in 
>> get_timed_traces_per_function(timed_samples).iteritems():
>> +            for name, traces in 
>> get_timed_traces_per_function(timed_samples).items():
>>                  samples = 
>> sorted(list((t.time_range.intersection(time_range).length() for t in 
>> traces)))
>> -                print format % (
>> +                print(format % (
>>                      name,
>>                      len(samples),
>>                      format_duration(get_percentile(samples, 0)),
>> @@ -494,9 +500,9 @@ def print_summary(args, printer=sys.stdout.write):
>>                      format_duration(get_percentile(samples, 0.99)),
>>                      format_duration(get_percentile(samples, 0.999)),
>>                      format_duration(get_percentile(samples, 1)),
>> -                    format_duration(sum(samples)))
>> +                    format_duration(sum(samples))))
>>
>> -    print
>> +    print()
>>
>>  def list_cpu_load(args):
>>      load_per_cpu = {}
>> @@ -550,7 +556,7 @@ def list_timed(args):
>>
>>          for timed in timed_traces:
>>              t = timed.trace
>> -            print '0x%016x %-15s %2d %20s %7s %-20s %s%s' % (
>> +            print('0x%016x %-15s %2d %20s %7s %-20s %s%s' % (
>>                              t.thread.ptr,
>>                              t.thread.name,
>>                              t.cpu,
>> @@ -558,7 +564,7 @@ def list_timed(args):
>>                              trace.format_duration(timed.duration),
>>                              t.name,
>>                              trace.Trace.format_data(t),
>> -                            bt_formatter(t.backtrace))
>> +                            bt_formatter(t.backtrace)))
>>
>>  def list_wakeup_latency(args):
>>      bt_formatter = get_backtrace_formatter(args)
>> @@ -575,9 +581,9 @@ def list_wakeup_latency(args):
>>          return "%4.6f" % (float(nanos) / 1e6)
>>
>>      if not args.no_header:
>> -        print '%-18s %-15s %3s %20s %13s %9s %s' % (
>> +        print('%-18s %-15s %3s %20s %13s %9s %s' % (
>>              "THREAD", "THREAD-NAME", "CPU", "TIMESTAMP[s]", 
>> "WAKEUP[ms]", "WAIT[ms]", "BACKTRACE"
>> -        )
>> +        ))
>>
>>      with get_trace_reader(args) as reader:
>>          for t in reader.get_traces():
>> @@ -594,14 +600,14 @@ def list_wakeup_latency(args):
>>                      if t.cpu == waiting_thread.wait.cpu:
>>                          wakeup_delay = t.time - waiting_thread.wake.time
>>                          wait_time = t.time - waiting_thread.wait.time
>> -                        print '0x%016x %-15s %3d %20s %13s %9s %s' % (
>> +                        print('0x%016x %-15s %3d %20s %13s %9s %s' % (
>>                                      t.thread.ptr,
>>                                      t.thread.name,
>>                                      t.cpu,
>>                                      trace.format_time(t.time),
>>                                      format_wakeup_latency(wakeup_delay),
>>                                      trace.format_duration(wait_time),
>> -                                    bt_formatter(t.backtrace))
>> +                                    bt_formatter(t.backtrace)))
>>
>>  def add_trace_listing_options(parser):
>>      add_time_slicing_options(parser)
>> @@ -615,7 +621,7 @@ def convert_dump(args):
>>          if os.path.exists(args.tracefile):
>>              os.remove(args.tracefile)
>>              assert(not os.path.exists(args.tracefile))
>> -        print "Converting dump %s -> %s" % (args.dumpfile, 
>> args.tracefile)
>> +        print("Converting dump %s -> %s" % (args.dumpfile, 
>> args.tracefile))
>>          td = trace.TraceDumpReader(args.dumpfile)
>>          trace.write_to_file(args.tracefile, list(td.traces()))
>>      else:
>> @@ -631,7 +637,7 @@ def download_dump(args):
>>      client = Client(args)
>>      url = client.get_url() + "/trace/buffers"
>>
>> -    print "Downloading %s -> %s" % (url, file)
>> +    print("Downloading %s -> %s" % (url, file))
>>
>>      r = requests.get(url, stream=True, **client.get_request_kwargs())
>>      size = int(r.headers['content-length'])
>> @@ -641,7 +647,7 @@ def download_dump(args):
>>          for chunk in r.iter_content(8192):
>>              out_file.write(chunk)
>>              current += len(chunk)
>> -            sys.stdout.write("[{0:8d} / {1:8d} k] {3} 
>> {2:.2f}%\r".format(current/1024, size/1024, 100.0*current/size, 
>> ('='*32*(current/size)) + '>'))
>> +            sys.stdout.write("[{0:8d} / {1:8d} k] {3} 
>> {2:.2f}%\r".format(current//1024, size//1024, 100.0*current//size, 
>> ('='*32*(current//size)) + '>'))
>>              if current >= size:
>>                  sys.stdout.write("\n")
>>              sys.stdout.flush()
>> @@ -789,7 +795,7 @@ if __name__ == "__main__":
>>      args = parser.parse_args()
>>
>>      if getattr(args, 'paginate', False):
>> -        less_process = subprocess.Popen(['less', '-FX'], 
>> stdin=subprocess.PIPE)
>> +        less_process = subprocess.Popen(['less', '-FX'], 
>> stdin=subprocess.PIPE, text=True)
>>          sys.stdout = less_process.stdin
>>      else:
>>          less_process = None
>> @@ -797,7 +803,7 @@ if __name__ == "__main__":
>>      try:
>>          args.func(args)
>>      except InvalidArgumentsException as e:
>> -        print "Invalid arguments:", e.message
>> +        print("Invalid arguments:", e.message)
>>      except IOError as e:
>>          if e.errno != errno.EPIPE:
>>              raise
>> -- 
>> 2.20.1
>>
>
