Anarchopulco considered harmful

2022-07-07 Thread professor rat
These various 'anarchist' ventures are bad, playthings of the evil wealthy,
anti-democratic; even loose affiliates of these ventures are bad people, and
so on.

https://marginalrevolution.com/marginalrevolution/2022/07/adventure-capitalism.html#comments

TL/DR Jeff Berwick still needs killing ( My 2 Sats )


FBI and MI5 issue rare joint warning for all iPhone and Android users over growing China cybersecurity attacks

2022-07-07 Thread jim bell
FBI and MI5 issue rare joint warning for all iPhone and Android users over
growing China cybersecurity attacks
https://share.newsbreak.com/1ejnpwln

"DIRECTORS from the top intelligence agencies representing the United States 
and the United Kingdom have appeared together to make a forceful statement.

"Statements indicate Western intelligence agencies are suspicious of potential 
cybercrime and espionage operations orchestrated by China.
[Photo caption: MI5 Director General Ken McCallum and FBI Director Christopher
Wray appeared together. Credit: PA]
[Photo caption: Chinese President Xi Jinping had term limits scrapped so he
could remain in power. Credit: Alamy Live News]
“Today is the first time the heads of the FBI and MI5 have shared a public 
platform,” MI5 Director General Ken McCallum told reporters from the podium at
the MI5 headquarters in London.

“We’re doing so to send the clearest signal we can on a massive shared 
challenge: China.”

In the joint appearance, the two directors denounced Chinese activity that
could negatively impact the global economy.



Enter the Dragon

2022-07-07 Thread professor rat
workerpoet
Jul 7   #17959  
Greatly enjoyed reading this. In practice, the Spanish Mondragon cooperatives
seem to me a more direct and practical route to the Marxist ideal than
attempts at derivative Leninism. For us, building cooperatives and networks of
cooperatives builds capitalism's replacement. For Cuba and other socialist
states struggling under constant assault by militarized capital, progress is
especially challenged and undermined, and must be defended to preserve even a
foundation on which to build the commune.

( Repost etc ) 


Re: Drug smuggling: Underwater drones seized by Spanish police

2022-07-07 Thread grarpamp
>> On 7/6/22, punk  wrote:
>>> accusing me of being "pro argentine government"
>>
>> Nobody has to accuse you of that, it's plain fact.
>> You use their roads ...

> haha
> ...

Classic reactionary affirmation via faux humor,
an attempted redirect, not a refutation.
No true anarchist as JuanG self-proclaims to be would use
govt roads, or do anything above. Thus guilty as charged,
JuanG is in fact a "sucker" of govt's "cock", case closed.
Have fun using your govt tax-abated computer to email
the bitbucket for a while. Bye ;]


Cryptocurrency: BTC Silent About NIST PQC And Looming Classical Break

2022-07-07 Thread grarpamp
The threat to classical crypto has been known and growing for many years.
NIST has now approved PQC algorithms.
Bitcoin-BTC says nothing about moving to PQC
to secure users' funds.

Move to and use privacy-enabled PQC coins.


Cryptocurrency: US Bans Crypto Owning Officials From Public Service

2022-07-07 Thread grarpamp
What about owning stocks, fiat, cars, houses, clothing,
food... owning anything at all? By the logic that ownership
confers influence over a thing, including property,
US politicians should be jailed for owning anything
at all, and "be happy" about it ;)

https://thecryptobasic.com/2022/07/07/us-officials-owning-crypto-banned-from-working-on-crypto-regulations/


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
# find leaf count and leaf depths
leaves = [*self.leaves()]
max_depth = len(leaves).bit_length()
current_entry = None
for leaf_depth, leaf_offset, leaf_data in leaves:
    # leaves that have depth that is too deep can be added to the index so as to find them quickly
    # or a parent of them could be added

    # when we find a leaf that is too deep, we can start an index entry
    # we can then continue moving on, and move the index entry to its parent to include adjacent leaves.
    # if we reach prev_flush as the only shared parent, then we write out the entry using the last too-deep item as the endpoint
    # instead if we find another too deep item, we make it the last, and continue on
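
a sketch of one way the scan those comments describe could look; this is my
guess at the intended behavior, not settled code. max_depth and the
(depth, offset, data) leaf tuples are assumed from the drafts in this thread:

def too_deep_ranges(leaves, max_depth):
    # collect contiguous ranges of leaves whose depth exceeds max_depth,
    # as candidate index entries for the next flush
    ranges = []
    current = None  # (start, end) of the index entry being grown
    for leaf_depth, leaf_offset, leaf_data in leaves:
        leaf_end = leaf_offset + len(leaf_data)
        if leaf_depth > max_depth:
            # too deep: open a new entry, or grow the open one
            current = (current[0], leaf_end) if current else (leaf_offset, leaf_end)
        elif current is not None:
            # back to a shallow leaf: close out the open entry
            ranges.append(current)
            current = None
    if current is not None:
        ranges.append(current)
    return ranges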


Better get yourself a law-degree, son

2022-07-07 Thread professor rat
Better get a real good one

emptywheel:
Sabrina Shroff schooled her client well...

Quoting Inner City Press (@innercitypress):
Judge Furman: Ladies and gentlemen, we'll have another break before the
rebuttal.
All rise - jury exits.
Judge Furman: Mr. Schulte, that was very impressively done. Depending on what
happens here you may have a future as a defense lawyer.
[Break of 30 - story soon]


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
subgoal met, passes test

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [Simple.Chunk(prev_flush.start, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        def __len__(self):
            return self.end - self.start

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            #print('leaves', depth, self, start, end)
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    if offset < next_write.end:
                        # offset >= next_write
                        # so we look in the write
                        substart = offset - next_write.start
                        subend = min(end, next_write.end) - next_write.start
                        assert subend >= substart
                        #print('yielding', depth, self, offset, subend + next_write.start)
                        yield (depth, offset, next_write.data[substart:subend])
                        offset += subend - substart
                        assert offset <= end
                    next_write = next(data_iter, None)
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = min(next_write.start, end) if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    assert subend <= end
                    yield from next_index.data.leaves(offset, subend, depth + 1)
                    offset = subend
                    assert offset <= end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        #print('write', offset, offset+len(data))
        for idx, pending in [*enumerate(self.pending)]:
            if pending.start <= offset + len(data) and pending.end >= offset:
                # merge pending with data
                # then merge pending with neighbors
                if offset > pending.start:
                    merged_data = pending.data[:offset - pending.start] + data
                else:
                    merged_data = data
                if offset + len(data) < pending.end:
                    merged_data = merged_data + pending.data[offset + len(data) - pending.start:]
                # we could maybe merge 'merged' via recursion
                # basically we'd excise pending, and then write merged
                self.pending.pop(idx)
                return self.write(min(offset, pending.start), merged_data)
            elif pending.start > offset + len(data):
                # passed this data without overlap
                return self.pending.insert(idx, self.Chunk(offset, data))

Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many


# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [Simple.Chunk(prev_flush.start, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        def __len__(self):
            return self.end - self.start

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            print('leaves', depth, self, start, end)
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    if offset < next_write.end:
                        # offset >= next_write
                        # so we look in the write
                        substart = offset - next_write.start
                        subend = min(end, next_write.end) - next_write.start
                        assert subend >= substart
                        print('yielding', depth, self, offset, subend + next_write.start)
                        yield (depth, offset, next_write.data[substart:subend])
                        offset += subend - substart
                        assert offset <= end
                    next_write = next(data_iter, None)
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = min(next_write.start, end) if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    assert subend <= end
                    yield from next_index.data.leaves(offset, subend, depth + 1)
                    offset = subend
                    assert offset <= end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        print('write', offset, offset+len(data))
        for idx, pending in [*enumerate(self.pending)]:
            if pending.start <= offset + len(data) and pending.end >= offset:
                # merge pending with data
                # then merge pending with neighbors

                # so, algorithm could simplify to inserting in a sorted manner, and then merging neighbors, except we want to ensure that this data replaces old data

                merged_data = pending.data[:offset - pending.start] + data + pending.data[offset + len(data) - pending.start:]
                # we could maybe merge 'merged' via recursion
                # basically we'd excise pending, and then write merged
                self.pending.pop(idx)
                return self.write(min(offset, pending.start), merged_data)
            elif pending.start > offset + len(data):
                # passed this data without overlap
                return self.pending.insert(idx, self.Chunk(offset, data))
        self.pending.append(self.Chunk(offset, data))

    def flush(self):
        self.tip = self.Flush(*self.pending, prev_flush=self.tip)

Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many


# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [Simple.Chunk(prev_flush.start, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        def __len__(self):
            return self.end - self.start

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            print('leaves', depth, self, start, end)
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    if offset < next_write.end:
                        assert offset <= next_write.end # haven't considered what this means
                        # offset >= next_write
                        # so we look in the write
                        substart = offset - next_write.start
                        subend = min(end, next_write.end) - next_write.start
                        assert subend >= substart
                        print('yielding', depth, self, offset, subend + next_write.start)
                        yield (depth, offset, next_write.data[substart:subend])
                        offset += subend - substart
                        assert offset <= end
                    next_write = next(data_iter, None)
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = min(next_write.start, end) if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    assert subend <= end
                    yield from next_index.data.leaves(offset, subend, depth + 1)
                    offset = subend
                    assert offset <= end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        for idx, pending in [*enumerate(self.pending)]:
            if pending.start <= offset + len(data) and pending.end >= offset:
                # merge pending with data
                # then merge pending with neighbors

                # so, algorithm could simplify to inserting in a sorted manner, and then merging neighbors, except we want to ensure that this data replaces old data

                merged_data = pending.data[:offset - pending.start] + data + pending.data[offset + len(data) - pending.start:]
                # we could maybe merge 'merged' via recursion
                # basically we'd excise pending, and then write merged
                self.pending.pop(idx)
                return self.write(min(offset, pending.start), merged_data)
            elif pending.start > offset + len(data):
                # passed this data without overlap
                return self.pending.insert(idx, self.Chunk(offset, data))
        self.pending.append(self.Chunk(offset, data))

Re: [ot][wrong] Please let me sleep outdoors in the coldest winter. Was: Re: PLEASE LET ME SLEEP OUTDOORS

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
<3

4/0:17.83 53.7F


Re: confirm 3f2f27df53ef674dc692480950947e293cff93a6

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
I'm guessing you came back to your email and found a volley of
messages from me that you did not want to see, that made it hard for
you to find other emails.

On 7/7/22, cypherpunks-requ...@lists.cpunks.org
 wrote:
> Mailing list removal confirmation notice for mailing list cypherpunks
>
> We have received a request from 23.154.177.2 for the removal of your
> email address, "gmk...@gmail.com" from the
> cypherpunks@lists.cpunks.org mailing list.  To confirm that you want
> to be removed from this mailing list, simply reply to this message,
> keeping the Subject: header intact.  Or visit this web page:
>
>
> https://lists.cpunks.org/mailman/confirm/cypherpunks/3f2f27df53ef674dc692480950947e293cff93a6
>
>
> Or include the following line -- and only the following line -- in a
> message to cypherpunks-requ...@lists.cpunks.org:
>
> confirm 3f2f27df53ef674dc692480950947e293cff93a6
>
> Note that simply sending a `reply' to this message should work from
> most mail readers, since that usually leaves the Subject: line in the
> right form (additional "Re:" text in the Subject: is okay).
>
> If you do not wish to be removed from this list, please simply
> disregard this message.  If you think you are being maliciously
> removed from the list, or have any other questions, send them to
> cypherpunks-ow...@lists.cpunks.org.
>


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
here's another bug bandaid
this one got through the 3rd flush

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [Simple.Chunk(prev_flush.start, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        def __len__(self):
            return self.end - self.start

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            print('leaves', depth, self, start, end)
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    # offset >= next_write
                    # so we look in the write
                    substart = offset - next_write.start
                    subend = min(end, next_write.end) - next_write.start
                    print('yielding', depth, self, offset, subend + next_write.start)
                    yield (depth, offset, next_write.data[substart:subend])
                    offset += subend - substart
                    assert offset <= end
                    next_write = next(data_iter, None)
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = min(next_write.start, end) if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    assert subend <= end
                    yield from next_index.data.leaves(offset, subend, depth + 1)
                    offset = subend
                    assert offset <= end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        for idx, pending in [*enumerate(self.pending)]:
            if pending.start <= offset + len(data) and pending.end >= offset:
                # merge pending with data
                # then merge pending with neighbors

                # so, algorithm could simplify to inserting in a sorted manner, and then merging neighbors, except we want to ensure that this data replaces old data

                merged_data = pending.data[:offset - pending.start] + data + pending.data[offset + len(data) - pending.start:]
                # we could maybe merge 'merged' via recursion
                # basically we'd excise pending, and then write merged
                self.pending.pop(idx)
                return self.write(min(offset, pending.start), merged_data)
            elif pending.start > offset + len(data):
                # passed this data without overlap
                return self.pending.insert(idx, self.Chunk(offset, data))
        self.pending.append(self.Chunk(offset, data))

    def flush(self):
        self.tip = self.Flush(*self.pending, prev_flush=self.tip)
        self.pending = []
    def leaves(self, start = None, end = None):
        if self.tip is not None:
            return self.tip.leaves(start, end)

Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
i added debugging statements and fixed 1 bug i think
now it seems it continues yielding leaves after it reaches the end

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [Simple.Chunk(prev_flush.start, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        def __len__(self):
            return self.end - self.start

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            print('leaves', self, start, end, depth)
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    # offset >= next_write
                    # so we look in the write
                    substart = offset - next_write.start
                    subend = min(end, next_write.end) - next_write.start
                    print('yielding', depth, self, offset, subend + next_write.start)
                    yield (depth, offset, next_write.data[substart:subend])
                    offset += subend - substart
                    assert offset <= end
                    next_write = next(data_iter, None)
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = next_write.start if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    yield from next_index.data.leaves(offset, subend, depth + 1)
                    offset = subend
                    assert offset <= end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        for idx, pending in [*enumerate(self.pending)]:
            if pending.start <= offset + len(data) and pending.end >= offset:
                # merge pending with data
                # then merge pending with neighbors

                # so, algorithm could simplify to inserting in a sorted manner, and then merging neighbors, except we want to ensure that this data replaces old data

                merged_data = pending.data[:offset - pending.start] + data + pending.data[offset + len(data) - pending.start:]
                # we could maybe merge 'merged' via recursion
                # basically we'd excise pending, and then write merged
                self.pending.pop(idx)
                return self.write(min(offset, pending.start), merged_data)
            elif pending.start > offset + len(data):
                # passed this data without overlap
                return self.pending.insert(idx, self.Chunk(offset, data))
        self.pending.append(self.Chunk(offset, data))

    def flush(self):
        self.tip = self.Flush(*self.pending, prev_flush=self.tip)
        self.pending = []
    def leaves(self, start = None, end = None):
        if self.tip is not None:
            return self.tip.leaves(start, end)

Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
i dropped the writes to 1 per flush and it failed on the 3rd flush


Fwd: Your Freedom of Information Law ("FOIL") Requests

2022-07-07 Thread Gunnar Larson
Perhaps xNY.io - Bank.org, PBC is a New York phenomenon.

The issue is that all 10 FOIL requests are delayed, and we fear obstruction.

We may set up a digital asset fund to full-court press each one. Any
interest?


-- Forwarded message -
From: Mazza, Stephanie (DFS) 
Date: Wed, Jun 29, 2022, 12:07 PM
Subject: RE: Your Freedom of Information Law ("FOIL") Requests
To: g...@xny.io 


Mr. Larson:



Attached please find a list of all ten open FOIL requests you have
submitted to the Department, followed by the text of your request, and the
expected due date:



   1. 2022-091718: “Dear Sir or Madam: International human rights scholars
   are quick to point out that the lack of protection of cognitive liberty in
   such instances is due to the relative lack of technology capable of
   directly interfering with mental autonomy at the time the core human rights
   treaties were created. Similar to a ransomware attack, the technology
   behind such operations can be abused. Canada is said to have exploited
   advanced technologies without the authority to do so. Even worse, it is
   alleged that Canada forcefully abused technology in the unsanctioned
   production of reports that appeared to be aimed at cognitive activities of
   Canadians. Reference:
   
https://ottawacitizen.com/news/local-news/havana-syndrome-part-2-how-a-dogs-brain-may-help-solve-the-mystery-of-canadian-diplomats-cuban-nightmare
   Other reports highlight similar technologies being explored by the New York
   City Police Department. In 2021, members of the National Lawyers Guild won
   $650,000 in litigation financed fees from abuse of the technology in New
   York. We would like to request access to any and all records pertaining to
   information concerning if a St. Bernard named "Brody" could potentially be
   harmed (or, harassed) by sonic or microwave (or, any other form of
   technology similar) surveillance while living in Chelsea, Manhattan.
   Furthermore, we would like to receive all records pertaining to New York
   State's approach to Gotham software intellectual property. Respectfully
   yours, Gunnar Larson” *8/2/22*



   2. 2022-091109: “Today’s memo seeks to kindly request any and all
   records related to the Executive Chamber’s oversight and/or approval of the
   New York Department of Financial Services’ engagement of the Alliance for
   Regulatory Innovation and Brex, Inc.” *7/25/22*



   3. 2022-090627: “Dear Madam or Sir: Today's memo requests records
   related to NY-DFS monitoring of $10,610,000,000 of pledge funds (detailed
   below) specific to financial inclusion. xNY.io would like to receive any
   and all records related to NY-DFS' monitoring of Goldman Sachs' potential
   money laundering and other abuses of Goldman project, One Million Black
   Women, with $10B in direct capital and $100M in charity investments:
   
https://www.goldmansachs.com/our-commitments/sustainability/one-million-black-women/
   xNY.io would like to receive any and all documents correlating NY-DFS
   monitoring of (or, confirming no correlation) the One Million Black Women
   fund related to any association to other Goldman student scholarship funds
   in Africa ... Furthermore, xNY.io would like to receive any and all
   documents, related to NY-DFS' regulatory scrutiny of interlocking
   directorates at the NAACP, Goldman Sachs, Wells Fargo and PayPal with
   potential collusion in leveraging marketplace spoofing tactics (or, records
   confirming no spoofing tactics) concerning $810 million in pledge funds
   associated with the 2020 Mission Driven Banks whitepaper:
   
https://drive.google.com/file/d/1rOXE7ierZcd8HlvNVy7hAYg7d86q_1dv/view?usp=sharing
   Attached and below, kindly find outline of the $810M pledged: Goldman Sachs
   Industry: Financial Services Pledge Amount: $250 million Use of Funds:
   Goldman Sachs has committed $250 million for small business lending.
   Goldman Sachs will not issue these loans directly since it is not an
   approved small business loan provider in the United States. Instead, it
   will provide the financing to CDFIs and other mission-driven lenders to
   make the loans. Wells Fargo Industry: Banking Pledge Amount: $50 million
   Use of Funds: Wells Fargo has pledged a $50 million investment in African
   American MDIs. See Press Release PayPal Industry: Financial Services Pledge
   Amount: $510 million Use of Funds: PayPal has pledged $500 million for a
   long-term economic opportunity fund to support African American and
   underrepresented minority businesses and communities. The initiative will
   include “bolstering the company’s relationships with community banks and
   credit unions serving underrepresented minority communities, as well as
   investing directly into African American and minority led startups and
   minority-focused investment funds.” PayPal deposited $50 million in Optus
   Bank, an African American MDI in Columbia, South Carolina. PayPal will use
   another 

Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
more logic errors engaged

the first pass works now
thinking it does too many passes to actually make a graph, which is
fine for now since flushing after the first is failing.

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [Simple.Chunk(prev_flush.start, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        def __len__(self):
            return self.end - self.start

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    # offset >= next_write
                    # so we look in the write
                    substart = offset - next_write.start
                    subend = min(end, next_write.end)
                    yield (depth, offset, next_write.data[substart:subend])
                    offset += subend - substart
                    next_write = next(data_iter, None)
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = next_write.start if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    yield from next_index.data.leaves(offset, subend, depth + 1)
                    offset = subend
            assert offset == end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        for idx, pending in [*enumerate(self.pending)]:
            if pending.start <= offset + len(data) and pending.end >= offset:
                # merge pending with data
                # then merge pending with neighbors

                # so, algorithm could simplify to inserting in a sorted manner, and then merging neighbors, except we want to ensure that this data replaces old data

                merged_data = pending.data[:offset - pending.start] + data + pending.data[offset + len(data) - pending.start:]
                # we could maybe merge 'merged' via recursion
                # basically we'd excise pending, and then write merged
                self.pending.pop(idx)
                return self.write(min(offset, pending.start), merged_data)
            elif pending.start > offset + len(data):
                # passed this data without overlap
                return self.pending.insert(idx, self.Chunk(offset, data))
        self.pending.append(self.Chunk(offset, data))

    def flush(self):
        self.tip = self.Flush(*self.pending, prev_flush=self.tip)
        self.pending = []

    def leaves(self, start = None, end = None):
        if self.tip is not None:
            return self.tip.leaves(start, end)

if __name__ == '__main__':
    import random
    SIZE=4096
 

Fwd: Alliance for Innovative Regulation

2022-07-07 Thread Gunnar Larson
-- Forwarded message -
From: Gunnar Larson 
Date: Thu, May 19, 2022, 2:20 PM
Subject: Alliance for Innovative Regulation
To: cypherpunks 


https://regulationinnovation.org/team/jo-ann-barefoot/

Jo Ann BarefootFounder & CEO

Jo Ann Barefoot is CEO & Founder of AIR - the Alliance for Innovative
Regulation, Cofounder of Hummingbird Regtech, and host of the podcast show
Barefoot Innovation. A noted advocate of “regulation innovation,” Jo Ann is
Senior Fellow Emerita at the Harvard Kennedy School Center for Business &
Government. She has been Deputy Comptroller of the Currency, partner at
KPMG, Co-Chairman of Treliant Risk Advisors, and staff member at the U.S.
Senate Banking Committee. She’s an angel investor, serves on the board of
Oportun, serves on the fintech advisory committee for FINRA, is a member of
the Milken Institute U.S. FinTech Advisory Committee, and is a member of
the California Blockchain Working Group Advisory Board. Jo Ann chairs the
board of directors of FinRegLab, previously chaired the board of the
Financial Health Network, and previously served on the CFPB’s Consumer
Advisory Board. In 2020 Jo Ann was inducted into the Fintech Hall of Fame
by CB Insights.

My work centers on convergence, the places where disparate currents flow
together and form something new. At Barefoot Innovation Group, we are
finding better solutions for financial consumers at the confluence of
technology, markets, demographics, cultures, science, the arts, global
poverty reduction, and, crucially, regulation. As part of this search we
discovered the need for my new nonprofit, AIR - the Alliance for Innovative
Regulation. Transformation is coming — Fintech, Regtech, and “regulation
innovation.” Here are some of the connections I’ve put together in my life.
I am:

   - CEO and Cofounder of AIR - the Alliance for Innovative Regulation
   - CEO of Barefoot Innovation Group
   - Senior Fellow Emerita at Harvard University’s John F. Kennedy School of
   Government’s Center for Business & Government.
   - Publishing a Harvard Kennedy School M-RCBG Associate Working Paper
   series on regulation innovation and working on a book on it as well
   - Host of the global podcast show Barefoot Innovation.
   - Co-founder of Hummingbird Regtech, designing software to fight
   financial crime.
   - A global public speaker, reaching thousands of people throughout the
   world each year including with my speech, Never Fear.
   - Board member at Oportun.
   - Chair of the board of FinRegLab.
   - An angel investor and an adviser to fintech startups.
   - A convener of roundtables and cross-pollination of ideas.
   - An original member of the CFPB’s Consumer Advisory Board.
   - Former chair of the board of the Financial Health Network.
   - Board member of the National Foundation for Credit Counseling (NFCC).
   - Member of the Milken Institute FinTech Advisory Committee.
   - Member of the fintech committee of FINRA
   - Member of the California Blockchain Working Group Advisory Board.
   - Author of several books and nearly 200 articles and contributor to
   Forbes.
   - Author of major white papers, including the Regtech Manifesto.
   - Media source, frequent podcast guest, and expert witness at hearings
   of Congress and federal agencies.
   - The first woman to serve as Deputy Comptroller of the Currency, where
   I led the OCC’s original consumer protection unit and later directed new
   media, Congressional and interagency relationships, including for the
   Comptroller’s role on the board of the FDIC.
   - Former Staff Member at the U.S. Senate Banking Committee.
   - Former Co-Chair of the consulting firm Treliant Risk Advisers.
   - Former Partner and Managing Director at KPMG, heading consumer
   financial regulatory and privacy advisory teams.
   - A serial entrepreneur, starting several consulting and technology
   firms.
   - Former Director of Mortgage Finance for the National Association of
   Realtors, and researcher at the Department of Housing and Urban Development
   and Federal Home Loan Bank Board.
   - Volunteer leader in conservation, education, women’s opportunity and
   the arts. I have chaired the global Trustee Council of the Nature
   Conservancy and founded its initiative on Trustees Without Borders;
   previously chaired the boards of trustees of the Ohio Nature Conservancy
   and Hiram College; served on the Council of Board Chairs of the Association
   of Governing Boards of Universities and Colleges, and serve on the Board of
   the Theodore Roosevelt Conservation Partnership.
   - I’m a National Advisory Board member of the National Museum of Women
   in the Arts. I have been an International Visitor to the European Community
   and worked in distressed villages in rural India.

I love adventure and challenge. I’ve raised three amazing kids. I've
fly-fished on five continents. I’ve searched for wolves in the Arctic. I’ve
crisscrossed Alaska in small planes. I’ve been charged by an 

Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
i failed to consolidate the writes before flushing.


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
here is version with different crash. previous one was because the
leaf iterator did not handle sparse data. so i just filled the
structure with 0s before starting.

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [(prev_flush.start, prev_flush.end, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    # offset >= next_write
                    # so we look in the write
                    substart = offset - next_write.start
                    subend = min(end, next_write.end)
                    yield (depth, offset, next_write.data[substart:subend])
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = next_write.start if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    yield from next_index.leaves(offset, subend, depth + 1)
            assert offset == end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        self.pending.append(self.Chunk(offset, data))

    def flush(self):
        self.tip = self.Flush(*self.pending, prev_flush=self.tip)
        self.pending = []

    def leaves(self, start = None, end = None):
        if self.tip is not None:
            return self.tip.leaves(start, end)

if __name__ == '__main__':
    import random
    SIZE=4096
    store = Simple()
    comparison = bytearray(SIZE)
    store.write(0, bytes(SIZE))
    for flushes in range(1024):
        for writes in range(1024):
            start = random.randint(0, SIZE)
            end = random.randint(0, SIZE)
            start, end = (start, end) if start <= end else (end, start)
            data = random.randbytes(end - start)
            comparison[start:end] = data
            store.write(start, data)
        store.flush()
        last_offset = 0
        max_depth = 0
        for depth, offset, data in store.leaves():
            assert comparison[last_offset:offset] == bytes(offset - last_offset)
            last_offset = offset + len(data)
            assert comparison[offset:last_offset] == data
            max_depth = max(depth, max_depth)
        assert comparison[last_offset:] == bytes(len(comparison) - last_offset)
        print(flush, max_depth, 'OK')


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
latest topical post below. code reattached with 1 change that does not
fix the run error.

On 7/7/22, Undiscussed Horrific Abuse, One Victim of Many
 wrote:
> here's what i have now
>
> its intent is to test [the correctness] of iterating leaves when the
> structure is just appending, without using a tree.
>
> there is presently a logic error demonstrated when it is run. there
> may be many, many logic errors in it.
>
> i called the confusing class name "Chunk". seems ok.
>

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [(prev_flush.start, prev_flush.end, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    # offset >= next_write
                    # so we look in the write
                    substart = offset - next_write.start
                    subend = min(end, next_write.end)
                    yield (depth, offset, next_write.data[substart:subend])
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = next_write.start if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    yield from next_index.leaves(offset, subend, depth + 1)
            assert offset == end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        self.pending.append(self.Chunk(offset, data))

    def flush(self):
        self.tip = self.Flush(*self.pending, prev_flush=self.tip)
        self.pending = []

    def leaves(self, start = None, end = None):
        if self.tip is not None:
            return self.tip.leaves(start, end)

if __name__ == '__main__':
    import random
    SIZE=4096
    store = Simple()
    comparison = bytearray(4096)
    for flushes in range(1024):
        for writes in range(1024):
            start = random.randint(0, SIZE)
            end = random.randint(0, SIZE)
            start, end = (start, end) if start <= end else (end, start)
            data = random.randbytes(end - start)
            comparison[start:end] = data
            store.write(start, data)
        store.flush()
        last_offset = 0
        max_depth = 0
        for depth, offset, data in store.leaves():
            assert comparison[last_offset:offset] == bytes(offset - last_offset)
            last_offset = offset + len(data)
            assert comparison[offset:last_offset] == data
            max_depth = max(depth, max_depth)
        assert comparison[last_offset:] == bytes(len(comparison) - last_offset)

Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
wrong thread


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
i am receiving a mail bomb

i don't know how many mail bombs i have received, or how i used to
handle them when my mind was more together

i imagine it is normal to filter them out.


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
here's what i have now

its intent is to test [the correctness] of iterating leaves when the
structure is just appending, without using a tree.

there is presently a logic error demonstrated when it is run. there
may be many, many logic errors in it.

i called the confusing class name "Chunk". seems ok.

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class Chunk:
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = writes

            # find extents
            start = min((write.start for write in self.data))
            end = max((write.end for write in self.data))

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [(prev_flush.start, prev_flush.end, prev_flush)]

            # find leaf count and leaf depths
            #offset = start
            #while offset < end:

        #def lookup(self, offset
        def leaves(self, start = None, end = None, depth = 0):
            if start is None:
                start = self.start
            if end is None:
                end = self.end
            offset = start
            data_iter = iter(self.data)
            index_iter = iter(self.index)
            next_write = next(data_iter, None)
            next_index = next(index_iter, None)
            while offset < end:
                if next_write is not None and offset >= next_write.start:
                    # offset >= next_write
                    # so we look in the write
                    substart = next_write.start - offset
                    subend = min(end, next_write.end)
                    yield (depth, offset, next_write.data[substart:subend])
                else:
                    # offset < next_write
                    # so we look in the index
                    assert next_index is not None
                    subend = next_write.start if next_write is not None else end
                    while offset >= next_index.end:
                        next_index = next(index_iter)
                    assert offset >= next_index.start and offset < next_index.end
                    subend = min(subend, next_index.end)
                    yield from next_index.leaves(offset, subend, depth + 1)
            assert offset == end
            if end == self.end:
                assert next(index_iter, None) is None

    def __init__(self, latest = None):
        self.tip = latest
        self.pending = []

    def write(self, offset, data):
        self.pending.append(self.Chunk(offset, data))

    def flush(self):
        self.tip = self.Flush(*self.pending, prev_flush=self.tip)
        self.pending = []

    def leaves(self, start = None, end = None):
        if self.tip is not None:
            return self.tip.leaves(start, end)

if __name__ == '__main__':
    import random
    SIZE=4096
    store = Simple()
    comparison = bytearray(4096)
    for flushes in range(1024):
        for writes in range(1024):
            start = random.randint(0, SIZE)
            end = random.randint(0, SIZE)
            start, end = (start, end) if start <= end else (end, start)
            data = random.randbytes(end - start)
            comparison[start:end] = data
            store.write(start, data)
        store.flush()
        last_offset = 0
        max_depth = 0
        for depth, offset, data in store.leaves():
            assert comparison[last_offset:offset] == bytes(offset - last_offset)
            last_offset = offset + len(data)
            assert comparison[offset:last_offset] == data
            max_depth = max(depth, max_depth)
        assert comparison[last_offset:] == bytes(len(comparison) - last_offset)
        print(flush, max_depth, 'OK')


Re: [ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
i like the idea of turning the behavior of programs back into code.
it's been a theme.


Re: [ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
then maybe we could look at the script and try to guess how the author
feels that they are running it. it seems it is maybe not the best way,
but it exists.


Re: [ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
what could be even more fun might be reproducing a script that would
create removal requests with the same statistical properties ! like
reverse engineering the requests.
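
a toy sketch of that idea; the schedule and jitter parameters are made up,
not measured from the real requests:

import random

def synthetic_requests(count=10, period=2.0, jitter=0.5, t0=0.0):
    # a linear schedule of send times, plus uniform jitter per request
    return [t0 + i * period + random.uniform(0, jitter) for i in range(count)]

print(synthetic_requests())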


Re: [ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
we can form a hypothesis and pretend we are researchers or scientists
or something.

i hypothesize that it could be a uniform distribution within set ranges.

the null hypothesis for a test would then be that it _is_ uniform, and
we would look for evidence strong enough to reject that in favor of
some other shape.

how can we test this?

we would need _data_


Re: [ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
i'm pausing this goal because i worry it could be rude to analyse
somebody's statistical distribution

however i expect it to come up again as a fun goal


Re: [ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
ok: the temporary goal now is to figure out what kind of statistical
distribution the jitter in the removal requests has.

it's 1634 UTC-5.

1634 and i'm installing the google library

1635 and i've pasted their example into a file
1635. i need a file called 'credentials.json' to log in. better learn
what format it has.

1636 i skimmed the text and it looks like it's supposed to prompt me
if token.json doesn't exist.

1636 i tried setting DISPLAY but still no prompt

1637 i found this problem is discussed at
https://developers.google.com/gmail/api/quickstart/python#file_not_found_error_for_credentialsjson

1638 it looks like this library is for google cloud platform.

i guess i'd rather use a normal imap library then.

1639 this sounds pretty familiar. https://docs.python.org/3/library/imaplib.html
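
a minimal imaplib sketch of the kind of thing i mean; the host, credentials,
and search term are placeholders, not my real setup:

import imaplib

with imaplib.IMAP4_SSL('imap.example.com') as imap:
    imap.login('m...@example.com', 'app-password')  # placeholder credentials
    imap.select('INBOX', readonly=True)
    # find candidate removal-request messages by subject
    status, msgnums = imap.search(None, 'SUBJECT', '"confirm"')
    for num in msgnums[0].split():
        status, data = imap.fetch(num, '(INTERNALDATE)')
        print(data[0])  # raw arrival timestamp; parse these for charting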


Re: [ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
On 7/7/22, Undiscussed Horrific Abuse, One Victim of Many
 wrote:
> often i try to implement an append-only random-access data structure.

clarification: the intention of the structure under design is to be
stored on append-only media, but support random writes as well as
random reads.  similar to tape-based file systems, which unfortunately
tend to be very tape-specific.
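
for orientation, a minimal usage sketch of the Simple store drafted elsewhere
in this thread (assuming one of the versions that defines write, flush, and
leaves; the values are made up):

store = Simple()
store.write(0, bytes(16))   # backfill zeros so the leaf iterator sees no holes
store.write(4, b'rand')     # a random-access write into the middle
store.flush()               # persist pending writes as a new Flush node
for depth, offset, data in store.leaves():
    print(depth, offset, data)  # leaves in offset order, with lookup depth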


[ot][spam][crazy] log: append-only random-access data

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
often i try to implement an append-only random-access data structure.

it's a fun puzzle for my confused mind. i never finish it, but today i
thought about it a little bit and i think over the years i might be
able to put together a simplified form and maybe make it exist some
day.

here is where i got to right now. i started spinning out when I renamed
Simple.Write to Simple.RangedData.

I was in the middle of implementing a leaf iterator, so as to make a
simple way to count the number of data leaves and associate them with
a depth. the approach, as written in the file, is to simply relink to
leaves with excessive depth each flush, so as to maintain O(log n)
random seek time.

i vaguely recall that approach has a major issue, but i might be
remembering wrongly, and it's still quite exciting to be able to spend
some minutes trying.

# block tree structure

# blocks written to are leaves
# depth is log2 leaves

# when a flush is made, all blocks are written, and also enough nodes such that every leaf can be accessed within depth lookups.


# consider we have an existing tree
# with say m flushes, containing n leaves (or m leaves). we'll likely call it n.


# each flush shows which leaves it has

# additionally, with the final flush, each leaf has an existing depth.

# when we reflush, we need to provide a new index for any leaves that become too deep.

# which leaves are too deep?

# we could basically walk them all to find out. this would be a consistent first approach.

class Simple:
    class RangedData:
        # a leaf: a run of bytes at a fixed offset
        def __init__(self, offset, data):
            self.start = offset
            self.data = data
            self.end = self.start + len(self.data)

    class Flush:
        # flush has a list of new leaves, and a list of indexes to old leaves with ranges
        def __init__(self, *writes, prev_flush=None):
            self.prev_flush = prev_flush
            self.data = sorted(writes, key=lambda write: write.start)

            # find extents
            start = min(write.start for write in self.data)
            end = max(write.end for write in self.data)

            if prev_flush is None:
                self.start = start
                self.end = end
                self.index = []
                return

            self.start = min(start, prev_flush.start)
            self.end = max(end, prev_flush.end)
            self.index = [(prev_flush.start, prev_flush.end, prev_flush)]

        # a lookup(offset) method would search the index; not written yet

        def leaves(self, depth=0):
            # find leaf count and leaf depths by walking them all,
            # the consistent first approach from the notes above.
            # newer leaves shadow older ones where ranges overlap;
            # clipping the overlaps is left for a later revision, so
            # an overlapped old leaf may still be yielded here.
            for write in self.data:
                yield (depth, write)
            for _start, _end, prev in self.index:
                yield from prev.leaves(depth + 1)

    def __init__(self, latest=None):
        self.latest = latest
        self.pending = []

    def write(self, offset, data):
        # wrap raw writes in RangedData so Flush can read .start/.end
        self.pending.append(self.RangedData(offset, data))

    def flush(self):
        if not self.pending:
            return
        self.latest = self.Flush(*self.pending, prev_flush=self.latest)
        self.pending = []
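
a quick smoke test of the sketch above, with the "walk them all" depth
check from the notes; all values are made up:

import math

s = Simple()
s.write(0, b'hello')
s.write(10, b'world')
s.flush()
s.write(2, b'LL')
s.flush()

all_leaves = list(s.latest.leaves())
# target depth is log2 of the leaf count, per the notes above
limit = math.ceil(math.log2(len(all_leaves)))
too_deep = [(d, leaf) for d, leaf in all_leaves if d > limit]
print(len(all_leaves), 'leaves,', len(too_deep), 'too deep')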



Re: [ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
turns out google has a library at
https://developers.google.com/gmail/api/quickstart/python

i'm pausing this goal now


[ot][spam][crazy] log: hobby chart of removal requests

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
i'd kind of like to make charts showing the removal requests i'm seeing

i think they would make a pleasant, roughly linear line with some jitter
that relates to network latency.

they could be more interesting than that, i don't know.

some time ago i figured out how to access my email programmatically,
but i don't remember well how or where that was.

i have inhibition around it; it relates to archival of personal evidence.

but i bet my email is already set up for imap access.

maybe a python imap library
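
the chart itself could then be a few lines of matplotlib; a sketch
assuming the Date: headers are already parsed into a list of datetime
objects called `dates` (a hypothetical name):

import matplotlib.pyplot as plt

def chart(dates):
    dates = sorted(dates)
    # cumulative count over time: a steady removal rate gives a straight
    # line, and network latency shows up as jitter around it
    plt.plot(dates, range(1, len(dates) + 1), marker='.')
    plt.xlabel('time of removal request')
    plt.ylabel('cumulative requests')
    plt.show()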


Re: Bank.org | SHELF CHARTER (Notice of Intent)

2022-07-07 Thread Gunnar Larson
Dear Madam Superintendent:

xNY.io - Bank.org, PBC sends you, Ms. Lacewell, notice of intent of our
global enterprise.

We would like to earn a call with your office for a meet and greet.

When would be a good time for you?

Sending you the very best.

Thank you,

Gunnar Larson
--
*Gunnar Larson*
*xNY.io - Bank.org, PBC*
MSc - Digital Currency
MBA - Entrepreneurship and Innovation (ip)

g...@xny.io
+1-646-454-9107
New York, New York 10001

On Sat, Mar 13, 2021, 2:32 PM Gunnar Larson  wrote:

> Honorable Madam Superintendent:
>
> Bank.org submits notice of intent to begin preliminary conversations with
> your office to earn your Shelf Charter designation approval.
>
>- Please notice the attached correspondence made to your attention.
>Additionally, the memo can be reviewed here:
>
> https://docs.google.com/document/d/1d1WU0ScurhEXwNm1YY-e2bPS6CDZK6nJt-ue0PXkWCw/edit?usp=sharing
>
> We will contact Ms. Seema Shah's office on Monday 15 March with hopes of
> scheduling Bank.org's first series of discussions on the
> Superintendent's Shelf Charter consideration.
>
> With great appreciation,
>
> Gunnar Larson
> --
> *Gunnar Larson - xNY.io | Bank.org*
> MSc - Digital Currency
> MBA - Entrepreneurship and Innovation (ip)
>
> g...@xny.io
> +1-646-454-9107
> New York, New York 10001
>


Re: confirm eb13662d87c7f13f55766a31893a16cfb7c88ffa

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
Getting this maybe once every five minutes for maybe four hours. Feels
kinda frustrating.

On Thu, Jul 7, 2022, 1:45 PM  wrote:

> Mailing list removal confirmation notice for mailing list cypherpunks
>
> We have received a request from 185.220.102.8 for the removal of your
> email address, "gmk...@gmail.com" from the
> cypherpunks@lists.cpunks.org mailing list.  To confirm that you want
> to be removed from this mailing list, simply reply to this message,
> keeping the Subject: header intact.  Or visit this web page:
>
>
> https://lists.cpunks.org/mailman/confirm/cypherpunks/eb13662d87c7f13f55766a31893a16cfb7c88ffa
>
>
> Or include the following line -- and only the following line -- in a
> message to cypherpunks-requ...@lists.cpunks.org:
>
> confirm eb13662d87c7f13f55766a31893a16cfb7c88ffa
>
> Note that simply sending a `reply' to this message should work from
> most mail readers, since that usually leaves the Subject: line in the
> right form (additional "Re:" text in the Subject: is okay).
>
> If you do not wish to be removed from this list, please simply
> disregard this message.  If you think you are being maliciously
> removed from the list, or have any other questions, send them to
> cypherpunks-ow...@lists.cpunks.org.
>


Re: MIA Coin and POX Kleptocracy: CORRUPTION FIN-2022-A001 - SAR 38(m).

2022-07-07 Thread Gunnar Larson
Hello:

We are kindly following up on this claim.

Thank you very much.

Gunnar Larson

On Wed, Apr 20, 2022, 4:38 PM Gunnar Larson  wrote:

> Dear Department of the Treasury:
>
> From the prompt of FinCEN's Resource Center, the details below are kindly
> submitted to your esteemed attention.
>
> Please let me know if I can be of any assistance.
>
> Sending you the very best regards.
>
> Thank you,
>
> Gunnar Larson
> --
> *Gunnar Larson - xNY.io - Bank.org*
> MSc - Digital Currency
> MBA - Entrepreneurship and Innovation (ip)
>
> g...@xny.io
> +1-646-454-9107
> New York, New York 10001
>
> -- Forwarded message -
> From: Gunnar Larson 
> Date: Fri, Apr 15, 2022 at 1:17 PM
> Subject: MIA Coin and POX Kleptocracy: CORRUPTION FIN-2022-A001 - SAR
> 38(m).
> To: 
> Cc: cypherpunks 
>
>
> Dear FinCEN:
>
> xNY.io - Bank.org, PBC thanks you for sending yesterday's advisory on
> kleptocracy and foreign public corruption. We have made 30 highlights to
> the FIN-2022-A001 advisory for reference.
>
> *The aim of today's memo is to learn FinCEN's insights into MIA Coin and
> the consensus algorithm Proof of Transfer (POX). xNY.io - Bank.org, PBC is
> concerned about potential MIA Coin and City Coin kleptocracy that may be
> affecting our global enterprise. *
>
> *FinCEN may note that NYCCoin is illegal in New York State, given the
> BitLicense mandate. As such, we are not concerned with NYCCoin's legality.*
>
>- On February 4, 2022 we submitted a City of Miami records request for
>any and all correspondence between the City of Miami concerning CityCoins,
>MIA Coin and Stacks (STX). Additionally, any and all related correspondence
>concerning CityCoins, MIA Coin, Stacks (STX), Digital World Acquisition
>Corp and Harvard Management Company.
>- Today, April 15, 2022, we have yet to receive the 13,092 records.
>Attached you will find our
>latest correspondence with Miami, notifying them of our intention to
>contact FinCEN concerning MIA Coin and POX.
>- Conducting independent marketplace research, xNY.io - Bank.org, PBC
>established a premise of potential kleptocracy between the City of Miami
>and City Coins, which is registered internationally in Iceland.
>Furthermore, MineMiamiCoin.com is registered in Germany.
>- Please find the City of Miami's resolution approving gifts from City
>Coins ... Furthermore, City Coins suggests a $25,000 "reward" (that may
>be confused with bribery) for Mayors who participate.
>
> MIA Coin is powered by POX,
> a computer protocol that may exploit the U.S. and international financial
> systems to launder illicit gains, including through the use of shell
> companies, offshore financial centers, and professional service providers
> who enable the movement and laundering of illicit gains.
>
> MIA Coin's POX protocol is a wealth extraction tool that unfairly rewards
> an inside group of miners, rewarding patronage networks that benefit an
> inner circle and regime. These practices harm the competitive landscape of
> financial markets and often have long-term corrosive effects on good
> governance, democratic institutions, and human rights standards.
>
> FinCEN, we are concerned Miami's City Coin resolution potentially
> confirms Miami leaders as kleptocrats, using their
> position and influence to enrich themselves and their networks of corrupt
> actors.
>
> Finally, we understand that MIA Coin and POX operations are powered by
> international data warehouses, located in Iceland, Germany and Hong Kong.
> For these reasons, given FinCEN's alert on kleptocracy and foreign public
> corruption we kindly seek guidance.
>
> Sending you the very best regards.
>
> Thank you,
>
> Gunnar Larson
> --
> *Gunnar Larson - xNY.io - Bank.org, PBC*
> MSc

Filthy Red-Fascist Commies in the DHS

2022-07-07 Thread professor rat
Communists in the Dept of Reichstag Security - the Red-Brown tide is loosed!

srubenfeld:
current and former DHS officials charged in the case for allegedly obstructing 
justice, “including by destroying evidence” after they were approached by FBI 
agents about procuring and disseminating “sensitive and confidential 
information from a law enforcement database”

Quoting US Attorney EDNY @EDNYnews:
Five Individuals Indicted for Crimes Related to Transnational Repression Scheme 
to Silence Critics of the People’s Republic of China Residing in the United 
States

@DOJNatSec @NewYorkFBI
https://justice.gov/usao-edny/pr/five-individuals-indicted-crimes-related-transnational-repression-scheme-silence




Re: [ot][wrong] Please let me sleep outdoors in the coldest winter. Was: Re: PLEASE LET ME SLEEP OUTDOORS

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
Quotes trimmed through Jul 2, 2022, 8:49 AM


2022-07-02 1417 2/0:10.7 cool, medium, food after
2022-07-02 1604 2/0:11.56 67.1F easy-medium, food after
2022-07-02 1756 2/0:16.20 64.4F medium, food after
2022-07-03 0853 2/0:15.63 62F hard, with sugar cookie, nap after
2022-07-03 1422 2/0:11.29 54.8F medium, food after
2022-07-03 1836 long warm then 2/unk at around 53F, easy, food after
2022-07-04 0728 3/0:23.74 60.9F medium-easy, two chocolates, with food
2022-07-04 0914 3/0:18.76 60.9F medium-easy, one chocolate, with food
2022-07-04 1117 3/0:16 57.2F medium-easy, with food, one chocolate after
2022-07-05 0656 3/0:20.20 58.1F medium, two chocolates, food after
2022-07-05 1444 unk warm then 4/0:25.76 58.6F medium, food after
2022-07-05 2051 3/0:18.78 59.5F medium-easy, held pee, sleep after
2022-07-06 4/0:22.68 59.5F medium-easy, one chocolate, food after
2022-07-06 1317 4/0:21.94 59.5F medium-easy, one chocolate, food after
2022-07-06 1554 4/0:18.27 59.1F medium, with food
2022-07-07 0855 unk warm then 5/0:23.66 54.1F, food after, hard-easy

The temp reading is likely lower because the thermometer had a longer time
to settle.

After warm water, it was pleasant to have cold water, but i kept it to only
a count of 5 because of how the inhibition experiences might respond to my
choices.


Re: [ot][wrong] Please let me sleep outdoors in the coldest winter. Was: Re: PLEASE LET ME SLEEP OUTDOORS

2022-07-07 Thread Undiscussed Horrific Abuse, One Victim of Many
Quotes trimmed through Jul 2, 2022, 8:49 AM


2022-07-02 1417 2/0:10.7 cool, medium, food after
2022-07-02 1604 2/0:11.56 67.1F easy-medium, food after
2022-07-02 1756 2/0:16.20 64.4F medium, food after
2022-07-03 0853 2/0:15.63 62F hard, with sugar cookie, nap after
2022-07-03 1422 2/0:11.29 54.8F medium, food after
2022-07-03 1836 long warm then 2/unk at around 53F, easy, food after
2022-07-04 0728 3/0:23.74 60.9F medium-easy, two chocolates, with food
2022-07-04 0914 3/0:18.76 60.9F medium-easy, one chocolate, with food
2022-07-04 1117 3/0:16 57.2F medium-easy, with food, one chocolate after
2022-07-05 0656 3/0:20.20 58.1F medium, two chocolates, food after
2022-07-05 1444 unk warm then 4/0:25.76 58.6F medium, food after
2022-07-05 2051 3/0:18.78 59.5F medium-easy, held pee, sleep after
2022-07-06 4/0:22.68 59.5F medium-easy, one chocolate, food after
2022-07-06 1317 4/0:21.94 59.5F medium-easy, one chocolate, food after
2022-07-06 1554 4/0:18.27 59.1F medium, with food

Seemed harder without the chocolate, although I also had an inhibiting
thought by accident. I waited longer before reading the temperature, and it
reads lower.


Ars Technica: The cryptopocalypse is nigh! NIST rolls out new encryption standards to prepare

2022-07-07 Thread professor rat
Fake news from a fake anarchist.  There is already quantum encryption that's 
expensive now, but prices will fall with wider adoption.  There are also forms 
of encryption not subject to these tiresome beat-up bullshit journalist stories.

Conde Nast are a well-known fraud on the public and Jim Bell should be better 
known as one now especially since his perversions of the last few years - 
Prague - Anarchopulco - list spam for the Lunar Right.

Fuck off Bell - you're not a crypto-anarchist's arsehole.  GO DIE!


Ars Technica: The cryptopocalypse is nigh! NIST rolls out new encryption standards to prepare

2022-07-07 Thread jim bell
Ars Technica: The cryptopocalypse is nigh! NIST rolls out new encryption 
standards to prepare.
https://arstechnica.com/information-technology/2022/07/nist-selects-quantum-proof-algorithms-to-head-off-the-coming-cryptopocalypse/

In the not-too-distant future—as little as a decade, perhaps, nobody knows 
exactly how long—the cryptography protecting your bank transactions, chat 
messages, and medical records from prying eyes is going to break spectacularly 
with the advent of quantum computing. On Tuesday, a US government agency named 
four replacement encryption schemes to head off this cryptopocalypse.

Some of the most widely used public-key encryption systems—including those 
using the RSA, Diffie-Hellman, and elliptic curve Diffie-Hellman 
algorithms—rely on mathematics to protect sensitive data. These mathematical 
problems include (1) factoring a key's large composite number (usually denoted 
as N) to derive its two factors (usually denoted as P and Q) and (2) computing 
the discrete logarithm that key is based on.
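
A toy illustration, added here rather than taken from the article, of why
recovering P and Q breaks a key: trial division stands in for the quantum
factoring step, and the numbers are tiny on purpose (Python 3.8+ for the
modular inverse).

from math import isqrt

N, e = 3233, 17                    # toy public key; secretly N = 61 * 53
p = next(d for d in range(2, isqrt(N) + 1) if N % d == 0)
q = N // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent recovered from e, phi(N)
assert pow(pow(42, e, N), d, N) == 42   # round-trips an "encrypted" 42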


Dutch Revolt: Spreads Manure On Govt Buildings, Govt Shoots People

2022-07-07 Thread grarpamp
Netherlands: The Dutch farmers have had enough.
They are spreading manure on government buildings and blocking the
highways with tractors.
https://twitter.com/JamesMelville/status/1543837782677880832


Re: 1776: When Freedom From The State?

2022-07-07 Thread grarpamp
Kween Josie @KweenJosie
I want to tell you about what happened to the 56 signers of The
Declaration of Independence. Freedom does not come free. It is pivotal
as we devolve into tyranny that we know what that means.
The Continental Congress, approved the final wording of the
Declaration of Independence on July 4, 1776. 56 men signed it.
Signing the declaration was a sacrifice involving risk. Sometimes
those who sacrifice never regain what they gave up. Some don’t see the
results that later generations see, enjoy or experience. And the risk
might include the ultimate sacrifice – giving one’s life for the
cause.
• Five of those 56 Declaration signers were captured by the British
and tortured as traitors.
• Four of the 56 Declaration signers lost their sons in the
Continental Army or had sons who were captured.
• Nine of the 56 Declaration signers fought and died in the American Revolution.
• 12 of the 56 Declaration signers had their homes looted and destroyed.
Carter Braxton of Virginia was a prosperous planter and trader. His
ships were destroyed by the British Navy. He lost his home to pay off
the debts and died in poverty.
Thomas McKean of Delaware was harassed mercilessly. His family went
into hiding during the war, moving multiple times. He served in
Congress without pay and died in poverty.
• Thomas Nelson Jr. of Virginia put his own home up as collateral to
raise $2 million for the French allies. The struggling French
government was unable to pay back the loans and Nelson’s entire estate
was wiped out.
• Francis Hopkinson of New Jersey and William Floyd of New York both had
their homes confiscated and used as housing by the British.
• Francis Lewis of New York had his wife imprisoned by the British, where she
died. He also lost his home and everything in it.
John Hart had to leave his dying wife’s bedside to flee the British.
For more than a year, he lived in caves and forests. He returned home
to find his wife dead, his 13 children missing & all of his property gone.
He died shortly after of physical & mental exhaustion & a broken heart.
Lewis Morris and Phillip Livingston died of similar circumstances to
Hart’s. Too sad and exhausted to carry on.
Many were bountied, including John Hancock, who was famously insulted
by the “low” price on his head.
• Declaration signer Richard Stockton, a New Jersey State Supreme
Court Justice, returned to his Princeton estate to find that his wife
and children were living like refugees after he was betrayed by a Tory
sympathizer. British troops captured and tortured him with starvation.
When Stockton was finally released, he went home to find his estate
had been looted and burned. He had been so badly beaten in prison that
he died before the war’s end. His surviving family lived the rest of
their lives off charity.
• At the Battle of Yorktown on the York River in Virginia, Thomas
Nelson, Jr.’s home had been overrun by British Gen. Charles
Cornwallis, who had taken over his home for headquarters. Nelson
urged Gen. George Washington to open fire on his own home.
Washington agreed. This was done, and Nelson’s home was destroyed.
Cornwallis later surrendered the British forces at Yorktown in 1781,
ending the fighting in the American Revolution. Nelson, one of the
brave and noble 56 signers, died bankrupt some years later.
The 56 signers of the Declaration of Independence came from various
walks of life. Most were considered well-educated for the time. The 56
included lawyers, store merchants, farmers, teachers, one surveyor
(Abraham Clark) and of course one multifaceted genius (Ben Franklin).
Each of them knew the risks that being caught by the British or
exposed by a traitor carried. Still, they signed that beautiful
document. Still they persisted.
And because of these brave men, many whose names are nearly lost to
history, The Declaration of Independence, along with the U.S.
Constitution, set the foundation for the greatest nation on earth.
Up until the American experiment, every single ruling class was some
kind of dictatorship. But because of those 56 signers, who believed so
deeply in freedom, self ownership & a Republican form of government,
we had an explosion of innovation, creativity, success, and
prosperity.
While I can look at evidence and history and make a qualified logical
hypothesis on where the next year, 2 years, of 10 years will take us,
we can’t know for sure. But I do know that while America the
institution is dying, America the idea still exists in all of us.
And she’s still worth it. Thanks for reading.


Re: War re Ukraine: Thread

2022-07-07 Thread grarpamp
> US general says Elon Musk's Starlink has 'totally destroyed Putin's 
> information campaign'

Lol, even if so, didn't do any good. Putin, the murdering invader
and rampaging destroyer, now totally controls the aforementioned
areas, and just destroyed two of the west's prized bling weapons.
Ukraine confirmed to be secretly crawling with western agencies
on the ground, yet is still losing. Will take long time to win anything
back.

Prepare yourselves so you don't suffer same fate as ill-prepared
Ukraine.

Anyway, Putin is an asshole that needs to die...
https://twitter.com/UA_struggle/status/1543221420272001025
Another video of the Russian shelling in the #Odesa region. It was
recorded right after the missile struck. It is horrifying what #Russia
does to the Ukrainian people. #RussiaIsATerroristState

Ukrainian forces are currently advancing in several tactical
directions, including in the south - in the Kherson region, in the
Zaporozhye region. We will not give up our land – the entire sovereign
territory of Ukraine will be Ukrainian. In the south of our country,
in the occupied areas, Russian forces blocked any opportunity for
people to know the truth about what was happening. Block access to
social networks, messengers and YouTube. People need to know about
it. Therefore, if you have the opportunity to talk to people in the
south – Kherson, Henichesk, Berdyansk, Melitopol and other cities and
villages – please spread the truth there. Take every opportunity to
tell people in the occupied areas that we remember them. -- Zelensky

The same one who came to power through the blood of Muslims throughout
the North Caucasus? The same one who streamlined the "fight against
terrorism" will turn this into a flywheel of repression of Muslims in
Russia? The one who killed thousands of Muslims in Syria? And the one
who killed your own people? -- @Islamicfront1


Re: USA 2020 Elections: Thread

2022-07-07 Thread grarpamp
https://www.dailymail.co.uk/news/article-10983315/Oil-U-S-reserves-head-overseas-gasoline-prices-stay-high.html

They are trying to make you think he is incompetent, when in reality
they know exactly what they are doing and they are just using Biden as
a front man for the collapse that they are orchestrating!