Re: Coronavirus: Thread

2022-01-18 Thread grarpamp
Biden, the Democrats' Fave Covid "Nazi", Gets Rekd by WHO, and by everyone else, including "The Science", lately... https://twitter.com/RandPaul/status/1477672463454507009 WHO Crushes Biden COVID Plan, Says "No Evidence At All" That Healthy Kids Need 'Booster' Jabs The Chief Scientist of the

Re: Coronavirus: Thread

2022-01-18 Thread grarpamp
> ivermectin MIT-Educated Doctor Ordered To Undergo Psych Evaluation After Prescribing Ivermectin https://summit.news/2022/01/18/mit-educated-doctor-ordered-to-undergo-psych-evaluation-after-prescribing-ivermectin/

Breaking News: Robinhood Rebrands as 'Libra'

2022-01-18 Thread Gunnar Larson
*SILICON VALLEY - Embroiled in the naturally malevolent evolutionary controversy of brand fakery in the name of manipulating humanity, Robinhood Markets (a financial services company) rocked the

Re: 1984: Thread

2022-01-18 Thread grarpamp
Biden's Infrastructure Bill, Now Signed Into Law, Mandates "Vehicle Kill Switches" By 2026 https://www.yahoo.com/now/law-install-kill-switches-cars-17930.html https://www.musclecarsandtrucks.com/biden-infrastructure-bill-vehicle-kill-switch-2026/

Re: Preaching to the choir: (Was: "The spectacle of Jim Bell's recent behavior")

2022-01-18 Thread jim bell
On Tue, Jan 18, 2022 at 6:08 PM, professor rat wrote: >I told you to stop - now it's too late Since I don't think I've been 'doing anything' for a few years, maybe you ought to be way more specific about what you don't want people to do, and who you don't want to do it. And why.

Re: The spectacle of Jim Bell's recent behavior

2022-01-18 Thread professor rat
Zeynep, on the question of talking nonsense I shall always defer to you, deer

Preaching to the choir: (Was: "The spectacle of Jim Bell's recent behavior")

2022-01-18 Thread professor rat
I told you to stop - now it's too late

Re: CoinDesk: Intel to Unveil 'Ultra Low-Voltage Bitcoin Mining ASIC' in February

2022-01-18 Thread grarpamp
> https://www.coindesk.com/tech/2022/01/18/intel-to-unveil-ultra-low-voltage-bitcoin-mining-asic-in-february/ https://patents.google.com/patent/US20180006808A1/en

CoinDesk: Intel to Unveil 'Ultra Low-Voltage Bitcoin Mining ASIC' in February

2022-01-18 Thread jim bell
CoinDesk: Intel to Unveil 'Ultra Low-Voltage Bitcoin Mining ASIC' in February. https://www.coindesk.com/tech/2022/01/18/intel-to-unveil-ultra-low-voltage-bitcoin-mining-asic-in-february/ Intel, one of the world's largest chip makers, is likely to unveil a specialized crypto-mining chip at the

Preaching to the choir: (Was: "The spectacle of Jim Bell's recent behavior")

2022-01-18 Thread jdb10987
On Jan 17, 2022 5:11 PM, professor rat wrote: Jim Bell made a spectacle of himself hobnobbing with Jeff "Anarchopulco" Berwick. Berwick was probably behind the death of cryptoanarchist John Galt btw. Rat foolishly has apparently never heard of the saying, "Preaching to the choir". If I only

Re: [crazy][hobby][spam] Automated Reverse Engineering

2022-01-18 Thread k
a jax contributor kindly shared this with me. you can store tpu models precompiled, which significantly speeds launch time, by using a compilation cache folder.
from jax.experimental.compilation_cache import compilation_cache as cc
cc.initialize_cache("/path/name/here", max_cache_size_bytes=32 *
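a minimal runnable sketch of that cache setup, assuming a tpu-enabled jax install; the size argument is truncated above after "32 *", so the 32 GiB value below is an assumption:

# persist compiled TPU executables across runs, speeding launch time
from jax.experimental.compilation_cache import compilation_cache as cc
cc.initialize_cache("/path/name/here", max_cache_size_bytes=32 * 2**30)  # assumed 32 GiB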

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
i'm drafting an attempt to use perceiver. i thought i'd try to convert between numbers and the words for them. the next step is, i need to figure out how to get the ends of the model to match that kind of data. that means comprehending the configs, preprocessor, postprocessor/decoder, again. i'm

Re: The spectacle of Jim Bell's recent behavior

2022-01-18 Thread zeynepaydogan
>> And killing these four will send a message to potential trolls ( Hi Zeynep ) > > Your days here are numbered. > this makes me laugh so much. Let's put this to a vote if you want. I'm sure > everyone here will vote for you to leave. Are you taking this challenge or not? Let's see if you're brave or

Re: [crazy][hobby][spam] Automated Reverse Engineering

2022-01-18 Thread k
um er - i went back to that and it turned out i had just scrolled up, and the training was all there - i think i may have uploaded another snapshot - i let it train for several more hours, but when i returned the vm had run out of ram and X wasn't accepting keyboard input. it took me some time

Journalism under fire

2022-01-18 Thread professor rat
theguardian.com - Two journalists exposing Mexico’s corruption and drug violence murdered within one week Margarito Martínez Esquivel and José Luis Gamboa are the latest casualties in the world’s most dangerous country for reporters. And in related news the president of Mexico renewed his offer

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
## num_self_attends_per_block ## - number of self-attention layers. the configuration object is transformers.models.perceiver.configuration_perceiver.PerceiverConfig and it briefly documents the parameters (and likely a few more). configs deriving from PerceiverConfig can simply set attributes to add
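a quick sketch to check that parameter, assuming the transformers library is installed:

from transformers import PerceiverConfig

config = PerceiverConfig()
# num_self_attends_per_block = self-attention layers per block
print(config.num_self_attends_per_block)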

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
So, if we were to train a new perceiver model, we'd need some idea of what configuration parameters to provide. How big to make all the different dimensions. I also noted there were some hardcoded parameters in there, likely taken from an initial paper or such. Let's collect this stuff
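A sketch of what providing those parameters could look like; the sizes below are illustrative assumptions, not values taken from the paper or a pretrained checkpoint:

from transformers import PerceiverConfig, PerceiverForMaskedLM

config = PerceiverConfig(
    num_latents=256,                # rows in the latent array (assumed)
    d_latents=1280,                 # width of each latent vector (assumed)
    d_model=768,                    # width of the input embeddings (assumed)
    num_self_attends_per_block=26,  # self-attention layers per block (assumed)
    max_position_embeddings=2048,   # maximum input length (assumed)
)
model = PerceiverForMaskedLM(config)  # randomly initialized, ready to train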

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
Note: Head mask information should go on 2B, not 3.

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
Draft summary of PerceiverForMaskedLM:
1. PerceiverTextPreprocessor: Inputs -> Embeddings
2A. PerceiverEncoder: Embeddings + Weights + Attention mask -> PerceiverAttention(is_cross_attention = True) -> Hidden states
2B. PerceiverEncoder: Hidden states + Attention mask -> layers of
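For reference, a short usage sketch that exercises all the steps in this summary at once, assuming the deepmind/language-perceiver checkpoint on the huggingface hub:

import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

# steps 1 through 4 all happen inside this one forward call
inputs = tokenizer("this is a test.", padding="max_length", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, max_position_embeddings, vocab)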

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
Figuring out Step 4: Embedding Decoding
logits = self.embedding_decoder(outputs, embedding_layer = perceiver.input_preprocessor.embeddings)
The .embeddings property of PerceiverTextPreprocessor (the input_preprocessor) is the matrix of tokens or bytes to embedding vectors. Without the position
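Based on that description, the embedding decode reduces to a matmul against the transposed embedding matrix; a sketch (the bias term is an assumption):

import torch

def embedding_decode(outputs, embedding_weight, bias):
    # outputs: (batch, seq, d_model); embedding_weight: (vocab, d_model)
    # project hidden states back onto the token/byte embedding matrix
    return outputs @ embedding_weight.T + bias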

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
Figuring out Step 3b: Decoding
decoder_outputs = self.decoder(
    query=decoder_query,  # looks like just trainable parameters
    z=sequence_output,  # these are the encoded hidden states
    query_mask=extended_attention_mask,  # huh the mask is

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
Figuring out Step 3: Decoding
Step 3 is interesting because it takes both the unencoded embedding vectors (inputs) and the encoded hidden states as input. PerceiverBasicDecoder configuration parameters:
output_num_channels = d_latents
output_index_dims = max_position_embeddings
num_channels =

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
I typed Step 2 almost fully out here, but the browser window left and it has disappeared. Anyway, in Step 2 the input data Embedding Vectors are fed into PerceiverEncoder as "inputs". PerceiverEncoder mutates them using "cross attention" with the passed "hidden states" which appear to be the
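A stripped-down sketch of that cross attention (a simplification, not the transformers implementation): the latent hidden states supply the queries, while the input embeddings supply keys and values:

import torch

def cross_attend(latents, inputs, w_q, w_k, w_v):
    q = latents @ w_q  # queries from the latent array
    k = inputs @ w_k   # keys from the input embeddings
    v = inputs @ w_v   # values from the input embeddings
    scores = (q @ k.transpose(-1, -2)) / k.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v  # latents absorb input info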

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
Figuring out Step 2: Encoding
Here's a copy paste of this call:
embedding_output = self.embeddings(batch_size=batch_size)
encoder_outputs = self.encoder(
    embedding_output,
    attention_mask=None,
    head_mask=head_mask,
    inputs=inputs,

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
I'm interested in going through each step and describing the data flow in human words. Or at least starting this. Could make substeps if the steps are complex. Step 1: Input preprocessing. PerceiverTextPreprocessor collects embeddings and position embeddings into one summed tensor. They are
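A sketch of that summing, assuming learned position embeddings; the names here are illustrative, not the transformers internals:

import torch

def preprocess(input_ids, token_embed, pos_embed):
    # token_embed: nn.Embedding(vocab_size, d_model)
    # pos_embed:   nn.Embedding(max_positions, d_model)
    positions = torch.arange(input_ids.shape[1], device=input_ids.device)
    return token_embed(input_ids) + pos_embed(positions)  # one summed tensor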

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
so a perceiver model is: input -> preprocessor -> encoder [required] -> decoder [processes encoded and unencoded input together] -> postprocessor -> output
let's see how the encoder is constructed in PerceiverModel.__init__:
self.embeddings = PerceiverEmbeddings()
self.encoder =
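a sketch of wiring that pipeline together by hand, following the constructor shown in the transformers perceiver docs; the decoder kwarg values are assumptions:

from transformers import PerceiverConfig, PerceiverModel
from transformers.models.perceiver.modeling_perceiver import (
    PerceiverTextPreprocessor,
    PerceiverBasicDecoder,
)

config = PerceiverConfig()
model = PerceiverModel(
    config,
    input_preprocessor=PerceiverTextPreprocessor(config),  # input -> embeddings
    decoder=PerceiverBasicDecoder(  # reads encoded latents plus decoder queries
        config,
        output_num_channels=config.d_latents,
        output_index_dims=config.max_position_embeddings,
        num_channels=config.d_latents,
        trainable_position_encoding_kwargs=dict(
            num_channels=config.d_latents,
            index_dims=config.max_position_embeddings,
        ),
    ),
)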

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
forward function of perceivermodel, when conditioned with only input_preprocessor and decoder, pseudocode:
# step 1
inputs and dimension info = input_preprocessor(inputs)
# step 2
encoder_outputs = self.encoder(
    self.embeddings(batch_size),
    inputs
)
# step 3
decoder_outputs = decoder(

Re: [spam] [personal] perceiver model notes

2022-01-18 Thread k
let's check out perceiver's masked language modeling architecture just a little bit. PerceiverForMaskedLM (in github.com/huggingface/transformers, file src/transformers/models/perceiver/modeling_perceiver.py). i'm skipping down to the implementation of the forward() function, as this will give me

Re: [crazy][hobby][spam] Automated Reverse Engineering

2022-01-18 Thread k
- show and tell - the checkpoint on huggingface currently has a loss of around 2.1, so it doesn't succeed yet. but it turns out it can produce an output, and guesses a simple signature correctly:
git clone https://github.com/xloem/techsketball
cd techsketball
python3 demo.py
it compiles a very

Re: Litigating Universal Cognitive Liberty

2022-01-18 Thread grarpamp
> https://ottawacitizen.com/news/national/defence-watch/military-leaders-saw-pandemic-as-unique-opportunity-to-test-propaganda-techniques-on-canadians-forces-report-says > the Canadian armed forces have launched ‘psychological operations’ as an > experiment in government propaganda It's not just

Re: [crazy][hobby][spam] Automated Reverse Engineering

2022-01-18 Thread k
note: in my bumbling i found this doc which gives a general intro to flax/jax/huggingface from google: https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md. i'm wondering if stuff like that doc is how jax reached me.