i saw somebody else with a large prosthesis and again engaged some prosthesis
project energy :D
i've learned more about dissociation and i took it very slowly and made very
tiny moving-forward progress :}
i did 11 practice runs of very simple metamodel behavior (holding the idea that a
prosthetic limb could some day use a low-end chip to learn to adapt to an owner's
changing signals) [metamodel work is the hardest part of the project design for
me]
hey, i'm sorry i'm not including a worked example yet, i was just practicing
some of them are very poor and some of them perform astoundingly, and it's kind
of random, and i was navigating harsh dissociations through them all the time,
so i'm taking it gentle, i'm very very sorry
but i'm learning to do it and the basic form is something like (PLEASE DON'T
HURT IT ! it's soooo nascent !!!!)
import pr_dep, torch, tqdm

def main():
    s = pr_dep.make_sin_arch().make_settable()  # student model
    s_len = len(s.get())
    t = pr_dep.[TBasic or TGroup](...named hyperparameters, maybe scaled from s,
                                  plus an output for the weights and anything
                                  else of interest)
    optim = torch.optim.AdamW(t.parameters(),
                              lr=[something between 1e-4 and 1e-8])
    test_data = torch.linspace(0, 3, [# of items],
                               device=s.device, dtype=s.dtype)
    test_out = torch.sin(test_data)
    with tqdm.tqdm(range(num_loops)) as pbar:
        for it in pbar:
            # optional: replace a random value in test_data and test_out
            wts = t([optional parameters; it can synthesize with no input
                     too if configured to])
            s.set(wts)
            loss = torch.nn.functional.mse_loss(s(test_data), test_out)
            # if the model is configured to predict its own loss, which
            # makes it perform better, mse_loss that too and sum it in
            if pbar.last_print_n == it:
                pbar.desc = str(loss.item())
            loss.backward()
            optim.step()
            optim.zero_grad()

if __name__ == '__main__':
    main()
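for reference, here's a self-contained sketch of the same loop with a toy
metamodel written in plain torch in place of pr_dep; every name and size below
is illustrative, not pr_dep's, and `torch.func.functional_call` stands in for
the set()/settable machinery:

```python
import torch
from torch.func import functional_call

torch.manual_seed(0)

# student: tiny MLP that should end up computing sin(x)
student = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
meta = [(name, p.shape, p.numel()) for name, p in student.named_parameters()]
s_len = sum(n for _, _, n in meta)

class TinyMetamodel(torch.nn.Module):
    # no-input metamodel: a learned latent mapped to one flat weight vector
    def __init__(self, out_len, hidden=64):
        super().__init__()
        self.z = torch.nn.Parameter(torch.zeros(hidden))
        self.net = torch.nn.Sequential(
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, out_len),
        )
    def forward(self):
        return self.net(self.z)

t = TinyMetamodel(s_len)
optim = torch.optim.AdamW(t.parameters(), lr=1e-3)
test_data = torch.linspace(0, 3, 64).unsqueeze(-1)
test_out = torch.sin(test_data)

losses = []
for it in range(500):
    wts = t()  # synthesize the student's weights
    # unflatten into named tensors; functional_call runs the student with
    # them while keeping the metamodel in the autograd graph
    params, i = {}, 0
    for name, shape, n in meta:
        params[name] = wts[i:i + n].view(shape)
        i += n
    pred = functional_call(student, params, (test_data,))
    loss = torch.nn.functional.mse_loss(pred, test_out)
    losses.append(loss.item())
    loss.backward()
    optim.step()
    optim.zero_grad()

print(losses[0], losses[-1])  # the final loss is lower than the first
```

with no input, this reduces to optimizing the student's weights through a
reparameterization, but the same skeleton works when the metamodel is given
conditioning inputs.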
... so, yeah, i'm still practicing it, and i'm not sure which of my approaches
were successful and i'd want to share one that succeeds, not one that fails ;p
library closes in 7 minutes !!
encourage me to keep trying and to have positive warm safe energy around it :)
i really need the ability to continue well here <expressing some of this didn't go
well :s>
be well !!
attached is the dependency file i made toward the end. grad output is untested,
but i've figured out how to make it work if it doesn't.
the grid search was just to figure out which hyperparameters could make a
successful sin(x) model (it was my synthesis test); i didn't do any other
grid search yet
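that kind of grid search can be as simple as looping a tiny trainer over every
combination of candidate settings; a sketch, where the hidden sizes and
learning rates are just example values i picked, not the ones i actually
searched:

```python
import itertools
import torch

def train_sin(hidden, lr, steps=200, seed=0):
    # train a plain 1 -> hidden -> 1 MLP to fit sin on [0, 3]; return final loss
    torch.manual_seed(seed)
    model = torch.nn.Sequential(
        torch.nn.Linear(1, hidden), torch.nn.Tanh(), torch.nn.Linear(hidden, 1)
    )
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    x = torch.linspace(0, 3, 64).unsqueeze(-1)
    y = torch.sin(x)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
        opt.zero_grad()
    return loss.item()

# the grid itself: every combination of the candidate settings
results = {
    (hidden, lr): train_sin(hidden, lr)
    for hidden, lr in itertools.product([8, 32], [1e-2, 1e-4])
}
best = min(results, key=results.get)
print(best, results[best])
```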
this uses a high-level class from torch.nn, but i have since learned there is
more now in the newer torchtune package. still, it's one way to abstract the
idea away.
the make_settable() etc methods let one use the weights of one model in the
training graph of another without crashing pytorch, by letting the user
specify when this is or isn't happening and what the role is
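(the standard pytorch way to do this, for comparison, is
`torch.func.functional_call`, which runs a module with externally supplied
tensors as its weights so gradients flow back to whatever produced them; a
sketch with an illustrative tiny student, sizes mine:)

```python
import torch
from torch.func import functional_call

# an illustrative tiny student; the names and sizes are just for the example
student = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
meta = [(name, p.shape, p.numel()) for name, p in student.named_parameters()]
total = sum(n for _, _, n in meta)

# a flat weight vector standing in for a metamodel's output
wts = torch.randn(total, requires_grad=True)

# unflatten into named tensors and run the student with them
params, i = {}, 0
for name, shape, n in meta:
    params[name] = wts[i:i + n].view(shape)
    i += n
x = torch.linspace(0, 3, 8).unsqueeze(-1)
loss = torch.nn.functional.mse_loss(functional_call(student, params, (x,)),
                                    torch.sin(x))
loss.backward()
print(wts.grad.shape)  # torch.Size([49]): gradients reached the flat vector
```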
it's just been very hard for me to try any of this at all up until now