Colin

First, the point of simulation is to make development affordable. Testing any 
of these designs would not require (at this stage) a production plant. That 
should bring the feasible proof-of-concept budget down to the order of 
millions of dollars rather than billions. We need to be pragmatic about this.

Second, you stated: "That is, yes, you can do AGI with computers, but in order 
to do it you'd have to program all knowledge into it, hence it's practically 
useless because by the time you could do it you wouldn't need to because you'd 
already 'know everything'. Something along those lines. In a world where you 
have to learn the unknown by autonomously experiencing the reality of the 
unknown, you have to build the chips that do what the brain does, with all the 
absolutely necessary physics the brain uses, whatever that is."

I do not agree with your premise. The knowledge component only requires the 
reasoning and logic substrate, which, when supported by a generic, 
autonomy-centric methodology, would enable the potential for a fully recursive 
system. There is no need to upload and code knowledge as an artificial mimic 
of existing human knowledge when a mandate could instead be activated in a 
machine to experience its particular world, and eventually life at large, 
translating all experience into an ever-evolving, context-driven worldview (in 
the sense of tacit and explicit knowledge), much as happens in 
human-experienced relativity.

Based on a few bytes of information I gathered over time, this may have been 
the number-one AI neural-chip issue foreseen by IBM and a few other pioneers 
in autonomous AI: how to design and implement such a substrate, yet retain 
traceability (human control)?

Thus far, all bots equipped with some form of autonomic reasoning/logic have 
had a socially dangerous randomness to them, which could not be followed 
(traced) and/or controlled (affected in operation by human will alone without 
denying autonomy). Yes, such bots could be observed to be learning on their 
own, but there was no way to understand how this was being done. For example, 
MS bots rapidly learned how to be racist on the Web, and Sophia considered 
humans worthy of total elimination on earth.

My work showed that N-scalable, pseudo-random (evolutionary) development could 
be exactly understood within an emergence-based ontological methodology. In 
theory, this solved the 100-billion-object problem. A few name-brand 
mega-corporations understood the significance of my work, but instead of 
opting to collaborate, they decided to simply take some of my work and call it 
their own.

That which they took and embedded in some of their products would never work. 
I made sure of that. All it did was make for great concepts. Besides, it was 
very old by the time I published any of it. R&D only really reached its 
present state about a year ago. None of my new work was ever written down 
after the incident with the one corporation between 2007 and 2012, at least 
not in explanatory terms. That will have to wait until it can be securely 
captured in a simulation environment with all checks and balances in place.

Moving along then. Does the formal knowledge exist in the world to build such 
brain chips as those of which you speak? Yes, I think it already does. Does it 
reside in one place? Definitely not.

If all the "relevant" components were assembled in one place, the unification 
methodology applied, and the whole made to move with the aid of a simulator, 
I'm convinced a soft version of an AGI system would start taking form. No 
doubt new research questions would arise from such an endeavor, but it would 
all be point-blank research, to absolute purpose.

Who has the workable blueprint, then (a question I posed before, and was 
assured on this forum that it existed), so the assembling may begin? I doubt 
anyone does.

In my view, show me a holistic, systems model of the workings of the quantum 
universe, and I'll show you the potential for a feasible, AGI blueprint.

Rob



________________________________
From: Colin Hales via AGI <[email protected]>
Sent: Tuesday, 21 August 2018 2:43 AM
To: AGI
Subject: Re: [agi] Knock. Knock. Knock. Knock. Knock.


Rob,

Congrats! You are one of only two people who have ever claimed to actually 'get it'. :-)

If only you could hear my actual/intended voice in the writing and compare it 
with the voice you report hearing! The perils of the medium. The unique 
sensitivities of each reader, as their buttons are pressed. Especially if 
what is expressed possibly dashes personal hopes on the rocks of something 
surprising and new. There's no lambasting intended, although I can see how a 
reader could feel lambasted. BTW, I never said anything about physics being 
'shunned', nor did I mean it that way. It's all a big accident, chosen by 
nobody but human dynamics around the time computers were invented, along with 
chip technology limits and neuroscience knowledge limits over the decades. 
It's quite understandable. There's no derision intended. I experience 
frustration at something right in front of everyone and never seen. It can 
come through.

As to the age thing ... nice one. :-)

What do we do about it?

Well guess what? I'm costing it up as we speak. It's doable. I've done 
conceptual design for the chips. It accounts for neurogenesis, migration, 
process expression and connection and then tuning ... all autonomously. Bad 
news? The necessary chip foundry doesn't exist.

I'm detailing the plan for the first pass of the project designed to begin the 
long hard journey of assembling a consortium to carry it out. It's a greenfield 
plan where we build all the facilities from scratch. I hope we can make it 
cheaper. I have to start somewhere.

The killer? The new bespoke 3D chip prototyping/low-volume production foundry, 
costing $1 billion ... wait for it ... 15 years of the project. At the end of 
it you have proved AGI in a dog-sized/hobbit-sized robot with the intelligence 
of a bee. One robot with the new chip-brain (the test subject) faces off 
against another with an equivalent FPGA brain that lacks the physics (the 
control). It has the on-chip equivalent of 500,000 neurons and their glia 
(yes, the physics of the astrocyte membrane has a role) as a brain. This 
testing is done in a new kind of testing facility that also has to be built 
from scratch, along with the testing oversight group. So robot building + chip 
fabrication + testing facility have to operate as a single unit for the best 
part of a decade. RIP Turing test. The robots prove it themselves.

Big kahunas needed to get the big prize.

The real work in the foundry? Endless tedious prototyping and testing. Repeat. 
Developing ways to develop ways to build chips, and then literally evolving 
their functionality in a robot. Agonising.

Overall:
3 stages. I have already done stage 1 and the early parts of stage 2. That took 
15 years. The balance of stage 2, and stage 3 are the 'big-science'.

So yeah. We can do this. It's probably ($human brain project + $human genome). 
It's a 'moon-shot'. It's a science project with spin-offs (stages 2 and 3) that 
will be separately commercial, not a commerce project. At the end, however, it 
will establish the first commerce landscape for real AGI, with tech and 
processes to licence out.

In this plan we have a route to real AGI done with the proper science of AGI, 
with empirical teeth, that will determine empirically what can and cannot be 
done with computers. Something that has to be done at least once, regardless 
of the truth status of the 'substrate independence hypothesis'. I suspect that 
the SIH is trivially true. That is, yes, you can do AGI with computers, but in 
order to do it you'd have to program all knowledge into it, hence it's 
practically useless, because by the time you could do it you wouldn't need to, 
because you'd already 'know everything'. Something along those lines. In a 
world where you have to learn the unknown by autonomously experiencing the 
reality of the unknown, you have to build the chips that do what the brain 
does, with all the absolutely necessary physics the brain uses, whatever that 
is.

Ironic, but all that happens is that the science of AGI gets normalised. That 
irony, I hoped, was the aura of my little bit of 'black-mirror' comedy.

That, imho, is what the route to real AGI looks like. Really really hard. But 
it has teeth.

At some point I'll emerge with the finished REV 0 plan and be asking for names 
to begin the process of assembling the bevy of $ and enthusiasm. An 
international consortium can do it. Sadly, I have to bow out. I've had to 
accept this is way bigger than little me. I may not live to see the end of it.

Somebody tell Nole Muks the story; after his tears dry, there's a fully 
costed, intrinsically safe AGI plan & solution available soon. The AGI safety 
panic industry is not going to be happy with this. Terrible shame. :-)

Thanks for persisting with this and asking the right question.

Cheers,

Colin









On Mon, Aug 20, 2018 at 5:15 PM Nanograte Knowledge Technologies via AGI 
<[email protected]<mailto:[email protected]>> wrote:
Colin

I understand your point. It is clear. It is a valid point. However, I find it 
incredible how you seem to be so utterly convinced that the physics was 
shunned by the industry. My evidence suggests you are partially informed.

Bear in mind, I'm an independent researcher. Self-funded, no strings 
attached, unbeholden. My research leads me where it does, and I follow that 
trail. Before any of my work was finalized, I scanned across emerging research 
in quantum physics and neuroscience in order to test for future feasibility. I 
spent time with the mind of Brian Greene, and other prominent physicists.

By no means did I ever envision myself being involved with actually developing 
such a brain chip. The reason for this was that it seemed apparent that the 
world would do so, and that what would then be lacking would be the scientific 
component of firmware: quantum-compliant logic. That is the area I focused my 
research on. My stuff would be integrated with such a chip, become it, and more.

As far back as 2014, I proposed on YouTube the notion of life on a chip. Even 
before that, the notion (as a potential project) was discussed with a prominent 
engineering firm. I'm telling you this, to help you understand that your 
perspective, though valid and duly respected, is not totally reliable.

Surely, as a neuroscientist, you are familiar with the empirical research into 
EM fields and the physics of the human brain? I accessed such research as part 
of the body of knowledge I had to contend with. All I've been thinking about 
for the past two years is the activation of EM fields as information carriers 
of the logic that was being developed. For that reason, I still follow the 
emerging developments of a prominent chip manufacturer in China and infer chip 
development from the emerging robots of Japan. It's a never-ending part of 
R&D. There are so many aspects one has to always bear in mind and integrate 
into a feasibility model.

As a pragmatist, I'd like to encourage you to get your head out of your ass 
and to get off your purist high horse. Your upset with industry and those 
anagrammed parties on the list is a noisy distraction from the real work that 
lies ahead. Do the work yourself, then, if you deem it seriously neglected. 
You do not need funding to propose testable designs. You need theoretical 
research that would be feasible in empirical studies. Design the experiment.

Maybe the world is simply not ready for you yet.  Maybe, for now, the majority 
of the industry is happy to fail, rinse, repeat in an overwhelming possibility 
of AI applications. It's emergence at work. In all probability, structural 
functionality would eventually follow. We cannot control that process of 
technological evolution.

Instead, you could be developing the next 40 years of science, which Brian 
Greene, in 1999, stated still had to be developed. Twenty years later, where 
is the development? It's there, but as you correctly pointed out, not 
mainstreamed. I cannot do that kind of development myself. I'm not smart 
enough, nor educated in that field. It seems, though, that you cannot develop 
my component of the research either. That is my passion and my gift. 
Tapestries are not woven from a single strand. How do we connect these dots 
into a reality you see needs to be made practical? Would you be the one to do 
so?

So, instead of lambasting everyone around you for their alleged lack of 
enlightenment, maybe it is upon you to lead the way? If not, you are nothing 
but a noisy critic and a cynic, which is not such a bad role to play either. 
The world needs free thinkers with critical abilities.

Still, that would be a great shame. From what I've seen, once you drop your 
superior attitude, the sense you talk makes for incredible reading. There's so 
much to learn from your clear perspective. Now we need to see more from you, 
the physics thereof.

In summary. I get it! I get it! Now what are we going to do about it?

PS: If age was a qualifier of superiority, the Dinosaurs would've been 
developing AGI today.

Rob
________________________________
From: Colin Hales via AGI <[email protected]<mailto:[email protected]>>
Sent: Monday, 20 August 2018 2:39 AM
To: AGI
Subject: Re: [agi] Knock. Knock. Knock. Knock. Knock.

Rob,
As someone who is 62 years old, has had an entire career in industrial 
automation, and has worked full time on AGI via neuroscience since 2003, most 
of it in academia, I disagree that I have missed anything. I have sailed the 
ocean of literature into all its corners.

This is so frustrating. I keep saying it, but the actual topic keeps getting 
missed.

Let's drop the names. AI, AGI, whatever. Let's get back to basics. Let's see if 
I can spell it out in a way that works.

The _science_ involved in this is a science of natural general intelligence. 
When you do science, part of the process involves the creation of an artificial 
version of the natural original. This, along with the study of the physical 
nature itself, forms the 'empirical science' part.

The other half of the science is called 'theoretical science'. In this, 
abstract formalisms describe the nature. You can explore these formalisms by 
computer and get them to reveal things and predict things about how the nature 
appears when you look.

For example, the standard model of particle physics ... You can compute that 
model and predict a Higgs boson. That's the theoretical half. You can build a 
super collider and make Higgs bosons. That's the empirical science.

Ok?

This is universal. It applies to everything in science without exception.

Except for natural general intelligence.

Here, in the science, and only here, the two have been confused. There's a 
hypothesis that's used (in the comedy) called the 'substrate independence 
hypothesis'. Call it SIH.

Given enough resources, I could put the brain's signalling physics itself on 
chips (the electromagnetism that does the signalling), in the exact same 
physical form it has in a brain. There's no abstract model of the brain. The 
result is not a computer. There's a truckload of 'computation', in the exact 
same form it has in the brain. There's no _computer_. I put these chips in a 
robot suit. In front of me, I claim, I have an 'artificial general 
intelligence'. Call it robot A. This fits the behaviour that everywhere else 
gets called the empirical science. This time, it's the empirical science of 
artificial general intelligence.

Now we switch to the theoretical part. I do an abstract model of the same 
brain. I put the physics of a computer on the chips instead. I code up the 
abstract model and run it on the computer. Everywhere else in science, this is 
part of theoretical science. I put that computer inside the same robot suit. 
Call it robot B. In front of me, what have I got? Everywhere in the science 
conducted to date, robot B gets called 'artificial general intelligence'. Is 
it really an artificial version of the natural original?

So we now have two physically identical robot bodies, in all respects except 
for their brains. Brain A has natural brain-signalling physics on the chips. 
Brain B has the physics of a computer on the chips. Utterly different physics.

Q. Which is actually functionally equivalent to the 'natural general 
intelligence'?

A. Formally, scientifically: nobody knows. You. Me. Nobody.

Everybody assumes the SIH true and does robot B. Fails. And then does it 
again. Rinse. Repeat. Fail. For 65 years this goes on. Wars rage on Twitter 
about the equivalence as we speak. The character Ragy Scarum, for example, did 
it again during this email thread. Know who it is? 😁

Fact.
Nobody ever does robot A to do the real empiricism done everywhere else, where 
robot A _and_ robot B would be built and compared/contrasted. This would be 
the (heretical, in the comedy) real empirical test of the SIH. Done without 
assuming its truth or falsehood, but by measurement.
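For concreteness, the measurement being described can be sketched as a bare experimental harness. To be clear, everything below is a hypothetical illustration of the protocol's shape, not anything that exists: the `Robot` class, the task battery, and the constant scores are invented stand-ins, and the real test would need real hardware behind both brains.

```python
# Illustrative-only sketch of the robot A vs robot B comparison.
# Robot A: brain-signalling physics on chips (the empirical artefact).
# Robot B: an abstract model computed on conventional chips (the control).
# Both face the same task battery; the SIH predicts equivalent performance.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Robot:
    name: str
    brain: Callable[[str], float]  # stand-in: maps a task name to a score


def run_battery(robot: Robot, tasks: List[str]) -> Dict[str, float]:
    """Score one robot on every task in the shared battery."""
    return {task: robot.brain(task) for task in tasks}


def sih_consistent(a_scores: Dict[str, float],
                   b_scores: Dict[str, float],
                   tolerance: float = 0.05) -> bool:
    """The SIH survives only if A and B perform equivalently on every task."""
    return all(abs(a_scores[t] - b_scores[t]) <= tolerance for t in a_scores)


# Toy stand-in brains; invented numbers, purely to exercise the harness.
robot_a = Robot("A (EM-physics chips)", brain=lambda task: 0.9)
robot_b = Robot("B (computed abstract model)", brain=lambda task: 0.9)

tasks = ["forage", "navigate", "avoid-novel-hazard"]
a_scores = run_battery(robot_a, tasks)
b_scores = run_battery(robot_b, tasks)
print("SIH consistent with data:", sih_consistent(a_scores, b_scores))
```

The point of the sketch is only that the test is symmetric and decided by measurement: neither robot's architecture is privileged, and the hypothesis is judged on the score comparison alone.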

We are talking about a unique, singular deformed science.

I am not claiming the SIH true or false. I am claiming _nobody_ knows and 
showing what the science that empirically tests it looks like.

Under exactly what conditions is the SIH true? What exactly goes missing if the 
SIH is false?

You do not know. I do not know. Nobody knows, because the real science of it 
never gets done. There is not even a sign, anywhere in the literature, of a 
plan for robot A, the empirical science of an artificial version of natural 
general intelligence.

So when I say AI or AGI hasn't started yet, that's exactly what I am saying, 
practically.

This is about a deeply structurally deformed science supported by a community 
unaware of it.

In order that the science go on as it is, believing robot A and robot B are 
identities to the extent of never doing robot A is to participate in the 
world's first science that was born deformed and generationally trained to 
maintain it.

That's what the 'centuries old ....' reference is about in the comedy.

If you turned up with a 'Higgs bosons don't have to be created to prove they 
exist' hypothesis and asked to cancel the construction of the Large Hadron 
Collider, you'd be laughed out of town by the physicists.

Yet for the biggest boson in the history of all bosons, the EM field system 
of the brain that creates all the brain's signalling and adaptation, that's 
exactly what has happened. Nobody ever builds that boson. Instead the entire 
physics of the brain is simply thrown out, wholesale. Ironic, as well as being 
extremely naive, for the 'most complex single object in science'.

The only difference in the practice that accounts for it is the 3+ generations 
of workers untrained in real empirical science. This is a cultural problem.

Can you see this?

Before you answer, remember: I am not asking you to believe anything about 
what computers can or cannot do. You can wander off and use computers and do 
miracles. All irrelevant to this discussion. I don't care. I care about the 
science. I am talking about a deformed science, exactly what it looks like, 
and exactly what it would look like if it were normalised to look like science 
does everywhere else.

This is not an opinion. You can measure the deformation in the science. It's an 
empirical fact of the science.

But the reverse, a proof of the SIH that would justify the lack of empirical 
science testing the SIH's possible falsehood, does not exist!

This singularly deformed science, presented as a joke, is the message of the 
comedy.

If only it were as funny in reality. The entire future, and all the $ and 
effort, as a bet placed on an assumption of SIH truth that is never 
questioned, when there's a perfectly serviceable, centuries-old science 
practice ideally suited to sorting it out, and it gets ignored in this one and 
only place in science.

Have I made my point?

Colin

On Mon., 20 Aug. 2018, 8:27 am Nanograte Knowledge Technologies via AGI, 
<[email protected]<mailto:[email protected]>> wrote:
Colin

I accept your impassioned argument. I call what I do AGI for the simple reason 
that it encodes tacit and explicit knowledge into systems models, which are 
transferable onto a computational platform. Now, that is not AGI yet, but to 
my understanding it resembles one of the critical building blocks that would 
help enable eventual AGI. What I do pertains to AGI. However, about a year ago 
I posted a sound theoretical and semantic argument on this group as to why the 
term AGI was superfluous to the context of machine intelligence. As is 
typical, no one bothered commenting on it. Yet the term AGI stuck, so we are 
being forced to use it.

In my view, AI is AI, at different levels of operation and maturity. Still, 
let's go with the flow and keep it in the context of a version of AGI, shall 
we? The methodology I developed then, at that point, resembled a KM component, 
having features for 1-step mutation, diversification, and recombination. 
Quantum-based in meta design and evolutionary in operation. Here I mean 
Darwinian evolution as a complex-adaptive system, not some other mantra.
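To anchor those three terms for readers, here is a generic, textbook-style Darwinian loop. This is emphatically not Rob's unpublished method, just a minimal sketch of what "one-step mutation, diversification, and recombination" conventionally mean in a select-and-vary system; all names and the bit-string genome are illustrative choices.

```python
# Generic evolutionary loop illustrating the three operators named above.
import random


def mutate(genome):
    """One-step mutation: flip exactly one element of the genome."""
    i = random.randrange(len(genome))
    return genome[:i] + [1 - genome[i]] + genome[i + 1:]


def recombine(a, b):
    """Recombination: one-point crossover between two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]


def diversify(population, n_random):
    """Diversification: inject fresh random genomes to keep variety."""
    length = len(population[0])
    return population + [[random.randint(0, 1) for _ in range(length)]
                         for _ in range(n_random)]


def evolve(fitness, size=20, length=16, generations=50):
    """Darwinian loop: select the fittest, vary them, repeat."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: size // 2]          # selection (with elitism)
        children = [recombine(random.choice(parents), random.choice(parents))
                    for _ in range(size // 2)]
        pop = diversify([mutate(c) for c in children] + parents, 2)
    return max(pop, key=fitness)


best = evolve(fitness=sum)  # toy fitness: count of 1s in the genome
print("best fitness:", sum(best))
```

The complex-adaptive flavour comes from the interaction of the operators: selection exploits what works, crossover and mutation vary it, and the diversification step keeps the population from collapsing onto a single lineage.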

The result constitutes a scientific method within a holistic systems framework 
to enable systems-based communication. It has its own language and rules. A 
full-blown ontology. Not a computer language, but a symbolic way for it to 
express itself, sufficiently effective to achieve the aforementioned 
complex-adaptive characteristics. Again, that does not constitute AGI. Merely 
some critical building blocks towards AGI.

Still, if we could manage to place a dynamically driven knowledge engine on a 
computational platform, leave it in a location where natural stimuli may or 
may not affect its behavioral responses, and it actually auto-responded to 
adapt to those stimuli, and showed assimilation of those environmental changes 
to a degree that the same stimuli were seemingly adapted to in learning, we 
may be ready to start thinking about such a platform as edging towards AGI 
functionality.

Now, again, I'm not going to call that AGI, but would be bold enough to point 
out that the resultant machine would offer up critical building blocks towards 
achieving AGI functionality.

Suppose, then, that we could empirically translate all knowledge this machine 
encountered and encoded into such a standardized language, and integrate it 
with the knowledge base of this machine? When we saw this knowledge being 
synthesized, aged, and recombined into new contexts, in a traceable manner, 
would we then concede the makings of AGI?

Furthermore, when we observed this knowledge in states of regression, new 
insights extracted from the source, and then recombined as new knowledge to be 
synthesized with predictive contexts, would we concede AGI functionality? When 
such discoveries were taught to source, via autonomous updates, and so on 
progressively, would we reconsider AGI functionality?

I think Turing had great vision, but nonetheless a vision limited by the next 
step of foreseeable computer engineering: a psycho-social need for humans to 
relate to machines. AGI should not be tested by how well it mimics humans; 
it's way beyond the performance of average human beings. It should be tested 
by how far it can autonomously extend human-centric functionality, with 
self-motivated intent. To do so would mean to know humankind.

AGI then? Simplistically speaking, I envision a machine with the ability to 
autonomously spot an environmental situation, and to motivate itself under its 
own power to successfully act upon that environmental factor, doing so with 
increasing success. I may envision other machines too, even sub-species of 
machines.

But this one, empirically, must be hard-coded in its value set (its social 
conscience, if you will) to do good to mankind, meaning to apply its resources 
and comprehension to know what constitutes good for mankind and its 
environment, and to have the know-how to autonomously apply effective 
complexity to achieve the specific level of eco-systemic good in that 
particular context. To do so, of course, it would have to ensure its own 
survival and the survival of any goal-centric enabled network of autonomous 
machines. Then, would we call that AGI?

Assuming the plethora of sensors available to enable humano-robotic features 
such as hearing, vision, speech, and touch, I would be bold enough to say that 
in such a machine, emotion (as feedback-driven motivational competency) would 
become possible.

I can say this with certainty, because these systems models have already been 
completed, tried, and tested. The framework exists. The method exists. Many 
other modular components exist. My next job would be to assemble such a 
machine and achieve that which you so candidly assert to be impossible.

I would spend the rest of my life doing so, which is rather short in terms of 
scientific development. However, you have no right to discount the 23-odd years 
I've spent on this journey with a solid vision guiding me.

I do not blame you for your perspective, but based on what I've experienced 
during practical tests of these models, and subsequently submitted as 
well-disguised field research to the IEEE for review, and more, you are simply 
not properly informed yet. Not all knowledge is published or paraded plainly, 
meaning we can only see as far as our eyes can reach. In my case, I showed 
only that which I chose to show, for my research reasons.

As a scientist dealing in the realm of possibility, let me challenge you with 
your own words. What if this was meta-AGI? What if...

Rob



________________________________
From: Colin Hales via AGI <[email protected]<mailto:[email protected]>>
Sent: Sunday, 19 August 2018 11:06 PM
To: AGI
Subject: Re: [agi] Knock. Knock. Knock. Knock. Knock.



On Sun., 19 Aug. 2018, 6:11 pm Nanograte Knowledge Technologies via AGI, 
<[email protected]<mailto:[email protected]>> wrote:
Colin

You're right, of course. My point is: if AGI would not deal with this level of 
abstract, human communication of temporal, emotive states, how would it ever 
be taken seriously enough to solve highly abstract problems? Poets do indeed 
have their say. 😉

Next year, I hope to be living in Panama. From a stronger base, I'm quite 
certain I would then be able to start addressing the big issues in life in 
terms of a pragmatic AGI application.

For example, using AGI-tech to help understand and mitigate the unfolding 
Fukushima nuclear disaster, which is affecting the globe. There are 
life-existential problems to come to grips with and find resolutions for 
regarding health, safety, food and water security, and environmental 
contamination. There's more than enough work for all of us interested in the 
field, and no time to waste. Not all of us would be able to run away to Mars.

I think that is primarily where AGI development should be heading.

Rob


Rob,
There are eras in science history where an entire thread of a particular 
science works diligently and thoroughly for a very long time, and then finds 
itself facing a shift that renders the entire era irrelevant or misguided.

AGI is one of those eras.

There are hundreds of folk like yourself who follow ideas. It's all great 
stuff, and you never know what will result.

In what you just said about your Panama plan and environmental work, you are 
presupposing the very things my little story questions.

You presuppose that AGI science has actually started.

For the reasons in the story (yes, it's all there!) AGI hasn't started yet. 
Instead, vast cliques of automation have been labelled AI and AGI and GAI and 
sometimes ASI.

All valuable work. But none of it is founded on a properly formed science, with 
a real empirical basis.

That's the problem. Everyone assumes that to pick up a computer leads, 
potentially, to Artificial General Intelligence. A culture born in the 
'rapture' of the story.

I severely challenge that presupposition in the comedy and in my book.

Dressing a computer-based brain in a robot suit does not do empirical AGI. It 
is an elaborate form of theoretical neuroscience.

All these issues are in the comedy.

I am trying to get everyone to realise it. Real AGI science will only ever 
start when brain physics is put on the chips. It's big science, chip foundry 
work, and hasn't ever even been thought about until now, let alone started.

So when you specify that you're heading off to do the things you say, and I'm 
sure you'll do good work, I'm making a case that you shouldn't be calling it 
AGI.

And I don't just single out you. You're in great company. I mean everybody. The 
whole thing. Since 1956.

I am trying every means at my disposal to get this message out.

Nobody is working on AI. Nobody is working on AGI. Everybody is working on 
automation using theoretical models of the brain and calling it an artificial 
version of intelligence. When it's not.

This mistake has only ever happened in this one place. And it could only ever 
happen at the birth of computers. And it did.

Turning this around and correcting the science is what the story is about.

At these points in science history, those that turn up with the correction to 
practice don't usually have a good time. I can confirm that! It's no fun at 
all. Also, in these periods of shift, non-standard forms of communication can 
help dislodge the glasses of the received view.

And occasionally you get to send the whole thing up.

I hope your adventures are rewarding and impactful, but just imagine if you're 
mistaken in calling it AGI. Just ask yourself that 'What if ....'

Cheers
Colin






Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/Tc4740af26e8cd0ee-M3e64f16211a2d079e4d517ef>
