The following is by Scott Alexander, the author of Astral Codex Ten. It's
the most intelligent article about AI that I've read in a long time.

John K Clark


---------- Forwarded message ---------
From: Astral Codex Ten <[email protected]>
Date: Tue, Feb 13, 2024 at 1:14 AM
Subject: Sam Altman Wants $7 Trillion
To: <[email protected]>


Machine Alignment Monday 2/12/24
Sam Altman Wants $7 Trillion



*[All numbers here are very rough and presented in a sloppy way. For the
more rigorous versions of this, read Tom Davidson, Yafah Edelman, and
EpochAI.]*

*I.*

Sam Altman wants $7 trillion.

In one sense, this isn’t news. Everyone wants $7 trillion. I want $7
trillion. I’m not going to get it, and Sam Altman probably won’t either.

Still, the media treats this as worthy of comment, and I agree. It’s a
useful reminder of what it will take for AI to scale in the coming years.

The basic logic: GPT-1 cost approximately nothing to train. GPT-2 cost
$40,000. GPT-3 cost $4 million. GPT-4 cost $100 million. Details about
GPT-5 are still secret, but one extremely unreliable estimate says $2.5
billion, and this seems the right order of magnitude given the $8
billion that Microsoft gave OpenAI.

So each GPT costs between 25x and 100x the last one. Let’s say 30x on
average. That means we can expect GPT-6 to cost $75 billion, and GPT-7 to
cost $2 trillion.
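Spelled out as arithmetic (a sketch using the article's rough $2.5 billion
GPT-5 figure and 30x-per-generation factor):

```python
# The article's extrapolation: each GPT generation costs ~30x the last.
cost = {5: 2.5e9}   # GPT-5: the "extremely unreliable" $2.5 billion estimate
FACTOR = 30

for n in (6, 7):
    cost[n] = cost[n - 1] * FACTOR

# cost[6] == 75e9     ($75 billion)
# cost[7] == 2.25e12  (about $2 trillion)
```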

(Unless they slap the name “GPT-6” on a model that isn’t a full generation
ahead of GPT-5. Consider these numbers to represent models that are eg as
far ahead of GPT-4 as GPT-4 was ahead of GPT-3, regardless of how they
brand them.)

Let’s try to break that cost down. In a very abstract sense, training an AI
takes three things:

   - Compute (ie computing power, hardware, chips)
   - Electricity (to power the compute)
   - Training data

*Compute*

Compute is measured in floating point operations (FLOPs). GPT-3 took 10^23
FLOPs to train, and GPT-4 plausibly 10^25.


The capacity of all the computers in the world is about 10^21 FLOP/second,
so they could train GPT-4 in 10^4 seconds (ie about three hours). Since
OpenAI has fewer than all the computers in the world, it took them six
months. This suggests OpenAI was using about 1/2000th of all the computers
in the world during that time.

If we keep our 30x scaling factor, GPT-5 will take 1/70th of all the
computers in the world, GPT-6 will take 1/2, and GPT-7 will take 15x as
many computers as exist. The computing capacity of the world grows quickly
- one estimate says it doubles every 1.5 years, which means it grows by an
order of magnitude every five years, which means these numbers are
probably overestimates. If we imagine five years between GPTs, then GPT-6
will actually only need 1/10th of the world’s computers, and GPT-7 will
only need 1/3. Still, 1/3 of the world’s computers is a lot.
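As a back-of-the-envelope sketch (all inputs are the article's own rough
numbers):

```python
# GPT-4 used ~1/2000 of world compute; each generation needs ~30x more.
f_gpt4 = 1 / 2000
FACTOR = 30

fraction = {n: f_gpt4 * FACTOR ** (n - 4) for n in (5, 6, 7)}
# fraction[5] ≈ 1/67  ("about 1/70th")
# fraction[6] ≈ 0.45  ("about 1/2")
# fraction[7] ≈ 13.5  (more computers than exist)

# World capacity doubles every ~1.5 years, i.e. roughly 10x per five
# years, which is why the naive fractions above are overestimates.
SUPPLY_GROWTH_PER_5Y = 2 ** (5 / 1.5)   # ≈ 10
```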

Probably you can’t get 1/3 of the world’s computers, especially when all
the other AI companies want them too. You would need to vastly scale up
chip manufacturing.

*Energy*

GPT-4 needed about 50 gigawatt-hours of energy to train. Using our scaling
factor of 30x, we expect GPT-5 to need 1,500, GPT-6 to need 45,000, and
GPT-7 to need 1.3 million.

Let’s say the training run lasts six months, ie 4,320 hours. That means
GPT-6 will need 10 GW - about half the output of the Three Gorges Dam, the
biggest power plant in the world. GPT-7 will need fifteen Three Gorges
Dams. This isn’t just “the world will need to produce this much power total
and you can buy it”. You need the power pretty close to your data center.
Your best bet here is either to get an entire pipeline like Nord Stream
hooked up to your data center, or else a fusion reactor.
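The same arithmetic in code (50 GWh baseline, 30x per generation,
six-month run):

```python
# Energy per training run in GWh, scaled 30x per generation from GPT-4.
gwh = {4: 50}
for n in (5, 6, 7):
    gwh[n] = gwh[n - 1] * 30
# gwh == {4: 50, 5: 1500, 6: 45000, 7: 1350000}

HOURS = 4320                 # six-month training run
THREE_GORGES_GW = 22.5       # capacity of the world's biggest power plant

gpt6_gw = gwh[6] / HOURS     # ≈ 10.4 GW, about half a Three Gorges Dam
gpt7_gw = gwh[7] / HOURS     # ≈ 312 GW, on the order of fifteen Dams
```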

(Sam Altman is working on fusion power, but this seems to be a
coincidence. At least, he’s been interested in fusion since at least 2016,
which is way too early for him to have known about any of this.)

*Training Data*

This is the text or images or whatever that the AI reads to understand how
its domain works. GPT-3 used 300 billion tokens. GPT-4 used 13 trillion
tokens (another source says 6 trillion). This sort of
looks like our scaling factor of 30x still kind of holds, but in theory
training data is supposed to scale as the square root of compute - so you
should expect a scaling factor of 5.5x. That means GPT-5 will need
somewhere in the vicinity of 50 trillion tokens, GPT-6 somewhere in the
three-digit trillions, and GPT-7 somewhere in the quadrillions.
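The square-root rule in code (a sketch starting from GPT-4's ~13 trillion
tokens; sqrt(30) ≈ 5.5):

```python
import math

# Training tokens are supposed to scale as the square root of compute,
# so a 30x compute jump needs ~sqrt(30) ≈ 5.5x the tokens.
DATA_FACTOR = math.sqrt(30)

tokens = {4: 13e12}          # GPT-4: ~13 trillion tokens
for n in (5, 6, 7):
    tokens[n] = tokens[n - 1] * DATA_FACTOR
# tokens[5] ≈ 7e13 (tens of trillions), tokens[6] ≈ 4e14 (three-digit
# trillions), tokens[7] ≈ 2e15 (quadrillions)
```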

There isn’t that much text in the whole world. You might be able to get a
few trillion more by combining all published books, Facebook messages,
tweets, text messages, and emails. You could get some more by adding in all
images, videos, and movies, once the AIs learn to understand those. I still
don’t think you’re getting to a hundred trillion, let alone a quadrillion.

You could try to make an AI that can learn things with less training data.
This ought to be possible, because the human brain learns things without
reading all the text in the world. But this is hard and nobody has a great
idea how to do it yet.

More promising is synthetic data, where the AI generates data for itself.
This sounds like a perpetual motion machine that won’t work, but there are
tricks to get around this. For example, you can train a chess AI on
synthetic data by making it play against itself a million times. You can
train a math AI by having it randomly generate steps to a proof, eventually
stumbling across a correct one by chance, automatically detecting the
correct proof, and then training on that one. You can train a video game
playing AI by having it make random motions, then see which one gets the
highest score. In general you can use synthetic data when you don’t know
how to create good data, but you do know how to recognize it once it exists
(eg the chess AI won the game against itself, the math AI got a correct
proof, the video game AI gets a good score). But nobody knows how to do
this well for written text yet.
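The generate-then-verify recipe can be illustrated with a toy example
(everything here is hypothetical; the “task” of finding correct sums
stands in for proofs or game trajectories):

```python
import random

def generate_candidate(rng):
    """Randomly propose an 'equation' a + b = c (a stand-in for a proof
    attempt or a game trajectory)."""
    return rng.randint(0, 9), rng.randint(0, 9), rng.randint(0, 18)

def verify(candidate):
    """Cheap checker: we can't generate good data directly, but we can
    recognize it once it exists."""
    a, b, c = candidate
    return a + b == c

rng = random.Random(0)
# Keep only verified-correct candidates as synthetic training data.
dataset = [c for c in (generate_candidate(rng) for _ in range(10_000))
           if verify(c)]
```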

Maybe you can create a smart AI through some combination of text, chess,
math, and video games - at least some humans pursue this curriculum, and it
works fine for them.

This is kind of the odd one out - compute and electricity can be solved
with lots of money, but this one might take more of a breakthrough.

*Algorithmic Progress*

This means “people make breakthroughs and become better at building AI”. It
seems to be another one of those things that gives an order of magnitude of
progress per five years or so, so I’m revising the estimates above down by
a little.

*Putting It All Together*

GPT-5 might need about 1% of computers in the world, a small power plant’s
worth of energy, and a lot of training data.

GPT-6 might need about 10% of all the computers in the world, a large power
plant’s worth of energy, and more training data than exists. Probably this
looks like a town-sized data center attached to a lot of solar panels or a
nuclear reactor.

GPT-7 might need all the computers in the world, a gargantuan power plant
beyond any that currently exist, and *way* more training data than exists.
Probably this looks like a city-sized data center attached to a fusion
plant.

Building GPT-8 is currently impossible. Even if you solve synthetic data
and fusion power, and you take over the whole semiconductor industry, you
wouldn’t come close. Your only hope is that GPT-7 is superintelligent and
helps you with this, either by telling you how to build AIs for cheap, or
by growing the global economy so much that it can fund currently-impossible
things.
Everything about GPTs >5 is a naive projection of existing trends and
probably false. Order of magnitude estimates only.

You might call this “speculative” and “insane”. But if Sam Altman didn’t
believe something at least this speculative and insane, he wouldn’t be
asking for $7 trillion.

*II.*

Let’s back up.

GPT-6 will probably cost $75 billion or more. OpenAI can’t afford this.
Microsoft or Google could afford it, but it would take a significant
fraction (maybe half?) of company resources.

If GPT-5 fails, or is only an incremental improvement, nobody will want to
spend $75 billion making GPT-6, and all of this will be moot.

On the other hand, if GPT-5 is close to human-level, and revolutionizes
entire industries, and seems poised to start an Industrial-Revolution-level
change in human affairs, then $75 billion for the next one will seem like a
bargain.

Also, if you’re starting an Industrial Revolution level change in human
affairs, maybe things get cheaper. I don’t expect GPT-5 to be good enough
that it can handle the planning for GPT-6. But you’ve got to think of this
stepwise. Can it do enough stuff that large projects (like GPT-6, or its
associated chip fabs, or its associated power plants) get 10% cheaper?
Maybe.

The upshot of this is that we’re looking at an exponential process, like R
for a pandemic. If R > 1, it gets very big very quickly. If R < 1, it
fizzles out.

In this case, if each new generation of AI is exciting enough to inspire
more investment, *and/or* smart enough to decrease the cost of the next
generation, then these two factors combined allow the creation of another
generation of AIs in a positive feedback loop (R > 1).
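A toy version of that feedback loop (all numbers hypothetical, purely to
illustrate R > 1 vs R < 1):

```python
def generations_funded(excitement, cost_growth=30, budget=75e9, cost=75e9,
                       cap=10):
    """Count how many further model generations get built. Each round the
    next model's cost grows `cost_growth`x while the budget investors will
    commit grows `excitement`x; effective R = excitement / cost_growth."""
    n = 0
    while budget >= cost and n < cap:
        n += 1
        cost *= cost_growth
        budget *= excitement
    return n

generations_funded(50)   # R > 1: the loop sustains itself (hits the cap)
generations_funded(10)   # R < 1: fizzles after one generation
```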

But if each new generation of AI isn’t exciting enough to inspire the
massive investment required to create the next one, and isn’t smart enough
to help bring down the price of the next generation on its own, then at
some point nobody is willing to fund more advanced AIs, and the current AI
boom fizzles out (R < 1). This doesn’t mean you never hear about AI -
people will probably generate amazing AI art and videos and androids and
girlfriends and murderbots. It just means that raw intelligence of the
biggest models won’t increase as quickly.

Even when R < 1, we still get the bigger models eventually. Chip factories
can gradually churn out more chips. Researchers can gradually churn out
more algorithmic breakthroughs. If nothing else, you can spend ten years
training GPT-7 very slowly. It just means we get human or above-human level
AI in the mid-21st century, instead of the early part.

*III.*

When Sam Altman asks for $7 trillion, I interpret him as wanting to do this
process in a centralized, quick, efficient way. One guy builds the chip
factories and power plants and has them all nice and ready by the time he
needs to train the next big model.

Probably he won’t get his $7 trillion. Then this same process will happen,
but slower, more piecemeal, and more decentralized. They’ll come out with
GPT-5. If it’s good, someone will want to build GPT-6. Normal capitalism
will cause people to gradually increase chip capacity. People will make a
lot of GPT-5.1s and GPT-5.2s until finally someone takes the plunge and
builds the giant power plant somewhere. All of this will take decades,
happen pretty naturally, and no one person or corporation will have a
monopoly.

I would be happier with the second situation: the safety perspective here
is that we want as much time as we can get to prepare for disruptive AI.

Sam Altman previously endorsed this position! He said that OpenAI’s efforts
were good for safety, because you want to avoid *compute overhang*. That
is, you want AI progress to be as gradual as possible, not to progress in
sudden jerks. And one way you can keep things gradual is to max out the
level of AI you can build with your current chips, and then AI can grow (at
worst) as fast as the chip supply, which naturally grows pretty slowly.

…*unless* you ask for $7 trillion to increase the chip supply in a giant
leap as quickly as possible! People who trusted OpenAI’s good nature based
on the compute overhang argument are feeling betrayed right now.


My current impression of OpenAI’s multiple contradictory perspectives here
is that they are genuinely interested in safety - but only insofar as
that’s compatible with scaling up AI as fast as possible. This is far from
the worst way that an AI company could be. But it’s not reassuring either.



-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.