---------- Forwarded message ---------
From: Astral Codex Ten <astralcodex...@substack.com>
Date: Mon, Jan 22, 2024 at 11:49 PM
Subject: Should The Future Be Human?
To: <johnkcl...@gmail.com>


Should The Future Be Human?
Machine Alignment Monday 1/22/24

Jan 23

*I.*

Business Insider: Larry Page Once Called Elon Musk A “Specieist”:

Tesla CEO Elon Musk and Google cofounder Larry Page disagree so severely
about the dangers of AI it apparently ended their friendship.

At Musk's 44th birthday celebration in 2015, Page accused Musk of being a
"specieist" who preferred humans over future digital life forms [...] Musk
said to Page at the time, "Well, yes, I am pro-human, I fucking like
humanity, dude."

A month later, Business Insider returned to the same question, from a
different angle: Effective Accelerationists Don’t Care If Humans Are
Replaced By AI:

A jargon-filled website
spreading the gospel of Effective Accelerationism describes
"technocapitalistic progress" as inevitable, lauding e/acc proponents as
builders who are "making the future happen […] Rather than fear, we have
faith in the adaptation process and wish to accelerate this to the
asymptotic limit: the technocapital singularity," the site reads. "We have
no affinity for biological humans or even the human mind structure."

I originally thought there was an unbridgeable value gap between Page and
e/acc on one side and Musk and EA on the other. But I can imagine stories
that would put me on either side. For example:

*The Optimistic Story*

Future AIs are a lot like humans, only smarter. Maybe they resemble
Asimov’s robots, or R2-D2 from Star Wars. Their hopes and dreams are
different from ours, but still recognizable as hopes and dreams.

For a while, AIs and humans live together peacefully. Some merge into new
forms of cyborg life. Finally, the AIs and cyborgs set off to colonize the
galaxy, while dumb fragile humans mostly don’t. Either the humans stick
around on Earth, or they die out (maybe because sexbots were more fun than
real relationships).

The cyborg/robot confederacy that takes over the galaxy remembers its human
forebears fondly, but does its own thing. Its art is not necessarily
comprehensible to us, any more than James Joyce’s *Ulysses* would be
comprehensible to a caveman - but it *is* still art, and beautiful in its
own way. The scientific and philosophical questions it discusses are too
far beyond us to make sense, but they *are* still scientific and
philosophical questions. There are political squabbles between different AI
factions, monuments to the great robots of ages past, and gleaming
factories making new technologies we can barely imagine.

*The Pessimistic Story*

A paperclip maximizer kills all humans, then turns the rest of the galaxy
into paperclips. It isn’t “conscious”. It may delegate some tasks to
subroutines or have multiple “centers” to handle speed-of-light delay, but
the subroutines / centers are also non-conscious paperclip maximizers. It
doesn’t produce art. It doesn’t do scientific research, except insofar as
this helps it build better paperclip-maximizing technology. It doesn’t care
about philosophy. It doesn’t build monuments. It’s not even meaningful to
talk about it having factories, since it exists primarily as a
rapidly-expanding cloud of nanobots. It erases all records of human
history, because those are made of atoms that can be turned into
paperclips. The end.

(for a less extreme version of this, see my post on the Ascended Economy)

I think the default outcome is somewhere in between these two stories, but
whether I call it “catastrophic” or “basically fine” depends on the exact
contours of where it resembles each.

Here are some things I hope Larry Page and the e/accs are thinking about:

*Consciousness*

I know this is fuzzy and mystical-sounding, but it really does feel like a
loss if consciousness is erased from the universe forever, maybe a total
loss. If we’re lucky, consciousness is a basic feature of information
processing and anything smart enough to outcompete us will be at least as
conscious as we are. If we’re not lucky, consciousness might be associated
with only a tiny subset of useful information processing regimes (cf. Peter
Watts’s *Blindsight*).
Consciousness seems very closely linked to brain waves in humans; existing
AIs have nothing even remotely resembling these, and it’s not clear that
anything like them would be useful in systems based on deep learning.

*Individuation*

I would be more willing to accept AIs as a successor to humans if there
were clearly multiple distinct individuals. Modern AI seems on track to
succeed at this - there are millions of instances of eg GPT. But it’s not
obvious that this is the right way to coordinate an AI society, or that a
bunch of GPTs working together would be more like a nation than a hive mind.

*Art, Science, Philosophy, and Curiosity*

Some of these things are
emergent from any goal. Even a paperclip maximizer will want to study
physics, if only to create better paperclip-maximization machines. Others
aren’t. If art, music, etc come mostly from signaling drives, AIs with a
different relationship to individuality than humans might not have these.
Music in particular seems to be a spandrel of other design decisions in the
human brain. All of these might be selected out of any AI that was
ruthlessly optimized for a specific goal.

*Will AIs And Humans Merge?*

This is the one where I feel most confident in my answer, which is: not by
default.

In millennia of invention, humans have never before merged with their
tools. We haven’t merged with swords, guns, cars, or laptops. This isn’t
just about lacking the technology to do so - surgeons could implant swords
and guns in people’s arms if they wanted to. It’s just a terrible idea.

AI is even harder to merge with than normal tools, because the brain is
very complicated. And “merge with AI” is a much harder task than just
“create a brain-computer interface”. A brain-computer interface is where
you have a calculator in your head and can think “add 7 + 5” and it will do
that for you. But that’s not much better than having the calculator in your
hand. Merging with AI would involve rewiring every section of the brain to
the point where it’s unclear in what sense it’s still your brain at all.

Finally, an AI + human Franken-entity would soon become worse than AIs
alone. At least, this is how things worked in chess. For about ten years
after Deep Blue beat Kasparov, “teams” of human grandmasters and chess
engines could beat chess engines alone. But this is no longer true - the
human no longer adds anything. There might be a similar ten-year window
where AIs can outperform humans but cyborgs are better than either - but
realistically, once we’re far enough into the future that AI/human mergers
are possible at all, that window will already be closed.

In the very far future, after AIs have already solved the technical
problems involved, some eccentric rich people might try to merge with AI.
But this won’t create a new master race; it will just make them slightly
less far behind the AIs than everyone else.

*II.*

Even if all of these end up going as well as possible - the AIs are
provably conscious, exist as individuals, care about art and philosophy,
etc - there’s still a residual core of resistance that bothers me. It goes
something like:

Imagine that scientists detect a massive alien fleet heading towards Earth.
We intercept and translate some of their communications (don’t ask how) and
find they plan to kill all humans and take Earth’s resources for themselves.

Although the aliens are technologically beyond us, science fiction suggests
some clever strategies for defeating them - maybe microbes like *War of the
Worlds*, or computer viruses like *Independence Day*. If we can pull
together a miracle like this, should we use it?

Here I bet even Larry Page would support Team Human. But why? The aliens
are more advanced than us. They’re presumably conscious, individuated, and
have hopes and dreams like our own. Still, humans *über alles*.

Is this specieist? I don’t know - is it racist to *not* want English
colonists to wipe out Native Americans? Would a Native American who
expressed that preference be racist? That would be a really strange way to
use that term!

I think rights trump concerns like these - not fuzzy “human rights”, but
the basic rights of life, liberty, and property. If the aliens want to kill
humanity, then they’re not as superior to us as they think, and we should
want to stop them. Likewise, I would be most willing to accept being
replaced by AI if it didn’t want to replace us by force.

*III.*

Maybe the future should be human, and maybe it shouldn’t. But the kind of
AIs that I’d be comfortable ceding the future to won’t appear by default.
And the kind of work it takes to make a successor species we can be proud
of is the same kind of work it takes to trust that successor species to
make decisions about the final fate of humanity. We should do that work
instead of blithely assuming that we’ll get a kind of AI we like.
