From: [EMAIL PROTECTED]
Date: August 14, 2004 4:09:30 AM GMT+09:00
To: [EMAIL PROTECTED]
Subject: [NEC] #3.5: The Possibility of Spectrum As A Public Good
Reply-To: [EMAIL PROTECTED], [EMAIL PROTECTED]
NEC @ Shirky.com, a mailing list about Networks, Economics, and Culture
Published periodically / #3.5 / August 13, 2004
Subscribe at http://shirky.com/nec.html
Archived at http://shirky.com
Social Software weblog at http://corante.com/many/
In this issue:
- Introduction
- Essay: The Possibility of Spectrum as a Public Good
Also at http://www.shirky.com/writings/spectrum_public_good.html
- Other notes:
3G Owners Protest
The FCC, CALEA, and Plan A vs. Plan B
* Introduction =======================================================
My apologies for a slow summer -- a surprising surge in writing for
clients and an unsurprising surge in midnight diaper changes kept me
away from NEC.
I'm obsessed with two things these days -- the phone system (VoIP,
mobile phones as a hackable platform, non-telco networking in phones
like Bluetooth and Wifi) and mesh networking, an extension of the
peer-to-peer revolution away from the PC, with little devices
self-configuring in networks so that their performance _increases_
with the addition of new nodes rather than decreasing, a phenomenon
David Reed calls cooperation gain.
The essay here is on the former topic, being a lightly edited version
of my comments on the FCC's proposed increase of unlicensed spectrum,
originally posted at wirelessunleashed.com.
Increasing unlicensed spectrum is a _very_ big deal. For all the
attention the FCC has gotten for ruling on media concentration and
offensive speech, those actions were minor adjustments in existing
rules. Changing the way spectrum is regulated, by contrast, is
profound, since it means abandoning one of the founding myths of the
FCC -- that frequency should be treated like property.
We've had the engineering to upend this assumption for some time now,
but the regulatory regime has limited its application to relatively
high frequencies like 2.4G, where Wifi lives. If the FCC really does
transmute spectrum in the lower and more useful sub-1Ghz part of the
spectrum, it will change almost every part of the media landscape, as
mass media was based on the assumption that spectrum was scarce.
The commercial forces that profit from the current synthetic scarcity
are dead-set against any such change, of course, so there's every
chance that good engineering will lose out to bad rulemaking, but hey,
a girl can dream...
-clay
PS. I am continuing to write on the topic of social software, but am
doing so in short bursts on Many-to-Many (http://corante.com/many/),
as I am working out some ideas that are in pre-essay form.
* Essay =============================================================
The Possibility of Spectrum as a Public Good
http://www.shirky.com/writings/spectrum_public_good.html
The FCC is considering opening up additional spectrum to unlicensed
uses -- the same kind of regulatory change that gave rise to
Wifi. Much of the spectrum being considered for unlicensed use is
currently allocated for broadcasters, however, so the FCC's proposal
creates tension between incumbents and groups that want to take
advantage of the possibilities inherent in unlicensed spectrum.
Most issues the FCC deals with, even contentious ones like limits on
the ownership of radio and television stations, are changes within
regulatory schemes. The recent proposal to move the maximum media
market reach from 35% to 45% took the idea of an ownership cap itself
at face value, and involved a simple change of amount.
Unlicensed spectrum is different. In addition to all the regulatory
complexities, an enormous philosophical change is being
proposed. Transmuting spectrum from licensed to unlicensed changes
what spectrum _is_. This change is possible because of advances in the
engineering of wireless systems.
This matters, a lot, because with the spread of unlicensed wireless,
the FCC could live up to its mandate of managing spectrum on behalf of
the public, by allowing for and even encouraging engineering practices
that treat spectrum itself as a public good. A public good, in
economic terms, is something that is best provisioned for everyone (an
economic characteristic called non-excludability) and which anyone can
use without depleting the resource (a characteristic called non-rival
use -- individual users aren't rivals for the resource.)
This transformation will be no easy task, because the proposed change
differs radically from the current regulatory view of spectrum, which
is two parts physics to two parts engineering.
- Two Parts Physics
Though the details can be arcane, the physics of spectrum is
relatively simple. Spectrum, in the aggregate, is just a collection of
waves, and a wave is defined by its characteristic frequency, measured
by counting the number of waves that pass a given point in a second --
the more waves, the higher the frequency. (Wavelength is a corollary
measurement -- the more waves that pass a point per second, the
shorter the length of each wave; therefore, the greater the frequency,
the shorter the wavelength. Wavelength and frequency are just
alternate ways of expressing the same characteristic.)
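That equivalence is just wavelength = c / frequency, where c is the
speed of light. A quick sketch (Python used purely for illustration;
the function name is mine):

```python
C = 299_792_458  # speed of light, in meters per second

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in meters for a wave of the given frequency in Hz."""
    return C / frequency_hz

# Wifi's 2.4 Ghz waves are roughly 12.5 cm long; a 100 Mhz FM
# broadcast wave is about 3 meters long -- higher frequency,
# shorter wavelength.
print(wavelength_m(2.4e9))  # ~0.125
print(wavelength_m(100e6))  # ~3.0
```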
The easiest part of spectrum to understand is light -- light is simply
that collection of frequencies the eye can detect. Other than that,
though, there is no real difference between light and radio waves;
they are all part of the same electro-magnetic spectrum. Light has a
very high frequency compared to almost all useful communications
spectrum. Like all high-frequency waves, light can't pass through
walls, while lower frequencies can -- in fact, the lower the
frequency, the better the penetration. This makes low frequencies more
valuable for long-range communication, particularly in urban areas.
The second important characteristic of spectrum is power. Like the
diminishing height of waves that emanate outward from a rock dropped
in a pond, the power of a wave radiating outward from a broadcasting
antenna falls as the distance from the antenna increases. Worse, this
falloff isn't merely proportional to distance; it is proportional to
the square of that distance. This pattern, called the inverse square
law, says that power at distance N falls off as 1/N^2 -- two miles
from a given broadcaster, the signal will be 1/4th the strength of the
signal at one mile; at three miles, it will be 1/9th; and so on.
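The inverse square law fits in a two-line function (Python used for
illustration; the name is mine):

```python
def relative_power(distance: float) -> float:
    """Signal power at a given distance, relative to distance 1 (1/N^2)."""
    return 1.0 / distance ** 2

print(relative_power(2))  # 0.25, i.e. 1/4th at two miles
print(relative_power(3))  # ~0.111, i.e. 1/9th at three miles
```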
- Two Parts Engineering
Frequency and power are real attributes of the waves that make up
spectrum. The questions revolving around regulation of that spectrum,
though, aren't about those characteristics. Instead, they are about
the engineering of systems that make use of the characteristics of
frequency and power. Right now, the FCC regulations make two
assumptions about such systems, based largely on radio engineering as
it existed for most of the 20th century.
First, frequency. Current regulation assumes that a given frequency is
like a virtual wire. For a sender and receiver to communicate, they
need to be communicating on a single, agreed-on frequency. Though our
experience of receiving these frequencies is sometimes discrete
(changing the channel on TV) and sometimes variable (turning the dial
on a radio), the process is always the same -- making the receiver
listen in to the specific frequency being sent by the transmitter.
Treating frequency as a wire also sets limits on the amount of data
that can be transmitted, since the data is encoded as minor changes to
the waves themselves. In the frequency-as-wire model, the higher the
frequency of waves, the higher the data rate, and the lower the
frequency, the lower the rate. Because of the tradeoff between
penetration and data rate, most
of the useful radio frequencies are in the kilohertz (Khz) to
Gigahertz (Ghz) range -- low enough to travel through walls, high
enough to carry the data required for voice or video signals.
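A toy model of the frequency-as-wire assumption, with an illustrative
bits-per-cycle figure of my own choosing (real modulation schemes are
far more involved):

```python
BITS_PER_CYCLE = 1  # illustrative assumption, not a real modulation figure

def frequency_as_wire_rate_bps(frequency_hz: float) -> float:
    """Max data rate if each wave cycle carries a fixed number of bits."""
    return frequency_hz * BITS_PER_CYCLE

# In this model a 1 Mhz channel tops out around 1 Mbit/s, and a
# 1 Ghz channel around 1 Gbit/s -- higher frequency, higher data rate.
print(frequency_as_wire_rate_bps(1e6))  # 1000000.0
print(frequency_as_wire_rate_bps(1e9))  # 1000000000.0
```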
Second, power. Because a given frequency is treated like a wire, and
because power falls off so rapidly as it radiates outwards from the
broadcasting antenna, the communication between sender and receiver
relies on no other broadcaster using that same frequency in the same
geographic area. If two or more broadcasters are using the same
frequency, a standard receiver won't be able to discern one signal
from another. Though engineering parlance calls this interference, the
waves aren't actually interfering with one another -- rather the
profusion of signals is interfering with the receiver's ability to
listen to one specific signal.
- Property
In the early decades of radio, interference was a ubiquitous problem
-- no receiving hardware could distinguish between two signals of
similar frequencies. This model of interference required strict limits
on use of a particular frequency, in order to ensure reception -- a
sender had to "own" a frequency to use it. In its role as the
regulator of spectrum, the FCC has been in the business of managing
these engineering tradeoffs, determining who gets to use what spectrum
(based in part on requirements for penetration of buildings and
carriage of data, and in part on what's available.) Once spectrum has
been allocated, the FCC then enforces rights over the spectrum on
behalf of the owners, in order to ensure that no other signals risk
confusing receivers in proximity to the antenna.
With the old model of transmitters locked on one frequency and
receivers unable to do anything but listen, this was the right
answer. Accordingly, almost all the usable spectrum was licensed to a
small number of parties, especially the Government and
broadcasters. These organizations in turn use only a tiny fraction of
this spectrum, treating the rest of it as "white space", a buffer zone
against competition from other signals. (This imbalance between used
and unused spectrum is actually getting more extreme as broadcasters
transition to digital signals, which require an even narrower slice of
frequency than analog signals do.)
Thus, because of engineering assumptions, the FCC treats spectrum as
property, a regulatory approach that creates enormous difficulty,
since spectrum isn't actually property. The necessary characteristics
of property are the opposite of the characteristics of a public good.
Things like shoes, cars, and houses are all property. Property is
excludable -- it is easy to prevent others from using it -- and rival
-- meaning that one person's use of it will interfere with another
person's use of it. Spectrum has neither characteristic. Spectrum is
purely descriptive -- a frequency is just a particular number of waves
a second -- so no one can own a particular frequency of spectrum in
the same way no one can own a particular color of light.
Instead, when an organization 'owns' spectrum, what it really has
is a contract guaranteeing Federal prosecution if someone else
broadcasts on its frequency in its area. The regulatory costs of
forcing spectrum to emulate property are enormous, but worthwhile so
long as it leads to better use of spectrum than other methods
can. That used to be true. No longer.
- The Philosopher's Stone
In the handling of spectrum, technological improvement is the
philosopher's stone, capable of turning one kind of material into
another. Since the treatment of spectrum as property is an artifact of
current regulatory structure, itself an artifact of engineering
assumptions, changing the engineering can change what spectrum is, at
least in a regulatory setting. This matters, because the
inefficiencies and distortions arising from treating spectrum as
property create obstacles to more economically efficient and flexible
uses of wireless communication.
There have been two critical changes in the engineering of radio
systems since the FCC's implicit model was adopted. The first is
computationally smart devices that can coordinate with one
another. One possible use of such smart devices is to allow the sender
to broadcast not with as much power as possible, but with as
little. Because smart senders and receivers can coordinate, they can
agree on different degrees of broadcast power in different situations,
in the same way people modulate their volume around a dinner
table. Because the sender no longer has to use maximum power to
maximize the receiver's ability to 'hear' the signal, we can reduce
the overall power required in the system (and thus the cause of
traditional interference), even if no other aspect of radio
engineering were to change.
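A toy sketch of that idea, not any real protocol: given the inverse
square falloff, a smart sender can compute the least power that still
reaches its receiver (the sensitivity figure here is an arbitrary
assumption):

```python
RECEIVER_SENSITIVITY = 0.01  # arbitrary units; an assumed figure

def minimum_transmit_power(distance: float) -> float:
    """Least power whose 1/d^2 falloff still meets receiver sensitivity."""
    return RECEIVER_SENSITIVITY * distance ** 2

# A receiver one unit away needs 1/100th the power of one ten units
# away -- whispering across the dinner table instead of shouting.
print(minimum_transmit_power(1.0))   # 0.01
print(minimum_transmit_power(10.0))  # 1.0
```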
The second, and more fundamental, change is the invention of
spread-spectrum radio. As the name suggests, spread-spectrum encodes
data on several frequencies simultaneously. This has two critical
advantages. First, it breaks the link between the frequency of a
particular signal and the amount of data that can be sent between
devices, allowing data transfer rates to be much higher than the
carrying capacity of frequency considered as a virtual wire. Second,
because both sender and receiver are computationally smart, they can
agree on ways of sending and receiving data that largely avoid the
traditional form of interference.
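One classic spread-spectrum technique is frequency hopping; here is a
minimal sketch, assuming sender and receiver share a seed (the channel
count and all names are illustrative, not any real radio standard):

```python
import random

CHANNELS = 79  # illustrative hop-channel count

def hop_sequence(shared_seed: int, hops: int) -> list:
    """Pseudorandom channel sequence derived from a shared seed."""
    rng = random.Random(shared_seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

# Same seed -> same sequence: sender and receiver hop in lockstep
# across many frequencies, never camping on a single 'virtual wire'.
sender = hop_sequence(42, 8)
receiver = hop_sequence(42, 8)
assert sender == receiver
```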
Neither smart radios nor spread spectrum existed in 1934, the year of
the FCC's birth, and the context in which many of its most basic
engineering assumptions were set. We have good theoretical reasons to
believe that these techniques can transform the way we treat
spectrum. We also have a good practical reason to believe it -- Wifi.
- The Example of Wifi
Wifi, operating in a slice of unlicensed spectrum at the relatively
high frequency of 2.4 Ghz, has been one of the bright economic spots
during the tech downturn, with base stations and cards shipping at a
torrid pace throughout the last few years. (The number of Wifi PC
cards shipped is expected to top 20 million this year.) Wifi is also a
giant demonstration project of what can happen when the problem of
non-interference is left up to smart devices, rather than arranged by
fiat.
The first-order value of this is obvious: You and I can be neighbors,
both running Wifi routers that broadcast signal into one another's
apartments, without generating anything that looks like the old model
of interference. This lowering of coordination costs between
participants in the system has had a hugely beneficial effect on the
spread of the technology (incredibly rapid for hardware), because no
one has to ask for help or permission to set up a Wifi node, nor do
they have to coordinate with anyone else making the same set of
choices.
There is a surprising second-order value of the Wifi revolution as
well: an alternate model of capitalization. Most wireless technology,
whether TV, radio, or phones, requires a huge investment up front in
broadcasting equipment, an investment which limits what can be done
with the technology, since all subsequent uses require extracting
money from the users or third parties like advertisers, in order to
recoup the investment and cover the ongoing expenses.
Wifi networks, by contrast, are capitalized by the users, one hotspot
or PC card at a time. This model has provided an enormous amount of
flexibility in business models, from the Wireless ISP model being
pursued by T-Mobile and Starbucks; to the civic infrastructure model,
as with Emenity unwiring parks and other public spots; to the office
LAN model, where a business treats Wifi access as part of the cost of
doing business. And then, of course, there's the home user model,
where the user sets up an access point in their house and uses it
themselves, as they would a toaster or a TV, without needing to offer
access to anyone else, or to come up with a business model to cover
the small one-time charge.
There are two ways to build $10 billion in network infrastructure. The
first is to get ten large firms to pony up a billion, and the second
is to get 10 million users to spend a hundred dollars each. Wifi fits
that second model, and has created an explosion of interest and
experimentation that would be impossible to create in a world where
the 2.4Ghz band was treated as property.
- Definition of a public good
The Wifi story has two parts -- 20 years ago, the FCC decided to allow
communications tools to operate in the 2.4Ghz band, but refused
to police interference the old way. Instead, the devices had to be
able to operate in noisy environments.
Then, for 15 of those 20 years, nothing much happened other than the
spread of garage door openers. The regulatory change alone wasn't
enough. The second part of the story was the development of Wifi as a
standard that any company could build products for, products that
would then be interoperable.
These two characteristics -- unlicensed spectrum and clear engineering
standards -- helped ignite the Wifi revolution. The 2.4Ghz spectrum is
not treated as property, with the FCC in the ungainly role of a "No
Trespassing" enforcer; instead, it is being treated as a public good,
with regulations in place to require devices to be good neighbors, but
with no caps or other restrictions on deployment or use.
Though Wifi-enabled hardware is property, of course, the slice of
spectrum the hardware uses isn't. Anyone can buy a Wifi base station
or card to make use of the 2.4 Ghz spectrum (that is, the spectrum is
non-excludable.) Furthermore, anyone can use it without interfering
with others' uses of it (it is non-rival as well.)
The right to broadcast on the 2.4G spectrum is almost worthless, since
everyone has that right in an unlicensed regime. But the economic
value created by uses of 2.4G is almost certainly higher than for any
other section of spectrum, and is still growing rapidly.
- So What Could Be Bad?
The problem with Wifi, however, is that it is in the wrong frequency
for wide deployment. 2.4Ghz is the frequency for baby monitors and
cordless phones -- applications designed to operate at short
distances, and usually to be contained by the walls of a house, which
Wifi penetrates only weakly. Though it's possible to do a
remarkable amount to extend the range of a Wifi signal through
amplification and antenna design, the basic physics of higher
frequencies means that Wifi isn't appropriate for uses meant to cover
distances of miles rather than yards. Wifi is great for unwiring a
home, or carpeting an urban park, but lousy for getting bandwidth to
rural or remote areas, or indeed to anyplace that doesn't already have
a wired connection to the internet to extend.
With Wifi as proof of concept, it should be easy to argue that other,
lower frequency spectrum should be transmuted from licensed to
unlicensed (which is to say from a synthetic property model to a
public good.) The argument runs into trouble, though, on the fact that
almost all useful spectrum is presently regulated like property,
meaning that any such re-assignment of spectrum will involve the
current license holders of the spectrum in question.
The broadcasters have a legitimate concern about old-style
interference, of course. After 70 years of hearing that anyone else
broadcasting in their spectrum would be catastrophic, they are
understandably leery of approaches that adopt alternate models of
interference, even models that only operate in their unused "white
space."
Unlike the 2.4 Ghz band, which was already used by microwave ovens and
other appliances, the broadcasters' spectrum is only used for
communications, so they will have to be shown that new devices can not
only cooperate with one another, but operate without disrupting
current signals. (The prospects for this are good -- in a related test
in February concerning low-power radio, the company performing the
interference tests concluded, "Due to the lack of measurable
interference produced by [low-power] stations during testing, the
listener tests and economic analysis scheduled for Phase II of the
LPFM field tests and experimental program should not be done."
http://www.mediaaccess.org/programs/lpfm/RADIOmagazine.pdf)
Beneath the simple challenge of avoiding interference, though, is a
deeper and more hidden fear. Spectrum is currently valuable because it
is scarce, and it is scarce because it is treated like property. Even
if novel uses of spectrum can be shown not to interfere with the
current broadcast model, evidence that spectrum can be transmuted from
a property-rights model to being treated as a public good might not be
welcome, in part because it could call into question the hold the
broadcasters have on spectrum. This is especially true now that over
85% of television viewers get their TV from cable and satellite, not
from traditional broadcast.
The potential threat to spectrum holders is clear. We have a set of
arguments for creating and enforcing property rights for things that
aren't actually property. We usually apply this artificial scarcity to
intellectual property -- patents, trademarks, copyright -- and grant
these rights to protect certain forms of abstract work or
communications.
The rationale for all these rights, however, is to reward their
creators for novel intellectual work. This does not offer much relief
to spectrum holders seeking a justification for continued Government
enforcement of scarcity. None of the current holders of spectrum have
created any of it -- a wavelength is a physical property that cannot
be created or destroyed. If spectrum can be regulated without the
traditional licensing regime, it's hard to argue that the Government
has a compelling interest in creating and enforcing scarcity.
And this is what makes the current fight so interesting, and so
serious. There are simple arguments about interference, but the
ramifications of these arguments are about essence -- what kind of
thing is spectrum? We have the opportunity to create a world where
cheap but smart equipment allows for high utility and low coordination
costs between users.
As we've seen with Wifi, a small slice of spectrum can become an
enormous platform for innovation and user-created value. The kinds of
economic activity we've seen in the limited example of Wifi can be
realized on a much broader scale. The only issue now is whether and
how the FCC manages its proposed transmutation of small slices of
spectrum away from property rights and towards a model that regulates
spectrum as a public good.
* Other Notes ========================================================
- 3G Owners Protest
If you want to see how a group threatened by unlicensed spectrum
addresses the issue, look no further than the 3G Americas white
paper. http://www.3gamericas.org/PDFs/licensed_unlicensed_spectrum.pdf
Throughout the paper, there is a curious disconnect between the
technical analysis, which is appropriately concerned with the risk of
interference if unlicensed devices proliferate, and the political cast
of their recommendations, which exhibit a kind of stifled terror that
there may in fact be good answers to their technical concerns.
The report is shot through with this dichotomy -- here is but one
paragraph, from their conclusions:
While many proposals for systems employing unlicensed spectrum have
been offered, most work on an intelligent device premise. That is,
the device tries to determine what spectrum is available for use and
attempts to use it. While potentially viable, the potential also
clearly exists for harmful interference, poor efficiency, and
disruptions in service. In contrast, a properly designed unlicensed
WAN system offers the public an intelligent network of devices that
can explicitly avoid interference, apply advanced modulations,
ensure spectrum rights are not abused, allow spectrum owners the
ability to share their spectrum without fear that control over it
will be lost, and even allow spectrum owners to offer their spectrum
for short-term sale through an automated database system. These
overwhelming benefits make it clear that WANs should be developed in
such a way that access to the spectrum is controlled so as to make
the most efficient use of the available spectrum.
Got that? This 'intelligent device' stuff is potentially viable, but
should not in any case be tried, because of the overwhelming benefits
of managed wireless WANs. And what are those benefits? They list 5:
* Explicit avoidance of interference (which the old model of
spectrum as property offers)
* Apply advanced modulations (this is not, strictly speaking, an
advantage, as intelligent device systems can do so as well)
* Ensure that spectrum rights are not abused (for whom is this an
advantage, I wonder?)
* Allow owners to share spectrum without fear of loss of control
(ditto)
* Allow spot markets to increase profits to spectrum owners
(ditto)
And there you have it -- of the 5 "overwhelming" benefits of extending
the current system, one is a recapitulation of the idea of spectrum
as property itself, one is not in fact a benefit peculiar to this
solution, and three of the five benefits are about maintaining the
commercial value of spectrum for current owners, rather than
increasing value for users.
No need to be a conspiracy theorist when it's all out in the open like
that...
* The FCC, CALEA, and Plan A vs. Plan B
Back in February, I wrote VoIP: Plan A vs. Plan B, which posited that
the VoIP world was characterized by two strategies: Plan A was to
replace the traditional telephone system incrementally and from
within, while Plan B was to replace the telcos all at once.
http://shirky.com/writings/voip_a_b.html
Even as I posted it, I suspected I was being proven wrong, and that as
Vonage increased its non-telco-like services while Skype went after
telco-interoperability, the two strategies would merge.
Now, though, it looks like that analysis may be right. At the time I
said "...much of how VoIP unfolds will have to do with regulations
written in 2004", but I thought those regulations would have to do
with state competition rules. It turns out instead to have been CALEA,
the Communications Assistance for Law Enforcement Act.
There's a lot of good stuff on the FCC's decision to force VoIP
providers to implement CALEA wiretaps, much of it having to do with
policy and jurisdictional questions. But of everything I've read, this
bit from Bob Frankston really hits the nail on the head:
I've got to give Mike Powell credit here for a nice political way of
saying that CALEA is moot: "The item also tentatively concludes that
non-managed, or disintermediated, VoIP and Instant Messaging are not
subject to CALEA, and that it is unnecessary to identify future
services and entities subject to CALEA."
The FCC, in other words, is drawing a line not between circuit-
switched and packet-switched communications, but between anything that
emulates the phone and anything that doesn't, and saying that they
can't imagine enforcing phone-like rules on non-phone-like
communications.
In the end, the likely acid test for this 'quacks like a duck'
decision will be whether a particular form of voice communication is
keyed to phone numbers. The irony here is that for all their physical
holdings, the telcos' greatest asset may now be the use of phone
numbers as an address space. Touch that, as a VoIP provider, and you
take on enormous new burdens. (And God help this leaky decision when
ENUM makes phone numbers addressable by DNS, the beating heart of many
a non-managed and disintermediated service.)
The immediate effect of this, assuming it stands without significant
moderation, will be to reinforce the previously weakening Plan A/Plan
B split. Any company hoping to interoperate with the telcos will
suddenly take on significant new costs, costs which will not accrue
to firms offering two-way voice in other applications, like IM. (I'm
finding I do an increasing amount of my business communication using
iChat as a voice tool, for example.)
This will retard, and perhaps even reverse, the move towards
internet/telco hybrids, and will create great value for services that
don't interoperate with the legacy phone system.
* End ================================================================
This work is licensed under the Creative Commons Attribution License.
The licensor permits others to copy, distribute, display, and perform
the work. In return, licensees must give the original author credit.
To view a copy of this license, visit
http://creativecommons.org/licenses/by/1.0
or send a letter to
Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305,
USA.
2004, Clay Shirky
_______________________________________________
NEC - Clay Shirky's distribution list on Networks, Economics & Culture
[EMAIL PROTECTED]
http://shirky.com/nec.html