Brent, I believe you are correct; cellphones regularly broadcast in order to
participate in the network. A steerable antenna could cut power usage by a
large factor - maybe even by an order of magnitude - but it would need to
constantly reorient itself as it gets shifted around the x, y, and z axes
while, for example, riding in a pocket as someone walks. 

I think in this case software could help on a couple of levels. 

Obviously a lower-powered antenna would be a huge win - and would make the
patent holder very wealthy - but absent that:

It could be possible, by algorithmic means, to sharpen the quality of an
otherwise unusably poor signal, thereby enabling the use of a much lower-
powered antenna. Another possibility is in how the mobile unit and the
network sync. The network could buffer attempts to contact the mobile unit
for a short duration (short from the human perspective, but an eon from the
machine perspective) without it being excessively noticeable to users. The
mobile device would thus limit its communication back to the cell network
to a shared, configured ping schedule. The network would know when to
expect a ping, and if anything was buffered for the mobile device, it would
make the connection at that time.

The second option of course relies on a controlled degradation of the
service, kept below the level where users begin to notice the delays. By
sharing a configured schedule, both the cell network and the mobile device
would have advance knowledge of the next sync point (something on the order
of every seven seconds), enabling both sides of the networked handshake to
optimize for that synchronization point.
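As a concrete sketch of how that shared schedule could work (everything
here is hypothetical - a toy model of the buffering and the agreed sync
points, not the real air interface):

```python
import heapq

SYNC_INTERVAL = 7.0  # seconds; the schedule both sides agree on in advance


class Network:
    """Buffers contact attempts until the device's next scheduled ping."""

    def __init__(self):
        self.buffer = []  # heap of (arrival_time, message)

    def incoming_call(self, t, msg):
        heapq.heappush(self.buffer, (t, msg))

    def on_ping(self, t):
        """Device pinged at time t: deliver everything buffered so far."""
        delivered = []
        while self.buffer and self.buffer[0][0] <= t:
            delivered.append(heapq.heappop(self.buffer))
        return delivered


def next_sync_point(t):
    """Both sides compute this independently from the shared schedule."""
    return ((t // SYNC_INTERVAL) + 1) * SYNC_INTERVAL


net = Network()
net.incoming_call(3.2, "call from Alice")
net.incoming_call(6.9, "text from Bob")

t_ping = next_sync_point(3.2)     # 7.0 - the device can sleep until then
msgs = net.on_ping(t_ping)        # both buffered contacts delivered
worst_case_delay = t_ping - 3.2   # always less than SYNC_INTERVAL
```

The key property is that the worst-case added latency is bounded by the
interval, so a roughly seven-second schedule keeps the degradation below
what most users would notice.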

A third option is to increase the number of base stations by several orders
of magnitude and go to a much lower-powered signal. The antenna would
perhaps still be the main power draw, but overall energy would be saved
because the transmission signal strength could be much lower (since cell
base stations would be much more numerous).
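That trade can be made roughly quantitative with a simple power-law
path-loss model (the exponent and reference numbers below are illustrative
assumptions, not measurements):

```python
def required_tx_power(radius_m, exponent=3.5, p_ref=2.0, r_ref=1000.0):
    """Transmit power needed to reach the cell edge, relative to a
    reference cell (p_ref watts at r_ref metres), under a simple
    power-law path-loss model: P grows as r**exponent.

    The 2 W / 1 km reference and the 3.5 exponent are assumptions
    chosen for illustration only."""
    return p_ref * (radius_m / r_ref) ** exponent


# Increasing base-station density 100x per unit area shrinks the cell
# radius by a factor of 10:
p_large = required_tx_power(1000.0)  # 2 W at a 1 km cell edge
p_small = required_tx_power(100.0)   # same model, 100 m cells
ratio = p_large / p_small            # 10**3.5, i.e. ~3000x less power
```

Because path loss grows faster than the square of distance in cluttered
environments, shrinking the cell radius buys a disproportionate reduction
in required transmit power.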

-Chris

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
Sent: Friday, September 20, 2013 11:47 PM
To: everything-list@googlegroups.com
Subject: Re: What gives philosophers a bad name?

 

Correct me if I'm wrong, but isn't it worse than that?  Doesn't the
smartphone (or cell phone) radiate even when you're not talking, so that the
system knows where you are if someone calls you?  The only improvement in
efficiency I could suggest is electronically steerable antennae to reduce
the required radiated power.

Brent

On 9/20/2013 8:08 PM, L.W. Sterritt wrote:

Chris, Brent and meekerdb,  

While we have been considering optimizing the efficiency of circuitry and
software, we neglected that while talking on the smartphone, 1/2 of the
total power budget goes to radiation from the smartphone antenna - about 2
Watts, as I remember.  That will drain a typical smartphone battery in less
than 3 hours, and there is not a lot we can do about it, except use the
phone for all of its other functions and not talk too much!  

LWSterritt

 

 

On Sep 20, 2013, at 5:24 PM, meekerdb <meeke...@verizon.net> wrote:





On 9/20/2013 4:40 PM, Chris de Morsella wrote:

Current software is very energy inefficient -- and on so many levels. I
worked developing code used in the Windows Smartphone, and it was during
that time that I first had to think hard about the energy-efficiency
dimension in computing -- as measured by useful work done per unit of
energy. The engineering management in that group was constantly harping on
the need to produce energy-efficient code. 

 

Programmers are deeply ingrained with a lot of bad habits -- and not only
in terms of producing energy-efficient software. For example, most
developers will instinctively grab large chunks of resources -- in order to
ensure that their processes are not starved in some kind of peak scenario.
While this may be good for the application -- when measured by itself -- it
is bad for the overall footprint of the application on the device (bloat)
and for the energy requirements that the software will impose on the
hardware. Another example of a common bad practice is poorly written
synchronization code (or synchronized containers).
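A toy illustration of the resource-grabbing habit (hypothetical classes,
deliberately simplified):

```python
class EagerWorker:
    """Preallocates for the worst case, whether or not it ever arrives.

    The memory is held for the object's whole lifetime, inflating the
    footprint and the energy cost of keeping it resident."""

    def __init__(self):
        self.buffer = bytearray(16 * 1024 * 1024)  # 16 MiB, just in case


class LazyWorker:
    """Starts empty and grows only as real demand appears."""

    def __init__(self):
        self.buffer = bytearray()

    def handle(self, chunk: bytes):
        self.buffer.extend(chunk)  # pay only for what is actually used


lazy = LazyWorker()
lazy.handle(b"x" * 1024)  # typical load: 1 KiB resident, not 16 MiB
```

The eager version looks robust in isolation but, multiplied across every
process on a small device, it is exactly the bloat described above.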

 

These bad practices (anti-patterns in the jargon) can not only have a huge
impact on performance in peak usage scenarios, but also act to increase the
energy requirements for that software to run.

 

I think that -- with a lot of programming effort, of course (which is why
it will never happen) -- the current code base, and not only in the mobile
small-device space, where it is clearly important, but in datacenter-scale
applications and exposed service applications as well, has huge headroom
for improvement in energy efficiency. But in order for this to happen there
first has to be a profound cultural change amongst software developers, who
are driven by speed to market and other draconian economic and marketing
imperatives and are producing code under these kinds of deadlines and
constraints.


There's a lot of bad design in consumer electronics, particularly in user
interfaces, because the pressure is to get more and newer features and apps.
Eventually (maybe already) this will slow down and designers will start to
pay more attention to refining the stuff already there.




 

If there is a theoretical minimum that derives from the second law of
thermodynamics it must be exceedingly far below what the current practical
minimums are for actual real world computing systems. And I do not see how a
minimum can be determined without reference to the physical medium in which
the computing system being measured is implemented. 


It is determined by the temperature of the environment into which entropy
must be dumped in order to execute irreversible operations (like erasing a
bit). But you're right that current practical minimums are very far above
the Landauer limit, so it has no effect on current design practice.
Current practice is limited by heat dissipation and battery capacity.
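For scale, the Landauer limit is k_B T ln 2 per erased bit; a quick
back-of-the-envelope makes the headroom concrete (the 1 W device doing
10^18 bit-erasures per second is a made-up illustration, not a real chip):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # room temperature, kelvin

# Minimum energy to erase one bit at temperature T:
landauer_J_per_bit = k_B * T * math.log(2)  # about 2.87e-21 J

# A hypothetical device dissipating 1 W while performing 1e18
# bit-erasures per second spends, per erasure:
actual_J_per_bit = 1.0 / 1e18  # 1e-18 J

headroom = actual_J_per_bit / landauer_J_per_bit  # hundreds of times over
```

Even for these generous made-up numbers the device sits hundreds of times
above the limit; real hardware is further off still, which is why heat
dissipation and battery capacity, not thermodynamics, set the floor today.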




 

In fact how could a switch be implemented without it being implemented in
some medium that contains the switch?


The way to completely avoid Landauer's limit is to make all operations
reversible -- never losing any information -- so that the whole calculation
could be run backwards.  Then no entropy is dumped to the environment and
Landauer's limit doesn't apply.
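A toy classical illustration of that point: a CNOT-style gate is its own
inverse, so nothing is discarded, whereas erasure is exactly the step
Landauer's limit charges for (both functions below are illustrative):

```python
def cnot(a: int, b: int):
    """Reversible controlled-NOT on two bits: (a, b) -> (a, a XOR b).

    No information is lost, so applying it twice recovers the inputs
    and, in principle, no entropy need be dumped to the environment."""
    return a, a ^ b


def erase(a: int, b: int):
    """Irreversibly clears b. One bit of information is destroyed --
    this is the operation that costs at least kT ln 2."""
    return a, 0


# The reversible gate is self-inverse on every input pair:
for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)

# The erasing version is not invertible: (1, 0) and (1, 1) both map
# to (1, 0), so the original b cannot be recovered.
```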

Brent

 

-- 
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.
