
Re: non-dissipative circuitry



Chuck Britton wrote:

It seems to ME that the METHOD of connecting the capacitors has
absolutely NO effect on the final state of the two capacitors.

Connect them with an inductor, connect them with a resistor. Once
things calm down, the charge will have redistributed itself in a
totally predictable way. Energy has been 'dissipated'. Exactly the
SAME amount of energy, regardless of the connection method. Isn't
that so???

Not at all. Method matters.

Here's a mechanical analogy that some people may
find easier to picture:

I'll assume you know how to analyze a ballistic
pendulum. Actually, let me replace it with a
ballistic mass-on-a-spring ("BMOAS"); the principle
is fundamentally the same but I don't want to be
bothered with the vertical component of the
pendulum motion.

If the BMOAS is initially at rest and you shoot
it with a bullet, the process is well-known to be
highly dissipative. The bullet has some momentum
and some energy; you can't conserve momentum _and_
energy without some dissipation.

Now in contrast imagine that the BMOAS is initially
oscillating. There will be a special time in its
cycle when its natural velocity matches the bullet
velocity. If you get the timing right, you can
shoot it just at that special time. The BMOAS
grabs the bullet. Later in the cycle there will be
a time when the BMOAS+bullet system comes to rest.
All the KE has been converted to PE at this point.
The BMOAS drops the now-stationary bullet to the
nearby ground and continues with its oscillations.
The BMOAS has extracted 100% of the bullet energy
with arbitrarily little dissipation.

You have to get the timing right, but there's no
law of physics that says you can't get the timing
right.
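To make the bookkeeping concrete, here is a small sketch in Python of the perfectly inelastic capture (masses and velocities are made up for illustration). Momentum conservation fixes the final velocity, and the lost kinetic energy works out to 0.5 * mu * (v - u)^2, where mu is the reduced mass; it vanishes when the bullet and the BMOAS are moving at the same velocity at the moment of capture:

```python
def capture_loss(m, v, M, u):
    """Kinetic energy dissipated when a bullet (mass m, velocity v)
    is captured by a target (mass M, velocity u) in a perfectly
    inelastic collision.  Momentum is conserved; the lost KE is
    0.5 * mu * (v - u)**2, with mu the reduced mass."""
    mu = m * M / (m + M)
    return 0.5 * mu * (v - u) ** 2

# Made-up numbers: a 10 g bullet at 300 m/s, a 1 kg BMOAS.
loss_rest  = capture_loss(0.010, 300.0, 1.0, 0.0)    # target at rest
loss_timed = capture_loss(0.010, 300.0, 1.0, 300.0)  # timed shot
```

With the target at rest, nearly all of the bullet's 450 J goes into heat; with the timed shot, none of it does.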

==============

Returning to the setup that prompted the question,
i.e. using one capacitor to charge up another:

You can do this with arbitrarily little dissipation.
One way to do it would be to use a switching
power supply, with the first capacitor as its
"line" (supply) and the second capacitor as its
"load".

I don't feel like describing a switching power
supply in detail right now, but qualitatively
the principle is the same as the BMOAS in the
previous section: A resonant circuit and some
deft timing.
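For the two-capacitor case itself, here is a minimal numerical sketch of the resonant trick (component values are made up for illustration): connect the charged capacitor to the empty one through an inductor, let the current swing for exactly half an LC period, then open the switch. For equal capacitors, essentially all of the charge and energy ends up on the second capacitor:

```python
import math

# Made-up component values: C1 = C2 = 1 uF, L = 1 mH, C1 starts at 5 V.
C, L, V0 = 1e-6, 1e-3, 5.0
q, i = 0.0, 0.0            # q = charge moved onto C2, i = loop current
dt = 1e-9
t_half = math.pi * math.sqrt(L * C / 2)   # half period; series C is C/2

t = 0.0
while t < t_half:
    v_loop = (V0 - q / C) - (q / C)   # V1 - V2 drives the inductor
    i += (v_loop / L) * dt            # L di/dt = V1 - V2
    q += i * dt
    t += dt

v2 = q / C   # open the switch here: C2 now holds essentially all of V0
```

The deft timing is opening the switch when the current crosses zero; at that instant there is no energy left in the inductor, and with no resistance in the loop nothing has been dissipated.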

=======================================

True story: It had been known since the 1960s that
it was possible in principle to build a "reversible
computer" using mechanical components; ones and
zeros could be represented by the presence or
absence of billiard balls moving in a channel.
In 1992 I attended the "Physics of Computation"
workshop in Texas. One of the speakers remarked
from the podium that you could do the same thing
in CMOS, using FET switches to move bits of
charge around. Then he went on with the rest
of his talk.

I just about fell out of my chair. I don't
remember anything else about the talk or about
the rest of the conference. I ran home as fast
as I could (Ed Fredkin let me fly right seat in
his TBM-700) and I gave four seminars in the next
week. Soon there was a whole group of people
working on this.

*) Normally, a CMOS logic circuit operates by
taking electrons and energy out of the power
supply. All that energy is dissipated. The
electrons are returned via the ground wire.
*) Now imagine a circuit where at the end of
each step in the calculation, you give back all
of the electrons and most of the energy, but you
get to keep the results of the calculation. I
know this sounds like magic. It's as close to
pure magic as anything I've ever done, but it
works. We built things, including a multiplier
chip, that do this.

CMOS circuits are basically RC networks. Inductance
is negligible. You can turn transistors on and off
to make the R bigger or smaller here and there, but
still it's an RC circuit.

Until 1992 all the textbooks said it was a law of
nature that every 1->0 or 0->1 switching event would
dissipate .5 C V^2. But it's not true.
-- You can win if you have inductors.
-- You can win if you have time-varying capacitance.
-- There are probably other ways of winning.

It had been known since the 1960s that you could
build low-dissipation electronic logic gates
if you had an inductor per gate. That was just as
impractical as billiard balls, because inductors are
big and expensive; you aren't going to put millions
of them on a chip.

I devised a method whereby you don't need an inductor
per gate, just an inductor per chip. Actually things
are easier if you have two inductors per chip, but
that's affordable.

In more detail: The Thevenin equivalent of the
output side of a CMOS gate is an open-circuit
voltage (that switches from Vdd to zero and back)
plus some output impedance. The equivalent of
the input side of the next gate(s) downstream
is a bunch of capacitance. So we have the usual
setup: an RC circuit with a switch.

If, as just stated, the switch connects to the
power-supply rails (Vdd and zero), you've got
an RC circuit driven by a waveform like this:
     _____        _____
    |     |      |     |
____|     |______|     |___


But suppose that rather than a square wave, you
use the same sort of CMOS FETs to connect the
load to a ramp-like voltage that comes from the
aforementioned inductors:
    _____        _____
   /     \      /     \
__/       \____/       \__


As long as the ramp-time T is long compared to (or
even comparable to) the RC time, you will dissipate
a lot less than .5 C V^2 per transition. We need
to transfer a given amount of charge Q. That's
a constant, since after all the whole point is to
charge up the load. The current scales like Q/T,
the power (I^2 R) scales like current squared
(Q^2 / T^2), so the energy (power times T) scales
like Q^2 / T. For large T, that's a win.
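Here is a rough numerical check of that scaling (component values are made up for illustration): charge the same RC load once with an abrupt step and once with a slow ramp, integrating i^2 R to get the dissipated energy in each case:

```python
def dissipation(drive, R, C, dt, t_end):
    """Integrate i**2 * R while an RC load is charged by drive(t)."""
    vc, E, t = 0.0, 0.0, 0.0
    while t < t_end:
        i = (drive(t) - vc) / R      # current through the resistor
        E += i * i * R * dt          # energy burned in R this step
        vc += (i / C) * dt           # charge accumulates on the load
        t += dt
    return E

# Made-up values: R = 1 kOhm, C = 1 nF, swing V = 1 V, so tau = 1 us.
R, C, V = 1e3, 1e-9, 1.0
tau = R * C

# Abrupt step: dissipates ~ 0.5 * C * V**2, independent of R.
E_step = dissipation(lambda t: V, R, C, dt=tau / 1000, t_end=20 * tau)

# Slow ramp over T = 100 tau, then hold: dissipation ~ (tau / T) * C * V**2.
T = 100 * tau
E_ramp = dissipation(lambda t: V * min(t / T, 1.0), R, C,
                     dt=tau / 1000, t_end=T + 20 * tau)
```

The step comes out at essentially 0.5 C V^2 no matter what R is, while the ramp's dissipation falls in proportion to tau/T, which is the whole point.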