
*From*: Doug Craigen <dcc@ESCAPE.CA>
*Date*: Tue, 13 Feb 2001 23:38:01 -0600

One of life's little ironies... in late December I began a job with
Integrated Engineering Software, benchmarking electromagnetic modelling
software and training customers. Between that and some stuff still on
the go with DC Tech, I've been too busy for such activities as keeping up
with PHYS-L. Now that I come back, I find that you're all just wrapping
up a discussion of just the sort of thing our software does.

It's a little late, but since there seems to have been a lot of interest
in this, here's my contribution - demonstrating another approach, based
largely on linear algebra, that you will probably find faster and more
accurate, and may find more aesthetically pleasing.

So far you have been using the Finite Element Method. This is probably
the most obvious approach: break 2D space up into areas, or 3D space up
into volumes, perform the derivatives, and use feedback to iteratively
zero in on derivatives that match the appropriate E&M equations. I did
this in 1992 with my second-year E&M class: I set up the basic
routine in Lotus and encouraged them all to take the spreadsheet and
try out their own configurations. As you work with this method, however,
some drawbacks become apparent: you get feedback from the boundary, so
you need to model more space than you are interested in; because you
differentiate, errors are accentuated; and the number of elements you need
in your model goes as the cube of the spatial resolution sought.
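As a rough illustration of that grid-relaxation idea, here is a minimal
sketch in Python/NumPy (my own illustrative translation, not the original
Lotus routine; the grid size, iteration count, and boundary condition of
one edge held at V = 1 are all made-up choices):

```python
import numpy as np

# Square grid with Dirichlet boundaries; one edge held at V = 1, the
# rest at V = 0. Repeatedly replace each interior point with the
# average of its four neighbours, iterating toward a solution of the
# discrete Laplace equation.
n = 40
V = np.zeros((n, n))
V[0, :] = 1.0            # boundary edge held at fixed potential

for _ in range(2000):
    V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] +
                            V[1:-1, :-2] + V[1:-1, 2:])
```

After enough sweeps, every interior point sits close to the average of
its neighbours, which is the feedback criterion described above.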

Our software mainly uses a less obvious but (to me) pleasing method to
avoid these problems - the Boundary Element Method. If you can solve
the problem on the boundary, it's trivial to calculate the solution
anywhere (not just nearby). The number of elements is related to the
surface area of the actual physical boundaries, so the number of elements
only goes as the square of the spatial resolution. Also, since the
problem is done with integration, it tends to be more accurate. Here's a
brief explanation and some sample code.

I'll deal with the case of a conducting surface held at some potential
to illustrate how this method can work. Break the surface up into a
series of surface elements, each of which has some charge Qi distributed
over it. Now define a geometrical matrix G which tells you the
potential on one element due to the charge Qj on another element. Then

Vi = sum(Qj*Gij)

or, in pure matrix terms: V and Q can be column or row matrices, G can be
calculated geometrically, and V is known, so Q = V/G (right division,
i.e. solving V = Q*G) - a plain linear algebra problem. Adding as many
fixed charges as we want distributed around space is not a problem either:

Vi = sum(Qj*Gij) + Vq1i + Vq2i + Vq3i + ...
   = sum(Qj*Gij) + total_Vi_due_to_charges

so with Vi' = Vi - total_Vi_due_to_charges, the system becomes

Q = V'/G
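To make the linear-algebra step concrete, here is a tiny standalone
sketch in Python/NumPy (the three collinear element positions and the
finite self-term G_ii = 4 are made-up illustrative values, not anything
from the actual software):

```python
import numpy as np

# Three "elements" treated as point charges on a line at x = 0, 1, 2.
pos = np.array([0.0, 1.0, 2.0])
R = np.abs(pos[:, None] - pos[None, :])   # pairwise distances

G = np.full((3, 3), 4.0)                  # assumed finite self-term G_ii
off = R > 0
G[off] = 1.0 / R[off]                     # G_ij = 1/R_ij off the diagonal

V = np.ones(3)                # every element held at potential V = 1
Q = np.linalg.solve(G, V)     # solve G Q = V (column-vector convention)
```

The email's Q = V/G is the row-vector right-division form of the same
solve; since this G is symmetric, the two give the same charges.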

So the trick is to illustrate the basics of the method without giving away
any company secrets... what I've done is to produce some code to run in
the matrix math software packages Matlab/Scilab/Octave (translate into a
spreadsheet or Fortran if those are more to your liking - but I would
point out that Scilab and Octave are free). I hope the approach is
fairly self-evident. The most obvious thing would be to consider each
surface element to be a point charge, so that Gij = 1/Rij. The problem
with this: the self-potential of a point charge is infinite (oops).
So, as a next approximation, I treat each surface element as consisting
of a point charge Qi at the center and 4 nodes for sampling Vi; each
node sits 0.25 units from the central charge, so the self-term Gii stays
finite (4 x 0.25/0.25 = 4). The code as written below took about 1-2
minutes to run. Sampling 10,000 random locations on the surface
consistently produced mean values of about V = 0.999 (I was modelling
V = 1) with a standard deviation of 0.03. Cutting the elements down with
x = -5:5, the calculation was very fast and V_average was still 0.999,
but with a standard deviation of 0.1.

These 10,000 random locations actually represent the worst case for
calculations from the surface solution. Move a little ways away and the
calculation will be more accurate. (Like a TV picture looking better
when you step back a bit - the graininess averages out.) One can
similarly calculate V or E anywhere in space, near or far, and Qtotal
is just the sum of the surface Qi elements.
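As a standalone illustration of that last point, here is a Python/NumPy
sketch (the two positions and charge values are made up, standing in for
solved surface elements; units chosen so the Coulomb constant is 1):

```python
import numpy as np

# Hypothetical "solved surface": two point charges with made-up values.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])   # element positions
Q = np.array([0.3, 0.2])                   # their charges

def V_and_E(r):
    d = r - pos                       # vectors from each charge to r
    R = np.linalg.norm(d, axis=1)     # distances
    V = np.sum(Q / R)                 # V(r) = sum_j Qj/Rj
    E = np.sum((Q / R**3)[:, None] * d, axis=0)  # E(r) = sum_j Qj d_j/Rj^3
    return V, E

V_pt, E_pt = V_and_E(np.array([0.5, 2.0]))   # works near or far
Q_total = Q.sum()                            # total charge: just the sum
```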

If anybody wants to try the method out, there should be enough hints
here to get you going, and a web search for "Boundary Element Method"
also turns up quite a bit of good help.

**** here's the code:

%% a way of setting up the elements with complex numbers for surface
%% coordinates
x = -11:11;
elements = ones(length(x),1)*x + i*x'*ones(1,length(x));
elements = reshape(elements, 1, length(x)^2);

%% defining the Gij's
for counter = 1:length(x)^2
  for counter2 = 1:length(x)^2
    transfer(counter, counter2) = elements(counter) - elements(counter2);
  end
end
transfer = 0.25./abs(transfer+0.25) + 0.25./abs(transfer-0.25) + ...
           0.25./abs(transfer+0.25*i) + 0.25./abs(transfer-0.25*i);

%% defining Vi's
V = ones(1, length(x)^2);

%% calculating Qi's
Q = V/transfer;

%% verify V_average = 1 for 10000 randomly selected locations
voltages = [];
for counter = 1:10000
  x = rand;
  y = rand;
  voltages = [voltages; x y sum(Q.*(1./abs(elements - x - i*y)))];
end
mean(voltages(:,3))
std(voltages(:,3))

**********

Simple enhancements include smaller elements near the edges (with the
Gij scaled appropriately) and trimming elements out of the square
definition to get a more interesting geometry. A 3D surface will force
you to do the Gij calculations with something besides complex numbers.
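A minimal Python/NumPy sketch of that 3D bookkeeping, using (x, y, z)
rows in place of complex numbers (a flat 5x5 plate embedded in 3D with
the same four-node 0.25-offset self-term trick - again an illustrative
translation, not the commercial code):

```python
import numpy as np

# Element centres as (x, y, z) rows instead of complex numbers.
centres = np.array([[x, y, 0.0] for x in range(-2, 3) for y in range(-2, 3)])
n = len(centres)

diff = centres[:, None, :] - centres[None, :, :]   # all pairwise vectors

# Four sampling nodes offset 0.25 from each centre, as in the 2D code,
# which keeps the self-term finite (G_ii = 4 * 0.25/0.25 = 4).
offsets = 0.25 * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]])
G = np.zeros((n, n))
for off in offsets:
    G += 0.25 / np.linalg.norm(diff + off, axis=2)

V = np.ones(n)
Q = np.linalg.solve(G, V)    # surface charges for a plate held at V = 1
```

From here the verification and field calculations proceed exactly as in
the 2D version, with np.linalg.norm supplying the distances that abs()
provided for complex coordinates.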

\_/^\_/^\_/^\_/^\_/^\_/^\_/^\_/^\_/^\_/^\_/^\_/^\_/^\

Doug Craigen

http://www.dctech.com/physics/about_dc.html
