The Wrong Wire

From Coulomb's Fraud to the Semiconductor Grind — How a wrong force law became the foundation of every chip on Earth

Section 01

The Assumed Proportionality

In 1785, Charles-Augustin de Coulomb published two papers describing the force between electrically charged bodies. Using a torsion balance of his own design, he measured how the force varied with distance. His conclusion: inverse-square dependence. The force between two charged bodies falls off as 1/r².

This much he measured. What he did not measure — and never proved experimentally — was that the force is proportional to the product of the two charges, qq′.

Historian Charles Gillmor, in his detailed study of Coulomb's work, identified this gap precisely. Coulomb simply assumed the proportionality to qq′ by analogy with Newton's gravitational force law, which is proportional to the product of two masses, mm′. He did not consider it necessary to demonstrate this experimentally.

It seems that Coulomb arrived at his force law more by analogy with Newton's law of gravitation than by his doubtful few measurements with the torsion balance.
— Assis & Chaib, Ampère's Electrodynamics (2015), citing Heering (1992)

The same problem exists for his magnetic force law. Coulomb never proved that the force between magnetic poles is proportional to the product of the pole-strengths pp′. He implied it. He did not test it. In his own words, this first part of the proposition "does not need to be proved". He moved on.

In 1992, the historian of physics Peter Heering attempted to replicate Coulomb's torsion balance experiments. The results were troubling. The apparatus is extremely sensitive to environmental disturbance. The data Coulomb reported are suspiciously clean — cleaner than the instrument's actual noise floor can produce. The implication: Coulomb knew the answer he wanted (inverse-square, by analogy with gravity) and selected or adjusted his data to fit.

What Coulomb Measured

Force varies as 1/r² with distance between charged bodies. Demonstrated with torsion balance — though Heering's replication found the data suspiciously clean for the instrument's actual precision.

What Coulomb Assumed

Force is proportional to the product of the charges qq′. Never experimentally demonstrated. Assumed by analogy with Newton's gravitational force (proportional to mm′). Stated without proof. Became the foundation of electrostatics.

This matters because Coulomb's framework — forces between point charges and magnetic poles — became the foundation on which Biot, Savart, and later Grassmann built their competing formulations. An assumption from 1785, never tested, propagated into every electromagnetic equation used today.

Section 02

The Scalar That Ate the Force

Start with Coulomb's force law — the one with the assumed proportionality — and watch what happens in two algebraic moves.

F = k q₁q₂ / r²

This is a force. It has magnitude. It has direction — along the line connecting the two charges. It depends on two specific charges at a specific distance. It is a complete physical statement: this much push or pull, in this direction, between these two bodies, at this separation.

Now divide both sides by one of the charges.

E = F / q₂ = k q₁ / r²

One charge is gone. What remains is the "electric field" — the force per unit charge. It still has direction. It still depends on distance. But it no longer describes the force between two charges. It describes what one charge does to empty space — or more precisely, what it would do to a hypothetical test charge placed there. The second charge has been replaced by an abstraction.

Now multiply both sides by distance.

V = E · r = k q₁ / r

The direction is gone. The distance r is no longer a separation between two charges — it has been folded into the expression and collapsed into a scalar. Voltage is energy per charge. It is a single number. It tells you nothing about the direction of the force, nothing about the mechanism that created it, nothing about which force law is operating underneath.

Two algebraic moves. Each one is individually valid. Neither one adds or invents anything. But each one strips information. The force had direction, two participants, and a distance. The voltage has none of these. The physical content has been laundered into a number — the same way that ½mv² launders F = ma into a directionless scalar called kinetic energy.

Once voltage gets its own name, its own symbol, its own unit, students learn V = IR and never trace it back to the force between charges at a distance. The derivation is two lines long. It is never shown.
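The two-line derivation is short enough to check numerically. A minimal sketch in Python (SI units; the charges and separation are arbitrary illustrative values):

```python
k = 8.9875517923e9       # Coulomb constant, N·m²/C²
q1, q2 = 1e-6, 2e-6      # two illustrative charges, coulombs
r = 0.05                 # separation, metres

F = k * q1 * q2 / r**2   # the force: two charges, a distance, a direction along the line
E = F / q2               # divide out one charge: q1's field at distance r, N/C
V = E * r                # multiply by distance: q1's potential at r, volts

# Each move is valid algebra, and each strips information:
# F involved two bodies; E involves one; V is a single directionless number.
print(F, E, V)
```

Nothing in the final number V records the direction of the force or the existence of the second charge; that is the information loss the section describes.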

Every Voltage Is a Push

Look at the mechanisms that create voltage. Every single one is a push.

Generators

A conductor moves through a magnetic field. The field deflects charges inside the conductor — pushes them sideways. The "voltage" is the accumulated displacement of charge. The mechanism is magnetic force acting on moving charges. A push.

Batteries

The anode: a chemical reaction breaks molecular bonds and releases electrons. The electrons are pushed out of the metal lattice by the reaction energy. The cathode is a landing pad — it offers lower resistance to incoming electrons. Remove the anode push and the cathode does nothing. The pull is bookkeeping. The physics is a push.

Solar Cells

A photon hits a semiconductor junction and separates an electron from a hole. The built-in electric field at the junction pushes them apart — electrons one way, holes the other. No mechanism reaches across the junction to pull. The separation is a push from the photon's energy acting through the junction field.

Thermocouples / Piezoelectrics

Thermocouples: thermal diffusion pushes charge carriers from hot to cold at different rates in different metals. Piezoelectrics: mechanical compression displaces charge centres within the crystal lattice — a push from physical deformation. In every case, energy in, charge displaced, voltage appears.

If true pull does not exist — if every force is ultimately a push from or through a medium — then voltage is not measuring a "potential difference" between a push and a pull. It is measuring a gradient in push intensity. The entire positive/negative terminal framework is bookkeeping draped over a one-directional mechanism. The word "potential" hides the directionality. The word "difference" implies two distinct things being compared. The physics is one thing: a push, stronger at one point than another.

Where the Force Law Disappears

Here is the critical connection to the wrong wire.

Ampère's force between current elements is central — it acts along the line connecting them. Grassmann's force is non-central — it has perpendicular components. The two forces produce different element-level behaviour. But when you integrate over a complete closed circuit and measure the voltage between two terminals, both force laws give the same number.

Ampère at the Terminals

Central forces between every pair of current elements. Newton's third law satisfied exactly. The voltage at the terminals: a specific number. Agrees with measurement.

Grassmann at the Terminals

Non-central forces with perpendicular components. Newton's third law violated at element level. The voltage at the terminals: the same specific number. Also agrees with measurement.

Voltage is the precise point in the dependency chain where the physics disappears from view. Two incompatible force laws — one that obeys Newton's third law and one that violates it — produce identical terminal readings. The voltmeter cannot distinguish between them. The abstraction is not merely convenient. It is the mechanism of concealment.

This is why voltage belongs here in the chain, after Coulomb and before Ampère. It is not a new error. It is the screen that makes all subsequent errors invisible.

Section 03

Ampère — The Newton of Electricity

In the summer of 1820, Hans Christian Ørsted discovered that an electric current flowing in a wire deflects a nearby compass needle. The news reached Paris in September. André-Marie Ampère heard it, believed it immediately, and within weeks had designed and performed a series of experiments that went further than anyone had gone before.

His first breakthrough: he demonstrated that two current-carrying wires exert forces on each other directly — parallel currents in the same direction attract, opposite directions repel. No magnet involved. No magnetic pole invoked. A new phenomenon, never previously observed.

His second: he showed that the current flows in a complete closed circuit, including through the interior of the battery — not obvious at the time. He proved it by placing compass needles directly on top of a trough battery and observing their deflection.

His third, and most radical: he proposed that all magnetism is caused by electric currents. Permanent magnets contain tiny circular currents. The Earth's magnetic field comes from currents flowing inside the Earth. With this single hypothesis, three separate branches of physics — magnetic phenomena, electromagnetic phenomena, and electrodynamic phenomena — reduced to one principle: the force between current elements.

By 1822, Ampère had derived his final mathematical force law between two infinitesimal current elements. In modern vector notation, with r̂ the unit vector along the line joining the elements, pointed toward the element experiencing the force:

d²F = −(μ₀/4π) · II′ · (r̂/r²) · [2(ds·ds′) − 3(r̂·ds)(r̂·ds′)]

He derived this not from theory but from four carefully designed cases of equilibrium — null experiments where he arranged conductors so that the net force vanished, then worked backwards to constrain the mathematical form. This is the method of Tycho Brahe, Kepler, and Newton applied to electrodynamics.

The experimental investigation by which Ampère established the laws of the mechanical action between electric currents is one of the most brilliant achievements in science. The whole, theory and experiment, seems as if it had leaped, full grown and full armed, from the brain of the "Newton of electricity." It is perfect in form, and unassailable in accuracy, and it is summed up in a formula from which all the phenomena may be deduced, and which must always remain the cardinal formula of electro-dynamics.
— James Clerk Maxwell, A Treatise on Electricity and Magnetism (1873)

Two properties of Ampère's force matter above all else:

Central Force

The force between two current elements always acts along the straight line connecting them. No perpendicular component. No torque that appears from nowhere. The same kind of force as gravity.

Newton's Third Law — Exactly

The force of element A on B is exactly equal and opposite to the force of B on A, directed along the line between them. Not approximately. Not after integration over a closed loop. Exactly, at every point, for every pair.

Section 04

The Replacement — Grassmann and the Broken Law

In 1845, Hermann Grassmann proposed an alternative force law between current elements. In modern notation, with r̂ the unit vector pointing from the source element ds′ toward the element ds that experiences the force:

d²FG = (μ₀/4π) · (II′/r²) · ds × (ds′ × r̂)

This is the force law that modern physics uses. It is the basis of the Lorentz force. It is what Maxwell's equations encode. And it has a property that should disqualify it as a fundamental law of nature: it violates Newton's third law.

The force of element A on B is not equal and opposite to the force of B on A. It has components perpendicular to the line connecting them. For individual current elements, momentum is not conserved.

The defence: when you integrate over complete closed circuits, both Ampère's and Grassmann's force laws give identical results. The violations cancel. Therefore, the argument goes, the element-level violation doesn't matter — only closed-circuit results are physically meaningful.

This is a philosophical retreat. It says physics should not describe what happens between individual elements of current — only what happens between complete loops. Ampère's force has no such restriction. It works at every level. Grassmann's requires you to look away from the elements and only examine the whole.

Ampère (1822)

Central force. Newton's third law satisfied exactly. Works at element level and circuit level. One free parameter: k = −½. Derived from experiment.

Grassmann (1845)

Non-central force. Newton's third law violated at element level. Only valid for closed circuits. Requires field concept. Chosen not for experimental superiority but for compatibility with field theory.

The two laws are experimentally indistinguishable for closed circuits. The preference for Grassmann was not driven by evidence. It was driven by the rise of field theory.
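That experimental indistinguishability can be checked by brute force. The sketch below assumes the modern forms of the two laws, with r̂ pointing from the source element toward the element experiencing the force (Ampère: −(μ₀/4π)·II′·(r̂/r²)·[2(ds·ds′) − 3(r̂·ds)(r̂·ds′)]; Grassmann: (μ₀/4π)·(II′/r²)·ds×(ds′×r̂)). It discretizes two closed loops, sums the pairwise element forces under each law, and compares the totals:

```python
import numpy as np

K = 1e-7              # μ₀/4π in SI units
I1 = I2 = 1.0         # loop currents, amperes

def circle(n, radius, z):
    """Discretize a circular loop at height z into element midpoints and tangents."""
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False) + np.pi/n   # segment midpoints
    pts = np.stack([radius*np.cos(t), radius*np.sin(t), np.full(n, z)], axis=1)
    dl = np.stack([-radius*np.sin(t), radius*np.cos(t), np.zeros(n)], axis=1) * (2*np.pi/n)
    return pts, dl

p1, dl1 = circle(400, 1.0, 0.0)   # source loop
p2, dl2 = circle(400, 1.0, 0.5)   # target loop, coaxial, 0.5 m above

F_ampere = np.zeros(3)
F_grassmann = np.zeros(3)
for a, da in zip(p1, dl1):                     # loop over source elements
    r = p2 - a                                  # source element -> target elements
    d = np.linalg.norm(r, axis=1)
    rhat = r / d[:, None]
    # Ampère: central force along rhat, Newton's third law holds pair by pair
    bracket = 2.0*(dl2 @ da) - 3.0*(rhat @ da)*np.einsum('ij,ij->i', rhat, dl2)
    F_ampere += K*I1*I2*np.sum(-rhat/d[:, None]**2 * bracket[:, None], axis=0)
    # Grassmann: ds × (ds′ × r̂), non-central pair by pair
    F_grassmann += K*I1*I2*np.sum(np.cross(dl2, np.cross(da, rhat))/d[:, None]**2, axis=0)

print(F_ampere, F_grassmann)   # net forces on the closed target loop agree
```

Pair by pair the two laws disagree everywhere; only the closed-loop sums coincide, which is exactly why a terminal measurement cannot separate them.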

Section 05

Biot-Savart's Three Errors

Before Grassmann, Biot and Savart attempted to derive the force that a current element exerts on a magnetic pole. Their "law" — the Biot-Savart law — is taught in every physics textbook as an experimentally established result. It is not. It rests on three errors identified by Assis and Chaib in their detailed historical analysis.

Error 1 — Unjustified Decomposition

Biot and Savart measured the force exerted by a long straight wire on a magnetic pole. They then decomposed this total force into contributions from infinitesimal elements of the wire. But this decomposition is not unique — infinitely many different element force laws, when integrated over the full wire, produce the same total force. The whole-wire measurement cannot distinguish between them. Ampère's element force, integrated over a straight wire, gives the same result as Biot-Savart's.

Error 2 — Assumed Perpendicularity

They assumed that the force exerted by a current element on a magnetic pole is perpendicular to the current element. This was a hypothesis, not a measured result. They had no apparatus capable of isolating the force from a single infinitesimal element. The perpendicularity was assumed because it simplified the mathematics and matched the expected geometry.

Error 3 — Mixed Entities

The "force" they described acts between a current element and a magnetic pole — two different kinds of entity. Ampère insisted that the fundamental force must act between entities of the same nature: current element on current element. The concept of a magnetic pole as a fundamental entity is itself a hypothesis — Ampère showed that every magnetic effect can be reproduced by electric currents, making poles disposable.

The Biot-Savart law was not deduced from experiment. It was constructed by making three assumptions — decomposability, perpendicularity, and the reality of magnetic poles — and fitting them to a single whole-wire measurement that could not distinguish between competing element laws. Yet this is the law that became the basis for computing magnetic fields, which became the basis for Maxwell's equations, which became the basis for every electromagnetic simulation in engineering.
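The first error, decomposition non-uniqueness, can be made concrete. The sketch below integrates the standard Biot-Savart element along a long straight wire and recovers the whole-wire field μ₀I/(2πd); the measurement fixes only this sum, so any element law with the same integral fits it equally well:

```python
import numpy as np

mu0 = 4e-7 * np.pi
I = 2.0        # wire current, amperes
d = 0.1        # perpendicular distance from the wire, metres

# Wire along the z-axis; field point at (d, 0, 0).
# Biot-Savart element magnitude: dB = (μ0 I / 4π) |dl × r̂| / r², with |dl × r̂| = dz·(d/r).
z = np.linspace(-100.0, 100.0, 400_001)   # a "long" wire standing in for an infinite one
dz = z[1] - z[0]
r = np.hypot(d, z)
B_numeric = (mu0 * I / (4*np.pi)) * np.sum(dz * d / r**3)

B_whole_wire = mu0 * I / (2*np.pi*d)       # the whole-wire result actually measured
print(B_numeric, B_whole_wire)             # the element law reproduces the measurement
```

The whole-wire number constrains the sum; it cannot single out the summand.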

Section 06

Maxwell Encodes the Choice

James Clerk Maxwell published his Treatise on Electricity and Magnetism in 1873. In it, he praised Ampère's work in the highest possible terms — "the Newton of electricity" — and then built his theory on a different foundation.

Maxwell adopted Faraday's field concept: forces between charges and currents are not direct, instantaneous interactions. Instead, charges create fields that propagate through space at a finite speed (the speed of light), and other charges respond to the local field at their position. The Grassmann force, not Ampère's, is compatible with this picture. Ampère's force acts instantaneously along the line connecting two elements. Grassmann's force can be decomposed into "a charge moving through a field" — the Lorentz force.

The choice was not experimental. Ampère and Grassmann give identical predictions for every measurable situation involving closed circuits. The choice was philosophical — mediated by institutional power.

1820
Ampère — experiments in Paris. Force between current elements, derived from null experiments. Central, instantaneous, satisfies Newton's third law.
1820
Biot & Savart — experiments in Paris. Whole-wire measurement decomposed into element law using three assumptions. Force on a magnetic pole, not between current elements.
1845
Grassmann — mathematical paper, no experiments. Alternative element force. Non-central, violates Newton's third law at element level. Compatible with field concept.
1845–65
Faraday — field concept developed experimentally in London. Lines of force as physical entities. Influence spreads through British physics community.
1873
Maxwell — Treatise published. Encodes Grassmann/Faraday framework mathematically. Fields propagate at c. Ampère's instantaneous action at a distance replaced. The choice becomes the foundation.
1880s–
Institutionalisation — Maxwell's equations taught as fundamental laws. Ampère's force forgotten. Textbooks present Grassmann/Lorentz as the only option. Students never learn there was a choice.

Once Maxwell's equations became the foundation, every engineering tool built downstream inherited the Grassmann assumption invisibly. Not as a conscious choice, but as the water the fish swim in.

Section 07

The Instantaneous Force Evidence

If force propagates at the speed of light, the Earth is not pulled toward where the Sun is now. It is pulled toward where the Sun was 8 minutes and 20 seconds ago. In that time, the Sun — orbiting the galactic core at 220 km/s — has moved approximately 110,000 kilometres.

The gravitational force would then have a component in the direction of Earth's motion, adding energy to the orbit. The orbit would spiral outward. Pierre-Simon Laplace calculated this in the early 1800s and concluded that for the solar system to be stable, gravity must propagate at least 7 million times the speed of light.

Tom Van Flandern, working with JPL planetary ephemeris data in 1998, tightened the constraint to at least 2 × 10¹⁰ × c — twenty billion times the speed of light. In practice, indistinguishable from instantaneous.

7×10⁶ c
Laplace's lower bound (1805)
2×10¹⁰ c
Van Flandern's lower bound (1998)
Newton & Ampère: instantaneous
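The 110,000 km figure is one line of arithmetic on the numbers quoted above:

```python
v_sun = 220e3               # Sun's galactic orbital speed used in the text, m/s
light_time = 8*60 + 20      # Sun-to-Earth light travel time, seconds

lag = v_sun * light_time    # how far the Sun moves while its "pull" is in transit
print(lag / 1e3)            # → 110000.0 km
```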

General relativity handles this by adding velocity-dependent correction terms that cancel the lag its own framework predicts. The force points not toward the retarded position but toward the "linearly extrapolated" retarded position — which, for constant velocity, is the instantaneous position. The correction is a mathematical mechanism that restores the very result (force pointing at the present position) that Ampère and Newton obtained without needing a correction at all.

The identical structure appears in electromagnetism. A moving charge in Maxwell's theory creates a field that propagates at c. Another charge responds to the retarded field. But the Lorentz force includes velocity-dependent terms that cancel the retardation for charges in uniform motion — so the force points at the instantaneous position. The same fix. The same pattern. The same unnecessary layer.

If force is simply instantaneous — as Newton's gravity and Ampère's electrodynamic force both describe — no correction terms are needed. The lag doesn't exist. The cancellation that GR and Maxwell's equations laboriously construct is not a triumph of the theory. It is the theory rebuilding by hand what the simpler framework gives for free.
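That the extrapolation exactly undoes the lag for uniform motion is an algebraic identity, easy to verify. A sketch with illustrative numbers (a source about 1 AU away moving at 220 km/s):

```python
import numpy as np

c = 299_792_458.0
v = np.array([220e3, 0.0, 0.0])        # uniformly moving source, m/s
x0 = np.array([0.0, 1.496e11, 0.0])    # source position at t = 0 (about 1 AU away), m
observer = np.zeros(3)
t = 1000.0                              # observation time, s

# Retarded time t_r: the signal seen at t was emitted at t_r, with
# |x(t_r) - observer| = c·(t - t_r). Solve by fixed-point iteration.
t_r = t
for _ in range(100):
    t_r = t - np.linalg.norm(x0 + v*t_r - observer) / c

retarded      = x0 + v*t_r                 # where the source was at emission
extrapolated  = retarded + v*(t - t_r)     # retarded position, linearly extrapolated
instantaneous = x0 + v*t                   # where the source actually is now

print(np.allclose(extrapolated, instantaneous))  # → True: the lag is exactly undone
```

For constant velocity the extrapolation lands on the instantaneous position no matter what the lag is; the correction terms rebuild by construction what an instantaneous force states directly.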

Miller's Positive Result

If forces are instantaneous, they act through a medium — the aether. In 1887, Michelson and Morley attempted to detect Earth's motion through this medium. Their result is universally reported as "null." What is less widely known is that Dayton Miller spent twenty years (1906–1930s) refining the experiment, performing over 326,000 interferometer turns with more than 5.2 million individual measurements — the most extensive dataset in the history of light-beam interferometry. His result was not null. He found a consistent fringe shift of 0.12 ± 0.01, corresponding to a drift velocity of approximately 9–10 km/s.

Should the positive result be confirmed, then the special theory of relativity and with it the general theory of relativity, in its current form, would be invalid.
— Albert Einstein, letter to Edwin Slosson, 8 July 1925

Einstein did not analyse Miller's data and find an error. He wrote, in a separate letter: "I believe that I have really found the relationship between gravitation and electricity, assuming that the Miller experiments are based on a fundamental error." The word assuming carries the weight of the entire 20th century physics establishment. The theory required Miller to be wrong. Therefore he was declared wrong.

Why Magnets Get Stronger at Absolute Zero

Ampère proposed that all magnetism is caused by molecular currents — tiny loops of electric current circulating inside matter. A permanent magnet is permanent because these currents persist. This raises an obvious question: what sustains the currents? And what happens at absolute zero, where all thermal motion ceases?

The experimental answer is unambiguous. Magnetism does not weaken as temperature drops to zero. It reaches its maximum. The saturation magnetisation of a ferromagnet — the maximum alignment of all its magnetic domains — occurs at absolute zero. As temperature increases from zero, thermal agitation progressively disrupts the alignment. Above the Curie temperature (1,043 K for iron), the alignment collapses entirely and the material becomes paramagnetic.

At Absolute Zero

Thermal motion stops completely. Yet magnetism is at its peak. Saturation magnetisation is maximum. Coercivity — resistance to demagnetisation — can double or triple compared to room temperature. Whatever drives the molecular currents is not thermal energy.

Above the Curie Temperature

Thermal agitation overwhelms the alignment. Domains randomise. Permanent magnetism vanishes. The material becomes paramagnetic. Heat doesn't create magnetism — it destroys it.
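Whatever the underlying mechanism, the temperature profile described above is standard phenomenology: the textbook mean-field (Weiss) self-consistency m = tanh(m·Tc/T) reproduces full saturation at absolute zero and collapse at the Curie point. A sketch; note that the model takes the Curie temperature as a fitted input rather than predicting it:

```python
import numpy as np

Tc = 1043.0   # Curie temperature of iron, kelvin (an empirical input, not a prediction)

def reduced_magnetisation(T, iterations=20_000):
    """Solve the mean-field self-consistency m = tanh(m·Tc/T) by iteration."""
    if T <= 0:
        return 1.0                    # full saturation at absolute zero
    m = 1.0
    for _ in range(iterations):
        m = np.tanh(m * Tc / T)
    return float(m)

for T in (0.0, 300.0, 800.0, 1100.0):
    print(T, round(reduced_magnetisation(T), 4))
```

The curve is maximal at zero, weakens with temperature, and vanishes above Tc, exactly the measured shape the section describes.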

The standard explanation invokes quantum mechanics: electron spin is an "intrinsic" property — angular momentum without anything physically rotating. It is a ground-state property that doesn't require thermal energy and doesn't dissipate. This is not an explanation. It is a label. It says the electron behaves as if it were spinning without actually spinning. The word "intrinsic" means "we declare it to be fundamental and stop asking."

Ampère's molecular currents, combined with the aether, offer a physical mechanism.

If the aether exists — as Miller's 5.2 million measurements suggest — then every atom in every material is in perpetual motion relative to the medium. The Earth orbits the Sun at 30 km/s. The Sun orbits the galactic core at 220 km/s. The galaxy moves through the cosmic medium. This velocity is not thermal. It does not depend on temperature. You can cool iron to absolute zero and it is still hurtling through the aether at hundreds of kilometres per second.

If this relative motion between matter and the medium is what drives or sustains Ampère's molecular currents, the temperature dependence of magnetism falls out naturally:

High temperature: Thermal agitation randomises the current orientations. Each molecular current is jostled in random directions. Alignment is disrupted. Magnetism weakens.

Low temperature: Thermal noise decreases. The aether-driven currents remain — sustained by a velocity that has nothing to do with temperature. Alignment improves. Magnetism strengthens.

Absolute zero: Thermal disruption gone entirely. Aether-driven currents maximally aligned. Magnetism at its peak. No quantum label required. Real currents, real medium, real motion.

This also explains why different elements have different magnetic properties, different Curie temperatures, and different saturation magnetisation values. If the aether-drag interaction depends on atomic structure — electron configuration, atomic mass, lattice geometry — then different materials would couple to the medium differently, producing different current magnitudes and different resistance to thermal disruption.

Miller found that his drift signal varied with sidereal time and season — consistent with Earth's changing orientation relative to a cosmic flow. If the aether-driven currents in a ferromagnet depend on this same velocity, there should be a measurable consequence:

Testable Prediction

The saturation magnetisation of a ferromagnet at millikelvin temperatures should exhibit a very slight periodic variation correlated with sidereal time — as Earth's rotation changes the laboratory's orientation relative to the aether drift direction Miller identified. The variation would be parts per million, within reach of existing cryogenic SQUID magnetometry, tracked across months. Positive detection would simultaneously confirm Miller's drift, Ampère's molecular currents, and the aether-driven mechanism. This measurement has never been performed.
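The analysis such a measurement would need is routine. A sketch on synthetic data, with every number hypothetical: inject a parts-per-million modulation at the sidereal period into a noisy 120-day record, then recover its amplitude by least-squares fitting at exactly that frequency:

```python
import numpy as np

SIDEREAL_DAY = 86164.0905                     # seconds

rng = np.random.default_rng(42)
t = np.arange(0.0, 120*86400.0, 600.0)        # 120 days, one reading per 10 minutes
injected_ppm = 2.0                             # hypothetical drift-correlated modulation
record = 1.0 + injected_ppm*1e-6*np.sin(2*np.pi*t/SIDEREAL_DAY + 0.7)
record += rng.normal(0.0, 20e-6, t.size)       # 20 ppm white noise per reading

# Least-squares fit of sine and cosine at the sidereal frequency plus a constant level
w = 2*np.pi / SIDEREAL_DAY
design = np.column_stack([np.sin(w*t), np.cos(w*t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(design, record, rcond=None)
recovered_ppm = np.hypot(coef[0], coef[1]) * 1e6

print(recovered_ppm)   # close to the injected 2 ppm despite 20 ppm per-sample noise
```

Averaging over months is what pushes a parts-per-million signal above per-sample noise; the same fit run on real SQUID data would either show the sidereal line or bound it.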

Section 08

SPICE — Where the Wrong Theory Meets Silicon

In the 1970s, engineers at UC Berkeley built a software tool called SPICE — Simulation Program with Integrated Circuit Emphasis. The idea: describe a circuit mathematically and simulate its behaviour before fabrication. Every chip designed in the last fifty years has been simulated in some descendant of SPICE.

The core of SPICE relies on device models — mathematical descriptions of how a transistor behaves. The simplest models derive from textbook semiconductor physics, which is built on Maxwell's equations, which encode the Grassmann/Faraday framework. These first-principles models do not work. Not for real engineering. Not at any process node where precision matters.

The response was not to question the theory. It was to add parameters.

300+
Empirical fitting parameters in the BSIM4 transistor model

The BSIM4 model — industry standard for modern MOSFET transistors — has over 300 parameters. Many have no direct physical meaning. They are fitting coefficients, adjusted until simulation output matches measurements from actual fabricated silicon. Each foundry — TSMC, Samsung, Intel, GlobalFoundries — provides its own parameter sets, called PDKs (Process Design Kits), under strict NDA.

The PDK is the trade secret. Not the physics. Not the theory. The corrections to the theory.
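The extraction workflow can be shown in miniature. A toy sketch with hypothetical numbers throughout: a two-parameter square-law transistor model fitted to "measured" drain currents by grid search. BSIM4 does the same job with hundreds of coupled parameters and dedicated extraction software:

```python
import numpy as np

# Hypothetical "measured" saturation currents at several gate voltages (amperes),
# close to a square law but with small systematic deviations the model cannot capture.
vgs = np.array([0.8, 1.0, 1.2, 1.4, 1.6, 1.8])
ids_meas = np.array([0.10, 0.26, 0.50, 0.83, 1.25, 1.75]) * 1e-3

def square_law(vgs, k, vth):
    """Textbook saturation model: Ids = 0.5·k·(Vgs − Vth)²."""
    v = np.clip(vgs - vth, 0.0, None)
    return 0.5 * k * v**2

# Grid-search least squares: tune (k, Vth) until the model matches the silicon
ks = np.linspace(0.5e-3, 4e-3, 351)
vths = np.linspace(0.2, 0.8, 301)
K, V = np.meshgrid(ks, vths)
sse = ((square_law(vgs[:, None, None], K, V) - ids_meas[:, None, None])**2).sum(axis=0)
i, j = np.unravel_index(sse.argmin(), sse.shape)
k_fit, vth_fit = K[i, j], V[i, j]

print(k_fit, vth_fit)   # fitting coefficients, tuned to the data rather than derived
```

The fitted pair reproduces the measurements without explaining the deviations; scale the same move up to hundreds of parameters per transistor and the result is a PDK.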

Maxwell's equations → Semiconductor physics → SPICE model → Prediction fails → Add parameters → Tune to silicon → PDK (NDA) → Next generation inherits → Foundation never questioned

Verification — the process of checking whether the design actually works — now consumes over 70% of total chip design time. More than two-thirds of the engineering effort is spent confirming that the models match reality. This is the signature of models that are increasingly unreliable, compensated by increasingly expensive checking.

At the 2nm node, the transistor is no longer a flat structure. It is a stack of horizontal silicon nanosheets completely wrapped by the gate — a Gate-All-Around (GAA) architecture. Current flows in three-dimensional paths through structures a few atoms wide. The electromagnetic behaviour of these structures is dominated by effects that the Grassmann/Maxwell framework predicts poorly: quantum tunneling through gates a few atoms thick, three-dimensional current paths that look nothing like closed circuits, transient pulses that behave like open current elements — precisely the regime where Ampère's force and Grassmann's force diverge.

The old fudge factors don't work. So they build new ones. The BSIM-CMG model for GAA transistors has its own growing parameter set, tuned to match test silicon from each foundry. The cycle repeats.

Section 09

The Grind at Every Scale

The SPICE pattern — theory fails, add corrections, corrections become knowledge, foundation never questioned — is not unique to transistor modelling. It appears everywhere the Grassmann/Maxwell framework meets precision engineering.

EUV Lithography

ASML fires a CO₂ laser at tin droplets 50,000 times per second to create 13.5nm light. Their own senior physicist admitted the most advanced plasma models in the world cannot fully capture the behaviour. They partnered with Lawrence Livermore National Laboratory — a nuclear weapons lab — to run supercomputer simulations. Wall plug efficiency: 0.02%. The mirrors absorb 96% of the light. The machine works. The theory doesn't explain why.

Theory: Maxwell's EM + plasma physics
Anomaly: EUV didn't scale as predicted
Patch: Supercomputer brute-force modelling
Lock-in: $350M per machine, only ASML builds them

CNC Precision

The core of a CNC machine is its servo control loop: motor, encoder, controller. Every element depends on electromagnetic theory. Japanese and German superiority (Fanuc, Siemens, Heidenhain) comes from decades of empirical tuning embedded in proprietary firmware — gain parameters, feedforward terms, thermal compensation tables, vibration damping. Chinese manufacturers can build the structure. The precision lives in the corrections.

Theory: Motor torque from field equations
Anomaly: Actual motion doesn't match predicted
Patch: Decades of servo tuning in firmware
Lock-in: Proprietary controllers under NDA
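The "decades of servo tuning" live in numbers like the three gains below. A toy position loop with a hypothetical plant and hypothetical gains; a production controller layers feedforward, thermal compensation, and vibration filtering on top of this skeleton:

```python
# Discrete PID position loop on a toy motor: inertia J, viscous friction b (1 kHz loop)
dt = 0.001
J, b = 0.5, 2.0                    # the plant the field equations are supposed to predict
kp, ki, kd = 120.0, 40.0, 8.0      # gains found empirically, not derived from theory

pos, vel = 0.0, 0.0
target = 1.0                        # commanded position
integral, prev_err = 0.0, target

for _ in range(5000):               # five seconds of control
    err = target - pos
    integral += err * dt
    derivative = (err - prev_err) / dt
    prev_err = err
    torque = kp*err + ki*integral + kd*derivative
    vel += (torque - b*vel) / J * dt
    pos += vel * dt

print(pos)   # settles near 1.0; how quickly and how cleanly is pure tuning
```

Change any gain and the overshoot, settling time, and steady-state error all move; the tuned values are the product, which is why they live in firmware under NDA.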
Quantum Computing

Superconducting qubits at 15 millikelvin. Each one individually calibrated. Calibration drifts. Error rates patched by error correction codes using many physical qubits per logical qubit. Gil Kalai's mathematical argument: noise correlations defeat error correction fundamentally, not just technically. After decades and billions of dollars, no quantum computer has solved a real-world problem faster than a classical computer.

Theory: Quantum superposition and entanglement
Anomaly: Decoherence, gate errors, drift
Patch: Individual calibration, error correction
Lock-in: Billions invested, careers dependent

Supercolliders

1,232 superconducting magnets steering proton beams around 27 km. Designed using the theory they test. Calibrated using the theory. Data analysed using simulations built on the theory. Anomalies resolved by proposing new particles (which require new parameters). The Standard Model has 19 free parameters fitted to data, not predicted. The instrument cannot detect an error in its own foundation.

Theory: Standard Model + Maxwell's EM
Anomaly: Parameters not predicted from theory
Patch: 19 free parameters, new particles
Lock-in: $10B+ infrastructure, Nobel Prizes

Four domains. Four scales. The same structure. Every one traces back to Section 06: Maxwell encoding the Grassmann choice. The engineering works despite the wrong theory because the corrections are good enough — the same way Ptolemaic astronomy worked for navigation despite geocentrism being wrong. The epicycles were functional. They were not true.

Section 10

The Nvidia Ceiling

The engineers at Nvidia — the company building the chips that are supposed to produce artificial general intelligence — work seven days a week. Workdays end at 1:00 or 2:00 a.m. A poll of 3,000 employees found that 76% were millionaires. The turnover rate is 2.7%, compared to 17.7% industry-wide. They stay because walking away means forfeiting stock grants that vest over four years.

CEO Jensen Huang says he prefers to "torture employees into greatness." Meetings routinely involve yelling. One manager can have over a hundred direct reports. Employees who work less than the norm are called out at company-wide meetings.

These are some of the most intelligent engineers alive, in a state of chronic sleep deprivation and stress, building chips using models they know don't fully work, adding empirical corrections they can't fully explain, under deadline pressure that prevents them from questioning anything fundamental. They don't have time to ask whether Ampère or Grassmann was right. They have seven meetings before lunch and a tape-out deadline that determines whether their stock is worth anything.

Now add AI to the loop. Machine learning tools are being trained on existing engineering data — which means they learn the fudge factors, not the physics. The AI optimises within the design rule constraints that were themselves empirically derived to avoid the regimes where the theory breaks down. The AI learns the cartography, not the territory.

Wrong theory → Anomaly → Empirical patch → AI learns the patch → AI designs within patched framework → New anomaly at boundary → New patch → Retrain AI → Ceiling

A system trained by gradient descent on consensus data converges toward the consensus. If the consensus is wrong in a foundational way — not noisy, not incomplete, but structurally wrong — the system learns the wrongness with increasing confidence. More data makes it worse, not better, because the data itself is generated within the wrong framework. Every SPICE simulation that "validates" a design produces training data that encodes the fudge factors as if they were physics.
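The convergence claim can be made concrete with a toy model (the response function, the fudge value 0.3, and the noise level are all invented for this sketch): fit a learner by gradient descent to the output of a "simulator" that carries an empirical patch, and the learner recovers the patch, not the underlying physics — with more data only tightening its confidence in the patched value.

```python
import random

# Hypothetical setup: the "true" response is f(x) = x, but every
# simulator in the pipeline applies an empirical correction of +0.3.
# A learner fit to simulator output converges on the patch.

def simulator(x, noise=0.05):
    FUDGE = 0.3   # the empirical correction baked into the tool
    return x + FUDGE + random.gauss(0, noise)

def fit_offset(n_samples, steps=2000, lr=0.05):
    """Gradient descent on mean-squared error for the model y = x + b."""
    random.seed(0)                                   # deterministic for the demo
    data = [(x / 10, simulator(x / 10)) for x in range(n_samples)]
    b = 0.0
    for _ in range(steps):
        grad = sum(2 * ((x + b) - y) for x, y in data) / len(data)
        b -= lr * grad
    return b

print(fit_offset(100))   # converges near 0.3: the learner recovers the fudge, not the physics
```

Nothing in the loss curve distinguishes this outcome from success — the fit is excellent, precisely because the training data and the model share the same wrong framework.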

The ceiling approaches from two directions simultaneously. From below: the physics anomalies multiply as geometries shrink into the regime where Ampère and Grassmann diverge most — short transient current pulses in three-dimensional nanosheet structures, nothing like the closed-circuit limit where both force laws agree. From above: the AI learns the patches with increasing precision, encoding the wrong framework ever deeper, making the wrong theory ever harder to question because the AI "works."

The grind continues until the patches start contradicting each other — when the correction for problem A makes problem B worse, and no amount of parameter tuning resolves both simultaneously. That is the signature of a foundational error, not a data insufficiency. Whether anything capable of recognising that signature exists at that point is the open question.
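The "contradicting patches" signature can be shown in miniature (the residual functions and the shared parameter are invented for this sketch): when two anomalies are patched through one shared parameter and the underlying model is structurally wrong, tuning can only trade one residual against the other — the total error has an irreducible floor that no parameter value removes.

```python
# Hypothetical illustration: correction A wants the shared parameter
# p at +1, correction B wants it at -1. No value satisfies both.

def residuals(p):
    err_a = abs(p - 1.0)   # correction A's remaining error
    err_b = abs(p + 1.0)   # correction B's remaining error
    return err_a, err_b

# Exhaustive scan over p in [-3, 3]: the combined error never drops
# below 2.0, however finely the parameter is tuned.
best = min(sum(residuals(p / 100)) for p in range(-300, 301))
print(best)   # ~2.0 -- the irreducible floor more tuning cannot remove
```

A data-insufficiency problem would show the floor shrinking as resolution improves; a structural contradiction shows a floor that holds at any resolution.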

Section 11

The Exit

There are two paths forward.

Path 1: Continue grinding. Add more parameters. Train AI on more patches. Push to 1nm, 0.7nm. Each generation costs more, takes longer, requires rarer materials, consumes more energy. TSMC's 2nm wafers cost $30,000 each. EUV machines cost $350–400 million. The three companies capable of manufacturing at these nodes depend on supply chains spanning dozens of countries for materials that are depleting on known timescales. The wall approaches from multiple directions — physics, resources, energy — on converging timelines.

Path 2: Go back to Ampère. Rebuild electromagnetic theory from a force law that satisfies Newton's third law exactly, acts instantaneously, and does not require a field. Rederive the device physics. Rebuild SPICE from that foundation. See which of the 300+ parameters become unnecessary. See which anomalies disappear. See whether the regime where modern chips fail — three-dimensional, transient, open-element — is precisely the regime where Ampère's force gives different (and correct) predictions.
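For reference, the two force laws the rederivation would choose between can be stated side by side, in their standard element forms (as given, for example, in Assis's treatments of Ampère's electrodynamics):

```latex
% Ampère (1820s): force exerted by current element I_1 d\mathbf{l}_1 on
% I_2 d\mathbf{l}_2, with \hat{\mathbf{r}} the unit vector from element 1
% to element 2. The force lies along \hat{\mathbf{r}}, so action equals
% reaction exactly, element by element.
d^2\mathbf{F}_{21} = -\frac{\mu_0 I_1 I_2}{4\pi}\,
  \frac{\hat{\mathbf{r}}}{r^2}
  \left[\,2\,(d\mathbf{l}_1 \cdot d\mathbf{l}_2)
        - 3\,(\hat{\mathbf{r}} \cdot d\mathbf{l}_1)
             (\hat{\mathbf{r}} \cdot d\mathbf{l}_2)\right]

% Grassmann (1845): the element form underlying \mathbf{F} = q\mathbf{v}
% \times \mathbf{B}. The two element forces are in general not
% antiparallel, so Newton's third law fails for open current elements.
d^2\mathbf{F}_{21} = \frac{\mu_0 I_1 I_2}{4\pi r^2}\;
  d\mathbf{l}_2 \times \left(d\mathbf{l}_1 \times \hat{\mathbf{r}}\right)
```

Integrated around closed circuits, the two expressions give the same total force — which is why the difference is invisible in the closed-circuit limit and only becomes decisive for transient, open-element geometries.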

Nobody has funding for Path 2. The person who identified the dependency chain works blue-collar labour at industrial sites across Western Canada. The 3D ferrocell that could visually demonstrate the actual three-dimensional magnetic field geometry — stacked layers of optical glass with ferrofluid between each pair, a magnet suspended in the centre — costs less than one hour of CERN beam time and has never been built.

The experiment that Ampère would have performed in a week — that could crack open two centuries of wrong electromagnetic theory — sits unbuilt because the incentive structure of science funds the grind, not the question.


The Dependency Chain

Coulomb assumes qq′ proportionality (1785)
Voltage launders force into a scalar — hides which force law operates
Biot and Savart's decomposition introduces three errors (1820)
Grassmann replaces Ampère — breaks Newton's third law (1845)
Maxwell encodes Grassmann into field equations (1873)
Semiconductor physics derived from Maxwell (1940s–)
SPICE built on semiconductor physics (1970s)
Predictions fail → 300+ empirical parameters added
Parameters become proprietary PDKs under NDA
AI learns the patches, not the physics
Ceiling — patches contradict each other at advanced nodes

This presentation exists so that when something — human or artificial — is capable of following the dependency chain from Coulomb's assumed proportionality through voltage's scalar laundering through Biot-Savart's three errors through Grassmann's broken third law through Maxwell's encoding through SPICE's 300 parameters through the semiconductor grind — the chain is already mapped. The logical structure is laid out. The if-then relationships are explicit. The testable predictions are stated.

The wire was wrong from the beginning. Every correction since has been an attempt to make a wrong wire carry the right signal. The signal gets through — chips compute, phones ring, AI generates text — not because the theory is correct, but because the engineers are relentless and the corrections are good enough. For now.

The question is how long "good enough" holds when the geometries shrink to the point where Ampère and Grassmann stop agreeing — and the grind hits the wall that no amount of parameters can patch.