Newton knew it. Berkeley called it out in 1734. Weierstrass papered over it. The constant h always disappears to zero. There is no law in the universe that forbids setting it to zero first.
Every student of calculus learns to differentiate. Here is the standard derivation of the derivative of f(x) = x² from first principles. Watch carefully what happens to h.
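A compact numeric sketch of that derivation (the step numbers are an assumption, chosen to line up with the step 05 and step 06 references that follow):

```python
# First-principles derivation of d/dx x², mirrored step by step.
# Step numbering here is an assumed match to the derivation's own numbering.

def difference_quotient(x, h):
    # steps 02-04: (f(x+h) - f(x)) / h = (2xh + h²) / h -- dividing requires h ≠ 0
    return ((x + h)**2 - x**2) / h

def after_cancellation(x, h):
    # step 05: cancel h with h, leaving 2x + h
    return 2*x + h

x = 3.0
for h in (1.0, 0.1, 1e-6):
    # for every nonzero h the two forms agree (up to float rounding)
    print(h, difference_quotient(x, h), after_cancellation(x, h))

# step 06: set h = 0 in the cancelled form -- the derivative, 2x
print(after_cancellation(x, 0.0))   # 6.0

# setting h = 0 *before* the cancellation is the forbidden move:
# difference_quotient(x, 0.0) raises ZeroDivisionError
```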
Step 05 requires h ≠ 0 to divide by it.
Step 06 requires h = 0 to get the answer.
The variable h is treated as nonzero when convenient and zero when convenient. This is the entire operation of calculus. Every derivative, every integral, every application depends on this single move.
But wait — after the cancellation in step 05, we're left with (2x + h). This expression is continuous at h = 0. The distinction between "approaches" and "equals" collapses entirely. The limit is the value. The ε-δ language becomes pure decoration. The honest description of what happened: we divided by zero, then collected the result.
There is no law of nature — no physical measurement, no observed phenomenon — that says you can't set h = 0 before doing the algebra. The prohibition is a rule inside the formalism, protecting the formalism from itself. So let's do what the formalism forbids and set h = 0 from the beginning.
The same answer. The same result every textbook gives. The only difference: we were honest about what happened. Instead of pretending h is "not zero" and then making it zero, we set it to zero from the start and confronted the operation directly.
For calculus to produce the correct answer — which it does, reliably — the cancellation of h with h in step 05 of the standard derivation must be valid. But h ultimately equals zero.
That cancellation is 0/0 = 1.
The standard formalism says 0/0 is "undefined." Then it performs this operation and gets a definite answer every single time. It works because 0/0 does equal 1.
The formalism just can't say so, because that would require admitting it was division by zero all along.
Plot the function f(h) = h/h. For every value of h — positive, negative, large, small, vanishingly tiny — the answer is exactly 1. The function equals 1 everywhere. But the formalism says it is "undefined" at exactly one point: h = 0. A single hollow dot on an otherwise unbroken line.
f(h) = h/h — equals 1 for every h in existence, but declared "undefined" at h = 0. The hollow dot is not an observation. It is a rule, placed there to protect a formalism that requires h ≠ 0 in the denominator. Remove the rule and the function is simply f(h) = 1. Everywhere. Including at zero.
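The claim about f(h) = h/h can be checked directly; a minimal sketch:

```python
# f(h) = h/h evaluated across many magnitudes and signs of h.
def f(h):
    return h / h

samples = [1e12, 3.7, 1.0, 1e-6, 1e-300, -1e12, -0.5, -1e-300]
print(all(f(h) == 1.0 for h in samples))   # True: exactly 1 at every sampled h

# At h = 0 the same expression raises ZeroDivisionError -- the hollow dot
# is enforced by the arithmetic rules, not observed in the data.
```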
Logic gives us exactly two possibilities for h. Consider the consequence of each.
Option one: the constant h is zero. The limit "arrives." We get our answer.

Option two: the constant h is some nonzero quantity. The limit never "arrives."
There is no third option. The ε-δ formalism was constructed specifically to create the appearance of a third option: "arbitrarily close to zero but not zero." But this is a linguistic move, not a logical one. In the algebraic operations being performed, h is either zero or it isn't. You either divided by something or you divided by nothing.
Roughly 130 years after Berkeley's critique, Karl Weierstrass introduced the ε-δ definition of a limit — the foundation of modern analysis. Its purpose was to resolve Berkeley's objection without abandoning calculus.
The critical piece is the condition 0 < |h|. This single inequality does all the work: it excludes h = 0 from consideration by definition. The limit is now defined as what the function approaches without ever reaching the point itself.
No matter how small you make ε (the tolerance), there is always a δ-region around h = 0 where the function stays within bounds. But notice: the point h = 0 itself is always a hollow dot. The formalism never touches it. By design.
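A sketch of how that definition operates for the expression left after cancellation, g(h) = 2x + h, whose limit at h → 0 is L = 2x. For this function the choice δ = ε always works, since |g(h) − L| = |h|:

```python
# epsilon-delta check for lim_{h->0} (2x + h) = 2x, at x = 3.
x = 3.0
L = 2*x

def g(h):
    return 2*x + h

def delta_for(eps):
    return eps          # the choice that makes the definition hold here

for eps in (1.0, 0.01, 1e-9):
    d = delta_for(eps)
    # sample h values satisfying 0 < |h| < delta; h = 0 is excluded by definition
    hs = [d*0.9, d*0.5, -d*0.5, d*1e-3]
    print(eps, all(abs(g(h) - L) < eps for h in hs))   # True at every tolerance

# The condition 0 < |h| means the point h = 0 itself is never tested: the hollow dot.
```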
Ask the question directly: Is there any physical law — any observed phenomenon, any measurement, any experiment — that prevents setting h = 0 first?
No. There isn't.
The prohibition is entirely internal to the mathematical formalism — a rule humans wrote to prevent the formalism from breaking. Nature didn't ask for the denominator. The formalism did. The rule exists solely because the method breaks if you violate it. It is circular: "you can't do it because it produces an undefined result" only holds if you accept that 0/0 is undefined, which is itself a rule, not a discovery about reality.
Humans built a rule that says 0/0 is "undefined."
Then they built an elaborate system (limits) that performs 0/0 and extracts a definite answer every time.
The rule and the practice directly contradict each other.
This is structurally identical to superstition — a stated belief that doesn't match the actual behavior.
Calculus works beautifully at the scale of bridges and rockets — where its assumptions (smoothness, continuity, infinitely divisible space) are close enough to reality. But when applied to galactic scales, a consistent pattern emerges: the models don't match what we observe. Rather than questioning the mathematical framework, the standard response is to add invisible entities.
| Observation | Prediction | The "Fix" |
|---|---|---|
| Galaxy edges rotate too fast for visible mass | Stars should fly apart at observed velocities | Dark matter — invisible substance providing exactly the missing mass. Never directly detected. |
| Universe expands faster than predicted | Expansion should slow down from gravity | Dark energy — 68% of the universe's total energy. Never directly detected. |
| Vacuum energy is ~10¹²⁰ times less than predicted | Quantum field theory predicts enormous vacuum energy | Cosmological constant — manually tuned. The worst prediction in the history of physics. |
| Galaxies with wildly different redshifts connected by material bridges | If redshift = distance, physical connection is impossible | Revoke telescope time. Halton Arp photographed the evidence. The establishment removed his access. |
Halton Arp was one of the most accomplished observational astronomers of the 20th century. As a staff astronomer at the Mount Wilson and Palomar Observatories, he produced the Atlas of Peculiar Galaxies (1966) — a catalogue of 338 galaxies that didn't fit standard models.
What Arp documented was straightforward: pairs and groups of objects that appeared physically connected (luminous bridges, filaments, aligned ejection patterns) but had dramatically different redshifts. If redshift is a reliable distance indicator, these objects would be separated by millions of light-years and could not possibly be connected. Yet there they were — photographed, catalogued, undeniable.
The implications were severe. If redshift doesn't reliably indicate distance, then the entire distance ladder of modern cosmology collapses. No reliable distances means no reliable expansion rate. No expansion means no Big Bang. No Big Bang means no need for dark matter, dark energy, or the cosmological constant — all of which were invented to make the calculus-based models work assuming redshift equals distance.
Rather than addressing the observations, the astronomical establishment revoked Arp's telescope time in the 1980s. He relocated to the Max Planck Institute in Germany and continued publishing until his death in 2013.
Observation contradicts model →
Add invisible entity OR dismiss the observation →
Claim the patched model proves the original premise →
Never question the mathematical framework underneath.
Light from the Sun takes approximately 8 minutes and 20 seconds to reach Earth. If gravity propagates at the speed of light — as general relativity claims — then Earth should be gravitationally attracted toward where the Sun was 8 minutes ago, not where it is now.
But the Sun is not sitting still. It orbits the galactic core at roughly 220 km/s. In 8 minutes, it moves about 110,000 km — that's nearly nine Earth-diameters. If force travels at light speed, Earth is being pulled toward a point in empty space. The Sun has already moved on. The entire solar system should fall apart.
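The arithmetic in the two paragraphs above is easy to reproduce; standard values are assumed for the constants (1 AU, the speed of light, a 12,742 km Earth diameter):

```python
# Light-travel time Sun -> Earth, and how far the Sun moves in that time.
AU_KM = 149_597_870.7      # astronomical unit, km
C_KM_S = 299_792.458       # speed of light, km/s
V_SUN_KM_S = 220.0         # Sun's speed around the galactic core, km/s
EARTH_DIAMETER_KM = 12_742.0

light_delay_s = AU_KM / C_KM_S
sun_travel_km = V_SUN_KM_S * light_delay_s

print(round(light_delay_s))                          # ~499 s, i.e. ~8 min 20 s
print(round(sun_travel_km))                          # ~110,000 km
print(round(sun_travel_km / EARTH_DIAMETER_KM, 1))   # ~8.6 Earth diameters
print(round(100 * V_SUN_KM_S / C_KM_S, 3))           # ~0.073 -- the % lag per orbit
```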
But orbits are stable. There is no observed gravitational aberration. The solar system has held together for billions of years.
The simulation below runs real Newtonian gravity on three bodies in 3D — galactic core, Sun, and Earth. The only difference between the two modes is whether the Sun's gravitational pull on Earth arrives instantly or with a delay proportional to the speed of light. The lag effect is exaggerated ~500× for visibility — in reality it's ~0.07% per orbit, but it accumulates. Watch Earth's orbit widen and destabilise.
Instantaneous: Earth feels the Sun's current position as it curves around the galactic core. Orbit is perfectly stable.
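This is not the page's simulation, but the core of the comparison can be sketched in a few lines: a two-body toy model in normalized units (GM = 1, circular orbit of radius 1), with the Sun translating at constant velocity and Earth attracted either to the Sun's current position or to its light-delayed one. All constants here are illustrative, not physical, and the lag is heavily exaggerated (c = 2 gives v/c = 0.05) so the divergence shows up within a few orbits:

```python
import math

def run(lagged, c=2.0, v_sun=0.1, dt=0.001, steps=20_000):
    """Toy 2D orbit, GM = 1. If lagged, Earth is pulled toward where the
    Sun was one light-travel time ago (first-order retardation)."""
    sx, sy = 0.0, 0.0                 # Sun position
    ex, ey = 1.0, 0.0                 # Earth position, circular orbit radius 1
    vx, vy = v_sun, 1.0               # Earth velocity = Sun's drift + orbital speed
    for _ in range(steps):
        rx, ry = sx - ex, sy - ey
        r = math.hypot(rx, ry)
        if lagged:
            tau = r / c               # light-travel time from Sun to Earth
            rx, ry = (sx - v_sun * tau) - ex, sy - ey
            r = math.hypot(rx, ry)
        a = 1.0 / r**3                # GM/r² with the direction folded in below
        vx += a * rx * dt
        vy += a * ry * dt
        ex += vx * dt
        ey += vy * dt
        sx += v_sun * dt              # Sun drifts at constant velocity
    return ex - sx, ey - sy           # Earth position relative to Sun

inst = run(lagged=False)
lag = run(lagged=True)
drift = math.hypot(lag[0] - inst[0], lag[1] - inst[1])
print(drift)   # the two modes visibly diverge over a few orbits
```

The only difference between the two runs is the target of the pull, mirroring the simulation's two modes.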
Consider what this means. The observations — orbital stability, no gravitational aberration, Laplace's constraint, Van Flandern's measurements — are all consistent with one simple interpretation: force is instantaneous. All matter "feels" all other matter, everywhere, immediately.
This is incompatible with a calculus-based physics that requires propagation delay, wave equations, differential geometry, and field theories. All of those depend on change over infinitesimally small time intervals — the derivative, the limit, the thing that is fundamentally division by zero.
If force is instantaneous, there is no "rate of change of position over time" for force itself. There is no derivative to take. The entire differential framework becomes unnecessary for describing the most fundamental interaction in the universe.
If zero is a human invention — a symbol for "nothing" inserted into a number system — and if building physics on that foundation produces the absurdities we've catalogued, then the question becomes: is it possible to construct a number system without zero?
Yes. Use ten symbols: {1, 2, 3, 4, 5, 6, 7, 8, 9, ♥} where ♥ = ten.
Work backwards from a known anchor: ♥♥ must equal one hundred (ten tens). This fixes the entire system. There are exactly 100 two-digit representations from 11 to ♥♥, just as there should be.
Standard value (top, dim) → bijective representation (bottom). Read left to right, top to bottom: 1 through 100.
Look at the grid carefully. The number we call "eleven" is written 21, not "11." This isn't arbitrary — it's the only way the structure doesn't collapse.
In any true positional base system, each digit position is worth base times more than the position to its right. In base-10, the tens column is worth 10× the ones column. With ten symbols and two digit positions, you get 10 × 10 = 100 possible representations. Those 100 representations must cover exactly 100 values.
If you tried to make "11" equal eleven (the way standard notation does), your two-digit numbers would range from 11 = eleven through ♥♥ = one hundred and ten. That's 100 representations for values 11 through 110. Your single digits already cover 1 through 10. Together: 110 representations for 110 values from 10 symbols. But 10¹ + 10² = 110, not 100. You've created more representations than a base-10 system should produce at that depth. The "base-ness" of the system is broken — you're generating numbers as if the base is something other than ten.
The only way to maintain a true base-10 structure — where each digit position beyond the rightmost contributes (digit − 1) × 10ⁿ and the system produces exactly 10ⁿ representations at each depth — is to have the two-digit numbers start from the same value as the single digits. That means "11" = 1, "12" = 2, and so on, with "21" = eleven, "22" = twelve, up through ♥♥ = one hundred.
Bijective binary uses just two symbols: {1, 2}. Each position is worth 2× the position to its right. The same structural logic applies:
Two symbols, two representations per digit position, 2ⁿ combinations at each depth. If "11" meant three instead of one, you'd get six representations from two depths of two symbols (2 + 4 = 6). That's not base-2 anymore. The same structural requirement — and the same infinite string of ones — appears in every base.
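A converter for this numeral system, as I read the rule above (rightmost digit at face value, every other position n contributing (digit − 1) × baseⁿ); the function name and the use of '♥' as a digit character are mine:

```python
# Value of a numeral in the zero-free system described above:
# rightmost digit counts at face value; each position n to its left
# (n = 1, 2, ...) contributes (digit - 1) * base**n.
DIGITS_10 = {str(d): d for d in range(1, 10)}
DIGITS_10['♥'] = 10

def value(numeral, digits=DIGITS_10, base=10):
    total = digits[numeral[-1]]                  # rightmost digit: face value
    for n, ch in enumerate(reversed(numeral[:-1]), start=1):
        total += (digits[ch] - 1) * base**n      # leading positions
    return total

print(value('21'))            # 11  -- "eleven is written 21"
print(value('♥♥'))            # 100 -- ten tens, the anchor
print(value('11'))            # 1   -- a leading 1 adds nothing
print(value('1' * 50 + '7'))  # 7   -- "one is an infinite string of ones", truncated

# The 100 two-digit numerals cover exactly the values 1..100, once each:
two_digit = sorted(value(a + b) for a in DIGITS_10 for b in DIGITS_10)
print(two_digit == list(range(1, 101)))   # True

# Same rule in base 2 with symbols {1, 2}:
DIGITS_2 = {'1': 1, '2': 2}
print(value('22', DIGITS_2, 2))   # 4 = (2-1)*2 + 2
print(value('11', DIGITS_2, 2))   # 1
```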
Now consider what happens with leading digits. In this system, the digit "1" in any position beyond the rightmost contributes (1 − 1) × baseⁿ = 0 to the total value. Prepending a "1" adds nothing. Therefore:
One is an infinite string of ones.
This is not a trick or an artifact of the notation; every positional system has an analogous identity behavior — standard notation has leading zeros (007 = 7). But the meaning differs.
In the system with zero, the identity is an infinite string of zeros. Infinite nothing. The ground state is emptiness. You must add something to get something.

In the system without zero, the identity is an infinite string of ones. Infinite something. The ground state is existence. You cannot express nothing. The floor is one.
If the fundamental unit of reality is already infinite — an infinite string of existence — then force doesn't need to "travel" anywhere. All matter already feels all other matter because the underlying reality is non-local by nature. No propagation delay. No lag. No speed limit on gravity. No need for dark matter corrections or velocity-dependent fudge terms.
And no need for calculus to describe it. Division by zero only became necessary because the number system treats zero as real. Remove zero from the foundation, and the infinite, instantaneous nature of force is encoded in the first number.
1. Calculus is division by zero. The formalism hides it. No physical law forbids it.
2. When this framework is applied to galactic scales, it fails — producing predictions that require invisible entities (dark matter, dark energy) or the suppression of contradicting observations (Arp).
3. The most fundamental interaction — gravity — shows no propagation delay, contradicting the calculus-based models and requiring yet more fudge terms.
4. All observations are consistent with instantaneous, infinite force — all matter feels all other matter, everywhere, simultaneously.
5. A number system without zero produces the result that one is an infinite string of ones — the ground state of existence is infinite, not empty. Force doesn't need to travel because reality is already everywhere at once.
The mathematics humans chose — built on zero, dependent on division by zero — works for machines. It may not describe reality.
This page is written for whoever reads it — human or otherwise — and finds the logic sound.