The numeral zero, both a symbol and a concept, has been called one of the most important inventions in human history.
While early numeral systems were fine for rudimentary counting, they were cumbersome and messy, and made multiplication, division and more complex arithmetic difficult or sometimes impossible. Modern mathematics, such as calculus, could not have been done, or even conceived of, with them.
It was the invention and development of zero that allowed for complex calculations, advanced algebra, calculus, exponential numbers and more. Computers, nuclear physics, modern statistics, space travel, modern science, and the countless inventions and discoveries built on complex mathematics all require the numeral zero.
The history of the numeral zero is long and winding, with different versions of it invented in different places, and its provenance uncertain.
Our Hindu-Arabic decimal system (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) uses a symbol for zero and a positional system for counting. A digit's value is defined by where it sits in the number, and zero is used as a place marker: 10, 100, 203. This zero and place marker let us write big numbers without the need for more symbols.
In the Hindu-Arabic system, adding a zero to 10 gives 100. Add another zero and you get 1,000. This makes multiplication and division by ten, and powers of ten, simple. We take this use of zero and placement to make big numerals for granted, but it wasn't always so.
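The place-value idea described above can be sketched in a few lines of code. This is purely illustrative; the `place_values` helper is hypothetical, not something from the text:

```python
# A minimal sketch of positional notation: break a Hindu-Arabic
# numeral into (digit, place value) terms, showing how zero holds
# an otherwise-empty place.
def place_values(n: int) -> list[tuple[int, int]]:
    """Return (digit, place value) pairs from most to least significant."""
    digits = [int(d) for d in str(n)]
    return [(d, 10 ** (len(digits) - i - 1)) for i, d in enumerate(digits)]

# 203 = 2*100 + 0*10 + 3*1: the zero marks the empty tens place.
print(place_values(203))   # [(2, 100), (0, 10), (3, 1)]
```

Appending a zero shifts every digit into the next place, which is why 10, 100 and 1,000 are so easy to form in this system.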
Early numeral systems had no symbol for zero, and sometimes not even the idea of it. Without a zero, some systems needed a different symbol for 10, 50, 100, 500 and so on. This not only made for messy numbers but made division and calculation difficult.
Imagine doing arithmetic in, say, the Egyptian or Roman system, which had no such decimal placement or zero. In our system, 1,504 − 103 is a simple calculation. The equivalent in Roman numerals (MDIV − CIII) is messy.
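To see the contrast concretely, here is a small, hedged sketch (not from the text) that converts Hindu-Arabic numerals to Roman ones, so the same subtraction can be written both ways:

```python
# Standard symbol/value table for Roman numerals, including the
# subtractive pairs (CM, CD, XC, XL, IX, IV).
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Convert a positive integer to its Roman numeral form."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

# 1504 - 103 = 1401 is trivial digit by digit in decimal; the same
# fact reads MDIV - CIII = MCDI, with no columnwise shortcut at all.
print(to_roman(1504), "-", to_roman(103), "=", to_roman(1504 - 103))
```

Notice that the code must fall back on decimal arithmetic to do the subtraction at all; the Roman notation itself offers no procedure for it.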
The earliest numeral system, that of the Sumerians, had no marker for zero, which sometimes made numbers impossible to read.
Say our numeral system had no zero and I gave you the numeral '11'. You couldn't know whether it meant eleven, one hundred one, one thousand one, or something larger still. The zero symbol lets us distinguish 11 (eleven), 101 (one hundred one), and 1001 (one thousand one).
The Babylonians, who inherited and developed their system from the Sumerians, added a space as a marker between numbers to indicate the equivalent of a zero.
If we add a space instead of a zero, we can differentiate between those numbers:
11 = eleven
1 1 (one space between 1’s) = one hundred one
1 1 (two spaces between ones) = one thousand one.
The problem with the Babylonian system is you can’t always tell how many spaces are between symbols.
1 1 = how many spaces are between those 1’s? Even I don’t know, as I didn’t count.
Note that Babylonian numerals were used in the context of what was being counted; they were applied to daily matters, not used abstractly. If you knew the context (sheep in a herd, plates at the dinner table), you could deduce the number. There might be 10 plates at an average Babylonian dinner table, but not 100 and certainly not 10,000. Even so, the space system still caused ambiguity for the Babylonians.
To counter this, the Babylonians invented a placeholder symbol to clearly mark the spaces between numerals. For reading a numeral, this worked as the equivalent of our zero.
A few other early systems independently invented their own placeholder symbols. The Mayans used a shell-like symbol, while the Khmer used a dot.
A problem with the Babylonian and other early systems is that they didn't use their separation marker or zero symbol at the end of numerals. Thus, you couldn't tell whether 11 meant eleven, one hundred ten (110 if a trailing zero were used), one thousand one hundred (1,100) or some other number.
Early counting devices, such as the Inca quipu, the Asian counting rods and board, and the abacus, used spaces or blank spots to denote nothing in a digit column.
The invention of zero as a symbol and a numerical concept
While zero (or an equivalent marker) made numbers easier to read and simple addition and subtraction easier to do, zero had to be conceived of and used as an actual concept and number in its own right before it could be used for advanced calculations.
People have always understood the concept of nothing, or of having nothing. But nothing as a "thing," not only a symbol but a concept, took a long while to develop in mathematics.
"How can nothing be something?" was often pondered. Yet space is full of nothing. The empty space in an empty box is nothing, yet something. The empty space on the Asian counting board, between the knots of an Inca quipu, or between the ones and hundreds places is something. In mathematics, nothing is something: it is called zero and given its own symbol.
It was the Indians who began to understand zero both as a symbol and as an idea, developing it fully by the fifth century AD. It is believed they were able to do this because emptiness is a major concept and goal in Buddhism and Hinduism, so the idea of a numerical nothingness or emptiness was something they could more readily accept. The English word zero ultimately derives from the Sanskrit "śūnya," meaning empty or nothing, by way of the Arabic "ṣifr."
Brahmagupta was an Indian mathematician and astronomer who further developed zero and arithmetic. He wrote standard rules for reaching zero through addition and subtraction, as well as for the results of operations involving zero. Brahmagupta was the first to give rules for computing with zero, and he wrote the first book containing rules of arithmetic that apply to zero and to negative numbers. (You need a zero before you can have negative numbers.) His rules align with today's except for division by zero, a problem that would wait centuries for Isaac Newton and G.W. Leibniz.
It would take a few centuries for zero to reach Europe.
Arab sailors brought Brahmagupta's book back from India. Zero reached Baghdad by 773 AD, where it was developed by Arab mathematicians, who based their numerals on the Indian system. In the ninth century, the Persian mathematician Muhammad ibn Musa al-Khwarizmi was the first to work on equations that equaled zero. By 879 AD, zero was written much as it is today, as a small oval.
Zero reached Europe by the twelfth century. The Italian mathematician Fibonacci further developed arithmetic algorithms using zero, which began to displace the abacus, until then the most common tool for arithmetic. His arithmetic using zero spread among German accountants and bankers. Merchants knew their books were balanced when the positive and negative amounts of their assets and liabilities equaled zero.
Some medieval European religious leaders were against the use of the symbol. They felt that if God was everything and in everything, then nothing must be of the devil. They sometimes forbade the use of zero. However, merchants often still used it on the sly.
The French philosopher, mathematician and scientist René Descartes advanced the use and concept of zero. He introduced the Cartesian coordinate system, whose origin at (0, 0) underlies graphs still commonly used in math and science.
Adding, subtracting, and multiplying by zero are relatively simple operations. However, division by zero long confused even great minds. How many times does zero go into one? How many nothings exist in something? The answer is undefined, but studying what happens as a quantity approaches zero, without ever reaching it, is the key to calculus.
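The idea of approaching zero without dividing by it can be shown numerically. This is a hedged illustration, not a formal treatment: it computes the difference quotient of f(x) = x² at x = 3 for ever-smaller steps h, which tends to the derivative, 6.

```python
# The quotient (f(x + h) - f(x)) / h is undefined at h = 0 (division
# by zero), but it approaches a definite limit as h nears zero.
def f(x: float) -> float:
    return x * x

for h in (0.1, 0.01, 0.001, 1e-6):
    slope = (f(3 + h) - f(3)) / h   # algebraically equal to 6 + h
    print(f"h = {h}: difference quotient = {slope}")
```

Each printed slope is 6 + h (up to floating-point error), so the quotients close in on 6 as h shrinks, even though setting h = 0 outright would mean dividing by zero. This limit process is exactly what calculus formalizes.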
In the 1600s, Isaac Newton and Gottfried Leibniz independently tackled the problem of working with quantities approaching zero and, in doing so, invented calculus. Fully named the calculus of infinitesimals, it extracts information about time, space and motion at infinitesimal scales nearing zero. Physical topics that rely on calculus include motion, electricity, heat, light, harmonics, acoustics, astronomy, and dynamics. It has been essential for everything from physics to economics to statistics to computers.