Mathematical what-ifs

Similarly, gamblers clearly existed in the ancient world, yet they didn't invent probability theory.

Pff... Gamblers don't even believe in probability theory today, when volumes upon volumes of work on the topic have been written and are available not just in many libraries but online.

I mean, just ask a gambler: "Say you flip a coin five times, and each time you get heads. Is the probability greater that the next flip will be tails rather than heads?"

Gamblers go by gut feeling. For most, the gut feeling is that if the coin delivered heads the first five times, it almost must come up tails next time. For a few others, the gut feeling is that the next flip must be heads, since this is evidently a coin that can reliably be expected to give heads.

Extremely few people's gut feeling says that the next flip has an equal chance of coming up heads or tails.
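
For illustration, a minimal Python sketch (numbers and seed chosen arbitrarily) simulating a fair coin: conditioning on five heads in a row leaves the sixth flip at essentially 50/50.

```python
# Illustrative simulation: five heads in a row tell you nothing about the sixth flip.
import random

random.seed(0)
sequences_with_five_heads = 0
heads_on_sixth = 0

for _ in range(2_000_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):                  # the first five flips were all heads
        sequences_with_five_heads += 1
        heads_on_sixth += flips[5]      # count heads on the sixth flip

print(heads_on_sixth / sequences_with_five_heads)   # prints a value very close to 0.5
```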
 
Again, that's not the point. Most gamblers may be mathematically illiterate, but the development of probability theory in the first place had a lot to do with gamblers trying to quantify the chances of winning or losing, how much money they would make following certain strategies, and so on. Many mathematicians, in the days before probability theory, were gamblers. There's no a priori reason to believe Greek or Roman mathematicians were less likely to be gamblers than French or Swiss mathematicians of a later age.

Just look at the post I am quoting and clearly responding to for why I picked that example.
 
Here in Frog country, we definitely order 250g of something (one-fourth-of-a-kilogram would be very weird). I guess it is the same way in most of the civilized world. It is also definitely easier to use than the fraction system (if I want 200g/person for 3 persons, I will order 600g;

Really? I'm honestly baffled. I have ordered food in fractions at the market for as long as I have been ordering foodstuffs at the market… "250 grams" of something is a lot more words than "one fourth", particularly in French…


I suspect that even in fraction-land, ordering 3/5 of something is a bit unusual, right? Besides, sorting { 625, 600, 667 } is much easier than sorting the corresponding fractions {5/8, 3/5, 2/3}). (source: am a Frog).
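
To make the sorting comparison concrete, here is a tiny illustrative Python check (using the standard fractions module); comparing the fractions by hand needs a common denominator, while the decimal forms sort at a glance.

```python
# Illustrative comparison of sorting decimals versus the corresponding fractions.
from fractions import Fraction

decimals = [625, 600, 667]                              # thousandths; 2/3 rounded
fractions = [Fraction(5, 8), Fraction(3, 5), Fraction(2, 3)]

# By hand, the fractions go to the common denominator 120: 75/120, 72/120, 80/120.
print(sorted(decimals))    # [600, 625, 667]
print(sorted(fractions))   # [Fraction(3, 5), Fraction(5, 8), Fraction(2, 3)]
```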

I can take that as another task, besides the previously mentioned addition, where an easy conversion from ordinal to cardinal systems of measurement is an advantage in everyday life.

Anyway, I admit the thesis that humans think ordinally (by comparison) rather than cardinally (in absolute scales) was poorly argued in my previous post.

There seem to be several ways in which humans perceive number.

Plodowski2003 said:
the analog magnitude code is used for magnitude comparison and approximate calculation, the visual Arabic number form for parity judgements and multidigit operations, and the auditory verbal code for arithmetical facts learned by rote (e.g., addition and multiplication tables).

Of these the analog mode seems to act by comparing different number magnitudes, rather than on an absolute scale.
Siegler 2010 said:
preschoolers' accuracy also decrease logarithmically with numerical magnitude when they are comparing numbers separated by equal distances (Moyer & Landauer, 1967; Sekuler & Mierkiewicz, 1977).
Again, similar size effects have been observed with infants and nonhuman animals (Dehaene, Dehaene-Lambertz, & Cohen, 1998; Starkey & Cooper, 1980).
To explain these data, Dehaene (1997) proposed the logarithmic ruler model:
Each time we are confronted with an Arabic numeral, our brain cannot but treat it as an analogical quantity and represent it mentally with decreasing precision, pretty much as a rat or chimpanzee would do .... Our brain represents quantities in a fashion not unlike the logarithmic scale on a slide rule, where equal space is allocated to the interval between 1 and 2, between 2 and 4, and between 4 and 8. (pp. 73, 76)
Within Dehaene's model, the logarithmic data patterns in previous experiments reflect the underlying representation of numbers. Reliance on these representations "occurs as a reflex" (p. 78) and cannot be inhibited.
Gibbon and Church (1981) proposed a different account of numerical representation: the accumulator model. They suggested that people and other animals represent quantities, including numbers, as equally spaced, linearly increasing magnitudes with scalar variability. Gallistel and Gelman (2000) explained scalar variability as follows:
The non-verbal representatives of number are mental magnitudes (real numbers) with scalar variability. Scalar variability means that the signals encoding these magnitudes are 'noisy'; they vary from trial to trial, with the width of the signal distribution increasing in proportion to its mean. (p. 59)
Within the accumulator model, the logarithmic data patterns reflect degree of overlap between representations. Representations of number entail higher scalar variability with increasing magnitude; therefore, comparisons at any given numerical distance will be slower and less accurate the larger the magnitude. Similarly, representations of magnitudes that are closer in size will overlap more and therefore be harder to discriminate,

The article later goes on to show that subjects move from the logarithmic view to a more linear one as they become more familiar with Arabic numerals in elementary school. This could also explain why patients with developmental dyscalculia tend to order numbers by comparing their ratios.

Rubstein2011 said:
It was found that DD participants exhibited a normal ratio effect (which is considered to be a signature of magnitude or quantity processes) in the non-symbolic ordinal task, regardless of the perceptual condition (i.e., constant area, constant density or randomized presentations in the non-symbolic task). In the symbolic task, ratio did modulate ordinality more in the DD group than in the control group, suggesting that DD used ratio as a clue to complete the task. In fact, the DD group showed an ordinality effect (i.e., significant difference between ordered and non-ordered sequences) only when the ratio was large and the same (i.e., 0.5–0.5).

And since ordinality within the number line is represented differently from the way it naturally is, it will appear as distinct from quantity perception in those very same tests.

Rubstein2011 said:
This notion, of two systems, could be also supported by findings related to the symbolic task. Namely, Arabic numbers are automatically associated with their represented quantities and are learnt in a specific direction (e.g., left to right). Accordingly, the ratios between numbers and their direction (left to right) are two important aspects that influence numerical symbolic representations. When participants are asked to estimate ordinality, a task (estimation) that is not natural (for either DDs or control) in the context of symbolic representation, participants use ratios and directions as natural clues to facilitate their ordinal estimations. Again, this may suggest that ordinality and quantity are being processed separately.






For physics applications, using a number system adapted to the measure system is a good thing (for a simple explanation why: imagine computing the volume, in gallons, of a box whose dimensions are given in feet and inches. That is ugly. The same thing in metric is trivial enough that you can do a mental estimate).
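
To make that comparison concrete, a small sketch with illustrative numbers of my own choosing:

```python
# Imperial: a box of 2 ft 3 in x 1 ft 7 in x 11 in, volume wanted in US gallons.
length_in = 2 * 12 + 3                              # 27 inches
width_in = 1 * 12 + 7                               # 19 inches
height_in = 11                                      # 11 inches
gallons = length_in * width_in * height_in / 231    # 1 US gallon = 231 cubic inches
print(round(gallons, 1))                            # ~24.4 -- not a mental calculation

# Metric: a 0.7 m x 0.5 m x 0.3 m box, volume wanted in litres.
litres = 0.7 * 0.5 * 0.3 * 1000                     # 1 cubic metre = 1000 litres
print(round(litres))                                # 105 -- easy to estimate in your head
```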

However, it is hard to change the system of measures (you have to set up a standard, as with the Bureau International des Poids et Mesures), but it is even harder to change the number system, since that is already an international standard (a bit like what was tried with decimal time, which failed miserably since there already was a standard). So it is the measure system which must be adapted to the number system.

The problem is that, historically, the measure system was not coherent in base 12 or 20 or 60 or whatever, but a real mess:
https://commons.wikimedia.org/wiki/File:English_length_units_graph.png
So, in order for base-12 to be really useful, you need to first sort out the measures themselves, which basically requires inventing the metric system.

That is because natural systems are not standardized while the metric system is; that is an inherent advantage in physical applications and has nothing to do with the base it uses. I also agree with you that making a base-12 metric system would have been more troublesome back in the day, since most languages have base-10 names for numbers.
I merely referenced anecdotal base-12 natural systems as an (admittedly weak) support for my claim that systems that lend themselves well to ordinal representations of quantity were more intuitive for humans.
I would add that many numbers in physics are dimensionless quantities, making the actual weights and measures used pretty irrelevant.

For pure math, the choice of base is irrelevant. (Source: am a mathematician; computations in base 10, 2, 16, or p are really the same. We barely even write any numbers, actually.)

Well one can’t argue with that. That is why I stated:

Well yeah, when it comes to anything meaningful, base is meaningless. So when comparing bases we must focus on the things where their differences matter: everyday arithmetic, measures, commercial transactions, etc.

So we seem to agree on the meat of it, which is why I think the following talk:


https://www.youtube.com/watch?v=9MV65airaPA

is more pertinent to the OP since it does not talk about base at all but rather about different paths along which mathematics could have developed.



Siegler: https://app.box.com/s/ezxxwrga9z7jr8v628v3ta0fozh2u07m
Plodowski: https://app.box.com/s/q49eitxl1cta5mads333ak2nmtig5qjg
Rubstein: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3174157/
 
And on that note, while the most prominent uses of calculus might have to do with time differentials, it is also intimately concerned with the calculations of areas and volumes (with integrals) and dealing with limits, sequences, and series, both of which were topics of great interest to ancient mathematicians.

To have the fundamental theorem of calculus, you *need* a workable theory of derivatives. I don't believe that it is possible to infer derivatives starting from integrals (proof: nobody did it; on the contrary, derivatives were invented twice, then leading easily to integrals). While we are at it, I don't believe either that you can infer integrals alone starting from the measurement of surfaces or volumes (proof again: everybody missed it, starting from Archimedes and others; for most surfaces, all you needed was the basic formula for the area of a triangle + Euclidean equivalence of surfaces by way of cut-and-paste). As to the relation with limits and series, these are post-XVIIIth century (Euler or even Cauchy) concepts...
 
Anyway, I admit the thesis that humans think ordinally (by comparison) rather than cardinally (in absolute scales) was poorly argued in my previous post.

(btw, I could agree with this view, but I think there are actually three kinds of numbers, adding to the ordinal and cardinal the “fractional” numbers. In most languages I know the fractionals coincide with the ordinals, but is this really universal?)

The video talk is... long. Do you know of a text version anywhere?
 
(btw, I could agree with this view, but I think there are actually three kinds of numbers, adding to the ordinal and cardinal the “fractional” numbers. In most languages I know the fractionals coincide with the ordinals, but is this really universal?)
I never even considered that. Most articles I have read seem to consider an analogical system related to comparisons (ordinal), the number line (cardinal), and a third system related to verbalization… It is not hard to imagine a language where this third mechanism for number perception plays a more important role…

The video talk is... long. Do you know of a text version anywhere?
I usually have them as background noise so I haven’t looked, but let me see what I can find…


But as a brief summary,

It talks about how in the classical world some basic arithmetic operations were related to geometric concepts, like multiplication and the area of rectangles.

Then it shows how one can arrive at a "proof" of said operations by way of perspective geometry.

Then it speculates on how aliens could arrive at a justification of arithmetic rules independent of geometry, in a manner similar to Whitehead and Russell's system or to von Neumann's set theory.


EDIT: in OTL, Plato seemed close to the latter in the "Parmenides" dialogue.
 
To have the fundamental theorem of calculus, you *need* a workable theory of derivatives. I don't believe that it is possible to infer derivatives starting from integrals (proof: nobody did it; on the contrary, derivatives were invented twice, then leading easily to integrals).

While we are at it, I don't believe either that you can infer integrals alone starting from the measurement of surfaces or volumes (proof again: everybody missed it, starting from Archimedes and others; for most surfaces, all you needed was the basic formula for the area of a triangle + Euclidean equivalence of surfaces by way of cut-and-paste).
Nobody doing it is not proof that it is impossible. In any case, the point wasn't whether you could or couldn't arrive at the fundamental theorem starting from integral calculus, but that there was an obvious application of calculus, specifically integral calculus, in the computation of areas and volumes (a problem many people worked on a great deal over a long period), and yet it was not developed as early as one might have expected. That clearly shows that the mere existence of an application, even an important one, is not enough to explain why certain mathematics were developed and certain mathematics were not.

Also, integration was developed prior to differentiation, or at least in parallel. Reading through Leibniz's manuscripts, there are many references to recognizably integral methods of calculating areas and volumes of complicated objects such as surfaces of revolution by Huygens, Barrow, Cavalieri, and others, which simply hadn't been put together into a formal calculus yet. Maybe they could have used triangles and whatnot, but that didn't stop them from trying to find more elegant and analytical methods of computation, and it didn't stop ancient mathematicians. And when Leibniz started thinking about calculus, integration, not differentiation, was his starting point.

As to the relation with limits and series, these are post-XVIIIth century (Euler or even Cauchy) concepts...

Nope. Although they didn't call them series or limits, ancient (and of course later) mathematicians clearly dealt with them, in the so-called "method of exhaustion" or Archimedes' Method of Mechanical Theorems. Because they didn't have integral calculus, they generally had to deal with them in awkward ways, but they obviously came up.

The bigger problem seems, in my mind, to have been the disreputability of infinitesimals prior to the development of non-standard analysis (and even up to today). I strongly doubt anyone will come up with epsilon-delta notation or other analysis tools prior to the development of calculus, so the intuitive notion of the infinitesimal is needed to develop functioning techniques; but the idea, back to the Greeks, has been seen as somehow unreal, more so than even negative and imaginary numbers, so that instead of pursuing and developing the idea to the point where it is at least functional (a la calculus in the 18th century), people tended to back away from it towards geometry or something else of repute.
 
For anybody interested in the history of mathematics, your first stop is Carl Boyer's A History of Mathematics. Highly recommended.

About the original post: any base is equivalent to any other. Some bases are more amenable to mental calculation (10 = 2·5, while 60 = 2²·3·5, so you have more factors to play with), but it is, literally, six of one or half a dozen of the other.
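
To spell out the "more factors to play with" point, a quick illustrative check of which divisors each candidate base offers (the divisors determine which unit fractions come out "round"):

```python
# Divisors of a few candidate bases; more divisors means more fractions terminate.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for base in (10, 12, 60):
    print(base, divisors(base))
# 10 [1, 2, 5, 10]                                -> 1/2 and 1/5 are round in base 10
# 12 [1, 2, 3, 4, 6, 12]                          -> 1/3, 1/4 and 1/6 are round as well
# 60 [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```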
 
Obviously math in general is pretty constant: 2+2 will always equal 4 no matter what universe we're in.

Not in Orwell's universe, it doesn't.

Edit: as for a mathematical what-if, the Archimedes palimpsest would have had a huge impact on mathematics, if it had ended up in the hands of mathematicians who could use it instead of an Irish monk who couldn't understand any of it.
 
Also, integration was developed prior to differentiation, or at least in parallel. Reading through Leibniz's manuscripts, there are many references to recognizably integral methods of calculating areas and volumes of complicated objects such as surfaces of revolution by Huygens, Barrow, Cavalieri, and others, which simply hadn't been put together into a formal calculus yet. Maybe they could have used triangles and whatnot, but that didn't stop them from trying to find more elegant and analytical methods of computation, and it didn't stop ancient mathematicians. And when Leibniz started thinking about calculus, integration, not differentiation, was his starting point.

I would argue that the idea of differentiation was around before Newton and Leibniz, and that Newton and Leibniz just systematized this already existing idea and made the connection between antiderivatives and areas. Well, at least that's what I teach my calculus class.

To be honest, I've never read any of the primary sources and at least one of the secondary sources I've drawn my own story of math history from is a little suspect. So, please correct me if I'm wrong.

The bigger problem seems, in my mind, to have been the disreputability of infinitesimals prior to the development of non-standard analysis (and even up to today). I strongly doubt anyone will come up with epsilon-delta notation or other analysis tools prior to the development of calculus, so the intuitive notion of the infinitesimal is needed to develop functioning techniques; but the idea, back to the Greeks, has been seen as somehow unreal, more so than even negative and imaginary numbers, so that instead of pursuing and developing the idea to the point where it is at least functional (a la calculus in the 18th century), people tended to back away from it towards geometry or something else of repute.

I definitely want to develop a TL in which infinitesimals are seen as real mathematical objects and are taken more seriously. I'm eventually going to be adding an alternate development of calculus to donnacona's dream, but I feel that the same ideas which made infinitesimals suspect in OTL would still apply in any TL with a post-medieval POD.
 
I would argue that the idea of differentiation was around before Newton and Leibniz, and that Newton and Leibniz just systematized this already existing idea and made the connection between antiderivatives and areas. Well, at least that's what I teach my calculus class.

It would be most accurate to say that people were working on both integral and differential calculus at the same time. They wanted to find the tangent of a curve, a differentiation problem; they also wanted to find the curve given its tangents, an integral problem. They also wanted to calculate the volume and area of various objects in an analytical fashion, again an integral problem. People had been attacking all of these problems for some time, with increasingly sophisticated and successful methods in the 1600s that culminated in the development of the calculus and the realization that all of these problems were connected in a single mathematical framework.

That being said, the earliest entries in the manuscripts I linked concern the antiderivative problem, not the derivative problem, which is why I said Leibniz started from integration, not differentiation.
 
I have an unrelated question, regarding statistics. And it's something I've wondered about for nearly 10 years.

When I took statistics in college, I found the routine activity of squaring numbers to calculate standard deviations to be rather odd. I knew that the purpose was to ensure the numbers' deviations from the mean don't cancel each other out, but it seemed that it would be much simpler and much more effective if statisticians simply used the absolute values of numbers instead of squares. I asked all the stats teachers I could contact, and they didn't have an explanation for why absolute values are used.

Would statistics still be coherent if they used absolute values instead of squares? If not, why?
 
I have an unrelated question, regarding statistics. And it's something I've wondered about for nearly 10 years.

When I took statistics in college, I found the routine activity of squaring numbers to calculate standard deviations to be rather odd. I knew that the purpose was to ensure the numbers' deviations from the mean don't cancel each other out, but it seemed that it would be much simpler and much more effective if statisticians simply used the absolute values of numbers instead of squares. I asked all the stats teachers I could contact, and they didn't have an explanation for why absolute values are used.

Would statistics still be coherent if they used absolute values instead of squares? If not, why?

You can certainly use the average absolute deviation instead of the standard deviation as a measure of dispersion, but the absolute value function is not differentiable at zero, and since many statistical methods for continuous distributions require differentiation and integration, squared deviations are much more commonly used in statistics.
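
A minimal sketch of the two dispersion measures on a small made-up sample, just to show that both are perfectly coherent:

```python
# Standard deviation versus mean absolute deviation on a small sample.
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
dev = data - data.mean()                 # deviations from the mean; they sum to zero

std = np.sqrt(np.mean(dev ** 2))         # root of the mean squared deviation
mad = np.mean(np.abs(dev))               # mean absolute deviation

print(std)   # 2.0
print(mad)   # 1.5
```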
 
It would be most accurate to say that people were working on both integral and differential calculus at the same time. They wanted to find the tangent of a curve, a differentiation problem; they also wanted to find the curve given its tangents, an integral problem. They also wanted to calculate the volume and area of various objects in an analytical fashion, again an integral problem. People had been attacking all of these problems for some time, with increasingly sophisticated and successful methods in the 1600s that culminated in the development of the calculus and the realization that all of these problems were connected in a single mathematical framework.

That being said, the earliest entries in the manuscripts I linked concern the antiderivative problem, not the derivative problem, which is why I said Leibniz started from integration, not differentiation.

I think one of the reasons my own idea of the history of calculus always put differentiation first is that, in my head, antiderivatives are not the same concept as integrals. I would call the problem of finding the curve given its tangents, or finding position given velocity, an antidifferentiation problem rather than an integral problem. And I agree that these sorts of problems could easily come before differentiation problems (although I can't imagine antidifferentiation being developed without differentiation).

However, I would argue that we can't really talk about 'integration' until we have the Fundamental Theorem of Calculus. Integration is not the same thing as antidifferentiation to me (partly because I abhor the term 'indefinite integral' for an antiderivative; for me, 'integral' means 'definite integral'). For me, something is not an integration problem unless it can easily be thought of as a sum (specifically a Riemann sum), and the concept of an 'integral' means 'a summation problem that can be solved using antiderivatives'. Thus you can't really speak about integrals until you have already connected antiderivatives to the idea of areas/Riemann sums, so the whole idea of an integral is dependent on the Fundamental Theorem of Calculus.
 
I have an unrelated question, regarding statistics. And it's something I've wondered about for nearly 10 years.

When I took statistics in college, I found the routine activity of squaring numbers to calculate standard deviations to be rather odd. I knew that the purpose was to ensure the numbers' deviations from the mean don't cancel each other out, but it seemed that it would be much simpler and much more effective if statisticians simply used the absolute values of numbers instead of squares. I asked all the stats teachers I could contact, and they didn't have an explanation for why absolute values are used.

Would statistics still be coherent if they used absolute values instead of squares? If not, why?

Statistics would probably still be coherent, but there are certain reasons, mostly coming from other branches of math, why the sum of the squares of the differences is the best way of measuring 'distance' or 'variation'.

For example, when finding a line of best fit, we don't minimize the absolute values of the differences between the line and the data points; we minimize the squares of the differences (i.e., we take a least-squares regression line). This is partly because least-squares regression produces a better-looking line than least absolute values. But it's also because it has a nice interpretation in linear algebra: when you say the line of best fit is the one that minimizes the squares, you're actually saying it's literally the 'closest' line to the point corresponding to your data values in a suitable n-dimensional space.

Ok, a lot of this is getting off-topic, and maybe some of this discussion should be in 'chat'. But, I do think that the use of squares in statistics is an important generalization of the way we usually measure distance in n-dimensional space.
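
For illustration, a small sketch contrasting the two criteria on made-up data (the dataset, the outlier, and the use of scipy's Nelder-Mead minimizer are arbitrary choices):

```python
# Least-squares fit versus least-absolute-deviations fit on noisy data with one outlier.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=x.size)
y[-1] += 25                                   # a single outlier

def fit(loss):
    """Slope and intercept minimizing the given loss applied to the residuals."""
    objective = lambda p: loss(y - (p[0] * x + p[1]))
    return minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead").x

least_squares = fit(lambda r: np.sum(r ** 2))      # the usual regression line
least_abs_dev = fit(lambda r: np.sum(np.abs(r)))   # the absolute-value alternative

print(np.round(least_squares, 2))   # slope pulled toward the outlier
print(np.round(least_abs_dev, 2))   # more robust, but no orthogonal-projection picture
```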
 
Well, going back to it, there is at least one branch of mathematics which clearly ran far ahead of any potential application, and that is number theory, in particular Diophantine equations. I am very impressed that some people living in basically the Middle Ages were able to essentially compute the group of units of a real quadratic number field, for example. (Even though this particular example is vaguely related to practical applications, namely the systematic computation of rational approximations of square roots, most Diophantine problems were nothing more than a way for brilliant people to measure themselves against their pen-pals.)

Only at the end of the XXth century did the applications start catching up to the theory in this field (and it is still an ongoing process).
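
For a concrete taste of what that medieval computation amounts to, here is an illustrative sketch finding the fundamental solution of Pell's equation x² - D·y² = 1 (each solution gives a unit x + y√D and an excellent rational approximation x/y of √D); the continued-fraction method below is the compact modern route, whereas the chakravala method was used historically.

```python
# Fundamental solution of Pell's equation x^2 - D*y^2 = 1 via the continued
# fraction expansion of sqrt(D); its convergents p/q are tested one by one.
from math import isqrt

def pell(D):
    a0 = isqrt(D)
    if a0 * a0 == D:
        raise ValueError("D must not be a perfect square")
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # numerators of the convergents of sqrt(D)
    q_prev, q = 0, 1           # denominators of the convergents
    while p * p - D * q * q != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return p, q

x, y = pell(61)
print(x, y)      # 1766319049 226153980 -- the famously large answer for D = 61
print(x / y)     # ~7.8102497, an excellent approximation of sqrt(61)
```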
 
Would statistics still be coherent if they used absolute values instead of squares? If not, why?

This thread is turning into math.stackexchange. I love it :)

There are several explanations for the use of squares in this case. A short list (I do not claim to be anywhere near exhaustive). For simplicity, I assume that the x_i are a statistical sample with zero mean.

1. From probability: one of the most natural possible distributions is the Gaussian distribution, given by the density function exp(-x²/s²) (up to normalization). A common job for a statistician is to recover the dispersion coefficient s. This is easy since it corresponds, up to a fixed constant, to the standard deviation. Using absolute values, this is much harder (for example, in dimension 1 there is already a parasitic factor of √π; it is uglier in higher dimensions, because of reason 2).

2. From geometry: using squares basically means computing ∑ xᵢ², which corresponds to the equation of a Euclidean ball. Such an object is nice and has lots of symmetries (rotations), and these have practical uses (for example, Fourier series). On the other hand, ∑ |xᵢ| defines an "L¹-ball", which is a much less useful object (it is diamond-shaped and only has a finite symmetry group).

3. From analysis: imagine that your data series are the outputs of an unknown function f. The truncated Taylor expansion of f at zero is something like
f(t) = f(0) + t f'(0) + t² f''(0)/2 + ...
Using squares is a way to estimate the second derivative of f. (This link is not completely hand-waved; it is related, for example, to Laplace's method, and ISTR that you can prove the central limit theorem with it.)
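
A quick numerical illustration of point 1 (using the usual N(0, σ²) parametrization, so the correction constant shows up as √(2/π)):

```python
# For Gaussian data, the squared statistic recovers the dispersion parameter
# directly, while the absolute-value statistic needs a sqrt(2/pi) correction.
import numpy as np

rng = np.random.default_rng(42)
sigma = 3.0
x = rng.normal(0.0, sigma, size=1_000_000)       # zero-mean sample, as assumed above

rms = np.sqrt(np.mean(x ** 2))                   # ~ sigma
mean_abs = np.mean(np.abs(x))                    # ~ sigma * sqrt(2/pi) ~ 2.39

print(rms)
print(mean_abs, mean_abs * np.sqrt(np.pi / 2))   # corrected back to ~ sigma
```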
 