Channel: Real life maths – IB Maths Resources from Intermathematics

Does Sacking a Manager Improve Results?


[Image: premier league regression]

In sports leagues around the world, managers are often only a few bad results away from the sack – but is this all down to a misunderstanding of statistics?

According to the Guardian, in the 21-year history of the Premier League, approximately 140 managers have been sacked. In recent years the job has become ever more precarious – 12 managers lost their jobs in 2013, and 20 managers in the top flight have been shown the door in the last 2 years. Indeed, there are now only three Premier League managers who have held their position for more than 2 years (Arsene Wenger, Sam Allardyce and Alan Pardew).

Owners appear attracted to the idea that a new manager can bring a sudden improvement in results – and indeed most casual observers of football would agree that new managers often seem to pull out some good initial results. But according to Dutch economist Dr Bas ter Weel this is just a case of regression to the mean – if a team has been underperforming relative to their abilities then over the long run we would expect them to improve to get closer to the mean value.

As the BBC reported:

“Changing a manager during a crisis in the season does improve the results in the short term,” Dr Bas ter Weel says. “But this is a misleading statistic because not changing the manager would have had the same result.”

Ter Weel analysed managerial turnover across 18 seasons (1986-2004) of the Dutch premier division, the Eredivisie. As well as looking at what happened to teams who sacked their manager when the going got tough, he looked at those who had faced a similar slump in form but who stood by their boss to ride out the crisis.

He found that both groups faced a similar pattern of declines and improvements in form.

Looking at the graph at the top of the page, it is clear that sacking a manager may have appeared to lead to an improvement in results – but that actually, had the manager not been sacked, results would have been even better!

We can understand regression to the mean better by considering coin tosses as a crude model for football games (ignoring draws).   If we get a head the team wins, if we get a tail the team loses.   So this is a distinctly average team – which over a season we would expect to finish around mid-table.  However over that season they will have “good runs” and “bad runs.”

[Image: regression]

The graphic above is the result of 38 coin tosses (the length of a Premier League season).  Even though it’s the result of random throws you can see a run of 6 wins in a row – a good run.  There’s also a run of 8 defeats and only 2 wins in 10 games – which would have more than a few chairmen thinking about getting a new manager.
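A coin-toss “season” like this is easy to simulate – here is a minimal Python sketch (the random seed and the helper function are my own choices, not from the article):

```python
import random

random.seed(1)

# Simulate a 38-game season for a perfectly average team:
# heads = win ("W"), tails = loss ("L") - draws ignored, as in the text.
results = [random.choice("WL") for _ in range(38)]

def longest_run(results, outcome):
    """Length of the longest unbroken run of the given outcome."""
    best = current = 0
    for r in results:
        current = current + 1 if r == outcome else 0
        best = max(best, current)
    return best

print("".join(results))
print("Longest winning run:", longest_run(results, "W"))
print("Longest losing run:", longest_run(results, "L"))
```

Re-running with different seeds shows that long streaks turn up routinely in pure chance.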

Being aware of regression to the mean – i.e. that over the long term results tend towards the mean – would help owners to have greater confidence in riding out “bad runs”, and might keep a few more managers in their jobs.


The Riemann Sphere


[Image: Joh-RiemannSphere01]


The Riemann Sphere is a fantastic glimpse of where geometry can take you when you escape from the constraints of Euclidean geometry – the geometry of circles and lines taught at school.  Riemann, the 19th-century German mathematician, devised a way of representing every point on a plane as a point on a sphere.  He did this by first centering a sphere on the origin – as shown in the diagram above.  Next he took a point on the complex plane (z = x + iy) and joined this point to the North pole of the sphere (marked W).  This created a straight line which intersected the sphere at a single point on its surface (marked z’).  Therefore every point on the complex plane (z) can be represented as a unique point on the sphere (z’) – in mathematical language, there is a one-to-one mapping between the two.  The only point on the sphere which does not correspond to a point on the complex plane is the North pole itself (W): the one line through W that meets the sphere at no other point runs parallel to the complex plane, and so never reaches it.  Therefore Riemann assigned the value of infinity to the North pole, and so the sphere is a one-to-one mapping of all the points in the complex plane together with infinity.
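The projection can be written down explicitly. Here is a short sketch, assuming a unit sphere centred at the origin with North pole W = (0, 0, 1) (the diagram may use a different radius; the function names are my own):

```python
def plane_to_sphere(z):
    """Stereographic projection of a complex number z onto the unit sphere,
    projecting from the North pole W = (0, 0, 1)."""
    d = abs(z) ** 2 + 1
    return (2 * z.real / d, 2 * z.imag / d, (abs(z) ** 2 - 1) / d)

def sphere_to_plane(p):
    """Inverse map: a sphere point (x, y, t) with t != 1 back to the plane."""
    x, y, t = p
    return complex(x / (1 - t), y / (1 - t))

z = complex(3, 4)
p = plane_to_sphere(z)
print(p)                    # a point on the unit sphere
print(sphere_to_plane(p))   # recovers the original z, so the map is one-to-one
```

As t approaches 1 (the North pole) the inverse map blows up – exactly the “point at infinity” described above.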

[Image: Riemann 2]

So what does this new way of representing the two-dimensional (complex) plane actually allow us to see?  Well, it turns our conventional notions about “straight” lines on their head.  A straight line on the complex plane is projected to a circle going through North on the Riemann sphere (as illustrated above).  Because North itself represents the point at infinity, this allows a line of infinite length to be represented on the sphere.

[Image: riemann sphere]

Equally, a circle drawn on the Riemann sphere not passing through North will project to a circle in the complex plane (as shown in the diagram above).  So, on the Riemann sphere – which remember is isomorphic (mathematically identical) to the extended complex plane – straight lines and circles differ only in their position on the sphere’s surface.  And this is where it starts to get really interesting: when two spaces are isomorphic in this way, an inhabitant has no way of knowing which one is his own reality.  For a two-dimensional being living on a Riemann sphere, travel along what he regarded as straight lines would in fact follow geodesics (curves joining A and B on the sphere with minimum distance).

By the same logic, our own 3-dimensional reality is isomorphic to the projection onto a 4-dimensional sphere (hypersphere) – and so our 3-dimensional universe is indistinguishable from a curved 3D space which is the surface of a hypersphere.  This is not just science fiction – indeed Albert Einstein suggested this as a possible explanation for the structure of the universe.  Such a scenario would allow there to be an infinite number of 3D universes floating in the 4th dimension – each bounded by the surface of its own personal hypersphere.  Now that’s a bit more interesting than the Euclidean world of straight lines and circle theorems.

If you liked this you might also like:

Imagining the 4th Dimension. How mathematics can help us explore the notion that there may be more than 3 spatial dimensions.

The Riemann Hypothesis Explained. What is the Riemann Hypothesis – and how solving it can win you $1 million

Are You Living in a Computer Simulation? Nick Bostrom uses logic and probability to make a case about our experience of reality.

The Mathematics of Bluffing



This post is based on the fantastic PlusMaths article on bluffing – which is a great introduction to this topic.  If you’re interested then it’s well worth a read.  This topic shows the power of mathematics in solving real-world problems – and combines a wide variety of ideas and methods: probability, game theory, calculus, psychology and graphical analysis.

You would probably expect that there is no underlying mathematical strategy for good bluffing in poker – indeed that a good bluffing strategy would be completely random so that other players are unable to spot when a bluff occurs.  However it turns out that this is not the case.

As explained by John Billingham in the PlusMaths article, when considering this topic it helps to really simplify things first.  So rather than a full poker game we instead consider a game with only 2 players and only 3 cards in the deck (1 Ace, 1 King, 1 Queen).

The game then plays as follows:
1) Both players pay an initial £1 into the pot.
2) The cards are dealt – with each player receiving 1 card.
3) Player 1 looks at his card and can either:
(a) check, or
(b) bet an additional £1.
4) Player 2 then responds:
(a) If Player 1 has checked, Player 2 must also check.  This means both cards are turned over and the highest card wins.
(b) If Player 1 has bet £1 then Player 2 can either match (call) that £1 bet or fold.  If the bets are matched then the cards are turned over and the highest card wins.

So, given this game, what should the optimal strategy be for Player 1? An Ace will always win a showdown and a Queen will always lose – but if you have a Queen and bet, then your opponent, who may only have a King, might decide to fold, thinking you actually have an Ace.

In fact the optimal strategy makes use of Game Theory – which can mathematically work out exactly how often you should bluff:

[Image: poker2]

This tree diagram represents all the possible outcomes of the game.  The first branch at the top represents the 3 possible cards that Player 2 can be dealt (A, K, Q), each of which has a probability of 1/3.  The second branch represents the remaining 2 possible cards that Player 1 can hold – each with probability 1/2.  The numbers at the bottom of the branches represent the potential gain or loss from betting strategies for Player 2 – calculated by comparing the profit/loss relative to if both players had simply shown their cards at the beginning of the game.

For example, Player 2 has no way of winning any money with a Queen – and this is represented by the left branch £0, £0.  Player 2 will always win with an Ace.  If Player 1 has a Queen and bluffs then Player 2 will call the bet and so will have gained an additional £1 of his opponent’s money relative to an initial game showdown (represented by the red branch).  Player 1 will always check with a King (as were he to bet then Player 2 would always call with an Ace and fold with a Queen) and so the AK branch also has a £0 outcome relative to an initial showdown.

So, the only decisions the game boils down to are:

1) Should Player 1 bluff with a Queen? (Represented with a probability of b on the tree diagram.)
2) Should Player 2 call with a King? (Represented with a probability of c on the tree diagram.)

Now it’s simply a case of adding the separate branches of the tree diagram to find the expected value for Player 2.

The right-hand branches (for AQ and AK), for example, give:

(1/3) . (1/2) . b . (1)
(1/3) . (1/2) . (1 – b) . (0)
(1/3) . (1/2) . (0)

So, working out all branches gives:

Expected Value for Player 2 = 0.5b(c-1/3) – c/6
Expected Value for Player 1 = -0.5b(c-1/3) + c/6

(Player 1's Expected Value is simply the negative of Player 2's. This is because if Player 2 wins £1 then Player 1 must have lost £1). The question is what value of b (Player 1 bluff) should be chosen by Player 1 to maximise his earnings?  Equally, what is the value of c (Player 2 call) that maximises Player 2's earnings?

It is possible to analyse these equations numerically to find the optimal values (this method is explained in the article), but it’s more mathematically interesting to investigate both the graphical and calculus methods.
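As a sketch of that numerical approach, we can evaluate the expected-value formula over a grid of b and c values and pick out each player's safest strategy (a simple minimax search; the 0.01 grid step is an arbitrary choice of mine):

```python
def ev_player2(b, c):
    """Expected value per hand for Player 2: 0.5*b*(c - 1/3) - c/6."""
    return 0.5 * b * (c - 1/3) - c / 6

grid = [i / 100 for i in range(101)]   # candidate probabilities 0.00 ... 1.00

# Player 1 picks b to minimise Player 2's best-case EV (minimax);
# Player 2 picks c to maximise his own worst-case EV (maximin).
best_b = min(grid, key=lambda b: max(ev_player2(b, c) for c in grid))
best_c = max(grid, key=lambda c: min(ev_player2(b, c) for b in grid))

print(best_b, best_c)   # both end up close to 1/3
```

The grid search lands on the same answer as the analysis below: both probabilities settle near 1/3.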

Graphically we can solve this problem by creating 2 equations in 3D:

z = 0.5xy – x/6 – y/6

[Image: poker7]

z = -0.5xy + x/6 + y/6

[Image: poker8]

In both graphs we have a “saddle” shape – with the saddle point at x = 1/3 and y = 1/3.  This can be calculated using Wolfram Alpha. At the saddle point we have what is known in Game Theory as a Nash equilibrium – it represents the best possible strategy for both players.   Deviation away from this stationary point by one player allows the other player to increase their Expected Value.

Therefore the optimal strategy for Player 2 is calling with precisely c = 1/3, as this minimises his losses to -c/6 = -£1/18 per hand.  The same logic looking at the Expected Value for Player 1 also gives b = 1/3 as an optimal strategy.  Player 1 therefore has an expected value of +£1/18 per hand.

We can arrive at the same conclusion using calculus – and partial derivatives.

z = 0.5xy-x/6 – y/6

For this equation we find the partial derivative with respect to x (which simply means differentiating with respect to x and treating y as a constant):

zx = 0.5y – 1/6

and also the partial derivative with respect to y (differentiate with respect to y and treat x as a constant):

zy = 0.5x – 1/6

We then set both of these equations to 0 and solve to find any stationary points.

0 = 0.5y – 1/6
0 = 0.5x – 1/6
x = 1/3, y = 1/3

We can then see that this is a saddle point by using the formula:

D = zxx . zyy – (zxy)²

(where zxx means the partial 2nd derivative with respect to x and zxy means the partial derivative with respect to x followed by the partial derivative with respect to y. When D < 0 then we have a saddle point).

This gives us:

D = 0 . 0 – (0.5)² = -0.25

As D < 0 we have a saddle point – and the optimal strategy for both players is c = 1/3 and b = 1/3.
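The same saddle-point test can be checked numerically with central finite differences (the step size h is an arbitrary choice; the exact partial derivatives would of course do the same job):

```python
def z(x, y):
    """Player 2's expected value surface."""
    return 0.5 * x * y - x / 6 - y / 6

h = 1e-5
x0 = y0 = 1 / 3

# First partial derivatives - both should vanish at the stationary point
zx = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)
zy = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)

# Second partials for the saddle-point test D = zxx*zyy - zxy^2
zxx = (z(x0 + h, y0) - 2 * z(x0, y0) + z(x0 - h, y0)) / h ** 2
zyy = (z(x0, y0 + h) - 2 * z(x0, y0) + z(x0, y0 - h)) / h ** 2
zxy = (z(x0 + h, y0 + h) - z(x0 + h, y0 - h)
       - z(x0 - h, y0 + h) + z(x0 - h, y0 - h)) / (4 * h ** 2)

D = zxx * zyy - zxy ** 2
print(zx, zy, D)   # zx and zy near 0, D near -0.25: a saddle point
```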

We can change the rules of the game to see how this affects the strategy.  For example, if the rules remain the same except that players now must place a £1.50 bet (with the initial £1 entry still intact) then we get the following equation:

Player 2 Expected Value = (b/12)(7c – 1) – 3c/12

This has a saddle point at b = 3/7, c = 1/7.  So the optimal strategy is bluffing 3/7 of the time and calling 1/7 of the time.  If Player 2 calls more often than 1/7 then Player 1 should never bluff (b = 0), leaving Player 2 with a negative Expected Value.  If Player 2 calls less often than 1/7 then Player 1 should always bluff (b = 1).

If you enjoyed this you might also like:

The Gambler’s Fallacy and Casino Maths - using maths to better understand casino games

Game Theory and Tic Tac Toe - using game theory to understand games such as noughts and crosses

Investigation into the Amazing e


[Image: leonard]

e’s are good – He’s Leonhard Euler.

e is the number 2.718281828459045… (often remembered via the grouping 2.7 1828 1828 45 90 45).  It is an irrational number – i.e. it can’t be written as a fraction of integers – and its decimal expansion carries on forever.  Along with π it is one of the most important constants in mathematics.

Leonhard Euler

e is sometimes named after Leonhard Euler (Euler’s number).  He wasn’t the first mathematician to discover e – but he was the first to publish a paper using it.  Euler is not especially well known outside of mathematics, yet he is undoubtedly one of the truly great mathematicians.  He published over 800 mathematical papers on everything from calculus to number theory to algebra and geometry.

Why is e so important? 

Lots of functions in real life display exponential growth.  One example is the chessboard and rice problem, (if I have one grain of rice on the first square, two on the second, how many will I have on the 64th square?) This famous puzzle demonstrates how rapidly numbers grow with exponential growth.

If you sketch

y = 2^x
y = e^x
y = 3^x

for x between 0 and 3, you can see that y = e^x lies between y = 2^x and y = 3^x on the graph.  So why is e so much more useful than these other bases?  By graphical methods you can find the gradient of each curve where it crosses the y axis – and only for e is this gradient exactly 1.  Better still, the derivative of e^x is still e^x – which makes it really useful in calculus.
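A quick numerical check of this special property – estimating the gradient of a^x where each graph crosses the y axis, for a = 2, e and 3 (the step size h is an arbitrary choice):

```python
import math

h = 1e-6   # small step for a central-difference gradient estimate
for a in (2, math.e, 3):
    gradient = (a ** h - a ** (-h)) / (2 * h)   # slope of y = a^x at x = 0
    print(f"a = {a:.4f}: gradient at x = 0 is {gradient:.4f} (ln a = {math.log(a):.4f})")
```

The gradient at x = 0 is ln a, so it equals 1 – matching the function's own value there – only when a = e.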

The beauty of e.

e appears in a host of different and unexpected mathematical contexts, from probability models like the normal distribution, to complex numbers and trigonometry.

Euler’s Identity is frequently voted the most beautiful equation of all time by mathematicians: it links 5 of the most important constants in mathematics together in a single equation.

e^{i\pi}+1=0

Infinite fraction: e can be represented as an infinite continued fraction – can you spot the pattern?  The highlighted terms run 2, then 1, 2, 1, then 1, 4, 1, then 1, 6, 1 and so on.

e=2+\cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{4+\ddots}}}}}=1+\cfrac{1}{0+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{4+\ddots}}}}}}}

Infinite sum of factorials: e can also be represented as the infinite sum of factorials:

e=\sum_{n=0}^\infty \frac{1}{n!}

A limit: e can also be derived as the following limit.  It was this limit that Jacob Bernoulli investigated – and he is in fact credited with the first discovery of the constant.

\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n
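Both representations are easy to verify numerically – a short sketch comparing the factorial sum (truncated at n = 20) with Bernoulli's limit (the cut-offs are arbitrary choices of mine):

```python
import math

# e as the infinite sum of reciprocal factorials, truncated at n = 20
series = sum(1 / math.factorial(n) for n in range(21))

# e as Bernoulli's limit (1 + 1/n)^n, for a large but finite n
n = 1_000_000
limit = (1 + 1 / n) ** n

print(series)   # agrees with e to machine precision
print(limit)    # still noticeably off - the limit converges much more slowly
```

The factorial series converges extremely fast, while the limit form needs enormous n for even a few decimal places – a nice illustration of why the series is the practical way to compute e.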

Complex numbers and trigonometry: e can be used to link both trigonometric identities and complex numbers:

e^{ix}=\cos x+i\sin x

You can explore more of the mathematics behind the number e here.

If you enjoyed this post you might also like:

Ramanujan’s Beauty in Mathematics Some of the amazingly beautiful equations of Ramanujan.

Differential Equations in Real Life



Real life use of Differential Equations

Differential equations have a remarkable ability to predict the world around us.  They are used in a wide variety of disciplines, including biology, economics, physics, chemistry and engineering.  They can describe exponential growth and decay, the population growth of species or the change in investment return over time.  A differential equation is one which is written in the form dy/dx = ……….  Some of these can be solved (to get y = …..) simply by integrating; others require much more complex mathematics.

Population Models

One of the most basic examples of a differential equation is the Malthusian law of population growth, dp/dt = rp, which shows how the population (p) changes with respect to time.  The constant r will change depending on the species.  Malthus used this law to predict how a species would grow over time.
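The Malthusian law is simple enough to check directly – here is a sketch comparing a step-by-step (Euler method) integration of dp/dt = rp against the exact solution p(t) = p0·e^(rt); the parameter values are illustrative, not from any real species:

```python
import math

# dp/dt = r*p, integrated with small Euler steps and compared with the
# exact solution p(t) = p0 * e^(r*t).
r, p0, t_end, dt = 0.1, 100.0, 10.0, 0.001

p = p0
for _ in range(int(t_end / dt)):
    p += r * p * dt            # one Euler step: dp = r*p*dt

exact = p0 * math.exp(r * t_end)
print(p, exact)                # the two values agree closely
```

Shrinking dt brings the numerical answer ever closer to the exponential solution, which is why e^(rt) is *the* solution of this equation.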

More complicated differential equations can be used to model the relationship between predators and prey.  For example, as predators increase then prey decrease as more get eaten. But then the predators will have less to eat and start to die out, which allows more prey to survive.  The interactions between the two populations are connected by differential equations.

[Image: differential2]

The picture above is taken from an online predator-prey simulator.  This allows you to change the parameters (such as predator birth rate, predator aggression and predator dependence on its prey).  You can then model what happens to the 2 species over time.  The graph above shows the predator population in blue and the prey population in red – and is generated when the predator is both very aggressive (it will attack the prey very often) and also very dependent on the prey (it can’t get food from other sources).  As you can see, this particular relationship generates a population boom and crash – the predator population grows rapidly as it eats the prey, before running out of prey and, with no other food source, dying off again.
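A predator-prey interaction of this kind can be sketched with the classic Lotka–Volterra equations: dx/dt = ax − bxy for the prey and dy/dt = cxy − dy for the predators. The parameter values below are illustrative choices of mine, not those used by the online simulator:

```python
# Euler-method integration of the Lotka-Volterra predator-prey equations.
a, b, c, d = 1.0, 0.1, 0.02, 0.5   # prey growth, predation, conversion, predator death
x, y = 40.0, 9.0                   # initial prey and predator populations
dt, steps = 0.001, 50_000

history = []
for i in range(steps):
    dx = (a * x - b * x * y) * dt      # prey: grow, but get eaten
    dy = (c * x * y - d * y) * dt      # predators: grow by eating, else die off
    x, y = x + dx, y + dy
    if i % 5000 == 0:
        history.append((round(x, 1), round(y, 1)))

print(history)   # the two populations rise and fall in linked cycles
```

Plotting the history shows the familiar linked oscillations: the predator peaks lag behind the prey peaks, just as in the simulator's graphs.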

[Image: differential 2]

The graph above shows what happens when you reach an equilibrium point – in this simulation the predators are much less aggressive, and both species maintain stable populations.

[Image: differential 3]

There are also more complex predator-prey models – like the one shown above for the interaction between moose and wolves.  This has more parameters to control.  The above graph shows almost-periodic behaviour in the moose population with a largely stable wolf population.

Some other uses of differential equations include:

1) In medicine for modelling cancer growth or the spread of disease
2) In engineering for describing the movement of electricity
3) In chemistry for modelling chemical reactions
4) In economics to find optimum investment strategies
5) In physics to describe the motion of waves, pendulums or chaotic systems.

With such ability to describe the real world, being able to solve differential equations is an important skill for mathematicians.  If you want to learn more, you can read about how to solve them here.

If you enjoyed this post, you might also like:

Langton’s Ant – Order out of Chaos How computer simulations can be used to model life.

Does it Pay to be Nice? Game Theory and Evolution. How understanding mathematics helps us understand human behaviour

 

Premier League Wages Predict League Positions?


[Image: wage bill]

Is there a correlation between Premier League wages and league position?

The Guardian has just released its 2012-13 Premier League season data analysis – which shows exactly how much each club in the Premier League spent on wages last year (see the bar chart above).  This can be easily plotted on a scatter graph to test how strong the correlation is between spending and league position (the y axis is league position, the x axis is wage bill in millions of pounds).

[Image: scatter1]

The mean spending on wages is 89 million pounds.  Our regression line is y = -0.08x + 17.52.  We can see some of the big outliers are QPR (with a big wage bill but low premier league position) and Everton (with a low wage bill relative to others who finished in a similar position).

The Pearson’s product moment correlation coefficient (r) is -0.73.  This is negative because in our case league position is numerically lower the higher up the league you are.  This shows a pretty strong correlation between league spending and league position.  An r value of -1 would be a perfect correlation in our case, whereas 0 would be no correlation.

Is there a correlation between turnover and league position?

[Image: turnover]

We can also see what the correlation is between league position and overall club turnover (see the bar chart above).  Here we can see there is a huge gulf between the top few clubs and everyone else in the league.  There is only a £40 million difference between Wigan, the bottom-ranked club for revenue, and Newcastle, which has the 7th biggest revenue – but then a massive jump up to the clubs with the top 6 revenues.

[Image: scatter2]

This time we have a mean turnover of 128 million pounds and a regression line of y = -0.05x + 16.89.   The Pearson’s r value this time is r = -0.79, so there is a slightly stronger correlation than from wages – and this is a strong correlation overall.  So, both wage bills and turnover provide a pretty good predictor of where a team will finish – and also a decent yardstick to measure how well a team has done relative to their resources.
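For readers who want to reproduce this kind of analysis, here is a sketch of the Pearson's r and regression-line calculation – the wage and position figures below are made up for illustration, not the real Guardian data:

```python
# Illustrative (invented) data: wage bills in millions vs league position.
wages = [200, 160, 150, 120, 90, 75, 60, 55, 50, 45]
position = [1, 3, 2, 4, 6, 5, 8, 7, 10, 9]

n = len(wages)
mean_w = sum(wages) / n
mean_p = sum(position) / n

# Sums of squared deviations and the cross-term
cov = sum((w - mean_w) * (p - mean_p) for w, p in zip(wages, position))
var_w = sum((w - mean_w) ** 2 for w in wages)
var_p = sum((p - mean_p) ** 2 for p in position)

r = cov / (var_w * var_p) ** 0.5     # Pearson's product moment correlation
slope = cov / var_w                  # least-squares regression gradient
intercept = mean_p - slope * mean_w

print(f"r = {r:.2f}, regression line: y = {slope:.3f}x + {intercept:.2f}")
```

As in the article, r comes out negative: higher wages go with numerically lower (i.e. better) league positions.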

If you like this post you might also like:

Does Sacking a Manager Improve Results? How an improvement in team results is often just down to a statistical result – regression to the mean.

Maths Studies IA Exploration Topics – A large number of examples of statistics investigations to explore.

Modelling Infectious Diseases


[Image: SIR MODEL]


Using mathematics to model the spread of diseases is an incredibly important part of preparing for potential new outbreaks.  As well as providing information to health workers about the levels of vaccination needed to protect a population, it also helps govern first response actions when new diseases potentially emerge on a large scale (for example, Bird flu, SARS and Ebola have all merited much study over the past few years).

The basic model is based on the SIR model – this is represented by the picture above (from Plus Maths which has an excellent and more detailed introduction to this topic).  The SIR model looks at how much of the population is susceptible to infection, how many of these go on to become infectious, and how many of these go on to recover (and in what timeframe).

[Image: SIR MODEL2]

Another important parameter is R0, defined as the number of people an infectious person will pass the infection on to in a totally susceptible population.  Some R0 values for different diseases are shown above.  This shows how an airborne infection like measles is very infectious – and how malaria is exceptionally hard to eradicate, because infected people act almost like a viral storage bank for mosquitoes.

One simple bit of maths can predict what proportion of the population needs to be vaccinated to prevent the spread of viruses.  The formula is:

VT = 1 – 1/R0

Where VT is the proportion of the population who require vaccination. In the case of something like the HIV virus (with an R0 value of between 2 and 5), you would only need to vaccinate a maximum of 80% of the population.  Measles however requires around 95% vaccination.  This method of protecting the population is called herd immunity.
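The threshold formula is a one-liner to apply – a quick sketch using the R0 values mentioned in the text (2 to 5 for HIV, around 15 for measles):

```python
def vaccination_threshold(r0):
    """Proportion of the population needing vaccination: VT = 1 - 1/R0."""
    return 1 - 1 / r0

# R0 values taken from the text above
for disease, r0 in [("HIV (low estimate)", 2), ("HIV (high estimate)", 5), ("measles", 15)]:
    print(f"{disease}: R0 = {r0}, vaccinate at least {vaccination_threshold(r0):.0%}")
```

The higher R0 is, the closer the required coverage creeps towards 100% – which is why measles outbreaks return so quickly when vaccination rates slip.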

[Image: SIR MODEL 3]

The graphic above shows how herd immunity works.  In the first scenario no members of the population are immunised, and that leads to nearly all the population becoming ill – but in the third scenario, enough members of the population are immunised to act as buffers against the spread of the infection to non-immunised people.

 \frac{dS}{dt} = - \beta I S
 \frac{dI}{dt} = \beta I S - \nu I
 \frac{dR}{dt} = \nu I

The equations above represent the simplest SIR (susceptible, infectious, recovered) model – though it is still somewhat complicated!

dS/dt represents the rate of change of those who are susceptible to the illness with respect to time.  dI/dt represents the rate of change of those who are infected with respect to time.  dR/dt represents the rate of change of those who have recovered with respect to time.

For example, if dI/dt is high then the number of people becoming infected is rapidly increasing.  When dI/dt is zero then there is no change in the numbers of people becoming infected (number of infections remain steady).  When dI/dt is negative then the numbers of people becoming infected is decreasing.

The constants β and ν are chosen depending on the type of disease being modelled.  β represents the contact rate – which is how likely someone will get the disease when in contact with someone who is ill.  ν is the recovery rate, which is how quickly people recover (and become immune).

ν can be calculated by the formula:

D = 1/ν

where D is the duration of infection.

β can then be calculated if we know R0 by the formula:

R0 = β/ν

Modelling measles

So, for example, with measles we have an average infectious period of about a week (so, if we want to work in days, 7 = 1/ν and therefore ν = 1/7 ≈ 0.14).  If we then take R0 = 15 then:

R0 = β/ν
15 = β/0.14
β = 2.14

Therefore our 3 equations for rates of change become:

dS/dt = -2.14 I S

dI/dt = 2.14 I S – 0.14 I

dR/dt = 0.14 I

Unfortunately these equations are very difficult to solve – but luckily we can use a computer program to plot what happens.   We need to assign starting values for S, I and R – the numbers of people susceptible, infectious, recovered (immune) from measles.  Let’s say we have a total population of 11 people – 10 who are susceptible, 1 who is infected and 0 who are immune.  This gives the following outcome:

[Image: SIR model 5]

This shows that the infection spreads incredibly rapidly – by day 2, 8 people are infected.  By day 10 most people are immune but the illness is still in the population, and by day 30 the entire population is immune and the infection has died out.
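A computer program for this needn't be complicated – here is a minimal Euler-method sketch of the measles model above (β = 2.14, ν = 0.14, population of 11), with the step size as an arbitrary choice of mine:

```python
# Euler-method integration of the measles SIR model:
# dS/dt = -beta*I*S, dI/dt = beta*I*S - nu*I, dR/dt = nu*I.
beta, nu = 2.14, 0.14
S, I, R = 10.0, 1.0, 0.0        # susceptible, infected, recovered
dt = 0.001                      # step size in days

day2 = None
for step in range(int(30 / dt)):        # simulate 30 days
    dS = -beta * I * S * dt
    dI = (beta * I * S - nu * I) * dt
    dR = nu * I * dt
    S, I, R = S + dS, I + dI, R + dR
    if day2 is None and (step + 1) * dt >= 2:
        day2 = (S, I, R)

print("Day 2:", day2)           # most of the population is already infected
print("Day 30:", (S, I, R))     # the infection has died out; nearly all recovered
```

Running it reproduces the shape described above: around 8 people infected by day 2, and essentially the whole population immune by day 30.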

[Image: SIR model 6]

An illustration of just how rapidly measles can spread is provided by the graphic above.  This time we start with a population of 1000 people and only 1 infected individual – but even now, within 5 days over 75% of the population are infected.

[Image: SIR model 7]

This last graph shows the power of herd immunity.  This time there are 100 susceptible people, but 900 people are recovered (immune), and there is again one infectious person.  This time the infection never takes off in the community – those who are already immune act as a buffer against infection.

If you enjoyed this post you might also like:

Differential Equations in Real Life – some other uses of differential equations in modelling predator-prey relationships between animal populations.

Championship Wages Predict League Position?



Following on from the data released a couple of weeks ago about Premier League clubs’ financial data, the data from the Championship (England’s second tier) have just been published by the Guardian.  These are from 12 months ago (the most recent data available).  The Championship is famously very competitive – so it will be interesting to see if the same wages and league position correlation that we see in the Premier League also holds here.

Wage Bill 2012-13 Season (millions of pounds)

[Image: football2]

Using an online scatter plot program we can get the following graph:

[Image: football4]

Here the league position is on the x axis and the wage bill is on the y axis. In this case if there was a correlation between wages paid and league position we would expect the slope to be negative (as greater wages would lead to a lower league position).  From the graph we can see a pretty weak correlation:

Correlation coefficient (r): -0.345

Regression line equation: y=24.59-0.43x

The correlation coefficient shows that there is a weak negative correlation.

If we compare this to the scatter graph for the Premier League for the same period (this time the wages are plotted on the y axis – though this will not affect the calculations!)

[Image: scatter1]

We can see a stark difference.  This time the correlation coefficient is -0.73, which shows a pretty strong negative correlation.  So what does this show?  Well, it confirms what many people already think about the 2 leagues – the Premier League is overall quite predictable – just by looking at the relative wage bills you can get a pretty good idea about league positions.  The Championship on the other hand is really pretty unpredictable – wages seem to have only a weak correlation with league position.  Indeed Wolves had one of the highest wage bills and yet finished 2nd from bottom.

Championship Debt Timebomb

[Image: football3]

This remarkable graphic shows the terrible state of the finances of most Championship clubs.  It shows wages as a percentage of turnover.  Spending 50% of turnover on wages is generally considered a sustainable model for football clubs – yet every club in the table is above this, and the vast majority are spending 95% or more of their turnover just on wages.  Bristol City (who were relegated) were spending a staggering 190% of turnover on wages – i.e. nearly twice their total turnover!

This is again quite a contrast to the Premier League, where clubs seem much better run:

[Image: football6]

Whilst most clubs are spending more than 50% of their turnover, there are only 3 clubs spending more than 90% – and only QPR (who were relegated) spending more than their turnover.

So, the Championship is a much more unpredictable league – but also a financial basket case.

If you liked this post you might also like:

Premier League Wages Predict League Positions? A look at the Premier League data.

Does Sacking a Manager Improve Results? How an improvement in team results is often just down to a statistical result – regression to the mean.

Maths Studies IA Exploration Topics – A large number of examples of statistics investigations to explore.

 

 


It is Rocket Science




Maths is an essential part of both space travel and satellite programs.  Satellites are one of the most important technologies we have – used for rapid communication, TV signals, weather forecasting, navigation through GPS positioning, surveillance (including spying), mapping land, monitoring ecological change as well as for telescopes to look deep into space.

We want some satellites to rotate around the Earth (i.e. covering different parts of the globe with time), but other satellites are most useful if they can remain “fixed” – i.e. remaining above a specific place.  But how is this possible?  Surely the satellites will be orbiting the Earth – and so will not remain “stationary” with respect to a place on Earth.  Well, this is where a clever piece of maths comes in very useful:

Lagrange Points

There are 5 special points in the Sun–Earth system which allow a satellite to remain in a fixed position relative to the Earth as both orbit the Sun.  These are called Lagrange points after the mathematician Joseph-Louis Lagrange, who discovered 2 of these points (the first 3 were discovered by Leonhard Euler).

L1

[Image: lagrange points1]

L1 is the first Lagrange point – it is a position between the Sun and the Earth (in purple on the picture above) where the gravitational pulls of the Sun and the Earth combine in just the right way for a satellite to orbit the Sun at the same rate as the Earth – and so it remains in the same position relative to the Earth.

Remarkably Euler was able to work out where this Lagrange point was in the 1760s – hundreds of years before this had a genuine application and long before it could be empirically tested.  The formula for working out L1 is:

\frac{M_1}{(R-r)^2}=\frac{M_2}{r^2}+\left(\frac{M_1}{M_1+M_2}R-r\right)\frac{M_1+M_2}{R^3}

Where M1 is the mass of the large object (in this case the Sun) and M2 is the mass of the small object (in this case the Earth).  R is the distance between the 2 objects.  If we solve this equation we can find r – the distance from the small object (the Earth) at which a satellite has to be placed to remain in a synchronized orbit.

If M1 = mass of the Sun = 1.9891 × 10^30 kg

M2 = mass of the Earth = 5.97219 × 10^24 kg

R = distance between the Earth and the Sun = 149,000,000 km

Then we can substitute these numbers in and solve (probably with the help of some graphing software).

Luckily we can have a good approximation to this formula when the mass of M1 is much bigger than M2:

r \approx R \sqrt[3]{\frac{M_2}{3 M_1}}

This gives in our case:

r ≈ 149,000,000 × (5.97219 × 10^24/(3 × 1.9891 × 10^30))^(1/3)

r ≈ 1,500,000 km

Therefore, a satellite 1.5 million km away from the Earth will have a synchronized orbit with the Earth.  Satellites positioned at L1 are often ones which monitor the Sun’s activity – such as the Solar and Heliospheric Observatory (SOHO), which has discovered over 2700 comets as well as monitoring solar conditions.
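We can check this numerically.  The short Python sketch below (using the figures quoted above, with the gravitational constant cancelled from both sides of the equation) solves the full L1 equation by bisection and compares the answer with the cube-root approximation:

```python
M1 = 1.9891e30    # mass of the Sun (kg)
M2 = 5.97219e24   # mass of the Earth (kg)
R  = 149_000_000  # Earth-Sun distance (km), as quoted above

def f(r):
    # Full L1 equation rearranged to LHS - RHS, which equals 0 at the Lagrange point
    return M1/(R - r)**2 - M2/r**2 - (M1/(M1 + M2)*R - r)*(M1 + M2)/R**3

# Bisection search for the root between 100,000 km and 10,000,000 km from Earth
lo, hi = 1e5, 1e7
for _ in range(100):
    mid = (lo + hi)/2
    if f(lo)*f(mid) <= 0:
        hi = mid
    else:
        lo = mid
r_exact = (lo + hi)/2

# Cube-root approximation, valid because M1 is much bigger than M2
r_approx = R*(M2/(3*M1))**(1/3)

print(round(r_exact), round(r_approx))  # both close to 1,500,000 km
```

Both methods agree to within a couple of percent, which is why the approximation is normally used.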

L2

lagrange points3

L2 makes use of exactly the same equations as L1 – but is on the outside of Earth’s orbit (as pictured).  For L2, the distance from Earth is also r ≈ 1,500,000km.  The L2 region is useful for observing the wider universe – and satellites launched into this area have included the Planck spacecraft which was used to map cosmic background radiation to help better understand the Big Bang and the origin of the universe.

L3

lagrange points4

The L3 point is opposite the Earth – on the other side of the orbital path (as pictured above).

The equation to find r (which now measures how far inside the Earth’s orbital radius the L3 point lies, so that its distance from the Sun is R – r) is:

\frac{M_1}{(R-r)^2}+\frac{M_2}{(2R-r)^2}=\left(\frac{M_2}{M_1+M_2}R+R-r\right)\frac{M_1+M_2}{R^3}

Again this can be simplified with the approximation:

r \approx R \frac{7M_2}{12 M_1}

so r ≈ 149,000,000 × (7 × 5.97219 × 10^24)/(12 × 1.9891 × 10^30)

r ≈ 261 km – so the L3 point lies almost exactly on the Earth’s orbital path (a distance R – r from the Sun), directly opposite the Earth.
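As a quick sanity check, a couple of lines of Python (using the same figures as before) confirm how tiny this offset is:

```python
M1 = 1.9891e30    # mass of the Sun (kg)
M2 = 5.97219e24   # mass of the Earth (kg)
R  = 149_000_000  # Earth-Sun distance (km)

# Approximate distance of L3 inside the Earth's orbital radius
r = R*7*M2/(12*M1)
print(round(r))  # roughly 260 km
```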

L4 and L5

lagrange points5

L4 and L5 are a little more complicated: they are positioned such that each forms an equilateral triangle with the 2 large objects (the Earth and the Sun in this case).  The L4 and L5 Lagrange points are interesting to study because they are points of stable equilibrium – so a lot of space debris gathers there.  For example there are around 1700 asteroids at the stable Lagrange points (L4 and L5) of Jupiter’s orbit around the Sun.

lagrange9

This picture above shows all the objects near the centre of our solar system.  The Sun is in the middle, and each of the ellipses represents the orbit of a planet around the Sun.  Jupiter’s orbit is the outermost ellipse  – and as Jupiter orbits it has clusters of asteroids 60 degrees ahead and behind which follow it round.  These are marked in green – and labelled as the Trojans and the Greeks.

lagrange points6

Using the same method it is also possible to find the Lagrange points for the orbital paths between the moon and the Earth.  In this case, the Earth would be M1 and the moon M2.  In the picture above, the Earth-moon Lagrange points are marked as LL1, LL2 etc.

You can read more about the maths and the history behind Lagrange points by reading through this excellent PlusMaths article on the topic.  The European Space Agency also goes into some more detail about Lagrange points and their use.

Crypto Analysis to Crack Vigenere Ciphers


code2


(This post assumes some familiarity with both Vigenere and Caesar Shift Ciphers.  You can do some background reading on them here first).

We can crack a Vigenere Cipher using mathematical analysis.  Vigenere Ciphers are more difficult to crack than Caesar Shifts, however they are still susceptible to mathematical techniques.  As an example, say we receive the code:

VVLWKGDRGLDQRZHSHVRAVVHZKUHRGFHGKDKITKRVMG

If we know it is a Vigenere Cipher encoded with the word CODE then we can create the following decoding table.

VIGENERE4

Here we have 4 alphabets, each starting with a letter of the code word.  To decode we cycle through the alphabets.  The first code letter is V, so we find this in the C row and then look at the letter at the top of that column – this is T, our first letter.  The second code letter is also V, but this time we find it in the O row.  The column letter corresponding to this is H.  We continue this method, which gives the decoded sentence:

THIS IS AN EXAMPLE OF HOW THE VIGENERE CIPHER WORKS
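The decoding-table procedure above is equivalent to shifting each letter back by the corresponding keyword letter, cycling through the keyword.  A minimal Python sketch (assuming the message contains only the letters A–Z):

```python
def vigenere_decode(ciphertext, keyword):
    """Shift each letter back by the matching keyword letter, cycling the keyword."""
    plain = []
    for i, ch in enumerate(ciphertext):
        shift = ord(keyword[i % len(keyword)]) - ord('A')
        plain.append(chr((ord(ch) - ord('A') - shift) % 26 + ord('A')))
    return ''.join(plain)

code = "VVLWKGDRGLDQRZHSHVRAVVHZKUHRGFHGKDKITKRVMG"
print(vigenere_decode(code, "CODE"))
# THISISANEXAMPLEOFHOWTHEVIGENERECIPHERWORKS
```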

How do we know what cipher to use? 

In any kind of cryptanalysis we need to decide which technique has been used.  Say for example we receive the message:

GZEFWCEWTPGDRASPGNGSIAWDVFTUASZWSFSGRQOHEUFLAQVTUWFV
JSGHRVEEAMMOWRGGTUWSRUOAVSDMAEWNHEBRJTBURNUKGZIFOHR
FYBMHNNEQGNRLHNLCYACXTEYGWNFDRFTRJTUWNHEBRJ

In real code breaking we won’t have a message alongside it saying, “Use a Vigenere Cipher.”  A large part of the skill of code breaking is deciding which encoding technique has been used.  For our received message we have the frequency:

VINEGERE7

So, in this case should we look for a Caesar Shift or a Vigenere Cipher?  To decide, we need to find out how “smooth” the bar chart is and how it compares with the expected frequencies.  The expected values in English are:

vigenere3

A Caesar Shift simply shifts every letter in the message by a given number of letters in the alphabet, so we would expect a frequency barchart for a Caesar Shift to have the same peaks and troughs (just shifted along).  The Vigenere makes frequency analysis more difficult because it “smooths out” the frequencies – this means that the bar chart for the frequency will be less spiky and more uniform.

Index of Coincidence

A mathematical method to check how smooth the bar chart is, is to use the Index of Coincidence – this method is outlined in this post on Practical Cryptography, and uses this formula:

VIGENERE5

There is also a script on the site to work out the I.C. for us.  If we enter our received code we get an I.C. of 0.045.  We would expect an I.C. of around 0.067 for a regular distribution of English letters (which we would find in a Caesar Shift for example).  Therefore this I.C. value is a clue that we have a Vigenere Cipher rather than a Caesar Shift.
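The Index of Coincidence can also be computed directly from the formula: count each letter’s frequency f, sum f(f – 1) over the alphabet, and divide by N(N – 1), where N is the total number of letters.  A short sketch:

```python
from collections import Counter

def index_of_coincidence(text):
    """I.C. = sum over the alphabet of f(f-1), divided by N(N-1)."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f*(f - 1) for f in counts.values()) / (n*(n - 1))

# A string dominated by one letter has a high I.C. ...
print(index_of_coincidence("AAAAAAAAAB"))
# ... while a perfectly flat distribution gives an I.C. of 0
print(index_of_coincidence("ABCDEFGHIJKLMNOPQRSTUVWXYZ"))
```

Ordinary English text sits at roughly 0.067, which is why the value 0.045 above points towards a Vigenere Cipher.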

Exploiting the cyclic nature of the Vigenere Cipher

So, we suspect it is a Vigenere Cipher; next we want to find the code word that was used to generate the code table.  To do this we can look through the received code for repeating groups of letters.  There is a cyclic nature to the Vigenere Cipher, so there will also be a cyclic nature to the encoded message.

Using the site Crypto Corner we can analyse the text for repeating patterns of letters.  This gives us:

VINEGER8

This clearly indicates that there are a lot of letters repeating with a period of 3.  Therefore it is a good guess that the keyword also has length 3.

So, next we can split the received message into 3 separate messages:

GFEPRPGAVUZFRHFQUVGVAOGURADEHRBNGFRBNQRNYXYNRRUHR
ZWWGAGSWFAWSQELVWJHEMWGWUVMWEJUUZOFMNGLLATGFFJWEJ
ECTDSNIDTSSGOUATFSREMRTSOSANBTRKIHYHENHCCEWDTTNB

Here we have simply generated the first line by taking the first, fourth, seventh, tenth etc. letters.
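Splitting the message like this is just a matter of taking every 3rd letter, starting from the 1st, 2nd and 3rd positions respectively:

```python
received = ("GZEFWCEWTPGDRASPGNGSIAWDVFTUASZWSFSGRQOHEUFLAQVTUWFV"
            "JSGHRVEEAMMOWRGGTUWSRUOAVSDMAEWNHEBRJTBURNUKGZIFOHR"
            "FYBMHNNEQGNRLHNLCYACXTEYGWNFDRFTRJTUWNHEBRJ")

# Every 3rd letter was enciphered with the same keyword letter,
# so slicing with step 3 gives three separate Caesar-shifted messages
streams = [received[i::3] for i in range(3)]
for s in streams:
    print(s)
```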

Cracking the code

Now we can do three separate Caesar Shift tests on these separate lines:

The first line has frequency:

vigg1

which strongly suggests that R in the cipher text is going to E.  This gives us the following Caesar Shift:

vigenere10

The second line has the following frequency:

VINEGER11

Which strongly suggests that W in the cipher text is going to E.  This gives us:

vigenere11

Lastly we notice that this will give us the codeword NS_.  Well, NSA (the American signals intelligence agency) would be a good guess, so for the third Caesar Shift we try:

vigg2

Putting these together we have the Vigenere Cipher:

vigg3

and this decodes our received code as:

THE SECRET CODE IS CONTAINED IN THIS MESSAGE.  YOU MUST ADD THE FIRST PRIME NUMBER TO THE SECOND SQUARE NUMBER TO CRACK THIS. WHEN YOU HAVE DONE THAT CLICK BELOW AND ENTER THE NUMBER.

We have done it!  We have cracked the Vigenere Cipher using a mixture of statistics, logic and intuition.  The method may seem long, but this was a cipher that was thought to be unbreakable – and indeed took nearly 300 years to crack.  Today, using statistical algorithms it can be cracked in seconds.  Codes have moved on from the Vigenere Cipher – but maths remains at the heart of both making and breaking them.

If you enjoyed this post you might also like:

The Maths Code Challenge - three levels of codes to attempt, each one providing a password to access the next code in the series.  Can you make it onto the leaderboard?

RSA public key encryption - the code that secures the internet.

 

 

Why Do England Always Lose on Penalties?


penalties2

Statistics to win penalty shoot-outs

With the World Cup nearly upon us we can look forward to another heroic defeat on penalties by England. England are in fact the worst country of any of the major footballing nations at taking penalties, having won only 1 out of 7 shoot-outs at the Euros and World Cup. In fact of the 35 penalties taken in shoot-outs England have missed 12 – which is a miss rate of over 30%. Germany by comparison have won 5 out of 7 – and have a miss rate of only 15%.

With the stakes in penalty shoot-outs so high there have been a number of studies to look at optimum strategies for players.

Shoot left when ahead

One study published in Psychological Science looked at all the penalties taken in penalty shoot-outs in the World Cup since 1982. What they found was pretty incredible – goalkeepers have a subconscious bias for diving to the right when their team is behind.

penalties6

As is clear from the graphic, this is not a small bias towards the right, but a very strong one. When their team is behind the goalkeeper apparently favours his (likely) strong side 71% of the time. The strikers’ shot meanwhile continues to be placed either left or right with roughly the same likelihood as in the other situations. So, this built in bias makes the goalkeeper much less likely to help his team recover from a losing position in a shoot-out.

Shoot high

Analysis by Prozone looking at the data from the World Cups and European Championships between 1998 and 2010 compiled the following graphics:

penalties3

The first graphic above shows the part of the goal that scoring penalties were aimed at. With most strikers aiming bottom left and bottom right it’s no surprise to see that these were the most successful areas.

penalties4

The second graphic which shows where penalties were saved shows a more complete picture – goalkeepers made nearly all their saves low down. A striker who has the skill and control to lift the ball high makes it very unlikely that the goalkeeper will save his shot.

penalties5

The last graphic also shows the risk involved in shooting high. This data shows where all the missed penalties (which were off-target) were being aimed. Unsurprisingly strikers who were aiming down the middle of the goal managed to hit the target! Interestingly strikers aiming for the right corner (as the goalkeeper stands) were far more likely to drag their shot off target than those aiming for the left side. Perhaps this is to do with them being predominantly right footed and the angle of their shooting arc?

Win the toss and go first

The Prozone data also showed the importance of winning the coin toss – 75% of the teams who went first went on to win. Equally, missing the first penalty is disastrous to a team’s chances – they went on to lose 81% of the time. The statistics also show a huge psychological role as well. Players who needed to score to keep their teams in the competition only scored a miserable 14% of the time. It would be interesting to see how these statistics are replicated over a larger data set.

Don’t dive

A different study which looked at 286 penalties from both domestic leagues and international competitions found that goalkeepers are actually best advised to stay in the centre of the goal rather than diving to one side.  This had quite a significant effect on their ability to save penalties – increasing the likelihood of a save from around 13% to 33%.  So, why don’t more goalkeepers stay still?  Well, again this might come down to psychology – a diving save looks more dramatic and showcases the goalkeeper’s skill more than standing stationary in the centre.

penalties7

So, why do England always lose on penalties?

There are some interesting psychological studies which suggest that England suffer more than other teams because English players are inhibited by their high public status (in other words, there is more pressure on them to perform – and hence that pressure is harder to deal with).  One such study noted that the best penalty takers are the ones who compose themselves prior to the penalty.  England’s players start to run to the ball only 0.2 seconds after the referee has blown – making them much less composed than other teams.

However, I think you can place too much emphasis on psychology – the answer is probably simpler: other teams beat England because they have technically better players.  English footballing culture revolves much less around technical skill than it does elsewhere in Europe and South America – and when it comes to penalty shoot-outs this has a dramatic effect.

As we can see from the statistics, players who are technically gifted enough to lift their shots into the top corners give the goalkeepers virtually no chance of saving them.  England’s less technically gifted players have to rely on hitting it hard and low to the corner – which gives the goalkeeper a much higher percentage chance of saving them.

Test yourself

You can test your penalty taking skills with this online game from the Open University – choose which players are best suited to the pressure, decide what advice they need and aim your shot in the best position.

If you liked this post you might also like:

Championship Wages Predict League Position? A look at how statistics can predict where teams finish in the league.

Premier League Wages Predict League Positions? A similar analysis of Premier League teams.

Using Chi Squared to Crack Codes


 

crypto4

This is inspired by the great site Practical Cryptography, which is a really good resource for code making and code breaking.  One of their articles is about how we can use the Chi Squared test to crack a Caesar Shift code.  Indeed, if you use an online program to crack a Caesar Shift, it is probably using this technique.

crypto

This is the formula that you will be using for Chi Squared.  It looks more complicated than it is.  Say we have the following message (also from Practical Cryptography):

AOLJHLZHYJPWOLYPZVULVMAOLLHYSPLZARUVDUHUKZPTWSLZAJPWOLY ZPAPZHAFWLVMZBIZAPABAPVUJPWOLYPUDOPJOLHJOSLAALYPUAOLWSH PUALEAPZZOPMALKHJLYAHPUUBTILYVMWSHJLZKVDUAOLHSWOHILA

We first work out the frequency of each letter which we do using the Counton site.

crypto2

We next need to work out the expected values for each letter.  To do this we first need the expected percentages for the English language:

crypto3

Then we can count the number of letters in the code we want to crack (162 – again we can use an online tool)

Now, to find the expected number of As in the code we simply do 162 x 0.082 = 13.284.

The actual number of As in the code is 18.

Therefore, following the formula at the top of the page, we can do (18 – 13.284)²/13.284 – note that it is the expected value, 13.284, that goes in the denominator.

We then do exactly the same for the Bs in the code.  The expected number is 162 x 0.015 = 2.43.  The actual number is 3.

Therefore we can do (3 – 2.43)²/2.43

We do this same method for all the letters A-Z and then add all those numbers together.  This is our Chi Squared statistic.  The lower the value, the closer the 2 distributions are.  If the expected values and the observed values are the same then there will be a chi squared of zero.

If you add all the values together you get a Chi Squared value of ≈1634 – which is quite large!   This is what we would expect – because we already know that the code we have received has letter frequencies quite different to normal English sentences.  Now, what a Caesar Shift decoder can do is shift the received code through all the permutations and then for each one find out the Chi Squared value.  The permutation with the lowest Chi Squared will be the solution.

For example, if we shift every letter in our received code back by one – using the Counton tool (so A goes to Z etc) we get:

ZNKIGKYGXIOVNKXOYUTKULZNKKGXROKYZQTUCTGTJYOSVRKYZIOVNKX YOZOYGZEVKULYAHYZOZAZOUTIOVNKXOTCNOINKGINRKZZKXOTZNKVRG OTZKDZOYYNOLZKJGIKXZGOTTASHKXULVRGIKYJUCTZNKGRVNGHKZ

We can then do the same Chi Squared calculations as before.  This will give a Chi Squared of ≈3440 – which is an even worse fit than the last calculation.  If we carried this on so that A goes to T we would get:

THECAESARCIPHERISONEOFTHEEARLIESTKNOWNANDSIMPLESTCIPHER SITISATYPEOFSUBSTITUTIONCIPHERINWHICHEACHLETTERINTHEPLA INTEXTISSHIFTEDACERTAINNUMBEROFPLACESDOWNTHEALPHABET

and a Chi Squared calculation on this would show that it has a Chi Squared of ≈33 – i.e. it is a very good fit.  (You will get closer to zero on very long code texts which follow standard English usage).  Now, obviously we could see that this is the correct decryption without even working out the Chi Squared value – but this method allows a computer to do it, without needing the ability to understand English.  Additionally a codebreaker who spoke no English would still be able to decipher this code, on mathematics alone.
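The whole procedure can be automated.  The sketch below tries all 26 shifts and keeps the one with the lowest Chi Squared score, using a standard English frequency table.  The exact scores depend on the frequency table used, so they may differ slightly from the worked values above, but the minimum still picks out the correct shift:

```python
# Approximate expected letter frequencies for English
ENGLISH = {'A':0.082,'B':0.015,'C':0.028,'D':0.043,'E':0.127,'F':0.022,
           'G':0.020,'H':0.061,'I':0.070,'J':0.002,'K':0.008,'L':0.040,
           'M':0.024,'N':0.067,'O':0.075,'P':0.019,'Q':0.001,'R':0.060,
           'S':0.063,'T':0.091,'U':0.028,'V':0.010,'W':0.024,'X':0.002,
           'Y':0.020,'Z':0.001}

def chi_squared(text):
    """Compare observed letter counts with those expected for English text."""
    n = len(text)
    total = 0.0
    for letter, p in ENGLISH.items():
        expected = n*p
        observed = text.count(letter)
        total += (observed - expected)**2 / expected
    return total

def crack_caesar(ciphertext):
    """Try all 26 shifts and return the one with the lowest chi-squared."""
    best = None
    for shift in range(26):
        candidate = ''.join(chr((ord(c) - 65 - shift) % 26 + 65) for c in ciphertext)
        score = chi_squared(candidate)
        if best is None or score < best[0]:
            best = (score, shift, candidate)
    return best

code = ("AOLJHLZHYJPWOLYPZVULVMAOLLHYSPLZARUVDUHUKZPTWSLZAJPWOLY"
        "ZPAPZHAFWLVMZBIZAPABAPVUJPWOLYPUDOPJOLHJOSLAALYPUAOLWSH"
        "PUALEAPZZOPMALKHJLYAHPUUBTILYVMWSHJLZKVDUAOLHSWOHILA")

score, shift, plain = crack_caesar(code)
print(shift, plain[:30])
```

Running this recovers the shift of 7 used in the example above.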

The Practical Cryptography site has a tool for quickly working out Chi Squared values from texts – so you can experiment with your own codes.  Note that this is a slightly different use of Chi Squared: here we are not comparing with a critical value, but instead comparing the Chi Squared values of all the candidate decryptions to find the lowest.

If you liked this post you might also like:

Code Breakers Wanted by the NSA – A look at some other code breaking techniques.

RSA Public Key Encryption – The Code that Secures the internet – How understanding RSA code is essential for all people involved in internet security.

Non Euclidean Geometry – An Introduction


euclidean


It wouldn’t be an exaggeration to describe the development of non-Euclidean geometry in the 19th Century as one of the most profound mathematical achievements of the last 2000 years.  Ever since Euclid (c. 330-275BC) included in his geometrical proofs an assumption (postulate) about parallel lines, mathematicians had been trying to prove that this assumption was true.  In the 1800s however, mathematicians including Gauss started to wonder what would happen if this assumption was false – and along the way they discovered a whole new branch of mathematics.  A mathematics where there is an absolute measure of distance, where straight lines can be curved and where angles in triangles don’t add up to 180 degrees.  They discovered non-Euclidean geometry.

Euclid’s parallel postulate (5th postulate)

Euclid was a Greek mathematician – and one of the most influential men ever to live.  Through his collection of books, Elements, he created the foundations of geometry as a mathematical subject.  Anyone who studies geometry at secondary school will still be using results that directly stem from Euclid’s Elements – that angles in triangles add up to 180 degrees, that alternate angles are equal, the circle theorems, how to construct line and angle bisectors.  Indeed you might find it slightly depressing that you were doing nothing more than re-learn mathematics well understood over 2000 years ago!

All of Euclid’s results were based on rigorous deductive mathematical proof – if A was true, and A implied B, then B was also true.  However Euclid did need to make use of a small number of definitions (such as the definition of a line, point, parallel and right angle) before he could begin his first book.  He also needed a small number of postulates (assumptions given without proof) – such as:  “(It is possible) to draw a line between 2 points” and “All right angles are equal”.

Now the first 4 of these postulates are relatively uncontroversial in being assumed as true.  The 5th however drew the attention of mathematicians for centuries – as they struggled in vain to prove it.  It is:

If a line crossing two other lines makes the interior angles on the same side less than two right angles, then these two lines will meet on that side when extended far enough. 

euclid3

This might look a little complicated, but is made a little easier with the help of the sketch above.  We have the line L crossing lines L1 and L2, and we have the angles A and B such that A + B is less than 180 degrees.  The postulate says that the lines L1 and L2 will then intersect (on that side).  In other words, lines which are not parallel will intersect.

Euclid’s postulate can be restated in simpler (though not quite logically equivalent) language as:

At most one line can be drawn through any point not on a given line parallel to the given line in a plane.

euclid2

In other words, if you have a given line (l) and a point (P), then there is only 1 line you can draw which is parallel to the given line and through the point (m).

Both of these versions do seem pretty self-evident, but equally there seems no reason why they should simply be assumed to be true.  Surely they can actually be proved?  Well, mathematicians spent the best part of 2000 years trying without success to do so.

Why is the 5th postulate so important? 

Because Euclid’s proofs in Elements were deductive in nature, that means that if the 5th postulate was false, then all the subsequent “proofs” based on this assumption would have to be thrown out.  Most mathematicians working on the problem did in fact believe it was true – but were keen to actually prove it.

As an example, the 5th postulate can be used to prove that the angles in a triangle add up to 180 degrees.

euclid3

The sketch above shows that if A + B are less than 180 degrees the lines will intersect.  Therefore, by symmetry (if one side has a pair summing to more than 180 degrees, the other side will have a pair summing to less than 180 degrees), a pair of parallel lines must have A + B = 180.  This gives us:

euclid4

This is the familiar diagram you learn at school – with alternate and corresponding angles.   If we accept the diagram above as true, we can proceed with proving that the angles in a triangle add up to 180 degrees.

euclid5

Once we know that the two red angles are equal and the two green angles are equal, we can use the fact that angles on a straight line add to 180 degrees to conclude that the angles in a triangle add to 180 degrees.  But it needs the parallel postulate to be true!

In fact there are geometries in which the parallel postulate is not true  – and so we can indeed have triangles whose angles don’t add to 180 degrees.  More on this in the next post.

If you enjoyed this you might also like:

The Riemann Sphere – The Riemann Sphere is a way of mapping the entire complex plane onto the surface of a 3 dimensional sphere.

Circular Inversion – Reflecting in a Circle The hidden geometry of circular inversion allows us to begin to understand non-Euclidean geometry.

 

Non-Euclidean Geometry II – Attempts to Prove Euclid


euclidean

Non-Euclidean Geometry – A New Universe

This post follows on from Non-Euclidean Geometry – An Introduction – read that one first! 

The Hungarian army officer and mathematician János Bolyai wrote to his father in 1823 in excitement at his mathematical breakthrough with regards to the parallel postulate: “I have created a new universe from nothing.”  János Bolyai was one of the forerunners of the 19th century mathematicians who, after noting that mathematicians had spent over 2000 years trying to prove the parallel postulate, decided to see what geometry would look like if the constraint of the postulate was removed.  The result was indeed a new universe from nothing.

To recap, Euclid’s fifth postulate was as follows:

If a line crossing two other lines makes the interior angles on the same side less than two right angles, then these two lines will meet on that side when extended far enough.

euclid3

It had been understood in a number of (non-equivalent) ways – that parallel lines remain equidistant from each other, that non-parallel lines intersect, that if the lines L1 and L2 in the diagram are parallel then A + B = 180 degrees, that there can only be one line through  a point parallel to any given line.

Collectively these assumptions lead to the basis of numerous geometric proofs – such as the fact that angles in a triangle add up to 180 degrees and that angles in a quadrilateral add up to 360 degrees.

Gerolamo Saccheri

A geometry not based on the parallel postulate could therefore contain 3 possibilities, as outlined by the Italian mathematician Gerolamo Saccheri in 1733:

euclid7

1) A quadrilateral with (say) 2 right angles A,B and two other angles C,D also both right angles.  This is the hypothesis of the right angle – the “normal” geometry of Euclid.

2) A quadrilateral with (say) 2 right angles A,B and two other angles C,D both obtuse.  This is the hypothesis of the obtuse angle – a geometry in which the angles in quadrilaterals add up to more than 360 degrees.

3) A quadrilateral with (say) 2 right angles A,B and two other angles C,D also both acute.  This is the hypothesis of the acute angle – a geometry in which the angles in quadrilaterals add up to less than 360 degrees.

Don’t be misled by the sketch above – the top line of the quadrilateral is still “straight” in this new geometry – even if it can’t be represented in flat 2 dimensions.

Adrien Legendre

Mathematicians now set about trying to prove that both the cases (2) and (3) were false – thus proving that the Euclidean system was the only valid geometry.  The French mathematician Adrien Legendre, who made significant contributions to Number Theory tried to prove that the hypothesis of the obtuse angle was impossible.  His argument went as follows:

euclid8

1) Take a straight line and divide it into n equal segments.  In the diagram these are the 4 lines A1A2, A2A3, A3A4, A4A5

2) Complete the diagram as shown above so that the lengths B1B2, B2B3, B3B4, B4B5 are all equal.  From the sketch we will have lines A1B1 and A2B2 (and subsequent lines) equal.

3) Now we see what will happen if angle β is greater than α.  We compare the two triangles A1B1A2 and A2B2A3.  These have 2 sides the same.  Therefore if β is greater than α then the length A1A2 must be larger than B1B2.

euclid12

4) Now we note that the distance A1B1 + B1B2 + B2B3 + … BnBn+1 + Bn+1An+1 is greater than A1A2 + A2A3 + …AnAn+1.   In other words, the distance starting at A1 then travelling around the shape missing out the bottom line (the yellow line) is longer than the bottom line (green line).

5) Therefore we can write this as

A1B1 + nB1B2 + An+1Bn+1 > nA1A2

(Here we have simplified the expression by noting that all the distances B1B2, B2B3 etc. are equal)

6) Therefore this gives

2A1B1 > n(A1A2 -B1B2)

(Here we simplify by noting that A1B1 = An+1Bn+1 and then rearranging)

7) But this then gives a contradiction – because we can make the RHS as large as we like by simply subdividing the line into more pieces (thus increasing n), but the LHS remains bounded (as it is a fixed value).  Therefore as n tends to infinity, this inequality must be broken.

8) This means that β is not greater than α, so we can write β ≤ α.  This in turn means that the angle sum of the triangle A1B1A2 will be ≤ 180.  To see this:

euclid13

We can work out the angles in A1B1A2 by noting that c = (180-α)/2 .  Therefore

angles in A1B1A2 = (180-α)/2 + (180-α)/2 + β

angles in A1B1A2 = 180 + β – α

But we know that β ≤ α.  Therefore β – α ≤ 0

So angles in A1B1A2 = 180 + β – α ≤ 180

Adrien Legendre therefore concluded that the hypothesis of the obtuse angle was impossible.  In fact, it isn’t – and the flaw wasn’t in the logic of his proof but in the underlying assumptions contained within it.  This will be revealed in the next post!

If you enjoyed this you might also like:

The Riemann Sphere – The Riemann Sphere is a way of mapping the entire complex plane onto the surface of a 3 dimensional sphere.

Circular Inversion – Reflecting in a Circle The hidden geometry of circular inversion allows us to begin to understand non-Euclidean geometry.

Non Euclidean Geometry III – Breakthrough Into New Worlds


euclidean

Non Euclidean Geometry – Spherical Geometry

This article follow on from Non Euclidean Geometry – An Introduction – read that first!

Most geometers up until the 19th century had focused on trying to prove that Euclid’s 5th (parallel) postulate was true.  The underlying assumption was that Euclidean geometry was true and therefore the 5th postulate must also be true.

The German mathematician Franz Taurinus made huge strides towards developing non-Euclidean geometries when in 1826 he published his work on spherical trigonometry.

euclid14

Spherical trigonometry is a method of working out the sides and angles of triangles which are drawn on the surface of spheres.

One of the fundamental formula for spherical trigonometry, for a sphere of radius k is:

cos(a/k) = cos(b/k).cos(c/k) + sin(b/k).sin(c/k).cosA

So, say for example we have a triangle as sketched above.  We know the radius of the sphere is 1, that the angle A = 60 degrees, the length b = 1 and the length c = 1.  We can use this formula to find the length a (working in degrees):

cos(a) = cos(1).cos(1) + sin(1).sin(1).cos60

a ≈ 0.99996

We can note that for the same triangle sketched on a flat surface we would be able to use the formula:

a² = b² + c² – 2bc.cosA

a² = 1 + 1 – 2cos60

a = 1

Taurinus however wanted to investigate what would happen if the sphere had an imaginary radius (i).  Without worrying too much about what a sphere with an imaginary radius would look like, let’s see what this does to the previous spherical trigonometric equations:

The sphere now has a radius of ik where i = √-1, so:

cos(a/ik) = cos(b/ik).cos(c/ik) + sin(b/ik).sin(c/ik).cosA

But cos(ix) = cosh(x) and sin(ix) = i.sinh(x) – where cosh(x) and sinh(x) are the hyperbolic trig functions.  So we can convert the above equation into:

cosh(a/k) = cosh(b/k)cosh(c/k) – sinh(b/k).sinh(c/k).cosA

This equation will give us the relationship between angles and sides on a triangle drawn on a sphere with an imaginary radius.

Now, here’s the incredible part – this new geometry based on an imaginary sphere (which Taurinus called Log-Spherical Geometry) actually agreed with the hypothesis of the acute angle  (the idea that triangles could have an angle sum less than 180 degrees).

Even more incredible, if you take the limit as k approaches infinity of this new equation, you are left with:

a² = b² + c² – 2bc.cosA

What does this mean?  Well, if we have a sphere of infinite imaginary radius it stretches and flattens to be indistinguishable from a flat plane – and this is where our normal Euclidean geometry works.  So, Taurinus had created a geometry for which our own Euclidean geometry is simply a special case.
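We can see this limit numerically.  The sketch below solves the hyperbolic formula for a and watches it approach the flat-plane answer as k grows:

```python
import math

def hyperbolic_side(b, c, A, k):
    """Side a of a triangle on a sphere of imaginary radius ik, from
    cosh(a/k) = cosh(b/k)cosh(c/k) - sinh(b/k)sinh(c/k)cos(A)."""
    rhs = math.cosh(b/k)*math.cosh(c/k) - math.sinh(b/k)*math.sinh(c/k)*math.cos(A)
    return k*math.acosh(rhs)

b, c, A = 1, 1, math.radians(60)

# Euclidean (flat) answer from the cosine rule
a_flat = math.sqrt(b**2 + c**2 - 2*b*c*math.cos(A))

# As k grows, the hyperbolic answer approaches the flat one
for k in [1, 10, 1000]:
    print(k, hyperbolic_side(b, c, A, k))
print("flat:", a_flat)
```

For small k the hyperbolic side is noticeably longer than the Euclidean one; by k = 1000 the two answers agree to several decimal places.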

So what other remarkable things happen in this new geometric world?  Well we have triangles that look like this:

euclid15

This triangle has angle A = 0, angle C = 90 and lines AB and AC are parallel, (they never meet).  This sketch introduces a whole new concept of parallelism far removed from anything Euclid had imagined. The angle  β is called the angle of parallelism – and measures the angle between a perpendicular and parallel line.  Unlike in Euclidean geometry this angle does not have to be 90 degrees.  Indeed the angle  β will now change as we move the perpendicular along AC – as it is dependent on the length of the line a.

So, we are now into some genuinely weird and wonderful realms where normal geometry no longer makes sense.  Be warned – it gets even stranger!  More on that in the next post.

If you enjoyed this post you might also like:

The Riemann Sphere – The Riemann Sphere is a way of mapping the entire complex plane onto the surface of a 3 dimensional sphere.

Circular Inversion – Reflecting in a Circle The hidden geometry of circular inversion allows us to begin to understand non-Euclidean geometry.


Non Euclidean Geometry IV – New Universes


euclidean


The 19th century saw mathematicians finally throw off the shackles of Euclid’s 5th (parallel) postulate – and go on to discover a bewildering array of geometries which no longer took this assumption about parallel lines as an axiomatic fact.

1) A curved space model

 

euclid18

The surface of a sphere is a geometry where the parallel postulate does not hold.  This is because all straight lines in this geometry will meet.  We need to clarify what “straight” means in this geometry.  “Straight” lines are those lines defined to be of minimum distance from a to b on the surface of the sphere.  These lines therefore are defined by “great circles” which have the same radius as the sphere like those shown above.

A 2 dimensional being living on the surface of a 3D sphere would feel like he was travelling in a straight line from a to b when he was in fact travelling on the great circle containing both points.  He would not notice the curvature because the curvature would be occurring in the 3rd dimension – and as a 2 dimensional being he would not be able to experience this.

2) A field model -  Stereographic Projection for Riemann’s Sphere

 

Joh-RiemannSphere01

A field model can be thought of in reverse.  A curved space model is a curved surface where straight lines are parts of great circles.  A field model is a flat surface where “straight lines” are curved.

This may seem rather strange, however, the German mathematician Riemann devised a way of representing every point on the sphere as a point on the plane.  He did this by first centering the sphere on the origin – as shown in the diagram above.  Next he took a point on the complex plane (z = x + iy ) and joined up this point to the North pole of the sphere (marked W).  This created a straight line which intersected the sphere at a single point at the surface of the sphere (say at z’).  Therefore every point on the sphere (z’) can be represented as a unique point on the plane (z) – in mathematical language, there is a one-to-one mapping between the two.

The only point on the sphere which does not correspond to a point on the complex plane is the North pole itself (point W).  This is because the line through W tangent to the sphere is parallel to the plane, and so never reaches it.  Riemann therefore assigned the value of infinity to the North pole, and so the sphere is a 1-1 mapping of all the points in the complex plane together with infinity.
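Riemann’s construction is easy to compute directly.  Here is a minimal sketch (assuming the unit sphere centred at the origin, with the complex plane as the equatorial plane and the North pole at (0, 0, 1)):

```python
import math

def plane_to_sphere(x, y):
    # Inverse stereographic projection: the line from the North pole (0,0,1)
    # through the plane point (x, y, 0) meets the unit sphere here.
    d = x * x + y * y + 1
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

def sphere_to_plane(X, Y, Z):
    # Forward projection: the line from (0,0,1) through (X,Y,Z) meets z = 0 here.
    return (X / (1 - Z), Y / (1 - Z))

p = plane_to_sphere(3.0, 4.0)
print(p)                    # a point on the unit sphere
print(sphere_to_plane(*p))  # recovers (3.0, 4.0)
```

Every plane point lands exactly on the sphere, and projecting back recovers it – the one-to-one mapping described above.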

euclid19

On this field model (which is the flat complex plane), our straight lines are the stereographic projections of the great circles on the sphere.  As you can see from the sketch above, these projections will give us circles of varying sizes.  These are now our straight lines!

And this is where it starts to get really interesting – when we have two isometric spaces there is no way an inhabitant could actually know which one is his own reality.  A 2 dimensional being could be living in either the curved space model, or the field model and not know which was his true reality.

The difference between the 2 models is that in the first instance we accept an unexplained curvature of space that causes objects to travel in “straight” lines along great circles, and that in the second instance we accept an unexplained field which forces objects travelling in “straight” lines to follow curved paths.  Both of these ideas are fundamental to Einstein’s Theory of Relativity – where we must account for both the curvature of space-time and a gravitational force field.

Interestingly, our own 3 dimensional reality is isomorphic to the projection onto a 4 dimensional sphere (hypersphere) – and so our 3 dimensional universe is indistinguishable from a curved 3D space which is the surface of a hypersphere.  A hypersphere may be a bit difficult to imagine, but the video above is about as close as we can get.

Such a scenario would allow for our space to be bounded rather than infinite, and for there to be an infinite number of 3D universes floating in the 4th dimension – each bounded by the surface of their own personal hypersphere.  Now that’s a bit more interesting than the Euclidean world of straight lines and circle theorems.

If you enjoyed this you might also like:

Imagining the 4th Dimension. How mathematics can help us explore the notion that there may be more than 3 spatial dimensions.

Circular Inversion – Reflecting in a Circle The hidden geometry of circular inversion allows us to begin to understand non-Euclidean geometry.

Geometry, Relativity and the Fourth Dimension is a fantastic (and very readable despite its daunting title!) book full of information about non-Euclidean geometry and extra dimensions.

Non Euclidean Geometry V – The Shape of the Universe


euclidean

Non Euclidean Geometry V – Pseudospheres and other amazing shapes

Non Euclidean geometry takes place on a number of weird and wonderful shapes.  Remember, one of the fundamental questions mathematicians investigating the parallel postulate were asking was how many degrees a triangle would have in that geometry – and it turns out that this question can be answered in terms of something called Gaussian curvature.

Gaussian curvature measures the nature of the curvature of a 3 dimensional shape.  The way to calculate it is to take a point on a surface, draw a pair of lines at right angles to each other, and note the direction of their curvature.  If both curve down or both curve up, then the surface has positive curvature.  If one line curves up and the other down, then the surface has negative curvature.  If at least one of the lines is flat then the surface has no curvature.

Positive curvature:

euclid21

A sphere is an example of a shape with constant positive curvature – that means the curvature at every point is the same.

Negative curvature:

 

euclid20

The pseudosphere is a shape which is in some respects the opposite of a sphere (hence the name pseudo-sphere).  This shape has a constant negative curvature.  It is formed by a surface of revolution of a curve called a tractrix.

Zero curvature:

euclid22

It might be surprising at first to find that the cylinder is classified as having zero curvature.  But one of the lines drawn on it will always be flat – hence we have zero curvature.  We can think of the cylinder as analogous to the flat plane, because we could unravel the cylinder without bending or stretching it and obtain a flat plane.

So, what is the difference between the geometries of the 3 types of shapes?

Parallel lines

Firstly, given a line m and a point p not on m, how many lines parallel to m through p can be drawn on each type of shape?

euclid23

A shape with positive curvature has no such lines – and so has no parallel lines.  A shape with negative curvature has many such lines – and so has many parallel lines through the same point.  A shape with no curvature follows our normal Euclidean rules – and has a single parallel line through a point.

Sums of angles in a triangle and other facts

euclidean

Triangles on shapes with positive curvature have angles which add to more than 180 degrees.  Triangles on shapes with negative curvature have angles which add to less than 180 degrees.  Triangles on shapes with no curvature are our familiar 180 degree types.  In curved space, Pythagoras’ theorem no longer holds, and the ratio of a circle’s circumference to its diameter is no longer pi.
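The angle-sum claim for positive curvature is easy to verify on the unit sphere, using the spherical law of cosines for angles (a standard formula, not derived in this post):

```python
import math

def spherical_angles(a, b, c):
    # Angles of a triangle on the unit sphere whose sides are the arcs a, b, c,
    # via the spherical law of cosines for angles:
    #   cos(A) = (cos(a) - cos(b)cos(c)) / (sin(b)sin(c))
    def angle(x, y, z):
        return math.acos((math.cos(x) - math.cos(y) * math.cos(z)) / (math.sin(y) * math.sin(z)))
    return angle(a, b, c), angle(b, c, a), angle(c, a, b)

# An "octant" triangle: three sides, each a quarter of a great circle.
A, B, C = spherical_angles(math.pi / 2, math.pi / 2, math.pi / 2)
print(math.degrees(A + B + C))  # roughly 270 - three right angles!

# A tiny triangle is nearly flat: its angle sum is only just over 180 degrees.
print(math.degrees(sum(spherical_angles(0.01, 0.01, 0.01))))
```

Notice that the smaller the triangle, the closer its angle sum gets to the Euclidean 180 degrees – curvature only shows up over large enough distances.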

Torus

The torus is a really interesting mathematical shape – basically a donut shape, which has the property of having variable Gaussian curvature.  Some parts of the surface have positive curvature, others zero, others negative.

euclid24

The blue parts of the torus above have positive curvature, the red parts negative and the top grey band has zero curvature.  If our 3 dimensional space were like the surface of a 4 dimensional torus, then triangles would have different angle sums depending on where we were on the torus’ surface.  This is actually one of the current theories as to the shape of the universe.

Mobius Strip and Klein Bottle

euclid25

These are two more bizarre shapes with strange properties.  The Mobius strip only has one side – if you start anywhere on its surface and keep travelling along the strip you will eventually return to where you started, having passed over every part of the surface.

euclid26

The Klein bottle is in some ways a 3D version of the Mobius strip – and even though it exists in 3 dimensions, to make a true one you need to “fold through” the 4th dimension.

The shape of the universe

OK, so this starts to get quite esoteric – why is knowing the geometry and mathematics of all these strange shapes actually useful?  Can’t we just stick to good old flat-plane Euclidean geometry?  Well, on a fundamental level non-Euclidean geometry is at the heart of one of the most important questions in mankind’s history – just what is the universe?

euclid27

At the heart of understanding the universe is the question of the shape of the universe.  Does it have positive curvature, negative curvature, or is it flat?  Is it like a torus, a sphere, a saddle or something else completely?  These questions will help determine whether the universe is truly infinite – or perhaps a bounded loop, in which, if you travelled far enough in one direction, you would return to where you set off from.  They will also help determine what will happen to the universe – will it keep expanding?  Slow down and stop, or crunch back in on itself?  You can read more on these questions here.

 

Fourier Transforms – the most important tool in mathematics?


fourier5

Fourier Transform

The Fourier Transform and the associated Fourier series is one of the most important mathematical tools in physics. Physicist Lord Kelvin remarked in 1867:

“Fourier’s theorem is not only one of the most beautiful results of modern analysis, but it may be said to furnish an indispensable instrument in the treatment of nearly every recondite question in modern physics.”

The Fourier Transform deals with time-based waves – and these are one of the fundamental building blocks of the natural world. Sound, light, gravity, radio signals, earthquakes and digital compression are just some of the phenomena that can be understood through waves. It’s no exaggeration, therefore, to see the study of waves as one of the most important applications of mathematics in modern life.

Here are some real life applications in a wide range of fields:

JPEG picture and MP3 sound compression – to allow data to be reduced in size.

Analysing DNA sequences – to allow identification of specific regions of genetic code

Apps like Shazam which can recognise a song from a sample of music

Processing mobile phone network data and WIFI data

Signal processing – in everything from acoustic guitar amps to electrical currents through capacitors

Radio telescopes - used to construct images of the night sky

Buildings’ natural frequencies – so architects can design buildings to better withstand earthquakes.

Medical imaging such as MRI scans

There are many more applications – this Guardian article is a good introduction to some others.

So, what is the Fourier Transform? It takes a graph like the graph f(t) = cos(at) below:

 

fourier1

and transforms it into:

fourier2

From the above cosine graph we can see that it is a periodic, time-based function. Time is plotted on the x axis, and this graph tells us the value of f(t) at any given time. The graph below with 2 spikes represents the same information in a different way – it shows the frequency (plotted on the x axis) of the cosine graph. The frequency of a function measures how many times it repeats per second, so for a graph f(t) = cos(at) it can be calculated as the inverse of the period. The period of cos(at) is 2pi/a, so it has a frequency of a/2pi.

Therefore the frequency graph for cos(at) will have spikes at a/2pi and -a/2pi.

But how does this new representation help us? Well most real life waves are much more complicated than simple sine or cosine waves – like this trumpet sound wave below:

fourier3

But the remarkable thing is that every continuous wave can be modelled as a sum of sine and cosine waves. So we could break the very complicated wave above down into (say) cos(x) + sin(2x) + 2cos(4x). This new representation is much easier to work with mathematically.

To find out which constituent sine and cosine waves make up a complicated wave, we use the Fourier Transform. By transforming a function into one which shows its frequency peaks, we can work out what the sine and cosine parts of that function are.

fourier4

For example, this transformed graph above would show which frequency sine and cosine functions to use to model our original function. Each peak represents a sine or cosine function of a specific frequency. Add them all together and we have our function.

The maths behind this does get a little complicated. I’ll try and talk through the method using the function f(t) = cos(at).

\\1.\ f(t) = cosat\\

So, the function we want to break down into its constituent cosine and sine waves is cos(at). Now, obviously this function can be represented just with cos(at) – but this is a good demonstration of how to use the maths for the Fourier Transform. We already know that this function has a frequency of a/2pi – so let’s see if we can find this frequency using the Transform.

\\2.\ F(\xi) = \int_{-\infty}^{\infty} f(t)(e^{-2\pi i\xi t})dt\\

This is the formula for the Fourier Transform. We “simply” replace the f(t) with the function we want to transform – then integrate.

\\3.\ f(t)= 0.5({e}^{iat}+ {e}^{-iat})\\

To make this easier we use the exponential formula for cosine. When we have f(t) = cos(at) we can rewrite this as the function above in terms of exponential terms.

\\4.\ F(\xi) = 0.5\int_{-\infty}^{\infty} (e^{iat}+e^{-iat})(e^{-2\pi i\xi t})dt\\

We substitute this version of f(t) into the formula.

\\5.\ F(\xi) = \frac{1}{2} \int_{-\infty}^{\infty} e^{it(a-2\pi \xi) }dt + \frac{1}{2} \int_{-\infty}^{\infty}e^{it(-a-2\pi \xi)}dt\\

Next we multiply out the exponential terms in the bracket (remember the laws of indices), and then split the integral into 2 parts. The reason we have grouped the powers in this way is because of the following step.

\\6.\ \delta (a-2\pi \xi) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{it(a-2\pi \xi)}dt\\

This is the delta function – which, as you can see, is very closely related to the integrals we have. Multiplying both sides by pi gets the integral into the correct form. The delta function is a function which is zero everywhere apart from when its argument is zero.

\\7.\ F(\xi) =\pi [ \delta (a-2\pi \xi ) + \delta (a+2\pi \xi ) ]\\

So, the integral can be simplified as this above.

\\8.\ a-2\pi \xi = 0 \ or \ a+2\pi \xi = 0\\

So, our function F will be zero for all values except when the delta function is zero. This gives us the above equations.

\\9.\ \xi = \pm\frac{a}{2\pi }\\

Therefore solving these equations we get an answer for the frequency of the graph.

\\10.\ frequency\ of\ cosat = \frac{a}{2\pi }\\

This frequency agrees with the frequency we already expected to find for cos(at).

A slightly more complicated example would be to follow the same process but this time with the function f(t) = cos(at) + cos(bt). If the Fourier Transform works correctly it should recognise that this function is composed of one cosine function with frequency a/2pi and another cosine function with frequency b/2pi. If we follow through exactly the same method as above (we can in effect split the function into cos(at) and cos(bt) and transform each separately), we should get:

\\7.\ F(\xi) =\pi [ \delta (a-2\pi \xi ) + \delta (a+2\pi \xi ) + \delta (b-2\pi \xi ) + \delta (b+2\pi \xi ) ]\\

This therefore is zero for all values except for when we have frequencies of a/2pi and b/2pi. So the Fourier Transform has correctly identified the constituent parts of our function.
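We can watch the transform pick out frequencies with a small numerical experiment. This is a plain-Python sketch of the discrete Fourier transform (the discrete analogue of the integral above) – for real work you would use an FFT library:

```python
import cmath, math

def dft_magnitudes(samples):
    # Naive discrete Fourier transform, O(N^2) - fine for a small demo.
    N = len(samples)
    return [abs(sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N)]

# Sample f(t) = cos(2.pi.3t) + cos(2.pi.7t) at 64 points over one second.
N = 64
samples = [math.cos(2 * math.pi * 3 * n / N) + math.cos(2 * math.pi * 7 * n / N) for n in range(N)]
mags = dft_magnitudes(samples)

# The four largest spikes sit at 3 and 7 (and their mirror images 64-3, 64-7),
# just like the positive and negative spikes a/2pi and -a/2pi above.
peaks = sorted(range(N), key=lambda k: -mags[k])[:4]
print(sorted(peaks))  # [3, 7, 57, 61]
```

The transform has correctly identified the two constituent frequencies (3 Hz and 7 Hz) without being told anything about how the signal was built.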

If you want to read more about Fourier Transforms, then the Better Explained article is an excellent start.

Zeno’s Paradox – Achilles and the Tortoise



This is a very famous paradox from the Greek philosopher Zeno – who argued that a runner (Achilles) who constantly halved the distance between himself and a tortoise  would never actually catch the tortoise.  The video above explains the concept.

There are two slightly different versions to this paradox.  The first version has the tortoise as stationary, and Achilles as constantly halving the distance, but never reaching the tortoise (technically this is called the dichotomy paradox).  The second version is where Achilles always manages to run to the point where the tortoise was previously, but by the time he reaches that point the tortoise has moved a little bit further away.

Dichotomy Paradox

Screen Shot 2014-08-18 at 11.38.35 AM

The first version we can think of as follows:

Say the tortoise is 2 metres away from Achilles.  Initially Achilles halves this distance by travelling 1 metre.  He halves this distance again by travelling a further 1/2 metre.  Halving again he is now 1/4 metres away.  This process is infinite, and so Zeno argued that in a finite length of time you would never actually reach the tortoise.  Mathematically we can express this idea as an infinite summation of the distances travelled each time:

1 + 1/2 + 1/4 + 1/8 …

Now, this is actually a geometric series – which has first term a = 1 and common ratio r = 1/2.  Therefore we can use the infinite summation formula for a geometric series (which was derived about 2000 years after Zeno!):

sum = a/(1-r)

sum = 1/(1-0.5)

sum = 2

This shows that the summation does in fact converge – and so Achilles would actually reach the tortoise 2 metres away.  There is still something of a sleight of hand being employed here, however – we have shown that the distances sum to a finite 2 metres, but what about the time taken?  Well, as the distances get ever smaller, the times required to traverse them also shrink towards zero at the same rate, so as the distance converges to 2 metres, the total time taken also converges to a finite number.
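We can watch this convergence happen with a few lines of Python (a quick sketch – the function name is my own):

```python
# Partial sums of the distances 1 + 1/2 + 1/4 + ... approach a/(1-r) = 2.
def partial_sum(n, a=1.0, r=0.5):
    # Sum of the first n terms of the geometric series a + ar + ar^2 + ...
    return sum(a * r**k for k in range(n))

for n in (1, 2, 5, 10, 20):
    print(n, partial_sum(n))

print(1 / (1 - 0.5))  # the closed-form limit: 2.0
```

After only 20 terms the partial sum already agrees with the limit to 6 decimal places – Achilles closes almost all of the gap very quickly.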

There is an alternative method to showing that this is a convergent series:

S = 1 + 1/2 + 1/4 + 1/8 + …

0.5S = 1/2 + 1/4 + 1/8 + …

S – 0.5S = 1

0.5S = 1

S = 2

Here we notice that in doing S – 0.5S all the terms will cancel out except the first one.

Achilles and the Tortoise

Screen Shot 2014-08-18 at 10.19.42 AM

The second version also makes use of geometric series.  If we say that the tortoise has been given a 10 m head start, and that whilst the tortoise runs at 1 m/s, Achilles runs at 10 m/s, we can try to calculate when Achilles would catch the tortoise.  So in the first instance, Achilles runs to where the tortoise was (10 metres away).  But because the tortoise runs at 1/10th the speed of Achilles, he is now a further 1m away.  So, in the second instance, Achilles now runs to where the tortoise now is (a further 1 metre).  But the tortoise has now moved 0.1 metres further away.  And so on to infinity.

This is represented by a geometric series:

10 + 1 + 0.1 + 0.01 …

Which has first term a = 10 and common ratio r = 0.1.  So using the same formula as before:

sum = a/(1-r)

sum = 10/(1-0.1)

sum = 11.11m

So, again we can show that because this geometric series converges to a finite value (11.11), then after a finite time Achilles will indeed catch the tortoise (11.11m away from where Achilles started from).

We often think of mathematics and philosophy as completely distinct subjects – one based on empirical measurement, the other on thought processes – but back in the day of the Greeks there was no such distinction.  The resolution of Zeno’s paradox by use of calculus and limits to infinity some 2000 years after it was first posed is a nice reminder of the power of mathematics in solving problems across a wide range of disciplines.

The Chess Board Problem

The chess board problem is nothing to do with Zeno (it was first recorded about 1000 years ago) but is nevertheless another interesting example of the power of geometric series.  It is explained in the video above.  If I put 1 grain of rice on the first square of a chess board, 2 grains of rice on the second square, 4 grains on the third square, how much rice in total will be on the chess board by the time I finish the 64th square?

The mathematical series will be:

1+ 2 + 4 + 8 + 16 +……

So a = 1 and r = 2

Sum = a(1 – r⁶⁴)/(1 – r)

Sum = (1 – 2⁶⁴)/(1 – 2)

Sum = 2⁶⁴ – 1

Sum = 18,446,744,073,709,551,615

This is such a large number that, if stretched from end to end, the rice would reach all the way to the star Alpha Centauri and back 2 times.  (Interestingly, numbers of the form 2ⁿ – 1 are called Mersenne numbers – and those which are prime are called Mersenne Primes.  2⁶⁴ – 1 itself is not prime, however, since 64 is not prime.)
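Python’s arbitrary-precision integers make it easy to check the chess board total exactly, both by direct summation and with the geometric series formula:

```python
# Square k (counting from 0) holds 2**k grains of rice.
total = sum(2**k for k in range(64))   # direct summation over all 64 squares
formula = 2**64 - 1                    # geometric series: a(r^n - 1)/(r - 1) with a=1, r=2, n=64
print(total)              # 18446744073709551615
print(total == formula)   # True
```

The two agree exactly – no rounding is involved, since Python integers have no fixed size.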

Batman and Superman Maths


wolfram


Wolfram Alpha is an incredibly powerful mathematical tool – which has been developed to allow both complex calculations and data analysis. It is able to generate images like that shown above, of the Batman logo. What’s really impressive however is that you can see the underlying graph input that would generate this image:

wolfram 2

At first glance this looks indecipherable – but we can actually understand it a little better by breaking these inequalities down and looking at them individually.

wolfram3

The first inequality defines the area inside an ellipse.  All ellipses have a general formula:

  \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1.

In our inequality, the a simply stands for an arbitrary constant (because the Batman logo has no scale).  To keep things simple we can set a = 1.  This gives an equation:

wolfram4

which generates the ellipse:

wolfram5

When we now make this the inequality:

wolfram6

Then this simply has the effect of shading in the area contained within the ellipse.  So, comparing this to the original Batman shape we can see that the ellipse we have drawn forms the wings of the logo.
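We can check which points get shaded with a one-line inequality test.  (The semi-axes 7 and 3 here are an assumption for illustration, taken from the widely circulated “Batman curve”; as noted above, the logo itself has no fixed scale.)

```python
def inside_ellipse(x, y, a=7.0, b=3.0):
    # Points satisfying (x/a)^2 + (y/b)^2 <= 1 lie inside (or on) the ellipse,
    # so these are exactly the points the inequality shades in.
    return (x / a) ** 2 + (y / b) ** 2 <= 1

print(inside_ellipse(0, 0))   # True - the centre is shaded
print(inside_ellipse(7, 0))   # True - on the boundary of the wing
print(inside_ellipse(7, 1))   # False - outside the wing
```

Plotting every point for which the function returns True would reproduce the shaded ellipse above.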

Next, let’s look at the next inequality:

wolfram7

Which, if we again choose a = 1 for simplicity, we will get:

wolfram8

Taking only the part of this graph which is greater than or equal to 4 gives us the part of the Batman logo which defines the inside of the Batman ears.

I’m not going to go through each part – as that would take too long!  Let’s look at one more inequality though:

wolfram9

This will generate the part of the graph that looks like:

wolfram10

This will form part of the Batman logo cape.

Superman Logo

wolfram11

Now if you thought that was hard, have a look at the inequalities needed for the Superman logo above:

wolfram12

Now this really is almost indecipherable!  I can at least explain what the min(a,b) means.  For example, say we had:

y = min(cosx,sinx)

This would simply mean that for any x value, I would find out what cosx was equal to, find out what sinx was equal to, and then plot the smallest value as my y value.  For example, when x = 0, I would have cos(0) = 1 and sin(0) = 0.  So I choose my y value as 0 when x = 0.  Plotting this graph would give:

wolfram13

wolfram14

Which is an interesting periodic function that shares some of the features of the regular trig graphs.  Anyway, the real Superman inequality is much harder than this – and demonstrates just how powerful Wolfram Alpha is.
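The min construction described above is simple to compute directly – a quick sketch:

```python
import math

def f(x):
    # y = min(cos x, sin x): at each x, take whichever of the two is smaller.
    return min(math.cos(x), math.sin(x))

print(f(0))          # 0.0  (sin 0 = 0 is smaller than cos 0 = 1)
print(f(math.pi / 4))  # both equal sqrt(2)/2 here - the curves cross
print(f(math.pi))    # -1.0 (cos pi = -1 is smaller than sin pi = 0)
```

Evaluating f at many x values and plotting the results would reproduce the periodic graph shown above.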
