### Franka Miriam Brückler, Dept. of Mathematics, University of Zagreb, Croatia

Most mathematicians and other scientists will agree that mathematics can be divided into pure and applied mathematics, with the borderline between them somewhat diffuse. Generally speaking, pure mathematics is mathematics created in an l'art-pour-l'art spirit: abstract mathematics done for reasons other than possible applications. Applied mathematics, as its name suggests, is created to solve concrete problems. Historically speaking, most of the mathematics existing before the 19th century could be classified as applied mathematics, because even theoretical results sprang mostly from mathematics created to solve a particular problem in physics, astronomy or some other field of application. Every mathematical discipline can be considered pure or applied, so one can argue whether a section on "applied mathematics", presented as a discipline of mathematics, is really justified. On the other hand, there are several mathematical disciplines in which a large number of results are obtained in order to solve concrete problems, or at least with the expectation that an application will be found. These disciplines include numerical analysis, computer science, mathematical physics, operations research and game theory, among several others. Many would say that statistics (described in the text on probability and statistics) and differential equations and Fourier theory (described under analysis) also belong to applied mathematics.

The probably best known applied mathematics discipline is numerical analysis. It is the study of methods for computing numerical data, where mathematical theory is used not only to find an algorithm for computing a solution to a given problem, but also to answer questions about the error bounds and accuracy of the algorithms. Although nowadays primarily designed for use with computers, numerical analysis results were created many centuries before any calculating machine more sophisticated than an abacus was invented. The oldest numerical analysis result is usually attributed to the Babylonians, who devised an algorithm for the approximate calculation of the square root of 2 (as the length of the diagonal of a unit square). The method is known as Heron's method, because the Greek mathematician Heron, from the 1st century AD, gave the first exact description of it, but it can be found on a Babylonian clay tablet from about 1800-1600 BC. The method is simple and can be used to find other square roots as well. First one makes a guess (say, since 1² = 1 ≤ 2 ≤ 4 = 2², one sees that √2 is between 1 and 2 and can take 3/2 = 1.5 as a first guess). Then one divides the original number by the guess ( 2 : (3/2) = 4/3 ) and finds the average of this and the guess ( (4/3 + 3/2)/2 = 17/12 ≈ 1.416667). This average is the next guess, and the process can be repeated with it to obtain a better guess, as many times as one likes. This process converges (i.e. every next guess is nearer to the exact value of the desired square root), as has been proven in more recent times. For example, after the first two guesses for √2 (3/2 = 1.5 and 17/12 ≈ 1.41666666666667), the third will be 577/408 ≈ 1.41421568627451, and the fourth, 665857/470832 ≈ 1.41421356237, is already exact to the first twelve decimal places (√2 ≈ 1.41421356237310).
An important point when applying this, or any other numerical, method is that rounding errors slow the convergence, so one should keep at least one extra digit beyond the desired accuracy during one's computations. The problem of roundoff errors and similar questions (error bounding, stability of algorithms, rates of convergence, time and memory requirements of computation, and possible platform dependence of an algorithm) are the primary concerns of numerical analysis.
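Heron's method described above can be sketched in a few lines of Python; using exact fractions (rather than floating-point numbers) reproduces the successive guesses 17/12, 577/408 and 665857/470832 mentioned in the text.

```python
from fractions import Fraction

def heron_sqrt(a, guess, steps):
    """Approximate the square root of a: repeatedly replace the guess
    by the average of the guess and a divided by the guess."""
    x = guess
    for _ in range(steps):
        x = (x + a / x) / 2  # average of the guess and a/guess
    return x

# Successive guesses for the square root of 2, starting from 3/2:
print(heron_sqrt(Fraction(2), Fraction(3, 2), 1))  # 17/12
print(heron_sqrt(Fraction(2), Fraction(3, 2), 3))  # 665857/470832
```

With ordinary floating-point arithmetic the same loop converges just as quickly, but the rounding caveat above applies: each iteration roughly doubles the number of correct digits only as long as the intermediate results carry enough precision.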

Approximation theory is also one of the subdisciplines of numerical analysis. Results in approximation theory are used by various scientists and engineers every day. It is concerned with the question of how to approximate various functions by simpler ones, and with estimating the errors made by such an approximation. For example, one of the basic theorems of approximation theory is the Weierstrass approximation theorem. It states that any continuous real-valued function defined on a closed interval can be approximated by polynomials to any desired degree of accuracy. This is a very important result, since there are many functions that are hard to handle in computations, while polynomials - functions defined as finite sums of terms of the form "a constant times a power of the variable" - are easy to handle. Now, this is an existence theorem, and approximation theory gives various ways of constructing the desired approximating polynomials, fulfilling possible extra conditions. A related topic is interpolation, i.e. fitting a function of a given sort - e.g. a polynomial - to a given set of points. An interpolation example is shown in figure 1. Another related topic is that of finding a function of a given type that does not necessarily pass through the given points, but does not miss them by too much. This is an everyday problem in all fields that apply mathematics: given a number of measurements (e.g. measured concentrations of a chemical component at various moments), as one is sure to have made some experimental errors, it is not to be expected that the points lie exactly on the curve that describes the relationship between the variables. Thus it is much more reasonable to fit the data by a simpler function, e.g. a line y = ax + b (usually known physical or other laws suggest which type of function is appropriate). One then wants to choose, of all the functions of the desired type, the one that gives the smallest error.
For example, when fitting a line to a set of points (see figure 2), one chooses the line near which all of the points lie.
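The interpolation idea can be made concrete with a short sketch that evaluates the unique polynomial through a given set of points using Lagrange's formula (the function name `lagrange` is ours, not from any particular library):

```python
def lagrange(points, x):
    """Evaluate, at x, the interpolating polynomial passing through
    the given (xi, yi) points, using Lagrange's formula."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial factor
        total += term
    return total

# The data of figure 1: the polynomial passes exactly through the points.
pts = [(-1, 25), (0, 0), (2, 40), (4, 12)]
print(lagrange(pts, 2))  # 40.0
```

Evaluating at a new point, e.g. x = 1, gives the value of the interpolating polynomial there (14.4 for the data above, up to rounding).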

Figure 1. The interpolating polynomial having values 25 for x = -1, 0 for x = 0, 40 for x = 2 and 12 for x = 4 is -0.6x + 19.7x² - 4.7x³.

Figure 2. The line obtained by linear least squares that best fits the points (5, 60), (15, 120), (20, 100), (30, 150), (45, 170), (50, 240), (60, 250), (80, 280) is y = 56.2052 + 3.01757x. This line is shown in blue, and for comparison some lines that do not give as good an approximation are shown in black.
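The least-squares line of figure 2 can be reproduced with the standard closed-form formulas for the slope and intercept; the following is a minimal pure-Python sketch.

```python
def fit_line(points):
    """Fit y = b + a*x to the points by linear least squares,
    using the closed-form normal-equation solution."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return b, a

# The data of figure 2:
data = [(5, 60), (15, 120), (20, 100), (30, 150),
        (45, 170), (50, 240), (60, 250), (80, 280)]
b, a = fit_line(data)
print(round(b, 4), round(a, 5))  # 56.2052 3.01757
```

The fitted line minimizes the sum of the squared vertical distances from the points to the line, which is the precise sense in which it "does not miss them by too much".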

Figure 3. Turing machine recognizing a sequence of the form @@...@*...*. The circles represent states, the arrows represent transition rules, and a label "a:n" should be read "if the head reads a, move the head by n and switch to the state indicated by the arrow", where n = 1 means "move the head one place to the right" and n = 0 means "do not move the head".

Mathematical physics studies the interconnections between mathematics and physics. It is concerned with applications of mathematics to physics, and it also develops mathematical models and methods suitable for describing physical phenomena and theories. Although physical problems have been solved by mathematical methods since ancient times, mathematical physics in the modern sense of the word was created when Sir Isaac Newton developed calculus to solve problems related to motion. The power of calculus for modeling physical laws was recognized by Newton and his contemporaries, and differential equations have been used as models since the beginnings of calculus. To this day, ordinary and, more often, partial differential equations are a typical "ingredient" of mathematical physics, and many results about them were discovered and proven because of physics. A typical partial differential equation from mathematical physics is the Poisson equation ∇²φ = f. The symbol ∇² denotes an operator known as the Laplacian; it is a way to generalize taking second derivatives to multivariable functions. The equation arises in physical problems about finding a potential φ for a known density function f. The best known example comes from electrostatics, where f = -ρ/ε₀ and the equation describes the relationship between the electric potential φ and the charge density ρ. Many other areas of analysis (potential theory, variational calculus, Fourier theory, ...) are also used in all areas of physics and were developed for physical reasons.
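As a small numerical illustration (our own, not from the text), the one-dimensional analogue of the Poisson equation, φ'' = f on (0, 1) with φ(0) = φ(1) = 0, can be solved by finite differences: the second derivative is replaced by a central difference, giving a tridiagonal linear system that the Thomas algorithm solves in linear time. For the constant density f = -1 the exact potential is φ(x) = x(1 - x)/2, which the sketch checks.

```python
def poisson_1d(f, n):
    """Solve phi'' = f on (0, 1) with phi(0) = phi(1) = 0, using
    central differences on n interior grid points (Thomas algorithm)."""
    h = 1.0 / (n + 1)
    # Discrete equation: (phi[i-1] - 2*phi[i] + phi[i+1]) / h^2 = f(x_i)
    b = [f((i + 1) * h) * h * h for i in range(n)]  # right-hand side
    diag = [-2.0] * n                               # main diagonal
    # Forward elimination (both off-diagonals equal 1)
    for i in range(1, n):
        m = 1.0 / diag[i - 1]
        diag[i] -= m
        b[i] -= m * b[i - 1]
    # Back substitution
    phi = [0.0] * n
    phi[-1] = b[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (b[i] - phi[i + 1]) / diag[i]
    return phi

# f = -1: exact solution phi(x) = x(1 - x)/2, so phi(0.5) = 0.125
phi = poisson_1d(lambda x: -1.0, 99)
print(phi[49])  # approximately 0.125
```

Real problems are of course two- or three-dimensional and lead to much larger sparse systems, but the structure - discretize the Laplacian, solve a linear system - is the same.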

Functional analysis and operator theory were developed partly as the mathematical foundations of quantum theory. But other areas of mathematics also partly belong to mathematical physics. Algebra, particularly group theory, and topology play a fundamental role in the theory of relativity and in quantum field theory. Geometry - interestingly, not only differential and classical geometry, but also abstract geometry (e.g. non-Euclidean geometries) - is also essential for modern physics (relativity theory, string theory, ...). Combinatorics and probability are the foundations of statistical mechanics (the study of the thermodynamics of systems consisting of a large number of particles). Because there is no sharp distinction where, say, analysis ends and mathematical physics begins, many results cannot be clearly classified. Thus Lagrangian mechanics (a re-formulation of classical Newtonian mechanics in terms of differential equations about potentials) is considered mathematical physics, although the general study of differential equations (where some results applicable to Lagrangian mechanics are also proven) is "pure" mathematics.

It may also be interesting to know that 10 of the 64 official mathematical areas (as listed in the Mathematics Subject Classification) are mathematical physics subdisciplines: mechanics of particles and systems, mechanics of deformable solids, fluid mechanics, optics and electromagnetic theory, classical thermodynamics and heat transfer, quantum theory, statistical mechanics and structure of matter, relativity and gravitational theory, astronomy and astrophysics, geophysics.

Although physics is probably the oldest field of application of mathematical results, it is far from the only one. As already said, computer science is nowadays a separate field of applied mathematics. Mathematical biology and mathematical chemistry also exist. Moreover, in the 20th century so many applications of mathematics to economics emerged that we now have two mathematical disciplines (operations research and game theory) that were developed for applications in economics and the social and behavioral sciences. Operations research is the mathematical discipline studying mathematical methods, coming from various fields (probability and statistics, graph theory, optimization, ...), that help make better decisions. It originated in military problems before World War II, and after the war the techniques were applied to problems in economics and society. A typical class of problems studied in operations research are scheduling problems. These include the planning of optimal, or best possible, schedules for a variety of "jobs": from the organisation of workers at a building site in such a way that the work proceeds as fast as possible, taking into account the various special abilities of the workers, over airline scheduling, to CPU scheduling (how to distribute processes over the available CPUs). Mathematical programming, or optimization, is also a subdiscipline associated with operations research. It refers to the study of methods of finding the best element of a given set. An example would be the following: A guitar manufacturer produces two types of guitars, an acoustic and an electric one. A sold acoustic guitar brings a profit of €180 and a sold electric one a profit of €120. They are produced in two departments, A (where the parts are prepared) and B (assembly). One acoustic guitar needs 5 hours of work in department A and 10 hours in B. One electric guitar needs 3 hours of work in department A and 12 hours in B.
As there is not an infinite number of employees, the maximal numbers of working hours per week in the two departments are known: 120 in A and 360 in B. How many guitars of each sort should be produced per week in order to maximize the profit? This is an example of a linear programming problem with two unknowns (the number x of acoustic and y of electric guitars produced per week). The conditions (restrictions due to the limited numbers of working hours) can be represented by linear inequalities in x and y, and visualized in the coordinate plane (see figure 5, left). For any allowable combination of x and y the corresponding profit is 180x + 120y, and this can take various values. Thus any imagined value P of the profit is represented by a line 180x + 120y = P. All such lines are parallel, and as long as they have points in common with the area representing allowable production numbers, that profit is achievable. Plotting the lines and noticing in which direction the values increase, one can find the solution to our problem (see figure 5, right): the manufacturer should produce 12 acoustic and 20 electric guitars per week, and this will bring a weekly profit of €4560.
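Since the optimum of a linear program is attained at a vertex of the feasible region, a two-variable problem like the guitar example can be solved by brute force: intersect every pair of constraint lines and keep the feasible vertex with the largest profit. This is only an illustrative sketch; real solvers use the simplex method or interior-point methods.

```python
from itertools import combinations

def solve_2d_lp(constraints, objective):
    """Maximize objective = (cx, cy) subject to constraints given as
    triples (a, b, c) meaning a*x + b*y <= c, by checking every vertex."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel constraint lines: no vertex here
        x = (c1 * b2 - c2 * b1) / det  # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        # Keep the intersection only if it satisfies every constraint.
        if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
            value = objective[0] * x + objective[1] * y
            if best is None or value > best[0]:
                best = (value, x, y)
    return best

# The guitar problem: 5x + 3y <= 120 (dept. A), 10x + 12y <= 360 (dept. B),
# and x >= 0, y >= 0 written as -x <= 0, -y <= 0.
cons = [(5, 3, 120), (10, 12, 360), (-1, 0, 0), (0, -1, 0)]
print(solve_2d_lp(cons, (180, 120)))  # (4560.0, 12.0, 20.0)
```

The returned vertex (12, 20) with value 4560 is exactly the solution read off from figure 5.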

Figure 5. The yellow quadrilateral represents all possible combinations of the numbers of produced acoustic and electric guitars, according to the conditions on the maximal possible numbers of working hours. In the right-hand picture some profit lines are included (for the values P = 1000, 2000, 3000, 4000 and 4560, from left to right). If P is larger than 4560, the profit line has no points in common with the yellow area, so the maximal profit is 4560. The vertex in which the corresponding line touches the yellow area has coordinates (12, 20) and represents the numbers of acoustic and electric guitars that should be produced weekly to maximize the profit.

Game theory is the mathematical study of behaviour in competitive situations called games; i.e., for a given game it attempts to prove that there are optimal strategies for the opposed parties (players) playing the game, and to find them. It is understood that there are wins and losses (payoffs) depending on the players' choices; these are usually described as amounts of money, but can be of any kind (even immaterial), as long as there is a way to compare the values. Also, game theory assumes that all players are rational, i.e. that they won't take unnecessary risks or trust their luck. Game theory, too, developed after World War II, and it received more public attention after the mathematician John Forbes Nash Jr. received the Nobel Memorial Prize in Economic Sciences in 1994 for his work on game theory and the movie A Beautiful Mind about his life was released in 2001. A basic game theory notion is the Nash equilibrium: it is a set of strategies (choices of sequences of moves) for all the players, together with the corresponding payoffs, with the property that no player can benefit by changing his strategy if the other players keep their strategies unchanged. A class of games possessing a Nash equilibrium is known as two-person zero-sum games. These are games between two opposed parties (persons, firms, states, ...) played in rounds. In each round a payoff between the two sides is defined, but no payoffs are made to anybody else (i.e. "what I lose, you win"). It is understood that both parties have complete information, i.e. both know how many possible choices (moves) each has in every round, and what the payoff will be for any possible combination of moves. For example, let two friends, John and Mary, play the following game. In each round they both write down a number on a piece of paper, and they show their numbers to each other simultaneously.
They agreed that John may write 0 or 1, while Mary may write 2, 3 or 4, and also agreed to the following payoff rules (the rows correspond to John's choices, and the columns to Mary's):

|         | Mary: 2 | Mary: 3 | Mary: 4 |
|---------|---------|---------|---------|
| John: 0 | 0       | -3      | 5       |
| John: 1 | 3       | -1      | -1      |

A positive number in the above table indicates that Mary has to give John that number of candies (John wins), and a negative number means that Mary wins and John has to give her the indicated number of candies. A zero means a draw. For example, if John writes a 0 and Mary a 3, the payoff is -3, i.e. John has to give Mary three candies. Thus the numbers represent winnings for John and losses for Mary. From John's point of view, the worst that can happen to him if he writes down a 0 is -3 (having to give 3 candies away), and the worst that can happen if he writes 1 is -1 (having to give one candy to Mary), so he will choose to write 1. On the other side, the worst that can happen to Mary if she writes 2 is 3 (this is a win for John, so a loss for her!); if she writes 3, the worst for her is -1 (i.e. she is sure to win at least 1 candy); and if she writes 4 she could lose 5, so she will choose to write 3. When they show their numbers (John: 1, Mary: 3), the payoff (-1, one candy from John to Mary) is made. Now, a little bit of thinking will convince you that, without knowing what move the opponent will make, it makes no sense for either of the two players to choose another option, so if they continue playing in rounds they will always play the same way. Such a two-person zero-sum game is called strictly determined, and its Nash equilibrium consists of the optimal strategy for John (always write 1), the optimal strategy for Mary (always write 3) and the corresponding payoff per round (-1, i.e. in every round John loses and Mary wins 1 candy).
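The reasoning above is the standard maximin/minimax computation, and it can be sketched as a check for a saddle point of the payoff matrix (a minimal sketch; the function name is ours):

```python
def saddle_point(matrix):
    """Return (row, column, value) of a saddle point of the payoff
    matrix (payoffs to the row player), or None if there is none."""
    # The row player can secure at least the best of the row minima,
    row_values = [min(row) for row in matrix]
    maximin = max(row_values)
    # while the column player concedes at most the best of the column maxima.
    col_values = [max(col) for col in zip(*matrix)]
    minimax = min(col_values)
    if maximin != minimax:
        return None  # no saddle point: the game is not strictly determined
    return (row_values.index(maximin), col_values.index(minimax), maximin)

# Rows: John writes 0 or 1; columns: Mary writes 2, 3 or 4.
payoffs = [[0, -3, 5],
           [3, -1, -1]]
print(saddle_point(payoffs))  # (1, 1, -1): John writes 1, Mary writes 3
```

When the maximin and minimax differ, no saddle point exists and the optimal play mixes strategies randomly - the situation covered by von Neumann's minimax theorem, which is beyond this strictly determined example.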

Besides the applied mathematics disciplines mentioned, there are many others. There is practically no field of life to which some mathematics isn't applied. As a final note, the author wishes to emphasize that one should take care not to confuse applied mathematics with applications of mathematics. For example, the people (Gauss, Legendre and others) who discovered the least squares method and proved its properties were applied mathematicians. A scientist using the method of least squares to draw conclusions from data is applying it, not creating it. So, applied mathematics is a mathematical discipline in which new methods for use in applications are discovered and created, and their properties and limitations proven. An applied mathematician is making new mathematics, and the chief difference from a pure mathematician is that he or she does it hoping to solve a practical problem. The results of applied mathematics are then used - applied - by mathematicians and nonmathematicians; these users are not doing mathematics, they are just using it.