Computational Tools in Materials Technology

Relevant Course: Numerical methods / computational methods / computational tools

Relevant Department: Metallurgical Engineering and Materials Science

Relevant Semester: 3rd

Pre-requisite: Knowledge of engineering mathematics and introductory materials science is preferable. It is mandatory to bring a calculator.

Course Description & Outline:

  • Data Analysis
  • Eigenvalues, solution of simultaneous linear equations
  • Thermodynamics
  • Newton's law of cooling, steady state heat / mass transfer

Schedule for Lecture Delivery

Session 1: 15-Sep-2015 (2-4 pm)

Session 2: 18-Sep-2015 (2-4 pm)

Session 3: 21-Sep-2015 (10 am-12 noon)

Teacher Forum

Preliminaries

    In this course, we will discuss the use of computers for solving problems that are of interest  to materials scientists and engineers. The methods are well known and are discussed in engineering mathematics courses under the topic of numerical methods. The problems chosen to illustrate the methods, however, are from materials science and engineering.  

    During the course, unfortunately, we will not have access to a computer. That is a pity, because most of these methods are best learnt by programming the algorithms on a computer. Computer programs also help us plot and graphically represent the results, which further helps us understand the problems and their solutions.

     In the absence of computers, we will use calculators to carry out calculations. Obviously, this imposes severe restrictions on the type of problems that we will be able to tackle in the class. However, I will also share some supplementary material on using GNU Octave to carry out some of the computations that we discuss in the class (and their graphical visualization).

     I strongly recommend that you bring a scientific calculator along with a notebook and pen to the class. As we discuss the methods, we will also solve example problems together.

Numerical Computation and Errors


     When we carry out calculations using pen and paper, we can keep as many digits as we please, so the calculations can be made as accurate as we wish. However, when we use computers, since computers use only a limited number of bits (binary digits) to store numbers, the representation of the numbers, and the calculations based on them, is only approximate.

Single and Double Precision

     Typically, computers allow two representations of real numbers: single precision (in which, typically, 32 bits are used to store a real number) and double precision (in which the number of bits used is double that of single precision, i.e., typically 64 bits). Single and double precision typically mean that the accuracy of the numbers represented is of the order of 10⁻⁸ and 10⁻¹⁶ respectively; this is known as the machine accuracy (εₘ).
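
     If you have GNU Octave at hand (mentioned below as supplementary material), the machine accuracy can be queried directly; a minimal sketch:

    % Machine accuracy (machine epsilon) in GNU Octave
    eps            % double precision: about 2.2e-16
    eps('single')  % single precision: about 1.2e-07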

    Single and double precisions also place restrictions on the smallest and the largest number that can be represented on computers. Any number smaller (larger) than this leads to underflow (overflow).

    Note that while integers are also restricted in range, there are no accuracy problems associated with their representation. Most scientific computation requires double precision.

Round-off Errors

    Any arithmetic operation on real numbers which are represented in an approximate manner results in an accumulation of errors. Such errors are known as round-off errors, since they come about by rounding off (or, sometimes, simply discarding) digits beyond a certain decimal position.

   The error associated with N arithmetic operations on a machine with machine accuracy εₘ is √N εₘ, assuming that the rounding-off is random. However, because of the way round-off is carried out on computers, and because of regularities in the calculations associated with numerical algorithms, typical round-off errors are of the order of N εₘ.

     Hence, while carrying out numerical computations, it is important to keep in mind the round-off errors (especially when a large number of arithmetic operations are involved).

Truncation Errors

  Truncation errors are errors associated with the approximations used in the numerical algorithm – for example, evaluating sin x by summing only a finite number of terms of the infinite series x − x³/3! + x⁵/5! − ···. Truncation errors, thus, are present even in pen and paper calculations. In most numerical methods, our aim is to find algorithms which minimize the truncation errors as much as possible.
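
  As an illustration, a minimal Octave sketch of the truncation error incurred by cutting the sine series off after three terms (the cut-off and the point x = 1 are arbitrary choices):

    % Truncation error: three-term series for sin(x) vs the built-in sin
    x = 1.0;
    approx = x - x^3/factorial(3) + x^5/factorial(5);
    err = abs(sin(x) - approx)   % about 2e-4: the truncation error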

Stability

    If, in an algorithm, the truncation errors interact with the round-off errors in such a way that the errors get magnified and the calculation breaks down completely, the method is called unstable.

Error Bounds and Error Propagation

    We can define two types of bounds, namely, the error bound and the relative error bound. Suppose the actual quantity is a and the approximate value obtained for it is ã. Then, the error bound β is defined by |a − ã| ≤ β. The relative error bound βᵣ is defined by |a − ã|/|a| ≤ βᵣ.

      It can be shown that, in addition and subtraction, the error bound for the result is the sum of the error bounds of the terms; in multiplication and division, the relative error bound for the result is approximately the sum of the relative error bounds of the given terms.

Error Estimate

    When you perform any numerical computation, it is always a good idea to estimate the error. The way to estimate the error is to carry out the computation with two different accuracies and take the difference between the results as a measure of the error.
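
    As an illustration, a minimal Octave sketch that estimates the error in a forward difference derivative by repeating the computation with a halved step (the function and step sizes are arbitrary illustrative choices):

    % Error estimate: compute with two accuracies and difference them
    f = @(x) exp(x);
    d1 = (f(1 + 1e-2) - f(1)) / 1e-2;    % step h
    d2 = (f(1 + 5e-3) - f(1)) / 5e-3;    % step h/2
    err_estimate = abs(d1 - d2)          % measure of the error in d1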

 Taylor Series Expansion

     We will use the Taylor series expansion very extensively; many numerical algorithms and methods use the Taylor series as their starting point. Just to recall,

f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + (h³/3!) f‴(x) + ···,

where ′ denotes differentiation with respect to x.

Further Reading

      The engineering mathematics textbook of Kreyszig [1] and the Numerical Recipes book [2] (available freely online) are good places to read more about these topics.

Videos For Preliminaries

Quiz I

Quiz not available

Forum for Preliminaries

Forum not available

Fitting and Interpolation

   Suppose, through some experiments, we identify the change in some quantity Y as a function of some parameter X. Then, obtaining an expression that connects X to Y is known as fitting. The process of finding the Y value for an X value for which no experimental data is available, using the available data, is called interpolation (if the X value lies within the range of the experiments) or extrapolation (if the X value lies outside that range). In this section, we will discuss methods of fitting and interpolation.

Lattice Parameter Changes With Composition

   Let us consider the changes in lattice parameter with composition. Typically, for small variations in composition, the lattice parameter change is linear (the so-called Vegard's law). In Table 1, the data on the variation of the lattice parameter with Si composition in the Si-Ge system is given. For the source of the data, please see [3].

  Suppose we want to find the value of the lattice parameter at, say, 50 mol % of silicon. Since we know that the lattice parameter changes linearly, it is possible to interpolate linearly using the 22.9 and 57.5 mol % silicon data.
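
  A minimal Octave sketch of such a linear interpolation; since Table 1 is not reproduced here, the 22.9 mol % entry below is a placeholder (5.518 Å for 57.5 mol % Si is quoted in the next subsection):

    % Linear interpolation between two data points (22.9 mol % value is a
    % placeholder; use the actual entries from Table 1)
    x = [22.9 57.5];          % mol % Si
    a = [5.601 5.518];        % lattice parameter, Angstrom
    a50 = interp1(x, a, 50)   % linearly interpolated value at 50 mol % Si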

In Class Tutorial

Why Fitting?

    How correct is the answer that we got using linear interpolation? To answer this question, let us use the data for the 22.9 and 85.8 mol % Si alloys to calculate the lattice parameter for the 57.5 mol % alloy, and compare it with the available experimental data. We get a lattice parameter of 5.520 Å from the calculation instead of the reported 5.518 Å. The reason for this mismatch is experimental error. So, a better way to calculate the lattice parameter for a given alloy composition is to fit the data to a straight line and use the fit.

Linear Least Squares Fit
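
   The derivation and worked example from the original notes are not reproduced here; as a minimal sketch, with placeholder data standing in for Table 1, a linear least squares fit in Octave:

    % Linear least squares fit a = c1*x + c0 (placeholder data for Table 1)
    x = [22.9 57.5 85.8];           % mol % Si
    a = [5.601 5.518 5.452];        % lattice parameter, Angstrom (placeholders)
    c = polyfit(x, a, 1);           % c(1) = slope, c(2) = intercept
    a50 = polyval(c, 50)            % fitted lattice parameter at 50 mol % Si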

Further Reading

   The textbooks of Kreyszig [1] and Gerald and Wheatley [5] are good places to read more on these topics.

      

Videos For Fitting And Interpolation

Video 1

Video 2

Forum for Fitting and Interpolation

Forum not available

Numerical Differentiation and Integration

    All of us know how to calculate the derivative of a given function or how to integrate it. The same operations can also be carried out on discrete data. Differentiation and integration carried out on discrete data are called numerical differentiation and numerical integration.

   In this section, we will discuss methods of numerical differentiation and numerical integration, and show how such numerical calculations can be useful for some problems in materials science and engineering.

Numerical Differentiation

   Differentiation on a computer is carried out using the forward, backward, or central difference formulae. These formulae are derived from the Taylor series expansion. Let us consider the following Taylor series expansion, terminated after the second term:

f(x + h) = f(x) + h f′(x) + O(h²).

  The error in such a termination is of the order of h². Rearranging the above equation, we obtain the forward difference formula for numerical differentiation:

f′(x) ≈ [f(x + h) − f(x)] / h.

  Note that while the termination error is of the order of h², the error in the derivative is of the order of h (because of the division by h).

  By using the Taylor series expansion for f(x − h) in a similar manner, we can derive the backward difference formula, f′(x) ≈ [f(x) − f(x − h)] / h.

  Finally, combining the forward and backward difference formulae, we obtain the central difference formula, f′(x) ≈ [f(x + h) − f(x − h)] / (2h), whose error is of the order of h².
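
  A minimal Octave sketch comparing the three formulae on a function with a known derivative (the function and step size are illustrative choices):

    % Forward, backward, and central differences for f(x) = sin(x) at x = 1
    f = @(x) sin(x);  x = 1;  h = 1e-3;
    fwd = (f(x+h) - f(x)) / h;
    bwd = (f(x) - f(x-h)) / h;
    ctr = (f(x+h) - f(x-h)) / (2*h);
    exact = cos(x);
    errors = abs([fwd bwd ctr] - exact)   % central is markedly more accurate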

Elastic Modulus From Stress-Strain Data

    The tensile test data for a metallic material is as shown in Table 2. This data can be plotted; the stress-strain curve is as shown in Fig. 1. From the figure, it is clear that the yield stress of the given material (that is, the point of deviation from linearity) is about 130 MPa. By taking the derivative of the linear region, one can obtain the Young's modulus of this material.

   As is clear from Table 2, the first eight data points correspond to the linear (elastic) region. In Table 3, we calculate the derivative (which is the Young's modulus) for these eight points by considering consecutive pairs of points.

Numerical Integration

   We know that the area under the elastic part of the stress-strain curve is the resilience and the area under the entire stress-strain curve is the toughness. It is possible to calculate these values numerically. The numerical integration for unevenly spaced data is carried out using the trapezoidal rule [5]:

∫ f(x) dx ≈ Σᵢ (xᵢ₊₁ − xᵢ) [f(xᵢ) + f(xᵢ₊₁)] / 2.
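
   In Octave, the built-in trapz handles unevenly spaced data directly; a minimal sketch with placeholder values standing in for Table 2:

    % Resilience from the elastic part of a stress-strain curve (placeholder data)
    strain = [0 0.0005 0.0010 0.0015 0.0020];   % dimensionless
    stress = [0 32.5   65.0   97.5   130.0];    % MPa
    resilience = trapz(strain, stress)          % MPa = MJ/m^3, area under curve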

Further Reading

   The textbooks of Kreyszig [1] and Gerald and Wheatley [5] are good places to read more on these topics.

Videos For Numerical Differentiation and Integration

 Numerical Differentiation

Numerical Integration 

Forum for Numerical Differentiation and Integration

Forum not available

Slides for Preliminaries, Fitting and Interpolation, and Numerical Differentiation and Integration

Assignment - Fitting and Interpolation

Consider the consumption of copper as a function of temperature in a reaction. The data is as shown in Table 2 (taken from [4]). Assume that the consumption is related to temperature through an Arrhenius type of relation, namely,

x = A exp(−Q/(RT)),

where R is the universal gas constant (8.314 J/mol/K), T is the absolute temperature, Q is the activation energy for the reaction, x is the copper consumption (in microns), and A is the pre-exponential constant. From the given data, estimate Q and A.
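
A hedged sketch of one way to attack this in Octave: taking logarithms makes the relation linear, ln x = ln A − (Q/R)(1/T), so a straight-line fit of ln x against 1/T gives Q and A (the data values below are placeholders for Table 2):

    % Arrhenius fit via linearization: ln(x) = ln(A) - (Q/R)*(1/T)
    T = [700 750 800 850];        % K (placeholder values)
    x = [1.2 2.9 6.3 12.4];       % microns (placeholder values)
    R = 8.314;                    % J/mol/K
    c = polyfit(1 ./ T, log(x), 1);
    Q = -c(1) * R                 % activation energy, J/mol
    A = exp(c(2))                 % pre-exponential constant, microns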


Assignment - Numerical Differentiation and Integration

Use the linear fit for the elastic region that you carried out in the previous problem and calculate the resilience by integrating the fit from zero to the elastic limit. How does it compare with the numerical result?


Matrix Algebra

   Matrix algebra is very useful in many materials science and engineering problems. In this section, we will discuss some simple examples of such matrix calculations. The book on crystal geometry by Bhadeshia [8] (which is available for free download online) is a great place to read about many such applications.


Coordinate Transformations

   Consider two adjacent grains in polycrystalline copper; call them ‘A’ and ‘B’. The basis vectors ai and bi define the unit cells in A and B, respectively. Note that the lattice parameter is the same in both grains: |a| = |b| = aCu. The grains are oriented such that a1 and b1 are parallel; b2 makes an angle of 30° with a2 and 60° with a3; b3 makes an angle of 30° with a3. If a vector r has the components (1, 2√2, 2) in the ai frame of reference, what are its coordinates in the bi frame of reference?

  Note that the described orientation relationship can be described as a rotation of 30° about the a1 (b1) axis. Such a rotation is described by the following matrix:

  Of course, irrespective of the coordinate frame used, the magnitude of the vector remains unchanged.
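
  A minimal Octave sketch of the transformation; the sense of the 30° rotation (and hence the signs in the matrix below) is an assumption here, and the matrix from the original notes should be used:

    % Coordinate transformation: components of r in the rotated (b) frame.
    % Passive rotation of 30 degrees about the common a1 (b1) axis;
    % the sign convention is assumed for illustration.
    R = [1 0 0; 0 cosd(30) sind(30); 0 -sind(30) cosd(30)];
    r_a = [1; 2*sqrt(2); 2];
    r_b = R * r_a
    [norm(r_a) norm(r_b)]   % the magnitudes agree, as they must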

Bain Strain

   The homogeneous deformation that transforms FCC austenite into BCC martensite is known as the Bain strain. In the crystallographic coordinate system of the FCC phase, the Bain strain has the following matrix representation:

  This deformation can be shown to leave the vector (−0.645452, 0.408391, 0.645452) undistorted but rotated. In fact, it can be shown that any vector x that satisfies the condition

  where η1 = η2 = 1.283883 and η3 = 0.794705 are the principal strains of the Bain distortion in the FCC frame of reference, will remain undistorted [8].
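
  A minimal Octave sketch of how such a claim can be checked numerically; since the Bain matrix above is not reproduced here, a placeholder diagonal deformation is used:

    % Check whether a vector is left undistorted by a deformation B:
    % undistorted means |B*x| = |x|.
    B = diag([1.1 1.1 0.8]);                 % placeholder principal distortions
    x = [-0.645452; 0.408391; 0.645452];
    stretch = norm(B*x) / norm(x)            % equals 1 for an undistorted vector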

Videos For Matrix Algebra

Video 1

Video 2

Video 3 

Forum For Matrix Algebra

Forum not available

Eigenvalues and Eigenvectors

   One of the other important linear algebra problems that we are interested in solving is obtaining the eigenvalues and eigenvectors of a given matrix. For example, for a given stress tensor, by identifying the eigenvalues and eigenvectors, we get the principal values and principal directions of that stress tensor. The complete determination of all eigenvalues and eigenvectors can be achieved through QR decomposition; however, QR decomposition is best carried out using a computer. In this section, we will restrict ourselves to methods which are ideal for hand computation but still give important information.

Representing Deformation, Strain and Rotations

   Consider a general deformation P. It can be considered as consisting of a pure strain E (a strain that is represented by a symmetric matrix irrespective of the frame of reference and consists only of simple extensions and contractions along the principal axes) and a rigid body rotation R; that is, P = RE.

Let us consider

PᵀP = (RE)ᵀ(RE) = EᵀRᵀRE = E²,

since for a rigid body rotation RᵀR = I and for a pure deformation Eᵀ = E.

   Consider a Co-6.5 wt.% Fe alloy. It undergoes a martensitic transformation – from the face centred cubic (fcc) to the hexagonal close packed (hcp) structure. There is no density change associated with this transformation. The invariant plane of the transformation is the close packed {111}fcc plane; the shear direction is ⟨112⟩fcc. The magnitude of the shear is 1/√8. The transformation strain associated with this martensitic transformation can be represented using the following matrix P [8]:

 

   Suppose we want to know the maximum strain experienced by any lattice vector of the parent lattice as a result of this transformation. That is obtained by finding the eigenvalues of the pure strain component E of the transformation matrix P. As noted earlier, PᵀP = E², and hence the square roots of the eigenvalues of PᵀP give the eigenvalues of the pure strain E.

  The roots of this polynomial give the eigenvalues. Thus, the problem of finding the eigenvalues reduces to finding the zeros of a polynomial.

Fixed Point Iteration

   One of the methods of finding the roots of a polynomial is fixed point iteration (or the x = g(x) method). Let us rewrite Eq. 10 as follows:

  This equation, starting from 0, converges to one of the roots, namely 0.70346, after 195 iterations (within an error of 10⁻⁶), and, starting from 2, converges to another root, namely 1.4215, after 140 iterations. Since the trace of the matrix is an invariant, and is equal to the sum of the eigenvalues, we see that the third eigenvalue is 1.0. However, this fixed point iteration does not converge to 1.0 irrespective of the initial value chosen (unless 1.0 itself is chosen).

 The fixed point iteration of Eq. 12 does converge to unity starting from zero, albeit after more than 1000 iterations; more interestingly, even though the root is real, the iteration meanders into the complex plane before converging to unity.
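
 A minimal Octave sketch of fixed point iteration; since Eqs. 10-12 are not reproduced here, the g below is a placeholder for the actual rearrangement:

    % Fixed point iteration x = g(x)
    g = @(x) cos(x);        % placeholder g; substitute the rearrangement of Eq. 10
    x = 0;  tol = 1e-6;
    for n = 1:10000
      xnew = g(x);
      if abs(xnew - x) < tol, break; end
      x = xnew;
    end
    printf("root = %.5f after %d iterations\n", x, n);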

Newton’s Method

   There is another fixed point-like iteration method that converges faster. It is known as Newton's method. In this method, we use

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ),

where f′ is the derivative of f with respect to x.

Let us start with an initial guess of zero and use the Newton's method algorithm:

 We get the values 0.32000, 0.52055, 0.63599, 0.68907, 0.70257 and 0.70346 for iterations 1 to 6. Thus, in about seven steps, we converge.

 Similarly, for an initial guess of 1.1, we get 0.99375 and 1.0 at steps 1 and 2. For an initial guess of 2.0, we get convergence to 1.4215 within about 9 steps.

 Thus, it is clear that Newton's method is superior in terms of convergence. However, Newton's method shares a problem with fixed point iteration: it can converge to a far-off root even when the starting point is close to another root.
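
 A minimal Octave sketch of Newton's method, with a placeholder polynomial standing in for Eq. 10:

    % Newton's method: x <- x - f(x)/f'(x)
    f  = @(x) x^3 - 2*x - 5;    % placeholder polynomial
    fp = @(x) 3*x^2 - 2;        % its derivative
    x = 2;                      % initial guess
    for n = 1:50
      dx = f(x) / fp(x);
      x = x - dx;
      if abs(dx) < 1e-6, break; end
    end
    printf("root = %.6f after %d iterations\n", x, n);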

Bisection Method

  If we can bracket a zero of the polynomial, then the bisection method assures convergence to the root. In this method, we use the fact that the root is a point at which f(x) = 0; hence, near the root, f(x − δ) f(x + δ) < 0.

  In bisection, we first identify two points a and b which bracket the root. In the first iteration, we consider the mid-point c = (a + b)/2. If f(a)f(c) < 0, we reduce the interval to (a, c) by discarding (c, b). Otherwise, we discard (a, c) and retain (c, b). In the next step, we halve the current interval and carry on as above. Thus, the root can be identified to the desired accuracy.

   Considering the interval (0.8, 1.1) for Eq. 10, we can identify the root, namely, unity. Thus, the bisection algorithm by design identifies the bracketed root; unlike fixed point iteration or Newton's method, the iterations do not jump over the root and identify some other root.
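
   A minimal Octave sketch of bisection, again with a placeholder polynomial:

    % Bisection: halve the bracketing interval [a, b] until it is small enough
    f = @(x) x^3 - 2*x - 5;     % placeholder polynomial
    a = 2;  b = 3;              % bracket: f(2) < 0 < f(3)
    while (b - a) > 1e-6
      c = (a + b) / 2;
      if f(a) * f(c) < 0
        b = c;                  % root lies in (a, c)
      else
        a = c;                  % root lies in (c, b)
      end
    end
    root = (a + b) / 2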

Gerschgorin’s Theorem

   How do we identify the brackets for the roots? Gerschgorin's theorem states that the eigenvalues of a matrix lie in circles whose centres are at the diagonal elements aii, each with radius equal to the sum of the magnitudes of the other elements in row i.
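
  A minimal Octave sketch that computes the Gerschgorin centres and radii for an arbitrary illustrative matrix:

    % Gerschgorin circles: centre a(i,i), radius = sum of |off-diagonals| in row i
    A = [4 1 0; 1 3 1; 0 1 2];                 % illustrative matrix
    centres = diag(A);
    radii = sum(abs(A), 2) - abs(diag(A));
    [centres radii]                            % each eigenvalue lies in some circle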

 Co-6.5 wt.% Fe: Maximum Strains and Directions

   To answer the maximum strain question raised earlier: since the eigenvalues of the pure strain matrix are the square roots of the eigenvalues of the PᵀP matrix, they are 1.1923, 1.0 and 0.83873. Thus, the maximum stretching is less than 20% and the maximum contraction is about 17% during the martensitic transformation in the Co-6.5 wt.% Fe alloy.

  The directions of the maximum stretching and contraction are given by the corresponding eigenvectors; the eigenvectors of E and PᵀP are the same. By substituting the eigenvalues in the equation

  we obtain the eigenvector (u, v, w) corresponding to that eigenvalue. This leads us to the next topic, namely, simultaneous linear equations.

Power Method

    Before discussing the solution of simultaneous linear equations, we discuss the power method for eigenvalue problems. The power method is an iterative method for finding the largest eigenvalue of a given matrix and the corresponding eigenvector. The advantage is that both the eigenvalue and the eigenvector are obtained in one go. The disadvantage is that it can only identify the largest eigenvalue. Of course, by considering the inverse of the given matrix and using the power method on it, we can identify the smallest eigenvalue of the given matrix.

   The algorithm for the power method is as follows:

 1. Suppose we want to find the largest eigenvalue and the corresponding eigenvector of a matrix A. Take a random vector u and calculate v = Au;

 2. Normalise v by dividing each component by the component largest in magnitude; call the normalised v the new u;

 3. Go back to step 1 until the change in the normalising factor becomes negligible. The normalising factor is the eigenvalue and the vector v is the eigenvector. A minimal Octave sketch is given below.
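
A minimal Octave sketch of the power method, using an arbitrary symmetric matrix for illustration:

    % Power method: iterate v = A*u, normalising by the largest component
    A = [4 1 0; 1 3 1; 0 1 2];      % illustrative matrix
    u = [1; 1; 1];                  % starting vector
    lambda_old = 0;
    for n = 1:1000
      v = A * u;
      [~, i] = max(abs(v));
      lambda = v(i);                % normalising factor -> largest eigenvalue
      u = v / lambda;
      if abs(lambda - lambda_old) < 1e-6, break; end
      lambda_old = lambda;
    end
    lambda                          % largest eigenvalue; u is its eigenvector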

Videos for Eigenvalues and Eigenvectors

Video 1

Video 2

Forum For Eigenvalues and Eigenvectors

Forum not available

Simultaneous Linear Equations

     In many problems of practical interest, we need to solve a set of linear equations simultaneously for a given number of unknowns, as in Eq. 16 above. In this section, we discuss some numerical methods for solving systems of simultaneous linear equations (which are amenable to hand computation).

Non-linear Fitting

Consider the dependence of the linear thermal expansion (α) of BN on temperature in the range 77 to 1205 K, as shown in Table 1.

Table 1: The variation of linear thermal expansion with temperature in BN in the temperature regime 77 – 1205 K. The data is taken from [6].

Table 2: The variation of linear thermal expansion with temperature in BN in the temperature regime 77 – 1205 K.

Using Table 2, one can see that the matrix reduces to
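
The reduced matrix itself is not reproduced here; as a minimal Octave sketch, the normal equations for the quadratic fit α = a0 + a1 T + a2 T² can be formed and solved as follows (the data values are placeholders for Table 2):

    % Normal equations for the quadratic fit alpha = a0 + a1*T + a2*T^2
    T = [77 300 550 800 1000 1205]';            % K (placeholder values)
    alpha = [0.5 1.4 2.9 4.8 6.6 8.5]';         % placeholder values
    X = [ones(size(T)) T T.^2];                 % design matrix
    A = X' * X;   b = X' * alpha;               % normal equations A*a = b
    a = A \ b                                   % fit parameters a0, a1, a2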

Gaussian Elimination

The method that we used above can be made more algorithmic using Gaussian elimination with partial pivoting. In this method, we form the augmented matrix, which contains the coefficient matrix with the right hand side vector appended to it as a column, as shown below:

The first step is to reduce the augmented matrix to upper triangular form using the following elimination operations:

• Interchange any two rows of the augmented matrix; and
• Replace any row with a linear combination of itself and any other row.

Note that these operations leave the solution of the given set of equations unchanged. Let us consider the first row, for example. By dividing all its terms by 6, the 11 term of the matrix becomes unity. Similarly, we can replace the second row by the linear combination (first row − second row divided by 3810); this leads to a zero at the 21 position, and so on. This procedure can be continued till the 11, 22 and 33 terms become unity and the 21, 31 and 32 terms become zero. Then, by back substitution, we can obtain the unknowns: a2 first, then a1 and finally a0.

 Note that this procedure fails if any of the terms by which we divide the rows is zero. To avoid this, we use what is known as partial pivoting: in each column, we interchange rows in such a way that the element largest in magnitude appears at the aii-th position. Even if the aii-th term is not zero, if pivoting is not done, the Gaussian elimination algorithm can become unstable – that is, the round-off errors can accumulate and lead to loss of accuracy. This is because small pivot values, when used in division, lead to large multiplication factors, which in turn produce loss of significance.
 Considering the augmented matrix, we thus first interchange the first row with the last one:

We divide the first row by 3416468; we replace the second row by (first row − second row/3810) and the third row by (first row − third row/6). This leads to

Now, we exchange the second and third rows, and again divide the second row by 365.80 and replace the third row by (second row − third row/104.09). This leads to

Now, from the third row, a2 is obtained directly; from the second row, using a2, we can obtain a1; and from the first row, using a2 and a1, we obtain a0. Thus, the fit is obtained to be

Compared to Eq. 23, we can see that the parameters obtained using Gaussian elimination are slightly different.
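
On a computer, the whole elimination is a one-liner; a minimal Octave sketch (with a placeholder system, since the augmented matrix above is not reproduced here):

    % Solving A*x = b; Octave's backslash uses Gaussian elimination
    % with partial pivoting internally (A, b are illustrative placeholders)
    A = [2 1 1; 4 3 3; 8 7 9];
    b = [4; 10; 24];
    x = A \ b                     % solution by elimination; here x = [1; 1; 1]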

Iterative Improvement

It is possible to improve the accuracy of the solution obtained by the Gaussian elimination method using the following iterative procedure:

• Let Ax = b be the equation for which iterative improvement is sought, and let x̂ be the approximate solution. Calculate the residual r = b − Ax̂;
• Solve the equation Ae = r;
• Replace x̂ by x̂ + e.

For example, for Eq. 26, let us assume an approximate solution of x̂ = (−400, 0.1, 0.01). We see that r = (−7504.5800, −9.4223, 1.3149). Solving Ae = r gives e = (−11.071, −0.021223, −0.0070834). This in turn leads to the improved solution x̂ = (−411.07, 0.078777, 0.0029166).
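
A minimal Octave sketch of one improvement step (the system and the approximate solution are placeholders):

    % One step of iterative improvement of an approximate solution xhat
    A = [2 1 1; 4 3 3; 8 7 9];  b = [4; 10; 24];   % placeholder system
    xhat = [0.9; 1.1; 1.0];                        % rough approximate solution
    r = b - A * xhat;                              % residual
    e = A \ r;                                     % error equation A*e = r
    xhat = xhat + e                                % improved solution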

Gauss-Seidel Algorithm

There is an iterative procedure (known as the Gauss-Seidel procedure) which can be used to solve the set of equations represented in Eq. 20. This procedure works better on a computer (where the algorithm looks slightly different). However, since we will be using pen, paper and calculator to carry out the calculations, Gauss-Seidel can be carried out as follows.

Consider Eq. 20; it can be rewritten as

Now, we can start with an initial guess, say (0, 0, 0), for (a0, a1, a2). This leads to a new set of values, namely (1299.7, 1.0899, 3.3562 × 10⁻⁴). Using this as the new guess, we get (416.45, 1.7390, 5.5709 × 10⁻⁴). We can continue in this manner till the guess we use and the solution we obtain differ only by a small, predetermined error value; that is, till they converge to within a given error. In this case, declaring convergence when the square of the distance between the old and new guesses is less than 10⁻⁶, we obtain the solution (−411.39, 0.079944, 0.0029158) after 482 iterations.

Note that the large number of iterations means that the methodology is better suited to computers than to hand computation. Further, convergence of the solution is guaranteed only if the matrix is diagonally dominant, or symmetric and positive definite.
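
A minimal Octave sketch of Gauss-Seidel for a generic 3×3 system (a placeholder diagonally dominant system is used, since Eq. 20 is not reproduced here):

    % Gauss-Seidel iteration for A*x = b (A should be diagonally dominant)
    A = [10 1 1; 1 10 1; 1 1 10];  b = [12; 12; 12];   % placeholder system
    x = zeros(3, 1);
    for n = 1:1000
      xold = x;
      for i = 1:3
        x(i) = (b(i) - A(i, [1:i-1, i+1:3]) * x([1:i-1, i+1:3])) / A(i, i);
      end
      if sum((x - xold).^2) < 1e-6, break; end
    end
    x                 % converges to [1; 1; 1] for this system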

Design Matrix

There is another matrix that can be generated to solve nonlinear least squares fitting problems. This matrix is called the design matrix [7]. The design matrix for the above quadratic fit problem, for example, is as follows:

By solving the resulting equations for the unknowns ai, one can again find the fitting parameters.

The design matrix is not a square matrix. So, a variant of the Gaussian elimination method can be used in which we make several of the rows identically zero and reduce the rest to upper triangular form – along with iterative refinement, if needed.

The design matrix is also useful in cases in which we need to fit to a specific functional form which is not a generic polynomial. For example, in the case of the stress-strain curve, we want to fit the data to y = mx and not to y = mx + c; or, in the case of the specific heat as a function of temperature at low temperatures, we want to fit the data to cp = aT + bT³.

It is easier to set up the design matrix in such cases and solve them.
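
A minimal Octave sketch of a design matrix fit for the constrained form cp = aT + bT³ (the data values are placeholders):

    % Least squares fit to cp = a*T + b*T^3 via the design matrix
    T  = [1 2 3 4 5]';                 % K (placeholder values)
    cp = [2.1 4.8 9.4 17.1 29.0]';     % placeholder values
    X = [T T.^3];                      % design matrix: one column per term
    coeff = X \ cp;                    % least squares solution for [a; b]
    a = coeff(1), b = coeff(2)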

Revisiting the Eigenvector Problem

Let us consider Eq. 16 for the eigenvalue 1:

Further Reading

Gerald and Wheatley [5] have a nice discussion of shifting in the power method, which can help obtain the intermediate eigenvalues and the corresponding eigenvectors.

Slides for Eigenvalues and Eigenvectors and Simultaneous Linear Equations

Videos For Simultaneous Linear Equations

Video 1

Video 2 

Quiz - II

Quiz not available

Forum For Simultaneous Linear Equations

Forum not available

Optimization

Free Energy Minimization: Search Methods

A function is said to be unimodal in a given interval if the interval encloses a single minimum point. If a function is unimodal, it is possible to use search methods to identify the minimum point without using derivatives. There are many such search methods, such as the interval halving method, the Fibonacci method and the golden section method [9]. In this section, we will use the interval halving method.

Let us consider Eq. 3
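
Since Eq. 3 is not reproduced here, a minimal Octave sketch of interval halving on a placeholder unimodal function:

    % Interval halving search for the minimum of a unimodal function
    f = @(x) (x - 1).^2 + 2;       % placeholder unimodal function; use Eq. 3
    a = 0;  b = 3;
    while (b - a) > 1e-4
      xm = (a + b) / 2;                  % midpoint
      x1 = a + (b - a) / 4;              % left quarter point
      x2 = b - (b - a) / 4;              % right quarter point
      if f(x1) < f(xm)
        b = xm;                          % minimum lies in (a, xm)
      elseif f(x2) < f(xm)
        a = xm;                          % minimum lies in (xm, b)
      else
        a = x1;  b = x2;                 % minimum lies in (x1, x2)
      end
    end
    xmin = (a + b) / 2                   % minimum near x = 1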

Linear programming: Optimizing Sand Casting Parameters

Unlike the previous problem, if the function to be optimized (as well as the constraints) is linear, then the problem is known as a linear programming problem and can be solved using the simplex method.

Consider the following problem, for example [10]:
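
The problem statement from [10] is likewise not reproduced here; as a minimal sketch, Octave's built-in glpk routine (which uses the simplex method by default) solves linear programs of this kind. The objective and constraints below are illustrative placeholders:

    % Linear programming with GLPK: maximise c'*x subject to A*x <= b, x >= 0
    c = [3; 2];                   % placeholder objective coefficients
    A = [1 1; 2 1];  b = [4; 6];  % placeholder constraints
    lb = [0; 0];  ub = [];
    ctype = "UU";                 % both constraints are upper bounds
    vartype = "CC";               % continuous variables
    sense = -1;                   % -1 -> maximisation
    [x, fmax] = glpk(c, A, b, lb, ub, ctype, vartype, sense)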

Slides for Optimization

Videos For Optimization

Video 1

Video 2

Quiz-III

Quiz not available

Forum for Optimization

Forum not available

Ordinary Differential Equations

There are many problems which, when posed mathematically, give rise to ordinary differential equations (ODEs). To solve such a problem, we need to obtain the solution of the ODE under the given initial or boundary conditions. There are several numerical techniques for obtaining the solution of ODEs. In this section, we will discuss two methods, namely, the improved Euler method (Heun's method) and the classical fourth-order Runge-Kutta method.

Mixing problem: Heun's Method

Consider a tank containing 100 litres of water in which 20 kilograms of salt are dissolved. Two litres of brine, each litre containing one kilogram of dissolved salt, run into the tank per minute, and the mixture is kept uniform by constant stirring. Two litres of the mixture are removed from the tank every minute. Find the amount of salt in the tank as a function of time for the first 10 minutes.

This problem can be formulated as an ordinary differential equation using the law of conservation of mass. Specifically, let y(t) be the total amount of salt in the tank at any time; then the time rate of change y′ = dy/dt of y is nothing but the inflow of salt minus the outflow. The inflow is 2 kg per minute (2 litres of brine with one kilogram of salt dissolved in each litre). The amount of water in the tank is always 100 litres (since two litres of brine enter and two litres of the mixture are taken out every minute). Hence, the outflow of salt is 2y(t)/100 kg per minute.

Thus, we have to solve the equation

y′ = 2 − 0.02 y, with y(0) = 20.
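
A minimal Octave sketch of Heun's method for this equation (a step size of one minute is an illustrative choice):

    % Heun's method (improved Euler) for y' = 2 - 0.02*y, y(0) = 20
    f = @(t, y) 2 - 0.02 * y;
    h = 1;  t = 0;  y = 20;            % step of 1 minute
    for n = 1:10
      k1 = f(t, y);
      k2 = f(t + h, y + h * k1);       % predictor slope at the far end
      y = y + h * (k1 + k2) / 2;       % corrector: average the two slopes
      t = t + h;
    end
    y     % salt after 10 minutes; exact: 100 - 80*exp(-0.2) = 34.50 kg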

Heating problem: Runge-Kutta Method

Suppose you have completed a heat treatment experiment and removed the sample from the furnace at 2 pm. The temperature of the sample at 2 pm is 1000 K and the room temperature is 303 K. What will be the temperature of the sample after 15 minutes, assuming that the cooling of the sample obeys Newton's law of cooling and that the value of the constant of proportionality is −0.05 (in per-hour units)?
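
A minimal Octave sketch of the classical fourth-order Runge-Kutta method for this problem (a single step of 0.25 hr, i.e., 15 minutes, is used for illustration; smaller steps improve accuracy):

    % Classical RK4 for Newton's law of cooling: dT/dt = k*(T - Troom)
    k = -0.05;  Troom = 303;           % per hour; kelvin
    f = @(t, T) k * (T - Troom);
    h = 0.25;  t = 0;  T = 1000;       % one step of 15 minutes = 0.25 hr
    k1 = f(t, T);
    k2 = f(t + h/2, T + h*k1/2);
    k3 = f(t + h/2, T + h*k2/2);
    k4 = f(t + h, T + h*k3);
    T = T + h * (k1 + 2*k2 + 2*k3 + k4) / 6
    % exact: 303 + 697*exp(-0.05*0.25) = 991.34 K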

 

Slides for Ordinary Differential Equations

Videos For Ordinary Differential Equations

Forum for Ordinary Differential Equations

Forum not available

Solutions

Page view restricted for students

References
