Semester 1: B.Sc. Mathematics
Reciprocal Equations - Standard form, Increasing or decreasing the roots of a given equation, Removal of terms, Approximate solutions of roots of polynomials by Horner's method
Reciprocal Equations
Standard Form
A reciprocal equation is a polynomial equation f(x) = 0 whose roots occur in reciprocal pairs: if α is a root, then 1/α is also a root. Equivalently, its coefficients read the same from both ends (a_k = a_(n-k), the first type) or are equal in magnitude and opposite in sign (a_k = -a_(n-k), the second type). In the standard form of even degree 2m, dividing through by x^m and substituting y = x + 1/x reduces the equation to one of degree m.
Increasing or Decreasing the Roots
The roots of a given equation can be increased or decreased by a constant h: if f(x) = 0 has roots α1, ..., αn, then f(y + h) = 0 has roots αi - h (the roots are diminished by h), and f(y - h) = 0 has roots αi + h. The coefficients of the transformed equation are obtained by repeated synthetic division of f(x) by (x - h).
Removal of Terms
Diminishing the roots by a suitably chosen constant can make a selected term of the equation vanish. For a0 x^n + a1 x^(n-1) + ... = 0, the substitution x = y - a1/(n a0) removes the second term (the coefficient of y^(n-1) becomes zero), which often simplifies the search for roots.
Approximate Solutions of Roots of Polynomials by Horner's Method
Horner's method evaluates a polynomial by synthetic division, using only n multiplications and n additions for a polynomial of degree n. To approximate a real root, first locate it between consecutive integers by a sign change of the polynomial, then repeatedly diminish the roots by the digit just found and multiply the roots by 10; each cycle yields one more decimal digit of the root.
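As an illustration (not part of the original text), Horner's evaluation scheme can be sketched in Python; here the digits are refined by bisection rather than the classical digit-by-digit root-diminishing, but every polynomial evaluation uses Horner's rule:

```python
def horner(coeffs, x):
    """Evaluate a polynomial (coefficients listed from highest degree) at x."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def approx_root(coeffs, lo, hi, tol=1e-10):
    """Bisect [lo, hi], assumed to bracket a sign change, using Horner evaluations."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if horner(coeffs, lo) * horner(coeffs, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# x^3 - 2x - 5 = 0 (the classical example) has a root between 2 and 3.
root = approx_root([1, 0, -2, -5], 2, 3)
```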
Summation of Series - Binomial, Exponential, Logarithmic series, Theorems without proof, Approximations
Summation of Series
Binomial Series
The binomial series is an expansion of (a + b)^n for any real exponent n. The general term is (n choose k) * a^(n-k) * b^k, where (n choose k) = n(n-1)...(n-k+1)/k!. When n is a positive integer, k ranges from 0 to n and the sum is finite; for any other real n the series is infinite and converges for |b/a| < 1.
Exponential Series
The exponential series is a power series that represents the function e^x. It can be expressed as the sum of the infinite series x^n/n! for n=0 to infinity. This series converges for all real values of x.
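As a quick numerical check (a Python sketch, not part of the text), the partial sums of x^n/n! converge to e^x:

```python
import math

def exp_series(x, terms=40):
    """Partial sum of the exponential series: sum of x^n / n! for n = 0 .. terms-1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # next term: x^(n+1) / (n+1)!
    return total
```

Forty terms already match math.exp to machine precision for moderate x.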
Logarithmic Series
The logarithmic series is the Taylor expansion of ln(1 + x): the sum of (-1)^(n+1) * x^n/n for n = 1 to infinity. It converges for -1 < x ≤ 1; at x = 1 it gives ln 2 = 1 - 1/2 + 1/3 - ...
Theorems without Proof
Approximations
For small x, each series can be approximated by its first terms: (1 + x)^n ≈ 1 + nx from the binomial series, e^x ≈ 1 + x from the exponential series, and ln(1 + x) ≈ x from the logarithmic series. In each case the error is of order x^2.
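These first-order approximations can be checked numerically; a small Python sketch (the exponent 5 in the binomial case is an arbitrary illustrative choice):

```python
import math

x = 0.01
err_exp = abs(math.exp(x) - (1 + x))            # e^x vs 1 + x
err_log = abs(math.log(1 + x) - x)              # ln(1 + x) vs x
err_binom = abs((1 + x) ** 5 - (1 + 5 * x))     # (1 + x)^5 vs 1 + 5x
# Each error is of order x^2 (times a modest constant).
```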
Inverse of a square matrix up to order 3, Characteristic equation, Eigen values and Eigen Vectors, Similar matrices, Cayley Hamilton Theorem (statement), Finding powers of square matrix, Diagonalization of square matrices
Inverse of a Square Matrix
The inverse of a square matrix A is the matrix A^-1 satisfying AA^-1 = A^-1A = I, where I is the identity matrix; it exists precisely when det(A) ≠ 0. For a 2x2 matrix A = [[a, b], [c, d]], the inverse is A^-1 = (1/det(A)) * [[d, -b], [-c, a]], where det(A) = ad - bc. For order 3, the adjoint (adjugate) method A^-1 = adj(A)/det(A) or row reduction can be used.
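The adjoint method for order 3 can be written out explicitly; a Python sketch (matrices as nested lists, an illustrative representation):

```python
def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def inv3(m):
    """Inverse of a 3x3 matrix via the adjugate: A^-1 = adj(A) / det(A)."""
    det = det3(m)
    if det == 0:
        raise ValueError("matrix is singular")
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    # adj(A) is the transpose of the cofactor matrix.
    adj = [[e*i - f*h, -(b*i - c*h), b*f - c*e],
           [-(d*i - f*g), a*i - c*g, -(a*f - c*d)],
           [d*h - e*g, -(a*h - b*g), a*e - b*d]]
    return [[x / det for x in row] for row in adj]

A = [[2, 0, 1], [1, 1, 0], [0, 1, 3]]
A_inv = inv3(A)
# A * A_inv should reproduce the 3x3 identity matrix.
product = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
```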
Characteristic Equation
The characteristic equation of a matrix A is obtained from the determinant equation |A - λI| = 0, where λ is an eigenvalue and I is the identity matrix. Solving for λ gives the eigenvalues of the matrix. This is a polynomial equation of degree n for an n x n matrix.
Eigenvalues and Eigenvectors
Eigenvalues are scalars λ that satisfy the equation A v = λv, where v is a non-zero vector known as an eigenvector. The corresponding eigenvectors provide insights into the transformations represented by the matrix A. To find eigenvalues, solve the characteristic equation, and for each eigenvalue, substitute back to find the respective eigenvectors.
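For a 2x2 matrix the characteristic equation is a quadratic in λ, so eigenvalues and an eigenvector can be computed directly; a Python sketch (assuming real eigenvalues, i.e. a non-negative discriminant):

```python
import math

def eigen2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from lambda^2 - (a+d)*lambda + (ad-bc) = 0."""
    trace, det = a + d, a * d - b * c
    disc = trace * trace - 4 * det   # assumed non-negative here
    root = math.sqrt(disc)
    return (trace + root) / 2, (trace - root) / 2

lam1, lam2 = eigen2(4, 1, 2, 3)   # A = [[4, 1], [2, 3]]
# For lam1, v = (1, lam1 - 4) solves (A - lam1*I)v = 0 since the b-entry is nonzero.
v = (1.0, lam1 - 4)
Av = (4 * v[0] + 1 * v[1], 2 * v[0] + 3 * v[1])   # should equal lam1 * v
```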
Similar Matrices
Two square matrices A and B are said to be similar if there exists an invertible matrix P such that B = P^-1AP. Similar matrices share many properties, such as having the same eigenvalues, characteristic polynomial, and rank. They represent the same linear transformation under different bases.
Cayley-Hamilton Theorem
The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic equation. If A is an n x n matrix with characteristic polynomial p(λ), then substituting A into the polynomial yields p(A) = 0.
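The theorem can be verified directly for a 2x2 matrix, whose characteristic polynomial is λ^2 - (trace)λ + det; a Python sketch with an illustrative matrix:

```python
def mat2_mul(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2],
     [3, 4]]
trace = A[0][0] + A[1][1]                     # 5
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]       # -2
A2 = mat2_mul(A, A)
# Cayley-Hamilton: A^2 - trace*A + det*I should be the zero matrix.
residual = [[A2[i][j] - trace * A[i][j] + det * (1 if i == j else 0)
             for j in range(2)] for i in range(2)]
```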
Finding Powers of Square Matrix
To find powers of a square matrix A, one can use direct multiplication, diagonalization (A^n = PD^nP^-1), or the Cayley-Hamilton theorem, which expresses A^n in terms of lower powers of A. For large powers, diagonalization is the most convenient, since D^n is obtained by raising each diagonal entry to the n-th power.
Diagonalization of Square Matrices
A matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that A = PDP^-1. Diagonalization is possible if A has n linearly independent eigenvectors. This process simplifies matrix operations such as powers and exponentials.
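A worked sketch in Python for A = [[2, 1], [0, 3]] (an illustrative matrix with eigenvalues 2 and 3): the fifth power computed via diagonalization agrees with repeated multiplication.

```python
def mat2_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A = [[2, 1], [0, 3]] has eigenvalues 2 and 3 with eigenvectors (1, 0) and (1, 1),
# so P = [[1, 1], [0, 1]], D = diag(2, 3), and P^-1 = [[1, -1], [0, 1]].
def A_power(n):
    P = [[1, 1], [0, 1]]
    P_inv = [[1, -1], [0, 1]]
    Dn = [[2 ** n, 0], [0, 3 ** n]]          # D^n: raise each diagonal entry
    return mat2_mul(mat2_mul(P, Dn), P_inv)

# Compare against repeated multiplication.
A = [[2, 1], [0, 3]]
direct = [[1, 0], [0, 1]]
for _ in range(5):
    direct = mat2_mul(direct, A)
```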
Expansions of sin nθ, cos nθ, tan nθ in powers of sin θ, cos θ, Expansion of tan(θ1 + θ2 + ... + θn), Expansions of cosine, sine, and tangent - related problems
Expansions of sin nθ, cos nθ, and tan nθ in powers of sin θ and cos θ
Expansion of sin nθ
By De Moivre's theorem, (cos θ + i sin θ)^n = cos nθ + i sin nθ. Equating imaginary parts gives sin nθ = (n choose 1) cos^(n-1)(θ) sin(θ) - (n choose 3) cos^(n-3)(θ) sin^3(θ) + ... For example, sin 3θ = 3 cos^2(θ) sin(θ) - sin^3(θ).
Expansion of cos nθ
Equating real parts of the same identity gives cos nθ = cos^n(θ) - (n choose 2) cos^(n-2)(θ) sin^2(θ) + (n choose 4) cos^(n-4)(θ) sin^4(θ) - ... For example, cos 3θ = cos^3(θ) - 3 cos(θ) sin^2(θ).
Expansion of tan nθ
Dividing the expansion of sin nθ by that of cos nθ and writing t = tan θ gives tan nθ = [(n choose 1) t - (n choose 3) t^3 + ...] / [1 - (n choose 2) t^2 + (n choose 4) t^4 - ...]. For example, tan 3θ = (3t - t^3) / (1 - 3t^2).
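The expansions of sin nθ, cos nθ, and tan nθ in powers of sin θ and cos θ (this unit's topic) can be verified numerically for n = 3; a Python sketch with an arbitrarily chosen test angle:

```python
import math

theta = 0.7   # arbitrary test angle
s, c = math.sin(theta), math.cos(theta)

# sin 3θ = 3 cos^2 θ sin θ - sin^3 θ,  cos 3θ = cos^3 θ - 3 cos θ sin^2 θ
sin3 = 3 * c**2 * s - s**3
cos3 = c**3 - 3 * c * s**2
# tan 3θ = (3t - t^3) / (1 - 3t^2) with t = tan θ
t = math.tan(theta)
tan3 = (3 * t - t**3) / (1 - 3 * t**2)
```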
Expansion of tan(θ1 + θ2 + ... + θn)
Let sk denote the sum of the products of tan θ1, tan θ2, ..., tan θn taken k at a time. Then tan(θ1 + θ2 + ... + θn) = (s1 - s3 + s5 - ...) / (1 - s2 + s4 - ...). For n = 2 this reduces to the familiar tan(θ1 + θ2) = (tan θ1 + tan θ2) / (1 - tan θ1 tan θ2).
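The addition formula tan(θ1 + θ2 + θ3) = (s1 - s3) / (1 - s2), where sk is the sum of products of tan θ1, tan θ2, tan θ3 taken k at a time, can be checked numerically (angles chosen arbitrarily):

```python
import math

angles = (0.2, 0.3, 0.4)
t = [math.tan(a) for a in angles]
s1 = t[0] + t[1] + t[2]
s2 = t[0]*t[1] + t[0]*t[2] + t[1]*t[2]
s3 = t[0] * t[1] * t[2]
lhs = math.tan(sum(angles))        # tan(0.9)
rhs = (s1 - s3) / (1 - s2)
```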
Applications and related problems
The expansions of sine, cosine, and tangent have applications in physics, engineering, and computer science. They are used in solving differential equations, modeling oscillations, and in signal processing.
Hyperbolic functions, Relation between circular and hyperbolic functions, Inverse hyperbolic functions, Logarithm of complex quantities, Summation of trigonometric series related problems
Hyperbolic Functions
Definition of Hyperbolic Functions
Hyperbolic functions are analogs of trigonometric functions based on hyperbolas instead of circles. The primary hyperbolic functions are:
- Hyperbolic sine: sinh(x) = (e^x - e^(-x)) / 2
- Hyperbolic cosine: cosh(x) = (e^x + e^(-x)) / 2
- Hyperbolic tangent: tanh(x) = sinh(x) / cosh(x)
Properties of Hyperbolic Functions
Hyperbolic functions have several properties similar to their circular counterparts:
- sinh(-x) = -sinh(x)
- cosh(-x) = cosh(x)
- tanh(-x) = -tanh(x)
- The identity cosh^2(x) - sinh^2(x) = 1.
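The exponential definitions and the identity above can be checked against Python's math module:

```python
import math

x = 1.25
sinh_x = (math.exp(x) - math.exp(-x)) / 2
cosh_x = (math.exp(x) + math.exp(-x)) / 2
tanh_x = sinh_x / cosh_x
identity = cosh_x ** 2 - sinh_x ** 2   # equals 1 in exact arithmetic
```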
Relation Between Circular and Hyperbolic Functions
Circular and hyperbolic functions are linked through complex arguments:
- e^(ix) = cos(x) + i*sin(x), while e^x = cosh(x) + sinh(x)
- cos(ix) = cosh(x) and sin(ix) = i*sinh(x)
- tan(ix) = i*tanh(x)
Inverse Hyperbolic Functions
The inverse hyperbolic functions are the inverses of the hyperbolic functions:
- Inverse hyperbolic sine: arsinh(x) = ln(x + √(x^2 + 1)), defined for all real x
- Inverse hyperbolic cosine: arcosh(x) = ln(x + √(x^2 - 1)), defined for x ≥ 1
- Inverse hyperbolic tangent: artanh(x) = (1/2) ln((1 + x)/(1 - x)), defined for |x| < 1
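A quick numerical check of the logarithmic formulas against Python's built-in inverse hyperbolic functions:

```python
import math

x = 0.6
arsinh = math.log(x + math.sqrt(x * x + 1))
artanh = 0.5 * math.log((1 + x) / (1 - x))
y = 2.0                                        # arcosh requires y >= 1
arcosh = math.log(y + math.sqrt(y * y - 1))
```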
Logarithm of Complex Quantities
Logarithms of complex quantities are defined through the polar form:
- For a complex number z = r*(cos(θ) + i*sin(θ)) with r > 0, ln(z) = ln(r) + i(θ + 2kπ) for any integer k; the complex logarithm is many-valued.
- The principal value takes -π < θ ≤ π.
- The usual laws of logarithms carry over to complex numbers, provided the many-valued nature is kept in mind.
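Python's cmath module computes the principal value, which can be compared with ln r + iθ directly:

```python
import cmath
import math

z = complex(-1.0, 1.0)                 # r = sqrt(2), θ = 3π/4
r, theta = abs(z), cmath.phase(z)      # phase() returns the principal argument
log_z = complex(math.log(r), theta)    # ln r + iθ, principal value
```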
Summation of Trigonometric Series
Trigonometric series can often be expressed in terms of exponential functions, allowing summation through identities or calculus techniques. Specific techniques include:
- Euler's formula e^(iθ) = cos θ + i sin θ, which converts a trigonometric series into a geometric (or other standard) series that can be summed in closed form.
- Summation using double angles and product-to-sum identities.
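As an example of the Euler-formula technique, the sum of cos kθ for k = 0, ..., n-1 is the real part of a geometric series in e^(iθ); a Python sketch:

```python
import cmath
import math

def cos_sum(theta, n):
    """Sum of cos(k*theta), k = 0 .. n-1, via the geometric series in e^(i*theta)."""
    q = cmath.exp(1j * theta)
    return ((1 - q ** n) / (1 - q)).real   # assumes theta is not a multiple of 2*pi

# Compare with term-by-term summation.
direct = sum(math.cos(k * 0.4) for k in range(10))
```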
