Constants, Identities, and Variations
(Some Essential Constants in Identities and Variations using $\large\rm\LaTeX$)
Calculus:
Bolzano's theorem (a special case of the Intermediate Value Theorem): if
- $f(x)$ is continuous on $[a,b]$, and
- $f(a)$ and $f(b)$ are of different signs,
then there exists at least one $c \in (a,b)$ such that $f(c) = 0$.
Power Rule:
\[\frac{d(x^n)}{dx}=nx^{n-1}\]
Sum Rule:
\[\frac{d}{dx}\bigl(g(x)+h(x)\bigr)=\frac{dg}{dx}+\frac{dh}{dx}\]
Product Rule:
\[\frac{d}{dx}\bigl(f(x)g(x)\bigr)=f(x)g^\prime(x)+g(x)f^\prime(x)\]
Chain Rule:
\[\frac{d}{dx}\bigl(g(h(x))\bigr)=\frac{dg}{dh}\bigl(h(x)\bigr)\cdot\frac{dh}{dx}(x)\]
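A quick worked example of the chain rule, taking the outer function $g(h)=\sin h$ and the inner function $h(x)=x^2$:
\[\frac{d}{dx}\sin(x^2)=\cos(x^2)\cdot\frac{d(x^2)}{dx}=2x\cos(x^2)\]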
Some Linear Algebra Concepts:
Displaying a vector:
$$\vec{x} = \begin{pmatrix}8\\6\\7\\5\\3\end{pmatrix}$$
Vector addition:
$$A = [a_{1}, a_{2}, \dotsc, a_{n}]$$
$$B = [b_{1}, b_{2}, \dotsc, b_{n}]$$
$$A + B = [a_{1} + b_{1}, a_{2} + b_{2}, \dotsc, a_{n} + b_{n}]$$
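Componentwise addition can be checked with a minimal Python sketch (plain lists, no libraries; `vec_add` and the sample vectors are illustrative choices):

```python
# Componentwise vector addition: (A + B)_i = a_i + b_i
def vec_add(a, b):
    assert len(a) == len(b), "vectors must have the same dimension"
    return [ai + bi for ai, bi in zip(a, b)]

A = [1, 2, 3]
B = [4, 5, 6]
print(vec_add(A, B))  # → [5, 7, 9]
```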
Calculating vector length: The length of a vector is found by squaring each component, summing them, and taking the square root of the sum. If $\vec{v}$ is a vector, its length is $\lvert\vec{v}\rvert$.
$$\lvert\vec{v}\rvert = \sqrt{\sum_{i=1}^{n}{v_i^2}}$$
The length (magnitude) of vectors:
\begin{align*}
\|\mathbf{v}\| \text{ or } \left\|\frac{a}{b}\right \|
\end{align*}
Scalar multiplication: Multiplying a vector by a scalar (real number) means multiplying every component by that real number, yielding a new vector.
$$1.5\,\vec{v} = 1.5 \times [3, 6, 8, 4] = [4.5, 9, 12, 6]$$
Inner product: The inner product of two vectors (also called the dot product or scalar product) defines multiplication of vectors. It is denoted by $(\vec{v_{1}}, \vec{v_{2}})$ or $\vec{v_{1}} \cdot \vec{v_{2}}$.
$$(\vec{x}, \vec{y}) = \vec{x} \cdot \vec{y} = \sum_{i=1}^{n}{x_{i}y_{i}}$$
For example, if $\vec{x} = [1, 6, 7, 4]$ and $\vec{y} = [3, 2, 8, 3]$, then
$$\vec{x} \cdot \vec{y} = 1(3) + 6(2) + 7(8) + 4(3) = 83$$
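The dot-product example can be verified with a short Python sketch (plain lists; `dot` is an illustrative helper name):

```python
# Inner (dot) product: the sum of componentwise products
def dot(x, y):
    assert len(x) == len(y), "vectors must have the same dimension"
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1, 6, 7, 4]
y = [3, 2, 8, 3]
print(dot(x, y))  # → 83
```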
Orthogonality:
Two vectors are orthogonal to each other if their inner product equals zero. For example, the vectors $[2, 1, -2, 4]$ and $[3, -6, 4, 2]$ are orthogonal, because
$$[2, 1, -2, 4] \cdot [3, -6, 4, 2] = 2(3) + 1(-6) - 2(4) + 4(2) = 0$$
Normal (unit) vector: A unit vector, sometimes called a normal vector, is a vector of length 1. Dividing each component of a vector by that vector's length yields its unit vector.
If $\vec{v} = [2, 4, 1, 2]$, then
$${\lvert}\vec{v}{\rvert} = \sqrt{2^2 + 4^2 + 1^2 + 2^2} = \sqrt{25} = 5$$
Then $\vec{u} = [2/5, 4/5, 1/5, 2/5]$ is a normal vector, because:
$${\lvert}\vec{u}{\rvert} = \sqrt{(2/5)^2 + (4/5)^2 + (1/5)^2 + (2/5)^2} = \sqrt{25/25} = 1$$
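The normalization above can be reproduced in Python (stdlib only; `length` and `normalize` are illustrative helper names):

```python
import math

# Length of a vector: square root of the sum of squared components
def length(v):
    return math.sqrt(sum(vi * vi for vi in v))

# Normalize: divide each component by the vector's length
def normalize(v):
    n = length(v)
    return [vi / n for vi in v]

v = [2, 4, 1, 2]
print(length(v))              # → 5.0
u = normalize(v)
print(u)                      # → [0.4, 0.8, 0.2, 0.4]
print(round(length(u), 12))   # → 1.0
```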
Orthonormal vectors:
Vectors of unit length that are orthogonal to each other are said to be orthonormal.
For example, normalizing the orthogonal pair $[2, 1, -2, 4]$ and $[3, -6, 4, 2]$ from above gives
$$\vec{u} = [2/5, 1/5, -2/5, 4/5]$$
and
$$\vec{v} = [3 / \sqrt{65}, -6 / \sqrt{65}, 4 / \sqrt{65}, 2 / \sqrt{65}]$$
which are orthonormal, because:
$${\lvert}\vec{u}\rvert = \sqrt{(2/5)^2 + (1/5)^2 + (-2/5)^2 + (4/5)^2} = 1$$
$${\lvert}\vec{v}\rvert = \sqrt{(3 / \sqrt{65})^2 + (-6 / \sqrt{65})^2 + (4 / \sqrt{65})^2 + (2 / \sqrt{65})^2} = 1$$
$$\vec{u} \cdot \vec{v} = \frac{6}{5\sqrt{65}} - \frac{6}{5\sqrt{65}} - \frac{8}{5\sqrt{65}} + \frac{8}{5\sqrt{65}} = 0$$
Square matrix: A matrix is square when $m = n$, i.e. it has as many rows as columns. A square matrix with $n$ rows and columns is called $n$-square. Here is a 3-square matrix:
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$
Transpose:
The transpose of matrix $A$ is $A^T$:
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$
$$A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$$
Matrix multiplication: Two matrices can be multiplied only when the number of columns of the first matrix matches the number of rows of the second matrix.
$$AB = \begin{bmatrix} 2 & 1 & 4 \\ 1 & 5 & 2 \end{bmatrix} \begin{bmatrix} 3 & 2 \\ -1 & 4 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 9 & 16 \\ 0 & 26 \end{bmatrix}$$
Example:
$$ab_{11} = \begin{bmatrix} 2 & 1 & 4\end{bmatrix}\begin{bmatrix} 3 \\ -1 \\ 1 \end{bmatrix} = 2(3)\ +\ 1(-1)\ +\ 4(1) = 9$$
$$ab_{12} = \begin{bmatrix} 2 & 1 & 4\end{bmatrix}\begin{bmatrix} 2 \\ 4 \\ 2 \end{bmatrix} = 2(2)\ +\ 1(4)\ +\ 4(2) = 16$$
$$ab_{21} = \begin{bmatrix} 1 & 5 & 2\end{bmatrix}\begin{bmatrix} 3 \\ -1 \\ 1 \end{bmatrix} = 1(3)\ +\ 5(-1)\ +\ 2(1) = 0$$
$$ab_{22} = \begin{bmatrix} 1 & 5 & 2\end{bmatrix}\begin{bmatrix} 2 \\ 4 \\ 2 \end{bmatrix} = 1(2)\ +\ 5(4)\ +\ 2(2) = 26$$
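The entry-by-entry computation above is exactly what a naive Python implementation does (nested lists as matrices; `matmul` is an illustrative helper name):

```python
# Matrix product: entry (i, j) is the dot product of row i of A with column j of B
def matmul(A, B):
    assert len(A[0]) == len(B), "columns of A must match rows of B"
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1, 4], [1, 5, 2]]
B = [[3, 2], [-1, 4], [1, 2]]
print(matmul(A, B))  # → [[9, 16], [0, 26]]
```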
Identity matrix: The identity matrix $(I)$ is a square matrix in which all components are $0$ except those on the diagonal, which are equal to $1$. Multiplying a matrix by an identity matrix leaves the matrix unchanged:
$$AI = \begin{bmatrix}2 & 4 & 6 \\ 1 & 3 & 5 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix}2 & 4 & 6 \\ 1 & 3 & 5 \end{bmatrix}$$
Orthogonal matrix:
Matrix $A$ is orthogonal if $AA^T = A^TA = I$, i.e. its transpose is its inverse. For example:
$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3/5 & -4/5 \\ 0 & 4/5 & 3/5 \end{bmatrix}$$
$A$ is orthogonal, because:
$$AA^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3/5 & -4/5 \\ 0 & 4/5 & 3/5 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3/5 & 4/5 \\ 0 & -4/5 & 3/5 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
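The check $AA^T = I$ can be done numerically in Python (entrywise comparison within floating-point tolerance; the helpers are illustrative):

```python
import math

# Matrix product and transpose for nested-list matrices
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 0, 0], [0, 3/5, -4/5], [0, 4/5, 3/5]]
P = matmul(A, transpose(A))
I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

# A is orthogonal when A·Aᵀ equals the identity (up to rounding error)
ok = all(math.isclose(P[i][j], I[i][j], abs_tol=1e-12) for i in range(3) for j in range(3))
print(ok)  # → True
```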
Diagonal matrix: A diagonal matrix A is a square matrix where all the entries $a_{ij}$ are 0 when $i\neq j$.
$$A = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & a_{nn} \end{bmatrix}$$
Trace: The trace of an $n \times n$ square matrix is defined to be the sum of the elements on the main diagonal.
$$\operatorname{tr}(A) = \sum^{n}_{i=1} a_{ii} = a_{11} + a_{22} + \dots + a_{nn}$$
Determinant:
A determinant is a function of a square matrix that reduces it to a single number. The determinant of a matrix $A$ is denoted ${\lvert}A\rvert$ or $\det(A)$. If $A$ consists of the single element $a$, then ${\lvert}A\rvert = a$. If $A$ is a $2 \times 2$ matrix, then
$${\lvert}A\rvert = \left| \begin{array}{cc} a & b \\ c & d \end{array} \right| = ad - bc$$
The determinant of:
$$A = \begin{bmatrix} 4 & 1 \\ 1 & 2 \end{bmatrix}$$
is:
$${\lvert}A\rvert = \left| \begin{array}{cc} 4 & 1 \\ 1 & 2 \end{array} \right| = 4(2) - 1(1) = 7$$
When the determinant of a matrix is $0$, the inverse of the matrix does not exist.
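The $2 \times 2$ determinant and the trace defined above can be computed directly (nested lists as matrices; `det2` and `trace` are illustrative helper names):

```python
# Determinant of a 2x2 matrix (ad - bc) and trace (sum of diagonal entries)
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[4, 1], [1, 2]]
print(det2(A))   # → 7
print(trace(A))  # → 6
```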
Eigenvectors and eigenvalues:
An eigenvector is a nonzero vector that satisfies the equation
$$A\vec{v} = \lambda\vec{v}$$
where $A$ is a square matrix, $\lambda$ is a scalar, and $\vec{v}$ is the eigenvector. $\lambda$ is called an eigenvalue.
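A small worked example: for the matrix below, $\vec{v} = [1, 0]$ is an eigenvector with eigenvalue $\lambda = 3$, since multiplying by $A$ simply scales it by $3$:
$$A\vec{v} = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix} = 3\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$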
Other Interesting Aspects in Mathematics:
Derivatives of Elementary Functions:
\[
\begin{array}{ l|c|c|c|c|c|c|r }
f(x) & c & x^n & \sin x & \cos x & \ln x & e^x & \cdots\\
\hline
f'(x) & 0 & nx^{n-1} & \cos x & -\sin x & \frac{1}{x} & e^x & \cdots\\
\end{array}
\]
—•—
There are four fraction types with respect to odd and even numbers:
\[\frac{ODD}{EVEN}\ \frac{EVEN}{ODD}\ \frac{EVEN}{EVEN}\ \frac{ODD}{ODD}\]
The first type, $\frac{ODD}{EVEN}$, is special because it can never yield an integer: if an odd number divided by an even number were an integer $k$, the odd number would equal $k$ times an even number, which is even, a contradiction. All the others can.
Rational, Irrational, Algebraic, and Transcendental
(as defined in Mathematics - defined elsewhere in other ways)
Rational numbers are expressed as the ratio of two integers $p$ and $q$ (with $q \neq 0$):
$$r=\frac{p}{q}$$
Algebraic numbers are the roots of finite polynomials with integer coefficients:
$$a_nx^n+\dotsb+a_2x^2+a_1x+a_0=0$$
Every rational number $p/q$ is algebraic, since it is a root of the equation $qx-p=0$. There are algebraic numbers that are not rational; the most famous is $\sqrt2$, which is a root of $x^2-2=0$.
Irrational and transcendental numbers are defined by what they are not: an irrational number is a real number that is not rational, and a transcendental number is a number that is not algebraic. Since every rational number is algebraic, every transcendental number is irrational.
That transcendental numbers exist, i.e. that not all real numbers are algebraic, was first proved by Joseph Liouville in 1844. The first number demonstrated to be transcendental is now called the Liouville constant:
$$\quad\displaystyle\sum_{k=1}^{\infty}10^{-k!}=0.110\,001\,000\,000\,000\,000\,000\,001\,000\dotsc$$
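The first few terms of this sum can be computed exactly with Python's `decimal` module (truncating at $k = 4$ is an arbitrary illustrative choice):

```python
from decimal import Decimal, getcontext
from math import factorial

# Partial sum of the Liouville constant: sum of 10^(-k!) for k = 1..4
getcontext().prec = 30  # enough precision to hold all 24 decimal places exactly
s = sum(Decimal(10) ** -factorial(k) for k in range(1, 5))
print(s)  # → 0.110001000000000000000001
```

The digit 1 appears at decimal places $1!, 2!, 3!, 4!, \dotsc$, i.e. positions 1, 2, 6, 24.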
The constants $e$ and $\pi$ have been proven to be transcendental, but Transcendental Number Theory has not yet determined whether $e+\pi$ is transcendental.
Variations on a theme of e:
Euler's Formula for Complex Numbers:
$$e^{ \pm i\theta } = \cos \theta \pm i\sin \theta$$
$z = \cos \theta + i\sin \theta = e^{i\theta}$ (trigonometric form, with $r = 1$)
$z = r (\cos(\theta)+ i \sin(\theta))$ (polar form)
$z = r e^{i\theta}$ (exponential form)
$z = r e^{i_{0}\theta}$ (holor form)
When $\theta = \pi$, Euler's formula gives $e^{i\pi} = -1$, which rearranges to Euler's Identity:
$$\pmb{e^{i\pi}+1=0}$$
or
$$\pmb{e^{i\pi}+0^0=0}$$ (hotly debated among mathematicians, since the value of $0^0$ is a matter of convention)
As a radical:
$$\sqrt{e^{i\pi }}= \sqrt{-1}$$
In terms of $\pi$:
$$\pi = -i \ \log_{e}(-1)$$
As a limit:
$${e}\equiv \lim_{n\to\infty} \biggl( 1 + \frac{1}{n} \biggr)^n = 2.7182818284590452353602874713526624977572 \cdots$$
$${e} = \lim_{n\to 0} \biggl( 1 + {n} \biggr)^{\frac{1}{n}}$$
As a series:
$${e} = \sum_{n=0}^\infty \frac{1}{n!}= \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots$$
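Both the series above and the limit definition can be checked numerically in Python (truncating the series at 20 terms and using $n = 10^6$ for the limit are arbitrary illustrative choices):

```python
import math

# e as a series: sum of 1/n! for n = 0..19 (converges very fast)
series = sum(1 / math.factorial(n) for n in range(20))

# e as a limit: (1 + 1/n)^n for large n (converges slowly, error ~ e/(2n))
limit = (1 + 1 / 1_000_000) ** 1_000_000

print(abs(series - math.e) < 1e-12)  # → True
print(abs(limit - math.e) < 1e-5)    # → True
```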
As a logarithm:
$$\ln x = \log_ex$$
As an exponential:
$$f(x) = \exp(x) = e^x$$
The Reciprocal of e:
$$\lim_{n \to \infty}\biggl(1 - \frac{1}{n}\biggr)^{n} = \frac{1}{{e}}$$
Derivatives of e:
$$(e^x)' = e^x$$
$$(\log_e x)' = (\ln x)' = 1/x$$
Integrals of e:
$$\int e^x dx = e^x+c$$
$$\int \log_{e}{x}\ dx = \int \ln x\ dx = x \ln x - x + c$$
$$\int_1^e \frac{1}{x}dx = 1$$
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{x^2}{2}}dx = 1$$