school: University of Missouri-Rolla (aka Missouri University of Science and Technology)
instructor: Barbara Hale
start date: 2007-08-30
end date: 2007-12-xx
\(\vec{r}\) = three-dimensional position vector with components \((x,y,z)\)
Definition of field:
\(F \equiv \{\alpha, \beta, \gamma, ...\}\) where \(\alpha, \beta, \gamma, ...\) are
(in general) complex numbers and
\(\alpha+\beta\) and \(\alpha\cdot\beta\) are defined and are elements of \(F\). (Closure: \(\alpha+\beta \in F\) and \(\alpha\cdot\beta \in F\).)
\(\alpha+(\beta+\gamma)=(\alpha+\beta)+\gamma\) (associative property of addition) and
\(\alpha\cdot(\beta\cdot \gamma)=(\alpha\cdot \beta)\cdot \gamma\) (associative property of multiplication) and
\(\alpha\cdot(\beta+\gamma)=(\alpha\cdot \beta)+(\alpha\cdot \gamma)\) (distributive property of multiplication over addition)
\(\alpha+\beta=\beta+\alpha\) (commutative property of addition) and
\(\alpha\cdot\beta=\beta\cdot \alpha\) (commutative property of multiplication)
The element \(0\) exists where \(\alpha+0=\alpha\) and
\(\alpha\cdot 0=0\) and
\(\forall \alpha \in F\) there exists a \(\beta \in F\) such that \(\alpha+\beta=0\).
an identity, \(E\), exists such that \(E\cdot \alpha=\alpha\) \(\forall \alpha \in F\); e.g., \(E=1\)
at least one element of \(F\) is \(\neq 0\)
\(\forall \alpha \in F\) with \(\alpha \neq 0\), \(\exists \beta \in F \backepsilon \alpha\cdot \beta = E\); e.g., \(\beta \equiv \alpha^{-1}\)
Read as "For every nonzero \(\alpha\) in field \(F\) there exists \(\beta\) in \(F\) such that the product \(\alpha\cdot\beta\) is the element \(E\) (unity)." In other words, \(\beta\) is the multiplicative inverse of \(\alpha\).
Definition of vector space:
\(\vec{S} \equiv \{ \vec{x}, \vec{y}, \vec{z},\vec{v},...\}\) where \(\vec{x}, \vec{y}, \vec{z},\vec{v},...\)
are mathematical objects ("vectors") over field \(F\) and
\(\vec{x}+\vec{y} \in \vec{S}\) \(\forall \vec{x}\in\vec{S}\) and \(\vec{y}\in\vec{S}\); and
\(\alpha\vec{x}\in\vec{S}\) \(\forall \alpha\in F\) and \(\vec{x}\in\vec{S}\)
The "zero" or "null" vector \(\vec{0}\) exists (and \(\vec{0}\in\vec{S}\)) \(\backepsilon\) \(\vec{x}+\vec{0}=\vec{x}\) and \(\alpha\vec{0}=\vec{0}\)
also, \(\forall \vec{x}\in\vec{S}\) \(\exists \vec{y}\in\vec{S} \backepsilon \vec{x}+\vec{y}=\vec{0}\); i.e., \(\vec{y}\) is the additive inverse of \(\vec{x}\).
Examples of vector spaces:
three-dimensional Euclidean space (the \(\vec{r}\) coordinate space with \(F\) being the real numbers)
n-dimensional vector space over the field of complex numbers; \(\vec{x}=(a_1, a_2,...,a_n)\)
set of all real, continuous functions \(f(x)\) on \([0,1]\). Note that the function \(f\) itself plays the role of a vector \(\vec{y}\), an element of \(\vec{S}\).
set of all complex functions \(\Psi(x)\) with domain \(-\infty < x < \infty\) \(\backepsilon\) \(\int_{-\infty}^{\infty} \Psi^* \Psi \, dx\) is finite
This is sometimes called \(L^2\), the Hilbert space of square-integrable functions.
set of solutions to \(\nabla^2f(\vec{r})=0\), or to \(\nabla^2f(\vec{r})=k^2 f(\vec{r})\) for real \(k\).
set of functions \(\Psi(\vec{r})\) defined for all \(\vec{r}\) and for which the integral over all space \(\int|\Psi(\vec{r})|^2 \, d^3r\) is finite.
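As a quick check of square integrability, the Gaussian \(\Psi(x)=e^{-x^2/2}\) belongs to \(L^2\) since
\begin{equation}
\int_{-\infty}^{\infty} \Psi^*\Psi \, dx = \int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi}
\end{equation}
is finite; by contrast, the constant function \(\Psi(x)=1\) is not square integrable on \(-\infty < x < \infty\).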
An n-dimensional vector space over the field of real numbers is described by
\begin{equation}
\vec{x} = x_1 \hat{e}^1 + x_2 \hat{e}^2 + ... + x_n \hat{e}^n
\end{equation}
where \(\hat{e}^1=(1,0,...,0)\), \(\hat{e}^2=(0,1,...,0)\), etc. The \(\hat{e}^i\) are called basis vectors.
A short-hand notation for \(\vec{x}\) is
\begin{equation}
\vec{x} = (x_1, x_2, ..., x_n)
\end{equation}
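For example, with \(n=3\) the vector \(\vec{x}=(2,-1,3)\) decomposes as
\begin{equation}
\vec{x} = 2\hat{e}^1 - \hat{e}^2 + 3\hat{e}^3 = 2(1,0,0) - (0,1,0) + 3(0,0,1)
\end{equation}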
A normed vector space
is a vector space in which \(\forall \vec{x}\in\vec{S}\) a quantity defined
to be the norm of \(\vec{x}\), denoted \(||\vec{x}||\), exists. The norm must satisfy
\(||\vec{x}|| \geq 0\)
\(||\alpha\vec{x}|| = |\alpha|\cdot||\vec{x}||\)
\(||\vec{x}+\vec{y}||\leq||\vec{x}||+||\vec{y}||\). This is called the Minkowski inequality (the triangle inequality).
\(||\vec{x}||=0\) iff \(\vec{x}=\vec{0}\)
Example:
In an n-dimensional Euclidean space with \(\vec{x}=(a_1, a_2, ..., a_n)\), the norm is
\begin{equation}
||\vec{x}||=\sqrt{|a_1|^2+|a_2|^2+...+|a_n|^2}
\end{equation}
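As a numerical check of the Minkowski inequality, take \(\vec{x}=(3,4)\) and \(\vec{y}=(1,0)\) with \(n=2\):
\begin{equation}
||\vec{x}+\vec{y}|| = ||(4,4)|| = 4\sqrt{2} \approx 5.66 \leq ||\vec{x}||+||\vec{y}|| = 5 + 1 = 6
\end{equation}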
A vector space is unitary iff it is possible to define a special operation called the inner product (or scalar product) \((\vec{x}, \vec{y})\) \(\forall \vec{x},\vec{y}\in\vec{S}\).
The inner product must satisfy
\((\vec{x},\vec{x})\geq 0\) and \((\vec{x},\vec{x})=0\) iff \(\vec{x}=\vec{0}\); and
\((\vec{x},\vec{y})=(\vec{y},\vec{x})^*\) (conjugate symmetry); and
\((\vec{x},\alpha\vec{y}+\beta\vec{z})=\alpha(\vec{x},\vec{y})+\beta(\vec{x},\vec{z})\) (linearity in the second argument)
From the above it can be shown that
\begin{equation}
(\alpha\vec{x},\vec{y})=\alpha^*(\vec{x},\vec{y})
\end{equation}
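To see this, apply conjugate symmetry, then linearity in the second argument, then conjugate symmetry again:
\begin{equation}
(\alpha\vec{x},\vec{y}) = (\vec{y},\alpha\vec{x})^* = \left[\alpha(\vec{y},\vec{x})\right]^* = \alpha^*(\vec{y},\vec{x})^* = \alpha^*(\vec{x},\vec{y})
\end{equation}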
It can also be shown that
\begin{equation}
|(\vec{x},\vec{y})|^2 \leq (\vec{x},\vec{x})\cdot (\vec{y},\vec{y})
\end{equation}
which is called the Cauchy-Schwarz Inequality.
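As a quick numerical check in two real dimensions (using the component form of the inner product given in the example below), take \(\vec{x}=(1,0)\) and \(\vec{y}=(1,1)\):
\begin{equation}
|(\vec{x},\vec{y})|^2 = |1|^2 = 1 \leq (\vec{x},\vec{x})\cdot(\vec{y},\vec{y}) = (1)(2) = 2
\end{equation}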
Example
In an n-dimensional vector space,
\begin{equation}
(\vec{x},\vec{y}) = x_1^*\, y_1 + x_2^*\, y_2 + ... + x_n^*\, y_n
\end{equation}
In matrix notation,
\begin{equation}
(\vec{x},\vec{y}) = (x_1^*, x_2^*, ..., x_n^*)\cdot
\begin{bmatrix}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{bmatrix}
= \vec{x}^{*T} \vec{y}
\end{equation}
where \(T\) is the transpose operation and \(\vec{x}\) and \(\vec{y}\) are treated as column matrices.
Definition of orthogonality:
Two vectors \(\vec{x}\) and \(\vec{y}\) are said to be orthogonal if \((\vec{x},\vec{y})=0\).
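For instance, the two-dimensional complex vectors \(\vec{x}=(1,i)\) and \(\vec{y}=(i,1)\) are orthogonal:
\begin{equation}
(\vec{x},\vec{y}) = x_1^*\, y_1 + x_2^*\, y_2 = (1)^*(i) + (i)^*(1) = i - i = 0
\end{equation}
even though neither \(\vec{x}\) nor \(\vec{y}\) is the null vector.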