
Section 20.3 Linear Independence

Let \(S = \{v_1, v_2, \ldots, v_n\}\) be a set of vectors in a vector space \(V\text{.}\) If there exist scalars \(\alpha_1, \alpha_2, \ldots, \alpha_n \in F\) such that not all of the \(\alpha_i\)’s are zero and
\begin{equation*} \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = {\mathbf 0 }\text{,} \end{equation*}
then \(S\) is said to be linearly dependent. If the set \(S\) is not linearly dependent, then it is said to be linearly independent. More specifically, \(S\) is a linearly independent set if
\begin{equation*} \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = {\mathbf 0 } \end{equation*}
implies that
\begin{equation*} \alpha_1 = \alpha_2 = \cdots = \alpha_n = 0 \end{equation*}
for any set of scalars \(\{ \alpha_1, \alpha_2, \ldots, \alpha_n \}\text{.}\)
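For example, in \({\mathbb R}^2\) the set \(\{ (1, 2), (2, 4) \}\) is linearly dependent, since
\begin{equation*} 2(1, 2) - (2, 4) = (0, 0)\text{,} \end{equation*}
while the set \(\{ (1, 0), (1, 1) \}\) is linearly independent: if \(\alpha(1, 0) + \beta(1, 1) = (\alpha + \beta, \beta) = (0, 0)\text{,}\) then \(\beta = 0\) and hence \(\alpha = 0\text{.}\)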

Proposition.

Let \(\{ v_1, v_2, \ldots, v_n \}\) be a set of linearly independent vectors in a vector space \(V\text{.}\) If \(v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = \beta_1 v_1 + \beta_2 v_2 + \cdots + \beta_n v_n\text{,}\) then \(\alpha_1 = \beta_1, \alpha_2 = \beta_2, \ldots, \alpha_n = \beta_n\text{;}\) that is, \(v\) can be written as a linear combination of \(v_1, v_2, \ldots, v_n\) in exactly one way.

Proof.

If
\begin{equation*} v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = \beta_1 v_1 + \beta_2 v_2 + \cdots + \beta_n v_n\text{,} \end{equation*}
then
\begin{equation*} (\alpha_1 - \beta_1) v_1 + (\alpha_2 - \beta_2) v_2 + \cdots + (\alpha_n - \beta_n) v_n = {\mathbf 0}\text{.} \end{equation*}
Since \(v_1, \ldots, v_n\) are linearly independent, \(\alpha_i - \beta_i = 0\text{,}\) and hence \(\alpha_i = \beta_i\text{,}\) for \(i = 1, \ldots, n\text{.}\)
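For example, \(\{ (1, 0), (1, 1) \}\) is a linearly independent set in \({\mathbb R}^2\text{,}\) and the only way to write \((3, 5)\) as a linear combination of these vectors is
\begin{equation*} (3, 5) = -2(1, 0) + 5(1, 1)\text{,} \end{equation*}
since \((3, 5) = \alpha(1, 0) + \beta(1, 1) = (\alpha + \beta, \beta)\) forces \(\beta = 5\) and \(\alpha = -2\text{.}\)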
The definition of linear dependence makes more sense if we consider the following proposition.

Proposition.

A set \(\{ v_1, v_2, \ldots, v_n \}\) of vectors in a vector space \(V\) is linearly dependent if and only if one of the \(v_i\)’s is a linear combination of the rest.

Proof.

Suppose that \(\{ v_1, v_2, \dots, v_n \}\) is a set of linearly dependent vectors. Then there exist scalars \(\alpha_1, \ldots, \alpha_n\) such that
\begin{equation*} \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = {\mathbf 0 }\text{,} \end{equation*}
with at least one of the \(\alpha_i\)’s not equal to zero. Suppose that \(\alpha_k \neq 0\text{.}\) Then
\begin{equation*} v_k = - \frac{\alpha_1}{\alpha_k} v_1 - \cdots - \frac{\alpha_{k - 1}}{\alpha_k} v_{k-1} - \frac{\alpha_{k + 1}}{\alpha_k} v_{k + 1} - \cdots - \frac{\alpha_n}{\alpha_k} v_n\text{.} \end{equation*}
Conversely, suppose that
\begin{equation*} v_k = \beta_1 v_1 + \cdots + \beta_{k - 1} v_{k - 1} + \beta_{k + 1} v_{k + 1} + \cdots + \beta_n v_n\text{.} \end{equation*}
Then
\begin{equation*} \beta_1 v_1 + \cdots + \beta_{k - 1} v_{k - 1} - v_k + \beta_{k + 1} v_{k + 1} + \cdots + \beta_n v_n = {\mathbf 0}\text{.} \end{equation*}
Since the coefficient of \(v_k\) is \(-1 \neq 0\text{,}\) the set \(\{ v_1, v_2, \ldots, v_n \}\) is linearly dependent.
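For instance, the set \(\{ (1, 0, 0), (0, 1, 0), (1, 1, 0) \}\) in \({\mathbb R}^3\) is linearly dependent, since \((1, 1, 0) = (1, 0, 0) + (0, 1, 0)\text{;}\) equivalently,
\begin{equation*} (1, 0, 0) + (0, 1, 0) - (1, 1, 0) = {\mathbf 0}\text{.} \end{equation*}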
The following proposition is a consequence of the fact that any system of homogeneous linear equations with more unknowns than equations will have a nontrivial solution. We leave the details of the proof for the end-of-chapter exercises.

Proposition 20.11.

Suppose that a vector space \(V\) is spanned by \(n\) vectors. If \(m > n\text{,}\) then any set of \(m\) vectors in \(V\) must be linearly dependent.
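For example, \({\mathbb R}^2\) is spanned by the two vectors \((1, 0)\) and \((0, 1)\text{,}\) so any three vectors in \({\mathbb R}^2\) must be linearly dependent. Indeed, for the vectors \((1, 1)\text{,}\) \((1, -1)\text{,}\) and \((2, 0)\text{,}\) the equation \(\alpha(1, 1) + \beta(1, -1) + \gamma(2, 0) = (0, 0)\) amounts to a homogeneous system of two equations in three unknowns, and the nontrivial solution \(\alpha = \beta = 1\text{,}\) \(\gamma = -1\) gives
\begin{equation*} (1, 1) + (1, -1) - (2, 0) = (0, 0)\text{.} \end{equation*}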
A set \(\{ e_1, e_2, \ldots, e_n \}\) of vectors in a vector space \(V\) is called a basis for \(V\) if \(\{ e_1, e_2, \ldots, e_n \}\) is a linearly independent set that spans \(V\text{.}\)

Example 20.12.

The vectors \(e_1 = (1, 0, 0)\text{,}\) \(e_2 = (0, 1, 0)\text{,}\) and \(e_3 =(0, 0, 1)\) form a basis for \({\mathbb R}^3\text{.}\) The set certainly spans \({\mathbb R}^3\text{,}\) since an arbitrary vector \((x_1, x_2, x_3)\) in \({\mathbb R}^3\) can be written as \(x_1 e_1 + x_2 e_2 + x_3 e_3\text{.}\) Also, none of the vectors \(e_1, e_2, e_3\) can be written as a linear combination of the other two; hence, they are linearly independent. The set \(\{ e_1, e_2, e_3 \}\) is not the only basis of \({\mathbb R}^3\text{:}\) the set \(\{ (3, 2, 1), (3, 2, 0), (1, 1, 1) \}\) is also a basis for \({\mathbb R}^3\text{.}\)
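To verify the last claim, notice that \(e_3 = (3, 2, 1) - (3, 2, 0)\text{,}\) and a short computation shows that
\begin{equation*} e_1 = 2(3, 2, 1) - (3, 2, 0) - 2(1, 1, 1) \quad \text{and} \quad e_2 = -3(3, 2, 1) + 2(3, 2, 0) + 3(1, 1, 1)\text{,} \end{equation*}
so the set \(\{ (3, 2, 1), (3, 2, 0), (1, 1, 1) \}\) spans \({\mathbb R}^3\text{.}\) The set is also linearly independent: if one of the three vectors were a linear combination of the other two, then \({\mathbb R}^3\) would be spanned by two vectors, and the linearly independent set \(\{ e_1, e_2, e_3 \}\) of three vectors would contradict Proposition 20.11.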

Example 20.13.

Let \({\mathbb Q}( \sqrt{2}\, ) = \{ a + b \sqrt{2} : a, b \in {\mathbb Q} \}\text{.}\) The sets \(\{1, \sqrt{2}\, \}\) and \(\{1 + \sqrt{2}, 1 - \sqrt{2}\, \}\) are both bases of \({\mathbb Q}( \sqrt{2}\, )\text{.}\)
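To verify that \(\{1 + \sqrt{2}, 1 - \sqrt{2}\, \}\) is a basis, observe that
\begin{equation*} 1 = \frac{1}{2}(1 + \sqrt{2}\,) + \frac{1}{2}(1 - \sqrt{2}\,) \quad \text{and} \quad \sqrt{2} = \frac{1}{2}(1 + \sqrt{2}\,) - \frac{1}{2}(1 - \sqrt{2}\,)\text{,} \end{equation*}
so the set spans \({\mathbb Q}( \sqrt{2}\, )\text{.}\) Moreover, if \(\alpha(1 + \sqrt{2}\,) + \beta(1 - \sqrt{2}\,) = 0\) with \(\alpha, \beta \in {\mathbb Q}\text{,}\) then \((\alpha + \beta) + (\alpha - \beta)\sqrt{2} = 0\text{.}\) Since \(\sqrt{2}\) is irrational, \(\alpha + \beta = 0\) and \(\alpha - \beta = 0\text{,}\) so \(\alpha = \beta = 0\text{.}\)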
From the last two examples it should be clear that a given vector space has several bases. In fact, there are infinitely many bases for both of these examples. In general, there is no unique basis for a vector space. However, every basis of \({\mathbb R}^3\) consists of exactly three vectors, and every basis of \({\mathbb Q}(\sqrt{2}\, )\) consists of exactly two vectors. This is a consequence of the next proposition.

Proposition.

Let \(\{ e_1, e_2, \ldots, e_m \}\) and \(\{ f_1, f_2, \ldots, f_n \}\) be two bases for a vector space \(V\text{.}\) Then \(m = n\text{.}\)

Proof.

Since \(\{ f_1, f_2, \ldots, f_n \}\) is a basis, it is a linearly independent set, and \(V\) is spanned by the \(m\) vectors \(e_1, e_2, \ldots, e_m\text{.}\) By Proposition 20.11, \(n \leq m\text{.}\) Similarly, \(\{ e_1, e_2, \ldots, e_m \}\) is a linearly independent set, and \(V\) is spanned by the \(n\) vectors \(f_1, f_2, \ldots, f_n\text{,}\) so the same proposition implies that \(m \leq n\text{.}\) Consequently, \(m = n\text{.}\)
If \(\{ e_1, e_2, \ldots, e_n \}\) is a basis for a vector space \(V\text{,}\) then we say that the dimension of \(V\) is \(n\) and we write \(\dim V =n\text{.}\) We will leave the proof of the following theorem as an exercise.