We define a set \(S\) to be linearly independent if, for any choice of pairwise distinct vectors \(\Vect{a}_1,\dots , \Vect{a}_r\) from \(S\), the vector equation
\[ t_1 \Vect{a}_1 + \cdots + t_r \Vect{a}_r = \Vect{0} \]
has exactly one solution, namely \(t_1=\cdots = t_r=0\). If \(S\) fails to be linearly independent, it is called linearly dependent.
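As a quick illustration (the vectors here are chosen purely for the example), take \(\Vect{a}_1 = \begin{pmatrix}1\\0\end{pmatrix}\) and \(\Vect{a}_2 = \begin{pmatrix}1\\1\end{pmatrix}\) in \(\mathbb{R}^2\). The vector equation \(t_1\Vect{a}_1 + t_2\Vect{a}_2 = \Vect{0}\) amounts to the linear system
\[ t_1 + t_2 = 0, \qquad t_2 = 0, \]
which forces \(t_1 = t_2 = 0\); the set \(\{\Vect{a}_1, \Vect{a}_2\}\) is therefore linearly independent.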
This definition is remarkably efficient, as it relates the new concept of linear independence directly to the familiar concept of a system of linear equations having a unique solution. But let us analyze a bit more closely what it actually means:
Given any collection of vectors \(\Vect{a}_1,\dots ,\Vect{a}_r\), it is always possible to express the zero vector as a linear combination:
\[ \Vect{0} = t_1 \Vect{a}_1 + \cdots + t_r \Vect{a}_r \]
All we need to do is set \(t_1= \cdots = t_r = 0\). This is often called the ‘trivial’ linear combination representing \(\Vect{0}\). The concept of linear independence singles out those collections \(\Vect{a}_1,\dots ,\Vect{a}_r\) for which the trivial combination is the only way of expressing \(\Vect{0}\) as a linear combination of \(\Vect{a}_1, \dots ,\Vect{a}_r\).
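A small example of a collection that violates this uniqueness (again, the vectors are chosen only for illustration): with \(\Vect{a}_1 = \begin{pmatrix}1\\2\end{pmatrix}\) and \(\Vect{a}_2 = \begin{pmatrix}2\\4\end{pmatrix}\), besides the trivial combination we also have
\[ 2\Vect{a}_1 + (-1)\Vect{a}_2 = \Vect{0}, \]
a second, nontrivial way of expressing \(\Vect{0}\).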
In contrast, suppose this linear independence requirement is violated. Then it is possible to express the zero vector as a linear combination
\[ \Vect{0} = t_1 \Vect{a}_1 + \cdots + t_r \Vect{a}_r \]
in which at least one of the numbers \(t_1, \dots , t_r\) is not zero. Consider, for example, the case where \(t_1\neq 0\). We may then compute:
\begin{align*}
\Vect{0} &= t_1 \Vect{a}_1 + \cdots + t_r \Vect{a}_r\\
t_1 \Vect{a}_1 &= -\left( t_2\Vect{a}_2 + \cdots + t_r \Vect{a}_r \right)\\
\Vect{a}_1 &= -\frac{1}{t_1}\left( t_2\Vect{a}_2 + \cdots + t_r \Vect{a}_r \right)
\end{align*}
This last equation is a linear dependence relation amongst the vectors \(\Vect{a}_1, \dots ,\Vect{a}_r\) as it expresses \(\Vect{a}_1\) as a linear combination of the remaining ones.
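Returning to the illustrative pair from above: from \(2\Vect{a}_1 + (-1)\Vect{a}_2 = \Vect{0}\) we have \(t_1 = 2 \neq 0\) and \(t_2 = -1\), so the computation gives
\[ \Vect{a}_1 = -\tfrac{1}{2}\left( (-1)\Vect{a}_2 \right) = \tfrac{1}{2}\Vect{a}_2, \]
and indeed \(\tfrac{1}{2}\begin{pmatrix}2\\4\end{pmatrix} = \begin{pmatrix}1\\2\end{pmatrix}\).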