Compute the Hamming distances between the following pairs of \(n\)-tuples.
\(\displaystyle (011010), (011100)\)
\(\displaystyle (11110101), (01010100)\)
\(\displaystyle (00110), (01111)\)
\(\displaystyle (1001), (0111)\)
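Hand computations like these are easy to check with a few lines of code. The following sketch counts the coordinates in which two tuples differ (the helper name `hamming_distance` is ours, used only for illustration):

```python
# The Hamming distance between two binary n-tuples is the number of
# coordinates in which they differ.
def hamming_distance(x: str, y: str) -> int:
    """Number of positions where the equal-length tuples x and y differ."""
    assert len(x) == len(y), "Hamming distance requires equal-length tuples"
    return sum(a != b for a, b in zip(x, y))

pairs = [("011010", "011100"), ("11110101", "01010100"),
         ("00110", "01111"), ("1001", "0111")]
for x, y in pairs:
    print(x, y, hamming_distance(x, y))
```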
4.
Compute the weights of the following \(n\)-tuples.
\(\displaystyle (011010)\)
\(\displaystyle (11110101)\)
\(\displaystyle (01111)\)
\(\displaystyle (1011)\)
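A similar one-liner checks weights; the weight of a tuple is its Hamming distance from the zero tuple (again, `weight` is just an illustrative helper name):

```python
# The weight of a binary n-tuple is the number of nonzero coordinates,
# equivalently its Hamming distance from the zero tuple.
def weight(x: str) -> int:
    return sum(bit == "1" for bit in x)

for x in ("011010", "11110101", "01111", "1011"):
    print(x, weight(x))
```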
5.
Suppose that a linear code \(C\) has a minimum weight of \(7\text{.}\) What are the error-detection and error-correction capabilities of \(C\text{?}\)
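Recall that for a linear code the minimum distance equals the minimum weight, and a code of minimum distance \(d\) detects up to \(d - 1\) errors and corrects up to \(\lfloor (d-1)/2 \rfloor\) errors. A sketch of that bookkeeping (`capabilities` is an illustrative helper name, not anything from the text):

```python
# A code with minimum distance d detects up to d - 1 errors and
# corrects up to floor((d - 1) / 2) errors; for a linear code the
# minimum distance equals the minimum weight.
def capabilities(d_min: int) -> tuple:
    """Return (errors detectable, errors correctable) for minimum distance d_min."""
    return d_min - 1, (d_min - 1) // 2

detect, correct = capabilities(7)
print(f"detects up to {detect} errors, corrects up to {correct} errors")
```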
6.
For each of the following codes, what is the minimum distance of the code? What is the best situation we might hope for in connection with error detection and error correction?

Compute the null space of each of the following matrices. What type of \((n,k)\)-block codes are the null spaces? Can you find a matrix (not necessarily a standard generator matrix) that generates each code? Are your generator matrices unique?
Suppose that a \(1000\)-bit binary message is transmitted. Assume that the probability of a single error is \(p\) and that the errors occurring in different bits are independent of one another. If \(p = 0.01\text{,}\) what is the probability of more than one error occurring? What is the probability of exactly two errors occurring? Repeat this problem for \(p = 0.0001\text{.}\)
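Since the bit errors are independent, the number of errors is binomially distributed, so \(P(\text{more than one error}) = 1 - (1-p)^n - np(1-p)^{n-1}\) and \(P(\text{exactly } k \text{ errors}) = \binom{n}{k} p^k (1-p)^{n-k}\). A sketch for checking the arithmetic (function names are ours):

```python
from math import comb

def prob_more_than_one(n: int, p: float) -> float:
    """P(more than one bit error) for n independent bits with error probability p."""
    p0 = (1 - p) ** n                 # no errors
    p1 = n * p * (1 - p) ** (n - 1)   # exactly one error
    return 1 - p0 - p1

def prob_exactly(n: int, p: float, k: int) -> float:
    """Binomial probability of exactly k bit errors."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

for p in (0.01, 0.0001):
    print(p, prob_more_than_one(1000, p), prob_exactly(1000, p, 2))
```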
11.
Which matrices are canonical parity-check matrices? For those matrices that are canonical parity-check matrices, what are the corresponding standard generator matrices? What are the error-detection and error-correction capabilities of the code generated by each of these matrices?
Compute the syndrome caused by each of the following transmission errors.
An error in the first bit.
An error in the third bit.
An error in the last bit.
Errors in the third and fourth bits.
14.
Let \(C\) be the group code in \({\mathbb Z}_2^3\) defined by the codewords \((000)\) and \((111)\text{.}\) Compute the cosets of \(C\) in \({\mathbb Z}_2^3\text{.}\) Why was there no need to specify right or left cosets? Give the single transmission error, if any, to which each coset corresponds.
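Coset computations in a small group like \({\mathbb Z}_2^3\) can be checked by brute-force enumeration; a sketch (representing 3-tuples as strings):

```python
from itertools import product

# Enumerate the cosets x + C in Z_2^3 for C = {(000), (111)}.
def add(x: str, y: str) -> str:
    """Componentwise addition mod 2 of equal-length binary tuples."""
    return "".join(str((int(a) + int(b)) % 2) for a, b in zip(x, y))

C = {"000", "111"}
all_tuples = ("".join(bits) for bits in product("01", repeat=3))
cosets = {frozenset(add(x, c) for c in C) for x in all_tuples}
for coset in sorted(sorted(s) for s in cosets):
    print(coset)
```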
15.
For each of the following matrices, find the cosets of the corresponding code \(C\text{.}\) Give a decoding table for each code if possible.
A metric on a set \(X\) is a map \(d : X \times X \rightarrow {\mathbb R}\) satisfying the following conditions for all \(x, y, z \in X\text{:}\) \(d(x, y) \geq 0\text{,}\) with \(d(x, y) = 0\) exactly when \(x = y\text{;}\) \(d(x, y) = d(y, x)\text{;}\) and \(d(x, y) \leq d(x, z) + d(z, y)\) (the triangle inequality). In other words, a metric is simply a generalization of the notion of distance. Prove that Hamming distance is a metric on \({\mathbb Z}_2^n\text{.}\) Decoding a message actually reduces to deciding which codeword is closest in terms of this distance.
18.
Let \(C\) be a linear code. Show that either the \(i\)th coordinates in the codewords of \(C\) are all zeros or exactly half of them are zeros.
19.
Let \(C\) be a linear code. Show that either every codeword has even weight or exactly half of the codewords have even weight.
20.
Show that the codewords of even weight in a linear code \(C\) are also a linear code.
21.
If we are to use an error-correcting linear code to transmit the 128 ASCII characters, what size matrix must be used? What size matrix must be used to transmit the extended ASCII character set of 256 characters? What if we require only error detection in both cases?
22.
Find the canonical parity-check matrix that gives the even parity check bit code with three information positions. What is the matrix for seven information positions? What are the corresponding standard generator matrices?
23.
How many check positions are needed for a single error-correcting code with 20 information positions? With 32 information positions?
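For a single error-correcting code, the \(r\)-bit syndromes must distinguish "no error" from an error in any one of the \(k + r\) transmitted bits, which requires \(2^r \geq k + r + 1\). A sketch that finds the smallest such \(r\) (the helper name is ours):

```python
def check_bits_needed(k: int) -> int:
    """Smallest r with 2**r >= k + r + 1, so the r-bit syndromes can
    name 'no error' plus an error in any of the k + r bit positions."""
    r = 1
    while 2**r < k + r + 1:
        r += 1
    return r

for k in (20, 32):
    print(k, "information positions need", check_bits_needed(k), "check positions")
```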
24.
Let \({\mathbf e}_i\) be the binary \(n\)-tuple with a \(1\) in the \(i\)th coordinate and \(0\)’s elsewhere and suppose that \(H \in {\mathbb M}_{m \times n}({\mathbb Z}_2)\text{.}\) Show that \(H{\mathbf e}_i\) is the \(i\)th column of the matrix \(H\text{.}\)
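The claim is easy to see in action: multiplying \(H\) by \({\mathbf e}_i\) selects exactly the entries of column \(i\). A small check over \({\mathbb Z}_2\) (the matrix below is an arbitrary example of ours, not one from the text):

```python
# Verify that H e_i (mod 2) is the ith column of H for a sample matrix.
H = [[1, 0, 1, 1],
     [0, 1, 1, 0],
     [1, 1, 0, 1]]  # an arbitrary matrix over Z_2, chosen for illustration

def times_e(H, i):
    """Compute H e_i mod 2, where e_i has a 1 in coordinate i (1-indexed)."""
    n = len(H[0])
    e = [1 if j == i - 1 else 0 for j in range(n)]
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

for i in range(1, 5):
    assert times_e(H, i) == [row[i - 1] for row in H]
print("H e_i equals column i of H for every i")
```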
25.
Let \(C\) be an \((n,k)\)-linear code. Define the dual or orthogonal code of \(C\) to be
\begin{equation*}
C^\perp = \{ {\mathbf x} \in {\mathbb Z}_2^n : {\mathbf x} \cdot {\mathbf y} = 0 \text{ for all } {\mathbf y} \in C \}\text{.}
\end{equation*}
Find the dual code of the linear code \(C\) where \(C\) is given by the matrix
Show that \(C^\perp\) is an \((n, n-k)\)-linear code.
Find the standard generator and parity-check matrices of \(C\) and \(C^\perp\text{.}\) What happens in general? Prove your conjecture.
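For small codes, \(C^\perp\) can be computed by brute force straight from the definition. The generator matrix below is a hypothetical example of ours (the matrix from the exercise is not reproduced here); the same loop works for any small binary code:

```python
from itertools import product

# Brute-force computation of the dual code C^perp.
G = [(1, 0, 0, 1, 1),
     (0, 1, 0, 1, 0),
     (0, 0, 1, 0, 1)]  # hypothetical generator matrix of a (5, 3)-code
n = len(G[0])

def dot(x, y):
    """Inner product mod 2."""
    return sum(a * b for a, b in zip(x, y)) % 2

# C is the row space of G over Z_2.
C = {tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2 for j in range(n))
     for coeffs in product((0, 1), repeat=len(G))}
# C^perp consists of the n-tuples orthogonal to every codeword of C.
C_perp = {x for x in product((0, 1), repeat=n)
          if all(dot(x, y) == 0 for y in C)}
print(len(C), len(C_perp))  # sizes 2**k and 2**(n - k)
```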
26.
Let \(H\) be an \(m \times n\) matrix over \({\mathbb Z}_2\text{,}\) where the \(i\)th column is the number \(i\) written in binary with \(m\) bits. The null space of such a matrix is called a Hamming code.
generates a Hamming code. What are the error-correcting properties of a Hamming code?
The column corresponding to the syndrome also marks the bit that was in error; that is, the \(i\)th column of the matrix is \(i\) written as a binary number, and the syndrome immediately tells us which bit is in error. If the received word is \((101011)\text{,}\) compute the syndrome. In which bit did the error occur in this case, and what codeword was originally transmitted?
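Since the received word \((101011)\) has six bits, the relevant matrix here is the \(3 \times 6\) matrix whose \(i\)th column is \(i\) written in binary with three bits. A sketch of the syndrome computation under that assumption:

```python
# Syndrome decoding with the 3 x 6 matrix H whose ith column is
# i written in binary with m = 3 bits (i = 1, ..., 6).
m, n = 3, 6
H = [[(i >> (m - 1 - row)) & 1 for i in range(1, n + 1)] for row in range(m)]

def syndrome(word: str) -> int:
    """Return H x^T mod 2, read as an integer; 0 means no detected error."""
    bits = [int(b) for b in word]
    s = [sum(h * x for h, x in zip(row, bits)) % 2 for row in H]
    return int("".join(map(str, s)), 2)

word = "101011"
s = syndrome(word)
print("syndrome:", s)  # a nonzero syndrome names the bit position in error
if s:
    corrected = [int(b) for b in word]
    corrected[s - 1] ^= 1  # flip the offending bit
    print("corrected:", "".join(map(str, corrected)))
```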
Give a binary matrix \(H\) for the Hamming code with six information positions and four check positions. What are the check positions and what are the information positions? Encode the messages \((101101)\) and \((001001)\text{.}\) Decode the received words \((0010000101)\) and \((0000101100)\text{.}\) What are the possible syndromes for this code?
What is the number of check bits and the number of information bits in an \((m,n)\)-block Hamming code? Give both an upper and a lower bound on the number of information bits in terms of the number of check bits. Hamming codes having the maximum possible number of information bits with \(k\) check bits are called perfect. Every possible syndrome except \({\mathbf 0}\) occurs as a column. If the number of information bits is less than the maximum, then the code is called shortened. In this case, give an example showing that some syndromes can represent multiple errors.
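The bound in this exercise comes from the same counting as before: with \(r\) check bits there are \(2^r - 1\) nonzero syndromes, so a perfect Hamming code has \(2^r - r - 1\) information bits. A sketch of that computation (the function name is ours):

```python
def max_info_bits(r: int) -> int:
    """Maximum number of information bits for a Hamming code with r check
    bits: the perfect case, where every nonzero syndrome is a column of H."""
    return 2**r - r - 1

for r in range(2, 6):
    print(r, "check bits allow at most", max_info_bits(r), "information bits")
```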