Distributed Source Coding


(Last modified November 2001. For more recent work, refer to the Wyner-Ziv Coding of Video webpage)

DSC Using Convolutional Codes

The following asymmetric scenario was used to investigate convolutional codes for distributed source coding: Let X and Y be statistically dependent binary sequences related by a crossover probability p (Fig. 2). Assume that Y is sent at a rate of H(Y) and is recovered perfectly at the decoder. X is encoded with no knowledge of Y and is sent to a decoder that knows the side information Y (Fig. 3). The goal of this study is to find techniques that allow X to be sent at a rate close to the Slepian-Wolf bound of H(X|Y), achieving an overall rate close to H(X,Y).

    In this study, channel coding techniques are used to source code X.  Since Y is correlated with X, the side information Y at the decoder can be viewed as the output of a channel whose input is X (the dependency channel). The only difference from simple channel coding is that additional bits sent from the encoder are also used in the decoding.  This dependency-channel concept is the motivation for applying channel coding techniques, such as convolutional coding and turbo coding, to this source coding problem.
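The dependency channel can be illustrated with a short simulation (a sketch; the sequence length and crossover probability below are arbitrary choices for illustration, not values from this study):

```python
import random

random.seed(0)

p = 0.05        # crossover probability of the X-Y dependency channel
n = 100_000     # sequence length (arbitrary, for illustration)

# X is an i.i.d. uniform binary source; Y is X passed through a BSC(p),
# i.e. each bit of X is flipped independently with probability p.
x = [random.randint(0, 1) for _ in range(n)]
y = [xi ^ (random.random() < p) for xi in x]

# The empirical crossover rate between X and Y concentrates around p.
crossovers = sum(xi != yi for xi, yi in zip(x, y)) / n
print(f"empirical crossover rate: {crossovers:.4f}")
```

From the decoder's point of view, Y is then exactly a noisy observation of X, which is what justifies treating the side information as a channel output.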
Convolutional Coder and Decoder
    The following system was implemented to encode X using convolutional coding (Fig. 4):  Let X be a bit sequence of length L.  At the source, X is passed through a rate 2/3 systematic convolutional encoder.  For this type of encoder, every 2 bits of input give 3 bits of output (2 output bits are the input values themselves and 1 bit is the parity bit).  Instead of sending all 3 bits to the decoder, only the parity bit is sent, resulting in R_X = 0.5.  As a substitute for the L unsent systematic bits, the decoder makes use of the side information Y: it combines the parity bit stream X_P with Y to calculate an estimate of X.  The decoder implements either the Viterbi maximum-likelihood sequence detection (MLSD) algorithm or the more complex a posteriori probability (APP) algorithm to choose the best estimate of X. The Viterbi algorithm chooses the most likely sequence, maximizing the probability P(X'|X_P, Y), while the APP algorithm chooses the most likely bits, maximizing the probabilities P(X'_i|X_P, Y).
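The parity-only transmission idea can be sketched with a toy feedforward rate 2/3 systematic encoder. The parity rule below is an illustrative placeholder, much weaker than the maximum free distance codes listed in Table 1:

```python
def encode_parity_only(x):
    """Toy rate 2/3 systematic convolutional encoder, keeping only parity.

    Inputs are consumed two bits per step (u1, u2); the parity bit is a
    simple memory-1 checksum p_t = u1_t ^ u2_t ^ u1_{t-1} ^ u2_{t-1}
    (an illustrative rule, not one of the Table 1 codes).  Only the
    parity stream is transmitted, so R_X = (L/2)/L = 0.5.
    """
    assert len(x) % 2 == 0
    parity = []
    prev1 = prev2 = 0                 # encoder memory, initialized to zero
    for t in range(0, len(x), 2):
        u1, u2 = x[t], x[t + 1]
        parity.append(u1 ^ u2 ^ prev1 ^ prev2)
        prev1, prev2 = u1, u2
    return parity                     # len(x) // 2 bits

x = [1, 0, 1, 1, 0, 0]
xp = encode_parity_only(x)
print(xp)  # one parity bit per pair of input bits
```

The decoder never sees the systematic bits; it must recover them by running Viterbi or APP decoding over the code trellis, using Y as the noisy stand-in for the systematic stream.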

    Given that R_X = 0.5, the Slepian-Wolf theorem asserts that it is possible to send X without error through this system if H(X|Y) <= 0.5.  For the given X-Y dependency with crossover probability p, H(X|Y) is calculated as

H(X|Y) = H(p) = -p log2(p) - (1-p) log2(1-p).
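This condition can be checked numerically. For the binary entropy above, H(p) = 0.5 at roughly p = 0.11, so that is the largest crossover probability the R_X = 0.5 system could handle in principle (a quick sketch):

```python
from math import log2

def h_binary(p):
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# For the BSC dependency, H(X|Y) = H(p).  Bisect for H(p) = 0.5 on
# [0, 0.5], where H is monotonically increasing.
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h_binary(mid) < 0.5 else (lo, mid)
print(f"H(p) = 0.5 at p = {lo:.4f}")
```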

    Maximum free distance rate 2/3 systematic convolutional codes with varying numbers of states were used in the experiments.  More states in the code entail more complex decoding but give better coding performance.  Table 1 lists the convolutional code generators (in octal form) used [12].

 
# of states    h0(D)    h1(D)    h2(D)
      8          17       15       13
     32         121      113      137
    256         563      601      475
   2048        5575     7377     4033

Table 1 – Rate 2/3 maximum free distance convolutional code generators
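The octal generators expand into binary coefficient vectors by reading each octal digit as three bits, MSB first (a quick sketch; the values are transcribed from Table 1):

```python
# Expand the Table 1 octal generators into binary coefficient strings.
# Each octal digit encodes three polynomial coefficients, MSB first;
# Python's int(g, 8) performs the octal conversion.
table = {
    8:    ("17", "15", "13"),
    32:   ("121", "113", "137"),
    256:  ("563", "601", "475"),
    2048: ("5575", "7377", "4033"),
}

for states, gens in table.items():
    bits = [format(int(g, 8), "b") for g in gens]
    print(f"{states:5d} states: {bits}")
```

For example, h0 = 17 (octal) expands to the coefficient vector 1111, and h0 = 5575 expands to 101101111101.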

Simulation Results

    The Viterbi decoding algorithm was used with the 8-, 32-, 256- and 2048-state convolutional encoders applied to bit sequences of length L = 10^3.  The results, derived from a total of 10^6 sample bits (Fig. 5), show that in spite of the probability of error improvement from additional coding states, the system performance is still very far from the ideal Slepian-Wolf bound.  The APP decoding algorithm improves the performance only slightly compared to the Viterbi system, as can be seen in Fig. 6.

Figure 5 – Probability of error, Pe, for the convolutional coded system with different numbers of states.
Figure 6 – Pe for the convolutional coded system with either Viterbi or APP decoding.

Comparison with Theoretical Performance of Convolutional Codes

    As discussed above, the distributed source coding of X with side information Y can be viewed as a channel coding problem.  The specific scenario used, with X and Y related by a BSC with crossover probability p, can be interpreted as a simple channel coded system composed of a rate 2/3 systematic convolutional code.  Since the L/3 parity bits are sent without error, the "channel" would be a BSC with crossover probability p' = (2/3)p (Fig. 7).  A theoretical probability of error can be calculated using the traditional performance analysis often applied to convolutional codes, that is, Pe ~ Ne [2 sqrt(p'(1-p'))]^dmin, where Ne and dmin are specific to the code used.  (This analysis is an estimate, since there is added information in knowing that the parity bits arrive at the decoder with no crossovers.)

     Figure 7 – The convolutional coded system viewed as a BSC with p' = (2/3)p.
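The free-distance estimate is easy to evaluate numerically. In the sketch below, Ne and dmin are placeholders, not the values of any specific Table 1 code:

```python
from math import sqrt

def pe_estimate(p, n_e, d_min):
    """First-event error estimate for a convolutional code on a BSC.

    Uses the dominant free-distance term of the union-Bhattacharyya
    bound, Pe ~ Ne * (2*sqrt(p'(1-p')))**dmin, with p' = (2/3)p since
    the parity bits arrive at the decoder error-free.
    """
    p_eff = (2.0 / 3.0) * p
    return n_e * (2.0 * sqrt(p_eff * (1.0 - p_eff))) ** d_min

# Ne and dmin below are illustrative placeholders; a real code's values
# come from its distance spectrum.
print(pe_estimate(0.01, n_e=1, d_min=4))
```

The estimate decays exponentially in dmin, which is why codes with more states (and larger free distance) perform better in Fig. 8, yet still fall short of the Slepian-Wolf bound.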

    Fig. 8 compares the simulation results with the performance-analysis estimates.  Like the simulation results, the theoretical calculations show that acceptable probabilities of error occur only at H(X|Y) values far below the Slepian-Wolf bound, even when a large number of coding states is used.

Figure 8 – Pe from simulation results and from theoretical calculations for the system using convolutional codes with p' = (2/3)p.