
(Last modified November 2001. For more recent work, refer to the Wyner-Ziv
Coding of Video webpage)

**DSC Using Turbo Codes**

Simple convolutional coding is not enough to approach the bound of the Slepian-Wolf distributed source coding theorem. The next logical step is therefore to apply better channel coding techniques – codes that give a lower probability of error for the same encoding rate. Turbo codes, first introduced in 1993 by Berrou et al. [13], have been demonstrated to be superior error-correcting channel codes; studies have shown that they allow systems to perform close to the Shannon channel capacity limit [13].

A system based on turbo codes, composed of two parallel concatenated convolutional codes, was implemented (Fig. 9). The bit sequence **X** is fed into a 16-state, rate-4/5 systematic convolutional encoder with generator matrix h_{0} = 23, h_{1} = 35, h_{2} = 31, h_{3} = 37, and h_{4} = 27 [14]. In parallel, a randomly interleaved version of **X** is used as input to another convolutional encoder of the same type. At both encoder outputs, only the parity bits (1 parity bit for every 4 input bits), **X**_{P,enc1} and **X**_{P,enc2}, are sent to the decoder, resulting in R_{X} = 0.25 + 0.25 = 0.5.
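The encoder-side structure and the rate arithmetic can be sketched as follows. This is a toy illustration only: `conv_parity` uses an arbitrary feedback shift register, not the actual octal generators (23, 35, 31, 37, 27) of the 16-state code in [14]; what it shares with the real system is the parallel concatenation, the random interleaver, and the 1-parity-bit-per-4-input-bits rate.

```python
import random

def conv_parity(bits, memory=4):
    """Toy rate-4/5 systematic convolutional encoder: emit 1 parity bit
    per 4 input bits from a recursive shift register.  The taps are
    illustrative placeholders, not the generators used in [14]."""
    state = 0
    parity = []
    for i in range(0, len(bits) - len(bits) % 4, 4):
        for b in bits[i:i + 4]:
            fb = b ^ (state & 1)                       # recursive feedback tap
            state = ((state >> 1) | (fb << (memory - 1))) & ((1 << memory) - 1)
        parity.append(bin(state).count("1") & 1)       # one parity bit per block
    return parity

def pcc_encode(x, interleaver):
    """Parallel concatenation: parity from X and from interleaved X.
    Only these parity streams are sent; the systematic bits are dropped."""
    p1 = conv_parity(x)
    p2 = conv_parity([x[j] for j in interleaver])
    return p1, p2

L = 10**4
x = [random.randint(0, 1) for _ in range(L)]
pi = list(range(L))
random.shuffle(pi)                                     # random interleaver
p1, p2 = pcc_encode(x, pi)
rate = (len(p1) + len(p2)) / L                         # 0.25 + 0.25 = 0.5
```

Each constituent encoder contributes L/4 parity bits, which is where the total rate R_{X} = 0.5 comes from.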
The decoder is composed of two soft-input soft-output (SISO) constituent decoders. Each SISO decoder uses *a priori* probabilities (P_{apriori}) for **X** and the “channel” probabilities (P_{channel}), calculated from the side information **Y** and the corresponding parity bits **X**_{P,enci}, to compute extrinsic probabilities (P_{extrinsic}) and *a posteriori* probabilities (P_{app}) for **X**. Iterative decoding is performed by passing the extrinsic probability results of one SISO decoder as the *a priori* probabilities of the other SISO decoder. Iterations between the two constituent decoders are performed until satisfactory convergence is reached. The final estimate **X'** is obtained by thresholding the *a posteriori* probabilities produced by the final iteration.
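The iteration schedule above can be sketched in log-likelihood-ratio (LLR) form. This is a hedged sketch: the constituent SISO decoders (full BCJR decoders in practice) are passed in as plain functions, and the channel LLRs assume the X-Y dependency is modelled as a virtual binary symmetric channel with crossover probability `p` – an assumption for illustration, not a detail stated in the original text.

```python
import math

def channel_llrs(y, p):
    # LLR log P(y_i | x_i=0) / P(y_i | x_i=1) under an assumed BSC(p)
    # correlation between X and the side information Y.
    c = math.log((1 - p) / p)
    return [c if yi == 0 else -c for yi in y]

def turbo_decode(llr_ch, siso1, siso2, perm, n_iter=24):
    """Two-SISO iteration schedule: the extrinsic output of one decoder
    is (de)interleaved and fed to the other as its a priori input.
    siso1/siso2 are stand-ins for full SISO (e.g. BCJR) decoders with
    signature (channel_llrs, apriori_llrs) -> extrinsic_llrs."""
    n = len(llr_ch)
    inv = [0] * n
    for k, j in enumerate(perm):
        inv[j] = k                                   # inverse permutation
    llr_ch_int = [llr_ch[j] for j in perm]           # channel LLRs, interleaved
    apriori = [0.0] * n
    for _ in range(n_iter):
        ext1 = siso1(llr_ch, apriori)                        # natural order
        ext2 = siso2(llr_ch_int, [ext1[j] for j in perm])    # interleaved order
        apriori = [ext2[inv[i]] for i in range(n)]           # deinterleave
    # A posteriori LLR = channel + a priori + extrinsic; threshold -> X'
    app = [llr_ch[i] + apriori[i] + ext1[i] for i in range(n)]
    return [0 if l >= 0 else 1 for l in app]
```

With degenerate SISOs that return all-zero extrinsic values, the decision reduces to thresholding the channel LLRs alone, i.e. trusting the side information **Y** directly; real constituent decoders add the parity-based extrinsic information on top of this.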

The turbo-coded system was applied to bit sequences of length L = 10^{4}, 10^{5}, and 10^{6}, with a maximum of 24 iterations executed. The results are plotted in Figure 10; each point on the plot is generated from a total of 10^{7} sample bits. It can be observed from the plots that the probability of error, P_{e}, of the turbo-coded system has a very steep drop, similar to the “ideal” plot predicted by the Slepian-Wolf theorem. It is also evident that the turbo-coded system produces acceptable probabilities of error at H(**X**|**Y**) values much closer to the information-theoretic bound than the convolutional-coded system does.

**Figure 10** - P_{e} results for the turbo-coded system, bit sequences of length L = 10^{4}, 10^{5}, and 10^{6}.

An important component of turbo coding is the “interleaver gain”: it has been shown that the greater the interleaver length, the better the performance of the turbo code. The results do show that as **X** was grouped into larger sequence lengths, the *error-free* H(**X**|**Y**) values moved closer to the ideal point. One way to understand this is that longer sequences and more “random” interleaving bring the system closer to the random coding and asymptotic equipartition assumptions used in the proof of the Slepian-Wolf theorem.

**Turbo Codes with Puncturing**

To send **X** at lower rates, it is possible to apply the concepts of turbo code puncturing. An alternative turbo-coded system was implemented wherein **X** was sent at a rate of R_{X} = 0.25. This was done by puncturing half of the parity bits from each constituent convolutional encoder. Ideally, the system should have close to zero errors at H(**X**|**Y**) ≤ 0.25. The results of the simulations (10^{7} sample bits per point) are shown in Figure 11. Like the previous results, the punctured turbo code produces steep probability-of-error curves, and larger sequence lengths shift the plot closer to the ideal curve.

**Figure 11** - P_{e}
results for punctured turbo code.
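The puncturing step is simple rate arithmetic: dropping every other parity bit halves each constituent stream from L/4 to L/8 bits, giving R_{X} = 0.125 + 0.125 = 0.25. A minimal sketch (the regular keep-one-in-two pattern is an assumption; the text only states that half the parity bits were punctured):

```python
def puncture(parity, period=2):
    # Keep one of every `period` parity bits; period=2 drops half the
    # bits of each constituent encoder's stream.
    return parity[::period]

L = 10**4
stream = list(range(L // 4))   # stand-in for one encoder's L/4 parity bits
sent = puncture(stream)        # L/8 bits survive per encoder
rate = 2 * len(sent) / L       # two encoders: 0.125 + 0.125 = 0.25
```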

The results of this study suggest that applying turbo codes to distributed source coding is very promising. The simulation results show that acceptable probabilities of error can be achieved at H(**X**|**Y**) values close to the Slepian-Wolf bound. The method can easily be extended to other X-Y dependency structures by calculating the *a priori* and channel probabilities relative to the specific dependency. Furthermore, turbo encoders are very simple, making them reasonable to use in distributed scenarios wherein low-complexity nodes communicate with a more complex central processing unit.