(Last modified November 2001. For more recent work, refer to the Wyner-Ziv
Coding of Video webpage)
DSC Using Turbo Codes
Simple convolutional coding is not enough to approach the bound of the Slepian-Wolf distributed source coding theorem. Thus, the next logical step is to apply better channel coding techniques, i.e., codes that give a lower probability of error for the same encoding rate. Turbo coding, first introduced in 1993 by Berrou et al., has been demonstrated to be a superior error-correcting channel code. Studies have shown that these codes allow systems to perform close to the Shannon channel capacity limit.
The turbo-coded system was applied to bit sequences of length L = 10^4, 10^5, and 10^6, and a maximum of 24 iterations was executed. The results are plotted in Figure 10. Each point on the plot is generated from a total of 10^7 sample bits. It can be observed from the plots that the probability of error, Pe, of the turbo-coded system has a very steep drop, similar to the "ideal" plot predicted by the Slepian-Wolf theorem. It is also very evident that the turbo-coded system produces acceptable probabilities of error at H(X|Y) values much closer to the information-theoretic bound than the convolutional-coded system does.
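The bound H(X|Y) discussed here can be made concrete under a simple correlation model. As an illustrative sketch (the specific X-Y dependency used in the simulations is not restated here), assume Y = X XOR Z with Z ~ Bernoulli(p) and X uniform; then H(X|Y) equals the binary entropy h(p):

```python
import math

def binary_entropy(p):
    """Binary entropy h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conditional_entropy_bsc(p):
    """H(X|Y) for the assumed model Y = X xor Z, Z ~ Bernoulli(p),
    X uniform: in that case H(X|Y) = h(p)."""
    return binary_entropy(p)
```

Under this model, sweeping p sweeps H(X|Y), which is how a probability-of-error curve like Figure 10 can be parameterized along the H(X|Y) axis.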
An important component of turbo coding is the "interleaver gain." It has been shown that the greater the interleaver length, the better the performance of the turbo code. The results do show that as X was grouped into larger sequence lengths, the error-free H(X|Y) values moved closer to the ideal point. One way to understand this is that the longer sequences and more "random" interleaving bring the system closer to the random coding and asymptotic equipartition assumptions made in the proof of the Slepian-Wolf theorem.
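A minimal sketch of the interleaver component, assuming a fixed pseudo-random permutation shared by the encoder and decoder (the exact interleaver design used in the study is not specified here):

```python
import random

def make_interleaver(length, seed=0):
    """A pseudo-random interleaver is just a fixed permutation of bit
    positions, agreed on by encoder and decoder ahead of time."""
    rng = random.Random(seed)
    perm = list(range(length))
    rng.shuffle(perm)
    return perm

def interleave(bits, perm):
    """Reorder the input bits according to the permutation."""
    return [bits[i] for i in perm]

def deinterleave(bits, perm):
    """Invert the permutation to restore the original bit order."""
    out = [0] * len(perm)
    for k, i in enumerate(perm):
        out[i] = bits[k]
    return out
```

Deinterleaving exactly inverts interleaving, so the decoder can pass extrinsic information between the two constituent decoders in the correct bit order.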
Turbo Codes with Puncturing
To send X at lower rates, it is possible to apply the concept of turbo code puncturing. An alternative turbo-coded system was implemented wherein X was sent at a rate of RX = 0.25. This was done by puncturing half of the parity bits from each constituent convolutional encoder. Ideally, the system should have close to zero errors at H(X|Y) ≤ 0.25. The results of the simulations (10^7 sample bits per point) are shown in Figure 11. Like the previous results, the punctured turbo code produces steep probability-of-error curves, and larger sequence lengths shift the plot closer to the ideal curve.
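A sketch of periodic puncturing, under the assumption that only parity bits are transmitted in this setup, so that deleting every other parity bit from each constituent stream halves the transmission rate; the decoder treats the deleted positions as erasures:

```python
def puncture(parity, keep_pattern=(1, 0)):
    """Delete parity bits according to a periodic puncturing pattern.
    keep_pattern=(1, 0) keeps every other bit, halving the rate."""
    period = len(keep_pattern)
    return [b for i, b in enumerate(parity) if keep_pattern[i % period]]

def depuncture(punctured, length, keep_pattern=(1, 0)):
    """Reinsert erasures (None) at the punctured positions so the
    decoder can run over the full-length parity stream."""
    period = len(keep_pattern)
    it = iter(punctured)
    return [next(it) if keep_pattern[i % period] else None
            for i in range(length)]
```

The punctured positions carry no channel information, so the decoder assigns them a neutral (zero) log-likelihood ratio; the pattern and its phase are assumptions for illustration, not the pattern used in the study.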
Figure 11 - Pe results for punctured turbo code.
The results in this study suggest that applying turbo codes to distributed source coding is very promising. The simulation results show that acceptable probabilities of error can be achieved at H(X|Y) values close to the Slepian-Wolf bound. The method can easily be extended to other X-Y dependency structures by calculating the a priori and channel probabilities relative to the specific dependency. Furthermore, turbo encoders are very simple, making them reasonable to use in distributed system scenarios wherein low-complexity encoders communicate with a more complex central processing unit.
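As an illustration of calculating the channel probabilities for a specific dependency, the sketch below assumes the X-Y correlation behaves like a binary symmetric channel with crossover probability p (an assumed model, not necessarily the one used in the study) and converts the side information Y into per-bit log-likelihood ratios for the decoder:

```python
import math

def channel_llrs(y_bits, p):
    """Per-bit log-likelihood ratios log P(y|x=0) / P(y|x=1),
    treating the X-Y dependency as a virtual BSC with crossover
    probability p. A received y = 0 favors x = 0 (positive LLR);
    y = 1 favors x = 1 (negative LLR)."""
    llr_agree = math.log((1 - p) / p)
    return [llr_agree if y == 0 else -llr_agree for y in y_bits]
```

Changing the dependency model only changes how these likelihoods are computed; the rest of the turbo decoding machinery stays the same, which is why the extension to other X-Y structures is straightforward.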