Improvement of reconciliation for quantum key distribution


Title: Improvement of reconciliation for quantum key distribution
Author: Chen, Keath
Abstract: Quantum key distribution (QKD) is meant to be an ultimate computer security system, one that will not need periodic upgrades or overhauls. QKD allows the generation of keys of arbitrary length on demand. Coupled with the unconditionally secure "one-time pad" encryption system, QKD is unbreakable even by an eavesdropper with unlimited resources. Numerous experimental QKD prototypes have demonstrated that QKD is likely to become a reality before the quantum computer. One step in a quantum key distribution protocol is to remove the transmission errors from the quantum communication, which typically has a high error rate (one percent or higher). An interactive error-control method is used in the reconciliation step of QKD. The procedure divides the transmitted bits into blocks, with the block size chosen so that the chance of multiple error bits falling in one block is small. By checking block parities and performing an interactive binary search (BINARY) whenever a parity error is found, error bits can be located and removed. Some error bits escape detection on the first pass; by repeating the procedure several times, each time randomly dividing the bits into blocks, most error bits can be detected and removed. Each parity check costs one bit. The goal is to minimize the number of parity checks needed to locate all (or most) errors, while ensuring with high reliability that the remaining bits have a very small residual error rate. Brassard and Salvail devised a better error-control procedure in 1993: by keeping track of block parity information from pass to pass, a method they called CASCADE, the number of parity checks per error bit is reduced. They observed no errors left after four passes in 10 simulation runs; however, the upper bound they offered on the residual error rate was much higher than what was observed in simulation. There is therefore a need to find the optimal block size and to improve the residual-error-rate analysis of this procedure.
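The block-parity procedure described above can be sketched as follows. This is a minimal illustration of the general technique, not the thesis's own code; the function names and parameters are my own, and the demo uses the one-percent error rate mentioned in the abstract.

```python
import random

def parity(bits, idx):
    """Parity of the bits at the given index positions."""
    return sum(bits[i] for i in idx) % 2

def binary_locate(alice, bob, idx):
    """Interactive binary search (BINARY): given a block whose parities
    disagree, halve the block repeatedly, each halving costing one
    parity exchange, until a single error bit is isolated."""
    while len(idx) > 1:
        half = idx[:len(idx) // 2]
        if parity(alice, half) != parity(bob, half):
            idx = half          # the mismatch lies in the first half
        else:
            idx = idx[len(idx) // 2:]
    return idx[0]

def reconcile_pass(alice, bob, block_size):
    """One pass: randomly divide positions into blocks and fix every
    block whose parity disagrees (i.e. holds an odd number of errors)."""
    order = list(range(len(alice)))
    random.shuffle(order)
    for start in range(0, len(order), block_size):
        block = order[start:start + block_size]
        if parity(alice, block) != parity(bob, block):
            pos = binary_locate(alice, bob, block)
            bob[pos] ^= 1       # correct the located error bit

# Demo: 1000 shared bits with a 1% transmission error rate.
random.seed(0)
n = 1000
alice = [random.randint(0, 1) for _ in range(n)]
bob = list(alice)
for i in random.sample(range(n), 10):   # introduce 10 errors
    bob[i] ^= 1

for _ in range(4):                      # repeated passes, re-shuffling
    reconcile_pass(alice, bob, block_size=16)

remaining = sum(a != b for a, b in zip(alice, bob))
print(remaining)
```

Blocks holding an even number of errors show matching parities and escape a given pass, which is why re-shuffling and repeating is needed; each parity exchange also leaks one bit to the eavesdropper, hence the drive to minimize parity checks.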
In this work, the reconciliation procedure is improved both in throughput (i.e. the number of input bits minus the number of parity checks) and in residual error rate. Throughput is doubled at high input error rates, with a smaller gain at low input error rates. The three most important factors enhancing system throughput are: 1) using a larger block size to fully exploit the power of the CASCADE method; 2) preventing any pair of bits from staying in the same block in any later pass, which allows the use of only three passes (instead of four passes in the BS procedure); and 3) not carrying into later passes those bits already known to be lost in the current pass. The residual error rate at the end of the second pass (and the third pass) for small initial block sizes is derived. The residual error rate at the end of the third pass is found to be much lower than the upper bound given by previous workers, and is a function of the input size and the block size. This error-control method shares many features and difficulties with the recently developed "turbo codes" in the error-control coding field.
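The cross-pass bookkeeping credited above to Brassard and Salvail can be sketched as below. This is a hedged illustration of the CASCADE idea under my own naming, not the thesis's procedure: blocks from earlier passes are retained, so that correcting a bit in a later pass exposes earlier blocks whose parity now disagrees, cascading further corrections.

```python
import random

def parity(bits, idx):
    return sum(bits[i] for i in idx) % 2

def binary_locate(alice, bob, idx):
    """Binary search for one error bit in a block with disagreeing parity."""
    while len(idx) > 1:
        half = idx[:len(idx) // 2]
        if parity(alice, half) != parity(bob, half):
            idx = half
        else:
            idx = idx[len(idx) // 2:]
    return idx[0]

def cascade(alice, bob, block_sizes):
    """Run one pass per entry of block_sizes, keeping all past blocks."""
    n = len(alice)
    history = []                       # blocks from all passes so far
    for size in block_sizes:
        order = list(range(n))
        random.shuffle(order)
        blocks = [order[s:s + size] for s in range(0, n, size)]
        history.extend(blocks)
        # Blocks whose parities currently disagree (odd error count).
        pending = [b for b in blocks
                   if parity(alice, b) != parity(bob, b)]
        while pending:
            block = pending.pop()
            if parity(alice, block) == parity(bob, block):
                continue               # already fixed by a cascade
            pos = binary_locate(alice, bob, list(block))
            bob[pos] ^= 1
            # The flip changes the parity of every stored block that
            # contains this bit; re-check them (the cascade effect).
            pending.extend(b for b in history
                           if pos in b
                           and parity(alice, b) != parity(bob, b))

# Demo: 1000 bits, 1% error rate, three passes with growing blocks.
random.seed(1)
n = 1000
alice = [random.randint(0, 1) for _ in range(n)]
bob = list(alice)
for i in random.sample(range(n), 10):
    bob[i] ^= 1

cascade(alice, bob, block_sizes=[8, 16, 32])
remaining = sum(a != b for a, b in zip(alice, bob))
print(remaining)
```

A block whose parity disagrees always holds an odd number of errors, so each binary search flips a genuine error bit; errors therefore only decrease, and errors caught in late passes retroactively unmask partners hidden in even-error blocks of earlier passes.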
Record URI: http://hdl.handle.net/1850/14810
Date: 2000

File: KChenThesis06-28-2000.pdf (1.176 MB, PDF)
