Huffman Decoding

Points: 5
Time limit: 2.0s
Memory limit: 64M

In 1952, David Huffman published a paper detailing a method for text compression. The text compression algorithm known as Huffman Coding takes a string and assigns a binary sequence to each unique character in the string. The length of each sequence depends on the frequency of the corresponding character in the string. This makes optimal use of space by using fewer bits for the more common characters. For example, in the string AABCAA, the character A may be assigned a 2-bit sequence, while the characters B and C may each be assigned 3-bit sequences.
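For intuition (this is not required to solve the problem, which only asks for decoding), the code lengths come from a binary tree built by repeatedly merging the two least frequent symbols. A minimal sketch of that construction for the string AABCAA, with an assumed deterministic tie-break:

```python
import heapq
from collections import Counter

def build_codes(text):
    """Build a Huffman code (char -> bit string) from character frequencies."""
    freq = Counter(text)
    # Heap entries are (frequency, tiebreak, node); a node is either a
    # character or a (left, right) pair of nodes.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees into one node.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):
            # Leaf: record the path taken to reach this character.
            codes[node] = prefix or "0"
            return
        walk(node[0], prefix + "0")
        walk(node[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

codes = build_codes("AABCAA")
```

With only three distinct characters, this particular construction gives A a 1-bit code and B and C 2-bit codes; the exact sequences depend on tie-breaking, which is why the problem supplies the dictionary rather than asking you to build it.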

Your job is to take in a dictionary mapping characters to their respective binary sequences and decode a string that has been encoded using Huffman Coding.

Input Specification

The first line of input will be n (2 \le n \le 26), the size of the dictionary: the number of distinct characters that have an associated binary sequence. Each of the next n lines will contain a character, followed by a space and its corresponding binary sequence. Each of these lines will be a unique dictionary entry. The final line will be the string to decode. It is guaranteed that this string contains only 1s and 0s, with no spaces. The length of this string will be \le 10^{99} characters.

Output Specification

Output a single string, the decoded text sequence using Huffman Coding.
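Because Huffman codes are prefix-free (no code is a prefix of another), the encoded string can be decoded greedily in a single left-to-right pass: accumulate bits until they match a dictionary entry, emit that character, and reset. A minimal sketch, using the codebook from the sample input below with a made-up encoded string for illustration:

```python
def huffman_decode(codebook, bits):
    """Decode a bit string using a char -> code dictionary."""
    # Invert the mapping so each complete code can be looked up directly.
    inverse = {code: ch for ch, code in codebook.items()}
    decoded = []
    buffer = ""
    for bit in bits:
        buffer += bit
        # The code is prefix-free, so the first dictionary hit is
        # always the correct next character.
        if buffer in inverse:
            decoded.append(inverse[buffer])
            buffer = ""
    return "".join(decoded)

# The codebook from the sample input; the encoded string is illustrative.
codebook = {"A": "00", "B": "01", "C": "10", "D": "110"}
print(huffman_decode(codebook, "000110110"))  # prints "ABCD"
```

This runs in time linear in the length of the encoded string, which matters given the large bound on its length.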

Sample Input

4
A 00
B 01
C 10
D 110

Output for Sample Input


