Huffman Encoding


Points: 5
Time limit: 2.0s
Memory limit: 62M


In 1952, David Huffman published a paper detailing a method for text compression. The text compression algorithm known as Huffman Coding takes a string and assigns a binary sequence to each unique character in the string. The length of each sequence depends on the frequency of the corresponding character in the string. This makes optimal use of space by using fewer bits for the more common characters. For example, in the string AABCAA, the character A may be assigned a 2-bit sequence, while the characters B and C may be assigned 3-bit sequences.

Your job is to take in a dictionary containing characters and their respective binary sequences, and encode a string using Huffman Coding.

Input Specification

The first line of input will be n (2 \le n \le 26), representing the size of the dictionary. This is the number of different characters that will have an associated binary sequence. The next n lines will each contain a character, followed by a space and its corresponding binary sequence. Each of these lines will be a unique dictionary entry. The next line will be the string that needs to be encoded. It is guaranteed that this string contains only capital letters from A-Z with no spaces and is no greater than 10^{99} characters in length.

Output Specification

Output a single string, the encoded binary sequence using Huffman Coding.

Sample Input

4
A 00
B 01
C 10
D 110
AABCCBAADBDDABD

Output for Sample Input

0000011010010000110011101100001110
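Since the binary sequences are given, the encoding step is a straightforward dictionary lookup for each character of the string, concatenated in order. A minimal sketch in Python (the function name `huffman_encode` is ours, not part of the problem):

```python
def huffman_encode(codes, message):
    # Look up each character's binary sequence and concatenate them in order.
    return "".join(codes[ch] for ch in message)

# Dictionary built from the sample input (character -> binary sequence).
sample_codes = {"A": "00", "B": "01", "C": "10", "D": "110"}
encoded = huffman_encode(sample_codes, "AABCCBAADBDDABD")
print(encoded)  # matches the sample output above
```

A full submission would also need to read n, the n dictionary lines, and the message from standard input before calling such a function.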
