In 1952, David Huffman published a paper detailing a method for text compression.
The text compression algorithm known as Huffman Coding takes a string and assigns a binary sequence to each unique character in the string. The length of each sequence depends on the frequency of the corresponding character in the string, which makes efficient use of space by using fewer bits for the more common characters. For example, in the string AABCAA, the character A may be assigned a 2-bit sequence, while the characters B and C may each be assigned 3-bit sequences.
Your job is to take in a dictionary containing characters and their respective binary sequences, and to decode a string that has been encoded using Huffman Coding.
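Since a Huffman code is prefix-free (no code is a prefix of another), the encoded string can be decoded by scanning the bits left to right and accumulating bits until they match a dictionary entry; the first match is always a complete character. A minimal sketch of this idea, using the dictionary from the sample input below:

```python
def decode(codes, bits):
    """Decode a bit string using a prefix-free character-to-code mapping."""
    # Invert the mapping so binary sequences can be looked up directly.
    lookup = {code: ch for ch, code in codes.items()}
    result = []
    current = ""
    for bit in bits:
        current += bit
        # Because no code is a prefix of another, a match is always a full character.
        if current in lookup:
            result.append(lookup[current])
            current = ""
    return "".join(result)

# Dictionary taken from the sample input; the short bit string is illustrative only.
print(decode({"A": "00", "B": "01", "C": "10", "D": "110"}, "0001110"))  # prints "ABD"
```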
Input Specification
The first line of input will be a single integer N, representing the size of the dictionary. This is the number of different characters that will have an associated binary sequence.
The next N lines will each contain a character, followed by a space and its corresponding binary sequence. Each of these lines will be a unique dictionary entry.
The next line will be the string that needs to be decoded. It is guaranteed that this string contains only 1s and 0s, with no spaces. The length of this string will be characters.
Output Specification
Output a single string: the text obtained by decoding the input sequence with the given Huffman Coding dictionary.
Sample Input
4
A 00
B 01
C 10
D 110
0000011010010000110011101100001110
Output for Sample Input
AABCCBAADBDDABD
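For reference, a short sketch in Python that reads input in the format described above and applies the greedy prefix-matching decode. It assumes each dictionary character is a single non-whitespace character, as in the sample; run on the sample input above, it prints AABCCBAADBDDABD.

```python
import sys

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    # The next n pairs of tokens are "character binary_sequence" dictionary entries.
    lookup = {}
    idx = 1
    for _ in range(n):
        ch, code = data[idx], data[idx + 1]
        lookup[code] = ch
        idx += 2
    encoded = data[idx]
    # Greedily match accumulated bits against the dictionary, as described above.
    out = []
    current = ""
    for bit in encoded:
        current += bit
        if current in lookup:
            out.append(lookup[current])
            current = ""
    print("".join(out))

if __name__ == "__main__":
    main()
```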