Hamming weight

The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length. For the most typical case, a string of bits, this is the number of 1's in the string, or the digit sum of the binary representation of a given number and the ℓ₁ norm of a bit vector. In this binary case, it is also called the population count,[1] popcount, sideways sum,[2] or bit summation.[3]

Examples
String Hamming weight
11101 4
11101000 4
00000000 0
789012340567 10

History and usage

The Hamming weight is named after Richard Hamming although he did not originate the notion.[4] The Hamming weight of binary numbers was already used in 1899 by James W. L. Glaisher to give a formula for the number of odd binomial coefficients in a single row of Pascal's triangle.[5] Irving S. Reed introduced a concept, equivalent to Hamming weight in the binary case, in 1954.[6]

Hamming weight is used in several disciplines including information theory, coding theory, and cryptography. Examples of applications of the Hamming weight include:

  • In modular exponentiation by squaring, the number of modular multiplications required for an exponent e is log₂ e + weight(e). This is the reason that the public key value e used in RSA is typically chosen to be a number of low Hamming weight (a sketch of the operation count follows this list).
  • The Hamming weight determines path lengths between nodes in Chord distributed hash tables.[7]
  • IrisCode lookups in biometric databases are typically implemented by calculating the Hamming distance to each stored record.
  • In computer chess programs using a bitboard representation, the Hamming weight of a bitboard gives the number of pieces of a given type remaining in the game, or the number of squares of the board controlled by one player's pieces, and is therefore an important contributing term to the value of a position.
  • Hamming weight can be used to efficiently compute find first set using the identity ffs(x) = pop(x ^ (~(-x))) (a sketch follows this list). This is useful on platforms such as SPARC that have hardware Hamming weight instructions but no hardware find first set instruction.[8][1]
  • The Hamming weight operation can be interpreted as a conversion from the unary numeral system to binary numbers.[9]
  • In implementation of some succinct data structures like bit vectors and wavelet trees.
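
The operation count quoted in the modular-exponentiation item above can be seen in a right-to-left square-and-multiply loop. The following is a minimal sketch, not code from the sources cited here; the name modpow and the restriction to moduli below 2^32 (so that intermediate products fit in 64 bits) are illustrative assumptions. The multiplication in the if branch executes once per 1-bit of e and the squaring once per bit position, giving roughly log₂ e + weight(e) modular multiplications in total.

//Sketch only: right-to-left square-and-multiply (assumes m < 2^32)
uint32_t modpow(uint32_t base, uint32_t e, uint32_t m)
{
    uint64_t result = 1 % m;
    uint64_t b = base % m;
    while (e) {
        if (e & 1)
            result = (result * b) % m; //executed weight(e) times
        b = (b * b) % m;               //executed about log2(e) times
        e >>= 1;
    }
    return (uint32_t)result;
}

The find first set identity can be checked the same way. In two's complement arithmetic ~(-x) equals x - 1, so x ^ (~(-x)) sets every bit up to and including the lowest set bit of x, and its population count is that bit's 1-based position. The helper below is a sketch that assumes one of the popcount implementations given in the next section (here popcount64c):

//Sketch only: find first set via population count
int ffs_via_popcount(uint64_t x)
{
    if (x == 0)
        return 0;                  //conventional result for x == 0
    return popcount64c(x ^ ~(-x)); //equivalently popcount64c(x ^ (x - 1))
}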

Efficient implementation

The population count of a bitstring is often needed in cryptography and other applications. The Hamming distance of two words A and B can be calculated as the Hamming weight of A xor B.[1]
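
For example, a sketch of this XOR-based distance, using the popcount64c routine defined later in this section:

//Sketch only: Hamming distance as the weight of an XOR
int hamming_distance64(uint64_t a, uint64_t b)
{
    return popcount64c(a ^ b);
}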

The problem of how to implement it efficiently has been widely studied. Some processors have a single command to calculate it (see below), and some have parallel operations on bit vectors. For processors lacking those features, the best solutions known are based on adding counts in a tree pattern. For example, to count the number of 1 bits in the 16-bit binary number a = 0110 1100 1011 1010, these operations can be done:

Expression                              | Binary                  | Decimal         | Comment
a                                       | 01 10 11 00 10 11 10 10 |                 | The original number
b0 = (a >> 0) & 01 01 01 01 01 01 01 01 | 01 00 01 00 00 01 00 00 | 1,0,1,0,0,1,0,0 | every other bit from a
b1 = (a >> 1) & 01 01 01 01 01 01 01 01 | 00 01 01 00 01 01 01 01 | 0,1,1,0,1,1,1,1 | the remaining bits from a
c = b0 + b1                             | 01 01 10 00 01 10 01 01 | 1,1,2,0,1,2,1,1 | list giving # of 1s in each 2-bit slice of a
d0 = (c >> 0) & 0011 0011 0011 0011     | 0001 0000 0010 0001     | 1,0,2,1         | every other count from c
d2 = (c >> 2) & 0011 0011 0011 0011     | 0001 0010 0001 0001     | 1,2,1,1         | the remaining counts from c
e = d0 + d2                             | 0010 0010 0011 0010     | 2,2,3,2         | list giving # of 1s in each 4-bit slice of a
f0 = (e >> 0) & 00001111 00001111       | 00000010 00000010       | 2,2             | every other count from e
f4 = (e >> 4) & 00001111 00001111       | 00000010 00000011       | 2,3             | the remaining counts from e
g = f0 + f4                             | 00000100 00000101       | 4,5             | list giving # of 1s in each 8-bit slice of a
h0 = (g >> 0) & 0000000011111111        | 0000000000000101        | 5               | every other count from g
h8 = (g >> 8) & 0000000011111111        | 0000000000000100        | 4               | the remaining counts from g
i = h0 + h8                             | 0000000000001001        | 9               | the final answer of the 16-bit word

Here, the operations are as in the C programming language, so X >> Y means to shift X right by Y bits, X & Y means the bitwise AND of X and Y, and + is ordinary addition. The best algorithms known for this problem are based on the concept illustrated above and are given here:[1]

//types and constants used in the functions below
//uint64_t is an unsigned 64-bit integer variable type (defined in C99 version of C language)
const uint64_t m1  = 0x5555555555555555; //binary: 0101...
const uint64_t m2  = 0x3333333333333333; //binary: 00110011..
const uint64_t m4  = 0x0f0f0f0f0f0f0f0f; //binary:  4 zeros,  4 ones ...
const uint64_t m8  = 0x00ff00ff00ff00ff; //binary:  8 zeros,  8 ones ...
const uint64_t m16 = 0x0000ffff0000ffff; //binary: 16 zeros, 16 ones ...
const uint64_t m32 = 0x00000000ffffffff; //binary: 32 zeros, 32 ones
const uint64_t h01 = 0x0101010101010101; //the sum of 256 to the power of 0,1,2,3...

//This is a naive implementation, shown for comparison,
//and to help in understanding the better functions.
//This algorithm uses 24 arithmetic operations (shift, add, and).
int popcount64a(uint64_t x)
{
    x = (x & m1 ) + ((x >>  1) & m1 ); //put count of each  2 bits into those  2 bits 
    x = (x & m2 ) + ((x >>  2) & m2 ); //put count of each  4 bits into those  4 bits 
    x = (x & m4 ) + ((x >>  4) & m4 ); //put count of each  8 bits into those  8 bits 
    x = (x & m8 ) + ((x >>  8) & m8 ); //put count of each 16 bits into those 16 bits 
    x = (x & m16) + ((x >> 16) & m16); //put count of each 32 bits into those 32 bits 
    x = (x & m32) + ((x >> 32) & m32); //put count of each 64 bits into those 64 bits 
    return x;
}

//This uses fewer arithmetic operations than any other known  
//implementation on machines with slow multiplication.
//This algorithm uses 17 arithmetic operations.
int popcount64b(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits 
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits 
    x += x >>  8;  //put count of each 16 bits into their lowest 8 bits
    x += x >> 16;  //put count of each 32 bits into their lowest 8 bits
    x += x >> 32;  //put count of each 64 bits into their lowest 8 bits
    return x & 0x7f;
}

//This uses fewer arithmetic operations than any other known  
//implementation on machines with fast multiplication.
//This algorithm uses 12 arithmetic operations, one of which is a multiply.
int popcount64c(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits 
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits 
    return (x * h01) >> 56;  //returns left 8 bits of x + (x<<8) + (x<<16) + (x<<24) + ... 
}

The above implementations have the best worst-case behavior of any known algorithm. However, when a value is expected to have few nonzero bits, it may instead be more efficient to use algorithms that count these bits one at a time. As Wegner (1960) described,[10] the bitwise AND of x with x − 1 differs from x only in zeroing out the least significant nonzero bit: subtracting 1 changes the rightmost string of 0s to 1s, and changes the rightmost 1 to a 0. If x originally had n bits that were 1, then after only n iterations of this operation, x will be reduced to zero. The following implementation is based on this principle.

//This is better when most bits in x are 0
//This algorithm works the same for all data sizes.
//This algorithm uses 3 arithmetic operations and 1 comparison/branch per "1" bit in x.
int popcount64d(uint64_t x)
{
    int count;
    for (count=0; x; count++)
        x &= x - 1;
    return count;
}

If we are allowed greater memory usage, we can calculate the Hamming weight faster than the above methods. With unlimited memory, we could simply create a large lookup table of the Hamming weight of every 64-bit integer. If we can store a lookup table of the Hamming weight of every 16-bit integer, we can do the following to compute the Hamming weight of any 32-bit integer.

static uint16_t wordbits[65536] = { /* bitcounts of integers 0 through 65535, inclusive */ };
//This algorithm uses 3 arithmetic operations and 2 memory reads.
int popcount32e(uint32_t x)
{
    return wordbits[x & 0xFFFF] + wordbits[x >> 16];
}
//Optionally, the wordbits[] table could be filled using this function
void popcount32e_init(void)
{
    uint32_t i;
    uint16_t x;
    int count;
    for (i=0; i <= 0xFFFF; i++)
    {
        x = i;
        for (count=0; x; count++) // borrowed from popcount64d() above
            x &= x - 1;
        wordbits[i] = count;
    }
}


Muła et al.[11] have shown that a vectorized version of popcount64b can run faster than dedicated instructions (e.g., popcnt on x64 processors).

Language support

Some C compilers provide intrinsic functions that offer bit-counting facilities. For example, GCC (since version 3.4 in April 2004) includes a builtin function __builtin_popcount that will use a processor instruction if available or an efficient library implementation otherwise.[12] LLVM-GCC has included this function since version 1.5 in June 2005.[13]
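
A minimal usage sketch (the value is the 16-bit example from the table above; __builtin_popcountll is the unsigned long long variant of the builtin):

#include <stdio.h>

int main(void)
{
    unsigned long long x = 0x6CBAULL;        //0110 1100 1011 1010, the example above
    printf("%d\n", __builtin_popcountll(x)); //prints 9
    return 0;
}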

In the C++ STL, the bit-array data structure bitset has a count() method that counts the number of bits that are set.

In Java, the growable bit-array data structure BitSet has a BitSet.cardinality() method that counts the number of bits that are set. In addition, there are Integer.bitCount(int) and Long.bitCount(long) functions to count bits in primitive 32-bit and 64-bit integers, respectively. The BigInteger arbitrary-precision integer class also has a BigInteger.bitCount() method that counts bits.

In Common Lisp, the function logcount, given a non-negative integer, returns the number of 1 bits. (For negative integers it returns the number of 0 bits in 2's complement notation.) In either case the integer can be a BIGNUM.

Starting in GHC 7.4, the Haskell base package has a popCount function available on all types that are instances of the Bits class (available from the Data.Bits module).[14]

MySQL's dialect of SQL provides BIT_COUNT() as a built-in function.[15]

Fortran 2008 has the standard, intrinsic, elemental function popcnt returning the number of nonzero bits within an integer (or integer array).[16]

Some programmable scientific pocket calculators feature special commands to calculate the number of set bits, e.g. #B on the HP-16C,[3][17] #BITS,[18][19] BITSUM,[20][21] or nBITS on the WP 34S.[22][23]

Processor support

See also

References

  1. Warren, Jr., Henry S. (2013) [2002]. Hacker's Delight (2 ed.). Addison Wesley - Pearson Education, Inc. pp. 81–96. ISBN 978-0-321-84268-8. 0-321-84268-5.
  2. Knuth, Donald Ervin (2009). "Bitwise tricks & techniques; Binary Decision Diagrams". The Art of Computer Programming. Volume 4, Fascicle 1. Addison–Wesley Professional. ISBN 0-321-58050-8. (NB. Draft of Fascicle 1b available for download.)
  3. Hewlett-Packard HP-16C Computer Scientist Owner's Handbook (PDF). Hewlett-Packard Company. April 1982. 00016-90001. Archived (PDF) from the original on 2017-03-28. Retrieved 2017-03-28.
  4. Thompson, Thomas M. (1983), From Error-Correcting Codes through Sphere Packings to Simple Groups, The Carus Mathematical Monographs #21, The Mathematical Association of America, p. 33
  5. Glaisher, James Whitbread Lee (1899), "On the residue of a binomial-theorem coefficient with respect to a prime modulus", The Quarterly Journal of Pure and Applied Mathematics, 30: 150–156 (NB. See in particular the final paragraph of p. 156.)
  6. Reed, Irving Stoy (1954), "A Class of Multiple-Error-Correcting Codes and the Decoding Scheme", IRE Professional Group on Information Theory, Institute of Radio Engineers (IRE), PGIT-4: 38–49
  7. Stoica, I.; Morris, R.; Liben-Nowell, D.; Karger, D. R.; Kaashoek, M. F.; Dabek, F.; Balakrishnan, H. (February 2003). "Chord: a scalable peer-to-peer lookup protocol for internet applications". IEEE/ACM Transactions on Networking. 11 (1): 17–32. Section 6.3: "In general, the number of fingers we need to follow will be the number of ones in the binary representation of the distance from node to query."
  8. SPARC International, Inc. (1992). "A.41: Population Count. Programming Note". The SPARC architecture manual: version 8 (PDF) (Version 8 ed.). Englewood Cliffs, New Jersey, USA: Prentice Hall. p. 231. ISBN 0-13-825001-4. Archived from the original (PDF) on 2012-01-18.
  9. Blaxell, David (1978), "Record linkage by bit pattern matching", in Hogben, David; Fife, Dennis W., Computer Science and Statistics--Tenth Annual Symposium on the Interface, NBS Special Publication, 503, U.S. Department of Commerce / National Bureau of Standards, pp. 146–156
  10. Wegner, Peter (May 1960), "A technique for counting ones in a binary computer", Communications of the ACM, 3 (5): 322, doi:10.1145/367236.367286
  11. Muła, Wojciech; Kurz, Nathan; Lemire, Daniel (January 2018), "Faster Population Counts Using AVX2 Instructions", Computer Journal, 61 (1), arXiv:1611.07612, doi:10.1093/comjnl/bxx046
  12. "GCC 3.4 Release Notes". GNU Project.
  13. "LLVM 1.5 Release Notes". LLVM Project.
  14. "GHC 7.4.1 release notes". GHC documentation.
  15. "12.11. Bit Functions — MySQL 5.0 Reference Manual".
  16. Metcalf, Michael; Reid, John; Cohen, Malcolm (2011). Modern Fortran Explained. Oxford University Press. p. 380. ISBN 0-19-960142-9.
  17. Schwartz, Jake; Grevelle, Rick (2003-10-20) [1993]. HP16C Emulator Library for the HP48S/SX. 1.20 (1 ed.). Retrieved 2015-08-15. (NB. This library also works on the HP 48G/GX/G+. Beyond the feature set of the HP-16C this package also supports calculations for binary, octal, and hexadecimal floating-point numbers in scientific notation in addition to the usual decimal floating-point numbers.)
  18. Martin, Ángel M.; McClure, Greg J. (2015-09-05). "HP16C Emulator Module for the HP-41CX - User's Manual and QRG" (PDF). Archived (PDF) from the original on 2017-04-27. Retrieved 2017-04-27. (NB. Beyond the HP-16C feature set this custom library for the HP-41CX extends the functionality of the calculator by about 50 additional functions.)
  19. Martin, Ángel M. (2015-09-07). "HP-41: New HP-16C Emulator available". Archived from the original on 2017-04-27. Retrieved 2017-04-27.
  20. Thörngren, Håkan (2017-01-10). "Ladybug Documentation" (release 0A ed.). Retrieved 2017-01-29.
  21. "New HP-41 module available: Ladybug". 2017-01-10. Archived from the original on 2017-01-29. Retrieved 2017-01-29.
  22. Dale, Paul; Bonin, Walter (2012) [2008]. "WP 34S Owner's Manual" (PDF) (3.1 ed.). Retrieved 2017-04-27.
  23. Bonin, Walter (2015) [2008]. WP 34S Owner's Manual (3.3 ed.). ISBN 978-1-5078-9107-0.
  24. Blackfin Instruction Set Reference (Preliminary ed.). Analog Devices. 2001. pp. 8–24. Part Number 82-000410-14.

Further reading

  • Schroeppel, Richard C.; Orman, Hilarie K. (1972-02-29), "compilation", HAKMEM, by Beeler, Michael; Gosper, Ralph William; Schroeppel, Richard C. (report), Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA, MIT AI Memo 239 (Item 169: Population count assembly code for the PDP/6-10.)