Fingerprinting Computers – Part II – Hardware


Fingerprinting a computer using data that software can access or generate is subject to replay attacks and can easily be disrupted by malware. Such methods should not be used to authenticate a machine.
To defeat replay attacks, the fingerprinting algorithm needs to generate a one-time string, based on some unique property of the hardware, that the verifier can use to check the identity of the computer.
One example of such a technology is Intel IPT (Identity Protection Technology), which works by generating a unique six-digit number every 30 seconds. The number is produced by a section of the chip that is inaccessible to the operating system and holds a secret key shared with the validator/server. Once a particular processor is linked to a server, the server can identify the CPU and validate the computer. Of course, this does not imply user authentication; the intended use of the technology is as an additional factor in a multi-factor authentication scheme.
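Intel's internal algorithm is not public, but the general scheme of deriving a short-lived six-digit code from a shared secret and the current 30-second window is the same one standardized in RFC 6238 (TOTP). The sketch below illustrates that scheme; the secret is a made-up placeholder, not anything IPT-specific.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now: float = None) -> str:
    """Generate a time-based one-time code (RFC 6238 style)."""
    if now is None:
        now = time.time()
    counter = int(now) // timestep          # index of the current 30-second window
    msg = struct.pack(">Q", counter)        # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# The verifier holds the same secret and recomputes the code for the current
# window, so a replayed code is useless once the window has passed.
secret = b"hypothetical-shared-secret"      # illustrative only
print(totp(secret))
```

The key point for replay resistance is that the code depends on the time window: capturing one code tells an attacker nothing useful about the next one.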
A public key infrastructure (certificate authority) is still needed to defeat man-in-the-middle attacks.
Technologies that can identify hardware down to the chip level are being developed to prevent counterfeiting. These are based on PUFs (Physically Unclonable Functions), which use physical variations of the circuit to extract parameters that are unique to each chip and cannot be reproduced or manipulated without physically tampering with the circuit.
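A practical detail of PUF-based identification is that repeated readings of the same chip are close but not identical, since a few bits flip from noise. Verifiers therefore match by Hamming distance under a tolerance rather than by exact equality. The sketch below shows that matching step; the 10% error budget and the byte values are illustrative assumptions, not parameters from any real PUF.

```python
# Sketch: matching a noisy PUF response against an enrolled fingerprint.
# A re-read PUF response differs from enrollment in a few bits, so the
# verifier accepts it if the Hamming distance is below a noise threshold.

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def matches(enrolled: bytes, response: bytes, max_error_rate: float = 0.10) -> bool:
    """Accept the response if at most max_error_rate of the bits differ."""
    threshold = int(len(enrolled) * 8 * max_error_rate)
    return hamming(enrolled, response) <= threshold

enrolled = bytes([0b10110010, 0b01101100])
noisy    = bytes([0b10110011, 0b01101100])   # one bit flipped by noise
other    = bytes([0b01001101, 0b10010011])   # a different chip entirely

print(matches(enrolled, noisy))   # True  -- same chip, within the noise budget
print(matches(enrolled, other))   # False -- too many differing bits
```

A cloned or tampered circuit would have to reproduce the response within that tolerance, which is exactly what the physical variations make infeasible.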
Related:
Power-up of a SRAM as a source of Entropy and Identification
Secure Processors, the ultimate battlefield
A PUF Design for Secure FPGA-Based Embedded Systems

Fingerprinting Computers – Part I – Your browser.

Authentication is just about the only big open problem left in the practice of Internet security. The existing encryption and hashing algorithms, as well as the key generation/management protocols, offer a high degree of security, barring programming/implementation errors.
Authentication technologies face serious challenges, mainly because identity is difficult to establish with 100% certainty even using physical characteristics: signatures and credentials can be forged, people's physical appearance can be manipulated, etc.
Read more of this post

Page on authentication added

I’ve added a page on authentication that sums up several posts and puts them in one place.
Check it out on the right side under “Look Inside”

The Random Matchmaker : Phone Company’s new by product.

A network glitch(?) that logs AT&T users into other people’s Facebook accounts at random was reported today.

Who knows, in the future many kids could attribute their existence to a programming error. If so should we call it the Destiny_2.0 bug?

SSL 3.0 / TLS subjected to Man in the Middle Attack

An “Authentication Gap” was discovered in the latest version of the SSL/TLS protocol. This could potentially be a huge problem. The gap is not due to an erroneous implementation; it is a property of the protocol itself.

Here is a list of links to websites where the issue is being followed:

http://www.phonefactor.com/sslgap/

IETF resources

Red Hat

SANS.org

More reviews for the AMS

I have a few new reviews of papers on cryptography in my updated page. For those interested in the security of NMAC and HMAC, or in affiliation-hiding key exchanges, I recommend reading the reviews. They include links to the relevant papers.

About the need to protect Biometric Data

An article stating the need to protect biometric data appeared in IEEE Spectrum magazine. Not a lot of new information, but a good summary of the threats as biometrics are used more and more as authenticators.

 

Power-up of a SRAM as a source of Entropy and Identification

Many years ago I was involved in a research project looking to use tiny differences in processing time inside a computer as a way to fingerprint the device. The idea was not unique; I imagine that at the same time many others were busy looking for similar things.

The motivation was that, within the framework of Internet security protocols such as SSL, if each party can fingerprint the other party’s computer, that adds another dimension to the development of a strong authentication scheme. Eventually the company supporting the research ran out of interest and money, and I forgot all about the idea until I recently read the news.

Enter the Fingerprint Extraction and Random Numbers in SRAM (FERNS) method developed by Holcomb, Burleson, and Fu of the University of Massachusetts Amherst. They analyzed the initial state of the cells of a 512 kb Static Random Access Memory (SRAM) after power-up and discovered that the stable states of some cells were random, that is, they had equal probability of starting as 1 or 0, while other cells were skewed to start as a 1 or as a 0. This property of the cells is due to imperfections in the fabrication process and is impossible to control.
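The split between skewed and balanced cells can be sketched with a toy model: give each cell a bias (the probability it powers up as 1), read the array several times, and keep the cells that never change as the fingerprint while the flickering cells serve as the noise source. The biases, cell count, and number of reads below are illustrative choices, not measurements from the paper.

```python
import random

# Toy model of FERNS: each SRAM cell has a bias p = Pr(cell powers up as 1).
# Skewed cells (p near 0 or 1) give a reproducible fingerprint; balanced
# cells (p near 0.5) give fresh noise on every power-up.

random.seed(42)
N_CELLS = 256
biases = [random.choice([0.02, 0.98, 0.5]) for _ in range(N_CELLS)]

def power_up() -> list:
    """One simulated power-up: sample every cell according to its bias."""
    return [1 if random.random() < p else 0 for p in biases]

# Enroll: read the array several times, keep only cells that never changed.
reads = [power_up() for _ in range(8)]
stable = [i for i in range(N_CELLS) if len({r[i] for r in reads}) == 1]
fingerprint = [reads[0][i] for i in stable]

# The unstable cells act as the entropy source for random-number generation.
noisy = [i for i in range(N_CELLS) if i not in stable]
print(f"{len(stable)} fingerprint bits, {len(noisy)} noise bits")
```

In the real device the two populations come for free from fabrication variation; the enrollment step is what separates the identifying bits from the entropy-bearing ones.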

A paper describing the group’s work is to appear in the IEEE Transactions on Computers.

From the Abstract
…..  We use experimental data from high performance SRAM, and the WISP UHF RFID tag to validate the principles behind FERNS. We demonstrate that 8 byte fingerprints from an SRAM chip are sufficient for uniquely identifying circuits among a population of 5,120 and extrapolate that 16 to 24 bytes of SRAM would be sufficient for uniquely identifying every instance of the SRAM ever produced. Using a smaller population, we demonstrate similar identifying ability from the embedded SRAM microcontroller memory of the WISP. In addition to identification, we show that SRAM fingerprints capture noise, enabling true random number generation. We demonstrate that the initial states of a 256 byte SRAM can produce 128 bit true random numbers capable of passing the NIST approximate entropy test.

The possibilities for applying this technology to authentication and key generation schemes are enormous, especially in the field of portable devices. Having an entropy generator “in a chip” is great; getting it together with a fingerprint of the chip is wonderful news. We will certainly hear more about it.

 

Related reading: Quirks of RFID Memory Make for Cheap Security Scheme

 

One Password fits all

I recently discussed the problems associated with weak passwords here. Since then, there have been a few cases of hackers publishing stolen passwords from popular sites such as phpBB, as well as the passwords the Conficker worm uses to spread across shares. Some researchers report that people often reuse the same password on many websites, making themselves vulnerable to serious attack if the password for a low-value website is the same as the one used for a high-value target.

Password selection tips abound, and as long as a password has enough entropy, the user’s data is somewhat out of reach of most hackers.
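"Enough entropy" can be made concrete: a password of length L whose characters are chosen uniformly at random from a set of size C carries L · log2(C) bits. The quick calculation below shows the gap between a short lowercase password and a longer one drawn from the full printable set; note the formula only holds for randomly generated passwords, since human-chosen ones carry far less entropy than their length suggests.

```python
import math

def password_entropy_bits(length: int, charset_size: int) -> float:
    """Entropy of a uniformly random password: each character adds log2(C) bits."""
    return length * math.log2(charset_size)

# 8 lowercase letters vs. 12 characters from the 94 printable ASCII symbols.
print(round(password_entropy_bits(8, 26), 1))    # 37.6 bits -- weak
print(round(password_entropy_bits(12, 94), 1))   # 78.7 bits -- much stronger
```

This is why generated-then-stored passwords (as produced by a password manager) beat memorable ones: length and a large character set are cheap when nobody has to remember the result.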

Despite the advice of security gurus, the manifest limitations of the average human brain for generating and remembering more than a few passwords are a physical barrier to the widespread adoption of secure practices. Password managers may help keep your passwords organized. They have functions to generate strong passwords and can connect directly with browsers or e-mail programs.

Another way around the problem is the OpenID network, which allows users to have one identity for multiple online services. The OpenID protocol is flexible enough to work as an authenticator using biometrics or smart tokens. OpenID is still in the adoption phase; not all online services accept it.

Collisions, a secure hash function killer (MD5, SHA1, SHA2)

The trouble with the use of MD5 in digital signatures recently uncovered by Sotirov et al. is common to other hash functions.

NIST has been discouraging the use of MD5, and even SHA-1, for many years. A good account of this was posted by Dustin Trammell here.

Because the output of a hash function has a fixed length, usually smaller than the input, collisions necessarily exist. The collision-free property for a hash function is thus defined by:

A function H that maps an arbitrary length message M to a fixed length message digest MD is a collision-free hash function if:

1. It is a one-way hash function.

2. It is hard to find two distinct messages (M', M) that hash to the same result H(M')=H(M).
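How hard "hard" is depends on the output length: by the birthday paradox, a collision in an n-bit digest can be found in roughly 2^(n/2) trials. The demonstration below makes the unavoidable collisions visible by truncating SHA-256 to 24 bits, where the birthday bound is only a few thousand messages; the full 256-bit digest would require about 2^128 work, which is why the property holds in practice.

```python
import hashlib
import itertools

def tiny_hash(msg: bytes, bits: int = 24) -> int:
    """SHA-256 truncated to the top `bits` bits, to make collisions findable."""
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# Hash the messages "0", "1", "2", ... until two of them collide.
seen = {}
for i in itertools.count():
    msg = str(i).encode()
    h = tiny_hash(msg)
    if h in seen:
        print(f"collision: {seen[h]!r} and {msg!r} -> {h:#08x}")
        break
    seen[h] = msg
# A collision typically appears after a few thousand messages (~2^12),
# matching the birthday bound for a 24-bit output.
```

The same loop run against the untruncated digest would, as far as anyone knows, never terminate in practical time, which is exactly property 2 above.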

Cryptographers therefore talk about “relatively collision-free” hash functions. A good hash function should be designed with the Avalanche Criterion in mind.

The Avalanche Criterion (AC) is used in the analysis of S-boxes or substitution boxes. S-boxes take a string as input and produce an encoded string as output.

The avalanche criterion requires that if any one bit of the input to an S-box is changed, about half of the bits output by the S-box should change their values. Thus, even though collisions are unavoidable, there should be no way to find two strings with the same hash value faster than brute force.
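The criterion is easy to check empirically on a full hash function: flip a single input bit and count how many of the output bits change. For SHA-256 the count should land near 128 of the 256 output bits; the test input here is an arbitrary choice.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"the quick brown fox")
h1 = hashlib.sha256(bytes(msg)).digest()
msg[0] ^= 0x01                      # flip the lowest bit of the first byte
h2 = hashlib.sha256(bytes(msg)).digest()

flipped = bit_diff(h1, h2)
print(f"{flipped} of 256 output bits changed")   # typically near 128
```

If a hash failed this test, nearby inputs would produce nearby digests, and an attacker could walk toward a collision instead of searching blindly.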