One Password fits all

I recently discussed the problems associated with weak passwords here. Since then, there have been a few cases of hackers publishing stolen passwords from popular sites such as phpBB, or the passwords that the Conficker worm uses to spread across shares. Some researchers report that people often use the same password on many websites, making themselves vulnerable to a serious attack if the password for a low-value website is the same as the one used in a high-value target.

Password selection tips abound, and as long as a password has enough entropy, the user's data remains somewhat out of reach of most hackers.

Despite the advice of security gurus, the manifest limitations of the average human brain in generating and remembering more than a few passwords are a physical barrier to the widespread adoption of secure practices. Password managers may help to keep your passwords organized. They have functions to generate strong passwords and can connect directly with browsers or e-mail programs.
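
As an illustration of the generation step, here is a minimal sketch of how a password manager might create a strong random password using Python's standard secrets module; the length and alphabet are arbitrary choices for the example, not a recommendation tied to any particular product.

```python
import secrets
import string

def generate_password(length=16):
    """Draw each character uniformly from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'r#Vq...' -- different on every run
```

A 16-character password drawn from this 94-symbol alphabet carries roughly 16 × log2(94) ≈ 105 bits of entropy, far beyond what a memorized password typically offers.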

Another way around this is the OpenID network, which allows users to have one identity for multiple on-line services. The OpenID protocol is inclusive enough that it can work as an authenticator using biometrics or smart tokens. OpenID is still in the adoption phase, and not all online services accept it.

Authentication – Part IV Password Protocols

To be useful, passwords need to be transmitted or negotiated between the server and the client.

Transmission of the password in the clear is subject to eavesdropping and therefore very insecure. The password storage on the server side must also be protected against the possibility that the file falls into the wrong hands, compromising the security of the system.

There are several constraints on the design of password protection protocols, one of the most important being the limited amount of entropy that user-memorized passwords necessarily have. Computation time is another big constraint. Even small delays in the response time can make the difference between a system the user is happy to interact with, and a system in which security features will be disabled for the sake of interactivity.

The key features of a password protection protocol are described below:

  1. The transmission and storage of passwords should be non-plain-text equivalent: This means the protocol should be such that even if an attacker obtains the database containing the passwords or eavesdrops on the exchange between client and server, this will not compromise the security of the exchange.
  2. The protocol must be resistant to replay attacks: That is, if an eavesdropper successfully records a login session, the information cannot be used to compromise a future (or past) exchange between the client and the server.
  3. The protocol must be resistant to the Denning-Sacco attack: In this attack, by capturing the session key (not the raw password) the eavesdropper has enough information to successfully mount a brute-force attack or at least to successfully impersonate the user.
  4. The protocol must be resistant to active attacks: In these situations the protocol leaks enough information to allow the attacker to impersonate the server to the client, make a guess at the correct password and then, by faking a failure, obtain confirmation from the client when the guessed password is correct.
  5. Protocols that work on the basis of a zero-knowledge proof of password possession are preferable: Zero-knowledge means that the server does not need to know the password to prove that the client knows the password. Passwords are never stored on the server and therefore cannot be stolen.

Some protocols encrypt the exchange of information to avoid plain-text equivalence. Others use a form of asymmetric key exchange (à la Diffie-Hellman) in which the exchanged values are generated from the password but do not leak any information about it.
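
To make the storage side of this concrete, here is a minimal sketch (not any specific protocol from the list below) of non-plain-text-equivalent storage: the server keeps only a salt and a slow, salted hash of the password, so a stolen database does not directly reveal passwords. The function and parameter choices are illustrative assumptions.

```python
import hashlib
import hmac
import os

def make_verifier(password, iterations=200_000):
    """Server-side record: a random salt plus a slow, salted hash of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt, digest

def check_password(candidate, salt, digest, iterations=200_000):
    attempt = hashlib.pbkdf2_hmac('sha256', candidate.encode(), salt, iterations)
    return hmac.compare_digest(attempt, digest)  # constant-time comparison

salt, digest = make_verifier("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("wrong guess", salt, digest))                   # False
```

Note that a stolen record of this kind still allows an offline dictionary attack against weak passwords, which is one reason the zero-knowledge, verifier-based protocols discussed below go further.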

The following is a list of some of the commonly used password schemes, classified by their strength:

(adapted from SRP competitive analysis)

Weak (not to be used)

  • Clear-text passwords (such as unsecured telnet, rlogin, etc.)
  • Encoded passwords (HTTP Basic Authentication)
  • Classic challenge-response protocols (HTTP Digest Authentication, Windows NTLM Authentication, APOP, CRAM, CHAP, etc.); a toy version of this pattern is sketched after this list
  • One-Time Password schemes based on a human memorable (low entropy) secret (S/Key, OPIE)
  • Kerberos V4
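
The sketch below is a toy version of the classic challenge-response pattern mentioned in the list above (it does not reproduce the exact wire format of HTTP Digest, CHAP or NTLM): the server sends a fresh nonce and the client returns a hash over the nonce and the password. Its weakness is that the server must keep the password, or a password-equivalent secret, in order to verify the response.

```python
import hashlib
import os

def challenge():
    """Server picks a fresh random nonce for each login attempt."""
    return os.urandom(16)

def respond(password, nonce):
    """Client proves knowledge of the password without sending it in the clear."""
    return hashlib.sha256(nonce + password.encode()).hexdigest()

def verify(stored_password, nonce, response):
    # The server must hold the password (or an equivalent secret) to recompute
    # the response, so its database is plain-text equivalent if stolen.
    return respond(stored_password, nonce) == response

nonce = challenge()
print(verify("hunter2", nonce, respond("hunter2", nonce)))  # True
print(verify("hunter2", nonce, respond("wrong", nonce)))    # False
```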

 

Pseudo-Strong (they have known vulnerabilities in some implementations)

Strong

  • Secure Remote Password (SRP) – Developed in 1997 by Wu, SRP is a strong password authentication protocol now widespread among Open Source and commercial products. SRP does not expose passwords to either passive or active network intruders, and it stores passwords as a “non-plaintext-equivalent” one-way hash on the server (a sketch of this verifier setup follows this list). SRP is available as part of standard Telnet and FTP implementations, and is being rapidly incorporated into Internet protocols that require strong password authentication.
  • Encrypted Key Exchange (EKE) – Developed by Bellovin & Merritt in 1992, it is one of the earliest examples of a secure password protocol.
  • Strong Password Exponential Key Exchange (SPEKE) – Developed by David Jablon. It is licensed by Entrust for their TruePass product.
  • Diffie-Hellman Encrypted Key Exchange (DH-EKE)
  • AMP
  • SNAPI
  • AuthA
  • OKE
  • Variations of all of the above.
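
As a hedged sketch of the SRP verifier setup mentioned above, the client derives v = g^x mod N from a salted hash of the username and password, and the server stores only (salt, v). The group parameters and hash layout below are simplified toy choices for readability, not the standardized SRP groups.

```python
import hashlib
import os

# Toy group parameters -- far too small for real use; SRP uses large safe-prime groups.
N = 2**127 - 1
g = 3

def srp_enroll(username, password):
    """Client-side setup: derive the verifier and hand (salt, verifier) to the server."""
    salt = os.urandom(16)
    inner = hashlib.sha256(f"{username}:{password}".encode()).digest()
    x = int.from_bytes(hashlib.sha256(salt + inner).digest(), 'big')
    v = pow(g, x, N)
    return salt, v

salt, verifier = srp_enroll("alice", "correct horse battery staple")
# The server never sees the password; recovering it from the verifier requires
# solving a discrete logarithm, and the verifier alone cannot be used to log in.
```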

 

See also: Authentication – Part I, Part II and Part III.

Authentication – Part III Passwords

Passwords have been the main tool for verifying identity and granting access to computer resources. In general, as FIPS 112 defines it, a password is a sequence of characters that can be used for several authentication purposes.

There are two security problems with a password: 1) somebody other than the legitimate user can guess it; or 2) it can be intercepted (sniffed) during transmission by a third party. Over the years, many different kinds of password generation methods and password protection protocols were designed to address these two weaknesses.

Password Strength

The security (strength) of a password is determined by its Shannon entropy, which is a measure of the difficulty of guessing the password. This entropy is measured in Shannon bits. For example, a random 10-letter English text would have an estimated entropy of around 15 Shannon bits, meaning that on average we might have to try \frac{1}{2}(2^{15}) = 2^{14} = 16384 possibilities to guess it. In practice, the number of attempts needed would be considerably smaller because of available side information and redundancy (patterns and lack of randomness). Since most human users cannot remember long random strings, a major weakness of passwords is that their entropy is usually too small for security. Even if the user can construct long passwords using an algorithm or a fixed rule, the same rule may be known or guessed by hackers.
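
As a quick check of the arithmetic above, the following sketch turns an entropy estimate in bits into the expected number of guesses; the per-character figures in the comments are rough, commonly quoted estimates, not measurements.

```python
import math

def expected_guesses(entropy_bits):
    """On average an attacker needs about half of the keyspace: 2^(bits - 1) tries."""
    return 2 ** (entropy_bits - 1)

# ~1.5 bits per letter of ordinary English text (assumed estimate)
print(expected_guesses(10 * 1.5))           # 10-letter English text: 16384.0 tries
# A truly random printable-ASCII character carries log2(94) ~ 6.6 bits
print(expected_guesses(8 * math.log2(94)))  # 8 random characters: roughly 2**51 tries
```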

It is well known to hackers that users commonly select passwords that include variations of the user name, the make of the car they drive, the name of a family member, etc. Social engineering is one of the most powerful tools used by hackers.

Because of limitations in the underlying infrastructure, some authentication systems (notably banks) limit the number of characters and the alphabet from which the characters can be chosen. These kinds of limitations make passwords susceptible to dictionary-type attacks. In a dictionary or brute-force attack, the hacker will attempt to gain access using words from a list or dictionary. If the actual password is in the list, it can be obtained (on average) when about half of the total number of possibilities have been tried. Even in systems that limit the number of trials per user this is a potential security risk, because the hacker has a good chance of gaining access by randomly trying names and passwords until a valid combination is found.
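
To make the dictionary attack concrete, here is a minimal sketch that tries candidate words against a leaked unsalted hash; the tiny word list and the choice of SHA-256 are assumptions made purely for illustration.

```python
import hashlib

def crack(leaked_hash, wordlist):
    """Hash each candidate and compare against the leaked value."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == leaked_hash:
            return word
    return None

wordlist = ["123456", "password", "qwerty", "letmein", "dragon"]
leaked = hashlib.sha256(b"letmein").hexdigest()
print(crack(leaked, wordlist))  # 'letmein' -- found after only a handful of tries
```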

It is commonly accepted that, with current tools, up to 30% of the passwords in a system can be recovered within hours. Moreover, it is predicted that even random (perfect) passwords of 8 characters will be routinely cracked with technology available to most users by the year 2016.

There are many documents that give rules and policies for good password selection, such as NIST Special Publication 800-63 and the SANS Security Project.

CrypTool (ver 1.4) has a very good tool to check the strength of a password against several criteria, such as the amount of entropy and resistance to dictionary attacks.


Related Posts:

Waiting for the Quantum Leap

For the longest time I have had the suspicion that quantum cryptography, although a neat idea, is overrated. I was keeping an eye on developments (see previous posts) just in case. Currently my impression is that, with current technology, QC is an expensive proposition for the added value it provides. It looks like I am in good company on this. In the October issue of Wired, Bruce Schneier writes a commentary piece where he asserts:

While I like the science of quantum cryptography — my undergraduate degree was in physics — I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Moreover, I have a nagging question about the fundamental tenet of quantum cryptography. The principle is that Alice and Bob will know for sure that Eve is eavesdropping on their channel because their bits will be changing as required by the uncertainty principle. Eve may be out of luck in getting the secrets, as Bob and Alice will certainly decide not to exchange them in her presence. However, the mischievous Eve may decide that she is quite happy with merely preventing the exchange. I will call this a denial-of-channel attack, by which Eve can prevent Alice and Bob from exchanging any secret until the police figure out where she is tapping the quantum line and force her to stop. Eve the hacker can now start a cat-and-mouse chase that, judging from the internet police's record of netting hackers, is lopsided in Eve's favor.

As a mathematical aside, Schneier mentions in his article the Bennett-Brassard and key reconciliation algorithms used by quantum cryptography. In a paper written with A. Bruen and D. Wehlau we gave a rigorous proof of convergence for the Bennett-Bessette-Brassard-Salvail and Smolin (BBBSS92) method. These results, and more about quantum cryptography, also appear in our book.

Authentication – Part II, Free lunches are very rare indeed.

A total stranger, talking from behind a screen, promises you that he/she/it (you have no way to know) will perform some service after you reveal sensitive private information about yourself and your financial situation by shouting it in a public place.

Strangely enough, you do as he/she/it says, because there is the underlying promise that everything will be perfectly fine, as demonstrated by the millions of similar daily transactions being done by millions of people all over the world. Moreover, everybody agrees that transactions like the one described are the minimum standard of performance required from any protocol for authentication and encryption over public channels (such as the internet) that aspires to have any chance at success.

When we look at the problem from this angle, we should ask not why there are security breaches, but rather how it is possible that there are any successful (meaning secure) transactions at all.

The problem of exchanging secure messages over a public channel is relatively easy to solve; since the 1960s we have had encryption algorithms that give us a relatively comfortable advantage in the race between coders and hackers. These algorithms are the result of interesting mathematical discoveries. However, mathematical discoveries alone cannot prevent people from seeking some advantage through cheating and lying. The Internet was born as a very naive environment in which everybody was trusting and trustworthy, but as soon as valuable information started to be exchanged, mechanisms to avoid misrepresentation needed to be put in place.

This is well known: absolutely secure communications can be had over a public channel provided that there was, at some point in time, a secure exchange between the parties over a private channel. Thus, if I want to communicate with my lawyer with absolute security, I will meet him at his office and exchange a set of encryption keys that we will keep confidential (it is in both parties' best interest). The next time we need to communicate, we will encrypt the messages using this set of keys, which also ensures the identity of the other party.

Enter W. Diffie and M. Hellman, who developed a secure and very elegant way to exchange a piece of secret information (say an encryption key) over a public channel, in which the resident evil eavesdropper (Eve) cannot guess the secret number unless she knows how to solve the discrete logarithm problem (an open problem in mathematics).
Without strong authentication, however, Eve can succeed by intercepting the communications and impersonating my lawyer to me and myself to my lawyer. This way Eve will end up with two secret keys, one to communicate with me and the other to communicate with my lawyer, and the power to snoop on the conversation or even tamper with it without raising suspicion (the famous man-in-the-middle attack).
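
A minimal sketch of the Diffie-Hellman idea described above, with a deliberately small prime so the arithmetic stays readable (real deployments use groups of 2048 bits or more); the parameter and variable names are illustrative.

```python
import secrets

# Toy public parameters (far too small for real use).
p = 2**127 - 1   # prime modulus
g = 3            # generator

# Alice and Bob each keep a private exponent and publish g^exponent mod p.
a = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)          # Alice -> Bob
b = secrets.randbelow(p - 2) + 1
B = pow(g, b, p)          # Bob -> Alice

# Both arrive at the same shared secret; Eve sees only p, g, A and B, and
# recovering a or b from them is the discrete logarithm problem.
assert pow(B, a, p) == pow(A, b, p)
```

Nothing in this exchange proves who sent A or B, which is exactly the gap the man-in-the-middle attack exploits.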

The man in the middle has an upside, though: in the case where Eve is a benign entity trusted by the participants in the communication (who do not need to know or trust each other), she can serve as the Trusted Server and provide authentication and encryption keys to both parties. This exchange can be done in such a way as to keep the Trusted Server powerless to snoop or tamper with the communication (see the Kerberos system).

The authentication infrastructure most commonly used on the internet (often referred to as PKI) is based on certificates, special files that carry public encryption keys and data that identifies and authenticates the owner. These certificates are issued by a Certificate Authority (CA) that (in theory) checks the identity of the owner and, through a digital signature scheme, embeds its own digital signature in the certificate.

Certificates are akin to a token in the sense that anybody who has control of the certificate (a computer file) can impersonate the owner. Also, certificates are only as trustworthy as the CA that issued them. If the procedures used by the CA to authenticate and identify the user are sloppy, the certificate itself is of little value.
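
As a hedged sketch of the signing step behind a certificate (using the third-party Python cryptography package, which is my assumption and not something referenced in the post): the CA signs the certificate data with its private key, and any relying party holding the CA's public key can verify that signature.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The CA's key pair (in practice generated once and guarded carefully).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Stand-in for the certificate's "to-be-signed" data: owner identity plus owner public key.
tbs_data = b"CN=example.com; owner public key bytes go here"

# The CA signs; the resulting signature is embedded in the issued certificate.
signature = ca_key.sign(tbs_data, padding.PKCS1v15(), hashes.SHA256())

# A relying party with the CA's public key checks that the data was not altered;
# verify() raises an exception if the signature does not match.
ca_key.public_key().verify(signature, tbs_data, padding.PKCS1v15(), hashes.SHA256())
print("certificate data verified against the CA signature")
```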

McEliece encryption system cracked

Researcher Tanja Lange at TU/e in the Netherlands reported having succeeded in cracking the McEliece cryptosystem.

The software that was used can crack the McEliece encryption system within fourteen days, with the help of the computing power of one hundred computers. This feat was carried out recently by means of several dozens of computers, scattered throughout the world, according to Lange.

Odorprint coming soon to a Multifactor Authentication system near you.

Body odor may become the next frontier among the biometric factors useful for authentication, says a recent press release by the Monell Chemical Senses Center:

“The findings using this animal model support the proposition that body odors provide a consistent ‘odorprint’ analogous to a fingerprint or DNA sample,” said Gary Beauchamp, PhD, a behavioral biologist at Monell and one of the paper’s senior authors. “This distinctive odor can be detected using either an animal’s nose or chemical instruments.”
………
“These findings indicate that biologically-based odorprints, like fingerprints, could be a reliable way to identify individuals. If this can be shown to be the case for humans, it opens the possibility that devices can be developed to detect individual odorprints in humans,” said lead author Jae Kwak, PhD, a Monell chemist.

Electronic smell sensors have been available for some time now.

Authentication – Part I, the Achilles' heel.

Most practitioners will agree that state-of-the-art encryption systems (including quantum encryption) provide an adequate level of protection for the information entrusted to them. In fact, other than the existence of programming errors in the encryption functions, the only hope an attacker has of gaining access to encrypted information is to fool the authentication measures and impersonate a legitimate user.
The fact that there is no bullet-proof authentication system (still an open problem) is indeed the Achilles' heel of modern data security systems.
In a broad sense, authentication has been associated with identification; however, a more stringent criterion can be applied if we define it as:

Authentication, the process of verifying that the user has the credentials that authorize him/her to access a certain service.

The difference between Identification and Authorization is important and has been analyzed at length by Jim Harper in his book Identity Crisis.

Traditionally, user authentication is based on one or more of the following:

  • Something you know, for example, a password or a PIN
  • Something you have, for example, a smart card or an ATM card
  • Something that is physically connected to the user, such as biometrics, voice, handwriting, etc.

A fourth factor to be considered is Somebody you know, which has recently been added to the list of factors for electronic authentication, although it has always been a very common form of identification within social networks.

Identifiers

You can hear more about this subject on Steve Gibson's podcast.

Man in the Middle attack for QKD?

In an article titled “Can Eve control PerkinElmer actively-quenched single-photon detector?” Vadim Makarov et al. report that, thanks to the “strange behavior” found in one of the most often used photon detectors, it is possible for the eavesdropper (Eve) to inject a crafted pulse and fool Bob's detector by introducing a click of her choice. Eve can now run a man-in-the-middle attack, and when Alice sends Bob the list of erroneous polarizations (the “key reconciliation” phase) Eve will be able to recover the key.
I am not sure at this point whether this is just a shortcoming of the particular hardware being used or if this is a pervasive characteristic of the type of circuits used for the detectors. Let’s see.