Wednesday, October 29, 2008

My Own Worst Enemy "Butterfly"

In July 2008, a group of researchers from Princeton University released a paper describing a new technique for recovering encryption keys from the volatile memory of a freshly rebooted laptop. This technique is now known as a cold boot attack. These findings went against a long-standing assumption that once power was cut to this type of memory, all data would be lost almost immediately.

Volatile memory, commonly known as RAM or Random Access Memory, is used by a computer to store data it needs temporarily for computational activities. Long term data storage is done with non-volatile memory, such as a hard drive or USB key fob. A frequent way to describe the difference between the two is to say that volatile memory loses its data when a computer is turned off, but non-volatile memory does not.

This distinction is often used when computer software is designed. For example, when an application stores passwords on a hard drive they are (hopefully) encrypted; when those passwords are moved into memory, they are typically stored in plain-text. It was generally assumed that this was a safe practice, and in defense of this type of thinking, encrypted data has to be decrypted at some point in time, and volatile memory is the safer place to store the plain-text.

To prevent attackers from grabbing passwords and other sensitive information from running memory, developers began clearing, or wiping, the areas of volatile memory that contained the sensitive data once it was no longer needed. Some operating systems also provide an additional level of protection by preventing other running applications from accessing the memory locations where the sensitive data resides.
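To make the idea concrete, here is a minimal sketch (in Python, with made-up names) of what wiping a key from memory looks like. Real implementations do this in lower-level languages with locked, non-swappable buffers; Python's immutable strings can't be reliably overwritten, which is why the sketch uses a mutable bytearray:

```python
def wipe(buf: bytearray) -> None:
    """Overwrite every byte of the buffer with zeros."""
    for i in range(len(buf)):
        buf[i] = 0

key = bytearray(b"supersecretkey16")  # hypothetical 16-byte key
# ... use the key to decrypt data here ...
wipe(key)  # clear the key once it is no longer needed
print(bytes(key))  # the buffer now holds 16 zero bytes
```

An attacker who dumps this process's memory after the wipe finds only zeros where the key used to live.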

The decrypt-and-wipe process works fairly well for applications that only need to use the password or key once at startup, or intermittently during user activity, but for high-performance applications that need a password or cryptographic key for every transaction, it may not be feasible from a performance standpoint. One such application is full disk encryption.

Modern hard drives are capable of transferring 80 or more megabytes of data per second, so you will see a pretty substantial performance decrease every time the operating system has to transfer encrypted data to or from the hard drive. If you have to decrypt and then wipe the encryption key every time you read or write data, you make these performance problems much worse.

To reduce this additional overhead, most whole disk encryption software loads the plain-text encryption keys into memory at startup and relies on the assumption that the keys are erased when the computer is shut down or loses power. Which leads us back to the Princeton researchers.

What the researchers discovered is that volatile memory actually loses its data slowly and predictably, over a time frame of a few seconds to a few minutes. This allows an attacker to cut power to a computer, reboot it with a specially designed operating system, and extract the encryption keys from memory before the data has time to fade away.

Additionally, they found that when the memory chips were cooled to -50 °C, there is more than enough time to remove the memory chip and read it on another computer or device. This can be accomplished by spraying the chip with an upside-down canned-air spray duster, such as Dust-Off. For more advanced attackers, the chip can be cooled with liquid nitrogen to increase the decay time to a few hours.

The writers of this episode got most of their facts right, but in the first clip, the tech guy says that cooling the memory chips is what enables you to extract the keys. That is not correct--cooling only buys you more time; the attack can be performed without it.

The second clip shows one of the agents pulling a single cooled memory chip from a server and putting it into a device that extracts the encryption keys. In this scenario, the cooling would be important to give the agent time to remove the chip and install it in the second computer.

The problem I have with this scene is that, unlike laptops, servers usually have several memory chips to provide redundancy and additional capacity. Depending on how the server spreads the data out across the individual chips, pulling out only one chip, or pulling out one chip at a time, would probably not get you the encryption key. To make things worse, the agent pulls the chip out of what appears to be a running system, which would potentially introduce unpredictable errors into the memory and would likely cause a complete system failure unless the system had hot swappable memory.

The only viable way to ensure that the keys could be extracted in the short period of time the agent had would have been to reboot the server with the special operating system.

Saturday, September 20, 2008

Law & Order: Criminal Intent "Legacy"

Criminal Intent is one of the half-dozen or so spin-offs of the ever popular procedural drama Law & Order. The series follows a group of detectives--members of the NYPD's Major Case Squad--who are dedicated to bringing New York City's worst criminals to justice.

In this episode, the elite crime fighting squad get called to a prestigious private school to investigate a murder that was made to look like a suicide. During the course of their investigation, they find a laptop belonging to one of the suspects, and like all good television detectives, they turn it over to a nerdy guy named Ira for analysis.

As this plot line develops, the writers introduce two of my favorite gimmicks: the nonsensical technical monologue and the explain it in English one-liner:

"Kiana used data utility wiping freeware but it performs like malware."
"In English, Ira."
"She downloaded a free program to permanently delete a video file but it just moved it to another part of her hard drive."

I'm not really sure what "data utility wiping freeware" is exactly, but from the English explanation, I can only assume that it is a program that permanently deletes files off of a computer's hard drive, otherwise known as a disk or file wiping utility.

Techno-gibberish aside, I understand why the plot needs the girl to use this type of program--it shows that she understands what she did was wrong--but there is no reason for the program to be malware, or for her to even use it, to have the same plot outcome.

Let me explain.

When someone edits a document, especially with video editing software, temporary files are created to help keep track of changes for rollbacks (undo) or to preserve changes in the event of a system crash.

An everyday example of this is when you have auto-save enabled in Microsoft Word. If you look in the directory of the document you are editing, you can see a series of temp files that look like ~wrdxxxx.tmp. Another example is the temporary files that the operating system creates when you print a document--this is known as print spooling. These files usually get deleted by the application or operating system when they are no longer needed, but sometimes they don't.
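If you are curious what hunting for these leftovers looks like, here is a quick sketch using Python's standard library. The directory and file name are made up for the example; the ~wrd*.tmp pattern matches the Word auto-save naming mentioned above:

```python
import glob
import os
import tempfile

# Simulate a work folder containing a temp file the editor forgot to delete.
workdir = tempfile.mkdtemp()
open(os.path.join(workdir, "~wrd0001.tmp"), "w").close()

# Any files matched here may still hold copies of document data.
leftovers = glob.glob(os.path.join(workdir, "~wrd*.tmp"))
print(leftovers)
```

A forensic examiner runs essentially this kind of search, just against the whole disk and the unallocated space as well.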

This can create a serious problem if you want to encrypt or permanently delete a file. Most people assume that the file they just encrypted or deleted is the only copy on the disk drive, but in some cases it is not.

Additionally, most people assume that when you empty the trash everything in it is permanently deleted, when in reality, these files are very easy to recover if the computer is not used heavily after the deletion.

So, a more likely scenario for recovering the file would be Ira using a data recovery application or finding a temporary file that the suspect didn't know was there. The data wiping utility malware angle, while possible, just does not seem likely.

Friday, September 19, 2008

Burn Notice "Good Soldier"

Hollywood has always had a love affair with biometrics. They were a mainstay of military, spy, and science fiction movies long before they were included on consumer laptops and door locks.

Because Hollywood got such a jump start on biometrics, most people's expectations have been set by these fictional depictions. In reality, the effectiveness of most biometric systems does not come close to what you see in movies and television.

An unfortunate side effect of this is that corporations have spent millions of dollars promoting and implementing these ineffective systems and, more discouragingly, governments have based public policy on these Hollywood induced misconceptions.

If you remember back to the Burn Notice pilot, the protagonist--blacklisted spy Michael Westen--opens a biometric safe with a print he lifted off of its fingerprint reader.

This episode shows an attack against another biometric security mechanism, this time a facial recognition system that is designed to generate an alert when an unauthorized person enters a room.

Earlier this year, the Japanese government introduced regulation that allows for the prosecution of vending machine companies that sell cigarettes to persons under the age of 20.

Long before facial recognition became fashionable, 41 states and the District of Columbia implemented policies that restricted the sale of cigarettes through vending machines; in some cases these policies resulted in a complete ban on the practice.

These policies were implemented based on years of research suggesting that younger children were more likely to obtain cigarettes from vending machines than from any other source, including friends and family. Additionally, subsequent research has shown that a complete ban on cigarette machines in places frequented by young children is significantly more effective than alternatives such as device locks.

So why did the Japanese government choose not to ban vending machines? While I am no expert in Japanese politics, I suspect that a vending machine company named Fujitaka convinced the regulating body that they could accurately judge the age of a purchaser by using biometrics--at least 90% of the time.

What Fujitaka and the Japanese regulators soon found out was that a 3-inch magazine photo placed in front of the camera would fool the system into selling cigarettes to underage kids. Oops.

This is exactly what Michael Westen does to gain entry to the hotel room of his sexy nemesis Carla. Armed with an 8x10 head shot of the room service guy, he easily gets into the room without setting off the alarm. Sound familiar? You can thank a bunch of Japanese school girls for this one.

Saturday, July 19, 2008

Burn Notice "Turn and Burn"

Steganography, for those of you who don't know, is the art of hidden writing. While cryptography scrambles or obscures the content of a message, steganography attempts to hide the fact that a message is being sent. The example used in this episode shows a message hidden in a crossword puzzle, but modern techniques have been developed that allow messages to be hidden in everything from digital photographs to common network protocols.

In steganography the message is hidden by a technique, or process, that does not use a key in the way cryptography does, so once the encoding technique is discovered you can extract the plain text from the stegotext without any additional information. With cryptography, on the other hand, you would need both the method and a key to extract the plaintext message.
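To illustrate the point, here is a toy example--not a real-world scheme--where the hidden message is simply the first letter of each word in an innocent-looking cover text, much like the crossword in the episode. Once you know the technique, no key is needed to recover it:

```python
def extract(cover: str) -> str:
    """Recover the hidden message: the first letter of each word."""
    return "".join(word[0] for word in cover.split())

cover = "have everyone leave promptly"  # looks harmless on its own
print(extract(cover))  # -> "help"
```

Contrast this with a cipher: even after an eavesdropper learns you are using, say, AES, the message stays unreadable without the key.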

When the episode's opening voice-over tells the audience that "unless you have the key" you won't be able to read the message, it is a little misleading because the difference between steganography and cryptography is not explained.

It may have been better to say that without knowing how or where the message is hidden, you wouldn't even know it's there. Better yet, you could have bored the audience with a lengthy explanation of the history of steganography and how it differs from cryptography.

Wednesday, July 2, 2008

WarGames 25th Anniversary

Looks like they are going to re-release WarGames into theaters for one night to coincide with the release of the direct-to-video sequel to this 1983 classic.

Even with all of its technical inaccuracies and idealistic plot, this movie did for hacking in the '80s what Gidget did for surfing in the '60s.

In the mayhem that ensued from this hacker renaissance, a writer for Newsweek magazine suggested that parents should lock up modems like they would firearms--they were simply that dangerous. The nerve. Imagine if the ghost of hacking future had given him a peek at what was in store with the Internet!

Much of the cold war paranoia and fear will not play as strongly with a modern audience, but if you're looking for a trip down memory lane, this might be the ticket for you.

Monday, June 16, 2008

The Incredible Hulk (2008)

A recent study conducted in London showed that 21% of the 578 people stopped on the street by the researchers were willing to reveal their passwords in exchange for a chocolate bar. The obvious flaw in this study is the fact that the researchers had no way of verifying that the passwords provided were real, but I wonder how many people are devious enough to realize that giving a fake password will still get them that little piece of heaven.

The Incredible Hulk was already in the can when this study was released, so I have to give Zak Penn (or Edward Norton who apparently did an uncredited rewrite of the script) credit for coming up with a similar social engineering technique. Towards the end of the movie Bruce Banner, played by Edward Norton, needs to get into a high security university research building to gain access to a computer network. How does he do it (spoiler)? He brings several pizzas from the pizzeria that he was hiding out in and uses them to bribe a security guard and a graduate student into looking the other way while he accesses the network with his ex-girlfriend's user name and password.

In real life this probably would not have worked on a trained security guard--I recently saw someone try something very similar and fail--but there is no doubt in my mind that the graduate student would have handed over the keys to the kingdom for a free pizza.

Sunday, April 6, 2008

Firewall (2006)

This film garnered a significant amount of criticism in the computer community for its presumed technical inaccuracies, most notably for how Harrison Ford's character used his daughter's iPod to store bank account numbers. However, as Roger Ebert correctly pointed out in his review of the film, "iPod can do that -- act as a backup hard drive...."

With a few Google queries, it's easy to figure out that you can connect digital cameras to iPods and use them to store images, so it's not that far of a stretch to assume that the scanner acted in the same way. Come on people, get a grip.

To that point, I have been finding that critics, like screen writers, have gotten into a bad habit of assuming that the general public's lack of knowledge somehow negates their responsibility to know how a technology works before they write about it. This was painfully obvious in the criticism of Untraceable, and just as evident in the focus of the criticism of this movie. I'm not saying that this movie isn't flawed, just that the true flaws were overlooked.

The most obvious flaw, from my perspective, shows up about 7 minutes into the film. While too short to be called a technical monologue, the following lines start things off:

"Let's try a rule change on him and see what he does. I'll put in an IPS signature that black holes the pattern...see if that slows him down."

Possibly afraid that Harrison Ford's delivery of the line would not play on its own, the film makers quickly cut to a shot of him typing commands into a computer:

For those of you who don't immediately see the problem, I may need to explain what IPS is.

Intrusion prevention systems, or IPS, are inline intrusion detection systems that monitor traffic looking for specific signatures, or patterns, in network packets and attempt to block attacks. Traditional IDS simply send alerts when they detect patterns, but do not attempt to stop the attack.

The following is an example of an intrusion detection signature, or rule, that detects brute force logins to a Web application, which is similar to what is described in the dialog:

alert tcp $WEB_SERVERS 80 -> $EXTERNAL_NET any (msg:"WEB AUTH LOGON brute force attempt"; flow:from_server,established; content:"Authentication unsuccessful"; offset:54; nocase; threshold:type threshold, track by_dst, count 5, seconds 60; classtype:suspicious-login; sid:2275; rev:2;)

The first thing that you will notice is that the IDS rule looks nothing like what is being typed into the administrative console. What is shown in the film is actually a Cisco ACL (Access Control List) that blocks all traffic from the subnet, not an IPS signature. Blocking an entire subnet like that would be prone to false positives, and it wouldn't stop an attacker coming from Hong Kong, Korea, or Malaysia.

The other problem, which is not as obvious, is that the traffic that they are showing is unencrypted HTTP. Needless to say, this is not something you want to do when you are running a banking Web site.

An interesting thing about that, however, is that intrusion detection systems are not very effective with encrypted traffic. There are things you can do to make it work, but in real life, brute force login attempts would most likely be tracked and blocked by the Web application, not something that monitors the network.
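The application-level tracking I am describing might look something like the following sketch. The names and thresholds are illustrative, not from any real banking system--lock an account after 5 failed logins within a 60-second window, mirroring the counts in the IDS rule above:

```python
import time
from collections import defaultdict

WINDOW, LIMIT = 60, 5          # seconds, max failed attempts
failures = defaultdict(list)   # username -> timestamps of failed logins

def record_failure(user, now=None):
    """Record a failed login; return True if the account should lock."""
    now = time.time() if now is None else now
    failures[user].append(now)
    # Keep only the failures inside the sliding window.
    failures[user] = [t for t in failures[user] if now - t < WINDOW]
    return len(failures[user]) >= LIMIT

# Five rapid failures trip the lock.
for i in range(5):
    locked = record_failure("alice", now=1000.0 + i)
print(locked)  # -> True
```

Because the application sees the login attempt after TLS is terminated, encryption is no obstacle here, unlike for a network IDS.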

Friday, March 7, 2008

Untraceable (2008), part 2

For those of you not in the know, black holing, a term used in the technical monologue from the previous post, is a technique used by internet service providers, also known as ISPs, to block access to phishing sites and other criminally themed internet destinations.

Black holing is usually done in two ways. The first is to prevent traffic from reaching the IP address of the server by manipulating the ISP's routing configuration, or routing table, to force any packet destined for the server to go to a non-existent network location. This is also called null routing.

The problem with this approach is that more than one Web site can be associated with a single IP address--large Web site hosting companies will do this to save money and simplify configuration. Consequently, if an ISP black holes the IP address of a criminal site that is hosted by, let's say, Yahoo! GeoCities, they could inadvertently block hundreds, if not thousands, of legitimate sites in the process. This is not a good thing.

The second method is changing the DNS record on the service provider's name servers to map a domain to another IP address--usually, your local computer. Alternatively, an ISP can point the domain to an informational Web site that they host explaining that the site has been blocked. The limitation of this approach is that you can't black hole by URL, only by domain name.

A URL, or Uniform Resource Locator, is the combination of the domain name, protocol, and location of the object, such as an image or Web page, on the Web server. For example, if you look at the address bar in your browser, you can see all three elements. The first component, http://, specifies the protocol; the second is the domain; and the third, /2008/02/Untraceable.html, is the location of this page on the Web server. In simple terms, with DNS black holing you can block entire Web sites, but not specific pages contained in them.
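You can see the same three-way split programmatically with Python's standard library (example.com stands in here as a placeholder domain):

```python
from urllib.parse import urlparse

url = "http://example.com/2008/02/Untraceable.html"
parts = urlparse(url)
print(parts.scheme)  # -> "http"  (the protocol)
print(parts.netloc)  # -> "example.com"  (the domain)
print(parts.path)    # -> "/2008/02/Untraceable.html"  (location on the server)
```

DNS black holing operates only on the netloc piece; the path never reaches the name server, which is exactly why you can't block a single page this way.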

While this is an improvement over blocking by IP address, it is not without its problems. Sometime in 2007, the MySpace page of Alicia Keys was compromised. The attackers embedded malware on the site in a way that fooled users into downloading it by inadvertently clicking on a hidden link. By using Alicia Keys's fan site to host their malware, the bad guys effectively prevented any ISP from black holing the site, because the service providers would have needed to block everything on MySpace just to block the one file.

All that being said, implementing black hole filters is not something that ISPs do without significant debate. Additionally, the FBI does not have direct access to core internet routers, nor would a country that has constitutional protection of free speech allow any of its agents to block access to any Web content without due process.

In the real world, the FBI would have sought a court order to have the Web site shut down, or a service provider would have implemented the filters on behalf of their customers. Either way, it would have been the ISPs that took the action, not the FBI. This is another thing that the writers of Untraceable got wrong.

Tuesday, February 19, 2008

Untraceable (2008)

Untraceable follows an FBI cyber crimes investigator as she attempts to track down a spree killer who posts live videos of his victims being tortured and killed on the Internet. As if that was not bad enough, the victims are killed faster as more people visit the Web site.

The title is derived from the fact that the FBI investigator, played by Diane Lane, is unable to track down the killer or shut his Web site down.

So how did the suspect hide and prevent the FBI from bringing his site down? The movie describes it this way:

"The site's IP keeps changing constantly. Each new address is an exploited server. It is running a mirror of the site. The site's Russian main server uses a low TTL so that your computer constantly queries the name server's record. And that is how it gives you a new address so consistently. There are thousands of exploited servers on the Internet, so he is not going to run out of victims anytime soon. But he is accessing these servers so quickly; he has got to be running his own botnet. I mean, we are black holing these IPs. Every time we shut one mirror down another one pops up."

What this technical monologue describes, with surprising accuracy and correct pronunciation, is fast-flux DNS. Let me explain how it works in a little more detail.

DNS, or the Domain Name System, is made up of servers--sometimes known as name servers--that turn human-readable domain names into the numeric Internet addresses computers use to communicate. These mappings--known as DNS records--include a mechanism to tell the requester how long the mapping is valid. That mechanism is known as time-to-live, or TTL.

Bot herders, the nefarious operators of botnets, figured out that you could use a low TTL to avoid having a botnet controller or phishing site shut down. To do this, these lawless vagabonds create DNS records that map a single domain to hundreds or thousands of IP addresses. Adding a low TTL, which can cause the mapping to update as fast as once per minute, makes it possible to deploy a phishing site or botnet controller across thousands of mirrors--computers with copies of the Web site or controller application--while the ISPs' security staff play whac-a-mole trying to knock the servers off the Net.
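Here is a toy model of the trick--a resolver cache that honors TTL. Nothing here is real DNS code, and the addresses are reserved documentation IPs; it just shows why a low TTL lets successive lookups hand out different mirrors:

```python
import itertools

class TinyResolverCache:
    """Caches one answer and re-queries only after the TTL expires."""

    def __init__(self, ttl, addresses):
        self.ttl = ttl
        self.pool = itertools.cycle(addresses)  # rotating mirror pool
        self.cached = None
        self.expires_at = 0.0

    def lookup(self, now):
        if self.cached is None or now >= self.expires_at:
            self.cached = next(self.pool)       # simulate a fresh DNS query
            self.expires_at = now + self.ttl
        return self.cached

cache = TinyResolverCache(ttl=60, addresses=["192.0.2.1", "192.0.2.2", "192.0.2.3"])
print(cache.lookup(now=0))   # -> "192.0.2.1" (cached for 60 seconds)
print(cache.lookup(now=30))  # -> "192.0.2.1" (TTL not yet expired)
print(cache.lookup(now=61))  # -> "192.0.2.2" (TTL expired: a new mirror)
```

With a TTL of a day, clients would hammer the same mirror long after it was taken down; with a TTL of a minute, knocking out one mirror barely matters.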

In spite of the fact that the screen writers got the description of fast-flux correct, in the scenario they presented it would not have prevented the FBI from tracking down the source of the videos. What the screen writers missed in their logic was the fact that the videos were live, not pre-recorded. A pre-recorded video would have been extremely difficult to track down unless the investigators knew exactly when it was seeded to the mirrors; had the video been seeded into a peer-to-peer network for distribution, the source would have been almost impossible to find.

With live video, on the other hand, a network stream would have to originate, in real-time, from the physical location where the event is taking place. To track down the source of a live video, the FBI could have started with a single mirror of the Web site and worked backwards based on the network traffic being sent to it. As you can see from the diagram below, even if the killer hid behind multiple layers of servers, a properly trained investigator would still have been able to determine the origin of the video by tracing the network traffic from node to node.

The investigator would have used data generated by a tool known as NetFlow. NetFlow works by extracting information from network packets received on a router's interface and creating records that describe the unique flows. For the layman, flows are groups of similar packets from the same source and destination that are sent and received during the same period of time. For the more advanced reader, flows are based on the 5-tuple: source address and port, destination address and port, and protocol. The start time of a flow is defined when the first packet is seen, and an aging timer is used to determine the end time: when the router sees a new packet it resets the aging timer; if the timer reaches zero before another packet is seen, the flow is considered complete. For TCP, the end time can also be determined by a session teardown initiated with FIN/FIN-ACK packets.
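As a rough sketch, grouping packets into flows by their 5-tuple looks like this. The packets are hand-made dictionaries for illustration; a real collector reads them off a router interface:

```python
from collections import defaultdict

def five_tuple(pkt):
    """The 5-tuple that identifies a flow."""
    return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])

packets = [
    {"src": "10.0.0.5", "sport": 40000, "dst": "203.0.113.9", "dport": 80, "proto": "tcp"},
    {"src": "10.0.0.5", "sport": 40000, "dst": "203.0.113.9", "dport": 80, "proto": "tcp"},
    {"src": "10.0.0.7", "sport": 53000, "dst": "203.0.113.9", "dport": 443, "proto": "tcp"},
]

flows = defaultdict(int)  # 5-tuple -> packet count
for pkt in packets:
    flows[five_tuple(pkt)] += 1

print(len(flows))  # -> 2 distinct flows
```

A long-lived, high-volume flow like a live video stream sticks out immediately in this kind of summary, which is what makes the tracing possible.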

The live video would have produced an easily identifiable flow that could have been used to track the network location of the creator and subsequently their physical location. With a little router command line magic, it could have been done in real-time. Whether the FBI could have mobilized fast enough to save the victim and catch the bad guy is another issue, but the bad guy would have definitely been traceable.

Untraceable, Continued

Tuesday, January 1, 2008

National Treasure: Book of Secrets (2007)

The second installment of the National Treasure franchise brings us more riddles that unlock clues that bring more riddles. One of these clues (or was it a riddle? I can't keep track) is a burned piece of paper that contains a partial cipher text message. It turns out that this message was encrypted with the Playfair cipher, which was created in the mid-1800s by a gentleman named Charles Wheatstone and named after Lord Playfair, who promoted its use.

By modern standards Playfair is extremely weak, but at the time it offered a relatively simple method for encrypting messages that made frequency analysis attacks difficult, if not impossible, to perform.

If you are not familiar with substitution ciphers, the simplest example is ROT-13 (or rotate 13), a variation of the Caesar cipher that creates cipher text by replacing, or substituting, each letter in a word by the letter that is 13 places away in the Latin alphabet.
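Python even ships ROT-13 as a codec, which makes it easy to see the substitution in action--and to see why applying it twice gets you back to the plain text:

```python
import codecs

cipher = codecs.encode("attack at dawn", "rot13")
print(cipher)                          # -> "nggnpx ng qnja"
print(codecs.encode(cipher, "rot13"))  # -> "attack at dawn"
```

Note that the word lengths and spacing survive untouched, which is exactly the structural weakness discussed next.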

Any fan of Wheel of Fortune can tell you that the three most common letters in the English language are E, T, and A. With frequency analysis, it is pretty easy to determine that R, G, and N represent E, T, and A, simply by the fact that they occur most often in the cipher text. You can do further analysis by looking at common ending letters, letters that most often follow E, etc. This type of analysis is made easier by the fact that ROT-13 keeps the structure of the words and sentences.
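The first step of that analysis--counting letter frequencies in the cipher text--is only a few lines of Python. The cipher text here is just a ROT-13'd pangram chosen for illustration:

```python
from collections import Counter

# ROT-13 of "the quick brown fox jumps over the lazy dog"
ciphertext = "gur dhvpx oebja sbk whzcf bire gur ynml qbt"

counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(2))  # the top cipher letters stand in for the
                              # most frequent plain-text letters
```

In this sample the winner is "b" (four occurrences), which is ROT-13 for "o"--the most common letter in that particular sentence. On a longer English text, the top cipher letters would map back to E, T, and A.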

While still considered a substitution cipher, Playfair does a couple of things to break up frequency and structure. First, the plain text is broken down into groups of two letters called digraphs. If a grouping produces a double-letter digraph, or there is a single letter left at the end, a substitution character, typically "X," is used for the second letter. For example, "he departed yesterday" becomes "he de pa rt ed ye st er da yx." Second, the plain text is encrypted using a 5 x 5 table containing a key word or phrase and some relatively simple rules, leaving roughly 600 possible digraphs for a frequency analyst to work with, versus 26 letters with Caesar-type ciphers. The resulting cipher text will look something like "DA EA RD SA AE WT YG AQ ET ZY."
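The digraph-preparation step is easy to sketch in Python. This is a simplified version that ignores the I/J merge real Playfair implementations use:

```python
def digraphs(plaintext):
    """Split text into Playfair digraphs, padding doubles and a
    trailing single letter with 'x'."""
    letters = [c for c in plaintext.lower() if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "x"
        if a == b:
            pairs.append(a + "x")
            i += 1  # the doubled letter starts the next pair
        else:
            pairs.append(a + b)
            i += 2
    return pairs

print(digraphs("he departed yesterday"))
# -> ['he', 'de', 'pa', 'rt', 'ed', 'ye', 'st', 'er', 'da', 'yx']
```

Running it on a word with a double letter, like "balloon", produces ['ba', 'lx', 'lo', 'on'], showing the "X" insertion rule in action.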

One obvious weakness of Playfair is the fact that a digraph and its reverse will encrypt with the same pattern. From the example, you can see that "departed" has a reverse digraph, "DE" and "ED." In the cipher text they can be easily found as "EA" and "AE." Knowing that "ED" is one of the 10 most common digraphs in English you might be able to decipher "EA RD SA AE" by replacing the reverse digraphs to get "DE RD SA ED."

So, while Ben Gates was racking his brain to figure out what debt that all men pay, his unfunny sidekick Riley Poole could have easily enhanced his computer program to discover the key or simply figured it out by hand. The small amount of cipher text may have complicated his analysis, but there are only so many word combinations and digraphs that could have produced "ME IK QO TX CQ TE ZY."