Category Archives: Hacking / Security

The Case of the Changing Blog

I rarely re-read or revisit old blog posts on this site. Generally speaking I write them, give them a quick once over, and send them on their way. The only time I look up old posts is to either verify a date or find a link to send to someone. That’s what I was doing over the weekend when I dug up a blog post from four years ago and was surprised to find spam links embedded throughout the post — links I did not put there. The game, as they say, was afoot.

Discovering your website’s been hacked in this fashion isn’t like coming home from work and discovering that someone has kicked in your front door. It’s like coming home from work, unlocking your front door, setting down your stuff, fixing a drink, sitting down in the living room… and then realizing your television is missing. And quite often it’s like realizing that the television in that back bedroom you only go into once a month is missing. At least when someone kicks in your front door, you know how they got in.

Last week through WordPress I was notified that one of the plugins I use had been hacked. I don’t mean someone used a vulnerability in one of my plugins to hack my website; someone hacked multiple WordPress plugins at their source, and the compromised versions were then pushed out to everyone using those plugins. This is one of those cases where doing the right thing and enabling auto updates bit me.

My initial hunch, that someone had snuck those spammy links directly into my posts, turned out to be wrong. When I attempted to edit the offending posts, the spam links were nowhere to be found. Instead, they were somehow being injected on the fly as each post was generated. I ultimately found a bunch of obfuscated code hiding inside my functions.php file that seemed to be doing the dirty work.

I still haven’t put all the pieces together, but as best I can tell, here’s what happened.

– POWERPRESS PODCASTING PLUGIN BY BLUBRRY: Last week I received a notification from WordPress that this plugin (also known as “PowerPress”) had been compromised. (The plugin has since been updated.)

– HEAD, FOOTER, AND POST INJECTION PLUGIN: I don’t know if this is related, but around the same time this plugin appeared on all my WordPress sites and was enabled. I only noticed it because it broke the header of most of my WordPress sites. In the “post injection” portion of the plugin was a bunch of obfuscated code. That seems sus. (The plugin has since been removed.)

– ADMIN ACCOUNTS CREATED: I discovered four new admin accounts on all my WordPress sites. All of them had randomly generated names that were eight characters long and email domains of example.com. (Accounts were all removed.)

– MORE SUSPICIOUS PLUGINS DISCOVERED: I discovered two more new plugins, “Code Functionality” and another named after my domain (“RobOHara.com”). One linked back to my functions.php file, which had been compromised and contained a very large section of hex-obfuscated code. I removed all the offending code.

I think that’s everything I found. Because my old WordPress theme was out of date and no longer being supported, I’ve changed to a new one. I don’t love the new one and I’m sure I’ll be tweaking it a bit, but it’s modern and up to date, so there’s that. I’ve also installed a couple of WordPress plugins that scan for code changes so I won’t be caught quite so blind-sided next time.

EDIT:

I found someone else, Terence Eden on Mastodon, who experienced the exact same hack. One of the remediations he suggested was grepping all the PHP files on your site for the IP address of the attacker. Here was the exact command he suggested:

grep -r --include="*.php" "94\.156\.79"

Leveraging that, I found multiple other malicious plugins that had been installed on my websites, including:

/wp-content/plugins/custom-mail-smtp-checker/custom-mail-smtp-checker.php
/wp-content/plugins/informative/testplugingodlike.php

Between removing those, the original ones, and removing all the newly created admin accounts, I think (hope) I have this one squashed.

Kevin Mitnick (1963-2023)

Earlier this week I was informed that Kevin Mitnick, the “world’s most infamous hacker,” had passed away. I was asked to sit on the news until the family had time to release a statement, but word travels fast and this morning it appeared on the front page of the New York Post.

For those who haven’t heard or read the story, back in the mid-2000s my wife, who was in charge of putting together a training class at work, hired Kevin Mitnick to travel to Oklahoma and teach a course on social engineering. Susan knew how into computers and security I was and had heard me mention Mitnick’s name many times. As you might imagine, getting permission to bring Kevin Mitnick and his friend and business partner Alex Kasper onto a federal campus took some string pulling, but Susan managed to pull it off. This would have been approximately six years after Mitnick had been released from federal prison on hacking charges, and only three since his probation barring him from using computers or electronic devices had been lifted. One stipulation from management was that Mitnick and Kasper would be accompanied at all times while they were on the campus, and it was almost embarrassing how quickly I volunteered for that job. For three days, I followed Kevin and Alex around like a shadow, something I probably would have done anyway. After hours, we took the two of them sightseeing, out for dinner, to a country and western bar (which I got tossed out of), and even spent time at a Waffle House at two in the morning.

As I mentioned in the blog post where I documented that week, I started the week excited about meeting Kevin Mitnick the celebrity, and ended up meeting Kevin Mitnick the person. Yes, I got his autograph and he gave me copies of his books (and I gave him copies of mine!), but the real fun happened after the ice had been broken and we were able to swap old stories about computers, networks, and phone systems.

I have always been interested in computer security, and a couple of years after Kevin’s class I changed jobs and moved to a security team where I spent a couple of years traveling the country and testing the networks of other government agencies. On one of those trips we discovered several employees had connected modems to their machines and were dialing into them remotely from home, circumventing the agency’s firewall and every other network security system that had been established. To find the machines I ended up in a hotel conference room late one night with four modems and phone lines connected to my laptop, wardialing the entire office looking for those modems. It felt like a scene right out of the 80s, so I took a picture of that crazy setup and sent it to Kevin. He got a real kick out of it.

Mitnick and I were not close friends, but we did remain in contact through social media. We talked about meeting up for coffee when Susan and I were in Vegas, but the timing didn’t work out. I always got a kick when he liked a picture I had posted on Facebook or Twitter, usually of an old computer or payphone.

Kevin Mitnick’s life had the potential to go a lot of different ways. Not everyone who emerges from federal prison is able to turn over a new leaf and go straight, but Kevin was one of them. He turned his passion for security into a career that has lasted nearly 20 years. For all the trouble he had with federal agencies over the years, I’m glad ours took a chance on him and helped launch his career in cybersecurity. Kevin Mitnick was an interesting, creative, funny, and dangerously smart individual. I’m glad I had the opportunity to meet him.

FBI vs. Apple vs. You

Shortly before entering the Inland Regional Center in San Bernardino, California and opening fire, killing 14 people and injuring another 20, the shooters — Syed Rizwan Farook and Tashfeen Malik — discarded their cell phones and their laptop’s hard drive. While the hard drive has not been located, the cell phones turned up in a dumpster near the terrorists’ rented home.

Four hours after the attack, Farook and Malik were killed in a gun battle with FBI agents. Unfortunately, they were shot before anybody got a chance to ask Farook what the four-digit lock code on his iPhone was. Oops.

An iPhone, when configured to do so, will back itself up to Apple’s iCloud when connected to an approved WiFi hotspot. Farook’s iPhone was configured to do this, but hadn’t been backed up in six weeks. To access the data on the phone, all the FBI needed to do was take the phone to a pre-approved WiFi network (say, Farook’s house or work) and turn the phone on. The phone would have backed itself up to iCloud, and the FBI would have been able to file a subpoena to obtain the (unencrypted) data from Apple.

But that’s not what they did. Instead, an FBI agent attempted to reset the account’s iCloud password. Once that password was changed, the locked phone could no longer sync to iCloud. In other words, a random FBI agent who knew nothing about how iCloud works (he could have asked any 13 year old) locked the FBI out of the phone’s backups with this one single (dumb) action.

The FBI’s backup plan was to have Apple unlock the terrorist’s phone. First, they politely asked if Apple would break into the phone for them. Apple politely declined. So the Department of Justice took the company to court, citing the All Writs Act (part of the Judiciary Act of 1789). Apple continued to drag its feet on the request.

And, for clarification, what the FBI was asking Apple to do was create a custom version of iOS with a backdoor in it that would allow them to bypass the security code. Because nothing bad could possibly come from developing that. The government promised that it would only be used one time in a controlled environment, because of course they would promise that.

This story has freedom of speech, citizens’ rights, the right to encryption (and privacy from the government), the FBI vs. Apple, terrorists, murder… all they had to do was throw in a Star Wars reference and a video game and it would have been perfect!

From day one, I told my wife “the FBI does not need Apple to get into that phone. They will get in, regardless. This is a PR stunt.” My wife thinks I’m crazy (and not just because of this theory.) Any time the FBI makes a public release, it’s for a reason. The stuff they don’t want you to know about, you don’t know about. The stuff they do want you to know about makes the news.

Think of it this way: if Apple were to cave, it’s a lose/lose. Apple loses because it makes them look like they are catering to the government at the expense of their customers’ privacy. And the FBI loses twice: first, they look weak by not being able to break into a single phone, and second, they look like bullies. But if Apple were to stand up to the FBI and refuse to unlock the phone and the FBI were eventually able to unlock it on their own, that would be a win/win! Apple becomes the valiant defender of encryption and customer rights, while the FBI ends up looking like uber-hackers!

And, of course, that’s exactly what happened. On Monday, the FBI withdrew their case against Apple and said “thanks, bro, but we got in anyway.”

Above is a video of the XPIN CLIP in action attacking an iPhone running iOS 7.x. What the device on the left is doing is sequentially sending passcodes to the phone. If you want to jump to the 3:30 mark you’ll see it send 1230, 1231, 1232, and 1233 before unlocking the phone with the correct code, 1234. Apple fixed this hole in iOS 8. A few weeks later, someone released a new device that worked against iPhones running iOS 8. Apple fixed that hole in iOS 9. It wouldn’t take a complete leap of faith to say that there’s a new device out there that works on the latest iPhone operating system.

But the terrorist’s phone had the security feature enabled that would wipe his phone after 10 incorrect guesses. Welp…

This is the IP Box unlocking an iPhone running iOS 8. The IP Box utilized an exploit that prevented the iPhone from recognizing incorrect guesses by pressing two buttons at the same time. Rumor has it that the newer versions of this box (available for around $200) can cut the power to the phone immediately after each attempt to prevent the phone from logging the incorrect guesses. It takes longer, extending the maximum amount of time from hours to days (but not weeks), but if you’re just dealing with one phone, that’s not too bad.
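The “hours to days” claim is easy to sanity-check. Here’s some back-of-the-envelope math in Python; the per-attempt timings are my assumptions for illustration, not the boxes’ published specs:

```python
# Worst-case time to brute-force a 4-digit PIN.
# Per-attempt timings below are assumed, not measured.
CODES = 10_000          # 0000 through 9999

fast_secs = 6           # assumed seconds/attempt without the power-cut trick
slow_secs = 40          # assumed seconds/attempt cutting power after each try

fast_hours = CODES * fast_secs / 3600
slow_days = CODES * slow_secs / 86400
print(f"~{fast_hours:.0f} hours fast, ~{slow_days:.1f} days slow")
# prints: ~17 hours fast, ~4.6 days slow
```

Even the slow approach stays well under a week for a single phone, which is why the 10-guess wipe feature (and defeating it) matters so much more than the PIN length itself.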

For now, this story is over (although you can bet Apple already has people trying to figure out how the FBI got into iOS 9, and will be patching that hole in the inevitably soon-to-be released update). Apple politely asked the FBI how they did it; the FBI politely refused to offer up that information. In the end, Apple won by not backing down, and the FBI won by gaining access to the terrorists’ selfies. The terrorists lost, but they were already dead so having their phone compromised is really just a parting gift.

The rest of us are stuck in the middle, hoping that the private information on our phones, computers, and stored in the cloud remains private.


Change your (everything) Password — Introducing the Heartbleed Bug

If you think you don’t need to read this post, you definitely need to read this post.

Heartbleed is a security vulnerability that was discovered this week. It probably affects you. First, the five W’s:

Who: Anyone who uses the web and uses https links. That’s probably you.
What: Heartbleed is a vulnerability that allows people to see the information you send to some websites that use OpenSSL. It’s a lot of them.
Where: Gmail, Yahoo, Tumblr, Flickr, Facebook…
When: The problem has been around for two years now, but nobody noticed it until this week.
Why: Honest human error.

You’ve probably noticed the letters “HTTP” preceding most web links. HTTP stands for “hypertext transfer protocol,” and by putting that in front of a web link you’re telling your web browser “Hey, what comes next is going to be a web page.” It’s kind of like saying, “the following message will be in English.”

Sometimes, you’ll see HTTPS instead. You can think of that S as simply meaning “secure”: the connection is wrapped in an encryption layer called SSL (Secure Sockets Layer). When you use HTTP, the things you read and send across the internet are sent in plain text. That means anyone with the means to do so who is looking and listening for your message can read what you are sending and receiving. With HTTPS, what you send and receive to and from websites is secure and encrypted. Even if someone were to intercept your message, if you are using HTTPS, the information would look scrambled and no one would be able to read it. This is why websites like Gmail and Facebook and your bank’s website default to HTTPS — because it’s secure.

Or, so we thought. Turns out, back in early 2012, someone made a mistake while updating OpenSSL. A big one. Well known security expert Bruce Schneier said on his website this week, “on a scale of 1 to 10, this is an 11.” This bug, which again was introduced in 2012, allows/allowed hackers to read information in certain HTTPS transfers. One frustrating thing about this bug is that there’s no way for server owners to know if people were hacking them or not; all they can tell is whether they were vulnerable. And it turns out, a lot of websites were vulnerable.

The good news is Heartbleed only lets attackers view a small portion of memory at a time, so there’s a chance nobody ever saw your password. The bad news is, this vulnerability has been around for two years now, so there’s no telling if you were affected or not.
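Conceptually, the bug is a missing bounds check: the server echoes back as many bytes as the client *claims* to have sent, not as many as it actually sent. This toy Python simulation (not real TLS code; the “memory” contents are made up) shows the idea:

```python
# Toy simulation of the Heartbleed over-read -- NOT real TLS code.
# A vulnerable heartbeat handler trusts the client's claimed length
# and copies that many bytes, running past the payload into whatever
# happens to sit next to it in memory.

ADJACENT_MEMORY = b"|user=alice;password=hunter2|session=deadbeef"

def heartbeat_response(payload: bytes, claimed_len: int) -> bytes:
    buffer = payload + ADJACENT_MEMORY  # payload plus neighboring memory
    return buffer[:claimed_len]         # missing check: claimed_len <= len(payload)

# An honest client asks for its 4-byte payload back:
print(heartbeat_response(b"bird", 4))   # b'bird'
# An attacker claims the 4-byte payload is 40 bytes long:
print(heartbeat_response(b"bird", 40))  # b'bird' plus leaked secrets
```

Each malicious heartbeat leaks a different slice of memory, which is why attackers could fish out passwords and keys just by asking over and over.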

Several sites including this link at Mashable.com are compiling lists of websites that were affected and have been patched. You’ll want to change your password on those sites. Some of the ones on that list currently include: Facebook, Instagram, Pinterest, Tumblr, Flickr, Google/Gmail, Yahoo/Yahoo Mail/AIM, YouTube, Etsy, GoDaddy, Netflix, Soundcloud, TurboTax, USAA, Box, DropBox, Github, and IFTTT.

Oh, and Minecraft.

This is a good time to remind you that if you use the same password on any other site that you also use on those sites, you should change that password too. Also, stop doing that.

So what about your bank or some other SSL page you want to test? Several “Heartbleed Testers” have been stood up online. Here’s one. Simply click the link and cut/paste the URL to your bank (or any other HTTPS web link) and the website will let you know if they are currently using a safe version of OpenSSL. Of course it doesn’t tell you if they had the bad version last week…

I spent a couple of hours last night changing my passwords on a bevy of services including Facebook, Twitter, Gmail, and more. You should too. It’s a pain in the butt, especially when you have multiple devices (phones, tablets, laptops) that will all need the new passwords, but you’ll thank me in the morning.

A Resurgence of Interest in eCoder Ring

A lot of things just happened when you clicked on this article. Your computer connected to my computer, and each of these words I wrote zipped across the internet to their destination. Since this article contains words like encryption, NSA, and secret codes, it probably flagged something for the NSA along the way — you for reading about it, and me for writing about it. In some giant, government data warehouse, there’s now a record that you were here. We’re probably both on a watch list now. Welcome to the machine, and all that.

About five years ago I wrote a silly little program called eCoder Ring. eCoder Ring is a small program that allows you to encrypt and decrypt secret codes. It does this by using any text file, web page, or graphic file as a key for a one-time pad encryption. Here’s what Wikipedia has to say about one-time pad encryption:

In cryptography, the one-time pad (OTP) is a type of encryption which is impossible to crack if used correctly. Each bit or character from the plaintext is encrypted by a modular addition with a bit or character from a secret random key (or pad) of the same length as the plaintext, resulting in a ciphertext. If the key is truly random, as large as or greater than the plaintext, never reused in whole or part, and kept secret, the ciphertext will be impossible to decrypt or break without knowing the key. It has also been proven that any cipher with the perfect secrecy property must use keys with effectively the same requirements as OTP keys. However, practical problems have prevented one-time pads from being widely used.

The key to breaking most codes lies in discovering patterns, and in a properly implemented one-time pad there are none. Not to delve too far into details, but the point of eCoder Ring is that it plucks letters out of a keyfile and uses the numerical position of those letters to represent the letters of your message. eCoder Ring lets you use things like digital pictures (which it converts to ASCII numbers and characters) as keyfiles. It also allows you to skew the code by adding variables to start your code further down in the keyfile, or skip numbers, and apply all sorts of other randomization. Even if you had eCoder Ring and the keyfile used to generate a message, it would be practically impossible to crack a code generated by it without the proper variables inserted into the program.
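The actual eCoder Ring internals aren’t published here, but the core idea — replace each letter with the position of a matching letter in the keyfile, and never reuse a position — can be sketched in a few lines of Python. The function names and the start/skip “skew” parameters below are my own inventions for illustration, not the program’s real interface:

```python
# Book-cipher sketch of the eCoder Ring idea (hypothetical, simplified).
# Each plaintext letter becomes the index of an unused matching letter
# in the keyfile; "start" and "skip" stand in for the skew variables.

def encode(message, keyfile_text, start=0, skip=1):
    positions, used = [], set()
    for ch in message.lower():
        for i in range(start, len(keyfile_text), skip):
            if i not in used and keyfile_text[i].lower() == ch:
                positions.append(i)
                used.add(i)
                break
        else:
            raise ValueError(f"no unused {ch!r} left in keyfile")
    return positions

def decode(positions, keyfile_text):
    return "".join(keyfile_text[i] for i in positions)

key = "the quick brown fox jumps over the lazy dog"
code = encode("fox", key)
print(code, decode(code, key))  # positions in the keyfile, then "fox"
```

Without the keyfile (and the skew values), the output is just a list of numbers with no patterns to attack, which is exactly the one-time-pad property described above.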

It is my belief now as it was when I wrote it that the codes generated with eCoder Ring are impervious to brute force attacks. To prove my point, when I released eCoder Ring I included a code and offered a reward for cracking it. At first I was offering a hundred bucks; later I upped it to two hundred, and I think I may have raised it to five hundred at one point. The reward for cracking the code is moot because without the keyfile or the skew variables, the code is unbreakable. In theory I feel confident about offering a million dollars, but I wouldn’t do that for two reasons, the second of which exposes the weakness of eCoder Ring. The first reason is quite simply that I don’t have a million dollars. The second reason, the scarier reason, and the weakness that plagues all implementations of one-time pads is that both the sender and the receiver have to know what the keyfile is. I know what the keyfile is for the message I encoded. For a hundred dollars I am hoping someone does not kick in my front door, hold a gun to my head and demand access to the keyfile. For a million dollars, someone might. When I wrote that original readme file five years ago that contained the code, I specifically made it clear that the keyfile does not exist on any computer I have control over (not my laptop or my desktop and not my server) and no one else knows what the keyfile is, so bribing my kid with candy or PlayStation games won’t work.

But yes, as I joked in the program’s readme file, any codes generated with eCoder Ring will stand thousands of years of brute force attacks, but will fail in seconds when someone shows up to your house and begins to peel your children’s fingernails off as you watch. As a human being who knows the keyfile, you are eCoder Ring’s weakest link. If the keyfile is stored improperly or transferred improperly, the code can be compromised. When some mug shows up and decides to squeeze the cider out of your Adam’s apple for the keyfile, look out.

So why am I writing about eCoder Ring again after all these years?

From 2007 (when I released it) to 2012, eCoder Ring was downloaded approximately 2,000 times.

In the past two months, eCoder Ring has been downloaded an additional 3,000 times.

In the last two months we have learned that the NSA either gathers or simply pilfers through pretty much everything we do on the Internet. They store records of what websites you visit. They keep track of who you e-mail, and how many times you do so. Most signs point to the fact that the NSA has direct connections to some of the largest content providers in the world and pulls data pre-encryption, making the phrases “HTTPS” and “SSL” mean almost nothing. The latest NSA-related leak tells us the NSA pays 35,000 people to break codes and crypto. I hope one of those 35,000 guys runs across a code generated with eCoder Ring someday. That would make me chuckle. There are also rumors that the NSA can effectively either crack or circumvent some/most/all encryption methods being employed today.

Based on the increase in downloads, do I think eCoder Ring is the answer?

No, obviously. It’s too cumbersome to be used on any mass scale and too difficult to properly implement. (What I always imagined building, but which is beyond my skills, is an API that could be used in chat programs, so instead of sending clear text back and forth across the internet, people could send random-looking encoded text.) What these recent downloads tell me based on current events is that normal people are interested in security. Normal people are interested in learning about codes, and keeping their messages away from prying eyes. Normal people are hitting search engines and looking for ways to regain their privacy. eCoder Ring probably isn’t the answer, but maybe it’ll inspire someone else to create the answer.

Link: eCoder Ring

Removing Malware from my own Site

A few months ago I spun up a new website, SpriteCastle.com. There’s no real content there yet — it’s more of a proof of concept site at this point. Last night after finishing up the latest episode of You Don’t Know Flack I decided to do some tweaking to Sprite Castle. When I opened the site in Google Chrome, I got the following message:

Crap. I know WordPress has been under attack lately, so my first assumption was that the site had been compromised. Bypassing Chrome’s warning, I opened the site and searched for any sign of malware. I couldn’t find any. I then clicked “View Source Code” and quickly found the problem — links to a “posh laptop bag” website. While viewing the page itself I couldn’t see the link, but while viewing the code there it was, plain as day. A quick Google search shows that I’m not the only person running WordPress with the issue.

After a few minutes of research I tracked the problem back to the free WordPress theme I had downloaded. The theme was injecting links to sites hosting malware in the theme’s footer, and the links were encrypted (technically, obfuscated) making them difficult to find while sifting through the code.

There are lots of websites out there like this one that will help you remove encrypted footer links. Even with those removed, I was still seeing links in my source to malware sites. By using Windows’ FINDSTR command (similar to GREP) I was able to find more encrypted sections (hint: search your PHP files for “EVAL”). Each time I tried dinking with the code, the website would stop loading. Someone spent a lot of time putting those encrypted links into this particular theme.
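For anyone in the same boat, here’s roughly the kind of search that works. The sample file and directory below are fabricated for demonstration; on Windows, `FINDSTR /S /I "eval" *.php` is the rough equivalent of the grep shown:

```shell
# Create a sample "infected" theme file so the search has something to find.
mkdir -p demo-theme
cat > demo-theme/footer.php <<'EOF'
<?php eval(base64_decode("ZWNobyAnaGknOw==")); ?>
EOF

# Flag the obfuscation functions this kind of malware almost always leans on.
grep -rn --include="*.php" -E "eval\(|base64_decode|gzinflate" demo-theme
```

Legitimate themes occasionally use these functions too, so every hit needs a human look, but in my experience a blob of base64 feeding straight into eval() is never good news.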

So, I spent a lot of time getting rid of them.

The simplest branching point in any programming language is the IF…THEN clause, which does exactly what it sounds like:

IF (this) THEN (do this)

One baby step beyond that is IF…THEN…ELSE logic. Even if you are not a programmer, you can see how logic like this appears in virtually every program.

IF PASSWORD IS CORRECT
– ALLOW USER TO LOG IN TO E-MAIL
ELSE
– PRINT “Denied!”
END IF

Simple.

This was also, in its simplest form, the basis for most early forms of copy protection. Consider the old paper-based protection schemes that required gamers to enter a code to play a game.

HAVE USER ENTER CODE
IF CODE IS CORRECT
– RUN GAME
ELSE
– DO NOT RUN GAME
END IF

Once you understand this logic you can see that with a minor change, programs could be re-programmed to always load. Or, “cracked.”

HAVE USER ENTER CODE
IF CODE IS CORRECT
– RUN GAME
ELSE
– RUN GAME
END IF

Again, simple. No matter what the user enters at the prompt, the game loads. There are other ways to do it, of course. Another simple way would be to tell the program that no matter what the user enters, it’s correct.

HAVE USER ENTER CODE
CODE IS CORRECT
IF CODE IS CORRECT
– RUN GAME
ELSE
– DO NOT RUN GAME
END IF

In this instance, no matter what the player enters, we tell the code that it was correct and the program continues down that path.

This is essentially how I removed the malware from the theme. The theme checks to see if a particular license file exists on the computer. If it does, it reads a serial number from the file. If the serial number checks out, the malware links are not injected into the footer.

CHECK TO SEE IF LICENSE FILE EXISTS
TELL PROGRAM FILE EXISTS
IF FILE EXISTS
– DO NOT INJECT MALWARE LINKS
ELSE
– INJECT MALWARE LINKS
END IF

A quick check of the theme’s output showed that the technique worked and the malware links had been removed. With that part fixed I began systematically removing all the malware-seeking code. It took a couple of hours, but I think the entire theme is now clean.

Unfortunately, once Google detects malware on a site it removes the URL from its search engine (SpriteCastle.com no longer shows up in Google searches) and Google Chrome still flags the site as one that hosts malware, even though the links have been removed. To get re-added, a request has to be submitted to Google and a scan of the site has to be performed. That ball’s already started rolling, so hopefully in the next day or two I’ll be back in business.

YDKF Episode 119: Hohocon ’94

Another week, another episode.

Episode 119 of You Don’t Know Flack is about Hohocon — specifically Hohocon ’94, the last Hohocon and the only one I attended. Hohocon was a hacker conference that ran for 5 years in a row, from 1990 to 1994. It was put on by dFx, the Cult of the Dead Cow, and Phrack Magazine.

This was a tough episode to complete. During the time slot I set aside to record, my sister inconveniently and inconsiderately had a baby. Don’t you hate it when other people schedule things when you already have plans? Sheesh! All kidding aside, I spent a few hours at the hospital and a few hours watching the NFL playoffs yesterday, just enough to set me back half a day. On top of that I spent 90 minutes recording and another 3 hours editing my own babble.

Listen to me ramble. I sound like Jodie Foster’s award speech from last night, except I’m not coming out in this post. Unless it’ll increase my number of subscribers.

Link: YDKF Episode 119: Hohocon ’94
Facebook: You Don’t Know Flack

Deconstructing the PS3 Hack

Last week at the 27th annual Chaos Communication Congress (CCC), a group calling themselves “fail0verflow” displayed the single most important PlayStation 3 hack to date. A few months from now, when everybody who wants one has a modified PS3, you’ll be able to point your finger back to fail0verflow’s CCC presentation and say, “that is where it all began.”

Just like the original Xbox, the PlayStation 3’s defenses didn’t fall to pirates, but to Linux experts. The quickest way to have your security precautions ripped out of your device, run up the flagpole and laughed at is to prevent people from running Linux on it. In fact, the general consensus has been all along that since the PlayStation 3 allowed users to install Linux on an unmodified console, Linux hackers have had no incentive to tinker with the console’s security measures. As a result, the PS3 remained “unbroken” for over four years, the longest of any modern console. However, in the spring of 2010, Sony removed the OtherOS feature from PlayStation 3s through a mandatory (if you want to play online and/or new games) firmware upgrade. While this made a lot of PlayStation 3 owners mad, it apparently made fail0verflow really mad.

The reason your PS3 (or any game console) won’t play a copied disc is because games must be digitally signed. This digital handshake involves a key pair: Sony signs games with a closely guarded private key, and the PlayStation 3 uses the corresponding public key to verify each game’s signature and, based on its findings, determine whether or not to execute the code. This is why games you buy off the shelf will run on your PS3, but a copy of that same game will not.
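To make the handshake concrete, here’s a toy signature check in Python. The numbers are textbook RSA with tiny primes, and the scheme is simplified far beyond anything a real console uses (Sony’s system is based on ECDSA, with vastly larger keys), but the asymmetry is the point: only the private key can produce a signature, while anyone holding the public key can check one.

```python
import hashlib

# Textbook RSA with toy primes (p=61, q=53): n=3233, e=17, d=2753.
# Illustration only -- hopelessly insecure at this size.
N, E, D = 3233, 17, 2753

def sign(data: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(h, D, N)         # requires the private exponent D

def verify(data: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(sig, E, N) == h  # anyone with (N, E) can run this

game = b"official-game-image"
sig = sign(game)
print(verify(game, sig))            # True: signed with the private key
print(verify(game, (sig + 1) % N))  # False: a tampered signature fails
```

A console only ships with the public half, so it can reject unsigned code without ever being able to sign anything itself. Which is exactly why recovering the private key, as described next, breaks the whole model.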

(Old mod chips for the original PlayStation used to trick consoles by returning the right answer, regardless of what the question was. The PS1 was looking for region codes instead of digitally encrypted signatures, but the concept was the same. When a backup copy was inserted into the original PlayStation, the console would ask, “should I play this game?” The console checked for the region code and, when it could not be found, would reply with “no.” That response was sent back through the modchip, which slyly changed it to “yes!”)

While digging through the PlayStation 3, fail0verflow didn’t just find a private key — they found the private key. The master root signing key. Using this key, hackers can generate valid signatures for their own code. With properly signed code, hackers can boot anything they want on the PS3. There are two important things to note here. One is that this key is baked into the PlayStation 3’s hardware. It does not appear that a firmware upgrade can change the master key. And two, changing the key could cause all existing PlayStation 3 games to stop working — so that’s not very likely. fail0verflow went looking for this key in the name of Linux. Other folks may not be so kind.

You know how there’s that one guy that takes things to another level? In the hacking world, that guy is GeoHot. GeoHot perfected the iPhone jailbreak; if your iPhone is jailbroken, you owe it to GeoHot. The PlayStation 3 has been a thorn in GeoHot’s side for quite some time now. He’s picked at it, poked at it, and even released a couple of hacks that were eventually closed up by Sony. fail0verflow announced that within the next month, they plan on releasing some tools that will allow the homebrew and hacking communities to start looking at the PS3. GeoHot said to hell with that, and posted the master key on his website.

Right now, this kid’s house is probably surrounded by lawyers. Or assassins. Or both.

Now, I don’t know what to do with that number, and chances are you don’t either, but you can bet your booty there are people who do, people who have been waiting four long years for those numbers. The PS3’s homebrew and hacking scenes are about to light up. I can’t wait to see what happens next.

Sony Making a Grave Mistake (Please Read)

Sony’s decision to remove OtherOS from the PlayStation 3 could change the future of all electronic devices as we know them. You may not agree with or even completely understand that statement yet, but if you own anything (even a computer or a phone) that connects to the Internet, I urge you to read today’s post.

Today’s story begins back in 2006 with Sony’s release of the PlayStation 3 (PS3). The PS3 was (and still is) the most advanced video game console ever released. In fact, the console was so powerful that not only could it play both PS2 and PS1 games, but using a feature called OtherOS, you could actually install Linux on the PS3’s hard drive and use the gaming console like a computer. Due to built-in restrictions the end result isn’t a terribly powerful computer, but it does work, and it is useful. I use it.

Most video game consoles contain some type of internal copy protection to prevent people from downloading/burning/copying games instead of buying them. This was obviously a much smaller problem back in the days of cartridges, as most gamers had no way of creating pirate circuit boards and/or EPROMs. In today’s world where every computer has a DVD burner installed, this is a much bigger problem. So, companies like Nintendo and Microsoft and Sony include copy protection inside their video game consoles that prevents copied games from working. Many console manufacturers lose money on each game console sold, but recoup those losses over time by selling games for a profit.

To circumvent copy protection, pirates develop custom chips (“mod chips”) that allow these consoles to play copied games. Installing a mod chip requires a certain amount of technical ability as well as a certain amount of courage — one wrong move can both void your warranty and destroy your console all at the same time.

In the old days, once a console was modded, it was game over for the manufacturer. For example, consider the original Sony PlayStation. Once a mod chip was released, there was little Sony could do but watch as pirates sold mod chips by the thousands on the Internet and games were freely distributed. Suing the sellers, distributors and even makers of mod chips turned out to be a fruitless game of whack-a-mole. Sony’s only recourse was to redesign the internals of the PlayStation so that old mod chips wouldn’t work on it; pirates quickly countered with new mod chips that worked on both old and new machines. These days, it’s not a matter of if a new gaming console’s security measures will get “cracked”, but when.

The ultimate nightmare, however, is when pirates find flaws that don’t require any sort of hardware modification at all. The most memorable example of this was Sega’s Dreamcast. Utopia (a pirate group) released a boot disc for the Dreamcast that allowed burned games to be loaded and played without physically modifying the machine. Eventually, the boot loader was included on copies of pirated games. Utopia released their boot loader in June of 2000; Sega announced the death of the Dreamcast in January of 2001. Piracy is often (unofficially) cited as one of the major causes of the death of the console.

Enter George Hotz, aka Geohot, who by all accounts is a teenage genius. Geohot made his name in iPhone circles by creating and publicly releasing software to jailbreak iPhones. “Jailbreaking” allows iPhones to run unsigned code and change settings (including carriers) that customers are not supposed to be able to change. The first time I jailbroke an iPod Touch, it took me about two hours of dumping, patching, and reapplying firmwares. With Geohot’s blackra1n utility, you can do it in about 10 seconds by clicking a single button.

The seventh generation of video game consoles includes the Nintendo Wii, Microsoft’s Xbox 360, and Sony’s PlayStation 3. Two of those three — the Wii and the 360 — have already been cracked. Early Wii mod chips have since been replaced by a software exploit that anyone with access to YouTube can figure out and perform in about 10 minutes. The 360 is a bit more complicated and requires flashing a BIOS, but it’s still relatively easy and requires no soldering or real technical skill. To date, only the PS3 remains unmodded … which is why Geohot set his sights on it last November.

By January, the whiz kid announced that he had successfully rooted the console, but it wasn’t easy. Circumventing Sony’s security measures required not only opening the console and soldering wires to the machine’s internals, but also using an exploit found, apparently, in the PS3’s OtherOS feature. According to Geohot’s blog posting, “Sony may have difficulty patching the exploit.”

In fact, Sony has found a very simple way to patch the exploit. Sony’s latest mandatory update removes OtherOS from the PS3. And by mandatory, I mean you will not be able to play online any longer without applying this update. PS3 owners have two options: apply the patch and lose the ability to use OtherOS, or stop playing online. It’s that simple.

I’m not a lawyer and I’ve not read every license agreement, but I’m guessing Sony has and that in some bizarre way, this must be legal. It sure doesn’t seem like it should be to me. I bought a PS3, and when I bought it, it came with the ability to run additional operating systems. And now, that option is being removed from a device that I bought, paid for, and is sitting in my living room. It just doesn’t seem right.

Geohot, for his part, has promised PS3 owners a custom firmware that will allow both the use of OtherOS and the ability to play online. More power to the guy should he release it, but installing a custom firmware would most definitely void my warranty, something I don’t want to do (and shouldn’t have to do) just to keep the functionality my PS3 had when I bought it new.

For the record, other companies have waged wars against pirates as well. Microsoft, for example, routinely bans gamers running modified BIOS versions or pirated games. I have no problem with this. What I have a problem with is the removal of features that I paid for after I paid for them.

To some, this may seem like an essay about video games, but it’s not. It’s a question: what does it mean to own something in this day and age? Could AT&T or Apple prevent my iPhone from dialing 1-800 numbers if people start prank calling 1-800 numbers? Could Chevrolet remotely lower the top speed of my truck if they decide I drive too fast? Can television manufacturers retroactively lock TV channels that they decide aren’t worth watching? Where is the line between consumers and manufacturers? I don’t know, anymore.

Amazon pissed off thousands of paying customers last year by quietly removing books people had already bought and paid for on their Kindles. It was a public relations nightmare that caused loud discontent from Kindle owners (an obviously web-friendly demographic — oops). It appears that Sony is about to commit two giant faux pas with one stroke. Simultaneously, Sony plans on screwing millions of customers that own launch PS3s by removing the OtherOS option, and drawing the ire of Geohot, a technical genius who really doesn’t need that kind of prodding in order to chew up Sony’s security and spit it back up at them. Prior to this last announcement I was content to sit by the sidelines and see how this all played out, but after Sony’s latest blunder, I’m actually rooting for Geohot.

C’mon kid, let’s see what you got.

Security Through Obscurity, and why it fails.

Before we begin today’s lesson, we’re going to do something fun and generate your Rock Star name. Your first name will be the name of your first pet and your last name will be the name of the street you live on. Mine’s “Ernie Gregg.” Write this down or just make note of it; you’ll need it later near the end of today’s program.

Security Through Obscurity (“STO”, for short) is the concept that things will be secure if you hide them. I’ve mentioned the concept before; I covered it in detail on Episode 104 of You Don’t Know Flack. The concept is simple: if you hide things well enough, people won’t be able to find them. People do this in the real world all the time. An example would be hiding your house key inside a fake rock. By doing this, you have obscured (or hidden) the means of opening the door (the key). STO applies to computer systems as well. Hiding your password under your mouse pad would be a very basic example.

STO is most often used to hide what security guys like to call “low hanging fruit”. For example, let’s say everybody in your office writes their password down on a sticky note and sticks it to their monitor, but you stick yours under your mouse pad. When Joe the Hacker shows up looking for passwords, he is more likely to use a password that he sees out in the open than spend the time digging around your desk looking for yours. The same concept can be applied to network security. Breaking WEP passwords on wireless routers is trivial at this point, but if Joe the Hacker needs wireless access and he sees five routers and two of them have passwords, chances are he’s going to hop on one of the open ones over a password protected one because it’s less work.

Computer people have been using Security Through Obscurity for years and years now, and time and time again it’s failed. It rarely works. The biggest enemy of STO is “time”, and there are plenty of people out there with plenty of it. STO may help you by not being a “low hanging fruit”, but if someone has specifically targeted your basket of fruit … look out. Going back to our “key in a rock” example for a moment — if a burglar is looking for the easiest house to break into on a street, he might skip yours. BUT, if he has targeted YOUR home specifically, now you’re in trouble. Burglars know where to look; after searching on top of your door frame and under the welcome mat, he’ll start looking for other places people hide keys. People don’t hide their house keys in six-foot-deep holes where it would take them an hour to recover them. Time is his advantage here.

Take that same concept and apply it to computer security. FTP runs on port 21. When someone wants to know if your server is running FTP, they’ll touch that port and look for a response. If they get a response, they’ve found it. Direct security would mean using difficult passwords, but an example of security through obscurity would be moving FTP to a different port. When a hacker scans a range of IPs looking for FTP servers, yours might not show up, and in that example, you’ve helped yourself. In a direct attack against your server however, hackers will scan every port on your server. They’ll find the FTP service in no time and, if you haven’t added any additional security methods, your server may now be in trouble.
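That "scan every port" step takes only a few lines to automate, which is the whole problem. Here's a minimal connect-scan sketch in Python (the `scan_ports` helper is a made-up name for illustration, not a real tool; real scanners like nmap are far more sophisticated):

```python
import socket

def scan_ports(host, ports, timeout=0.3):
    """Try a TCP connection to each port; return the ones that answer."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return found

# Demo: open a listener on a random free port, then "find" it by scanning.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

result = scan_ports("127.0.0.1", [open_port])
print(result)  # the listening port shows up, whatever number it got
listener.close()
```

It doesn't matter whether your FTP daemon is on port 21 or port 21021; a loop like that over all 65,535 ports finds it either way. Moving the port only buys you time against the lazy.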

One of the main reasons STO fails is because the average person doesn’t think like a criminal. When you hide your password under your mouse pad or your house key in a fake rock, you think you’re being pretty sneaky. The problem is, criminals know these tricks too. Hackers know those same tricks. You may think you’re being sneaky by changing a port or renaming your machine or whatever it is you’ve come up with, but the truth of the matter is, security through obscurity FAILS CONSISTENTLY.

Hey look — it only took me five paragraphs (six, counting this one) to get to today’s point. It’s a new record!

One of the most common examples of STO today is your “secret answers”. We’ve all had to give (and answer) these things before. “What’s your mother’s maiden name?” “What’s your favorite color?” “What was your first car?” That stuff might have been tough to find in a world before Facebook; today, you can glean most of that stuff from a person’s Facebook page. Did you know that by default Facebook lists every woman’s maiden name? There are a lot of teens on Facebook whose mommies are on Facebook too. This is a big problem for the average person. It’s a bigger problem for celebrities.

Last September, Sarah Palin’s Yahoo e-mail account was hacked. Here’s how it was done. The “hacker” logged into Yahoo, entered Palin’s e-mail address, and clicked “reset password.” Yahoo then asked the hacker three questions: Palin’s zip code, her birth date, and where she met her spouse. The “hacker” (I keep putting that in quotes because the guy doesn’t deserve the honor) found the answer to all three questions via Google. The zip code took two tries. Her birth date was listed on Wikipedia. Where she met her husband (Wasilla High) showed up in Google. Bingo.

Last night it was reported that Celebrity Accounts on Twitter had been hacked. Read through the details though and you’ll see a few similarities to the above story; Twitter itself wasn’t hacked, an admin account was. Here’s a quote from the story:

Hacker Croll claimed to have used social engineering techniques to access Goldman’s account: “One of the admins has a Yahoo account, I’ve reset the password by answering the secret question. Then, in the mailbox, I have found her [sic] Twitter password.”

So, a recap: the hacker reset the password on Jason Goldman’s (Twitter’s Director of Product Management) Yahoo mail account. After doing that he logged into the Yahoo account and found Goldman’s Twitter password sitting in the mailbox. Using that password, Hacker Croll logged in to Twitter as Goldman and then began looking at celebrities’ accounts.

In a world where everybody apparently wants to put everything online for everybody to see, this type of security is not going to work. Shaq’s mother’s maiden name is actually O’Neal. Ashton Kutcher’s favorite color is red. Britney Spears’ birthday is December 2nd, 1981. Her son Jayden was born on September 12, 2006. Here’s the birth certificate. This stuff is not hard to find, and even non-celebrities are not immune. The About Me/Us link on my own website lists my birth date, pet’s name, kids’ names, and lots of information that shows up regularly on those lists of security questions. First car? That’s embedded on my website somewhere. Susan’s maiden name is on there too.

To bring this full circle … let’s take a look at my Rock Star name again: “Ernie Gregg”. Let’s say I post that on my Facebook page. Now you’ve got my name, whatever information you can get from Facebook, PLUS the name of my first pet AND the name of the street I live on. I know for a FACT many sites use “What was the name of your first pet?” as a security response. The “Rock Star name” is just one of many variations on this game. Here’s a form I found posted on Facebook recently:

THE NAME GAME

1. YOUR ROCK STAR NAME: (first pet and current street)
2. YOUR MOVIE STAR NAME: (grandfather/grandmother on your mother’s side, your favorite candy)
3. YOUR “FLY GIRL/GUY” NAME: (first initial of first name, first two or three letters of your last name)
4. YOUR DETECTIVE NAME: (favorite animal, favorite color)
5. YOUR SOAP OPERA NAME: (middle name, city where you were born)
6. YOUR STAR WARS NAME: (first 3 letters of your last name- last 3 letters of mother’s maiden name, first 3 letters of your pet’s name)
7. JEDI NAME: (last name spelled backwards, your mom’s first name spelled backward)
8. PORN STAR NAME: (friend’s middle name, street you grew up on)
9. YOUR SUPERHERO NAME: (“The”, your favorite color, the automobile you drive)
10. EMO BAND NAME: (first word in the top banner ad above, city of the away team of the last major sporting event you went to/remember)

Take a second to read over that list. First pet? Current street? Favorite animal? Favorite color? City where you were born? Street you grew up on? Are these things ringing any bells yet? Holy Christmas, it’s like a who’s who list of security information! And you just posted it! On the Internet! For everybody to read! MY HEAD JUST EXPLODED!!! Seriously, if I couldn’t reset your AOL password before I had all that information, I’m betting I can now!! The only one they forgot is DUMBASS NAME: (what time you leave for work, where you hide your porch key).

Security Through Obscurity. Don’t count on it; it doesn’t work. Just ask Microsoft.