12th Annual Linux Fest Northwest
When: Saturday April 30th & Sunday May 1st, 2011
Where: Bellingham Technical College
When: April 29th, 30th, and May 1st, 2011
Where: Holiday Inn – Glenwood Avenue in Raleigh, NC
8th Annual Charlotte ISSA Security Summit (featuring Ed Skoudis, Paul Asadoorian, and Chris Hadnagy)
When: May 5th, 2011 08:00 – 17:00
Where: Charlotte, NC
SANS Mentor: Security 401: SANS Security Essentials Bootcamp Style (Matthew Romanek)
When: Thursday, May 5, 2011 – Thursday, July 7, 2011
Where: Federal Way, WA
Discount Code: MRPOD10 for 10% savings
SANS: SANS Security 504: Hacker Techniques, Exploits & Incident Handling (Dave Shackleford)
When: Sunday, May 15, 2011 – Friday, May 20, 2011
Where: Baltimore, MD
My Hard Drive Died: 5-Day Data Recovery Expert Certification
When: June 6-10, 2011
Where: Atlanta, Georgia
When: July 18-22, 2011
Where: Chicago, Illinois
When: June 18, 2011
Where: Vienna, Austria
CFP open now!
2nd Annual AIDE conference
When: July 11th – 15th, 2011
Where: Marshall University Forensic Science Center, Huntington, WV
When: August 3-4, 2011
Where: Las Vegas, Nevada
BruCON 2011
When: Sept 19-22, 2011
Where: Brussels, Belgium
CFP & CFT open now! http://blog.brucon.org/2011/01/brucon-call-for-papers-2011.html
When: September 30th – October 2, 2011
Where: Louisville, KY
Apparently someone had the bright idea to throw their cardiac monitoring services into the cloud. This little gem was found in Amazon’s EC2 support forums. The message reads:
“Life of our patients is at stake – I am desperately asking you to contact.
Sorry, I could not get through in any other way
We are a monitoring company and are monitoring hundreds of cardiac patients at home.
We were unable to see their ECG signals since 21st of April<snip>”
One of the responses below really resonates:
“Oh this is not good. Man mission critical systems should never be ran in the cloud. Just because AWS is HIPPA certified doesn't mean it won't go down for 48+ hours in a row. “
Reading further down the comments, the original poster tries to downplay the criticality of the situation with statements such as, “This is a home based system, not an intra hospital system. So the promised 99.95% uptime is fine.” and “As I wrote, this is not a life saving system. Which does not mean that patient's life cannot be saved using it.” It's a feeble attempt at backpedaling against the outrage of others who came across the thread. In the end, let this be a case study in why companies should think twice about outsourcing to cloud providers without having their hands on enough iron to toss up in the event of an emergency. And if you need to have that much capacity lying around for backup anyway, why not use the cloud provider as your backup instead?
Sartin, whose team gets called in to find the cause of data breaches, says that he's seen a tendency to label any hacking incident an APT attack play out several times since Google went public with the issue in January last year. Usually it happens about a month or two after his team finishes its analysis. "I get a link sent to me from one of my investigators saying, 'You're not going to believe this.' I open the link and get a statement from the company blaming advanced persistent threat."
Advanced persistent threat attacks are supposed to be sophisticated and highly targeted data exfiltration exercises conducted by spies or agents working on behalf of nation states.
Blaming APT has "become the perfect excuse" for companies recovering from a data breach, Sartin said. "It's almost as if it's become chic in the U.S. to blame it [on APT]," he said.
Part of the problem is confusion over China, the country most commonly associated with APT attacks. China is the source for most online attacks these days, no matter what the motivation. The country has more than 400 million Internet users, and many of them are using computers that don't have up-to-date patches or security software. Those PCs often get hacked and then used as stepping-stones for further attacks.
"China is like the wild west of source IP addresses that can be taken over to stage attacks, " Sartin said. So when attacks happen, "everybody looks at it and says, 'Oh that's the Chinese government.'"
That's a mistake, Sartin said. In fact, the majority of attacks — 78 percent of all incidents — result in stolen bank card data. That's not something that APT data-stealers are looking for. Data that's important to national security — a prime target in the real APT incidents — is stolen just 3 percent of the time, he said.
Working with the U.S. Secret Service and the Dutch National High Tech Crime Unit, Verizon was able to analyze 760 data breaches that occurred in 2010. Verizon is publishing its Data Breach Investigations Report detailing these findings on Tuesday.
The trend in 2010 was away from the massive data breaches that led to 144 million compromised records in 2009. Instead hackers are hitting a larger number of smaller businesses. The attacks are less sophisticated, but they are also more likely to stay under the radar of law enforcement. Although the total number of incidents counted in the report went up, just four million records were compromised in 2010, according to Verizon's data.
Instead of hitting big companies like TJ Maxx, hackers are more likely to go after smaller companies with fewer than 100 employees. These are often hotels, restaurants or mom and pop shops with a cash register or computer connected to the Internet. Their security isn't as good, and police are less likely to respond when they get hacked.
Most attackers are not super-sophisticated state-sponsored cyber-criminals. In fact, a lot of the really good criminals are already behind bars, so today's hackers tend to be less sophisticated, Sartin said. In fact, only 3 percent of all incidents were so sophisticated they were considered nearly impossible to stop.
Although many companies worry about insider attacks, 92 percent of the attacks came from outside the institution. Malicious software such as keyloggers and back door programs was involved about half the time.
Geordy’s comments: A reminder that it was Google’s Aurora disclosure that made “APT” a household term, and even though it’s not that old it has caught on like wildfire since it’s the perfect scapegoat… Now who the f’ gave us the term ‘cyber’?!
The White House unveiled guidelines for establishing secure online credentials to boost confidence and business online.
The Department of Commerce unveiled the plans for the National Strategy for Trusted Identities in Cyberspace (NSTIC) at a release event on April 15. The strategy aims to protect the privacy and security of Internet users by encouraging the creation of secure and reliable online credentials for consumers who want to use them.
“The fact is that the old password and username combination we often use to verify people is no longer good enough,” Commerce Secretary Gary Locke said at the event. The current system leaves “too many consumers, government agencies and businesses vulnerable” to identity thieves and criminals intent on stealing information, Locke said.
The identity ecosystem would revolve around credentials stored outside of the actual Website, application or service, and would eliminate the need for unique passwords, Locke said.
With the increasing amount of identity theft and online fraud, consumers don’t trust the Internet. “It will not reach its full potential, commercial or otherwise, until users and consumers feel more secure,” Locke said.
The technologies described in NSTIC would allow online users to stop using unique passwords on each site and instead use a set of credentials that are accepted by multiple sites. The goal is to not have just one trusted identity technology or provider, but to have several and let users choose which ones to use.
Since consumers will be able to choose among a diverse market of different providers of credentials, there will be no single, centralized database of information. Consumers can use their credentials to prove their identity when they're carrying out sensitive transactions, like banking, and can stay anonymous when they are not, said privacy advocate Susan Landau, a fellow at Harvard University who was on the panel discussing the latest NSTIC plan.
A new report issued on Tuesday by security firm Veracode paints a grim picture of the amount of protection built into application software.
More than half of all applications fail to meet acceptable security quality, according to the "State of Software Security Report: The Intractable Problem of Insecure Software."
The study, which assessed nearly 5,000 applications over the last 18 months, found that 58 percent of all applications had “unacceptable” security quality when initially submitted to Veracode's testing platform. Further, more than eight out of 10 web applications failed when measured against the OWASP Top 10, an industry benchmark that documents the most common critical web application errors.
One explanation for these failings, according to the report, is that security processes, such as threat modeling or secure coding standards, were poorly integrated, or not integrated at all, into the development lifecycle, Sam King, vice president of product marketing at Veracode, told SCMagazineUS.com on Tuesday. Security is something everyone knows should be there, but it is often a lower priority than time and budget.
However, as the threat environment continues to evolve and gain strength, these weaknesses "translate into real and present danger for the risk-free operation of software infrastructure," the report said.
The root cause, King said, was a lack of awareness around secure coding principles.
"There's a poor state of application security knowledge," she said. "Formal training is not offered in most university computer science courses, nor in development training on a professional level."
This sentiment is echoed by Dave Wichers, COO at Aspect Security, a Columbia, Md.-based provider of secure software applications.
"Most applications are in terrible shape," he told SCMagazineUS.com on Tuesday.
While some exploits are less used today than previously, developers cannot let down their guard, experts said.
"In the last couple of years, we've been seeing a decline in popular exploitable security vulnerabilities, such as cross-site scripting (XSS) and SQL injection," Bojan Ždrnja, senior information security consultant at INFIGO IS, a Croatia-based security consultancy, told SCMagazineUS.com on Tuesday. "However, in many cases this happened due to the framework that the developers are using. Modern frameworks such as ASP.NET can prevent certain attacks, such as XSS, out of the box."
This is good because it will stop some attacks, but bad because it can lead to developers ignoring these vulnerabilities, which can be especially problematic since the application now depends on a different layer of security that can sometimes be inadvertently turned off by an administrator, Ždrnja said.
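The framework protection Ždrnja describes is essentially automatic output encoding. As a minimal sketch (in Python, not code from the article or from ASP.NET), this is the difference between interpolating user input into a page verbatim and HTML-encoding it first:

```python
import html

def render_comment(user_input: str) -> str:
    # Unsafe: the input is interpolated verbatim, so a payload like
    # "<script>alert(1)</script>" would execute in the victim's browser.
    unsafe = "<p>" + user_input + "</p>"

    # Safe: HTML-encode the input first. Frameworks that prevent XSS
    # "out of the box" perform this step automatically in their templates.
    safe = "<p>" + html.escape(user_input) + "</p>"
    return safe

print(render_comment('<script>alert(1)</script>'))
# The angle brackets are emitted as &lt; and &gt;, so the payload is inert text.
```

This also illustrates Ždrnja's caveat: if the application relies on the framework layer for this encoding and an administrator disables it, every template silently reverts to the unsafe form.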
Meanwhile, other high-risk business impediments are business logic flaws. These involve mistakes such as insufficient authorization or predictable resource location, which can lead to, for example, reserving a seat on a flight before paying for it, or guessing the URLs of press releases announcing a public company's earnings before their official release.
"This is even more worrying in high-profile applications, such as those used by the finance industry, since automated scanning tools today fail to identify business logic vulnerabilities," Ždrnja said.
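The "predictable resource location" flaw is easy to see in miniature. The sketch below (hypothetical URL scheme, not from the article) contrasts sequential identifiers, which invite enumeration, with random tokens, which do not:

```python
import secrets

# Predictable: sequential IDs let an attacker who has seen
# /press-releases/1041 simply try /press-releases/1042 to read
# a release before its official publication.
def predictable_url(release_id: int) -> str:
    return f"/press-releases/{release_id}"

# Harder to guess: a random 128-bit token makes enumeration infeasible.
# Note this is defense in depth, not access control; the server must
# still check authorization before serving the resource.
def unguessable_url() -> str:
    return f"/press-releases/{secrets.token_urlsafe(16)}"
```

As Ždrnja notes, automated scanners miss this class of bug because both URL schemes are syntactically valid; only a human reviewer can tell that the sequential one leaks unpublished resources.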
Experts, however, remain hopeful.
The recent spate of major breaches involving flawed application code, such as the breach of security firm Barracuda Networks via SQL injection, might serve as a wake-up call to corporations to get serious about the security of their software, Veracode's King said.
Wichers, who is also an OWASP board member and OWASP Top 10 project lead, said training developers on how to avoid all application weaknesses is key, but what is missing from the Veracode recommendations, he said, is providing developers with standard security controls that help address these problems.
"All programming languages have a safe way of using SQL safely to avoid SQL injection, but they also have an unsafe way, and developers frequently do it wrong," Wichers said. "However, for cross-site scripting, many languages don't provide a built-in library for making user input safe from XSS."
Libraries exist that give code writers a safe way to encode user input, he added.
"This makes building secure applications much easier," Wichers said.
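Wichers' point about every language having a safe and an unsafe way to use SQL can be shown in a few lines. This sketch uses Python's stdlib sqlite3 module purely as an illustration; the same parameterized-query pattern exists in every mainstream database API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "x' OR '1'='1"  # a classic injection payload

# Unsafe way: string concatenation lets the payload rewrite the WHERE
# clause, so this query matches (and leaks) every row in the table.
unsafe_query = "SELECT role FROM users WHERE name = '" + name + "'"

# Safe way: a parameterized query binds the payload as inert data,
# so it is compared literally against the name column and matches nothing.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] -- the payload matches no user
```

The unsafe form returns the admin row; the safe form returns nothing. The developer-error pattern Wichers describes is exactly choosing the first form when the second is available.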
Sony’s online gaming platform, the PlayStation Network (PSN), continued a five-day outage on Monday after what the company described as an "attack" on its network knocked PSN offline on April 20. And hope is fading for a fast resolution, with Sony saying it is revamping the network to make it more secure. The company released a statement on its PlayStation blog on Friday claiming that an "external intrusion on our systems has affected our PlayStation Network and Qriocity services.” The company said PSN has been turned off, and will remain off, until Sony is satisfied that its network is secure enough that this sort of thing won’t happen in the future.
While Sony did not attribute blame for the attack, published reports have speculated that the online mischief-making collective, Anonymous, might be behind the hack of Qriocity, a media streaming service that was hosted on the PlayStation Network. The group has claimed responsibility for denial of service attacks against Sony for legal attacks on hacker enthusiasts who have cracked content protection technology for its PS3 and other products.
On Friday, Anonymous posted a statement on the Web site Anonnews.org denying responsibility for the hack. “For once, we didn’t do it,” the statement read.
In a post at PlayStation’s self-help Knowledge Center, the gaming giant claims they are working around the clock to bring the network and Qriocity, their music and movie streaming service, back online. Unfortunately for a number of increasingly destitute gamers, there is little hope that the PSN will return to service anytime soon, as PlayStation says they are in the process of “rebuilding [their] system to strengthen [their] network infrastructure.”
Revelations that this outage was likely the result of an external attack aren't altogether surprising, considering the amount of ire Sony has drawn from the hacking community as a result of its legal action against suspected PS3 hackers.