
Watch Out for HummingBad Android Malware

Security researchers at Check Point released their findings about HummingBad this week (pdf), after a five-month-long analysis of the Android malware campaign. Since it was first discovered in February 2016, the malware has infected an estimated 10 million Android devices, earning its developers $300,000 a month in revenue from fraudulent ad clicks and app installs. While devices located in China and India make up a comparatively large percentage of infections, countries like the United States and Mexico still have estimated victim counts of over 250,000 each.

The HummingBad campaign uses drive-by download attacks hosted on adult content sites to initially infect new victims. During infection, the malware attempts to obtain root access on the victim device by exploiting known Android vulnerabilities. If rooting fails, the malware instead creates a fake system update notification to trick users into granting it system-level permissions. During this rooting process, the malware also downloads several malicious components and apps which contain the actual malevolent functionality.

As mentioned earlier, HummingBad's main purpose is to earn revenue through illegitimate ads and fraudulent app installs. Device events such as booting, locking or unlocking your screen, and changing your network connectivity trigger the malware's main process, causing it to display illegitimate ads that include a fake "close" button. Whether you click the ad or the "close" button, HummingBad's developers earn revenue from the click. Throughout this process, the malware blocks you from returning to your home screen, making it very hard to avoid these evil ads.

While you're inadvertently clicking these evil ads, another HummingBad process forcefully downloads and installs more unwanted applications on your device, earning the authors even more illicit revenue through something called "installation referrals." Google Play includes a mechanism that shares "INSTALL_REFERRER" information with app developers, which allows legitimate developers to pay commissions whenever a customer buys or installs their app based on someone's referral. HummingBad includes a sophisticated process injection technique that can subvert this referral process. It can imitate clicks on the install/buy/accept buttons in the Google Play store, allowing the malware to simulate app installation referrals. The malicious process can also inject fake International Mobile Station Equipment Identity (IMEI) numbers during app installation, allowing the same app to be installed multiple times on the same device (which generates even more revenue for these criminals).

If turning your device into an ad zombie weren't bad enough, HummingBad's root capabilities potentially expose it to even more foul play. With full system privileges, attackers could easily leverage the army of HummingBad-infected devices to launch DDoS attacks, or simply use the malware's included functionality to load even worse malware onto infected devices.

Interestingly, Check Point's report connects HummingBad to the Chinese advertising and analytics company Yingmob—the same firm linked to the YiSpecter iOS malware discovered towards the end of 2015. Yingmob applications, both legitimate and malicious, have an estimated installation base of 85 million devices according to the researchers' findings. I find this very frightening, since it puts Yingmob one malicious update away from creating a massive number of infected devices.

There are several steps you should take to protect your Android devices from becoming infected.

  1. First, avoid rooting your device. While rooting can enable beneficial functionality that is normally locked down by your carrier, it also leaves you wide open to malware installed via drive-by download attacks.
  2. Second, always keep your device updated with the latest available patches. By running the latest OS update, you limit the vulnerabilities attackers might exploit to install malware like HummingBad. That said, Google allows carriers to package their own versions of Android, and some carriers don't use the latest Google Android versions. This means your device's security may depend more on your carrier than on the device itself.
  3. Third, never install applications from unknown sources. By default, Android prevents users from installing applications that aren’t available in the Google Play Store (sideloading). Disabling this prevention leaves you at risk of installing malicious applications like HummingBad.

HummingBad is just the latest in an increasing series of attacks against mobile devices. With an estimated 2 billion smartphones in use worldwide, the incentive for attack is already there. Users need to make sure they are prepared for the incoming onslaught. –Marc Laliberte

Friends Don’t Let Friends Download Malware

Last weekend, a user on the question and answer site Stack Exchange asked for help identifying malware he found distributed via Facebook. He said he received a notification on Facebook, informing him that one of his friends had tagged him in a comment on the site. When the user clicked on the notification link, his browser automatically downloaded an obfuscated JavaScript file. Quick analysis of the JavaScript showed that when executed, it acted as a loader application to download and execute malware.

Another Stack Exchange user provided further analysis of the malicious JavaScript file. This user found that the JavaScript downloaded and installed a Chrome extension, the AutoIt Windows executable, and a few malicious AutoIt scripts. The malware likely uses this Chrome extension to create its tainted Facebook posts and continue infecting other hosts.

Aside from the Chrome extension, the JavaScript loader also included functions to download the AutoIt executable and various AutoIt scripts. AutoIt is a (usually legitimate) scripting language designed to help IT administrators easily configure large numbers of Windows hosts. In the case of this malware, the bad guys were using AutoIt scripts to perform ransomware-like behaviors. The scripts themselves were hosted on a compromised website, disguised with .jpg extensions so they pass for regular image files without closer inspection.

Luckily, even though this user’s browser automatically downloaded the malicious JavaScript after visiting the notification link, his browser didn’t automatically execute the code. It seems the malware’s author relied on users launching the JavaScript themselves, which would greatly lessen this attack’s success.

In any case, this incident is a great example of why you should never execute unsolicited applications from the Internet. If your browser downloads a file after you click a Facebook notification, it should raise immediate red flags. The user on Stack Exchange did the right thing by investigating the file first and then asking for help from experts.
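
If you ever end up with a suspicious file like this one, a little safe triage goes a long way before you ask for help. Here's a minimal Python sketch (the filename is hypothetical) that fingerprints a file without ever executing it:

```python
import hashlib
import re

SUSPECT = "comment_photo_9841.js"  # hypothetical name for the downloaded script

def triage(path):
    with open(path, "rb") as f:
        data = f.read()
    # A hash lets you search threat intelligence sources without uploading anything
    print("SHA-256:", hashlib.sha256(data).hexdigest())
    # Obfuscated loaders usually still contain the URLs they fetch payloads from
    for url in sorted(set(re.findall(rb"https?://[^\s'\"]+", data))):
        print("Embedded URL:", url.decode(errors="replace"))

triage(SUSPECT)
```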

You should also keep your browser and all of its extensions fully updated with the latest patches. While this attack’s delivery method was relatively unsophisticated, that’s not always the case. A more motivated attacker may have tried to exploit known browser vulnerabilities to auto-execute the malware and compromise the would-be victim’s computer before they even knew what hit them. –Marc Laliberte

Little Uber Hacks Snowball into Bigger Threats

Integrity, a UK- and Portugal-based security consulting firm, recently released some interesting research after participating in Uber's bug bounty program. For those unfamiliar, bug bounties are a way for organizations to incentivize security researchers to responsibly disclose vulnerabilities in their products. By promising a bounty, organizations hope that researchers will work with them to resolve security issues instead of selling them on the underground to the highest bidder.

Last week, Integrity shared their experience with Uber's bug bounty program. They described their process for identifying bugs in different areas of Uber's API and mobile apps, and responsibly disclosed several vulnerabilities, which Uber has since resolved. I highly recommend you read the article, but I'll highlight some of their more notable findings below.

Uber sometimes offers promotional codes for discounted rides, either to new users or as part of emergency ride home programs with other ride share services. In their testing, Integrity discovered that Uber had no protection against "brute forcing" these promo codes in their application. Integrity quickly found over a thousand valid discount codes, but Uber's security team initially turned the research away because they considered the promo codes public. It wasn't until Integrity found a $100 code, intended for a Washington-based carpool community's emergency ride home program, that Uber budged and resolved the brute force issue.
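
To be clear about what was missing here, the defense against this class of bug is basic throttling and lockout on the server side. The sketch below is purely illustrative (it is not Uber's implementation, and `lookup_code` is a hypothetical back-end helper), but it shows how little code it takes to make brute forcing impractical:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5        # invalid codes allowed per account before lockout
WINDOW_SECONDS = 3600   # sliding one-hour window

_failed_attempts = defaultdict(list)  # account or source IP -> failure timestamps

def redeem_promo(account_id, code, lookup_code):
    """lookup_code(code) returns a discount or None; a hypothetical helper."""
    now = time.time()
    recent = [t for t in _failed_attempts[account_id] if now - t < WINDOW_SECONDS]
    _failed_attempts[account_id] = recent
    if len(recent) >= MAX_ATTEMPTS:
        raise PermissionError("Too many invalid promo codes; try again later")
    discount = lookup_code(code)
    if discount is None:
        # Only failed guesses count toward the lockout threshold
        _failed_attempts[account_id].append(now)
    return discount
```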

While intercepting traffic from the Uber cell phone app during an actual ride, Integrity also found that they could enumerate Uber User IDs by sending phone numbers to an Uber API designed to allow splitting ride fare bills. Paired with another bug, these User IDs were easily leveraged to return the personal email address of the associated Uber user.

Most frighteningly, Integrity found they could use a rider's User ID (obtained from the previously mentioned bug) to pull up details about that user's trips, including the date of each trip, its cost, and a map of the entire route. Putting these bugs together, Integrity, armed only with a rider's phone number, could ultimately see a scary level of detail on every Uber ride that user had ever taken.

Luckily for Uber users everywhere, these vulnerabilities were responsibly disclosed to Uber and subsequently fixed. I think Integrity's article is an important example of how small individual security issues can snowball into a large threat. Yes, enumerating User IDs for a web application is a potential privacy issue on its own, but it becomes critical when those User IDs can be converted into even more sensitive information about the users.

Uber did an excellent job working with the researchers at Integrity to quickly resolve these issues. I would urge anyone involved with application development to keep an open rapport with external security researchers. Internal QA will never catch everything, meaning external researchers are an important tool in protecting your product from the bad guys. –Marc Laliberte

One Password to Rule Them All

Anyone with a social media account has at some point encountered blatant spam followed by a "sorry everyone, someone hacked my account" post from one of their friends. Unless the friend in question is a celebrity, it is extremely unlikely that they were specifically targeted in a hack. Instead, these types of account takeovers are usually caused by the victim reusing the same password on another site that was compromised.

When you reuse a password, all accounts that share those credentials become as vulnerable as the weakest link. GitHub recently reset the passwords for a large number of accounts after detecting that some of its users were victims of a password dump from a completely unrelated service. Earlier in the month, attackers took over Mark Zuckerberg's Twitter and Pinterest accounts, claiming to have used his credentials from the LinkedIn password dump a few weeks before.

Password dumps are a common form of attack and aren’t always as public as the aforementioned LinkedIn credential dump that leaked more than 100 million credentials. Just this week, 45 million more credentials were leaked from relatively niche sites like motorcycle.com and mothering.com. You won’t always know when one of your passwords becomes public knowledge. It’s even possible that a site you own an account on is the one doing the bad deeds, as humorously described in this XKCD comic.

You should use a unique password for every account wherever possible, especially for critical accounts like email and banking services. You should also change your passwords on a regular basis to mitigate undisclosed password dumps. That said, creating and remembering dozens of complex passwords can be very difficult. I highly recommend using a password manager like LastPass or KeePass to help in that regard. Password managers allow you to create and save strong, unique passwords for every account while only needing to remember a single complex master password to unlock them all. When paired with a second authentication factor, such as a fingerprint, to go with the master password, password managers are an excellent tool to increase your security and prevent you from falling victim to the next credential dump.
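
To give a sense of what "strong and unique" looks like in practice, here's a minimal sketch using Python's standard `secrets` module; good password managers generate passwords in much the same way for you:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    # secrets uses the operating system's cryptographic RNG,
    # unlike the general-purpose random module
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Generate one password per account and never reuse it
print(generate_password())
```

The important part is that each account gets its own randomly generated password, so one site's breach never unlocks another. –Marc Laliberte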

Genius Security

What happens when your business model disables critical security protections for your users? Last week, software developer Vijith Assar wrote an editorial on The Verge discussing his research into the Genius web annotation platform and their questionable practices. Genius is a Brooklyn-based startup that allows users to create annotations on any webpage on the internet, effectively adding a comment section anywhere and everywhere. For an example of Genius annotations, check out the About Us page on Genius.com.

Genius proxies web content through their own servers to add annotation-enabling JavaScript. Instead of browsing to website.com, a Genius user instead browses to genius.it/website.com. In the background, Genius’s servers grab the original content from website.com, inject their own JavaScript into the page source, and then forward that content onto the client browser. In the browser, the injected JavaScript rewrites all links on the page to continue directing requests through Genius’s proxy and then provides the functionality to add and view the annotations themselves.

Genius's modus operandi runs into an issue, though, with websites that use Content Security Policy (CSP) to protect their users against Cross-Site Scripting (XSS) attacks. With CSP, a web server uses an HTTP header to tell a compliant web browser where content can load from. For example, a web server can use CSP to tell a web browser that it should only retrieve images from cdn.website.com, JavaScript from api.google.com, and Cascading Style Sheets (CSS) from css.othersite.com. Any content that originates from a site not specified in the Content Security Policy header will be blocked by the browser. Additionally, the mere presence of the Content Security Policy header tells a browser that it should not allow any inline JavaScript or CSS to load unless explicitly allowed within the header.
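
To make that concrete, here's a minimal sketch of a server setting a policy like the example above, using nothing but Python's built-in web server (the domains are the same illustrative ones from the paragraph above):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CSP = ("default-src 'self'; "
       "img-src cdn.website.com; "
       "script-src api.google.com; "
       "style-src css.othersite.com")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Security-Policy", CSP)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # Any inline <script> in this body would be blocked by a compliant
        # browser, because the policy above does not allow 'unsafe-inline'
        self.wfile.write(b"<html><body>Hello</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```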

To protect against cross-site scripting attacks, many websites use Content Security Policy to block inline JavaScript by default and instead load their own JavaScript from approved external sources. This way, if an attacker somehow injects JavaScript into the website, perhaps through the comment section or some other input, web browsers will prevent the malicious script from executing thanks to the Content Security Policy header.

Content Security Policy poses a problem for Genius. In order for their JavaScript to run and rewrite all links in a webpage, the script cannot be loaded externally and must be injected directly into the page's source. If Genius were to retain the original Content Security Policy for a website, that policy could prevent Genius's JavaScript from executing. To counter this, Genius simply stripped out the original Content Security Policy header, leaving visitors of proxied websites wide open to cross-site scripting attacks that would have been blocked on the original site.

When Assar originally disclosed the security impact of disabling Content Security Policy, Genius noted that the risk of a cross-site scripting attack was minimal because their annotator does not store any personal information about its users between page loads. Assar went on to explain that while Genius was correct in their statement, users would still be vulnerable to arguably more serious security risks such as drive-by malware downloads and key loggers. Luckily, after Assar provided proof-of-concept examples to Genius, their developers made changes to re-enable the original Content Security Policy for proxied websites, with a few modifications to allow the Genius scripts to run.

Genius now includes extensions from Content Security Policy Level 2, a revised version of the original Content Security Policy specification. Specifically, Genius now uses a cryptographic nonce (a randomly generated, single-use string of letters and numbers) to validate inline JavaScript. Before website content is forwarded from Genius's proxy servers to the client web browser, the proxy server generates a random string of letters and numbers to use as a validation key. The nonce is included in the new Content Security Policy header and in all inline JavaScript. Even if an attacker injects JavaScript into the webpage, the malicious JavaScript won't include the nonce key and will be blocked by the Content Security Policy.
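
Here's a rough sketch of how that nonce mechanism works; this is purely illustrative and not Genius's actual code:

```python
import secrets

def build_response(body_html, inline_script):
    # A fresh, single-use nonce per response is what makes this safe
    nonce = secrets.token_urlsafe(16)
    csp = f"script-src 'nonce-{nonce}'"
    script_tag = f'<script nonce="{nonce}">{inline_script}</script>'
    headers = {"Content-Security-Policy": csp}
    # Any injected <script> lacking the matching nonce attribute is blocked
    return headers, body_html + script_tag

headers, html = build_response("<p>annotated page</p>", "console.log('genius');")
print(headers["Content-Security-Policy"])
print(html)
```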

The latest version of Google Chrome fully supports the Content Security Policy Level 2 specification while the latest version of Firefox supports all but one directive, unrelated to this particular security issue. Internet Explorer, Edge and Safari currently support the original Content Security Policy specification but not the Level 2 revision. Luckily, if a user browses to a website through the Genius service with a browser that does not support the CSP Level 2 spec, the default browser behavior is to disallow the content for security purposes. This also means that the Genius JavaScript used to rewrite links to use the Genius proxy will fail to run in certain browsers.

I'm impressed that Genius has taken steps to increase their users' security, even at the expense of keeping users contained within their service's ecosystem. Content Security Policy is an important mechanism for defending against cross-site scripting attacks. I would recommend all users choose web browsers that support the latest specifications and be mindful of services they use that might compromise that protection. — Marc Laliberte

Decrypting Ransomware

Ransomware works by encrypting a victim’s files and then convincing them that the only way to retrieve their files is to pay a ransom. The attackers further this appeal to fear by setting a short deadline for payment, and telling the victim that their files will be gone for good if the deadline is missed. Ransomware is so successful because victims continue paying these ransoms.

The Cyber Threat Alliance reports an estimated $325 million in payments for the CryptoWall 3 ransomware alone during 2015. These payments provide both incentive and financing for further ransomware development by the bad guys. A recent report by McAfee shows a sharp increase in detected ransomware samples over the last two years.

Taking steps to prevent ransomware infections will always be the best defense strategy. Unfortunately, no protection is perfect, which means your systems may eventually fall victim to a successful attack. If you find yourself infected and without proper backups, you may think that paying up is your only option. Thanks to a few cyber security organizations, there may be another way out.

This week, Emsisoft launched a webpage dedicated to ransomware decryption. The webpage helps ransomware victims identify which flavor of ransomware infected their system and then provides a free downloadable decryption tool. Emsisoft is not the only one providing these tools. Kaspersky also maintains a page full of ransomware decryption utilities (and other malware removal tools). If you need help identifying exactly which version of ransomware locked your files, ID Ransomware is another tool you can use.

Ransomware decryption is a cat and mouse game. These utilities typically exploit errors in the ransomware encryption code to decrypt the affected files. When the attackers fix these errors and update their ransomware, the decryption utilities are no longer effective. Because of this, you should not rely on ransomware decryption utilities as your only protection. Instead, they should be treated as an option of last resort.

The best defense against ransomware remains a three-pronged approach of prevention, recovery, and education. You should take steps to prevent the initial infection by using a multi-layer security approach. Network-based AV scanning and APT protection along with host-based endpoint protection remain a must. You should also regularly create and test offline backups to recover from a ransomware infection. It is important that your backups be offline to protect against ransomware that locates and encrypts networked file shares. Finally, you should educate your employees on how to spot phishing attempts, which continue to be the most common attack vector for ransomware. If all of these steps fail though, you may still have hope with a decryption utility. – Marc Laliberte

Not So SmartApps

I’m a big fan of the Internet of Things (IoT), in theory. I like the idea of using small, purpose-built gadgets to make my life easier. The problem with current generation IoT devices though, is that they typically trade security for convenience. As a security professional, this is a tough compromise for me to make.

If you follow the blog, you likely saw my article on IoT cameras delivering malware last month. Having a brand new IoT device infect you with malware is probably the most extreme example of poor IoT security, but IoT devices shipping full of exploitable security holes are much more common.

Last week, researchers at the University of Michigan (UM) shared their findings around a security audit they performed on Samsung’s SmartThings home automation systems. At a high-level, they found four attack vectors that all stemmed from permission problems with the SmartThings Android app.

The SmartThings Android app includes its own SmartApps store where third-party developers can create widgets to add functionality to SmartThings devices. The researchers leveraged these SmartApps to launch their attacks.

In one attack, the researchers created their own application, disguised as a battery level monitor. When installed, the application only asked for permission to monitor battery level, as you would expect. In reality, however, the app had enough privileges to listen for newly entered door lock PIN codes, capture them, and send them to the researchers (or would-be attackers) in a text message.

In another attack, the researchers remotely exploited another popular SmartApp to program an additional PIN into a connected door lock, giving them a literal backdoor into the house. The vulnerable SmartApp wasn’t even designed to program PIN codes into locks.

For the last two attacks, the researchers abused permissions in one SmartApp to turn off “vacation mode” and exploited another SmartApp by injecting false messages to make a fire alarm go off.

There will always be tradeoffs between security, functionality, and ease of use when it comes to IoT devices. Depending on the embedded platform, remote code execution on an internet-connected toaster might not be the end of the world; that is, until it burns down your house, I suppose. On the other hand, if I plan to replace my door locks with ones I can control from my phone, I can reasonably demand that the vendor deliver a properly secured system.

The Internet of Things market is still young and growing. Until security becomes a priority, you should remain mindful of the impact a compromised IoT device might cause on your network. – Marc Laliberte

How Not to Protect a National Bank

In early March, malicious hackers stole $80 million from Bangladesh's central bank account at the U.S. Federal Reserve. Early investigation found that the attackers used stolen credentials for the Society for Worldwide Interbank Financial Telecommunication (SWIFT) payment processing network to attempt nearly $1 billion in fraudulent transfers before the compromise was discovered. SWIFT is a messaging network that banks primarily use to send payment orders to one another. SWIFT hardware is deployed on premises at financial institutions and then connected back to central data centers over IP network infrastructure.

This month, investigators discussed some of the security failures that made this attack possible. As it turns out, Bangladesh’s central bank used cheap unmanaged switches on their internal network and, worse yet, completely lacked any firewall. The SWIFT equipment, while in a separate room, was only separated from the rest of the building’s network by a $10 second-hand switch.

While we still don’t know all the details behind this attack, we can begin to see why the criminals succeeded. Without proper network segmentation, attackers can move laterally between hosts unhindered and undetected. If the attackers were able to compromise a single host, perhaps through a phishing attack or an infected USB drive, they could then easily pivot and compromise the SWIFT systems on the same network.

It is not clear whether the bank's complete lack of network security was an attempt to save money or just plain incompetence. Regardless, you can use this incident as an opportunity to refresh some important network security basics. Administrators should always deploy critical systems on a separate network from general workstations, whether through VLANs or even physically separate cabling. Not only should you use a firewall to segment those networks and inspect inter-network traffic, you should use a UTM appliance that also scans the traffic you do allow between the segments. Not only could a proper firewall implementation have protected Bangladesh's central bank from losing $80 million, it could have also provided important visibility into the attack to help identify the criminals and prevent them from attacking again elsewhere. — Marc Laliberte

Watch Out For Malware In Your New IoT Devices

Over the weekend, security researcher Mike Olsen published an article about his experience with a set of PoE security cameras that he ordered from Amazon.com. While troubleshooting a display issue, Mike found that the cameras' web portal used an HTML iframe element to silently load a malicious website without his knowledge. This is a textbook example of a Cross Frame Scripting (CFS) attack.

An HTML iframe element allows one web page to load and display a second web page as part of its own page content. As an example of a legitimate use for an iframe element, WatchGuard Dimension uses iframes to display the Web UI for Fireboxes that are managed via Dimension Command. In the security cameras that Mike purchased, however, the iframe was styled to load a known malicious website into an effectively invisible 1 x 3 pixel window at the bottom of the web portal.

Because the iframe is hidden, the browser loads the malicious website without the victim's knowledge. The malicious site can then exploit unpatched browser vulnerabilities to perform attacks like stealing web authentication cookies or launching drive-by downloads of malware onto the client machine, all without any warning to the victim.
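
If you want to check a device's web interface for this kind of trick yourself, effectively invisible iframes are the tell. Here's a rough sketch using Python's standard HTML parser (a simple heuristic only; a thorough audit would also catch CSS-based hiding, and the domain below is made up):

```python
from html.parser import HTMLParser

class TinyIframeFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        width, height = a.get("width", ""), a.get("height", "")
        # Flag iframes that are effectively invisible, like the 1 x 3 pixel
        # frame found in the camera web portal described above
        if width in ("0", "1", "2", "3") or height in ("0", "1", "2", "3"):
            print("Suspicious iframe:", a.get("src", "(no src)"), width, "x", height)

page = ('<html><body><iframe src="http://malicious.example" '
        'width="1" height="3"></iframe></body></html>')
TinyIframeFinder().feed(page)
```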

Manufacturer-delivered malware isn't anything new. In 2014, TrapX discovered industrial barcode scanners delivering malware via infected firmware. In 2015, security researchers found adware pre-installed on Lenovo laptops performing man-in-the-middle attacks on HTTPS connections. Even way back in 2006, a small batch of iPods shipped pre-infected with the RavMonE worm. How or why a product becomes compromised is not always easy to answer. Was the manufacturer accidentally infected by something that then transferred to their product? Did an external attacker or insider specifically target the product? Or did the manufacturer itself knowingly ship the product in this state? One thing is obvious: we assume our new purchases will arrive in a clean state, and bad actors exploit that trust.

As IoT devices continue to become more popular, opportunities for bad guys to launch attacks on your other network connected devices will increase. Consumers should make an effort to avoid purchasing products from non-reputable manufacturers or at least search online for reviews that might expose shady behavior. Administrators should continue following best practices of testing and monitoring new devices in a sandboxed environment before moving them into production where they could cause real harm. — Marc Laliberte

Application Layer DoS Attacks

In a Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack, malicious actors forcefully eat up resources on a victim network service to the point that access to the service becomes impossible. Motivations for DoS attacks range from political, to criminal, to simple cries for attention. For instance, the political hacktivist group Anonymous took down car maker Nissan's global websites in protest of Japanese whaling. Criminals also leverage them to hide or distract from network intrusions, as described by Kaspersky in a DDoS report last year. Other times, attention-seeking groups like Lizard Squad launch them for the "lulz," as in their 2014 Christmas Day DDoS attack against the Xbox Live and PlayStation Network gaming services.

Attackers most commonly carry out DoS attacks on the network and transport layers via SYN floods. If you are unfamiliar with the TCP handshake process, this short article by InetDaemon is an excellent primer. During a SYN flood, the attacker sends TCP SYN messages to open connections on the victim server without ever sending the ACK messages that complete those connections. This ties up server resources until no new connections can be made, denying further access. In a distributed (DDoS) attack, the source of the malicious traffic is spread across multiple clients, allowing attackers to use proportionately fewer resources on each client compared to those consumed on the victim server.

You can mitigate network layer DoS attacks by using network firewalls to recognize and throttle flood attacks. We pre-configure our WatchGuard Firebox Default Packet Handling rules to block sources that attempt flood attacks and throttle traffic in the event of a DDoS attack against the protected network.

Attackers continue to change their methods as protections against network layer DoS attacks improve. One increasingly common trick is a shift to application layer attacks. In an application layer DoS attack, malicious clients send traffic either to a listening UDP socket or through a completed TCP connection, using up both network bandwidth and resources on the victim network service. A common form of application layer DoS is a reflective DNS amplification attack. During a reflective DNS amplification attack, a malicious client sends a DNS query to an open DNS resolver with a spoofed source IP matching the victim server. Because of the spoofed address, the DNS resolver then sends (reflects) a response, often several times larger than the query, to the victim server. This type of attack is distributed across multiple clients using multiple DNS resolvers to increase the amount of bandwidth consumed by the responses sent to the victim.
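
To get a feel for the amplification factor, you can compare the size of a DNS query with the size of the response it triggers. Here's a rough sketch that hand-builds an ANY query and measures both; there is no spoofing involved, but only point it at a resolver you are allowed to query, and note that many resolvers now answer ANY queries with deliberately minimal responses:

```python
import socket
import struct

def build_query(name, qtype=255):  # 255 = ANY, historically the largest responses
    # DNS header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

query = build_query("example.com")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(query, ("8.8.8.8", 53))   # use a resolver you are permitted to query
response, _ = sock.recvfrom(4096)

print("Query size:   ", len(query), "bytes")
print("Response size:", len(response), "bytes")
print("Amplification: %.1fx" % (len(response) / len(query)))
```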

In a blog post last week, Imperva described another type of application layer DDoS attack that one of their clients experienced. In this attack, a botnet infected with a variant of the Nitol malware randomly generated large files and then uploaded them to the victim server via HTTP POST requests. By using the application layer, the malicious traffic was only identified after the TCP handshake completed. This meant that the victim service already received the bandwidth hogging application traffic through the network before it was identified as malicious. Furthermore, the attack forced the victim service to waste processing resources on handling the ultimately bad traffic.

Protecting against an application layer DoS attack is much more difficult than protecting against a network layer DoS attack. You can easily mitigate network layer attacks upstream from your network perimeter using flood detection. However, since an application layer attack goes so far as to complete a TCP connection first, it's typically allowed to reach your perimeter before it's identified as malicious. Mitigation usually comes down to simply having a larger network pipe than the attacker can fill.

At the network perimeter, you can use a stateful firewall to at least prevent reflected traffic from entering your network. A stateful firewall keeps a record of sessions initiated by clients on your network. If a response to a spoofed packet comes in, it will not match any record in the firewall's session table, so the firewall drops the packet. While this kind of protection stops malicious packets from entering your protected network, it doesn't prevent the data from using up your external network bandwidth. For the best protection, Internet Service Providers need to adopt BCP 38, which describes filtering spoofed traffic before it even enters the internet backbone (another function that firewalls like WatchGuard's Fireboxes provide).
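
For the curious, that stateful check boils down to a lookup keyed on the connection tuple. Here's a toy sketch of the idea; real firewalls track far more state, including timeouts and TCP flags:

```python
# Toy model of a stateful firewall's session table
sessions = set()

def outbound(src, sport, dst, dport, proto):
    # Record connections initiated from inside the protected network
    sessions.add((dst, dport, src, sport, proto))

def inbound_allowed(src, sport, dst, dport, proto):
    # A reply is only allowed if it matches a session we initiated; reflected
    # traffic triggered by a spoofed request matches nothing and is dropped
    return (src, sport, dst, dport, proto) in sessions

outbound("10.0.0.5", 40001, "93.184.216.34", 53, "udp")
print(inbound_allowed("93.184.216.34", 53, "10.0.0.5", 40001, "udp"))  # True
print(inbound_allowed("198.51.100.9", 53, "10.0.0.5", 40001, "udp"))   # False, unsolicited
```

Unsolicited or reflected packets simply find no matching entry and are dropped at the perimeter. — Marc Laliberte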
